source | text |
|---|---|
https://en.wikipedia.org/wiki/Benzalkonium%20chloride | Benzalkonium chloride (BZK, BKC, BAK, BAC), also known as alkyldimethylbenzylammonium chloride (ADBAC) and by the trade name Zephiran, is a type of cationic surfactant. It is an organic salt classified as a quaternary ammonium compound. ADBACs have three main categories of use: as a biocide, a cationic surfactant, and a phase transfer agent. ADBACs are a mixture of alkylbenzyldimethylammonium chlorides, in which the alkyl group has various even-numbered alkyl chain lengths.
Solubility and physical properties
Depending on purity, benzalkonium chloride ranges from colourless to pale yellow (impure). Benzalkonium chloride is readily soluble in ethanol and acetone. It dissolves readily in water upon agitation. Aqueous solutions should be neutral to slightly alkaline. Solutions foam when shaken. Concentrated solutions have a bitter taste and a faint almond-like odour.
Standard concentrates are manufactured as 50% and 80% w/w solutions, and sold under trade names such as BC50, BC80, BAC50, BAC80, etc. The 50% solution is purely aqueous, while more concentrated solutions require incorporation of rheology modifiers (alcohols, polyethylene glycols, etc.) to prevent increases in viscosity or gel formation under low temperature conditions.
Cationic surfactant
Benzalkonium chloride possesses surfactant properties, dissolving the lipid phase of the tear film and increasing drug penetration, making it a useful excipient, but at the risk of causing damage to the surface of the eye.
Laundry detergents and treatments.
Softeners for textiles.
Phase transfer agent
Benzalkonium chloride is a mainstay of phase-transfer catalysis, an important technology in the synthesis of organic compounds, including drugs.
Bioactive agents
Especially for its antimicrobial activity, benzalkonium chloride is an active ingredient in many consumer products:
Pharmaceutical products such as eye, ear and nasal drops or sprays, as a preservative.
Personal care products such as hand sanitize |
https://en.wikipedia.org/wiki/Rank%20theory%20of%20depression | Rank theory is an evolutionary theory of depression, developed by Anthony Stevens and John Price, which proposes that depression promotes the survival of genes. Depression is an adaptive response to losing status (rank) and losing confidence in the ability to regain it. The adaptive function of depression is to change behaviour to promote survival for someone who has been defeated. According to rank theory, depression was naturally selected to allow us to accept a subordinate role. The function of this depressive adaptation is to prevent the loser from suffering further defeat in a conflict.
In the face of defeat, a behavioural process swings into action which causes the individual to cease competing and reduce their ambitions. This process is involuntary and results in the loss of energy, depressed mood, sleep disturbance, poor appetite, and loss of confidence, which are typical characteristics of depression. The outward symptoms of depression (facial expressions, constant crying, etc.) signal to others that the loser is not fit to compete, and they also discourage others from attempting to restore the loser's rank.
This acceptance of a lower rank would serve to stabilise an ancestral human community, promoting the survival of any individual (or individual's genes) in the community through affording protection from other human groups, retaining access to resources, and to mates. The adaptive function of accepting a lower rank is twofold: first, it ensures that the loser truly yields and does not attempt to make a comeback, and second, the loser reassures the winner that yielding has truly taken place, so that the conflict ends, with no further damage to the loser. Social harmony is then restored.
Development
Rank theory of depression, initially known as the 'social competition hypothesis', is based on ethological theories of signalling: in order to avoid injury, animals will perform 'appeasement displays' to demonstrate their subordination and lack of desire |
https://en.wikipedia.org/wiki/Chromium%20B.S.U. | Chromium B.S.U. is an arcade-style, top-scrolling space shooter available on Windows, iPhone, PSP, Mac, AmigaOS 4, Linux and numerous other UNIX-like operating systems. It is free software distributed under the Clarified Artistic License. The original version was designed in 2000 by Mark B. Allan and released under the Artistic License. Since then it has received many contributions from the community.
Plot
The storyline consists of the player taking the role of a captain aboard a cargo ship. The name of the cargo ship is "Chromium B.S.U." The player is given the task of delivering cargo to troops on the front line. The cargo ship has a series of robotic fighter spaceships aboard. The player's job is to make use of these ships to ensure that the cargo ship makes it to the front line.
Gameplay
The game is a 2D top-scrolling space shooter. Players must shoot enemy aircraft before they reach the bottom of the screen. For each aircraft that reaches the bottom of the screen, the player will lose a life. This particular rule makes it unique amongst scrolling space shooters. Another aspect of the game's difficulty is its limited ammunition. Ammunition must be used efficiently to win.
When a player is having difficulty destroying foes, the player has two options. They can crash into enemy vessels and deal damage to the ship as well as themselves. The other alternative is to self-destruct, thereby destroying all the enemies on the screen.
In the first level of the game there are only three types of enemy ships. More enemy ships are introduced to the player as they advance through the levels. The game was designed to be played in short time intervals rather than long dedicated hours.
Technical information
The game is written in C++. Graphical support is provided by OpenGL. The game demands hardware acceleration in order to reliably maintain a steady frame rate. Therefore, software implementations of OpenGL are not suitable for playing the game. SDL is used for creating the window |
https://en.wikipedia.org/wiki/Risch%20algorithm | In symbolic computation, the Risch algorithm is a method of indefinite integration used in some computer algebra systems to find antiderivatives. It is named after the American mathematician Robert Henry Risch, a specialist in computer algebra who developed it in 1968.
The algorithm transforms the problem of integration into a problem in algebra. It is based on the form of the function being integrated and on methods for integrating rational functions, radicals, logarithms, and exponential functions. Risch called it a decision procedure, because it is a method for deciding whether a function has an elementary function as an indefinite integral, and if it does, for determining that indefinite integral. However, the algorithm does not always succeed in identifying whether or not the antiderivative of a given function in fact can be expressed in terms of elementary functions.
The complete description of the Risch algorithm takes over 100 pages. The Risch–Norman algorithm is a simpler, faster, but less powerful variant that was developed in 1976 by Arthur Norman.
Some significant progress has been made in computing the logarithmic part of a mixed transcendental-algebraic integral by Brian L. Miller.
Description
The Risch algorithm is used to integrate elementary functions. These are functions obtained by composing exponentials, logarithms, radicals, trigonometric functions, and the four arithmetic operations (addition, subtraction, multiplication, and division). Laplace solved this problem for the case of rational functions, as he showed that the indefinite integral of a rational function is a rational function plus a finite number of constant multiples of logarithms of rational functions. The algorithm suggested by Laplace is usually described in calculus textbooks; as a computer program, it was finally implemented in the 1960s.
Liouville formulated the problem that is solved by the Risch algorithm. Liouville proved by analytical means that if there is an elementary solution to the equation then there exist cons |
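Laplace's result for rational functions can be seen in a worked instance (a standard textbook computation, not taken from the source), where partial-fraction decomposition does the work:

```latex
\int \frac{dx}{x^{2}-1}
  = \int \left( \frac{1/2}{x-1} - \frac{1/2}{x+1} \right) dx
  = \tfrac{1}{2}\ln\lvert x-1\rvert - \tfrac{1}{2}\ln\lvert x+1\rvert + C .
```

Here the antiderivative consists of a rational part (zero in this case) plus constant multiples of logarithms of rational functions, exactly the form Laplace's theorem guarantees.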
https://en.wikipedia.org/wiki/Spectral%20imaging | Spectral imaging is imaging that uses multiple bands across the electromagnetic spectrum. While an ordinary camera captures light across three wavelength bands in the visible spectrum, red, green, and blue (RGB), spectral imaging encompasses a wide variety of techniques that go beyond RGB. Spectral imaging may use the infrared, the visible spectrum, the ultraviolet, x-rays, or some combination of the above. It may include the acquisition of image data in visible and non-visible bands simultaneously, illumination from outside the visible range, or the use of optical filters to capture a specific spectral range. It is also possible to capture hundreds of wavelength bands for each pixel in an image.
Multispectral imaging captures a small number of spectral bands, typically three to fifteen, through the use of varying filters and illumination. Many off-the-shelf RGB cameras will detect a small amount of Near-Infrared (NIR) light. A scene may be illuminated with NIR light, and, simultaneously, an infrared-passing filter may be used on the camera to ensure that visible light is blocked and only NIR is captured in the image. Industrial, military, and scientific work, however, uses sensors built for the purpose.
Hyperspectral imaging is another subcategory of spectral imaging, which combines spectroscopy and digital photography. In hyperspectral imaging, a complete spectrum or some spectral information (such as the Doppler shift or Zeeman splitting of a spectral line) is collected at every pixel in an image plane. A hyperspectral camera uses special hardware to capture hundreds of wavelength bands for each pixel, which can be interpreted as a complete spectrum. In other words, the camera has a high spectral resolution. The phrase "spectral imaging" is sometimes used as a shorthand way of referring to this technique, but it is preferable to use the term "hyperspectral imaging" in cases where ambiguity may arise. Hyperspectral images are often represented as an image c |
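The per-pixel spectrum idea described above can be sketched in a few lines of Python. This is a toy illustration with made-up dimensions (4x5 pixels, 100 hypothetical bands), not real sensor data: a hyperspectral image is a 3-D "cube" indexed by row, column, and wavelength band, and reading all bands at one pixel yields that pixel's spectrum.

```python
# Toy hyperspectral "image cube": rows x cols x bands, as nested lists
# so the sketch needs no third-party libraries.
ROWS, COLS, BANDS = 4, 5, 100  # hypothetical sensor with 100 wavelength bands

# Fill the cube with a synthetic value per (row, col, band).
cube = [[[row + col + band * 0.01 for band in range(BANDS)]
         for col in range(COLS)]
        for row in range(ROWS)]

def spectrum_at(cube, row, col):
    """Return the full spectrum (one value per band) at a single pixel."""
    return cube[row][col]

# An RGB camera would keep only 3 bands; a hyperspectral camera keeps them all.
pixel_spectrum = spectrum_at(cube, 2, 3)
print(len(pixel_spectrum))  # one complete 100-band spectrum for this pixel
```

An ordinary RGB image would be the same structure with `BANDS = 3`; the high spectral resolution of hyperspectral data is simply a much larger third axis.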
https://en.wikipedia.org/wiki/Kerckhoff%20Marine%20Laboratory | The William G. Kerckhoff Marine Laboratory is owned and operated by the California Institute of Technology (Caltech). It is located 101 Dahlia Street, in the Corona del Mar district of Newport Beach, in Orange County, California.
History
The marine laboratory was established by biologist Thomas Hunt Morgan in 1928 to replicate the facilities at the Stazione Zoologica in Naples, Italy. Caltech made the decision to purchase the facility in 1929. It is one of the oldest marine laboratories on the West Coast of the United States. From 1962 until his death in 2002, Dr. Wheeler J. North conducted numerous studies on the ecology of the California kelp forests while based at this laboratory. During the 1990s and 2000s investigators included members of the Eric Davidson lab working on various marine biology related projects. The current Director of the Kerckhoff Marine Lab is Prof. Victoria J. Orphan, who maintains an active research program there.
See also
Marine biology
Marine ecosystem
Notes
California Institute of Technology buildings and structures
Marine biological stations
Biological research institutes in the United States
Buildings and structures in Orange County, California
Newport Beach, California |
https://en.wikipedia.org/wiki/Immunodeficiency%2026 | Immunodeficiency 26 is a rare genetic syndrome. It is characterised by absent circulating B and T cells and normal natural killer cells.
Signs and symptoms
The features of this condition include recurrent candidiasis and lower respiratory tract infections.
Genetics
This condition is due to mutations in the DNA-PKcs gene and is inheritable in an autosomal recessive fashion. The gene is located on the long arm of chromosome 8 (8q11.21) on the minus strand. It encodes a protein of 4128 amino acids with a predicted molecular weight of 469 kilodaltons. The encoded protein is a protein kinase that is activated by DNA. This protein acts as a sensor for damaged DNA.
Diagnosis
Diagnosis is made by examination of the circulating lymphocytes and gene sequencing.
Differential diagnosis
Ataxia telangiectasia
Artemis deficiency
LIG4 syndrome
Nijmegen breakage syndrome
Severe combined immunodeficiency with Cernunnos
X-linked agammaglobulinemia
Management
Epidemiology
This condition is rare. Only two cases have been described up to 2017.
History
This condition was described in 2009 by van der Burg et al. |
https://en.wikipedia.org/wiki/Hub%20%28network%20science%29 | In network science, a hub is a node with a number of links that greatly exceeds the average. Emergence of hubs is a consequence of a scale-free property of networks. While hubs cannot be observed in a random network, they are expected to emerge in scale-free networks. The uprise of hubs in scale-free networks is associated with power-law distribution. Hubs have a significant impact on the network topology. Hubs can be found in many real networks, such as the brain or the Internet.
A hub is a high-degree node of a network. Hubs have a significantly larger number of links in comparison with other nodes in the network. The number of links (degree) of a hub in a scale-free network is much higher than that of the biggest node in a random network, keeping the size N of the network and the average degree constant. The existence of hubs is the biggest difference between random networks and scale-free networks. In random networks, the degree k is comparable for every node; it is therefore not possible for hubs to emerge. In scale-free networks, a few nodes (hubs) have a high degree k while the other nodes have a small number of links.
Emergence
Emergence of hubs can be explained by the difference between scale-free networks and random networks. Scale-free networks (Barabási–Albert model) are different from random networks (Erdős–Rényi model) in two aspects: (a) growth, (b) preferential attachment.
(a) Scale-free networks assume continuous growth of the number of nodes N, whereas random networks assume a fixed number of nodes. In scale-free networks the degree of the largest hub rises polynomially with the size of the network. Therefore, the degree of a hub can be high in a scale-free network. In random networks the degree of the largest node rises logarithmically (or slower) with N, thus the degree of the largest node will be small even in a very large network.
(b) A new node in a scale-free network has a tendency to link to a node with a higher degree, compared |
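The growth-plus-preferential-attachment mechanism can be sketched in pure Python. The following is a minimal toy model (one link per new node, so a simplified variant of the Barabási–Albert process, not the full algorithm): each new node attaches to an existing node chosen with probability proportional to its degree, implemented by sampling uniformly from a list in which every node appears once per incident edge.

```python
import random

def preferential_attachment(n, seed=0):
    """Grow a graph one node at a time; each new node links to one
    existing node chosen with probability proportional to its degree."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}          # start from a single edge 0-1
    stubs = [0, 1]                 # each node listed once per incident edge
    for new in range(2, n):
        target = rng.choice(stubs)  # degree-proportional choice
        degree[new] = 1
        degree[target] += 1
        stubs += [new, target]
    return degree

deg = preferential_attachment(2000)
# The graph has n - 1 edges, so the average degree stays near 2, while a
# few early, well-connected nodes (hubs) collect far more links than that.
print(max(deg.values()), sum(deg.values()) / len(deg))
```

Replacing `rng.choice(stubs)` with a uniform choice over all existing nodes would give a random-attachment model in which the largest degree grows much more slowly, which is the contrast the text describes.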
https://en.wikipedia.org/wiki/Learned%20medicine | Learned medicine is the European medical tradition of the Early Modern period, shaped by the tension between the texts derived from ancient Greek medicine (particularly the teachings attributed to Hippocrates and those of Galen) and the newer theories of natural philosophy spurred on by Renaissance humanistic studies, the religious Reformation and the establishment of scientific societies. The Renaissance principle of "ad fontes", as applied to Galen, sought to establish better texts of his writings, free from later accretions from Arabic-derived texts and texts of medieval Latin. This search for better texts was influential in the early 16th century. Historians use the term medical humanism for this textual activity, pursued for its own sake.
Learned medicine centred on the practica, a genre of Latin texts based on the description of diseases and their treatment (nosology). Its interest lay less in the abstract reasoning of medieval medicine and the tradition of Avicenna, on which it was built, and more in the diagnosis and treatment of particular diseases. Practica, covering diagnosis and therapies, was contrasted with theorica, which dealt with physiology and abstract thought on health and illness. The Galenic tradition valued practica less than theorica, but from the 15th century the status of practica in learned medicine rose.
"Learned medicine" in this sense was also an academic discipline. It was taught in European universities, and its faculty had the same status as those of theology and law. Learned medicine is typically contrasted with the folk medicine of the period, but it has been argued that the distinction is not rigorous. Its Galenic teachings were challenged successively by Paracelsianism and Helmontianism.
Learned medicine and syphilis
Around the year 1500 an issue for learned medicine was the nature of morbus gallicus, now identified as venereal syphilis. Alessandro Benedetti, in pa |
https://en.wikipedia.org/wiki/List%20of%20flags%20of%20Malta | The following is a list of flags of Malta.
National flags
Governmental flags
Military flags
Historical flags
Local Councils
Some flags had been used prior to the creation of local councils in 1993. The coats of arms of the local councils are officially recognised; however, the flags are not, and thus a number of variants exist. Since 1993, a new local council, Mtarfa, has been created, and the local councils of Attard, Birżebbuġa, Floriana, Kalkara, Lija, Mellieħa, Mġarr, Mosta, Nadur, Naxxar, Paola, Qrendi, Siġġiewi, Xgħajra and Żebbuġ have changed their flags and coats of arms. Some, such as Mosta, made only minor changes, but others, like Xgħajra, changed the arms completely.
Malta
Gozo
Political flags
Religious flags
See also
Flag of Malta
Coat of arms of Malta |
https://en.wikipedia.org/wiki/Specialization%20%28pre%29order | In the branch of mathematics known as topology, the specialization (or canonical) preorder is a natural preorder on the set of the points of a topological space. For most spaces that are considered in practice, namely for all those that satisfy the T0 separation axiom, this preorder is even a partial order (called the specialization order). On the other hand, for T1 spaces the order becomes trivial and is of little interest.
The specialization order is often considered in applications in computer science, where T0 spaces occur in denotational semantics. The specialization order is also important for identifying suitable topologies on partially ordered sets, as is done in order theory.
Definition and motivation
Consider any topological space X. The specialization preorder ≤ on X relates two points of X when one lies in the closure of the other. However, various authors disagree on which 'direction' the order should go. What is agreed is that if
x is contained in cl{y},
(where cl{y} denotes the closure of the singleton set {y}, i.e. the intersection of all closed sets containing {y}), we say that x is a specialization of y and that y is a generalization of x; this is commonly written y ⤳ x.
Unfortunately, the property "x is a specialization of y" is alternatively written as "x ≤ y" and as "y ≤ x" by various authors (see, respectively, and ).
Both definitions have intuitive justifications: in the case of the former, we have
x ≤ y if and only if cl{x} ⊆ cl{y}.
However, in the case where our space X is the prime spectrum Spec R of a commutative ring R (which is the motivational situation in applications related to algebraic geometry), then under our second definition of the order, we have
y ≤ x if and only if y ⊆ x as prime ideals of the ring R.
For the sake of consistency, for the remainder of this article we will take the first definition, that "x is a specialization of y" be written as x ≤ y. We then see,
x ≤ y if and only if x is contained in all close |
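Under the first definition, x ≤ y means x ∈ cl{y}, equivalently that every open set containing x also contains y, and this can be checked mechanically on a finite space. The sketch below (an illustration, not taken from the source) computes the specialization preorder of the two-point Sierpiński space, whose open sets are {}, {1} and {0, 1}:

```python
def closure(point, points, opens):
    """cl{point}: all x such that every open set containing x also
    contains point (i.e. meets the singleton {point})."""
    return {x for x in points
            if all(point in U for U in opens if x in U)}

def specialization(points, opens):
    """Return the set of pairs (x, y) with x <= y, i.e. x in cl{y}."""
    return {(x, y) for y in points for x in closure(y, points, opens)}

# Sierpinski space: {1} is open but {0} is not.
points = {0, 1}
opens = [set(), {1}, {0, 1}]
print(sorted(specialization(points, opens)))
# Besides the reflexive pairs, the only relation is 0 <= 1:
# 0 lies in cl{1} = {0, 1}, so 0 is a specialization of 1.
```

Since cl{0} = {0} while cl{1} = {0, 1}, the preorder here is in fact a partial order, consistent with the Sierpiński space being T0.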
https://en.wikipedia.org/wiki/Gangs%20Matrix | The Gangs Matrix, also known as the Gangs Violence Matrix, is a database of alleged street gang members created by the Metropolitan Police Service in 2012. It has been criticised for its use of circumstantial evidence and disproportionate targeting of young black men.
A 2018 investigation by the Information Commissioner's Office found that the use of the Gangs Matrix at that time was in breach of data protection laws, and issued an enforcement notice to bring the operation of the system in line with the law. The enforcement notice was lifted in 2021.
In 2022, the MPS announced that it had removed more than 1000 people, representing over 65% of the entries in the database, on the basis that they referred to people who posed no threat of violence. |
https://en.wikipedia.org/wiki/General%20relativity%20priority%20dispute | Albert Einstein presented the theories of special relativity and general relativity in publications that either contained no formal references to previous literature, or referred only to a small number of his predecessors for fundamental results on which he based his theories, most notably to the work of Henri Poincaré and Hendrik Lorentz for special relativity, and to the work of David Hilbert, Carl F. Gauss, Bernhard Riemann, and Ernst Mach for general relativity. Subsequently, claims have been put forward about both theories, asserting that they were formulated, either wholly or in part, by others before Einstein. At issue is the extent to which Einstein and various other individuals should be credited for the formulation of these theories, based on priority considerations.
In general relativity, there is a controversy about how much credit should go to Einstein, Marcel Grossmann, and David Hilbert. Many others (such as Gauss, Riemann, William Kingdon Clifford, Ricci, Gunnar Nordström and Levi-Civita) contributed to the development of the mathematical tools and geometrical ideas underlying the theory of gravity. Also polemics exist about alleged contributions of others such as Paul Gerber.
It was noted by Sir Edmund Whittaker in his 1954 book that David Hilbert had derived the theory of general relativity from an elegant variational principle almost simultaneously with Einstein's discovery of the theory. Hilbert's derivation of the theory predated that of Einstein by five days.
Undisputed facts
The following facts are well established and referable:
The proposal to describe gravity by means of a pseudo-Riemannian metric was first made by Einstein and Grossmann in the so-called Entwurf theory, published in 1913. Grossmann identified the contracted Riemann tensor as the key to the solution of the problem posed by Einstein. This was followed by several attempts by Einstein to find valid field equations for this theory of gravity.
David Hilbert invited Einstein |
https://en.wikipedia.org/wiki/Carcinogenic%20bacteria | Cancer bacteria are infectious bacterial organisms that are known or suspected to cause cancer. While cancer-associated bacteria have long been considered to be opportunistic (i.e., infecting healthy tissues after cancer has already established itself), there is some evidence that bacteria may be directly carcinogenic. The strongest evidence to date involves the bacterium H. pylori and its role in gastric cancer.
Oncoviruses are viral agents that are similarly suspected of causing cancer.
Known to cause cancer
Helicobacter pylori colonizes the human stomach and duodenum. In some cases it can cause stomach cancer and MALT lymphoma. Animal models have demonstrated Koch's third and fourth postulates for the role of Helicobacter pylori in the causation of stomach cancer. The mechanism by which H. pylori causes cancer may involve chronic inflammation, or the direct action of some of its virulence factors, for example, CagA has been implicated in carcinogenesis.
Speculative links
A number of bacteria have associations with cancer, although their possible role in carcinogenesis is unclear.
Salmonella Typhi has been linked to gallbladder cancer but may also be useful in delivering chemotherapeutic agents for the treatment of melanoma, colon and bladder cancer. Bacteria found in the gut may be related to colon cancer, but the relationship may be more complicated due to the role of chemoprotective probiotic bacteria. Microorganisms and their metabolic byproducts, or the impact of chronic inflammation, may also be linked to oral cancers.
The relationship between cancer and bacteria may be complicated by different individuals reacting in different ways to different cancers.
History
In 1890, the Scottish pathologist William Russell reported circumstantial evidence for the bacterial cause of cancer. In 1926, Canadian physician Thomas Glover reported that he could consistently isolate a specific bacterium from the neoplastic tissues of animals and humans. One review summarized Glover's report a |
https://en.wikipedia.org/wiki/Conceptual%20physics | Conceptual physics is an approach to teaching physics that focuses on the ideas of physics rather than the mathematics. It is believed that with a strong conceptual foundation in physics, students are better equipped to understand the equations and formulas of physics, and to make connections between the concepts of physics and their everyday life. Early versions used almost no equations or math-based problems.
Paul G. Hewitt popularized this approach with his textbook Conceptual Physics: A New Introduction to your Environment in 1971. In his review at the time, Kenneth W. Ford noted the emphasis on logical reasoning and said "Hewitt's excellent book can be called physics without equations, or physics without computation, but not physics without mathematics." Hewitt's wasn't the first book to take this approach. Conceptual Physics: Matter in Motion by Jae R. Ballif and William E. Dibble was published in 1969. But Hewitt's book became very successful. As of 2022, it is in its 13th edition. In 1987 Hewitt wrote a version for high school students.
The spread of the conceptual approach to teaching physics broadened the range of students taking physics in high school. Enrollment in conceptual physics courses in high school grew from 25,000 students in 1987 to over 400,000 in 2009. In 2009, 37% of students took high school physics, and 31% of them were in Physics First, conceptual physics courses, or regular physics courses using a conceptual textbook.
This approach to teaching physics has also inspired books for science literacy courses, such as From Atoms to Galaxies: A Conceptual Physics Approach to Scientific Awareness by Sadri Hassani. |
https://en.wikipedia.org/wiki/Arcuate%20uterus | The arcuate uterus is a form of a uterine anomaly or variation where the uterine cavity displays a concave contour towards the fundus. Normally the uterine cavity is straight or convex towards the fundus on anterior-posterior imaging, but in the arcuate uterus the myometrium of the fundus dips into the cavity and may form a small septation. The distinction between an arcuate uterus and a septate uterus is not standardized.
Signs and symptoms
The condition may not be known to the affected individual and not result in any reproductive problems; thus normal pregnancies occur. Indeed, there is no consensus on the relationship of the arcuate uterus and recurrent pregnancy loss. Accordingly, the condition may be a variation or a pathology.
One view maintains that the condition is associated with a higher risk for miscarriage, premature birth, and malpresentation. Thus a study that evaluated women with uterine bleeding by hysteroscopy found that 6.5% of subjects displayed the arcuate uterus and had evidence of reproductive impairments. A study based on hysterosalpingographically detected arcuate lesions documented increased fetal loss and obstetrical complications as a risk for affected women. Woelfer found that the miscarriage risk is more pronounced in the second trimester. In contrast, a study utilizing 3-D ultrasonography to document the prevalence of the arcuate uterus in a gynecological population found no evidence of increased risk of reproductive loss; in this study 3.1% of women had an arcuate uterus, making it the most common uterine anomaly; this prevalence was similar to that in women undergoing sterilization and lower than that in women with recurrent pregnancy loss.
Cause
The uterus is formed during embryogenesis by the fusion of the two Müllerian ducts. During this fusion a resorption process eliminates the partition between the two ducts to create a single cavity. This process begins caudally and advances cranially, thus an arcuate uterus represents an incomplete resorption in the final sta |
https://en.wikipedia.org/wiki/Transcriptional%20activator%20LAG-3 | In molecular biology, the transcriptional activator LAG-3 is a transcriptional activator protein. The C. elegans Notch pathway, involved in the control of growth, differentiation and patterning in animal development, relies on either of the receptors GLP-1 or LIN-12. Both these receptors promote signalling by the recruitment of LAG-3 to target promoters, where it then acts as a transcriptional activator. LAG-3 works as a ternary complex together with the DNA binding protein, LAG-1. |
https://en.wikipedia.org/wiki/Richard%20Lydekker | Richard Lydekker (; 25 July 1849 – 16 April 1915) was an English naturalist, geologist and writer of numerous books on natural history.
Biography
Richard Lydekker was born at Tavistock Square in London. His father was Gerard Wolfe Lydekker, a barrister-at-law with Dutch ancestry. The family moved to Harpenden Lodge soon after Richard's birth. He was educated at Trinity College, Cambridge, where he took a first-class in the Natural Science tripos (1872). In 1874 he joined the Geological Survey of India and made studies of the vertebrate palaeontology of northern India (especially Kashmir). He remained in this post until the death of his father in 1881. His main work in India was on the Siwalik palaeofauna; it was published in Palaeontologia Indica. He was responsible for the cataloguing of the fossil mammals, reptiles, and birds in the Natural History Museum (10 vols., 1891).
He named a variety of taxa including the golden-bellied mangabey; as a taxon authority he is named simply as "Lydekker".
Biogeography
He was influential in the science of biogeography. In 1896 he delineated the biogeographical boundary through Indonesia, known as Lydekker's Line, that separates Wallacea on the west from Australia-New Guinea on the east. It follows the edge of the Sahul Shelf, an area from New Guinea to Australia of shallow water with the Aru Islands on its edge. Along with Wallace's Line and others, it indicates the definite effect of geology on the biogeography of the region, something not seen so clearly in other parts of the world.
First cuckoo
Lydekker attracted amused public attention with a pair of letters to The Times in 1913, when he wrote on 6 February that he had heard a cuckoo, contrary to Yarrell's History of British Birds which doubted the bird arrived before April. Six days later on 12 February 1913, he wrote again, confessing that "the note was uttered by a bricklayer's labourer". Letters about the first cuckoo became a tradition in the newspaper.
Awards
He |
https://en.wikipedia.org/wiki/Portogloboviridae | Portogloboviridae is a family of dsDNA viruses that infect archaea. It is a proposed family of the realm Varidnaviria, but the ICTV officially lists it as incertae sedis. Viruses in the family are related to Helvetiavirae. The capsid proteins of these viruses and their characteristics are of evolutionary importance for the origin of the other Varidnaviria viruses, since they seem to retain primordial characters.
Description
The virions in this family have a capsid with icosahedral geometry and a viral envelope that protects the genetic material. The diameter is 83 to 87 nanometers. The genome is circular dsDNA with a length of 20,222 base pairs. The genome contains 45 open reading frames (ORFs), which are closely arranged and occupy 89.1% of the genome. ORFs are generally short, with an average length of 103 codons. Virions have 10 proteins ranging from 20 to 32 kDa. Of these proteins, 8 code for the capsid and two for the viral envelope, including one that is a vertical single jelly roll (SJR) capsid protein. Entry into the host cell is by penetration. Viral replication occurs by chronic infection without a lytic cycle.
The Portogloboviridae viruses, together with the Halopanivirales, are important for understanding the evolution of the other Varidnaviria viruses, since they appear to be relics of the realm's earliest viruses. Portogloboviridae together with Halopanivirales may have infected the last universal common ancestor (LUCA) and originated before that organism.
It has been proposed that the family may be related in this way to the origin of Varidnaviria.
Taxonomy
The family has one genus which has two species:
Alphaportoglobovirus
Alphaportoglobovirus SPV2
Sulfolobus alphaportoglobovirus 1 |
https://en.wikipedia.org/wiki/Radiative%20equilibrium | Radiative equilibrium is the condition where the total thermal radiation leaving an object is equal to the total thermal radiation entering it. It is one of the several requirements for thermodynamic equilibrium, but it can occur in the absence of thermodynamic equilibrium. There are various types of radiative equilibrium, which is itself a kind of dynamic equilibrium.
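As an illustrative numerical sketch (the planetary setting and all input values below are my own example, not from the article), the condition "total radiation leaving equals total radiation entering" can be solved with the Stefan-Boltzmann law:

```python
# Illustrative example (assumed setting, not from the article): a sphere
# absorbs sunlight over its cross-section and, in radiative equilibrium,
# emits the same total power as thermal radiation over its whole surface:
#   (1 - albedo) * S * pi * R^2  ==  sigma * T^4 * 4 * pi * R^2
# The radius cancels; solving for T gives the equilibrium temperature.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # incident flux (solar constant at 1 AU), W m^-2
ALBEDO = 0.3       # reflected fraction; example value

T_eq = ((1 - ALBEDO) * S / (4 * SIGMA)) ** 0.25
print(f"{T_eq:.0f} K")   # about 255 K for these inputs
```

Any other steady state would have a net gain or loss of energy, so the temperature would drift until the two totals match.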
Definitions
Equilibrium, in general, is a state in which opposing forces are balanced, and hence a system does not change in time. Radiative equilibrium is the specific case of thermal equilibrium in which the exchange of heat is carried out by radiative heat transfer.
There are several types of radiative equilibrium.
Prevost's definitions
An important early contribution was made by Pierre Prevost in 1791. Prevost considered that what is nowadays called the photon gas or electromagnetic radiation was a fluid that he called "free heat". Prevost proposed that free radiant heat is a very rare fluid, rays of which, like light rays, pass through each other without detectable disturbance of their passage. Prevost's theory of exchanges stated that each body radiates to, and receives radiation from, other bodies. The radiation from each body is emitted regardless of the presence or absence of other bodies.
Prevost in 1791 offered the following definitions (translated):
Absolute equilibrium of free heat is the state of this fluid in a portion of space which receives as much of it as it lets escape.
Relative equilibrium of free heat is the state of this fluid in two portions of space which receive from each other equal quantities of heat, and which moreover are in absolute equilibrium, or experience precisely equal changes.
Prevost went on to comment that "The heat of several portions of space at the same temperature, and next to one another, is at the same time in the two species of equilibrium."
Pointwise radiative equilibrium
Following Planck (1914), a radiative field is often described in |
https://en.wikipedia.org/wiki/Pacific%20Islands%20Ocean%20Observing%20System | The Pacific Islands Ocean Observing System (PacIOOS) is a nonprofit association and one of eleven such associations in the U.S. Integrated Ocean Observing System, funded in part by the National Oceanic and Atmospheric Administration (NOAA).
The PacIOOS area covers eight time zones, and 2300 individual islands associated with the U.S. Observation priorities are public safety, direct economic value, and environmental preservation. Among ocean characteristics reported are:
Currents forecast
Shoreline impacts such as high sea level
Buoy water characteristics including salinity, turbidity, and temperature
The PacIOOS website is hosted by the University of Hawaii at Manoa, and provides interactive graphs and map viewers. |
https://en.wikipedia.org/wiki/Leucine-rich%20repeats%20and%20death%20domain%20containing%201 | Leucine-rich repeats and death domain containing 1 is a protein that in humans is encoded by the LRRD1 gene. |
https://en.wikipedia.org/wiki/Orang-Outang%2C%20sive%20Homo%20Sylvestris | Orang-Outang, sive Homo Sylvestris: or, the Anatomy of a Pygmie Compared with that of a Monkey, an Ape, and a Man (1699) is a book by the British natural philosopher Edward Tyson. Regarded as a seminal work on anatomy, this volume led to Tyson being known as the father of comparative anatomy. The book characterizes in detail the anatomy of a creature described as a pygmy (later known as a chimpanzee) and contains Tyson's views on the phylogeny of the pygmy and its relationship to humans, apes, and monkeys.
The use of the phrase "orang-outang" does not refer to members of the orangutan genus Pongo, but rather uses the phrase to refer to the habitat of the subject; that is, a "person of the forest" (orangutan translates from Malay as "person of the forest/jungle".) Due to the absence of previous literature concerning chimpanzees, it is not clear whether or not the titular subject was a bonobo, Pan paniscus, or a common chimpanzee, Pan troglodytes, as there was not great prior distinction between the two. Following his summary of the anatomy of the subject, Tyson attaches four essays concerning the ancients' knowledge of pygmies, cynocephali, satyrs, and sphinges.
The book was originally published in 1699 and was republished in 1894 with an introduction which contains a biography of Edward Tyson by Bertram C. A. Windle. Large portions of the book are block quotations in Latin of works from antiquity regarding the anatomy and socialization of the pygmy, much of which Tyson regarded as inaccurate myths and hearsay.
Letter of dedication
The letter of dedication is addressed to John Sommers, a Lord High Chancellor of England and President of the Royal Society. The letter thanks him for his dedication to the advancement of knowledge, specifically "experimental natural philosophy".
Preface
The preface consists of Tyson giving his reasons for conducting the study, "to find out the truth, than to enlarge in the mythology; to inform the judgement, than to please the fancy". |
https://en.wikipedia.org/wiki/Financial%20network | A financial network is a concept describing any collection of financial entities (such as traders, firms, banks and financial exchanges) and the links between them, typically through direct transactions or the ability to mediate a transaction. A common example of a financial network link is security holdings (e.g. stock of publicly traded companies), where a firm's ownership of stock would represent a link between the stock and the firm. In network science terms, financial networks are composed of financial nodes, where nodes represent financial institutions or participants, and of edges, where edges represent formal or informal relationships between nodes (e.g. stock or bond ownership).
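The node/edge representation described above can be sketched minimally as an adjacency map; the institution names and stakes below are hypothetical examples, not data from the article:

```python
# Minimal sketch of a financial network: nodes are institutions or
# securities, and a directed edge holder -> asset records an ownership
# stake. All names and stakes here are hypothetical.
from collections import defaultdict

holdings = defaultdict(dict)   # adjacency map: holder -> {asset: stake}

def add_holding(holder, asset, stake):
    """Add a directed edge meaning `holder` owns `stake` of `asset`."""
    holdings[holder][asset] = stake

add_holding("FirmA", "StockX", 0.10)   # FirmA holds 10% of StockX
add_holding("BankB", "StockX", 0.05)
add_holding("BankB", "BondY", 0.20)

# Out-degree of each node: how many distinct assets it holds.
out_degree = {node: len(assets) for node, assets in holdings.items()}
print(out_degree)   # {'FirmA': 1, 'BankB': 2}
```

Shared neighbors (here, two holders of StockX) are what create the hidden correlations discussed below: a shock to one asset propagates to every node linked to it.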
History
The concept and use of financial networks has emerged in response to the observation that modern financial systems exhibit a high degree of interdependence. Globalization has magnified the level of financial interdependence across many kinds of organizations. Shares, assets, and financial relationships are held and engaged in to a greater degree over time. The trend is a topic of major interest in the financial sector, particularly due to its implications for financial crises.
Crises have played a major role in developing the understanding of financial networks. In 1998, the crash of Long-Term Capital Management (LTCM) exposed the underlying importance of such networks. In particular, the LTCM case highlighted the hidden correlations inherent in financial networks: correlations between Japanese bonds and Russian bonds were much higher than expected. LTCM took on a significant amount of risk (at one point leveraged 25:1) to trade on this relationship, while underestimating these correlations. The 1997 Asian financial crisis and the subsequent 1998 Russian financial crisis led to a divergence of European, Japanese and U.S. bonds, causing the collapse of LTCM. The ensuing crisis in the market proved the impact that financial networks can have. Similarly, |
https://en.wikipedia.org/wiki/Tachyons%20in%20fiction | The hypothetical particles called tachyons have inspired many appearances in fiction. The use of the word in science fiction dates back at least to 1970, when James Blish's Star Trek novel Spock Must Die! incorporated tachyons into an ill-fated transporter experiment.
In general, tachyons are a standby mechanism upon which many science fiction authors rely to establish faster-than-light communication, with or without reference to causality issues. For example, in the Babylon 5 television series, tachyons are used for real-time communication over long distances. Another instance is Gregory Benford's novel Timescape, winner of the Nebula Award, which involves the use of tachyons to transmit a message of salvation back in time. Likewise, John Carpenter's horror film Prince of Darkness uses tachyons to explain how future humans send messages backward through time to warn the characters of their impending doom. By contrast, Alan Moore's classic comic book limited series Watchmen features a character who uses "a squall of tachyons" broadcasting from space to muddle the mind of the only person on Earth capable of seeing the future.
The word "tachyon" has become widely recognized to such an extent that it can impart a science-fictional "sound" even if the subject in question has no particular relation to superluminal travel (compare positronic brain). Classic anime fans may associate tachyons with the energy source for the wave-motion gun and wave-motion engine in Space Battleship Yamato (Starblazers in the United States). Further examples include the "Tachion Tanks" of the PC game Dark Reign and the "tachyon beam" of the game Master of Orion. The space-combat sim Tachyon: The Fringe utilizes "tachyon gates" for superluminal travel but gives no exact explanation for the technology, and the MMORPG Eve Online features six types of "Large Tachyon Lasers", technically a contradiction since by definition, lasers emit light—photons, not any kind of hypothetical tachyon.
In p |
https://en.wikipedia.org/wiki/Exponential%20distribution | In probability theory and statistics, the exponential distribution or negative exponential distribution is the probability distribution of the time between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate. It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless. In addition to being used for the analysis of Poisson point processes it is found in various other contexts.
The exponential distribution is not the same as the class of exponential families of distributions. This is a large class of probability distributions that includes the exponential distribution as one of its members, but also includes many other distributions, like the normal, binomial, gamma, and Poisson distributions.
Definitions
Probability density function
The probability density function (pdf) of an exponential distribution is f(x; λ) = λe^(−λx) for x ≥ 0, and f(x; λ) = 0 for x < 0.
Here λ > 0 is the parameter of the distribution, often called the rate parameter. The distribution is supported on the interval [0, ∞). If a random variable X has this distribution, we write X ~ Exp(λ).
The exponential distribution exhibits infinite divisibility.
Cumulative distribution function
The cumulative distribution function is given by F(x; λ) = 1 − e^(−λx) for x ≥ 0, and F(x; λ) = 0 for x < 0.
Alternative parametrization
The exponential distribution is sometimes parametrized in terms of the scale parameter β = 1/λ, which is also the mean: f(x; β) = (1/β)e^(−x/β) for x ≥ 0.
Properties
Mean, variance, moments, and median
The mean or expected value of an exponentially distributed random variable X with rate parameter λ is given by E[X] = 1/λ.
In light of the examples given below, this makes sense: if you receive phone calls at an average rate of 2 per hour, then you can expect to wait half an hour for every call.
The variance of X is given by Var(X) = 1/λ²,
so the standard deviation is equal to the mean.
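These properties can be checked numerically; the sketch below uses Python's standard library, with an arbitrary rate and sample size:

```python
# Numerical check: for X ~ Exp(lam), the mean is 1/lam, the standard
# deviation equals the mean, and the distribution is memoryless:
# P(X > s + t | X > s) = P(X > t). Sample size and rate are arbitrary.
import random
import statistics

random.seed(0)
lam = 2.0   # e.g. an average of 2 calls per hour
xs = [random.expovariate(lam) for _ in range(200_000)]

mean = statistics.fmean(xs)
std = statistics.pstdev(xs)
print(round(mean, 2), round(std, 2))   # both near 1/lam = 0.5

# Memorylessness: among waits already longer than s, the probability of
# waiting a further t matches the unconditional P(X > t).
s, t = 0.3, 0.4
p_cond = sum(x > s + t for x in xs) / sum(x > s for x in xs)
p_marg = sum(x > t for x in xs) / len(xs)
print(round(p_cond, 2), round(p_marg, 2))   # approximately equal
```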
The moments of X, for n = 1, 2, 3, …, are given by E[X^n] = n!/λ^n.
The central moments of X, for n = 1, 2, 3, …, are given by μ_n = !n/λ^n,
where !n is the subfactorial of n. |
https://en.wikipedia.org/wiki/Journal%20of%20NeuroVirology | The Journal of NeuroVirology is a medical journal that publishes review articles on the molecular biology, immunology, genetics, epidemiology, and pathogenesis of CNS disorders with the goal of bridging the gap between basic and clinical studies, and enhancing translational research in neurovirology. It is published by Springer Science+Business Media. The Journal of NeuroVirology is the official journal of the International Society for Neurovirology.
Abstracting and indexing
The journal is abstracted and indexed in:
Science Citation Index Expanded
Journal Citation Reports/Science Edition
SCOPUS
PsycINFO
EMBASE
Chemical Abstracts Service
CSA Illumina
Biological Abstracts
BIOSIS
CAB Abstracts
Current Contents/ Life Sciences
Global Health
Neuroscience Citation Index
PASCAL
Summon by Serial Solutions
External links
Journal of NeuroVirology
Springer - Journal of NeuroVirology
International Society of Neurovirology
Editorial board
Neuroscience journals
Springer Science+Business Media academic journals
Academic journals established in 1994
Bimonthly journals
Virology journals |
https://en.wikipedia.org/wiki/Perillaldehyde | Perillaldehyde, perillic aldehyde or perilla aldehyde, is a natural organic compound found most abundantly in the annual herb perilla, but also in a wide variety of other plants and essential oils. It is a monoterpenoid containing an aldehyde functional group.
Perillaldehyde, or volatile oils from perilla that are rich in perillaldehyde, are used as food additives for flavoring and in perfumery to add spiciness. Perillaldehyde can be readily converted to perilla alcohol, which is also used in perfumery. It has a mint-like, cinnamon odor and is primarily responsible for the flavor of perilla.
The oxime of perillaldehyde is known as perillartine or perilla sugar; it is about 2000 times sweeter than sucrose and is used in Japan as a sweetener. Perillaldehyde is present in lower concentrations in the body odor of persons suffering from Parkinson's disease.
See also
Icosane |
https://en.wikipedia.org/wiki/Iterated%20filtering | Iterated filtering algorithms are a tool for maximum likelihood inference on partially observed dynamical systems. Stochastic perturbations to the unknown parameters are used to explore the parameter space. Applying sequential Monte Carlo (the particle filter) to this extended model results in the selection of the parameter values that are more consistent with the data. Appropriately constructed procedures, iterating with successively diminished perturbations, converge to the maximum likelihood estimate. Iterated filtering methods have so far been used most extensively to study infectious disease transmission dynamics. Case studies include cholera, Ebola virus, influenza, malaria, HIV, pertussis, poliovirus and measles. Other areas which have been proposed to be suitable for these methods include ecological dynamics and finance.
The perturbations to the parameter space play several different roles. Firstly, they smooth out the likelihood surface, enabling the algorithm to overcome small-scale features of the likelihood during early stages of the global search. Secondly, Monte Carlo variation allows the search to escape from local minima. Thirdly, the iterated filtering update uses the perturbed parameter values to construct an approximation to the derivative of the log likelihood even though this quantity is not typically available in closed form. Fourthly, the parameter perturbations help to overcome numerical difficulties that can arise during sequential Monte Carlo.
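The mechanism above can be sketched on a toy problem. Everything in the sketch below is an illustrative assumption (a noisy AR(1) model, Gaussian measurements, and crude tuning values), not an implementation from the iterated-filtering literature: each particle carries its own perturbed copy of the unknown parameter, resampling keeps the copies consistent with the data, and the perturbation scale is successively diminished.

```python
# Toy sketch of the iterated-filtering idea (model and all tuning values
# are illustrative assumptions): estimate the coefficient theta of a
# noisy AR(1) process from observations.
import math
import random
import statistics

random.seed(1)

TRUE_THETA = 0.8
OBS_SD = 0.5
ys, x = [], 0.0
for _ in range(80):                       # simulate y_n = x_n + noise
    x = TRUE_THETA * x + random.gauss(0, 1)
    ys.append(x + random.gauss(0, OBS_SD))

def iterated_filter(ys, n_particles=300, n_iter=15, sigma=0.2, shrink=0.9):
    thetas = [random.uniform(-1, 1) for _ in range(n_particles)]
    for _ in range(n_iter):
        # perturb each particle's parameter copy
        thetas = [th + random.gauss(0, sigma) for th in thetas]
        states = [0.0] * n_particles
        for y in ys:
            states = [th * s + random.gauss(0, 1)
                      for th, s in zip(thetas, states)]
            # weight each particle by the measurement likelihood
            ws = [math.exp(-0.5 * ((y - s) / OBS_SD) ** 2) for s in states]
            if sum(ws) == 0.0:            # all weights underflowed; skip
                continue
            idx = random.choices(range(n_particles), weights=ws,
                                 k=n_particles)
            states = [states[i] for i in idx]   # resample states ...
            thetas = [thetas[i] for i in idx]   # ... and parameters
        sigma *= shrink     # successively diminish the perturbations
    return statistics.fmean(thetas)

estimate = iterated_filter(ys)
print(round(estimate, 2))   # should land near TRUE_THETA
```

This captures the roles listed above in miniature: the perturbations keep parameter diversity alive, resampling does the selection, and shrinking sigma lets the search settle.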
Overview
The data are a time series y_1, …, y_N collected at times t_1, …, t_N. The dynamic system is modeled by a Markov process X(t) which is generated by a function f in the sense that X(t_n) = f(X(t_{n−1}), θ, ε),
where θ is a vector of unknown parameters and ε is some random quantity that is drawn independently each time f is evaluated. An initial condition at some time t_0 < t_1 is specified by an initialization function, X(t_0) = h(θ). A measurement density g(y_n | X(t_n), θ) completes the specification of a partially observed Markov process. We present a basic iterated filter |
https://en.wikipedia.org/wiki/Competition%20model | The Competition Model is a psycholinguistic theory of language acquisition and sentence processing, developed by Elizabeth Bates and Brian MacWhinney (1982). The claim in MacWhinney, Bates, and Kliegl (1984) is that "the forms of natural languages are created, governed, constrained, acquired, and used in the service of communicative functions." Furthermore, the model holds that processing is based on an online competition between these communicative functions or motives. The model focuses on competition during sentence processing, crosslinguistic competition in bilingualism, and the role of competition in language acquisition. It is an emergentist theory of language acquisition and processing, serving as an alternative to strict innatist and empiricist theories. According to the Competition Model, patterns in language arise from Darwinian competition and selection on a variety of time/process scales including phylogenetic, ontogenetic, social diffusion, and synchronic scales.
The Classic Competition Model
The classic version of the model focused on competition during sentence processing, crosslinguistic competition in bilingualism, and the role of competition in language acquisition.
Sentence Processing
The Competition Model was initially proposed as a theory of cross-linguistic sentence processing. The model suggests that people interpret the meaning of a sentence by taking into account various linguistic cues contained in the sentence context, such as word order, morphology, and semantic characteristics (e.g., animacy), to compute a probabilistic value for each interpretation, eventually choosing the interpretation with the highest likelihood. According to the model, cue weights are learned inductively on the basis of the extent to which the cues are available and reliable guides to meanings in comprehension and to forms in production.
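The cue-competition computation can be sketched as a toy calculation. The cue labels and weights below are hypothetical illustrations, not fitted values from the Competition Model literature:

```python
# Toy sketch of cue competition (weights and labels are hypothetical):
# each cue votes for an interpretation with a learned weight, and the
# interpretation with the highest normalized support is chosen.

def interpret(cue_votes, cue_weights):
    """Combine weighted cue votes into a probability per interpretation."""
    support = {}
    for cue, interpretation in cue_votes.items():
        w = cue_weights.get(cue, 0.0)
        support[interpretation] = support.get(interpretation, 0.0) + w
    total = sum(support.values())
    return {k: v / total for k, v in support.items()}

# An English-like weighting, where word order outweighs animacy:
cue_votes = {"word_order": "first_noun_is_agent",
             "animacy": "animate_noun_is_agent",
             "agreement": "first_noun_is_agent"}
cue_weights = {"word_order": 0.6, "animacy": 0.3, "agreement": 0.2}

probs = interpret(cue_votes, cue_weights)
print(max(probs, key=probs.get))   # 'first_noun_is_agent'
```

Swapping in a weighting where animacy dominates (as the model claims for some other languages) would flip the winning interpretation for the same sentence.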
Because different languages use different cues to signal meanings, the Competition Model maintains that cue weights will dif |
https://en.wikipedia.org/wiki/Thymine | Thymine () (symbol T or Thy) is one of the four nucleobases in the nucleic acid of DNA that are represented by the letters G–C–A–T. The others are adenine, guanine, and cytosine. Thymine is also known as 5-methyluracil, a pyrimidine nucleobase. In RNA, thymine is replaced by the nucleobase uracil. Thymine was first isolated in 1893 by Albrecht Kossel and Albert Neumann from calf thymus glands, hence its name.
Derivation
As its alternate name (5-methyluracil) suggests, thymine may be derived by methylation of uracil at the 5th carbon. In RNA, thymine is replaced with uracil in most cases. In DNA, thymine (T) binds to adenine (A) via two hydrogen bonds, thereby stabilizing the nucleic acid structures.
Thymine combined with deoxyribose creates the nucleoside deoxythymidine, which is synonymous with the term thymidine. Thymidine can be phosphorylated with up to three phosphoric acid groups, producing dTMP (deoxythymidine monophosphate), dTDP, or dTTP (for the di- and tri- phosphates, respectively).
One of the common mutations of DNA involves two adjacent thymines or cytosines, which, in the presence of ultraviolet light, may form thymine dimers, causing "kinks" in the DNA molecule that inhibit normal function.
Thymine can also be a target for the action of 5-fluorouracil (5-FU) in cancer treatment. 5-FU can act as a metabolic analog of thymine (in DNA synthesis) or uracil (in RNA synthesis). Substitution of this analog inhibits DNA synthesis in actively dividing cells.
Thymine bases are frequently oxidized to hydantoins over time after the death of an organism.
Thymine imbalance causes mutation
During growth of bacteriophage T4, an imbalance of thymine availability, either a deficiency or an excess of thymine, causes increased mutation. The mutations caused by thymine deficiency appear to occur only at AT base pair sites in DNA and are often AT to GC transition mutations. In the bacterium Escherichia coli, thymine deficiency was also found to be mutagenic and cause A |
https://en.wikipedia.org/wiki/G.9970 | G.9970 (also known as G.hnta) is a Recommendation developed by ITU-T that describes the generic transport architecture for home networks and their interfaces to a provider's access network.
G.9970 was developed by Study Group 15, Question 1. G.9970 received Consent on December 12, 2008 and was Approved on January 13, 2009.
Relationship with G.hn
G.9970 (G.hnta) and G.9960 (G.hn) are two ITU-T Recommendations that address home networking in a complementary manner. While G.9970 addresses layer 3 (network layer) of the home network architecture, G.9960 addresses layers 1 (physical layer) and 2 (data link layer). |
https://en.wikipedia.org/wiki/CalDriCon | CalDriCon is an electrical digital signaling interface for high-definition liquid crystal displays (LCDs) in high-definition televisions and mobile handsets. It was originally developed by THine Electronics, Inc.
Outline of CalDriCon
Clock separated point-to-point interface
Data rate up to 2.0 Gbit/s/lane
CalDriCon searches for the best sampling point at the receiving devices and finds the best pre-emphasis level at the transmitting devices in order to adjust the skew between clock and data
Pre-emphasis at the transmitter maintains good signal integrity. Because of this feature, systems can use low-cost cables to achieve stable high-speed data transmission
Terminating both the transmitting side and the receiving side reduces unwanted reflection effects from multi-drop points
CalDriCon reduces the pin counts and cables needed for transmission of high-definition pixel data, lowering total costs and the space required for internal interface systems
History of developing high-speed LCD driver interfaces
Around 2000, LCD panels often used high-speed driver interfaces such as mini-LVDS, developed by Texas Instruments, and RSDS (Reduced Swing Differential Signaling), developed by National Semiconductor.
However, as full HD televisions were launched in 2005 and full HD with double frame rate in 2007, the demand for higher speeds in high-definition televisions required advanced driver-interface technologies. In this situation, new LCD driver interfaces such as Advanced PPmL and CalDriCon appeared to overcome the constraints on high-speed operation.
Comparison among LCD driver interfaces
Among the new LCD driver interfaces, CalDriCon satisfies both the high-speed requirement of 2.0 Gbit/s/lane and tolerance of noise from unstable power sources and grounds, as well as providing auto-adjustment of skew between clock and data.
As driver ICs are generally mounted as COF (chip-on-film), they are often affected by noise from unstable power sources and grounds. Wh |
https://en.wikipedia.org/wiki/Bedding%20%28animals%29 | Bedding, in ethology and animal husbandry, is material, usually organic, used by animals to support their bodies when resting or otherwise stationary. It reduces pressure on skin, heat loss, and contamination by waste produced by an animal or those it shares living space with.
Types of bedding
Wood shavings (pine, cedar, and aspen) are absorbent and have good odor control. Different textures such as fine cut, soft shreds, or thick cut are used for different animals. Wood shavings can be dusty and contain aromatic oils that can cause respiratory, gastrointestinal, urinary tract, or skin disorders and other health problems in some animals. Aspen and kiln-dried wood shavings tend to be less dusty, and the aromatic oils are removed.
Hemp bedding is extremely absorbent and thus efficient, has good odor control and minimal dust, and provides more insulation than other bedding materials. Additionally, hemp is naturally pest-repellent and horses are not tempted to eat it. Due to its low dust, hemp bedding is recommended for horses with allergy or respiratory issues. From an environmental consideration, hemp is more sustainable than wood as it requires both less time and human intervention to grow repeatedly.
Corncob bedding contains no aromatic oils or dust. Corncobs are heat-dried, which makes the bedding very absorbent. Once it absorbs water or urine, corncob bedding begins to mold, so daily cleaning is needed. If not properly maintained, bacterial infections are likely to occur. Corncobs are cut into small pieces that are easy to ingest, which can be dangerous for small animals.
Paper bedding includes either recycled paper or cardboard boxes. Paper bedding is ideal for animals with allergies since it contains no oils and little dust. Unlike corncob bedding, paper bedding has no adverse effects with consumption. Paper is very absorbent, but when saturated with water or urine, a strong odor results.
Straw is the soft, dry stalk of small-grain plants such as barley, oats, |
https://en.wikipedia.org/wiki/Indiglo | Indiglo is a product feature on watches marketed by Timex, incorporating an electroluminescent panel as a backlight for even illumination of the watch dial.
The brand is owned by Indiglo Corporation, which is in turn solely owned by Timex, and the name derives from the word indigo, as the original watches featuring the technology emitted a green-blue light.
History
The Indiglo name was originally developed by Austin Innovations Inc.
Timex introduced the Indiglo technology in 1992 in their Ironman watch line and subsequently expanded its use to 70% of their watch line, including men's and women's watches, sport watches and chronographs. Casio introduced their version of electroluminescent backlight technology in 1995.
From 2006 to 2011, the Timex Group marketed a line of high-end quartz watches under the TX Watch Company brand, using a proprietary six-hand, four-motor, microprocessor-controlled movement. To separate the brand from Timex, the movements had luxury features associated with a higher-end brand, e.g., sapphire crystals and stainless steel or titanium casework — and used hands treated with Super-LumiNova luminescent pigment for low-light legibility — rather than Indiglo technology.
When the Timex Group migrated the microprocessor-controlled, multi-motor, multi-hand technology to its Timex brand in 2012, it created a sub-collection marketed as Intelligent Quartz (IQ). The line employed the same movements and capabilities from the TX brand at a much lower price point, incorporating Indiglo technology rather than the Super-LumiNova pigments.
Design
Indiglo backlights typically emit a distinct greenish-blue color and evenly light the entire display or dial. Certain Indiglo models, e.g., Timex Datalink USB, use a negative liquid-crystal display so that only the digits are illuminated, rather than the entire display. |
https://en.wikipedia.org/wiki/Disc%20Jam | Disc Jam is a video game for the PlayStation 4, Microsoft Windows and Nintendo Switch. Developed and published by High Horse Entertainment, it was one of the PlayStation Plus free games for download for the month of March 2017.
Gameplay
The game combines elements of air hockey and tennis in a manner similar to the game Windjammers. However, while Windjammers is played from an overhead, top-down perspective from the sidelines, Disc Jam is played from a behind-the-character, third-person perspective. The game features 1 vs 1 and 2 vs 2 matches, allowing for up to four-player multiplayer, either locally or online.
Development and release
The game was first announced in June 2016 as part of E3 2016. A demo was available to play in September 2016 at Minecon 2016. The game was developed by High Horse Entertainment, a studio formed by two former Activision employees who had previously worked on entries in the Tony Hawk, Call of Duty, and Guitar Hero series of video games. The game received a limited beta release on February 17, 2017. On March 1, 2017, the game was announced by Sony as one of the monthly free PS Plus games for March 2017, being made available on March 7, making its release date and PS Plus availability the same day.
Reception
Pre-release
The game's beta release was generally well received. Sammy Barker of Push Square felt that the game had the potential to be the next Rocket League in a review of the beta version, due to its smooth online gameplay and high level of player customization, though he conceded it would need more improvement with its control and presentation before it would be ready for mainstream audiences. Many other journalists favorably compared the game to Rocket League as well, noting how both titles aimed to add a sci-fi twist with emphasis on online multiplayer, to a traditional sport.
Post-release
Disc Jam received average reviews, with a score of 71/100 on Metacritic. |
https://en.wikipedia.org/wiki/Bashir%20Rameev | Bashir Iskandarovich Rameev (; formerly "Rameyev" in English; 1 May 1918 – 16 May 1994) was a Soviet inventor and scientist, one of the founders of Soviet computing, author of 23 patents, including the first patent in the field of electronic computers officially registered in the USSR—a patent for the Automatic Electronic Digital Machine (1948). Rameev's inventions paved the way for the development of a new field in Soviet science—electronic computing—and for the formation of a new branch of industry that supported it.
The central ideas incorporated in Rameev's invention of the electronic computer included: storing programs in computer memory, using binary code, utilizing external devices, and deploying electronic circuits and semiconductor diodes. The first publication about similar technology outside of the USSR appeared in 1949–1950. Rameev also suggested that intermediate computation data be automatically printed on punched tape and sent into the computer's arithmetic device for subsequent processing, meaning that the processing of commands would be performed in the computer's arithmetic device; this is usually referred to as the von Neumann architecture.
Of particular note is Rameev's invention of diode-matrix control circuits, which were used to build his first brainchild, the first serially manufactured Soviet mainframe "Strela" (1954). In the 1950s, diode-matrix control circuits were not widespread due to their significant dimensions and high power consumption. However, with the subsequent development of microelectronics and the emergence of large-scale integrated circuits, which made it possible to deploy tens or hundreds of thousands of diodes and transistors in a single piece of silicon, the concept of control circuits became viable and commonly used.
"Strela" computers carried out calculations in nuclear physics, rocketry and space research. Notably, one of the "Strela" computers was used to calculate Sputnik's orbit trajectory. For the development of "Strela" Rameev |
https://en.wikipedia.org/wiki/SK7MQ | SK7MQ, also known as SK7RNQ B, is a D-STAR gateway-enabled ham radio repeater located in Glumslöv, Landskrona Municipality, Sweden. It transmits and receives on the UHF radio band.
It is currently owned and maintained by its founder, Repeatergruppen SK7MQ, part of "Österlens sändaramatörer".
History
SK7MQ, also known as SK7RNQ B, is a UHF repeater complementary to the SK7RNQ C repeater, which transmits on the 2 m radio band.
SK7MQ was first launched in July 2010, and on 25 September 2010 the gateway was connected to the US Trust Server network,
making it possible to establish worldwide connections with other D-STAR gateway-enabled repeaters through TCP/IP.
The first transatlantic D-STAR QSO on SK7MQ was made between SM7IOE and KC5NID on 27 September 2010 via Reflector 001-C, also known as the "D-STAR Mega Repeater".
As of 2011, Repeatergruppen SK7RNQ is the fastest deploying D-STAR group within Sweden, currently owning and maintaining two RP2C controlled D-STAR repeaters and one experimental GMSK HotSpot.
In the middle of March 2011, the first 23 cm D-STAR repeater in northern Europe will be installed in Malmö Municipality, Sweden by Repeatergruppen SK7MQ.
The repeater will act as a pure DD (Digital Data) repeater, and will at first be used experimentally by SA7AXO and SM7IOE with the ICOM-ID 23 cm transceiver.
In addition to the 23 cm repeater, a 70 cm D-STAR repeater will be installed at the same QTH with the aim to provide portable D-STAR users full coverage within Malmö.
Technical information
The SK7MQ repeater transmits on 434.47500 MHz (in the 70-centimeter band) with a negative offset of 2 MHz. The antenna is positioned 120 meters above sea level.
The gateway hardware for the D-STAR transmitting system is the ID-RP2C connector unit in combination with the ID-RP4000V UHF DV Exciter.
At first the repeater produced a measured output power of 25 W, but a recently added PA (power amplifier) increased the output to 90 W, imp |
https://en.wikipedia.org/wiki/Sarcalumenin | Sarcalumenin is a protein that in humans is encoded by the SRL gene.
Sarcalumenin is a calcium-binding protein that can be found in the sarcoplasmic reticulum of striated muscle. Sarcalumenin is partially responsible for calcium buffering in the lumen of the sarcoplasmic reticulum and assists calcium pump proteins. Additionally, sarcalumenin is necessary for keeping a normal sinus rhythm during both aerobic and anaerobic exercise activity. Sarcalumenin is a calcium-binding glycoprotein composed of 473 acidic amino acids with a molecular weight of 160 kDa. Together with other luminal calcium buffer proteins, sarcalumenin plays an important role in the regulation of calcium uptake and release during excitation-contraction coupling (ECC) in muscle fibers. |
https://en.wikipedia.org/wiki/Dog%20behavior | Dog behavior is the internally coordinated responses of individuals or groups of domestic dogs to internal and external stimuli. It has been shaped by millennia of contact with humans and their lifestyles. As a result of this physical and social evolution, dogs have acquired the ability to understand and communicate with humans. Behavioral scientists have uncovered a wide range of social-cognitive abilities in domestic dogs.
Co-evolution with humans
The origin of the domestic dog (Canis familiaris) is not clear. Whole-genome sequencing indicates that the dog, the gray wolf and the extinct Taymyr wolf diverged around the same time, 27,000–40,000 years ago. How dogs became domesticated is not clear; however, the two main hypotheses are self-domestication and human domestication. There exists evidence of human-canine behavioral coevolution.
Intelligence
Dog intelligence is the ability of the dog to perceive information and retain it as knowledge in order to solve problems. Dogs have been shown to learn by inference. A study with Rico showed that he knew the labels of over 200 different items. He inferred the names of novel items by exclusion learning and correctly retrieved those novel items immediately. He also retained this ability four weeks after the initial exposure. Dogs have advanced memory skills. A study documented the learning and memory capabilities of a border collie, "Chaser", who had learned the names of over 1,000 objects and could retrieve them by verbal command. Dogs are able to read and react appropriately to human body language such as gesturing and pointing, and to understand human voice commands. After undergoing training to solve a simple manipulation task, dogs that are faced with an unsolvable version of the same problem look at the human, while socialized wolves do not. Dogs demonstrate a theory of mind by engaging in deception.
Senses
The dog's senses include vision, hearing, sense of smell, taste, touch, proprioception, and sensitivity to the Ear |
https://en.wikipedia.org/wiki/Anchipteraspididae | Anchipteraspididae is an extinct family of heterostracan vertebrates restricted to Late Silurian and Early Devonian strata of Arctic Canada.
Anchipteraspidids superficially resemble the ancestral cyathaspidids, but the articulation and growth patterns of the plates clearly define them as pteraspidids. |
https://en.wikipedia.org/wiki/Structured%20kNN | Structured k-Nearest Neighbours is a machine learning algorithm that generalizes the k-Nearest Neighbors (kNN) classifier.
Whereas the kNN classifier supports binary classification, multiclass classification and regression, the Structured kNN (SkNN) allows training of a classifier for general structured output labels.
As an example, a sample instance might be a natural language sentence, and the output label an annotated parse tree. Training a classifier consists of showing it pairs of samples and their correct output labels. After training, the structured kNN model allows one to predict the corresponding output label for new sample instances; that is, given a natural language sentence, the classifier can produce the most likely parse tree.
Training
As a training set, SkNN accepts sequences of elements with defined class labels. The type of the elements does not matter; the only condition is the existence of a metric function that defines a distance between each pair of elements of a set.
SkNN is based on the idea of creating a graph in which each node represents a class label. There is an edge between a pair of nodes iff there is a sequence of two consecutive elements in the training set with the corresponding classes. Thus the first step of SkNN training is the construction of the described graph from the training sequences. There are two special nodes in the graph, corresponding to the beginning and the end of sentences. If a sequence starts with class `C`, an edge between node `START` and node `C` is created.
As with regular kNN, the second part of SkNN training consists only of storing the elements of the training sequences in a special way: each element of a training sequence is stored in the node related to the class of the previous element in the sequence. Every first element is stored in node `START`.
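A minimal sketch of this training step, assuming each training sequence is a list of (element, label) pairs; the function name and the `END` bookkeeping are illustrative, not taken from a reference implementation:

```python
from collections import defaultdict

def train_sknn(sequences):
    """Build the SkNN graph from training sequences.

    An edge (X, Y) exists iff some sequence contains an element of
    class Y immediately after an element of class X.  Each element is
    stored in the node of the *previous* element's class; first
    elements go into the special START node."""
    edges = set()
    store = defaultdict(list)  # node (class label) -> elements stored there
    for seq in sequences:
        prev = "START"
        for element, label in seq:
            edges.add((prev, label))
            store[prev].append((element, label))
            prev = label
        edges.add((prev, "END"))  # special node marking sequence end
    return edges, store
```

At inference time, labelling then amounts to a cheapest-path search from `START`, scoring each candidate transition by the distance from the input element to its nearest stored neighbours in the corresponding node.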
Inference
Labelling of input sequences in SkNN consists in finding the sequence of transitions in the graph, starting from node `START`, that minimises the overall cost of the path. Each transition corresponds to a single |
https://en.wikipedia.org/wiki/Photinia%20%C3%97%20fraseri | Photinia × fraseri, known as red tip photinia and Christmas berry, is a nothospecies in the rose family, Rosaceae. It is a hybrid between Photinia glabra and Photinia serratifolia.
Description
It is a compact shrub with an erect habit that can grow into a medium-sized tree. Its evergreen, oval leaves are dark green but crimson red when young, especially in early spring. Its flowers are small, with five petals, united in large white inflorescences. They bloom at the end of spring. It can reach a height of 5 meters and a diameter of 5 meters. It is frost resistant and can withstand temperatures down to −5 to −10 °C.
Cultivation
The shrub tolerates moderate shade and it grows in well drained soils. It should be sheltered from the cold and dry winds of winter. It can be propagated by semi-woody cuttings in summer.
Cultivars
The hybrid has a number of cultivars, which include (those marked have won the Royal Horticultural Society's Award of Garden Merit):
Photinia × fraseri 'Camilvy'
Photinia × fraseri 'Curly Fantasy'
Photinia × fraseri 'Little Red Robin', a plant similar to 'Red Robin', but dwarf in stature with an ultimate height/spread of around 2–3 ft
Photinia × fraseri 'Pink Marble' or 'Cassini', a newer cultivar with rose-pink tinted new growth and a creamy-white variegated margin on the leaves
Photinia × fraseri 'Red Robin' - probably the most widely planted of all
Photinia × fraseri 'Robusta'
Photinia × fraseri 'Super Hedger' - a newer hybrid with strong upright growth |
https://en.wikipedia.org/wiki/Ugly%20duckling%20theorem | The ugly duckling theorem is an argument showing that classification is not really possible without some sort of bias. More particularly, it assumes finitely many properties combinable by logical connectives, and finitely many objects; it asserts that any two different objects share the same number of (extensional) properties. The theorem is named after Hans Christian Andersen's 1843 story "The Ugly Duckling", because it shows that a duckling is just as similar to a swan as two swans are to each other. It was derived by Satosi Watanabe in 1969.
Mathematical formula
Suppose there are n things in the universe, and one wants to put them into classes or categories. One has no preconceived ideas or biases about what sorts of categories are "natural" or "normal" and what are not. So one has to consider all the possible classes that could be, all the possible ways of making a set out of the n objects. There are 2^n such ways, the size of the power set of n objects. One can use that to measure the similarity between two objects, and one would see how many sets they have in common. However, one cannot. Any two objects have exactly the same number of classes in common if we can form any possible class, namely 2^(n−1) (half the total number of classes there are). To see this is so, one may imagine each class is represented by an n-bit string (or binary encoded integer), with a zero for each element not in the class and a one for each element in the class. As one finds, there are 2^n such strings.
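The counting argument can be checked by brute force for small n; this sketch enumerates the power set and verifies that any two distinct objects agree (both members, or both non-members) on exactly half of all 2^n classes:

```python
from itertools import chain, combinations

def powerset(items):
    """All subsets of items, from the empty set to the full set."""
    s = list(items)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

n = 5
classes = [set(c) for c in powerset(range(n))]
assert len(classes) == 2 ** n  # size of the power set

# Any two distinct objects agree on exactly 2^(n-1) classes,
# i.e. half of all possible classes.
for x, y in combinations(range(n), 2):
    agreements = sum((x in c) == (y in c) for c in classes)
    assert agreements == 2 ** (n - 1)
```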
As all possible choices of zeros and ones are there, any two bit-positions will agree exactly half the time. One may pick two elements and reorder the bits so they are the first two, and imagine the numbers sorted lexicographically. The first 2^(n−1) numbers will have bit #1 set to zero, and the second 2^(n−1) will have it set to one. Within each of those blocks, the top 2^(n−2) will have bit #2 set to zero and the other 2^(n−2) will have it as one, so they agree on two blocks of 2^(n−2), or on half of all the cases, no matter
https://en.wikipedia.org/wiki/Sydney%20Brenner | Sydney Brenner (13 January 1927 – 5 April 2019) was a South African biologist. In 2002, he shared the Nobel Prize in Physiology or Medicine with H. Robert Horvitz and Sir John E. Sulston. Brenner made significant contributions to work on the genetic code, and other areas of molecular biology while working in the Medical Research Council (MRC) Laboratory of Molecular Biology in Cambridge, England. He established the roundworm Caenorhabditis elegans as a model organism for the investigation of developmental biology, and founded the Molecular Sciences Institute in Berkeley, California, United States.
Education and early life
Brenner was born in the town of Germiston in the then Transvaal (today in Gauteng), South Africa, on 13 January 1927. His parents, Leah (née Blecher) and Morris Brenner, were Jewish immigrants. His father, a cobbler, came to South Africa from Lithuania in 1910, and his mother from Riga, Latvia, in 1922. He had one sister, Phyllis.
He was educated at Germiston High School and the University of the Witwatersrand. Having joined university at the age of 15, it was noted during his second year that he would be too young to qualify for the practice of medicine at the conclusion of his six-year medical course, and he was therefore allowed to complete a Bachelor of Science degree in Anatomy and Physiology. During this time he was taught physical chemistry by Joel Mandelstam, microscopy by Alfred Oettle and neurology with Harold Daitz. He also received an introduction to anthropology and paleontology with Raymond Dart and Robert Broom. The histologist and director of research in the Anatomy Department, Joseph Gillman, convinced Brenner to continue towards an honours degree and beyond towards an MSc. Brenner accepted, though this would mean he would not graduate from medical school and his bursary would be discontinued. He supported himself during this time by working as a laboratory technician. It was during this time, in 1945, that Brenner would publish h |
https://en.wikipedia.org/wiki/Zsigmondy%27s%20theorem | In number theory, Zsigmondy's theorem, named after Karl Zsigmondy, states that if a > b > 0 are coprime integers, then for any integer n ≥ 1, there is a prime number p (called a primitive prime divisor) that divides a^n − b^n and does not divide a^k − b^k for any positive integer k < n, with the following exceptions:
n = 1, a − b = 1; then a^1 − b^1 = 1, which has no prime divisors
n = 2, a + b a power of two; then any odd prime factors of a^2 − b^2 = (a + b)(a^1 − b^1) must be contained in a^1 − b^1, which is also even
n = 6, a = 2, b = 1; then a^6 − b^6 = 63 = 3^2 × 7 = (a^2 − b^2)^2 (a^3 − b^3)
This generalizes Bang's theorem, which states that if n > 1 and n is not equal to 6, then 2^n − 1 has a prime divisor not dividing any 2^k − 1 with k < n.
Similarly, a^n + b^n has at least one primitive prime divisor, with the exception 2^3 + 1^3 = 9.
Zsigmondy's theorem is often useful, especially in group theory, where it is used to prove that various groups have distinct orders except when they are known to be the same.
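The statement and its exceptions can be checked by brute force for small values; this illustrative sketch factors a^n − b^n by trial division and tests for a primitive prime divisor:

```python
def prime_factors(m):
    """Set of prime factors of m, by trial division."""
    factors, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            factors.add(d)
            m //= d
        d += 1
    if m > 1:
        factors.add(m)
    return factors

def has_primitive_divisor(a, b, n):
    """True iff some prime divides a**n - b**n but no a**k - b**k, k < n."""
    earlier = set()
    for k in range(1, n):
        earlier |= prime_factors(a ** k - b ** k)
    return any(p not in earlier for p in prime_factors(a ** n - b ** n))

# The three exceptional cases from the statement:
assert not has_primitive_divisor(2, 1, 1)  # a - b = 1, n = 1: no prime divisors
assert not has_primitive_divisor(3, 1, 2)  # a + b = 4 is a power of two
assert not has_primitive_divisor(2, 1, 6)  # 63 = 3^2 * 7, both primes seen earlier
assert has_primitive_divisor(2, 1, 5)      # 31 is primitive for 2^5 - 1
```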
History
The theorem was discovered by Zsigmondy working in Vienna from 1894 until 1925.
Generalizations
Let a_1, a_2, a_3, … be a sequence of nonzero integers.
The Zsigmondy set associated to the sequence is the set Z(a) = {n : a_n has no primitive prime divisors},
i.e., the set of indices n such that every prime dividing a_n also divides some a_m for some m < n. Thus Zsigmondy's theorem implies that the Zsigmondy set of the sequence a^n − b^n is contained in {1, 2, 6}, and Carmichael's theorem says that the Zsigmondy set of the Fibonacci sequence is {1, 2, 6, 12}, and that of the Pell sequence is {1}. In 2001 Bilu, Hanrot, and Voutier
proved that in general, if (a_n) is a Lucas sequence or a Lehmer sequence, then its Zsigmondy set contains only n ≤ 30; there are only 13 such n, namely 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 13, 18, 30.
Lucas and Lehmer sequences are examples of divisibility sequences.
It is also known that if (a_n) is an elliptic divisibility sequence, then its Zsigmondy set is finite. However, the result is ineffective in the sense that the proof does not give an explicit upper bound for the largest element in the Zsigmondy set, although it is possible to give an effective upper bound for the number of its elements.
See also
Carmichael's theorem |
https://en.wikipedia.org/wiki/ThorCon%20nuclear%20reactor | The ThorCon nuclear reactor is a design of a molten salt reactor with a graphite moderator, proposed by the US-based ThorCon company. These nuclear reactors are designed as part of a floating power plant, to be manufactured on an assembly line in a shipyard, and to be delivered via barge to any ocean or major waterway shoreline. The reactors are to be delivered as a sealed unit and never opened on site. All reactor maintenance and fuel processing is done at an off-site location. As of 2022, no reactor of this type has been built. A proposal to build a prototype in Indonesia has been submitted to the IAEA.
Design
ThorCon has proposed a power station closely based on the Molten-Salt Reactor Experiment in the 1960s, claiming that its design requires no new technology. The power station would contain two 250 MWe small modular reactors. The replaceable reactors are to be removed and replaced every four years. As molten salt reactors, they are designed for the use of fuel in liquid form, which also serves as primary coolant. The fuel would be about 20% enriched uranium tetrafluoride and thorium tetrafluoride. The ThorCon design is a floating power station to be built in a shipyard and then towed to the location of operation.
Safety
ThorCon claims that this reactor design will be safer than traditional nuclear reactors. The design includes several features intended to prevent meltdowns, contain radioactive materials, and protect against terrorism and sabotage.
Reviews
A 2017 study by the Energy Innovation Reform Project looked at the ThorCon and concluded that "if power plants featuring these technologies are able to produce electricity at the average LCOE price projected here (much less the low-end estimate), it would have a significant impact on electricity markets."
Criticism
The Union of Concerned Scientists has expressed concerns about the liquid-fueled MSR design, citing issues with safety, environmental impacts, and nuclear proliferation.
See also
Thori |
https://en.wikipedia.org/wiki/Tethered%20particle%20motion | Tethered particle motion (TPM) is a biophysical method that is used for studying various polymers such as DNA and their interaction with other entities such as proteins.
The method allows observers to measure various physical properties of the substances, as well as the properties of their biochemical interactions with other substances such as proteins and enzymes.
TPM is a single molecule experiment method.
History
TPM was first introduced by Schafer, Gelles, Sheetz and Landick in 1991. In their research, they attached RNA polymerase to a surface, and gold beads were attached to one end of the DNA molecules. At the beginning, the RNA polymerase "captures" the DNA near the gold bead. During transcription, the DNA "slides" through the RNA polymerase, so the distance between the RNA polymerase and the gold bead (the tether length) increases. Using an optical microscope, the area in which the bead moves was detected, and the transcription rate was extracted from the data.
Since then, many TPM experiments have been performed, and the method has been improved in many ways, such as bead types, biochemistry techniques, imaging (faster cameras, different microscopy methods, etc.), data analysis, and combination with other single-molecule techniques (e.g. optical or magnetic tweezers).
Principle of the method
One end of a polymer is attached to a small bead (tens to hundreds of nanometers), while the other end is attached to a surface.
Both the polymer and the bead are in an aqueous environment, so the bead undergoes Brownian motion. Because of the tether, the motion is restricted. Using an optical microscope and a CCD camera, one can track the bead position as a time series. Although the bead is usually smaller than the diffraction limit, so its image is a spot larger than the bead itself (the point spread function), the center of the spot represents the projection onto the X-Y plane of the end of the polymer (the end-to-end vector). Analyzing the distribution of the bead position ca |
https://en.wikipedia.org/wiki/DNA-directed%20RNA%20interference |
DNA-directed RNA interference (ddRNAi) is a gene-silencing technique that utilizes DNA constructs to activate an animal cell's endogenous RNA interference (RNAi) pathways. DNA constructs are designed to express self-complementary double-stranded RNAs, typically short-hairpin RNAs (shRNAs), that bring about the silencing of a target gene or genes once processed. Any RNA, including endogenous messenger RNAs (mRNAs) or viral RNAs, can be silenced by designing constructs to express double-stranded RNA complementary to the desired mRNA target.
This mechanism has been demonstrated to work as a novel therapeutic technique to silence disease-causing genes across a range of disease models, including viral diseases such as HIV, hepatitis B or hepatitis C, or diseases associated with altered expression of endogenous genes such as drug-resistant lung cancer, neuropathic pain, advanced cancer, and retinitis pigmentosa.
ddRNAi mechanism
Unlike small interfering RNA (siRNA) therapeutics that turn over within a cell and consequently only silence genes transiently, DNA constructs are continually transcribed, replenishing the cellular ‘dose’ of shRNA, thereby enabling long-term silencing of targeted genes. The ddRNAi mechanism, therefore, offers the potential for ongoing clinical benefit with reduced medical intervention.
Organization of ddRNAi constructs
Figure 2 illustrates the most common type of ddRNAi DNA construct, designed to express an shRNA. The construct consists of a promoter sequence driving expression of sense and antisense sequences separated by a loop sequence, followed by a transcriptional terminator. The antisense species processed from the shRNA can bind to the target RNA and specify its degradation. shRNA constructs typically encode sense and antisense sequences of 20–30 nucleotides. Flexibility in construct design is possible: for example, the positions of the sense and antisense sequences can be reversed, and other modifications and additions can alter |
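The construct organization described above can be sketched schematically; the promoter placeholder, loop, and terminator strings below are illustrative assumptions, not validated biological sequences:

```python
def reverse_complement(seq):
    """Reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

def shrna_cassette(promoter, sense, loop, terminator):
    """Assemble promoter + sense + loop + antisense + terminator.

    The antisense strand is the reverse complement of the sense strand,
    so the transcribed RNA folds back on itself into a short hairpin."""
    return promoter + sense + loop + reverse_complement(sense) + terminator

# Hypothetical 21-nt sense sequence matching an mRNA target
# (illustrative only; not a real therapeutic sequence).
sense = "GCTACGATCGATCGTACGATC"
cassette = shrna_cassette("[promoter]", sense, "TTCAAGAGA", "TTTTT")
```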
https://en.wikipedia.org/wiki/Eccentricity%20vector | In celestial mechanics, the eccentricity vector of a Kepler orbit is the dimensionless vector with direction pointing from apoapsis to periapsis and with magnitude equal to the orbit's scalar eccentricity. For Kepler orbits the eccentricity vector is a constant of motion. Its main use is in the analysis of almost circular orbits, as perturbing (non-Keplerian) forces on an actual orbit will cause the osculating eccentricity vector to change continuously as opposed to the eccentricity and argument of periapsis parameters for which eccentricity zero (circular orbit) corresponds to a singularity.
Calculation
The eccentricity vector is:
e = (v × h)/μ − r/|r| = (|v|²/μ − 1/|r|) r − ((r · v)/μ) v
which follows immediately from the vector identity:
v × (r × v) = (v · v) r − (r · v) v
where:
r is the position vector
v is the velocity vector
h is the specific angular momentum vector (equal to r × v)
μ is the standard gravitational parameter
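A small numerical sketch of the formula above, using plain Python tuples for vectors; for a circular orbit (r and v perpendicular, with v² = μ/|r|) the eccentricity vector should vanish:

```python
def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def eccentricity_vector(r, v, mu):
    """e = (v x h)/mu - r/|r|, with h = r x v the specific angular momentum."""
    h = cross(r, v)
    vxh = cross(v, h)
    r_norm = sum(c * c for c in r) ** 0.5
    return tuple(vxh[i] / mu - r[i] / r_norm for i in range(3))

# Circular orbit in canonical units: e should be the zero vector.
e = eccentricity_vector((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 1.0)
```

The magnitude of the returned vector equals the scalar eccentricity, so the same routine also characterizes elliptical orbits.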
See also
Kepler orbit
Orbit
Eccentricity
Laplace–Runge–Lenz vector |
https://en.wikipedia.org/wiki/Product%20requirements%20document | A product requirements document (PRD) is a document containing all the requirements for a certain product.
It is written to allow people to understand what a product should do. A PRD should, however, generally avoid anticipating or defining how the product will do it in order to later allow interface designers and engineers to use their expertise to provide the optimal solution to the requirements.
PRDs are most frequently written for software products, but they can be used for any type of product and also for services.
Typically, a PRD is created from a user's point-of-view by a user/client or a company's marketing department (in the latter case it may also be called a Marketing Requirements Document (MRD)). The requirements are then analyzed by a (potential) maker/supplier from a more technical point of view, broken down and detailed in a Functional Specification (sometimes also called Technical Requirements Document).
Components
Typical components of a product requirements document (PRD) are:
Title & author information
Purpose and scope, from both a technical and business perspective
Stakeholder identification
Market assessment and target demographics
Product overview and use cases
Requirements, including
functional requirements (e.g. what a product should do)
usability requirements
technical requirements (e.g. security, network, platform, integration, client)
environmental requirements
support requirements
interaction requirements (e.g. how the product should work with other systems)
Assumptions
Constraints
Dependencies
High level workflow plans, timelines and milestones (more detail is defined through a project plan)
Evaluation plan and performance metrics
Not all PRDs have all of these components. In particular, PRDs for other types of products (manufactured goods, etc.) will eliminate the software-specific elements from the list above, and may add in additional elements that pertain to their domain, e.g. manufacturing requirements.
See |
https://en.wikipedia.org/wiki/Functional%20movement | Functional movements are movements based on real-world situational biomechanics. They usually involve multi-planar, multi-joint movements which place demand on the body's core musculature and innervation.
Functional vs other movements
Sports-specific
Sports-specific movements, such as a tennis swing or bowling a cricket ball, are based on sports-specific situations. While there is some cross-over application from sports-specific movements (such as running), they are usually so specific that they exceed functional movements in complexity. Yet both sports and functional movements are dependent on the body's core.
Muscle-specific
Traditional weight-lifting depends on muscle-specific program-design with the goal of muscle-specific hypertrophy. For example, a concentration biceps curl attempts to isolate the biceps brachii, although by gripping the weight one also engages the wrist flexors. These exercises tend to be the most far-removed from functional movement, due to their attempt to micromanage the variables acting on the individual muscles. Functional exercises, on the other hand, attempt to incorporate as many variables as possible (balance, multiple joints, multiple planes of movement), thus decreasing the load on the muscle but increasing the complexity of motor coordination and flexibility.
Biomechanics
Functional movement usually involves gross motor movement involving the core, which refers to the muscles of the abdomen and spine, such as segmental stabilizers.
See also
Functional Movement Systems
Biomechanics
Core (anatomy)
Functional training
Functional Movement Screen
Erwan Le Corre, trainer in a form of functional movement known as "MovNat"
External links
Lisa Mercer Fitness, "Functional Sports Conditioning: Bridging the Gap Between Fitness and Athleticism."
Biomechanics
http://www.ptonthenet.com/articles/the-functional-continuum-3251
Motor control |
https://en.wikipedia.org/wiki/Vision%20mixer | A vision mixer is a device used to select between several different live video sources and, in some cases, compositing live video sources together to create visual effects.
In most of the world, both the equipment and its operator are called a vision mixer or video mixer; however, in the United States, the equipment is called a video switcher, production switcher or video production switcher, and its operator is known as a technical director (TD).
The role of the vision mixer for video is similar to what a mixing console does for audio. Typically a vision mixer would be found in a video production environment such as a production control room of a television studio, production truck or post-production facility.
Capabilities and usage
Besides hard cuts (switching directly between two input signals), mixers can also generate a variety of transitions, from simple dissolves to pattern wipes. Additionally, most vision mixers can perform keying operations (called mattes in this context) and generate color signals. Vision mixers may include digital video effects (DVE) and still store functionality. Most vision mixers are targeted at the professional market, with newer analog models having component video connections and digital ones using serial digital interface (SDI) or SMPTE 2110. They are used in live television, such as outside broadcasting, with video tape recording (VTR) and video servers for linear video editing, even though the use of vision mixers in video editing has been largely supplanted by computer-based non-linear editing systems.
Older professional mixers worked with composite video analog signal inputs. There were a number of consumer video switchers with composite video or S-Video inputs. These are often used for VJing, presentations, and small multi-camera productions.
Operation
The most basic part of a vision mixer is a bus, which is a signal path consisting of multiple video inputs that feeds a single output. On the panel, a bus is represented by a |
https://en.wikipedia.org/wiki/AstroPay | AstroPay is a global wallet that provides users with a way to pay, send, and receive money. AstroPay's app offers online payments, virtual and physical debit cards, peer-to-peer money transfers, and more.
The app offers more than 200 payment methods including Banco do Brasil, Caixa, Bradesco, Boleto, PhonePe, Airtel, Google Pay, Visa and Mastercard.
History
AstroPay launched in the Brazilian market in 2009, and since then, it has expanded its services to other countries in Latin America, Asia, Africa, and the UK.
As of 2023, features of the digital wallet include loyalty programmes, debit cards and crypto offerings.
Brand endorsements, partnerships
AstroPay sponsored Burnley Football Club for the 2018–19 Premier League season. In 2021, the app renewed its sponsorship deal with Burnley FC ahead of the 2021–22 Premier League season and also became the official payment service partner for the club.
In August 2021, the company entered into a partnership with Wolverhampton Wanderers for the 2021–22 Premier League season, and the following year, became the team's shirt sponsor.
In September 2021, the company entered a sponsorship deal with Newcastle United Football Club in the English Premier League. AstroPay made arrangements to ensure that branding and logo would be visible on the pitch-side LED advertising during Premier League matches.
In June 2022, the company renewed its partnership with Wolverhampton Wanderers for the 2022–23 Premier League season and launched its Wolves debit card in February 2023. |
https://en.wikipedia.org/wiki/Ebola | Ebola, also known as Ebola virus disease (EVD) and Ebola hemorrhagic fever (EHF), is a viral hemorrhagic fever in humans and other primates, caused by ebolaviruses. Symptoms typically start anywhere between two days and three weeks after infection. The first symptoms are usually fever, sore throat, muscle pain, and headaches. These are usually followed by vomiting, diarrhoea, rash and decreased liver and kidney function, at which point some people begin to bleed both internally and externally. It kills between 25% and 90% of those infected – about 50% on average. Death is often due to shock from fluid loss, and typically occurs between six and 16 days after the first symptoms appear. Early treatment of symptoms increases the survival rate considerably compared to late start. An Ebola vaccine was approved by the US FDA in December 2019.
The virus spreads through direct contact with body fluids, such as blood from infected humans or other animals, or from contact with items that have recently been contaminated with infected body fluids. There have been no documented cases, either in nature or under laboratory conditions, of spread through the air between humans or other primates. After recovering from Ebola, semen or breast milk may continue to carry the virus for anywhere between several weeks to several months. Fruit bats are believed to be the normal carrier in nature; they are able to spread the virus without being affected by it. The symptoms of Ebola may resemble those of several other diseases, including malaria, cholera, typhoid fever, meningitis and other viral hemorrhagic fevers. Diagnosis is confirmed by testing blood samples for the presence of viral RNA, viral antibodies or the virus itself.
Control of outbreaks requires coordinated medical services and community engagement, including rapid detection, contact tracing of those exposed, quick access to laboratory services, care for those infected, and proper disposal of the dead through cremation or buria |
https://en.wikipedia.org/wiki/ISO%205426 | ISO 5426 ("Extension of the Latin alphabet coded character set for bibliographic information interchange") is a character set developed by ISO, similar to ISO/IEC 6937. It was first published in 1983.
Character set
ISO 5426-2
ISO 5426-2 ("Latin characters used in minor European languages and obsolete typography") is a second part to ISO 5426, published in 1996. It specifies a set of 70 characters, some of which do not exist in Unicode. Michael Everson proposed the missing characters in Unicode 3.0, but some were postponed for further study. Later, new evidence was found, and more was encoded. P with belt is probably an error for P with flourish.
� Not in Unicode |
https://en.wikipedia.org/wiki/OGSM | Objectives, goals, strategies and measures (OGSM) is a goal setting and action plan framework used in strategic planning. It is used by organizations, departments, teams and sometimes program managers to define and track measurable goals and actions to achieve an objective. Documenting goals, strategies and actions on one page gives insights that can be missing with other frameworks. It defines the measures that will be followed to ensure that goals are met and helps groups work together toward common objectives, across functions, geographical distance and throughout the organization. OGSM's origins can be traced back to Japan in the 1950s, stemming from the process and strategy work developed during the Occupation of Japan in the post-World War II period. It has since been adopted by many Fortune 500 companies. In particular, Procter & Gamble uses the process to align the direction of their multinational corporation around the globe.
Purpose
The OGSM framework forms the basis for strategic planning and execution, as well as a strong management routine that keeps the plan part of day-to-day operations. It aligns the leaders to the objective of the company, links key strategies to the financial goals, and brings visibility and accountability to the work of improving the capabilities of the company. Due to the concise format (usually one page) and simple color-coding to signal progress, OGSM allows for quick management by exception of any underperforming activity or underperforming (key) performance indicator. And finally, it is simple, robust and developed as a team.
OGSM is designed to identify strategic priorities, capture market opportunities, optimize resources, enhance speed and execution, and align team members.
History
Although research indicates that this method was developed by Procter & Gamble (Kingham and Tucker) and is commonly used by many consultancies, the verifiable origins of OGSM are unclear.
Brought from Japan to corporate America |
https://en.wikipedia.org/wiki/Maritime%20hydraulics%20in%20antiquity | Faced with the problem of supplying water to a very large population, the Romans developed hydraulic systems for multiple applications: public water supply, power using water mills, hydraulic mining, fish tanks, irrigation, fire fighting, and of course aqueducts (Stein 2004). Scientists such as Ctesibius and Archimedes produced the first inventions involving hydraulic power (Oleson 1984).
Specific technology
The most prevalent hydraulic pump used in maritime situations in ancient Rome was the bilge pump, which functioned to siphon collected water out of a ship's hull (Oleson 1984). The bilge pump was an improvement on the first hydraulic pumps used in antiquity: force pumps. Invented around the early 3rd century BCE, the most primitive design of a force pump consisted of a piston pushing water out of a tube, constructed by soldering individual bronze elements (Stein 246).
Written accounts from Philo of Byzantium (late 3rd century BCE), Vitruvius (c. 25 BCE), and Hero of Alexandria (c. CE 50) confirm archaeological evidence for the original simple design, and both Hero and Vitruvius specify that the pump was typically made of bronze (Stein 2004). Somewhere along the way a new design arose that involved cutting apertures into an individual wood block and then rendering the block pressure-proof with tightly fitting plugs and plates (Stein 2004).
Although cheaper and easier to manufacture, assemble, and repair, the wooden pumps manufactured from then on would most likely not have been durable enough for maritime use, which is why lead pipes are usually found associated with bilge pumps, bilge wells, and hydraulic devices of other functions aboard ships. The amphora shipwreck discovered at Grado in Gorizia, Italy, which dates to around 200 CE, contained what archaeologists hypothesized to be a ‘hydraulics’ system for changing the water in live fish tanks, since other evidence indicates the ship's involvement in the processed fish trade (Beltrame and Gaddi 2005).
This hypothesis h |
https://en.wikipedia.org/wiki/Monad%20%28philosophy%29 | The term monad () is used in some cosmic philosophy and cosmogony to refer to a most basic or original substance. As originally conceived by the Pythagoreans, the Monad is the Supreme Being, divinity or the totality of all things. According to some philosophers of the early modern period, most notably Gottfried Wilhelm Leibniz, there are infinite monads, which are the basic and immaterial elementary particles, or simplest units, that make up the universe.
Historical background
According to Hippolytus, the worldview was inspired by the Pythagoreans, who called the first thing that came into existence the "monad", which begat (bore) the dyad (from the Greek word for two), which begat the numbers, which begat the point, begetting lines or finiteness, etc. It meant divinity, the first being, or the totality of all beings, referring in cosmogony (creation theories) variously to source acting alone and/or an indivisible origin and equivalent comparators.
Pythagorean and Neoplatonic philosophers like Plotinus and Porphyry condemned Gnosticism (see Neoplatonism and Gnosticism) for its treatment of the monad.
In his Latin treatise, Alan of Lille affirms "God is an intelligible sphere, whose center is everywhere and whose circumference is nowhere." The French philosopher Rabelais ascribed this proposition to Hermes Trismegistus.
The symbolism is a free exegesis related to the Christian Trinity. Alan of Lille mentions Trismegistus' Book of the Twenty-Four Philosophers, where it says a Monad can uniquely beget another Monad, in which many followers of this religion saw the coming into being of God the Son from God the Father, whether by way of generation or by way of creation. This statement is also shared by the pagan author of the Asclepius, which sometimes has been identified with Trismegistus.
The Book of the Twenty-Four Philosophers completes the scheme adding that the ardor of the second Monad to the first Monad would be the Holy Ghost. It closes a physical circle in a logi |
https://en.wikipedia.org/wiki/Parapaleomerus | Parapaleomerus is a genus of small strabopids found in the Chengjiang biota, China. It contains one species, P. sinensis. Unlike the other members of Strabopida, Parapaleomerus lacks dorsal eyes and possesses only ten trunk tergites. The telson has been described as trapezoidal. All the trunk tergites are straight and increasingly curve backwards abaxially from T4–10. Specimens of Parapaleomerus sinensis are typically dorsoventrally compressed. The exoskeleton of P. sinensis has a semi-elliptical head shield that lacks any indication of the presence of dorsal eyes. The largest specimen described is recorded as 9.2 cm long, with a maximum width of 9 cm. |
https://en.wikipedia.org/wiki/Stefan%27s%20formula | In thermodynamics, Stefan's formula says that the specific surface energy σ at a given interface is determined by the respective enthalpy difference ΔH:
σ = β ΔH / (N_A^(1/3) V_m^(2/3))
where σ is the specific surface energy, N_A is the Avogadro constant, β is a steric dimensionless coefficient, and V_m is the molar volume. |
https://en.wikipedia.org/wiki/Equivalents%20of%20the%20Axiom%20of%20Choice | Equivalents of the Axiom of Choice is a book in mathematics, collecting statements in mathematics that are true if and only if the axiom of choice holds. It was written by Herman Rubin and Jean E. Rubin, and published in 1963 by North-Holland as volume 34 of their Studies in Logic and the Foundations of Mathematics series. An updated edition, Equivalents of the Axiom of Choice, II, was published as volume 116 of the same series in 1985.
Topics
At the time of the book's original publication, it was unknown whether the axiom of choice followed from the other axioms of Zermelo–Fraenkel set theory (ZF), or was independent of them, although it was known to be consistent with them from the work of Kurt Gödel. This book codified the project of classifying theorems of mathematics according to whether the axiom of choice was necessary in their proofs, or whether they could be proven without it. At approximately the same time as the book's publication, Paul Cohen proved that the negation of the axiom of choice is also consistent, implying that the axiom of choice, and all of its equivalent statements in this book, are indeed independent of ZF.
The first edition of the book includes over 150 statements in mathematics that are equivalent to the axiom of choice, including some that are novel to the book. This edition is divided into two parts, the first involving notions expressed using sets and the second involving classes instead of sets. Within the first part, the topics are grouped into statements related to the well-ordering principle, the axiom of choice itself, trichotomy (the ability to compare cardinal numbers), and Zorn's lemma and related maximality principles. This section also includes three more chapters, on statements in abstract algebra, statements for cardinal numbers, and a final collection of miscellaneous statements. The second section has four chapters, on topics parallel to four of the first section's chapters.
The book includes the history of each state |
https://en.wikipedia.org/wiki/Code%20page%201021 | Code page 1021 (CCSID 1021), also known as CP1021 or CH7DEC, is an IBM code page number assigned to the Swiss variant of DEC's National Replacement Character Set (NRCS). The 7-bit character set was introduced for DEC's computer terminal systems, starting with the VT200 series in 1983, but is also used by IBM for their DEC emulation. Similar but not identical to the series of ISO 646 character sets, the character set is a close derivation from ASCII with only twelve code points differing.
Code page layout
See also
National Replacement Character Set (NRCS) |
https://en.wikipedia.org/wiki/Alpine%20steppe | The Alpine-steppe is a high-altitude natural alpine grassland, which is a part of the Montane grasslands and shrublands biome.
Alpine-steppes are unique ecosystems found throughout the world, especially in Asia, where they make up 38.9% of the total Tibetan plateau grassland's area.
Characteristics
Alpine grasslands, like the Alpine-steppe, are characterized by their intense radiation, with direct solar radiation periods averaging 2916 hours annually. The average temperature in this ecosystem is very low. For example, they may experience temperatures around −10 °C in winter, and 10 °C in summer. Winters also tend to be long and cold, and summers are mild and short. This ecosystem also experiences year-long frost, with no reported frost-free season.
The annual rates of precipitation in Alpine-steppes are very low, with mean ranges falling anywhere between 280 and 300 mm. In addition, upwards of 80% of this falls between May and September, making the climate arid or semi-arid and the environment much harsher for plant and livestock life.
Vegetation
Vegetation in the alpine steppe is very vulnerable to climate change. Average air temperature has been increasing by approximately 0.3 degrees Celsius every ten years since the 1960s, three times the global average, indicating the sensitivity of this area. Studies show that the distribution of vegetation has changed dramatically since the Holocene. The Tibetan Plateau is composed of three main regions, based on yearly precipitation levels and types of vegetation: the alpine meadow, the alpine steppe, and the alpine desert-steppe. Since the Holocene, studies of fossil pollen records have shown that the alpine meadow has extended into areas that were previously alpine steppe as precipitation increased during that period. There is a unimodal pattern across precipitation and vegetation rain use efficiency (RUE), with an increasing trend in Alpine-steppe regions. RUE |
https://en.wikipedia.org/wiki/Haefliger%20structure | In mathematics, a Haefliger structure on a topological space is a generalization of a foliation of a manifold, introduced by André Haefliger in 1970. Any foliation on a manifold induces a special kind of Haefliger structure, which uniquely determines the foliation.
Definition
A codimension-q Haefliger structure on a topological space X consists of the following data:
a cover of X by open sets U_α;
a collection of continuous maps f_α : U_α → R^q;
for every x in U_α ∩ U_β, a diffeomorphism ψ_βα^x between open neighbourhoods of f_α(x) and f_β(x) with ψ_βα^x(f_α(x)) = f_β(x);
such that the continuous maps Ψ_βα from U_α ∩ U_β to the sheaf of germs of local diffeomorphisms of R^q, sending x to the germ of ψ_βα^x, satisfy the 1-cocycle condition
Ψ_γα(x) = Ψ_γβ(x) Ψ_βα(x) for x ∈ U_α ∩ U_β ∩ U_γ.
The cocycle is also called a Haefliger cocycle.
More generally, C^k, piecewise linear, analytic, and continuous Haefliger structures are defined by replacing sheaves of germs of smooth diffeomorphisms by the appropriate sheaves.
Examples and constructions
Pullbacks
An advantage of Haefliger structures over foliations is that they are closed under pullbacks. More precisely, given a Haefliger structure on X, defined by a Haefliger cocycle Ψ_βα on an open cover {U_α}, and a continuous map f : Y → X, the pullback Haefliger structure on Y is defined by the open cover {f^(-1)(U_α)} and the cocycle Ψ_βα ∘ f. As particular cases we obtain the following constructions:
Given a Haefliger structure on X and a subspace A ⊆ X, the restriction of the Haefliger structure to A is the pullback Haefliger structure with respect to the inclusion A ↪ X
Given a Haefliger structure on X and another space Y, the product of the Haefliger structure with Y is the pullback Haefliger structure with respect to the projection X × Y → X
Foliations
Recall that a codimension-q foliation on a smooth manifold M can be specified by a covering of M by open sets U_α, together with a submersion s_α from each open set U_α to R^q, such that for each α, β there is a map Ψ_βα from U_α ∩ U_β to local diffeomorphisms of R^q with
s_β(v) = Ψ_βα(u)(s_α(v))
whenever v is close enough to u. The Haefliger cocycle is defined by
ψ_βα(u) = germ of Ψ_βα(u) at s_α(u).
As anticipated, foliations are not closed in general under pullbacks but Haefliger structur |
https://en.wikipedia.org/wiki/Non-virtual%20interface%20pattern | The non-virtual interface pattern (NVI) controls how methods in a base class are overridden. Clients call the public non-virtual methods, which in turn invoke the overridable methods that carry the core functionality. It is a pattern that is strongly related to the template method pattern. The NVI pattern recognizes the benefits of a non-abstract method invoking the subordinate abstract methods. This level of indirection allows for pre- and post-operations relative to the abstract operations, both immediately and in the face of future unforeseen changes. The NVI pattern can be deployed at very little development and runtime cost. Many commercial software frameworks employ the NVI pattern.
Benefits and detriments
A design that adheres to this pattern results in a separation of a class interface into two distinct interfaces:
Client interface: This is the public non-virtual interface
Subclass interface: This is the private interface, which can have any combination of virtual and non-virtual methods.
With such a structure, the fragile base class interface problem is mitigated. The only detriment is that the code is enlarged a little.
C# example
public abstract class Saveable
{
    // The invariant processing for the method is defined in the non-virtual interface.
    // The behaviour so defined is inherited by all derived classes.
    // For example, creating and committing a transaction.
    public void Save()
    {
        Console.WriteLine("Creating transaction");
        CoreSave();
        Console.WriteLine("Committing transaction");
    }

    // The variant processing for the method is defined in the subclass interface.
    // This behaviour can be customised as needed by subclasses.
    // For example the specific implementation of saving data to the database.
    protected abstract void CoreSave();
}

public class Customer : Saveable
{
    public string Name { get; set; }
    public decimal Credit { get; set; }

    protected override void CoreSave()
    {
        Console.WriteLine($"Save |
https://en.wikipedia.org/wiki/Animal%20vaccination | Animal vaccination is the immunisation of a domestic, livestock or wild animal. The practice is connected to veterinary medicine. The first animal vaccine, invented in 1879 by Louis Pasteur, was for chicken cholera. The production of such vaccines encounters issues related to the economic difficulties of individuals, governments and companies. Animal vaccination is regulated less strictly than human vaccination. Vaccines are categorised into conventional and next-generation vaccines. Animal vaccines have been found to be the most cost-effective and sustainable methods of controlling infectious veterinary diseases. In 2017, the veterinary vaccine industry was valued at US$7 billion, and it is predicted to reach US$9 billion in 2024.
History
Animals have been both the receiver and the source of vaccines. Through laboratory testing, the first animal vaccine created was for chicken cholera in 1879 by Louis Pasteur. Pasteur also invented an anthrax vaccine for sheep and cattle in 1881, and the rabies vaccine in 1884. Monkeys and rabbits were used to grow and attenuate the rabies virus. Starting in 1881, dried spinal cord material from infected rabbits was given to dogs to inoculate them against rabies. The infected nerve tissue was dried to weaken the virus. Subsequently, in 1885, the vaccine was given to Joseph Meister, a 9-year-old boy infected with rabies, who survived when no one before had. The French National Academy of Medicine and the world saw this feat as a breakthrough, and thus many scientists started to collaborate and further Pasteur's work.
An indirect view of animal vaccinations is seen through smallpox. This is because the vaccine given to humans was animal based. Smallpox was a deadly disease most known for its rash and high death rate of 30% if contracted.
Edward Jenner tested his theory in 1796, that if a human had already been infected with cowpox that they would be protected from smallpox. It proved to be t |
https://en.wikipedia.org/wiki/Cryptographic%20log%20on | Cryptographic log-on (CLO) is a process that uses Common Access Cards (CAC) and embedded Public Key Infrastructure (PKI) certificates to authenticate a user's identity to a workstation and network. It replaces usernames and passwords for identifying and authenticating users. To log on cryptographically to a CLO-enabled workstation, users simply insert their CAC into their workstation’s CAC reader and provide their Personal Identification Number (PIN).
The Navy/Marine Corps Intranet, among many other secure networks, uses CLO. |
https://en.wikipedia.org/wiki/Vibration%20of%20plates | The vibration of plates is a special case of the more general problem of mechanical vibrations. The equations governing the motion of plates are simpler than those for general three-dimensional objects because one of the dimensions of a plate is much smaller than the other two. This permits a two-dimensional plate theory to give an excellent approximation to the actual three-dimensional motion of a plate-like object.
There are several theories that have been developed to describe the motion of plates. The most commonly used are the Kirchhoff-Love theory and the Uflyand-Mindlin theory. The latter is discussed in detail by Elishakoff. Solutions to the governing equations predicted by these theories can give insight into the behavior of plate-like objects under both free and forced conditions, including the propagation of waves and the study of standing waves and vibration modes in plates. The topic of plate vibrations is treated in books by Leissa, Gontkevich, Rao, Soedel, Yu, Gorman and Rao.
Kirchhoff-Love plates
The governing equations for the dynamics of a Kirchhoff-Love plate are
where u_α are the in-plane displacements of the mid-surface of the plate, w is the transverse (out-of-plane) displacement of the mid-surface of the plate, q is an applied transverse load pointing towards x₃ (upwards), and the resultant forces and moments are defined as
Note that the thickness of the plate is 2h and that the resultants are defined as weighted averages of the in-plane stresses σ_αβ. The derivatives in the governing equations are defined as
where the Latin indices go from 1 to 3 while the Greek indices go from 1 to 2. Summation over repeated indices is implied. The coordinate x₃ is out-of-plane while the coordinates x₁ and x₂ are in-plane.
For a uniformly thick plate of thickness 2h and homogeneous mass density ρ
Isotropic Kirchhoff–Love plates
For an isotropic and homogeneous plate, the stress-strain relations are
where ε_αβ are the in-plane strains and ν is the Poisson's ratio of |
https://en.wikipedia.org/wiki/Theoretical%20astronomy | Theoretical astronomy is the use of analytical and computational models based on principles from physics and chemistry to describe and explain astronomical objects and astronomical phenomena. Theorists in astronomy endeavor to create theoretical models and from the results predict observational consequences of those models. The observation of a phenomenon predicted by a model allows astronomers to select between several alternate or conflicting models as the one best able to describe the phenomena.
Ptolemy's Almagest, although a brilliant treatise on theoretical astronomy combined with a practical handbook for computation, nevertheless includes compromises to reconcile discordant observations with a geocentric model. Modern theoretical astronomy is usually assumed to have begun with the work of Johannes Kepler (1571–1630), particularly with Kepler's laws. The history of the descriptive and theoretical aspects of the Solar System mostly spans from the late sixteenth century to the end of the nineteenth century.
Theoretical astronomy is built on the work of observational astronomy, astrometry, astrochemistry, and astrophysics. Astronomy was early to adopt computational techniques to model stellar and galactic formation and celestial mechanics. From the point of view of theoretical astronomy, not only must the mathematical expression be reasonably accurate but it should preferably exist in a form which is amenable to further mathematical analysis when used in specific problems. Most of theoretical astronomy uses Newtonian theory of gravitation, considering that the effects of general relativity are weak for most celestial objects. Theoretical astronomy does not attempt to predict the position, size and temperature of every object in the universe, but by and large has concentrated upon analyzing the apparently complex but periodic motions of celestial objects.
Integrating astronomy and physics
"Contrary to the belief generally held by laboratory physicists, astrono |
https://en.wikipedia.org/wiki/Cantor%20space | In mathematics, a Cantor space, named for Georg Cantor, is a topological abstraction of the classical Cantor set: a topological space is a Cantor space if it is homeomorphic to the Cantor set. In set theory, the topological space 2ω is called "the" Cantor space.
Examples
The Cantor set itself is a Cantor space. But the canonical example of a Cantor space is the countably infinite topological product of the discrete 2-point space {0, 1}. This is usually written as 2ℕ or 2ω (where 2 denotes the 2-element set {0,1} with the discrete topology). A point in 2ω is an infinite binary sequence, that is a sequence that assumes only the values 0 or 1. Given such a sequence a0, a1, a2,..., one can map it to the real number
x = Σ_{n≥0} 2aₙ / 3^(n+1).
This mapping gives a homeomorphism from 2ω onto the Cantor set, demonstrating that 2ω is indeed a Cantor space.
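The mapping above can be sketched numerically. The following Python fragment is illustrative only: the function name is an assumption, and it evaluates the partial sum over a finite prefix of the binary sequence rather than the full infinite series.

```python
def cantor_point(bits):
    # Partial sum of sum_{n >= 0} 2*a_n / 3**(n + 1) for a finite prefix
    # a_0, a_1, ... of a binary sequence. The image uses only the base-3
    # digits 0 and 2, so the limit point lies in the Cantor set.
    return sum(2 * b / 3 ** (n + 1) for n, b in enumerate(bits))

print(cantor_point([1]))     # 2/3: the prefix (1,) pins the point to the right third
print(cantor_point([0, 1]))  # 2/9: left third, then its right sub-third
```

Longer prefixes converge to the image of the infinite sequence; for instance, cantor_point([1] * 40) lies within 10⁻¹² of 1.0, the image of the all-ones sequence.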
Cantor spaces occur abundantly in real analysis. For example, they exist as subspaces in every perfect, complete metric space. (To see this, note that in such a space, any non-empty perfect set contains two disjoint non-empty perfect subsets of arbitrarily small diameter, and so one can imitate the construction of the usual Cantor set.) Also, every uncountable, separable, completely metrizable space contains Cantor spaces as subspaces. This includes most of the common spaces in real analysis.
Characterization
A topological characterization of Cantor spaces is given by Brouwer's theorem: any two non-empty compact Hausdorff spaces without isolated points and having countable bases consisting of clopen sets are homeomorphic to each other.
The topological property of having a base consisting of clopen sets is sometimes known as "zero-dimensionality". Brouwer's theorem can be restated as: a topological space is a Cantor space if and only if it is non-empty, perfect, compact, totally disconnected, and metrizable.
This theorem is also equivalent (via Stone's representation theorem for Boolean algebras) to the fact that any two countable atomless Boolean algebras are isomorphic.
Properties
As can be expected from Brouwer's theorem, Cantor spaces appear in several forms. But many properties of Cantor spaces can be established using 2ω, because its construction as a product makes it amenable to analysis.
Cantor s |
https://en.wikipedia.org/wiki/Seto%20Kaiba | is a fictional character in the manga Yu-Gi-Oh! by Kazuki Takahashi. As the majority shareholder and CEO of his own multi-national gaming company, Kaiba Corporation, Kaiba is reputed to be Japan's greatest gamer and aims to become the world's greatest player of the American card game, Duel Monsters (Magic & Wizards in the Japanese manga). In all mediums, his arch-rival is the protagonist of the series, Yugi Mutou, who is also a superb game player. He is the modern day reincarnation of one of the Pharaoh's Six High Priests, "Priest Seto", who appears in the manga's final arc. Kaiba has also appeared in related anime works and feature films.
Seto Kaiba originates from one of the stories Takahashi heard from a friend involving a selfish card collector. Like the card collector, Kaiba is obsessed with gaming, but Takahashi also gave Kaiba a calmer demeanor when developing his relationship with his rival. He was first voiced by Hikaru Midorikawa in Japanese with Kenjirō Tsuda replacing him in the sequel Duel Monsters. Eric Stuart voiced him in all of his English appearances.
Critical reception to Kaiba has been mixed for being compared to simplistic anime rivals based on his multiple attempts to defeat Yugi and become the superior Duel Monsters player. While his development in the film Dark Side of Dimensions was praised for being the major focus in the narrative, critics still felt Kaiba was obsessed with Duel Monsters to a ridiculous extent based on his continued focus on his original goal.
Creation and development
Seto Kaiba originates from stories Kazuki Takahashi was told by a friend. According to one such story, there was a real-life person who collected trading cards but was unwilling to play with anyone who was not an expert. Displeased with hearing about this person, Takahashi decided to use this card collector as a manga character, resulting in Kaiba's creation.
In the making of the series, Takahashi wanted to create an appealing creature for his Duel |
https://en.wikipedia.org/wiki/Barshop%20Institute | The Barshop Institute for Longevity and Aging Studies is a basic and clinical research institute located on the Texas Research Park Campus of the University of Texas Health Science Center at San Antonio (UTHSCSA). It is a leading institute in the United States in geriatrics research. The Barshop Institute ranks #1 in National Institute on Aging funding among Texas institutions, and is highly ranked in the country in National Institutes of Health funding. The scientific director of the institute has been Nicolas Musi, M.D. since September 2013. In 2009, one of the research projects of the institute was announced by Science magazine as one of the top scientific discoveries of the year.
Research
More than 160 faculty members at the University of Texas Health Science Center at San Antonio are actively involved in biomedical and clinical research and educational activities that range from the molecular genetics of aging to issues of health care for the elderly population. Faculty members of the Barshop Institute are internationally known for their research into disease processes associated with aging, such as Alzheimer's disease, cancer, diabetes, Parkinson's disease, and cardiovascular disease. Faculty members have made major scientific inquiries into the molecular regulation of aging and age-related diseases, development of anti-aging interventions, and health care issues of the elderly.
In 2010, the University established the Center for Healthy Aging where Barshop Institute scientists studying the basic biology of aging work with their colleagues in clinical research to coordinate translational, clinical aging research, and geriatrics education across all schools of the Health Science Center. |
https://en.wikipedia.org/wiki/Hard%E2%80%93easy%20effect | The hard–easy effect is a cognitive bias that manifests itself as a tendency to overestimate the probability of one's success at a task perceived as hard, and to underestimate the likelihood of one's success at a task perceived as easy. The hard-easy effect takes place, for example, when individuals exhibit a degree of underconfidence in answering relatively easy questions and a degree of overconfidence in answering relatively difficult questions. "Hard tasks tend to produce overconfidence but worse-than-average perceptions," reported Katherine A. Burson, Richard P. Larrick, and Jack B. Soll in a 2005 study, "whereas easy tasks tend to produce underconfidence and better-than-average effects."
The hard-easy effect falls under the umbrella of "social comparison theory", which was originally formulated by Leon Festinger in 1954. Festinger argued that individuals are driven to evaluate their own opinions and abilities accurately, and social comparison theory explains how individuals carry out those evaluations by comparing themselves to others.
In 1980, Ferrell and McGoey called it the "discriminability effect"; in 1992, Griffin and Tversky called it the "difficulty effect".
Experiments
In a range of studies, participants have been requested to answer general knowledge questions, each of which had two possible answers, and also to estimate their chances of answering each question correctly. If the participants had a sufficient degree of self-knowledge, their level of confidence in regard to each answer they gave would be high for the questions they answered correctly and lower for the ones they answered wrong. However, this generally is not the case. Many people are overconfident; indeed, studies show that most people systematically overestimate their own abilities. Moreover, people are overconfident about their ability to answer questions that are deemed to be hard but underconfident on questions that are considered easy.
In a study reported in 1997, William M. Go |
https://en.wikipedia.org/wiki/Ethenium | In chemistry, ethenium, protonated ethylene or ethyl cation is a positive ion with the formula C₂H₅⁺. It can be viewed as a molecule of ethylene (C₂H₄) with one added proton (H⁺), or a molecule of ethane (C₂H₆) minus one hydride ion (H⁻). It is a carbocation; more specifically, a nonclassical carbocation.
Preparation
Ethenium has been observed in rarefied gases subjected to radiation. Another preparation method is to react certain proton donors such as , , , and with ethane at ambient temperature and pressures below 1 mmHg. (Other donors such as and form ethanium preferably to ethenium.)
At room temperature and in a rarefied methane atmosphere, ethanium slowly dissociates to ethenium and H₂. The reaction is much faster at 90 °C.
Stability and reactions
Contrary to some earlier reports, ethenium was found to be largely unreactive towards neutral methane at ambient temperature and low pressure (on the order of 1 mmHg), even though the reaction yielding sec- and is believed to be exothermic.
Structure
The structure of ethenium's ground state was in dispute for many years, but it was eventually agreed to be a non-classical structure, with the two carbon atoms and one of the hydrogen atoms forming a three-center two-electron bond. Calculations have shown that higher homologues, like the propyl and n-butyl cations also have bridged structures. Generally speaking, bridging appears to be a common means by which 1° alkyl carbocations achieve additional stabilization. Consequently, true 1° carbocations (with a classical structure) may be rare or nonexistent. |
https://en.wikipedia.org/wiki/Bulk%20synchronous%20parallel | The bulk synchronous parallel (BSP) abstract computer is a bridging model for designing parallel algorithms. It is similar to the parallel random access machine (PRAM) model, but unlike PRAM, BSP does not take communication and synchronization for granted. In fact, quantifying the requisite synchronization and communication is an important part of analyzing a BSP algorithm.
History
The BSP model was developed by Leslie Valiant of Harvard University during the 1980s. The definitive article was published in 1990.
Between 1990 and 1992, Leslie Valiant and Bill McColl of Oxford University worked on ideas for a distributed memory BSP programming model, in Princeton and at Harvard. Between 1992 and 1997, McColl led a large research team at Oxford that developed various BSP programming libraries, languages and tools, and also numerous massively parallel BSP algorithms, including many early examples of high-performance communication-avoiding parallel algorithms
and recursive "immortal" parallel algorithms that achieve the best possible performance and optimal parametric tradeoffs.
With interest and momentum growing, McColl then led a group from Oxford, Harvard, Florida, Princeton, Bell Labs, Columbia and Utrecht that developed and published the BSPlib Standard for BSP programming in 1996.
Valiant developed an extension to the BSP model in the 2000s, leading to the publication of the Multi-BSP model in 2011.
In 2017, McColl developed a major new extension of the BSP model that provides fault tolerance and tail tolerance for large-scale parallel computations in AI, analytics and high-performance computing (HPC).
The BSP model
Overview
A BSP computer consists of the following:
Components capable of processing and/or local memory transactions (i.e., processors),
A network that routes messages between pairs of such components, and
A hardware facility that allows for the synchronization of all or a subset of components.
This is commonly interpreted as a s |
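BSP algorithms are typically analyzed with the standard superstep cost w + h·g + l (maximum local work, plus the h-relation charged at g per message, plus the barrier cost l) — a textbook formulation rather than a quote from this excerpt. A minimal sketch with illustrative numbers:

```python
def superstep_cost(work, sent, received, g, l):
    """Cost of one BSP superstep: max local work + h*g communication + barrier l.

    work/sent/received are per-processor lists; h is the maximum number of
    messages sent or received by any single processor (the h-relation).
    """
    w = max(work)
    h = max(max(sent), max(received))
    return w + h * g + l

def program_cost(supersteps, g, l):
    # the total cost of a BSP program is the sum over its supersteps
    return sum(superstep_cost(w, s, r, g, l) for (w, s, r) in supersteps)

# toy example: 4 processors, two supersteps, illustrative g and l
steps = [([100, 80, 120, 90], [5, 3, 4, 2], [4, 4, 3, 3]),
         ([60, 60, 60, 60],   [1, 1, 1, 1], [1, 1, 1, 1])]
print(program_cost(steps, g=4, l=50))  # (120+5*4+50) + (60+1*4+50) = 304
```

The model deliberately charges communication by the busiest processor, which is why balancing the h-relation matters as much as balancing the computation.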
https://en.wikipedia.org/wiki/Input%20Field%20Separators | For many command line interpreters (“shells”) of Unix operating systems, the input field separators, internal field separators, or IFS shell variable holds characters used to separate text into tokens.
The value of IFS (in the bash shell) typically includes the space, tab, and newline characters by default. These whitespace characters can be visualized by issuing the "declare" command in the bash shell or printing with commands like printf %s "$IFS" | od -c, printf "%q\n" "$IFS" or printf %s "$IFS" | cat -A (the latter two commands being only available in some shells and on some systems).
From the Bash, version 4 man page:
The shell treats each character of IFS as a delimiter, and splits the results of the other expansions into words on these characters.
If IFS is unset, or its value is exactly <space><tab><newline>, the default, then sequences of <space>, <tab>, and <newline> at the beginning and end of the results of the previous expansions are ignored, and any sequence of IFS characters not at the beginning or end serves to delimit words.
If IFS has a value other than the default, then sequences of the whitespace characters <space> and <tab> are ignored at the beginning and end of the word, as long as the whitespace character is in the value of IFS (an IFS whitespace character).
Any character in IFS that is not IFS whitespace, along with any adjacent IFS whitespace characters, delimits a field. A sequence of IFS whitespace characters is also treated as a delimiter. If the value of IFS is null, no word splitting occurs.
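As a concrete illustration (a minimal POSIX-shell sketch, not taken from the man-page text), setting IFS temporarily makes an unquoted expansion split on a custom delimiter:

```shell
record="alice:x:1001"

old_ifs=$IFS
IFS=:                 # split on colons instead of whitespace
set -- $record        # unquoted expansion undergoes IFS word splitting
IFS=$old_ifs          # restore the default behaviour immediately

echo "$1"             # alice
echo "$3"             # 1001
```

Restoring the old value right after the split avoids surprising later commands, which is the usual discipline when IFS is changed at all.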
IFS abbreviation
According to the Open Group Base Specifications, IFS is an abbreviation for "input field separators". A newer version of this specification notes that "this name is misleading as the IFS characters are actually used as field terminators." However, IFS is often referred to as the "internal field separators".
Exploits
IFS was usable as an exploit in some versions of Unix. A program with root permissions could be fooled into executing user-supplied code if it ran (for instance) system("/bin/mail") and was called with se |
https://en.wikipedia.org/wiki/Iatrophysics | Iatrophysics or iatromechanics (from Greek) is the medical application of physics, explaining medical practices with mechanical principles. It was a school of medicine in the seventeenth century which attempted to explain physiological phenomena in mechanical terms. Believers in iatromechanics thought that physiological phenomena of the human body followed the laws of physics. It was related to iatrochemistry in studying the human body in a systematic manner based on observations from the natural world, though it placed more emphasis on mathematical models than on chemical processes.
Background
The Age of Enlightenment was an era of radically changing ways of thought in Western politics, philosophy, and science. Major sociological changes occurred during the Enlightenment, as did industrial and scientific ones. In medicine, the Enlightenment brought several discoveries and studies that were shaped by these changing ways of thought. For example, capillaries were discovered by Marcello Malpighi. Jan Baptist van Helmont (1580–1644) was the first to consider digestion a fermentation process and identified hydrochloric acid in the stomach. Pathological anatomy and clinical observation were also being integrated into the medical curriculum. The Enlightenment also directly influenced the field of iatrophysics through the development of Antonie van Leeuwenhoek's microscope, the advancement of the field of ophthalmology through the use of physics by René Descartes, and Newton's law of universal gravitation, idea of gravitational force, and his treatise Opticks.
Subfields
Iatrophysicists drew inspiration from various established physical phenomena in order to explain how certain biological processes took place and how this can be applied towards medicine.
Particles
A key component of iatrophysical anatomy was the study of particles. This was particularly influenced by 17th century developments in microbiology, the most prominent being the microscope. Antonie v |
https://en.wikipedia.org/wiki/N%C3%A9ron%E2%80%93Severi%20group | In algebraic geometry, the Néron–Severi group of a variety is
the group of divisors modulo algebraic equivalence; in other words it is the group of components of the Picard scheme of a variety. Its rank is called the Picard number. It is named after Francesco Severi and André Néron.
Definition
In the cases of most importance to classical algebraic geometry, for a complete variety V that is non-singular, the connected component of the Picard scheme is an abelian variety written
Pic0(V).
The quotient
Pic(V)/Pic0(V)
is an abelian group NS(V), called the Néron–Severi group of V. This is a finitely-generated abelian group by the Néron–Severi theorem, which was proved by Severi over the complex numbers and by Néron over more general fields.
In other words, the Picard group fits into the exact sequence 0 → Pic0(V) → Pic(V) → NS(V) → 0.
The fact that the rank is finite is Francesco Severi's theorem of the base; the rank is the Picard number of V, often denoted ρ(V). The elements of finite order are called Severi divisors, and form a finite group which is a birational invariant and whose order is called the Severi number. Geometrically NS(V) describes the algebraic equivalence classes of divisors on V; that is, using a stronger, non-linear equivalence relation in place of linear equivalence of divisors, the classification becomes amenable to discrete invariants. Algebraic equivalence is closely related to numerical equivalence, an essentially topological classification by intersection numbers.
First Chern class and integral valued 2-cocycles
The exponential sheaf sequence
gives rise to a long exact sequence featuring
The first arrow is the first Chern class on the Picard group
and the Neron-Severi group can be identified with its image.
Equivalently, by exactness, the Neron-Severi group is the kernel of the second arrow
In the complex case, the Neron-Severi group is therefore the group of 2-cocycles whose Poincaré dual is represented by a complex hypersurface, that is, a Weil divisor.
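Written out in standard notation (reconstructed from the usual treatment of this material, not quoted verbatim from the excerpt), the sequences are:

```latex
% exponential sheaf sequence on a complex variety V
0 \longrightarrow \mathbb{Z} \longrightarrow \mathcal{O}_V
  \xrightarrow{\;\exp(2\pi i\,\cdot)\;} \mathcal{O}_V^{*} \longrightarrow 0

% induced long exact sequence in cohomology, with Pic(V) = H^1(V, O_V^*)
H^1(V,\mathcal{O}_V) \longrightarrow \operatorname{Pic}(V)
  \xrightarrow{\;c_1\;} H^2(V,\mathbb{Z}) \longrightarrow H^2(V,\mathcal{O}_V)

% so, by exactness,
\operatorname{NS}(V) \;\cong\; \operatorname{im} c_1
  \;=\; \ker\bigl(H^2(V,\mathbb{Z}) \to H^2(V,\mathcal{O}_V)\bigr)
```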
For |
https://en.wikipedia.org/wiki/Trace%20zero%20cryptography | In 1998, Gerhard Frey first proposed using trace zero varieties for cryptographic purposes. These varieties are subgroups of the divisor class group of a low-genus hyperelliptic curve defined over a finite field. These groups can be used to establish asymmetric cryptography using the discrete logarithm problem as the cryptographic primitive.
Trace zero varieties feature better scalar multiplication performance than elliptic curves. This allows fast arithmetic in these groups, which can speed up the calculations by a factor of 3 compared with elliptic curves and hence speed up the cryptosystem.
Another advantage is that for groups of cryptographically relevant size, the order of the group can simply be calculated using the characteristic polynomial of the Frobenius endomorphism. This is not the case, for example, in elliptic curve cryptography when the group of points of an elliptic curve over a prime field is used for cryptographic purposes.
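As a sketch of this order computation (a genus-1 toy example with illustrative coefficients, not values from the article): if the characteristic polynomial of Frobenius is P(T) = ∏(T − αᵢ), the Hasse–Weil relation gives the group order over the degree-n extension as ∏(1 − αᵢⁿ):

```python
import numpy as np

def group_order(char_poly_coeffs, n):
    """#J(F_{q^n}) = prod(1 - alpha_i**n) over the roots alpha_i of the
    characteristic polynomial of Frobenius (Hasse-Weil)."""
    roots = np.roots(char_poly_coeffs)
    order = np.prod(1 - roots**n)
    return int(round(order.real))  # the product is an integer; imaginary parts cancel

# genus-1 toy example: P(T) = T^2 - aT + q with q = 5, a = 2
#   #E(F_5)  = P(1) = 1 - 2 + 5 = 4
#   #E(F_25) = 25 + 1 - (a^2 - 2q) = 32
print(group_order([1, -2, 5], 1), group_order([1, -2, 5], 2))  # 4 32
```

For genus-2 curves the polynomial has degree 4, but the same product formula applies, which is what makes point counting over extension fields cheap once P(T) is known.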
However, more bits are needed to represent an element of the trace zero variety than to represent elements of elliptic or hyperelliptic curves. Another disadvantage is that the security of a TZV can be reduced by 1/6th of the bit length using a cover attack.
Mathematical background
A hyperelliptic curve C of genus g over a finite field of order q = p^n (p prime) and odd characteristic is defined as
where f is monic, deg(f) = 2g + 1 and deg(h) ≤ g. The curve has at least one -rational Weierstraß point.
The Jacobian variety of C is for all finite extension isomorphic to the ideal class group . With the Mumford's representation it is possible to represent the elements of with a pair of polynomials [u, v], where u, v ∈ .
The Frobenius endomorphism σ is used on an element [u, v] of to raise the power of each coefficient of that element to q: σ([u, v]) = [uq(x), vq(x)]. The characteristic polynomial of this endomorphism has the following form:
where ai in ℤ
With the Hasse–Weil theorem it is possible to receive |
https://en.wikipedia.org/wiki/Phage-ligand%20technology | Phage-ligand technology is a technology to detect, bind and remove bacteria and bacterial toxins by using highly specific bacteriophage-derived proteins.
Origins
Host recognition by bacteriophages occurs via bacteria-binding proteins that have strong binding affinities to specific protein or carbohydrate structures on the surface of the bacterial host. At the end of the infection cycle, the bacteria-lysing endolysin is synthesized and degrades the bacterial peptidoglycan cell wall, resulting in lysis (and therefore killing) of the bacterial cell.
Applications
Bacteriophage derived proteins are used for detection and removal of bacteria and bacterial components (especially endotoxin contaminations) in pharmaceutical and biological products, human diagnostics, food, and decolonization of bacteria causing nosocomial infections (e.g. MRSA).
Protein modifications allow the biotechnological adaption to specific requirements.
See also
Affinity magnetic separation |
https://en.wikipedia.org/wiki/MacScan | MacScan is anti-malware software for macOS developed by SecureMac.
Features
MacScan runs on Apple macOS. It scans for and removes malware (including spyware, Trojan horses, keystroke loggers, and tracking cookies). It also scans for remote administration programs, like Apple Remote Desktop, allowing users to verify that such programs are installed only with their authorization.
The full version is available as shareware.
Unlike other anti-malware applications available for Mac OS X (and other systems), MacScan scans exclusively for malware that affects Macs, as opposed to scanning for all forms of known threats, which would include Windows malware. Given that there is considerably less macOS malware than Windows-based malware, MacScan's definition files are smaller and more optimized.
See also
List of Macintosh software |
https://en.wikipedia.org/wiki/Structured%20Stream%20Transport | In computer networking, Structured Stream Transport (SST) is an experimental transport protocol that provides an ordered, reliable byte stream abstraction similar to TCP's, but enhances and optimizes stream management to permit applications to use streams in a much more fine-grained fashion than is feasible with TCP streams.
External links
SST home page
Transport layer protocols
Network protocols |
https://en.wikipedia.org/wiki/Organizing%20center | Organizing center may refer to:
Microtubule organizing center
Spemann's Organizer
Certain groups of cells in mesoderm formation, see FGF and mesoderm formation
Primitive streak in Amniotes responsible for gastrulation
a small cell group underneath the stem cells in Arabidopsis and other plants
animal cap cells treated with activin
Pattern formation
Developmental biology |
https://en.wikipedia.org/wiki/Program%20status%20word | The program status word (PSW) is a register that performs the function of a status register and program counter, and sometimes more. The term is also applied to a copy of the PSW in storage. This article only discusses the PSW in the IBM System/360 and its successors, and follows the IBM convention of numbering bits starting with 0 as the leftmost (most significant) bit.
Although certain fields within the PSW may be tested or set by using non-privileged instructions, testing or setting the remaining fields may only be accomplished by using privileged instructions.
Contained within the PSW is the two-bit condition code, representing zero, positive, negative, overflow, and similar flags of other architectures' status registers. Conditional branch instructions test it using a four-bit mask value, with each bit representing a test of one of the four condition-code values, 2^3 + 2^2 + 2^1 + 2^0. (Since IBM uses big-endian bit numbering, mask value 8 selects code 0, mask value 4 selects code 1, mask value 2 selects code 2, and mask value 1 selects code 3.)
The 64-bit PSW describes (among other things)
Interrupt masks
Privilege states
Condition code
Instruction address
In the early instances of the architecture (System/360 and early System/370), the instruction address was 24 bits; in later instances (XA/370), the instruction address was 31 bits plus a mode bit (24 bit addressing mode if zero; 31 bit addressing mode if one) for a total of 32 bits.
In the present instances of the architecture (z/Architecture), the instruction address is 64 bits and the PSW itself is 128 bits.
The PSW may be loaded by the LOAD PSW instruction (LPSW or LPSWE). Its contents may be examined with the Extract PSW instruction (EPSW).
Format
S/360
On all but 360/20, the PSW has the following formats. S/360 Extended PSW format only applies to the 360/67 with bit 8 of control register 6 set.
S/370
S/370 Extended Architecture (S/370-XA)
Enterprise Systems Architecture (ESA)
z/Archi |
https://en.wikipedia.org/wiki/RC%20oscillator | Linear electronic oscillator circuits, which generate a sinusoidal output signal, are composed of an amplifier and a frequency selective element, a filter. A linear oscillator circuit which uses an RC network, a combination of resistors and capacitors, for its frequency selective part is called an RC oscillator.
Description
RC oscillators are a type of feedback oscillator; they consist of an amplifying device, a transistor, vacuum tube, or op-amp, with some of its output energy fed back into its input through a network of resistors and capacitors, an RC network, to achieve positive feedback, causing it to generate an oscillating sinusoidal voltage. They are used to produce lower frequencies, mostly audio frequencies, in such applications as audio signal generators and electronic musical instruments. At radio frequencies, another type of feedback oscillator, the LC oscillator is used, but at frequencies below 100 kHz the size of the inductors and capacitors needed for the LC oscillator become cumbersome, and RC oscillators are used instead. Their lack of bulky inductors also makes them easier to integrate into microelectronic devices. Since the oscillator's frequency is determined by the value of resistors and capacitors, which vary with temperature, RC oscillators do not have as good frequency stability as crystal oscillators.
The frequency of oscillation is determined by the Barkhausen criterion, which says that the circuit will only oscillate at frequencies for which the phase shift around the feedback loop is equal to 360° (2π radians) or a multiple of 360°, and the loop gain (the amplification around the feedback loop) is equal to one. The purpose of the feedback RC network is to provide the correct phase shift at the desired oscillating frequency so the loop has 360° phase shift, so the sine wave, after passing through the loop will be in phase with the sine wave at the beginning and reinforce it, resulting in positive feedback. The amplifier prov |
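For the classic three-section RC phase-shift network (a standard worked example, not drawn from this excerpt), the Barkhausen condition picks out f = 1/(2πRC√6): there the network shifts the phase by 180° (the inverting amplifier supplies the other 180°) and attenuates by a factor of 29, so the amplifier needs gain ≥ 29. Component values below are illustrative:

```python
import cmath
import math

R, C = 10e3, 10e-9  # illustrative component values: 10 kΩ, 10 nF

def beta(f):
    """Feedback factor of an idealized, identically loaded three-section
    high-pass RC ladder; x = 1/(w*R*C)."""
    x = 1.0 / (2 * math.pi * f * R * C)
    return 1.0 / complex(1 - 5 * x**2, -(6 * x - x**3))

# predicted oscillation frequency from the Barkhausen phase condition
f_osc = 1.0 / (2 * math.pi * R * C * math.sqrt(6))

b = beta(f_osc)
# at f_osc the network's phase shift is 180 deg and |beta| = 1/29
print(f_osc, abs(b), math.degrees(cmath.phase(b)))
```

Evaluating β at f_osc confirms both halves of the criterion numerically: the phase condition fixes the frequency, and the magnitude condition fixes the minimum amplifier gain.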
https://en.wikipedia.org/wiki/Helium%20flash | A helium flash is a very brief thermal runaway nuclear fusion of large quantities of helium into carbon through the triple-alpha process in the core of low-mass stars (between 0.8 and 2.0 solar masses) during their red giant phase (the Sun is predicted to experience a flash 1.2 billion years after it leaves the main sequence). A much rarer runaway helium fusion process can also occur on the surface of accreting white dwarf stars.
Low-mass stars do not produce enough gravitational pressure to initiate normal helium fusion. As the hydrogen in the core is exhausted, some of the helium left behind is instead compacted into degenerate matter, supported against gravitational collapse by quantum mechanical pressure rather than thermal pressure. This increases the density and temperature of the core until it reaches approximately 100 million kelvin, which is hot enough to cause helium fusion (or "helium burning") in the core.
However, a fundamental quality of degenerate matter is that increases in temperature do not produce an increase in volume of the matter until the thermal pressure becomes so very high that it exceeds degeneracy pressure. In main sequence stars, thermal expansion regulates the core temperature, but in degenerate cores, this does not occur. Helium fusion increases the temperature, which increases the fusion rate, which further increases the temperature in a runaway reaction. This produces a flash of very intense helium fusion that lasts only a few thousand years (instantaneous on astronomical scales), but, in a matter of seconds, emits energy at a rate comparable to the entire Milky Way galaxy.
In the case of normal low-mass stars, the vast energy release causes much of the core to come out of degeneracy, allowing it to expand thermally; this expansion, however, consumes roughly as much energy as the helium flash released, and any left-over energy is absorbed into the star's upper layers. Thus the helium flash is mostly undetectable by observation,
https://en.wikipedia.org/wiki/Homological%20conjectures%20in%20commutative%20algebra | In mathematics, homological conjectures have been a focus of research activity in commutative algebra since the early 1960s. They concern a number of interrelated (sometimes surprisingly so) conjectures relating various homological properties of a commutative ring to its internal ring structure, particularly its Krull dimension and depth.
The following list given by Melvin Hochster is considered definitive for this area. In the sequel, R, S, and T refer to Noetherian commutative rings; R will be a local ring with maximal ideal m_R, and M and N are finitely generated R-modules.
The Zero Divisor Theorem. If M ≠ 0 has finite projective dimension and r ∈ R is not a zero divisor on M, then r is not a zero divisor on R.
Bass's Question. If M ≠ 0 has a finite injective resolution then R is a Cohen–Macaulay ring.
The Intersection Theorem. If M ⊗R N ≠ 0 has finite length, then the Krull dimension of N (i.e., the dimension of R modulo the annihilator of N) is at most the projective dimension of M.
The New Intersection Theorem. Let 0 → Gn → ⋯ → G0 → 0 denote a finite complex of free R-modules such that ⊕i Hi(G•) has finite length but is not 0. Then the (Krull dimension) dim R ≤ n.
The Improved New Intersection Conjecture. Let 0 → Gn → ⋯ → G0 → 0 denote a finite complex of free R-modules such that Hi(G•) has finite length for i > 0 and H0(G•) has a minimal generator that is killed by a power of the maximal ideal of R. Then dim R ≤ n.
The Direct Summand Conjecture. If R ⊆ S is a module-finite ring extension with R regular (here, R need not be local but the problem reduces at once to the local case), then R is a direct summand of S as an R-module. The conjecture was proven by Yves André using a theory of perfectoid spaces.
The Canonical Element Conjecture. Let x1, ..., xd be a system of parameters for R, let F• be a free R-resolution of the residue field of R with F0 = R, and let K• denote the Koszul complex of R with respect to x1, ..., xd. Lift the identity map K0 = R → F0 = R to a map of complexes. Then no matter what the choice of system of parameters or lifting, the last map from Kd = R → Fd is not 0.
Existence of Balanced Big Cohen–Macaulay Modul |
https://en.wikipedia.org/wiki/SAT%20Subject%20Test%20in%20Mathematics%20Level%202 | In the U.S., the SAT Subject Test in Mathematics Level 2 (formerly known as Math II or Math IIC, the "C" representing the use of a calculator) was a one-hour multiple choice test. The questions covered a broad range of topics. Approximately 10-14% of questions focused on numbers and operations, 48-52% focused on algebra and functions, 28-32% focused on geometry (coordinate, three-dimensional, and trigonometric geometry were covered; plane geometry was not directly tested), and 8-12% focused on data analysis, statistics and probability. Compared to Mathematics 1, Mathematics 2 was more advanced. Whereas the Mathematics 1 test covered Algebra II and basic trigonometry, a pre-calculus class was good preparation for Mathematics 2. On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Mathematics Level 2. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions caused by the impact of the COVID-19 pandemic on education.
Format
The test had 50 multiple choice questions that were to be answered in one hour. All questions had five answer choices. Students received 1 point for every correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank.
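The scoring rule can be expressed directly (a trivial sketch; the function name and example numbers are illustrative):

```python
def raw_score(correct, incorrect, blank=0):
    """Raw score under the described rules: +1 per correct answer,
    -1/4 point per incorrect answer, 0 points for blanks."""
    return correct - 0.25 * incorrect  # blanks contribute nothing

# e.g. 40 right, 8 wrong, 2 left blank out of 50 questions:
print(raw_score(40, 8, 2))  # 38.0
```

The quarter-point penalty made random guessing among five choices score zero on average, which is why leaving a question blank and guessing blindly were statistically equivalent.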
Calculator use
The College Board stated that a calculator "may be useful or necessary" for about 55-60% of the questions on the test. The College Board also encouraged the use of a graphing calculator over a scientific calculator, saying that the test was "developed with the expectation that most students are using graphing calculators."
For the Mathematics Level Two test, students were not permitted to use calculators that have a QWERTY format keyboard, require an electrical outlet, make noise, use paper tape, have non-traditional methods of input (such as a stylus), or are part of a communication devic |
https://en.wikipedia.org/wiki/Apache%20Apex | Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data-in-motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable.
Apache Apex was named a top-level project by The Apache Software Foundation on April 25, 2016. As of September 2019, it is no longer actively developed.
Overview
Apache Apex is developed under the Apache License 2.0. The project was driven by the San Jose, California-based start-up company DataTorrent.
There are two parts of Apache Apex: Apex Core and Apex Malhar. Apex Core is the platform or framework for building distributed applications on Hadoop. The core Apex platform is supplemented by Malhar, a library of connector and logic functions, enabling rapid application development. These input and output operators provide templates to sources and sinks such as Alluxio, S3, HDFS, NFS, FTP, Kafka, ActiveMQ, RabbitMQ, JMS, Cassandra, MongoDB, Redis, HBase, CouchDB, generic JDBC, and other database connectors.
History
DataTorrent has developed the platform since 2012 and then decided to open source the core that became Apache Apex. It entered incubation in August 2015 and became Apache Software Foundation top level project within 8 months. DataTorrent itself shut down in May 2018.
As of September 2019, Apache Apex is no longer being developed.
Apex Big Data World
Apex Big Data World is a conference about Apache Apex. The first conferences took place in 2017; they were held in Pune, India and Mountain View, California, USA. |
https://en.wikipedia.org/wiki/Female%20intrasexual%20competition | Female intrasexual competition is competition between women over a potential mate. Such competition might include self-promotion, derogation of other women, and direct and indirect aggression toward other women. Factors that influence female intrasexual competition include the genetic quality of available mates, hormone levels, and interpersonal dynamics.
There are two modes of sexual selection: intersexual selection and intrasexual selection. Intersexual selection includes the display of desirable sexual characteristics to attract a potential mate. Intrasexual selection is competition between members of the same sex over a potential mate.
Compared to males, females tend to prefer subtle rather than overt forms of intrasexual competition. However, they are also less likely to resolve a conflict with a same-sex peer.
Self-promotion tactics
Self-promotion tactics are one of the main strategies that can be used during intrasexual competition for mates. It is often perceived to be the most socially desirable strategy, as it can be perceived as self-improvement, rather than an attack on competitors. Self-promotion tactics are especially useful for when women are looking for short-term mates, as such tactics will directly promote their sexual availability.
Luxury consumption
Self-promotion tactics refers to the different strategies that women might use to make themselves look better compared to other competing women. For example, women are interested in luxury items that enhance their attractiveness. Luxury items can indicate attractiveness by emphasising a higher status, which is a factor that potential mates will take into consideration. When testing for female intrasexual competition, research has shown that women would purposely choose luxury items that boosts their level of attractiveness, and will disregard non-attractive items, even if they are luxury items. When consuming attractive luxury items, women are perceived to be more attractive, young, and fli |
https://en.wikipedia.org/wiki/Perl%20Compatible%20Regular%20Expressions | Perl Compatible Regular Expressions (PCRE) is a library written in C, which implements a regular expression engine, inspired by the capabilities of the Perl programming language. Philip Hazel started writing PCRE in summer 1997. PCRE's syntax is much more powerful and flexible than either of the POSIX regular expression flavors (BRE, ERE) and than that of many other regular-expression libraries.
While PCRE originally aimed at feature-equivalence with Perl, the two implementations are not fully equivalent. During the PCRE 7.x and Perl 5.9.x phase, the two projects coordinated development, with features being ported between them in both directions.
In 2015, a fork of PCRE was released with a revised programming interface (API). The original software, now called PCRE1 (the 1.xx–8.xx series), has had bugs mended, but no further development. It is now considered obsolete, and the current 8.45 release is likely to be the last. The new PCRE2 code (the 10.xx series) has had a number of extensions and coding improvements and is where development takes place.
A number of prominent open-source programs, such as the Apache and Nginx HTTP servers, and the PHP and R scripting languages, incorporate the PCRE library; proprietary software can do likewise, as the library is BSD-licensed. As of Perl 5.10, PCRE is also available as a replacement for Perl's default regular-expression engine through the re::engine::PCRE module.
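Many of the Perl-style constructs PCRE popularized — named capture groups, lookaround assertions — appear in other Perl-inspired engines as well. The sketch below uses Python's re module (a similar flavor, not PCRE itself) purely to illustrate the syntax:

```python
import re

# named capture groups, a Perl-derived feature shared by PCRE
m = re.match(r"(?P<proto>https?)://(?P<host>[^/]+)", "https://example.org/path")
print(m.group("proto"), m.group("host"))  # https example.org

# positive lookahead: match word characters only when a digit follows
print(re.findall(r"\w+(?=\d)", "abc1 def xyz9"))  # ['abc', 'xyz']
```

Programs linking the actual PCRE2 C library would use pcre2_compile/pcre2_match instead, but the pattern syntax is largely the same.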
The library can be built on Unix, Windows, and several other environments. PCRE2 is distributed with a POSIX C wrapper, several test programs, and the utility program pcre2grep that is built in tandem with the library.
Features
Just-in-time compiler support
This optional feature is available if enabled when the PCRE2 library is built. Large performance benefits are possible when (for example) the calling program utilizes the feature with compatible patterns that are executed repeatedly. The just-in-time compiler support was written by Zoltan Herczeg and is |
https://en.wikipedia.org/wiki/Uninitialized%20variable | In computing, an uninitialized variable is a variable that is declared but is not set to a definite known value before it is used. It will have some value, but not a predictable one. As such, it is a programming error and a common source of bugs in software.
Example of the C language
A common assumption made by novice programmers is that all variables are set to a known value, such as zero, when they are declared. While this is true for many languages, it is not true for all of them, and so the potential for error is there. Languages such as C use stack space for variables, and the collection of variables allocated for a subroutine is known as a stack frame. While the computer will set aside the appropriate amount of space for the stack frame, it usually does so simply by adjusting the value of the stack pointer, and does not set the memory itself to any new state (typically out of efficiency concerns). Therefore, whatever the contents of that memory happen to be at the time will appear as the initial values of the variables which occupy those addresses.
Here's a simple example in C:
#include <stdio.h>

void count(void)
{
    int k, i;              /* k is declared but never initialized */
    for (i = 0; i < 10; i++)
    {
        k = k + 1;         /* uses the indeterminate initial value of k */
    }
    printf("%d", k);
}
The final value of k is undefined. The answer that it must be 10 assumes that it started at zero, which may or may not be true. Note that in the example, the variable i is initialized to zero by the first clause of the for statement.
Another example arises when dealing with structs. In the code snippet below, we have a struct student which contains some variables describing the information about a student. The function register_student leaks memory contents because it fails to fully initialize the members of struct student new_student. On closer inspection, age, semester and student_number are initialized at the beginning, but the initialization of the first_name and last_name members is incorrect. This is because if the length of first_name and last_name character array |
https://en.wikipedia.org/wiki/Sparky%20the%20Sun%20Devil | Sparky the Sun Devil is the official mascot of Arizona State University. Originally the ASU athletic teams' mascot was an owl, then became a "Normal" (because ASU was founded as a normal school). It was later changed to a bulldog in an attempt to make the school – Arizona State Teachers College at the time – appear more in line with Yale and other universities that held a higher level of respect. The State Press, the student newspaper, ran frequent appeals during the fall of 1946, urging that the Bulldog be replaced by the new Sun Devil. On November 8, 1946, the student body voted 819 to 196 to make the change. On November 20, as reported by the Arizona Republic, the student council made it official. The following day, the first Arizona State team played as the Sun Devils. Two years later, Stanford alumnus and Disney illustrator Berk Anthony designed "Sparky", a devil holding a trident (colloquially referred to as a pitchfork). Anthony is rumored to have based Sparky's facial features on those of his former boss, Walt Disney.
Trademark
In the 1970s and early 1980s, Orange Julius beverage stands used the image of a devil with a pitchfork around an orange. The company later dropped the logo after threats of a lawsuit from the alumni association.
Redesign
On March 1, 2013, Arizona State announced it was joining forces with the Walt Disney Company to redesign the mascot costume as part of an effort to modernize the character, with plans to use the character in comic books, children's books and animated features. The school also announced that it planned to continue using the iconic mark in various ways, including marketing and selling apparel alongside the new version of the character. This change was met with backlash from students, alumni, and fans. Arizona State University President Michael Crow indicated that officials were meeting with student leaders to discuss the issue; afterwards, Sparky's new look was scrapped on March 19, 2013. The school later announce |
https://en.wikipedia.org/wiki/Flexor%20hallucis%20brevis%20muscle | Flexor hallucis brevis muscle is a muscle of the foot that flexes the big toe.
Structure
Flexor hallucis brevis muscle arises, by a pointed tendinous process, from the medial part of the under surface of the cuboid bone, from the contiguous portion of the third cuneiform, and from the prolongation of the tendon of the tibialis posterior muscle which is attached to that bone. It divides in front into two portions, which are inserted into the medial and lateral sides of the base of the first phalanx of the great toe, a sesamoid bone being present in each tendon at its insertion. The medial portion is blended with the abductor hallucis muscle previous to its insertion; the lateral portion (sometimes described as the first plantar interosseus) with the adductor hallucis muscle. The tendon of the flexor hallucis longus muscle lies in a groove between the two. Its tendon usually contains two sesamoid bones at the point under the first metatarsophalangeal joint.
Innervation
The medial and lateral heads of the flexor hallucis brevis are innervated by the medial plantar nerve. Both heads are represented by spinal segments S1 and S2.
Variation
The origin is subject to considerable variation; the muscle often receives fibers from the calcaneus or the long plantar ligament. The attachment to the cuboid bone is sometimes absent, and a slip may pass to the first phalanx of the second toe.
Function
Flexor hallucis brevis flexes the big toe at the first metatarsophalangeal joint. It helps to maintain the medial longitudinal arch and assists with the toe-off phase of gait, providing increased push-off.
Clinical significance
Sesamoid bones contained within the tendon of flexor hallucis brevis muscle may become damaged during exercise.
Additional images |
https://en.wikipedia.org/wiki/Nutrient%20cycling%20in%20the%20Columbia%20River%20Basin | Nutrient cycling in the Columbia River Basin involves the transport of nutrients through the system, as well as transformations among dissolved, solid, and gaseous phases, depending on the element. The elements that constitute important nutrient cycles include macronutrients such as nitrogen (as ammonium, nitrite, and nitrate), silicate, phosphorus, and micronutrients, which are found in trace amounts, such as iron. Their cycling within a system is controlled by many biological, chemical, and physical processes.
The Columbia River Basin is the largest freshwater system of the Pacific Northwest, and due to its complexity, size, and modification by humans, nutrient cycling within the system is affected by many different components. Both natural and anthropogenic processes are involved in the cycling of nutrients. Natural processes in the system include estuarine mixing of fresh and ocean waters, and climate variability patterns such as the Pacific Decadal Oscillation and the El Niño–Southern Oscillation (both climatic cycles that affect the amount of regional snowpack and river discharge). Natural sources of nutrients in the Columbia River include weathering, leaf litter, salmon carcasses, runoff from its tributaries, and ocean estuary exchange. Major anthropogenic impacts to nutrients in the basin are due to fertilizers from agriculture, sewage systems, logging, and the construction of dams.
Nutrient dynamics vary across the river basin, from the headwaters through the main river and dams to the Columbia River estuary and the ocean. Upstream in the headwaters, salmon runs are the main source of nutrients. Dams along the river impact nutrient cycling by increasing the residence time of nutrients and reducing the transport of silicate to the estuary, which directly impacts diatoms, a type of phytoplankton. The dams are also a barrier to salmon migration, and can increase the amount of methane locally produced. The Columbia River estuary exports high rates of n
https://en.wikipedia.org/wiki/Mason%20equation | The Mason equation is an approximate analytical expression for the growth (due to condensation) or evaporation of a water droplet; it is due to the meteorologist B. J. Mason. The expression is found by recognising that mass diffusion towards the water drop in a supersaturated environment transports energy as latent heat, and this has to be balanced by the diffusion of sensible heat back across the boundary layer (and by the energy required to heat the drop, but for a cloud-sized drop this last term is usually small).
Equation
In Mason's formulation the changes in temperature across the boundary layer can be related to the changes in saturated vapour pressure by the Clausius–Clapeyron relation; the two energy transport terms must be nearly equal but opposite in sign and so this sets the interface temperature of the drop. The resulting expression for the growth rate is significantly lower than that expected if the drop were not warmed by the latent heat.
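The Clausius–Clapeyron step can be made explicit. In a standard form (taking R as the specific gas constant of water vapour), a small temperature difference across the boundary layer maps onto a fractional change in saturated vapour pressure:

```latex
\frac{1}{e_s}\frac{\mathrm{d}e_s}{\mathrm{d}T} = \frac{L}{R T^{2}}
\quad\Longrightarrow\quad
\frac{\Delta e_s}{e_s} \approx \frac{L\,\Delta T}{R T^{2}}
```

This is the relation that allows the two energy transport terms to be equated and the interface temperature of the drop to be eliminated from the growth-rate expression.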
Thus if the drop has a size r, the inward mass flow rate is given by

dM/dt = 4π r D (ρ_∞ − ρ_w)

and the sensible heat flux by

dQ/dt = 4π r K (T_w − T_∞)

and the final expression for the growth rate is

r dr/dt = S / [ (L/(RT) − 1) · Lρ_l/(KT) + ρ_l RT/(D e_s(T)) ]

where

S is the supersaturation far from the drop
L is the latent heat
K is the vapour thermal conductivity
D is the binary diffusion coefficient
R is the gas constant of water vapour
ρ_w and ρ_∞ are the vapour densities at the drop surface and far from the drop
T_w and T_∞ are the corresponding temperatures
ρ_l is the density of liquid water
e_s(T) is the saturated vapour pressure
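As a rough numerical check, the growth-rate expression can be evaluated directly. The property values below are illustrative assumptions for a cloud droplet near 0 °C (they are not taken from this article):

```python
# Evaluate the Mason growth-rate expression with illustrative values
# (assumed, order-of-magnitude properties for moist air at ~0 degrees C).
S = 0.01        # supersaturation far from the drop (1%)
L = 2.5e6       # latent heat of vaporisation, J/kg
K = 0.024       # thermal conductivity, W/(m K)
D = 2.2e-5      # binary diffusion coefficient of vapour in air, m^2/s
R = 461.5       # specific gas constant of water vapour, J/(kg K)
T = 273.15      # ambient temperature, K
rho_l = 1000.0  # density of liquid water, kg/m^3
e_s = 611.0     # saturated vapour pressure at T, Pa

# r dr/dt = S / [ (L/(RT) - 1) * L*rho_l/(K*T) + rho_l*R*T/(D*e_s) ]
heat_term = (L / (R * T) - 1.0) * L * rho_l / (K * T)   # latent-heat warming
diffusion_term = rho_l * R * T / (D * e_s)              # vapour diffusion
r_drdt = S / (heat_term + diffusion_term)               # m^2/s

r = 1e-6                 # a 1-micron droplet
drdt = r_drdt / r        # linear growth rate, m/s
print(f"r dr/dt = {r_drdt:.2e} m^2/s, dr/dt = {drdt:.2e} m/s")
```

With these assumed values the linear growth rate of a micron-sized droplet comes out of order 10⁻⁷ m/s, and because r dr/dt is constant, growth in r slows as the droplet gets larger, the familiar square-root-of-time behaviour of diffusional growth.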
https://en.wikipedia.org/wiki/Electronic%20skin | Electronic skin refers to flexible, stretchable and self-healing electronics that are able to mimic functionalities of human or animal skin. The broad class of materials often contain sensing abilities that are intended to reproduce the capabilities of human skin to respond to environmental factors such as changes in heat and pressure.
Advances in electronic skin research focus on designing materials that are stretchy, robust, and flexible. Research in the individual fields of flexible electronics and tactile sensing has progressed greatly; however, electronic skin design attempts to bring together advances in many areas of materials research without sacrificing individual benefits from each field. The successful combination of flexible and stretchable mechanical properties with sensors and the ability to self-heal would open the door to many possible applications including soft robotics, prosthetics, artificial intelligence and health monitoring.
Recent advances in the field of electronic skin have focused on incorporating green materials ideals and environmental awareness into the design process. As one of the main challenges facing electronic skin development is the ability of the material to withstand mechanical strain and maintain sensing ability or electronic properties, recyclability and self-healing properties are especially critical in the future design of new electronic skins.
Rehealable electronic skin
Self-healing abilities of electronic skin are critical to potential applications of electronic skin in fields such as soft robotics. Proper design of self-healing electronic skin requires not only healing of the base substrate but also the reestablishment of any sensing functions such as tactile sensing or electrical conductivity. Ideally, the self-healing process of electronic skin does not rely upon outside stimulation such as increased temperature, pressure, or solvation. Self-healing, or rehealable, electronic skin is often achieved through a polym |
https://en.wikipedia.org/wiki/Hans%20Hermes | Hans Hermes (; 12 February 1912 – 10 November 2003) was a German mathematician and logician, who made significant contributions to the foundations of mathematical logic.
Personal life
Hermes was born in Neunkirchen. From 1931, he studied mathematics, physics, chemistry, biology and philosophy at the University of Freiburg. In 1937, he passed the state examination in Münster, where he was still in attendance in 1938, when the physicist Adolf Kratzer was present. After that, he went on a scholarship to the University of Göttingen and then became an assistant at the University of Bonn. During World War II, he was a soldier on the Channel Island of Jersey until 1943, and then at the Chemical Physics Institute of the Navy in Kiel. At the end of the war, he moved to Toplitzsee, where he was tasked with working on new encryption methods. In 1947, he became a lecturer at the University of Bonn, where he completed his habilitation with a thesis titled Analytical Manifolds in Riemannian Spaces. In 1949, he became a professor at the University of Münster, where he turned back to the subject of mathematical logic.
Work
Hans Hermes was a pioneer of the Turing machine as the central concept of computability. In 1937, Hermes published an article under the title Definite terms and predictable numbers about the Turing machine, which still adheres closely to Turing's ideas but does not contain the concepts of the universal machine or the decision problem.
In 1952, together with Heinrich Scholz, he published an encyclopedia which significantly promoted the development of mathematical logic in Germany.
In 1953, he took over from Heinrich Scholz the management of the influential Institute for Mathematical Logic and Foundational Research at the University of Münster. Under his leadership, the Institute became a noted centre that attracted young researchers, both from within the Federal Republic and from abroad. With Hermes there were, among others, Wilhelm Ackermann and Gisbert Hasenjaeger. In 1966, he accept
https://en.wikipedia.org/wiki/Lens%20regeneration | The regeneration of the lens of the eye has been studied, mostly in amphibians and rabbits, since the 18th century. In the 21st century, an experimental medical trial has been performed in humans, as this is a promising technique for treating cataracts, especially in children.
History
In 1781, Charles Bonnet found that a salamander had regenerated an eye one year after most of it, including the lens, had been removed. Vincenzo Colucci made a histological study of the phenomenon in newts, publishing his finding that it regenerated from the iris in 1891. Gustav Wolff then published several papers on the topic, starting in 1895, and this form of regeneration is now called Wolffian regeneration. The priority issue between Colucci and Wolff is examined in more detail by Holland (2021).
Regeneration of the lens in rabbits was first studied by French surgeons Cocteau and Leroy-D'Étiolle, starting in 1824. The crystallin contents of the lens capsule were removed but this was found to regenerate within a month. The rabbit is suitable for development of surgical techniques on the eye because it is easy to handle and its eye is comparatively large. Research in rabbits showed that their lens would start to regenerate within two weeks after a capsulotomy – a surgical technique in which the crystalline lens material is removed but the surrounding capsule which contained it is left mostly intact. The new lens was similar in structure to the original but its shape might be irregular. Filling the capsule during regeneration seemed to encourage development of a more normal shape. The technique was also found to work in primates and so has been studied as a possible technique for treating cataracts in humans. Other animals in which lens regeneration has been observed include cats, chickens, dogs, fish, mice, rats and Xenopus frogs.
Experimental trials
A lens regeneration technique was trialled in a collaboration between Sun Yat-sen University and University of California, |