source | text |
|---|---|
https://en.wikipedia.org/wiki/StepMania | StepMania is a cross-platform rhythm video game and engine. It was originally developed as a clone of Konami's arcade game series Dance Dance Revolution, and has since evolved into an extensible rhythm game engine capable of supporting a variety of rhythm-based game types. Released under the MIT License, StepMania is open-source free software.
Several video game series use StepMania as their game engine, including In the Groove, Pump It Up Pro, Pump It Up Infinity, and StepManiaX. StepMania was included in a video game exhibition at New York's Museum of the Moving Image in 2005.
Development
StepMania was originally developed as an open-source clone of Konami's arcade game series Dance Dance Revolution (DDR). During the first three major versions, the interface was based heavily on DDR's. New versions were released relatively quickly at first, culminating in version 3.9 in 2005. In 2010, after almost five years of work without a stable release, StepMania creator Chris Danford forked a 2006 build of StepMania, paused development on the bleeding-edge branch, and labeled the new branch StepMania 4 beta. A separate development team called the Spinal Shark Collective forked the bleeding-edge branch and continued work on it, branding it sm-ssc. On 30 May 2011, sm-ssc gained official status and was renamed StepMania 5.0. Development of the next version, 5.1, has stalled in recent years after a couple of betas were released on GitHub. Project OutFox (formerly known as StepMania 5.3 and initially labeled FoxMania) is a currently closed-source fork of the 5.0 and 5.1 codebase. It was originally planned to be reintegrated into StepMania, but later in development it became an independent project because of its larger scope of goals, while still contributing codebase improvements to future versions of StepMania. These improvements include modernizing the original codebase to improve performance and graphical fidelity, refurbishing aspects of the engine that h |
https://en.wikipedia.org/wiki/Haxe | Haxe is a high-level cross-platform programming language and compiler that can produce applications and source code for many different computing platforms from one codebase. It is free and open-source software, released under the MIT License. The compiler, written in OCaml, is released under the GNU General Public License (GPL) version 2.
Haxe includes a set of features and a standard library supported across all platforms, like numeric data types, strings, arrays, maps, binary, reflection, maths, Hypertext Transfer Protocol (HTTP), file system and common file formats. Haxe also includes platform-specific APIs for each compiler target. Kha, OpenFL and Heaps.io are popular Haxe frameworks that enable creating multi-platform content from one codebase.
Haxe originated with the idea of supporting client-side and server-side programming in one language, and simplifying the communication logic between them. Code written in the Haxe language can be compiled into JavaScript, C++, Java, JVM, PHP, C#, Python, Lua and Node.js. Haxe can also compile directly to SWF, HashLink, and NekoVM bytecode, and can run in interpreted mode.
Haxe supports externs (definition files) that can contain type information of existing libraries to describe target-specific interaction in a type-safe manner, much as C++ header files can describe the structure of existing object files. This makes it possible to use the values defined in those files as if they were statically typed Haxe entities. Besides externs, other solutions exist to access each platform's native capabilities.
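For comparison, an analogous mechanism from another ecosystem (illustrative only, not Haxe syntax): Python's type-stub (.pyi) files likewise declare typed signatures for an existing library without implementing it. The module and function names below are hypothetical.

```python
# mylib.pyi -- hypothetical stub file describing an existing library "mylib";
# like a Haxe extern, it carries type information only, with no implementation.
def parse(text: str) -> dict[str, int]: ...
def version() -> str: ...
```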
Many popular IDEs and source code editors have support available for Haxe development. No particular development environment or tool set is officially recommended by the Haxe Foundation, although VS Code, IntelliJ IDEA and HaxeDevelop have the most support for Haxe development. The core functionalities of syntax highlighting, code completion, refactoring, debugging, etc. are available to various degrees.
History
Development of Haxe bega |
https://en.wikipedia.org/wiki/Knaster%E2%80%93Tarski%20theorem | In the mathematical areas of order and lattice theory, the Knaster–Tarski theorem, named after Bronisław Knaster and Alfred Tarski, states the following:
Let (L, ≤) be a complete lattice and let f : L → L be an order-preserving (monotonic) function with respect to ≤. Then the set of fixed points of f in L forms a complete lattice under ≤.
It was Tarski who stated the result in its most general form, and so the theorem is often known as Tarski's fixed-point theorem. Some time earlier, Knaster and Tarski established the result for the special case where L is the lattice of subsets of a set, the power set lattice.
The theorem has important applications in formal semantics of programming languages and abstract interpretation, as well as in game theory.
A kind of converse of this theorem was proved by Anne C. Davis: If every order-preserving function f : L → L on a lattice L has a fixed point, then L is a complete lattice.
Consequences: least and greatest fixed points
Since complete lattices cannot be empty (they must contain a supremum and infimum of the empty set), the theorem in particular guarantees the existence of at least one fixed point of f, and even the existence of a least fixed point (or greatest fixed point). In many practical cases, this is the most important implication of the theorem.
The least fixpoint of f is the least element x such that f(x) = x, or, equivalently, such that f(x) ≤ x; the dual holds for the greatest fixpoint, the greatest element x such that f(x) = x.
If f(lim x_n) = lim f(x_n) for all ascending sequences x_n, then the least fixpoint of f is lim f^n(0), where 0 is the least element of L, thus giving a more "constructive" version of the theorem. (See: Kleene fixed-point theorem.) More generally, if f is monotonic, then the least fixpoint of f is the stationary limit of f^α(0), taking α over the ordinals, where f^α is defined by transfinite induction: f^(α+1) = f(f^α) and f^γ for a limit ordinal γ is the least upper bound of the |
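As a minimal illustration of the constructive reading (a sketch, not from the article; the graph is made up): on the finite powerset lattice, iterating a monotone function from the bottom element ∅ reaches its least fixed point, here the set of states reachable from state 1.

```python
def least_fixed_point(f, bottom):
    """Kleene iteration: bottom, f(bottom), f(f(bottom)), ... until stable.
    Terminates when f is monotone on a finite lattice such as a powerset."""
    x = bottom
    while True:
        y = f(x)
        if y == x:
            return x
        x = y

# Monotone f on the powerset of graph nodes: keep s, add the start node 1,
# and add every successor of a node already in s.
edges = {1: {2}, 2: {3}, 3: {3}, 4: {1}}
f = lambda s: frozenset(s) | {1} | {w for v in s for w in edges.get(v, ())}
print(least_fixed_point(f, frozenset()))  # frozenset({1, 2, 3})
```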
https://en.wikipedia.org/wiki/GFP-cDNA | The GFP-cDNA project documents the localisation of proteins to subcellular compartments of the eukaryotic cell applying fluorescence microscopy. Experimental data are complemented with bioinformatic analyses and published online in a database. A search function allows the finding of proteins containing features or motifs of particular interest. The project is a collaboration of the research groups of Rainer Pepperkok at the European Molecular Biology Laboratory (EMBL) and Stefan Wiemann at the German Cancer Research Centre (DKFZ).
What kinds of experiments are made?
The cDNAs of newly identified open reading frames (ORFs) are tagged with Green Fluorescent Protein (GFP) and expressed in eukaryotic cells. Subsequently, the subcellular localisation of the fusion proteins is recorded by fluorescence microscopy.
Steps:
1. Large-scale cloning
Any large-scale manipulation of ORFs requires cloning technologies that are free of restriction enzymes. In this respect, those that utilise recombination cloning (Gateway from Invitrogen or Creator from BD Biosciences) have proved to be the most suitable. This cloning technology is based on recombination mechanisms used by phages to integrate their DNA into the host genome. It allows the ORFs to be rapidly and conveniently shuttled between functionally useful vectors without the need for conventional restriction cloning. In the GFP-cDNA project the ORFs are transferred into CFP/YFP expression vectors. For the localisation analysis both N- and C-terminal fusions are generated. This maximises the possibility of correctly ascertaining the localisation, since the presence of GFP may mask targeting signals present at one end of the native protein.
N-Terminal Fluorescent Fusions
Insert your gene of interest into the MCS upstream of the fluorescent protein gene, and express your gene as a fusion to the N-terminus of the fluorescent protein.
C-Terminal Fluorescent Fusions
Insert your gene of interest into the MCS downstream o |
https://en.wikipedia.org/wiki/Jane%20Kister | Jane Elizabeth Kister (born and also published as Jane Bridge, 18 October 1944 – 1 December 2019) was a British and American mathematical logician and mathematics editor who served for many years as an editor of Mathematical Reviews.
Early life and education
Jane Bridge was originally from Weybridge, England, where she was born on 18 October 1944; her father was a lawyer and later a judge. Her family moved to London when she was four, and she studied at St Paul's Girls' School in London. She matriculated at Somerville College, Oxford in 1963, but her studies were interrupted by a diagnosis of lupus; she resumed reading mathematics there in 1964, tutored by Anne Cobbe. She earned a first, won a Junior Mathematical Prize, and continued at Oxford for graduate study.
She was given the Mary Somerville Research Fellowship in 1969, and completed her doctorate (D.Phil.) at Oxford in 1972. Her dissertation, Some Problems in Mathematical Logic: Systems of Ordinal Functions and Ordinal Notations, was supervised by Robin Gandy. She then became a tutorial fellow in mathematics at Somerville College, taking Anne Cobbe's position after Cobbe's retirement, and a member of the Mathematical Institute, University of Oxford, working among others there with Dana Scott.
Marriage and later life
In 1977, mathematician James Kister from the University of Michigan visited Oxford on sabbatical; they married in 1978 and she returned with him to the US, giving up her position at Oxford and in 1992 taking US citizenship. She obtained a visiting professorship at the Massachusetts Institute of Technology, and then in 1979 began working at Mathematical Reviews, where she would remain for the rest of her career. She became associate executive editor in 1984, and executive editor in 1998, the first woman to hold that position. When Mathematical Reviews shifted from being a paper review journal to an online electronic database, MathSciNet, in 1996, Kister was heavily involved in this advance. She a |
https://en.wikipedia.org/wiki/Electron%20affinity | The electron affinity (Eea) of an atom or molecule is defined as the amount of energy released when an electron attaches to a neutral atom or molecule in the gaseous state to form an anion.
X(g) + e− → X−(g) + energy
This differs by sign from the energy change of electron capture ionization. The electron affinity is positive when energy is released on electron capture.
In solid state physics, the electron affinity for a surface is defined somewhat differently (see below).
Measurement and use of electron affinity
Electron affinity is measured for atoms and molecules in the gaseous state only, since in a solid or liquid state their energy levels would be changed by contact with other atoms or molecules.
A list of the electron affinities was used by Robert S. Mulliken to develop an electronegativity scale for atoms, equal to the average of the electron affinity and ionization potential. Other theoretical concepts that use electron affinity include electronic chemical potential and chemical hardness. For example, a molecule or atom that has a more positive electron affinity than another is often called an electron acceptor, and the one with the less positive value an electron donor. Together they may undergo charge-transfer reactions.
Sign convention
To use electron affinities properly, it is essential to keep track of sign. For any reaction that releases energy, the change ΔE in total energy has a negative value and the reaction is called an exothermic process. Electron capture for almost all non-noble gas atoms involves the release of energy and thus is exothermic. The positive values that are listed in tables of Eea are amounts or magnitudes. It is the word "released" within the definition "energy released" that supplies the negative sign to ΔE. Confusion arises in mistaking Eea for a change in energy, ΔE, in which case the positive values listed in tables would be for an endo- not exo-thermic process. The relation between the two is Eea = −ΔE(attach).
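A worked example using standard textbook values (added for illustration, not from the excerpt): chlorine has one of the largest electron affinities, about 349 kJ/mol.

```latex
\mathrm{Cl(g)} + e^- \rightarrow \mathrm{Cl^-(g)}, \qquad
\Delta E_{\text{attach}} \approx -349\ \mathrm{kJ/mol}
\quad\Longrightarrow\quad
E_{ea} = -\Delta E_{\text{attach}} \approx +349\ \mathrm{kJ/mol}.
```

The attachment releases energy (exothermic, negative ΔE), so the tabulated Eea is positive.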
However, if |
https://en.wikipedia.org/wiki/Lamination%20%28topology%29 | In topology, a branch of mathematics, a lamination is a "topological space partitioned into subsets", or a decoration (a structure or property at a point) of a manifold in which some subset of the manifold is partitioned into sheets of some lower dimension, with the sheets locally parallel.
A lamination of a surface is a partition of a closed subset of the surface into smooth curves.
It may or may not be possible to fill the gaps in a lamination to make a foliation.
Examples
A geodesic lamination of a 2-dimensional hyperbolic manifold is a closed subset together with a foliation of this closed subset by geodesics. These are used in Thurston's classification of elements of the mapping class group and in his theory of earthquake maps.
Quadratic laminations are laminations that remain invariant under the angle-doubling map; they are associated with quadratic maps. A quadratic lamination is a closed collection of chords in the unit disc, and serves as a topological model of the Mandelbrot set or of Julia sets.
See also
Train track (mathematics)
Orbit portrait
Notes |
https://en.wikipedia.org/wiki/Parallax%20scrolling | Parallax scrolling is a technique in computer graphics where background images move past the camera more slowly than foreground images, creating an illusion of depth and distance in a 2D scene. The technique grew out of the multiplane camera technique used in traditional animation since the 1930s.
Parallax scrolling was popularized in 2D computer graphics with its introduction to video games in the early 1980s. The arcade video game Jump Bug (1981) used a limited form of parallax scrolling: the main scene scrolled while the starry night sky was fixed and clouds moved slowly, adding depth to the scenery. The following year, Moon Patrol (1982) implemented a full form of parallax scrolling, with three separate background layers scrolling at different speeds, simulating the distance between them. Moon Patrol is often credited with popularizing parallax scrolling. Jungle King (1982), later called Jungle Hunt, also had parallax scrolling, and was released a month after Moon Patrol in June 1982.
Methods
There are four main methods of parallax scrolling used in titles for arcade system boards, video game consoles, and personal computers.
Layer method
Some display systems support multiple background layers that can be scrolled independently in horizontal and vertical directions and composited on one another, simulating a multiplane camera. On such a display system, a game can produce parallax by simply changing each layer's position by a different amount in the same direction. Layers that move more quickly are perceived to be closer to the virtual camera. Layers can be placed in front of the playfield—the layer containing the objects with which the player interacts—for various reasons such as to provide increased dimension, obscure some of the action of the game, or distract the player.
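A minimal sketch of the layer method (illustrative; the layer names and factors are made up): each layer is drawn at the camera position scaled by a per-layer factor, so layers with smaller factors scroll more slowly and read as farther away.

```python
# Illustrative parallax-by-layers sketch.
layers = [
    {"name": "stars",  "factor": 0.1},   # far background: scrolls slowest
    {"name": "clouds", "factor": 0.4},   # middle layer
    {"name": "ground", "factor": 1.0},   # playfield: scrolls with the camera
]

def layer_offsets(camera_x: float) -> dict[str, float]:
    """Horizontal draw offset of each layer for a given camera position."""
    return {layer["name"]: -camera_x * layer["factor"] for layer in layers}

print(layer_offsets(100.0))  # {'stars': -10.0, 'clouds': -40.0, 'ground': -100.0}
```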
Sprite method
Programmers may also make pseudo-layers of sprites—individually controllable moving objects drawn by hardware on top of or |
https://en.wikipedia.org/wiki/Apache%20Kudu | Apache Kudu is a free and open source column-oriented data store of the Apache Hadoop ecosystem. It is compatible with most of the data processing frameworks in the Hadoop environment. It completes Hadoop's storage layer, enabling fast analytics on fast data.
The open source project to build Apache Kudu began as an internal project at Cloudera. The first version, Apache Kudu 1.0, was released on 19 September 2016.
Comparison with other storage engines
Kudu was designed and optimized for OLAP workloads. Like HBase, it is a real-time store that supports key-indexed record lookup and mutation. Kudu differs from HBase in that Kudu's data model is a more traditional relational model, while HBase is schemaless. Kudu's "on-disk representation is truly columnar and follows an entirely different storage design than HBase/Bigtable".
See also
List of column-oriented DBMSes |
https://en.wikipedia.org/wiki/Maculopapular%20rash | A maculopapular rash is a type of rash characterized by a flat, red area on the skin that is covered with small confluent bumps. It may only appear red in lighter-skinned people. The term "maculopapular" is a compound: macules are small, flat discolored spots on the surface of the skin; and papules are small, raised bumps. It is also described as erythematous, or red.
This type of rash is common in several diseases and medical conditions, including scarlet fever, measles, Ebola virus disease, rubella, HIV, secondary syphilis (in congenital syphilis, which is otherwise asymptomatic, the newborn may present with this type of rash), erythrovirus (parvovirus B19), chikungunya (alphavirus), zika, smallpox (which has been eradicated), varicella (when vaccinated persons exhibit symptoms from the modified form), heat rash, and sometimes dengue fever. It is also a common manifestation of a skin reaction to the antibiotic amoxicillin or to chemotherapy drugs. Cutaneous infiltration of leukemic cells may also have this appearance. Maculopapular rash is seen in graft-versus-host disease (GVHD) developed after a hematopoietic stem cell transplant (bone marrow transplant), where it can appear within one week or several weeks after the transplant. In the case of GVHD, the maculopapular lesions may progress to a condition similar to toxic epidermal necrolysis. In addition, this is the type of rash that some patients with Ebola virus hemorrhagic (EBO-Z) fever present, though it can be hard to see on dark-skinned people. It is also seen in patients with Marburg hemorrhagic fever, caused by a filovirus not unlike Ebola.
This type of rash can result from large doses of niacin or no-flush niacin (2000–2500 mg), used for the management of low HDL cholesterol.
This type of rash can also be a symptom of seabather's eruption. This stinging, pruritic, maculopapular rash affects swimmers in some Atlantic locales (e.g., Florida, the Caribbean, Long Island). It is caused by hypersensitivity to stings from the |
https://en.wikipedia.org/wiki/Na%C3%AFve%20physics | Naïve physics or folk physics is the untrained human perception of basic physical phenomena. In the field of artificial intelligence the study of naïve physics is a part of the effort to formalize the common knowledge of human beings.
Many ideas of folk physics are simplifications, misunderstandings, or misperceptions of well-understood phenomena, incapable of giving useful predictions of detailed experiments, or simply are contradicted by more thorough observations. They may sometimes be true, be true in certain limited cases, be true as a good first approximation to a more complex effect, or predict the same effect but misunderstand the underlying mechanism.
Naïve physics is characterized by a mostly intuitive understanding humans have about objects in the physical world. Certain notions of the physical world may be innate.
Examples
Some examples of naïve physics include commonly understood, intuitive, or everyday-observed rules of nature:
What goes up must come down
A dropped object falls straight down
A solid object cannot pass through another solid object
A vacuum sucks things towards it
An object is either at rest or moving, in an absolute sense
Two events are either simultaneous or they are not
Many of these and similar ideas formed the basis for the first works in formulating and systematizing physics by Aristotle and the medieval scholastics in Western civilization. In the modern science of physics, they were gradually contradicted by the work of Galileo, Newton, and others. The idea of absolute simultaneity survived until 1905, when the special theory of relativity and its supporting experiments discredited it.
Psychological research
The increasing sophistication of technology makes possible more research on knowledge acquisition. Researchers measure physiological responses such as heart rate and eye movement in order to quantify the reaction to a particular stimulus. Concrete physiological data is helpful when observing infant behavior, becau |
https://en.wikipedia.org/wiki/LEGO%20%28proof%20assistant%29 | LEGO is a proof assistant developed by Randy Pollack at the University of Edinburgh. It implements several type theories: the Edinburgh Logical Framework (LF), the Calculus of Constructions (CoC), the Generalized Calculus of Constructions (GCC) and the Unified Theory of Dependent Types (UTT). |
https://en.wikipedia.org/wiki/Testosterone%E2%80%93cortisol%20ratio | In human biology, the testosterone–cortisol ratio describes the ratio between testosterone, the primary male sex hormone and an anabolic steroid, and cortisol, another steroid hormone, in the human body.
The ratio is often used as a biomarker of physiological stress in athletes during training, during athletic performance, and during recovery, and has been explored as a predictor of performance. At least among weight-lifters, the ratio tracks linearly with increases in training volume over the first year of training but the relationship breaks down after that. A lower ratio in weight-lifters just prior to performance appears to predict better performance.
The ratio has been studied as a possible biomarker for criminal aggression, but as of 2009 its usefulness was uncertain. |
https://en.wikipedia.org/wiki/Biotope | A biotope is an area of uniform environmental conditions providing a living place for a specific assemblage of plants and animals. Biotope is almost synonymous with the term "habitat", which is more commonly used in English-speaking countries. However, in some countries these two terms are distinguished: the subject of a habitat is a population, the subject of a biotope is a biocoenosis or "biological community".
It is an English loanword derived from the German Biotop, which in turn came from the Greek bios (meaning 'life') and topos ('place'). (The related word geotope has made its way into the English language by the same route, from the German Geotop.)
Ecology
The concept of a biotope was first advocated by Ernst Haeckel (1834–1919), a German zoologist famous for the recapitulation theory. In his book General Morphology (1866), which defines the term "ecology", he stresses the importance of the concept of habitat as a prerequisite for an organism's existence. Haeckel also explains that within an ecosystem, the biota is shaped by environmental factors (such as water, soil, and geographical features) and by interaction among living things; the original idea of a biotope was thus closely related to evolutionary theory. Following this, F. Dahl, a professor at the Berlin Zoological Museum, referred to such an ecological system as a "biotope" (Biotop) in 1908.
Biotope restoration
Although the term "biotope" is considered a technical word in ecology, in recent years it has been used more generally in administrative and civic activities. Since the 1970s the term has received great attention throughout Europe (mainly Germany) as a keyword for the preservation, regeneration, and creation of natural environmental settings. Used in this context, the term often refers to a smaller, more specific ecology close to everyday human life. In Germany especially, activities related to regenerating biotopes are enthusiastically received. These ac |
https://en.wikipedia.org/wiki/Crop%20coefficient | Crop coefficients are properties of plants used in predicting evapotranspiration (ET). The most basic crop coefficient, Kc, is simply the ratio of the ET observed for the crop studied to that observed for a well-calibrated reference crop under the same conditions.
Potential evapotranspiration (PET) is the evaporation and transpiration that could potentially occur if a field of the crop had an ideal, unlimited water supply. Reference ET (RET) is the ET of the reference crop, often denoted ET0.
Even in agricultural crops, where ideal conditions are approximated as much as is practical, plants are not always growing (and therefore transpiring) at their theoretical potential. Plants have growth stages and states of health induced by a variety of environmental conditions.
RET usually represents the PET of the reference crop's most active growth. Kc then becomes a function or series of values specific to the crop of interest through its growing season. These can be quite elaborate in the case of certain maize varieties, but tend to use a trapezoidal or leaf area index (LAI) curve for common crop or vegetation canopies.
Stress coefficients, Ks, account for diminished ET due to specific stress factors. These are often assumed to combine by multiplication.
Water stress is the most ubiquitous stress factor, often denoted as Kw. Stress coefficients tend to be functions ranging between 0 and 1. The simplest are linear, but thresholds are appropriate for some toxicity responses. Crop coefficients can exceed 1 when the crop evapotranspiration exceeds that of RET. |
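In symbols (a standard FAO-56-style formulation, added here for clarity):

```latex
ET_c = K_c \, ET_0, \qquad
ET_{c,\mathrm{adj}} = \Big(\prod_i K_{s,i}\Big) \, K_c \, ET_0,
\qquad 0 \le K_{s,i} \le 1,
```

so an unstressed crop uses the bare Kc curve, and each stress factor scales the estimate down multiplicatively.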
https://en.wikipedia.org/wiki/Caleb%20Gattegno | Caleb Gattegno (1911–1988) was an Egyptian educator, psychologist, and mathematician. He is considered one of the most influential and prolific mathematics educators of the twentieth century. He is best known for introducing new approaches to teaching and learning mathematics (Visible & Tangible Math), foreign languages (The Silent Way) and reading (Words in Color). Gattegno also developed pedagogical materials for each of these approaches, and was the author of more than 120 books and hundreds of articles largely on the topics of education and human development.
Background
Gattegno was born November 11, 1911, in Alexandria, Egypt. His parents, Menachem Gattegno, a Spanish merchant, and his wife, Bchora, had nine children. Because of poverty, Gattegno and his siblings had to work from a young age. The future mathematician had no formal education until he started to learn on his own at the age of 14. He took external examinations in Cairo when he was 20 years old and obtained a teaching license in physics and chemistry from the University of Marseille.
He moved to England, where he became involved in teacher education and helped establish the Association of Teachers of Mathematics and the International Commission for the Study and Improvement of Mathematics Teaching. He taught at several universities including the University of Liverpool and the University of London.
Pedagogical approach
Gattegno's pedagogical approach is characterised by propositions based on the observation of human learning in many and varied situations. Three of these propositions are described below. He was also influenced by the works of Jean Piaget and worked on introducing the implications of Piaget's cognitive theory into education.
Learning and effort
Gattegno noticed that there is an "energy budget" for learning. Human beings have a highly developed sense of the economics of their own energy and are very sensitive to the cost involved in using it. It is therefore esse |
https://en.wikipedia.org/wiki/Flux%20%28biology%29 | In general, flux in biology relates to movement of a substance between compartments. There are several cases where the concept of flux is important.
The movement of molecules across a membrane: in this case, flux is defined by the rate of diffusion or transport of a substance across a permeable membrane. Except in the case of active transport, net flux is directly proportional to the concentration difference across the membrane, the surface area of the membrane, and the membrane permeability constant.
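For the passive case this is commonly written as (a standard form, added for clarity):

```latex
J_{\mathrm{net}} = P \, A \, (C_1 - C_2)
```

where P is the membrane permeability constant, A the membrane surface area, and C1 − C2 the concentration difference across the membrane.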
In ecology, flux is often considered at the ecosystem level – for instance, accurate determination of carbon fluxes using techniques like eddy covariance (at a regional and global level) is essential for modeling the causes and consequences of global warming.
Metabolic flux refers to the rate of flow of metabolites through a biochemical network, along a linear metabolic pathway, or through a single enzyme. A calculation may also be made of carbon flux or flux of other elemental components of biomolecules (e.g. nitrogen). The general unit of flux is chemical mass/time (e.g., micromole/minute; mg/kg/minute). Flux rates are dependent on a number of factors, including: enzyme concentration; the concentration of precursor, product, and intermediate metabolites; post-translational modification of enzymes; and the presence of metabolic activators or repressors. Metabolic flux in biologic systems can refer to biosynthesis rates of polymers or other macromolecules, such as proteins, lipids, polynucleotides, or complex carbohydrates, as well as the flow of intermediary metabolites through pathways. Metabolic control analysis and flux balance analysis provide frameworks for understanding metabolic fluxes and their constraints.
Measuring movement
Flux is the net movement of particles across a specified area in a specified period of time. The particles may be ions or molecules, or they may be larger, like insects, muskrats or cars. The units of time can be anything from milli |
https://en.wikipedia.org/wiki/Sign%20of%20the%20horns | The sign of the horns is a hand gesture with a variety of meanings and uses in various cultures. It is formed by extending the index and little fingers while holding the middle and ring fingers down with the thumb.
Religious and superstitious meaning
In Hatha Yoga, a similar hand gesture – with the tips of the middle and ring fingers touching the thumb – is known as the Apāna mudrā, a gesture believed to rejuvenate the body. In Indian classical dance forms, it symbolizes the lion. In Buddhism, the Karana mudrā is seen as an apotropaic gesture to expel demons, remove negative energy, and ward off evil. It is commonly found on depictions of Gautama Buddha. It is also found on the Song dynasty statue of Laozi, the founder of Taoism, on Mount Qingyuan, China.
An apotropaic usage of the sign can be seen in Italy and in other Mediterranean cultures where, when confronted with unfortunate events, or simply when these events are mentioned, the sign of the horns may be given to ward off further bad luck. It is also used traditionally to counter or ward off the "evil eye" (malocchio). In Italy specifically, the gesture is known as the corna ('horns'). With fingers pointing down, it is a common Mediterranean apotropaic gesture, by which people seek protection in unlucky situations (a Mediterranean equivalent of knocking on wood). The President of the Italian Republic, Giovanni Leone, startled the media when, while in Naples during an outbreak of cholera, he shook the hands of patients with one hand while with the other, behind his back, he superstitiously made the corna, presumably to ward off the disease or in reaction to being confronted by such misfortune.
In Italy and other parts of the Mediterranean region, the gesture must usually be performed with the fingers tilting downward or in a leveled position not pointed at someone and without movement to signify the warding off of bad luck; in the same region and elsewhere, the gesture may take a different, offensive, and insulting meaning if it is performed with fingers upw |
https://en.wikipedia.org/wiki/Crumpling | In geometry and topology, crumpling is the process whereby a sheet of paper or other two-dimensional manifold undergoes disordered deformation to yield a three-dimensional structure comprising a random network of ridges and facets with variable density. The geometry of crumpled structures is the subject of some interest to the mathematical community within the discipline of topology. Crumpled paper balls have been studied and found to exhibit surprisingly complex structures with compressive strength resulting from frictional interactions at locally flat facets between folds. The unusually high compressive strength of crumpled structures relative to their density is of interest in the disciplines of materials science and mechanical engineering.
Significance
The packing of a sheet by crumpling is a complex phenomenon that depends on material parameters and the packing protocol. Thus the crumpling behaviour of foil, paper and poly-membranes differs significantly and can be interpreted on the basis of material foldability. The high compressive strength exhibited by dense crumple-formed cellulose paper is of interest for impact-dissipation applications and has been proposed as an approach to utilising waste paper.
From a practical standpoint, crumpled balls of paper are commonly used as toys for domestic cats. |
https://en.wikipedia.org/wiki/Rhamphichthyidae | Sand knifefish are freshwater electric fish of the family Rhamphichthyidae, from freshwater habitats in South America.
Like most members of the Gymnotiformes, they have elongated, compressed bodies and electric organs. The long anal fin extends from before the pectoral fins to the tip of the tail. There is no dorsal fin. Teeth are absent in the oral jaws, and the snout is very long and tubular. The nostrils are very close together. This group is sometimes known as the tubesnout knifefishes for this reason.
They are nocturnal and burrow in the sand during the day.
Genera
According to FishBase there are only three genera in this family, but a comprehensive molecular study from 2015 showed that two additional genera belong here (formerly in Hypopomidae, marked with stars* in list), and this has been followed by recent authorities.
Gymnorhamphichthys
Hypopygus*
Iracema
Rhamphichthys
Steatogenys*
See also
List of fish families |
https://en.wikipedia.org/wiki/Learning%20automaton | A learning automaton is a type of machine learning algorithm studied since the 1970s. Learning automata select their current action based on past experiences with the environment. They fall within the scope of reinforcement learning when the environment is stochastic and a Markov decision process (MDP) is used.
History
Research in learning automata can be traced back to the work of Michael Lvovitch Tsetlin in the early 1960s in the Soviet Union. Together with some colleagues, he published a collection of papers on how to use matrices to describe automata functions. Additionally, Tsetlin worked on reasonable and collective automata behaviour, and on automata games. Learning automata were also investigated by researchers in the United States in the 1960s. However, the term learning automaton was not used until Narendra and Thathachar introduced it in a survey paper in 1974.
Definition
A learning automaton is an adaptive decision-making unit situated in a random environment that learns the optimal action through repeated interactions with its environment. The actions are chosen according to a specific probability distribution which is updated based on the environment response the automaton obtains by performing a particular action.
With respect to the field of reinforcement learning, learning automata are characterized as policy iterators. In contrast to other reinforcement learners, policy iterators directly manipulate the policy π. Another example for policy iterators are evolutionary algorithms.
Formally, Narendra and Thathachar define a stochastic automaton to consist of:
a set X of possible inputs,
a set Φ = { Φ1, ..., Φs } of possible internal states,
a set α = { α1, ..., αr } of possible outputs, or actions, with r ≤ s,
an initial state probability vector p(0) = ≪ p1(0), ..., ps(0) ≫,
a computable function A which after each time step t generates p(t+1) from p(t), the current input, and the current state, and
a function G: Φ → α which generates the outpu |
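To make the update function A concrete, here is a sketch of one classical choice, the linear reward–inaction (L_RI) scheme (a standard textbook example, not necessarily the scheme in the truncated definition above; the two-action environment is made up): the probability of the chosen action grows on a favorable response, and nothing changes on a penalty.

```python
import random

def l_ri_update(p, chosen, rewarded, a=0.1):
    """Linear reward-inaction: on reward, shift probability mass toward the
    chosen action; on penalty, leave the distribution unchanged."""
    if not rewarded:
        return p[:]
    return [pi + a * (1.0 - pi) if i == chosen else (1.0 - a) * pi
            for i, pi in enumerate(p)]

# Toy environment: action 1 is rewarded 80% of the time, action 0 only 20%.
p = [0.5, 0.5]
for _ in range(500):
    action = random.choices(range(2), weights=p)[0]
    rewarded = random.random() < (0.8 if action == 1 else 0.2)
    p = l_ri_update(p, action, rewarded)
print(p)  # p[1] is typically close to 1: the automaton learned the better action
```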
https://en.wikipedia.org/wiki/Pronunciation%20of%20GIF | The pronunciation of GIF, an acronym for the Graphics Interchange Format, has been disputed since the 1990s. Popularly rendered in English as a one-syllable word, the acronym is most commonly pronounced /ɡɪf/ (with a hard g as in gift) or /dʒɪf/ (with a soft g as in gem), differing in the phoneme represented by the letter G. Many public figures and institutions have taken sides in the debate; Steve Wilhite, the creator of the image file format, gave a speech at the 2013 Webby Awards arguing for the soft-g pronunciation. Others have pointed to the term's origin as an abbreviation of the hard-g word graphics to argue for the other pronunciation.
The controversy stems partly from the fact that there is no general rule for how the letter sequence gi is to be pronounced; the hard g prevails in words such as gift, while the soft g is used in others such as ginger. In addition, some speakers enunciate each letter in the acronym, producing /ˌdʒiː.aɪˈɛf/. English dictionaries generally accept both main alternatives as valid, and linguistic analyses show no clear advantage for either based on the pronunciation frequencies of similar English words. The pronunciation of the acronym can also vary in languages other than English.
Background
The Graphics Interchange Format (GIF) is an image file format developed in 1987 by Steve Wilhite at the American online service provider CompuServe. GIFs are popularly used to display short, looped animations. The acronym GIF, commonly pronounced as a monosyllable, has a disputed pronunciation. Some individuals pronounce the word with a hard g, as in /ɡɪf/, whereas others pronounce it with a soft g, as in /dʒɪf/. A minority prefer to enunciate each letter of the acronym individually, creating the pronunciation /ˌdʒiː.aɪˈɛf/.
Wilhite and the team who developed the file format included in the technical specifications that the acronym was to be pronounced with a soft g. In the specifications, the team wrote that "choosy programmers choose ... 'jif, in homage to the peanut butter company Jif' |
https://en.wikipedia.org/wiki/214%20%28number%29 | 214 (two hundred [and] fourteen) is the natural number following 213 and preceding 215.
In mathematics
214 is a composite number (with prime factorization 2 × 107) and a triacontakaiheptagonal number (37-gonal number).
214!! − 1 is a 205-digit prime number.
The 11th perfect number, 2^106 × (2^107 − 1), has 214 divisors.
214 is the number of regions into which a figure made up of a row of 5 adjacent congruent rectangles is divided when the diagonals of all possible rectangles are drawn.
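Three of the claims above can be checked quickly (illustrative sketch; standard library only):

```python
from math import prod

print(2 * 107)  # 214: prime factorization 2 x 107

# The 11th perfect number is 2^106 x (2^107 - 1), with 2^107 - 1 prime,
# so its divisor count is (106 + 1) x 2.
print((106 + 1) * 2)  # 214

# 214!! - 1 is claimed to be a 205-digit prime; the digit count is cheap to confirm.
print(len(str(prod(range(2, 215, 2)) - 1)))  # 205
```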
In other fields
214 is a song by Rivermaya.
214 Aschera is a Main belt asteroid.
E214 is the E number of Ethylparaben.
The Bell 214 is a helicopter.
The Tupolev 214 is an airliner.
Type 214 submarine
There are several highways numbered 214.
Form DD 214 documents discharge from the U.S. Armed Forces.
The number of Wainwright-listed summits of the English Lake District
214 is also:
The first area code of metropolitan Dallas, Texas
The number of Chinese radicals for the writing of Chinese characters according to the 1716 Kangxi Dictionary.
SMTP status code for a reply message to a help command
The Dewey Decimal Classification for Theodicy (the problem of evil). |
https://en.wikipedia.org/wiki/Quran%20code | The term Quran code (also known as Code 19) refers to the claim that the Quranic text contains a hidden, mathematically complex code. Advocates think that the code represents a mathematical proof of the divine authorship of the Quran. Proponents of the Quran code claim that the code is based on statistical procedures; however, this claim has not been validated by any independent mathematical or scientific institution.
History
In 1969, Rashad Khalifa, an Egyptian-American biochemist, began analyzing the separated letters of the Quran (also called Quranic initials or Muqattaʿat), examining the text for certain sequences of numbers. In 1973 he published the book Miracle of the Quran: Significance of the Mysterious Alphabets, in which he describes the Quranic initials through enumerations and distributions.
In 1974, Khalifa claimed to have discovered a mathematical code hidden in the Quran, a code based around the number 19. He wrote the book The Computer Speaks: God's Message to the World, in which he thematizes this Quran code. He relies on Surah 74, verse 30 to prove the significance of the number: "Over it is nineteen,".
Proponents of the code include United Submitters International (an association initiated by Rashad Khalifa) as well as some Quranists and traditional Muslims.
Example
Believers in the Quran code often use certain word counts, checksums and digit sums to legitimize the code.
Edip Yüksel, a Turkish Quranist author and colleague of Rashad Khalifa, makes the following claims in his book Nineteen: God's Signature in Nature and Scripture:
The Bismillah (bismi ʾllāhi ʾr-raḥmāni ʾr-raḥīmi), the Quranic opening formula, which, with one exception, is at the beginning of every Surah of the Quran, consists of exactly 19 letters.
The first word of the Bismillah, Ism (name), without contraction, occurs 19 times in the Quran (19×1). [Also no plural forms, or those with pronoun endings]
The second word of the Bismillah, Allah (God), occurs 2698 times (19×142). |
https://en.wikipedia.org/wiki/44%20%28number%29 | 44 (forty-four) is the natural number following 43 and preceding 45.
In mathematics
Forty-four is a composite number; a square-prime of the form p² × q, the fourth number of this form and of the form 2² × q, where q is a higher prime.
44 is a repdigit and palindromic number in decimal. It is the tenth 10-happy number, and the fourth octahedral number.
It is the first member of the first cluster of two square-primes of the form p² × q, specifically {2² × 11 = 44, 3² × 5 = 45}. The next such cluster of two square-primes comprises {2² × 29 = 116, 3² × 13 = 117}.
44 has an aliquot sum of 40, within an aliquot sequence of three composite numbers (44, 40, 50, 43, 1, 0) that ends at the prime 43 in the 43-aliquot tree.
Since the greatest prime factor of 44² + 1 = 1937 is 149, which is more than twice 44, 44 is a Størmer number. Given Euler's totient function, φ(44) = 20 and φ(69) = 44.
44 is a tribonacci number, preceded by 7, 13, and 24, whose sum it equals.
44 is the number of derangements of 5 items.
There are only 44 kinds of Schwarz triangles, aside from the infinite dihedral family of triangles (p 2 2) with p = {2, 3, 4, ...}.
There are 44 distinct stellations of the truncated cube and truncated octahedron, per Miller's rules.
Of the 227 four-dimensional crystallographic point groups in total, 44 contain dual enantiomorphs, or mirror images.
There are forty-four classes of finite simple groups that arise from four general families of such groups:
Two of the families are the cyclic groups (of prime order) and the alternating groups.
Sixteen families of groups stem from simple groups of Lie type.
Twenty-six groups are sporadic.
Sometimes the Tits group is considered a 17th non-strict simple group of Lie type, or a 27th sporadic group, which would yield a total of 45 finite simple groups.
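Several of the arithmetic claims above can be verified directly (illustrative sketch):

```python
from math import gcd

def aliquot(n):
    """Sum of proper divisors of n."""
    return sum(d for d in range(1, n) if n % d == 0)

def derangements(n):
    """D(n) = (n - 1) * (D(n-1) + D(n-2)), with D(0) = 1, D(1) = 0."""
    d = [1, 0]
    for k in range(2, n + 1):
        d.append((k - 1) * (d[k - 1] + d[k - 2]))
    return d[n]

print(7 + 13 + 24)                            # 44: tribonacci sum
print(derangements(5))                        # 44: derangements of 5 items
print(aliquot(44), aliquot(40), aliquot(50))  # 40 50 43: the aliquot sequence
print(sum(1 for k in range(1, 45) if gcd(k, 44) == 1))  # 20 = phi(44)
print(sum(1 for k in range(1, 70) if gcd(k, 69) == 1))  # 44 = phi(69)
```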
In science
The atomic number of ruthenium
Astronomy
Messier object M44, a magnitude 4.0 open cluster in the constellation Cancer, also known as the Beehive Cluster
The New General Catalogue object NGC 44, a doubl |
https://en.wikipedia.org/wiki/Age%20determination%20in%20fish | Knowledge of fish age characteristics is necessary for stock assessments, and to develop management or conservation plans. Size is generally associated with age; however, there are variations in size at any particular age for most fish species making it difficult to estimate one from the other with precision. Therefore, researchers interested in determining a fish age look for structures which increase incrementally with age. The most commonly used techniques involve counting natural growth rings on the scales, otoliths, vertebrae, fin spines, eye lenses, teeth, or bones of the jaw, pectoral girdle, and opercular series. Even reliable aging techniques may vary among species; often, several different bony structures are compared among a population in order to determine the most accurate method.
History
Aristotle (ca. 340 B.C.) may have been the first scientist to speculate on the use of hard parts of fishes to determine age, stating in Historia Animalium that "the age of a scaly fish may be told by the size and hardness of its scales." However, it was not until the development of the microscope that more detailed studies were performed on the structure of scales. Antonie van Leeuwenhoek developed improved lenses, which he went on to use in his creation of microscopes. He had a wide range of interests, including the structure of fish scales from the European eel (Anguilla anguilla) and the burbot (Lota lota), species which were previously thought not to have scales. He observed that the scales contained "circular lines" and that each scale had the same number of these lines, and correctly inferred that the number of lines correlated with the age of the fish. He also correctly associated the darker areas of scale growth with the season of slowed growth, a characteristic he had previously observed in tree trunks. Leeuwenhoek's work went largely unnoticed by fisheries researchers, and the discovery of fish aging structures is widely credited to Hans Hederström (e.g., Ricker 19 |
https://en.wikipedia.org/wiki/Proton%20therapy | In medicine, proton therapy, or proton radiotherapy, is a type of particle therapy that uses a beam of protons to irradiate diseased tissue, most often to treat cancer. The chief advantage of proton therapy over other types of external beam radiotherapy is that the dose of protons is deposited over a narrow range of depth; this results in minimal entry, exit, or scattered radiation dose to healthy nearby tissues.
When evaluating whether to treat a tumor with photon or proton therapy, physicians may choose proton therapy if it is important to deliver a higher radiation dose to targeted tissues while significantly decreasing radiation to nearby organs at risk. The American Society for Radiation Oncology Model Policy for Proton Beam therapy says proton therapy is considered reasonable if sparing the surrounding normal tissue "cannot be adequately achieved with photon-based radiotherapy" and can benefit the patient. Like photon radiation therapy, proton therapy is often used in conjunction with surgery and/or chemotherapy to most effectively treat cancer.
Description
Proton therapy is a type of external beam radiotherapy that uses ionizing radiation. In proton therapy, medical personnel use a particle accelerator to target a tumor with a beam of protons. These charged particles damage the DNA of cells, ultimately killing them by stopping their reproduction and thus eliminating the tumor. Cancerous cells are particularly vulnerable to attacks on DNA because of their high rate of division and their limited ability to repair DNA damage. Some cancers with specific defects in DNA repair may be more sensitive to proton radiation.
Proton therapy lets physicians deliver a highly conformal beam, i.e. delivering radiation that conforms to the shape and depth of the tumor and sparing much of the surrounding, normal tissue. For example, when comparing proton therapy to the most advanced types of photon therapy—intensity-modulated radiotherapy (IMRT) and volumetric modulated arc therap |
https://en.wikipedia.org/wiki/Singulation | Singulation is a method by which an RFID reader identifies a tag with a specific serial number from a number of tags in its field. This is necessary because if multiple tags respond simultaneously to a query, they will jam each other. In a typical commercial application, such as scanning a bag of groceries, potentially hundreds of tags might be within range of the reader.
When all the tags cooperate with the tag reader and follow the same anti-collision protocol, also called a singulation protocol, the tag reader can read data from each and every tag without interference from the other tags.
Collision avoidance
Generally, a collision occurs when two entities require the same resource; for example, two ships with crossing courses in a narrows. In wireless technology, a collision occurs when two transmitters transmit at the same time with the same modulation scheme on the same frequency. In RFID technology, various strategies have been developed to overcome this situation.
Tree walking
There are different methods of singulation, but the most common is tree walking, which involves asking all tags with a serial number that starts with either a 1 or 0 to respond. If more than one responds, the reader might ask for all tags with a serial number that starts with 01 to respond, and then 010. It keeps doing this until it finds the tag it is looking for. Note that if the reader has some idea of what tags it wishes to interrogate, it can considerably optimise the search order. For example with some designs of tags, if a reader already suspects certain tags to be present then those tags can be instructed to remain silent, then tree walking can proceed without interference from these.
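A sketch of the tree-walking idea (illustrative; deployed protocols such as EPC Gen2 differ in detail): the reader extends a binary prefix one bit at a time, descending only into prefixes where at least one tag replies, and reads a tag as soon as it replies alone.

```python
def singulate(tags, prefix=""):
    """Enumerate tag IDs (equal-length bit strings) by binary tree walking.
    Each call models one reader broadcast: 'tags matching this prefix, reply'."""
    responders = [t for t in tags if t.startswith(prefix)]
    if not responders:
        return []             # silence: prune this subtree
    if len(responders) == 1:
        return responders     # a lone reply is read without collision
    # collision: split the search by extending the prefix one bit
    return singulate(tags, prefix + "0") + singulate(tags, prefix + "1")

print(singulate({"0110", "0111", "1010"}))  # ['0110', '0111', '1010']
```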
This simple protocol leaks considerable information because anyone able to eavesdrop on the tag reader alone can determine all but the last bit of a tag's serial number. Thus a tag can be (largely) identified so long as the reader's signal is receivable, which is usually possible at much |
https://en.wikipedia.org/wiki/Digitale%20Gesellschaft | Digitale Gesellschaft (literally, Digital Society) is a German registered association founded in 2010, that is committed to civil rights and consumer protection in terms of internet policy.
History
The founding members of the association include Markus Beckedahl, Falk Steiner, Matthias Mehldau, Andre Meister, Markus Reuter, and John Weitzmann.
Benjamin Bergemann is a spokesman.
One of the aims of the interest group is to build a campaign infrastructure, and also to reach people who are not internet-savvy. Its founder, Markus Beckedahl, stated that "more effective advocacy toward politics and economy" is also a part of its mission.
As of May 2012, the group has approximately thirty members. According to Beckedahl, the small number of full members is necessary to build an infrastructure before opening up to more people.
Issues
The group has worked on topics such as ACTA, Open government, open data, information privacy, telecommunications data retention, copyright, and net neutrality.
In 2013, they led a demonstration at Checkpoint Charlie, during Barack Obama's visit, against the NSA surveillance program PRISM. |
https://en.wikipedia.org/wiki/Standard%20algorithms | In elementary arithmetic, a standard algorithm or method is a specific method of computation which is conventionally taught for solving particular mathematical problems. These methods vary somewhat by nation and time, but generally include exchanging, regrouping, long division, and long multiplication using a standard notation, and standard formulas for average, area, and volume. Similar methods also exist for procedures such as square root and even more sophisticated functions, but have fallen out of the general mathematics curriculum in favor of calculators (or tables and slide rules before them).
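As one concrete case, the standard long-multiplication algorithm forms one partial product per digit of the multiplier, shifts it by the digit's place value, and sums the results; a sketch in Python (illustrative, base 10):

```python
def long_multiply(a: int, b: int) -> int:
    """Grade-school long multiplication with explicit digit-by-digit carrying."""
    a_digits = [int(d) for d in str(a)][::-1]      # least significant digit first
    total = 0
    for place, b_digit in enumerate(int(d) for d in str(b)[::-1]):
        carry, partial = 0, 0
        for i, a_digit in enumerate(a_digits):
            product = a_digit * b_digit + carry    # multiply one digit pair
            partial += (product % 10) * 10 ** i    # write the result digit
            carry = product // 10                  # carry to the next column
        partial += carry * 10 ** len(a_digits)
        total += partial * 10 ** place             # shift by place value
    return total

print(long_multiply(476, 38))  # 18088, matching 476 * 38
```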
The concepts of reform mathematics, which the NCTM introduced in 1989, favor an alternative approach. It proposes that a deeper understanding of the underlying theory, rather than memorization of specific methods, will allow students to develop individual methods which solve the same problems. Students' alternative algorithms are often just as correct, efficient, and generalizable as the standard algorithms, and maintain emphasis on the meaning of the quantities involved, especially as relates to place values (something that is usually lost in the memorization of standard algorithms). The development of sophisticated calculators has made manual calculation less important (see the note on square roots, above), and cursory teaching of traditional methods has created failure among many students. Greater achievement among all types of students is among the primary goals of mathematics education put forth by NCTM. Some researchers such as Constance Kamii have suggested that elementary arithmetic, as traditionally taught, is not appropriate in elementary school. Many first editions of textbooks written to the original 1989 standard, such as TERC, deliberately discouraged teaching of any particular method, instead devoting class and homework time to the solving of nontrivial problems, which stimulate students to develop their own methods of calculation, rooted in number sense and place v |
https://en.wikipedia.org/wiki/Electric%20battery | A battery is a source of electric power consisting of one or more electrochemical cells with external connections for powering electrical devices. When a battery is supplying power, its positive terminal is the cathode and its negative terminal is the anode. The terminal marked negative is the source of electrons that will flow through an external electric circuit to the positive terminal. When a battery is connected to an external electric load, a redox reaction converts high-energy reactants to lower-energy products, and the free-energy difference is delivered to the external circuit as electrical energy. Historically the term "battery" specifically referred to a device composed of multiple cells; however, the usage has evolved to include devices composed of a single cell.
Primary (single-use or "disposable") batteries are used once and discarded, as the electrode materials are irreversibly changed during discharge; a common example is the alkaline battery used for flashlights and a multitude of portable electronic devices. Secondary (rechargeable) batteries can be discharged and recharged multiple times using an applied electric current; the original composition of the electrodes can be restored by reverse current. Examples include the lead–acid batteries used in vehicles and lithium-ion batteries used for portable electronics such as laptops and mobile phones.
Batteries come in many shapes and sizes, from miniature cells used to power hearing aids and wristwatches to, at the largest extreme, huge battery banks the size of rooms that provide standby or emergency power for telephone exchanges and computer data centers. Batteries have much lower specific energy (energy per unit mass) than common fuels such as gasoline. In automobiles, this is somewhat offset by the higher efficiency of electric motors in converting electrical energy to mechanical work, compared to combustion engines.
History
Invention
Benjamin Franklin first used the term "battery" in 1749 wh |
https://en.wikipedia.org/wiki/William%20Beckner%20%28mathematician%29 | William Beckner (born September 15, 1941) is an American mathematician, known for his work in harmonic analysis, especially geometric inequalities. He is the Paul V. Montgomery Centennial Memorial Professor in Mathematics at The University of Texas at Austin.
Education
Beckner earned his Bachelor of Science in physics from the University of Missouri in Columbia, Missouri in 1963, where he became a member of the Phi Beta Kappa Society. He later earned his Ph.D. in mathematics at Princeton University in Princeton, New Jersey, where his doctoral adviser was Elias Stein. He also completed some postgraduate work in mathematics under adviser A.P. Calderon at the University of Chicago.
Awards and honors
Salem Prize
Sloan Fellow
Fellow of the American Mathematical Society.
Selected publications
See also
Babenko–Beckner inequality
Hirschman uncertainty |
https://en.wikipedia.org/wiki/Antenna%20farm | An antenna farm, satellite dish farm or dish farm is an area dedicated to television or radio telecommunications transmitting or receiving antenna equipment, such as C, Ku or Ka band satellite dish antennas, UHF/VHF/AM/FM transmitter towers or mobile cell towers. The history of the term "antenna farm" is uncertain, but it dates to at least the 1950s.
In telecom circles, any area with more than three antennas could be referred to as an antenna farm. In the case of an AM broadcasting station (mediumwave and longwave, occasionally shortwave), the multiple mast radiators may all be part of an antenna system for a single station, while for VHF and UHF the site may be under joint management. Alternatively, a single tower with many separate antennas is often called a "candelabra tower".
Safety and security
Commercial antenna farms are managed by radio stations, television stations, satellite teleports or military organizations and are mostly very secure facilities with access limited to broadcast engineers, RF engineers or maintenance technicians. This is not only for the physical security of the location (including preventing equipment/metal theft), but also for safety: there may be a radio-frequency radiation hazard (closer to daylight or radiant heat in energy level, and much less disruptive to cellular activity than emissions from radioactive elements or X-ray machines, but still hazardous at high power) unless stations are powered down.
Locations
Where terrain and road access allow, mountaintop sites are very attractive for non-AM broadcast stations and others, because such sites increase the stations' height above average terrain, allowing them to reach further by avoiding obstructions on the ground and by increasing the radio horizon. With a clearer line of sight in both cases, more signal can be received. While the same is true of a very tall tower, like Paris's Eiffel Tower, such towers are expensive, dangerous, and difficult to access the top of, and may collect and drop large amounts of ice in winter, or even
https://en.wikipedia.org/wiki/Igor%20Pak | Igor Pak (born 1971, Moscow, Soviet Union) is a professor of mathematics at the University of California, Los Angeles, working in combinatorics and discrete probability. He formerly taught at the Massachusetts Institute of Technology and the University of Minnesota, and he is best known for his bijective proof of the hook-length formula for the number of Young tableaux, and his work on random walks. He was a keynote speaker alongside George Andrews and Doron Zeilberger at the 2006 Harvey Mudd College Mathematics Conference on Enumerative Combinatorics.
Pak is an Associate Editor for the journal Discrete Mathematics. He gave a Fejes Tóth Lecture at the University of Calgary in February 2009.
In 2018, he was an invited speaker at the International Congress of Mathematicians in Rio de Janeiro.
Background
Pak went to Moscow High School № 57. After graduating, he worked for a year at Bank Menatep.
He did his undergraduate studies at Moscow State University. He was a PhD student of Persi Diaconis at Harvard University, where he received a doctorate in Mathematics in 1997, with a thesis titled Random Walks on Groups: Strong Uniform Time Approach. Afterwards, he worked with László Lovász as a postdoc at Yale University. He was a fellow at the Mathematical Sciences Research Institute and a long-term visitor at the Hebrew University of Jerusalem. |
https://en.wikipedia.org/wiki/Everything | Everything, every-thing, or every thing, is all that exists; it is the opposite of nothing, or its complement. It is the totality of things relevant to some subject matter. Without expressed or implied limits, it may refer to anything. The universe is everything that exists theoretically, though a multiverse may exist according to theoretical cosmology predictions. It may refer to an anthropocentric worldview, or the sum of human experience, history, and the human condition in general. Every object and entity is a part of everything, including all physical bodies and in some cases all abstract objects.
Scope
In ordinary conversation, everything usually refers only to the totality of things relevant to the subject matter. When there is no expressed limitation, everything may refer to the universe, or the world.
The universe is most commonly defined as everything that physically exists: the entirety of time, all forms of matter, energy and momentum, and the physical laws and constants that govern them. However, the term "universe" may be used in slightly different contextual senses, denoting such concepts as the cosmos, the world, or nature. According to some speculations, this universe may be one of many disconnected universes, which are collectively denoted as the multiverse. In the bubble universe theory, there are an infinite variety of universes, each with different physical constants. In the many-worlds hypothesis, new universes are spawned with every quantum measurement. By definition, these speculations cannot currently be tested experimentally, yet, if multiple universes do exist, they would still be part of everything.
Especially in a metaphysical context, World may refer to everything that constitutes reality and the universe: see World (philosophy). However, world may only refer to Earth envisioned from an anthropocentric or human worldview, as a place inhabited by human beings.
In theoretical physics
In theoretical physics, a theory of everything (TOE) is a hyp |
https://en.wikipedia.org/wiki/Fabimycin | Fabimycin is a newly developed antibiotic candidate which is effective against gram-negative bacteria, an unusually problematic class of bacteria that use thicker cell walls and molecular efflux pumps to protect themselves by preventing antibiotics from reaching the inside of the cell.
Antibiotic resistance
Global deaths attributable to antimicrobial resistance (AMR) numbered 1.27 million in 2019. That year, AMR may have contributed to 5 million deaths and one in five people who died due to AMR were children under five years old. The European Centre for Disease Prevention and Control calculated that in 2015 there were 671,689 infections in the EU and European Economic Area caused by antibiotic-resistant bacteria, resulting in 33,110 deaths. Most were acquired in healthcare settings.
History
Researchers modified the structure of Debio-1452, an antibiotic under development that is active against gram-positive bacteria, and its derivative, which is moderately effective against non-resistant gram-negative bacteria. The drug inhibits the bacterial enzyme FabI, which is an important enzyme in bacterial fatty acid biosynthesis. Clinical trials of inhibitors targeting the enzyme for use against S. aureus (gram-positive) infections have reached Phase 2.
Fabimycin was tested in mice against more than 200 colonies of resistant bacteria, across 54 strains of E. coli, Klebsiella pneumoniae and Acinetobacter baumannii. It cleared up pneumonia and urinary tract infections, pushing bacteria levels lower than before infection in mouse models.
Further, it did not affect some types of commensal bacteria present in the gut microbiome.
See also
Multiple drug resistance
Methicillin-resistant Staphylococcus aureus |
https://en.wikipedia.org/wiki/Double%20suspension%20theorem | In geometric topology, the double suspension theorem of James W. Cannon and Robert D. Edwards states that the double suspension S2X of a homology sphere X is a topological sphere.
If X is a piecewise-linear homology sphere but not a sphere, then its double suspension S2X (with a triangulation derived by applying the double suspension operation to a triangulation of X) is an example of a triangulation of a topological sphere that is not piecewise-linear. The reason is that, unlike in piecewise-linear manifolds, the link of one of the suspension points is not a sphere.
See also |
https://en.wikipedia.org/wiki/JSFiddle | JSFiddle is an online IDE service and online community for testing and showcasing user-created and collaborative HTML, CSS and JavaScript code snippets, known as 'fiddles'. It allows for simulated AJAX calls. In 2019, JSFiddle was ranked the second most popular online IDE by the PopularitY of Programming Language (PYPL) index based on the number of times it was searched, directly behind Cloud9 IDE, worldwide and in the USA.
Concept
JSFiddle is an online IDE which is designed to allow users to edit and run HTML, JavaScript, and CSS code on a single page. Its interface is minimalist and split into four main frames, which correspond to editable HTML, JavaScript and CSS fields and a result field which displays the user's project after it is run. Early on, JSFiddle adopted a smart source-code editor with programming features.
As of 2020, JSFiddle uses CodeMirror to support its editable fields, providing multicursors, syntax highlighting, syntax verification (linter), brace matching, auto indentation, autocompletion, code/text folding, Search and Replace to assist web developers in their actions. On the left, a sidebar allows users to integrate external resources such as external CSS stylesheets and external JavaScript libraries. The most popular JavaScript frameworks and CSS frameworks are suggested to users and available via a click.
JSFiddle allows users to publicly save their code an uncapped number of times for free. Each version is saved online at the application's website with an incremental numbered suffix. This allows users to re-access their saved code. Code saved on JSFiddle may also be edited into new versions, shared with other parties, and forked into a new line of development.
JSFiddle is widely used among web developers to share simple tests and demonstrations. JSFiddle is also widely used on Stack Overflow, the dominant question-answer online forum for the web industry.
History
In 2009, JSFiddle's predecessor, MooShell, was created by Piotr Zalewa as a w |
https://en.wikipedia.org/wiki/Pod%20People%20%28Invasion%20of%20the%20Body%20Snatchers%29 | Pod people (also known as body snatchers) is the colloquial term for a species of plant-like aliens featured in the 1954 novel The Body Snatchers by Jack Finney, the 1956 film Invasion of the Body Snatchers, the 1978 remake of the same name, and the 1993 film Body Snatchers. Although sharing themes, they are not in the 2007 film Invasion of the Pod People.
The novel
Pod people are a race of nomadic extraterrestrial parasites from a dying planet. Realizing their planet's resources are nearing depletion, the pods evolved the ability to defy gravity and leave their planet's atmosphere in the search of planets to colonize. For millennia, the pods floated in space like spores, propelled by the solar winds, some occasionally landing on inhabited planets. Upon landing, they replace the dominant species by spawning emotionless replicas; the original bodies disintegrate into dust after the duplication process. After consuming all the resources, the pods leave in search of other planets. Such a consumption was apparently the fate of civilizations inhabiting Mars and the Moon. The Pods' sole purpose is survival with no attention to the civilizations they conquer or the resources they squander. The duplicates have lifespans of five earth years, and cannot sexually reproduce. Their invasion of Earth was short; unable to tolerate our determination, the pods abandoned our planet, leaving behind their duplicates, but those died quickly.
Invasion of the Body Snatchers (1956 film)
One of the pod people hints at their extraterrestrial origin and purpose without explaining. Physician Miles Bennell, played by Kevin McCarthy, gets away from the town and tells his story to another doctor. A truck carrying pods is wrecked; thereafter, the second physician believes the tale. He asks the government agents to quarantine the town, but viewers are left to wonder whether they were successful. Prior to a rewrite, the ending was less hopeful about the fate of humanity, ending before McCarth |
https://en.wikipedia.org/wiki/International%20Speech%20Communication%20Association | The International Speech Communication Association (ISCA) is a non-profit organization and one of the two main professional associations for speech communication science and technology, the other association being the IEEE Signal Processing Society.
Purpose of the association
The purpose is to promote the study and application of automatic speech processing (in the two directions: speech recognition and speech synthesis) with several sub-topics like speaker recognition or speech compression. The activity of the association concerns all aspects of speech processing, from the computational aspects to the linguistic aspects as well as the theoretical aspects.
Conferences
ISCA organizes yearly the INTERSPEECH conference.
Most recent INTERSPEECH:
2013 Lyon
2014 Singapore
2015 Dresden
2016 San Francisco
2017 Stockholm
2018 Hyderabad
2019 Graz
2020 Shanghai (fully virtual)
2021 Brno (hybrid)
Forthcoming INTERSPEECH:
2022 Incheon
2023 Dublin
2024 Jerusalem
ISCA board
The current ISCA president is Sebastian Möller. The vice president is Odette Scharenborg, and the other board members are professionals of the field.
History of ISCA
ISCA is the result of the merger of ESCA (European Speech Communication Association, created in 1987 in Europe) and PC-ICSLP (Permanent Council of the organization of International Conference on Spoken Language Processing, created in 1986 in Japan). The first ISCA event was held in 2000 in Beijing, China.
See also
Natural language processing
Speech technology |
https://en.wikipedia.org/wiki/Universe%20%28Unix%29 | In some versions of the Unix operating system, the term universe was used to denote some variant of the working environment. During the late 1980s, most commercial Unix variants were derived from either System V or BSD. Most versions provided both BSD and System V universes and allowed the user to switch between them. Each universe, typically implemented by separate directory trees or separate filesystems, usually included different versions of commands, libraries, man pages, and header files. While such a facility offered the ability to develop applications portable across both System V and BSD variants, the requirements in disk space and maintenance (separate configuration files, twice the work in patching systems) gave them a problematic reputation. Systems that offered this facility included Harris/Concurrent's CX/UX, Convex's Convex/OS, Apollo's Domain/OS (version 10 only), Pyramid's DC/OSx (dropped in SVR4-based version 2), Concurrent's Masscomp/RTU, MIPS Computer Systems' RISC/os, Sequent's DYNIX/ptx and Siemens' SINIX.
Some versions of System V Release 4 retain a system similar to the dual-universe concept, with BSD commands (which behave differently from classic System V commands) in /usr/ucb, BSD header files in /usr/ucbinclude and library files in /usr/ucblib. /usr/ucb can also be found in NeXTSTEP and OPENSTEP, as well as Solaris.
External links
Sven Mascheck, DYNIX 3.2.0 and SINIX V5.20 Universes
Unix |
https://en.wikipedia.org/wiki/Lumus | Lumus is an Israeli-based augmented reality company headquartered in Ness Ziona, Israel. Founded in 2000, Lumus has developed technology for see-through wearable displays, via its patented Light-guide Optical Element (LOE) platform to market producers of smart glasses and augmented reality eyewear.
Technology
The LOE is a patented optical waveguide that makes use of multiple partial reflectors embedded in a single substrate to reflect a virtual image into the eye of the wearer. Specifically, the image is coupled into the LOE by a "Pod" (micro-display projector) that sits at the edge of the waveguide—in an eyeglass configuration, this is embedded in the temple of the glasses. The image travels through total internal reflection to the multiple array of partial reflectors and is reflected to the eye. While each partial reflector shows only a portion of the image, the optics are such that the wearer sees the combined array and perceives it as a single uniform image projected at infinity. The transparent display enables a virtual image to be seamlessly overlaid over the wearer's real world view. This is especially true when the source image comprises a black background with light color wording or symbology being displayed. Black is essentially a see-through color, while lighter colored objects, symbols or characters appear to float in the wearer's line of sight. Conversely, full-screen images such as documents, internet pages and movies, which typically use brighter colors, can be displayed to look like a large virtual image floating a few meters away from the wearer.
Lumus, with the LOE, has a single waveguide that works on all colors. The thickness of their one LOE is similar to the stack of multiple (one per red, green, and blue) thinner waveguides on HoloLens. They simply cut the waveguide's entrance at an angle to get the light to enter (rather than use a color specific diffraction grating), and then they use a series of very specially designed partial mirrors to cause t |
https://en.wikipedia.org/wiki/Dianna%20Xu | Yilun Dianna Xu is a mathematician and computer scientist whose research concerns the computational geometry of curves and surfaces, computer vision, and computer graphics. She is a professor of computer science at Bryn Mawr College where she chairs the computer science department.
Education and career
Xu graduated from Smith College in 1996, with a bachelor's degree in computer science. She credits going to a women's college with the nurturing environment that allowed her to become interested in computer science.
She completed her Ph.D. in 2002 in computer and information science at the University of Pennsylvania. Her dissertation, Incremental Algorithms for the Design of Triangular-Based Spline Surfaces, was supervised by Jean Gallier. After staying at Pennsylvania as a postdoctoral researcher, she joined the Bryn Mawr faculty in 2004.
Books
With Ira Greenberg and Deepak Kumar, Xu is the author of Processing: Creative Coding and Generative Art in Processing 2 (Springer, 2013), a tutorial introduction to Processing, an open-source graphical library and integrated development environment built for the electronic arts, new media art, and visual design communities.
With Jean Gallier, she is the author of A Guide to the Classification Theorem for Compact Surfaces (Springer, 2013). |
https://en.wikipedia.org/wiki/Back-story%20%28production%29 | Back-story, in the production of consumer goods, is information about the effects of their production.
Sustainability advocates have begun evoking literary backstories to refer to the "backstories" of goods: that is, the impacts on the planet and people caused by producing and delivering those goods. Without knowledge of the full backstory of a product, a consumer cannot accurately judge whether the impacts of purchasing it are good or bad. Some environmentalists and consumer-protection advocates argue that greater corporate and governmental transparency would be a critical step towards sustainability, enabling consumers to make more informed choices, and activists to bring public opinion to bear on practices they consider unethical.
See also
Supply Chain |
https://en.wikipedia.org/wiki/KT%20%28energy%29 | kT (also written as kBT) is the product of the Boltzmann constant, k (or kB), and the temperature, T. This product is used in physics as a scale factor for energy values in molecular-scale systems (sometimes it is used as a unit of energy), as the rates and frequencies of many processes and phenomena depend not on their energy alone, but on the ratio of that energy and kT, that is, on $E/kT$ (see Arrhenius equation, Boltzmann factor). For a system in equilibrium in canonical ensemble, the probability of the system being in a state with energy E is proportional to $e^{-E/kT}$.
More fundamentally, kT is the amount of heat required to increase the thermodynamic entropy of a system by k.
In physical chemistry, as kT often appears in the denominator of fractions (usually because of the Boltzmann distribution), sometimes β = 1/kT is used instead of kT, turning $e^{-E/kT}$ into $e^{-\beta E}$.
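As a quick worked example (an illustration added here, computed from the exact SI-2019 constants; not part of the original text), kT at room temperature can be evaluated directly, together with the molar analogue RT discussed below:

# Energy scale kT at room temperature (Python)
k_B = 1.380649e-23        # Boltzmann constant, J/K (exact)
N_A = 6.02214076e23       # Avogadro constant, 1/mol (exact)
eV = 1.602176634e-19      # joules per electronvolt (exact)
T = 298.15                # room temperature, K

kT = k_B * T              # per-particle energy scale
RT = kT * N_A             # per-mole energy scale, since R = k_B * N_A

print(f"kT = {kT:.4e} J = {1000 * kT / eV:.1f} meV")  # ~4.1164e-21 J, ~25.7 meV
print(f"RT = {RT / 1000:.3f} kJ/mol")                 # ~2.479 kJ/mol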
RT
RT is the product of the molar gas constant, R, and the temperature, T. This product is used in physics and chemistry as a scaling factor for energy values in macroscopic scale (sometimes it is used as a pseudo-unit of energy), as many processes and phenomena depend not on the energy alone, but on the ratio of energy and RT, i.e. E/RT. The SI units for RT are joules per mole (J/mol).
It differs from kT only by a factor of the Avogadro constant, NA. Its dimension is energy or ML2T−2, expressed in SI units as joules (J):
kT = RT/NA |
https://en.wikipedia.org/wiki/Evolutionary%20psychology%20and%20culture | Evolutionary psychology has traditionally focused on individual-level behaviors, determined by species-typical psychological adaptations. Considerable work, though, has been done on how these adaptations shape and, ultimately govern, culture (Tooby and Cosmides, 1989). Tooby and Cosmides (1989) argued that the mind consists of many domain-specific psychological adaptations, some of which may constrain what cultural material is learned or taught. As opposed to a domain-general cultural acquisition program, where an individual passively receives culturally-transmitted material from the group, Tooby and Cosmides (1989), among others, argue that: "the psyche evolved to generate adaptive rather than repetitive behavior, and hence critically analyzes the behavior of those surrounding it in highly structured and patterned ways, to be used as a rich (but by no means the only) source of information out of which to construct a 'private culture' or individually tailored adaptive system; in consequence, this system may or may not mirror the behavior of others in any given respect." (Tooby and Cosmides 1989).
Epidemiological culture
The Epidemiology of representations, or cultural epidemiology, is a broad framework for understanding cultural phenomena by investigating the distribution of mental representations in and through populations. The theory of cultural epidemiology was largely developed by Dan Sperber to study society and cultures. The theory has implications for psychology and anthropology.
Mental representations are transferred from person to person through cognitive causal chains. Sperber (2001) identified three different, yet interrelated, cognitive causal chains, outlined in Table 1. A cognitive causal chain (CCC) links a perception to an evolved, domain-specific response or process. For example:
On October 31, at 7:30 p.m., Mrs. Jones’s doorbell rings. Mrs. Jones hears the doorbell, and assumes that there is somebody at the door. She remembers it is Halloween: she |
https://en.wikipedia.org/wiki/Optical%20space | Optical spaces are mathematical coordinate systems that facilitate the modelling of optical systems as mathematical transformations. An optical space is a mathematical coordinate system such as a Cartesian coordinate system associated with a refractive index. The analysis of optical systems is greatly simplified by the use of optical spaces which enable designers to place the origin of a coordinate system at any of several convenient locations. In the design of optical systems two optical spaces, object space and image space, are always employed. Additional intermediate spaces are often used as well.
Optical spaces extend to infinity in all directions. The object space does not exist only on the "input" side of the system, nor the image space only on the "output" side. Spaces in this sense can be considered points of view. All optical spaces thus overlap completely to infinity in all directions. Typically, the origin and at least some of the coordinate axes of each space are different, providing different perspectives to the designer. It may not be possible to discern from an illustration to which space a point, ray, or plane belongs unless some convention is adopted. A common convention uses capital letters like X, Y, or Z to label points and lower case letters like a, b, and c to indicate distances. Unprimed letters like t or v indicate object space and primed letters like t′ or v′ indicate image space. Intermediate spaces are indicated by additional primes such as r″, z″, or q″. The same letter is used to indicate points or distances that share a conjugate relationship. The only exception is the use of F and F′ to indicate respectively object and image space focal points (which are not conjugate). The term "object point" does not necessarily refer to a point on a specific object but rather to a point in object space; similarly for "image point".
One may wonder how an object point can exist on the "output" side of an optical system or conversely how an imag |
https://en.wikipedia.org/wiki/Quantum%20heterostructure | A quantum heterostructure is a heterostructure in a substrate (usually a semiconductor material), where size restricts the movements of the charge carriers, forcing them into quantum confinement. This leads to the formation of a set of discrete energy levels at which the carriers can exist. Quantum heterostructures have a sharper density of states than structures of more conventional sizes.
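To make the claim about a "sharper density of states" concrete, the idealized textbook forms for carriers that are free in 3, 2, 1 and 0 dimensions are (a sketch, with effective-mass and degeneracy prefactors suppressed):
$$g_{3\mathrm{D}}(E) \propto \sqrt{E}, \qquad g_{2\mathrm{D}}(E) \propto \sum_n \Theta(E - E_n), \qquad g_{1\mathrm{D}}(E) \propto \sum_n \frac{\Theta(E - E_n)}{\sqrt{E - E_n}}, \qquad g_{0\mathrm{D}}(E) \propto \sum_n \delta(E - E_n),$$
where $\Theta$ is the Heaviside step function, $\delta$ the Dirac delta, and $E_n$ the discrete subband or level energies introduced by the confinement.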
Quantum heterostructures are important for fabrication of short-wavelength light-emitting diodes and diode lasers, and for other optoelectronic applications, e.g. high-efficiency photovoltaic cells.
Examples of quantum heterostructures confining the carriers in quasi-two, -one and -zero dimensions are:
Quantum wells
Quantum wires
Quantum dots |
https://en.wikipedia.org/wiki/Peppercoin | Peppercoin is a cryptographic system for processing micropayments. Peppercoin Inc. was a company that offered services based on the peppercoin method.
The peppercoin system was developed by Silvio Micali and Ron Rivest and first presented at the RSA Conference in 2002 (although it had not yet been named). The core idea is to bill one randomly selected transaction a lump sum of money rather than bill each transaction a small amount. It uses "universal aggregation", which means that it aggregates transactions over users, merchants as well as payment service providers. The random selection is cryptographically secure—it cannot be influenced by any of the parties. It is claimed to reduce the transaction cost per dollar from 27 cents to "well below 10 cents."
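The statistical-billing idea can be illustrated with a toy sketch (an illustration written for this article, not Micali and Rivest's actual construction; the aggregation factor K and the hash-based selection rule are assumptions made here). Each payment deterministically "wins" with probability about 1/K, and a winning payment is settled for K times its face value, so the expected amount settled equals the amount actually spent while only one in K transactions incurs processing cost.

# Toy sketch of statistical micropayment aggregation (not the real protocol)
import hashlib

K = 1024  # aggregation factor: roughly 1 in K payments is settled

def is_selected(payment: bytes, k: int = K) -> bool:
    # Deterministic, hash-based selection: a payment wins iff its
    # SHA-256 digest falls in the lowest 1/k of the output range.
    digest = int.from_bytes(hashlib.sha256(payment).digest(), "big")
    return digest < (1 << 256) // k

def settle(payments):
    # A selected payment is billed a lump sum of K times its value,
    # so the expected total billed equals the true total spent.
    return sum(amount * K for pid, amount in payments
               if is_selected(pid.encode()))

payments = [(f"tx-{i}", 10) for i in range(200_000)]  # 200k ten-cent payments
print("true total (cents):  ", sum(a for _, a in payments))
print("billed total (cents):", settle(payments))  # close to the true total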
Peppercoin, Inc. was a privately held company founded in late 2001 by Micali and Rivest based in Waltham, MA. It has secured about $15M in venture capital in two rounds of funding. Its services have seen modest adoption. Peppercoin collects 5-9% of transaction cost from the merchant. Peppercoin, Inc. was bought out in 2007 by Chockstone for an undisclosed amount. |
https://en.wikipedia.org/wiki/Cat%27s%20cradle | Cat's cradle is a game involving the creation of various string figures between the fingers, either individually or by passing a loop of string back and forth between two or more players. The true origin of the name is debated, though the first known reference is in The light of nature pursued by Abraham Tucker in 1768. The type of string, the specific figures, their order, and the names of the figures vary. Independent versions of this game have been found in indigenous cultures throughout the world, including in Africa, Eastern Asia, the Pacific Islands, Australia, the Americas, and the Arctic.
Play
The simplest version of the game involves a player using a long string loop to make a complex figure using their fingers and hands.
Another version of the game consists of two or more players making a sequence of string figures, each altering the figure made by the previous player. The game begins with one player making the eponymous figure Cat's Cradle (above). After each figure, the next player manipulates that figure and removes the string figure from the hands of the previous player with one of a few simple motions and tightens the loop to create another figure, for example, Diamonds. Diamonds might then lead to Candles (which is also known as Pinkies), for example, and then Manger—an inverted Cat's Cradle—and so on. Most of the core figures allow a choice between two or more subsequent figures: for example, Fish in a Dish can become Cat's Eye or Manger. The game ends when a player makes a mistake or creates a dead-end figure, which cannot be turned into anything else. Many players believe that Two Crowns or King's Crown is one such dead-end figure, although more experienced players recognize that it can be creatively maneuvered into Candles or Pinkies, which allows the game to continue.
History
The origin of the name "cat's cradle" is debated but the first known reference is in The light of nature pursued by Abraham Tucker in 1768. "An ingenious play they cal |
https://en.wikipedia.org/wiki/Gluon%20field | In theoretical particle physics, the gluon field is a four-vector field characterizing the propagation of gluons in the strong interaction between quarks. It plays the same role in quantum chromodynamics as the electromagnetic four-potential in quantum electrodynamics; the gluon field constructs the gluon field strength tensor.
Throughout this article, Latin indices take values 1, 2, ..., 8 for the eight gluon color charges, while Greek indices take values 0 for timelike components and 1, 2, 3 for spacelike components of four-dimensional vectors and tensors in spacetime. Throughout all equations, the summation convention is used on all color and tensor indices, unless explicitly stated otherwise.
Introduction
Gluons can have eight colour charges so there are eight fields, in contrast to photons which are neutral and so there is only one photon field.
The gluon fields for each color charge each have a "timelike" component analogous to the electric potential, and three "spacelike" components analogous to the magnetic vector potential. Using similar symbols:
$$\mathcal{A}^n(\mathbf{r}, t) = \left[ \mathcal{A}^n_0(\mathbf{r}, t),\; \mathcal{A}^n_1(\mathbf{r}, t),\; \mathcal{A}^n_2(\mathbf{r}, t),\; \mathcal{A}^n_3(\mathbf{r}, t) \right],$$
where $n = 1, 2, \ldots, 8$ are not exponents but enumerate the eight gluon color charges, and all components depend on the position vector $\mathbf{r}$ of the gluon and time $t$. Each $\mathcal{A}^n_\alpha$ is a scalar field, for some component $\alpha$ of spacetime and gluon color charge $n$.
The Gell-Mann matrices are eight 3 × 3 matrices which form matrix representations of the SU(3) group. They are also generators of the SU(3) group, in the context of quantum mechanics and field theory; a generator can be viewed as an operator corresponding to a symmetry transformation (see symmetry in quantum mechanics). These matrices play an important role in QCD as QCD is a gauge theory of the SU(3) gauge group obtained by taking the color charge to define a local symmetry: each Gell-Mann matrix corresponds to a particular gluon color charge, which in turn can be used to define color charge operators. Generators of a group can also form a basis for a vector space, so the overall gluon |
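Under the usual conventions (a standard form, with the factor of one half being convention-dependent), the eight color components combine with the Gell-Mann matrices into a single matrix-valued gluon field:
$$\mathcal{A}_\mu(\mathbf{r}, t) = \sum_{n=1}^{8} \mathcal{A}^n_\mu(\mathbf{r}, t)\, \frac{\lambda_n}{2}.$$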
https://en.wikipedia.org/wiki/Heart%20valve | A heart valve is a one-way valve that allows blood to flow in one direction through the chambers of the heart. Four valves are usually present in a mammalian heart and together they determine the pathway of blood flow through the heart. A heart valve opens or closes according to differential blood pressure on each side.
The four valves in the mammalian heart are two atrioventricular valves separating the upper atria from the lower ventricles – the mitral valve in the left heart, and the tricuspid valve in the right heart. The other two valves are at the entrance to the arteries leaving the heart; these are the semilunar valves – the aortic valve at the aorta, and the pulmonary valve at the pulmonary artery.
The heart also has a coronary sinus valve and an inferior vena cava valve, not discussed here.
Structure
The heart valves and the chambers are lined with endocardium. Heart valves separate the atria from the ventricles, or the ventricles from a blood vessel. Heart valves are situated around the fibrous rings of the cardiac skeleton. The valves incorporate flaps called leaflets or cusps, similar to a duckbill valve or flutter valve, which are pushed open to allow blood flow and which then close together to seal and prevent backflow. The mitral valve has two cusps, whereas the others have three. There are nodules at the tips of the cusps that make the seal tighter.
The pulmonary valve has left, right, and anterior cusps. The aortic valve has left, right, and posterior cusps. The tricuspid valve has anterior, posterior, and septal cusps; and the mitral valve has just anterior and posterior cusps.
The valves of the human heart can be grouped in two sets:
Two atrioventricular valves to prevent backflow of blood from the ventricles into the atria:
Tricuspid valve or right atrioventricular valve, between the right atrium and right ventricle
Mitral valve or bicuspid valve, between the left atrium and left ventricle
Two semilunar valves to prevent the backflow o |
https://en.wikipedia.org/wiki/Frobenius%20inner%20product | In mathematics, the Frobenius inner product is a binary operation that takes two matrices and returns a scalar. It is often denoted $\langle \mathbf{A}, \mathbf{B} \rangle_\mathrm{F}$. The operation is a component-wise inner product of two matrices as though they are vectors, and satisfies the axioms for an inner product. The two matrices must have the same dimension (the same number of rows and columns), but are not restricted to be square matrices.
Definition
Given two complex number-valued n×m matrices A and B, written explicitly as
$$\mathbf{A} = \begin{pmatrix} A_{11} & A_{12} & \cdots & A_{1m} \\ A_{21} & A_{22} & \cdots & A_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n1} & A_{n2} & \cdots & A_{nm} \end{pmatrix}, \qquad \mathbf{B} = \begin{pmatrix} B_{11} & B_{12} & \cdots & B_{1m} \\ B_{21} & B_{22} & \cdots & B_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ B_{n1} & B_{n2} & \cdots & B_{nm} \end{pmatrix},$$
the Frobenius inner product is defined as
$$\langle \mathbf{A}, \mathbf{B} \rangle_\mathrm{F} = \sum_{i,j} \overline{A_{ij}}\, B_{ij} = \operatorname{tr}\!\left(\mathbf{A}^{\dagger} \mathbf{B}\right),$$
where the overline denotes the complex conjugate, and $\dagger$ denotes the Hermitian conjugate (conjugate transpose). Explicitly this sum is
$$\langle \mathbf{A}, \mathbf{B} \rangle_\mathrm{F} = \overline{A_{11}}\, B_{11} + \overline{A_{12}}\, B_{12} + \cdots + \overline{A_{nm}}\, B_{nm}.$$
The calculation is very similar to the dot product, which in turn is an example of an inner product.
Relation to other products
If A and B are each real-valued matrices, the Frobenius inner product is the sum of the entries of the Hadamard product. If the matrices are vectorised (i.e., converted into column vectors, denoted by "$\operatorname{vec}(\cdot)$"), then
$$\operatorname{vec}(\mathbf{A})^{\dagger} \operatorname{vec}(\mathbf{B}) = \sum_{i,j} \overline{A_{ij}}\, B_{ij}.$$
Therefore
$$\langle \mathbf{A}, \mathbf{B} \rangle_\mathrm{F} = \operatorname{vec}(\mathbf{A})^{\dagger} \operatorname{vec}(\mathbf{B}) = \langle \operatorname{vec}(\mathbf{A}), \operatorname{vec}(\mathbf{B}) \rangle.$$
Properties
It is a sesquilinear form; for four complex-valued matrices A, B, C, D, and two complex numbers a and b:
$$\langle a\mathbf{A} + b\mathbf{B}, \mathbf{C} \rangle_\mathrm{F} = \overline{a}\,\langle \mathbf{A}, \mathbf{C} \rangle_\mathrm{F} + \overline{b}\,\langle \mathbf{B}, \mathbf{C} \rangle_\mathrm{F}, \qquad \langle \mathbf{A}, a\mathbf{C} + b\mathbf{D} \rangle_\mathrm{F} = a\,\langle \mathbf{A}, \mathbf{C} \rangle_\mathrm{F} + b\,\langle \mathbf{A}, \mathbf{D} \rangle_\mathrm{F}.$$
Also, exchanging the matrices amounts to complex conjugation:
$$\langle \mathbf{A}, \mathbf{B} \rangle_\mathrm{F} = \overline{\langle \mathbf{B}, \mathbf{A} \rangle_\mathrm{F}}.$$
For the same matrix,
$$\langle \mathbf{A}, \mathbf{A} \rangle_\mathrm{F} \geq 0,$$
and
$$\langle \mathbf{A}, \mathbf{A} \rangle_\mathrm{F} = 0 \iff \mathbf{A} = \mathbf{0}.$$
Frobenius norm
The inner product induces the Frobenius norm
$$\|\mathbf{A}\|_\mathrm{F} = \sqrt{\langle \mathbf{A}, \mathbf{A} \rangle_\mathrm{F}} = \sqrt{\sum_{i,j} |A_{ij}|^2}.$$
Examples
Real-valued matrices
For two real-valued matrices, if
$$\mathbf{A} = \begin{pmatrix} 2 & 0 \\ 1 & -1 \end{pmatrix}, \qquad \mathbf{B} = \begin{pmatrix} 8 & -3 \\ 2 & 4 \end{pmatrix},$$
then
$$\langle \mathbf{A}, \mathbf{B} \rangle_\mathrm{F} = 2 \cdot 8 + 0 \cdot (-3) + 1 \cdot 2 + (-1) \cdot 4 = 14.$$
Complex-valued matrices
For two complex-valued matrices, if
$$\mathbf{A} = \begin{pmatrix} 1+i & -2i \\ 3 & -5 \end{pmatrix}, \qquad \mathbf{B} = \begin{pmatrix} -2 & 3i \\ 4-3i & 6 \end{pmatrix},$$
then
$$\langle \mathbf{A}, \mathbf{B} \rangle_\mathrm{F} = (1-i)(-2) + (2i)(3i) + 3\,(4-3i) + (-5)(6) = -26 - 7i,$$
while
$$\langle \mathbf{B}, \mathbf{A} \rangle_\mathrm{F} = \overline{\langle \mathbf{A}, \mathbf{B} \rangle_\mathrm{F}} = -26 + 7i.$$
The Frobenius inner products of A with itself, and B with itself, are respectively
$$\langle \mathbf{A}, \mathbf{A} \rangle_\mathrm{F} = 40, \qquad \langle \mathbf{B}, \mathbf{B} \rangle_\mathrm{F} = 74.$$
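The examples above are easy to verify numerically; the short NumPy sketch below (an added illustration using the same matrices) computes the Frobenius inner product with np.vdot, which flattens its arguments and conjugates the first:

# Numerical check of the Frobenius inner product (NumPy)
import numpy as np

def frobenius_inner(A, B):
    # <A, B>_F = sum_ij conj(A_ij) * B_ij = tr(A^dagger B)
    return np.vdot(A, B)

A = np.array([[1 + 1j, -2j], [3, -5]])
B = np.array([[-2, 3j], [4 - 3j, 6]])

print(frobenius_inner(A, B))        # (-26-7j)
print(frobenius_inner(B, A))        # (-26+7j), the complex conjugate
print(frobenius_inner(A, A).real)   # 40.0, so ||A||_F = sqrt(40)
print(np.isclose(frobenius_inner(A, B), np.trace(A.conj().T @ B)))  # True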
See also
Hadamard product (matrices)
Hilbert–Schmidt inner product
Kronecker product
Matrix analysis
Matrix multiplication
Matrix norm
Tensor product of Hilbert spaces – the Frobenius inner product is the special case where the vector spaces are finite-dimensional real or complex vector spaces with the usual Euclidean inner product |
https://en.wikipedia.org/wiki/Code%20bloat | In computer programming, code bloat is the production of program code (source code or machine code) that is perceived as unnecessarily long, slow, or otherwise wasteful of resources. Code bloat can be caused by inadequacies in the programming language in which the code is written, the compiler used to compile it, or the programmer writing it. Thus, while code bloat generally refers to source code size (as produced by the programmer), it can be used to refer instead to the generated code size or even the binary file size.
Examples
The following JavaScript algorithm has a large number of redundant variables, unnecessary logic and inefficient string concatenation.
// Complex
function TK2getImageHTML(size, zoom, sensor, markers) {
var strFinalImage = "";
var strHTMLStart = '<img src="';
var strHTMLEnd = '" alt="The map"/>';
var strURL = "http://maps.google.com/maps/api/staticmap?center=";
var strSize = '&size='+ size;
var strZoom = '&zoom='+ zoom;
var strSensor = '&sensor='+ sensor;
strURL += markers[0].latitude;
strURL += ",";
strURL += markers[0].longitude;
strURL += strSize;
strURL += strZoom;
strURL += strSensor;
for (var i = 0; i < markers.length; i++) {
strURL += markers[i].addMarker();
}
strFinalImage = strHTMLStart + strURL + strHTMLEnd;
return strFinalImage;
}
The same logic can be stated more efficiently as follows:
// Simplified
const TK2getImageHTML = (size, zoom, sensor, markers) => {
const [ { latitude, longitude } ] = markers;
let url = `http://maps.google.com/maps/api/staticmap?center=${ latitude },${ longitude }&size=${ size }&zoom=${ zoom }&sensor=${ sensor }`;
markers.forEach(marker => url += marker.addMarker());
return `<img src="${ url }" alt="The map" />`;
};
Code density of different languages
The difference in code density between various computer languages is so great that often less memory is needed to hold both a progr |
https://en.wikipedia.org/wiki/Dicke%20model | The Dicke model is a fundamental model of quantum optics, which describes the interaction between light and matter. In the Dicke model, the light component is described as a single quantum mode, while the matter is described as a set of two-level systems. When the coupling between the light and matter crosses a critical value, the Dicke model shows a mean-field phase transition to a superradiant phase. This transition belongs to the Ising universality class and was realized in cavity quantum electrodynamics experiments. Although the superradiant transition bears some analogy with the lasing instability, these two transitions belong to different universality classes.
Description
The Dicke model is a quantum mechanical model that describes the coupling between a single-mode cavity and two-level systems, or equivalently spin-½ degrees of freedom. The model was first introduced in 1973 by K. Hepp and E. H. Lieb. Their study was inspired by the pioneering work of R. H. Dicke on the superradiant emission of light in free space and named after him.
Like any other model in quantum mechanics, the Dicke model includes a set of quantum states (the Hilbert space) and a total-energy operator (the Hamiltonian). The Hilbert space of the Dicke model is given by (the tensor product of) the states of the cavity and of the two-level systems. The Hilbert space of the cavity can be spanned by Fock states with $n$ photons, denoted by $|n\rangle$. These states can be constructed from the vacuum state using the canonical ladder operators, $a^{\dagger}$ and $a$, which add and subtract a photon from the cavity, respectively. The states of each two-level system are referred to as up and down and are defined through the spin operators $s^{(j)}_x,\, s^{(j)}_y,\, s^{(j)}_z$, satisfying the spin algebra $[s_x, s_y] = i\hbar s_z$. Here $\hbar$ is the reduced Planck constant and $j$ indicates a specific two-level system.
The Hamiltonian of the Dicke model is
$$H = \hbar\omega_c\, a^{\dagger} a + \hbar\omega_z \sum_{j} s^{(j)}_z + \frac{2\hbar\lambda}{\sqrt{N}}\left(a + a^{\dagger}\right) \sum_{j} s^{(j)}_x.$$
Here, the first term describes the energy of the cavity and equals the product of the energy of a single cavity photon $\hbar\omega_c$ (where $\omega_c$ is the |
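For concreteness, the Hamiltonian above can be built explicitly for small systems; the NumPy sketch below (an added illustration, with ħ set to 1 and the photon space truncated at n_max photons) constructs it and checks that it is Hermitian:

# Explicit Dicke Hamiltonian for N spins-1/2 and a truncated photon mode
import numpy as np

def dicke_hamiltonian(N=2, n_max=10, omega_c=1.0, omega_z=1.0, lam=0.5):
    a = np.diag(np.sqrt(np.arange(1, n_max + 1)), k=1)  # photon annihilation
    id_ph = np.eye(n_max + 1)
    id2 = np.eye(2)
    sx = 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]])       # spin-1/2 operators
    sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])      # (hbar = 1)

    def total_spin(op):
        # sum over j of: op acting on spin j, identity on the other spins
        total = np.zeros((2 ** N, 2 ** N))
        for j in range(N):
            term = np.eye(1)
            for i in range(N):
                term = np.kron(term, op if i == j else id2)
            total += term
        return np.kron(id_ph, total)     # extend to the full Hilbert space

    n_photon = np.kron(a.conj().T @ a, np.eye(2 ** N))  # cavity number operator
    field = np.kron(a + a.conj().T, np.eye(2 ** N))     # a + a^dagger
    return (omega_c * n_photon
            + omega_z * total_spin(sz)
            + (2 * lam / np.sqrt(N)) * field @ total_spin(sx))

H = dicke_hamiltonian()
print(H.shape)                     # (44, 44): (n_max + 1) * 2^N basis states
print(np.allclose(H, H.conj().T))  # True: the Hamiltonian is Hermitian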
https://en.wikipedia.org/wiki/Hans%20Grauert | Hans Grauert (8 February 1930 in Haren, Emsland, Germany – 4 September 2011) was a German mathematician. He is known for major works on several complex variables, complex manifolds and the application of sheaf theory in this area, which influenced later work in algebraic geometry. Together with Reinhold Remmert he established and developed the theory of complex-analytic spaces. He became professor at the University of Göttingen in 1958, as successor to C. L. Siegel. The lineage of this chair traces back through an eminent line of mathematicians: Weyl, Hilbert, Riemann, and ultimately to Gauss. Until his death, he was professor emeritus at Göttingen.
Grauert was awarded a fellowship of the Leopoldina.
Early life
Grauert attended school at the Gymnasium in Meppen before studying for a semester at the University of Mainz in 1949, and then at the University of Münster, where he was awarded his doctorate in 1954.
See also
Andreotti–Grauert theorem
Grauert's theorem
Levi problem
Publications
with Klaus Fritzsche: Several Complex Variables (Graduate Texts in Mathematics 38, Springer, 1976)
with Klaus Fritzsche: From Holomorphic Functions to Complex Manifolds (Graduate Texts in Mathematics 213, Springer, 2002)
https://en.wikipedia.org/wiki/Quantum%20coin%20flipping | Consider two remote players, connected by a channel, who do not trust each other. The problem of their agreeing on a random bit by exchanging messages over this channel, without relying on any trusted third party, is called the coin flipping problem in cryptography. Quantum coin flipping uses the principles of quantum mechanics to encrypt messages for secure communication. It is a cryptographic primitive which can be used to construct more complex and useful cryptographic protocols, e.g. Quantum Byzantine agreement.
Unlike other types of quantum cryptography (in particular, quantum key distribution), quantum coin flipping is a protocol used between two users who do not trust each other. Consequently, both users (or players) want to win the coin toss and will attempt to cheat in various ways.
It is known that if the communication between the players is over a classical channel, i.e. a channel over which quantum information cannot be communicated, then one player can (in principle) always cheat regardless of which protocol is used. We say in principle because it might be that cheating requires an infeasible amount of computational resources. Under standard computational assumptions, coin flipping can be achieved with classical communication.
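For intuition, the classical construction under computational assumptions is essentially a commit-then-reveal coin flip in the style of Blum's protocol; the toy sketch below (an added illustration, using a salted hash as a stand-in for a proper commitment scheme) shows the message flow:

# Toy commit-then-reveal coin flip (illustrative only)
import hashlib, secrets

def commit(bit: int):
    # Hide the bit behind a salted hash: the random nonce makes the
    # commitment hiding, and collision resistance makes it binding.
    nonce = secrets.token_bytes(32)
    return hashlib.sha256(nonce + bytes([bit])).hexdigest(), nonce

alice_bit = secrets.randbits(1)
commitment, nonce = commit(alice_bit)       # 1. Alice sends the commitment

bob_bit = secrets.randbits(1)               # 2. Bob announces his bit openly

# 3. Alice reveals (bit, nonce); Bob verifies before accepting the result
assert hashlib.sha256(nonce + bytes([alice_bit])).hexdigest() == commitment
print("shared coin:", alice_bit ^ bob_bit)  # fair if either party is honest

A computationally bounded cheater can neither open the commitment two ways nor learn the hidden bit early, which is where the standard computational assumptions enter; an all-powerful player, by contrast, can break the commitment, matching the claim above that classical protocols cannot prevent cheating in principle.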
The most basic figure of merit for a coin-flipping protocol is given by its bias, a number between $0$ and $\tfrac{1}{2}$. The bias of a protocol captures the success probability of an all-powerful cheating player who uses the best conceivable strategy. A protocol with bias $0$ means that no player can cheat. A protocol with bias $\tfrac{1}{2}$ means that at least one player can always succeed at cheating. Obviously, the smaller the bias, the better the protocol.
When the communication is over a quantum channel, it has been shown that even the best conceivable protocol cannot have a bias less than $\tfrac{1}{\sqrt{2}} - \tfrac{1}{2} \approx 0.207$.
Consider the case where each player knows the preferred bit of the other. A coin flipping problem which makes this additional assumption constitutes the weaker variant |
https://en.wikipedia.org/wiki/Numbered-node%20cycle%20network | The numbered-node cycle network (informally, "bike-by-numbers") is a wayfinding system. It spans the Netherlands, Belgium, parts of France and Germany, and parts of Croatia, and is expanding rapidly. Each intersection or node is given a number, and the numbers are signposted, so the cyclist always knows which way to go to get to the next node.
Numbers are not unique, but nodes with the same number are placed far apart, so that they can't be confused. To find a route, the cyclist uses a list of node numbers (the sequence of intersections they will pass through). The list is generated with a website, or a downloaded, roadside or paper map. Intersection numbers need little translation.
Bike networks are, by nature, more distributed than car routes, with more junctions; they do not gather all cyclists onto arterial bike routes. The numbered-node network makes long-distance bike travel simpler (by making it harder to get lost) and faster (by making frequent stops to check a map unnecessary). Areas on the numbered-node network cite substantial economic benefits, including revenues from increased bike tourism.
The numbered-node network is more flexible than previous signage systems, which only indicated long, pre-determined routes. The numbered-node network signage can be used to plan and follow any arbitrary route through the network. This makes for more flexible bicycle touring, and is more usable for utility cycling.
History
The system was designed by the Belgian Hugo Bollen. Bollen worked as a mine engineer from 1971 to 1990, and then joined Regionaal Landschap Kempen en Maasland (RLKM). RLKM did not ask Bollen to design the scheme; he volunteered it. The idea of labelling each intersection was inspired by his annoyance at having to stop at each intersection to read the map, when out biking with his wife; he personally describes himself as more of a hiker than a biker. Rumours notwithstanding, the numbering was not inspired by a wayfinding system from the mi
https://en.wikipedia.org/wiki/Many-body%20problem | The many-body problem is a general name for a vast category of physical problems pertaining to the properties of microscopic systems made of many interacting particles. Microscopic here implies that quantum mechanics has to be used to provide an accurate description of the system. Many can be anywhere from three to infinity (in the case of a practically infinite, homogeneous or periodic system, such as a crystal), although three- and four-body systems can be treated by specific means (respectively the Faddeev and Faddeev–Yakubovsky equations) and are thus sometimes separately classified as few-body systems.
In general terms, while the underlying physical laws that govern the motion of each individual particle may (or may not) be simple, the study of the collection of particles can be extremely complex. In such a quantum system, the repeated interactions between particles create quantum correlations, or entanglement. As a consequence, the wave function of the system is a complicated object holding a large amount of information, which usually makes exact or analytical calculations impractical or even impossible.
This becomes especially clear by a comparison to classical mechanics. Imagine a single particle that can be described with $k$ numbers (take for example a free particle described by its position and velocity vector, resulting in $k = 6$). In classical mechanics, $N$ such particles can simply be described by $k \cdot N$ numbers. The dimension of the classical many-body system scales linearly with the number of particles $N$.
In quantum mechanics, however, the many-body system is in general in a superposition of combinations of single-particle states: all the different combinations have to be accounted for. The dimension of the quantum many-body system therefore scales exponentially with $N$, much faster than in classical mechanics.
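The contrast can be made concrete with a short sketch (an added illustration, assuming spin-1/2 particles so that each particle contributes a local dimension of 2):

# Classical (linear) vs. quantum (exponential) description sizes
k = 6  # numbers per classical particle, e.g. position + velocity in 3D

for N in (1, 10, 20, 40):
    classical = k * N     # numbers describing N classical particles
    quantum = 2 ** N      # amplitudes of a generic N-spin pure state
    print(f"N = {N:>2}: classical = {classical:>4} numbers, "
          f"quantum = {quantum:,} amplitudes")
# At N = 40 the state vector already has ~1.1e12 amplitudes
# (~17 TB at 16 bytes per complex amplitude).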
Because the required numerical expense grows so quickly, simulating the dynamics of more than three quantum-mechanical particles is already infeasible for m |
https://en.wikipedia.org/wiki/ATCC%20%28company%29 | ATCC or the American Type Culture Collection is a nonprofit organization which collects, stores, and distributes standard reference microorganisms, cell lines and other materials for research and development. Established in 1925 to serve as a national center for depositing and distributing microbiological specimens, ATCC has since grown to distribute in over 150 countries. It is now the largest general culture collection in the world.
Products and collections
ATCC's collections include a wide range of biological materials for research, including cell lines, microorganisms and bioproducts. The organization holds a collection of more than 3,000 human and animal cell lines and an additional 1,200 hybridomas. ATCC's microorganism collection includes a collection of more than 18,000 strains of bacteria, as well as 3,000 different types of animal viruses and 1,000 plant viruses. In addition, ATCC maintains collections of protozoans, yeasts and fungi with over 7,500 yeast and fungus species and 1,000 strains of protists.
Services
In addition to serving as a biorepository and distributor, ATCC provides specialized services as a biological resource center. Individuals and groups can employ a safe-deposit service for their own cell cultures, providing a secure back-up for valuable biomaterials if required. ATCC is also able to retain secure samples of patented materials and distribute them according to instructions and approval of the patent holder. It likewise provides biological repository management services to institutions, agencies and companies wishing to outsource the handling of their own culture collections, and manages BEI Resources, which provides reagents, tools and information needed in research on microbes.
ATCC also serves to set standards for biological reagent and assay quality. These standards are used by the U.S. Food and Drug Administration and the U.S. Department of Agriculture, as well as organizations such as AOAC International, the Clinical and |
https://en.wikipedia.org/wiki/Melanophryniscus%20peritus | Melanophryniscus peritus is a species of frog in the family Bufonidae. It is only known from a single specimen collected in 1953, and may be extinct.
Taxonomy
Melanophryniscus peritus was described in 2011 by Ulisses Caramaschi and Carlos Alberto Gonçalves da Cruz. Originally, it was placed in the Melanophryniscus tumifrons group. The specific name, peritus, is from the Latin verb pereo, meaning to vanish or disappear. It was given to reflect the species' status.
Description
The holotype and only known specimen was a female 39.3 mm long (SVL), a medium size for the genus. It also had a wide head, dark brown coloration on its dorsal side, and a lighter brown underside.
Habitat and distribution
The species is only known from its type locality, in the Mantiqueira mountain range of southeastern Brazil. This is further north than the majority of members in the Melanophryniscus tumifrons group. Based on the activities of other members of the genus, Melanophryniscus peritus is believed to inhabit small ponds, flooded areas near rivulets, and bromeliads.
History
The only known specimen of Melanophryniscus peritus was collected on November 4, 1953 by German-Brazilian naturalist Helmut Sick. Multiple surveys of the species' known range have failed to uncover any more individuals, and it is listed as "critically endangered" and possibly extinct. It is believed that habitat loss led to the species' decline.
https://en.wikipedia.org/wiki/Pandoravirus%20tropicalis | Pandoravirus tropicalis is a virus belonging to the genus Pandoravirus. It was isolated from water samples taken from the artificial lake Lake Pampulha in Brazil. |
https://en.wikipedia.org/wiki/DIDO%20%28nuclear%20reactor%29 | DIDO was a materials testing nuclear reactor at the Atomic Energy Research Establishment at Harwell, Oxfordshire in the United Kingdom. It used enriched uranium metal fuel, and heavy water as both neutron moderator and primary coolant. There was also a graphite neutron reflector surrounding the core. In the design phase, DIDO was known as AE334 after its engineering design number.
DIDO was designed to have a high neutron flux, largely to reduce the time required for testing of materials intended for use in nuclear power reactors. This also allowed for the production of intense beams of neutrons for use in neutron diffraction.
DIDO was shut down in 1990 and is under planning for decommissioning.
In all, six DIDO class reactors were constructed based on this design:
DIDO, first criticality 1956.
PLUTO, also at Harwell, first criticality 1957.
HIFAR (Australia), first criticality January 1958.
Dounreay Materials Testing Reactor (DMTR) at Dounreay Nuclear Power Development Establishment in Scotland, first criticality May 1958.
DR-3 at Risø National Laboratory (Denmark), first criticality January 1960.
FRJ-II at Jülich Research Centre (Germany), first criticality 1962.
HIFAR was the last to shut down, in 2007.
See also
List of nuclear reactors |
https://en.wikipedia.org/wiki/Botany%20of%20Lord%20Auckland%27s%20Group%20and%20Campbell%27s%20Island | The Botany of Lord Auckland's Group and Campbell's Island is a description of the plants discovered in those islands during the Ross expedition written by Joseph Dalton Hooker and published by Reeve Brothers in London between 1844 and 1845. Hooker sailed on HMS Erebus as assistant surgeon. It was the first in a series of four Floras in the Flora Antarctica, the others being the Botany of Fuegia, the Falklands, Kerguelen's Land, Etc. (1845–1847), the Flora Novae-Zelandiae (1851–1853), and the Flora Tasmaniae (1853–1859). They were "splendidly" illustrated by Walter Hood Fitch.
The larger part of the plant specimens collected during the Ross expedition are now part of the Kew Herbarium.
Context
The British government fitted out an expedition led by James Clark Ross to investigate magnetism and marine geography in high southern latitudes, which sailed with two ships, HMS Terror and HMS Erebus on 29 September 1839 from Chatham.
The ships arrived, after several stops, at the Cape of Good Hope on 4 April 1840. On 21 April the giant kelp Macrocystis pyrifera was found off Marion Island, but no landfall could be made there or on the Crozet Islands due to the harsh winds. On 12 May the ships anchored at Christmas Harbour for two and a half months, during which all the plant species previously encountered by James Cook on the Kerguelen Islands were collected. On 16 August they reached the River Derwent, remaining in Tasmania until 12 November. A week later the flotilla stopped at Lord Auckland's Islands and Campbell's Island for the spring months.
Large floating forests of Macrocystis and Durvillaea were found until the ships ran into icebergs at latitude 61° S. Pack-ice was met at 68° S and longitude 175°. During this part of the voyage Victoria Land, Mount Erebus and Mount Terror were discovered. After returning to Tasmania for three months, the flotilla went via Sydney to the Bay of Islands, and stayed for three months in New Zealand to collect plants there. After v |
https://en.wikipedia.org/wiki/Partially%20ordered%20group | In abstract algebra, a partially ordered group is a group (G, +) equipped with a partial order "≤" that is translation-invariant; in other words, "≤" has the property that, for all a, b, and g in G, if a ≤ b then a + g ≤ b + g and g + a ≤ g + b.
An element x of G is called positive if 0 ≤ x. The set of elements x with 0 ≤ x is often denoted G+, and is called the positive cone of G.
By translation invariance, we have a ≤ b if and only if 0 ≤ -a + b.
So we can reduce the partial order to a monadic property: $a \leq b$ if and only if $-a + b \in G^{+}$.
For the general group G, the existence of a positive cone specifies an order on G. A group G is a partially orderable group if and only if there exists a subset H (which is G+) of G such that:
0 ∈ H
if a ∈ H and b ∈ H then a + b ∈ H
if a ∈ H then -x + a + x ∈ H for each x of G
if a ∈ H and -a ∈ H then a = 0
A partially ordered group G with positive cone G+ is said to be unperforated if n · g ∈ G+ for some positive integer n implies g ∈ G+. Being unperforated means there is no "gap" in the positive cone G+.
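As a small illustrative check (an added example, not from the original text): take G = ℤ with H = {0} ∪ {2, 3, 4, ...}. This H satisfies the positive-cone axioms above (the conjugation axiom is trivial since ℤ is abelian), yet the resulting order is perforated, because 2·1 lies in G+ while 1 does not:

# Checking the positive-cone axioms for G = Z, H = {0} u {2, 3, 4, ...}
LIMIT = 1000

def in_H(x: int) -> bool:
    return x == 0 or x >= 2

assert in_H(0)                               # 0 is in H
assert all(in_H(a + b)                       # H is closed under addition
           for a in range(LIMIT) if in_H(a)
           for b in range(LIMIT) if in_H(b))
assert all(not (in_H(a) and in_H(-a))        # a and -a both in H only if a = 0
           for a in range(1, LIMIT))

g, n = 1, 2
print(in_H(n * g), in_H(g))  # True False: n*g in G+ does not force g into G+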
If the order on the group is a linear order, then it is said to be a linearly ordered group.
If the order on the group is a lattice order, i.e. any two elements have a least upper bound, then it is a lattice-ordered group (shortly l-group, though usually typeset with a script l: ℓ-group).
A Riesz group is an unperforated partially ordered group with a property slightly weaker than being a lattice-ordered group. Namely, a Riesz group satisfies the Riesz interpolation property: if x1, x2, y1, y2 are elements of G and xi ≤ yj, then there exists z ∈ G such that xi ≤ z ≤ yj.
If G and H are two partially ordered groups, a map from G to H is a morphism of partially ordered groups if it is both a group homomorphism and a monotonic function. The partially ordered groups, together with this notion of morphism, form a category.
Partially ordered groups are used in the definition of valuations of fields.
Examples
The integers with their usual o |
https://en.wikipedia.org/wiki/Indium%20gallium%20arsenide%20phosphide | Indium gallium arsenide phosphide ($\mathrm{In_{1-x}Ga_{x}As_{y}P_{1-y}}$, InGaAsP) is a quaternary compound semiconductor material, an alloy of gallium arsenide, gallium phosphide, indium arsenide, and indium phosphide. This compound has applications in photonic devices, due to the ability to tailor its band gap via changes in the alloy mole ratios, x and y.
Indium phosphide-based photonic integrated circuits, or PICs, commonly use alloys of $\mathrm{In_{1-x}Ga_{x}As_{y}P_{1-y}}$ to construct quantum wells, waveguides and other photonic structures, lattice matched to an InP substrate, enabling single-crystal epitaxial growth onto InP.
Many devices operating in the near-infrared 1.55 μm wavelength window utilize this alloy, and are employed as optical components (such as laser transmitters, photodetectors and modulators) in C-band communications systems.
Fraunhofer Institute for Solar Energy Systems ISE reported a triple-junction solar cell utilizing InGaAsP. The cell has a very high efficiency of 35.9% (claimed to be a record).
See also
Indium gallium phosphide
Gallium indium arsenide antimonide phosphide
Solar cell efficiency |
https://en.wikipedia.org/wiki/Pythagorean%20astronomical%20system | An astronomical system positing that the Earth, Moon, Sun, and planets revolve around an unseen "Central Fire" was developed in the fifth century BC and has been attributed to the Pythagorean philosopher Philolaus. The system has been called "the first coherent system in which celestial bodies move in circles", anticipating Copernicus in moving "the earth from the center of the cosmos [and] making it a planet". Although its concepts of a Central Fire distinct from the Sun, and a nonexistent "Counter-Earth" were erroneous, the system contained the insight that "the apparent motion of the heavenly bodies" was (in large part) due to "the real motion of the observer". How much of the system was intended to explain observed phenomena and how much was based on myth, mysticism, and religion is disputed. While the departure from traditional reasoning is impressive, other than the inclusion of the five visible planets, very little of the Pythagorean system is based on genuine observation. In retrospect, Philolaus's views are "less like scientific astronomy than like symbolical speculation."
Before Philolaus
Knowledge of contributions to Pythagorean astronomy before Philolaus is limited. Hippasus, another early Pythagorean philosopher, did not contribute to astronomy, and no evidence of Pythagoras's work on astronomy remains. None of the remaining astronomical contributions can be attributed to a single person and, therefore, the Pythagoreans as a whole take the credit. However, it should not be presumed that the Pythagoreans as a unanimous group agreed on a single system before the time of Philolaus.
One surviving theory from the Pythagoreans before Philolaus, the harmony of the spheres, is first mentioned in Plato’s Republic. Plato presents the theory in a mythological sense by including it in the Myth of Er, which concludes the Republic. Aristotle mentions the theory in De Caelo, in which he presents the theory as a "physical doctrine" that coincides with the rest of the Pyt |
https://en.wikipedia.org/wiki/Fascia%20training | Fascia training describes sports activities and movement exercises that attempt to improve the functional properties of the muscular connective tissues in the human body, such as tendons, ligaments, joint capsules and muscular envelopes. Also called fascia, these tissues take part in a body-wide tensional force transmission network and are responsive to training stimulation.
Origin
Whenever muscles and joints are moved, this also exerts mechanical strain on the related fascia. The general assumption in sports science had therefore been that muscle strength exercises as well as cardiovascular training would be sufficient for optimal training of the associated fibrous connective tissues. However, recent ultrasound-based research revealed that the mechanical threshold for a training effect on tendinous tissues tends to be significantly higher than for muscle fibers. This insight emerged at roughly the same time that the field of fascia research attracted major attention by showing that fascial tissues are much more than passive transmitters of muscular tension (2007–2010). Both influences together triggered increasing attention in sports science towards the question of whether and how fascial tissues can be specifically stimulated with active exercises.
Principles
Fascia training is based on the following principles:
Preparatory counter-movement (increasing elastic recoil by pre-stretching involved fascial tissues);
The Ninja principle (focus on effortless movement quality);
Dynamic stretching (alternation of melting static stretches with dynamic stretches that include mini-bounces, with multiple directional variations);
Proprioceptive refinement (enhancing somatic perceptiveness by mindfulness oriented movement explorations);
Hydration and renewal (foam rolling and similar tool-assisted myofascial self-treatment applications);
Sustainability: respecting the slower adaptation speed but more sustaining effects of fascial tissues (compared with muscles |
https://en.wikipedia.org/wiki/Mir-650%20microRNA%20precursor%20family | In molecular biology, mir-650 is a microRNA, a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms.
Diabetic and Non-Diabetic Heart Failure
miR-650 is one of a group of six miRNAs with altered expression levels in diabetic and non-diabetic heart failure. This altered expression corresponds to various enriched cardiac dysfunctions.
NDRG2 regulation
miR-650 has further been reported to target a homologous DNA region in the promoter region of the NDRG2 gene. There is direct regulation of this gene at a transcriptional level, leading to repressed NDRG2 expression.
See also
MicroRNA |
https://en.wikipedia.org/wiki/JIC%20fitting | JIC fittings, defined by the SAE J514 and MIL-DTL-18866 standards, are a type of flare fitting machined with a 37-degree flare seating surface. JIC (Joint Industry Council) fittings are widely used in fuel delivery and fluid power applications, especially where high pressure (up to ) is involved. The SAE J514 standard replaces the MS16142 US military specification, although some tooling is still listed under MS16142. JIC fittings are dimensionally identical to AN (Army-Navy) fittings, but are produced to less exacting tolerances and are generally less costly. SAE 45-degree flare fittings are similar in appearance, but are not interchangeable, though dash sizes 2, 3, 4, 5, 8, and 10 share the same thread size. Some couplings may have dual machined seats for both 37-degree and 45-degree flare seats.
Komatsu and JIS (Japanese Industrial Standard) fittings have flare ends similar to JIC fittings. Komatsu and JIS both use a 30-degree flare seating surface. The only difference is that Komatsu uses millimeter thread sizes while JIS uses a BSP (British Standard Pipe) thread.
JIC fitting systems have three components that make a tubing assembly: fitting, flare nut, and sleeve. As with other flared connection systems, the seal is achieved through metal-to-metal contact between the finished surface of the fitting nose and the inside diameter of the flared tubing. The sleeve is used to evenly distribute the compressive forces of the flare nut to the flared end of the tube. Materials commonly used to fabricate JIC fittings include forged carbon steel, forged stainless steel, forged brass, machined brass, Monel and nickel-copper alloys.
JIC fittings are commonly used in the Fluid Power industry in a diagnostic and test-point setting. A three-way JIC coupling provides an inline port in a circuit by which a user can connect a measurement or diagnostic device to take pressure readings and perform circuit and system diagnostics.
https://en.wikipedia.org/wiki/Puzzle%20box | A puzzle box (also called a secret box or trick box) is a box that can be opened only by solving a puzzle. Some require only a simple move and others a series of discoveries.
Modern puzzle boxes developed from furniture and jewelry boxes with secret compartments and hidden openings, known since the Renaissance. Puzzle boxes produced for entertainment first appeared in Victorian England in the 19th century and as tourist souvenirs in the Interlaken region in Switzerland and in the Hakone region of Japan at the end of the 19th and the beginning of the 20th century. Boxes with secret openings appeared as souvenirs at other tourist destinations during the early 20th century, including the Amalfi Coast, Madeira, and Sri Lanka, though these were mostly 'one-trick' traditions. Chinese cricket boxes represent another example of intricate boxes with secret openings.
Interest in puzzle boxes subsided during and after the two World Wars. The art was revived in the 1980s by three pioneers of this genre: Akio Kamei in Japan, Trevor Wood in England, and Frank Chambers in Ireland. There are currently a number of artists producing puzzle boxes, including the Karakuri group in Japan set up by Akio Kamei, US puzzle box specialists Robert Yarger and Kagen Sound, as well as a number of other designers and puzzle makers who produce puzzle boxes across the globe.
Clive Barker's horror novella The Hellbound Heart (later adapted into a film, Hellraiser, followed by numerous original sequels) centers on the fictional Lemarchand's box, a puzzle box which opens the gates to another dimension when manipulated.
See also
Mechanical puzzle |
https://en.wikipedia.org/wiki/Fruit%20Pie%20the%20Magician | Fruit Pie the Magician was the official mascot for Hostess fruit pies for over three decades, from 1973 until early 2006. Fruit Pie the Magician was featured in print ads in comic books as well as animated in television commercials. The character appeared on Hostess product labels as an anthropomorphic fruit pie sporting a cape, white gloves, top hat, and magic wand.
Hostess described the mascot: "Fruit Pie the Magician loves to entertain friends with his wacky magic tricks. His favorite magic trick is to make Hostess Fruit Pies appear out of thin air. You always have to keep an eye on the Magician or else he may play a trick on you."
Fruit Pie was parodied in The Order of the Stick webcomic episode 91.
See also
Captain Cupcake
Twinkie the Kid
External links
Meet the Hostess Gang
Do You Remember Fruit Pie the Magician and Twinkie the Kid?
Cartoon mascots
Food advertising characters
Male characters in advertising
Fictional food characters
Fictional stage magicians
Hostess Brands
Mascots introduced in 1973 |
https://en.wikipedia.org/wiki/List%20of%20video%20transcoding%20software | The following is a list of video transcoding software.
Open-source
Shutter Encoder (Windows, OS X, Linux)
DVD Flick (Windows)
FFmpeg (Windows, OS X, Linux)
HandBrake (Windows, OS X, Linux)
Ingex (Linux)
MEncoder (Windows, OS X, Linux)
Nandub (Windows)
Thoggen (Linux)
VirtualDubMod (Windows)
VirtualDub (Windows)
VLC Media Player (Windows, Mac OS X, Linux)
Arista (Linux)
Avidemux (Windows, OS X, Linux)
Freeware
Freemake Video Converter (Windows)
FormatFactory (Windows)
Ingest Machine DV (Windows)
MediaCoder (Windows)
SUPER (Windows)
Windows Media Encoder (Windows)
Zamzar (Web application)
ZConvert (Windows)
Commercial
Compressor (Mac OS X)
MPEG Video Wizard DVD (Windows)
ProCoder (Windows)
QuickTime Pro (Mac OS X, Windows)
Roxio Creator (Windows)
Sorenson Squeeze
Telestream Episode (Mac OS X, Windows)
TMPGEnc (Windows)
Wowza Streaming Engine with included Wowza Transcoder feature (Linux, Mac OS X, Windows)
Zamzar - Premium service (Web application)
Zencoder (Web application)
See also
Photo slideshow software
List of video editing software
Video transcoding software |
https://en.wikipedia.org/wiki/Cistus%20%C3%97%20incanus | Cistus × incanus L. is a hybrid between Cistus albidus and Cistus crispus. The name "Cistus incanus" (synonym C. villosus) has been used by other authors in a different sense, for Cistus creticus (at least in part). The English name hoary rock-rose may refer to this species, among others.
Description
Because of confusion between the original species named by Linnaeus in 1753 and the way in which the name was used by later authors (see § Taxonomy), plants described under this name may actually belong to different species. C. × incanus is a shrubby plant, to about tall, with grey-green leaves and pink to purple flowers.
Taxonomy
The name Cistus incanus was first used by Carl Linnaeus in 1753 in Species Plantarum. Confusion exists between this name and two later names published by Linnaeus, Cistus creticus in 1762 and Cistus villosus in 1764. There is general agreement that C. villosus, at least as used by later authors, is not a distinct species. Two treatments are then found.
In the first, generally older, treatment, C. incanus is accepted, with C. villosus being a synonym. C. creticus is treated as C. incanus subsp. creticus.
According to Demoly (1996), Linnaeus's Cistus incanus was recognized to be a hybrid as early as 1904. The second treatment (followed here) is based on this recognition. C. creticus is accepted, with C. villosus as a synonym. C. × incanus L. is treated as the hybrid C. albidus × C. crispus. As used by previous authors, but not Linnaeus, the name "C. incanus" is taken to refer to Cistus creticus, particularly C. creticus subsp. eriocephalus. Two formerly recognised subspecies of C. incanus are regarded as subspecies of Cistus creticus:
Cistus × incanus subsp. corsicus = C. creticus subsp. corsicus
Cistus × incanus subsp. creticus = C. creticus subsp. creticus |
https://en.wikipedia.org/wiki/Energy%20modeling | Energy modeling or energy system modeling is the process of building computer models of energy systems in order to analyze them. Such models often employ scenario analysis to investigate different assumptions about the technical and economic conditions at play. Outputs may include the system feasibility, greenhouse gas emissions, cumulative financial costs, natural resource use, and energy efficiency of the system under investigation. A wide range of techniques are employed, ranging from broadly economic to broadly engineering. Mathematical optimization is often used to determine the least-cost system in some sense. Models can be international, regional, national, municipal, or stand-alone in scope. Governments maintain national energy models for energy policy development.
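As a toy illustration of the least-cost optimization such models perform (a minimal sketch with invented numbers, nowhere near the scale of a real policy model):

```python
# Least-cost dispatch toy: choose the output of two generators to meet demand
# at minimum cost. All figures here are invented for illustration.
from scipy.optimize import linprog

costs = [20.0, 50.0]                 # $/MWh of a cheap and an expensive generator (assumed)
demand = 120.0                       # MW of demand to be met (assumed)
capacity = [(0, 100.0), (0, 80.0)]   # output bounds per generator in MW (assumed)

# Minimize costs . x subject to x1 + x2 == demand and the capacity bounds.
res = linprog(c=costs, A_eq=[[1.0, 1.0]], b_eq=[demand], bounds=capacity)
print(res.x)    # [100.  20.]: run the cheap unit flat out, top up with the expensive one
print(res.fun)  # 3000.0 = 100*20 + 20*50, the least cost of meeting demand
```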
Energy models are usually intended to contribute variously to system operations, engineering design, or energy policy development. This page concentrates on policy models. Individual building energy simulations are explicitly excluded, although they too are sometimes called energy models. IPCC-style integrated assessment models, which also contain a representation of the world energy system and are used to examine global transformation pathways through to 2050 or 2100, are not considered here in detail.
Energy modeling has increased in importance as the need for climate change mitigation has grown in importance. The energy supply sector is the largest contributor to global greenhouse gas emissions. The IPCC reports that climate change mitigation will require a fundamental transformation of the energy supply system, including the substitution of unabated (not captured by CCS) fossil fuel conversion technologies by low-GHG alternatives.
Model types
A wide variety of model types are in use. This section attempts to categorize the key types and their usage. The divisions provided are not hard and fast and mixed-paradigm models exist. In addition, the results from more general models can be |
https://en.wikipedia.org/wiki/Secondary%20stability | Secondary stability, also known as reserve stability, is a boat or ship's ability to right itself at large angles of heel (lateral tilt), as opposed to primary or initial stability, the boat's tendency to stay laterally upright when tilted to low (<10°) angles.
The study of initial and secondary stability is part of naval architecture as applied to small watercraft (as distinct from the study of ship stability concerning large ships).
A greater lateral width (beam) and more initial stability decrease the secondary stability: once tilted beyond a certain angle, the boat becomes harder to restore to its stable upright position.
Other types of ship stability
Primary stability
Tertiary stability (also called inverse stability): Tertiary stability is undesirable as it causes a vessel to remain upside-down. Self-righting watercraft have negative tertiary stability, and no limit of positive stability. For kayak rolling, the stability of an upside-down kayak is also important; lower tertiary stability makes rolling up easier.
See also
Ship stability
Limit of positive stability — boats |
https://en.wikipedia.org/wiki/Tissue%20membrane | A tissue membrane is a thin layer or sheet of cells that covers the outside of the body (for example, skin), the organs (for example, pericardium), internal passageways that lead to the exterior of the body (for example, mucosa of stomach), and the lining of the moveable joint cavities. There are two basic types of tissue membranes: connective tissue and epithelial membranes.
Connective tissue membrane
The connective tissue membrane is formed solely from connective tissue. These membranes encapsulate organs, such as the kidneys, and line our movable joints. A synovial membrane is a type of connective tissue membrane that lines the cavity of a freely movable joint. For example, synovial membranes surround the joints of the shoulder, elbow, and knee. Fibroblasts in the inner layer of the synovial membrane release hyaluronan into the joint cavity. The hyaluronan effectively traps available water to form the synovial fluid, a natural lubricant that enables the bones of a joint to move freely against one another without much friction. This synovial fluid readily exchanges water and nutrients with blood, as do all body fluids.
Epithelial membrane
The epithelial membrane is composed of epithelium attached to a layer of connective tissue, for example, skin. The mucous membrane is also a composite of connective and epithelial tissues. Sometimes called mucosae, these epithelial membranes line the body cavities and hollow passageways that open to the external environment, and include the digestive, respiratory, excretory, and reproductive tracts. Mucus, produced by the epithelial exocrine glands, covers the epithelial layer. The underlying connective tissue, called the lamina propria (literally “own layer”), helps support the fragile epithelial layer.
A serous membrane is an epithelial membrane composed of mesodermally derived epithelium called the mesothelium that is supported by connective tissue. These membranes line the coelomic cavities of the body, that is, those cavit |
https://en.wikipedia.org/wiki/Pepscan | Pepscan is a procedure for mapping and characterizing epitopes involving the synthesis of overlapping peptides and analysis of the peptides in enzyme-linked immunosorbent assays (ELISAs). The method is based on combinatorial chemistry and was pioneered by Mario Geysen and coworkers.
Rob Meloen was one of Geysen's co-workers. He also played an important role in the development of numerous other new technologies, including vaccine and diagnostic product development for several viral diseases. From 1994 to 2010, Meloen was Professor of Special Appointment (Chair: Biomolecular Recognition) at Utrecht University. He was one of the co-founders of the company Pepscan (Lelystad, the Netherlands) and became Scientific Director (CSO). Pepscan is now part of the Biosynth Group.
Twenty-five years later, the Pepscan methodology, evolved and modernized with the latest insights, is still an important part of Pepscan’s epitope mapping platform, which is instrumental in therapeutic antibody development. |
https://en.wikipedia.org/wiki/ACES%20%28computational%20chemistry%29 | Aces II (Advanced Concepts in Electronic Structure Theory) is an ab initio computational chemistry package for performing high-level quantum chemical ab initio calculations. Its major strength is the accurate calculation of atomic and molecular energies as well as properties using many-body techniques such as many-body perturbation theory (MBPT) and, in particular coupled cluster techniques to treat electron correlation. The development of ACES II began in early 1990 in the group of Professor Rodney J. Bartlett at the Quantum Theory Project (QTP) of the University of Florida in Gainesville. There, the need for more efficient codes had been realized and the idea of writing an entirely new program package emerged. During 1990 and 1991 John F. Stanton, Jürgen Gauß, and John D. Watts, all of them at that time postdoctoral researchers in the Bartlett group, supported by a few students, wrote the backbone of what is now known as the ACES II program package. The only parts which were not new coding efforts were the integral packages (the MOLECULE package of J. Almlöf, the VPROP package of P.R. Taylor, and the integral derivative package ABACUS of T. Helgaker, P. Jorgensen J. Olsen, and H.J. Aa. Jensen). The latter was modified extensively for adaptation with Aces II, while the others remained very much in their original forms.
Ultimately, two different versions of the program evolved. The first was maintained by the Bartlett group at the University of Florida, and the other (known as ACESII-MAB) was maintained by groups at the University of Texas, Universitaet Mainz in Germany, and ELTE in Budapest, Hungary. The latter is now called CFOUR.
Aces III is a parallel implementation that was released in the fall of 2008. The effort led to the definition of a new architecture for scalable parallel software called the super instruction architecture. The design and creation of software is divided into two parts:
The algorithms are coded in a domain specific language called s |
https://en.wikipedia.org/wiki/Sony%20Vaio%20W%20series | The Sony Vaio W series is a series of netbooks, and formerly a series of desktop PCs.
All-in-one desktops (2002)
The Sony Vaio W series is a line of all-in-one PCs. It was first launched in Japan, and came to the U.S. market in October 2002, with the first model being PCV-W10. Combining features such as large multimedia speakers, foldable keyboards, a large 15.3 inch display, i.LINK, 1.6 GHz Pentium 4 CPUs with 512 MB RAM, and in some later models TV features, the W series was seen as a high-end multimedia series with great specs for its time. It was replaced by the Vaio L series in 2006.
Netbooks (2009)
The Sony Vaio W series name was relaunched in 2009 as a series of notebook computers. It is aimed primarily towards the youth market, creating a new market audience for Vaio. The product is intended mainly for home use: browsing, sharing photos online, downloading music, and online networking. It clearly differentiates itself from the existing notebook line-up and is not presented as a full PC.
Features
10.1” 16:9 WXGA 1366×768 X-black LCD screen with LED backlights
2.6 lb.
Full pitch isolation keyboard
Intel Atom N280 processor at 1.66 GHz
Built in webcam and microphone
3-hour battery
Wireless b/g/n networking
Matching accessories (carry pouch and mouse accessory kit)
Models
The models are made in three colors: pink, white, and brown. Their base price is US$499.
https://en.wikipedia.org/wiki/Cockayne%20syndrome | Cockayne syndrome (CS), also called Neill-Dingwall syndrome, is a rare and fatal autosomal recessive neurodegenerative disorder characterized by growth failure, impaired development of the nervous system, abnormal sensitivity to sunlight (photosensitivity), eye disorders and premature aging. Failure to thrive and neurological disorders are criteria for diagnosis, while photosensitivity, hearing loss, eye abnormalities, and cavities are other very common features. Problems with any or all of the internal organs are possible. It is associated with a group of disorders called leukodystrophies, which are conditions characterized by degradation of neurological white matter. There are two primary types of Cockayne syndrome: Cockayne syndrome type A (CSA), arising from mutations in the ERCC8 gene, and Cockayne syndrome type B (CSB), resulting from mutations in the ERCC6 gene.
The underlying disorder is a defect in a DNA repair mechanism. Unlike other defects of DNA repair, patients with CS are not predisposed to cancer or infection. Cockayne syndrome is a rare but destructive disease usually resulting in death within the first or second decade of life. The mutation of specific genes in Cockayne syndrome is known, but the widespread effects and its relationship with DNA repair is yet to be well understood.
It is named after English physician Edward Alfred Cockayne (1880–1956) who first described it in 1936 and re-described it in 1946. Neill-Dingwall syndrome was named after Mary M. Dingwall and Catherine A. Neill. These two scientists described the case of two brothers with Cockayne syndrome and asserted it was the same disease described by Cockayne. In their article, the two contributed to the signs of the disease through their discovery of calcifications in the brain. They also compared Cockayne syndrome to what is now known as Hutchinson–Gilford progeria syndrome (HGPS), then called progeria, due to the advanced aging that characterizes both disorders.
Types
CS Type I |
https://en.wikipedia.org/wiki/Neutropenia | Neutropenia is an abnormally low concentration of neutrophils (a type of white blood cell) in the blood. Neutrophils make up the majority of circulating white blood cells and serve as the primary defense against infections by destroying bacteria, bacterial fragments and immunoglobulin-bound viruses in the blood. People with neutropenia are more susceptible to bacterial infections and, without prompt medical attention, the condition may become life-threatening (neutropenic sepsis).
Neutropenia can be divided into congenital and acquired, with severe congenital neutropenia (SCN) and cyclic neutropenia (CyN) being autosomal dominant and mostly caused by heterozygous mutations in the ELANE gene (neutrophil elastase). Neutropenia can be acute (temporary) or chronic (long lasting). The term is sometimes used interchangeably with "leukopenia" ("deficit in the number of white blood cells").
Decreased production of neutrophils is associated with deficiencies of vitamin B12 and folic acid, aplastic anemia, tumors, drugs, metabolic disease, nutritional deficiency and immune mechanisms. In general, the most common oral manifestations of neutropenia include ulcers, gingivitis, and periodontitis. Agranulocytosis can present as a whitish or greyish necrotic ulcer in the oral cavity, without any sign of inflammation. Acquired agranulocytosis is much more common than the congenital form. The common causes of acquired agranulocytosis include drugs (non-steroidal anti-inflammatory drugs, antiepileptics, antithyroid drugs, and antibiotics) and viral infections. Agranulocytosis has a mortality rate of 7–10%. To manage this, the application of granulocyte colony-stimulating factor (G-CSF) or granulocyte transfusion and the use of broad-spectrum antibiotics to protect against bacterial infections are recommended.
Signs and symptoms
Signs and symptoms of neutropenia include fever, painful swallowing, gingival pain, skin abscesses, and otitis. These symptoms may exist because individuals w |
https://en.wikipedia.org/wiki/Strong%20key | Strong Key is a naming convention used in computer programming. There can be more than one component (e.g., a DLL) with the same name but different versions, which can lead to many conflicts.
A Strong Key (also called SN Key or Strong Name) is used in the Microsoft .NET Framework to uniquely identify a component. This is done partly with public-key cryptography.
Strong keys or names provide security of reference from one component to another or from a root key to a component. This is not the same as tamper resistance of the file containing any given component. Strong names are also a countermeasure against DLL hell.
This key is produced by a separate tool as a public/private key pair.
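The underlying mechanism can be sketched generically (public-key signing of a component's bytes, shown here with the third-party Python cryptography package; this illustrates the general idea only, not the actual .NET sn.exe tooling or assembly format):

```python
# Conceptual sketch: hash-and-sign a component with a private key so that
# consumers holding the public key can verify references to it.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

component = b"...compiled component bytes..."   # stand-in for a DLL image
signature = private_key.sign(component, padding.PKCS1v15(), hashes.SHA256())

# Verification with the public key fails if the component bytes were altered,
# giving security of reference (not full tamper resistance of the deployed file).
public_key = private_key.public_key()
public_key.verify(signature, component, padding.PKCS1v15(), hashes.SHA256())  # raises on mismatch
```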
https://en.wikipedia.org/wiki/Aanderaa%E2%80%93Karp%E2%80%93Rosenberg%20conjecture | In theoretical computer science, the Aanderaa–Karp–Rosenberg conjecture (also known as the Aanderaa–Rosenberg conjecture or the evasiveness conjecture) is a group of related conjectures about the number of questions of the form "Is there an edge between vertex u and vertex v?" that have to be answered to determine whether or not an undirected graph has a particular property such as planarity or bipartiteness. They are named after Stål Aanderaa, Richard M. Karp, and Arnold L. Rosenberg. According to the conjecture, for a wide class of properties, no algorithm can guarantee that it will be able to skip any questions: any algorithm for determining whether the graph has the property, no matter how clever, might need to examine every pair of vertices before it can give its answer. A property satisfying this conjecture is called evasive.
More precisely, the Aanderaa–Rosenberg conjecture states that any deterministic algorithm must test at least a constant fraction of all possible pairs of vertices, in the worst case, to determine any non-trivial monotone graph property; in this context, a property is monotone if it remains true when edges are added (so planarity is not monotone, but non-planarity is monotone). A stronger version of this conjecture, called the evasiveness conjecture or the Aanderaa–Karp–Rosenberg conjecture, states that exactly n(n−1)/2 tests are needed. Versions of the problem for randomized algorithms and quantum algorithms have also been formulated and studied.
The deterministic Aanderaa–Rosenberg conjecture was proven by Rivest and Vuillemin, but the stronger Aanderaa–Karp–Rosenberg conjecture remains unproven. Additionally, there is a large gap between the conjectured lower bound and the best proven lower bound for randomized and quantum query complexity.
Example
The property of being non-empty (that is, having at least one edge) is monotone, because adding another edge to a non-empty graph produces another non-empty graph. There is a simple algorithm for testing whether a grap |
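A minimal sketch of such an edge-query tester (the framing is assumed; has_edge stands in for the oracle answering "is there an edge between vertex u and vertex v?"). On the empty graph it is forced to ask about every one of the n(n−1)/2 pairs before it can answer:

```python
# Query-counting tester for the monotone property "the graph is non-empty".
from itertools import combinations

def is_nonempty(n, has_edge):
    """has_edge(u, v) answers whether there is an edge between u and v."""
    queries = 0
    for u, v in combinations(range(n), 2):
        queries += 1
        if has_edge(u, v):
            return True, queries   # may stop early on a "yes" answer
    return False, queries          # "no" only after all n*(n-1)//2 queries

# An adversary presenting the empty graph forces every pair to be probed:
result, queries = is_nonempty(6, lambda u, v: False)
print(result, queries)  # False 15 (= 6*5//2), matching the evasiveness bound
```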
https://en.wikipedia.org/wiki/Rhizorhabdus | Rhizorhabdus is a genus of bacteria. Its name is derived from the Greek rhiza, meaning root, and rhabdos, meaning rod. Members of this genus, including Rhizorhabdus wittichii and five other species with sequenced genomes, are associated with soil or plant roots.
https://en.wikipedia.org/wiki/Acid%20growth | Acid growth refers to the ability of plant cells and plant cell walls to elongate or expand quickly at low (acidic) pH. The cell wall needs to be modified in order to maintain the turgor pressure. This modification is controlled by plant hormones like auxin. Auxin also controls the expression of some cell wall genes. This form of growth does not involve an increase in cell number. During acid growth, plant cells enlarge rapidly because the cell walls are made more extensible by expansin, a pH-dependent wall-loosening protein. Expansin loosens the network-like connections between cellulose microfibrils within the cell wall, which allows the cell volume to increase by turgor and osmosis. A typical sequence leading up to this would involve the introduction of a plant hormone (auxin, for example) that causes protons (H+ ions) to be pumped out of the cell into the cell wall. As a result, the cell wall solution becomes more acidic. It was suggested by different scientists that the epidermis is the unique target of auxin, but this theory has been disproved over time. This acidification activates expansin, causing the wall to become more extensible and to undergo wall stress relaxation, which enables the cell to take up water and to expand. The acid growth theory has been very controversial in the past.
https://en.wikipedia.org/wiki/HomePNA | The HomePNA Alliance (formerly the Home Phoneline Networking Alliance, also known as HPNA) is an incorporated non-profit industry association of companies that develops and standardizes technology for home networking over the existing coaxial cables and telephone wiring within homes, so new wires do not need to be installed.
HomePNA was developed for entertainment applications such as IPTV which require good quality of service (QoS).
History
HomePNA 1.0 technology was developed by Tut Systems in the 1990s. The original protocols used balanced pair telephone wire.
HomePNA 2.0 was developed by Epigram and was approved by the ITU as Recommendations G.9951, G.9952 and G.9953.
HomePNA 3.0 was developed by Broadcom (which had purchased Epigram) and Coppergate Communications and was approved by the ITU as Recommendation G.9954 in February 2005.
HomePNA 3.1 was developed by Coppergate Communications and was approved by the ITU as Recommendation G.9954 in January 2007. HomePNA 3.1 added Ethernet over coax. HomePNA 3.1 uses frequencies above those used for digital subscriber line and analog voice calls over phone wires and below those used for broadcast and direct-broadcast satellite TV over coax, so it can coexist with those services on the same wires.
In March 2009, HomePNA announced a liaison agreement with the HomeGrid Forum to promote the ITU-T G.hn wired home networking standard. In May 2013 the HomePNA alliance merged with the HomeGrid Forum.
Technical characteristics
HomePNA uses frequency-division multiplexing (FDM), which uses different frequencies for voice and data on the same wires without interfering with each other. A standard phone line has enough room to support voice, high-speed DSL, and a home phoneline network.
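A toy rendering of that coexistence idea (the band edges below are rough assumed figures for illustration, not the normative spectrum allocations):

```python
# FDM coexistence toy: services occupy disjoint frequency bands on one wire.
bands = {
    "analog voice": (0.0, 4e3),    # baseband telephony, roughly 0-4 kHz (assumed)
    "DSL": (25e3, 1.1e6),          # typical ADSL spectrum (assumed)
    "HomePNA": (12e6, 44e6),       # above DSL, below satellite-TV IF (assumed)
}

def overlaps(a, b):
    """Two (low, high) bands overlap iff each starts below the other's end."""
    return a[0] < b[1] and b[0] < a[1]

for x, y in [("analog voice", "DSL"), ("DSL", "HomePNA")]:
    verdict = "clash" if overlaps(bands[x], bands[y]) else "coexist"
    print(f"{x} vs {y}: {verdict}")   # both pairs coexist: disjoint bands
```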
Two custom chips designed using the HPNA specifications were developed by Broadcom: the 4100 chip can send and receive signals over 1,000 ft (305 m) on a typical phone line. The larger 4210 controller chip strips away noise and passes data on |
https://en.wikipedia.org/wiki/Morganella%20morganii | Morganella morganii is a species of Gram-negative bacteria. It has a commensal relationship within the intestinal tracts of humans, mammals, and reptiles as normal flora. Although M. morganii has a wide distribution, it is considered an uncommon cause of community-acquired infection, and it is most often encountered in postoperative and other nosocomial infections, such as urinary tract infections.
Historical identification and systematics
Morganella morganii was first described by the British bacteriologist H. de R. Morgan in 1906 as Morgan's bacillus. Morgan isolated the bacterium from stools of infants who were noted to have had "summer diarrhea". Later, in 1919, Winslow et al. named Morgan's bacillus Bacillus morganii. In 1936, though, Rauss renamed B. morganii as Proteus morganii. Fulton, in 1943, showed that B. columbensis and P. morganii were the same and defined the genus Morganella, due to DNA–DNA hybridization. In 1943, Fulton also attempted to define a subspecies, M. m. columbensis. However, in 1962, a review article by Ewing reported that M. columbensis had been re-identified as Escherichia coli, thereby removing that organism from the genus Morganella.
Microbiology
Morganella morganii is facultatively anaerobic and oxidase-negative. Its colonies appear off-white and opaque in color, when grown on agar plates. M. morganii cells are straight rods, about 0.6–0.7 μm in diameter and 1.0–1.7 μm in length. This organism moves by way of peritrichous flagella, but some strains do not form flagella at .
M. morganii is split into two subspecies: M. morganii subsp. morganii and M. morganii subsp. sibonii. M. morganii subsp. sibonii is able to ferment trehalose, whereas subsp. morganii cannot, and this is the primary phenotype used to differentiate them.
M. morganii can produce the enzyme catalase, so it is able to convert hydrogen peroxide to water and oxygen. This is a common enzyme found in most living organisms. In addition, it is indole test-positive, meaning |
https://en.wikipedia.org/wiki/Dobson%20unit | The Dobson unit (DU) is a unit of measurement of the amount of a trace gas in a vertical column through the Earth's atmosphere. It originated, and continues to be primarily used in respect to, atmospheric ozone, whose total column amount, usually termed "total ozone", and sometimes "column abundance", is dominated by the high concentrations of ozone in the stratospheric ozone layer.
The Dobson unit is defined as the thickness (in units of 10 μm) of that layer of pure gas which would be formed by the total column amount at standard conditions for temperature and pressure (STP). This is sometimes referred to as a 'milli-atmo-centimeter'. A typical column amount of 300 DU of atmospheric ozone therefore would form a 3 mm layer of pure gas at the surface of the Earth if its temperature and pressure conformed to STP.
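Because the unit is defined by a layer thickness at STP, a column amount in DU converts directly to molecules per unit area via the STP number density of an ideal gas (the Loschmidt constant); a minimal sketch:

```python
# Convert a Dobson-unit column amount to molecules per cm^2.
LOSCHMIDT = 2.6868e25    # molecules per m^3 for an ideal gas at STP
DU_THICKNESS = 10e-6     # metres of pure gas per Dobson unit (definition above)

def dobson_to_column(du):
    """Column density in molecules per cm^2 for an amount given in DU."""
    per_m2 = du * DU_THICKNESS * LOSCHMIDT
    return per_m2 / 1e4  # convert from per m^2 to per cm^2

print(f"{dobson_to_column(300):.2e}")  # ~8.06e+18 for a typical 300 DU ozone column
```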
The Dobson unit is named after Gordon Dobson, a researcher at the University of Oxford who in the 1920s built the first instrument to measure total ozone from the ground, making use of a double prism monochromator to measure the differential absorption of different bands of solar ultraviolet radiation by the ozone layer. This instrument, called the Dobson ozone spectrophotometer, has formed the backbone of the global network for monitoring atmospheric ozone and was the source of the discovery in 1984 of the Antarctic ozone hole.
Ozone
NASA uses a baseline value of 220 DU for ozone. This was chosen as the starting point for observations of the Antarctic ozone hole, since values of less than 220 Dobson units were not found before 1979. Also, from direct measurements over Antarctica, a column ozone level of less than 220 Dobson units is a result of the ozone loss from chlorine and bromine compounds.
Sulfur dioxide
In addition, Dobson units are often used to describe total column densities of sulfur dioxide, which occurs in the atmosphere in small amounts due to the combustion of fossil fuels, from biological processes releasing dimethyl sulfide, or by natura |
https://en.wikipedia.org/wiki/WIRIS | WIRIS is a company, legally registered as Maths for More, providing a set of proprietary HTML-based JavaScript tools which can author and edit mathematical formulas, execute mathematical problems and show mathematical graphics on the Cartesian coordinate system.
WIRIS equation editor is a native browser application, with a light server-side component, that supports both MathML and LaTeX. Since 2017, after buying Design Science, a US-based developer of the MathType desktop software, WIRIS rebranded their web equation editor as MathType by WIRIS.
WIRIS is based in Barcelona, Spain and was founded by teachers and former students from the Technical University of Catalonia (Barcelona Tech) coordinated by Professor Sebastià Xambó. |
https://en.wikipedia.org/wiki/Thermodynamics%20and%20an%20Introduction%20to%20Thermostatistics | Thermodynamics and an Introduction to Thermostatistics is a textbook written by Herbert Callen that explains the basics of classical thermodynamics and discusses advanced topics in both classical and quantum frameworks. It covers the subject in an abstract and rigorous manner and contains discussions of applications. The textbook contains three parts, each building upon the previous. The first edition was published in 1960 and a second followed in 1985.
Overview
The first part of the book starts by presenting the problem thermodynamics is trying to solve, and provides the postulates on which thermodynamics is founded. It then develops upon this foundation to discuss reversible processes, heat engines, thermodynamic potentials, Maxwell's relations, stability of thermodynamic systems, and first-order phase transitions. Having laid down the basics of thermodynamics, the author then goes on to discuss more advanced topics such as critical phenomena and irreversible processes.
The second part of the text presents the foundations of classical statistical mechanics. The concept of Boltzmann's entropy is introduced and used to describe the Einstein model, the two-state system, and the polymer model. Afterwards, the different statistical ensembles are discussed, from which the thermodynamic potentials are derived. Quantum fluids and fluctuations are also discussed.
The last part of the text is a brief discussion on symmetry and the conceptual foundations of thermostatistics. In the final chapter, Callen advances his thesis that the symmetries of the fundamental laws of physics underlie the very foundations of thermodynamics and seeks to illuminate the crucial role thermodynamics plays in science.
Callen advises that a one-semester course for advanced undergraduates should cover the first seven chapters plus chapters 15 and 16 if time permits.
Second edition
Background
The second edition provides a descriptive account of the thermodynamics of critical phenomena, which |
https://en.wikipedia.org/wiki/Seroconversion | In immunology, seroconversion is the development of specific antibodies in the blood serum as a result of infection or immunization, including vaccination. During infection or immunization, antigens enter the blood, and the immune system begins to produce antibodies in response. Before seroconversion, the antigen itself may or may not be detectable, but the antibody is absent. During seroconversion, the antibody is present but not yet detectable. After seroconversion, the antibody is detectable by standard techniques and remains detectable unless the individual seroreverts. Seroreversion, or loss of antibody detectability, can occur due to weakening of the immune system or waning antibody concentration over time. Seroconversion refers to the production of specific antibodies against specific antigens, meaning that a single infection could cause multiple waves of seroconversion against different antigens. Similarly, a single antigen could cause multiple waves of seroconversion with different classes of antibodies. For example, most antigens prompt seroconversion for the IgM class of antibodies first, and subsequently the IgG class.
Seroconversion rates are one of the methods used for determining the efficacy of a vaccine. The higher the rate of seroconversion, the more protective the vaccine for a greater proportion of the population. Seroconversion does not inherently confer immunity or resistance to infection. Only some antibodies, such as anti-spike antibodies for COVID-19, confer protection.
Because seroconversion refers to detectability by standard techniques, seropositivity status depends on the sensitivity and specificity of the assay. As a result, assays, like any serum test, may give false positives or false negatives and should be confirmed if used for diagnosis or treatment.
Mechanism
The physical structure of an antibody allows it to bind to a specific antigen, such as bacterial or viral proteins, to form a complex. Because antibodies are highly specific |
https://en.wikipedia.org/wiki/Logics%20for%20computability | Logics for computability are formulations of logic which capture some aspect of computability as a basic notion. This usually involves a mix of special logical connectives as well as semantics which explains how the logic is to be interpreted in a computational way.
Probably the first formal treatment of logic for computability is the realizability interpretation by Stephen Kleene in 1945, who gave an interpretation of intuitionistic number theory in terms of Turing machine computations. His motivation was to make precise the Brouwer–Heyting–Kolmogorov (BHK) interpretation of intuitionism, according to which proofs of mathematical statements are to be viewed as constructive procedures.
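As a toy rendering of that reading in a modern language (an informal sketch, not Kleene's number-theoretic encoding of realizers):

```python
# BHK-flavoured toy realizers: a proof of "A implies B" is a procedure turning
# evidence for A into evidence for B; a proof of an existential exhibits a witness.

def identity(a):
    """Realizes A -> A for any A: return the evidence unchanged."""
    return a

def successor_witness(n):
    """Realizes 'for every n there exists m with m > n' by exhibiting m = n + 1."""
    return n + 1

print(identity("evidence"), successor_witness(42))  # evidence 43
```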
With the rise of many other kinds of logic, such as modal logic and linear logic, and novel semantic models, such as game semantics, logics for computability have been formulated in several contexts. Here we mention two.
Modal logic for computability
Kleene's original realizability interpretation has received much attention among those who study connections between computability and logic. It was extended to full higher-order intuitionistic logic by Martin Hyland in 1982 who constructed the effective topos. In 2002, Steve Awodey, Lars Birkedal, and Dana Scott formulated a modal logic for computability which extended the usual realizability interpretation with two modal operators expressing the notion of being "computably true".
Japaridze's computability logic
"Computability Logic" is a proper noun referring to a research programme initiated by Giorgi Japaridze in 2003. Its ambition is to redevelop logic from a game-theoretic semantics. Such a semantics sees games as formal equivalents of interactive computational problems, and their "truth" as existence of algorithmic winning strategies. See Computability logic
See also
Computability logic
Game semantics
Interactive computation |
https://en.wikipedia.org/wiki/Labcorp | Laboratory Corporation of America Holdings, more commonly known as Labcorp, is an American healthcare company headquartered in Burlington, North Carolina. It operates one of the largest clinical laboratory networks in the world, with a United States network of 36 primary laboratories. Before a merger with National Health Laboratory in 1995, the company operated under the name Roche BioMedical. Labcorp performs its largest volume of specialty testing at its Center for Esoteric Testing in Burlington, North Carolina, where the company is headquartered. As of 2018, Labcorp processes 2.5 million lab tests weekly.
Labcorp was an early pioneer of genomic testing using polymerase chain reaction (PCR) technology, at its Center for Molecular Biology and Pathology in Research Triangle Park, North Carolina, where it also performs other molecular diagnostics. It also does oncology testing, human immunodeficiency virus (HIV) genotyping and phenotyping.
Labcorp also operates the National Genetics Institute, Inc. (NGI), in Los Angeles, California, which develops PCR testing methods.
Labcorp's ViroMed facility was originally in Minnetonka, Minnesota, until that site closed in 2013; the operation is now housed in Burlington and performs real-time PCR microbial testing using laboratory-developed assays.
Labcorp also provides testing in Puerto Rico and in three Canadian provinces.
In February 2022, Labcorp announced that it has entered into agreements with Ascension, one of the nation’s leading Catholic and non-profit health systems, to manage Ascension’s hospital-based laboratories in 10 states and purchase select assets of the health system’s outreach laboratory business.
Labcorp utilizes a fleet of eight Pilatus PC-12 and a single Pilatus PC-24 aircraft on nightly runs from Burlington for use on the East Coast. Prior to the acquisition of the PC-12 aircraft, Labcorp utilized seven PA-31-350s.
History
Revlon
National Health Laboratories Incorporated began in 1978. The company was a national blo |
https://en.wikipedia.org/wiki/Press%E2%80%93Schechter%20formalism | The Press–Schechter formalism is a mathematical model for predicting the number of objects (such as galaxies, galaxy clusters or dark matter halos) of a certain mass within a given volume of the Universe. It was described in an academic paper by William H. Press and Paul Schechter in 1974.
Background
In the context of cold dark matter cosmological models, perturbations on all scales are imprinted on the universe at very early times, for example by quantum fluctuations during an inflationary era. Later, as radiation redshifts away, these become mass perturbations, and they start to grow linearly. Only long after that, starting with small mass scales and advancing over time to larger mass scales, do the perturbations actually collapse to form (for example) galaxies or clusters of galaxies, in so-called hierarchical structure formation (see Physical cosmology).
Press and Schechter observed that the fraction of mass in collapsed objects more massive than some mass M is related to the fraction of volume samples in which the smoothed initial density fluctuations are above some density threshold. This yields a formula for the mass function (distribution of masses) of objects at any given time.
Result
The Press–Schechter formalism predicts that the number of objects with mass between $M$ and $M + dM$ is:
$$N(M)\,dM = \frac{1}{\sqrt{\pi}}\left(1+\frac{n}{3}\right)\frac{\bar\rho}{M^2}\left(\frac{M}{M^*}\right)^{(3+n)/6}\exp\left(-\left(\frac{M}{M^*}\right)^{(3+n)/3}\right)dM,$$
where $n$ is the index of the power spectrum of the fluctuations in the early universe, $P(k)\propto k^n$; $\bar\rho$ is the mean (baryonic and dark) matter density of the universe at the time the fluctuation from which the object was formed had gravitationally collapsed; and $M^*$ is a cut-off mass below which structures will form. Its value is set by $\sigma$, the standard deviation per unit volume of the fluctuation from which the object was formed, evaluated at the time of the gravitational collapse, and by $R$, the scale of the universe at that time. Parameters with subscript 0 are taken at the time of the initial creation of the fluctuations (or any later time before the gravitational collapse).
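A minimal numerical sketch of the power-law mass function above (arbitrary units and assumed parameter values, purely illustrative):

```python
# Evaluate the Press-Schechter mass function for a power-law spectrum P(k) ~ k^n.
import numpy as np

def press_schechter(M, n=-1.0, rho_bar=1.0, M_star=1.0):
    """dN/dM in the same arbitrary units as rho_bar and M_star (assumed values)."""
    x = M / M_star
    return (1.0 / np.sqrt(np.pi)) * (1.0 + n / 3.0) * rho_bar / M**2 \
        * x**((3.0 + n) / 6.0) * np.exp(-x**((3.0 + n) / 3.0))

masses = np.logspace(-2, 1, 4)      # from well below to above the cut-off mass
print(press_schechter(masses))      # steep power law at low M, exponential cut-off above M*
```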
Qualitativel |
https://en.wikipedia.org/wiki/Hector%20%28microcomputer%29 | Hector (or Victor Lambda) is a series of microcomputers produced in France in the early 1980s.
In January 1980, Michel Henric-Coll founded a company named "Lambda Systems" in Toulouse, that would import a computer (produced by "Interact Electronics Inc" of Ann Arbor, Michigan) to France. The computer was sold under the name of "Victor Lambda".
"Lambda Systems" went bankrupt in July 1981, along with "Interact". In December 1981, "Micronique", an electronic components company based in southern Paris, acquires the rights to the "Victor Lambda".
In 1982, "Victor Lambda Diffusion", a subsidiary, distributed the "Victor Lambda". The first machines built in the United States were not a success, and the following models were designed and produced in France at the headquarters of the "Micronique" company. The company used the slogan: "The French Personal Computer".
In 1983, the "Victor" was renamed "Hector", to avoid confusion with the machines from the Californian company "Victor Technologies" (formerly "Sirius Systems Technology").
The last model introduced was the Hector MX, with production of the series ending in 1985. The series was not successful, due to the focus on the French market, intense competition from Amstrad machines and high prices.
Models
Victor Lambda
The Victor Lambda was a rebranded Interact Home Computer (also called the Interact Family Computer 2) microcomputer. Introduced in 1980, it had a chiclet keyboard and built-in cassette recorder for data storage.
Specifications:
CPU: Intel i8080, 2.0 MHz
Memory: 8K RAM, expandable to 16K RAM; 2K ROM
OS: Basic Level II (Microsoft BASIC v4.7); EDU-Basic (both loaded from tape)
Keyboard: 53-key chiclet
Display: 17 × 12 characters text in 8 colors; 112 × 78 with 4 colors from a palette of 8
Sound: SN76477 (one voice, four octaves)
Ports: Television (RGB), two joysticks, RS232 (optional)
Built-in cassette recorder (1200 B/s)
PSU: External AC transformer
Hector 1 (Victor Lambda 2)
The Hector 1 wa |
https://en.wikipedia.org/wiki/Theorem%20of%20absolute%20purity | In algebraic geometry, the theorem of absolute (cohomological) purity is an important theorem in the theory of étale cohomology. It states: given
a regular scheme X over some base scheme,
a closed immersion $i: Z \to X$ of a regular scheme of pure codimension r,
an integer n that is invertible on the base scheme,
a locally constant étale sheaf $\mathcal{F}$ with finite stalks and values in $\mathbb{Z}/n\mathbb{Z}$,
for each integer $m \ge 0$, the map
$$H^m(Z_{\mathrm{ét}}, \mathcal{F}) \to H^{m+2r}_Z(X_{\mathrm{ét}}, \mathcal{F}(r))$$
is bijective, where the map is induced by cup product with the cycle class of $Z$.
The theorem was introduced in SGA 5 Exposé I, § 3.1.4. as an open problem. Later, Thomason proved it for large n and Gabber in general.
See also
purity (algebraic geometry) |
https://en.wikipedia.org/wiki/Sleep%20cycle | The sleep cycle is an oscillation between the slow-wave and REM (paradoxical) phases of sleep. It is sometimes called the ultradian sleep cycle, sleep–dream cycle, or REM-NREM cycle, to distinguish it from the circadian alternation between sleep and wakefulness. In humans, this cycle takes 70 to 110 minutes (90 ± 20 minutes).
Characteristics
Electroencephalography shows the timing of sleep cycles by virtue of the marked distinction in brainwaves manifested during REM and non-REM sleep. Delta wave activity, correlating with slow-wave (deep) sleep, in particular shows regular oscillations throughout a good night's sleep. Secretions of various hormones, including renin, growth hormone, and prolactin, correlate positively with delta-wave activity, while secretion of thyroid-stimulating hormone correlates inversely. Heart rate variability, well known to increase during REM, predictably also correlates inversely with delta-wave oscillations over the ~90-minute cycle.
In order to determine which stage of sleep the sleeping subject is in, electroencephalography is combined with other devices used for this differentiation. EMG (electromyography) is a crucial method to distinguish between sleep phases: for example, a decrease of muscle tone is in general a characteristic of the transition from wake to sleep, and during REM sleep, there is a state of muscle atonia (paralysis), resulting in an absence of signals in the EMG.
EOG (electrooculography), the measurement of eye movement, is the third method used in assessing sleep architecture; for example, REM sleep, as the name indicates, is characterized by a rapid eye movement pattern, visible thanks to the EOG.
Moreover, methods based on cardiorespiratory parameters are also effective in the analysis of sleep architecture, if they are associated with the other aforementioned measurements (such as electroencephalography, electrooculography and electromyography).
Homeostatic functions, especially thermoregulation, o |
https://en.wikipedia.org/wiki/Optic%20vesicle | The eyes begin to develop as a pair of diverticula (pouches) from the lateral aspects of the forebrain. These diverticula make their appearance before the closure of the anterior end of the neural tube; after the closure of the tube around the 4th week of development, they are known as the optic vesicles. Previous studies of optic vesicles suggest that the surrounding extraocular tissues – the surface ectoderm and extraocular mesenchyme – are necessary for normal eye growth and differentiation.
They project toward the sides of the head, and the peripheral part of each expands to form a hollow bulb, while the proximal part remains narrow and constitutes the optic stalk, which goes on to form the optic nerve.
Additional images
See also
Eye development |
https://en.wikipedia.org/wiki/Bus%20analyzer | A bus analyzer is a type of a protocol analysis tool, used for capturing and analyzing communication data across a specific interface bus, usually embedded in a hardware system. The bus analyzer functionality helps design, test and validation engineers to check, test, debug and validate their designs throughout the design cycles of a hardware-based product. It also helps in later phases of a product life cycle, in examining communication interoperability between systems and between components, and clarifying hardware support concerns.
A bus analyzer is designed for use with specific parallel or serial bus architectures. Though the term bus analyzer implies that a physical communication interface is being analyzed, it is sometimes used interchangeably with the terms protocol analyzer or packet analyzer, and may also be used for analysis tools for wireless interfaces such as wireless LAN (like Wi-Fi), PAN (like Bluetooth, Wireless USB), and others, even though these technologies do not have a wired bus.
The bus analyzer monitors and captures the bus communication data, decodes and analyses it, and displays the data and analysis reports to the user. It is essentially a logic analyzer with some additional knowledge of the underlying bus traffic characteristics. One of the key differences between a bus analyzer and a logic analyzer is notably its ability to filter and extract only relevant traffic that occurs on the analyzed bus. Some advanced logic analyzers present data storage qualification options that also allow filtering of bus traffic, enabling bus analyzer-like features.
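A minimal sketch of that filtering step (the frame format and field names are invented for illustration, not tied to any particular bus):

```python
# Keep only the captured traffic addressed to one device on the analyzed bus.
from dataclasses import dataclass

@dataclass
class Frame:
    address: int    # which device the transaction targets (invented field)
    payload: bytes  # raw captured bus data

def filter_capture(frames, target_address):
    """Extract only the traffic relevant to one device."""
    return [f for f in frames if f.address == target_address]

capture = [Frame(0x10, b"\x01"), Frame(0x22, b"\x02"), Frame(0x10, b"\x03")]
print(filter_capture(capture, 0x10))  # the two frames addressed to device 0x10
```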
Some key differentiators between bus and logic analyzers are:
1. Cost: Logic analyzers usually carry higher prices than bus analyzers. The flip side is that a logic analyzer can be used with a variety of bus architectures, whereas a bus analyzer is only good with one architecture.
2. Targeted Capabilities and Preformatting of data: A bus analyzer can be designed to provide very specific |