Columns: doc_id (int32, values 15 to 2.25M), text (string, lengths 101 to 6.85k), source (string, lengths 39 to 44)
2,147,669
Stem cell genomics analyzes the genomes of stem cells. The field is currently expanding rapidly due to the dramatic decrease in the cost of sequencing genomes. The study of stem cell genomics has wide-reaching implications for stem cell biology and possible therapeutic uses of stem cells. Research in this field could lead to drug discovery and information on diseases through molecular characterization of pluripotent stem cells by DNA and transcriptome sequencing and by examining the epigenetic changes of stem cells and their derivatives. One step in that process is single-cell phenotypic analysis and the connection between the phenotype and genotype of specific stem cells. While current genomic screens are done with entire populations of cells, focusing on a single stem cell helps determine the specific signaling activity associated with varying degrees of stem cell differentiation and limits the background caused by heterogeneous populations. Single-cell analysis of induced pluripotent stem cells (iPSCs), stem cells able to differentiate into many different cell types, is a suggested method for treating diseases such as Alzheimer's disease (AD), including understanding the differences between sporadic AD and familial AD. A skin sample is first taken from the patient, and the cells are transformed by transduction with retroviruses encoding stem cell genes such as Oct4, Sox2, KLF4 and cMYC. This allows skin cells to be reprogrammed into patient-specific stem cell lines. Genomic sequencing of these individual cells would allow for patient-specific treatments and further understanding of AD disease models. The same technique could be used for similar diseases, such as amyotrophic lateral sclerosis (ALS) and spinal muscular atrophy (SMA). Stem cells developed from a single patient could also be used to produce the cell types affected by the above-mentioned diseases and, as noted, would yield patient-specific phenotypes of each disease. Further chemical analyses to develop safer drugs can be done using sequence information and cell-culture tests on iPSCs. Once a specific drug has been developed, it can be tested on other patients' diseased cells while also undergoing safety testing.
https://en.wikipedia.org/wiki?curid=5747184
2,164,952
Neuro: of or having to do with the nervous system. Nervous system: an organ system that coordinates the activities of muscles, monitors organs, constructs and processes data received from the senses, and initiates actions. The human nervous system coordinates the functions of itself and all other organ systems, including but not limited to the cardiovascular, respiratory, digestive, immune, hormonal, metabolic, musculoskeletal, endocrine, blood, and reproductive systems, as well as the skin. Optimal function of the organism as a whole depends upon the proper function of the nervous system.
https://en.wikipedia.org/wiki?curid=49292146
2,235,722
Probabilistic Approach for protein NMR Assignment Validation (PANAV) is a freely available stand-alone program that is used for protein chemical shift re-referencing. Chemical shift referencing is a problem in protein nuclear magnetic resonance as >20% of reported NMR chemical shift assignments appear to be improperly referenced. For certain nuclei (especially 13C and 15N) these referencing issues can cause systematic chemical shift errors of between 1.0 and 2.5 ppm. Chemical shift errors of this magnitude often make it very difficult to compare NMR chemical shift assignments between proteins. It also makes it very hard to structurally interpret chemical shifts (i.e. identify secondary structures or perform chemical shift refinement). Unlike most other chemical shift re-referencing tools PANAV employs a structure-independent protocol. That is, with PANAV there is no need to know the structure of the protein in advance of correcting any chemical shift referencing errors. This makes PANAV particularly useful for NMR studies involving novel or newly assigned proteins, where the structure has yet to be determined. Indeed, this scenario represents the vast majority of assignment cases in biomolecular NMR. PANAV uses residue-specific and secondary structure-specific chemical shift distributions that were calculated over short (3-6 residue) fragments of correctly referenced proteins (found in RefDB) to identify mis-assigned resonances. More specifically, PANAV compares the initial (i.e. observed) chemical shift assignments to the expected chemical shifts based on their local sequence and expected/predicted secondary structure. In this way, PANAV is able to identify and re-reference mis-referenced chemical shift assignments. PANAV can also identify potentially mis-assigned resonances as well. PANAV has been extensively tested and compared against a large number of existing re-referencing or mis-assignment detection programs (most of which are structure-based). These assessments indicate that PANAV is equal to or superior to existing approaches.
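As an illustration of the general idea (not PANAV's actual algorithm), a systematic referencing error shows up as a roughly constant offset between observed and expected chemical shifts, which can be estimated robustly; this is a minimal Python sketch with made-up numbers:

```python
import numpy as np

def estimate_reference_offset(observed, expected):
    """Estimate a systematic chemical shift referencing offset (ppm).

    observed/expected: arrays of assigned vs. expected shifts for one
    nucleus type (e.g. 13C) across many residues.  A robust location
    estimate of the residuals approximates a uniform referencing error;
    large outliers are candidate mis-assignments.
    """
    residuals = np.asarray(observed) - np.asarray(expected)
    offset = np.median(residuals)                      # robust to a few mis-assignments
    outliers = np.abs(residuals - offset) > 3 * np.std(residuals)
    return offset, np.flatnonzero(outliers)

# Example: shifts reported ~1.7 ppm too high would be flagged for re-referencing.
obs = np.array([176.9, 175.2, 178.4, 174.8])
exp = np.array([175.2, 173.5, 176.7, 173.0])
print(estimate_reference_offset(obs, exp))   # offset ≈ 1.7 ppm, no outliers
```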
https://en.wikipedia.org/wiki?curid=42516350
11,699
Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on "known" properties learned from the training data, data mining focuses on the discovery of (previously) "unknown" properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to "reproduce known" knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously "unknown" knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data.
https://en.wikipedia.org/wiki?curid=233488
11,710
Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs. The data is known as training data, and consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an array or vector, sometimes called a feature vector, and the training data is represented by a matrix. Through iterative optimization of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs. An optimal function will allow the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task.
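A minimal sketch of this idea, assuming a linear model and a squared-error objective with made-up training data (not taken from the article):

```python
import numpy as np

# Training data: each row of X is a feature vector, y holds the desired outputs.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0], [4.0, 3.0]])
y = np.array([5.0, 4.0, 11.0, 10.0])          # supervisory signal

w = np.zeros(X.shape[1])                       # model parameters to learn
lr = 0.01                                      # learning rate

# Iterative optimization of a squared-error objective function.
for _ in range(2000):
    predictions = X @ w
    gradient = 2 * X.T @ (predictions - y) / len(y)
    w -= lr * gradient

print(w)        # learned weights (≈ [1, 2] for this data)
print(X @ w)    # predictions close to y; the same function applies to new inputs
```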
https://en.wikipedia.org/wiki?curid=233488
11,743
An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times.
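A minimal NumPy sketch of the forward pass described above; the weights are random and the layer sizes are arbitrary illustrative choices:

```python
import numpy as np

def sigmoid(z):
    """Non-linear function applied to the weighted sum of a neuron's inputs."""
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Weights on the "edges": input layer (3 signals) -> hidden layer (4 neurons) -> output (1 neuron).
W_hidden, b_hidden = rng.normal(size=(4, 3)), np.zeros(4)
W_out,    b_out    = rng.normal(size=(1, 4)), np.zeros(1)

def forward(x):
    hidden = sigmoid(W_hidden @ x + b_hidden)   # each hidden neuron: nonlinearity of its summed inputs
    return sigmoid(W_out @ hidden + b_out)      # output layer

print(forward(np.array([0.5, -1.2, 3.0])))      # signal propagated input -> hidden -> output
```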
https://en.wikipedia.org/wiki?curid=233488
17,547
Alumni have made important contributions to science. Some have concentrated their studies on the very small universe of atoms and molecules. Nobel laureate William F. Giauque (BS 1920, PhD 1922) investigated chemical thermodynamics, Nobel laureate Willard Libby (BS 1931, PhD 1933) pioneered radiocarbon dating, Nobel laureate Willis Lamb (BS 1934, PhD 1938) examined the hydrogen spectrum, Nobel laureate Hamilton O. Smith (BA 1952) applied restriction enzymes to molecular genetics, Nobel laureate Robert Laughlin (BA math 1972) explored the fractional quantum Hall effect, and Nobel laureate Andrew Fire (BA math 1978) helped to discover RNA interference-gene silencing by double-stranded RNA. Nobel laureate Glenn T. Seaborg (PhD 1937) collaborated with Albert Ghiorso (BS 1913) to discover 12 chemical elements, including americium, berkelium, and californium. David Bohm (PhD 1943) discovered Bohm diffusion. Nobel laureate Yuan T. Lee (PhD 1965) developed the crossed molecular beam technique for studying chemical reactions. Carol Greider (PhD 1987), professor of molecular biology and genetics at Johns Hopkins University School of Medicine, was awarded the 2009 Nobel Prize in medicine for discovering a key mechanism in the genetic operations of cells, an insight that has inspired new lines of research into cancer. Harvey Itano (BS 1942) conducted breakthrough work on sickle cell anemia that marked the first time a disease was linked to a molecular origin. While he was valedictorian of Berkeley's class of 1942, he was unable to attend commencement exercises due to internment. Narendra Karmarkar (PhD 1983) is known for the interior point method, a polynomial algorithm for linear programming known as Karmarkar's algorithm. National Medal of Science laureate Chien-Shiung Wu (PhD 1940), often known as the "Chinese Madame Curie", disproved the Law of Conservation of Parity, for which she was awarded the inaugural Wolf Prize in Physics. Kary Mullis (PhD 1973) was awarded the 1993 Nobel Prize in Chemistry for his role in developing the polymerase chain reaction, a method for amplifying DNA sequences. Daniel Kahneman was awarded the 2002 Nobel Memorial Prize in Economics for his work in prospect theory. Richard O. Buckius (BS Mechanical Engineering 1972, MS 1973, PhD 1975), an engineer, is currently Chief Operating Officer of the National Science Foundation. Edward P. Tryon (PhD 1967) is the physicist who first said our universe originated from a quantum fluctuation of the vacuum.
https://en.wikipedia.org/wiki?curid=31922
18,768
Quantum computing began in 1980 when physicist Paul Benioff proposed a quantum mechanical model of the Turing machine. Richard Feynman and Yuri Manin later suggested that a quantum computer had the potential to simulate things a classical computer could not feasibly do. In 1986 Feynman introduced an early version of the quantum circuit notation. In 1994, Peter Shor developed a quantum algorithm for finding the prime factors of an integer with the potential to decrypt RSA-encrypted communications. In 1998 Isaac Chuang, Neil Gershenfeld and Mark Kubinec created the first two-qubit quantum computer that could perform computations. Despite ongoing experimental progress since the late 1990s, most researchers believe that "fault-tolerant quantum computing [is] still a rather distant dream." In 2015, Duke University studies estimated that a fault-tolerant quantum computer with nearly 3 million qubits could factor a 2,048-bit integer in five months. In recent years, investment in quantum computing research has increased in the public and private sectors. On 23 October 2019, Google AI, in partnership with the U.S. National Aeronautics and Space Administration (NASA), claimed to have performed a quantum computation that was infeasible on any classical computer, but whether this claim was or is still valid is a topic of active research.
https://en.wikipedia.org/wiki?curid=25220
20,846
Electricity generation is often done by a process of converting mechanical energy to electricity. Devices such as steam turbines or gas turbines are involved in the production of the mechanical energy, which is passed on to electric generators which produce the electricity. Electricity can also be supplied by chemical sources such as electric batteries or by other means from a wide variety of sources of energy. Electric power is generally supplied to businesses and homes by the electric power industry. Electricity is usually sold by the kilowatt hour (3.6 MJ) which is the product of power in kilowatts multiplied by running time in hours. Electric utilities measure power using electricity meters, which keep a running total of the electric energy delivered to a customer. Unlike fossil fuels, electricity is a low entropy form of energy and can be converted into motion or many other forms of energy with high efficiency.
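A tiny illustration of the kilowatt-hour arithmetic described above, with made-up figures:

```python
power_kw = 1.5            # e.g. a 1.5 kW appliance (illustrative value)
hours = 4.0               # running time

energy_kwh = power_kw * hours          # energy as billed by the utility
energy_mj = energy_kwh * 3.6           # 1 kWh = 3.6 MJ

print(f"{energy_kwh} kWh = {energy_mj} MJ")   # 6.0 kWh = 21.6 MJ
```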
https://en.wikipedia.org/wiki?curid=9550
27,182
The cell cycle is a series of events that take place in a cell that cause it to divide into two daughter cells. These events include the duplication of its DNA and some of its organelles, and the subsequent partitioning of its cytoplasm into two daughter cells in a process called cell division. In eukaryotes (i.e., animal, plant, fungal, and protist cells), there are two distinct types of cell division: mitosis and meiosis. Mitosis is part of the cell cycle, in which replicated chromosomes are separated into two new nuclei. Cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. In general, mitosis (division of the nucleus) is preceded by the S stage of interphase (during which the DNA is replicated) and is often followed by telophase and cytokinesis; which divides the cytoplasm, organelles and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. The different stages of mitosis all together define the mitotic phase of an animal cell cycle—the division of the mother cell into two genetically identical daughter cells. The cell cycle is a vital process by which a single-celled fertilized egg develops into a mature organism, as well as the process by which hair, skin, blood cells, and some internal organs are renewed. After cell division, each of the daughter cells begin the interphase of a new cycle. In contrast to mitosis, meiosis results in four haploid daughter cells by undergoing one round of DNA replication followed by two divisions. Homologous chromosomes are separated in the first division (meiosis I), and sister chromatids are separated in the second division (meiosis II). Both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. Both are believed to be present in the last eukaryotic common ancestor.
https://en.wikipedia.org/wiki?curid=9127632
34,211
Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. Our Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that itself (since it still contains the same total energy even in different forms) but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy.
https://en.wikipedia.org/wiki?curid=9649
34,213
Energy transformations in the universe over time are characterized by various kinds of potential energy, that has been available since the Big Bang, being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available. Familiar examples of such processes include nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to "store" energy in the creation of heavy isotopes (such as uranium and thorium), and nuclear decay, a process in which energy is released that was originally stored in these heavy elements, before they were incorporated into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic and thermal energy in a very short time.
https://en.wikipedia.org/wiki?curid=9649
48,776
In unsupervised learning, input data is given along with the cost function, some function of the data x and the network's output f(x). The cost function is dependent on the task (the model domain) and any "a priori" assumptions (the implicit properties of the model, its parameters and the observed variables). As a trivial example, consider the model f(x) = a where a is a constant and the cost C = E[(x − f(x))²]. Minimizing this cost produces a value of a that is equal to the mean of the data. The cost function can be much more complicated. Its form depends on the application: for example, in compression it could be related to the mutual information between x and f(x), whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples those quantities would be maximized rather than minimized). Tasks that fall within the paradigm of unsupervised learning are in general estimation problems; the applications include clustering, the estimation of statistical distributions, compression and filtering.
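A quick numerical check of the trivial example above: minimizing C = E[(x − a)²] by gradient descent recovers the mean of the data (the data values below are arbitrary):

```python
import numpy as np

x = np.array([2.0, 3.0, 7.0, 8.0])            # observed data

def cost(a):
    return np.mean((x - a) ** 2)               # C = E[(x - f(x))^2] with f(x) = a

# Minimize by gradient descent; the optimum equals the data mean.
a = 0.0
for _ in range(500):
    a -= 0.1 * (-2 * np.mean(x - a))           # dC/da = -2 E[x - a]

print(a, x.mean())                             # both ≈ 5.0
```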
https://en.wikipedia.org/wiki?curid=21523
54,103
The conservation of energy is a universal principle in physics and holds for any interaction, along with the conservation of momentum. The classical conservation of mass, in contrast, is violated in certain relativistic settings. This concept has been experimentally proven in a number of ways, including the conversion of mass into kinetic energy in nuclear reactions and other interactions between elementary particles. While modern physics has discarded the expression 'conservation of mass', in older terminology a relativistic mass can also be defined to be equivalent to the energy of a moving system, allowing for a "conservation of relativistic mass". Mass conservation breaks down when the energy associated with the mass of a particle is converted into other forms of energy, such as kinetic energy, thermal energy, or radiant energy. Similarly, kinetic or radiant energy can be used to create particles that have mass, always conserving the total energy and momentum.
https://en.wikipedia.org/wiki?curid=422481
55,072
In biology, the nervous system is the highly complex part of an animal that coordinates its actions and sensory information by transmitting signals to and from different parts of its body. The nervous system detects environmental changes that impact the body, then works in tandem with the endocrine system to respond to such events. Nervous tissue first arose in wormlike organisms about 550 to 600 million years ago. In vertebrates it consists of two main parts, the central nervous system (CNS) and the peripheral nervous system (PNS). The CNS consists of the brain and spinal cord. The PNS consists mainly of nerves, which are enclosed bundles of the long fibers or axons, that connect the CNS to every other part of the body. Nerves that transmit signals from the brain are called motor nerves or "efferent" nerves, while those nerves that transmit information from the body to the CNS are called sensory nerves or "afferent". Spinal nerves are mixed nerves that serve both functions. The PNS is divided into three separate subsystems, the somatic, autonomic, and enteric nervous systems. Somatic nerves mediate voluntary movement. The autonomic nervous system is further subdivided into the sympathetic and the parasympathetic nervous systems. The sympathetic nervous system is activated in cases of emergencies to mobilize energy, while the parasympathetic nervous system is activated when organisms are in a relaxed state. The enteric nervous system functions to control the gastrointestinal system. Both autonomic and enteric nervous systems function involuntarily. Nerves that exit from the cranium are called cranial nerves while those exiting from the spinal cord are called spinal nerves.
https://en.wikipedia.org/wiki?curid=21944
70,218
The kinetic energy of any entity depends on the reference frame in which it is measured. However, the total energy of an isolated system, i.e. one in which energy can neither enter nor leave, does not change over time in the reference frame in which it is measured. Thus, the chemical energy converted to kinetic energy by a rocket engine is divided differently between the rocket ship and its exhaust stream depending upon the chosen reference frame. This is called the Oberth effect. But the total energy of the system, including kinetic energy, fuel chemical energy, heat, etc., is conserved over time, regardless of the choice of reference frame. Different observers moving with different reference frames would however disagree on the value of this conserved energy.
https://en.wikipedia.org/wiki?curid=17327
87,746
A "wet cell" battery has a liquid electrolyte. Other names are "flooded cell", since the liquid covers all internal parts or "vented cell", since gases produced during operation can escape to the air. Wet cells were a precursor to dry cells and are commonly used as a learning tool for electrochemistry. They can be built with common laboratory supplies, such as beakers, for demonstrations of how electrochemical cells work. A particular type of wet cell known as a concentration cell is important in understanding corrosion. Wet cells may be primary cells (non-rechargeable) or secondary cells (rechargeable). Originally, all practical primary batteries such as the Daniell cell were built as open-top glass jar wet cells. Other primary wet cells are the Leclanche cell, Grove cell, Bunsen cell, Chromic acid cell, Clark cell, and Weston cell. The Leclanche cell chemistry was adapted to the first dry cells. Wet cells are still used in automobile batteries and in industry for standby power for switchgear, telecommunication or large uninterruptible power supplies, but in many places batteries with gel cells have been used instead. These applications commonly use lead–acid or nickel–cadmium cells. Molten salt batteries are primary or secondary batteries that use a molten salt as electrolyte. They operate at high temperatures and must be well insulated to retain heat.
https://en.wikipedia.org/wiki?curid=19174720
88,156
The first law for a closed system is commonly written ΔU = Q − W, where ΔU denotes the change in the internal energy of the system (for which heat or work through the system boundary are possible, but matter transfer is not possible), Q denotes the quantity of energy supplied "to" the system as heat, and W denotes the amount of thermodynamic work done "by" the system "on" its surroundings. An equivalent statement is that perpetual motion machines of the first kind are impossible; work W done by a system on its surroundings requires that the system's internal energy U decrease or be consumed, so that the amount of internal energy lost by that work must be resupplied as heat Q by an external energy source or as work by an external machine acting on the system (so that U is recovered) to make the system work continuously.
https://en.wikipedia.org/wiki?curid=29952
93,758
Probability distributions usually belong to one of two classes. A discrete probability distribution is applicable to the scenarios where the set of possible outcomes is discrete (e.g. a coin toss, a roll of a die) and the probabilities are encoded by a discrete list of the probabilities of the outcomes; in this case the discrete probability distribution is known as probability mass function. On the other hand, absolutely continuous probability distributions are applicable to scenarios where the set of possible outcomes can take on values in a continuous range (e.g. real numbers), such as the temperature on a given day. In the absolutely continuous case, probabilities are described by a probability density function, and the probability distribution is by definition the integral of the probability density function. The normal distribution is a commonly encountered absolutely continuous probability distribution. More complex experiments, such as those involving stochastic processes defined in continuous time, may demand the use of more general probability measures.
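A short Python sketch contrasting the two classes, using a fair die for the discrete case and a normal density integrated numerically for the absolutely continuous case (the temperature parameters are illustrative assumptions):

```python
import numpy as np

# Discrete: a fair die; probabilities are encoded by a probability mass function.
pmf = {face: 1 / 6 for face in range(1, 7)}
p_at_most_2 = pmf[1] + pmf[2]                      # P(X <= 2) = 1/3

# Absolutely continuous: probabilities are integrals of a probability density function.
def normal_pdf(x, mu=20.0, sigma=5.0):             # e.g. the temperature on a given day
    return np.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

grid = np.linspace(15.0, 25.0, 10_001)
dx = grid[1] - grid[0]
p_15_to_25 = np.sum(normal_pdf(grid)) * dx         # numerical integral ≈ 0.683

print(p_at_most_2, p_15_to_25)
```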
https://en.wikipedia.org/wiki?curid=23543
97,446
Helper T cells express T cell receptors that recognize antigen bound to Class II MHC molecules. The MHC:antigen complex is also recognized by the helper cell's CD4 co-receptor, which recruits molecules inside the T cell (such as Lck) that are responsible for the T cell's activation. Helper T cells have a weaker association with the MHC:antigen complex than observed for killer T cells, meaning many receptors (around 200–300) on the helper T cell must be bound by an MHC:antigen to activate the helper cell, while killer T cells can be activated by engagement of a single MHC:antigen molecule. Helper T cell activation also requires longer duration of engagement with an antigen-presenting cell. The activation of a resting helper T cell causes it to release cytokines that influence the activity of many cell types. Cytokine signals produced by helper T cells enhance the microbicidal function of macrophages and the activity of killer T cells. In addition, helper T cell activation causes an upregulation of molecules expressed on the T cell's surface, such as CD40 ligand (also called CD154), which provide extra stimulatory signals typically required to activate antibody-producing B cells.
https://en.wikipedia.org/wiki?curid=14958
100,996
There are various types of potential energy, each associated with a particular type of force. For example, the work of an elastic force is called elastic potential energy; work of the gravitational force is called gravitational potential energy; work of the Coulomb force is called electric potential energy; work of the strong nuclear force or weak nuclear force acting on the baryon charge is called nuclear potential energy; work of intermolecular forces is called intermolecular potential energy. Chemical potential energy, such as the energy stored in fossil fuels, is the work of the Coulomb force during rearrangement of configurations of electrons and nuclei in atoms and molecules. Thermal energy usually has two components: the kinetic energy of random motions of particles and the potential energy of their configuration.
https://en.wikipedia.org/wiki?curid=23703
118,576
In 1912, Max Planck published the first journal article to describe the discontinuous emission of radiation, based on the discrete quanta of energy. In Planck's "second quantum theory" resonators absorbed energy continuously, but emitted energy in discrete energy quanta only when they reached the boundaries of finite cells in phase space, where their energies became integer multiples of hν. This theory led Planck to his new radiation law, but in this version energy resonators possessed a zero-point energy, the smallest average energy a resonator could take on. Planck's radiation equation contained a residual energy factor, one hν/2, as an additional term dependent on the frequency ν, which was greater than zero (where h is Planck's constant). It is therefore widely agreed that "Planck's equation marked the birth of the concept of zero-point energy." In a series of papers from 1911 to 1913, Planck found the average energy of an oscillator to be:
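The expression itself is not reproduced in this excerpt; the formula usually quoted for Planck's second theory, containing the hν/2 zero-point term, is

$$\bar{\varepsilon} = \frac{h\nu}{2} + \frac{h\nu}{e^{h\nu/kT} - 1}$$

where k is Boltzmann's constant and T is the temperature; the first term is the zero-point energy that survives as T approaches zero.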
https://en.wikipedia.org/wiki?curid=84400
118,683
In 1951 Callen and Welton proved the quantum fluctuation-dissipation theorem (FDT), which was originally formulated in classical form by Nyquist (1928) as an explanation for observed Johnson noise in electric circuits. The fluctuation-dissipation theorem showed that when something dissipates energy, in an effectively irreversible way, a connected heat bath must also fluctuate. The fluctuations and the dissipation go hand in hand; it is impossible to have one without the other. The implication of the FDT is that the vacuum could be treated as a heat bath coupled to a dissipative force, and as such energy could, in part, be extracted from the vacuum for potentially useful work. Such a theory has met with resistance: Macdonald (1962) and Harris (1971) claimed that extracting power from the zero-point energy is impossible, so the FDT could not be true. Grau and Kleen (1982) and Kleen (1986) argued that the Johnson noise of a resistor connected to an antenna must satisfy Planck's thermal radiation formula, thus the noise must be zero at zero temperature and the FDT must be invalid. Kiss (1988) pointed out that the existence of the zero-point term may indicate that there is a renormalization problem (i.e., a mathematical artifact) producing an unphysical term that is not actually present in measurements (in analogy with renormalization problems of ground states in quantum electrodynamics). Later, Abbott et al. (1996) arrived at a different but unclear conclusion that "zero-point energy is infinite thus it should be renormalized but not the 'zero-point fluctuations'". Despite such criticism, the FDT has been shown to be true experimentally under certain quantum, non-classical conditions. Zero-point fluctuations can, and do, contribute towards systems which dissipate energy. A paper by Armen Allahverdyan and Theo Nieuwenhuizen in 2000 showed the feasibility of extracting zero-point energy for useful work from a single bath, without contradicting the laws of thermodynamics, by exploiting certain quantum mechanical properties.
https://en.wikipedia.org/wiki?curid=84400
128,214
In Bayesian inference, one can speak about the likelihood of any proposition or random variable given another random variable: for example, the likelihood of a parameter value or of a statistical model (see marginal likelihood) given specified data or other evidence. Even so, the likelihood function remains the same entity, with the additional interpretations of (i) a conditional density of the data given the parameter (since the parameter is then a random variable) and (ii) a measure or amount of information brought by the data about the parameter value or even the model. Due to the introduction of a probability structure on the parameter space or on the collection of models, it is possible that a parameter value or a statistical model has a large likelihood value for given data, and yet a low "probability", or vice versa. This is often the case in medical contexts. Following Bayes' rule, the likelihood, when seen as a conditional density, can be multiplied by the prior probability density of the parameter and then normalized to give a posterior probability density. More generally, the likelihood of an unknown quantity X given another unknown quantity Y is proportional to the "probability of Y given X".
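A minimal sketch of the normalization step described above (likelihood times prior, then normalized over a parameter grid), using a made-up coin-bias example:

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 1001)          # parameter grid (e.g. a coin's bias)
prior = np.ones_like(theta)                  # flat prior density

heads, tosses = 7, 10                        # observed data
likelihood = theta**heads * (1 - theta)**(tosses - heads)

unnormalized = likelihood * prior            # Bayes' rule before normalization
posterior = unnormalized / (np.sum(unnormalized) * (theta[1] - theta[0]))

print(theta[np.argmax(posterior)])           # posterior mode ≈ 0.7
```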
https://en.wikipedia.org/wiki?curid=44968
128,865
In the model of computation used in this article (the quantum circuit model), a classical computer generates the gate composition for the quantum computer, and the quantum computer behaves as a coprocessor that receives instructions from the classical computer about which primitive gates to apply to which qubits. Measurement of quantum registers results in binary values that the classical computer can use in its computations. Quantum algorithms often contain both a classical and a quantum part. Unmeasured I/O (sending qubits to remote computers without collapsing their quantum states) can be used to create networks of quantum computers. Entanglement swapping can then be used to realize distributed algorithms with quantum computers that are not directly connected. Examples of distributed algorithms that require only a handful of quantum logic gates are superdense coding, the quantum Byzantine agreement and the BB84 quantum key exchange protocol.
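As an illustration, one of the protocols mentioned, superdense coding, can be simulated with plain state vectors; this is only a NumPy sketch of the textbook circuit, not how a distributed quantum system would actually be programmed:

```python
import numpy as np

I = np.eye(2); X = np.array([[0, 1], [1, 0]]); Z = np.diag([1, -1])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])  # control = qubit 0

def superdense_send(b1, b2):
    """Encode two classical bits on one half of a shared Bell pair and decode them."""
    state = np.array([1, 0, 0, 1]) / np.sqrt(2)        # |00> + |11>, shared in advance
    encode = (Z if b1 else I) @ (X if b2 else I)        # Alice acts only on her qubit...
    state = np.kron(encode, I) @ state                  # ...then sends that single qubit to Bob
    state = np.kron(H, I) @ (CNOT @ state)              # Bob's decoding circuit
    outcome = np.argmax(np.abs(state) ** 2)             # measurement is deterministic here
    return outcome >> 1, outcome & 1

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert superdense_send(*bits) == bits
print("all four two-bit messages recovered from a single transmitted qubit")
```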
https://en.wikipedia.org/wiki?curid=888587
130,496
Molecular biology studies the common genetic and developmental mechanisms of animals and plants, attempting to answer the questions regarding the mechanisms of genetic inheritance and the structure of the gene. In 1953, James Watson and Francis Crick described the structure of DNA and the interactions within the molecule, and this publication jump-started research into molecular biology and increased interest in the subject. While researchers practice techniques specific to molecular biology, it is common to combine these with methods from genetics and biochemistry. Much of molecular biology is quantitative, and recently a significant amount of work has been done using computer science techniques such as bioinformatics and computational biology. Molecular genetics, the study of gene structure and function, has been among the most prominent sub-fields of molecular biology since the early 2000s. Other branches of biology are informed by molecular biology, by either directly studying the interactions of molecules in their own right such as in cell biology and developmental biology, or indirectly, where molecular techniques are used to infer historical attributes of populations or species, as in fields in evolutionary biology such as population genetics and phylogenetics. There is also a long tradition of studying biomolecules "from the ground up", or molecularly, in biophysics.
https://en.wikipedia.org/wiki?curid=34413
164,050
Data integrity contains guidelines for data retention, specifying or guaranteeing the length of time data can be retained in a particular database. To achieve data integrity, these rules are consistently and routinely applied to all data entering the system, and any relaxation of enforcement could cause errors in the data. Implementing checks on the data as close as possible to the source of input (such as human data entry), causes less erroneous data to enter the system. Strict enforcement of data integrity rules results in lower error rates, and time saved troubleshooting and tracing erroneous data and the errors it causes to algorithms.
https://en.wikipedia.org/wiki?curid=40995
165,863
Rectification is the translation of the raw EMG signal to a signal with a single polarity, usually positive. The purpose of rectifying the signal is to ensure the signal does not average to zero, due to the raw EMG signal having positive and negative components. Two types of rectification are used: full-wave and half-wave rectification. Full-wave rectification adds the EMG signal below the baseline to the signal above the baseline to make a conditioned signal that is all positive. If the baseline is zero, this is equivalent to taking the absolute value of the signal. This is the preferred method of rectification because it conserves all of the signal energy for analysis. Half-wave rectification discards the portion of the EMG signal that is below the baseline. In doing so, the average of the data is no longer zero; therefore, it can be used in statistical analyses.
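A minimal NumPy sketch of the two rectification schemes on a zero-baseline signal (the sample values are made up):

```python
import numpy as np

raw_emg = np.array([0.12, -0.40, 0.33, -0.05, 0.27, -0.27])  # illustrative samples, baseline = 0

full_wave = np.abs(raw_emg)            # fold the negative lobes up: all signal energy kept
half_wave = np.maximum(raw_emg, 0.0)   # discard everything below the baseline

print(raw_emg.mean())                        # 0.0: positive and negative parts cancel
print(full_wave.mean(), half_wave.mean())    # both rectified means are > 0
```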
https://en.wikipedia.org/wiki?curid=997173
184,899
In chemistry and physics, activation energy is the minimum amount of energy that must be provided for compounds to result in a chemical reaction. The activation energy ("E") of a reaction is measured in joules per mole (J/mol), kilojoules per mole (kJ/mol) or kilocalories per mole (kcal/mol). Activation energy can be thought of as the magnitude of the potential barrier (sometimes called the energy barrier) separating minima of the potential energy surface pertaining to the initial and final thermodynamic state. For a chemical reaction to proceed at a reasonable rate, the temperature of the system should be high enough such that there exists an appreciable number of molecules with translational energy equal to or greater than the activation energy. The term "activation energy" was introduced in 1889 by the Swedish scientist Svante Arrhenius.
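The temperature dependence described here is commonly modelled with the Arrhenius equation, k = A·exp(−Ea/RT); the values of A and Ea in this small Python sketch are purely illustrative:

```python
import numpy as np

R = 8.314          # gas constant, J/(mol*K)
Ea = 50e3          # activation energy, J/mol (illustrative)
A = 1e13           # pre-exponential factor, 1/s (illustrative)

def rate_constant(T):
    """Arrhenius equation: the fraction of molecules with energy >= Ea grows with T."""
    return A * np.exp(-Ea / (R * T))

print(rate_constant(298.0))                           # rate constant at room temperature
print(rate_constant(308.0) / rate_constant(298.0))    # a 10 K rise roughly doubles the rate here
```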
https://en.wikipedia.org/wiki?curid=38413
184,905
A catalyst is able to reduce the activation energy by forming a transition state in a more favorable manner. Catalysts, by nature, create a more "comfortable" fit for the substrate of a reaction to progress to a transition state. This is possible due to a release of energy that occurs when the substrate binds to the active site of a catalyst. This energy is known as Binding Energy. Upon binding to a catalyst, substrates partake in numerous stabilizing forces while within the active site (e.g. hydrogen bonding or van der Waals forces). Specific and favorable bonding occurs within the active site until the substrate forms to become the high-energy transition state. Forming the transition state is more favorable with the catalyst because the favorable stabilizing interactions within the active site "release" energy. A chemical reaction is able to manufacture a high-energy transition state molecule more readily when there is a stabilizing fit within the active site of a catalyst. The binding energy of a reaction is this energy released when favorable interactions between substrate and catalyst occur. The binding energy released assists in achieving the unstable transition state. Reactions without catalysts need a higher input of energy to achieve the transition state. Non-catalyzed reactions do not have free energy available from active site stabilizing interactions, such as catalytic enzyme reactions.
https://en.wikipedia.org/wiki?curid=38413
196,505
A study by Natural Resources Canada found that cold climate air source heat pumps (CC-ASHPs) work in Canadian winters, based on testing in Ottawa (Ontario) in late December 2012 to early January 2013 using a ducted CC-ASHP. (The report does not explicitly state whether backup heat sources should be considered for temperatures below −30 °C. The record low for Ottawa is −36 °C.) The CC-ASHP provided 60% energy savings compared to natural gas (in energy units). When considering energy efficiency in electricity generation however, more energy would be used with the CC-ASHP, relative to natural gas heating, in provinces or territories (Alberta, Nova Scotia, and the Northwest Territories) where coal-fired generation was the predominant method of electricity generation. (The energy savings in Saskatchewan were marginal. Other provinces use primarily hydroelectric and/or nuclear generation.) Despite the significant energy savings relative to gas in provinces not relying primarily on coal, the higher cost of electricity relative to natural gas (using 2012 retail prices in Ottawa, Ontario) made natural gas the less expensive energy source. (The report did not calculate the cost of operation in the province of Quebec, which has lower electricity rates, nor did it show the impact of time of use electricity rates.) The study found that in Ottawa a CC-ASHP cost 124% more to operate than the natural gas system. However, in areas where natural gas is not available to homeowners, 59% energy cost savings can be realized relative to heating with fuel oil. The report noted that about 1 million residences in Canada (8%) are still heated with fuel oil. The report shows 54% energy cost savings for CC-ASHPs relative to electric baseboard resistance heating. Based on these savings, the report showed a five-year payback for converting from either fuel oil or electric baseboard resistance heating to a CC-ASHP. (The report did not specify whether that calculation considered the possible need for an electrical service upgrade in the case of converting from fuel oil. Presumably no electrical service upgrade would be needed if converting from electric resistance heat.) The report did note greater fluctuations in room temperature with the heat pump due to its defrost cycles.
https://en.wikipedia.org/wiki?curid=6453717
227,040
The climate system receives nearly all of its energy from the sun and radiates energy to outer space. The balance of incoming and outgoing energy and the passage of the energy through the climate system is Earth's energy budget. When the incoming energy is greater than the outgoing energy, Earth's energy budget is positive and the climate system is warming. If more energy goes out, the energy budget is negative and Earth experiences cooling.
https://en.wikipedia.org/wiki?curid=47512
238,327
Dialog in "Datalore" establishes some of Data's backstory. It is stated that he was deactivated in 2336 on Omicron Theta before an attack by the Crystalline Entity, a spaceborne creature which converts life forms to energy for sustenance. He was found and reactivated by Starfleet personnel two years later. Data went to Starfleet Academy from 2341 to 2345 (he describes himself to Commander William Riker in the series premiere "Encounter at Farpoint" as "Class of '78", with "honors in probability mechanics and exobiology", although canonically this may refer only to the stardate) and then served in Starfleet aboard the USS "Trieste". He was assigned to the "Enterprise" under Captain Jean-Luc Picard in 2364. In "Datalore", Data discovers his amoral brother, Lore, and learns that Dr. Noonien Soong created Data after Lore. Lore fails in an attempt to betray the "Enterprise" to the Crystalline Entity, and Wesley Crusher beams Data's brother into space at the episode's conclusion. Lore claimed to Data that Data was "less perfect", which was a lie; as Soong later told Data in "Brothers", the only real difference between the two of them "was some programming" (Lore's positronic net differed from Data's: it had a Type-"L" phase discriminator compared to Data's Type-"R").
https://en.wikipedia.org/wiki?curid=47676
252,396
A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features from a collection of information, while descriptive statistics (in the mass noun sense) is the process of using and analysing those statistics. Descriptive statistics are distinguished from inferential statistics (or inductive statistics) by their aim to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent. This generally means that descriptive statistics, unlike inferential statistics, are not developed on the basis of probability theory and are frequently nonparametric statistics. Even when a data analysis draws its main conclusions using inferential statistics, descriptive statistics are generally also presented. For example, in papers reporting on human subjects, typically a table is included giving the overall sample size, sample sizes in important subgroups (e.g., for each treatment or exposure group), and demographic or clinical characteristics such as the average age, the proportion of subjects of each sex, the proportion of subjects with related co-morbidities, etc.
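A minimal Python sketch of the kind of summary table described, using hypothetical subject records:

```python
import numpy as np

# Hypothetical subject records: (treatment group, age, sex)
subjects = [("treated", 54, "F"), ("treated", 61, "M"), ("control", 58, "F"),
            ("control", 49, "M"), ("treated", 47, "F"), ("control", 66, "F")]

for group in ("treated", "control"):
    rows = [s for s in subjects if s[0] == group]
    ages = np.array([s[1] for s in rows])
    frac_female = sum(s[2] == "F" for s in rows) / len(rows)
    print(f"{group}: n={len(rows)}, mean age={ages.mean():.1f}, female={100 * frac_female:.0f}%")
```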
https://en.wikipedia.org/wiki?curid=8187
260,517
CDMA is a spread-spectrum multiple-access technique. A spread-spectrum technique spreads the bandwidth of the data uniformly for the same transmitted power. A spreading code is a pseudo-random code that has a narrow ambiguity function, unlike other narrow pulse codes. In CDMA a locally generated code runs at a much higher rate than the data to be transmitted. Data for transmission is combined by bitwise XOR (exclusive OR) with the faster code. The figure shows how a spread-spectrum signal is generated. The data signal with pulse duration of T_b (symbol period) is XORed with the code signal with pulse duration of T_c (chip period). (Note: bandwidth is proportional to 1/T, where T = bit time.) Therefore, the bandwidth of the data signal is 1/T_b and the bandwidth of the spread-spectrum signal is 1/T_c. Since T_c is much smaller than T_b, the bandwidth of the spread-spectrum signal is much larger than the bandwidth of the original signal. The ratio T_b/T_c is called the spreading factor or processing gain and determines to a certain extent the upper limit of the total number of users supported simultaneously by a base station.
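A minimal Python sketch of the XOR spreading and despreading described above; the code sequence and spreading factor are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

data_bits = np.array([1, 0, 1])                 # one bit per symbol period Tb
chips_per_bit = 8                               # spreading factor Tb / Tc
code = rng.integers(0, 2, size=chips_per_bit)   # locally generated pseudo-random code

# Spread: repeat each data bit over its chip periods and XOR with the faster code.
spread = np.bitwise_xor(np.repeat(data_bits, chips_per_bit),
                        np.tile(code, len(data_bits)))

# Despread: XOR with the same code again, then decide each bit by majority vote.
despread = np.bitwise_xor(spread, np.tile(code, len(data_bits)))
recovered = despread.reshape(-1, chips_per_bit).mean(axis=1) > 0.5

print(recovered.astype(int))                    # [1 0 1]: original data recovered
```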
https://en.wikipedia.org/wiki?curid=7143
261,068
At any temperature greater than absolute zero, microscopic potential energy and kinetic energy are constantly converted into one another, but the sum remains constant in an isolated system (cf. table). In the classical picture of thermodynamics, kinetic energy vanishes at zero temperature and the internal energy is purely potential energy. However, quantum mechanics has demonstrated that even at zero temperature particles maintain a residual energy of motion, the zero point energy. A system at absolute zero is merely in its quantum-mechanical ground state, the lowest energy state available. At absolute zero a system of given composition has attained its minimum attainable entropy.
https://en.wikipedia.org/wiki?curid=340757
261,069
The microscopic kinetic energy portion of the internal energy gives rise to the temperature of the system. Statistical mechanics relates the pseudo-random kinetic energy of individual particles to the mean kinetic energy of the entire ensemble of particles comprising a system. Furthermore, it relates the mean microscopic kinetic energy to the macroscopically observed empirical property that is expressed as temperature of the system. While temperature is an intensive measure, this energy expresses the concept as an extensive property of the system, often referred to as the "thermal energy". The scaling property between temperature and thermal energy is the entropy change of the system.
https://en.wikipedia.org/wiki?curid=340757
266,748
Oftentimes, the collection of primary data may be difficult, and the data may be deemed proprietary or confidential by the owner. An alternative to primary data is secondary data, which is data that comes from LCA databases, literature sources, and other past studies. With secondary sources, one often finds data that is similar to a given process but not exact (e.g., data from a different country, a slightly different process, a similar but different machine, etc.). As such, it is important to explicitly document the differences in such data. However, secondary data is not always inferior to primary data; for example, a study may reference another work's data in which the author used very accurate primary data. Along with primary data, secondary data should document the source, reliability, and temporal, geographical, and technological representativeness.
https://en.wikipedia.org/wiki?curid=604896
268,899
The heat pump is a refrigeration-based appliance which reverses refrigerant flow between the indoor and outdoor coils. This is done by energizing a reversing valve (also known as a "4-way" or "change-over" valve). During cooling, the indoor coil is an evaporator removing heat from the indoor air and transferring it to the outdoor coil where it is rejected to the outdoor air. During heating, the outdoor coil becomes the evaporator and heat is removed from the outdoor air and transferred to the indoor air through the indoor coil. The reversing valve, controlled by the thermostat, causes the change-over from heat to cool. Residential heat pump thermostats generally have an "O" terminal to energize the reversing valve in cooling. Some residential and many commercial heat pump thermostats use a "B" terminal to energize the reversing valve in heating. The heating capacity of a heat pump decreases as outdoor temperatures fall. At some outdoor temperature (called the balance point) the ability of the refrigeration system to transfer heat into the building falls below the heating needs of the building. A typical heat pump is fitted with electric heating elements to supplement the refrigeration heat when the outdoor temperature is below this balance point. Operation of the supplemental heat is controlled by a second stage heating contact in the heat pump thermostat. During heating, the outdoor coil is operating at a temperature below the outdoor temperature and condensation on the coil may take place. This condensation may then freeze onto the coil, reducing its heat transfer capacity. Heat pumps therefore have a provision for occasional defrost of the outdoor coil. This is done by reversing the cycle to the cooling mode, shutting off the outdoor fan, and energizing the electric heating elements. The electric heat in defrost mode is needed to keep the system from blowing cold air inside the building. The elements are then used in the "reheat" function. Although the thermostat may indicate the system is in defrost and electric heat is activated, the defrost function is not controlled by the thermostat. Since the heat pump has electric heat elements for supplemental heat and reheat, the heat pump thermostat provides for use of the electric heat elements should the refrigeration system fail. This function is normally activated by an "E" terminal on the thermostat. When in emergency heat, the thermostat makes no attempt to operate the compressor or outdoor fan.
https://en.wikipedia.org/wiki?curid=265822
271,503
The black hole information paradox is a puzzle that appears when the predictions of quantum mechanics and general relativity are combined. The theory of general relativity predicts the existence of black holes that are regions of spacetime from which nothing — not even light — can escape. In the 1970s, Stephen Hawking applied the rules of quantum mechanics to such systems and found that an isolated black hole would emit a form of radiation called Hawking radiation. Hawking also argued that the detailed form of the radiation would be independent of the initial state of the black hole and would depend only on its mass, electric charge and angular momentum. The information paradox appears when one considers a process in which a black hole is formed through a physical process and then evaporates away entirely through Hawking radiation. Hawking's calculation suggests that the final state of radiation would retain information only about the total mass, electric charge and angular momentum of the initial state. Since many different states can have the same mass, charge and angular momentum this suggests that many initial physical states could evolve into the same final state. Therefore, information about the details of the initial state would be permanently lost. However, this violates a core precept of both classical and quantum physics—that, "in principle," the state of a system at one point in time should determine its value at any other time. Specifically, in quantum mechanics the state of the system is encoded by its wave function. The evolution of the wave function is determined by a unitary operator, and unitarity implies that the wave function at any instant of time can be used to determine the wave function either in the past or the future.
https://en.wikipedia.org/wiki?curid=851008
274,437
Two independent dual-CPU computers, A and B, form the controller; giving redundancy to the system. The failure of controller system A automatically leads to a switch-over to controller system B without impeding operational capabilities; the subsequent failure of controller system B would provide a graceful shutdown of the engine. Within each system (A and B), the two M68000s operate in lock-step, thereby enabling each system to detect failures by comparing the signal levels on the buses of the two M68000 processors within that system. If differences are encountered between the two buses, then an interrupt is generated and control turned over to the other system. Because of subtle differences between M68000s from Motorola and the second source manufacturer TRW, each system uses M68000s from the same manufacturer (for instance system A would have two Motorola CPUs while system B would have two CPUs manufactured by TRW). Memory for block I controllers was of the plated-wire type, which functions in a manner similar to magnetic core memory and retains data even after power is turned off. Block II controllers used conventional CMOS static RAM.
https://en.wikipedia.org/wiki?curid=680000
275,147
DNA methylation marks – genomic regions with specific methylation patterns in a specific biological state such as tissue, cell type, individual – are regarded as possible functional regions involved in gene transcriptional regulation. Although various human cell types may have the same genome, these cells have different methylomes. The systematic identification and characterization of methylation marks across cell types are crucial to understanding the complex regulatory network for cell fate determination. Hongbo Liu et al. proposed an entropy-based framework termed SMART to integrate the whole genome bisulfite sequencing methylomes across 42 human tissues/cells and identified 757,887 genome segments. Nearly 75% of the segments showed uniform methylation across all cell types. From the remaining 25% of the segments, they identified cell type-specific hypo/hypermethylation marks that were specifically hypo/hypermethylated in a minority of cell types using a statistical approach and presented an atlas of the human methylation marks. Further analysis revealed that the cell type-specific hypomethylation marks were enriched through H3K27ac and transcription factor binding sites in a cell type-specific manner. In particular, they observed that the cell type-specific hypomethylation marks are associated with the cell type-specific super-enhancers that drive the expression of cell identity genes. This framework provides a complementary, functional annotation of the human genome and helps to elucidate the critical features and functions of cell type-specific hypomethylation.
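As a rough illustration of the entropy idea (not the actual SMART implementation), a genome segment methylated uniformly across cell types yields low entropy, while a segment hypomethylated in only a few cell types yields higher entropy; the numbers below are invented:

```python
import numpy as np

def methylation_specificity(levels, bins=10):
    """Shannon entropy of methylation levels across cell types (illustrative only).

    Uniform methylation across tissues concentrates in one bin, giving low entropy;
    a segment hypomethylated in a few cell types spreads over bins, giving higher entropy.
    """
    hist, _ = np.histogram(levels, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

uniform_segment = np.full(42, 0.9)                              # methylated in all 42 tissues/cells
specific_segment = np.r_[np.full(39, 0.9), [0.05, 0.10, 0.08]]  # hypomethylated in 3 cell types

print(methylation_specificity(uniform_segment))    # 0.0
print(methylation_specificity(specific_segment))   # > 0: candidate cell type-specific mark
```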
https://en.wikipedia.org/wiki?curid=1137227
277,328
All biomass goes through at least some of these steps: it needs to be grown, collected, dried, fermented, distilled, and burned. All of these steps require resources and an infrastructure. The total amount of energy input into the process compared to the energy released by burning the resulting ethanol fuel is known as the energy balance (or "energy returned on energy invested"). Figures compiled in a 2007 report by "National Geographic" point to modest results for corn ethanol produced in the US: one unit of fossil-fuel energy is required to create 1.3 energy units from the resulting ethanol. The energy balance for sugarcane ethanol produced in Brazil is more favorable, with one unit of fossil-fuel energy required to create 8 from the ethanol. Energy balance estimates are not easily produced, thus numerous such reports have been generated that are contradictory. For instance, a separate survey reports that production of ethanol from sugarcane, which requires a tropical climate to grow productively, returns from 8 to 9 units of energy for each unit expended, as compared to corn, which only returns about 1.34 units of fuel energy for each unit of energy expended. A 2006 University of California Berkeley study, after analyzing six separate studies, concluded that producing ethanol from corn uses much less petroleum than producing gasoline.
https://en.wikipedia.org/wiki?curid=608623
277,357
While the discrete unit sample function and the Kronecker delta function use the same letter, they differ in the following ways. For the discrete unit sample function, it is more conventional to place a single integer index in square braces; in contrast the Kronecker delta can have any number of indexes. Further, the purpose of the discrete unit sample function is different from the Kronecker delta function. In DSP, the discrete unit sample function is typically used as an input function to a discrete system for discovering the system function of the system which will be produced as an output of the system. In contrast, the typical purpose of the Kronecker delta function is for filtering terms from an Einstein summation convention.
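A minimal NumPy sketch of the DSP usage described: feeding the discrete unit sample into a system returns the system's impulse response (the example system, a 3-tap moving average, is an arbitrary choice):

```python
import numpy as np

def system(x):
    """An example discrete system: a 3-tap moving-average filter."""
    h = np.array([1 / 3, 1 / 3, 1 / 3])
    return np.convolve(x, h)

n = np.arange(8)
delta = (n == 0).astype(float)     # unit sample: delta[0] = 1, zero elsewhere

impulse_response = system(delta)   # the output reveals the system function (the filter taps)
print(impulse_response[:3])        # [0.333 0.333 0.333]
```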
https://en.wikipedia.org/wiki?curid=182890
282,475
In the 1960s data modeling gained more significance with the initiation of the management information system (MIS) concept. According to Leondes (2002), "during that time, the information system provided the data and information for management purposes. The first generation database system, called Integrated Data Store (IDS), was designed by Charles Bachman at General Electric. Two famous database models, the network data model and the hierarchical data model, were proposed during this period of time". Towards the end of the 1960s, Edgar F. Codd worked out his theories of data arrangement, and proposed the relational model for database management based on first-order predicate logic.
https://en.wikipedia.org/wiki?curid=82871
282,495
Another kind of data model describes how to organize data using a database management system or other data management technology. It describes, for example, relational tables and columns or object-oriented classes and attributes. Such a data model is sometimes referred to as the "physical data model", but in the original ANSI three schema architecture, it is called "logical". In that architecture, the physical model describes the storage media (cylinders, tracks, and tablespaces). Ideally, this model is derived from the more conceptual data model described above. It may differ, however, to account for constraints like processing capacity and usage patterns.
https://en.wikipedia.org/wiki?curid=82871
283,324
Since the oil embargoes and price spikes of the 1970s, energy efficiency and conservation have been fundamental tenets of U.S. energy policy. The scope of energy conservation and efficiency measures has been broadened throughout time by U.S. energy policies and programs, including federal and state legislation and regulatory actions, to include all economic sectors and all geographical areas of the nation. Measurable energy conservation and efficiency gains in the 1980s led to the 1987 Energy Security Report to the President (DOE, 1987), which reported that "the United States uses about 29 quads less energy in a year today than it would have if our economic growth since 1972 had been accompanied by the less-efficient trends in energy use we were following at that time." The DOE Strategy and the legislation included new strategies for strengthening conservation and efficiency in buildings, industry, and electric power, such as integrated resource planning for electric and natural gas utilities and efficiency and labeling standards for 13 residential appliances and equipment categories. Lack of a national consensus on how to proceed interfered with developing a consistent and comprehensive approach. Nevertheless, the Energy Policy Act of 2005 (EPAct05; 109th U.S. Congress, 2005) contained many new energy conservation and efficiency provisions in the transportation, buildings, and electric power sectors.
https://en.wikipedia.org/wiki?curid=478933
300,847
In the same year, building on de Broglie's hypothesis, Erwin Schrödinger developed the equation that describes the behavior of a quantum-mechanical wave. The mathematical model, called the Schrödinger equation after its creator, is central to quantum mechanics, defines the permitted stationary states of a quantum system, and describes how the quantum state of a physical system changes in time. The wave itself is described by a mathematical function known as a "wave function". Schrödinger said that the wave function provides the "means for predicting the probability of measurement results".
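In its general time-dependent form the equation reads

i\hbar \frac{\partial}{\partial t}\,\Psi(\mathbf{r},t) = \hat{H}\,\Psi(\mathbf{r},t)

where \Psi is the wave function, \hat{H} is the Hamiltonian operator of the system, and \hbar is the reduced Planck constant.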
https://en.wikipedia.org/wiki?curid=2796131
301,367
When running a Gibbs free energy equilibrium program, the iterative process from the originally specified molecular composition to the final calculated equilibrium composition is essentially random and not time accurate. With a non-equilibrium program, the computation process is time accurate and follows a solution path dictated by chemical and reaction rate formulas. The five species model has 17 chemical formulas (34 when counting reverse formulas). The Lighthill-Freeman model is based upon a single ordinary differential equation and one algebraic equation. The five species model is based upon 5 ordinary differential equations and 17 algebraic equations. Because the 5 ordinary differential equations are tightly coupled, the system is numerically "stiff" and difficult to solve. The five species model is only usable for entry from low Earth orbit where entry velocity is approximately . For lunar return entry of 11 km/s, the shock layer contains a significant amount of ionized nitrogen and oxygen. The five-species model is no longer accurate and a twelve-species model must be used instead.
https://en.wikipedia.org/wiki?curid=45294
306,496
In crystallography, there is the concept of a unit cell which comprises the space between adjacent lattice points as well as any atoms in that space. A unit cell is defined as a space that, when translated through a subset of all vectors described by formula_2, fills the lattice space without overlapping or voids. (I.e., a lattice space is a multiple of a unit cell.) There are mainly two types of unit cells: primitive unit cells and conventional unit cells. A primitive cell is the very smallest component of a lattice (or crystal) which, when stacked together with lattice translation operations, reproduces the whole lattice (or crystal). Note that the translations must be lattice translation operations that cause the lattice to appear unchanged after the translation. If arbitrary translations were allowed, one could make a primitive cell half the size of the true one, and translate twice as often, as an example. Another way of defining the size of a primitive cell that avoids invoking lattice translation operations, is to say that the primitive cell is the smallest possible component of a lattice (or crystal) that can be repeated to reproduce the whole lattice (or crystal), "and" that contains exactly one lattice point. In either definition, the primitive cell is characterized by its small size. There are clearly many choices of cell that can reproduce the whole lattice when stacked (two lattice halves, for instance), and the minimum size requirement distinguishes the primitive cell from all these other valid repeating units. If the lattice or crystal is 2-dimensional, the primitive cell has a minimum area; likewise in 3 dimensions the primitive cell has a minimum volume. Despite this rigid minimum-size requirement, there is not one unique choice of primitive unit cell. In fact, all cells whose borders are primitive translation vectors will be primitive unit cells. The fact that there is not a unique choice of primitive translation vectors for a given lattice leads to the multiplicity of possible primitive unit cells. Conventional unit cells, on the other hand, are not necessarily minimum-size cells. They are chosen purely for convenience and are often used for illustration purposes. They are loosely defined.
https://en.wikipedia.org/wiki?curid=661808
311,807
Among dividing cells, there are multiple levels of cell potency, the cell's ability to differentiate into other cell types. A greater potency indicates a larger number of cell types that can be derived. A cell that can differentiate into all cell types, including the placental tissue, is known as "totipotent". In mammals, only the zygote and subsequent blastomeres are totipotent, while in plants, many differentiated cells can become totipotent with simple laboratory techniques. A cell that can differentiate into all cell types of the adult organism is known as "pluripotent". Such cells are called meristematic cells in higher plants and embryonic stem cells in animals, though some groups report the presence of adult pluripotent cells. Virally induced expression of four transcription factors Oct4, Sox2, c-Myc, and Klf4 (Yamanaka factors) is sufficient to create pluripotent (iPS) cells from adult fibroblasts. A multipotent cell is one that can differentiate into multiple different, but closely related cell types. Oligopotent cells are more restricted than multipotent, but can still differentiate into a few closely related cell types. Finally, unipotent cells can differentiate into only one cell type, but are capable of self-renewal. In cytopathology, the level of cellular differentiation is used as a measure of cancer progression. "Grade" is a marker of how differentiated a cell in a tumor is.
https://en.wikipedia.org/wiki?curid=152611
311,957
Compressed-air energy storage (CAES) plants can bridge the gap between production volatility and load. CAES storage addresses the energy needs of consumers by effectively providing readily available energy to meet demand. Renewable energy sources like wind and solar energy vary. So at times when they provide little power, they need to be supplemented with other forms of energy to meet energy demand. Compressed-air energy storage plants can take in the surplus energy output of renewable energy sources during times of energy over-production. This stored energy can be used at a later time when demand for electricity increases or energy resource availability decreases.
https://en.wikipedia.org/wiki?curid=24130
330,574
In physics and chemistry, the law of conservation of energy states that the total energy of an isolated system remains constant; it is said to be "conserved" over time. This law, first proposed and tested by Émilie du Châtelet, means that energy can neither be created nor destroyed; rather, it can only be transformed or transferred from one form to another. For instance, chemical energy is converted to kinetic energy when a stick of dynamite explodes. If one adds up all forms of energy that were released in the explosion, such as the kinetic energy and potential energy of the pieces, as well as heat and sound, one will get the exact decrease of chemical energy in the combustion of the dynamite.
https://en.wikipedia.org/wiki?curid=67088
330,624
However, when the non-unitary Born rule is applied, the measured energy can be below or above the expectation value if the system was not in an energy eigenstate. (For macroscopic systems, this effect is usually too small to measure.) The disposition of this energy gap is not well understood; some physicists believe that the energy is transferred to or from the macroscopic environment in the course of the measurement process, while others believe that the observable energy is only conserved "on average". No experiment has been confirmed as definitive evidence of violations of the conservation of energy principle in quantum mechanics, but this does not rule out that newer experiments, such as those that have been proposed, may find such evidence.
https://en.wikipedia.org/wiki?curid=67088
352,739
The Bohr model is a relatively primitive model of the hydrogen atom, compared to the "valence shell atom" model. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics or energy level diagrams before moving on to the more accurate, but more complex, valence shell atom. A related quantum model was originally proposed by Arthur Erich Haas in 1910 but was rejected until the 1911 Solvay Congress where it was thoroughly discussed. The quantum theory of the period between Planck's discovery of the quantum (1900) and the advent of a mature quantum mechanics (1925) is often referred to as the old quantum theory.
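One of the "correct results for selected systems" referred to above is the energy-level formula for hydrogen-like atoms, which the Bohr model reproduces:

E_n = -\frac{Z^2 R_{\mathrm{E}}}{n^2} \approx -\frac{13.6\ \mathrm{eV}\; Z^2}{n^2}, \qquad n = 1, 2, 3, \ldots

where Z is the nuclear charge number and R_E is the Rydberg unit of energy; for hydrogen (Z = 1) this gives a ground-state energy of about -13.6 eV, in agreement with the observed spectral series.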
https://en.wikipedia.org/wiki?curid=4831
353,340
The second signal comes from co-stimulation, in which surface receptors on the APC are induced by a relatively small number of stimuli, usually products of pathogens, but sometimes breakdown products of cells, such as necrotic-bodies or heat shock proteins. The only co-stimulatory receptor expressed constitutively by naive T cells is CD28, so co-stimulation for these cells comes from the CD80 and CD86 proteins, which together constitute the B7 proteins (B7.1 and B7.2, respectively) on the APC. Other receptors are expressed upon activation of the T cell, such as OX40 and ICOS, but these largely depend upon CD28 for their expression. The second signal licenses the T cell to respond to an antigen. Without it, the T cell becomes anergic, and it becomes more difficult for it to activate in future. This mechanism prevents inappropriate responses to self, as self-peptides will not usually be presented with suitable co-stimulation. Once a T cell has been appropriately activated (i.e. has received signal one and signal two) it alters its cell surface expression of a variety of proteins. Markers of T cell activation include CD69, CD71 and CD25 (also a marker for Treg cells), and HLA-DR (a marker of human T cell activation). CTLA-4 expression is also up-regulated on activated T cells, which in turn outcompetes CD28 for binding to the B7 proteins. This is a checkpoint mechanism to prevent overactivation of the T cell. Activated T cells also change their cell surface glycosylation profile.
https://en.wikipedia.org/wiki?curid=170417
378,511
An electric "charge," such as a single proton in space, has a magnitude defined in coulombs. Such a charge has an electric field surrounding it. In pictorial form, the electric field from a positive point charge can be visualized as a dot radiating electric field lines (sometimes also called "lines of force"). Conceptually, electric flux can be thought of as "the number of field lines" passing through a given area. Mathematically, electric flux is the integral of the normal component of the electric field over a given area. Hence, units of electric flux are, in the MKS system, newtons per coulomb times meters squared, or N m/C. (Electric flux density is the electric flux per unit area, and is a measure of strength of the normal component of the electric field averaged over the area of integration. Its units are N/C, the same as the electric field in MKS units.)
https://en.wikipedia.org/wiki?curid=43590
380,409
Usually, the easiest part of model evaluation is checking whether a model fits experimental measurements or other empirical data. In models with parameters, a common approach to test this fit is to split the data into two disjoint subsets: training data and verification data. The training data are used to estimate the model parameters. An accurate model will closely match the verification data even though these data were not used to set the model's parameters. This practice is referred to as cross-validation in statistics.
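A minimal sketch of this train/verification split in Python, using scikit-learn (assumed available; the synthetic data and the linear model are only illustrative stand-ins for any parametrized model):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=200)

# Disjoint subsets: parameters are estimated on the training data only.
X_train, X_verify, y_train, y_verify = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_train, y_train)

# An accurate model should also match the held-out verification data.
print("training R^2:    ", model.score(X_train, y_train))
print("verification R^2:", model.score(X_verify, y_verify))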
https://en.wikipedia.org/wiki?curid=20590
399,977
Bayesian statistical methods use Bayes' theorem to compute and update probabilities after obtaining new data. Bayes' theorem describes the conditional probability of an event based on data as well as prior information or beliefs about the event or conditions related to the event. For example, in Bayesian inference, Bayes' theorem can be used to estimate the parameters of a probability distribution or statistical model. Since Bayesian statistics treats probability as a degree of belief, Bayes' theorem can directly assign a probability distribution that quantifies the belief to the parameter or set of parameters.
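In symbols, for a parameter (or hypothesis) \theta and observed data D, Bayes' theorem reads

P(\theta \mid D) = \frac{P(D \mid \theta)\, P(\theta)}{P(D)},

so the posterior distribution P(\theta \mid D) updates the prior belief P(\theta) in proportion to the likelihood of the observed data.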
https://en.wikipedia.org/wiki?curid=404412
404,895
Supervised learning (SL) is a machine learning paradigm for problems where the available data consists of labelled examples, meaning that each data point contains features (covariates) and an associated label. The goal of supervised learning algorithms is learning a function that maps feature vectors (inputs) to labels (output), based on example input-output pairs. It infers a function from labelled "training data" consisting of a set of "training examples". In supervised learning, each example is a "pair" consisting of an input object (typically a vector) and a desired output value (also called the "supervisory signal"). A supervised learning algorithm analyzes the training data and produces an inferred function, which can be used for mapping new examples. An optimal scenario will allow the algorithm to correctly determine the class labels for unseen instances. This requires the learning algorithm to generalize from the training data to unseen situations in a "reasonable" way (see inductive bias). This statistical quality of an algorithm is measured through the so-called generalization error.
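A minimal sketch in Python of the input-output pairing described above, using a nearest-neighbour classifier from scikit-learn (assumed available; the feature vectors and labels are hypothetical toy data):

from sklearn.neighbors import KNeighborsClassifier

# Toy labelled examples: feature vectors (inputs) and class labels (outputs).
X_train = [[0.0, 0.1], [0.2, 0.0], [0.9, 1.0], [1.0, 0.8]]
y_train = ["low", "low", "high", "high"]

# The learning algorithm analyzes the training pairs and infers a function...
clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# ...which maps new, unseen feature vectors to labels (generalization).
print(clf.predict([[0.1, 0.2], [0.95, 0.9]]))  # ['low' 'high']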
https://en.wikipedia.org/wiki?curid=20926
411,078
Various assumptions and approximations adapted to the system under study lead to expressions for the free energy. Correlation functions are used to calculate the free-energy functional as an expansion on a known reference system. If the non-uniform fluid can be described by a density distribution that is not far from uniform density a functional Taylor expansion of the free energy in density increments leads to an expression for the thermodynamic potential using known correlation functions of the uniform system. In the square gradient approximation a strong non-uniform density contributes a term in the gradient of the density. In a perturbation theory approach the direct correlation function is given by the sum of the direct correlation in a known system such as hard spheres and a term in a weak interaction such as the long range London dispersion force. In a local density approximation the local excess free energy is calculated from the effective interactions with particles distributed at uniform density of the fluid in a cell surrounding a particle. Other improvements have been suggested such as the weighted density approximation for a direct correlation function of a uniform system which distributes the neighboring particles with an effective weighted density calculated from a self-consistent condition on the direct correlation function.
https://en.wikipedia.org/wiki?curid=209874
443,385
Because exact algorithms can solve only limited problem sizes, approximate or heuristic algorithms are often used instead. The result of such an algorithm can be assessed by C / C* ≤ ε, where C is the total travelling distance generated by the approximate algorithm, C* is the optimal travelling distance, and ε is the upper limit for the ratio of the approximate solution's total travelling distance to that of the optimal solution in the worst case. The value of ε is greater than 1.0; the closer it is to 1.0, the better the algorithm. These algorithms include: the interpolation algorithm, the nearest neighbour algorithm, the Clark & Wright algorithm, the double spanning tree algorithm, the Christofides algorithm, hybrid algorithms, and probabilistic algorithms (such as simulated annealing).
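A minimal Python sketch of the nearest neighbour heuristic named above, the simplest of these approximate methods (the city coordinates and start city are hypothetical):

import math

def nearest_neighbour_tour(points, start=0):
    """Greedy tour: repeatedly visit the closest unvisited point."""
    unvisited = set(range(len(points)))
    unvisited.remove(start)
    tour, current = [start], start
    while unvisited:
        current = min(unvisited, key=lambda j: math.dist(points[current], points[j]))
        unvisited.remove(current)
        tour.append(current)
    return tour

cities = [(0, 0), (2, 0), (2, 1), (0, 1), (1, 3)]
print(nearest_neighbour_tour(cities))  # [0, 3, 2, 1, 4]

The resulting tour is generally not optimal, but its length C can be compared against a known or bounded C* to estimate the ratio ε described above.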
https://en.wikipedia.org/wiki?curid=45036001
461,236
The flow of the cytoplasm in the cell of "Chara corallina" is belied by the "barber pole" movement of the chloroplasts. Two sections of chloroplast flow are observed with the aid of a microscope. These sections are arranged helically along the longitudinal axis of the cell. In one section, the chloroplasts move upward along one band of the helix, while in the other, the chloroplasts move downward. The areas between these sections are known as indifferent zones. Chloroplasts are never seen to cross these zones, and as a result it was thought that cytoplasmic and vacuolar fluid flow are similarly restricted, but this is not true. First, Kamiya and Kuroda experimentally determined that cytoplasmic flow rate varies radially within the cell, a phenomenon not clearly depicted by the chloroplast movement. Second, Raymond Goldstein and others developed a mathematical fluid model for the cytoplasmic flow which not only predicts the behavior noted by Kamiya and Kuroda, but predicts the trajectories of cytoplasmic flow through indifferent zones. The Goldstein model ignores the vacuolar membrane, and simply assumes that shear forces are directly translated to the vacuolar fluid from the cytoplasm. The Goldstein model predicts there is net flow toward one of the indifferent zones from the other. This actually is suggested by the flow of the chloroplasts. At one indifferent zone, the section with the chloroplasts moving at a downward angle will be above the chloroplasts moving at an upward angle. This section is known as the minus indifferent zone (IZ-). Here, if each direction is broken into components in the theta (horizontal) and z (vertical) directions, these components oppose each other in the z direction and diverge in the theta direction. The other indifferent zone has the upwardly angled chloroplast movement on top and is known as the positive indifferent zone (IZ+). Thus, while the z directional components oppose each other again, the theta components now converge. The net effect of the forces is that cytoplasmic/vacuolar flow moves from the minus indifferent zone to the positive indifferent zone. As stated, these directional components are suggested by chloroplast movement, but are not obvious. Further, the effect of this cytoplasmic/vacuolar flow from one indifferent zone to the other demonstrates that cytoplasmic particles do cross the indifferent zones even if the chloroplasts at the surface do not. Particles, as they rise in the cell, spiral around in a semicircular manner near the minus indifferent zone, cross one indifferent zone, and end up near a positive indifferent zone. Further experiments on Characean cells support the Goldstein model for vacuolar fluid flow. However, due to the vacuolar membrane (which was ignored in the Goldstein model), the cytoplasmic flow follows a different flow pattern. Further, recent experiments have shown that the data collected by Kamiya and Kuroda which suggested a flat velocity profile in the cytoplasm are not fully accurate. Kikuchi worked with "Nitella flexillis" cells, and found an exponential relationship between fluid flow velocity and distance from the cell membrane. Although this work is not on Characean cells, the flows between "Nitella flexillis" and "Chara corallina" are visually and structurally similar.
https://en.wikipedia.org/wiki?curid=656613
468,561
While geothermal energy has had many uses in Iceland throughout history, its use there for electricity generation did not come until relatively recently. Iceland’s power was largely derived from fossil fuels until the 1970s, when the national government looked to address energy price inequities across the country. A report commissioned in 1970 by the country’s National Energy Authority, Orkustofnun, recommended increased domestic production of geothermal power and hydroelectricity to stabilize energy prices and reduce the nation’s reliance on external energy resources. In 1973, an international energy crisis began, subjecting Iceland to highly volatile oil prices and an uncertain energy market. The crisis sparked Iceland’s government to ramp up adoption of the domestic power sources identified by the National Energy Authority’s report. The ensuing rapid growth of renewable energy production mostly originated from a geopolitical desire for energy independence and was catalyzed by the urgent economic constraints during the 1970s energy crisis. Since then, in addition to increasing Iceland’s energy independence, it has also resulted in the widespread decarbonization of the country’s electric grid.
https://en.wikipedia.org/wiki?curid=113657
477,751
This definition can be derived from the microcanonical ensemble, which is a system of a constant number of particles, a constant volume and that does not exchange energy with its environment. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on "x". According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.
https://en.wikipedia.org/wiki?curid=5529757
489,545
Based on this, in 1978, Jorma Rissanen published an MDL learning algorithm using the statistical notion of information rather than algorithmic information. Over the past 40 years this has developed into a rich theory of statistical and machine learning procedures with connections to Bayesian model selection and averaging, penalization methods such as Lasso and Ridge, and so on - Grünwald and Roos (2020) give an introduction including all modern developments. Rissanen started out with this idea: all statistical learning is about finding regularities in data, and the best hypothesis to describe the regularities in data is also the one that is able to "statistically" compress the data most. Like other statistical methods, it can be used for learning the parameters of a model using some data. Usually though, standard statistical methods assume that the general form of a model is fixed. MDL's main strength is that it can also be used for selecting the general form of a model and its parameters. The quantity of interest (sometimes just a model, sometimes just parameters, sometimes both at the same time) is called a hypothesis. The basic idea is then to consider the (lossless) "two-stage code" that encodes data formula_1 with length formula_2 by first encoding a hypothesis formula_3 in the set of considered hypotheses formula_4 and then coding formula_1 "with the help of" formula_3; in the simplest context this just means "encoding the deviations of the data from the predictions made by formula_3:
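A minimal Python sketch of the two-stage code idea (the code lengths and coin models below are hypothetical; a real MDL application would use properly constructed codes for hypotheses and data):

import math

def mdl_select(data, hypotheses):
    """Pick the hypothesis minimizing L(H) + L(D | H), in bits.

    hypotheses: list of (description_length_bits, prob_model) pairs, where
    prob_model(x) is the probability the hypothesis assigns to outcome x.
    """
    def total_length(hyp):
        l_h, prob = hyp
        l_d_given_h = sum(-math.log2(prob(x)) for x in data)  # Shannon code length
        return l_h + l_d_given_h
    return min(hypotheses, key=total_length)

# Hypothetical example: binary data and two candidate coin models.
data = [1, 1, 0, 1, 1, 1, 0, 1]
fair   = (1.0, lambda x: 0.5)                       # cheap to describe, mediocre fit
biased = (5.0, lambda x: 0.75 if x == 1 else 0.25)  # costlier to describe, better fit
best = mdl_select(data, [fair, biased])
# With this little data the simpler (fair) model gives the shorter total code;
# with more data from a genuinely biased coin, the biased model would win instead.
print("fair" if best is fair else "biased")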
https://en.wikipedia.org/wiki?curid=331325
499,150
A thermodynamic system is not simply a physical system. Rather, in general, infinitely many different alternative physical systems comprise a given thermodynamic system, because in general a physical system has vastly many more microscopic characteristics than are mentioned in a thermodynamic description. A thermodynamic system is a macroscopic object, the microscopic details of which are not explicitly considered in its thermodynamic description. The number of state variables required to specify the thermodynamic state depends on the system, and is not always known in advance of experiment; it is usually found from experimental evidence. The number is always two or more; usually it is not more than some dozen. Though the number of state variables is fixed by experiment, there remains choice of which of them to use for a particular convenient description; a given thermodynamic system may be alternatively identified by several different choices of the set of state variables. The choice is usually made on the basis of the walls and surroundings that are relevant for the thermodynamic processes that are to be considered for the system. For example, if it is intended to consider heat transfer for the system, then a wall of the system should be permeable to heat, and that wall should connect the system to a body, in the surroundings, that has a definite time-invariant temperature.
https://en.wikipedia.org/wiki?curid=2747182
504,873
Spin waves are observed through four experimental methods: inelastic neutron scattering, inelastic light scattering (Brillouin scattering, Raman scattering and inelastic X-ray scattering), inelastic electron scattering (spin-resolved electron energy loss spectroscopy), and spin-wave resonance (ferromagnetic resonance). In the first method the energy loss of a beam of neutrons that excite a magnon is measured, typically as a function of scattering vector (or equivalently momentum transfer), temperature and external magnetic field. Inelastic neutron scattering measurements can determine the dispersion curve for magnons just as they can for phonons. Important inelastic neutron scattering facilities are present at the ISIS neutron source in Oxfordshire, UK, the Institut Laue-Langevin in Grenoble, France, the High Flux Isotope Reactor at Oak Ridge National Laboratory in Tennessee, USA, and at the National Institute of Standards and Technology in Maryland, USA. Brillouin scattering similarly measures the energy loss of photons (usually at a convenient visible wavelength) reflected from or transmitted through a magnetic material. Brillouin spectroscopy is similar to the more widely known Raman scattering, but probes a lower energy and has a superior energy resolution in order to be able to detect the meV energy of magnons. Ferromagnetic (or antiferromagnetic) resonance instead measures the absorption of microwaves, incident on a magnetic material, by spin waves, typically as a function of angle, temperature and applied field. Ferromagnetic resonance is a convenient laboratory method for determining the effect of magnetocrystalline anisotropy on the dispersion of spin waves. One group at the Max Planck Institute of Microstructure Physics in Halle, Germany proved that by using spin polarized electron energy loss spectroscopy (SPEELS), very high energy surface magnons can be excited. This technique allows one to probe the dispersion of magnons in the ultrathin ferromagnetic films. The first experiment was performed for a 5 ML Fe film. With momentum resolution, the magnon dispersion was explored for an 8 ML fcc Co film on Cu(001) and an 8 ML hcp Co on W(110), respectively. The maximum magnon energy at the border of the surface Brillouin zone was 240 meV.
https://en.wikipedia.org/wiki?curid=2629646
509,649
Within the field of developmental biology, one goal is to understand how a particular cell develops into a final cell type, known as fate determination. Within an embryo, several processes play out at the cellular and tissue level to create an organism. These processes include cell proliferation, differentiation, cellular movement and programmed cell death. Each cell in an embryo receives molecular signals from neighboring cells in the form of proteins, RNAs and even surface interactions. Almost all animals undergo a similar sequence of events during very early development, a conserved process known as embryogenesis. During embryogenesis, cells exist in three germ layers, and undergo gastrulation. While embryogenesis has been studied for more than a century, it was only recently (the past 25 years or so) that scientists discovered that a basic set of the same proteins and mRNAs are involved in embryogenesis. Evolutionary conservation is one of the reasons that model systems such as the fly ("Drosophila melanogaster"), the mouse ("Mus musculus"), and other organisms are used as models to study embryogenesis and developmental biology. Studying model organisms provides information relevant to other animals, including humans. From studies of these different model systems, cell fate was discovered to be determined in multiple ways, two of which are the combination of transcription factors a cell contains and cell-cell interactions. Cell fate determination mechanisms have been categorized into three different types: autonomous specification, conditional specification, and syncytial specification. Furthermore, cell fate has been investigated mainly using two types of experiments: cell ablation and transplantation. The results obtained from these experiments helped in identifying the fate of the examined cells.
https://en.wikipedia.org/wiki?curid=8285473
514,494
Focal adhesions are integrin-containing, multi-protein structures that form mechanical links between intracellular actin bundles and the extracellular substrate in many cell types. Focal adhesions are large, dynamic protein complexes through which the cytoskeleton of a cell connects to the ECM. They are limited to clearly defined regions of the cell, at which the plasma membrane approaches to within 15 nm of the ECM substrate. Focal adhesions are in a state of constant flux: proteins associate with and dissociate from them continually as signals are transmitted to other parts of the cell, relating to anything from cell motility to the cell cycle. Focal adhesions can contain over 100 different proteins, which suggests a considerable functional diversity. More than anchoring the cell, they function as signal carriers (sensors), which inform the cell about the condition of the ECM and thus affect its behavior. In sessile cells, focal adhesions are quite stable under normal conditions, while in moving cells their stability is diminished: this is because in motile cells, focal adhesions are being constantly assembled and disassembled as the cell establishes new contacts at the leading edge, and breaks old contacts at the trailing edge of the cell. One example of their important role is in the immune system, in which white blood cells migrate along the connective endothelium following cellular signals to damaged biological tissue.
https://en.wikipedia.org/wiki?curid=2440776
516,823
Synthetic data is generated to meet specific needs or certain conditions that may not be found in the original, real data. This can be useful when designing any type of system because the synthetic data are used as a simulation or as a theoretical value, situation, etc. This allows us to take into account unexpected results and have a basic solution or remedy if the results prove to be unsatisfactory. Synthetic data are often generated to represent the authentic data and allow a baseline to be set. Another benefit of synthetic data is to protect the privacy and confidentiality of authentic data. As stated previously, synthetic data is used in testing and creating many different types of systems; below is a quote from the abstract of an article that describes software that generates synthetic data for testing fraud detection systems, which further explains its use and importance. "This enables us to create realistic behavior profiles for users and attackers. The data is used to train the fraud detection system itself, thus creating the necessary adaptation of the system to a specific environment."
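A minimal Python sketch of the idea (a hypothetical generation scheme, unrelated to the fraud-detection software quoted above; the field names and summary statistics are assumptions): draw synthetic records from distributions fitted to, or assumed for, the real data, so that tests can run without exposing authentic records.

import numpy as np

rng = np.random.default_rng(42)

def synthetic_transactions(n, mean_amount=52.0, std_amount=18.0, fraud_rate=0.02):
    """Generate n synthetic transaction records matching assumed summary statistics."""
    amounts = np.clip(rng.normal(mean_amount, std_amount, n), 0.5, None)
    hours = rng.integers(0, 24, n)         # time of day
    is_fraud = rng.random(n) < fraud_rate  # rare positive labels
    return [{"amount": round(float(a), 2), "hour": int(h), "fraud": bool(f)}
            for a, h, f in zip(amounts, hours, is_fraud)]

print(synthetic_transactions(5))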
https://en.wikipedia.org/wiki?curid=25270778
533,808
The mutual transformation of quantum information between light and matter is the focus of quantum informatics. The interaction between a single photon and a cooled crystal doped with rare-earth ions is investigated. Crystals doped with rare-earth ions have broad application prospects in the field of quantum storage because they provide a unique application system. Li Chengfeng from the quantum information laboratory of the Chinese Academy of Sciences developed a solid-state quantum memory and demonstrated the photon computing function using time and frequency. Based on this research, a large-scale quantum network based on quantum repeaters can be constructed by utilizing the storage and coherence of quantum states in the material system. Researchers have shown, for the first time in rare-earth ion-doped crystals, that by combining the three-dimensional space with two-dimensional time and two-dimensional spectrum, a kind of memory that is different from the general one can be created. It has multimode capacity and can also be used as a high-fidelity quantum converter. Experimental results show that in all these operations, the fidelity of the three-dimensional quantum state carried by the photon can be maintained at around 89%.
https://en.wikipedia.org/wiki?curid=60448457
541,165
The dual-cell voltage clamp technique is a specialized variation of the two electrode voltage clamp, and is only used in the study of gap junction channels. Gap junctions are pores that directly link two cells through which ions and small molecules flow freely. When two cells in which gap junction proteins, typically connexins or innexins, are expressed, either endogenously or via injection of mRNA, a junction channel will form between the cells. Since two cells are present in the system, two sets of electrodes are used. A recording electrode and a current injecting electrode are inserted into each cell, and each cell is clamped individually (each set of electrodes is attached to a separate apparatus, and integration of data is performed by computer). To record junctional conductance, the current is varied in the first cell while the recording electrode in the second cell records any changes in V for the second cell only. (The process can be reversed with the stimulus occurring in the second cell and recording occurring in the first cell.) Since no variation in current is being induced by the electrode in the recorded cell, any change in voltage must be induced by current crossing into the recorded cell, through the gap junction channels, from the cell in which the current was varied.
https://en.wikipedia.org/wiki?curid=603690
561,576
In physics, relativistic quantum mechanics (RQM) is any Poincaré covariant formulation of quantum mechanics (QM). This theory is applicable to massive particles propagating at all velocities up to those comparable to the speed of light "c", and can accommodate massless particles. The theory has application in high energy physics, particle physics and accelerator physics, as well as atomic physics, chemistry and condensed matter physics. "Non-relativistic quantum mechanics" refers to the mathematical formulation of quantum mechanics applied in the context of Galilean relativity, more specifically quantizing the equations of classical mechanics by replacing dynamical variables by operators. "Relativistic quantum mechanics" (RQM) is quantum mechanics applied with special relativity. Although the earlier formulations, like the Schrödinger picture and Heisenberg picture were originally formulated in a non-relativistic background, a few of them (e.g. the Dirac or path-integral formalism) also work with special relativity.
https://en.wikipedia.org/wiki?curid=19389837
569,486
A data structure is an abstract construct that embeds data in a well-defined manner. An efficient data structure allows manipulation of the data in efficient ways. The data manipulation may include data insertion, deletion, updating and retrieval in various modes. A certain data structure type may be very effective in certain operations, and very ineffective in others. A data structure type is selected upon DBMS development to best meet the operations needed for the types of data it contains. The type of data structure selected for a certain task typically also takes into consideration the type of storage it resides in (e.g., speed of access, minimal size of storage chunk accessed, etc.). In some DBMSs database administrators have the flexibility to select among options of data structures to contain user data for performance reasons. Sometimes the data structures have selectable parameters to tune the database performance.
https://en.wikipedia.org/wiki?curid=209503
575,215
All HST science data are permanently archived after passing through the calibration pipeline. NASA policy mandates a one-year proprietary period on all data, which means that only the initial proposal team can access the data for the first year after it has been obtained. Subsequent to that year, the data become available to anyone who wishes to access it. Data sets retrieved from the archive are automatically re-calibrated to ensure that the most up-to-date calibration factors and software are applied. The STScI serves as the archive center for all of NASA's optical/UV space missions. In addition to archiving and storing HST science data, STScI holds data from 13 other missions including the International Ultraviolet Explorer (IUE), the Extreme Ultraviolet Explorer (EUVE), the Far Ultraviolet Spectroscopic Explorer (FUSE), and the Galaxy Evolution Explorer (GALEX). Kepler and JWST science data will be archived and retrieved in similar fashions. The internet serves as the primary user interface to the data archives at STScI (http://archive.stsci.edu). The archive currently holds over 30 terabytes of data. Each day about 11 gigabytes of new data are ingested and about 85 gigabytes of data are distributed to users. The Hubble Legacy Archive (HLA; http://hla.stsci.edu/), currently in development, will act as a more integrated and user-friendly archive. It will provide raw Hubble data as well as higher-level science products (color images, mosaics, etc.).
https://en.wikipedia.org/wiki?curid=177098
581,403
In 1969 scientists D. Wilshaw, O. P. Buneman and H. Longuet-Higgins proposed an alternative, non-holographic model that fulfilled many of the same requirements as Gabor's original holographic model. The Gabor model did not explain how the brain could use Fourier analysis on incoming signals or how it would deal with the low signal-noise ratio in reconstructed memories. Longuet-Higgin's correlograph model built on the idea that any system could perform the same functions as a Fourier holograph if it could correlate pairs of patterns. It uses minute pinholes that do not produce diffraction patterns to create a similar reconstruction as that in Fourier holography. Like a hologram, a discrete correlograph can recognize displaced patterns and store information in a parallel and non-local way so it usually will not be destroyed by localized damage. They then expanded the model beyond the correlograph to an associative net where the points become parallel lines arranged in a grid. Horizontal lines represent axons of input neurons while vertical lines represent output neurons. Each intersection represents a modifiable synapse. Though this cannot recognize displaced patterns, it has a greater potential storage capacity. This was not necessarily meant to show how the brain is organized, but instead to show the possibility of improving on Gabor's original model. One property of the associative net that makes it attractive as a neural model is that good retrieval can be obtained even when some of the storage elements are damaged or when some of the components of the address are incorrect. P. Van Heerden countered this model by demonstrating mathematically that the signal-noise ratio of a hologram could reach 50% of ideal. He also used a model with a 2D neural hologram network for fast searching imposed upon a 3D network for large storage capacity. A key quality of this model was its flexibility to change the orientation and fix distortions of stored information, which is important for our ability to recognize an object as the same entity from different angles and positions, something the correlograph and association network models lack.
https://en.wikipedia.org/wiki?curid=1896271
596,956
Mark H. Ashcraft defines math anxiety as "a feeling of tension, apprehension, or fear that interferes with math performance" (2002, p. 1). It is a phenomenon that is often considered when examining students' problems in mathematics. According to the American Psychological Association, mathematical anxiety is often linked to testing anxiety. This anxiety can cause distress and likely causes a dislike and avoidance of all math-related tasks. The academic study of math anxiety dates back as early as the 1950s, when Mary Fides Gough introduced the term "mathemaphobia" to describe the phobia-like feelings of many towards mathematics. The first math anxiety measurement scale was developed by Richardson and Suinn in 1972. Since this development, several researchers have examined math anxiety in empirical studies. Hembree (1990) conducted a meta-analysis of 151 studies concerning math anxiety. The study determined that math anxiety is related to poor math performance on math achievement tests and to negative attitudes concerning math. Hembree also suggests that math anxiety is directly connected with math avoidance.
https://en.wikipedia.org/wiki?curid=8111444
612,896
Retinal cell fate determination relies on positional cell–cell signaling that activates signal transduction pathways, rather than cell lineage. Cell–cell signal that is released from R8 photoreceptors (already differentiated retinal cells) of each ommatidium is received by neighboring progenitor retinal cells, stimulating their incorporation into developing ommatidia. The undifferentiated retinal cells select their appropriate cell fates based on their position with their differentiated neighbors. The local signal, Growth Factor Spitz, activates the epidermal growth factor receptor (EGFR) signal transduction pathway, and initiates a cascade of events that will result in transcription of genes involved in cell fate determination. This process leads to the induction of cell fates, starting from the R8 photoreceptor neurons and progresses to the sequential recruitment of neighboring undifferentiated cells. The first seven neighboring cells receive R8 signaling to differentiate as photoreceptor neurons, followed by the recruitment of the four non-neuronal cone cells.
https://en.wikipedia.org/wiki?curid=358597
613,082
The separation of static (cold) and dynamic (hot) data to reduce write amplification is not a simple process for the SSD controller. The process requires the SSD controller to separate the LBAs with data which is constantly changing and requiring rewriting (dynamic data) from the LBAs with data which rarely changes and does not require any rewrites (static data). If the data is mixed in the same blocks, as with almost all systems today, any rewrites will require the SSD controller to rewrite both the dynamic data (which caused the rewrite initially) and static data (which did not require any rewrite). Any garbage collection of data that would not have otherwise required moving will increase write amplification. Therefore, separating the data will enable static data to stay at rest and if it never gets rewritten it will have the lowest possible write amplification for that data. The drawback to this process is that somehow the SSD controller must still find a way to wear level the static data because those blocks that never change will not get a chance to be written to their maximum P/E cycles.
https://en.wikipedia.org/wiki?curid=27560356
614,819
The most general causal LTI transfer function can be uniquely factored into a series of an all-pass and a minimum phase system. The system function is then the product of the two parts, and in the time domain the response of the system is the convolution of the two part responses. The difference between a minimum phase and a general transfer function is that a minimum phase system has all of the poles and zeroes of its transfer function in the left half of the s-plane representation (in discrete time, respectively, inside the unit circle of the z-plane). Since inverting a system function leads to poles turning to zeroes and vice versa, and poles to the right of the imaginary axis (in the s-plane) or outside the unit circle (in the z-plane) lead to unstable systems, only the class of minimum phase systems is closed under inversion. Intuitively, the minimum phase part of a general causal system implements its amplitude response with minimum group delay, while its all pass part corrects its phase response alone to correspond with the original system function.
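A small worked example of this factorization (a continuous-time transfer function chosen only for illustration): the stable but non-minimum-phase system

H(s) = \frac{s - 1}{s + 2}

has a zero at s = 1 in the right half-plane. It can be written as

H(s) = \frac{s + 1}{s + 2} \cdot \frac{s - 1}{s + 1},

where the first factor is minimum phase (pole and zero in the left half-plane) and carries the same magnitude response as H, while the second factor is all-pass, with unit magnitude at every frequency s = j\omega, and contributes only additional phase.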
https://en.wikipedia.org/wiki?curid=548131
625,587
Data confidentiality is the property that data contents are not made available or disclosed to unauthorized users. Outsourced data is stored in a cloud and out of the owners' direct control. Only authorized users can access the sensitive data, while others, including CSPs, should not gain any information about the data. Meanwhile, data owners expect to fully utilize cloud data services, e.g., data search, data computation, and data sharing, without leakage of the data contents to CSPs or other adversaries. In short, confidentiality means that data must be kept strictly confidential to the owner of said data.
https://en.wikipedia.org/wiki?curid=25619904
632,046
Historically, the term "data presentation architecture" is attributed to Kelly Lautt: "Data Presentation Architecture (DPA) is a rarely applied skill set critical for the success and value of Business Intelligence. Data presentation architecture weds the science of numbers, data and statistics in discovering valuable information from data and making it usable, relevant and actionable with the arts of data visualization, communications, organizational psychology and change management in order to provide business intelligence solutions with the data scope, delivery timing, format and visualizations that will most effectively support and drive operational, tactical and strategic behaviour toward understood business (or organizational) goals. DPA is neither an IT nor a business skill set but exists as a separate field of expertise. Often confused with data visualization, data presentation architecture is a much broader skill set that includes determining what data on what schedule and in what exact format is to be presented, not just the best way to present data that has already been chosen. Data visualization skills are one element of DPA."
https://en.wikipedia.org/wiki?curid=3461736
632,386
Source energy, in contrast, is the term used in North America for the amount of primary energy consumed in order to provide a facility’s site energy. It is always greater than the site energy, as it includes all site energy and adds to it the energy lost during transmission, delivery, and conversion. While source or primary energy provides a more complete picture of energy consumption, it cannot be measured directly and must be calculated using conversion factors from site energy measurements. For electricity, a typical value is three units of source energy for one unit of site energy. However, this can vary considerably depending on factors such as the primary energy source or fuel type, the type of power plant, and the transmission infrastructure. One full set of conversion factors is available as technical reference from Energy STAR.
https://en.wikipedia.org/wiki?curid=1413688
632,387
Either site or source energy can be an appropriate metric when comparing or analyzing energy use of different facilities. The U.S Energy Information Administration, for example, uses primary (source) energy for its energy overviews but site energy for its Commercial Building Energy Consumption Survey and Residential Building Energy Consumption Survey. The US Environmental Protection Agency's Energy STAR program recommends using source energy, and the US Department of Energy uses site energy in its definition of a zero net energy building.
https://en.wikipedia.org/wiki?curid=1413688
638,898
In physics, the no-communication theorem or no-signaling principle is a no-go theorem from quantum information theory which states that, during measurement of an entangled quantum state, it is not possible for one observer, by making a measurement of a subsystem of the total state, to communicate information to another observer. The theorem is important because, in quantum mechanics, quantum entanglement is an effect by which certain widely separated events can be correlated in ways that, at first glance, suggest the possibility of communication faster-than-light. The no-communication theorem gives conditions under which such transfer of information between two observers is impossible. These results can be applied to understand the so-called paradoxes in quantum mechanics, such as the EPR paradox, or violations of local realism obtained in tests of Bell's theorem. In these experiments, the no-communication theorem shows that failure of local realism does not lead to what could be referred to as "spooky communication at a distance" (in analogy with Einstein's labeling of quantum entanglement as requiring "spooky action at a distance" on the assumption of QM's completeness).
https://en.wikipedia.org/wiki?curid=1488320
641,566
In quantum mechanics, the probability current (sometimes called probability flux) is a mathematical quantity describing the flow of probability. Specifically, if one thinks of probability as a heterogeneous fluid, then the probability current is the rate of flow of this fluid. It is a real vector that changes with space and time. Probability currents are analogous to mass currents in hydrodynamics and electric currents in electromagnetism. As in those fields, the probability current is related to the probability density function via a continuity equation. The probability current is invariant under gauge transformation.
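In the standard non-relativistic, single-particle case the probability current can be written as

\mathbf{j} = \frac{\hbar}{2 m i}\left(\Psi^{*}\,\nabla \Psi - \Psi\,\nabla \Psi^{*}\right),

and, together with the probability density \rho = |\Psi|^{2}, it satisfies the continuity equation

\frac{\partial \rho}{\partial t} + \nabla \cdot \mathbf{j} = 0.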
https://en.wikipedia.org/wiki?curid=3545648
649,124
Cell size depends on both cell growth and cell division, with a disproportionate increase in the rate of cell growth leading to production of larger cells and a disproportionate increase in the rate of cell division leading to production of many smaller cells. Cell proliferation typically involves balanced cell growth and cell division rates that maintain a roughly constant cell size in the exponentially proliferating population of cells. Cell proliferation occurs by combining cell growth with regular "G1-S-M-G2" cell cycles to produce many diploid cell progeny.
https://en.wikipedia.org/wiki?curid=594336
659,505
Recent advances in both AI and quantum information theory have given rise to the concept of quantum neural networks. These hold promise in quantum information processing, which is challenging to classical networks, but can also find application in solving classical problems. In 2018, a physical realization of a quantum reservoir computing architecture was demonstrated in the form of nuclear spins within a molecular solid. However, those nuclear spin experiments did not demonstrate quantum reservoir computing per se as they did not involve processing of sequential data. Rather, the data were vector inputs, which makes this more accurately a demonstration of quantum implementation of a random kitchen sink algorithm (also going by the name of extreme learning machines in some communities). In 2019, another possible implementation of quantum reservoir processors was proposed in the form of two-dimensional fermionic lattices. In 2020, realization of reservoir computing on gate-based quantum computers was proposed and demonstrated on cloud-based IBM superconducting near-term quantum computers.
https://en.wikipedia.org/wiki?curid=10667750
666,639
In computer programming, a function prototype or function interface is a declaration of a function that specifies the function’s name and type signature (arity, data types of parameters, and return type), but omits the function body. While a function definition specifies "how" the function does what it does (the "implementation"), a function prototype merely specifies its interface, i.e. "what" data types go in and come out of it. The term "function prototype" is particularly used in the context of the programming languages C and C++ where placing forward declarations of functions in header files allows for splitting a program into translation units, i.e. into parts that a compiler can separately translate into object files, to be combined by a linker into an executable or a library.
https://en.wikipedia.org/wiki?curid=1311431
681,185
Data at rest in information technology means data that is housed physically on computer data storage in any digital form (e.g. cloud storage, file hosting services, databases, data warehouses, spreadsheets, archives, tapes, off-site or cloud backups, mobile devices etc.). Data at rest includes both structured and unstructured data. This type of data is subject to threats from hackers and other malicious threats to gain access to the data digitally or physical theft of the data storage media. To prevent this data from being accessed, modified or stolen, organizations will often employ security protection measures such as password protection, data encryption, or a combination of both. The security options used for this type of data are broadly referred to as "data at rest protection" (DARP).
https://en.wikipedia.org/wiki?curid=33993923
688,511
Transformation is the direct alteration of a cell's genetic components by passing the genetic material through the cell membrane. About 1% of bacteria are naturally able to take up foreign DNA, but this ability can be induced in other bacteria. Stressing the bacteria with a heat shock or electroporation can make the cell membrane permeable to DNA that may then be incorporated into the genome or exist as extrachromosomal DNA. Typically the cells are incubated in a solution containing divalent cations (often calcium chloride) under cold conditions, before being exposed to a heat pulse (heat shock). Calcium chloride partially disrupts the cell membrane, which allows the recombinant DNA to enter the host cell. It is suggested that exposing the cells to divalent cations under cold conditions may change or weaken the cell surface structure, making it more permeable to DNA. The heat-pulse is thought to create a thermal imbalance across the cell membrane, which forces the DNA to enter the cells through either cell pores or the damaged cell wall. Electroporation is another method of promoting competence. In this method the cells are briefly shocked with an electric field of 10-20 kV/cm, which is thought to create holes in the cell membrane through which the plasmid DNA may enter. After the electric shock, the holes are rapidly closed by the cell's membrane-repair mechanisms. The taken-up DNA can either integrate into the bacterial genome or, more commonly, exist as extrachromosomal DNA.
https://en.wikipedia.org/wiki?curid=37319629
722,893
If we have a function which describes the system's potential energy, we can determine the system's equilibria using calculus. A system is in mechanical equilibrium at the critical points of the function describing the system's potential energy. We can locate these points using the fact that the derivative of the function is zero at these points. To determine whether the system is stable or unstable, we apply the second derivative test. With formula_1 denoting the static equation of motion of a system with a single degree of freedom, we can perform the following calculations:
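As a hedged worked example (the potential below is invented for illustration and does not appear in the article), consider a one-dimensional potential energy

V(x) = x^3 - 3x, \qquad V'(x) = 3x^2 - 3 = 0 \;\Rightarrow\; x = \pm 1, \qquad V''(x) = 6x.

Since V''(1) = 6 > 0, the point x = 1 is a stable equilibrium (a local minimum of the potential energy), while V''(-1) = -6 < 0 makes x = -1 an unstable equilibrium (a local maximum).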
https://en.wikipedia.org/wiki?curid=92290
722,946
A Hammerstein equation is a nonlinear first-kind Volterra integral equation of the form: formula_94. Under certain regularity conditions, the equation is equivalent to the implicit Volterra integral equation of the second kind: formula_95, where: formula_96. The equation may, however, also be expressed in operator form, which motivates the definition of the following operator, called the nonlinear Volterra-Hammerstein operator: formula_97. Here formula_98 is a smooth function while the kernel "K" may be continuous, i.e. bounded, or weakly singular. The corresponding second-kind Volterra integral equation, called the Volterra-Hammerstein integral equation of the second kind, or simply the Hammerstein equation for short, can be expressed as: formula_99. In certain applications, the nonlinearity of the function "G" may be treated as being only semi-linear, in the form of: formula_100. In this case, we obtain the following semi-linear Volterra integral equation: formula_101. In this form, we can state an existence and uniqueness theorem for the semi-linear Hammerstein integral equation.
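For orientation only (the formula placeholders above could not be recovered, so the following is the standard textbook form rather than the article's own notation), the first-kind and second-kind Hammerstein equations are usually written as

f(t) = \int_0^t K(t,s)\, G(s, u(s))\, ds \qquad \text{(first kind)}

u(t) = f(t) + \int_0^t K(t,s)\, G(s, u(s))\, ds \qquad \text{(second kind)}

where u is the unknown function, f is given, K is the kernel and G supplies the nonlinearity.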
https://en.wikipedia.org/wiki?curid=474234
723,145
In the cloud computing industry the terms "data temperature", or "hot data" and "cold data", have emerged to describe how data is stored in this respect. Hot data describes mission-critical data that needs to be accessed frequently, while cold data describes data that is needed less often and less urgently, such as data kept for archiving or auditing purposes. Hot data should be stored in ways that offer fast retrieval and modification, often, though not always, accomplished with in-memory storage. Cold data, on the other hand, can be stored in a more cost-effective way, with the understanding that access to it will likely be slower than access to hot data. While these descriptions are useful, "hot" and "cold" lack concrete definitions.
https://en.wikipedia.org/wiki?curid=1942477
730,838
The first case can be debugged by tracing the data-flow. By using lineage and data-flow information together, a data scientist can figure out how the inputs are converted into outputs. During the process, actors that behave unexpectedly can be caught. Either these actors can be removed from the data flow or they can be augmented by new actors to change the data-flow. The improved data-flow can be replayed to test its validity. Debugging faulty actors includes recursively performing coarse-grain replay on actors in the data-flow, which can be expensive in resources for long dataflows. Another approach is to manually inspect lineage logs to find anomalies, which can be tedious and time-consuming across several stages of a data-flow. Furthermore, these approaches work only when the data scientist can discover bad outputs. To debug analytics without known bad outputs, the data scientist needs to analyze the data-flow for suspicious behavior in general. However, a user often may not know the expected normal behavior and cannot specify predicates. This section describes a debugging methodology for retrospectively analyzing lineage to identify faulty actors in a multi-stage data-flow. We believe that sudden changes in an actor's behavior, such as its average selectivity, processing rate or output size, are characteristic of an anomaly. Lineage can reflect such changes in actor behavior over time and across different actor instances. Thus, mining lineage to identify such changes can be useful in debugging faulty actors in a data-flow, as sketched below.
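A minimal sketch of this idea, assuming that per-run selectivity values (outputs divided by inputs) for one actor have already been extracted from the lineage logs; the data and the 50% deviation threshold are invented for illustration:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* selectivity of one actor observed over successive runs (made-up data) */
    double selectivity[] = {0.52, 0.49, 0.51, 0.50, 0.08, 0.50};
    int n = sizeof selectivity / sizeof selectivity[0];
    double mean = selectivity[0];
    for (int i = 1; i < n; ++i) {
        /* flag a run whose selectivity deviates sharply from the running mean */
        if (fabs(selectivity[i] - mean) > 0.5 * mean)
            printf("run %d: selectivity %.2f deviates from running mean %.2f\n",
                   i, selectivity[i], mean);
        mean += (selectivity[i] - mean) / (i + 1);   /* update running mean */
    }
    return 0;
}

In practice the same running-average check could be applied to processing rate or output size, and across different actor instances rather than successive runs.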
https://en.wikipedia.org/wiki?curid=44783487
735,957
Quantum networks form an important element of quantum computing and quantum communication systems. Quantum networks facilitate the transmission of information in the form of quantum bits, also called qubits, between physically separated quantum processors. A quantum processor is a small quantum computer able to perform quantum logic gates on a certain number of qubits. Quantum networks work in a similar way to classical networks; the main difference is that quantum networking, like quantum computing, is better suited to certain problems, such as modeling quantum systems.
https://en.wikipedia.org/wiki?curid=2325953
774,661
Interphase is the portion of the cell cycle that is not accompanied by visible changes under the microscope, and includes the G1, S and G2 phases. During interphase, the cell grows (G1), replicates its DNA (S) and prepares for mitosis (G2). A cell in interphase is not simply quiescent. The term quiescent (i.e. dormant) would be misleading since a cell in interphase is very busy synthesizing proteins, copying DNA into RNA, engulfing extracellular material, processing signals, to name just a few activities. The cell is quiescent only in the sense of cell division (i.e. the cell is out of the cell cycle, G0). Interphase is the phase of the cell cycle in which a typical cell spends most of its life. Interphase is the 'daily living' or metabolic phase of the cell, in which the cell obtains nutrients and metabolizes them, grows, replicates its DNA in preparation for mitosis, and conducts other "normal" cell functions.
https://en.wikipedia.org/wiki?curid=222320
783,230
The likelihood is the probability or probability density of what was observed, viewed as a function of the parameters of an assumed model. To incorporate censored data points into the likelihood, each censored observation is represented by its probability as a function of the model parameters, i.e. by the CDF rather than by the density or probability mass.
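As a standard illustration (assuming right-censoring, which the article does not spell out): with density f, CDF F, fully observed values x_i and censoring points c_j, the likelihood becomes

L(\theta) = \prod_{i\ \mathrm{observed}} f(x_i;\theta)\; \prod_{j\ \mathrm{censored}} \bigl(1 - F(c_j;\theta)\bigr),

so each censored point contributes the probability of exceeding its censoring value rather than a density term.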
https://en.wikipedia.org/wiki?curid=11548952
794,129
The origins of the Dirac sea lie in the energy spectrum of the Dirac equation, an extension of the Schrödinger equation consistent with special relativity, an equation that Dirac had formulated in 1928. Although this equation was extremely successful in describing electron dynamics, it possesses a rather peculiar feature: for each quantum state possessing a positive energy E, there is a corresponding state with energy −E. This is not a big difficulty when an isolated electron is considered, because its energy is conserved and negative-energy electrons may be left out. However, difficulties arise when effects of the electromagnetic field are considered, because a positive-energy electron would be able to shed energy by continuously emitting photons, a process that could continue without limit as the electron descends into ever lower energy states. Real electrons, however, clearly do not behave in this way.
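For context (standard background rather than a statement from the article), the ± symmetry of the spectrum comes from the relativistic energy-momentum relation that solutions of the Dirac equation must satisfy:

E = \pm\sqrt{p^2 c^2 + m^2 c^4},

so for every solution with E > 0 there is a partner solution with E < 0, and it is this negative-energy branch that the Dirac sea is meant to account for.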
https://en.wikipedia.org/wiki?curid=312308