144,079
The fluid mosaic model explains various observations regarding the structure of functional cell membranes. According to this biological model, there is a lipid bilayer (a layer two molecules thick, consisting primarily of amphipathic phospholipids) in which protein molecules are embedded. The phospholipid bilayer gives fluidity and elasticity to the membrane. Small amounts of carbohydrates are also found in the cell membrane. The biological model, which was devised by Seymour Jonathan Singer and Garth L. Nicolson in 1972, describes the cell membrane as a two-dimensional liquid in which lipid and protein molecules diffuse laterally; this diffusion is nonetheless restricted within certain membrane domains. Such domains are defined by the existence of regions within the membrane with a special lipid and protein composition that promotes the formation of lipid rafts or protein and glycoprotein complexes. Another way to define membrane domains is the association of the lipid membrane with the cytoskeleton filaments and the extracellular matrix through membrane proteins. The current model describes important features relevant to many cellular processes, including: cell-cell signaling, apoptosis, cell division, membrane budding, and cell fusion. The fluid mosaic model remains the most widely accepted model of the plasma membrane, whose main function is to separate the contents of the cell from the exterior.
https://en.wikipedia.org/wiki?curid=340440
175,354
The plum pudding model usefully guided his student, Ernest Rutherford, to devise experiments to further explore the composition of atoms. Thomson's model was also an improvement upon the earlier solar-system model of Joseph Larmor and the Saturnian ring model for atomic electrons put forward in 1904 by Nagaoka (after James Clerk Maxwell's model of Saturn's rings): those models could not withstand classical mechanics, since their orbiting electrons would spiral inward, so they were dropped in favor of Thomson's model. However, all previous atomic models were useful as predecessors of the more correct 1913 solar-system-like Bohr model of the atom. Bohr acknowledged in his paper that he borrowed substantially from the 1912 nuclear model of John William Nicholson, whose quantum atomic model quantized angular momentum in units of "h"/2π. The Bohr model was initially planar, like the Nagaoka model, but Sommerfeld introduced elliptical orbits in the years 1914–1925, until the theory was superseded by modern quantum mechanics.
https://en.wikipedia.org/wiki?curid=2840
179,962
The resting potential must be established within a cell before the cell can be depolarized. There are many mechanisms by which a cell can establish a resting potential; however, there is a typical pattern of generating it that many cells follow. The cell uses ion channels, ion pumps, and voltage-gated ion channels to generate a negative resting potential within the cell. However, the process of generating the resting potential also creates an environment outside the cell that favors depolarization. The sodium–potassium pump is largely responsible for optimizing conditions on both the interior and the exterior of the cell for depolarization. By pumping three positively charged sodium ions (Na+) out of the cell for every two positively charged potassium ions (K+) pumped in, not only is the resting potential of the cell established, but steep concentration gradients are created by raising the concentration of sodium outside the cell and the concentration of potassium within it. Although there is an excess of potassium inside the cell and of sodium outside it, the resting potential keeps the voltage-gated ion channels in the plasma membrane closed, preventing the ions that have been pumped across the plasma membrane from diffusing to an area of lower concentration. Additionally, despite the high concentration of positively charged potassium ions, most cells contain negatively charged internal components, which accumulate to establish a negative charge inside the cell.
https://en.wikipedia.org/wiki?curid=531587
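The resting potential described above can be estimated with the Goldman–Hodgkin–Katz voltage equation. Here is a minimal Python sketch; the relative permeabilities and ion concentrations are typical textbook mammalian values assumed for illustration, not figures taken from the text:

```python
import math

# Goldman-Hodgkin-Katz voltage equation; anionic Cl- enters with in/out swapped.
R, T, F = 8.314, 310.0, 96485.0                      # J/(mol K), K, C/mol
P = {"K": 1.0, "Na": 0.05, "Cl": 0.45}               # relative permeabilities (assumed)
out = {"K": 5.0, "Na": 145.0, "Cl": 110.0}           # extracellular mM (assumed)
inn = {"K": 140.0, "Na": 15.0, "Cl": 10.0}           # intracellular mM (assumed)

num = P["K"] * out["K"] + P["Na"] * out["Na"] + P["Cl"] * inn["Cl"]
den = P["K"] * inn["K"] + P["Na"] * inn["Na"] + P["Cl"] * out["Cl"]
Vm = 1000 * (R * T / F) * math.log(num / den)        # millivolts

print(f"resting potential ~ {Vm:.0f} mV")            # roughly -65 mV
```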
232,442
To calculate the bands including electron-electron interaction many-body effects, one can resort to so-called Green's function methods. Indeed, knowledge of the Green's function of a system provides both ground-state observables (such as the total energy) and excited-state observables of the system. The poles of the Green's function are the quasiparticle energies, the bands of a solid. The Green's function can be calculated by solving the Dyson equation once the self-energy of the system is known. For real systems like solids, the self-energy is a very complex quantity and approximations are usually needed to solve the problem. One such approximation is the GW approximation, so called from the mathematical form the self-energy takes as the product Σ = "GW" of the Green's function "G" and the dynamically screened interaction "W". This approach is more pertinent when addressing the calculation of band plots (and also quantities beyond, such as the spectral function) and can also be formulated in a completely "ab initio" way. The GW approximation seems to provide band gaps of insulators and semiconductors in agreement with experiment, and hence to correct the systematic DFT underestimation.
https://en.wikipedia.org/wiki?curid=706247
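To make the Dyson-equation logic concrete, here is a toy one-level sketch in Python (not a GW calculation): with a single bare level ε₀ and an assumed static self-energy Σ, the Dyson equation G = G₀ + G₀ΣG has the closed-form solution below, and the pole of G, read off the peak of the spectral function, is the quasiparticle energy ε₀ + Σ.

```python
import numpy as np

e0, sigma, eta = 1.0, 0.35, 0.01      # bare level, static self-energy, broadening (assumed, eV)
w = np.linspace(0.0, 3.0, 3001)

G0 = 1.0 / (w - e0 + 1j * eta)                 # non-interacting Green's function
G = 1.0 / (w - e0 - sigma + 1j * eta)          # Dyson equation solved: G = G0 + G0*Sigma*G

A0 = -G0.imag / np.pi                          # spectral functions; peaks mark the energies
A = -G.imag / np.pi
print("bare peak:", w[A0.argmax()], "quasiparticle peak:", w[A.argmax()])
```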
240,870
The best known example of quantum interference is the double-slit experiment. In this experiment, electrons, atoms or other quantum mechanical objects approach a barrier with two slits in it. If the quantum object succeeds in passing through the slits, its position is measured with a detection screen a certain distance beyond the barrier. For this system, one lets ψ₁ be the part of the wavefunction that passes through one of the slits and ψ₂ be the part that passes through the other slit. When the object almost reaches the screen, the probability of where it is located is given by |ψ₁ + ψ₂|² = |ψ₁|² + |ψ₂|² + 2 Re(ψ₁*ψ₂). In this context, the equation says that the probability of finding the object at some point just before it hits the screen is the probability that would be obtained if it went through the first slit, plus the probability that would be obtained if it went through the second slit, plus the quantum interference term, which has no counterpart in classical physics. The quantum interference term can significantly change the pattern observed on the detection screen.
https://en.wikipedia.org/wiki?curid=15112
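A short numerical sketch of the interference term (the wavelength, slit separation and screen distance are assumed values):

```python
import numpy as np

wavelength, d, L = 500e-9, 20e-6, 1.0     # 500 nm light, 20 um slit spacing, 1 m to screen (assumed)
x = np.linspace(-5e-3, 5e-3, 2001)        # positions on the detection screen
k = 2 * np.pi / wavelength

r1 = np.hypot(L, x - d / 2)               # path length from each slit to the screen point
r2 = np.hypot(L, x + d / 2)
psi1 = np.exp(1j * k * r1)                # wavefunction part through slit 1
psi2 = np.exp(1j * k * r2)                # wavefunction part through slit 2

classical = np.abs(psi1)**2 + np.abs(psi2)**2        # the two slit probabilities alone
interference = 2 * np.real(np.conj(psi1) * psi2)     # the quantum interference term
quantum = np.abs(psi1 + psi2)**2                     # the observed pattern
assert np.allclose(quantum, classical + interference)
```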
261,052
The internal energy of a thermodynamic system is the total energy contained within it. It is the energy necessary to create or prepare the system in its given internal state, and includes the contributions of potential energy and internal kinetic energy. It keeps account of the gains and losses of energy of the system that are due to changes in its internal state. It does not include the kinetic energy of motion of the system as a whole, or any external energies from surrounding force fields. The internal energy of an isolated system is constant, which is expressed as the law of conservation of energy, a foundation of the first law of thermodynamics. The internal energy is an extensive property.
https://en.wikipedia.org/wiki?curid=340757
301,365
Determining the thermodynamic state of the stagnation point is more difficult under an equilibrium gas model than a perfect gas model. Under a perfect gas model, the "ratio of specific heats" (also called the "isentropic exponent", adiabatic index, "gamma", or "kappa") is assumed to be constant, along with the gas constant. For a real gas, the ratio of specific heats can vary strongly as a function of temperature. Under a perfect gas model there is an elegant set of equations for determining thermodynamic state along a constant-entropy streamline, called the "isentropic chain". For a real gas, the isentropic chain is unusable and a "Mollier diagram" would be used instead for manual calculation. However, graphical solution with a Mollier diagram is now considered obsolete, with modern heat shield designers using computer programs based upon a digital lookup table (another form of Mollier diagram) or a chemistry-based thermodynamics program. The chemical composition of a gas in equilibrium with fixed pressure and temperature can be determined through the "Gibbs free energy method". Gibbs free energy is simply the total enthalpy of the gas minus its total entropy times temperature. A chemical equilibrium program normally does not require chemical formulas or reaction-rate equations. The program works by preserving the original elemental abundances specified for the gas and varying the different molecular combinations of the elements through numerical iteration until the lowest possible Gibbs free energy is calculated (a Newton–Raphson method is the usual numerical scheme). The database for a Gibbs free energy program comes from spectroscopic data used in defining partition functions. Among the best equilibrium codes in existence is the program "Chemical Equilibrium with Applications" (CEA), which was written by Bonnie J. McBride and Sanford Gordon at NASA Lewis (now renamed "NASA Glenn Research Center"). Other names for CEA are the "Gordon and McBride Code" and the "Lewis Code". CEA is quite accurate up to 10,000 K for planetary atmospheric gases, but unusable beyond 20,000 K (double ionization is not modelled). CEA can be downloaded from the Internet along with full documentation and will compile on Linux under the G77 Fortran compiler.
https://en.wikipedia.org/wiki?curid=45294
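A minimal sketch of the Gibbs free energy method for a toy dissociation equilibrium N₂ ⇌ 2N; the standard chemical potentials are invented placeholders rather than CEA data, and a general-purpose minimizer stands in for the Newton–Raphson scheme:

```python
import numpy as np
from scipy.optimize import minimize

mu0 = {"N2": 0.0, "N": 2.0}       # dimensionless mu0/RT at the chosen T (assumed values)
b_N = 2.0                         # conserved moles of elemental N: 2*n_N2 + n_N

def gibbs(n):                     # ideal-gas mixture Gibbs energy at P = 1 bar
    n_tot = n.sum()
    return sum(ni * (mu0[sp] + np.log(ni / n_tot))
               for ni, sp in zip(n, ("N2", "N")) if ni > 0)

cons = {"type": "eq", "fun": lambda n: 2 * n[0] + n[1] - b_N}   # elemental abundance preserved
res = minimize(gibbs, x0=np.array([0.9, 0.2]),
               bounds=[(1e-12, None)] * 2, constraints=[cons])
print("equilibrium moles (N2, N):", res.x)
```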
315,359
From 2007 onwards, the scope of the course (along with that of Math 25) was changed to more strictly cover the contents of four semester-long courses in two semesters: Math 25a (linear algebra and real analysis) and Math 122 (group theory and vector spaces) in Math 55a; and Math 25b (real analysis) and Math 113 (complex analysis) in Math 55b. The name was also changed to "Honors Abstract Algebra" (Math 55a) and "Honors Real and Complex Analysis" (Math 55b). Fluency in formulating and writing mathematical proofs is listed as a course prerequisite for Math 55, while such experience is considered "helpful" but not required for Math 25. In practice, students of Math 55 have usually had extensive experience in proof writing and abstract mathematics, with many being past winners of prestigious national or international mathematical olympiads such as the USAMO or IMO. Typical students of Math 25 have also had previous exposure to proof writing through mathematical contests or university-level mathematics courses.
https://en.wikipedia.org/wiki?curid=22008131
322,380
There are 11 distinct organ systems in human beings, which form the basis of human anatomy and physiology. The 11 organ systems include the respiratory system, digestive and excretory system, circulatory system, urinary system, integumentary system, skeletal system, muscular system, endocrine system, lymphatic system, nervous system, and reproductive system. There are other systems in the body that are not organ systems. For example, the immune system protects the organism from infection, but it is not an organ system because it is not composed of organs. Some organs are in more than one system. For example, the nose is part of the respiratory system and is also a sensory organ in the nervous system. The testes and ovaries are part of both the reproductive and endocrine systems.
https://en.wikipedia.org/wiki?curid=419100
350,980
Probability logic is interested in how the probability of the premises of an argument affects the probability of its conclusion. It differs from classical logic, which assumes that propositions are either true or false but does not take into consideration the probability or certainty that a proposition is true or false. The probability of the conclusion of a deductive argument cannot be calculated simply from the probabilities of the argument's premises; those probabilities only bound it. Dr. Timothy McGrew, a specialist in the applications of probability theory, and Dr. Ernest W. Adams, a Professor Emeritus at UC Berkeley, pointed out that the theorem on the accumulation of uncertainty designates only a lower limit on the probability of the conclusion. So the probability of the conjunction of the argument's premises sets only a minimum probability for the conclusion: the probability of the argument's conclusion cannot be any lower than the probability of that conjunction. For example, if the probability of the conjunction of a deductive argument's four premises is ~0.43, then it is assured that the probability of the argument's conclusion is no less than ~0.43. It could be much higher, but it cannot fall below that lower limit.
https://en.wikipedia.org/wiki?curid=61093
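The lower-bound reasoning can be spelled out in a few lines of Python; the individual premise probabilities are assumed for illustration, while the ~0.43 joint probability is the figure from the text:

```python
premise_probs = [0.9, 0.95, 0.85, 0.9]   # assumed probabilities of the four premises

# Accumulation of uncertainty: for a valid deductive argument, the conclusion's
# improbability is at most the sum of the premises' improbabilities.
adams_bound = max(0.0, 1 - sum(1 - p for p in premise_probs))

# If the joint probability of the premises is known, it is itself a lower bound.
joint_probability = 0.43                 # the ~0.43 example from the text
print(f"conclusion probability >= {adams_bound:.2f} (accumulation-of-uncertainty bound)")
print(f"conclusion probability >= {joint_probability:.2f} (given the joint probability)")
```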
446,215
Although there are many other implementations for quantum information processing (QIP) and quantum computation, optical quantum systems are prominent candidates, since they link quantum computation and quantum communication in the same framework. In optical systems for quantum information processing, the unit of light in a given mode—or photon—is used to represent a qubit. Superpositions of quantum states can be easily represented, encoded, transmitted and detected using photons. In addition, the linear optical elements of optical systems may be the simplest building blocks with which to realize quantum operations and quantum gates. Each linear optical element applies a unitary transformation on a finite number of qubits. A system of finite linear optical elements constructs a network of linear optics, which can realize any quantum circuit diagram or quantum network based on the quantum circuit model. Quantum computing with continuous variables is also possible under the linear optics scheme.
https://en.wikipedia.org/wiki?curid=41312173
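As an illustration of a linear optical element acting as a unitary, here is a sketch of a 50:50 beam splitter on two modes; the real, Hadamard-like convention is chosen for simplicity (physical beam splitters are often written with an imaginary phase instead):

```python
import numpy as np

# 50:50 beam splitter as a 2x2 unitary on two optical modes; on a dual-rail
# qubit (photon in mode 0 = |0>, mode 1 = |1>) it acts like a Hadamard gate.
BS = np.array([[1.0, 1.0],
               [1.0, -1.0]]) / np.sqrt(2)

assert np.allclose(BS @ BS.conj().T, np.eye(2))   # unitarity check

qubit = np.array([1.0, 0.0])                      # photon in mode 0
print(BS @ qubit)                                 # equal superposition of the two modes
```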
514,381
A programming model is an execution model coupled to an API or a particular pattern of code. In this style, there are actually two execution models in play: the execution model of the base programming language and the execution model of the programming model. An example is Apache Spark, where Java is the base language and Spark is the programming model. Execution may be based on what appear to be ordinary library calls. Other examples include the POSIX Threads library and Hadoop's MapReduce. In both cases, the execution model of the programming model is different from that of the base language in which the code is written. For example, the C programming language has no behavior in its execution model for input/output or thread behavior. But such behavior can be invoked from C syntax, by making what appears to be a call to a normal C library.
https://en.wikipedia.org/wiki?curid=4045128
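A minimal sketch of the idea in Python: the word-count "program" below is written entirely as what look like ordinary function calls, but the group-by-key-then-reduce execution model belongs to the (toy) MapReduce-style framework, not to the base language:

```python
from functools import reduce
from itertools import chain

def map_reduce(records, mapper, reducer):
    """Toy MapReduce-style framework: map, shuffle by key, reduce per key."""
    pairs = chain.from_iterable(mapper(r) for r in records)      # map phase
    groups = {}
    for key, value in pairs:                                     # shuffle phase
        groups.setdefault(key, []).append(value)
    return {k: reduce(reducer, vs) for k, vs in groups.items()}  # reduce phase

counts = map_reduce(["a b a", "b c"],
                    mapper=lambda line: [(w, 1) for w in line.split()],
                    reducer=lambda x, y: x + y)
print(counts)   # {'a': 2, 'b': 2, 'c': 1}
```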
540,524
Another concept is that data is considered a form of public good. Researchers from Stanford University proposed the use of such a framework to think about data and the development of AI; they were considering radiology data specifically. They concluded that clinical data should be a form of public good, used for the benefit of future patients, and that the data should be widely available for the development of knowledge and tools to benefit future patients. From this, they drew three main conclusions. Firstly, if clinical data is really not owned by anyone, those who interact with it have an obligation to ensure that the data is used for the benefit of future patients in societies. Secondly, this data should be widely shared for research and development, and all the individuals and entities with access to that data essentially become stewards of it, responsible for carefully safeguarding privacy and for ensuring that the data is used for developing knowledge and tools for the good. Thirdly, patient consent would not necessarily be required before the data is used for secondary purposes, such as AI development, training and testing, as long as there are mechanisms in place to ensure that ethical standards are being followed. Under this proposed framework, the authors argue that it would be unethical to sell data to third parties by granting exclusive access in exchange for monetary or any other payments that exceed costs.
https://en.wikipedia.org/wiki?curid=37451236
542,016
Electroreception and electrogenesis are the closely-related biological abilities to perceive electrical stimuli and to generate electric fields. Both are used to locate prey; stronger electric discharges are used in a few groups of fishes to stun prey. The capabilities are found almost exclusively in aquatic or amphibious animals, since water is a much better conductor of electricity than air. In passive electrolocation, objects such as prey are detected by sensing the electric fields they create. In active electrolocation, fish generate a weak electric field and sense the different distortions of that field created by objects that conduct or resist electricity. Active electrolocation is practised by two groups of weakly electric fish, the Gymnotiformes (knifefishes) and the Mormyridae (elephantfishes), and by "Gymnarchus niloticus", the African knifefish. An electric fish generates an electric field using an electric organ, modified from muscles in its tail. The field is called weak if it is only enough to detect prey, and strong if it is powerful enough to stun or kill. The field may be in brief pulses, as in the elephantfishes, or a continuous wave, as in the knifefishes. Some strongly electric fish, such as the electric eel, locate prey by generating a weak electric field, and then discharge their electric organs strongly to stun the prey; other strongly electric fish, such as the electric ray, electrolocate passively. The stargazers are unique in being strongly electric but not using electrolocation.
https://en.wikipedia.org/wiki?curid=1166387
558,779
The system in this experiment consists of both compartments; that is, the entire region occupied by the gas at the end of the experiment. Because this system is thermally isolated, it cannot exchange heat with its surroundings. Also, since the system's total volume is kept constant, the system cannot perform work on its surroundings. As a result, the change in internal energy, ΔU, is zero. Internal energy consists of internal kinetic energy (due to the motion of the molecules) and internal potential energy (due to intermolecular forces). When the molecular motion is random, temperature is the measure of the internal kinetic energy; this random internal kinetic energy is often loosely called heat. If the chambers have not reached equilibrium, there will be some kinetic energy of flow, which is not detectable by a thermometer (and therefore is not part of this thermal energy). Thus, a change in temperature indicates a change in kinetic energy, and some of this change will not appear as heat until and unless thermal equilibrium is reestablished. When random thermal motion is converted into kinetic energy of flow, the temperature decreases.
https://en.wikipedia.org/wiki?curid=1867005
563,320
An accumulator is an energy storage device: a device which accepts energy, stores energy, and releases energy as needed. Some accumulators accept energy at a low rate (low power) over a long time interval and deliver it at a high rate (high power) over a short time interval. Some accept energy at a high rate over a short time interval and deliver it at a low rate over a longer time interval. Some typically accept and release energy at comparable rates. Various devices can store thermal energy, mechanical energy, and electrical energy. Energy is usually accepted and delivered in the same form. Some devices, however, store a different form of energy than they receive and deliver, performing energy conversion on the way in and on the way out.
https://en.wikipedia.org/wiki?curid=3664546
595,816
The European Union's General Data Protection Regulation (GDPR) demands that stored data on people in the EU undergo either an anonymization or a pseudonymization process. GDPR Recital (26) establishes a very high bar for what constitutes anonymous data, thereby exempting the data from the requirements of the GDPR, namely "…information which does not relate to an identified or identifiable natural person or to personal data rendered anonymous in such a manner that the data subject is not or no longer identifiable." The European Data Protection Supervisor (EDPS) and the Spanish Agencia Española de Protección de Datos (AEPD) have issued joint guidance on the requirements for anonymity and exemption from GDPR requirements. According to the EDPS and AEPD, no one, including the data controller, should be able to re-identify data subjects in a properly anonymized dataset. Research by data scientists at Imperial College London and UCLouvain in Belgium, as well as a ruling by Judge Michal Agmon-Gonen of the Tel Aviv District Court, highlight the shortcomings of anonymisation in today's big data world: anonymisation reflects an outdated approach to data protection, developed when the processing of data was limited to isolated (siloed) applications, prior to the popularity of "big data" processing involving the widespread sharing and combining of data.
https://en.wikipedia.org/wiki?curid=41669781
626,256
The ink is controlled in the flexographic printing process by the ink system. The ink system contains an ink pump, an anilox roll, and either a fountain roll system or a doctor blade system. The fountain roll, or two-roll, system has one roll spinning in an ink pan, pressed against the anilox roll to transfer a layer of ink that is then applied to the printing plate. This system is best used for low-quality print such as flood coats and block lettering, due to its inability to produce a clean wipe of the anilox roll. The doctor blade system can be either an open single-blade system or an enclosed dual-blade system. The single-blade system uses an open ink pan with a roller that is then sheared with one doctor blade to create a uniform layer of ink to be distributed. The remaining ink sheared from the anilox roll collects in the ink pan and is pumped back into the system. The cylinder plate, anilox roll, and doctor blade are independently controlled by hydraulic, pressure and/or pneumatic systems. This system is best used for low- to mid-quality print work, usually found in corrugated box printing. The dual-blade system is an enclosed system that has one doctor blade for doctoring the ink and one containment blade that contains the ink in the chamber and allows ink from the anilox roll back in. Dual-blade systems require two end seals and adequate chamber pressure in order to maintain the tight seal between the ink chamber and the anilox roll. This system is best used for high-quality, intricate print designs, like those found in the label industry.
https://en.wikipedia.org/wiki?curid=169534
649,121
Cell proliferation is the process by which "a cell grows and divides to produce two daughter cells". Cell proliferation leads to an exponential increase in cell number and is therefore a rapid mechanism of tissue growth. Cell proliferation requires both cell growth and cell division to occur at the same time, such that the average size of cells remains constant in the population. Cell division can occur without cell growth, producing many progressively smaller cells (as in cleavage of the zygote), while cell growth can occur without cell division to produce a single larger cell (as in growth of neurons). Thus, cell proliferation is not synonymous with either cell growth or cell division, despite the fact that these terms are sometimes used interchangeably.
https://en.wikipedia.org/wiki?curid=594336
650,927
Whenever a function is discontinuously truncated in one FT domain, broadening and rippling are introduced in the other FT domain. A perfect example from optics is in connection with the point spread function, which for on-axis plane wave illumination of a quadratic lens (with circular aperture) is an Airy function, J₁("x")/"x". Literally, the point source has been "spread out" (with ripples added) to form the Airy point spread function, as the result of truncation of the plane wave spectrum by the finite aperture of the lens. This source of error is known as the Gibbs phenomenon, and it may be mitigated by simply ensuring that all significant content lies near the center of the transparency, or through the use of window functions which smoothly taper the field to zero at the frame boundaries. By the convolution theorem, the FT of an arbitrary transparency function multiplied (or truncated) by an aperture function is equal to the FT of the non-truncated transparency function convolved against the FT of the aperture function, which in this case becomes a type of "Green's function" or "impulse response function" in the spectral domain. Therefore, the image formed by a circular lens is equal to the object plane function convolved against the Airy function (the FT of a circular aperture function is J₁("x")/"x" and the FT of a rectangular aperture function is a product of sinc functions, sin "x"/"x").
https://en.wikipedia.org/wiki?curid=312008
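The truncation-ripple effect is easy to reproduce numerically: Fourier-transforming a sharply truncated circular aperture yields a central lobe surrounded by ripple rings (the grid size and aperture radius below are assumed):

```python
import numpy as np

N, R = 512, 40                                   # grid size and aperture radius in samples (assumed)
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
aperture = (x**2 + y**2 <= R**2).astype(float)   # sharply truncated circular aperture

psf = np.abs(np.fft.fftshift(np.fft.fft2(aperture)))**2   # Airy-like point spread function
profile = psf[N // 2, N // 2:]                             # radial cut: central lobe plus rings

interior = profile[1:-1]                                   # locate the first ripple minimum
mins = np.where((interior < profile[:-2]) & (interior < profile[2:]))[0]
print("first dark ring at radial sample", mins[0] + 1)
```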
687,863
Subjectivists, also known as Bayesians or followers of epistemic probability, give the notion of probability a subjective status by regarding it as a measure of the 'degree of belief' of the individual assessing the uncertainty of a particular situation. Epistemic or subjective probability is sometimes called credence, as opposed to the term chance for a propensity probability. Some examples of epistemic probability are to assign a probability to the proposition that a proposed law of physics is true or to determine how probable it is that a suspect committed a crime, based on the evidence presented. The use of Bayesian probability raises the philosophical debate as to whether it can contribute valid justifications of belief. Bayesians point to the work of Ramsey (p 182) and de Finetti (p 103) as proving that subjective beliefs must follow the laws of probability if they are to be coherent. Empirical evidence, however, casts doubt on whether humans actually hold coherent beliefs. The use of Bayesian probability involves specifying a prior probability. This may be obtained from consideration of whether the required prior probability is greater or less than a reference probability associated with an urn model or a thought experiment. The issue is that for a given problem, multiple thought experiments could apply, and choosing one is a matter of judgement: different people may assign different prior probabilities. This is known as the reference class problem. The "sunrise problem" provides an example.
https://en.wikipedia.org/wiki?curid=23538
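For the sunrise problem, Laplace's classical answer follows from a uniform prior via the rule of succession; a tiny sketch:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule: with a uniform prior, P(next success) = (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)

# After observing 10,000 sunrises with no failures:
print(rule_of_succession(10_000, 10_000))   # 10001/10002
```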
704,269
In June 2019, the Korean government confirmed the Third Energy Master Plan, also called a constitutional law of the energy sector and renewed every five years. Its goal is to achieve sustainable growth and enhance the quality of life through energy transition. There are five major tasks for achieving this goal. First, with regard to consumption, the goal is to improve energy consumption efficiency by 38% compared to the level of 2017 and to reduce energy consumption by 18.6% below the BAU level by 2040. Second, with respect to generation, the task is to bring about a transition towards a safe and clean energy mix by raising the share of renewable energy in power generation (30–35% by 2040) and by implementing a gradual phase-out of nuclear power and a drastic reduction of coal. Third, regarding systems, the task is to raise the share of distributed generation from renewables and fuel cells near where demand is created, and to enhance the roles and responsibility of local governments and residents. Fourth, with regard to industry, the task is to foster businesses related to renewables, hydrogen, and energy efficiency as a future energy industry, to help the conventional energy industry develop higher value-added businesses, and to support the nuclear power industry in maintaining its main ecosystem. The fifth task is to improve the energy market system for electricity, gas, and heat in order to promote energy transition, and to develop an energy big data platform in order to create new businesses.
https://en.wikipedia.org/wiki?curid=39208945
712,502
A chemical reactor is an enclosed volume in which a chemical reaction takes place. In chemical engineering, it is generally understood to be a process vessel used to carry out a chemical reaction, which is one of the classic unit operations in chemical process analysis. The design of a chemical reactor deals with multiple aspects of chemical engineering. Chemical engineers design reactors to maximize net present value for the given reaction. Designers ensure that the reaction proceeds with the highest efficiency towards the desired output product, producing the highest yield of product while requiring the least amount of money to purchase and operate. Normal operating expenses include energy input, energy removal, raw material costs, labor, etc. Energy changes can come in the form of heating or cooling, pumping to increase pressure, frictional pressure loss, or agitation. Chemical reaction engineering is the branch of chemical engineering which deals with chemical reactors and their design, especially by application of chemical kinetics to industrial systems.
https://en.wikipedia.org/wiki?curid=1162781
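As a small example of applying chemical kinetics to reactor design, here is a sketch of the standard sizing equation for a continuous stirred-tank reactor (CSTR) with first-order kinetics; the rate constant, feed rate and target conversion are assumed numbers:

```python
# CSTR design equation for A -> products with first-order rate r = k * C_A:
#   V = v0 * X / (k * (1 - X))
k = 0.5        # 1/min, assumed rate constant
v0 = 100.0     # L/min, assumed volumetric feed rate
X = 0.9        # assumed target conversion

V = v0 * X / (k * (1 - X))
print(f"required reactor volume: {V:.0f} L")   # 1800 L for these numbers
```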
730,799
Representation broadly depends on the scope of the metadata management and the reference point of interest. Backward data lineage provides the sources of the data and the intermediate data-flow hops upstream of the reference point; forward data lineage leads to the final destinations' data points and their intermediate data flows. These views can be combined into end-to-end lineage for a reference point, providing a complete audit trail of that data point of interest from sources to its final destinations. As the data points or hops increase, the complexity of such a representation becomes incomprehensible. Thus, the best feature of a data lineage view is the ability to simplify the view by temporarily masking unwanted peripheral data points. Tools that have the masking feature enable scalability of the view and enhance analysis with the best user experience for both technical and business users. Data lineage also enables companies to trace the sources of specific business data for the purposes of tracking errors, implementing changes in processes, and implementing system migrations to save significant amounts of time and resources, thereby greatly improving BI efficiency.
https://en.wikipedia.org/wiki?curid=44783487
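A toy sketch of backward, forward and end-to-end lineage over a small dependency graph (the dataset names are hypothetical):

```python
# Edges point from a source dataset to the dataset derived from it.
edges = {
    "raw_orders": ["clean_orders"],
    "clean_orders": ["daily_sales"],
    "fx_rates": ["daily_sales"],
    "daily_sales": ["exec_dashboard"],
}

def downstream(node):                      # forward lineage
    result = []
    for nxt in edges.get(node, []):
        result += [nxt] + downstream(nxt)
    return result

def upstream(node):                        # backward lineage
    result = []
    for src, dsts in edges.items():
        if node in dsts:
            result += [src] + upstream(src)
    return result

# End-to-end lineage for a reference point combines both directions:
print(upstream("daily_sales"), "->", "daily_sales", "->", downstream("daily_sales"))
```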
733,465
In control theory, a time-invariant (TIV) system has a system function that is not a direct function of time. Such systems are regarded as a class of systems in the field of system analysis. The system function is a function of the time-dependent input function, and if it depends only indirectly on the time domain (via the input function, for example), then the system is considered time-invariant. Conversely, any direct dependence of the system function on the time domain makes it a "time-varying system".
https://en.wikipedia.org/wiki?curid=1291319
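The definition suggests a direct numerical check: shifting the input and then applying the system should give the same result as applying the system and then shifting the output. A sketch with one time-invariant and one time-varying example system (circular shifts keep the comparison exact):

```python
import numpy as np

def first_difference(x):              # depends on time only through the input: time-invariant
    return x - np.roll(x, 1)

def ramp_gain(x):                     # explicit dependence on the time index: time-varying
    return np.arange(len(x)) * x

x = np.random.default_rng(0).standard_normal(64)
shift = 5
for system in (first_difference, ramp_gain):
    shifted_first = system(np.roll(x, shift))       # shift input, then apply system
    applied_first = np.roll(system(x), shift)       # apply system, then shift output
    print(system.__name__, "time-invariant?", np.allclose(shifted_first, applied_first))
```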
780,093
The essential job of this system is to find a suitable balance between fixing dirty data and maintaining the data as close as possible to the original data from the source production system. This is a challenge for the extract, transform, load (ETL) architect. The system should offer an architecture that can cleanse data, record quality events, and measure/control the quality of data in the data warehouse. A good start is to perform a thorough data profiling analysis that will help define the required complexity of the data cleansing system and also give an idea of the current data quality in the source system(s).
https://en.wikipedia.org/wiki?curid=3575651
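A minimal data-profiling sketch in Python with pandas (the sample table and column names are hypothetical): null rates, distinct counts and dtypes per column give a first estimate of how complex the cleansing system needs to be.

```python
import pandas as pd

# Hypothetical staging extract standing in for a real source-system table.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, None, 5],
    "email": ["a@x.com", None, None, "d@x.com", "e@x.com"],
    "country": ["DE", "DE", "FR", "FR", "FR"],
})

profile = pd.DataFrame({
    "nulls": df.isna().sum(),
    "null_pct": df.isna().mean().round(3),
    "distinct": df.nunique(),
    "dtype": df.dtypes.astype(str),
})
print(profile.sort_values("null_pct", ascending=False))
```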
785,329
Another possibility for increased efficiency is to convert the frequency of light down towards the bandgap energy with a fluorescent material. In particular, to exceed the Shockley–Queisser limit, the fluorescent material must convert a single high-energy photon into several lower-energy ones (quantum efficiency > 1). For example, one photon with more than double the bandgap energy can become two photons above the bandgap energy. In practice, however, this conversion process tends to be relatively inefficient. If a very efficient system were found, such a material could be painted on the front surface of an otherwise standard cell, boosting its efficiency for little cost. In contrast, considerable progress has been made in the exploration of fluorescent downshifting, which converts high-energy light (e.g., UV light) to low-energy light (e.g., red light) with a quantum efficiency smaller than 1. The cell may be more sensitive to these lower-energy photons. Dyes, rare-earth phosphors and quantum dots are actively investigated for fluorescent downshifting. For example, downshifting enabled by silicon quantum dots has improved the efficiency of state-of-the-art silicon solar cells.
https://en.wikipedia.org/wiki?curid=20055670
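The energy bookkeeping behind this photon-splitting condition is a one-line check: a photon can in principle yield two above-bandgap photons only if it carries at least twice the bandgap energy. A sketch using silicon's bandgap; the incident wavelength is an assumed example:

```python
E_g = 1.12                  # eV, bandgap of crystalline silicon
h_c = 1239.84               # eV*nm, photon energy-wavelength conversion factor

wavelength_nm = 400         # assumed incident photon (violet light)
E_photon = h_c / wavelength_nm

splittable = E_photon >= 2 * E_g
print(f"{E_photon:.2f} eV photon ->", "splittable" if splittable else "not splittable")
```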
844,162
At resting potential, both the voltage-gated sodium and potassium channels are closed, but as the cell membrane becomes depolarized, the voltage-gated sodium channels begin to open and the neuron begins to depolarize, creating a current feedback loop known as the Hodgkin cycle. However, potassium ions naturally move out of the cell, and if the original depolarization event was not significant enough, the neuron does not generate an action potential. If all the sodium channels are open, however, the neuron becomes ten times more permeable to sodium than to potassium, quickly depolarizing the cell to a peak of about +40 mV. At this level the sodium channels begin to inactivate and voltage-gated potassium channels begin to open. This combination of closed sodium channels and open potassium channels leads to the neuron repolarizing and becoming negative again. The neuron continues to repolarize until the cell reaches about –75 mV, the equilibrium potential of potassium ions. At this point the neuron is hyperpolarized, between –70 mV and –75 mV. After hyperpolarization the potassium channels close, and the natural permeability of the neuron to sodium and potassium allows the neuron to return to its resting potential of –70 mV. During the refractory period, which falls after hyperpolarization but before the neuron has returned to its resting potential, the neuron is still capable of triggering an action potential, because the sodium channels can be opened again; however, because the membrane is more negative, it is more difficult to reach the action-potential threshold.
https://en.wikipedia.org/wiki?curid=518565
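The cycle described above (regenerative sodium entry, inactivation, delayed potassium opening, after-hyperpolarization) can be reproduced with the standard Hodgkin–Huxley equations; a compact sketch with textbook squid-axon parameters and a brief, assumed stimulus:

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (mV, ms, mS/cm^2, uF/cm^2).
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.387

a_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
b_m = lambda V: 4.0 * np.exp(-(V + 65) / 18)
a_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
b_h = lambda V: 1.0 / (1 + np.exp(-(V + 35) / 10))
a_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
b_n = lambda V: 0.125 * np.exp(-(V + 65) / 80)

dt, T = 0.01, 50.0
V, m, h, n = -65.0, 0.05, 0.6, 0.32          # resting state
peak = V
for step in range(int(T / dt)):
    t = step * dt
    I = 10.0 if 5.0 <= t <= 6.0 else 0.0     # brief suprathreshold stimulus (assumed)
    INa = gNa * m**3 * h * (V - ENa)         # regenerative sodium current
    IK = gK * n**4 * (V - EK)                # delayed-rectifier potassium current
    IL = gL * (V - EL)
    V += dt * (I - INa - IK - IL) / C        # forward-Euler membrane update
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    peak = max(peak, V)
print(f"spike peak ~ {peak:.0f} mV")         # roughly +40 mV, as described above
```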
856,148
Consider a mechanical system consisting of two partial systems "A" and "B" which interact with each other only during a limited time. Let the "ψ" function [i.e., wavefunction] before their interaction be given. Then the Schrödinger equation will furnish the "ψ" function after the interaction has taken place. Let us now determine the physical state of the partial system "A" as completely as possible by measurements. Then quantum mechanics allows us to determine the "ψ" function of the partial system "B" from the measurements made, and from the "ψ" function of the total system. This determination, however, gives a result which depends upon which of the physical quantities (observables) of "A" have been measured (for instance, coordinates or momenta). Since there can be only one physical state of "B" after the interaction which cannot reasonably be considered to depend on the particular measurement we perform on the system "A" separated from "B" it may be concluded that the "ψ" function is not unambiguously coordinated to the physical state. This coordination of several "ψ" functions to the same physical state of system "B" shows again that the "ψ" function cannot be interpreted as a (complete) description of a physical state of a single system.
https://en.wikipedia.org/wiki?curid=211602
864,965
The adaptive immune response is the body's second line of defense. The cells of the adaptive immune system are extremely specific because during early developmental stages the B and T cells develop antigen receptors that are specific to only certain antigens. This is extremely important for B and T cell activation. B and T cells are extremely dangerous cells, and if they were able to attack without undergoing a rigorous process of activation, a faulty B or T cell could begin exterminating the host's own healthy cells. Activation of naïve helper T cells occurs when antigen-presenting cells (APCs) present foreign antigen via MHC class II molecules on their cell surface. These APCs include dendritic cells, B cells, and macrophages, which are specially equipped not only with MHC class II but also with co-stimulatory ligands that are recognized by co-stimulatory receptors on helper T cells. Without the co-stimulatory molecules, the adaptive immune response would be inefficient and T cells would become anergic. Several T cell subgroups can be activated by specific APCs, and each T cell is specially equipped to deal with each unique microbial pathogen. The type of T cell activated, and the type of response generated, depends in part on the context in which the APC first encountered the antigen. Once helper T cells are activated, they are able to activate naïve B cells in the lymph node. However, B cell activation is a two-step process. Firstly, B cell receptors, which are just Immunoglobulin M (IgM) and Immunoglobulin D (IgD) antibodies specific to the particular B cell, must bind to the antigen, which then results in internal processing so that it is presented on the MHC class II molecules of the B cell. Once this happens, a helper T cell that can identify the antigen bound to the MHC interacts with its co-stimulatory molecule and activates the B cell. As a result, the B cell becomes a plasma cell, which secretes antibodies that act as an opsonin against invaders.
https://en.wikipedia.org/wiki?curid=147561
894,807
In the Copenhagen view of this experiment, Alice's measurement—and particularly her measurement choice—has a direct effect on Bob's state. However, under the assumption of locality, actions on Alice's system do not affect the "true", or "ontic" state of Bob's system. We see that the ontic state of Bob's system must be compatible with one of the quantum states formula_7 or formula_8, since Alice can make a measurement that concludes with one of those states being the quantum description of his system. At the same time, it must also be compatible with one of the quantum states formula_9 or formula_10 for the same reason. Therefore, the ontic state of Bob's system must be compatible with at least two quantum states; the quantum state is therefore not a complete descriptor of his system. Einstein, Podolsky and Rosen saw this as evidence of the incompleteness of the Copenhagen interpretation of quantum theory, since the wavefunction is explicitly not a complete description of a quantum system under this assumption of locality. Their paper concludes:
https://en.wikipedia.org/wiki?curid=8978774
913,839
The quantile function is one way of prescribing a probability distribution, and it is an alternative to the probability density function (pdf) or probability mass function, the cumulative distribution function (cdf) and the characteristic function. The quantile function, "Q", of a probability distribution is the inverse of its cumulative distribution function "F". The derivative of the quantile function, namely the quantile density function, is yet another way of prescribing a probability distribution. It is the reciprocal of the pdf composed with the quantile function.
https://en.wikipedia.org/wiki?curid=10167616
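Numerically, the quantile function can be obtained by inverting the CDF with a root finder; a sketch for the standard normal distribution (the bracketing interval is an assumption that comfortably covers the distribution):

```python
from math import erf, sqrt
from scipy.optimize import brentq

def normal_cdf(x):                     # F for the standard normal distribution
    return 0.5 * (1 + erf(x / sqrt(2)))

def Q(p):                              # quantile function: solve F(x) = p for x
    return brentq(lambda x: normal_cdf(x) - p, -10, 10)

print(round(Q(0.975), 3))              # 1.96, the familiar normal quantile
```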
913,898
Climate change occurs when changes in Earth's climate system result in new weather patterns that remain in place for an extended period of time, from as short as a few decades to as long as millions of years. The climate system receives nearly all of its energy from the sun. The climate system also gives off energy to outer space. The balance of incoming and outgoing energy, and the passage of the energy through the climate system, determines Earth's energy budget. When the incoming energy is greater than the outgoing energy, Earth's energy budget is positive and the climate system is warming. If more energy goes out, the energy budget is negative and Earth experiences cooling. Climate change also influences the average sea level.
https://en.wikipedia.org/wiki?curid=142440
942,543
A basic workflow for reproducible research involves data acquisition, data processing and data analysis. Data acquisition primarily consists of obtaining primary data from a primary source such as surveys, field observations, or experimental research, or obtaining data from an existing source. Data processing involves the processing and review of the raw data collected in the first stage, and includes data entry, data manipulation and filtering, and may be done using software. The data should be digitized and prepared for data analysis. Data may then be analysed with software to interpret or visualise statistics, producing the desired results of the research, such as quantitative results including figures and tables. The use of software and automation enhances the reproducibility of research methods.
https://en.wikipedia.org/wiki?curid=47651
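A minimal acquire → process → analyze pipeline in Python; a fixed random seed and fully scripted steps (synthetic data here, since no real source is specified) make every run repeat exactly:

```python
import numpy as np

def acquire(seed=42):                               # data acquisition (synthetic stand-in)
    return np.random.default_rng(seed).normal(loc=10.0, scale=2.0, size=500)

def process(raw):                                   # data processing: filter the raw values
    return raw[(raw > 4) & (raw < 16)]

def analyze(data):                                  # data analysis: summary statistics
    return {"n": int(data.size), "mean": float(data.mean()), "sd": float(data.std())}

print(analyze(process(acquire())))                  # identical output on every run
```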
944,161
It is often convenient in calculation that the invariant mass of a system is the total energy of the system (divided by "c"²) in the COM frame (where, by definition, the momentum of the system is zero). Since the invariant mass of any system is the same quantity in all inertial frames, it is often calculated from the total energy in the COM frame and then used to calculate system energies and momenta in other frames, where the momenta are not zero and the system's total energy is necessarily different from that in the COM frame. As with energy and momentum, the invariant mass of a system cannot be destroyed or changed, and it is thus conserved, so long as the system is closed to all influences. (The technical term is isolated system, meaning that an idealized boundary is drawn around the system, and no mass/energy is allowed across it.)
https://en.wikipedia.org/wiki?curid=491022
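A short numerical illustration (with c = 1 and made-up four-momenta): the invariant mass computed in the COM frame equals the one computed after a Lorentz boost, even though the energies and momenta differ.

```python
import numpy as np

def invariant_mass(particles):
    """particles: list of four-momenta (E, px, py, pz), units with c = 1."""
    E, px, py, pz = np.sum(particles, axis=0)
    return np.sqrt(E**2 - px**2 - py**2 - pz**2)

def boost_z(p, beta):                     # Lorentz boost along the z axis
    g = 1.0 / np.sqrt(1 - beta**2)
    E, px, py, pz = p
    return (g * (E - beta * pz), px, py, g * (pz - beta * E))

system = [(5.0, 0.0, 0.0, 3.0), (5.0, 0.0, 0.0, -3.0)]   # two particles, COM frame
boosted = [boost_z(p, 0.6) for p in system]
print(invariant_mass(system), invariant_mass(boosted))    # both 10.0
```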
961,328
This means that during a transfer, all communication is started by the Bus Controller, and a terminal device cannot start a data transfer on its own. In the case of an RT to RT transfer the sequence is as follows:
1. An application or function in the subsystem behind the RT interface (e.g. RT1) writes the data that is to be transmitted into a specific (transmit) sub-address (data buffer). The time at which this data is written to the sub-address is not necessarily linked to the time of the transaction, though the interfaces ensure that partially updated data is not transmitted.
2. The Bus Controller commands the RT that is the destination of the data (e.g. RT2) to receive the data at a specified (receive) data sub-address, and then commands RT1 to transmit from the transmit sub-address specified in the command.
3. RT1 transmits a Status word, indicating its current status, and the data. The Bus Controller receives RT1's Status word and sees that the transmit command has been received and actioned without a problem.
4. RT2 receives the data on the shared data bus, writes it into the designated receive sub-address and transmits its Status word. An application or function on the subsystem behind the receiving RT interface may then access the data. Again, the timing of this read is not necessarily linked to that of the transfer.
5. The Bus Controller receives RT2's Status word and sees that the receive command and data have been received and actioned without a problem.
https://en.wikipedia.org/wiki?curid=1441618
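A toy Python sketch of this RT-to-RT sequence (an illustration of the message flow only, not a MIL-STD-1553 implementation; the names and sub-address numbers are hypothetical):

```python
class RemoteTerminal:
    def __init__(self, name):
        self.name, self.subaddr = name, {}
    def write(self, sa, data):            # subsystem side: load a transmit sub-address
        self.subaddr[sa] = data
    def transmit(self, sa):               # bus side: return status word plus data
        return "status-ok", self.subaddr[sa]
    def receive(self, sa, data):          # bus side: store data, return status word
        self.subaddr[sa] = data
        return "status-ok"

class BusController:
    def rt_to_rt(self, src, src_sa, dst, dst_sa):
        # BC issues the receive command (dst) and the transmit command (src);
        # src answers with status + data, dst stores the data and answers with status.
        status_src, data = src.transmit(src_sa)
        status_dst = dst.receive(dst_sa, data)
        assert status_src == status_dst == "status-ok"   # BC checks both status words

rt1, rt2, bc = RemoteTerminal("RT1"), RemoteTerminal("RT2"), BusController()
rt1.write(sa=3, data=[0x1234, 0xBEEF])    # application loads transmit sub-address 3
bc.rt_to_rt(rt1, 3, rt2, 7)
print(rt2.subaddr[7])                     # receiving subsystem reads the data later
```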
965,557
Non-mechanical work also contrasts with shaft work. Shaft work is the other of the two mainly considered kinds of mechanical contact work. It transfers energy by rotation, but it does not eventually change the shape or volume of the system. Because it does not change the volume of the system it is not measured as pressure–volume work, and it is called isochoric work. Considered solely in terms of the eventual difference between initial and final shapes and volumes of the system, shaft work does not make a change. During the process of shaft work, for example the rotation of a paddle, the shape of the system changes cyclically, but this does not make an eventual change in the shape or volume of the system. Shaft work is a kind of contact work, because it occurs through direct material contact with the surrounding matter at the boundary of the system. A system that is initially in a state of thermodynamic equilibrium cannot initiate any change in its internal energy. In particular, it cannot initiate shaft work. This explains the curious use of the phrase "inanimate material agency" by Kelvin in one of his statements of the second law of thermodynamics. Thermodynamic operations or changes in the surroundings are considered to be able to create elaborate changes such as indefinitely prolonged, varied, or ceased rotation of a driving shaft, while a system that starts in a state of thermodynamic equilibrium is inanimate and cannot spontaneously do that. Thus the sign of shaft work is always negative, work being done on the system by the surroundings. Shaft work can hardly be done indefinitely slowly; consequently it always produces entropy within the system, because it relies on friction or viscosity within the system for its transfer. The foregoing comments about shaft work apply only when one ignores that the system can store angular momentum and its related energy.
https://en.wikipedia.org/wiki?curid=3616613
1,020,547
In the original Hopfield model of associative memory, the variables were binary, and the dynamics were described by a one-at-a-time update of the state of the neurons. An energy function quadratic in the activities V_i was defined, and the dynamics consisted of changing the activity of each single neuron i only if doing so would lower the total energy of the system. This same idea was extended to the case of V_i being a continuous variable representing the output of neuron i, with V_i a monotonic function of an input current. The dynamics became expressed as a set of first-order differential equations for which the "energy" of the system always decreased. The energy in the continuous case has one term which is quadratic in the V_i (as in the binary model), and a second term which depends on the gain function (the neuron's activation function). While having many desirable properties of associative memory, both of these classical systems suffer from a small memory storage capacity, which scales linearly with the number of input features.
https://en.wikipedia.org/wiki?curid=1170097
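A compact sketch of the original binary model: Hebbian weights, one-at-a-time sign updates that never increase the quadratic energy, and recall of a corrupted pattern (the sizes and seed are arbitrary; recovery is typical for this load, not guaranteed):

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(3, 64))          # 3 stored patterns, 64 binary neurons
W = sum(np.outer(p, p) for p in patterns) / 64.0      # Hebbian weight matrix
np.fill_diagonal(W, 0)

def energy(s):                                        # quadratic energy function
    return -0.5 * s @ W @ s

s = patterns[0].copy()
s[:8] *= -1                                           # corrupt 8 bits of the first pattern
for _ in range(5):                                    # asynchronous one-at-a-time updates
    for i in rng.permutation(64):
        s[i] = 1 if W[i] @ s >= 0 else -1             # each flip lowers (or keeps) the energy
print("recovered:", bool(np.array_equal(s, patterns[0])), "energy:", energy(s))
```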
1,023,156
In lambda calculus, each function has exactly one parameter. What is thought of as functions with multiple parameters is usually represented in lambda calculus as a function which takes the first argument, and returns a function which takes the rest of the arguments; this is a transformation known as currying. Some programming languages, like ML and Haskell, follow this scheme. In these languages, every function has exactly one parameter, and what may look like the definition of a function of multiple parameters, is actually syntactic sugar for the definition of a function that returns a function, etc. Function application is left-associative in these languages as well as in lambda calculus, so what looks like an application of a function to multiple arguments is correctly evaluated as the function applied to the first argument, then the resulting function applied to the second argument, etc.
https://en.wikipedia.org/wiki?curid=324375
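A small Python sketch of the same idea (Python does not curry automatically, so the nesting is written out explicitly):

```python
def add(x):                 # a "two-argument" function, curried:
    return lambda y: x + y  # takes x, returns a function awaiting y

print(add(2)(3))            # 5 -- left-associative application: (add(2))(3)

def add_uncurried(x, y):    # the conventional multi-parameter form, for contrast
    return x + y
```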
1,024,086
The model evidence p(y ∣ m) is the probability of the data given the model m. It is also known as the marginal likelihood, and as the "prior predictive density". Here, the model is defined by the likelihood function p(y ∣ X, β, σ) and the prior distribution on the parameters, i.e. p(β, σ). The model evidence captures in a single number how well such a model explains the observations. The model evidence of the Bayesian linear regression model presented in this section can be used to compare competing linear models by Bayesian model comparison. These models may differ in the number and values of the predictor variables as well as in their priors on the model parameters. Model complexity is already taken into account by the model evidence, because it marginalizes out the parameters by integrating p(y, β, σ ∣ X) over all possible values of β and σ.
https://en.wikipedia.org/wiki?curid=7519917
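A grid-based sketch of computing and comparing model evidences for two toy models of the same data (the data values, priors and models are all assumed for illustration; actual Bayesian linear regression would use its closed-form evidence):

```python
import numpy as np

y = np.array([0.3, -0.1, 0.4, 0.2])                  # assumed observations
norm = lambda x, mu, s: np.exp(-(x - mu)**2 / (2 * s**2)) / np.sqrt(2 * np.pi * s**2)

# Model A: y_i ~ N(theta, 1) with prior theta ~ N(0, 1); marginalize theta on a grid.
theta = np.linspace(-5, 5, 2001)
dtheta = theta[1] - theta[0]
likelihood = np.prod(norm(y[:, None], theta[None, :], 1.0), axis=0)   # p(y | theta)
evidence_A = np.sum(likelihood * norm(theta, 0.0, 1.0)) * dtheta      # p(y | A)

# Model B: y_i ~ N(0, 1) with no free parameters: the evidence is just the likelihood.
evidence_B = np.prod(norm(y, 0.0, 1.0))

print(evidence_A, evidence_B, "Bayes factor A/B:", evidence_A / evidence_B)
```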
1,040,295
A silicon solar cell was first patented in 1946 by Russell Ohl, working at Bell Labs, and first publicly demonstrated at the same research institution by Fuller, Chapin, and Pearson in 1954; however, these first proposals were monofacial cells, not designed to have their rear face active. The first theoretical proposal of a bifacial solar cell appears in a Japanese patent with a priority date of 4 October 1960, filed by Hiroshi Mori while working for the company Hayakawa Denki Kogyo Kabushiki Kaisha (in English, Hayakawa Electric Industry Co. Ltd.), which later developed into today's Sharp Corporation. The proposed cell was a two-junction pnp structure with contact electrodes attached to two opposite edges. However, the first demonstrations of bifacial solar cells and panels were carried out in the Soviet space program on the Salyut 3 (1974) and Salyut 5 (1976) LEO military space stations. These bifacial solar cells were developed and manufactured by Bordina et al. at the VNIIT (All Union Scientific Research Institute of Energy Sources) in Moscow, which in 1975 became the Russian solar cell manufacturer KVANT. In 1974 this team filed a US patent in which the cells were proposed with the shape of mini-parallelepipeds of maximum size 1 mm × 1 mm × 1 mm connected in series, so that there were 100 cells/cm². As in modern-day BSCs, they proposed the use of isotype pp+ junctions close to one of the light-receiving surfaces. In Salyut 3, small experimental panels with a total cell surface of 24 cm² demonstrated an increase in energy generation per satellite revolution, due to Earth's albedo, of up to 34% compared to the monofacial panels of the time. A 17–45% gain due to the use of bifacial panels (0.48 m², 40 W) was recorded during the flight of the Salyut 5 space station. Simultaneously with this Russian research, on the other side of the Iron Curtain, the Laboratory of Semiconductors at the School of Telecommunication Engineering of the Technical University of Madrid, led by Professor Antonio Luque, independently carried out a broad research program seeking the development of industrially feasible bifacial solar cells. While Mori's patent and the VNIIT-KVANT spaceship-borne prototypes were based on tiny cells without a surface metal grid, intricately interconnected in the style of the microelectronic devices then in their infancy, Luque filed two Spanish patents in 1976 and 1977 and one in the United States in 1977 that were precursors of modern bifacial cells. Luque's patents were the first to propose BSCs with one cell per silicon wafer, as was then, and still is, the case for monofacial cells, with metal grids on both surfaces. They considered both the npp+ structure and the pnp structures. Development of BSCs at the Laboratory of Semiconductors was tackled in a three-fold approach that resulted in three PhD theses, authored by Andrés Cuevas (1980), Javier Eguren (1981) and Jesús Sangrador (1982), the first two having Luque as doctoral advisor, while Dr. Gabriel Sala, from the same group, conducted the third. Cuevas' thesis consisted of constructing the first of Luque's patents, the one of 1976, which, owing to its npn structure similar to that of a transistor, was dubbed the "transcell". Eguren's thesis dealt with the demonstration of Luque's second patent, of 1977, with an npp+ doping profile, with the pp+ isotype junction next to the cell's rear surface, creating what is usually referred to as a back surface field (BSF) in solar cell technology. This work gave way to several publications and additional patents.
In particular, it showed the beneficial effect of reducing the p-doping in the base: the voltage reduction at the emitter junction (the front p-n junction) was compensated by the voltage increase at the rear isotype junction, while at the same time the higher diffusion length of minority carriers increased the current output under bifacial illumination. Sangrador's thesis, the third development route at the Technical University of Madrid, proposed the so-called vertical multijunction edge-illuminated solar cell, in which pnn junctions were stacked, connected in series and illuminated by their edges; these were high-voltage cells that required no surface metal grid to extract the current. In 1979 the Laboratory for Semiconductors became the Institute for Solar Energy (IES-UPM), which, with Luque as its first director, continued intense research on bifacial solar cells well into the first decade of the 21st century, with remarkable results. For example, in 1994, two Brazilian PhD students at the Institute of Solar Energy, Adriano Moehlecke and Izete Zanesco, together with Luque, developed and produced a bifacial solar cell yielding 18.1% efficiency on the front face and 19.1% on the rear face: a record bifaciality of 103% (at that time the record efficiency for monofacial cells was slightly below 22%).
https://en.wikipedia.org/wiki?curid=65342312
1,048,200
In the United Kingdom, at least some statistics has been taught in schools since the 1930s. At present, A-level qualifications (typically taken by 17- to 18-year-olds) are being developed in "Statistics" and "Further Statistics". The coverage of the former includes: Probability; Data Collection; Descriptive Statistics; Discrete Probability Distributions; Binomial Distribution; Poisson Distributions; Continuous Probability Distributions; The Normal Distribution; Estimation; Hypothesis Testing; Chi-Squared; Correlation and Regression. The coverage of "Further Statistics" includes: Continuous Probability Distributions; Estimation; Hypothesis Testing; One Sample Tests; Hypothesis Testing; Two Sample Tests; Goodness of Fit Tests; Experimental Design; Analysis of Variance (Anova); Statistical Process Control; Acceptance Sampling. The Centre for Innovation in Mathematics Teaching (CIMT) has online course notes for these sets of topics. Revision notes for an existing qualification indicate a similar coverage. At an earlier age (typically 15–16 years) GCSE qualifications in mathematics contain "Statistics and Probability" topics on: Probability; Averages; Standard Deviation; Sampling; Cumulative Frequency Graphs (including median and quantiles); Representing Data; Histograms. The UK's Office for National Statistics has a webpage leading to material suitable for both teachers and students at school level. In 2004 the Smith inquiry made the following statement:
https://en.wikipedia.org/wiki?curid=24985094
1,080,350
Penicillin, its derivatives such as methicillin, and other beta-lactam antibiotics inhibit the activity of the cell-wall-forming penicillin-binding protein family (PBP 1, 2, 3 and 4). This disrupts the cell wall structure, causing the cytoplasm to leak and the cell to die. However, "mecA" codes for PBP2a, which has a lower affinity for beta-lactams and so keeps the structural integrity of the cell wall, preventing cell death. Bacterial cell wall synthesis in "S. aureus" depends on transglycosylation, to form a linear polymer of sugar monomers, and transpeptidation, to form interlinking peptides that strengthen the newly developed cell wall. PBPs have a transpeptidase domain, and scientists once thought that only monofunctional enzymes catalyze transglycosylation, yet PBP2 has domains to perform both essential processes. When antibiotics enter the medium, they bind to the transpeptidation domain and inhibit PBPs from cross-linking muropeptides, therefore preventing the formation of a stable cell wall. Through cooperative action, however, PBP2a, which lacks the proper binding site for the antibiotics, continues transpeptidation, preventing cell wall breakdown. The functionality of PBP2a depends on two structural factors in the cell wall of "S. aureus". First, for PBP2a to fit properly onto the cell wall and continue transpeptidation, it needs the proper amino acid residues, specifically a pentaglycine residue and an amidated glutamate residue. Second, PBP2a has effective transpeptidase activity but lacks the transglycosylation domain of PBP2, which builds the backbone of the cell wall from polysaccharide monomers, so PBP2a must rely on PBP2 to continue this process. The latter forms a therapeutic target to improve the ability of beta-lactams to prevent cell wall synthesis in resistant "S. aureus". Identifying inhibitors of the glycosylases involved in cell wall synthesis and modulating their expression could resensitize these previously resistant bacteria to beta-lactam treatment. For example, epicatechin gallate, a compound found in green tea, has shown signs of lowering the resistance to beta-lactams, to the point where oxacillin, which acts on PBP2 and PBP2a, effectively inhibits cell wall formation.
https://en.wikipedia.org/wiki?curid=19372852
1,130,514
It has been mentioned that a quantum well is essentially a potential well that confines particles to move in two dimensions instead of three, forcing them to occupy a planar region. In coupled quantum wells there are two possible ways for electrons and holes to be bound into an exciton: the indirect exciton and the direct exciton. In an indirect exciton, the electron and hole are in different quantum wells, in contrast with a direct exciton, where the electron and hole are located in the same well. In the case where the quantum wells are identical, both levels have a two-fold degeneracy. The direct exciton level is lower than the indirect exciton level because of the stronger Coulomb interaction. Also, the indirect exciton has an electric dipole moment normal to the coupled quantum well, and thus a moving indirect exciton has an in-plane magnetic moment perpendicular to its velocity. Any interaction of its electric dipole with a normal electric field lowers one of the indirect exciton sub-levels, and in sufficiently strong electric fields the moving indirect exciton becomes the ground excitonic level. With these effects in mind, one can select the velocity so that the magnetic dipole interacts with an in-plane magnetic field. This displaces the minimum of the dispersion law away from the radiation zone. The importance of this lies in the fact that electric and in-plane magnetic fields normal to the coupled quantum wells can control the dispersion of the indirect exciton. A normal electric field is needed for tuning the transition direct exciton → indirect exciton + phonon into resonance, and its magnitude can be a linear function of the magnitude of the in-plane magnetic field.
https://en.wikipedia.org/wiki?curid=7206727
1,154,864
External memory algorithms are analyzed in an idealized model of computation called the external memory model (or I/O model, or disk access model). The external memory model is an abstract machine similar to the RAM machine model, but with a cache in addition to main memory. The model captures the fact that read and write operations are much faster in a cache than in main memory, and that reading long contiguous blocks is faster than reading randomly using a disk read-and-write head. The running time of an algorithm in the external memory model is defined by the number of reads and writes to memory required. The model was introduced by Alok Aggarwal and Jeffrey Vitter in 1988. The external memory model is related to the cache-oblivious model, but algorithms in the external memory model may know both the block size and the cache size. For this reason, the model is sometimes referred to as the cache-aware model.
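Costs in this model are counted in block transfers rather than CPU steps. As a minimal illustration (the function and parameter names below are ours; the sorting expression is the familiar Θ((N/B)·log_{M/B}(N/B)) bound from Aggarwal and Vitter):

```python
import math

def scan_ios(n: int, b: int) -> int:
    """I/Os to read n contiguous records with block size b: ceil(n/b)."""
    return math.ceil(n / b)

def sort_ios(n: int, m: int, b: int) -> float:
    """Aggarwal-Vitter sorting bound: (n/b) * log_{m/b}(n/b) block transfers,
    where m is the number of records that fit in internal memory."""
    n_blocks = n / b
    fanout = m / b  # how many blocks fit in memory at once (merge fan-in)
    return n_blocks * math.log(n_blocks, fanout)

# 10^9 records, 10^6-record memory, 10^3-record blocks:
print(scan_ios(10**9, 10**3))                # 1,000,000 transfers for one scan
print(round(sort_ios(10**9, 10**6, 10**3)))  # ~2,000,000 transfers to sort
```

Note that a cache-aware algorithm can pick its merge fan-in from m and b directly, which is exactly the knowledge the cache-oblivious model denies.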
https://en.wikipedia.org/wiki?curid=1881722
1,190,210
Given the genetic makeup of an organism, the complete set of possible reactions constitutes its reactome. Reactome, located at http://www.reactome.org, is a curated, peer-reviewed resource of human biological process/pathway data. The basic unit of the Reactome database is a reaction; reactions are then grouped into causal chains to form pathways. The Reactome data model makes it possible to represent many diverse processes in the human system, including the pathways of intermediary metabolism, regulatory pathways, and signal transduction, as well as high-level processes such as the cell cycle. Reactome provides a qualitative framework on which quantitative data can be superimposed. Tools have been developed to facilitate custom data entry and annotation by expert biologists, and to allow visualization and exploration of the finished dataset as an interactive process map. Although the primary curational domain is pathways from "Homo sapiens", electronic projections of human pathways onto other organisms are regularly created via putative orthologs, thus making Reactome relevant to model organism research communities. The database is publicly available under open source terms, which allows both its content and its software infrastructure to be freely used and redistributed. Studying whole transcriptional profiles and cataloging protein–protein interactions has yielded much valuable biological information, from the genome or proteome to the physiology of an organism, an organ, a tissue or even a single cell. The Reactome database contains a framework of possible reactions which, when combined with expression and enzyme kinetic data, provides the infrastructure for quantitative models, and therefore an integrated view of biological processes which links gene products and can be systematically mined using bioinformatics applications. Reactome data are available in a variety of standard formats, including BioPAX, SBML and PSI-MI, and these also enable data exchange with other pathway databases, such as the Cyc databases, KEGG and aMAZE, and molecular interaction databases, such as BIND and HPRD. The next data release will cover apoptosis, including the death receptor signaling pathways and the Bcl2 pathways, as well as pathways involved in hemostasis. Other topics currently under development include several signaling pathways, mitosis, visual phototransduction and hematopoiesis. In summary, Reactome provides high-quality curated summaries of fundamental biological processes in humans in the form of biologist-friendly visualizations of pathway data, and is an open-source project.
https://en.wikipedia.org/wiki?curid=1872854
1,205,452
Domestically, the Clean Energy Act 2011 addresses GHG with an emissions cap, carbon price, and subsidies. Emissions by the electric sector are addressed by Renewable Energy targets at multiple scales, the Australian Renewable Energy Agency (ARENA), the Clean Energy Finance Corporation (CEFC), carbon capture and storage flagships, and feed-in tariffs on solar panels. Emissions by the industrial sector are addressed by the Energy Efficiency Opportunities (EEO) program. Emissions by the building sector are addressed by building codes, minimum energy performance standards, the Commercial Building Disclosure program, state energy-saving obligations, and the National Energy Saving Initiative. Emissions by the transportation sector are addressed by reduced fuel tax credits and vehicle emissions performance standards. Emissions by the agricultural sector are addressed by the Carbon Farming Initiative and state land-clearing laws. Emissions by the land use sector are addressed by the Clean Energy Future Package, which consists of the Carbon Farming Futures program, Diversity Fund, Regional Natural Resources Management Planning for Climate Change Fund, Indigenous Carbon Farming Fund, Carbon Pollution Reduction Scheme (CPRS), and Carbon Farming Skills program. State energy saving schemes vary by state, with the Energy Saving Scheme (ESS) in New South Wales, the Residential Energy Efficiency Scheme (REES) in South Australia, the Energy Saver Incentive Scheme (ESI) in Victoria, and the Energy Efficiency Improvement Scheme (EEIS) in the Australian Capital Territory.
https://en.wikipedia.org/wiki?curid=20802345
1,231,296
where $A$ is the area of the moving plate and of the stagnant plate, and $y$ is the spatial coordinate normal to the plates. In this experimental setup, a value for the force $F$ is first selected. Then a maximum velocity $u_{\max}$ is measured, and finally both values are entered into the equation to calculate the viscosity. This gives one value for the viscosity of the selected fluid. If another value of the force is selected, another maximum velocity will be measured. This will result in another viscosity value if the fluid is a non-Newtonian fluid such as paint, but it will give the same viscosity value for a Newtonian fluid such as water, petroleum oil or gas. If another parameter, such as the temperature $T$, is changed and the experiment is repeated with the same force, a new value for the viscosity will be calculated, for both non-Newtonian and Newtonian fluids. The great majority of material properties vary as a function of temperature, and this holds for viscosity as well. The viscosity is also a function of pressure and, of course, of the material itself. For a fluid mixture, this means that the shear viscosity will also vary with the fluid composition. Mapping the viscosity as a function of all these variables requires a large sequence of experiments, which generates an even larger set of numbers called measured data, observed data or observations. Prior to, or at the same time as, the experiments, a material property model (or material model for short) is proposed to describe or explain the observations. This mathematical model is called the constitutive equation for the shear viscosity. It is usually an explicit function that contains some empirical parameters that are adjusted to match the observations as well as the mathematical function is able to.
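For the Newtonian case the calculation in this plate experiment is one line. A minimal numerical sketch, assuming a linear velocity profile across a gap of width `gap` between the plates (all names are ours):

```python
def shear_viscosity(force, area, gap, v_max):
    """Newton's law of viscosity for planar Couette flow:
    F/A = eta * (du/dy) ~ eta * v_max/gap,  so  eta = F*gap/(A*v_max)."""
    return force * gap / (area * v_max)

# Consistent with water (eta ~ 1e-3 Pa*s): a 1 N force on a 1 m^2 plate
# across a 1 mm gap produces a 1 m/s plate velocity.
print(shear_viscosity(force=1.0, area=1.0, gap=1e-3, v_max=1.0))  # 0.001
```

For a Newtonian fluid, doubling the force doubles the measured v_max and the computed viscosity is unchanged; for a non-Newtonian fluid the two runs disagree, which is exactly the behavior described above.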
https://en.wikipedia.org/wiki?curid=56541610
1,326,567
Complete stimulation of T helper cells requires the B7 molecule present on the antigen-presenting cell to bind with the CD28 molecule present on the T cell surface (in close proximity to the T cell receptor). Likewise, a second interaction, between the CD40 ligand or CD154 (CD40L) present on the T cell surface and CD40 present on the B cell surface, is also necessary. The same interactions that stimulate the T helper cell also stimulate the B cell, hence the term "costimulation". The entire mechanism ensures that an activated T cell only stimulates a B cell that recognizes the antigen containing the "same" epitope as that recognized by the T cell receptor of the "costimulating" T helper cell. Apart from the direct costimulation, the B cell is also stimulated by certain growth factors, viz., interleukins 2, 4, 5, and 6, in a paracrine fashion. These factors are usually produced by the newly activated T helper cell. However, this activation occurs only after the B cell receptor present on a memory or a naive B cell has itself bound to the corresponding epitope, without which the initiating steps of phagocytosis and antigen processing would not have occurred.
https://en.wikipedia.org/wiki?curid=9335254
1,362,559
By introducing more energy thresholds above the low-energy threshold, a PCD can be divided into several discrete energy bins. Each registered photon is thus assigned to a specific bin depending on its energy, such that each pixel measures a histogram of the incident X-ray spectrum. This spectral information provides several advantages over the integrated deposited energy of an EID. First, it makes it possible to quantitatively determine the material composition of each pixel in the reconstructed CT image, as opposed to the estimated average linear attenuation coefficient obtained in a conventional CT scan. It turns out that such a basis material decomposition, using at least two energy bins, can adequately account for all elements found in the body and increases the contrast between tissue types. Further, the spectral information can be used to remove beam-hardening artefacts. These arise because of the higher linear attenuation of most materials at lower energies, which shifts the mean energy of the X-ray spectrum towards higher energies as the beam passes through the object. By comparing the ratios of counts in different energy bins with those of the attenuated beam, the amount of beam hardening can be accounted for (either explicitly or implicitly in the reconstruction) using a PCD. Finally, using more than two energy bins makes it possible to discriminate between dense bone and calcifications on the one hand, and heavier elements (commonly iodine or gadolinium) used as contrast agents on the other. This has the potential to reduce the X-ray dose from a contrast scan by removing the need for a reference scan before contrast injection. Although spectral CT is already clinically available in the form of dual-energy scanners, photon-counting CT offers a number of advantages. A PCD can implement more than two energy thresholds with a higher degree of separation than is possible in dual-energy CT. This improvement in energy resolution translates to a higher contrast-to-noise ratio in the image, in particular in contrast-enhanced and material-selective images. Also, it can be shown that at least three energies are necessary to simultaneously decompose both tissue and contrast medium. More energy bins also allow for simultaneously differentiating between different contrast agents.
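A two-bin, two-material decomposition reduces to solving a small linear system per ray. A minimal sketch with made-up, uncalibrated coefficients (real systems calibrate these per bin and detector):

```python
import numpy as np

# Effective linear attenuation coefficients (1/cm) of two basis materials
# (e.g., water and bone) in a low- and a high-energy bin. The numbers are
# illustrative placeholders, not calibrated values.
A = np.array([[0.25, 0.55],    # low-energy bin:  [water, bone]
              [0.18, 0.30]])   # high-energy bin: [water, bone]

# Measured -log(count ratios) for one detector pixel in the two bins.
p = np.array([1.30, 0.84])

# Solve A @ t = p for the basis-material path lengths t (cm) along the ray.
t = np.linalg.solve(A, p)
print(t)  # [3. 1.] -> 3 cm of "water", 1 cm of "bone" for these inputs
```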
https://en.wikipedia.org/wiki?curid=55862624
1,391,799
DB2, VSAM, and other data storage formats achieve greater I/O performance thanks to a new System z9 feature called a MIDAW. The System z9 also introduces the zIIP (System z9 Integrated Information Processor), a new type of processor that accelerates certain specific DB2 tasks. Modified Indirect Data Address Words (MIDAWs) are a channel programming capability of the IBM System z9 processor range and all subsequent ranges. The MIDAW facility is an extension of the pre-existing Indirect Data Address Word (IDAW) channel programming capability, providing support for more efficient FICON channel programs. MIDAWs allow ECKD channel programs to read and write to many storage locations using one channel command, which means fewer signals up and down the channel are required to transfer the same amount of data. This reduction is particularly noticeable for Extended Format data sets accessed through Media Manager. Examples include Extended Format Sequential data sets, Extended Format VSAM data sets, and certain types of DB2 tablespaces. While each of these data set organizations has alternatives, each has a distinct set of advantages, whether in the area of performance, space saving (through hardware-assisted data compression), or scalability (by allowing an individual data set to exceed 4 GiB).
https://en.wikipedia.org/wiki?curid=2458789
1,421,902
The Real-Time Solar Wind (RTSW) system continuously monitors the solar wind and produces warnings of impending major geomagnetic activity, up to one hour in advance. Warnings and alerts issued by NOAA allow those with systems sensitive to such activity to take preventative action. The RTSW system gathers solar wind and energetic particle data at high time resolution from four ACE instruments (MAG, SWEPAM, EPAM, and SIS), packs the data into a low-rate bit stream, and broadcasts the data continuously. NASA sends real-time data to NOAA each day when downloading science data. With a combination of dedicated ground stations (CRL in Japan and RAL in Great Britain) and time on existing ground tracking networks (NASA's DSN and the USAF's AFSCN), the RTSW system can receive data 24 hours per day throughout the year. The raw data are immediately sent from the ground station to the Space Weather Prediction Center in Boulder, Colorado, processed, and then delivered to its Space Weather Operations Center, where they are used in daily operations; the data are also delivered to the CRL Regional Warning Center at Hiraiso Station, Japan, and to the USAF 55th Space Weather Squadron, and placed on the World Wide Web. The data are downloaded, processed and dispersed within 5 minutes of leaving ACE. The RTSW system also uses the low-energy energetic particles to warn of approaching interplanetary shocks, and to help monitor the flux of high-energy particles that can produce radiation damage in satellite systems.
https://en.wikipedia.org/wiki?curid=1009525
1,531,053
Overall, the status of scientific data remains a flexible point of discussion among individual researchers, communities and policy-makers: "In broader terms, whatever 'data' is of interest to researchers should be treated as 'research data'." Important policy reports, like the 2012 collective synthesis of the National Academies of Science on data citation, have intentionally adopted a relative and nominalist definition of data: "we will devote little time to definitional issues (e.g., what are data?), except to acknowledge that data often exist in the eyes of the beholder." For Christine Borgman, the main issue is not to define scientific data ("what are data") but to contextualize the point at which data become a focal point of discussion within a discipline, an institution or a national research program ("when are data"). In the 2010s, the expansion of available data sources and the growing sophistication of data analysis methods expanded the range of disciplines primarily affected by data management issues to "computational social science, digital humanities, social media data, citizen science research projects, and political science."
https://en.wikipedia.org/wiki?curid=31915311
1,573,134
Patch-seq integrative datasets allow for a comprehensive characterization of cell types, particularly of neurons. Neuroscientists have applied this technique in a variety of experiments and protocols to discover new cell subtypes based on correlations between transcriptomic data and neuronal morpho-electric properties. By applying machine learning to patch-seq data, it is possible to study transcriptomic data and link them to their respective morpho-electric properties. Having a confirmed ground truth for robust cell type classification allows researchers to examine the function of specific neuron types and subtypes in complex processes such as behavior and language, and in the processes underlying neurological and psychiatric diseases. Comparison of proteomics with transcriptomics has shown that transcriptional data do not necessarily translate into the same protein expression; likewise, the ability to compare the ground truth of a neuron's phenotype, obtained from classical classification methods, with transcriptomic data is important for neuroscience. Some patch-seq experiments support transcriptomic results, but others have found cases where the morphology of similar transcriptomically defined cell types in different brain regions did not match up. The technique is particularly well suited to neuroscience, but in general any tissue where it is of interest to simultaneously know electrophysiology, morphology, and transcriptomics would find use for patch-seq. For instance, patch-seq has also been applied to non-neuronal tissues, such as pancreatic islets for studying diabetes.
https://en.wikipedia.org/wiki?curid=66692692
1,601,103
One of the problems facing real-time ocean observatories is the ability to provide a fast and accurate assessment of data quality. Ocean Networks Canada is in the process of implementing real-time quality control on incoming data. For scalar data, the aim is to meet the guidelines of the Quality Assurance of Real Time Oceanographic Data (QARTOD) group. QARTOD is a US organization tasked with identifying issues involved with incoming real-time data from the U.S. Integrated Ocean Observing System (IOOS). A large portion of their agenda is to create guidelines for how the quality of real-time data is to be determined and reported to the scientific community. Real-time data quality testing at Ocean Networks Canada includes tests designed to catch instrument failures and major spikes or data dropouts before the data are made available to the user. Real-time quality tests include meeting instrument manufacturers' standards and overall observatory/site ranges determined from previous data. Due to the positioning of some instrument platforms in highly productive areas, dual-sensor tests have also been designed, e.g., for some conductivity sensors. The quality control testing is split into three separate categories. The first category runs in real time and tests the data before they are parsed into the database. The second category is delayed-mode testing, where archived data are subject to testing after a certain period of time. The third category is manual quality control by an Ocean Networks Canada data expert.
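A minimal sketch of what the first (real-time) category might look like, in the spirit of QARTOD-style gross-range and spike tests; the thresholds, flag values, and data here are illustrative, not Ocean Networks Canada's actual configuration:

```python
def qc_flags(samples, lo, hi, spike_threshold):
    """Flag each sample: 1 = pass, 4 = fail (QARTOD-style flag values).
    A gross-range test plus a simple spike test against neighbours."""
    flags = []
    for i, x in enumerate(samples):
        if not (lo <= x <= hi):
            flags.append(4)  # outside the sensor/site range
        elif 0 < i < len(samples) - 1:
            neighbour_mean = (samples[i - 1] + samples[i + 1]) / 2
            flags.append(4 if abs(x - neighbour_mean) > spike_threshold else 1)
        else:
            flags.append(1)  # edge samples get only the range test
    return flags

# Temperatures (deg C) with one spike:
data = [8.1, 8.2, 15.0, 8.3, 8.4]
print(qc_flags(data, lo=-2.0, hi=20.0, spike_threshold=5.0))
# [1, 1, 4, 1, 1]
```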
https://en.wikipedia.org/wiki?curid=19175774
1,603,986
The use of 3D cell cultures in microfluidic devices has led to a field of study called organ-on-a-chip. The first report of these types of microfluidic cultures was used to study the toxicity of naphthalene metabolites on the liver and lung (Viravaidya et al.). These devices can grow a stripped-down version of an organ-like system that can be used to understand many biological processes. By adding an additional dimension, more advanced cell architectures can be achieved, and cell behavior is more representative of "in vivo" dynamics; cells can engage in enhanced communication with neighboring cells, and cell-extracellular matrix interactions can be modeled. In these devices, chambers or collagen layers containing different cell types can interact with one another for multiple days while various channels deliver nutrients to the cells. An advantage of these devices is that tissue function can be characterized and observed under controlled conditions (e.g., the effect of shear stress on cells, the effect of cyclic strain or other forces) to better understand the overall function of the organ. While these 3D models often model organ function at the cellular level better than 2D models, there are still challenges. Some of the challenges include: imaging of the cells, control of gradients in static models (i.e., without a perfusion system), and difficulty recreating vasculature. Despite these challenges, 3D models are still used as tools for studying and testing drug responses in pharmacological studies. In recent years, microfluidic devices have been developed that reproduce the complex "in vivo" microvascular network. Organs-on-a-chip have also been used to replicate very complex systems like lung epithelial cells in an exposed airway, and provide valuable insight into how multicellular systems and tissues function "in vivo". These devices are able to create a physiologically realistic 3D environment, which is desirable as a tool for drug screening, drug delivery, cell-cell interactions, tumor metastasis, etc. In one study, researchers grew tumor cells and tested the delivery of the drugs cisplatin, resveratrol, and tirapazamine (TPZ), and then measured the effects the drugs had on cell viability.
https://en.wikipedia.org/wiki?curid=50311973
1,618,324
In the original Hopfield model of associative memory, the variables were binary, and the dynamics were described by a one-at-a-time update of the state of the neurons. An energy function quadratic in the neuron activities $V_i$ was defined, and the dynamics consisted of changing the activity of each single neuron $i$ only if doing so would lower the total energy of the system. This same idea was extended to the case of $V_i$ being a continuous variable representing the output of neuron $i$, with $V_i$ a monotonic function of an input current. The dynamics became expressed as a set of first-order differential equations for which the "energy" of the system always decreased. The energy in the continuous case has one term which is quadratic in the $V_i$ (as in the binary model), and a second term which depends on the gain function (the neuron's activation function). While having many desirable properties of associative memory, both of these classical systems suffer from a small memory storage capacity, which scales linearly with the number of input features.
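A minimal sketch of the binary model (one stored pattern, Hebbian weights with zero diagonal, and asynchronous sign updates, which never increase the quadratic energy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Store one binary (+/-1) pattern with the Hebbian rule, zero diagonal.
pattern = rng.choice([-1, 1], size=8)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

def energy(state):
    """Quadratic Hopfield energy E = -1/2 * sum_ij W_ij V_i V_j."""
    return -0.5 * state @ W @ state

state = pattern.copy()
state[:3] *= -1                  # corrupt three of the eight bits
for _ in range(5):               # a few one-at-a-time update sweeps
    for i in range(len(state)):
        # Set neuron i to the sign of its input field; with symmetric W
        # and zero diagonal this step cannot raise the energy.
        state[i] = 1 if W[i] @ state >= 0 else -1

print(energy(state), np.array_equal(state, pattern))  # recovers the pattern
```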
https://en.wikipedia.org/wiki?curid=68440670
1,634,641
An ideal quantum system is not realistic because it should be completely isolated while, in practice, it is influenced by the coupling to an environment, which typically has a large number of degrees of freedom (for example an atom interacting with the surrounding radiation field). A complete microscopic description of the degrees of freedom of the environment is typically too complicated. Hence, one looks for simpler descriptions of the dynamics of the open system. In principle, one should investigate the unitary dynamics of the total system, i.e. the system and the environment, to obtain information about the reduced system of interest by averaging the appropriate observables over the degrees of freedom of the environment. To model the dissipative effects due to the interaction with the environment, the Schrödinger equation is replaced by a suitable master equation, such as a Lindblad equation or a stochastic Schrödinger equation in which the infinite degrees of freedom of the environment are "synthesized" as a few quantum noises. Mathematically, time evolution in a Markovian open quantum system is no longer described by means of one-parameter groups of unitary maps, but one needs to introduce quantum Markov semigroups.
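For reference, the Lindblad equation mentioned above has the standard (GKSL) form, where H is the system Hamiltonian, the L_k are jump operators encoding the coupling to the environment, and the rates γ_k are non-negative:

```latex
\frac{d\rho}{dt} \;=\; -\frac{i}{\hbar}\,[H,\rho]
\;+\; \sum_k \gamma_k \left( L_k \rho L_k^{\dagger}
\;-\; \tfrac{1}{2}\,\{ L_k^{\dagger} L_k ,\, \rho \} \right)
```

The first term is the usual unitary evolution; the dissipator sum is what breaks unitarity and generates the quantum Markov semigroup.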
https://en.wikipedia.org/wiki?curid=67944523
1,689,913
The Hurricane Weather Research and Forecasting (HWRF) model is a specialized version of the Weather Research and Forecasting (WRF) model and is used to forecast the track and intensity of tropical cyclones. The model was developed by the National Oceanic and Atmospheric Administration (NOAA), the U.S. Naval Research Laboratory, the University of Rhode Island, and Florida State University. It became operational in 2007. Despite improvements in track forecasting, predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill than dynamical guidance. Other than the specialized guidance, global guidance such as the GFS, Unified Model (UKMET), NOGAPS, Japanese Global Spectral Model (GSM), European Centre for Medium-Range Weather Forecasts model, France's Action de Recherche Petite Echelle Grande Echelle (ARPEGE) and Aire Limitée Adaptation Dynamique Initialisation (ALADIN) models, India's National Centre for Medium Range Weather Forecasting (NCMRWF) model, Korea's Global Data Assimilation and Prediction System (GDAPS) and Regional Data Assimilation and Prediction System (RDAPS) models, Hong Kong/China's Operational Regional Spectral Model (ORSM), and the Canadian Global Environmental Multiscale Model (GEM) are used for track and intensity purposes.
https://en.wikipedia.org/wiki?curid=4142447
1,835,180
Computer system architectures such as Hadoop and HPCC which can support data-parallel applications are a potential solution to the terabyte and petabyte scale data processing requirements of data-intensive computing. Clusters of commodity hardware are commonly used to address Big Data problems. The fundamental challenges for Big Data applications and data-intensive computing are managing and processing exponentially growing data volumes, significantly reducing associated data analysis cycles to support practical, timely applications, and developing new algorithms which can scale to search and process massive amounts of data. The National Science Foundation has identified key issues related to data-intensive computing problems, such as the programming abstractions, including models, languages, and algorithms, which allow a natural expression of parallel processing of data. Declarative, data-centric programming languages are well-suited to this class of problems.
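As a toy illustration of that declarative, data-parallel style, here is a MapReduce-shaped word count; a real Hadoop or HPCC job runs the same two phases, but distributed across a cluster:

```python
from collections import Counter
from functools import reduce

docs = ["big data data", "data intensive computing"]

# Map phase: each document independently emits (word, count) pairs.
mapped = [Counter(doc.split()) for doc in docs]

# Reduce phase: partial counts are merged associatively, so the merging
# itself can be parallelized and spread across many machines.
totals = reduce(lambda a, b: a + b, mapped)
print(totals)  # Counter({'data': 3, 'big': 1, 'intensive': 1, 'computing': 1})
```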
https://en.wikipedia.org/wiki?curid=31733715
1,899,201
The purpose of establishing renewable energy cooperatives touches upon environmental, economic, and social issues. They work, first, to encourage a shift in the energy system toward cleaner and more sustainable renewable energy sources, helping to reduce the impacts of climate change. They also disseminate renewable energy technologies by serving as an alternative to the current system of state- or corporate-owned and highly centralized energy generation. Doing so democratizes the system so that energy generation is controlled by many local movements rather than by a few companies; for example, five companies in the case of Spain. By spreading out energy generation, local economies benefit and employment opportunities are generated. The increased competition also helps stabilize energy prices, supporting equal access to energy. Local producers can even profit by selling energy back to the centralized grid. Often, the revenue generated by a cooperative is reinvested in developing new facilities or improving current technology. RE co-ops give more power and control to citizens, making them less dependent on external energy suppliers and giving them more influence on and engagement with their energy sourcing.
https://en.wikipedia.org/wiki?curid=65691512
1,920,572
Type-free λ-calculus treats functions as rules and does not differentiate between functions and the objects to which they are applied. A by-product of type-free λ-calculus is a notion of effective computability equivalent to general recursion and Turing machines. The set of λ-terms can be considered a functional topology in which a function space can be embedded, meaning that λ mappings within the space X are such that λ: X → X. Introduced in November 1969, Dana Scott's untyped set-theoretic model constructed a proper topology for any λ-calculus model whose function space is limited to continuous functions. The result of a Scott-continuous λ-calculus topology is a function space built upon a programming semantics allowing fixed-point combinators, such as the Y combinator, and data types. By 1971, λ-calculus was equipped to define any sequential computation and could be easily adapted to parallel computations. The reducibility of all computations to λ-calculus allows these λ-topological properties to be adopted by all programming languages.
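A small sketch of the fixed-point idea in Python; the eta-expanded Z combinator is used instead of Y because Python evaluates eagerly, and the names are ours:

```python
# Call-by-value fixed-point (Z) combinator, a strict variant of the
# Y combinator that terminates under Python's eager evaluation.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Define factorial without explicit recursion: fact = Z(F) is a fixed
# point of F, i.e. it satisfies fact = F(fact).
F = lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1)
fact = Z(F)
print(fact(5))  # 120
```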
https://en.wikipedia.org/wiki?curid=36075414
1,925,475
As a system is formed from a solid surface and a drop of liquid, energy minima and maxima are produced by the free energy of the system. When the solid surface is rough or chemically heterogeneous, the system, which is made up of a solid, a liquid, and a fluid, can have multiple minima of the free energy at different points. One of these minima is called the global minimum. The global minimum has the lowest free energy within the system and is defined to be the stable equilibrium state. The other minima describe the metastable equilibrium states of the system. Between these minima are the energy barriers which hinder transitions between the various metastable states of the system. The transition between metastable states is also affected by the availability of external energy to the system, which is in turn associated with the volume of the liquid drop on top of the solid surface. Likewise, the volume of the liquid can affect the locations of the minima, which can influence the contact angles created by the solid and the liquid. The contact angles are also directly related to whether the solid surface is ideal, in other words, whether it is a smooth, homogeneous surface.
https://en.wikipedia.org/wiki?curid=45330103
1,996,892
In the theory of quantum communication, the entanglement-assisted classical capacity of a quantum channel is the highest rate at which classical information can be transmitted from a sender to receiver when they share an unlimited amount of noiseless entanglement. It is given by the quantum mutual information of the channel, which is the input-output quantum mutual information maximized over all pure bipartite quantum states with one system transmitted through the channel. This formula is the natural generalization of Shannon's noisy channel coding theorem, in the sense that this formula is equal to the capacity, and there is no need to regularize it. An additional feature that it shares with Shannon's formula is that a noiseless classical or quantum feedback channel cannot increase the entanglement-assisted classical capacity. The entanglement-assisted classical capacity theorem is proved in two parts: the direct coding theorem and the converse theorem. The direct coding theorem demonstrates that the quantum mutual information of the channel is an achievable rate, by a random coding strategy that is effectively a noisy version of the super-dense coding protocol. The converse theorem demonstrates that this rate is optimal by making use of the strong subadditivity of quantum entropy.
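In formula form, for a channel N the entanglement-assisted classical capacity reads as follows, with the maximization taken over pure bipartite input states φ_{AA'} and I(A;B) = S(A) + S(B) − S(AB) the quantum mutual information of the resulting output state:

```latex
C_E(\mathcal{N}) \;=\; \max_{\varphi_{AA'}} I(A;B)_{\rho},
\qquad
\rho_{AB} \;=\; (\mathrm{id}_A \otimes \mathcal{N}_{A' \to B})(\varphi_{AA'})
```

The absence of a regularization over many channel uses is what makes this a "single-letter" formula, directly analogous to Shannon's.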
https://en.wikipedia.org/wiki?curid=31477835
2,022,924
Interatomic potentials were developed for amino acids as early as the late 1960s, for example serving the CHARMM program. The fraction of chemical space covered was small considering the size of the periodic table, and compatible interatomic potentials for inorganic compounds remained largely unavailable. Differing energy functions and a lack of interpretation and validation of parameters restricted modeling to isolated compounds with unpredictable errors. Specifically, assumptions of formal charges, fixed atoms, and other approximations often led to collapsed structures and random energy differences when allowing atom mobility. A concept for consistent simulations of inorganic-organic interfaces was introduced in 2003. A major obstacle was the poor definition of atomic charges in molecular models, especially for inorganic compounds. IFF utilizes a method to assign atomic charges that translates chemical bonding accurately into molecular models, including metals, oxides, minerals, and organic molecules. The models reproduce multipole moments internal to a chemical compound on the basis of experimental data for electron deformation densities and dipole moments, as well as consideration of atomization energies, ionization energies, coordination numbers, and trends relative to other chemically similar compounds in the periodic table (the Extended Born Model). The method combines experimental data and theory to represent chemical bonding and yields up to ten times more reliable and reproducible atomic charges in comparison to the use of quantum methods. This approach is essential to carry out consistent all-atom simulations of compounds across the periodic table that vary widely in internal polarity. IFF also allows the inclusion of specific features of the electronic structure, such as π electrons in graphitic materials and in aromatic compounds. Another characteristic is the systematic reproduction of structures and energies to validate the classical Hamiltonian. The quality of structural predictions is assessed by validation of lattice parameters and densities against X-ray data, which is common in molecular simulations. In addition, IFF uses surface and cleavage energies from experimental measurements to ensure a reliable potential energy surface. Thereafter, hydration energies, adsorption energies, and thermal and mechanical properties can often be computed in quantitative agreement with measurements without further modifications. The parameters also have a physical-chemical interpretation, and chemical analogy can be used effectively to derive parameters in good accuracy for chemically similar, yet unknown, compounds. Alternative approaches based on random force-field fitting to lattice parameters and mechanical properties (the second derivative of the energy) lack interpretability and can cause over 500% errors in surface and interfacial energies, limiting the utility of the models.
https://en.wikipedia.org/wiki?curid=64170778
2,073,189
The IIE, based out of the Hubrecht Institute (aka "Hubrecht Laboratories") in the Netherlands, changed its name in 1968 to the International Society of Developmental Biologists (ISDB). In 1997 the ISDB took over the functions of a parallel organization, the European Developmental Biology Organisation (EDBO), becoming the world umbrella of developmental biology associations. Numerous national societies are currently members of the ISDB, including the Society for Developmental Biology, the Asia-Pacific Developmental Biology Network, the Australia and New Zealand Society for Cell and Developmental Biology, the British Society of Developmental Biologists, the Finnish Society for Developmental Biology, the French Developmental Biology Society, the German Society of Developmental Biology, the Hong Kong Society for Developmental Biology, the Israel Society for Developmental Biology, the Italian Embryology Group, the Japanese Society for Developmental Biology, the Latin American Society for Developmental Biology, the Portuguese Society for Developmental Biology, and the Spanish Developmental Biology Society.
https://en.wikipedia.org/wiki?curid=37257608
2,112,199
A differential equation model is one that describes the value of dependent variables as they evolve in time or space by giving equations involving those variables and their derivatives with respect to some independent variables, usually time and/or space. An ordinary differential equation is one in which the system's dependent variables are functions of only one independent variable. Many physical, chemical, biological and electrical systems are well described by ordinary differential equations. Frequently we assume a system is governed by differential equations, but we do not have exact knowledge of the influence of various factors on the state of the system. For instance, we may have an electrical circuit that in theory is described by a system of ordinary differential equations, but due to the tolerance of resistors, variations of the supply voltage or interference from outside influences we do not know the exact parameters of the system. For some systems, especially those that support chaos, a small change in parameter values can cause a large change in the behavior of the system, so an accurate model is extremely important. Therefore, it may be necessary to construct more exact differential equations by building them up based on the actual system performance rather than a theoretical model. Ideally, one would measure all the dynamical variables involved over an extended period of time, using many different initial conditions, then build or fine tune a differential equation model based on these measurements.
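A minimal sketch of that fitting workflow, with a hypothetical logistic-growth system and synthetic "measurements" standing in for real data (SciPy's integrator and least-squares solver are used; all names and values are illustrative):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

# Synthetic measurements of a quantity governed by logistic growth
# dx/dt = r*x*(1 - x/K); we pretend not to know r and K.
t_obs = np.linspace(0, 10, 25)
x_obs = 10 / (1 + 9 * np.exp(-0.8 * t_obs))            # "true" r=0.8, K=10
x_obs = x_obs + np.random.default_rng(1).normal(0, 0.1, t_obs.size)

def residuals(params):
    r, K = params
    sol = solve_ivp(lambda t, x: r * x * (1 - x / K),
                    (t_obs[0], t_obs[-1]), [x_obs[0]], t_eval=t_obs)
    return sol.y[0] - x_obs  # mismatch between model and measurements

fit = least_squares(residuals, x0=[0.5, 5.0])
print(fit.x)  # recovered (r, K), close to (0.8, 10)
```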
https://en.wikipedia.org/wiki?curid=15507968
2,116,182
Free-floating ribosomes can become membrane-bound through a process called translocation. Through translocation, ribosomes in the cytosol that are producing proteins are moved to and attached to the membrane. This process is responsible for the development of the rough endoplasmic reticulum. First, ribosomes begin protein synthesis at the N-terminus. The first part of the polypeptide may be a signal sequence that tells the ribosome that the protein must be extruded into the rough endoplasmic reticulum. The signal sequence triggers translocation by binding with a signal recognition particle (SRP), also located in the cytosol. The signal recognition particle allows recognition and binding via a signal recognition particle receptor on the target membrane's surface. The signal recognition particle receptor and signal recognition particle are both attached to a translocon and bound with guanosine triphosphate (GTP). This guanosine triphosphate is hydrolyzed for energy, and the translocon opens, allowing the ribosome to attach via its 60S subunit and its signal sequence to enter the lumen, or interior space, of the rough endoplasmic reticulum. The signal recognition particle and signal recognition particle receptor detach and can be recycled. The signal sequence is cleaved inside the lumen of the rough endoplasmic reticulum, and the ribosome continues to produce the protein into the endoplasmic reticulum, where it is folded. Upon completion of protein synthesis, the translocon closes and the ribosome detaches. It is important to remember that during translocation, translation briefly stops until binding with the membrane is finished. It is also important to remember that ribosomes can associate with and dissociate from the endoplasmic reticulum as needed for protein synthesis. After synthesis into the rough endoplasmic reticulum, proteins may travel to the end of the rough endoplasmic reticulum, where they are exocytosed or packaged into small vesicles formed via cleavage of the membrane of the rough endoplasmic reticulum. These vesicles are sent to the Golgi apparatus for sorting and release as needed by the cell. Some proteins are released immediately, as the cell is in constant need of them, while others are stored for release upon a signal.
https://en.wikipedia.org/wiki?curid=21151506
2,134,913
Energy-based generative neural networks are a class of generative models which aim to learn explicit probability distributions of data in the form of energy-based models, whose energy functions are parameterized by modern deep neural networks. The name reflects the fact that these models can be derived from discriminative neural networks. The parameters of the neural network in this model are trained in a generative manner by Markov chain Monte Carlo (MCMC)-based maximum likelihood estimation. The learning process follows an "analysis by synthesis" scheme: within each learning iteration, the algorithm samples synthesized examples from the current model by a gradient-based MCMC method, e.g., Langevin dynamics, and then updates the model parameters based on the difference between the training examples and the synthesized ones. This process can be interpreted as an alternating mode-seeking and mode-shifting process, and also has an adversarial interpretation. The first energy-based generative neural network was the generative ConvNet, proposed in 2016 for image patterns, where the neural network is a convolutional neural network. The model has been generalized to various domains to learn distributions of videos and 3D voxels, and variants have made these models more effective. They have proven useful for data generation (e.g., image synthesis, video synthesis, 3D shape synthesis), data recovery (e.g., recovering videos with missing pixels or image frames, 3D super-resolution), and data reconstruction (e.g., image reconstruction and linear interpolation).
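A minimal sketch of the Langevin sampling step on a toy quadratic energy; in a real energy-based generative network, grad_E would instead be the gradient of the ConvNet energy with respect to the input, obtained by backpropagation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy energy E(x) = x^2/2, so exp(-E) is a standard Gaussian.
def grad_E(x):
    return x

def langevin(x, steps=200, eps=0.1):
    """Unadjusted Langevin dynamics:
    x <- x - (eps^2 / 2) * grad_E(x) + eps * Gaussian noise."""
    for _ in range(steps):
        x = x - 0.5 * eps**2 * grad_E(x) + eps * rng.standard_normal(x.shape)
    return x

samples = langevin(np.zeros(100000))
print(samples.mean(), samples.std())  # close to 0 and 1, as expected
```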
https://en.wikipedia.org/wiki?curid=63637933
2,169,220
Frontiers of Physics (formerly "Frontiers of Physics in China" from 2006 to 2010) is a bimonthly peer-reviewed academic journal established in 2006 and co-published by Higher Education Press (China) and Springer Verlag (Germany). Topics covered include quantum mechanics and quantum information; gravitation, cosmology and astrophysics; elementary particles and fields; nuclear physics; atomic physics, molecular physics, optical physics; statistical physics and nonlinear physics; plasma physics and accelerator physics; condensed matter physics; nanostructures and functional materials; and soft matter, biological physics and interdisciplinary physics.
https://en.wikipedia.org/wiki?curid=28921943
11,760
Machine learning approaches in particular can suffer from different data biases. A machine learning system trained specifically on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on man-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society. Language models learned from data have been shown to contain human-like biases. Machine learning systems used for criminal risk assessment have been found to be biased against black people. In 2015, Google Photos would often tag black people as gorillas, and in 2018 this was reportedly still not well resolved: Google was said to still be using the workaround of removing all gorillas from the training data, and was thus unable to recognize real gorillas at all. Similar issues with recognizing non-white people have been found in many other systems. In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language. Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains. Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that "There's nothing artificial about AI...It's inspired by people, it's created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility."
https://en.wikipedia.org/wiki?curid=233488
21,549
The reactants in the electrochemical reactions in a lithium-ion cell are the materials of the anode and cathode, both of which are compounds containing lithium atoms. During discharge, an oxidation half-reaction at the anode produces positively charged lithium ions and negatively charged electrons. The oxidation half-reaction may also produce uncharged material that remains at the anode. Lithium ions move through the electrolyte, electrons move through the external circuit, and then they recombine at the cathode (together with the cathode material) in a reduction half-reaction. The electrolyte and external circuit provide conductive media for lithium ions and electrons, respectively, but do not partake in the electrochemical reaction. During discharge, electrons flow from the negative electrode (anode) towards the positive electrode (cathode) through the external circuit. The reactions during discharge lower the chemical potential of the cell, so discharging transfers energy from the cell to wherever the electric current dissipates its energy, mostly in the external circuit. During charging these reactions and transports go in the opposite direction: electrons move from the positive electrode to the negative electrode through the external circuit. To charge the cell, the external circuit has to provide electric energy. This energy is then stored as chemical energy in the cell (with some loss, e.g., due to a coulombic efficiency lower than 1).
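As a concrete, idealized example for one common chemistry (a graphite anode with a lithium cobalt oxide cathode; real cells only partially delithiate the cathode), the discharge half-reactions can be written:

```latex
\text{anode (oxidation):}\quad
\mathrm{LiC_6} \;\longrightarrow\; \mathrm{C_6} + \mathrm{Li^{+}} + \mathrm{e^{-}}
\\[4pt]
\text{cathode (reduction):}\quad
\mathrm{CoO_2} + \mathrm{Li^{+}} + \mathrm{e^{-}} \;\longrightarrow\; \mathrm{LiCoO_2}
```

During charging, both half-reactions run in reverse, driven by the external power source.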
https://en.wikipedia.org/wiki?curid=201485
54,101
An object moves at different speeds in different frames of reference, depending on the motion of the observer. This implies that the kinetic energy, in both Newtonian mechanics and relativity, is 'frame dependent', so that the amount of relativistic energy that an object is measured to have depends on the observer. The "relativistic mass" of an object is given by the relativistic energy divided by c². Because the relativistic mass is exactly proportional to the relativistic energy, relativistic mass and relativistic energy are nearly synonymous; the only difference between them is the units. The "rest mass" or invariant mass of an object is defined as the mass an object has in its rest frame, when it is not moving with respect to the observer. Physicists typically use the term "mass" to refer to the rest mass, though experiments have shown an object's gravitational mass depends on its total energy and not just its rest mass. The rest mass is the same in all inertial frames; as it is independent of the motion of the observer, it is the smallest possible value of the relativistic mass of the object. Because of the attraction between the components of a system, which results in potential energy, the rest mass is almost never additive; in general, the mass of an object is not the sum of the masses of its parts. The rest mass of an object is the total energy of all the parts, including kinetic energy, as observed from the center of momentum frame, and potential energy. The masses add up only if the constituents are at rest (as observed from the center of momentum frame) and do not attract or repel, so that they do not have any extra kinetic or potential energy. Massless particles are particles with no rest mass, and therefore have no intrinsic energy; their energy is due only to their momentum.
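In symbols, with m₀ the rest mass and v the object's speed in the observer's frame:

```latex
E \;=\; \gamma\, m_0 c^2,
\qquad
m_{\text{rel}} \;=\; \frac{E}{c^2} \;=\; \gamma\, m_0,
\qquad
\gamma \;=\; \frac{1}{\sqrt{1 - v^2/c^2}}
```

At v = 0 the factor γ equals 1 and the relativistic mass reduces to the rest mass, which is why the rest mass is the smallest value the relativistic mass can take.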
https://en.wikipedia.org/wiki?curid=422481
54,106
For an isolated system of particles moving in different directions, the invariant mass of the system is the analog of the rest mass, and is the same for all observers, even those in relative motion. It is defined as the total energy (divided by c²) in the center of momentum frame. The "center of momentum frame" is defined so that the system has zero total momentum; the term center of mass frame is also sometimes used, where the "center of mass frame" is a special case of the center of momentum frame where the center of mass is put at the origin. A simple example of an object with moving parts but zero total momentum is a container of gas. In this case, the mass of the container is given by its total energy (including the kinetic energy of the gas molecules), since the system's total energy and invariant mass are the same in any reference frame where the momentum is zero, and such a reference frame is also the only frame in which the object can be weighed. In a similar way, the theory of special relativity posits that the thermal energy in all objects, including solids, contributes to their total masses, even though this energy is present as the kinetic and potential energies of the atoms in the object, and it (in a similar way to the gas) is not seen in the rest masses of the atoms that make up the object. Similarly, even photons, if trapped in an isolated container, would contribute their energy to the mass of the container. Such extra mass, in theory, could be weighed in the same way as any other type of rest mass, even though individually photons have no rest mass. The property that trapped energy in any form adds weighable mass to systems that have no net momentum is one of the consequences of relativity. It has no counterpart in classical Newtonian physics, where energy never exhibits weighable mass.
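In symbols, the invariant mass of a system with total energy ΣE_i and total momentum Σp_i is:

```latex
M c^2 \;=\; \sqrt{\left(\sum_i E_i\right)^{2} \;-\; \left\|\sum_i \mathbf{p}_i\, c\right\|^{2}}
```

For example, two photons of energy E each, moving in opposite directions, have zero total momentum and hence an invariant mass of 2E/c², even though each photon individually is massless, which is exactly the "photons in a container" effect described above.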
https://en.wikipedia.org/wiki?curid=422481
60,942
In 1886, Frank Julian Sprague invented the first practical DC motor, a non-sparking device that maintained relatively constant speed under variable loads. Other Sprague electric inventions about this time greatly improved grid electric distribution (prior work done while employed by Thomas Edison), allowed power from electric motors to be returned to the electric grid, provided for electric distribution to trolleys via overhead wires and the trolley pole, and provided control systems for electric operations. This allowed Sprague to use electric motors to invent the first electric trolley system in 1887–88 in Richmond, Virginia, the electric elevator and control system in 1892, and the electric subway with independently powered centrally-controlled cars. The latter were first installed in 1892 in Chicago by the South Side Elevated Railroad, where it became popularly known as the "L". Sprague's motor and related inventions led to an explosion of interest and use in electric motors for industry. The development of electric motors of acceptable efficiency was delayed for several decades by failure to recognize the extreme importance of an air gap between the rotor and stator. Efficient designs have a comparatively small air gap. The St. Louis motor, long used in classrooms to illustrate motor principles, is inefficient for the same reason, as well as appearing nothing like a modern motor.
https://en.wikipedia.org/wiki?curid=76086
64,373
Work and heat are expressions of actual physical processes of supply or removal of energy, while the internal energy $U$ is a mathematical abstraction that keeps account of the exchanges of energy that befall the system. Thus the term 'heat' for $Q$ means "that amount of energy added or removed as heat in the thermodynamic sense", rather than referring to a form of energy within the system. Likewise, the term 'work energy' for $W$ means "that amount of energy gained or lost through thermodynamic work". Internal energy is a property of the system, whereas work done and heat supplied are not. A significant result of this distinction is that a given internal energy change $\Delta U$ can be achieved by different combinations of heat and work. (This may be signaled by saying that heat and work are path dependent, while the change in internal energy depends only on the initial and final states of the process. It is necessary to bear in mind that thermodynamic work is measured by change in the system, which is not necessarily the same as work measured by forces and distances in the surroundings; this distinction is noted in the term 'isochoric work' (at constant volume).)
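With the common sign convention that $W$ is the work done by the system, the first law reads:

```latex
\Delta U \;=\; Q - W
```

The path dependence means, for instance, that the same change ΔU = 60 J can be reached either by supplying Q = 100 J of heat while the system does W = 40 J of work, or by supplying Q = 60 J with no work at all; only the combination Q − W is fixed by the initial and final states.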
https://en.wikipedia.org/wiki?curid=166404
66,083
A research question that is asked about big data sets is whether it is necessary to look at the full data to draw certain conclusions about the properties of the data, or if a sample is good enough. The name big data itself contains a term related to size, and this is an important characteristic of big data. But sampling enables the selection of the right data points from within the larger data set to estimate the characteristics of the whole population. In manufacturing, different types of sensory data such as acoustics, vibration, pressure, current, voltage, and controller data are available at short time intervals. To predict downtime it may not be necessary to look at all the data; a sample may be sufficient. Big data can be broken down by various data point categories such as demographic, psychographic, behavioral, and transactional data. With large sets of data points, marketers are able to create and use more customized segments of consumers for more strategic targeting.
https://en.wikipedia.org/wiki?curid=27051151
108,411
A time series is one type of panel data. Panel data is the general class, a multidimensional data set, whereas a time series data set is a one-dimensional panel (as is a cross-sectional dataset). A data set may exhibit characteristics of both panel data and time series data. One way to tell is to ask what makes one data record unique from the other records. If the answer is the time data field, then this is a time series data set candidate. If determining a unique record requires a time data field and an additional identifier which is unrelated to time (e.g. student ID, stock symbol, country code), then it is a panel data candidate. If the differentiation lies with a non-time identifier, then the data set is a cross-sectional data set candidate.
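The uniqueness test described above can be checked mechanically. A small sketch using pandas, with made-up names and values:

```python
import pandas as pd

# Time series: the timestamp alone identifies each record.
ts = pd.DataFrame({"date": ["2023-01-01", "2023-01-02"],
                   "price": [101.2, 102.7]}).set_index("date")

# Panel data: a record is identified by (timestamp, entity) together.
panel = pd.DataFrame({
    "date":   ["2023-01-01", "2023-01-01", "2023-01-02", "2023-01-02"],
    "ticker": ["AAA", "BBB", "AAA", "BBB"],
    "price":  [101.2, 55.0, 102.7, 54.1],
}).set_index(["date", "ticker"])

print(ts.index.is_unique)                              # True: time suffices
print(panel.index.get_level_values("date").is_unique)  # False: need the entity too
print(panel.index.is_unique)                           # True: (time, entity) is unique
```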
https://en.wikipedia.org/wiki?curid=406624
118,563
The notion of a zero-point energy is also important for cosmology, and physics currently lacks a full theoretical model for understanding zero-point energy in this context; in particular, the discrepancy between theorized and observed vacuum energy in the universe is a source of major contention. Physicists Richard Feynman and John Wheeler calculated the zero-point radiation of the vacuum to be an order of magnitude greater than nuclear energy, with a single light bulb containing enough energy to boil all the world's oceans. Yet according to Einstein's theory of general relativity, any such energy would gravitate, and the experimental evidence from the expansion of the universe, dark energy and the Casimir effect shows any such energy to be exceptionally weak. A popular proposal that attempts to address this issue is to say that the fermion field has a negative zero-point energy, while the boson field has positive zero-point energy and thus these energies somehow cancel each other out. This idea would be true if supersymmetry were an exact symmetry of nature; however, the LHC at CERN has so far found no evidence to support it. Moreover, it is known that if supersymmetry is valid at all, it is at most a broken symmetry, only true at very high energies, and no one has been able to show a theory where zero-point cancellations occur in the low-energy universe we observe today. This discrepancy is known as the cosmological constant problem and it is one of the greatest unsolved mysteries in physics. Many physicists believe that "the vacuum holds the key to a full understanding of nature".
https://en.wikipedia.org/wiki?curid=84400
148,049
In laboratory systems, either 10–30 mm beam diameter non-monochromatic Al Kα or Mg Kα anode radiation is used, or a focused 20–500 micrometer diameter beam of single-wavelength Al Kα monochromatised radiation. Monochromatic Al Kα X-rays are normally produced by diffracting and focusing a beam of non-monochromatic X-rays off of a thin disc of natural, crystalline quartz with a <1010> orientation. The resulting wavelength is 8.3386 angstroms (0.83386 nm), which corresponds to a photon energy of 1486.7 eV. Aluminum Kα X-rays have an intrinsic full width at half maximum (FWHM) of 0.43 eV, centered on 1486.7 eV (E/ΔE = 3457). For a well-optimized monochromator, the energy width of the monochromated aluminum Kα X-rays is 0.16 eV, but energy broadening in common electron energy analyzers (spectrometers) produces an ultimate energy resolution on the order of FWHM = 0.25 eV which, in effect, is the ultimate energy resolution of most commercial systems. When working under practical, everyday conditions, high energy-resolution settings will produce peak widths (FWHM) between 0.4 and 0.6 eV for various pure elements and some compounds. For example, in a spectrum obtained in 1 minute at a pass energy of 20 eV using monochromated aluminum Kα X-rays, the Ag 3d peak for a clean silver film or foil will typically have a FWHM of 0.45 eV. Non-monochromatic magnesium X-rays have a wavelength of 9.89 angstroms (0.989 nm), which corresponds to a photon energy of 1253 eV. The energy width of the non-monochromated X-ray is roughly 0.70 eV, which, in effect, is the ultimate energy resolution of a system using non-monochromatic X-rays. Non-monochromatic X-ray sources do not use any crystals to diffract the X-rays, which allows all primary X-ray lines and the full range of high-energy Bremsstrahlung X-rays (1–12 keV) to reach the surface. The ultimate energy resolution (FWHM) when using a non-monochromatic Mg Kα source is 0.9–1.0 eV, which includes some contribution from spectrometer-induced broadening.
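The quoted wavelength-energy pairs follow from E = hc/λ; with hc ≈ 12398.4 eV·Å, the stated wavelengths reproduce the photon energies to within the rounding of the inputs (the quoted 1486.7 eV corresponds to a slightly different tabulated Al Kα wavelength):

```latex
E \;=\; \frac{hc}{\lambda} \;\approx\; \frac{12398.4\ \text{eV·Å}}{\lambda\,[\text{Å}]},
\qquad
\frac{12398.4}{8.3386} \approx 1486.9\ \text{eV},
\qquad
\frac{12398.4}{9.89} \approx 1253.6\ \text{eV}
```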
https://en.wikipedia.org/wiki?curid=70847
159,138
There are two primary ways of constructing statistical models: in a "static" model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces using a single model for all data being compressed, and so performs poorly on files that contain heterogeneous data. "Adaptive" models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.
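As a minimal sketch of the adaptive approach (an added illustration, not from the source): an order-0 model that starts with uniform counts and updates them after every symbol. Because the encoder and decoder update in lockstep, the model itself never has to be stored with the compressed data.

```python
import math
from collections import Counter

class AdaptiveModel:
    """Minimal adaptive order-0 model: starts trivial (all symbols equally
    likely) and updates counts as symbols are seen. An encoder and a decoder
    updating in lockstep never need to transmit the model itself."""
    def __init__(self, alphabet):
        self.counts = Counter({s: 1 for s in alphabet})  # trivial initial model
        self.total = len(alphabet)

    def probability(self, symbol):
        return self.counts[symbol] / self.total

    def update(self, symbol):
        self.counts[symbol] += 1
        self.total += 1

model = AdaptiveModel("ab")
bits = 0.0
for ch in "aaaaabaaaa":
    bits += -math.log2(model.probability(ch))  # ideal code length in bits
    model.update(ch)
print(f"{bits:.2f} bits")  # early symbols cost more; later ones compress better
```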
https://en.wikipedia.org/wiki?curid=18209
162,777
The highest order of derivation that appears in a (linear) differential equation is the "order" of the equation. The term that does not depend on the unknown function and its derivatives is sometimes called the "constant term" of the equation (by analogy with algebraic equations), even when this term is a non-constant function. If the constant term is the zero function, then the differential equation is said to be "homogeneous", as it is a homogeneous polynomial in the unknown function and its derivatives. The equation obtained by replacing, in a linear differential equation, the constant term by the zero function is the "associated homogeneous equation". A differential equation has "constant coefficients" if only constant functions appear as coefficients in the associated homogeneous equation.
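A standard textbook illustration of these terms (an added example, not from the source):

```latex
% A second-order linear ODE with constant coefficients:
% order 2; the "constant term" e^x is a non-constant function, as noted above.
\[
  y'' + 3y' + 2y = e^{x},
  \qquad\text{associated homogeneous equation:}\quad
  y'' + 3y' + 2y = 0 .
\]
```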
https://en.wikipedia.org/wiki?curid=379868
175,518
In a 2005 well-to-wheels analysis, the DOE estimated that fuel cell electric vehicles using hydrogen produced from natural gas would emit approximately 55% of the CO2 per mile of internal combustion engine vehicles and approximately 25% less than hybrid vehicles. In 2006, Ulf Bossel stated that the large amount of energy required to isolate hydrogen from natural compounds (water, natural gas, biomass), package the light gas by compression or liquefaction, transfer the energy carrier to the user, plus the energy lost when it is converted to useful electricity with fuel cells, leaves only around 25% for practical use. Richard Gilbert, co-author of "Transport Revolutions: Moving People and Freight without Oil" (2010), comments similarly: producing hydrogen gas consumes some of the energy it carries, and more energy is lost converting the hydrogen back into electricity within fuel cells, so that "only a quarter of the initially available energy reaches the electric motor ... Such losses in conversion don't stack up well against, for instance, recharging an electric vehicle (EV) like the Nissan Leaf or Chevy Volt from a wall socket". A 2010 well-to-wheels analysis of hydrogen fuel cell vehicles from Argonne National Laboratory states that renewable H2 pathways offer much larger greenhouse gas benefits, a result that has since been confirmed. In 2010, a US DOE well-to-wheels publication assumed that the efficiency of the single step of compressing hydrogen at the refueling station is 94%. A 2016 study in the November issue of the journal "Energy" by scientists at Stanford University and the Technical University of Munich concluded that, even assuming local hydrogen production, "investing in all-electric battery vehicles is a more economical choice for reducing carbon dioxide emissions, primarily due to their lower cost and significantly higher energy efficiency."
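The "around 25%" figure is the product of several stage efficiencies. The numbers below are illustrative assumptions chosen only to show the arithmetic; they are not Bossel's published values, which differ by pathway (compression versus liquefaction, and so on).

```python
# Illustrative well-to-motor chain for hydrogen. Stage efficiencies here are
# hypothetical round numbers for demonstration only.
stages = {
    "electrolysis":          0.70,
    "compression/transport": 0.80,
    "fuel cell conversion":  0.50,
    "electric drivetrain":   0.90,
}

overall = 1.0
for name, eff in stages.items():
    overall *= eff
    print(f"after {name:<22} {overall:.0%} of input energy remains")
# The product lands near one quarter, matching the order of magnitude of the
# "around 25%" estimate quoted above.
```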
https://en.wikipedia.org/wiki?curid=1252085
221,036
In physical sciences, mechanical energy is the sum of potential energy and kinetic energy. The principle of conservation of mechanical energy states that if an isolated system is subject only to conservative forces, then the mechanical energy is constant. If an object moves in the opposite direction of a conservative net force, the potential energy will increase; and if the speed (not the velocity) of the object changes, the kinetic energy of the object also changes. In all real systems, however, nonconservative forces, such as frictional forces, will be present, but if they are of negligible magnitude, the mechanical energy changes little and its conservation is a useful approximation. In elastic collisions, the kinetic energy is conserved, but in inelastic collisions some mechanical energy may be converted into thermal energy. The equivalence between lost mechanical energy and an increase in temperature was discovered by James Prescott Joule.
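A minimal numerical check of this conservation law (an added sketch with illustrative values):

```python
# Free fall under gravity, a conservative force: verify numerically that the
# mechanical energy E = kinetic + potential stays (approximately) constant.
# Semi-implicit Euler integration is used because it conserves energy well.
g, m = 9.81, 1.0                 # gravitational acceleration (m/s^2), mass (kg)
h, v, dt = 100.0, 0.0, 1e-4      # initial height (m), velocity (m/s), step (s)
E0 = m * g * h                   # initial mechanical energy (all potential)

for _ in range(10_000):          # simulate one second of fall
    v += g * dt
    h -= v * dt

E = 0.5 * m * v**2 + m * g * h   # kinetic + potential after one second
print(f"E = {E:.2f} J vs E0 = {E0:.2f} J")  # equal to within integration error
```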
https://en.wikipedia.org/wiki?curid=859234
257,070
In quantum physics, a quantum state is a mathematical entity that provides a probability distribution for the outcomes of each possible measurement on a system. Knowledge of the quantum state together with the rules for the system's evolution in time exhausts all that can be predicted about the system's behavior. A mixture of quantum states is again a quantum state. Quantum states that cannot be written as a mixture of other states are called pure quantum states, while all other states are called mixed quantum states. A pure quantum state can be represented by a ray in a Hilbert space over the complex numbers, while mixed states are represented by density matrices, which are positive semidefinite operators that act on Hilbert spaces.
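The pure/mixed distinction can be made concrete with density matrices (an added NumPy sketch; the purity criterion Tr(ρ²) = 1 for pure states is standard):

```python
import numpy as np

# Pure state |+> = (|0> + |1>)/sqrt(2), a ray in C^2, and its density matrix.
plus = np.array([1, 1]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())

# An equal statistical mixture of |0> and |1>: a mixed state, not a ray.
rho_mixed = 0.5 * np.diag([1.0, 0.0]) + 0.5 * np.diag([0.0, 1.0])

# Purity Tr(rho^2) distinguishes them: 1 for pure states, < 1 for mixed.
for name, rho in [("pure ", rho_pure), ("mixed", rho_mixed)]:
    print(name, np.trace(rho @ rho).real)
# Both matrices are positive semidefinite with trace 1, as required.
```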
https://en.wikipedia.org/wiki?curid=30876419
280,725
An example of a simple type system is that of the C language. The basic units of a C program are function definitions, and one function is invoked by another. The interface of a function states the name of the function and a list of parameters that are passed to the function's code. The code of an invoking function states the name of the invoked function, along with the names of variables that hold values to pass to it. During execution, the values are placed into temporary storage, then execution jumps to the code of the invoked function. The invoked function's code accesses the values and makes use of them. If the instructions inside the function are written with the assumption of receiving an integer value, but the calling code passed a floating-point value, then the invoked function will compute a wrong result. The C compiler checks the types of the arguments passed to a function when it is called against the types of the parameters declared in the function's definition. If the types do not match, the compiler reports a compile-time error.
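An analogous sketch in Python rather than C (the function and values are invented for illustration): type annotations checked by an external tool such as mypy play the role that the C compiler's argument/parameter check plays above.

```python
# Analogy to the C compiler's check, using Python type annotations: a static
# checker such as mypy flags the mismatch before the program runs.
def halve(n: int) -> int:
    """Code written assuming an integer argument."""
    return n // 2  # floor division: correct for ints

result = halve(7.5)  # a float where an int was assumed
# mypy reports: Argument 1 to "halve" has incompatible type "float";
# expected "int" -- caught before execution, like C's compile-time error.
# (At run time, 7.5 // 2 would silently yield the float 3.0: a wrong result.)
```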
https://en.wikipedia.org/wiki?curid=199701
282,502
A data model instance is created by applying a data model theory, typically to meet some business enterprise requirement. Business requirements are normally captured by a semantic logical data model, which is transformed into a physical data model instance from which a physical database is generated. For example, a data modeler may use a data modeling tool to create an entity-relationship model of the corporate data repository of some business enterprise. This model is transformed into a relational model, which in turn generates a relational database.
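A toy sketch of the last transformation step (the entity, its attributes, and the generated DDL are all invented for illustration; real modeling tools perform far richer transformations):

```python
# Hypothetical logical entity definition, transformed into a physical
# artifact (SQL DDL for a relational database).
entity = {
    "name": "Customer",
    "attributes": [("customer_id", "INTEGER", True),      # (name, type, is_key)
                   ("full_name",   "VARCHAR(100)", False),
                   ("email",       "VARCHAR(100)", False)],
}

def to_ddl(e):
    cols = [f"  {n} {t}{' PRIMARY KEY' if key else ''}"
            for n, t, key in e["attributes"]]
    return f"CREATE TABLE {e['name']} (\n" + ",\n".join(cols) + "\n);"

print(to_ddl(entity))  # the generated physical schema
```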
https://en.wikipedia.org/wiki?curid=82871
283,334
The Residential Energy Services Network (RESNET) is a crucial benchmark for energy reduction. RESNET's Home Energy Rating System (HERS), which is based on the International Code Council's (ICC) energy code, rates home energy consumption on a standard numerical scale that examines the factors in home energy use (About HERS 2018). The American National Standards Institute (ANSI) has acknowledged the HERS assessment system as a national benchmark for evaluating energy efficiency. The International Energy Conservation Code (IECC) of the ICC requires an energy rating index, and the main index used in the residential building sector is HERS. The mortgage financing sector makes substantial use of the HERS index: a home's expected energy usage may affect the available mortgage funds based on its HERS score, with more energy-efficient, lower-consumption homes potentially qualifying for a better mortgage rate or amount.
https://en.wikipedia.org/wiki?curid=478933
311,820
Other important mechanisms fall under the category of asymmetric cell divisions, divisions that give rise to daughter cells with distinct developmental fates. Asymmetric cell divisions can occur because of asymmetrically expressed maternal cytoplasmic determinants or because of signaling. In the former mechanism, distinct daughter cells are created during cytokinesis because of an uneven distribution of regulatory molecules in the parent cell; the distinct cytoplasm that each daughter cell inherits results in a distinct pattern of differentiation for each daughter cell. A well-studied example of pattern formation by asymmetric divisions is body axis patterning in "Drosophila". RNA molecules are an important type of intracellular differentiation control signal. The molecular and genetic basis of asymmetric cell divisions has also been studied in green algae of the genus "Volvox", a model system for studying how unicellular organisms can evolve into multicellular organisms. In "Volvox carteri", the 16 cells in the anterior hemisphere of a 32-cell embryo divide asymmetrically, each producing one large and one small daughter cell. The size of the cell at the end of all cell divisions determines whether it becomes a specialized germ or somatic cell.
https://en.wikipedia.org/wiki?curid=152611
317,181
The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time. The minimum number of state variables required to represent a given system, "n", is usually equal to the order of the system's defining differential equation, but not necessarily. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction. It is important to understand that converting a state-space realization to a transfer function form may lose some internal information about the system, and may describe as stable a system whose state-space realization is unstable at certain points. In electric circuits, the number of state variables is often, though not always, the same as the number of energy storage elements in the circuit, such as capacitors and inductors. The state variables defined must be linearly independent, i.e., no state variable can be written as a linear combination of the other state variables, or the system cannot be solved.
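A short sketch of the denominator-order rule using SciPy (an added example; the transfer function is a standard second-order toy system):

```python
import numpy as np
from scipy import signal

# Transfer function H(s) = 1 / (s^2 + 3s + 2): denominator order 2,
# so a minimal realization needs two state variables.
num, den = [1.0], [1.0, 3.0, 2.0]
A, B, C, D = signal.tf2ss(num, den)
print("number of state variables:", A.shape[0])  # 2, the denominator order

# Converting back to transfer function form discards internal structure
# (e.g., unobservable modes), as noted above:
num2, den2 = signal.ss2tf(A, B, C, D)
print(np.round(den2, 6))  # recovers [1, 3, 2]
```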
https://en.wikipedia.org/wiki?curid=548156
320,011
Quantum decoherence is the loss of quantum coherence. In quantum mechanics, particles such as electrons are described by a wave function, a mathematical representation of the quantum state of a system; a probabilistic interpretation of the wave function is used to explain various quantum effects. As long as there exists a definite phase relation between different states, the system is said to be coherent. A definite phase relationship is necessary to perform quantum computing on quantum information encoded in quantum states. Coherence is preserved under the laws of quantum physics.
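A small numerical sketch of how losing the definite phase relation destroys coherence (an added illustration; the dephasing-by-random-phases picture is a standard toy model of an environment):

```python
import numpy as np

# Start from the coherent superposition (|0> + |1>)/sqrt(2) and average its
# density matrix over random relative phases, as an environment would impose.
# The off-diagonal (coherence) terms wash out, leaving a mixed state.
rng = np.random.default_rng(0)
rhos = []
for phi in rng.uniform(0, 2 * np.pi, 10_000):
    psi = np.array([1, np.exp(1j * phi)]) / np.sqrt(2)
    rhos.append(np.outer(psi, psi.conj()))
rho_avg = np.mean(rhos, axis=0)
print(np.round(rho_avg, 3))  # ≈ diag(0.5, 0.5): a decohered, mixed state
```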
https://en.wikipedia.org/wiki?curid=185732
331,046
The term "data mining" is a misnomer because the goal is the extraction of patterns and knowledge from large amounts of data, not the extraction ("mining") of data itself. It also is a buzzword and is frequently applied to any form of large-scale data or information processing (collection, extraction, warehousing, analysis, and statistics) as well as any application of computer decision support system, including artificial intelligence (e.g., machine learning) and business intelligence. The book "Data mining: Practical machine learning tools and techniques with Java" (which covers mostly machine learning material) was originally to be named "Practical machine learning", and the term "data mining" was only added for marketing reasons. Often the more general terms ("large scale") "data analysis" and "analytics"—or, when referring to actual methods, "artificial intelligence" and "machine learning"—are more appropriate.
https://en.wikipedia.org/wiki?curid=42253
331,053
The manual extraction of patterns from data has occurred for centuries. Early methods of identifying patterns in data include Bayes' theorem (1700s) and regression analysis (1800s). The proliferation, ubiquity and increasing power of computer technology have dramatically increased data collection, storage, and manipulation ability. As data sets have grown in size and complexity, direct "hands-on" data analysis has increasingly been augmented with indirect, automated data processing, aided by other discoveries in computer science, especially in the field of machine learning, such as neural networks, cluster analysis, genetic algorithms (1950s), decision trees and decision rules (1960s), and support vector machines (1990s). Data mining is the process of applying these methods with the intention of uncovering hidden patterns in large data sets. It bridges the gap from applied statistics and artificial intelligence (which usually provide the mathematical background) to database management by exploiting the way data is stored and indexed in databases to execute the actual learning and discovery algorithms more efficiently, allowing such methods to be applied to ever-larger data sets.
https://en.wikipedia.org/wiki?curid=42253
381,714
Assume a model with one or more unknown parameters, and a data set to which the model can be fit (the training data set). The fitting process optimizes the model parameters to make the model fit the training data as well as possible. If an independent sample of validation data is taken from the same population as the training data, it will generally turn out that the model does not fit the validation data as well as it fits the training data. The size of this difference is likely to be large, especially when the size of the training data set is small or when the number of parameters in the model is large. Cross-validation is a way to estimate the size of this effect.
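A minimal numerical illustration of this gap (an added sketch with invented data): a flexible model fit to a small training set scores far worse on independent validation data.

```python
import numpy as np

# Fit a high-degree polynomial (many parameters) to a small training set and
# score it on an independent validation sample from the same population.
rng = np.random.default_rng(1)

def sample(n):
    x = rng.uniform(-1, 1, n)
    return x, np.sin(3 * x) + 0.3 * rng.normal(size=n)

x_train, y_train = sample(15)     # small training set
x_val,   y_val   = sample(200)    # independent validation data

coeffs = np.polyfit(x_train, y_train, deg=9)   # deliberately flexible model
mse = lambda x, y: np.mean((np.polyval(coeffs, x) - y) ** 2)
print(f"train MSE: {mse(x_train, y_train):.3f}")
print(f"valid MSE: {mse(x_val, y_val):.3f}")   # typically much larger
```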
https://en.wikipedia.org/wiki?curid=416612
389,154
In quantum mechanics, the Hamiltonian of a system is an operator corresponding to the total energy of that system, including both kinetic energy and potential energy. Its spectrum, the system's "energy spectrum" or its set of "energy eigenvalues", is the set of possible outcomes obtainable from a measurement of the system's total energy. Due to its close relation to the energy spectrum and time-evolution of a system, it is of fundamental importance in most formulations of quantum theory.
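A minimal numerical sketch (a standard two-level toy model, added for illustration): the eigenvalues of the Hamiltonian matrix are the possible outcomes of a total-energy measurement.

```python
import numpy as np

# Two-level system: H = E0 * I + Δ * σ_x, a standard toy Hamiltonian.
E0, delta = 1.0, 0.25
H = np.array([[E0, delta],
              [delta, E0]])

energies, states = np.linalg.eigh(H)   # Hermitian eigenproblem
print("energy spectrum:", energies)    # [E0 - Δ, E0 + Δ] = [0.75, 1.25]
```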
https://en.wikipedia.org/wiki?curid=14381
409,119
Green's function is not necessarily unique since the addition of any solution of the homogeneous equation to one Green's function results in another Green's function. Therefore if the homogeneous equation has nontrivial solutions, multiple Green's functions exist. In some cases, it is possible to find one Green's function that is nonvanishing only for "x" ≥ "s", which is called a retarded Green's function, and another Green's function that is nonvanishing only for "x" ≤ "s", which is called an advanced Green's function. In such cases, any affine combination (with weights summing to one) of the two Green's functions is also a valid Green's function. The terminology advanced and retarded is especially useful when the variable "x" corresponds to time. In such cases, the solution provided by the use of the retarded Green's function depends only on the past sources and is causal, whereas the solution provided by the use of the advanced Green's function depends only on the future sources and is acausal. In these problems, it is often the case that the causal solution is the physically important one. The use of advanced and retarded Green's functions is especially common for the analysis of solutions of the inhomogeneous electromagnetic wave equation.
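A standard textbook illustration (an added example, not from the source): for the damped first-order operator "L" = d/d"t" + γ with γ > 0, the retarded and advanced Green's functions can be written explicitly.

```latex
% Solutions of (d/dt + \gamma) G(t, t') = \delta(t - t'):
\[
  G_{\mathrm{ret}}(t,t') = \theta(t - t')\, e^{-\gamma (t - t')},
  \qquad
  G_{\mathrm{adv}}(t,t') = -\,\theta(t' - t)\, e^{-\gamma (t - t')} .
\]
% G_ret vanishes for t < t' (depends only on past sources: causal);
% G_adv vanishes for t > t' (depends only on future sources: acausal).
% Their difference solves the homogeneous equation (d/dt + \gamma) u = 0.
```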
https://en.wikipedia.org/wiki?curid=311001
410,237
Other weaknesses are that it has not been determined whether the statistically most accurate method for combining results is the fixed, IVhet, random or quality effects model, though the criticism against the random effects model is mounting because of the perception that the new random effects (used in meta-analysis) are essentially formal devices to facilitate smoothing or shrinkage, and that prediction may be impossible or ill-advised. The main problem with the random effects approach is that it uses the classic statistical idea of generating a "compromise estimator" whose weights are close to the naturally weighted estimator if heterogeneity across studies is large, but close to the inverse variance weighted estimator if the between-study heterogeneity is small. However, what has been ignored is the distinction between the model "we choose" to analyze a given dataset and the "mechanism by which the data came into being". A random effect can be present in either of these roles, but the two roles are quite distinct. There is no reason to think the analysis model and the data-generation mechanism (model) are similar in form, but many sub-fields of statistics have developed the habit of assuming, for theory and simulations, that the data-generation mechanism (model) is identical to the analysis model we choose (or would like others to choose). As a hypothesized mechanism for producing the data, the random effects model for meta-analysis is implausible, and it is more appropriate to think of this model as a superficial description that we choose as an analytical tool – but this choice may not work for meta-analysis, because the study effects are a fixed feature of the respective meta-analysis and the probability distribution is only a descriptive tool.
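A sketch of the "compromise estimator" behavior described above, using the classic DerSimonian-Laird moment estimate of the between-study variance τ² (the effect sizes and variances are invented toy numbers):

```python
import numpy as np

# Toy per-study effect sizes y and within-study variances v.
y = np.array([0.30, 0.10, 0.55, 0.20])
v = np.array([0.01, 0.02, 0.015, 0.01])

w_fixed = 1 / v                                  # inverse-variance weights
Q = np.sum(w_fixed * (y - np.average(y, weights=w_fixed)) ** 2)
df = len(y) - 1
tau2 = max(0.0, (Q - df) /
           (w_fixed.sum() - (w_fixed**2).sum() / w_fixed.sum()))

# Large tau^2 pulls the weights toward equality ("naturally weighted");
# tau^2 near zero recovers the inverse-variance weighted estimator.
w_random = 1 / (v + tau2)
print("tau^2 =", round(tau2, 4))
print("fixed-effect estimate  :", round(np.average(y, weights=w_fixed), 3))
print("random-effects estimate:", round(np.average(y, weights=w_random), 3))
```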
https://en.wikipedia.org/wiki?curid=62329
422,808
Unlike the traditional extract, transform, load ("ETL") process, the data remains in place, and real-time access to the source system is provided for the data. This reduces the risk of data errors and the overhead of moving around data that may never be used, and it does not attempt to impose a single data model on the data (an example of heterogeneous data is a federated database system). The technology also supports the writing of transaction data updates back to the source systems. To resolve differences in source and consumer formats and semantics, various abstraction and transformation techniques are used. This concept and software is a subset of data integration and is commonly used within business intelligence, service-oriented architecture data services, cloud computing, enterprise search, and master data management.
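A hypothetical sketch of the idea (the two "source systems" and their field names are invented for illustration): a virtual view leaves data in the sources and resolves format differences at query time, instead of copying everything via ETL.

```python
# Two heterogeneous "source systems", left in place (nothing is copied).
crm = {"1001": {"name": "Acme Corp", "region": "EMEA"}}
erp = {"1001": {"open_orders": 3}}

class VirtualCustomerView:
    """Resolves source/consumer format differences at query time."""
    def get(self, customer_id):
        base = crm.get(customer_id, {})
        orders = erp.get(customer_id, {}).get("open_orders", 0)
        return {"customer": base.get("name"),
                "region":   base.get("region"),
                "orders":   orders}          # unified consumer format

print(VirtualCustomerView().get("1001"))
```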
https://en.wikipedia.org/wiki?curid=26041421
443,467
In an extension of classical dynamical systems theory, rather than coupling the environment's and the agent's dynamical systems to each other, an "open dynamical system" defines a "total system", an "agent system", and a mechanism to relate these two systems. The total system is a dynamical system that models an agent in an environment, whereas the agent system is a dynamical system that models an agent's intrinsic dynamics (i.e., the agent's dynamics in the absence of an environment). Importantly, the relation mechanism does not couple the two systems together, but rather continuously modifies the total system into the decoupled agent system. By distinguishing between total and agent systems, it is possible to investigate an agent's behavior when it is isolated from the environment and when it is embedded within an environment. This formalization can be seen as a generalization of the classical formalization: the classically defined agent can be viewed as the agent system of an open dynamical system, and the agent coupled to its environment can be viewed as the total system.
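A toy numerical illustration of the total-versus-agent distinction (the dynamics "f" and "g" are invented for demonstration):

```python
# "Agent system": x' = f(x), the agent's intrinsic dynamics in isolation.
# "Total system": x' = f(x) + g(x, e), the agent embedded in an environment e.
def f(x):          # intrinsic dynamics: decay toward 0
    return -x

def g(x, e):       # environmental influence: drive toward the input e
    return 0.5 * (e - x)

x_agent, x_total, e, dt = 1.0, 1.0, 2.0, 0.01
for _ in range(1000):
    x_agent += f(x_agent) * dt                    # isolated agent
    x_total += (f(x_total) + g(x_total, e)) * dt  # agent-in-environment
print(f"isolated: {x_agent:.3f}, embedded: {x_total:.3f}")
# The isolated agent settles at 0; the embedded system settles where f + g = 0.
```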
https://en.wikipedia.org/wiki?curid=355240
449,403
Quantum neural networks are computational neural network models based on the principles of quantum mechanics. The first ideas on quantum neural computation were published independently in 1995 by Subhash Kak and Ron Chrisley, engaging with the theory of quantum mind, which posits that quantum effects play a role in cognitive function. However, typical research in quantum neural networks involves combining classical artificial neural network models (which are widely used in machine learning for the important task of pattern recognition) with the advantages of quantum information in order to develop more efficient algorithms. One important motivation for these investigations is the difficulty of training classical neural networks, especially in big data applications. The hope is that features of quantum computing such as quantum parallelism or the effects of interference and entanglement can be used as resources. Since the technological implementation of a quantum computer is still at an early stage, such quantum neural network models are mostly theoretical proposals that await their full implementation in physical experiments.
https://en.wikipedia.org/wiki?curid=3737445