798,178
The "Dialogue" does not treat the Tychonic system, which was becoming the preferred system of many astronomers at the time of publication and which was ultimately proven incorrect. The Tychonic system is a motionless Earth system but not a Ptolemaic system; it is a hybrid system of the Copernican and Ptolemaic models. Mercury and Venus orbit the Sun (as in the Copernican system) in small circles, while the Sun in turn orbits a stationary Earth; Mars, Jupiter, and Saturn orbit the Sun in much larger circles, which means they also orbit the Earth. The Tychonian system is mathematically equivalent to the Copernican system, except that the Copernican system predicts a stellar parallax, while the Tychonian system predicts none. Stellar parallax was not measurable until the 19th century, and therefore there was at the time no valid disproof of the Tychonic system on empirical grounds, nor any decisive observational evidence for the Copernican system.
https://en.wikipedia.org/wiki?curid=265006
800,792
There have been several attempts to create methods and tools that can detect and observe biases within an algorithm. These emerging fields focus on tools that are typically applied to the (training) data used by the program rather than to the algorithm's internal processes. These methods may also analyze a program's output and its usefulness, and may therefore involve analysis of its confusion matrix (or table of confusion). Explainable AI is one suggested way to detect the existence of bias in an algorithm or learning model. Using machine learning to detect bias is called "conducting an AI audit", where the "auditor" is an algorithm that goes through the AI model and the training data to identify biases.
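As a minimal illustration of the confusion-matrix approach described above (not taken from the article), the sketch below compares false-positive rates across a hypothetical protected attribute; the labels, predictions, and group assignments are invented for the example.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical data: true labels, model predictions, and a protected group attribute.
y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# One confusion matrix per group; a large gap in false-positive rates is one signal of bias.
for g in np.unique(group):
    mask = group == g
    tn, fp, fn, tp = confusion_matrix(y_true[mask], y_pred[mask], labels=[0, 1]).ravel()
    fpr = fp / (fp + tn) if (fp + tn) else float("nan")
    print(f"group {g}: FPR = {fpr:.2f}")
```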
https://en.wikipedia.org/wiki?curid=55817338
826,863
In an electrolytic cell, a current is driven through the cell by an external voltage, causing an otherwise non-spontaneous chemical reaction to proceed. In a galvanic cell, the progress of a spontaneous chemical reaction causes an electric current to flow. An equilibrium electrochemical cell exists in the state between an electrolytic cell and a galvanic cell: the tendency of a spontaneous reaction to push a current through the external circuit is exactly balanced by a counter-electromotive force, so that no current flows. If this counter-electromotive force is increased, the cell becomes an electrolytic cell, and if it is decreased, the cell becomes a galvanic cell.
https://en.wikipedia.org/wiki?curid=361021
827,681
Exergy is a combination property of a system and its environment because it depends on the state of both the system and the environment. The exergy of a system in equilibrium with the environment is zero. Exergy is neither a thermodynamic property of matter nor a thermodynamic potential of a system. Exergy and energy both have units of joules. The internal energy of a system is always measured from a fixed reference state and is therefore always a state function. Some authors define the exergy of a system so that it changes when the environment changes, in which case exergy is not a state function. Other writers prefer a slightly different definition of the available energy or exergy of a system in which the environment is firmly defined as an unchangeable absolute reference state, and in this alternate definition exergy becomes a property of the state of the system alone.
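For a closed system in an environment at fixed temperature T_0 and pressure p_0, a commonly used textbook expression for exergy (supplied here for illustration rather than quoted from the article) is

$$ B = (U - U_0) + p_0 (V - V_0) - T_0 (S - S_0), $$

where U, V and S are the system's internal energy, volume and entropy, and the subscript 0 marks their values in the dead state of equilibrium with the environment. With the environment held fixed, B depends only on the state of the system and vanishes at equilibrium, matching the description above.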
https://en.wikipedia.org/wiki?curid=1075005
828,138
The underlying ideas for the D-Wave approach arose from experimental results in condensed matter physics, and in particular from work on quantum annealing in magnets performed by Gabriel Aeppli, Thomas Felix Rosenbaum and collaborators, who had been checking the advantages, proposed by Bikas K. Chakrabarti and collaborators, of quantum tunneling/fluctuations in the search for ground state(s) in spin glasses. These ideas were later recast in the language of quantum computation by MIT physicists Edward Farhi, Seth Lloyd, Terry Orlando, and Bill Kaminsky, whose publications in 2000 and 2004 provided both a theoretical model for quantum computation that fit with the earlier work in quantum magnetism (specifically the adiabatic quantum computing model and quantum annealing, its finite-temperature variant), and a specific enablement of that idea using superconducting flux qubits, a close cousin of the designs D-Wave produced. To understand the origins of much of the controversy around the D-Wave approach, it is important to note that it arose not from the conventional quantum information field, but from experimental condensed matter physics.
https://en.wikipedia.org/wiki?curid=9448373
828,189
If the wave function ψ(x) represents the quantum state vector |Ψ⟩, then the real expression |ψ(x)|², which depends on x, forms a probability density function of the given state. The difference between a "density function" and a simple numerical probability is that one must integrate this modulus-squared function over some (small) domain in X to obtain a probability value; as was stated above, the system cannot be found at any single point x with a positive probability. This gives both the amplitude and the density function a physical dimension, unlike a dimensionless probability. For example, for a 3-dimensional wave function the amplitude has the dimension [L^(-3/2)], where L is length.
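A worked statement of the normalization this implies (standard quantum mechanics, not quoted from the article): integrating the density over all space must give unit total probability,

$$ \int_{\mathbb{R}^3} |\psi(\mathbf{r})|^2 \, d^3r = 1, $$

which is dimensionally consistent only if |ψ|² has dimension [L^(-3)], i.e. the amplitude itself has dimension [L^(-3/2)].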
https://en.wikipedia.org/wiki?curid=429425
871,334
The primary aim of health-related AI applications is to analyze relationships between clinical techniques and patient outcomes. AI programs are applied to practices such as diagnostics, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. What differentiates AI technology from traditional technologies in healthcare is the ability to gather data, process it, and produce a well-defined output for the end-user. AI does this through machine learning algorithms and deep learning. These processes can recognize patterns in behavior and create their own logic. To gain useful insights and predictions, machine learning models must be trained using extensive amounts of input data. AI algorithms behave differently from humans in two ways: (1) algorithms are literal: once a goal is set, the algorithm learns exclusively from the input data and can only understand what it has been programmed to do; (2) some deep learning algorithms are black boxes: they can predict with extreme precision, but offer little to no comprehensible explanation of the logic behind their decisions aside from the data and type of algorithm used.
https://en.wikipedia.org/wiki?curid=52588198
879,370
Many biological processes involve the conversion of energy into forms that are usable for chemical transformations, and are quantum mechanical in nature. Such processes involve chemical reactions, light absorption, formation of excited electronic states, transfer of excitation energy, and the transfer of electrons and protons (hydrogen ions) in chemical processes, such as photosynthesis, olfaction and cellular respiration. Quantum biology may use computations to model biological interactions in light of quantum mechanical effects. Quantum biology is concerned with the influence of non-trivial quantum phenomena, which can be explained by reducing the biological process to fundamental physics, although these effects are difficult to study and can be speculative.
https://en.wikipedia.org/wiki?curid=13537626
887,484
Despite the serious challenges presented by the inability to utilize the valuable rules of excluded middle and double negation elimination, intuitionistic logic has practical use. One reason for this is that its restrictions produce proofs that have the existence property, making it also suitable for other forms of mathematical constructivism. Informally, this means that if there is a constructive proof that an object exists, that constructive proof may be used as an algorithm for generating an example of that object, a principle known as the Curry–Howard correspondence between proofs and algorithms. One reason that this particular aspect of intuitionistic logic is so valuable is that it enables practitioners to utilize a wide range of computerized tools, known as proof assistants. These tools assist their users in the verification (and generation) of large-scale proofs, whose size usually precludes the usual human-based checking that goes into publishing and reviewing a mathematical proof. As such, the use of proof assistants (such as Agda or Coq) is enabling modern mathematicians and logicians to develop and prove extremely complex systems, beyond those that are feasible to create and check solely by hand. One example of a proof that was impossible to satisfactorily verify without formal verification is the famous proof of the four color theorem. This theorem stumped mathematicians for more than a hundred years, until a proof was developed that ruled out large classes of possible counterexamples, yet still left open enough possibilities that a computer program was needed to finish the proof. That proof was controversial for some time, but, later, it was verified using Coq.
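As a small, invented illustration of the existence property (not drawn from the article): in a proof assistant, a constructive proof of an existential statement literally packages a witness, which is exactly what makes such proofs usable as algorithms.

```lean
-- A constructive existence proof carries a concrete witness (here, the number 4)
-- together with checkable evidence that the witness has the stated property.
theorem exists_even : ∃ n : Nat, n % 2 = 0 :=
  ⟨4, by decide⟩
```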
https://en.wikipedia.org/wiki?curid=169262
893,578
A solar cell's quantum efficiency value indicates the amount of current that the cell will produce when irradiated by photons of a particular wavelength. If the cell's quantum efficiency is integrated over the whole solar electromagnetic spectrum, one can evaluate the amount of current that the cell will produce when exposed to sunlight. The ratio between this energy-production value and the highest possible energy-production value for the cell (i.e., if the QE were 100% over the whole spectrum) gives the cell's overall energy conversion efficiency value. Note that in the event of multiple exciton generation (MEG), quantum efficiencies of greater than 100% may be achieved since the incident photons have more than twice the band gap energy and can create two or more electron-hole pairs per incident photon.
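A minimal sketch of the integration described above (the spectrum and QE curve below are invented placeholders, not measured data): the short-circuit current density is approximately the elementary charge times the integral of QE(λ) against the photon flux.

```python
import numpy as np

q = 1.602e-19  # elementary charge, C

# Hypothetical wavelength grid (nm), spectral photon flux (photons m^-2 s^-1 nm^-1),
# and quantum efficiency curve -- placeholders standing in for measured data.
wavelength = np.linspace(300, 1100, 81)
photon_flux = 4e18 * np.exp(-((wavelength - 700) / 300) ** 2)
qe = np.clip(0.9 - 0.0005 * np.abs(wavelength - 600), 0.0, 1.0)

# J_sc ~ q * integral of QE(lambda) * photon_flux d(lambda)
j_sc = q * np.trapz(qe * photon_flux, wavelength)  # A per m^2
print(f"estimated short-circuit current density: {j_sc:.1f} A/m^2")
```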
https://en.wikipedia.org/wiki?curid=1162543
899,043
Specific energy or massic energy is energy per unit mass. It is also sometimes called gravimetric energy density, which is not to be confused with energy density, which is defined as energy per unit volume. It is used to quantify, for example, stored heat and other thermodynamic properties of substances such as specific internal energy, specific enthalpy, specific Gibbs free energy, and specific Helmholtz free energy. It may also be used for the kinetic energy or potential energy of a body. Specific energy is an intensive property, whereas energy and mass are extensive properties.
https://en.wikipedia.org/wiki?curid=2030045
899,049
Energy density measures the energy released when a healthy organism ingests the food and metabolizes it with oxygen (see food energy for calculation), producing waste products such as carbon dioxide and water. Besides alcohol, the only sources of food energy are carbohydrates, fats and proteins, which make up ninety percent of the dry weight of food. Therefore, water content is the most important factor in energy density. Carbohydrates provide four calories per gram (17 kJ/g) and proteins offer slightly less at 16 kJ/g, whereas fat provides nine calories per gram (38 kJ/g), more than twice as much energy. Fats contain more carbon-carbon and carbon-hydrogen bonds than carbohydrates or proteins and are therefore richer in energy. Foods that derive most of their energy from fat have a much higher energy density than those that derive most of their energy from carbohydrates or proteins, even if the water content is the same. Nutrients with a lower absorption, such as fiber or sugar alcohols, lower the energy density of foods as well. A moderate energy density would be 1.6 to 3 calories per gram (7–13 kJ/g); salmon, lean meat, and bread would fall in this category. High-energy foods would have more than three calories per gram and include crackers, cheese, dark chocolate, and peanuts.
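A back-of-the-envelope calculation using the per-gram values quoted above (the food composition numbers are made up for illustration):

```python
# Energy values per gram, as quoted in the text (kJ/g).
KJ_PER_G = {"carbohydrate": 17, "protein": 16, "fat": 38, "water": 0, "fiber": 0}

def energy_density(composition_g):
    """Return energy density in kJ/g for a food given its composition in grams."""
    total_mass = sum(composition_g.values())
    total_energy = sum(KJ_PER_G.get(k, 0) * g for k, g in composition_g.items())
    return total_energy / total_mass

# Hypothetical 100 g portion: mostly water with some protein and fat (salmon-like).
print(energy_density({"water": 65, "protein": 20, "fat": 13, "carbohydrate": 2}))  # ~8.5 kJ/g
```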
https://en.wikipedia.org/wiki?curid=2030045
917,545
A "logical system" or "language" (not be confused with the kind of "formal language" discussed above which is described by a formal grammar), is a deductive system (see section above; most commonly first order predicate logic) together with additional (non-logical) axioms. According to model theory, a logical system may be given one or more semantics or interpretations which describe whether a well-formed formula is satisfied by a given structure. A structure that satisfies all the axioms of the formal system is known as a model of the logical system. A logical system is sound if each well-formed formula that can be inferred from the axioms is satisfied by every model of the logical system. Conversely, a logic system is (semantically) complete if each well-formed formula that is satisfied by every model of the logical system can be inferred from the axioms.
https://en.wikipedia.org/wiki?curid=396102
938,028
A particle in a fluid is described by a Langevin equation with a potential energy function, a damping force, and thermal fluctuations given by the fluctuation-dissipation theorem. If the potential is quadratic, then the constant-energy curves are ellipses, as shown in the figure. If there is dissipation but no thermal noise, a particle continually loses energy to the environment, and its time-dependent phase portrait (velocity vs position) corresponds to an inward spiral toward 0 velocity. By contrast, thermal fluctuations continually add energy to the particle and prevent it from reaching exactly 0 velocity. Rather, the initial ensemble of stochastic oscillators approaches a steady state in which the velocity and position are distributed according to the Maxwell–Boltzmann distribution. In the plot below (figure 2), the long-time velocity distribution (orange) and position distribution (blue) in a harmonic potential (formula_49) are plotted with the Boltzmann probabilities for velocity (red) and position (green). In particular, the late-time behavior depicts thermal equilibrium.
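A minimal numerical sketch of this behavior (mass, damping, spring constant, temperature, and time step are arbitrary illustrative choices): an Euler-Maruyama integrator for a Langevin particle in a harmonic potential whose long-time variances approach the Maxwell-Boltzmann/equipartition values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative parameters: mass, damping, spring constant, thermal energy.
m, gamma, k, kT = 1.0, 0.5, 1.0, 1.0
dt, n_steps, n_particles = 0.01, 20_000, 2_000

x = np.zeros(n_particles)   # ensemble starts at rest at the origin
v = np.zeros(n_particles)
noise_amp = np.sqrt(2.0 * gamma * kT * dt) / m

# Euler-Maruyama integration of  m dv = (-k x - gamma v) dt + sqrt(2 gamma kT) dW
for _ in range(n_steps):
    v += (-(k / m) * x - (gamma / m) * v) * dt + noise_amp * rng.standard_normal(n_particles)
    x += v * dt

# At equilibrium, var(v) -> kT/m and var(x) -> kT/k (Maxwell-Boltzmann / equipartition).
print(np.var(v), kT / m)
print(np.var(x), kT / k)
```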
https://en.wikipedia.org/wiki?curid=166890
955,937
Osmosis is the movement of water molecules across a selectively permeable membrane: the net movement of water molecules through a partially permeable membrane from a solution of high water potential to an area of low water potential. A cell with a less negative water potential will draw in water, but this also depends on other factors such as solute potential (due to dissolved solute molecules) and pressure potential (physical pressure, e.g. from the cell wall). There are three types of osmotic solutions: isotonic, hypotonic, and hypertonic. An isotonic solution is one in which the extracellular solute concentration is balanced with the concentration inside the cell. In an isotonic solution, water molecules still move between the solutions, but the rates are the same in both directions, so water movement is balanced between the inside and the outside of the cell. A hypotonic solution is one in which the solute concentration outside the cell is lower than the concentration inside the cell. In hypotonic solutions, water moves into the cell, down its concentration gradient (from higher to lower water concentration). That can cause the cell to swell; cells that don't have a cell wall, such as animal cells, could burst in this solution. A hypertonic solution is one in which the solute concentration is higher (hyper- meaning high) than the concentration inside the cell. In a hypertonic solution, the water will move out, causing the cell to shrink.
https://en.wikipedia.org/wiki?curid=417014
958,433
Quantum cryptography is the science of exploiting quantum mechanical properties to perform cryptographic tasks. The best known example of quantum cryptography is quantum key distribution which offers an information-theoretically secure solution to the key exchange problem. The advantage of quantum cryptography lies in the fact that it allows the completion of various cryptographic tasks that are proven or conjectured to be impossible using only classical (i.e. non-quantum) communication. For example, it is impossible to copy data encoded in a quantum state. If one attempts to read the encoded data, the quantum state will be changed due to wave function collapse (no-cloning theorem). This could be used to detect eavesdropping in quantum key distribution (QKD).
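To make the eavesdropping-detection idea concrete, here is a highly simplified classical simulation in the spirit of the BB84 key-distribution protocol (qubit behavior is modeled with random bits and basis comparisons; parameters and names are invented for illustration, and this is not a faithful quantum simulation):

```python
import random

random.seed(1)
n = 2000

# Alice encodes random bits in random bases; Bob measures in random bases.
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("XZ") for _ in range(n)]
bob_bases   = [random.choice("XZ") for _ in range(n)]

def measure(bit, prep_basis, meas_basis):
    # Matching bases reproduce the bit; mismatched bases give a random outcome.
    return bit if prep_basis == meas_basis else random.randint(0, 1)

def run(eavesdrop):
    results = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            # Eve measures in a random basis, disturbing the state before Bob sees it.
            e_basis = random.choice("XZ")
            bit = measure(bit, a_basis, e_basis)
            a_basis = e_basis
        results.append(measure(bit, a_basis, b_basis))
    # Keep only positions where Alice's and Bob's bases match, then estimate error rate.
    kept = [(a, b) for a, b, x, y in zip(alice_bits, results, alice_bases, bob_bases) if x == y]
    return sum(a != b for a, b in kept) / len(kept)

print("error rate without eavesdropper:", run(False))  # ~0
print("error rate with eavesdropper:   ", run(True))   # ~0.25, revealing Eve
```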
https://en.wikipedia.org/wiki?curid=28676005
959,681
Supervised learning is a type of algorithm that learns from labeled data and learns how to assign labels to future data that is unlabeled. In biology, supervised learning can be helpful when we have data that we know how to categorize and we would like to categorize more data into those categories. A common supervised learning algorithm is the random forest, which uses numerous decision trees to train a model to classify a dataset. Forming the basis of the random forest, a decision tree is a structure which aims to classify, or label, some set of data using certain known features of that data. A practical biological example of this would be taking an individual's genetic data and predicting whether or not that individual is predisposed to develop a certain disease or cancer. At each internal node the algorithm checks the dataset for exactly one feature, a specific gene in the previous example, and then branches left or right based on the result. Then at each leaf node, the decision tree assigns a class label to the dataset. So in practice, the algorithm walks a specific root-to-leaf path based on the input dataset through the decision tree, which results in the classification of that dataset. Commonly, decision trees have target variables that take on discrete values, like yes/no, in which case the tree is referred to as a classification tree, but if the target variable is continuous then it is called a regression tree. To construct a decision tree, it must first be trained using a training set to identify which features are the best predictors of the target variable.
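A minimal sketch of this workflow using scikit-learn (the "genetic" features below are random placeholder numbers, not real data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: 200 individuals x 50 "gene" features, with a binary disease label.
X = rng.normal(size=(200, 50))
y = (X[:, 3] + 0.5 * X[:, 17] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Hold out data so the trained forest is evaluated on individuals it has not seen.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)

print("held-out accuracy:", forest.score(X_test, y_test))
# Feature importances hint at which "genes" the trees split on most often.
print("top features:", np.argsort(forest.feature_importances_)[-3:])
```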
https://en.wikipedia.org/wiki?curid=149353
961,326
A MIL-STD-1553 multiplex data bus system consists of a Bus Controller (BC) controlling multiple Remote Terminals (RT) all connected together by a data bus providing a single data path between the Bus Controller and all the associated Remote Terminals. There may also be one or more Bus Monitors (BM); however, Bus Monitors are specifically not allowed to take part in data transfers, and are only used to capture or record data for analysis, etc. In redundant bus implementations, several data buses are used to provide more than one data path, i.e. dual redundant data bus, tri-redundant data bus, etc. All transmissions onto the data bus are accessible to the BC and all connected RTs. Messages consist of one or more 16-bit words (command, data, or status). The 16 bits comprising each word are transmitted using Manchester code, where each bit is transmitted as a 0.5 μs high and 0.5 μs low for a logical 1 or a low-high sequence for a logical 0. Each word is preceded by a 3 μs sync pulse (1.5 μs low plus 1.5 μs high for data words and the opposite for command and status words, which cannot occur in the Manchester code) and followed by an odd parity bit. In practice, each word can be considered a 20-bit word: 3 bits for sync, 16 bits for payload, and 1 bit for odd parity. The words within a message are transmitted contiguously and there has to be a minimum of a 4 μs gap between messages. However, this inter-message gap can be, and often is, much larger than 4 μs, even up to 1 ms with some older Bus Controllers. Devices have to start transmitting their response to a valid command within 4–12 μs and are considered to not have received a command or message if no response has started within 14 μs.
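An illustrative encoder for the bit-level format described above (a simplified sketch: each half-bit is modeled as a +1/-1 level and the 16-bit payload is made up; real implementations live in hardware):

```python
def odd_parity(bits):
    """Parity bit chosen so the 17 transmitted bits contain an odd number of ones."""
    return 0 if sum(bits) % 2 == 1 else 1

def manchester(bit):
    # Manchester II as used by MIL-STD-1553: logical 1 is high-then-low,
    # logical 0 is low-then-high, each half lasting 0.5 microseconds.
    return [1, -1] if bit else [-1, 1]

def encode_data_word(payload_bits):
    assert len(payload_bits) == 16
    # Data-word sync: 1.5 us low then 1.5 us high (a pattern that cannot occur in Manchester data).
    waveform = [-1, -1, -1, 1, 1, 1]
    for b in payload_bits + [odd_parity(payload_bits)]:
        waveform += manchester(b)
    return waveform  # 6 + 17*2 = 40 half-bit levels, i.e. 20 us at 1 Mbit/s

word = [int(b) for b in "1010110000111101"]
print(encode_data_word(word))
```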
https://en.wikipedia.org/wiki?curid=1441618
972,043
During embryonic development, cells are restricted to different layers due to differential affinities. One of the ways this can occur is when cells share the same cell-to-cell adhesion molecules. For instance, homotypic cell adhesion can maintain boundaries between groups of cells that have different adhesion molecules. Furthermore, cells can sort based upon differences in adhesion between the cells, so even two populations of cells with different levels of the same adhesion molecule can sort out. In cell culture, cells that have the strongest adhesion move to the center of a mixed aggregate of cells. Moreover, cell-cell adhesion is often modulated by cell contractility, which can exert forces on the cell-cell contacts so that two cell populations with equal levels of the same adhesion molecule can sort out. The molecules responsible for adhesion are called cell adhesion molecules (CAMs). Several types of cell adhesion molecules are known and one major class of these molecules are cadherins. There are dozens of different cadherins that are expressed on different cell types. Cadherins bind to other cadherins in a like-to-like manner: E-cadherin (found on many epithelial cells) binds preferentially to other E-cadherin molecules. Mesenchymal cells usually express other cadherin types such as N-cadherin.
https://en.wikipedia.org/wiki?curid=19965
974,519
Although the equipartition theorem makes accurate predictions in certain conditions, it is inaccurate when quantum effects are significant, such as at low temperatures. When the thermal energy is smaller than the quantum energy spacing in a particular degree of freedom, the average energy and heat capacity of this degree of freedom are less than the values predicted by equipartition. Such a degree of freedom is said to be "frozen out" when the thermal energy is much smaller than this spacing. For example, the heat capacity of a solid decreases at low temperatures as various types of motion become frozen out, rather than remaining constant as predicted by equipartition. Such decreases in heat capacity were among the first signs to physicists of the 19th century that classical physics was incorrect and that a new, more subtle, scientific model was required. Along with other evidence, equipartition's failure to model black-body radiation—also known as the ultraviolet catastrophe—led Max Planck to suggest that the energy of the oscillators in an object, which emit light, was quantized, a revolutionary hypothesis that spurred the development of quantum mechanics and quantum field theory.
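To make the "freezing out" quantitative (a standard result, supplied here rather than taken from the article): classical equipartition assigns each harmonic oscillator a mean energy of k_B T, whereas the quantum result (omitting the zero-point term) is

$$ \langle E \rangle = \frac{\hbar\omega}{e^{\hbar\omega/k_B T} - 1}, $$

which approaches k_B T when k_B T \gg \hbar\omega but is exponentially suppressed when k_B T \ll \hbar\omega, so that the degree of freedom contributes almost nothing to the heat capacity.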
https://en.wikipedia.org/wiki?curid=516133
974,602
A commonly cited counter-example where energy is "not" shared among its various forms and where equipartition does "not" hold in the microcanonical ensemble is a system of coupled harmonic oscillators. If the system is isolated from the rest of the world, the energy in each normal mode is constant; energy is not transferred from one mode to another. Hence, equipartition does not hold for such a system; the amount of energy in each normal mode is fixed at its initial value. If sufficiently strong nonlinear terms are present in the energy function, energy may be transferred between the normal modes, leading to ergodicity and rendering the law of equipartition valid. However, the Kolmogorov–Arnold–Moser theorem states that energy will not be exchanged unless the nonlinear perturbations are strong enough; if they are too small, the energy will remain trapped in at least some of the modes.
https://en.wikipedia.org/wiki?curid=516133
987,319
The flywheel collects a percentage of the initial kinetic energy of the car, and this percentage can be represented by formula_8. The flywheel stores the energy as rotational kinetic energy. Because the energy is kept as kinetic energy and not transformed into another type of energy, this process is efficient. The flywheel can only store so much energy, however, and this is limited by its maximum amount of rotational kinetic energy, which is determined by the inertia of the flywheel and its angular velocity. As the car sits idle, little rotational kinetic energy is lost over time, so the initial amount of energy in the flywheel can be assumed to equal the final amount of energy distributed by the flywheel. The amount of kinetic energy distributed by the flywheel is therefore:
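The article's formula placeholders are not recoverable here, but a plausible form of the relation this sentence leads into, assuming formula_8 denotes the captured fraction f of the car's initial kinetic energy and the flywheel has moment of inertia I and angular velocity ω, is

$$ E_\text{flywheel} = \tfrac{1}{2} I \omega^2 = f \cdot \tfrac{1}{2} m v^2, $$

i.e. the rotational kinetic energy stored, and later redistributed, equals the captured fraction of the car's translational kinetic energy.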
https://en.wikipedia.org/wiki?curid=305992
1,001,168
Wigner's friend is a thought experiment in theoretical quantum physics, first conceived by the physicist Eugene Wigner in 1961, and further developed by David Deutsch in 1985. The scenario involves an indirect observation of a quantum measurement: An observer formula_1 observes another observer formula_2 who performs a quantum measurement on a physical system. The two observers then formulate a statement about the physical system's state after the measurement according to the laws of quantum theory. However, in most of the interpretations of quantum mechanics, the resulting statements of the two observers contradict each other. This reflects a seeming incompatibility of two laws in quantum theory: the deterministic and continuous time evolution of the state of a closed system and the nondeterministic, discontinuous collapse of the state of a system upon measurement. Wigner's friend is therefore directly linked to the measurement problem in quantum mechanics with its famous Schrödinger's cat paradox.
https://en.wikipedia.org/wiki?curid=71119
1,003,261
The study of the complexity of explicitly given algorithms is called analysis of algorithms, while the study of the complexity of problems is called computational complexity theory. Both areas are highly related, as the complexity of an algorithm is always an upper bound on the complexity of the problem solved by this algorithm. Moreover, for designing efficient algorithms, it is often fundamental to compare the complexity of a specific algorithm to the complexity of the problem to be solved. Also, in most cases, the only thing that is known about the complexity of a problem is that it is lower than the complexity of the most efficient known algorithms. Therefore, there is a large overlap between analysis of algorithms and complexity theory.
https://en.wikipedia.org/wiki?curid=6511
1,012,191
After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing-computable tasks and are still restricted to tasks within the scope of Turing machines. By Penrose and Lucas's arguments, existing quantum computers are not sufficient, so Penrose seeks some other process involving new physics, for instance quantum gravity, which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and also spanning more than one neuron. However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing.
https://en.wikipedia.org/wiki?curid=2958015
1,022,390
When the scope of variables declared within a function does not extend beyond that function, this is known as function scope. Function scope is available in most programming languages which offer a way to create a "local variable" in a function or subroutine: a variable whose scope ends (that goes out of context) when the function returns. In most cases the lifetime of the variable is the duration of the function call—it is an automatic variable, created when the function starts (or the variable is declared), destroyed when the function returns—while the scope of the variable is within the function, though the meaning of "within" depends on whether scope is lexical or dynamic. However, some languages, such as C, also provide for static local variables, where the lifetime of the variable is the entire lifetime of the program, but the variable is only in context when inside the function. In the case of static local variables, the variable is created when the program initializes, and destroyed only when the program terminates, as with a static global variable, but is only in context within a function, like an automatic local variable.
https://en.wikipedia.org/wiki?curid=62068
1,022,391
Importantly, in lexical scope a variable with function scope has scope only within the "lexical context" of the function: it goes out of context when another function is called within the function, and comes back into context when the function returns—called functions have no access to the local variables of calling functions, and local variables are only in context within the body of the function in which they are declared. By contrast, in dynamic scope, the scope extends to the "execution context" of the function: local variables "stay in context" when another function is called, only going out of context when the defining function ends, and thus local variables are in context of the function in which they are defined "and all called functions". In languages with lexical scope and nested functions, local variables are in context for nested functions, since these are within the same lexical context, but not for other functions that are not lexically nested. A local variable of an enclosing function is known as a non-local variable for the nested function. Function scope is also applicable to anonymous functions.
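A small Python illustration of the lexical-scope behavior described above (an invented example): a nested function sees the local variables of the function that lexically encloses it, while a separately defined function it calls does not.

```python
def outer():
    counter = 0  # local to outer; a non-local variable from inner's point of view

    def inner():
        nonlocal counter      # refer to the enclosing function's variable, not a new one
        counter += 1
        return counter

    unrelated()               # unrelated() cannot see `counter`: scope is lexical, not dynamic
    return inner

def unrelated():
    try:
        print(counter)        # NameError: not lexically nested inside outer()
    except NameError:
        print("`counter` is not visible here")

bump = outer()
print(bump(), bump())         # 1 2 -- the closure keeps `counter` in context
```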
https://en.wikipedia.org/wiki?curid=62068
1,025,405
Systems biology emerged as a new field of science around 2000, when the Institute for Systems Biology was established in Seattle in an effort to lure "computational" type people who, it was felt, were not attracted to the academic settings of the university. The institute did not have a clear definition of what the field actually was: roughly, bringing together people from diverse fields to use computers to holistically study biology in new ways. A Department of Systems Biology at Harvard Medical School was launched in 2003. In 2006 it was predicted that the buzz generated by the "very fashionable" new concept would cause all the major universities to need a systems biology department, and thus that there would be careers available for graduates with a modicum of ability in computer programming and biology. In 2006 the National Science Foundation put forward a challenge to build a mathematical model of the whole cell. In 2012 the first whole-cell model of "Mycoplasma genitalium" was achieved by the Karr Laboratory at the Mount Sinai School of Medicine in New York. The whole-cell model is able to predict the viability of "M. genitalium" cells in response to genetic mutations.
https://en.wikipedia.org/wiki?curid=467899
1,034,696
The mean squared prediction error can be computed exactly in two contexts. First, with a data sample of length "n", the data analyst may run the regression over only "q" of the data points (with "q" < "n"), holding back the other "n – q" data points with the specific purpose of using them to compute the estimated model’s MSPE out of sample (i.e., not using data that were used in the model estimation process). Since the regression process is tailored to the "q" in-sample points, normally the in-sample MSPE will be smaller than the out-of-sample one computed over the "n – q" held-back points. If the increase in the MSPE out of sample compared to in sample is relatively slight, that results in the model being viewed favorably. And if two models are to be compared, the one with the lower MSPE over the "n – q" out-of-sample data points is viewed more favorably, regardless of the models’ relative in-sample performances. The out-of-sample MSPE in this context is exact for the out-of-sample data points that it was computed over, but is merely an estimate of the model’s MSPE for the mostly unobserved population from which the data were drawn.
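A minimal sketch of the held-back-data procedure described above, with synthetic data standing in for a real sample (the model, sample size, and split are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sample of length n; the true relationship is linear with noise.
n, q = 200, 150
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.7 * x + rng.normal(scale=1.0, size=n)

# Fit the regression on q in-sample points; hold back the remaining n - q points.
beta = np.polyfit(x[:q], y[:q], deg=1)

def mspe(xs, ys):
    preds = np.polyval(beta, xs)
    return np.mean((ys - preds) ** 2)

print("in-sample MSPE:     ", mspe(x[:q], y[:q]))
print("out-of-sample MSPE: ", mspe(x[q:], y[q:]))  # typically somewhat larger
```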
https://en.wikipedia.org/wiki?curid=3244288
1,037,606
A likelihood function arises from a probability density function considered as a function of its distributional parameterization argument. For example, consider a model which gives the probability density function formula_1 of observable random variable formula_2 as a function of a parameter formula_3. Then for a specific value formula_4 of formula_5, the function formula_6 is a likelihood function of formula_7: it gives a measure of how "likely" any particular value of formula_8 is, if we know that formula_9 has the value formula_10. The density function may be a density with respect to counting measure, i.e. a probability mass function.
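A concrete instance of this definition (invented data; scipy's normal density stands in for the model's density f): the same density formula, viewed as a function of the parameter with the observation held fixed, is the likelihood.

```python
import numpy as np
from scipy.stats import norm

x_observed = 1.3                   # a fixed observation of the random variable X
thetas = np.linspace(-2, 4, 7)     # candidate values of the location parameter theta

# For each theta, evaluate the same density f(x; theta) at the fixed observation:
# read as a function of theta, these numbers form the likelihood L(theta | x_observed).
likelihood = norm.pdf(x_observed, loc=thetas, scale=1.0)
for t, L in zip(thetas, likelihood):
    print(f"theta = {t:+.1f}   L(theta | x) = {L:.4f}")
```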
https://en.wikipedia.org/wiki?curid=17905
1,039,032
In contrast, for isolated systems (and fixed external parameters), the second law states that the entropy will increase to a maximum value at equilibrium. An isolated system has a fixed total energy and mass. A closed system, on the other hand, is a system which is connected to another system and can exchange energy (e.g. heat), but not matter (i.e. particles), with that other system. If, rather than an isolated system, we have a closed system in which the entropy rather than the energy remains constant, then it follows from the first and second laws of thermodynamics that the energy of that system will drop to a minimum value at equilibrium, transferring its energy to the other system. To restate:
https://en.wikipedia.org/wiki?curid=3662314
1,042,935
The figure shows the effective sail area normalized by the coil area formula_215 for the MKM case from Gros of equation and for Zubrin from equation for formula_222, formula_116=100 km, and formula_205=0.1 (cm^-3) for the G-cloud on approach to Alpha Centauri, corresponding to ISM density formula_225 (kg/m^3) consistent with that from Freeland, plotted versus the spacecraft velocity relative to the speed of light formula_226. A good fit occurs for these parameters, but for different values of formula_116 and formula_218 the fit can vary significantly. Also plotted, on the secondary axis, is the MHD applicability test of ion gyroradius divided by magnetopause radius formula_229 <1 from equation. Note that MHD applicability occurs at formula_230 < 1%. For comparison, the 2004 Fujita formula_87 as a function of formula_229 from the MHD applicability test section is also plotted. Note that the Gros model predicts a more rapid decrease in effective area than this model at higher velocities. The normalized values of formula_208 and formula_234 track closely until formula_235 10%, after which point the Zubrin magsail model of Equation becomes increasingly optimistic and equation is applicable instead. Since the models track closely up to formula_236 10%, with the kinematic model underestimating effective sail area for smaller values of formula_237 (hence underestimating force), equation is an approximation for both the MHD and kinematic region. The Gros model is pessimistic for formula_237 < 0.1%.
https://en.wikipedia.org/wiki?curid=37845
1,042,938
The figure plots the distance traveled while decelerating formula_255 (ly) and the time required to decelerate formula_256 (yr) given a starting relative velocity formula_257 and a final velocity formula_258 (m/s) consistent with that from Freeland for the same parameters above. Equation gives the magsail mass formula_259 as 97 tonnes assuming 100 tonnes of payload mass formula_173, using the same values used by Freeland of formula_189 = 10 (A/m) and formula_262=6,500 (kg/m^3) for the superconducting coil. Equation gives the force for the magsail multiplied by formula_87=4 for the Andrews/Zubrin model to align with the equation's definition of force from the Gros model. Acceleration is force divided by mass, velocity is the integral of acceleration over the deceleration time interval formula_256 (yr), and deceleration distance traveled formula_255 (ly) is the integral of the velocity over formula_256 (yr). Numerical integration resulted in the lines plotted in the figure, with deceleration distance traveled plotted on the primary vertical axis on the left and time required to decelerate on the secondary vertical axis on the right. Note that the MHD Zubrin model and the Gros kinematic model predict nearly identical values of deceleration distance up to formula_267 ~ 5% of light speed, with the Zubrin model predicting less deceleration distance and shorter deceleration time at greater values of formula_267. This is consistent with the Gros model predicting a smaller effective area formula_251 at larger values of formula_267. The value of the closed-form solution for deceleration distance formula_271 from for the same parameters closely tracks the numerical integration result.
https://en.wikipedia.org/wiki?curid=37845
1,045,790
In the early 1950s, Eccles and his colleagues performed the research that would lead to his receiving the Nobel Prize. To study synapses in the peripheral nervous system, Eccles and colleagues used the stretch reflex as a model, which is easily studied because it consists of only two neurons: a sensory neuron (the muscle spindle fibre) and the motor neuron. The sensory neuron synapses onto the motor neuron in the spinal cord. When a current is passed into the sensory neuron in the quadriceps, the motor neuron innervating the quadriceps produces a small excitatory postsynaptic potential (EPSP). When a similar current is passed through the hamstring, the opposing muscle to the quadriceps, an inhibitory postsynaptic potential (IPSP) is produced in the quadriceps motor neuron. Although a single EPSP is not enough to fire an action potential in the motor neuron, the sum of several EPSPs from multiple sensory neurons synapsing onto the motor neuron can cause the motor neuron to fire, thus contracting the quadriceps. On the other hand, IPSPs can subtract from this sum of EPSPs, preventing the motor neuron from firing.
https://en.wikipedia.org/wiki?curid=16426
1,047,380
CdSe-derived nanoparticles with sizes below 10 nm exhibit a property known as quantum confinement. Quantum confinement results when the electrons in a material are confined to a very small volume. Quantum confinement is size dependent, meaning the properties of CdSe nanoparticles are tunable based on their size. One type of CdSe nanoparticle is the CdSe quantum dot, in which confinement restricts the charge carriers to a set of discrete energy states. This discretization of energy states results in electronic transitions that vary with quantum dot size. Larger quantum dots have more closely spaced electronic states than smaller quantum dots, which means that the energy required to excite an electron from the HOMO to the LUMO is lower than for the same electronic transition in a smaller quantum dot. This quantum confinement effect can be observed as a red shift in the absorbance spectra of nanocrystals with larger diameters. Quantum confinement effects in quantum dots can also result in fluorescence intermittency, called "blinking."
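The size dependence is often estimated with the Brus effective-mass model (a standard approximation, supplied here as context rather than taken from the paragraph above): to first order the band gap of a dot of radius R is

$$ E(R) \approx E_g^{\text{bulk}} + \frac{\hbar^2 \pi^2}{2R^2}\left(\frac{1}{m_e^*} + \frac{1}{m_h^*}\right) - \frac{1.8\, e^2}{4\pi \varepsilon \varepsilon_0 R}, $$

so the confinement term grows as 1/R²: smaller dots have wider gaps and absorb at shorter wavelengths, while larger dots have narrower gaps, which is the red shift described above.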
https://en.wikipedia.org/wiki?curid=2115745
1,048,209
In many colleges, a basic course in "statistics for non-statisticians" has required only algebra (and not calculus); for future statisticians, in contrast, the undergraduate exposure to statistics is highly mathematical. As undergraduates, future statisticians should have completed courses in multivariate calculus, linear algebra, computer programming, and a year of calculus-based probability and statistics. Students wanting to obtain a doctorate in statistics from "any of the better graduate programs in statistics" should also take "real analysis". Laboratory courses in physics, chemistry and psychology also provide useful experiences with planning and conducting experiments and with analyzing data. The ASA recommends that undergraduate students consider obtaining a bachelor's degree in applied mathematics as preparation for entering a master program in statistics.
https://en.wikipedia.org/wiki?curid=24985094
1,062,019
In the field of cellular biology, single-cell analysis is the study of genomics, transcriptomics, proteomics, metabolomics and cell–cell interactions at the single cell level. The concept of single-cell analysis originated in the 1970s. Before the discovery of heterogeneity, single-cell analysis mainly referred to the analysis or manipulation of an individual cell in a bulk population of cells at a particular condition using optical or electronic microscopy. To date, due to the heterogeneity seen in both eukaryotic and prokaryotic cell populations, analyzing a single cell makes it possible to discover mechanisms not seen when studying a bulk population of cells. Technologies such as fluorescence-activated cell sorting (FACS) allow the precise isolation of selected single cells from complex samples, while high-throughput single-cell partitioning technologies enable the simultaneous molecular analysis of hundreds or thousands of single unsorted cells; this is particularly useful for the analysis of transcriptome variation in genotypically identical cells, allowing the definition of otherwise undetectable cell subtypes. The development of new technologies is increasing our ability to analyze the genome and transcriptome of single cells, as well as to quantify their proteome and metabolome. Mass spectrometry techniques have become important analytical tools for proteomic and metabolomic analysis of single cells. Recent advances have enabled the quantification of thousands of proteins across hundreds of single cells, thus making new types of analysis possible. In situ sequencing and fluorescence in situ hybridization (FISH) do not require that cells be isolated and are increasingly being used for analysis of tissues.
https://en.wikipedia.org/wiki?curid=38991948
1,065,956
The choice of free energy function, formula_13, can have a significant effect on the physical behaviour of the interface, and should be selected with care. The double-well function represents an approximation of the Van der Waals equation of state near the critical point, and has historically been used for its simplicity of implementation when the phase-field model is employed solely for interface tracking purposes. But this has led to the frequently observed spontaneous drop shrinkage phenomenon, whereby the high phase miscibility predicted by an equation of state near the critical point allows significant interpenetration of the phases and can eventually lead to the complete disappearance of a droplet whose radius is below some critical value. Minimizing perceived continuity losses over the duration of a simulation requires limits on the mobility parameter, resulting in a delicate balance between interfacial smearing due to convection, interfacial reconstruction due to free energy minimization (i.e. mobility-based diffusion), and phase interpenetration, also dependent on the mobility. A recent review of alternative energy density functions for interface tracking applications has proposed a modified form of the double-obstacle function which avoids the spontaneous drop shrinkage phenomenon and limits on mobility, with comparative results provided for a number of benchmark simulations using the double-well function and the volume-of-fluid sharp interface technique. The proposed implementation has a computational complexity only slightly greater than that of the double-well function, and may prove useful for interface tracking applications of the phase-field model where the duration/nature of the simulated phenomena introduces phase continuity concerns (i.e. small droplets, extended simulations, multiple interfaces, etc.).
https://en.wikipedia.org/wiki?curid=16706608
1,073,582
Calculating the elastic modulus with software involves using software filtering techniques to separate the critical unloading data from the rest of the load-displacement data. The start and end points are usually found by using user-defined percentages. This user input increases the variability because of possible human error. It would be best if the entire calculation process were done automatically for more consistent results. A good nanoindentation machine prints out the load-unload curve data with labels for each of the segments, such as loading, top hold, unload, bottom hold, and reloading. If multiple cycles are used, then each one should be labeled. However, most nanoindenters only give the raw data for the load-unload curves. An automatic software technique finds the sharp change from the top hold time to the beginning of the unloading. This can be found by doing a linear fit to the top hold time data. The unload data start when the load is 1.5 standard deviations below the hold-time load. The minimum data point is the end of the unloading data. The computer calculates the elastic modulus from these data according to the Oliver–Pharr (nonlinear) method. The Doerner–Nix method is less complicated to program because it is a linear curve fit of the selected minimum to maximum data. However, it is limited because the calculated elastic modulus will decrease as more data points are used along the unloading curve. The Oliver–Pharr method fits a nonlinear curve to the unloading curve data, where formula_25 is the depth variable, formula_54 is the final depth and formula_55 and formula_26 are constants and coefficients. The software must use a nonlinear convergence method to solve for formula_55, formula_54 and formula_26 that best fit the unloading data. The slope is calculated by differentiating formula_5 at the maximum displacement.
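A sketch of an Oliver-Pharr style fit as described above (the unloading data here are synthetic, and the parameter names alpha, h_f, m are generic stand-ins for the article's formula placeholders):

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic unloading segment: load P (mN) versus depth h (nm).
h = np.linspace(120, 200, 40)
P = 0.002 * (h - 100) ** 1.5 + rng.normal(scale=0.01, size=h.size)

# Power-law unloading model: P = alpha * (h - h_f)^m
def unload(h, alpha, h_f, m):
    return alpha * (h - h_f) ** m

(alpha, h_f, m), _ = curve_fit(
    unload, h, P, p0=(0.01, 90.0, 1.5),
    bounds=([1e-6, 0.0, 1.0], [1.0, 119.0, 3.0]),
)

# Contact stiffness S = dP/dh evaluated at maximum depth.
h_max = h.max()
S = alpha * m * (h_max - h_f) ** (m - 1)
print(f"alpha={alpha:.4g}, h_f={h_f:.4g}, m={m:.3g}, S={S:.4g} mN/nm")
```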
https://en.wikipedia.org/wiki?curid=4023692
1,097,721
A form of enterprise application integration, it is intended to reduce costs and standardize on agreed data definitions associated with integrating business systems. A canonical model is any model that is canonical in nature, i.e. a model which is in the simplest form possible, based on a standard enterprise application integration (EAI) solution. Most organizations also adopt a set of standards for message structure and content (message payload). The desire for a consistent message payload results in the construction of an enterprise or business-domain canonical model: a common view within a given context. The term canonical model is often used interchangeably with integration strategy, and it often entails a move to a message-based integration methodology. A typical migration moves from point-to-point integration to a canonical data model, an enterprise design pattern which provides common data naming, definitions and values within a generalized data framework. Advantages of using a canonical data model include reducing the number of data translations and reducing the maintenance effort.
https://en.wikipedia.org/wiki?curid=27500605
1,143,310
Inadequate data sanitization methods can result in two main problems: a breach of private information and compromises to the integrity of the original dataset. If data sanitization methods are unsuccessful at removing all sensitive information, they pose the risk of leaking this information to attackers. Numerous studies have been conducted to optimize ways of preserving sensitive information. Some data sanitization methods are highly sensitive to distinct points that lie far from the rest of the data. This type of data sanitization is very precise and can detect anomalies even if the poisoned data point is relatively close to true data. Another method of data sanitization is one that also removes outliers in data, but does so in a more general way: it detects the general trend of the data and discards any data that stray, and it is able to target anomalies even when they are inserted as a group. In general, data sanitization techniques use algorithms to detect anomalies and remove any suspicious points that may be poisoned data or sensitive information.
https://en.wikipedia.org/wiki?curid=67273665
1,170,116
Moreover, the theorem is historically significant for the role it played in ruling out the possibility of hidden variables in quantum mechanics. A hidden-variable theory that is deterministic implies that the probability of a given outcome is "always" either 0 or 1. For example, a Stern–Gerlach measurement on a spin-1 atom will report that the atom's angular momentum along the chosen axis is one of three possible values, which can be designated formula_17, formula_18 and formula_19. In a deterministic hidden-variable theory, there exists an underlying physical property that fixes the result found in the measurement. Conditional on the value of the underlying physical property, any given outcome (for example, a result of formula_19) must be either impossible or guaranteed. But Gleason's theorem implies that there can be no such deterministic probability measure. The mapping formula_21 is continuous on the unit sphere of the Hilbert space for any density operator formula_2. Since this unit sphere is connected, no continuous probability measure on it can be deterministic. Gleason's theorem therefore suggests that quantum theory represents a deep and fundamental departure from the classical intuition that uncertainty is due to ignorance about hidden degrees of freedom. More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual". Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity.
https://en.wikipedia.org/wiki?curid=6796998
1,171,585
Biosurveillance: In 1999, the University of Pittsburgh's Center for Biomedical Informatics deployed the first automated bioterrorism detection system, called RODS (Real-Time Outbreak Disease Surveillance). RODS is designed to collect data from many data sources and use them to perform signal detection, that is, to detect a possible bioterrorism event at the earliest possible moment. RODS, and other systems like it, collect data from sources including clinic data, laboratory data, and data from over-the-counter drug sales. In 2000, Michael Wagner, the codirector of the RODS laboratory, and Ron Aryel, a subcontractor, conceived the idea of obtaining live data feeds from "non-traditional" (non-health-care) data sources. The RODS laboratory's first efforts eventually led to the establishment of the National Retail Data Monitor, a system which collects data from 20,000 retail locations nationwide.
https://en.wikipedia.org/wiki?curid=45584
1,187,599
To formulate such a denotational semantics, one might first try to construct a "model" for the lambda calculus, in which a genuine (total) function is associated with each lambda term. Such a model would formalize a link between the lambda calculus as a purely syntactic system and the lambda calculus as a notational system for manipulating concrete mathematical functions. The combinator calculus is such a model. However, the elements of the combinator calculus are functions from functions to functions; in order for the elements of a model of the lambda calculus to be of arbitrary domain and range, they could not be true functions, only partial functions.
https://en.wikipedia.org/wiki?curid=325077
1,191,397
Hiley refers to the quantum potential as internal energy and as "a new quality of energy only playing a role in quantum processes". He explains that the quantum potential is a further energy term alongside the well-known kinetic energy and the (classical) potential energy, and that it is a nonlocal energy term that arises necessarily in view of the requirement of energy conservation; he added that much of the physics community's resistance against the notion of the quantum potential may have been due to scientists' expectations that energy should be local.
https://en.wikipedia.org/wiki?curid=8057418
1,194,791
Neurons are known as excitable cells because on their surface membrane there is an abundance of proteins known as ion channels that allow small charged particles to pass in and out of the cell. The structure of the neuron allows chemical information to be received by its dendrites, propagated through the perikaryon (cell body) and down its axon, and eventually passed on to other neurons through its axon terminal. These voltage-gated ion channels allow for rapid depolarization throughout the cell. This depolarization, if it reaches a certain threshold, will cause an action potential. Once the action potential reaches the axon terminal, it will cause an influx of calcium ions into the cell. The calcium ions will then cause vesicles, small packets filled with neurotransmitters, to bind to the cell membrane and release their contents into the synapse. This cell is known as the pre-synaptic neuron, and the cell that interacts with the neurotransmitters released is known as the post-synaptic neuron. Once the neurotransmitter is released into the synapse, it can either bind to receptors on the post-synaptic cell, be taken back up by the pre-synaptic cell and saved for later transmission, or be broken down by enzymes in the synapse specific to that particular neurotransmitter. These three different actions are major areas where drug action can affect communication between neurons.
https://en.wikipedia.org/wiki?curid=1685778
1,198,300
In physics, an open quantum system is a quantum-mechanical system that interacts with an external quantum system, which is known as the "environment" or a "bath". In general, these interactions significantly change the dynamics of the system and result in quantum dissipation, such that the information contained in the system is lost to its environment. Because no quantum system is completely isolated from its surroundings, it is important to develop a theoretical framework for treating these interactions in order to obtain an accurate understanding of quantum systems.
https://en.wikipedia.org/wiki?curid=1079106
1,198,747
A fire requires heat, fuel, and an oxidizing agent. The energy required to overcome the activation energy barrier for combustion is transferred as heat into the system, resulting in changes to the system's internal energy. In a process, the energy input to start a fire may comprise both work and heat, such as when one rubs tinder (work) and experiences friction (heat) to start a fire. The ensuing combustion is highly exothermic and releases heat. The overall change in internal energy does not reveal the mode of energy transfer and quantifies only the net work and heat. The difference between the initial and final states of the system's internal energy does not account for the extent of the energy interactions that transpired. Therefore, internal energy is a state function (i.e. an exact differential), while heat and work are path functions (i.e. inexact differentials) because integration must account for the path taken.
https://en.wikipedia.org/wiki?curid=5598930
1,200,066
Over the same period in the 1980s, biologists studying cell motility had hit upon an interesting and reproducible behavior of cells in culture: the scattering response. Epithelial cells in culture grow normally as tight clusters. However, they could be induced to break cell-cell contacts and become elongated and motile after exposure to a "scatter factor" that was secreted by mesenchymal cells such as Swiss 3T3 fibroblasts. This was best described by Julia Gray's group in 1987. During the same period in the mid 1980s, a monoclonal antibody was reported by the group of Walter Birchmeier to disrupt cell-cell contacts and alter the front-rear polarity of cells in culture. The target of this antibody was later identified as a component of cell-cell junctions, E-cadherin. These disparate observations eventually coalesced into a resilient paradigm for cell motility and cell polarity. Epithelial cells are typically nonmotile, but can become motile by inhibiting cell-cell junctions or by addition of growth factors that induce scattering. Both of these are reversible, and both involve the rupture of cell-cell junctions.
https://en.wikipedia.org/wiki?curid=55054202
1,201,521
The simplest stochastic Lagrangian model is the Langevin equation, which provides a model for the velocity following the fluid particle. In particular, the Langevin equation for the fluid-particle velocity yields a complete prediction for turbulent dispersion. According to the equation, the Lagrangian velocity autocorrelation function is the exponential formula_106. With this expression for formula_107, the standard deviation of the particle displacement can be integrated to yield formula_108. According to the Langevin equation, each component of the fluid particle velocity is an Ornstein–Uhlenbeck process. It follows that the fluid particle position (i.e., the integral of the Ornstein–Uhlenbeck process) is also a Gaussian process. Thus, the mean scalar field predicted by the Langevin equation is the Gaussian distribution formula_109, with formula_110 given by the previous equation.
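For reference, the standard forms these placeholders usually take (supplied as an assumption, not quoted from the article) are an exponential Lagrangian autocorrelation and the resulting displacement variance:

$$ \rho(s) = e^{-|s|/T_L}, \qquad \sigma_X^2(t) = 2\,\sigma_u^2\, T_L \left[\, t - T_L\left(1 - e^{-t/T_L}\right) \right], $$

where T_L is the Lagrangian integral time scale and σ_u² the velocity variance; for t ≪ T_L this gives ballistic growth σ_X ≈ σ_u t, and for t ≫ T_L diffusive growth σ_X² ≈ 2σ_u² T_L t.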
https://en.wikipedia.org/wiki?curid=7747984
1,236,429
The log-linear models can be thought of as lying on a continuum with the two extremes being the simplest model and the saturated model. The simplest model is the model where all the expected frequencies are equal. This is true when the variables are not related. The saturated model is the model that includes all the model components. This model will always explain the data the best, but it is the least parsimonious as everything is included. In this model, observed frequencies equal expected frequencies, therefore in the likelihood ratio chi-square statistic, the ratio formula_7 and formula_8. This results in the likelihood ratio chi-square statistic being equal to 0, which is the best model fit. Other possible models are the conditional equiprobability model and the mutual dependence model.
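As a small numerical illustration of this fit statistic (the table values below are made up), the likelihood-ratio chi-square G² = 2 Σ O ln(O/E) is zero when the expected frequencies equal the observed ones, as in the saturated model, and grows as the model's expected frequencies depart from the data:

```python
import numpy as np

def g_squared(observed, expected):
    """Likelihood-ratio chi-square statistic G^2 = 2 * sum(O * ln(O / E))."""
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    mask = observed > 0                      # cells with O = 0 contribute 0 to the sum
    return 2.0 * np.sum(observed[mask] * np.log(observed[mask] / expected[mask]))

# Hypothetical 2x2 table of observed frequencies.
obs = np.array([[30.0, 10.0], [20.0, 40.0]])

# Saturated model: expected = observed, so G^2 = 0 (perfect fit).
print(g_squared(obs, obs))                   # 0.0

# Simplest (equiprobability) model: all cells expected to be equal.
exp_equal = np.full_like(obs, obs.sum() / obs.size)
print(g_squared(obs, exp_equal))             # > 0, a worse fit
```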
https://en.wikipedia.org/wiki?curid=35696836
1,281,417
If the quantum system has a classical analog, e.g. a coherent state or thermal radiation, then the P distribution is non-negative everywhere like an ordinary probability distribution. If, however, the quantum system has no classical analog, e.g. an incoherent Fock state or entangled system, then the P distribution is negative somewhere or more singular than a Dirac delta function. (By a theorem of Schwartz, distributions that are more singular than the Dirac delta function are always negative somewhere.) Such "negative probability" or high degree of singularity is a feature inherent to the representation and does not diminish the meaningfulness of expectation values taken with respect to the P distribution. Even if the P distribution does behave like an ordinary probability distribution, however, the matter is not quite so simple. According to Mandel and Wolf: "The different coherent states are not [mutually] orthogonal, so that even if formula_15 behaved like a true probability density [function], it would not describe probabilities of mutually exclusive states."
https://en.wikipedia.org/wiki?curid=5906119
1,286,658
Occasionally, and somewhat confusingly, the term "continuous quantum computation" is used to refer to a different area of quantum computing: the study of how to use quantum systems having "finite"-dimensional Hilbert spaces to calculate or approximate the answers to mathematical questions involving continuous functions. A major motivation for investigating the quantum computation of continuous functions is that many scientific problems have mathematical formulations in terms of continuous quantities. A second motivation is to explore and understand the ways in which quantum computers can be more capable or powerful than classical ones. The computational complexity of a problem can be quantified in terms of the minimal computational resources necessary to solve it. In quantum computing, resources include the number of qubits available to a computer and the number of queries that can be made to that computer. The classical complexity of many continuous problems is known. Therefore, when the quantum complexity of these problems is obtained, the question as to whether quantum computers are more powerful than classical can be answered. Furthermore, the degree of the improvement can be quantified. In contrast, the complexity of discrete problems is typically unknown. For example, the classical complexity of integer factorization is unknown.
https://en.wikipedia.org/wiki?curid=54782330
1,315,850
Motorola offered a 1970s-era system based on the United States Coast Guard LORAN maritime navigation system. The LORAN system was intended for ships, but signal levels on the US east- and west-coast areas were adequate for use with receivers in automobiles. The system may have been marketed under the Motorola model name "Metricom". It consisted of an LF LORAN receiver and data interface box/modem connected to a separate two-way radio. The receiver and interface calculated a latitude and longitude in degrees, decimal degrees format based on the LORAN signals. This was sent over the radio as MDC-1200 or MDC-4800 data to a system controller, which plotted the mobile's approximate location on a map. The system worked reliably but sometimes had problems with electrical noise in urban areas. Sparking electric trolley poles or industrial plants which radiated electrical noise sometimes overwhelmed the LORAN signals, affecting the system's ability to determine the mobile's geolocation. Because of the limited resolution, this type of system was impractical for small communities or operational areas such as a pit mine or port.
https://en.wikipedia.org/wiki?curid=369646
1,319,118
The MBS process can often be divided into five main activities. The first activity of the MBS process chain is the "3D CAD master model", in which product developers, designers and engineers use the CAD system to generate a CAD model and its assembly structure according to given specifications. This 3D CAD master model is converted during the activity "Data transfer" to an MBS input data format, e.g. STEP. "MBS Modeling" is the most complex activity in the process chain. Following rules and experience, the 3D model in MBS format, together with boundaries, kinematics, forces, moments or degrees of freedom, is used as input to generate the MBS model. Engineers have to use MBS software and their knowledge and skills in the field of engineering mechanics and machine dynamics to build the MBS model, including joints and links. The generated MBS model is used during the next activity, "Simulation". Simulations, which are specified by time increments and boundaries such as starting conditions, are run by MBS software, e.g. Siemens Simcenter 3D Motion, NX Motion, MSC ADAMS or RecurDyn. It is also possible to perform MBS simulations using free and open-source packages such as MBDyn, with CAD packages such as NX CAD and FreeCAD as pre- and post-processors to prepare CAD models and visualize results. The last activity is "Analysis and evaluation". Engineers use case-dependent directives to analyze and evaluate moving paths, speeds, accelerations, forces or moments. The results are used to grant releases or to improve the MBS model if the results are insufficient. One of the most important benefits of the MBS process chain is the usability of the results to optimize the components of the 3D CAD master model. Because the process chain enables the optimization of component design, the resulting loops can be used to achieve a high level of design and MBS model optimization in an iterative process.
https://en.wikipedia.org/wiki?curid=40547030
1,328,325
NARMAX methods are designed to do more than find the best approximating model. System identification can be divided into two aims. The first involves approximation, where the key aim is to develop a model that approximates the data set such that good predictions can be made. There are many applications where this approach is appropriate, for example in time series prediction of the weather, stock prices, speech, target tracking, pattern classification, etc. In such applications the form of the model is not that important. The objective is to find an approximation scheme which produces the minimum prediction errors. A second objective of system identification, which includes the first objective as a subset, involves much more than just finding a model to achieve the best mean squared errors. This second aim is why the NARMAX philosophy was developed and is linked to the idea of finding the simplest model structure. The aim here is to develop models that reproduce the dynamic characteristics of the underlying system, to find the simplest possible model, and if possible to relate this to components and behaviours of the system under study. The core aim of this second approach to identification is therefore to identify and reveal the rule that represents the system. These objectives are relevant to model simulation and control systems design, but increasingly to applications in medicine, neuroscience, and the life sciences. Here the aim is to identify models, often nonlinear, that can be used to understand the basic mechanisms of how these systems operate and behave so that we can manipulate and utilise them. NARMAX methods have also been developed in the frequency and spatio-temporal domains.
https://en.wikipedia.org/wiki?curid=40158142
1,332,570
Hydraulic hybrid vehicle systems consist of four main components: the working fluid, reservoir, pump/motor (in a parallel hybrid system) or in-wheel motors and pumps (in a series hybrid system), and accumulator. In some systems, a hydraulic transformer is also installed for converting output flow at any pressure with a very low power loss. In an electric hybrid system, energy is stored in the battery and is delivered to the electric motor to power the vehicle. During braking, the kinetic energy of the vehicle is used to charge the battery through regenerative braking. In a hydraulic hybrid system, the pump/motor extracts the kinetic energy during braking to pump the working fluid from the reservoir to the accumulator. The working fluid is thus pressurized, which leads to energy storage. When the vehicle accelerates, this pressurized working fluid provides energy to the pump/motor to power the vehicle. For a parallel hybrid system, fuel efficiency gains and emissions reductions result from reduced mechanical load on the internal combustion engine due to the torque provided by the hybrid system.
https://en.wikipedia.org/wiki?curid=23268078
1,357,648
The resulting top-level OPD is the system diagram (SD), which includes the stakeholder group, in particular the beneficiary group, and additional top-level environmental things, which provide the context for the system's operation. The SD should contain only the central and important things—those things indispensable for understanding the function and context of the system. The function is the main process in SD, which also contains the objects involved in this process: the beneficiary, the operand (the object upon which the process operates), and possibly the attribute of the operand whose value the process changes. SD should also contain an object representing the system that enables the function. The default name of this system is created by adding the word "System" to the name of the function. For example, if the function is Car Painting, the name of the system would be Car Painting System.
https://en.wikipedia.org/wiki?curid=1656850
1,360,834
Shiing-Shen Chern published his proof of the theorem in 1944 while at the Institute for Advanced Study. This was historically the first time that the formula was proven without assuming the manifold to be embedded in a Euclidean space, which is what is meant by "intrinsic". The special case for a hypersurface (an (n−1)-dimensional submanifold in an n-dimensional Euclidean space) was proved by H. Hopf, in which the integrand is the Gauss–Kronecker curvature (the product of all principal curvatures at a point of the hypersurface). This was generalized independently by Allendoerfer in 1939 and Fenchel in 1940 to a Riemannian submanifold of a Euclidean space of any codimension, for which they used the Lipschitz–Killing curvature (the average of the Gauss–Kronecker curvature along each unit normal vector over the unit sphere in the normal space; for an even-dimensional submanifold, this is an invariant depending only on the Riemann metric of the submanifold). Their result would be valid for the general case if the Nash embedding theorem could be assumed. However, this theorem was not available then, as John Nash published his famous embedding theorem for Riemannian manifolds only in 1956. In 1943 Allendoerfer and Weil published their proof for the general case, in which they first used an approximation theorem of H. Whitney to reduce the case to analytic Riemannian manifolds, then embedded "small" neighborhoods of the manifold isometrically into a Euclidean space with the help of the Cartan–Janet local embedding theorem, so that they could patch these embedded neighborhoods together and apply the above theorem of Allendoerfer and Fenchel to establish the global result. This is, of course, unsatisfactory, because the theorem involves only intrinsic invariants of the manifold, so the validity of the theorem should not rely on its embedding into a Euclidean space. Weil met Chern in Princeton after Chern arrived in August 1943. He told Chern that he believed there should be an intrinsic proof, which Chern was able to obtain within two weeks. The result is Chern's classic paper "A simple intrinsic proof of the Gauss–Bonnet formula for closed Riemannian manifolds", published in the Annals of Mathematics the next year. The earlier work of Allendoerfer, Fenchel, and Allendoerfer and Weil was cited by Chern in this paper. The work of Allendoerfer and Weil was also cited by Chern in his second paper related to the same topic.
https://en.wikipedia.org/wiki?curid=393115
1,373,324
The concerns about the energy needs and environmental impacts of data centers are intensifying. Energy efficiency is one of the major challenges of today's information and communications technology (ICT) sector. The networking portion of a data center is estimated to consume around 15% of overall cyber energy usage. Around 15.6 billion kWh of energy was used solely by the communication infrastructure within data centers worldwide in 2010. The share of energy consumed by the network infrastructure within a data center is expected to increase to around 50%. The IEEE 802.3az standard, standardized in 2011, makes use of an adaptive link rate technique for energy efficiency. Moreover, fat tree and DCell architectures use commodity network equipment that is inherently energy efficient. Workload consolidation is also used for energy efficiency by consolidating the workload onto a few devices so that idle devices can be powered off or put to sleep.
https://en.wikipedia.org/wiki?curid=41945536
1,389,009
How, then, does one define a concept such as a system's total mass, which is easily defined in classical mechanics? As it turns out, at least for spacetimes which are asymptotically flat (roughly speaking, which represent some isolated gravitating system in otherwise empty and gravity-free infinite space), the ADM 3+1 split leads to a solution: as in the usual Hamiltonian formalism, the time direction used in that split has an associated energy, which can be integrated up to yield a global quantity known as the ADM mass (or, equivalently, ADM energy). Alternatively, it is possible to define mass for a spacetime that is stationary, in other words, one that has a time-like Killing vector field (which, as a generating field for time, is canonically conjugate to energy); the result is the so-called Komar mass. Although defined in a totally different way, it can be shown to be equivalent to the ADM mass for stationary spacetimes. The Komar integral definition can also be generalized to non-stationary fields for which there is at least an asymptotic time translation symmetry; imposing a certain gauge condition, one can define the Bondi energy at null infinity. In a way, the ADM energy measures all of the energy contained in spacetime, while the Bondi energy excludes those parts carried off by gravitational waves to infinity. Great effort has been expended on proving positivity theorems for the masses just defined, not least because positivity, or at least the existence of a lower limit, has a bearing on the more fundamental question of boundedness from below: if there were no lower limit to the energy, then no isolated system would be absolutely stable; there would always be the possibility of a decay to a state of even lower total energy. Several kinds of proofs that both the ADM mass and the Bondi mass are indeed positive exist; in particular, this means that Minkowski space (for which both are zero) is indeed stable. While the focus here has been on energy, analogous definitions for global momentum exist; given a field of angular Killing vectors and following the Komar technique, one can also define global angular momentum.
https://en.wikipedia.org/wiki?curid=6513985
1,389,031
This conjecture was soon proved to be correct by one of Hilbert's close associates, Emmy Noether. Noether's theorem applies to any system which can be described by an action principle. Noether's theorem associates conserved energies with time-translation symmetries. When the time-translation symmetry is a finite parameter continuous group, such as the Poincaré group, Noether's theorem defines a scalar conserved energy for the system in question. However, when the symmetry is an infinite parameter continuous group, the existence of a conserved energy is not guaranteed. In a similar manner, Noether's theorem associates conserved momenta with space-translations, when the symmetry group of the translations is finite-dimensional. Because General Relativity is a diffeomorphism invariant theory, it has an infinite continuous group of symmetries rather than a finite-parameter group of symmetries, and hence has the wrong group structure to guarantee a conserved energy. Noether's theorem has been extremely influential in inspiring and unifying various ideas of mass, system energy, and system momentum in General Relativity.
https://en.wikipedia.org/wiki?curid=6513985
1,412,953
XCS inspired the development of a whole new generation of LCS algorithms and applications. In 1995, Congdon was the first to apply LCS to real-world epidemiological investigations of disease, followed closely by Holmes, who developed BOOLE++, EpiCS, and later EpiXCS for epidemiological classification. These early works inspired later interest in applying LCS algorithms to complex and large-scale data mining tasks epitomized by bioinformatics applications. In 1998, Stolzmann introduced anticipatory classifier systems (ACS), which included rules in the form of 'condition-action-effect' rather than the classic 'condition-action' representation. ACS was designed to predict the perceptual consequences of an action in all possible situations in an environment. In other words, the system evolves a model that specifies not only what to do in a given situation, but also provides information about what will happen after a specific action is executed. This family of LCS algorithms is best suited to multi-step problems, planning, speeding up learning, or disambiguating perceptual aliasing (i.e. where the same observation is obtained in distinct states but requires different actions). Butz later pursued this anticipatory family of LCS, developing a number of improvements to the original method. In 2002, Wilson introduced XCSF, adding a computed action in order to perform function approximation. In 2003, Bernado-Mansilla introduced a sUpervised Classifier System (UCS), which specialized the XCS algorithm to the task of supervised learning, single-step problems, and forming a best action set. UCS removed the reinforcement learning strategy in favor of a simple, accuracy-based rule fitness as well as the explore/exploit learning phases characteristic of many reinforcement learners. Bull introduced a simple accuracy-based LCS (YCS) and a simple strength-based LCS, the Minimal Classifier System (MCS), in order to develop a better theoretical understanding of the LCS framework. Bacardit introduced GAssist and BioHEL, Pittsburgh-style LCSs designed for data mining and scalability to large datasets in bioinformatics applications. In 2008, Drugowitsch published the book "Design and Analysis of Learning Classifier Systems", including some theoretical examination of LCS algorithms. Butz introduced the first rule online learning visualization within a GUI for XCSF. Urbanowicz extended the UCS framework and introduced ExSTraCS, explicitly designed for supervised learning in noisy problem domains (e.g. epidemiology and bioinformatics). ExSTraCS integrated (1) expert knowledge to drive covering and the genetic algorithm towards important features in the data, (2) a form of long-term memory referred to as attribute tracking, allowing for more efficient learning and the characterization of heterogeneous data patterns, and (3) a flexible rule representation similar to Bacardit's mixed discrete-continuous attribute list representation. Both Bacardit and Urbanowicz explored statistical and visualization strategies to interpret LCS rules and perform knowledge discovery for data mining. Browne and Iqbal explored the concept of reusing building blocks in the form of code fragments and were the first to solve the 135-bit multiplexer benchmark problem by first learning useful building blocks from simpler multiplexer problems.
ExSTraCS 2.0 was later introduced to improve Michigan-style LCS scalability, successfully solving the 135-bit multiplexer benchmark problem for the first time directly. The n-bit multiplexer problem is highly epistatic and heterogeneous, making it a very challenging machine learning task.
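For reference, the n-bit multiplexer function that these benchmarks evaluate takes k address bits followed by 2^k data bits (n = k + 2^k, so the 135-bit problem has 7 address bits and 128 data bits); the helper below is an illustrative rendering of that function, not code from any of the systems cited:

```python
def multiplexer(bits, k):
    """n-bit multiplexer: the first k bits form a binary address that selects
    one of the following 2**k data bits; the selected bit is the output.
    For the 135-bit benchmark, k = 7 (7 address bits + 128 data bits)."""
    assert len(bits) == k + 2 ** k
    address = int("".join(str(b) for b in bits[:k]), 2)
    return bits[k + address]

# 6-bit multiplexer (k = 2): address bits "10" select data bit 2.
print(multiplexer([1, 0, 0, 0, 1, 0], k=2))   # -> 1
```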
https://en.wikipedia.org/wiki?curid=854461
1,421,734
Induction of proliferation by the EpoR is likely cell type-dependent. It is known that EpoR can activate mitogenic signaling pathways and can lead to cell proliferation in erythroleukemic cell lines "in vitro", various non-erythroid cells, and cancer cells. So far, there is insufficient evidence that "in vivo" EpoR signaling can induce erythroid progenitors to undergo cell division, or that Epo levels can modulate the cell cycle. EpoR signaling may still have a proliferative effect upon BFU-e progenitors, but these progenitors cannot be directly identified, isolated and studied. CFU-e progenitors enter the cell cycle at the time of GATA-1 induction and PU.1 suppression in a developmental manner rather than due to EpoR signaling. Subsequent differentiation stages (proerythroblast to orthochromatic erythroblast) involve a decrease in cell size and eventual expulsion of the nucleus, and are likely dependent upon EpoR signaling only for their survival. In addition, some evidence on macrocytosis in hypoxic stress (when Epo can increase 1000-fold) suggests that mitosis is actually "skipped" in later erythroid stages, when EpoR expression is low or absent, in order to provide an emergency reserve of red blood cells as soon as possible. Such data, though sometimes circumstantial, argue that there is limited capacity to proliferate specifically in response to Epo (and not other factors). Together, these data suggest that EpoR in erythroid differentiation may function primarily as a survival factor, while its effect on the cell cycle (for example, the rate of division and corresponding changes in the levels of cyclins and Cdk inhibitors) "in vivo" awaits further work. In other cell systems, however, EpoR may provide a specific proliferative signal.
https://en.wikipedia.org/wiki?curid=5827434
1,425,156
Intracrine refers to a hormone that acts inside a cell, regulating intracellular events. In simple terms it means that the cell stimulates itself by cellular production of a factor that acts within the cell. Steroid hormones act through intracellular (mostly nuclear) receptors and, thus, may be considered to be intracrines. In contrast, peptide or protein hormones, in general, act as endocrines, autocrines, or paracrines by binding to their receptors present on the cell surface. Several peptide/protein hormones or their isoforms also act inside the cell through different mechanisms. These peptide/protein hormones, which have intracellular functions, are also called intracrines. The term 'intracrine' is thought to have been coined to represent peptide/protein hormones that also have intracellular actions. To better understand intracrine signaling, we can compare it to paracrine, autocrine and endocrine signaling. In autocrine signaling, hormones secreted by a cell bind to autocrine receptors on that same cell. In paracrine signaling, hormones released by a cell act on nearby cells and change their functioning. In endocrine signaling, the hormones released by a cell affect cells that are distant from the one that released the hormone.
https://en.wikipedia.org/wiki?curid=4060393
1,469,211
Typically, a signal box with an intermediate block section will have a home signal (and associated distant signal), starting signal and an intermediate block home signal which has its own distant signal. The line from the starting signal to the intermediate block home signal is called the intermediate block home section. The line from the intermediate block home signal to the home signal of the next signal box on the same line in the same direction of travel is the absolute block section. To clear the intermediate block home signal a "line clear" is required from the signal box in advance.
https://en.wikipedia.org/wiki?curid=2861203
1,482,507
In quantum computing, the variational quantum eigensolver (VQE) is a quantum algorithm for quantum chemistry, quantum simulations and optimization problems. It is a hybrid algorithm that uses both classical computers and quantum computers to find the ground state of a given physical system. Given a guess or ansatz, the quantum processor calculates the expectation value of the system with respect to an observable, often the Hamiltonian, and a classical optimizer is used to improve the guess. The algorithm is based on the variational method of quantum mechanics.
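A minimal, purely classical simulation of this loop for a single qubit is sketched below; the toy Hamiltonian, the one-parameter ansatz and the optimizer choice are illustrative assumptions, and the expectation value that a quantum processor would estimate from measurements is here computed directly with linear algebra:

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-qubit Hamiltonian (chosen only for illustration): H = Z + 0.5 X.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = Z + 0.5 * X

def ansatz(theta):
    """One-parameter ansatz: Ry(theta) applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(params):
    """Expectation value <psi(theta)|H|psi(theta)> for the current guess."""
    psi = ansatz(params[0])
    return float(np.real(np.conj(psi) @ H @ psi))

# The classical optimizer iteratively improves the guess for theta.
result = minimize(energy, x0=[0.1], method="COBYLA")
print("VQE energy:", result.fun)
print("exact ground-state energy:", np.linalg.eigvalsh(H)[0])
```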
https://en.wikipedia.org/wiki?curid=68092250
1,533,348
A term closely related to data archaeology is data lineage. The first step in performing data archaeology is an investigation into the data's lineage. Data lineage entails the history of the data, its source, and any alterations or transformations it has undergone. Data lineage can be found in the metadata of a dataset, the paradata of a dataset, or any accompanying identifiers (methodological guides, etc.). With data archaeology comes methodological transparency, which is the degree to which the data user can access the data's history. The level of methodological transparency available determines not only how much can be recovered, but also assists in knowing the data. Data lineage investigation involves establishing what instruments were used, what the selection criteria were, the measurement parameters and the sampling frameworks.
https://en.wikipedia.org/wiki?curid=2588620
1,534,210
One example of a computer system that can be used as a computer-aided ergonomics system is "The AnyBody Modeling System", which considers the human body as a dynamic multi-rigid-body system. The human model is a public-domain model that contains most of the bones, muscles and joints present in the human body. The model has more than 1000 muscle elements, and many muscle elements have been modeled with the detailed muscle model theory described by A.V. Hill in 1938. The muscle model contains information including physiological cross-sectional area, length, and the ratio of red and white fibers. The AnyBody Modeling System is capable of modeling almost any human voluntary movement or static situation. One example of a model could be a seated model, where the human body is placed in a chair that has a seat, backrest, headrest, leg rest, footrest, and armrest. The model can then calculate the forces acting between the human body and the chair, as well as, for example, the forces between any given spinal vertebrae. This could be used for finding the optimal seated posture for a person who has lower back pain, assuming that a greater load on a vertebra results in greater pain.
https://en.wikipedia.org/wiki?curid=21559539
1,534,827
Now, paradigms are concerned with not only theory but also modes of behaviour within inquiry. One significant part of Beer's paradigm is the development of his Viable Systems Model (VSM), which addresses problem situations in terms of control and communication processes, seeking to ensure system viability within the object of attention. Another is Beer's Syntegrity protocol, which centres on the means by which effective communications in complex situations can occur. VSM has been used successfully to diagnose organisational pathologies (conditions of social ill-health). The model involves not only an operative system that has structure (e.g., divisions in an organisation or departments in a division) from which behaviour directed towards an environment emanates, but also a meta-system, which some have called the observer of the system. The system and meta-system are ontologically different, so that, for instance, where in a production company the system is concerned with production processes and their immediate management, the meta-system is more concerned with the management of the production system as a whole. The connection between the system and meta-system is explained through Beer's cybernetic map. Beer considered that viable social systems should be seen as living systems. Humberto Maturana used the term autopoiesis (self-production) to explain biological living systems, but was reluctant to accept that social systems were living.
https://en.wikipedia.org/wiki?curid=51777886
1,536,949
The G protein-coupled receptor, GPR31, cloned from the PC3 human prostate cancer cell line, is a high affinity (Kd = 4.8 nM) receptor for 12("S")-HETE; GPR31 does not bind 12("R")-HETE and has relatively little affinity for 5("S")-HETE or 15("S")-HETE. GPR31 mRNA is expressed at low levels in several human cell lines including K562 cells (human myelogenous leukemia cell line), Jurkat cells (T lymphocyte cell line), Hut78 cells (T cell lymphoma cell line), HEK 293 cells (primary embryonic kidney cell line), MCF7 cells (mammary adenocarcinoma cell line), and EJ cells (bladder carcinoma cell line). This mRNA appears to be more highly expressed in PC3 and DU145 prostate cancer cell lines as well as in human umbilical vein endothelial cells (HUVEC), human brain microvascular endothelial cells (HBMEC), and human pulmonary aortic endothelial cells (HPAC). In PC-3 prostate cancer cells, the GPR31 receptor mediates the action of 12("S")-HETE in activating the Mitogen-activated protein kinase kinase/Extracellular signal-regulated kinases-1/2 pathway and the NFκB pathway, which lead to cell growth and other functions. Studies have not yet determined the role, if any, of the GPR31 receptor in the action of 12("S")-HETE in other cell types.
https://en.wikipedia.org/wiki?curid=24796896
1,549,508
There are four instrumentation components used to detect these signals: (1) the signal source, (2) the transducer used to detect the signal, (3) the amplifier, and (4) the signal processing circuit. The signal source refers to the location at which the EMG electrode is placed. EMG signal acquisition depends on the distance from the electrode to the muscle fiber, so placement is imperative. The transducer used to detect the signal is an EMG electrode that transforms the bioelectric signal from the muscle into a readable electric signal. The amplifier reproduces an undistorted bioelectric signal and also allows for noise reduction in the signal. Signal processing involves taking the recorded electrical impulses, filtering them, and enveloping the data.
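A minimal sketch of that last step is given below, with an assumed sampling rate and filter cutoffs chosen purely for illustration: band-pass filter the raw signal, rectify it, and low-pass filter the result to form the envelope.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                  # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic stand-in for a raw EMG recording: band-limited activity plus noise.
rng = np.random.default_rng(0)
raw = np.sin(2 * np.pi * 80 * t) * (t > 1.0) + 0.2 * rng.standard_normal(t.size)

# 1) Band-pass filter (typical EMG band is roughly 20-450 Hz; values are illustrative).
b_bp, a_bp = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)
filtered = filtfilt(b_bp, a_bp, raw)

# 2) Full-wave rectification.
rectified = np.abs(filtered)

# 3) Low-pass filter the rectified signal to form the linear envelope.
b_lp, a_lp = butter(4, 5.0, btype="lowpass", fs=fs)
envelope = filtfilt(b_lp, a_lp, rectified)
print(envelope[:5])
```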
https://en.wikipedia.org/wiki?curid=40610658
1,573,133
Designing a workflow for processing and combining the resulting multimodal data depends on the particular research question patch-seq is being applied to. In cell typing studies the data should be compared with existing scRNA-seq studies with larger sample sizes (on the order of thousands of cells compared to tens or hundreds) and therefore greater statistical power for cell type identification using transcriptomic data alone. Correlation-based methods are sufficient for this step. Dimensionality reduction methods such as t-distributed stochastic neighbor embedding or uniform manifold approximation and projection can then be used for visualization of the collected data's position on a reference atlas of higher quality scRNA-seq data. Machine learning can be applied in order to relate the gene expression data to the morphological and electrophysiological data. Methods for doing so include autoencoders, bottleneck networks, or other rank reduction methods. Including morphological data has proven to be challenging as it is a computer vision task, a notoriously complicated problem in machine learning. It is difficult to represent imaging data from the morphological reconstructions as a feature vector for inclusion in the analysis.
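A short sketch of the visualization step using t-SNE on a hypothetical cells-by-genes expression matrix is shown below; all sizes and preprocessing choices are illustrative assumptions rather than recommendations from the literature.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Hypothetical patch-seq expression matrix: 100 cells x 2000 genes (illustrative only).
rng = np.random.default_rng(0)
expression = rng.poisson(1.0, size=(100, 2000)).astype(float)

# Common preprocessing: log-transform, then reduce with PCA before t-SNE.
log_expr = np.log1p(expression)
pcs = PCA(n_components=20).fit_transform(log_expr)

# Two-dimensional embedding for plotting cells against a reference atlas.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(pcs)
print(embedding.shape)   # (100, 2)
```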
https://en.wikipedia.org/wiki?curid=66692692
1,580,787
From time to time a system forms a new concept or eliminates an old one. At every level, the NMF system always keeps a reserve of vague (fuzzy) inactive concept-models. They are inactive in that their parameters are not adapted to the data; therefore their similarities to signals are low. Yet, because of their large vagueness (covariance), the similarities are not exactly zero. When a new signal does not fit well into any of the active models, its similarities to inactive models automatically increase (because, first, every piece of data is accounted for, and second, inactive models are vague-fuzzy and potentially can "grab" every signal that does not fit into more specific, less fuzzy, active models). When the activation signal a for an inactive model, m, exceeds a certain threshold, the model is activated. Similarly, when an activation signal for a particular model falls below a threshold, the model is deactivated. Thresholds for activation and deactivation are usually set based on information existing at a higher hierarchical level (prior information, system resources, numbers of activated models of various types, etc.). Activation signals for active models at a particular level { a } form a "neuronal field," which serves as the input signals to the next level, where more abstract and more general concepts are formed.
https://en.wikipedia.org/wiki?curid=19208664
1,593,443
In general, electron energy loss spectroscopy is based on the energy losses of electrons when they are inelastically scattered by matter. An incident beam of electrons with a known energy (E) is scattered on a sample. The scattering of these electrons can excite the electronic structure of the sample. If this is the case, the scattered electron loses the specific energy (ΔE) needed to cause the excitation. Those scattering processes are called inelastic. It may be easiest to imagine that the energy loss is due, for example, to an excitation of an electron from an atomic K-shell to the M-shell. The energy for this excitation is taken away from the electron's kinetic energy. The energies of the scattered electrons are measured, and the energy loss can be calculated. From the measured data an intensity versus energy loss diagram is established. In the case of scattering by phonons the so-called energy loss can also be a gain of energy (similar to anti-Stokes Raman spectroscopy). These energy losses allow one, by comparison with other experiments or theory, to draw conclusions about the surface properties of a sample.
https://en.wikipedia.org/wiki?curid=1942466
1,610,737
To understand why it is a good idea for a statistical language model to contain a cache component one might consider someone who is dictating a letter about elephants to a speech recognition system. Standard (non-cache) N-gram language models will assign a very low probability to the word "elephant" because it is a very rare word in English. If the speech recognition system does not contain a cache component the person dictating the letter may be annoyed: each time the word "elephant" is spoken another sequence of words with a higher probability according to the N-gram language model may be recognized (e.g., "tell a plan"). These erroneous sequences will have to be deleted manually and replaced in the text by "elephant" each time "elephant" is spoken. If the system has a cache language model, "elephant" will still probably be misrecognized the first time it is spoken and will have to be entered into the text manually; however, from this point on the system is aware that "elephant" is likely to occur again – the estimated probability of occurrence of "elephant" has been increased, making it more likely that if it is spoken it will be recognized correctly. Once "elephant" has occurred several times the system is likely to recognize it correctly every time it is spoken until the letter has been completely dictated. This increase in the probability assigned to the occurrence of "elephant" is an example of a consequence of machine learning and more specifically of pattern recognition.
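One common way to realize this behavior is to interpolate a static background model with a cache of recently seen words; the toy unigram sketch below (the interpolation weight and probabilities are made-up values, and real systems use n-gram contexts and decaying caches) shows how a single observation of "elephant" raises its estimated probability:

```python
from collections import Counter

class CachedUnigramLM:
    """Toy cache language model: P(w) = (1 - lam) * P_background(w) + lam * P_cache(w).
    The cache counts recently observed words, so a rare word like 'elephant'
    receives a boosted probability after it has been seen once."""

    def __init__(self, background_probs, lam=0.2):
        self.background = background_probs      # dict: word -> probability
        self.lam = lam
        self.cache = Counter()

    def observe(self, word):
        self.cache[word] += 1

    def prob(self, word):
        p_bg = self.background.get(word, 1e-8)
        total = sum(self.cache.values())
        p_cache = self.cache[word] / total if total else 0.0
        return (1 - self.lam) * p_bg + self.lam * p_cache

# Hypothetical background probabilities.
lm = CachedUnigramLM({"the": 0.05, "elephant": 1e-6}, lam=0.2)
print(lm.prob("elephant"))      # tiny before any observation
lm.observe("elephant")
print(lm.prob("elephant"))      # much larger once 'elephant' is in the cache
```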
https://en.wikipedia.org/wiki?curid=31139924
1,616,472
Combining multi-dimensional algebraic notation with the relational data model was the obvious answer. Compiler writing techniques were by now widespread, and languages like GAMS could be implemented relatively quickly. However, translating this rigorous mathematical representation into the algorithm-specific format required the computation of partial derivatives on very large systems. In the 1970s, TRW developed a system called PROSE that took the ideas of chemical engineers to compute point derivatives that were exact derivatives at a given point, and to embed them in a consistent, Fortran-style calculus modeling language. The resulting system allowed the user to use automatically generated exact first and second order derivatives. This was a pioneering system and an important demonstration of a concept. However, PROSE had a number of shortcomings: it could not handle large systems, problem representation was tied to an array-type data structure that required address calculations, and the system did not provide access to state-of-the-art solution methods. From linear programming, GAMS learned that exploitation of sparsity was key to solving large problems. Thus, the final piece of the puzzle was the use of sparse data structures.
https://en.wikipedia.org/wiki?curid=1438314
1,633,528
As already observed, the parametric search technique can be substantially sped up by replacing the simulated test algorithm by an efficient parallel algorithm, for instance in the parallel random-access machine (PRAM) model of parallel computation, where a collection of processors operate in synchrony on a shared memory, all performing the same sequence of operations on different memory addresses. If the test algorithm is a PRAM algorithm that uses formula_30 processors and takes time formula_31 (that is, formula_31 steps in which each processor performs a single operation), then each of its steps may be simulated by using the decision algorithm to determine the results of at most formula_30 numerical comparisons. By finding a median or near-median value in the set of comparisons that need to be evaluated, and passing this single value to the decision algorithm, it is possible to eliminate half or nearly half of the comparisons with only a single call of the decision algorithm. By repeatedly halving the set of comparisons required by the simulation in this way, until none are left, it is possible to simulate the results of formula_30 numerical comparisons using only formula_35 calls to the decision algorithm. Thus, the total time for parametric search in this case becomes formula_36 (for the simulation itself) plus the time for formula_37 calls to the decision algorithm (for formula_31 batches of comparisons, taking formula_35 calls per batch). Often, for a problem that can be solved in this way, the time-processor product of the PRAM algorithm is comparable to the time for a sequential decision algorithm, and the parallel time is polylogarithmic, leading to a total time for the parametric search that is slower than the decision algorithm by only a polylogarithmic factor.
https://en.wikipedia.org/wiki?curid=50716864
1,634,606
A typical ADV system equipped with N receivers records 4N values simultaneously with each sample: for each receiver, a velocity component, a signal strength value, a signal-to-noise ratio (SNR) and a correlation value. The signal strength, SNR and correlation values are used primarily to determine the quality and accuracy of the velocity data, although the signal strength (acoustic backscatter intensity) may be related to the instantaneous suspended sediment concentration with proper calibration. The velocity component is measured along the line connecting the sampling volume to the receiver. The velocity data must be transformed into a Cartesian system of coordinates, and the trigonometric transformation may cause some velocity resolution errors.
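As an illustration of that coordinate transformation (the beam geometry below is invented for the example; real probes supply their own calibration matrix), each beam velocity is the projection of the Cartesian velocity onto the line toward the corresponding receiver, so the Cartesian components are recovered by inverting that projection:

```python
import numpy as np

# Hypothetical geometry: unit vectors from the sampling volume toward each of
# three receivers (one row per receiver). Real instruments provide this matrix.
B = np.array([
    [0.35,  0.00, 0.94],
    [-0.17,  0.30, 0.94],
    [-0.17, -0.30, 0.94],
])
B /= np.linalg.norm(B, axis=1, keepdims=True)   # ensure unit-length rows

# Velocity components recorded along each beam (illustrative values, m/s).
beam_velocities = np.array([0.12, 0.08, 0.10])

# Each beam velocity is B_row . u, i.e. beam = B @ u, so invert to get u = (u, v, w).
u_xyz = np.linalg.solve(B, beam_velocities)
print(u_xyz)
```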
https://en.wikipedia.org/wiki?curid=22366672
1,634,607
Although acoustic Doppler velocimetry (ADV) has become a popular technique in laboratory and field applications, several researchers have pointed out that the ADV signal outputs include the combined effects of turbulent velocity fluctuations, Doppler noise, signal aliasing, turbulent shear and other disturbances. Evidence includes high levels of noise and spikes in all velocity components. In turbulent flows, the ADV velocity outputs are a combination of Doppler noise, signal aliasing, velocity fluctuations, installation vibrations and other disturbances. The signal may be further affected adversely by velocity shear across the sampling volume and boundary proximity. Lemmin and Lhermitte, Chanson et al., and Blanckaert and Lemmin discussed the inherent Doppler noise of an ADV system. Spikes may be caused by aliasing of the Doppler signal. McLelland and Nicholas explained the physical processes, while Nikora and Goring, Goring and Nikora, and Wahl developed techniques to eliminate aliasing errors called "spikes". These methods were developed for steady flow situations and tested in man-made channels. Not all of them are reliable, and the phase-space thresholding despiking technique appears to be a robust method in steady flows. Simply put, "raw" ADV velocity data are not "true" turbulent velocities and they should never be used without adequate post-processing. Chanson presented a summary of experiences gained during laboratory and field investigations with both Sontek and Nortek ADV systems.
https://en.wikipedia.org/wiki?curid=22366672
1,642,964
A notable application of the partially linear model in biometrics is due to Zeger and Diggle in 1994. The research objective of their paper was the time course of CD4 cell counts in HIV (human immunodeficiency virus) seroconverters (Zeger and Diggle, 1994). CD4 cells play a significant role in immune function in the human body. Zeger and Diggle aimed to assess the progression of the disease by measuring the changing number of CD4 cells, which is associated with body age, smoking behavior and other covariates. To analyze the observed data in their study, Zeger and Diggle applied a partially linear model. The partially linear model primarily contributes to the estimation of the average rate of CD4 cell loss and adjusts for the time dependence of other covariates in order to simplify the comparison of data; in addition, it characterizes the deviation from the typical curve of the observed group in order to estimate the progression curve of the changing CD4 cell count. This deviation, provided by the partially linear model, potentially helps to identify subjects whose CD4 cell counts changed only slowly.
https://en.wikipedia.org/wiki?curid=60262610
1,655,822
In quantum field theory, there exist quantum categories and quantum double groupoids. One can consider quantum double groupoids to be fundamental groupoids defined via a 2-functor, which allows one to think about the physically interesting case of quantum fundamental groupoids (QFGs) in terms of the bicategory Span(Groupoids), and then constructing 2-Hilbert spaces and 2-linear maps for manifolds and cobordisms. At the next step, one obtains cobordisms with corners via natural transformations of such 2-functors. A claim was then made that, with the gauge group SU(2), "the extended TQFT, or ETQFT, gives a theory equivalent to the Ponzano–Regge model of quantum gravity"; similarly, the Turaev–Viro model would then be obtained with representations of SU(2). Therefore, one can describe the state space of a gauge theory – or many kinds of quantum field theories (QFTs) and local quantum physics – in terms of the transformation groupoids given by symmetries, as for example in the case of a gauge theory, by the gauge transformations acting on states that are, in this case, connections. In the case of symmetries related to quantum groups, one would obtain structures that are representation categories of quantum groupoids, instead of the 2-vector spaces that are representation categories of groupoids.
https://en.wikipedia.org/wiki?curid=19515158
1,660,708
Due to a delay in the MICNS secure data link, the first prototype system was produced in a Block I configuration. This system used the same air vehicle (minus the MICNS and automated landing system), the same hydraulic launcher, the same hydraulically operated recovery system, and the same Ground Control System. It used an unsecure interim data link and an alternate semi-automatic system to guide the air vehicle into the net. The system flew 17 flights during July–November 1982. It was then assigned to an Army Early Operational Capability (EOC) unit from July 1983 to July 1984, which conducted 20 flights. This EOC effort was created to provide the field with a system in order to identify any system weaknesses and to allow the user to refine tactics, techniques, and procedures for using the technology.
https://en.wikipedia.org/wiki?curid=17844797
1,663,915
A problem with solar cells is that the high energy photons that hit the surface are partly converted to heat. This is a loss for the cell because that part of the incoming energy is not converted into usable energy. The idea behind the hot carrier cell is to utilize some of the incoming energy which would otherwise be converted to heat. If the electrons and holes can be collected while hot, a higher voltage can be obtained from the cell. The problem with doing this is that the contacts which collect the electrons and holes will cool the material. Thus far, keeping the contacts from cooling the cell has remained theoretical. Another way of improving the efficiency of the solar cell using the generated heat is to have a cell which allows lower energy photons to excite electron-hole pairs. This requires a small bandgap. Using a selective contact, the lower energy electrons and holes can be collected while allowing the higher energy ones to continue moving through the cell. The selective contacts are made using a double barrier resonant tunneling structure. The carriers are cooled as they scatter with phonons. If a material has a large phonon bandgap, then the carriers will carry more of the heat to the contact and it won't be lost in the lattice structure. One material which has a large phonon bandgap is indium nitride. Hot carrier cells are in their infancy but are beginning to move toward the experimental stage.
https://en.wikipedia.org/wiki?curid=25451813
1,682,742
The quantum automaton differs from the topological automaton in that, instead of having a binary result (is the iterated point in, or not in, the final set?), one has a probability. The quantum probability is the (square of the) projection of the initial state onto some final state "P"; that is, formula_108. But this probability amplitude is just a very simple function of the distance between the point formula_109 and the point formula_110 in formula_17, under the distance metric given by the Fubini–Study metric. To recap, the quantum probability of a language being accepted can be interpreted as a metric, with the probability of acceptance being unity if the metric distance between the initial and final states is zero, and the probability of acceptance being less than one if the metric distance is non-zero. Thus, it follows that the quantum finite automaton is just a special case of a geometric automaton or a metric automaton, where formula_17 is generalized to some metric space, and the probability measure is replaced by a simple function of the metric on that space.
https://en.wikipedia.org/wiki?curid=7926008
1,688,747
The IBM System/38 was intended to be the successor of the System/34 and the earlier System/3x systems. However, due to the delays in the development of the System/38 and the high cost of the hardware once complete, IBM developed the simpler and cheaper System/36 platform which was more widely adopted than the System/38. The System/36 was an evolution of the System/34 design, but the two machines were not object-code compatible. Instead, the System/36 offered source code compatibility, allowing System/34 applications to be recompiled on a System/36 with little to no changes. Some System/34 hardware was incompatible with the System/36.
https://en.wikipedia.org/wiki?curid=256354
1,695,988
In machine learning, manifold regularization is a technique for using the shape of a dataset to constrain the functions that should be learned on that dataset. In many machine learning problems, the data to be learned do not cover the entire input space. For example, a facial recognition system may not need to classify any possible image, but only the subset of images that contain faces. The technique of manifold learning assumes that the relevant subset of data comes from a manifold, a mathematical structure with useful properties. The technique also assumes that the function to be learned is "smooth": data with different labels are not likely to be close together, and so the labeling function should not change quickly in areas where there are likely to be many data points. Because of this assumption, a manifold regularization algorithm can use unlabeled data to inform where the learned function is allowed to change quickly and where it is not, using an extension of the technique of Tikhonov regularization. Manifold regularization algorithms can extend supervised learning algorithms in semi-supervised learning and transductive learning settings, where unlabeled data are available. The technique has been used for applications including medical imaging, geographical imaging, and object recognition.
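A compact sketch of one standard manifold-regularization algorithm, Laplacian-regularized least squares, is given below; the data, graph construction, kernel and hyperparameter values are illustrative assumptions rather than settings from the literature. A graph Laplacian built from both labeled and unlabeled points penalizes functions that vary quickly where the data are dense.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.neighbors import kneighbors_graph

# Illustrative data: two labeled points and many unlabeled points in 2D.
rng = np.random.default_rng(0)
X_lab = np.array([[0.0, 0.0], [3.0, 3.0]])
y_lab = np.array([-1.0, 1.0])
X_unl = rng.normal(loc=[[0, 0]] * 50 + [[3, 3]] * 50, scale=0.5)
X = np.vstack([X_lab, X_unl])
l, n = len(X_lab), len(X)

# Graph Laplacian over ALL points (labeled + unlabeled) encodes the data manifold.
W = kneighbors_graph(X, n_neighbors=5, mode="connectivity", include_self=False)
W = 0.5 * (W + W.T).toarray()
L = np.diag(W.sum(axis=1)) - W

# Laplacian-regularized least squares in its closed form.
K = rbf_kernel(X, X, gamma=0.5)
lam_A, lam_I = 1e-3, 1e-2                      # ambient / intrinsic regularization (assumed)
J = np.diag([1.0] * l + [0.0] * (n - l))       # selects the labeled points
y = np.concatenate([y_lab, np.zeros(n - l)])
alpha = np.linalg.solve(J @ K + lam_A * l * np.eye(n) + (lam_I * l / n**2) * L @ K, y)

# Predict on a new point: f(x) = sum_i alpha_i K(x_i, x).
x_new = np.array([[2.8, 3.1]])
print(np.sign(rbf_kernel(X, x_new, gamma=0.5).T @ alpha))
```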
https://en.wikipedia.org/wiki?curid=48777199
1,705,673
Kuṭṭaka is an algorithm for finding integer solutions of linear Diophantine equations. A linear Diophantine equation is an equation of the form "ax" + "by" = "c" where "x" and "y" are unknown quantities and "a", "b", and "c" are known quantities with integer values. The algorithm was originally invented by the Indian astronomer-mathematician Āryabhaṭa (476–550 CE) and is described very briefly in his Āryabhaṭīya. Āryabhaṭa did not give the algorithm the name "Kuṭṭaka", and his description of the method was mostly obscure and incomprehensible. It was Bhāskara I (c. 600 – c. 680) who gave a detailed description of the algorithm, with several examples from astronomy, in his "Āryabhatiyabhāṣya", and who gave the algorithm the name "Kuṭṭaka". In Sanskrit, the word Kuṭṭaka means "pulverization" (reducing to powder), and it indicates the nature of the algorithm. The algorithm in essence is a process where the coefficients in a given linear Diophantine equation are broken up into smaller numbers to get a linear Diophantine equation with smaller coefficients. In general, it is easy to find integer solutions of linear Diophantine equations with small coefficients. From a solution to the reduced equation, a solution to the original equation can be determined. Many Indian mathematicians after Āryabhaṭa discussed the Kuṭṭaka method with variations and refinements. The Kuṭṭaka method was considered to be so important that the entire subject of algebra used to be called "Kuṭṭaka-ganita" or simply "Kuṭṭaka". Sometimes the subject of solving linear Diophantine equations is also called "Kuṭṭaka".
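A modern rendering of the same idea uses the extended Euclidean algorithm, which likewise reduces the coefficients step by step and back-substitutes; the sketch below illustrates this reduction and is not Āryabhaṭa's original formulation (the example equation is chosen only for illustration):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b), found by repeatedly
    reducing the coefficients and back-substituting - the 'pulverizing' idea."""
    if b == 0:
        return a, 1, 0
    g, x1, y1 = extended_gcd(b, a % b)
    return g, y1, x1 - (a // b) * y1

def solve_diophantine(a, b, c):
    """One integer solution (x, y) of a*x + b*y = c, or None if none exists."""
    g, x0, y0 = extended_gcd(a, b)
    if c % g != 0:
        return None
    k = c // g
    return x0 * k, y0 * k

# Illustrative example: 137*x + 60*y = 10.
print(solve_diophantine(137, 60, 10))
```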
https://en.wikipedia.org/wiki?curid=49669987
1,713,348
The National Meteorological Center's Global Spectral Model was introduced during August 1980. The European Centre for Medium-Range Weather Forecasts model debuted on May 1, 1985. The United Kingdom Met Office has been running their global model since the late 1980s, adding a 3D-Var data assimilation scheme in mid-1999. The Canadian Meteorological Centre has been running a global model since 1991. The United States ran the Nested Grid Model (NGM) from 1987 to 2000, with some features lasting as late as 2009. Between 2000 and 2002, the Environmental Modeling Center ran the Aviation (AVN) model for shorter range forecasts and the Medium Range Forecast (MRF) model at longer time ranges. During this time, the AVN model was extended to the end of the forecast period, eliminating the need of the MRF and thereby replacing it. In late 2002, the AVN model was renamed the Global Forecast System (GFS). The German Weather Service has been running their global hydrostatic model, the GME, using a hexagonal icosahedral grid since 2002. The GFS is slated to eventually be supplanted by the Flow-following, finite-volume Icosahedral Model (FIM), which like the GME is gridded on a truncated icosahedron, in the mid-2010s.
https://en.wikipedia.org/wiki?curid=30890995
1,744,981
The third, SROSS 3 (also known as SROSS C), attained a lower-than-planned orbit on 20 May 1992. SROSS C and C2 carried a gamma-ray burst (GRB) experiment and a Retarding Potential Analyzer (RPA) experiment; the GRB experiment monitored celestial gamma-ray bursts in the energy range 20–3000 keV. The GRB experiment operated from 25 May 1992 until reentry on 14 July 1992. The instrument consisted of a main and a redundant CsI(Na) scintillator operating in the energy range 20–3000 keV. The crystals were 76 mm (main) and 37 mm (redundant) in diameter. Each had a thickness of 12.5 mm. A 'burst mode' was triggered by the 100–1024 keV count rate exceeding a preset limit during a 256 or 1024 ms time integration. In this mode, 65 s of temporal and 2 s of spectral data prior to the trigger are stored, as well as the subsequent 16 s of spectral data and 204 s of temporal data. The low resolution data consist of two energy channels (20–100 keV and 100–1024 keV) from 65 s before the trigger to 204 s after the trigger in 256 ms integrations. The 20–1024 keV rates are also recorded with a 2 ms resolution from 1 s before to 1 s after the trigger and with a 16 ms resolution from 1 s before to 8 s after the trigger. Energy spectra are recorded with a 124-channel PHA. Four pre-trigger spectra and 32 post-trigger spectra are recorded for every burst with a 512 ms integration time. The RPA measured the temperature, density and characteristics of electrons in the Earth's ionosphere. The GRB experiment computer system used the RCA CDP1802 microprocessor.
https://en.wikipedia.org/wiki?curid=11385141
1,748,846
Video modulation is a strategy of transmitting video signal in the field of radio modulation and television technology. This strategy enables the video signal to be transmitted more efficiently through long distances. In general, video modulation means that a higher frequency carrier wave is modified according to the original video signal. In this way, carrier wave contains the information in the video signal. Then, the carrier will "carry" the information in the form of radio frequency (RF) signal. When carrier reaches its destination, the video signal is extracted from the carrier by decoding. In other words, the video signal is first combined with a higher frequency carrier wave so that carrier wave contains the information in video signal. The combined signal is called radio-frequency signal. At the end of this transmitting system, the RF signals stream from a light sensor and hence, the receivers can obtain the initial data in the original video signal.
https://en.wikipedia.org/wiki?curid=566523
1,756,362
Chemical compatibility is important when choosing materials for chemical storage or reactions, so that the vessel and other apparatus will not be damaged by its contents. For purposes of chemical storage, chemicals that are incompatible should not be stored together, so that any leak will not cause an even more dangerous situation by reacting after leaking. In addition, chemical compatibility refers to the container material being acceptable for storing the chemical, or to a tool or object that comes in contact with a chemical not degrading. For example, when stirring a chemical, the stirrer must be stable in the chemical being stirred. Because of this, many companies publish chemical resistance charts and databases to help chemical users choose appropriate materials for handling chemicals. Such charts are particularly important for polymers, as they are often not compatible with common chemical reagents, and compatibility will even depend on how the polymers are processed. For example, 3-D printed polymer tools used for chemical experiments must be chosen with care to ensure chemical compatibility.
https://en.wikipedia.org/wiki?curid=8761205
1,772,216
A marathon runner's velocity at lactate threshold is strongly correlated with their performance. The lactate threshold, or anaerobic threshold, is considered a good indicator of the body's ability to efficiently process and transfer chemical energy into mechanical energy. A marathon is considered an aerobic-dominant exercise, but the higher intensities associated with elite performance use a larger percentage of anaerobic energy. The lactate threshold is the crossover point between predominantly aerobic energy usage and predominantly anaerobic energy usage. This crossover is associated with the anaerobic energy system's inability to produce energy efficiently, leading to the buildup of blood lactate often associated with muscle fatigue. In endurance-trained athletes, the increase in blood lactate concentration appears at about 75%–90% of VO2 max, which corresponds directly to the fraction of VO2 max at which marathoners run. With such a high intensity sustained for over two hours, a marathon runner's performance requires more energy production than mitochondrial activity alone can supply, resulting in a higher ratio of anaerobic to aerobic energy use during a marathon. The higher the velocity and the fractional use of aerobic capacity an individual has at their lactate threshold, the better their overall performance.
https://en.wikipedia.org/wiki?curid=57276769
1,803,608
Quantum trajectory theory (QTT) addresses the measurement problem in quantum mechanics by providing a detailed description of what happens during the so-called "collapse of the wave function". It reconciles the concept of a quantum jump with the smooth evolution described by the Schrödinger equation. The theory suggests that "quantum jumps" are not instantaneous but unfold in a coherently driven system as a smooth transition through a series of superposition states. This prediction was tested experimentally in 2019 by a team at Yale University led by Michel Devoret and Zlatko Minev, in collaboration with Carmichael and others at Yale and the University of Auckland. In their experiment they used a superconducting artificial atom to observe a quantum jump in detail, confirming that the transition is a continuous process that unfolds over time. They were also able to detect when a quantum jump was about to occur and intervene to reverse it, sending the system back to the state in which it started. This experiment, inspired and guided by QTT, represents a new level of control over quantum systems and has potential applications in correcting errors in quantum computing in the future.
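For readers who want to see what a single quantum trajectory looks like numerically, the fragment below is a minimal Monte Carlo wave-function sketch of a resonantly driven two-level system with one decay channel, the standard textbook unravelling used in quantum trajectory theory. The model, parameters, and variable names are illustrative assumptions and do not describe the Yale experiment itself.

```python
# Minimal Monte Carlo wave-function (quantum trajectory) sketch: a driven
# two-level system with spontaneous emission evolves under a non-Hermitian
# effective Hamiltonian, interrupted by stochastic jumps. Parameters are
# illustrative only.

import numpy as np

rng = np.random.default_rng(1)

omega = 2.0 * np.pi * 1.0      # Rabi drive strength
gamma = 2.0 * np.pi * 0.2      # decay rate
dt = 1e-4
n_steps = 200_000

sigma_minus = np.array([[0, 1], [0, 0]], dtype=complex)   # |g><e|
proj_e = np.array([[0, 0], [0, 1]], dtype=complex)        # |e><e|
H = 0.5 * omega * np.array([[0, 1], [1, 0]], dtype=complex)
H_eff = H - 0.5j * gamma * proj_e                          # non-Hermitian effective Hamiltonian

psi = np.array([1, 0], dtype=complex)   # start in the ground state
r = rng.random()                        # threshold for the next jump
excited_population = np.empty(n_steps)

for k in range(n_steps):
    psi = psi - 1j * dt * (H_eff @ psi)          # first-order step; the norm decays
    norm_sq = np.vdot(psi, psi).real
    if norm_sq < r:                              # a quantum jump occurs
        psi = sigma_minus @ psi
        psi = psi / np.linalg.norm(psi)
        r = rng.random()
        norm_sq = 1.0
    excited_population[k] = abs(psi[1]) ** 2 / norm_sq

# Averaging many such trajectories reproduces the master-equation result.
```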
https://en.wikipedia.org/wiki?curid=65266370
1,819,870
Algorithmic complexity describes how long an algorithm takes to run as a function of its input size. Although there are multiple ways to solve a computational problem, choosing an efficient one matters. Factors such as hardware, networking, programming language, and performance constraints all affect the time a program takes to produce the desired output. It is important for data scientists to calculate the time an algorithm will take in order to better understand ways to make it more efficient. According to a study from the University of Southern California, "The complexity is defined as a numerical function T(n). T(n) is the time that the algorithm performs versus the input size n." In other words, the complexity is expressed as a function T(n) that gives the algorithm's running time in terms of the input size n; its rate of growth is commonly summarized using big "O" notation. Additionally, Allen Tucker, a computer science professor at Bowdoin College, stated "Computational complexity is a continuum, in that some algorithms require linear time (that is, the time required increases directly with the number of items or nodes in the list, graph, or network being processed), whereas others require quadratic or even exponential time to complete". As previously stated, although there are multiple ways to write a program, it is important to be able to calculate an algorithm's efficiency so that algorithms can be classified and, where possible, improved. "The goal of computational complexity is to classify algorithms according to their performances." (Adamchik)
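To make the linear-versus-quadratic distinction concrete, the short sketch below solves the same problem (detecting a duplicate value in a list) with an O(n^2) pairwise comparison and an O(n) single pass, then times both. The function names and input size are illustrative.

```python
# Contrast a quadratic-time and a linear-time solution to the same question:
# does any value occur twice in a list?

import time

def has_duplicate_quadratic(values):
    """O(n^2): compare every pair of elements."""
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            if values[i] == values[j]:
                return True
    return False

def has_duplicate_linear(values):
    """O(n): a single pass using a set of values already seen."""
    seen = set()
    for v in values:
        if v in seen:
            return True
        seen.add(v)
    return False

data = list(range(5_000))   # worst case: no duplicates at all

for fn in (has_duplicate_linear, has_duplicate_quadratic):
    start = time.perf_counter()
    fn(data)
    print(fn.__name__, f"{time.perf_counter() - start:.4f} s")
```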
https://en.wikipedia.org/wiki?curid=27733786
1,823,042
Energy security measures center on reducing dependence on any one source of imported energy or supplier, exploiting renewable energy resources, and reducing demand through energy conservation. Finland is highly dependent on energy imports from Russia, which accounted for 71% of total energy in 2007: 92% of hard coal, 75% of crude oil and 100% of natural gas. Support for domestic energy has been based mainly on traditional bioenergy and the highly disputed fossil peat. New renewable energy alternatives had not been effectively promoted by the end of 2010. This strategy was criticised by the IEA in its country evaluation in 2007. According to renewable energy statistics for EU countries, Finland had low capacities of wind power (19/27), solar power (17/27) and solar heating (23/27) in 2010. Wind power has repeatedly been the most favoured power source in Finland, with over 90% support in public opinion surveys. In this respect, the official energy policy of Finland has promoted the market control of traditional energy sources and companies.
https://en.wikipedia.org/wiki?curid=32854563
1,823,050
For example, the politicians of Espoo sold the public district heating system to the large energy company Fortum in 2006. Since then, district heating prices in Espoo have kept rising, and the city of Espoo has lost tens of millions of euros annually in the energy business compared with the nearby cities of Helsinki and Vantaa. In addition, taxpayers face higher district heating costs. Fortum uses only fossil energy, natural gas from Russia, for district heating. In 2010 Fortum lobbied for a total legal restriction of all renewable energy alternatives within district heating areas. This was not realised, but renewable alternatives have been subject to tighter control through the public permit system since 2010. The sale of the energy company from Espoo to Fortum was worth 365 million euros to Espoo. The investment of these funds has not produced the claimed 5–6% return. In fact, €15 million was invested in Kaupthing, which went bankrupt on 9 October 2008. Espoo also lent 82 million euros to the state, interest-free, for a motorway project (Kehä 1) during 2008–2013. Even though commercial investors have received large compensation for their work from Espoo's energy proceeds, the media have given the impression that the returns have not covered the taxpayers' costs. In short, the deal can be considered successful for the nuclear company Fortum, but unsuccessful for Espoo's taxpayers. There is no effective free competition in district heating. Further, one can hardly avoid the impression that the energy and construction companies have a mutual interest in promoting dependency on Russian energy. Neither Finnish construction companies nor energy companies had, at least until the end of 2010, actively promoted higher energy efficiency standards or alternative energy source obligations.
https://en.wikipedia.org/wiki?curid=32854563
1,835,621
Development of CICE was begun in 1994 by Elizabeth Hunke at Los Alamos National Laboratory (LANL). Since its initial release in 1998, following development of the Elastic-Viscous-Plastic (EVP) sea ice rheology within the model, it has been substantially developed by an international community of model users and developers. Enthalpy-conserving thermodynamics and improvements to the sea ice thickness distribution were added to the model between 1998 and 2005. The first institutional user outside LANL was the Naval Postgraduate School in the late 1990s, where it was subsequently incorporated into the Regional Arctic System Model (RASM) in 2011. The National Center for Atmospheric Research (NCAR) was the first to incorporate CICE into a global climate model, in 2002, and developers of the NCAR Community Earth System Model (CESM) have continued to contribute to CICE innovations and have used it to investigate polar variability in Earth's climate system. The United States Navy began using CICE shortly after 2000 for polar research and sea ice forecasting, and it continues to do so today. Since 2000, CICE development or its coupling to oceanic and atmospheric models for weather and climate prediction has taken place at the University of Reading, University College London, the U.K. Met Office Hadley Centre, Environment and Climate Change Canada, the Danish Meteorological Institute, the Commonwealth Scientific and Industrial Research Organisation, and Beijing Normal University, among other institutions. As a result of model development in the global community of CICE users, the model's computer code now includes a comprehensive saline ice physics and biogeochemistry library that incorporates mushy-layer thermodynamics, anisotropic continuum mechanics, Delta-Eddington radiative transfer, melt-pond physics and land-fast ice. CICE version 6 is open-source software and was released in 2018 on GitHub.
https://en.wikipedia.org/wiki?curid=58881336
1,838,647
Scanning the photon energy corresponds to shifting the internal energy distribution of the parent ion. The parent ion sits in a potential energy well, in which the lowest-energy exit channel often corresponds to the breaking of the weakest chemical bond, resulting in the formation of a fragment or daughter ion. A mass spectrum is recorded at every photon energy, and the fractional ion abundances are plotted to obtain the breakdown diagram. At low energies no parent ion is energetic enough to dissociate, and the parent ion accounts for 100% of the ion signal. As the photon energy is increased, a certain fraction of the parent ions (determined, in fact, by the cumulative distribution function of the neutral internal energy distribution) still has too little energy to dissociate, while the rest do dissociate. The parent ion fractional abundance decreases, and the daughter ion signal increases. At the dissociative photoionization threshold, E0, all parent ions, even those with initially zero internal energy, can dissociate, and the daughter ion abundance reaches 100% in the breakdown diagram.
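The relationship between the breakdown diagram and the cumulative distribution function of the neutral's internal energy can be illustrated with a short numerical sketch: assuming a thermal internal energy distribution and a dissociation threshold, a parent ion at photon energy hv survives only if its internal energy lies below E0 − hv. The distribution, threshold, and energy grid below are illustrative assumptions, not values for any particular molecule.

```python
# Sketch of how a breakdown diagram follows from the neutral's internal energy
# distribution: at photon energy hv, parent ions with internal energy below
# E0 - hv survive; the rest dissociate. Threshold and distribution are assumed.

import numpy as np

E0 = 11.0                                        # assumed dissociation threshold, eV
photon_energies = np.linspace(10.6, 11.1, 200)   # scanned photon energy, eV

# Assumed room-temperature internal (ro-vibrational) energy distribution of the
# neutral, modelled here as a gamma distribution with mean ~0.1 eV.
rng = np.random.default_rng(0)
internal_energies = rng.gamma(shape=2.0, scale=0.05, size=100_000)   # eV

parent_fraction = np.array([
    np.mean(internal_energies < (E0 - hv)) for hv in photon_energies
])
daughter_fraction = 1.0 - parent_fraction

# parent_fraction falls from ~1 to 0 as hv approaches E0; plotting both curves
# against photon energy reproduces the shape of a breakdown diagram.
```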
https://en.wikipedia.org/wiki?curid=34385670
1,853,207
"Contemporary Physics" has been published by Taylor & Francis since 1959 and publishes four issues per year. The subjects covered by this journal are: astrophysics, atomic and nuclear physics, chemical physics, computational physics, condensed matter physics, environmental physics, experimental physics, general physics, particle & high energy physics, plasma physics, space science, and theoretical physics.
https://en.wikipedia.org/wiki?curid=25728623