Columns: doc_id (int32) · text (string) · source (string)
503,742
The Deutsch-Jozsa algorithm is a quantum algorithm designed to solve a toy problem with a smaller query complexity than is possible with a classical algorithm. The toy problem asks whether a function $f\colon \{0,1\}^n \to \{0,1\}$ is constant or balanced, those being the only two possibilities. The only way to evaluate the function $f$ is to consult a black box or oracle. A classical deterministic algorithm will have to check more than half of the possible inputs to be sure whether the function is constant or balanced. With $2^n$ possible inputs, the query complexity of the most efficient classical deterministic algorithm is $2^{n-1}+1$. The Deutsch-Jozsa algorithm takes advantage of quantum parallelism to check all of the elements of the domain at once and only needs to query the oracle once, making its query complexity 1.
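To make the classical bound concrete, here is a minimal Python sketch (not from the source; the oracles and the choice of n are illustrative assumptions) that counts how many oracle queries a deterministic classical checker needs in the worst case, versus the single query used by the quantum algorithm.

```python
from itertools import product

def classical_queries_needed(f, n):
    """Query f until the constant/balanced question is settled.

    Worst case for a deterministic strategy: 2**(n-1) + 1 queries,
    because the first 2**(n-1) answers may all agree even if f is balanced.
    """
    seen = []
    for x in product([0, 1], repeat=n):
        seen.append(f(x))
        queries = len(seen)
        if len(set(seen)) == 2:          # two different values seen -> balanced
            return queries, "balanced"
        if queries == 2 ** (n - 1) + 1:  # more than half agree -> must be constant
            return queries, "constant"
    return len(seen), "constant"

n = 3
constant_f = lambda x: 0        # a constant oracle
balanced_f = lambda x: x[-1]    # a balanced oracle (half the inputs give 1)

print(classical_queries_needed(constant_f, n))  # (5, 'constant') = 2**(n-1)+1 queries
print(classical_queries_needed(balanced_f, n))  # (2, 'balanced') as soon as answers differ
```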
https://en.wikipedia.org/wiki?curid=24092190
568,447
In artificial intelligence (AI), particularly machine learning (ML), ablation is the removal of a component of an AI system. An ablation study investigates the performance of an AI system by removing certain components to understand the contribution of each component to the overall system. The term is an analogy with biology (removal of components of an organism), and is particularly used in the analysis of artificial neural nets by analogy with ablative brain surgery. Other analogies include other biological neural systems, such as the Drosophila central nervous system and the vertebrate brain. Ablation studies require that a system exhibit graceful degradation: the system must continue to function even when certain components are missing or degraded. According to some researchers, ablation studies are a convenient technique for investigating AI systems and their robustness to structural damage. Ablation studies damage or remove certain components in a controlled setting to investigate the possible outcomes of system failure; this characterizes how each component contributes to the system's overall performance and capabilities. The ablation process can be used to test systems that perform tasks such as speech recognition, visual object recognition, and robot control.
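As a concrete illustration (not from the source; the synthetic dataset and the choice of "component" are assumptions made for this sketch), an ablation study in ML often amounts to retraining the system with one component removed and comparing a metric:

```python
# Minimal ablation-study sketch: remove a group of input features (the "component")
# and compare test accuracy with and without it.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def fit_and_score(cols):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:, cols], y_train)
    return accuracy_score(y_test, model.predict(X_test[:, cols]))

all_cols = list(range(20))
ablated_cols = list(range(10, 20))   # ablation: drop the first 10 features

print("full system :", fit_and_score(all_cols))
print("ablated     :", fit_and_score(ablated_cols))
# The accuracy drop estimates how much the removed component contributed.
```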
https://en.wikipedia.org/wiki?curid=65434605
578,132
One theorized application would be to use topologically ordered states as media for quantum computing in a technique known as topological quantum computing. A topologically ordered state is a state with complicated non-local quantum entanglement. The non-locality means that the quantum entanglement in a topologically ordered state is distributed among many different particles. As a result, the pattern of quantum entanglements cannot be destroyed by local perturbations. This significantly reduces the effect of decoherence. This suggests that if we use different quantum entanglements in a topologically ordered state to encode quantum information, the information may last much longer. The quantum information encoded by the topological quantum entanglements can also be manipulated by dragging the topological defects around each other. This process may provide a physical apparatus for performing quantum computations. Therefore, topologically ordered states may provide natural media for both quantum memory and quantum computation. Such realizations of quantum memory and quantum computation may potentially be made fault tolerant.
https://en.wikipedia.org/wiki?curid=3087602
608,684
The second risk is the AI race itself, whether or not the race is won by any one group. Because of the rhetoric and the perceived advantage of being the first to develop advanced AI technology, strong incentives emerge to cut corners on safety considerations, which might leave out important aspects such as bias and fairness. In particular, the perception that another team is on the brink of a breakthrough encourages other teams to take shortcuts and deploy an AI system that is not ready, which can be harmful to others as well as to the group possessing the AI system. As Paul Scharre warns in "Foreign Policy", "For each country, the real danger is not that it will fall behind its competitors in AI but that the perception of a race will prompt everyone to rush to deploy unsafe AI systems. In their desire to win, countries risk endangering themselves just as much as their opponents." Nick Bostrom and others developed a model that provides further evidence of this. The model found that a team possessing more information about other teams' capabilities took more risks and shortcuts in its development of AI systems. Furthermore, the greater the enmity between teams, the greater the risk that precautions will be ignored, leading to an AI disaster. Another danger of an AI arms race is the risk of losing control of the AI systems; the risk is compounded in the case of a race to artificial general intelligence, which Cave noted may present an existential risk.
https://en.wikipedia.org/wiki?curid=56127293
634,732
Gaia DR1, the first data release of the spacecraft "Gaia" mission, based on 14 months of observations made through September 2015, took place on 13 September 2016. The data release includes positions and magnitudes in a single photometric band for 1.1 billion stars using only "Gaia" data, positions, parallaxes and proper motions for more than 2 million stars based on a combination of "Gaia" and Tycho-2 data for those objects in both catalogues, light curves and characteristics for about 3000 variable stars, and positions and magnitudes for more than 2000 extragalactic sources used to define the celestial reference frame. The second data release (DR2), which occurred on 25 April 2018, is based on 22 months of observations made between 25 July 2014 and 23 May 2016. It includes positions, parallaxes and proper motions for about 1.3 billion stars and positions of an additional 300 million stars, red and blue photometric data for about 1.1 billion stars and single colour photometry for an additional 400 million stars, and median radial velocities for about 7 million stars between magnitude 4 and 13. It also contains data for over 14,000 selected Solar System objects. The first part of the third data release, EDR3 (Early Data Release 3), was released on 3 December 2020. It is based on 34 months of observations and consists of improved positions, parallaxes and proper motions of over 1.8 billion objects. The full DR3, expected on 13 June 2022, will include the EDR3 data plus Solar System data; variability information; results for non-single stars, for quasars, and for extended objects; astrophysical parameters; and a special data set, the Gaia Andromeda Photometric Survey (GAPS). The final Gaia catalogue is expected to be released three years after the end of the Gaia mission.
https://en.wikipedia.org/wiki?curid=28232
674,551
The climate system receives energy from the Sun, and to a far lesser extent from the Earth's core, as well as tidal energy from the Moon. The Earth gives off energy to outer space in two forms: it directly reflects a part of the radiation of the Sun and it emits infra-red radiation as black-body radiation. The balance of incoming and outgoing energy, and the passage of the energy through the climate system, determines Earth's energy budget. When the total of incoming energy is greater than the outgoing energy, Earth's energy budget is positive and the climate system is warming. If more energy goes out, the energy budget is negative and Earth experiences cooling.
https://en.wikipedia.org/wiki?curid=5565588
678,856
According to QBists, the advantages of adopting this view of probability are twofold: First, for QBists the role of quantum states, such as the wavefunctions of particles, is to efficiently encode probabilities; so quantum states are ultimately degrees of belief themselves. (If one considers any single measurement that is a minimal, informationally complete POVM, this is especially clear: A quantum state is mathematically equivalent to a single probability distribution, the distribution over the possible outcomes of that measurement.) Regarding quantum states as degrees of belief implies that the event of a quantum state changing when a measurement occurs—the "collapse of the wave function"—is simply the agent updating her beliefs in response to a new experience. Second, it suggests that quantum mechanics can be thought of as a local theory, because the Einstein–Podolsky–Rosen (EPR) criterion of reality can be rejected. The EPR criterion states, "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." Arguments that quantum mechanics should be considered a nonlocal theory depend upon this principle, but to a QBist, it is invalid, because a personalist Bayesian considers all probabilities, even those equal to unity, to be degrees of belief. Therefore, while many interpretations of quantum theory conclude that quantum mechanics is a nonlocal theory, QBists do not.
https://en.wikipedia.org/wiki?curid=35611432
693,345
During World War II, Pascual Jordan first suggested that since the positive energy of a star's mass and the negative energy of its gravitational field together may have zero total energy, conservation of energy would not prevent a star being created by a quantum transition of the vacuum. George Gamow recounted putting this idea to Albert Einstein: "Einstein stopped in his tracks and, since we were crossing a street, several cars had to stop to avoid running us down". Elaboration of the concept was slow, with the first notable calculation being performed by Richard Feynman in 1962. The first known publication on the topic was in 1973, when Edward Tryon proposed in the journal "Nature" that the universe emerged from a large-scale quantum fluctuation of vacuum energy, resulting in its positive mass-energy being exactly balanced by its negative gravitational potential energy. In the subsequent decades, development of the concept was constantly plagued by the dependence of the calculated masses on the choice of coordinate system. In particular, a problem arises due to energy associated with coordinate systems co-rotating with the entire universe. A first constraint was derived in 1987, when Alan Guth published a proof that gravitational energy is negative relative to the mass-energy associated with matter. The question of the mechanism permitting the generation of both positive and negative energy from a null initial solution was not understood, and an "ad hoc" solution with cyclic time was proposed by Stephen Hawking in 1988.
https://en.wikipedia.org/wiki?curid=26499561
693,752
For the epistemic approach, we formulate the problem as if we want to attribute a probability to a hypothesis. This kind of question is most naturally answered with Bayesian statistics, where the interpretation of probability is straightforward because Bayesian inference conditions only on the data actually observed, whereas frequentist inference inherently depends on the whole sample space, including data that were never observed. The reason for this is inherent to frequentist design: frequentist statistics is conditioned not solely on the data but also on the "experimental design". In frequentist statistics, the cutoff for statistical significance is derived from the sampling distribution assumed in the experimental design. For example, a binomial distribution and a negative binomial distribution can be used to analyze exactly the same data, but because their tails differ, a frequentist analysis will report different levels of statistical significance for the same data under the two assumed probability distributions. This difference does not occur in Bayesian inference. For more, see the likelihood principle, which frequentist statistics inherently violates.
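A worked numerical example of this point (the data, 9 successes and 3 failures under a null of p = 0.5, are an illustrative assumption, not from the source): the same observed counts give different one-sided p-values under a binomial design (12 trials fixed in advance) and a negative binomial design (sample until the 3rd failure), even though the likelihoods are proportional.

```python
from scipy import stats

# Observed data: 9 successes and 3 failures; null hypothesis p = 0.5.

# Binomial design: the number of trials n = 12 was fixed in advance.
p_binom = stats.binom.sf(8, 12, 0.5)    # P(at least 9 successes in 12 trials)

# Negative binomial design: sampling continued until the 3rd failure occurred.
# scipy's nbinom(k; r, q) counts "failures" before the r-th "success";
# here the 9 successes play the role of scipy's failures, with r = 3, q = 0.5.
p_nbinom = stats.nbinom.sf(8, 3, 0.5)   # P(at least 9 successes before 3rd failure)

print(f"binomial design p-value:          {p_binom:.4f}")   # ~0.073
print(f"negative binomial design p-value: {p_nbinom:.4f}")  # ~0.033
# Same data and (proportional) likelihood, yet different "statistical significance".
```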
https://en.wikipedia.org/wiki?curid=15537745
714,585
The general audit (alternatively called a mini-audit, site energy audit or detailed energy audit or complete site energy audit) expands on the preliminary audit described above by collecting more detailed information about facility operation and by performing a more detailed evaluation of energy conservation measures. Utility bills are collected for a 12- to 36-month period to allow the auditor to evaluate the facility's energy demand rate structures and energy usage profiles. If interval meter data is available, the detailed energy profiles that such data makes possible will typically be analyzed for signs of energy waste. Additional metering of specific energy-consuming systems is often performed to supplement utility data. In-depth interviews with facility operating personnel are conducted to provide a better understanding of major energy consuming systems and to gain insight into short- and longer-term energy consumption patterns.
https://en.wikipedia.org/wiki?curid=9728715
753,507
Solutions of the Dirac equation contain negative energy quantum states. As a result, an electron could always radiate energy and fall into a negative energy state. Even worse, it could keep radiating infinite amounts of energy because there were infinitely many negative energy states available. To prevent this unphysical situation from happening, Dirac proposed that a "sea" of negative-energy electrons fills the universe, already occupying all of the lower-energy states so that, due to the Pauli exclusion principle, no other electron could fall into them. Sometimes, however, one of these negative-energy particles could be lifted out of this Dirac sea to become a positive-energy particle. But, when lifted out, it would leave behind a "hole" in the sea that would act exactly like a positive-energy electron with a reversed charge. These holes were interpreted as "negative-energy electrons" by Paul Dirac and mistakenly identified with protons in his 1930 paper "A Theory of Electrons and Protons". However, these "negative-energy electrons" turned out to be positrons, and not protons.
https://en.wikipedia.org/wiki?curid=1327
754,862
A net zero-energy building (ZEB) is a building that over a year does not use more energy than it generates. The first Zero Energy Design building, in 1979, used passive solar heating and cooling techniques with airtight construction and super insulation. A few ZEBs fail to fully exploit more affordable conservation technology, and all use onsite active renewable energy technologies like photovoltaics to offset the building's primary energy consumption. "Passive house" and ZEB are regarded as complementary, synergistic technology approaches, based on the same physics of thermal energy transfer and storage: ZEBs drive the annual energy consumption down to 0 kWh/m² with help from on-site renewable energy sources and can benefit from materials and methods which are used to meet the passive house demand constraint of 120 kWh/m² per year, which will minimize the need for the often costly on-site renewable energy sources. Energy Plus houses are similar to both "passivhaus" and ZEB but emphasize the production of more energy per year than they consume; e.g., an annual energy performance of −25 kWh/m² is an Energy Plus house.
https://en.wikipedia.org/wiki?curid=1866599
781,899
Ensemble models of allosteric regulation enumerate an allosteric system's statistical ensemble as a function of its potential energy function, and then relate specific statistical measurements of allostery to specific energy terms in the energy function (such as an intermolecular salt bridge between two domains). Ensemble models like the ensemble allosteric model and allosteric Ising model assume that each domain of the system can adopt two states similar to the MWC model. The allostery landscape model introduced by Cuendet, Weinstein, and LeVine allows for the domains to have any number of states and the contribution of a specific molecular interaction to a given allosteric coupling can be estimated using a rigorous set of rules. Molecular dynamics simulations can be used to estimate a system's statistical ensemble so that it can be analyzed with the allostery landscape model.
https://en.wikipedia.org/wiki?curid=106256
787,996
A "runtime system", also called "runtime environment", primarily implements portions of an execution model. This is not to be confused with the runtime lifecycle phase of a program, during which the runtime system is in operation. When treating the "runtime system" as distinct from the "runtime environment" (RTE), the first may be defined as a specific part of the application software (IDE) used for programming, a piece of software that provides the programmer a more convenient environment for running programs during their production (testing and similar), while the second (RTE) would be the very instance of an execution model being applied to the developed program which is itself then run in the aforementioned "runtime system".
https://en.wikipedia.org/wiki?curid=418206
795,322
The first term on the right hand side of the equation presents a stream of thermal energy into the system; the last term presents a part of a stream of energy formula_13 coming into the system with the stream of particles of substances formula_14, which can be positive or negative, formula_15, where formula_16 is the chemical potential of substance formula_17. The middle term in (1) depicts energy dissipation (entropy production) due to the relaxation of internal variables formula_18. In the case of chemically reacting substances, which was investigated by Prigogine, the internal variables appear to be measures of the incompleteness of chemical reactions, that is, measures of how far the considered system with chemical reactions is out of equilibrium. The theory can be generalised to consider any deviation from the equilibrium state as an internal variable, so that we consider the set of internal variables formula_18 in equation (1) to consist of the quantities defining not only degrees of completeness of all chemical reactions occurring in the system, but also the structure of the system, gradients of temperature, differences of concentrations of substances, and so on.
https://en.wikipedia.org/wiki?curid=472429
810,316
In exothermic chemical reactions, the heat that is released by the reaction takes the form of electromagnetic energy or kinetic energy of molecules. The transition of electrons from one quantum energy level to another causes light to be released. This light is equivalent in energy to some of the stabilization energy of the chemical reaction, i.e. the bond energy. This released light can be absorbed by other molecules in solution to give rise to molecular translations and rotations, which gives rise to the classical understanding of heat. In an exothermic reaction, the activation energy (energy needed to start the reaction) is less than the energy that is subsequently released, so there is a net release of energy.
https://en.wikipedia.org/wiki?curid=10201
824,782
Franck and Hertz had proposed that the 4.9 V characteristic of their experiments was due to ionization of mercury atoms by collisions with the flying electrons emitted at the cathode. In 1915 Bohr published a paper noting that the measurements of Franck and Hertz were more consistent with the assumption of quantum levels in his own model for atoms. In the Bohr model, the collision excited an internal electron within the atom from its lowest level to the first quantum level above it. The Bohr model also predicted that light would be emitted as the internal electron returned from its excited quantum level to the lowest one; its wavelength corresponded to the energy difference of the atom's internal levels, which has been called the Bohr relation. Franck and Hertz's observation of emission from their tube at 254 nm was also consistent with Bohr's perspective. Writing following the end of World War I in 1918, Franck and Hertz had largely adopted the Bohr perspective for interpreting their experiment, which has become one of the experimental pillars of quantum mechanics. As Abraham Pais described it, "Now the beauty of Franck and Hertz's work lies not only in the measurement of the energy loss $E_2 - E_1$ of the impinging electron, but they also observed that, when the energy of that electron exceeds 4.9 eV, mercury begins to emit ultraviolet light of a definite frequency "ν" as defined in the above formula. Thereby they gave (unwittingly at first) the first direct experimental proof of the Bohr relation!" Franck himself emphasized the importance of the ultraviolet emission experiment in an epilogue to the 1960 Physical Science Study Committee (PSSC) film about the Franck–Hertz experiment.
https://en.wikipedia.org/wiki?curid=381805
828,175
Probability amplitudes provide a relationship between the quantum state vector of a system and the results of observations of that system, a link first proposed by Max Born in 1926. Interpretation of the values of a wave function as probability amplitudes is a pillar of the Copenhagen interpretation of quantum mechanics. In fact, the properties of the space of wave functions were being used to make physical predictions (such as emissions from atoms being at certain discrete energies) before any physical interpretation of a particular function was offered. Born was awarded half of the 1954 Nobel Prize in Physics for this understanding, and the probability thus calculated is sometimes called the "Born probability". These probabilistic concepts, namely the probability density and quantum measurements, were vigorously contested at the time by the original physicists working on the theory, such as Schrödinger and Einstein. They are the source of the mysterious consequences and philosophical difficulties in the interpretations of quantum mechanics—topics that continue to be debated even today.
https://en.wikipedia.org/wiki?curid=429425
842,940
Tyrosine kinases function in a variety of processes, pathways, and actions, and are responsible for key events in the body. The receptor tyrosine kinases function in transmembrane signaling, whereas tyrosine kinases within the cell function in signal transduction to the nucleus. Tyrosine kinase activity in the nucleus involves cell-cycle control and properties of transcription factors. In this way, in fact, tyrosine kinase activity is involved in mitogenesis, or the induction of mitosis in a cell; proteins in the cytosol and proteins in the nucleus are phosphorylated at tyrosine residues during this process. Cellular growth and reproduction may rely to some degree on tyrosine kinase. Tyrosine kinase function has been observed in the nuclear matrix, which comprises not the chromatin but rather the nuclear envelope and a “fibrous web” that serves to physically stabilize DNA. To be specific, Lyn, a type of kinase in the Src family that was identified in the nuclear matrix, appears to control the cell cycle. Src family tyrosine kinases are closely related but demonstrate a wide variety of functionality. Roles or expressions of Src family tyrosine kinases vary significantly according to cell type, as well as during cell growth and differentiation. Lyn and Src family tyrosine kinases in general have been known to function in signal transduction pathways. There is evidence that Lyn is localized at the cell membrane; Lyn is associated both physically and functionally with a variety of receptor molecules.
https://en.wikipedia.org/wiki?curid=51903
858,859
Artificial intelligence (AI) is technology that enables computers to sense, reason, act and adapt. AI is not new, but it is growing rapidly and being adopted widely. AI can now deal with large data sets, solve problems, and make operations more efficient. AI has great potential in healthcare because it makes information more accessible, improves care, and reduces costs. Different factors drive AI in healthcare, but the two most important are economics and the advent of big data analytics. Costs, new payment options, and people's desire to improve health outcomes are the primary economic drivers of AI. By some estimates, AI could save the US healthcare system $150 billion annually by 2026, and the AI health market is expected to reach $6.6 billion by 2021. Big data analytics is another big driver because we are in the age of big data. Such data are extremely helpful in integrating AI into healthcare because they support the execution of complex tasks with quality and efficiency.
https://en.wikipedia.org/wiki?curid=3255966
858,860
AI brings many benefits to the healthcare industry. AI helps to detect diseases, manage chronic conditions, deliver health services, and discover drugs. AI also has the potential to address important health challenges. In healthcare organizations, AI can help plan and allocate resources. AI is able to match patients with healthcare providers that meet their needs. AI also helps improve the healthcare experience, for example through apps that identify patients' anxieties. In medical research, AI helps to analyze and evaluate patterns in complex data. For instance, AI is important in drug discovery because it can search relevant studies and analyze different kinds of data. In clinical care, AI helps to detect diseases and analyze clinical data, publications, and guidelines; as such, AI helps find the best treatments for patients. Other uses of AI in clinical care include medical imaging, echocardiography, screening, and surgery.
https://en.wikipedia.org/wiki?curid=3255966
871,997
Cell growth is not to be confused with cell division or the cell cycle, which are distinct processes that can occur alongside cell growth during the process of cell proliferation, where a cell, known as the mother cell, grows and divides to produce two daughter cells. Importantly, cell growth and cell division can also occur independently of one another. During early embryonic development (cleavage of the zygote to form a morula and blastoderm), cell divisions occur repeatedly without cell growth. Conversely, some cells can grow without cell division or without any progression of the cell cycle, such as growth of neurons during axonal pathfinding in nervous system development.
https://en.wikipedia.org/wiki?curid=564779
872,020
Pom1 forms polar gradients that peak at cell ends, which shows a direct link between size control factors and a specific physical location in the cell. As a cell grows in size, a gradient in Pom1 grows. When cells are small, Pom1 is spread diffusely throughout the cell body. As the cell increases in size, Pom1 concentration decreases in the middle and becomes concentrated at cell ends. Small cells in early G2 which contain sufficient levels of Pom1 in the entirety of the cell have inactive Cdr2 and cannot enter mitosis. It is not until the cells grow into late G2, when Pom1 is confined to the cell ends, that Cdr2 in the medial cortical nodes is activated and able to start the inhibition of Wee1. This finding shows how cell size plays a direct role in regulating the start of mitosis. In this model, Pom1 acts as a molecular link between cell growth and mitotic entry through a Cdr2-Cdr1-Wee1-Cdk1 pathway. The Pom1 polar gradient successfully relays information about cell size and geometry to the Cdk1 regulatory system. Through this gradient, the cell ensures it has reached a defined, sufficient size to enter mitosis.
https://en.wikipedia.org/wiki?curid=564779
872,444
Quantum annealing starts from a quantum-mechanical superposition of all possible states (candidate states) with equal weights. Then the system evolves following the time-dependent Schrödinger equation, a natural quantum-mechanical evolution of physical systems. The amplitudes of all candidate states keep changing, realizing a quantum parallelism, according to the time-dependent strength of the transverse field, which causes quantum tunneling between states. If the rate of change of the transverse field is slow enough, the system stays close to the ground state of the instantaneous Hamiltonian (also see adiabatic quantum computation). If the rate of change of the transverse field is accelerated, the system may leave the ground state temporarily but produce a higher likelihood of concluding in the ground state of the final problem Hamiltonian, i.e., diabatic quantum computation. The transverse field is finally switched off, and the system is expected to have reached the ground state of the classical Ising model that corresponds to the solution to the original optimization problem. An experimental demonstration of the success of quantum annealing for random magnets was reported immediately after the initial theoretical proposal.
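The schedule described above can be illustrated with a tiny numerical toy (entirely illustrative, not from the source: the two-spin Ising problem, the linear annealing schedule, and the chosen sweep times are all assumptions): evolving the Schrödinger equation slowly keeps the state near the instantaneous ground state, so the final state overlaps strongly with the classical solution, while a fast sweep does not.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and two-spin operators
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def two(op, pos):                        # op acting on spin `pos` of a 2-spin system
    return np.kron(op, I2) if pos == 0 else np.kron(I2, op)

# Problem Hamiltonian: ferromagnetic coupling plus a small bias field;
# its unique classical ground state is |00>.
H_problem = -two(sz, 0) @ two(sz, 1) - 0.5 * (two(sz, 0) + two(sz, 1))
# Driver (transverse-field) Hamiltonian; its ground state is the uniform superposition.
H_driver = -(two(sx, 0) + two(sx, 1))

def anneal(T, steps=400):
    """Linearly ramp from H_driver to H_problem over total time T."""
    psi = np.ones(4, dtype=complex) / 2.0        # ground state of H_driver
    dt = T / steps
    for k in range(steps):
        s = (k + 0.5) / steps                    # schedule s: 0 -> 1
        H = (1 - s) * H_driver + s * H_problem
        psi = expm(-1j * H * dt) @ psi
    return abs(psi[0]) ** 2                      # probability of measuring |00>

print("slow sweep (T=20): ", anneal(20.0))   # close to 1: stays near the ground state
print("fast sweep (T=0.5):", anneal(0.5))    # noticeably smaller overlap
```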
https://en.wikipedia.org/wiki?curid=5219389
879,375
Scientists have recently looked for experimental evidence of this proposed energy transfer mechanism. A study published in 2007 claimed the identification of electronic quantum coherence at −196 °C (77 K). Another theoretical study from 2010 provided evidence that quantum coherence lives as long as 300 femtoseconds at biologically relevant temperatures (4 °C or 277 K). In that same year, experiments conducted on photosynthetic cryptophyte algae using two-dimensional photon echo spectroscopy yielded further confirmation for long-term quantum coherence. These studies suggest that, through evolution, nature has developed a way of protecting quantum coherence to enhance the efficiency of photosynthesis. However, critical follow-up studies question the interpretation of these results. Single molecule spectroscopy now shows the quantum characteristics of photosynthesis without the interference of static disorder, and some studies use this method to assign reported signatures of electronic quantum coherence to nuclear dynamics occurring in chromophores. A number of proposals emerged trying to explain unexpectedly long coherence. According to one proposal, if each site within the complex feels its own environmental noise, the electron will not remain in any local minimum due to both quantum coherence and thermal environment, but proceed to the reaction site via quantum walks. Another proposal is that the rate of quantum coherence and electron tunneling create an energy sink that moves the electron to the reaction site quickly. Other work suggested that geometric symmetries in the complex may favor efficient energy transfer to the reaction center, mirroring perfect state transfer in quantum networks. Furthermore, experiments with artificial dye molecules cast doubts on the interpretation that quantum effects last any longer than one hundred femtoseconds.
https://en.wikipedia.org/wiki?curid=13537626
879,403
If a plant cell is placed in a hypertonic solution, the plant cell loses water and hence turgor pressure by plasmolysis: pressure decreases to the point where the protoplasm of the cell peels away from the cell wall, leaving gaps between the cell wall and the membrane and making the plant cell shrink and crumple. A continued decrease in pressure eventually leads to cytorrhysis – the complete collapse of the cell wall. Plants with cells in this condition wilt. After plasmolysis the gap between the cell wall and the cell membrane in a plant cell is filled with hypertonic solution. This is because as the solution surrounding the cell is hypertonic, exosmosis takes place and the space between the cell wall and cytoplasm is filled with solutes, as most of the water drains away and hence the concentration inside the cell becomes more hypertonic. There are some mechanisms in plants to prevent excess water loss in the same way as excess water gain. Plasmolysis can be reversed if the cell is placed in a hypotonic solution. Stomata help keep water in the plant so it does not dry out. Wax also keeps water in the plant. The equivalent process in animal cells is called crenation.
https://en.wikipedia.org/wiki?curid=106270
890,124
Most writers think that the new paradox can be defused, although the resolution requires concepts from mathematical economics. Suppose $E(B\mid A=a) > a$ for all "a". It can be shown that this is possible for some probability distributions of "X" (the smaller amount of money in the two envelopes) only if $E(X) = \infty$, that is, only if the mean of all possible values of money in the envelopes is infinite. To see why, compare the series described above, in which the probability of each "X" is 2/3 as likely as the previous "X", with one in which the probability of each "X" is only 1/3 as likely as the previous "X". When the probability of each subsequent term is greater than one-half of the probability of the term before it (and each "X" is twice that of the "X" before it), the mean is infinite, but when the probability factor is less than one-half, the mean converges. In the cases where the probability factor is less than one-half, $E(B\mid A=a) < a$ for all "a" other than the first, smallest "a", and the total expected value of switching converges to 0. In addition, if an ongoing distribution with a probability factor greater than one-half is made finite by, after any number of terms, establishing a final term with "all the remaining probability," that is, 1 minus the probability of all previous terms, then the expected value of switching with respect to the probability that "A" is equal to the last, largest "a" will exactly negate the sum of the positive expected values that came before, and again the total expected value of switching drops to 0 (this is the general case of setting out an equal probability of a finite set of values in the envelopes described above). Thus, the only distributions that seem to point to a positive expected value for switching are those in which $E(X) = \infty$. Averaging over "a", it follows that $E(B) = E(A) = \infty$ (because "A" and "B" have identical probability distributions, by symmetry, and both "A" and "B" are greater than or equal to "X").
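A small Monte Carlo check of this resolution (illustrative, not from the source; the truncated distribution with probability factor 2/3 and ten terms is an assumption): for such a finite distribution the conditional gain from switching is positive for every observed amount except the largest, where it is strongly negative, and the overall expected gain is zero.

```python
import random
from collections import defaultdict

random.seed(0)

# Truncated "paradoxical" distribution: X = 2**k with P proportional to (2/3)**k,
# k = 0..9, so each X value is 2/3 as likely as the previous one (finite mean).
ks = list(range(10))
weights = [(2 / 3) ** k for k in ks]

gains = defaultdict(list)          # observed amount a -> list of (other envelope - a)
total = []

for _ in range(1_000_000):
    k = random.choices(ks, weights)[0]
    x = 2 ** k                                     # smaller amount
    a, b = (x, 2 * x) if random.random() < 0.5 else (2 * x, x)
    gains[a].append(b - a)
    total.append(b - a)

for a in sorted(gains):
    g = sum(gains[a]) / len(gains[a])
    print(f"A = {a:5d}: mean gain from switching = {g:9.2f}")
print("overall mean gain:", sum(total) / len(total))   # ~ 0: switching gains nothing
```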
https://en.wikipedia.org/wiki?curid=2539764
894,310
Hidden quantum Markov models (HQMMs) are a quantum-enhanced version of classical Hidden Markov Models (HMMs), which are typically used to model sequential data in various fields like robotics and natural language processing. Unlike the approach taken by other quantum-enhanced machine learning algorithms, HQMMs can be viewed as models inspired by quantum mechanics that can be run on classical computers as well. Where classical HMMs use probability vectors to represent hidden 'belief' states, HQMMs use the quantum analogue: density matrices. Recent work has shown that these models can be successfully learned by maximizing the log-likelihood of the given data via classical optimization, and there is some empirical evidence that these models can better model sequential data compared to classical HMMs in practice, although further work is needed to determine exactly when and how these benefits are derived. Additionally, since classical HMMs are a particular kind of Bayes net, an exciting aspect of HQMMs is that the techniques used show how we can perform quantum-analogous Bayesian inference, which should allow for the general construction of the quantum versions of probabilistic graphical models.
https://en.wikipedia.org/wiki?curid=44108758
907,828
Tokenization, when applied to data security, is the process of substituting a sensitive data element with a non-sensitive equivalent, referred to as a token, that has no intrinsic or exploitable meaning or value. The token is a reference (i.e. identifier) that maps back to the sensitive data through a tokenization system. The mapping from original data to a token uses methods that render tokens infeasible to reverse in the absence of the tokenization system, for example using tokens created from random numbers. A one-way cryptographic function is used to convert the original data into tokens, making it difficult to recreate the original data without obtaining entry to the tokenization system's resources. To deliver such services, the system maintains a vault database of tokens that are connected to the corresponding sensitive data. Protecting the system vault is vital to the system, and improved processes must be put in place to offer database integrity and physical security.
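A toy sketch of a token vault (purely illustrative, not the source's or any vendor's implementation; the card number and the in-memory vault structure are assumptions): tokens are drawn from a random source, so nothing about a token can be reversed without access to the vault.

```python
import secrets

class TokenVault:
    """Minimal in-memory token vault: token -> sensitive value mapping.

    Real systems persist the vault in a hardened, access-controlled store;
    this sketch only shows the random, non-derivable nature of tokens.
    """
    def __init__(self):
        self._token_to_value = {}

    def tokenize(self, sensitive_value: str) -> str:
        token = secrets.token_hex(8)            # random token, no relation to the input
        self._token_to_value[token] = sensitive_value
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]      # only the vault can map back

vault = TokenVault()
pan = "4111 1111 1111 1111"                     # example card number
tok = vault.tokenize(pan)
print("stored/shared token:", tok)              # safe to pass to downstream systems
print("recovered original :", vault.detokenize(tok))
```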
https://en.wikipedia.org/wiki?curid=61419
907,993
A bound system is typically at a lower energy level than its unbound constituents, and its mass must therefore be less than the total mass of its unbound constituents. For systems with low binding energies, this "lost" mass after binding may be fractionally small, whereas for systems with high binding energies, the missing mass may be an easily measurable fraction. This missing mass may be lost during the process of binding as energy in the form of heat or light, with the removed energy corresponding to the removed mass through Einstein's equation E = mc². In the process of binding, the constituents of the system might enter higher energy states of the nucleus/atom/molecule while retaining their mass, and because of this, it is necessary that this energy be removed from the system before its mass can decrease. Once the system cools to normal temperatures and returns to ground states regarding energy levels, it will contain less mass than when it first combined and was at high energy. This loss of heat represents the "mass deficit", and the heat itself retains the mass that was lost (from the point of view of the initial system). This mass will appear in any other system that absorbs the heat and gains thermal energy.
https://en.wikipedia.org/wiki?curid=125769
929,562
ADP cycling supplies the energy needed to do work in a biological system, the thermodynamic process of transferring energy from one source to another. There are two types of energy: potential energy and kinetic energy. Potential energy can be thought of as stored energy, or usable energy that is available to do work. Kinetic energy is the energy of an object as a result of its motion. The significance of ATP is in its ability to store potential energy within the phosphate bonds. The energy stored between these bonds can then be transferred to do work. For example, the transfer of energy from ATP to the protein myosin causes a conformational change when connecting to actin during muscle contraction.
https://en.wikipedia.org/wiki?curid=89226
940,183
The rough idea of these inapproximability results is to form a graph that represents a probabilistically checkable proof system for an NP-complete problem such as the Boolean satisfiability problem. In a probabilistically checkable proof system, a proof is represented as a sequence of bits. An instance of the satisfiability problem should have a valid proof if and only if it is satisfiable. The proof is checked by an algorithm that, after a polynomial-time computation on the input to the satisfiability problem, chooses to examine a small number of randomly chosen positions of the proof string. Depending on what values are found at that sample of bits, the checker will either accept or reject the proof, without looking at the rest of the bits. False negatives are not allowed: a valid proof must always be accepted. However, an invalid proof may sometimes mistakenly be accepted. For every invalid proof, the probability that the checker will accept it must be low.
https://en.wikipedia.org/wiki?curid=249254
945,611
All living organisms are the products of repeated rounds of cell growth and division. During this process, known as the cell cycle, a cell duplicates its contents and then divides in two. The purpose of the cell cycle is to accurately duplicate each organism's DNA and then divide the cell and its contents evenly between the two resulting cells. In eukaryotes, the cell cycle consists of four main stages: G1, during which a cell is metabolically active and continuously grows; S phase, during which DNA replication takes place; G2, during which cell growth continues and the cell synthesizes various proteins in preparation for division; and the M (mitosis) phase, during which the duplicated chromosomes (known as the sister chromatids) separate into two daughter nuclei, and the cell divides into two daughter cells, each with a full copy of DNA. Compared to the eukaryotic cell cycle, the prokaryotic cell cycle (known as binary fission) is relatively simple and quick: the chromosome replicates from the origin of replication, a new membrane is assembled, and the cell wall forms a septum which divides the cell into two.
https://en.wikipedia.org/wiki?curid=3730562
945,612
As the eukaryotic cell cycle is a complex process, eukaryotes have evolved a network of regulatory proteins, known as the cell cycle control system, which monitors and dictates the progression of the cell through the cell cycle. This system acts like a timer, or a clock, which sets a fixed amount of time for the cell to spend in each phase of the cell cycle, while at the same time it also responds to information received from the processes it controls. The cell cycle checkpoints play an important role in the control system by sensing defects that occur during essential processes such as DNA replication or chromosome segregation, and inducing a cell cycle arrest in response until the defects are repaired. The main mechanism of action of the cell cycle checkpoints is through the regulation of the activities of a family of protein kinases known as the cyclin-dependent kinases (CDKs), which bind to different classes of regulator proteins known as cyclins, with specific cyclin-CDK complexes being formed and activated at different phases of the cell cycle. Those complexes, in turn, activate different downstream targets to promote or prevent cell cycle progression.
https://en.wikipedia.org/wiki?curid=3730562
947,096
Electric power is usually measured in kilowatts (kW). Electric energy is usually measured in kilowatt-hours (kW·h). For example, if an electric load that draws 1.5 kW of electric power is operated for 8 hours, it uses 12 kW·h of electric energy. In the United States, a residential electric customer is charged based on the amount of electric energy used. On the customer bill, the electric utility states the amount of electric energy, in kilowatt-hours (kW·h), that the customer used since the last bill, and the cost of the energy per kilowatt-hour (kW·h).
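The arithmetic in the example above is simply power multiplied by time, and the bill is energy multiplied by the unit price; a tiny sketch (the price per kWh is an assumed illustrative figure):

```python
power_kw = 1.5          # load drawing 1.5 kW
hours = 8               # operated for 8 hours
price_per_kwh = 0.15    # assumed tariff, $/kWh

energy_kwh = power_kw * hours          # 1.5 kW * 8 h = 12 kWh
cost = energy_kwh * price_per_kwh      # 12 kWh * $0.15/kWh = $1.80
print(f"{energy_kwh} kWh, ${cost:.2f}")
```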
https://en.wikipedia.org/wiki?curid=2189642
965,160
In quantum mechanics, an energy level is degenerate if it corresponds to two or more different measurable states of a quantum system. Conversely, two or more different states of a quantum mechanical system are said to be degenerate if they give the same value of energy upon measurement. The number of different states corresponding to a particular energy level is known as the degree of degeneracy of the level. It is represented mathematically by the Hamiltonian for the system having more than one linearly independent eigenstate with the same energy eigenvalue. When this is the case, energy alone is not enough to characterize what state the system is in, and other quantum numbers are needed to characterize the exact state when distinction is desired. In classical mechanics, this can be understood in terms of different possible trajectories corresponding to the same energy.
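A quick numerical illustration (not from the source; the particle-in-a-2D-square-box energies are a standard textbook example chosen here): states with different quantum numbers can share the same energy eigenvalue, and the degree of degeneracy is simply the count of states at that level.

```python
from collections import defaultdict

# Particle in a 2D square box: E(nx, ny) is proportional to nx**2 + ny**2.
levels = defaultdict(list)
for nx in range(1, 5):
    for ny in range(1, 5):
        levels[nx**2 + ny**2].append((nx, ny))

for energy in sorted(levels):
    states = levels[energy]
    print(f"E ~ {energy:2d}: degeneracy {len(states)}  states {states}")
# e.g. E ~ 5 has degeneracy 2: (1,2) and (2,1) are distinct states with the same energy.
```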
https://en.wikipedia.org/wiki?curid=2292483
1,011,516
In statistics, imputation is the process of replacing missing data with substituted values. When substituting for a data point, it is known as "unit imputation"; when substituting for a component of a data point, it is known as "item imputation". There are three main problems that missing data causes: missing data can introduce a substantial amount of bias, make the handling and analysis of the data more arduous, and create reductions in efficiency. Because missing data can create problems for analyzing data, imputation is seen as a way to avoid the pitfalls involved with listwise deletion of cases that have missing values. That is to say, when one or more values are missing for a case, most statistical packages default to discarding any case that has a missing value, which may introduce bias or affect the representativeness of the results. Imputation preserves all cases by replacing missing data with an estimated value based on other available information. Once all missing values have been imputed, the data set can then be analysed using standard techniques for complete data. Many approaches to accounting for missing data have been proposed, but the majority of them introduce bias. A few of the well-known attempts to deal with missing data include: hot deck and cold deck imputation; listwise and pairwise deletion; mean imputation; non-negative matrix factorization; regression imputation; last observation carried forward; stochastic imputation; and multiple imputation.
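A minimal sketch of item imputation in pandas (the small DataFrame is an invented example; mean imputation is shown only because it is the simplest of the methods listed above, not because it is recommended, and as the passage notes, naive methods can introduce bias):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":    [25, 32, np.nan, 41, 29],
    "income": [38_000, np.nan, 52_000, 61_000, np.nan],
})

# Listwise deletion would keep only the rows with no missing values:
print(df.dropna())                 # only 2 of 5 cases survive

# Mean (item) imputation keeps all cases, filling each gap with the column mean:
imputed = df.fillna(df.mean())
print(imputed)
```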
https://en.wikipedia.org/wiki?curid=1309220
1,013,991
A noise signal is typically considered as a linear addition to a useful information signal. Typical signal quality measures involving noise are signal-to-noise ratio (SNR or "S"/"N"), signal-to-quantization-noise ratio (SQNR) in analog-to-digital conversion and compression, peak signal-to-noise ratio (PSNR) in image and video coding, and noise figure in cascaded amplifiers. In a carrier-modulated passband analogue communication system, a certain carrier-to-noise ratio (CNR) at the radio receiver input would result in a certain signal-to-noise ratio in the detected message signal. In a digital communications system, a certain "E"b/"N"0 (normalized signal-to-noise ratio) would result in a certain bit error rate. Telecommunication systems strive to increase the ratio of signal level to noise level in order to effectively transfer data. Noise in telecommunication systems is a product of both internal and external sources to the system.
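A short numerical sketch of the signal-to-noise ratio (the sine-wave signal and noise level are illustrative assumptions): SNR is the ratio of signal power to noise power, usually quoted in decibels.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 10_000)
signal = np.sin(2 * np.pi * 50 * t)                 # unit-amplitude 50 Hz tone
noise = 0.1 * rng.standard_normal(t.size)           # additive Gaussian noise

p_signal = np.mean(signal ** 2)                     # average signal power
p_noise = np.mean(noise ** 2)                       # average noise power
snr_db = 10 * np.log10(p_signal / p_noise)

print(f"SNR = {snr_db:.1f} dB")                     # roughly 17 dB for these settings
```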
https://en.wikipedia.org/wiki?curid=3966982
1,020,556
The classical formulation of continuous Hopfield networks can be understood as a special limiting case of the modern Hopfield networks with one hidden layer. Continuous Hopfield networks for neurons with graded response are typically described by dynamical equations and an energy function where formula_103, and formula_104 is the inverse of the activation function formula_105. This model is a special limit of the class of models called models A, with a particular choice of the Lagrangian functions that, through the definition of the activation functions as derivatives of the Lagrangians, leads to the corresponding activation functions. If we integrate out the hidden neurons, the system of dynamical equations reduces to the equations on the feature neurons with formula_106, and the general expression for the energy reduces to the effective energy. While the first two terms of the effective energy are the same as those of the classical energy function, the third terms look superficially different: in the effective energy it is a Legendre transform of the Lagrangian for the feature neurons, while in the classical energy function the third term is an integral of the inverse activation function. Nevertheless, these two expressions are in fact equivalent, since the derivatives of a function and its Legendre transform are inverse functions of each other. The easiest way to see that these two terms are equal explicitly is to differentiate each one with respect to formula_107. The results of these differentiations for both expressions are equal to formula_108. Thus, the two expressions are equal up to an additive constant. This completes the proof that the classical Hopfield network with continuous states is a special limiting case of the modern Hopfield network with one hidden layer.
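To make the energy-descent property concrete, here is a small numpy sketch (an illustrative toy, not the source's formulation: tanh activations, a random symmetric weight matrix, and simple Euler integration are all assumptions) of continuous Hopfield dynamics whose energy should be non-increasing along trajectories.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
W = rng.standard_normal((n, n))
W = (W + W.T) / 2                       # symmetric weights
np.fill_diagonal(W, 0.0)

g = np.tanh                             # graded-response activation, g = L'
def lagrangian(x):                      # L(x) = sum log cosh(x_i), so dL/dx = tanh(x)
    return np.sum(np.log(np.cosh(x)))

def energy(x):
    gx = g(x)
    return np.sum(x * gx) - lagrangian(x) - 0.5 * gx @ W @ gx

x = rng.standard_normal(n)
dt, tau = 0.05, 1.0
for step in range(200):
    if step % 50 == 0:
        print(f"step {step:3d}  energy {energy(x):+.4f}")
    x = x + dt / tau * (W @ g(x) - x)   # Euler step of tau dx/dt = W g(x) - x

print(f"final     energy {energy(x):+.4f}")   # for small dt, energy only decreases
```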
https://en.wikipedia.org/wiki?curid=1170097
1,020,558
The neurons can be organized in layers so that every neuron in a given layer has the same activation function and the same dynamic time scale. If we assume that there are no horizontal connections between the neurons within a layer (lateral connections) and there are no skip-layer connections, the general fully connected network reduces to the architecture shown in Fig. 4. It has $N_\text{layer}$ layers of recurrently connected neurons with states described by continuous variables $x_i^A$ and activation functions $g_i^A$, where the index $A$ enumerates the layers of the network and the index $i$ enumerates individual neurons in that layer. The activation functions can depend on the activities of all the neurons in the layer. Every layer can have a different number of neurons $N_A$. These neurons are recurrently connected with the neurons in the preceding and the subsequent layers. The matrices of weights that connect neurons in layers $A$ and $B$ are denoted by $\xi^{(A,B)}_{ij}$ (the order of the upper indices for weights is the same as the order of the lower indices; in the example above this means that the index $i$ enumerates neurons in layer $A$, and the index $j$ enumerates neurons in layer $B$). The feedforward weights and the feedback weights are equal. The dynamical equations for the neurons' states can be written as
$$\tau_A \frac{dx_i^A}{dt} = \sum_{j=1}^{N_{A-1}} \xi^{(A,A-1)}_{ij} g_j^{A-1} + \sum_{j=1}^{N_{A+1}} \xi^{(A,A+1)}_{ij} g_j^{A+1} - x_i^A,$$
with appropriate boundary conditions at the first and last layers. The main difference of these equations from the conventional feedforward networks is the presence of the second term, which is responsible for the feedback from higher layers. These top-down signals help neurons in lower layers to decide on their response to the presented stimuli. Following the general recipe, it is convenient to introduce a Lagrangian function $L^A$ for the $A$-th hidden layer, which depends on the activities of all the neurons in that layer. The activation functions in that layer can be defined as partial derivatives of the Lagrangian, $g_i^A = \partial L^A / \partial x_i^A$. With these definitions the energy (Lyapunov) function is given by
$$E = \sum_{A=1}^{N_\text{layer}} \Big[ \sum_{i=1}^{N_A} x_i^A g_i^A - L^{A}\Big] - \sum_{A=1}^{N_\text{layer}-1} \sum_{i=1}^{N_{A+1}} \sum_{j=1}^{N_A} g_i^{A+1} \xi^{(A+1,A)}_{ij} g_j^A.$$
If the Lagrangian functions, or equivalently the activation functions, are chosen in such a way that the Hessians for each layer are positive semi-definite and the overall energy is bounded from below, this system is guaranteed to converge to a fixed point attractor state. The temporal derivative of this energy function is given by
$$\frac{dE}{dt} = -\sum_{A=1}^{N_\text{layer}} \tau_A \sum_{i,j=1}^{N_A} \frac{dx_j^A}{dt} \frac{\partial^2 L^{A}}{\partial x_j^{A} \partial x_i^{A}} \frac{dx_i^A}{dt} \leq 0.$$
Thus, the hierarchical layered network is indeed an attractor network with a global energy function. This network is described by a hierarchical set of synaptic weights that can be learned for each specific problem.
https://en.wikipedia.org/wiki?curid=1170097
1,025,061
In quantum computing, quantum supremacy or quantum advantage is the goal of demonstrating that a programmable quantum device can solve a problem that no classical computer can solve in any feasible amount of time (irrespective of the usefulness of the problem). Conceptually, quantum supremacy involves both the engineering task of building a powerful quantum computer and the computational-complexity-theoretic task of finding a problem that can be solved by that quantum computer and has a superpolynomial speedup over the best known or possible classical algorithm for that task. The term was coined by John Preskill in 2012, but the concept of a quantum computational advantage, specifically for simulating quantum systems, dates back to Yuri Manin's (1980) and Richard Feynman's (1981) proposals of quantum computing. Examples of proposals to demonstrate quantum supremacy include the boson sampling proposal of Aaronson and Arkhipov, D-Wave's specialized frustrated cluster loop problems, and sampling the output of random quantum circuits.
https://en.wikipedia.org/wiki?curid=54452801
1,025,064
In 1994, further progress toward quantum supremacy was made when Peter Shor formulated Shor's algorithm, a method for factoring integers in polynomial time on a quantum computer. Later, in 1995, Christopher Monroe and David Wineland published their paper, “Demonstration of a Fundamental Quantum Logic Gate”, marking the first demonstration of a quantum logic gate, specifically the two-bit "controlled-NOT". In 1996, Lov Grover sparked interest in building a quantum computer after publishing his algorithm, Grover's algorithm, in his paper, “A fast quantum mechanical algorithm for database search”. In 1998, Jonathan A. Jones and Michele Mosca published “Implementation of a Quantum Algorithm to Solve Deutsch's Problem on a Nuclear Magnetic Resonance Quantum Computer”, marking the first demonstration of a quantum algorithm.
https://en.wikipedia.org/wiki?curid=54452801
1,025,079
The best known algorithm for simulating an arbitrary random quantum circuit requires an amount of time that scales exponentially with the number of qubits, leading one group to estimate that around 50 qubits could be enough to demonstrate quantum supremacy. Bouland, Fefferman, Nirkhe and Vazirani gave, in 2018, theoretical evidence that efficiently simulating a random quantum circuit would require a collapse of the computational polynomial hierarchy. Google had announced its intention to demonstrate quantum supremacy by the end of 2017 by constructing and running a 49-qubit chip that would be able to sample distributions inaccessible to any current classical computers in a reasonable amount of time. The largest universal quantum circuit simulator running on classical supercomputers at the time was able to simulate 48 qubits. But for particular kinds of circuits, larger quantum circuit simulations with 56 qubits are possible. This may require increasing the number of qubits to demonstrate quantum supremacy. On October 23, 2019, Google published the results of this quantum supremacy experiment in the Nature article, “Quantum Supremacy Using a Programmable Superconducting Processor”, in which they developed a new 53-qubit processor, named “Sycamore”, that is capable of fast, high-fidelity quantum logic gates, in order to perform the benchmark testing. Google claims that their machine performed the target computation in 200 seconds, and estimated that the best classical algorithm would take 10,000 years on the world's fastest supercomputer to solve the same problem. IBM disputed this claim, saying that an improved classical algorithm should be able to solve that problem in two and a half days on that same supercomputer.
https://en.wikipedia.org/wiki?curid=54452801
1,035,058
Immunomagnetic cell sorting is also known as immunomagnetic cell separation, immunomagnetic cell enrichment, or magnetic-activated cell sorting, and is commonly known by the acronym MACS, which is a trademark of Miltenyi GmbH. Immunomagnetic cell sorting is based on the separation of bead-labelled cells as they pass through a magnetic field. A variety of companies offer different solutions for enrichment or depletion of cell populations. Immunomagnetic cell sorting provides a method for enriching a heterogeneous mixture of cells based on cell-surface protein expression (antigens). This technology is based on the attachment of small, inert, superparamagnetic particles to mAbs specific for antigens on the target cell population. Cells labelled with these antibody-bead conjugates are then separated via a column containing a ferromagnetic matrix. By applying a magnetic field to the matrix, the beads stick to the matrix inside the column and the bead-carrying cells are held back from passing through. Unlabelled cells can pass through the matrix and are collected in the flow-through. To elute the trapped cells from the column, the magnetic field is simply removed. Immunomagnetic cell sorting therefore enables different strategies for positive enrichment or depletion of cells. Immunomagnetic beads are small and usually do not interfere with downstream assays; however, for some applications it may be necessary to remove them. Using this separation method, scaling up the cell numbers does not significantly increase processing times, and the sterility of the sample is guaranteed if the cell sorting is performed inside a biosafety cabinet. On the other hand, this technique only allows cells to be separated based on a single marker, and it is not able to discriminate between different levels of protein expression (quantitative analysis). Immunomagnetic cell sorting has been shown to be beneficial when used with NPC (neural progenitor cell) cultures in particular, as it is easier to manage and causes minimal damage to live cells.
https://en.wikipedia.org/wiki?curid=22327978
1,041,561
Here and below, $\propto$ means proportional; a PDF is always scaled so that its integral over the whole space is one. This prior $p(x)$ was evolved in time by running the model and now is to be updated to account for new data. It is natural to assume that the error distribution of the data is known; data have to come with an error estimate, otherwise they are meaningless. Here, the data $d$ are assumed to have a Gaussian PDF with covariance $R$ and mean $Hx$, where $H$ is the so-called observation matrix. The covariance matrix $R$ describes the estimate of the error of the data; if the random errors in the entries of the data vector $d$ are independent, $R$ is diagonal and its diagonal entries are the squares of the standard deviation ("error size") of the error of the corresponding entries of the data vector $d$. The value $Hx$ is what the value of the data would be for the state $x$ in the absence of data errors. Then the probability density $p(d\mid x)$ of the data $d$ conditional on the system state $x$, called the data likelihood, is
$$p(d\mid x) \propto \exp\!\Big(-\tfrac{1}{2}(Hx - d)^{\mathsf T} R^{-1} (Hx - d)\Big).$$
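A tiny numpy sketch of evaluating this Gaussian data likelihood (illustrative only; the state dimension, observation matrix H, and covariance R are made-up values):

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5])            # model state
H = np.array([[1.0, 0.0, 0.0],           # observation matrix: we observe
              [0.0, 1.0, 0.0]])          # only the first two state components
d = np.array([1.2, 1.7])                 # observed data
R = np.diag([0.1**2, 0.2**2])            # data error covariance (independent errors)

residual = H @ x - d
# Unnormalized data likelihood p(d | x) proportional to exp(-0.5 * r^T R^{-1} r)
likelihood = np.exp(-0.5 * residual @ np.linalg.solve(R, residual))
print(likelihood)
```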
https://en.wikipedia.org/wiki?curid=9553738
1,065,756
Quantum probability provides a new way to explain human probability judgment errors including the conjunction and disjunction errors. A conjunction error occurs when a person judges the probability of a likely event L "and" an unlikely event U to be greater than the probability of the unlikely event U alone; a disjunction error occurs when a person judges the probability of a likely event L to be greater than the probability of the likely event L "or" an unlikely event U. Quantum probability theory is a generalization of Bayesian probability theory because it is based on a set of von Neumann axioms that relax some of the classic Kolmogorov axioms. The quantum model introduces a new fundamental concept to cognition—the compatibility versus incompatibility of questions and the effect this can have on the sequential order of judgments. Quantum probability provides a simple account of conjunction and disjunction errors as well as many other findings such as order effects on probability judgments.
https://en.wikipedia.org/wiki?curid=30138821
1,078,204
Torsional vibrations of drive systems usually result in a significant fluctuation of the rotational speed of the rotor of the driving electric motor. Such oscillations of the angular speed, superimposed on the average rotor rotational speed, cause more or less severe perturbation of the electromagnetic flux and thus additional oscillations of the electric currents in the motor windings. The generated electromagnetic torque is then also characterized by additional time-varying components which induce torsional vibrations of the drive system. As a result, mechanical vibrations of the drive system become coupled with the electrical vibrations of currents in the motor windings. Such a coupling is often complicated in character and thus computationally troublesome. For this reason, most authors have until now simplified the matter by treating mechanical vibrations of drive systems and electric current vibrations in the motor windings as mutually uncoupled. Mechanical engineers then applied the electromagnetic torques generated by the electric motors as ‘a priori’ assumed excitation functions of time or of the rotor-to-stator slip, usually based on numerous experimental measurements carried out for the given electric motor's dynamic behaviour. For this purpose, by means of measurement results, approximate formulas have been developed which describe the respective electromagnetic external excitations produced by the electric motor. The electricians, however, thoroughly modelled electric current flows in the electric motor windings, but usually reduced the mechanical drive system to one or, seldom, to at most a few rotating rigid bodies. In many cases, such simplifications yield sufficiently useful results for engineering applications, but very often they can lead to remarkable inaccuracies, since many qualitative dynamic properties of the mechanical systems, e.g. their mass distribution, torsional flexibility and damping effects, are neglected. Thus, the influence of the drive system's vibratory behaviour on the fluctuation of the electric machine rotor's angular speed, and in this way on the electric current oscillations in the rotor and stator windings, cannot be investigated with satisfactory precision.
https://en.wikipedia.org/wiki?curid=1203089
1,086,013
where formula_27 is the dimensionality of the subspace. The conservation law in this case is expressed by the unitarity of the S-matrix. In either case, the considerations assume a closed isolated system. This closed isolated system is a system with (1) a fixed energy formula_28, (2) a fixed number of particles formula_29, and (3) a state of equilibrium. If one considers a huge number of replicas of this system, one obtains what is called a "microcanonical ensemble". It is for this system that one postulates in quantum statistics the "fundamental postulate of equal a priori probabilities of an isolated system". This says that the isolated system in equilibrium occupies each of its accessible states with the same probability. This fundamental postulate therefore allows us to equate the a priori probability to the degeneracy of a system, i.e. to the number of different states with the same energy.
https://en.wikipedia.org/wiki?curid=10782759
1,161,243
Model–view–adapter (MVA) or mediating-controller MVC is a software architectural pattern and multitier architecture. In complex computer applications that present large amounts of data to users, developers often wish to separate data (model) and user interface (view) concerns so that changes to the user interface will not affect data handling and that the data can be reorganized without changing the user interface. MVA and traditional MVC both attempt to solve this same problem, but with two different styles of solution. Traditional MVC arranges model (e.g., data structures and storage), view (e.g., user interface), and controller (e.g., business logic) in a triangle, with model, view, and controller as vertices, so that some information flows between the model and views outside of the controller's direct control. The model–view–adapter solves this rather differently from the model–view–controller by arranging model, adapter or mediating controller and view linearly without any connections whatsoever directly between model and view.
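A minimal, hypothetical Python sketch of the linear arrangement may help: the model and the view never reference each other, and the adapter (the mediating controller) is the only component that sees both. All class and method names below are invented for illustration.

```python
class Model:
    """Holds data; knows nothing about any view."""
    def __init__(self):
        self._items = []

    def add(self, item):
        self._items.append(item)

    def items(self):
        return list(self._items)


class View:
    """Renders whatever it is given; knows nothing about the model."""
    def render(self, lines):
        for line in lines:
            print(f"* {line}")


class Adapter:
    """Mediating controller: the only component that talks to both sides."""
    def __init__(self, model, view):
        self._model = model
        self._view = view

    def user_added_item(self, text):
        self._model.add(text)                     # update the model
        self._view.render(self._model.items())    # push the new state to the view


adapter = Adapter(Model(), View())
adapter.user_added_item("first entry")
```

Swapping in a different view (say, a GUI widget) would then touch only the adapter, which is exactly the decoupling the pattern aims for.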
https://en.wikipedia.org/wiki?curid=16570597
1,178,362
Data-intensive is used to describe applications that are I/O bound or with a need to process large volumes of data. Such applications devote most of their processing time to I/O and movement and manipulation of data. Parallel processing of data-intensive applications typically involves partitioning or subdividing the data into multiple segments which can be processed independently using the same executable application program in parallel on an appropriate computing platform, then reassembling the results to produce the completed output data. The greater the aggregate distribution of the data, the more benefit there is in parallel processing of the data. Data-intensive processing requirements normally scale linearly according to the size of the data and are very amenable to straightforward parallelization. The fundamental challenges for data-intensive computing are managing and processing exponentially growing data volumes, significantly reducing associated data analysis cycles to support practical, timely applications, and developing new algorithms which can scale to search and process massive amounts of data. Researchers coined the term BORPS for "billions of records per second" to measure record processing speed in a way analogous to how the term MIPS applies to describe computers' processing speed.
https://en.wikipedia.org/wiki?curid=31107479
1,178,371
The programming model for MapReduce architecture is a simple abstraction where the computation takes a set of input key–value pairs associated with the input data and produces a set of output key–value pairs. In the Map phase, the input data is partitioned into input splits and assigned to Map tasks associated with processing nodes in the cluster. The Map task typically executes on the same node containing its assigned partition of data in the cluster. These Map tasks perform user-specified computations on each input key–value pair from the partition of input data assigned to the task, and generate a set of intermediate results for each key. The shuffle and sort phase then takes the intermediate data generated by each Map task, sorts this data with intermediate data from other nodes, divides this data into regions to be processed by the reduce tasks, and distributes this data as needed to nodes where the Reduce tasks will execute. The Reduce tasks perform additional user-specified operations on the intermediate data, possibly merging values associated with a key to a smaller set of values, to produce the output data. For more complex data processing procedures, multiple MapReduce calls may be linked together in sequence.
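The key–value abstraction can be illustrated with the classic word-count example; the single-process sketch below only mimics the Map, shuffle-and-sort, and Reduce phases and is not distributed code.

```python
from collections import defaultdict

def map_phase(document_id, text):
    """User-specified Map: emit an intermediate (word, 1) pair per word."""
    for word in text.lower().split():
        yield word, 1

def reduce_phase(word, counts):
    """User-specified Reduce: merge all values associated with one key."""
    return word, sum(counts)

# "Input splits": (key, value) pairs fed to the Map tasks.
splits = [("doc1", "the quick brown fox"), ("doc2", "the lazy dog")]

# Shuffle and sort: group intermediate values by key.
grouped = defaultdict(list)
for doc_id, text in splits:
    for word, count in map_phase(doc_id, text):
        grouped[word].append(count)

output = dict(reduce_phase(w, c) for w, c in sorted(grouped.items()))
print(output)   # {'brown': 1, 'dog': 1, 'fox': 1, 'lazy': 1, 'quick': 1, 'the': 2}
```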
https://en.wikipedia.org/wiki?curid=31107479
1,203,218
Direct ammonia fuel cells are researched for this exact reason, and recent studies have presented an integrated solar-based ammonia synthesis and fuel cell system. The solar component uses excess solar power to synthesize ammonia. This is done by using an ammonia electrolytic cell (AEC) in combination with a proton exchange membrane (PEM) fuel cell. When a dip in solar power occurs, a direct ammonia fuel cell kicks into action, providing the missing energy. This recent research (2020) is a clear example of efficient use of energy, achieved essentially by temporarily storing energy in ammonia and using it as a fuel. Energy stored in ammonia does not degrade over time, as is the case with batteries and flywheels. This provides long-term energy storage. This compact form of energy has the additional advantage that excess energy can easily be transported to other locations. This must be done with strict safety measures due to the toxicity of ammonia to humans. Further research needs to be done to complement this system with wind energy and hydro-power plants to create a hybrid system that limits interruptions in the power supply. It is also necessary to investigate the economic performance of the proposed system. Some scientists envision a new ammonia economy that is almost the same as the oil industry, but with the enormous advantage of inexhaustible carbon-free power. This so-called green ammonia is considered a potential fuel for super-large ships. South Korean shipbuilder DSME plans on commercializing these ships by 2025.
https://en.wikipedia.org/wiki?curid=34504569
1,206,068
In order to separate and define the photobaric and photothermal contributions in spongy samples (leaves, lichens) one uses the following properties of the photoacoustic signal: (1) At low frequencies (below roughly 100 Hz) the photobaric part of the photoacoustic signal may be quite large and the total signal decreases under the background light. The photobaric signal is obtained in principle from the difference of signals (the total signal minus the reference signal, after a correction to account for the energy storage). (2) At sufficiently high frequencies, however, the photobaric signal is very much attenuated in comparison with the photothermal component and can be neglected. Also, no photobaric signal can be observed even at low frequencies in a leaf with its inner air space filled with water. This is true also in live algal thalli, suspensions of microalgae and photosynthetic bacteria. This is because the photobaric signal depends on oxygen diffusion from the photosynthetic membranes to the air phase, and is largely attenuated as the diffusion distance in the aqueous medium increases. In all the above instances when no photobaric signal is observed one may determine the energy storage by comparing the photoacoustic signal obtained with the probing light alone, to the reference signal.
https://en.wikipedia.org/wiki?curid=35891052
1,280,517
The description in terms of the virtual scheduling algorithm is given by the ITU-T as follows: "The virtual scheduling algorithm updates a Theoretical Arrival Time (TAT), which is the 'nominal' arrival time of the cell assuming cells are sent equally spaced at an emission interval of "T" corresponding to the cell rate "Λ" [= 1/"T"] when the source is active. If the actual arrival time of a cell is not 'too early' relative to the "TAT" and tolerance "τ" associated to the cell rate, i.e. if the actual arrival time is after its theoretical arrive time minus the limit value (t > "TAT" – "τ"), then the cell is conforming; otherwise, the cell is nonconforming" . If the cell is nonconforming then "TAT" is left unchanged. If the cell is conforming, and arrived before its TAT (equivalent to the bucket not being empty but being less than the limit value), then the next cell's "TAT" is simply "TAT" + "T". However, if a cell arrives after its "TAT", then the "TAT" for the next cell is calculated from this cell's arrival time, not its "TAT". This prevents credit building up when there is a gap in the transmission (equivalent to the bucket becoming less than empty).
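The virtual scheduling description above translates almost line for line into code; the sketch below is an illustrative rendering (not taken from the ITU-T text), with names mirroring TAT, T and τ.

```python
def make_virtual_scheduler(T, tau):
    """Virtual scheduling form of the GCRA.
    T = emission interval (1 / cell rate), tau = tolerance.
    Returns a function that classifies each cell by its arrival time."""
    state = {"TAT": None}   # theoretical arrival time of the next cell

    def arrival(t):
        if state["TAT"] is None:        # first cell initializes TAT
            state["TAT"] = t
        if t > state["TAT"] - tau:      # not "too early": conforming
            # The next TAT counts from TAT or from t, whichever is later,
            # so no credit builds up across gaps in the transmission.
            state["TAT"] = max(state["TAT"], t) + T
            return "conforming"
        return "nonconforming"          # TAT left unchanged

    return arrival

check = make_virtual_scheduler(T=10, tau=2)
for t in (0, 10, 15, 16, 100):
    print(t, check(t))   # the cells at t=15 and t=16 arrive too early
```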
https://en.wikipedia.org/wiki?curid=1616583
1,286,657
In all approaches to quantum computing, it is important to know whether a task under consideration can be carried out efficiently by a classical computer. An algorithm might be described in the language of quantum mechanics, but upon closer analysis, revealed to be implementable using only classical resources. Such an algorithm would not be taking full advantage of the extra possibilities made available by quantum physics. In the theory of quantum computation using finite-dimensional Hilbert spaces, the Gottesman–Knill theorem demonstrates that there exists a set of quantum processes that can be emulated efficiently on a classical computer. Generalizing this theorem to the continuous-variable case, it can be shown that, likewise, a class of continuous-variable quantum computations can be simulated using only classical analog computations. This class includes, in fact, some computational tasks that use quantum entanglement. When the Wigner quasiprobability representations of all the quantities—states, time evolutions "and" measurements—involved in a computation are nonnegative, then they can be interpreted as ordinary probability distributions, indicating that the computation can be modeled as an essentially classical one. This type of construction can be thought of as a continuum generalization of the Spekkens toy model.
https://en.wikipedia.org/wiki?curid=54782330
1,288,578
In education, a data system is a computer system that aims to provide educators with student data to help solve educational problems. Examples of data systems include Student Information Systems (SISs), assessment systems, Instructional Management Systems (IMSs), and data-warehousing systems, but distinctions between different types of data systems are blurring as these separate systems begin to serve more of the same functions. Data systems that present data to educators in an over-the-counter data format embed labels, supplemental documentation, and a help system, and make key package/display and content decisions, in order to improve the accuracy of data system users’ data analyses.
https://en.wikipedia.org/wiki?curid=24770596
1,288,711
Schreiber has used small molecules to study three specific areas of biology, and has then extended this approach through the more general application of small molecules in biomedical research. Academic screening centers have been created that emulate the Harvard Institute of Chemistry and Cell Biology and the Broad Institute; in the U.S., there has been a nationwide effort to expand this capability via the government-sponsored NIH Road Map. Chemistry departments have changed their names to include the term chemical biology and new journals have been introduced (Cell Chemical Biology, ChemBioChem, Nature Chemical Biology, ACS Chemical Biology) to cover the field. Schreiber has been involved in the founding of numerous biopharmaceutical companies whose research relies on chemical biology: Vertex Pharmaceuticals, Inc. (VRTX), Ariad Pharmaceuticals, Inc. (ARIA), Infinity Pharmaceuticals, Inc. (INFI), Forma Therapeutics, H3 Biomedicine, Jnana Therapeutics, and Kojin Therapeutics. These companies have produced new therapeutics in several disease areas, including cystic fibrosis and cancer.
https://en.wikipedia.org/wiki?curid=4433942
1,309,044
While gluons are massless, they still possess energy – chromodynamic binding energy. In this way, they are similar to photons, which are also massless particles carrying energy – photon energy. The amount of energy per single gluon, or "gluon energy", cannot be calculated. Unlike photon energy, which is quantifiable, described by the Planck–Einstein relation and depends on a single variable (the photon's frequency), no formula exists for the quantity of energy carried by each gluon. While the effects of a single photon can be observed, single gluons have not been observed outside of a hadron. Due to the mathematical complexity of quantum chromodynamics and the somewhat chaotic structure of hadrons, which are composed of gluons, valence quarks, sea quarks and other virtual particles, it is not even measurable how many gluons exist at a given moment inside a hadron. Additionally, not all of the QCD binding energy is gluon energy, but rather, some of it comes from the kinetic energy of the hadron's constituents. Therefore, only the total QCD binding energy per hadron can be stated. However, in the future, studies into quark–gluon plasma might be able to overcome this.
https://en.wikipedia.org/wiki?curid=22472780
1,343,432
These diagrammatic languages can be traced back to Penrose graphical notation, developed in the early 1970s. Diagrammatic reasoning has been used before in quantum information science in the quantum circuit model, however, in categorical quantum mechanics primitive gates like the CNOT-gate arise as composites of more basic algebras, resulting in a much more compact calculus. In particular, the ZX-calculus has sprung forth from categorical quantum mechanics as a diagrammatic counterpart to conventional linear algebraic reasoning about quantum gates. The ZX-calculus consists of a set of generators representing the common Pauli quantum gates and the Hadamard gate equipped with a set of graphical rewrite rules governing their interaction. Although a standard set of rewrite rules has not yet been established, some versions have been proven to be "complete", meaning that any equation that holds between two quantum circuits represented as diagrams can be proven using the rewrite rules. The ZX-calculus has been used to study for instance measurement-based quantum computing.
https://en.wikipedia.org/wiki?curid=28267626
1,366,075
where formula_8 is the pair potential for interaction between particles, and formula_9 is the external-field potential. By integration over part of the variables, the Liouville equation can be transformed into a chain of equations where the first equation connects the evolution of one-particle probability density function with the two-particle probability density function, second equation connects the two-particle probability density function with the three-particle probability density function, and generally the "s"-th equation connects the "s"-particle probability density function
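For reference, a commonly quoted form of the "s"-th equation in this chain (the BBGKY hierarchy) is sketched below, with a pair potential Φ and an external potential U standing in for the formula_N placeholders above; sign and normalization conventions vary between textbooks.

```latex
\left[ \frac{\partial}{\partial t}
  + \sum_{i=1}^{s} \frac{\mathbf{p}_i}{m}\cdot\frac{\partial}{\partial \mathbf{q}_i}
  - \sum_{i=1}^{s} \frac{\partial}{\partial \mathbf{q}_i}
      \Big( U(\mathbf{q}_i) + \sum_{j\neq i}^{s} \Phi(\mathbf{q}_i-\mathbf{q}_j) \Big)
      \cdot \frac{\partial}{\partial \mathbf{p}_i} \right] f_s
  = (N-s) \sum_{i=1}^{s} \int
      \frac{\partial \Phi(\mathbf{q}_i-\mathbf{q}_{s+1})}{\partial \mathbf{q}_i}
      \cdot \frac{\partial f_{s+1}}{\partial \mathbf{p}_i}
      \, d\mathbf{q}_{s+1}\, d\mathbf{p}_{s+1}
```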
https://en.wikipedia.org/wiki?curid=5703638
1,386,817
A “system", therefore, has implicit as well as explicit definition of boundaries to which the systematic process of hazard identification, hazard analysis and control is applied. The system can range in complexity from a crewed spacecraft to an autonomous machine tool. The system safety concept helps the system designer(s) to model, analyse, gain awareness about, understand and eliminate the hazards, and apply controls to achieve an acceptable level of safety. Ineffective decision making in safety matters is regarded as the first step in the sequence of hazardous flow of events in the "Swiss cheese" model of accident causation. Communications regarding system risk have an important role to play in correcting risk perceptions by creating, analysing and understanding information model to show what factors create and control the hazardous process. For almost any system, product, or service, the most effective means of limiting product liability and accident risks is to implement an organized system safety function, beginning in the conceptual design phase and continuing through to its development, fabrication, testing, production, use and ultimate disposal. The aim of the system safety concept is to gain assurance that a system and associated functionality behaves safely and is safe to operate. This assurance is necessary. Technological advances in the past have produced positive as well as negative effects.
https://en.wikipedia.org/wiki?curid=10708254
1,403,828
Applying classical methods of machine learning to the study of quantum systems is the focus of an emergent area of physics research. A basic example of this is quantum state tomography, where a quantum state is learned from measurement. Other examples include learning Hamiltonians, learning quantum phase transitions, and automatically generating new quantum experiments. Classical machine learning is effective at processing large amounts of experimental or calculated data in order to characterize an unknown quantum system, making its application useful in contexts including quantum information theory, quantum technologies development, and computational materials design. In this context, it can be used for example as a tool to interpolate pre-calculated interatomic potentials or directly solving the Schrödinger equation with a variational method.
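As a minimal, illustrative instance of "learning a quantum state from measurement", the sketch below performs linear-inversion tomography of a single qubit from (here noiseless) Pauli expectation values; it is only the baseline that the machine-learning approaches cited above generalize.

```python
import numpy as np

# Pauli matrices
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# "Measured" expectation values for the state |+>; in practice these would be
# estimated from repeated measurements and carry statistical noise.
ex, ey, ez = 1.0, 0.0, 0.0

# Linear inversion: rho = (I + <X> X + <Y> Y + <Z> Z) / 2
rho = (I + ex * X + ey * Y + ez * Z) / 2
print(np.round(rho, 3))   # [[0.5 0.5], [0.5 0.5]] -- the |+><+| projector
```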
https://en.wikipedia.org/wiki?curid=61373032
1,424,288
AI-generated "synthetic data" can be another option to generate test data. AI-powered synthetic data generators learn the patterns and qualities of a sample database. Once the training of the AI algorithm has taken place, it can produce as much or as little test data as defined. AI-generated synthetic data needs additional privacy measures to prevent the algorithm from overfitting. Some commercially available synthetic data generators come with additional privacy and accuracy controls. The amount of data to be tested is determined or limited by considerations such as time, cost and quality. Time to produce, cost to produce and quality of the test data, and efficiency.
https://en.wikipedia.org/wiki?curid=6476885
1,431,664
A polyharmonic equation is a partial differential equation of the form formula_38 for any natural number formula_39, where formula_40 is the Laplace operator. For example, the biharmonic equation is formula_41 and the triharmonic equation is formula_42. All the polyharmonic radial basis functions are solutions of a polyharmonic equation (or more accurately, a modified polyharmonic equation with a Dirac delta function on the right hand side instead of 0). For example, the thin plate radial basis function is a solution of the modified 2-dimensional biharmonic equation. Applying the 2D Laplace operator (formula_43) to the thin plate radial basis function formula_44 either by hand or using a computer algebra system shows that formula_45. Applying the Laplace operator to formula_46 (this is formula_47) yields 0. But 0 is not exactly correct. To see this, replace formula_48 with formula_49 (where formula_50 is some small number tending to 0). The Laplace operator applied to formula_51 yields formula_52. For formula_53 the right hand side of this equation approaches infinity as formula_50 approaches 0. For any other formula_55, the right hand side approaches 0 as formula_50 approaches 0. This indicates that the right hand side is a Dirac delta function. A computer algebra system will show that
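The two Laplacian computations described can be checked with a computer algebra system; the SymPy sketch below assumes the thin plate radial basis function is r² log r (the formula_NN placeholders hide the exact symbols used in the article).

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
r2 = x**2 + y**2                       # r^2
ftp = r2 * sp.log(sp.sqrt(r2))         # thin plate spline: r^2 * log(r)

def laplacian(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

first = sp.simplify(laplacian(ftp))
print(first)    # 2*log(x**2 + y**2) + 4, i.e. 4*log(r) + 4

second = sp.simplify(laplacian(first))
print(second)   # 0 away from the origin (a Dirac delta term hides at r = 0)
```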
https://en.wikipedia.org/wiki?curid=6160804
1,437,664
Both superconductivity and superinsulation rest on the pairing of conduction electrons into Cooper pairs. In superconductors, all the pairs move coherently, allowing for the electric current without resistance. In superinsulators, both Cooper pairs and normal excitations are confined and the electric current cannot flow. A mechanism behind superinsulation is the proliferation of magnetic monopoles at low temperatures. In two dimensions (2D), magnetic monopoles are quantum tunneling events (instantons) that are often referred to as monopole “plasma”. In three dimensions (3D), monopoles form a Bose condensate. The monopole plasma or monopole condensate squeezes Faraday's electric field lines into thin electric flux filaments or strings dual to Abrikosov vortices in superconductors. Cooper pairs of opposite charges at the ends of these electric strings feel an attractive linear potential. When the corresponding string tension is large, it is energetically favorable to pull out of the vacuum many charge-anticharge pairs and to form many short strings rather than to continue stretching the original one. As a consequence, only neutral “electric pions” exist as asymptotic states and electric conduction is absent. This mechanism is a single-color version of the confinement mechanism that binds quarks into hadrons. Because the electric forces are much weaker than the strong forces of particle physics, the typical size of “electric pions” well exceeds the size of the corresponding elementary particles. This implies that, by preparing sufficiently small samples, one can peer inside an “electric pion,” where electric strings are loose and Coulomb interactions are screened, so that electric charges are effectively unbound and move as if they were in a metal. The low-temperature saturation of the resistance to metallic behavior has been observed in TiN films with small lateral dimensions.
https://en.wikipedia.org/wiki?curid=16864252
1,437,693
Master Data Services allows the master data to be categorized by hierarchical relationships, such as employee data are a subtype of organization data. Hierarchies are generated by relating data attributes. Data can be automatically categorized using rules, and the categories are introspected programmatically. Master Data Services can also expose the data as Microsoft SQL Server views, which can be pulled by any SQL-compatible client. It uses a role-based access control system to restrict access to the data. The views are generated dynamically, so they contain the latest data entities in the master hub. It can also push out the data by writing to some external journals. Master Data Services also includes a web-based UI for viewing and managing the data. It uses ASP.NET in the back-end. The Silverlight front-end was replaced with HTML5 in SQL Server 2019.
https://en.wikipedia.org/wiki?curid=13430116
1,463,362
The energy of a nucleon in a nucleus is its rest mass energy minus a binding energy. In addition to this, there is an energy due to degeneracy: for instance, a nucleon with energy "E" will be forced to a higher energy "E′" if all the lower energy states are filled. This is because nucleons are fermions and obey Fermi–Dirac statistics. The work done in putting this nucleon to a higher energy level results in a pressure, which is the degeneracy pressure. When the effective binding energy, or Fermi energy, reaches zero, adding a nucleon of the same isospin to the nucleus is not possible, as the new nucleon would have a negative effective binding energy — i.e. it is more energetically favourable (the system will have the lowest overall energy) for the nucleon to be created outside the nucleus. This defines the particle drip point for that species.
https://en.wikipedia.org/wiki?curid=26850357
1,471,503
This is easy in the CA case because the relation ""a" commutes with "b"" is an equivalence relation on the non-identity elements. So the elements break up into equivalence classes, such that each equivalence class is the set of non-identity elements of a maximal abelian subgroup. The normalizers of these maximal abelian subgroups turn out to be exactly the maximal proper subgroups of "G". These normalizers are Frobenius groups whose character theory is reasonably transparent, and well-suited to manipulations involving character induction. Also, the set of prime divisors of |"G"| is partitioned according to the primes which divide the orders of the distinct conjugacy classes of maximal abelian subgroups of |"G"|. This pattern of partitioning the prime divisors of |"G"| according to conjugacy classes of certain Hall subgroups (a Hall subgroup is one whose order and index are relatively prime) which correspond to the maximal subgroups of "G" (up to conjugacy) is repeated in both the proof of the Feit–Hall–Thompson CN-theorem and in the proof of the Feit–Thompson odd-order theorem. Each maximal subgroup "M" has a certain nilpotent Hall subgroup "M" with normalizer contained in "M", whose order is divisible by certain primes forming a set σ("M"). Two maximal subgroups are conjugate if and only if the sets σ("M") are the same, and if they are not conjugate then the sets σ("M") are disjoint. Every prime dividing the order of "G" occurs in some set σ("M"). So the primes dividing the order of "G" are partitioned into equivalence classes corresponding to the conjugacy classes of maximal subgroups. The proof of the CN-case is already considerably more difficult than the CA-case: the main extra problem is to prove that two different Sylow subgroups intersect in the identity. This part of the proof of the odd-order theorem takes over 100 journal pages. A key step is the proof of the Thompson uniqueness theorem, stating that abelian subgroups of normal rank at least 3 are contained in a unique maximal subgroup, which means that the primes "p" for which the Sylow "p"-subgroups have normal rank at most 2 need to be considered separately. Bender later simplified the proof of the uniqueness theorem using Bender's method. Whereas in the CN-case, the resulting maximal subgroups "M" are still Frobenius groups, the maximal subgroups that occur in the proof of the odd-order theorem need no longer have this structure, and the analysis of their structure and interplay produces 5 possible types of maximal subgroups, called types I, II, III, IV, V. Type I subgroups are of "Frobenius type", a slight generalization of Frobenius group, and in fact later on in the proof are shown to be Frobenius groups. They have the structure "M"⋊"U" where "M" is the largest normal nilpotent Hall subgroup, and "U" has a subgroup "U" with the same exponent such that "M"⋊"U" is a Frobenius group with kernel "M". Types II, III, IV, V are all 3-step groups with structure "M"⋊"U"⋊"W", where "M"⋊"U" is the derived subgroup of "M". The subdivision into types II, III, IV and V depends on the structure and embedding of the subgroup "U" as follows:
https://en.wikipedia.org/wiki?curid=1656473
1,497,297
An adequate amount of training and validation data is required for machine learning. However, some very useful products, like satellite remote sensing data, only have decades of data going back to the 1970s. If one is interested in yearly data, then fewer than 50 samples are available. Such an amount of data may not be adequate. In a study of automatic classification of geological structures, the weakness of the model was the small training dataset, even with the help of data augmentation to increase the size of the dataset. Another study, of predicting streamflow, found that the accuracies depend on the availability of sufficient historical data; sufficient training data therefore determines the performance of machine learning. Inadequate training data may lead to a problem called overfitting. Overfitting causes inaccuracies in machine learning as the model learns about the noise and undesired details.
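A tiny, hypothetical numpy illustration of that overfitting risk: with only a handful of samples, a high-capacity model reproduces the noise rather than the underlying trend and extrapolates poorly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Only 8 "yearly" observations of a simple trend plus noise.
x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + rng.normal(scale=0.2, size=x_train.size)

# A degree-7 polynomial has enough parameters to pass through every point ...
overfit = np.polyfit(x_train, y_train, deg=7)
# ... while a straight line captures the underlying trend.
simple = np.polyfit(x_train, y_train, deg=1)

x_new = np.array([1.2])   # slightly outside the training range
print("degree-7 prediction:", np.polyval(overfit, x_new))  # typically far from 2.4
print("degree-1 prediction:", np.polyval(simple, x_new))   # close to 2.4
```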
https://en.wikipedia.org/wiki?curid=68735447
1,512,895
Ultrasound energy, simply known as ultrasound, is a type of mechanical energy called sound characterized by vibrating or moving particles within a medium. Ultrasound is distinguished by vibrations with a frequency greater than 20,000 Hz, compared to audible sounds that humans typically hear, with frequencies between 20 and 20,000 Hz. Ultrasound energy requires matter or a medium with particles to vibrate in order to conduct or propagate its energy. The energy generally travels through most media in the form of a wave in which particles are deformed or displaced by the energy and then reestablished after the energy passes. Types of waves include shear, surface, and longitudinal waves, with the latter being one of the most commonly used in biological applications. The characteristics of the traveling ultrasound energy greatly depend on the medium that it is traveling through. While ultrasound waves propagate through a medium, the amplitude of the wave is continually reduced or weakened with the distance it travels. This is known as attenuation and is due to the scattering or deflecting of energy signals as the wave propagates and the conversion of some of the energy to heat energy within the medium. A medium that changes the mechanical energy from the vibrations of the ultrasound energy into thermal or heat energy is called viscoelastic. The properties of ultrasound waves traveling through biological tissues have been extensively studied in recent years and implemented in many important medical tools.
https://en.wikipedia.org/wiki?curid=14580514
1,517,975
In operational numerical weather prediction, forecast models are used to predict future states of the atmosphere, based on how the climate system evolves with time from an initial state. The initial state provided as input to the forecast must consist of data values for a range of "prognostic" meteorological fields – that is, those fields which determine the future evolution of the model. Spatially varying fields are required in the form used by the model, for example at each intersection point on a regular grid of longitude and latitude circles, and initial data must be valid at a single time that corresponds to the present or the recent past. By contrast, the available observational data usually do not include all of the model's prognostic fields, and may include other additional fields; these data also have different spatial distribution from the forecast model grid, are valid over a range of times rather than a single time, and are also subject to observational error. The technique of data assimilation is therefore used to produce an "analysis" of the initial state, which is a best fit of the numerical model to the available data, taking into account the errors in the model and the data.
https://en.wikipedia.org/wiki?curid=25144285
1,533,349
In the socio-political manner, data archaeology involves the analysis of data assemblages to reveal their discursive and material socio-technical elements and apparatuses. This kind of analysis can reveal the politics of the data being analysed and thus that of the producing institution. Archaeology in this sense refers to the provenance of data. It involves mapping the sites, formats and infrastructures through which data flow and are altered or transformed over time. It has an interest in the life of data, and the politics that shapes the circulation of data. This serves to expose the key actors, practices and praxes at play and their roles. It can be accomplished in two steps. The first is accessing and assessing the technical stack of the data (this refers to the infrastructure and material technologies used to build/gather the data) to understand the physical representation of the data. The second is analysing the contextual stack of the data, which shapes how the data is constructed, used and analysed. This can be done via a variety of processes: interviews, analysing technical and policy documents, and investigating the effect of the data on a community or the institutional, financial, legal and material framing. This can be attained by creating a data assemblage.
https://en.wikipedia.org/wiki?curid=2588620
1,567,179
On a high level representation, the neurons can be viewed as connected to other neurons to form a neural network in one of three ways. A specific network can be represented as a physiologically (or anatomically) connected network and modeled that way. There are several approaches to this (see Ascoli, G.A. (2002) Sporns, O. (2007), Connectionism, Rumelhart, J. L., McClelland, J. L., and PDP Research Group (1986), Arbib, M. A. (2007)). Or, it can form a functional network that serves a certain function and modeled accordingly (Honey, C. J., Kotter, R., Breakspear, R., & Sporns, O. (2007), Arbib, M. A. (2007)). A third way is to hypothesize a theory of the functioning of the biological components of the neural system by a mathematical model, in the form of a set of mathematical equations. The variables of the equation are some or all of the neurobiological properties of the entity being modeled, such as the dimensions of the dendrite or the stimulation rate of action potential along the axon in a neuron. The mathematical equations are solved using computational techniques and the results are validated with either simulation or experimental processes. This approach to modeling is called computational neuroscience. This methodology is used to model components from the ionic level to system level of the brain. This method is applicable for modeling integrated system of biological components that carry information signal from one neuron to another via intermediate active neurons that can pass the signal through or create new or additional signals. The computational neuroscience approach is extensively used and is based on two generic models, one of cell membrane potential Goldman (1943) and Hodgkin and Katz (1949), and the other based on Hodgkin-Huxley model of action potential (information signal).
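A much-reduced example of the computational-neuroscience approach described (far simpler than the Hodgkin–Huxley model) is a leaky integrate-and-fire membrane-potential equation integrated with Euler steps; all parameter values below are illustrative rather than fitted to any real neuron.

```python
import numpy as np

# Illustrative leaky integrate-and-fire parameters (SI units).
tau_m, v_rest, v_reset, v_threshold = 20e-3, -70e-3, -75e-3, -54e-3   # s, V
r_m, i_inject = 1e7, 2.0e-9                                           # ohm, A
dt, t_end = 1e-4, 0.3                                                 # s

v = v_rest
spike_times = []
for step in range(int(t_end / dt)):
    # dV/dt = (-(V - V_rest) + R_m * I) / tau_m   (Euler integration)
    v += dt * (-(v - v_rest) + r_m * i_inject) / tau_m
    if v >= v_threshold:            # action potential: record a spike and reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {t_end * 1000:.0f} ms")
```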
https://en.wikipedia.org/wiki?curid=33818014
1,582,120
The use of data in modern society brings about new ways of understanding and measuring the world, but also brings with it certain concerns or issues. Data scholars attempt to bring some of these issues to light in their quest to be critical of data. Rob Kitchin identifies both technical and organizational issues of data, as well as some normative and ethical questions. Technical and organization issues concerning data range from the scope of datasets, access to the data, the quality of the data, the integration of the data, the application of analytics and ecological fallacies, as well as the skills and organizational capabilities of the research team. Some of the normative and ethical concerns addressed by Kitchin include surveillance through one's data (dataveillance), the privacy of one's data, the ownership of one's data, the security of one's data, anticipatory or corporate governance, and profiling individuals by their data. All of these concerns must be taken into account by scholars of data in their objective to be critical.
https://en.wikipedia.org/wiki?curid=51578025
1,612,667
The efficiency and exhaustivity of generators are also related to the data structures. Unlike previous methods, AEGIS was a list-processing generator. Compared to adjacency matrices, list data requires less memory. As no spectral data was interpreted in this system, the user needed to provide substructures as inputs. Structure generators can also vary based on the type of data used, such as HMBC, HSQC and other NMR data. LUCY is an open-source structure elucidation method based on the HMBC data of unknown molecules, and involves an exhaustive 2-step structure generation process where first all combinations of interpretations of HMBC signals are implemented in a connectivity matrix, which is then completed by a deterministic generator filling in missing bond information. This platform could generate structures for molecules of any size; however, molecular formulas with more than 30 heavy atoms are too time-consuming for practical applications. This limitation highlighted the need for a new CASE system. SENECA was developed to eliminate the shortcomings of LUCY. To overcome the limitations of the exhaustive approach, SENECA was developed as a stochastic method to find optimal solutions. The system comprises two stochastic methods: simulated annealing and genetic algorithms. First, a random structure is generated; then, its energy is calculated to evaluate the structure and its spectral properties. By transforming this structure into another structure, the process continues until the optimum energy is reached. In the generation, this transformation relies on equations based on Jean-Loup Faulon's rules. LSD (Logic for Structure Determination) is an important contribution from French scientists. The tool uses spectral data information such as HMBC and COSY data to generate all possible structures. LSD is an open source structure generator released under the General Public License (GPL). A well-known commercial CASE system, StrucEluc, also features an NMR-based generator. This tool is from ACD Labs and, notably, Mikhail Elyashberg, one of the developers of MASS. COCON is another NMR-based structure generator, relying on theoretical data sets for structure generation. Except for J-HMBC and J-COSY, all NMR data types can be used as inputs.
https://en.wikipedia.org/wiki?curid=66452088
1,615,923
NCP offers research in different branches of physics, such as particle physics, computational physics, astrophysics, cosmology, atmospheric physics, atomic, molecular, and optical physics, chemical physics, condensed matter physics, fluid dynamics, laser physics, mathematical physics, plasma physics, quantum field theory, nanophysics, and quantum information theory.
https://en.wikipedia.org/wiki?curid=23541193
1,618,339
Classical formulation of continuous Hopfield networks can be understood as a special limiting case of the Modern Hopfield networks with one hidden layer. Continuous Hopfield Networks for neurons with graded response are typically described by the dynamical equations and the energy function where formula_45, and formula_46 is the inverse of the activation function formula_47. This model is a special limit of the class of models that is called models A, with the following choice of the Lagrangian functions that, according to the definition (), leads to the activation functions If we integrate out the hidden neurons the system of equations () reduces to the equations on the feature neurons () with formula_48, and the general expression for the energy () reduces to the effective energy While the first two terms in equation () are the same as those in equation (), the third terms look superficially different. In equation () it is a Legendre transform of the Lagrangian for the feature neurons, while in () the third term is an integral of the inverse activation function. Nevertheless, these two expressions are in fact equivalent, since the derivatives of a function and its Legendre transform are inverse functions of each other. The easiest way to see that these two terms are equal explicitly is to differentiate each one with respect to formula_49. The results of these differentiations for both expressions are equal to formula_50. Thus, the two expressions are equal up to an additive constant. This completes the proof that the classical Hopfield network with continuous states is a special limiting case of the modern Hopfield network () with energy ().
https://en.wikipedia.org/wiki?curid=68440670
1,618,341
The neurons can be organized in layers so that every neuron in a given layer has the same activation function and the same dynamic time scale. If we assume that there are no horizontal connections between the neurons within the layer (lateral connections) and there are no skip-layer connections, the general fully connected network (), () reduces to the architecture shown in Fig.4. It has formula_65 layers of recurrently connected neurons with the states described by continuous variables formula_66 and the activation functions formula_67, index formula_68 enumerates the layers of the network, and index formula_4 enumerates individual neurons in that layer. The activation functions can depend on the activities of all the neurons in the layer. Every layer can have a different number of neurons formula_70. These neurons are recurrently connected with the neurons in the preceding and the subsequent layers. The matrices of weights that connect neurons in layers formula_68 and formula_72 are denoted by formula_73 (the order of the upper indices for weights is the same as the order of the lower indices, in the example above this means that the index formula_4 enumerates neurons in the layer formula_68, and index formula_76 enumerates neurons in the layer formula_72). The feedforward weights and the feedback weights are equal. The dynamical equations for the neurons' states can be written as \tau_A \frac{dx_i^A}{dt} = \sum\limits_{j=1}^{N_{A-1}} \xi^{(A, A-1)}_{ij} g_j^{A-1} + \sum\limits_{j=1}^{N_{A+1}} \xi^{(A, A+1)}_{ij} g_j^{A+1} - x_i^A, with boundary conditions setting the activities of the non-existent layers below the first and above the last to zero. The main difference of these equations from the conventional feedforward networks is the presence of the second term, which is responsible for the feedback from higher layers. These top-down signals help neurons in lower layers to decide on their response to the presented stimuli. Following the general recipe it is convenient to introduce a Lagrangian function formula_78 for the formula_68-th hidden layer, which depends on the activities of all the neurons in that layer. The activation functions in that layer can be defined as partial derivatives of the Lagrangian, g_i^A = \frac{\partial L^A}{\partial x_i^A}. With these definitions the energy (Lyapunov) function is given by E = \sum\limits_{A=1}^{N_\text{layer}} \Big[ \sum\limits_{i=1}^{N_A} x_i^A g_i^A - L^{A}\Big] - \sum\limits_{A=1}^{N_\text{layer}-1} \sum\limits_{i=1}^{N_{A+1}} \sum\limits_{j=1}^{N_A} g_i^{A+1} \xi^{(A+1,A)}_{ij} g_j^A. If the Lagrangian functions, or equivalently the activation functions, are chosen in such a way that the Hessians for each layer are positive semi-definite and the overall energy is bounded from below, this system is guaranteed to converge to a fixed point attractor state. The temporal derivative of this energy function is given by \frac{dE}{dt} = -\sum\limits_{A=1}^{N_\text{layer}} \tau_A \sum\limits_{i,j=1}^{N_A} \frac{dx_j^A}{dt} \frac{\partial^2 L^{A}}{\partial x_j^{A} \partial x_i^{A}} \frac{dx_i^A}{dt} \leq 0. Thus, the hierarchical layered network is indeed an attractor network with the global energy function. This network is described by a hierarchical set of synaptic weights that can be learned for each specific problem.
https://en.wikipedia.org/wiki?curid=68440670
1,678,011
In many control applications, trying to write a mathematical model of the plant is considered a hard task, requiring effort and time from the process and control engineers. This problem is overcome by "data-driven" methods, which fit a system model to the experimental data collected, choosing it from a specific model class. The control engineer can then exploit this model to design a proper controller for the system. However, it is still difficult to find a simple yet reliable model for a physical system that includes only those dynamics of the system that are of interest for the control specifications. The "direct" data-driven methods allow a controller, belonging to a given class, to be tuned without the need for an identified model of the system. In this way, one can also simply weight the process dynamics of interest inside the control cost function, and exclude those dynamics that are not of interest.
https://en.wikipedia.org/wiki?curid=56275884
1,704,172
In the "" paradigm, programmers provide an AI such as AlphaZero with an "objective function" that the programmers intend will encapsulate the goal or goals that the programmers wish the AI to accomplish. Such an AI later populates a (possibly implicit) internal "model" of its environment. This model encapsulates all the agent's beliefs about the world. The AI then creates and executes whatever plan is calculated to maximize the value of its objective function. For example, AlphaZero chess has a simple objective function of "+1 if AlphaZero wins, -1 if AlphaZero loses". During the game, AlphaZero attempts to execute whatever sequence of moves it judges most likely to give the maximum value of +1. Similarly, a reinforcement learning system can have a "reward function" that allows the programmers to shape the AI's desired behavior. An evolutionary algorithm's behavior is shaped by a "fitness function".
https://en.wikipedia.org/wiki?curid=64335737
1,705,435
Solar cells operate as quantum energy conversion devices, and are therefore subject to the thermodynamic efficiency limit. Photons with an energy below the band gap of the absorber material cannot generate an electron-hole pair, and so their energy is not converted to useful output and only generates heat if absorbed. For photons with an energy above the band gap energy, only a fraction of the energy above the band gap can be converted to useful output. When a photon of greater energy is absorbed, the excess energy above the band gap is converted to kinetic energy of the carriers. The excess kinetic energy is converted to heat through phonon interactions as the kinetic energy of the carriers slows to equilibrium velocity. Hence, the solar energy cannot be converted to electricity beyond a certain limit.
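A back-of-the-envelope sketch of these two single-photon loss channels (below-gap photons wasted, above-gap excess thermalized) is shown below; the band gap and photon energies are arbitrary example values, and this is not the full detailed-balance limit calculation.

```python
BAND_GAP_EV = 1.1   # roughly silicon's band gap, used here only as an example

def useful_energy_ev(photon_ev: float, gap_ev: float = BAND_GAP_EV) -> float:
    """Energy per absorbed photon that can appear as electrical output:
    zero below the gap; at most the gap energy above it (the excess thermalizes)."""
    return 0.0 if photon_ev < gap_ev else gap_ev

for e in (0.8, 1.1, 2.0, 3.0):        # example photon energies in eV
    u = useful_energy_ev(e)
    print(f"{e:.1f} eV photon -> {u:.1f} eV useful, {e - u:.1f} eV lost as heat")
```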
https://en.wikipedia.org/wiki?curid=32495538
1,706,380
Some of the differences in approach described above are easier to understand if the macroscope is interpreted as a particular instance of the "value chain of big data" (with a particular focus on earth and/or biosphere observations), which as stated in Chen "et al." (2014) can be divided into four phases, namely data generation, data acquisition (aka data assembly), data storage, and data analysis. For some workers such as M. Dornelas "et al.", the macroscope is the sum of the data collection systems (the generation element) that will provide the content that is needed for subsequent analysis, although some mention is also made of "a series of domain-specific data registries" which would then permit the content to be discovered. For others such as OBIS, the principal effort required to construct the macroscope is the data assembly component, which then permits the integrated analysis of previously disparate datasets (OBIS data can then either be viewed by the tools supplied, or downloaded to a user's own system for additional visualization and analysis); while for facilities interested in discovering patterns in the data (and with sufficient computing power to hand), the macroscope is the suite of temporal and spatial analytical and filtering tools ("lenses" in the terminology of the Indiana University Cyberinfrastructure for Network Science Center) which can be applied once the data are assembled. Since by analogy with the microscope, the macroscope is in essence a method of visualizing subjects too large to be seen completely in a conventional field of view, probably none of these approaches are incorrect, the differences in emphasis being complementary in that each is capable of contributing to the resulting "virtual instrument" that is envisaged by this concept. One trend that is observable, however, is that of increasing base dataset size and desired sampling density, today's "macroscopes" being built upon arrays of fine scale / high resolution data that would have been discarded as undesirable detail (obscuring the "big picture") in the original concepts of Odum and de Rosnay.
https://en.wikipedia.org/wiki?curid=64208794
1,706,712
Krylov has served on the editorial boards of numerous peer-review journals, including "Annual Review of Physical Chemistry", the "Journal of Chemical Physics", the "Journal of Physical Chemistry", "Chemical Physics Letters", the "International Journal of Quantum Chemistry," "Physical Chemistry–Chemical Physics," "Molecular Physics," and "Wires Computational Molecular Science." She has served as a guest editor of special issues of "J. Phys. Chem. A" honoring Prof. Benny Gerber and Prof. Hanna Reisler, the special issue of "Chemical Reviews" on Theoretical Modeling of Excited-State Processes, and the special issue of "Physical Chemistry–Chemical Physics" on Quantum Information Science. Currently, she is an associate editor of "Physical Chemistry-Chemical Physics" (RSC) and of "Wires Computational Molecular Science" (Wiley).
https://en.wikipedia.org/wiki?curid=11061937
1,713,598
In logic and theoretical computer science, and specifically proof theory and computational complexity theory, proof complexity is the field aiming to understand and analyse the computational resources that are required to prove or refute statements. Research in proof complexity is predominantly concerned with proving proof-length lower and upper bounds in various propositional proof systems. For example, among the major challenges of proof complexity is showing that the Frege system, the usual propositional calculus, does not admit polynomial-size proofs of all tautologies. Here the size of the proof is simply the number of symbols in it, and a proof is said to be of polynomial size if it is polynomial in the size of the tautology it proves.
https://en.wikipedia.org/wiki?curid=2801284
1,713,612
A proof system "P" is "automatable" if there is an algorithm that given a tautology formula_1 outputs a "P"-proof of formula_1 in time polynomial in the size of formula_1 and the length of the shortest "P"-proof of formula_1. Note that if a proof system is not polynomially bounded, it can still be automatable. A proof system "P" is "weakly automatable" if there is a proof system "R" and an algorithm that given a tautology formula_1 outputs an "R"-proof of formula_1 in time polynomial in the size of formula_1 and the length of the shortest "P"-proof of formula_1.
https://en.wikipedia.org/wiki?curid=2801284
1,750,975
The quantum jump method is an approach which is much like the master-equation treatment except that it operates on the wave function rather than using a density matrix approach. The main component of this method is evolving the system's wave function in time with a pseudo-Hamiltonian, where at each time step a quantum jump (discontinuous change) may take place with some probability. The calculated system state as a function of time is known as a quantum trajectory, and the desired density matrix as a function of time may be calculated by averaging over many simulated trajectories. For a Hilbert space of dimension N, the number of wave function components is equal to N while the number of density matrix components is equal to N². Consequently, for certain problems the quantum jump method offers a performance advantage over direct master-equation approaches.
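A minimal Monte Carlo wave-function sketch of the quantum jump method for a decaying two-level system is given below; averaging the trajectories approximates the exponential decay that the master equation would give. The rate, step size and first-order integration scheme are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, dt, n_steps, n_traj = 1.0, 0.01, 400, 500
c = np.sqrt(gamma) * np.array([[0, 1], [0, 0]], dtype=complex)  # lowering operator |g><e|
h_eff = -0.5j * (c.conj().T @ c)    # non-Hermitian effective Hamiltonian (bare H = 0 here)

excited_population = np.zeros(n_steps)
for _ in range(n_traj):
    psi = np.array([0, 1], dtype=complex)          # start in the excited state
    for step in range(n_steps):
        excited_population[step] += abs(psi[1]) ** 2 / n_traj
        jump_prob = dt * np.real(psi.conj() @ (c.conj().T @ c) @ psi)
        if rng.random() < jump_prob:               # quantum jump: apply c, then renormalize
            psi = c @ psi
        else:                                      # deterministic no-jump evolution
            psi = psi - 1j * dt * (h_eff @ psi)
        psi = psi / np.linalg.norm(psi)

# excited_population[k] is approximately exp(-gamma * k * dt), the master-equation result
print(excited_population[::100])
```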
https://en.wikipedia.org/wiki?curid=40325028
1,803,601
QTT is compatible with the standard formulation of quantum theory, as described by the Schrödinger equation, but it offers a more detailed view. The Schrödinger equation can be used to compute the probability of finding a quantum system in each of its possible states should a measurement be made. This approach is fundamentally statistical and is useful for predicting average measurements of large ensembles of quantum objects but it does not describe or provide insight into the behaviour of individual particles. QTT fills this gap by offering a way to describe the trajectories of individual quantum particles that obey the probabilities computed from the Schrödinger equation. Like the quantum jump method, QTT applies to open quantum systems that interact with their environment. QTT has become particularly popular since the technology has been developed to efficiently control and monitor individual quantum systems as it can predict how individual quantum objects such as particles will behave when they are observed.
https://en.wikipedia.org/wiki?curid=65266370
1,868,784
Zero electron kinetic energy (ZEKE) spectroscopy was developed with the idea of collecting only the resonance ionization photoelectrons that have extremely low kinetic energy. The technique involves waiting for a period of time after a resonance ionization experiment and then pulsing an electric field to collect the lowest energy photoelectrons in a detector. Typically, ZEKE experiments utilize two different tunable lasers. One laser photon energy is tuned to be resonant with the energy of an intermediate state. (This may be resonant with an excited state at a multiphoton transition.) Another photon energy is tuned to be close to the ionization threshold energy. The technique worked extremely well and demonstrated energy resolution that was significantly better than the laser bandwidth. It turns out that it was not the photoelectrons that were detected in ZEKE. The delay between the laser and the electric field pulse selected the longest-lived and most circular Rydberg states closest to the energy of the ion core. The population distribution of surviving long-lived near-threshold Rydberg states is close to the laser energy bandwidth. The electric field pulse Stark-shifts the near-threshold Rydberg states and vibrational autoionization occurs. ZEKE has provided a significant advance in the study of the vibrational spectroscopy of molecular ions. Schlag, Peatman and Müller-Dethlefs originated ZEKE spectroscopy.
https://en.wikipedia.org/wiki?curid=4119397
1,884,125
In most texts on statistical mechanics the statistical distribution functions formula_1 (in Maxwell–Boltzmann statistics, Bose–Einstein statistics, Fermi–Dirac statistics) are derived by determining those for which the system is in its state of maximum probability. But one really requires those with average or mean probability, although – of course – the results are usually the same for systems with a huge number of elements, as is the case in statistical mechanics. The method for deriving the distribution functions with mean probability has been developed by C. G. Darwin and Fowler and is therefore known as the Darwin–Fowler method. This method is the most reliable general procedure for deriving statistical distribution functions. Since the method employs a selector variable (a factor introduced for each element to permit a counting procedure) the method is also known as the Darwin–Fowler method of selector variables. Note that a distribution function is not the same as the probability – cf. Maxwell–Boltzmann distribution, Bose–Einstein distribution, Fermi–Dirac distribution. Also note that the distribution function formula_2, which is a measure of the fraction of those states which are actually occupied by elements, is given by formula_3 or formula_4, where formula_5 is the degeneracy of energy level formula_6 of energy formula_7 and formula_8 is the number of elements occupying this level (e.g. in Fermi–Dirac statistics 0 or 1). Total energy formula_9 and total number of elements formula_10 are then given by formula_11 and formula_12.
https://en.wikipedia.org/wiki?curid=51188793
1,907,518
Behavioral malware detection has been researched more recently. Most approaches to behavioral detection are based on analysis of system call dependencies. The executed binary code is traced using strace or more precise taint analysis to compute data-flow dependencies among system calls. The result is a directed graph formula_1 such that nodes are system calls, and edges represent dependencies. For example, formula_2 if a result returned by system call formula_3 (either directly as a result or indirectly through output parameters) is later used as a parameter of system call formula_4. The origins of the idea to use system calls to analyze software can be found in the work of Forrest et al. Christodorescu et al. point out that malware authors cannot easily reorder system calls without changing the semantics of the program, which makes system call dependency graphs suitable for malware detection. They compute a difference between malware and goodware system call dependency graphs and use the resulting graphs for detection, achieving high detection rates. Kolbitsch et al. pre-compute symbolic expressions and evaluate them on the syscall parameters observed at runtime.
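As an illustration of the data-flow construction described above, the following sketch builds such a dependency graph from a hypothetical system-call trace; the trace format, value identifiers, and function names are assumptions for this example rather than the representation used by any of the cited works.

```python
# Build a system-call dependency graph: an edge (i, j) means an output of
# call i is later used as an input of call j.
from collections import defaultdict

def build_dependency_graph(trace):
    """trace: list of (call_id, name, inputs, outputs) tuples, where `inputs`
    and `outputs` are sets of value identifiers (file descriptors, buffers, ...)."""
    producers = defaultdict(set)          # value id -> calls that produced it
    edges = set()
    for call_id, name, inputs, outputs in trace:
        for value in inputs:
            for producer in producers[value]:
                edges.add((producer, call_id))
        for value in outputs:
            producers[value].add(call_id)
    return edges

# Hypothetical trace: open() returns fd3, which read() and close() later use,
# and write() consumes the buffer produced by read().
trace = [
    (1, "open",  {"/etc/passwd"}, {"fd3"}),
    (2, "read",  {"fd3"},         {"buf1"}),
    (3, "write", {"buf1"},        set()),
    (4, "close", {"fd3"},         set()),
]
print(build_dependency_graph(trace))   # edges (1, 2), (2, 3), (1, 4)
```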
https://en.wikipedia.org/wiki?curid=36181451
1,913,754
In a model gated synapse, the gate is either open or closed by default. The gatekeeper neuron, therefore, serves as an external switch to the gate at the synapse of two other neurons. One of these neurons provides the input signal and the other provides the output signal. It is the role of the gatekeeper neuron to regulate the transmission of the input to the output. When activated, the gatekeeper neuron alters the polarity of the presynaptic axon to either open or close the gate. If this neuron depolarizes the presynaptic axon, it allows the signal to be transmitted. Thus, the gate is open. Hyperpolarization of the presynaptic axon closes the gate. Just like in a transistor, the gatekeeper neuron turns the system on or off; it affects the output signal of the postsynaptic neuron. Whether it is turned on or off is dependent on the nature of the input signal (either excitatory or inhibitory) from the presynaptic neuron.
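The on/off logic of this model can be summarised in a few lines of code; the boolean encoding of depolarisation versus hyperpolarisation below is an assumption made purely for illustration.

```python
# Minimal sketch of the gate logic described above: the gatekeeper neuron acts
# as an external switch on the synapse between the input and output neurons.
def gated_synapse(input_signal, gatekeeper_depolarizes_presynaptic_axon):
    """Return the signal passed on to the postsynaptic (output) neuron.
    The gate is open only when the gatekeeper depolarizes the presynaptic axon."""
    if gatekeeper_depolarizes_presynaptic_axon:   # depolarization: gate open
        return input_signal                       # excitatory or inhibitory input passes through
    return None                                   # hyperpolarization: gate closed, nothing transmitted

print(gated_synapse(+1, True))    # +1 (excitatory input transmitted)
print(gated_synapse(-1, False))   # None (gate closed)
```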
https://en.wikipedia.org/wiki?curid=9916386
1,916,920
A cell suspension or suspension culture is a type of cell culture in which single cells or small aggregates of cells are allowed to function and multiply in an agitated growth medium, thus forming a suspension. Suspension culture is one of the two classical types of cell culture, the other being adherent culture. The history of suspension cell culture closely aligns with the history of cell culture overall, but differs in maintenance methods and commercial applications. The cells themselves can be derived either from homogenized tissue or from heterogeneous cell solutions. Suspension cell culture is commonly used to culture nonadhesive cell lines such as hematopoietic cells, plant cells, and insect cells. While some cell lines are cultured in suspension, the majority of commercially available mammalian cell lines are adherent. Suspension cell cultures must be agitated to keep the cells in suspension and may require specialized equipment (e.g. magnetic stir plates, orbital shakers, incubators) and flasks (e.g. culture flasks, spinner flasks, shaker flasks). These cultures must be maintained with nutrient-containing media and cultured within a specific cell density range to avoid cell death.
https://en.wikipedia.org/wiki?curid=60401671
1,958,810
In some global optimization problems the analytical definition of the objective function is unknown, and it is only possible to obtain its values at chosen points. There are also objective functions for which the cost of an evaluation is very high, for example when each evaluation is the result of an experiment or a particularly onerous measurement. In these cases, the search for the global extremum (maximum or minimum) can be carried out using a methodology named "Bayesian optimization", which aims to obtain the best possible result with a predetermined number of evaluations. In summary, it is assumed that, away from the points at which it has already been evaluated, the objective function follows a pattern that can be represented by a stochastic process with appropriate characteristics. The stochastic process is taken as a model of the objective function, assuming that the probability distribution of its extrema gives the best indication of the extrema of the objective function. In the simplest case of one-dimensional optimization, given that the objective function has been evaluated at a number of points, the problem is to choose in which of the intervals thus identified it is most appropriate to invest a further evaluation. If a Wiener stochastic process is chosen as the model for the objective function, it is possible to calculate the probability distribution of the model's extreme points inside each interval, conditioned on the known values at the interval boundaries. Comparing the distributions obtained for the different intervals provides a criterion for selecting the interval in which the process should be iterated. The probability of having identified the interval containing the global extremum of the objective function can be used as a stopping criterion. Bayesian optimization is not an efficient method for the accurate search of local extrema, so, once the search range has been restricted, a specific local optimization method can be used, depending on the characteristics of the problem.
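The interval-selection step for the Wiener-process model can be sketched as follows. The probability that a Brownian bridge over an interval dips below a target value follows from the reflection principle; the variance rate `sigma2`, the improvement threshold `xi`, the midpoint evaluation rule, and the test function are all assumptions made for this illustration rather than details from the text.

```python
# Illustrative sketch of Wiener-process-based interval selection for
# one-dimensional global minimization with a fixed evaluation budget.
import math

def prob_min_below(y_left, y_right, target, width, sigma2):
    """P(min of a Brownian bridge over the interval < target), via the
    reflection principle; clamped to 1 if an endpoint is already <= target."""
    if y_left <= target or y_right <= target:
        return 1.0
    return math.exp(-2.0 * (y_left - target) * (y_right - target) / (sigma2 * width))

def bayesian_search_1d(f, a, b, n_evals=20, sigma2=4.0, xi=0.05):
    xs, ys = [a, b], [f(a), f(b)]
    for _ in range(n_evals - 2):
        pts = sorted(zip(xs, ys))                 # evaluated points ordered along the x axis
        target = min(ys) - xi                     # aim to improve on the current best by at least xi
        probs = [prob_min_below(y0, y1, target, x1 - x0, sigma2)
                 for (x0, y0), (x1, y1) in zip(pts, pts[1:])]
        i = probs.index(max(probs))               # interval most likely to contain such an improvement
        x_new = 0.5 * (pts[i][0] + pts[i + 1][0]) # evaluate at that interval's midpoint
        xs.append(x_new)
        ys.append(f(x_new))
    best = min(range(len(ys)), key=ys.__getitem__)
    return xs[best], ys[best]

# Hypothetical stand-in for an expensive-to-evaluate objective function.
print(bayesian_search_1d(lambda x: (x - 0.75) ** 2 + 0.1 * math.sin(8 * x), 0.0, 2.0))
```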
https://en.wikipedia.org/wiki?curid=52629507
1,996,275
There is an increasing call for the energy models and datasets used for energy policy analysis and advice to be made public in the interests of transparency and quality. A 2010 paper concerning energy efficiency modeling argues that "an open peer review process can greatly support model verification and validation, which are essential for model development". One 2012 study argues that the source code and datasets used in such models should be placed under publicly accessible version control to enable third-parties to run and check specific models. Another 2014 study argues that the public trust needed to underpin a rapid transition in energy systems can only be built through the use of transparent open-source energy models. The UK TIMES project (UKTM) is open source, according to a 2014 presentation, because "energy modelling must be replicable and verifiable to be considered part of the scientific process" and because this fits with the "drive towards clarity and quality assurance in the provision of policy insights". In 2016, the Deep Decarbonization Pathways Project (DDPP) was seeking to improve its modelling methodologies, a key motivation being "the intertwined goals of transparency, communicability and policy credibility." A 2016 paper argues that model-based energy scenario studies, wishing to influence decision-makers in government and industry, must become more comprehensible and more transparent. To these ends, the paper provides a checklist of transparency criteria that should be completed by modelers. The authors note however that they "consider open source approaches to be an extreme case of transparency that does not automatically facilitate the comprehensibility of studies for policy advice." An editorial from 2016 opines that closed energy models providing public policy support "are inconsistent with the open access movement [and] funded research". A 2017 paper lists the benefits of open data and models and the reasons that many projects nonetheless remain closed. The paper makes a number of recommendations for projects wishing to transition to a more open approach. The authors also conclude that, in terms of openness, energy research has lagged behind other fields, most notably physics, biotechnology, and medicine. Moreover:
https://en.wikipedia.org/wiki?curid=47926105
2,024,828
UV-induced apoptosis is an adequate (physiological) reaction of a cell damaged by UV radiation (UVR) in a sufficiently large (lethal) dose, and it prevents the disordered destruction of UV-damaged cells by necrosis. Cell elimination by apoptosis occurs when UV-induced damage that cannot be repaired by the intracellular repair system exceeds a certain limit (lethal damage). Through apoptosis, the cell disassembles itself into compartments that are subsequently utilized (mainly by neighbouring cells). The first sign that the apoptotic programme is at work in a UV-damaged cell is the activation of restriction enzymes, which divide the cell's DNA into fragments convenient for utilization. Too large a dose of UVR, however, can lead to breakdown (inactivation) of the energy-dependent mechanism of apoptosis (super-lethal damage). In this case, cell destruction occurs randomly rather than in an orderly fashion, and over a significantly longer time interval than apoptosis. UV-irradiated cells do not change their appearance for a long time [1, 6], which can lead researchers to the erroneous conclusion that an "unexpected response to a dose at which a higher dose of UV increased the viability of keratinocytes" has been revealed [2]. The fact that, at high doses of UVR, UV-induced apoptosis begins to be replaced by necrosis was established in 2000 [3]. For keratinocytes, the proportion of cells eliminated by apoptosis can reach 45% as the UVR dose increases, but with a further increase in dose (owing to the shutdown of the apoptosis mechanism) damaged cells are instead destroyed by necrosis and the fraction of cells eliminated by apoptosis begins to decrease (a non-monotonic dose dependence of UV-induced apoptosis) [4, 11]. In the UVR dose range from "lethal" to "super-lethal", "pro-inflammatory" apoptosis can appear, which was experimentally discovered in 2003 [5]. This may be the result of partial damage to the apoptosis mechanism by UV radiation [1]. Whereas at moderate doses "pure" apoptosis does not cause an inflammatory reaction, at sufficiently large (but lower than super-lethal) doses an inflammatory reaction arises from pro-inflammatory apoptosis, which leads to the appearance of "fast" erythema in UV-irradiated skin keratinocytes. The kinetics of "fast" erythema are much faster than the development of UV erythema caused by necrosis of UV-damaged keratinocytes [6]. The most erythemogenic spectral range of UVR is UVB (280–320 nm), since radiation in this range is less strongly absorbed by the outer layers of the skin; in contrast to UVC (200–280 nm), UVB can therefore reach deeper skin layers and act on keratinocytes of the deep-lying basal layer of the epidermis. The ability of UVB and UVC radiation to induce apoptosis is due to the fact that the DNA of the cell nucleus [7] and/or mitochondria [8] absorbs UVR well in the UVC and UVB spectral ranges. Skin keratinocytes (regardless of UVR exposure) are in a state of programmed apoptosis, during which keratinocytes leave the basal layer and, within 28 days of transit through all layers of the epidermis, turn into the flakes of the outer stratum corneum, which are subsequently desquamated.
The keratinocyte response to UV exposure therefore depends on the phase of programmed apoptosis (that is, on the distance from the basal layer) at which the keratinocyte is irradiated, and this is the main reason for the difference between the effects of UVC and UVB on the skin. There are also differences between the UVC and UVB spectral ranges in the initiation of mitochondrial (internal) and caspase-dependent (external) apoptosis [9]. Sunburn cells (SBC) are keratinocytes in the process of UV-induced apoptosis (both "pure" and pro-inflammatory). The appearance of SBC may not be associated with an inflammatory reaction, but the role of UV-induced apoptosis of skin keratinocytes in the development of UV erythema (hyperemia, redness) of the skin has been established. This allowed the development of a patent-protected method for the quantitative assessment of the apoptosis system [10], in which photoerythema is used (as an indicator of strictly dosed sterile inflammation) to diagnose the state of the body systems involved in the elimination of UV-induced damage. Besides apoptosis, such systems include the immune system, the intracellular repair system, and the microcirculation system, among others.
https://en.wikipedia.org/wiki?curid=7111771
2,025,408
Sustainable Energy Utility (SEU) is a community-based model of development founded on energy conservation and the use of renewables, seeking to permanently decrease the use of source materials, water, and energy. The model prescribes the creation of independent and financially self-sufficient non-profit entities for energy sustainability through conservation, efficiency, and end-user based decentralized renewable energy in an effort to address concerns about climate change, rising energy prices, inequity of energy availability, and a lack of community governance of energy development. The SEU model was developed by Dr. J. Byrne at the Center for Energy and Environmental Policy, University of Delaware. The Foundation for Renewable Energy and Environment (FREE) is implementing versions of the model.
https://en.wikipedia.org/wiki?curid=42798343
2,040,345
and SMPDB. As a metabolomics resource, the ECMDB is designed to facilitate research in the areas of gut/microbiome metabolomics and environmental metabolomics. The ECMDB contains two kinds of data: 1) chemical data and 2) molecular biology and/or biochemical data. The chemical data includes more than 2700 metabolite structures with detailed metabolite descriptions along with nearly 5000 NMR, GC-MS and LC-MS spectra corresponding to these metabolites. The biochemical data includes nearly 1600 protein (and DNA) sequences and more than 3100 biochemical reactions that are linked to these metabolite entries. Each metabolite entry in the ECMDB contains more than 80 data fields, with approximately 65% of the information devoted to chemical data and the other 35% devoted to enzymatic or biochemical data. Many data fields are hyperlinked to other databases (KEGG, PubChem, MetaCyc, ChEBI, PDB, UniProt, and GenBank). The ECMDB also has a variety of structure and pathway viewing applets. The ECMDB database offers a number of text, sequence, spectral, chemical structure and relational query searches. These are described in more detail below.
https://en.wikipedia.org/wiki?curid=42638967
2,073,008
To construct a contour boxplot, data ordering is the first step. In functional data analysis, each observation is a real function, therefore data ordering is different from the classical boxplot where scalar data are simply ordered from the smallest sample value to the largest. More generally, data depth gives a center-outward ordering of data points, and thereby provides a mechanism for constructing rank statistics of various kinds of multidimensional data. For instance, functional data examples can be ordered using the method of band depth or a modified band depth. In contour data analysis, each observation is a feature-set (a subset of the domain), and therefore not a function. Thus, the notion of band depth and modified band depth is further extended to accommodate features that can be expressed as sets but not necessarily as functions. Contour band depth allows for ordering feature-set data from the center outwards and, thus, introduces a measure to define functional quantiles and the centrality or outlyingness of an observation. Having the ranks of feature-set data, the contour boxplot is a natural extension of the classical boxplot, which in special cases reduces back to the traditional functional boxplot.
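For the functional (curve) case that contour band depth generalises, a minimal sketch of the modified band depth (with bands formed by pairs of curves, j = 2) might look as follows; the implementation details and the toy data are assumptions made for illustration.

```python
# Modified band depth: for each curve, the average fraction of the domain on
# which it lies inside the band spanned by each pair of sample curves.
import itertools
import numpy as np

def modified_band_depth(curves):
    """curves: (n_curves, n_points) array of functions sampled on a common grid.
    Returns a center-outward depth value for each curve (higher = more central)."""
    n = len(curves)
    depths = np.zeros(n)
    for i, j in itertools.combinations(range(n), 2):
        lower = np.minimum(curves[i], curves[j])
        upper = np.maximum(curves[i], curves[j])
        inside = (curves >= lower) & (curves <= upper)   # broadcast over all curves
        depths += inside.mean(axis=1)                    # fraction of the domain inside this band
    return depths / (n * (n - 1) / 2)

# Toy example: five shifted sine curves; the outlying curve (+0.6 shift) gets the lowest depth.
grid = np.linspace(0, 1, 50)
sample = np.array([np.sin(2 * np.pi * grid) + shift for shift in (-0.2, -0.1, 0.0, 0.1, 0.6)])
print(modified_band_depth(sample))
```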
https://en.wikipedia.org/wiki?curid=40190398
2,096,597
Increased energy efficiency has allowed wastewater treatment plants to comply with discharge limits while reducing energy demand by up to 50% without affecting treatment performance. However, energy efficiency strategies by themselves are not sufficient to achieve independence from the electricity grid and fossil fuel-based energy sources. To achieve energy neutrality, multiple studies have looked at the feasibility of integrating a variety of renewable energy sources into wastewater treatment plants. The wastewater itself is a carrier of energy, and a theoretical calculation based on the characteristics of the sewage shows that the embedded energy is composed of 80% thermal energy and 20% chemical energy. The thermal energy can be recovered as heat while the chemical energy is recovered as biogas.
https://en.wikipedia.org/wiki?curid=55657441
2,113,330
The SELDM user interface has one or more GUI forms that are used to enter four categories of input data, which include documentation, site and region information, hydrologic statistics, and water-quality data. The documentation data include information about the analyst, the project, and the analysis. The site and region data include the highway-site characteristics, the ecoregions, the upstream-basin characteristics, and, if a lake analysis is selected, the lake-basin characteristics. The hydrologic data include precipitation, streamflow, and runoff-coefficient statistics. The water-quality data include highway-runoff-quality statistics, upstream-water-quality statistics, downstream-water-quality definitions, and BMP-performance statistics. There also is a GUI form for running the model and accessing the distinct set of output files. The SELDM interface is designed to populate the database with data and statistics for the analysis and to specify index variables that are used by the program to query the database when SELDM is run. It is necessary to step through the input forms each time an analysis is run.
https://en.wikipedia.org/wiki?curid=45102490