doc_id: int32 (15 – 2.25M)
text: string (lengths 101 – 6.85k)
source: string (lengths 39 – 44)
894,281
Quantum machine learning is the integration of quantum algorithms within machine learning programs. The most common use of the term refers to machine learning algorithms for the analysis of classical data executed on a quantum computer, i.e. quantum-enhanced machine learning. While machine learning algorithms are used to process immense quantities of data, quantum machine learning utilizes qubits and quantum operations or specialized quantum systems to improve the computational speed and data storage of algorithms in a program. This includes hybrid methods that involve both classical and quantum processing, where computationally difficult subroutines are outsourced to a quantum device. These routines can be more complex in nature and executed faster on a quantum computer. Furthermore, quantum algorithms can be used to analyze quantum states instead of classical data. Beyond quantum computing, the term "quantum machine learning" is also associated with classical machine learning methods applied to data generated from quantum experiments (i.e. machine learning of quantum systems), such as learning the phase transitions of a quantum system or creating new quantum experiments. Quantum machine learning also extends to a branch of research that explores methodological and structural similarities between certain physical systems and learning systems, in particular neural networks. For example, some mathematical and numerical techniques from quantum physics are applicable to classical deep learning and vice versa. Furthermore, researchers investigate more abstract notions of learning theory with respect to quantum information, sometimes referred to as "quantum learning theory".
https://en.wikipedia.org/wiki?curid=44108758
1,020,557
Biological neural networks have a large degree of heterogeneity in terms of different cell types. This section describes a mathematical model of a fully connected modern Hopfield network assuming the extreme degree of heterogeneity: every single neuron is different. Specifically, an energy function and the corresponding dynamical equations are described assuming that each neuron has its own activation function and kinetic time scale. The network is assumed to be fully connected, so that every neuron is connected to every other neuron using a symmetric matrix of weights \(W_{ij}\); the indices \(i\) and \(j\) enumerate different neurons in the network, see Fig. 3. The easiest way to mathematically formulate this problem is to define the architecture through a Lagrangian function \(L(\{x_i\})\) that depends on the activities of all the neurons in the network. The activation function for each neuron is defined as a partial derivative of the Lagrangian with respect to that neuron's activity, \(g_i = \partial L/\partial x_i\). From the biological perspective one can think about \(g_i\) as an axonal output of the neuron \(i\). In the simplest case, when the Lagrangian is additive for different neurons, this definition results in an activation that is a non-linear function of that neuron's activity. For non-additive Lagrangians this activation function can depend on the activities of a group of neurons. For instance, it can contain contrastive (softmax) or divisive normalization. The dynamical equations describing the temporal evolution of a given neuron are given by \(\tau_i \, dx_i/dt = \sum_j W_{ij} g_j - x_i\). This equation belongs to the class of models called firing rate models in neuroscience. Each neuron \(i\) collects the axonal outputs \(g_j\) from all the neurons, weights them with the synaptic coefficients \(W_{ij}\) and produces its own time-dependent activity \(x_i\). The temporal evolution has a time constant \(\tau_i\), which in general can be different for every neuron. This network has a global energy function \(E = \sum_i x_i g_i - L - \tfrac{1}{2}\sum_{i,j} g_i W_{ij} g_j\), where the first two terms represent the Legendre transform of the Lagrangian function with respect to the neurons' currents \(x_i\). The temporal derivative of this energy function can be computed on the dynamical trajectories, leading to \(dE/dt = -\sum_{i,j} \tau_i \, \frac{dx_i}{dt} M_{ij} \frac{dx_j}{dt} \le 0\), where \(M_{ij} = \partial^2 L/\partial x_i \partial x_j\) (see for details). The last inequality sign holds provided that the matrix \(M_{ij}\) (or its symmetric part) is positive semi-definite. If, in addition to this, the energy function is bounded from below, the non-linear dynamical equations are guaranteed to converge to a fixed point attractor state. The advantage of formulating this network in terms of the Lagrangian functions is that it makes it possible to easily experiment with different choices of the activation functions and different architectural arrangements of neurons. For all those flexible choices the conditions of convergence are determined by the properties of the matrix \(M_{ij}\) and the existence of the lower bound on the energy function.
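As an illustration of the dynamics and energy described above, the following is a minimal Python sketch (not the authors' implementation): it integrates the firing-rate equations with an additive Lagrangian L = Σ log cosh(x_i), so g_i = tanh(x_i), and checks that the energy is essentially non-increasing. Network size, time step, and the choice of Lagrangian are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                    # number of neurons (illustrative)
W = rng.normal(size=(N, N)) / np.sqrt(N)
W = (W + W.T) / 2                         # symmetric weight matrix W_ij
tau = rng.uniform(0.5, 2.0, size=N)       # per-neuron time constants tau_i
x = rng.normal(size=N)                    # neuron activities x_i

g = np.tanh                               # g_i = dL/dx_i for L = sum_i log cosh(x_i)

def energy(x):
    # E = sum_i x_i g_i - L - (1/2) sum_ij g_i W_ij g_j
    L = np.sum(np.log(np.cosh(x)))
    return x @ g(x) - L - 0.5 * g(x) @ W @ g(x)

dt = 0.01
energies = [energy(x)]
for _ in range(5000):
    x += dt / tau * (W @ g(x) - x)        # Euler step of tau_i dx_i/dt = sum_j W_ij g_j - x_i
    energies.append(energy(x))

print("final energy:", round(energies[-1], 4))
# Should be ~0 (non-increasing up to discretization error):
print("largest energy increase along the trajectory:", max(np.diff(energies)))
```

Because the Hessian of log cosh is diagonal and non-negative and tanh is bounded, the convergence conditions stated above (positive semi-definite M and an energy bounded from below) are satisfied for this particular choice.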
https://en.wikipedia.org/wiki?curid=1170097
1,070,096
In physics, conservation laws play important roles. For example, the law of conservation of energy states that the energy of a closed system must remain constant. It can neither increase nor decrease without coming in contact with an external system. If we consider the whole universe as a closed system, the total amount of energy always remains the same. However, the form of energy keeps changing. One may wonder if there is any such law for the conservation of information. In the classical world, information can be copied and deleted perfectly. In the quantum world, however, the conservation of quantum information should mean that information cannot be created nor destroyed. This concept stems from two fundamental theorems of quantum mechanics: the no-cloning theorem and the no-deleting theorem. But the no-hiding theorem is the ultimate proof of the conservation of quantum information. The importance of the no-hiding theorem is that it proves the conservation of the wave function in quantum theory. This had never been proved earlier. What was known before is that the conservation of entropy holds for a quantum system undergoing unitary time evolution, and if entropy represents information in quantum theory, then it is believed that information should somehow be conserved. For example, one can prove that pure states remain pure states and probabilistic combinations of pure states (called mixed states) remain mixed states under unitary evolution. However, it was never proved that if the probability amplitude disappears from one system, it will reappear in another system. Thus, one may say that as energy keeps changing its form, the wave function keeps moving from one Hilbert space to another Hilbert space. Since the wave function contains all the relevant information about a physical system, the conservation of the wave function is tantamount to conservation of quantum information.
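The statement that pure states stay pure and mixed states stay mixed under unitary evolution can be checked numerically: the purity Tr(ρ²) is invariant under ρ → UρU†. This is an illustrative NumPy sketch, not part of the no-hiding theorem's proof; the dimension and the random states are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

# Random pure state |psi> and its density matrix (purity Tr(rho^2) = 1).
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho_pure = np.outer(psi, psi.conj())

# Random mixed state: probabilistic combination of basis states (purity < 1).
p = rng.dirichlet(np.ones(d))
rho_mixed = np.diag(p).astype(complex)

# Random unitary from the QR decomposition of a random complex matrix.
q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

for label, rho in [("pure ", rho_pure), ("mixed", rho_mixed)]:
    evolved = q @ rho @ q.conj().T          # unitary time evolution rho -> U rho U^dagger
    purity_before = np.trace(rho @ rho).real
    purity_after = np.trace(evolved @ evolved).real
    print(label, round(purity_before, 6), "->", round(purity_after, 6))
```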
https://en.wikipedia.org/wiki?curid=54424691
1,618,340
Biological neural networks have a large degree of heterogeneity in terms of different cell types. This section describes a mathematical model of a fully connected Modern Hopfield network assuming the extreme degree of heterogeneity: every single neuron is different. Specifically, an energy function and the corresponding dynamical equations are described assuming that each neuron has its own activation function and kinetic time scale. The network is assumed to be fully connected, so that every neuron is connected to every other neuron using a symmetric matrix of weights \(W_{ij}\); the indices \(i\) and \(j\) enumerate different neurons in the network, see Fig. 3. The easiest way to mathematically formulate this problem is to define the architecture through a Lagrangian function \(L(\{x_i\})\) that depends on the activities of all the neurons in the network. The activation function for each neuron is defined as a partial derivative of the Lagrangian with respect to that neuron's activity, \(g_i = \partial L/\partial x_i\). From the biological perspective one can think about \(g_i\) as an axonal output of the neuron \(i\). In the simplest case, when the Lagrangian is additive for different neurons, this definition results in an activation that is a non-linear function of that neuron's activity. For non-additive Lagrangians this activation function can depend on the activities of a group of neurons. For instance, it can contain contrastive (softmax) or divisive normalization. The dynamical equations describing the temporal evolution of a given neuron are given by \(\tau_i \, dx_i/dt = \sum_j W_{ij} g_j - x_i\). This equation belongs to the class of models called firing rate models in neuroscience. Each neuron \(i\) collects the axonal outputs \(g_j\) from all the neurons, weights them with the synaptic coefficients \(W_{ij}\) and produces its own time-dependent activity \(x_i\). The temporal evolution has a time constant \(\tau_i\), which in general can be different for every neuron. This network has a global energy function \(E = \sum_i x_i g_i - L - \tfrac{1}{2}\sum_{i,j} g_i W_{ij} g_j\), where the first two terms represent the Legendre transform of the Lagrangian function with respect to the neurons' currents \(x_i\). The temporal derivative of this energy function can be computed on the dynamical trajectories, leading to \(dE/dt = -\sum_{i,j} \tau_i \, \frac{dx_i}{dt} M_{ij} \frac{dx_j}{dt} \le 0\), where \(M_{ij} = \partial^2 L/\partial x_i \partial x_j\) (see for details). The last inequality sign holds provided that the matrix \(M_{ij}\) (or its symmetric part) is positive semi-definite. If, in addition to this, the energy function is bounded from below, the non-linear dynamical equations are guaranteed to converge to a fixed point attractor state. The advantage of formulating this network in terms of the Lagrangian functions is that it makes it possible to easily experiment with different choices of the activation functions and different architectural arrangements of neurons. For all those flexible choices the conditions of convergence are determined by the properties of the matrix \(M_{ij}\) and the existence of the lower bound on the energy function.
https://en.wikipedia.org/wiki?curid=68440670
1,644,776
The puma model was proposed by Diaz and Kostelecky in 2010 as a three-parameter model that exhibits consistency with all the established neutrino data (accelerator, atmospheric, reactor, and solar) and naturally describes the anomalous low-energy excess observed in MiniBooNE that is inconsistent with the conventional massive model. This is a hybrid model that includes Lorentz violation and neutrino masses. One of the main differences between this model and the bicycle and tandem models described above is the incorporation of nonrenormalizable terms in the theory, which lead to powers of the energy greater than one. Nonetheless, all these models share the characteristic of having a mixed energy dependence that leads to energy-dependent mixing angles, a feature absent in the conventional massive model. At low energies, the mass term dominates and the mixing takes the tribimaximal form, a widely used matrix postulated to describe neutrino mixing. This mixing, added to the 1/"E" dependence of the mass term, guarantees agreement with solar and KamLAND data. At high energies, Lorentz-violating contributions take over, making the contribution of neutrino masses negligible. A seesaw mechanism is triggered, similar to that in the bicycle model, making one of the eigenvalues proportional to 1/"E", the dependence that usually comes with neutrino masses. This feature lets the model mimic the effects of a mass term at high energies despite the fact that there are only non-negative powers of the energy. The energy dependence of the Lorentz-violating terms produces maximal νμ–ντ mixing, which makes the model consistent with atmospheric and accelerator data. The oscillation signal in MiniBooNE appears because the oscillation phase responsible for the oscillation channel νμ → νe grows rapidly with energy and the oscillation amplitude is large only for energies below 500 MeV. The combination of these two effects produces an oscillation signal in MiniBooNE at low energies, in agreement with the data. Additionally, since the model includes a term associated with a CPT-odd Lorentz-violating operator, different probabilities appear for neutrinos and antineutrinos. Moreover, since the amplitude for νμ → νe decreases for energies above 500 MeV, long-baseline experiments searching for nonzero θ13 should measure different values depending on the energy; more precisely, the MINOS experiment should measure a value smaller than the T2K experiment according to the puma model, which agrees with current measurements.
https://en.wikipedia.org/wiki?curid=27640668
1,447,098
There is a wide spectrum of terminology associated with space mapping: ideal model, coarse model, coarse space, fine model, companion model, cheap model, expensive model, surrogate model, low fidelity (resolution) model, high fidelity (resolution) model, empirical model, simplified physics model, physics-based model, quasi-global model, physically expressive model, device under test, electromagnetics-based model, simulation model, computational model, tuning model, calibration model, surrogate update, mapped coarse model, surrogate optimization, parameter extraction, target response, optimization space, validation space, neuro-space mapping, implicit space mapping, output space mapping, port tuning, predistortion (of design specifications), manifold mapping, defect correction, model management, multi-fidelity models, variable fidelity/variable complexity, multigrid method, coarse grid, fine grid, surrogate-driven, simulation-driven, model-driven, feature-based modeling.
https://en.wikipedia.org/wiki?curid=38218032
503,732
There is no known way to efficiently simulate a quantum computational model with a classical computer; that is, no classical algorithm is known that simulates a quantum computational model in polynomial time. However, a quantum circuit of \(n\) qubits with \(m\) quantum gates can be simulated by a classical circuit with roughly \(O(2^n m)\) classical gates. This number of classical gates is obtained by determining how many bit operations are necessary to simulate the quantum circuit. In order to do this, first the amplitudes associated with the \(n\) qubits must be accounted for. The state of each of the \(n\) qubits can be described by a two-dimensional complex vector, or state vector. These state vectors can also be described as a linear combination of component (basis) vectors with coefficients called amplitudes. These amplitudes are complex numbers which are normalized to one, meaning the sum of the squares of the absolute values of the amplitudes must be one. The entries of the state vector are these amplitudes, and each amplitude occupies the entry corresponding to the non-zero component of the basis vector for which it is the coefficient in the linear combination. As an equation this is described as \(|\psi\rangle = \alpha|0\rangle + \beta|1\rangle\) in Dirac notation. The state of the entire \(n\)-qubit system can be described by a single state vector. This state vector describing the entire system is the tensor product of the state vectors describing the individual qubits in the system. The result of the tensor products of the \(n\) qubits is a single state vector which has \(2^n\) dimensions and entries that are the amplitudes associated with each basis state or component vector. Therefore, \(2^n\) amplitudes must be accounted for with a \(2^n\)-dimensional complex vector, which is the state vector for the \(n\)-qubit system. In order to obtain an upper bound for the number of gates required to simulate a quantum circuit, we need a sufficient upper bound for the amount of data used to specify the information about each of the \(2^n\) amplitudes. To do this, \(p\) bits of precision are sufficient for encoding each amplitude, so it takes on the order of \(2^n p\) classical bits to account for the state vector of the \(n\)-qubit system. Next the application of the \(m\) quantum gates on the \(2^n\) amplitudes must be accounted for. The quantum gates can be represented as \(2^n \times 2^n\) sparse matrices, so to account for the application of each of the \(m\) quantum gates, the state vector must be multiplied by such a sparse matrix once per gate. Every time the state vector is multiplied by one of these sparse matrices, \(O(2^n)\) arithmetic operations must be performed, i.e. on the order of \(2^n\) bit operations (up to factors polynomial in \(p\)) for every quantum gate applied to the state vector. So on the order of \(2^n\) classical gates are needed to simulate an \(n\)-qubit circuit with just one quantum gate, and therefore roughly \(O(2^n m)\) classical gates are needed to simulate a quantum circuit of \(n\) qubits with \(m\) quantum gates. While there is no known way to efficiently simulate a quantum computer with a classical computer, it is possible to efficiently simulate a classical computer with a quantum computer; this follows from the inclusion \(\mathsf{BPP} \subseteq \mathsf{BQP}\).
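A minimal sketch of the brute-force simulation described above: the full 2^n-dimensional state vector is stored explicitly and each gate is expanded (here densely, for simplicity) to a 2^n × 2^n operator, so memory and time grow exponentially with n. The qubit count and choice of gate are illustrative, not taken from the passage.

```python
import numpy as np

n = 10                                    # number of qubits; state vector has 2**n amplitudes
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                            # |00...0>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate acting on one qubit
I = np.eye(2)

def apply_single_qubit_gate(state, gate, target, n):
    # Build the full 2^n x 2^n operator as a tensor product:
    # exactly the exponential blow-up discussed in the text.
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, gate if q == target else I)
    return op @ state

for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)

print(len(state), "amplitudes stored")            # 2**n
print(np.allclose(np.abs(state)**2, 1 / 2**n))    # uniform superposition, as expected
```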
https://en.wikipedia.org/wiki?curid=24092190
916,978
Once the type of model is specified in stage 2, the data and the method of collecting the data must be specified. The model must be specified first in order to determine the variables which need to be collected. Conversely, when deciding on the desired forecasting model, the available data or methods to collect data need to be considered in order to formulate the correct model. Time series data and cross-sectional data are the different collection methods that can be used. Time series data are based on historical observations taken sequentially in time. These observations are used to derive relevant statistics, characteristics, and insight from the data. The data points that may be collected using time series data may be sales, prices, manufacturing costs and their corresponding time intervals, e.g., weekly, monthly, quarterly, annually or any other regular interval. Cross-sectional data refers to data collected on a single entity at a given period of time. Cross-sectional data used in demand forecasting usually depicts a data point gathered from an individual, firm, industry or an area. For example, sales for Firm A during quarter 1. This type of data encapsulates a variety of data points which resulted in the final data point. The subset of data points may not be observable or feasible to determine, but can be a practical method for adding precision to the demand forecast model. The source for the data can be found via the firm’s records, commercial or private agencies or official sources.
https://en.wikipedia.org/wiki?curid=17873973
572,138
Particle physics, dealing with the smallest physical systems, is also known as "high energy physics". Physics of larger length scales, including the macroscopic scale, is also known as "low energy physics". Intuitively, it might seem incorrect to associate "high energy" with the physics of very small, "low" mass-energy systems, like subatomic particles. By comparison, one gram of hydrogen, a macroscopic system, has roughly 6×10^23 times the mass-energy of a single proton, a central object of study in high energy physics. Even an entire beam of protons circulated in the Large Hadron Collider, a high energy physics experiment, contains roughly 3×10^14 protons, each with about 6.5 TeV of energy, for a total beam energy of ~2×10^27 eV or ~336.4 MJ, which is still roughly 3×10^5 times lower than the mass-energy of a single gram of hydrogen. Yet, the macroscopic realm is "low energy physics", while that of quantum particles is "high energy physics".
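The comparison can be checked with a few lines of arithmetic. The per-proton energy of 6.5 TeV and the beam population of ~3.2×10^14 protons are assumptions consistent with the quoted ~336.4 MJ, not figures taken from the passage itself.

```python
# Rough arithmetic behind the "high energy vs. low energy" comparison.
eV = 1.602176634e-19                      # joules per electronvolt
proton_mass_energy = 938.272e6 * eV       # ~938 MeV in joules
gram_H_mass_energy = 1e-3 * (2.998e8)**2  # E = mc^2 for one gram, ~9.0e13 J

protons_per_gram = gram_H_mass_energy / proton_mass_energy
print(f"mass-energy of 1 g of hydrogen / mass-energy of a proton: {protons_per_gram:.2e}")

beam_protons = 3.2e14                     # assumed LHC beam population
beam_energy = beam_protons * 6.5e12 * eV  # assumed 6.5 TeV per proton
print(f"total beam energy: {beam_energy/1e6:.0f} MJ")
print(f"1 g of hydrogen exceeds the beam by a factor of {gram_H_mass_energy/beam_energy:.1e}")
```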
https://en.wikipedia.org/wiki?curid=382187
1,707,390
Entosis has been found to be a different mechanism for cancer cells to form cell-in-cell structures at tumor sites. The entosis process in cancer cells is mediated via E-cadherin and P-cadherin. Since cadherins usually create homophilic cell-to-cell junctions, it is believed that the process mainly occurs between homologous cells. After cell-cell adhesions are mediated, the engulfed cells promote their own uptake into the neighbor cell. Additionally, they promote the ingestion process through actin polymerization and myosin contraction. Actomyosin contraction in the invading cell is regulated by controllers of cell tension such as RhoA; furthermore, these cells accumulate actin and myosin at the cell cortex, which generates the mechanical tension that drives the cell-in-cell invasion mechanism. The entosis mechanism can potentially have substantial energetic implications in cancer cells compared to other mechanisms of cell death and engulfment. A crucial part of the process is the active involvement of invading cells, which does not happen in other forms of cell engulfment. This allows the mechanism to selectively target living cells, excluding dead cells or non-living material such as cell debris. After internalization, engulfed cells are killed by the host cell following the maturation of the entotic vacuole that encapsulates the entotic cell. The maturation of the entotic vacuole involves modification by autophagy pathway proteins, followed by lysosome fusion and inner cell death and degradation inside the host cell. In this mechanism, autophagy pathway proteins play an important role by scavenging extracellular nutrients derived from the inner cell's death. Internalized cells can also undergo alternative fates such as apoptosis or unharmed escape from the host cell. In clinical cancer specimens, evidence of DNA fragmentation has been found, suggesting that non-apoptotic cell death may be a common fate for entotic cells in human cancers. Entosis correlates with worse cancer prognosis in head and neck squamous cell carcinoma, anal carcinoma, lung adenocarcinoma, pancreatic ductal carcinoma, and some breast ductal carcinomas.
https://en.wikipedia.org/wiki?curid=14501387
410,219
Doi & Barendregt, working in collaboration with Khan, Thalib and Williams (from the University of Queensland, University of Southern Queensland and Kuwait University), have created an inverse-variance quasi-likelihood based alternative (IVhet) to the random effects (RE) model, for which details are available online. This was incorporated into MetaXL version 2.0, a free Microsoft Excel add-in for meta-analysis produced by Epigear International Pty Ltd, and made available on 5 April 2014. The authors state that a clear advantage of this model is that it resolves the two main problems of the random effects model. The first advantage of the IVhet model is that coverage remains at the nominal (usually 95%) level for the confidence interval, unlike the random effects model, whose coverage drops with increasing heterogeneity. The second advantage is that the IVhet model maintains the inverse variance weights of individual studies, unlike the RE model which gives small studies more weight (and therefore larger studies less) with increasing heterogeneity. When heterogeneity becomes large, the individual study weights under the RE model become equal and thus the RE model returns an arithmetic mean rather than a weighted average. This side-effect of the RE model does not occur with the IVhet model, which thus differs from the RE model estimate in two respects: pooled estimates will favor larger trials (as opposed to penalizing larger trials as in the RE model) and will have a confidence interval that remains within the nominal coverage under uncertainty (heterogeneity). Doi & Barendregt suggest that while the RE model provides an alternative method of pooling the study data, their simulation results demonstrate that using a more specified probability model with untenable assumptions, as with the RE model, does not necessarily provide better results. The latter study also reports that the IVhet model resolves the problems related to underestimation of the statistical error, poor coverage of the confidence interval and increased MSE seen with the random effects model, and the authors conclude that researchers should henceforth abandon use of the random effects model in meta-analysis. While their data are compelling, the ramifications (in terms of the magnitude of spuriously positive results within the Cochrane database) are huge and thus accepting this conclusion requires careful independent confirmation. The availability of free software (MetaXL) that runs the IVhet model (and all other models for comparison) facilitates this for the research community.
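The weighting behaviour described above can be illustrated with a small sketch. This is not MetaXL's IVhet implementation; it simply contrasts plain inverse-variance weights (which IVhet retains) with DerSimonian–Laird random-effects weights, whose relative weights flatten as the heterogeneity estimate τ² grows. The study effects and variances are made-up numbers.

```python
import numpy as np

# Hypothetical study effects (e.g. log odds ratios) and within-study variances;
# the last study is much larger (smallest variance).
y = np.array([0.80, -0.50, 1.20, -0.30, 0.05])
v = np.array([0.04, 0.05, 0.06, 0.05, 0.01])

w_iv = 1 / v                                   # inverse-variance weights (kept by IVhet)
pooled_iv = np.sum(w_iv * y) / np.sum(w_iv)

# DerSimonian–Laird estimate of the between-study variance tau^2
Q = np.sum(w_iv * (y - pooled_iv) ** 2)
c = np.sum(w_iv) - np.sum(w_iv ** 2) / np.sum(w_iv)
tau2 = max(0.0, (Q - (len(y) - 1)) / c)

w_re = 1 / (v + tau2)                          # random-effects weights
pooled_re = np.sum(w_re * y) / np.sum(w_re)

print("tau^2 =", round(tau2, 3))
print("relative IV weights:", np.round(w_iv / w_iv.sum(), 3))  # big study dominates
print("relative RE weights:", np.round(w_re / w_re.sum(), 3))  # weights nearly equal
print("pooled (IV) =", round(pooled_iv, 3), " pooled (RE) =", round(pooled_re, 3))
```

With these numbers the RE pooled estimate drifts toward the simple arithmetic mean of the study effects, while the inverse-variance estimate stays dominated by the largest study, which is the contrast the passage describes.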
https://en.wikipedia.org/wiki?curid=62329
747,621
Energy efficiencies of water heaters in residential use can vary greatly, particularly depending on manufacturer and model. However, electric heaters tend to be slightly more efficient (not counting power station losses) with recovery efficiency (how efficiently energy transfers to the water) reaching about 98%. Gas-fired heaters have maximum recovery efficiencies of only about 82–94% (the remaining heat is lost with the flue gases). Overall energy factors can be as low as 80% for electric and 50% for gas systems. Natural gas and propane tank water heaters with energy factors of 62% or greater, as well as electric tank water heaters with energy factors of 93% or greater, are considered high-efficiency units. Energy Star-qualified natural gas and propane tank water heaters (as of September 2010) have energy factors of 67% or higher, which is usually achieved using an intermittent pilot together with an automatic flue damper, baffle blowers, or power venting. Direct electric resistance tank water heaters are not included in the Energy Star program; however, the Energy Star program does include electric heat pump units with energy factors of 200% or higher. Tankless gas water heaters (as of 2015) must have an energy factor of 90% or higher for Energy Star qualification. Since electricity production in thermal plants has efficiency levels ranging from only 15% to slightly over 55% (combined cycle gas turbine), with around 40% typical for thermal power stations, direct resistance electric water heating may be the least energy efficient option. However, use of a heat pump can make electric water heaters much more energy efficient and lead to a decrease in carbon dioxide emissions, even more so if a low carbon source of electricity is used. Using district heating utilizing waste heat from electricity generation and other industries to heat residences and hot water gives an increased overall efficiency, removing the need for burning fossil fuel or using high energy value electricity to produce heat in the individual home.
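The gas-versus-electric trade-off sketched above amounts to multiplying conversion efficiencies along the supply chain. A minimal sketch with illustrative assumptions (a 40% thermal power plant, an assumed 94% grid efficiency, and energy factors in the ranges quoted above):

```python
# Approximate source-to-hot-water efficiency under illustrative assumptions.
plant_eff    = 0.40   # typical thermal power station (from the quoted range)
grid_eff     = 0.94   # assumed transmission/distribution efficiency
ef_electric  = 0.93   # high-efficiency electric resistance tank (energy factor)
ef_gas       = 0.67   # Energy Star gas tank water heater (energy factor)
cop_heatpump = 2.0    # heat pump water heater, energy factor ~200%

options = {
    "electric resistance": plant_eff * grid_eff * ef_electric,
    "electric heat pump":  plant_eff * grid_eff * cop_heatpump,
    "gas tank (on site)":  ef_gas,
}
for name, eff in options.items():
    print(f"{name:20s} primary-energy efficiency ~ {eff:.0%}")
```

Under these assumptions, direct resistance heating lands near 35% on a primary-energy basis while the heat pump roughly doubles the on-site gas heater, consistent with the passage's conclusion.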
https://en.wikipedia.org/wiki?curid=521801
868,841
The cell envelope is composed of the cell membrane and the cell wall. As in other organisms, the bacterial cell wall provides structural integrity to the cell. In prokaryotes, the primary function of the cell wall is to protect the cell from internal turgor pressure caused by the much higher concentrations of proteins and other molecules inside the cell compared to its external environment. The bacterial cell wall differs from that of all other organisms by the presence of peptidoglycan, which is located immediately outside of the cell membrane. Peptidoglycan is made up of a polysaccharide backbone consisting of alternating N-acetylmuramic acid (NAM) and N-acetylglucosamine (NAG) residues in equal amounts. Peptidoglycan is responsible for the rigidity of the bacterial cell wall, and for the determination of cell shape. It is relatively porous and is not considered to be a permeability barrier for small substrates. While all bacterial cell walls (with a few exceptions, such as the extracellular parasites "Mycoplasma") contain peptidoglycan, not all cell walls have the same overall structures. Since the cell wall is required for bacterial survival, but is absent in some eukaryotes, several antibiotics (notably the penicillins and cephalosporins) stop bacterial infections by interfering with cell wall synthesis, while having no effect on human cells, which have no cell wall, only a cell membrane. There are two main types of bacterial cell walls, those of gram-positive bacteria and those of gram-negative bacteria, which are differentiated by their Gram staining characteristics. For both these types of bacteria, particles of approximately 2 nm can pass through the peptidoglycan. If the bacterial cell wall is entirely removed, it is called a protoplast, while if it is partially removed, it is called a spheroplast. Beta-lactam antibiotics such as penicillin inhibit the formation of peptidoglycan cross-links in the bacterial cell wall. The enzyme lysozyme, found in human tears, also digests the cell wall of bacteria and is the body's main defense against eye infections.
https://en.wikipedia.org/wiki?curid=4209093
911,557
One of the fundamental elements of biomedical and translational research is the use of integrated data repositories. A survey conducted in 2010 defined "integrated data repository" (IDR) as a data warehouse incorporating various sources of clinical data to support queries for a range of research-like functions. Integrated data repositories are complex systems developed to solve a variety of problems, ranging from identity management and protection of confidentiality to semantic and syntactic comparability of data from different sources and, most importantly, convenient and flexible querying. Development of the field of clinical informatics led to the creation of large data sets with electronic health record data integrated with other data (such as genomic data). Types of data repositories include operational data stores (ODSs), clinical data warehouses (CDWs), clinical data marts, and clinical registries. Operational data stores are established for extracting, transforming and loading data before creating a warehouse or data marts. Clinical registries have long been in existence, but their contents are disease specific and sometimes considered archaic. Clinical data stores and clinical data warehouses are considered fast and reliable. Though these large integrated repositories have impacted clinical research significantly, the field still faces challenges and barriers. One big problem is the requirement for ethical approval by the institutional review board (IRB) for each research analysis meant for publication. Some research resources do not require IRB approval. For example, CDWs with data of deceased patients have been de-identified and IRB approval is not required for their usage. Another challenge is data quality. Methods that adjust for bias (such as using propensity score matching methods) assume that a complete health record is captured. Tools that examine data quality (e.g., by pointing to missing data) help in discovering data quality problems.
https://en.wikipedia.org/wiki?curid=351581
54,141
There were several further developments in the first decade of the twentieth century. In May 1907, Einstein explained that the expression for the energy of a moving mass point assumes the simplest form when its expression for the state of rest is chosen to be \(E_0 = mc^2\) (where \(m\) is the mass), which is in agreement with the "principle of the equivalence of mass and energy". In addition, Einstein used the formula \(M = E_0/c^2\), with \(E_0\) being the energy of a system of mass points, to describe the energy and mass increase of that system when the velocity of the differently moving mass points is increased. Max Planck rewrote Einstein's mass–energy relationship as \(M = (E_0 + pV_0)/c^2\) in June 1907, where \(p\) is the pressure and \(V_0\) the volume, to express the relation between mass, its latent energy, and thermodynamic energy within the body. Subsequently, in October 1907, this was rewritten as \(m_0 = E_0/c^2\) and given a quantum interpretation by German physicist Johannes Stark, who assumed its validity and correctness. In December 1907, Einstein expressed the equivalence in the form \(M = E_0/c^2\) and concluded: "A mass \(m\) is equivalent, as regards inertia, to a quantity of energy \(mc^2\). […] It appears far more natural to consider every inertial mass as a store of energy." American physical chemists Gilbert N. Lewis and Richard C. Tolman used two variations of the formula in 1909: \(m = E/c^2\) and \(m_0 = E_0/c^2\), with \(E\) being the relativistic energy (the energy of an object when the object is moving), \(E_0\) the rest energy (the energy when not moving), \(m\) the relativistic mass (the rest mass and the extra mass gained when moving), and \(m_0\) the rest mass. The same relations in different notation were used by Lorentz in 1913 and 1914, though he placed the energy on the left-hand side: \(E = mc^2\) and \(E_0 = m_0 c^2\), with \(E\) being the total energy (rest energy plus kinetic energy) of a moving material point, \(E_0\) its rest energy, \(m\) the relativistic mass, and \(m_0\) the invariant mass.
https://en.wikipedia.org/wiki?curid=422481
283,327
Governments at the national, regional, and local levels may implement policies to promote energy efficiency. Building energy rules can cover the energy consumption of an entire structure or specific building components, like heating and cooling systems. They represent some of the most frequently used instruments for energy efficiency improvements in buildings and can play an essential role in improving energy conservation in buildings. There are multiple reasons for the growth of these policies and programs since the 2000s, including cost savings as energy prices increased, growing concern about the environmental impacts of energy use, and public health concerns. The policies and programs related to energy conservation are critical to establishing safety and performance levels, assisting in consumer decision-making, and explicitly identifying energy-conserving and energy-efficient products. Recent policies include new programs and regulatory incentives that call for electric and natural gas utilities to increase their involvement in delivering energy-efficiency products and services to their customers. For example, the National Action Plan for Energy Efficiency (NAPEE) is a public-private partnership created in response to EPAct05 that brings together senior executives from electric and natural gas utilities, state public utility commissions, other state agencies, and environmental and consumer groups representing every region of the country. The success of building energy regulation in effectively controlling energy consumption in the building sector will be, to a great extent, associated with the adopted energy performance indicator and the promoted energy assessment tools. It can help overcome significant market barriers and ensure cost-effective energy efficiency opportunities are incorporated into new buildings. This is crucial in emerging nations where new constructions are rapidly developing, and market and energy prices sometimes discourage efficient technologies. A survey of building energy standards development and adoption showed that, among the emerging and developing countries surveyed, 42% have no energy standard in place, 20% have mandatory standards, 22% have mixed standards, and 16% have proposed standards.
https://en.wikipedia.org/wiki?curid=478933
572,139
The reason for this is that the "high energy" refers to energy "at the quantum particle level". While macroscopic systems indeed have a larger total energy content than any of their constituent quantum particles, there can be no experiment or other observation of this total energy without extracting the respective amount of energy from each of the quantum particles – which is exactly the domain of high energy physics. Daily experiences of matter and the Universe are characterized by very low energy. For example, the photon energy of visible light is about 1.8 to 3.2 eV. Similarly, the bond-dissociation energy of a carbon–carbon bond is about 3.6 eV. This is the energy scale manifesting at the macroscopic level, such as in chemical reactions. Even photons with far higher energy, gamma rays of the kind produced in radioactive decay, have photon energy that is almost always between about 10 keV and 10 MeV – still two orders of magnitude lower than the mass-energy of a single proton. Radioactive decay gamma rays are considered part of nuclear physics, rather than high energy physics.
https://en.wikipedia.org/wiki?curid=382187
911,558
Clinical research informatics (CRI) is a sub-field of health informatics that tries to improve the efficiency of clinical research by using informatics methods. Some of the problems tackled by CRI are: creation of data warehouses of health care data that can be used for research, support of data collection in clinical trials by the use of electronic data capture systems, streamlining ethical approvals and renewals (in the US the responsible entity is the local institutional review board), and maintenance of repositories of past clinical trial data (de-identified). CRI is a fairly new branch of informatics and has met growing pains, as any up-and-coming field does. Some issues CRI faces are the ability of statisticians and computer system architects to work with the clinical research staff in designing a system, and a lack of funding to support the development of a new system. Researchers and the informatics team have a difficult time coordinating plans and ideas in order to design a system that is easy to use for the research team yet fits the system requirements of the computer team. The lack of funding can be a hindrance to the development of CRI. Many organizations that are performing research are struggling to get financial support to conduct the research, much less invest that money in an informatics system that will not provide them any more income or improve the outcome of the research (Embi, 2009). The ability to integrate data from multiple clinical trials is an important part of clinical research informatics. Initiatives such as PhenX and the Patient-Reported Outcomes Measurement Information System triggered a general effort to improve secondary use of data collected in past human clinical trials. CDE initiatives, for example, try to allow clinical trial designers to adopt standardized research instruments (electronic case report forms). Parallel to the effort to standardize how data are collected are initiatives that offer de-identified patient-level clinical study data for download by researchers who wish to re-use this data. Examples of such platforms are Project Data Sphere, dbGaP, ImmPort or Clinical Study Data Request. Informatics issues in data formats for sharing results (plain CSV files, FDA-endorsed formats such as the CDISC Study Data Tabulation Model) are important challenges within the field of clinical research informatics. There are a number of activities within clinical research that CRI supports, including:
https://en.wikipedia.org/wiki?curid=351581
1,437,692
Sharing the architectural core with Stratature +EDM, Master Data Services uses a Microsoft SQL Server database as the physical data store. It is a part of the "Master Data Hub", which uses the database to store and manage data entities. It is a database with the software to validate and manage the data, and keep it synchronized with the systems that use the data. The master data hub has to extract the data from the source system; validate, sanitize and shape the data; remove duplicates; and update the hub repositories, as well as synchronize the external sources. The entity schemas, attributes, data hierarchies, validation rules and access control information are specified as metadata to the Master Data Services runtime. Master Data Services does not impose any limitation on the data model. Master Data Services also allows custom "business rules" to be defined for validating and sanitizing the data entering the data hub; these rules are then run against the data matching the specified criteria. All changes made to the data are validated against the rules, and a log of the transaction is stored persistently. Violations are logged separately, and optionally the owner is notified automatically. All the data entities can be versioned.
https://en.wikipedia.org/wiki?curid=13430116
1,639,627
The stereo model is an energy model that integrates both the position-shift model and the phase-difference model. The position-shift model suggests that the receptive fields of left and right simple cells are identical in shape but are shifted horizontally relative to each other. This model was proposed by Bishop and Pettigrew in 1986. According to the phase-difference model, the excitatory and inhibitory sub-regions of the left and right receptive fields of simple cells are shifted in phase such that their boundaries overlap. This model was developed by Ohzawa in 1990. The stereo model uses the Fourier phase dependence of simple cell responses, and it suggests that using the response of simple cells alone is not enough to accurately depict the physiological observations found in cat, monkey, and human visual pathways. In order to make the model more representative of physiological observations, the stereo model combines the responses of both simple and complex cells into a single signal. How this combination is done depends on the incoming stimulus. As one example, the model uses independent Fourier phases for some types of stimuli, and finds the preferred disparity of the complex cells equal to the left–right receptive field shift. For other stimuli, the complex cell becomes less phase sensitive than the simple cells alone, and when the complex cell's larger receptive field is included in the model, the phase sensitivity returns to results similar to normal physiological observations. In order to include the larger receptive fields of complex cells, the model averages several pairs of nearby simple cells and overlaps their receptive fields to construct the complex cell model. This allows the complex cell to be phase independent for all stimuli presented while still maintaining a receptive field shift equal to that of the simple cells it is composed of in the model.
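The underlying disparity-energy idea (quadrature pairs of binocular simple cells whose outputs are squared and summed into a complex-cell signal) can be sketched in a few lines. This is not the full stereo model described above; it only illustrates how a receptive-field position shift produces a preferred disparity. The Gabor parameters, the bar stimulus, and the 0.4-unit shift are all illustrative assumptions.

```python
import numpy as np

x = np.linspace(-2, 2, 400)
sigma, f = 0.5, 1.5                      # Gabor envelope width and spatial frequency

def gabor(phase, shift=0.0):
    # 1-D Gabor receptive field centred at `shift`
    return np.exp(-(x - shift) ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * f * (x - shift) + phase)

def complex_cell_response(stimulus_disparity, rf_shift):
    # Stimulus: a narrow bright bar at +d/2 in the left eye and -d/2 in the right eye.
    left_img  = np.exp(-(x - stimulus_disparity / 2) ** 2 / 0.01)
    right_img = np.exp(-(x + stimulus_disparity / 2) ** 2 / 0.01)
    energy = 0.0
    for phase in (0.0, np.pi / 2):       # quadrature pair of binocular simple cells
        s = left_img @ gabor(phase, +rf_shift / 2) + right_img @ gabor(phase, -rf_shift / 2)
        energy += s ** 2                 # squaring and summing gives the complex-cell energy
    return energy

disparities = np.linspace(-1, 1, 41)
tuning = [complex_cell_response(d, rf_shift=0.4) for d in disparities]
print("preferred disparity ~", disparities[int(np.argmax(tuning))])  # close to the 0.4 RF shift
```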
https://en.wikipedia.org/wiki?curid=41121858
71,319
The energy efficiency of a system or device that converts energy is measured by the ratio of the amount of useful energy put out by the system ("output energy") to the total amount of energy that is put in ("input energy") or by useful output energy as a percentage of the total input energy. In the case of fuel cells, useful output energy is measured in electrical energy produced by the system. Input energy is the energy stored in the fuel. According to the U.S. Department of Energy, fuel cells are generally between 40 and 60% energy efficient. This is higher than some other systems for energy generation. For example, the typical internal combustion engine of a car is about 25% energy efficient. Steam power plants usually achieve efficiencies of 30-40% while combined cycle gas turbine and steam plants can achieve efficiencies as high as 60%. In combined heat and power (CHP) systems, the waste heat produced by the primary power cycle - whether fuel cell, nuclear fission or combustion - is captured and put to use, increasing the efficiency of the system to up to 85–90%.
https://en.wikipedia.org/wiki?curid=11729
180,818
There are three different types of data models produced while progressing from requirements to the actual database to be used for the information system. The data requirements are initially recorded as a conceptual data model which is essentially a set of technology independent specifications about the data and is used to discuss initial requirements with the business stakeholders. The conceptual model is then translated into a logical data model, which documents structures of the data that can be implemented in databases. Implementation of one conceptual data model may require multiple logical data models. The last step in data modeling is transforming the logical data model to a physical data model that organizes the data into tables, and accounts for access, performance and storage details. Data modeling defines not just data elements, but also their structures and the relationships between them.
https://en.wikipedia.org/wiki?curid=759422
188,546
The basic definition of "energy" is a measure of a body's (in thermodynamics, the system's) ability to cause change. For example, when a person pushes a heavy box a few metres forward, that person exerts mechanical energy, also known as work, on the box over that distance. The mathematical definition of this form of energy is the product of the force exerted on the object and the distance by which the box moved (\(W = Fd\)). Because the person changed the stationary position of the box, that person exerted energy on that box. The work exerted can also be called "useful energy", because energy was converted from one form into the intended purpose, i.e. mechanical utilisation. For the case of the person pushing the box, the energy in the form of internal (or potential) energy obtained through metabolism was converted into work in order to push the box. This energy conversion, however, was not straightforward: while some internal energy went into pushing the box, some was diverted away (lost) in the form of heat (transferred thermal energy). For a reversible process, heat is the product of the absolute temperature \(T\) and the change in entropy \(\Delta S\) of a body (entropy is a measure of disorder in a system). The difference between the change in internal energy, which is \(\Delta U\), and the energy lost in the form of heat is what is called the "useful energy" of the body, or the work of the body performed on an object. In thermodynamics, this is what is known as "free energy". In other words, free energy is a measure of work (useful energy) a system can perform at constant temperature. Mathematically, free energy is expressed as \(\Delta F = \Delta U - T\Delta S\).
https://en.wikipedia.org/wiki?curid=39221
598,457
A data architecture aims to set data standards for all its data systems as a vision or a model of the eventual interactions between those data systems. Data integration, for example, should be dependent upon data architecture standards since data integration requires data interactions between two or more data systems. A data architecture, in part, describes the data structures used by a business and its computer applications software. Data architectures address data in storage, data in use, and data in motion; descriptions of data stores, data groups, and data items; and mappings of those data artifacts to data qualities, applications, locations, etc.
https://en.wikipedia.org/wiki?curid=4071997
918,307
In quantum computing, a quantum algorithm is an algorithm which runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical (or non-quantum) algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is usually used for those algorithms which seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
https://en.wikipedia.org/wiki?curid=632489
965,555
In thermodynamics, non-mechanical work is to be contrasted with mechanical work that is done by forces in immediate contact between the system and its surroundings. If the putative 'work' of a process cannot be defined as either long-range work or else as contact work, then sometimes it cannot be described by the thermodynamic formalism as work at all. Nevertheless, the thermodynamic formalism allows that energy can be transferred between an open system and its surroundings by processes for which work is not defined. An example is when the wall between the system and its surroundings is not considered as idealized and vanishingly thin, so that processes can occur within the wall, such as friction affecting the transfer of matter across the wall; in this case, the forces of transfer are neither strictly long-range nor strictly due to contact between the system and its surroundings; the transfer of energy can then be considered as by convection, and assessed in sum just as transfer of internal energy. This is conceptually different from transfer of energy as heat through a thick fluid-filled wall in the presence of a gravitational field, between a closed system and its surroundings; in this case there may be convective circulation within the wall, but the process may still be considered as transfer of energy as heat between the system and its surroundings; if the whole wall is moved by the application of force from the surroundings, without change of volume of the wall, so as to change the volume of the system, then it is also at the same time transferring energy as work. A chemical reaction within a system can lead to electrical long-range forces and to electric current flow, which transfer energy as work between system and surroundings, though the system's chemical reactions themselves (except for the special limiting case in which they are driven through devices in the surroundings so as to occur along a line of thermodynamic equilibrium) are always irreversible and do not directly interact with the surroundings of the system.
https://en.wikipedia.org/wiki?curid=3616613
1,899,210
Several types of RE co-ops exist, based on the type of energy source used, what added value they bring, and the business model they follow. The type of energy source is exactly what it sounds like: does the RE co-op focus on solar, wind, geothermal, biogas, biomass, and/or tidal energy? The added value describes the role that the cooperative plays in the overall market: consumer, producer, distributor, trader, or a hybrid of these. Lastly, the business model refers to the size of the cooperative, the composition of its members, and the method of interaction with other organizations. There are six models: Local Group of Citizens, Regional-National RE co-ops, Fully Integrated RE co-ops, Network of RE co-ops, Multi-Stakeholder Governance Model, and Non-Energy-Focused Organizations. The Local Group of Citizens model centers around a specific project for a local cooperative. It is very small and localized, so the members are the only investors. Once the project is completed, it usually does not expand. In fact, such co-ops are often referred to as a renewable energy source (RES) project, since they revolve around a single renewable energy creation or improvement. Regional-National cooperatives operate at either a regional or national level. They are made up of a mix of volunteers and employees and can be financed by both members and investors, unlike the Local Group of Citizens model. Started by a group of citizens, this model serves as bottom-up development for several projects at once. Co-ops using this model can help develop single RES projects but tend to work towards larger goals (e.g. wind energy across the country). A Fully Integrated model holds and controls the entire energy market for itself. It covers production, supply, distribution when possible, and other services. Evolving from the previous two models, a Fully Integrated model strives for energy independence from the common large company-dominated market. A Network of RE co-ops is a structured group that connects several RE co-ops. It works to improve operations and balance time, resources, and the overall economy through education and rules. This model embodies horizontal integration. Multi-Stakeholder Governance models, on the other hand, are based on vertical integration. This type of RE co-op is composed of members and agents of the energy market such as consumers, producers, workers, etc. These co-ops can exist on several levels, from local to regional, but must include a mixed composition of supporters. Finally, Non-Energy-Focused Organizations do not operate for the purpose of renewable energy sourcing but create RES projects in the process of accomplishing a different goal. They resemble RE co-ops in action, but differ in their objectives.
https://en.wikipedia.org/wiki?curid=65691512
249,581
The tally of quantum numbers varies from system to system and has no universal answer. Hence these parameters must be found for each system to be analyzed. A quantized system requires at least one quantum number. The dynamics (i.e. time evolution) of any quantum system are described by a quantum operator in the form of a Hamiltonian, \(H\). There is one quantum number of the system corresponding to the system's energy; i.e., one of the eigenvalues of the Hamiltonian. There is also one quantum number for each linearly independent operator that commutes with the Hamiltonian. A complete set of commuting observables (CSCO) that commute with the Hamiltonian characterizes the system with all its quantum numbers. There is a one-to-one relationship between the quantum numbers and the operators of the CSCO, with each quantum number taking one of the eigenvalues of its corresponding operator. As a result of the different bases that may be arbitrarily chosen to form a complete set of commuting operators, different sets of quantum numbers may be used for the description of the same system in different situations.
https://en.wikipedia.org/wiki?curid=532405
261,314
Cell biology (also cellular biology or cytology) is a branch of biology that studies the structure, function, and behavior of cells. All living organisms are made of cells. A cell is the basic unit of life that is responsible for the living and functioning of organisms. Cell biology is the study of structural and functional units of cells. Cell biology encompasses both prokaryotic and eukaryotic cells and has many subtopics which may include the study of cell metabolism, cell communication, cell cycle, biochemistry, and cell composition. The study of cells is performed using several microscopy techniques, cell culture, and cell fractionation. These have allowed for and are currently being used for discoveries and research pertaining to how cells function, ultimately giving insight into understanding larger organisms. Knowing the components of cells and how cells work is fundamental to all biological sciences while also being essential for research in biomedical fields such as cancer, and other diseases. Research in cell biology is interconnected to other fields such as genetics, molecular genetics, molecular biology, medical microbiology, immunology, and cytochemistry.
https://en.wikipedia.org/wiki?curid=6339
273,461
"Statistical proof" from data refers to the application of statistics, data analysis, or Bayesian analysis to infer propositions regarding the probability of data. While "using" mathematical proof to establish theorems in statistics, it is usually not a mathematical proof in that the "assumptions" from which probability statements are derived require empirical evidence from outside mathematics to verify. In physics, in addition to statistical methods, "statistical proof" can refer to the specialized "mathematical methods of physics" applied to analyze data in a particle physics experiment or observational study in physical cosmology. "Statistical proof" may also refer to raw data or a convincing diagram involving data, such as scatter plots, when the data or diagram is adequately convincing without further analysis.
https://en.wikipedia.org/wiki?curid=82285
352,737
In atomic physics, the Bohr model or Rutherford–Bohr model, presented by Niels Bohr and Ernest Rutherford in 1913, is a system consisting of a small, dense nucleus surrounded by orbiting electrons—similar to the structure of the Solar System, but with attraction provided by electrostatic forces in place of gravity. It came after the solar system Joseph Larmor model (1897), the solar system Jean Perrin model (1901), the cubical model (1902), the Hantaro Nagaoka Saturnian model (1904), the plum pudding model (1904), the quantum Arthur Haas model (1910), the Rutherford model (1911), and the nuclear quantum John William Nicholson model (1912). The improvement over the 1911 Rutherford model mainly concerned the new quantum physical interpretation introduced by Haas and Nicholson, while forsaking any attempt to explain radiation according to classical physics.
https://en.wikipedia.org/wiki?curid=4831
402,007
The formula implies that bound systems have an invariant mass (rest mass for the system) less than the sum of their parts, if the binding energy has been allowed to escape the system after the system has been bound. This may happen by converting system potential energy into some other kind of active energy, such as kinetic energy or photons, which easily escape a bound system. The difference in system masses, called a mass defect, is a measure of the binding energy in bound systems – in other words, the energy needed to break the system apart. The greater the mass defect, the larger the binding energy. The binding energy (which itself has mass) must be released (as light or heat) when the parts combine to form the bound system, and this is the reason the mass of the bound system decreases when the energy leaves the system. The total invariant mass is actually conserved, when the mass of the binding energy that has escaped, is taken into account.
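As a worked example of the mass defect described above, the binding energy of a helium-4 nucleus can be computed from its constituent masses. The numbers below are approximate standard values chosen for illustration; they are not taken from the passage.

```python
# Mass defect and binding energy of helium-4 (approximate nuclear masses in atomic mass units).
m_proton, m_neutron, m_he4 = 1.007276, 1.008665, 4.001506
u_to_MeV = 931.494                        # energy equivalent of 1 u in MeV

mass_defect = 2 * m_proton + 2 * m_neutron - m_he4
binding_energy = mass_defect * u_to_MeV
print(f"mass defect   : {mass_defect:.6f} u")        # ~0.0304 u
print(f"binding energy: {binding_energy:.1f} MeV")    # ~28.3 MeV released when the nucleus forms
```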
https://en.wikipedia.org/wiki?curid=145040
768,737
In physics, the main method of solution is to find the probability distribution function as a function of time using the equivalent Fokker–Planck equation (FPE). The Fokker–Planck equation is a deterministic partial differential equation. It tells how the probability distribution function evolves in time similarly to how the Schrödinger equation gives the time evolution of the quantum wave function or the diffusion equation gives the time evolution of chemical concentration. Alternatively, numerical solutions can be obtained by Monte Carlo simulation. Other techniques include the path integration that draws on the analogy between statistical physics and quantum mechanics (for example, the Fokker-Planck equation can be transformed into the Schrödinger equation by rescaling a few variables) or by writing down ordinary differential equations for the statistical moments of the probability distribution function.
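A minimal sketch of the Monte Carlo alternative mentioned above, using the Ornstein–Uhlenbeck process as a hypothetical example (not taken from the text): the empirical variance of simulated paths is compared against the stationary solution of the corresponding Fokker–Planck equation, which for this process is a Gaussian with variance sigma^2/(2*theta).

# Sketch: Euler-Maruyama simulation of dx = -theta*x dt + sigma dW, compared
# with the stationary Fokker-Planck prediction for the variance.
import numpy as np

theta, sigma = 1.0, 0.5
dt, n_steps, n_paths = 1e-3, 20_000, 5_000

rng = np.random.default_rng(0)
x = np.zeros(n_paths)
for _ in range(n_steps):
    # one Euler-Maruyama step for every sample path
    x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

print("Monte Carlo variance:    ", x.var())
print("Fokker-Planck prediction:", sigma**2 / (2 * theta))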
https://en.wikipedia.org/wiki?curid=1361454
907,944
In the thermodynamics of equilibrium, a state function, function of state, or point function for a thermodynamic system is a mathematical function relating several state variables or state quantities (that describe equilibrium states of a system) that depend only on the current equilibrium thermodynamic state of the system (e.g. gas, liquid, solid, crystal, or emulsion), not the path which the system has taken to reach that state. A state function describes equilibrium states of a system, thus also describing the type of system. A state variable is typically a state function so the determination of other state variable values at an equilibrium state also determines the value of the state variable as the state function at that state. The ideal gas law is a good example. In this law, one state variable (e.g., pressure, volume, temperature, or the amount of substance in a gaseous equilibrium system) is a function of other state variables so is regarded as a state function. A state function could also describe the number of a certain type of atoms or molecules in a gaseous, liquid, or solid form in a heterogeneous or homogeneous mixture, or the amount of energy required to create such a system or change the system into a different equilibrium state.
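As a small illustration (hypothetical numbers, not from the text), the ideal gas law expressed as a state function: the computed pressure depends only on the current values of the other state variables, not on the path by which the gas reached that state.

# Sketch: pressure as a state function P(n, T, V) = nRT/V for an ideal gas.
R = 8.314  # gas constant, J/(mol*K)

def pressure(n_mol, temperature_k, volume_m3):
    """Pressure at a given equilibrium state, independent of the path taken."""
    return n_mol * R * temperature_k / volume_m3

# The same final state gives the same pressure regardless of history:
print(pressure(n_mol=1.0, temperature_k=300.0, volume_m3=0.025))  # ~99,768 Pa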
https://en.wikipedia.org/wiki?curid=341436
1,066,989
The intrinsic reaction coordinate (IRC), derived from the potential energy surface, is a parametric curve that connects two energy minima in the direction that traverses the minimum energy barrier (or shallowest ascent) passing through one or more saddle point(s). However, in reality, if a reacting species attains enough energy it may deviate from the IRC to some extent. The energy values (points on the hyper-surface) along the reaction coordinate result in a 1-D energy surface (a line) and, when plotted against the reaction coordinate (energy vs reaction coordinate), give what is called a reaction coordinate diagram (or energy profile). Another way of visualizing an energy profile is as a cross section of the hyper-surface, or surface, along the reaction coordinate. Figure 5 shows an example of a cross section, represented by the plane, taken along the reaction coordinate, where the potential energy is represented as a function or composite of two geometric variables to form a 2-D energy surface. In principle, the potential energy function can depend on N variables, but since an accurate visual representation of a function of 3 or more variables cannot be produced (excluding level hypersurfaces) a 2-D surface has been shown. The points on the surface that intersect the plane are then projected onto the reaction coordinate diagram (shown on the right) to produce a 1-D slice of the surface along the IRC. The reaction coordinate is described by its parameters, which are frequently given as a composite of several geometric parameters, and can change direction as the reaction progresses so long as the smallest energy barrier (or activation energy (Ea)) is traversed. The saddle point represents the highest energy point lying on the reaction coordinate connecting the reactant and product; this is known as the transition state. A reaction coordinate diagram may also have one or more transient intermediates, which are shown by high-energy wells connected via a transition state peak. Any chemical structure that lasts longer than the time for typical bond vibrations (on the order of 10⁻¹³ – 10⁻¹⁴ s) can be considered an intermediate.
https://en.wikipedia.org/wiki?curid=14149235
1,572,905
For example, the theorem does not exclude quantum scarring, as the phase space volume of the scars also gradually vanishes in this limit. A quantum eigenstate is scarred by a periodic orbit if its probability density on the classical invariant manifolds near and all along that periodic orbit is systematically enhanced above the classical, statistically expected density along that orbit. In a simplified manner, a quantum scar refers to an eigenstate whose probability density is enhanced in the neighborhood of a classical periodic orbit when the corresponding classical system is chaotic. In conventional scarring, the responsible periodic orbit is unstable. The instability is a decisive point that separates quantum scars from a more trivial finding that the probability density is enhanced near stable periodic orbits due to Bohr's correspondence principle. The latter can be viewed as a purely classical phenomenon, whereas in the former quantum interference is important. On the other hand, in perturbation-induced quantum scarring, some of the high-energy eigenstates of a locally perturbed quantum dot contain scars of short periodic orbits of the corresponding unperturbed system. Even though similar in appearance to ordinary quantum scars, these scars have a fundamentally different origin. In this type of scarring, there are no periodic orbits in the perturbed classical counterpart or they are too unstable to cause a scar in a conventional sense. Conventional and perturbation-induced scars are both a striking visual example of classical-quantum correspondence and of a quantum suppression of chaos (see the figure). In particular, scars are a significant correction to the assumption that the corresponding eigenstates of a classically chaotic Hamiltonian are only featureless and random. In some sense, scars can be considered as an eigenstate counterpart to the quantum ergodicity theorem of how short periodic orbits provide corrections to the universal random matrix theory eigenvalue statistics.
https://en.wikipedia.org/wiki?curid=23855903
1,573,961
His research interests are extremely wide and include physics of ultra-cold gases (Bose-Einstein condensation, quantum dynamics of degenerate gases, laser induced condensation, theory of master equation and open systems for many body systems, ultra-cold Fermi gases, strongly correlated atomic and molecular systems, ultra-cold disordered and frustrated gases, ultra-cold dipolar gases, ultra-cold gases and quantum gauge theories), Quantum Information (theory of entanglement; implementations in quantum optical systems, quantum communications, quantum cryptography, quantum computers, quantum networks and entanglement percolation), Statistical Physics (stochastic processes; dynamical critical phenomena, spin glasses and disordered systems; statistical physics of neural networks; complex systems; interdisciplinary applications of statistical physics in neurophysiology, cognitive science and social psychology), Mathematical Physics (mathematical foundations of quantum mechanics and entanglement theory, rigorous statistical mechanics), Laser-matter interactions (interactions of intense laser with atoms, molecules, and plasmas; new sources of coherent XUV radiation and X-rays; ultrafast phenomena in atoms, molecules and solid state, atto-second physics, classical and complex dynamics of atomic systems), Quantum Optics (cavity quantum electrodynamics; cooling and trapping of atoms, non-classical states of light and matter; foundations of quantum mechanics; classical and quantum stochastic processes).
https://en.wikipedia.org/wiki?curid=46396821
1,613,467
The accuracy of semiclassical models is compared based on the BTE by investigating how they treat the classical velocity overshoot problem, a key short channel effect (SCE) in transistor structures. Essentially, velocity overshoot is a nonlocal effect of scaled devices, which is related to the experimentally observed increase in current drive and transconductance. As the channel length becomes smaller, the velocity is no longer saturated in the high field region, but it overshoots the predicted saturation velocity. The cause of this phenomenon is that the carrier transit time becomes comparable to the energy relaxation time, and therefore the mobile carriers do not have enough time to reach equilibrium with the applied electric field by scattering in the short channel devices. The summary of simulation results (Illinois Tool: MOCA) with the DD and HD models is shown in the figure beside. In figure (a), the case when the field is not high enough to cause the velocity overshoot effect in the whole channel region is shown. Note that in this limit, the data from the DD model fit well to the MC model in the non-overshoot region, but the HD model overestimates the velocity in that region. The velocity overshoot is observed only near the drain junction in the MC data, and the HD model fits well in that region. From the MC data, it can be noticed that the velocity overshoot effect is abrupt in the high-field region, which is not properly included in the HD model. For high field conditions, as shown in figure (b), the velocity overshoot effect occurs almost all over the channel, and the HD results and the MC results are very close in the channel region.
https://en.wikipedia.org/wiki?curid=29783201
1,767,314
In computational physics and more specifically in quantum mechanics, the ground state energies of quantum systems are associated with the top of the spectrum of Schrödinger's operators. The Schrödinger equation is the quantum mechanics version of Newton's second law of motion of classical mechanics (the mass times the acceleration is the sum of the forces). This equation represents the wave function (a.k.a. the quantum state) evolution of some physical system, including molecular, atomic, or subatomic systems, as well as macroscopic systems like the universe. The solution of the imaginary time Schrödinger equation (a.k.a. the heat equation) is given by a Feynman-Kac distribution associated with a free evolution Markov process (often represented by Brownian motions) in the set of electronic or macromolecular configurations and some potential energy function. The long time behavior of these nonlinear semigroups is related to top eigenvalues and ground state energies of Schrödinger's operators. The genetic type mean field interpretation of these Feynman-Kac models is termed Resample Monte Carlo, or Diffusion Monte Carlo methods. These branching type evolutionary algorithms are based on mutation and selection transitions. During the mutation transition, the walkers evolve randomly and independently in a potential energy landscape on particle configurations. The mean field selection process (a.k.a. quantum teleportation, population reconfiguration, resampled transition) is associated with a fitness function that reflects the particle absorption in an energy well. Configurations with low relative energy are more likely to duplicate. In molecular chemistry and statistical physics, mean field particle methods are also used to sample Boltzmann-Gibbs measures associated with some cooling schedule, and to compute their normalizing constants (a.k.a. free energies, or partition functions).
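A minimal, self-contained sketch of a branching (diffusion) Monte Carlo estimate in the spirit described above, applied to a one-dimensional harmonic oscillator with V(x) = x^2/2 (in units where the exact ground state energy is 0.5); the specific potential, step sizes, and population-control rule are illustrative choices, not details from the text.

# Sketch of a mutation/selection (branching) Monte Carlo run for V(x) = x^2/2.
# Mutation: free diffusion of walkers.  Selection: duplication or removal of
# walkers weighted by exp(-dt*(V(x) - E_ref)), with E_ref adjusted to keep the
# population near a target size.
import numpy as np

rng = np.random.default_rng(1)
dt, n_steps, target = 0.01, 2_000, 2_000
walkers = rng.normal(size=target)              # initial particle configurations

V = lambda x: 0.5 * x**2                       # potential energy landscape
e_ref = np.mean(V(walkers))                    # running reference energy

for _ in range(n_steps):
    # mutation: walkers evolve randomly and independently
    walkers = walkers + np.sqrt(dt) * rng.standard_normal(walkers.size)
    # selection: branching weights reflect absorption in the energy well
    weights = np.exp(-dt * (V(walkers) - e_ref))
    copies = (weights + rng.random(walkers.size)).astype(int)
    walkers = np.repeat(walkers, copies)
    # simple population control nudges E_ref toward the ground state energy
    e_ref += (1.0 - walkers.size / target) / 10.0

print("ground state energy estimate ≈", round(float(np.mean(V(walkers))), 3))  # near 0.5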
https://en.wikipedia.org/wiki?curid=43677277
1,822,787
In the section of the book on quantum algorithms, chapter 7 includes material on quantum complexity theory and the Deutsch algorithm, Deutsch–Jozsa algorithm, Bernstein–Vazirani algorithm, and Simon's algorithm, algorithms devised to prove separations in quantum complexity by solving certain artificial problems faster than could be done classically. It also covers the quantum Fourier transform. Chapter 8 covers Shor's algorithm for integer factorization, and introduces the hidden subgroup problem. Chapter 9 covers Grover's algorithm and the quantum counting algorithm for speeding up certain kinds of brute-force search. The remaining chapters return to the topic of quantum entanglement and discuss quantum decoherence, quantum error correction, and its use in designing robust quantum computing devices, with the final chapter providing an overview of the subject and connections to additional topics. Appendices provide a graphical approach to tensor products of probability spaces, and extend Shor's algorithm to the abelian hidden subgroup problem.
https://en.wikipedia.org/wiki?curid=63448622
1,936,050
To extract relevant data from large data sets, TBI employs various methods such as data consolidation, data federation, and data warehousing. In the data consolidation approach, data is extracted from various sources and centralized in a single database. This approach enables standardization of heterogeneous data and helps address issues in interoperability and compatibility among data sets. However, proponents of this method often encounter difficulties in updating their databases as it is based on a single data model. In contrast, the data federation approach links databases together and extracts data on a regular basis, then combines the data for queries. The benefit of this approach is that it enables the user to access real-time data on a single portal. However, the limitation of this is that data collected may not always be synchronized as it is derived from multiple sources. Data warehousing provides a single unified platform for data curation. Data warehousing integrates data from multiple sources into a common format, and is typically used in bioscience exclusively for decision support purposes.
https://en.wikipedia.org/wiki?curid=37670240
66,018
Though the term is sometimes used loosely, partly because of a lack of a formal definition, the interpretation that seems to best describe big data is the one associated with a large body of information that could not be comprehended when used only in smaller amounts. In its primary definition, though, big data refers to data sets that are too large or complex to be dealt with by traditional data-processing application software. Data with many entries (rows) offer greater statistical power, while data with higher complexity (more attributes or columns) may lead to a higher false discovery rate. Big data analysis challenges include capturing data, data storage, data analysis, search, sharing, transfer, visualization, querying, updating, information privacy, and data source. Big data was originally associated with three key concepts: "volume", "variety", and "velocity". The analysis of big data presents challenges in sampling, which previously allowed only for observation and sampling. A fourth concept, "veracity", refers to the quality or insightfulness of the data. Without sufficient investment in expertise for big data veracity, the volume and variety of data can produce costs and risks that exceed an organization's capacity to create and capture "value" from big data.
https://en.wikipedia.org/wiki?curid=27051151
266,778
It is recognized that much energy is lost in the production of energy commodities themselves, such as nuclear energy, photovoltaic electricity or high-quality petroleum products. "Net energy content" is the energy content of the product minus energy input used during extraction and conversion, directly or indirectly. A controversial early result of LCEA claimed that manufacturing solar cells requires more energy than can be recovered in using the solar cell. Although these results were true when solar cells were first manufactured, their efficiency increased greatly over the years. Currently, the energy payback time of photovoltaic solar panels ranges from a few months to several years. Module recycling could further reduce the energy payback time to around one month. Another new concept that flows from life cycle assessments is energy cannibalism. Energy cannibalism refers to an effect where rapid growth of an entire energy-intensive industry creates a need for energy that uses (or cannibalizes) the energy of existing power plants. Thus, during rapid growth, the industry as a whole produces no energy because new energy is used to fuel the embodied energy of future power plants. Work has been undertaken in the UK to determine the life cycle energy (alongside full LCA) impacts of a number of renewable technologies.
https://en.wikipedia.org/wiki?curid=604896
469,314
The main reason for differences in embodied energy data between databases is due to the source of data and methodology used in their compilation. Bottom-up 'process' data is typically sourced from product manufacturers and suppliers. While this data is generally more reliable and specific to particular products, the methodology used to collect process data typically results in much of the embodied energy of a product being excluded, mainly due to the time, costs and complexity of data collection. Top-down environmentally-extended input-output (EEIO) data, based on national statistics can be used to fill these data gaps. While EEIO analysis of products can be useful on its own for initial scoping of embodied energy, it is generally much less reliable than process data and rarely relevant for a specific product or material. Hence, hybrid methods for quantifying embodied energy have been developed, using available process data and filling any data gaps with EEIO data. Databases that rely on this hybrid approach, such as The University of Melbourne's EPiC Database, provide a more comprehensive assessment of the embodied energy of products and materials.
https://en.wikipedia.org/wiki?curid=1520238
533,733
Many people do not install a heating, ventilation, and air conditioning (HVAC) system in their homes because they believe it is too expensive. According to the article "Save Money Through Energy Efficiency", however, HVAC is not as expensive as one may think, and the main thing to look for when shopping for a system is that it runs efficiently. One indicator is the yellow energy guide sticker on the machine, which displays the average cost to run it. Although some buyers dismiss the sticker as a sales aid, newer HVAC systems carrying it have saved customers hundreds to thousands of dollars, depending on how much the system is used. Once a suitable system is installed, a unit that is only used during specific times of year should still be turned on and left running for ten to fifteen minutes each month. A system that runs frequently, on the other hand, needs regular maintenance: changing the air filter, inspecting the areas where air intake takes place, and checking for leaks. These three steps, done every couple of months or whenever a problem is suspected, are key to keeping an HVAC system running for a long time. Warning signs of a problem include air that is not cool enough, which can indicate a leak of cooling fluid, and a bad smell in the supplied air, which usually means the air filters need to be replaced. Replacing the filters is particularly important because, depending on where the system sits in the home, they are exposed to a great deal of dust, which builds up over time.
https://en.wikipedia.org/wiki?curid=396508
735,959
In the realm of quantum communication, one wants to send qubits from one quantum processor to another over long distances. This way, local quantum networks can be interconnected into a quantum internet. A quantum internet supports many applications, which derive their power from the fact that by creating quantum entangled qubits, information can be transmitted between the remote quantum processors. Most applications of a quantum internet require only very modest quantum processors. For most quantum internet protocols, such as quantum key distribution in quantum cryptography, it is sufficient if these processors are capable of preparing and measuring only a single qubit at a time. This is in contrast to quantum computing where interesting applications can only be realized if the (combined) quantum processors can easily simulate more qubits than a classical computer (around 60). Quantum internet applications require only small quantum processors, often just a single qubit, because quantum entanglement can already be realized between just two qubits. A simulation of an entangled quantum system on a classical computer cannot simultaneously provide the same security and speed.
https://en.wikipedia.org/wiki?curid=2325953
780,862
Another example is the case of using an application programming interface (API) to interact with a runtime system. The calls to that API look the same as calls to a regular software library, however at some point during the call the execution model changes. The runtime system implements an execution model different from that of the language the library is written in terms of. A person reading the code of a normal library would be able to understand the library's behavior by just knowing the language the library was written in. However, a person reading the code of the API that invokes a runtime system would not be able to understand the behavior of the API call just by knowing the language the call was written in. At some point, via some mechanism, the execution model stops being that of the language the call is written in and switches over to being the execution model implemented by the runtime system. For example, the trap instruction is one method of switching execution models. This difference is what distinguishes an API-invoked execution model, such as Pthreads, from a usual software library. Both Pthreads calls and software library calls are invoked via an API, but Pthreads behavior cannot be understood in terms of the language of the call. Rather, Pthreads calls bring into play an outside execution model, which is implemented by the Pthreads runtime system (this runtime system is often the OS kernel).
https://en.wikipedia.org/wiki?curid=2106840
801,688
As mentioned, especially for hydrogen/deuterium substitution, most kinetic isotope effects arise from the difference in zero-point energy (ZPE) between the reactants and the transition state of the isotopologues in question, and this difference can be understood qualitatively with the following description: within the Born–Oppenheimer approximation, the potential energy surface is the same for both isotopic species. However, a quantum-mechanical treatment of the energy introduces discrete vibrational levels onto this curve, and the lowest possible energy state of a molecule corresponds to the lowest vibrational energy level, which is slightly higher in energy than the minimum of the potential energy curve. This difference, referred to as the zero-point energy, is a manifestation of the Heisenberg uncertainty principle that necessitates an uncertainty in the C-H or C-D bond length. Since the heavier (in this case the deuterated) species behaves more "classically," its vibrational energy levels are closer to the classical potential energy curve, and it has a lower zero-point energy. The zero-point energy differences between the two isotopic species, at least in most cases, diminish in the transition state, since the bond force constant decreases during bond breaking. Hence, the lower zero-point energy of the deuterated species translates into a larger activation energy for its reaction, as shown in the following figure, leading to a normal kinetic isotope effect. This effect should, in principle, take into account all 3"N"−6 vibrational modes for the starting material and 3"N"−7 vibrational modes at the transition state (one mode, the one corresponding to the reaction coordinate, is missing at the transition state, since a bond breaks and there is no restorative force against the motion). The harmonic oscillator is a good approximation for a vibrating bond, at least for low-energy vibrational states. Quantum mechanics gives the vibrational zero-point energy as formula_13. Thus, we can readily interpret the factor of ½ and the sums of formula_14 terms over ground state and transition state vibrational modes in the exponent of the simplified formula above. For a harmonic oscillator, the vibrational frequency is inversely proportional to the square root of the reduced mass of the vibrating system.
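A back-of-the-envelope sketch of the resulting H/D kinetic isotope effect, using typical textbook stretch frequencies (not values from the text) and assuming the C–H/C–D stretch is completely lost at the transition state.

# Rough semiclassical estimate of k_H/k_D from the reactant zero-point energy
# difference alone; the frequencies and the sqrt(2) mass scaling are crude,
# illustrative assumptions.
import math

h = 6.62607015e-34        # Planck constant, J*s
c = 2.99792458e10         # speed of light in cm/s (wavenumbers -> frequency)
k_B = 1.380649e-23        # Boltzmann constant, J/K
T = 298.0                 # temperature, K

nu_CH = 2900.0                     # typical C-H stretch, cm^-1
nu_CD = nu_CH / math.sqrt(2)       # crude reduced-mass scaling, ~2050 cm^-1

delta_zpe = 0.5 * h * c * (nu_CH - nu_CD)   # ZPE difference lost on activation
kie = math.exp(delta_zpe / (k_B * T))
print(f"k_H / k_D ≈ {kie:.1f}")             # in the classic range of roughly 6-8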
https://en.wikipedia.org/wiki?curid=1106771
844,277
Strictly speaking the above equation holds also for systems with chemical reactions if the terms in the balance equation are taken to refer to total mass, i.e. the sum of all the chemical species of the system. In the absence of a chemical reaction the amount of any chemical species flowing in and out will be the same; this gives rise to an equation for each species present in the system. However, if this is not the case then the mass balance equation must be amended to allow for the generation or depletion (consumption) of each chemical species. Some use one term in this equation to account for chemical reactions, which will be negative for depletion and positive for generation. However, the conventional form of this equation is written to account for both a positive generation term (i.e. product of reaction) and a negative consumption term (the reactants used to produce the products). Although overall one term will account for the total balance on the system, if this balance equation is to be applied to an individual species and then the entire process, both terms are necessary. This modified equation can be used not only for reactive systems, but for population balances such as arise in particle mechanics problems. The equation is given below; note that it simplifies to the earlier equation in the case that the generation term is zero.
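A small worked sketch of the amended balance (hypothetical numbers): a steady-state balance on a species A consumed by a first-order reaction in a well-mixed tank, where the generation term is negative for the reactant and positive for the product.

# Steady state: In - Out + Generation = 0 for species A, with first-order
# consumption A -> B.  All values are illustrative.
F = 0.002     # volumetric flow in and out, m^3/s
V = 1.0       # tank volume, m^3
k = 0.01      # first-order rate constant, 1/s
C_in = 100.0  # inlet concentration of A, mol/m^3

# F*C_in - F*C_out - k*V*C_out = 0  =>  C_out = C_in / (1 + k*V/F)
C_out = C_in / (1.0 + k * V / F)
generation_A = -k * V * C_out     # negative: A is consumed
generation_B = -generation_A      # positive: B is generated

print(f"outlet concentration of A: {C_out:.2f} mol/m^3")
print(f"A consumed: {-generation_A:.4f} mol/s, B generated: {generation_B:.4f} mol/s")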
https://en.wikipedia.org/wiki?curid=2428476
894,304
Quantum analogues or generalizations of classical neural nets are often referred to as quantum neural networks. The term is claimed by a wide range of approaches, including the implementation and extension of neural networks using photons, layered variational circuits or quantum Ising-type models. Quantum neural networks are often defined as an expansion on Deutsch's model of a quantum computational network. Within this model, nonlinear and irreversible gates, dissimilar to the Hamiltonian operator, are deployed to speculate the given data set. Such gates make certain phases unable to be observed and generate specific oscillations. Quantum neural networks apply the principles of quantum information and quantum computation to classical neurocomputing. Current research shows that QNN can exponentially increase the amount of computing power and the degrees of freedom for a computer, which for a classical computer are limited by its size. A quantum neural network has computational capabilities to decrease the number of steps, qubits used, and computation time. The wave function is to quantum mechanics what the neuron is to neural networks. To test quantum applications in a neural network, quantum dot molecules are deposited on a substrate of GaAs or similar to record how they communicate with one another. Each quantum dot can be referred to as an island of electric activity, and when such dots are close enough (approximately 10–20 nm) electrons can tunnel underneath the islands. An even distribution across the substrate in sets of two creates dipoles and ultimately two spin states, up or down. These states are commonly known as qubits with corresponding states of formula_8 and formula_9 in Dirac notation.
https://en.wikipedia.org/wiki?curid=44108758
1,526,374
The first agent-based model is a multiscale model of mammary gland development starting with a rudimentary mammary ductal tree at the onset of puberty (during active proliferation) all the way to a full mammary gland at adulthood (when there is little proliferation). The model consists of millions of agents, with each agent representing a mammary stem cell, a progenitor cell, or a differentiated cell in the breast. Simulations were first run on the Lawrence Berkeley National Laboratory Lawrencium supercomputer to parameterize and benchmark the model against a variety of "in vivo" mammary gland measurements. The model was then used to test the three different mechanisms to determine which one led to simulation results that matched "in vivo" experiments the best. Surprisingly, radiation-induced cell inactivation by death did not contribute to increased stem cell frequency independently of the dose delivered in the model. Instead the model revealed that the combination of increased self-renewal and cell proliferation during puberty led to stem cell enrichment. In contrast epithelial-mesenchymal transition in the model was shown to increase stem cell frequency not only in pubertal mammary glands but also in adult glands. This latter prediction, however, contradicted the "in vivo" data; irradiation of adult mammary glands did not lead to increased stem cell frequency. These simulations therefore suggested self-renewal as the primary mechanism behind pubertal stem cell increase.
https://en.wikipedia.org/wiki?curid=29782518
1,582,119
In their article in which they coin the term 'critical data studies,' Dalton and Thatcher also provide several justifications as to why data studies is a discipline worthy of a critical approach. First, 'big data' is an important aspect of twenty-first century society, and the analysis of 'big data' allows for a deeper understanding of what is happening and for what reasons. Furthermore, big data as a technological tool and the information that it yields are not neutral, according to Dalton and Thatcher, making it worthy of critical analysis in order to identify and address its biases. Building off this idea, another justification for a critical approach is that the relationship between big data and society is an important one, and therefore worthy of study. Dalton and Thatcher stress how the relationship is not an example of technological determinism, but rather how big data can shape the lives of individuals. Big data technology can cause significant changes in society's structure and in the everyday lives of people, and, being a product of society, big data technology is worthy of sociological investigation. Moreover, data sets are almost never completely raw, that is to say without any influences. Rather, data are shaped by the vision or goals of a research team, and during the data collection process, certain things are quantified, stored, sorted and even discarded by the research team. A critical approach is thus necessary in order to understand and reveal the intent behind the information being presented. Moreover, data alone cannot speak for themselves; in order to possess any concrete meaning, data must be accompanied by theoretical insight or alternative quantitative or qualitative research measures. Dalton and Thatcher argue that if one were to only think of data in terms of its exploitative power, there is no possibility of using data for revolutionary, liberatory purposes. Finally, Dalton and Thatcher propose that a critical approach in studying data allows for 'big data' to be combined with older, 'small data,' and thus create more thorough research, opening up more opportunities, questions and topics to be explored.
https://en.wikipedia.org/wiki?curid=51578025
1,603,990
Compared to the highly complex microenvironment "in vivo", traditional mono-culture of single cell types "in vitro" only provides limited information about cellular behavior due to the lack of interactions with other cell types. Typically, cell-to-cell signaling can be divided into four categories depending on the distance: endocrine signaling, paracrine signaling, autocrine signaling, and juxtacrine signaling. For example, in paracrine signaling, growth factors secreted from one cell diffuse over a short distance to the neighboring target cell, whereas in juxtacrine signaling, membrane-bound ligands of one cell directly bind to surface receptors of adjacent cells. There are three conventional approaches to incorporate cell signaling in "in vitro" cell culture: conditioned media transfer, mixed (or direct) co-culture, and segregated (or indirect) co-culture. The use of conditioned media, where the cultured medium of one cell type (the effector) is introduced to the culture of another cell type (the responder), is a traditional way to include the effects of soluble factors in cell signaling. However, this method only allows one-way signaling, does not apply to short-lived factors (which often degrade before transfer to the responder cell culture), and does not allow temporal observations of the secreted factors. Recently, co-culture has become the predominant approach to study the effect of cellular communication by culturing two biologically related cell types together. Mixed co-culture is the simplest co-culture method, where two types of cells are in direct contact within a single culture compartment at the desired cell ratio. Cells can communicate by paracrine and juxtacrine signaling, but separated treatments and downstream analysis of a single cell type are not readily feasible due to the completely mixed population of cells. The more common method is segregated co-culture, where the two cell types are physically separated but can communicate in shared media by paracrine signaling. The physical barrier can be a porous membrane, a solid wall, or a hydrogel divider. If the physical barrier is removable (such as in PDMS or hydrogel), the assay can also be used to study cell invasion or cell migration. Co-culture designs can be adapted to tri- or multi-culture, which are often more representative of "in vivo" conditions relative to co-culture.
https://en.wikipedia.org/wiki?curid=50311973
1,626,246
The Kolmogorov structure function of an individual data string expresses the relation between the complexity level constraint on a model class and the least log-cardinality of a model in the class containing the data. The structure function determines all stochastic properties of the individual data string: for every constrained model class it determines the individual best-fitting model in the class irrespective of whether the true model is in the model class considered or not. In the classical case we talk about a set of data with a probability distribution, and the properties are those of the expectations. In contrast, here we deal with individual data strings and focus on the properties of the individual string. In this setting, a property holds with certainty rather than with high probability as in the classical case. The Kolmogorov structure function precisely quantifies the goodness-of-fit of an individual model with respect to individual data.
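In the standard notation (which may differ slightly from the source), the structure function of a string x maps a complexity budget \alpha to the least log-cardinality of a finite set (model) containing x whose own Kolmogorov complexity is at most \alpha:

h_x(\alpha) = \min_{S} \{ \log_2 |S| : x \in S, \; K(S) \le \alpha \},

where S ranges over finite sets of strings and K(S) is the Kolmogorov complexity of S; the best-fitting model at complexity level \alpha is a minimizing S.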
https://en.wikipedia.org/wiki?curid=18010343
1,784,136
The interaction between services often requires exchanging business documents. In order for a service consumer to send data (related to a particular business entity e.g. a purchase order), it needs to know the structure of the data i.e. the data model. For this, the service provider publishes the structure of the data that it expects within the incoming message from the service consumer. In case of services being implemented as web services, this would be the XML schema document. Once the service consumer knows the required data model, it can structure the data accordingly. However, under some conditions it may be possible that the service consumer already possesses the required data, which relates to a particular business document, but the data does not conform to the data model as specified by the service provider. This disparity among the data models results in the requirement of data model transformation so that the message is transformed into the required structure as dictated by the service provider. Building upon the aforementioned example, it is entirely possible that, after processing the received business document, the service provider sends back the processed document to the service consumer that once again performs the data model transformation to convert the processed business document back to the data model that it uses within its logic to represent the business document.
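A minimal sketch of such a data model transformation in the consumer, with all field names invented for illustration (they are not part of any particular service contract): the consumer's internal purchase-order structure is mapped to the structure the provider publishes.

# Hypothetical transformation from a consumer's internal purchase-order model
# to the data model published by the service provider.
def to_provider_model(internal_order: dict) -> dict:
    """Restructure the consumer's data to match the provider's expected schema."""
    return {
        "orderId": internal_order["po_number"],
        "buyer": {"name": internal_order["customer_name"]},
        "lines": [
            {"sku": item["code"], "quantity": item["qty"]}
            for item in internal_order["items"]
        ],
    }

internal = {
    "po_number": "PO-1001",
    "customer_name": "Acme Corp",
    "items": [{"code": "X-42", "qty": 3}],
}
print(to_provider_model(internal))   # data now conforms to the provider's model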
https://en.wikipedia.org/wiki?curid=26592693
1,915,219
The first development of "quantum electrochemistry" is somewhat difficult to pin down. This is not very surprising, since the development of quantum mechanics to chemistry can be summarized as the application of quantum wave theory models to atoms and molecules. This being the case, electrochemistry, which is particularly concerned with the electronic states of some particular system, is already, by its nature, tied into the quantum mechanical model of the electron in quantum chemistry. There were proponents of quantum electrochemistry, who applied quantum mechanics to electrochemistry with unusual zeal, clarity and precision. Among them were Revaz Dogonadze and his co-workers. They developed one of the early quantum mechanical models for proton transfer reactions in chemical systems. Dogonadze is a particularly celebrated promoter of quantum electrochemistry, and is also credited with forming an international summer school of quantum electrochemistry centered in Yugoslavia. He was a main author of the "Quantum-Mechanical Theory of Kinetics of the Elementary Act of Chemical, Electrochemical and Biochemical Processes in Polar Liquids". Another important contributor is Rudolph A. Marcus, who won the Nobel Prize in Chemistry in 1992 for his "Theory of Electron Transfer Reactions in Chemical Systems". Recently, Marcus theory has been shown to be part of a more general concept associated with the quantum rate theory, a theory that predicts the rate of electron transfer (electrochemistry being a particular case) using the concepts of conductance quantum and quantum capacitance.
https://en.wikipedia.org/wiki?curid=872416
1,915,830
Data-informed decision-making (DIDM) gives reference to the collection and analysis of data to guide decisions that improve success. Another form of this process is referred to as Data-Driven Decision-Making (DDDM) which is defined similarly as making decisions based on hard data as opposed to intuition, observation, or guesswork. DIDM is used in education communities (where data is used with the goal of helping students and improving curricula) but is also applicable to (and thus also used in) other fields in which data is used to inform decisions. While data based decision making is a more common term, "data-informed" decision-making is a preferable term since decisions should not be based solely on quantitative data. "Data-driven decision-making" is most often seen in the contexts of business growth and entrepreneurship. Most educators have access to a data system for the purpose of analyzing student data. These data systems present data to educators in an over-the-counter data format (embedding labels, supplemental documentation, and a help system, making key package/display and content decisions) to improve the success of educators’ data-informed decision-making. In Business, fostering and actively supporting DIDM in their firm and among their colleagues could be the main role of CIOs (Chief Information Officers) or CDOs (Chief Data Officers).
https://en.wikipedia.org/wiki?curid=39186711
1,958,613
TML system description fields include descriptions of the physical system, the data system and the data product. The data itself forms the other component of a TML data stream. The physical system description includes information such as model and serial number information about specific transducers and components of a system, system calibration information, system capabilities, installation information, owners and operators, and other information directly applicable to searches related to general data exchange independent of operating conditions. The data system description contains information about the specific transducers and components such as their behavior, responses to physical phenomena, sensitivity, and other operating parameters. The data product description addresses the specific data stream, such as data types, layouts, encoding, and other information necessary for the consumer of a TML data stream to interpret the stream.
https://en.wikipedia.org/wiki?curid=6045563
1,983,683
Geodetic Data Services (GDS) program provides services for the long-term stewardship of unique data sets. These services organize, manage, and archive data, and develop tools for data access and interpretation. GDS provides a comprehensive suite of services including sensor network data operations, data products and services, data management and archiving, and advanced cyberinfrastructure. Services are provided for GPS/GNSS data, Imaging data, Strain and Seismic data, and Meteorological data. GPS/GNSS data enable millimeter-scale surface motions at discrete points. Data from geodetic imaging instruments can be used to map topography and delineate deformation with high spatial resolution. InSAR and Terrestrial LiDAR imaging data services are provided. Strain and seismic data from borehole strainmeters, seismometers, thermometers, pore pressure transducers, tiltmeters, and rock samples from drilling, as well as surface-based tiltmeters and laser strainmeters are available. In addition, temperature, relative humidity, and atmospheric pressure data are available from surface measurements of atmospheric conditions from stations. Tropospheric parameters are generated during daily GPS post-processing managed by UNAVCO and are accessible through data access services. The program is optimized to enable access to high-precision geodetic data. The UNAVCO Data Archive includes more than 2,300 continuous GPS stations.
https://en.wikipedia.org/wiki?curid=25430759
25,170
Differential calculus is the study of the definition, properties, and applications of the derivative of a function. The process of finding the derivative is called "differentiation". Given a function and a point in the domain, the derivative at that point is a way of encoding the small-scale behavior of the function near that point. By finding the derivative of a function at every point in its domain, it is possible to produce a new function, called the "derivative function" or just the "derivative" of the original function. In formal terms, the derivative is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The derivative, however, can take the squaring function as an input. This means that the derivative takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be the doubling function.
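The idea that the derivative is an operator from functions to functions can be sketched with a small numerical helper (a finite-difference approximation rather than a symbolic derivative; the step size h is an arbitrary choice).

# Sketch: a higher-order function that takes a function and returns (an
# approximation of) its derivative function.
def derivative(f, h=1e-6):
    """Return a new function approximating the derivative of f."""
    return lambda x: (f(x + h) - f(x - h)) / (2 * h)

square = lambda x: x * x         # the squaring function: 3 -> 9, 4 -> 16, ...
double = derivative(square)      # behaves like the doubling function

print(double(3.0))               # ≈ 6.0
print(double(4.0))               # ≈ 8.0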
https://en.wikipedia.org/wiki?curid=5176
143,255
Cell division is the process by which a parent cell divides into two daughter cells. Cell division usually occurs as part of a larger cell cycle in which the cell grows and replicates its chromosome(s) before dividing. In eukaryotes, there are two distinct types of cell division: a vegetative division (mitosis), producing daughter cells genetically identical to the parent cell, and a cell division that produces haploid gametes for sexual reproduction (meiosis), reducing the number of chromosomes from two of each type in the diploid parent cell to one of each type in the daughter cells. In cell biology, mitosis is a part of the cell cycle in which replicated chromosomes are separated into two new nuclei. Cell division gives rise to genetically identical cells in which the total number of chromosomes is maintained. In general, mitosis (division of the nucleus) is preceded by the S stage of interphase (during which the DNA replication occurs) and is often followed by telophase and cytokinesis, which divides the cytoplasm, organelles, and cell membrane of one cell into two new cells containing roughly equal shares of these cellular components. The different stages of mitosis all together define the mitotic (M) phase of the animal cell cycle—the division of the mother cell into two genetically identical daughter cells. Meiosis results in four haploid daughter cells by undergoing one round of DNA replication followed by two divisions. Homologous chromosomes are separated in the first division, and sister chromatids are separated in the second division. Both of these cell division cycles are used in the process of sexual reproduction at some point in their life cycle. Both are believed to be present in the last eukaryotic common ancestor.
https://en.wikipedia.org/wiki?curid=36869
147,049
The internal energy of a body can change in a process in which chemical potential energy is converted into non-chemical energy. In such a process, the thermodynamic system can change its internal energy by doing work on its surroundings, or by gaining or losing energy as heat. It is not quite lucid to merely say that 'the converted chemical potential energy has simply become internal energy'. It is, however, convenient and more lucid to say that 'the chemical potential energy has been converted into thermal energy'. Such thermal energy may be viewed as a contributor to internal energy or to enthalpy, thinking of the contribution as a process without thinking that the contributed energy has become an identifiable component of the internal or enthalpic energies. The thermal energy is thus thought of as a 'process entity' rather than as an 'enduring physical entity'. This is expressed in ordinary traditional language by talking of 'heat of reaction'.
https://en.wikipedia.org/wiki?curid=467047
159,547
Probability distortion is the observation that people generally do not weight the value of a probability uniformly between 0 and 1. Lower probability is said to be over-weighted (that is, a person is overly concerned with the outcome of the probability) while medium to high probability is under-weighted (that is, a person is not concerned enough with the outcome of the probability). The exact point at which probability goes from over-weighted to under-weighted is arbitrary, but a good point to consider is probability = 0.33. A person values probability = 0.01 much more than the value of probability = 0 (probability = 0.01 is said to be over-weighted). However, a person has about the same value for probability = 0.4 and probability = 0.5. Also, the value of probability = 0.99 is much less than the value of probability = 1, a sure thing (probability = 0.99 is under-weighted). Looking at probability distortion in a little more depth, it holds that "π"("p") + "π"(1 − "p") < 1 (where "π"("p") is the probability weighting function in prospect theory).
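One common parameterization of the weighting function "π"("p") that reproduces this pattern is the Tversky–Kahneman (1992) form; the functional form and the parameter value below are illustrative assumptions, not taken from the text.

# Sketch of a probability weighting function pi(p) showing over-weighting of
# small probabilities and under-weighting of moderate-to-high ones.
def weight(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in (0.01, 0.33, 0.4, 0.5, 0.99):
    print(f"pi({p}) = {weight(p):.3f}")
# pi(0.01) > 0.01 (over-weighted), pi(0.99) < 0.99 (under-weighted),
# and pi(p) + pi(1 - p) < 1 in between.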
https://en.wikipedia.org/wiki?curid=197284
173,457
A data lake is a system or repository of data stored in its natural/raw format, usually object blobs or files. A data lake is usually a single store of data including raw copies of source system data, sensor data, social data etc., and transformed data used for tasks such as reporting, visualization, advanced analytics and machine learning. A data lake can include structured data from relational databases (rows and columns), semi-structured data (CSV, logs, XML, JSON), unstructured data (emails, documents, PDFs) and binary data (images, audio, video). A data lake can be established "on premises" (within an organization's data centers) or "in the cloud" (using cloud services from vendors such as Amazon, Microsoft, or Google).
https://en.wikipedia.org/wiki?curid=46626475
190,330
For example, given "N" sequence data structures, e.g. singly linked list, vector etc., and "M" algorithms to operate on them, e.g. codice_1, codice_2 etc., a direct approach would implement each algorithm specifically for each data structure, giving combinations to implement. However, in the generic programming approach, each data structure returns a model of an iterator concept (a simple value type that can be dereferenced to retrieve the current value, or changed to point to another value in the sequence) and each algorithm is instead written generically with arguments of such iterators, e.g. a pair of iterators pointing to the beginning and end of the subsequence or "range" to process. Thus, only data structure-algorithm combinations need be implemented. Several iterator concepts are specified in the STL, each a refinement of more restrictive concepts e.g. forward iterators only provide movement to the next value in a sequence (e.g. suitable for a singly linked list or a stream of input data), whereas a random-access iterator also provides direct constant-time access to any element of the sequence (e.g. suitable for a vector). An important point is that a data structure will return a model of the most general concept that can be implemented efficiently—computational complexity requirements are explicitly part of the concept definition. This limits the data structures a given algorithm can be applied to and such complexity requirements are a major determinant of data structure choice. Generic programming similarly has been applied in other domains, e.g. graph algorithms.
https://en.wikipedia.org/wiki?curid=105837
196,453
Overfitting is the use of models or procedures that violate Occam's razor, for example by including more adjustable parameters than are ultimately optimal, or by using a more complicated approach than is ultimately optimal. For an example where there are too many adjustable parameters, consider a dataset where the training data can be adequately predicted by a linear function of two independent variables. Such a function requires only three parameters (the intercept and two slopes). Replacing this simple function with a new, more complex quadratic function, or with a new, more complex linear function on more than two independent variables, carries a risk: Occam's razor implies that any given complex function is "a priori" less probable than any given simple function. If the new, more complicated function is selected instead of the simple function, and if there was not a large enough gain in training-data fit to offset the complexity increase, then the new complex function "overfits" the data, and the complex overfitted function will likely perform worse than the simpler function on validation data outside the training dataset, even though the complex function performed as well, or perhaps even better, on the training dataset.
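A small numeric sketch of this trade-off (a one-variable analogue of the scenario above, with invented data): a simple linear fit and a needlessly complex polynomial fit are compared on training data and on held-out validation data.

# Data generated by a simple linear rule plus noise, fit with a degree-1 and a
# needlessly complex degree-6 polynomial; the complex fit matches the training
# points more closely but typically generalizes worse.
import numpy as np

rng = np.random.default_rng(0)
true_f = lambda x: 2.0 * x + 1.0
x_train = np.linspace(0.0, 1.0, 8)
x_valid = np.linspace(0.05, 0.95, 8)
y_train = true_f(x_train) + 0.3 * rng.standard_normal(x_train.size)
y_valid = true_f(x_valid) + 0.3 * rng.standard_normal(x_valid.size)

for degree in (1, 6):
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = lambda x, y: float(np.mean((np.polyval(coeffs, x) - y) ** 2))
    print(f"degree {degree}: train MSE = {mse(x_train, y_train):.3f}, "
          f"validation MSE = {mse(x_valid, y_valid):.3f}")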
https://en.wikipedia.org/wiki?curid=173332
199,640
Some view the birth of quantum chemistry as starting with the discovery of the Schrödinger equation and its application to the hydrogen atom in 1926. However, the 1927 article of Walter Heitler (1904–1981) and Fritz London, is often recognized as the first milestone in the history of quantum chemistry. This is the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. In the following years much progress was accomplished by Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Linus Pauling, Erich Hückel, Douglas Hartree, Vladimir Fock, to cite a few. The history of quantum chemistry also goes through the 1838 discovery of cathode rays by Michael Faraday, the 1859 statement of the black-body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete, and the 1900 quantum hypothesis by Max Planck that any energy radiating atomic system can theoretically be divided into a number of discrete energy elements "ε" such that each of these energy elements is proportional to the frequency "ν" with which they each individually radiate energy and a numerical value called Planck's constant. Then, in 1905, to explain the photoelectric effect (1839), i.e., that shining light on certain materials can function to eject electrons from the material, Albert Einstein postulated, based on Planck's quantum hypothesis, that light itself consists of individual quantum particles, which later came to be called photons (1926). In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding. Probably the greatest contribution to the field was made by Linus Pauling.
https://en.wikipedia.org/wiki?curid=25211
244,503
Stone has had issued to him a large number of patents embracing a method for impressing oscillations on a radiator system and emitting the energy in the form of waves of predetermined length whatever may be the electrical dimensions of the oscillator. On February 8, 1900, he filed for a selective system in . In this system, two simple circuits are associated inductively, each having an independent degree of freedom, and in which the restoration of electric oscillations to zero potential the currents are superimposed, giving rise to compound harmonic currents which permit the resonator system to be syntonized with precision to the oscillator. Stone's system, as stated in , developed free or unguided simple harmonic electromagnetic signal waves of a definite frequency to the exclusion of the energy of signal waves of other frequencies, and an elevated conductor and means for developing therein forced simple electric vibrations of corresponding frequency. In these patents Stone devised a multiple inductive oscillation circuit with the object of forcing on the antenna circuit a single oscillation of definite frequency. In the system for receiving the energy of free or unguided simple harmonic electromagnetic signal waves of a definite frequency to the exclusion of the energy of signal waves of other frequencies, he claimed an elevated conductor and a resonant circuit associated with said conductor and attuned to the frequency of the waves, the energy of which is to be received. A coherer made on what is called the "Stone system" was employed in some of the portable wireless outfits of the United States Army. The Stone Coherer has two small steel plugs between which are placed loosely packed carbon granules. This is a "self-decohering" device; though not as sensitive as other forms of detectors it is well suited to the rough usage of portable outfits.
https://en.wikipedia.org/wiki?curid=3800477
283,263
Energy monitoring through energy audits can achieve energy efficiency in existing buildings. An energy audit is an inspection and analysis of energy use and flows for energy conservation in a structure, process, or system intending to reduce energy input without negatively affecting output. Energy audits can determine specific opportunities for energy conservation and efficiency measures as well as determine cost-effective strategies. Training professionals typically accomplish this and can be part of some national programs discussed above. The recent development of smartphone apps enables homeowners to complete relatively sophisticated energy audits themselves. For instance, smart thermostats can connect to standard HVAC systems to maintain energy-efficient indoor temperatures. In addition, data loggers can also be installed to monitor the interior temperature and humidity levels to provide a more precise understanding of the conditions. If the data gathered is compared with the users' perceptions of comfort, more fine-tuning of the interiors can be implemented (e.g., increasing the temperature where A.C. is used to prevent over-cooling). Building technologies and smart meters can allow commercial and residential energy users to visualize the impact their energy use can have in their workplaces or homes. Advanced real-time energy metering can help people save energy through their actions.
https://en.wikipedia.org/wiki?curid=478933
463,196
Discrete differential calculus is the study of the definition, properties, and applications of the difference quotient of a function. The process of finding the difference quotient is called "differentiation". Given a function defined at several points of the real line, the difference quotient at a given point is a way of encoding the small-scale (i.e., from that point to the next) behavior of the function. By finding the difference quotient of a function at every pair of consecutive points in its domain, it is possible to produce a new function, called the "difference quotient function" or just the "difference quotient" of the original function. In formal terms, the difference quotient is a linear operator which takes a function as its input and produces a second function as its output. This is more abstract than many of the processes studied in elementary algebra, where functions usually input a number and output another number. For example, if the doubling function is given the input three, then it outputs six, and if the squaring function is given the input three, then it outputs nine. The difference quotient operator, however, can take the squaring function as an input. This means that it takes all the information of the squaring function—such as that two is sent to four, three is sent to nine, four is sent to sixteen, and so on—and uses this information to produce another function. The function produced by differentiating the squaring function turns out to be something close to the doubling function.
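As a minimal sketch of this idea (plain Python; the function and sample points are illustrative), the forward difference quotient of the squaring function, taken at consecutive integer points, comes out close to the doubling function:

```python
def difference_quotient(f, points):
    """Difference quotient of f between each pair of consecutive sample points."""
    return [(f(b) - f(a)) / (b - a) for a, b in zip(points, points[1:])]

def square(x):
    return x * x

# At the integer points 0..5 the result is [1, 3, 5, 7, 9]:
# exactly the doubling function evaluated at the midpoint of each interval.
print(difference_quotient(square, [0, 1, 2, 3, 4, 5]))
```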
https://en.wikipedia.org/wiki?curid=61660335
631,159
A system dedicated to offsite offline data processing was set up at the Stony Brook University in Stony Brook, NY to process raw data sent from Kamioka. Most of the reformatted raw data is copied from system facility in Kamioka. At Stony Brook, a system was set up for analysis and further processing. At Stony Brook the raw data were processed with a multi-tape DLT drive. The first stage data reduction processes were done for the high energy analysis and for the low energy analysis. The data reduction for the high energy analysis was mainly for atmospheric neutrino events and proton decay search while the low energy analysis was mainly for the solar neutrino events. The reduced data for the high energy analysis was further filtered by other reduction processes and the resulting data were stored on disks. The reduced data for the low energy were stored on DLT tapes and sent to University of California, Irvine for further processing.
https://en.wikipedia.org/wiki?curid=28464
711,446
There exist two underlying models in each OT system: the data model that defines the way data objects in a document are addressed by operations, and the operation model that defines the set of operations that can be directly transformed by OT functions. Different OT systems may have different data and operation models. For example, the data model of the first OT system is a single linear address space; and its operation model consists of two primitive operations: character-wise insert and delete. The basic operation model has been extended to include a third primitive operation update to support collaborative Word document processing and 3D model editing. The basic OT data model has been extended into a hierarchy of multiple linear addressing domains, which is capable of modeling a broad range of documents. A data adaptation process is often required to map application-specific data models to an OT-compliant data model.
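A rough sketch of these two models (illustrative Python, not the API of any particular OT implementation; the transformation functions themselves are omitted): the document is a single string, i.e. one linear address space, and the primitive operations are character-wise insert and delete.

```python
from dataclasses import dataclass

@dataclass
class Insert:          # insert one character at an integer position
    pos: int
    char: str

@dataclass
class Delete:          # delete the character at an integer position
    pos: int

def apply(doc: str, op) -> str:
    """Apply a primitive operation to the linear document."""
    if isinstance(op, Insert):
        return doc[:op.pos] + op.char + doc[op.pos:]
    if isinstance(op, Delete):
        return doc[:op.pos] + doc[op.pos + 1:]
    raise TypeError("unknown operation type")

doc = "abc"
doc = apply(doc, Insert(1, "X"))   # -> "aXbc"
doc = apply(doc, Delete(3))        # -> "aXb"
print(doc)
```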
https://en.wikipedia.org/wiki?curid=8924002
751,710
Building on some of their earlier work, Gemino and Wand acknowledge some main points to consider when studying the affecting factors: the content that the conceptual model must represent, the method in which the model will be presented, the characteristics of the model's users, and the conceptual model languages specific task. The conceptual model's content should be considered in order to select a technique that would allow relevant information to be presented. The presentation method for selection purposes would focus on the technique's ability to represent the model at the intended level of depth and detail. The characteristics of the model's users or participants is an important aspect to consider. A participant's background and experience should coincide with the conceptual model's complexity, else misrepresentation of the system or misunderstanding of key system concepts could lead to problems in that system's realization. The conceptual model language task will further allow an appropriate technique to be chosen. The difference between creating a system conceptual model to convey system functionality and creating a system conceptual model to interpret that functionality could involve two completely different types of conceptual modeling languages.
https://en.wikipedia.org/wiki?curid=2381958
778,442
Chemical energy is the energy of chemical substances that is released when they undergo a chemical reaction and transform into other substances. Some examples of storage media of chemical energy include batteries, food, and gasoline (as well as oxygen gas, which is of high chemical energy due to its relatively weak double bond and indispensable for chemical-energy release in gasoline combustion). Breaking and re-making of chemical bonds involves energy, which may be either absorbed by or evolved from a chemical system. If reactants with relatively weak electron-pair bonds convert to more strongly bonded products, energy is released. Therefore, relatively weakly bonded and unstable molecules store chemical energy.
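A rough bond-enthalpy estimate illustrates the point for the combustion of methane (using approximate average textbook bond enthalpies, so the numbers are indicative only): the bonds formed in the products are stronger than those broken in the reactants, so energy is released.

```latex
% CH4 + 2 O2 -> CO2 + 2 H2O (approximate average bond enthalpies, kJ/mol)
\Delta H \approx \underbrace{[\,4(413)_{\mathrm{C-H}} + 2(498)_{\mathrm{O=O}}\,]}_{\text{bonds broken}}
          - \underbrace{[\,2(799)_{\mathrm{C=O}} + 4(463)_{\mathrm{O-H}}\,]}_{\text{bonds formed}}
          \approx 2648 - 3450 = -802~\mathrm{kJ/mol}
```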
https://en.wikipedia.org/wiki?curid=417846
826,782
There are various strengths to using a semantic data mining and ontology-based approach. As previously mentioned, these tools can help during the pre-processing phase by filtering out non-desirable data from the data set. Additionally, well-structured formal semantics integrated into well-designed ontologies can return powerful data that can be easily read and processed by machines. A specifically useful example of this exists in the medical use of semantic data processing. As an example, a patient is having a medical emergency and is being rushed to hospital. The emergency responders are trying to figure out the best medicine to administer to help the patient. Under normal data processing, scouring all the patient’s medical data to ensure they are getting the best treatment could take too long and risk the patient’s health or even life. However, using semantically processed ontologies, the first responders could save the patient’s life. Tools like a semantic reasoner can use an ontology to infer the best medicine to administer to the patient based on their medical history, such as if they have a certain cancer or other conditions, simply by examining the natural language used in the patient's medical records. This would allow the first responders to quickly and efficiently search for medicine without having to worry about the patient’s medical history themselves, as the semantic reasoner would already have analyzed this data and found solutions. In general, this illustrates the incredible strength of using semantic data mining and ontologies. They allow for quicker and more efficient data extraction on the user side, as the user has fewer variables to account for, since the semantically pre-processed data and ontology built for the data have already accounted for many of these variables. However, there are some drawbacks to this approach. Namely, it requires a high amount of computational power and complexity, even with relatively small data sets. This could result in higher costs and increased difficulties in building and maintaining semantic data processing systems. This can be mitigated somewhat if the data set is already well organized and formatted, but even then, the complexity is still higher when compared to standard data processing.
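A purely illustrative toy sketch of ontology-guided inference (Python; every class name, condition, and drug below is hypothetical, and real systems would use an ontology language such as OWL together with a dedicated reasoner): a tiny is-a hierarchy lets a rule match a patient's specific condition against a drug's more general contraindication category.

```python
ONTOLOGY = {  # child concept -> parent concept (is-a relations); all names are made up
    "renal_impairment": "kidney_condition",
    "kidney_condition": "condition",
    "type_2_diabetes": "metabolic_condition",
    "metabolic_condition": "condition",
}

def ancestors(concept):
    """Walk the is-a hierarchy upward from a concept."""
    while concept in ONTOLOGY:
        concept = ONTOLOGY[concept]
        yield concept

def is_a(concept, category):
    return concept == category or category in ancestors(concept)

# Each hypothetical drug lists ontology categories it is contraindicated for.
DRUGS = {
    "drug_a": {"contraindicated": ["kidney_condition"]},
    "drug_b": {"contraindicated": []},
}

def safe_drugs(patient_conditions):
    """Drugs with no contraindication matching (directly or via is-a) a patient condition."""
    return [
        name for name, info in DRUGS.items()
        if not any(is_a(cond, bad)
                   for cond in patient_conditions
                   for bad in info["contraindicated"])
    ]

print(safe_drugs(["renal_impairment"]))  # -> ['drug_b']
```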
https://en.wikipedia.org/wiki?curid=12386904
829,865
In gymnosperms, the male gametophytes are produced inside microspores within the microsporangia located inside male cones or microstrobili. In each microspore, a single gametophyte is produced, consisting of four haploid cells produced by meiotic division of a diploid microspore mother cell.  At maturity, each microspore-derived gametophyte becomes a pollen grain. During its development, the water and nutrients that the male gametophyte requires are provided by the sporophyte tissue until they are released for pollination. The cell number of each mature pollen grain varies between the gymnosperm orders. Cycadophyta have 3 celled pollen grains while Ginkgophyta have 4 celled pollen grains. Gnetophyta may have 2 or 3 celled pollen grains depending on the species, and Coniferophyta pollen grains vary greatly ranging from single celled to 40 celled. One of these cells is typically a germ cell and other cells may consist of a single tube cell which grows to form the pollen tube, sterile cells, and/or prothallial cells which are both vegetative cells without an essential reproductive function. After pollination is successful, the male gametophyte continues to develop. If a tube cell was not developed in the microstrobilus, one is created after pollination via mitosis. The tube cell grows into the diploid tissue of the female cone and may branch out into the megastrobilus tissue or grow straight towards the egg cell. The megastrobilus sporophytic tissue provides nutrients for the male gametophyte at this stage. In some gymnosperms, the tube cell will create a direct channel from the site of pollination to the egg cell, in other gymnosperms, the tube cell will rupture in the middle of the megastrobilus sporophyte tissue. This occurs because in some gymnosperm orders, the germ cell is nonmobile and a direct pathway is needed, however, in Cycadophyta and Ginkgophyta, the germ cell is mobile due to flagella being present and a direct tube cell path from the pollination site to the egg is not needed. In most species the germ cell can be more specifically described as a sperm cell which mates with the egg cell during fertilization, though that is not always the case. In some Gnetophyta species, the germ cell will release two sperm nuclei that undergo a rare gymnosperm double fertilization process occurring solely with sperm nuclei and not with the fusion of developed cells. After fertilization is complete in all orders, the remaining male gametophyte tissue will deteriorate.
https://en.wikipedia.org/wiki?curid=13115
879,372
Quantum biology is an emerging field; most of the current research is theoretical and subject to questions that require further experimentation. Though the field has only recently received an influx of attention, it has been conceptualized by physicists throughout the 20th century. It has been suggested that quantum biology might play a critical role in the future of the medical world. Early pioneers of quantum physics saw applications of quantum mechanics in biological problems. Erwin Schrödinger's 1944 book "What is Life?" discussed applications of quantum mechanics in biology. Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. He further suggested that mutations are introduced by "quantum leaps". Other pioneers Niels Bohr, Pascual Jordan, and Max Delbruck argued that the quantum idea of complementarity was fundamental to the life sciences. In 1963, Per-Olov Löwdin published proton tunneling as another mechanism for DNA mutation. In his paper, he stated that there is a new field of study called "quantum biology". In 1979, the Soviet and Ukrainian physicist Alexander Davydov published the first textbook on quantum biology entitled "Biology and Quantum Mechanics".
https://en.wikipedia.org/wiki?curid=13537626
879,387
Enzymes have been postulated to use quantum tunneling in order to transfer electrons from one place to another in electron transport chains. It is possible that protein quaternary architectures may have adapted to enable sustained quantum entanglement and coherence, which are two of the limiting factors for quantum tunneling in biological entities. These architectures might account for a greater percentage of quantum energy transfer, which occurs through electron transport and proton tunneling (usually in the form of hydrogen ions, H+). Tunneling refers to the ability of a subatomic particle to travel through potential energy barriers. This ability is due, in part, to the principle of complementarity, which holds that certain substances have pairs of properties that cannot be measured separately without changing the outcome of measurement. Particles, such as electrons and protons, have wave-particle duality; they can pass through energy barriers due to their wave characteristics without violating the laws of physics. In order to quantify how quantum tunneling is used in many enzymatic activities, many biophysicists utilize the observation of hydrogen ions. When hydrogen ions are transferred, this is seen as a staple in an organelle's primary energy processing network; in other words, quantum effects are most usually at work in proton distribution sites at distances on the order of an angstrom (1 Å). In physics, a semiclassical (SC) approach is most useful in defining this process because of the transfer from quantum elements (e.g. particles) to macroscopic phenomena (e.g. biochemicals). Aside from hydrogen tunneling, studies also show that electron transfer between redox centers through quantum tunneling plays an important role in enzymatic activity of photosynthesis and cellular respiration (see also Mitochondria section below). For example, electron tunneling on the order of 15–30 Å contributes to redox reactions in cellular respiration enzymes, such as complexes I, III, and IV in mitochondria. Without quantum tunneling, organisms would not be able to convert energy quickly enough to sustain growth. Quantum tunneling actually acts as a shortcut for particle transfer; according to quantum mathematics, a particle's jump from in front of a barrier to the other side of a barrier occurs faster than if the barrier had never been there in the first place. (For more on the technicality of this, see Hartman effect.)
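For a sense of scale, the standard rectangular-barrier estimate (a textbook approximation, not specific to any enzyme) gives a tunneling probability that falls off exponentially with barrier width and with the square root of the particle's mass:

```latex
T \approx e^{-2\kappa d}, \qquad \kappa = \frac{\sqrt{2m\,(V_0 - E)}}{\hbar}
```

Here d is the barrier width and V_0 − E the barrier height above the particle's energy; because a proton is roughly 1800 times heavier than an electron, electron tunneling remains appreciable over the 15–30 Å quoted above while proton transfer is confined to distances on the order of an ångström.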
https://en.wikipedia.org/wiki?curid=13537626
907,994
For example, if two objects are attracting each other in space through their gravitational field, the attraction force accelerates the objects, increasing their velocity, which converts their potential energy (gravity) into kinetic energy. When the particles either pass through each other without interaction or elastically repel during the collision, the gained kinetic energy (related to speed) begins to revert into potential energy, driving the collided particles apart. The decelerating particles will return to the initial distance and beyond into infinity, or stop and repeat the collision (oscillation takes place). This shows that the system, which loses no energy, does not combine (bind) into a solid object, parts of which oscillate at short distances. Therefore, to bind the particles, the kinetic energy gained due to the attraction must be dissipated by a resistive force. Complex objects in collision ordinarily undergo inelastic collision, transforming some kinetic energy into internal energy (heat content, which is atomic movement), which is further radiated in the form of photons (light and heat). Once the energy needed to escape the gravity is dissipated in the collision, the parts will oscillate at a closer, possibly atomic, distance, thus looking like one solid object. This lost energy, necessary to overcome the potential barrier to separate the objects, is the binding energy. If this binding energy were retained in the system as heat, its mass would not decrease, whereas binding energy lost from the system as heat radiation would itself have mass. It directly represents the "mass deficit" of the cold, bound system.
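A standard textbook illustration of this mass deficit is the deuteron: its binding energy of about 2.22 MeV corresponds, via Δm = E_b/c², to roughly 0.1% of its rest mass (the numbers below are the usual approximate values):

```latex
\Delta m = \frac{E_b}{c^2} \approx \frac{2.22~\mathrm{MeV}}{931.5~\mathrm{MeV}/u} \approx 0.0024~u
```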
https://en.wikipedia.org/wiki?curid=125769
909,538
Other problems include (for example) issues with the quality of data, consistent classification and identification of data, and data-reconciliation issues. Master data management of disparate data systems requires data transformations as the data extracted from the disparate source data system is transformed and loaded into the master data management hub. To synchronize the disparate source master data, the managed master data extracted from the master data management hub is again transformed and loaded into the disparate source data system as the master data is updated. As with other Extract, Transform, Load-based data movement, these processes are expensive and inefficient to develop and to maintain which greatly reduces the return on investment for the master data management product.
https://en.wikipedia.org/wiki?curid=15103022
958,436
Cryptography is the strongest link in the chain of data security. However, interested parties cannot assume that cryptographic keys will remain secure indefinitely. Quantum cryptography has the potential to encrypt data for longer periods than classical cryptography. Using classical cryptography, scientists cannot guarantee encryption beyond approximately 30 years, but some stakeholders may require longer periods of protection. Take, for example, the healthcare industry. As of 2017, 85.9% of office-based physicians are using electronic medical record systems to store and transmit patient data. Under the Health Insurance Portability and Accountability Act, medical records must be kept secret. Typically, paper medical records are shredded after a period of time, but electronic records leave a digital trace. Quantum key distribution can protect electronic records for periods of up to 100 years. Also, quantum cryptography has useful applications for governments and militaries as, historically, governments have kept military data secret for periods of over 60 years. It has also been proven that quantum key distribution can operate securely over a noisy channel across long distances. It can be reduced from a noisy quantum scheme to a classical noiseless scheme. This can be solved with classical probability theory. This process of having consistent protection over a noisy channel can be possible through the implementation of quantum repeaters. Quantum repeaters have the ability to resolve quantum communication errors in an efficient way. Quantum repeaters, which are quantum computers, can be stationed as segments over the noisy channel to ensure the security of communication. Quantum repeaters do this by purifying the segments of the channel before connecting them, creating a secure line of communication. Sub-par quantum repeaters can provide an adequate level of security through the noisy channel over a long distance.
https://en.wikipedia.org/wiki?curid=28676005
1,001,885
The free energy principle is based on the Bayesian idea of the brain as an “inference engine.” Under the free energy principle, systems pursue paths of least surprise, or equivalently, minimize the difference between predictions based on their model of the world and their sense and associated perception. This difference is quantified by variational free energy and is minimized by continuous correction of the world model of the system, or by making the world more like the predictions of the system. By actively changing the world to make it closer to the expected state, systems can also minimize the free energy of the system. Friston assumes this to be the principle of all biological reaction. Friston also believes his principle applies to mental disorders as well as to artificial intelligence. AI implementations based on the active inference principle have shown advantages over other methods. Although challenging even for experts, the free energy principle is ultimately quite simple and fundamental, and can be re-derived from conventional mathematics following maximum entropy inference. Indeed, it can be shown that any large enough random dynamical system will display the kind of boundary that allows one to apply the free energy principle to model its dynamics: the probability of finding a Markov blanket in the underlying potential of the system (and therefore, being able to apply the free energy principle) goes to 100% as the size of the system goes to infinity.
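In its usual variational formulation (standard notation, not tied to any single paper), the free energy F upper-bounds surprise, and the two ways of minimizing it correspond to perception (improving the internal model q) and action (making observations o less surprising):

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = D_{\mathrm{KL}}\!\left[q(s)\,\|\,p(s \mid o)\right] - \ln p(o)
  \;\ge\; -\ln p(o)
```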
https://en.wikipedia.org/wiki?curid=39403556
1,023,716
The Diesel engine is a heat engine: it converts heat into work. During the bottom isentropic processes (blue), energy is transferred into the system in the form of work formula_5, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat. During the constant pressure (red, isobaric) process, energy enters the system as heat formula_6. During the top isentropic processes (yellow), energy is transferred out of the system in the form of work formula_7, but by definition (isentropic) no energy is transferred into or out of the system in the form of heat. During the constant volume (green, isochoric) process, some of the energy flows out of the system as heat through the right depressurizing process formula_8. The work that leaves the system is equal to the work that enters the system plus the difference between the heat added to the system and the heat that leaves the system; in other words, the net gain of work is equal to the difference between the heat added to the system and the heat that leaves the system.
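The closing sentence is the first law of thermodynamics applied over one full cycle; written out in the notation of the paragraph (with formula_5 and formula_7 the work terms and formula_6 and formula_8 the heat terms), it amounts to:

```latex
W_{\mathrm{out}} - W_{\mathrm{in}} = Q_{\mathrm{in}} - Q_{\mathrm{out}},
\qquad
\eta_{\mathrm{th}} = \frac{W_{\mathrm{out}} - W_{\mathrm{in}}}{Q_{\mathrm{in}}} = 1 - \frac{Q_{\mathrm{out}}}{Q_{\mathrm{in}}}
```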
https://en.wikipedia.org/wiki?curid=8483
1,065,042
The Johnson–Cook (JC) model is purely empirical and is the most widely used of the five. However, this model exhibits an unrealistically small strain-rate dependence at high temperatures. The Steinberg–Cochran–Guinan–Lund (SCGL) model is semi-empirical. The model is purely empirical and strain-rate independent at high strain-rates. A dislocation-based extension is used at low strain-rates. The SCGL model is used extensively by the shock physics community. The Zerilli–Armstrong (ZA) model is a simple physically based model that has been used extensively. A more complex model that is based on ideas from dislocation dynamics is the Mechanical Threshold Stress (MTS) model. This model has been used to model the plastic deformation of copper, tantalum, alloys of steel, and aluminum alloys. However, the MTS model is limited to strain-rates less than around 10^7/s. The Preston–Tonks–Wallace (PTW) model is also physically based and has a form similar to the MTS model. However, the PTW model has components that can model plastic deformation in the overdriven shock regime (strain-rates greater than 10^7/s). Hence this model is valid for the largest range of strain-rates among the five flow stress models.
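For reference, the Johnson–Cook flow stress has the familiar multiplicative form (a standard presentation; symbols and reference values vary between sources):

```latex
\sigma_y = \left[A + B\,\varepsilon_p^{\,n}\right]
           \left[1 + C \ln \dot{\varepsilon}^{*}\right]
           \left[1 - (T^{*})^{m}\right],
\qquad
T^{*} = \frac{T - T_0}{T_m - T_0}
```

Here ε_p is the equivalent plastic strain, ε̇* the plastic strain rate normalized by a reference rate, A, B, C, n and m are material constants, and T_0 and T_m are a reference temperature and the melting temperature; the strain-rate factor grows only logarithmically, which is the source of the weak rate dependence noted above.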
https://en.wikipedia.org/wiki?curid=17328425
1,066,977
In simplest terms, a potential energy surface or PES is a mathematical or graphical representation of the relation between the energy of a molecule and its geometry. The methods for describing the potential energy are broken down into a classical mechanics interpretation (molecular mechanics) and a quantum mechanical interpretation. In the quantum mechanical interpretation an exact expression for energy can be obtained for any molecule derived from quantum principles (although an infinite basis set may be required) but ab initio calculations/methods will often use approximations to reduce computational cost. Molecular mechanics is empirically based and potential energy is described as a function of component terms that correspond to individual potential functions such as torsion, stretches, bends, Van der Waals energies, electrostatics and cross terms. Each component potential function is fit to experimental data or properties predicted by "ab initio" calculations. Molecular mechanics is useful in predicting equilibrium geometries and transition states as well as relative conformational stability. As a reaction occurs the atoms of the molecules involved will generally undergo some change in spatial orientation through internal motion as well as in their electronic environment. Distortions in the geometric parameters result in a deviation from the equilibrium geometry (local energy minima). These changes in geometry of a molecule or interactions between molecules are dynamic processes which call for understanding all the forces operating within the system. Since these forces can be mathematically derived as the first derivative of the potential energy with respect to a displacement, it makes sense to map the potential energy of the system as a function of geometric parameters q1, q2, and so on. The potential energy at given values of the geometric parameters is represented as a hyper-surface (when more than two parameters are varied) or a surface (when exactly two are varied). Mathematically, it can be written as a function of those parameters, as sketched below.
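A minimal statement of this relation, and of the forces derived from it (with q_i used as generic labels for the geometric parameters), is:

```latex
E = f(q_1, q_2, \dots, q_n),
\qquad
F_i = -\frac{\partial E}{\partial q_i}
```

Points where all of these forces vanish are stationary points of the surface: minima correspond to equilibrium geometries and first-order saddle points to transition states.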
https://en.wikipedia.org/wiki?curid=14149235
1,150,679
Quantum Darwinism seeks to explain the transition of quantum systems from the vast potentiality of superposed states to the greatly reduced set of pointer states as a selection process, einselection, imposed on the quantum system through its continuous interactions with the environment. All quantum interactions, including measurements, but much more typically interactions with the environment such as with the sea of photons in which all quantum systems are immersed, lead to decoherence or the manifestation of the quantum system in a particular basis dictated by the nature of the interaction in which the quantum system is involved. In the case of interactions with its environment Zurek and his collaborators have shown that a preferred basis into which a quantum system will decohere is the pointer basis underlying predictable classical states. It is in this sense that the pointer states of classical reality are selected from quantum reality and exist in the macroscopic realm in a state able to undergo further evolution. However, the 'einselection' program depends on assuming a particular division of the universal quantum state into 'system' + 'environment', with the different degrees of freedom of the environment posited as having mutually random phases. This phase randomness does not arise from within the quantum state of the universe on its own, and Ruth Kastner has pointed out that this limits the explanatory power of the Quantum Darwinism program. Zurek replies to Kastner's criticism in "Classical selection and quantum Darwinism".
https://en.wikipedia.org/wiki?curid=1334123
1,178,363
Computer system architectures which can support data parallel applications were promoted in the early 2000s for the large-scale data processing requirements of data-intensive computing. Data parallelism applies computation independently to each data item of a set of data, which allows the degree of parallelism to be scaled with the volume of data. The most important reason for developing data-parallel applications is the potential for scalable performance, which may result in a performance improvement of several orders of magnitude. The key issues with developing applications using data-parallelism are the choice of the algorithm, the strategy for data decomposition, load balancing on processing nodes, message passing communications between nodes, and the overall accuracy of the results. The development of a data parallel application can involve substantial programming complexity to define the problem in the context of available programming tools, and to address limitations of the target architecture. Information extraction from and indexing of Web documents is typical of data-intensive computing, which can derive significant performance benefits from data parallel implementations since Web and other types of document collections can typically then be processed in parallel.
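A minimal sketch of the idea in Python (the per-item function and the data are placeholders): because each item is processed independently, the same code scales from one worker to many simply by repartitioning the data.

```python
from multiprocessing import Pool

def process(item):
    """Placeholder per-item computation; items are handled independently of one another."""
    return item * item

if __name__ == "__main__":
    data = range(1_000_000)           # the degree of parallelism scales with the volume of data
    with Pool(processes=4) as pool:   # worker count chosen to suit the target machine
        results = pool.map(process, data, chunksize=10_000)
    print(sum(results))
```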
https://en.wikipedia.org/wiki?curid=31107479
1,253,206
Alternative abbreviations: FCHV (Fuel Cell Hybrid Vehicle), FCV (Fuel Cell Vehicle), HFCV (Hydrogen Fuel Cell Vehicle), HFCEV (Hydrogen Fuel Cell Electric Vehicle). This type is effectively an electric vehicle powered by electric energy generated by an onboard fuel cell. The fuel cell uses hydrogen stored in hydrogen tanks and oxygen from the air to create water and electricity. The water is discarded and the electricity is used for propulsion. As in a hybrid electric vehicle, there is a small traction battery. That battery captures energy while slowing down or driving downhill, as well as energy created by the fuel cell in advance. Fuel cells are not instantaneous: there is a significant delay between the driver's request for acceleration and electricity generation, because the fuel cell must be supplied with hydrogen gas and fresh air before it can generate electricity. That delay is bridged by keeping some energy readily available in a battery, capacitor or flywheel. These vehicles do not have a charging socket and therefore cannot be recharged from the grid; all energy originates from the fuel cell.
https://en.wikipedia.org/wiki?curid=60192408
1,364,489
The bacterial cell's control system has a hierarchical organization. The signaling and the control subsystem interfaces with the environment by means of sensory modules largely located on the cell surface. The genetic network logic responds to signals received from the environment and from internal cell status sensors to adapt the cell to current conditions. A major function of the top level control is to ensure that the operations involved in the cell cycle occur in the proper temporal order. In "Caulobacter", this is accomplished by the genetic regulatory circuit composed of five master regulators and an associated phospho-signaling network. The phosphosignaling network monitors the state of progression of the cell cycle and plays an essential role in accomplishing asymmetric cell division. The cell cycle control system manages the time and place of the initiation of chromosome replication and cytokinesis as well as the development of polar organelles. Underlying all these operations are the mechanisms for production of protein and structural components and energy production. The “housekeeping” metabolic and catabolic subsystems provide the energy and the molecular raw materials for protein synthesis, cell wall construction and other operations of the cell. The housekeeping functions are coupled bidirectionally to the cell cycle control system. However, they can adapt, somewhat independently of the cell cycle control logic, to changing composition and levels of the available nutrient sources.
https://en.wikipedia.org/wiki?curid=839361
1,568,910
Since the late '90s, Farhi has been studying how to use quantum mechanics to gain algorithmic speedup in solving problems that are difficult for conventional computers. He and Sam Gutmann pioneered the continuous-time, Hamiltonian-based approach to quantum computation, which is an alternative to the conventional gate model. He and Gutmann then proposed the idea of designing algorithms based on quantum walks, which was used to demonstrate the power of quantum computation over classical. They, along with Jeffrey Goldstone and Michael Sipser, introduced the idea of quantum computation by adiabatic evolution, which generated much interest in the quantum computing community. For example, the D-Wave machine is designed to run the quantum adiabatic algorithm. In 2007, Farhi, Goldstone and Gutmann showed, using quantum walks, that a quantum computer can determine who wins a game faster than a classical computer. In 2010, he, along with Peter Shor and others at MIT, introduced a scheme for quantum money which has so far resisted attack. In 2014 Farhi, Goldstone and Gutmann introduced the Quantum Approximate Optimization Algorithm (QAOA), a novel quantum algorithm for finding approximate solutions to combinatorial search problems. Farhi and Harrow showed that the lowest-depth version of the QAOA exhibits quantum supremacy, which means that in the worst case its output cannot be simulated efficiently by a classical device. The QAOA is viewed as one of the best candidates to run on noisy intermediate-scale quantum (NISQ) devices, which are coming online in the near future.
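For context, the QAOA ansatz as it is usually written (standard notation, not a quotation from the original paper) alternates evolution under a cost Hamiltonian C, which encodes the combinatorial problem, and a mixing Hamiltonian B, applied to the uniform superposition over bit strings:

```latex
|\gamma, \beta\rangle = e^{-i\beta_p B} e^{-i\gamma_p C} \cdots e^{-i\beta_1 B} e^{-i\gamma_1 C}\, |+\rangle^{\otimes n},
\qquad
B = \sum_{j=1}^{n} X_j
```

The 2p angles (γ, β) are typically chosen by a classical outer loop so as to maximize the expectation value ⟨γ, β|C|γ, β⟩, and increasing the depth p can only improve the achievable approximation.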
https://en.wikipedia.org/wiki?curid=47668180
1,688,839
Once the cell is complete, a surface coil (or coils, depending on the desired coil type) is taped to the outside of the cell, which a) allows RF pulses to be produced in order to tip the polarized spins into the detection field (x,y plane) and b) detects the signal produced by the polarized nuclear spins. The cell is placed in an oven which allows for the cell and its contents to be heated so the alkali metal enters the vapor phase, and the cell is centered in a coil system which generates an applied magnetic field (along the z-axis). A laser, tuned to the D line (electric-dipole transition) of the alkali metal and with a beam diameter matching the diameter of the optical cell, is then aligned with the optical flats of the cell in such a way that the entirety of the cell is illuminated by laser light to provide the largest polarization possible (Figure 7). The laser power can be anywhere from tens of watts to hundreds of watts, where higher power yields larger polarization but is more costly. To further increase polarization, a retro-reflective mirror is placed behind the cell to pass the laser light through the cell twice. Additionally, an IR iris is placed behind the mirror, providing information on laser light absorption by the alkali metal atoms. When the laser is illuminating the cell, but the cell is at room temperature, the IR iris is used to measure the percent transmittance of laser light through the cell. As the cell is heated, the rubidium enters the vapor phase and starts to absorb laser light, causing the percent transmittance to decrease. The difference in the IR spectrum between a room temperature spectrum and a spectrum taken while the cell is heated can be used to calculate an estimated rubidium polarization value, P.
https://en.wikipedia.org/wiki?curid=900726
1,793,140
In evaluating the SiTF curve, the signal input and signal output are measured differentially; meaning, the differential of the input signal and differential of the output signal are calculated and plotted against each other. An operator, using computer software, defines an arbitrary area, with a given set of data points, within the signal and background regions of the output image of the infrared sensor, i.e. of the unit under test (UUT), (see "Half Moon" image below). The average signal and background are calculated by averaging the data of each arbitrarily defined region. A second order polynomial curve is fitted to the data of each line. Then, the polynomial is subtracted from the average signal and background data to yield the new signal and background. The difference of the new signal and background data is taken to yield the net signal. Finally, the net signal is plotted versus the signal input. The signal input of the UUT is within its own spectral response. (e.g. color-correlated temperature, pixel intensity, etc.). The slope of the linear portion of this curve is then found using the method of least squares.
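A rough numerical sketch of this procedure (Python with NumPy; array names, region shapes, and the example numbers are all illustrative, and a real test bench would extract the operator-defined regions from the UUT image and restrict the fit to the linear portion of the curve):

```python
import numpy as np

def net_signal(signal_region, background_region):
    """One plausible reading of the procedure: average each operator-defined region,
    fit a 2nd-order polynomial to the background average to model shading, and
    return the mean of the signal average with that background trend removed."""
    sig = signal_region.mean(axis=0)               # per-column average of the signal region
    bkg = background_region.mean(axis=0)           # per-column average of the background region
    x = np.arange(bkg.size)
    trend = np.polyval(np.polyfit(x, bkg, 2), x)   # fitted 2nd-order background trend
    return float(np.mean(sig - trend))             # net (background-corrected) signal

# One net-signal value per input level; the SiTF is the least-squares slope of
# net signal output versus signal input.  The numbers below are made up purely
# for illustration.
inputs = np.array([1.0, 2.0, 3.0, 4.0])            # e.g. blackbody temperature differences
outputs = np.array([0.9, 2.1, 3.0, 4.2])           # net signal measured at each input
slope, intercept = np.polyfit(inputs, outputs, 1)
print(slope)
```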
https://en.wikipedia.org/wiki?curid=19392692
8,754
The mathematical description of motion, or kinematics, is based on the idea of specifying positions using numerical coordinates. Movement is represented by these numbers changing over time: a body's trajectory is represented by a function that assigns to each value of a time variable the values of all the position coordinates. The simplest case is one-dimensional, that is, when a body is constrained to move only along a straight line. Its position can then be given by a single number, indicating where it is relative to some chosen reference point. For example, a body might be free to slide along a track that runs left to right, and so its location can be specified by its distance from a convenient zero point, or origin, with negative numbers indicating positions to the left and positive numbers indicating positions to the right. If the body's location as a function of time is formula_1, then its average velocity over the time interval from formula_2 to formula_3 is formula_4. Here, the Greek letter formula_5 (delta) is used, per tradition, to mean "change in". A positive average velocity means that the position coordinate formula_6 increases over the interval in question, a negative average velocity indicates a net decrease over that interval, and an average velocity of zero means that the body ends the time interval in the same place as it began. Calculus gives the means to define an "instantaneous" velocity, a measure of a body's speed and direction of movement at a single moment of time, rather than over an interval. One notation for the instantaneous velocity is to replace formula_5 with the symbol formula_8, for example, formula_9. This denotes that the instantaneous velocity is the derivative of the position with respect to time. It can roughly be thought of as the ratio of an infinitesimally small change in position formula_10 to the infinitesimally small time interval formula_11 over which it occurs. More carefully, the velocity and all other derivatives can be defined using the concept of a limit. A function formula_12 has a limit of formula_13 at a given input value formula_2 if the difference between formula_15 and formula_13 can be made arbitrarily small by choosing an input sufficiently close to formula_2. One writes formula_18. Instantaneous velocity can be defined as the limit of the average velocity as the time interval shrinks to zero: formula_19. "Acceleration" is to velocity as velocity is to position: it is the derivative of the velocity with respect to time. Acceleration can likewise be defined as a limit: formula_20. Consequently, the acceleration is the "second derivative" of position, often written formula_21.
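For readability, the standard forms behind the formula placeholders above are (writing x(t) for the position as a function of time):

```latex
\bar{v} = \frac{\Delta x}{\Delta t} = \frac{x(t_1) - x(t_0)}{t_1 - t_0},
\qquad
v = \frac{dx}{dt} = \lim_{\Delta t \to 0} \frac{\Delta x}{\Delta t},
\qquad
a = \frac{dv}{dt} = \lim_{\Delta t \to 0} \frac{\Delta v}{\Delta t} = \frac{d^2 x}{dt^2}
```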
https://en.wikipedia.org/wiki?curid=55212
27,181
Cell signaling (or communication) is the ability of cells to receive, process, and transmit signals with their environment and with themselves. Signals can be non-chemical, such as light, electrical impulses, and heat, or chemical signals (or ligands) that interact with receptors, which can be found embedded in the cell membrane of another cell or located deep inside a cell. There are generally four types of chemical signals: autocrine, paracrine, juxtacrine, and hormones. In autocrine signaling, the ligand affects the same cell that releases it. Tumor cells, for example, can reproduce uncontrollably because they release signals that initiate their own self-division. In paracrine signaling, the ligand diffuses to nearby cells and affects them. For example, brain cells called neurons release ligands called neurotransmitters that diffuse across a synaptic cleft to bind with a receptor on an adjacent cell such as another neuron or muscle cell. In juxtacrine signaling, there is direct contact between the signaling and responding cells. Finally, hormones are ligands that travel through the circulatory systems of animals or vascular systems of plants to reach their target cells. Once a ligand binds with a receptor, it can influence the behavior of another cell, depending on the type of receptor. For instance, neurotransmitters that bind with an ionotropic receptor can alter the excitability of a target cell. Other types of receptors include protein kinase receptors (e.g., the receptor for the hormone insulin) and G protein-coupled receptors. Activation of G protein-coupled receptors can initiate second messenger cascades. The process by which a chemical or physical signal is transmitted through a cell as a series of molecular events is called signal transduction.
https://en.wikipedia.org/wiki?curid=9127632
34,194
In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular, or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is usually accompanied by a decrease, and sometimes an increase, of the total energy of the substances involved. Some energy may be transferred between the surroundings and the reactants in the form of heat or light; thus the products of a reaction have sometimes more but usually less energy than the reactants. A reaction is said to be exothermic or exergonic if the final state is lower on the energy scale than the initial state; in the less common case of endothermic reactions the situation is the reverse. Chemical reactions are usually not possible unless the reactants surmount an energy barrier known as the activation energy. The "speed" of a chemical reaction (at a given temperature "T") is related to the activation energy "E" by the Boltzmann population factor e^(−E/kT); that is, the probability that a molecule has energy greater than or equal to "E" at a given temperature "T". This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy.
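Written out explicitly (standard forms), the population factor and the Arrhenius rate law it leads to are:

```latex
P(E \ge E_a) \sim e^{-E_a / (k_B T)},
\qquad
k = A\, e^{-E_a / (R T)}
```

Here k is the rate constant, A the pre-exponential factor, and E_a the activation energy, expressed per molecule with the Boltzmann constant k_B or per mole with the gas constant R.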
https://en.wikipedia.org/wiki?curid=9649
34,218
Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since formula_13 is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~formula_14 joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons. Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Often, however, the complete conversion of matter (such as atoms) to non-matter (such as photons) is forbidden by conservation laws.
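The figure quoted above follows directly from the mass–energy relation (standard constants; 1 megaton of TNT is defined as 4.184 × 10^15 J):

```latex
E = mc^2 = (1~\mathrm{kg}) \times (2.998 \times 10^{8}~\mathrm{m/s})^2 \approx 8.99 \times 10^{16}~\mathrm{J},
\qquad
\frac{8.99 \times 10^{16}~\mathrm{J}}{4.184 \times 10^{15}~\mathrm{J/Mt}} \approx 21~\mathrm{Mt~of~TNT}
```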
https://en.wikipedia.org/wiki?curid=9649
44,551
Many nations around the world already have renewable energy contributing more than 20% of their total energy supply, with some generating over half their electricity from renewables. A few countries generate all their electricity using renewable energy. National renewable energy markets are projected to continue to grow strongly in the 2020s and beyond. Studies have shown that a global transition to 100% renewable energy across all sectors – power, heat, transport and desalination – is feasible and economically viable. Renewable energy resources exist over wide geographical areas, in contrast to fossil fuels, which are concentrated in a limited number of countries. Deployment of renewable energy and energy efficiency technologies is resulting in significant energy security, climate change mitigation, and economic benefits. However, renewables are being hindered by hundreds of billions of dollars of fossil fuel subsidies. In international public opinion surveys there is strong support for renewables such as solar power and wind power. But the International Energy Agency said in 2021 that to reach net zero carbon emissions more effort is needed to increase renewables, and called for generation to increase by about 12% a year to 2030. In a 2022 European survey, 47% of respondents from the European Union and 45% from the United Kingdom want their government to focus on the development of renewable energies. This is compared to 37% in both the United States and China when asked to list their priorities on energy. 46% of respondents from China believe that diversifying energy sources should be a top priority, compared to 34% of respondents from the European Union, 35% from the United Kingdom, and 39% from the United States. The German government is actively encouraging solar and wind energy to decrease dependence on conventional energy sources such as coal and boost the share of clean energy. Among American respondents, support is more evenly split between expanding renewable energy sources (37%) and diversifying energy suppliers (39%).
https://en.wikipedia.org/wiki?curid=25784
59,229
The relationship between cellular proliferation and mitochondria has been investigated. Tumor cells require ample ATP to synthesize bioactive compounds such as lipids, proteins, and nucleotides for rapid proliferation. The majority of ATP in tumor cells is generated via the oxidative phosphorylation pathway (OxPhos). Interference with OxPhos causes cell cycle arrest, suggesting that mitochondria play a role in cell proliferation. Mitochondrial ATP production is also vital for cell division and differentiation in infection in addition to basic functions in the cell including the regulation of cell volume, solute concentration, and cellular architecture. ATP levels differ at various stages of the cell cycle, suggesting that there is a relationship between the abundance of ATP and the cell's ability to enter a new cell cycle. ATP's role in the basic functions of the cell makes the cell cycle sensitive to changes in the availability of mitochondria-derived ATP. The variation in ATP levels at different stages of the cell cycle supports the hypothesis that mitochondria play an important role in cell cycle regulation. Although the specific mechanisms linking mitochondria and cell cycle regulation are not well understood, studies have shown that low-energy cell cycle checkpoints monitor the energy capability before committing to another round of cell division.
https://en.wikipedia.org/wiki?curid=19588
66,054
Big data analytics has been used in healthcare to provide personalized medicine and prescriptive analytics, clinical risk intervention and predictive analytics, waste and care variability reduction, automated external and internal reporting of patient data, standardized medical terms and patient registries. Some areas of improvement are more aspirational than actually implemented. The level of data generated within healthcare systems is not trivial. With the added adoption of mHealth, eHealth and wearable technologies, the volume of data will continue to increase. This includes electronic health record data, imaging data, patient-generated data, sensor data, and other forms of difficult-to-process data. There is now an even greater need for such environments to pay greater attention to data and information quality. "Big data very often means 'dirty data' and the fraction of data inaccuracies increases with data volume growth." Human inspection at the big data scale is impossible, and there is a pressing need in the health service for intelligent tools to control accuracy and believability and to handle missed information. While extensive information in healthcare is now electronic, it fits under the big data umbrella as most is unstructured and difficult to use. The use of big data in healthcare has raised significant ethical challenges ranging from risks for individual rights, privacy and autonomy, to transparency and trust.
https://en.wikipedia.org/wiki?curid=27051151
89,051
While Robert Hooke’s discovery of cells in 1665 led to the proposal of the Cell Theory, Hooke's observations fostered the mistaken belief that all cells contained a hard cell wall, since only plant cells could be observed at the time. Microscopists focused on the cell wall for well over 150 years until advances in microscopy were made. In the early 19th century, cells were recognized as being separate entities, unconnected, and bound by individual cell walls after it was found that plant cells could be separated. This theory extended to include animal cells to suggest a universal mechanism for cell protection and development. By the second half of the 19th century, microscopy was still not advanced enough to make a distinction between cell membranes and cell walls. However, some microscopists at this time correctly inferred that, while invisible, cell membranes existed in animal cells, since components moved internally but not externally, and that membranes were not the equivalent of a plant cell's wall. It was also inferred that cell membranes were not vital components to all cells. Towards the end of the 19th century, many still disputed the existence of a cell membrane. In 1890, an update to the Cell Theory stated that cell membranes existed, but were merely secondary structures. It was not until later studies with osmosis and permeability that cell membranes gained more recognition. In 1895, Ernest Overton proposed that cell membranes were made of lipids.
https://en.wikipedia.org/wiki?curid=33051527