1,859,630
The theory of control systems leaves room for systems with both feedforward and feedback elements or pathways. The terms 'feedforward' and 'feedback' refer to elements or paths within a system, not to a system as a whole. The input to the system comes from outside it, as energy from the signal source by way of some possibly leaky or noisy path. Part of the output of a system can be combined, by way of a feedback path, with a signal derived from the system input, through some operation such as addition or subtraction, to form a 'return balance signal' that is input to a part of the system, forming a feedback loop within the system. (It is not correct to say that part of the output of a system can be used as the input to the system.)
https://en.wikipedia.org/wiki?curid=6965005
1,869,558
The LICA experiment was designed to measure 0.5–5 MeV/nucleon solar and magnetospheric ions (He through Ni) arriving from the zenith in twelve energy bands. The mass of an ion was determined from simultaneous measurements of its time of flight (ToF) along its path through the telescope and its residual kinetic energy in one of four silicon (Si) solid state detectors. Ions passing through the 0.75 micrometre nickel entrance foils emitted secondary electrons, which a chevron microchannel plate assembly amplified to form the signal to begin timing. A double entrance foil prevented single pinholes from allowing sunlight to enter the telescope and provided immunity to solar and geocoronal ultraviolet. Another foil and microchannel plate assembly in front of the solid state detectors gave the signal to stop timing. Wedge-and-strip anodes on the timing assemblies determined where the ion passed through the foils and, therefore, its flight path length. The velocity determined from the path length and the ToF, together with the residual energy measured by the solid state detectors, yielded the mass of the ion with a resolution of about 1%, adequate to provide complete isotope separation. Corrections for the energy loss in the entrance foils gave the ion's incident energy. The geometric factor of the sensor was 0.8 cm²·sr and the field of view was 17° × 21°. On-board processing determined whether ions triggering LICA were protons, He nuclei, or more massive ions. Protons were counted in a rate and not further analyzed. Heavier nuclei were treated as low (He) or high (more massive than He) priority for transmission to the ground. The instrument data processing unit ensured that a sample of both priority classes was telemetered, but that low priority events did not crowd out the rarer heavy species. Processed flux rates versus energy of H (hydrogen), He, O, the Si group, and the Fe group were extracted every 15 seconds for transmission. Appropriate magnetic field models enabled specification of the atomic charge state by means of rigidity cut-off calculations. In addition, the proton cut-off versus energy during an orbit helped charge identification of the other species. On-board calibrations of the sensor were done by command about once per week. Data were stored in 26.5 MB of on-board memory, which was dumped twice daily over ground stations.
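A sketch of the ToF × E mass determination described above (illustrative only, not the instrument's actual flight algorithm; the 0.5 m flight path and the example numbers are assumptions, since the real path-length value did not survive in the text):

```python
# Mass from time of flight and residual kinetic energy: E = (1/2) m v^2, v = L / t.
def ion_mass_u(tof_ns, energy_mev, path_m):
    """Return the ion mass in atomic mass units (u)."""
    v = path_m / (tof_ns * 1e-9)            # speed in m/s
    energy_j = energy_mev * 1.602e-13       # MeV -> joules
    m_kg = 2.0 * energy_j / v ** 2
    return m_kg / 1.661e-27                 # kg -> u

# A ~1 MeV/nucleon He-4 ion (~1.39e7 m/s) crossing an assumed 0.5 m path:
print(round(ion_mass_u(tof_ns=36.0, energy_mev=4.0, path_m=0.5), 2))  # ~4.0 u
```

A ~1% timing or energy error propagates to roughly a 1–2% mass error, which is consistent with the quoted resolution being just adequate for isotope separation.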
https://en.wikipedia.org/wiki?curid=10669005
1,872,693
In geometry, the Petr–Douglas–Neumann theorem (or the PDN-theorem) is a result concerning arbitrary planar polygons. The theorem asserts that a certain procedure when applied to an arbitrary polygon always yields a regular polygon having the same number of sides as the initial polygon. The theorem was first published by Karel Petr (1868–1950) of Prague in 1908. The theorem was independently rediscovered by Jesse Douglas (1897–1965) in 1940 and also by B H Neumann (1909–2002) in 1941. The naming of the theorem as "Petr–Douglas–Neumann theorem", or as the "PDN-theorem" for short, is due to Stephen B Gray. This theorem has also been called Douglas's theorem, the Douglas–Neumann theorem, the Napoleon–Douglas–Neumann theorem and Petr's theorem.
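A numerical sketch of the PDN construction using complex numbers (an illustration, not a proof; the quadrilateral below is arbitrary). On each side (a, b) one erects an isosceles triangle with apex angle θ = 2kπ/n; the apex is c = (b − wa)/(1 − w) with w = e^{iθ}, and applying the steps k = 1, …, n−2 should yield a regular n-gon:

```python
import cmath, math

def pdn_step(poly, theta):
    """Replace each side (a, b) by the apex of an isosceles triangle with apex angle theta."""
    w = cmath.exp(1j * theta)
    n = len(poly)
    # apex c solves b - c = w * (a - c)  =>  c = (b - w*a) / (1 - w)
    return [(poly[(i + 1) % n] - w * poly[i]) / (1 - w) for i in range(n)]

# Arbitrary quadrilateral; apply apex angles 2*pi*k/4 for k = 1, 2.
quad = [0 + 0j, 4 + 1j, 5 + 4j, 1 + 3j]
out = quad
for k in (1, 2):
    out = pdn_step(out, 2 * math.pi * k / 4)

sides = [abs(out[(i + 1) % 4] - out[i]) for i in range(4)]
print([round(s, 6) for s in sides])                          # four equal sides
print(round(abs(out[2] - out[0]) / abs(out[3] - out[1]), 6))  # diagonal ratio 1.0: a square
```

Note that the k = n/2 step (apex angle π) degenerates to taking midpoints of the sides, which is still a valid PDN step.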
https://en.wikipedia.org/wiki?curid=35762367
1,872,987
Professor Sarpeshkar's work in the May 2013 edition of "Nature" (doi: 10.1038/nature12148) pioneered the field of analog synthetic biology. Recently, three awarded patents and one pending patent of his have shown how to emulate quantum physics rigorously with classical analog circuits. He has used this to create novel quantum-inspired architectures that perform spectrum analysis like the biological inner ear or cochlea, i.e. a 'Quantum Cochlea'. Professor Sarpeshkar's book introduced a novel form of electronics termed "Cytomorphic electronics", i.e., electronics inspired by cell biology. It is based on the astounding similarity between the Boltzmann exponential equations of noisy molecular flux in chemical reactions and the Boltzmann exponential equations of noisy electron flow in transistors. Hence circuits in biology and chemistry can be mapped to circuits in electronics and vice versa. This 'cytomorphic mapping' enables one to map analog electronic motifs to analog molecular circuit motifs in living cells, as in the work in "Nature", and also to simulate large-scale feedback networks in cells with analog electronic supercomputers. Thus, his work has led to a novel and fundamental analog circuits approach to the fields of synthetic biology and systems biology, both of which are highly important to the future of biotechnology and medicine. For example, the synthesis of biofuels, chemicals, energy, and molecular and cellular sensors, as well as network drug design and treatments for cancer, diabetes, and auto-immune, infectious, and neural diseases, can all be impacted by his fundamental work on analog synthetic and systems biology.
https://en.wikipedia.org/wiki?curid=3050547
1,873,892
The idea of the proof of Vaught's theorem is as follows. If there are at most countably many countable models, then there is a smallest one: the atomic model, and a largest one, the saturated model, which are different if there is more than one model. If they are different, the saturated model must realize some "n"-type omitted by the atomic model. Then one can show that an atomic model of the theory of structures realizing this "n"-type (in a language expanded by finitely many constants) is a third model, not isomorphic to either the atomic or the saturated model. In the example above with 3 models, the atomic model is the one where the sequence is unbounded, the saturated model is the one where the sequence converges, and an example of a type not realized by the atomic model is an element greater than all elements of the sequence.
https://en.wikipedia.org/wiki?curid=13797188
1,874,618
The Q-system has been shown to work in a variety of organisms. It has been used to drive expression of luciferase, as a proof of principle, in cultured mammalian cells. In zebrafish the Q-system has been successfully used with several tissue-specific promoters, and was shown to work independently of the GAL4/UAS system when expressed in the same cell. In C. elegans the Q-system has been shown to work in muscles and in neuronal tissue. In 2016, the Q-system was used to target, for the first time, the olfactory neurons of the malaria mosquito "Anopheles gambiae". In 2019, the Q-system in "Anopheles" mosquitoes was used to examine the functional responses of olfactory neurons to odors. Also in 2019, the Q-system was introduced into the "Aedes aegypti" mosquito to capture tissue-specific expression patterns. These successes make the Q-system the system of choice when developing genetic tools for other organisms. Currently the main shortcoming of the Q-system is the low number of available transgenic lines, but this will likely be overcome as the scientific community creates and shares these resources, for example through the GAL4>QF2 HACK system, which converts existing GAL4 transgenic insertions to QF2. A DNA-binding domain of QF2 fused with the VP16 transcriptional activator domain was successfully applied in "Penicillium" to gain control over the penicillin-producing secondary metabolite gene cluster in a scalable manner.
https://en.wikipedia.org/wiki?curid=51758505
1,913,784
The firing of an action potential, and consequently the release of neurotransmitters, occurs by this gating mechanism. In synaptic gating, in order for an action potential to occur, there must be more than one input to produce a single output in the neuron being gated. The interaction between these sets of neurons creates a biological AND gate. The neuron being gated is bistable and must be brought to the up state before it can fire an action potential. When this bistable neuron is in the up state, the gate is open. A gatekeeper neuron is responsible for stimulating the bistable neuron by shifting it from the down state to the up state and thus opening the gate. Once the gate is open, an excitatory neuron can cause the bistable neuron to further depolarize and reach threshold, causing an action potential to occur. If the gatekeeper does not shift the bistable neuron from down to up, the excitatory neuron will not be able to fire an action potential in the bistable neuron. Both the gatekeeper neuron and the excitatory neuron are necessary to fire an action potential in the bistable neuron, but neither is sufficient to do so alone.
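A deliberately minimal boolean sketch of the AND-gate logic just described (real bistable neurons are continuous dynamical systems; this captures only the truth table):

```python
def bistable_neuron_fires(gatekeeper_input: bool, excitatory_input: bool) -> bool:
    """The gatekeeper shifts the neuron to the up state (opens the gate);
    only then can the excitatory input push it past threshold."""
    up_state = gatekeeper_input            # gate open only if the gatekeeper fires
    return up_state and excitatory_input   # both inputs needed: a biological AND gate

for gk in (False, True):
    for ex in (False, True):
        print(f"gatekeeper={gk!s:5} excitatory={ex!s:5} -> spike={bistable_neuron_fires(gk, ex)}")
```

Only the (True, True) row produces a spike, mirroring the statement that each input is necessary but neither is sufficient alone.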
https://en.wikipedia.org/wiki?curid=9916386
1,948,842
Finally, differences in cell bond tension could also play a role in the establishment of the boundary and the separation of the two different cell populations. Experimental data have shown that Myosin-II is up-regulated along both the dorsal-ventral and anterior-posterior boundaries in the imaginal wing disc. The D/V boundary is characterized by the presence of filamentous actin, and mutations in the Myosin-II heavy chain impair D/V compartmentalization. Similarly, both F-actin and Myosin-II are increased along the A/P boundary, accompanied by a decrease of "Bazooka", which was also observed in the D/V border. Inhibiting Rho-kinase, whose main target is Myosin-II, with the inhibitor Y-27632 significantly reduces cell bond tension, suggesting that Myosin-II could be the main effector of this process. In support of the signaling-affinity model, creating an artificial interface between cells with active vs. inactive Hh signaling induces a junctional behavior that aligns the cell bonds where these opposing cell types meet. Moreover, a 2.5-fold increase in mechanical tension is observed along the A/P boundary compared to the rest of the tissue. Simulations using a vertex model demonstrate that this increase in cell bond tension is enough to keep proliferating cell populations separated at compartment boundaries. The parameters used to measure cell bond tension are based on cell-cell adhesion and cortical tension inputs.
https://en.wikipedia.org/wiki?curid=4642940
1,956,045
Quantum feedback or quantum feedback control is a class of methods to prepare and manipulate a quantum system in which that system's quantum state or trajectory is used to evolve the system towards some desired outcome. Just as in the classical case, feedback occurs when outputs from the system are used as inputs that control the dynamics (e.g. by controlling the Hamiltonian of the system). The feedback signal is typically filtered or processed in a classical way, which is often described as measurement based feedback. However, quantum feedback also allows the possibility of maintaining the quantum coherence of the output as the signal is processed (via unitary evolution), which has no classical analogue.
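A toy numpy sketch of the measurement-based variety described above (invented for illustration; real protocols act on continuous measurement records and Hamiltonian controls): the classical controller reads a projective measurement outcome and applies a corrective gate to steer the qubit toward the target state |0⟩.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 1], [1, 0]], dtype=complex)   # bit-flip (Pauli-X) gate

def feedback_step(psi):
    p0 = abs(psi[0]) ** 2                        # Born-rule probability of outcome 0
    outcome = 0 if rng.random() < p0 else 1
    psi = np.zeros(2, complex); psi[outcome] = 1.0   # projective collapse
    if outcome == 1:                             # classical controller acts on the record
        psi = X @ psi                            # corrective flip: |1> -> |0>
    return psi

psi = np.array([0.6, 0.8], dtype=complex)        # arbitrary normalized input state
print(feedback_step(psi))                        # always ends in |0>
```

The coherent (fully quantum) feedback mentioned in the text would instead route the output through a unitary interaction, never producing a classical record at all.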
https://en.wikipedia.org/wiki?curid=46926884
1,978,552
The oracle complexity approach is inherently different from computational complexity theory, which relies on the Turing machine to model algorithms and requires the algorithm's input (in this case, the function f) to be represented as a string of bits in memory. Instead, the algorithm is not computationally constrained, but its access to the function f is assumed to be constrained. This means that, on the one hand, oracle complexity results only apply to the specific families of algorithms which access the function in a certain manner, and not to any algorithm, as in computational complexity theory. On the other hand, the results apply to most if not all iterative algorithms used in practice, do not rely on any unproven assumptions, and lead to a nuanced understanding of how the function's geometry and the type of information used by the algorithm affect practical performance.
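A sketch of what "constrained access" means in practice (the quadratic objective and step size are invented for illustration): the optimizer sees f only through value/gradient queries, and complexity is measured by counting those queries rather than Turing-machine steps.

```python
class FirstOrderOracle:
    """Wraps f so the optimizer can only query (value, gradient) pairs."""
    def __init__(self, f, grad):
        self.f, self.grad, self.calls = f, grad, 0

    def query(self, x):
        self.calls += 1
        return self.f(x), self.grad(x)

# Example: minimize f(x) = (x - 3)^2 with plain gradient descent.
oracle = FirstOrderOracle(f=lambda x: (x - 3) ** 2, grad=lambda x: 2 * (x - 3))
x, step = 0.0, 0.25
for _ in range(50):
    _, g = oracle.query(x)
    x -= step * g
print(f"x = {x:.4f} after {oracle.calls} oracle calls")
```

Lower bounds in this model then say: no algorithm restricted to such queries can converge faster than some rate, regardless of how much computation it performs between queries.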
https://en.wikipedia.org/wiki?curid=64271048
1,986,127
Tomography is a method of producing a three-dimensional image of the internal structures of a solid object (such as the human body or the earth) by the observation and recording of differences in the effects on the passage of energy waves impinging on those structures. Here the energy waves are P-waves generated by earthquakes, and the recorded quantities are the wave velocities. The high-quality data collected by the permanent seismic stations of USArray and the Advanced National Seismic System (ANSS) will allow the creation of high-resolution seismic imaging of the Earth's interior below the United States. Seismic tomography helps constrain mantle velocity structure and aids in the understanding of the chemical and geodynamic processes at work. With the data collected by USArray and global travel-time data, a global tomography model of P-wave velocity heterogeneity in the mantle can be created. The range and resolution of this technique will allow investigation of the suite of problems that are of concern in the North American mantle lithosphere, including the nature of the major tectonic features. The method gives evidence for differences in thickness and in the velocity anomaly of the mantle lithosphere between the stable center of the continent and the more active western North America. These data are vital for understanding local lithosphere evolution and, when combined with additional global data, will allow the mantle to be imaged beyond the current extent of USArray.
https://en.wikipedia.org/wiki?curid=3463982
1,996,099
Shortly after his PhD thesis work, he derived his most well known scientific contribution, what is known as the Lindblad equation. As the Schrödinger equation describes the evolution of a closed quantum system, the Lindblad equation is a generalization, describing the evolution of an open quantum system, in which a system of interest is interacting with an uncontrollable environment. The Lindblad equation is a significant theoretical contribution and is widely used in many fields of physics, including quantum optics and condensed matter. It is also now the most common method for describing noise that affects various quantum technologies, in the domains of quantum communication and computation.
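For reference, the standard diagonal form of the Lindblad equation (a textbook statement, not tied to any one source) is:

```latex
\frac{d\rho}{dt} \;=\; -\frac{i}{\hbar}\,[H,\rho]
  \;+\; \sum_k \gamma_k \left( L_k \rho L_k^\dagger
  \;-\; \tfrac{1}{2}\,\bigl\{ L_k^\dagger L_k ,\; \rho \bigr\} \right)
```

Here ρ is the density matrix of the open system, H its Hamiltonian, the L_k are jump operators encoding the coupling to the environment, and the γ_k ≥ 0 are decay rates; setting all γ_k = 0 recovers the closed-system von Neumann (Schrödinger) evolution.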
https://en.wikipedia.org/wiki?curid=56013977
2,008,125
Data in StatCrunch is represented in a "data table" view, which is similar to a spreadsheet view, but unlike spreadsheets, the cells in a data table can only contain numbers or text. Formulas cannot be stored in these cells. There are many ways to import data into StatCrunch. Data can be typed directly into cells in the data table. Entire blocks of data may be cut-and-pasted into the data table. Text files (.csv, .txt, etc.) and Microsoft Excel files (.xls and .xlsx) can be drag-and-dropped into the data table. Data can be pulled into StatCrunch directly from Wikipedia tables or other Web tables, including multi-page tables. Data can be loaded directly from Google Drive and Dropbox. Shared data sets saved by other StatCrunch community users can be searched for by title or keyword and opened in a data table.
https://en.wikipedia.org/wiki?curid=53263934
2,019,807
A large amount of data collected from the Internet comes from user-generated content, including blogs, posts on social networks, and information submitted in forms. Besides user-generated data, corporations are also mining data from consumers in order to understand customers, identify new markets, and make investment decisions. Kirkpatrick, the Director of United Nations Global Pulse, labels this data "massive passive data" or "data exhaust". Data philanthropy is the idea that something positive can come from this overload of data; it is defined as the private sector sharing this data in ways from which the public can benefit. The term philanthropy helps to emphasize that such data sharing is a positive act and that the shared data is a public good.
https://en.wikipedia.org/wiki?curid=49882988
2,096,593
Since the recognition of anthropogenic causes of climate change in the late 1980s and the identification of the energy sector as one of the main contributors, there has been a global effort to investigate the energy consumption of human activities and their indirect contribution to greenhouse gas emissions. In Europe the energy analysis of the wastewater sector was conducted following mainly two strategies. Germany (MURL, 1999) and Switzerland (BUWAL, 1994), for example, developed energy management manuals for wastewater treatment plants and reduced their energy consumption by 38% and 50%, respectively. These manuals provided wastewater utilities with energy targets to achieve. Austria, on the other hand, promoted benchmarking in 1999, allowing annual comparison of wastewater treatment plants' energy performances. This comparison stimulated competition among the wastewater treatment plants and the aspiration to improve their efficiency, which led Austria to become one of the first countries in the world to achieve energy neutrality in the wastewater sector. The energy benchmarking process has allowed wastewater treatment plants to identify their most energy-consuming assets and possible inefficiencies, and to target them to reduce their energy demand. For example, the inefficiency of the aeration process identified by multiple studies has driven the development of more energy-efficient oxidation units, with possible energy savings of about 20% to 50% in some cases according to Frijns and an EPA study.
https://en.wikipedia.org/wiki?curid=55657441
2,109,824
In general, the lack of low-cost diagnostics for malaria results in late diagnosis of the disease in many low-income communities (contributing to high morbidity and mortality from severe forms of malaria), and in over-treatment of malaria where syndromic management is used for lack of point-of-care diagnostics (contributing to money wasted on treating non-malarial illness, especially since the newly recommended artemisinin-based therapies are expensive). Additionally, data relay to the Ministry of Health is inconsistent despite the number of data management platforms being used by health practitioners. The data received is inconsistent in both quality and quantity and is often outdated or not in real time. The current data collection and surveillance methods run through the national Health Management Information System (HMIS). Data is first collected at the health-centre level on hard copies (paper/books), with electronic medical record systems at the few health centres that have the capacity. However, usage of this system is very low: paper-based records are lost in delivery, data quality is poor (inaccurate statistical data), HMIS reports are delivered late, data from private health providers and the community level is excluded, HMIS data is inadequately segregated, and political support is limited. The lack of functional supply chains and of adequate reporting on the availability of supplies means that health facilities are often without vital drugs and equipment for long periods, and as a result drug and diagnostic performance cannot be monitored. Efficient health information and data systems are vital for improved decision making and timely intervention, and health care is shifting into an era in which data-driven approaches yield appropriate resource utilization for implementing health programs.
https://en.wikipedia.org/wiki?curid=64748079
2,118,730
Thermodynamics and statistical mechanics describe systems that have variable numbers of particles via the chemical potential μ, defined as the Gibbs free energy per particle: μ = G/N, where G is the Gibbs free energy for a system of N particles. In thermal and particle equilibrium with bulk reservoirs, the entire system has a common value of the chemical potential (the Fermi level in other contexts). The free energy needed for the entry of a new ion into the channel is defined by the excess chemical potential μ_ex, which (ignoring an entropy term) can be written as μ_ex = U_c − E_b, where U_c is the charging energy (self-energy barrier) of an incoming ion and E_b is its affinity, i.e. the energy of attraction to the binding site. The difference in energy between U_c and E_b defines the ionic energy level separation (Coulomb gap) and gives rise to most of the observed ICB effects.
https://en.wikipedia.org/wiki?curid=58030745
2,122,131
In condensed matter physics, the quantum dimer magnet state is one in which quantum spins in a magnetic structure entangle to form singlet states. These entangled spins act as bosons, and their excited states (triplons) can undergo Bose–Einstein condensation (BEC). The quantum dimer system was originally proposed by Matsubara and Matsuda as a mapping of the lattice Bose gas to the quantum antiferromagnet. Quantum dimer magnets are often confused with valence bond solids; however, a valence bond solid requires the spontaneous breaking of translational symmetry and the dimerizing of spins, whereas in quantum dimer magnets the dimers are inherent to the crystal structure, so the translational symmetry is already explicitly broken. There are two types of quantum dimer models: the XXZ model and the weakly-coupled dimer model. The main difference is the regime in which BEC can occur. For the XXZ model (commonly referred to as the magnon BEC), the BEC occurs upon cooling without a magnetic field and manifests itself as a symmetric dome in the field versus temperature phase diagram centered about H = 0. The weakly-coupled dimer model does not magnetically order in zero magnetic field, but instead orders upon the closing of the spin gap, where the BEC regime begins and is a dome centered at non-zero field.
https://en.wikipedia.org/wiki?curid=58307276
2,149,311
The "Annual Review of Cell and Developmental Biology" defines its scope as covering significant developments in the fields of developmental and cell biology. Included subfields are the organization of structures and functions within the cell, cell development, cell evolution for unicellular and multicellular organisms, molecular biology models, and research tools. Beginning in 2005, each volume starts with a prefatory chapter written by a prominent cell or developmental biologist in which they reflect upon their career and experiences. As of 2022, "Journal Citation Reports" lists the journal's impact factor as 11.902, ranking it third of 39 titles in the category "Developmental Biology" and twenty-seventh of 194 titles in the category "Cell Biology". It is abstracted and indexed in Scopus, Science Citation Index Expanded, MEDLINE, EMBASE, Chemical Abstracts Core, and Academic Search, among others.
https://en.wikipedia.org/wiki?curid=16094944
2,164,929
Hamiltonian complexity or quantum Hamiltonian complexity is a topic which deals with problems in quantum complexity theory and condensed matter physics. It mostly studies constraint satisfaction problems related to ground states of local Hamiltonians; that is, Hermitian matrices that act locally on a system of interest. The constraint satisfaction problems in quantum Hamiltonian complexity have led to the quantum version of the Cook–Levin theorem. Quantum Hamiltonian complexity has helped physicists understand the difficulty of simulating physical systems.
https://en.wikipedia.org/wiki?curid=62591526
2,166,661
Viral pathogens capitalize on cell surface receptors that are ubiquitous and can recognize many diverse ligands for attachment and, ultimately, entry into the cell. These ligands consist not only of endogenous proteins but also of bacterial and viral products. Once the virus is anchored to the cell surface, virus uptake typically occurs using host mechanisms such as endocytosis. One method of viral uptake is clathrin-mediated endocytosis (CME). The cell surface receptors provide a binding pocket for attachment and entry into the cell and therefore affect a cell's susceptibility to infection. In addition, the receptor density on the surface of the endothelial cell affects how efficiently the virus enters the host cell. For instance, a lower cell surface receptor density may render an endothelial cell less susceptible to virus infection than an endothelial cell with a higher cell surface receptor density. The endothelium contains a myriad of cell surface receptors associated with functions such as immune cell adherence and trafficking, blood clotting, vasodilation, and barrier permeability. Given these vital functions, virus interactions with these receptors offer insight into the symptoms that present during viral pathogenesis, such as inflammation, increased vascular permeability, and thrombosis.
https://en.wikipedia.org/wiki?curid=68831707
2,211,207
A data cooperative is a group of individuals voluntarily pooling their data. As an entity, a data cooperative is a type of data infrastructure formed through the voluntary and collaborative pooling efforts of individuals. Because they are created, owned and operated by community members, data cooperatives give communities and their members full control over their data and over the decisions made from the insights the data yields. By giving individual members this control, data cooperatives act as a new and innovative type of data infrastructure that serves as a counterweight to data brokers and data-driven corporations.
https://en.wikipedia.org/wiki?curid=72285060
2,241,463
NOMADS fosters system inter-operability by integrating legacy systems, emerging technologies, and the existing metadata conventions used for models and observational data. NOMADS relies on local decisions about data holdings. Loosely combining legacy systems, while developing new ways to support access to valuable data, permits NOMADS to work on the cutting edge of distributed data systems. In this effort, no one institution carries the weight of data delivery, since data are distributed across the network and served by the institutions that developed them. The responsibility for documentation falls on the data generator, with the Advisory Panels ensuring overall quality and systems standards and determining which NOMADS data are required for long-term storage. Further, NOMADS in no way precludes the need for national centers to maintain and support long-term archives; in fact, NOMADS and secure data archives are mutually supportive and necessary for long-term research. The primary science benefit of the NOMADS framework is that it enables a feedback mechanism to tie Government and university research directly back to the NOAA operational communities, numerical weather prediction quality control and diagnostics processes at NCEP, and climate model assessments and inter-comparisons from around the world.
https://en.wikipedia.org/wiki?curid=31944603
9,505
The rules of quantum mechanics assert that the state space of a system is a Hilbert space and that observables of the system are Hermitian operators acting on vectors in that space – although they do not tell us which Hilbert space or which operators. These can be chosen appropriately in order to obtain a quantitative description of a quantum system, a necessary step in making physical predictions. An important guide for making these choices is the correspondence principle, a heuristic which states that the predictions of quantum mechanics reduce to those of classical mechanics in the regime of large quantum numbers. One can also start from an established classical model of a particular system, and then try to guess the underlying quantum model that would give rise to the classical model in the correspondence limit. This approach is known as quantization.
https://en.wikipedia.org/wiki?curid=25202
11,773
Embedded machine learning is a sub-field of machine learning in which machine learning models are run on embedded systems with limited computing resources, such as wearable computers, edge devices and microcontrollers. Running machine learning models on embedded devices removes the need to transfer and store data on cloud servers for further processing, thereby reducing the data breaches and privacy leaks that can occur while data is in transit, and also minimizes theft of intellectual property, personal data and business secrets. Embedded machine learning can be implemented through several techniques, including hardware acceleration, approximate computing, and optimization of machine learning models, among others.
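A sketch of one common model-optimization technique alluded to above, post-training affine quantization of float32 weights to int8 (illustrative numpy only, not any particular framework's API), which shrinks weight memory roughly 4x at the cost of a small rounding error:

```python
import numpy as np

def quantize_int8(w):
    """Map a float32 tensor onto int8 with an affine (scale, zero-point) scheme."""
    scale = (w.max() - w.min()) / 255.0
    zero_point = np.round(-w.min() / scale).astype(np.int32) - 128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

w = np.random.default_rng(1).normal(size=1000).astype(np.float32)
q, s, z = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s, z)).max())  # ~ scale / 2
```

On a microcontroller, the int8 weights are used directly with integer arithmetic; the scale and zero-point travel with the tensor as metadata.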
https://en.wikipedia.org/wiki?curid=233488
17,352
To derive a generalized entropy balance equation, we start with the general balance equation for the change in any extensive quantity θ in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that dθ/dt, i.e. the rate of change of θ in the system, equals the rate at which θ enters the system at the boundaries, minus the rate at which θ leaves the system across the system boundaries, plus the rate at which θ is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation with respect to the rate of change with time of the extensive quantity entropy S, the entropy balance equation is $dS/dt = \sum_k \dot{M}_k \hat{S}_k + \dot{Q}/T + \dot{S}_{\mathrm{gen}}$, where $\sum_k \dot{M}_k \hat{S}_k$ is the net rate of entropy flow carried by the mass flows $\dot{M}_k$ into and out of the system (with $\hat{S}_k$ the entropy per unit mass), $\dot{Q}/T$ is the rate of entropy flow due to heat flow across the boundary at temperature $T$, and $\dot{S}_{\mathrm{gen}}$ is the rate of entropy production within the system.
https://en.wikipedia.org/wiki?curid=9891
23,791
The electron was discovered in 1897 by J. J. Thomson, and it was quickly realized that it is the particle (charge carrier) that carries electric currents in electric circuits. In 1900 the first (classical) model of electrical conduction, the Drude model, was proposed by Paul Drude, which finally gave a scientific explanation for Ohm's law. In this model, a solid conductor consists of a stationary lattice of atoms (ions), with conduction electrons moving randomly in it. A voltage across a conductor causes an electric field, which accelerates the electrons in the direction of the electric field, causing a drift of electrons which is the electric current. However the electrons collide with atoms which causes them to scatter and randomizes their motion, thus converting kinetic energy to heat (thermal energy). Using statistical distributions, it can be shown that the average drift velocity of the electrons, and thus the current, is proportional to the electric field, and thus the voltage, over a wide range of voltages.
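A back-of-envelope sketch of the Drude result σ = ne²τ/m implied by this argument (the copper parameter values below are rounded textbook figures, assumed for illustration):

```python
e = 1.602e-19      # electron charge, C
m = 9.109e-31      # electron mass, kg
n = 8.5e28         # copper conduction-electron density, m^-3
tau = 2.5e-14      # mean time between collisions, s

sigma = n * e**2 * tau / m
print(f"sigma ~ {sigma:.2e} S/m")                         # ~6e7 S/m, near copper's measured value
print(f"drift speed in a 1 V/m field: {e * tau / m:.2e} m/s")  # v_d = e*E*tau/m, only mm/s
```

The tiny drift speed compared with the electrons' large random thermal speeds is exactly why the response is linear in the field, i.e. why Ohm's law emerges from this model.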
https://en.wikipedia.org/wiki?curid=49090
27,170
Every cell is enclosed within a cell membrane that separates its cytoplasm from the extracellular space. A cell membrane consists of a lipid bilayer, including cholesterols that sit between phospholipids to maintain their fluidity at various temperatures. Cell membranes are semipermeable, allowing small molecules such as oxygen, carbon dioxide, and water to pass through while restricting the movement of larger molecules and charged particles such as ions. Cell membranes also contain membrane proteins, including integral membrane proteins that span the membrane and serve as membrane transporters, and peripheral proteins that loosely attach to the outer side of the cell membrane, acting as enzymes that shape the cell. Cell membranes are involved in various cellular processes such as cell adhesion, storing electrical energy, and cell signalling, and serve as the attachment surface for several extracellular structures such as a cell wall, glycocalyx, and cytoskeleton.
https://en.wikipedia.org/wiki?curid=9127632
27,172
All cells require energy to sustain cellular processes. Energy is the capacity to do work, which, in thermodynamics, can be calculated using Gibbs free energy. According to the first law of thermodynamics, energy is conserved, i.e., cannot be created or destroyed. Hence, chemical reactions in a cell do not create new energy but are involved instead in the transformation and transfer of energy. Nevertheless, all energy transfers lead to some loss of usable energy, which increases entropy (or state of disorder) as stated by the second law of thermodynamics. As a result, an organism requires continuous input of energy to maintain a low state of entropy. In cells, energy can be transferred as electrons during redox (reduction–oxidation) reactions, stored in covalent bonds, and generated by the movement of ions (e.g., hydrogen, sodium, potassium) across a membrane.
https://en.wikipedia.org/wiki?curid=9127632
37,285
According to current theories on the nature of wormholes, construction of a traversable wormhole would require the existence of a substance with negative energy, often referred to as "exotic matter". More technically, the wormhole spacetime requires a distribution of energy that violates various energy conditions, such as the null energy condition along with the weak, strong, and dominant energy conditions. However, it is known that quantum effects can lead to small measurable violations of the null energy condition, and many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics. Although early calculations suggested that a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small.
https://en.wikipedia.org/wiki?curid=31591
39,661
This generalizes to any number of particles in any number of dimensions (in a time-independent potential): the standing wave solutions of the time-dependent equation are the states with definite energy, instead of a probability distribution of different energies. In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels. The energy eigenstates form a basis: any wave function may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is the spectral theorem in mathematics, and in a finite-dimensional state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix.
https://en.wikipedia.org/wiki?curid=59874
39,707
In the views often grouped together as the Copenhagen interpretation, a system's wave function is a collection of statistical information about that system. The Schrödinger equation relates information about the system at one time to information about it at another. While the time-evolution process represented by the Schrödinger equation is continuous and deterministic, in that knowing the wave function at one instant is in principle sufficient to calculate it for all future times, wave functions can also change discontinuously and stochastically during a measurement. The wave function changes, according to this school of thought, because new information is available. The post-measurement wave function generally cannot be known prior to the measurement, but the probabilities for the different possibilities can be calculated using the Born rule. Other, more recent interpretations of quantum mechanics, such as relational quantum mechanics and QBism also give the Schrödinger equation a status of this sort.
https://en.wikipedia.org/wiki?curid=59874
42,043
The Poincaré recurrence theorem considers a theoretical microscopic description of an isolated physical system. This may be considered as a model of a thermodynamic system after a thermodynamic operation has removed an internal wall. The system will, after a sufficiently long time, return to a microscopically defined state very close to the initial one. The Poincaré recurrence time is the length of time elapsed until the return. It is exceedingly long, likely longer than the life of the universe, and depends sensitively on the geometry of the wall that was removed by the thermodynamic operation. The recurrence theorem may be perceived as apparently contradicting the second law of thermodynamics. More obviously, however, it is simply a microscopic model of thermodynamic equilibrium in an isolated system formed by removal of a wall between two systems. For a typical thermodynamical system, the recurrence time is so large (many many times longer than the lifetime of the universe) that, for all practical purposes, one cannot observe the recurrence. One might wish, nevertheless, to imagine that one could wait for the Poincaré recurrence, and then re-insert the wall that was removed by the thermodynamic operation. It is then evident that the appearance of irreversibility is due to the utter unpredictability of the Poincaré recurrence given only that the initial state was one of thermodynamic equilibrium, as is the case in macroscopic thermodynamics. Even if one could wait for it, one has no practical possibility of picking the right instant at which to re-insert the wall. The Poincaré recurrence theorem provides a solution to Loschmidt's paradox. If an isolated thermodynamic system could be monitored over increasingly many multiples of the average Poincaré recurrence time, the thermodynamic behavior of the system would become invariant under time reversal.
https://en.wikipedia.org/wiki?curid=133017
44,581
Marine energy (also sometimes referred to as ocean energy) is the energy carried by ocean waves, tides, salinity, and ocean temperature differences. The movement of water in the world's oceans creates a vast store of kinetic energy, or energy in motion. This energy can be harnessed to generate electricity to power homes, transport and industries. The term marine energy encompasses wave power (power from surface waves), marine current power (power from marine hydrokinetic streams such as the Gulf Stream), and tidal power (obtained from the kinetic energy of large bodies of moving water). Reverse electrodialysis (RED) is a technology for generating electricity by mixing fresh river water and salty sea water in large power cells designed for this purpose; as of 2016, it is being tested at a small scale (50 kW). Offshore wind power is not a form of marine energy, as wind power is derived from the wind, even if the wind turbines are placed over water. The oceans have a tremendous amount of energy and are close to many if not most concentrated populations. Ocean energy has the potential of providing a substantial amount of new renewable energy around the world.
https://en.wikipedia.org/wiki?curid=25784
47,141
According to current theories on the nature of wormholes, construction of a traversable wormhole would require the existence of a substance with negative energy, often referred to as "exotic matter". More technically, the wormhole spacetime requires a distribution of energy that violates various energy conditions, such as the null energy condition along with the weak, strong, and dominant energy conditions. However, it is known that quantum effects can lead to small measurable violations of the null energy condition, and many physicists believe that the required negative energy may actually be possible due to the Casimir effect in quantum physics. Although early calculations suggested a very large amount of negative energy would be required, later calculations showed that the amount of negative energy can be made arbitrarily small.
https://en.wikipedia.org/wiki?curid=34043
55,102
Synapses may be electrical or chemical. Electrical synapses make direct electrical connections between neurons, but chemical synapses are much more common, and much more diverse in function. At a chemical synapse, the cell that sends signals is called presynaptic, and the cell that receives signals is called postsynaptic. Both the presynaptic and postsynaptic areas are full of molecular machinery that carries out the signalling process. The presynaptic area contains large numbers of tiny spherical vessels called synaptic vesicles, packed with neurotransmitter chemicals. When the presynaptic terminal is electrically stimulated, an array of molecules embedded in the membrane are activated, and cause the contents of the vesicles to be released into the narrow space between the presynaptic and postsynaptic membranes, called the synaptic cleft. The neurotransmitter then binds to receptors embedded in the postsynaptic membrane, causing them to enter an activated state. Depending on the type of receptor, the resulting effect on the postsynaptic cell may be excitatory, inhibitory, or modulatory in more complex ways. For example, release of the neurotransmitter acetylcholine at a synaptic contact between a motor neuron and a muscle cell induces rapid contraction of the muscle cell. The entire synaptic transmission process takes only a fraction of a millisecond, although the effects on the postsynaptic cell may last much longer (even indefinitely, in cases where the synaptic signal leads to the formation of a memory trace).
https://en.wikipedia.org/wiki?curid=21944
57,673
In January 1926, Schrödinger published in "Annalen der Physik" the paper "Quantisierung als Eigenwertproblem" (Quantization as an Eigenvalue Problem) on wave mechanics and presented what is now known as the Schrödinger equation. In this paper, he gave a "derivation" of the wave equation for time-independent systems and showed that it gave the correct energy eigenvalues for a hydrogen-like atom. This paper has been universally celebrated as one of the most important achievements of the twentieth century, one that created a revolution in most areas of quantum mechanics and indeed of all physics and chemistry. A second paper was submitted just four weeks later that solved the quantum harmonic oscillator, rigid rotor, and diatomic molecule problems and gave a new derivation of the Schrödinger equation. A third paper, published in May, showed the equivalence of his approach to that of Heisenberg and gave the treatment of the Stark effect. A fourth paper in this series showed how to treat problems in which the system changes with time, as in scattering problems. In this paper he introduced a complex solution to the wave equation in order to prevent the occurrence of fourth and sixth order differential equations; Schrödinger ultimately reduced the order of the equation to one. (This was arguably the moment when quantum mechanics switched from real to complex numbers.) These papers were his central achievement and were at once recognized as having great significance by the physics community.
https://en.wikipedia.org/wiki?curid=9942
64,031
The domain and codomain are not always explicitly given when a function is defined, and, without some (possibly difficult) computation, one might only know that the domain is contained in a larger set. Typically, this occurs in mathematical analysis, where "a function from X to Y" often refers to a function that may have a proper subset of X as domain. For example, a "function from the reals to the reals" may refer to a real-valued function of a real variable. However, a "function from the reals to the reals" does not mean that the domain of the function is the whole set of the real numbers, but only that the domain is a set of real numbers that contains a non-empty open interval. Such a function is then called a partial function. For example, if f is a function that has the real numbers as domain and codomain, then a function mapping the value x to the value 1/f(x) is a function g from the reals to the reals, whose domain is the set of the reals x such that f(x) ≠ 0.
https://en.wikipedia.org/wiki?curid=185427
64,437
A thermodynamic process might be initiated by a thermodynamic operation in the surroundings that mechanically increases the controlled volume of the vapor. Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection, which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem cannot be uniquely split into heat and work transfers to or from the open system. The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called the 'latent heat of evaporation', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long-range external forces such as gravity are both considered to be zero. The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium.
https://en.wikipedia.org/wiki?curid=166404
64,439
With such independence of variables, the total increase of internal energy in the process is then determined as the sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are permeable to it, and of the internal energy transferred to the system as heat through the diathermic walls, and of the energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long-range forces. These simultaneously transferred quantities of energy are defined by events in the surroundings of the system. Because the internal energy transferred with matter is not in general uniquely resolvable into heat and work components, the total energy transfer cannot in general be uniquely resolved into heat and work components. Under these conditions, the following formula can describe the process in terms of externally defined thermodynamic variables, as a statement of the first law of thermodynamics:
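The formula this sentence introduces did not survive extraction. A common textbook rendering consistent with the description above (a sketch, with W counted as work done on the system) is:

```latex
\Delta U \;=\; Q \;+\; W \;+\; \sum_i u_i\,\Delta M_i
```

Here Q is the heat transferred through the diathermic walls, W the work transferred through the adiabatic walls (including that done by long-range forces), and each term u_i ΔM_i the internal energy carried by the matter ΔM_i admitted through the walls permeable to it.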
https://en.wikipedia.org/wiki?curid=166404
69,019
Solar chemical processes use solar energy to drive chemical reactions. These processes offset energy that would otherwise come from a fossil fuel source and can also convert solar energy into storable and transportable fuels. Solar-induced chemical reactions can be divided into thermochemical or photochemical. A variety of fuels can be produced by artificial photosynthesis. The multielectron catalytic chemistry involved in making carbon-based fuels (such as methanol) from reduction of carbon dioxide is challenging; a feasible alternative is hydrogen production from protons, though use of water as the source of electrons (as plants do) requires mastering the multielectron oxidation of two water molecules to molecular oxygen. Some have envisaged working solar fuel plants in coastal metropolitan areas by 2050, with the splitting of seawater providing hydrogen to be run through adjacent fuel-cell electric power plants and the pure water by-product going directly into the municipal water system. In addition, chemical energy storage is another solution to solar energy storage.
https://en.wikipedia.org/wiki?curid=27743
70,208
Spacecraft use chemical energy to launch and gain considerable kinetic energy to reach orbital velocity. In an entirely circular orbit, this kinetic energy remains constant because there is almost no friction in near-earth space. However, it becomes apparent at re-entry when some of the kinetic energy is converted to heat. If the orbit is elliptical or hyperbolic, then throughout the orbit kinetic and potential energy are exchanged; kinetic energy is greatest and potential energy lowest at closest approach to the earth or other massive body, while potential energy is greatest and kinetic energy the lowest at maximum distance. Disregarding loss or gain however, the sum of the kinetic and potential energy remains constant.
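A short numerical check of this energy bookkeeping (illustrative values: μ is Earth's standard gravitational parameter, and the elliptical orbit's semi-major axis is assumed): along a Keplerian orbit the specific orbital energy ε = v²/2 − μ/r stays constant while the kinetic and potential terms trade off.

```python
import math

mu = 3.986e14                    # Earth's gravitational parameter, m^3/s^2
a = 10_000e3                     # assumed semi-major axis of the orbit, m

for r in (7_000e3, 10_000e3, 13_000e3):      # three radii along the same orbit
    v = math.sqrt(mu * (2 / r - 1 / a))      # vis-viva equation
    eps = v**2 / 2 - mu / r                  # kinetic minus potential, per unit mass
    print(f"r={r/1e3:>8.0f} km  v={v/1e3:5.2f} km/s  eps={eps:.3e} J/kg")
# eps is identical at every r: -mu / (2a)
```

The printed ε is the same at all three radii, with speed highest at the smallest radius, exactly the exchange described above.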
https://en.wikipedia.org/wiki?curid=17327
74,386
Decay energy, therefore, remains associated with a certain measure of the mass of the decay system, called invariant mass, which does not change during the decay, even though the energy of decay is distributed among decay particles. The energy of photons, the kinetic energy of emitted particles, and, later, the thermal energy of the surrounding matter, all contribute to the invariant mass of the system. Thus, while the sum of the rest masses of the particles is not conserved in radioactive decay, the "system" mass and system invariant mass (and also the system total energy) are conserved throughout any decay process. This is a restatement of the equivalent laws of conservation of energy and conservation of mass.
https://en.wikipedia.org/wiki?curid=197767
79,383
Restating this as an energy equation, the energy per unit volume in an ideal, incompressible liquid is constant throughout its vessel. At the surface, gravitational potential energy is large but liquid pressure energy is low. At the bottom of the vessel, all the gravitational potential energy is converted to pressure energy. The sum of pressure energy and gravitational potential energy per unit volume is constant throughout the volume of the fluid and the two energy components change linearly with the depth. Mathematically, it is described by Bernoulli's equation, where the velocity head is zero and the per-unit-volume comparison throughout the vessel is $p + \rho g h = \mathrm{const}$.
https://en.wikipedia.org/wiki?curid=23619
93,753
Absolutely continuous probability distributions can be described in several ways. The probability density function describes the infinitesimal probability of any given value, and the probability that the outcome lies in a given interval can be computed by integrating the probability density function over that interval. An alternative description of the distribution is by means of the cumulative distribution function, which describes the probability that the random variable is no larger than a given value (i.e., F(x) = P(X ≤ x) for some x). The cumulative distribution function is the area under the probability density function from −∞ to x.
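A small self-contained check of the pdf-to-cdf relation just described (standard normal density assumed for the example; trapezoidal integration stands in for the exact integral):

```python
import math

def pdf(t):
    """Standard normal probability density."""
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def cdf(x, lo=-10.0, n=100_000):
    """F(x) = P(X <= x): integrate the density from (effectively) -inf to x."""
    h = (x - lo) / n
    s = 0.5 * (pdf(lo) + pdf(x)) + sum(pdf(lo + i * h) for i in range(1, n))
    return s * h

print(cdf(0.0))    # ~0.5
print(cdf(1.96))   # ~0.975
```

The interval probability P(a < X ≤ b) from the text is then simply cdf(b) − cdf(a).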
https://en.wikipedia.org/wiki?curid=23543
96,932
In the 1990s, Andranik Tangian developed a model of artificial perception that implemented a principle of correlativity, which operationalized the Gestalt psychology laws in their interaction. The model finds structures in data without knowing the structures, similarly to segregating elements in abstract painting—like curves, contours and spots—without identifying them with known objects. The approach is based on the least complex data representations in the sense of Kolmogorov, i.e. requiring the least memory storage, which is regarded as saving the brain energy. The least complexity criterion leads to multi-level data representations in terms of generative patterns and their transformations, using proximities, similarities, symmetries, common fate grouping, continuities, etc. The idea that perception is data representation rather than "physical" recognition is illustrated by the effect of several voices produced by a single physical body—a loudspeaker membrane, whereas the effect of a single tone is produced by several physical bodies—organ pipes tuned as a chord and activated by a single key. It is shown that the physical causality in certain observations can be revealed through optimal data representations, and this nature–information duality is explained by the fact that both nature and information are subordinated to the same principle of efficiency. In some situations, the least complex data representations use the patterns already stored in the memory, demonstrating the dependence of perception on previous knowledge—in line with the Gestalt psychology law of past experience. Such an "intelligent" perception is opposed to the "naïve" perception that is based exclusively on direct percepts and is therefore context-dependent. The model is applied to automatic notation of music—recognition of interval structures in chords and polyphonic voices (with no reference to pitch, thereby relying on interval hearing instead of absolute hearing) as well as rhythms under variable tempo, approaching the capabilities of trained musicians. The model is also relevant to visual scene analysis and explains some modes of abstract thinking.
https://en.wikipedia.org/wiki?curid=70402
104,974
Smart cities have been conceptualized using the OSI model of 'layer' abstractions. Smart cities are constructed by connecting the city's public infrastructure with city application systems and passing collected data through three layers: the perception layer, the network layer and the application layer. City application systems then use the data to make better decisions when controlling different city infrastructures. The perception layer is where data is collected across the smart city using sensors such as cameras, RFID, or GPS positioning. The perception layer sends the data it collects to the network layer using wireless transmissions. The network layer is responsible for transporting collected data from the perception layer to the application layer; because it uses the city's communication infrastructure, the data can be intercepted by attackers, so the network layer must also keep collected data and information private. The application layer processes the data received from the network layer and uses it to make decisions on how to control the city's infrastructure.
https://en.wikipedia.org/wiki?curid=12592050
105,720
In quantum computing, a qubit () or quantum bit is a basic unit of quantum information—the quantum version of the classic binary bit physically realized with a two-state device. A qubit is a two-state (or two-level) quantum-mechanical system, one of the simplest quantum systems displaying the peculiarity of quantum mechanics. Examples include the spin of the electron in which the two levels can be taken as spin up and spin down; or the polarization of a single photon in which the two states can be taken to be the vertical polarization and the horizontal polarization. In a classical system, a bit would have to be in one state or the other. However, quantum mechanics allows the qubit to be in a coherent superposition of both states simultaneously, a property that is fundamental to quantum mechanics and quantum computing.
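A minimal numerical sketch of this formalism (standard textbook conventions, not tied to any hardware): a qubit state is a normalized 2-vector α|0⟩ + β|1⟩, and the Born rule gives measurement probabilities |α|² and |β|².

```python
import numpy as np

alpha, beta = 0.6, 0.8j                       # amplitudes may be complex
psi = np.array([alpha, beta])
assert np.isclose(np.linalg.norm(psi), 1.0)   # normalization: |alpha|^2 + |beta|^2 = 1

p0, p1 = abs(psi[0]) ** 2, abs(psi[1]) ** 2
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")    # 0.36 and 0.64

# An equal superposition, e.g. produced by a Hadamard gate acting on |0>:
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
print(H @ np.array([1, 0]))                   # [0.707, 0.707]
```

A classical bit corresponds to the special cases (α, β) = (1, 0) or (0, 1); every other normalized pair is a coherent superposition with no classical counterpart.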
https://en.wikipedia.org/wiki?curid=25284
118,590
The idea of a quantum harmonic oscillator and its associated energy can apply to either an atom or a subatomic particle. In ordinary atomic physics, the zero-point energy is the energy associated with the ground state of the system. The professional physics literature tends to measure frequency, denoted ν above, using angular frequency, denoted ω and defined by ω = 2πν. This leads to a convention of writing Planck's constant with a bar through its top (ħ) to denote the quantity h/2π. In these terms, the most famous such example of zero-point energy is the E = ħω/2 associated with the ground state of the quantum harmonic oscillator. In quantum mechanical terms, the zero-point energy is the expectation value of the Hamiltonian of the system in the ground state.
https://en.wikipedia.org/wiki?curid=84400
127,095
The term is broadly applied to a number of different derivations, the first of which was introduced by John Stewart Bell in a 1964 paper titled "On the Einstein Podolsky Rosen Paradox". Bell's paper was a response to a 1935 thought experiment that Albert Einstein, Boris Podolsky and Nathan Rosen proposed, arguing that quantum physics is an "incomplete" theory. By 1935, it was already recognized that the predictions of quantum physics are probabilistic. Einstein, Podolsky and Rosen presented a scenario that involves preparing a pair of particles such that the quantum state of the pair is entangled, and then separating the particles to an arbitrarily large distance. The experimenter has a choice of possible measurements that can be performed on one of the particles. When they choose a measurement and obtain a result, the quantum state of the other particle apparently collapses instantaneously into a new state depending upon that result, no matter how far away the other particle is. This suggests that either the measurement of the first particle somehow also interacted with the second particle at faster than the speed of light, "or" that the entangled particles had some unmeasured property which pre-determined their final quantum states before they were separated. Therefore, assuming locality, quantum mechanics must be incomplete, as it cannot give a complete description of the particle's true physical characteristics. In other words, quantum particles, like electrons and photons, must carry some property or attributes not included in quantum theory, and the uncertainties in quantum theory's predictions would then be due to ignorance or unknowability of these properties, later termed "hidden variables".
https://en.wikipedia.org/wiki?curid=56369
127,134
By the late 1940s, the mathematician George Mackey had grown interested in the foundations of quantum physics, and in 1957 he drew up a list of postulates that he took to be a precise definition of quantum mechanics. Mackey conjectured that one of the postulates was redundant, and shortly thereafter, Andrew M. Gleason proved that it was indeed deducible from the other postulates. Gleason's theorem provided an argument that a broad class of hidden-variable theories are incompatible with quantum mechanics. More specifically, Gleason's theorem rules out hidden-variable models that are "noncontextual". Any hidden-variable model for quantum mechanics must, in order to avoid the implications of Gleason's theorem, involve hidden variables that are not properties belonging to the measured system alone but also dependent upon the external context in which the measurement is made. This type of dependence is often seen as contrived or undesirable; in some settings, it is inconsistent with special relativity. The Kochen–Specker theorem refines this statement by constructing a specific finite subset of rays on which no such probability measure can be defined.
https://en.wikipedia.org/wiki?curid=56369
129,896
ETL processing involves extracting the data from the source system(s). In many cases, this represents the most important aspect of ETL, since extracting data correctly sets the stage for the success of subsequent processes. Most data-warehousing projects combine data from different source systems. Each separate system may also use a different data organization and/or format. Common data-source formats include relational databases, XML, JSON and flat files, but may also include non-relational database structures such as Information Management System (IMS) or other data structures such as Virtual Storage Access Method (VSAM) or Indexed Sequential Access Method (ISAM), or even formats fetched from outside sources by means such as web spidering or screen-scraping. Streaming the extracted data and loading it on-the-fly into the destination database is another way of performing ETL when no intermediate data storage is required.
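As a rough illustration of the extract step, the sketch below pulls records from three heterogeneous sources into one common in-memory form; the file names, database path, and query are hypothetical.

```python
# Sketch of the extract step: pull records from heterogeneous sources into
# one common in-memory form (a list of dicts). Sources are hypothetical.
import csv, json, sqlite3

def extract_csv(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))          # flat-file source

def extract_json(path):
    with open(path) as f:
        return json.load(f)                     # JSON source

def extract_relational(db_path, query):
    con = sqlite3.connect(db_path)
    con.row_factory = sqlite3.Row
    rows = [dict(r) for r in con.execute(query)]
    con.close()
    return rows                                 # relational source

# The hypothetical files would need to exist for this to run as-is.
records = (extract_csv("orders.csv")
           + extract_json("customers.json")
           + extract_relational("legacy.db", "SELECT * FROM invoices"))
```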
https://en.wikipedia.org/wiki?curid=239516
130,168
The typical extract, transform, load (ETL)-based data warehouse uses staging, data integration, and access layers to house its key functions. The staging layer or staging database stores raw data extracted from each of the disparate source data systems. The integration layer integrates the disparate data sets by transforming the data from the staging layer, often storing this transformed data in an operational data store (ODS) database. The integrated data are then moved to yet another database, often called the data warehouse database, where the data is arranged into hierarchical groups, often called dimensions, and into facts and aggregate facts. The combination of facts and dimensions is sometimes called a star schema. The access layer helps users retrieve data.
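A minimal star schema can be sketched with an in-memory SQLite database: one fact table holding measures, two dimension tables, and an access-layer-style query joining them. Table and column names are invented for illustration.

```python
# Minimal star schema: one fact table referencing two dimension tables.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date    (date_id INTEGER PRIMARY KEY, day TEXT);
CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    date_id    INTEGER REFERENCES dim_date(date_id),
    product_id INTEGER REFERENCES dim_product(product_id),
    amount     REAL          -- the measured fact
);
""")
con.execute("INSERT INTO dim_date VALUES (1, '2024-01-01')")
con.execute("INSERT INTO dim_product VALUES (1, 'widget')")
con.execute("INSERT INTO fact_sales VALUES (1, 1, 9.99)")

# The access layer would expose queries joining facts to their dimensions:
for row in con.execute("""SELECT d.day, p.name, SUM(f.amount)
                          FROM fact_sales f
                          JOIN dim_date d USING (date_id)
                          JOIN dim_product p USING (product_id)
                          GROUP BY d.day, p.name"""):
    print(row)
```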
https://en.wikipedia.org/wiki?curid=7990
175,515
By 2010, advancements in fuel cell technology had reduced the size, weight and cost of fuel cell electric vehicles. In 2010, the U.S. Department of Energy (DOE) estimated that the cost of automobile fuel cells had fallen 80% since 2002 and that such fuel cells could potentially be manufactured for $51/kW, assuming high-volume manufacturing cost savings. Fuel cell electric vehicles have been produced with "a driving range of more than 250 miles between refueling". They can be refueled in less than 5 minutes. Deployed fuel cell buses have a 40% higher fuel economy than diesel buses. EERE's Fuel Cell Technologies Program claims that, as of 2011, fuel cells achieved a 42 to 53% fuel cell electric vehicle efficiency at full power, and a durability of over 75,000 miles with less than 10% voltage degradation, double that achieved in 2006. In 2012, Lux Research, Inc. issued a report that concluded that "Capital cost ... will limit adoption to a mere 5.9 GW" by 2030, providing "a nearly insurmountable barrier to adoption, except in niche applications". Lux's analysis concluded that by 2030, PEM stationary fuel cell applications will reach $1 billion, while the vehicle market, including fuel cell forklifts, will reach a total of $2 billion.
https://en.wikipedia.org/wiki?curid=1252085
177,321
A quantum mechanical system or particle that is bound—that is, confined spatially—can only take on certain discrete values of energy, called energy levels. This contrasts with classical particles, which can have any amount of energy. The term is commonly used for the energy levels of the electrons in atoms, ions, or molecules, which are bound by the electric field of the nucleus, but can also refer to energy levels of nuclei or vibrational or rotational energy levels in molecules. The energy spectrum of a system with such discrete energy levels is said to be quantized.
https://en.wikipedia.org/wiki?curid=59444
179,307
Since the "Lennard-Jonesium" is the archetype for the modeling of simple yet realistic intermolecular interactions, a large number of thermophysical properties were studied and reported in the literature. Computer experiment data of the Lennard-Jones potential is presently considered the most accurately known data in classical mechanics computational chemistry. Hence, such data is also mostly used as benchmark for the validation and testing of new algorithms and theories. The Lennard-Jones potential has been constantly used since the early days of molecular simulations. The first results from computer experiments for the Lennard-Jones potential were reported by Rosenbluth and Rosenbluth and Wood and Parker after molecular simulations on "fast computing machines" became available in 1953. Since then many studies reported data of the Lennard-Jones substance; approximately 50,000 data points are publicly available. The current state of research of thermophysical properties of the Lennard-Jones substance is summarized in the following. The most comprehensive summary and digital database was given by Stephan et al. Presently, no data repository covers and maintains this database (or any other model potential) – the concise data selection stated by the NIST website should be treated with caution regarding referencing and coverage (it contains a small fraction of the available data). Most of the data on NIST website provides non-peer-reviewed data generated in-house by NIST.
https://en.wikipedia.org/wiki?curid=227686
180,829
There are several notations for data modeling. The actual model is frequently called "entity–relationship model", because it depicts data in terms of the entities and relationships described in the data. An entity–relationship model (ERM) is an abstract conceptual representation of structured data. Entity–relationship modeling is a relational schema database modeling method, used in software engineering to produce a type of conceptual data model (or semantic data model) of a system, often a relational database, and its requirements in a top-down fashion.
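As a toy illustration (entity and attribute names are invented), two entities and a one-to-many relationship might be rendered in code as:

```python
# Two entities and a one-to-many relationship, as an ER model might specify.
from dataclasses import dataclass, field

@dataclass
class Employee:              # entity
    name: str                # attribute

@dataclass
class Department:            # entity
    name: str                # attribute
    employees: list[Employee] = field(default_factory=list)  # relationship

sales = Department("Sales")
sales.employees.append(Employee("Ada"))   # one "works in" relationship instance
print(sales)
```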
https://en.wikipedia.org/wiki?curid=759422
182,724
Compared to the previous generations, the temperature levels have been reduced to increase the energy efficiency of the system, with supply side temperatures of 70 °C and lower. Potential heat sources are waste heat from industry, CHP plants burning waste, biomass power plants, geothermal and solar thermal energy (central solar heating), large scale heat pumps, waste heat from cooling purposes and data centers, and other sustainable energy sources. With those energy sources and large scale thermal energy storage, including seasonal thermal energy storage, fourth generation district heating systems are expected to provide flexibility for balancing wind and solar power generation, for example by using heat pumps to integrate surplus electric power as heat when wind energy is plentiful, or by providing electricity from biomass plants when back-up power is needed. Therefore, large scale heat pumps are regarded as a key technology for smart energy systems with high shares of renewable energy up to 100% and advanced fourth generation district heating systems.
https://en.wikipedia.org/wiki?curid=1669741
185,018
Performance of QDs is determined by the size and/or composition of the QD structures. Unlike simple atomic structures, a quantum dot structure has the unusual property that energy levels are strongly dependent on the structure's size. For example, CdSe quantum dot light emission can be tuned from red (5 nm diameter) to the violet region (1.5 nm dot). The physical reason for QD coloration is the quantum confinement effect and is directly related to their energy levels. The bandgap energy that determines the energy (and hence color) of the fluorescent light is inversely proportional to the square of the size of quantum dot. Larger QDs have more energy levels that are more closely spaced, allowing the QD to emit (or absorb) photons of lower energy (redder color). In other words, the emitted photon energy increases as the dot size decreases, because greater energy is required to confine the semiconductor excitation to a smaller volume.
https://en.wikipedia.org/wiki?curid=23692678
185,512
The case of classical mechanics is discussed in the next section, on ergodicity in geometry. As to quantum mechanics, there is no universal quantum definition of ergodicity or even chaos (see quantum chaos). However, there is a quantum ergodicity theorem stating that the expectation value of an operator converges to the corresponding microcanonical classical average in the semiclassical limit ħ → 0. Nevertheless, the theorem does not imply that "all" eigenstates of the Hamiltonian whose classical counterpart is chaotic are featureless and random. For example, the quantum ergodicity theorem does not exclude the existence of non-ergodic states such as quantum scars. In addition to conventional scarring, there are two other types of quantum scarring, which further illustrate the weak-ergodicity breaking in quantum chaotic systems: perturbation-induced and many-body quantum scars.
https://en.wikipedia.org/wiki?curid=5456824
195,443
The problem of pattern recognition can be stated as follows: Given an unknown function g: X → Y (the "ground truth") that maps input instances x ∈ X to output labels y ∈ Y, along with training data D = {(x_1, y_1), ..., (x_n, y_n)} assumed to represent accurate examples of the mapping, produce a function h: X → Y that approximates as closely as possible the correct mapping g. (For example, if the problem is filtering spam, then x is some representation of an email and y is either "spam" or "non-spam".) In order for this to be a well-defined problem, "approximates as closely as possible" needs to be defined rigorously. In decision theory, this is defined by specifying a loss function or cost function that assigns a specific value to "loss" resulting from producing an incorrect label. The goal then is to minimize the expected loss, with the expectation taken over the probability distribution of X. In practice, neither the distribution of X nor the ground truth function g are known exactly, but can be computed only empirically by collecting a large number of samples of X and hand-labeling them using the correct value of Y (a time-consuming process, which is typically the limiting factor in the amount of data of this sort that can be collected). The particular loss function depends on the type of label being predicted. For example, in the case of classification, the simple zero-one loss function is often sufficient. This corresponds simply to assigning a loss of 1 to any incorrect labeling and implies that the optimal classifier minimizes the error rate on independent test data (i.e. counting up the fraction of instances that the learned function h labels wrongly, which is equivalent to maximizing the number of correctly classified instances). The goal of the learning procedure is then to minimize the error rate (maximize the correctness) on a "typical" test set.
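A minimal sketch of the zero-one loss on the spam example above, assuming labels are plain strings:

```python
# Zero-one loss: cost 1 for each wrong label, 0 otherwise, so minimizing
# expected loss is the same as minimizing the error rate on test data.
def zero_one_loss(y_true, y_pred):
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

labels      = ["spam", "non-spam", "spam", "spam"]
predictions = ["spam", "spam",     "spam", "non-spam"]
print(zero_one_loss(labels, predictions))  # 0.5 -- two of four labels wrong
```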
https://en.wikipedia.org/wiki?curid=126706
196,460
Underfitting is the inverse of overfitting, meaning that the statistical model or machine learning algorithm is too simplistic to accurately represent the data. A sign of underfitting is that there is a high bias and low variance detected in the current model or algorithm used (the inverse of overfitting: low bias and high variance). This can be gathered from the bias–variance tradeoff, which is the method of analyzing a model or algorithm for bias error, variance error and irreducible error. With a high bias and low variance, the model will inaccurately represent the data points and thus be insufficiently able to predict future data results (see Generalization error). As shown in Figure 5, the linear line could not represent all the given data points because it does not follow the curvature of the points. We would expect to see a parabola-shaped line, as shown in Figure 6 and Figure 1. As previously mentioned, if we were to use Figure 5 for analysis, we would get false predictive results, contrary to the results if we analyzed Figure 6.
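The following sketch illustrates underfitting numerically, in the spirit of the figures referenced above: fitting a straight line to parabolic data leaves a large residual error (high bias), while a quadratic fit does not. The data and noise level are invented for illustration.

```python
# Underfitting illustrated: a degree-1 fit to parabolic data has high bias;
# a degree-2 fit matches the generating curve.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = x**2 + rng.normal(scale=0.3, size=x.size)   # parabola plus noise

for degree in (1, 2):
    coeffs = np.polyfit(x, y, degree)
    mse = np.mean((np.polyval(coeffs, x) - y) ** 2)
    print(degree, round(float(mse), 3))   # degree 1 leaves far larger error
```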
https://en.wikipedia.org/wiki?curid=173332
202,004
Within this paper was perhaps his most outstanding contribution, the introduction of the concept of free energy, now universally called Gibbs free energy in his honor. The Gibbs free energy relates the tendency of a physical or chemical system to simultaneously lower its energy and increase its disorder, or entropy, in a spontaneous natural process. Gibbs's approach allows a researcher to calculate the change in free energy in a process, such as a chemical reaction, and so predict whether it will happen spontaneously. Since virtually all chemical processes and many physical ones involve such changes, his work has significantly impacted both the theoretical and experimental aspects of these sciences. In 1877, Ludwig Boltzmann established statistical derivations of many important physical and chemical concepts, including entropy, and distributions of molecular velocities in the gas phase. Together with Boltzmann and James Clerk Maxwell, Gibbs created a new branch of theoretical physics called statistical mechanics (a term that he coined), explaining the laws of thermodynamics as consequences of the statistical properties of large ensembles of particles. Gibbs also worked on the application of Maxwell's equations to problems in physical optics. Gibbs's derivation of the phenomenological laws of thermodynamics from the statistical properties of systems with many particles was presented in his highly influential textbook "Elementary Principles in Statistical Mechanics", published in 1902, a year before his death. In that work, Gibbs reviewed the relationship between the laws of thermodynamics and the statistical theory of molecular motions. His name is also attached to the Gibbs phenomenon: the overshooting of the original function by partial sums of a Fourier series at points of discontinuity.
https://en.wikipedia.org/wiki?curid=1416046
202,308
Ideally, a car traveling at a constant velocity on level ground in a vacuum with frictionless wheels could travel at any speed without consuming any energy beyond what is needed to get the car up to speed. Less ideally, any vehicle must expend energy on overcoming road load forces, which consist of aerodynamic drag, tire rolling resistance, and inertial energy that is lost when the vehicle is decelerated by friction brakes. With ideal regenerative braking, the inertial energy could be completely recovered, but there are few options for reducing aerodynamic drag or rolling resistance other than optimizing the vehicle's shape and the tire design. Road load energy, or the energy demanded at the wheels, can be calculated by evaluating the vehicle equation of motion over a specific driving cycle. The vehicle powertrain must then provide this minimum energy in order to move the vehicle and will lose a large amount of additional energy in the process of converting fuel energy into work and transmitting it to the wheels. Overall, the energy lost in moving a vehicle can thus be summarized as the road load components above plus the powertrain's conversion and transmission losses.
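A rough numerical sketch of this road-load calculation over a toy driving cycle follows; all parameter values are illustrative, not those of any particular vehicle.

```python
# Road-load energy over a toy driving cycle: aerodynamic drag plus rolling
# resistance plus inertia, integrated at the wheels. Illustrative values only.
import numpy as np

rho, Cd, A = 1.2, 0.30, 2.2      # air density (kg/m^3), drag coeff., frontal area (m^2)
Crr, m, g = 0.01, 1500.0, 9.81   # rolling resistance coeff., mass (kg), gravity (m/s^2)

t = np.linspace(0.0, 60.0, 601)  # 60 s cycle sampled every 0.1 s
v = np.clip(t, 0.0, 20.0)        # accelerate at 1 m/s^2 up to 20 m/s, then cruise
a = np.gradient(v, t)

force = 0.5 * rho * Cd * A * v**2 + Crr * m * g + m * a   # road load + inertia
dt = t[1] - t[0]
energy_kwh = float(np.sum(force * v) * dt) / 3.6e6        # integrate P = F * v
print(round(energy_kwh, 3), "kWh demanded at the wheels")
```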
https://en.wikipedia.org/wiki?curid=4313931
210,864
A flow battery, or redox flow battery (after reduction–oxidation), is a type of electrochemical cell where chemical energy is provided by two chemical components dissolved in liquids that are pumped through the system on separate sides of a membrane. Ion transfer inside the cell (accompanied by flow of electric current through an external circuit) occurs through the membrane while both liquids circulate in their own respective space. Cell voltage is chemically determined by the Nernst equation and ranges, in practical applications, from 1.0 to 2.43 volts. The energy capacity is a function of the electrolyte volume and the power is a function of the surface area of the electrodes.
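The Nernst equation referred to above has the standard general form:

```latex
% Nernst equation for the cell potential:
E = E^{\circ} - \frac{RT}{zF}\ln Q
% E°: standard cell potential, R: gas constant, T: absolute temperature,
% z: electrons transferred, F: Faraday constant, Q: reaction quotient.
```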
https://en.wikipedia.org/wiki?curid=3133405
217,707
One example of an experimental storage system based on chemical reaction energy is the salt hydrate technology. The system uses the reaction energy created when salts are hydrated or dehydrated. It works by storing heat in a container containing 50% sodium hydroxide (NaOH) solution. Heat (e.g. from using a solar collector) is stored by evaporating the water in an endothermic reaction. When water is added again, heat is released in an exothermic reaction at 50 °C (120 °F). Current systems operate at 60% efficiency. The system is especially advantageous for seasonal thermal energy storage, because the dried salt can be stored at room temperature for prolonged times, without energy loss. The containers with the dehydrated salt can even be transported to a different location. The system has a higher energy density than heat stored in water and the capacity of the system can be designed to store energy from a few months to years.
https://en.wikipedia.org/wiki?curid=2465250
219,333
There have been several other analyses of negative mass, such as the studies conducted by R. M. Price, though none addressed the question of what kind of energy and momentum would be necessary to describe non-singular negative mass. Indeed, the Schwarzschild solution for negative mass parameter has a naked singularity at a fixed spatial position. The question that immediately comes up is: would it not be possible to smooth out the singularity with some kind of negative mass density? The answer is yes, but not with energy and momentum that satisfies the dominant energy condition. This is because if the energy and momentum satisfies the dominant energy condition within a spacetime that is asymptotically flat, which would be the case of smoothing out the singular negative mass Schwarzschild solution, then it must satisfy the positive energy theorem, i.e. its ADM mass must be positive, which is of course not the case. However, it was noticed by Belletête and Paranjape that since the positive energy theorem does not apply to asymptotic de Sitter spacetime, it would actually be possible to smooth out, with energy–momentum that does satisfy the dominant energy condition, the singularity of the corresponding exact solution of negative mass Schwarzschild–de Sitter, which is the singular, exact solution of Einstein's equations with cosmological constant. In a subsequent article, Mbarek and Paranjape showed that it is in fact possible to obtain the required deformation through the introduction of the energy–momentum of a perfect fluid.
https://en.wikipedia.org/wiki?curid=262606
222,599
The lattice system can be found as follows. If the crystal system is not trigonal then the lattice system is of the same type. If the crystal system is trigonal, then the lattice system is hexagonal unless the space group is one of the seven in the rhombohedral lattice system consisting of the 7 trigonal space groups in the table above whose name begins with R. (The term rhombohedral system is also sometimes used as an alternative name for the whole trigonal system.) The hexagonal lattice system is larger than the hexagonal crystal system, and consists of the hexagonal crystal system together with the 18 groups of the trigonal crystal system other than the seven whose names begin with R.
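The procedure above is mechanical enough to express as a small function; this sketch assumes crystal-system names are lowercase strings and that rhombohedral space-group symbols begin with "R":

```python
# Sketch of the procedure above: map (crystal system, space group symbol)
# to a lattice system. Input conventions are assumed as described.
def lattice_system(crystal_system: str, space_group_symbol: str) -> str:
    if crystal_system != "trigonal":
        return crystal_system            # lattice system is of the same type
    if space_group_symbol.startswith("R"):
        return "rhombohedral"            # one of the seven R groups
    return "hexagonal"                   # the other 18 trigonal groups

print(lattice_system("trigonal", "R3c"))     # rhombohedral
print(lattice_system("trigonal", "P3121"))   # hexagonal
print(lattice_system("cubic", "Fm-3m"))      # cubic
```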
https://en.wikipedia.org/wiki?curid=463721
223,504
One of the simplest types of neural-spiking models is the Poisson process. This, however, is limited in that it is memoryless: it does not account for any spiking history when calculating the current probability of firing. Neurons, by contrast, exhibit a fundamental (biophysical) history dependence by way of their relative and absolute refractory periods. To address this, a conditional intensity function is used to represent the probability of a neuron spiking, conditioned on its own history. The conditional intensity function expresses the instantaneous firing probability and implicitly defines a complete probability model for the point process. It defines a probability per unit time. So if this unit time is taken small enough to ensure that only one spike could occur in that time window, then the conditional intensity function completely specifies the probability that a given neuron will fire in a certain time.
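A toy simulation of this idea follows: spikes are drawn bin by bin with probability λ·Δt, where the intensity λ depends on the spiking history through an absolute refractory period. The intensity model is an invented example, not a fitted one.

```python
# Spike generation from a history-dependent conditional intensity, in time
# bins small enough for at most one spike per bin. Toy parameters only.
import numpy as np

rng = np.random.default_rng(1)
dt, T, base_rate, refractory = 0.001, 2.0, 20.0, 0.005   # s, s, Hz, s

spikes, last_spike = [], -np.inf
for i in range(int(T / dt)):
    t = i * dt
    # Intensity is zero during the absolute refractory period after a spike.
    lam = 0.0 if (t - last_spike) < refractory else base_rate
    if rng.random() < lam * dt:      # P(spike in this bin) ~ lambda * dt
        spikes.append(t)
        last_spike = t

print(len(spikes), "spikes in", T, "s")
```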
https://en.wikipedia.org/wiki?curid=1648224
228,072
In quantum physics, a quantum fluctuation (also known as a vacuum state fluctuation or vacuum fluctuation) is the temporary random change in the amount of energy in a point in space, as prescribed by Werner Heisenberg's uncertainty principle. They are minute random fluctuations in the values of the fields which represent elementary particles, such as electric and magnetic fields which represent the electromagnetic force carried by photons, W and Z fields which carry the weak force, and gluon fields which carry the strong force. Vacuum fluctuations appear as virtual particles, which are always created in particle-antiparticle pairs. Since they are created spontaneously without a source of energy, vacuum fluctuations and virtual particles are said to violate the conservation of energy. This is theoretically allowable because the particles annihilate each other within a time limit determined by the uncertainty principle so they are not directly observable. The uncertainty principle states the uncertainty in energy and time can be related by ΔE Δt ≈ ħ, where ħ ≈ 1.055×10^−34 J·s. This means that pairs of virtual particles with energy ΔE and lifetime shorter than Δt ≈ ħ/ΔE are continually created and annihilated in empty space. Although the particles are not directly detectable, the cumulative effects of these particles are measurable. For example, without quantum fluctuations, the "bare" mass and charge of elementary particles would be infinite; from renormalization theory the shielding effect of the cloud of virtual particles is responsible for the finite mass and charge of elementary particles. Another consequence is the Casimir effect. One of the first observations which was evidence for vacuum fluctuations was the Lamb shift in hydrogen. In July 2020, scientists reported that quantum vacuum fluctuations can influence the motion of macroscopic, human-scale objects by measuring correlations below the standard quantum limit between the position/momentum uncertainty of the mirrors of LIGO and the photon number/phase uncertainty of light that they reflect.
https://en.wikipedia.org/wiki?curid=161253
235,768
Considered as electric energy converters, all these existing lamps are inefficient, emitting more of their input energy as waste heat than as visible light. Global electric lighting in 1997 consumed 2,016 terawatt-hours of energy. Lighting consumes roughly 12% of electrical energy produced by industrialized countries. The increasing scarcity of energy resources, and the environmental costs of producing energy, particularly the discovery of global warming due to carbon dioxide emitted by the burning of fossil fuels, which are the largest source of energy for electric power generation, created an increased incentive to develop more energy-efficient electric lights.
https://en.wikipedia.org/wiki?curid=9910525
238,331
In "Star Trek: Picard", Data is seen in Picard's dreams, playing poker with him in Ten-Forward, and later painting in the middle of the vineyards of Chateau Picard. It is revealed that Dahj and Soji Asha are Data's daughters, created through fractal neuronic cloning, a procedure developed by Dr. Bruce Maddox. These neurons were apparently salvaged from B-4, who had been dismantled and placed in storage after his positronic net was found to be too primitive to integrate Data's memories. However, Data's consciousness is revealed to still exist inside a quantum simulation crafted by Maddox and based upon memories retrieved from the neurons Maddox salvaged from B-4, the equipment holding the network now in the possession of Altan Soong, Noonien Soong's biological son. Since Data's memories only extend as far as the moment he implanted his memories into B-4, he lacks the memory of sacrificing himself to save Picard. After Picard dies, Altan Soong transfers Picard's consciousness into a golem intended for his own consciousness and Picard meets with Data inside the simulation. Data requests that Picard terminate his consciousness, which would allow Data the experience of living, however briefly, believing that he could only truly "live" if he had a finite lifespan. Once Picard awakens, he carries out Data's wish and Data's consciousness rapidly ages to death, Picard giving a brief eulogy as he observes that what made Data remarkable was his ability to see humanity's worst traits and still aspire to the best parts of the human condition.
https://en.wikipedia.org/wiki?curid=47676
243,052
The largest collection of data ever used for WGS purposes was assembled, processed and applied in the development of WGS 72. Both optical and electronic satellite data were used. The electronic satellite data consisted, in part, of Doppler data provided by the U.S. Navy and cooperating non-DoD satellite tracking stations established in support of the Navy's Navigational Satellite System (NNSS). Doppler data was also available from the numerous sites established by GEOCEIVERS during 1971 and 1972. Doppler data was the primary data source for WGS 72 (see image). Additional electronic satellite data was provided by the SECOR (Sequential Collation of Range) Equatorial Network completed by the U.S. Army in 1970. Optical satellite data from the Worldwide Geometric Satellite Triangulation Program was provided by the BC-4 camera system (see image). Data from the Smithsonian Astrophysical Observatory was also used which included camera (Baker–Nunn) and some laser ranging.
https://en.wikipedia.org/wiki?curid=233654
249,579
In quantum physics and chemistry, quantum numbers describe values of conserved quantities in the dynamics of a quantum system. Quantum numbers correspond to eigenvalues of operators that commute with the Hamiltonian—quantities that can be known with precision at the same time as the system's energy—and their corresponding eigenspaces. Together, a specification of all of the quantum numbers of a quantum system fully characterizes a basis state of the system, and the quantum numbers can in principle be measured together.
https://en.wikipedia.org/wiki?curid=532405
249,580
An important aspect of quantum mechanics is the quantization of many observable quantities of interest. In particular, this leads to quantum numbers that take values in discrete sets of integers or half-integers; although they could approach infinity in some cases. This distinguishes quantum mechanics from classical mechanics where the values that characterize the system such as mass, charge, or momentum, all range continuously. Quantum numbers often describe specifically the energy levels of electrons in atoms, but other possibilities include angular momentum, spin, etc. An important family is flavour quantum numbers – internal quantum numbers which determine the type of a particle and its interactions with other particles through the fundamental forces. Any quantum system can have one or more quantum numbers; it is thus difficult to list all possible quantum numbers.
https://en.wikipedia.org/wiki?curid=532405
259,363
One way to simulate a two-dimensional cellular automaton is with an infinite sheet of graph paper along with a set of rules for the cells to follow. Each square is called a "cell" and each cell has two possible states, black and white. The "neighborhood" of a cell is the nearby, usually adjacent, cells. The two most common types of neighborhoods are the "von Neumann neighborhood" and the "Moore neighborhood". The former, named after the founding cellular automaton theorist, consists of the four orthogonally adjacent cells. The latter includes the von Neumann neighborhood as well as the four diagonally adjacent cells. For such a cell and its Moore neighborhood, there are 512 (= 2^9) possible patterns. For each of the 512 possible patterns, the rule table would state whether the center cell will be black or white on the next time interval. Conway's Game of Life is a popular version of this model. Another common neighborhood type is the "extended von Neumann neighborhood", which includes the two closest cells in each orthogonal direction, for a total of eight. The general equation for such a system of rules is "k"^("k"^"s"), where "k" is the number of possible states for a cell, and "s" is the number of neighboring cells (including the cell to be calculated itself) used to determine the cell's next state. Thus, in the two-dimensional system with a Moore neighborhood, the total number of automata possible would be 2^(2^9), or approximately 1.34×10^154.
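As a concrete example, one update step of Conway's Game of Life (a particular rule table over the Moore neighborhood) can be written compactly with array shifts; this sketch uses periodic boundaries in place of the infinite sheet.

```python
# One Game of Life update on a grid of 0/1 cells, counting the eight Moore
# neighbours of each cell with periodic (wrap-around) boundaries.
import numpy as np

def life_step(grid):
    neighbours = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 (3 is covered above).
    return ((neighbours == 3) | ((grid == 1) & (neighbours == 2))).astype(int)

glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
print(life_step(glider))
```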
https://en.wikipedia.org/wiki?curid=54342
259,474
One of the central tools in complex analysis is the line integral. The line integral around a closed path of a function that is holomorphic everywhere inside the area bounded by the closed path is always zero, as is stated by the Cauchy integral theorem. The values of such a holomorphic function inside a disk can be computed by a path integral on the disk's boundary (as shown in Cauchy's integral formula). Path integrals in the complex plane are often used to determine complicated real integrals, and here the theory of residues among others is applicable (see methods of contour integration). A "pole" (or isolated singularity) of a function is a point where the function's value becomes unbounded, or "blows up". If a function has such a pole, then one can compute the function's residue there, which can be used to compute path integrals involving the function; this is the content of the powerful residue theorem. The remarkable behavior of holomorphic functions near essential singularities is described by Picard's theorem. Functions that have only poles but no essential singularities are called meromorphic. Laurent series are the complex-valued equivalent to Taylor series, but can be used to study the behavior of functions near singularities through infinite sums of more well understood functions, such as polynomials.
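For reference, the two central formulas mentioned here are:

```latex
% Cauchy's integral formula: values of a holomorphic f inside a disk are
% determined by a path integral over its boundary curve \gamma:
f(a) = \frac{1}{2\pi i} \oint_{\gamma} \frac{f(z)}{z - a}\, dz
% The residue theorem, for a function with isolated poles a_k inside \gamma:
\oint_{\gamma} f(z)\, dz = 2\pi i \sum_{k} \operatorname{Res}(f, a_k)
```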
https://en.wikipedia.org/wiki?curid=5759
260,518
Each user in a CDMA system uses a different code to modulate their signal. Choosing the codes used to modulate the signal is very important in the performance of CDMA systems. The best performance occurs when there is good separation between the signal of a desired user and the signals of other users. The separation of the signals is achieved by correlating the received signal with the locally generated code of the desired user. If the signal matches the desired user's code, then the correlation function will be high and the system can extract that signal. If the desired user's code has nothing in common with the signal, the correlation should be as close to zero as possible (thus eliminating the signal); this is referred to as cross-correlation. If the code is correlated with the signal at any time offset other than zero, the correlation should be as close to zero as possible. This is referred to as auto-correlation and is used to reject multi-path interference.
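The separation-by-correlation idea can be shown with toy ±1 codes chosen to be orthogonal; real CDMA codes are far longer and chosen for good auto- and cross-correlation properties, so this is only a sketch.

```python
# Despreading by correlation: the receiver multiplies the received chips by
# each user's code; the desired user's data stands out while the orthogonal
# code correlates to zero.
import numpy as np

code_a = np.array([1,  1, 1,  1])   # user A's spreading code
code_b = np.array([1, -1, 1, -1])   # user B's code, orthogonal to A's

bit_a, bit_b = 1, -1                          # one data bit per user
received = bit_a * code_a + bit_b * code_b    # signals overlap on the channel

print(np.dot(received, code_a) / len(code_a))  #  1.0 -> recovers A's bit
print(np.dot(received, code_b) / len(code_b))  # -1.0 -> recovers B's bit
```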
https://en.wikipedia.org/wiki?curid=7143
263,896
For White, "the primary function of culture" is to "harness and control energy." White differentiates between five stages of human development: In the first, people use the energy of their own muscles. In the second, they use the energy of domesticated animals. In the third, they use the energy of plants (agricultural revolution). In the fourth, they learn to use the energy of natural resources: coal, oil, gas. In the fifth, they harness nuclear energy. White introduced the formula P=E/T, where P is the development index, E is a measure of energy consumed, and T is the measure of the efficiency of technical factors using the energy. In his own words, "culture evolves as the amount of energy harnessed per capita per year is increased, or as the efficiency of the instrumental means of putting the energy to work is increased". Nikolai Kardashev extrapolated his theory, creating the Kardashev scale, which categorizes the energy use of advanced civilizations.
https://en.wikipedia.org/wiki?curid=803661
268,126
For research-oriented careers, students work toward a doctoral degree specializing in a particular field. Fields of specialization include experimental and theoretical astrophysics, atomic physics, biological physics, chemical physics, condensed matter physics, cosmology, geophysics, gravitational physics, material science, medical physics, microelectronics, molecular physics, nuclear physics, optics, particle physics, plasma physics, quantum information science, and radiophysics.
https://en.wikipedia.org/wiki?curid=23269
283,058
Supersymmetry has various applications to different areas of physics, such as quantum mechanics, statistical mechanics, quantum field theory, condensed matter physics, nuclear physics, optics, stochastic dynamics, astrophysics, quantum gravity, and cosmology. Supersymmetry has also been applied to high energy physics, where a supersymmetric extension of the Standard Model is a possible candidate for physics beyond the Standard Model. However, no supersymmetric extensions of the Standard Model have been experimentally verified.
https://en.wikipedia.org/wiki?curid=224636
283,258
User behavior also relies on the impact of personality traits, social norms, and attitudes on energy conservation behavior. Beliefs and attitudes toward a convenient lifestyle, environmentally friendly transport, energy security, and residential location choices affect energy conservation behavior. As a result, energy conservation can be made possible by adopting pro-environmental behavior and energy-efficient systems. Education on approaches to energy conservation can result in wise energy use. The choices made by the users yield energy usage patterns. Rigorous analysis of these usage patterns identifies wasteful energy patterns, and improving those patterns may significantly reduce the energy load. Therefore, human behavior is critical to determining the implications of energy conservation measures and solving environmental problems. Substantial energy conservation may be achieved if users' habit loops are modified.
https://en.wikipedia.org/wiki?curid=478933
283,266
Energy conservation through users' behaviors requires understanding household occupants' lifestyle, social, and behavioral factors in analyzing energy consumption. This involves one-time investments in energy efficiency, such as purchasing new energy-efficient appliances or upgrading the building insulation without curtailing economic utility or the level of energy services, and energy curtailment behaviors which are theorized to be driven more by social-psychological factors and environmental concerns in comparison to the energy efficiency behaviors. Replacing existing appliances with newer and more efficient ones leads to energy efficiency as less energy is wasted throughout. Overall, energy efficiency behaviors are identified more with one-time, cost-incurring investments in efficient appliances and retrofits, while energy curtailment behaviors include repetitive, low-cost energy-saving efforts.
https://en.wikipedia.org/wiki?curid=478933
287,679
A multitude of functions can be performed by the cytoskeleton. Its primary function is to give the cell its shape and mechanical resistance to deformation, and through association with extracellular connective tissue and other cells it stabilizes entire tissues. The cytoskeleton can also contract, thereby deforming the cell and the cell's environment and allowing cells to migrate. Moreover, it is involved in many cell signaling pathways and in the uptake of extracellular material (endocytosis), the segregation of chromosomes during cellular division, the cytokinesis stage of cell division, as scaffolding to organize the contents of the cell in space and in intracellular transport (for example, the movement of vesicles and organelles within the cell) and can be a template for the construction of a cell wall. Furthermore, it can form specialized structures, such as flagella, cilia, lamellipodia and podosomes. The structure, function and dynamic behavior of the cytoskeleton can be very different, depending on organism and cell type. Even within one cell, the cytoskeleton can change through association with other proteins and the previous history of the network.
https://en.wikipedia.org/wiki?curid=156970
291,714
The radiated beam becomes wider as the distance between the antenna and aircraft becomes greater, making the position information less accurate. Additionally, detecting changes in aircraft velocity requires several radar sweeps that are spaced several seconds apart. In contrast, a system using ADS-B creates and listens for periodic position and intent reports from aircraft. These reports are generated based on the aircraft's navigation system, and distributed via one or more of the ADS-B data links. The accuracy of the data is no longer susceptible to the position of the aircraft or the length of time between radar sweeps. (However, the signal strength of the signal received from the aircraft at the ground station is still dependent on the range from the aircraft to the receiver, and interference, obstacles, or weather could degrade the integrity of the received signal enough to prevent the digital data from being decoded without errors. When the aircraft is farther away, the weaker received signal will tend to be more affected by the aforementioned adverse factors and is less likely to be received without errors. Error detection will allow errors to be recognized, so the system maintains full accuracy regardless of aircraft position when the signal can be received and decoded correctly. This advantage does not equate to total indifference to the range of an aircraft from the ground station.)
https://en.wikipedia.org/wiki?curid=18949160
299,319
According to Saunders, increased energy efficiency tends to increase energy consumption by two means. First, increased energy efficiency makes the use of energy relatively cheaper, thus encouraging increased use (the direct rebound effect). Second, increased energy efficiency increases real incomes and leads to increased economic growth, which pulls up energy use for the whole economy. At the microeconomic level (looking at an individual market), even with the rebound effect, improvements in energy efficiency usually result in reduced energy consumption. That is, the rebound effect is usually less than 100%. However, at the macroeconomic level, more efficient (and hence comparatively cheaper) energy leads to faster economic growth, which increases energy use throughout the economy. Saunders argued that taking into account both microeconomic and macroeconomic effects, the technological progress that improves energy efficiency will tend to increase overall energy use. Besides the neoclassical interpretation, hypotheses generated from heterodox economics also are consistent with the existence of the Jevons effect.
https://en.wikipedia.org/wiki?curid=988796
300,232
Thermal energy is unusual in that, in most cases, it cannot be converted to other forms of energy. Only a difference in the density of thermal/heat energy (temperature) can be used to perform work, and the efficiency of this conversion will be (much) less than 100%. This is because thermal energy represents a particularly disordered form of energy; it is spread out randomly among many available states of a collection of microscopic particles constituting the system (these combinations of position and momentum for each of the particles are said to form a phase space). The measure of this disorder or randomness is entropy, and its defining feature is that the entropy of an isolated system never decreases. One cannot take a high-entropy system (like a hot substance, with a certain amount of thermal energy) and convert it into a low entropy state (like a low-temperature substance, with correspondingly lower energy), without that entropy going somewhere else (like the surrounding air). In other words, there is no way to concentrate energy without spreading out energy somewhere else.
https://en.wikipedia.org/wiki?curid=1413965
301,366
A non-equilibrium real gas model is the most accurate model of a shock layer's gas physics, but is more difficult to solve than an equilibrium model. The simplest non-equilibrium model is the "Lighthill-Freeman model" developed in 1958. The Lighthill-Freeman model initially assumes a gas made up of a single diatomic species susceptible to only one chemical reaction and its reverse; e.g., N₂ → N + N and N + N → N₂ (dissociation and recombination). Because of its simplicity, the Lighthill-Freeman model is a useful pedagogical tool, but is unfortunately too simple for modelling non-equilibrium air. Air is typically assumed to have a mole fraction composition of 0.7812 molecular nitrogen, 0.2095 molecular oxygen and 0.0093 argon. The simplest real gas model for air is the "five species model", which is based upon N₂, O₂, NO, N, and O. The five species model assumes no ionization and ignores trace species like carbon dioxide.
https://en.wikipedia.org/wiki?curid=45294
307,203
In signal processing and statistics, a window function (also known as an apodization function or tapering function) is a mathematical function that is zero-valued outside of some chosen interval, normally symmetric around the middle of the interval, usually near a maximum in the middle, and usually tapering away from the middle. Mathematically, when another function or waveform/data-sequence is "multiplied" by a window function, the product is also zero-valued outside the interval: all that is left is the part where they overlap, the "view through the window". Equivalently, and in actual practice, the segment of data within the window is first isolated, and then only that data is multiplied by the window function values. Thus, tapering, not segmentation, is the main purpose of window functions.
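A short sketch of windowing in practice, using the Hann window (one common choice; the signal here is invented):

```python
# Applying a Hann window: the product is zero outside the window and tapers
# the segment toward its edges, which reduces spectral leakage in an FFT.
import numpy as np

n = np.arange(256)
signal = np.sin(2 * np.pi * 0.05 * n)   # a segment of a longer waveform
window = 0.5 * (1 - np.cos(2 * np.pi * n / (len(n) - 1)))   # Hann window

windowed = signal * window              # "the view through the window"
spectrum = np.abs(np.fft.rfft(windowed))
print(int(np.argmax(spectrum)))         # peak near bin 13 (0.05 * 256 = 12.8)
```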
https://en.wikipedia.org/wiki?curid=244097
309,113
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy.
https://en.wikipedia.org/wiki?curid=442137
315,945
In some cases, receptor activation caused by ligand binding to a receptor is directly coupled to the cell's response to the ligand. For example, the neurotransmitter GABA can activate a cell surface receptor that is part of an ion channel. GABA binding to a GABA receptor on a neuron opens a chloride-selective ion channel that is part of the receptor. GABA receptor activation allows negatively charged chloride ions to move into the neuron, which inhibits the ability of the neuron to produce action potentials. However, for many cell surface receptors, ligand-receptor interactions are not directly linked to the cell's response. The activated receptor must first interact with other proteins inside the cell before the ultimate physiological effect of the ligand on the cell's behavior is produced. Often, the behavior of a chain of several interacting cell proteins is altered following receptor activation. The entire set of cell changes induced by receptor activation is called a signal transduction mechanism or pathway.
https://en.wikipedia.org/wiki?curid=4109042
318,492
The resulting daughter cells of the first cell division are called the AB cell (containing PAR-6 and PAR-3) and the P1 cell (containing PAR-1 and PAR-2). A second cell division produces the ABp and ABa cells from the AB cell, and the EMS and P2 cells from the P1 cell. This division establishes the dorsal-ventral axis, with the ABp cell forming the dorsal side and the EMS cell marking the ventral side. Through Wnt signaling, the P2 cell instructs the EMS cell to divide along the anterior-posterior axis. Through Notch signaling, the P2 cell differentially specifies the ABp and ABa cells, which further defines the dorsal-ventral axis. The left-right axis also becomes apparent early in embryogenesis, although it is unclear exactly when specifically the axis is determined. However, most theories of the L-R axis development involve some kind of differences in cells derived from the AB cell.
https://en.wikipedia.org/wiki?curid=57546
322,871
In physical cosmology and astronomy, dark energy is an unknown form of energy that affects the universe on the largest scales. The first observational evidence for its existence came from measurements of supernovas, which showed that the universe does not expand at a constant rate; rather, the universe's expansion is accelerating. Understanding the universe's evolution requires knowledge of its starting conditions and composition. Before these observations, scientists thought that all forms of matter and energy in the universe would only cause the expansion to slow down over time. Measurements of the cosmic microwave background (CMB) suggest the universe began in a hot Big Bang, from which general relativity explains its evolution and the subsequent large-scale motion. Without introducing a new form of energy, there was no way to explain how scientists could measure an accelerating universe. Since the 1990s, dark energy has been the most accepted premise to account for the accelerated expansion. As of 2021, there are active areas of cosmology research to understand the fundamental nature of dark energy. Assuming that the lambda-CDM model of cosmology is correct, as of 2013, the best current measurements indicate that dark energy contributes 68% of the total energy in the present-day observable universe. The mass–energy of dark matter and ordinary (baryonic) matter contributes 26% and 5%, respectively, and other components such as neutrinos and photons contribute a very small amount. Dark energy's density is very low: 6×10^−10 J/m^3 (~7×10^−30 g/cm^3), much less than the density of ordinary matter or dark matter within galaxies. However, it dominates the universe's mass–energy content because it is uniform across space.
https://en.wikipedia.org/wiki?curid=19604228
322,878
Nearly all inflation models predict that the total (matter+energy) density of the universe should be very close to the critical density. During the 1980s, most cosmological research focused on models with critical density in matter only, usually 95% cold dark matter (CDM) and 5% ordinary matter (baryons). These models were found to be successful at forming realistic galaxies and clusters, but some problems appeared in the late 1980s: in particular, the model required a value for the Hubble constant lower than preferred by observations, and the model under-predicted observations of large-scale galaxy clustering. These difficulties became stronger after the discovery of anisotropy in the cosmic microwave background by the COBE spacecraft in 1992, and several modified CDM models came under active study through the mid-1990s: these included the Lambda-CDM model and a mixed cold/hot dark matter model. The first direct evidence for dark energy came from supernova observations in 1998 of accelerated expansion in Riess "et al." and in Perlmutter "et al.", and the Lambda-CDM model then became the leading model. Soon after, dark energy was supported by independent observations: in 2000, the BOOMERanG and Maxima experiments observed the first acoustic peak in the CMB, showing that the total (matter+energy) density is close to 100% of critical density. Then in 2001, the 2dF Galaxy Redshift Survey gave strong evidence that the matter density is around 30% of critical. The large difference between these two supports a smooth component of dark energy making up the difference. Much more precise measurements from WMAP in 2003–2010 have continued to support the standard model and give more accurate measurements of the key parameters.
https://en.wikipedia.org/wiki?curid=19604228
327,227
An axiomatization of propositional calculus is a set of tautologies called axioms and one or more inference rules for producing new tautologies from old. A "proof" in an axiom system "A" is a finite nonempty sequence of propositions each of which is either an instance of an axiom of "A" or follows by some rule of "A" from propositions appearing earlier in the proof (thereby disallowing circular reasoning). The last proposition is the theorem proved by the proof. Every nonempty initial segment of a proof is itself a proof, whence every proposition in a proof is itself a theorem. An axiomatization is sound when every theorem is a tautology, and complete when every tautology is a theorem.
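In this spirit, a toy proof checker can verify that each step is an axiom instance or follows by a rule; this sketch supports only modus ponens, represents formulas as nested tuples, and uses an invented axiom set.

```python
# Toy proof checker: each step must be an axiom instance or follow from
# earlier steps by modus ponens. Formulas are strings or nested tuples,
# e.g. ("->", "p", "q") for an implication. Axioms here are invented.
def follows_by_mp(prop, proved):
    """Modus ponens: from A and ("->", A, prop), conclude prop."""
    return any(f[0] == "->" and f[2] == prop and f[1] in proved
               for f in proved if isinstance(f, tuple))

def is_proof(steps, axioms):
    proved = []
    for prop in steps:
        if prop not in axioms and not follows_by_mp(prop, proved):
            return False
        proved.append(prop)
    return bool(proved)   # a proof must be a nonempty sequence

axioms = {"p", ("->", "p", "q")}          # pretend axiom instances
print(is_proof(["p", ("->", "p", "q"), "q"], axioms))  # True
print(is_proof(["q"], axioms))                         # False
```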
https://en.wikipedia.org/wiki?curid=54476844
331,045
Data mining is the process of extracting and discovering patterns in large data sets involving methods at the intersection of machine learning, statistics, and database systems. Data mining is an interdisciplinary subfield of computer science and statistics with an overall goal of extracting information (with intelligent methods) from a data set and transforming the information into a comprehensible structure for further use. Data mining is the analysis step of the "knowledge discovery in databases" process, or KDD. Aside from the raw analysis step, it also involves database and data management aspects, data pre-processing, model and inference considerations, interestingness metrics, complexity considerations, post-processing of discovered structures, visualization, and online updating.
https://en.wikipedia.org/wiki?curid=42253
331,056
Before data mining algorithms can be used, a target data set must be assembled. As data mining can only uncover patterns actually present in the data, the target data set must be large enough to contain these patterns while remaining concise enough to be mined within an acceptable time limit. A common source for data is a data mart or data warehouse. Pre-processing is essential to analyze the multivariate data sets before data mining. The target set is then cleaned. Data cleaning removes the observations containing noise and those with missing data.
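A minimal pre-processing sketch with pandas follows; the column names, values, and plausibility threshold are all invented for illustration.

```python
# Sketch of cleaning a target data set before mining: drop observations with
# missing data, then drop implausible (noisy) values.
import pandas as pd

raw = pd.DataFrame({
    "age":    [34, None, 29, 120, 41],          # None: missing; 120: likely noise
    "income": [52_000, 48_000, None, 61_000, 58_000],
})

cleaned = (raw
           .dropna()                            # remove rows with missing data
           .query("age > 0 and age < 110"))     # remove out-of-range noise
print(cleaned)
```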
https://en.wikipedia.org/wiki?curid=42253
331,067
Data mining requires data preparation which uncovers information or patterns which compromise confidentiality and privacy obligations. A common way for this to occur is through data aggregation. Data aggregation involves combining data together (possibly from various sources) in a way that facilitates analysis (but that also might make identification of private, individual-level data deducible or otherwise apparent). This is not data mining "per se", but a result of the preparation of data before—and for the purposes of—the analysis. The threat to an individual's privacy comes into play when the data, once compiled, cause the data miner, or anyone who has access to the newly compiled data set, to be able to identify specific individuals, especially when the data were originally anonymous.
https://en.wikipedia.org/wiki?curid=42253
332,008
It is natural to ask why ordinary everyday objects and events do not seem to display quantum mechanical features such as superposition. Indeed, this is sometimes regarded as "mysterious", for instance by Richard Feynman. In 1935, Erwin Schrödinger devised a well-known thought experiment, now known as Schrödinger's cat, which highlighted this dissonance between quantum mechanics and classical physics. One modern view is that this mystery is explained by quantum decoherence. A macroscopic system (such as a cat) may evolve over time into a superposition of classically distinct quantum states (such as "alive" and "dead"). The mechanism that achieves this is a subject of significant research. One proposed mechanism suggests that the state of the cat is entangled with the state of its environment (for instance, the molecules in the atmosphere surrounding it); when averaged over the possible quantum states of the environment (a physically reasonable procedure unless the quantum state of the environment can be controlled or measured precisely), the resulting mixed quantum state for the cat is very close to a classical probabilistic state where the cat has some definite probability to be dead or alive, just as a classical observer would expect in this situation. Another proposed class of theories holds that the fundamental time-evolution equation is incomplete and requires the addition of some type of fundamental Lindbladian; the reason for this addition and the form of the additional term vary from theory to theory. A popular theory is continuous spontaneous localization, where the Lindblad term is proportional to the spatial separation of the states; this too results in a quasi-classical probabilistic state.
https://en.wikipedia.org/wiki?curid=82728
338,114
Si single-junction solar cells have been widely studied for decades and are reaching their practical efficiency of ~26% under 1-sun conditions. Increasing this efficiency may require adding more cells with bandgap energy larger than 1.1 eV to the Si cell, allowing short-wavelength photons to be converted to generate additional voltage. A dual-junction solar cell with a band gap of 1.6–1.8 eV as a top cell can reduce thermalization loss, produce a high external radiative efficiency and achieve theoretical efficiencies over 45%. A tandem cell can be fabricated by growing the GaInP and Si cells. Growing them separately can overcome the 4% lattice constant mismatch between Si and the most common III–V layers that prevents direct integration into one cell. The two cells therefore are separated by a transparent glass slide so the lattice mismatch does not cause strain to the system. This creates a cell with four electrical contacts and two junctions that demonstrated an efficiency of 18.1%. With a fill factor (FF) of 76.2%, the Si bottom cell reaches an efficiency of 11.7% (± 0.4) in the tandem device, resulting in a cumulative tandem cell efficiency of 29.8%. This efficiency exceeds the theoretical limit of 29.4% and the record experimental efficiency value of a Si 1-sun solar cell, and is also higher than the record-efficiency 1-sun GaAs device. However, using a GaAs substrate is expensive and not practical. Hence researchers try to make a cell with two electrical contact points and one junction, which does not need a GaAs substrate. This means there will be direct integration of GaInP and Si.
https://en.wikipedia.org/wiki?curid=2352910