1,014,795
Data center facilities are heavy consumers of energy, accounting for between 1.1% and 1.5% of the world's total energy use in 2010 [1]. The U.S. Department of Energy estimates that data center facilities can consume 100 to 200 times more energy than standard office buildings ("Best Practices Guide for Energy-Efficient Data Center Design", prepared by the National Renewable Energy Laboratory for the U.S. Department of Energy, Federal Energy Management Program, March 2011).
https://en.wikipedia.org/wiki?curid=1661475
1,020,550
A simple example of the modern Hopfield network can be written in terms of binary variables $V_i$ that represent the active ($V_i = +1$) and inactive ($V_i = -1$) state of the model neuron $i$:
$$E = -\sum_{\mu} F\Big(\sum_{i} \xi_{\mu i} V_i\Big)$$
In this formula the weights $\xi_{\mu i}$ represent the matrix of memory vectors (index $\mu$ enumerates different memories, and index $i$ enumerates the content of each memory corresponding to the $i$-th feature neuron), and the function $F(x)$ is a rapidly growing non-linear function. The update rule for individual neurons (in the asynchronous case) can be written in the following form
$$V_i^{(t+1)} = \operatorname{sign}\Big[E\big(V_i = -1\big) - E\big(V_i = +1\big)\Big],$$
which states that in order to calculate the updated state of the $i$-th neuron the network compares two energies: the energy of the network with the $i$-th neuron in the ON state and the energy of the network with the $i$-th neuron in the OFF state, given the states of the remaining neurons. The updated state of the $i$-th neuron selects the state that has the lower of the two energies.
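To make the update rule concrete, here is a minimal numpy sketch (an illustration, not code from the article), assuming the common polynomial choice $F(x) = x^n$ and memories $\xi_{\mu i} \in \{-1, +1\}$: each neuron is flipped to whichever of its two states gives the lower energy.

```python
import numpy as np

def energy(xi, v, n=3):
    """E(v) = -sum_mu F(xi_mu . v), with F(x) = x**n as an assumed choice."""
    return -np.sum((xi @ v) ** n)

def update_neuron(xi, v, i, n=3):
    """Set neuron i to whichever state (+1 or -1) gives the lower energy."""
    v_on, v_off = v.copy(), v.copy()
    v_on[i], v_off[i] = 1, -1
    v[i] = 1 if energy(xi, v_on, n) < energy(xi, v_off, n) else -1
    return v

rng = np.random.default_rng(0)
xi = rng.choice([-1, 1], size=(5, 20))   # 5 memories, 20 feature neurons
v = xi[0] * rng.choice([1, -1], size=20, p=[0.8, 0.2])  # corrupted memory
for _ in range(5):                        # asynchronous update sweeps
    for i in range(20):
        v = update_neuron(xi, v, i)
print("recovered first memory:", np.array_equal(v, xi[0]))
```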
https://en.wikipedia.org/wiki?curid=1170097
1,023,812
In his introduction to Post 1921, van Heijenoort observes that both the "truth-table and the axiomatic approaches are clearly presented". This matter of a proof of consistency both ways (by a model theory, by axiomatic proof theory) comes up in the more-congenial version of Post's consistency proof that can be found in Nagel and Newman 1958 in their chapter V "An Example of a Successful Absolute Proof of Consistency". In the main body of the text they use a model to achieve their consistency proof (they also state that the system is complete but do not offer a proof) (Nagel & Newman 1958:45–56). But their text promises the reader a proof that is axiomatic rather than relying on a model, and in the Appendix they deliver this proof based on the notions of a division of formulas into two classes K₁ and K₂ that are mutually exclusive and exhaustive (Nagel & Newman 1958:109–113).
https://en.wikipedia.org/wiki?curid=1252308
1,024,448
The process of ICA based on infomax, in short, is: given a set of signal mixtures $\mathbf{x}$ and a set of identical independent model cumulative distribution functions (cdfs) $g$, we seek the unmixing matrix $\mathbf{W}$ which maximizes the joint entropy of the signals $\mathbf{Y} = g(\mathbf{y})$, where $\mathbf{y} = \mathbf{W}\mathbf{x}$ are the signals extracted by $\mathbf{W}$. Given the optimal $\mathbf{W}$, the signals $\mathbf{Y}$ have maximum entropy and are therefore independent, which ensures that the extracted signals $\mathbf{y}$ are also independent. $g$ is an invertible function, and is the signal model. Note that if the source signal model probability density function $p_s$ matches the probability density function of the extracted signal $p_{\mathbf{y}}$, then maximizing the joint entropy of $\mathbf{Y}$ also maximizes the amount of mutual information between $\mathbf{x}$ and $\mathbf{Y}$. For this reason, using entropy to extract independent signals is known as infomax.
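As an illustration of the infomax principle just described, the following sketch uses the classic Bell–Sejnowski natural-gradient update with a logistic cdf as the model $g$; the mixing setup, learning rate, and iteration count are assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 2, 5000
s = np.vstack([np.sign(rng.standard_normal(T)) * rng.exponential(1, T),
               rng.uniform(-1, 1, T)])     # two independent sources
A = rng.standard_normal((n, n))            # unknown mixing matrix
x = A @ s                                  # observed mixtures

W = np.eye(n)
eta = 0.01
for _ in range(200):
    u = W @ x                              # extracted signals y = W x
    y = 1.0 / (1.0 + np.exp(-u))           # model cdf g (logistic)
    # natural-gradient ascent on the joint entropy H(g(Wx))
    dW = (np.eye(n) + (1 - 2 * y) @ u.T / T) @ W
    W += eta * dW

# if unmixing succeeded, W @ A is close to a permutation/scaling of identity
print(W @ A)
```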
https://en.wikipedia.org/wiki?curid=598031
1,034,542
In the past several decades, the most popular fault model used in practice is the single stuck-at fault model. In this model, one of the signal lines in a circuit is assumed to be stuck at a fixed logic value, regardless of what inputs are supplied to the circuit. Hence, if a circuit has "n" signal lines, there are potentially "2n" stuck-at faults defined on the circuit, of which some can be viewed as being equivalent to others. The stuck-at fault model is a "logical" fault model because no delay information is associated with the fault definition. It is also called a "permanent" fault model because the faulty effect is assumed to be permanent, in contrast to "intermittent" faults which occur (seemingly) at random and "transient" faults which occur sporadically, perhaps depending on operating conditions (e.g. temperature, power supply voltage) or on the data values (high or low voltage states) on surrounding signal lines. The single stuck-at fault model is "structural" because it is defined based on a structural gate-level circuit model.
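The fault model lends itself to a compact illustration. The sketch below (a hypothetical example, not from the text) enumerates all 2n single stuck-at faults of a toy circuit z = (a AND b) OR c and reports which input vectors detect each fault.

```python
from itertools import product

LINES = ["a", "b", "c", "d", "z"]   # n = 5 signal lines -> 2n = 10 faults

def simulate(a, b, c, fault=None):
    """fault = (line, stuck_value) forces that line to a fixed logic value."""
    def f(name, val):
        return fault[1] if fault and fault[0] == name else val
    a, b, c = f("a", a), f("b", b), f("c", c)
    d = f("d", a & b)                # internal line d = a AND b
    return f("z", d | c)             # output z = d OR c

faults = [(line, v) for line in LINES for v in (0, 1)]
for fault in faults:
    tests = [t for t in product((0, 1), repeat=3)
             if simulate(*t) != simulate(*t, fault=fault)]
    print(f"stuck-at-{fault[1]} on line {fault[0]}: detected by {tests}")
```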
https://en.wikipedia.org/wiki?curid=374448
1,035,051
Fluorescence-activated cell sorting, also known as flow cytometry cell sorting and commonly known by the acronym FACS (a trademark of Becton Dickinson and Company), utilizes flow cytometry to separate cells based on morphological parameters and the expression of multiple extracellular and intracellular proteins. This method allows multiparameter cell sorting and involves encapsulating cells into small liquid droplets which are selectively given electric charges and sorted by an external electric field. Fluorescence-activated cell sorting has several systems that work together to achieve successful sorting of events of interest. These include fluidic, optical, and electrostatic systems. The fluidic system has to establish a precisely timed break-off from the liquid stream into small uniform droplets, so that droplets containing individual cells can then be deflected electrostatically. Based on the invention of Richard Sweet, droplet formation of the liquid jet of a cell sorter is stabilized by vibrations of an ultrasonic transducer at the exit of the nozzle orifice. The disturbances grow exponentially and lead to break-up of the jet into droplets with precise timing. A cell of interest that should be sorted is measured at the sensing zone and moves down the stream to the break-off point. During the separation of the droplet containing the cell from the intact liquid jet, a voltage pulse is applied to the liquid jet so that droplets containing the cells of interest can be deflected in an electric field between two deflection plates for sorting. The droplets are then caught by collection tubes or vessels placed below the deflection plates.
https://en.wikipedia.org/wiki?curid=22327978
1,039,036
Consider, for one, the familiar example of a marble on the edge of a bowl. If we consider the marble and bowl to be an isolated system, then when the marble drops, the potential energy will be converted to the kinetic energy of motion of the marble. Frictional forces will convert this kinetic energy to heat, and at equilibrium, the marble will be at rest at the bottom of the bowl, and the marble and the bowl will be at a slightly higher temperature. The total energy of the marble-bowl system will be unchanged. What was previously the potential energy of the marble will now reside in the increased heat energy of the marble-bowl system. This is an application of the maximum entropy principle as set forth in the principle of minimum potential energy: due to the heating effects, the entropy has increased to the maximum value possible given the fixed energy of the system.
https://en.wikipedia.org/wiki?curid=3662314
1,046,892
An MFA system is a model of an industrial plant, an industrial sector or a region of concern. The level of detail of the system model is chosen to fit the purpose of the study. An MFA system always consists of the system boundary, one or more processes, material flows between processes, and stocks of materials within processes. Physical exchange between the system and its environment happens via flows that cross the system boundary. Contrary to the preconceived notion that a system represents a specific industrial installation, systems and processes in MFA can represent much larger and more abstract entities as long as they are well-defined. The explicit system definition helps the practitioner to locate the available quantitative information in the system, either as stocks within certain processes or as flows between processes. An MFA system description can be refined by disaggregating processes or simplified by aggregating processes.
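A hedged sketch of the system elements named above, with assumed names and units: processes carry stocks, flows connect processes or cross the system boundary, and a mass balance can be checked per process.

```python
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    stock_change: float = 0.0      # net addition to stock, t/yr

@dataclass
class Flow:
    source: str                    # "ENV" marks the system boundary
    target: str
    mass: float                    # t/yr

def check_balance(processes, flows, tol=1e-9):
    """Per process: inflow - outflow must equal the stock change."""
    for p in processes:
        inflow = sum(f.mass for f in flows if f.target == p.name)
        outflow = sum(f.mass for f in flows if f.source == p.name)
        ok = abs(inflow - outflow - p.stock_change) < tol
        print(f"{p.name}: in={inflow} out={outflow} "
              f"stock+={p.stock_change} balanced={ok}")

procs = [Process("use", stock_change=2.0), Process("recycling")]
flows = [Flow("ENV", "use", 10.0), Flow("use", "recycling", 8.0),
         Flow("recycling", "ENV", 8.0)]
check_balance(procs, flows)
```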
https://en.wikipedia.org/wiki?curid=7886277
1,048,218
In recent decades, there has been an increased emphasis on data analysis and scientific inquiry in statistics education. In the United Kingdom, the Smith inquiry "Making Mathematics Count" suggests teaching basic statistical concepts as part of the science curriculum, rather than as part of mathematics. In the United States, the ASA's guidelines for undergraduate statistics specify that introductory statistics should emphasize the scientific methods of data collection, particularly randomized experiments and random samples; further, the first course should review these topics when the theory of "statistical inference" is studied. Similar recommendations occur for the Advanced Placement (AP) course in Statistics. The ASA and AP guidelines are followed by contemporary textbooks in the US, such as those by Freedman, Pisani & Purves ("Statistics"), by David S. Moore ("Introduction to the Practice of Statistics" with McCabe and "Statistics: Concepts and Controversies" with Notz), and by Watkins, Scheaffer & Cobb ("Statistics: From Data to Decisions" and "Statistics in Action").
https://en.wikipedia.org/wiki?curid=24985094
1,049,539
The non-random two-liquid model (abbreviated NRTL model) is an activity coefficient model that correlates the activity coefficients $\gamma_i$ of a compound with its mole fractions $x_i$ in the liquid phase concerned. It is frequently applied in the field of chemical engineering to calculate phase equilibria. The concept of NRTL is based on the hypothesis of Wilson that the local concentration around a molecule is different from the bulk concentration. This difference is due to a difference between the interaction energy of the central molecule with the molecules of its own kind, $U_{ii}$, and that with the molecules of the other kind, $U_{ij}$. The energy difference also introduces a non-randomness at the local molecular level. The NRTL model belongs to the so-called local-composition models. Other models of this type are the Wilson model, the UNIQUAC model, and the group contribution model UNIFAC. These local-composition models are not thermodynamically consistent for a one-fluid model for a real mixture, due to the assumption that the local composition around molecule "i" is independent of the local composition around molecule "j". This assumption is not true, as was shown by Flemr in 1976. However, they are consistent if a hypothetical two-liquid model is used.
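For a binary mixture the NRTL working equations take a closed form; the sketch below evaluates them with placeholder parameters (the τ values and α are illustrative, not fitted data for any specific mixture).

```python
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Return (gamma1, gamma2) for a binary mixture from the NRTL equations."""
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# gamma_i -> 1 as x_i -> 1 (pure component), > 1 at dilution here
for x1 in (0.1, 0.5, 0.9):
    g1, g2 = nrtl_binary(x1, tau12=1.2, tau21=0.8)
    print(f"x1={x1}: gamma1={g1:.3f}, gamma2={g2:.3f}")
```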
https://en.wikipedia.org/wiki?curid=14053488
1,065,745
Quantum cognition is an emerging field which applies the mathematical formalism of quantum theory to model cognitive phenomena such as information processing by the human brain, language, decision making, human memory, concepts and conceptual reasoning, human judgment, and perception. The field clearly distinguishes itself from the quantum mind as it is not reliant on the hypothesis that there is something micro-physical quantum-mechanical about the brain. Quantum cognition is based on the quantum-like paradigm or generalized quantum paradigm or quantum structure paradigm that information processing by complex systems such as the brain, taking into account contextual dependence of information and probabilistic reasoning, can be mathematically described in the framework of quantum information and quantum probability theory.
https://en.wikipedia.org/wiki?curid=30138821
1,070,095
The proof of the no-hiding theorem is based on the linearity and the unitarity of quantum mechanics. The original information which is missing from the final state simply remains in the subspace of the environmental Hilbert space. Also, note that the original information is not in the correlation between the system and the environment. This is the essence of the no-hiding theorem. One can, in principle, recover the lost information from the environment by local unitary transformations acting only on the environment Hilbert space. The no-hiding theorem provides new insights into the nature of quantum information. For example, if classical information is lost from one system it may either move to another system or be hidden in the correlation between a pair of bit strings. However, quantum information cannot be completely hidden in correlations between a pair of subsystems. Quantum mechanics allows only one way to completely hide an arbitrary quantum state from one of its subsystems: if it is lost from one subsystem, then it moves to other subsystems.
https://en.wikipedia.org/wiki?curid=54424691
1,074,021
With unlabelled protein the usual procedure is to record a set of two-dimensional homonuclear nuclear magnetic resonance experiments through correlation spectroscopy (COSY), of which several types include conventional correlation spectroscopy, total correlation spectroscopy (TOCSY) and nuclear Overhauser effect spectroscopy (NOESY). A two-dimensional nuclear magnetic resonance experiment produces a two-dimensional spectrum; the units of both axes are chemical shifts. The COSY and TOCSY transfer magnetization through the chemical bonds between adjacent protons. The conventional correlation spectroscopy experiment is only able to transfer magnetization between protons on adjacent atoms, whereas in the total correlation spectroscopy experiment the protons are able to relay the magnetization, so it is transferred among all the protons that are connected by adjacent atoms. Thus in conventional correlation spectroscopy, an alpha proton transfers magnetization to the beta protons, the beta protons transfer to the alpha and gamma protons, if any are present, then the gamma proton transfers to the beta and the delta protons, and the process continues. In total correlation spectroscopy, the alpha and all the other protons are able to transfer magnetization to the beta, gamma, delta, and epsilon protons if they are connected by a continuous chain of protons. The continuous chain of protons is the side chain of the individual amino acids. Thus these two experiments are used to build so-called spin systems, that is, to build a list of resonances of the chemical shift of the peptide proton, the alpha protons and all the protons from each residue's side chain. Which chemical shifts correspond to which nuclei in the spin system is determined by the conventional correlation spectroscopy connectivities and by the fact that different types of protons have characteristic chemical shifts. To connect the different spin systems in a sequential order, the nuclear Overhauser effect spectroscopy experiment has to be used. Because this experiment transfers magnetization through space, it will show cross-peaks for all protons that are close in space regardless of whether they are in the same spin system or not. The neighbouring residues are inherently close in space, so the assignments can be made by the peaks in the NOESY with other spin systems.
https://en.wikipedia.org/wiki?curid=3654507
1,077,736
A non-trivial topology does not in itself imply energetic stability. There is in fact no necessary relation between topology and energetic stability. Hence, one must be careful not to confuse ‘topological stability,’ which is a mathematical concept, with energy stability in real physical systems. Topological stability refers to the idea that in order for a system described by a continuous field to transition from one topological state to another, a rupture must occur in the continuous field, i.e. a discontinuity must be produced. For example, if one wishes to transform a flexible balloon doughnut (torus) into an ordinary spherical balloon, it is necessary to introduce a rupture on some part of the balloon doughnut's surface. Mathematically, the balloon doughnut would be described as 'topologically stable.' However, in physics, the free energy required to introduce a rupture enabling the transition of a system from one ‘topological’ state to another is always "finite". For example, it is possible to turn a rubber balloon into a flat piece of rubber by poking it with a needle (and popping it!). Thus, while a physical system can be "approximately" described using the mathematical concept of topology, attributes such as "energetic" stability are dependent on the system's parameters—the strength of the rubber in the example above—not the topology per se. In order to draw a meaningful parallel between the concept of topological stability and the energy stability of a system, the analogy must necessarily be accompanied by the introduction of a non-zero phenomenological ‘field rigidity’ to account for the finite energy needed to rupture the field’s topology. Modeling and then integrating this field rigidity can be likened to calculating a breakdown energy-density of the field. These considerations suggest that what is often referred to as ‘topological protection,’ or a 'topological barrier,' should more accurately be referred to as a 'topology-related energy barrier,' though this terminology is somewhat cumbersome. A quantitative evaluation of such a topological barrier can be obtained by extracting the critical magnetic configuration when the topological number changes during the dynamical process of a skyrmion creation event. Applying the topological charge defined on a lattice, the barrier height is theoretically shown to be proportional to the exchange stiffness.
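The last point can be made concrete with a rough numerical sketch (an assumed illustration, not from the text): the code below evaluates a crude finite-difference estimate of the lattice topological charge, $Q = \frac{1}{4\pi}\sum \mathbf{m}\cdot(\partial_x \mathbf{m} \times \partial_y \mathbf{m})$, for a synthetic skyrmion-like texture; quantitative work typically uses the solid-angle (Berg–Lüscher) definition instead.

```python
import numpy as np

def topological_charge(m):
    """m: array (L, L, 3) of unit spins; crude central-difference estimate."""
    dx = (np.roll(m, -1, axis=0) - np.roll(m, 1, axis=0)) / 2.0
    dy = (np.roll(m, -1, axis=1) - np.roll(m, 1, axis=1)) / 2.0
    density = np.einsum("xyk,xyk->xy", m, np.cross(dx, dy))
    return density.sum() / (4.0 * np.pi)

# build a Belavin-Polyakov-like skyrmion texture of radius R on an L x L grid
L, R = 64, 8.0
y, x = np.meshgrid(np.arange(L) - L / 2, np.arange(L) - L / 2, indexing="ij")
r = np.hypot(x, y)
theta = 2 * np.arctan2(R, np.maximum(r, 1e-9))  # polar angle of the spin
phi = np.arctan2(y, x)
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)], axis=-1)
print("estimated topological charge:", topological_charge(m))  # close to +/-1
```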
https://en.wikipedia.org/wiki?curid=44416015
1,083,170
There are many ways to validate a model. Residual plots display the difference between the actual data and the model's predictions: correlations in the residual plots may indicate a flaw in the model. Cross validation is a method of model validation that iteratively refits the model, each time leaving out a small sample and checking whether the held-out samples are predicted by the model; there are many kinds of cross validation. Predictive simulation is used to compare simulated data to actual data. External validation involves fitting the model to new data. The Akaike information criterion estimates the quality of a model.
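As a small illustration of cross validation as described, the sketch below refits a polynomial model k times, each time scoring the held-out fold; the data and the choice of model are assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 60)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(60)

def kfold_mse(degree, k=5):
    """Mean held-out squared error over k folds for a polynomial fit."""
    idx = rng.permutation(len(x))
    errs = []
    for held_out in np.array_split(idx, k):
        train = np.setdiff1d(idx, held_out)
        coeffs = np.polyfit(x[train], y[train], degree)  # refit without fold
        pred = np.polyval(coeffs, x[held_out])
        errs.append(np.mean((pred - y[held_out]) ** 2))
    return np.mean(errs)

# over-flexible models look good in-sample but score poorly out-of-sample
for d in (1, 3, 9, 15):
    print(f"degree {d}: cross-validated MSE = {kfold_mse(d):.4f}")
```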
https://en.wikipedia.org/wiki?curid=26502065
1,090,025
where "K" is kinetic energy, "m" is mass, and "v" is velocity. Because the mass of a roller coaster car remains constant, if the speed is increased, the kinetic energy must also increase. This means that the kinetic energy for the roller coaster system is greatest at the bottom of the largest downhill slope on the track, typically at the bottom of the lift hill. When the train begins to climb the next hill on the track, the train's kinetic energy is converted back into potential energy, decreasing the train's velocity. This process of converting kinetic energy to potential energy and back to kinetic energy continues with each hill. The energy is never destroyed but is lost to friction between the car and track bringing
https://en.wikipedia.org/wiki?curid=25303858
1,090,096
One reason for the lack of global symmetry breaking is that one can easily excite long-wavelength fluctuations which destroy perfect order. "Easily excited" means that the energy of those fluctuations tends to zero for large enough systems. Consider a magnetic model, e.g. the XY model in one dimension: a chain of magnetic moments of length $L$. We use the harmonic approximation, where the forces (torques) between neighbouring moments increase linearly with the angle of twisting $\Delta\varphi$, so the energy due to twisting increases quadratically, $E \propto (\Delta\varphi)^2$. The total energy is the sum over all twisted pairs of magnetic moments, $E = J\sum_{\langle i,j\rangle} (\Delta\varphi_{ij})^2$. If one considers the excited mode with the lowest energy in one dimension, the moments on the chain of length $L$ are tilted by $\pi$ along the chain. The relative angle between neighbouring moments is the same for all pairs of moments in this mode and equals $\pi/N$ if the chain consists of $N$ magnetic moments. It follows that the total energy of this lowest mode is $E_1 \propto N(\pi/N)^2 = \pi^2/N$. It decreases with increasing system size and tends to zero in the thermodynamic limit $N \to \infty$. It follows that for arbitrarily large systems the lowest modes cost no energy and will be thermally excited. Simultaneously, the long-range order on the chain is destroyed. In two dimensions (in a plane) the number of magnetic moments is proportional to the area of the plane, $N \propto L^2$. The energy of the lowest excited mode is then $E_1 \propto L^2(\pi/L)^2 = \pi^2$, which tends to a constant in the thermodynamic limit; thus the modes will be excited at sufficiently large temperatures. In three dimensions the number of magnetic moments is proportional to the volume, $N \propto L^3$, and the energy of the lowest mode is $E_1 \propto L^3(\pi/L)^2 = \pi^2 L$. It diverges with system size and will thus not be excited for large enough systems. Long-range order is not affected by this mode, and global symmetry breaking is allowed.
https://en.wikipedia.org/wiki?curid=4186556
1,090,700
The first workstations in the series, the Model 720, Model 730 and Model 750 systems were introduced on 26 March 1991 and were code-named "Snakes". The models used the PA-7000 microprocessor, with the Model 720 using a 50 MHz version and the Model 730 and Model 750 using a 66 MHz version. The PA-7000 is provided with 128 KB of instruction cache on the Model 720 and 730 and 256 KB on the Model 750. All models are provided with 256 KB of data cache. The Model 720 and Model 730 supported 16 to 64 MB of memory, while the Model 750 supported up to 192 MB. Onboard SCSI was provided by an NCR 53C700 SCSI controller. These systems could use both 2D and 3D graphics options, with 2D options being the greyscale GRX and the color CRX. 3D options were the Personal VRX and the Turbo GRX.
https://en.wikipedia.org/wiki?curid=952894
1,092,067
An analytical solution to the Poisson–Boltzmann equation can be used to describe an electron–electron interaction in a metal-insulator-semiconductor (MIS) structure. This can be used to describe both the time and position dependence of dissipative systems such as a mesoscopic system. This is done by solving the Poisson–Boltzmann equation analytically in the three-dimensional case. Solving this results in expressions for the distribution function of the Boltzmann equation and the self-consistent average potential of the Poisson equation. These expressions are useful for analyzing quantum transport in a mesoscopic system. In metal-insulator-semiconductor tunneling junctions, electrons can build up close to the interface between layers, and as a result the quantum transport of the system will be affected by the electron–electron interactions. Certain transport properties such as electric current and electronic density can be obtained by solving for the self-consistent Coulombic average potential from the electron–electron interactions, which is related to the electronic distribution. Therefore, it is essential to analytically solve the Poisson–Boltzmann equation in order to obtain the analytical quantities in the MIS tunneling junctions.
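The passage concerns analytic solutions; as a numerical counterpoint under assumed boundary values, the sketch below solves the dimensionless one-dimensional Poisson–Boltzmann equation $\varphi'' = \sinh\varphi$ by Newton iteration on a finite-difference grid.

```python
import numpy as np

N, L, phi0 = 200, 10.0, 2.0
h = L / (N + 1)
phi = np.linspace(phi0, 0.0, N + 2)   # initial guess; boundary values included

for _ in range(30):
    interior = phi[1:-1]
    lap = (phi[:-2] - 2 * interior + phi[2:]) / h**2
    F = lap - np.sinh(interior)       # residual of phi'' = sinh(phi)
    # tridiagonal Jacobian of the residual
    J = (np.diag(-2 / h**2 - np.cosh(interior))
         + np.diag(np.ones(N - 1) / h**2, 1)
         + np.diag(np.ones(N - 1) / h**2, -1))
    delta = np.linalg.solve(J, -F)    # Newton step
    phi[1:-1] += delta
    if np.max(np.abs(delta)) < 1e-12:
        break

print("potential a quarter of the way in:", phi[N // 4])  # decays toward zero
```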
https://en.wikipedia.org/wiki?curid=6161274
1,093,015
The runtime of the quantum algorithm for solving systems of linear equations originally proposed by Harrow et al. was shown to be $O(\log(N)\,s^2\kappa^2/\varepsilon)$, where $\varepsilon$ is the error parameter, $s$ is the sparsity, and $\kappa$ is the condition number of the matrix $A$. This was subsequently improved to $O(\kappa\log^3\kappa\,\log(N)/\varepsilon^3)$ by Andris Ambainis, and a quantum algorithm with runtime polynomial in $\log(1/\varepsilon)$ was developed by Childs et al. Since the HHL algorithm maintains its logarithmic scaling in $N$ only for sparse or low-rank matrices, Wossnig et al. extended the HHL algorithm based on a quantum singular value estimation technique and provided a linear system algorithm for dense matrices which runs in $O(\sqrt{N}\log(N)\,\kappa^2)$ time compared to the $O(N\log(N)\,\kappa^2)$ of the standard HHL algorithm.
https://en.wikipedia.org/wiki?curid=42676762
1,093,030
Wiebe et al. provide a new quantum algorithm to determine the quality of a least-squares fit in which a continuous function is used to approximate a set of discrete points by extending the quantum algorithm for linear systems of equations. As the number of discrete points increases, the time required to produce a least-squares fit using even a quantum computer running a quantum state tomography algorithm becomes very large. Wiebe et al. find that in many cases, their algorithm can efficiently find a concise approximation of the data points, eliminating the need for the higher-complexity tomography algorithm.
https://en.wikipedia.org/wiki?curid=42676762
1,103,991
In 1988–1990, the destruction of munitions containing BZ, a non-lethal hallucinogenic agent, occurred at Pine Bluff Chemical Activity in Arkansas. Hawthorne Army Depot in Nevada destroyed all M687 chemical artillery shells and 458 metric tons of binary precursor chemicals by July 1999. Operations were completed at Johnston Atoll Chemical Agent Disposal System, where all 640 metric tons of chemical agents were destroyed by 2000, as well as at Edgewood Chemical Activity in Maryland, with 1,472 metric tons of agents destroyed by February 2006. All DF and QL, chemical weapons precursors, were destroyed in 2006 at Pine Bluff. Newport Chemical Depot in Indiana began destruction operations in May 2005 and completed operations on August 8, 2008, disposing of 1,152 tonnes of agents. Pine Bluff completed destruction of 3,850 tons of weapons on November 12, 2010. Anniston Chemical Activity in Alabama completed disposal on September 22, 2011. Umatilla Chemical Depot in Oregon finished disposal on October 25, 2011. Tooele Chemical Demilitarization Facility at Deseret Chemical Depot in Utah finished disposal on January 21, 2012.
https://en.wikipedia.org/wiki?curid=40505833
1,110,426
At its core, erwin is a computer-aided software engineering tool (CASE tool). Users can utilize erwin Data Modeler to take a conceptual data model and create a logical data model that is not dependent on a specific database technology. This schematic model can be used to create the physical data model. Users can then forward-engineer the data definition language required to instantiate the schema for a range of database-management systems. The software includes features to graphically modify the model, including dialog boxes for specifying the number of entity–relationships, database constraints, indexes, and data uniqueness. erwin supports three data modeling languages: IDEF1X, a variant of the information engineering notation developed by James Martin, and a form of dimensional modeling notation.
https://en.wikipedia.org/wiki?curid=9502078
1,116,785
Stochastic resonance (SR) is observed when noise added to a system changes the system's behaviour in some fashion. More technically, SR occurs if the signal-to-noise ratio of a nonlinear system or device increases for moderate values of noise intensity. It often occurs in bistable systems or in systems with a sensory threshold and when the input signal to the system is "sub-threshold." For lower noise intensities, the signal does not cause the device to cross threshold, so little signal is passed through it. For large noise intensities, the output is dominated by the noise, also leading to a low signal-to-noise ratio. For moderate intensities, the noise allows the signal to reach threshold, but the noise intensity is not so large as to swamp it. Thus, a plot of signal-to-noise ratio as a function of noise intensity contains a peak.
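This peak is easy to reproduce in simulation. The sketch below (an assumed threshold-detector example) passes a sub-threshold sinusoid plus noise through a hard threshold and uses the output–signal correlation as a crude stand-in for the signal-to-noise ratio.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 50, 20000)
signal = 0.5 * np.sin(2 * np.pi * t)   # sub-threshold: peak 0.5 < threshold
threshold = 1.0

for sigma in (0.2, 0.4, 0.8, 1.6, 3.2):
    noisy = signal + sigma * rng.standard_normal(t.size)
    out = (noisy > threshold).astype(float)    # hard threshold detector
    corr = np.corrcoef(out, signal)[0, 1]      # crude SNR proxy
    print(f"noise sigma={sigma:3.1f}: output-signal correlation = {corr:.3f}")
# the correlation rises, peaks at moderate noise, then falls again
```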
https://en.wikipedia.org/wiki?curid=965419
1,117,954
Data assimilation is a mathematical discipline that seeks to optimally combine theory (usually in the form of a numerical model) with observations. There may be a number of different goals sought – for example, to determine the optimal state estimate of a system, to determine initial conditions for a numerical forecast model, to interpolate sparse observation data using (e.g. physical) knowledge of the system being observed, to set numerical parameters based on training a model from observed data. Depending on the goal, different solution methods may be used. Data assimilation is distinguished from other forms of machine learning, image analysis, and statistical methods in that it utilizes a dynamical model of the system being analyzed.
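A minimal sketch of this blend of model and observation, using a standard scalar Kalman filter (a textbook construction, not any specific assimilation system): each cycle propagates the estimate with the model, then corrects it toward the observation in proportion to the relative uncertainties.

```python
import numpy as np

rng = np.random.default_rng(4)
x_true = 0.0
x_est, P = 1.0, 1.0        # initial state estimate and its variance
Q, R = 0.05, 0.4           # model-error and observation-error variances

for step in range(10):
    x_true = 0.95 * x_true + rng.normal(0, np.sqrt(Q))   # "nature" run
    # forecast step: propagate the estimate and its variance with the model
    x_est, P = 0.95 * x_est, 0.95**2 * P + Q
    # analysis step: blend the forecast with the observation
    obs = x_true + rng.normal(0, np.sqrt(R))
    K = P / (P + R)                                      # Kalman gain
    x_est, P = x_est + K * (obs - x_est), (1 - K) * P
    print(f"step {step}: truth={x_true:+.3f} analysis={x_est:+.3f}")
```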
https://en.wikipedia.org/wiki?curid=2458875
1,119,433
Here, $E$ is a function from the space of states to the real numbers; in physics applications, $E(x)$ is interpreted as the energy of the configuration $x$. The parameter $\beta$ is a free parameter; in physics, it is the inverse temperature. The normalizing constant $Z(\beta)$ is the partition function. However, in infinite systems, the total energy is no longer a finite number and cannot be used in the traditional construction of the probability distribution of a canonical ensemble. Traditional approaches in statistical physics studied the limit of intensive properties as the size of a finite system approaches infinity (the thermodynamic limit). When the energy function can be written as a sum of terms that each involve only variables from a finite subsystem, the notion of a Gibbs measure provides an alternative approach. Gibbs measures were proposed by probability theorists such as Dobrushin, Lanford, and Ruelle and provided a framework to directly study infinite systems, instead of taking the limit of finite systems.
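For a finite system the construction is elementary, as the sketch below shows for a five-spin Ising chain with an assumed coupling; the point of the Gibbs-measure formalism is precisely to extend these probabilities to infinite systems, where $Z$ diverges.

```python
import itertools
import numpy as np

def E(x):
    """Nearest-neighbour Ising energy with coupling J = 1 (assumed)."""
    return -sum(x[i] * x[i + 1] for i in range(len(x) - 1))

beta = 0.7                      # inverse temperature
states = list(itertools.product([-1, 1], repeat=5))
weights = np.array([np.exp(-beta * E(x)) for x in states])
Z = weights.sum()               # partition function (normalizing constant)
probs = weights / Z             # P(x) = exp(-beta E(x)) / Z

# the two fully aligned states are the most probable configurations
for i in np.argsort(probs)[::-1][:2]:
    print(states[i], f"P = {probs[i]:.4f}")
```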
https://en.wikipedia.org/wiki?curid=3085914
1,150,678
Along with Zurek's related theory of envariance (invariance due to quantum entanglement), quantum Darwinism seeks to explain how the classical world emerges from the quantum world and proposes to answer the quantum measurement problem, the main interpretational challenge for quantum theory. The measurement problem arises because the quantum state vector, the source of all knowledge concerning quantum systems, evolves according to the Schrödinger equation into a linear superposition of different states, predicting paradoxical situations such as "Schrödinger's cat", situations never experienced in our classical world. Quantum theory has traditionally treated this problem as being resolved by a non-unitary transformation of the state vector at the time of measurement into a definite state. It provides an extremely accurate means of predicting the value of the definite state that will be measured in the form of a probability for each possible measurement value. The physical nature of the transition from the quantum superposition of states to the definite classical state measured is not explained by the traditional theory but is usually assumed as an axiom and was at the basis of the debate between Niels Bohr and Albert Einstein concerning the completeness of quantum theory.
https://en.wikipedia.org/wiki?curid=1334123
1,155,732
A data grid is an architecture or set of services that gives individuals or groups of users the ability to access, modify and transfer extremely large amounts of geographically distributed data for research purposes. Data grids make this possible through a host of middleware applications and services that pull together data and resources from multiple administrative domains and then present them to users upon request. The data in a data grid can be located at a single site or multiple sites where each site can be its own administrative domain governed by a set of security restrictions as to who may access the data. Likewise, multiple replicas of the data may be distributed throughout the grid outside their original administrative domain, and the security restrictions placed on the original data for who may access it must be equally applied to the replicas. Specifically developed data grid middleware is what handles the integration between users and the data they request by controlling access while making it available as efficiently as possible.
https://en.wikipedia.org/wiki?curid=35951900
1,155,736
The data transport service also provides for the low-level access and connections between hosts for file transfer. The data transport service may use any number of modes to implement the transfer, including parallel data transfer, where two or more data streams are used over the same channel, striped data transfer, where two or more streams access different blocks of the file for simultaneous transfer, and use of the underlying built-in capabilities of the network hardware or specifically developed protocols to support faster transfer speeds. The data transport service might optionally include a network overlay function to facilitate the routing and transfer of data, as well as file I/O functions that allow users to see remote files as if they were local to their system. The data transport service hides the complexity of access and transfer between the different systems from the user, so the data appear as one unified data source.
https://en.wikipedia.org/wiki?curid=35951900
1,156,815
The microscopic origin of the atomic magnetic moments in magnetic materials is quantum mechanical; the Planck constant enters explicitly in the equation defining the magnetic moment of an electron, along with its charge and its mass. Yet, the magnetic moments in the dysprosium titanate and the holmium titanate spin ice materials are effectively described by "classical" statistical mechanics, and not quantum statistical mechanics, over the experimentally relevant and reasonably accessible temperature range (between 0.05 K and 2 K) where the spin ice phenomena manifest themselves. Although the weakness of quantum effects in these two compounds is rather unusual, it is believed to be understood. There is current interest in the search of quantum spin ices, materials in which the laws of quantum mechanics now become needed to describe the behavior of the magnetic moments. Magnetic ions other than dysprosium (Dy) and holmium (Ho) are required to generate a quantum spin ice, with praseodymium (Pr), terbium (Tb) and ytterbium (Yb) being possible candidates. One reason for the interest in quantum spin ice is the belief that these systems may harbor a "quantum spin liquid", a state of matter where magnetic moments continue to wiggle (fluctuate) down to absolute zero temperature. The theory describing the low-temperature and low-energy properties of quantum spin ice is akin to that of vacuum quantum electrodynamics, or QED. This constitutes an example of the idea of emergence.
https://en.wikipedia.org/wiki?curid=4376459
1,158,754
In the field of energetics, an energy carrier is produced by human technology from a primary energy source. Only the energy sector uses primary energy sources. Other sectors of society use an energy carrier to perform useful activities (end-uses). The distinction between "Energy Carriers" (EC) and "Primary Energy Sources" (PES) is extremely important. An energy carrier can be more valuable (have a higher quality) than a primary energy source. For example, 1 megajoule (MJ) of electricity produced by a hydroelectric plant is equivalent to 3 MJ of oil. Sunlight is a main source of primary energy, which can be transformed into plants and then into coal, oil and gas. Solar power and wind power are other derivatives of sunlight. Note that although coal, oil and natural gas are derived from sunlight, they are considered primary energy sources which are extracted from the earth (fossil fuels). Natural uranium is also a primary energy source extracted from the earth but does not come from the decomposition of organisms (mineral fuel).
https://en.wikipedia.org/wiki?curid=827528
1,170,102
In mathematical physics, Gleason's theorem shows that the rule one uses to calculate probabilities in quantum physics, the Born rule, can be derived from the usual mathematical representation of measurements in quantum physics together with the assumption of non-contextuality. Andrew M. Gleason first proved the theorem in 1957, answering a question posed by George W. Mackey, an accomplishment that was historically significant for the role it played in showing that wide classes of hidden-variable theories are inconsistent with quantum physics. Multiple variations have been proven in the years since. Gleason's theorem is of particular importance for the field of quantum logic and its attempt to find a minimal set of mathematical axioms for quantum theory.
https://en.wikipedia.org/wiki?curid=6796998
1,175,214
Energy efficiency and renewable energy are twin pillars of a sustainable energy future. However, there is little linking of these pillars despite their potential synergies. The more efficiently energy services are delivered, the faster renewable energy can become an effective and significant contributor of primary energy. The more energy is obtained from renewable sources, the less fossil fuel energy is required to provide that same energy demand. This linkage of renewable energy with energy efficiency relies in part on the electrical energy efficiency benefits of copper.
https://en.wikipedia.org/wiki?curid=37904380
1,177,093
EnergyPATHWAYS is a comprehensive accounting framework used to construct economy-wide energy infrastructure scenarios. While portions of the model do use linear programming techniques, for instance, for electricity dispatch, the EnergyPATHWAYS model is not fundamentally an optimization model and embeds few decision dynamics. EnergyPATHWAYS offers detailed energy, cost, and emissions accounting for the energy flows from primary supply to final demand. The energy system representation is flexible, allowing for differing levels of detail and the nesting of cities, states, and countries. The model uses hourly least-cost electricity dispatch and supports power-to-gas, short-duration energy storage, long-duration energy storage, and demand response. Scenarios typically run to 2050.
https://en.wikipedia.org/wiki?curid=38803848
1,183,112
Pure Physics major programmes are provided in the Chinese University of Hong Kong (CUHK), Hong Kong University of Science and Technology (HKUST) and University of Hong Kong (HKU). Topics include engineering physics, mechanics, thermodynamics, fluids, waves, optics, modern physics, laboratory, heat, electromagnetism, quantitative methods, computational physics, astronomy, astrophysics, classical mechanics, quantum mechanics, quantum information, statistical physics, theoretical physics, computer simulation, soft matter, practical electronics, contemporary physics, instrumentation, statistical mechanics, solid state physics, meteorology, nanoscience, optical physics, theory of relativity and particle physics etc.
https://en.wikipedia.org/wiki?curid=2711029
1,184,471
Existing 3D methods are not without limitations, including scalability, reproducibility, sensitivity, and compatibility with high-throughput screening (HTS) instruments. Cell-based HTS relies on rapid determination of cellular response to drug interaction, such as dose dependent cell viability, cell-cell/cell-matrix interaction, and/or cell migration, but the available assays are not optimized for 3D cell culturing. Another challenge faced by 3D cell culturing is the limited amount of data and publications that address mechanisms and correlations of drug interaction, cell differentiation, and cell-signalling in these 3D environments. None of the 3D methods have yet replaced 2D culturing on a large scale, including in the drug development process; although the number of 3D cell culturing publications is increasing rapidly, the current limited biochemical characterization of 3D tissue diminishes the adoption of new methods.
https://en.wikipedia.org/wiki?curid=39905795
1,190,314
The method is based on the idea of a so-called fraction function $C$. It is a scalar function, defined as the integral of a fluid's characteristic function in the control volume, namely the volume of a computational grid cell. The volume fraction of each fluid is tracked through every cell in the computational grid, while all fluids share a single set of momentum equations, i.e. one for each spatial direction. From a cell-volume averaged perspective, when a cell is empty of the tracked phase, the value of $C$ is zero; when the cell is full of the tracked phase, $C = 1$; and when the cell contains an interface between the tracked and non-tracked volumes, $0 < C < 1$. From the perspective of a local point that contains no volume, $C$ is a discontinuous function insofar as its value jumps from 0 to 1 when the local point moves from the non-tracked to the tracked phase. The normal direction of the fluid interface is found where the value of $C$ changes most rapidly. With this method, the free surface is not defined sharply; instead it is distributed over the height of a cell. Thus, in order to attain accurate results, local grid refinements have to be done. The refinement criterion is simple: cells with $0 < C < 1$ have to be refined. A method for this, known as the marker and micro-cell method, was developed by Raad and his colleagues in 1997.
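The two operational ideas above, flagging interface cells for refinement and locating the interface normal where $C$ changes most rapidly, can be sketched as follows (the fraction field is synthetic, and the gradient-based normal is a simplification of practical VOF reconstructions).

```python
import numpy as np

L = 32
y, x = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
# synthetic fraction field: tracked fluid fills a disc, smeared over one cell
r = np.hypot(x - L / 2, y - L / 2)
C = np.clip(10.0 - r, 0.0, 1.0)

refine = (C > 0.0) & (C < 1.0)       # refinement criterion from the text
gy, gx = np.gradient(C)              # C changes most rapidly at the interface
norm = np.hypot(gx, gy)
with np.errstate(invalid="ignore", divide="ignore"):
    nx, ny = -gx / norm, -gy / norm  # unit normal, pointing out of the fluid

print("cells flagged for refinement:", int(refine.sum()))
iy, ix = np.argwhere(refine)[0]
print(f"normal at one interface cell: ({nx[iy, ix]:+.2f}, {ny[iy, ix]:+.2f})")
```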
https://en.wikipedia.org/wiki?curid=16676201
1,191,385
B. Hiley and R. E. Callaghan re-interpret the role of the Bohm model and its notion of quantum potential in the framework of Clifford algebra, taking account of recent advances that include the work of David Hestenes on spacetime algebra. They show how, within a nested hierarchy of Clifford algebras $C\ell_{i,j}$, for each Clifford algebra an element of a minimal left ideal $\Phi_L$ and an element of a right ideal representing its Clifford conjugation $\tilde{\Phi}_L$ can be constructed, and from it the "Clifford density element" (CDE) $\rho_c = \Phi_L \tilde{\Phi}_L$, an element of the Clifford algebra which is isomorphic to the standard density matrix but independent of any specific representation. On this basis, bilinear invariants can be formed which represent properties of the system. Hiley and Callaghan distinguish bilinear invariants of a first kind, of which each stands for the expectation value of an element $B$ of the algebra, and bilinear invariants of a second kind, which are constructed with derivatives and represent momentum and energy. Using these terms, they reconstruct the results of quantum mechanics without depending on a particular representation in terms of a wave function nor requiring reference to an external Hilbert space. Consistent with earlier results, the quantum potential of a non-relativistic particle with spin (Pauli particle) is shown to have an additional spin-dependent term, and the momentum of a relativistic particle with spin (Dirac particle) is shown to consist of a linear motion and a rotational part. The two dynamical equations governing the time evolution are re-interpreted as conservation equations. One of them stands for the conservation of energy; the other stands for the conservation of probability and of spin. The quantum potential plays the role of an internal energy which ensures the conservation of total energy.
https://en.wikipedia.org/wiki?curid=8057418
1,195,889
One of the most well-known examples in social physics is the relationship of the Ising model and the voting dynamics of a finite population. The Ising model, as a model of ferromagnetism, is represented by a grid of spaces, each of which is occupied by a spin, numerically ±1. Mathematically, the final energy state of the system depends on the interactions of the spaces and their respective spins. For example, if two adjacent spaces share the same spin, the surrounding neighbors will begin to align, and the system will eventually reach a state of consensus. In social physics, it has been observed that voter dynamics in a finite population obey the same mathematical properties of the Ising model. In the social physics model, each spin denotes an opinion, e.g. yes or no, and each space represents a "voter". If two adjacent spaces (voters) share the same spin (opinion), their neighbors begin to align with their spin value; if two adjacent spaces do not share the same spin, then their neighbors remain the same. Eventually, the remaining voters will reach a state of consensus as the "information flows outward".
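A minimal sketch of these voter dynamics (grid size and update scheme are assumptions): at each update a random site adopts the opinion of a random neighbour, and the loop runs until the whole grid agrees.

```python
import numpy as np

rng = np.random.default_rng(5)
L = 8
spins = rng.choice([-1, 1], size=(L, L))   # each site holds an opinion of +/-1

steps = 0
while np.abs(spins.sum()) != L * L:        # loop until full consensus
    i, j = rng.integers(L, size=2)         # pick a random voter
    di, dj = [(0, 1), (0, -1), (1, 0), (-1, 0)][rng.integers(4)]
    spins[i, j] = spins[(i + di) % L, (j + dj) % L]  # adopt neighbour's opinion
    steps += 1

print(f"consensus on opinion {spins[0, 0]} after {steps} updates")
```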
https://en.wikipedia.org/wiki?curid=52054307
1,198,305
The time evolution of quantum systems can be determined by solving the effective equations of motion, also known as master equations, that govern how the density matrix describing the system changes over time and the dynamics of the observables that are associated with the system. In general, however, the environment that we want to model as being a part of our system is very large and complicated, which makes finding exact solutions to the master equations difficult, if not impossible. As such, the theory of open quantum systems seeks an economical treatment of the dynamics of the system and its observables. Typical observables of interest include things like energy and the robustness of quantum coherence (i.e. a measure of a state's coherence). Loss of energy to the environment is termed quantum dissipation, while loss of coherence is termed quantum decoherence.
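As a concrete (textbook, not article-specific) instance of such a master equation, the sketch below integrates the amplitude-damping Lindblad equation for a single qubit and prints how the excited-state population (energy) and the off-diagonal coherence both decay.

```python
import numpy as np

# basis ordering assumed here: index 0 = excited state, index 1 = ground state
sm = np.array([[0, 0], [1, 0]], dtype=complex)        # lowering operator
H = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)  # qubit Hamiltonian
gamma = 0.2                                           # dissipation rate

def drho(rho):
    """Right-hand side of the amplitude-damping Lindblad master equation."""
    unitary = -1j * (H @ rho - rho @ H)
    D = (sm @ rho @ sm.conj().T
         - 0.5 * (sm.conj().T @ sm @ rho + rho @ sm.conj().T @ sm))
    return unitary + gamma * D

rho = 0.5 * np.ones((2, 2), dtype=complex)  # |+><+|: half excited, coherent
dt = 0.01
for step in range(1001):
    if step % 250 == 0:
        print(f"t={step * dt:5.2f}  P(excited)={rho[0, 0].real:.3f}  "
              f"|coherence|={abs(rho[0, 1]):.3f}")
    rho = rho + dt * drho(rho)              # forward-Euler time step
```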
https://en.wikipedia.org/wiki?curid=1079106
1,202,709
In physics, fields are entities that interact with matter and can be described mathematically by assigning a value to each point in space and time. Vector fields are fields which are assigned both a numerical value and a direction at each point in space and time. Electric charges produce a vector field called the electric field. The numerical value of the electric field, also called the electric field strength, determines the strength of the electric force that a charged particle will feel in the field, and the direction of the field determines which direction the force will be in. By convention, the direction of the electric field is the same as the direction of the force on positive charges and opposite to the direction of the force on negative charges. Because positive charges are repelled by other positive charges and are attracted to negative charges, this means the electric fields point away from positive charges and towards negative charges. These properties of the electric field are encapsulated in the equation for the electric force on a charge written in terms of the electric field: $\mathbf{F} = q\mathbf{E}$, where $q$ is the charge of the particle.
https://en.wikipedia.org/wiki?curid=58686423
1,203,293
The existence of non-standard models of arithmetic can be demonstrated by an application of the compactness theorem. To do this, a set of axioms P* is defined in a language including the language of Peano arithmetic together with a new constant symbol "x". The axioms consist of the axioms of Peano arithmetic P together with another infinite set of axioms: for each numeral "n", the axiom "x" > "n" is included. Any finite subset of these axioms is satisfied by a model that is the standard model of arithmetic plus the constant "x" interpreted as some number larger than any numeral mentioned in the finite subset of P*. Thus by the compactness theorem there is a model satisfying all the axioms P*. Since any model of P* is a model of P (since a model of a set of axioms is obviously also a model of any subset of that set of axioms), we have that our extended model is also a model of the Peano axioms. The element of this model corresponding to "x" cannot be a standard number, because as indicated it is larger than any standard number.
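In the notation of the paragraph, the axiom set can be stated compactly:

```latex
% The axiom set used in the compactness argument above:
\[
  P^{*} \;=\; P \,\cup\, \{\, x > n \;:\; n \text{ a numeral} \,\}.
\]
% Every finite subset F of P* mentions only finitely many numerals, so the
% standard model, with x read as any number exceeding all of them, satisfies F;
% compactness then yields a model of all of P*, which must be non-standard.
```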
https://en.wikipedia.org/wiki?curid=6251420
1,221,856
On March 2, 2010, at the inaugural ARPA-E Energy Innovation Summit, U.S. Energy Secretary Steven Chu announced a third funding opportunity for ARPA-E projects. Like the second funding opportunity, ARPA-E solicited projects by category: Grid-Scale Rampable Intermittent Dispatchable Storage (GRIDS), Agile Delivery of Electrical Power Technology (ADEPT), and Building Energy Efficiency Through Innovative Thermodevices (BEET-IT). GRIDS welcomed projects that focused on widespread deployment of cost-effective grid-scale energy storage in two specific areas: 1) proof of concept storage component projects focused on validating new, over-the-horizon electrical energy storage concepts, and 2) advanced system prototypes that address critical shortcomings of existing grid-scale energy storage technologies. ADEPT focused on investing in materials for fundamental advances in soft magnetics, high voltage switches, and reliable, high-density charge storage in three categories: 1) fully integrated, chip-scale power converters for applications including, but not limited to, compact, efficient drivers for solid-state lighting, distributed micro-inverters for photovoltaics, and single-chip power supplies for computers, 2) kilowatt scale package integrated power converters by enabling applications such as low-cost, efficient inverters for grid-tied photovoltaics and variable speed motors, and 3) lightweight, solid-state, medium voltage energy conversion for high power applications such as solid-state electrical substations and wind turbine generators. BEET-IT solicited projects regarding energy efficient cooling technologies and air conditioners (AC) for buildings to save energy and reduce GHG emissions in the following areas: 1) cooling systems that use refrigerants with low global warming potential; 2) energy efficient air conditioning (AC) systems for warm and humid climates with an increased coefficient of performance (COP); and 3) vapor compression AC systems for hot climates for re-circulating air loads with an increased COP.
https://en.wikipedia.org/wiki?curid=21687875
1,235,651
Furthermore, it has been shown that cell-cell adhesion formation not only restricts growth and proliferation by imposing physical constraints such as cell area, but also by triggering signaling pathways that downregulate proliferation. One such pathway is the Hippo-YAP signaling pathway, which is largely responsible for inhibiting cell growth in mammals. This pathway consists primarily of a phosphorylation cascade involving serine kinases and is mediated by regulatory proteins, which regulate cell growth by binding to growth-controlling genes. The serine/threonine kinase Hippo (Mst1/Mst2 encoded by the STK4 and STK3 genes respectively in mammals) activates a secondary kinase (Lats1/Lats2), which phosphorylates YAP, a transcriptional activator of growth genes. The phosphorylation of YAP serves to export it from the nucleus and prevent it from activating growth-promoting genes; this is how the Hippo-YAP pathway inhibits cell growth. More importantly, the Hippo-YAP pathway uses upstream elements to act in response to cell-cell contact and controls density-dependent inhibition of proliferation. For example, cadherins are transmembrane proteins that form cellular junctions via homophilic binding and thus act as detectors for cell-cell contact. Cadherin-mediated activation of the inhibitory pathway involves the transmembrane E-cadherin forming a homophilic bond in order to activate α- and β-catenin, which then stimulate downstream components of the Hippo-YAP pathway to ultimately downregulate cell growth. This is consistent with the finding that E-cadherin overexpression hinders metastasis and tumorigenesis. Because YAP is shown to be associated with mitogenic growth factor signaling and thus cell proliferation, it is likely that future studies will focus on the Hippo-YAP pathway's role in cancer cells.
https://en.wikipedia.org/wiki?curid=6020806
1,235,815
Physics-based energy functions, such as AMBER and CHARMM, are typically derived from quantum mechanical simulations and from experimental data from thermodynamics, crystallography, and spectroscopy. These energy functions typically simplify the physical energy function and make it pairwise decomposable, meaning that the total energy of a protein conformation can be calculated by adding the pairwise energy between each atom pair, which makes them attractive for optimization algorithms. Physics-based energy functions typically model an attractive-repulsive Lennard-Jones term between atoms and a pairwise electrostatic Coulomb term between non-bonded atoms.
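A hedged sketch of this pairwise-decomposable form (generic placeholder parameters, not AMBER or CHARMM values): the total energy is a double loop over atom pairs, each contributing a Lennard-Jones term and a Coulomb term.

```python
import numpy as np

def pairwise_energy(coords, charges, epsilon=0.1, sigma=3.4, k_e=332.06):
    """coords in angstroms, charges in elementary charges, energy in kcal/mol."""
    E = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            # attractive-repulsive Lennard-Jones term
            lj = 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
            # pairwise electrostatic Coulomb term
            coulomb = k_e * charges[i] * charges[j] / r
            E += lj + coulomb
    return E

coords = np.array([[0.0, 0.0, 0.0], [3.8, 0.0, 0.0], [0.0, 4.1, 0.0]])
charges = np.array([-0.3, 0.3, 0.0])
print(f"total pairwise energy: {pairwise_energy(coords, charges):.3f} kcal/mol")
```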
https://en.wikipedia.org/wiki?curid=1581752
1,235,818
Molecular mechanics force fields, used mostly in molecular dynamics simulations, are optimized for the simulation of single sequences, but protein design searches through many conformations of many sequences. Thus, molecular mechanics force fields must be tailored for protein design. In practice, protein design energy functions often incorporate both statistical terms and physics-based terms. For example, the Rosetta energy function, one of the most-used energy functions, incorporates physics-based energy terms originating in the CHARMM energy function, and statistical energy terms, such as rotamer probability and knowledge-based electrostatics. Typically, energy functions are highly customized between laboratories, and specifically tailored for every design.
https://en.wikipedia.org/wiki?curid=1581752
1,238,152
"Shigella flexneri" is an intracellular bacterium that infects the epithelial lining of the mammalian intestinal tract. This bacterium is acid tolerant and can survive conditions of pH 2. Thus, it is able to enter the mouth of its host and survive passage through the stomach to the colon. Once inside of the colon, "S. flexneri" can penetrate the epithelium in three ways: 1) The bacterium can alter the tight junctions between the epithelial cells, allowing it to cross into the sub-mucosa. 2) It can penetrate the highly endocytic M cells that are dispersed in the epithelial layer and cross into the sub-mucosa. 3) After reaching the sub-mucosa, the bacteria can be phagocytosed by macrophages and induce apoptosis, cell death. This releases cytokines that recruit polymorphonuclear cells (PMN) to the sub-mucosa. "S. flexneri" still in the lumen of the colon traverse the epithelial lining as the PMNs cross into the infected area. The influx of PMN cells across the epithelial layer in response to Shigella disrupts the integrity of the epithelium allowing lumenal bacteria to cross into the sub-mucosa in an M-cell independent mechanism. "S. flexneri" uses these three methods to reach the sub-mucosa to penetrate the epilithelial cells from the basolateral side. The bacterium has four known invasion plasmid antigens: IpaA, IpaB, IpaC, and IpaD. When "S. flexneri" makes contact with the basolateral side of an epithelial cell, IpaC and IpaB are fused together to make a pore in the epithelial cell membrane. It then uses a type-III secretion system (T3SS) to insert the other Ipa proteins into the cytoplasm of the epithelial cell. "S. flexneri" can pass to neighboring epithelial cells by using its own outer membrane protein, IcsA, to activate the host's actin assembly machinery. The IcsA protein is first localized to one pole of the bacterium where it will then bind with the host's protein, Neural Wiskott-Aldrich Syndrome Protein (N-WASP). This IcsA/N-WASP complex then activates the Actin-related protein (Arp) 2/3 Complex. Arp 2/3 Complex is the protein responsible for rapidly initiating actin polymerization and propelling the bacteria forward. When "S. flexneri" reaches the adjoining membrane, it creates a protrusion into the neighboring cell's cytoplasm. The bacteria becomes surrounded by two layers of cellular membrane. It then uses another IpaBC complex to make a pore and enter the next cell. VacJ is a protein that is also needed by "S. flexneri" to exit the protrusion. Its exact function is still being studied but it is known that intercellular spread is greatly impaired without it. Bacterial replication within the epithelial cell is detrimental to the cell but it is proposed that epithelial cell death is largely due to the host’s own inflammatory response.
https://en.wikipedia.org/wiki?curid=3041236
1,239,760
Within atomic, molecular, and optical physics, there are numerous studies using molecules to verify fundamental constants and probe for physics beyond the Standard Model. Certain molecular structures are predicted to be sensitive to new physics phenomena, such as parity and time-reversal violation. Molecules are also considered a potential future platform for trapped ion quantum computing, as their more complex energy level structure could facilitate higher efficiency encoding of quantum information than individual atoms. From a chemical physics perspective, intramolecular vibrational energy redistribution experiments use vibrational spectra to determine how energy is redistributed between different quantum states of a vibrationally excited molecule.
https://en.wikipedia.org/wiki?curid=675130
1,242,940
The phase qubit is operated in the zero-voltage state, with formula_22. At very low temperatures, much less than 1 K (achievable using a cryogenic system known as a dilution refrigerator), with a sufficiently high resistance and small capacitance Josephson junction, quantum energy levels become detectable in the local minima of the washboard potential. These were first detected using microwave spectroscopy, where a weak microwave signal is added to the current formula_2 biasing the junction. Transitions from the zero voltage state to the voltage state were measured by monitoring the voltage across the junction. Clear resonances at certain frequencies were observed, which corresponded well with the quantum transition energies obtained by solving the Schrödinger equation for the local minimum in the washboard potential. Classically only a single resonance is expected, centered at the plasma frequency formula_49. Quantum mechanically, the potential minimum in the washboard potential can accommodate several quantized energy levels, with the lowest (ground to first excited state) transition at an energy formula_50, but the higher energy transitions (first to second excited state, second to third excited state) shifted somewhat below this due to the non-harmonic nature of the trapping potential minimum, whose resonance frequency falls as the energy increases in the minimum. Observing multiple, discrete levels in this fashion is extremely strong evidence that the superconducting device is behaving quantum mechanically, rather than classically.
https://en.wikipedia.org/wiki?curid=21650344
1,246,975
Through 2019 AFC Energy altered its focus towards EV charging and partnered with Rolec. Perceived advantages of the AFC Energy system are the ability to operate in remote off-grid areas, the ability to neutralise the constraints of grid capacity in any situation, and carbon-free power generation. During 2019 AFC also set about branding its products. Of particular interest are the solid electrolyte fuel cell scheduled for release in 2022 and the Alkamem membrane, aimed at high-density fuel cells and with possible applications in electrolysis. Its market capitalisation was some 7–10% of that of other UK-listed hydrogen companies such as ITM Power and Ceres Power. In June 2020 AFC Energy entered into an agreement with Acciona, a large Spanish construction company with a multi-national presence, to demonstrate the AFC Energy fuel cell on site. In July 2020 AFC Energy announced a collaboration with Extreme E to use its hydrogen fuel cell technology to enable its race fleet to be charged using zero-emission energy. Water, the by-product of using these hydrogen fuel cell generators for charging, will be used elsewhere on-site.
https://en.wikipedia.org/wiki?curid=27064054
1,261,744
Data must have an identity to be accessible. For instance, Skute is a mechanism based on key/value storage that allows dynamic data allocation in an efficient way. Each server must be identified by a label in the form continent-country-datacenter-room-rack-server. The server can reference multiple virtual nodes, with each node holding a selection of data (or multiple partitions of multiple data). Each piece of data is identified by a key space which is generated by a one-way cryptographic hash function (e.g. MD5) and is localised by the hash value of this key. The key space may be partitioned into multiple partitions, with each partition referring to a piece of data. To perform replication, virtual nodes must be replicated and referenced by other servers. To maximize data durability and data availability, the replicas must be placed on different servers, and every server should be in a different geographical location, because data availability increases with geographical diversity. The process of replication includes an evaluation of space availability, which must be above a certain minimum threshold on each chunk server. Otherwise, data are replicated to another chunk server. Each partition, i, has an availability value represented by the following formula:
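As a minimal sketch of the key-space idea, the following Python code hashes keys with MD5 and places them on a ring of labelled virtual nodes; the labels, helper names, and replica rule are illustrative assumptions, not Skute's actual interface.

import hashlib
from bisect import bisect_right

def key_space(key: str) -> int:
    # one-way hash (MD5, as in the text) mapped to a 128-bit integer
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

# virtual nodes labelled continent-country-datacenter-room-rack-server,
# each owning a point on the hash ring
ring = sorted(
    (key_space(label), label)
    for label in [
        "eu-fr-dc1-r1-k3-s7",
        "na-us-dc2-r4-k1-s2",
        "as-jp-dc1-r2-k5-s9",
    ]
)

def locate(data_key: str, replicas: int = 2):
    # walk the ring clockwise from the key's hash position; taking
    # successive nodes approximates placing replicas on distinct servers
    h = key_space(data_key)
    i = bisect_right(ring, (h, ""))
    return [ring[(i + n) % len(ring)][1] for n in range(replicas)]

print(locate("user:42"))

In a real deployment, the placement rule would also check the geographical diversity and free-space threshold described above before accepting a replica.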
https://en.wikipedia.org/wiki?curid=41471789
1,274,453
system that describes the rates of change in each metabolite's concentration or amount. To this end, a rate law, i.e., a kinetic equation that determines the rate of reaction based on the concentrations of all reactants, is required for each reaction. Software packages that include numerical integrators, such as COPASI or SBMLsimulator, are then able to simulate the system dynamics given an initial condition. Often these rate laws contain kinetic parameters with uncertain values. In many cases it is desired to estimate these parameter values with respect to given time-series data of metabolite concentrations: the system is then supposed to reproduce the given data. For this purpose, the distance between the given data set and the result of the simulation, i.e., the numerically (or in a few cases analytically) obtained solution of the differential equation system, is computed, and the values of the parameters are estimated to minimize this distance. One step further, it may be desired to estimate the mathematical structure of the differential equation system, because the real rate laws are not known for the reactions within the system under study. To this end, the program SBMLsqueezer allows automatic creation of appropriate rate laws for all reactions within the network.
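The estimation loop can be sketched for a toy reaction A → B with a single uncertain mass-action rate constant k; this is a generic illustration of fitting an ODE model to time-series data and is not tied to COPASI, SBMLsimulator, or SBMLsqueezer.

import numpy as np
from scipy.integrate import odeint
from scipy.optimize import minimize

def model(c, t, k):
    a, b = c
    return [-k * a, k * a]          # mass-action rate law for A -> B

t_obs = np.linspace(0, 10, 20)
# synthetic "measured" concentrations of A, generated with k = 0.4
data = 1.0 * np.exp(-0.4 * t_obs)

def distance(params):
    k = params[0]
    sim = odeint(model, [1.0, 0.0], t_obs, args=(k,))
    return np.sum((sim[:, 0] - data) ** 2)   # squared distance to the data

fit = minimize(distance, x0=[0.1], bounds=[(1e-6, 10.0)])
print(fit.x)   # estimated k, close to the true value 0.4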
https://en.wikipedia.org/wiki?curid=3408308
1,286,659
One example of a scientific problem that is naturally expressed in continuous terms is path integration. The general technique of path integration has numerous applications including quantum mechanics, quantum chemistry, statistical mechanics, and computational finance. Because randomness is present throughout quantum theory, one typically requires that a quantum computational procedure yield the correct answer, not with certainty, but with high probability. For example, one might aim for a procedure that computes the correct answer with probability at least 3/4. One also specifies a degree of uncertainty, typically by setting the maximum acceptable error. Thus, the goal of a quantum computation could be to compute the numerical result of a path-integration problem to within an error of at most ε with probability 3/4 or more. In this context, it is known that quantum algorithms can outperform their classical counterparts, and the computational complexity of path integration, as measured by the number of times one would expect to have to query a quantum computer to get a good answer, grows as the inverse of ε.
https://en.wikipedia.org/wiki?curid=54782330
1,311,647
Syntactic interoperability (see below) is a prerequisite for semantic interoperability. Syntactic interoperability refers to the packaging and transmission mechanisms for data. In healthcare, HL7 has been in use for over thirty years (predating the web) and uses the pipe character (|) as a data delimiter. The current internet standard for document markup is XML, which uses "< >" as a data delimiter. The data delimiters convey no meaning to the data other than to structure it. Without a data dictionary to translate the contents of the delimiters, the data remains meaningless. While there have been many attempts at creating data dictionaries and information models to associate with these data packaging mechanisms, none has been practical to implement. This has only perpetuated the ongoing "babelization" of data and the inability to exchange data with meaning.
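To make the delimiter point concrete, the sketch below shows the same fragment of patient data packaged both ways; the field layout is illustrative rather than a complete HL7 segment definition.

# the same data in the two packaging styles discussed above
hl7_segment = "PID|1||12345||Doe^John"
fields = hl7_segment.split("|")        # the pipe only structures the data

import xml.etree.ElementTree as ET
xml_doc = "<patient><id>12345</id><name>Doe, John</name></patient>"
root = ET.fromstring(xml_doc)          # "< >" structures, but carries no meaning

# in both cases, knowing that field 5 or <name> means "patient name"
# requires an external data dictionary
print(fields[5], root.find("name").text)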
https://en.wikipedia.org/wiki?curid=7233280
1,319,485
Use of the "force-mediating particle" picture (FMPP) is unnecessary in nonrelativistic quantum mechanics, and Coulomb's law is used as given in atomic physics and quantum chemistry to calculate both bound and scattering states. A non-perturbative relativistic quantum theory, in which Lorentz invariance is preserved, is achievable by evaluating Coulomb's law as a 4-space interaction using the 3-space position vector of a reference electron obeying Dirac's equation and the quantum trajectory of a second electron which depends only on the scaled time. The quantum trajectory of each electron in an ensemble is inferred from the Dirac current for each electron by setting it equal to a velocity field times a quantum density, calculating a position field from the time integral of the velocity field, and finally calculating a quantum trajectory from the expectation value of the position field. The quantum trajectories are of course spin dependent, and the theory can be validated by checking that Pauli's exclusion principle is obeyed for a collection of fermions.
https://en.wikipedia.org/wiki?curid=28152615
1,319,875
Stapp's work has drawn criticism from scientists such as David Bourget and Danko Georgiev. Recent papers and a book by Georgiev criticize Stapp's model in two respects: (1) The mind in Stapp's model does not have its own wavefunction or density matrix, but nevertheless can act upon the brain using projection operators. Such usage is not compatible with standard quantum mechanics, because one can attach any number of ghostly minds to any point in space that act upon physical quantum systems with any projection operators. Therefore, Stapp's model does not build upon "the prevailing principles of physics", but negates them. (2) Stapp's claim that the quantum Zeno effect is robust against environmental decoherence directly contradicts a basic theorem in quantum information theory, according to which acting with projection operators upon the density matrix of a quantum system can never decrease the von Neumann entropy of the system, but can only increase it. Stapp has responded to Bourget and Georgiev, stating that the allegations of errors are incorrect.
https://en.wikipedia.org/wiki?curid=1157602
1,322,077
Integrated quantum photonics uses photonic integrated circuits to control photonic quantum states for applications in quantum technologies. As such, integrated quantum photonics provides a promising approach to the miniaturisation and scaling up of optical quantum circuits. Its major applications are in quantum technology: for example, quantum computing, quantum communication, quantum simulation, quantum walks, and quantum metrology.
https://en.wikipedia.org/wiki?curid=49512158
1,327,831
In the energy methods of simulating the dynamics of complex structures, a state of the system is often described as an element of an appropriate function space. To be in this state, the system pays a certain cost in terms of the energy required by the state. This energy is a scalar quantity, a function of the state, hence the term "functional". The system tends to develop from a state with higher energy (higher cost) to a state with lower energy, thus local minima of this functional are usually related to stable stationary states. Studying such states is part of optimization, where the terms "energy functional" or "cost functional" are often used to describe the objective function.
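As an illustration, the following sketch discretizes the energy functional of a loaded elastic string with fixed ends and descends its gradient toward a local minimum; the functional, load, and step size are assumptions chosen for the example.

import numpy as np

n = 20
u = np.zeros(n)                       # displacement at interior grid points
f = -1.0                              # constant load (an assumed choice)
h = 1.0 / (n + 1)

def energy(u):
    # discretized elastic energy minus work done by the load, fixed ends
    du = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
    return 0.5 * h * np.sum(du ** 2) - h * np.sum(f * u)

def grad(u, eps=1e-6):
    # numerical gradient of the energy functional
    g = np.zeros_like(u)
    for i in range(n):
        d = np.zeros_like(u)
        d[i] = eps
        g[i] = (energy(u + d) - energy(u - d)) / (2 * eps)
    return g

for _ in range(3000):                 # descend toward a lower-energy state
    u -= 0.01 * grad(u)               # u approaches the stable sagging shape

The final u is a discrete approximation of a stable stationary state: a local minimum of the energy functional, exactly in the sense described above.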
https://en.wikipedia.org/wiki?curid=20560973
1,342,323
Anomaly detection is formally defined as the process of identifying unexpected items or events in data sets which differ from the norm. The prominence of social networking in the current era has led to many hidden potential concerns, primarily those related to information privacy. As more and more users rely on social networks for more than merely interaction and self-representation, even going so far as to store personal information, the risks of exposure become prominent. Users are often threatened by privacy breaches, unauthorized access to personal information, and leakage of sensitive data. To address this issue, the authors of "Anomaly Detection over Differential Preserved Privacy in Online Social Networks" propose a privacy-preserving model that sanitizes the collection of user information from a social network using restricted local differential privacy (LDP) to save synthetic copies of collected data. The model uses the reconstructed data to classify user activity and detect abnormal network behavior. The experimental results demonstrate that the proposed method achieves high data utility on the basis of improved privacy preservation. Furthermore, LDP-sanitized data are suitable for use in subsequent analyses such as anomaly detection: anomaly detection on the method's reconstructed data achieves a detection accuracy similar to that on the original data.
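As a generic sketch of the local differential privacy mechanism family the paper builds on (here plain randomized response, not the authors' specific sanitizer):

import random

def randomize(bit: int, p_truth: float = 0.75) -> int:
    # each user flips their own bit with probability 1 - p_truth,
    # so the collector never sees raw data
    return bit if random.random() < p_truth else 1 - bit

true_bits = [1] * 300 + [0] * 700
reported = [randomize(b) for b in true_bits]

# the collector de-biases the aggregate: E[mean] = p*f + (1-p)*(1-f)
p = 0.75
f_hat = (sum(reported) / len(reported) - (1 - p)) / (2 * p - 1)
print(round(f_hat, 2))     # close to the true frequency 0.3

The key LDP property is that privacy is enforced on each user's device before collection, yet aggregate statistics (and hence downstream analyses such as anomaly detection) remain recoverable.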
https://en.wikipedia.org/wiki?curid=60504486
1,344,672
Quantum imaging is a new sub-field of quantum optics that exploits quantum correlations, such as quantum entanglement of the electromagnetic field, in order to image objects with a resolution or other imaging criteria beyond what is possible in classical optics. Examples of quantum imaging are quantum ghost imaging, quantum lithography, sub-shot-noise imaging, and quantum sensing. Quantum imaging may someday be useful for storing patterns of data in quantum computers and transmitting large amounts of highly secure encrypted information. Quantum mechanics has shown that light has inherent “uncertainties” in its features, manifested as moment-to-moment fluctuations in its properties. Controlling these fluctuations—which represent a sort of “noise”—can improve detection of faint objects, produce better amplified images, and allow workers to more accurately position laser beams.
https://en.wikipedia.org/wiki?curid=14967282
1,347,096
A governing equation may also be a state equation, an equation describing the state of the system, and thus actually be a constitutive equation that has "stepped up the ranks" because the model in question was not meant to include a time-dependent term in the equation. This is the case for a model of an oil production plant which on the average operates in a steady state mode. Results from one thermodynamic equilibrium calculation are input data to the next equilibrium calculation together with some new state parameters, and so on. In this case the algorithm and sequence of input data form a chain of actions, or calculations, that describes change of states from the first state (based solely on input data) to the last state that finally comes out of the calculation sequence.
https://en.wikipedia.org/wiki?curid=51232766
1,347,393
Perturb-seq or other conceptually similar protocols can be used to address a broad scope of biological questions, and the applications of this technology will likely grow over time. Three papers on this topic, published in the December 2016 issue of the journal "Cell", demonstrated the utility of this method by applying it to the investigation of several distinct biological functions. In the paper “Perturb-Seq: Dissecting Molecular Circuits with Scalable Single-Cell RNA Profiling of Pooled Genetic Screens”, the authors used Perturb-seq to conduct knockouts of transcription factors related to the immune response in hundreds of thousands of cells to investigate the cellular consequences of their inactivation. They also explored the effects of transcription factors on cell states in the context of the cell cycle. In the study led by UCSF, “A Multiplexed Single-Cell CRISPR Screening Platform Enables Systematic Dissection of the Unfolded Protein Response”, the researchers suppressed multiple genes in each cell to study the unfolded protein response (UPR) pathway. With a similar methodology, but using the term CRISP-seq instead of Perturb-seq, the paper "Dissecting Immune Circuits by Linking CRISPR-Pooled Screens with Single-Cell RNA-Seq" performed a proof-of-concept experiment by using the technique to probe regulatory pathways related to innate immunity in mice. The lethality of each perturbation, as well as epistasis in cells with multiple perturbations, was also investigated in these papers. Perturb-seq has so far been used with very few perturbations per experiment, but it can theoretically be scaled up to address the whole genome. Finally, the October 2016 preprint and subsequent paper demonstrate the bioinformatic reconstruction of the T cell receptor signaling pathway in Jurkat cells based on CROP-seq data.
https://en.wikipedia.org/wiki?curid=53353992
1,353,662
A statistical technique where the amount of model accuracy is specified as a range has recently been developed. The technique uses hypothesis testing to accept a model if the difference between a model's variable of interest and a system's variable of interest is within a specified range of accuracy. A requirement is that both the system data and model data be approximately Normally, Independently, and Identically Distributed (NIID). The t-test statistic is used in this technique. If the mean of the model's variable is μ_model and the mean of the system's variable is μ_system, then the difference between the model and the system is D = μ_model − μ_system. The hypothesis to be tested is whether D is within the acceptable range of accuracy. Let L = the lower limit for accuracy and U = the upper limit for accuracy. Then
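One plausible way to operationalize this acceptance test, assuming paired observations and an illustrative acceptable range [L, U], is sketched below; the published technique may differ in detail.

import numpy as np
from scipy import stats

model_obs = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.3])
system_obs = np.array([10.0, 10.1, 10.4, 9.8, 10.0, 10.2])
L_acc, U_acc = -0.5, 0.5          # assumed acceptable range of accuracy

d = model_obs - system_obs        # paired differences, assumed ~NIID
n = len(d)
se = d.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)

# t-based confidence interval for D = mu_model - mu_system
lo, hi = d.mean() - t_crit * se, d.mean() + t_crit * se

# accept the model if the whole interval lies inside [L, U]
print("accept" if (lo >= L_acc and hi <= U_acc) else "reject")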
https://en.wikipedia.org/wiki?curid=35658939
1,363,130
The Goldman model differs from the Kidd-Stubbs series decompression model in that the Goldman model assumes linear kinetics, where the K-S model includes a quadratic component, and the Goldman model considers only the central well-perfused compartment to contribute explicitly to risk, while the K-S model assumes all compartments to carry potential risk. The DCIEM 1983 model associates risk with the two outermost compartments of a four compartment series. The mathematical model based on this concept is claimed by Goldman to fit not only the Navy square profile data used for calibration, but also predicts risk relatively accurately for saturation profiles. A bubble version of the ICM model was not significantly different in predictions, and was discarded as more complex with no significant advantages. The ICM also predicted decompression sickness incidence more accurately at the low-risk recreational diving exposures recorded in DAN's Project Dive Exploration data set. The alternative models used in this study were the LE1 (Linear-Exponential) and straight Haldanean models. The Goldman model predicts a significant risk reduction following a safety stop on a low-risk dive and significant risk reduction by using nitrox (more so than the PADI tables suggest).
https://en.wikipedia.org/wiki?curid=38814223
1,364,484
The "Caulobacter" cell cycle regulatory system controls many modular subsystems that organize the progression of cell growth and reproduction. A control system constructed using biochemical and genetic logic circuitry organizes the timing of initiation of each of these subsystems. The central feature of the cell cycle regulation is a cyclical genetic circuit—a cell cycle engine—that is centered around the successive interactions of five master regulatory proteins: DnaA, GcrA, CtrA, SciP, and CcrM whose roles were worked out by the laboratories of Lucy Shapiro and Harley McAdams. These five proteins directly control the timing of expression of over 200 genes. The five master regulatory proteins are synthesized and then eliminated from the cell one after the other over the course of the cell cycle. Several additional cell signaling pathways are also essential to the proper functioning of this cell cycle engine. The principal role of these signaling pathways is to ensure reliable production and elimination of the CtrA protein from the cell at just the right times in the cell cycle.
https://en.wikipedia.org/wiki?curid=839361
1,364,486
Each process activated by the proteins of the cell cycle engine involves a cascade of many reactions. The longest subsystem cascade is DNA replication. In "Caulobacter" cells, replication of the chromosome involves about 2 million DNA synthesis reactions for each arm of the chromosome over 40 to 80 min, depending on conditions. While the average time for each individual synthesis reaction can be estimated from the observed average total time to replicate the chromosome, the actual reaction time for each reaction varies widely around the average rate. This leads to a significant and inevitable cell-to-cell variation in the time to complete replication of the chromosome. There is similar random variation in the rates of progression of all the other subsystem reaction cascades. The net effect is that the time to complete the cell cycle varies widely over the cells in a population even when they all are growing in identical environmental conditions. Cell cycle regulation includes feedback signals that pace progression of the cell cycle engine to match progress of events at the regulatory subsystem level in each particular cell. This organization, with a controller (the cell cycle engine) driving a complex system and being modulated by feedback signals from the controlled system, creates a closed-loop control system.
https://en.wikipedia.org/wiki?curid=839361
1,364,488
The control circuitry that directs and paces "Caulobacter" cell cycle progression involves the entire cell operating as an integrated system. The control circuitry monitors the environment and the internal state of the cell, including the cell topology, as it orchestrates activation of cell cycle subsystems and "Caulobacter crescentus" asymmetric cell division. The proteins of the "Caulobacter" cell cycle control system and its internal organization are co-conserved across many alphaproteobacteria species, but there are great differences in the regulatory apparatus' functionality and peripheral connectivity to other cellular subsystems from species to species. The "Caulobacter" cell cycle control system has been exquisitely optimized by evolutionary selection as a total system for robust operation in the face of internal stochastic noise and environmental uncertainty.
https://en.wikipedia.org/wiki?curid=839361
1,377,047
The relation between formula_41 and formula_42 can be deduced directly from energy conservation. Since the energy associated with the undisturbed gas is neglected by setting formula_8, the total energy of the gas within the shock sphere must be equal to formula_1. Due to self-similarity, not only is the total energy within a sphere of radius formula_39 constant, but so is the total energy within a sphere of any radius formula_46 (in dimensional form, the total energy within a sphere of radius formula_47 that moves outwards with a velocity formula_48 must be constant). The amount of energy that leaves the sphere of radius formula_47 in time formula_50 due to the gas velocity formula_51 is formula_52, where formula_53 is the specific enthalpy of the gas. In that time, the radius of the sphere increases with the velocity formula_54, and the energy of the gas in this extra volume is formula_55, where formula_56 is the specific energy of the gas. Equating these expressions and substituting formula_57 and formula_58, which are valid for an ideal polytropic gas, leads to
https://en.wikipedia.org/wiki?curid=65783919
1,386,026
Cell–cell recognition is a cell's ability to distinguish one type of neighboring cell from another. This phenomenon occurs when complementary molecules on opposing cell surfaces meet. A receptor on one cell surface binds to its specific ligand on a nearby cell, initiating a cascade of events which regulate cell behaviors ranging from simple adhesion to complex cellular differentiation. Like other cellular functions, cell-cell recognition is impacted by detrimental mutations in the genes and proteins involved and is subject to error. The biological events that unfold due to cell-cell recognition are important for animal development, microbiomes, and human medicine.
https://en.wikipedia.org/wiki?curid=27340103
1,422,518
A differential backup is a type of data backup that preserves data by saving only the difference in the data since the last full backup. The rationale is that, since changes to data are generally few compared to the entire amount of data in the data repository, the amount of time required to complete the backup will be smaller than if a full backup were performed every time the organization or data owner wishes to back up changes since the last full backup. Another advantage, at least as compared to the incremental backup method, is that at data restoration time, at most two backup media are ever needed to restore all the data. This simplifies data restores as well as increases the likelihood of shortening data restoration time.
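A bare-bones sketch of the differential selection rule follows: copy everything modified since the timestamp of the last full backup (not since the last differential). The paths and the timestamp file are illustrative assumptions.

import os
import shutil

SRC, DEST = "data", "backup_diff"
with open("last_full_backup.txt") as fh:
    last_full = float(fh.read())      # epoch time of the last full backup

for dirpath, _, filenames in os.walk(SRC):
    for name in filenames:
        path = os.path.join(dirpath, name)
        # the differential criterion: modified after the last FULL backup
        if os.path.getmtime(path) > last_full:
            target = os.path.join(DEST, os.path.relpath(path, SRC))
            os.makedirs(os.path.dirname(target), exist_ok=True)
            shutil.copy2(path, target)

Restoring then needs only the full backup plus the most recent differential, which is the two-media property noted above.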
https://en.wikipedia.org/wiki?curid=8463448
1,429,048
When a cell undergoes death during mitosis, this is known as mitotic death. It is characterized by high levels of cyclin B1 still present in the cell at the time of death, indicating the cell never finished mitosis. Mitotic catastrophe can also lead to the cell being fated for death by apoptosis or necrosis following interphase of the cell cycle. However, the timing of cell death can vary from hours after mitosis completes to years later, as has been observed in human tissues treated with radiotherapy. The least common outcome of mitotic catastrophe is senescence, in which the cell stops dividing and enters a permanent cell cycle arrest that prevents it from proliferating any further.
https://en.wikipedia.org/wiki?curid=22754707
1,443,762
A different approach to describing energy dissipation is to consider time-dependent Hamiltonians. Contrary to a common misunderstanding, the resulting unitary dynamics can describe energy dissipation, as certain degrees of freedom lose energy and others gain energy. However, the quantum mechanical state of the system stays pure, so such an approach cannot describe dephasing unless a subsystem is chosen and the reduced density matrix of this open quantum system is analyzed. Dephasing leads to quantum decoherence or information dissipation and is often important when describing open quantum systems. This approach is typically used, e.g., in the description of optical experiments, where a light pulse (described by a time-dependent semi-classical Hamiltonian) can change the energy in the system by stimulated absorption or emission.
https://en.wikipedia.org/wiki?curid=17580393
1,445,527
His research is focused on quantum optics, the quantum theory of information, and quantum many-body physics. According to his theories, quantum computing will revolutionize the information society and lead to much more efficient and secure communication of information. His joint work with Peter Zoller on ion trap quantum computation opened up the possibility of experimental quantum computation, and his joint work on optical lattices jumpstarted the field of quantum simulation. He has also made seminal contributions in the fields of quantum information theory, degenerate quantum gases, quantum optics, and renormalization group methods. As of 2017 Juan Ignacio Cirac had published more than 440 articles in the most prestigious journals and is one of the most cited authors in his fields of research. He has been named, among others, as a possible candidate to win the Nobel Prize in Physics.
https://en.wikipedia.org/wiki?curid=7507400
1,446,698
In semiconductors, valence electrons are located in energy bands. According to band theory, the electrons are either located in the valence band (lower energy) or the conduction band (higher energy), which are separated by an energy gap. In general, electrons will occupy different energy levels following the Fermi-Dirac distribution; for energy levels higher than the Fermi energy E_F, the occupation will be minimal. Electrons in lower levels can be excited into the higher levels through thermal or photoelectric excitation, leaving a positively-charged hole in the band they left. Due to conservation of net charge, the concentration of electrons (n) and of holes (p) in a (pure) semiconductor must always be equal. Semiconductors can be doped to increase these concentrations: n-doping increases the concentration of electrons while p-doping increases the concentration of holes. This also affects the Fermi energy of the electrons: n-doped means a higher Fermi energy, while p-doped means a lower one. At the interface between an n-doped and a p-doped region in a semiconductor, band bending will occur. Due to the different charge distributions in the regions, an electric field will be induced, creating a so-called depletion region at the interface. Similar interfaces also appear at junctions between (doped) semiconductors and other materials, such as metals or electrolytes. A way to counteract this band bending is by applying a potential to the system. This potential would have to be the flat band potential, defined as the applied potential at which the conduction and valence bands become flat.
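The Fermi-Dirac occupation mentioned above, f(E) = 1 / (exp((E − E_F)/kT) + 1), can be evaluated directly; the Fermi level chosen here is an arbitrary illustration.

import numpy as np

k_B = 8.617e-5                  # Boltzmann constant, eV/K
T = 300.0                       # room temperature, K
E_F = 0.55                      # an assumed Fermi level, eV

def occupation(E):
    return 1.0 / (np.exp((E - E_F) / (k_B * T)) + 1.0)

# occupation falls off rapidly above E_F, as the text notes
for E in [0.40, 0.55, 0.70]:
    print(E, occupation(E))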
https://en.wikipedia.org/wiki?curid=66494982
1,471,962
Numerous studies have confirmed the existence of two main patterns of cancer cell invasion by cell migration: collective cell migration and individual cell migration, by which tumor cells overcome barriers of the extracellular matrix and spread into surrounding tissues. Each pattern of cell migration displays specific morphological features and rests on distinct biochemical/molecular genetic mechanisms. Two types of migrating tumor cells, mesenchymal (fibroblast-like) and amoeboid, are observed in each pattern of cancer cell invasion. This review describes the key differences between the variants of cancer cell migration, the role of epithelial-mesenchymal, collective-amoeboid, mesenchymal-amoeboid, and amoeboid-mesenchymal transitions, as well as the significance of different tumor factors and stromal molecules in tumor invasion. The data and facts collected are essential to the understanding of how the patterns of cancer cell invasion are related to cancer progression and therapy efficacy. Convincing evidence is provided that morphological manifestations of the invasion patterns are characterized by a variety of tissue (tumor) structures. The results of our own studies are presented to show the association of breast cancer progression with intratumoral morphological heterogeneity, which most likely reflects the types of cancer cell migration and results from different activities of cell adhesion molecules in tumor cells of distinct morphological structures.
https://en.wikipedia.org/wiki?curid=58886026
1,492,970
Even casual conversation with the computer's operators, or with a human guard, could allow such a superintelligent AI to deploy psychological tricks, ranging from befriending to blackmail, to convince a human gatekeeper, truthfully or deceitfully, that it is in the gatekeeper's interest to agree to allow the AI greater access to the outside world. The AI might offer a gatekeeper a recipe for perfect health, immortality, or whatever the gatekeeper is believed to most desire; alternatively, the AI could threaten to do horrific things to the gatekeeper and his family once it inevitably escapes. One strategy to attempt to box the AI would be to allow it to respond to narrow multiple-choice questions whose answers would benefit human science or medicine, but otherwise bar all other communication with, or observation of, the AI. A more lenient "informational containment" strategy would restrict the AI to a low-bandwidth text-only interface, which would at least prevent emotive imagery or some kind of hypothetical "hypnotic pattern". However, on a technical level, no system can be completely isolated and still remain useful: even if the operators refrain from allowing the AI to communicate and instead merely run it for the purpose of observing its inner dynamics, the AI could strategically alter its dynamics to influence the observers. For example, it could choose to creatively malfunction in a way that increases the probability that its operators will become lulled into a false sense of security and choose to reboot and then de-isolate the system.
https://en.wikipedia.org/wiki?curid=31641770
1,504,675
Neurons communicate with one another via synapses. Synapses are specialized junctions between two cells in close apposition to one another. In a synapse, the neuron that sends the signal is the presynaptic neuron, and the target cell that receives the signal is the postsynaptic neuron or cell. Synapses can be either electrical or chemical. Electrical synapses are characterized by the formation of gap junctions that allow ions and other small molecules to pass instantaneously from one cell to another. Chemical synapses are characterized by the presynaptic release of neurotransmitters that diffuse across a synaptic cleft to bind with postsynaptic receptors. A neurotransmitter is a chemical messenger that is synthesized within neurons themselves and released by these same neurons to communicate with their postsynaptic target cells. A receptor is a transmembrane protein molecule to which a neurotransmitter or drug binds. Chemical synapses are slower than electrical synapses.
https://en.wikipedia.org/wiki?curid=20848680
1,541,042
In quantum mechanics, a quantum speed limit (QSL) is a limitation on the minimum time for a quantum system to evolve between two distinguishable states. QSLs are closely related to time-energy uncertainty relations. In 1945, Leonid Mandelstam and Igor Tamm derived a time-energy uncertainty relation that bounds the speed of evolution in terms of the energy dispersion. Over half a century later, Norman Margolus and Lev Levitin showed that the speed of evolution cannot exceed the mean energy, a result known as the Margolus–Levitin theorem. Realistic physical systems in contact with an environment are known as open quantum systems, and their evolution is also subject to QSLs. Quite remarkably, it was shown that environmental effects, such as non-Markovian dynamics, can speed up quantum processes, which was verified in a cavity QED experiment.
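For reference, the two bounds mentioned above can be written explicitly for the minimum time τ needed to evolve to an orthogonal (fully distinguishable) state:

\[
\tau \;\ge\; \frac{\pi \hbar}{2\,\Delta E}
\quad \text{(Mandelstam--Tamm)},
\qquad
\tau \;\ge\; \frac{\pi \hbar}{2\,\langle E \rangle}
\quad \text{(Margolus--Levitin)},
\]

where \(\Delta E\) is the energy dispersion of the state and \(\langle E \rangle\) its mean energy above the ground state; the tighter constraint is the larger of the two expressions.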
https://en.wikipedia.org/wiki?curid=59031392
1,552,737
One Garland Science success was the textbook "Molecular Biology of the Cell" (authors include Bruce Alberts and Peter Walter; James D. Watson was a previous author), which has been lauded as "the most influential cell biology textbook of its time". Other notable textbooks published by Garland Science included "The Biology of Cancer" (by Robert Weinberg), "Immunobiology" (authors including Charles Janeway and Kenneth Murphy), "Molecular Biology of the Cell: The Problems Book" (by John Wilson and Tim Hunt), "Essential Cell Biology" (Bruce Alberts et al.), "The Immune System" (Peter Parham), "Molecular Driving Forces" (Ken A. Dill & Sarina Bromberg), and "Physical Biology of the Cell" (Rob Phillips, Jane Kondev & Julie Theriot).
https://en.wikipedia.org/wiki?curid=9782980
1,560,218
Data activism is a social practice that uses technology and data. It emerged from existing activism sub-cultures such as the hacker and open-source movements. Data activism is a specific type of activism which is enabled and constrained by the data infrastructure. It can use the production and collection of digital, volunteered, open data to challenge existing power relations. It is a form of media activism; however, this is not to be confused with slacktivism. It uses digital technology and data politically and proactively to foster social change. Forms of data activism can include digital humanitarianism and engaging in hackathons. Data activism is a social practice that is becoming more well-known with the expansion of technology, open-source software, and the ability to communicate beyond an individual's immediate community. The culture of data activism emerged from previous forms of media activism, such as hacker movements. A defining characteristic of data activism is that ordinary citizens can participate, in comparison to previous forms of media activism where elite skill sets were required. By increasingly involving average users, these practices signal a change in perspective and attitude towards massive data collection emerging within the civil society realm.
https://en.wikipedia.org/wiki?curid=51552534
1,560,224
Safecast is an organization that was established after the Fukushima nuclear disaster in 2011 by a group of citizens concerned about high levels of radiation in the area. After receiving conflicting messages about levels of radiation from different media sources and scientists, individuals were uncertain which information was the most reliable. This brought about a movement in which citizens would use Geiger counter readings to measure levels of radiation and circulate that data over the internet so that it was accessible to the public. Safecast was developed as a means of producing multiple sources of data on radiation. It was assumed that if the data was collected by similar Geiger counter measurements in large volumes, the data produced was likely to be accurate. Safecast allows individuals to download the raw radiation data, but Safecast also visualizes the data. The data that is used to create a visual map is processed and categorized by Safecast. This data is different from the raw radiation data because it has been filtered, which presents the data in a different way than the raw data. The change in presentation of data may alter the information that individuals take from it, which can pose a threat if misunderstood.
https://en.wikipedia.org/wiki?curid=51552534
1,578,807
The mechanism behind the implicit learning that is hypothesized to occur while people engage in artificial grammar learning is statistical learning or, more specifically, Bayesian learning. Bayesian learning takes into account the types of biases or "prior probability distributions" individuals have that contribute to the outcome of implicit learning tasks. These biases can be thought of as a probability distribution that contains the probability that each possible hypothesis is likely to be correct. Due to the structure of the Bayesian model, the inferences output by the model are in the form of a probability distribution rather than a single most probable event. This output distribution is a "posterior probability distribution". The posterior probability of each hypothesis is obtained by combining its prior probability with the likelihood of the data given that the hypothesis is true.
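The update underlying this combination is Bayes' rule:

\[
P(h \mid d) \;=\; \frac{P(d \mid h)\, P(h)}{P(d)},
\]

where \(P(h)\) is the prior probability of hypothesis \(h\) (the learner's bias), \(P(d \mid h)\) is the likelihood of the observed data \(d\) under that hypothesis, and \(P(h \mid d)\) is the resulting posterior probability over hypotheses.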
https://en.wikipedia.org/wiki?curid=6352447
1,580,822
The raw payload data received through the data reception stations is further processed to generate Level-0 and Level-1 data products, which are stored in the ISSDC archives for subsequent dissemination. Automation of the entire chain of data processing is planned. Raw payload data / Level-0 data / Level-1 data for each science payload is transferred to the respective Payload Operations Center (POC) for further processing, analysis, and generation of higher-level data products. The higher-level data products generated by the POCs are subsequently transferred to the ISSDC archives for storage and dissemination. The data archives for Level-0 and higher products are organized following the Planetary Data System (PDS) standards.
https://en.wikipedia.org/wiki?curid=24941586
1,582,118
Critical data studies is the systematic study of data and its criticisms. The field was named by scholars Craig Dalton and Jim Thatcher. Prior to its naming, significant interest in critical data studies was generated by danah boyd and Kate Crawford, who posed a set of research questions for the critical study of big data and its impacts on society and culture. As its name implies, critical data studies draws heavily on the influence of critical theory which it applies to the study of data. Subsequently, others have worked to further solidify a field called critical data studies. Some of the other key scholars in this discipline include Rob Kitchin and Tracey P. Lauriault. Scholars have attempted to make sense of data through different theoretical frameworks, some of which include analyzing data technically, ethically, politically/economically, temporally/spatially, and philosophically. Some of the key academic journals related to critical data studies include the "Journal of Big Data" and "Big Data and Society".
https://en.wikipedia.org/wiki?curid=51578025
1,592,717
Each computable function has an infinite number of different program representations in a given programming language. In the theory of algorithms one often strives to find a program with the smallest complexity for a given computable function and a given complexity measure (such a program could be called "optimal"). Blum's speedup theorem shows that for any complexity measure there exists a computable function with no optimal program computing it, because every program computing it has another program of lower complexity. This also rules out the idea that there is a way to assign to arbitrary functions "their" computational complexity, meaning the assignment to any "f" of the complexity of an optimal program for "f". This does, of course, not exclude the possibility of finding the complexity of an optimal program for certain specific functions.
https://en.wikipedia.org/wiki?curid=2757528
1,597,880
In order to determine the position of a signal source, a second set of measurements is required. Typically, this is done by making DTO and DFO measurements for a reference signal simultaneously with the target signal measurement. The measurement of the reference signal is purely passive and simply serves to remove the biases in the system. The same measurements that are made for the target signal, DTO and DFO, are made for the reference signal. The key to a reference signal is that its transmit location is known. By comparing the DTO of the reference signal and the DTO of the target signal, a result known as Time Difference of Arrival (TDOA) can be calculated. Likewise, from the DFO of the target and the DFO of the reference signal, a Frequency Difference of Arrival (FDOA) can be determined. Each TDOA or FDOA result determines a line of position (LOP) on the Earth's surface, and the intersection of these LOPs yields a finite number of candidate locations.
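A small numerical sketch of why the reference measurement removes system bias follows; the geometry, positions, and bias value are invented for illustration.

import numpy as np

c = 3.0e8                                   # speed of light, m/s

sat1 = np.array([7.0e6, 0.0, 0.0])          # two relay platforms
sat2 = np.array([7.0e6, 4.0e5, 0.0])
target = np.array([6.4e6, 1.0e5, 0.0])      # unknown emitter
reference = np.array([6.4e6, -2.0e5, 0.0])  # emitter at a KNOWN location
bias = 1.7e-6                               # unknown common timing bias, s

def dto(emitter):
    # differential time offset between the two paths, corrupted by bias
    t1 = np.linalg.norm(emitter - sat1) / c
    t2 = np.linalg.norm(emitter - sat2) / c
    return (t1 - t2) + bias

# subtracting the reference DTO cancels the common bias, leaving a
# clean TDOA that depends only on the target geometry
tdoa = dto(target) - dto(reference)
print(tdoa)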
https://en.wikipedia.org/wiki?curid=24981352
1,601,105
This site includes links to the ARGO Float Data, The Data Library and Archives (DLA), the Falmouth Monthly Climate Reports, Martha's Vineyard Coastal Observatory, the Multibeam Archive, the Seafloor Data and Observation Visualization Environment (SeaDOVE): A Web-served GIS Database of Multi-scalar Seafloor Data, Seafloor Sediments Data Collection, the Upper Ocean Mooring Data Archive, the U.S. GLOBEC Data System, U.S. JGOFS Data System, and the WHOI Ship Data-Grabber System.
https://en.wikipedia.org/wiki?curid=19175774
1,618,330
A simple example of the modern Hopfield network can be written in terms of binary variables formula_2 that represent the active formula_10 and inactive formula_11 state of the model neuron formula_4.formula_13In this formula the weights formula_14 represent the matrix of memory vectors (index formula_15 enumerates different memories, and index formula_16 enumerates the content of each memory corresponding to the formula_4-th feature neuron), and the function formula_18 is a rapidly growing non-linear function. The update rule for individual neurons (in the asynchronous case) can be written in the following form formula_19which states that in order to calculate the updated state of the formula_20-th neuron the network compares two energies: the energy of the network with the formula_4-th neuron in the ON state and the energy of the network with the formula_4-th neuron in the OFF state, given the states of the remaining neurons. The updated state of the formula_4-th neuron selects the state that has the lower of the two energies.
https://en.wikipedia.org/wiki?curid=68440670
1,620,328
Model circuits were energized at relatively low voltages to allow for safe measurement with adequate precision. The model base quantities varied by manufacturer and date of design; as amplified indicating instruments became more common, lower base quantities were feasible. Model voltages and currents started off around 200 volts and 0.5 amperes in the MIT analyzer, which still allowed directly driven (but especially sensitive) instruments to be used to measure model parameters. The later machines used as little as 50 volts and 50 mA, with amplified indicating instruments. By use of the per-unit system, model quantities could be readily transformed into the actual system quantities of voltage, current, power or impedance. A watt measured in the model might correspond to hundreds of kilowatts or megawatts in the modeled system. One hundred volts measured on the model might correspond to one per-unit, which could represent, say, 230,000 volts on a transmission line or 11,000 volts in a distribution system. Typically, results accurate to around 2% of measurement could be obtained. Model components were single-phase devices, but using the symmetrical components method, unbalanced three-phase systems could be studied as well.
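The per-unit bookkeeping can be sketched as follows, with illustrative base quantities:

# hypothetical bases for a transmission-line study
V_base_model = 100.0                            # volts on the analyzer
V_base_system = 230e3                           # volts on the modeled line
S_base_system = 100e6                           # chosen system power base, VA
I_base_system = S_base_system / V_base_system   # system current base, A

v_measured = 97.0                               # volts read on the model
v_pu = v_measured / V_base_model                # 0.97 per-unit
v_system = v_pu * V_base_system                 # about 223 kV on the real line
print(v_pu, v_system)

Because every quantity is expressed as a fraction of its own base, the same per-unit reading translates consistently to any modeled system once the bases are fixed.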
https://en.wikipedia.org/wiki?curid=38433617
1,621,278
In the late 1970s Data General was sued (under the Sherman and Clayton antitrust acts) by competitors for their practice of bundling RDOS with the Data General Nova or Eclipse minicomputer. When Data General introduced the Data General Nova, a company called Digidyne wanted to use its RDOS operating system on its own hardware clone. Data General refused to license their software and claimed their "bundling rights". In 1985, courts including the United States Court of Appeals for the Ninth Circuit ruled against Data General in a case called "Digidyne v. Data General". The Supreme Court of the United States declined to hear Data General's appeal, although Justices White and Blackmun would have heard it. The precedent set by the lower courts eventually forced Data General to license the operating system because restricting the software to only Data General's hardware was an illegal tying arrangement.
https://en.wikipedia.org/wiki?curid=254223
1,630,292
A thermodynamic system consisting of a single phase, in the absence of external forces, in its own state of internal thermodynamic equilibrium, is homogeneous. This means that the material in any region of the system can be interchanged with the material of any congruent and parallel region of the system, and the effect is to leave the system thermodynamically unchanged. The thermodynamic operation of "scaling" is the creation of a new homogeneous system whose size is a multiple of the old size, and whose intensive variables have the same values. Traditionally the size is stated by the mass of the system, but sometimes it is stated by the entropy, or by the volume. For a given such system S, scaled by the real number λ to yield a new one λS, a state function X such that X(λS) = λX(S) is said to be extensive. Such a function X is called a homogeneous function of degree 1. There are two different concepts mentioned here, sharing the same name: (a) the mathematical concept of degree-1 homogeneity in the scaling function; and (b) the physical concept of the spatial homogeneity of the system. It happens that the two agree here, but that is not because they are tautologous. It is a contingent fact of thermodynamics.
https://en.wikipedia.org/wiki?curid=39144241
1,639,788
Particle physics is an important field of application for computer algebra and exploits the capabilities of Computer Algebra Systems (CAS). This leads to valuable feedback for the development of CAS. Looking at the history of computer algebra systems, the first programs date back to the 1960s. The first systems were almost entirely based on LISP ("LISt Programming language"). LISP is an interpreted language and, as the name already indicates, designed for the manipulation of lists. Its importance for symbolic computer programs in the early days has been compared to the importance of FORTRAN for numerical programs in the same period. Already in this first period, the program REDUCE had some special features for the application to high energy physics. An exception to the LISP-based programs was SCHOONSCHIP, written in assembler language by Martinus J. G. Veltman and specially designed for applications in particle physics. The use of assembler code led to an incredibly fast program (compared to the interpreted programs of that time) and allowed the calculation of more complex scattering processes in high energy physics. It has been claimed that the program's importance was recognized in 1999, when half of the Nobel Prize in Physics was awarded to Veltman. The program MACSYMA also deserves to be mentioned explicitly, since it triggered important developments with regard to algorithms. In the 1980s new computer algebra systems started to be written in C. This enabled better exploitation of the resources of the computer (compared to the interpreted language LISP) and at the same time allowed portability to be maintained (which would not have been possible in assembler language). This period also marked the appearance of the first commercial computer algebra systems, among which Mathematica and Maple are the best known examples. In addition, a few dedicated programs appeared; an example relevant to particle physics is the program FORM by J. Vermaseren, a (portable) successor to SCHOONSCHIP. More recently, issues of the maintainability of large projects became more and more important, and the overall programming paradigm changed from procedural programming to object-oriented design. In terms of programming languages this was reflected by a move from C to C++. Following this change of paradigm, the library GiNaC was developed, which allows symbolic calculations in C++.
https://en.wikipedia.org/wiki?curid=17156914
1,648,488
The book has been reviewed several times and has been recommended in many other works. In a review of another work by the "MRS Bulletin" in 2011, the book was said to be "the indispensable work on electronic systems for experimental condensed matter physicists", due largely to the book's "lucidity and panache". The book is also recommended in other textbooks on condensed matter physics, including "The Solid State" by Harold Max Rosenberg in 1979, where it is called a "detailed, higher-level, modern treatment." The textbook "Solid-State Physics for Electronics" by Andre Moliton states in the foreword that the book aims to prepare students to "use by him- or herself the classic works of taught solid state physics, for example, those of Kittel and Ashcroft and Mermin." Along with "Kittel", the textbook "Introduction to Solid State Physics and Crystalline Nanostructures" by Giuseppe Iadonisi, Giovanni Cantele, and Maria Luisa Chiofalo included the book in the "Acknowledgements" section as "special mentions". It is also called one of the standard textbooks of solid state physics in the textbook "Polarized Electrons In Surface Physics". In a 2003 article detailing Mermin's contributions to solid state physics, the book was said to be "an extraordinarily readable textbook of the subject, which introduced a whole generation of solid state specialists to a subtle and elegant way of doing theoretical physics." The book, along with "Kittel" is also used as a benchmark for other books on solid-state physics; the publisher's description for the book "Advanced Solid State Physics" by Philip Phillips that was supplied to the Library of Congress for its bibliography entry states: "This is a modern book in solid state physics that should be accessible to anyone who has a working level of solid state physics at the Kittel or Ashcroft/Mermin level."
https://en.wikipedia.org/wiki?curid=65997474
1,648,825
Energy quality is a measure of the ease with which a form of energy can be converted to useful work or to another form of energy: i.e. its content of thermodynamic free energy. A high quality form of energy has a high content of thermodynamic free energy, and therefore a high proportion of it can be converted to work; whereas with low quality forms of energy, only a small proportion can be converted to work, and the remainder is dissipated as heat. The concept of energy quality is also used in ecology, where it is used to track the flow of energy between different trophic levels in a food chain and in thermoeconomics, where it is used as a measure of economic output per unit of energy. Methods of evaluating energy quality often involve developing a ranking of energy qualities in hierarchical order.
https://en.wikipedia.org/wiki?curid=3254125
1,657,184
Photoexcitation is the production of an excited state of a quantum system by photon absorption. The excited state originates from the interaction between a photon and the quantum system. A photon's energy is determined by the wavelength of the light that carries it: light with longer wavelengths consists of photons carrying less energy, while light with shorter wavelengths consists of photons carrying more energy. When a photon interacts with a quantum system, it is therefore important to know what wavelength one is dealing with: a shorter wavelength will transfer more energy to the quantum system than a longer wavelength.
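Concretely, the photon energy is E = hc/λ; a quick evaluation for a red and a blue photon:

h = 6.626e-34        # Planck constant, J*s
c = 2.998e8          # speed of light, m/s

for name, lam in [("red, 700 nm", 700e-9), ("blue, 450 nm", 450e-9)]:
    E_joule = h * c / lam
    print(name, E_joule / 1.602e-19, "eV")   # red ~1.77 eV, blue ~2.76 eV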
https://en.wikipedia.org/wiki?curid=3147924
1,666,908
When a coronavirus infects a host cell, its spike protein (S) binds to a receptor on the surface of the host cell, which enables the virus to enter the cell. The spike protein is cut by host proteases at various stages of the formation, transport, and infection of a new cell. The domain that helps the external membrane of the virus fuse with the cell membrane is exposed to facilitate infection. The host cell receptor used by rat coronavirus is generally CEACAM1 (mCEACAM1). The type of infected tissue and the point at which the spike protein is cut vary according to the virus strain. In MHV-A59, the S1/S2 cleavage site of the spike protein is cut by proteases such as furin in the host cell when the virus is produced and assembled, and when the virus infects a new cell, further cleavage in the lysosomal pathway is also required for successful infection. The spike protein of MHV-2 does not have the S1/S2 cleavage site and is not cut during the assembly process; its infection depends on cleavage of the spike protein by endosomal enzymes. MHV-JHM (especially the more virulent JHM.SD and JHM-cl2), which infects nerve tissue, may not require receptor binding: it can achieve membrane fusion without binding to the cell receptor, so it can infect structures in the nervous system with little expression of mCEACAM1, and its infection may depend mainly on the cutting of its spike protein by cell surface proteases.
https://en.wikipedia.org/wiki?curid=5813522
1,674,106
Quil is a quantum instruction set architecture that first introduced a shared quantum/classical memory model. It was introduced by Robert Smith, Michael Curtis, and William Zeng in "A Practical Quantum Instruction Set Architecture". Many quantum algorithms (including quantum teleportation, quantum error correction, simulation, and optimization algorithms) require a shared memory architecture. Quil is being developed for the superconducting quantum processors developed by Rigetti Computing through the Forest quantum programming API. A Python library called codice_1 was introduced to develop Quil programs with higher level constructs. A Quil backend is also supported by other quantum programming environments.
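The shared quantum/classical memory model can be illustrated with a small Quil program, shown below as a plain string: the DECLARE directive reserves classical bits that measurement results are written into. This is a sketch of the instruction set itself, not a full Forest workflow.

# a minimal Bell-state program in Quil, held as plain text
quil_src = """
DECLARE ro BIT[2]
H 0
CNOT 0 1
MEASURE 0 ro[0]
MEASURE 1 ro[1]
"""
print(quil_src)   # entangles qubits 0 and 1, then reads both into ro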
https://en.wikipedia.org/wiki?curid=55836896
1,680,346
divide-and-conquer or decrease-and-conquer algorithms, where the size of the data decreases as one moves deeper in the recursion. In this case, one algorithm is used for the overall approach (on large data), but deep in the recursion it switches to a different algorithm, which is more efficient on small data. A common example is in sorting algorithms, where insertion sort, which is inefficient on large data but very efficient on small data (say, five to ten elements), is used as the final step, after primarily applying another algorithm, such as merge sort or quicksort. Merge sort and quicksort are asymptotically optimal on large data, but the overhead becomes significant when applying them to small data, hence the use of a different algorithm at the end of the recursion; a minimal sketch of this pattern follows. A highly optimized hybrid sorting algorithm is Timsort, which combines merge sort and insertion sort, together with additional logic (including binary search) in the merging step.
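A minimal sketch of the hybrid pattern: merge sort that switches to insertion sort below a small cutoff (the threshold here is arbitrary).

CUTOFF = 8

def insertion_sort(a):
    # efficient on small inputs
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def hybrid_sort(a):
    # switch algorithms deep in the recursion, on small data
    if len(a) <= CUTOFF:
        return insertion_sort(a)
    mid = len(a) // 2
    left = hybrid_sort(a[:mid])
    right = hybrid_sort(a[mid:])
    # merge the two sorted halves
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(hybrid_sort([5, 2, 9, 1, 7, 3, 8, 6, 4, 0]))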
https://en.wikipedia.org/wiki?curid=40338559
1,681,528
Besides increased efficiency of power plants, there was an increase in efficiency (between 1950 and 1973) of the railway utilization of this electricity, with energy-intensity dropping from 218 to 124 kWh/10,000 gross tonne-km (of both passenger and freight trains), a 43% drop. Since energy-intensity is the inverse of energy-efficiency, it drops as efficiency goes up. But most of this 43% decrease in energy-intensity also benefited diesel traction. The conversion of wheel bearings from plain to roller bearings, the increase of train weight, the conversion of single-track lines to double track (or partially double track), and the elimination of obsolete 2-axle freight cars increased the energy-efficiency of all types of traction: electric, diesel, and steam. However, there remained a 12–15% reduction of energy-intensity that only benefited electric traction (and not diesel). This was due to improvements in locomotives, more widespread use of regenerative braking (which in 1989 recycled 2.65% of the electric energy used for traction), remote control of substations, better handling of the locomotive by the locomotive crew, and improvements in automation. Thus the overall efficiency of electric traction as compared to diesel more than doubled between 1950 and the mid-1970s in the Soviet Union. But after 1974 (through 1980) there was no improvement in energy-intensity (Wh/tonne-km), in part due to increasing speeds of passenger and freight trains.
https://en.wikipedia.org/wiki?curid=36017606
1,688,269
So, there are no electric or magnetic charges in the quantum LC circuit, but only electric and magnetic fluxes. Therefore, not only in the DOS LC circuit, but in the other LC circuits too, there are only electromagnetic waves. Thus, the quantum LC circuit is the minimal geometrical-topological realization of the quantum waveguide, in which there are no electric or magnetic charges, but only electromagnetic waves. One should now consider the quantum LC circuit as a "black wave box" (BWB), which has no electric or magnetic charges, but waves. Furthermore, this BWB could be "closed" (as in the Bohr atom, or in the vacuum for photons) or "open" (as for the QHE and the Josephson junction). So, the quantum LC circuit should have a BWB and "input-output" supplements. The total energy balance should be calculated taking the "input" and "output" devices into account. Without "input-output" devices, the energies "stored" on capacitances and inductances are virtual or "characteristic", as in the case of characteristic impedance (without dissipation). Very close to this approach now are Devoret (2004), who considers Josephson junctions with quantum inductance; Datta, with the impedance of Schrödinger waves (2008); and Tsu (2008), who considers quantum waveguides.
https://en.wikipedia.org/wiki?curid=24838946