doc_id (int32, 15 – 2.25M) | text (string, 101 – 6.85k chars) | source (string, 39 – 44 chars)
352,636
Rice's theorem generalizes the theorem that the halting problem is unsolvable. It states that for "any" non-trivial property, there is no general decision procedure that, for all programs, decides whether the partial function implemented by the input program has that property. (A partial function is a function which may not always produce a result, and so is used to model programs, which can either produce results or fail to halt.) For example, the property "halt for the input 0" is undecidable. Here, "non-trivial" means that the set of partial functions that satisfy the property is neither the empty set nor the set of all partial functions. For example, "halts or fails to halt on input 0" is clearly true of all partial functions, so it is a trivial property, and can be decided by an algorithm that simply reports "true." Also, this theorem holds only for properties of the partial function implemented by the program; Rice's Theorem does not apply to properties of the program itself. For example, "halt on input 0 within 100 steps" is "not" a property of the partial function that is implemented by the program—it is a property of the program implementing the partial function and is very much decidable.
https://en.wikipedia.org/wiki?curid=21391870
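The contrast drawn above — properties of the computed partial function are undecidable, while step-bounded properties of the program itself are decidable — can be made concrete. A minimal Python sketch, under the illustrative assumption that programs are modelled as generator functions and each `yield` counts as one execution step (the sample programs are made up):

```python
def halts_within(program, arg, max_steps=100):
    """Decide the program property "halts on arg within max_steps steps".

    Unlike semantic properties of the partial function the program computes,
    this is decidable: run the program and stop counting at the budget."""
    gen = program(arg)
    for _ in range(max_steps):
        try:
            next(gen)                 # advance the program by one step
        except StopIteration:
            return True               # halted within the budget
    return False                      # still running after max_steps steps

def quick(n):                         # halts after two steps on any input
    yield
    yield n + 1

def loop_forever(n):                  # never halts, so "halts on input 0" fails
    while True:
        yield

print(halts_within(quick, 0))         # True
print(halts_within(loop_forever, 0))  # False
```

Whether `loop_forever` halts at all (a property of its partial function) cannot be decided by any such procedure, which is exactly Rice's theorem's point.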
355,664
The objective consists of adjusting the parameters of a model function to best fit a data set. A simple data set consists of "n" points (data pairs) (x_i, y_i), "i" = 1, …, "n", where x_i is an independent variable and y_i is a dependent variable whose value is found by observation. The model function has the form f(x, β), where the "m" adjustable parameters are held in the vector β. The goal is to find the parameter values for the model that "best" fit the data. The fit of a model to a data point is measured by its residual, defined as the difference between the observed value of the dependent variable and the value predicted by the model: r_i = y_i − f(x_i, β).
https://en.wikipedia.org/wiki?curid=82359
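To make the definitions above concrete, here is a small numerical sketch. The straight-line model f(x, β) = β₀ + β₁x and the data values are assumptions made purely for illustration:

```python
import numpy as np

# Made-up data set (x_i, y_i), i = 1..n
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.9, 5.1])

def model(x, beta):
    """Straight-line model f(x, beta) with m = 2 adjustable parameters."""
    return beta[0] + beta[1] * x

beta = np.array([1.0, 1.0])           # candidate parameter vector
residuals = y - model(x, beta)        # observed minus predicted
print("residuals:", residuals)
print("sum of squared residuals:", np.sum(residuals**2))

# Least squares chooses beta to minimise the sum of squared residuals;
# for a linear model this can be done in closed form:
beta_hat, *_ = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), y, rcond=None)
print("fitted parameters:", beta_hat)
```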
358,226
Although Tycho admired Copernicus and was the first to teach his theory in Denmark, he was unable to reconcile Copernican theory with the basic laws of Aristotelian physics, which he considered to be foundational. He was also critical of the observational data that Copernicus built his theory on, which he correctly considered to have a high margin of error. Instead, Tycho proposed a "geo-heliocentric" system in which the Sun and Moon orbited the Earth, while the other planets orbited the Sun. Tycho's system had many of the same observational and computational advantages that Copernicus' system had, and both systems could also accommodate the phases of Venus, although Galileo had yet to discover them. Tycho's system provided a safe position for astronomers who were dissatisfied with older models but were reluctant to accept heliocentrism and the Earth's motion. It gained a considerable following after 1616 when Rome declared that the heliocentric model was contrary to both philosophy and Scripture, and could be discussed only as a computational convenience that had no connection to fact. Tycho's system also offered a major innovation: while both the purely geocentric model and the heliocentric model as set forth by Copernicus relied on the idea of transparent rotating crystalline spheres to carry the planets in their orbits, Tycho eliminated the spheres entirely. Kepler, as well as other Copernican astronomers, tried to persuade Tycho to adopt the heliocentric model of the Solar System, but he was not persuaded. According to Tycho, the idea of a rotating and revolving Earth would be "in violation not only of all physical truth but also of the authority of Holy Scripture, which ought to be paramount."
https://en.wikipedia.org/wiki?curid=30027
369,342
When an electric eel identifies prey, its brain sends a nerve signal to the electric organ; the nerve cells involved release the neurotransmitter chemical acetylcholine to trigger an electric organ discharge. This opens ion channels, allowing sodium to flow into the electrocytes, reversing the polarity momentarily. The discharge is terminated by an outflow of potassium ions through a separate set of ion channels. By causing a sudden difference in electric potential, it generates an electric current in a manner similar to a battery, in which cells are stacked to produce a desired total voltage output. It has been suggested that Sachs' organ is used for electrolocation; its discharge is of nearly 10 volts at a frequency of around 25 Hz. The main organ, supported by Hunter's organ in some way, is used to stun prey or to deter predators; it can emit signals at rates of several hundred hertz. Electric eels can concentrate the discharge to stun prey more effectively by curling up and making contact with the prey at two points along the body. It has also been suggested that electric eels can control their prey's nervous systems and muscles via electrical pulses, keeping prey from escaping, or forcing it to move so they can locate it, but this has been disputed. In self-defence, electric eels have been observed to leap from the water to deliver electric shocks to animals that might pose a threat. The shocks from leaping electric eels are powerful enough to drive away animals as large as horses.
https://en.wikipedia.org/wiki?curid=61262925
380,568
Quantum computing is an area of research that brings together the disciplines of computer science, information theory, and quantum physics. While the idea of information as part of physics is relatively new, there appears to be a strong tie between information theory and quantum mechanics. Whereas traditional computing operates on a binary system of ones and zeros, quantum computing uses qubits. Qubits are capable of being in a superposition, i.e. in both the one and zero states simultaneously. Thus, the value of a qubit is not fixed at 1 or 0, but is only determined when it is measured. This superposition, together with quantum entanglement between qubits, is the core idea of quantum computing that allows quantum computers to do large-scale computations. Quantum computing is often used for scientific research in cases where traditional computers do not have the computing power to do the necessary calculations, such as in molecular modeling. Large molecules and their reactions are far too complex for traditional computers to calculate, but the computational power of quantum computers could provide a tool to perform such calculations.
https://en.wikipedia.org/wiki?curid=5213
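The superposition idea can be illustrated with plain linear algebra (no quantum-computing library): a qubit is a unit vector of two complex amplitudes, and measurement probabilities are the squared magnitudes. The specific amplitudes below are an arbitrary illustrative choice:

```python
import numpy as np

# A qubit state is a unit vector a|0> + b|1>; measurement yields 0 with
# probability |a|^2 and 1 with probability |b|^2.
state = np.array([1, 1j]) / np.sqrt(2)       # equal superposition of |0> and |1>
probs = np.abs(state) ** 2
print("P(measure 0), P(measure 1):", probs)  # [0.5, 0.5]

# A classical bit, by contrast, is always exactly one of the two basis states.
rng = np.random.default_rng(0)
samples = rng.choice([0, 1], size=10, p=probs)
print("simulated measurement outcomes:", samples)
```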
394,867
In 2010, Peru passed the Law for Personal Data Protection, which defines biometric information that can be used to identify an individual as sensitive data. In 2012, Colombia passed a comprehensive Data Protection Law which defines biometric data as sensitive information. According to Article 9(1) of the EU's 2016 General Data Protection Regulation (GDPR) the processing of biometric data for the purpose of "uniquely identifying a natural person" is sensitive and the facial recognition data processed in this way becomes sensitive personal data. In response to the GDPR passing into the law of EU member states, EU-based researchers voiced concern that if they were required under the GDPR to obtain individuals' consent for the processing of their facial recognition data, a face database on the scale of MegaFace could never be established again. In September 2019 the Swedish Data Protection Authority (DPA) issued its first ever financial penalty for a violation of the EU's General Data Protection Regulation (GDPR) against a school that was using the technology to replace time-consuming roll calls during class. The DPA found that the school illegally obtained the biometric data of its students without completing an impact assessment. In addition the school did not make the DPA aware of the pilot scheme. A 200,000 SEK fine (€19,000/$21,000) was issued.
https://en.wikipedia.org/wiki?curid=602401
397,216
As of the 21st century, multiple unit systems are used all over the world, such as the United States customary system, the British imperial system, and the International System of Units. However, the United States is the only industrialized country that has not yet at least mostly converted to the metric system. The systematic effort to develop a universally acceptable system of units dates back to 1790, when the French National Assembly charged the French Academy of Sciences to come up with such a unit system. This system was the precursor to the metric system, which was quickly developed in France but did not gain universal acceptance until 1875, when the Metre Convention treaty was signed by 17 nations. After this treaty was signed, the General Conference on Weights and Measures (CGPM) was established. The CGPM produced the current SI system, which was formally adopted in 1960 at the 11th General Conference on Weights and Measures, building on the practical system of base units agreed at the 10th conference in 1954. Currently, the United States is a dual-system society which uses both the SI system and the US customary system.
https://en.wikipedia.org/wiki?curid=21347678
399,980
where P(B) ≠ 0. Although Bayes' theorem is a fundamental result of probability theory, it has a specific interpretation in Bayesian statistics. In the above equation, A usually represents a proposition (such as the statement that a coin lands on heads fifty percent of the time) and B represents the evidence, or new data that is to be taken into account (such as the result of a series of coin flips). P(A) is the prior probability of A, which expresses one's beliefs about A before evidence is taken into account. The prior probability may also quantify prior knowledge or information about A. P(B|A) is the likelihood function, which can be interpreted as the probability of the evidence B given that A is true. The likelihood quantifies the extent to which the evidence B supports the proposition A. P(A|B) is the posterior probability, the probability of the proposition A after taking the evidence B into account. Essentially, Bayes' theorem updates one's prior beliefs P(A) after considering the new evidence B.
https://en.wikipedia.org/wiki?curid=404412
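A minimal worked example of the update described above, assuming a discrete prior over a coin's heads-probability and an observed sequence of flips (the grid of hypotheses and the counts are illustrative):

```python
import numpy as np

# Proposition A: "the coin's heads-probability is theta", for a grid of theta values.
theta = np.array([0.25, 0.50, 0.75])
prior = np.array([1/3, 1/3, 1/3])          # P(A): beliefs before seeing evidence

# Evidence B: 6 heads in 8 flips.  Likelihood P(B | A) under each theta:
heads, flips = 6, 8
likelihood = theta**heads * (1 - theta)**(flips - heads)

# Bayes' theorem: P(A | B) = P(B | A) P(A) / P(B), with P(B) = sum_A P(B|A) P(A).
posterior = likelihood * prior / np.sum(likelihood * prior)
print(dict(zip(theta, posterior.round(3))))
```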
411,069
Classical DFT allows the calculation of the equilibrium particle density and prediction of thermodynamic properties and behavior of a many-body system on the basis of model interactions between particles. The spatially dependent density determines the local structure and composition of the material. It is determined as the function that optimizes the thermodynamic potential of the grand canonical ensemble. The grand potential is evaluated as the sum of the ideal-gas term with the contribution from external fields and an excess thermodynamic free energy arising from interparticle interactions. In the simplest approach the excess free-energy term is expanded around a system of uniform density using a functional Taylor expansion. The excess free energy is then a sum of the contributions from "s"-body interactions with density-dependent effective potentials representing the interactions between "s" particles. In most calculations the terms in the interactions of three or more particles are neglected (second-order DFT). When the structure of the system to be studied is not well approximated by a low-order perturbation expansion with a uniform phase as the zero-order term, non-perturbative free-energy functionals have also been developed. The minimization of the grand potential functional over arbitrary local density functions for fixed chemical potential, volume and temperature provides self-consistent thermodynamic equilibrium conditions, in particular, for the local chemical potential. The functional is not in general a convex functional of the density; solutions may not be local minima. Restricting the expansion to low-order corrections in the local density is a well-known problem, although the results agree (reasonably) well in comparison with experiment.
https://en.wikipedia.org/wiki?curid=209874
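For the simplest case — an ideal gas in an external field, with the excess (interaction) term dropped entirely — the grand-potential functional can be minimised numerically on a grid and checked against the exact Boltzmann profile. A rough sketch; the harmonic external potential, units, and grid are assumptions for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Ideal-gas grand potential: Omega[rho] = kT * sum rho (ln(rho*Lam3) - 1) dx
#                                         + sum (V_ext - mu) * rho dx
kT, Lam3, mu = 1.0, 1.0, 0.0
x = np.linspace(-3, 3, 61)
dx = x[1] - x[0]
V = 0.5 * x**2                        # assumed harmonic external potential

def grand_potential(u):               # parametrise rho = exp(u) so rho stays > 0
    rho = np.exp(u)
    ideal = kT * rho * (np.log(rho * Lam3) - 1.0)
    return np.sum(ideal + (V - mu) * rho) * dx

res = minimize(grand_potential, np.zeros_like(x), method="L-BFGS-B")
rho_num = np.exp(res.x)
rho_exact = np.exp((mu - V) / kT) / Lam3    # Boltzmann profile, the exact minimiser
print("max deviation from exact profile:", np.abs(rho_num - rho_exact).max())
```

The printed deviation should be small, since the Boltzmann profile is the exact minimiser of the ideal-gas functional; adding an excess free-energy term is what turns this into a genuine DFT calculation.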
415,421
The data model of the data access layer is a generalization of the NetCDF-3 data model, and substantially the same as the NetCDF-4 data model. The coordinate system layer implements and extends the concepts in the Climate and Forecast Metadata Conventions. The scientific data type layer allows data to be manipulated in coordinate space, analogous to the Open Geospatial Consortium specifications. The identification of coordinate systems and data typing is ongoing, but users can plug in their own classes at runtime for specialized processing.
https://en.wikipedia.org/wiki?curid=19332380
416,844
The Hamilton–Jacobi–Bellman equation (HJB) is a partial differential equation which is central to optimal control theory. The solution of the HJB equation is the 'value function', which gives the optimal cost-to-go for a given dynamical system with an associated cost function. Classical variational problems, for example, the brachistochrone problem can be solved using this method as well. The equation is a result of the theory of dynamic programming which was pioneered in the 1950s by Richard Bellman and coworkers. The corresponding discrete-time equation is usually referred to as the Bellman equation. In continuous time, the result can be seen as an extension of earlier work in classical physics on the Hamilton–Jacobi equation by William Rowan Hamilton and Carl Gustav Jacob Jacobi.
https://en.wikipedia.org/wiki?curid=730173
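The discrete-time counterpart mentioned above, the Bellman equation, can be illustrated with a small value-iteration sketch on a made-up control problem (the states, actions, and unit step cost are assumptions):

```python
import numpy as np

# Discrete-time Bellman equation: V(s) = min_a [ cost(s, a) + V(f(s, a)) ].
# Toy problem: walk on states 0..N, move left or right at unit cost,
# state N is the goal with zero terminal cost.
N = 10
states = range(N + 1)
actions = (-1, +1)

V = np.zeros(N + 1)
for _ in range(200):                       # value iteration to a fixed point
    V_new = V.copy()
    for s in states:
        if s == N:                         # goal: cost-to-go is zero
            V_new[s] = 0.0
            continue
        V_new[s] = min(1.0 + V[min(max(s + a, 0), N)] for a in actions)
    if np.allclose(V_new, V):
        break
    V = V_new

print(V)   # optimal cost-to-go: number of steps needed to reach state N
```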
416,927
A Data Mapper is a Data Access Layer that performs bidirectional transfer of data between a persistent data store (often a relational database) and an in-memory data representation (the domain layer). The goal of the pattern is to keep the in-memory representation and the persistent data store independent of each other and of the data mapper itself. This is useful when one needs to model and enforce strict business processes on the data in the domain layer that do not map neatly to the persistent data store. The layer is composed of one or more mappers (or Data Access Objects), performing the data transfer. Mapper implementations vary in scope: generic mappers handle many different domain entity types, while dedicated mappers handle only one or a few.
https://en.wikipedia.org/wiki?curid=37202848
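A minimal sketch of the pattern in Python using sqlite3; the Person domain class, table name, and fields are hypothetical examples, not taken from the article:

```python
import sqlite3
from dataclasses import dataclass
from typing import Optional

@dataclass
class Person:                        # domain object: knows nothing about the database
    id: Optional[int]
    name: str
    email: str

class PersonMapper:                  # the data mapper: all persistence logic lives here
    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS person (id INTEGER PRIMARY KEY, name TEXT, email TEXT)"
        )

    def insert(self, person: Person) -> Person:
        cur = self.conn.execute(
            "INSERT INTO person (name, email) VALUES (?, ?)", (person.name, person.email)
        )
        person.id = cur.lastrowid
        return person

    def find(self, person_id: int) -> Optional[Person]:
        row = self.conn.execute(
            "SELECT id, name, email FROM person WHERE id = ?", (person_id,)
        ).fetchone()
        return Person(*row) if row else None

conn = sqlite3.connect(":memory:")
mapper = PersonMapper(conn)
alice = mapper.insert(Person(None, "Alice", "alice@example.org"))
print(mapper.find(alice.id))
```

The domain class never issues SQL and the mapper never encodes business rules, which is the independence the pattern aims for.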
425,696
To illuminate a particular cell, the electrodes that intersect at the cell are charged by control circuitry and electric current flows through the cell, stimulating the gas (typically xenon and neon) atoms inside the cell. These ionized gas atoms, or plasmas, then release ultraviolet photons that interact with a phosphor material on the inside wall of the cell. The phosphor atoms are stimulated and electrons jump to higher energy levels. When these electrons return to their natural state, energy is released in the form of visible light. Every pixel on the display is made up of three subpixel cells. One subpixel cell is coated with red phosphor, another is coated with green phosphor, and the third cell is coated with blue phosphor. Light emitted from the subpixel cells is blended together to create an overall color for the pixel. The control circuitry can manipulate the intensity of light emitted from each cell, and therefore can produce a large gamut of colors. Light from each cell can be controlled and changed rapidly to produce a high-quality moving picture.
https://en.wikipedia.org/wiki?curid=10158337
447,788
Chemical physics is a subdiscipline of chemistry and physics that investigates physicochemical phenomena using techniques from atomic and molecular physics and condensed matter physics; it is the branch of physics that studies chemical processes from the point of view of physics. While at the interface of physics and chemistry, chemical physics is distinct from physical chemistry in that it focuses more on the characteristic elements and theories of physics. Meanwhile, physical chemistry studies the physical nature of chemistry. Nonetheless, the distinction between the two fields is vague, and scientists often practice in both fields during the course of their research.
https://en.wikipedia.org/wiki?curid=733929
461,201
To derive an energy equation, note that the advective acceleration term formula_37 may be decomposed as formula_38, where formula_39 is the vorticity of the flow and formula_40 is the Euclidean norm. This leads to a form of the momentum equation, ignoring the external forces term, given by formula_41. Taking the dot product of formula_5 with this equation leads to formula_43. This equation was arrived at using the scalar triple product formula_44. Define formula_45 to be the energy density: formula_46. Noting that formula_47 is time-independent, we arrive at the equation formula_48. Assuming that the energy density is time-independent and the flow is one-dimensional leads to the simplification formula_49, with formula_50 being a constant; this is equivalent to Bernoulli's principle. Of particular interest in open-channel flow is the specific energy formula_51, which is used to compute the hydraulic head formula_52 that is defined as formula_53, with formula_54 being the specific weight. However, realistic systems require the addition of a head loss term formula_55 to account for energy dissipation due to friction and turbulence that was ignored by discounting the external forces term in the momentum equation.
https://en.wikipedia.org/wiki?curid=2235244
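A small numerical sketch of the open-channel quantities mentioned at the end — specific energy and hydraulic head, with the head loss obtained as the drop in head between two sections. The rectangular-channel geometry and all numbers are illustrative assumptions:

```python
g = 9.81                     # gravitational acceleration, m/s^2

def specific_energy(depth, velocity):
    """E = y + v^2 / (2g): flow depth plus velocity head."""
    return depth + velocity**2 / (2 * g)

def hydraulic_head(bed_elevation, depth, velocity):
    """H = z + y + v^2 / (2g): bed elevation plus specific energy."""
    return bed_elevation + specific_energy(depth, velocity)

# Two sections of a rectangular channel (illustrative numbers).
Q, width = 6.0, 3.0                       # discharge m^3/s, channel width m
y1, y2 = 1.20, 1.05                       # flow depths m
v1, v2 = Q / (width * y1), Q / (width * y2)
H1 = hydraulic_head(10.00, y1, v1)
H2 = hydraulic_head(9.95, y2, v2)
h_f = H1 - H2                             # head loss between the two sections
print(f"E1={specific_energy(y1, v1):.3f} m, E2={specific_energy(y2, v2):.3f} m, head loss={h_f:.3f} m")
```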
461,439
Synaptic signaling, an integral part of nervous system activity, occurs between neurons and target cells. These target cells can also be neurons or other cell types (i.e. muscle or gland cells). Protocadherins, members of the cadherin family, mediate the adhesion of neurons to their target cells at synapses, otherwise known as synaptic junctions. In order for communication to occur between a neuron and its target cell, a wave of depolarization travels the length of the neuron and causes neurotransmitters to be released into the synaptic junction. These neurotransmitters bind and activate receptors on the post-synaptic neuron, thereby transmitting the signal to the target cell. Thus, the post-synaptic membrane is the membrane receiving the signal, while the pre-synaptic membrane is the source of the neurotransmitter. In a neuromuscular junction, a synapse is formed between a motor neuron and muscle fibers. In vertebrates, acetylcholine released from the motor neuron acts as a neurotransmitter which depolarizes the muscle fiber and causes muscle contraction. A neuron’s ability to receive and integrate simultaneous signals from the environment and other neurons allows for complex animal behavior.
https://en.wikipedia.org/wiki?curid=24950378
463,496
Electric music technology refers to musical instruments and recording devices that use electrical circuits, which are often combined with mechanical technologies. Examples of electric musical instruments include the electro-mechanical electric piano (invented in 1929), the electric guitar (invented in 1931), the electro-mechanical Hammond organ (developed in 1934) and the electric bass (invented in 1935). None of these electric instruments produces a sound that is audible to the performer or audience in a performance setting unless it is connected to an instrument amplifier and loudspeaker cabinet, which make it loud enough for performers and the audience to hear. Amplifiers and loudspeakers are separate from the instrument in the case of the electric guitar (which uses a guitar amplifier), electric bass (which uses a bass amplifier) and some electric organs (which use a Leslie speaker or similar cabinet) and electric pianos. Some electric organs and electric pianos include the amplifier and speaker cabinet within the main housing for the instrument.
https://en.wikipedia.org/wiki?curid=49182501
465,494
In multicellular animals the same principle has been put in the service of gene cascades that control body-shape. Each time a cell divides, two cells result which, although they contain the same genome in full, can differ in which genes are turned on and making proteins. Sometimes a 'self-sustaining feedback loop' ensures that a cell maintains its identity and passes it on. Less understood is the mechanism of epigenetics by which chromatin modification may provide cellular memory by blocking or allowing transcription. A major feature of multicellular animals is the use of morphogen gradients, which in effect provide a positioning system that tells a cell where in the body it is, and hence what sort of cell to become. A gene that is turned on in one cell may make a product that leaves the cell and diffuses through adjacent cells, entering them and turning on genes only when it is present above a certain threshold level. These cells are thus induced into a new fate, and may even generate other morphogens that signal back to the original cell. Over longer distances morphogens may use the active process of signal transduction. Such signalling controls embryogenesis, the building of a body plan from scratch through a series of sequential steps. They also control and maintain adult bodies through feedback processes, and the loss of such feedback because of a mutation can be responsible for the cell proliferation that is seen in cancer. In parallel with this process of building structure, the gene cascade turns on genes that make structural proteins that give each cell the physical properties it needs.
https://en.wikipedia.org/wiki?curid=356382
469,309
Embodied energy analysis is concerned with what energy goes into supporting a consumer, and so all energy depreciation is assigned to the final demand of the consumer. Different methodologies use different scales of data to calculate energy embodied in products and services of nature and human civilization. International consensus on the appropriateness of data scales and methodologies is pending. This difficulty can give a wide range in embodied energy values for any given material. In the absence of a comprehensive global embodied energy public dynamic database, embodied energy calculations may omit important data on, for example, the rural road/highway construction and maintenance needed to move a product, marketing, advertising, catering services, non-human services and the like. Such omissions can be a source of significant methodological error in embodied energy estimations. Without an estimation and declaration of the embodied energy error, it is difficult to calibrate such estimates, and so to judge the value of any given material, process or service to environmental and economic processes.
https://en.wikipedia.org/wiki?curid=1520238
470,127
In the 1940s, filling the vast observational gap between cytology and biochemistry, cell biology arose and established the existence of cell organelles besides the nucleus. Launched in the late 1930s, the molecular biology research program cracked the genetic code in the early 1960s and then converged with cell biology as "cell and molecular biology", its breakthroughs and discoveries defying the deductive–nomological (DN) model by arriving in quest not of lawlike explanation but of causal mechanisms. Biology became a new model of science, while the special sciences were no longer thought defective for lacking the universal laws exemplified by physics.
https://en.wikipedia.org/wiki?curid=1845675
473,856
Energy economics is a broad scientific subject area which includes topics related to the supply and use of energy in societies. Considering the cost of energy services and the associated value gives economic meaning to the efficiency at which energy can be produced. Energy services can be defined as functions that generate and provide energy to the “desired end services or states”. The efficiency of energy services is dependent on the engineered technology used to produce and supply energy. The goal is to minimise the energy input required (e.g. kWh, MJ; see Units of Energy) to produce the energy service, such as lighting (lumens), heating (temperature) and fuel (natural gas). The main sectors considered in energy economics are transportation and building, although it is relevant to a broad scale of human activities, including households and businesses at a microeconomic level and resource management and environmental impacts at a macroeconomic level.
https://en.wikipedia.org/wiki?curid=177696
500,042
The Potts model is related to, and generalized by, several other models, including the XY model, the Heisenberg model and the N-vector model. The infinite-range Potts model is known as the Kac model. When the spins are taken to interact in a non-Abelian manner, the model is related to the flux tube model, which is used to discuss confinement in quantum chromodynamics. Generalizations of the Potts model have also been used to model grain growth in metals and coarsening in foams. A further generalization of these methods by James Glazier and Francois Graner, known as the cellular Potts model, has been used to simulate static and kinetic phenomena in foam and biological morphogenesis.
https://en.wikipedia.org/wiki?curid=1162226
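A minimal Metropolis Monte Carlo sketch of the basic ferromagnetic q-state Potts model on a small periodic square lattice (this is the standard spin model, not the cellular Potts variant; lattice size, temperature, q, and sweep count are illustrative assumptions):

```python
import numpy as np

# q-state Potts model: H = -J * sum over nearest neighbours of delta(s_i, s_j)
q, L, J, T = 3, 16, 1.0, 0.8
rng = np.random.default_rng(1)
spins = rng.integers(q, size=(L, L))

def neighbour_matches(s, i, j, value):
    """Number of nearest neighbours of site (i, j) equal to `value` (periodic boundaries)."""
    return sum(s[(i + di) % L, (j + dj) % L] == value
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for _ in range(20000):                         # single-site Metropolis updates
    i, j = rng.integers(L, size=2)
    new = rng.integers(q)
    dE = -J * (neighbour_matches(spins, i, j, new)
               - neighbour_matches(spins, i, j, spins[i, j]))
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        spins[i, j] = new

largest = max(np.mean(spins == k) for k in range(q))
print("fraction of sites in the most common state:", round(largest, 3))
```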
501,889
For the set cover problem, Lovász proved that the integrality gap for an instance with "n" elements is H_n, the "n"th harmonic number. One can turn the linear programming relaxation for this problem into an approximate solution of the original unrelaxed set cover instance via the technique of randomized rounding. Given a fractional cover, in which each set "S" has weight w_S, choose randomly the value of each 0–1 indicator variable x_S to be 1 with probability w_S × (ln "n" + 1), and 0 otherwise. Then any element "e" has probability less than 1/("e"×"n") of remaining uncovered, so with constant probability all elements are covered. The cover generated by this technique has total size, with high probability, (1+o(1))(ln "n")"W", where "W" is the total weight of the fractional solution. Thus, this technique leads to a randomized approximation algorithm that finds a set cover within a logarithmic factor of the optimum. As later work showed, both the random part of this algorithm and the need to construct an explicit solution to the linear programming relaxation may be eliminated using the method of conditional probabilities, leading to a deterministic greedy algorithm for set cover, known already to Lovász, that repeatedly selects the set that covers the largest possible number of remaining uncovered elements. This greedy algorithm approximates the set cover to within the same H_n factor that Lovász proved as the integrality gap for set cover. There are strong complexity-theoretic reasons for believing that no polynomial time approximation algorithm can achieve a significantly better approximation ratio.
https://en.wikipedia.org/wiki?curid=6368430
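A sketch of the rounding step described above. The toy universe, set family, and fractional weights w_S are made up for illustration; in a real instance the weights would come from solving the LP relaxation:

```python
import math
import random

random.seed(0)

# Toy instance: universe of n elements and a family of sets (illustrative).
universe = set(range(1, 9))
sets = {"A": {1, 2, 3}, "B": {3, 4, 5}, "C": {5, 6, 7, 8}, "D": {1, 4, 6, 8}, "E": {2, 7}}
# Assumed fractional cover w_S: every element is covered with total weight >= 1.
w = {"A": 0.5, "B": 0.5, "C": 0.6, "D": 0.5, "E": 0.5}

n = len(universe)
boost = math.log(n) + 1

def round_once():
    """Pick each set independently with probability min(1, w_S * (ln n + 1))."""
    return {S for S in sets if random.random() < min(1.0, w[S] * boost)}

cover = round_once()
while not universe <= set().union(*(sets[S] for S in cover)):
    cover = round_once()        # repeat on the (constant-probability) failure event
print("chosen sets:", sorted(cover), "| fractional weight W =", sum(w.values()))
```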
503,723
Quantum complexity theory is the subfield of computational complexity theory that deals with complexity classes defined using quantum computers, a computational model based on quantum mechanics. It studies the hardness of computational problems in relation to these complexity classes, as well as the relationship between quantum complexity classes and classical (i.e., non-quantum) complexity classes.
https://en.wikipedia.org/wiki?curid=24092190
503,724
A complexity class is a collection of computational problems that can be solved by a computational model under certain resource constraints. For instance, the complexity class P is defined as the set of problems solvable by a Turing machine in polynomial time. Similarly, quantum complexity classes may be defined using quantum models of computation, such as the quantum circuit model or the equivalent quantum Turing machine. One of the main aims of quantum complexity theory is to find out how these classes relate to classical complexity classes such as P, NP, BPP, and PSPACE.
https://en.wikipedia.org/wiki?curid=24092190
503,738
Notice that the quantum query complexity associated with a particular type of problem can differ depending on which query model is used to determine the complexity. For example, when the matrix model is used, the quantum query complexity of the connectivity problem in Big O notation is formula_75, but when the array model is used, the complexity is formula_76. Additionally, for brevity, we use the shorthand formula_77 in certain cases, where formula_78. The important implication here is that the efficiency of the algorithm used to solve a graph problem depends on the type of query model used to represent the graph.
https://en.wikipedia.org/wiki?curid=24092190
509,652
The determination of a cell to a particular fate can be broken down into two states where the cell can be specified (committed) or determined. In the state of being committed or specified, the cell type is not yet determined and any bias the cell has toward a certain fate can be reversed or transformed to another fate. If a cell is in a determined state, the cell's fate cannot be reversed or transformed. In general, this means that a cell determined to differentiate into a brain cell cannot be transformed into a skin cell. Determination is followed by differentiation, the actual changes in biochemistry, structure, and function that result in specific cell types. Differentiation often involves a change in appearance as well as function.
https://en.wikipedia.org/wiki?curid=8285473
521,380
More precisely, a nonholonomic system, also called an "anholonomic" system, is one in which there is a continuous closed circuit of the governing parameters, by which the system may be transformed from any given state to any other state. Because the final state of the system depends on the intermediate values of its trajectory through parameter space, the system cannot be represented by a conservative potential function as can, for example, the inverse square law of the gravitational force. This latter is an example of a holonomic system: path integrals in the system depend only upon the initial and final states of the system (positions in the potential), completely independent of the trajectory of transition between those states. The system is therefore said to be "integrable", while the nonholonomic system is said to be "nonintegrable". When a path integral is computed in a nonholonomic system, the value represents a deviation within some range of admissible values and this deviation is said to be an "anholonomy" produced by the specific path under consideration. This term was introduced by Heinrich Hertz in 1894.
https://en.wikipedia.org/wiki?curid=554248
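The path-dependence argument above can be illustrated numerically: a line integral of a conservative (potential-derived) field depends only on the endpoints, while a field with nonzero curl picks up a path-dependent value, the analogue of an anholonomy. The two example fields and paths below are assumptions for illustration:

```python
import numpy as np

def line_integral(field, path, n=2001):
    """Numerically integrate F . dr along a parametrised path r(t), t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    pts = np.array([path(ti) for ti in t])
    dr = np.diff(pts, axis=0)
    mid = (pts[1:] + pts[:-1]) / 2
    F = np.array([field(p) for p in mid])
    return float(np.sum(np.einsum("ij,ij->i", F, dr)))

conservative = lambda p: np.array([p[0], p[1]])    # F = grad((x^2 + y^2)/2)
rotational   = lambda p: np.array([-p[1], p[0]])   # nonzero curl: no potential exists

path_a = lambda t: np.array([t, t])                # straight line (0,0) -> (1,1)
path_b = lambda t: np.array([t, t**3])             # curved path, same endpoints

for name, F in [("conservative", conservative), ("non-conservative", rotational)]:
    print(name, round(line_integral(F, path_a), 4), round(line_integral(F, path_b), 4))
```

The conservative field gives the same value (1.0) for both paths; the rotational field gives 0.0 and 0.5, the path-dependent deviation the passage calls an anholonomy.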
540,523
At a global level, the implementation of digital health solutions depends on large data sets, ranging from simple statistics that record every birth and death to more sophisticated metrics that track diseases, outbreaks, and chronic conditions. These systems record data such as patient records, blood test results, EKGs, MRIs, billing records, drug prescriptions, and other private medical information. Medical professionals can use this data to make more data-driven decisions about patient care, and consumers themselves can utilize it to make informed choices about their own health. Given the personal nature of the data being collected, a crucial debate has arisen amongst stakeholders about one of the challenges induced by digital health solutions: the ownership of health data. In most cases, governments and big data and technology companies are storing citizens' medical information, leaving many concerned with how their data is being used and/or who has access to it. This is further compounded by the fact that the details that answer these questions are oftentimes hidden in complex terms and conditions that are rarely read. A notable example of a data privacy breach in the digital health space took place in 2016. Google faced a major lawsuit over a data-sharing agreement that gave its artificial intelligence arm, DeepMind, access to the personal health data of 1.6 million British patients. Google failed to secure patient consent and guarantee the anonymity of the patients.
https://en.wikipedia.org/wiki?curid=37451236
544,629
FuelCell Energy, Inc. is a publicly traded fuel cell company, headquartered in Danbury, Connecticut. It designs, manufactures, operates and services Direct Fuel Cell power plants (a type of molten carbonate fuel cell). The company's fuel cell technology is an alternative to traditional combustion-based power generation, and is complementary to intermittent sources of energy, such as solar and wind turbines. As one of the biggest publicly traded fuel cell manufacturers in the U.S., the company provides clean energy in over 50 locations all over the world. It operates the world’s largest fuel cell park, the Gyeonggi Green Energy fuel cell park, which is located in South Korea. The park consists of 21 power plants providing 59 megawatts of electricity plus district heating to a number of customers in South Korea. It also operates the largest fuel cell park in North America, consisting of five 2.8 MW power plants and a Rankine-cycle bottoming turbine in Bridgeport, Connecticut. Its customer base covers commercial and industrial enterprises including utility companies, municipalities, universities, etc.
https://en.wikipedia.org/wiki?curid=42044114
546,949
In cochlear implants, the sound is picked up by a microphone and transmitted to the behind-the-ear external processor to be converted to digital data. The digitized data is then modulated on a radio frequency signal and transmitted to an antenna inside a headpiece. The data and power carrier are transmitted through a pair of coupled coils to the hermetically sealed internal unit. By extracting the power and demodulating the data, electric current commands are sent to the cochlea to stimulate the auditory nerve through microelectrodes. The key point is that the internal unit does not have a battery, so it must be able to extract the required energy from the transmitted signal. Also, to reduce the risk of infection, data is transmitted wirelessly along with power. Inductively coupled coils are good candidates for power and data telemetry, although radio-frequency transmission could provide better efficiency and data rates. Parameters needed by the internal unit include the pulse amplitude, pulse duration, pulse gap, active electrode, and return electrode, which are used to define a biphasic pulse and the stimulation mode. An example of a commercial device is the Nucleus 22, which utilized a carrier frequency of 2.5 MHz; in the newer revision, the Nucleus 24, the carrier frequency was increased to 5 MHz. The internal unit in cochlear implants is an ASIC (application-specific integrated circuit) chip that is responsible for ensuring safe and reliable electric stimulation. Inside the ASIC chip, there is a forward pathway, a backward pathway, and control units. The forward pathway recovers digital information from the RF signal, which includes stimulation parameters and some handshaking bits to reduce the communication error. The backward pathway usually includes a back telemetry voltage
https://en.wikipedia.org/wiki?curid=29803004
558,279
The drop down menu (accessed by double clicking or right clicking an object) includes several tools for liquifying, turning into sponges, cloning, and mirroring objects; for generating plots of physics-relevant quantities of the object (such as velocity vs. time or y-position vs. x-position); for selecting objects; for changing the appearance of objects (including the option to toggle the presence of velocity, momentum, and force vectors); for assigning text to an object; for changing the simulated material of the object (including such parameters as density, mass, friction, restitution, and attraction); for assigning and changing an object's velocity; for a list of the information about an object (including the area, mass, moment of inertia, position, velocity, angular velocity, momentum, angular momentum, energy (total), kinetic linear energy, kinetic angular energy, potential energy (gravity), potential energy (attraction), and potential energy (spring)); for assigning objects to various collision layers; for performing "geometry actions" (such as gluing objects to the background, adding center axles, adding center thrusters, attaching tracers, attaching gears, or transforming the object into a circle); for editing objects via constructive solid geometry (CSG); for assigning keystrokes for controlling the object; and for opening a script menu for that selected object(s).
https://en.wikipedia.org/wiki?curid=24443136
558,503
The inverse demand function can be used to derive the total and marginal revenue functions. Total revenue equals price, P, times quantity, Q, or TR = P×Q. Multiply the inverse demand function by Q to derive the total revenue function: TR = (120 - 0.5Q) × Q = 120Q - 0.5Q². The marginal revenue function is the first derivative of the total revenue function, or MR = 120 - Q. Note that in this linear example the MR function has the same y-intercept as the inverse demand function, the x-intercept of the MR function is one-half that of the inverse demand function, and the slope of the MR function is twice that of the inverse demand function. This relationship holds true for all linear demand equations. The importance of being able to quickly calculate MR is that the profit-maximizing condition for firms, regardless of market structure, is to produce where marginal revenue equals marginal cost (MC). To derive MC, the first derivative of the total cost function is taken.
https://en.wikipedia.org/wiki?curid=3976717
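The passage's calculus can be reproduced symbolically; a short sketch using sympy, with the demand curve taken from the text and an assumed constant marginal cost of 20 added purely to illustrate the MR = MC condition:

```python
import sympy as sp

Q = sp.symbols("Q", positive=True)
P = 120 - sp.Rational(1, 2) * Q            # inverse demand from the passage

TR = P * Q                                 # total revenue
MR = sp.diff(TR, Q)                        # marginal revenue = dTR/dQ
print("TR =", sp.expand(TR), "| MR =", MR)  # 120*Q - Q**2/2 | 120 - Q

# Profit maximisation: produce where MR = MC.  Assume a constant MC of 20.
MC = 20
Q_star = sp.solve(sp.Eq(MR, MC), Q)[0]
print("profit-maximising quantity:", Q_star, "at price", P.subs(Q, Q_star))
```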
569,589
Transferring the knowledge from a large to a small model needs to somehow teach the latter without loss of validity. If both models are trained on the same data, the small model may have insufficient capacity to learn a concise knowledge representation given the same computational resources and same data as the large model. However, some information about a concise knowledge representation is encoded in the pseudolikelihoods assigned to its output: when a model correctly predicts a class, it assigns a large value to the output variable corresponding to that class, and smaller values to the other output variables. The distribution of values among the outputs for a record provides information on how the large model represents knowledge. Therefore, the goal of economical deployment of a valid model can be achieved by training only the large model on the data, exploiting its better ability to learn concise knowledge representations, and then distilling such knowledge into the smaller model, which would not be able to learn it on its own, by training it to learn the soft output of the large model.
https://en.wikipedia.org/wiki?curid=62295363
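A minimal numpy sketch of the distillation objective described above: the small model is trained to match the large model's softened output distribution rather than hard labels. The temperature and the toy logits are assumptions:

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between the teacher's softened distribution and the student's.
    The student is pushed to reproduce the teacher's pseudolikelihoods, not just
    the argmax class."""
    teacher_probs = softmax(teacher_logits, temperature)
    student_log_probs = np.log(softmax(student_logits, temperature))
    return -np.sum(teacher_probs * student_log_probs, axis=-1).mean()

# Toy logits for one record and three classes.
teacher = np.array([[6.0, 2.5, 0.5]])     # large model: confident but informative tails
student = np.array([[2.0, 1.0, 0.2]])     # small model before training
print("soft targets:", softmax(teacher, 2.0).round(3))
print("distillation loss:", round(distillation_loss(student, teacher), 4))
```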
576,021
To explain quantum integrability, it is helpful to consider the free particle setting. Here all dynamics are one-body reducible. A quantum system is said to be integrable if the dynamics are two-body reducible. The Yang–Baxter equation is a consequence of this reducibility and leads to trace identities which provide an infinite set of conserved quantities. All of these ideas are incorporated into the quantum inverse scattering method where the algebraic Bethe ansatz can be used to obtain explicit solutions. Examples of quantum integrable models are the Lieb–Liniger model, the Hubbard model and several variations on the Heisenberg model. Some other types of quantum integrability are known in explicitly time-dependent quantum problems, such as the driven Tavis-Cummings model.
https://en.wikipedia.org/wiki?curid=2573213
578,675
The traditional topics in quantum chaos concern spectral statistics (universal and non-universal features) and the study of eigenfunctions of various chaotic Hamiltonians. For example, before the existence of scars was reported, eigenstates of a classically chaotic system were conjectured to fill the available phase space evenly, up to random fluctuations and energy conservation (quantum ergodicity). However, a quantum eigenstate of a classically chaotic system can be scarred: the probability density of the eigenstate is enhanced in the neighborhood of a periodic orbit, above the classical, statistically expected density along the orbit (scars). In particular, scars are both a striking visual example of classical-quantum correspondence away from the usual classical limit, and a useful example of a quantum suppression of chaos. For example, this is evident in perturbation-induced quantum scarring: more specifically, in quantum dots perturbed by local potential bumps (impurities), some of the eigenstates are strongly scarred along periodic orbits of the unperturbed classical counterpart.
https://en.wikipedia.org/wiki?curid=313779
583,074
In a typical undergraduate program for physics majors, required courses are in the sub-disciplines of physics, with additional required courses in mathematics. Because much of the insight of physics is described by differential equations relating matter, space, and time (for example, Newton's laws of motion and Maxwell's equations of electromagnetism), students have to be familiar with differential equations. In a typical undergraduate program for chemistry majors, emphasis is placed on laboratory classes and understanding and applying models describing chemical bonds and molecular structure. Emphasis is also placed on the methods of analysis and the formulas and equations used when considering the chemical transformation. Students take courses in math, physics, chemistry, and often biochemistry. Between the two programs of study, there is a large area of overlap (calculus, introductory physics, quantum mechanics, thermodynamics). However, physics places a larger emphasis on fundamental theory (with its deep mathematical treatment) while chemistry places more emphasis on combining the most important mathematical definitions of the theory with the approach of the molecular models. Laboratory skills may differ in both programs, as students may be involved in different technologies, depending on the program and the institution of higher education (for example, a chemistry student may spend more laboratory time dealing with glassware for distillation and purification or on a form of chromatography-spectroscopy instrument, while a physics student may spend much more time dealing with a laser and non-linear optics technology or some complex electrical circuit).
https://en.wikipedia.org/wiki?curid=33615960
583,220
Energy engineering or Energy Systems Engineering is a broad field of engineering dealing with energy efficiency, energy services, facility management, plant engineering, environmental compliance, sustainable energy and renewable energy technologies. Energy engineering is one of the most recent engineering disciplines to emerge. Energy engineering combines knowledge from the fields of physics, math, and chemistry with economic and environmental engineering practices. Energy engineers apply their skills to increase efficiency and further develop renewable sources of energy.
https://en.wikipedia.org/wiki?curid=9348093
600,021
The Wheeler–DeWitt equation is a functional differential equation. It is ill-defined in the general case, but very important in theoretical physics, especially in quantum gravity. It is a functional differential equation on the space of three dimensional spatial metrics. The Wheeler–DeWitt equation has the form of an operator acting on a wave functional; the functional reduces to a function in cosmology. Contrary to the general case, the Wheeler–DeWitt equation is well defined in minisuperspaces like the configuration space of cosmological theories. An example of such a wave function is the Hartle–Hawking state. Bryce DeWitt first published this equation in 1967 under the name "Einstein–Schrödinger equation"; it was later renamed the "Wheeler–DeWitt equation".
https://en.wikipedia.org/wiki?curid=1253305
608,693
China is pursuing a strategic policy of military-civil fusion on AI for global technological supremacy. According to a February 2019 report by Gregory C. Allen of the Center for a New American Security, China's leadership – including paramount leader Xi Jinping – believes that being at the forefront in AI technology is critical to the future of global military and economic power competition. Chinese military officials have said that their goal is to incorporate commercial AI technology to "narrow the gap between the Chinese military and global advanced powers." The close ties between Silicon Valley and China, and the open nature of the American research community, have made the West's most advanced AI technology easily available to China; in addition, Chinese industry has numerous home-grown AI accomplishments of its own, such as Baidu passing a notable Chinese-language speech recognition capability benchmark in 2015. As of 2017, Beijing's roadmap aims to create a $150 billion AI industry by 2030. Before 2013, Chinese defense procurement was mainly restricted to a few conglomerates; however, as of 2017, China often sources sensitive emerging technology such as drones and artificial intelligence from private start-up companies. An October 2021 report by the Center for Security and Emerging Technology found that "Most of the [Chinese military]'s AI equipment suppliers are not state-owned defense enterprises, but private Chinese tech companies founded after 2010." The report estimated that Chinese military spending on AI exceeded $1.6 billion each year. The "Japan Times" reported in 2018 that annual private Chinese investment in AI is under $7 billion per year. AI startups in China received nearly half of total global investment in AI startups in 2017; the Chinese filed for nearly five times as many AI patents as did Americans.
https://en.wikipedia.org/wiki?curid=56127293
613,239
An essential component of the experiment is the data acquisition (DAQ) system, which manages the data flow from the detector electronics. The requirement for the experiment is to acquire raw data at a rate of 18 GB/s. This is accomplished by employing parallel data-processing architecture using 24 high-speed GPUs (NVIDIA Tesla K40) to process data from 12 bit waveform digitisers. The set-up is controlled by the MIDAS DAQ software framework. The DAQ system processes data from 1296 calorimeter channels, 3 straw tracker stations, and auxiliary detectors (e.g. entrance muon counters). The total data output of the experiment is estimated at 2 PB.
https://en.wikipedia.org/wiki?curid=53887100
620,517
ETC is also a more viable option than other alternatives by definition. ETC requires much less energy input from outside sources, like a battery, than a railgun or a coilgun would. Tests have shown that the energy output by the propellant is higher than the energy input from outside sources on ETC guns. In comparison, a railgun currently cannot deliver more muzzle energy than the energy put into it. Even at 50% efficiency a railgun launching a projectile with a kinetic energy of 20 MJ would require an energy input into the rails of 40 MJ, and 50% efficiency has not yet been achieved. To put this into perspective, a railgun launching at 9 MJ of energy would need roughly 32 MJ worth of energy from capacitors. Current advances in energy storage allow for energy densities as high as 2.5 MJ/dm³, which means that a battery delivering 32 MJ of energy would require a volume of 12.8 dm³ per shot; this is not a viable volume for use in a modern main battle tank, especially one designed to be lighter than existing models. There has even been discussion about eliminating the necessity for an outside electrical source in ETC ignition by initiating the plasma cartridge through a small explosive force.
https://en.wikipedia.org/wiki?curid=10382
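The energy-storage arithmetic in the passage can be checked directly; a short calculation using only the figures quoted in the text:

```python
# Rail-gun energy bookkeeping from the passage.
muzzle_energy_mj = 20.0
efficiency = 0.5
print("input needed at 50% efficiency:", muzzle_energy_mj / efficiency, "MJ")   # 40 MJ

capacitor_energy_mj = 32.0          # quoted requirement for a 9 MJ shot
energy_density = 2.5                # MJ per dm^3
volume = capacitor_energy_mj / energy_density
print("storage volume per shot:", volume, "dm^3")                               # 12.8 dm^3
```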
629,750
Data integration involves combining data residing in different sources and providing users with a unified view of them. This process becomes significant in a variety of situations, which include both commercial (such as when two similar companies need to merge their databases) and scientific (combining research results from different bioinformatics repositories, for example) domains. Data integration appears with increasing frequency as the volume of data (that is, big data) and the need to share existing data explode. It has become the focus of extensive theoretical work, and numerous open problems remain unsolved. Data integration encourages collaboration between internal as well as external users. The data being integrated must be received from a heterogeneous database system and transformed to a single coherent data store that provides synchronous data across a network of files for clients. A common use of data integration is in data mining when analyzing and extracting information from existing databases that can be useful for business information.
https://en.wikipedia.org/wiki?curid=4780372
630,988
Biological neuron models, also known as spiking neuron models, are mathematical descriptions of the properties of certain cells in the nervous system that generate sharp electrical potentials across their cell membrane, roughly one millisecond in duration, called action potentials or spikes. Since spikes are transmitted along the axon and synapses from the sending neuron to many other neurons, spiking neurons are considered to be a major information processing unit of the nervous system. Spiking neuron models can be divided into different categories: the most detailed mathematical models are biophysical neuron models (also called Hodgkin-Huxley models) that describe the membrane voltage as a function of the input current and the activation of ion channels. Mathematically simpler are integrate-and-fire models that describe the membrane voltage as a function of the input current and predict the spike times without a description of the biophysical processes that shape the time course of an action potential. Even more abstract models only predict output spikes (but not membrane voltage) as a function of the stimulation, where the stimulation can occur through sensory input or pharmacologically. This article provides a short overview of different spiking neuron models and links them, whenever possible, to experimental phenomena. It includes deterministic and probabilistic models.
https://en.wikipedia.org/wiki?curid=14408479
631,093
In the case of modelling a biological neuron, physical analogues are used in place of abstractions such as "weight" and "transfer function". A neuron is filled and surrounded with water containing ions, which carry electric charge. The neuron is bound by an insulating cell membrane and can maintain a concentration of charged ions on either side that determines a capacitance. The firing of a neuron involves the movement of ions into the cell that occurs when neurotransmitters cause ion channels on the cell membrane to open. We describe this by a physical time-dependent current. With this comes a change in voltage, or the electrical potential energy difference between the cell and its surroundings, which is observed to sometimes result in a voltage spike called an action potential which travels the length of the cell and triggers the release of further neurotransmitters. The voltage, then, is the quantity of interest.
https://en.wikipedia.org/wiki?curid=14408479
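A minimal leaky integrate-and-fire sketch in the spirit of the description above: a membrane capacitance, an injected current, and a spike-and-reset rule when a threshold voltage is crossed. All parameter values are illustrative assumptions, not taken from the article:

```python
import numpy as np

# Leaky integrate-and-fire: C dV/dt = -(V - V_rest)/R + I(t); spike and reset at threshold.
C, R = 1.0, 10.0                                  # membrane capacitance (nF) and resistance (MOhm)
V_rest, V_thresh, V_reset = -70.0, -54.0, -70.0   # resting, threshold and reset voltages (mV)
dt, T = 0.1, 200.0                                # time step and duration (ms)

time = np.arange(0.0, T, dt)
I = np.where((time > 50) & (time < 150), 2.0, 0.0)   # injected current pulse (nA)

V = np.full_like(time, V_rest)
spikes = []
for k in range(1, len(time)):
    dV = (-(V[k - 1] - V_rest) / R + I[k - 1]) / C    # membrane equation, Euler step
    V[k] = V[k - 1] + dt * dV
    if V[k] >= V_thresh:                              # threshold crossed: spike and reset
        spikes.append(time[k])
        V[k] = V_reset

print(f"{len(spikes)} spikes while the current pulse is on")
```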
632,662
In 2021, Kelly, Hart and Rose produced a μCF model whereby the ratio, Q, of thermal energy produced to the kinetic energy of the accelerated deuterons used to create negative pions (and thus negative muons through pion decay) was optimized. In this model, the heat energy of the incoming deuterons as well as that of the particles produced due to the deuteron beam impacting a tungsten target was recaptured to the extent possible, as suggested by Gordon Pusch in the previous paragraph. Additionally, heat energy due to tritium breeding in a lithium-lead shell was recaptured, as suggested by Jändel, Danos and Rafelski in 1988. The best Q value was found to be about 130% assuming that 50% of the muons produced were actually utilized for fusion catalysis. Furthermore, assuming that the accelerator was 18% efficient at transforming electrical energy into deuteron kinetic energy and conversion efficiency of heat energy into electrical energy of 60%, they estimate that, currently, the amount of electrical energy that could be produced by a μCF reactor would be 14% of the electrical energy consumed. In order for this to improve, they suggest that some combination of a) increasing accelerator efficiency and b) increasing the number of fusion reactions per negative muon above the assumed level of 150 would be needed.
https://en.wikipedia.org/wiki?curid=285048
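The 14% figure quoted at the end follows directly from the stated assumptions; a short check using only the numbers given in the passage:

```python
# Muon-catalysed fusion electricity balance from the passage.
Q = 1.30                  # thermal energy out per unit of deuteron kinetic energy in
accel_efficiency = 0.18   # electrical energy -> deuteron kinetic energy
thermal_to_electric = 0.60

electric_out_per_electric_in = accel_efficiency * Q * thermal_to_electric
print(f"electrical output as a fraction of electrical input: {electric_out_per_electric_in:.0%}")  # ~14%
```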
643,334
Cell adhesion is the process by which cells interact and attach to neighbouring cells through specialised molecules of the cell surface. This process can occur either through direct contact between cell surfaces such as cell junctions or indirect interaction, where cells attach to surrounding extracellular matrix, a gel-like structure containing molecules released by cells into spaces between them. Cell adhesion occurs through interactions between cell-adhesion molecules (CAMs), transmembrane proteins located on the cell surface. Cell adhesion links cells in different ways and can be involved in signal transduction for cells to detect and respond to changes in the surroundings. Other cellular processes regulated by cell adhesion include cell migration and tissue development in multicellular organisms. Alterations in cell adhesion can disrupt important cellular processes and lead to a variety of diseases, including cancer and arthritis. Cell adhesion is also essential for infectious organisms, such as bacteria or viruses, to cause diseases.
https://en.wikipedia.org/wiki?curid=531064
660,043
The asymptotic complexity is defined by the most efficient (in terms of whatever computational resource one is considering) algorithm for solving the game; the most common complexity measure (computation time) is always lower-bounded by the logarithm of the asymptotic state-space complexity, since a solution algorithm must work for every possible state of the game. It will be upper-bounded by the complexity of any particular algorithm that works for the family of games. Similar remarks apply to the second-most commonly used complexity measure, the amount of space or computer memory used by the computation. It is not obvious that there is any lower bound on the space complexity for a typical game, because the algorithm need not store game states; however many games of interest are known to be PSPACE-hard, and it follows that their space complexity will be lower-bounded by the logarithm of the asymptotic state-space complexity as well (technically the bound is only a polynomial in this quantity; but it is usually known to be linear).
https://en.wikipedia.org/wiki?curid=317419
661,637
Games of chance are not merely pure applications of probability calculus and gaming situations are not just isolated events whose numerical probability is well established through mathematical methods; they are also games whose progress is influenced by human action. In gambling, the human element has a striking character. The player is not only interested in the mathematical probability of the various gaming events, but he or she has expectations from the games while a major interaction exists. To obtain favorable results from this interaction, gamblers take into account all possible information, including statistics, to build gaming strategies. The oldest and most common betting system is the martingale, or doubling-up, system on even-money bets, in which bets are doubled progressively after each loss until a win occurs. This system probably dates back to the invention of the roulette wheel. Two other well-known systems, also based on even-money bets, are the d’Alembert system (based on theorems of the French mathematician Jean Le Rond d’Alembert), in which the player increases his bet by one unit after each loss but decreases it by one unit after each win, and the Labouchere system (devised by the British politician Henry Du Pré Labouchere, although the basis for it was invented by the 18th-century French philosopher Marie-Jean-Antoine-Nicolas de Caritat, marquis de Condorcet), in which the player increases or decreases his bets according to a certain combination of numbers chosen in advance. The predicted average gain or loss is called "expectation" or expected value and is the sum of the probability of each possible outcome of the experiment multiplied by its payoff (value). Thus, it represents the average amount one expects to win per bet if bets with identical odds are repeated many times. A game or situation in which the expected value for the player is zero (no net gain or loss) is called a "fair game". The attribute "fair" refers not to the technical process of the game, but to the balance of chances between the house (bank) and the player.
https://en.wikipedia.org/wiki?curid=7804447
678,866
Approaches to quantum theory, like QBism, which treat quantum states as expressions of information, knowledge, belief, or expectation, are called "epistemic" interpretations. These approaches differ from each other in what they consider quantum states to be information or expectations "about", as well as in the technical features of the mathematics they employ. Furthermore, not all authors who advocate views of this type propose an answer to the question of what the information represented in quantum states concerns. In the words of the paper that introduced the Spekkens Toy Model: "if a quantum state is a state of knowledge, and it is not knowledge of local and noncontextual hidden variables, then what is it knowledge about? We do not at present have a good answer to this question. We shall therefore remain completely agnostic about the nature of the reality to which the knowledge represented by quantum states pertains. This is not to say that the question is not important. Rather, we see the epistemic approach as an unfinished project, and this question as the central obstacle to its completion. Nonetheless, we argue that even in the absence of an answer to this question, a case can be made for the epistemic view. The key is that one can hope to identify phenomena that are characteristic of states of incomplete knowledge regardless of what this knowledge is about." Leifer and Spekkens propose a way of treating quantum probabilities as Bayesian probabilities, thereby considering quantum states as epistemic, which they state is "closely aligned in its philosophical starting point" with QBism. However, they remain deliberately agnostic about what physical properties or entities quantum states are information (or beliefs) about, as opposed to QBism, which offers an answer to that question. Another approach, advocated by Bub and Pitowsky, argues that quantum states are information about propositions within event spaces that form non-Boolean lattices. On occasion, the proposals of Bub and Pitowsky are also called "quantum Bayesianism".
https://en.wikipedia.org/wiki?curid=35611432
681,191
It is generally accepted that archive data (i.e. data which never changes), regardless of its storage medium, is data at rest, while active data subject to constant or frequent change is data in use. “Inactive data” could be taken to mean data which may change, but infrequently. The imprecise nature of terms such as “constant” and “frequent” means that some stored data cannot be comprehensively defined as either data at rest or in use. These definitions could be taken to assume that data at rest is a superset of data in use; however, data in use, subject to frequent change, has distinct processing requirements from data at rest, whether completely static or subject to occasional change.
https://en.wikipedia.org/wiki?curid=33993923
724,076
Energy released (or absorbed) because of a reaction between chemical substances ("reactants") is equal to the difference between the energy content of the products and the reactants. This change in energy is called the change in internal energy of a chemical system. It can be calculated from formula_1, the internal energy of formation of the reactant molecules related to the bond energies of the molecules under consideration, and formula_2, the internal energy of formation of the product molecules. The change in internal energy is equal to the heat change if it is measured under conditions of constant volume (at STP condition), as in a closed rigid container such as a bomb calorimeter. However, at constant pressure, as in reactions in vessels open to the atmosphere, the measured heat is usually not equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is called the enthalpy change; in this case the widely tabulated enthalpies of formation are used.)
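As a hedged illustration of the bond-energy bookkeeping described above, the following Python sketch estimates the energy change of a simple gas-phase reaction; the bond-energy values and the chosen reaction (combustion of hydrogen) are typical textbook figures used here as assumptions, not data from the source.

```python
# Approximate average bond energies in kJ/mol (typical textbook values; illustrative only).
BOND_ENERGY = {"H-H": 436, "O=O": 498, "O-H": 463}

def reaction_energy(bonds_broken, bonds_formed):
    """Energy change ~ energy required to break reactant bonds minus energy released
    forming product bonds.  A negative result means the reaction releases energy."""
    energy_in = sum(BOND_ENERGY[b] * n for b, n in bonds_broken.items())
    energy_out = sum(BOND_ENERGY[b] * n for b, n in bonds_formed.items())
    return energy_in - energy_out

# 2 H2 + O2 -> 2 H2O: break 2 H-H bonds and 1 O=O bond, form 4 O-H bonds.
delta_e = reaction_energy({"H-H": 2, "O=O": 1}, {"O-H": 4})
print(f"estimated energy change: {delta_e} kJ per 2 mol of H2O")   # about -482 kJ
```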
https://en.wikipedia.org/wiki?curid=5936
744,546
The Klein–Gordon equation was first considered as a quantum wave equation by Schrödinger in his search for an equation describing de Broglie waves. The equation is found in his notebooks from late 1925, and he appears to have prepared a manuscript applying it to the hydrogen atom. Yet, because it fails to take into account the electron's spin, the equation predicts the hydrogen atom's fine structure incorrectly, including overestimating the overall magnitude of the splitting pattern by a factor of 4"n"/(2"n" − 1) for the "n"-th energy level. The Dirac equation relativistic spectrum is, however, easily recovered if the orbital-momentum quantum number ℓ is replaced by the total angular-momentum quantum number "j". In January 1926, Schrödinger submitted for publication instead "his" equation, a non-relativistic approximation that predicts the Bohr energy levels of hydrogen without fine structure.
https://en.wikipedia.org/wiki?curid=209627
750,680
One strength of agent-based modelling is its ability to mediate information flow between scales. When additional details about an agent are needed, a researcher can integrate it with models describing the extra details. When one is interested in the emergent behaviours demonstrated by the agent population, they can combine the agent-based model with a continuum model describing population dynamics. For example, in a study about CD4+ T cells (a key cell type in the adaptive immune system), the researchers modelled biological phenomena occurring at different spatial (intracellular, cellular, and systemic), temporal, and organizational scales (signal transduction, gene regulation, metabolism, cellular behaviors, and cytokine transport). In the resulting modular model, signal transduction and gene regulation are described by a logical model, metabolism by constraint-based models, cell population dynamics are described by an agent-based model, and systemic cytokine concentrations by ordinary differential equations. In this multi-scale model, the agent-based model occupies the central place and orchestrates every stream of information flow between scales.
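A minimal sketch of the kind of coupling described above, assuming an invented toy system: an agent layer of cells whose behaviour is set by a simple internal rule (standing in for the logical signalling model), and a continuum variable, a systemic cytokine concentration, updated by an ordinary differential equation that the agents both drive and read. All class names, rates, and thresholds are illustrative.

```python
import random

class TCell:
    """One agent: a simple internal rule (a stand-in for a logical signalling model)
    decides whether the cell activates and divides, given the ambient cytokine level."""
    def __init__(self, activated=False):
        self.activated = activated

    def step(self, cytokine, rng):
        self.activated = self.activated or cytokine > 0.5   # toy activation rule
        return self.activated and rng.random() < 0.1        # True -> the cell divides

def simulate(steps=50, seed=1):
    rng = random.Random(seed)
    cells = [TCell(activated=(i < 5)) for i in range(100)]  # a few initially activated cells
    cytokine, dt, production, decay = 0.0, 1.0, 0.01, 0.05
    for _ in range(steps):
        # Continuum layer: dC/dt = production * (number of active cells) - decay * C (Euler step).
        active = sum(c.activated for c in cells)
        cytokine += dt * (production * active - decay * cytokine)
        # Agent layer: every cell reads the shared concentration and acts on it.
        cells += [TCell(activated=True) for c in cells if c.step(cytokine, rng)]
    return len(cells), round(cytokine, 3)

print(simulate())
```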
https://en.wikipedia.org/wiki?curid=985619
751,732
A statistical model is a probability distribution function proposed as generating data. In a parametric model, the probability distribution function has variable parameters, such as the mean and variance in a normal distribution, or the coefficients for the various exponents of the independent variable in linear regression. A nonparametric model has a distribution function without parameters, such as in bootstrapping, and is only loosely confined by assumptions. Model selection is a statistical method for selecting a distribution function within a class of them; e.g., in linear regression where the dependent variable is a polynomial of the independent variable with parametric coefficients, model selection is selecting the highest exponent, and may be done with nonparametric means, such as with cross validation.
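A brief sketch of model selection in the polynomial-regression setting mentioned above, assuming NumPy is available: the degree (highest exponent) is chosen by nonparametric cross-validation, while the coefficients at each candidate degree are fitted parametrically by least squares.

```python
import numpy as np

def cv_mse(x, y, degree, k=5):
    """Cross-validated mean squared error of a polynomial model of the given degree."""
    idx = np.random.default_rng(1).permutation(len(x))   # same folds for every degree
    folds = np.array_split(idx, k)
    errors = []
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        coeffs = np.polyfit(x[train], y[train], degree)   # parameter estimation
        pred = np.polyval(coeffs, x[fold])
        errors.append(np.mean((y[fold] - pred) ** 2))
    return np.mean(errors)

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 60)
y = 1.0 - 2.0 * x + 0.5 * x**2 + rng.normal(scale=0.1, size=x.size)   # true degree is 2

# Model selection: pick the highest exponent (degree) with the lowest cross-validated error.
scores = {d: cv_mse(x, y, d) for d in range(1, 8)}
print("selected degree:", min(scores, key=scores.get))
```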
https://en.wikipedia.org/wiki?curid=2381958
759,615
GENESIS ("Gemi Entegre Savaş İdare Sistemi", i.e., "Ship Integrated Combat Management System"), a network-centric warfare management system developed by HAVELSAN and initially used in the upgraded frigates of the Turkish Navy, was contracted for the first two corvettes on 23 May 2007. In the last Ada-class corvette, "Kınalıada", the ADVENT combat management system (an upgraded version of GENESIS) is installed instead of the GENESIS system. It is also planned for "Burgazada" to be retrofitted with the ADVENT combat management system. The class ships have a national hull mounted sonar developed by the Scientific and Technological Research Council of Turkey. The sonar dome has been developed by STM's subcontractor ONUK-BG Defence Systems, extensively employing nano-enhanced Fiber Reinforced Polymer. The Ada class features an Electronic Chart Precise Integrated Navigation System (ECPINS), supplied by OSI Geospatial. The Integrated Platform Management System (IPMS) for controlling machinery, auxiliary systems, power generation and distribution was delivered by STM's subcontractor Yaltes JV. The main systems integrated into the IPMS are the power management system, fire detection system, fire fighting, damage control system, CCTV system and stability control system.
https://en.wikipedia.org/wiki?curid=40390348
780,865
Higher-level behaviors implemented by a runtime system may include tasks such as drawing text on the screen or making an Internet connection. It is often the case that operating systems provide these kinds of behaviors as well, and when available, the runtime system is implemented as an abstraction layer that translates the invocation of the runtime system into an invocation of the operating system. This hides the complexity or variations in the services offered by different operating systems. This also implies that the OS kernel can itself be viewed as a runtime system, and that the set of OS calls that invoke OS behaviors may be viewed as interactions with a runtime system.
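A toy sketch of the abstraction-layer idea, under the assumption that writing a line of text counts as the higher-level runtime behaviour: the hypothetical runtime routine hides the platform's conventions and translates the request into an operating-system call. (It assumes a standard console environment where standard output exposes a file descriptor.)

```python
import os
import sys

def runtime_write(text):
    """A toy 'runtime system' routine: the program asks for a high-level behaviour
    (emit a line of text) and the runtime translates it into an OS-level call."""
    data = (text + os.linesep).encode()      # hide the platform's line-ending convention
    os.write(sys.stdout.fileno(), data)      # the underlying operating-system call

runtime_write("hello from the runtime layer")
```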
https://en.wikipedia.org/wiki?curid=2106840
786,123
The myenteric plexus functions as a part of the enteric nervous system (digestive system). The enteric nervous system can and does function autonomously, but normal digestive function requires communication links between this intrinsic system and the central nervous system. The ENS contains sensory receptors, primary afferent neurons, interneurons, and motor neurons. The events that are controlled, at least in part, by the ENS are multiple and include motor activity, secretion, absorption, blood flow, and interaction with other organs such as the gallbladder or pancreas. These links take the form of parasympathetic and sympathetic fibers that connect either the central and enteric nervous systems or connect the central nervous system directly with the digestive tract. Through these cross connections, the gut can provide sensory information to the CNS, and the CNS can affect gastrointestinal function. Connection to the central nervous system also means that signals from outside of the digestive system can be relayed to the digestive system: for instance, the sight of appealing food stimulates secretion in the stomach.
https://en.wikipedia.org/wiki?curid=730370
797,479
Recent research and study has been done in utilizing an organic solar cell as the top cell in a hybrid tandem solar cell stack. Because organic solar cells have a higher band gap than traditional inorganic photovoltaics like silicon or CIGS, they can absorb higher energy photons without losing much of the energy due to thermalization, and thus operate at a higher voltage. The lower energy photons and higher energy photons that are unabsorbed pass through the top organic solar cell and are then absorbed by the bottom inorganic cell. Organic solar cells are also solution processible at low temperatures with a low cost of 10 dollars per square meter, resulting in a printable top cell that improves the overall efficiencies of existing, inorganic solar cell technologies. Much research has been done to enable the formation of such a hybrid tandem solar cell stack, including research in the deposition of semi-transparent electrodes that maintain low contact resistance while having high transparency.
https://en.wikipedia.org/wiki?curid=18397250
818,152
The Bell–LaPadula model focuses on data confidentiality and controlled access to classified information, in contrast to the Biba Integrity Model which describes rules for the protection of data integrity. In this formal model, the entities in an information system are divided into subjects and objects. The notion of a "secure state" is defined, and it is proven that each state transition preserves security by moving from secure state to secure state, thereby inductively proving that the system satisfies the security objectives of the model. The Bell–LaPadula model is built on the concept of a state machine with a set of allowable states in a computer system. The transition from one state to another state is defined by transition functions. A system state is defined to be "secure" if the only permitted access modes of subjects to objects are in accordance with a security policy. To determine whether a specific access mode is allowed, the clearance of a subject is compared to the classification of the object (more precisely, to the combination of classification and set of compartments, making up the "security level") to determine if the subject is authorized for the specific access mode. The clearance/classification scheme is expressed in terms of a lattice. The model defines one discretionary access control (DAC) rule and two mandatory access control (MAC) rules with three security properties:
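A minimal sketch of the access check described above, assuming a simple encoding of a security level as a classification plus a set of compartments; the level names, the "read"/"write" modes, and the stubbed discretionary check are illustrative assumptions rather than the model's formal statement.

```python
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def dominates(a, b):
    """Level a dominates level b: classification at least as high and a superset of compartments."""
    (cls_a, comps_a), (cls_b, comps_b) = a, b
    return LEVELS[cls_a] >= LEVELS[cls_b] and comps_a >= comps_b

def bell_lapadula_allows(subject_level, object_level, mode, acl_allows):
    """MAC rules ('no read up' and 'no write down') plus a discretionary check,
    stubbed here as the boolean acl_allows (an access-matrix lookup in the model)."""
    if not acl_allows:                                       # discretionary (DAC) rule
        return False
    if mode == "read":
        return dominates(subject_level, object_level)        # simple security property
    if mode == "write":
        return dominates(object_level, subject_level)        # star property
    return False

analyst = ("secret", {"crypto"})
report = ("confidential", set())
print(bell_lapadula_allows(analyst, report, "read", acl_allows=True))    # True: reading down is allowed
print(bell_lapadula_allows(analyst, report, "write", acl_allows=True))   # False: no write down
```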
https://en.wikipedia.org/wiki?curid=423866
824,770
These experimental results proved to be consistent with the Bohr model for atoms that had been proposed the previous year by Niels Bohr. The Bohr model was a precursor of quantum mechanics and of the electron shell model of atoms. Its key feature was that an electron inside an atom occupies one of the atom's "quantum energy levels". Before the collision, an electron inside the mercury atom occupies its lowest available energy level. After the collision, the electron inside occupies a higher energy level with 4.9 electron volts (eV) more energy. This means that the electron is more loosely bound to the mercury atom. There were no intermediate levels or possibilities in Bohr's quantum model. This feature was "revolutionary" because it was inconsistent with the expectation that an electron could be bound to an atom's nucleus by any amount of energy.
https://en.wikipedia.org/wiki?curid=381805
826,862
An electrolytic cell is an electrochemical cell that utilizes an external source of electrical energy to force a chemical reaction that would not otherwise occur. The external energy source is a voltage applied between the cell's two electrodes: an anode (positively charged electrode) and a cathode (negatively charged electrode), which are immersed in an electrolyte solution. This is in contrast to a galvanic cell, which itself is a source of electrical energy and the foundation of a battery. The net reaction taking place in a galvanic cell is a spontaneous reaction, i.e., the Gibbs free energy change is negative, while the net reaction taking place in an electrolytic cell is the reverse of this spontaneous reaction, i.e., the Gibbs free energy change is positive.
https://en.wikipedia.org/wiki?curid=361021
827,988
Data quality refers to the state of qualitative or quantitative pieces of information. There are many definitions of data quality, but data is generally considered high quality if it is "fit for [its] intended uses in operations, decision making and planning". Moreover, data is deemed of high quality if it correctly represents the real-world construct to which it refers. Furthermore, apart from these definitions, as the number of data sources increases, the question of internal data consistency becomes significant, regardless of fitness for use for any particular external purpose. People's views on data quality can often be in disagreement, even when discussing the same set of data used for the same purpose. When this is the case, data governance is used to form agreed upon definitions and standards for data quality. In such cases, data cleansing, including standardization, may be required in order to ensure data quality.
https://en.wikipedia.org/wiki?curid=1609808
829,199
Although X.25 predates the OSI Reference Model (OSIRM), the physical layer of the OSI model corresponds to the X.25 "physical layer", the data link layer to the X.25 "data link layer", and the network layer to the X.25 "packet layer". The X.25 "data link layer", LAPB, provides a reliable data path across a data link (or multiple parallel data links, multilink) which may not be reliable itself. The X.25 "packet layer" provides the virtual call mechanisms, running over X.25 LAPB. The "packet layer" includes mechanisms to maintain virtual calls and to signal data errors in the event that the "data link layer" cannot recover from data transmission errors. All but the earliest versions of X.25 include facilities which provide for OSI network layer Addressing (NSAP addressing, see below).
https://en.wikipedia.org/wiki?curid=43336
839,781
Data vault's philosophy is that all data is relevant data, even if it is not in line with established definitions and business rules. If data are not conforming to these definitions and rules then that is a problem for the business, not the data warehouse. The determination of data being "wrong" is an interpretation of the data that stems from a particular point of view that may not be valid for everyone, or at every point in time. Therefore the data vault must capture all data and only when reporting or extracting data from the data vault is the data being interpreted.
https://en.wikipedia.org/wiki?curid=19260221
839,994
Three G0 states exist and can be categorized as either reversible (quiescent) or irreversible (senescent and differentiated). Each of these three states can be entered from the G1 phase before the cell commits to the next round of the cell cycle. Quiescence refers to a reversible G0 state where subpopulations of cells reside in a 'quiescent' state before entering the cell cycle after activation in response to extrinsic signals. Quiescent cells are often identified by low RNA content, lack of cell proliferation markers, and increased label retention indicating low cell turnover. Senescence is distinct from quiescence because senescence is an irreversible state that cells enter in response to DNA damage or degradation that would make a cell's progeny nonviable. Such DNA damage can occur from telomere shortening over many cell divisions as well as reactive oxygen species (ROS) exposure, oncogene activation, and cell-cell fusion. While senescent cells can no longer replicate, they remain able to perform many normal cellular functions. Senescence is often a biochemical alternative to the self-destruction of such a damaged cell by apoptosis. In contrast to cellular senescence, quiescence is not a reactive event but part of the core programming of several different cell types. Finally, differentiated cells are stem cells that have progressed through a differentiation program to reach a mature – terminally differentiated – state. Differentiated cells continue to stay in G0 and perform their main functions indefinitely.
https://en.wikipedia.org/wiki?curid=619064
844,640
Amyloid beta (Aβ) was found to cause neurotoxicity and cell death in the brain when present in high concentrations. Aβ results from a mutation that occurs when protein chains are cut at the wrong locations, resulting in chains of different lengths that are unusable. Thus they are left in the brain until they are broken down, but if enough accumulate, they form plaques which are toxic to neurons. Aβ uses several routes in the central nervous system to cause cell death. An example is through the nicotinic acetylcholine receptor (nAchRs), which is a receptor commonly found along the surfaces of the cells that respond to nicotine stimulation, turning them on or off. Aβ was found manipulating the level of nicotine in the brain along with the MAP kinase, another signaling receptor, to cause cell death. Another chemical in the brain that Aβ regulates is JNK; this chemical halts the extracellular signal-regulated kinases (ERK) pathway, which normally functions as memory control in the brain. As a result, this memory favoring pathway is stopped, and the brain loses essential memory function. The loss of memory is a symptom of neurodegenerative disease, including AD. Another way Aβ causes cell death is through the phosphorylation of AKT; this occurs as the phosphate group is bound to several sites on the protein. This phosphorylation allows AKT to interact with BAD, a protein known to cause cell death. Thus an increase in Aβ results in an increase of the AKT/BAD complex, in turn stopping the action of the anti-apoptotic protein Bcl-2, which normally functions to stop cell death, causing accelerated neuron breakdown and the progression of AD.
https://en.wikipedia.org/wiki?curid=546712
852,937
For an isolated "massive" system, the center of mass of the system moves in a straight line with a steady sub-luminal velocity (with a velocity depending on the reference frame used to view it). Thus, an observer can always be placed to move along with it. In this frame, which is the center-of-momentum frame, the total momentum is zero, and the system as a whole may be thought of as being "at rest" if it is a bound system (like a bottle of gas). In this frame, which exists under these assumptions, the invariant mass of the system is equal to the total system energy (in the zero-momentum frame) divided by c². This total energy in the center of momentum frame is the "minimum" energy which the system may be observed to have, when seen by various observers from various inertial frames.
https://en.wikipedia.org/wiki?curid=162321
853,163
Since Newton, scientists have extensively attempted to model the world. In particular, when a mathematical model is available (for instance, Newton's gravitational law or Coulomb's equation for electrostatics), we can foresee, given some parameters that describe a physical system (such as a distribution of mass or a distribution of electric charges), the behavior of the system. This approach is known as mathematical modeling and the above-mentioned physical parameters are called the model parameters or simply the model. To be precise, we introduce the notion of state of the physical system: it is the solution of the mathematical model's equation. In optimal control theory, these equations are referred to as the state equations. In many situations we are not truly interested in knowing the physical state but just its effects on some objects (for instance, the effects the gravitational field has on a specific planet). Hence we have to introduce another operator, called the observation operator, which converts the state of the physical system (here the predicted gravitational field) into what we want to observe (here the movements of the considered planet). We can now introduce the so-called forward problem, which consists of two steps:
https://en.wikipedia.org/wiki?curid=203956
853,183
As mentioned above, noise may be such that our measurements are not the image of any model, so that we cannot look for a model that produces the data but rather look for the best (or optimal) model: that is, the one that best matches the data. This leads us to minimize an objective function, namely a functional that quantifies how big the residuals are or how far the predicted data are from the observed data. Of course, when we have perfect data (i.e. no noise) then the recovered model should fit the observed data perfectly. A standard objective function, formula_56, is of the form:
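Since the specific form of formula_56 is not reproduced here, the sketch below assumes the usual least-squares choice, the sum of squared residuals between predicted and observed data, and shows it minimised for a toy linear forward operator; the operator, noise level, and true model are invented for illustration.

```python
import numpy as np

def objective(model, forward, observed):
    """Sum of squared residuals between predicted and observed data."""
    residuals = forward(model) - observed
    return float(residuals @ residuals)

# Toy linear forward problem d = G m with a known operator G (an assumption for illustration).
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 3))
true_model = np.array([1.0, -2.0, 0.5])
observed = G @ true_model + rng.normal(scale=0.05, size=20)   # noisy data

# The best (optimal) model minimises the objective; for a linear G this is ordinary least squares.
recovered, *_ = np.linalg.lstsq(G, observed, rcond=None)
print("recovered model:", np.round(recovered, 3))
print("misfit at the optimum:", objective(recovered, lambda m: G @ m, observed))
```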
https://en.wikipedia.org/wiki?curid=203956
859,246
The new Mode S system was intended to operate with just a single reply from an aircraft, a system known as monopulse. The accompanying diagram shows a conventional main or "sum" beam of an SSR antenna to which has been added a "difference" beam. To produce the sum beam the signal is distributed horizontally across the antenna aperture. This feed system is divided into two equal halves and the two parts summed again to produce the original sum beam. However the two halves are also subtracted to produce a difference output. A signal arriving exactly normal, or boresight, to the antenna will produce a maximum output in the sum beam but a zero signal in the difference beam. Away from boresight the signal in the sum beam will be less but there will be a non-zero signal in the difference beam. The angle of arrival of the signal can be determined by measuring the ratio of the signals between the sum and difference beams. The ambiguity about boresight can be resolved as there is a 180° phase change in the difference signal either side of boresight. Bearing measurements can be made on a single pulse, hence monopulse, but accuracy can be improved by averaging measurements made on several or all of the pulses received in a reply from an aircraft. A monopulse receiver was developed early in the UK Adsel programme and this design is still used widely today. Mode S reply pulses are deliberately designed to be similar to mode A and C replies so the same receiver can be used to provide improved bearing measurement for the SSR mode A and C system with the advantage that the interrogation rate can be substantially reduced thereby reducing the interference caused to other users of the system.
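A rough numerical sketch of the monopulse estimate, assuming a linearised small-angle model in which the sum-beam response is roughly constant near boresight and the difference-beam response grows in proportion to the off-boresight angle (changing sign across boresight); the slope and noise values are invented for illustration.

```python
import numpy as np

def monopulse_estimate(theta_deg, n_pulses=1, k=0.05, noise=0.01, seed=0):
    """Estimate the off-boresight angle from the difference/sum ratio of each reply pulse.
    k is the assumed slope of the difference pattern near boresight (ratio per degree)."""
    rng = np.random.default_rng(seed)
    s = 1.0 + rng.normal(scale=noise, size=n_pulses)            # sum-beam amplitudes
    d = k * theta_deg + rng.normal(scale=noise, size=n_pulses)   # difference-beam amplitudes
    # The sign of d (a 180-degree phase flip in a real receiver) resolves which side of boresight.
    return np.mean(d / s) / k

print(monopulse_estimate(2.0, n_pulses=1))    # a single pulse gives a noisier estimate
print(monopulse_estimate(2.0, n_pulses=16))   # averaging over the pulses in a reply improves accuracy
```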
https://en.wikipedia.org/wiki?curid=2056508
864,745
A schema is applied to a table in traditional databases. In such traditional databases, the table typically enforces the schema when the data is loaded into the table. This enables the database to make sure that the data entered follows the representation of the table as specified by the table definition. This design is called "schema on write". In comparison, Hive does not verify the data against the table schema on write. Instead, it subsequently does run time checks when the data is read. This model is called "schema on read". The two approaches have their own advantages and drawbacks. Checking data against table schema during the load time adds extra overhead, which is why traditional databases take a longer time to load data. Quality checks are performed against the data at the load time to ensure that the data is not corrupt. Early detection of corrupt data ensures early exception handling. Since the tables are forced to match the schema after/during the data load, it has better query time performance. Hive, on the other hand, can load data dynamically without any schema check, ensuring a fast initial load, but with the drawback of comparatively slower performance at query time. Hive does have an advantage when the schema is not available at the load time, but is instead generated later dynamically.
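A small Python sketch contrasting the two approaches, using an invented two-column schema; it is only an analogy for what a traditional database and Hive do, not their actual implementations.

```python
ORDERS_SCHEMA = {"order_id": int, "amount": float}

def load_schema_on_write(rows):
    """Traditional table load: every row is checked against the schema as it is written,
    so corrupt rows are rejected up front and the load itself is slower."""
    return [{col: typ(row[col]) for col, typ in ORDERS_SCHEMA.items()} for row in rows]

def load_schema_on_read(rows):
    """Hive-style load: rows are stored as-is, so the initial load is fast."""
    return list(rows)

def read_schema_on_read(table):
    """The schema is only applied when the data is read; rows that do not fit are skipped."""
    for row in table:
        try:
            yield {col: typ(row[col]) for col, typ in ORDERS_SCHEMA.items()}
        except (KeyError, ValueError, TypeError):
            continue   # the run-time check fails, so the row surfaces as unusable at query time

raw = [{"order_id": "1", "amount": "9.99"}, {"order_id": "two", "amount": "?"}]
print(list(read_schema_on_read(load_schema_on_read(raw))))   # one valid row at read time
try:
    load_schema_on_write(raw)
except ValueError as exc:
    print("rejected at load time:", exc)
```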
https://en.wikipedia.org/wiki?curid=30248516
871,347
For many diseases, pathological analysis of cells and tissues is considered to be the gold standard of disease diagnosis. AI-assisted pathology tools have been developed to assist with the diagnosis of a number of diseases, including breast cancer, hepatitis B, gastric cancer, and colorectal cancer. AI has also been used to predict genetic mutations and prognosticate disease outcomes. AI is well-suited for use in low-complexity pathological analysis of large-scale screening samples, such as colorectal or breast cancer screening, thus lessening the burden on pathologists and allowing for faster turnaround of sample analysis. Several deep learning and artificial neural network models have shown accuracy similar to that of human pathologists, and a study of deep learning assistance in diagnosing metastatic breast cancer in lymph nodes showed that the accuracy of humans with the assistance of a deep learning program was higher than either the humans alone or the AI program alone. Additionally, implementation of digital pathology is predicted to save over $12 million for a university center over the course of five years, though savings attributed to AI specifically have not yet been widely researched. The use of augmented and virtual reality could prove to be a stepping stone to wider implementation of AI-assisted pathology, as they can highlight areas of concern on a pathology sample and present them in real-time to a pathologist for more efficient review. AI also has the potential to identify histological findings at levels beyond what the human eye can see, and has shown the ability to utilize genotypic and phenotypic data to more accurately detect the tumor of origin for metastatic cancer. One of the major current barriers to widespread implementation of AI-assisted pathology tools is the lack of prospective, randomized, multi-center controlled trials in determining the true clinical utility of AI for pathologists and patients, highlighting a current area of need in AI and healthcare research.
https://en.wikipedia.org/wiki?curid=52588198
871,360
Beyond making content edits to an EHR, there are AI algorithms that evaluate an individual patient's record and predict a risk for a disease based on their previous information and family history. One general algorithm is a rule-based system that makes decisions similarly to how humans use flow charts. This system takes in large amounts of data and creates a set of rules that connect specific observations to concluded diagnoses. Thus, the algorithm can take in a new patient's data and try to predict the likeliness that they will have a certain condition or disease. Since the algorithms can evaluate a patient's information based on collective data, they can find any outstanding issues to bring to a physician's attention and save time. One study conducted by the Centerstone research institute found that predictive modeling of EHR data has achieved 70–72% accuracy in predicting individualized treatment response. These methods are helpful due to the fact that the amount of online health records doubles every five years. Physicians do not have the bandwidth to process all this data manually, and AI can leverage this data to assist physicians in treating their patients.
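A toy sketch of such a rule-based pass over a single record; the field names, thresholds, and rules are invented for illustration and are not clinical guidance or any cited system's logic.

```python
def assess_risk(record):
    """Apply a list of flow-chart-style rules to one patient's record and return the flags
    that a physician might want to review (illustrative thresholds only)."""
    rules = [
        ("possible type 2 diabetes", lambda r: r.get("fasting_glucose_mg_dl", 0) >= 126),
        ("hypertension follow-up", lambda r: r.get("systolic_bp", 0) >= 140),
        ("familial risk review", lambda r: "heart_disease" in r.get("family_history", [])),
    ]
    return [label for label, rule in rules if rule(record)]

patient = {"fasting_glucose_mg_dl": 131, "systolic_bp": 128,
           "family_history": ["heart_disease"]}
print(assess_risk(patient))   # flags for review, not a diagnosis
```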
https://en.wikipedia.org/wiki?curid=52588198
876,450
According to Walker, system 1 functions as a serial cognitive steering processor for system 2, rather than a parallel system. In large-scale repeated studies with school students, Walker tested how students adjusted their imagined self-operation in different curriculum subjects of maths, science and English. He showed that students consistently adjust the biases of their heuristic self-representation to specific states for the different curriculum subjects. The model of cognitive steering proposes that, in order to process epistemically varied environmental data, a heuristic orientation system is required to align varied, incoming environmental data with existing neural algorithmic processes. The brain's associative simulation capacity, centered around the imagination, plays an integrator role to perform this function. Evidence for early-stage concept formation and future self-operation within the hippocampus supports the model. In the cognitive steering model, a conscious state emerges from effortful associative simulation, required to align novel data accurately with remote memory, via later algorithmic processes. By contrast, fast unconscious automaticity is constituted by unregulated simulatory biases, which induce errors in subsequent algorithmic processes. The phrase ‘rubbish in, rubbish out' is used to explain errorful heuristic processing: errors will always occur if the accuracy of initial retrieval and location of data is poorly self-regulated.
https://en.wikipedia.org/wiki?curid=6240358
880,437
In vector calculus and differential geometry the generalized Stokes theorem (sometimes with apostrophe as Stokes' theorem or Stokes's theorem), also called the Stokes–Cartan theorem, is a statement about the integration of differential forms on manifolds, which both simplifies and generalizes several theorems from vector calculus. In particular, the fundamental theorem of calculus is the special case where the manifold is a line segment, and Stokes' theorem is the case of a surface in formula_1. Hence, the theorem is sometimes referred to as the Fundamental Theorem of Multivariate Calculus.
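In the usual notation, the statement and the two special cases mentioned above can be written as follows.

```latex
% Generalized Stokes theorem: for a compact oriented n-dimensional manifold M with
% boundary \partial M and a smooth (n-1)-form \omega,
\int_{\partial M} \omega = \int_{M} d\omega .

% n = 1, M = [a, b]: the fundamental theorem of calculus,
\int_{a}^{b} f'(x)\, dx = f(b) - f(a).

% n = 2, M a surface S in R^3: the classical Kelvin–Stokes theorem,
\oint_{\partial S} \mathbf{F} \cdot d\mathbf{r} = \iint_{S} (\nabla \times \mathbf{F}) \cdot d\mathbf{S}.
```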
https://en.wikipedia.org/wiki?curid=29040
885,704
Local area networking standards such as Ethernet and IEEE 802.3 specifications use terminology from the seven-layer OSI model rather than the TCP/IP model. The TCP/IP model, in general, does not consider physical specifications, rather it assumes a working network infrastructure that can deliver media-level frames on the link. Therefore, RFC 1122 and RFC 1123, the definition of the TCP/IP model, do not discuss hardware issues and physical data transmission and set no standards for those aspects. Some textbook authors have supported the interpretation that physical data transmission aspects are part of the link layer. Others assumed that physical data transmission standards are not considered communication protocols, and are not part of the TCP/IP model. These authors assume a hardware layer or physical layer below the link layer, and several of them adopt the OSI term data link layer instead of link layer in a modified description of layering. In the predecessor to the TCP/IP model, the "ARPAnet Reference Model" (RFC 908, 1982), aspects of the link layer are referred to by several poorly defined terms, such as "network-access layer", "network-access protocol", as well as "network layer", while the next higher layer is called "internetwork layer". In some modern textbooks, "network-interface layer", "host-to-network layer" and "network-access layer" occur as synonyms either to the link layer or the data link layer, often including the physical layer.
https://en.wikipedia.org/wiki?curid=30862590
894,318
A natural model of passive learning is Valiant's probably approximately correct (PAC) learning. Here the learner receives random examples (x,c(x)), where x is distributed according to some unknown distribution D. The learner's goal is to output a hypothesis function h such that h(x)=c(x) with high probability when x is drawn according to D. The learner has to be able to produce such an 'approximately correct' h for every D and every target concept c in its concept class. We can consider replacing the random examples by potentially more powerful quantum examples formula_11. In the PAC model (and the related agnostic model), this doesn't significantly reduce the number of examples needed: for every concept class, classical and quantum sample complexity are the same up to constant factors. However, for learning under some fixed distribution D, quantum examples can be very helpful, for example for learning DNF under the uniform distribution. When considering time complexity, there exist concept classes that can be PAC-learned efficiently by quantum learners, even from classical examples, but not by classical learners (again, under plausible complexity-theoretic assumptions).
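For reference, the standard (ε, δ) formalisation of "approximately correct" reads as follows; this is the usual textbook statement, not a quotation from the source.

```latex
% (\varepsilon, \delta)-PAC learning: for every target concept c in the class, every
% distribution D, and every \varepsilon, \delta \in (0, 1), given enough examples
% (x, c(x)) with x \sim D, the learner outputs a hypothesis h such that
\Pr\bigl[\operatorname{err}_D(h) \le \varepsilon\bigr] \ge 1 - \delta,
\qquad \text{where } \operatorname{err}_D(h) = \Pr_{x \sim D}\bigl[h(x) \ne c(x)\bigr].
```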
https://en.wikipedia.org/wiki?curid=44108758
899,569
Bioenergetics is the part of biochemistry concerned with the energy involved in the making and breaking of chemical bonds in the molecules found in biological organisms. It can also be defined as the study of energy relationships and energy transformations and transductions in living organisms. The ability to harness energy from a variety of metabolic pathways is a property of all living organisms. Growth, development, anabolism and catabolism are some of the central processes in the study of biological organisms, because the role of energy is fundamental to such biological processes. Life is dependent on energy transformations; living organisms survive because of exchange of energy between living tissues/cells and the outside environment. Some organisms, such as autotrophs, can acquire energy from sunlight (through photosynthesis) without needing to consume nutrients and break them down. Other organisms, like heterotrophs, must intake nutrients from food to be able to sustain energy by breaking down chemical bonds in nutrients during metabolic processes such as glycolysis and the citric acid cycle. Importantly, as a direct consequence of the First Law of Thermodynamics, autotrophs and heterotrophs participate in a universal metabolic network: by eating autotrophs (plants), heterotrophs harness energy that was initially transformed by the plants during photosynthesis.
https://en.wikipedia.org/wiki?curid=1369210
900,451
An energy balance can be used to track energy through a system, and is a very useful tool for determining resource use and environmental impacts, using the First and Second laws of thermodynamics, to determine how much energy is needed at each point in a system, and in what form that energy is a cost in various environmental issues. The energy accounting system keeps track of energy in, energy out, and non-useful energy versus work done, and transformations within the system.
https://en.wikipedia.org/wiki?curid=177694
913,830
In probability and statistics, the quantile function, associated with a probability distribution of a random variable, specifies the value of the random variable such that the probability of the variable being less than or equal to that value equals the given probability. Intuitively, the quantile function maps a probability input "p" to the value at or below which the random variable falls with probability "p". It is also called the percentile function, percent-point function or inverse cumulative distribution function.
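A small Python sketch, assuming NumPy, showing the quantile function as the inverse of the cumulative distribution function for the exponential distribution, together with its empirical (percentile) counterpart.

```python
import numpy as np

def quantile_exponential(p, rate=1.0):
    """Quantile (inverse CDF) of the exponential distribution: the value q with
    P(X <= q) = p, i.e. the solution of 1 - exp(-rate * q) = p."""
    p = np.asarray(p, dtype=float)
    return -np.log1p(-p) / rate

print(quantile_exponential(0.5))   # median of Exp(1), about 0.693

# The quantile function inverts the cumulative distribution function:
q = quantile_exponential(0.9)
print(1 - np.exp(-q))              # recovers the probability 0.9

# Empirical version for data, i.e. the percentile function:
samples = np.random.default_rng(0).exponential(size=10_000)
print(np.quantile(samples, 0.9))   # close to the theoretical 0.9-quantile
```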
https://en.wikipedia.org/wiki?curid=10167616
931,644
Normally, once a tissue is injured or infected, damaged cells elicit inflammation by stimulating specific patterns of enzyme activity and cytokine gene expression in surrounding cells. Discrete clusters ("cytokine clusters") of molecules are secreted, which act as mediators, inducing the activity of subsequent cascades of biochemical changes. Each cytokine binds to specific receptors on various cell types, and each cell type responds in turn by altering the activity of intracellular signal transduction pathways, depending on the receptors that the cell expresses and the signaling molecules present inside the cell. Collectively, this reprogramming process induces a stepwise change in cell phenotypes, which will ultimately lead to restoration of tissue function and toward regaining essential structural integrity. A tissue can thereby heal, depending on the productive communication between the cells present at the site of damage and the immune system. One key factor in healing is the regulation of cytokine gene expression, which enables complementary groups of cells to respond to inflammatory mediators in a manner that gradually produces essential changes in tissue physiology. Cancer cells have either permanent (genetic) or reversible (epigenetic) changes to their genome, which partly inhibit their communication with surrounding cells and with the immune system. Cancer cells do not communicate with their tissue microenvironment in a manner that protects tissue integrity; instead, the movement and the survival of cancer cells become possible in locations where they can impair tissue function. Cancer cells survive by "rewiring" signal pathways that normally protect the tissue from the immune system. This alteration of the immune response is evident in early stages of malignancy too.
https://en.wikipedia.org/wiki?curid=2332422
940,444
To impute missing data in statistics, NMF can take missing data while minimizing its cost function, rather than treating these missing data as zeros. This makes it a mathematically proven method for data imputation in statistics. By first proving that the missing data are ignored in the cost function, then proving that the impact from missing data can be as small as a second order effect, Ren et al. (2020) studied and applied such an approach for the field of astronomy. Their work focuses on two-dimensional matrices, specifically, it includes mathematical derivation, simulated data imputation, and application to on-sky data.
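A minimal sketch of the idea, assuming NumPy: a weighted (masked) variant of multiplicative-update NMF in which missing entries carry zero weight in the cost function, so they are ignored rather than treated as zeros. This is a generic masked NMF for illustration, not the specific algorithm of Ren et al.

```python
import numpy as np

def nmf_impute(X, mask, rank=2, iters=500, eps=1e-9, seed=0):
    """Minimise ||mask * (X - W H)||^2 with multiplicative updates; entries where
    mask == 0 are missing and do not enter the cost, so W H imputes them."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    Xm = np.where(mask, X, 0.0)                     # missing entries contribute nothing
    for _ in range(iters):
        WH = W @ H
        W *= (Xm @ H.T) / ((mask * WH) @ H.T + eps)
        WH = W @ H
        H *= (W.T @ Xm) / (W.T @ (mask * WH) + eps)
    return W @ H                                    # reconstruction; missing entries are imputed

X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [3.0, 6.0, 9.0]])                     # rank-1 matrix for illustration
mask = np.ones_like(X)
mask[1, 2] = 0                                      # pretend X[1, 2] is missing
print(nmf_impute(X, mask, rank=1)[1, 2])            # close to the true value 6
```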
https://en.wikipedia.org/wiki?curid=3681279
944,191
On the other hand, for systems which are unbound, the "closure" of the system may be enforced by an idealized surface, inasmuch as no mass–energy can be allowed into or out of the test-volume over time, if conservation of system invariant mass is to hold during that time. If a force is allowed to act on (do work on) only one part of such an unbound system, this is equivalent to allowing energy into or out of the system, and the condition of "closure" to mass–energy (total isolation) is violated. In this case, conservation of invariant mass of the system also will no longer hold. Such a loss of rest mass in systems when energy is removed, according to Δ"m" = Δ"E"/c², where Δ"E" is the energy removed and Δ"m" is the change in rest mass, reflects changes of mass associated with movement of energy, not "conversion" of mass to energy.
https://en.wikipedia.org/wiki?curid=491022
944,193
In special relativity, mass is not "converted" to energy, for all types of energy still retain their associated mass. Neither energy nor invariant mass can be destroyed in special relativity, and each is separately conserved over time in closed systems. Thus, a system's invariant mass may change "only" because invariant mass is allowed to escape, perhaps as light or heat. Thus, when reactions (whether chemical or nuclear) release energy in the form of heat and light, if the heat and light is "not" allowed to escape (the system is closed and isolated), the energy will continue to contribute to the system rest mass, and the system mass will not change. Only if the energy is released to the environment will the mass be lost; this is because the associated mass has been allowed out of the system, where it contributes to the mass of the surroundings.
https://en.wikipedia.org/wiki?curid=491022
956,636
where "u"("t") is an input signal, "y"("t") is an output signal, the vector x represents a set of "n" state variables describing the device, and "g" and "f" are continuous functions. For a current-controlled memristive system the signal "u"("t") represents the current signal "i"("t") and the signal "y"("t") represents the voltage signal "v"("t"). For a voltage-controlled memristive system the signal "u"("t") represents the voltage signal "v"("t") and the signal "y"("t") represents the current signal "i"("t").
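The equations to which the variables above refer are presumably the standard Chua–Kang form of a memristive system; for reference (stated here as an assumption, since the formulas themselves are not reproduced in this excerpt):

```latex
% General memristive system (Chua–Kang form):
\dot{\mathbf{x}} = f(\mathbf{x}, u, t), \qquad y(t) = g(\mathbf{x}, u, t)\, u(t).
% Current-controlled case: u = i and y = v, so v(t) = R(\mathbf{x}, i, t)\, i(t),
% a resistance that depends on the internal state x.
```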
https://en.wikipedia.org/wiki?curid=14246162
958,458
Under the name of 'quantum tagging', the first position-based quantum schemes were investigated in 2002 by Kent. A US patent was granted in 2006. The notion of using quantum effects for location verification first appeared in the scientific literature in 2010. After several other quantum protocols for position verification were suggested in 2010, Buhrman et al. claimed a general impossibility result: using an enormous amount of quantum entanglement (they use a doubly exponential number of EPR pairs, in the number of qubits the honest player operates on), colluding adversaries are always able to make it look to the verifiers as if they were at the claimed position. However, this result does not exclude the possibility of practical schemes in the bounded- or noisy-quantum-storage model (see above). Later, Beigi and König improved the number of EPR pairs needed in the general attack against position-verification protocols to exponential. They also showed that a particular protocol remains secure against adversaries who control only a linear number of EPR pairs. It has also been argued that, due to time-energy coupling, the possibility of formal unconditional location verification via quantum effects remains an open problem. It is worth mentioning that the study of position-based quantum cryptography also has connections to the protocol of port-based quantum teleportation, which is a more advanced version of quantum teleportation, where many EPR pairs are simultaneously used as ports.
https://en.wikipedia.org/wiki?curid=28676005
958,833
A generative algorithm models how the data was generated in order to categorize a signal. It asks the question: based on my generation assumptions, which category is most likely to generate this signal? A discriminative algorithm does not care about how the data was generated, it simply categorizes a given signal. So, discriminative algorithms try to learn formula_27 directly from the data and then try to classify data. On the other hand, generative algorithms try to learn formula_28 which can be transformed into formula_27 later to classify the data. One of the advantages of generative algorithms is that you can use formula_28 to generate new data similar to existing data. On the other hand, it has been proved that some discriminative algorithms give better performance than some generative algorithms in classification tasks.
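A compact sketch of the contrast, reading formula_27 as the conditional distribution p(y|x) and formula_28 as the joint (or class-conditional) distribution, which is the usual interpretation but an assumption here: the generative side fits per-class Gaussians (and can also sample new data), while the discriminative side fits p(y|x) directly with logistic regression. The data and learning rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two classes of one-dimensional signals with different means (toy data).
x = np.concatenate([rng.normal(-1, 1, 200), rng.normal(+1, 1, 200)])
y = np.array([0] * 200 + [1] * 200)

# Generative approach: model how each class generates data, p(x | y) and p(y)...
mu = [x[y == k].mean() for k in (0, 1)]
sigma = [x[y == k].std() for k in (0, 1)]
prior = [np.mean(y == k) for k in (0, 1)]

def gaussian(v, m, s):
    return np.exp(-(v - m) ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi))

def classify_generative(v):
    # ...then use Bayes' rule to obtain p(y | x) and classify.
    post = [prior[k] * gaussian(v, mu[k], sigma[k]) for k in (0, 1)]
    return int(post[1] > post[0])

# A generative model can also produce new data resembling the training data:
new_sample = rng.normal(mu[1], sigma[1])

# Discriminative approach: fit p(y | x) directly (logistic regression by gradient descent).
w, b = 0.0, 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

print(classify_generative(0.8), int(1 / (1 + np.exp(-(w * 0.8 + b))) > 0.5))
```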
https://en.wikipedia.org/wiki?curid=1222578
965,503
Work can be said to be done on a thermodynamic system, otherwise than by transfer of matter, by an external system that lies in the surroundings and is not necessarily a thermodynamic system as strictly defined by the usual thermodynamic state variables. Part of such surroundings-defined work can have a mechanism just as for system-defined thermodynamic work done by the system, while the rest of such surroundings-defined work appears, to the thermodynamic system, not as a negative amount of thermodynamic work done by it, but, rather, as heat transferred to it. The paddle stirring experiments of Joule provide an example, illustrating the concept of isochoric (or constant volume) mechanical work, in this case sometimes called "shaft work". Such work is not thermodynamic work as defined here, because it acts through friction, within, and on the surface of, the thermodynamic system, and does not act through macroscopic forces that the system can spontaneously exert on its surroundings, describable by its state variables. Surroundings-defined work can also be non-mechanical. An example is Joule heating, because it occurs through friction as the electric current passes through the thermodynamic system. When it is done isochorically, and no matter is transferred, such an energy transfer is regarded as a heat transfer into the system of interest.
https://en.wikipedia.org/wiki?curid=3616613
965,525
The definition of thermodynamic work is in terms of the changes of the system's extensive deformation (and chemical constitutive and certain other) state variables, such as volume, molar chemical constitution, or electric polarisation. Examples of state variables that are not extensive deformation or other such variables are temperature "T" and entropy "S", as for example in the expression "T" d"S". Changes of such variables are not actually physically measurable by use of a single simple adiabatic thermodynamic process; they are processes that occur neither by thermodynamic work nor by transfer of matter, and therefore are said to occur by heat transfer. The quantity of thermodynamic work is defined as work done by the system on its surroundings. According to the second law of thermodynamics, such work is irreversible. To get an actual and precise physical measurement of a quantity of thermodynamic work, it is necessary to take account of the irreversibility by restoring the system to its initial condition by running a cycle, for example a Carnot cycle, that includes the target work as a step. The work done by the system on its surroundings is calculated from the quantities that constitute the whole cycle. A different cycle would be needed to actually measure the work done by the surroundings on the system. This is a reminder that rubbing the surface of a system appears to the rubbing agent in the surroundings as mechanical, though not thermodynamic, work done on the system, not as heat, but appears to the system as heat transferred to the system, not as thermodynamic work. The production of heat by rubbing is irreversible; historically, it was a piece of evidence for the rejection of the caloric theory of heat as a conserved substance. The irreversible process known as Joule heating also occurs through a change of a non-deformation extensive state variable.
https://en.wikipedia.org/wiki?curid=3616613
965,532
When work, for example pressure–volume work, is done on its surroundings by a closed system that cannot pass heat in or out because it is confined by an adiabatic wall, the work is said to be adiabatic for the system as well as for the surroundings. When mechanical work is done on such an adiabatically enclosed system by the surroundings, it can happen that friction in the surroundings is negligible, for example in the Joule experiment with the falling weight driving paddles that stir the system. Such work is adiabatic for the surroundings, even though it is associated with friction within the system. Such work may or may not be isochoric for the system, depending on the system and its confining walls. If it happens to be isochoric for the system (and does not eventually change other system state variables such as magnetization), it appears as a heat transfer to the system, and does not appear to be adiabatic for the system.
https://en.wikipedia.org/wiki?curid=3616613
965,556
Non-mechanical work contrasts with pressure–volume work. Pressure–volume work is one of the two mainly considered kinds of mechanical contact work. A force acts on the interfacing wall between system and surroundings. The force is that due to the pressure exerted on the interfacing wall by the material inside the system; that pressure is an internal state variable of the system, but is properly measured by external devices at the wall. The work is due to change of system volume by expansion or contraction of the system. If the system expands, in the present article it is said to do positive work on the surroundings. If the system contracts, in the present article it is said to do negative work on the surroundings. Pressure–volume work is a kind of contact work, because it occurs through direct material contact with the surrounding wall or matter at the boundary of the system. It is accurately described by changes in state variables of the system, such as the time courses of changes in the pressure and volume of the system. The volume of the system is classified as a "deformation variable", and is properly measured externally to the system, in the surroundings. Pressure–volume work can have either positive or negative sign. Pressure–volume work, performed slowly enough, can be made to approach the fictive reversible quasi-static ideal.
https://en.wikipedia.org/wiki?curid=3616613
967,615
Level 1 is a cab signalling system that can be superimposed on the existing signalling system, leaving the fixed signalling system (national signalling and track-release system) in place. Eurobalise radio beacons pick up signal aspects from the trackside signals via signal adapters and telegram coders ("Lineside Electronics Unit" – LEU) and transmit them to the vehicle as a "movement authority" together with route data at fixed points. The on-board computer continuously monitors and calculates the maximum speed and the "braking curve" from these data. Because of the spot transmission of data, the train must travel over the Eurobalise beacon to obtain the next "movement authority". In order for a stopped train to be able to move (when the train is not stopped exactly over a balise), there are optical signals that show permission to proceed. With the installation of additional Eurobalises (""infill balises"") or a "EuroLoop" between the distant signal and main signal, the new proceed aspect is transmitted continuously. The EuroLoop is an extension of the Eurobalise over a particular distance that basically allows data to be transmitted continuously to the vehicle over cables emitting electromagnetic waves. A radio version of the EuroLoop is also possible.
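A toy sketch of the braking-curve supervision idea, assuming a constant service-brake deceleration and ignoring the reaction times, gradients, and safety margins that real ETCS curves include; the numbers are illustrative.

```python
import math

def permitted_speed(distance_to_eoa_m, line_speed_kmh=160.0, brake_decel=0.7):
    """Toy braking-curve supervision: the highest speed (km/h) from which a brake at
    brake_decel m/s^2 still stops the train before the end of the movement authority."""
    v_ms = math.sqrt(2.0 * brake_decel * max(distance_to_eoa_m, 0.0))
    return min(v_ms * 3.6, line_speed_kmh)

# As the train approaches the end of its movement authority, the speed ceiling drops.
for d in (2000, 1000, 500, 100, 0):
    print(d, "m ->", round(permitted_speed(d), 1), "km/h")
```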
https://en.wikipedia.org/wiki?curid=2214121
979,112
Since the cost of energy has become a significant factor in the economic performance of societies, management of energy resources has become crucial. Energy management involves utilizing the available energy resources more effectively; that is, with minimum incremental costs. Many times it is possible to save expenditure on energy through simple management techniques, without incorporating fresh technology. Most often, energy management is the practice of using energy more efficiently by eliminating energy wastage or by balancing justifiable energy demand with appropriate energy supply. The process couples energy awareness with energy conservation.
https://en.wikipedia.org/wiki?curid=2935251
981,839
Both terms, zero energy buildings and green buildings, have similarities and differences. "Green" buildings often focus on operational energy, and disregard the embodied carbon footprint from construction. According to the IPCC, embodied carbon will make up half of the total carbon emissions between now (2020) and 2050. On the other hand, zero energy buildings are specifically designed to produce enough energy from renewable energy sources to meet their own consumption requirements, while a green building can be generally defined as a building that reduces negative impacts on, or positively impacts, the natural environment. There are several factors that must be considered before a building is determined to be a green building. Building a green building must include efficient use of utilities such as water and energy, use of renewable energy, use of recycling and reusing practices to reduce waste, provision of proper indoor air quality, use of ethically sourced and non-toxic materials, use of a design that allows the building to adapt to changing environmental climates, and aspects of the design, construction, and operational process that address the environment and the quality of life of its occupants. The term green building can also be used to refer to the practice of green building, which includes being resource efficient from its design, to its construction, to its operational processes, and ultimately to its deconstruction. The practice of green building differs slightly from zero energy buildings because it considers all environmental impacts, such as the use of materials and water pollution, whereas the scope of zero energy buildings only includes the building's energy consumption and its ability to produce an equal amount, or more, of energy from renewable energy sources.
https://en.wikipedia.org/wiki?curid=4211531
995,883
The model is initially fit on a training data set, which is a set of examples used to fit the parameters (e.g. weights of connections between neurons in artificial neural networks) of the model. The model (e.g. a naive Bayes classifier) is trained on the training data set using a supervised learning method, for example using optimization methods such as gradient descent or stochastic gradient descent. In practice, the training data set often consists of pairs of an input vector (or scalar) and the corresponding output vector (or scalar), where the answer key is commonly denoted as the "target" (or "label"). The current model is run with the training data set and produces a result, which is then compared with the "target", for each input vector in the training data set. Based on the result of the comparison and the specific learning algorithm being used, the parameters of the model are adjusted. The model fitting can include both variable selection and parameter estimation.
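A minimal sketch of this fitting loop, assuming NumPy and using a plain linear model with gradient descent as a stand-in for the more elaborate models mentioned above; the data and learning rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Training data set: pairs of input vectors and corresponding target values.
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
targets = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)                              # model parameters to be fitted
for epoch in range(1000):
    predictions = X @ w                      # run the current model on the training set
    error = predictions - targets            # compare the result with the target
    w -= 0.05 * (X.T @ error) / len(X)       # adjust the parameters (gradient descent step)

print("fitted parameters:", np.round(w, 3))  # close to [2, -1, 0.5]
```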
https://en.wikipedia.org/wiki?curid=1514392
996,071
In order to allow the use of a G1 ballistic coefficient rather than velocity data, Dr. Pejsa provided two reference drag curves. The first reference drag curve is based purely on the Siacci/Mayevski retardation rate function. The second reference drag curve is adjusted to equal the Siacci/Mayevski retardation rate function at a projectile velocity of 2600 fps (792.5 m/s) using a .30-06 Springfield Cartridge, Ball, Caliber .30 M2 rifle spitzer bullet with a slope or deceleration constant factor of 0.5 in the supersonic flight regime. In other flight regimes, the second Pejsa reference drag curve model uses slope constant factors of 0.0 or -4.0. These deceleration constant factors can be verified by backing out Pejsa's formulas (the drag curve segments fit the form V^(2−n)/C and the retardation coefficient curve segments fit the form V^2/(V^(2−n)/C) = C·V^n, where C is a fitting coefficient). The empirical test data Pejsa used to determine the exact shape of his chosen reference drag curve and pre-defined mathematical function that returns the retardation coefficient at a given Mach number was provided by the US military for the Cartridge, Ball, Caliber .30 M2 bullet. The calculation of the retardation coefficient function also involves air density, which Pejsa did not mention explicitly. The Siacci/Mayevski G1 model uses the following deceleration parametrization (60 °F, 30 inHg and 67% humidity, air density ρ = 1.2209 kg/m³). Dr. Pejsa suggests using the second drag curve because the Siacci/Mayevski G1 drag curve does not provide a good fit for modern spitzer bullets. To obtain relevant retardation coefficients for optimal long range modeling, Dr. Pejsa suggested using accurate projectile-specific down range velocity measurement data for a particular projectile to empirically derive the average retardation coefficient, rather than using a reference drag curve derived average retardation coefficient. Further, he suggested using ammunition with reduced propellant loads to empirically test actual projectile flight behavior at lower velocities. When working with reduced propellant loads, utmost care must be taken to avoid dangerous or catastrophic conditions (detonations) which can occur when firing experimental loads in firearms.
https://en.wikipedia.org/wiki?curid=584911
997,004
In a report published in 1926 in Transactions of the Faraday Society, James Franck was concerned with the mechanisms of photon-induced chemical reactions. The presumed mechanism was the excitation of a molecule by a photon, followed by a collision with another molecule during the short period of excitation. The question was whether it was possible for a molecule to break into photoproducts in a single step, the absorption of a photon, and without a collision. In order for a molecule to break apart, it must acquire from the photon a vibrational energy exceeding the dissociation energy, that is, the energy to break a chemical bond. However, as was known at the time, molecules will only absorb energy corresponding to allowed quantum transitions, and there are no vibrational levels above the dissociation energy level of the potential well. High-energy photon absorption leads to a transition to a higher electronic state instead of dissociation. In examining how much vibrational energy a molecule could acquire when it is excited to a higher electronic level, and whether this vibrational energy could be enough to immediately break apart the molecule, he drew three diagrams representing the possible changes in binding energy between the lowest electronic state and higher electronic states.
https://en.wikipedia.org/wiki?curid=1590747
1,001,887
The notion that self-organising biological systems – like a cell or brain – can be understood as minimising variational free energy is based upon Helmholtz’s work on unconscious inference and subsequent treatments in psychology and machine learning. Variational free energy is a function of observations and a probability density over their hidden causes. This variational density is defined in relation to a probabilistic model that generates predicted observations from hypothesized causes. In this setting, free energy provides an approximation to Bayesian model evidence. Therefore, its minimisation can be seen as a Bayesian inference process. When a system actively makes observations to minimise free energy, it implicitly performs active inference and maximises the evidence for its model of the world.
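In the usual notation (stated here as an assumption, since the source does not reproduce the formula), the variational free energy and its relation to model evidence can be written as:

```latex
% Variational free energy for observations o, hidden causes z, generative model p(o, z)
% and variational density q(z):
F[q, o] = \mathbb{E}_{q(z)}\bigl[\ln q(z) - \ln p(o, z)\bigr]
        = D_{\mathrm{KL}}\bigl[q(z) \,\|\, p(z \mid o)\bigr] - \ln p(o).
% Because the KL divergence is non-negative, F upper-bounds the negative log evidence
% -\ln p(o); minimising F over q therefore approximates Bayesian inference, and acting
% to minimise F increases the evidence for the model of the world.
```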
https://en.wikipedia.org/wiki?curid=39403556
1,001,915
Concerning the top-down vs bottom-up controversy that has been addressed as a major open problem of attention, a computational model has succeeded in illustrating the circulatory nature of reciprocation between top-down and bottom-up mechanisms. Using an established emergent model of attention, namely, SAIM, the authors suggested a model called PE-SAIM that – in contrast to the standard version – approaches the selective attention from a top-down stance. The model takes into account the forwarding prediction errors sent to the same level or a level above to minimize the energy function indicating the difference between data and its cause or – in other words – between the generative model and posterior. To enhance validity, they also incorporated the neural competition between the stimuli in their model. A notable feature of this model is the reformulation of the free energy function only in terms of prediction errors during the task performance:
https://en.wikipedia.org/wiki?curid=39403556