Columns: doc_id (int32), text (string), source (string)
1,689,752
A deep energy retrofit (DER) can be broadly categorized as an energy conservation measure in an existing building that also leads to an overall improvement in building performance. While there is no exact definition of a deep energy retrofit, it can be described as a whole-building analysis and construction process that aims to reduce on-site energy use by 50% or more compared to the baseline energy use (calculated from a utility bill analysis), making use of existing technologies, materials and construction practices. Unlike a conventional energy retrofit, such a retrofit yields multiple energy and non-energy benefits beyond energy cost savings. It may also involve remodeling the building to achieve a balance among energy use, indoor air quality, durability, and thermal comfort. An integrated project delivery method is recommended for a deep energy retrofit project. A phased, over-time approach to a deep energy retrofit addresses the large upfront cost of executing the project all at once.
https://en.wikipedia.org/wiki?curid=27882571
1,689,910
In 1972, the first model to forecast storm surge along the continental shelf of the United States was developed, known as the Special Program to List the Amplitude of Surges from Hurricanes (SPLASH). In 1978, the first hurricane-tracking model based on atmospheric dynamics – the movable fine-mesh (MFM) model – began operating. The Quasi-Lagrangian Limited Area (QLM) model is a multi-level primitive equation model using a Cartesian grid and the Global Forecast System (GFS) for boundary conditions. In the early 1980s, the assimilation of satellite-derived winds from water vapor, infrared, and visible satellite imagery was found to improve tropical cyclone track forecasting. The Geophysical Fluid Dynamics Laboratory (GFDL) hurricane model was used for research purposes between 1973 and the mid-1980s. Once it was determined that it could show skill in hurricane prediction, a multi-year transition transformed the research model into an operational model which could be used by the National Weather Service for both track and intensity forecasting in 1995. By 1985, the Sea Lake and Overland Surges from Hurricanes (SLOSH) model had been developed for use in areas of the Gulf of Mexico and near the United States' East Coast, which was more robust than the SPLASH model.
https://en.wikipedia.org/wiki?curid=4142447
1,713,604
Proof complexity measures the efficiency of a proof system, usually in terms of the minimal size of proofs possible in the system for a given tautology. The size of a proof (respectively, formula) is the number of symbols needed to represent the proof (respectively, formula). A propositional proof system "P" is "polynomially bounded" if there exists a constant c such that every tautology of size n has a "P"-proof of size at most n^c. A central question of proof complexity is to understand whether tautologies admit polynomial-size proofs. Formally, this asks whether there exists a polynomially bounded propositional proof system; by the Cook–Reckhow theorem, such a system exists if and only if NP = coNP.
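A compact restatement, as a sketch in standard notation (the symbols c, n, τ and π are introduced here for illustration and are not taken from the passage):

```latex
% Polynomially bounded propositional proof system (sketch of the standard definition)
P \text{ is polynomially bounded} \iff
\exists\, c \ge 1 \;\; \forall \text{ tautologies } \tau \text{ of size } n :
\;\; \exists \text{ a } P\text{-proof } \pi \text{ of } \tau \text{ with } |\pi| \le n^{c}.
```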
https://en.wikipedia.org/wiki?curid=2801284
1,718,763
The capability of a runtime verifier to detect errors strictly depends on its capability to analyze execution traces. When the monitors are deployed with the system, instrumentation is typically minimal and the execution traces are kept as simple as possible to keep the runtime overhead low. When runtime verification is used for testing, one can afford more comprehensive instrumentation that augments events with important system information, which the monitors can use to construct and therefore analyze more refined models of the executing system. For example, augmenting events with vector clock information and with data and control flow information allows the monitors to construct a "causal model" of the running system in which the observed execution was only one possible instance. Any other permutation of events that is consistent with the model is a feasible execution of the system, which could happen under a different thread interleaving. Detecting property violations in such inferred executions (by monitoring them) lets the monitor "predict" errors that did not happen in the observed execution but that can happen in another execution of the same system. An important research challenge is to extract models from execution traces that comprise as many other execution traces as possible.
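For illustration, a minimal sketch (the event representation and thread names are hypothetical, not taken from the passage) of how vector-clock-annotated events let a monitor decide whether two observed events are causally ordered or could be permuted in another feasible interleaving:

```python
# Minimal vector-clock sketch: each event carries a per-thread logical clock.
def merge(vc_a, vc_b):
    """Component-wise maximum of two vector clocks (dicts thread -> counter)."""
    return {t: max(vc_a.get(t, 0), vc_b.get(t, 0)) for t in set(vc_a) | set(vc_b)}

def happens_before(vc_a, vc_b):
    """True if event A causally precedes event B (A's clock <= B's everywhere, < somewhere)."""
    leq = all(vc_a.get(t, 0) <= vc_b.get(t, 0) for t in vc_a)
    strict = any(vc_a.get(t, 0) < vc_b.get(t, 0) for t in set(vc_a) | set(vc_b))
    return leq and strict

def concurrent(vc_a, vc_b):
    """Concurrent events may be reordered in another feasible execution."""
    return not happens_before(vc_a, vc_b) and not happens_before(vc_b, vc_a)

# Example: a write in thread "t1" and a read in thread "t2" with no causal link.
write_ev = {"t1": 2, "t2": 0}
read_ev = {"t1": 1, "t2": 3}
print(concurrent(write_ev, read_ev))  # True: the monitor may permute them
```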
https://en.wikipedia.org/wiki?curid=3098816
1,718,884
Quantum stochastic calculus is a generalization of stochastic calculus to noncommuting variables. The tools provided by quantum stochastic calculus are of great use for modeling the random evolution of systems undergoing measurement, as in quantum trajectories. Just as the Lindblad master equation provides a quantum generalization to the Fokker–Planck equation, quantum stochastic calculus allows for the derivation of quantum stochastic differential equations (QSDE) that are analogous to classical Langevin equations.
https://en.wikipedia.org/wiki?curid=41263617
1,737,562
The idea is related to a property of the bosonic string in a curved background, better known as the nonlinear sigma model. First calculations in this model showed that the beta function, which describes the running of the model's metric as a function of an energy scale, is proportional to the Ricci tensor, giving rise to a Ricci flow. Because this model has conformal invariance, which must be preserved to have a sensible quantum field theory, the beta function must vanish, which immediately produces the Einstein field equations. While the Einstein equations seem to appear somewhat out of place, this result is nevertheless striking, showing that a background two-dimensional model can produce higher-dimensional physics. An interesting point is that such a string theory can be formulated without requiring criticality at 26 dimensions for consistency, as happens on a flat background. This is a serious hint that the underlying physics of the Einstein equations could be described by an effective two-dimensional conformal field theory. Indeed, the fact that we have evidence for an inflationary universe is an important support for string cosmology.
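Schematically, and at leading order in the inverse string tension α′, the metric beta function and the conformal-invariance condition take the standard textbook form below (stated here as a sketch for a purely gravitational background, not quoted from the passage):

```latex
% Leading-order metric beta function of the nonlinear sigma model
\beta^{G}_{\mu\nu} = \alpha'\, R_{\mu\nu} + O(\alpha'^{2}), \qquad
\beta^{G}_{\mu\nu} = 0 \;\Longrightarrow\; R_{\mu\nu} = 0
\quad \text{(vacuum Einstein equations).}
```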
https://en.wikipedia.org/wiki?curid=2864696
1,750,597
The time complexity of an algorithm counts the number of arithmetic operations sufficient for the algorithm to solve the problem. For example, Gaussian elimination requires on the order of D^3 operations, and so it is said to have polynomial time-complexity, because its complexity is bounded by a cubic polynomial. There are examples of algorithms that do not have polynomial-time complexity. For example, a generalization of Gaussian elimination called Buchberger's algorithm has for its complexity an exponential function of the problem data (the degree of the polynomials and the number of variables of the multivariate polynomials). Because exponential functions eventually grow much faster than polynomial functions, an exponential complexity implies that an algorithm has slow performance on large problems.
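A minimal sketch (illustrative only, not from the passage) of forward elimination on a D × D system, showing where the roughly D^3 arithmetic operations come from: three nested loops, each of length at most D.

```python
import numpy as np

def forward_eliminate(A, b):
    """Reduce A x = b to upper-triangular form; ~D^3 operations for a D x D matrix.
    Sketch only: no pivoting, so nonzero diagonal entries are assumed."""
    A, b = A.astype(float).copy(), b.astype(float).copy()
    D = len(b)
    for k in range(D):                     # pivot column
        for i in range(k + 1, D):          # rows below the pivot
            factor = A[i, k] / A[k, k]
            A[i, k:] -= factor * A[k, k:]  # ~D multiplications and subtractions
            b[i] -= factor * b[k]
    return A, b

A = np.array([[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]])
b = np.array([1.0, 2.0, 3.0])
U, c = forward_eliminate(A, b)  # U is upper triangular; back-substitution would follow
```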
https://en.wikipedia.org/wiki?curid=31255067
1,750,606
The criss-cross algorithm is a simply stated algorithm for linear programming. It was the second fully combinatorial algorithm for linear programming. The partially combinatorial simplex algorithm of Bland cycles on some (nonrealizable) oriented matroids. The first fully combinatorial algorithm was published by Todd, and it is also like the simplex algorithm in that it preserves feasibility after the first feasible basis is generated; however, Todd's rule is complicated. The criss-cross algorithm is not a simplex-like algorithm, because it need not maintain feasibility. The criss-cross algorithm does not have polynomial time-complexity, however.
https://en.wikipedia.org/wiki?curid=31255067
1,760,176
In 2013, Lopaeva "et al." exploited photon number correlations, instead of entanglement, in a sub-optimal target detection experiment. To illustrate the benefit of quantum entanglement, in 2013 Zhang "et al." reported a secure communication experiment based on quantum illumination and demonstrated for the first time that entanglement can enable a substantial performance advantage in the presence of quantum decoherence. In 2015, Zhang "et al." applied quantum illumination in sensing and showed that employing entanglement can yield a higher signal-to-noise ratio than the optimal classical scheme can provide, even though the highly lossy and noisy environment completely destroys the initial entanglement. This sensing experiment thus proved the original theoretical proposals of quantum illumination. The first experimental effort to perform microwave quantum illumination was based on using a Josephson parametric amplifier and a digital receiver. As applied to imaging, in 2019 England "et al." demonstrated this principle by imaging through noise in a scanning configuration. The first full-field imaging system based on quantum illumination that uses spatially entangled photon pairs for imaging in the presence of noise and losses was reported in two successive publications in 2019 and 2020 by two research groups from the University of Glasgow.
https://en.wikipedia.org/wiki?curid=41129282
1,766,525
System of Environmental-Economic Accounting (SEEA) is a framework to compile statistics linking environmental statistics to economic statistics. SEEA is described as a satellite system to the United Nations System of National Accounts (SNA). This means that the definitions, guidelines and practical approaches of the SNA are applied to the SEEA. This system enables environmental statistics to be compared to economic statistics as the system boundaries are the same after some processing of the input statistics. By analysing statistics on the economy and the environment at the same time it is possible to show different patterns of sustainability for production and consumption. It can also show the economic consequences of maintaining a certain environmental standard.
https://en.wikipedia.org/wiki?curid=30308263
1,775,524
Margolus' original application for the block cellular automaton model was to simulate the billiard ball model of reversible computation, in which Boolean logic signals are simulated by moving particles and logic gates are simulated by elastic collisions of those particles. It is possible, for instance, to perform billiard-ball computations in the two-dimensional Margolus model, with two states per cell, and with the number of live cells conserved by the evolution of the model. In the "BBM" rule that simulates the billiard-ball model in this way, signals consist of single live cells, moving diagonally. To accomplish this motion, the block transition function replaces a block containing a single live cell with another block in which the cell has been moved to the opposite corner of the block. Similarly, elastic collisions may be performed by a block transition function that replaces two diagonally opposite live cells by the other two cells of the block. In all other configurations of a block, the block transition function makes no change to its state. In this model, rectangles of live cells (carefully aligned with respect to the partition) remain stable, and may be used as mirrors to guide the paths of the moving particles. For instance, the illustration of the Margolus neighborhood shows four particles and a mirror; if the next step uses the blue partition, then two particles are moving towards the mirror while the other two are about to collide, whereas if the next step uses the red partition, then two particles are moving away from the mirror and the other two have just collided and will move apart from each other.
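A minimal sketch (an illustrative implementation, not taken from the passage) of the block transition function described above, operating on 2 × 2 Margolus blocks represented as tuples of four cells in the order (top-left, top-right, bottom-left, bottom-right):

```python
def bbm_block_step(block):
    """Apply the BBM rule to one 2x2 block given as (tl, tr, bl, br) with 0/1 cells."""
    tl, tr, bl, br = block
    live = tl + tr + bl + br
    if live == 1:
        # A single particle moves to the opposite corner of the block.
        return (br, bl, tr, tl)
    if live == 2 and ((tl and br) or (tr and bl)):
        # Two diagonally opposite particles collide: replace them by the other diagonal.
        return (1 - tl, 1 - tr, 1 - bl, 1 - br)
    # All other configurations (including filled rectangles used as mirrors) are unchanged.
    return block

print(bbm_block_step((1, 0, 0, 0)))  # -> (0, 0, 0, 1): particle moves diagonally
print(bbm_block_step((1, 0, 0, 1)))  # -> (0, 1, 1, 0): elastic collision
print(bbm_block_step((1, 1, 1, 1)))  # -> unchanged (part of a stable rectangle/mirror)
```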
https://en.wikipedia.org/wiki?curid=2399633
1,794,383
Various energy transformations are possible. An energy balance can be used to track energy through a system. This becomes a useful tool for determining resource use and environmental impacts. How much energy is needed at each point in a system is measured, as well as the form of that energy. An accounting system keeps track of energy in, energy out, non-useful energy versus work done, and transformations within a system. Non-useful work is often what is responsible for environmental problems.
https://en.wikipedia.org/wiki?curid=1323604
1,798,883
Ischemic cell death, or oncosis, is a form of accidental cell death. The process is characterized by ATP depletion within the cell leading to impairment of ionic pumps, cell swelling, clearing of the cytosol, dilation of the endoplasmic reticulum and Golgi apparatus, mitochondrial condensation, chromatin clumping, and cytoplasmic bleb formation. Oncosis refers to a series of cellular reactions following injury that precedes cell death. The process of oncosis is divided into three stages. First, the cell becomes committed to oncosis as a result of damage incurred to the plasma membrane through toxicity or ischemia, resulting in the leak of ions and water due to ATP depletion. The ionic imbalance that occurs subsequently causes the cell to swell without a concurrent change in membrane permeability to reverse the swelling. In stage two, the reversibility threshold for the cell is passed and the cell becomes committed to cell death. During this stage the membrane becomes abnormally permeable to trypan blue and propidium iodide, indicating membrane compromise. The final stage is cell death and removal of the cell via phagocytosis mediated by an inflammatory response.
https://en.wikipedia.org/wiki?curid=28325705
1,803,694
The Klee–Minty cube has been used to analyze the performance of many algorithms, both in the worst case and on average. The time complexity of an algorithm counts the number of arithmetic operations sufficient for the algorithm to solve the problem. For example, Gaussian elimination requires on the order of D^3 operations, and so it is said to have polynomial time-complexity, because its complexity is bounded by a cubic polynomial. There are examples of algorithms that do not have polynomial-time complexity. For example, a generalization of Gaussian elimination called Buchberger's algorithm has for its complexity an exponential function of the problem data (the degree of the polynomials and the number of variables of the multivariate polynomials). Because exponential functions eventually grow much faster than polynomial functions, an exponential complexity implies that an algorithm has slow performance on large problems.
https://en.wikipedia.org/wiki?curid=31302509
1,809,148
Contact inhibition of locomotion is involved in the migration of many cell types, including neural crest (NC) cells in vertebrates which give rise to cells of the peripheral nervous system (PNS), facial cartilage, and other non-neural cells throughout the body. NC cells are very mobile, with actin-rich protrusions at the leading edge of each cell in the direction of travel. When an NC cell collides with another NC cell, activation of the Wnt planar cell polarity (PCP) signaling pathway occurs at the point of cell contact, causing localized activation of the downstream effector RhoA. This activation is likely caused by interactions between cadherins on the cell surfaces, and leads to the retraction of the cell protrusions and a change in the cell’s polarity, causing the NC cell to change direction. Interestingly, this contact inhibition of locomotion among NC cells is coupled with chemical coattraction between NC cells, which allows the cells to keep in motion for efficient migration as well as to stay together, respectively, leading to collective migration.
https://en.wikipedia.org/wiki?curid=1082943
1,822,781
After an introductory chapter overviewing related topics including quantum cryptography, quantum information theory, and quantum game theory, chapter 2 introduces quantum mechanics and quantum superposition using polarized light as an example, also discussing qubits, the Bloch sphere representation of the state of a qubit, and quantum key distribution. Chapter 3 introduces direct sums, tensor products, and quantum entanglement, and chapter 4 includes the EPR paradox, Bell's theorem on the impossibility of local hidden variable theories, as quantified by Bell's inequality. Chapter 5 discusses unitary operators, quantum logic gates, quantum circuits, and functional completeness for systems of quantum gates. Chapter 6, the final chapter of the building block section, discusses (classical) reversible computing, and the conversion of arbitrary computations to reversible computations, a necessary step to performing them on quantum devices.
https://en.wikipedia.org/wiki?curid=63448622
1,828,037
The latent internal energy of a system is the internal energy a system requires to undergo a phase transition. Its value is specific to the substance or mix of substances in question. The value can also vary with temperature and pressure. Generally speaking, the value differs according to the type of phase change being accomplished. Examples include the latent internal energy of vaporization (liquid to vapor), of crystallization (liquid to solid), and of sublimation (solid to vapor). These values are usually expressed in units of energy per mole or per mass, such as J/mol or BTU/lb. Often a negative sign is used to represent energy being withdrawn from the system, while a positive value represents energy being added to the system.
https://en.wikipedia.org/wiki?curid=8546071
1,829,859
In computational complexity theory, the class QIP (which stands for Quantum Interactive Polynomial time) is the quantum computing analogue of the classical complexity class IP, which is the set of problems solvable by an interactive proof system with a polynomial-time verifier and one computationally unbounded prover. Informally, IP is the set of languages for which a computationally unbounded prover can convince a polynomial-time verifier to accept when the input is in the language (with high probability) and cannot convince the verifier to accept when the input is not in the language (again, with high probability). In other words, the prover and verifier may interact for polynomially many rounds, and if the input is in the language the verifier should accept with probability greater than 2/3, and if the input is not in the language, the verifier should reject with probability greater than 2/3. In IP, the verifier is like a BPP machine. In QIP, the communication between the prover and verifier is quantum, and the verifier can perform quantum computation. In this case the verifier is like a BQP machine.
https://en.wikipedia.org/wiki?curid=25608560
1,831,032
A biochemical basis for the signal transduction from the cytoskeleton to the nucleus resulting in changes in gene expression was first proposed by Björklund (now Gordon) and Gordon in 1993. This would result in a biochemical transduction of the biomechanical signal from the cytoskeleton that is thereby passed on to the nucleus. This then signals the changes in gene expression. If the cell has experienced contraction, one signal is sent, and if the cell has experienced expansion, another signal is sent. The signal from the cytoskeleton is what causes determination of cell fate. The phenomenon of gene gradients during development is dismissed as an epiphenomenon resulting from the passage of the biomechanical wave initiating changes in gene expression in individual cells as the wave passes through a cell sheet. They have outlined their research and their theory of differentiation waves in detail in their book "Embryogenesis Explained". For example, the first differentiation that takes place during mammalian compaction is explained in terms of their differentiation waves model thus: cells on the outside of the morula expand due to the effect of their position on the outside of the early ball of cells, and they become determined to be trophoblast. Cells on the inside of the ball instead contract due to the mechanical force of being on the inside, and they become determined to be the inner cell mass. All the other activity, such as changes in gene expression, signalling proteins, release of morphogens, and epigenetic changes, is considered the result of differentiation after the response of the cytoskeleton to mechanical signals, which then determines cell fate using purely mechanical signals. Failure of neural tube closure is explained as a failure of methylation of the cytoskeleton of developing neural tissue for those neural tube defects which are folate-sensitive and prevented by folic acid supplementation.
https://en.wikipedia.org/wiki?curid=55661946
1,834,924
The DCC Curation Lifecycle Model is especially relevant to three key participants in the digital curation process: data creators, data archivists, and data reusers. The model highlights the importance of steps taken at data creation, such as assigning metadata, for successful, sustainable curation practices. This is relevant to data creators. Data archivists will find the model beneficial as an outline of the processes necessary to guarantee the thoroughness of their curation actions. Finally, because the model outlines the aforementioned steps, it prompts the successful curation of data and, therefore, the ability of that data to be accessed and reused in the future.
https://en.wikipedia.org/wiki?curid=6630735
1,837,145
Cytorrhysis is the permanent and irreparable damage to the cell wall after the complete collapse of a plant cell due to the loss of internal positive pressure (hydraulic turgor pressure). Positive pressure within a plant cell is required to maintain the upright structure of the cell wall. Desiccation (relative water content of less than or equal to 10%) resulting in cellular collapse occurs when the ability of the plant cell to regulate turgor pressure is compromised by environmental stress. Water continues to diffuse out of the cell after the point of zero turgor pressure, where internal cellular pressure is equal to the external atmospheric pressure, has been reached, generating negative pressure within the cell. That negative pressure pulls the center of the cell inward until the cell wall can no longer withstand the strain. The inward pressure causes the majority of the collapse to occur in the central region of the cell, pushing the organelles within the remaining cytoplasm against the cell walls. Unlike in plasmolysis (a phenomenon that does not occur in nature), the plasma membrane maintains its connections with the cell wall both during and after cellular collapse.
https://en.wikipedia.org/wiki?curid=1597688
1,838,749
MICRO runs under the Michigan Terminal System (MTS), the interactive time-sharing system developed at the University of Michigan that runs on IBM System/360 Model 67, System/370, and compatible mainframe computers. MICRO provides a query language, a database directory, and a data dictionary to create an interface between the user and the very efficient proprietary Set-Theoretic Data Structure (STDS) software developed by the Set-Theoretic Information Systems Corporation (STIS) of Ann Arbor, Michigan. The lower level routines from STIS treat the data bases as sets and perform set operations on them, e.g., union, intersection, restrictions, etc. Although the underlying STDS model is based on set theory, the MICRO user interface is similar to those subsequently used in relational database management systems. MICRO's data representation can be thought of as a matrix or table in which the rows represent different records or "cases", and the columns contain individual data items for each record; however, the actual data representation is in set-theoretic form. In labor market applications the rows typically represent job applicants or employees and columns represent fields such as age, sex, and income or type of industry, number of employees, and payroll.
https://en.wikipedia.org/wiki?curid=7872666
1,841,842
Once all regulatory proteins, etc. have been synthesized and the scaffold has been established, the cell has attained its own specific expression profile. This allows it to synthesize cell-specific enzymes and receptors characteristic of its particular function. The nuclear scaffold is predicted to be relatively permanent for a given cell type, but induction of a signaling pathway—by ligand binding, cell:cell contact, or some other mechanism—can temporarily shift the expression profile. When such a signal changes the expression of genes coding for INMs or chromatin-modifying enzymes, it can induce differentiation into a different cell type. Thus, the Nuclear Scaffold Theory predicts that symmetric cell division occurs when a daughter cell contains the same complement of INMs as the parent cell. Conversely, asymmetric cell division is expected to result in parent and daughter cells with different INM profiles.
https://en.wikipedia.org/wiki?curid=18939846
1,850,752
The IBM System/360 Model 195 is a discontinued IBM computer introduced on August 20, 1969. The Model 195 was a reimplementation of the IBM System/360 Model 91 design using monolithic integrated circuits. It offers "an internal processing speed about twice as fast as the Model 85, the next most powerful System/360". The Model 195 was discontinued on February 9, 1977, the same date as the System/370 Model 195.
https://en.wikipedia.org/wiki?curid=54417247
1,853,376
The laws of quantum mechanics allow one to reduce the required resources for some tasks by many orders of magnitude if the image data are encoded in the quantum state of a suitable physical system. The researchers discuss a suitable method for encoding image data and develop a new quantum algorithm that can detect boundaries among parts of an image with a single logical operation. This edge-detection operation is independent of the size of the image. Several other algorithms are also discussed, and it is demonstrated theoretically and experimentally that they work in practice. This is the first experiment to demonstrate practical quantum image processing. It contributes substantial progress towards both theoretical and experimental quantum computing for image processing and will stimulate future studies in the field of quantum information processing of visual data.
https://en.wikipedia.org/wiki?curid=55351593
1,898,405
A synthetic air data system (SADS) is an alternative air data system that can produce synthetic air data quantities without directly measuring the air data. It uses other information such as GPS, wind information, the aircraft's attitude, and aerodynamic properties to estimate or infer the air data quantities. Though air data includes altitude, airspeed, pressures, air temperature, Mach number, and flow angles (e.g., angle of attack and angle of sideslip), existing SADS implementations primarily focus on estimating airspeed, angle of attack, and angle of sideslip. SADS is used to monitor the primary air data system if there is an anomaly due to sensor faults or system faults. It can also potentially be used as a backup to provide air data estimates for any aerial vehicle.
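As a minimal sketch of the estimation idea (the function, frame conventions, and inputs below are illustrative assumptions, not a description of any particular SADS): subtract the wind vector from the GPS ground velocity to get the air-relative velocity, rotate it into the body frame using the attitude, and read off airspeed, angle of attack, and sideslip.

```python
import numpy as np

def synthetic_air_data(v_ground_ned, v_wind_ned, roll, pitch, yaw):
    """Hypothetical sketch: estimate true airspeed, angle of attack, and sideslip
    from ground velocity and wind (NED frame, m/s) plus Euler angles (rad)."""
    # Air-relative velocity in the NED frame
    v_air_ned = np.asarray(v_ground_ned, dtype=float) - np.asarray(v_wind_ned, dtype=float)

    # Rotation from NED to body frame (Z-Y-X Euler convention)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R_ned_to_body = np.array([
        [cp * cy,               cp * sy,              -sp],
        [sr * sp * cy - cr * sy, sr * sp * sy + cr * cy, sr * cp],
        [cr * sp * cy + sr * sy, cr * sp * sy - sr * cy, cr * cp],
    ])
    u, v, w = R_ned_to_body @ v_air_ned  # body-frame components of the relative wind

    tas = np.linalg.norm([u, v, w])      # true airspeed
    alpha = np.arctan2(w, u)             # angle of attack
    beta = np.arcsin(v / tas)            # sideslip angle
    return tas, alpha, beta
```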
https://en.wikipedia.org/wiki?curid=65403215
1,937,538
The numerical renormalization group is an inherently non-perturbative procedure, which was originally used to solve the Kondo model. The Kondo model is a simplified theoretical model which describes a system of magnetic spin-1/2 impurities which couple to metallic conduction electrons (e.g. iron impurities in gold). This problem is notoriously difficult to tackle theoretically, since perturbative techniques break down at low energy. However, Wilson was able to prove for the first time using the numerical renormalization group that the ground state of the Kondo model is a singlet state. But perhaps more importantly, the notions of renormalization, fixed points, and renormalization group flow were introduced to the field of condensed matter theory — it is for this that Wilson won the Nobel Prize in 1982. The complete behaviour of the Kondo model, including both the high-temperature 'local moment' regime and the low-temperature 'strong coupling' regime, is captured by the numerical renormalization group; an exponentially small energy scale T_K (not accessible from straight perturbation theory) was shown to govern all properties at low energies, with all physical observables such as resistivity, thermodynamics, dynamics etc. exhibiting universal scaling. This is a characteristic feature of many problems in condensed matter physics, and is a central theme of quantum impurity physics in particular. In the original example of the Kondo model, the impurity local moment is completely screened below T_K by the conduction electrons via the celebrated Kondo effect; one famous consequence is that such materials exhibit a resistivity minimum at low temperatures, contrary to expectations based purely on the standard phonon contribution, where the resistivity is predicted to decrease monotonically with temperature.
https://en.wikipedia.org/wiki?curid=22426870
1,958,587
Metadata relating to archiving, indexing and cataloguing is an integral part of TML, since a TML data stream is designed to be self-contained and self-sufficient. Any information about the system, as well as information required to later parse and process the data, is captured in the TML system description. In addition to information about the system that produced the data, precise information about the data itself is captured. Data types, data sizes, ordering and arrangement, calibration information, units of measurement, precise time-tagging of individual groups of data, information about uncertainty, coordinate reference frames (where applicable) and physical phenomena relating to the data are among the details which are captured and retained. The TML system description therefore automatically tags all fields, which can later be stored in a registry for discovery.
https://en.wikipedia.org/wiki?curid=6045563
1,975,828
The next step is to build a data management system that is able to handle large amounts of data and perform analytics in near real time. In order to enable rapid decision making, data storage, management and processing need to be more integrated. General Electric has built a prototype data storage infrastructure for a fleet of gas turbines. The developed in-memory data grid (IMDG)-based system proved able to handle a challenging high-velocity and high-volume data flow while performing near real-time analytics on the data. They believe that the developed technology has demonstrated a viable path to realize a batch "Industrial Big Data" management infrastructure. As memory prices fall, such systems will become central and fundamental to future industry.
https://en.wikipedia.org/wiki?curid=48415691
1,978,130
The domain model defines the aspects of the application which can be adapted or which are otherwise required for the operation of the adaptive system. The domain model contains several concepts that stand as the backbone for the content of the system. Other terms which have been used for this concept include content model, application model, system model, device model and task model. It describes educational content such as information pages, examples, and problems. The simplest content model relates every content item to exactly one domain concept (in this model, this concept is frequently referred to as a domain topic). More advanced content models use multi-concept indexing for each content item and sometimes use roles to express the nature of item-concept relationship.
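As an illustration only (the item names, concepts, and role labels below are hypothetical, not from the passage), a simple content model can be represented as a mapping from content items to domain concepts, with roles added for multi-concept indexing:

```python
# Simplest content model: each item is indexed by exactly one domain concept (topic).
simple_content_model = {
    "intro_page_recursion": "recursion",
    "example_factorial": "recursion",
    "problem_fibonacci": "recursion",
}

# More advanced model: multi-concept indexing, with roles describing the nature
# of each item-concept relationship (e.g. prerequisite vs. outcome).
advanced_content_model = {
    "problem_fibonacci": [
        {"concept": "recursion", "role": "outcome"},       # the item teaches this
        {"concept": "functions", "role": "prerequisite"},  # the item assumes this
    ],
}

def items_teaching(model, concept):
    """Return items whose 'outcome' role matches a given domain concept."""
    return [item for item, links in model.items()
            if any(l["concept"] == concept and l["role"] == "outcome" for l in links)]

print(items_teaching(advanced_content_model, "recursion"))  # ['problem_fibonacci']
```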
https://en.wikipedia.org/wiki?curid=9269429
1,995,692
Digital steganography can hide confidential data (i.e. secret files) very securely by embedding them into some media data called "vessel data." The vessel data is also referred to as "carrier, cover, or dummy data". In BPCS-steganography, true color images (i.e., 24-bit color images) are mostly used for vessel data. The embedding operation in practice is to replace the "complex areas" on the bit planes of the vessel image with the confidential data. The most important aspect of BPCS-steganography is that the embedding capacity is very large. In comparison to simple image-based steganography, which uses solely the least significant bit of the data and thus (for a 24-bit color image) can only embed data equivalent to 1/8 of the total size, BPCS-steganography uses multiple bit planes and so can embed a much higher amount of data, though this is dependent on the individual image. For a 'normal' image, roughly 50% of the data might be replaceable with secret data before image degradation becomes apparent.
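A minimal sketch (assuming 8 × 8 binary bit-plane blocks and an illustrative threshold of 0.3, following the usual black-and-white border complexity measure; this is not code from the passage) of how a block might be judged "complex" enough to be replaced with secret data:

```python
import numpy as np

def border_complexity(block):
    """Fraction of adjacent cell pairs that differ, relative to the maximum
    possible for the block (2 * 8 * 7 = 112 for an 8x8 block)."""
    changes = np.sum(block[:, 1:] != block[:, :-1]) + np.sum(block[1:, :] != block[:-1, :])
    h, w = block.shape
    max_changes = h * (w - 1) + (h - 1) * w
    return changes / max_changes

def is_replaceable(block, threshold=0.3):
    """Noisy-looking (complex) blocks can carry secret data without visible change."""
    return border_complexity(block) >= threshold

rng = np.random.default_rng(0)
noisy_block = rng.integers(0, 2, size=(8, 8))  # looks like noise -> replaceable
flat_block = np.zeros((8, 8), dtype=int)       # uniform region -> keep as-is
print(is_replaceable(noisy_block), is_replaceable(flat_block))  # True False
```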
https://en.wikipedia.org/wiki?curid=31208407
1,999,342
The System Data Engine of the Common Data Provider in IBM Z Operational Log and Data Analytics can be run stand-alone in batch mode to read SMF data from a data set and then write it to a file. The System Data Engine batch jobs can be created to write SMF data to data sets and send SMF data to the Data Streamer.
https://en.wikipedia.org/wiki?curid=6744901
2,010,501
In mathematics, the Rayleigh theorem for eigenvalues pertains to the behavior of the solutions of an eigenvalue equation as the number of basis functions employed in its resolution increases. Rayleigh, Lord Rayleigh, and 3rd Baron Rayleigh are the titles of John William Strutt, after the death of his father, the 2nd Baron Rayleigh. Lord Rayleigh made contributions not just to both theoretical and experimental physics, but also to applied mathematics. The Rayleigh theorem for eigenvalues, as discussed below, enables the energy minimization that is required in many self-consistent calculations of electronic and related properties of materials, from atoms, molecules, and nanostructures to semiconductors, insulators, and metals. Except for metals, most of these materials have an energy gap or a band gap, i.e., a difference between the lowest unoccupied energy and the highest occupied energy. For crystals, the energy spectrum is organized in bands, and one speaks of a band gap, if any, rather than an energy gap. Given the diverse contributions of Lord Rayleigh, his name is associated with other theorems, including Parseval's theorem. For this reason, keeping the full name "Rayleigh theorem for eigenvalues" avoids confusion.
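Stated as a sketch in standard variational notation (this formulation is assumed here rather than quoted from the passage): enlarging the basis can only lower, or leave unchanged, each computed eigenvalue.

```latex
% Rayleigh theorem for eigenvalues (sketch): with N and N+1 basis functions,
% the ordered variational eigenvalues satisfy
\lambda_i^{(N+1)} \;\le\; \lambda_i^{(N)}, \qquad i = 1, 2, \ldots, N .
```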
https://en.wikipedia.org/wiki?curid=63401284
2,015,879
The physics of each hadron enters through its distribution amplitudes formula_64, which specifies the partitioning of the light-front momenta of the valence constituents formula_67. It is given in light-cone gauge formula_68 as formula_69, the integral of the valence light-front wave function over the internal transverse momentum squared formula_70; the upper limit formula_71 is the characteristic transverse momentum in the exclusive reaction. The logarithmic evolution of the distribution amplitude in formula_72 is given rigorously in perturbative QCD by the ERBL evolution equation. The results are also consistent with general principles such as the renormalization group. The asymptotic behavior of the distribution such as formula_73 where formula_74 is the decay constant measured in pion decay formula_75 can also be determined from first principles. The nonperturbative form of the hadron light-front wave function and distribution amplitude can be determined from AdS/QCD using light-front holography. The deuteron distribution amplitude has five components corresponding to the five different color-singlet combinations of six color triplet quarks, only one of which is the standard nuclear physics product formula_76 of two color singlets. It obeys a formula_77 evolution equation leading to equal weighting of the five components of the deuteron's light-front wave function components at formula_78 The new degrees of freedom are called "hidden color". Each hadron emitted from a hard exclusive reaction emerges with high momentum and small transverse size. A fundamental feature of gauge theory is that soft gluons decouple from the small color-dipole moment of the compact fast-moving color-singlet wave function configurations of the incident and final-state hadrons. The transversely compact color-singlet configurations can persist over a distance of order formula_79, the Ioffe coherence length. Thus, if we study hard quasi elastic processes in a nuclear target, the outgoing and ingoing hadrons will have minimal absorption - a novel phenomenon called "color transparency". This implies that quasi-elastic hadron-nucleon scattering at large momentum transfer can occur additively on all of the nucleons in a nucleus with minimal attenuation due to elastic or inelastic final state interactions in the nucleus, i.e. the nucleus becomes transparent. In contrast, in conventional Glauber scattering, one predicts nearly energy-independent initial and final-state attenuation. Color transparency has been verified in many hard-scattering exclusive experiments, particularly in the diffractive dijet experiment formula_80 at Fermilab. This experiment also provides a measurement of the pion's light-front valence wave function from the observed formula_6 and transverse momentum dependence of the produced dijets.
https://en.wikipedia.org/wiki?curid=43699175
2,025,063
The delimited real system is depicted in an abstract model with the help of the IEM method. IEM is based on the construction of two main components, the "information model" and the "business process model". The "information model" is made by specifying the object classes to be modeled for "product", "order" and "resource", with their class structures as well as descriptive and relational features. The "business process model" is formed by identifying and describing functions and activities and combining them into processes. As a general rule, the "information model" is constructed first, and the modeler can draw on available reference class structures. The reference classes which do not correspond to the real system, or which were not found to be relevant during the system delimitation, are deleted; the missing relevant classes are inserted. After the object base is fixed, the activities and functions are joined together at the objects according to the "generic activity model" and, with the help of combination elements, are assembled into business processes. The result is a model that can be analysed and changed as required. It often happens that new relevant object classes are identified during the construction of the "business process model", so that the class trees are completed. The construction of the two components is therefore an iterative process.
https://en.wikipedia.org/wiki?curid=7284218
2,027,864
Since the 1980s, it has been well known that archaeological dental calculus preserves cellular structures and oral bacteria, but a new discovery in the last decade has revealed that dental calculus is a long-term reservoir of DNA and proteins. Human DNA in dental calculus was initially targeted by PCR amplification of mitochondrial DNA (mtDNA), followed by either haplogroup inference or conventional cloning and Sanger sequencing. Shotgun metagenomics paired with next-generation sequencing (NGS) technology further confirmed that dental calculus contains mitochondrial and nuclear DNA. Dental calculus typically contains 10–1,000-fold more DNA than bone or dentine, making it the richest known source of ancient DNA (aDNA) in the archaeological record. Archaeological dental calculus is an alternative source of high-quality mitochondrial DNA sufficient for full mitogenome reconstruction. This reconstruction can then be applied to maternal lineage ancestry analysis to determine the haplogroup, thus identifying which geographical regions maternal ancestors settled. Protein sequencing has also been applied, revealing that bacterial functions such as virulence factors, and their interactions with the host, can be recovered from ancient dental calculus. Proteomics has revealed over 60 human proteins with origins in dental calculus, such as follicular dendritic cell-secreted protein, alpha amylase I, and hemoglobin. Metabolomic and lipidomic studies are used to determine what metabolic categories (amino acids, carbohydrates, cofactors and vitamins, energy, lipids, nucleic acids, peptides, xenobiotics) and which sources of metabolites (host, microbial, diet) are found within dental calculus samples. Many of these newly developed techniques used to study ancient dental calculus are still in their early stages and need to overcome several limitations to offer a more accurate understanding of the evolution of the oral microbiome. Some examples of these limitations are isolation of contaminant DNA, correct identification of ancient microbial species, and identification and isolation of non-bacterial DNA, as well as better statistical techniques.
https://en.wikipedia.org/wiki?curid=67011367
2,055,101
In his early career, he worked on problems in statistical physics, dynamical systems and complex systems. In 1987, along with Per Bak and Kurt Wiesenfeld, he proposed the concept and developed the theory of self-organized criticality, which has had, and continues to have, broad applications in complex systems with scale invariance. The model they used to illustrate the idea is referred to as the Bak-Tang-Wiesenfeld "sandpile" model. His current research interest is at the interface between physics and biology. Specifically, he focuses on systems biology and works on problems such as protein folding, cell cycle regulation, the function-topology relationship in biological networks, cell fate determination and design principles in biological systems. He was a tenured Full Professor at the University of California, San Francisco before returning to China in 2011. He is a Fellow of the American Physical Society, a member of the Chinese Academy of Sciences, the founding director of the interdisciplinary Center for Quantitative Biology at Peking University and the founding Co-Editor-in-Chief of the journal Quantitative Biology.
https://en.wikipedia.org/wiki?curid=2968953
2,062,005
J. C. McLennan, director of the physics laboratory at U of T from 1906 to 1932, undertook studies in atmospheric conductivity and cathode rays, but in 1912 was inspired by the work of Bohr to conduct research into atomic spectroscopy. He, along with G. M. Shrum, constructed the first machine for the liquefaction of helium in North America, which was used for cryogenic studies of metals and solid gases. Research into colloid physics in the twenties and thirties by E. F. Burton and his students led to the construction of the first electron microscope in North America. Geophysics research was also undertaken at the U of T at this time by L. Gilchrist. At McGill, L. V. King studied mathematical physics while D. A. Keys and A. S. Eve conducted research into geophysics and J. S. Marshall into atmospheric physics. McGill also established the first theoretical physics group at a Canadian university. At the University of Alberta, R. W. Boyle became the first professor of physics in 1912 and conducted research into ultrasound, while F. Allen established the physics department at the University of Manitoba and bent his efforts towards the physics of physiology. At the University of Saskatchewan, E. L. Harrington was the first physics department head from 1924 to 1956, during which time that institution developed expertise in upper atmospheric research, begun by B. W. Currie in 1932. From 1935 to 1945, Gerhard Herzberg studied atomic and molecular physics there. Physics began at Queen's with the work of A. L. Clark, and nuclear research was conducted there by J. A. Gray, B. W. Sargent, A. T. Stewart and others. H. L. Bronson, department head at Dalhousie, was active in physics research from 1910 to 1956.
https://en.wikipedia.org/wiki?curid=18401364
2,085,258
In multidimensional signal processing, multidimensional signal restoration refers to the problem of estimating the original input signal from observations of a distorted or noise-contaminated version of it, using some prior information about the input signal and/or the distortion process. Multidimensional signal processing systems such as audio, image and video processing systems often receive as input signals that have undergone distortions such as blurring or band-limiting during signal acquisition or transmission, and it may be vital to recover the original signal for further filtering. Multidimensional signal restoration is an inverse problem, where only the distorted signal is observed and some information about the distortion process and/or input signal properties is known. A general class of iterative methods has been developed for the multidimensional restoration problem, with successful applications to multidimensional deconvolution, signal extrapolation and denoising.
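A minimal sketch (assuming a known blur kernel and using a basic Landweber-style iteration, one common member of the general class of iterative methods mentioned above, not the specific method of the passage) of iterative 2-D deblurring:

```python
import numpy as np
from scipy.signal import fftconvolve

def iterative_deblur(observed, psf, n_iter=50, step=1.0):
    """Landweber-style iteration: x <- x + step * H^T (y - H x), where H is
    convolution with a known point spread function (assumed here)."""
    psf_flipped = psf[::-1, ::-1]  # adjoint of convolution = correlation with the PSF
    estimate = observed.copy()
    for _ in range(n_iter):
        residual = observed - fftconvolve(estimate, psf, mode="same")
        estimate = estimate + step * fftconvolve(residual, psf_flipped, mode="same")
    return np.clip(estimate, 0.0, None)  # simple prior: non-negativity

# Toy usage: blur a random image with a 5x5 box kernel, then try to restore it.
rng = np.random.default_rng(1)
truth = rng.random((64, 64))
psf = np.full((5, 5), 1.0 / 25.0)
blurred = fftconvolve(truth, psf, mode="same")
restored = iterative_deblur(blurred, psf)
```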
https://en.wikipedia.org/wiki?curid=52292986
2,098,040
Although the original proof by Valentine Bargmann is quite technical, the main idea follows from two general theorems on ordinary differential equations, the Sturm oscillation theorem and the Sturm–Picone comparison theorem. If we denote by u_{E,ℓ}(r) the radial wave function subject to the given potential V(r) with total energy E and azimuthal quantum number ℓ, the Sturm oscillation theorem implies that the number of bound states with energy below E equals the number of nodes of u_{E,ℓ}. From the Sturm–Picone comparison theorem, it follows that when subject to a stronger potential V′ (i.e. V′(r) ≤ V(r) for all r), the number of nodes either grows or remains the same. Thus, more specifically, we can replace the potential V by −|V|. For the corresponding wave function with total energy E and azimuthal quantum number ℓ, denoted by w_{E,ℓ}, the radial Schrödinger equation becomes
https://en.wikipedia.org/wiki?curid=1184376
2,099,395
Data from PetDB can be viewed in HTML tables and downloaded in spreadsheets in XLS format. During selection of chemical parameters a user can choose to retrieve data as individual values (each row in the data table contains values measured on the same sample with the same method and linked to the same reference) or in precompiled format. The precompiled format arranges all data associated with a sample in a single row, even when data is sourced from multiple publications. In cases where there is more than one data value for a particular chemical item, the precompilation algorithm selects the most recent analysis and the most precise method available. Links in the HTML table permit the user to access more detailed information about the sample, reference or data value (analytical procedure). The final spreadsheet output contains two worksheets. The first contains queried chemical data, geospatial coordinates, and abridged methods and references, while the second contains metadata on analytical methods and publication information.
https://en.wikipedia.org/wiki?curid=11490059
2,110,749
Cells themselves can create biomolecular gradients by releasing signaling molecules that diffuse outwardly. These gradients are critical for cellular identity and cell relocation. Similarly, the gradients produced by cells may influence cellular fate by their temporal and spatial characteristics. In certain organisms, the choice of cell fate can be determined by a gradient, a binary choice, or through a relay of molecules released by a cell. If cell fate is binary, the identity of the cell is influenced by the presence or absence of a signaling molecule; consequently, these signals can also induce cell fate acting in a relay. The relay functions by the source cell releasing a signaling molecule into the environment. Adjacent cells possessing similar cell identities respond to these signals and can release different signaling molecules to cells in their surrounding area, promoting additional new cell fates. This process continues to all cells in the developing organism. In contrast, a signal can act in a gradient, which induces a specific cell fate as a function of the concentration of the molecule in the gradient. These types of molecules are known as morphogens.
https://en.wikipedia.org/wiki?curid=42213076
2,159,604
Since Torza treats the fluid inside the droplet and outside the droplet as having no net charge, the governing equation for the electric stress sub-problem reduces to Gauss's law with a spatial charge density of zero. By re-expressing the electric field in terms of the gradient of the electric potential, the governing electric equation reduces to Laplace's equation. Separation of variables can be used to derive a solution to this equation of the form of a power series multiplied by the cosine of the polar angle taken relative to the direction of the electric field. Using the solutions for the magnitude of the electric potentials on the inside and outside of the droplet, the electric stress created on the bubble/droplet interface can be determined using the definition of the Maxwell stress tensor and neglecting the electric field.
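As a sketch of the separation-of-variables result mentioned above (standard electrostatics for a uniform applied field E₀ along the polar axis, so only the l = 1 term survives; the constants A and B below are fixed by the interface and far-field boundary conditions and are assumptions for illustration, not Torza's exact notation):

```latex
\nabla^{2}\phi = 0, \qquad
\phi_{\text{out}}(r,\theta) = \left(-E_{0}\, r + \frac{B}{r^{2}}\right)\cos\theta, \qquad
\phi_{\text{in}}(r,\theta) = A\, r \cos\theta .
```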
https://en.wikipedia.org/wiki?curid=41093622
2,161,637
Lysophosphatidic acid phosphatase activity is used to detect and to quantify irregular levels of LPAs on a cell's surface. LPAs are receptor-active mediators that promote cell motility, cell growth and cell survival. There is clear evidence that cancerous ovarian cells have an increased level of LPA concentrations on their cell surfaces. These LPAs leak from the cell surface into the blood stream. The high levels of LPAs in the blood are used as tumor markers. In these cell clusters, lysophosphatidic acid phosphatase activity is higher than it is in regular cells. This can be attributed to the significantly increased levels of LPA that are secreted and synthesized by the ovarian cancer cells. This helps explain the cancerous cell's radical behavior and uncontrollable proliferation caused by the imbalance of enzyme and substrate concentrations, therefore leading to the inability to turn off the LPA cascade signalling effectively. One possible way to address and treat ovarian cancer cell proliferation would be to increase the concentration of lysophosphatidic acid phosphatase on the cell's surface, thus decreasing the amount of LPAs available to signal the cell to proceed with its radical behavior.
https://en.wikipedia.org/wiki?curid=45639348
2,161,878
The International Nuclear Library Network was founded in 2005 by the IAEA Library and the Atomic Energy of Canada Limited (AECL) Library. In its initial years, it counted a total of five members (in addition to the initiators, the National Atomic Energy Commission of Argentina, the Turkish Atomic Energy Agency and the Institute of Nuclear Physics of Uzbekistan Academy of Science). In 2006, the Australian Nuclear Science and Technology Organisation (ANSTO) also joined the network. In 2007, the INLN welcomed four new members: the China Nuclear Information Centre, the Nigerian Nuclear Regulatory Authority, the Obninsk State Technical University for Nuclear Power Engineering in the Russian Federation, and the Russian Association of Nuclear Science and Education (RANSE), thus increasing the membership to ten participants. In 2008, at the meeting of INLN members and prospective members, during the 34th International Nuclear Information System (INIS) Liaison Officers Meeting in Vienna, a large number of nuclear libraries from around the world expressed their interest and subsequently joined the INLN: the Belarus INIS Center, Chair of Ecological Information Systems, the Nuclear Research Institute Rez plc of Czech Republic, the Egyptian Atomic Energy Authority (EAEA), the Ghana Atomic Energy Commission Library, the Bhabha Atomic Research Centre (BARC) of India, the Radiological Protection Institute of Ireland, the Japan Atomic Energy Agency (JAEA), the Instituto Nacional de Investigaciones Nucleares (ININ), Centro de Información y Documentación Nuclear (CIDN) of Mexico, the Centre National de l'Energie des Sciences et des Techniques Nucléaires (CNESTEN) of Morocco, the Korea Atomic Energy Research Institute (KAERI) at the University of Science and Technology (South Korea), the Vinca Nuclear Institute of Serbia, and the Centre National des Sciences et Technologies Nucléaires (CNSTN) of Tunisia. In November of the same year, the Library Network of the Brazilian Nuclear Energy Commission (Comissão Nacional de Energia Nuclear – CNEN), consisting of seven nuclear libraries, became an official INLN member. In January 2009, the Commissariat à l'Énergie Atomique (CEA) - Centre de Saclay - Centre de Ressources Documentaires joined the Network, while in 2010, the National Atomic Energy Commission, the Norwegian Radiation Protection Authority, and the Pakistan Institute of Nuclear Science & Technology decided to become INLN members. The Nuclear Energy Regulatory Agency of Indonesia - Badan Pengawas Tenaga Nuklir and the New Zealand Institute of Environmental Science and Research – National Radiation Laboratory are the newest members, raising the total number of Nuclear Libraries/Information Centres participating in the INLN to 37.
https://en.wikipedia.org/wiki?curid=22847485
2,211,827
The Re-referenced Protein Chemical shift Database (RefDB) is an NMR spectroscopy database of carefully corrected or re-referenced chemical shifts, derived from the BioMagResBank (BMRB). The database was assembled by using a structure-based chemical shift calculation program (called SHIFTX) to calculate expected protein ¹H, ¹³C and ¹⁵N chemical shifts from X-ray or NMR coordinate data of previously assigned proteins reported in the BMRB. The comparison is automatically performed by a program called SHIFTCOR. The RefDB database currently provides reference-corrected chemical shift data on more than 2000 assigned peptides and proteins. Data from the database indicate that nearly 25% of BMRB entries with ¹³C protein assignments and 27% of BMRB entries with ¹⁵N protein assignments require significant chemical shift reference readjustments. Additionally, nearly 40% of protein entries deposited in the BioMagResBank appear to have at least one assignment error. Users may download, search or browse the database through a number of methods available through the RefDB website. RefDB provides a standard chemical shift resource for biomolecular NMR spectroscopists wishing to derive or compute chemical shift trends in peptides and proteins.
https://en.wikipedia.org/wiki?curid=34138058
2,218,561
Cells and proteins can be patterned in microfluidic devices with one of the channel walls exposed, in different geometries and designs depending on the behaviors and interactions to be studied, such as quorum sensing or co-culturing of several types of cells. A majority of cell culturing has been carried out by introducing the cells in a perfused conditioned medium to simulate the desired cell populations in traditional closed-channel microfluidic devices. The challenge in supporting cell growth while simultaneously studying multiple cell types in a single device with an exposed channel is that the interactions between cells in this medium need to be controlled, since the timing and location of the interactions is critical. This issue can be addressed in several ways, including modification of the device design, using droplet microfluidics, and cell sorting. Not only does this allow for ease of manipulating the environment of the cells, but having an open channel wall allows for a better understanding of biological interactions at this interface. Creating designs of microfluidic platforms with different compartments that are isolated and have different dimensions allows for co-culturing of several types of cells. These devices often incorporate droplet formation to encapsulate cells and act as transport and reaction vehicles in two or more immiscible phases, making it possible to carry out numerous parallel analyses using different conditions. Open microfluidics has also been coupled with fluorescence-activated cell sorting (FACS) to allow cells to be contained in individually sorted compartments in an open microfluidic network for culturing in an exposed environment. The exposure of one of the channel walls introduces the issue of evaporation and therefore cell loss; however, this issue can be minimized by using droplet microfluidics, where the cell-containing droplets are submerged in a fluorinated oil. Although evaporation is a major disadvantage of using an open microfluidic system for cell culturing, the advantages over a closed system include ease of manipulation and access to the cells. For certain applications, such as the study of drug transport and lung function using alveolar epithelial cells, exposure to air is essential for developing the lungs.
https://en.wikipedia.org/wiki?curid=60405154
987
Tesla, Inc. is an American multinational automotive and clean energy company headquartered in Austin, Texas. Tesla designs and manufactures electric vehicles (electric cars and trucks), battery energy storage from home to grid-scale, solar panels and solar roof tiles, and related products and services. Tesla is one of the world's most valuable companies and remains the world's most valuable automaker with a market capitalization of more than US$550 billion. In 2021, the company had the most worldwide sales of battery electric vehicles and plug-in electric vehicles, capturing 21% of the battery-electric (purely electric) market and 14% of the plug-in market (which includes plug-in hybrids). Through its subsidiary Tesla Energy, the company develops and is a major installer of photovoltaic systems in the United States. Tesla Energy is also one of the largest global suppliers of battery energy storage systems, with 3.99 gigawatt-hours (GWh) installed in 2021.
https://en.wikipedia.org/wiki?curid=5533631
8,817
Quantum mechanics is a theory of physics originally developed in order to understand microscopic phenomena: behavior at the scale of molecules, atoms or subatomic particles. Generally and loosely speaking, the smaller a system is, the more an adequate mathematical model must account for quantum effects. The conceptual underpinning of quantum physics is very different from that of classical physics. Instead of thinking about quantities like position, momentum, and energy as properties that an object "has", one considers what result might "appear" when a measurement of a chosen type is performed. Quantum mechanics allows the physicist to calculate the probability that a chosen measurement will elicit a particular result. The expectation value for a measurement is the average of the possible results it might yield, weighted by their probabilities of occurrence.
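As a compact restatement of the last sentence (the notation and numbers are mine, chosen only for illustration): if a measurement can yield results a_i with probabilities p_i, then

```latex
\langle A \rangle = \sum_i p_i\, a_i ,
\qquad\text{e.g.}\quad
p_{+1} = 0.75,\; p_{-1} = 0.25
\;\Rightarrow\;
\langle A \rangle = 0.75\,(+1) + 0.25\,(-1) = 0.5 .
```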
https://en.wikipedia.org/wiki?curid=55212
9,460
A fundamental feature of the theory is that it usually cannot predict with certainty what will happen, but only give probabilities. Mathematically, a probability is found by taking the square of the absolute value of a complex number, known as a probability amplitude. This is known as the Born rule, named after physicist Max Born. For example, a quantum particle like an electron can be described by a wave function, which associates to each point in space a probability amplitude. Applying the Born rule to these amplitudes gives a probability density function for the position that the electron will be found to have when an experiment is performed to measure it. This is the best the theory can do; it cannot say for certain where the electron will be found. The Schrödinger equation relates the collection of probability amplitudes that pertain to one moment of time to the collection of probability amplitudes that pertain to another.
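A small numerical sketch of the Born rule (illustrative only; the grid and wave function below are made up): squaring the amplitude gives a probability density, and summing that density over a region gives the probability of finding the electron there.

```python
import numpy as np

# Born rule on a discretized one-dimensional wave function (values are illustrative only).
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

psi = np.exp(-x**2 / 2) * np.exp(1j * 1.5 * x)   # complex probability amplitude (Gaussian packet)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)      # normalize so the total probability is 1

density = np.abs(psi)**2                         # Born rule: probability density |psi(x)|^2
p_right = np.sum(density[x > 0]) * dx            # probability of finding the electron at x > 0
print(f"total probability: {np.sum(density) * dx:.3f}, P(x > 0): {p_right:.3f}")
```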
https://en.wikipedia.org/wiki?curid=25202
9,469
After the measurement, if result λ was obtained, the quantum state is postulated to collapse to the corresponding eigenvector, in the non-degenerate case, or to the normalized projection of the state onto the corresponding eigenspace, in the general case. The probabilistic nature of quantum mechanics thus stems from the act of measurement. This is one of the most difficult aspects of quantum systems to understand. It was the central topic in the famous Bohr–Einstein debates, in which the two scientists attempted to clarify these fundamental principles by way of thought experiments. In the decades after the formulation of quantum mechanics, the question of what constitutes a "measurement" has been extensively studied. Newer interpretations of quantum mechanics have been formulated that do away with the concept of "wave function collapse" (see, for example, the many-worlds interpretation). The basic idea is that when a quantum system interacts with a measuring apparatus, their respective wave functions become entangled so that the original quantum system ceases to exist as an independent entity. For details, see the article on measurement in quantum mechanics.
https://en.wikipedia.org/wiki?curid=25202
9,510
Early attempts to merge quantum mechanics with special relativity involved the replacement of the Schrödinger equation with a covariant equation such as the Klein–Gordon equation or the Dirac equation. While these theories were successful in explaining many experimental results, they had certain unsatisfactory qualities stemming from their neglect of the relativistic creation and annihilation of particles. A fully relativistic quantum theory required the development of quantum field theory, which applies quantization to a field (rather than a fixed set of particles). The first complete quantum field theory, quantum electrodynamics, provides a fully quantum description of the electromagnetic interaction. Quantum electrodynamics is, along with general relativity, one of the most accurate physical theories ever devised.
https://en.wikipedia.org/wiki?curid=25202
12,328
A less common but increasingly important paradigm of processors (and indeed, computing in general) deals with data parallelism. The processors discussed earlier are all referred to as some type of scalar device. As the name implies, vector processors deal with multiple pieces of data in the context of one instruction. This contrasts with scalar processors, which deal with one piece of data for every instruction. Using Flynn's taxonomy, these two schemes of dealing with data are generally referred to as "single instruction" stream, "multiple data" stream (SIMD) and "single instruction" stream, "single data" stream (SISD), respectively. The great utility in creating processors that deal with vectors of data lies in optimizing tasks that tend to require the same operation (for example, a sum or a dot product) to be performed on a large set of data. Some classic examples of these types of tasks include multimedia applications (images, video and sound), as well as many types of scientific and engineering tasks. Whereas a scalar processor must complete the entire process of fetching, decoding and executing each instruction and value in a set of data, a vector processor can perform a single operation on a comparatively large set of data with one instruction. This is only possible when the application tends to require many steps which apply one operation to a large set of data.
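The scalar-versus-vector contrast can be mimicked at the library level with NumPy, whose whole-array operations dispatch to vectorized kernels. This is only an analogy for the programming model, not a description of any particular processor's SIMD hardware.

```python
import numpy as np

a = np.random.rand(100_000)
b = np.random.rand(100_000)

# "Scalar" style: one piece of data per step -- an explicit loop over elements.
total = 0.0
for x, y in zip(a, b):
    total += x * y

# "Vector" style: one operation applied to the whole data set at once.
total_vec = np.dot(a, b)

print(abs(total - total_vec) < 1e-6 * total_vec)   # same dot product, computed very differently
```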
https://en.wikipedia.org/wiki?curid=5218
13,631
Classical physics is generally concerned with matter and energy on the normal scale of observation, while much of modern physics is concerned with the behavior of matter and energy under extreme conditions or on a very large or very small scale. For example, atomic and nuclear physics study matter on the smallest scale at which chemical elements can be identified. The physics of elementary particles is on an even smaller scale since it is concerned with the most basic units of matter; this branch of physics is also known as high-energy physics because of the extremely high energies necessary to produce many types of particles in particle accelerators. On this scale, ordinary, commonsensical notions of space, time, matter, and energy are no longer valid.
https://en.wikipedia.org/wiki?curid=22939
18,765
There are several models of quantum computation with the most widely used being quantum circuits. Other models include the quantum Turing machine, quantum annealing, and adiabatic quantum computation. Most models are based on the quantum bit, or "qubit", which is somewhat analogous to the bit in classical computation. A qubit can be in a 1 or 0 quantum state, or in a superposition of the 1 and 0 states. When it is measured, however, it is always 0 or 1; the probability of either outcome depends on the qubit's quantum state immediately prior to measurement. One model that does not use qubits is continuous variable quantum computation.
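A minimal simulation of the measurement behaviour just described (a generic sketch, not tied to any particular quantum-computing framework): a qubit state is a normalized pair of complex amplitudes, and a measurement returns 0 or 1 with probabilities given by the squared magnitudes of those amplitudes.

```python
import numpy as np

# Qubit in a superposition of |0> and |1> (amplitudes chosen arbitrarily, then normalized).
state = np.array([1.0, 1.0j], dtype=complex)
state /= np.linalg.norm(state)

probs = np.abs(state)**2                     # P(0), P(1) from the amplitudes
samples = np.random.choice([0, 1], size=10_000, p=probs)

print("P(0), P(1):", probs)                  # [0.5, 0.5] for this state
print("measured frequencies:", np.bincount(samples) / samples.size)
```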
https://en.wikipedia.org/wiki?curid=25220
18,766
Efforts towards building a physical quantum computer focus on technologies such as transmons, ion traps and topological quantum computers, which aim to create high-quality qubits. These qubits may be designed differently, depending on the full quantum computer's computing model, as to whether quantum logic gates, quantum annealing, or adiabatic quantum computation are employed. There are currently a number of significant obstacles to constructing useful quantum computers. It is particularly difficult to maintain qubits' quantum states, as they suffer from quantum decoherence. Quantum computers therefore require error correction.
https://en.wikipedia.org/wiki?curid=25220
18,767
Any computational problem that can be solved by a classical computer can also be solved by a quantum computer. Conversely, any problem that can be solved by a quantum computer can also be solved by a classical computer, at least in principle given enough time. In other words, quantum computers obey the Church–Turing thesis. This means that while quantum computers provide no additional advantages over classical computers in terms of computability, quantum algorithms for certain problems have significantly lower time complexities than corresponding known classical algorithms. Notably, quantum computers are believed to be able to quickly solve certain problems that no classical computer could solve in any "feasible" amount of time—a feat known as "quantum supremacy." The study of the computational complexity of problems with respect to quantum computers is known as quantum complexity theory.
https://en.wikipedia.org/wiki?curid=25220
18,795
Quantum algorithms that offer more than a polynomial speedup over the best-known classical algorithm include Shor's algorithm for factoring and the related quantum algorithms for computing discrete logarithms, solving Pell's equation, and more generally solving the hidden subgroup problem for abelian finite groups. These algorithms depend on the primitive of the quantum Fourier transform. No mathematical proof has been found to show that an equally fast classical algorithm cannot be discovered, although this is considered unlikely. Certain oracle problems, like Simon's problem and the Bernstein–Vazirani problem, do give provable speedups, though this is in the quantum query model, a restricted model in which lower bounds are much easier to prove and which does not necessarily translate into speedups for practical problems.
https://en.wikipedia.org/wiki?curid=25220
20,074
The MIM-104 Patriot is a surface-to-air missile (SAM) system, the primary of its kind used by the United States Army and several allied states. It is manufactured by the U.S. defense contractor Raytheon and derives its name from the radar component of the weapon system. The AN/MPQ-53 at the heart of the system is known as the "Phased Array Tracking Radar to Intercept on Target" which is a backronym for PATRIOT. The Patriot system replaced the Nike Hercules system as the U.S. Army's primary High to Medium Air Defense (HIMAD) system and replaced the MIM-23 Hawk system as the U.S. Army's medium tactical air defense system. In addition to these roles, Patriot has been given the function of the U.S. Army's anti-ballistic missile (ABM) system, which is now Patriot's primary mission. The system is expected to stay fielded until at least 2040.
https://en.wikipedia.org/wiki?curid=52024
21,362
"Penetration" or "viral entry" follows attachment: Virions enter the host cell through receptor-mediated endocytosis or membrane fusion. The infection of plant and fungal cells is different from that of animal cells. Plants have a rigid cell wall made of cellulose, and fungi one of chitin, so most viruses can get inside these cells only after trauma to the cell wall. Nearly all plant viruses (such as tobacco mosaic virus) can also move directly from cell to cell, in the form of single-stranded nucleoprotein complexes, through pores called plasmodesmata. Bacteria, like plants, have strong cell walls that a virus must breach to infect the cell. Given that bacterial cell walls are much thinner than plant cell walls due to their much smaller size, some viruses have evolved mechanisms that inject their genome into the bacterial cell across the cell wall, while the viral capsid remains outside.
https://en.wikipedia.org/wiki?curid=19167679
23,792
The development of quantum mechanics in the 1920s modified this picture somewhat, but in modern theories the average drift velocity of electrons can still be shown to be proportional to the electric field, thus deriving Ohm's law. In 1927 Arnold Sommerfeld applied the quantum Fermi-Dirac distribution of electron energies to the Drude model, resulting in the free electron model. A year later, Felix Bloch showed that electrons move in waves (Bloch electrons) through a solid crystal lattice, so scattering off the lattice atoms as postulated in the Drude model is not a major process; the electrons scatter off impurity atoms and defects in the material. The final successor, the modern quantum band theory of solids, showed that the electrons in a solid cannot take on any energy as assumed in the Drude model but are restricted to energy bands, with gaps between them of energies that electrons are forbidden to have. The size of the band gap is a characteristic of a particular substance which has a great deal to do with its electrical resistivity, explaining why some substances are electrical conductors, some semiconductors, and some insulators.
https://en.wikipedia.org/wiki?curid=49090
25,145
In addition to the differential calculus and integral calculus, the term is also used for naming specific methods of calculation and related theories which seek to model a particular concept in terms of mathematics. Examples of this convention include propositional calculus, Ricci calculus, calculus of variations, lambda calculus, and process calculus. Furthermore, the term "calculus" has variously been applied in ethics and philosophy, for such systems as Bentham's felicific calculus, and the ethical calculus.
https://en.wikipedia.org/wiki?curid=5176
29,590
A system of logic is "sound" when its proof system cannot derive a conclusion from a set of premises unless it is semantically entailed by them. In other words, its proof system cannot lead to false conclusions, as defined by the semantics. A system is "complete" when its proof system can derive every conclusion that is semantically entailed by its premises. In other words, its proof system can lead to any true conclusion, as defined by the semantics. Thus, soundness and completeness together describe a system whose notions of validity and entailment line up perfectly.
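To make "semantically entailed" concrete, here is a brute-force truth-table checker for propositional logic. It is an illustrative sketch only; it tests the semantic side (entailment) and says nothing about any particular proof system: premises entail a conclusion exactly when no truth assignment makes every premise true and the conclusion false.

```python
from itertools import product

# Formulas are represented as Python functions of a truth assignment (a dict of atom -> bool).
def entails(atoms, premises, conclusion):
    """Return True if every assignment satisfying all premises also satisfies the conclusion."""
    for values in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False   # counterexample found: premises true, conclusion false
    return True

# Example: {p, p -> q} semantically entails q (the semantic counterpart of modus ponens).
premises = [lambda v: v["p"], lambda v: (not v["p"]) or v["q"]]
print(entails(["p", "q"], premises, lambda v: v["q"]))       # True
print(entails(["p", "q"], premises, lambda v: not v["q"]))   # False
```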
https://en.wikipedia.org/wiki?curid=46426065
30,001
The transfer of energy from one chemical substance to another depends on the "size" of energy quanta emitted from one substance. However, heat energy is often transferred more easily from almost any substance to another because the phonons responsible for vibrational and rotational energy levels in a substance have much less energy than photons invoked for the electronic energy transfer. Thus, because vibrational and rotational energy levels are more closely spaced than electronic energy levels, heat is more easily transferred between substances relative to light or other forms of electronic energy. For example, ultraviolet electromagnetic radiation is not transferred with as much efficacy from one substance to another as thermal or electrical energy.
https://en.wikipedia.org/wiki?curid=5180
30,210
Many statisticians, including Nate Silver, have argued that data science is not a new field, but rather another name for statistics. Others argue that data science is distinct from statistics because it focuses on problems and techniques unique to digital data. Vasant Dhar writes that statistics emphasizes quantitative data and description. In contrast, data science deals with quantitative and qualitative data (e.g. from images, text, sensors, transactions or customer information) and emphasizes prediction and action. Andrew Gelman of Columbia University has described statistics as a nonessential part of data science.
https://en.wikipedia.org/wiki?curid=35458904
31,552
In vertebrates, complementary to the circulatory system is the lymphatic system. This system carries excess plasma filtered from the capillaries as interstitial fluid between cells, away from the body tissues in an accessory route to return the excess fluid back to the blood circulation as lymph. The passage of lymph takes much longer than that of blood. The lymphatic system is a subsystem that is essential for the functioning of the blood circulatory system; without it the blood would become depleted of fluid. The lymphatic system works together with the immune system. Unlike the closed circulatory system, the lymphatic system is an open system. Some sources describe it as a "secondary circulatory system".
https://en.wikipedia.org/wiki?curid=57330
32,080
Von Neumann first proposed a quantum logic in his 1932 treatise "Mathematical Foundations of Quantum Mechanics", where he noted that projections on a Hilbert space can be viewed as propositions about physical observables. The field of quantum logic was subsequently inaugurated in a famous 1936 paper by von Neumann and Garrett Birkhoff, the first work ever to introduce quantum logics, wherein von Neumann and Birkhoff first proved that quantum mechanics requires a propositional calculus substantially different from all classical logics and rigorously isolated a new algebraic structure for quantum logics. The concept of creating a propositional calculus for quantum logic was first outlined in a short section in von Neumann's 1932 work, but in 1936, the need for the new propositional calculus was demonstrated through several proofs. For example, photons cannot pass through two successive filters that are polarized perpendicularly ("e.g.", horizontally and vertically), and therefore, "a fortiori", they cannot pass if a third filter polarized diagonally is added to the other two, either before or after them in the succession; but if the third filter is added "between" the other two, the photons will indeed pass through. This experimental fact is translatable into logic as the non-commutativity of conjunction, (A ∧ B) ≠ (B ∧ A). It was also demonstrated that the laws of distribution of classical logic, P ∨ (Q ∧ R) = (P ∨ Q) ∧ (P ∨ R) and P ∧ (Q ∨ R) = (P ∧ Q) ∨ (P ∧ R), are not valid for quantum theory.
https://en.wikipedia.org/wiki?curid=15942
34,177
Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object (for instance due to its position in a field), the elastic energy stored in a solid object, chemical energy associated with chemical reactions, the radiant energy carried by electromagnetic radiation, and the internal energy contained within a thermodynamic system. All living organisms constantly take in and release energy.
https://en.wikipedia.org/wiki?curid=9649
34,195
In biology, energy is an attribute of all biological systems, from the biosphere to the smallest living organism. Within an organism it is responsible for growth and development of a biological cell or organelle of a biological organism. Energy used in respiration is stored in substances such as carbohydrates (including sugars), lipids, and proteins stored by cells. In human terms, the human equivalent (H-e) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, using as a standard an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80), i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300 watts; for an activity kept up all day, 150 watts is about the maximum. The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.
https://en.wikipedia.org/wiki?curid=9649
37,294
The delayed choice quantum eraser experiment performed by Marlan Scully involves pairs of entangled photons that are divided into "signal photons" and "idler photons", with the signal photons emerging from one of two locations and their position later measured as in the double-slit experiment. Depending on how the idler photon is measured, the experimenter can either learn which of the two locations the signal photon emerged from or "erase" that information. Even though the signal photons can be measured before the choice has been made about the idler photons, the choice seems to retroactively determine whether or not an interference pattern is observed when one correlates measurements of idler photons to the corresponding signal photons. However, since interference can be observed only after the idler photons are measured and they are correlated with the signal photons, there is no way for experimenters to tell what choice will be made in advance just by looking at the signal photons, only by gathering classical information from the entire system; thus causality is preserved.
https://en.wikipedia.org/wiki?curid=31591
37,959
On 31 July 2008, Vice Admiral Barry McCullough (Deputy Chief of Naval Operations for Integration of Resources and Capabilities) and Allison Stiller (Deputy Assistant Secretary of the Navy for Ship Programs) stated that "the DDG 1000 cannot perform area air defense; specifically, it cannot successfully employ the Standard Missile-2 (SM-2), SM-3 or SM-6 and is incapable of conducting Ballistic Missile Defense." Dan Smith, president of Raytheon's Integrated Defense Systems division, has countered that the radar and combat system are essentially the same as on other SM-2-capable ships: "I can't answer the question as to why the Navy is now asserting … that "Zumwalt" is not equipped with an SM-2 capability". The lack of anti-ballistic missile capability may reflect a lack of compatibility with SM-2/SM-3. The "Arleigh Burke"-class ships have BMD capability through their Lockheed Martin Aegis tracking and targeting software; the DDG-1000's Raytheon TSCE-I targeting and tracking software does not, as it is not yet complete. So while the DDG-1000, with its TSCE-I combat system, does have the SM-2/SM-3 missile system installed, it does not yet have the BMD/IAMD upgrade planned for the derived CG(X). The Aegis system, on the other hand, was used in the Aegis Ballistic Missile Defense System. Since Aegis has been the Navy's chief combat system for the past 30 years, when the Navy started a BMD program, the combat system it was tested on was the Aegis combat system. So while the DDG-51 platform and the DDG-1000 platform are both SM-2/SM-3 capable, as a legacy of the Aegis Ballistic Missile Defense System only the DDG-51 with the Aegis combat system is BMD capable. However, the DDG-1000's TSCE-I combat system had both BMD and IAMD upgrades planned. Combined with recent intelligence that China is developing targetable anti-ship ballistic missiles based on the DF-21, this may be considered a fatal flaw.
https://en.wikipedia.org/wiki?curid=864558
38,118
When a census is not feasible, a chosen subset of the population called a sample is studied. Once a sample that is representative of the population is determined, data is collected for the sample members in an observational or experimental setting. Again, descriptive statistics can be used to summarize the sample data. However, drawing the sample contains an element of randomness; hence, the numerical descriptors from the sample are also prone to uncertainty. To draw meaningful conclusions about the entire population, "inferential statistics" is needed. It uses patterns in the sample data to draw inferences about the population represented while accounting for randomness. These inferences may take the form of answering yes/no questions about the data (hypothesis testing), estimating numerical characteristics of the data (estimation), describing associations within the data (correlation), and modeling relationships within the data (for example, using regression analysis). Inference can extend to forecasting, prediction, and estimation of unobserved values either in or associated with the population being studied. It can include extrapolation and interpolation of time series or spatial data, and data mining.
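A compact sketch of the sample-to-population step (the data are synthetic and the population parameters are assumptions made only for the example): estimate a population mean from a random sample and attach a t-based confidence interval that accounts for the randomness introduced by sampling.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical population: values with true mean 170 (unknown in practice, assumed here).
sample = rng.normal(170.0, 10.0, size=50)        # one random sample of 50 observations

mean = sample.mean()
sem = stats.sem(sample)                          # estimated standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, len(sample) - 1, loc=mean, scale=sem)

print(f"sample mean: {mean:.1f}, 95% CI: ({ci_low:.1f}, {ci_high:.1f})")
# The interval expresses the uncertainty in inferring the population mean from this sample.
```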
https://en.wikipedia.org/wiki?curid=26685
39,641
To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define a probability density function. For example, given a wave function in position space Ψ(x, t) as above, the probability density for finding the particle at position x at time t is |Ψ(x, t)|².
https://en.wikipedia.org/wiki?curid=59874
39,694
As originally formulated, the Dirac equation is an equation for a single quantum particle, just like the single-particle Schrödinger equation with wave function Ψ(x, t). This is of limited use in relativistic quantum mechanics, where particle number is not fixed. Heuristically, this complication can be motivated by noting that mass–energy equivalence implies material particles can be created from energy. A common way to address this in QFT is to introduce a Hilbert space where the basis states are labeled by particle number, a so-called Fock space. The Schrödinger equation can then be formulated for quantum states on this Hilbert space. However, because the Schrödinger equation picks out a preferred time axis, the Lorentz invariance of the theory is no longer manifest, and accordingly, the theory is often formulated in other ways.
https://en.wikipedia.org/wiki?curid=59874
42,012
Suppose that the system has some external parameter, "x", that can be changed. In general, the energy eigenstates of the system will depend on "x". According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.
https://en.wikipedia.org/wiki?curid=133017
54,099
Mass–energy equivalence states that all objects having mass, or "massive objects", have a corresponding intrinsic energy, even when they are stationary. In the rest frame of an object, where by definition it is motionless and so has no momentum, the mass and energy are equivalent and they differ only by a constant, the speed of light squared (c²). In Newtonian mechanics, a motionless body has no kinetic energy, and it may or may not have other amounts of internal stored energy, like chemical energy or thermal energy, in addition to any potential energy it may have from its position in a field of force. These energies tend to be much smaller than the mass of the object multiplied by c², which is on the order of 10¹⁷ joules for a mass of one kilogram. Due to this principle, the mass of the atoms that come out of a nuclear reaction is less than the mass of the atoms that go in, and the difference in mass shows up as heat and light with the same equivalent energy as the difference. In analyzing these explosions, Einstein's formula E = Δmc² can be used with E as the energy released and removed, and Δm as the change in mass.
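The "order of 10¹⁷ joules" figure follows directly from the formula; as a back-of-the-envelope check (my arithmetic, not taken from the text):

```latex
E_0 = m c^2 = (1~\mathrm{kg}) \times (2.998 \times 10^{8}~\mathrm{m/s})^2
\approx 8.99 \times 10^{16}~\mathrm{J} \;\sim\; 10^{17}~\mathrm{J}.
```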
https://en.wikipedia.org/wiki?curid=422481
54,109
In some reactions, matter particles can be destroyed and their associated energy released to the environment as other forms of energy, such as light and heat. One example of such a conversion takes place in elementary particle interactions, where the rest energy is transformed into kinetic energy. Such conversions between types of energy happen in nuclear weapons, in which the protons and neutrons in atomic nuclei lose a small fraction of their original mass, though the mass lost is not due to the destruction of any smaller constituents. Nuclear fission allows a tiny fraction of the energy associated with the mass to be converted into usable energy such as radiation; in the decay of the uranium, for instance, about 0.1% of the mass of the original atom is lost. In theory, it should be possible to destroy matter and convert all of the rest-energy associated with matter into heat and light, but none of the theoretically known methods are practical. One way to harness all the energy associated with mass is to annihilate matter with antimatter. Antimatter is rare in our universe, however, and the known mechanisms of production require more usable energy than would be released in annihilation. CERN estimated in 2011 that over a billion times more energy is required to make and store antimatter than could be released in its annihilation.
https://en.wikipedia.org/wiki?curid=422481
63,014
The Fourier transform is a special case (under certain conditions) of the bilateral Laplace transform. While the Fourier transform of a function is a complex function of a "real" variable (frequency), the Laplace transform of a function is a complex function of a "complex" variable. The Laplace transform is usually restricted to transformation of functions of "t" with "t" ≥ 0. A consequence of this restriction is that the Laplace transform of a function is a holomorphic function of the variable "s". Unlike the Fourier transform, the Laplace transform of a distribution is generally a well-behaved function. Techniques of complex variables can also be used to directly study Laplace transforms. As a holomorphic function, the Laplace transform has a power series representation. This power series expresses a function as a linear superposition of moments of the function. This perspective has applications in probability theory.
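A standard textbook example makes the "complex function of a complex variable" point concrete (the usual one-sided transform of a decaying exponential; the example is mine, not taken from the text):

```latex
\mathcal{L}\{e^{-at}\}(s) = \int_{0}^{\infty} e^{-at}\, e^{-st}\, dt = \frac{1}{s+a},
\qquad \operatorname{Re}(s) > -a ,
```

a function that is holomorphic in the complex variable s on its half-plane of convergence, as described above.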
https://en.wikipedia.org/wiki?curid=18610
64,421
Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897/1903), which might be regarded as 'zero-dimensional' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity. For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long-range external forces. How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if "E" denotes the total energy of that component system, one may write E = E_kin + E_pot + U, where E_kin, E_pot and U denote respectively the kinetic energy, the potential energy and the internal energy of that component system.
https://en.wikipedia.org/wiki?curid=166404
64,455
The situation is clarified by Gyarmati, who shows that his definition of "heat transfer", for continuous-flow systems, really refers not specifically to heat, but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous flow as a system defined in the so-called Lagrangian way, moving with the local center of mass. The flow of matter across the boundary is zero when considered as a flow of total mass. Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still, there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density need not be constant per unit mass of material, and because internal energy is not conserved locally, owing to the conversion of kinetic energy of bulk flow into internal energy by viscosity.
https://en.wikipedia.org/wiki?curid=166404
66,021
The size and number of available data sets have grown rapidly as data is collected by devices such as mobile devices, cheap and numerous information-sensing Internet of things devices, aerial (remote sensing) platforms, software logs, cameras, microphones, radio-frequency identification (RFID) readers and wireless sensor networks. The world's technological per-capita capacity to store information has roughly doubled every 40 months since the 1980s; every day, about 2.5 exabytes (2.5×10¹⁸ bytes) of data are generated. An IDC report predicted that the global data volume would grow exponentially from 4.4 zettabytes to 44 zettabytes between 2013 and 2020, and by 2025, IDC predicts, there will be 163 zettabytes of data. According to IDC, global spending on big data and business analytics (BDA) solutions was estimated to reach $215.7 billion in 2021, while a Statista report forecasts the global big data market to grow to $103 billion by 2027. In 2011, McKinsey & Company reported that if US healthcare were to use big data creatively and effectively to drive efficiency and quality, the sector could create more than $300 billion in value every year. In the developed economies of Europe, government administrators could save more than €100 billion ($149 billion) in operational efficiency improvements alone by using big data, and users of services enabled by personal-location data could capture $600 billion in consumer surplus. One question for large enterprises is determining who should own big-data initiatives that affect the entire organization.
https://en.wikipedia.org/wiki?curid=27051151
66,096
In the provocative article "Critical Questions for Big Data", the authors call big data a part of mythology: "large data sets offer a higher form of intelligence and knowledge [...], with the aura of truth, objectivity, and accuracy". Users of big data are often "lost in the sheer volume of numbers", and "working with Big Data is still subjective, and what it quantifies does not necessarily have a closer claim on objective truth". Recent developments in the business-intelligence (BI) domain, such as pro-active reporting, especially target improvements in the usability of big data through automated filtering of non-useful data and correlations. Big structures are full of spurious correlations, whether because of non-causal coincidences (the law of truly large numbers), the nature of big randomness alone (Ramsey theory), or the existence of factors not included in the data; thus the hope of early experimenters that large databases of numbers would "speak for themselves" and revolutionize the scientific method is called into question. Catherine Tucker has pointed to "hype" around big data, writing "By itself, big data is unlikely to be valuable." The article explains: "The many contexts where data is cheap relative to the cost of retaining talent to process it, suggests that processing skills are more important than data itself in creating value for a firm."
https://en.wikipedia.org/wiki?curid=27051151
69,881
About 6 MeV of the fission-input energy is supplied by the simple binding of an extra neutron to the heavy nucleus via the strong force; however, in many fissionable isotopes, this amount of energy is not enough for fission. Uranium-238, for example, has a near-zero fission cross section for neutrons of less than 1 MeV energy. If no additional energy is supplied by any other mechanism, the nucleus will not fission, but will merely absorb the neutron, as happens when ²³⁸U absorbs slow and even some fraction of fast neutrons, to become ²³⁹U. The remaining energy to initiate fission can be supplied by two other mechanisms: one of these is more kinetic energy of the incoming neutron, which is increasingly able to fission a fissionable heavy nucleus as it exceeds a kinetic energy of 1 MeV or more (so-called fast neutrons). Such high-energy neutrons are able to fission ²³⁸U directly (see thermonuclear weapon for application, where the fast neutrons are supplied by nuclear fusion). However, this process cannot happen to a great extent in a nuclear reactor, as too small a fraction of the fission neutrons produced by any type of fission have enough energy to efficiently fission ²³⁸U (fission neutrons have a mode energy of 2 MeV, but a median of only 0.75 MeV, meaning half of them have less than this insufficient energy).
https://en.wikipedia.org/wiki?curid=22054
73,941
The graph of a function or relation is the set of all points satisfying that function or relation. For a function of one variable, "f", the set of all points ("x", "y"), where "y" = "f"("x"), is the graph of the function "f". For a function "g" of two variables, the set of all points ("x", "y", "z"), where "z" = "g"("x", "y"), is the graph of the function "g". A sketch of the graph of such a function or relation would consist of all the salient parts of the function or relation, which would include its relative extrema, its concavity and points of inflection, any points of discontinuity, and its end behavior. All of these terms are more fully defined in calculus. Such graphs are useful in calculus to understand the nature and behavior of a function or relation.
https://en.wikipedia.org/wiki?curid=7706
74,385
Radioactive decay results in a reduction of summed rest mass, once the released energy (the "disintegration energy") has escaped in some way. Although decay energy is sometimes defined as associated with the difference between the mass of the parent nuclide and the mass of the decay products, this is true only of rest mass measurements, where some energy has been removed from the product system. This is true because the decay energy must always carry mass with it, wherever it appears (see mass in special relativity), according to the formula "E" = "mc"². The decay energy is initially released as the energy of emitted photons plus the kinetic energy of massive emitted particles (that is, particles that have rest mass). If these particles come to thermal equilibrium with their surroundings and photons are absorbed, then the decay energy is transformed to thermal energy, which retains its mass.
https://en.wikipedia.org/wiki?curid=197767
76,781
Once processed and organized, the data may be incomplete, contain duplicates, or contain errors. The need for "data cleaning" arises from problems in the way that data are entered and stored. Data cleaning is the process of preventing and correcting these errors. Common tasks include record matching, identifying inaccuracy of data, assessing the overall quality of existing data, deduplication, and column segmentation. Such data problems can also be identified through a variety of analytical techniques. For example, with financial information, the totals for particular variables may be compared against separately published numbers that are believed to be reliable. Unusual amounts, above or below predetermined thresholds, may also be reviewed. There are several types of data cleaning that depend upon the type of data in the set; this could be phone numbers, email addresses, employers, or other values. Quantitative data methods for outlier detection can be used to get rid of data that appear to have a higher likelihood of being input incorrectly. Textual data spell checkers can be used to lessen the number of mistyped words, but it is harder to tell if the words themselves are correct.
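A minimal pandas sketch of a few of the tasks listed above (the column names, records and thresholds are invented for the example): deduplicate records, flag values that fail a simple validity check, and flag amounts above a predetermined threshold for review.

```python
import pandas as pd

# Hypothetical raw records containing a duplicate, a malformed email and an implausible amount.
df = pd.DataFrame({
    "customer_id": [1, 1, 2, 3, 4],
    "email": ["a@x.com", "a@x.com", "b@y.com", "not-an-email", "d@z.com"],
    "amount": [120.0, 120.0, 95.0, 1_000_000.0, 88.0],
})

df = df.drop_duplicates()                                          # deduplication / record matching
bad_email = ~df["email"].str.match(r"[^@\s]+@[^@\s]+\.[^@\s]+$")   # simple validity check
too_large = df["amount"] > 10_000                                  # above a predetermined threshold

print(df[bad_email | too_large])   # rows flagged for review rather than silently dropped
```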
https://en.wikipedia.org/wiki?curid=2720954
79,504
Functional programming has its roots in academia, evolving from the lambda calculus, a formal system of computation based only on functions. Functional programming has historically been less popular than imperative programming, but many functional languages are seeing use today in industry and education, including Common Lisp, Scheme, Clojure, Wolfram Language, Racket, Erlang, Elixir, OCaml, Haskell, and F#. Functional programming is also key to some languages that have found success in specific domains, like JavaScript in the Web, R in statistics, J, K and Q in financial analysis, and XQuery/XSLT for XML. Domain-specific declarative languages like SQL and Lex/Yacc use some elements of functional programming, such as not allowing mutable values. In addition, many other programming languages support programming in a functional style or have implemented features from functional programming, such as C++11, C#, Kotlin, Perl, PHP, Python, Go, Rust, Raku, Scala, and Java (since Java 8).
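The flavour of the functional style can be shown in Python, one of the languages mentioned above as supporting a functional style; this is an illustrative sketch using pure functions, immutable data and a fold, not idiomatic code from any particular functional language.

```python
from functools import reduce

# Pure functions: the result depends only on the arguments, and nothing is mutated.
def square(x):
    return x * x

def is_odd(x):
    return x % 2 == 1

numbers = (1, 2, 3, 4, 5)                                       # an immutable tuple of values

squares_of_odds = tuple(map(square, filter(is_odd, numbers)))   # (1, 9, 25)
total = reduce(lambda acc, x: acc + x, squares_of_odds, 0)      # a fold instead of a mutating loop

print(squares_of_odds, total)                                   # (1, 9, 25) 35
```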
https://en.wikipedia.org/wiki?curid=10933
81,289
These quarks and leptons interact through four fundamental forces: gravity, electromagnetism, weak interactions, and strong interactions. The Standard Model of particle physics is currently the best explanation for all of physics, but despite decades of efforts, gravity cannot yet be accounted for at the quantum level; it is only described by classical physics (see quantum gravity and graviton) to the frustration of theoreticians like Stephen Hawking. Interactions between quarks and leptons are the result of an exchange of force-carrying particles such as photons between quarks and leptons. The force-carrying particles are not themselves building blocks. As one consequence, mass and energy (which to our present knowledge cannot be created or destroyed) cannot always be related to matter (which can be created out of non-matter particles such as photons, or even out of pure energy, such as kinetic energy). Force mediators are usually not considered matter: the mediators of the electric force (photons) possess energy (see Planck relation) and the mediators of the weak force (W and Z bosons) have mass, but neither are considered matter either. However, while these quanta are not considered matter, they do contribute to the total mass of atoms, subatomic particles, and all systems that contain them.
https://en.wikipedia.org/wiki?curid=19673093
82,598
Electrons that are bound in atoms, molecules and solids each occupy distinct states of well-defined binding energies. When light quanta of energy hν deliver more than this amount of energy to an individual electron, the electron may be emitted into free space with the excess as kinetic energy. The distribution of kinetic energies thus reflects the distribution of the binding energies of the electrons in the atomic, molecular or crystalline system: an electron emitted from the state at binding energy E_B is found at kinetic energy E_kin = hν − E_B. This distribution is one of the main characteristics of the quantum system, and can be used for further studies in quantum chemistry and quantum physics.
https://en.wikipedia.org/wiki?curid=23579
89,050
The cell membrane (also known as the plasma membrane (PM) or cytoplasmic membrane, and historically referred to as the plasmalemma) is a biological membrane that separates and protects the interior of all cells from the outside environment (the extracellular space). The cell membrane consists of a lipid bilayer, made up of two layers of phospholipids with cholesterols (a lipid component) interspersed between them, maintaining appropriate membrane fluidity at various temperatures. The membrane also contains membrane proteins, including integral proteins that span the membrane and serve as membrane transporters, and peripheral proteins that loosely attach to the outer (peripheral) side of the cell membrane, acting as enzymes to facilitate interaction with the cell's environment. Glycolipids embedded in the outer lipid layer serve a similar purpose. The cell membrane controls the movement of substances in and out of cells and organelles, being selectively permeable to ions and organic molecules. In addition, cell membranes are involved in a variety of cellular processes such as cell adhesion, ion conductivity, and cell signalling and serve as the attachment surface for several extracellular structures, including the cell wall and the carbohydrate layer called the glycocalyx, as well as the intracellular network of protein fibers called the cytoskeleton. In the field of synthetic biology, cell membranes can be artificially reassembled.
https://en.wikipedia.org/wiki?curid=33051527
92,850
The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, express only the IgM isotype in a cell surface bound form. The B lymphocyte, in this ready-to-respond form, is known as a "naive B lymphocyte." The naive B lymphocyte expresses both surface IgM and IgD. The co-expression of both of these immunoglobulin isotypes renders the B cell ready to respond to antigen. B cell activation follows engagement of the cell-bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody-producing cell called a plasma cell. In this activated form, the B cell starts to produce antibody in a secreted form rather than a membrane-bound form. Some daughter cells of the activated B cells undergo isotype switching, a mechanism that causes the production of antibodies to change from IgM or IgD to the other antibody isotypes, IgE, IgA, or IgG, that have defined roles in the immune system.
https://en.wikipedia.org/wiki?curid=2362
93,760
Besides the probability function, the cumulative distribution function, the probability mass function and the probability density function, the moment generating function and the characteristic function also serve to identify a probability distribution, as they uniquely determine an underlying cumulative distribution function.
https://en.wikipedia.org/wiki?curid=23543
97,444
Killer T cells are a sub-group of T cells that kill cells that are infected with viruses (and other pathogens), or are otherwise damaged or dysfunctional. As with B cells, each type of T cell recognizes a different antigen. Killer T cells are activated when their T-cell receptor binds to this specific antigen in a complex with the MHC Class I receptor of another cell. Recognition of this MHC:antigen complex is aided by a co-receptor on the T cell, called CD8. The T cell then travels throughout the body in search of cells where the MHC I receptors bear this antigen. When an activated T cell contacts such cells, it releases cytotoxins, such as perforin, which form pores in the target cell's plasma membrane, allowing ions, water and toxins to enter. The entry of another toxin called granulysin (a protease) induces the target cell to undergo apoptosis. T cell killing of host cells is particularly important in preventing the replication of viruses. T cell activation is tightly controlled and generally requires a very strong MHC/antigen activation signal, or additional activation signals provided by "helper" T cells (see below).
https://en.wikipedia.org/wiki?curid=14958
98,170
In probability theory and statistics, the Bernoulli distribution, named after Swiss mathematician Jacob Bernoulli, is the discrete probability distribution of a random variable which takes the value 1 with probability "p" and the value 0 with probability "q" = 1 − "p". Less formally, it can be thought of as a model for the set of possible outcomes of any single experiment that asks a yes–no question. Such questions lead to outcomes that are boolean-valued: a single bit whose value is success/yes/true/one with probability "p" and failure/no/false/zero with probability "q". It can be used to represent a (possibly biased) coin toss where 1 and 0 would represent "heads" and "tails", respectively, and "p" would be the probability of the coin landing on heads (or vice versa, where 1 would represent tails and "p" would be the probability of tails). In particular, unfair coins would have "p" ≠ 1/2.
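A short numerical check of the definition (p chosen arbitrarily for illustration): sample a biased coin with p = 0.3 and compare the empirical frequency and variance with the theoretical p and p(1 − p).

```python
import numpy as np

p = 0.3                                          # probability of the outcome 1 ("success")
rng = np.random.default_rng(1)
draws = rng.binomial(n=1, p=p, size=100_000)     # a Bernoulli trial is a binomial trial with n = 1

print("empirical P(1):", draws.mean())           # close to p = 0.3
print("empirical variance:", draws.var())        # close to p * (1 - p) = 0.21
```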
https://en.wikipedia.org/wiki?curid=199189
100,437
The function "A"("t" | "ν") is the integral of Student's probability density function, "f"("t") between −"t" and "t", for "t" ≥ 0. It thus gives the probability that a value of "t" less than that calculated from observed data would occur by chance. Therefore, the function "A"("t" | "ν") can be used when testing whether the difference between the means of two sets of data is statistically significant, by calculating the corresponding value of "t" and the probability of its occurrence if the two sets of data were drawn from the same population. This is used in a variety of situations, particularly in "t"-tests. For the statistic "t", with "ν" degrees of freedom, "A"("t" | "ν") is the probability that "t" would be less than the observed value if the two means were the same (provided that the smaller mean is subtracted from the larger, so that "t" ≥ 0). It can be easily calculated from the cumulative distribution function "F"("t") of the "t"-distribution:
https://en.wikipedia.org/wiki?curid=105375
101,036
Considering the system of bodies as the combined set of small particles the bodies consist of, and applying the preceding reasoning at the particle level, we get the negative gravitational binding energy. This potential energy is more strongly negative than the total potential energy of the system of bodies as such, since it also includes the negative gravitational binding energy of each body. The potential energy of the system of bodies as such is the negative of the energy needed to separate the bodies from each other to infinity, while the gravitational binding energy is the energy needed to separate all particles from each other to infinity.
https://en.wikipedia.org/wiki?curid=23703
101,502
A sonar target is small relative to the sphere, centred around the emitter, on which it is located. Therefore, the power of the reflected signal is very low, several orders of magnitude less than that of the original signal. Even if the reflected signal were of the same power, the following example (using hypothetical values) shows the problem: Suppose a sonar system is capable of emitting a 10,000 W/m² signal at 1 m, and detecting a 0.001 W/m² signal. At 100 m the signal will be 1 W/m² (due to the inverse-square law). If the entire signal is reflected from a 10 m² target, it will be at 0.001 W/m² when it reaches the emitter, i.e. just detectable. However, the original signal will remain above 0.001 W/m² until 3000 m. Any 10 m² target between 100 and 3000 m using a similar or better system would be able to detect the pulse, but would not be detected by the emitter. The detectors must be very sensitive to pick up the echoes. Since the original signal is much more powerful, it can be detected many times further than twice the range of the sonar (as in the example).
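The worked numbers in this hypothetical example follow from the inverse-square law and can be reproduced directly; the sketch below simply redoes the arithmetic under the same modelling convention as the text (the reflected power is treated as a new source of 10 W/m² referenced to 1 m).

```python
def intensity(reference_w_per_m2_at_1m, range_m):
    """Inverse-square spreading of intensity with range (absorption ignored)."""
    return reference_w_per_m2_at_1m / range_m ** 2

source = 10_000.0      # emitted intensity, W/m^2 referenced to 1 m
threshold = 0.001      # minimum detectable intensity, W/m^2

at_target = intensity(source, 100)                 # 1 W/m^2 arriving at the target 100 m away
reflected_power = at_target * 10                   # a 10 m^2 target intercepts 10 W
echo_at_emitter = intensity(reflected_power, 100)  # 0.001 W/m^2 back at the emitter: just detectable

one_way_limit = (source / threshold) ** 0.5        # ~3162 m, the "until 3000 m" figure in the text
print(at_target, echo_at_emitter, round(one_way_limit))
```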
https://en.wikipedia.org/wiki?curid=29438
102,969
Volkswagen's Rudolf Krebs said in 2013 that "no matter how excellent you make the cars themselves, the laws of physics hinder their overall efficiency. The most efficient way to convert energy to mobility is electricity." He elaborated: "Hydrogen mobility only makes sense if you use green energy", but ... you need to convert it first into hydrogen "with low efficiencies" where "you lose about 40 percent of the initial energy". You then must compress the hydrogen and store it under high pressure in tanks, which uses more energy. "And then you have to convert the hydrogen back to electricity in a fuel cell with another efficiency loss". Krebs continued: "in the end, from your original 100 percent of electric energy, you end up with 30 to 40 percent." In 2015, "CleanTechnica" listed some of the disadvantages of hydrogen fuel cell vehicles. A 2016 study in "Energy" by scientists at Stanford University and the Technical University of Munich concluded that, even assuming local hydrogen production, "investing in all-electric battery vehicles is a more economical choice for reducing carbon dioxide emissions".
https://en.wikipedia.org/wiki?curid=188545
106,590
If one used Planck's energy quanta, and demanded that electromagnetic radiation at a given frequency could only transfer energy to matter in integer multiples of an energy quantum "hf", then the photoelectric effect could be explained very simply. Low-frequency light only ejects low-energy electrons because each electron is excited by the absorption of a single photon. Increasing the intensity of the low-frequency light (increasing the number of photons) only increases the number of excited electrons, not their energy, because the energy of each photon remains low. Only by increasing the frequency of the light, and thus increasing the energy of the photons, can one eject electrons with higher energy. Thus, using Planck's constant "h" to determine the energy of the photons based upon their frequency, the energy of ejected electrons should also increase linearly with frequency, the gradient of the line being Planck's constant. These results were not confirmed until 1915, when Robert Andrews Millikan produced experimental results in perfect accord with Einstein's predictions.
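Stated as an equation (the standard photoelectric relation, consistent with the description above): the maximum kinetic energy of an ejected electron grows linearly with the light's frequency f, and the slope of that line is Planck's constant.

```latex
E_{\text{kin,max}} = h f - \phi ,
```

where φ is the work function of the material; plotting E_kin,max against f gives a straight line whose gradient is h, which is what Millikan's measurements confirmed.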
https://en.wikipedia.org/wiki?curid=33426
106,615
Wave–particle duality is deeply embedded into the foundations of quantum mechanics. In the formalism of the theory, all the information about a particle is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. This function evolves according to the Schrödinger equation. For particles with mass, this equation has solutions that follow the form of the wave equation. Propagation of such waves leads to wave-like phenomena such as interference and diffraction. Particles without mass, like photons, have no solutions of the Schrödinger equation. Instead of a particle wave function that localizes mass in space, a photon wave function can be constructed from Einstein kinematics to localize energy in spatial coordinates.
https://en.wikipedia.org/wiki?curid=33426