| text | source |
|---|---|
Servo bandwidth is the maximum trackable sinusoidal frequency of amplitude A, with tracking considered achieved while the tracking error remains within 10% of A. The servo bandwidth indicates the capability of the servo to follow rapid changes in the commanded input. It is usually specified as a frequency in hertz or radians per second | https://huggingface.co/datasets/fmars/wiki_stem |
In control engineering, a servomechanism, usually shortened to servo, is an automatic device that uses error-sensing negative feedback to correct the action of a mechanism. In displacement-controlled applications, it usually includes a built-in encoder or other position feedback mechanism to ensure the output is achieving the desired effect. The term correctly applies only to systems where the feedback or error-correction signals help control mechanical position, speed, attitude or any other measurable variables | https://huggingface.co/datasets/fmars/wiki_stem |
A set-valued function (or correspondence) is a mathematical function that maps elements from one set, the domain of the function, to subsets of another set. Set-valued functions are used in a variety of mathematical fields, including optimization, control theory and game theory.
Set-valued functions are also known as multivalued functions in some references, but herein and in many other references in mathematical analysis, a multivalued function is a set-valued function f that has a further continuity property, namely that the choice of an element in the set {\displaystyle f(x)} defines a corresponding element in each set {\displaystyle f(y)} for y close to x, and thus defines locally an ordinary function | https://huggingface.co/datasets/fmars/wiki_stem |
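The definition can be sketched in Python (a minimal illustration with hypothetical helper names, using the two-valued real square root as the correspondence):

```python
import math

def sqrt_correspondence(x):
    """Set-valued function F(x) = {y : y**2 == x} on the reals."""
    if x < 0:
        return set()       # no real square roots: the empty set
    if x == 0:
        return {0.0}
    r = math.sqrt(x)
    return {r, -r}         # two elements for x > 0

def positive_branch(x):
    """A continuous local selection: always choose the non-negative
    element of F(x), which defines an ordinary function locally."""
    return max(sqrt_correspondence(x))
```

Picking one element per set so that the choice varies continuously (here, the positive branch) is exactly the continuity property the text describes.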
In optimal control, problems of singular control are problems that are difficult to solve because a straightforward application of Pontryagin's minimum principle fails to yield a complete solution. Only a few such problems have been solved, such as Merton's portfolio problem in financial economics or trajectory optimization in aeronautics. A more technical explanation follows | https://huggingface.co/datasets/fmars/wiki_stem |
The Smith predictor (invented by O. J. M. Smith) is a type of predictive controller designed to control systems with a significant time delay | https://huggingface.co/datasets/fmars/wiki_stem |
Space vector modulation (SVM) is an algorithm for the control of pulse-width modulation (PWM). It is used for the creation of alternating current (AC) waveforms; most commonly to drive 3 phase AC powered motors at varying speeds from DC using multiple class-D amplifiers. There are variations of SVM that result in different quality and computational requirements | https://huggingface.co/datasets/fmars/wiki_stem |
In systems theory, a system or a process is in a steady state if the variables (called state variables) which define the behavior of the system or the process are unchanging in time. In continuous time, this means that for those properties p of the system, the partial derivative with respect to time is zero and remains so:
{\displaystyle {\frac {\partial p}{\partial t}}=0\quad {\text{for all present and future }}t.} | https://huggingface.co/datasets/fmars/wiki_stem |
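This criterion can be checked numerically; a sketch assuming simple first-order dynamics dp/dt = -k (p - p_ss), with hypothetical names:

```python
def simulate_to_steady_state(p0, p_ss, k=1.0, dt=0.01, tol=1e-8, max_steps=10_000):
    """Forward-Euler integration of dp/dt = -k * (p - p_ss).
    Stops once |dp/dt| < tol, i.e. the steady-state condition
    'the time derivative is zero and remains so' holds numerically."""
    p = p0
    dp_dt = -k * (p - p_ss)
    for _ in range(max_steps):
        dp_dt = -k * (p - p_ss)
        if abs(dp_dt) < tol:
            break
        p += dt * dp_dt
    return p, dp_dt

p, dp_dt = simulate_to_steady_state(p0=5.0, p_ss=2.0)
```

The interval before the derivative settles to zero is the transient discussed later in this section.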
Stochastic control or stochastic optimal control is a subfield of control theory that deals with the existence of uncertainty either in observations or in the noise that drives the evolution of the system. The system designer assumes, in a Bayesian probability-driven fashion, that random noise with known probability distribution affects the evolution and observation of the state variables. Stochastic control aims to design the time path of the controlled variables that performs the desired control task with minimum cost, somehow defined, despite the presence of this noise | https://huggingface.co/datasets/fmars/wiki_stem |
Supervisory control is a general term for control of many individual controllers or control loops, such as within a distributed control system. It refers to a high level of overall monitoring of individual process controllers, which is not necessary for the operation of each controller, but gives the operator an overall plant process view, and allows integration of operation between controllers.
A more specific use of the term is for a Supervisory Control and Data Acquisition system or SCADA, which refers to a specific class of system for use in process control, often on fairly small and remote applications such as a pipeline transport, water distribution, or wastewater utility system station | https://huggingface.co/datasets/fmars/wiki_stem |
The supervisory control theory (SCT), also known as the Ramadge–Wonham framework (RW framework), is a method for automatically synthesizing supervisors that restrict the behavior of a plant such that as much as possible of the given specifications are fulfilled. The plant is assumed to spontaneously generate events. The events fall into one of two categories: controllable or uncontrollable | https://huggingface.co/datasets/fmars/wiki_stem |
In mathematics, in the field of control theory, a Sylvester equation is a matrix equation of the form:
{\displaystyle AX+XB=C.}
It is named after English mathematician James Joseph Sylvester | https://huggingface.co/datasets/fmars/wiki_stem |
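A small sketch of solving the Sylvester equation, assuming NumPy is available and using the vectorization identity vec(AX + XB) = (I ⊗ A + Bᵀ ⊗ I) vec(X) (the function name is hypothetical; for production use, `scipy.linalg.solve_sylvester` implements a more efficient Bartels–Stewart approach):

```python
import numpy as np

def solve_sylvester_kron(A, B, C):
    """Solve AX + XB = C via the Kronecker-product linear system
    (I_n (x) A + B^T (x) I_m) vec(X) = vec(C), with column-major vec.
    Unique solution exists when A and -B share no eigenvalues."""
    m, n = A.shape[0], B.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(B.T, np.eye(m))
    x = np.linalg.solve(K, C.flatten(order="F"))
    return x.reshape((m, n), order="F")

A = np.array([[3.0, 1.0], [0.0, 2.0]])
B = np.array([[4.0, 0.0], [1.0, 5.0]])
C = np.array([[1.0, 2.0], [3.0, 4.0]])
X = solve_sylvester_kron(A, B, C)
```

The Kronecker route is O((mn)^3) and only suitable for small matrices, but it makes the linear-algebraic structure of the equation explicit.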
System analysis in the field of electrical engineering characterizes electrical systems and their properties. System analysis can be used to represent almost anything from population growth to audio speakers; electrical engineers often use it because of its direct relevance to many areas of their discipline, most notably signal processing, communication systems and control systems.
Characterization of systems
A system is characterized by how it responds to input signals | https://huggingface.co/datasets/fmars/wiki_stem |
In mathematics, the tensor product (TP) model transformation was proposed by Baranyi and Yam as a key concept for higher-order singular value decomposition of functions. It transforms a function (which can be given via closed formulas or neural networks, fuzzy logic, etc.) into TP function form if such a transformation is possible | https://huggingface.co/datasets/fmars/wiki_stem |
In the early 1990s, a new type of sliding mode control, named terminal sliding modes (TSM), was invented at the Jet Propulsion Laboratory (JPL) by Venkataraman and Gulati. TSM is a robust non-linear control approach.
The main idea of terminal sliding mode control evolved out of seminal work on terminal attractors done by Zak in the JPL, and is evoked by the concept of terminal attractors which guarantee finite time convergence of the states | https://huggingface.co/datasets/fmars/wiki_stem |
In control theory, a time-invariant (TI) system has a time-dependent system function that is not a direct function of time. Such systems are regarded as a class of systems in the field of system analysis. The time-dependent system function is a function of the time-dependent input function | https://huggingface.co/datasets/fmars/wiki_stem |
A time-variant system is a system whose output response depends on moment of observation as well as moment of input signal application. In other words, a time delay or time advance of input not only shifts the output signal in time but also changes other parameters and behavior. Time variant systems respond differently to the same input at different times | https://huggingface.co/datasets/fmars/wiki_stem |
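A toy discrete-time illustration of this (hypothetical system and helper names): the system y[n] = n·x[n] has a gain that grows with time, so delaying the input does not simply delay the output.

```python
def time_varying_gain(x):
    """A time-variant system: y[n] = n * x[n]; the response depends on
    the moment of observation, not only on the input values."""
    return [n * xn for n, xn in enumerate(x)]

def shift(x, d):
    """Delay a finite sequence by d samples (zero padding, same length)."""
    return [0] * d + x[:len(x) - d]

x = [1, 0, 0, 0]                            # unit impulse at n = 0
y = time_varying_gain(x)                    # -> [0, 0, 0, 0]
y_delayed = time_varying_gain(shift(x, 2))  # impulse at n = 2 -> [0, 0, 2, 0]
```

For a time-invariant system, `shift(y, 2)` would equal `y_delayed`; here it does not, which is precisely the time-variance the paragraph describes.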
Baranyi and Yam proposed the TP model transformation as a new concept in quasi-LPV (qLPV) based control, which plays a central role in the highly desirable bridging between identification and polytopic systems theories. It is also used as a TS (Takagi-Sugeno) fuzzy model transformation. It is uniquely effective in manipulating the convex hull of polytopic forms (or TS fuzzy models), and, hence, has revealed and proved the fact that convex hull manipulation is a necessary and crucial step in achieving optimal solutions and decreasing conservativeness in modern linear matrix inequality based control theory | https://huggingface.co/datasets/fmars/wiki_stem |
In control system theory, and various branches of engineering, a transfer function matrix, or just transfer matrix is a generalisation of the transfer functions of single-input single-output (SISO) systems to multiple-input and multiple-output (MIMO) systems. The matrix relates the outputs of the system to its inputs. It is a particularly useful construction for linear time-invariant (LTI) systems because it can be expressed in terms of the s-plane | https://huggingface.co/datasets/fmars/wiki_stem |
In electrical engineering and mechanical engineering, a transient response is the response of a system to a change from an equilibrium or a steady state. The transient response is not necessarily tied to abrupt events but to any event that affects the equilibrium of the system. The impulse response and step response are transient responses to a specific input (an impulse and a step, respectively) | https://huggingface.co/datasets/fmars/wiki_stem |
A system is said to be transient or in a transient state when a process variable or variables have been changed and the system has not yet reached a steady state. The time taken for the circuit to change from one steady state to another steady state is called the transient time.
Examples
Chemical Engineering
When a chemical reactor is being brought into operation, the concentrations, temperatures, species compositions, and reaction rates are changing with time until operation reaches its nominal process variables | https://huggingface.co/datasets/fmars/wiki_stem |
Underactuation is a technical term used in robotics and control theory to describe mechanical systems that cannot be commanded to follow arbitrary trajectories in configuration space. This condition can occur for a number of reasons, the simplest of which is when the system has a lower number of actuators than degrees of freedom. In this case, the system is said to be trivially underactuated | https://huggingface.co/datasets/fmars/wiki_stem |
The term unicycle is often used in robotics and control theory to mean a generalised cart or car moving in a two-dimensional world; these are also often called "unicycle-like" or "unicycle-type" vehicles. This usage is distinct from the literal sense of "one wheeled robot bicycle".
These theoretical vehicles are typically shown as having two parallel driven wheels, one mounted on each side of their centre, and (presumably) some sort of offset castor to maintain balance; although in general they could be any vehicle capable of simultaneous arbitrary rotation and translation | https://huggingface.co/datasets/fmars/wiki_stem |
The unscented transform (UT) is a mathematical function used to estimate the result of applying a given nonlinear transformation to a probability distribution that is characterized only in terms of a finite set of statistics. The most common use of the unscented transform is in the nonlinear projection of mean and covariance estimates in the context of nonlinear extensions of the Kalman filter. Its creator Jeffrey Uhlmann explained that "unscented" was an arbitrary name that he adopted to avoid it being referred to as the "Uhlmann filter", though others have indicated that "unscented" is a contrast to "scented" intended as a euphemism for "stinky".
Background
Many filtering and control methods represent estimates of the state of a system in the form of a mean vector and an associated error covariance matrix | https://huggingface.co/datasets/fmars/wiki_stem |
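A minimal sketch of the symmetric sigma-point form of the UT (the simplest variant, with 2n equally weighted points; NumPy assumed, function name hypothetical). For a linear map the transform reproduces the mean and covariance exactly:

```python
import numpy as np

def unscented_transform(mean, cov, f):
    """Propagate (mean, cov) through the nonlinearity f using the
    symmetric sigma-point set x_i = mean +/- column_i of sqrt(n*cov),
    each with weight 1/(2n)."""
    n = len(mean)
    S = np.linalg.cholesky(n * cov)          # a matrix square root of n*cov
    sigma = [mean + S[:, i] for i in range(n)] + \
            [mean - S[:, i] for i in range(n)]
    ys = np.array([f(x) for x in sigma])
    y_mean = ys.mean(axis=0)                 # equal weights 1/(2n)
    diff = ys - y_mean
    y_cov = diff.T @ diff / (2 * n)
    return y_mean, y_cov

mean = np.array([1.0, 2.0])
cov = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 1.0], [0.0, 2.0]])
y_mean, y_cov = unscented_transform(mean, cov, lambda x: A @ x)
```

Practical unscented Kalman filters usually add a weighted center point and tuning parameters, but the sigma-point idea is the same.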
In continuum mechanics, viscous damping is a formulation of the damping phenomena, in which the source of damping force is modeled as a function of the volume, shape, and velocity of an object traversing through a real fluid with viscosity. Typical examples of viscous damping in mechanical systems include:
Fluid films between surfaces
Fluid flow around a piston in a cylinder
Fluid flow through an orifice
Fluid flow within a journal bearing
Viscous damping also refers to damping devices. Most often they damp motion by providing a force or torque opposing motion proportional to the velocity | https://huggingface.co/datasets/fmars/wiki_stem |
The weighted product model (WPM) is a popular multi-criteria decision analysis (MCDA) / multi-criteria decision making (MCDM) method. It is similar to the weighted sum model (WSM). The main difference is that instead of addition in the main mathematical operation, there is multiplication | https://huggingface.co/datasets/fmars/wiki_stem |
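The multiplicative comparison at the heart of WPM can be sketched in a few lines of Python (hypothetical function name; benefit criteria and positive values assumed):

```python
def wpm_ratio(a, b, weights):
    """Weighted product model: compare alternatives A and B as
    P(A/B) = prod_j (a_j / b_j) ** w_j.
    A ratio >= 1 means A is at least as good as B."""
    ratio = 1.0
    for aj, bj, wj in zip(a, b, weights):
        ratio *= (aj / bj) ** wj
    return ratio
```

Because the criteria enter as dimensionless ratios raised to the weights, WPM is unaffected by the units in which each criterion is measured, unlike the plain weighted sum.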
In decision theory, the weighted sum model (WSM), also called weighted linear combination (WLC) or simple additive weighting (SAW), is the best known and simplest multi-criteria decision analysis (MCDA) / multi-criteria decision making method for evaluating a number of alternatives in terms of a number of decision criteria.
Description
In general, suppose that a given MCDA problem is defined on m alternatives and n decision criteria. Furthermore, let us assume that all the criteria are benefit criteria, that is, the higher the values are, the better it is | https://huggingface.co/datasets/fmars/wiki_stem |
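For m alternatives and n benefit criteria, the WSM score of alternative i is the weighted sum of its criterion values. A short Python sketch with a hypothetical decision matrix:

```python
def wsm_best(performance, weights):
    """Weighted sum model: score alternative i as sum_j w_j * a_ij
    (benefit criteria: higher is better); return (best_index, scores)."""
    scores = [sum(w * a for w, a in zip(weights, row)) for row in performance]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores

# Hypothetical data: 3 alternatives rated on 4 benefit criteria.
perf = [[25, 20, 15, 30],
        [10, 30, 20, 30],
        [30, 10, 30, 20]]
weights = [0.20, 0.15, 0.40, 0.25]
best, scores = wsm_best(perf, weights)
```

Note the additive form implicitly assumes all criteria are expressed in commensurate units; when they are not, the weighted product model above avoids the issue.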
Witsenhausen's counterexample, shown in the figure below, is a deceptively simple toy problem in decentralized stochastic control. It was formulated by Hans Witsenhausen in 1968. It is a counterexample to a natural conjecture that one can generalize a key result of centralized linear–quadratic–Gaussian control systems—that in a system with linear dynamics, Gaussian disturbance, and quadratic cost, affine (linear) control laws are optimal—to decentralized systems | https://huggingface.co/datasets/fmars/wiki_stem |
In control theory the Youla–Kučera parametrization (also simply known as Youla parametrization) is a formula that describes all possible stabilizing feedback controllers for a given plant P, as function of a single parameter Q.
Details
The YK parametrization is a general result. It is a fundamental result of control theory and launched an entirely new area of research and found application, among others, in optimal and robust control | https://huggingface.co/datasets/fmars/wiki_stem |
The zero-order hold (ZOH) is a mathematical model of the practical signal reconstruction done by a conventional digital-to-analog converter (DAC). That is, it describes the effect of converting a discrete-time signal to a continuous-time signal by holding each sample value for one sample interval. It has several applications in electrical communication | https://huggingface.co/datasets/fmars/wiki_stem |
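The hold operation itself is easy to model in discrete time (a sketch with hypothetical names: each sample is simply repeated for the duration of its interval, producing the familiar staircase waveform):

```python
def zero_order_hold(samples, hold_factor):
    """Model a DAC's zero-order hold: hold each discrete sample value
    constant for one sample interval (hold_factor output points)."""
    out = []
    for s in samples:
        out.extend([s] * hold_factor)
    return out

staircase = zero_order_hold([1.0, 3.0, 2.0], 4)
```

In the frequency domain this staircase corresponds to filtering the ideal impulse train with a sinc-shaped response, which is why ZOH reconstruction attenuates high frequencies.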
In physics, critical phenomena is the collective name associated with the physics of critical points. Most of them stem from the divergence of the correlation length, but the dynamics also slows down. Critical phenomena include scaling relations among different quantities, power-law divergences of some quantities (such as the magnetic susceptibility in the ferromagnetic phase transition) described by critical exponents, universality, fractal behaviour, and ergodicity breaking | https://huggingface.co/datasets/fmars/wiki_stem |
Multicritical points are special points in the parameter space of thermodynamic or other systems with a continuous phase transition. At least two thermodynamic or other parameters must be adjusted to reach a multicritical point. At a multicritical point the system belongs to a universality class different from the "normal" universality class | https://huggingface.co/datasets/fmars/wiki_stem |
The Abelian sandpile model (ASM) is the more popular name of the original Bak–Tang–Wiesenfeld model (BTW). The BTW model was the first discovered example of a dynamical system displaying self-organized criticality. It was introduced by Per Bak, Chao Tang and Kurt Wiesenfeld in a 1987 paper | https://huggingface.co/datasets/fmars/wiki_stem |
Conductivity near the percolation threshold in physics occurs in a mixture between a dielectric and a metallic component. The conductivity {\displaystyle \sigma } and the dielectric constant {\displaystyle \epsilon } of this mixture show a critical behavior if the fraction of the metallic component reaches the percolation threshold. The behavior of the conductivity near this percolation threshold shows a smooth changeover from the conductivity of the dielectric component to the conductivity of the metallic component | https://huggingface.co/datasets/fmars/wiki_stem |
In the renormalization group analysis of phase transitions in physics, a critical dimension is the dimensionality of space at which the character of the phase transition changes. Below the lower critical dimension there is no phase transition. Above the upper critical dimension the critical exponents of the theory become the same as that in mean field theory | https://huggingface.co/datasets/fmars/wiki_stem |
Critical exponents describe the behavior of physical quantities near continuous phase transitions. It is believed, though not proven, that they are universal, i.e. they do not depend on the details of the physical system | https://huggingface.co/datasets/fmars/wiki_stem |
In thermodynamics, a critical point (or critical state) is the end point of a phase equilibrium curve. One example is the liquid–vapor critical point, the end point of the pressure–temperature curve that designates conditions under which a liquid and its vapor can coexist. At higher temperatures, the gas cannot be liquefied by pressure alone | https://huggingface.co/datasets/fmars/wiki_stem |
Critical radius is the minimum particle size from which an aggregate is thermodynamically stable. In other words, it is the lowest radius formed by atoms or molecules clustering together (in a gas, liquid or solid matrix) before a new phase inclusion (a bubble, a droplet or a solid particle) is viable and begins to grow. Formation of such stable nuclei is called nucleation | https://huggingface.co/datasets/fmars/wiki_stem |
Critical value may refer to:
In differential topology, a critical value of a differentiable function ƒ : M → N between differentiable manifolds is the image ƒ(x) in N of a critical point x in M.
In statistical hypothesis testing, the critical values of a statistical test are the boundaries of the acceptance region of the test. The acceptance region is the set of values of the test statistic for which the null hypothesis is not rejected | https://huggingface.co/datasets/fmars/wiki_stem |
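In the statistical sense, the critical value of a z-test can be computed directly from the standard normal quantile function, which Python's standard library exposes (a sketch; the function name is hypothetical):

```python
from statistics import NormalDist  # standard library, Python 3.8+

def z_critical_value(alpha, two_sided=True):
    """Critical value of a z-test: the boundary of the acceptance region
    at significance level alpha. H0 is rejected when the test statistic
    falls beyond this boundary."""
    p = 1 - alpha / 2 if two_sided else 1 - alpha
    return NormalDist().inv_cdf(p)
```

For example, `z_critical_value(0.05)` gives the familiar two-sided 5% boundary of about 1.96.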
In physics and materials science, the Curie temperature (TC), or Curie point, is the temperature above which certain materials lose their permanent magnetic properties, which can (in most cases) be replaced by induced magnetism. The Curie temperature is named after Pierre Curie, who showed that magnetism was lost at a critical temperature. The force of magnetism is determined by the magnetic moment, a dipole moment within an atom which originates from the angular momentum and spin of electrons | https://huggingface.co/datasets/fmars/wiki_stem |
In statistical physics, directed percolation (DP) refers to a class of models that mimic filtering of fluids through porous materials along a given direction, due to the effect of gravity. Varying the microscopic connectivity of the pores, these models display a phase transition from a macroscopically permeable (percolating) to an impermeable (non-percolating) state. Directed percolation is also used as a simple model for epidemic spreading with a transition between survival and extinction of the disease depending on the infection rate | https://huggingface.co/datasets/fmars/wiki_stem |
The term Fermi point has two applications but refers to the same phenomenon (special relativity):
Fermi point (quantum field theory)
Fermi point (nanotechnology)
For both applications it holds that the symmetry between particles and anti-particles in weak interactions is violated:
At this point the particle energy E = cp is zero.
In nanotechnology this concept can be applied to electron behavior. An electron as a single particle is a fermion obeying the Pauli exclusion principle | https://huggingface.co/datasets/fmars/wiki_stem |
The λ (lambda) universality class is a universality class in condensed matter physics. It groups together several systems possessing strong analogies, namely superfluids, superconductors and smectics (liquid crystals). All these systems are expected to belong to the same universality class for the thermodynamic critical properties of the phase transition | https://huggingface.co/datasets/fmars/wiki_stem |
A liquid–liquid critical point (or LLCP) is the endpoint of a liquid–liquid phase transition line (LLPT); it is a critical point where two types of local structures coexist at the exact ratio of unity. This hypothesis was first developed by Peter Poole, Francesco Sciortino, Uli Essmann and H. Eugene Stanley in Boston to obtain a quantitative understanding of the huge number of anomalies present in water | https://huggingface.co/datasets/fmars/wiki_stem |
The lower critical solution temperature (LCST) or lower consolute temperature is the critical temperature below which the components of a mixture are miscible in all proportions. The word lower indicates that the LCST is a lower bound to a temperature interval of partial miscibility, or miscibility for certain compositions only.
The phase behavior of polymer solutions is an important property involved in the development and design of most polymer-related processes | https://huggingface.co/datasets/fmars/wiki_stem |
The Mathematics of Chip-Firing is a textbook in mathematics on chip-firing games and abelian sandpile models. It was written by Caroline Klivans, and published in 2018 by the CRC Press.
Topics
A chip-firing game, in its most basic form, is a process on an undirected graph, with each vertex of the graph containing some number of chips | https://huggingface.co/datasets/fmars/wiki_stem |
In the context of the physical and mathematical theory of percolation, a percolation transition is characterized by a set of universal critical exponents, which describe the fractal properties of the percolating medium at large scales and sufficiently close to the transition. The exponents are universal in the sense that they only depend on the type of percolation model and on the space dimension. They are expected to not depend on microscopic details such as the lattice structure, or whether site or bond percolation is considered | https://huggingface.co/datasets/fmars/wiki_stem |
Percolation surface critical behavior concerns the influence of surfaces on the critical behavior of percolation.
Background
Percolation is the study of connectivity in random systems, such as electrical conductivity in random conductor/insulator systems, fluid flow in porous media, gelation in polymer systems, etc. At a critical fraction of connectivity or porosity, long-range connectivity can take place, leading to long-range flow | https://huggingface.co/datasets/fmars/wiki_stem |
The percolation threshold is a mathematical concept in percolation theory that describes the formation of long-range connectivity in random systems. Below the threshold a giant connected component does not exist; while above it, there exists a giant component of the order of system size. In engineering and coffee making, percolation represents the flow of fluids through porous media, but in the mathematics and physics worlds it generally refers to simplified lattice models of random systems or networks (graphs), and the nature of the connectivity in them | https://huggingface.co/datasets/fmars/wiki_stem |
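The threshold can be seen in a small Monte Carlo experiment (a sketch with hypothetical names, standard library only): on a square lattice, site percolation has a threshold near p ≈ 0.5927, so spanning clusters are rare below it and near-certain above it.

```python
import random
from collections import deque

def spans(grid):
    """True if the open sites (True) connect the top row to the bottom
    row via 4-neighbour adjacency (site percolation on a square grid)."""
    n = len(grid)
    seen = {(0, c) for c in range(n) if grid[0][c]}
    queue = deque(seen)
    while queue:
        r, c = queue.popleft()
        if r == n - 1:
            return True
        for rr, cc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= rr < n and 0 <= cc < n and grid[rr][cc] and (rr, cc) not in seen:
                seen.add((rr, cc))
                queue.append((rr, cc))
    return False

def spanning_probability(n, p, trials=200, seed=1):
    """Monte Carlo estimate of the probability that an n-by-n grid whose
    sites are open independently with probability p spans top to bottom."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        grid = [[rng.random() < p for _ in range(n)] for _ in range(n)]
        hits += spans(grid)
    return hits / trials
```

On finite grids the transition is smeared out; it sharpens into a step at the threshold only as n grows.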
In chemistry, thermodynamics, and other related fields, a phase transition (or phase change) is the physical process of transition between one state of a medium and another. Commonly the term is used to refer to changes among the basic states of matter: solid, liquid, and gas, and in rare cases, plasma. A phase of a thermodynamic system and the states of matter have uniform physical properties | https://huggingface.co/datasets/fmars/wiki_stem |
Phase Transitions and Critical Phenomena is a 20-volume series of books, comprising review articles on phase transitions and critical phenomena, published during 1972–2001. It is "considered the most authoritative series on the topic". Volumes 1–6 were edited by Cyril Domb and Melville S. Green | https://huggingface.co/datasets/fmars/wiki_stem |
In physics, mathematics and statistics, scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables, are multiplied by a common factor, and thus represent a universality.
The technical term for this transformation is a dilatation (also known as dilation). Dilatations can form part of a larger conformal symmetry | https://huggingface.co/datasets/fmars/wiki_stem |
Self-organized criticality (SOC) is a property of dynamical systems that have a critical point as an attractor. Their macroscopic behavior thus displays the spatial or temporal scale-invariance characteristic of the critical point of a phase transition, but without the need to tune control parameters to a precise value, because the system, effectively, tunes itself as it evolves towards criticality.
The concept was put forward by Per Bak, Chao Tang and Kurt Wiesenfeld ("BTW") in a paper published in 1987 in Physical Review Letters, and is considered to be one of the mechanisms by which complexity arises in nature | https://huggingface.co/datasets/fmars/wiki_stem |
A supercritical fluid (SCF) is any substance at a temperature and pressure above its critical point, where distinct liquid and gas phases do not exist, but below the pressure required to compress it into a solid. It can effuse through porous solids like a gas, overcoming the mass transfer limitations that slow liquid transport through such materials. SCFs are much superior to gases in their ability to dissolve materials like liquids or solids | https://huggingface.co/datasets/fmars/wiki_stem |
Supercritical liquid–gas boundaries are lines in the pressure-temperature (pT) diagram that delimit more liquid-like and more gas-like states of a supercritical fluid. They comprise the Fisher–Widom line, the Widom line, and the Frenkel line.
Overview
According to textbook knowledge, it is possible to transform a liquid continuously into a gas, without undergoing a phase transition, by heating and compressing strongly enough to go around the critical point | https://huggingface.co/datasets/fmars/wiki_stem |
Tricritical point refers to a point where the second-order phase transition curve meets the first-order phase transition curve. It was first introduced by Lev Landau in 1937, who called the tricritical point the critical point of the continuous transition. The first example of a tricritical point was shown by Robert B. Griffiths in a helium-3/helium-4 mixture | https://huggingface.co/datasets/fmars/wiki_stem |
In statistical mechanics, universality is the observation that there are properties for a large class of systems that are independent of the dynamical details of the system. Systems display universality in a scaling limit, when a large number of interacting parts come together. The modern meaning of the term was introduced by Leo Kadanoff in the 1960s, but a simpler version of the concept was already implicit in the van der Waals equation and in the earlier Landau theory of phase transitions, which did not incorporate scaling correctly | https://huggingface.co/datasets/fmars/wiki_stem |
In statistical mechanics, a universality class is a collection of mathematical models which share a single scale invariant limit under the process of renormalization group flow. While the models within a class may differ dramatically at finite scales, their behavior will become increasingly similar as the limit scale is approached. In particular, asymptotic phenomena such as critical exponents will be the same for all models in the class | https://huggingface.co/datasets/fmars/wiki_stem |
Widom scaling (after Benjamin Widom) is a hypothesis in statistical mechanics regarding the free energy of a magnetic system near its critical point which leads to the critical exponents becoming no longer independent so that they can be parameterized in terms of two values. The hypothesis can be seen to arise as a natural consequence of the block-spin renormalization procedure, when the block size is chosen to be of the same size as the correlation length. Widom scaling is an example of universality | https://huggingface.co/datasets/fmars/wiki_stem |
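In standard notation, with reduced temperature t = (T − T_c)/T_c and field h, the hypothesis can be written as a generalized homogeneity of the singular free energy:

```latex
% Widom scaling hypothesis: the singular part of the free energy is a
% generalized homogeneous function of t = (T - T_c)/T_c and the field h
f_{s}(t,h) = |t|^{2-\alpha}\, g\!\left(\frac{h}{|t|^{\Delta}}\right)
% Differentiating with respect to h gives the magnetization and
% susceptibility exponents in terms of the two parameters (\alpha, \Delta):
%   M \sim |t|^{2-\alpha-\Delta}    \implies  \beta  = 2 - \alpha - \Delta
%   \chi \sim |t|^{2-\alpha-2\Delta} \implies \gamma = \alpha + 2\Delta - 2
% and, with the gap exponent \Delta = \beta\delta, the Widom relation
\gamma = \beta\,(\delta - 1)
```

All the usual critical exponents thus follow from the two values (α, Δ), which is the sense in which they are "no longer independent".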
The beam propagation method (BPM) is an approximation technique for simulating the propagation of light in slowly varying optical waveguides. It is essentially the same as the so-called parabolic equation (PE) method in underwater acoustics. Both BPM and the PE were first introduced in the 1970s | https://huggingface.co/datasets/fmars/wiki_stem |
Bioelectrodynamics is a branch of medical physics and bioelectromagnetism which deals with rapidly changing electric and magnetic fields in biological systems, i.e. high-frequency endogenous electromagnetic phenomena in living cells | https://huggingface.co/datasets/fmars/wiki_stem |
In physics, in the context of electromagnetism, Birkhoff's theorem concerns spherically symmetric static solutions of Maxwell's field equations of electromagnetism.
The theorem is due to George D. Birkhoff | https://huggingface.co/datasets/fmars/wiki_stem |
Blondel's experiments are a series of experiments performed by physicist André Blondel in 1914 in order to determine the most general law of electromagnetic induction. In fact, noted Blondel, "Significant discussions have been raised repeatedly on the question of what is the most general law of induction: we should consider the electromotive force (e.m.f.) | https://huggingface.co/datasets/fmars/wiki_stem |
Computational electromagnetics (CEM), computational electrodynamics or electromagnetic modeling is the process of modeling the interaction of electromagnetic fields with physical objects and the environment.
It typically involves using computer programs to compute approximate solutions to Maxwell's equations to calculate antenna performance, electromagnetic compatibility, radar cross section and electromagnetic wave propagation when not in free space. A large subfield is antenna modeling computer programs, which calculate the radiation pattern and electrical properties of radio antennas, and are widely used to design antennas for specific applications | https://huggingface.co/datasets/fmars/wiki_stem |
Dash of Destruction (also known as Doritos Dash of Destruction) is a racing advergame developed by independent software developer NinjaBee for the Xbox 360's Xbox Live Arcade service. It was released on December 17, 2008 for free. The concept originated from gamer Mike Borland, winner of the Doritos-sponsored "Unlock Xbox" competition | https://huggingface.co/datasets/fmars/wiki_stem |
Dead Frontier is a free-to-play, browser-based survival horror game which takes place in a post-apocalyptic, zombie-infested setting. It is operated by Creaky Corpse Ltd. Dead Frontier was released for open beta on April 21, 2008, and has over ten million registered accounts | https://huggingface.co/datasets/fmars/wiki_stem |
DECO Online was an MMORPG developed by Rocksoft and published in English by JOYMAX Interactive. DECO had no monthly subscription fees; instead, the game featured a microtransaction shop. DECO Online closed in December 2010.
Nations
DECO Online consists of two nations, Millena and Rain. Millena is a nation that uses weapons such as swords and bows, while Rain players use magic | https://huggingface.co/datasets/fmars/wiki_stem |
Devil Summoner 2: Raidou Kuzunoha vs. King Abaddon is an action role-playing game developed and published by Atlus for the PlayStation 2. The game is the fourth in the Devil Summoner series, which is a part of the larger Megami Tensei franchise, and serves as the direct sequel to Devil Summoner: Raidou Kuzunoha vs | https://huggingface.co/datasets/fmars/wiki_stem |
Die2Nite is a browser-based, multiplayer survival game created by Motion Twin and played in real time. The objective is to build a town to survive the longest during the assaults from hordes of zombies, which come in larger numbers each day at 23:00 server time.
Gameplay
Overview
Each player plays the part of a citizen in a town with a population of 40
Digital Combat Simulator, or DCS, is a combat flight simulation game developed primarily by Eagle Dynamics and The Fighter Collection.
Several labels are used when referring to the DCS line of simulation products: DCS World, Modules, and Campaigns. DCS World is a free-to-play game that includes two free aircraft and two free maps
Dino Run is a Flash game created by PixelJAM and XGen Studios, released on April 30, 2008. The player steers a Velociraptor through increasingly dangerous side-scrolling landscapes to escape an impending "wall of doom". The game uses simple pixel art and 8-bit sound to replicate the style of 1980s arcade games
Disaster: Day of Crisis is a 2008 action-adventure light gun shooter developed by Monolith Soft and published by Nintendo for the Wii. In it, the player must survive various natural disasters while also battling terrorists and rescuing civilians. According to Nintendo, the game features “cutting-edge physics and gripping visuals” to recreate the sheer terror of major catastrophes
Ergodic theory is a branch of mathematics that studies statistical properties of deterministic dynamical systems; it is the study of ergodicity. In this context, "statistical properties" refers to properties which are expressed through the behavior of time averages of various functions along trajectories of dynamical systems. The notion of deterministic dynamical systems assumes that the equations determining the dynamics do not contain any random perturbations, noise, etc
In physics and thermodynamics, the ergodic hypothesis says that, over long periods of time, the time spent by a system in some region of the phase space of microstates with the same energy is proportional to the volume of this region, i.e., that all accessible microstates are equiprobable over a long period of time
In mathematics, arithmetic combinatorics is a field in the intersection of number theory, combinatorics, ergodic theory and harmonic analysis.
Scope
Arithmetic combinatorics is about combinatorial estimates associated with arithmetic operations (addition, subtraction, multiplication, and division). Additive combinatorics is the special case when only the operations of addition and subtraction are involved
In mathematics, Smale's axiom A defines a class of dynamical systems which have been extensively studied and whose dynamics is relatively well understood. A prominent example is the Smale horseshoe map. The term "axiom A" originates with Stephen Smale
In mathematics, a commutation theorem for traces explicitly identifies the commutant of a specific von Neumann algebra acting on a Hilbert space in the presence of a trace.
The first such result was proved by Francis Joseph Murray and John von Neumann in the 1930s and applies to the von Neumann algebra generated by a discrete group or by the dynamical system associated with a measurable transformation preserving a probability measure.
Another important application is in the theory of unitary representations of unimodular locally compact groups, where the theory has been applied to the regular representation and other closely related representations
In mathematics, a conservative system is a dynamical system which stands in contrast to a dissipative system. Roughly speaking, such systems have no friction or other mechanism to dissipate the dynamics, and thus, their phase space does not shrink over time. Precisely speaking, they are those dynamical systems that have a null wandering set: under time evolution, no portion of the phase space ever "wanders away", never to be returned to or revisited
In mathematics, the Ellis–Numakura lemma states that if S is a non-empty semigroup with a topology such that S is compact and the product is semi-continuous, then S has an idempotent element p (that is, with pp = p). The lemma is named after Robert Ellis and Katsui Numakura.
Applications
Applying this lemma to the Stone–Čech compactification βN of the natural numbers shows that there are idempotent elements in βN
In mathematics, a sequence (s1, s2, s3, …)
In mathematics, the equidistribution theorem is the statement that the sequence
a, 2a, 3a, … (mod 1)
is uniformly distributed on the unit interval when a is an irrational number
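The equidistribution theorem can be checked numerically: for irrational a, the fraction of the points a, 2a, 3a, … (mod 1) falling in any subinterval should approach the subinterval's length. This is a sketch, not from the article; the choice of a = √2 and the interval [0, 0.25) are arbitrary.

```python
# Numerical illustration of the equidistribution theorem: the fractional
# parts of k * sqrt(2) should fill [0, 1) uniformly, so the fraction
# landing in [0, 0.25) should be close to 0.25.
import math

a = math.sqrt(2)    # any irrational number works
n = 100_000
hits = sum(1 for k in range(1, n + 1) if (k * a) % 1.0 < 0.25)
frac = hits / n     # expect a value close to 0.25
```

Rational a would instead produce a periodic, non-equidistributed orbit, which is why irrationality is essential in the theorem.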
In mathematics, ergodic flows occur in geometry, through the geodesic and horocycle flows of closed hyperbolic surfaces. Both of these examples have been understood in terms of the theory of unitary representations of locally compact groups: if Γ is the fundamental group of a closed surface, regarded as a discrete subgroup of the Möbius group G = PSL(2,R), then the geodesic and horocycle flow can be identified with the natural actions of the subgroups A of real positive diagonal matrices and N of lower unitriangular matrices on the unit tangent bundle G / Γ. The Ambrose-Kakutani theorem expresses every ergodic flow as the flow built from an invertible ergodic transformation on a measure space using a ceiling function
In physics, statistics, econometrics and signal processing, a stochastic process is said to be in an ergodic regime if an observable's ensemble average equals the time average. In this regime, any collection of random samples from a process must represent the average statistical properties of the entire regime. Conversely, a process that is not in an ergodic regime is said to be in a non-ergodic regime
In mathematics, ergodicity expresses the idea that a point of a moving system, either a dynamical system or a stochastic process, will eventually visit all parts of the space that the system moves in, in a uniform and random sense. This implies that the average behavior of the system can be deduced from the trajectory of a "typical" point. Equivalently, a sufficiently large collection of random samples from a process can represent the average statistical properties of the entire process
In physics, the Fermi–Pasta–Ulam–Tsingou (FPUT) problem or formerly the Fermi–Pasta–Ulam problem was the apparent paradox in chaos theory that many complicated enough physical systems exhibited almost exactly periodic behavior – called Fermi–Pasta–Ulam–Tsingou recurrence (or Fermi–Pasta–Ulam recurrence) – instead of the expected ergodic behavior. This came as a surprise, as Enrico Fermi, certainly, expected the system to thermalize in a fairly short time. That is, it was expected for all vibrational modes to eventually appear with equal strength, as per the equipartition theorem, or, more generally, the ergodic hypothesis
Given a topological space and a group acting on it, the images of a single point under the group action form an orbit of the action. A fundamental domain or fundamental region is a subset of the space which contains exactly one point from each of these orbits. It serves as a geometric realization for the abstract set of representatives of the orbits
In physics and mathematics, the Hadamard dynamical system (also called Hadamard's billiard or the Hadamard–Gutzwiller model) is a chaotic dynamical system, a type of dynamical billiards. Introduced by Jacques Hadamard in 1898, and studied by Martin Gutzwiller in the 1980s, it is the first dynamical system to be proven chaotic.
The system considers the motion of a free (frictionless) particle on the Bolza surface, i.e.
In mathematics, the Hopf decomposition, named after Eberhard Hopf, gives a canonical decomposition of a measure space (X, μ) with respect to an invertible non-singular transformation T:X→X, i.e. a transformation which, together with its inverse, is measurable and carries null sets onto null sets
In ergodic theory, Kac's lemma, demonstrated by mathematician Mark Kac in 1947, is a lemma stating that in a measure space the orbits of almost all the points contained in a set A of such a space, of measure μ(A), return to A within an average time inversely proportional to μ(A). The lemma extends what is stated by the Poincaré recurrence theorem, in which it is shown that the points return to A infinitely many times.
Application
In physics, a dynamical system evolving in time may be described in a phase space, that is, by the evolution in time of some variables
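Kac's lemma lends itself to a quick numerical check. The sketch below (not from the article; the rotation angle and the set A are arbitrary choices) uses the irrational rotation x → x + a (mod 1), which preserves Lebesgue measure and is ergodic, so for A = [0, 0.1) the average return time to A should be close to 1/μ(A) = 10.

```python
# Numerical check of Kac's lemma on the irrational rotation of the circle.
# Successive return times to A = [0, 0.1), averaged along one orbit,
# should approach 1 / mu(A) = 10.
import math

a = math.sqrt(2) % 1.0      # irrational rotation angle
x = 0.05                    # start inside A = [0, 0.1)
returns = []
steps = 0
for _ in range(200_000):
    x = (x + a) % 1.0
    steps += 1
    if x < 0.1:             # the orbit has returned to A
        returns.append(steps)
        steps = 0
mean_return = sum(returns) / len(returns)   # expect a value near 10
```

Shrinking A makes the mean return time grow like 1/μ(A), which is exactly the inverse proportionality the lemma asserts.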
In mathematics, Kingman's subadditive ergodic theorem is one of several ergodic theorems. It can be seen as a generalization of Birkhoff's ergodic theorem.
Intuitively, the subadditive ergodic theorem is a kind of random variable version of Fekete's lemma (hence the name ergodic)
In Lie theory and related areas of mathematics, a lattice in a locally compact group is a discrete subgroup with the property that the quotient space has finite invariant measure. In the special case of subgroups of Rn, this amounts to the usual geometric notion of a lattice as a periodic subset of points, and both the algebraic structure of lattices and the geometry of the space of all lattices are relatively well understood.
The theory is particularly rich for lattices in semisimple Lie groups or more generally in semisimple algebraic groups over local fields
In mathematics — specifically, in ergodic theory — a maximising measure is a particular kind of probability measure. Informally, a probability measure μ is a maximising measure for some function f if the integral of f with respect to μ is "as big as it can be". The theory of maximising measures is relatively young and quite little is known about their general structure and properties
In mathematics, mixing is an abstract concept originating from physics: the attempt to describe the irreversible thermodynamic process of mixing in the everyday world, e.g. mixing paint, mixing drinks, or industrial mixing
In physics, a dynamical system is said to be mixing if the phase space of the system becomes strongly intertwined, according to at least one of several mathematical definitions. For example, a measure-preserving transformation T is said to be strong mixing if
{\displaystyle \lim _{k\rightarrow \infty }\,\mu (T^{-k}A\cap B)=\mu (A)\cdot \mu (B)}
whenever A and B are any measurable sets and μ is the associated measure. Other definitions are possible, including weak mixing and topological mixing
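The strong-mixing limit can be estimated by Monte Carlo. The sketch below is an illustration, not from the article; the doubling map T(x) = 2x mod 1 preserves Lebesgue measure and is strong mixing, and the sets A = [0, 0.5), B = [0, 0.3), iteration depth k = 20, and sample size are arbitrary choices.

```python
# Monte Carlo estimate of mu(T^{-k} A ∩ B) for the doubling map.
# For a strong-mixing map this should approach mu(A) * mu(B) = 0.15.
import random

def iterate(x, k):
    # apply the doubling map T(x) = 2x mod 1, k times
    for _ in range(k):
        x = (2.0 * x) % 1.0
    return x

random.seed(0)
n, k = 200_000, 20
hits = 0
for _ in range(n):
    x = random.random()
    if x < 0.3 and iterate(x, k) < 0.5:   # x in B and T^k(x) in A
        hits += 1
estimate = hits / n     # expect a value near 0.15
```

The product form μ(A)·μ(B) says that, after enough iterations, membership in T⁻ᵏA becomes statistically independent of membership in B, which is the sense in which the phase space "intertwines".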
In mathematics, the no-wandering-domain theorem is a result on dynamical systems, proven by Dennis Sullivan in 1985.
The theorem states that a rational map f : Ĉ → Ĉ with deg(f) ≥ 2 does not have a wandering domain, where Ĉ denotes the Riemann sphere. More precisely, for every component U in the Fatou set of f, the sequence
{\displaystyle U,f(U),f(f(U)),\dots ,f^{n}(U),\dots }
will eventually become periodic
In mathematics, the Ornstein isomorphism theorem is a deep result in ergodic theory. It states that if two Bernoulli schemes have the same Kolmogorov entropy, then they are isomorphic. The result, given by Donald Ornstein in 1970, is important because it states that many systems previously believed to be unrelated are in fact isomorphic; these include all finite stationary stochastic processes, including Markov chains and subshifts of finite type, Anosov flows and Sinai's billiards, ergodic automorphisms of the n-torus, and the continued fraction transform
In mathematics, the multiplicative ergodic theorem, or Oseledets theorem provides the theoretical background for computation of Lyapunov exponents of a nonlinear dynamical system. It was proved by Valery Oseledets (also spelled "Oseledec") in 1965 and reported at the International Mathematical Congress in Moscow in 1966. A conceptually different proof of the multiplicative ergodic theorem was found by M
In mathematics and physics, the Poincaré recurrence theorem states that certain dynamical systems will, after a sufficiently long but finite time, return to a state arbitrarily close to (for continuous state systems), or exactly the same as (for discrete state systems), their initial state.
The Poincaré recurrence time is the length of time elapsed until the recurrence. This time may vary greatly depending on the exact initial state and required degree of closeness
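The discrete-state case of the theorem, where the system returns exactly to its initial state, can be demonstrated directly. The sketch below is an illustration, not from the article; the cat-map matrix, grid size N = 101, and starting point are arbitrary choices.

```python
# Discrete illustration of Poincaré recurrence: a cat map on a finite
# N x N integer grid is a bijection of a finite state space, so every
# initial state recurs exactly after finitely many steps.
N = 101

def cat(p):
    # area-preserving "cat map" (x, y) -> (x + y, x + 2y) mod N
    x, y = p
    return ((x + y) % N, (x + 2 * y) % N)

start = (1, 2)
p = cat(start)
steps = 1
while p != start:
    p = cat(p)
    steps += 1
# 'steps' now holds the exact recurrence time of this initial state
```

Because the map is a bijection of a finite set, the orbit of any point must eventually repeat, and the first repeat is the initial state itself; for continuous state spaces the theorem instead guarantees returns arbitrarily close to the start.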
In quantum chaos, a branch of mathematical physics, quantum ergodicity is a property of the quantization of classical mechanical systems that are chaotic in the sense of exponential sensitivity to initial conditions. Quantum ergodicity states, roughly, that in the high-energy limit, the probability distributions associated to energy eigenstates of a quantized ergodic Hamiltonian tend to a uniform distribution in the classical phase space. This is consistent with the intuition that the flows of ergodic systems are equidistributed in phase space
In mathematics, Ratner's theorems are a group of major theorems in ergodic theory concerning unipotent flows on homogeneous spaces proved by Marina Ratner around 1990. The theorems grew out of Ratner's earlier work on horocycle flows. The study of the dynamics of unipotent flows played a decisive role in the proof of the Oppenheim conjecture by Grigory Margulis
In probability theory, a stationary ergodic process is a stochastic process which exhibits both stationarity and ergodicity. In essence this implies that the random process will not change its statistical properties with time and that its statistical properties (such as the theoretical mean and variance of the process) can be deduced from a single, sufficiently long sample (realization) of the process.
Stationarity is the property of a random process which guarantees that its statistical properties, such as the mean value, its moments and variance, will not change over time
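The single-realization property described above can be shown with the simplest stationary ergodic process, an i.i.d. sequence. This is a sketch, not from the article; the Gaussian distribution, seed, and sample length are arbitrary choices.

```python
# Illustration of a stationary ergodic process: for an i.i.d. Gaussian
# sequence, the time average of one long realization converges to the
# theoretical (ensemble) mean, which is 0 here.
import random

random.seed(1)
n = 100_000
realization = [random.gauss(0.0, 1.0) for _ in range(n)]
time_avg = sum(realization) / n   # expect a value near the true mean 0
```

A non-ergodic stationary process would not have this property: for instance, a process whose every sample path is a single random constant is stationary, yet one realization's time average recovers only that constant, not the ensemble mean.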