Sound navigation and ranging — lesson. Science CBSE, Class 9.
6. Sound navigation and ranging
SONAR is an acronym for Sound Navigation And Ranging. Sonar is a device that uses ultrasonic waves to measure the distance, direction and speed of underwater objects.
The transmitter produces and transmits ultrasonic waves into seawater; after striking an object in the sea, these waves are reflected back and sensed by the detector.
The function of the detector is to convert the ultrasonic waves into electrical signals.
The distance of the object that reflects the sound wave can be calculated by knowing the speed of sound in water and the time interval between transmission and reception of the ultrasound.
\(v\) - speed of sound through seawater.
\(t\) - the time interval between transmission and reception of the ultrasound.
\(2d\) - total distance travelled by the ultrasound.
Then, by using the formula,
\(\mathit{Speed}=\frac{\mathit{Distance}}{\mathit{Time}}\)
and applying all the terms, we get the total distance travelled by the ultrasound, \(2d = v \times t\), and hence the distance of the object, \(d = \frac{v \times t}{2}\).
The sonar technique is used to determine the depth of the sea and to locate underwater hills, icebergs, valleys, submarines, sunken ships etc. This method is also called echo-ranging.
1. A sonar device on the surface of the water emits a pulse, which is detected after reflection from the bottom. If the time interval between the emission and detection of the pulse is \(6\) \(s\), find the depth of the water. Take the speed of sound in water as \(1500\) \(m/s\).
\(t\) \(=\) \(6\) \(s\)
\(v\) \(=\) \(1500\) \(m/s\)
To find: distance \(d\)
Applying all the values in the formula, we get \(d = \frac{v \times t}{2} = \frac{1500 \times 6}{2} = 4500\) \(m\).
So, the depth of the water is \(4500\) \(m\).
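The echo-ranging arithmetic above is simple enough to script; a minimal sketch (the function name is illustrative, not part of the lesson):

```python
def sonar_depth(v, t):
    """Depth in metres: the pulse covers 2d in time t at speed v, so d = v*t/2."""
    return v * t / 2

# Worked example from the lesson: v = 1500 m/s, t = 6 s
depth = sonar_depth(1500, 6)  # 4500.0 m
```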
|
Tune Gain-Scheduled Controller Using Closed-Loop PID Autotuner Block - MATLAB & Simulink - MathWorks Benelux
Water-Tank System Model
Tune Controller at Single Operating Point
Tune Gain-Scheduled Controller at Multiple Operating Points
Performance Improvements of Gain-Scheduled Controller
This example shows how to use the Closed-Loop PID Autotuner block to tune a gain-scheduled controller in one simulation.
This example uses a gain-scheduled controller to control the water level of a nonlinear Water-Tank System plant. The Water-Tank System plant is originally controlled by a single PI controller in the watertank Simulink® model. For more details on the nonlinear Water-Tank System plant, see watertank Simulink Model.
The following sections describe how to modify the watertank model for tuning and validating a gain-scheduled controller. Alternatively, use the watertank_gainscheduledcontrol model provided with this example.
Connect Closed-Loop PID Autotuner Block with Plant and Controller
Insert the Closed-Loop PID Autotuner block between the controller and plant as shown in the following diagram. The start/stop signal starts and stops the closed-loop experiment. When no experiment is running, the Closed-Loop PID Autotuner block behaves like a unity gain block, where the u signal passes directly to u+Δu.
Connect Blocks to Store Tuned Gains
To create a gain schedule, the autotuned gains are recorded at each operating point. In this example, a triggered subsystem is used to write the reference heights and controller gains to the workspace upon falling edges of the autotuner start/stop signal. Simulating this model produces an array of tuned gains and breakpoints for easy use with dynamic lookup tables to test the controller.
Validate Performance of Gain-Scheduled Controller
After you obtain a set of breakpoints and tuned gains, test the tuned gain-scheduled controller with the Water-Tank System plant. To do so, remove the autotuner block, change the source of the PID Controller block to external, and insert Lookup Table Dynamic blocks as shown in the diagram.
Integrate Both Tuning and Testing in Example Model
In this example, a gain-scheduled controller is tuned using the Closed-Loop PID Autotuner block and its performance is then tested in the same model. The example model uses a variant subsystem to organize the tuning and testing workflows.
To switch between Tuning and Testing modes, double-click the Variant Subsystem block.
Before tuning the gain-scheduled controller at multiple operating points, tuning at a single operating point helps you configure the Closed-Loop PID Autotuner block. Open the example model watertank_gainscheduledcontrol, which uses the controller gains from the watertank Simulink model.
mdl = 'watertank_gainscheduledcontrol';
Kp = 1.599340;
Ki = 0.079967;
set_param([mdl,'/Variant Subsystem'],'SimMode','Tuning');
Configure Closed-Loop PID Autotuner Block
After connecting the Closed-Loop PID Autotuner block with the Water-Tank System plant model and PID Controller block, use the block parameters to specify tuning and experiment settings. This example uses the same design requirements found in the example Design Compensator Using Automated PID Tuning and Graphical Bode Design. These design requirements are in the form of closed-loop step response characteristics.
To tune the PID controller to meet the above design requirements, parameters of the Closed-Loop PID Autotuner block are pre-populated. The Tuning tab has three main tuning settings.
Target bandwidth — Determines how fast you want the controller to respond. The target bandwidth is roughly 2/desired rise time. For a desired rise time of 4 seconds, set target bandwidth = 2/4 = 0.5 rad/s.
Target phase margin — Determines how robust you want the controller to be. In this example, start with the default value of 60 degrees.
Experiment sample time — Sample time for the experiment performed by the autotuner block. Use the recommended 0.02/bandwidth for sample time = 0.02/0.5 = 0.04s.
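These settings follow from the desired rise time via the two heuristics just stated; a quick numeric check (plain Python, values from this example):

```python
# Autotuner heuristics quoted above: bandwidth ~ 2 / rise time,
# experiment sample time ~ 0.02 / bandwidth.
rise_time = 4.0                  # desired rise time, seconds
bandwidth = 2.0 / rise_time      # target bandwidth, rad/s -> 0.5
sample_time = 0.02 / bandwidth   # experiment sample time, s -> 0.04
```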
The Experiment tab has three main experiment settings.
Plant Type — Specifies whether the plant is asymptotically stable or integrating. In this example, the Water-Tank System plant is integrating.
Plant Sign — Specifies whether the plant has a positive or negative sign. The plant sign is positive if a positive change in the plant input at the nominal operating point results in a positive change in the plant output when the plant reaches a new steady state. In this example, the Water-Tank System plant has a positive plant sign.
Sine Amplitudes — Specifies amplitudes of the injected sine wave perturbations. In this example, specify a sine amplitude of 0.3.
Simulate at One Operating Point
Start the experiment at 140 seconds to ensure that the water level has reached its steady-state value of H = 10. The recommended experiment duration is on the order of 200/bandwidth; here, use 500 s. With a start time of 140 seconds, the stop time is 640 seconds. The simulation stop time is further increased to 800 seconds to capture the full experiment.
set_param([mdl,'/Variant Subsystem/Tuning/Closed-Loop PID Autotuner1'],'TargetPM','60');
set_param([mdl,'/Signal Editor'],'ActiveScenario','TuningSignal_OnePoint');
simOut = sim(mdl,'StopTime','800');
simOut.Kp_tuned
simOut.Ki_tuned
In the watertank Simulink model, initial PI controller gains are Kp = 1.599340 and Ki = 0.079967. After tuning, the controller gains are Kp = 1.82567 and Ki = 0.20373.
Check Tuning Result and Adjust Autotuning Parameters
Replace controller gains with the new autotuned gains and validate the design requirements.
Kp = simOut.Kp_tuned;
Ki = simOut.Ki_tuned;
plot(simOut.ScopeDataGS.time,simOut.ScopeDataGS.signals.values);
title('Step Response of Controller Tuned with 60-Degree Target Phase Margin');
StepPerformance_OnePoint = stepinfo(simOut.ScopeDataGS.signals.values(:),simOut.ScopeDataGS.time(:),10,1)
StepPerformance_OnePoint = struct with fields:
SettlingMax: 10.7821
The step response has a rise time of 3.6251 seconds and overshoot of 8.6895%. The overshoot is larger than desired; increase target phase margin to 75 degrees to improve the closed-loop transient response.
Examine the simulation result. The system is at steady-state when experiment starts and returns to steady-state after tuning is completed. As an indication of controller tuning performance, the Closed-Loop PID Autotuner block reaches 100% convergence level sooner than the recommended 500 seconds. As a result, reduce experiment duration to 300 seconds, meaning a stop time of 440 seconds. Accordingly, decrease the simulation stop time from 800 seconds to 500 seconds.
set_param([mdl,'/Signal Editor'],'ActiveScenario','TuningSignal_OnePointAdjusted');
Simulating with new experiment parameters produces tuned gains of Kp = 1.93514 and Ki = 0.11415. Examine the step response again using gains tuned with the increased target phase margin value.
StepPerformance_OnePointAdjusted = stepinfo(simOut.ScopeDataGS.signals.values(:),simOut.ScopeDataGS.time(:),10,1)
StepPerformance_OnePointAdjusted = struct with fields:
The step response has a rise time of 4.1398 seconds and overshoot of 3.1438%, both of which meet the design requirements.
Simulate the model with the tuned gains for multiple operating points, H = [5, 10, 15, 20].
set_param([mdl,'/Signal Editor'],'ActiveScenario','TuningSignal_SinglePID');
simOut_single = sim(mdl,'StopTime','2400');
The set of tuned gains produces a desired response. You can now perform tuning at multiple operating points to create a gain-scheduled controller.
Create Input Tuning Signal
The operating points for autotuning cover the operating range of the scheduling variable H from 1 to 20. In this example, the gain-scheduled controller gains are tuned at four operating points, H = [5, 10, 15, 20]. To tune at multiple operating points, use the Signal Editor block to create the reference and autotuner start/stop signals.
Simulate Multiple Operating Points
Using the input signal, simulate the watertank_gainscheduledcontrol model for the entire length of the autotuning process. At the end of simulation, save both tuned gains and breakpoints as vectors in the MATLAB® Workspace.
set_param([mdl,'/Signal Editor'],'ActiveScenario','TuningSignal');
simOut = sim(mdl,'StopTime','2400');
Kp_tuned = simOut.Kp_tuned
Kp_tuned = 4×1
Ki_tuned = simOut.Ki_tuned
Ki_tuned = 4×1
breakpoints = simOut.breakpoints
breakpoints = 4×1
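The Lookup Table Dynamic blocks used for testing linearly interpolate the tuned gains between breakpoints. A minimal sketch of that scheduling logic (the gain values here are illustrative placeholders, since the tuned numbers are produced by the simulation, not reproduced in this text):

```python
import bisect

# Illustrative breakpoints (water levels H) and gains; the real values
# come from the autotuning simulation outputs Kp_tuned, Ki_tuned, breakpoints.
breakpoints = [5.0, 10.0, 15.0, 20.0]
kp_tuned = [2.1, 1.9, 1.8, 1.7]
ki_tuned = [0.14, 0.11, 0.10, 0.09]

def schedule_gain(h, bp, gains):
    """Linearly interpolate a gain at scheduling variable h, clamping
    outside the breakpoint range (like a Lookup Table Dynamic block)."""
    if h <= bp[0]:
        return gains[0]
    if h >= bp[-1]:
        return gains[-1]
    i = bisect.bisect_right(bp, h) - 1
    frac = (h - bp[i]) / (bp[i + 1] - bp[i])
    return gains[i] + frac * (gains[i + 1] - gains[i])

kp = schedule_gain(12.5, breakpoints, kp_tuned)  # halfway between 10 and 15
```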
To examine the performance of the gain-scheduled controller, set the Variant Subsystem to Testing mode and simulate the model.
set_param([mdl,'/Variant Subsystem'],'SimMode','Testing');
simOut_GS = sim(mdl,'StopTime','2400');
Using the gain-scheduled controller, step responses of the water level in the Water-Tank System plant are much faster and have less overshoot than the untuned controller used in watertank Simulink Model.
In addition, the tuned gain-scheduled controller leads to better transient performance than the single set of gains tuned at water level H = 10.
Use the compareControllers_watertank script to compute the step-response characteristics for the PID controller tuned at H = 10 and the gain-scheduled controller. The script generates two tables, which contain the rise time (in seconds) and percentage overshoot for the gain-scheduled controller and a single set of controller gains.
compareControllers_watertank
RiseTime=2×4 table
                      H = 1 to 5    H = 5 to 10    H = 10 to 15    H = 15 to 20
                      __________    ___________    ____________    ____________
    Single PID          4.6721         3.7818          3.715           3.6826
    Gain-Scheduled      4.8012         3.845           3.7744          3.7402

Overshoot=2×4 table
                      H = 1 to 5    H = 5 to 10    H = 10 to 15    H = 15 to 20
                      __________    ___________    ____________    ____________
    Single PID          0.69606        5.2553          5.888           6.2236
    Gain-Scheduled      0.14074        4.6827          5.3208          5.6592
The gain-scheduled controller leads to a smaller overshoot for a comparable rise time, compared to a single set of gains tuned at one operating point. This workflow is useful when you want to tune a gain-scheduled controller using the Closed-Loop PID Autotuner block.
Closed-Loop PID Autotuner | PID Controller | Signal Editor | Lookup Table Dynamic | Variant Subsystem, Variant Model
|
Physics - Airy plasmons defeat diffraction on the surface
Alessandro Salandrino and Demetrios Christodoulides
College of Optics and Photonics-CREOL, University of Central Florida, Orlando, FL 32816, USA
The Airy plasmon is unique since it represents the only possible one-dimensional diffraction-free solution.
Adapted from A. Minovich et al. [2]
Figure 1: (a) Schematic of the experiment of Minovich et al. A gold grating deposited on a glass substrate is excited from below by a polarized 784-nm laser beam. A bent gold-coated fiber-optic tip with a 150-nm aperture is used to collect the light generated by surface plasmons (red peaks). (b) Image of Airy plasmons recorded with the fiber tip superimposed on a micrograph of the grating. (c) Numerical simulation of an Airy plasmon undergoing self-healing as it collides with a hole bored into the grating.
Surface plasmons (SPs), or more exactly, surface plasmon polaritons, are surface electromagnetic waves that propagate along the planar interface between a metal and a dielectric material [1] (see Fig. 1). These particular electromagnetic modes are sustained by the collective electronic oscillations (plasma waves) in the metal in proximity to the interface. Plasmons are essentially two-dimensional waves whose field components decay exponentially with distance from the surface. The very fact that these waves tightly cling to the surface makes them ideal for molecule diagnostics and biosensing applications. On many occasions, SP beams are expected to carry energy from one location on the surface to another. Yet for this to happen in an effective way, diffraction broadening effects must be first suppressed.
Methods to suppress the diffraction broadening of a freely propagating plasmon beam have been actively pursued by several research groups in recent years. For many applications, this diffraction-free plasmonic energy transport would need to occur even if there were imperfections on the surface. In a recent paper in Physical Review Letters, Alexander Minovich at the Australian National University in Canberra and his colleagues report the experimental observation [2] of a new class of plasmon-polariton waves that could offer such a solution: the so-called Airy-plasmon waves [3]. In general, Airy plasmons can exhibit a host of appealing characteristics. Not only can they retain their intensity features up to several diffraction lengths, but they can also self-heal and accelerate (self-bend along a parabolic trajectory) during propagation. Apart from being interesting in their own right, Airy plasmons may also hold promise for new, exciting applications in the general area of plasmonics.
Diffraction is a ubiquitous process in nature. Under the action of this effect, any confined beam carrying finite power is known to expand during propagation. The familiar spread of a Gaussian optical beam is just another example of this process. Loosely speaking, one can understand this phenomenon by considering the walk-off effects taking place between all the plane waves comprising a wave packet. The possibility of either engineering or suppressing diffraction effects—at least up to a certain distance of interest—was first tackled by Durnin, Miceli, and Eberly in their classic paper on Bessel beams [4]. In this work, it was recognized that diffraction effects can be deliberately “delayed” by using Bessel wave fronts—perhaps the best known example of a “diffraction-free” field. What they also realized was that all possible two-dimensional diffraction-free waves result from a conical superposition of plane waves. This is so because all the plane wave components involved share the same propagation vector along the path of propagation, and hence the beam intensity profile resulting from this superposition remains invariant. Given that this conical addition can be arranged in a number of different ways, it is clear that, in principle, one could come up with infinitely many such diffraction-free-field arrangements—with the Bessel family being just one of them.
Diffractionless fields are normal propagation modes (very much like a plane wave) and as such, in principle, they are supposed to carry infinite power. In practice, however, because of finite aperture effects, even these “privileged” beams are eventually subjected to the effects of diffraction. Yet, if the size of the aperture is considerably bigger than any of the intensity features of this wave front, the beam will propagate virtually undistorted until diffraction associated with the aperture itself starts to take its toll. Thus, for all practical purposes, the diffraction process is slowed down over the intended distance of propagation, and for this reason these beams are called “diffraction-free.” Nowadays such diffractionless beams find interesting applications in many and diverse settings ranging from biosensing to nonlinear optics.
At this point it is perhaps natural for one to ask if similar concepts can be used in plasmonics. In other words, is it possible to achieve diffraction-free propagation using plasmon beams along the surface of a metal? If so, will the scheme of conical superposition also work in this case? Surprisingly, the answer to the latter question is no. The very fact that a surface plasmon polariton can only exist in flatland renders the diffraction process one-dimensional. In turn, this reduction in dimensionality does not allow meaningful diffraction-free patterns through conical superposition [2]. Yet one-dimensional (1D) diffraction-free waves do exist. This discovery was made more than thirty years ago by Berry and Balazs, who showed that the force-free quantum mechanical Schrödinger equation can admit a unique, nonspreading accelerating solution in the form of an Airy wave packet [5]. These Airy packets are in a class by themselves and do not result from any conical addition.
The realization that Airy waves can be actually observed in optics came only recently [6,7]. In fact, in this realm, the beam’s acceleration takes on a whole new meaning. It means that during propagation, the intensity features of an optical Airy beam can self-bend, even in free space, and this without violating the premises of Ehrenfest’s theorem, which governs the centroid trajectory. Interestingly, Airy beams can propagate along parabolic paths, very much like cannon balls moving under the force of gravity. Subsequent experimental studies also demonstrated that these beams tend to self-heal themselves after encountering any perturbations—a property shared by all diffraction-free beams.
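The parabolic self-bending is easy to check numerically. The sketch below propagates a finite-energy Airy packet under the free Schrödinger equation (units with ħ = m = 1; the grid sizes and the exponential aperture parameter a are illustrative choices, not values from the paper) and verifies that the main lobe shifts by roughly t²/4, as Berry and Balazs predict [5]:

```python
import numpy as np
from scipy.special import airy

# Free-particle Schrodinger propagation (hbar = m = 1) of a finite-energy
# Airy packet Ai(x) * exp(a x). Berry and Balazs predict the intensity
# peak accelerates, shifting by t^2/4 while the lobes keep their shape.
N, Lx = 8192, 200.0
x = np.linspace(-Lx / 2, Lx / 2, N, endpoint=False)
a = 0.05                                   # exponential aperture -> finite energy
psi0 = airy(x)[0] * np.exp(a * x)

# Free evolution is exact in Fourier space: each mode acquires exp(-i k^2 t / 2).
k = 2 * np.pi * np.fft.fftfreq(N, d=Lx / N)
t = 4.0
psi_t = np.fft.ifft(np.fft.fft(psi0) * np.exp(-0.5j * k**2 * t))

# Track the main lobe; its displacement should be close to t**2 / 4 = 4.
shift = x[np.argmax(np.abs(psi_t))] - x[np.argmax(np.abs(psi0))]
```

The single-FFT propagation is exact for the free Schrödinger equation, so any deviation from t²/4 comes only from the finite aperture and grid, which is why a small a is used.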
Earlier last year, the prospect of observing nondiffracting Airy plasmon waves was theoretically suggested by our group [3]. As mentioned before, the Airy plasmon wave packet is unique since it represents the only possible one-dimensional diffraction-free solution. In this realization, an Airy plasmon can propagate almost undistorted over several diffraction lengths along the metal surface. This occurs while its intensity features parabolically self-bend as a function of distance. In general, the beam can maintain the size of its lobes—a valuable property, especially when resolution is needed. Even more importantly, because of its self-healing characteristics, the Airy plasmon happens to be resilient against surface perturbations. This is a desirable feature given that surface plasmon polaritons tend to lose or radiate energy because of surface imperfections.
Minovich and colleagues report the experimental observation of such a surface Airy plasmon. Their experiments were carried out on an air-gold interface at a wavelength of 784 nanometers (nm). In this arrangement a gold film was deposited on a glass substrate. A diffraction arrangement was fabricated by selectively removing the metal using focused ion-beam techniques, as shown in Fig. 1(a). This was used as a platform to couple a broad wave front into a plasmon and shape the resulting beam into an Airy profile, achieved by manipulating both the amplitude and phase of the optical field. The propagation dynamics of the Airy plasmon are then monitored by scanning the gold-coated fiber tip of a near-field optical microscope close to the surface. Experimental near-field results are shown in Fig. 1(b) and clearly demonstrate the anticipated effects. The main lobe of the Airy plasmon maintains its width up to several Rayleigh lengths while it also parabolically self-bends—a key characteristic of an Airy beam. In addition, self-healing is possible, as also depicted in Fig. 1(c).
This field is only now beginning to develop. Interestingly, since the submission of this paper, Airy plasmons have also been reported by other groups. These studies were carried out independently at the University of California, Berkeley, and Nanjing University, China [8,9]. These works are expected to stimulate more efforts towards a better understanding and utilization of nondiffracting Airy plasmons. As the authors of the accompanying paper indicate, surface Airy plasmons may open new opportunities for selective on-chip manipulation of nanoparticles and optical sensing circuitry. It will be of interest to see if, in the future, such ideas spill over into other emerging disciplines of plasmonics.
W. L. Barnes, A. Dereux, and T. W. Ebbesen, Nature 424, 824 (2003)
A. Minovich, A. E. Klein, N. Janunts, T. Pertsch, D. N. Neshev, and Yu. S. Kivshar, Phys. Rev. Lett. 107, 116802 (2011)
A. Salandrino and D. N. Christodoulides, Opt. Lett. 35, 2082 (2010)
J. Durnin, J. J. Miceli, and J. H. Eberly, Phys. Rev. Lett. 58, 1499 (1987)
M. V. Berry and N. L. Balazs, Am. J. Phys. 47, 264 (1979)
G. A. Siviloglou and D. N. Christodoulides, Opt. Lett. 32, 979 (2007)
G. A. Siviloglou, J. Broky, A. Dogariu, and D. N. Christodoulides, Phys. Rev. Lett. 99, 213901 (2007); Opt. Express 16, 12880 (2008)
P. Zhang, S. Wang, Y. Liu, X. Yin, C. Lu, Z. Chen, and X. Zhang, Opt. Lett. 36, 3191 (2011)
L. Li, T. Li, S. M. Wang, and S. N. Zhu, Phys. Rev. Lett. (to be published); arXiv:1105.3160
Alessandro Salandrino received his M.S. in electrical engineering from Roma Tre University, Italy, and his M.S. and Ph.D. in optics and photonics from CREOL, The College of Optics and Photonics at the University of Central Florida. He is currently a postdoctoral research associate at the University of California, Berkeley.
Demetri Christodoulides received his Ph.D. degree in 1986. He subsequently joined Bellcore as a postdoctoral fellow and then the faculty of electrical engineering at Lehigh University. Since 2002, he has been with CREOL, The College of Optics and Photonics at the University of Central Florida, where he is currently a Provost’s Distinguished Research Professor.
|
Hund's rules - Wikipedia
Not to be confused with Hund's cases.
In atomic physics, Hund's rules refer to a set of rules that German physicist Friedrich Hund formulated around 1927, which are used to determine the term symbol that corresponds to the ground state of a multi-electron atom. The first rule is especially important in chemistry, where it is often referred to simply as Hund's Rule.
The three rules are:[1][2][3]
1. For a given electron configuration, the term with maximum multiplicity has the lowest energy. The multiplicity is equal to \(2S+1\), where \(S\) is the total spin angular momentum for all electrons. The multiplicity is also equal to the number of unpaired electrons plus one.[4] Therefore, the term with lowest energy is also the term with maximum \(S\) and maximum number of unpaired electrons.
2. For a given multiplicity, the term with the largest value of the total orbital angular momentum quantum number \(L\) has the lowest energy.
3. For a given term, in an atom with outermost subshell half-filled or less, the level with the lowest value of the total angular momentum quantum number \(J\) (for the operator \(\boldsymbol{J}=\boldsymbol{L}+\boldsymbol{S}\)) lies lowest in energy. If the outermost shell is more than half-filled, the level with the highest value of \(J\) is lowest in energy.
These rules specify in a simple way how usual energy interactions determine which term includes the ground state. The rules assume that the repulsion between the outer electrons is much greater than the spin–orbit interaction, which is in turn stronger than any other remaining interactions. This is referred to as the LS coupling regime.
Full shells and subshells do not contribute to the quantum numbers for total S, the total spin angular momentum and for L, the total orbital angular momentum. It can be shown that for full orbitals and suborbitals both the residual electrostatic energy (repulsion between electrons) and the spin–orbit interaction can only shift all the energy levels together. Thus when determining the ordering of energy levels in general only the outer valence electrons must be considered.
Rule 1
Main article: Hund's rule of maximum multiplicity
For silicon there is only one triplet term, so the second rule is not required. The lightest atom that requires the second rule to determine the ground state term is titanium (Ti, Z = 22) with electron configuration 1s2 2s2 2p6 3s2 3p6 3d2 4s2. In this case the open shell is 3d2 and the allowed terms include three singlets (1S, 1D, and 1G) and two triplets (3P and 3F). (Here the symbols S, P, D, F, and G indicate that the total orbital angular momentum quantum number has values 0, 1, 2, 3 and 4, respectively, analogous to the nomenclature for naming atomic orbitals.)
By the second rule, the ground term of titanium is the triplet with the larger total orbital angular momentum: \(^3F\) (\(L=3\)) rather than \(^3P\) (\(L=1\)). The value of \(L\) for a term is identified from the microstate with the maximum \(M_L\) compatible with the maximum \(M_S\).
For the third rule, the spin–orbit interaction shifts the levels of a term by
\(\Delta E = \zeta(L,S)\,\langle\mathbf{L}\cdot\mathbf{S}\rangle = \tfrac{1}{2}\,\zeta(L,S)\,[J(J+1)-L(L+1)-S(S+1)],\)
where \(\zeta(L,S)\) is the spin–orbit coupling constant. A \(^3P\) term, for example, splits into levels with \(J=2,1,0\); when the outermost subshell is half-filled or less, the \(^3P_0\) level lies lowest, while for a subshell that is more than half-filled the \(^3P_2\) level lies lowest. For a half-filled subshell, \(L=0\), so there is a single level with \(J=S\); for instance, \(S=3/2\) and \(L=0\) give only \(J=S=3/2\), a \(^4S_{3/2}\) ground level.
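Applied to a single open subshell, the three rules are mechanical enough to code. A short sketch (Python standard library only; the function name is illustrative):

```python
from fractions import Fraction

def hund_ground_term(l: int, n: int) -> str:
    """Ground-state term symbol (2S+1)L_J for n equivalent electrons in a
    subshell of orbital quantum number l, following Hund's three rules."""
    capacity = 2 * (2 * l + 1)
    if not 0 < n <= capacity:
        raise ValueError("n must lie between 1 and the subshell capacity")
    # Rule 1: maximize S by singly occupying all orbitals before pairing.
    S = Fraction(n if n <= 2 * l + 1 else capacity - n, 2)
    # Rule 2: maximize L by filling m_l slots from +l downward, twice if needed.
    ml = list(range(l, -l - 1, -1))
    L = abs(sum((ml + ml)[:n]))
    # Rule 3: J = |L - S| for half-filled or less, J = L + S otherwise.
    J = abs(L - S) if n <= 2 * l + 1 else L + S
    letters = "SPDFGHIKLMNOQ"  # L = 0, 1, 2, ... (the letter J is skipped)
    return f"{int(2 * S + 1)}{letters[L]}{J}"
```

For titanium's 3d2 configuration this yields 3F2, and for a half-filled 2p3 shell (nitrogen) it yields 4S3/2, matching the discussion above.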
Excited states
^ G.L. Miessler and D.A. Tarr, Inorganic Chemistry (Prentice-Hall, 2nd edn 1999) ISBN 0138418918, pp. 358–360
^ T. Engel and P. Reid, Physical Chemistry (Pearson Benjamin-Cummings, 2006) ISBN 080533842X, pp. 477–479
^ G. Herzberg, Atomic Spectra and Atomic Structure (Dover Publications, 1944) ISBN 0486601153, p. 135 (Although, Herzberg states these as being two rules rather than three.)
^ Miessler and Tarr p.33
^ a b I.N. Levine, Quantum Chemistry (Prentice-Hall, 4th edn 1991) ISBN 0205127703, pp. 303–304
"Hund's Rules". HyperPhysics.
|
Susan’s apartment is shown in the diagram. Assuming that all rooms are rectangular, find the areas described below. All measurements are in feet.
Find the area of her living room.
Area is the total square units inside a shape.
Area of a rectangle is base multiplied by height.
Base \(=15\); Height \(=18\)
\(15(18)=270\)
Reminder! Don't forget your units: ft ( ft ) = ft²
Answer (a): \(270\) ft²
Find the area of her entire apartment.
The base of the rectangle you are trying to find the area of is \(15+8=23\) feet; multiply by the apartment's full height to get the area.
Answer (b): \(644\) ft²
How much larger than her bedroom is her living room?
Find the area of both rooms and subtract the area of the bedroom from the area of the living room.
Find the perimeter of the kitchen.
The base of the rectangle you are trying to find the perimeter of is \(8\) feet; add twice the base and twice the height.
Answer (d): \(52\) ft
Diagram: a 2-by-2 generic rectangle, labeled as follows. Interior top left: Bedroom. Interior top right: Bath. Interior bottom left: Living Room. Bottom edge: ← 15 → on the left, ← 8 → on the right.
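The whole exercise reduces to a few products and sums. A short script (the overall height of 28 ft, and hence a bedroom height of 28 − 18 = 10 ft, is inferred from the stated apartment area of 644 sq ft, since the figure itself is not reproduced here):

```python
# Dimensions in feet, read off the diagram; apartment height is inferred
# from the given area of 644 sq ft (644 / 23 = 28), an assumption of this sketch.
base_left, base_right = 15, 8              # bottom-edge labels
living_h = 18                              # living-room (bottom-row) height
apartment_base = base_left + base_right    # 23
apartment_h = 644 // apartment_base        # 28
living_area = base_left * living_h                     # (a) 15 * 18 = 270
apartment_area = apartment_base * apartment_h          # (b) 644
bedroom_area = base_left * (apartment_h - living_h)    # inferred 15 * 10 = 150
difference = living_area - bedroom_area                # (c) 120
kitchen_perimeter = 2 * (base_right + living_h)        # (d) 2 * (8 + 18) = 52
```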
|
Estimate instantaneous frequency - MATLAB instfreq - MathWorks India
instfreq
Instantaneous Frequency of Nonstationary Signal
Instantaneous Frequency of Complex-Valued Signal
Instantaneous Frequency of Multichannel Signal
Instantaneous Frequency of Chirp
Instantaneous Frequency of Sinusoid
Instantaneous Frequency and Bandwidth as Conditional Spectral Moments
ifq = instfreq(x,fs)
ifq = instfreq(x,t)
ifq = instfreq(xt)
ifq = instfreq(tfd,fd,td)
ifq = instfreq(___,Name,Value)
[ifq,t] = instfreq(___)
instfreq(___)
ifq = instfreq(x,fs) estimates the instantaneous frequency of a signal, x, sampled at a rate fs. If x is a matrix, then the function estimates the instantaneous frequency independently for each column and returns the result in the corresponding column of ifq.
ifq = instfreq(x,t) estimates the instantaneous frequency of x sampled at the time values stored in t.
ifq = instfreq(xt) estimates the instantaneous frequency of a signal stored in the MATLAB® timetable xt. The function treats all variables in the timetable and all columns inside each variable independently.
ifq = instfreq(tfd,fd,td) estimates the instantaneous frequency of the signal whose time-frequency distribution, tfd, is sampled at the frequency values stored in fd and the time values stored in td.
ifq = instfreq(___,Name,Value) specifies additional options for any of the previous syntaxes using name-value pair arguments. You can specify the algorithm used to estimate the instantaneous frequency or the frequency limits used in the computation.
[ifq,t] = instfreq(___) also returns t, a vector of sample times corresponding to ifq.
instfreq(___) with no output arguments plots the estimated instantaneous frequency.
s = besselj(0,1000*(sin(2*pi*t.^2/8).^4));
% To hear, type sound(s,fs)
Estimate the time-dependent frequency of the signal as the first moment of the power spectrogram. Plot the power spectrogram and overlay the instantaneous frequency.
instfreq(s,fs)
Generate a complex-valued signal that consists of a chirp with sinusoidally varying frequency content. The signal is sampled at 3 kHz for 1 second and is embedded in white Gaussian noise.
x = exp(2j*pi*100*cos(2*pi*2*t))+randn(size(t))/100;
Estimate the time-dependent frequency of the signal as the first moment of the power spectrogram. This is the only method that instfreq supports for complex-valued signals. Plot the power spectrogram and overlay the instantaneous frequency.
instfreq(x,t)
Create a two-channel signal, sampled at 1 kHz for 2 seconds, consisting of two voltage-controlled oscillators.
In one channel, the instantaneous frequency varies with time as a sawtooth wave whose maximum is at 75% of the period.
In the other channel, the instantaneous frequency varies with time as a square wave with a duty cycle of 30%.
Plot the spectrograms of the two channels. Specify a time resolution of 0.1 second for the sawtooth channel and a frequency resolution of 10 Hz for the square channel.
y = vco(square(2*pi*t,30),[0.1 0.3]*fs,fs);
pspectrum(x,fs,'spectrogram','TimeResolution',0.1)
pspectrum(y,fs,'spectrogram','FrequencyResolution',10)
Store the signal in a timetable. Compute and display the instantaneous frequency.
xt = timetable(seconds(t),x,y);
instfreq(xt)
Repeat the computation using the analytic signal.
instfreq(xt,'Method','hilbert')
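The 'hilbert' method estimates frequency from the derivative of the analytic signal's phase. Outside MATLAB the same idea is a few lines of SciPy; this sketch uses a synthetic 200 Hz tone rather than the example's two-channel signal:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 200 * t)          # pure 200 Hz test tone

z = hilbert(x)                            # analytic signal
phase = np.unwrap(np.angle(z))
ifq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency, Hz

# Away from the edges, the estimate sits at the tone frequency, 200 Hz.
```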
q = chirp(t-1,0,1/2,20,'quadratic',100,'convex').*exp(-1.7*(t-2).^2);
Use the pspectrum function with default settings to estimate the power spectrum of the signal. Use the estimate to compute the instantaneous frequency.
[p,f,t] = pspectrum(q,fs,'spectrogram');
instfreq(p,f,t)
Repeat the calculation using the synchrosqueezed Fourier transform. Use a 500-sample Hann window to divide the signal into segments and window them.
[s,sf,st] = fsst(q,fs,hann(500));
instfreq(abs(s).^2,sf,st)
Compare the instantaneous frequencies found using the two different methods.
[psf,pst] = instfreq(p,f,t);
[fsf,fst] = instfreq(abs(s).^2,sf,st);
plot(fst,fsf,pst,psf)
Generate a sinusoidal signal sampled at 1 kHz for 0.3 second and embedded in white Gaussian noise of variance 1/16. Specify a sinusoid frequency of 200 Hz. Estimate and display the instantaneous frequency of the signal.
t = (0:1/fs:0.3-1/fs)';
x = sin(2*pi*200*t) + randn(size(t))/4;
Estimate the instantaneous frequency of the signal again, but now use a time-frequency distribution with a coarse frequency resolution of 25 Hz as input.
[p,fd,td] = pspectrum(x,t,'spectrogram','FrequencyResolution',25);
instfreq(p,fd,td)
Generate a signal that consists of a chirp whose frequency varies sinusoidally between 300 Hz and 1200 Hz. The signal is sampled at 3 kHz for 2 seconds.
y = vco(cos(2*pi*t),[0.1 0.4]*fs,fs);
Use instfreq to compute the instantaneous frequency of the signal and the corresponding sample times. Verify that the output corresponds to the noncentralized first-order conditional spectral moment of the time-frequency distribution of the signal as computed by tfsmoment (Predictive Maintenance Toolbox).
[z,tz] = instfreq(y,fs);
[a,ta] = tfsmoment(y,fs,1,Centralize=false);
plot(tz,z,ta,a,'.')
legend("instfreq","tfsmoment")
Use instbw to compute the instantaneous bandwidth of the signal and the corresponding sample times. Specify a scale factor of 1. Verify that the output corresponds to the square root of the centralized second-order conditional spectral moment of the time-frequency distribution of the signal. In other words, instbw generates a standard deviation and tfsmoment generates a variance.
[w,tw] = instbw(y,fs,ScaleFactor=1);
[m,tm] = tfsmoment(y,fs,2);
plot(tw,w,tm,sqrt(m),'.')
Input signal, specified as a vector or matrix. If x is a vector, then instfreq treats it as a single channel. If x is a matrix, then instfreq computes the instantaneous frequency independently for each column and returns the result in the corresponding column of ifq.
t — Sample times
real vector | duration scalar | duration array | datetime array
Sample times, specified as a real vector, a duration scalar, a duration array, or a datetime array.
duration scalar — The time interval between consecutive samples of x.
Real vector, duration array, or datetime array — The time instant corresponding to each element of x.
Example: seconds(1) specifies a 1-second lapse between consecutive measurements of a signal.
Example: seconds(0:8) specifies that a signal is sampled at 1 Hz for 8 seconds.
Input timetable. xt must contain increasing, finite row times.
Example: timetable(seconds(0:4)',randn(5,3),randn(5,4)) contains a three-channel random process and a four-channel random process, both sampled at 1 Hz for 4 seconds.
tfd — Time-frequency distribution
Time-frequency distribution, specified as a matrix sampled at the frequencies stored in fd and the time values stored in td. This input argument is supported only when 'Method' is set to 'tfmoment'.
Example: [p,f,t] = pspectrum(sin(2*pi*(0:511)/4),4,'spectrogram') specifies the time-frequency distribution of a 1 Hz sinusoid sampled at 4 Hz for 128 seconds, and also the frequencies and times at which it is computed.
fd, td — Frequency and time values for time-frequency distribution
Frequency and time values for time-frequency distribution, specified as vectors. These input arguments are supported only when 'Method' is set to 'tfmoment'.
Example: 'Method','tfmoment','FrequencyLimits',[25 50] computes the instantaneous frequency of the input in the range from 25 Hz to 50 Hz by finding the first conditional spectral moment of the time-frequency distribution.
FrequencyLimits — Frequency range
[0 fs/2] (default for real-valued signals) | [-fs/2 fs/2] (default for complex-valued signals) | two-element vector in Hz
Frequency range, specified as the comma-separated pair consisting of 'FrequencyLimits' and a two-element vector in Hz. If not specified, 'FrequencyLimits' defaults to [0 fs/2] for real-valued signals and to [-fs/2 fs/2] for complex-valued signals. This argument is supported only when 'Method' is set to 'tfmoment'.
Method — Computation method
'tfmoment' (default) | 'hilbert'
Computation method, specified as the comma-separated pair consisting of 'Method' and either 'tfmoment' or 'hilbert'.
'tfmoment' — Compute the instantaneous frequency as the first conditional spectral moment of the time-frequency distribution of x. If x is nonuniformly sampled, then instfreq interpolates the signal to a uniform grid to compute instantaneous frequencies.
'hilbert' — Compute the instantaneous frequency as the derivative of the phase of the analytic signal of x found using the Hilbert transform. This method accepts only uniformly sampled, real-valued signals and does not support time-frequency distribution input.
ifq — Instantaneous frequency
Instantaneous frequency, returned as a vector, a matrix, or a timetable with the same dimensions as the input.
t — Times of frequency estimates
real vector | duration array | datetime array
Times of frequency estimates, returned as a real vector, a duration array, or a datetime array.
The instantaneous frequency of a nonstationary signal is a time-varying parameter that relates to the average of the frequencies present in the signal as it evolves [1], [2].
If 'Method' is set to 'tfmoment', then instfreq estimates the instantaneous frequency as the first conditional spectral moment of the time-frequency distribution of the input signal. The function:
Computes the spectrogram power spectrum P(t,f) of the input using the pspectrum function and uses the spectrum as a time-frequency distribution.
Estimates the instantaneous frequency using
{f}_{\text{inst}}\left(t\right)=\frac{{\int }_{0}^{\infty }f\text{\hspace{0.17em}}P\left(t,f\right)\text{\hspace{0.17em}}df}{{\int }_{0}^{\infty }P\left(t,f\right)\text{\hspace{0.17em}}df}.
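As an illustration of this moment computation, here is a Python sketch that uses SciPy's spectrogram as the time-frequency distribution; it is an assumption-laden sketch, not the MATLAB implementation:

```python
import numpy as np
from scipy.signal import spectrogram

# Illustrative sketch (not the MATLAB implementation): estimate the
# instantaneous frequency of a linear chirp as the first conditional
# spectral moment of its spectrogram power.
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.cos(2 * np.pi * (50 * t + 25 * t**2))  # chirp: f_inst = 50 + 50*t Hz

f, tseg, P = spectrogram(x, fs, nperseg=256, noverlap=192)
# First conditional spectral moment: sum_f f*P(t,f) / sum_f P(t,f)
f_inst = (f[:, None] * P).sum(axis=0) / P.sum(axis=0)
# Near t = 1 s the estimate should be close to 50 + 50*1 = 100 Hz.
```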
If 'Method' is set to 'hilbert', then instfreq estimates the instantaneous frequency as the derivative of the phase of the analytic signal of the input. The function:
Computes the analytic signal, xA, of the input using the hilbert function.
Estimates the instantaneous frequency using
{f}_{\text{inst}}\left(t\right)=\frac{1}{2\pi }\frac{d\varphi }{dt},
where ϕ is the phase of the analytic signal of the input.
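The phase-derivative method can be sketched in Python with SciPy's hilbert function; again this is an illustrative sketch, not the MATLAB implementation:

```python
import numpy as np
from scipy.signal import hilbert

# Illustrative sketch (not the MATLAB implementation): instantaneous
# frequency as the scaled derivative of the analytic-signal phase.
fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 80 * t)            # pure 80 Hz tone

xa = hilbert(x)                           # analytic signal x + j*H{x}
phi = np.unwrap(np.angle(xa))             # instantaneous phase, unwrapped
f_inst = np.diff(phi) * fs / (2 * np.pi)  # f_inst = (1/2pi) dphi/dt, in Hz
# Away from the edges, f_inst should sit near 80 Hz.
```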
[1] Boashash, Boualem. “Estimating and Interpreting the Instantaneous Frequency of a Signal. I. Fundamentals.” Proceedings of the IEEE® 80, no. 4 (April 1992): 520–538. https://doi.org/10.1109/5.135376.
[2] Boashash, Boualem. "Estimating and Interpreting the Instantaneous Frequency of a Signal. II. Algorithms and Applications." Proceedings of the IEEE 80, no. 4 (April 1992): 540–568. https://doi.org/10.1109/5.135378.
hilbert | instbw | pspectrum | tfmoment (Predictive Maintenance Toolbox) | tfsmoment (Predictive Maintenance Toolbox) | tftmoment (Predictive Maintenance Toolbox)
Standard molar enthalpy of combustion and formation of quaternary ammonium tetrachlorozincate [n-CnH2n+1N(CH3)3]2ZnCl4 | SpringerPlus
Biyan Ren1,
Shuying Zhang2,
Bei Ruan1,
Kezhong Wu1 &
Jianjun Zhang1
The standard molar enthalpies of combustion (Δc H o m) and formation (Δf H o m) of the quaternary ammonium tetrachlorozincates [n-CnH2n+1N(CH3)3]2ZnCl4 have been determined with an oxygen-bomb combustion calorimeter for even hydrocarbon chain lengths of n = 8 to 18 carbon atoms. The results indicate that the magnitude of Δc H o m increased and that of Δf H o m decreased with increasing chain length, both showing a linear dependence on the number of carbon atoms; this is attributed to the decreasing order and rigidity of the hydrocarbon chain as the number of carbon atoms increases. The linear regression equations are −Δc H o m = 1440.50n + 3730.67 and −Δf H o m = −85.32n + 1688.22.
Quaternary ammonium tetrachlorometallates with the general formula [n-CnH2n+1NR3]2MX4 (M = Cu, Mn, Cd, Zn, Co, …; X = Cl, Br, I; R is alkyl or aryl) (short notation: CnC3M) have attracted considerable attention because of their physical properties, including ferro-, piezo- or pyroelectricity and ferri-, antiferro- or piezomagnetism, and their technical applications in electro- or magneto-optical devices (Blachnik et al. 1996; Kezhong et al. 2010). Advances in synthesis, along with the ease of controlling various structural parameters (metal, halogen and number of carbon atoms in the alkylammonium ion), have made them ideal objects for studies by spectroscopy, calorimetry, diffraction, and a variety of other techniques (Abid et al. 2011; Donghua et al. 2011; Shymkiv et al. 2011). In addition, several theoretical studies have been undertaken to predict the behavior of the CnC3M (Francesco et al. 2002; Gosniowska et al. 2000). However, the thermodynamic properties of the CnC3M have rarely been reported in the literature. In the present work, the series of quaternary ammonium tetrachlorozincates [n-CnH2n+1N(CH3)3]2ZnCl4 (n = 8, 10, 12, 14, 16, 18) was synthesized from ethanol solutions. The standard molar enthalpies of formation (Δf H o m) and combustion (Δc H o m) of the CnC3Zn were determined as a function of chain length with an oxygen-bomb combustion calorimeter at T = 298.15 K.
ZnCl2, concentrated HCl and absolute ethanol were of analytical grade. n-Octyltrimethylammonium chloride (A.P.) was purchased from TOKYO CHEMICAL INDUSTRY CO LTD (Japan). n-Decyltrimethylammonium chloride (A.P.), n-dodecyltrimethylammonium chloride (A.P.), n-tetradecyltrimethylammonium chloride (A.P.), n-hexadecyltrimethylammonium chloride (A.P.) and n-trimethylstearylammonium chloride (A.P.) were purchased from J & K CHEMICAL LTD. For the synthesis of CnC3Zn, hot absolute ethanol solutions of ZnCl2, concentrated HCl and the corresponding quaternary ammonium chloride were mixed in a 1:2:2 molar ratio. The solutions were concentrated by boiling for 1 h and then cooled to room temperature. After filtration, the products were recrystallized twice from absolute ethanol and then placed in a vacuum desiccator for 10 h at about 353 K. The CnC3Zn were analyzed with an MT-3 CHN elemental analyzer (Japan); the results are as follows. Elemental analysis calcd (%) for C8C3Zn: C 47.88, H 9.43, N 5.08, Cl 25.75; found: C 47.45, H 9.50, N 5.13, Cl 24.99. Anal. calcd for C10C3Zn: C 51.37, H 9.88, N 4.61, Cl 23.38; found: C 50.98, H 9.95, N 4.58, Cl 22.81. Anal. calcd for C12C3Zn: C 54.26, H 10.25, N 4.22, Cl 21.41; found: C 53.93, H 10.34, N 4.26, Cl 21.25. Anal. calcd for C14C3Zn: C 56.72, H 10.56, N 3.89, Cl 19.74; found: C 56.06, H 10.20, N 3.84, Cl 19.03. Anal. calcd for C16C3Zn: C 58.84, H 10.84, N 3.61, Cl 18.31; found: C 58.91, H 10.77, N 3.62, Cl 18.72. Anal. calcd for C18C3Zn: C 60.65, H 11.07, N 3.37, Cl 17.08; found: C 60.56, H 11.04, N 3.36, Cl 17.55.
The combustion experiments were performed with a static bomb calorimeter (XRY-1A, Shanghai). Benzoic acid (Thermochemical Standard, BCS-CRM-190r) was used as the calibrant of the bomb calorimeter; its massic energy of combustion is Δc U = −(26460 ± 3.8) J · g−1 under certificate conditions. The massic energy of combustion Δc U m for each CnC3Zn was obtained from the equation Δc U m = [−ε cal · ΔT + Δm ign · u ign + V NaOH · (−59.7)]/m CnC3Zn, where ε cal is the energy equivalent of the calorimeter, ΔT is the corrected temperature change of the calorimeter, Δm ign is the mass of the nickel–chromium alloy burned for ignition, with massic energy u ign = −3.245 kJ · g−1 (U ign = Δm ign · u ign), m CnC3Zn is the mass of the CnC3Zn burned, and V NaOH is the volume of sodium hydroxide consumed by the nitric acid formed; the corrections for nitric acid formation were based on −59.7 kJ · mol−1 for the molar energy of formation of 0.1 mol · dm−3 HNO3 (aq) from N2, O2, and H2O(l) (Matos et al. 2002). The calibration results were corrected to the average mass of water added to the calorimeter, 2500.0 g, and the volume of the oxygen bomb was 300 ml. From five independent calibration experiments between T = 295.15 K and T = 299.15 K, the energy equivalent ε cal = (13965.4 ± 4.7) J · K−1 was obtained, where the uncertainty quoted is the standard deviation of the mean. For all experiments, ignition was made at T = (298.150 ± 0.001) K. Combustion experiments were performed in oxygen at a pressure p = 3.00 MPa and in the presence of 10.00 cm3 of water added to the bomb (Matos et al. 2002).
The individual results of all combustion experiments, together with the mean values and their standard deviations, are given for each compound in Table 1. In accordance with normal thermochemical practice, the uncertainties assigned to the standard molar enthalpies of combustion are, in each case, equal to twice the overall standard deviation of the mean and include the uncertainties in calibration (Henoc et al. 2009). The results refer to reactions (1)–(6) and equations (7)–(9):
\begin{array}{l}{\left[{C}_{8}{H}_{17}N{\left(C{H}_{3}\right)}_{3}\right]}_{2}\mathit{ZnC}{l}_{4}\left(s\right)+\frac{69}{2}{O}_{2}\left(g\right)\\ \phantom{\rule{1em}{0ex}}=\mathit{ZnO}\left(s\right)+22C{O}_{2}\left(g\right)+4\mathit{HCl}\left(l\right)+24{H}_{2}O\left(l\right)\\ \phantom{\rule{2em}{0ex}}+{N}_{2}\left(g\right)\end{array}
\begin{array}{l}{\left[{C}_{10}{H}_{21}N{\left(C{H}_{3}\right)}_{3}\right]}_{2}\mathit{ZnC}{l}_{4}\left(s\right)+\frac{81}{2}{O}_{2}\left(g\right)\\ \phantom{\rule{1em}{0ex}}=\mathit{ZnO}\left(s\right)+26C{O}_{2}\left(g\right)+4\mathit{HCl}\left(l\right)+28{H}_{2}O\left(l\right)\\ \phantom{\rule{2em}{0ex}}+{N}_{2}\left(g\right)\end{array}
\begin{array}{l}{\left[{C}_{12}{H}_{25}N{\left(C{H}_{3}\right)}_{3}\right]}_{2}\mathit{ZnC}{l}_{4}\left(s\right)+\frac{93}{2}{O}_{2}\left(g\right)\\ \phantom{\rule{1em}{0ex}}=\mathit{ZnO}\left(s\right)+30C{O}_{2}\left(g\right)+4\mathit{HCl}\left(l\right)+32{H}_{2}O\left(l\right)\\ \phantom{\rule{2em}{0ex}}+{N}_{2}\left(g\right)\end{array}
\begin{array}{l}{\left[{C}_{14}{H}_{29}N{\left(C{H}_{3}\right)}_{3}\right]}_{2}\mathit{ZnC}{l}_{4}\left(s\right)+\frac{105}{2}{O}_{2}\left(g\right)\\ \phantom{\rule{1em}{0ex}}=\mathit{ZnO}\left(s\right)+34C{O}_{2}\left(g\right)+4\mathit{HCl}\left(l\right)+36{H}_{2}O\left(l\right)\\ \phantom{\rule{2em}{0ex}}+{N}_{2}\left(g\right)\end{array}
\begin{array}{l}{\left[{C}_{16}{H}_{33}N{\left(C{H}_{3}\right)}_{3}\right]}_{2}\mathit{ZnC}{l}_{4}\left(s\right)+\frac{117}{2}{O}_{2}\left(g\right)\\ \phantom{\rule{1em}{0ex}}=\mathit{ZnO}\left(s\right)+38C{O}_{2}\left(g\right)+4\mathit{HCl}\left(l\right)+40{H}_{2}O\left(l\right)\\ \phantom{\rule{2em}{0ex}}+{N}_{2}\left(g\right)\end{array}
\begin{array}{l}{\left[{C}_{18}{H}_{37}N{\left(C{H}_{3}\right)}_{3}\right]}_{2}\mathit{ZnC}{l}_{4}\left(s\right)+\frac{129}{2}{O}_{2}\left(g\right)\\ \phantom{\rule{1em}{0ex}}=\mathit{ZnO}\left(s\right)+42C{O}_{2}\left(g\right)+4\mathit{HCl}\left(l\right)+44{H}_{2}O\left(l\right)\\ \phantom{\rule{2em}{0ex}}+{N}_{2}\left(g\right)\end{array}
{\Delta }_{\mathrm{c}}{H}_{\mathrm{m}}^{\mathrm{o}}=M{\Delta }_{\mathrm{c}}{U}_{\mathrm{m}}^{\mathrm{o}}+\Delta n\phantom{\rule{0.12em}{0ex}}\mathit{RT}
\Delta \phantom{\rule{0.12em}{0ex}}n={n}_{\mathrm{g}}\left(\mathrm{product}\right)-{n}_{\mathrm{g}}\left(\mathrm{reactant}\right)
{\Delta }_{\mathrm{f}}{H}_{\mathrm{m}}^{\mathrm{o}}\left({\mathrm{C}}_{n}{\mathrm{C}}_{3}\mathrm{Zn}\right)=\sum {V}_{\mathrm{B}}\phantom{\rule{0.12em}{0ex}}{\Delta }_{\mathrm{f}}{H}_{\mathrm{m}}^{\mathrm{o}}\left(\mathrm{B}\right)-{\Delta }_{\mathrm{c}}{H}_{\mathrm{m}}^{\mathrm{o}}
Table 1 The values of the combustion energies of the quaternary ammonium tetrachlorometallate C n C 3 Zn
where R is the molar gas constant, M is the molar mass of the CnC3Zn, VB is the stoichiometric coefficient, and Δf H o m (B) is the standard molar enthalpy of formation of combustion product B. The standard molar enthalpies of formation of ZnO(s), H2O(l) and CO2(g) at T = 298.15 K are −348.28 kJ · mol−1, −(285.830 ± 0.042) kJ · mol−1 and −(393.51 ± 0.13) kJ · mol−1, respectively (Manuel et al. 2010). The Δf H o m of the CnC3Zn were obtained from the Δc H o m measured with the oxygen-bomb combustion calorimeter at T = 298.15 K. Table 2 lists the values of the standard molar combustion energies Δc U o m, the standard molar enthalpies of combustion Δc H o m, and the standard molar enthalpies of formation Δf H o m derived from Δc U o m for the CnC3Zn.
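As a worked illustration of equations (7)–(8), the following Python sketch converts a massic combustion energy to a standard molar combustion enthalpy; the molar mass and massic energy below are hypothetical placeholders, not the paper's measured data:

```python
# Worked illustration of Eq. (7)-(8):
#   Delta_c H_m = M * Delta_c u + Delta_n * R * T
# The molar mass and massic energy are hypothetical placeholders.
R = 8.314        # molar gas constant, J mol^-1 K^-1
T = 298.15       # K

# Reaction (1) for C8C3Zn: gaseous products are 22 CO2 + 1 N2; the
# gaseous reactant is 69/2 O2, so Delta_n = 23 - 34.5 = -11.5 mol.
delta_n = (22 + 1) - 69 / 2

M = 0.5519           # kg mol^-1 (hypothetical molar mass of C8C3Zn)
delta_c_u = -27.6e6  # J kg^-1 (hypothetical massic combustion energy)

delta_c_h = M * delta_c_u + delta_n * R * T   # J mol^-1
print(f"Delta_c H_m = {delta_c_h / 1000:.1f} kJ/mol")
```

Note that the gas-phase mole change contributes only a small correction (on the order of tens of kJ · mol−1) relative to the combustion energy itself.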
Table 2 The value of thermochemical functions of the quaternary ammonium tetrachlorometallate C n C 3 Zn
The influence of the hydrocarbon chain length on Δc H o m and Δf H o m of the CnC3Zn has been obtained for chain lengths from 8 to 18 carbon atoms. Analysis of the experimental data shows that the values of Δc H o m and Δf H o m depend linearly on the number of carbon atoms. Figures 1 and 2 plot −Δc H o m and −Δf H o m against the number of carbon atoms (n), giving straight-line relationships for the values in Table 2. The linear regression equations are −Δc H o m = 1440.50n + 3730.67 with a correlation coefficient r = 0.9998 and −Δf H o m = −85.32n + 1688.22 with r = 0.9512. A striking feature is that the magnitude of Δc H o m increased and that of Δf H o m decreased with increasing chain length. The reason is that the structures of CnC3Zn are characteristic of the piling of sandwiches, in which a two-dimensional layer of ZnCl4 2− tetrahedra is sandwiched between two alkylammonium layers. The layers are bound by van der Waals forces between (CH2)nCH3 groups and by long-range Coulomb forces. The –N(CH3)3 + groups of the chains occupy the cavities of the ZnCl4 2− layers and are bonded ionically to the chlorine atoms (Weizhen et al. 2011). As the hydrocarbon chain length increases, the formation of chain conformers plays a more important role in the structural phase transitions. The order and rigidity of the hydrocarbon chain decrease with increasing number of carbon atoms, that is, with an increasing mean number of conformationally flexible segments in CnC3Zn (Nobuaki et al. 2011); furthermore, the strengths of the ionic bonds and van der Waals forces decrease with increasing number of carbon atoms, so the values of Δc H o m and Δf H o m show a linear dependence on the number of carbon atoms.
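The fitting procedure can be illustrated numerically with the reported regression equations themselves (a Python sketch; the points are generated from the published fits rather than the measured values in Table 2):

```python
import numpy as np

# Reproduce the reported linear fits (kJ mol^-1):
#   -Delta_c H_m = 1440.50 n + 3730.67
#   -Delta_f H_m = -85.32 n + 1688.22
# The points are generated from these equations, not from Table 2.
n = np.array([8, 10, 12, 14, 16, 18], dtype=float)
neg_dcH = 1440.50 * n + 3730.67
neg_dfH = -85.32 * n + 1688.22

# A least-squares line through the points recovers the coefficients.
slope, intercept = np.polyfit(n, neg_dcH, 1)
print(slope, intercept)
```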
Plot of Δ c H o m vs. number( n ) of carbon-atoms in the quaternary ammonium tetrachlorometallate C n C 3 Zn.
Plot of Δ f H o m vs. number( n ) of carbon-atoms in the quaternary ammonium tetrachlorometallate C n C 3 Zn.
The standard molar enthalpies of combustion and formation of the quaternary ammonium tetrachlorozincates [n-CnH2n+1N(CH3)3]2ZnCl4 (n = 8, 10, 12, 14, 16, 18) have been measured with an oxygen-bomb combustion calorimeter. The results indicate that the magnitudes of the standard molar combustion enthalpies Δc H o m of these compounds increased, and those of the standard molar formation enthalpies Δf H o m decreased, with increasing chain length, both showing a linear dependence on the number of carbon atoms.
Abid H, Samet A, Dammak T: Electronic structure calculations and optical properties of a new organic–inorganic luminescent perovskite: (C9H19NH3)2PbI2Br2. J Lumin 2011, 131: 1753-1757. 10.1016/j.jlumin.2011.03.034
Blachnik R, Siethoff C: Thermoanalytical and X-ray study of some alkylammonium tetrachlorozincates. Thermochim Acta 1996, 278: 39-47.
Donghua H, Youying D, Zhcheng T: Crystal structures and thermochemistry on phase change materials ( n -CnH2n+1NH3)2CuCl4(s) (n = 14 and 15). Sol Energy Mater Sol Cells 2011, 95: 2897-2906. 10.1016/j.solmat.2011.06.014
Neve F, Francescangeli O, Crispini A: Crystal architecture and mesophase structure of long-chain N-alkylpyridinium tetrachlorometallates. Inorg Chim Acta 2002, 338: 51-58.
Gosniowska M, Ciunik Z, Bator G, Jakubas R, Baran J: Structure and phase transitions in tetramethylammonium tetrabromoindate(III) and tetraethylammonium tetrabromoindate(III) crystals. J Mol Struct 2000, 555: 243-255. 10.1016/S0022-2860(00)00607-4
Flores H, Adriana Camarillo E, Mentado J: Enthalpies of combustion and formation of 2-acetylpyrrole, 2-acetylfuran and 2-acetylthiophene. Thermochim Acta 2009, 493: 76-79. 10.1016/j.tca.2009.04.012
Kezhong W, Jianjun Z: Subsolidus binary phase diagram of ( n -CnH2n+1NH3)2ZnCl4 (n =14, 16, 18). J Therm Anal Calorim 2010, 101: 913-917. 10.1007/s10973-010-0806-9
Matos MAR, Monte MJS, Hillesheim DM: Standard molar enthalpies of combustion of the three trans -methoxycinnamic acids. J Chem Thermodyn 2002, 34: 499-509. 10.1006/jcht.2001.0903
Manuel AV, da Silva R, Ana IMC, Lobo F: Enthalpies of combustion, vapour pressures, and enthalpies of sublimation of the 1,5- and 1,8-diaminonaphthalenes. J Chem Thermodyn 2010, 42: 371-379. 10.1016/j.jct.2009.09.009
Shymkiv RM, Sveleba SA, Karpa IV, Katerynchuk IN, Kunyo IM, Phitsych EI: Electronic spectra and phase transitions in thin [N(CH3)4]2CuCl4 microcrystals. J Appl Spectroscopy 2011, 78: 823-828.
Kitazawa N, Aono M, Watanabe Y: Synthesis and luminescence properties of lead-halide based organic–inorganic layered perovskite compounds (CnH2n+1NH3)2 PbI4 (n = 4, 5, 7, 8 and 9). J Phys Chem Solids 2011, 72: 1467-1471. 10.1016/j.jpcs.2011.08.029
Weizhen C, Kezhong W, Xiaodi L, Liuqin W, Biyan R: Subsolidus binary phase diagram of the perovskite type layer materials ( n -CnH2n+1NH3)2ZnCl4 (n = 10, 12, 14). Thermochim Acta 2011, 521: 80-83. 10.1016/j.tca.2011.04.008
This project was financially supported by National Natural Science Foundation of China (No.21073052, 21246006), Natural Science Foundation of Hebei Province (No. B2012205034), and Science Foundation of Hebei Normal University (L2011K04).
Department of Chemistry and Material Science, Hebei Normal University, Shijiazhuang, 050024, China
Biyan Ren, Bei Ruan, Kezhong Wu & Jianjun Zhang
Department of Basic Course, the Chinese People’s Armed Police Force Academy, Langfang, 065000, China
Biyan Ren
Bei Ruan
Kezhong Wu
Correspondence to Kezhong Wu.
KZW participated in the design of the experiment. All authors equally participated in the preparation of the manuscript, and read and approved the final manuscript.
Ren, B., Zhang, S., Ruan, B. et al. Standard molar enthalpy of combustion and formation of quaternary ammonium tetrachlorozincate [n-CnH2n+1 N(CH3)3]2 ZnCl4 . SpringerPlus 2, 98 (2013). https://doi.org/10.1186/2193-1801-2-98
Quaternary ammonium tetrachlorozincate
Dining cryptographers problem - Wikipedia
In cryptography, the dining cryptographers problem studies how to perform a secure multi-party computation of the boolean-XOR function. David Chaum first proposed this problem in the early 1980s and used it as an illustrative example to show that it was possible to send anonymous messages with unconditional sender and recipient untraceability. Anonymous communication networks based on this problem are often referred to as DC-nets (where DC stands for "dining cryptographers").[1]
Dining cryptographers problem illustration
Three cryptographers gather around a table for dinner. The waiter informs them that the meal has been paid for by someone, who could be one of the cryptographers or the National Security Agency (NSA). The cryptographers respect each other's right to make an anonymous payment, but want to find out whether the NSA paid. So they decide to execute a two-stage protocol.
In the first stage, every two cryptographers establish a shared one-bit secret, say by tossing a coin behind a menu so that only those two see the outcome; this is done in turn for each pair of cryptographers. Suppose, for example, that after the coin tossing, cryptographers A and B share a secret bit
{\displaystyle 1}
, A and C share
{\displaystyle 0}
, and B and C share
{\displaystyle 1}
In the second stage, each cryptographer publicly announces a bit, which is:
if they didn't pay for the meal, the exclusive OR (XOR) of the two shared bits they hold with their two neighbours,
if they did pay for the meal, the opposite of that XOR.
Supposing none of the cryptographers paid, then A announces
{\displaystyle 1\oplus 0=1}
, B announces
{\displaystyle 1\oplus 1=0}
, and C announces
{\displaystyle 0\oplus 1=1}
. On the other hand, if A paid, she announces
{\displaystyle \lnot (1\oplus 0)=0}
The three public announcements combined reveal the answer to their question. One simply computes the XOR of the three bits announced. If the result is 0, it implies that none of the cryptographers paid (so the NSA must have paid the bill). Otherwise, one of the cryptographers paid, but their identity remains unknown to the other cryptographers.
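The two-stage protocol above can be sketched in Python (an illustrative toy; the function name and structure are not from the source):

```python
import secrets

# Toy sketch of the three-cryptographer protocol described above.
def dining_cryptographers(payer=None):
    """payer: 0, 1 or 2 if that cryptographer paid, None if the NSA paid.
    Returns 1 if some cryptographer paid, 0 otherwise."""
    # Stage 1: each pair flips a coin only its two members can see.
    coin = {(0, 1): secrets.randbits(1),
            (0, 2): secrets.randbits(1),
            (1, 2): secrets.randbits(1)}
    # Stage 2: each announces the XOR of the two bits they hold,
    # inverted if they paid.
    announced = []
    for i in range(3):
        bit = 0
        for pair, c in coin.items():
            if i in pair:
                bit ^= c
        if i == payer:
            bit ^= 1
        announced.append(bit)
    # Every shared bit appears in exactly two announcements, so the XOR
    # of all three cancels the coins and leaves only the payer's flip.
    return announced[0] ^ announced[1] ^ announced[2]

print(dining_cryptographers(payer=None))  # NSA paid: result is 0
print(dining_cryptographers(payer=1))     # a cryptographer paid: result is 1
```

The result is deterministic regardless of the coin outcomes, which is exactly why the announcements reveal nothing about who paid.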
David Chaum coined the term dining cryptographers network, or DC-net, for this protocol.
The DC-net protocol is simple and elegant. It has several limitations, however, some solutions to which have been explored in follow-up research (see the References section below).
If two cryptographers paid for the dinner, their messages will cancel each other out, and the final XOR result will be
{\displaystyle 0}
. This is called a collision, and it means only one participant can transmit at a time using this protocol. More generally, a collision occurs whenever an even number of participants (two or more) send messages in the same round.
Any malicious cryptographer who does not want the group to communicate successfully can jam the protocol so that the final XOR result is useless, simply by sending random bits instead of the correct result of the XOR. This problem occurs because the original protocol was designed without using any public key technology and lacks reliable mechanisms to check whether participants honestly follow the protocol.[2]
The protocol requires pairwise shared secret keys between the participants, which may be problematic if there are many participants. Also, though the DC-net protocol is "unconditionally secure", it actually depends on the assumption that "unconditionally secure" channels already exist between pairs of the participants, which is not easy to achieve in practice.
A related anonymous veto network algorithm computes the logical OR of several users' inputs, rather than a logical XOR as in DC-nets, which may be useful in applications to which a logical OR combining operation is naturally suited.
David Chaum first thought about this problem in the early 1980s. The first publication outlining the basic underlying ideas is his 1985 article.[3] The journal version appeared in the very first issue of the Journal of Cryptology.[4]
DC-nets are readily generalized to allow for transmissions of more than one bit per round, for groups larger than three participants, and for arbitrary "alphabets" other than the binary digits 0 and 1, as described below.
Transmissions of longer messages[edit]
To enable an anonymous sender to transmit more than one bit of information per DC-nets round, the group of cryptographers can simply repeat the protocol as many times as desired to create a desired number of bits worth of transmission bandwidth. These repetitions need not be performed serially. In practical DC-net systems, it is typical for pairs of participants to agree up-front on a single shared "master" secret, using Diffie–Hellman key exchange for example. Each participant then locally feeds this shared master secret into a pseudorandom number generator, in order to produce as many shared "coin flips" as desired to allow an anonymous sender to transmit multiple bits of information.
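This derivation of many shared coin flips from one master secret can be sketched as follows (illustrative Python; SHA-256 in counter mode stands in for a proper pseudorandom generator, and the Diffie–Hellman exchange that establishes the master secret is assumed to have already taken place):

```python
import hashlib

# Illustrative sketch (not from the source): both members of a pair
# derive identical streams of shared "coin flips" from one master
# secret, one call per DC-net round.
def coin_flips(master_secret: bytes, round_no: int, nbits: int):
    bits = []
    counter = 0
    while len(bits) < nbits:
        # Hash (secret, round, counter) to extend the keystream.
        block = hashlib.sha256(
            master_secret
            + round_no.to_bytes(8, "big")
            + counter.to_bytes(8, "big")
        ).digest()
        for byte in block:
            for k in range(8):
                bits.append((byte >> k) & 1)
        counter += 1
    return bits[:nbits]

# Both ends compute the same flips from the same secret and round number.
alice = coin_flips(b"shared-master-secret", round_no=0, nbits=128)
bob = coin_flips(b"shared-master-secret", round_no=0, nbits=128)
assert alice == bob
```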
Larger group sizes[edit]
The protocol can be generalized to a group of
{\displaystyle n}
participants, each with a shared secret key in common with each other participant. In each round of the protocol, if a participant wants to transmit an untraceable message to the group, they invert their publicly announced bit. The participants can be visualized as a fully connected graph with the vertices representing the participants and the edges representing their shared secret keys.
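A sketch of this n-participant generalization (illustrative Python, assuming a single sender per round; names are not from the source):

```python
import secrets
from itertools import combinations

# Illustrative sketch: a fully connected DC-net with n participants.
# Every pair shares a secret bit; at most one sender inverts their
# announced bit to transmit a message bit.
def dc_net_round(n, sender=None, message=1):
    shared = {pair: secrets.randbits(1) for pair in combinations(range(n), 2)}
    result = 0
    for i in range(n):
        bit = 0
        for pair, s in shared.items():
            if i in pair:
                bit ^= s    # XOR of all secrets participant i holds
        if i == sender:
            bit ^= message  # the sender folds their message bit in
        result ^= bit       # public XOR of all announcements
    return result

print(dc_net_round(7))                       # no sender: 0
print(dc_net_round(7, sender=3, message=1))  # sender's bit recovered: 1
```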
Sparse secret sharing graphs[edit]
The protocol may be run with less than fully connected secret sharing graphs, which can improve the performance and scalability of practical DC-net implementations, at the potential risk of reducing anonymity if colluding participants can split the secret sharing graph into separate connected components. For example, an intuitively appealing but less secure generalization to
{\displaystyle n>3}
participants uses a ring topology, in which each cryptographer sitting around a table shares a secret only with the cryptographers to their immediate left and right, and not with every other cryptographer. Such a topology is appealing because each cryptographer needs to coordinate only two coin flips per round, rather than n − 1. However, if Adam and Charlie are actually NSA agents sitting immediately to the left and right of Bob, an innocent victim, and if Adam and Charlie secretly collude to reveal their secrets to each other, then they can determine with certainty whether or not Bob was the sender of a 1 bit in a DC-net run, regardless of how many participants there are in total. This is because the colluding participants Adam and Charlie effectively "split" the secret sharing graph into two separate disconnected components, one containing only Bob, the other containing all other honest participants.
Another compromise secret sharing DC-net topology, employed in the Dissent system for scalability,[5] may be described as a client/server or user/trustee topology. In this variant, we assume there are two types of participants playing different roles: a potentially large number n of users who desire anonymity, and a much smaller number
{\displaystyle m}
of trustees whose role is to help the users obtain that anonymity. In this topology, each of the
{\displaystyle n}
users shares a secret with each of the
{\displaystyle m}
trustees—but users share no secrets directly with other users, and trustees share no secrets directly with other trustees—resulting in an
{\displaystyle n\times m}
secret sharing matrix. If the number of trustees
{\displaystyle m}
is small, then each user needs to manage only a few shared secrets, improving efficiency for users in the same way the ring topology does. However, as long as at least one trustee behaves honestly and does not leak his or her secrets or collude with other participants, then that honest trustee forms a "hub" connecting all honest users into a single fully connected component, regardless of which or how many other users and/or trustees might be dishonestly colluding. Users need not know or guess which trustee is honest; their security depends only on the existence of at least one honest, non-colluding trustee.
Alternate alphabets and combining operators[edit]
Though the simple DC-nets protocol uses binary digits as its transmission alphabet, and uses the XOR operator to combine cipher texts, the basic protocol generalizes to any alphabet and combining operator suitable for one-time pad encryption. This flexibility arises naturally from the fact that the secrets shared between the many pairs of participants are, in effect, merely one-time pads combined symmetrically within a single DC-net round.
One useful alternate choice of DC-nets alphabet and combining operator is to use a finite group suitable for public-key cryptography as the alphabet—such as a Schnorr group or elliptic curve—and to use the associated group operator as the DC-net combining operator. Such a choice of alphabet and operator makes it possible for clients to use zero-knowledge proof techniques to prove correctness properties about the DC-net ciphertexts that they produce, such as that the participant is not "jamming" the transmission channel, without compromising the anonymity offered by the DC-net. This technique was first suggested by Golle and Juels,[6] further developed by Franck,[7] and later implemented in Verdict, a cryptographically verifiable implementation of the Dissent system.[8]
Handling or avoiding collisions[edit]
The measure originally suggested by David Chaum to avoid collisions is to retransmit the message once a collision is detected, but the paper does not explain exactly how to arrange the retransmission.
Dissent avoids the possibility of unintentional collisions by using a verifiable shuffle to establish a DC-nets transmission schedule, such that each participant knows exactly which bits in the schedule correspond to his own transmission slot, but does not know who owns other transmission slots.[9]
Countering disruption attacks
Herbivore divides a large anonymity network into smaller DC-net groups, enabling participants to evade disruption attempts by leaving a disrupted group and joining another group, until the participant finds a group free of disruptors.[10] This evasion approach introduces the risk that an adversary who owns many nodes could selectively disrupt only groups the adversary has not completely compromised, thereby "herding" participants toward groups that may be functional precisely because they are completely compromised.[11]
Dissent implements several schemes to counter disruption. The original protocol[9] used a verifiable cryptographic shuffle to form a DC-net transmission schedule and distribute "transmission assignments", allowing the correctness of subsequent DC-nets ciphertexts to be verified with a simple cryptographic hash check. This technique required a fresh verifiable shuffle before every DC-nets round, however, leading to high latencies. A later, more efficient scheme allows a series of DC-net rounds to proceed without intervening shuffles in the absence of disruption; in response to a disruption event, it uses a shuffle to distribute anonymous accusations, enabling a disruption victim to expose and prove the identity of the perpetrator.[5] Finally, more recent versions support fully verifiable DC-nets (at substantial cost in computation efficiency due to the use of public-key cryptography in the DC-net) as well as a hybrid mode that uses efficient XOR-based DC-nets in the normal case and verifiable DC-nets only upon disruption, to distribute accusations more quickly than is feasible using verifiable shuffles.[8]
^ Chaum DL (1988). "The dining cryptographers problem: unconditional sender and recipient untraceability". J Cryptol. 1(1):65–75.
^ Knights and Knaves.
^ David Chaum (1985). "Security without identification: transaction systems to make big brother obsolete" (PDF). Communications of the ACM. 28 (10): 1030–1044. CiteSeerX 10.1.1.319.3690. doi:10.1145/4372.4373. S2CID 15340054.
^ David Chaum (1988). "The Dining Cryptographers Problem: Unconditional Sender and Recipient Untraceability". Journal of Cryptology. 1 (1): 65–75. CiteSeerX 10.1.1.127.4293. doi:10.1007/BF00206326. S2CID 2664614.
^ a b David Isaac Wolinsky; Henry Corrigan-Gibbs; Bryan Ford; Aaron Johnson (October 8–10, 2012). Dissent in Numbers: Making Strong Anonymity Scale. 10th USENIX Symposium on Operating Systems Design and Implementation (OSDI). Hollywood, CA, USA.
^ Philippe Golle; Ari Juels (May 2–6, 2004). Dining Cryptographers Revisited (PDF). Eurocrypt 2004. Interlaken, Switzerland.
^ Franck, Christian (2008). New Directions for Dining Cryptographers (PDF) (M.Sc. thesis).
^ a b Henry Corrigan-Gibbs; David Isaac Wolinsky; Bryan Ford (August 14–16, 2013). Proactively Accountable Anonymous Messaging in Verdict. 22nd USENIX Security Symposium. Washington, DC, USA.
^ a b Henry Corrigan-Gibbs; Bryan Ford (October 2010). Dissent: Accountable Group Anonymity. 17th ACM Conference on Computer and Communications Security (CCS). Chicago, IL, USA. Archived from the original on 2012-11-29. Retrieved 2012-09-09.
^ Emin Gün Sirer; Sharad Goel; Mark Robson; Doğan Engin (September 19–22, 2004). Eluding Carnivores: File Sharing with Strong Anonymity (PDF). ACM SIGOPS European workshop. Leuven, Belgium.
^ Nikita Borisov; George Danezis; Prateek Mittal; Parisa Tabriz (October 2007). Denial of Service or Denial of Security? How Attacks on Reliability can Compromise Anonymity (PDF). ACM Conference on Computer and Communications Security (CCS). Alexandria, VA, USA.
|
Frequency spectrum of a typical modulated AM or FM radio signal. It consists of a component C at the carrier wave frequency \(f_{c}\).
ITU frequency bands
Audio: Radio broadcasting
Video: Television broadcasting
Two-way voice communication
One-way voice communication
Data communication
Space communication
Communication satellite – an artificial satellite used as a telecommunications relay to transmit data between widely separated points on Earth. These are used because the microwaves used for telecommunications travel by line of sight and so cannot propagate around the curve of the Earth. As of 1 January 2021, there were 2,224 communications satellites in Earth orbit.[14] Most are in geostationary orbit 22,200 miles (35,700 km) above the equator, so that the satellite appears stationary at the same point in the sky, so the satellite dish antennas of ground stations can be aimed permanently at that spot and do not have to move to track it. In a satellite ground station a microwave transmitter and large satellite dish antenna transmit a microwave uplink beam to the satellite. The uplink signal carries many channels of telecommunications traffic, such as long-distance telephone calls, television programs, and internet signals, using a technique called frequency-division multiplexing (FDM). On the satellite, a transponder receives the signal, translates it to a different downlink frequency to avoid interfering with the uplink signal, and retransmits it down to another ground station, which may be widely separated from the first. There the downlink signal is demodulated and the telecommunications traffic it carries is sent to its local destinations through landlines. Communication satellites typically have several dozen transponders on different frequencies, which are leased by different users.
Radiolocation
Jamming
|
The M-Wright Function in Time-Fractional Diffusion Processes: A Tutorial Survey
Francesco Mainardi, Antonio Mura, Gianni Pagnini
In the present review we survey the properties of a transcendental function of the Wright type, nowadays known as the M-Wright function, entering as a probability density in a relevant class of self-similar stochastic processes that we generally refer to as time-fractional diffusion processes. Indeed, the master equations governing these processes generalize the standard diffusion equation by means of time-integral operators interpreted as derivatives of fractional order. When these generalized diffusion processes are properly characterized with stationary increments, the M-Wright function is shown to play the same key role as the Gaussian density in the standard and fractional Brownian motions. Furthermore, these processes provide stochastic models suitable for describing phenomena of anomalous diffusion of both slow and fast types.
Francesco Mainardi, Antonio Mura, Gianni Pagnini. "The M-Wright Function in Time-Fractional Diffusion Processes: A Tutorial Survey." Int. J. Differ. Equ. 2010 (SI1): 1–29, 2010. https://doi.org/10.1155/2010/104505
Received: 13 September 2009; Accepted: 8 November 2009; Published: 2010
|
Implement tractor in 3D environment - Simulink - MathWorks España
Simulation 3D Tractor
Implement tractor in 3D environment
The Simulation 3D Tractor block implements a three-axle tractor in the 3D simulation environment.
To use the Simulation 3D Tractor block, ensure that the Simulation 3D Scene Configuration block is in your model. If you set the Sample time parameter of the Simulation 3D Tractor block to -1, the block uses the sample time specified in the Simulation 3D Scene Configuration block.
Verify that the Simulation 3D Tractor block executes before the Simulation 3D Scene Configuration block. That way, the Simulation 3D Tractor block prepares the signal data before the Unreal Engine® 3D visualization environment receives it. To check the block execution order, right-click the blocks and select Properties. On the General tab, confirm these Priority settings:
Simulation 3D Tractor — -1
Vehicle and wheel translation, in m. The array dimensions are 7-by-3, where:
Translation=\left[\begin{array}{ccc}{X}_{v}& {Y}_{v}& {Z}_{v}\\ {X}_{FL}& {Y}_{FL}& {Z}_{FL}\\ {X}_{FR}& {Y}_{FR}& {Z}_{FR}\\ {X}_{ML}& {Y}_{ML}& {Z}_{ML}\\ {X}_{MR}& {Y}_{MR}& {Z}_{MR}\\ {X}_{RL}& {Y}_{RL}& {Z}_{RL}\\ {X}_{RR}& {Y}_{RR}& {Z}_{RR}\end{array}\right]
Middle left wheel, XML
Vehicle and wheel rotation, in rad. The array dimensions are 7-by-3, where:
Rotation=\left[\begin{array}{ccc}Rol{l}_{v}& Pitc{h}_{v}& Ya{w}_{v}\\ Rol{l}_{FL}& Pitc{h}_{FL}& Ya{w}_{FL}\\ Rol{l}_{FR}& Pitc{h}_{FR}& Ya{w}_{FR}\\ Rol{l}_{ML}& Pitc{h}_{ML}& Ya{w}_{ML}\\ Rol{l}_{MR}& Pitc{h}_{MR}& Ya{w}_{MR}\\ Rol{l}_{RL}& Pitc{h}_{RL}& Ya{w}_{RL}\\ Rol{l}_{RR}& Pitc{h}_{RR}& Ya{w}_{RR}\end{array}\right]
Type — Tractor type
Conventional tractor (default) | Cab-over tractor
Type of tractor. For the dimensions, see:
Color — Vehicle color
Specify the vehicle color.
Name of the vehicle. By default, when you use the block in your model, the block sets the Name parameter to SimulinkVehicleX. The value of X depends on the number of Simulation 3D Vehicle with Ground Following and Simulation 3D Vehicle blocks that you have in your model.
Initial vehicle and wheel translation, in m. The array dimensions are 7-by-3, where:
Translation(...,1), Translation(...,2), and Translation(...,3) — Initial wheel translation relative to the vehicle, along the vehicle Z-down X-, Y-, and Z-axes, respectively.
Translation=\left[\begin{array}{ccc}{X}_{v}& {Y}_{v}& {Z}_{v}\\ {X}_{FL}& {Y}_{FL}& {Z}_{FL}\\ {X}_{FR}& {Y}_{FR}& {Z}_{FR}\\ {X}_{ML}& {Y}_{ML}& {Z}_{ML}\\ {X}_{MR}& {Y}_{MR}& {Z}_{MR}\\ {X}_{RL}& {Y}_{RL}& {Z}_{RL}\\ {X}_{RR}& {Y}_{RR}& {Z}_{RR}\end{array}\right]
The array dimensions are 7-by-3.
Rotation=\left[\begin{array}{ccc}Rol{l}_{v}& Pitc{h}_{v}& Ya{w}_{v}\\ Rol{l}_{FL}& Pitc{h}_{FL}& Ya{w}_{FL}\\ Rol{l}_{FR}& Pitc{h}_{FR}& Ya{w}_{FR}\\ Rol{l}_{ML}& Pitc{h}_{ML}& Ya{w}_{ML}\\ Rol{l}_{MR}& Pitc{h}_{MR}& Ya{w}_{MR}\\ Rol{l}_{RL}& Pitc{h}_{RL}& Ya{w}_{RL}\\ Rol{l}_{RR}& Pitc{h}_{RR}& Ya{w}_{RR}\end{array}\right]
Vehicle Body 3DOF Three Axles | Simulation 3D Trailer | Vehicle Body 6DOF Three Axles | Vehicle Body 3DOF | Vehicle Body 6DOF
|
Parametric Models - MATLAB & Simulink - MathWorks 日本
d{X}_{t}=\mathrm{μ}\left(t\right)dt+V\left(t\right)d{W}_{t}
d{X}_{t}=0.3d{W}_{t}.
d{X}_{t}=\mathrm{μ}\left(t\right){X}_{t}dt+D\left(t,{X}_{t}^{\mathrm{α}\left(t\right)}\right)V\left(t\right)d{W}_{t}
The cev object constrains A to an NVars-by-1 vector of zeros. D is a diagonal matrix whose elements are the corresponding element of the state vector X, raised to an exponent α(t).
d{X}_{t}=0.25{X}_{t}dt+0.3{X}_{t}^{\frac{1}{2}}d{W}_{t}.
d{X}_{t}=\mathrm{μ}\left(t\right){X}_{t}dt+D\left(t,{X}_{t}\right)V\left(t\right)d{W}_{t}
d{X}_{t}=0.25{X}_{t}dt+0.3{X}_{t}d{W}_{t}
d{X}_{t}=S\left(t\right)\left[L\left(t\right)-{X}_{t}\right]dt+D\left(t,{X}_{t}^{\mathrm{α}\left(t\right)}\right)V\left(t\right)d{W}_{t}
A\left(t\right)=S\left(t\right)L\left(t\right),B\left(t\right)=-S\left(t\right)
d{X}_{t}=0.2\left(0.1-{X}_{t}\right)dt+0.05{X}_{t}^{\frac{1}{2}}d{W}_{t}.
d{X}_{t}=S\left(t\right)\left[L\left(t\right)-{X}_{t}\right]dt+D\left(t,{X}_{t}^{\frac{1}{2}}\right)V\left(t\right)d{W}_{t}
d{X}_{t}=S\left(t\right)\left[L\left(t\right)-{X}_{t}\right]dt+V\left(t\right)d{W}_{t}
d{X}_{t}=0.2\left(0.1-{X}_{t}\right)dt+0.05d{W}_{t}.
d{X}_{1t}=B\left(t\right){X}_{1t}dt+\sqrt{{X}_{2t}}{X}_{1t}d{W}_{1t}
d{X}_{2t}=S\left(t\right)\left[L\left(t\right)-{X}_{2t}\right]dt+V\left(t\right)\sqrt{{X}_{2t}}d{W}_{2t}
\begin{array}{l}d{X}_{1t}=0.1{X}_{1t}dt+\sqrt{{X}_{2t}}{X}_{1t}d{W}_{1t}\\ d{X}_{2t}=0.2\left[0.1-{X}_{2t}\right]dt+0.05\sqrt{{X}_{2t}}d{W}_{2t}\end{array}
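As a rough illustration of how sample paths of models like the geometric Brownian motion example above can be generated, here is a minimal Euler–Maruyama sketch in plain Python (our own toy, not the MATLAB SDE simulation engine; the function name and parameters are assumptions):

```python
import math
import random

def euler_maruyama(drift, diffusion, x0, t_end, n_steps, seed=42):
    """Simulate one path of dX_t = drift(t, X_t) dt + diffusion(t, X_t) dW_t
    using the Euler-Maruyama discretization."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    sqrt_dt = math.sqrt(dt)
    t, x = 0.0, x0
    path = [x]
    for _ in range(n_steps):
        dw = rng.gauss(0.0, 1.0) * sqrt_dt  # Brownian increment over dt
        x = x + drift(t, x) * dt + diffusion(t, x) * dw
        t += dt
        path.append(x)
    return path

# Geometric Brownian motion example from above: dX_t = 0.25 X_t dt + 0.3 X_t dW_t
gbm_path = euler_maruyama(drift=lambda t, x: 0.25 * x,
                          diffusion=lambda t, x: 0.30 * x,
                          x0=1.0, t_end=1.0, n_steps=1000)
```

Swapping in the mean-reverting drift 0.2(0.1 − x) and diffusion 0.05√x reproduces the CIR-type example in the same way.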
|
SDE with Linear Drift (SDELD) model - MATLAB - MathWorks Switzerland
Create a sdeld Object
SDE with Linear Drift (SDELD) model
Creates and displays SDE objects whose drift rate is expressed in linear drift-rate form and that derive from the sdeddo class (SDE from drift and diffusion objects).
Use sdeld objects to simulate sample paths of NVars state variables expressed in linear drift-rate form. They provide a parametric alternative to the mean-reverting drift form (see sdemrd).
These state variables are driven by NBrowns Brownian motion sources of risk over NPeriods consecutive observation periods, approximating continuous-time stochastic processes with linear drift-rate functions.
The sdeld object allows you to simulate any vector-valued SDELD of the form:
d{X}_{t}=\left(A\left(t\right)+B\left(t\right){X}_{t}\right)dt+D\left(t,{X}_{t}^{\alpha \left(t\right)}\right)V\left(t\right)d{W}_{t}
A is an NVars-by-1 vector.
B is an NVars-by-NVars matrix.
SDELD = sdeld(A,B,Alpha,Sigma)
SDELD = sdeld(___,Name,Value)
SDELD = sdeld(A,B,Alpha,Sigma) creates a default SDELD object.
SDELD = sdeld(___,Name,Value) creates a SDELD object with additional options specified by one or more Name,Value pair arguments.
A — Access function for the input argument A, callable as a function of time and state
B — Access function for the input argument B, callable as a function of time and state
If you specify A as an array, it must be an NVars-by-1 column vector of intercepts.
Although the gbm constructor enforces no restrictions on the sign of Sigma volatilities, they are specified as positive values.
Although sdeld does not enforce restrictions on the signs of Alpha or Sigma, each parameter is specified as a positive value.
If StartState is a scalar, sdeld applies the same initial value to all state variables on all trials.
If StartState is a column vector, sdeld applies a unique initial value to each state variable on all trials.
If StartState is a matrix, sdeld applies a unique initial value to each state variable on each trial.
F\left(t,{X}_{t}\right)=A\left(t\right)+B\left(t\right){X}_{t}
G\left(t,{X}_{t}\right)=D\left(t,{X}_{t}^{\alpha \left(t\right)}\right)V\left(t\right)
The sdeld class derives from the sdeddo class. These objects allow you to simulate correlated paths of NVars state variables expressed in linear drift-rate form:
d{X}_{t}=\left(A\left(t\right)+B\left(t\right){X}_{t}\right)dt+D\left(t,{X}_{t}^{\alpha \left(t\right)}\right)V\left(t\right)d{W}_{t}
sdeld objects provide a parametric alternative to the mean-reverting drift form and also provide an alternative interface to the sdeddo parent class, because you can create an object without first having to create its drift and diffusion-rate components.
When you invoke these parameters with inputs, they behave like functions, giving the impression of dynamic behavior. The parameters accept the observation time t and a state vector Xt, and return an array of appropriate dimension. Even if you originally specified an input as an array, sdeld treats it as a static function of time and state, thereby guaranteeing that all parameters are accessible by the same interface.
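The interface described above, where array and function inputs are both evaluated as functions of (t, Xt), can be sketched as follows in Python (a hypothetical illustration of the idea only, not MathWorks code; all names are ours):

```python
def make_accessor(value):
    """Wrap a parameter so it is always evaluated as a function of time t and
    state x, whether it was given as a constant or as a callable."""
    if callable(value):
        return value
    return lambda t, x: value  # static value exposed through the same interface

def linear_drift(a, b):
    """Return the drift F(t, X) = A(t) + B(t) * X for a scalar state (NVars = 1)."""
    a_fn, b_fn = make_accessor(a), make_accessor(b)
    return lambda t, x: a_fn(t, x) + b_fn(t, x) * x

drift = linear_drift(0.1, 0.05)                      # static A and B
drift_t = linear_drift(lambda t, x: 0.1 * t, 0.05)   # time-varying A
```

Both drift objects are then called the same way, drift(t, x), regardless of how their parameters were originally specified, which mirrors the uniform-interface guarantee the paragraph describes.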
Implementing Multidimensional Equity Market Models, Implementation 3: Using SDELD, CEV, and GBM Objects
|
Effect of Squealer Geometry Arrangement on a Gas Turbine Blade Tip Heat Transfer | J. Heat Transfer | ASME Digital Collection
Gm Salam Azad,
Department of Mechanical Engineering, Turbine Heat Transfer Lab, Texas A&M University, College Station, TX 77843-3123
e-mail: salam.azad@swpc.siemens.com
Marcus Easterling Chair Professor
e-mail: jchan@mengr.tamu.edu
Ronald S. Bunker,
GE R&D Center, Schenectady, NY 12301
C. Pang Lee
GE Aircraft Engines, Cincinnati, OH 45215
Gm Salam Azad Research Assistant
Je-Chin Han Marcus Easterling Chair Professor
Contributed by the Heat Transfer Division for publication in the JOURNAL OF HEAT TRANSFER. Manuscript received by the Heat Transfer Division April 19, 2001; revision received January 29, 2002. Associate Editor: H. S. Lee.
J. Heat Transfer. Jun 2002, 124(3): 452-459
Azad, G. S., Han, J., Bunker, R. S., and Lee, C. P. (May 10, 2002). "Effect of Squealer Geometry Arrangement on a Gas Turbine Blade Tip Heat Transfer ." ASME. J. Heat Transfer. June 2002; 124(3): 452–459. https://doi.org/10.1115/1.1471523
This study investigates the effect of a squealer tip geometry arrangement on heat transfer coefficient and static pressure distributions on a gas turbine blade tip in a five-bladed stationary linear cascade. A transient liquid crystal technique is used to obtain detailed heat transfer coefficient distributions. The test blade is a linear model of a tip section of the GE E3 high-pressure turbine first-stage rotor blade. Six tip geometry cases are studied: (1) squealer on pressure side, (2) squealer on mid camber line, (3) squealer on suction side, (4) squealer on pressure and suction sides, (5) squealer on pressure side plus mid camber line, and (6) squealer on suction side plus mid camber line. The flow condition during the blowdown tests corresponds to an overall pressure ratio of 1.32 and an exit Reynolds number based on axial chord of 1.1 × 10^6. Results show that the squealer geometry arrangement changes the leakage flow and results in different heat transfer coefficients on the blade tip. A squealer on the suction side provides a greater benefit than one on the pressure side or the mid camber line, and a squealer on the mid camber line performs better than one on the pressure side.
Cooling, Experimental, Flow, Heat Transfer, Turbines, gas turbines, temperature distribution, boundary layers, heat transfer, confined flow, flow measurement
Blades, Flow (Dynamics), Gas turbines, Geometry, Heat transfer, Heat transfer coefficients, Pressure, Suction, Leakage flows, Chords (Trusses), Liquid crystals, Cascades (Fluid dynamics), Turbines, Transients (Dynamics)
Bunker, R. S., and Bailey, J. C., 2000, “Blade Tip Heat Transfer and Flow With Chordwise Sealing Strips,” International Symposium on Transport Phenomena and Dynamics of Rotating Machinery (ISROMAC), Honolulu, Hawaii, pp. 548–555.
Bunker, R. S., and Bailey, J. C., 2000, “An Experimental Study of Heat Transfer and Flow on a Gas Turbine Blade Tip with Various Tip Leakage Sealing Methods,” 4th ISHMT / ASME Heat and Mass Transfer Conference, India.
Heyes, F. J. G., Hodson, H. P., and Dailey, G. M., 1991, “The Effect of Blade Tip Geometry on the Tip Leakage Flow in Axial Turbine Cascades,” ASME Paper No. 91-GT-135.
Azad, G. S., Han, J. C., and Boyle, R. J., 2000, “Heat Transfer and Flow on the Squealer Tip of a Gas Turbine Blade,” ASME Paper No. 2000-GT-195.
Dunn, M. G., and Haldeman, C. W., 2000, “Time-Averaged Heat Flux for a Recessed Tip, Lip, and Platform of a Transonic Turbine Blade,” ASME Paper No. GT-0197.
Ameri, A. A., Steinthorsson, E., and Rigby, L. D., 1997, “Effect of Squealer Tip on Rotor Heat Transfer and Efficiency,” ASME Paper No. 97-GT-128.
Ameri, A. A., Steinthorsson, E., and Rigby, L. D., 1998, “Effects of Tip Clearance and Casing Recess on Heat Transfer and Stage Efficiency in Axial Turbines,” ASME Paper No. 98-GT-369.
Ameri, A. A., 2001, “Heat Transfer and Flow on the Blade Tip of a Gas Turbine Equipped with a Mean-Camberline Strip,” ASME Paper No. 2001-GT-0156.
Yang, T. T., and Diller, T. E., 1995, “Heat Transfer and Flow for a Grooved Turbine Blade Tip in a Transonic Cascade,” ASME Paper No. 95-WA/HT-29.
Bindon, J. P., and Morphus, G., 1988, “The Effect of Relative Motion, Blade Edge Radius and Gap Size on the Blade Tip Pressure Distribution in an Annular Turbine Cascade with Clearance,” ASME Paper No. 88-GT-256.
Flow and Heat Transfer in Turbine Tip Gaps
Yaras, M. I., and Sjolander, S. A., 1991, “Effects of Simulated Rotation on Tip Leakage in a Planar Cascade of Turbine Blades, Part I-Tip Gap Flow,” ASME Paper No. 91-GT-127.
Kaiser, I., and Bindon, J. P., 1997, “The Effect of Tip Clearance on the Development of Loss Behind a Rotor and a Subsequent Nozzle,” ASME Paper No. 97-GT-53.
Rotor Tip Leakage: Part I—Basic Methodology
Rotor Tip Leakage: Part II—Design Optimization Through Viscous Analysis and Experiment
Mayle, R. E., and Metzger D. E., 1982, “Heat Transfer at the Tip of an Unshrouded Turbine Blade” Proc. Seventh Int. Heat Transfer Conf., Hemisphere Pub., New York, pp. 87–92.
Bunker, R. S., Bailey, J. C., and Ameri, A. A., 1999, “Heat Transfer and Flow on the First Stage Blade Tip of a Power Generation Gas Turbine: Part 1: Experimental Results,” ASME Paper No. 99-GT-169.
Ameri, A. A., and Steinthorsson, E., 1995, “Prediction of Unshrouded Rotor Blade Tip Heat Transfer,” ASME Paper No. 95-GT-142.
Ameri, A. A., and Steinthorsson, E., 1996, “Analysis of Gas Turbine Rotor Blade Tip and Shroud Heat Transfer,” ASME Paper No. 96-GT-189.
Ameri, A. A., and Bunker, R. S., 1999, “Heat Transfer and Flow on the First Stage Blade Tip of a Power Generation Gas Turbine: Part 2: Simulation Results,” ASME Paper No. 99-GT-283.
Azad, G. S., Han, J. C., Teng, S., and Boyle, R., 2000, “Heat Transfer and Pressure Distribution on a Gas Turbine Blade Tip,” ASME Paper No. 2000-GT-194.
Heat Transfer Coefficients on the Squealer Tip and Near Tip Regions of a Gas Turbine Blade With Single or Double Squealer
|
A certain Dirichlet series attached to Siegel modular forms of degree two.
W. Kohnen, N.-P. Skoruppa (1989)
A Dirichlet series for modular forms of degree n
Aloys Krieg (1991)
A generalization of Gauss sums and its applications to Siegel modular forms and L-functions associated with the vector space of quadratic forms.
Koichi Takase (1990)
A note on representation of positive definite binary quadratic forms by positive definite quadratic forms in 6 variables
Yoshiyuki Kitaoka (1990)
A Note on the Siegel-Eisenstein series of weight 2 on Sp2 (Z).
Shoyu Nagaoka (1992)
An arithmetic of modular function fields of degree two
Ryuji Sasaki (1999)
Andrianov's L-functions associated to Siegel wave forms of degree two.
Akira Hori (1995)
Arithmetic of half integral weight theta-series
Myung-Hwan Kim (1993)
Automorphic forms and functions with respect to the Jacobi group.
Helmut Klingen (1996)
Bemerkung zu einem Satz von J. Igusa und W. Hammond.
Eberhard Freitag, Volker Schneider (1967)
Rainer Schulze-Pillot (1995)
Certain L-series of Rankin-Selberg type associated to Siegel modular forms of degree g.
W. Kohnen (1990)
Characteristic Twists of a Dirichlet series for Siegel Cusp Forms.
W. Kohnen, J. Sengupta, A. Krieg (1995)
Class numbers, Jacobi forms and Siegel-Eisenstein series of weight 2 on Sp2.
Winfried Kohnen (1993)
Cohomology of the boundary of Siegel modular varieties of degree two, with applications
J. William Hoffman, Steven H. Weintraub (2003)
Let 𝓐₂(n) = Γ₂(n)∖𝔖₂ be the quotient of Siegel's space of degree 2 by the principal congruence subgroup of level n in Sp(4,ℤ). This is the moduli space of principally polarized abelian surfaces with a level n structure. Let 𝓐₂(n)* denote the Igusa compactification of this space, and ∂𝓐₂(n)* = 𝓐₂(n)* - 𝓐₂(n) its "boundary". This is a divisor with normal crossings. The main result of this paper is the determination of H(∂𝓐₂(n)*) as a module over the finite group Γ₂(1)/Γ₂(n). As an application...
Karsten Buecker (1996)
Vector-valued Siegel modular forms may be found in certain cohomology groups with coefficients lying in an irreducible representation of the symplectic group. Using functoriality in the coefficients, we show that the ordinary components of the cohomology are independent of the weight parameter. The meaning of ordinary depends on a choice of parabolic subgroup of GSp(4), giving a particular direction in the change of weight. Our results complement those of Taylor and Tilouine-Urban for the two other possible...
Dohoon Choi, YoungJu Choie, Olav K. Richter (2011)
We employ recent results on Jacobi forms to investigate congruences and filtrations of Siegel modular forms of degree 2. In particular, we determine when an analog of Atkin's U(p)-operator applied to a Siegel modular form of degree 2 is nonzero modulo a prime p. Furthermore, we discuss explicit examples to illustrate our results.
|
68Q01 General
68Q12 Quantum algorithms and complexity
68Q80 Cellular automata
Daniel Kirsten (2008)
Lucian Ilie, Grzegorz Rozenberg, Arto Salomaa (2000)
For a non-negative integer k, we say that a language L is k-poly-slender if the number of words of length n in L is of order 𝒪(n^k). We give a precise characterization of the k-poly-slender context-free languages. The well-known characterization of the k-poly-slender regular languages is an immediate consequence of ours.
A classification of rational languages by semilattice-ordered monoids
Libor Polák (2004)
We prove here an Eilenberg type theorem: the so-called conjunctive varieties of rational languages correspond to the pseudovarieties of finite semilattice-ordered monoids. Taking complements of members of a conjunctive variety of languages we get a so-called disjunctive variety. We present here a non-trivial example of such a variety together with an equational characterization of the corresponding pseudovariety.
A coalgebraic semantics of subtyping
Erik Poll (2001)
Coalgebras have been proposed as formal basis for the semantics of objects in the sense of object-oriented programming. This paper shows that this semantics provides a smooth interpretation for subtyping, a central notion in object-oriented programming. We show that different characterisations of behavioural subtyping found in the literature can conveniently be expressed in coalgebraic terms. We also investigate the subtle difference between behavioural subtyping and refinement.
A fully equational proof of Parikh’s theorem
Luca Aceto, Zoltán Ésik, Anna Ingólfsdóttir (2002)
We show that the validity of Parikh's theorem for context-free languages depends only on a few equational properties of least pre-fixed points. Moreover, we exhibit an infinite basis of μ-term equations of continuous commutative idempotent semirings.
Jérémie Cabessa, Jacques Duparc (2009)
The algebraic counterpart of the Wagner hierarchy consists of a well-founded and decidable classification of finite pointed ω-semigroups of width 2 and height ωω. This paper completes the description of this algebraic hierarchy. We first give a purely algebraic decidability procedure of this partial ordering by introducing a graph representation of finite pointed ω-semigroups allowing to compute their precise Wagner degrees. The Wagner degree of any ω-rational language can therefore be computed...
The algebraic study of formal languages shows that ω-rational sets correspond precisely to the ω-languages recognizable by finite ω-semigroups. Within this framework, we provide a construction of the algebraic counterpart of the Wagner hierarchy. We adopt a hierarchical game approach, by translating the Wadge theory from the ω-rational language to the ω-semigroup context. More precisely, we first show that the Wagner degree is indeed a syntactic invariant. We then define a reduction relation on...
A generalized minimal realization theory of machines in a category.
Antonio Bahamonde (1983)
This paper presents a generalized minimal realization theory of machines in a category which contains the Kleiski case. The minimal realization is the cheapest realization for a given cost functor. The final reachable realization of Arbib and Manes ([5]) and the minimal state approach for nondeterministic machines are included here.
Pierre-Cyrille Héam (2000)
A reversible automaton is a finite automaton in which each letter induces a partial one-to-one map from the set of states into itself. We solve the following problem proposed by Pin. Given an alphabet A, does there exist a sequence of languages Kn on A which can be accepted by a reversible automaton, and such that the number of states of the minimal automaton of Kn is in O(n), while the minimal number of states of a reversible automaton accepting Kn is in O(ρn) for some ρ > 1? We give...
Alexandru Mateescu, Arto Salomaa, Kai Salomaa, Sheng Yu (2001)
In this paper we introduce a sharpening of the Parikh mapping and investigate its basic properties. The new mapping is based on square matrices of a certain form. The classical Parikh vector appears in such a matrix as the second diagonal. However, the matrix product gives more information about a word than the Parikh vector. We characterize the matrix products and establish also an interesting interconnection between mirror images of words and inverses of matrices.
A topology for automata. II.
Srivastava, Arun K., Shukla, Wagish (1986)
Algebraic and graph-theoretic properties of infinite n-posets
Zoltán Ésik, Zoltán L. Németh (2005)
A Σ-labeled n-poset is an (at most) countable set, labeled in the set Σ, equipped with n partial orders. The collection of all Σ-labeled n-posets is naturally equipped with n binary product operations and nω-ary product operations. Moreover, the ω-ary product operations give rise to nω-power operations. We show that those Σ-labeled n-posets that can be generated from the singletons by the binary and ω-ary product operations form the free algebra on Σ in a variety axiomatizable by an infinite collection...
|
Ask Answer - Some Basic Concepts of Chemistry - Expert Answered Questions for School Students
H2O2 is sold as a solution of approximately 5 grams of H2O2 per 100 mL of solution. The molarity of the solution is approximately?
10 mL N2 and 25 mL H2 at the same P and T are allowed to react to give NH3 quantitatively. Predict (i) the volume of NH3 formed, (ii) the limiting reagent.
EXAMPLE 18 Calculate the equivalent mass of (a) HNO3 (b) H2SO4 (c) H3PO4 (d) NaOH (e)
EXAMPLE 19 Calculate the equivalent mass of H3PO4 if H3PO4 + 2NaOH → Na2HPO4 + 2H2O
Q.4. The vapour density of a gas is 11.2. Calculate the volume occupied by 11.2 g of the gas at NTP.
[Hint: Mol. wt. = 2 × VD]
(11.2 litres)
Please solve Q7 and Q8.
Q7. Find the total number of nucleons present in 12 g of 12C atoms. (12 × 6.022 × 10^23)
Q8. Find (i) the total number of neutrons, and (ii) the total mass of neutrons in 7 mg of 14C. (Assume that the mass of a neutron = mass of a hydrogen atom)
[ Hint : 1 14C atom contains 8 neutrons.]
(24.088 × 10^20; 0.004 g)
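The hinted answers to Q7 and Q8 can be checked with a short calculation (an illustrative sketch using Avogadro's number 6.022 × 10^23; the variable names are ours):

```python
AVOGADRO = 6.022e23

# Q7: nucleons in 12 g of carbon-12. 12 g is exactly one mole of 12C,
# and each 12C atom contains 12 nucleons.
moles_c12 = 12.0 / 12.0
nucleons = 12 * moles_c12 * AVOGADRO            # 12 x 6.022 x 10^23

# Q8: neutrons in 7 mg of carbon-14. Each 14C atom contains 8 neutrons.
atoms_c14 = (7e-3 / 14.0) * AVOGADRO            # about 3.011 x 10^20 atoms
neutrons = 8 * atoms_c14                        # about 24.088 x 10^20 neutrons
# Taking a neutron's mass as that of a hydrogen atom (about 1/AVOGADRO gram):
neutron_mass_g = neutrons * (1.0 / AVOGADRO)    # about 0.004 g
```

The computed values match the answers quoted in the problem: 12 × 6.022 × 10^23 nucleons for Q7, and 24.088 × 10^20 neutrons with a total mass of about 0.004 g for Q8.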
If 0.22 g of a substance, when vaporized, displaces 45 cm³ of air measured over water at 293 K and 755 mm pressure, and the vapour pressure of H2O is 17.4 mm, what will the molecular weight of the substance be?
Please help me in this question -
Q.22. By the reaction of carbon and oxygen, a mixture of CO and CO2 is obtained. What is the composition by mass of the mixture obtained when 20 g of O2 reacts with 12 g of carbon?
|
F-test of equality of variances - Wikipedia
In statistics, an F-test of equality of variances is a test for the null hypothesis that two normal populations have the same variance. Notionally, any F-test can be regarded as a comparison of two variances, but the specific case being discussed in this article is that of two populations, where the test statistic used is the ratio of two sample variances.[1] This particular situation is of importance in mathematical statistics since it provides a basic exemplar case in which the F-distribution can be derived.[2] For application in applied statistics, there is concern[citation needed] that the test is so sensitive to the assumption of normality that it would be inadvisable to use it as a routine test for the equality of variances. In other words, this is a case where "approximate normality" (which in similar contexts would often be justified using the central limit theorem), is not good enough to make the test procedure approximately valid to an acceptable degree.
Let X1, ..., Xn and Y1, ..., Ym be independent and identically distributed samples from two populations, each of which has a normal distribution. The expected values of the two populations can be different, and the hypothesis to be tested is that the variances are equal. Let
{\displaystyle {\overline {X}}={\frac {1}{n}}\sum _{i=1}^{n}X_{i}{\text{ and }}{\overline {Y}}={\frac {1}{m}}\sum _{i=1}^{m}Y_{i}}
be the sample means. Let
{\displaystyle S_{X}^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}\left(X_{i}-{\overline {X}}\right)^{2}{\text{ and }}S_{Y}^{2}={\frac {1}{m-1}}\sum _{i=1}^{m}\left(Y_{i}-{\overline {Y}}\right)^{2}}
be the sample variances. Then the test statistic
{\displaystyle F={\frac {S_{X}^{2}}{S_{Y}^{2}}}}
has an F-distribution with n − 1 and m − 1 degrees of freedom if the null hypothesis of equality of variances is true. Otherwise it follows an F-distribution scaled by the ratio of true variances. The null hypothesis is rejected if F is either too large or too small based on the desired alpha level (i.e., statistical significance).
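Concretely, the statistic is just a ratio of sample variances. The toy data below is invented for illustration; the p-value, which requires the F-distribution CDF (e.g. from SciPy), is omitted to keep the sketch dependency-free:

```python
from statistics import variance  # sample variance, with the 1/(n-1) factor

X = [1, 2, 3, 4]   # illustrative sample from the first normal population
Y = [2, 4, 6, 8]   # illustrative sample from the second normal population

F = variance(X) / variance(Y)     # test statistic, here (5/3)/(20/3) = 0.25
df = (len(X) - 1, len(Y) - 1)     # (n - 1, m - 1) degrees of freedom
# Under H0 (equal variances), F ~ F(df); reject when F is too large or too small.
```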
This F-test is known to be extremely sensitive to non-normality,[3][4] so Levene's test, Bartlett's test, or the Brown–Forsythe test are better tests for testing the equality of two variances. (However, all of these tests create experiment-wise type I error inflations when conducted as a test of the assumption of homoscedasticity prior to a test of effects.[5]) F-tests for the equality of variances can be used in practice, with care, particularly where a quick check is required, and subject to associated diagnostic checking: practical text-books[6] suggest both graphical and formal checks of the assumption.
F-tests are used for other statistical tests of hypotheses, such as testing for differences in means in three or more groups, or in factorial layouts. These F-tests are generally not robust when there are violations of the assumption that each population follows the normal distribution, particularly for small alpha levels and unbalanced layouts.[7] However, for large alpha levels (e.g., at least 0.05) and balanced layouts, the F-test is relatively robust, although (if the normality assumption does not hold) it suffers from a loss in comparative statistical power as compared with non-parametric counterparts.
The immediate generalization of the problem outlined above is to situations where there are more than two groups or populations, and the hypothesis is that all of the variances are equal. This is the problem treated by Hartley's test and Bartlett's test.
^ Snedecor, George W. and Cochran, William G. (1989), Statistical Methods, Eighth Edition, Iowa State University Press.
^ Johnson, N.L., Kotz, S., Balakrishnan, N. (1995) Continuous Univariate Distributions, Volume 2, Wiley. ISBN 0-471-58494-0 (Section 27.1)
^ Box, G.E.P. (1953). "Non-Normality and Tests on Variances". Biometrika. 40 (3/4): 318–335. doi:10.1093/biomet/40.3-4.318. JSTOR 2333350.
^ Markowski, Carol A; Markowski, Edward P. (1990). "Conditions for the Effectiveness of a Preliminary Test of Variance". The American Statistician. 44 (4): 322–326. doi:10.2307/2684360. JSTOR 2684360.
^ Sawilowsky, S. (2002). "Fermat, Schubert, Einstein, and Behrens–Fisher: The Probable Difference Between Two Means When σ1² ≠ σ2²", Journal of Modern Applied Statistical Methods, 1(2), 461–472.
^ Rees, D.G. (2001) Essential Statistics (4th Edition), Chapman & Hall/CRC, ISBN 1-58488-007-4. Section 10.15
^ Blair, R. C. (1981). "A reaction to 'Consequences of failure to meet assumptions underlying the fixed effects analysis of variance and covariance'". Review of Educational Research. 51: 499–507. doi:10.3102/00346543051004499.
Retrieved from "https://en.wikipedia.org/w/index.php?title=F-test_of_equality_of_variances&oldid=993827742"
|
Identity_component Knowpia
In mathematics, specifically group theory, the identity component of a group G refers to several closely related notions of the largest connected subgroup of G containing the identity element.
In point set topology, the identity component of a topological group G is the connected component G0 of G that contains the identity element of the group. The identity path component of a topological group G is the path component of G that contains the identity element of the group.
In algebraic geometry, the identity component of an algebraic group G over a field k is the identity component of the underlying topological space. The identity component of a group scheme G over a base scheme S is, roughly speaking, the group scheme G0 whose fiber over the point s of S is the connected component (Gs)0 of the fiber Gs, an algebraic group.[1]
The identity component G0 of a topological or algebraic group G is a closed normal subgroup of G. It is closed since components are always closed. It is a subgroup since multiplication and inversion in a topological or algebraic group are continuous maps by definition. Moreover, for any continuous automorphism a of G we have a(G0) = G0. Thus, G0 is a characteristic subgroup of G, so it is normal.
The identity path component of a topological group may in general be smaller than the identity component (since path connectedness is a stronger condition than connectedness), but these agree if G is locally path-connected.
Component groupEdit
The quotient group G/G0 is called the group of components or component group of G. Its elements are just the connected components of G. The component group G/G0 is a discrete group if and only if G0 is open. If G is an algebraic group of finite type, such as an affine algebraic group, then G/G0 is actually a finite group.
One may similarly define the path component group as the group of path components (quotient of G by the identity path component), and in general the component group is a quotient of the path component group, but if G is locally path connected these groups agree. The path component group can also be characterized as the zeroth homotopy group,
{\displaystyle \pi _{0}(G,e).}
The identity component of the additive group (Zp,+) of p-adic integers is the singleton set {0}, since Zp is totally disconnected.
The Weyl group of a reductive algebraic group G is the components group of the normalizer group of a maximal torus of G.
Consider the group scheme μ2 = Spec(Z[x]/(x2 - 1)) of second roots of unity defined over the base scheme Spec(Z). Topologically, μ2 consists of two copies of the curve Spec(Z) glued together at the point (that is, prime ideal) 2. Therefore, μ2 is connected as a topological space, hence as a scheme. However, μ2 does not equal its identity component because the fiber over every point of Spec(Z) except 2 consists of two discrete points.
An algebraic group G over a topological field K admits two natural topologies, the Zariski topology and the topology inherited from K. The identity component of G often changes depending on the topology. For instance, the general linear group GLn(R) is connected as an algebraic group but has two path components as a Lie group, the matrices of positive determinant and the matrices of negative determinant. Any connected algebraic group over a non-Archimedean local field K is totally disconnected in the K-topology and thus has trivial identity component in that topology.
^ SGA 3, v. 1, Exposé VI, Définition 3.1
Demazure, Michel; Alexandre Grothendieck, eds. (1970). Séminaire de Géométrie Algébrique du Bois Marie - 1962-64 - Schémas en groupes - (SGA 3) - vol. 1 (Lecture notes in mathematics 151). Lecture Notes in Mathematics (in French). Vol. 151. Berlin; New York: Springer-Verlag. pp. xv+564. doi:10.1007/BFb0058993. ISBN 978-3-540-05179-4. MR 0274458.
Demazure, M.; Grothendieck, A., Gille, P.; Polo, P. (eds.), Schémas en groupes (SGA 3), I: Propriétés Générales des Schémas en Groupes Revised and annotated edition of the 1970 original.
|
Maritime Radar Sea Clutter Modeling - MATLAB & Simulink - MathWorks 한국
Overview of Sea States
Grazing Angle Effects
Maritime Surveillance Radar Example
This example will introduce a sea clutter simulation for a maritime surveillance radar system. This example first discusses the physical properties associated with sea states. Next, it discusses the reflectivity of sea surfaces, investigating the effect of sea state, frequency, polarization, and grazing angle. Lastly, the example calculates the clutter-to-noise ratio (CNR) for a maritime surveillance radar system, considering the propagation path and weather effects.
In describing sea clutter, it is important first to establish the physical properties of the sea surface. In modeling sea clutter for radar, there are three important parameters:
{\sigma}_{h}
is the standard deviation of the wave height. The wave height is defined as the vertical distance between the wave crest and the adjacent wave trough.
{\beta}_{0}
is the slope of the wave.
{v}_{w}
is the wind speed.
Due to the irregularity of waves, the physical properties of the sea are often described in terms of sea states. The Douglas sea state number is a widely used scale that represents a wide range of physical sea properties, such as wave heights and associated wind velocities. At the lower end of the scale, a sea state of 0 represents a calm, glassy sea. The scale then proceeds from a slightly rippled sea at sea state 1 to rough seas with high wave heights at sea state 5. Wave heights at sea state 8 can reach 9 meters or more.
Using the searoughness function, plot the sea properties for sea states 1 through 5. Note the slow increase in the wave slope
{\beta}_{0}
with sea state. This is a result of the wavelength and wave height increasing with wind speed, albeit with different factors.
% Analyze for sea states 1 through 5
ss = 1:5; % Sea states
numSeaStates = numel(ss);
hgtsd = zeros(1,numSeaStates);
beta0 = zeros(1,numSeaStates);
vw= zeros(1,numSeaStates);
% Obtain sea state properties
for is = 1:numSeaStates
    [hgtsd(is),beta0(is),vw(is)] = searoughness(ss(is));
end
helperPlotSeaRoughness(ss,hgtsd,beta0,vw);
The physical properties introduced here are an important part of developing the geometry and environment of the maritime scenario. Furthermore, as you will see, radar returns from a sea surface exhibit a strong dependence on sea state.
The sea surface is composed of water with an average salinity of about 35 parts per thousand. The reflection coefficient of sea water is close to −1 at microwave frequencies and low grazing angles.
For smooth seas, the wave height is small, and the sea appears as an infinite, flat conductive plate with little-to-no backscatter. As the sea state number increases and the wave height grows, the surface roughness increases. This results in increased, directionally dependent scattering. Additionally, the reflectivity is strongly proportional to wave height, and it exhibits a dependence on wind speed that increases with frequency.
Investigate sea surface reflectivity versus frequency for various sea states using the seareflectivity function. Set the grazing angle equal to 0.5 degrees and consider frequencies over the range of 500 MHz to 35 GHz.
grazAng = 0.5; % Grazing angle (deg)
freq = linspace(0.5e9,35e9,100); % Frequency (Hz)
pol = 'H'; % Horizontal polarization
% Initialize reflectivity output
numFreq = numel(freq);
nrcsH = zeros(numFreq,numSeaStates);
% Calculate reflectivity
for is = 1:numSeaStates
    nrcsH(:,is) = seareflectivity(ss(is),grazAng,freq,'Polarization',pol);
end
% Plot reflectivity
helperPlotSeaReflectivity(ss,grazAng,freq,nrcsH,pol);
The figure shows that the sea surface reflectivity is proportional to frequency. Additionally, as the sea state number increases, which corresponds to increasing roughness, the reflectivity also increases.
Next, consider polarization effects on the sea surface reflectivity. Maintain the same grazing angle and frequency span from the previous section.
pol = 'V'; % Vertical polarization
nrcsV = zeros(numFreq,numSeaStates);
for is = 1:numSeaStates
    nrcsV(:,is) = seareflectivity(ss(is),grazAng,freq,'Polarization',pol);
end
hAxes = helperPlotSeaReflectivity(ss,grazAng,freq,nrcsH,'H');
helperPlotSeaReflectivity(ss,grazAng,freq,nrcsV,'V',hAxes);
The figure shows that there is a noticeable effect on the reflectivity based on polarization. Notice that the difference between horizontal and vertical polarizations is greater at lower frequencies than at higher frequencies. As the sea state number increases, the difference between horizontal and vertical polarizations decreases. Thus, there is a decreasing dependence on polarization with increasing frequency.
Consider the effect of grazing angle. Compute the sea reflectivity over the range of 0.1 to 60 degrees at an L-band frequency of 1.5 GHz.
grazAng = linspace(0.1,60,100); % Grazing angle (deg)
freq = 1.5e9; % L-band frequency (Hz)
numGrazAng = numel(grazAng);
nrcsH = zeros(numGrazAng,numSeaStates);
nrcsV = zeros(numGrazAng,numSeaStates);
for is = 1:numSeaStates
    nrcsH(:,is) = seareflectivity(ss(is),grazAng,freq,'Polarization','H');
    nrcsV(:,is) = seareflectivity(ss(is),grazAng,freq,'Polarization','V');
end
ylim(hAxes,[-60 -10]);
From the figure, note that there is much more variation in the sea reflectivity at lower grazing angles, and differences exist between vertical and horizontal polarization. The figure shows that the dependence on grazing angle decreases as the grazing angle increases. Furthermore, the reflectivity for horizontally polarized signals is less than vertically polarized signals for the same sea state over the range of grazing angles considered.
Calculating Clutter-to-Noise Ratio
Consider a horizontally polarized maritime surveillance radar system operating at 6 GHz (C-band). Define the radar system.
freq = 6e9; % C-band frequency (Hz)
anht = 20; % Height (m)
ppow = 200e3; % Peak power (W)
tau = 200e-6; % Pulse width (sec)
prf = 300; % PRF (Hz)
azbw = 10; % Half-power azimuth beamwidth (deg)
elbw = 30; % Half-power elevation beamwidth (deg)
Gt = 22; % Transmit gain (dB)
Gr = 10; % Receive gain (dB)
nf = 3; % Noise figure (dB)
Ts = systemp(nf); % System temperature (K)
Next, simulate an operational environment where the sea state is 2. Calculate and plot the sea surface reflectivity for the grazing angles of the defined geometry.
% Sea parameters
ss = 2; % Sea state
% Calculate surface state
[hgtsd,beta0] = searoughness(ss);
% Setup geometry. The unambiguous-range, horizon-range, and grazing-angle
% lines restore calculations missing from this excerpt and may differ in
% detail from the original example.
anht = anht + 2*hgtsd;          % Average height above clutter (m)
surfht = 3*hgtsd;               % Surface height (m)
Rua = time2range(1/prf);        % Maximum unambiguous range (m)
Rhoriz = horizonrange(anht);    % Horizon range (m)
Rmax = min(Rua,Rhoriz);         % Maximum simulation range (m)
Rm = linspace(100,Rmax,1000);   % Range (m)
Rkm = Rm*1e-3;                  % Range (km)
grazAng = grazingang(anht,Rm);  % Grazing angle (deg)
% Calculate sea clutter reflectivity. Temporarily permit values outside of
% the NRL sea reflectivity model grazing angle bounds of 0.1 - 60 degrees.
% Specifically, this is to permit analysis of grazing angles less than 0.1
% degrees that are close to the horizon.
warning('off','radar:radar:outsideValidityRegion'); % Permit values outside model
nrcs = seareflectivity(ss,grazAng,freq);
warning('on','radar:radar:outsideValidityRegion'); % Turn warnings back on
helperPlotSeaReflectivity(ss,grazAng,freq,nrcs,'H');
Next, calculate the radar cross section (RCS) of the clutter using the clutterSurfaceRCS function. Note the drop in the clutter RCS as the radar horizon range is reached.
rcs = clutterSurfaceRCS(nrcs,Rm,azbw,elbw,grazAng,tau); % Clutter RCS (m^2)
rcsdB = pow2db(rcs);                                    % Convert to dBsm
hAxes = helperPlot(Rkm,rcsdB,'RCS','Clutter RCS (dBsm)','Clutter Radar Cross Section (RCS)');
helperAddHorizLine(hAxes,Rhoriz);
Calculate the clutter-to-noise ratio (CNR) using the radareqsnr function. Again, note the drop in CNR as the simulation range approaches the radar horizon. Calculate the range at which the clutter falls below the noise.
% Convert frequency to wavelength
lambda = freq2wavelen(freq); % Wavelength (m)
% Calculate and plot the clutter-to-noise ratio
cnr = radareqsnr(lambda,Rm(:),ppow,tau,...
'gain',[Gt Gr],'rcs',rcs,'Ts',Ts); % dB
hAxes = helperPlot(Rkm,cnr,'CNR','CNR (dB)','Clutter-to-Noise Ratio (CNR)');
ylim(hAxes,[-80 100]);
helperAddBelowClutterPatch(hAxes);
% Range when clutter falls below noise
helperFindClutterBelowNoise(Rkm,cnr);
Range at which clutter falls below noise (km) = 18.04
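The free-space trend in this calculation can be sanity-checked outside MATLAB. The Python sketch below uses a placeholder clutter RCS of 1 m² and an assumed system temperature of about 600 K (the `systemp(3)` value is not reproduced here); it applies the single-pulse radar range equation, a simplified, free-space version of what radareqsnr computes before propagation and loss terms are added:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant (J/K)

def cnr_db(ppow, tau, gt_db, gr_db, lam, rcs, Ts, R):
    """Single-pulse clutter-to-noise ratio (dB) from the radar range equation."""
    num = ppow * tau * 10 ** (gt_db / 10) * 10 ** (gr_db / 10) * lam ** 2 * rcs
    den = (4 * math.pi) ** 3 * R ** 4 * k_B * Ts
    return 10 * math.log10(num / den)

lam = 3e8 / 6e9  # wavelength at 6 GHz (m)
cnr_10km = cnr_db(200e3, 200e-6, 22, 10, lam, 1.0, 600, 10e3)
cnr_20km = cnr_db(200e3, 200e-6, 22, 10, lam, 1.0, 600, 20e3)
# For a fixed RCS, doubling the range costs 40*log10(2), about 12 dB.
```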
Considering the Propagation Path
When the path between the radar and clutter deviates from free space conditions, include the clutter propagation factor and the atmospheric losses on the path. You can calculate the clutter propagation factor using the radarpropfactor function.
% Calculate radar propagation factor for clutter
Fc = radarpropfactor(Rm,freq,anht,surfht,...
    'SurfaceHeightStandardDeviation',hgtsd,...
    'SurfaceSlope',beta0);
helperPlot(Rkm,Fc,'Propagation Factor', ...
    'Propagation Factor (dB)','One-Way Clutter Propagation Factor');
Within the above plot, two propagation regions are visible:
Interference region: This is the region where reflections interfere with the direct ray. This is exhibited over the ranges where there is lobing.
Intermediate region: This is the region between the interference and diffraction region, where the diffraction region is defined as a shadow region beyond the horizon. The intermediate region, which in this example occurs at the kink in the curve at about 1.5 km, is generally estimated by an interpolation between the interference and diffraction regions.
Typically, the clutter propagation factor and the sea reflectivity are combined as the product
{\sigma}_{C}{F}_{C}^{4}
, because measurements of surface reflectivity are generally measurements of the product rather than just the reflectivity
{\sigma}_{C}
. Calculate this product and plot the results.
% Combine clutter reflectivity and clutter propagation factor
FcLinear = db2mag(Fc); % Convert to linear units
combinedFactor = nrcs.*FcLinear.^2;
combinedFactordB = pow2db(combinedFactor);
helperPlot(Rkm,combinedFactordB,'\sigma_CF_C', ...
'\sigma_CF_C (dB)', ...
'One-Way Sea Clutter Propagation Factor and Reflectivity');
Next, calculate the atmospheric loss on the path using the slant-path tropopl function. Use the default standard atmospheric model for the calculation.
% Calculate one-way loss associated with atmosphere. The elevation-angle
% line restores a calculation missing from this excerpt.
elAng = height2el(surfht,anht,Rm);    % Elevation angle (deg)
Latmos = tropopl(Rm,freq,anht,elAng); % One-way atmospheric loss (dB)
helperPlot(Rkm,Latmos,'Atmospheric Loss','Loss (dB)','One-Way Atmospheric Loss');
Recalculate the CNR. Include the propagation factor and atmospheric loss in the calculation. Note the change in the shape of the CNR curve. The point at which the clutter falls below the noise is much closer in range when you include these factors.
% Re-calculate CNR including radar propagation factor and atmospheric loss
cnr = radareqsnr(lambda,Rm(:),ppow,tau,...
'gain',[Gt Gr],'rcs',rcs,'Ts',Ts, ...
'PropagationFactor',Fc,...
'AtmosphericLoss',Latmos); % dB
helperAddPlot(Rkm,cnr,'CNR + Propagation Factor + Atmospheric Loss',hAxes);
Understanding Weather Effects
Just as the atmosphere affects the detection of a target, weather also affects the detection of clutter. Consider the effect of rain over the simulated ranges. First calculate the rain attenuation.
% Calculate one-way loss associated with rain
rr = 50; % Rain rate (mm/h)
polAng = 0; % Polarization tilt angle (0 degrees for horizontal)
numEl = numel(elAng);
Lrain = zeros(numEl,1);
for ie = 1:numEl
    Lrain(ie,:) = cranerainpl(Rm(ie),freq,rr,elAng(ie),polAng);
end
helperPlot(Rkm,Lrain,'Rain Loss','Loss (dB)','One-Way Rain Loss');
Recalculate the CNR. Include the propagation path and the rain loss. Note that there is only a slight decrease in the CNR due to the presence of the rain.
% Re-calculate CNR including radar propagation factor, atmospheric loss,
% and rain loss
cnr = radareqsnr(lambda,Rm(:),ppow,tau,...
    'gain',[Gt Gr],'rcs',rcs,'Ts',Ts, ...
    'PropagationFactor',Fc,...
    'AtmosphericLoss',Latmos + Lrain); % dB
helperAddPlot(Rkm,cnr,'CNR + Propagation Factor + Atmospheric Loss + Rain',hAxes);
Range at which clutter falls below noise (km) = 9.61
This example introduces concepts regarding the simulation of sea surfaces. The sea reflectivity exhibits the following properties:
A strong dependence on sea state
A proportional dependence on frequency
A dependence on polarization that decreases with increasing frequency
A strong dependence on grazing angle at low grazing angles
This example also discusses how to use the sea state physical properties and reflectivity for the calculation of the clutter-to-noise ratio for a maritime surveillance radar system. Additionally, the example explains ways to improve simulation of the propagation path.
Blake, L. V. Machine Plotting of Radar Vertical-Plane Coverage Diagrams. NRL Report, 7098, Naval Research Laboratory, 1970.
Gregers-Hansen, V., and R. Mittal. An Improved Empirical Model for Radar Sea Clutter Reflectivity. NRL/MR, 5310-12-9346, Naval Research Laboratory, 27 Apr. 2012.
function helperPlotSeaRoughness(ss,hgtsd,beta0,vw)
% Creates 3x1 plot of sea roughness outputs
% Plot standard deviation of sea wave height
plot(ss,hgtsd,'-o','LineWidth',1.5)
ylabel([sprintf('Wave\nHeight ') '\sigma_h (m)'])
title('Sea Wave Roughness')
% Plot sea wave slope
plot(ss,beta0,'-o','LineWidth',1.5)
ylabel([sprintf('Wave\nSlope ') '\beta_0 (deg)'])
% Plot wind velocity
plot(ss,vw,'-o','LineWidth',1.5)
xlabel('Sea State')
ylabel([sprintf('Wind\nVelocity ') 'v_w (m/s)'])
function hAxes = helperPlotSeaReflectivity(ss,grazAng,freq,nrcs,pol,hAxes)
% Plot sea reflectivities
% Create figure and new axes if axes are not passed in
newFigure = false;
newFigure = true;
% Get polarization string
switch lower(pol)
lineStyle = '-';
lineStyle = '--';
if numel(grazAng) == 1
hLine = semilogx(hAxes,freq(:).*1e-9,pow2db(nrcs),lineStyle,'LineWidth',1.5);
hLine = plot(hAxes,grazAng(:),pow2db(nrcs),lineStyle,'LineWidth',1.5);
% Set display names
numLines = size(nrcs,2);
for ii = 1:numLines
hLine(ii).DisplayName = sprintf('SS %d, %s',ss(ii),pol);
if newFigure
hLine(ii).Color = brighten(hLine(ii).Color,0.5);
% Update labels and axes
ylabel('Reflectivity \sigma_0 (dB)')
title('Sea State Reflectivity \sigma_0')
legend('Location','southoutside','NumColumns',5,'Orientation','Horizontal');
function varargout = helperPlot(Rkm,y,displayName,ylabelStr,titleName)
% Used in CNR analysis
plot(hAxes,Rkm,y,'LineWidth',1.5,'DisplayName',displayName);
xlabel(hAxes,'Range (km)')
axis(hAxes,'tight');
function helperAddPlot(Rkm,y,displayName,hAxes)
ylimsIn = get(hAxes,'Ylim');
ylimsNew = get(hAxes,'Ylim');
set(hAxes,'Ylim',[ylimsIn(1) ylimsNew(2)]);
function helperAddHorizLine(hAxes,Rhoriz)
xline(Rhoriz.*1e-3,'--','DisplayName','Horizon Range','LineWidth',1.5);
xlim([xlims(1) Rhoriz.*1e-3*(1.05)]);
function helperAddBelowClutterPatch(hAxes)
% Add patch indicating when clutter falls below the noise
x = [xlims(1) xlims(1) xlims(2) xlims(2) xlims(1)];
y = [ylims(1) 0 0 ylims(1) ylims(1)];
hP = patch(hAxes,x,y,[0.8 0.8 0.8], ...
'FaceAlpha',0.3,'EdgeColor','none','DisplayName','Clutter Below Noise');
function helperFindClutterBelowNoise(Rkm,cnr)
% Find the point at which the clutter falls below the noise
idxNotNegInf = ~isinf(cnr);
Rclutterbelow = interp1(cnr(idxNotNegInf),Rkm(idxNotNegInf),0);
fprintf('Range at which clutter falls below noise (km) = %.2f\n',Rclutterbelow)
|
TwoSampleFTest(X1, X2, beta, confidence_option, output_option)
The TwoSampleFTest function computes the two sample F-test on datasets X1 and X2. This tests whether the population standard deviation of X1 divided by the population standard deviation of X2 is equal to beta, under the assumption that both populations are normally distributed.
\mathrm{with}\left(\mathrm{Student}[\mathrm{Statistics}]\right):
X≔[9,10,8,4,8,3,0,10,15,9]:
Y≔[6,3,10,11,9,8,13,4,4,4]:
\frac{\mathrm{Variance}\left(X\right)}{\mathrm{Variance}\left(Y\right)}
\frac{\textcolor[rgb]{0,0,1}{203}}{\textcolor[rgb]{0,0,1}{137}}
Calculate the two sample F-test on a list of values, assuming equal variances.
\mathrm{TwoSampleFTest}\left(X,Y,1,\mathrm{confidence}=0.95\right)
Confidence Interval: .368046193452367 .. 5.96552419074047
[\textcolor[rgb]{0,0,1}{\mathrm{hypothesis}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{\mathrm{true}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{confidenceinterval}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0.368046193452367}\textcolor[rgb]{0,0,1}{..}\textcolor[rgb]{0,0,1}{5.96552419074047}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{distribution}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{\mathrm{FRatio}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{9}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{9}\right)\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{pvalue}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0.567367926580979}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{statistic}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{1.48175182481752}]
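As a cross-check (not part of the Maple help page), the reported statistic can be reproduced in Python from the same data:

```python
from statistics import variance  # sample variance, with the 1/(n-1) factor

X = [9, 10, 8, 4, 8, 3, 0, 10, 15, 9]
Y = [6, 3, 10, 11, 9, 8, 13, 4, 4, 4]

F = variance(X) / variance(Y)
# Maple reports the exact ratio 203/137, i.e. about 1.48175182481752
```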
\mathrm{TwoSampleFTest}\left(X,Y,1,\mathrm{confidence}=0.95,\mathrm{output}=\mathrm{plot}\right)
\mathrm{report},\mathrm{graph}≔\mathrm{TwoSampleFTest}\left(X,Y,1,\mathrm{confidence}=0.95,\mathrm{output}=\mathrm{both}\right):
\mathrm{report}
[\textcolor[rgb]{0,0,1}{\mathrm{hypothesis}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{\mathrm{true}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{confidenceinterval}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0.368046193452367}\textcolor[rgb]{0,0,1}{..}\textcolor[rgb]{0,0,1}{5.96552419074047}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{distribution}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{\mathrm{FRatio}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{9}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{9}\right)\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{pvalue}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{0.567367926580979}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{statistic}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{1.48175182481752}]
\mathrm{graph}
The Student[Statistics][TwoSampleFTest] command was introduced in Maple 18.
Student/Statistics/TwoSampleFTest/overview
|
Consider the two similar solids at right.
Refresh your memory about linear scale factors by re-reading the Math Notes box in Lesson 2.1.2 and in Lesson 11.1.3.
For help finding the surface area and volume of a solid, see the Math Notes box in Lesson 10.3.1.
What is the linear scale factor between the two solids?
2
What is the surface area of each solid? What is the ratio of the surface areas? How is this ratio related to the linear scale factor?
The surface area can be found by counting or calculating the number of unit cube faces visible from all sides.
The surface areas are 24 un² and 96 un². The ratio of the surface areas is 96 ÷ 24 = 4, which is the square of the linear scale factor (2² = 4).
Now calculate the volume of each solid. How are the volumes related? Compare this to the linear scale factor and record your observations.
The volumes can be found by counting the number of unit cubes in each solid or by finding the volume of the two rectangular prisms which make up each solid.
The volumes are 6 un³ and 48 un³. The ratio of the volumes is 48 ÷ 6 = 8, which is the cube of the linear scale factor (2³ = 8).
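The two relationships can be checked with a short Python sketch (values taken from the answers above):

```python
k = 2                        # linear scale factor between the solids
sa_small, sa_large = 24, 96  # surface areas (un^2)
v_small, v_large = 6, 48     # volumes (un^3)

sa_ratio = sa_large / sa_small   # 4.0, equal to k**2
v_ratio = v_large / v_small      # 8.0, equal to k**3
```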
|
Divided sign - zxc.wiki
Division signs (also called divided signs or division characters) are special characters that are regularly used to represent the mathematical operator for division.
A colon (:), a colon with a middle bar (÷), or a slash (/) is used as the division sign in running text. Fractions are written with a fraction bar, which in running text resembles the slash. In displayed formulas, the numerator and denominator of a fraction are set one above the other, separated by a horizontal fraction bar.
In most countries, including Germany, the colon (:) is preferred in school mathematics; in the English-speaking world and on pocket calculators, the obelus (÷) is usual. In higher mathematics one finds almost exclusively the fraction notation
{\displaystyle {\tfrac {a}{b}}}
(and rarely
{\displaystyle {}^{a}/_{b}}
), or the notation as multiplication by the reciprocal
{\displaystyle ab^{-1}}
, which provides the necessary clarity, especially when the multiplication is non-commutative. The slash (/) is found mainly in programming languages.
Note that, where applicable, the operators differ in associativity.
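As a small illustration of the last point, in Python (chosen here simply as an example language) the slash denotes true division, with a doubled slash for floor division:

```python
quotient = 7 / 2    # true division  -> 3.5
floored = 7 // 2    # floor division -> 3
assert quotient == 3.5
assert floored == 3
```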
The oldest symbol appears to be the forward slash (/). It was first used by the English mathematician William Oughtred in his work Clavis Mathematicae, published in London in 1631.
The German scientist Gottfried Wilhelm Leibniz used the colon (:). Leibniz first used the division colon in 1684 in the Acta Eruditorum. Before Leibniz, the Englishman Johnson had published the symbol in a book in 1633, but only as a fraction sign and not for division in the narrower sense.
Johann Rahn introduced the symbol (÷), composed of a colon and a dash, for division. Together with the symbol for multiplication (∗), it appears for the first time in his book Teutsche Algebra, published in 1659. Rahn's division sign is sometimes referred to as the English division sign because it is more common in the English-speaking world; its origin, however, lies in Germany.
Leonardo Fibonacci was the first European mathematician to use the horizontal fraction line derived from Islamic mathematics .
The international character encoding standard Unicode contains several division signs and characters for closely related uses. They are located at the following code points:
Coding in Unicode, HTML and LaTeX
: (U+003A COLON): HTML &#x003A; or &#58;
÷ (U+00F7 DIVISION SIGN): HTML &#x00F7;, &#247;, or &divide;; LaTeX \div
∕ (U+2215 DIVISION SLASH): HTML &#x2215; or &#8725; (see note)
⁄ (U+2044 FRACTION SLASH): HTML &#x2044;, &#8260;, or &frasl; (see note)
∶ (U+2236 RATIO): HTML &#x2236; or &#8758;; LaTeX \ratio
Note: these are produced, for example, when using the LaTeX package xfrac.
The ASCII character set contains only the colon, which is why many older computer systems could represent only that sign. According to Unicode, U+2236 is to be preferred to the simple colon for divisions, since the simple colon also carries other semantics.
The distinction between the division slash and the fraction slash is ultimately semantic, even if the Unicode Consortium intended something different according to a technical note: "… the 'fraction slash' U+2044 … builds up to a skewed fraction, the 'division slash' U+2215 … builds up to a potentially large linear fraction, …" (Murray Sargent III). The division slash is found in the Unicode block Mathematical Operators, the fraction slash in the block General Punctuation.
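The code points above can be inspected programmatically; this Python sketch simply prints each character with its official Unicode name:

```python
import unicodedata

for cp in (0x003A, 0x00F7, 0x2215, 0x2044, 0x2236):
    print(f"U+{cp:04X}  {chr(cp)}  {unicodedata.name(chr(cp))}")
# U+003A  :  COLON
# U+00F7  ÷  DIVISION SIGN
# U+2215  ∕  DIVISION SLASH
# U+2044  ⁄  FRACTION SLASH
# U+2236  ∶  RATIO
```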
Replacement with other characters
Due to the lack of division signs on common keyboards, they are often replaced by the simple colon : or the simple slash /, both of which already appeared in the ASCII character set.
The ASCII extensions ISO 6937 from 1983 and ISO 8859-1 (Latin 1) from 1986 contained the divided character (÷). This can be generated by pressing Alt + 0247 on the number pad.
Florian Cajori : A History of Mathematical Notations. Dover Publications, New York NY 1993, ISBN 0-486-67766-4 (reprint of the original two volume work by Open Court Publishing 1928/1929).
↑ Andreas de Vries: The long way of the numbers. A brief history of the decimal system. Books on Demand, Norderstedt 2011, ISBN 978-3-8423-5120-2 , p. 42.
↑ Scott Pakin: The Comprehensive LaTeX Symbol List. (PDF, 8.7 MB) January 19, 2017, archived from the original on September 28, 2017; retrieved September 28, 2017.
^ Jason C: Difference Between Unicode FRACTION SLASH and DIVISION SLASH. In: Super User. Stack Exchange, June 1, 2015, accessed November 25, 2015 .
↑ Murray Sargent III: Unicode Nearly Plain-Text Encoding of Mathematics (PDF; 1.4 MB) March 10, 2010 (English) accessed on November 25, 2015
This page is based on the copyrighted Wikipedia article "Geteiltzeichen" (Authors); it is used under the Creative Commons Attribution-ShareAlike 3.0 Unported License. You may redistribute it, verbatim or modified, providing that you comply with the terms of the CC-BY-SA.
|
Optimization_problem Knowpia
In mathematics, computer science and economics, an optimization problem is the problem of finding the best solution from all feasible solutions.
Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:
An optimization problem with discrete variables is known as a discrete optimization, in which an object such as an integer, permutation or graph must be found from a countable set.
A problem with continuous variables is known as a continuous optimization, in which an optimal value from a continuous function must be found. They can include constrained problems and multimodal problems.
Continuous optimization problem
The standard form of a continuous optimization problem is[1]
{\displaystyle {\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&f(x)\\&\operatorname {subject\;to} &&g_{i}(x)\leq 0,\quad i=1,\dots ,m\\&&&h_{j}(x)=0,\quad j=1,\dots ,p\end{aligned}}}
f : ℝn → ℝ is the objective function to be minimized over the n-variable vector x,
gi(x) ≤ 0 are called inequality constraints
hj(x) = 0 are called equality constraints, and
m ≥ 0 and p ≥ 0.
If m = p = 0, the problem is an unconstrained optimization problem. By convention, the standard form defines a minimization problem. A maximization problem can be treated by negating the objective function.
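As a minimal illustration of the unconstrained case (m = p = 0), the sketch below minimizes a one-dimensional objective by plain gradient descent; the objective, step size, and iteration count are arbitrary choices for the example:

```python
def minimize(f, grad, x0, lr=0.1, steps=200):
    """Plain gradient descent for an unconstrained problem (m = p = 0)."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # step against the gradient
    return x

# minimize f(x) = (x - 3)^2; its unique minimizer is x = 3.
# A maximization problem would simply negate f (and grad).
x_star = minimize(lambda x: (x - 3) ** 2, lambda x: 2 * (x - 3), x0=0.0)
print(round(x_star, 6))  # → 3.0
```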
Combinatorial optimization problem
Formally, a combinatorial optimization problem A is a quadruple (I, f, m, g), where
I is a set of instances;
given an instance x ∈ I, f(x) is the set of feasible solutions;
given an instance x and a feasible solution y of x, m(x, y) denotes the measure of y, which is usually a positive real.
g is the goal function, and is either min or max.
The goal is then to find for some instance x an optimal solution, that is, a feasible solution y with
{\displaystyle m(x,y)=g\left\{m(x,y'):y'\in f(x)\right\}.}
For each combinatorial optimization problem, there is a corresponding decision problem that asks whether there is a feasible solution for some particular measure m0. For example, if there is a graph G which contains vertices u and v, an optimization problem might be "find a path from u to v that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from u to v that uses 10 or fewer edges?" This problem can be answered with a simple 'yes' or 'no'.
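The optimization/decision pairing from the path example can be sketched directly: breadth-first search solves the optimization version, and the decision version compares its answer to the bound. The graph below is a made-up illustration:

```python
from collections import deque

def fewest_edges(graph, u, v):
    """Optimization version: minimum number of edges on a u-v path (BFS)."""
    dist = {u: 0}
    queue = deque([u])
    while queue:
        node = queue.popleft()
        if node == v:
            return dist[node]
        for nxt in graph.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return None  # no feasible solution: v is unreachable from u

def decision(graph, u, v, k):
    """Decision version: is there a u-v path using at most k edges?"""
    d = fewest_edges(graph, u, v)
    return d is not None and d <= k

g = {"u": ["a", "b"], "a": ["v"], "b": ["a"]}
print(fewest_edges(g, "u", "v"))  # → 2
print(decision(g, "u", "v", 10))  # → True
```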
In the field of approximation algorithms, algorithms are designed to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem since it only specifies acceptable solutions. Even though we could introduce suitable decision problems, the problem is more naturally characterized as an optimization problem.[2]
Counting problem (complexity) – Type of computational problem
Function problem – Type of computational problem
Satisficing – Cognitive heuristic of searching for an acceptable decision − the optimum need not be found, just a "good enough" solution.
^ Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (pdf). Cambridge University Press. p. 129. ISBN 978-0-521-83378-3.
^ Ausiello, Giorgio; et al. (2003), Complexity and Approximation (Corrected ed.), Springer, ISBN 978-3-540-65431-5
|
Contractions & DBond - Dibs.Money
1. "When can I swap $DIBS for $DBOND?"
$DBOND will only become available in the bonds section following epochs in which the Time Weighted Average Price (TWAP) of $DIBS is under peg. This means that $DIBS's price must have been under 1 $BNB per 1000 $DIBS for the majority of the previous epoch in order for the bonds section to "open".
The bonds section will always open at the very beginning of a new epoch, and remain open for the entire epoch — the bonds section cannot and will never open mid-epoch — and during epochs in which the bonds section is open, $DIBS will not be printed in the Piggybank.
2. "What is the formula to calculate the redemption bonus for $DBOND?"
To encourage redemption of $DBOND for $DIBS when $DIBS's TWAP > 1.1, and in order to incentivize users to redeem at a higher price, $DBOND redemption will be more profitable with a higher $DIBS TWAP value. The $DBOND to $DIBS ratio will be 1:R, where R can be calculated in the formula as shown below:
R = 1 + [(DIBS TWAP price - 1) * coeff]
coeff = 0.7
To further illustrate why the longer you hold $DBOND the more profitable it is, let's take an initial $1000 investment into consideration. In this example, say this $1000 is used to buy $DIBS when $DIBS TWAP is 0.95 and then swapped for $DBOND. If these $DBOND are redeemed when: -$DIBS TWAP is 1.5, your investment would now be worth $1421. -$DIBS TWAP is 2, your investment would now be worth $1789. -$DIBS TWAP is 3, your investment would now be worth $2526. -$DIBS TWAP is 5, your investment would now be worth $4000.
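The numbers in this example can be reproduced directly from the redemption formula above. A sketch (the function names are ours, and the 1:1 $DIBS-to-$DBOND swap at entry is assumed, as in the example):

```python
COEFF = 0.7

def redemption_ratio(twap):
    # R = 1 + [(DIBS TWAP price - 1) * coeff]
    return 1 + (twap - 1) * COEFF

def position_value(initial_usd, entry_twap, exit_twap):
    dibs = initial_usd / entry_twap   # buy DIBS below peg
    dbond = dibs                      # swap 1:1 for DBOND
    return dbond * redemption_ratio(exit_twap)

for exit_twap in (1.5, 2, 3, 5):
    print(exit_twap, round(position_value(1000, 0.95, exit_twap)))
# → 1.5 1421 / 2 1789 / 3 2526 / 5 4000, matching the example above
```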
3. "I expected $DBOND to be issued in the bonds section, but there is none. Why?"
There is a balanced state "at peg" when $DIBS's TWAP is between 1.00 and 1.01, and this means there is neither contraction nor inflation.
4. "When can I swap $DBOND back to $DIBS?"
1: $DIBS TWAP is above peg and
5. "Is $DBOND right for me?"
Like anything else in crypto, obtaining $DBOND is not risk-free. Just like in the real world, you are purchasing debt from the protocol with the expectation that you will be redeemed at a premium in the future. To date, this has occurred after all contractions, but past performance does not guarantee the same future outcomes. $DBOND is ideal for those with a medium to long-term time preference, as it incentivizes hodling in exchange for potentially extremely lucrative rewards. If you are looking for a quick flip or have short-term time preference, $DBOND may not be the right investment option for you.
|
Explain network predictions using Grad-CAM - MATLAB gradCAM - MathWorks 日本
\alpha_{k}^{c}=\overbrace{\frac{1}{N}\sum_{i}\sum_{j}}^{\text{Global average pooling}}\;\underbrace{\frac{\partial y^{c}}{\partial A_{i,j}^{k}}}_{\text{Gradients via backprop}},

M=\text{ReLU}\left(\sum_{k}\alpha_{k}^{c}A^{k}\right).

\sum_{(i,j)\in S}y_{ij}^{c}
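The two equations above amount to a global average over spatial positions, followed by a weighted, rectified sum of feature maps. A dependency-free illustration of that weighting step (not MATLAB's implementation; the tiny gradient and activation maps are hand-made, not from a real network):

```python
def grad_cam_map(grads, activations):
    """grads[k][i][j] ~ dy_c/dA_k[i][j]; activations[k][i][j] = A_k[i][j]."""
    n = len(grads[0]) * len(grads[0][0])  # N: number of spatial positions
    # alpha_k: global average pooling of the gradients
    alphas = [sum(map(sum, g)) / n for g in grads]
    h, w = len(activations[0]), len(activations[0][0])
    # M = ReLU(sum_k alpha_k * A_k)
    return [[max(0.0, sum(a * A[i][j] for a, A in zip(alphas, activations)))
             for j in range(w)] for i in range(h)]

grads = [[[1, 1], [1, 1]], [[-2, -2], [-2, -2]]]  # two 2x2 channels
acts = [[[1, 0], [0, 1]], [[0, 1], [1, 0]]]
print(grad_cam_map(grads, acts))  # → [[1.0, 0.0], [0.0, 1.0]]
```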
[1] Selvaraju, Ramprasaath R., Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization." 2017 IEEE International Conference on Computer Vision (October 2017): 618–626, https://doi.org/10.1109/ICCV.2017.74.
[2] Vinogradova, Kira, Alexandr Dibrov, and Gene Myers. "Towards Interpretable Semantic Segmentation via Gradient-Weighted Class Activation Mapping." Proceedings of the AAAI Conference on Artificial Intelligence 34, no. 10 (April 2020): 13943–13944, https://doi.org/10.1609/aaai.v34i10.7244.
|
Change_of_variables Knowpia
Change of variables is an operation that is related to substitution. However these are different operations, as can be seen when considering differentiation (chain rule) or integration (integration by substitution).
A very simple example of a useful variable change can be seen in the problem of finding the roots of the sixth-degree polynomial:
{\displaystyle x^{6}-9x^{3}+8=0.}
Sixth-degree polynomial equations are generally impossible to solve in terms of radicals (see Abel–Ruffini theorem). This particular equation, however, may be written
{\displaystyle (x^{3})^{2}-9(x^{3})+8=0}
(this is a simple case of a polynomial decomposition). Thus the equation may be simplified by defining a new variable \(u=x^{3}\). Substituting \(x\) by \({\sqrt[{3}]{u}}\) into the polynomial gives
{\displaystyle u^{2}-9u+8=0,}
which is just a quadratic equation with the two solutions:
{\displaystyle u=1\quad {\text{and}}\quad u=8.}
The solutions in terms of the original variable are obtained by substituting x3 back in for u, which gives
{\displaystyle x^{3}=1\quad {\text{and}}\quad x^{3}=8.}
Then, assuming that one is interested only in real solutions, the solutions of the original equation are
{\displaystyle x=(1)^{1/3}=1\quad {\text{and}}\quad x=(8)^{1/3}=2.}
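The same substitution can be carried out numerically. A sketch for the real solutions only:

```python
import math

def real_roots_of_sextic():
    """Solve x^6 - 9x^3 + 8 = 0 via the substitution u = x^3."""
    # quadratic in u: u^2 - 9u + 8 = 0
    disc = math.sqrt(9 ** 2 - 4 * 8)
    us = ((9 - disc) / 2, (9 + disc) / 2)            # u = 1 and u = 8
    # back-substitute x = u^(1/3), rounding away floating-point noise
    return sorted(round(u ** (1 / 3), 9) for u in us)

print(real_roots_of_sextic())  # → [1.0, 2.0]
```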
As another example, consider the system of equations

{\displaystyle xy+x+y=71}
{\displaystyle x^{2}y+xy^{2}=880}

where \(x\) and \(y\) are positive integers with \(x>y\). (Source: 1991 AIME)
Solving this normally is not very difficult, but it may get a little tedious. However, we can rewrite the second equation as \(xy(x+y)=880\). Making the substitutions \(s=x+y\) and \(t=xy\) reduces the system to \(s+t=71,\ st=880\). Solving this gives \((s,t)=(16,55)\) or \((s,t)=(55,16)\). Back-substituting the first ordered pair gives us \(x+y=16,\ xy=55,\ x>y\), which gives \((x,y)=(11,5)\). Back-substituting the second ordered pair gives us \(x+y=55,\ xy=16,\ x>y\), which gives no solutions. Hence the solution that solves the system is \((x,y)=(11,5)\).
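The back-substitution can be checked numerically, following the steps above:

```python
import math

# s + t = 71 and s*t = 880, so s and t are roots of z^2 - 71z + 880 = 0
d = math.sqrt(71 ** 2 - 4 * 880)   # discriminant: sqrt(1521) = 39
s, t = (71 - d) / 2, (71 + d) / 2  # (16.0, 55.0)

# back-substitute the pair (s, t) = (16, 55): x + y = 16, x*y = 55
dx = math.sqrt(s ** 2 - 4 * t)     # sqrt(256 - 220) = 6
x, y = (s + dx) / 2, (s - dx) / 2  # ordered so that x > y
print(x, y)  # → 11.0 5.0
```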
Formal introduction

Let \(A\) and \(B\) be smooth manifolds and let \(\Phi :A\rightarrow B\) be a \(C^{r}\)-diffeomorphism between them, that is: \(\Phi\) is an \(r\) times continuously differentiable, bijective map from \(A\) to \(B\) with an \(r\) times continuously differentiable inverse from \(B\) to \(A\). Here \(r\) may be any natural number (or zero), \(\infty\) (smooth) or \(\omega\) (analytic).

The map \(\Phi\) is called a regular coordinate transformation or regular variable substitution, where regular refers to the \(C^{r}\)-ness of \(\Phi\). Usually one will write \(x=\Phi (y)\) to indicate the replacement of the variable \(x\) by the variable \(y\) by substituting the value of \(\Phi\) in \(y\) for every occurrence of \(x\).
Some equations are more easily analyzed after a coordinate transformation. Consider, for example, the equation

{\displaystyle U(x,y):=(x^{2}+y^{2}){\sqrt {1-{\frac {x^{2}}{x^{2}+y^{2}}}}}=0.}

Introduce the polar coordinate transformation \((x,y)=\Phi (r,\theta )\) given by

{\displaystyle \displaystyle \Phi (r,\theta )=(r\cos(\theta ),r\sin(\theta )).}

Note that if \(\theta\) runs outside a \(2\pi\)-length interval, for example \([0,2\pi ]\), the map \(\Phi\) is no longer bijective. Therefore, \(\Phi\) should be limited to, for example, \((0,\infty ]\times [0,2\pi )\). Notice how \(r=0\) is excluded, for \(\Phi\) is not bijective in the origin (\(\theta\) can take any value, the point will be mapped to (0, 0)). Then, replacing all occurrences of the original variables by the new expressions prescribed by \(\Phi\) and using the identity \(\sin ^{2}x+\cos ^{2}x=1\) gives

{\displaystyle V(r,\theta )=r^{2}{\sqrt {1-{\frac {r^{2}\cos ^{2}\theta }{r^{2}}}}}=r^{2}{\sqrt {1-\cos ^{2}\theta }}=r^{2}\left|\sin \theta \right|.}

Now the solutions can be readily found: \(\sin(\theta )=0\), that is, \(\theta =0\) or \(\theta =\pi\). Applying the inverse of \(\Phi\) shows that this is equivalent to \(y=0\) while \(x\neq 0\). Indeed, we see that for \(y=0\) the function vanishes, except for the origin.

Note that, had we allowed \(r=0\), the origin would also have been a solution, though it is not a solution to the original problem. Here the bijectivity of \(\Phi\) is crucial. The function is always positive (for \(x,y\in \mathbb {R}\)), hence the absolute values.
The chain rule is used to simplify complicated differentiation. For example, consider the problem of calculating the derivative
{\displaystyle {\frac {d}{dx}}\sin(x^{2}).}
Setting \(y=\sin u\) and \(u=x^{2}\), we obtain
{\displaystyle {\begin{aligned}&{\frac {d}{dx}}\sin(x^{2})=\overbrace {{\frac {dy}{dx}}={\frac {dy}{du}}\,{\frac {du}{dx}}} ^{\text{This part is the chain rule.}}=\left({\frac {d}{du}}\sin u\right)\left({\frac {d}{dx}}x^{2}\right)\\[8pt]={}&{\big (}\cos u{\big )}(2x)=\cos(x^{2})\cdot 2x.\end{aligned}}}
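The result can be sanity-checked numerically with a central finite difference (the evaluation point and step size are arbitrary choices for the example):

```python
import math

def d_sin_x_squared(x):
    """Derivative of sin(x^2) from the chain rule: cos(x^2) * 2x."""
    return math.cos(x ** 2) * 2 * x

# central finite-difference approximation of the same derivative
x, h = 1.3, 1e-6
numeric = (math.sin((x + h) ** 2) - math.sin((x - h) ** 2)) / (2 * h)
print(abs(numeric - d_sin_x_squared(x)) < 1e-5)  # → True
```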
Difficult integrals may often be evaluated by changing variables; this is enabled by the substitution rule and is analogous to the use of the chain rule above. Difficult integrals may also be solved by simplifying the integral using a change of variables given by the corresponding Jacobian matrix and determinant.[1] Using the Jacobian determinant and the corresponding change of variable that it gives is the basis of coordinate systems such as polar, cylindrical, and spherical coordinate systems.
Scaling and shifting
Under the scaling and shifting of variables

{\displaystyle x={\hat {x}}x_{\text{scale}}+x_{\text{shift}}}
{\displaystyle y={\hat {y}}y_{\text{scale}}+y_{\text{shift}},}

derivatives transform as

{\displaystyle {\frac {d^{n}y}{dx^{n}}}={\frac {y_{\text{scale}}}{x_{\text{scale}}^{n}}}{\frac {d^{n}{\hat {y}}}{d{\hat {x}}^{n}}}.}
For example, the equation

{\displaystyle \mu {\frac {d^{2}u}{dy^{2}}}={\frac {dp}{dx}}\quad ;\quad u(0)=u(L)=0}

describes parallel fluid flow between flat solid walls separated by a distance \(L\); \(\mu\) is the viscosity and \(dp/dx\) the pressure gradient, both constants. By scaling the variables the problem becomes

{\displaystyle {\frac {d^{2}{\hat {u}}}{d{\hat {y}}^{2}}}=1\quad ;\quad {\hat {u}}(0)={\hat {u}}(1)=0,}

where

{\displaystyle y={\hat {y}}L\qquad {\text{and}}\qquad u={\hat {u}}{\frac {L^{2}}{\mu }}{\frac {dp}{dx}}.}
Momentum vs. velocity

Consider the system of equations

{\displaystyle {\begin{aligned}m{\dot {v}}&=-{\frac {\partial H}{\partial x}}\\[5pt]m{\dot {x}}&={\frac {\partial H}{\partial v}}\end{aligned}}}

for a given function \(H(x,v)\). The mass can be eliminated by the (trivial) substitution \(\Phi (p)=1/m\cdot p\). Clearly this is a bijective map from \(\mathbb {R}\) to \(\mathbb {R}\). Under the substitution \(v=\Phi (p)\) the system becomes

{\displaystyle {\begin{aligned}{\dot {p}}&=-{\frac {\partial H}{\partial x}}\\[5pt]{\dot {x}}&={\frac {\partial H}{\partial p}}\end{aligned}}}
Given a force field \(\varphi (t,x,v)\), Newton's equations of motion are

{\displaystyle m{\ddot {x}}=\varphi (t,x,v).}

Lagrange examined how these equations of motion change under an arbitrary substitution of variables \(x=\Psi (t,y)\), with

{\displaystyle v={\frac {\partial \Psi (t,y)}{\partial t}}+{\frac {\partial \Psi (t,y)}{\partial y}}\cdot w.}

He found that the equations

{\displaystyle {\frac {\partial {L}}{\partial y}}={\frac {\mathrm {d} }{\mathrm {d} t}}{\frac {\partial {L}}{\partial {w}}}}

are equivalent to Newton's equations for the function \(L=T-V\), where \(T\) is the kinetic, and \(V\) the potential energy.
^ Kaplan, Wilfred (1973). "Change of Variables in Integrals". Advanced Calculus (Second ed.). Reading: Addison-Wesley. pp. 269–275.
|
Impermanent Loss | APWine Finance
Impermanent loss (IL) refers to the loss that funds can be exposed to when they are deposited in a liquidity pool. This is one of the main challenges for liquidity providers (LPs) who provide the funds for these pools. If IL exceeds earned fees, it means they suffered negative returns compared with simply holding their tokens outside the pool. Generally, the larger the relative price changes in a pool, the bigger the impermanent loss will be. Indeed, the most valuable tokens should in theory be bought from the pool, leaving liquidity providers with more of the less-valuable tokens.
APWine's AMM#
APWine's AMM is derived from Balancer V1's AMM designs. You can read more information about Balancer's AMM following this link.
As explained in this section about the AMM architecture, each future has two AMM pools: a PT-FYT pool and a PT-Underlying pool.
The former is a classical Balancer pool with fixed 50/50 weights. Hence, the impermanent loss you suffer from in this pool is the same IL you would suffer from in any 50/50 pool. You can read more about Balancer's IL following this link.
On the other hand, the latter has dynamic weights: the weighting of assets starts at 50/50 and the PT's weights progressively increase during the period. Specifically, as the price of the PT starts at a discounted price and should progressively converge to the price of one underlying, the weights are updated based on the yield generated to update the discount and protect liquidity providers from arbitrages. Therefore, the purpose of this weighting mechanism is to mitigate the impermanent loss. You can read more about the dynamic weights mechanism here. In particular, this will give you an intuition about the conditions for updating the weights.
Impermanent Loss formula for the PT-Underlying pool#
Let us now focus on the PT-Underlying pool to better understand the impact of the dynamic weights mechanism on the impermanent loss.
The Balancer's AMM design works with a simple invariant formula called the Cost Function:
V=\prod_i B_i^{w_i}
This value can also be expressed in underlying by multiplying each token balance with the price in underlying of the corresponding token:
V=\prod_i (B_i \times P_i)^{w_i}
This function ensures all the properties of the AMM. In Balancer's case, weights can be set to any value but are then static. This greatly simplifies the IL formula, which can be expressed as a function of the AMM weights and the relative price change of each token (cf. this link for more details). In the case of APWine's AMM, the dynamic weights make the formula intractable. This section will explain the steps needed to understand the logic of APWine's IL formula.
Note that in the following computations \(U\) represents the underlying tokens and \(PT\) represents the principal tokens. Also, for completeness, let us introduce a liquidity provider lp holding a certain amount \(T^{lp}\) of LP tokens. The total supply of LP tokens is \(T^{tot}\). Then, the pool share of the lp is \(S^{lp} = \frac{T^{lp}}{T^{tot}}\). Finally, let us define the proportion of the balance of the token i held by lp as \(\beta_i^t = S^{lp} \times B_i^t\). The following IL computations concern lp.
The impermanent loss is a ratio of performance between a return from a position in the AMM against the return of a simple hodling position (hang tight...):
IL = \frac{\Delta AMM_{U}}{\Delta HODL_{U}} - 1
Where the holding value change can be expressed as the sum of the weighted price changes:
\Delta HODL_{U} = \Delta P_{PT} \times w_{PT}^0 + \Delta P_{U} \times w_{U}^0
where

\Delta P_i = \frac{P_i^t}{P_i^0}

is the price change of token i between the deposit time \(t=0\) and the withdrawal time \(t\). The AMM value change is expressed as the ratio between the values of the AMM's cost function at withdrawal time (\(t\)) and deposit time (\(t=0\)):

\Delta AMM_{U} = \frac{V^t}{V^0}

with
V^t = \prod_i (\beta_i^t \times P_i^t)^{w_i^t} = (\beta_{pt}^t \times P_{PT}^t)^{w_{PT}^t} \times (\beta_{U}^t \times P_{U}^t)^{w_{U}^t}
All the variables in this formula depend on the time \(t\) and evolve during the period. We can therefore express the final \(\Delta AMM_{U}\) as:

\Delta AMM_{U} = \frac{V^t}{V^0} = \frac{(\beta_{pt}^t \times P_{PT}^t)^{w_{PT}^t} \times (\beta_{U}^t \times P_{U}^t)^{w_{U}^t}}{(\beta_{pt}^0 \times P_{PT}^0)^{w_{PT}^0} \times (\beta_{U}^0 \times P_{U}^0)^{w_{U}^0}}
This leads to the final impermanent loss formula:
IL = \frac{\frac{(\beta_{pt}^t \times P_{PT}^t)^{w_{PT}^t} \times (\beta_{U}^t \times P_{U}^t)^{w_{U}^t}}{(\beta_{pt}^0 \times P_{PT}^0)^{w_{PT}^0} \times (\beta_{U}^0 \times P_{U}^0)^{w_{U}^0}}}{\Delta P_{PT} \times w_{PT}^0 + \Delta P_{U} \times w_{U}^0} -1
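The formula can be evaluated directly. As a sanity check, the sketch below plugs in a fixed 50/50 pool (constant weights, with balances rebalanced by no-fee constant-product arbitrage after the PT price doubles) and recovers the classic impermanent loss of about −5.72%. All numbers here are illustrative, not APWine data:

```python
import math

def impermanent_loss(beta0, p0, w0, beta_t, p_t, w_t):
    """IL from the formula above; dicts keyed by token name ('pt', 'u')."""
    v = lambda b, p, w: ((b["pt"] * p["pt"]) ** w["pt"]
                         * (b["u"] * p["u"]) ** w["u"])
    # holding value change: sum of the weighted price changes
    hodl = sum(w0[i] * p_t[i] / p0[i] for i in ("pt", "u"))
    return v(beta_t, p_t, w_t) / v(beta0, p0, w0) / hodl - 1

# fixed 50/50 pool, deposit 1 PT + 1 U at equal prices
w = {"pt": 0.5, "u": 0.5}
beta0, p0 = {"pt": 1.0, "u": 1.0}, {"pt": 1.0, "u": 1.0}
# PT price doubles; constant-product arbitrage moves balances to (1/√2, √2)
p_t = {"pt": 2.0, "u": 1.0}
beta_t = {"pt": 1 / math.sqrt(2), "u": math.sqrt(2)}
il = impermanent_loss(beta0, p0, w, beta_t, p_t, w)
print(round(il, 4))  # → -0.0572  (the well-known 2*sqrt(2)/3 - 1)
```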
Impermanent Loss behavior#
The IL function above is quite intractable and difficult to grasp. The following sections will give you a better intuition of the impermanent loss behavior from different angles. In particular, you will be able to compare the behavior of APWine's IL with the IL of a Balancer-like pool with fixed weights.
AMM behavior#
The weights in the AMM represent how much liquidity providers are exposed to a token price change. The fact that the AMM's weights progressively change by increasing the weights of the PT proportionally to the yield already generated makes you more exposed to the PT during the period.
At the end of the period, the LP withdraws liquidity from the AMM (i.e. getting your PT and Underlying tokens back) and then withdraws his Interest Bearing Tokens (IBTs) from the APWine protocol (1 PT for 1 underlying worth of IBTs). It is useful to consider this when thinking about the IL as the end of the period is the only moment when the PT can redeem one underlying no matter what its price on the AMM is. Otherwise, the price of the PT is oscillating around its discount price because of market speculation.
LPs are therefore advised to exit by withdrawing their funds from the protocol at the end of the period, rather than selling their assets on the AMM (where they might suffer from slippage).
We ran simulations of our AMM, fixing some parameters in order to isolate the impact of several behaviors on the impermanent loss. We also only focus on the impermanent loss in underlying, without taking into account the impermanent loss in fiat. It is the liquidity providers' job to manage their fiat exposure to different assets.
Entry price#
Considering LPs that will withdraw their liquidity at the end of the period from the AMM and then from the APWine protocol, the only price that matters is the entry price (aka the prices reflected on the AMM when they add liquidity). It is thus useful to visualize the impact of the entry price on the impermanent loss.
This first experiment was run for a future on an Interest Bearing Token with a fixed 50% APY during the period. We consider an LP providing liquidity to the PT-Underlying pool of the AMM at the beginning of the period and withdrawing its funds at the end. With a 50% APY during the period, the discounted price of the PT at the beginning of the period is 1PT = 0.667U (\(1/(1+r)\) with \(r = 0.5\)) and is therefore the correct spot price derived from this APY.
The graph below shows the impact of the entry price on the empirical impermanent loss. Clearly, the impermanent loss is shifted up compared to the fixed weight case.
Keep in mind that, accounting for a positive interest, the value of a PT should always lie between 0 and 1U during the period and progressively increase in value to reach a price of 1U at the end of the period. Hence, the graph above is here to get intuition on the impact of relative price change on the impermanent loss. Note that some areas of price change are not likely to happen in practice. For instance a relative change of 2 would make the entry price at 1PT = 1.334 underlying. No LP should add liquidity at this price. There is an important arbitrage opportunity.
Unlike with most AMMs, it is possible for liquidity providers to achieve a positive IL on APWine's AMM without taking fees into consideration. This is due to the fact that we only consider the price in terms of the underlying. The only asset changing in price is the PT that increases in value. The weight mechanism makes the liquidity provider more exposed to the PT asset that increases in value. In certain situations, the liquidity providers get more and more exposure to an asset that increases in price, making it more interesting compared to a fixed 50% exposition to the PT while holding the tokens outside the AMM.
This protection mechanism (dynamic weights) is therefore biased to hedge the risk of liquidity providers. The value gained by LPs in case of positive impermanent loss is taken from all the trades. It thus results in a safer pool for LPs, which allows the pool to attract more liquidity, boosting the usability of the AMM. For traders, on the one hand the arbitrage opportunities are reduced, but on the other hand the usability increases (more liquidity \(\Rightarrow\) less slippage).
Relative price offset#
We study thereafter the impact on the impermanent loss of a discrepancy between the correctly derived spot price and the average spot price in the AMM. For instance, what happens with the impermanent loss if traders always underestimate (overestimate) the price of a PT in underlying?
As the PT price in underlying is the discounted price of one underlying with respect to the discount rate for the remainder of the period, underestimating (overestimating) this price is equivalent to overestimating (underestimating) the APY, and hence the discount rate of the token.
Recall that at any time the price of the PT is \(P_{PT} = \frac{1}{1+r}\), with \(r\) the discount rate for the remainder of the period.
The simulation was once again for a future on an Interest Bearing Token with a fixed 50% APY during the period, and with an LP providing liquidity to the PT-Underlying pool of the AMM at the beginning of the period and withdrawing its funds at the end. Let us further assume the liquidity provider makes a good entry (for example 1PT = 0.667U at the beginning of the period) and then traders will always underestimate (negative relative change) or overestimate (positive relative change) during the period.
The graph below clearly shows that underestimating the price benefits the liquidity provider: traders trading in these price ranges leave more PT in the pool, to the benefit of liquidity providers. Overestimating the price induces a negative impermanent loss. As observed, the impermanent loss then becomes positive again but, as in the previous simulation, you should keep in mind that some prices are very unlikely to occur in practice (e.g. PT prices over 1U).
Let us now work out a few examples to better visualize these different scenarios. Several simulations were run with different parameters to compute the Empirical Impermanent Loss (EIL) of different investment positions.
The EIL is computed directly from the token amount and their respective prices when entering and leaving the pool:
EIL = \frac{\frac{A_{PT}^{t} \times P_{PT}^t + A_{U}^{t} \times P_{U}^t}{A_{PT}^{0} \times P_{PT}^0 + A_{U}^{0} \times P_{U}^0}}{\Delta P_{PT} \times A_{PT}^{0} + \Delta P_{U} \times A_{U}^{0}}
where \(A_i^0\) is the amount of token i provided to the AMM (at time \(t=0\)) and \(A_i^t\) the amount of token i withdrawn from the AMM (at time \(t\)).

For all the scenarios, we still compute the impermanent loss in underlying. Hence, the price of the underlying token in underlying is obviously 1U.
Example 1: Optimistic scenario#
Patrick is an APWine liquidity provider in the PT-Underlying pool for a future on an IBT with a fixed APY of 20% during the period (optimistic scenario). He adds liquidity at the beginning of the period with a PT price of 1PT = 0.834U (the correct pricing).
He provides 100.0PT and 83.3334U and owns \(\sim 1\%\) of the pool liquidity.
During the entire period, the PT price follows its theoretical discount curve with small variance (\(\sigma = 0.1\)). This means that on average, trades follow the right discount curve.
At the end of the period, 1PT = 1U and Patrick withdraws his liquidity from the AMM and gets 83.3324PT and 100.0008U.
By computing the Empirical Impermanent Loss, Patrick has an EIL of \(-6.4783\times 10^{-7}\% \approx 0\%\).
Example 2: Under-priced entry#
In this second scenario, with the same conditions on the future as in the previous example (20% fixed APY), Patrick enters the pool at a PT price of 1PT = 0.75U. As mentioned, the correct pricing would be 1PT = 0.834U. Hence he enters at a price 10% below the true discount price.
He provides 100.0PT and 75.0U and owns \(\sim 1\%\) of the pool liquidity. During the entire period the PT price eventually converges back to its theoretical discount curve with small variance (\(\sigma = 0.1\)). This means that on average, trades end up following the right discount curve.
At the end of the period, 1PT = 1U and he withdraws his liquidity from the AMM and gets 94.8688PT and 79.0562U. This gives an EIL of \(-0.0061\%\).
Example 3: Over-priced entry#
The following is the opposite scenario: Patrick enters the pool at a PT price of 1PT = 0.9167U, whereas the correct pricing would still be 1PT = 0.834U. Hence he enters at a price 10% above the true discount price. He again owns \(\sim 1\%\) of the pool liquidity, and during the period the PT price eventually converges back to its theoretical discount curve with small variance (\(\sigma = 0.1\)). At the end of the period, 1PT = 1U and he withdraws his liquidity from the AMM and gets 104.8802PT and 87.4013U. This gives \(EIL = 0.0032\%\). A positive impermanent loss!
Example 4: Overestimation of the APY#
Let us now focus on the impact on the impermanent loss when traders (on average) underestimate the PT price during the period. Having such price discrepancies with the true generated yield impacts the weight shifting and thus the trade sizes.
To keep things simple, let us keep the same asset as before: an IBT with a 20% fixed APY during the period.
Patrick enters the pool at a PT price of 1PT = 0.8334U, which is the correct pricing for an APY of 20%. Hence he made a good entry, and he owns \(\sim 1\%\) of the pool liquidity. Then, traders, on average, always underestimate the PT price by 10%. At the end of the period, his \(EIL = 0.0062\%\).
Example 5: Underestimation of the APY#
For the opposite scenario: Patrick enters the pool at a PT price of 1PT = 0.8334U, which is the correct pricing for an APY of 20%. Hence he made a good entry, and he owns \(\sim 1\%\) of the pool liquidity. Then, traders, on average, always overestimate the PT price by 10%. We still assume the APY to be fixed during the period.
At the end of the period, he withdraws his liquidity from the AMM and gets 91.0984PT and 91.8559U.
Note that in this scenario, we simulate the fact that traders overestimate the PT price until the end. At the end of the period the PT price in the AMM ends up being above 1U. This should not happen in practice, as traders will eventually sell PTs to profit from this arbitrage opportunity.
This gives \(EIL = -0.0021\%\).
|
Coding Theory | EMS Press
The workshop on Coding Theory brought together leading researchers in several key areas of mathematical coding theory. Alongside many mathematicians, computer scientists and electrical engineers were present. Participants came from many countries, and the group included both senior and junior researchers.
Ever since its conception in the late 1940s, the theory of error-correcting codes has established itself as one of the central areas in mathematics.
Coding theory lies naturally at the intersection of a large number of disciplines in pure and applied mathematics: algebra and number theory, probability theory and statistics, communication theory, discrete mathematics and combinatorics, complexity theory, and statistical physics are just a few of the areas that have brought about very interesting applications in coding theory in recent years. The multitude of methods and means to construct and analyze codes and their properties suggests that a workshop with the explicit aim of bringing together researchers in different sub-fields of coding theory is necessary for cross-fertilization of ideas and global advancement of the field.
The following topics were covered during the workshop.
Combinatorial and probabilistic coding theory: This area has experienced a huge revival in recent years because of its success in the design of codes with superior performance. Very roughly, in this area combinatorial structures are used to construct error-correcting codes, and properties of these structures are used to design and analyze efficient encoding and decoding algorithms for the codes. One of the most prominent examples in this area is furnished by the class of LDPC codes. These codes are constructed from sparse bipartite graphs. More generally, Michael Tanner showed in the 1980s how to construct "general codes on graphs".
The sparsity of the graph provides methods for construction of low complexity encoders and decoders. The graphs need to be designed in such a way as to facilitate an optimal operation of the algorithms. To achieve this goal researchers have developed and applied methods from probability theory and statistics, algebra, discrete mathematics, number theory, and statistical physics.
{\bf Algebraic coding theory:} Algebraic coding theory primarily investigates codes obtained from algebraic constructions. Prime examples of this area of coding theory are codes from algebraic geometry, and codes obtained from algebraically constructed expander graphs. This discipline is almost as old as coding theory itself, and has attracted (and continues to attract) some of the brightest minds in the field. Among the most exciting advances in this field in recent years has been the invention of list-decoding algorithms for various classes of algebraic codes. Such decoding algorithms yield, for a received word, a short list of codewords that have at most a given distance τ to the received word; the size of the list depends on the distance τ. The methods in this field are mostly algebraic and make use of various properties of multivariate polynomials or, more generally, the properties of "well-behaved" functions in the function field of an irreducible variety. Methods from algebraic geometry are very important in this area. On the computational side, the field naturally embeds in the theory of Gröbner bases. There are emerging relationships between this area and codes on graphs, the leading question being whether or not it is possible to match the superior performance of graph-based codes with list-decoding algorithms, or at least with algorithms that are derived from list-decoding algorithms.
{\bf Theoretical computer science:} Theoretical computer science has contributed a large number of ideas to coding theory. The above mentioned analysis and design of LDPC codes, and the conception of list-decoding algorithms are two prime examples of such contributions.
The reader will find it interesting to study in more detail the summaries of the talks collected in this report.
Joachim Rosenthal, Mohammad Amin Shokrollahi, Coding Theory. Oberwolfach Rep. 4 (2007), no. 4, pp. 3241–3302
|
Torsion (mechanics) - Simple English Wikipedia, the free encyclopedia
In solid mechanics, torsion is the twisting of an object that is the result of an applied torque. In circular sections, the resultant shearing stress is perpendicular to the radius.
The shear stress at a point on a shaft is:
\tau_{\theta z} = \frac{Tr}{J}
T is the applied torque, r is the distance from the center of rotation, and J is the polar moment of inertia.
The angle of twist can be found by using:
\theta = \frac{TL}{JG}
θ is the angle of twist in radians.
T is the torque (N·m or ft·lbf).
L is the length of the object the torque is being applied to or over.
G is the shear modulus, more commonly called the modulus of rigidity, and is usually given in gigapascals (GPa) or pounds per square inch (psi).
J is the polar moment of inertia, for a round shaft or concentric tube only. For other shapes, J must be determined by other means. For solid shafts the membrane analogy is useful, and for thin-walled tubes of arbitrary shape the shear flow approximation is fairly good if the section is not re-entrant. For thick-walled tubes of arbitrary shape there is no simple solution, and finite element analysis (FEA) may be the best method.
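As a quick numerical check, both formulas above can be evaluated for a solid round shaft. The dimensions and load in the sketch below are illustrative values, not taken from the article.

```python
import math

def polar_moment_solid(d):
    """Polar moment of inertia J = pi*d^4/32 for a solid circular shaft of diameter d (m^4)."""
    return math.pi * d**4 / 32

def max_shear_stress(T, d):
    """Maximum shear stress tau = T*r/J, reached at the outer surface r = d/2 (Pa)."""
    return T * (d / 2) / polar_moment_solid(d)

def angle_of_twist(T, L, d, G):
    """Angle of twist theta = T*L/(J*G) over a length L (radians)."""
    return T * L / (polar_moment_solid(d) * G)

# Illustrative case: 20 mm steel shaft (G ~ 79 GPa), 100 N*m torque, 1 m length
T, d, L, G = 100.0, 0.020, 1.0, 79e9
tau = max_shear_stress(T, d)        # ~63.7 MPa at the surface
theta = angle_of_twist(T, L, d, G)  # ~0.081 rad over the full metre
```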
|
Enterprise Multiple Definition
What Is Enterprise Multiple?
Enterprise multiple, also known as the EV multiple, is a ratio used to determine the value of a company. The enterprise multiple, which is enterprise value divided by earnings before interest, taxes, depreciation, and amortization (EBITDA), looks at a company the way a potential acquirer would by considering the company's debt. What's considered a "good" or "bad" enterprise multiple will depend on the industry.
Formula and Calculation of Enterprise Multiple
\begin{aligned} &\text{Enterprise Multiple} = \frac { \text{EV} }{ \text{EBITDA} } \\ &\textbf{where:}\\ &\text{EV} = \text{Enterprise Value} = \text{Market capitalization} \ + \\ &\text{total debt} - \text{cash and cash equivalents} \\ &\text{EBITDA} = \text{Earnings before interest, taxes, depreciation} \\ &\text{and amortization} \\ \end{aligned}
Enterprise multiple, also known as the EV-to-EBITDA multiple, is a ratio used to determine the value of a company.
It is computed by dividing enterprise value by EBITDA.
The enterprise multiple takes into account a company's debt and cash levels in addition to its stock price and relates that value to the firm's cash profitability.
Enterprise multiples can vary depending on the industry.
Higher enterprise multiples are expected in high-growth industries and lower multiples in industries with slow growth.
What Enterprise Multiple Can Tell You
Investors mainly use a company's enterprise multiple to determine whether a company is undervalued or overvalued. A low ratio relative to peers or historical averages indicates that a company might be undervalued and a high ratio indicates that the company might be overvalued.
An enterprise multiple is useful for transnational comparisons because it ignores the distorting effects of individual countries' taxation policies. It's also used to find attractive takeover candidates since enterprise value includes debt and is a better metric than market capitalization for merger and acquisition (M&A) purposes.
Enterprise multiples can vary depending on the industry. It is reasonable to expect higher enterprise multiples in high-growth industries (e.g. biotech) and lower multiples in industries with slow growth (e.g. railways).
Enterprise value (EV) is a measure of the economic value of a company. It is frequently used to determine the value of the business if it is acquired. It is considered to be a better valuation measure for M&A than a market cap since it includes the debt an acquirer would have to assume and the cash they'd receive.
Example of How to Use Enterprise Multiple
Dollar General (DG) generated $3.86 billion in EBITDA for the trailing 12 months (TTM) as of the fiscal year ended Jan. 28, 2022. The company had $344.8 million in cash and cash equivalents and $14.25 billion in total debt at the end of the same fiscal year.
The company's market cap was $56.2 billion as of April 8, 2022. Dollar General's enterprise multiple is 18.2 [($56.2 billion + $14.25 billion - $344.8 million) / $3.86 billion]. At the same time last year, Dollar General's enterprise multiple was 17.4. The increase in the enterprise multiple is largely a result of the nearly $1 billion decrease in cash on the balance sheet, while EBITDA decreased by only about $300 million. In this example, you can see how the enterprise multiple calculation takes into account both the cash the company has on hand and the debt the company is liable for.
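As a sketch, the Dollar General calculation above can be reproduced in a few lines of Python (figures in billions of dollars, as quoted; the function name is ours, not from the article):

```python
def enterprise_multiple(market_cap, total_debt, cash, ebitda):
    """EV / EBITDA, with EV = market cap + total debt - cash and equivalents."""
    ev = market_cap + total_debt - cash
    return ev / ebitda

# Dollar General, fiscal year ended Jan. 28, 2022 (all figures in $ billions)
multiple = enterprise_multiple(market_cap=56.2, total_debt=14.25,
                               cash=0.3448, ebitda=3.86)
print(round(multiple, 1))  # 18.2
```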
Limitations of Using Enterprise Multiple
An enterprise multiple is a metric used for finding attractive buyout targets. But beware of value traps: stocks with low multiples where the discount is deserved (e.g., the company is struggling and won't recover). This creates the illusion of a value investment, but the fundamentals of the industry or company point toward negative returns.
Investors assume that a stock's past performance is indicative of future returns and when the multiple comes down, they often jump at the opportunity to buy it at a "cheap" value. Knowledge of the industry and company fundamentals can help assess the stock's actual value.
One easy way to do this is to look at expected (forward) profitability and determine whether the projections pass the test. Forward multiples should be lower than the TTM multiples. Value traps occur when these forward multiples look overly cheap, but the reality is the projected EBITDA is too high and the stock price has already fallen, likely reflecting the market's cautiousness. As such, it's important to know the catalysts for the company and industry.
Dollar General. "Form 10-K for the Fiscal Year Ended January 28, 2022," Page 43.
|
A binary intuitionistic fuzzy relation: some new results, a general factorization, and two properties of strict components.
Fono, Louis Aimé, Nana, Gilbert Njanpong, Salles, Maurice, Gwet, Henri (2009)
A characterization of tribes with respect to the Łukasiewicz t-norm
Erich Peter Klement, Mirko Navara (1997)
We give a complete characterization of tribes with respect to the Łukasiewicz t-norm, i.e., of systems of fuzzy sets which are closed with respect to the complement of fuzzy sets and with respect to countably many applications of the Łukasiewicz t-norm. We also characterize all operations with respect to which all such tribes are closed. This generalizes the characterizations obtained so far for other fundamental t-norms, e.g., for the product
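For orientation, the Łukasiewicz t-norm featuring in this abstract is the truncated sum T(a, b) = max(0, a + b − 1) on [0, 1]. A minimal sketch (helper names are ours, not from the paper):

```python
def lukasiewicz_tnorm(a, b):
    """Lukasiewicz t-norm: T(a, b) = max(0, a + b - 1) on [0, 1]."""
    return max(0.0, a + b - 1.0)

def fuzzy_complement(a):
    """Standard fuzzy complement, 1 - a, under which tribes are closed."""
    return 1.0 - a

# T is commutative and has 1 as neutral element:
assert lukasiewicz_tnorm(0.5, 1.0) == 0.5
assert lukasiewicz_tnorm(0.7, 0.6) == lukasiewicz_tnorm(0.6, 0.7)
```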
A connection between Computer Science and Fuzzy Theory: Midpoints and running time of computing.
J. Casasnovas, O. Valero (2008)
A contour view on uninorm properties
Koen C. Maes, Bernard De Baets (2006)
Any given increasing [0,1]^2 → [0,1] function is completely determined by its contour lines. In this paper we show how each individual uninorm property can be translated into a property of contour lines. In particular, we describe commutativity in terms of orthosymmetry and we link associativity to the portation law and the exchange principle. Contrapositivity and rotation invariance are used to characterize uninorms that have a continuous contour line.
A discussion on aggregation operators
Daniel Gómez, Montero, Javier (2004)
It has lately been made very clear that aggregation processes cannot be based upon a unique binary operator. Global aggregation operators have therefore been introduced as families of aggregation operators {T_n}_n, each T_n being an n-ary operator actually amalgamating information whenever the number of items to be aggregated is n. Of course, some mathematical restrictions can be introduced in order to assure an appropriate meaning, consistency and key mathematical capabilities. In this...
A fuzzy logic approach to assembly line balancing.
Daniel J. Fonseca, C. L. Guest, Matthew Elam, Charles L. Karr (2005)
This paper deals with the use of fuzzy set theory as a viable alternative method for modelling and solving the stochastic assembly line balancing problem. Variability and uncertainty in the assembly line balancing problem has traditionally been modelled through the use of statistical distributions. This may not be feasible in cases where no historical data exists. Fuzzy set theory allows for the consideration of the ambiguity involved in assigning processing and cycle times and the uncertainty contained...
Susanne Saminger, Radko Mesiar (2003)
We propose a concept of decomposable bi-capacities based on an analogous property of decomposable capacities, namely the valuation property. We will show that our approach extends the already existing concepts of decomposable bi-capacities. We briefly discuss additive and k-additive bi-capacities based on our definition of decomposability. Finally we provide examples of decomposable bi-capacities in our sense in order to show how they can be constructed.
A new approach for studying fuzzy functional equations.
Deeba, Elias, De Korvin, Andre (2001)
A new definition of the fuzzy set
Andrzej Piegat (2005)
The present fuzzy arithmetic based on Zadeh's possibilistic extension principle and on the classic definition of a fuzzy set has many essential drawbacks. Therefore its application to the solution of practical tasks is limited. In the paper a new definition of the fuzzy set is presented. The definition allows for a considerable fuzziness decrease in the number of arithmetic operations in comparison with the results produced by the present fuzzy arithmetic.
A nonstandard approach to fuzzy set theory
Costas A. Drossos, George Markakis, Mohammad Shakhatreh (1992)
A note about operations like T_W (the weakest t-norm) based addition on fuzzy intervals
Dug Hun Hong (2009)
We investigate a relation about subadditivity of functions. Based on subadditivity of functions, we consider some conditions for continuous t-norms to act as the weakest T_W-based addition. This work extends some results of Marková-Stupňanová [15], Mesiar [18].
{ℋ}_{3}
Celani, Sergio A. (1997)
A note on connectedness in intuitionistic fuzzy special topological spaces.
Özçaǧ, Selma, Çoker, Doǧan (2000)
A note on fuzzy cardinals
T-norm-based operations on LR fuzzy intervals
Róbert Fullér, Tibor Keresztfalvi (1992)
A possibilistic view on set and multiset comparison
Antoon Bronselaer, Axel Hallez, Guy De Tré (2009)
A reflection on what is a membership function.
Enric Trillas, Claudi Alsina (1999)
This paper is just a first approach to the idea that the membership function μ_P of a fuzzy set labelled P is, basically, a measure on the set of linguistic expressions "x is P" for each x in the corresponding universe of discourse X. Estimating that the meaning of P (relative to X) is nothing else than the use of P on X, these measures seem to be reached by generalizing to a preordered set the concept of Fuzzy Measure, introduced by M. Sugeno, when the preorder translates the primary use of the...
|
There is a constant ε_* > 0 with the following property: if a suitable weak solution (v, p) of the Navier-Stokes equations in a domain Ω ⊂ ℝ³, with v ∈ L³ and p ∈ L^{3/2} near a point (x₀, t₀), satisfies
\limsup_{R \to 0^+} \frac{1}{R} \int_{Q_R(x_0,t_0)} \left| \operatorname{curl} v \times \frac{v}{|v|} \right|^2 \, dx \, dt \le \varepsilon_*,
then v is regular in a neighborhood of (x₀, t₀).
A new regularity criterion for strong solutions to the Ericksen-Leslie system
Sadek Gala, Maria Alessandra Ragusa (2016)
A regularity criterion for strong solutions of the Ericksen-Leslie equations is established in terms of both the pressure and orientation field in homogeneous multiplier spaces.
A new regularity criterion for the Navier-Stokes equations.
Yue, Hu, Li, Wu-Ming (2011)
A note on the generalized energy inequality in the Navier-Stokes equations
Petr Kučera, Zdeněk Skalák (2003)
We prove that there exists a suitable weak solution of the Navier-Stokes equation, which satisfies the generalized energy inequality for every nonnegative test function. This improves the famous result on existence of a suitable weak solution which satisfies this inequality for smooth nonnegative test functions with compact support in the space-time.
A parabolic system involving a quadratic gradient term related to the Boussinesq approximation.
Jesús Ildefonso Díaz, Jean-Michel Rakotoson, Paul G. Schmidt (2007)
We propose a modification of the classical Boussinesq approximation for buoyancy-driven flows of viscous, incompressible fluids in situations where viscous heating cannot be neglected. This modification is motivated by unresolved issues regarding the global solvability of the original system. A very simple model problem leads to a coupled system of two parabolic equations with a source term involving the square of the gradient of one of the unknowns. Based on adequate notions of weak and strong...
J. Lederer, R. Lewandowski (2007)
A regularity criterion for the Navier-Stokes equations in terms of the horizontal derivatives of the two velocity components.
Chen, Wenying, Gala, Sadek (2011)
A regularity criterion for the Navier-Stokes equations in terms of the pressure gradient
Stefano Bosia, Monica Conti, Vittorino Pata (2014)
The incompressible three-dimensional Navier-Stokes equations are considered. A new regularity criterion for weak solutions is established in terms of the pressure gradient.
A remark on the regularity for the 3D Navier-Stokes equations in terms of the two components of the velocity.
A short note on L^q theory for the Stokes problem with a pressure-dependent viscosity
Václav Mácha (2016)
We study higher local integrability of a weak solution to the steady Stokes problem. We consider the case of a pressure- and shear-rate-dependent viscosity, i.e., the elliptic part of the Stokes problem is assumed to be nonlinear, depending on p and on the symmetric part of the gradient of u; namely, it is represented by a stress tensor T(Du, p) := ν(p, |D|²)D satisfying an r-growth condition with r ∈ (1, 2]. In order to get the main result, we use Calderón-Zygmund theory and the method which was presented for example in...
A short note on regularity criteria for the Navier-Stokes equations containing the velocity gradient
Milan Pokorný (2005)
We review several regularity criteria for the Navier-Stokes equations and prove some new ones, containing different components of the velocity gradient.
A stochastic lagrangian proof of global existence of the Navier-Stokes equations for flows with small Reynolds number
Gautam Iyer (2009)
[Only formula fragments of this abstract survive: far-field conditions ρ_i|_∞ = ρ_{i∞} > 0 and u^{(i)}|_∞ = 0, and the constant states ρ_i ≡ ρ_{i∞}, u^{(i)} ≡ 0, for i = 1, 2.]
Additional note on partial regularity of weak solutions of the Navier-Stokes equations in the class L^∞(0, T; L³(Ω)³)
We present a simplified proof of a theorem proved recently concerning the number of singular points of weak solutions to the Navier-Stokes equations. If a weak solution u belongs to L^∞(0, T; L³(Ω)³), then the set of all possible singular points of u in Ω is at most finite at every time t₀ ∈ (0, T).
Almost global solutions of the free boundary problem for the equations of a magnetohydrodynamic incompressible fluid
Piotr Kacprzyk (2004)
Almost global in time existence of solutions for equations describing the motion of a magnetohydrodynamic incompressible fluid in a domain bounded by a free surfaced is proved. In the exterior domain we have an electromagnetic field which is generated by some currents which are located on a fixed boundary. We prove that a solution exists for t ∈ (0,T), where T > 0 is large if the data are small.
An optimal control problem for a generalized Boussinesq model: The time dependent case.
Jose Luis Boldrini, Enrique Fernández-Cara, Marko Antonio Rojas-Medar (2007)
Analysis of the flows of incompressible fluids with pressure dependent viscosity fulfilling ν(p, ·) → +∞ as p → +∞
M. Bulíček, Josef Málek, Kumbakonam R. Rajagopal (2009)
Over a large range of the pressure, one cannot ignore the fact that the viscosity grows significantly (even exponentially) with increasing pressure. This paper concerns long-time and large-data existence results for a generalization of the Navier-Stokes fluid whose viscosity depends on the shear rate and the pressure. The novelty of this result stems from the fact that we allow the viscosity to be an unbounded function of pressure as it becomes infinite. In order to include a large class of viscosities...
|
Kai Köhler, Damien Roessler (2002)
This is the second of a series of papers dealing with an analog in Arakelov geometry of the holomorphic Lefschetz fixed point formula. We use the main result of the first paper to prove a residue formula "à la Bott" for arithmetic characteristic classes living on arithmetic varieties acted upon by a diagonalisable torus; recent results of Bismut- Goette on the equivariant (Ray-Singer) analytic torsion play a key role in the proof.
A mod k index theorem.
Daniel S. Freed, Richard B. Melrose (1992)
A Remark on Lefschetz Formulae for Modest Vector Bundles.
K.H. Mayer (1975)
An analytic proof of Novikov's theorem on rational Pontrjagin classes
Dennis Sullivan, Nicolae Teleman (1983)
Anti-self-dual orbifolds with cyclic quotient singularities
Michael T. Lock, Jeff A. Viaclovsky (2015)
An index theorem for the anti-self-dual deformation complex on anti-self-dual orbifolds with cyclic quotient singularities is proved. We present two applications of this theorem. The first is to compute the dimension of the deformation space of the Calderbank–Singer scalar-flat Kähler toric ALE spaces. A corollary of this is that, except for the Eguchi–Hanson metric, all of these spaces admit non-toric anti-self-dual deformations, thus yielding many new examples of anti-self-dual ALE spaces. For...
\eta
Weiping Zhang (1994)
We present a direct analytic treatment of the Rokhlin congruence formula by calculating the adiabatic limit of η-invariants of Dirac operators on circle bundles. Extensions to higher dimensions are obtained.
Claude Sabbah (1995/1996)
Coexistence state of a reaction-diffusion system.
Meng, Yijie, Wang, Yifu (2007)
Coherent orientations for periodic orbit problems in symplectic geometry.
H. Hofer, A. Floer (1993)
Combinatorial Hodge Theory and Signature Operator.
Nicolae Teleman (1980)
Cusp Forms and the Index Theorem for Manifolds with Boundary.
J.C. Hemperly (1975)
Der Indexsatz für geschlossene Geodätische.
Wilhelm Klingenberg (1974)
Déterminant relatif et la fonction Xi
Gilles Carron (1999/2000)
|
Physics - Two Spins Take the Quantum Bus
Department of Physics, University of Konstanz, Konstanz, Germany
Coupling between remote spins on a chip via virtual photons exchanged through a superconducting resonator could lead to gate operations between distant spin qubits.
Figure 1: Two qubits coupled to the same microwave resonator can be coupled via the exchange of virtual photons.
Both superconducting and semiconducting systems are the focus of intense ongoing research and development as platforms for quantum-information hardware. At the same time, superconducting-semiconducting hybrid quantum systems are attracting considerable attention, since they can combine interesting physical properties of both material classes and can sometimes exhibit entirely new properties. Now, a group led by Lieven Vandersypen at Delft University of Technology in the Netherlands has succeeded in coherently coupling two electron spins in separate semiconductor nanostructures via the exchange of virtual photons through a superconducting microwave resonator [1]. Their demonstration marks a milestone in semiconductor spin-qubit research and offers new possibilities for spin-based quantum computing.
Nearly a century ago, physicists found that the electron comes with an intrinsic quantized angular momentum. This electron spin lends itself to quantum computing, as its two states, up and down, can serve as the “0” and “1” of a qubit [2]. While the electron charge reacts to both electric and magnetic fields, the spin only couples to the magnetic field. For spin qubits, this is a good thing because in solid-state devices the main source of deleterious noise is electric. This spin advantage is particularly true for isotopically purified silicon (the material used by the Delft team [1]), where magnetic noise is suppressed to an exceedingly low level.
However, the noninteraction of spin with electric fields can pose a problem when one needs a long-range interaction, or “quantum bus,” for coupling two distant qubits (Fig. 1). In the superconducting world, this problem was solved 15 years ago by using a superconducting microwave resonator as quantum bus between two distant superconducting qubits on a processor chip [3]. Superconducting qubits are based on electric charge, which couples directly to the electric field created by a cavity photon. Microwave photons are typically used for on-chip communication because their frequency matches the transition frequency of both superconductor and semiconductor qubits. However, for spin qubits, photon-mediated interactions are more difficult to realize, as the coupling between electron spins and the magnetic field of a cavity photon is much weaker. How, then, can long-range interactions between spin qubits be achieved while protecting those qubits from electric fluctuations? This problem can be solved if the coupling of the spin to the electric field can be effectively switched on and off.
The new work by the Delft group [1] makes use of an effective method to control the spin-photon coupling. The method uses a trick called a flopping-mode qubit, in which a single electron behaves as both a spin qubit and a charge qubit [4]. The charge qubit is associated with the electron’s position in a double-well potential, with the left side of the potential chosen to be the “0” state and the right side chosen for the “1” state. The coupling of the spin and charge qubits relies on the presence of a magnetic-field gradient produced by an on-chip micromagnet [5], which causes the spin qubit to experience a different magnetic field when the electron moves between the two sides of the potential.
The spin-charge-coupling mechanism can be switched on by making the double-well potential symmetric, thus allowing the electron to explore both wells (Fig. 2). In this condition, the system is maximally sensitive to the electric field of a cavity photon. The coupling is switched off by making the double well very asymmetric, such that the electron falls into the lower potential well where it remains immobile and thus insensitive to electric fields.
Figure 2: The spin qubit in the flopping mode consists of a single electron occupying a double quantum dot, indicated by the double-well potential. The electron can tunnel between the “left” and “right” minima. This degree of freedom defines a charge qubit, which couples to photons by its electric dipole moment. This coupling can also affect the spin of the electron via the difference between the left and right magnetic fields B_L and B_R. When the double well is symmetric (top), the coupling to photons is “on.” But when it is made asymmetric (bottom), the electron becomes trapped in the lower well, and the coupling to photons turns “off.”
This technique enabled a series of breakthrough experiments, published in 2018 [6–8], in which strong spin-photon coupling was first observed between the electronic spin in a semiconductor quantum-dot structure and the photons in a superconducting microwave resonator. The achievement of the strong coupling regime of cavity quantum electrodynamics is significant and means that the spin-photon coupling exceeds the spin and cavity decay rates. This allows for the coherent exchange of quantum information between the spin and the photon and represents the first necessary step toward the exchange of quantum information between two spins via the cavity photons (Fig. 1). Using this technique, researchers observed the first signatures of cavity-mediated interactions between two remote spin qubits [9]. In this case, the microwave cavity mode was on resonance, which means the cavity-photon and spin-qubit energies coincided.
Building on this earlier work, the Delft team designed an experiment with two silicon-based quantum dots connected to the ends of a 250-μm-long superconducting microwave cavity [1]. The researchers tuned the energy splitting between the up and down states for each of the two qubits using a combination of on-chip micromagnets and a homogeneous external magnetic field [9]. What sets the new work apart is that it operates in the so-called strong dispersive regime, where the cavity-photon and qubit frequencies differ by an amount that is much greater than the spin-photon coupling and where the spin-photon interaction creates an energy shift in the cavity frequency exceeding the spin and cavity decay rates. As a result of these energy offsets, interactions between the two qubits are mediated by virtual photons—in other words, photons that have a very small probability of being detected in the cavity. Coupling spins via virtual photons is the preferred method because it is much less affected by the occasional loss of cavity photons. Previous work has utilized the strong dispersive regime for realizing superconducting two-qubit gates [3], and the regime has been targeted in models of future semiconductor spin-based two-qubit gates [10].
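For orientation, a standard cavity-QED estimate (not taken from the paper) captures why the dispersive regime produces a usable coupling: two qubits with spin-photon couplings g₁, g₂, detuned from the cavity by Δ₁, Δ₂ with |Δᵢ| ≫ gᵢ, acquire an effective exchange interaction of strength roughly

```latex
J \;\approx\; \frac{g_1 g_2}{2}\left(\frac{1}{\Delta_1} + \frac{1}{\Delta_2}\right),
\qquad |\Delta_i| \gg g_i ,
```

so the interaction persists even though the cavity mode is only virtually populated, at the cost of a reduction by the small factors gᵢ/Δᵢ.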
To achieve strong dispersive coupling, the Delft team used a high-impedance superconducting resonator to strengthen the charge-photon coupling and, as a result, the spin-photon coupling [1]. Having reached the strong dispersive regime, the Delft team realized two hallmark demonstrations. First, they observed the nonlocal spin-spin exchange mediated by virtual cavity photons. This demonstration required the exchange coupling to be larger than the decay rates and involved turning on the coupling of both spins to the common photon mode simultaneously. The team then tuned the qubit frequencies by rotating the external magnetic field and monitored the states with a microwave pump-probe detection scheme. As the two qubit frequencies became equal, the qubit states hybridized and produced a measurable splitting in the energy of the spin states. The second hallmark demonstration was the observation of a photon-number-dependent shift of the qubit energy, which will allow for photon-state measurement and high-fidelity qubit readout by measuring a qubit-state-dependent shift of the photon frequency.
With these new results, spin qubits are catching up with superconducting qubits as a practical quantum-computing platform. What still needs to be demonstrated are quantum bits that can be made to work in the time domain by switching the coupling on and off. But the newly achieved long-range connectivity—combined with the potential for high-fidelity readout, the long coherence times, and the small footprint on the chip—puts semiconductor spin qubits in an ideal position to realize large-scale and high-fidelity quantum processors.
P. Harvey-Collard et al., “Coherent spin-spin coupling mediated by virtual microwave photons,” Phys. Rev. X 12, 021026 (2022).
J. Majer et al., “Coupling superconducting qubits via a cavity bus,” Nature 449, 443 (2007).
M. Benito et al., “Input-output theory for spin-photon coupling in Si double quantum dots,” Phys. Rev. B 96, 235434 (2017).
M. Pioro-Ladrière et al., “Electrically driven single-electron spin resonance in a slanting Zeeman field,” Nature Physics 4, 776 (2008).
X. Mi et al., “A coherent spin–photon interface in silicon,” Nature 555, 599 (2018).
N. Samkharadze et al., “Strong spin-photon coupling in silicon,” Science 359, 1123 (2018).
A. J. Landig et al., “Coherent spin–photon coupling using a resonant exchange qubit,” Nature 560, 179 (2018).
F. Borjans et al., “Resonant microwave-mediated interactions between distant electron spins,” Nature 577, 195 (2019).
Ada Warren et al., “Long-distance entangling gates between quantum dot spins mediated by a superconducting resonator,” Phys. Rev. B 100, 161303 (2019); M. Benito et al., “Optimized cavity-mediated dispersive two-qubit gates between spin qubits,” 100, 081412 (2019).
Guido Burkard studied physics at the Swiss Federal Institute of Technology (ETH) in Zurich and received his Ph.D. from the University of Basel in Switzerland. Since 2008, he has been a full professor at the University of Konstanz, Germany. He previously held a faculty position at RWTH Aachen University, Germany, and was SNF assistant professor at the University of Basel, after a postdoctoral appointment with the IBM Thomas J. Watson Research Center at Yorktown Heights, New York. His research interests encompass condensed-matter theory and quantum information, with special focus on the theory of solid-state qubits and hybrid quantum systems. In 2019 he was recognized as an Outstanding Referee by the American Physical Society
Coherent Spin-Spin Coupling Mediated by Virtual Microwave Photons
Patrick Harvey-Collard, Jurgen Dijkema, Guoji Zheng, Amir Sammak, Giordano Scappucci, and Lieven M. K. Vandersypen
Spintronics | Semiconductor Physics
|
(−)-menthol dehydrogenase – Wikipedia
A (−)-menthol dehydrogenase (EC 1.1.1.207) is an enzyme that catalyzes the chemical reaction
(−)-menthol + NADP+ ⇌ (−)-menthone + NADPH + H+,
i.e., it catalyses the breakdown of menthol. Thus, the two substrates of this enzyme are (−)-menthol and NADP+, whereas its three products are (−)-menthone, NADPH, and H+.
This enzyme belongs to the family of oxidoreductases, specifically those acting on the CH-OH group of donor with NAD+ or NADP+ as acceptor. The systematic name of this enzyme class is (−)-menthol:NADP+ oxidoreductase. This enzyme is also called monoterpenoid dehydrogenase. This enzyme participates in monoterpenoid biosynthesis.
Kjonaas R, Martinkus-Taylor C, Croteau R (May 1982). "Metabolism of Monoterpenes: Conversion of l-Menthone to l-Menthol and d-Neomenthol by Stereospecific Dehydrogenases from Peppermint (Mentha piperita) Leaves". Plant Physiology. 69 (5): 1013–7. doi:10.1104/pp.69.5.1013. JSTOR 4267341. PMC 426349. PMID 16662335.
|
(2-aminoethyl)phosphonate:pyruvate aminotransferase Wikipedia
In enzymology, a 2-aminoethylphosphonate—pyruvate transaminase (EC 2.6.1.37) is an enzyme that catalyzes the chemical reaction
(2-aminoethyl)phosphonate + pyruvate ⇌ 2-phosphonoacetaldehyde + L-alanine
Thus, the two substrates of this enzyme are (2-aminoethyl)phosphonate and pyruvate, whereas its two products are 2-phosphonoacetaldehyde and L-alanine.
This enzyme belongs to the family of transferases, specifically the transaminases, which transfer nitrogenous groups. The systematic name of this enzyme class is (2-aminoethyl)phosphonate:pyruvate aminotransferase. Other names in common use include (2-aminoethyl)phosphonate transaminase, (2-aminoethyl)phosphonate aminotransferase, (2-aminoethyl)phosphonic acid aminotransferase, 2-aminoethylphosphonate-pyruvate aminotransferase, 2-aminoethylphosphonate aminotransferase, 2-aminoethylphosphonate transaminase, AEP transaminase, and AEPT. This enzyme participates in aminophosphonate metabolism. It employs one cofactor, pyridoxal phosphate.
As of late 2007, only one structure has been solved for this class of enzymes, with the PDB accession code 1M32.
La Nauze JM, Rosenberg H (1968). "The identification of 2-phosphonoacetaldehyde as an intermediate in the degradation of 2-aminoethylphosphonate by Bacillus cereus". Biochim. Biophys. Acta. 165 (3): 438–47. doi:10.1016/0304-4165(68)90223-7. PMID 4982500.
Dumora C, Lacoste AM, Cassaigne A (1983). "Purification and properties of 2-aminoethylphosphonate:pyruvate aminotransferase from Pseudomonas aeruginosa". Eur. J. Biochem. 133 (1): 119–25. doi:10.1111/j.1432-1033.1983.tb07436.x. PMID 6406228.
Lacoste AM, Dumora C, Balas L, Hammerschmidt F, Vercauteren J (1993). "Stereochemistry of the reaction catalysed by 2-aminoethylphosphonate aminotransferase. A 1H-NMR study". Eur. J. Biochem. 215 (3): 841–4. doi:10.1111/j.1432-1033.1993.tb18100.x. PMID 8394813.
Lacoste AM, Dumora C, Ali BR, Neuzil E, Dixon HB (1992). "Utilization of 2-aminoethylarsonic acid in Pseudomonas aeruginosa". J. Gen. Microbiol. 138 (6): 1283–7. doi:10.1099/00221287-138-6-1283. PMID 1527499.
|
Unsharp masking - Wikipedia
Unsharp masking applied to lower part of image
Unsharp masking (USM) is an image sharpening technique, first implemented in darkroom photography, but now commonly used in digital image processing software. Its name derives from the fact that the technique uses a blurred, or "unsharp", negative image to create a mask of the original image.[1] The unsharp mask is then combined with the original positive image, creating an image that is less blurry than the original. The resulting image, although clearer, may be a less accurate representation of the image's subject.
In the context of signal processing, an unsharp mask is generally a linear or nonlinear filter that amplifies the high-frequency components of a signal.
1 Photographic darkroom unsharp masking
2 Digital unsharp masking
2.1 Local contrast enhancement
3 Comparison with deconvolution
Photographic darkroom unsharp masking[edit]
Simplified principle of unsharp masking
For the photographic darkroom process, a large-format glass plate negative is contact-copied onto a low-contrast film or plate to create a positive image. However, the positive copy is made with the copy material in contact with the back of the original, rather than emulsion-to-emulsion, so it is blurred. After processing this blurred positive is replaced in contact with the back of the original negative. When light is passed through both negative and in-register positive (in an enlarger, for example), the positive partially cancels some of the information in the negative.
Because the positive has been blurred intentionally, only the low-frequency (blurred) information is cancelled. In addition, the mask effectively reduces the dynamic range of the original negative. Thus, if the resulting enlarged image is recorded on contrasty photographic paper, the partial cancellation emphasizes the high-spatial-frequency information (fine detail) in the original, without loss of highlight or shadow detail. The resulting print appears more acute than one made without the unsharp mask: its acutance is increased.
In the photographic procedure, the amount of blurring can be controlled by changing the "softness" or "hardness" (from point source to fully diffuse) of the light source used for the initial unsharp mask exposure, while the strength of the effect can be controlled by changing the contrast and density (i.e., exposure and development) of the unsharp mask.
For traditional photography, unsharp masking is usually used on monochrome materials; special panchromatic soft-working black-and-white films have been available for masking photographic colour transparencies. This has been especially useful to control the density range of a transparency intended for photomechanical reproduction.
Digital unsharp masking[edit]
Source image (top),
sharpened image (middle),
highly sharpened image (bottom)
The same differencing principle is used in the unsharp-masking tool in many digital-imaging software packages, such as Adobe Photoshop and GIMP.[2] The software applies a Gaussian blur to a copy of the original image and then compares it to the original. If the difference is greater than a user-specified threshold setting, the images are (in effect) subtracted.
Digital unsharp masking is a flexible and powerful way to increase sharpness, especially in scanned images. Unfortunately, it may create unwanted conspicuous edge effects or increase image noise. However, these effects can be used creatively, especially if a single channel of an RGB or Lab image is sharpened. Undesired effects can be reduced by using a mask—particularly one created by edge detection—to only apply sharpening to desired regions, sometimes termed "smart sharpen".
Typically, digital unsharp masking is controlled via the amount, radius and threshold:
Amount is listed as a percentage and controls the magnitude of each overshoot (how much darker and how much lighter the edge borders become). This can also be thought of as how much contrast is added at the edges. It does not affect the width of the edge rims.
Radius affects the size of the edges to be enhanced or how wide the edge rims become, so a smaller radius enhances smaller-scale detail. Higher radius values can cause halos at the edges, a detectable faint light rim around objects. Fine detail needs a smaller radius. Radius and amount interact; reducing one allows more of the other.
Threshold controls the minimal brightness change that will be sharpened or how far apart adjacent tonal values have to be before the filter does anything. This lack of action is important to prevent smooth areas from becoming speckled. The threshold setting can be used to sharpen more pronounced edges, while leaving subtler edges untouched. Low values should sharpen more because fewer areas are excluded. Higher threshold values exclude areas of lower contrast.
Various recommendations exist for starting values of these parameters,[3] and the meaning may differ between implementations. Generally a radius of 0.5 to 2 pixels and an amount of 50–150% is recommended.
It is also possible to implement USM manually, by creating a separate layer to act as the mask;[2] this can be used to help understand how USM works or for fine customization.
The typical blending formula for unsharp masking is
sharpened = original + (original − blurred) × amount.
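The procedure above can be sketched in a few lines of code. The following is an illustrative pure-Python implementation (the function names are not taken from any particular package): it blurs with a sampled 1-D Gaussian whose standard deviation plays the role of the radius, treats amount as a multiplier rather than a percentage, and applies the threshold to the difference signal.

```python
import math

def gaussian_kernel(radius, truncate=3.0):
    """Sampled 1-D Gaussian of standard deviation `radius`, normalized to sum 1."""
    n = int(truncate * radius + 0.5)
    k = [math.exp(-(i * i) / (2.0 * radius * radius)) for i in range(-n, n + 1)]
    s = sum(k)
    return [v / s for v in k]

def blur(signal, radius):
    """Convolve with the Gaussian kernel, clamping indices at the borders."""
    k = gaussian_kernel(radius)
    n = len(k) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - n, 0), len(signal) - 1)
            acc += w * signal[idx]
        out.append(acc)
    return out

def unsharp_mask(signal, radius=2.0, amount=1.0, threshold=0.0):
    """sharpened = original + (original - blurred) * amount,
    applied only where |original - blurred| exceeds the threshold."""
    blurred = blur(signal, radius)
    out = []
    for orig, b in zip(signal, blurred):
        diff = orig - b
        out.append(orig + amount * diff if abs(diff) > threshold else orig)
    return out

# A step edge: sharpening overshoots on both sides of the transition,
# dipping below 0 and rising above 1, which is what makes edges look crisper.
edge = [0.0] * 10 + [1.0] * 10
sharp = unsharp_mask(edge, radius=2.0, amount=1.0, threshold=0.01)
print(min(sharp) < 0.0, max(sharp) > 1.0)  # True True
```

Running the same function with a large radius and a small amount would instead boost local contrast, as discussed in the next subsection.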
Local contrast enhancement[edit]
Unsharp masking may also be used with a large radius and a small amount (such as 30–100 pixel radius and 5–20% amount[4]), which yields increased local contrast, a technique termed local contrast enhancement.[4][5] USM can increase either sharpness or (local) contrast because these are both forms of increasing differences between values, increasing slope—sharpness referring to very small-scale (high-frequency) differences, and contrast referring to larger-scale (low-frequency) differences. More powerful techniques for improving tonality are referred to as tone mapping.
Comparison with deconvolution[edit]
For image processing, deconvolution is the process of approximately inverting the process that caused an image to be blurred. Specifically, unsharp masking is a simple linear image operation—a convolution by a kernel that is the Dirac delta minus a gaussian blur kernel. Deconvolution, on the other hand, is generally considered an ill-posed inverse problem that is best solved by nonlinear approaches. While unsharp masking increases the apparent sharpness of an image in ignorance of the manner in which the image was acquired, deconvolution increases the apparent sharpness of an image, but is based on information describing some of the likely origins of the distortions of the light path used in capturing the image; it may therefore sometimes be preferred, where the cost in preparation time and per-image computation time are offset by the increase in image clarity.
With deconvolution, "lost" image detail may be approximately recovered, although it generally is impossible to verify that any recovered detail is accurate. Statistically, some level of correspondence between the sharpened images and the actual scenes being imaged can be attained. If the scenes to be captured in the future are similar enough to validated image scenes, then one can assess the degree to which recovered detail may be accurate. The improvement to image quality is often attractive, since the same validation issues are present even for un-enhanced images.
For deconvolution to be effective, all variables in the image scene and capturing device need to be modeled, including aperture, focal length, distance to subject, lens, and media refractive indices and geometries. Applying deconvolution successfully to general-purpose camera images is usually not feasible, because the geometries of the scene are not set. However, deconvolution is applied in reality to microscopy and astronomical imaging, where the value of gained sharpness is high, imaging devices and the relative subject positions are both well defined, and optimization of the imaging devices to improve sharpness physically would cost significantly more. In cases where a stable, well-defined aberration is present, such as the lens defect in early Hubble Space Telescope images, deconvolution is an especially effective technique.
In the example below, the image is convolved with the following sharpening filter:
Sharpen filter
{\displaystyle {\begin{bmatrix}\ \ 0&-1&\ \ 0\\-1&\ \ 5&-1\\\ \ 0&-1&\ \ 0\end{bmatrix}}}
This matrix is obtained using the equation shown above under Digital unsharp masking, using a uniform cross-shaped kernel of 5 pixels (each weighted 1/5) for the "blurred" image, and 5 for the "amount" multiplier:
{\displaystyle {\begin{bmatrix}0&0&0\\0&1&0\\0&0&0\end{bmatrix}}+\left({\begin{bmatrix}0&0&0\\0&1&0\\0&0&0\end{bmatrix}}-{\begin{bmatrix}0&1&0\\1&1&1\\0&1&0\end{bmatrix}}/5\right)5={\begin{bmatrix}\ \ 0&-1&\ \ 0\\-1&\ \ 5&-1\\\ \ 0&-1&\ \ 0\end{bmatrix}}}
The sharpening effect can be controlled by varying the multiplier. The value of 5 was chosen here to yield a kernel with integer values, but this is not a requirement for the operation.
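The kernel arithmetic above can be checked in a few lines (illustrative Python; plain nested lists stand in for the convolution kernels):

```python
# Reproduce the sharpening kernel as identity + (identity - blur) * amount,
# with the cross-shaped uniform blur kernel (5 taps, each weighted 1/5)
# and amount = 5, as in the text.
identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
blur = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]  # normalized by 1/5 below
amount = 5

sharpen = [
    [identity[r][c] + (identity[r][c] - blur[r][c] / 5) * amount
     for c in range(3)]
    for r in range(3)
]
print(sharpen)  # [[0.0, -1.0, 0.0], [-1.0, 5.0, -1.0], [0.0, -1.0, 0.0]]
```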
The second image has been sharpened twice as much as the first.
^ Fulton, Wayne (1997–2010). "A few scanning tips, Sharpening - Unsharp Mask". Scantips.com. Archived from the original on 2019-04-27. Retrieved 1 October 2019.
^ a b 4.9. Unsharp Mask, esp. 4.9.4. How does an unsharp mask work?, Gimp documentation.
^ Guide to Image Sharpening, Cambridge in Color.
^ a b Local Contrast Enhancement, Cambridge in Color.
^ Understanding Local Contrast Enhancement, The Luminous Landscape.
Sharpening With a Stiletto, Dan Margulis, February, 1998
Life on the Edge, Dan Margulis, January, 2005
Excel spreadsheet that calculates an Unsharp Mask
Interactive Example of Unsharp Mask
PhotoKit Sharpener User Guide
Sharpening 101, mirror of by thom, Aug 1, 2003
The Unsharp Mask: Analog Photoshop, Sample of unsharp masking in the darkroom, before digital
|
$1\frac{1}{2}$-generation of finite simple groups.
Stein, Alexander (1998)
(2,3)-generation of the groups PSL6(q)
Tabakov, K., Tchakerian, K. (2011)
2010 Mathematics Subject Classification: 20F05, 20D06. We prove that the group PSL6(q) is (2,3)-generated for any q. In fact, we provide explicit generators x and y of orders 2 and 3, respectively, for the group SL6(q).
A block-theory-free characterization of $M_{24}$
D. Held, J. Hrabě de Angelis (1989)
A characterization of the groups Fi22, Fi23 and F24.
Richard Weiss, John van Bon (1992)
A combinatorial proof of the extension property for partial isometries
Jan Hubička, Matěj Konečný, Jaroslav Nešetřil (2019)
We present a short and self-contained proof of the extension property for partial isometries of the class of all finite metric spaces.
A cyclically pinched product of free groups which is not residually free.
F. Levin, G. Rosenberger, B. Baumslag (1993)
A generalized Hopf formula for higher homology groups.
Ralph Stöhr (1989)
A geometric study of Fibonacci groups.
Helling, H., Kim, A.C., Mennicke, J.L. (1998)
A natural framing of knots.
Greene, Michael, Wiest, Bert (1998)
A new efficient presentation for $PSL(2,5)$ and the structure of the groups $G(3,m,n)$
Bilal Vatansever, David M. Gill, Nuran Eren (2000)
$G(3,m,n)$ is the group presented by $\langle a,b \mid a^{5}=(ab)^{2}=b^{m+3}a^{-n}b^{m}a^{-n}=1\rangle$. In this paper, we study the structure of $G(3,m,n)$. We also give a new efficient presentation for the Projective Special Linear group $PSL(2,5)$, and in particular we prove that $PSL(2,5) \cong G(3,m,n)$.
A note on torsion-by-nilpotent groups
Tarek Rouabhi, Nadir Trabelsi (2007)
|
Discrete-time notch filter with varying coefficients - Simulink - MathWorks Benelux
The block implements the Tustin discretization of a continuous-time notch filter with varying coefficients. Feed the continuous-time values of the notch frequency, minimum gain, and damping ratio to the freq, gmin, and damp input ports, respectively. These parameters control the depth and frequency of the notch, as shown in the following illustration. The damping ratio damp controls the notch width Δ; a larger damp means a larger Δ.
Continuous-time value of the notch frequency, specified in rad/s.
Continuous-time value of the gain at notch frequency, in absolute units. This value controls the notch depth. The notch filter has unit gain at low and high frequency. The gain is lowest at the notch frequency.
Continuous-time value of the damping ratio, specified as a positive scalar value. The damping ratio controls the notch width; the closer to 0, the steeper the notch.
Pre-warping frequency, specified as a positive scalar. Discretization of the continuous-time notch-filter transfer function can shift the notch frequency when it is close to the Nyquist frequency. To ensure that the continuous and discrete filters have matching frequency response near a particular frequency w0, set this parameter to w0. The default value w0 = 0 corresponds to the bilinear (Tustin) transformation without pre-warp:
s=\frac{2}{{T}_{s}}\left(\frac{z-1}{z+1}\right),

where Ts is the block sample time.
Block sample time, specified as a positive scalar. This block does not support inherited sample time, because it requires a specified sample time to compute the discretization of the notch filter.
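The block's internal realization is not given here, but the effect of Tustin discretization with pre-warping can be sketched as follows. The sketch assumes a standard notch parameterization, H(s) = (s² + 2·gmin·damp·ωₙ·s + ωₙ²) / (s² + 2·damp·ωₙ·s + ωₙ²), which has unit gain at low and high frequency and gain gmin at ωₙ; the function names are illustrative, not the block's actual implementation.

```python
import cmath
import math

def discretize_notch(wn, gmin, damp, Ts, w0=0.0):
    """Tustin-discretize the (assumed) notch transfer function
    H(s) = (s^2 + 2*gmin*damp*wn*s + wn^2) / (s^2 + 2*damp*wn*s + wn^2)."""
    b = [1.0, 2.0 * gmin * damp * wn, wn * wn]  # numerator coefficients
    a = [1.0, 2.0 * damp * wn, wn * wn]         # denominator coefficients
    # Bilinear substitution s = c*(z-1)/(z+1): c = 2/Ts is plain Tustin;
    # c = w0/tan(w0*Ts/2) pre-warps so the responses match exactly at w0.
    c = 2.0 / Ts if w0 == 0.0 else w0 / math.tan(w0 * Ts / 2.0)

    def to_z(p):
        # Coefficients of p0*c^2*(z-1)^2 + p1*c*(z-1)*(z+1) + p2*(z+1)^2
        return [p[0] * c * c + p[1] * c + p[2],
                -2.0 * p[0] * c * c + 2.0 * p[2],
                p[0] * c * c - p[1] * c + p[2]]

    num, den = to_z(b), to_z(a)
    g = den[0]  # normalize the leading denominator coefficient to 1
    return [v / g for v in num], [v / g for v in den]

def gain(num, den, w, Ts):
    """Magnitude of the discretized filter at frequency w (rad/s)."""
    z = cmath.exp(1j * w * Ts)
    H = (num[0] * z * z + num[1] * z + num[2]) / \
        (den[0] * z * z + den[1] * z + den[2])
    return abs(H)

# With pre-warping at the notch frequency, the discrete gain there equals gmin.
wn, gmin, damp, Ts = 100.0, 0.1, 0.5, 0.01
num, den = discretize_notch(wn, gmin, damp, Ts, w0=wn)
print(round(gain(num, den, wn, Ts), 6))  # 0.1
```

Setting w0 to the notch frequency makes the discrete gain at ωₙ match gmin exactly, which illustrates why pre-warping matters when the notch sits near the Nyquist frequency.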
|
Physics - Retrospective—Electromagnons offer the best of two worlds
Retrospective—Electromagnons offer the best of two worlds
Laboratory for Developments and Methods, Paul Scherrer Institute, Bldg. WHGA/131, CH-5232 Villigen-PSI, Switzerland
Figure 1: Electromagnons can be excited with light in materials such as YMn₂O₅, shown here with its crystal structure projected onto a plane. When the ordered moments (black arrows) on the manganese sites (labeled M) fluctuate, some nearest-neighbor moments will be more parallel, others more antiparallel. (The blue arrows indicate the possible direction of the fluctuations.) This can lead to a dynamic modulation of the magnetic interactions and, in the process, a coupling of the magnetic fluctuations to the fluctuating electric dipoles. Adapted from Ref. [11].
To appreciate why the discovery of electromagnons caused so much excitement, consider that it is not straightforward to couple electric and magnetic degrees of freedom in an insulating material. Ferroelectrics are insulators where the charge fluctuations can be associated with so-called electric-dipole-active lattice vibrations. These excitations can drive phase transitions to ferroelectric, or charge-ordered, phases. Magnetic fluctuations in insulators, on the other hand, are associated with localized magnetic moments, which are ordered at sufficiently low temperature and support coherent magnetic waves. Because the origins of the electro- and magneto-active excitations are very different, it is not easy to find coupled electromagnetic, or magnetoelectric, excitations in insulating materials. To make matters worse, the mechanism that leads to ferroelectricity in conventional ferroelectrics, which should exhibit electric-dipole-active excitations, impedes the presence of magnetism.
Since the early efforts on classical ferroelectrics, research has progressed rapidly, and several different classes of multiferroics, with coupled ferroelectric and magnetic degrees of freedom, have been discovered [2,3]. While ferroelectricity in a classical ferroelectric is driven by a hybridization of empty shell orbitals on the transition-metal site with occupied shells on the oxygen sites, ferroelectricity in these new classes of materials arises from different mechanisms, such as lone-pair ions, topological effects of the chemical lattice, or magnetically driven effects in frustrated materials. Not long after the discovery of these new multiferroic materials, it was shown that such materials could, as suspected, support electromagnetic excitations in solid matter.
In 2006, Pimenov et al. [4] reported that, using terahertz light, they were able to excite spin waves, which they called electromagnons, in a multiferroic manganite. They also showed the electromagnons could be suppressed by applying a magnetic field, directly demonstrating magnetic-field-tuned electric-dipole-active excitations. The understanding of these electromagnons was, however, tentative. For example, it was not clear whether the electromagnon is associated with the transition-metal or rare-earth-metal ion magnetism in this material, thus leaving the origin of electromagnons an open question. In this material, it is the magnetic interactions between the manganese ions that give rise to ferroelectricity. But the material is also magnetic because of the electrons of the rare-earth-metal ions, so it was not clear whether these excitations arose purely from the transition-metal magnetism.
Published shortly afterwards, Sushkov et al.'s paper answered this question [1]. They observed the same effects in the multiferroic material YMn₂O₅, which contains no rare-earth-metal magnetism (Fig. 1), putting to rest any doubt that the electromagnons were associated with transition-metal magnetism. Since these two ground-breaking studies, numerous observations of electromagnons in very different materials have been published. Electromagnons appear most commonly in magnetically induced ferroelectrics, including a number of spin-spiral ferroelectrics.
It is now known that electromagnons can also exist in multiferroics where ferroelectricity does not arise from magnetic order. For example, these excitations occur in a multiferroic where ferroelectricity arises from the lone-pair mechanism and a low-pitch antiferromagnetic spiral does not form except at a much lower temperature [5]. Recently (and somewhat surprisingly), electromagnons were reported in the paraelectric phase of multiferroic materials. For example, electromagnons have been observed in a very different material with a very different electronic and physical structure: a conical-spin magnetically ordered phase of a paraelectric hexaferrite [6]. This is an exciting discovery, as it suggests that electric-dipole-active magnons can exist in nonmultiferroic materials, and that many magnetically ordered insulators with complex noncollinear magnetic structures may support electromagnon excitations.
Progress in this field is not just a matter of adding to the catalog of multiferroics exhibiting electromagnons. Initially, electromagnetic excitations were mostly observed at relatively low frequencies, where they were expected to drive the condensation of boson excitations associated with the phase transition. However, it has now been shown, contrary to expectation, that in rare-earth manganites electromagnetic excitations also exist at relatively high frequencies [7,8]. It has been suggested that at least some of the high-frequency excitations are two-magnon excitations, and that these excitations are strongly coupled to some of the phonons in that energy range. The prospect of multiparticle magnon states that couple strongly to the lattice can lead to novel strongly correlated effects in these materials and other unusual interactions.
What sets the time scale of magnetoelectric switching in multiferroics is also an exciting open question. A recent theoretical study [9] predicts that electromagnons can be used to switch ferroelectric polarization in rare-earth manganites on a picosecond time scale using terahertz optical pulses. This would be due to dynamic magnetoelectric effects that are larger than the static magnetoelectric effects arising from spin–orbit interactions. Recent experimental studies suggest, however, that the switching time scale is considerably longer, in the millisecond range [10]. This time scale is much longer than what would be expected even if the dynamic magnetoelectric effects in this material were completely governed by spin–orbit interactions and not by symmetric exchange as in the rare-earth manganites.
A.B. Sushkov, R. Valdés Aguilar, S. Park, S-W. Cheong, and H. D. Drew, Phys. Rev. Lett. 98, 027202 (2007)
S.-W. Cheong and M. Mostovoy, Nature Mater. 6, 13 (2007)
A. Pimenov, A. A. Mukhin, V. Yu. Ivanov, V. D. Travkin, A. M. Balbashov, and A. Loidl, Nature Phys. 2, 97 (2006)
M. Cazayous, Y. Gallais, A. Sacuto, R. de Sousa, D. Lebeugle and D. Colson, Phys. Rev. Lett. 101, 037601 (2008)
N. Kida, D. Okuyama, S. Ishiwata, Y. Taguchi, R. Shimano, K. Iwasa, T. Arima, and Y. Tokura, Phys. Rev B 80, 220406 (2009)
Y. Takahashi, N. Kida, Y. Yamasaki, J. Fujioka, T. Arima, R. Shimano, S. Miyahara, M. Mochizuki, N. Furukawa, and Y. Tokura, Phys. Rev. Lett. 101, 187201 (2008)
A. M. Shuvaev, F. Mayr, A. Loidl, A. A. Mukhin, and A. Pimenov, Eur. Phys. J. B 80, 351 (2011)
M. Mochizuki and N. Nagaosa, Phys. Rev. Lett. 105, 147202 (2010)
T. Hoffmann, P. Thielen, P. Becker, L. Bohaty, and M. Fiebig, arXiv:1103.2066
J. H. Kim, M. A. van der Vegte, A. Scaramucci, S. Artyukhin, J.-H. Chung, S. Park, S-W. Cheong, M. Mostovoy, and S.-H. Lee, Phys. Rev. Lett. 107, 097401 (2011)
Michel Kenzelmann heads the Laboratory for Developments and Methods at the Paul Scherrer Institut, in Switzerland. He received his D.Phil. from Oxford University in 2001. He had a postdoctoral position shared by Johns Hopkins University and NIST, Gaithersburg, from 2001 and 2004, and held a professor fellowship of the Swiss National Science Foundation at ETH Zürich from 2004 to 2008. His research interests focus on materials with strong magnetic fluctuations, such as low-dimensional and frustrated magnets, multiferroics, and unconventional heavy-fermion superconductors.
Electromagnons in Multiferroic YMn₂O₅ and TbMn₂O₅
A. B. Sushkov, R. Valdés Aguilar, S. Park, S-W. Cheong, and H. D. Drew
|
EuDML | Goodman-Kruskal Measure of Association for Fuzzy-Categorized Variables
Goodman-Kruskal Measure of Association for Fuzzy-Categorized Variables
S. M. Taheri; Gholamreza Hesamian
The Goodman-Kruskal measure, which is a well-known measure of dependence for contingency tables, is generalized to the case when the variables of interest are categorized by linguistic terms rather than crisp sets. In addition, to test the hypothesis of independence in such contingency tables, a novel method of decision making is developed based on a concept of fuzzy p-value. The applicability of the proposed approach is explained using a numerical example.
Taheri, S. M., and Hesamian, Gholamreza. "Goodman-Kruskal Measure of Association for Fuzzy-Categorized Variables." Kybernetika 47.1 (2011): 110-122. <http://eudml.org/doc/196869>.
Gholamreza Hesamian, S. M. Taheri, Fuzzy empirical distribution function: Properties and application
Keywords: fuzzy frequency, fuzzy category, fuzzy Goodman–Kruskal statistic, fuzzy p-value, fuzzy significance level, NSD index
Fuzzy analysis in statistics
Decision theory and fuzziness
|
Molar mass - Wikipedia
Mass per amount of substance
Not to be confused with Molecular mass or Mass number.
In chemistry, the molar mass of a chemical compound is defined as the mass of a sample of that compound divided by the amount of substance in that sample, measured in moles.[1] The molar mass is a bulk, not molecular, property of a substance. The molar mass is an average of many instances of the compound, which often vary in mass due to the presence of isotopes. Most commonly, the molar mass is computed from the standard atomic weights and is thus a terrestrial average and a function of the relative abundance of the isotopes of the constituent atoms on Earth. The molar mass is appropriate for converting between the mass of a substance and the amount of a substance for bulk quantities.
The molecular weight is commonly used as a synonym of molar mass, particularly for molecular compounds; however, the most authoritative sources define it differently (see Molecular mass).
The molar mass is an intensive property of the substance, that does not depend on the size of the sample. In the International System of Units (SI), the coherent unit of molar mass is kg/mol. However, for historical reasons, molar masses are almost always expressed in g/mol.
The mole was defined in such a way that the molar mass of a compound, in g/mol, is numerically equal (for all practical purposes) to the average mass of one molecule, in daltons. Thus, for example, the average mass of a molecule of water is about 18.0153 daltons, and the molar mass of water is about 18.0153 g/mol.
For chemical elements without isolated molecules, such as carbon and metals, the molar mass is computed by dividing by the number of moles of atoms instead. Thus, for example, the molar mass of iron is about 55.845 g/mol.
Since 1971, SI defined the "amount of substance" as a separate dimension of measurement. Until 2019, the mole was defined as the amount of substance that has as many constituent particles as there are atoms in 12 grams of carbon-12. During that period, the molar mass of carbon-12 was thus exactly 12 g/mol, by definition. Since 2019, a mole of any substance has been redefined in the SI as the amount of that substance containing an exactly defined number of particles, 6.02214076×10²³. The molar mass of a compound in g/mol thus is equal to the mass of this number of molecules of the compound in g.
Molar masses of elements
Main articles: Relative atomic mass and Standard atomic weight
The molar mass of atoms of an element is given by the relative atomic mass of the element multiplied by the molar mass constant, Mu = 0.99999999965(30)×10⁻³ kg⋅mol⁻¹.[2] For normal samples from Earth with typical isotope composition, the atomic weight can be approximated by the standard atomic weight[3] or the conventional atomic weight.
M(H) = 1.00797(7) × Mu = 1.00797(7) g/mol
M(S) = 32.065(5) × Mu = 32.065(5) g/mol
M(Cl) = 35.453(2) × Mu = 35.453(2) g/mol
M(Fe) = 55.845(2) × Mu = 55.845(2) g/mol.
Multiplying by the molar mass constant ensures that the calculation is dimensionally correct: standard relative atomic masses are dimensionless quantities (i.e., pure numbers) whereas molar masses have units (in this case, grams per mole).
Some elements are usually encountered as molecules, e.g. hydrogen (H₂), sulfur (S₈), chlorine (Cl₂). The molar mass of molecules of these elements is the molar mass of the atoms multiplied by the number of atoms in each molecule:
M(H₂) = 2 × 1.00797(7) × Mu = 2.01588(14) g/mol
M(S₈) = 8 × 32.065(5) × Mu = 256.52(4) g/mol
M(Cl₂) = 2 × 35.453(2) × Mu = 70.906(4) g/mol.
Molar masses of compounds
The molar mass of a compound is given by the sum of the relative atomic masses Aᵣ of the atoms which form the compound multiplied by the molar mass constant Mᵤ:
{\displaystyle M=M_{\rm {u}}M_{\rm {r}}=M_{\rm {u}}\sum _{i}{A_{\rm {r}}}_{i}.}
Here Mᵣ is the relative molar mass, also called formula weight. For normal samples from Earth with typical isotope composition, the standard atomic weight or the conventional atomic weight can be used as an approximation of the relative atomic mass of the sample.
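As a sketch of the formula above (a sum of relative atomic masses scaled by the molar mass constant), the following uses the atomic weights quoted in this article plus an assumed standard value for oxygen; the function name is ours:

```python
# A minimal sketch of M = M_u * sum(A_r_i). The atomic weights are the
# standard values quoted in this article (oxygen is an added assumption);
# the function name is our own.
M_U = 0.99999999965  # molar mass constant in g/mol (2018 CODATA)

ATOMIC_WEIGHT = {"H": 1.00797, "O": 15.9994, "S": 32.065, "Cl": 35.453, "Fe": 55.845}

def molar_mass(formula):
    """Molar mass in g/mol for a formula given as {element: atom count}."""
    return M_U * sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())

print(round(molar_mass({"H": 2, "O": 1}), 4))  # water: 18.0153 g/mol
```

The multiplication by Mᵤ is numerically almost a no-op, but it is what turns the dimensionless relative mass into a quantity with units.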
An average molar mass may be defined for mixtures of compounds.[1] This is particularly important in polymer science, where different polymer molecules may contain different numbers of monomer units (non-uniform polymers).[4][5]
Average molar mass of mixtures
The average molar mass of a mixture {\displaystyle {\bar {M}}} can be calculated from the mole fractions {\displaystyle x_{i}} of the components and their molar masses {\displaystyle M_{i}}:
{\displaystyle {\bar {M}}=\sum _{i}x_{i}M_{i}.}
It can also be calculated from the mass fractions {\displaystyle w_{i}} of the components:
{\displaystyle {\frac {1}{\bar {M}}}=\sum _{i}{\frac {w_{i}}{M_{i}}}.}
As an example, the average molar mass of dry air is 28.97 g/mol.[6]
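The two averaging formulas can be checked against the dry-air figure above; the mole fractions and component molar masses below are rounded assumptions, not values from the text:

```python
# Sketch of both averaging formulas for dry air (N2, O2, Ar only; the
# fractions are rounded assumptions).
x = {"N2": 0.781, "O2": 0.209, "Ar": 0.010}      # mole fractions
M = {"N2": 28.014, "O2": 31.998, "Ar": 39.948}   # molar masses, g/mol

# Mole-fraction form: M_bar = sum(x_i * M_i)
M_bar = sum(x[i] * M[i] for i in x)

# Mass-fraction form: 1/M_bar = sum(w_i / M_i); derive the w_i from the x_i.
w = {i: x[i] * M[i] / M_bar for i in x}
M_bar_check = 1.0 / sum(w[i] / M[i] for i in w)

print(round(M_bar, 2), round(M_bar_check, 2))  # both ~28.97 g/mol
```

The two forms are algebraically equivalent, which is why the second line reproduces the first.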
Related quantities
Molar mass is closely related to the relative molar mass (Mᵣ) of a compound, to the older term formula weight (F.W.), and to the standard atomic masses of its constituent elements. However, it should be distinguished from the molecular mass (which is confusingly also sometimes known as molecular weight), which is the mass of one molecule (of any single isotopic composition) and is not directly related to the atomic mass, the mass of one atom (of any single isotope). The dalton, symbol Da, is also sometimes used as a unit of molar mass, especially in biochemistry, with the definition 1 Da = 1 g/mol, despite the fact that it is strictly a unit of mass (1 Da = 1 u = 1.66053906660(50)×10⁻²⁷ kg, as of 2018 CODATA recommended values).
Molecular weight (M.W.) is an older term for what is now more correctly called the relative molar mass (Mᵣ).[7] This is a dimensionless quantity (i.e., a pure number, without units) equal to the molar mass divided by the molar mass constant.[8]
Molecular mass
The molecular mass (m) is the mass of a given molecule: it is usually measured in daltons (Da or u).[9] Different molecules of the same compound may have different molecular masses because they contain different isotopes of an element. This is distinct from, but related to, the molar mass, which is a measure of the average molecular mass of all the molecules in a sample and is usually the more appropriate measure when dealing with macroscopic (weigh-able) quantities of a substance.
Molecular masses are calculated from the atomic masses of each nuclide, while molar masses are calculated from the standard atomic weights[10] of each element. The standard atomic weight takes into account the isotopic distribution of the element in a given sample (usually assumed to be "normal"). For example, water has a molar mass of 18.0153(3) g/mol, but individual water molecules have molecular masses which range between 18.0105646863(15) Da (¹H₂¹⁶O) and 22.0277364(9) Da (²H₂¹⁸O).
DNA synthesis usage
The term formula weight (F.W.) has a specific meaning when used in the context of DNA synthesis: whereas an individual phosphoramidite nucleobase to be added to a DNA polymer has protecting groups and has its molecular weight quoted including these groups, the amount of molecular weight that is ultimately added by this nucleobase to a DNA polymer is referred to as the nucleobase's formula weight (i.e., the molecular weight of this nucleobase within the DNA polymer, minus protecting groups).
Precision and uncertainties
The precision to which a molar mass is known depends on the precision of the atomic masses from which it was calculated, and value of the molar mass constant. Most atomic masses are known to a precision of at least one part in ten-thousand, often much better[3] (the atomic mass of lithium is a notable, and serious,[12] exception). This is adequate for almost all normal uses in chemistry: it is more precise than most chemical analyses, and exceeds the purity of most laboratory reagents.
The precision of atomic masses, and hence of molar masses, is limited by the knowledge of the isotopic distribution of the element. If a more accurate value of the molar mass is required, it is necessary to determine the isotopic distribution of the sample in question, which may be different from the standard distribution used to calculate the standard atomic mass. The isotopic distributions of the different elements in a sample are not necessarily independent of one another: for example, a sample which has been distilled will be enriched in the lighter isotopes of all the elements present. This complicates the calculation of the standard uncertainty in the molar mass.
A useful convention for normal laboratory work is to quote molar masses to two decimal places for all calculations. This is more accurate than is usually required, but avoids rounding errors during calculations. When the molar mass is greater than 1000 g/mol, it is rarely appropriate to use more than one decimal place. These conventions are followed in most tabulated values of molar masses.[13][14]
Molar masses are almost never measured directly. They may be calculated from standard atomic masses, and are often listed in chemical catalogues and on safety data sheets (SDS). Molar masses typically vary between:
1–238 g/mol for atoms of naturally occurring elements;
1000–5000000 g/mol for polymers, proteins, DNA fragments, etc.
Vapour density
The measurement of molar mass by vapour density relies on the principle, first enunciated by Amedeo Avogadro, that equal volumes of gases under identical conditions contain equal numbers of particles. This principle is included in the ideal gas equation:
{\displaystyle pV=nRT,}
where n is the amount of substance. The vapour density ρ is given by
{\displaystyle \rho ={{nM} \over {V}}.}
Combining these two equations gives an expression for the molar mass in terms of the vapour density for conditions of known pressure and temperature:
{\displaystyle M={{RT\rho } \over {p}}.}
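A minimal sketch of the vapour-density relation M = RTρ/p; the density, temperature, and pressure below are an invented measurement for illustration:

```python
# Sketch: molar mass from vapour density under ideal-gas assumptions,
# M = R*T*rho / p. The measurement numbers are invented, not from the text.
R = 8.314462618  # molar gas constant, J/(mol*K)

def molar_mass_from_density(rho_kg_m3, T_kelvin, p_pascal):
    """Return molar mass in g/mol."""
    return R * T_kelvin * rho_kg_m3 / p_pascal * 1000  # kg/mol -> g/mol

# A gas with density 1.250 kg/m^3 at 273.15 K and 101325 Pa:
print(round(molar_mass_from_density(1.250, 273.15, 101325), 1))  # ~28.0 g/mol
```

A result near 28 g/mol would be consistent with N₂ or CO, which is how the method was historically used to identify gases.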
Freezing-point depression
The freezing point of a solution is lower than that of the pure solvent, and the freezing-point depression (ΔT) is directly proportional to the amount concentration for dilute solutions. When the composition is expressed as a molality, the proportionality constant is known as the cryoscopic constant (Kf) and is characteristic for each solvent. If w represents the mass fraction of the solute in solution, and assuming no dissociation of the solute, the molar mass is given by
{\displaystyle M={{wK_{\text{f}}} \over {\Delta T}}.\ }
Boiling-point elevation
The boiling point of a solution of an involatile solute is higher than that of the pure solvent, and the boiling-point elevation (ΔT) is directly proportional to the amount concentration for dilute solutions. When the composition is expressed as a molality, the proportionality constant is known as the ebullioscopic constant (Kb) and is characteristic for each solvent. If w represents the mass fraction of the solute in solution, and assuming no dissociation of the solute, the molar mass is given by
{\displaystyle M={{wK_{\text{b}}} \over {\Delta T}}.\ }
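Both colligative relations above have the same shape, M = wK/ΔT, so one sketch covers them; the cryoscopic constant for water (1.86 K·kg/mol) and the measurement numbers are assumptions, not values from the text:

```python
# A minimal sketch of both colligative formulas, M = w*K/deltaT. K_f for
# water (1.86 K*kg/mol) is a standard value not quoted in the text, and the
# measurement numbers below are invented for illustration.
def molar_mass_colligative(w, K, delta_T):
    """w: mass fraction of solute; K: cryoscopic (K_f) or ebullioscopic (K_b)
    constant in K*kg/mol; delta_T: observed shift in K. Returns g/mol."""
    return w * K / delta_T * 1000  # kg/mol -> g/mol

# A 1% (w = 0.01) aqueous solution that lowers the freezing point by 0.103 K:
print(round(molar_mass_colligative(0.01, 1.86, 0.103), 1))  # ~180.6 g/mol
```

A value near 180 g/mol would point to a solute like glucose, which is the classic textbook use of cryoscopy.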
Mole map (chemistry)
^ a b International Union of Pure and Applied Chemistry (1993). Quantities, Units and Symbols in Physical Chemistry, 2nd edition, Oxford: Blackwell Science. ISBN 0-632-03583-8. p. 41. Electronic version.
^ "2018 CODATA Value: molar mass constant". The NIST Reference on Constants, Units, and Uncertainty. NIST. 20 May 2019. Retrieved 2019-05-20.
^ a b Wieser, M. E. (2006), "Atomic Weights of the Elements 2005" (PDF), Pure and Applied Chemistry, 78 (11): 2051–66, doi:10.1351/pac200678112051
^ "International union of pure and applied chemistry, commission on macromolecular nomenclature, note on the terminology for molar masses in polymer science". Journal of Polymer Science: Polymer Letters Edition. 22 (1): 57. 1984. Bibcode:1984JPoSL..22...57.. doi:10.1002/pol.1984.130220116.
^ Metanomski, W. V. (1991). Compendium of Macromolecular Nomenclature. Oxford: Blackwell Science. pp. 47–73. ISBN 0-632-02847-5.
^ IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (2006–) "relative molar mass". doi:10.1351/goldbook.R05270
^ "Author Guidelines – Article Layout". RSC Publishing. Retrieved 2007-10-14.
^ See, e.g., Weast, R. C., ed. (1972). Handbook of Chemistry and Physics (53rd ed.). Cleveland, OH: Chemical Rubber Co.
^ Possolo, Antonio; van der Veen, Adriaan M. H.; Meija, Juris; Hibbert, D. Brynn (2018-01-04). "Interpreting and propagating the uncertainty of the standard atomic weights (IUPAC Technical Report)". Pure and Applied Chemistry. 90 (2): 395–424. doi:10.1515/pac-2016-0402. S2CID 145931362.
|
Velocity of Money - Course Hero
Velocity of Money Defined
Velocity of money is the rate at which money flows through the economy. Consideration of an economy's money supply, or the total amount of its currency and other liquid financial products, is useful in understanding economic behavior. Economic activity, money supply, and gross domestic product are all correlated and, when monitored over time, trend upward or downward together. Money supply may be divided into two sectors: M1 and M2 money supplies.
M1 money supply is the portion of the money supply that has the highest degree of liquidity. In other words, M1 money supply can be swiftly transformed into cash. M1 money supply includes traveler's checks, currencies, and checkable deposits. M2 money supply is the entirety of the M1 money supply, as well as savings accounts, money market securities, and money market mutual funds. A money market mutual fund is an investment portfolio made up of short-term debt securities. The savings accounts, money market securities, and money market mutual funds are less liquid than the M1 portion of the money supply but contain another essential component of money: the store of value (money's ability to represent the same worth over time). Thus, M2 money supply encompasses two of the essential components of money: medium of exchange and store of value.
Money movement through an economy is important for a stable economy. The gross domestic product (GDP) is a general gauge of the overall economic status of a country in terms of goods and services produced by that country. Dividing the gross domestic product by the money supply provides the turnover ratio, which is the number of times the money supply must move through the economy to yield the gross domestic product. This ratio is the velocity of money.
The velocity of money is a useful tool for investors to gauge the strength of a specific economy. An economy's velocity of money directly correlates with its gross domestic product. Thus, during recessions, the economy's velocity of money decreases proportionally to production losses.
\text{Velocity of Money}=\frac{\text{Gross Domestic Product}}{\text{Money Supply}}
For instance, hypothetically, if the gross domestic product for the United States is $50 million and the money supply is $25 million, then the velocity of money is 2.
\begin{aligned}\text{Velocity of Money}&=\frac{\$50\;\text{million}}{\$25\;\text{million}}\\\\&=2\end{aligned}
This means that the people of the U.S. are willing to use their money at twice the rate of the money supply. During the U.S. recessions of 2001–2002 and 2008–2010, however, money was not spent at the pace typical of a healthy economy: in recessionary periods, the velocity and liquidity of the money supply are severely reduced.
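The formula and the worked example above can be sketched directly; the figures are the hypothetical ones from the text, not real U.S. data:

```python
# A direct sketch of velocity = GDP / money supply, using the hypothetical
# figures from the example above (not real U.S. data).
def velocity_of_money(gdp, money_supply):
    return gdp / money_supply

print(velocity_of_money(50_000_000, 25_000_000))  # 2.0
# In a recession GDP falls while the money supply stays put, so velocity drops:
print(velocity_of_money(45_000_000, 25_000_000))  # 1.8
```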
Velocity of Money during Recessions
Velocity of money is the rate at which money flows through an economy, and it is used by investors to determine the strength of an economy. The velocity of money sharply declines during recessions, such as those occurring between 2001 and 2002 and 2008 and 2010 in the United States.
Money Supply Control and Velocity of Money
The velocity of money, or the flow rate of money in an economy, can be controlled by moving money through an economy. Changes to the money supply or gross domestic product directly affect the velocity of money. For example, if the gross domestic product remains constant while the money supply increases, the velocity of money falls; the excess money relative to output shows up as inflation.
Inflation is a continual increase in the average price levels of goods and services. In practice, inflation is a decrease in the buying power of money. It occurs when the money supply is increased at a rate that outpaces gross domestic product growth. Besides causing inflation, increasing the money supply can also affect interest rates. When the money supply is greater than demand, it causes a decrease in interest rates. It is helpful to look at interest rates as the "cost" of money, because the interest paid on a loan is the extra paid on top of the principal, or the initial amount of money, excluding interest, that was borrowed. Thus, when interest rates are reduced, the "cost" of money is also reduced, which will likely lead to an increase in the borrowing of money. For example, Mary is able to secure a one-year loan, borrowing $100,000. If interest rates drop from 10 percent to 8 percent, Mary may be tempted to borrow more than $100,000. Mary rationalizes that if she is able to afford $10,000 toward an interest payment, she should borrow $125,000 and pay back $135,000 at the end of the year. In this scenario, Mary pays $125,000 to the principal and $10,000 in interest. An abundance of money is likely to result in an increase in investing and greater consumption of resources. These behaviors serve as a catalyst to increase the gross domestic product.
There can also be a number of consequences for decreasing an economy's money supply. For example, the effect of decreasing the money supply is an increase in interest rates. This results in the "cost" of money increasing and likely a decrease in borrowing. In turn, the interest rate increases are likely to reduce investment spending and consumption. Reduced investment and consumption will then cause gross domestic product growth to stall or decline.
Inflation versus Money Supply
An increase in an economy's money supply will result in the inflation rate rising. Conversely, a decrease in the economy's money supply will result in deflation, or a sustained decrease in the general price level of commonly purchased goods and services.
|
Maze - M4z3 Runn3r - localo
Complete the scorch trials in under 5 seconds!
There is a little race: we have to go quickly from checkpoint to checkpoint till the end in under 5 seconds.
We have to find a bug in the anti-cheat to gain more speed.
Since the anti-cheat probably just checks whether our current velocity is below a certain threshold, and we send a timestamp inside our position packet, we can simply spoof the time difference to make the server think that our velocity is quite low.
v = \frac{\Delta s}{\Delta t}
Since the server computes v from the spoofed \Delta t, a large enough \Delta t makes the reported v small. I used A* to calculate my route. Inside the cheat UI the user just has to click inside the map to set the destination and hit the TP button to fast-travel to that point.
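The route-planning part can be sketched as a standard A* search over a grid with a Manhattan-distance heuristic. The maze encoding (0 = walkable, 1 = wall) is an assumption here; the real cheat read the map out of the game client:

```python
# Hedged sketch of the A* route planner; the grid encoding is an assumption.
import heapq
import itertools

def a_star(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()  # tiebreaker so the heap never compares nodes
    open_heap = [(h(start), next(tie), 0, start, None)]
    came_from = {}
    g_score = {start: 0}
    while open_heap:
        _, _, g, cur, parent = heapq.heappop(open_heap)
        if cur in came_from:          # already expanded via a shorter route
            continue
        came_from[cur] = parent
        if cur == goal:               # walk the parents back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_score.get(nxt, float("inf")):
                    g_score[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None

maze = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(a_star(maze, (0, 0), (0, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)]
```

With the path in hand, the cheat just teleports along it while spoofing the timestamps.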
never trust client information
CSCG{N3VER_TRUST_T1111ME}
|
State prices - Wikipedia
In financial economics, a state-price security, also called an Arrow–Debreu security (from its origins in the Arrow–Debreu model), a pure security, or a primitive security is a contract that agrees to pay one unit of a numeraire (a currency or a commodity) if a particular state occurs at a particular time in the future and pays zero numeraire in all the other states. The price of this security is the state price of this particular state of the world. The state price vector is the vector of state prices for all states. [1] See Financial economics § State prices.
The Arrow–Debreu model (also referred to as the Arrow–Debreu–McKenzie model or ADM model) is the central model in general equilibrium theory and uses state prices in the process of proving the existence of a general equilibrium. State prices may relatedly be applied in derivatives pricing and hedging: a contract whose settlement value is a function of an underlying asset whose value is uncertain at contract date can be decomposed as a linear combination of its Arrow–Debreu securities, and thus as a weighted sum of its state prices;[2][3] see Contingent claim analysis. Breeden and Litzenberger's work in 1978[4] established the latter, more general use of state prices in finance.
Imagine a world where two states are possible tomorrow: peace (P) and war (W). Denote the random variable which represents the state as ω; denote tomorrow's random variable as ω1. Thus, ω1 can take two values: ω1=P and ω1=W.
There is a security that pays off £1 if tomorrow's state is "P" and nothing if the state is "W". The price of this security is qP
There is a security that pays off £1 if tomorrow's state is "W" and nothing if the state is "P". The price of this security is qW
The prices qP and qW are the state prices.
The factors that affect these state prices are:
"Time preferences for consumption and the productivity of capital".[5] That is to say that the time value of money affects the state prices.
The probabilities of ω1=P and ω1=W. The more likely a move to W is, the higher the price qW gets, since qW insures the agent against the occurrence of state W. The seller of this insurance would demand a higher premium (if the economy is efficient).
The preferences of the agent. Suppose the agent has a standard concave utility function which depends on the state of the world. Assume that the agent loses an equal amount if the state is "W" as he would gain if the state was "P". Now, even if you assume that the above-mentioned probabilities ω1=P and ω1=W are equal, the changes in utility for the agent are not: Due to his decreasing marginal utility, the utility gain from a "peace dividend" tomorrow would be lower than the utility lost from the "war" state. If our agent were rational, he would pay more to insure against the down state than his net gain from the up state would be.
Application to financial assets
Further information: Financial economics § State prices, and Arrow–Debreu model § Economics of uncertainty: insurance and finance
If the agent buys both qP and qW, he has secured £1 for tomorrow. He has purchased a riskless bond. The price of the bond is b0 = qP + qW.
Now consider a security with state-dependent payouts (e.g. an equity security, an option, a risky bond etc.). It pays ck if ω1 = k, for k = P or W; that is, it pays cP in peacetime and cW in wartime. The price of this security is c0 = qPcP + qWcW.
Generally, the usefulness of state prices arises from their linearity: Any security can be valued as the sum over all possible states of state price times payoff in that state:
{\displaystyle c_{0}=\sum _{k}q_{k}\times c_{k}}
Analogously, for a continuous random variable indicating a continuum of possible states, the value is found by integrating over the state price density.
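The linearity of the pricing rule is easy to sketch; the state prices below are illustrative numbers chosen here, not values from the text:

```python
# Sketch of the linear pricing rule c0 = sum_k q_k * c_k. The state prices
# are illustrative numbers; their sum being below 1 reflects discounting.
q = {"P": 0.55, "W": 0.40}             # state prices q_P, q_W

def price(payoffs):
    """Value a security from its state-contingent payoffs."""
    return sum(q[k] * payoffs[k] for k in q)

bond = price({"P": 1.0, "W": 1.0})     # riskless bond: b0 = q_P + q_W
risky = price({"P": 3.0, "W": 0.5})    # pays 3 in peace, 0.5 in war
print(round(bond, 2), round(risky, 2)) # 0.95 1.85
```

The same `price` call values any payoff vector, which is exactly the linearity property described above.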
List of asset pricing articles
Financial economics § Underlying economics
^ economics.about.com Accessed June 18, 2008
^ Rebonato, Riccardo (8 July 2005). Volatility and Correlation: The Perfect Hedger and the Fox. John Wiley & Sons. pp. 323–. ISBN 978-0-470-09140-1.
^ Dempster; Pliska; Bruno Dupire (13 October 1997). Mathematics of Derivative Securities, ch. "Pricing and Hedging With Smiles". Cambridge University Press. pp. 103–. ISBN 978-0-521-58424-1.
^ Breeden, Douglas T.; Litzenberger, Robert H. (1978). "Prices of State-Contingent Claims Implicit in Option Prices". Journal of Business. 51 (4): 621–651. doi:10.1086/296025. JSTOR 2352653.
^ Copeland, Thomas E.; Weston, J. Fred; Shastri, Kuldeep (2004). Financial theory and corporate policy (4th ed.). Addison-Wesley. p. 81. ISBN 0321127218.
|
9 Ways to Solve Quadratic Equations Using the Quadratic Formula
1 See if the equation equals zero.
2 Convert the equation to standard form.
3 Identify the coefficients.
4 Plug the coefficients into the quadratic formula.
5 Use the order of operations to simplify the formula.
6 Simplify the radical.
7 Reduce the problem.
8 Circle your answer(s).
9 Memorize the quadratic formula.
You can use a few different techniques to solve a quadratic equation and the quadratic formula is one of them. The coolest thing about the formula is that it always works. You can apply it to any quadratic equation out there and you'll get an answer every time. That's not the case with the other techniques! The second coolest thing about the quadratic formula: it's easy to use. In this article, we'll walk you through the entire process from start to finish so you can crush your next algebra exam.
See if the equation equals zero.
If it does, the equation is ready for you to solve. You can't use the quadratic formula until the equation equals 0. If the equation you're looking at doesn't equal zero, don't worry. We'll show you how to convert it.[1]
Here's a quadratic equation in standard form:
{\displaystyle ax^{2}+bx+c=0}
Here are 2 examples to demonstrate:
{\displaystyle x^{2}-3x+1=0}
This equation is ready to solve because it equals 0.
{\displaystyle -3x^{2}+6x=-5}
This equation is not ready to solve just yet. We need to convert it first.
Convert the equation to standard form.
Standard form means the equation equals "0" and is ready to solve. It might sound complicated, but converting to standard form is pretty easy. You just need to move some things around a bit! It's easier to show you, so check out these examples:[3]
If an equation looks like this:
{\displaystyle -3x^{2}+6x=-5}
Move the −5 to the left side of the equal sign and put 0 on the right side of the equal sign. Remember: terms change from + to − (or vice versa) when you move them to the other side of the equal sign.
Our converted equation:
{\displaystyle -3x^{2}+6x+5=0}
{\displaystyle x^{2}=3x-1}
Move all the terms to the left side of the equal sign.
{\displaystyle x^{2}-3x+1=0}
{\displaystyle 2(w^{2}-2w)=5}
Undo the brackets to expand and move 5 to the left of the equal sign.
{\displaystyle 2w^{2}-4w-5=0}
Identify the coefficients.
The coefficients are the a, b, and c in the standard form equation. Remember, the standard form is {\displaystyle ax^{2}+bx+c=0}. Our equation in standard form is {\displaystyle -3x^{2}+6x+5=0}. All you have to do is figure out a, b, and c.[4]
The coefficients in our equation:
{\displaystyle a=-3}
{\displaystyle b=6}
{\displaystyle c=5}
Plug the coefficients into the quadratic formula.
Replace the a, b, and c in the quadratic formula with our coefficients. This part is easy! Just switch out the letters with the coefficients.[5]
Remember, the quadratic formula looks like this:
{\displaystyle x={\frac {-b\pm {\sqrt {b^{2}-4ac}}}{2a}}}
Our coefficients:
{\displaystyle a=-3}
{\displaystyle b=6}
{\displaystyle c=5}
Our equation after inserting the coefficients:
{\displaystyle x={\frac {-6\pm {\sqrt {6^{2}-(4)(-3)(5)}}}{2(-3)}}}
Use the order of operations to simplify the formula.
Just do the math in the equation as you normally would. Now that all the coefficients have a numerical value, you can do the simple math in the equation.[6]
6² = 36
2 × (-3) = -6
(-3) × 5 = -15
(-4) × (-15) = 60
You end up with:
x = (-6 ± √(36 + 60)) / -6
Then, simplify once more:
x = (-6 ± √96) / -6
Simplify the radical.
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/b\/be\/Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-6.jpg\/v4-460px-Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-6.jpg","bigUrl":"\/images\/thumb\/b\/be\/Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-6.jpg\/aid1909174-v4-728px-Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-6.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>\u00a9 2022 wikiHow, Inc. All rights reserved. wikiHow, Inc. is the copyright holder of this image under U.S. and international copyright laws. This image is <b>not<\/b> licensed under the Creative Commons license applied to text content and some other images posted to the wikiHow website. This image may not be used by other entities without the express written consent of wikiHow, Inc.<br>\n<\/p><p><br \/>\n<\/p><\/div>"}
The radical is the number inside the √ sign, which is 96. To simplify, find the prime factorization of the number inside the radical.[7] "Prime factorization" means dividing the number by 2 (the first prime number), then continuing to divide by 2 until you get a decimal or remainder. At that point, divide by 3, 5, 7, etc. until all you have left are prime numbers.[8]
Here's the prime factorization of 96: 2 x 2 x 2 x 2 x 2 x 3 = 96.
Group the pairs: (2 x 2) (2 x 2). Each pair of 2s contributes one 2 outside the radical, so 2 x 2 = 4 goes outside the radical sign.
Multiply what's left: (2 x 3) = 6. This goes inside the radical sign.
So √96 simplified = 4√6.
Putting it all together:
x = (-6 ± 4√6) / -6
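The pairing trick above can be automated. Here is a short Python sketch (the function name `simplify_radical` is mine, not from the article) that pulls square factors out of a radical:

```python
import math

def simplify_radical(n):
    # Write sqrt(n) as outside * sqrt(inside) by pulling out square factors.
    outside, inside = 1, n
    f = 2
    while f * f <= inside:
        while inside % (f * f) == 0:
            inside //= f * f
            outside *= f
        f += 1
    return outside, inside

outside, inside = simplify_radical(96)
print(f"sqrt(96) = {outside}*sqrt({inside})")  # sqrt(96) = 4*sqrt(6)
# Check against the numeric value.
assert math.isclose(outside * math.sqrt(inside), math.sqrt(96))
```

This matches the hand computation: two pairs of 2s give 4 outside, and the leftover 2 × 3 = 6 stays inside.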
Reduce the problem.
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/5\/58\/Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-7.jpg\/v4-460px-Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-7.jpg","bigUrl":"\/images\/thumb\/5\/58\/Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-7.jpg\/aid1909174-v4-728px-Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-7.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>\u00a9 2022 wikiHow, Inc. All rights reserved. wikiHow, Inc. is the copyright holder of this image under U.S. and international copyright laws. This image is <b>not<\/b> licensed under the Creative Commons license applied to text content and some other images posted to the wikiHow website. This image may not be used by other entities without the express written consent of wikiHow, Inc.<br>\n<\/p><p><br \/>\n<\/p><\/div>"}
Our equation can be reduced by 2. The numbers -6, 4, and -6 are all divisible by 2. That means the equation can be reduced by 2. Divide each number by 2:
The reduced equation:
x = (-3 ± 2√6) / -3
or, multiplying the numerator and denominator by -1:
x = (3 ∓ 2√6) / 3
(both answers are correct because of the ± sign)
These are your final answers.[9]
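As a sanity check on the whole worked example, a short Python sketch (the helper name `quadratic_roots` is mine, not from the article) reproduces both roots numerically:

```python
import math

def quadratic_roots(a, b, c):
    # Solve a*x^2 + b*x + c = 0 with the quadratic formula
    # (assumes two real roots, i.e. b^2 - 4ac >= 0).
    disc = math.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

x1, x2 = quadratic_roots(-3, 6, 5)
# Both roots satisfy -3x^2 + 6x + 5 = 0 up to floating-point error.
print(x1, x2)
```

Numerically the roots come out near -0.633 and 2.633, the decimal values of (-6 ± √96) / -6.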
Circle your answer(s).
{"smallUrl":"https:\/\/www.wikihow.com\/images\/thumb\/2\/2f\/Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-8.jpg\/v4-460px-Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-8.jpg","bigUrl":"\/images\/thumb\/2\/2f\/Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-8.jpg\/aid1909174-v4-728px-Solve-Quadratic-Equations-Using-the-Quadratic-Formula-Step-8.jpg","smallWidth":460,"smallHeight":345,"bigWidth":728,"bigHeight":546,"licensing":"<div class=\"mw-parser-output\"><p>\u00a9 2022 wikiHow, Inc. All rights reserved. wikiHow, Inc. is the copyright holder of this image under U.S. and international copyright laws. This image is <b>not<\/b> licensed under the Creative Commons license applied to text content and some other images posted to the wikiHow website. This image may not be used by other entities without the express written consent of wikiHow, Inc.<br>\n<\/p><p><br \/>\n<\/p><\/div>"}
It'll make it easier for your teacher to grade your work. You just did a lot of math there! Most teachers want you to "show your work," which means your teacher is going to see all of that. Go ahead and circle your answer so it'll stand out from the rest of the work on the page.
Memorize the quadratic formula.
The quadratic formula is
x = (-b ± √(b² - 4ac)) / 2a
You'll need to memorize the formula at some point (probably for the upcoming exam), so committing it to memory now isn't a bad idea. The formula might look a bit complicated at first glance, but we have some fun tips to help you out.
Sing these lyrics to the tune of Pop Goes the Weasel:
If songs aren't your thing, try memorizing this story instead:
A negative boy was thinking yes or no about going to a party.
At the party, he talked to a square boy but not to the 4 awesome cats.
It was all over at 2 am.[10]
↑ https://www.mesacc.edu/~scotz47781/mat120/notes/quad_formula/quad_formula.html
↑ https://www.youtube.com/watch?v=i7idZfS8t8w&t=59s
↑ https://www.youtube.com/watch?v=i7idZfS8t8w&t=129s
↑ https://www.mesacc.edu/~scotz47781/mat120/notes/radicals/simplify/images/examples/prime_factorization.html
↑ https://www.chem.tamu.edu/class/fyp/mathrev/mr-quadr.html
"Today I had an exam for maths. And while I was doing the exam, I had completely forgotten how to do the quadratic equation. And now I remember it by looking at the article and I'm like, wow."..." more
"Thanks so much for this explanation."
|
60H35 Computational methods for stochastic equations
A connection between the stochastic heat equation and fractional Brownian motion, and a simple proof of a result of Talagrand.
Mueller, Carl E., Wu, Zhixin (2009)
A conservative evolution of the Brownian excursion.
Zambotti, Lorenzo (2008)
A dynamical system in a Hilbert space with a weakly attractive nonstationary point
Ivo Vrkoč (1993)
A differential equation in a Hilbert space with all solutions bounded but with no finite nontrivial invariant measure is constructed. In fact, it is shown that all solutions to this equation converge weakly to the origin; nonetheless, there is no stationary point. Moreover, no solution has a non-empty
\Omega
Yoann Dabrowski (2014)
We get stationary solutions of a free stochastic partial differential equation. As an application, we prove equality of non-microstate and microstate free entropy dimensions under a Lipschitz-like condition on conjugate variables, assuming also that the von Neumann algebra is {R}^{\omega}-embeddable. This includes an N-tuple of q-Gaussian random variables, e.g. for |q|N\le 0.13
A general analytical result for non-linear SPDE's and applications.
Denis, Laurent, Stoica, L. (2004)
A growth estimate for continuous random fields
Ralf Manthey, Katrin Mittmann (1996)
A Haussmann-Clark-Ocone formula for functionals of diffusion processes with Lipschitz coefficients.
Bahlali, Khaled, Mezerdi, Brahim, Ouknine, Youssef (2002)
A modified Kardar-Parisi-Zhang model.
Da Prato, Giuseppe, Debussche, Arnaud, Tubaro, Luciano (2007)
Marcin Boryc, Łukasz Kruk (2015)
A singular stochastic control problem in n dimensions with time-dependent coefficients on a finite time horizon is considered. We show that the value function for this problem is a generalized solution of the corresponding HJB equation with locally bounded second derivatives with respect to the space variables and the first derivative with respect to time. Moreover, we prove that an optimal control exists and is unique.
A new inequality for superdiffusions and its applications to nonlinear differential equations.
Dynkin, E.B. (2004)
A non-nonstandard proof of Reimers' existence result for heat SPDEs.
Allouba, Hassan (1998)
A note on a Feynman-Kac-type formula.
Balan, Raluca M. (2009)
A note on a one-dimensional nonlinear stochastic wave equation.
Nedeljkov, Marko, Rajter, Danijela (2002)
A note on Krylov's {L}_{p}-theory for systems of SPDEs.
Mikulevicius, R., Rozovskii, B. (2001)
A note on maximal estimates for stochastic convolutions
Mark Veraar, Lutz Weis (2011)
In stochastic partial differential equations it is important to have pathwise regularity properties of stochastic convolutions. In this note we present a new sufficient condition for the pathwise continuity of stochastic convolutions in Banach spaces.
|
68W27 Online algorithms
A generalization of Gosper's algorithm to bibasic hypergeometric summation.
Riese, Axel (1996)
A Macsyma implementation of Zeilberger's fast algorithm.
Caruso, Fabrizio (1999)
A new interpretor for PARI/GP
Bill Allombert (2008)
When Henri Cohen and his coworkers set out to write PARI twenty years ago, GP was an afterthought. While GP has become the most commonly used interface to the PARI library by a large margin, both the gp interpretor and the GP language are primitive in design. Paradoxically, while gp allows to handle very high-level objects, GP itself is a low-level language coming straight from the seventies.We rewrote GP as a compiler/evaluator pair, implementing several high-level features (statically scoped variables,...
A New Method for Computing Polynomial Greatest Common Divisors and Polynomial Remainder Sequences.
Alkiviadis G. Akritas (1987/1988)
A non-deterministic time hierarchy over the reals.
F. Cucker, J. L. Montaña, L. M. Pardo (1993)
Predrag S. Stanimirović, Predrag V. Krtolica, Rade Stanojević (2003)
A note on the minimality problem in indefinite summation of rational functions.
Pirastu, Roberto (1993)
A package for symbolic solution of real functional equations of real variables.
Enrique Castillo, Andres Iglesias (1997)
Mariangiola Dezani-Ciancaglini (1974)
A ring to describe symbolic expressions.
Andrés Bujosa, Regino Criado (1994)
A terminal area topology-independent GB-based conflict detection system for A-SMGCS.
Eugenio Roanes Lozano, Rafael Muga, Luis M. Laita, Eugenio Roanes Macías (2004)
A module for conflict detection in A-SMGCS is presented. It supervises the operations that the ground controller has to perform. It doesn?t depend on the topology of the terminal area. The system guarantees the safety of the proposed situation, that is, the impossibility that a conflict arises among aircrafts (and also road vehicles) obeying the signaling. We suppose that the terminal area has stop bars (or semaphores) controlling all intersections and accesses between runways, taxiways, exits,...
Accelerated series for universal constants, by the WZ method.
Wilf, Herbert S. (1999)
ALBERT---Software for scientific computations and applications.
Schmidt, Alfred, Siebert, K.G. (2001)
Algorithm for the Gröbner region of a principal ideal.
Bobe, Alexandru (2006)
Algorithmes d'élimination des quantificateurs
Annette Paugam (1985)
Algorithmische Aspekte zur Theorie der Gröbner-Basen
U. Meinhold (1988)
|
EuDML | Norming points and unique minimality of orthogonal projections.
Norming points and unique minimality of orthogonal projections.
Volume: 2006, Article ID 42305, 17 p.
Shekhtman, Boris, and Skrzypek, LesŁaw. "Norming points and unique minimality of orthogonal projections." Abstract and Applied Analysis 2006 (2006): Article ID 42305, 17 p. <http://eudml.org/doc/53888>.
author = {Shekhtman, Boris, Skrzypek, LesŁaw},
title = {Norming points and unique minimality of orthogonal projections.},
AU - Skrzypek, LesŁaw
TI - Norming points and unique minimality of orthogonal projections.
{L}^{p}
Articles by Skrzypek
|
evaln example - Maple Help
Home : Support : Online Help : Mathematics : Evaluation : evaln example
Typed evaln Parameter Checking
This worksheet illustrates, by means of several examples, a feature in the Maple type system.
Specifying Parameters of Type name
The Maple parameter type checking allows you to specify that a parameter is to remain as a name (that is, x::evaln), while also specifying that this name must be assigned a particular sort of value.
The two types involved in doing this are:
name(<type-specification>)
evaln(<type-specification>)
These are semantically equivalent to name and evaln, with the additional restriction that the specified name must be assigned a value that matches the <type-specification>. Note that when making the test on the assigned value, the assigned value is not evaluated any further than it will have been already.
a := 3
b := 4.5
Here, a evaluates to 3, which is not a name with an integer value.
type(a,name(integer));
false
The name b evaluates to 4.5, which is, again, not a name with an integer value.
type(b,name(integer));
false
In the next example, the name a does evaluate to an integer value. The type function itself sees the name a, because it has been quoted.
type('a',name(integer));
true
Here, the quoted 'b' evaluates to a name, but it is not bound to an integer, so the result is false.
type('b',name(integer));
\textcolor[rgb]{0,0,1}{\mathrm{false}}
A function that requires its argument to be an unevaluated name which, if evaluated, would have an integer value.
f := proc(x::name(integer)) end;
f := proc(x::name(integer)) end proc
A function that requires its argument to be a name (implicitly unevaluated) which, if evaluated, would have an integer value.
g := proc(x::evaln(integer)) end;
g := proc(x::evaln(integer)) end proc
The argument is evaluated to 3, which is not a name with an integer value.
Error, invalid input: f expects its 1st argument, x, to be of type name(integer), but received 3
The argument is evaluated to 4.5, which is not a name with an integer value.
Error, invalid input: f expects its 1st argument, x, to be of type name(integer), but received 4.5
The argument is evaluated to 'a', which is a name with an integer value.
The argument is evaluated to 'b', which is a name, but not with an integer value. Note that the error message clearly indicates this. This cannot be confused with having received an actual assignment statement--that is not possible.
Error, invalid input: g expects its 1st argument, x, to be of type evaln(integer), but received b := 4.5
If you pass an unassigned name, the error message indicates the problem.
Error, invalid input: g expects its 1st argument, x, to be of type evaln(integer), but received c := c
The argument is explicitly unevaluated, and is a name with an integer value.
The argument is explicitly unevaluated, and is a name, but not with an integer value. Again note the meaningful error message.
Error, invalid input: f expects its 1st argument, x, to be of type name(integer), but received b := 4.5
An explicitly unevaluated name does not match a parameter of type evaln (it never did) or evaln(...).
g('b');
For more information, see the following help topics: type, evaln, name.
|
Fadi Zaher leads a team of specialists in index, multi-asset, and factor-based investing at Legal & General Investment Management in London.
Douglas Breeden and Robert Lucas, a Nobel laureate in economics, provided the foundation of the consumption capital asset pricing model (CCAPM) in 1979 and 1978, respectively. Their model is an extension of the traditional capital asset pricing model (CAPM). It's best used as a theoretical model, but it can help to make sense of variation in financial asset returns over time, and in some cases, its results can be more relevant than those achieved through the CAPM model. Read on to discover how this model works and what it can tell you.
What Is CCAPM?
While the CAPM relies on the market portfolio's return in order to understand and predict future asset prices, the CCAPM relies on aggregate consumption. In the CAPM, risky assets create uncertainty in an investor's wealth, which is determined by the market portfolio (e.g., the S&P 500). In the CCAPM, on the other hand, risky assets create uncertainty in consumption—what an investor will spend becomes uncertain because his or her wealth (i.e., income and property) is uncertain as a result of a decision to invest in risky assets.
In the CAPM, the risk premium on the market portfolio measures the price of risk, while the beta states the quantity of risk. In the CCAPM, on the other hand, the quantity of market risk is measured by the movements of the risk premium with consumption growth. Thus, the CCAPM explains how much the entire stock market changes relative to the consumption growth.
Is CCAPM Useful?
While the CCAPM rarely is used empirically, it is highly relevant in theoretical terms. Indeed, the CCAPM is not used in the real world the way the standard CAPM is. Therefore, a firm evaluating a project or the cost of capital is more likely to use the CAPM than the CCAPM. The major reason for this is that the CCAPM tends to perform poorly on empirical grounds. This may be because a proportion of consumers do not actively take part in the stock market and, therefore, the basic link between consumption and stock returns assumed by the CCAPM cannot hold. For this reason, the CCAPM may perform better than the CAPM for people who hold stocks.
From an academic point of view, the CCAPM is more widely used than the CAPM. This is because it incorporates many forms of wealth beyond stock market wealth and provides a framework for understanding variation in financial asset returns over many time periods. This provides an extension of the CAPM, which only takes into account one-period asset returns. The CCAPM also provides a fundamental understanding of the relation between wealth and consumption and an investor's risk aversion.
Calculating CCAPM
A simplified version of the CCAPM can take a linear representation between a risky asset (a stock, for example) and the market risk premium. However, the difference is the definition of the so-called implied risk-free rate, implied market return and the consumption beta. Therefore, the formula for CCAPM is as follows:
\begin{aligned} &r_a = r_f + \beta_c ( r_m - r_f ) \\ &\textbf{where:} \\ &r_a = \text{expected returns on risky asset (e.g. a stock)} \\ &r_f = \text{implied risk-free rate (e.g. 3-month Treasury bill)} \\ &r_m = \text{implied expected market return} \\ &r_m - r_f = \text{implied market risk premium} \\ &\beta_c = \text{consumption beta of the asset} \\ \end{aligned}
The implied returns and risk premium are determined by the investors' consumption growth and risk aversion. Moreover, the risk premium defines the compensation that investors require for buying a risky asset. As in the standard CAPM, the model links the returns of a risky asset to its systematic risk (market risk). The systematic risk is provided by the consumption beta.
The consumption beta is defined as:
\begin{aligned} &\beta_c = \frac { \text{Covariance between } r_a \text{ and consumption growth} }{ \text{Covariance between } r_m \text{ and consumption growth}} \\ \end{aligned}
As shown below, a higher consumption beta implies a higher expected return on the risk asset.
In the CCAPM, an asset is riskier if it pays less when consumption is low (savings are high). The consumption beta is 1 if the risky assets move perfectly with the consumption growth. A consumption beta of 2 would increase an asset's returns by 2% if the market rose by 1%, and would fall by 2% if the market fell by 1%.
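As an illustrative sketch (not from the article), the consumption beta defined above can be estimated from historical series as a ratio of covariances. The helper names and the toy data here are hypothetical:

```python
def covariance(xs, ys):
    # Sample covariance of two equal-length return series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)

def consumption_beta(asset_returns, market_returns, consumption_growth):
    # beta_c = Cov(r_a, growth) / Cov(r_m, growth), as defined above.
    return (covariance(asset_returns, consumption_growth) /
            covariance(market_returns, consumption_growth))

# Hypothetical toy series: the asset moves exactly twice as much as the
# market, so its consumption beta is twice the market's, i.e. 2.0.
growth = [0.010, 0.020, 0.015, 0.005]
market = [0.020, 0.050, 0.030, 0.000]
asset  = [0.040, 0.100, 0.060, 0.000]
print(consumption_beta(asset, market, growth))  # 2.0
```

In practice the series would be historical asset returns, market returns, and per-capita consumption growth over the same periods.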
The consumption beta can be determined by statistical methods. An empirical study, "Risk and Return: Consumption Beta Versus Market Beta" (1984), by Gregory Mankiw and Matthew Shapiro tested the movements of the United States' consumption and stock returns on the New York Stock Exchange and on the S&P 500 Index between 1959 and 1982. The study suggests that the CCAPM implies a higher risk-free rate than the CAPM, while the CAPM provides a higher market risk (beta), as shown in Figure 2.
Measures CAPM CCAPM
Figure 2: Test of the CAPM and CCAPM. Source: "Risk and Return: Consumption Beta Versus Market Beta"
The question is, how much would the return on a risky asset be at the risk-free rate and beta in Table 1? Figure 3 illustrates an experiment on the required returns of a risky asset at different market returns (column 1). The required returns are calculated by using the CAPM and CCAPM formulas.
For example, if the market return is 3%, the market risk premium is -2.66 multiplied by the consumption beta 1.85 plus the risk-free rate (5.66%). This yields a required return of 0.74%. By contrast, the CAPM implies that the required return should be 16.17% when the market return is 3%.
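The 0.74% figure can be reproduced directly from the CCAPM formula. This Python sketch (the function name is mine) plugs in the experiment's numbers, r_f = 5.66% and a consumption beta of 1.85:

```python
def ccapm_return(market_return, risk_free, consumption_beta):
    # r_a = r_f + beta_c * (r_m - r_f); all rates in percent.
    return risk_free + consumption_beta * (market_return - risk_free)

# Figures from the experiment above: r_f = 5.66, beta_c = 1.85, r_m = 3.
required = ccapm_return(3.0, 5.66, 1.85)
print(round(required, 2))  # 0.74
```

Because the market risk premium (3 − 5.66 = −2.66) is negative here, the high beta drags the required return below the risk-free rate, which is exactly the contrast with the CAPM that the experiment highlights.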
Market Return Stock Return - CAPM Stock Return - CCAPM
Figure 3: Experiment on returns of a risky asset
The two cases of market return at 1% and 2% do not necessarily imply that investing in a risky asset is rewarded with a positive return. This, however, contradicts the fundamental aspects of risk-return requirements.
CCAPM Isn't Perfect
The CCAPM, like the CAPM, has been criticized because it relies on only one parameter. Because many different variables are known to empirically affect the pricing of assets, several models with multifactors, such as the arbitrage pricing theory, were created.
Another problem specific to the CCAPM is that it has led to two puzzles: the equity premium puzzle (EPP) and the risk-free rate puzzle (RFRP). The EPP shows that investors have to be extremely risk-averse in order to imply the existence of a market risk premium. The RFRP says that investors save in Treasury bills despite the low rate of return, which has been documented with data from most industrialized countries in the world.
The CCAPM remedies some of the weaknesses of the CAPM. Moreover, it directly bridges macro-economy and financial markets, provides an understanding of investors' risk aversion and links the investment decision with wealth and consumption.
Duke. "Doug Breeden." Accessed June 15, 2021.
The Nobel Prize. "Robert E. Lucas Jr." Accessed June 15, 2021.
National Bureau of Economic Research. "Risk and Return: Consumption Versus Market Beta," Pages 13-14. Accessed June 15, 2021.
|
Train Image Classification Network Robust to Adversarial Examples - MATLAB & Simulink - MathWorks Australia
Test Network with Adversarial Inputs
Train Robust Network
Test Robust Network
Neural networks can be susceptible to a phenomenon known as adversarial examples [1], where very small changes to an input can cause it to be misclassified. These changes are often imperceptible to humans.
Techniques for creating adversarial examples include the fast gradient sign method (FGSM) [2] and the basic iterative method (BIM) [3], also known as projected gradient descent [4]. These techniques can significantly degrade the accuracy of a network.
You can use adversarial training [5] to train networks that are robust to adversarial examples. This example shows how to:
Train an image classification network.
Investigate network robustness by generating adversarial examples.
Train an image classification network that is robust to adversarial examples.
The digitTrain4DArrayData function loads images of handwritten digits and their digit labels. Create an arrayDatastore object for the images and the labels, and then use the combine function to make a single datastore containing all the training data.
Extract the class names.
Define an image classification network.
Create the function modelLoss, listed at the end of the example, that takes as input a dlnetwork object and a mini-batch of input data with corresponding labels and returns the loss and the gradients of the loss with respect to the learnable parameters in the network.
Specify the training options. Train for 30 epochs with a mini-batch size of 100 and a learning rate of 0.01.
% Evaluate the model loss, gradients, and state.
[net,velocity] = sgdmupdate(net,gradients,velocity,learnRate);
Test the classification accuracy of the network by evaluating network predictions on a test data set.
Create a minibatchqueue object containing the test data.
mbqTest = minibatchqueue(dsTest, ...
Predict the classes of the test data using the trained network and the modelPredictions function defined at the end of this example.
YPred = modelPredictions(net,mbqTest,classes);
acc = mean(YPred == TTest)
The network accuracy is very high.
Apply adversarial perturbations to the input images and see how doing so affects the network accuracy.
You can generate adversarial examples using techniques such as FGSM and BIM. FGSM is a simple technique that takes a single step in the direction of the gradient {\nabla }_{X}L\left(X,T\right) of the loss L with respect to the input X and the true label T. The adversarial example is calculated as
{\mathit{X}}_{\mathrm{adv}}=\mathit{X}+ϵ\cdot \mathrm{sign}\left({\nabla }_{\mathit{X}}\mathit{L}\left(\mathit{X},\mathit{T}\right)\right)
The parameter ϵ controls how different the adversarial examples look from the original images. In this example, the values of the pixels are between 0 and 1, so an ϵ value of 0.1 alters each individual pixel value by up to 10% of the range. The value of ϵ depends on the image scale. For example, if your image is instead between 0 and 255, you need to multiply this value by 255.
BIM is a simple improvement to FGSM which applies FGSM over multiple iterations and applies a threshold. After each iteration, the BIM clips the perturbation to ensure the magnitude does not exceed ϵ. This method can yield adversarial examples with less distortion than FGSM. For more information about generating adversarial examples, see Generate Untargeted and Targeted Adversarial Examples for Image Classification.
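Setting aside the MATLAB implementation later in this example, the BIM update itself can be sketched in a few lines of pure Python. Here `grad_fn` is a hypothetical stand-in for the gradient of the loss with respect to the input:

```python
def sign(v):
    # Sign of a scalar: -1, 0, or 1.
    return (v > 0) - (v < 0)

def bim(x, grad_fn, epsilon, alpha, num_iter):
    # Basic iterative method: repeated FGSM steps, with the accumulated
    # perturbation clipped to [-epsilon, epsilon] after each iteration.
    delta = [0.0] * len(x)
    for _ in range(num_iter):
        g = grad_fn([xi + di for xi, di in zip(x, delta)])
        delta = [min(epsilon, max(-epsilon, di + alpha * sign(gi)))
                 for di, gi in zip(delta, g)]
    return [xi + di for xi, di in zip(x, delta)]

# Toy input with a hypothetical constant-gradient loss: the first pixel is
# pushed up and the second pushed down, each by at most epsilon = 0.1.
x_adv = bim([0.2, 0.8], lambda x: [1.0, -1.0],
            epsilon=0.1, alpha=0.02, num_iter=20)
print([round(v, 3) for v in x_adv])  # [0.3, 0.7]
```

With num_iter set to 1 and alpha equal to epsilon, this reduces to a single FGSM step, mirroring the relationship between the two methods described above.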
Create adversarial examples using the BIM. Set epsilon to 0.1.
For the BIM, the size of the perturbation is controlled by the parameter \alpha, representing the step size in each iteration, because the BIM usually takes many smaller FGSM steps in the direction of the gradient.
Define the step size alpha and the number of iterations.
numAdvIter = 20;
Use the adversarialExamples function (defined at the end of this example) to compute adversarial examples using the BIM on the test data set. This function also returns the new predictions for the adversarial images.
[XAdv,YPredAdv] = adversarialExamples(net,mbqTest,epsilon,alpha,numAdvIter,classes);
Compute the accuracy of the network on the adversarial example data.
accAdversarial = mean(YPredAdv == TTest)
accAdversarial = 0.0114
visualizePredictions(XAdv,YPredAdv,TTest);
You can see that the accuracy is severely degraded by the BIM, even though the image perturbation is hardly visible.
You can train a network to be robust against adversarial examples. One popular method is adversarial training. Adversarial training involves applying adversarial perturbations to the training data during the training process [4] [5].
FGSM adversarial training is a fast and effective technique for training a network to be robust to adversarial examples. The FGSM is similar to the BIM, but it takes a single larger step in the direction of the gradient to generate an adversarial image.
Adversarial training involves applying the FGSM technique to each mini-batch of training data. However, for the training to be effective, these criteria must apply:
The FGSM training method must use a randomly initialized perturbation instead of a perturbation that is initialized to zero.
For the network to be robust to perturbations of size ϵ, perform FGSM training with a value slightly larger than ϵ. For this example, during adversarial training, you perturb the images using step size \alpha =1.25ϵ.
Train a new network with FGSM adversarial training. Start by using the same untrained network architecture as in the original network.
netRobust = dlnetwork(lgraph);
Define the adversarial training parameters. Set the number of iterations to 1, as the FGSM is equivalent to the BIM with a single iteration. Randomly initialize the perturbation and perturb the images using alpha.
initialization = "random";
alpha = 1.25*epsilon;
lineLossRobustTrain = animatedline(Color=[0.85 0.325 0.098]);
Train the robust network using a custom training loop and the same training options as previously defined. This loop is the same as in the previous custom training, but with added adversarial perturbation.
% If training on a GPU, then convert data to gpuArray.
% Apply adversarial perturbations to the data.
X = basicIterativeMethod(netRobust,X,T,alpha,epsilon, ...
numIter,initialization);
[loss,gradients,state] = dlfeval(@modelLoss,netRobust,X,T);
[netRobust,velocity] = sgdmupdate(netRobust,gradients,velocity,learnRate);
addpoints(lineLossRobustTrain,iteration,loss)
Calculate the accuracy of the robust network on the digits test data. The accuracy of the robust network can be slightly lower than the nonrobust network on the standard data.
YPred = modelPredictions(netRobust,mbqTest,classes);
accRobust = mean(YPred == TTest)
accRobust = 0.9970
Compute the adversarial accuracy.
[XAdv,YPredAdv] = adversarialExamples(netRobust,mbqTest,epsilon,alpha,numAdvIter,classes);
accRobustAdv = mean(YPredAdv == TTest)
accRobustAdv = 0.7366
The adversarial accuracy of the robust network is much better than that of the original network.
The modelLoss function takes as input a dlnetwork object net and a mini-batch of input data X with corresponding labels T and returns the loss, the gradients of the loss with respect to the learnable parameters in net, and the network state. To compute the gradients automatically, use the dlgradient function.
Input Gradients Function
The modelGradientsInput function takes as input a dlnetwork object net and a mini-batch of input data X with corresponding labels T and returns the gradients of the loss with respect to the input data X.
function gradient = modelGradientsInput(net,X,T)
T = squeeze(T);
T = dlarray(T,'CB');
[YPred] = forward(net,X);
Extract the image data from the incoming cell array and concatenate into a four-dimensional array.
% Extract label data from the cell and concatenate.
Adversarial Examples Function
Generate adversarial examples for a minibatchqueue object using the basic iterative method (BIM) and predict the class of the adversarial examples using the trained network net.
function [XAdv,predictions] = adversarialExamples(net,mbq,epsilon,alpha,numIter,classes)
XAdv = {};
predictions = [];
iteration = 0;
% Generate adversarial images for each mini-batch.
while hasdata(mbq)
iteration = iteration + 1;
[X,T] = next(mbq);
initialization = "zero";
% Generate adversarial images.
XAdvMBQ = basicIterativeMethod(net,X,T,alpha,epsilon, ...
numIter,initialization);
% Predict the class of the adversarial images.
YPred = predict(net,XAdvMBQ);
YPred = onehotdecode(YPred,classes,1)';
XAdv{iteration} = XAdvMBQ;
predictions = [predictions; YPred];
end
XAdv = cat(4,XAdv{:});
end
Basic Iterative Method Function
Generate adversarial examples using the basic iterative method (BIM). This method runs for multiple iterations with a threshold at the end of each iteration to ensure that the entries do not exceed epsilon. When numIter is set to 1, this is equivalent to using the fast gradient sign method (FGSM).
function XAdv = basicIterativeMethod(net,X,T,alpha,epsilon,numIter,initialization)
% Initialize the perturbation.
if initialization == "zero"
delta = zeros(size(X),like=X);
else
delta = epsilon*(2*rand(size(X),like=X) - 1);
end
for i = 1:numIter
gradient = dlfeval(@modelGradientsInput,net,X+delta,T);
delta = delta + alpha*sign(gradient);
% Clip the perturbation so that its entries do not exceed epsilon.
delta(delta > epsilon) = epsilon;
delta(delta < -epsilon) = -epsilon;
end
XAdv = X + delta;
end
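For readers outside MATLAB, the same clipped iterative attack can be sketched in NumPy; here grad_fn stands in for any routine that returns the gradient of the loss with respect to the input (an illustrative assumption, not part of the original example):

```python
import numpy as np

def basic_iterative_method(grad_fn, X, alpha, epsilon, num_iter, init="zero"):
    """Basic Iterative Method: repeated FGSM sign steps, with an
    L-infinity projection back into the epsilon-ball around X."""
    if init == "zero":
        delta = np.zeros_like(X)
    else:  # random start inside the epsilon-ball
        delta = epsilon * (2.0 * np.random.rand(*X.shape) - 1.0)
    for _ in range(num_iter):
        delta = delta + alpha * np.sign(grad_fn(X + delta))
        delta = np.clip(delta, -epsilon, epsilon)  # keep ||delta||_inf <= epsilon
    return X + delta
```

With num_iter = 1 and init = "zero" this reduces to a single FGSM step, matching the remark above.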
Visualize Prediction Results Function
Visualize images along with their predicted classes. Correct predictions use green text. Incorrect predictions use red text.
function visualizePredictions(XTest,YPred,TTest)
figure
height = 4;
width = 4;
numImages = height*width;
% Select random images from the data.
indices = randperm(size(XTest,4),numImages);
XTest = extractdata(XTest);
XTest = XTest(:,:,:,indices);
YPred = YPred(indices);
TTest = TTest(indices);
% Plot images with the predicted label.
for i = 1:numImages
subplot(height,width,i)
imshow(XTest(:,:,:,i))
% If the prediction is correct, use green. If the prediction is false, use red.
if YPred(i) == TTest(i)
color = "\color{green}";
else
color = "\color{red}";
end
title("Prediction: " + color + string(YPred(i)))
end
end
[5] Wong, Eric, Leslie Rice, and J. Zico Kolter. “Fast Is Better than Free: Revisiting Adversarial Training.” Preprint, submitted January 12, 2020. https://arxiv.org/abs/2001.03994.
dlfeval | dlnetwork | dlgradient | arrayDatastore | minibatchqueue
|
𝔛C-elements in groups and Dietzmann classes.
Maier, Rudolf, Rogério, José Robério (1999)
6-BFC groups
Cliff David, James Wiegold (2006)
A finiteness condition on automorphism groups
Federico Menegazzo, Derek J. S. Robinson (1987)
Anti-CC-groups and anti-PC-groups.
Russo, Francesco (2007)
CC-Groups with periodic central factor.
Miguel Ganzález, J. Otal, J.M. Pena (1990)
Conjugately dense subgroups in generalized FC-groups.
Erfanian, Ahmad, Russo, Francesco (2009)
FC-nilpotent groups and a Frattini-like subgroup
M. J. Tomkinson (1992)
FC-gruppi e proiettività
Some lattice properties of FC-groups and generalized FC-groups are considered in this paper.
FC-nilpotent products of hypercentral groups.
B. AMBERG, S. Franciosi, F. Giovanni (1995)
Groups preserving the cardinality of subsets product under permutations
Yang Kok Kim (1996)
Groups with boundedly finite automorphism classes
Derek J. S. Robinson, James Wiegold (1984)
Groups with complete lattice of nearly normal subgroups.
Maria De Falco, Carmela Musella (2002)
A subgroup H of a group G is said to be nearly normal in G if it has finite index in its normal closure in G. A well-known theorem of B.H. Neumann states that every subgroup of a group G is nearly normal if and only if the commutator subgroup G' is finite. In this article, groups in which the intersection and the join of each system of nearly normal subgroups are likewise nearly normal are considered, and some sufficient conditions for such groups to be finite-by-abelian are given.
Groups with finite automorphism classes of subgroups
John Lennox, Federico Menegazzo, Howard Smith, James Wiegold (1988)
B.H. Neumann (1955/1956)
Carlo Casolo (1989)
Groups with finitely many conjugacy classes of non-normal subgroups of infinite rank
Maria De Falco, Francesco de Giovanni, Carmela Musella (2013)
It is proved that if a locally soluble group of infinite rank has only finitely many non-trivial conjugacy classes of subgroups of infinite rank, then all its subgroups are normal.
Groups with many nilpotent subgroups
Patrizia Longobardi, Mercede Maj, Avinoam Mann, Akbar Rhemtulla (1996)
Javier Otal, Juan Manuel Peña (1989)
Groups with nearly modular subgroup lattice
Francesco de Giovanni, Carmela Musella (2001)
A subgroup H of a group G is nearly normal if it has finite index in its normal closure {H}^{G}. A relevant theorem of B. H. Neumann states that groups in which every subgroup is nearly normal are precisely those with finite commutator subgroup. We shall say that a subgroup H of a group G is nearly modular if H has finite index in a modular element of the lattice of subgroups of G. Thus nearly modular subgroups are the natural lattice-theoretic translation of nearly normal subgroups. In this article we...
Groups with Restricted Conjugacy Classes
de Giovanni, F., Russo, A., Vincenzi, G. (2002)
Let FC^0 be the class of all finite groups, and for each nonnegative integer n define by induction the group class FC^(n+1) consisting of all groups G such that for every element x the factor group G/C_G(⟨x⟩^G) has the property FC^n. Thus FC^1-groups are precisely groups with finite conjugacy classes, and the class FC^n obviously contains all finite groups and all nilpotent groups with class at most n. In this paper the known theory of FC-groups is taken as a model, and it is shown that...
|
Mathematical Reasoning, Popular Questions: CBSE Class 11-humanities ENGLISH, English Grammar - Meritnation
2\right) The value of \frac{\mathrm{log} 49\sqrt{7}+\mathrm{log} 25\sqrt{5}-\mathrm{log} 4\sqrt{2}}{\mathrm{log} 17.5} is\phantom{\rule{0ex}{0ex}}\left(a\right) 5\phantom{\rule{0ex}{0ex}}\left(b\right) 2\phantom{\rule{0ex}{0ex}}\left(c\right) \frac{5}{2}\phantom{\rule{0ex}{0ex}}\left(d\right) \frac{3}{2}
\stackrel{\to }{i}+\stackrel{\to }{j}-\stackrel{\to }{k}
\stackrel{\to }{i}-3\stackrel{\to }{j}+4\stackrel{\to }{k} is
10\sqrt{3}
6\sqrt{30}
\frac{3}{2}\sqrt{30}
3\sqrt{30}
{\left(\frac{{d}^{2}y}{d{x}^{2}}\right)}^{\frac{4}{3}}-5{\left(\frac{dy}{dx}\right)}^{5}=0
\stackrel{\to }{a} and \stackrel{\to }{b}
\stackrel{\to }{a}+\stackrel{\to }{b}
\stackrel{\to }{a} and \stackrel{\to }{b}
\frac{2\mathrm{\pi }}{3}
\mathbit{Q}\mathbf{.}\mathbf{10}\mathbf{.}\mathbf{ }\mathbf{ } Solve the system of equations :\phantom{\rule{0ex}{0ex}}\phantom{\rule{0ex}{0ex}}{\mathrm{log}}_{a} x {\mathrm{log}}_{a} \left(xyz\right) = 48\phantom{\rule{0ex}{0ex}}{\mathrm{log}}_{a} y {\mathrm{log}}_{a} \left(xyz\right) = 12, a > 0, a \ne 1.\phantom{\rule{0ex}{0ex}}{\mathrm{log}}_{a} z {\mathrm{log}}_{a} \left(xyz\right) = 84
|
Volume 6 Issue 2 | Annals of K-Theory
Ann. K-Theory 6 (2), 157-238, (2021) DOI: 10.2140/akt.2021.6.157
KEYWORDS: universal coefficient theorem, C∗-algebra classification, Kirchberg algebra, 19K35, 46L35, 46L80, 46M18
We study the classification of group actions on C*-algebras up to equivariant KK-equivalence. We show that any group action is equivariantly KK-equivalent to an action on a simple, purely infinite C*-algebra. We show that a conjecture of Izumi is equivalent to an equivalence between cocycle conjugacy and equivariant KK-equivalence for actions of torsion-free amenable groups on Kirchberg algebras. Let G be a cyclic group of prime order. We describe its actions up to equivariant KK-equivalence, based on previous work by Manuel Köhler. In particular, we classify actions of G on stabilised Cuntz algebras in the equivariant bootstrap class up to equivariant KK-equivalence.
The real cycle class map
Jens Hornbostel, Matthias Wendt, Heng Xie, Marcus Zibrowius
KEYWORDS: I-cohomology, singular cohomology, Chow–Witt rings, real realization, real cellular varieties, 14F25, 19G12
The classical cycle class map for a smooth complex variety sends cycles in the Chow ring to cycles in the singular cohomology ring. We study two cycle class maps for smooth real varieties: the map from the I-cohomology ring to singular cohomology induced by the signature, and a new cycle class map defined on the Chow–Witt ring. For both maps, we establish compatibility with pullbacks, pushforwards and cup products. As a first application of these general results, we show that both cycle class maps are isomorphisms for cellular varieties.
Positive scalar curvature and an equivariant Callias-type index theorem for proper actions
Hao Guo, Peter Hochs, Varghese Mathai
KEYWORDS: Callias operator, Index, positive scalar curvature, proper group action, 19K56, 46L80, 53C27
For a proper action by a locally compact group G on a manifold M with a G-equivariant Spin-structure, we obtain obstructions to the existence of complete G-invariant Riemannian metrics with uniformly positive scalar curvature. We focus on the case where M/G is noncompact. The obstructions follow from a Callias-type index theorem, and relate to positive scalar curvature near hypersurfaces in M. We also deduce some other applications of this index theorem. If G is a connected Lie group, then the obstructions to positive scalar curvature vanish under a mild assumption on the action. In that case, we generalise a construction by Lawson and Yau to obtain complete G-invariant Riemannian metrics with uniformly positive scalar curvature, under an equivariant bounded geometry assumption.
An index theorem for quotients of Bergman spaces on egg domains
Mohammad Jabbari, Xiang Tang
KEYWORDS: Toeplitz operators, index theorem, egg domains, 19K33, 19K56
We prove a K-homology index theorem for Toeplitz operators obtained from the multishifts of Bergman spaces on several classes of egg-like domains. This generalizes our earlier work with Douglas and Yu for the unit ball.
|
Convert direction cosine matrix to angle of attack and sideslip angle - Simulink - MathWorks Switzerland
Direction Cosine Matrix Body to Wind to Alpha and Beta
Convert direction cosine matrix to angle of attack and sideslip angle
The Direction Cosine Matrix Body to Wind to Alpha and Beta block converts a 3-by-3 direction cosine matrix (DCM) to angle of attack and sideslip angle. The DCM performs the coordinate transformation of a vector in body axes (ox0, oy0, oz0) into a vector in wind axes (ox2, oy2, oz2). For more information on the direction cosine matrix, see Algorithms.
This implementation generates angles that lie between ±90 degrees.
Direction cosine matrix to transform body-fixed vectors to wind-fixed vectors, specified as a 3-by-3 direction cosine matrix.
α β — Angle of attack and sideslip angle
Angle of attack and sideslip angle, returned as a vector, in radians.
The DCM matrix performs the coordinate transformation of a vector in body axes (ox0, oy0, oz0) into a vector in wind axes (ox2, oy2, oz2). The order of the axis rotations required to bring this about is:
\begin{array}{l}\left[\begin{array}{c}o{x}_{2}\\ o{y}_{2}\\ o{z}_{2}\end{array}\right]=DC{M}_{wb}\left[\begin{array}{c}o{x}_{0}\\ o{y}_{0}\\ o{z}_{0}\end{array}\right]\\ \\ \left[\begin{array}{c}o{x}_{2}\\ o{y}_{2}\\ o{z}_{2}\end{array}\right]=\left[\begin{array}{ccc}\mathrm{cos}\beta & \mathrm{sin}\beta & 0\\ -\mathrm{sin}\beta & \mathrm{cos}\beta & 0\\ 0& 0& 1\end{array}\right]\left[\begin{array}{ccc}\mathrm{cos}\alpha & 0& \mathrm{sin}\alpha \\ 0& 1& 0\\ -\mathrm{sin}\alpha & 0& \mathrm{cos}\alpha \end{array}\right]\left[\begin{array}{c}o{x}_{0}\\ o{y}_{0}\\ o{z}_{0}\end{array}\right]\end{array}
DC{M}_{wb}=\left[\begin{array}{ccc}\mathrm{cos}\alpha \mathrm{cos}\beta & \mathrm{sin}\beta & \mathrm{sin}\alpha \mathrm{cos}\beta \\ -\mathrm{cos}\alpha \mathrm{sin}\beta & \mathrm{cos}\beta & -\mathrm{sin}\alpha \mathrm{sin}\beta \\ -\mathrm{sin}\alpha & 0& \mathrm{cos}\alpha \end{array}\right]
To determine angles from the DCM, the following equations are used:
\begin{array}{l}\alpha =\text{asin}\left(-DCM\left(3,1\right)\right)\\ \\ \beta =\text{asin}\left(DCM\left(1,2\right)\right)\end{array}
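As a quick numerical check of the two extraction equations above, a Python sketch (plain nested lists; the helper names are ours, not part of the block reference) can rebuild the DCM from α and β and recover the angles:

```python
import math

def dcm_from_alpha_beta(alpha, beta):
    """Body-to-wind DCM built from angle of attack alpha and sideslip beta,
    matching the product of the two rotations given in the Algorithms section."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    return [[ ca * cb,  sb,  sa * cb],
            [-ca * sb,  cb, -sa * sb],
            [-sa,      0.0,  ca     ]]

def angles_from_dcm(dcm):
    """Recover (alpha, beta) via alpha = asin(-DCM(3,1)), beta = asin(DCM(1,2))."""
    return math.asin(-dcm[2][0]), math.asin(dcm[0][1])
```

Because both recoveries use asin, the round trip is exact only for angles within ±90 degrees, consistent with the note above.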
Direction Cosine Matrix Body to Wind | Direction Cosine Matrix to Rotation Angles | Direction Cosine Matrix to Wind Angles | Rotation Angles to Direction Cosine Matrix | Wind Angles to Direction Cosine Matrix
|
Linear Inequalities, Popular Questions: CBSE Class 11-science ENGLISH, English Grammar - Meritnation
\frac{\mathrm{n}\left(\mathrm{n}+ 1\right)}{2}
\frac{\mathrm{n}\left(\mathrm{n}+ 1\right)\left(2\mathrm{n} +1\right)}{6}
\frac{{\mathrm{n}}^{2}{\left(\mathrm{n}+ 1\right)}^{2}}{4}
\frac{\mathrm{n}\left(\mathrm{n} + 1\right)\left(\mathrm{n} +2\right)}{6}
2x + y >= 4
2x -3y <= 6
a) 3p − 8 = 16
b) (9/2)y = 27/8
c) a/13 + 6 = 5
d) 5(m + 7) = 40
e) (5/2)n = 60
Please answer all of them because I have confusion in all of them. Please.
CONVERT THE NUMBERS TO SCIENTIFIC NOTATION:
0.6275 X 105-1
What do we call the brackets:
( ), [ ], { }
How many positive integer values can x take that satisfy the inequality (x - 8) (x - 10) (x - 12)..(x - 100)
Solve for the value of x
Add 2x+5y+4 and 2y+5 and 6x+4y
What type of inequality is 6x^2-9x+13
SOLVE THE INEQUALITY -3x+2y≥-6
x-1/x>=2 find x
Good morning and Eid Mubarak. Please help with this question on linear inequalities.
for what value of a belongs to R , the quadratic equation (ax2 + 1)x2 - (a-1)x +(a2-a-2 )= 0 will have one root positive and other root negative ?
Urgent, please: can you explain the solution of exercise 6.1, Q24?
The smallest positive integer x such that (x + 1) + (x + 2) + (x + 3)........+(x +2015)
In how many ways can we partition 6 into ordered summands? [For example, 3 can be partitioned in 3 ways: 1+2, 2+1 and 1+1+1.]
(how to do this math????)
|
A moment sequence in the q-world
The aim of the paper is to present some initial results about a possible generalization of moment sequences to a so-called q-calculus. A characterization of such a q-analogue in terms of appropriate positivity conditions is also investigated. Using the result due to Maserick and Szafraniec, we adapt a classical description of Hausdorff moment sequences in terms of positive definiteness and complete monotonicity to the q-situation. This makes a link between q-positive definiteness and q-complete...
A multi-dimensional Hausdorff moment problems regularization by finite moments.
Ang, D.D., Gorenflo, R., Trong, D.D. (1999)
A note on a problem arising from risk theory
Ulrich Abel, Ovidiu Furdui, Ioan Gavrea, Mircea Ivan (2010)
In this note we give an answer to a problem of Gheorghiță Zbăganu that arose from the study of the properties of the moments of the iterates of the integrated tail operator.
A Pick function related to an inequality for the entropy function.
Berg, Christian (2001)
A q-analogue of complete monotonicity
The aim of this paper is to give a q-analogue for complete monotonicity. We apply a classical characterization of Hausdorff moment sequences in terms of positive definiteness and complete monotonicity, adapted to the q-situation. The method due to Maserick and Szafraniec that does not need moments turns out to be useful. A definition of a q-moment sequence appears as a by-product.
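The complete-monotonicity criterion mentioned in these abstracts can be checked numerically: a Hausdorff moment sequence (a_n) satisfies (−1)^k (Δ^k a)_n ≥ 0 for all k, n. A small Python sketch (our illustration, not code from either paper):

```python
from math import comb

def signed_difference(a, k, n):
    """Compute (-1)^k (Delta^k a)_n = sum_j (-1)^j C(k,j) a(n+j);
    for a Hausdorff moment sequence this equals the integral of
    x^n (1-x)^k against the representing measure, hence is >= 0."""
    return sum((-1) ** j * comb(k, j) * a(n + j) for j in range(k + 1))
```

For a(n) = 1/(n+1), the moments of Lebesgue measure on [0,1], the signed difference equals the Beta integral n!k!/(n+k+1)! and is indeed nonnegative for every k and n.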
An Algorithm for the Hausdorff Moment Problem.
An example of a positive semidefinite double sequence which is not a moment sequence
Torben Maack Bisgaard (2004)
The first explicit example of a positive semidefinite double sequence which is not a moment sequence was given by Friedrich. We present an example with a simpler definition and more moderate growth as \left(m,n\right)\to \infty.
An extension of a Meyer's theorem
Dotto, Oclide José (1983/1984)
Asymptotic behavior of moment sequences.
Lozada-Chang, Li-Vang (2005)
Backward extensions of hyperexpansive operators
Zenon J. Jabłoński, Il Bong Jung, Jan Stochel (2006)
The concept of k-step full backward extension for subnormal operators is adapted to the context of completely hyperexpansive operators. The question of existence of k-step full backward extension is solved within this class of operators with the help of an operator version of the Levy-Khinchin formula. Some new phenomena in comparison with subnormal operators are found and related classes of operators are discussed as well.
Concerning local flatness in the moment problem
Johnson, Gordon G. (1969)
Corrigendum to "The Moment Problem in the Space C...(S)''.
Marek A. Kowalski, Zbigniew Sawon (1984)
Ein Pólyasches Momentenproblem und umhüllende asymptotische Potenzreihen.
Martin Schumacher (1975)
Erratum: On the positive definiteness of n (a sequence of exponentials).
Extreme positive definite double sequences which are not moment sequences.
From the fact that the two-dimensional moment problem is not always solvable, we can deduce that there must be extreme ray generators of the cone of positive definite double sequences which are not moment sequences. Such an argument does not lead to specific examples. In this paper it is shown how specific examples can be constructed if one is given an example of an N-extremal indeterminate measure in the one-dimensional moment problem (such examples exist in the literature). Konrad Schmüdgen had...
Fuss-Catalan numbers in noncommutative probability.
Mlotkowski, Wojciech (2010)
|
generate polynomial - Maple Help
generate polynomial from integer n by Z-adic expansion
genpoly(n, b, x);
genpoly computes the unique polynomial a\left(x\right) in \mathrm{ℤ}[x] from the integer n, with coefficients less than \frac{b}{2} in magnitude, such that \mathrm{subs}\left(x=b,a\left(x\right)\right)=n.
This is directly related to the b-adic expansion of an integer. If the base-b representation of the integer n is {c}_{0}+{c}_{1}b+⋯+{c}_{k}{b}^{k}, where the {c}_{i} are integers modulo b (using the symmetric representation), then the polynomial generated is {c}_{0}+{c}_{1}x+⋯+{c}_{k}{x}^{k}.
If n is a polynomial with integer coefficients then each integer coefficient is expanded into a polynomial. This polynomial, n, must be in fully expanded form.
The genpoly command is thread-safe as of Maple 15.
\mathrm{genpoly}\left(11,5,x\right)
1+2x
\mathrm{genpoly}\left(11{y}^{2}-13y+21,5,x\right)
\left(1+2x\right){y}^{2}+\left(-{x}^{2}+2x+2\right)y+{x}^{2}-x+1
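The symmetric b-adic expansion behind genpoly can be reproduced in a few lines of Python (an illustrative sketch, not Maple code; the function name is ours):

```python
def balanced_digits(n, b):
    """Z-adic (symmetric base-b) digits of n, least significant first,
    each digit chosen from the symmetric range around zero so that
    sum(d * b**i for i, d in enumerate(digits)) == n.
    Mirrors the coefficients produced by Maple's genpoly(n, b, x)."""
    digits = []
    while n != 0:
        r = n % b               # 0 .. b-1
        if r > b // 2:          # pick the symmetric representative
            r -= b
        digits.append(r)
        n = (n - r) // b
    return digits or [0]
```

For example, balanced_digits(11, 5) gives the coefficient list of 1 + 2x, and balanced_digits(-13, 5) gives that of 2 + 2x − x², matching the expanded coefficients in the genpoly example above.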
|
Popular Science Monthly/Volume 4/February 1874/Correspondence - Wikisource, the free online library
Mr. Gladstone's explanation of his own meaning must, of course, be accepted; and, inserting a special reference to it in the stereotype-plate, I here append his letter, that the reader may not be misled by my comments. Paying due respect to Mr. Gladstone's wish to avoid controversy, I will say no more here than seems needful to excuse myself for having misconstrued his words. "Evolution," as I understand it, and "creation," as usually understood, are mutually exclusive: if there has been that special formation and adjustment commonly meant by creation, there has not been evolution; if there has been evolution, there has not been special creation. Similarly, unchangeable laws, as conceived by a man of science, negative the current conception of divine government, which implies interferences or special providences: if the laws are unchangeable, they are never traversed by divine volitions suspending them: if God alters the predetermined course of things from time to time, the laws are not unchangeable. I assumed that Mr. Gladstone used the terms in these mutually-exclusive senses; but my assumption appears to have been a wrong one. This is manifest to me on reading what he instances as parallel antitheses; seeing that the terms of his parallel antitheses are not mutually exclusive. That which excludes "liberty," and is excluded by it, is despotism; and that which excludes "law and order," and is excluded by them, is anarchy. Were these mutually-exclusive conceptions used, Mr. Gladstone's parallel would be transformed thus:
The subject presents very grave difficulties under any view of the case. For, if we assume the existence of an ultimate solid particle, universal force cannot be conserved, because the interference of solid particles must destroy motion, and therefore force. Hence, in that view of the case, we must have a continual destruction, and therefore a continual creation of force. This conclusion I am not willing to accept, as I fully believe in the indestructibility of both matter and force.
The least objectionable view that I have been able to arrive at is, that all ideas are sensations excited primarily by material impressions, and hence that we can have absolutely no idea of space independent of matter.
And, as a stellar system in the universe of matter consists of millions of aggregated masses which are individually very small in proportion to the inter-spaces, so I believe that the chemical molecule is very small in proportion to the space between the molecules. And as each sun has (probably) various attendants (the planets), so each chemical molecule consists in general of several different bodies that may be easily separated (in consequence of the space between them being of the same order as the spaces between the molecules). But, like the different bodies of the solar system, or of a stellar system, each of these bodies is a compound mass consisting of millions of units of a different order, holding probably the same relation to the chemical molecule that the chemical molecule does to the matter of the solar system; and so on, both upward and downward, to infinity.
There is, therefore, as I conceive, absolutely no limit to the division of matter, physically as well as mathematically; but our organization is such that, of the infinite series of terms in which it manifests itself, we can know, experimentally, only two: viz., the stellar universe, constituting the first order, of which the stars and the planets are the units; and, secondly, the chemical molecule, which constitutes the second order.
According to this view, the material universe might be represented, in orders, by the following series: d^{-m}x, . . . d^{-3}x, d^{-2}x, d^{-1}x, d^{0}x, dx, d^{2}x, d^{3}x, . . . d^{n-1}x, d^{n}x, in which x is the unknown quantity, which we call matter, and m and n are both infinitely great.
In this series, d^{0}x, or simply x, would represent all tangible matter; and dx, which is the next term descending, would represent chemical molecules and their constituents, the atoms of all known and unknown elementary bodies.
As in the analogous expression used in mathematical investigations, d^{2}x is infinitely small in respect to dx, which in its turn is infinitely small in respect to d^{0}x, and so on; yet each represents the elements of which the next preceding order is constituted. So in the physical world, as represented by the above series: the units in x, which are represented by the visible worlds in space, are infinitely large when compared with the units in dx, which are represented by the chemical molecules; the units in each preceding order, in both series, being aggregations of the units in the next succeeding order.
This view of the constitution of matter, though it necessitates the assumption of its actual infinite division, yet, to my mind, involves much less absurdity than to suppose it imparticled, and yet "elastic to the core," or to suppose that the chemical molecule, or even the chemical atom, is an absolute solid.
J. E. Hendricks.
Des Moines, Iowa, November 21, 1873.
MATTER, FORCE, AND INERTIA.
Judge Stallo's valuable contributions to the Popular Science Monthly, on the "Primary Concepts of Modern Science," can scarcely fail to give the reader a clearer conception of elementary being. But it seems to me that his criticism of Mr. Faraday's "complex forces," and Baine's assertion that "matter, force, and inertia, are substantially three names for the same fact," is clearly illogical.
On the ground that the existence of all reality lies in relation and contrast, the author assumes that inertia and force are ever coexisting contrasts. He says: "We know nothing of force except by its contrast with mass, or (what is the same thing) inertia; and, conversely, as I have already pointed out in my first article, we know nothing of mass, except by its relation to force. Mass, inertia (or, as it is sometimes though inaccurately called, matter per se) is indistinguishable from absolute nothingness; for matter reveals its presence, or evinces its reality, only in its action, its force, its tension or motion.... It is impossible, therefore, to construct matter by a mere synthesis of forces."
|
90B22 Queues and service
90B06 Transportation, logistics
90B15 Network models, stochastic
90B18 Communication networks
90B20 Traffic problems
90B25 Reliability, availability, maintenance, inspection
90B35 Scheduling theory, deterministic
90B85 Continuous location
90B90 Case-oriented studies
A bulk queueing system under N-policy with bilevel service delay discipline and start-up time.
Muh, David C.R. (1993)
Hans Daduna (1991)
A diffusion model for two parallel queues with processor sharing: Transient behavior and asymptotics.
Knessl, Charles (1999)
A discrete single server queue with Markovian arrivals and phase type group services.
Alfa, Attahiru Sule, Dolhun, K.Laurie, Chakravarthy, S. (1995)
Muthukrishnan Senthil Kumar (2011)
This paper concerns a discrete time Geo[X]/G/1 retrial queue with general retrial time in which all the arriving customers require first essential service with probability {\alpha }_{0}, while only some of them demand one of the other optional services: type-r (r = 1, 2, 3, ..., M) service with probability {\alpha }_{r}. The system state distribution, the orbit size and the system size distributions are obtained in terms of generating functions. The stochastic decomposition law holds for the proposed model. Performance measures...
A discrete-time system with service control and repairs
Ivan Atencia (2014)
A finite capacity bulk service queue with single vacation and Markovian arrival process.
Gupta, U.C., Sikdar, Karabi (2004)
A finite capacity queue with Markovian arrivals and two servers with group services.
Chakravarthy, S., Alfa, Attahiru Sule (1994)
Marcin Woźniak, Wojciech M. Kempa, Marcin Gabryel, Robert K. Nowicki (2014)
In this paper, application of an evolutionary strategy to positioning a GI/M/1/N-type finite-buffer queueing system with exhaustive service and a single vacation policy is presented. The examined object is modeled by a conditional joint transform of the first busy period, the first idle time and the number of packets completely served during the first busy period. A mathematical model is defined recursively by means of input distributions. In the paper, an analytical study and numerical experiments...
A fixed-size batch service queue with vacations.
Lee, Ho Woo, Lee, Soon Seok, Chae, K.C. (1996)
A. Lambros, A. Pombortsis, A. Sideridis, D. Tambouratzis (1989)
A heavy-traffic theorem for the GI/G/1 queue with a Pareto-type service time distribution.
Cohen, J.W. (1998)
A linear programming approach to error bounds for random walks in the quarter-plane
Jasper Goseling, Richard J. Boucherie, Jan-Kees van Ommeren (2016)
We consider the steady-state behavior of random walks in the quarter-plane, in particular, the expected value of performance measures that are component-wise linear over the state space. Since the stationary distribution of a random walk is in general not readily available we establish upper and lower bounds on performance in terms of another random walk with perturbed transition probabilities, for which the stationary distribution is a geometric product-form. The Markov reward approach as developed...
A new approach to time-dependent queueing problem with arrivals having random memory
Sharda, Indu Garg (1985/1986)
A non-Markovian queueing system with a variable number of channels.
Rosson, Hong-Tham T., Dshalalow, Jewgeni H. (2003)
A non-regenerative model of a redundant repairable system: Bounds for the unavailability and asymptotical insensitivity to the lifetime distribution.
Kovalenko, Igor N. (1996)
A note on calculating steady state results for an M/M/k queuing system when the ratio of the arrival rate to the service rate is large.
Pasternack, Barry Alan, Drezner, Zvi (1998)
A note on the convexity of the expected queue length of the M/M/s queue with respect to the arrival rate: A third proof.
Mehrez, A., Brimberg, J. (1992)
|
1D dirac operators with special periodic potentials
Plamen Djakov, Boris Mityagin (2012)
Twofold completeness of the system of eigenfunctions and associated functions of the differential operator {L}_{2}\left(\lambda \right)
M. Trifunovič (1970)
Toshiaki Kusahara, Hiroyuki Usami (2000)
A boundary value problem arising in the flow of a viscous fluid
Lee, Tai-Chi (1978)
A boundary value problem for non-linear differential equations with a retarded argument
Józef Wenety Myjak (1973)
A boundary value problem of fractional order at resonance.
A characterization of isochronous centres in terms of symmetries.
Emilio Freire, Gasull, Armengol, Guillamon, Antoni 2 (2004)
We present a description of isochronous centres of planar vector fields X by means of their groups of symmetries. More precisely, given a normalizer U of X (i.e., [X,U]= µ X, where µ is a scalar function), we provide a necessary and sufficient isochronicity condition based on µ. This criterion extends the result of Sabatini and Villarini that establishes the equivalence between isochronicity and the existence of commutators ([X,U]= 0). We put also special emphasis on the mechanical aspects of isochronicity;...
A characterization of the existence of solutions to some higher order boundary value problems
Gabriele Bonanno, Salvatore A. Marano (1995)
The aim of this short note is to present a theorem that characterizes the existence of solutions to a class of higher order boundary value problems. This result completely answers a question previously set by the authors in [Differential Integral Equations 6 (1993), 1119–1123].
A class of competing models with discrete delay
Marín, Julio, Cavani, Mario (2002)
|
de Laval nozzle - Wikipedia
A de Laval nozzle (or convergent-divergent nozzle, CD nozzle or con-di nozzle) is a tube which is pinched in the middle, making a carefully balanced, asymmetric hourglass shape. It is used to accelerate a compressible fluid to supersonic speeds in the axial (thrust) direction, by converting the thermal energy of the flow into kinetic energy. De Laval nozzles are widely used in some types of steam turbines and rocket engine nozzles. It also sees use in supersonic jet engines.
Diagram of a de Laval nozzle, showing approximate flow velocity (v), together with the effect on temperature (T) and pressure (p)
Similar flow properties have been applied to jet streams within astrophysics.[1]
3 Conditions for operation
4 Analysis of gas flow in de Laval nozzles
5 Exhaust gas velocity
6 Mass flow rate
Longitudinal section of RD-107 rocket engine (Tsiolkovsky State Museum of the History of Cosmonautics)
Giovanni Battista Venturi designed converging-diverging tubes, known as Venturi tubes, to experiment with the effects of fluid pressure reduction while flowing through chokes (the Venturi effect). The German engineer and inventor Ernst Körting supposedly switched to a converging-diverging nozzle in his steam jet pumps by 1878 after using convergent nozzles, but these nozzles remained a company secret.[2] Later, the Swedish engineer Gustaf de Laval applied his own converging-diverging nozzle design for use on his impulse turbine in the year 1888.[3][4][5][6]
De Laval's convergent-divergent nozzle was first applied in a rocket engine by Robert Goddard. Most modern rocket engines that employ hot-gas combustion use de Laval nozzles.
Its operation relies on the different properties of gases flowing at subsonic, sonic, and supersonic speeds. The speed of a subsonic flow of gas will increase if the pipe carrying it narrows because the mass flow rate is constant. The gas flow through a de Laval nozzle is isentropic (gas entropy is nearly constant). In a subsonic flow sound will propagate through the gas. At the "throat", where the cross-sectional area is at its minimum, the gas velocity locally becomes sonic (Mach number = 1.0), a condition called choked flow. As the nozzle cross-sectional area increases, the gas begins to expand and the gas flow increases to supersonic velocities where a sound wave will not propagate backward through the gas as viewed in the frame of reference of the nozzle (Mach number > 1.0).
As the gas exits the throat, the increase in cross-sectional area allows it to undergo a Joule–Thomson expansion in which the gas expands at supersonic speeds from high to low pressure, pushing the velocity of the mass flow beyond sonic speed.
Although the nozzle geometries of a rocket engine and a jet engine look different at first glance, the same essentials apply to the corresponding cross-sections: the combustion chamber of a jet engine must likewise feed a "throat" (narrowing) in the direction of the gas-jet outlet, with the turbine wheel of the first stage positioned immediately behind that narrowing, while any further turbine stages sit at the larger outlet cross-sections of the nozzle, where the flow accelerates.
Conditions for operation
A de Laval nozzle will only choke at the throat if the pressure and mass flow through the nozzle is sufficient to reach sonic speeds, otherwise no supersonic flow is achieved, and it will act as a Venturi tube; this requires the entry pressure to the nozzle to be significantly above ambient at all times (equivalently, the stagnation pressure of the jet must be above ambient).
In addition, the pressure of the gas at the exit of the expansion portion of the exhaust of a nozzle must not be too low. Because pressure cannot travel upstream through the supersonic flow, the exit pressure can be significantly below the ambient pressure into which it exhausts, but if it is too far below ambient, then the flow will cease to be supersonic, or the flow will separate within the expansion portion of the nozzle, forming an unstable jet that may "flop" around within the nozzle, producing a lateral thrust and possibly damaging it.
In practice, ambient pressure must be no higher than roughly 2–3 times the pressure in the supersonic gas at the exit for supersonic flow to leave the nozzle.
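The choking condition itself can be checked numerically. The sketch below (not from the article; the function name is illustrative) computes the standard isentropic-flow result for the critical stagnation-to-throat pressure ratio at which the throat reaches Mach 1:

```python
import math

def critical_pressure_ratio(gamma):
    """Stagnation-to-throat pressure ratio p0/p* at which the flow chokes (Ma = 1),
    from the isentropic relation p0/p* = ((gamma + 1) / 2) ** (gamma / (gamma - 1))."""
    return ((gamma + 1) / 2) ** (gamma / (gamma - 1))

# For air (gamma ~ 1.4) the nozzle chokes once the entry (stagnation)
# pressure exceeds roughly 1.89x the throat pressure.
print(critical_pressure_ratio(1.4))
```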
Analysis of gas flow in de Laval nozzles
The analysis of gas flow through de Laval nozzles involves a number of concepts and assumptions:
For simplicity, the gas is assumed to be an ideal gas.
The gas flow is isentropic (i.e., at constant entropy). As a result, the flow is reversible (frictionless and no dissipative losses), and adiabatic (i.e., no heat enters or leaves the system).
The gas flow is constant (i.e., in steady state) during the period of the propellant burn.
The gas flow is along a straight line from gas inlet to exhaust gas exit (i.e., along the nozzle's axis of symmetry)
The gas flow behaviour is compressible since the flow is at very high velocities (Mach number > 0.3).
Exhaust gas velocity
As the gas enters a nozzle, it is moving at subsonic velocities. As the cross-sectional area contracts the gas is forced to accelerate until the axial velocity becomes sonic at the nozzle throat, where the cross-sectional area is the smallest. From the throat the cross-sectional area then increases, allowing the gas to expand and the axial velocity to become progressively more supersonic.
The linear velocity of the exiting exhaust gases can be calculated using the following equation:[7][8][9]
{\displaystyle v_{e}={\sqrt {{\frac {TR}{M}}\cdot {\frac {2\gamma }{\gamma -1}}\cdot \left[1-\left({\frac {p_{e}}{p}}\right)^{\frac {\gamma -1}{\gamma }}\right]}},}
where:
{\displaystyle v_{e}} = exhaust velocity at nozzle exit,
{\displaystyle T} = absolute temperature of inlet gas,
{\displaystyle R} = universal gas law constant,
{\displaystyle M} = the gas molecular mass (also known as the molecular weight),
{\displaystyle \gamma } = {\displaystyle {\frac {c_{p}}{c_{v}}}} = isentropic expansion factor ({\displaystyle c_{p}} and {\displaystyle c_{v}} are the specific heats of the gas at constant pressure and constant volume, respectively),
{\displaystyle p_{e}} = absolute pressure of exhaust gas at nozzle exit,
{\displaystyle p} = absolute pressure of inlet gas.
Some typical values of the exhaust gas velocity ve for rocket engines burning various propellants are:
1,700 to 2,900 m/s (3,800 to 6,500 mph) for liquid monopropellants,
2,900 to 4,500 m/s (6,500 to 10,100 mph) for liquid bipropellants,
2,100 to 3,200 m/s (4,700 to 7,200 mph) for solid propellants.
As a note of interest, ve is sometimes referred to as the ideal exhaust gas velocity because it is based on the assumption that the exhaust gas behaves as an ideal gas.
As an example calculation using the above equation, assume that the propellant combustion gases are: at an absolute pressure entering the nozzle p = 7.0 MPa and exit the rocket exhaust at an absolute pressure pe = 0.1 MPa; at an absolute temperature of T = 3500 K; with an isentropic expansion factor γ = 1.22 and a molar mass M = 22 kg/kmol. Using those values in the above equation yields an exhaust velocity ve = 2802 m/s, or 2.80 km/s, which is consistent with above typical values.
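The worked example can be reproduced directly from the exhaust-velocity equation; the sketch below (function name illustrative) plugs in the same values:

```python
import math

def exhaust_velocity(T, M, gamma, p, p_e, R=8314.5):
    """Ideal exhaust velocity [m/s] from the isentropic nozzle equation.

    T      : inlet (chamber) temperature [K]
    M      : molar mass [kg/kmol]
    gamma  : ratio of specific heats
    p, p_e : inlet and exit absolute pressures [Pa]
    R      : universal gas law constant [J/(kmol*K)]
    """
    return math.sqrt((T * R / M) * (2 * gamma / (gamma - 1))
                     * (1 - (p_e / p) ** ((gamma - 1) / gamma)))

# Values from the worked example in the text:
v_e = exhaust_velocity(T=3500, M=22, gamma=1.22, p=7.0e6, p_e=0.1e6)
print(round(v_e))  # ~2802 m/s, consistent with the text
```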
Technical literature often interchanges without note the universal gas law constant R, which applies to any ideal gas, with the gas law constant Rs, which only applies to a specific individual gas of molar mass M. The relationship between the two constants is Rs = R/M.
Mass flow rate
In accordance with conservation of mass the mass flow rate of the gas throughout the nozzle is the same regardless of the cross-sectional area.[10]
{\displaystyle {\dot {m}}={\frac {Ap_{t}}{\sqrt {T_{t}}}}\cdot {\sqrt {{\frac {\gamma }{R}}M}}\cdot (1+{\frac {\gamma -1}{2}}\mathrm {Ma} ^{2})^{-{\frac {\gamma +1}{2(\gamma -1)}}}}
where:
{\displaystyle {\dot {m}}} = mass flow rate,
{\displaystyle A} = cross-sectional area,
{\displaystyle p_{t}} = total pressure,
{\displaystyle T_{t}} = total temperature,
{\displaystyle \gamma } = {\displaystyle {\frac {c_{p}}{c_{v}}}} = isentropic expansion factor,
{\displaystyle R} = gas constant,
{\displaystyle \mathrm {Ma} } = Mach number,
{\displaystyle M} = molar mass of the gas.
When the throat is at sonic speed Ma = 1 where the equation simplifies to:
{\displaystyle {\dot {m}}={\frac {Ap_{t}}{\sqrt {T_{t}}}}\cdot {\sqrt {\frac {\gamma M}{R}}}\cdot ({\frac {\gamma +1}{2}})^{-{\frac {\gamma +1}{2(\gamma -1)}}}}
By Newton's third law of motion the mass flow rate can be used to determine the force exerted by the expelled gas by:
{\displaystyle F={\dot {m}}\cdot v_{e}}
where:
{\displaystyle F} = force exerted,
{\displaystyle {\dot {m}}} = mass flow rate,
{\displaystyle v_{e}} = exit velocity at nozzle exit.
In aerodynamics, the force exerted by the nozzle is defined as the thrust.
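Combining the choked mass-flow equation with F = ṁ·v_e gives a rough thrust estimate. The sketch below reuses the gas properties from the exhaust-velocity example; the 0.01 m² throat area is an assumed value for illustration only, not from the article:

```python
import math

def choked_mass_flow(A, p_t, T_t, gamma, M, R=8314.5):
    """Mass flow rate [kg/s] through a choked throat (Ma = 1), using the
    simplified equation with A the throat cross-sectional area [m^2]."""
    return (A * p_t / math.sqrt(T_t)) * math.sqrt(gamma * M / R) \
        * ((gamma + 1) / 2) ** (-(gamma + 1) / (2 * (gamma - 1)))

# Same combustion gas as the exhaust-velocity example; throat area assumed.
mdot = choked_mass_flow(A=0.01, p_t=7.0e6, T_t=3500, gamma=1.22, M=22)
thrust = mdot * 2802  # F = mdot * v_e, with v_e from the earlier example
print(mdot, thrust)
```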
^ C.J. Clarke and B. Carswell (2007). Principles of Astrophysical Fluid Dynamics (1st ed.). Cambridge University Press. pp. 226. ISBN 978-0-521-85331-6.
^ Krehl, Peter O. K. (24 September 2008). History of Shock Waves, Explosions and Impact: A Chronological and Biographical Reference. ISBN 9783540304210. Archived from the original on 10 September 2021. Retrieved 10 September 2021.
de Laval, Carl Gustaf Patrik, "Steam turbine," Archived 2018-01-11 at the Wayback Machine U.S. Patent no. 522,066 (filed: 1889 May 1 ; issued: 1894 June 26)
^ Theodore Stevens and Henry M. Hobart (1906). Steam Turbine Engineering. MacMillan Company. pp. 24–27. Available on-line here Archived 2014-10-19 at the Wayback Machine in Google Books.
^ Garrett Scaife (2000). From Galaxies to Turbines: Science, Technology, and the Parsons Family. Taylor & Francis Group. p. 197. Available on-line here Archived 2014-10-19 at the Wayback Machine in Google Books.
^ "Richard Nakka's Equation 12". Archived from the original on 2017-07-15. Retrieved 2008-01-14.
^ "Robert Braeuning's Equation 1.22". Archived from the original on 2006-06-12. Retrieved 2006-04-15.
^ Hall, Nancy. "Mass Flow Choking". NASA. Archived from the original on 8 August 2020. Retrieved 29 May 2020.
Exhaust gas velocity calculator
Retrieved from "https://en.wikipedia.org/w/index.php?title=De_Laval_nozzle&oldid=1088193812"
|
Convert Euler-Rodrigues vector to rotation angles - Simulink - MathWorks Switzerland
Rodrigues to Rotation Angles
Convert Euler-Rodrigues vector to rotation angles
The Rodrigues to Rotation Angles block converts the three-element Euler-Rodrigues vector into rotation angles. The rotation used in this block is a passive transformation between two coordinate systems. For more information on Euler-Rodrigues vectors, see Algorithms.
Rotation angles, in radians, from which to determine the Euler-Rodrigues vector. Quaternion scalar is the first element.
For the 'ZYX', 'ZXY', 'YXZ', 'YZX', 'XYZ', and 'XZY' rotations, the block generates an R2 angle that lies between ±pi/2 radians (±90 degrees), and R1 and R3 angles that lie between ±pi radians (±180 degrees).
For the 'ZYZ', 'ZXZ', 'YXY', 'YZY', 'XYX', and 'XZX' rotations, the block generates an R2 angle that lies between 0 and pi radians (180 degrees), and R1 and R3 angles that lie between ±pi (±180 degrees). However, in the latter case, when R2 is 0, R3 is set to 0 radians.
\stackrel{⇀}{b}
\stackrel{\to }{b}=\left[\begin{array}{ccc}{b}_{x}& {b}_{y}& {b}_{z}\end{array}\right]
\begin{array}{l}{b}_{x}=\mathrm{tan}\left(\frac{1}{2}\theta \right){s}_{x},\\ {b}_{y}=\mathrm{tan}\left(\frac{1}{2}\theta \right){s}_{y},\\ {b}_{z}=\mathrm{tan}\left(\frac{1}{2}\theta \right){s}_{z}\end{array}
\stackrel{⇀}{s}
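The relations above (b = tan(θ/2)·s, with s the unit rotation axis and θ the rotation angle) can be sketched in plain code; the function names below are illustrative and not part of the MathWorks block interface:

```python
import math

def axis_angle_to_rodrigues(axis, theta):
    """Euler-Rodrigues vector b = tan(theta/2) * s for a rotation axis."""
    n = math.sqrt(sum(a * a for a in axis))
    s = [a / n for a in axis]          # normalize the rotation axis
    t = math.tan(theta / 2)
    return [t * si for si in s]

def rodrigues_to_axis_angle(b):
    """Recover the unit axis and rotation angle from a Rodrigues vector."""
    m = math.sqrt(sum(bi * bi for bi in b))
    theta = 2 * math.atan(m)
    axis = [bi / m for bi in b] if m else [0.0, 0.0, 1.0]
    return axis, theta

# A 90-degree rotation about z gives b = (0, 0, tan(45 deg)) = (0, 0, 1).
b = axis_angle_to_rodrigues([0, 0, 1], math.pi / 2)
print(b)
```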
Direction Cosine Matrix to Rodrigues | Rodrigues to Direction Cosine Matrix | Rodrigues to Quaternions | Quaternions to Rodrigues | Rotation Angles to Rodrigues
|
Won‐Young Kim; Paul G. Richards; David P. Schaff; Karl Koch
Bulletin of the Seismological Society of America December 20, 2016, Vol.107, 1-21. doi:https://doi.org/10.1785/0120160111
Effects of Laterally Varying Mantle Lid Velocity Gradient and Crustal Thickness on Pn Geometric Spreading with Application to the North Korean Test Site
Xiao‐Bi Xie; Thorne Lay
Bulletin of the Seismological Society of America December 20, 2016, Vol.107, 22-33. doi:https://doi.org/10.1785/0120160203
A Spectrogram‐Based Method of Rg Detection for Explosion Monitoring
Colin T. O’Rourke; G. Eli Baker
Apparent Explosion Moments from Rg Waves Recorded on SPE
Carene Larmat; Esteban Rougier; Howard J. Patton
Bulletin of the Seismological Society of America November 29, 2016, Vol.107, 43-50. doi:https://doi.org/10.1785/0120160163
Analysis of Rayleigh‐Wave Particle Motion from Active Seismics
Giancarlo Dal Moro; Nassir S. Al‐Arifi; Sayed S. R. Moustafa
Seismic Reconstruction of the 2012 Palisades Rockfall Using the Analytical Solution to Lamb’s Problem
Lucia Gualtieri; Göran Ekström
Toppling Analysis of the Echo Cliffs Precariously Balanced Rock
Swetha Veeraraghavan; Kenneth W. Hudnut; Swaminathan Krishnan
Jessica C. Hawthorne; Jean‐Paul Ampuero; Mark Simons
Velocity Structure of the Northern Mississippi Embayment Sediments, Part I: Teleseismic P‐Wave Spectral Ratios Analysis
Akram Mostafanejad; Charles A. Langston
Bulletin of the Seismological Society of America November 29, 2016, Vol.107, 97-105. doi:https://doi.org/10.1785/0120150339
Velocity Structure of the Northern Mississippi Embayment Sediments, Part II: Inversion of Teleseismic P‐Wave Transfer Functions
Bulletin of the Seismological Society of America January 10, 2017, Vol.107, 106-116. doi:https://doi.org/10.1785/0120150340
Proxy‐Based VS30 Estimation in Central and Eastern North America
Grace A. Parker; Joseph A. Harmon; Jonathan P. Stewart; Youssef M. A. Hashash; Albert R. Kottke; Ellen M. Rathje; Walter J. Silva; Kenneth W. Campbell
Adjusting Central and Eastern North America Ground‐Motion Intensity Measures between Sites with Different Reference‐Rock Site Conditions
David M. Boore; Kenneth W. Campbell
Bulletin of the Seismological Society of America December 20, 2016, Vol.107, 132-148. doi:https://doi.org/10.1785/0120160208
Simulation of Earthquake Ground Motions in the Eastern United States Using Deterministic Physics‐Based and Site‐Based Stochastic Approaches
Sanaz Rezaeian; Stephen Hartzell; Xiaodan Sun; Carlos Mendoza
Comparison of Synthetic Pseudoabsolute Response Spectral Acceleration (PSA) for Four Crustal Regions within Central and Eastern North America (CENA)
Jennifer Dreiling; Marius P. Isken; Walter D. Mooney
Luke Philip Ogweno; Chris H. Cramer
Bulletin of the Seismological Society of America November 29, 2016, Vol.107, 180-197. doi:https://doi.org/10.1785/0120160033
Ground Motions for Induced Earthquakes in Oklahoma
Emrah Yenier; Gail M. Atkinson; Danielle F. Sumy
Richard C. Alt, II; Mark D. Zoback
Seismic Hazard for Cuba: A New Approach
Leonardo Alvarez; Conrad Lindholm; Madelín Villalón
Influence of Twenty Years of Research on Ground‐Motion Prediction Equations on Probabilistic Seismic Hazard in Italy
S. Barani; D. Albarello; M. Massa; D. Spallarossa
Squeezing Kappa (κ) Out of the Transportable Array: A Strategy for Using Bandlimited Data in Regions of Sparse Seismicity
Olga‐Joan Ktenidou; Walter J. Silva; Robert B. Darragh; Norman A. Abrahamson; Tadahiro Kishida
Stefano Maranò; Benjamin Edwards; Graziano Ferrari; Donat Fäh
Rosemary Fayjaloun; Mathieu Causse; Christophe Voisin; Cecile Cornou; Fabrice Cotton
Wenqi Du; Gang Wang
Relations between Some Horizontal‐Component Ground‐Motion Intensity Measures Used in Practice
David M. Boore; Tadahiro Kishida
Stephen Hartzell; Leonardo Ramírez‐Guzmán; Mark Meremonte; Alena Leeds
Hongwei Wang; Ruizhi Wen; Yefei Ren
Bulletin of the Seismological Society of America February 01, 2017, Vol.107, 359-371. doi:https://doi.org/10.1785/0120160083
Modeling Strong‐Motion Recordings of the 2010 Mw 8.8 Maule, Chile, Earthquake with High Stress‐Drop Subevents and Background Slip
Fracture Alignments in Marine Sediments Off Vancouver Island from Ps Splitting Analysis
Takashi Tonegawa; Koichiro Obana; Yojiro Yamamoto; Shuichi Kodaira; Kelin Wang; Michael Riedel; Honn Kao; George Spence
Viscoelastic Block Models of the North Anatolian Fault: A Unified Earthquake Cycle Representation of Pre‐ and Postseismic Geodetic Observations
Phoebe M. R. DeVries; Plamen G. Krastev; James F. Dolan; Brendan J. Meade
Magnitude Assessment for the Historical Earthquake Based on Strong‐Motion Simulation and Liquefaction Analysis: Case of the 1894 Atalanti Earthquake, Greece
T. Novikova; E. Mouzakiotis; V. K. Karastathis
Vasso Saltogianni; Tuncay Taymaz; Seda Yolsal‐Çevikbilen; Tuna Eken; Fanis Moschas; Stathis Stiros
Microtremor Array Measurements for Shallow S‐Wave Profiles at Strong‐Motion Stations in Hatay and Kahramanmaras Provinces, Southern Turkey
Özgür Tuna Özmen; Hiroaki Yamanaka; Mehmet Akif Alkan; Ulubey Çeken; Taylan Öztürk; Ahmet Sezen
Implementation of the Square‐Root‐Impedance Method to Estimate Site Amplification in Iran Using Random Profile Generation
Mojtaba Jahanandish; Hamid Zafarani; Amir Hossein Shafiee
A Note on Adding Viscoelasticity to Earthquake Simulators
Yan Zhang; Chi‐yuen Wang; Li‐yun Fu; Rui Yan; Xuezhong Chen
Frequency‐Dependent Effects of 2D Random Velocity Heterogeneities in the Mantle Lid on Pn Geometric Spreading
Florin Pavel; Radu Vacareanu
Mark W. Stirling; F. Ramon Zuniga
Erratum to Regional Stochastic GMPEs in Low‐Seismicity Areas: Scaling and Aleatory Variability Analysis—Application to the French Alps
Enumerating Plausible Multifault Ruptures in Complex Fault Systems with Physical Constraints
Low‐Frequency Marsquakes and Where to Find Them: Back Azimuth Determination Using a Polarization Analysis Approach
|
Harmonic series (mathematics) - Simple English Wikipedia, the free encyclopedia
infinite series of the reciprocals of the positive integers
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n}}=1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+\cdots }
Divergent means that as you add more terms the sum never stops getting bigger. It does not go towards a single finite value.
Infinite means that you can always add another term. There is no final term to the series.
Its name comes from the idea of harmonics in music: the wavelengths of the overtones of a vibrating string are 1/2, 1/3, 1/4, etc., of the string's fundamental wavelength. Apart from the first term, every term of the series is the harmonic mean of the terms either side of it. The phrase harmonic mean also comes from music.
The fact that the harmonic series diverges was first proven in the 14th century by Nicole Oresme,[1] but was forgotten. Proofs were given in the 17th century by Pietro Mengoli,[2] Johann Bernoulli,[3] and Jacob Bernoulli.[4][5]
Harmonic sequences have been used by architects. In the Baroque period architects used them in the proportions of floor plans, elevations, and in the relationships between architectural details of churches and palaces.[6]
One way to prove divergence is to compare the harmonic series with another divergent series, where each denominator is replaced with the next-largest power of two:
{\displaystyle {\begin{aligned}&{}1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+{\frac {1}{6}}+{\frac {1}{7}}+{\frac {1}{8}}+{\frac {1}{9}}+\cdots \\[12pt]\geq {}&1+{\frac {1}{2}}+{\frac {1}{\color {red}{\mathbf {4} }}}+{\frac {1}{4}}+{\frac {1}{\color {red}{\mathbf {8} }}}+{\frac {1}{\color {red}{\mathbf {8} }}}+{\frac {1}{\color {red}{\mathbf {8} }}}+{\frac {1}{8}}+{\frac {1}{\color {red}{\mathbf {16} }}}+\cdots \end{aligned}}}
Each term of the harmonic series is greater than or equal to the corresponding term of the second series, and therefore the sum of the harmonic series must be greater than or equal to the sum of the second series. However, the sum of the second series is infinite:
{\displaystyle {\begin{aligned}&{}1+\left({\frac {1}{2}}\right)+\left({\frac {1}{4}}\!+\!{\frac {1}{4}}\right)+\left({\frac {1}{8}}\!+\!{\frac {1}{8}}\!+\!{\frac {1}{8}}\!+\!{\frac {1}{8}}\right)+\left({\frac {1}{16}}\!+\!\cdots \!+\!{\frac {1}{16}}\right)+\cdots \\[12pt]={}&1+{\frac {1}{2}}+{\frac {1}{2}}+{\frac {1}{2}}+{\frac {1}{2}}+\cdots =\infty \end{aligned}}}
It follows (by the comparison test) that the sum of the harmonic series must be infinite as well. More precisely, the comparison above proves that
{\displaystyle \sum _{n=1}^{2^{k}}{\frac {1}{n}}\geq 1+{\frac {k}{2}}}
This proof, proposed by Nicole Oresme in around 1350, is considered to be a high point of medieval mathematics. It is still a standard proof taught in mathematics classes today.
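Oresme's grouping argument yields the bound H_(2^k) ≥ 1 + k/2, which can be checked numerically for small k:

```python
def harmonic(n):
    """n-th partial sum of the harmonic series."""
    return sum(1.0 / i for i in range(1, n + 1))

# Verify Oresme's lower bound H_(2^k) >= 1 + k/2 for the first few k.
for k in range(1, 16):
    assert harmonic(2 ** k) >= 1 + k / 2
print("bound holds for k = 1..15")
```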
Illustration of the integral test.
It is possible to prove that the harmonic series diverges by comparing its sum with an improper integral. Consider the arrangement of rectangles shown in the figure to the right. Each rectangle is 1 unit wide and 1/n units high, so the total area of the infinite number of rectangles is the sum of the harmonic series:
{\displaystyle {\begin{array}{c}{\text{area of}}\\{\text{rectangles}}\end{array}}=1+{\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{4}}+{\frac {1}{5}}+\cdots }
The total area under the curve y = 1/x from 1 to infinity is given by a divergent improper integral:
{\displaystyle {\begin{array}{c}{\text{area under}}\\{\text{curve}}\end{array}}=\int _{1}^{\infty }{\frac {1}{x}}\,dx=\infty .}
Since this area is entirely contained within the rectangles, the total area of the rectangles must be infinite as well. This proves that
{\displaystyle \sum _{n=1}^{k}{\frac {1}{n}}>\int _{1}^{k+1}{\frac {1}{x}}\,dx=\ln(k+1).}
The generalization of this argument is known as the integral test.
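The rectangle-versus-integral comparison can likewise be checked directly for a few values of k:

```python
import math

def harmonic(n):
    """n-th partial sum of the harmonic series."""
    return sum(1.0 / i for i in range(1, n + 1))

# The rectangles of the integral test give H_k > ln(k + 1) for every k.
for k in (1, 10, 1000, 100000):
    assert harmonic(k) > math.log(k + 1)
print("integral-test bound holds")
```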
Rate of divergence
The harmonic series diverges very slowly. For example, the sum of the first 10^43 terms is less than 100.[7] This is because the partial sums of the series have logarithmic growth. In particular,
{\displaystyle \sum _{n=1}^{k}{\frac {1}{n}}=\ln k+\gamma +\varepsilon _{k}\leq (\ln k)+1}
where γ is the Euler–Mascheroni constant and ε_k ~ 1/(2k), which approaches 0 as k goes to infinity. Leonhard Euler proved both this and also that the sum which includes only the reciprocals of primes diverges, that is:
{\displaystyle \sum _{p{\text{ prime }}}{\frac {1}{p}}={\frac {1}{2}}+{\frac {1}{3}}+{\frac {1}{5}}+{\frac {1}{7}}+{\frac {1}{11}}+{\frac {1}{13}}+{\frac {1}{17}}+\cdots =\infty .}
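The logarithmic growth of the partial sums can be illustrated by estimating γ directly; a minimal sketch:

```python
import math

def harmonic(n):
    """n-th partial sum of the harmonic series."""
    return sum(1.0 / i for i in range(1, n + 1))

# H_n - ln(n) converges to the Euler-Mascheroni constant gamma ~ 0.577216;
# for n = 10^6 the remaining error is about 1/(2n) = 5e-7.
n = 10 ** 6
gamma_est = harmonic(n) - math.log(n)
print(gamma_est)
```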
Main article: Harmonic number
The first thirty harmonic numbers

n | Hn expressed as a fraction | Hn (decimal)
1 | 1 | ~1
2 | 3/2 | ~1.5
3 | 11/6 | ~1.83333
4 | 25/12 | ~2.08333
5 | 137/60 | ~2.28333
6 | 49/20 | ~2.45
7 | 363/140 | ~2.59286
9 | 7129/2520 | ~2.82897
10 | 7381/2520 | ~2.92897
11 | 83711/27720 | ~3.01988
13 | 1145993/360360 | ~3.18013
17 | 42142223/12252240 | ~3.43955
18 | 14274301/4084080 | ~3.49511
19 | 275295799/77597520 | ~3.54774
23 | 444316699/118982864 | ~3.73429
24 | 1347822955/356948592 | ~3.77596
25 | 34052522467/8923714800 | ~3.81596
27 | 312536252003/80313433200 | ~3.89146
29 | 9227046511387/2329089562800 | ~3.96165
The finite partial sums of the diverging harmonic series,
{\displaystyle H_{n}=\sum _{k=1}^{n}{\frac {1}{k}},}
are called harmonic numbers.
The difference between Hn and ln n converges to the Euler–Mascheroni constant. The difference between any two harmonic numbers is never an integer. No harmonic numbers are integers, except for H1 = 1.[8]:p. 24[9]:Thm. 1
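The non-integrality of H_n for n > 1 can be spot-checked with exact rational arithmetic; a small sketch using Python's fractions module:

```python
from fractions import Fraction

def harmonic_fraction(n):
    """Exact n-th harmonic number as a rational number."""
    return sum((Fraction(1, k) for k in range(1, n + 1)), Fraction(0))

# H_1 = 1 is the only integer harmonic number; spot-check a range of n.
for n in range(2, 60):
    assert harmonic_fraction(n).denominator != 1
print(harmonic_fraction(4))  # 25/12, matching the table above
```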
See also: Riemann series theorem § Changing the sum
The first fourteen partial sums of the alternating harmonic series (black line segments) shown converging to the natural logarithm of 2 (red line).
{\displaystyle \sum _{n=1}^{\infty }{\frac {(-1)^{n+1}}{n}}=1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+{\frac {1}{5}}-\cdots }
is known as the alternating harmonic series. This series converges by the alternating series test. In particular, the sum is equal to the natural logarithm of 2:
{\displaystyle 1-{\frac {1}{2}}+{\frac {1}{3}}-{\frac {1}{4}}+{\frac {1}{5}}-\cdots =\ln 2.}
The alternating harmonic series, while conditionally convergent, is not absolutely convergent: if the terms in the series are systematically rearranged, in general the sum becomes different and, dependent on the rearrangement, possibly even infinite.
The alternating harmonic series formula is a special case of the Mercator series, the Taylor series for the natural logarithm.
A related series can be derived from the Taylor series for the arctangent:
{\displaystyle \sum _{n=0}^{\infty }{\frac {(-1)^{n}}{2n+1}}=1-{\frac {1}{3}}+{\frac {1}{5}}-{\frac {1}{7}}+\cdots ={\frac {\pi }{4}}.}
This is known as the Leibniz series.
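Both alternating sums can be approximated numerically; by the alternating series test the truncation error is below the first omitted term:

```python
import math

def alt_harmonic(n):
    """Partial sum of 1 - 1/2 + 1/3 - ... with n terms."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def leibniz(n):
    """Partial sum of 1 - 1/3 + 1/5 - ... with n terms."""
    return sum((-1) ** k / (2 * k + 1) for k in range(n))

print(alt_harmonic(10 ** 5))  # close to ln 2 ~ 0.693147
print(leibniz(10 ** 5))       # close to pi/4 ~ 0.785398
```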
General harmonic series
The general harmonic series is of the form
{\displaystyle \sum _{n=0}^{\infty }{\frac {1}{an+b}},}
where a ≠ 0 and b are real numbers, and b/a is not zero or a negative integer.
By the limit comparison test with the harmonic series, all general harmonic series also diverge.
p-series
A generalization of the harmonic series is the p-series (or hyperharmonic series), defined as
{\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{p}}}}
for any real number p. When p = 1, the p-series is the harmonic series, which diverges. Either the integral test or the Cauchy condensation test shows that the p-series converges for all p > 1 (in which case it is called the over-harmonic series) and diverges for all p ≤ 1. If p > 1 then the sum of the p-series is ζ(p), i.e., the Riemann zeta function evaluated at p.
The problem of finding the sum for p = 2 is called the Basel problem; Leonhard Euler showed it is π2/6. The value of the sum for p = 3 is called Apéry's constant, since Roger Apéry proved that it is an irrational number.
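The convergent cases can be checked against the known zeta values; a short sketch:

```python
import math

def p_series(p, n):
    """Partial sum of the p-series with n terms."""
    return sum(1 / k ** p for k in range(1, n + 1))

# Basel problem: zeta(2) = pi^2 / 6; the tail after n terms is about 1/n.
print(p_series(2, 10 ** 6))   # close to pi^2/6 ~ 1.644934
# Apery's constant: zeta(3) ~ 1.2020569...
print(p_series(3, 10 ** 4))
```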
ln-series
Related to the p-series is the ln-series, defined as
{\displaystyle \sum _{n=2}^{\infty }{\frac {1}{n(\ln n)^{p}}}}
for any positive real number p. This can be shown by the integral test to diverge for p ≤ 1 but converge for all p > 1.
φ-series
For any convex, real-valued function φ such that
{\displaystyle \limsup _{u\to 0^{+}}{\frac {\varphi \left({\frac {u}{2}}\right)}{\varphi (u)}}<{\frac {1}{2}},}
{\displaystyle \sum _{n=1}^{\infty }\varphi \left({\frac {1}{n}}\right)}
is convergent.[source?]
Random harmonic series
The random harmonic series
{\displaystyle \sum _{n=1}^{\infty }{\frac {s_{n}}{n}},}
where the s_n are independent, identically distributed random variables taking the values +1 and −1 with equal probability 1/2, is a well-known example in probability theory for a series of random variables that converges with probability 1. The fact of this convergence is an easy consequence of either the Kolmogorov three-series theorem or of the closely related Kolmogorov maximal inequality. Byron Schmuland of the University of Alberta further examined[10] the properties of the random harmonic series, and showed that the convergent series is a random variable with some interesting properties. In particular, the probability density function of this random variable evaluated at +2 or at −2 takes on the value 0.124999999999999999999999999999999999999999764..., differing from 1/8 by less than 10^−42. Schmuland's paper explains why this probability is so close to, but not exactly, 1/8. The exact value of this probability is given by the infinite cosine product integral C_2[11] divided by π.
Depleted harmonic series
Main article: Kempner series
The depleted harmonic series where all of the terms in which the digit 9 appears anywhere in the denominator are removed can be shown to converge and its value is less than 80.[12] In fact, when all the terms containing any particular string of digits (in any base) are removed the series converges.
The harmonic series can be counterintuitive. This is because it is a divergent series even though the terms of the series get smaller and go towards zero. The divergence of the harmonic series is the source of some paradoxes.
The "worm on the rubber band".[13] Suppose that a worm crawls along an infinitely-elastic one-meter rubber band at the same time as the rubber band is uniformly stretched. If the worm travels 1 centimeter per minute and the band stretches 1 meter per minute, will the worm ever reach the end of the rubber band? The answer, counterintuitively, is "yes", for after n minutes, the ratio of the distance travelled by the worm to the total length of the rubber band is
{\displaystyle {\frac {1}{100}}\sum _{k=1}^{n}{\frac {1}{k}}.}
Because the series gets arbitrarily large as n becomes larger, eventually this ratio must exceed 1, which implies that the worm reaches the end of the rubber band. However, the value of n at which this occurs must be extremely large: approximately e^100, a number exceeding 10^43 minutes (10^37 years). Although the harmonic series does diverge, it does so very slowly.
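The finishing time can be estimated from the approximation H_n ≈ ln n + γ: the worm reaches the end when (1/100)·H_n ≥ 1, i.e. near n = e^(100 − γ) minutes. A sketch:

```python
import math

GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

# The worm finishes when (1/100) * H_n >= 1.  Using H_n ~ ln(n) + gamma,
# that first happens near n = exp(100 - gamma) minutes.
n_finish = math.exp(100 - GAMMA)
print(f"{n_finish:.3g} minutes")  # on the order of 1.5e43 minutes
```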
The Jeep problem asks how much total fuel is required for a car with a limited fuel-carrying capacity to cross a desert leaving fuel drops along the route. The distance the car can go with a given amount of fuel is related to the partial sums of the harmonic series, which grow logarithmically. And so the fuel required increases exponentially with the desired distance.
The block-stacking problem: blocks aligned according to the harmonic series can bridge gaps of any width.
The block-stacking problem: given a collection of identical dominoes, it is possible to stack them at the edge of a table so that they hang over the edge of the table without falling. The counterintuitive result is that they can be stacked in a way that makes the overhang as large as you want. That is, provided there are enough dominoes.[13][14]
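In the classical single-stack solution, n blocks overhang the table edge by H_n/2 block lengths, so the number of blocks needed for a given overhang can be found by accumulating the series:

```python
def blocks_needed(overhang):
    """Smallest n whose harmonic overhang H_n / 2 (in block lengths)
    reaches `overhang`, using the classical single-stack arrangement."""
    h, n = 0.0, 0
    while h / 2 < overhang:
        n += 1
        h += 1.0 / n
    return n

print(blocks_needed(1))  # 4 blocks overhang one full block length
print(blocks_needed(2))  # 31 blocks are needed for two block lengths
```

The counts grow exponentially with the desired overhang, mirroring the logarithmic growth of H_n.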
A swimmer who goes faster each time they touch the wall of the pool. The swimmer starts crossing a 10-meter pool at a speed of 2 m/s, and with every crossing, another 2 m/s is added to the speed. In theory, the swimmer's speed is unlimited, but the number of pool crossings needed to reach a given speed becomes very large; for instance, to reach the speed of light (ignoring special relativity), the swimmer needs to cross the pool 150 million times. In contrast to this large number of crossings, the time needed to reach a given speed depends on the sum of the series at the given number of pool crossings:
{\displaystyle {\frac {10}{2}}\sum _{k=1}^{n}{\frac {1}{k}}.}
Calculating the sum shows that the time required to get to the speed of light is only 97 seconds.
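The 97-second figure follows from the logarithmic growth of the partial sums; a sketch using the approximation H_n ≈ ln n + γ:

```python
import math

speed_step = 2.0                   # m/s added per crossing
c = 3.0e8                          # speed of light, m/s
crossings = int(c / speed_step)    # 150 million crossings

# Crossing k is swum at speed 2k m/s over 10 m, taking 10/(2k) seconds,
# so the total time is 5 * H_n, which grows only logarithmically.
total_time = 5 * (math.log(crossings) + 0.5772156649)  # H_n ~ ln n + gamma
print(round(total_time))  # ~97 seconds
```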
↑ Oresme, Nicole (c. 1360). Quaestiones super Geometriam Euclidis [Questions concerning Euclid's Geometry].
↑ Mengoli, Pietro (1650). "Praefatio [Preface]". Novae quadraturae arithmeticae, seu De additione fractionum [New arithmetic quadrature (i.e., integration), or On the addition of fractions]. Bologna: Giacomo Monti.
Mengoli's proof is by contradiction:
Let S denote the sum of the series. Group the terms of the series in triplets: S = 1 + (1/2 + 1/3 + 1/4) + (1/5 + 1/6 + 1/7) + (1/8 + 1/9 + 1/10) + … Since for x > 1, 1/(x − 1) + 1/x + 1/(x + 1) > 3/x, it follows that S > 1 + 3/3 + 3/6 + 3/9 + … = 1 + 1 + 1/2 + 1/3 + … = 1 + S, which is false for any finite S. Therefore, the series diverges.
↑ Bernoulli, Johann (1742). "Corollary III of De seriebus varia". Opera Omnia. Lausanne & Basel: Marc-Michel Bousquet & Co. vol. 4, p. 8.
↑ Bernoulli, Jacob (1689). Propositiones arithmeticae de seriebus infinitis earumque summa finita [Arithmetical propositions about infinite series and their finite sums]. Basel: J. Conrad.
↑ Bernoulli, Jacob (1713). Ars conjectandi, opus posthumum. Accedit Tractatus de seriebus infinitis [Theory of inference, posthumous work. With the Treatise on infinite series…]. Basel: Thurneysen. pp. 250–251.
From p. 250, prop. 16:
"XVI. Summa serei infinita harmonicè progressionalium, 1/1 + 1/2 + 1/3 + 1/4 + 1/5 &c. est infinita. Id primus deprehendit Frater:…"
[16. The sum of an infinite series of harmonic progression, 1/1 + 1/2 + 1/3 + 1/4 + 1/5 + …, is infinite. My brother first discovered this…]
↑ Hersey, George L. Architecture and Geometry in the Age of the Baroque. pp. 11–12, 37–51.
↑ Sloane, N. J. A. (ed.). "Sequence A082912 (Sum of a(n) terms of harmonic series is > 10n)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
↑ Julian Havil, Gamma: Exploring Euler’s Constant, Princeton University Press, 2009.
↑ Thomas J. Osler, “Partial sums of series that cannot be an integer”, The Mathematical Gazette 96, November 2012, 515–519. https://www.jstor.org/stable/24496876?seq=1#page_scan_tab_contents
↑ Schmuland, Byron (May 2003). "Random Harmonic Series" (PDF). American Mathematical Monthly. 110 (5): 407–416. doi:10.2307/3647827. JSTOR 3647827.
↑ Eric W. Weisstein, Infinite Cosine Product Integral at MathWorld.
↑ "Nick's Mathematical Puzzles: Solution 72".
↑ 13.0 13.1 Graham, Ronald; Knuth, Donald E.; Patashnik, Oren (1989), Concrete Mathematics (2nd ed.), Addison-Wesley, pp. 258–264, ISBN 978-0-201-55802-9
↑ Sharp, R. T. (1954). "Problem 52: Overhanging dominoes" (PDF). Pi Mu Epsilon Journal. 1 (10): 411–412.
"The Harmonic Series Diverges Again and Again" (PDF). The AMATYC Review. 27: 31–43. 2006. Archived from the original (PDF) on 2013-05-15. Retrieved 2019-12-20.
Eric W. Weisstein, Harmonic Series at MathWorld.
Eric W. Weisstein, Book Stacking Problem at MathWorld.
Hudelson, Matt (1 October 2010). "Proof Without Words: The Alternating Harmonic Series Sums to ln 2" (PDF). Mathematics Magazine. 83 (4): 294. doi:10.4169/002557010X521831. S2CID 119484945.
Retrieved from "https://simple.wikipedia.org/w/index.php?title=Harmonic_series_(mathematics)&oldid=8058394"
|
Interface between thermal liquid and mechanical rotational networks - MATLAB - MathWorks Switzerland
{\dot{m}}_{\text{A}}=\epsilon \rho D\Omega +\begin{cases}0, & \text{if fluid dynamic compressibility is off}\\ V\rho \left(\frac{1}{\beta }\frac{dp}{dt}+\alpha \frac{dT}{dt}\right), & \text{if fluid dynamic compressibility is on}\end{cases}
{\stackrel{˙}{m}}_{\text{A}}
\tau =-\epsilon \left(p-{p}_{\text{Atm}}\right)D,
\frac{d\left(\rho uV\right)}{dt}={\varphi }_{\text{A}}+{Q}_{H}-pD\epsilon \Omega ,
|
Create univariate autoregressive integrated moving average (ARIMA) model - MATLAB - MathWorks Switzerland
1-0.5{L}^{1}+0.1{L}^{4}
1-{\varphi }_{1}{L}^{1}-{\varphi }_{4}{L}^{4}.
1+{\theta }_{1}{L}^{1}+{\theta }_{2}{L}^{2}+{\theta }_{3}{L}^{3}.
1-{\Phi }_{4}{L}^{4}-{\Phi }_{8}{L}^{8}.
1+{\Theta }_{4}{L}^{4}.
\varphi \left(L\right)=1-0.5L+0.1{L}^{2},
\Phi \left(L\right)=1-0.5{L}^{4}+0.1{L}^{8},
{y}_{t}=c+{\epsilon }_{t}
\mathit{c}
{\epsilon }_{\mathit{t}}\text{\hspace{0.17em}}
{\sigma }^{2}
\left(1+0.5{L}^{2}\right)\left(1-L\right){y}_{t}=3.1+\left(1-0.2L\right){\epsilon }_{t},
{\epsilon }_{\mathit{t}}
\Delta {y}_{t}=3.1-0.5\Delta {y}_{t-2}+{\epsilon }_{t}-0.2{\epsilon }_{t-1}.
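The expansion from the lag-polynomial form to the difference equation above can be checked numerically: multiplying lag polynomials is just convolution of their coefficient vectors. A minimal sketch in Python (NumPy assumed available; the coefficient values are those of the example):

```python
import numpy as np

# Lag polynomials of (1 + 0.5 L^2)(1 - L) y_t = 3.1 + (1 - 0.2 L) e_t,
# coefficients ordered by increasing power of L.
ar = np.array([1.0, 0.0, 0.5])   # 1 + 0.5 L^2
diff = np.array([1.0, -1.0])     # 1 - L

# Polynomial multiplication == convolution of coefficient vectors.
composite = np.convolve(ar, diff)
print(composite)  # [ 1.  -1.   0.5 -0.5], i.e. 1 - L + 0.5 L^2 - 0.5 L^3
```

Regrouping the composite polynomial as (1 - L) + 0.5 L^2 (1 - L) recovers Δy_t + 0.5 Δy_{t-2} on the left-hand side, matching the difference-equation form.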
{y}_{t}=1+\varphi {y}_{t-1}+{\epsilon }_{t},
{\epsilon }_{\mathit{t}}
\varphi
\left(1-{\varphi }_{1}L-{\varphi }_{2}{L}^{2}-{\varphi }_{3}{L}^{3}\right)\left(1-L\right){y}_{t}=\left(1+{\theta }_{1}L+{\theta }_{2}{L}^{2}\right){\epsilon }_{t}
{\epsilon }_{\mathit{t}}
{\sigma }^{2}
p
D
{y}_{t}={\epsilon }_{t}+{\theta }_{1}{\epsilon }_{t-1}+{\theta }_{12}{\epsilon }_{t-12},
{\epsilon }_{\mathit{t}}
{\sigma }^{2}
\left(0,1,1\right)×{\left(0,1,1\right)}_{12}
\left(1-L\right)\left(1-{L}^{12}\right){y}_{t}=\left(1+{\theta }_{1}L\right)\left(1+{\theta }_{12}{L}^{12}\right){\epsilon }_{t},
{\epsilon }_{\mathit{t}}
{\sigma }^{2}
{y}_{t}=0.05+0.6{y}_{t-1}+0.2{y}_{t-2}-0.1{y}_{t-3}+{\epsilon }_{t},
{\epsilon }_{t}
\mathit{t}
{y}_{t}=0.6{y}_{t-1}+{\epsilon }_{t},
{\epsilon }_{\mathit{t}}
\mathit{t}
\begin{array}{l}{y}_{t}=c+\varphi {y}_{t-1}+{\epsilon }_{t}+\theta {\epsilon }_{t-1}.\\ {\epsilon }_{t}={\sigma }_{t}{z}_{t}.\\ {\sigma }_{t}^{2}=\kappa +\gamma {\sigma }_{t-1}^{2}.\\ {z}_{t}\sim N\left(0,1\right).\end{array}
{y}_{t}={y}_{t-1}+{\epsilon }_{t},
{\epsilon }_{\mathit{t}}
{L}^{i}{y}_{t}={y}_{t-i}.
{y}_{t}=c+{x}_{t}\beta +{a}_{1}{y}_{t-1}+\dots +{a}_{w}{y}_{t-w}+{\epsilon }_{t}+{b}_{1}{\epsilon }_{t-1}+\dots +{b}_{v}{\epsilon }_{t-v}.
a\left(L\right){y}_{t}=c+{x}_{t}\beta +b\left(L\right){\epsilon }_{t}.
\varphi \left(L\right){\left(1-L\right)}^{D}\Phi \left(L\right){\left(1-{L}^{s}\right)}^{{D}_{s}}{y}_{t}=c+{x}_{t}\beta +\theta \left(L\right)\Theta \left(L\right){\epsilon }_{t}.
\varphi \left(L\right)
\varphi \left(L\right)=1-{\varphi }_{1}L-{\varphi }_{2}{L}^{2}-...-{\varphi }_{p}{L}^{p},
\Phi \left(L\right)
\Phi \left(L\right)=1-{\Phi }_{{p}_{1}}{L}^{{p}_{1}}-{\Phi }_{{p}_{2}}{L}^{{p}_{2}}-...-{\Phi }_{{p}_{s}}{L}^{{p}_{s}},
\theta \left(L\right)
\theta \left(L\right)=1+{\theta }_{1}L+{\theta }_{2}{L}^{2}+...+{\theta }_{q}{L}^{q},
\Theta \left(L\right)
\Theta \left(L\right)=1+{\Theta }_{{q}_{1}}{L}^{{q}_{1}}+{\Theta }_{{q}_{2}}{L}^{{q}_{2}}+...+{\Theta }_{{q}_{s}}{L}^{{q}_{s}},
q<\infty
E\left({y}_{t}\right)=\theta \left(L\right)0=0.
Var\left({y}_{t}\right)={\sigma }^{2}\sum _{i=1}^{q}{\theta }_{i}^{2}.
Cov\left({y}_{t},{y}_{t-s}\right)=\begin{cases}{\sigma }^{2}\left({\theta }_{s}+{\theta }_{1}{\theta }_{s+1}+{\theta }_{2}{\theta }_{s+2}+...+{\theta }_{q-s}{\theta }_{q}\right), & \text{if }s\le q\\ 0, & \text{otherwise.}\end{cases}
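These MA(q) moment formulas are easy to sanity-check by simulation. A hedged sketch in Python (NumPy assumed; the MA(2) coefficients below are illustrative, not from the documentation), using the convention θ_0 = 1 and σ = 1 so the theoretical variance is Σ θ_i² = 1.34:

```python
import numpy as np

# Simulate y_t = e_t + 0.5 e_{t-1} - 0.3 e_{t-2} and compare the sample
# variance with the theoretical sigma^2 * sum(theta_i^2).
rng = np.random.default_rng(0)
theta = np.array([1.0, 0.5, -0.3])       # theta_0 = 1 by convention
e = rng.standard_normal(1_000_000)
y = np.convolve(e, theta, mode="valid")  # MA filter of the white noise

var_theory = float((theta ** 2).sum())   # 1 + 0.25 + 0.09 = 1.34
print(round(float(y.var()), 2), var_theory)
```

The sample variance lands within Monte Carlo error of 1.34; the same approach checks the zero mean and the lag-s autocovariances.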
\left\{{y}_{t};t=1,...,T\right\}
|
Interactive exercise (residual applet content summarized): graphical search of the Fourier development of a 2\pi-periodic function f, comparing f with an approximation g built from harmonic terms {a}_{n}\mathrm{cos}\left(nx\right)+{b}_{n}\mathrm{sin}\left(nx\right) (equivalently {A}_{n}\mathrm{cos}\left(nx+{p}_{n}\right)) and examining the difference f-g.
|
Tokenomics - Hedge Protocol Docs
Hedge tokenomics
The Hedge protocol issues two protocol tokens: USH and HDG. USH is an overcollateralised stablecoin minted by locking up collateral, so its supply varies over time. HDG is the protocol token. At launch, users may stake HDG to earn a portion of protocol fees, which are collected at loan initiation; this model may change as the protocol matures and HDG is used for governance.
HDG distribution
HDG token distribution.
15% of all tokens are set aside for investors, with 10% of the total going to seed investors and 5% set aside for a future fundraise if needed. The seed investors have an 18-month vesting schedule after token launch with a 1-year cliff.
HDG Emissions
Hedge will start liquidity mining at the same time as mainnet launch. The expected HDG emission schedule over the next 6 years is shown below; the stability pool incentives are fixed and halve every year.
Target HDG emissions for the 1st year after launch
Target HDG emissions 6 years after launch
| Emissions schedule | Liquidity Incentives | Stability Pool Emissions | Cumulative Total |
|------------------------|--------------------------|------------------------------|----------------------|
| Month 1 | 70’000 | 112’250 | 182’250 |
| Month 4 | 50’000 | 94’393 | 652’599 |
| Month 7 | 45’000 | 79’373 | 1’045’160 |
| Month 10 | 45’000 | 66’745 | 1’392’537 |
| Year 2 | 500’000 | 500’000 | 2’600’000 |
| Year 5 | 200’000 | 31’250 | 3’906’250 |
Stability Pool Emissions
A total of 2M HDG tokens will be emitted over the total lifetime of the Hedge contract. Emissions follow the half-life formula below, where f(n) is the cumulative amount of tokens emitted by day n after launch.
f(n) = \int_{0}^{n} \frac{2000000}{365/\ln 2}\cdot \left(\frac{1}{2}\right)^{x/365}\, dx
|
A Characteristic Eigenfunction for Minimal Hypersurfaces in Space Forms.
Steen Markvorsen (1989)
A directional compactification of the complex Fermi surface
Daniel Bättig, Horst Knörrer, Eugene Trubowitz (1991)
A geometric estimate for a periodic Schrödinger operator
Thomas Friedrich (2000)
We estimate from below by geometric data the eigenvalues of the periodic Sturm-Liouville operator
-4{d}^{2}/d{s}^{2}+{\kappa }^{2}\left(s\right)
with potential given by the curvature of a closed curve.
B. Fine, P. Kirk, E. Klassen (1994)
A lower bound for ... on manifolds with boundary.
Andrzej Derdzinski, Ch. B. Croke (1987)
A lower bound for the error term in Weyl’s law for certain Heisenberg manifolds, II
Werner Nowak (2009)
This article is concerned with estimations from below for the remainder term in Weyl’s law for the spectral counting function of certain rational (2ℓ + 1)-dimensional Heisenberg manifolds. Concentrating on the case of odd ℓ, it continues the work done in part I [21] which dealt with even ℓ.
A lower bound for the least eigenvalue of ... + V.
Pierre Bérard (1990)
A lower bound on ... for geometrically finite hyperbolic n-manifolds.
Marc Burger, Richard D. Canary (1994)
A new intrinsic curvature invariant for centroaffine hypersurfaces.
Scharlach, Christine, Simon, Udo, Verstraelen, Leopold, Vrancken, Luc (1997)
A nilpotent Lie algebra and eigenvalue estimates
Jacek Dziubański, Andrzej Hulanicki, Joe Jenkins (1995)
The aim of this paper is to demonstrate how a fairly simple nilpotent Lie algebra can be used as a tool to study differential operators on
{ℝ}^{n}
with polynomial coefficients, especially when the property studied depends only on the degree of the polynomials involved and/or the number of variables.
A Note on the First Nonzero Eigenvalue of the Laplacian Acting of P-Forms.
Gilles Courtois, Bruno Colbois (1990)
A note on the isoperimetric constant
Peter Buser (1982)
A reduction method for proving the existence of solutions to elliptic equations involving the
p
Benalili, Mohamed, Maliki, Youssef (2003)
A Relation Between Growth and the Spectrum of the Laplacian.
|
Measuring velocity of sound waves using echo method — lesson. Science State Board, Class 10.
A source of sound pulses
A sound receiver and
Using the measuring tape, determine the distance '\(d\)' between the source of sound pulse and the reflecting surface.
The receiver is positioned near the source. The source produces a sound pulse.
A stopwatch is used to record the time interval between the time the sound pulse is sent and the time the receiver receives the echo. The time interval is denoted by the letter '\(t\)'.
Repeat the experiment three or four more times. The average time is calculated for the given number of pulses.
Calculation of sound speed:
The sound pulse emitted by the source travels a total distance of \(2d\) between the source and the wall before returning to the receiver. The time it takes has been calculated to be \(t\). As a result, the speed of a sound wave is determined by
\mathit{Speed}\;\mathit{of}\;\mathit{sound}=\frac{\mathit{distance}\;\mathit{travelled}}{\mathit{time}\;\mathit{taken}}=\frac{2d}{t}
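The calculation can be scripted directly. A minimal sketch (the wall distance and echo timings below are illustrative values, not from the lesson):

```python
def speed_of_sound(d, t):
    """Echo method: the pulse covers 2*d (to the wall and back) in time t."""
    return 2 * d / t

# Average of four trials with a reflecting wall 85.8 m away.
times = [0.53, 0.51, 0.52, 0.52]
t_avg = sum(times) / len(times)          # averaging reduces timing error
print(round(speed_of_sound(85.8, t_avg), 1))  # about 330 m/s in air
```

Averaging several trials, as the procedure instructs, is what makes the stopwatch-based timing usable at all, since a single reaction-time error of 0.1 s would shift the result by tens of m/s.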
|
34M03 Linear equations and systems
34M05 Entire and meromorphic solutions
34M10 Oscillation, growth of solutions
34M15 Algebraic aspects (differential-algebraic, hypertranscendence, group-theoretical)
34M35 Singularities, monodromy, local behavior of solutions, normal forms
34M45 Differential equations on complex manifolds
34M55 Painlevé and other special equations; classification, hierarchies;
34M56 Isomonodromic deformations
34M60 Singular perturbation problems in the complex domain (complex WKB, turning points, steepest descent)
x\frac{d\stackrel{\to }{y}}{dx}={\stackrel{\to }{G}}_{0}\left(x\right)+\left[\lambda \left(x\right)+{A}_{0}\right]\stackrel{\to }{y}+{x}^{\mu }\stackrel{\to }{G}\left(x,\stackrel{\to }{y}\right),
A note on the oscillation of solutions of periodic linear differential equations
A phase of the differential equation
{y}^{\text{'}}=Q\left(t\right)y
with a complex coefficient
Q
of the real variable
A reduction theory of second order meromorphic differential equations. II
W. B. Jurkat, H. J. Zwiesler (1988)
{\lambda }^{2}q\left(s\right),\phantom{\rule{4pt}{0ex}}s\in \left[{s}_{0},\infty \right)
\lambda \in ℝ
q\left(s\right)
\infty
s\to \infty
S
\lambda
S
S=\left\{0\right\}
S=ℤ
S=𝔻
ℚ\subset S⫋ℝ
Algunas Propiedades De Regularidad De Las Ecuaciones Diferenciales Complejas.
Jaime Rodriguez Montes (1980)
An existence theorem for solutions of
n
-th order nonlinear differential equations in the complex domain
Charles Powder (1979)
Analytic First Integrals of Ordinary Differential Equations
Wilfried Kaplan (1972)
Applications de la théorie de Nevanlinna p-adique.
Abdelbaki Boutabaa (1991)
Asymptotic behaviour of equations
\stackrel{˙}{z}=q\left(t,z\right)-p\left(t\right){z}^{2}
\stackrel{¨}{x}=x\varphi \left(t,\stackrel{˙}{x}{x}^{-1}\right)
Asymptotic behaviour of the equation
{x}^{\text{'}\text{'}}+p\left(t\right){x}^{\text{'}}+q\left(t\right)x=0
with complex-valued coefficients
Asymptotic behaviour of the system of two differential equations
Asymptotic nature of solutions of the equation
\stackrel{˙}{z}=f\left(t,z\right)
with a complex valued function
f
Asymptotische Eigenschaften der Differentialgleichung
{y}^{\text{'}\text{'}}+2{a}_{1}\left(x\right){y}^{\text{'}}+{a}_{2}\left(x\right)y=0
|
Random Bulk Properties of Heterogeneous Rectangular Blocks With Lognormal Young's Modulus: Effective Moduli | J. Appl. Mech. | ASME Digital Collection
Leon S. Dimas, Daniele Veneziano, Tristan Giesa, and Markus J. Buehler
Laboratory for Atomistic and Molecular Mechanics (LAMM)
e-mail: mbuehler@MIT.EDU
Contributed by the Applied Mechanics Division of ASME for publication in the JOURNAL OF APPLIED MECHANICS. Manuscript received September 21, 2014; final manuscript received October 9, 2014; accepted manuscript posted October 13, 2014; published online November 14, 2014. Editor: Yonggang Huang.
Dimas, L. S., Veneziano, D., Giesa, T., and Buehler, M. J. (January 1, 2015). "Random Bulk Properties of Heterogeneous Rectangular Blocks With Lognormal Young's Modulus: Effective Moduli." ASME. J. Appl. Mech. January 2015; 82(1): 011003. https://doi.org/10.1115/1.4028783
We investigate the effective elastic properties of disordered heterogeneous materials whose Young's modulus varies spatially as a lognormal random field. For one-, two-, and three-dimensional (1D, 2D, and 3D) rectangular blocks, we decompose the spatial fluctuations of the Young's log-modulus
F=lnE
into first- and higher-order terms and find the joint distribution of the effective elastic tensor by multiplicatively combining the term-specific effects. The analytical results are in good agreement with Monte Carlo simulations. Through parametric analysis of the analytical solutions, we gain insight into the effective elastic properties of this class of heterogeneous materials. The results have applications to structural/mechanical reliability assessment and design.
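As a hedged one-dimensional illustration only (a sketch of the series, or Reuss, limit, not the paper's term-by-term decomposition): for elements coupled in series, the effective modulus is the harmonic mean of E, and for lognormal E = exp(F) with F ~ N(μ, σ²) this tends to exp(μ − σ²/2) as the number of elements grows. NumPy is assumed available; μ and σ below are arbitrary.

```python
import numpy as np

# 1D bar of n elements in series with lognormal Young's modulus.
rng = np.random.default_rng(1)
mu, sigma, n = 0.0, 0.3, 200_000
E = np.exp(rng.normal(mu, sigma, n))

E_eff = n / np.sum(1.0 / E)               # harmonic mean (series coupling)
E_theory = np.exp(mu - sigma ** 2 / 2)    # large-n lognormal limit

print(round(float(E_eff), 3), round(float(E_theory), 3))
```

Note that E_eff falls below exp(μ), the median modulus: spatial disorder softens a series-coupled bar, which is the kind of effect the joint distribution derived in the paper quantifies.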
Elasticity, Fluctuations (Physics), Simulation, Tensors, Young's modulus, Approximation, Shear modulus, Mechanical properties
|
93D05 Lyapunov and other classical stabilities (Lagrange, Poisson, {L}^{p},{l}^{p}, etc.)
93D10 Popov-type stability of feedback systems
93D20 Asymptotic stability
93D25 Input-output approaches
{ℋ}_{\infty } constant gain state feedback stabilization of stochastic hybrid systems with Wiener process.
Boukas, E.K., Al-Muthairi, N.F. (2004)
A Brauer’s theorem and related results
Rafael Bru, Rafael Cantó, Ricardo Soto, Ana Urbano (2012)
Given a square matrix A, a Brauer’s theorem [Brauer A., Limits for the characteristic roots of a matrix. IV. Applications to stochastic matrices, Duke Math. J., 1952, 19(1), 75–91] shows how to modify one single eigenvalue of A via a rank-one perturbation without changing any of the remaining eigenvalues. Older and newer results can be considered in the framework of the above theorem. In this paper, we present its application to stabilization of control systems, including the case when the system...
A discussion on the Hölder and robust finite-time partial stabilizability of Brockett’s integrator∗
Chaker Jammazi (2012)
We consider chained systems that model various systems of mechanical or biological origin. It is known according to Brockett that this class of systems, which are controllable, is not stabilizable by continuous stationary feedback (i.e. independent of time). Various approaches have been proposed to remedy this problem, especially instationary or discontinuous feedbacks. Here, we look at another stabilization strategy (by continuous stationary or...
A feedforward compensation scheme for perfect decoupling of measurable input functions
Giovanni Marro, Lorenzo Ntogramatzidis (2005)
In this paper the exact decoupling problem of signals that are accessible for measurement is investigated. Exploiting the tools and the procedures of the geometric approach, the structure of a feedforward compensator is derived that, cascaded to a linear dynamical system and taking the measurable signal as input, provides the control law that solves the decoupling problem and ensures the internal stability of the overall system.
A generalized regular form for multivariable sliding mode control.
Perruquetti, W., Richard, J.P., Borne, P. (2001)
Kirane, Mokhtar, Tatar, Nasser-eddine (1999)
A necessary and sufficient condition for static output feedback stabilizability of linear discrete-time systems
Danica Rosinová, Vojtech Veselý, Vladimír Kučera (2003)
Necessary and sufficient conditions for a discrete-time system to be stabilizable via static output feedback are established. The conditions include a Riccati equation. An iterative as well as non-iterative LMI based algorithm with guaranteed cost for the computation of output stabilizing feedback gains is proposed and introduces the novel LMI approach to compute the stabilizing output feedback gain matrix. The results provide the discrete- time counterpart to the results by Kučera and De Souza.
Patrick Martinez (1999)
A new method to obtain decay rate estimates for dissipative systems with localized damping.
Patrick Martínez (1999)
We consider the wave equation damped with a locally distributed nonlinear dissipation. We improve several earlier results of E. Zuazua and of M. Nakao in two directions: first, using the piecewise multiplier method introduced by K. Liu, we weaken the usual geometrical conditions on the localization of the damping. Then thanks to some new nonlinear integral inequalities, we eliminate the usual assumption on the polynomial growth of the feedback in zero and we show that the energy of the system decays...
We consider the wave equation damped with a boundary nonlinear velocity feedback ρ(u'). Under some geometrical conditions, we prove that the energy of the system decays to zero with an explicit decay rate estimate even if the function ρ does not have polynomial behavior at zero. This work extends some results of Nakao, Haraux, Zuazua and Komornik, who studied the case where the feedback has polynomial behavior at zero, and completes a result of Lasiecka and Tataru. The proof is based on the construction...
A new Nyquist-based technique for tuning robust decentralized controllers
Alena Kozáková, Vojtech Veselý, Jakub Osuský (2009)
An original Nyquist-based frequency domain robust decentralized controller (DC) design technique for robust stability and guaranteed nominal performance is proposed, applicable for continuous-time uncertain systems described by a set of transfer function matrices. To provide nominal performance, interactions are included in individual design using one selected characteristic locus of the interaction matrix, used to reshape frequency responses of decoupled subsystems; such modified subsystems are...
A note on stabilization of discrete nonlinear systems.
Tang, Fengjun, Yuan, Rong (2011)
A procedure for designing stabilizing output feedback controllers
Jan Lunze (1984)
A reduction principle for global stabilization of nonlinear systems
Rachid Outbib, Gauthier Sallet (1998)
The goal of this paper is to propose new sufficient conditions for dynamic stabilization of nonlinear systems. More precisely, we present a reduction principle for the stabilization of systems that are obtained by adding integrators. This represents a generalization of the well-known lemma on integrators (see for instance [BYIS] or [Tsi1]).
A remark on energetic stability of feedback systems
A robustness result for a von Kármán plate
Mary Elizabeth Bradley, Irena Lasiecka (1993)
|
Effect of rain gauge density over the accuracy of rainfall: a case study over Bangalore, India | SpringerPlus | Full Text
Anoop Kumar Mishra
Rainfall is an extremely variable parameter in both space and time. Rain gauge density is crucial for quantifying the rainfall amount over a region, and the accuracy of rainfall estimates depends strongly on the density and distribution of rain gauge stations. The Indian Space Research Organisation (ISRO) has installed a number of Automatic Weather Station (AWS) rain gauges over the Indian region to study rainfall. In this paper, the effect of rain gauge density on daily accumulated rainfall is analyzed using ISRO AWS gauge observations. A 50 km × 50 km box over the southern part of India (Bangalore) with a good density of rain gauges is identified for this purpose. The number of rain gauges in the 50 km box is varied from 1 to 8 to study the variation in daily accumulated rainfall. Rainfall rates from neighbouring stations are also compared, the change in rainfall as a function of gauge spacing is studied, and the use of gauge-calibrated satellite observations to fill in missing gauge stations is examined. It is found that correlation coefficients (CC) decrease from 82% to 21% as gauge spacing increases from 5 km to 40 km, while the root mean square error (RMSE) increases from 8.29 mm to 51.27 mm over the same range. Taking 8 rain gauges as the standard representation of rainfall over the region, the absolute error increases from 15% to 64% as the number of gauges is decreased from 7 to 1. Errors are small when 4 to 7 rain gauges represent the 50 km area, but reduction to 3 or fewer rain gauges results in significant error. It is also observed that gauge-calibrated satellite observations significantly improve rainfall estimation over regions with very few rain gauge observations.
Rainfall is one of the most discontinuous atmospheric parameters due to its temporal and spatial variability. The Indian economy is highly dependent on agriculture, so accurate rainfall estimates are essential for agricultural purposes. The chief source of rainfall over the Indian region is the monsoon. The India Meteorological Department (IMD) uses the rain gauge based gridded rainfall product developed by Rajeevan et al. (2005) to monitor rainfall over India. Rain gauges are conventional tools for quantifying area-averaged precipitation over the land surface, and a dense network of uniformly distributed rain gauge stations is used to estimate rainfall for a particular area (Mishra et al. 2011). The problem of installing an optimum rain gauge network has been the subject of research over the years: insufficient gauge density leads to error in representing the areal rainfall of a region, and rainfall estimates are also affected by the distance of rain gauge stations from the grid point (Bhowmik and Das 2007). The purpose of this study is to analyse the effect of rain gauge density on the accuracy of the areal daily accumulated rainfall over a region in Bangalore. The use of gauge-calibrated satellite observations to fill gaps over regions of poor gauge density is studied in the present paper, as is the variation in rainfall observations as a function of inter-gauge distance.
In the present study, ISRO-designed AWS observations are used. Each AWS carries a tipping-bucket rain gauge for rain measurement. The data are relayed through satellite and are available through the Meteorological and Oceanographic Satellite Data Archival Centre (MOSDAC). At present, there are 1098 AWS rain gauge stations over India, and their distribution is very inhomogeneous. For the present study, a region with a dense network over the southern part of India has been used; this region is shown in Figure 1. The density of rain gauges over this region is such that 8 rain gauges fall within a 50 km × 50 km box. If any station in the box was missing, gauge-calibrated satellite observations were used to represent the missing station, based on match-ups between the rain from the rain gauges and the Meteosat brightness temperature within the area.
Area of study and the number of stations within it. Letters denote the rain gauge stations.
Meteosat is a geostationary satellite launched in 1997 by the European Space Agency. It provides thermal infrared (TIR, 10.5–12.5 μm) and water vapor (WV, 5.7–7.1 μm) images every half hour with a spatial resolution of 5 km. For the present study, data from 2009 to 2012 are used to study the impact of gauge-calibrated satellite observations on areal rainfall estimation.
In the present study, the Shepard (1968) inverse distance weighted interpolation technique has been used. It is based on the assumption that the interpolated value should be influenced most by nearby points and least by the farthest points.
\mathrm{F}\left(\mathrm{x},\mathrm{y}\right)=\sum _{\mathrm{i}=1}^{\mathrm{n}}{\mathrm{w}}_{\mathrm{i}}{\mathrm{f}}_{\mathrm{i}}
where n is the total number of rain gauge stations in the region, f_i is the rain gauge value at the i-th station, and w_i is the weight function, given by the following equation:
{\mathrm{w}}_{\mathrm{i}}=\frac{{\mathrm{h}}_{\mathrm{i}}^{-\mathrm{p}}}{\sum _{\mathrm{j}=1}^{\mathrm{n}}{\mathrm{h}}_{\mathrm{j}}^{-\mathrm{p}}}
where p = 2 and h_i is the distance between the interpolation point and the i-th rain gauge location.
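A minimal sketch of this inverse-distance-weighted interpolation (the station layout and rain values below are illustrative, not the paper's data):

```python
import math

def shepard_idw(x, y, stations, p=2):
    """Inverse-distance-weighted rainfall at (x, y).

    stations: list of (xi, yi, rain_i) tuples; p = 2 as in the text.
    """
    weights, values = [], []
    for xi, yi, fi in stations:
        h = math.hypot(x - xi, y - yi)
        if h == 0:                 # interpolation point sits on a gauge
            return fi
        weights.append(h ** -p)    # w_i proportional to h_i^{-p}
        values.append(fi)
    wsum = sum(weights)
    return sum(w * f for w, f in zip(weights, values)) / wsum

# A gauge 1 km away reporting 10 mm dominates one 9 km away reporting 0 mm.
print(round(shepard_idw(0, 0, [(1, 0, 10.0), (9, 0, 0.0)]), 2))
```

The heavy weighting of nearby gauges is exactly why, as the following sections show, removing stations from a sparse part of the box degrades the areal estimate so quickly.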
Rainfall variation with rain gauge spacing
Rainfall shows a great amount of variability in both space and time; a typical example is shown in Figure 2. There is a considerable amount of variation in the rainfall observed by the rain gauge stations within 50 km during September 19–20, 2009. It may be noted that although the stations at ISRO HQs, IISc, and Air Force Station Yelahanka are within 2–5 km of each other, they show significant differences in hourly rainfall.
Hourly rainfall variation shown by the gauge stations.
361 rain events were identified during the rainy seasons of 2009–2012 to study the variation of rainfall with rain gauge spacing. Short-lived intense rainfall events are defined as those with a minimum hourly rainfall rate of 15 mm/h and a maximum lifetime of 3 hours in a day.
Figure 3 shows the effect of rain gauge spacing on rainfall variability. As rain gauge distance increases from 5 km to 40 km, the CC decreases from 92% to 37% and the RMSE increases from 6.24 mm to 37.26 mm. If only short-lived intense systems are considered, the CC decreases from 82% to 21% while the RMSE increases from 8.29 mm to 51.27 mm. Rainfall is thus extremely variable even inside a 50 km region: within 15 km the variability is not abrupt, but high variability is observed when gauge spacing exceeds 15 km. Short-lived intense rainfall events show greater variability than the full set of rain events, which includes low but persistent rainfall cases.
Correlation coefficients (CC) and Root mean square error (RMSE) as function of distance between gauges.
Rainfall study using gauge calibrated satellite observations
Apart from the southern part of India, the ISRO AWS rain gauge density over the country is poor; in some places only one rain gauge station (and sometimes none) falls in a 50 km × 50 km region. The present study shows that rainfall values may change significantly within a 15 km area, so it is very difficult to quantify rainfall from gauge observations alone over a region with poor gauge density. In this section, the possibility of using gauge-calibrated satellite observations to fill the gaps left by missing rain gauge stations is analyzed. A past study (Mishra et al. 2010) shows that satellite rainfall estimates match well with rain gauge observations over well-populated rain gauge areas. These satellite rainfall estimates are based on a match-up between ground-truth rainfall and the rain signature from the satellite.
Figure 4 shows an example of a match-up between rainfall from the rain gauge stations over the area of study and brightness temperature from the Meteosat satellite on September 19, 2009 at 1800 UTC. Brightness temperature values show a strong relation with rainfall from the rain gauges. A large database of 92 rainy events during 2009–2012 is used to generate match-ups between satellite observations and gauge rain rates over the area of study, and this database is used to calibrate the satellite observations against the rain gauges.
Relation between the brightness temperature (TB) from satellite and rainfall from gauges. Rainfall rates are in mm/h. The brightness temperature image is scaled in decreasing order.
Figure 5 shows the impact of including gauge-calibrated satellite observations on rainfall estimation over the area of study during 19–20 September 2009. Heavy rainfall was underestimated when only 6 rain gauges were used; when the two missing rain gauge stations were filled with gauge-calibrated satellite observations, the two rainfall estimates were almost identical.
Rainfall variation with time during 19-20 September 2009 using 8 gauges, 6 gauges and gauge plus satellite observations.
Figure 6 shows the scatter diagram between rainfall using all 8 rain gauges over the area of study and rainfall using 6 rain gauges, along with the impact of filling the missing stations with gauge-calibrated satellite observations. Rainfall is underestimated when two rain gauges are excluded from the analysis; when gauge-calibrated satellite observations represent the two missing gauges, the rainfall estimates improve over the area of study. It may be concluded that gauge-calibrated satellite observations can supplement rainfall information over regions with poor rain gauge density.
Scatter plot between rainfall (3-hourly) using 8 gauges and rainfall using 6 gauges (empty circle) and 6 gauges plus 2 gauge calibrated satellite observations (black circle).
Impact of rain gauge density in rainfall estimation
It is found from the preceding analysis that rainfall in a 50 km box shows considerable variability and that the estimate is affected by the number of gauge stations in the area of study. The effect of rain gauge density on the accuracy of rainfall estimation is studied in this section. For this purpose, a total of 274 rainy cases during 2009–2012 were considered.
Figure 7 plots the binned rainfall (bin width 5 mm) using 8 rain gauges against the RMSE between rainfall from 8 rain gauges and rainfall using 6, 4, and 2 rain gauges. The error increases with increasing rainfall values and with decreasing numbers of rain gauges used in the estimation. With 4–6 rain gauges the error is small, but a further decrease in the number of rain gauges results in significantly higher error.
Scatter and line plot between binned rainfall rate using all 8 gauges and rmse between rainfall using 8 gauges and that using 6, 4 and 2 gauges.
Figure 8 shows the effect of a reduction in the number of rain gauges on the absolute error in rainfall estimation. It may be observed that with 4–7 rain gauges in the 50 km area, errors are small, but a reduction to 3 or fewer rain gauges results in high error. If a single rain gauge is used to estimate the rainfall, the error increases up to 64%. It may be concluded that 4–7 rain gauges give a reasonable accuracy in rainfall estimation.
Scatter plot between the absolute error and number of gauges.
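The error metric behind Figures 7 and 8 can be sketched as follows (synthetic numbers, not the study's data): the mean over all 8 gauges serves as the reference areal rainfall, and estimates from reduced gauge subsets are scored against it by rmse.

```python
import math

def areal_rain(gauge_values):
    """Areal rainfall as the arithmetic mean of the gauge values."""
    return sum(gauge_values) / len(gauge_values)

def rmse(reference, estimates):
    """Root-mean-square error between two rainfall time series."""
    return math.sqrt(sum((r - e) ** 2 for r, e in zip(reference, estimates))
                     / len(reference))

# Hypothetical 3-hourly rainfall (mm) at 8 gauges over 4 time steps:
obs = [[5, 6, 4, 5, 7, 6, 5, 6],
       [20, 25, 18, 22, 30, 24, 21, 26],
       [0, 1, 0, 0, 2, 1, 0, 1],
       [10, 12, 9, 11, 14, 12, 10, 13]]

ref = [areal_rain(t) for t in obs]        # reference: all 8 gauges
sub6 = [areal_rain(t[:6]) for t in obs]   # estimate: first 6 gauges only
print(round(rmse(ref, sub6), 3))          # 0.051
```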
In the present paper, ISRO AWS rain gauge stations over the southern part of India, a region with good rain gauge density, are used to study the effect of rain gauge density and gauge spacing on rainfall estimation. The possibility of using rain gauge calibrated satellite observations to represent vacant rain gauge stations is also explored. Significant variations are observed even among stations located within about 15 km of each other. Error increases with increasing rain gauge spacing. It may also be concluded that 4–6 rain gauges give reasonable accuracy in daily rainfall estimation over a 50 km × 50 km area. There is scope to use rain gauge calibrated satellite observations to represent rain gauge stations in areas with poor rain gauge density. The technique described here may be used to estimate rainfall over areas having an insufficient number of rain gauges. A homogeneous distribution of a sufficient number of equally spaced gauges forms an ideal network to monitor rainfall accurately over a region.
I acknowledge MOSDAC for providing ISRO AWS rain gauge data. The Meteosat data from ESA used in this study are also thankfully acknowledged. Useful discussions with Prof. J. Srinivasan of the Divecha Centre for Climate Change, Indian Institute of Science (IISc), India, are appreciated. A significant part of the work presented in this paper was done while the author was with the Divecha Centre for Climate Change, IISc. Financial support from the National Science Council of Taiwan under grant NSC96-2111-M-001-005-MY3 is thankfully acknowledged. The author is thankful to the anonymous reviewers for their useful comments to enhance the quality of this paper.
Research Centre for Environmental Changes, Academia Sinica, 128 Academia Road, Section 2, Nankang, Taipei, Taiwan, ROC 11529
Correspondence to Anoop Kumar Mishra.
Mishra, A.K. Effect of rain gauge density over the accuracy of rainfall: a case study over Bangalore, India. SpringerPlus 2, 311 (2013). https://doi.org/10.1186/2193-1801-2-311
|
Ziji Shao, Jinghua Liang, Qirui Cui, Mairbek Chshiev, Albert Fert, Tiejun Zhou and Hongxin Yang
Multiferroic materials based on transition-metal dichalcogenides: Potential platform for reversible control of Dzyaloshinskii-Moriya interaction and skyrmion via electric field
Siyu Liu, Pengyue Gao, Andreas Hermann, Guochun Yang, Jian Lv, Yanming Ma, Ho-Kwang Mao and Yanchao Wang
Stabilization of S3O4 at high pressure: implications for the sulfur-excess paradox
Sci. Bull. 67, 971 (2022)
Chenggang Li, Yingqi Cui, Hao Tian, Baozeng Ren, Qingyang Li, Yuanyuan Li and Hang Yang
Quantum Chemistry Study on the Structures and Electronic Properties of Bimetallic Ca2-Doped Magnesium Ca2Mgn (n = 1–15) Clusters
Su Hong Liu, Ya Jie Qi, Yu Zhu Jin, Yu Ying Wang, Cong Liu, Pei Sun, Kai Ge Cheng, Ming Xing Zhao and Xiang Nan Li
Probing the structural evolution, electronic and vibrational properties of neutral and anionic calcium-doped magnesium clusters
Results Phys 38, 1056355 (2022)
\overline{3}
|
\sigma
Athanassios Batakis (2000)
{\pi }_{V}
{}^{m}
{}^{m}\left(A\right)<\infty
\left(V,v\right)|V\in G\left(n,m\right),v\in V
{}^{m\left(n-m\right)}\left(V\in G\left(n,m\right)|\left(V,{\pi }_{V}\left(P\right)\right)\in Z\right)>0
{}^{m}
A family of singular functions and its relation to harmonic fractal analysis and fuzzy logic
Enrique de Amo, Manuel Díaz Carrillo, Juan Fernández-Sánchez (2016)
We study a parameterized family of singular functions which appears in a paper by H. Okamoto and M. Wunsch (2007). Various properties are revisited from the viewpoint of fractal geometry and probabilistic techniques. Hausdorff dimensions are calculated for several sets related to these functions, and new properties close to fractal analysis and strong negations are explored.
Let Γ be a closed set in {ℝ}^{n} with Lebesgue measure |Γ| = 0. The first aim of the paper is to give a Fourier analytical characterization of the Hausdorff dimension of Γ. Let 0 < d < n. If there exist a Borel measure µ with supp µ ⊂ Γ and constants {c}_{1}>0 and {c}_{2}>0 such that {c}_{1}{r}^{d}\le µ\left(B\left(x,r\right)\right)\le {c}_{2}{r}^{d} for all 0 < r < 1 and all x ∈ Γ, where B(x,r) is a ball with centre x and radius r, then Γ is called a d-set. The second aim of the paper is to provide a link between the related Lebesgue spaces {L}_{p}\left(\Gamma \right), 0 < p ≤ ∞, with respect to...
A generalized \sigma-porous set with a small complement.
Tišer, Jaroslav (2005)
A method for evaluating the fractal dimension in the plane, using coverings with crosses
Claude Tricot (2002)
Various methods may be used to define the Minkowski-Bouligand dimension of a compact subset E in the plane. The best known is the box method. After introducing the notion of ε-connected set {E}_{\epsilon }, we consider a new method based upon coverings of {E}_{\epsilon } with crosses of diameter 2ε. To prove that this cross method gives the fractal dimension for all E, the main argument consists in constructing a special pavement of the complementary set with squares. This method gives rise to a dimension formula using integrals,...
A multifractal analysis of an interesting class of measures
Antonis Bisbas (1996)
A relation between dimension of the harmonic measure, entropy and drift for a random walk on a hyperbolic space.
Le Prince, Vincent (2008)
f:{ℝ}^{m}\to {ℝ}^{n}
{f}^{-1}\left(y\right)
y\in {ℝ}^{n}
A Short Proof of a Theorem of Ruelle.
Manfred Denker, Christoph Seck (1989)
Xiong Jin (2014)
Given a two-dimensional fractional multiplicative process {\left({F}_{t}\right)}_{t\in \left[0,1\right]} determined by two Hurst exponents {H}_{1} and {H}_{2}, we show that there is an associated uniform Hausdorff dimension result for the images of subsets of \left[0,1\right] by F if {H}_{1}={H}_{2}.
A Variational Principle for the Hausdorff Dimension of Fractal Sets.
C.D. Cutler, L. Olsen (1994)
h-measure of a set. II.
Bărbulescu, Alina (2001)
Absolutely convergent series and the Hausdorff measure
Tibor Šalát (1959)
AC-removability, Hausdorff dimension and the \left(N\right) property.
S. P. Ponomarev (1994)
Additive groups with prescribed Hausdorff dimension.
Bodo Volkmann, Paul Erdös (1966)
An analytic study on the self-similar fractals: Differentiation of integrals.
Miguel Reyes (1989)
Duquesne, Thomas S.A. (2009)
{ℝ}^{3}
Katz, Nets Hawk, Łaba, Izabella, Tao, Terence (2000)
|
Damage Tolerance of Well-Completion and Stimulation Techniques in Coalbed Methane Reservoirs | J. Energy Resour. Technol. | ASME Digital Collection
Hossein Jahediesfanjani,
Hossein Jahediesfanjani
, Norman, Oklahoma, 73019 USA
Hossein Jahediesfanjani is a petroleum engineering Ph.D. candidate at the University of Oklahoma. He holds a Master of Science in engineering management and petroleum engineering from the University of Louisiana at Lafayette, and a Bachelor of Science in chemical engineering from Persian Gulf University, Bushehr, Iran. His major research areas are wellbore stability, formation damage, reservoir simulation, and coalbed methane gas reservoirs. He may be contacted at the Mewbourne School of Petroleum and Geological Engineering, The University of Oklahoma, T301 Energy Center, 100 E. Boyd St., Norman, Oklahoma 73019, USA, or via e-mail: Hossein@ou.edu.
Jahediesfanjani, H., and Civan, F. (January 19, 2005). "Damage Tolerance of Well-Completion and Stimulation Techniques in Coalbed Methane Reservoirs." ASME. J. Energy Resour. Technol. September 2005; 127(3): 248–256. https://doi.org/10.1115/1.1875554
Coalbed methane (CBM) reservoirs are characterized as naturally fractured, dual-porosity, low-permeability, water-saturated gas reservoirs. Initially, the gas, water, and coal are at thermodynamic equilibrium under prevailing reservoir conditions. Dewatering is essential to promote gas production, and it can be accomplished by suitable completion and stimulation techniques. This paper investigates the efficiency and performance of openhole cavities, hydraulic fractures, frack and packs, and horizontal wells as potential completion methods that may reduce formation damage and increase productivity in coalbed methane reservoirs. Considering the dual-porosity nature of CBM reservoirs, numerical simulations have been carried out to determine the formation damage tolerance of each completion and stimulation approach. A new comparison parameter, named the normalized productivity index
Jnp(t)
is defined as the ratio of the productivity index of a stimulated well to that of a nondamaged vertical well as a function of time. Typical scenarios have been considered to evaluate the CBM properties, including reservoir heterogeneity, anisotropy, and formation damage, for their effects on
Jnp(t)
over the production time. The results for each stimulation technique show that the value of
Jnp(t)
declines over the production time at a rate that depends upon the applied technique and the prevailing reservoir conditions. The results also show that horizontal wells perform best when drilled orthogonal to the butt cleats. Long horizontal fractures improve reservoir productivity more than short vertical ones. Openhole cavity completions outperform vertical fractures if the fracture conductivity is reduced by any damage process. When vertical permeability is much lower than horizontal permeability, the production of vertical wells improves while the productivity of horizontal wells decreases. Finally, the pressure distribution of the reservoir under each scenario is strongly dependent upon the reservoir characteristics, including the hydraulic diffusivity of methane and the porosity and permeability distributions in the reservoir.
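The comparison parameter is a ratio of productivity indices, which can be illustrated with a minimal numeric sketch (hypothetical rates and pressures; the simple steady-state definition PI = q/Δp is used only for illustration, not the paper's simulation model):

```python
# Illustrative sketch (hypothetical numbers): the normalized productivity
# index Jnp(t) is the productivity index of a stimulated well divided by
# that of a nondamaged vertical well at the same time.
def productivity_index(rate_mscf_d, p_reservoir_psi, p_wellbore_psi):
    """Productivity index as production rate per unit drawdown."""
    return rate_mscf_d / (p_reservoir_psi - p_wellbore_psi)

j_stimulated = productivity_index(800.0, 1500.0, 500.0)  # stimulated well
j_vertical = productivity_index(300.0, 1500.0, 500.0)    # nondamaged vertical well
jnp = j_stimulated / j_vertical
print(round(jnp, 2))  # 2.67
```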
coal, natural gas technology, porosity, drilling, permeability, fracture, cracks
Damage, Fracture (Materials), Fracture (Process), Reservoirs, Methane, Permeability, Cavities, Anisotropy
|
Feasible joint events for trackerJPDA - MATLAB jpdaEvents - MathWorks España
jpdaEvents
Generate Feasible Joint Events
Obtain Feasible Joint Events from Likelihood Matrix
validationMatrix
likelihoodMatrix
FJEProbs
Feasible joint events for trackerJPDA
FJE = jpdaEvents(validationMatrix)
[FJE,FJEProbs] = jpdaEvents(likelihoodMatrix,k)
FJE = jpdaEvents(validationMatrix) returns the feasible joint events, FJE, based on the validation matrix. A validation matrix describes the possible associations between detections and tracks, whereas a feasible joint event for multi-object tracking is one realization of the associations between detections and tracks.
[FJE,FJEProbs] = jpdaEvents(likelihoodMatrix,k) generates the k-best feasible joint event matrices, FJE, corresponding to the posterior likelihood matrix, likelihoodMatrix. likelihoodMatrix defines the posterior likelihood of associating detections with tracks.
Define an arbitrary validation matrix for five measurements and six tracks.
M = [1 1 1 1 1 0 1
1 1 1 1 1 1 1];
Generate all feasible joint events and count the total number.
FJE = jpdaEvents(M);
nFJE = size(FJE,3);
Display a few of the feasible joint events.
disp([num2str(nFJE) ' feasible joint event matrices were generated.'])
574 feasible joint event matrices were generated.
toSee = [1:round(nFJE/5):nFJE, nFJE];
for ii = toSee
disp("Feasible joint event matrix #" + ii + ":")
disp(FJE(:,:,ii))
Feasible joint event matrix #1:
Feasible joint event matrix #116:
Create a likelihood matrix assuming four detections and two tracks.
likeMatrix = [0.1 0.1 0.1;
Generate three most probable events and obtain their normalized probabilities.
[FJE,FJEProbs] = jpdaEvents(likeMatrix,3)
FJE = 4x3x3 logical array
FJE(:,:,1) =
FJEProbs = 3×1
validationMatrix — Validation matrix
m-by-(n+1) matrix
Validation matrix, specified as an m-by-(n+1) matrix, where m is the number of detections within the cluster of a sensor scan, and n is the number of tracks maintained in the tracker. The validation matrix uses the first column to account for the possibility that each detection is clutter or false alarm, which is commonly referred to as "Track 0" or T0. The validation matrix is a binary matrix listing all possible detections-to-track associations. If it is possible to assign track Ti to detection Dj, then the (j, i+1) entry of the validation matrix is 1. Otherwise, the entry is 0.
likelihoodMatrix — Likelihood matrix
(m+1)-by-(n+1) matrix
Likelihood matrix, specified as an (m+1)-by-(n+1) matrix, where m is the number of detections within the cluster of a sensor scan, and n is the number of tracks maintained in the tracker. The likelihood matrix uses the first column to account for the possibility that each detection is clutter or false alarm, which is commonly referred to as "Track 0" or T0. The matrix uses the first row to account for the possibility that each track is not assigned to any detection, which can be referred to as "Detection 0" or D0. The (j+1,i+1) element of the matrix represents the likelihood of assigning track Ti to detection Dj.
k — Number of joint probabilistic events
Number of joint probabilistic events, specified as a positive integer.
FJE — Feasible joint events
m-by-(n+1)-by-p array
Feasible joint events, specified as an m-by-(n+1)-by-p array, where m is the number of detections within the cluster of a sensor scan, n is the number of tracks maintained in the tracker, and p is the total number of feasible joint events. Each page (an m-by-(n+1) matrix) of FJE corresponds to one possible association between all the tracks and detections. The feasible joint event matrix on each page satisfies:
The matrix has exactly one "1" value per row.
Except for the first column, which maps to clutter, there can be at most one "1" per column.
For more details on feasible joint events, see Feasible Joint Events.
FJEProbs — Probabilities of feasible joint events
p-by-1 vector of nonnegative scalars
Probabilities of feasible joint events, returned as a p-by-1 vector of nonnegative scalars. The summation of these scalars is equal to 1. The k-th element represents the probability of the k-th joint event (specified in the FJE output argument), normalized over the p feasible joint events.
\Omega =\left[\begin{array}{ccc}1& 1& 0\\ 1& 1& 1\\ 1& 0& 1\end{array}\right]
\begin{array}{l}{\Omega }_{1}=\left[\begin{array}{ccc}1& 0& 0\\ 1& 0& 0\\ 1& 0& 0\end{array}\right],\text{\hspace{0.17em}}\text{ }\text{\hspace{0.17em}}{\Omega }_{2}=\left[\begin{array}{ccc}0& 1& 0\\ 1& 0& 0\\ 1& 0& 0\end{array}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\Omega }_{3}=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 1& 0& 0\end{array}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\Omega }_{4}=\left[\begin{array}{ccc}1& 0& 0\\ 0& 0& 1\\ 1& 0& 0\end{array}\right]\\ {\Omega }_{5}=\left[\begin{array}{ccc}0& 1& 0\\ 0& 0& 1\\ 1& 0& 0\end{array}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\Omega }_{6}=\left[\begin{array}{ccc}1& 0& 0\\ 1& 0& 0\\ 0& 0& 1\end{array}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\Omega }_{7}=\left[\begin{array}{ccc}0& 1& 0\\ 1& 0& 0\\ 0& 0& 1\end{array}\right],\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\Omega }_{8}=\left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\\ 0& 0& 1\end{array}\right]\end{array}
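The eight matrices above can be reproduced with a short enumeration sketch (illustrative, not the MathWorks implementation): each detection row is assigned to exactly one column, where the clutter column may be reused but each track column may be taken at most once.

```python
# Sketch: enumerate all feasible joint events for a validation matrix.
# Column 0 is the clutter column (reusable); every other column is a
# track and can absorb at most one detection.
def feasible_joint_events(validation):
    """validation: m-by-(n+1) 0/1 matrix; returns a list of event matrices."""
    m = len(validation)
    events = []

    def assign(row, used_tracks, current):
        if row == m:
            events.append([r[:] for r in current])
            return
        for col, allowed in enumerate(validation[row]):
            if not allowed or (col != 0 and col in used_tracks):
                continue
            current[row][col] = 1
            assign(row + 1, used_tracks | {col} if col != 0 else used_tracks,
                   current)
            current[row][col] = 0

    assign(0, set(), [[0] * len(validation[0]) for _ in range(m)])
    return events

# The 3-detection, 2-track validation matrix Omega from the text:
omega = [[1, 1, 0],
         [1, 1, 1],
         [1, 0, 1]]
fje = feasible_joint_events(omega)
print(len(fje))  # 8, matching the enumeration Omega_1 .. Omega_8 above
```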
When dynamic memory allocation is disabled in the generated code, the order of events with the same probability can be different from the results in MATLAB.
|
WeightMatrix - Maple Help
get weight matrix
WeightMatrix(G, cp)
(optional) symbol or equation
WeightMatrix returns the matrix of edge weights of a weighted graph. The optional argument cp is used to control whether the weight matrix of the graph or a copy of it should be returned. The argument cp can be either the symbol copy or an equation of the form copy=true or copy=false. If the argument is missing the command returns a copy of the weight matrix of the graph by default.
\mathrm{with}\left(\mathrm{GraphTheory}\right):
G≔\mathrm{Graph}\left({[{1,2},2],[{2,3},1]}\right)
\textcolor[rgb]{0,0,1}{G}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{Graph 1: an undirected weighted graph with 3 vertices and 2 edge\left(s\right)}}
\mathrm{WeightMatrix}\left(G\right)
[\begin{array}{ccc}\textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{0}\\ \textcolor[rgb]{0,0,1}{2}& \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}\\ \textcolor[rgb]{0,0,1}{0}& \textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{0}\end{array}]
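Outside Maple, the same matrix can be assembled directly from the weighted edge list; the Python sketch below (illustrative, not part of the GraphTheory package) reproduces the result of the example above.

```python
# Sketch of what WeightMatrix computes: the symmetric weight matrix of
# an undirected weighted graph, built from its weighted edge list.
def weight_matrix(n, weighted_edges):
    """n: number of vertices (labelled 1..n);
    weighted_edges: iterable of ((u, v), w) pairs."""
    W = [[0] * n for _ in range(n)]
    for (u, v), w in weighted_edges:
        W[u - 1][v - 1] = w
        W[v - 1][u - 1] = w  # undirected graph: the matrix is symmetric
    return W

# The graph from the Maple example: edge {1,2} weight 2, edge {2,3} weight 1.
print(weight_matrix(3, [((1, 2), 2), ((2, 3), 1)]))
# [[0, 2, 0], [2, 0, 1], [0, 1, 0]]
```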
|
Poincaré Duality Spaces - Manifold Atlas
Poincaré Duality Spaces
The user responsible for this page is Klein. No other user may edit this page at present.
A Poincaré pair of dimension d consists of a finitely dominated CW pair (X,\partial X) together with a pair (\mathcal{L},[X]), where \mathcal{L} is a bundle of local coefficients on X which is free abelian of rank one, and [X] \in H_d(X,\partial X;\mathcal {L}) is a class such that the cap product with [X] induces an isomorphism H^*(X;\mathcal{B}) \to H_{d-*}(X,\partial X;\mathcal{L}\otimes\mathcal{B}). Here \mathcal B is allowed to range over all local coefficient bundles on X, but in fact it is sufficient to check the condition when \mathcal{B} is the local coefficient bundle over X given by \Bbb Z[\pi], where \pi is the fundamental groupoid of X.
When \partial X = \emptyset, one says that X is a Poincaré duality space. (In view of this, perhaps better terminology would be to call (X,\partial X) a Poincaré duality space with boundary.)
\mathcal L is called an orientation sheaf and [X] is called a fundamental class. The pair (\mathcal L,[X]) is unique up to unique isomorphism.
If (X,\partial X) with (\mathcal{L},[X]) is a Poincaré pair of dimension d, then \partial X is a Poincaré space of dimension d-1 with (\mathcal {L}_{|\partial X},\partial [X]), where \partial: H_d(X,\partial X;\mathcal{L}) \to H_{d-1}(\partial X;\mathcal{L}_{|\partial X}) is the boundary homomorphism.
A finite CW complex X admits the structure of a Poincaré duality space of dimension n if and only if there exists a framed compact smooth manifold M of dimension m \ge n+3 such that M is homotopy equivalent to X and the inclusion \partial M \subset M has homotopy fiber homotopy equivalent to S^{m-n-1}.
A compact (smooth, PL, TOP or homology) manifold pair (X,\partial X) of dimension d is a Poincaré duality pair of dimension d, where \mathcal L is the orientation sheaf of X and [X] is the manifold fundamental class.
Retrieved from "http://www.map.mpim-bonn.mpg.de/index.php?title=Poincar%C3%A9_Duality_Spaces&oldid=10684"
|
\left({S}_{3},{S}_{6}\right)-Amalgams IV
Wolfgang Lempken, Christopher Parker, Peter Rowley (2005)
\left({S}_{3},{S}_{6}\right)-Amalgams V
\left({S}_{3},{S}_{6}\right)-Amalgams VI
\left({S}_{3},{S}_{6}\right)
A geometric approach to the almost convexity and growth of some nilpotent groups.
A lower bound to the action dimension of a group.
Yoon, Sung Yil (2004)
A non-quasiconvex subgroup of a hyperbolic group with an exotic limit set.
Kapovich, Ilya (1995)
A numerical invariant for finitely generated groups via actions on graphs.
William L. Paschke (1993)
A remark on the intersection of the conjugates of the base of quasi-HNN groups.
Mahmood, R.M.S. (2004)
A Root System for the Lyons Group.
Werner Meyer, Wolfram Neutsch (1989)
A Stiefel complex for the orthogonal group of a field.
K. Vogtmann (1982)
Guirardel, Vincent (2003)
Actions de groupes sur les arbres
Frédéric Paulin (1995/1996)
Actions of discrete groups on nonpositively curved spaces.
Bernhard Leeb, Michael Kapovich (1996)
ℝ
Vincent Guirardel (2008)
We study actions of finitely generated groups on ℝ-trees under some stability hypotheses. We prove that either the group splits over some controlled subgroup (fixing an arc in particular), or the action can be obtained by gluing together actions of simple types: actions on simplicial trees, actions on lines, and actions coming from measured foliations on 2-orbifolds. This extends results by Sela and Rips-Sela. However, their results are misstated, and we give a counterexample to their statements. The...
|
Numerical calculations Algebraic calculations Complex numbers
http://www.epsilon-publi.net/ ari-en-math-admin
2 - Algebraic Calculations with Aplusix proposed by Aristod
A - Expansion
A1 - Expansion
- 1 variable, degree 1
- Very easy
2\left(4x+1\right)\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}4\left(x-3\right)-3\left(4x+3\right)
Aplusix - Tools(Ari=Z) Open Description
3\left(6x+4\right)+4\left(-3+3x\right)-3\left(4x+2\right)
- 2 or 3 variables, degree 1
2\left(4x-4y+2\right)-4\left(3x+2y+4\right)
x\left(2x+4\right)-2x\left(3+3x\right)\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}{\left(3x+2\right)}^{2}
- A bit difficult
{\left(-2x+2y+2\right)}^{2}\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}{\left(x-3\right)}^{2}\left(1-4x\right)
B- Factorization
B1 - Factorizations - Very easy
-4{x}^{2}+3x\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}x\left(x-3\right)+\left(3x-2\right)\left(x-3\right)
B2 - Factorizations - Easy
{x}^{2}-16\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}9{x}^{2}-12x+4
B3 - Factorizations - Medium
\left(5+4x\right)\left(2x-1\right)+{\left(5+4x\right)}^{2}
B4 - Factorizations - Common factor - Difficult
4{x}^{2}-25+\left(4x+3\right)\left(2x+5\right)
B5 - Factorizations - Discriminant
{x}^{2}+3x-4
Aplusix - Tools(Ari=Q Alg=Light) Open Description
{x}^{2}-x-3\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}-3{x}^{2}-3x+4
B7 - Factorizations - Degree 3 - Without discriminant - Medium
{x}^{3}-27\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}{x}^{3}-4{x}^{2}+4x
B8 - Factorizations - Degree 3 and 4 - Difficult
\left(2x-3\right){\left(4x-1\right)}^{2}-\left(4x+3\right)\left(2x-3\right)\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}{x}^{4}-10{x}^{2}+9
C - Linear equations
C1 - Linear equations
-
Immediate
3x=5\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}2x-1=0
-
5x=3x+2\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}-x+3=4x+5+2x
-
2.3x+3.4=-2.6x-1\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}\frac{4x-5}{3}=\frac{-x-1}{6}
Aplusix - Tools(Ari=Q) Open Description
-
3x+2-3x=4x+x-5
2\left(2x+1\right)=3\left(-2x+5\right)
-
5\left(1.3x+3.4\right)=4\left(-2.9x-5\right)\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}3\left(\frac{x}{3}+2\right)=5\left(\frac{x}{6}-5\right)
-
\frac{5x}{3}+5=\frac{4x}{3}+5\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}\frac{2x+2}{3}=\frac{4x-5}{6}-\frac{3x+3}{2}
C8 - Equations with 0, 1 or infinitely many solutions
2x=2-2x+4x\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}4\left(3x-5\right)=4\left(x-4\right)
C10 - Fractional equations transforming into linear equations
\frac{5}{-3x+1}=\frac{2}{4x+2}\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}\frac{-3x-2}{-x-3}=2
D - Linear inequalities
D1 - Linear inequalities
-
3x\le 5\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}2x-1>0
-
5x\ge 3x+2\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}-x+3<4x+5+2x
-
2.3x+3.4>-2.6x-1\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}\frac{4x-5}{3}\le \frac{-x-1}{6}
-
3x+2-3x<4x+x-5
2\left(2x+1\right)\ge 3\left(-2x+5\right)
-
5\left(1.3x+3.4\right)>4\left(-2.9x-5\right)\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}3\left(\frac{x}{3}+2\right)\le 5\left(\frac{x}{6}-5\right)
-
\frac{5x}{3}+5\le \frac{4x}{3}+5\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}\frac{2x+2}{3}\ge \frac{4x-5}{6}-\frac{3x+3}{2}
D8 - Inequalities with 0, 1 or infinitely many solutions
2x>2-2x+4x\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}4\left(3x-5\right)\le 4\left(x-4\right)
E - Quadratic equations
E1 - Quadratic equations - Immediate
16{x}^{2}-9=0\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}\left(-2x+4\right)\left(5x-1\right)=0\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}{\left(5x-2\right)}^{2}=0
E2 - Quadratic equations - Immediate
\frac{{x}^{2}}{4}=16\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}\left(\frac{x}{3}-4\right)\left(3x+\frac{4}{3}\right)=0\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}{\left(\frac{x}{2}-5\right)}^{2}=0
E3 - Quadratic equations - Factorization
{x}^{2}-9-\left(5x+2\right)\left(x-3\right)=0\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}\left(3x+2\right)\left(x-3\right)+\left(x-4\right)\left(x-3\right)=0
4{x}^{2}+12x+9+\left(x-4\right)\left(2x+3\right)=0\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}\left(4x+4\right)\left(-4x+12\right)+\left(-x+3\right)\left(x-4\right)=0
E5 - Quadratic equations - Discriminant
{x}^{2}-3x+4=0\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}-2{x}^{2}+2x+1=0\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}{x}^{2}+2x+1=0
Aplusix - Tools(Ari=Q Alg=Medium) Open Description
-{x}^{2}+\frac{4}{5}x-4=0\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}4{x}^{2}+\frac{2}{5}x+\frac{3}{5}=0
E7 - Quadratic equations - Calculations and discriminant
\left(4x+2\right)\left(3x-6\right)=3x\left(6x-3\right)\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}-2{x}^{2}-x+3=4{x}^{2}+4x+2
{\left(4x+5\right)}^{2}={\left(5x-1\right)}^{2}\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}{\left(2x+2\right)}^{2}-\left(3x-2\right)\left(4x-1\right)=0
E9 - Fractional equations transforming into quadratic equations
\frac{1}{x+3}+\frac{1}{x-1}=\frac{2}{x}\phantom{\rule{0.8ex}{0ex}};\phantom{\rule{0.4ex}{0ex}}\frac{2x+3}{-3x-1}=\frac{4x-1}{2x-1}
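Exercise groups B5, E5 and E7 all rest on the discriminant Δ = b² − 4ac. A minimal Python check of hand computations (the helper is our own, not part of Aplusix):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0, via the discriminant."""
    delta = b * b - 4 * a * c
    if delta < 0:
        return []                      # no real roots
    if delta == 0:
        return [-b / (2 * a)]          # one double root
    r = math.sqrt(delta)
    return [(-b - r) / (2 * a), (-b + r) / (2 * a)]

# x^2 + 3x - 4 = 0 (group B5) factors as (x + 4)(x - 1)
print(solve_quadratic(1, 3, -4))       # [-4.0, 1.0]
# x^2 + 2x + 1 = 0 (group E5) has the double root -1
print(solve_quadratic(1, 2, 1))        # [-1.0]
```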
F - Simultaneous equations
F1 - Simultaneous equations
2\times 2
-
\left\{\begin{array}{l}2x+1=4\\ 2y-1=1\end{array}\right.
\left\{\begin{array}{l}x+y=1\\ x+2y=5\end{array}\right.
2\times 2
-
\left\{\begin{array}{l}4x+2y=5\\ -3x+2y=-4\end{array}\right.
\left\{\begin{array}{l}-3y=-x+5\\ 4x=2y+4\end{array}\right.
2\times 2
-
\left\{\begin{array}{l}5y+2z-2=-3y+z+4\\ -3y+2z-2=3y+4z-3\end{array}\right.
2\times 2
-
Decimals and fractions
\left\{\begin{array}{l}5.1x-2.7y=3\\ 2.9x+5y=-4\end{array}\right.
\left\{\begin{array}{l}\frac{2}{3}x+\frac{5}{6}y=3\\ x+4y=3\end{array}\right.
F5 - Simultaneous equations with 2 unknowns
-
0, 1 or infinitely many solutions
\left\{\begin{array}{l}3x+2y=5\\ 9x+6y=15\end{array}\right.
\left\{\begin{array}{l}3x+3y=-4\\ 4x-3y=2\\ 2x+9y=-10\end{array}\right.
3\times 3
-
\left\{\begin{array}{l}3x-2y-2z=2\\ y-z=1\\ 2x+3y+2z=1\end{array}\right.
\left\{\begin{array}{l}-x+y-z=-2\\ 2x+y+2z=-1\\ -x+2y+3z=0\end{array}\right.
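For 2×2 systems such as those in group F2, Cramer's rule gives the solution directly; a short Python sketch with exact rational arithmetic (the helper name is ours):

```python
from fractions import Fraction

def solve_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e and c*x + d*y = f by Cramer's rule."""
    det = Fraction(a * d - b * c)
    if det == 0:
        raise ValueError("0 or infinitely many solutions")
    x = Fraction(e * d - b * f) / det
    y = Fraction(a * f - e * c) / det
    return x, y

# 4x + 2y = 5, -3x + 2y = -4 (group F2)
x, y = solve_2x2(4, 2, -3, 2, 5, -4)
print(x, y)   # 9/7 -1/14
```

A zero determinant signals the F5 case of 0 or infinitely many solutions.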
|
Appendixes - Tutellus
Appendix 1. Yield farming mathematical model.
The amount of tokens allocated to Yield Farming rewards is 64.000.000. Token releasing for farming follows a linear function in order to offer good rewards to farmers throughout all 36 months.
If rewards were a constant amount, farmers would have huge rewards at the beginning and very low rewards at the end of the farming phase (due to the amount of tokens in circulation). This way, we reward farmers with very interesting amounts from day one until the end of the farming phase.
Taking 1 month as unit...
Releasing is defined by a straight line function:
y = Ax + B
where rewards in a certain period of time are defined by the integration of the function above:
Rw = \int _{t_0} ^{t_1} (Ax + B) dx
As we know, from month 0 to 36, 64.000.000 TUT will be released. B is zero, because if no time has passed, no tokens have been released, so:
\int _{0} ^{36} (Ax) dx = 64.000.000
A = \frac {(64.000.000)(2)} {(36)^2} = 98.765,4321
The straight line function that defines the growth of rewards releasing is:
y = 98.765,4321x
And rewards are calculated by the integration of the function above in a certain period of time:
Rw = \int _{t_0} ^{t_1} (98.765,4321x) dx
Farmers are allowed to claim their rewards, according to the function above, the ratio (R) and the participation (P):
Rw = \int _{t_0} ^{t_1} ((98.765,4321x)(P)(R)) dx
E.g. a token holder is providing 50.000 TUT (and 0,07 BTC) as liquidity to the pool and has been staking their LP tokens in Tutellus since the first day of month 12, and claims their rewards on the first day of month 14. Their liquidity represents 5% of the pool. There is a ratio of 4:1 in LP/FC2, meaning LP gets 80% of rewards against 20% for FC2. Assuming none of these ratios changes through this period, rewards are calculated in this way:
Rw = \int _{12} ^{14} ((98.765,4321x)(0,05)(0,8)) dx = 102.716,05{\space} TUT
APR = \frac {102.716,05}{50.000} · \frac{12}{2} \approx 1.233\%
This represents roughly 1.233% APR for that period. This might vary if any of the ratios changes. If the liquidity of the pool is doubled, rewards will be half of those calculated, for example.
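In closed form the reward integral is Rw = A·P·R·(t₁² − t₀²)/2, which is easy to script. A minimal Python sketch (the function name is ours):

```python
def rewards(t0, t1, P, R, A=98765.4321):
    """Closed form of Rw = integral from t0 to t1 of A*x*P*R dx."""
    return A * P * R * (t1**2 - t0**2) / 2

# Sanity check: the full 36 months at P = R = 1 release all 64,000,000 TUT
print(round(rewards(0, 36, 1, 1)))

# Worked example: 5% of the pool, 80% LP ratio, months 12 to 14
rw = rewards(12, 14, 0.05, 0.8)
apr = rw / 50_000 * (12 / 2)   # annualize the 2-month return
print(round(rw, 2))
```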
|
A characterization and moving average representation for stable harmonizable processes.
Nikfar, M., Soltani, A.Reza (1996)
A characterization of group-valued measures satisfying the countable chain condition
A multidimensional Lyapunov type theorem
2
A Prokhorov's theorem for Banach lattice valued measures.
M. J. Muñoz Bouzo (1995)
A remark on weak McShane integral
Kazushi Yoshitomi (2019)
We characterize the weak McShane integrability of a vector-valued function on a finite Radon measure space by means of only finite McShane partitions. We also obtain a similar characterization for the Fremlin generalized McShane integral.
A scalar Volterra derivative for the PoU-integral
V. Marraffa (2005)
A weak form of the Henstock Lemma for the \mathrm{P}oU-integrable functions is given. This allows us to prove the existence of a scalar Volterra derivative for the \mathrm{P}oU-integral. Also the \mathrm{P}oU-integrable functions are characterized by means of Pettis integrability and a condition involving finite pseudopartitions.
A simple proof of the Borel extension theorem and weak compactness of operators
Ivan Dobrakov, Thiruvaiyaru V. Panchapagesan (2002)
Let T be a locally compact Hausdorff space and let
{C}_{0}\left(T\right)
be the Banach space of all complex valued continuous functions vanishing at infinity in T, provided with the supremum norm. Let X be a quasicomplete locally convex Hausdorff space. A simple proof of the theorem on regular Borel extension of X-valued \sigma-additive Baire measures on T is given, which is more natural and direct than the existing ones. Using this result the integral representation and weak compactness of a continuous linear map
u:{C}_{0}\left(T\right)\to X
when...
Absolute continuity of vector measures
K. Musiał (1973)
Abstract Perron-Stieltjes integral
Fundamental results concerning Stieltjes integrals for functions with values in Banach spaces are presented. The background of the theory is the Kurzweil approach to integration, based on Riemann type integral sums (see e.g. [4]). It is known that the Kurzweil theory leads to the (non-absolutely convergent) Perron-Stieltjes integral in the finite dimensional case. In [3] Ch. S. Honig presented a Stieltjes integral for Banach space valued functions. For Honig’s integral the Dushnik interior integral...
Abstract representing kernels.
A. Esperanza Tong (1974)
|
InfinitesimalCoadjointAction - Maple Help
Home : Support : Online Help : Mathematics : DifferentialGeometry : LieAlgebras : InfinitesimalCoadjointAction
LieAlgebras[InfinitesimalCoadjointAction] - find the vector fields defining the infinitesimal co-adjoint action of a Lie group on the dual of its Lie algebra
InfinitesimalCoadjointAction(Alg, M)
Alg - name or string, the name of an initialized Lie algebra
M - name or string, the name of an initialized manifold
Let G be an n-dimensional Lie group with Lie algebra \mathrm{𝔤}, and let
\left[{e}_{i}, {e}_{j}\right] = {C}_{\mathrm{ij}}^{k}{}_{ }{e}_{k}
be the structure equations for \mathrm{𝔤}. If {x}^{i} are coordinates for the dual vector space {\mathrm{𝔤}}^{*}, then the infinitesimal generators for the co-adjoint action of G on {\mathrm{𝔤}}^{*} are the vector fields
{X}_{i} = {C}_{\mathrm{ik}}^{j}{}_{ }{x}^{j}\frac{∂ }{∂{x}^{k}}
The command InfinitesimalCoadjointAction(Algebra, Manifold) calculates the vector fields {X}_{i} for the Lie algebra Algebra using the coordinates for the dual space provided by M.
The command InfinitesimalCoadjointAction is part of the DifferentialGeometry:-LieAlgebras package. It can be used in the form InfinitesimalCoadjointAction(...) only after executing the commands with(DifferentialGeometry) and with(LieAlgebras), but can always be used by executing DifferentialGeometry:- LieAlgebras:- InfinitesimalCoadjointAction(...).
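Outside Maple, the defining formula is simple enough to reproduce. Below is a Python sketch (entirely our own, with the index placement X_i = C^j_{ik} x^j ∂/∂x^k chosen to match the worked examples) that tabulates the coefficients of the generators from the structure constants:

```python
def coadjoint_fields(C, n):
    """C[(i, j, k)] encodes [e_i, e_j] = sum over k of C[(i, j, k)] e_k
    (1-based indices; give each bracket once, antisymmetry is filled in).
    Returns fields such that fields[i-1][a-1][b-1] is the coefficient of
    x^b in the d/dx^a component of the generator X_i."""
    full = dict(C)
    for (i, j, k), v in C.items():
        full[(j, i, k)] = -v                 # [e_j, e_i] = -[e_i, e_j]
    fields = [[[0] * n for _ in range(n)] for _ in range(n)]
    for (i, j, k), v in full.items():
        fields[i - 1][j - 1][k - 1] += v     # v * x^k d/dx^j enters X_i
    return fields

# alg1 from the example: [e1, e3] = e1, [e2, e3] = e1 + e2
fields = coadjoint_fields({(1, 3, 1): 1, (2, 3, 1): 1, (2, 3, 2): 1}, 3)
print(fields[0])   # X_1 = x D_z -> [[0, 0, 0], [0, 0, 0], [1, 0, 0]]
print(fields[2])   # X_3 = -x D_x + (-x - y) D_y
```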
\mathrm{with}\left(\mathrm{DifferentialGeometry}\right):
\mathrm{with}\left(\mathrm{LieAlgebras}\right):
\mathrm{LD1}≔\mathrm{_DG}\left([["LieAlgebra",\mathrm{alg1},[3]],[[[1,3,1],1],[[2,3,1],1],[[2,3,2],1]]]\right)
LD1 := [[e1, e3] = e1, [e2, e3] = e1 + e2]
\mathrm{DGsetup}\left(\mathrm{LD1}\right)
Lie algebra: alg1
Now define coordinates for the dual of the Lie algebra.
\mathrm{DGsetup}\left([x,y,z],N\right)
frame name: N
Calculate the infinitesimal generators for the co-adjoint action.
\mathrm{Gamma}≔\mathrm{InfinitesimalCoadjointAction}\left(\mathrm{alg1},N\right)
Γ := [x D_z, (x + y) D_z, -x D_x + (-x - y) D_y]
The center of the Lie algebra alg1 is trivial, and therefore the structure equations for the Lie algebra of vector fields \mathrm{Γ} coincide with those of \mathrm{alg1}:
\mathrm{LieAlgebraData}\left(\mathrm{Gamma}\right)
[[e1, e3] = e1, [e2, e3] = e1 + e2]
The vector fields in \mathrm{Γ} may be calculated directly using the Adjoint and convert/DGvector commands. For example, we obtain the last vector in \mathrm{Γ}:
A≔\mathrm{Adjoint}\left(\mathrm{e3}\right)
A := [[-1, -1, 0], [0, -1, 0], [0, 0, 0]]
\mathrm{convert}\left(\mathrm{LinearAlgebra}:-\mathrm{Transpose}\left(A\right),\mathrm{DGvector},N\right)
-x D_x + (-x - y) D_y
\mathrm{LD2}≔\mathrm{_DG}\left([["LieAlgebra",\mathrm{alg2},[4]],[[[2,4,1],1],[[3,4,3],1]]]\right)
LD2 := [[e2, e4] = e1, [e3, e4] = e3]
\mathrm{DGsetup}\left(\mathrm{LD2}\right)
Lie algebra: alg2
\mathrm{DGsetup}\left([w,x,y,z],\mathrm{N2}\right)
frame name: N2
\mathrm{Γ2}≔\mathrm{InfinitesimalCoadjointAction}\left(\mathrm{alg2},\mathrm{N2}\right)
Γ2 := [w D_z, y D_z, -w D_x - y D_y]
In this example, the Lie algebra has a non-trivial center \left[{e}_{1}\right], and now the structure equations for {\mathrm{Γ}}_{2} are those for the quotient of \mathrm{alg2} by its center:
\mathrm{Center}\left(\mathrm{alg2}\right)
[e1]
\mathrm{QuotientAlgebra}\left([\mathrm{e1}],[\mathrm{e2},\mathrm{e3},\mathrm{e4}]\right)
[[e2, e3] = e2]
\mathrm{LieAlgebraData}\left(\mathrm{Γ2}\right)
[[e2, e3] = e2]
The invariants for the co-adjoint action are called generalized Casimir operators (See J. Patera, R. T. Sharp , P. Winternitz and H. Zassenhaus, Invariants of real low dimensional Lie algebras, J. Math. Phys. 17, No 6, June 1976, 966--994).
We calculate the generalized Casimir operators for the Lie algebra [5,12] from this article. First use the Retrieve command to obtain the structure equations for this algebra and initialize the Lie algebra.
\mathrm{LD3}≔\mathrm{Library}:-\mathrm{Retrieve}\left("Winternitz",1,[5,12],\mathrm{alg3}\right)
LD3 := [[e1, e5] = e1, [e2, e5] = e1 + e2, [e3, e5] = e2 + e3, [e4, e5] = e3 + e4]
\mathrm{DGsetup}\left(\mathrm{LD3}\right)
Lie algebra: alg3
\mathrm{DGsetup}\left([\mathrm{x1},\mathrm{x2},\mathrm{x3},\mathrm{x4},\mathrm{x5}],\mathrm{N3}\right)
frame name: N3
\mathrm{Γ3}≔\mathrm{InfinitesimalCoadjointAction}\left(\mathrm{alg3},\mathrm{N3}\right)
Γ3 := [x1 D_x5, (x1 + x2) D_x5, (x2 + x3) D_x5, (x3 + x4) D_x5, -x1 D_x1 + (-x1 - x2) D_x2 + (-x2 - x3) D_x3 + (-x3 - x4) D_x4]
We use the InvariantGeometricObjectFields command to calculate the functions which are invariant under the group generated by {\mathrm{Γ}}_{3}.
C≔\mathrm{expand}\left(\mathrm{GroupActions}:-\mathrm{InvariantGeometricObjectFields}\left(\mathrm{Γ3},[1],\mathrm{output}="list"\right)\right)
C := [-ln(x1) + x2/x1, (1/2) ln(x1)^2 - ln(x1) x2/x1 + x3/x1, -(1/6) ln(x1)^3 + (1/2) ln(x1)^2 x2/x1 - ln(x1) x3/x1 + x4/x1]
Functional combinations of these invariants give the formulas for the generalized Casimir operators in the Patera, Sharp, et al. paper.
\mathrm{expand}\left([\mathrm{expand}\left(\mathrm{exp}\left(-C[1]\right),\mathrm{symbolic}\right),2C[2]-{C[1]}^{2},3C[3]+{C[1]}^{3}-3C[1]C[2]]\right)
[x1/exp(x2/x1), 2 x3/x1 - x2^2/x1^2, 3 x4/x1 + x2^3/x1^3 - 3 x2 x3/x1^2]
|
\frac{3}{8}-\frac{1}{6}
You cannot directly add or subtract fractions with different denominators. To make the subtraction possible, find the lowest common multiple of the two denominators: the lowest common multiple of 8 and 6 is 24.
\text{Multiply }\frac{3}{8}\text{ by }\frac{3}{3}\text{ and }\frac{1}{6}\text{ by }\frac{4}{4}.
This will make the denominators the same.
\frac{3}{8}\cdot \frac{3}{3} = \frac{9}{24} \ \ \ \ \ \ \ \ \ \frac{1}{6}\cdot \frac{4}{4}=\frac{4}{24}
Now you can subtract the numerator of the second term from the numerator of the first.
\frac{9}{24}-\frac{4}{24}=\frac{5}{24}
\frac{5}{24}
\frac{4}{5}+\frac{3}{5}
You don't need to change the denominator for this one. However, you will end up with a fraction greater than 1, which you will need to change to a mixed number.
\frac{5}{9}-\frac{1}{5}
This problem can be solved in the same way as part (a).
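Python's fractions module performs this arithmetic exactly and can be used to check answers to exercises like these:

```python
from fractions import Fraction

print(Fraction(3, 8) - Fraction(1, 6))   # 5/24, as computed above
print(Fraction(4, 5) + Fraction(3, 5))   # 7/5, i.e. the mixed number 1 2/5
print(Fraction(5, 9) - Fraction(1, 5))   # 16/45
```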
|
a vertical rod of lenght l is moved with constant velocity v towards east the vertical components of earth - Physics - Electromagnetic Induction - 8193279 | Meritnation.com
A vertical rod of length l is moved with constant velocity v towards the east. The vertical component of the earth's magnetic field is B and the angle of dip is x. The induced e.m.f. in the rod is
1) Blvsinx
2) Blvtanx
3) Blv cotx
4) Blv cosx
The induced emf is e = B_H l v, where B_H is the horizontal component of the earth's field: the rod is vertical and moves horizontally, so only the horizontal component drives a force along the rod. If the total field is B_0, the vertical component is B = B_0 sin x and the horizontal component is B_H = B_0 cos x = B cos x / sin x = B cot x. Hence e = B l v cot x, which is option 3.
|
Niche apportionment models
Mechanistic models for niche apportionment are biological models used to explain relative species abundance distributions. These niche apportionment models describe how species break up a resource pool in multi-dimensional space, determining the distribution of abundances of individuals among species. The relative abundances of species are usually expressed in a Whittaker plot, or rank abundance plot, where species are ranked by number of individuals on the x-axis, plotted against the log relative abundance of each species on the y-axis. The relative abundance can be measured as the relative number of individuals within species or the relative biomass of individuals within species.
Niche apportionment models were developed because ecologists sought biological explanations for relative species abundance distributions. MacArthur (1957, 1961)[1][2] was one of the earliest to express dissatisfaction with purely statistical models, presenting instead three mechanistic niche apportionment models. MacArthur believed that ecological niches within a resource pool could be broken up like a stick, with each piece of the stick representing niches occupied in the community. With contributions from Sugihara (1980),[3] Tokeshi (1990, 1993, 1996)[4][5][6] expanded upon the broken stick model, generating roughly seven mechanistic niche apportionment models. These mechanistic models provide a useful starting point for describing the species composition of communities.
Figure 1. Whittaker plot of species rank abundance for the 7 most commonly considered mechanistic models of niche apportionment. Models with steeper slopes, such as the Dominance Preemption model, represent less even communities where a few species are highly abundant and the remaining species' abundances decrease sharply (see Species evenness). Alternatively, shallow-sloped models such as the Dominance Decay model represent more even communities where many species are equally abundant and relatively few species are rare.
A niche apportionment model can be used in situations where one resource pool is either sequentially or simultaneously broken up into smaller niches by colonizing species or by speciation (clarification on resource use: species within a guild use the same resources, while species within a community may not).
These models describe how species that draw from the same resource pool (e.g. a guild) partition their niche. The resource pool is broken either sequentially or simultaneously, and the two components of the process of fragmentation of the niche are which fragment is chosen and the size of the resulting fragment (Figure 2).
Niche apportionment models have been used in the primary literature to explain and describe changes in the relative abundance distributions of a diverse array of taxa, including freshwater insects, fish, bryophytes, beetles, hymenopteran parasites, plankton assemblages and salt marsh grass.
Figure 2. The mechanisms with which a niche is broken. A resource pool is partitioned into niches of different sizes, which can be translated into the relative abundances of species in the total resource pool.
The mechanistic models that describe these plots work under the assumption that rank abundance plots are based on a rigorous estimate of the abundances of individuals within species and that these measures represent the actual species abundance distribution. Furthermore, whether using the number of individuals or the biomass of individuals as the abundance measure, these models assume that this quantity is directly proportional to the size of the niche occupied by an organism. One suggestion is that abundance measured as the number of individuals may exhibit lower variance than measures based on biomass. Thus, some studies using abundance as a proxy for niche allocation may overestimate the evenness of a community. This happens because there is no clear distinction of the relationship between body size, abundance, and resource use. Often studies fail to incorporate size structure or biomass estimates into measures of actual abundance, and these measures can create a higher variance around the niche apportionment models than abundance measured strictly as the number of individuals.[7][8]
Tokeshi's mechanistic models of niche apportionment
Seven mechanistic models that describe niche apportionment are described below. The models are presented in order of increasing evenness, from the least even (the Dominance Preemption model) to the most even (the Dominance Decay and MacArthur Fraction models).
Dominance preemption
This model describes a situation where, after initial colonization (or speciation), each new species pre-empts more than 50% of the smallest remaining niche. In a Dominance Preemption model of niche apportionment each species colonizes a random portion between 50 and 100% of the smallest remaining niche, making this model stochastic in nature. A closely related model, the Geometric Series,[9] is a deterministic version of the Dominance Preemption model, wherein the percentage of the remaining niche space that the new species occupies (k) is always the same. In fact, the Dominance Preemption and Geometric Series models are conceptually similar and will produce the same relative abundance distribution when the proportion of the smallest niche filled is always 0.75. The Dominance Preemption model is the best fit to the relative abundance distributions of some stream fish communities in Texas, including some taxonomic groupings and specific functional groupings.[10]
The Geometric Series (k = 0.75):
{\displaystyle P_{i}=k(1-k)^{i-1}}
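Under the deterministic Geometric Series, the expected relative abundance of each ranked species follows directly from this formula; a short Python sketch:

```python
def geometric_series(k, n_species):
    """Relative abundance of the i-th ranked species: P_i = k * (1 - k)**(i - 1)."""
    return [k * (1 - k) ** (i - 1) for i in range(1, n_species + 1)]

p = geometric_series(0.75, 5)
print([round(x, 4) for x in p])   # [0.75, 0.1875, 0.0469, 0.0117, 0.0029]
```

With many species the abundances sum to (essentially) the whole resource pool, since the leftover fraction (1 - k)^n vanishes.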
Random assortment
In the random assortment model the resource pool is divided at random among simultaneously or sequentially colonizing species. This pattern could arise because the abundance measure does not scale with the amount of niche occupied by a species, or because temporal variation in species abundance or niche breadth causes discontinuity in niche apportionment over time, so that species appear to have no relationship between extent of occupancy and their niche. Tokeshi (1993)[5] explained that this model is in many ways similar to Caswell's neutral theory of biodiversity, mainly because species appear to act independently of each other.
Random fraction
The random fraction model describes a process where niche size is chosen at random by sequentially colonizing species. The initial species chooses a random portion of the total niche, and subsequent colonizing species also choose a random portion of the total niche and divide it randomly until all species have colonized. Tokeshi (1990)[4] found this model to be compatible with some epiphytic chironomid communities, and more recently it has been used to explain the relative abundance distributions of phytoplankton communities, salt meadow vegetation, some communities of insects in the order Diptera, some ground beetle communities, functional and taxonomic groupings of stream fish in Texas bio-regions, and ichneumonid parasitoids. A similar model was developed by Sugihara in an attempt to provide a biological explanation for the log normal distribution of Preston (1948).[11] Sugihara's (1980)[3] Fixed Division Model was similar to the random fraction model, but the randomness of the model is drawn from a triangular distribution with a mean of 0.75 rather than a normal distribution with a mean of 0.5 used in the random fraction. Sugihara used a triangular distribution to draw the random variables because the randomness of some natural populations matches a triangular distribution with a mean of 0.75.
Power fraction
This model can explain a relative abundance distribution when the probability of colonizing an existing niche in a resource pool is positively related to the size of that niche (measured as abundance, biomass, etc.). The probability with which a portion of the niche is colonized depends on the relative sizes of the established niches, scaled by an exponent k. k can take a value between 0 and 1, and whenever k>0 the larger niche has a somewhat higher probability of being colonized. This model is touted as being more biologically realistic because one can imagine many cases where the niche with the larger proportion of resources is more likely to be invaded, because that niche has more resource space and thus more opportunity for acquisition. The random fraction model of niche apportionment is one extreme of the power fraction model, where k=0; the other extreme, k=1, resembles the MacArthur fraction model, where the probability of colonization is directly proportional to niche size.[6][12]
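The size-weighted colonization rule can be sketched as a small simulation. This is an illustration only (uniform breakpoints and the seed are assumptions, not from the cited papers):

```python
import random

def power_fraction(n_species, k, seed=7):
    """Power fraction model: the niche to be split is chosen with
    probability proportional to (niche size)**k. k=0 recovers the
    random fraction model; k=1 approaches the MacArthur fraction."""
    rng = random.Random(seed)
    niches = [1.0]
    for _ in range(n_species - 1):
        weights = [s ** k for s in niches]               # size-dependent choice
        i = rng.choices(range(len(niches)), weights=weights)[0]
        piece = niches[i] * rng.random()                 # split at a random fraction
        niches[i] -= piece
        niches.append(piece)
    return sorted(niches, reverse=True)

print(power_fraction(8, k=0.5))
```

Setting k between the two extremes interpolates the evenness of the resulting rank-abundance distribution.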
MacArthur fraction
This model requires that the initial niche is broken at random and that successive niches are chosen with a probability proportional to their size. In this model the largest niche always has a greater probability of being broken than the smaller niches in the resource pool. This can lead to a more even distribution, because larger niches are more likely to be broken, facilitating co-existence between species in equivalently sized niches. The basis for the MacArthur fraction model is the broken stick model, developed by MacArthur (1957). The two models produce similar results, but one of the main conceptual differences is that niches are filled simultaneously in the broken stick model rather than sequentially, as in the MacArthur fraction. Tokeshi (1993)[5] argues that sequential invasion of a resource pool is more biologically realistic than simultaneous breakage of the niche space. When the abundances of fish from all bio-regions in Texas were combined, the distribution resembled the broken stick model of niche apportionment, suggesting a relatively even distribution of freshwater fish species in Texas.[10]
Dominance decay
This model can be thought of as the inverse of the dominance pre-emption model. The initial resource pool is colonized randomly, and subsequent colonizers always colonize the largest niche, whether or not it is already occupied. This model generates the most even community of the niche apportionment models described above, because the largest niche is always broken into two smaller fragments, each likely to be comparable in size to the smaller niches left unbroken. Communities with this “level” of evenness seem to be rare in natural systems. One such community, however, is the relative abundance distribution of filter feeders at one site within the River Danube in Austria.[13]
A composite model exists when a combination of niche apportionment models act in different portions of the resource pool. Fesl (2002)[13] shows how a composite model might appear in a study of freshwater Diptera, in that different niche apportionment models fit different functional groups of the data. In another example, Higgins and Strauss (2008), modeling fish assemblages, found that fish communities from different habitats and with different species compositions conform to different niche apportionment models; the entire species assemblage was thus a combination of models acting in different regions of the species' range.
Fitting mechanistic models of niche apportionment to empirical data
Mechanistic models of niche apportionment are intended to describe communities. Researchers have used these models in many ways to investigate the temporal and geographic trends in species abundance.
For many years the fit of niche apportionment models was judged by eye, with graphs of the models compared against empirical data.[5] More recently, statistical tests of the fit of niche apportionment models to empirical data have been developed.[14][15] The latter method (Etienne and Olff 2005)[15] uses a Bayesian simulation of the models to test their fit to empirical data. The former method, which is still commonly used, simulates the expected relative abundances, from a normal distribution, of each model given the same number of species as the empirical data. Each model is simulated multiple times, and the mean and standard deviation can be calculated to assign confidence intervals around each relative abundance distribution. The confidence interval around each rank can then be tested against the empirical data for each model to determine model fit. The confidence intervals are calculated as follows.[4][12] For more information on the simulation of niche apportionment models, see the website [1], which describes the program Power Niche.[14]
{\displaystyle R(x_{i})=\mu _{i}\pm {\frac {r\sigma _{i}}{\sqrt {n}}}}
where r is the confidence limit of the simulated data, σᵢ is the standard deviation of the simulated data at rank i, and n is the number of replicates of the empirical sample.
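A minimal sketch of this interval construction follows. It is illustrative only: the value r = 1.96 (an approximate 95% normal quantile) and the toy abundance model used to drive it are assumptions, not taken from the source:

```python
import math
import random

def random_fraction(n, rng):
    """Toy abundance model used only to demonstrate the interval formula."""
    niches = [1.0]
    for _ in range(n - 1):
        i = rng.randrange(len(niches))
        piece = niches[i] * rng.random()
        niches[i] -= piece
        niches.append(piece)
    return sorted(niches, reverse=True)

def rank_intervals(model, n_species, n_reps=500, r=1.96, seed=0):
    """R(x_i) = mu_i +/- r*sigma_i/sqrt(n): simulate the model n_reps times,
    then put confidence limits around the mean abundance at each rank."""
    rng = random.Random(seed)
    sims = [model(n_species, rng) for _ in range(n_reps)]
    intervals = []
    for i in range(n_species):
        vals = [s[i] for s in sims]
        mu = sum(vals) / n_reps
        sigma = math.sqrt(sum((v - mu) ** 2 for v in vals) / (n_reps - 1))
        half = r * sigma / math.sqrt(n_reps)
        intervals.append((mu - half, mu + half))
    return intervals

ci = rank_intervals(random_fraction, n_species=10)
```

An empirical rank-abundance vector can then be checked rank by rank against these limits to judge whether the model is consistent with the data.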
^ MacArthur, R. H. (1957). On the relative abundance of bird species. Proc. Natl. Acad. Sci. 43, 293-295.
^ MacArthur, R. H. MacArthur, J. W. (1961). On bird species diversity. Ecology. 42, 594-598.
^ a b Sugihara, G. (1980). Minimal community structure: an explanation of species abundance patterns. Am. Nat. 116. 770-787.
^ a b c Tokeshi, M. (1990). Niche apportionment or random assortment: species abundance patterns revisited. Journal of Animal Ecology. 59, 1129-1146.
^ a b c d Tokeshi, M. (1993). Species abundance patterns and community structure. Adv. Ecol. Res. 24, 112-186.
^ a b Tokeshi, M. (1996). Power Fraction: a new explanation for species abundance patterns in species-rich assemblages. Oikos. 75, 543-550.
^ Gaston, K. J. Blackburn, T. M. (2000). Macroecology, Oxford, UK: Blackwell Science.
^ Taper, M. L. Marquet, P. A. (1996). How do species really divide resources? American Naturalist. 147, 1072-1086.
^ May, R. M. (1975). Patterns of species abundance and diversity. In Ecology and Evolution of Communities. 81-120, Cambridge, MA: Harvard University Press.
^ a b Higgins, C. L. Strauss, R. E. (2008). Modeling stream fish assemblages with niche apportionment models: patterns. processes, and scale dependence. Transactions of the American Fisheries Society. 137, 696-706.
^ Preston, F. W. (1948). The commonness and rarity of species. Ecology. 29, 254-283.
^ a b Magurran, A. E. (2004). Measuring Biological Diversity. Oxford, UK: Blackwell Science.
^ a b Fesl, C. (2002). Niche oriented species abundance models: different approaches and their application to larval chironomid (Diptera) assemblages in a large river. J. Anim. Ecol. 71, 1085-1094.
^ a b Drozd, P. Novotny, V. (2000). Power Niche: Niche Division models for community analysis. Version 1. Manual and program published at www.entu.cas/png/PowerNiche
^ a b Etienne, R. S. Olff, H. (2005). Confronting different models of community structure to species-abundance data: a Bayesian model comparison. Ecology Letters. 8, 493-504.
|
51E15 Affine and projective planes
51E05 General block designs
51E21 Blocking sets, ovals, k-arcs
51E25 Other finite nonlinear geometries
[Abstract recovered only as formula fragments; the surrounding prose was lost in extraction. The entry concerns the design 𝒟(𝒜,2) associated with a finite affine plane 𝒜=(𝒫,ℒ) of order n>2, a 2-(n², 2n, 2n−1) design; the cases n=3, n=4 (where 𝒟(𝒜,2) is related to AG₃(4,2)) and n>4 are treated.]
4-dimensionale Translationsebenen.
A characterization of the Hall planes by planar and nonplanar involutions.
Johnson, N.L. (1989)
A combinatorial approach to the known projective planes of order nine
František Knoflíček (1995)
A combinatorial characterization of finite projective planes using strongly canonical forms of incidence matrices is presented. The corresponding constructions are applied to known projective planes of order 9. As a result a new description of the Hughes plane of order nine is obtained.
A family of complete arcs in finite projective planes
Barbu C. Kestenband (1989)
A Family of Not (V, l)-Transitivity Projective Planes of Order q3, q ... 1 (mod 3) and q ... 2.
Raúl Figueroa (1982)
A flag transitive plane of order 49 and its translation complement.
Narayana Rao, M.L., Satyanarayana, K., Arjuna Rao, K.M. (1991)
A geometrical characterization of the projective plane of order 4
Andrzej Lewandowski, Hanna Makowiecka (1979)
A note on the derived semifield planes of order 16.
N.L. Johnson (1978)
A polynomial characterization of affine spaces over GF(3)
A study of full collineation group of the projective plane of order 26.
Cigić, V. (1986)
A unified construction of finite geometries associated with q-clans in characteristic 2.
Cherowitzo, William E., O'Keefe, Christine M., Penttila, Tim (2003)
A universal construction for projective Hjelmslev planes of level n
G. Hanssens, H. Van Maldeghem (1989)
Alcuni risultati sui \left\{q\left(n-1\right)+1;n\right\}-archi di un piano proiettivo finito
A. Basile, P. Brutti (1971)
Stoichev, Stoicho (2007)
The paper was presented at the International Conference Pioneers of Bulgarian Mathematics, dedicated to Nikola Obreshkoff and Lubomir Tschakaloff, Sofia, July 2006. Two heuristic algorithms (M65 and M52) for finding, respectively, unitals and maximal arcs in projective planes of order 16 are described. Exact algorithms based on exhaustive search are impractical because of the combinatorial explosion (the huge number of combinations to be checked). Algorithms M65 and M52 use unions of orbits...
Jungnickel, Dieter, de Resmini, Marialuisa J. (2002)
Automorphismengruppen von Translationsebenen, die gewisse verallgemeinerte Elationen besitzen
Wolfram Büttner (1985)
Baer-Elation planes
Vikram Jha, Norman L. Johnson (1987)
Blocking sets of 16 points in projective planes of order 10 - III
Jürgen Bierbrauer (1985)
Blocking sets of maximal type in finite projective planes
|
90C06 Large-scale problems
90C33 Complementarity and equilibrium problems and variational inequalities (finite dimensions)
90C46 Optimality conditions, duality
90C47 Minimax problems
90C59 Approximation methods and heuristics
90C70 Fuzzy programming
A Continuous Conditional Gradient Method
Milojica Jaćimović, Andjelija Geary (1999)
A Dual Algorithm for Convex-Concave Data Smoothing by Cubic C2-Splines.
Jochen W. Schmidt, Isa Scholz (1990)
A formulation of combinatorial auction via reverse convex programming.
Schellhorn, Henry (2005)
A general maximum principle for optimization problems
A global method for some class of optimization and control problems.
Enkhbat, R. (2000)
A globally convergent non-interior point algorithm with full Newton step for second-order cone programming
Liang Fang, Guoping He, Li Sun (2009)
A non-interior point algorithm based on projection for second-order cone programming problems is proposed and analyzed. The main idea of the algorithm is that we cast the complementary equation in the primal-dual optimality conditions as a projection equation. By using this reformulation, we only need to solve a system of linear equations with the same coefficient matrix and compute two simple projections at each iteration, without performing any line search. This algorithm can start from an arbitrary...
A Gradient-Type Method for the Equilibrium Programming Problem With Coupled Constraints
Anatoly Antipin (2000)
A linear acceleration row action method for projecting onto subspaces.
Appleby, Glenn, Smolarski, Dennis C. (2005)
A localization problem in geometry and complexity of discrete programming
A new concept of separation
Reinhard Nehse (1981)
A new non-interior continuation method for {P}_{0}-NCP based on an SSPM-function
Liang Fang (2011)
In this paper, we consider a new non-interior continuation method for the solution of the nonlinear complementarity problem with {P}_{0}-function ({P}_{0}-NCP). The proposed algorithm is based on a smoothing symmetric perturbed minimum function (SSPM-function), and one only needs to solve one system of linear equations and to perform only one Armijo-type line search at each iteration. The method is proved to possess global and local convergence under weaker conditions. Preliminary numerical results indicate that...
A new one-step smoothing newton method for second-order cone programming
Jingyong Tang, Guoping He, Li Dong, Liang Fang (2012)
In this paper, we present a new one-step smoothing Newton method for solving the second-order cone programming (SOCP). Based on a new smoothing function of the well-known Fischer-Burmeister function, the SOCP is approximated by a family of parameterized smooth equations. Our algorithm solves only one system of linear equations and performs only one Armijo-type line search at each iteration. It can start from an arbitrary initial point and does not require the iterative points to be in the sets...
A new simultaneous subgradient projection algorithm for solving a multiple-sets split feasibility problem
Yazheng Dang, Yan Gao (2014)
In this paper, we present a simultaneous subgradient algorithm for solving the multiple-sets split feasibility problem. The algorithm employs two extrapolated factors in each iteration, which not only improves feasibility by eliminating the need to compute the Lipschitz constant, but also enhances flexibility due to applying variable step size. The convergence of the algorithm is proved under suitable conditions. Numerical results illustrate that the new algorithm has better convergence than the...
A Parallel Projection Method for Solving Generalized Linear Least-Squares Problems.
Gang Lou, Shih-Ping Han (1988)
Y. Q. Bai, F. Y. Wang, X. W. Luo (2010)
In this paper we propose a primal-dual interior-point algorithm for the convex quadratic semidefinite optimization problem. The search direction of the algorithm is defined in terms of a matrix function and the iterates are generated by full Newton steps. Furthermore, we derive the iteration bound for the algorithm with the small-update method, namely O(\sqrt{n}\,\log\frac{n}{\epsilon }), which is the best-known bound so far.
A porosity result in convex minimization.
Howlett, P.G., Zaslavski, A.J. (2005)
|
Limits And Derivatives, Popular Questions: CBSE Class 11-science ENGLISH, English Grammar - Meritnation
derivative using first principle: sin(√x) (sin under root x)
The Snitch asked a question
Aditya Shreenayak asked a question
lim x→0 [cos x − cos 5x] / [cos 2x − cos 6x]
Sinupatra & 2 others asked a question
The derivative of (1 + cos 2x)² using the chain rule.
Kartik Mahajan asked a question
lim x→3 (x³ − 27)/(√(x² + 7) − 4)
Find the derivative of root tanx from first principles.
Differentiate : y=1/4 tan4x
Varnika asked a question
lim x→0 [1 − cos x · √(cos 2x)] / x²
Sahil Jain asked a question
lim x→0 (1 − cos x · √(cos 2x))/x²
Megha Bajaj asked a question
lim x→0 (x − sin x)/(1 − cos x)
Viral Patel & 1 other asked a question
lim x→0 (1 − cos 4x)/x² ?
find derivative of log tan x using first principle.
Shresth Gautam asked a question
lim [x], x tends to 2
lim [x], x tends to 5/2
lim [x], x tends to 1
Find the derivative of tan (root x) from first principle.
derive using first principle log(sinx) derivative.
Aman Gaur asked a question
if log 5 base 4 = a and log 6 base 5 = b then log 2 base 3 will be ( a) 1/ (2a+1) (b) 1/(2b+1) (c) 2ab+1 (d) 1/(2ab-1)
Shaheen . & 1 other asked a question
pls answer 533
prashant.sangson... asked a question
Harsh Vardhan Singh & 1 other asked a question
Alka Jones & 1 other asked a question
find the derivative using ab-initio method
(a)1/ax+b , x is not equal -b/a
lim x→0
(1 − cos x · cos 2x · cos 3x)/sin² 2x
Kaavya Gnanasekaran asked a question
differentiation of log x power Sin inverse of x
Find dy/dx. If (x2 + y2)2 = xy
Tanish asked a question
derivative of root of (sec2x-1)/(sec2x+1)
discuss the continuity of function f(x)= |x|+|x-1| in interval [-1,2]
Rishabh Sharma asked a question
Millan asked a question
What is the value of infinity/infinity {infinity divided by infinity}
Amogh Koushik R & 1 other asked a question
lim (cos 2x − 1)/(cos x − 1)
Pls give detailed answer
5) From a group of 8 children, 3 girls and 5 boys, 3 children are selected at random. Calculate the probabilities that the selected group contain
(ii) only one girl
(iii) only one particular girl (iv) at least 1 girl (v) more girls than boys
romensingh3... asked a question
find the derivative of cosx by first principle of derivative
Navayug Sriram asked a question
Harshit Pant asked a question
Differentiate (2x+3)/(5x-4) by first principle.
Nisha Kar asked a question
find derivative of sinX+cosX from first principle
Yashaswini Gubbi asked a question
how is log_a x = log_e x · log_a e?
Angel Khan asked a question
Rishi Singh asked a question
Explain in brief the Rationalisation Method and Series Expansion Method please. . .
Derivative of (1+y^2)secx-ycotx+1=x^2
Deekshitha asked a question
Limx->0 sin px-tan 3x=4 exist, then find p
find derivative of log(cosx) w.r.t. 'x' using 1st principle
salary123... asked a question
Swastik Dhamija asked a question
Help me solve this limits question :
lim x→a (x^12 - a^12)/(x-a)
N.varadharasu asked a question
lim x→π/6 (√3 sin x − cos x)/(x − π/6). Pls ans soon, tom exam
Sakshi Naik asked a question
13. Q. The function f(x) = 2tan³x − 3tan²x + 12 tan x + 3, x \in \left(0, \frac{\pi }{2}\right), is
(A) increasing (B) decreasing
(C) increasing in \left(0, \pi /4\right) and decreasing in \left(\pi /4, \pi /2\right)
Ans the given ques 8..
Angad asked a question
evaluate lim x→0 sin 3x / (3^x − 1)
\underset{\mathrm{n}\to \infty }{\mathrm{lim}}{\mathrm{n}}^{2} \left[\sqrt{\left(1-\mathrm{cos}\frac{1}{\mathrm{n}}\right)\sqrt{\left(1-\mathrm{cos}\frac{1}{\mathrm{n}}\right)\sqrt{\left(1-\mathrm{cos}\frac{1}{\mathrm{n}}\right)}}}....\infty \right]
Cynthia asked a question
lim x→0 (cos 3x − cos 5x)/x²
Please solve this problem
Aadhavan asked a question
limx tends to a ((x+2)^5/3-(a+2)^5/3)/x-2
examine the continuity
f(x) =logx-log7/x-7 for x not equal to 7
=7 for x= 7 at x=7
Differentiate the following by using first principle -
i). tan (2x + 1)
ii). √ tan x
iii). x2 sin x
Gauri Joglekar asked a question
lim [(x + y) sec(x + y) − x sec x] / y
lim x→a (sinx - sina)/ (x-a)
Limit (x → 0) (tan 3x − 2x)/(3x − sin²x)
Sarjita Saha asked a question
in how many ways is it possible to make a selection by taking any number of all 20 fruits namely 9 mangoes 7 oranges and 4 apples?
differentiate x2 cosx from first principle..
Srishti Mittal asked a question
Derive the formula for Curved surface area of a cone using integration. I know it is not in class 11 but I need to know it urgently for my coaching class tomorrow.. The full working would be appreciated.. thank you
Sunny Sinha asked a question
Book Mathematics Class XI R.D.Sharma
Exercise 29.9, Qno- 29
Anu Kamat asked a question
Evaluate lim x→5 (2x² + 9x − 5)/(x + 5)
Funmeet Singh Mehra asked a question
Hanaah asked a question
evaluate lim x→3 (x³ − 27)/(x − 3)
Piklu Bhattacharjee asked a question
Q. lim x→π/2
(1 + cos 2x)/(π − 2x)²
Ashish Prusty asked a question
Hafsah asked a question
Limit x tends to 0: (x² − 5x + 6)/(x² + 2x − 8)
Savita Bedha asked a question
Differentiate using first principle: e^√(tan x)
|
Lighten - Maple Help
Home : Support : Online Help : Graphics : Packages : ColorTools : Color and Color Object Manipulation Commands : Lighten
create a new color with higher lightness
Lighten(color, factor)
(optional) positive number
The Lighten command creates a new Color structure that has the same hue and saturation as the input color (in the hardware-dependent HSV colorspace) but whose value (brightness) is multiplied by a factor of 1.2. In other words, the new Color structure is lighter by a factor of 1.2, if possible within the bounds of the HSV colorspace.
If the optional parameter factor is given the output is lightened by that factor if it is greater than 1, and the reciprocal otherwise. Use the Darken command to decrease the lightness.
A more perceptually accurate lightening can be achieved using the Adjust command.
\mathrm{with}\left(\mathrm{ColorTools}\right):
\mathrm{Lighten}\left("DarkRed"\right)
\textcolor[rgb]{0,0,1}{〈}\colorbox[rgb]{0.654901960784314,0,0}{$\textcolor[rgb]{1,1,1}{RGB : 0.654 0 0}$}\textcolor[rgb]{0,0,1}{〉}
\mathrm{Lighten}\left("DarkRed",1.5\right)
\textcolor[rgb]{0,0,1}{〈}\colorbox[rgb]{0.819607843137255,0,0}{$\textcolor[rgb]{1,1,1}{RGB : 0.818 0 0}$}\textcolor[rgb]{0,0,1}{〉}
\mathrm{Lighten}\left("DarkRed",\frac{1}{1.5}\right)
\textcolor[rgb]{0,0,1}{〈}\colorbox[rgb]{0.819607843137255,0,0}{$\textcolor[rgb]{1,1,1}{RGB : 0.818 0 0}$}\textcolor[rgb]{0,0,1}{〉}
\mathrm{Lighten}\left("Black"\right)
\textcolor[rgb]{0,0,1}{〈}\colorbox[rgb]{0.00392156862745098,0.00392156862745098,0.00392156862745098}{$\textcolor[rgb]{1,1,1}{RGB : 0.00469 0.00469 0.00469}$}\textcolor[rgb]{0,0,1}{〉}
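Outside of Maple, the behavior documented above can be sketched with Python's colorsys module. This is an illustration of the described semantics, not Maple's implementation (the DarkRed value 0.545 ≈ 139/255 is an assumption taken from the standard color name):

```python
import colorsys

def lighten(rgb, factor=1.2):
    """Sketch of the documented behavior: keep hue and saturation,
    scale the HSV value by `factor`, clamped to the colorspace bound."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, s, min(1.0, v * factor))

dark_red = (0.545, 0.0, 0.0)  # "DarkRed"
print(tuple(round(c, 4) for c in lighten(dark_red)))       # (0.654, 0.0, 0.0)
print(tuple(round(c, 4) for c in lighten(dark_red, 1.5)))  # (0.8175, 0.0, 0.0)
```

The clamping step is why Lighten("Black") in the example above barely changes: a near-zero value stays near zero no matter the factor, and a large value saturates at the top of the HSV range.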
The ColorTools[Lighten] command was introduced in Maple 16.
The ColorTools[Lighten] command was updated in Maple 2017.
|
Roland Quême (1998)
[Abstract fragments; the recoverable text reads:] …ℚ, discovered by Euclid, three centuries B.C.!). In the first months of 1997, we found more than 1200 new euclidean number fields of degree 4, 5 and 6 with a computer algorithm involving classical lattice properties of the embedding of...
S. C. Coutinho (2007)
We present a constructive proof of the fact that the set of algebraic Pfaff equations without algebraic solutions over the complex projective plane is dense in the set of all algebraic Pfaff equations of a given degree.
A continued fraction proof of Ford's theorem on complex rational approximations.
Richard B. Lakein (1975)
A differential criterion for complete intersections.
Bart De Smit (1997)
A generalization of a result on integers in metacyclic extensions
Let p be an odd prime and let c be an integer such that c>1 and c divides p-1. Let G be a metacyclic group of order pc and let k be a field such that pc is prime to the characteristic of k. Assume that k contains a primitive pc-th root of unity. We first characterize the normal extensions L/k with Galois group isomorphic to G when p and c satisfy a certain condition. Then we apply our characterization to the case in which k is an algebraic number field with ring of integers ℴ, and, assuming some...
A generalization of a theorem of Euler for regular chains of complex quadratic irrationalities
Norman Richert (1990)
A generalization of Dirichlet's unit theorem
Paul Fili, Zachary Miner (2014)
We generalize Dirichlet's S-unit theorem from the usual group of S-units of a number field K to the infinite rank group of all algebraic numbers having nontrivial valuations only on places lying over S. Specifically, we demonstrate that the group of algebraic S-units modulo torsion is a ℚ-vector space which, when normed by the Weil height, spans a hyperplane determined by the product formula, and that the elements of this vector space which are linearly independent over ℚ retain their linear independence...
A generalization of Voronoï’s Theorem to algebraic lattices
Kenji Okuda, Syouji Yano (2010)
Let K be an algebraic number field and {𝒪}_{K} the ring of integers of K. In this paper, we prove an analogue of Voronoï’s theorem for {𝒪}_{K}-lattices and the finiteness of the number of similar isometry classes of perfect {𝒪}_{K}-lattices.
A note on heights in certain infinite extensions of ℚ
Enrico Bombieri, Umberto Zannier (2001)
We study the behaviour of the absolute Weil height of algebraic numbers in certain infinite extensions of ℚ. In particular, we obtain a Northcott type property for infinite abelian extensions of finite exponent and also a Bogomolov type property for certain fields which are a p-adic analog of totally real fields. Moreover, we obtain a non-archimedean analog of a uniform distribution theorem of Bilu in the archimedean case.
A note on normal and power bases
A proof of quintic reciprocity using the arithmetic of y² = x⁵ + 1/4
A propos de l'ordre associé à l'anneau des entiers d'une extension, d'après H. Jacobinski
Anne-Marie BERGE (1971/1972)
A pure arithmetical characterization for certain fields with a given class group
J. Kaczorowski (1981)
A remark on arithmetic equivalence and the normset
Jim Coykendall (2000)
1. Introduction. Number fields with the same zeta function are said to be arithmetically equivalent. Arithmetically equivalent fields share much of the same properties; for example, they have the same degrees, discriminants, number of both real and complex valuations, and prime decomposition laws (over ℚ). They also have isomorphic unit groups and determine the same normal closure over ℚ [6]. Strangely enough, it has been shown (for example [4], or more recently [6] and [7]) that this does...
Addendum to the paper "On the product of the conjugates outside the unit circle of an algebraic number"
Additive relations with conjugate algebraic numbers
Artūras Dubickas (2003)
|
Analysis of variance for linear mixed-effects model - MATLAB - MathWorks Benelux
ANOVA for Fixed-Effects in LME Model
Satterthwaite Approximation for Degrees of Freedom
Analysis of variance for linear mixed-effects model
stats = anova(lme) returns the dataset array stats that includes the results of the F-tests for each fixed-effects term in the linear mixed-effects model lme.
stats = anova(lme,Name,Value) also returns the dataset array stats with additional options specified by one or more Name,Value pair arguments.
Method for computing approximate degrees of freedom to use in the F-test, specified as the comma-separated pair consisting of 'DFMethod' and one of the following.
Results of F-tests for fixed-effects terms, returned as a dataset array with the following columns.
Term Name of the fixed effects term
pValue p-value of the test for the term
There is one row for each fixed-effects term. Each term is a continuous variable, a grouping variable, or an interaction between two or more continuous or grouping variables. For each fixed-effects term, anova performs an F-test (marginal test) to determine if all coefficients representing the fixed-effects term are 0. To perform tests for the type III hypothesis, you must use the 'effects' contrasts while fitting the linear mixed-effects model.
'effects' contrasts indicate that the coefficients sum to 0, and fitlme creates two contrast-coded variables in the fixed-effects design matrix, $X$1 and $X$2, where
Shift_Evening=\left\{\begin{array}{c}0,\phantom{\rule{1em}{0ex}}if\phantom{\rule{0.2777777777777778em}{0ex}}Morning\\ 1,\phantom{\rule{1em}{0ex}}if\phantom{\rule{0.2777777777777778em}{0ex}}Evening\\ -1,\phantom{\rule{1em}{0ex}}if\phantom{\rule{0.2777777777777778em}{0ex}}Night\end{array}
Shift_Morning=\left\{\begin{array}{c}1,\phantom{\rule{1em}{0ex}}if\phantom{\rule{0.2777777777777778em}{0ex}}Morning\\ 0,\phantom{\rule{1em}{0ex}}if\phantom{\rule{0.2777777777777778em}{0ex}}Evening\\ -1,\phantom{\rule{1em}{0ex}}if\phantom{\rule{0.2777777777777778em}{0ex}}Night\end{array}.
\begin{array}{l}MorningShift:QCDe{v}_{im}={\beta }_{0}+{\beta }_{2}Shift_Mornin{g}_{i}+{b}_{0m}+{\epsilon }_{im},\phantom{\rule{1em}{0ex}}m=1,2,...,5,\\ EveningShift:QCDe{v}_{im}={\beta }_{0}+{\beta }_{1}Shift_Evenin{g}_{i}+{b}_{0m}+{\epsilon }_{im},\\ NightShift:\phantom{\rule{1em}{0ex}}QCDe{v}_{im}={\beta }_{0}-{\beta }_{1}Shift_Evenin{g}_{i}-{\beta }_{2}Shift_Mornin{g}_{i}+{b}_{0m}+{\epsilon }_{im},\end{array}
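The sum-to-zero ('effects') coding used above can be sketched generically. The helper below is a hypothetical Python illustration of the coding scheme, not part of fitlme (which applies it internally in MATLAB):

```python
def effects_code(levels, reference):
    """Effects (sum-to-zero) coding: one column per non-reference level;
    the reference level gets -1 in every column, so each column sums to 0
    across the levels."""
    cols = [l for l in levels if l != reference]
    def row(level):
        if level == reference:
            return [-1] * len(cols)
        return [1 if level == c else 0 for c in cols]
    return cols, row

cols, row = effects_code(["Evening", "Morning", "Night"], reference="Night")
print(cols)            # ['Evening', 'Morning']
print(row("Morning"))  # [0, 1]  -> Shift_Evening = 0, Shift_Morning = 1
print(row("Night"))    # [-1, -1]
```

Note that the rows reproduce the Shift_Evening and Shift_Morning definitions above, which is what makes each F-test a type III (marginal) test of the whole term.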
Here {b}_{0m} is the random effect for factory m, with variance {\sigma }_{b}^{2}, and \epsilon is the observation error, with variance {\sigma }^{2}. anova performs an F-test to determine if all fixed-effects coefficients are 0. The p-value for the constant term, 0.0021832, is the same as in the coefficient table in the lme display. The p-value of 0.0018721 for Shift measures the combined significance for both coefficients representing Shift.
Fit a linear mixed-effects model, where Fertilizer and Tomato are the fixed-effects variables, and the mean yield varies by the block (soil type) and the plots within blocks (tomato types within soil types) independently. Use the 'effects' contrasts when fitting the data for the type III sum of squares.
Perform an analysis of variance to test for the fixed-effects.
The p-value for the constant term, 5.9086e-30, is the same as in the coefficient table in the lme display. The p-values of 0.00018935, 1.0024e-14, and 0.19804 for Tomato, Fertilizer, and Tomato:Fertilizer represent the combined significance for all tomato coefficients, fertilizer coefficients, and coefficients representing the interaction between tomato and fertilizer, respectively. The p-value of 0.19804 indicates that the interaction between tomato and fertilizer is not significant.
Fit the model using the 'effects' contrasts.
The p-values 0.022863 and 4.1099e-08 indicate significant effects of the initial weights of the subjects and the time factor in the amount of weight lost. The weight loss of subjects who are in program B is significantly different relative to the weight loss of subjects that are in program A. The lower and upper limits of the covariance parameters for the random effects do not include zero, thus they are significant.
Perform an F-test that all fixed-effects coefficients are zero.
The p-values for the constant term, initial weight, and week are the same as in the coefficient table in the previous lme output display. The p-value of 0.014286 for Program represents the combined significance for all program coefficients. Similarly, the p-value for the interaction between program and week (Program:Week) measures the combined significance for all coefficients representing this interaction.
Now, use the Satterthwaite method to compute the degrees of freedom.
The Satterthwaite method produces smaller denominator degrees of freedom and slightly larger p-values.
For each fixed-effects term, anova performs an F-test (marginal test), that all coefficients representing the fixed-effects term are 0. To perform tests for type III hypotheses, you must set the 'DummyVarCoding' name-value pair argument to 'effects' contrasts while fitting your linear mixed-effects model.
|
Portfolio Optimization with Semicontinuous and Cardinality Constraints - MATLAB & Simulink Example - MathWorks Italia
[Only the inline formulas of this example's constraint description survive extraction; the recoverable definitions are:
semicontinuous bounds with binary indicator v_i: lb·v_i ≤ x_i ≤ ub·v_i, so each weight x_i is either 0 or within [lb, ub] (the worked example uses x_i = 0 or x_i ≥ 0.05);
cardinality: MinNumAssets ≤ Σ v_i ≤ MaxNumAssets, summing v_i over all NumAssets assets;
budget: x_i ≥ 0 and sum(x_i) = 1.]
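For intuition about how the cardinality and semicontinuous rules interact, they can be explored with a brute-force sketch. The toy returns, bounds, and the linear objective below are assumptions for illustration; the actual MathWorks example uses the Portfolio object's quadratic mean-variance formulation:

```python
from itertools import combinations

def best_portfolio(returns, lb, ub, min_assets, max_assets):
    """Enumerate which assets are held (the binary v_i), then allocate
    weights greedily for a *linear* objective: give each held asset the
    floor lb, then top up the highest-return assets toward ub."""
    n = len(returns)
    best = (float("-inf"), None)
    for k in range(min_assets, max_assets + 1):
        for subset in combinations(range(n), k):
            if lb * k > 1 or ub * k < 1:        # semicontinuous feasibility
                continue
            weights = {i: lb for i in subset}   # everyone gets the floor
            spare = 1 - lb * k
            for i in sorted(subset, key=lambda i: -returns[i]):
                add = min(ub - lb, spare)       # top up best assets first
                weights[i] += add
                spare -= add
            value = sum(returns[i] * w for i, w in weights.items())
            if value > best[0]:
                best = (value, weights)
    return best

value, w = best_portfolio([0.10, 0.12, 0.08, 0.09], lb=0.3, ub=0.8,
                          min_assets=2, max_assets=2)
print(round(value, 4), w)  # 0.114, holding assets 0 and 1
```

With exactly two assets allowed and weights confined to [0.3, 0.8], the best feasible choice holds the two highest-return assets at weights 0.7 and 0.3; a real solver (e.g. a MILP or the Portfolio object) replaces this enumeration for larger problems.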
|
Similar and Congruent Triangles Warmup Practice Problems Online | Brilliant
Stephanie shines her flashlight into a mirror on the floor and sees the ray of light hit the wall on the other side of the room. As she moves closer to the mirror, holding the light at the same height, how will the ray of light on the wall change?
It moves lower down the wall
It stays in the same spot
It moves higher up the wall
Which of the following triangles is not necessarily congruent to the other two?
\triangle ABC
\triangle DEF
\triangle GHI
Given AB = BC = CD, what is the area of the green region?
Two triangles have the same angle measures of 60^\circ, 80^\circ, \text{ and } 40^\circ. Are the triangles congruent?
On a sunny day, Betty's shadow is 12 feet long and she is 4 feet tall. If Adam's shadow is 3 feet longer than Betty's, how tall is Adam?
|
Get Equation - Maple Help
Home : Support : Online Help : Programming : Maplets : Examples : Get Equation
display a Maplet application that requests an equation
GetEquation(opts)
The GetEquation() calling sequence displays a Maplet application that prompts the user to enter an equation. This equation is returned to the Maple session. If the user enters an algebraic expression, it is equated to zero before it is returned. If the user does not enter an equation, an exception is raised.
The GetEquation sample Maplet worksheet describes how to write a Maplet application that behaves similarly to this routine by using the Maplets[Elements] package.
Specifies the text that prompts the user for an equation. By default, the caption is Enter an equation:.
Specifies the Maplet application title. By default, the title is Get Equation.
\mathrm{with}\left(\mathrm{Maplets}[\mathrm{Examples}]\right):
f≔\mathrm{GetEquation}\left(\right)
\textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{≔}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{y}
g≔\mathrm{GetEquation}\left(\right)
\textcolor[rgb]{0,0,1}{g}\textcolor[rgb]{0,0,1}{≔}\frac{\textcolor[rgb]{0,0,1}{ⅆ}}{\textcolor[rgb]{0,0,1}{ⅆ}\textcolor[rgb]{0,0,1}{x}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)\textcolor[rgb]{0,0,1}{-}{\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\right)}^{\textcolor[rgb]{0,0,1}{2}}
GetEquation Sample Maplet
|