Find the product using suitable identity (9a + 1/6)^2 - Maths - Algebraic Expressions and Identities | Meritnation.com
Q. Find the product using suitable identity
(9a + 1/6)^2
Pls do not refer any link.
\left(9a+\frac{1}{6}\right)^{2}\quad\left[\text{Using identity } (a+b)^{2}=a^{2}+b^{2}+2ab\right]

\left(9a+\frac{1}{6}\right)^{2} = (9a)^{2}+\left(\frac{1}{6}\right)^{2}+2\times 9a\times\frac{1}{6} = 81a^{2}+\frac{1}{36}+3a
Kriti Singh answered this
Pls verify if I have written the right question
Thanks Kirti :D
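As a quick sanity check (not part of the original answer), the expansion can be verified with exact rational arithmetic; the helper names below are purely illustrative:

```python
from fractions import Fraction

def lhs(a):
    # (9a + 1/6)^2 evaluated directly
    return (9 * a + Fraction(1, 6)) ** 2

def rhs(a):
    # Expansion via the identity (x + y)^2 = x^2 + 2xy + y^2:
    # (9a)^2 = 81a^2, 2*(9a)*(1/6) = 3a, (1/6)^2 = 1/36
    return 81 * a**2 + 3 * a + Fraction(1, 36)

# The two sides agree for every a; spot-check a few exact rational values.
for a in [Fraction(0), Fraction(1), Fraction(-2, 3), Fraction(7, 5)]:
    assert lhs(a) == rhs(a)
```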
|
Downlink deprecoding onto transmission layers - MATLAB lteDLDeprecode - MathWorks Australia
lteDLDeprecode
Perform Deprecoding on Identity Matrix
Downlink deprecoding onto transmission layers
out = lteDLDeprecode(in,nu,txscheme,codebook)
out = lteDLDeprecode(enb,chs,in)
out = lteDLDeprecode(in,nu,txscheme,codebook) returns a symbol matrix by performing deprecoding using matrix pseudo-inversion to undo processing described in TS 36.211 [1], Section 6.3.4. The overall operation of the deprecoder is to transpose what is defined in the specification.
out = lteDLDeprecode(enb,chs,in) performs deprecoding of the precoded symbol matrix, in, according to cell-wide settings enb and chs (channel transmission configurations).
Deprecode a precoded identity matrix having codebook index 1 for three layers and four antennas.
in = lteDLPrecode(eye(3),4,'SpatialMux',1);
out = lteDLDeprecode(in,3,'SpatialMux',1)
in — Precoded input symbols
Precoded input symbols, specified as a numeric matrix. The size of the matrix is N-by-P, where P is the number of transmission antennas and N is the number of symbols per antenna. Generate the matrix by extracting a PDSCH using the ltePDSCHIndices function on a received resource array. You can perform a similar extraction using the index generator for any other downlink channel that utilizes precoding.
Number of layers, specified as an integer from 1 to 8. The maximum number of layers depends on the transmission scheme, txscheme.
Data Types: char | single
Codebook index to select the precoding matrix, specified as an integer from 0 to 15. This input is ignored for the 'Port0', 'TxDiversity', and 'CDD' transmission schemes. Find the precoding matrix corresponding to a particular codebook index in TS 36.211 [1], Section 6.3.4. In the case of 'TxDiversity' and nu=1, the function falls back to single port processing.
NDLRB — number of downlink resource blocks, {N}_{\text{RB}}^{\text{DL}}
Channel-specific transmission configuration, specified as a structure that can contain the following parameter fields:
The following parameters are applicable when TxScheme is set to 'SpatialMux' or 'MultiUser'. Include either CodebookIdx field or both PMISet and PRBSet fields. For more information, see Algorithms.
CodebookIdx Required
The fields PMISet and PRBSet are used to determine the frequency-domain position occupied by each precoded symbol in out. This step is performed to apply the correct subband precoder when multiple PMI mode is used. Alternatively, you can provide the CodebookIdx parameter field. CodebookIdx is a scalar specifying the codebook index to use across the entire bandwidth. Therefore, the CodebookIdx field does not support subband precoding. The relationship between PMI values and codebook index is given in TS 36.213 [2], Section 7.2.4.
out — Deprecoded downlink output
Deprecoded downlink output, returned as an NSYM-by-v matrix containing v layers, with NSYM symbols in each layer. The symbols for layers and antennas lie in columns rather than in rows.
Precoding involves multiplying a P-by-v precoding matrix, F, by a v-by-NSYM matrix, representing NSYM symbols on each of v transmission layers. This multiplication yields a P-by-NSYM matrix, representing NSYM precoded symbols on each of P antenna ports. Depending on the transmission scheme, the precoding matrix can be composed of multiple matrices multiplied together. But the size of the product, F, is always P-by-v.
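The shape bookkeeping above can be sketched in a few lines of NumPy. This is an illustrative example with a random stand-in precoder, not an actual codebook matrix from TS 36.211:

```python
import numpy as np

rng = np.random.default_rng(0)

P, v, NSYM = 4, 2, 10  # antenna ports, layers, symbols per layer

# Stand-in P-by-v precoding matrix F (random, full column rank —
# NOT an LTE codebook entry, just for shape/recovery illustration).
F = rng.standard_normal((P, v)) + 1j * rng.standard_normal((P, v))

layers = rng.standard_normal((v, NSYM)) + 1j * rng.standard_normal((v, NSYM))
precoded = F @ layers            # P-by-NSYM precoded symbols

# Deprecoding: apply the pseudo-inverse of F to recover the v layers.
recovered = np.linalg.pinv(F) @ precoded
assert np.allclose(recovered, layers)
```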
A 2P-by-2v precoding matrix, F, is multiplied by a 2v-by-NSYM matrix, formed by splitting the real and imaginary components of a v-by-NSYM matrix of symbols on layers. This multiplication yields a 2P-by-NSYM matrix of precoded symbols, which is then reshaped into a P-by-2NSYM matrix for transmission. Since v is P for the 'TxDiversity' transmission scheme, F is of size 2P-by-2P, rather than 2P-by-2v.
When v is P in 'CDD', 'SpatialMux', and 'MultiUser' transmission schemes, and when P and v are 2 in the 'TxDiversity' transmission scheme,
The precoding matrix, F, is square. Its size is 2P-by-2P for the transmit diversity scheme and P-by-P otherwise. In this case, the deprecoder takes the matrix inversion of the precoding matrix to yield the deprecoding matrix F⁻¹. The matrix inversion is computed using LU decomposition with partial pivoting (row exchange):
Perform the LU decomposition PxF = LU.
Solve LY = I using forward substitution.
Solve UX = Y using back substitution.
F⁻¹ = XPx.
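The inversion steps above can be sketched in NumPy. This is an illustrative reimplementation, not MathWorks code; the function names are my own:

```python
import numpy as np

def lu_pp(F):
    """LU decomposition with partial pivoting (row exchange): Px @ F = L @ U."""
    n = F.shape[0]
    U = F.astype(complex).copy()
    L = np.eye(n, dtype=complex)
    Px = np.eye(n)
    for k in range(n - 1):
        p = k + np.argmax(abs(U[k:, k]))        # pivot row
        U[[k, p], k:] = U[[p, k], k:]
        L[[k, p], :k] = L[[p, k], :k]
        Px[[k, p]] = Px[[p, k]]
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return Px, L, U

def inv_via_lu(F):
    n = F.shape[0]
    Px, L, U = lu_pp(F)
    # Solve L Y = I by forward substitution, column by column.
    Y = np.zeros((n, n), dtype=complex)
    for j in range(n):
        for i in range(n):
            Y[i, j] = ((i == j) - L[i, :i] @ Y[:i, j]) / L[i, i]
    # Solve U X = Y by back substitution.
    X = np.zeros((n, n), dtype=complex)
    for j in range(n):
        for i in range(n - 1, -1, -1):
            X[i, j] = (Y[i, j] - U[i, i + 1:] @ X[i + 1:, j]) / U[i, i]
    return X @ Px                                # F^{-1} = X Px

F = np.array([[2.0, 1.0], [1.0, 3.0]])
assert np.allclose(inv_via_lu(F) @ F, np.eye(2))
```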
The degenerate case of the 'Port0' transmission scheme falls into this category, with P = v = 1.
For the 'CDD', 'SpatialMux', and 'MultiUser' transmission schemes,
The deprecoding is then performed by multiplying F⁻¹ by the transpose of the input symbols (in is of size NSYM-by-P, so the transpose is a P-by-NSYM matrix). This multiplication recovers the v-by-NSYM (equal to P-by-NSYM) matrix of transmission layers.
The deprecoding is performed by multiplying F⁻¹ by the transpose of the input symbols (in is of size 2NSYM-by-P, so the transpose is a P-by-2NSYM matrix), having first been reshaped into a 2P-by-NSYM matrix. This multiplication yields a 2v-by-NSYM matrix, which is then split into two v-by-NSYM matrices. To recover the v-by-NSYM matrix of transmission layers, multiply the second matrix by j and add the two matrices together (thus recombining the real and imaginary parts).
For the other cases, specifically 'CDD', 'SpatialMux', and 'MultiUser' transmission schemes with v ≠ P and the 'TxDiversity' transmission scheme with P = 4,
The precoding matrix F is not square. Instead, the matrix is rectangular with size P-by-v, except in the case of the 'TxDiversity' transmission scheme with P = 4, where it is of size 2P-by-2v (16-by-8). The number of rows is always greater than the number of columns, so the matrix F is of size m-by-n with m > n.
In this case, the deprecoder takes the matrix pseudo-inversion of the precoding matrix to yield the deprecoding matrix F +. The matrix pseudo-inversion is computed as follows.
Perform the LU decomposition PxF = LU.
Remove the last m − n rows of U to give \overline{U}.
Remove the last m − n columns of L to give \overline{L}.
Compute X={\overline{U}}^{H}{\left(\overline{U}{\overline{U}}^{H}\right)}^{-1}{\left({\overline{L}}^{H}\overline{L}\right)}^{-1}{\overline{L}}^{H} (the matrix inversions are carried out as in the previous steps).
F⁺ = XPx.
The application of the deprecoding matrix F + is the same process as described for deprecoding the square matrix case with F + in place of F –1.
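The pseudo-inversion steps can likewise be sketched in NumPy. Again this is an illustrative reimplementation (function names are my own), checked against numpy.linalg.pinv:

```python
import numpy as np

def lu_pp(F):
    """Partial-pivoting LU of an m-by-n matrix: Px @ F = L @ U."""
    m, n = F.shape
    U = F.astype(float).copy()
    L = np.eye(m)
    Px = np.eye(m)
    for k in range(min(m, n)):
        p = k + np.argmax(abs(U[k:, k]))
        U[[k, p], k:] = U[[p, k], k:]
        L[[k, p], :k] = L[[p, k], :k]
        Px[[k, p]] = Px[[p, k]]
        for i in range(k + 1, m):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return Px, L, U

def pinv_via_lu(F):
    """Pseudo-inverse of a tall, full-column-rank F via the trimmed-LU formula."""
    m, n = F.shape
    Px, L, U = lu_pp(F)
    Ubar = U[:n, :]          # drop the last m - n (zero) rows of U
    Lbar = L[:, :n]          # drop the last m - n columns of L
    X = (Ubar.conj().T
         @ np.linalg.inv(Ubar @ Ubar.conj().T)
         @ np.linalg.inv(Lbar.conj().T @ Lbar)
         @ Lbar.conj().T)
    return X @ Px            # F^+ = X Px

F = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.5], [0.5, 1.0]])
Fplus = pinv_via_lu(F)
assert np.allclose(Fplus @ F, np.eye(2))           # F^+ is a left inverse
assert np.allclose(Fplus, np.linalg.pinv(F))       # and matches the SVD pinv
```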
This method of pseudo-inversion is based on Linear Algebra and Its Applications [3], Chapter 3.4, Equation (56).
[3] Strang, Gilbert. Linear Algebra and Its Applications. 2nd Edition. Academic Press, 1980.
lteDLPrecode | lteLayerDemap
|
NthWord - Maple Help
generate the nth word on a given alphabet
NthWord( alphabet, n )
string; alphabet to use
non-negative integer; nth word to generate
The NthWord(alphabet, n) command generates the nth word on the given ordered alphabet, which consists of the characters in the string alphabet. The string alphabet must not contain repeated characters. Words are enumerated in shortlex order.
The 0th word on any alphabet is the empty string "".
\mathrm{with}\left(\mathrm{StringTools}\right):
\mathrm{seq}\left(\mathrm{NthWord}\left("abc",i\right),i=1..20\right)
\textcolor[rgb]{0,0,1}{"a"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"b"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"c"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"aa"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"ab"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"ac"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"ba"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"bb"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"bc"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"ca"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"cb"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"cc"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"aaa"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"aab"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"aac"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"aba"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"abb"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"abc"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"aca"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"acb"}
\mathrm{seq}\left(\mathrm{NthWord}\left("abcde",i\right),i=1..20\right)
\textcolor[rgb]{0,0,1}{"a"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"b"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"c"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"d"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"e"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"aa"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"ab"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"ac"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"ad"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"ae"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"ba"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"bb"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"bc"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"bd"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"be"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"ca"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"cb"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"cc"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"cd"}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{"ce"}
\mathrm{NthWord}\left("abc",{2}^{31}-1\right)
\textcolor[rgb]{0,0,1}{"abaababbbaacccacacca"}
\mathrm{NthWord}\left("abc",5\right)
\textcolor[rgb]{0,0,1}{"ab"}
\mathrm{NthWord}\left("bca",5\right)
\textcolor[rgb]{0,0,1}{"bc"}
StringTools[Generate]
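The shortlex enumeration above is equivalent to bijective base-k numeration, which a short Python sketch (not Maple's implementation) can mimic:

```python
def nth_word(alphabet: str, n: int) -> str:
    """Return the n-th word over `alphabet` in shortlex order.

    Word 0 is the empty string "", words 1..k are the single characters,
    then all two-character words, and so on (bijective base-k numeration).
    """
    k = len(alphabet)
    assert len(set(alphabet)) == k, "alphabet must not contain repeated characters"
    chars = []
    while n > 0:
        n -= 1                      # shift makes the numeration bijective
        chars.append(alphabet[n % k])
        n //= k
    return "".join(reversed(chars))

# Matches the Maple examples above.
assert [nth_word("abc", i) for i in range(1, 7)] == ["a", "b", "c", "aa", "ab", "ac"]
assert nth_word("abc", 0) == ""
assert nth_word("abc", 5) == "ab"
assert nth_word("bca", 5) == "bc"
```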
|
Optimal 5R parallel leg design for quadruped robot gait cycle | JVE Journals
Mangesh D. Ratolikar1 , Prasanth Kumar R2
Copyright © 2020 Mangesh D. Ratolikar, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
This paper presents the design of optimal dimensions for a two-degrees-of-freedom parallel mechanism used in a quadruped for walking applications. Serial linkages or open-link mechanisms have low stiffness and poor dynamic performance; thus, parallel mechanisms were developed. Many researchers have used a symmetrical parallel leg for quadruped walking, but the force requirements differ between the forward and return strokes, so an unsymmetrical parallel leg may be optimal. Using a genetic algorithm, optimal link length values are obtained and the corresponding peak torque is also found.
Keywords: parallel manipulator, quadruped, optimization.
Robots are designed based on the tasks they are supposed to carry out. They are expected to be robust, dextrous [1], and reliable. Based on the tasks, designers have two options for robot design: a closed-chain or an open-chain mechanism.
In the case of open-chain mechanisms, the stiffness is low [2], precision positioning is slightly compromised, and a large amount of actuator torque is wasted, which decreases the dynamic performance; however, their workspace reach is high. To overcome the disadvantages of serial manipulators, the parallel configuration was developed [3-7]. A few researchers have worked on parallel mechanisms [8] to increase their workspace [9-12].
Parallel manipulators have higher payload carrying capacity, accuracy, and stiffness. However, parallel mechanisms have low reach or a small workspace, and in addition there are a few occurrences of singularities. An instantaneous change in degree of freedom occurs whenever a singular configuration is attained, which could be catastrophic in some cases. Thus, the study of singular configurations is essential [13]. Researchers have done extensive studies on singularity analysis [6, 14-17].
There are two types of singularities commonly observed in parallel mechanisms: Type 1 and Type 2 [4]. In Type 1, the end-effector loses one or several degrees of freedom. In Type 2, the actuator cannot resist a force applied to the end-effector. Conventional solutions were presented in [11, 18-21]. However, the solutions are not satisfactory in all scenarios.
The requirement is precise motion, so in order to achieve it, all singularities must be avoided, the mechanism should be synthesised [22], and trajectory planning should be done [23]. There are many sets of solutions possible for the inverse kinematic problem of a parallel mechanism [8, 14]. Two Jacobian matrices are used to relate the input and output velocities.
The paper presents optimal link lengths and orientation for parallel leg mechanism. In Section 2 and 3, the problem formulation is presented. In Section 4, results obtained by simulations are discussed. Finally, the conclusions are presented.
2. Model of the 5R parallel leg
The schematic diagram of the 5R parallel leg considered in this paper is shown in Fig. 2. Links 1 and 2 of lengths $l_1$ and $l_2$ are the proximal links, and links 3 and 4 of lengths $l_3$ and $l_4$ are the distal links. From the reference coordinate frame with origin at $O$, the revolute joints of the actuated links 1 and 2 are at a distance of $d/2$ on either side along the $x$-axis. Fig. 1 shows the trajectory of the point $P$.
Fig. 1. Path trajectory
Fig. 2. Schematics of the 5R parallel leg structure
2.1. Velocity kinematics
For obtaining the velocity kinematics relationship, consider the distances $B_1P$ and $B_2P$ written in terms of $x$, $y$, $\theta_1$, and $\theta_2$:

$\left(x-l_1\cos\theta_1+\frac{d}{2}\right)^2+\left(y-l_1\sin\theta_1\right)^2=l_3^2, \quad (1)$

$\left(x-l_2\cos\theta_2-\frac{d}{2}\right)^2+\left(y-l_2\sin\theta_2\right)^2=l_4^2. \quad (2)$
Differentiating Eqs. (1) and (2), we get:

$A\dot{X}=B\dot{\theta}, \quad (3)$

where $X=[x\;\;y]^T$, $\theta=[\theta_1\;\;\theta_2]^T$, and $A$ and $B$ can be written in compact form as:

$A=\begin{bmatrix}T_3^T\\ T_4^T\end{bmatrix}, \quad B=\begin{bmatrix}\bar{l}_1\times\bar{l}_3 & 0\\ 0 & \bar{l}_2\times\bar{l}_4\end{bmatrix}, \quad (4)$

where $\bar{l}_1$, $\bar{l}_2$, $\bar{l}_3$, and $\bar{l}_4$ are respectively the directed line segments $A_1B_1$, $A_2B_2$, $B_1P$, and $B_2P$.
2.2. Inverse position kinematics
Inverse position kinematics consists of determining the joint angles $\theta_1$ and $\theta_2$ given the end-effector position $(x, y)$. For every end-effector position within the reachable workspace of the 5R parallel robot manipulator, there are, in general, four solutions. These four solutions are categorized based on whether links 1 and 3 or links 2 and 4 make included angles less than $\pi$ or greater than $\pi$, measured from proximal links 1 or 2. For instance, in Fig. 2, the odd-numbered links 1 and 3 make an angle less than $\pi$ (considered '-') and the even-numbered links 2 and 4 make an angle greater than $\pi$ (considered '+'). This can be easily determined by checking the signs of $A_1B_1\times A_1P$ and $A_2B_2\times A_2P$. Using this convention, the four solutions, which are also called working modes, are described as ++, +-, -+, and --. Again, for each of these working modes there exist two assembly modes which divide the workspace of that particular working mode. Movement in the workspace from one assembly mode to another requires disassembly of the joint at the end-effector. When $l_1=l_2=l_3=l_4=d$, the workspace is maximum without holes in it, as shown in Fig. 3.
Fig. 3. Working modes and assembly modes
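The inverse position kinematics described above reduces to two independent circle intersections, one per proximal-distal chain. A minimal Python sketch, assuming the actuated joints sit at $(\pm d/2, 0)$ as in the constraint equations, with the elbow signs playing the role of the '+'/'-' working-mode convention (function names are my own):

```python
import math

def proximal_angle(ax, ay, l_prox, l_dist, x, y, elbow=+1):
    """Angle of the proximal link for a 2R chain from base (ax, ay) to (x, y).

    elbow = +1 / -1 selects between the two circle-intersection solutions.
    """
    dx, dy = x - ax, y - ay
    r = math.hypot(dx, dy)
    # Law of cosines for the angle between the proximal link and the base->P line.
    c = (r * r + l_prox * l_prox - l_dist * l_dist) / (2 * r * l_prox)
    assert -1.0 <= c <= 1.0, "target outside the reachable annulus"
    return math.atan2(dy, dx) + elbow * math.acos(c)

def ik_5r(x, y, l1, l2, l3, l4, d, mode=(-1, +1)):
    """Inverse kinematics of the 5R leg: returns (theta1, theta2)."""
    th1 = proximal_angle(-d / 2, 0.0, l1, l3, x, y, elbow=mode[0])
    th2 = proximal_angle(+d / 2, 0.0, l2, l4, x, y, elbow=mode[1])
    return th1, th2

# Consistency check: B1 = A1 + l1*(cos th1, sin th1) must lie at distance l3 from P.
l = 1.0
x, y = 0.2, 1.3
th1, th2 = ik_5r(x, y, l, l, l, l, d=l)
b1 = (-l / 2 + l * math.cos(th1), l * math.sin(th1))
assert math.isclose(math.hypot(x - b1[0], y - b1[1]), l, rel_tol=1e-9)
```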
3. Torque requirement
The relationship between the joint torques at $A_1$ and $A_2$, and the force applied at the end-effector on an external surface, is given by:

$\tau=J^TF, \quad (5)$

where $\tau=[\tau_1\;\;\tau_2]^T$, $F=[F_x\;\;F_y]^T$, and the Jacobian matrix is:

$J=A^{-1}B. \quad (6)$

It is clear that both $A$ and $B$ should be full rank to uniquely solve for the joint torques, given the force required at the end-effector. We assume that the manipulator does not pass through singularities during normal usage, and hence a full-rank Jacobian is ensured.
The following assumptions are considered as requirements or objectives of the leg structure:
1) End-effector path of the stance leg is a straight line parallel to the body of the quadruped robot during forward motion of the robot while maintaining a constant height.
2) End-effector path of the swing leg is a curve designed to lift the end-effector off the ground and take it to the next foot-hold on the ground. In addition to just lifting off the ground, the path should also circumvent any obstacles present.
3) End-effector can be placed on the ground at any desired foot hold within the limitations of the workspace of the parallel leg structure.
These objectives have to be met with least amount of torque from the actuators.
For constant-height motion in trot gait, the vertical ground reaction force is equal to half of the total weight. For a friction coefficient of $\mu$, the maximum horizontal force will be $\mu$ times the vertical force. Let the Jacobian matrix be written as:

$J=\begin{bmatrix}a_{11} & a_{12}\\ a_{21} & a_{22}\end{bmatrix}, \quad (7)$

where $a_{ij}$ are elements of the Jacobian matrix dependent on $x$, $y$, $\theta_1$, and $\theta_2$.
In the configuration shown in Fig. 2, with the coordinate frame's $y$-axis pointing upwards, the force that the leg mechanism has to generate is negative ($F_y<0$). The horizontal force $F_x$ can be either positive or negative depending upon the acceleration or deceleration of the robot body. Then Eq. (5) can be written as:

$\tau_1=\left(\pm a_{11}\mu+a_{21}\right)F_y, \quad (8)$

$\tau_2=\left(\pm a_{12}\mu+a_{22}\right)F_y. \quad (9)$

Since we are interested in the magnitudes of the torques:

$\left|\tau_1\right|=\left|\pm a_{11}\mu+a_{21}\right|\left|F_y\right|, \quad (10)$

$\left|\tau_2\right|=\left|\pm a_{12}\mu+a_{22}\right|\left|F_y\right|. \quad (11)$
We can see from Eqs. (10) and (11) that the vertical force $|F_y|$ is just a scaling factor for the joint actuator torques. The coefficients of $|F_y|$ in Eqs. (10) and (11) depend on the kinematics alone, i.e., on the end-effector position trajectory.

Now the problem of joint torque minimization can be stated as follows: for the given height of the robot body (or of the location where the joint actuators are present) from the ground level and the given step length, find the dimensions $l_1$, $l_2$, $l_3$, $l_4$, and $d$ such that the peak absolute values of the joint actuator torques $\tau_1$ and $\tau_2$ are minimum.
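The peak-torque coefficients of Eqs. (10) and (11) can be evaluated for a given Jacobian as follows. The Jacobian used here is a numerical stand-in for illustration, not the optimized leg's:

```python
def peak_torque_coeffs(J, mu):
    """Worst-case |tau| / |Fy| per joint, per Eqs. (10)-(11).

    tau = J^T F with F = [Fx, Fy] and |Fx| <= mu*|Fy|, so for each joint
    the peak coefficient is the max over the +/- sign of the friction force.
    """
    (a11, a12), (a21, a22) = J
    c1 = max(abs(a11 * mu + a21), abs(-a11 * mu + a21))
    c2 = max(abs(a12 * mu + a22), abs(-a12 * mu + a22))
    return c1, c2

# Example with a stand-in Jacobian (not from the optimized leg).
J = [[1.0, 0.2], [0.3, 0.8]]
c1, c2 = peak_torque_coeffs(J, mu=0.5)
assert abs(c1 - 0.8) < 1e-12   # |+-0.5*1.0 + 0.3| peaks at 0.8
assert abs(c2 - 0.9) < 1e-12   # |+-0.5*0.2 + 0.8| peaks at 0.9
```

Multiplying these coefficients by $|F_y|$ (half the robot weight in trot gait) gives the peak actuator torques to be minimized over the link dimensions.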
The minimization problem is solved using a genetic algorithm. We optimize the peak torques of the two actuators of this two-degree-of-freedom mechanism to obtain the optimal link lengths. Fig. 4 shows the workspace of the parallel mechanism. The link lengths obtained are $l_1=l_2=l_3=l_4=1.7296$ m and $d=1.4937$ m. The torque for the path is obtained as 0.264 Nm.
In this paper, we assumed the path is symmetric as shown in Fig. 1, and solving the -+ mode we obtain the workspace depicted in Fig. 4. There are two types of singularities that can occur while working with parallel mechanisms: Type I and Type II. In a Type I singularity, the mechanism loses one degree of freedom, i.e., one link follows the motion specified for the other link and gives an undesirable result. In a Type II singularity, the mechanism attains a configuration in which the actuator cannot resist a force applied to the end-effector; the actuator would need infinite stiffness, which is not possible in practice, and thus the end-effector cannot resist any applied force.
The path obtained does not have any Type I singularities.
Fig. 4. Workspace obtained after optimization
In this paper we found the optimized link lengths for the parallel manipulator. The criteria were set to obtain the maximum horizontal straight path at the minimum expense of torque. Here we have assumed that the path is symmetric (Fig. 1). Next, we would like to formulate the asymmetric case and find the optimized path. In the present study we have considered only the "-+" mode, which seemed more promising; in the future we want to analyse the remaining modes and study which one yields the best result for the quadruped gait cycle.
[1] Campos L., Bourbonnais F., Bonev I. A., Bigras P. Development of a five-bar parallel robot with large workspace. ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, 2011.
[2] Briot S., Bonev I. A. Are parallel robots more accurate than serial robots? Transactions of the Canadian Society for Mechanical Engineering, Vol. 31, 2007, p. 445-455.
[3] Alıcı G. Determination of singularity contours for five-bar planar parallel manipulators. Robotica, Vol. 18, 2000, p. 569-575.
[4] Zhaocai D., Yueqing Y., Xuping Z. Dynamic modeling of flexible-links planar parallel robots. Frontiers of Mechanical Engineering in China, Vol. 3, 2008, p. 232-237.
[5] Huang M. Z. Design of a planar parallel robot for optimal workspace and dexterity. International Journal of Advanced Robotic Systems, Vol. 8, 2011, p. 49-56.
[6] Gao F., Qi C., Sun Q., Chen X., Tian X. A quadruped robot with parallel mechanism legs. IEEE International Conference on Robotics and Automation (ICRA), 2014.
[7] Cui G., Zhang Y. Kinetostatic modeling and analysis of a new 3-DOF parallel manipulator. International Conference on Computational Intelligence and Software Engineering, 2009.
[8] Liu X. J., Wang J., Pritschow G. Performance atlases and optimum design of planar 5R symmetrical parallel mechanisms. Mechanism and Machine Theory, Vol. 41, 2006, p. 119-144.
[9] Macho E., Altuzarra O., Pinto C., Hernandez A. Workspaces associated to assembly modes of the 5R planar parallel manipulator. Robotica, Vol. 26, 2008, p. 395-403.
[10] Hesselbach J., Helm M. B., Soetebier S. Connecting Assembly Modes for Workspace Enlargement. Advances in Robot Kinematics, Springer, 2002, p. 347-356.
[11] Figielski A., Bonev I. A., Bigras P. Towards development of a 2-DOF planar parallel robot with optimal workspace use. IEEE International Conference on Systems, Man and Cybernetics, 2007.
[12] Dash A. K., Chen I. M., Yeo S. H., Yang G. Workspace generation and planning singularity-free path for parallel manipulators. Mechanism and Machine Theory, Vol. 40, 2005, p. 776-805.
[13] Basu D., Ghosal A. Singularity analysis of platform-type multi-loop spatial mechanisms. Mechanism and Machine Theory, Vol. 32, 1997, p. 375-389.
[14] Long G. L., Paul R. P. Singularity avoidance and the control of an eight-revolute-joint manipulator. The International Journal of Robotics Research, Vol. 11, 1992, p. 503-515.
[15] Lai Z. C., Yang D. C. H. A new method for the singularity analysis of simple six-link manipulators. The International Journal of Robotics Research, Vol. 5, 1996, p. 66-74.
[16] Gosselin C., Angeles J. Singularity analysis of closed-loop kinematic chains. IEEE Transactions on Robotics and Automation, Vol. 6, 1990, p. 281-290.
[17] Daniali H. M., Zsombor-Murray P. J., Angeles J. Singularity analysis of planar parallel manipulators. Mechanism and Machine Theory, Vol. 30, 1995, p. 665-678.
[18] Collins C. L., Long G. L. The singularity analysis of an in-parallel hand controller for force-reflected teleoperation. IEEE Transactions on Robotics and Automation, Vol. 11, 1995, p. 661-669.
[19] Bourbonnais F., Bigras P., Bonev I. A. Minimum-time trajectory planning and control of a pick-and-place five-bar parallel robot. IEEE/ASME Transactions on Mechatronics, Vol. 20, 2014, p. 740-749.
[20] Wang S. L., Waldron K. J. A Study of the Singular Configurations of Serial Manipulators. Journal of Mechanical Design, Vol. 109, 1987, p. 14-20.
[21] Sen S., Dasgupta B., Mallik A. K. Variational approach for singularity-free path-planning of parallel manipulators. Mechanism and Machine Theory, Vol. 38, 2003, p. 1165-1183.
[22] Sefrioui J., Gosselin C. M. On the quadratic nature of the singularity curves of planar three-degree-of-freedom parallel manipulators. Mechanism and Machine Theory, Vol. 30, 1995, p. 533-551.
[23] Özdemir M. High-order singularities of 5R planar parallel robots. Robotica, Vol. 37, 2019, p. 233-245.
|
Spatiotemporal Evolution of Rotational Natural Cavitation in Rotational Supercavitating Evaporator for Desalination | J. Fluids Eng. | ASME Digital Collection
Zhi-Ying Zheng and Lu Wang contributed equally to this paper.
Zheng, Z., Wang, L., Cai, W., Zheng, X., Li, Q., Kunugi, T., Li, H., and Li, F. (February 4, 2020). "Spatiotemporal Evolution of Rotational Natural Cavitation in Rotational Supercavitating Evaporator for Desalination." ASME. J. Fluids Eng. May 2020; 142(5): 051205. https://doi.org/10.1115/1.4045612
A novel desalination device named the rotational supercavitating evaporator (RSCE) has been proposed and designed by utilizing the supercavitation effect. With special focus on the spatiotemporal evolution of rotational natural cavitation, the hydrodynamic characteristics of cavitating flows in the RSCE under different rotational speeds are studied by visualization experiments and three-dimensional steady numerical simulations. The results of the visualization experiments show that with increasing rotational speed, the cavity morphology develops from several transient isolated bubbles moving with the blades, to a blurred partial cavity, and finally to a transparent supercavity with nearly constant size. The numerical simulation predicts the development of the cavity morphology in the experiment qualitatively and quantitatively. Vapor phase structures are shed at the tail of the cavity due to the reentrant jet; these take the forms of single smaller bubbles and U-shaped vapor phase structures under lower rotational speeds, and of cavitation clouds and cavitating filaments containing strings of bubbles under higher rotational speeds. The vortex structure is captured based on the Q-criterion and encloses the cavity in the radial direction: the periphery of the cavity is enclosed by a single tip vortex tube, which explains the generation of the drifting stripe-shaped cavity under higher rotational speeds due to the tip vortex, and the cavity tail is enclosed by two vortex tubes split from the single tip vortex tube. A power-law empirical formula for the dimensionless supercavity length versus the cavitation number, considering the effect of rotation, is obtained by fitting the experimental data on fully developed supercavitation.
rotational natural cavitation, spatiotemporal evolution, cavity morphology, cavitation shedding, reentrant jet, vortex structure
Blades, Cavitation, Cavities, Flow (Dynamics), Spatiotemporal phenomena, Vortices, Computer simulation, Vapors, Rotation
Numerical Study on Morphological Characteristics of Rotational Natural Supercavitation by Rotational Supercavitating Evaporator With Optimized Blade Shape
, epub. 10.1007/s42241-019-0007-3
Release on the IAPWS Formulation 1995 for the Thermodynamic Properties of Ordinary Water Substance for General and Scientific Use
Meeting of the International Association for the Properties of Water and Steam (IAPWS)
, Fredericia, Denmark, Sept. 8–14, Paper No. R6-95.
IAPWS,
Revised Release on the IAPS Formulation 1985 for the Viscosity of Ordinary Water Substance
, Vejle, Denmark, Aug. 24–29, Paper No. R12-08.
Computations of Transonic Flow With the
v 2 ¯ − f
Physical and Numerical Modeling of Unsteady Cavitation Dynamics
Fourth International Conference on Multiphase Flow
, New Orleans, LA, May 27–June 1.
Combined Experimental Observation and Numerical Simulation of the Cloud Cavitation With U-Type Flow Structures on Hydrofoils
Coherent Motions in the Turbulent Boundary Layer
A Synthesized Model of the Near-Wall Behavior in Turbulent Boundary Layers
Proceedings of Eighth Symposium on Turbulence
, University of Missouri, Rolla, MO, pp.
A Visual Study on Complex Flow Structures and Flow Breakdown in a Boundary Layer Transition
Evolution of Vortex Structure in Boundary Layer Transition Induced by Roughness Elements
Three-Dimensional Characteristics of the Cavities Formed on a Two-Dimensional Hydrofoil
Proceedings of Third International Symposium on Cavitation
Hydrodynamic Theory of Two-Dimensional Flow With Cavitation
Cavitation With Finite Cavitation Numbers
,” Admiralty Research Laboratory, Teddington, Middlesex, UK, Report No. R1/H/36.
On Two Theories of Plane Potential Flows With Finite Cavities
,” Naval Ordnance Lab, White Oak, MD, Report No. NOLM-8718.
Free Boundaries and Jets in the Theory of Cavitation
.10.1002/sapm19502911
Large Eddy Simulation of Turbulent Vortex-Cavitation Interactions in Transient Sheet/Cloud Cavitating Flows
Numerical Analysis of Unsteady Cavitating Turbulent Flow and Shedding Horse-Shoe Vortex Structure Around a Twisted Hydrofoil
The Transient Characteristics of Cloud Cavitating Flow Over a Flexible Hydrofoil
Three-Dimensional Large Eddy Simulation and Vorticity Analysis of Unsteady Cavitating Flow Around a Twisted Hydrofoil
Numerical Simulation of Three Dimensional Cavitation Shedding Dynamics With Special Emphasis on Cavitation–Vortex Interaction
Large Eddy Simulation and Theoretical Investigations of the Transient Cavitating Vortical Flow Structure Around a NACA66 Hydrofoil
Large Eddy Simulation and Euler–Lagrangian Coupling Investigation of the Transient Cavitating Turbulent Flow Around a Twisted Hydrofoil
Tip Leakage and Backflow Vortex Cavitation
A Review of Methods for Vortex Identification in Hydroturbines
Proceedings of the Summer Program, Center for Turbulence Research
, Stanford University, Stanford, CA, June 27–July 22, pp.
Huzlik
Large Eddy Simulation and Investigation on the Flow Structure of the Cascading Cavitation Shedding Regime Around 3D Twisted Hydrofoil
On the Internal Flow of a Ventilated Supercavity
.10.1017/jfm.2018.1006
An Experimental Investigation Into Supercavity Closure Mechanisms
|
Random Graph Layout Method - Maple Help
Random Graph Layout Method
distribution=function
This option can take any value understood by the RandomTools distribution generator. See RandomTools/flavor/distribution. This method will generate n+{n}^{\frac{1}{d}} points and check whether n unique points were generated. If not, it will try again. If both the distribution and generator options are given, only generator will be used.
generator=function
This option can take a custom procedure that generates points (as lists) of the appropriate dimension. This method will generate n+{n}^{\frac{1}{d}} points and check whether n unique points were generated. If not, it will try again.
seed=posint
Specify a seed to send to randomize before generating the random points.
The random layout method uses RandomTools:-Generate to generate random positions for the vertices of the graph. By default, this method randomly populates a {[2{n}^{\frac{1}{d}}]}^{d} grid with the graph vertices (with an additional small random perturbation added to reduce the likelihood of overlapping edges).
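The retry logic described above can be sketched in a few lines of Python (purely illustrative; Maple's actual implementation uses RandomTools:-Generate and differs in detail):

```python
import random

def random_layout(n, d=2, seed=None):
    """Populate a [2*n^(1/d)]^d grid: draw n + n^(1/d) candidate points,
    retry until at least n unique points appear, then jitter slightly."""
    rng = random.Random(seed)
    side = max(1, round(2 * n ** (1.0 / d)))
    while True:
        k = n + round(n ** (1.0 / d))
        pts = {tuple(rng.randrange(side) for _ in range(d)) for _ in range(k)}
        if len(pts) >= n:
            # small perturbation reduces the chance of exactly overlapping edges
            return [tuple(c + rng.uniform(-0.1, 0.1) for c in p)
                    for p in sorted(pts)[:n]]

pos = random_layout(8, d=3, seed=1)
```

Here the retry loop simply redraws the whole candidate set when too few unique grid points come back, matching the behaviour described for the distribution and generator options.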
\mathrm{with}\left(\mathrm{GraphTheory}\right):
\mathrm{with}\left(\mathrm{SpecialGraphs}\right):
G≔\mathrm{Graph}\left(\mathrm{undirected},{{1,2},{1,4},{2,3},{3,4}}\right)
\textcolor[rgb]{0,0,1}{G}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{Graph 1: an undirected unweighted graph with 4 vertices and 4 edge\left(s\right)}}
\mathrm{DrawGraph}\left(G,\mathrm{layout}=\mathrm{random}\right)
\mathrm{DrawGraph}\left(G,\mathrm{layout}=\mathrm{random},\mathrm{layoutoptions}=[\mathrm{distribution}=\mathrm{Normal}\left(0,0.5\right)]\right)
A custom procedure to create random points can also be used
R≔\mathrm{RandomTools}:-\mathrm{Generate}\left(\mathrm{list}\left(\mathrm{distribution}\left(\mathrm{Uniform}\left(-0.97,0.97\right)\right),2\right),\mathrm{makeproc}\right):
\mathrm{DrawGraph}\left(G,\mathrm{layout}=\mathrm{random},\mathrm{layoutoptions}=[\mathrm{generator}=R]\right)
H≔\mathrm{HypercubeGraph}\left(3\right)
\textcolor[rgb]{0,0,1}{H}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{Graph 2: an undirected unweighted graph with 8 vertices and 12 edge\left(s\right)}}
\mathrm{DrawGraph}\left(H,\mathrm{layout}=\mathrm{random},\mathrm{dimension}=3\right)
|
Conditional on human-level AI by 2039, will OpenAI Five win against the reigning Dota 2 world champions OG? | Forecasting AI
Conditional on human-level AI by 2039, will OpenAI Five win against the reigning Dota 2 world champions OG?
Conditional on human-level AI being developed before Jan 1st 2039, will OpenAI Five win a majority of the matches played against OG at the OpenAI Five Finals on April 13th?
|
Sum
compute the sum of samples in an array
The Sum(A) command returns the sum of the values in the array A.
The SignalProcessing[Sum] command is thread-safe as of Maple 17.
\mathrm{with}\left(\mathrm{SignalProcessing}\right):
a≔\mathrm{Array}\left([1,2,3,4,5],'\mathrm{datatype}'='\mathrm{float}'[8]\right)
\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{≔}[\begin{array}{ccccc}\textcolor[rgb]{0,0,1}{1.}& \textcolor[rgb]{0,0,1}{2.}& \textcolor[rgb]{0,0,1}{3.}& \textcolor[rgb]{0,0,1}{4.}& \textcolor[rgb]{0,0,1}{5.}\end{array}]
\mathrm{Sum}\left(a\right)
\textcolor[rgb]{0,0,1}{15.}
The SignalProcessing[Sum] command was introduced in Maple 17.
|
3 Solve: (i) x - 3/2 + 4 = 3x + 7/4 (ii) 1/2(x - 3) - 2/5(x - - Maths - Rational Numbers - 10723325 | Meritnation.com
3. Solve: (i) x - 3/2 + 4 = 3x + 7/4
(ii) 1/2(x - 3) - 2/5(x - 3) = x+9
(iii) x + 7/4 = x - 3/5 +2
\left(\mathrm{i}\right).\phantom{\rule{0ex}{0ex}}\phantom{\rule{0ex}{0ex}} \mathrm{x} - \frac{3}{2} + 4 = 3\mathrm{x} + \frac{7}{4}\phantom{\rule{0ex}{0ex}}\phantom{\rule{0ex}{0ex}}⇒\mathrm{x} - 3\mathrm{x} = \frac{7}{4} + \frac{3}{2} - 4\phantom{\rule{0ex}{0ex}}\phantom{\rule{0ex}{0ex}}⇒-2\mathrm{x} = \frac{7 + 6 - 16}{4}\phantom{\rule{0ex}{0ex}}\phantom{\rule{0ex}{0ex}}⇒-2\mathrm{x} = -\frac{3}{4}\phantom{\rule{0ex}{0ex}}\phantom{\rule{0ex}{0ex}}⇒\mathrm{x} = \frac{3}{8}
\left(\mathrm{ii}\right) \frac{\left(\mathrm{x}-3\right)}{2}-\frac{2\left(\mathrm{x}-3\right)}{5}=\mathrm{x}+9\phantom{\rule{0ex}{0ex}}⇒\frac{5\left(\mathrm{x}-3\right)-4\left(\mathrm{x}-3\right)}{10}=\mathrm{x}+9\phantom{\rule{0ex}{0ex}}⇒\mathrm{x}-3=10\left(\mathrm{x}+9\right)\phantom{\rule{0ex}{0ex}}⇒\mathrm{x}-3=10\mathrm{x}+90\phantom{\rule{0ex}{0ex}}⇒\mathrm{x}-10\mathrm{x}=90+3\phantom{\rule{0ex}{0ex}}⇒-9\mathrm{x}=93\phantom{\rule{0ex}{0ex}}⇒\mathrm{x}=-\frac{93}{9}\phantom{\rule{0ex}{0ex}}⇒\mathrm{x}=-\frac{31}{3}
Abheshek Nath answered this
|
Results Are Different to those from Another Program - Q
Where differences are identified between results in Q and other programs, it will reflect either differences in the data or differences in how things are computed. Generally, it is advisable to first investigate differences in the data.
1 Differences in the data
1.1 Different sample sizes
1.2 Inconsistent outfiling
2 Differences in how things are computed
2.3 Means (averages)
2.5 Effective sample sizes and design effects
2.7 Differences in percentages
2.7.1 Pick One and Pick One - Multi questions
2.7.2 Pick Any, Pick Any - Compact and Pick Any - Grid
2.10 Latent class models, mixture models, cluster analysis, trees (segmentation)
2.11 Significance tests
2.11.1 Choice of test
2.11.2 The role of weights
2.11.3 Multiple comparison corrections
2.11.4 Upper versus lower case letters in Column Comparisons
2.11.5 Treatment of 0% and 100%
3 Obtaining assistance from Q in reconciling differences
The first thing to check when trying to reconcile data in Q versus another program is the sample size (in Q, Base n and, if the data is weighted, Base Population). Where the sample size differs, it will mean one of the following:
One of the files has been created at a different time to another (e.g., before or after more interviews have been conducted).
Different data cleaning processes have been used.
Different definitions of the base have been used.
Inconsistent outfiling
When data is exported from a data collection program into a data file various decisions are made about how to represent the data. If these rules have been inconsistently applied it can cause differences in the results (e.g., if in one program "No" responses have been outfiled as missing data whilst not in the other).
Q rounds to the nearest integer. More specifically, Q rounds to 13 significant digits (not decimals) and "away from zero". IBM SPSS products default to rounding to even numbers. Some crosstab products use two-stage rounding (they first round to the nearest decimal and then round to the nearest integer).
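The difference between these tie-breaking conventions is easy to demonstrate with Python's decimal module (an illustration of the rounding rules only, not of Q's internals):

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

def away_from_zero(x):  # ties go away from zero, as described for Q
    return int(Decimal(str(x)).quantize(Decimal("1"), rounding=ROUND_HALF_UP))

def to_even(x):  # "round to even", the IBM SPSS default described above
    return int(Decimal(str(x)).quantize(Decimal("1"), rounding=ROUND_HALF_EVEN))

def two_stage(x):  # first to one decimal place, then to an integer
    d = Decimal(str(x)).quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)
    return int(d.quantize(Decimal("1"), rounding=ROUND_HALF_UP))

print(away_from_zero(2.5), to_even(2.5))      # 3 2
print(away_from_zero(2.45), two_stage(2.45))  # 2 3  (two-stage: 2.45 -> 2.5 -> 3)
```

Note how 2.45 rounds to 2 in a single stage but to 3 under two-stage rounding, which is exactly the kind of discrepancy described above.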
If a weight is applied in one program but not in another, this will cause results to change. The way that different programs handle weights also impacts upon significance tests (discussed below).
\sigma_x = \sqrt{\frac{\sum_{i=1}^{n}\left(x_i - \frac{\sum_{i=1}^{n} x_i}{n}\right)^2}{n-1}}
where $x_i$ is the $i$th of the $n$ observations.
This formula does not take weights into account. A simple modification of the formula is to treat the $i$th observation's weight, $w_i$, as representing a frequency, which leads to:
\sigma_x = \sqrt{\frac{\sum_{i=1}^{n} w_i\left(x_i - \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}\right)^2}{\sum_{i=1}^{n} w_i - 1}}
This formula is widely used (e.g., in SPSS Statistics). However, it is incorrect in situations where the weights reflect the probability of a respondent being selected in a survey. For example, if the average weight is 1 this leads to a different standard deviation than if the average weight is 2. This formula is only used in Q where Weights and significance in Statistical Assumptions has been set to Un-weighted sample size in tests. Otherwise, Q instead uses the following formula:
\sigma_x = \sqrt{\frac{\sum_{i=1}^{n} w_i\left(x_i - \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}\right)^2}{\frac{(n-1)\sum_{i=1}^{n} w_i}{n}}}
Using Frequency Weights in Q discusses how to make Q apply the simpler formula. See the Displayr page on weighting for definitions of sampling weights and frequency weights. Also see Weights, Effective Sample Size and Design Effects for a more general discussion about weighting.
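An illustrative Python sketch of the three standard deviation formulas above (hypothetical helper names, not Q's code) makes the behavioural difference concrete: the frequency-weight formula changes when all weights are rescaled, while the sampling-weight formula does not.

```python
import math

def sd_unweighted(x):
    n = len(x)
    m = sum(x) / n
    return math.sqrt(sum((xi - m) ** 2 for xi in x) / (n - 1))

def sd_frequency_weighted(x, w):
    # Frequency-weight formula: denominator is (sum of weights) - 1.
    sw = sum(w)
    m = sum(wi * xi for wi, xi in zip(w, x)) / sw
    return math.sqrt(sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x)) / (sw - 1))

def sd_sampling_weighted(x, w):
    # Sampling-weight formula: denominator is (n - 1) * sum(w) / n,
    # which is invariant to multiplying every weight by a constant.
    n = len(x)
    sw = sum(w)
    m = sum(wi * xi for wi, xi in zip(w, x)) / sw
    return math.sqrt(sum(wi * (xi - m) ** 2 for wi, xi in zip(w, x))
                     / ((n - 1) * sw / n))

x = [1.0, 2.0, 3.0, 4.0]
# Doubling every weight changes the frequency-weighted result
# but leaves the sampling-weighted result unchanged.
print(sd_frequency_weighted(x, [1.0] * 4), sd_frequency_weighted(x, [2.0] * 4))
print(sd_sampling_weighted(x, [1.0] * 4), sd_sampling_weighted(x, [2.0] * 4))
```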
As discussed in Weights, Effective Sample Size and Design Effects, Q has many different options for how effective sample sizes are computed and how design effects are factored in. Two particular things to keep an eye out for are:
As discussed in the previous section, within Q, the design effect and effective sample size can be taken into account when computing standard deviations, whereas in many programs they are not taken into account.
By default, Q automatically computes a design effect for weighted data and includes this in addition to any supplied in Extra deff.
Weights containing missing values or 0s (Q automatically excludes these from sample size computations, but many other programs do not).
Rounding. Some programs round sample size data. For example, SPSS either defaults to rounding sample sizes (e.g., in Custom Tables) or gives the user options for controlling this.
Q's treatment of missing data on multiple response questions (see Sample Size Seems Too Small).
Selecting the wrong type of sample size in Q (in particular, refer to n, Base n, or Column n for un-weighted sample sizes, and Population, Base Population, or Column Population for weighted sample sizes). For example, a Count in SPSS is equivalent to Population in Q, or to a rounded Population.
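As a concrete example of one of the computations involved, Kish's approximation to the effective sample size (only one of the several definitions Q supports) can be sketched as:

```python
def kish_effective_sample_size(weights):
    """n_eff = (sum w)^2 / sum(w^2), after dropping missing/zero weights
    (Q excludes them from sample size computations; many programs do not)."""
    w = [wi for wi in weights if wi is not None and wi > 0]
    return sum(w) ** 2 / sum(wi ** 2 for wi in w)

w = [1.0, 1.0, 0.5, 2.0, 0.0]          # the zero weight is excluded
n_eff = kish_effective_sample_size(w)  # 4.5**2 / 6.25 = 3.24
deff = 4 / n_eff                       # design effect relative to n = 4
```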
Pick One and Pick One - Multi questions
Differences in percentages on Pick One and Pick One - Multi questions are generally attributable to differences in the data (e.g., the data having been recoded inconsistently, such as with different treatment of missing values or application of filters), or different bases. This is most easily assessed by comparing sample sizes (using Base n).
Pick Any, Pick Any - Compact and Pick Any - Grid
Differences in percentages on Pick Any, Pick Any - Compact and Pick Any - Grid questions are typically explained by:
Differences in the data (e.g., the data having been recoded inconsistently, such as with different treatment of missing values or application of filters). This is most easily assessed by comparing sample sizes (using Base n).
Some programs force percentages on multiple response questions to add up to 100% (Q does not do this). Q can be made to force the numbers to add up to 100% by either creating a filter using the main NET on the table, or, using Table JavaScript and Plot JavaScript.
Where multiple response data is stored in the Pick Any - Compact format, some programs count repeated values twice. For example, in SPSS, if one person chose the fourth option in two separate variables, they count as two people in the percentage of responses, whereas in Q they do not. That is, Q computes percentages of respondents, whereas SPSS computes percentages of responses.
Different definitions applying to NET, Base and Total columns. As discussed at NET, a NET in Q is often not the same as a Base or Total in other programs.
Pick Any - Compact questions having the wrong Question Type (i.e., being set as Pick One - Multi). See Multi-punch/Multiple Response Questions Displaying as Grids.
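The respondents-versus-responses distinction in the Pick Any - Compact point above can be illustrated with a small Python example (toy data, not any particular file format):

```python
from collections import Counter

# Each respondent holds the codes from their multiple-response variables;
# respondent 2 chose option 4 in two separate variables.
responses = {1: [1, 4], 2: [4, 4], 3: [2]}

of_responses = Counter(code for codes in responses.values() for code in codes)
of_respondents = Counter(code for codes in responses.values() for code in set(codes))

# Q-style: 2 of 3 respondents chose option 4 -> 66.7%
pct_respondents = 100 * of_respondents[4] / len(responses)
# Response-based (as described for SPSS): 3 of 5 responses -> 60.0%
pct_responses = 100 * of_responses[4] / sum(of_responses.values())
```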
Treatment of missing values (e.g., SPSS has an option for Exclude cases pairwise whereas Q always uses listwise deletion).
Treatment of weights. Most statistics programs (e.g., SPSS) interpret weights as being frequency weights, unless specific instructions are given to interpret the data in other ways. Q assumes that the weights are sampling weights (unless otherwise so instructed) and uses a Calibrated Weight. Identifying if the cause of the problem is weights is best achieved by running the analysis without weights.
Selection of type of regression model (e.g. using a linear regression in one program and a binary logit in another).
Differences in PCA results between Q and other programs, and also between Q's different PCA implementations, are due to:
Different treatments of missing values. For example, whether analysis is based on pairwise correlations or not.
Local optima in rotations. PCA itself is an exact algorithm; provided that the above issues are addressed, the results should be the same. However, the rotation methods are not exact, and different programs can find different solutions.
There is no reason to expect different programs to get the same results for any of latent class, cluster analysis and tree models. This is because:
There are multiple widely used k-means algorithms. For example, Q and R use Hartigan, J. A. and M. A. Wong (1979). "A K-means Clustering Algorithm." Applied Statistics 28(1): 100-108, but SPSS uses a different algorithm.
There are multiple different mixture modeling algorithms (e.g., latent class). For example, when attempting to specify the same basic model, Q may use Maximum Likelihood estimation, Latent Gold may use Posterior Mode estimation and Sawtooth may use Hierarchical Bayesian estimation.
Most mixture algorithms (e.g., latent class analysis) involve some element of randomization and differences in which random numbers are generated can change outputs.
More generally, the reason that segmentation algorithms give different results is that all of them return only approximate solutions, as it is computationally infeasible to find the best solution for all but the most trivial problems. For example, with 1,000 respondents and 5 segments, there are 8,250,291,000,000 possible segmentations. As latent class allows people to be partially in multiple segments, it permits an infinite number of segmentations. Consequently, when computer programs try to find segments, they start by making a few guesses, and one of the key differences between the programs relates to how they make those guesses, with things like the order of cases and variables in the data file determining how the initial guesses are made.
In Q, the best protection to avoiding having solutions that are difficult to replicate in other programs is to go into the Advanced options when creating the segments and set Number of starts to, say, 10 or 20, or a larger number if you have the time to wait.
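The effect of Number of starts can be seen with a toy k-means run in Python (plain Lloyd's algorithm here; Q and R actually use Hartigan-Wong, so this illustrates only the general phenomenon of local optima and restarts):

```python
import random

def lloyd_kmeans(points, k, rng, iters=100):
    """1-D Lloyd's k-means from one random start; returns (wss, centers)."""
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda j: (p - centers[j]) ** 2)].append(p)
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    wss = sum(min((p - c) ** 2 for c in centers) for p in points)
    return wss, sorted(centers)

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 9.0, 9.1]
one_start = lloyd_kmeans(data, 3, random.Random(0))
# Many restarts: keep the solution with the lowest within-cluster sum of squares.
best = min(lloyd_kmeans(data, 3, random.Random(s)) for s in range(40))
```

Depending on the initial guess, a single start can settle on a partition with a much larger within-cluster sum of squares than the best of many starts, which is why increasing the number of starts makes results easier to replicate.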
Most programs use slightly different statistical tests. In particular, Q does not default to the tests that are standard in SPSS, Quantum and Survey Reporter, but often equivalent tests can be selected by modifying the options in Statistical Assumptions.
How weights are treated can have a major impact on computations of statistical significance. Most statistics programs treat all weights as frequency weights (e.g., SPSS Base, R). Most market research programs assume that weights are sampling weights and use a Calibrated Weight when computing statistical tests (e.g., Quantum, Survey Reporter, Wincross, Uncle). Most specialized survey analysis programs (e.g., SPSS Complex Samples, R's survey package) use special-purpose variance estimation algorithms for dealing with weights. Q uses a combination of special-purpose variance estimation and Calibrated Weights in its analysis (which is used, and when, is discussed for each test; see Tests Of Statistical Significance) and this can be modified in Statistical Assumptions.
By default, Q uses multiple comparison corrections on all tables. The specific correction used, the False Discovery Rate correction (FDR), is not used by other market research programs. There is also no standard way to report the FDR, so the specific values of the corrected p-values differ from R's, but this does not affect conclusions. You can turn this off or select a different correction in Statistical Assumptions.
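For reference, the most common formulation of the FDR correction (Benjamini-Hochberg, as implemented by e.g. R's p.adjust(method = "BH")) can be sketched as follows; as noted above, the corrected values Q reports may differ from these:

```python
def benjamini_hochberg(pvalues):
    """Step-up FDR adjustment: p_(i) * m / rank, made monotone from the top."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):  # walk from the largest p down to the smallest
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

adj = benjamini_hochberg([0.005, 0.009, 0.04, 0.2, 0.7])
```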
When performing the Multiple comparison correction in Column comparisons, there are at least two different reasons for differences. First, there are differences between the algorithms used to compute the studentized range statistic; in the case of SPSS versus Q, these differences are typically very small (e.g., one program may compute a p-value of 0.0135 where another computes 0.0132). Second, where the groups being compared have unequal sample sizes, these are treated differently as well (for example, when computing homogeneous subgroups with Tukey's HSD, Q's formulas implicitly use the harmonic mean of the two groups being compared, whereas SPSS computes the harmonic mean of all the groups).
Where using repeated measures data, particular care should be taken, as different programs often make different assumptions regarding how they treat the likely occurrence of violations of the normality assumptions.
Some programs show all results using upper-case letters when performing Column Comparisons. Some programs use lowercase to indicate results between 0.05 and 0.1 levels of significance and uppercase for p-Values less than or equal to 0.05. Q uses lowercase for results less than 0.001 and uppercase for more significant results.
Q uses Corrected p when determining whether to assign letter or not and whether these letters are UPPERCASE or lowercase.
Obtaining assistance from Q in reconciling differences
If you require assistance in reconciling results obtained in Q with those obtained in other programs, please:
Send an email to support containing the following:
A QPack (File > Share > Send to Q Support (Encrypted)).
How To Replicate SPSS Custom Tables Significance Tests
How to Replicate SPSS Significance Tests in Q
Significance Testing - Independent Samples Column Means and Proportions Tests
Further reading: SPSS Alternatives
|
MultiSet difference operator
M minus N
M minus N returns the set difference (counting multiplicity) between M and N.
M≔\mathrm{MultiSet}\left(a=2,b=5\right)
\textcolor[rgb]{0,0,1}{M}\textcolor[rgb]{0,0,1}{≔}{[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}]}
N≔\mathrm{MultiSet}\left(a,b=3\right)
\textcolor[rgb]{0,0,1}{N}\textcolor[rgb]{0,0,1}{≔}{[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}]}
M\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{minus}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}N
{[\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{1}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}]}
M\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{minus}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}[b,a,a]
{[\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}]}
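For comparison, the same multiset arithmetic can be reproduced with Python's collections.Counter, whose subtraction likewise drops elements whose multiplicity falls to zero (an analogy only; it is unrelated to Maple's implementation):

```python
from collections import Counter

M = Counter({"a": 2, "b": 5})
N = Counter({"a": 1, "b": 3})

print(M - N)                          # a: 2-1=1, b: 5-3=2
print(M - Counter(["b", "a", "a"]))   # "a" reaches 0 and is dropped; b: 5-1=4
```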
The MultiSet/minus operator was introduced in Maple 2016.
|
Experimental and numerical investigation on the impact resistance of fiber metal laminates | JVE Journals
Chang Liu1, Guo Lai Yang2, Yue Xiao3, Quanzhao Sun4
1, 2, 3, 4School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing, 210094, Jiangsu, China
Received 1 December 2020; accepted 16 December 2020; published 25 March 2021
Copyright © 2021 Chang Liu, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The response of fibre metal laminates (FMLs) to low-speed impact, including displacement at the impact center, stress and strain response, and metal and fiber damage during the process, was studied. Three types of stainless-steel-based FML specimens with carbon fibre reinforced plastic (CFRP) layers and different interface connections were designed, and low-speed impact tests were carried out on these specimens. The finite element software ABAQUS was also used to simulate the impact process. A cohesive element was employed to characterize the bonding relationship of the interface, and the influence of interface connection strength on the impact resistance of FMLs was studied in particular. The results show that the mechanical response and damage state of FMLs differ greatly under different interface connection modes: the higher the connection strength, the better the overall impact resistance of the specimen. When the connection strength is poor, the layers of the specimen separate easily under impact and the metal layer suffers early damage failure, which is detrimental to the whole structure.
Three types of stainless steel based FMLs with carbon fibre reinforced plastic (CFRP) specimens with different interface connections were designed and low-speed impact tests were carried out on these specimens.
The FEA model of the impact of laminated plates is established using finite element software. In this model, a cohesive element is used to represent the bonding relationship at the interface, and the simulation results are in good agreement with the experimental results.
In the impact process of laminates, if interface debonding occurs, a large gap appears at the interface during the impact rebound stage, which seriously affects the strength of the structure.
Keywords: fibre metal laminates, impact behaviour, ABAQUS, interface debonding.
Fibre metal laminates (FMLs) are a novel material structure originally developed for aeronautical engineering applications requiring high fatigue tolerance [1]. It has since been widely recognized that this structure combines light weight with strong impact resistance, and it has been applied in fields such as aircraft skins, high-pressure vessels and weapon equipment. Interface debonding, plastic deformation of the metal and fiber breakage are among the most important failure modes of FMLs. In recent years, scholars have conducted a large number of studies on the damage evolution process of FMLs, but a mature theory has not yet been formed; understanding of the mechanical response of the structure and the law of damage evolution during impact remains far from sufficient [2]. Handling of the interface connection relationship in FMLs is the key to studying interface debonding. TP et al. used a cohesive element to characterize the connection relationship of FMLs in low-speed impact simulations, and the results show that this method can simulate the bonding and debonding process of the interface [3].
In order to further reveal the mechanical response of FMLs under low-speed impact, FML specimens with different interface connection modes were designed in this paper, and low-speed impact tests were conducted on these specimens.
2. Purpose of the experiment
The plastic deformation of the metal layer and the debonding of the interface are the most important failure forms in FMLs. In order to explore the influence of the interface bond on the load-bearing capacity of laminates under low-speed impact, three kinds of specimens with different interface connection modes were designed in this paper: mechanical connection, resin adhesion and FM300 adhesive bonding. By studying the stress-strain response, degree of deformation, and damage of the metal and fiber layers when specimens with the three different interface connection modes are subjected to the same impact load (a finite element model is required for some indexes), the influence of interface connection strength on the bearing capacity of FMLs is analyzed.
3. Preparation of the experiment
The specimens in this test are stainless-steel-based FMLs with carbon fibre reinforced plastic (CFRP) layers, in which the composite material is Toray T300/epoxy resin with a thickness of 1 mm and the metal layer is 304L stainless steel with a thickness of 1 mm.
There are three interface connection modes: mechanical connection, resin adhesion and FM300 adhesive bonding. These three specimens are numbered JX-1, HY-1 and FM-1, respectively. The structure of the specimen is shown in Fig. 1. JX-1 has no binder, while the binders of HY-1 and FM-1 are resin and FM300 adhesive, respectively.
Fig. 1. Schematic diagram of specimen structure
The test equipment is a horizontal impact test platform modified from a high-pressure transmitter, as shown in Fig. 2. A special fixture was designed for this platform to clamp the specimen. The impact bullet is also specially machined: it is made of high-strength bearing steel, with both ends machined into a hemispherical shape. During the impact test, strain gauges were used to collect the strain of the composite layer on the back of the impact center; the strain collection positions are shown in Fig. 2. A laser displacement sensor (Keyence LK-G400) was used to collect the displacement of the center of the specimen.
The speed-measuring device is composed of two pieces of aluminum-foil board, each connected to a voltage data collector. When the bullet passes through a foil board, the circuit state changes and a voltage square-wave signal is generated. From the time difference ∆t between the two square waves and the distance L between the two foil boards, the velocity of the bullet is obtained as v = L/∆t.
Fig. 2. Schematic diagram of impact test system and the specimen
The dynamic explicit analysis in the finite element software ABAQUS was used to simulate the process of the composite laminates being subjected to low-velocity bullet impact. The impact processes of specimens JX-1 and FM-1 were simulated. The impact bullet is made of high-strength bearing steel and undergoes almost no deformation during impact, so it is treated as a rigid body in the simulation. Considering the large plastic deformation of the metal sheet during the test, as well as metal damage, strain-rate hardening and other nonlinear effects, the Johnson-Cook plasticity model and damage failure model were applied to the metal layer.
The strength and damage of the composite material were simulated based on the two-dimensional Hashin progressive failure theory, which includes fiber tensile fracture, fiber compressive buckling, tensile and shear fracture of the matrix, and collapse of the matrix under compression and shear. The modeling was carried out using the 8-node quadrilateral general-purpose continuum shell element (SC8R).
In the model of specimen JX-1, the interface between the metal layer and the composite layer was represented by a separable contact connection, while for specimen FM-1 the interface was represented by cohesive elements (COH3D8), whose parameters were taken from the earlier study by Pärnänen et al. [3].
According to the velocity-measuring device of the previous section, the impact velocity of the bullet hitting the FMLs is 7.85 m/s, and the impact energy is 10.94 J from the formula E = 0.5mv^2. After the impact, the displacement of the center of the back face of the specimen varied as shown in Fig. 5 (data acquisition frequency 2 kHz), and the finite element simulation results are shown in Fig. 3 and Fig. 4.
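As a quick numerical cross-check of the two formulas v = L/∆t and E = 0.5mv^2 (the foil spacing, the measured time difference, and the bullet mass are not reported in the text; the values below are illustrative assumptions chosen to reproduce the stated v = 7.85 m/s and E = 10.94 J):

```python
# Illustrative check of v = L / dt and E = 0.5 * m * v**2.
# L_foil and dt are assumed values (not given in the paper), chosen to
# reproduce the reported impact velocity of about 7.85 m/s.
L_foil = 0.50        # spacing between the two foil boards, m (assumed)
dt = 0.0637          # square-wave time difference, s (assumed)
v = L_foil / dt      # bullet velocity, m/s (~7.85)

E = 10.94            # reported impact energy, J
m = 2 * E / v**2     # implied bullet mass, kg (~0.355)
print(f"v = {v:.2f} m/s, implied bullet mass m = {m:.3f} kg")
```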
It can be seen from Fig. 3 that during the impact the bond between the metal plate and the composite plate of specimen FM-1 remains intact. The deformation of the laminate reaches its maximum at about 1 ms, with a displacement of about 4.97 mm, consistent with the value captured by the laser displacement sensor on the back of the impact point during the test. In Fig. 4, the deformation of specimen JX-1 reaches its maximum at about 1 ms with a displacement of about 3.32 mm, slightly smaller than that of specimen FM-1 and likewise consistent with the test results. After 1 ms the FMLs begin to rebound, but the metal plate rebounds more than the composite plate; an obvious gap opens between the contact surfaces of specimen JX-1, and the integrity of the structure is destroyed.
Fig. 3. Displacement and deformation of the middle section of specimen FM-1 at different times
Fig. 4. Displacement and deformation of the middle section of specimen JX-1 at different times
The strain values at point A (parallel to the fiber direction) and point B (perpendicular to the fiber direction) from the finite element calculation are compared with the experimental results in Fig. 6 (the curve data were smoothed). Comparing the two specimens, the strain of specimen FM-1 is significantly greater than that of specimen JX-1. This is because, during the impact, the interface of specimen JX-1 separates, the metal plate deforms greatly, and the load-bearing contribution of the composite layer is weakened.
Fig. 5. Variation of impact center displacement with time in the impact process of the three specimens
Fig. 6. Comparison of strain values between finite element calculation results and test results
a) Specimen FM-1
b) Specimen JX-1
The damage of fiber and matrix after impact was observed and compared with the test specimen. In the finite element results, the damage to the fiber and the matrix was determined by the Hashin damage criterion. When the damage index reached 1, the corresponding damage degree reached the maximum, that is, the complete fracture of the fiber or the complete cracking of the matrix.
Fig. 7. Damage of specimen FM-1
a) Fiber damage in FEA
b) Matrix damage in FEA
c) Damage in test
Fig. 8. Damage of specimen JX-1
It can be seen from Fig. 7 that obvious damage appears on the back of specimen FM-1. At the center of the specimen, perpendicular to the fiber direction, a fiber fracture mark about 6 mm long appears; its shape, position and length agree between the finite element results and the test results. The finite element results also show that the matrix of specimen FM-1 sustains a large damaged area at the impact center, similar in extent to the punch contact area.
It can be seen from Fig. 8 that, although specimen JX-1 underwent fairly obvious plastic deformation after impact, no obvious damage occurred. The FEA results show that the maximum fiber damage factor is 0.767, so the fibers of specimen JX-1 did not fracture. The matrix, however, was clearly damaged, with a peanut-shaped damage zone that is hard to see on the test specimen.
In this paper, the response of FMLs to low-velocity impact under different interface connections is analyzed. Horizontal impact tests of three different specimens and finite element impact simulations of two specimens were carried out. The experimental and finite element results show that interface debonding strongly influences the impact response of the FMLs. After debonding, the structural integrity of the FMLs deteriorates: the metal layer carries a large share of the load while the composite layer carries little. For FMLs with poor bond strength, a clearly visible gap forms between the metal layer and the composite layer during the rebound stage of the impact, which destroys the integrity of the material.
The greater the bonding strength of the laminate interface, the better its integrity. For FMLs with good integrity, the fiber layer carries greater stress during impact and consequently suffers greater deformation and damage afterwards, while the metal layer undergoes less plastic deformation, which benefits the overall durability of the FMLs. The study also shows that cohesive elements used to represent the FML interface in finite element simulations give results consistent with the experiments, and are therefore a reliable modeling choice.
The authors would like to acknowledge the financial supports from the National Natural Science Foundation of China (Grant number: 51705253).
Asundi A., Choi Alta Y. N. Fiber metal laminates: an advanced material for future aircraft. Journal of Materials Processing Technology, Vol. 63, Issues 1-3, 1997, p. 384-394.
Morinière F. D. Low-velocity impact on fibre-metal laminates. Ph.D. Thesis, Delft University of Technology, The Netherlands, 2014.
Pärnänen T., Kanerva M., Sarlin E. Debonding and impact damage in stainless steel fibre metal laminates prior to metal fracture. Composite Structures, Vol. 119, 2015, p. 777-786.
|
Midpoint method - Wikipedia
Numeric solution for differential equations
For the midpoint rule in numerical quadrature, see rectangle method.
Illustration of the midpoint method assuming that {\displaystyle y_{n}} equals the exact value {\displaystyle y(t_{n}).} The midpoint method computes {\displaystyle y_{n+1}} so that the red chord is approximately parallel to the tangent line at the midpoint (the green line).
In numerical analysis, a branch of applied mathematics, the midpoint method is a one-step method for numerically solving the differential equation,
{\displaystyle y'(t)=f(t,y(t)),\quad y(t_{0})=y_{0}}
The explicit midpoint method is given by the formula
{\displaystyle y_{n+1}=y_{n}+hf\left(t_{n}+{\frac {h}{2}},y_{n}+{\frac {h}{2}}f(t_{n},y_{n})\right),\qquad \qquad (1e)}
the implicit midpoint method by
{\displaystyle y_{n+1}=y_{n}+hf\left(t_{n}+{\frac {h}{2}},{\frac {1}{2}}(y_{n}+y_{n+1})\right),\qquad \qquad (1i)}
for {\displaystyle n=0,1,2,\dots } Here, {\displaystyle h} is the step size (a small positive number), {\displaystyle t_{n}=t_{0}+nh,} and {\displaystyle y_{n}} is the computed approximate value of {\displaystyle y(t_{n}).}
The explicit midpoint method is sometimes also known as the modified Euler method;[1] the implicit method is the simplest collocation method and, applied to Hamiltonian dynamics, a symplectic integrator. Note that the name modified Euler method can also refer to Heun's method;[2] for further clarity see List of Runge–Kutta methods.
The name of the method comes from the fact that in the formula above, the function {\displaystyle f} giving the slope of the solution is evaluated at {\displaystyle t=t_{n}+h/2={\tfrac {t_{n}+t_{n+1}}{2}},} the midpoint between {\displaystyle t_{n},} at which the value of {\displaystyle y(t)} is known, and {\displaystyle t_{n+1},} at which the value of {\displaystyle y(t)} needs to be found.
A geometric interpretation may give a better intuitive understanding of the method (see figure at right). In the basic Euler's method, the tangent of the curve at {\displaystyle (t_{n},y_{n})} is computed using {\displaystyle f(t_{n},y_{n})}. The next value {\displaystyle y_{n+1}} is found where the tangent intersects the vertical line {\displaystyle t=t_{n+1}}. However, if the second derivative is only positive between {\displaystyle t_{n}} and {\displaystyle t_{n+1}}, or only negative (as in the diagram), the curve will increasingly veer away from the tangent, leading to larger errors as {\displaystyle h} increases. The diagram illustrates that the tangent at the midpoint (upper, green line segment) would most likely give a more accurate approximation of the curve in that interval. However, this midpoint tangent could not be accurately calculated because we do not know the curve (that is what is to be calculated). Instead, this tangent is estimated by using the original Euler's method to estimate the value of {\displaystyle y(t)} at the midpoint, then computing the slope of the tangent with {\displaystyle f}. Finally, the improved tangent is used to calculate the value of {\displaystyle y_{n+1}} from {\displaystyle y_{n}}. This last step is represented by the red chord in the diagram. Note that the red chord is not exactly parallel to the green segment (the true tangent), due to the error in estimating the value of {\displaystyle y(t)} at the midpoint.
The local error at each step of the midpoint method is of order {\displaystyle O\left(h^{3}\right)}, giving a global error of order {\displaystyle O\left(h^{2}\right)}. Thus, while more computationally intensive than Euler's method, the midpoint method's error generally decreases faster as {\displaystyle h\to 0.}
The methods are examples of a class of higher-order methods known as Runge–Kutta methods.
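As a concrete illustration (a minimal Python sketch, not from the article), the explicit midpoint step (1e) can be compared with Euler's method on the test problem y′ = y, y(0) = 1, whose exact solution is y = e^t:

```python
import math

def euler_step(f, t, y, h):
    # basic Euler step: follow the tangent at (t, y)
    return y + h * f(t, y)

def explicit_midpoint_step(f, t, y, h):
    # formula (1e): evaluate the slope at the midpoint t + h/2,
    # using an Euler half-step to estimate y there
    return y + h * f(t + h / 2, y + (h / 2) * f(t, y))

def integrate(step, f, t0, y0, h, n):
    t, y = t0, y0
    for _ in range(n):
        y = step(f, t, y, h)
        t += h
    return y

f = lambda t, y: y              # y' = y, exact solution y = e^t
h, n = 0.25, 4                  # integrate from t = 0 to t = 1
y_euler = integrate(euler_step, f, 0.0, 1.0, h, n)
y_mid = integrate(explicit_midpoint_step, f, 0.0, 1.0, h, n)
exact = math.e
print(abs(y_euler - exact), abs(y_mid - exact))
```

With h = 0.25 the midpoint error at t = 1 is roughly an order of magnitude smaller than the Euler error, consistent with the O(h²) global error.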
Derivation of the midpoint method
Illustration of numerical integration for the equation {\displaystyle y'=y,\;y(0)=1.} Blue: the Euler method; green: the midpoint method; red: the exact solution {\displaystyle y=e^{t}.} The step size is {\displaystyle h=1.0.} The same illustration for {\displaystyle h=0.25} shows that the midpoint method converges faster than the Euler method.
The midpoint method is a refinement of the Euler's method
{\displaystyle y_{n+1}=y_{n}+hf(t_{n},y_{n}),\,}
and is derived in a similar manner. The key to deriving Euler's method is the approximate equality
{\displaystyle y(t+h)\approx y(t)+hf(t,y(t))\qquad \qquad (2)}
which is obtained from the slope formula
{\displaystyle y'(t)\approx {\frac {y(t+h)-y(t)}{h}}\qquad \qquad (3)}
and keeping in mind that
{\displaystyle y'=f(t,y).}
For the midpoint methods, one replaces (3) with the more accurate
{\displaystyle y'\left(t+{\frac {h}{2}}\right)\approx {\frac {y(t+h)-y(t)}{h}}}
so that, instead of (2), we find
{\displaystyle y(t+h)\approx y(t)+hf\left(t+{\frac {h}{2}},y\left(t+{\frac {h}{2}}\right)\right).\qquad \qquad (4)}
One cannot use this equation to find {\displaystyle y(t+h)} as one does not know {\displaystyle y} at {\displaystyle t+h/2}. The solution is then to use a Taylor series expansion exactly as if using the Euler method to solve for {\displaystyle y(t+h/2)}:
{\displaystyle y\left(t+{\frac {h}{2}}\right)\approx y(t)+{\frac {h}{2}}y'(t)=y(t)+{\frac {h}{2}}f(t,y(t)),}
which, when plugged in (4), gives us
{\displaystyle y(t+h)\approx y(t)+hf\left(t+{\frac {h}{2}},y(t)+{\frac {h}{2}}f(t,y(t))\right)}
and the explicit midpoint method (1e).
The implicit method (1i) is obtained by approximating the value at the half step {\displaystyle t+h/2} by the midpoint of the line segment from {\displaystyle y(t)} to {\displaystyle y(t+h),}
{\displaystyle y\left(t+{\frac {h}{2}}\right)\approx {\frac {1}{2}}{\bigl (}y(t)+y(t+h){\bigr )},}
and thus
{\displaystyle {\frac {y(t+h)-y(t)}{h}}\approx y'\left(t+{\frac {h}{2}}\right)\approx k=f\left(t+{\frac {h}{2}},{\frac {1}{2}}{\bigl (}y(t)+y(t+h){\bigr )}\right).}
Inserting the approximation {\displaystyle y_{n}+h\,k} for {\displaystyle y(t_{n}+h)} results in the implicit Runge–Kutta method
{\displaystyle {\begin{aligned}k&=f\left(t_{n}+{\frac {h}{2}},y_{n}+{\frac {h}{2}}k\right)\\y_{n+1}&=y_{n}+h\,k\end{aligned}}}
which contains the implicit Euler method with step size {\displaystyle h/2} as its first part.
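Since (1i) defines y_{n+1} only implicitly, each step requires solving for the new value; for non-stiff problems a simple fixed-point iteration on k converges (a hedged Python sketch; Newton iteration is the usual choice for stiff equations):

```python
def implicit_midpoint_step(f, t, y, h, iters=20):
    # solve k = f(t + h/2, y + (h/2) * k) by fixed-point iteration,
    # starting from the explicit slope f(t, y)
    k = f(t, y)
    for _ in range(iters):
        k = f(t + h / 2, y + (h / 2) * k)
    return y + h * k

# for y' = y the step has the closed form y_{n+1} = y_n * (1 + h/2) / (1 - h/2)
f = lambda t, y: y
h = 0.25
y1 = implicit_midpoint_step(f, 0.0, 1.0, h)
print(y1, (1 + h / 2) / (1 - h / 2))
```

The iteration contracts by a factor of about h/2 per pass, so for moderate step sizes a handful of iterations already matches the closed-form step to machine precision.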
Because of the time symmetry of the implicit method, all terms of even degree in {\displaystyle h} in the local error cancel, so that the local error is automatically of order {\displaystyle {\mathcal {O}}(h^{3})}. Replacing the implicit with the explicit Euler method in the determination of {\displaystyle k} results again in the explicit midpoint method.
Leapfrog integration and Verlet integration
^ Burden & Faires 2011, p. 286
Griffiths,D. V.; Smith, I. M. (1991). Numerical methods for engineers: a programming approach. Boca Raton: CRC Press. p. 218. ISBN 0-8493-8610-1.
Burden, Richard; Faires, John (2010). Numerical Analysis. Richard Stratton. p. 286. ISBN 0-538-73351-9.
|
The philosophy of thermal and statistical physics is that part of the philosophy of physics whose subject matter is an amalgam of classical thermodynamics, statistical mechanics, and related theories. Its central questions include: What is entropy, and what does the second law of thermodynamics say about it? Does either thermodynamics or statistical mechanics contain an element of time-irreversibility? If so, what does statistical mechanics tell us about the arrow of time? What is the nature of the probabilities that appear in statistical mechanics?
|
How to programmatically calculate the area of a cone in C#
The surface area of a cone is the summation of the curved area of the cone and the area of the base.
A=\pi r(r+ \sqrt{h^2+r^2} )
where:
r is the radius of the cone,
h is the height of the cone, and
π is the mathematical constant pi.
In C#, we can easily calculate the area of a cone. All we need to do is apply the formula or logic below.
Math.PI * r * (r + Math.Sqrt(h*h + r*r));
Math.PI: This is the pi we know in mathematics.
r: This represents the radius of the cone.
h: This represents the height of the cone.
The value returned is a double value which represents the surface area of a cone.
using System;

class ConeArea {
    // create function to get the areas
    static void GetArea(double h, double r) {
        double area = Math.PI * r * (r + Math.Sqrt(h * h + r * r));
        Console.WriteLine("The area of the cone with height " + h + " and radius " + r + " is " + area);
    }

    static void Main() {
        // radii and heights of some cones (example values)
        double r1 = 3, r2 = 5, r3 = 7.5;
        double h1 = 4, h2 = 12, h3 = 9;
        // get the area of the cones
        GetArea(h1, r1); GetArea(h2, r2); GetArea(h3, r3);
    }
}
Calculating the surface area of a cone in C#
We create a method called GetArea() that takes two parameters, the height and the radius. It calculates the area of the cone and prints the result to the console.
Within the main function, we define the radii of the cones whose areas we want to calculate, and the height of each cone as well.
Finally, we invoke the GetArea() method, passing it the heights and radii we created, and the results are printed to the console.
|
Vibration suppression effects on rotating wind turbine blade using a particle damping method | JVE Journals
1, 2Vishwakarma Institute of Technology, Pune, India
Received 7 August 2019; accepted 22 August 2019; published 28 November 2019
Vibration of wind turbine blades reduces the rate of electricity generation, so this research focuses on reducing wind turbine blade vibration. Notably, a new damping method, particle damping, is tried here for vibration suppression; the novelty of this study is that the method is applied to a wind turbine blade in the rotating condition for the first time. In this method, containers filled with spherical particles are mounted at four different positions on each blade in turn. Tests at different rotational speeds and container positions quantify the vibration-suppression effect relative to the undamped case and identify the optimum damper mounting position.
Compared results of without and with-damping conditions from 60 rpm to 80 rpm
Comparing dampers for all the four positions, optimum results are received at 60 rpm rotation
Comparing dampers for all the four positions, optimum results are received at 1200 mm position location of damper
Keywords: wind turbine blade, vibration suppression, damper.
Control of blade vibration is necessary, as vibration adversely affects electricity generation. According to Thomsen [1], the main modes of blade vibration are edgewise and flapwise, and according to Dapeng [2], edgewise vibration is the main problem in most blades. Giguère [3] provides important findings on the blades' dynamic characteristics. The main load acting on a blade is the wind load, and many researchers have used the blade element momentum (BEM) method to calculate the aerodynamic load on a blade. Extreme wind turbine loads are investigated with different methods by Saranyasoontorn [5], and typhoon winds are critically analyzed to characterize turbulent conditions by Ishizaki [4]. Damage to structural parts such as the nacelle cover and blades of wind turbines is investigated by Maalawi [6] and Duquette [7]. Murtagh [8] inserts a passive damper in the wind turbine tower to minimize wind-induced vibration, and Krenk [11] introduces active struts mounted near the root of each blade to reduce blade vibrations. An active tuned mass damper for mitigating edgewise vibrations is investigated by Fitzgerald [10]. A roller damper and a tuned liquid column damper (TLCD) inside a rotating blade are introduced by Box and Khan [11, 12]. For multi-mode vibration reduction of an offshore wind turbine under seismic excitation, Hussan [13] introduces a multiple tuned mass damper (MTMD) technique.
The use of particle damping in wind turbine blades is not much explored in the relevant literature; therefore, this research focuses on it.
2. Particle damping method
The particle damping method models contact interactions using a small number of parameters that capture the most important contact properties. Forces between the cavity walls and individual particles are calculated from force-displacement relations.
Forces created by particle-cavity and particle-particle impacts are the main aspect of the modeling. Spherical particles A and B, with radii {r}_{A} and {r}_{B} and centres separated by a distance D, are shown in Fig. 1(a). The two particles interact when the approach e is positive, where the approach is defined as:
e=\left({r}_{A}+{r}_{B}\right)-D,
The contact forces between two colliding balls become:
\stackrel{\to }{f}={f}^{n}{\stackrel{\to }{N}}^{n}+\mathrm{ }{f}^{s}{\stackrel{\to }{N}}^{s},
where {f}^{n} is the normal force, {f}^{s} the shear force, {\stackrel{\to }{N}}^{n} the unit vector in the normal direction, and {\stackrel{\to }{N}}^{s} the unit vector in the shear direction. Here {C}_{N}, {C}_{S} and {K}_{N}, {K}_{S} denote the damping constants and stiffnesses in the normal and shear directions, respectively.
Fig. 1. a) Particle-particle impact parameters, b) ball-wall spring mass diagram, c) ball-ball spring mass diagram
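The approach test above can be sketched numerically. The paper does not reproduce its explicit force-displacement law, so the linear spring-dashpot form f^n = K_N·e + C_N·ė used below, and all numerical values, are illustrative assumptions:

```python
def approach(r_a, r_b, dist):
    # e = (r_A + r_B) - D; particles interact only when e > 0
    return (r_a + r_b) - dist

def normal_force(e, e_dot, k_n, c_n):
    # assumed linear spring-dashpot law: f^n = K_N * e + C_N * de/dt
    return k_n * e + c_n * e_dot if e > 0 else 0.0

r = 0.0045                             # radius of a 9 mm spherical particle, m
e = approach(r, r, 0.0085)             # centres 8.5 mm apart -> 0.5 mm overlap
f = normal_force(e, 0.01, 1e5, 10.0)   # K_N, C_N, e_dot are illustrative values
print(e, f)
```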
Fig. 2 shows a block diagram of the experimental set-up. Three blades, each 1525 mm long, are mounted on a flange connected to a shaft. The shaft is supported by a bearing, and all blade vibrations are assumed to be transmitted through the flange and shaft to the bearing. A three-directional accelerometer mounted on the bearing transmits signals to an FFT analyzer (Brüel and Kjær) and onwards to the display unit. Fig. 3 shows the particle damper container with 9 mm spherical balls. Fig. 4 shows the 1 kW wind turbine set-up: (a) FEA model and (b) test set-up. All tests used 9 mm spherical particles of a steel with chemical composition 0.98 % C, 0.33 % Mn, 0.25 % Si, 1.40 % Cr, 0.050 % Ni, 0.010 % Mo, 0.010 % S, 0.012 % P. The container, made of polypropylene (PP), is 48 mm in diameter and 28 mm high.
Fig. 2. Block diagram of the experiment set-up
Fig. 3. Container with particle damper
Position of damper: Fig. 5(a) to (d) show a particle damper mounted at four positions on each blade, spaced 300 mm apart from the tip of the blade: 300 mm, 600 mm, 900 mm and 1200 mm respectively. At each position, a container is mounted externally on all three blades at the same time.
RPM of blade: three rotational speeds are used, namely 60 rpm, 70 rpm and 80 rpm. Readings are first taken in the undamped condition, and the results are then compared with the damped conditions.
Fig. 4. Experimental set-up of 1 kW wind turbine a) FEA model and b) actual experimental set-up
Fig. 5. a) Damper at 300 mm, b) damper at 600 mm, c) damper at 900 mm, d) damper at 1200 mm
The FEA model of the test set-up was prepared in HyperMesh using 2D mixed-mesh and 3D tetra-mesh elements, with 161073 nodes and 65761 elements. First, an undamped case was computed by FEA and compared with a similar undamped test on the experimental set-up: at a randomly chosen 90 rpm, CAE gives an acceleration of 1.57 m/s2 at 50.96 Hz and the test set-up gives 1.66 m/s2 at 46.87 Hz, so the test agrees with the CAE result to within about 10 %. Table 1 lists all testing results for the first two modes at the different speeds and positions. Fig. 6 shows acceleration versus frequency up to 1000 Hz for the damped cases at different positions and speeds; in each plot the five peak modes and their magnitudes are marked. Fig. 5(a) to (d) show the damper at 300 mm, 600 mm, 900 mm and 1200 mm respectively. Fig. 7 shows the undamped and damped results together for the first two modes.
Fig. 6. Testing results of damping with different positions at different rpm
a) 60 rpm, 1200 mm position
b) 70 rpm, 1200 mm position
c) 80 rpm, 1200 mm position
d) 60 rpm, 900 mm position
e) 70 rpm, 900 mm position
f) 80 rpm, 900 mm position
g) 60 rpm, 600 mm position
h) 70 rpm, 600 mm position
Fig. 7. Combined results of undamped and damped cases
Table 1. Testing results of acceleration (m/s2) for different rpm and position
Comparing the dampers at all four positions, the optimum results are obtained with the damper at the 1200 mm position, because at each of 60, 70 and 80 rpm the difference in acceleration between the damped and undamped conditions is largest there. Comparing the three speeds, the combination of 60 rpm and the 1200 mm damper position gives the best results.
Giguère P., Selig M. S., Tangler J. L. Blade design trade-offs using low-lift airfoil for stall-regulated HAWTs. Journal of Solar Energy Engineering, Vol. 121, 1999, p. 217-223.
Saranyasoontorn K., Manuel L. Efficient models for wind turbine extreme loads using inverse reliability. Journal of Wind Engineering and Industrial Aerodynamics, Vol. 92, 2005, p. 789-804.
Ishizaki H. Wind profiles, turbulence intensities and gust factors for design in typhoon-prone regions. Journal of Wind Engineering and Industrial Aerodynamics, Vol. 13, 1983, p. 55-66.
Maalawi K. Y., Badawy M. T. S. A direct method for evaluating performance of horizontal axis wind turbines. Renewable and Sustainable Energy Reviews, Vol. 5, 2001, p. 175-190.
Duquette M. M., Visser K. D. Numerical implications of solidity and blade number on rotor performance of horizontal-axis wind turbines. Journal of Solar Energy Engineering, Vol. 125, Issue 4, 2003, p. 425-432.
Murtagh P. J., Ghosh A., Basu B., Broderick B. M. Passive control of wind turbine vibrations including blade/tower interaction and rotationally sampled turbulence. Wind Energy, Vol. 11, Issue 4, 2008, p. 305-317.
Box G. E. P., Draper N. R. Response Surfaces, Mixtures, and Ridge Analyses. 2nd ed., John Wiley and Sons, New Jersey, USA, 2007.
Khan A., Do J., Kim D. Cost effective optimal mix proportioning of high strength self-compacting concrete using response surface methodology. Computers and Concrete, Vol. 17, Issue 5, 2016, p. 629-648.
Hussan M., Rahman M. S., Sharmin F., Kim D. Multiple tuned mass damper for multi-mode vibration reduction of offshore wind turbine under seismic excitation. Ocean Engineering, Vol. 160, 2018, p. 449-460.
|
Talk:QB/a19ElectricPotentialField Capacitance - Wikiversity
{\displaystyle \Delta V_{AB}=V_{A}-V_{B}=-\int _{A}^{B}{\vec {E}}\cdot d{\vec {\ell }}}
{\displaystyle {\vec {E}}=-{\tfrac {\partial V}{\partial x}}{\hat {i}}-{\tfrac {\partial V}{\partial y}}{\hat {j}}-{\tfrac {\partial V}{\partial z}}{\hat {k}}=-{\vec {\nabla }}V}
{\displaystyle q\Delta V}
{\displaystyle U=qV}
{\displaystyle Power={\tfrac {\Delta U}{\Delta t}}={\tfrac {\Delta q}{\Delta t}}V=IV=e{\tfrac {\Delta N}{\Delta t}}}
{\displaystyle K={\tfrac {1}{2}}mv^{2}}
{\displaystyle V(r)=k{\tfrac {q}{r}}}
{\displaystyle V_{P}=k\sum _{1}^{N}{\frac {q_{i}}{r_{i}}}\to k\int {\frac {dq}{r}}}
{\displaystyle Q=CV}
{\displaystyle C=\varepsilon _{0}{\tfrac {A}{d}}}
{\displaystyle {\text{Series}}:\;{\tfrac {1}{C_{S}}}=\sum {\tfrac {1}{C_{i}}}.}
{\displaystyle {\text{ Parallel:}}\;C_{P}=\sum C_{i}.}
{\displaystyle u={\tfrac {1}{2}}QV={\tfrac {1}{2}}CV^{2}={\tfrac {1}{2C}}Q^{2}}
{\displaystyle u_{E}={\tfrac {1}{2}}\varepsilon _{0}E^{2}}
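The series/parallel capacitance and stored-energy formulas above can be exercised numerically; a minimal sketch with illustrative component values (not part of the original quiz page):

```python
def c_series(caps):
    # 1/C_S = sum(1/C_i)
    return 1.0 / sum(1.0 / c for c in caps)

def c_parallel(caps):
    # C_P = sum(C_i)
    return sum(caps)

def energy(c, v):
    # U = (1/2) C V^2
    return 0.5 * c * v**2

caps = [1e-6, 2e-6]                    # two illustrative capacitors, farads
print(c_series(caps))                  # about 0.667 uF
print(c_parallel(caps))                # 3.0 uF
print(energy(c_parallel(caps), 12.0))  # energy stored at 12 V
```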
|
Q. Simplify (4x + 3y)^2 − 48xy

We have,
(4x + 3y)^2 − 48xy
= (4x)^2 + 2 × 4x × 3y + (3y)^2 − 48xy
= 16x^2 + 24xy + 9y^2 − 48xy
= 16x^2 − 24xy + 9y^2
= (4x)^2 − 2 × 4x × 3y + (3y)^2
= (4x − 3y)^2
|
Killing vector field - formulasearchengine
A Killing vector field (red) with integral curves (blue) on a sphere.
In mathematics, a Killing vector field (often just Killing field), named after Wilhelm Killing, is a vector field on a Riemannian manifold (or pseudo-Riemannian manifold) that preserves the metric. Killing fields are the infinitesimal generators of isometries; that is, flows generated by Killing fields are continuous isometries of the manifold. More simply, the flow generates a symmetry, in the sense that moving each point on an object the same distance in the direction of the Killing vector field will not distort distances on the object.
Specifically, a vector field X is a Killing field if the Lie derivative with respect to X of the metric g vanishes:
{\displaystyle {\mathcal {L}}_{X}g=0\,.}
In terms of the Levi-Civita connection, this is
{\displaystyle g(\nabla _{Y}X,Z)+g(Y,\nabla _{Z}X)=0\,}
for all vectors Y and Z. In local coordinates, this amounts to the Killing equation
{\displaystyle \nabla _{\mu }X_{\nu }+\nabla _{\nu }X_{\mu }=0\,.}
This condition is expressed in covariant form. Therefore it is sufficient to establish it in a preferred coordinate system in order to have it hold in all coordinate systems.
The vector field on a circle that points clockwise and has the same length at each point is a Killing vector field, since moving each point on the circle along this vector field simply rotates the circle.
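The same rotation generator, viewed as X = (−y, x) on the Euclidean plane, can be checked numerically against both characterizations above: the Cartesian Killing equation ∂_μX_ν + ∂_νX_μ = 0 (the Christoffel symbols vanish in these coordinates) and distance preservation by its flow (a short sketch, not part of the original article):

```python
import math

def X(p):
    # rotation generator on the Euclidean plane: X = (-y, x)
    x, y = p
    return (-y, x)

def killing_residual(p, h=1e-6):
    # finite-difference check of d_mu X_nu + d_nu X_mu = 0
    def d(i, j):
        lo, hi = list(p), list(p)
        lo[i] -= h
        hi[i] += h
        return (X(hi)[j] - X(lo)[j]) / (2 * h)
    return max(abs(d(i, j) + d(j, i)) for i in range(2) for j in range(2))

def flow(p, t):
    # the flow of X is rotation by angle t
    x, y = p
    return (x * math.cos(t) - y * math.sin(t), x * math.sin(t) + y * math.cos(t))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

p, q = (1.2, -0.7), (0.3, 2.0)
print(killing_residual(p))                            # ~0: Killing equation holds
print(dist(p, q), dist(flow(p, 0.8), flow(q, 0.8)))   # distances are preserved
```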
If the metric coefficients {\displaystyle g_{\mu \nu }\,} in some coordinate basis {\displaystyle dx^{a}\,} are independent of the coordinate {\displaystyle x^{\kappa }\,,} then {\displaystyle K^{\mu }=\delta _{\kappa }^{\mu }\,} is automatically a Killing vector, where {\displaystyle \delta _{\kappa }^{\mu }\,} is the Kronecker delta.[1]
To prove this, let us assume {\displaystyle g_{\mu \nu },_{0}=0\,} and {\displaystyle K^{\mu }=\delta _{0}^{\mu }\,.} Then {\displaystyle K_{\mu }=g_{\mu \nu }K^{\nu }=g_{\mu \nu }\delta _{0}^{\nu }=g_{\mu 0}\,.}
Now let us look at the Killing condition
{\displaystyle K_{\mu ;\nu }+K_{\nu ;\mu }=K_{\mu ,\nu }+K_{\nu ,\mu }-2\Gamma _{\mu \nu }^{\rho }K_{\rho }=g_{\mu 0,\nu }+g_{\nu 0,\mu }-g^{\rho \sigma }(g_{\sigma \mu ,\nu }+g_{\sigma \nu ,\mu }-g_{\mu \nu ,\sigma })g_{\rho 0}\,.}
Using {\displaystyle g_{\rho 0}g^{\rho \sigma }=\delta _{0}^{\sigma }\,,} the Killing condition becomes
{\displaystyle g_{\mu 0,\nu }+g_{\nu 0,\mu }-(g_{0\mu ,\nu }+g_{0\nu ,\mu }-g_{\mu \nu ,0})=0\,,}
that is, {\displaystyle g_{\mu \nu ,0}=0,} which is true by assumption.
The physical meaning is, for example, that, if none of the metric coefficients is a function of time, the manifold must automatically have a time-like Killing vector.
In layman's terms, if an object doesn't transform or "evolve" in time (when time passes), time passing won't change the measures of the object. Formulated like this, the result sounds like a tautology, but one has to understand that the example is very much contrived: Killing fields apply also to much more complex and interesting cases.
A Killing field is determined uniquely by a vector at some point and its gradient (i.e. all covariant derivatives of the field at the point).
The Lie bracket of two Killing fields is still a Killing field. The Killing fields on a manifold M thus form a Lie subalgebra of vector fields on M. This is the Lie algebra of the isometry group of the manifold if M is complete.
For compact manifolds
Negative Ricci curvature implies there are no nontrivial (nonzero) Killing fields.
Nonpositive Ricci curvature implies that any Killing field is parallel, i.e., its covariant derivative along any vector field is identically zero.
If the sectional curvature is positive and the dimension of M is even, a Killing field must have a zero.
The divergence of every Killing vector field vanishes.
If {\displaystyle X} is a Killing vector field and {\displaystyle Y} is a harmonic vector field, then {\displaystyle g(X,Y)} is a harmonic function.
If {\displaystyle X} is a Killing vector field and {\displaystyle \omega } is a harmonic p-form, then {\displaystyle {\mathcal {L}}_{X}\omega =0\,.}
Each Killing vector corresponds to a quantity which is conserved along geodesics. This conserved quantity is the metric product between the Killing vector and the geodesic tangent vector. That is, along a geodesic with affine parameter {\displaystyle \lambda }, the equation
{\displaystyle {\frac {d}{d\lambda }}\left(K_{\mu }{\frac {dx^{\mu }}{d\lambda }}\right)=0}
is satisfied. This aids in analytically studying motions in a spacetime with symmetries.[2]
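The conservation law can be verified directly in a simple setting: in the flat Euclidean plane, geodesics are straight lines, and the rotational Killing field K = (−y, x) yields the conserved angular momentum. A SymPy sketch (variable names are illustrative):

```python
import sympy as sp

lam, x0, y0, vx, vy = sp.symbols('lambda x0 y0 v_x v_y')

# In the flat Euclidean plane, geodesics are straight lines:
x = x0 + vx * lam
y = y0 + vy * lam

# Rotational Killing field K = (-y, x); with the identity metric, K_mu = K^mu.
# Conserved quantity Q = K_mu dx^mu / dlambda (the angular momentum):
Q = (-y) * vx + x * vy

# Q is constant along the geodesic:
assert sp.simplify(sp.diff(Q, lam)) == 0
```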
Killing vector fields can be generalized to conformal Killing vector fields, defined by {\displaystyle {\mathcal {L}}_{X}g=\lambda g\,} for some scalar {\displaystyle \lambda \,.}
The derivatives of one parameter families of conformal maps are conformal Killing fields.
Killing tensor fields are symmetric tensor fields T such that the trace-free part of the symmetrization of
{\displaystyle \nabla T\,}
vanishes. Examples of manifolds with Killing tensors include the rotating black hole and the FRW cosmology.[3]
Killing vector fields can also be defined on any (possibly nonmetric) manifold M if we take any Lie group G acting on it instead of the group of isometries.[4] In this broader sense, a Killing vector field is the pushforward of a right invariant vector field on G by the group action. If the group action is effective, then the space of the Killing vector fields is isomorphic to the Lie algebra {\displaystyle {\mathfrak {g}}} of G.
Affine vector field
Curvature collineation
Homothetic vector field
Matter collineation
|
005 Sample Final A, Question 5 - Math Wiki
Question: Solve the following inequality. Your answer should be in interval notation.
{\displaystyle {\frac {3x+5}{x+2}}\geq 2}
We start by subtracting 2 from each side to get
{\displaystyle {\frac {3x+5}{x+2}}-{\frac {2x+4}{x+2}}={\frac {x+1}{x+2}}\geq 0}
We then determine the sign of {\displaystyle {\frac {x+1}{x+2}}} on each interval (VA denotes the vertical asymptote at x = -2):

x:    x<-2    x=-2    -2<x<-1    x=-1    -1<x
Sign: (+)     VA      (-)        0       (+)
Now we just write, in interval notation, the intervals over which the quotient is nonnegative.

Final Answer: {\displaystyle (-\infty ,-2)\cup [-1,\infty )}
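The answer can be double-checked with a computer algebra system. A sketch using SymPy (our addition, not part of the original solution):

```python
from sympy import symbols, Rational
from sympy.solvers.inequalities import solve_univariate_inequality

x = symbols('x', real=True)
sol = solve_univariate_inequality((3*x + 5)/(x + 2) >= 2, x, relational=False)

# x = -2 (vertical asymptote) must be excluded; x = -1 (zero of the quotient) is included.
assert -3 in sol and -1 in sol and 0 in sol
assert -2 not in sol and Rational(-3, 2) not in sol
```

The returned set matches the interval-notation answer (-∞, -2) ∪ [-1, ∞).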
|
Economies_of_scope Knowpia
Economies of scope are "efficiencies formed by variety, not volume" (the latter concept is "economies of scale").[1] In economics, "economies" is synonymous with cost savings and "scope" is synonymous with broadening production/services through diversified products. Economies of scope is an economic theory stating that the average total cost of production decreases as a result of increasing the number of different goods produced.[2] For example, a gas station that sells gasoline can also sell soda, milk, baked goods, etc. through its customer service representatives, and thus gasoline companies achieve economies of scope.[2]
Economies of scope make product diversification, as part of the Ansoff Matrix, efficient if they are based on the common and recurrent use of proprietary know-how or on an indivisible physical asset.[5] For example, as the number of products promoted is increased, more people can be reached per unit of money spent. At some point, however, additional advertising expenditure on new products may become less effective (an example of diseconomies of scope). Related examples include distribution of different types of products, product bundling, product lining, and family branding.
Economies of scope exist whenever the total cost of producing two different products or services (X and Y) is lower when a single firm produces both than when two separate firms produce them on their own:
{\displaystyle TC(Q_{X},Q_{Y})<[TC(Q_{X},0)+TC(0,Q_{Y})]}
The degree of economies of scope formula is as follows:
{\displaystyle DSC={\frac {TC(q_{1})+TC(q_{2})-TC(q_{1},q_{2})}{TC(q_{1},q_{2})}}}
If DSC > 0, there are economies of scope: it is advantageous for the two firms to cooperate and produce together.
If DSC = 0, there are neither economies nor diseconomies of scope.
If DSC < 0, there are diseconomies of scope: it is not advisable for the two firms to work together.[7] Diseconomies of scope mean that it is more efficient for the two firms to work separately, since the merged cost per unit is higher than the sum of the stand-alone costs.[7]
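The degree-of-economies-of-scope formula and the three cases can be sketched as a small function (illustrative code; the function names and the cost figures are our assumptions):

```python
def degree_of_scope_economies(tc_q1, tc_q2, tc_joint):
    """DSC = (TC(q1) + TC(q2) - TC(q1, q2)) / TC(q1, q2)."""
    return (tc_q1 + tc_q2 - tc_joint) / tc_joint

def classify(dsc):
    if dsc > 0:
        return "economies of scope"
    if dsc < 0:
        return "diseconomies of scope"
    return "no economies of scope"

# Hypothetical stand-alone costs of 60 and 70 against a joint cost of 100:
dsc = degree_of_scope_economies(60, 70, 100)   # DSC = 0.3 > 0
```

With stand-alone costs 60 and 70 and joint cost 100, DSC = 0.3 > 0, so joint production is cheaper.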
For a company seeking to diversify, economies of scope depend on resource relatedness, i.e., on how similar the resource requirements of the businesses are. Relatedness supports economies of scope by improving the applicability of resources across the merged companies and by supporting the economical use of shared resources (such as employees, factories, and technical and marketing knowledge) in these companies.[8]
Research and development (R&D) is a typical example of economies of scope. In R&D economies, unit cost decreases because R&D expenses are spread over more output. For example, R&D labs require a minimum number of scientists and researchers whose labour is indivisible; therefore, when the output of the lab increases, R&D costs per unit may decrease. The substantial indivisible investment in R&D also implies that average fixed costs will fall rapidly as output and sales increase. Moreover, the ideas from one project can help another project (positive spillovers).[10][11]
Strategic fit, also known as the complementarity that yields economies of scope, is the degree to which the activities of different parts of an enterprise complement one another to achieve competitive advantage. Through strategic fit, diversified firms can merge interrelated businesses and share resources. Such combinations can limit the duplication of research and development and provide more planned and developed selling pipelines for the businesses.[8]
Joint costs
The essential reason for economies of scope is some substantial joint cost across the production of multiple products (Ivan Png, Managerial Economics, 1998: 224-227).[12] The cost of a cable network underlies economies of scope across the provision of broadband service and cable TV. The cost of operating a plane is a joint cost between carrying passengers and carrying freight, and underlies economies of scope across passenger and freight services.
Natural monopolies
Faster throughput thanks to better machine use,[13] less in-process inventory, or fewer stoppages for missing or broken parts. (Higher speeds are now made possible and economically feasible by the sensory and control capabilities of the “smart” machines and the information management abilities of computer-aided manufacturing (CAM) software.)
Economies of scope arise when businesses share centralized functions (such as finance or marketing) or when they form interrelationships at other points in the business process (e.g., cross-selling one product alongside another, using the outputs of one business as the inputs of another).[2]
3D printing is one area that would be able to take advantage of economies of scope,[14] as it is an example of same equipment producing "multiple products more cheaply in combination than separately".[1]
If a sales team sells several products, it can often do so more efficiently than if it is selling only one product because the cost of travel would be distributed over a greater revenue base, thus improving cost efficiency. There can also be synergies between products such that offering a range of products gives the consumer a more desirable product offering than would a single product. Economies of scope can also operate through distribution efficiencies—i.e. it can be more efficient to ship to any given location a range of products than a single type of product.
Further economies of scope occur when there are cost savings arising from byproducts in the production process, such as when the benefits of heating from energy production has a positive effect on agricultural yields.[citation needed]
^ a b c Joel D. Goldhar; Mariann Jelinek (November 1983). "Plan for Economies of Scope". Harvard Business Review.
^ a b c d "Economies of scale and scope". The Economist. 20 October 2008.
^ John C. Panzar; Robert D. Willig (1977). "Economies of Scale in Multi-Output Production". Quarterly Journal of Economics. 91 (3): 481–493. doi:10.2307/1885979. JSTOR 1885979.
^ John C. Panzar; Robert D. Willig (May 1981). "Economies of Scope". American Economic Review. 71 (2): 268–272. JSTOR 1815729.
^ Teece, David J. (September 1980). "Economies of Scope and the Scope of the Enterprise". Journal of Economic Behavior & Organization. 1 (3): 223–247. doi:10.1016/0167-2681(80)90002-5.
^ Lukas, Erica (23 April 2014). "Horizontal Boundaries of the Firm". SlideShare.
^ a b "Economies of Scope", SpringerReference, Berlin/Heidelberg: Springer-Verlag, 2011, doi:10.1007/springerreference_6618, retrieved 20 April 2021
^ a b Arkadiy V, Sakhartov (November 2017). "Economies of Scope, Resource Relatedness, and the Dynamics of Corporate Diversification: Economies of Scope, Relatedness, and Dynamics of Diversification". Strategic Management Journal. 38 (11): 2168–2188. doi:10.1002/smj.2654.
^ Venkatesh Rao (15 October 2012). "Economies of Scale, Economies of Scope". Ribbonfarm.
^ Akerman, Anders (2018). "A theory on the role of wholesalers in international trade based on economies of scope". Canadian Journal of Economics. 51 (1): 156–185. doi:10.1111/caje.12319. ISSN 1540-5982. S2CID 10776934.
^ Hinloopen, Jeroen (1 January 2008), Cellini, Roberto; Lambertini, Luca (eds.), "Chapter 5 Strategic R&D with Uncertainty", The Economics of Innovation, Contributions to Economic Analysis, Emerald Group Publishing Limited, vol. 286, pp. 99–111, doi:10.1016/s0573-8555(08)00205-8, ISBN 978-0-444-53255-8, retrieved 20 April 2021
^ Png, Ivan (1998). Managerial Economics. Malden, MA: Blackwell. pp. 224–227. ISBN 1-55786-927-8.
^ Goldhar, Joel D.; Jelinek, Mariann (1 November 1983). "Plan for Economies of Scope". Harvard Business Review. No. November 1983. ISSN 0017-8012. Retrieved 8 May 2020.
^ Lee, Leonard (26 April 2013). "3D Printing – Transforming The Supply Chain: Part 1". IBM Insights on Business blog.
|
An improved method for EMD modal aliasing effect | JVE Journals
Shao-Wei Feng1 , Kai Chai2
Copyright © 2020 Shao-Wei Feng, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In order to eliminate the modal aliasing effect of empirical modal decomposition (EMD) caused by signal interruption (singular events, cycling waves, local impact signals) or by the interaction of components, two elimination methods are proposed, namely the high frequency harmonic method and the mask signal method. Firstly, two causes of EMD mode aliasing are analyzed: one is local oscillation in the signal; the other is the amplitude-frequency relation of the signal components. Secondly, local impact mode aliasing is solved by the high frequency harmonic method. Finally, the mask signal method is used to eliminate modal aliasing of components with similar frequencies. The simulation results show that the two methods, the high frequency harmonic method and the mask signal method, can quickly and effectively curb the modal aliasing phenomenon while preserving the adaptive characteristics of EMD, and provide a novel way to extend the real-time online application of EMD.
Keywords: empirical modal decomposition, modal aliasing, high frequency harmonic method, mask signal method.
With the increasing demand for mechanical on-line monitoring and intelligent diagnosis, higher requirements have been put forward for the self-adaptability, computing efficiency and fault pattern recognition efficiency of signal processing technology [1]. Empirical modal decomposition (EMD) is a signal-adaptive decomposition technique which decomposes a signal into several intrinsic mode functions (IMFs) according to its own scales, and it is very well suited to processing nonlinear and non-stationary signals. EMD is an adaptive decomposition process performed according to the characteristic time scales of the signal itself; the result is that the different frequency components of the signal are separated, layer by layer, from high frequency to low frequency, into the corresponding IMFs [2]. An IMF can be understood intuitively as a narrowband signal composed of components of similar frequency; a correct IMF should contain only signal components of the same characteristic scale (note: the characteristic scale is not a specific value but a range of scales). When signal components of different characteristic scales appear in the same IMF, the modal aliasing phenomenon arises. Unlike the endpoint effect, modal aliasing may occur at any time in the signal, between any adjacent IMFs, and has a great impact on the subsequent components [3]. If it cannot be effectively eliminated, the decomposition results lose their physical significance. Therefore, modal aliasing is another obstacle to the application of EMD [4].
Huang first proposed an interrupt-detection method that controls the screening of extremum points in each round of IMF sifting by setting an extremum-spacing parameter. Although this method solves such problems very well, in practical applications this time interval is difficult to determine before the standard EMD decomposition is carried out, which damages the self-adaptability of EMD [5]. Building on studies of the decomposition properties of noise, Huang's group proposed the ensemble empirical modal decomposition (EEMD), which adds sets of white noise to suppress aliasing while maintaining the adaptive characteristics of EMD. However, this method requires multiple EMD runs, which results in low efficiency and limits its online application. Through extensive study of the decomposition conditions, Rilling provided a theoretical basis for solving the problem of modal aliasing caused by signal interaction: the validity of EMD applied to the sum of two signals is discussed and the condition for effective separation of the two signals is given [6]. Deering first proposed the influential mask signal method, which separates two components of similar frequency by adding a set of mask signals. Although the amplitude of the added signal depends on experience, it provides a new idea for solving modal aliasing [7].
The most significant characteristic of EMD is the self-adaptability of decomposition, which is also the main reason that EMD has been widely studied and applied. How to improve the operational efficiency of eliminating modal aliasing on the basis of the self-adaptive decomposition of EMD has become an important topic in the application research.
2. EMD mode aliasing problem
2.1. Aliasing caused by local oscillation (interrupt signal) in a signal
For non-stationary signals, time-scale parameters are characteristic parameters based on signal characteristics. Although they have no quantitative relationship with Fourier spectrum, they can also reflect signal characteristics. According to the selected reference points, there are generally two definitions of time scale:
(a) The time span of two adjacent zero crossings; (b) the time span of two adjacent extreme points.
The above two definitions of time scales can reflect the local characteristics of signals changing with time, among which the description method based on adjacent extreme points is more effective, because even if the signal does not cross zero, the modal information of the signal can also be grasped through the span of extreme points.
For a signal z(t) such as Eq. (1) (shown in Fig. 1), the local high-frequency oscillation superposed on the regular harmonic causes a dramatic change in the spacing of the local extrema, so the envelopes fitted through the extrema become locally distorted (Fig. 2); eventually, in the decomposition results, the same IMF contains features of multiple scales (Fig. 3):
z\left(t\right)=\mathrm{cos}\left(2\pi t/200\right)+\left\{\begin{array}{ll}0.2\mathrm{cos}\left(2\pi t/12\right),& t\in \left[1180,1200\right],\\ 0,& \text{otherwise},\end{array}\right.\qquad t\in \left[0,2000\right].
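A signal of the form of Eq. (1) can be generated numerically as follows (a NumPy sketch; sampling at one point per unit time is our assumption, not stated in the paper):

```python
import numpy as np

t = np.arange(0, 2001)                        # one sample per unit time, t = 0, ..., 2000
z = np.cos(2 * np.pi * t / 200.0)             # regular baseband harmonic
interrupt = (t >= 1180) & (t <= 1200)
# Local high-frequency oscillation ("interrupt") superposed on [1180, 1200]:
z[interrupt] += 0.2 * np.cos(2 * np.pi * t[interrupt] / 12.0)
```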
Fig. 1. Original signal containing local oscillation
Fig. 2. Local distortion occurs in the upper and lower envelope
2.2. Aliasing caused by signal amplitude-frequency relationship
Take a randomly generated 1000-point discrete Gaussian white noise signal as an example and perform EMD; the results are shown in Fig. 4.
As can be seen from Fig. 4, the noise signal is decomposed into 9 components ordered from high frequency to low frequency, with the number of extreme points of each component denoted {P}_{i}, i = 1, ..., 9. Although each IMF is a narrowband signal covering a frequency band, the distribution of the numbers of extreme points reflects the frequency content of each IMF. Setting {\lambda }_{i}={P}_{i}/{P}_{i+1}, the ratio {\lambda }_{i} is always around 2, i.e., over one signal cycle each IMF contains about twice as many oscillations as the next IMF. Huang inferred from this that EMD has binary filter-bank properties.
Fig. 3. Same modal component of the decomposition result contains multiple time scale components
Fig. 4. Gaussian white noise EMD results
Table 1. Binary decomposition characteristics of EMD: extremum counts {P}_{i} and ratios {\lambda }_{i}
Two simulation signals {z}_{1}\left(t\right) and {z}_{2}\left(t\right), each composed of four single-component signals of different frequencies, are constructed as an illustration. The ratios between the frequencies of successive components of signal {z}_{1}\left(t\right) are 1.25, 4 and 2, respectively; the ratios for signal {z}_{2}\left(t\right) are 4, 1.3333 and 1.5, respectively. The result of EMD is shown in Fig. 5(a, b):
{z}_{1}\left(t\right)=\mathrm{sin}\left(0.1\pi t\right)+3\mathrm{sin}\left(0.08\pi t\right)+0.5\mathrm{sin}\left(0.02\pi t\right)+\mathrm{cos}\left(0.01\pi t\right),
{z}_{2}\left(t\right)=3\mathrm{sin}\left(0.08\pi t\right)+0.5\mathrm{sin}\left(0.02\pi t\right)+\mathrm{sin}\left(0.015\pi t\right)+\mathrm{cos}\left(0.01\pi t\right),
with t\in \left[0,500\right] s and sampling frequency {f}_{s}=4 Hz.
As can be seen from the waveforms in Fig. 5, neither {z}_{1}\left(t\right) nor {z}_{2}\left(t\right) contains an interrupt. {z}_{1}\left(t\right) is decomposed into three results, in which {r}_{2} and IMF2 basically reflect the information of two components of the original signal, while IMF1 shows obvious aliasing, because {f}_{1}/{f}_{2}= 1.25 < 2. As a result, EMD cannot separate these two modes, which verifies Eq. (5). In fact, IMF1 is the superposition of the first two high-frequency components of {z}_{1}\left(t\right), which EMD regards as a single IMF with a certain degree of modulation rather than separating it. {z}_{2}\left(t\right) is decomposed into 5 components, among which IMF1 perfectly reflects the first component of the signal, while the subsequent components show obvious aliasing, because {f}_{2}/{f}_{3}= 1.33 < 2 and {f}_{3}/{f}_{4}= 1.5 < 2, which do not meet the frequency conditions for signal separation.
Fig. 5. Modal aliasing due to close frequencies: a) {z}_{1}\left(t\right), b) {z}_{2}\left(t\right)
3. Processing method of local impact mode aliasing
Since most local impact signals are high-frequency signals, a high-frequency simple harmonic signal close to the highest frequency in the signal is added to provide a scale suited to the high-frequency impact. The local high-frequency impact and the added harmonic are then decomposed together into IMF1, which reduces the impact on the subsequent components. The algorithm does not require a standard EMD decomposition of the signal in advance, handles the modal aliasing caused by high-frequency impacts well, and requires little computation. In this paper, the frequency of the added harmonic is set near the highest analysis frequency corresponding to the sampling frequency.
The results of processing z\left(t\right) with this method and then applying EMD are shown in Fig. 6. It can be seen that, although a small residue remains at the impact site, the decomposition accuracy is significantly improved; moreover, there is no extra false component apart from the superposed high-frequency component.
Fig. 6. EMD results of high-frequency harmonic processing with local shock signal
4. Processing method of modal aliasing at similar frequencies
Take the aforementioned simulation signal {z}_{1}\left(t\right) as an example:
{z}_{1}\left(t\right)=\mathrm{sin}\left(0.1\pi t\right)+3\mathrm{sin}\left(0.08\pi t\right)+0.5\mathrm{sin}\left(0.02\pi t\right)+\mathrm{cos}\left(0.01\pi t\right).
The two aliased components are {x}_{1}\left(t\right)=\mathrm{sin}\left(0.1\pi t\right) and {x}_{2}\left(t\right)=3\mathrm{sin}\left(0.08\pi t\right).
According to the analysis in Section 2.2, when the frequency ratio of two equal-amplitude signals lies between 0.5 and 2, the two signals are treated by EMD as a single mode. By this principle, {x}_{1}\left(t\right) and {x}_{2}\left(t\right) can be separated if a mask signal s\left(t\right) of appropriate frequency and amplitude is constructed so that {x}_{1}\left(t\right) and s\left(t\right) form an aliased mode after superposition. Here {f}_{1}= 0.05 and {f}_{2}= 0.04, so the lower frequency bound is {f}_{L}= 0.05/2 = 0.025 and the upper frequency bound is {f}_{H}= 0.05×2 = 0.1. Taking the mask frequency as \overline{f}= 0.16, i.e., s\left(t\right)=1.5\mathrm{sin}\left(0.32\pi t\right), set:
{x}_{+}\left(t\right)=x\left(t\right)+s\left(t\right),\qquad {x}_{-}\left(t\right)=x\left(t\right)-s\left(t\right).
EMD is applied to {x}_{+}\left(t\right) and {x}_{-}\left(t\right), and the decomposition results IMF+ and IMF- are obtained respectively; the final components are recovered as:
IMF=\frac{\left(IM{F}_{+}+IM{F}_{-}\right)}{2}.
Fig. 7. a) IMF+; b) IMF-; c) 0.5×(IMF+ + IMF-)
It can be seen from Fig. 7 that, in the decomposition results of {x}_{+}\left(t\right) and {x}_{-}\left(t\right), {x}_{1}\left(t\right) and the auxiliary signal form an aliased mode, and IMF2 is successfully separated. After averaging the decompositions of {x}_{+}\left(t\right) and {x}_{-}\left(t\right), the mask component is eliminated and the true components of x\left(t\right) are obtained (the deformation at the ends is caused by the endpoint effect).
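The mask construction and the averaging step can be sketched numerically; the EMD of x+ and x- itself is omitted here, but at the signal level the averaging cancels the mask exactly (a NumPy sketch using the paper's signal parameters):

```python
import numpy as np

t = np.arange(0, 500, 0.25)                   # t in [0, 500) s at fs = 4 Hz
x = (np.sin(0.1 * np.pi * t) + 3 * np.sin(0.08 * np.pi * t)
     + 0.5 * np.sin(0.02 * np.pi * t) + np.cos(0.01 * np.pi * t))
s = 1.5 * np.sin(0.32 * np.pi * t)            # mask signal s(t)

x_plus = x + s                                # x+(t) = x(t) + s(t)
x_minus = x - s                               # x-(t) = x(t) - s(t)

# Each of x_plus and x_minus would now be decomposed by EMD; averaging the
# corresponding IMFs cancels the mask. At the signal level the cancellation
# is exact:
recovered = (x_plus + x_minus) / 2
assert np.allclose(recovered, x)
```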
Fig. 8. EMD results of modal aliasing elimination method for practical hydraulic fault signals
Taking an actual hydraulic leakage signal as an example, the decomposition results are shown in Fig. 8. It can be seen from Fig. 8(a, b) that the signal is correctly decomposed onto the mask signals of the corresponding scales, and the decomposition result of the original signal is obtained after taking the mean value. Direct observation shows that there is no obvious aliasing in the decomposition results of this method. The reconstruction error reaches the order of 10^{-15}.
This paper analyzes the forms and causes of EMD mode aliasing, its influence on EMD, and its solutions. The principles, characteristics, advantages and disadvantages of the high frequency harmonic method and the mask signal method are analyzed and verified with typical simulated signals. On the basis of the adaptive characteristics of EMD, modal aliasing is quickly and effectively curbed, and the effectiveness of the methods is further verified on a hydraulic signal, which provides a new way to extend the real-time online application of EMD.
Colominas M. A., Schlotthauer G., Torres M. E. Improved complete ensemble EMD: a suitable tool for biomedical signal processing. Biomedical Signal Processing and Control, Vol. 14, Issue 1, 2014, p. 19-29. [Publisher]
Xu Zheng Guang An alternative envelope approach for empirical mode decomposition. Digital Signal Processing, Vol. 20, Issue 1, 2010, p. 77-84. [Publisher]
Hassanpour H., Zehtabian A. Time domain signal enhancement based on an optimized singular vector denoising algorithm. Digital Signal Processing, Vol. 22, Issue 5, 2012, p. 786-794. [Publisher]
He Long, Yang Lihua, Huang Daren The study of the intermittency test filtering character of Hilbert-Huang transform. Mathematics and Computers in Simulation, Vol. 70, 2005, p. 22-32. [Publisher]
Wu Z. H., Huang N. E. Ensemble Empirical Mode Decomposition: a Noise Assisted Data Analysis Method. Calverton Center for Ocean-Land-Atmosphere Studies, 2005, p. 855-895. [Search CrossRef]
Rilling Gabriel, Flandrin Patrick One or two frequencies? The empirical mode decomposition answers. IEEE Transactions on Signal Processing, Vol. 56, Issue 1, 2008, p. 85-96. [Publisher]
Ryan Deering, Kaiser James F. The use of a masking signal to improve empirical mode decomposition. IEEE International Conference on Acoustics, Speech, and Signal Processing, 2005. [Search CrossRef]
|
Learning with Serlo.org - learn with Serlo!
With the free learning platform, serlo.org, we want to enable you to study independently and at your own pace. Simply select the subject you want and access a variety of learning content, which is organized by topics and curriculum. All content is interlinked to help you quickly look up concepts or methods that are not quite clear. The linked content is also an invitation to pursue interesting details or even just to stumble across related topics, if you fancy it! Below, we've collected all the features of the learning platform that will assist you in your learning process.
This section will show you how to find the content you are looking for.
Topic pages give an overview of all learning content on serlo.org. For each topic, there are clear explanations, videos and practice questions with detailed step-by-step solutions. So you can learn about a topic and then start practicing it right away!
With the search tool in the top right corner, you can find all topics and subtopics with just two clicks. Search for a key term (for example “hypotenuse”) and the search will come up with relevant explanations and exercises.
Learning content on serlo.org is structured based on the high school grade level or national curriculum. Just choose your curriculum and then your grade level.
This section will show you how to learn with Serlo.org
Explanations on serlo.org are short summaries written simply and supported with videos and graphics. Explanation pages are where linked content will take you to freshen up basics or look up formulas.
With interactive applets, you can visualise and experience correlations for yourself, for example, by moving the curves of a function to explore the correlation of the variables.
Serlo.org offers a selection of the best tutorials. With the combinations of courses and videos, you can learn topics step by step.
Serlo.org provides a range of questions with different answer formats: textbox answers and multiple-choice. With instant feedback on your answer, you know immediately whether or not you understood the question.
In the sample solutions we provide with exercises, you can find a step by step calculation and, if necessary, use the linked content to look up required methods or prior knowledge.
|
Cell identity search using PSS and SSS - MATLAB lteCellSearch - MathWorks France
lteCellSearch
Cell identity search using PSS and SSS
[cellid,offset,peak] = lteCellSearch(enb,waveform)
[cellid,offset,peak] = lteCellSearch(enb,waveform,alg)
[cellid,offset,peak] = lteCellSearch(enb,waveform,cellids)
[cellid,offset,peak] = lteCellSearch(enb,waveform) returns the cell identity carried by the PSS and SSS signals in the input waveform, the timing offset to the start of the first frame of the waveform, and the peak correlation magnitude. The cell-wide settings structure, enb, defines the link configuration.
[cellid,offset,peak] = lteCellSearch(enb,waveform,alg) takes an additional input structure, alg, which provides control over the cell search. The input structure, alg, contains optional fields to define the SSS detection method, the maximum number of cells to detect, and which cell identities to search.
[cellid,offset,peak] = lteCellSearch(enb,waveform,cellids) uses an additional input to constrain the cell search to the list of cell identities specified in cellids.
This syntax will be removed in a future release. Instead use the syntax [cellid,offset,peak] = lteCellSearch(enb,waveform,alg) and set alg.CellIDs = cellids.
Search for the cell identity (in this case 171) of an R.12 RMC waveform.
Initialize reference channel configuration, rmc. Perform cell search on the waveform produced using this configuration.
cellID = lteCellSearch(rmc,lteRMCDLTool(rmc,[1;0;0;1]))
cellID = 171
enb — Cell-wide settings, specified as a structure. The field enb.NDLRB gives the number of downlink resource blocks, {N}_{\text{RB}}^{\text{DL}}.
Time-domain waveform, specified as a numeric matrix of size T-by-P. Where T is the number of time-domain samples and P is the number of receive antennas. The sampling rate of the time domain waveform must be the same as used in the lteOFDMModulate function for the specified number of resource blocks enb.NDLRB. The number of time domain samples, T, must be sufficient to provide at least one subframe for FDD (or 2 for TDD since in TDD mode PSS and SSS lie in adjacent subframes). For the cell search to succeed, the waveform provided must contain the PSS and SSS signals.
enb.NDLRB is only required to specify the sampling rate of waveform.
alg — Cell search algorithm control
Cell search algorithm control, specified as a structure. alg accepts these fields defining optional cell search algorithm settings.
SSSDetection — Optional. 'PreFFT' (default) or 'PostFFT'. SSS detection method.
MaxCellCount — Optional. Nonnegative scalar integer (1, ..., 504), default 1. The number of cell identities to detect.
CellIDs — Optional. Vector of nonnegative integers, default (0:503). A vector containing the cell identities to use for the cell search.
'PostFFT' SSS detection operates in the frequency domain. For 'PostFFT':
OFDM demodulation is performed using the timing estimate from PSS detection,
the demodulated SSS resource elements are correlated with possible SSS sequences to find the cell identity group,
and the peak correlation magnitude is the sum of the peak correlation magnitudes from time-domain PSS detection and frequency-domain SSS detection.
When alg.MaxCellCount > 1, the returned cellid, offset, and peak are vectors, with each element corresponding to one cell.
If alg.CellIDs is absent, the output vectors are sorted by decreasing correlation peak magnitude, that is, decreasing peak value. If alg.CellIDs is present and alg.MaxCellCount = numel(alg.CellIDs), the output vectors are in the same order as the cell identities in alg.CellIDs. This fixed ordering enables monitoring of the peak output for a predetermined set of cells.
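The documented ordering of the output vectors, and the relation cellid = 3*Nid1 + Nid2, can be illustrated with a small sketch. This is not the MathWorks implementation; the candidate tuples and helper names are hypothetical.

```python
# Hypothetical sketch of the documented ordering of cell-search results.
# Each candidate is a (cellid, offset, peak) tuple; values are illustrative.

def order_results(candidates, max_cell_count, cell_ids=None):
    """Mimic the documented ordering of cellid/offset/peak vectors.

    If cell_ids is None, results are sorted by decreasing peak magnitude.
    If cell_ids is given and max_cell_count == len(cell_ids), results keep
    the order of cell_ids, so a predetermined set of cells can be monitored.
    """
    if cell_ids is not None and max_cell_count == len(cell_ids):
        by_id = {c[0]: c for c in candidates}
        ordered = [by_id[cid] for cid in cell_ids if cid in by_id]
    else:
        ordered = sorted(candidates, key=lambda c: c[2], reverse=True)
    return ordered[:max_cell_count]

def cell_identity(nid1, nid2):
    """Overall physical-layer identity: cellid = 3*Nid1 + Nid2 (TS 36.211)."""
    return 3 * nid1 + nid2

candidates = [(171, 64, 0.92), (3, 10, 0.41), (250, 7, 0.77)]
print(order_results(candidates, max_cell_count=2))
print(cell_identity(57, 0))  # Nid1=57, Nid2=0 -> cell identity 171
```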
cellids — Cell identities
nonnegative scalar integer | vector of nonnegative integers
Cell identities to be used in the cell search, specified as a nonnegative scalar integer or vector of nonnegative integers.
cellids and the syntax it is associated with will be removed in a future release. Instead use alg.CellIDs and the recommended alternate syntax.
cellid — Cell identity
Physical layer cell identity, returned as a nonnegative scalar integer or vector of nonnegative integers. cellid indicates the detected cell identity. The returned cellid is a vector when alg.MaxCellCount > 1 and more than one cell is detected.
The overall physical layer cell identity is cellid = (3*Nid1) + Nid2. PSS conveys the second cell identity number (Nid2, (0,1,2)) within a cell identity group and is established via time-domain correlation using the lteDLFrameOffset function. SSS conveys the first cell identity number (Nid1, (0,...,167)) and is established in a similar fashion.
offset — Timing offset
Timing offset, returned as a nonnegative scalar integer or vector of nonnegative integers. offset indicates the number of samples from the start of the input waveform to the position in that waveform where the first frame begins. The timing offset is calculated by correlating with the detected PSS and SSS. The returned offset is a vector when alg.MaxCellCount > 1 and more than one cell is detected.
peak — Peak magnitude
numeric scalar | vector of numeric values
Peak magnitude of the correlation, returned as a numeric scalar or vector of numeric values, used for cell detection. The returned peak is a vector when alg.MaxCellCount > 1 and more than one cell is detected. The peak correlation magnitude is the sum of the peak correlation magnitudes from PSS and SSS detection. A complete correlation output is available as the output argument, corr, from lteDLFrameOffset.
lteDLFrameOffset | lteFrequencyOffset | lteFrequencyCorrect | lteOFDMDemodulate
Influence of gear grip quality on noise levels of gearbox | JVE Journals
Elias Tomeh1 , Tomáš Oudrnický2
The test of trace prints, sometimes called "tests for colour", consists in examining the area where the teeth of the gearing touch at a specific load. The mutual prints obtained under the given gearing corrections are simulated by a practical test. It is the simplest way of practically evaluating the correct grip of cogwheels, and it is used for analysing a problematic gear. Based on the results of this test, it is possible to establish the area of interest for new gearing corrections, which can help to reduce the noise level or improve the grip conditions and durability of the given gearing.
Keywords: noise level, vibration, cogwheels, gearbox.
The tested gear is first degreased and mounted into a special housing with cut holes. Through these holes, a special coating is applied to the tested gear and, after the running-in, the test results are recorded with a laparoscopic camera. The complete gearbox is fixed to the relevant engine block and the whole configuration is placed in a test stand on silentblocks, so that the situation is as close to the real vehicle installation as possible. Driving is provided by an electric motor only from the engine side (in both senses of rotation, to simulate engine braking). The half-axles are braked by real disk brakes which simulate the road resistances and load. The braking force is expressed as a load in newton-metres; the maximum value is 180 Nm, but the measurement is generally made up to 120 Nm, depending on the load capacity of the gearbox. At higher values the trace does not change significantly, because elastic deformation of the gearing produces a full supporting colour. The maximum speed is 30 min-1.
2. Basic types of supporting colours
a) Full supporting colour (Fig. 1).
Full supporting colour is characteristic for a heavy load – this effect occurs due to elastic deformations and correctly set corrections and it is not acoustically evident in traffic.
b) Mirror supporting colour (Fig. 2).
Mirror supporting colour has a smaller bearing area than full supporting colour. It usually develops at low load (smaller elastic deformations) in normal traffic, by reason of the barrel gearing (Cα, Cβ). When the load is heavy, it changes to full supporting colour.
c) Addendum/dedendum supporting colour (Fig. 3).
This type of supporting colour is given by an incorrect profile of the tool. It is being solved by a profile modification. Addendum/dedendum supporting colour is dangerous because of the appearance of pitting (big pressure in a small area – Hertz pressure); and in terms of noise it is one of the worst cases.
d) Corner supporting colour (Fig. 4).
In this case, the bearing area is situated at the gearing edges. It leads to unsmooth running owing to the small supporting area; again, there is a threat of pitting. The main reason is an unsuitable helix angle β or, from the technological point of view, a deviation of the helix slope fHβ.
e) Conical supporting colour (Fig. 5).
Conical supporting colour is formed especially because of a badly fixed workpiece in the machine or by a large deformation of a cogwheel.
f) Crossing trace.
Crossing traces can appear along the helix (Fig. 6) but also along an involute. This happens owing to a large circumferential runout Fr. Possible causes: an eccentric hole, heat deformations, or a badly fixed workpiece in the machine. It manifests itself as a fluctuating character of vibrations.
Fig. 1. Full supporting colour [1]
Fig. 2. Mirror supporting colour [1]
Fig. 3. a) Addendum supporting colour and b) dedendum supporting colour [1]
Fig. 4. Edge supporting colour [1]
Fig. 5. Conical supporting colour [1]
Fig. 6. Crossing trace [1]
2.1. Evaluation of tracing tests
Trace evaluation is a specific matter in which the main factor is the evaluator's experience. The output of the evaluation is not a physically measured quantity (except for the area of the contact ratio); the result is mainly a visual view of the colour-coded traces after rolling the cogs.
A problem appears especially when it is necessary to learn how the running-in behaves at a given load. During the test, the low load is changed into the heavy load, and the traces after both regimes are then visible on the sides of the cogs. It is not possible to deduce, in a simple way, the trace for, say, 50 Nm without having previously produced the print at the lower load. At present, the solution is to repeat the measurement at a low load separately and, in simple terms, "subtract" the two traces from each other, which provides the trace after the heavy load. Nevertheless, this solution is not perfect and the readability of the results is limited.
It is also necessary to mention that even though the test stand is as close to the real vehicle installation as possible, certain errors still appear in the measurement. They are caused especially by the manual assembly of the gear into the prepared housing; the adjusted housing itself can also have some influence. The cut holes are kept as small as possible so that they do not affect the rigidity of the housing. The gearing must be degreased perfectly so that the colour sticks to it. It is not possible to observe the traces on the pinion because the place of the grip of the constant gear is inaccessible; only the traces on the driven gear of the final drive can be analysed, and for simplicity both traces are assumed to be the same.
3. Results of tracing on a serial type of construction
Fig. 7. Tracing 1° backward – series: M_MOT = 0 Nm and 20 Nm, 1° driving (Festrad), right side; M_MOT = 0 Nm and 20 Nm, 1° driven (Schaltrad), left side
3.1. Evaluation of tracing results
A zero load does not exist in real traffic; certain passive resistances always load the gear. In the "backward" regime, the engine braking effect in practice ranges approximately from 6 to 18 Nm. Therefore, the print tracing is by default carried out from 0 to 20 Nm. The issue of the indicated noise level is related to 18 Nm, which means the relevant trace results are those at 20 Nm. Consequently, it can be stated that the rolling trace is not optimal: even at such a relatively low load, edge matching is visible in both the longitudinal and lateral directions. In addition, an almost full supporting colour appears even at zero load.
4. Tracing test results
Fig. 8. Tracing 1° backward – proposal
4.1. Tracing results evaluation – proposal
When comparing the results of the previously made calculation and the results of the practical tracing test, we can state that the results correspond. Compared with the serial type of the construction, the trace and the edge supporting colour – which negatively influenced the traces on the serial type of the construction – were reduced. Due to the extension of the lateral barrel gearing, the “full supporting colour” was changed into “mirror supporting colour” which is characteristic for big values of barrel gearing.
Especially when simulating the zero load (M_MOT = 0 Nm), the difference is clearly visible. The trace of the supporting colour was considerably reduced and its shape changed from the angular shape of the full supporting colour to the elliptical shape of the mirror supporting colour. In this regime, the edge supporting colour was also completely removed.
Fig. 9. Results comparison from EOL – the whole spectrum
5. Results from the noise level state (EOL)
The test gearboxes, like all serial gearboxes, are assembled on the same assembly line so that the principle of serial assembly is kept. Within the assembly, each gearbox goes through a so-called end-of-line test bench (EOL). The following graphs are the results of the order analysis of this noise level in the regime of the first gear on the backward side. The figure shows the differences between the gearbox with the new, proposed modifications (blue curve) and serial measurements of conforming gearboxes from the same day (black curves).
In the following figures, we can see the comparison of the amplitude size of the first harmonic frequency (Fig. 10) and the second harmonic frequency (Fig. 11) depending on the speed of the gearbox drive shaft.
Fig. 10. Results comparison from EOL – amplitude of first harmonic frequency
Fig. 11. Results comparison from EOL – amplitude of second harmonic frequency
Fig. 12. Amplitudes of acceleration comparison: green – first harmonic frequency, blue – second harmonic frequency, yellow – first interharmonic frequency
Fig. 13. Amplitudes of sound pressure comparison: green – first harmonic frequency, blue – second harmonic frequency
The results clearly show that, with increasing speed, the amplitude of the first harmonic frequency increases approximately linearly. The modified gearing shows an increase of this amplitude by an average of 5 to 10 dB in comparison with the serial type of construction of the gear.
The second harmonic frequency has a slightly different course. A local minimum of the amplitude values can be observed, which for the serial type of construction lies between 2400 and 2700 min-1; for the modified construction it is shifted slightly towards higher speeds. As for the amplitude size, the results are comparable at low speeds, higher at middle speeds and somewhat lower at high speeds, as shown in the graph.
Graphs (Fig. 12 and 13) show the comparison of amplitudes:
1) Graph in the Fig. 12 compares the amplitudes of acceleration of the first and second harmonic frequencies, including the interharmonic one.
2) Graph in the Fig. 13 compares the amplitudes of the sound pressure of the first and second harmonic frequency from the driving test, from the microphone. The first harmonic frequency of the sound pressure in the vehicle cab was reduced by 5 dB. Therefore, the noise impact on the vehicle crew was slightly reduced as well.
From the diagnostic point of view, the amplitude modulation around the first harmonic frequency was also completely removed.
Dočkal A. Structural Optimization of the Gearbox with Regard to the Reduction of the Noise Emission. Brno University of Technology, 2003.
Moravec V. Construction of Machines and Equipment II, Spur Gears: Theory – Calculation – Construction – Production – Control. Ostrava, 2001.
Oudrnický T. Constructional Modification of Gearbox for Noise Reduction. Diploma Thesis, Technical University of Liberec, 2016.
A system design method based on digital AGC photoelectric preamplifier gain adaptive control | JVE Journals
Xuefeng Gu1 , Gaoquan Gu2 , Jing Zhang3
1, 2College of Naval Architecture and Ocean Engineering, Naval University of Engineering, Wuhan, China
3Department of Ship, China Ship Development and Design Center, Wuhan, China
Copyright © 2020 Xuefeng Gu, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In measuring the radiated noise of underwater structures, an optical fiber hydrophone array is the main means of information acquisition. Because of the complexity of the measurement environment, drift of the light source wavelength and deformation of the array in the optical fiber measurement system cause power fluctuations of the input light and demodulation failures, which degrade the measurement of the radiated noise. For this reason, an adaptive control system for the optical source and the demodulator is designed, which ensures the stability of the laser source and the digital demodulation system. The experimental data show that the adaptive control system designed in this paper can realize stable signal modulation and synchronous signal processing, and provides a more suitable scheme for accurate measurement of the radiated noise of underwater equipment.
Keywords: AGC, fiber optic hydrophone, underwater noise measurement, gain.
Optical fiber hydrophones have the advantages of high sensitivity, good integration, strong immunity to electromagnetic interference and low signal crosstalk. Therefore, in the measurement of radiated noise of underwater equipment, optical fiber hydrophone arrays are widely used as noise measurement sensors, with optical fiber as the transmission channel. To extract the acoustic signal of each element of the array, the modulated optical signal is usually converted into an electrical signal, and the digital acoustic signal of the target is then obtained by demultiplexing and PGC demodulation [1]. Photoelectric signal conversion is realized by a photoelectric conversion module; however, when the input light intensity is too large, the module saturates and the back-end acquisition cannot recover the correct signal, while when the input light intensity is too weak, the low signal-to-noise ratio makes the self-noise produced by demodulation very high [2]. In addition, in a practical optical fiber hydrophone detection system the complex operating environment, wavelength drift of the optical source and deformation of the array cause fluctuations of the input optical power, affecting the stability and reliability of the detection system. The regulation capability of a large-scale array acquisition system is limited: generally it can only process signals with small amplitude changes, and the demodulation fails if the signal is too strong or too weak. Therefore, to adapt to changes of the input optical power, an adaptive gain adjustment system for the photoelectric preamplifier is introduced into the acquisition and demodulation chain to keep the output signal amplitude stable, thus improving the accuracy and effectiveness of radiated noise measurement with the optical fiber hydrophone array.
2. Basic control principle of AGC
According to the characteristics of the optical fiber amplifier, it is used to keep the output signal within a constant range. The basic principle is [3-5]: when the collected optical power signal exceeds the maximum limit, the preamplifier gain is reduced to keep the signal amplitude in the effective range; when the optical power signal is lower than the minimum limit, the preamplifier gain is increased, raising the signal-to-noise ratio of the input signal. The implementation principle is shown in Fig. 1. When the input signal V_i is small (V_i < V_1), the output signal increases with the input signal. When the input signal exceeds V_1, the output signal reaches the threshold value V_2; the AGC gain then decreases, so that the amplitude of the output signal is held at the threshold V_2. As the amplitude of the input signal continues to increase, the output signal eventually shows clipping distortion.
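The piecewise behaviour described above can be sketched numerically. This is a minimal illustrative model; the thresholds V_1, V_2 and the gain law are assumptions, not values from the paper.

```python
# Minimal numeric sketch of the AGC input/output characteristic:
# linear growth below the input threshold V1, output held at V2 above it.
# V1 and V2 are illustrative assumptions.

def agc_output(v_in, v1=1.0, v2=2.0):
    """Piecewise AGC model: linear below V1, output held at V2 above it."""
    fixed_gain = v2 / v1          # gain in the linear region
    if v_in < v1:
        return fixed_gain * v_in  # output grows with input
    return v2                     # gain reduced so output sits at V2

for v in (0.25, 0.5, 1.5, 3.0):
    print(v, agc_output(v))
```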
Fig. 1. The schematic diagram of AGC control
Fig. 2. The AGC model
3. AGC mathematical model
According to the above analysis of the principle, a digital circuit suitable for gain adaptive control of the photoelectric preamplifier is designed, as shown in Fig. 2. It is mainly composed of an adjustable attenuator, an amplifier, a coupler, a detector and a loop filter.
The gain of the variable gain amplifier (VGA) is expressed as follows:
{G}_{V}\left(t\right)={K}_{V}\cdot {\upsilon }_{F}\left(t\right)+{b}_{V},
where K_V is the VGA control accuracy, υ_F is the output voltage of the loop filter that controls the VGA gain, and b_V is the VGA gain at a control voltage of 0 mV. If the input and output voltages of the active integral loop filter of the AGC are υ_D and υ_F respectively, and the reference voltage is υ_R, then:
{\upsilon }_{F}\left(t\right)=f\left({\upsilon }_{D}\left(t\right)\right){|}_{{\upsilon }_{R}}.
If the insertion loss of the coupler is set to 0 and the coupling degree is D, the output detection voltage can be expressed as:
{\upsilon }_{D}\left(t\right)={K}_{D}\cdot {P}_{D}\left(t\right)+{b}_{D},
where K_D is the detection accuracy of the detector, P_D is the input level of the detector, and b_D is the detection voltage when the input level of the detector is 0 dBm.
From the above analysis, the output of AGC circuit is expressed as follows:
{P}_{O}\left(t\right)={P}_{i}\left(t\right)+{G}_{V}\left(t\right).
The input of the detector can be expressed as:
{P}_{D}\left(t\right)={P}_{O}\left(t\right)-D.
Eq. (5) is the mathematical model of AGC circuit in time domain [6, 7].
Carrying out the Laplace transform, it can be obtained that:
\left\{\begin{array}{l}\mathrm{\Delta }{V}_{F}=\mathrm{\Delta }{V}_{D}\cdot {H}_{F},\\ \mathrm{\Delta }{P}_{O}=\mathrm{\Delta }{P}_{I}+\mathrm{\Delta }{G}_{V},\\ \mathrm{\Delta }{G}_{V}={K}_{V}\cdot \mathrm{\Delta }{V}_{F},\\ \mathrm{\Delta }{V}_{D}={K}_{D}\cdot \mathrm{\Delta }{P}_{D},\\ \mathrm{\Delta }{P}_{D}=\mathrm{\Delta }{P}_{O}.\end{array}\right.
Eq. (6) is the mathematical model of the AGC control circuit after the Laplace transformation, where H_F represents the closed-loop transfer function of the loop filter. Solving the system gives:
\left\{\begin{array}{l}\frac{\mathrm{\Delta }{P}_{O}}{\mathrm{\Delta }{P}_{I}}=\frac{1}{1-{K}_{V}{K}_{D}{H}_{F}},\\ \frac{\mathrm{\Delta }{V}_{F}}{\mathrm{\Delta }{P}_{I}}=\frac{{K}_{D}{H}_{F}}{1-{K}_{V}{K}_{D}{H}_{F}}.\end{array}\right.
Here, ΔP_O/ΔP_I is the closed-loop transfer function of the AGC control circuit, and ΔV_F/ΔP_I represents the relationship between the AGC control voltage and the variation of the input level.
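The closed-loop relation ΔP_O/ΔP_I = 1/(1 − K_V·K_D·H_F) can be evaluated numerically. The parameter values below are hypothetical, chosen only to show that a large negative loop gain strongly suppresses input-level variations.

```python
# Numeric check of the closed-loop relation derived above:
# dP_O/dP_I = 1 / (1 - K_V * K_D * H_F).
# Parameter values are hypothetical; a gain-reducing loop has
# negative loop gain K_V*K_D*H_F.

def closed_loop_gain(k_v, k_d, h_f):
    loop_gain = k_v * k_d * h_f
    return 1.0 / (1.0 - loop_gain)

# Example: VGA slope 4 dB/V, detector slope 0.5 V/dB, inverting filter gain -10
g = closed_loop_gain(k_v=4.0, k_d=0.5, h_f=-10.0)
print(g)  # a 1 dB input change produces only ~0.048 dB output change
```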
Based on the above principle analysis, a gain adaptive adjustment system of optical fiber array optical preamplifier based on AGC is designed, as shown in Fig. 3.
Fig. 3. The block diagram of adaptive gain adjustment principle
The realization principle of the adaptive control system is as follows [8, 9]: the optical signals are converted to electrical signals by the photoelectric conversion module, and the converted signals are preliminarily amplified by the signal processing module to improve the anti-interference ability. AGC gain control then keeps the output signal within the threshold range. The acoustic signal passing through the AGC adaptive gain adjustment module is sent through the AD conversion unit to the high-performance FPGA gain control unit. The FPGA control unit monitors the output of the AD conversion unit: when the output value is greater than the high AGC threshold, the gain is automatically reduced to prevent clipping distortion; when the output of the AD conversion unit is less than the pre-set low threshold, the gain is automatically increased so that the peak value of the output signal stays in the effective working range. In this way, the FPGA control unit, the AGC adaptive gain adjustment module, the AD conversion unit and the DA conversion unit constitute a closed feedback system: the FPGA control unit detects the output of the AD conversion unit and adjusts the gain of the AGC module through feedback, realizing adaptive gain adjustment of the photoelectric preamplifier and ensuring the stability and reliability of signal modulation and demodulation.
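The threshold-based gain update performed by the FPGA control unit can be sketched as follows. The thresholds, step size and signal model are assumptions for illustration; the real unit acts on AD samples in hardware.

```python
# Sketch of the threshold-based gain update described above.
# High/low thresholds and the 1 dB step size are illustrative assumptions.

def update_gain(gain_db, sample_peak, high=0.9, low=0.1, step_db=1.0):
    """Reduce gain above the high threshold, raise it below the low one."""
    if sample_peak > high:
        return gain_db - step_db   # prevent clipping distortion
    if sample_peak < low:
        return gain_db + step_db   # lift weak signals into the working range
    return gain_db                 # already in the effective range

gain = 0.0
for peak in (0.95, 0.95, 0.5, 0.05):  # illustrative AD sample peaks
    gain = update_gain(gain, peak)
print(gain)
```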
Due to the large amount of signal processing data and high real-time requirements of the optical fiber array test system, especially including multi-channel signals, the synchronous processing requirements of signal processing are higher. In order to realize the synchronous acquisition and control of all boards and cards, the synchronous control function is designed, and the high-performance FPGA is used as the core for AD sampling timing signal control [10]. The principle block diagram of synchronous control module is shown in Fig. 4.
Fig. 4. The block diagram of synchronization control module
5.1. Light source signal modulation test
According to the research and analysis of the light source control technology and the design of the modulation and amplification system, in order to verify the performance of the light source modulation and amplification system, the light source signal PGC modulation is carried out with 128 kHz, and the modulation result is shown in Fig. 5.
Fig. 5. The result of light source PGC modulation
It can be seen that the output wavelength is stable and matches the wavelength of fiber array. The phase noise test of the light source near 1530 nm is shown in Fig. 6.
The measurement results show that the background noise of the demodulated noise spectrum at 1 kHz can reach –110 dB with good frequency response, which can meet the design requirements of optical fiber hydrophone array light source.
5.2. Signal processing synchronization test
In order to fully test the synchronization of the synchronization control module, the clock synchronization signal of the first AD of the first AD board is selected as the reference and connected to an oscilloscope probe. Another input channel of the oscilloscope measures the clock synchronization signals of the other ADs of the first AD board and of each AD of the other boards. The deviation of the clock synchronization signals between different ADs is measured, and the data are recorded in Table 1. The delay test diagrams of AD2 of board 1 and AD1 of board 2, relative to AD1 of board 1, are shown in Fig. 7.
Fig. 6. Phase noise test diagram of the light source near 1533.56 nm
Fig. 7. The test diagram of signal processing synchronization
a) AD2 of board 1 versus AD1 delay of board 1
b) AD1 of board 2 versus AD1 delay of board 1
Table 1. The synchronous signal test data record table: AD synchronous signal error relative to the reference AD (unit: ns)
The measurement results of synchronous clock comparison show that the delay between each acquisition channel is less than 100 ns (less than the millisecond level index requirements), and the signal processing synchronization is good, which meets the requirements of signal synchronization processing of fiber optic hydrophone array.
In the measurement of the radiated noise of underwater structures, optical power fluctuations and demodulation failures are caused by factors such as light source wavelength drift and array deformation, which degrade the performance of the noise measurement system. An adaptive gain control system for the photoelectric preamplifier based on digital AGC was designed to improve the demodulation of the optical fiber source and of the multi-channel optical signals. The experimental results show that the designed adaptive control system achieves a stable output wavelength of the fiber-optic hydrophone array, excellent frequency response characteristics and good synchronization between acquisition channels; it meets the design requirements of the underwater equipment radiated noise measurement system and provides reliable support for improving the accuracy of radiated noise measurement of underwater structures.
Wang Youzhao, Huang Jing. Optical Fiber Sensing Technology. Xi'an University of Electronic Science and Technology Press, Xi'an, 2012, p. 84-108.
Cai Bingtao, Chen Xiaobao. Design and implementation of fiber optic hydrophone array demodulation system based on FPGA. Optical Fiber and Cable and Its Application Technology, Vol. 1, Issue 33, 2018, p. 36-40.
Li Jianqiang, Qi Hongyu. Analysis of feedback automatic gain control circuit. Electromagnetic Field and Microwave, Vol. 48, Issue 12, 2018, p. 1096-1099.
Liu Yaru. Research and Design of Automatic Gain Control Circuit for Wireless Communication Applications. Hangzhou University of Electronic Science and Technology, Hangzhou, 2017.
Chen Dong, He Lin, Chen Shuchun, et al. Design of a weak photoelectric signal modulation and demodulation circuit. Electronic Measurement Technology, Vol. 39, Issue 4, 2016, p. 119-122.
Yu Xiaozhi. Study on Interferometric Optical Fiber Acoustic Sensor System and AGC Scheme. Harbin Engineering University, Harbin, 2007.
Sui Zhanju, Lu Wen. Design and implementation of an improved AGC. Radio Engineering, Vol. 40, Issue 12, 2010, p. 61-64.
Li Pengchong. Research and System Implementation of Multi-Channel PGC Demodulation Technology Based on FPGA. Harbin Engineering University, Harbin, 2015.
Xing Guanghui, Zhang Wanrong, Xie Hongzhi, et al. An ultra wideband variable gain amplifier using attenuator. Microelectronics, Vol. 43, Issue 4, 2013, p. 468-470.
Su Ming. Development of Wide Range AGC Control Circuit for HF Receiver Front End. Wuhan University of Technology, Wuhan, 2012.
Operational_calculus Knowpia
The idea of representing the processes of calculus, differentiation and integration, as operators has a long history that goes back to Gottfried Wilhelm Leibniz. The mathematician Louis François Antoine Arbogast was one of the first to manipulate these symbols independently of the function to which they were applied.[1]
This approach was further developed by François-Joseph Servois, who developed convenient notations.[2] Servois was followed by a school of British and Irish mathematicians including Charles James Hargreave, George Boole, Bownin, Carmichael, Doukin, Graves, Murphy, William Spottiswoode and Sylvester.
Treatises describing the application of operator methods to ordinary and partial differential equations were written by Robert Bell Carmichael in 1855[3] and by Boole in 1859.[4]
This technique was fully developed by the physicist Oliver Heaviside in 1893, in connection with his work in telegraphy.
Guided greatly by intuition and his wealth of knowledge on the physics behind his circuit studies, [Heaviside] developed the operational calculus now ascribed to his name.[5]
At the time, Heaviside's methods were not rigorous, and his work was not further developed by mathematicians. Operational calculus first found applications in electrical engineering problems, for the calculation of transients in linear circuits after 1910, under the impulse of Ernst Julius Berg, John Renshaw Carson and Vannevar Bush.
A rigorous mathematical justification of Heaviside's operational methods came only after the work of Bromwich that related operational calculus with Laplace transformation methods (see the books by Jeffreys, by Carslaw or by MacLachlan for a detailed exposition). Other ways of justifying the operational methods of Heaviside were introduced in the mid-1920s using integral equation techniques (as done by Carson) or Fourier transformation (as done by Norbert Wiener).
A different approach to operational calculus was developed in the 1930s by Polish mathematician Jan Mikusiński, using algebraic reasoning.
Norbert Wiener laid the foundations for operator theory in his review of the existential status of the operational calculus in 1926:[6]
The brilliant work of Heaviside is purely heuristic, devoid of even the pretense to mathematical rigor. Its operators apply to electric voltages and currents, which may be discontinuous and certainly need not be analytic. For example, the favorite corpus vile on which he tries out his operators is a function which vanishes to the left of the origin and is 1 to the right. This excludes any direct application of the methods of Pincherle…
Although Heaviside’s developments have not been justified by the present state of the purely mathematical theory of operators, there is a great deal of what we may call experimental evidence of their validity, and they are very valuable to the electrical engineers. There are cases, however, where they lead to ambiguous or contradictory results.
The key element of the operational calculus is to consider differentiation as an operator p = d/dt acting on functions. Linear differential equations can then be recast in the form of "functions" F(p) of the operator p acting on the unknown function equaling the known function. Here, F denotes something that takes in an operator p and returns another operator F(p). Solutions are then obtained by making the inverse operator of F act on the known function. The operational calculus generally is typified by two symbols, the operator p, and the unit function 1. The operator in its use probably is more mathematical than physical, the unit function more physical than mathematical. The operator p in the Heaviside calculus initially is to represent the time differentiator d/dt. Further, it is desired this operator bear the reciprocal relation such that p−1 denotes the operation of integration.[5]
In electrical circuit theory, one is trying to determine the response of an electrical circuit to an impulse. Due to linearity, it is enough to consider a unit step:
Heaviside step function: H(t) such that H(t) = 0 if t < 0 and H(t) = 1 if t > 0.
The simplest example of application of the operational calculus is to solve: p y = H(t), which gives
{\displaystyle y=\operatorname {p} ^{-1}H=\int _{0}^{t}H(u)\,du=t\ H(t)}
From this example, one sees that
{\displaystyle \operatorname {p} ^{-1}}
represents integration. Furthermore, n iterated integrations are represented by
{\displaystyle \operatorname {p} ^{-n},}
{\displaystyle \operatorname {p} ^{-n}H(t)={\frac {t^{n}}{n!}}H(t).}
Continuing to treat p as if it were a variable,
{\displaystyle {\frac {\operatorname {p} }{\operatorname {p} -a}}H(t)={\frac {1}{1-{\frac {a}{\operatorname {p} }}}}\ H(t),}
which can be rewritten by using a geometric series expansion,
{\displaystyle {\frac {1}{1-{\frac {a}{\operatorname {p} }}}}H(t)=\sum _{n=0}^{\infty }a^{n}\operatorname {p} ^{-n}H(t)=\sum _{n=0}^{\infty }{\frac {a^{n}t^{n}}{n!}}H(t)=e^{at}H(t).}
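As a numerical sanity check (not part of Heaviside's treatment), one can verify that y(t) = e^{at} satisfies (1 − a p^{-1}) y = H(t) for t > 0, where p^{-1} is integration from 0; a small Python sketch:

```python
import math

# Sketch: verify numerically that y(t) = exp(a*t) satisfies
#   y(t) - a * (integral of y(u) over [0, t]) = 1   for t > 0,
# i.e. (1 - a/p) y = H, so that y = p/(p - a) H = e^{at} H.
a = 0.7

def integral_y(t, steps=100_000):
    """Trapezoidal approximation of the integral of exp(a*u) over [0, t]."""
    h = t / steps
    total = 0.5 * (1.0 + math.exp(a * t))
    for k in range(1, steps):
        total += math.exp(a * k * h)
    return total * h

for t in (0.5, 1.0, 2.0):
    lhs = math.exp(a * t) - a * integral_y(t)
    assert abs(lhs - 1.0) < 1e-6, (t, lhs)
```

The exact integral is (e^{at} − 1)/a, so the left-hand side is identically 1; the assertion only checks the quadrature error stays small.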
Using partial fraction decomposition, one can define any fraction in the operator p and compute its action on H(t). Moreover, if the function 1/F(p) has a series expansion of the form
{\displaystyle {\frac {1}{\ F(\operatorname {p} )\ }}=\sum _{n=0}^{\infty }a_{n}\operatorname {p} ^{-n},}
it is straightforward to find
{\displaystyle {\frac {1}{F(\operatorname {p} )}}H(t)=\sum _{n=0}^{\infty }a_{n}{\frac {t^{n}}{n!}}H(t).}
Applying this rule, solving any linear differential equation is reduced to a purely algebraic problem.
Heaviside went further, and defined fractional powers of p, thus establishing a connection between operational calculus and fractional calculus.
Using the Taylor expansion, one can also verify the Lagrange-Boole translation formula, ea p f(t) = f(t + a), so the operational calculus is also applicable to finite difference equations and to electrical engineering problems with delayed signals.
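For a polynomial f the exponential series terminates, so the Lagrange-Boole formula e^{a p} f(t) = f(t + a) can be checked exactly; an illustrative sketch (helper names are mine):

```python
from math import factorial

def poly_eval(coeffs, x):
    """Evaluate a polynomial given low-to-high coefficients."""
    return sum(c * x**k for k, c in enumerate(coeffs))

def poly_deriv(coeffs):
    """Coefficient list of the derivative polynomial."""
    return [k * c for k, c in enumerate(coeffs)][1:]

def shift_by_operator(coeffs, t, a):
    """Apply e^{a p} = sum_n a^n p^n / n! to a polynomial at the point t.

    For a degree-d polynomial the series terminates after d+1 terms.
    """
    total, f, n = 0.0, list(coeffs), 0
    while f:
        total += a**n / factorial(n) * poly_eval(f, t)
        f = poly_deriv(f)
        n += 1
    return total

f = [1.0, -2.0, 0.0, 3.0]          # f(t) = 1 - 2t + 3t^3
t, a = 1.5, 0.75
assert abs(shift_by_operator(f, t, a) - poly_eval(f, t + a)) < 1e-9
```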
^ Louis Arbogast (1800) Du Calcul des Derivations, link from Google Books
^ Francois-Joseph Servois (1814) Analise Transcendante. Essai sur un Nouveau Mode d'Exposition des Principes du Calcul Différentiel, Annales de Gergonne 5: 93–140
^ Robert Bell Carmichael (1855) A treatise on the calculus of operations, Longman, link from Google Books
^ George Boole (1859) A Treatise on Differential Equations, chapters 16 & 17: Symbolical methods, link from HathiTrust
^ a b B. L. Robertson (1935) Operational Method of Circuit Analysis, Transactions of the American Institute of Electrical Engineers 54(10):1035–45, link from IEEE Xplore
^ Norbert Wiener (1926) The Operational Calculus, Mathematische Annalen 95: 557, link from Göttingen Digitalisierungszentrum
Terquem and Gerono (1855) Nouvelles Annales de Mathematiques: journal des candidats aux écoles polytechnique et normale 14, 83 [Some historical references on the precursor work till Carmichael].
O. Heaviside (1892) Electrical Papers, London
O. Heaviside (1893, 1899, 1902) Electromagnetic Theory, London
O. Heaviside (1893) Proc. Roy. Soc. (London) 52: 504-529, 54: 105-143 (1894)
J. R. Carson (1926) Bull. Amer. Math. Soc. 32, 43.
J. R. Carson (1926) Electric Circuit Theory and the Operational Calculus, McGraw Hill.
H. Jeffreys (1927) Operational Methods In Mathematical Physics Cambridge University Press, also at Internet Archive
H. W. March (1927) Bull. Amer. Math. Soc. 33, 311, 33, 492 .
Ernst Berg (1929) Heaviside's Operational Calculus, McGraw Hill via Internet Archive
Vannevar Bush (1929) Operational Circuit Analysis with an appendix by Norbert Wiener, John Wiley & Sons
H. T. Davis (1936) The Theory of Linear Operators (Principia Press, Bloomington).
N. W. McLachlan (1941) Modern Operational Calculus (Macmillan).
H. S. Carslaw (1941) Operational Methods in Applied Mathematics Oxford University Press.
Balthasar van der Pol & H. Bremmer (1950) Operational calculus Cambridge University Press
B. van der Pol (1950) "Heaviside's Operational Calculus" in Heaviside Centenary Volume by the Institute of Electrical Engineers
R. V. Churchill (1958) Operational Mathematics McGraw-Hill
J. Mikusinski (1960) Operational Calculus Elsevier
Rota, G. C.; Kahaner, D.; Odlyzko, A. (1973). "On the foundations of combinatorial theory. VIII. Finite operator calculus". Journal of Mathematical Analysis and Applications. 42 (3): 684. doi:10.1016/0022-247X(73)90172-8.
Jesper Lützen (1979) "Heaviside's operational calculus and attempts to rigorize it", Archive for History of Exact Sciences 21(2): 161–200 doi:10.1007/BF00330405
Paul J. Nahin (1985) Oliver Heaviside, Fractional Operators, and the Age of the Earth, IEEE Transactions on Education E-28(2): 94–104, link from IEEE Xplore.
I. V. Lindell, Heaviside Operational Rules Applicable to Electromagnetic Problems
Ron Doerfler Heaviside's Calculus
Jack Crenshaw essay showing use of operators More On the Rosetta Stone
|
Do any two spanning trees of a simple graph always have some common edges? - PhotoLens
I tried a few cases and found that any two spanning trees of a simple graph have some common edges. I mean, I couldn't find any counterexample so far. But I couldn't prove or disprove this either. How can I prove or disprove this conjecture?
No, consider the complete graph K4: its six edges split into two edge-disjoint spanning trees, so two spanning trees need not share any edge.
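A quick pure-Python check that K4 really does contain two edge-disjoint spanning trees (helper names are mine):

```python
def is_spanning_tree(n, edges):
    """Check that `edges` forms a spanning tree on vertices 0..n-1
    via union-find: n-1 edges with no cycle implies connected."""
    if len(edges) != n - 1:
        return False
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:            # this edge would close a cycle
            return False
        parent[ru] = rv
    return True

# Two edge-disjoint spanning trees of K4 on vertices {0, 1, 2, 3}.
t1 = [(0, 1), (1, 2), (2, 3)]
t2 = [(0, 2), (0, 3), (1, 3)]
assert is_spanning_tree(4, t1) and is_spanning_tree(4, t2)
assert set(t1).isdisjoint(t2)   # no common edge
```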
Source : Link , Question Author : Mr. Sigma. , Answer Author : Bjørn Kjos-Hanssen
|
FromMatlab - Maple Help
translate a MATLAB(R) expression to Maple
FromMatlab(s)
FromMatlab(s, evaluate )
FromMatlab(s, string )
FromMatlab(s, AddTranslator=false )
FromMatlab converts a statement written in MATLAB® syntax to an equivalent statement in Maple syntax.
By default the FromMatlab command shows the Maple syntax conversion and executes the converted code. Specifying the evaluate option will just do the evaluation without showing the code conversion first.
When the string option is specified, the given MATLAB® expression is translated to a Maple input string. It is generally better to first convert a MATLAB® expression to a string, so you can examine the translation, confirm its accuracy, and make any necessary adjustments before evaluating it.
When the AddTranslator option is set to false, the translator bypasses the AddTranslator extension mechanism and gives a general MATLAB® to Maple syntax translation without attempting to interpret the meaning of MATLAB® function calls.
FromMatlab only understands a subset of the MATLAB® language. Among other things, structs and classes are not handled by the translator. Beyond the basic syntax, FromMatlab also does not know about every MATLAB® function. There are built-in conversions for over a hundred common MATLAB® commands, but some of the translations only cover the most common cases in those commands. Missing commands can be supplied by the user via AddTranslator.
If a translated name is not matched by anything known to AddTranslator, but does exactly match a protected name in Maple, the name will be prepended with m_. For example, FromMatlab("expand(x)") will generate m_expand(x) so as not to conflict with Maple's definition of expand. An exact match against an unprotected name does not cause m_ to be prepended. In this case the name is left as is.
The FromMatlab command is primarily an aid to learning how to write Maple programs for someone who is used to the MATLAB® language. With this in mind, translated code tries not to hide the "glue" -- code that figures out which option is wanted and dispatches to one or more Maple commands.
Another goal is to produce commands that will do exactly in Maple what was done in MATLAB®. Some commands therefore tend to be fairly verbose on translation and may contain things like on-the-fly procedure constructions. These wordy translations are probably not great examples of how Maple code would be constructed, and would not be written as such had they been handwritten from scratch in Maple. They may even be slower as a result. Consider the translation of sin(x). Since Maple can't know what value x is at translation time, it has to assume it could be an array or a constant. The resulting translation will take both into account and construct a call to map. Writing the code by hand, the author would know what x is intended to be at that time, and call the most appropriate function directly.
After translating MATLAB® code to Maple, it might be a good idea to look through the code and prune away cases that will never happen. The code will usually become easier to follow and more efficient.
Note that backslash is Maple's escape character. Inside a string, "\n" is a single newline character, and "\\n" is two characters -- a backslash and the letter "n". So FromMatlab("A.\B"); has the three-character sequence "A.B" as its argument, since "\B" resolves to the letter "B". Use FromMatlab("A.\\B"); to get elementwise left divide. Code read from a file via FromMFile does not need its backslashes escaped.
In MATLAB®, variables like i and pi are initialized to special constant values, but can be reassigned to any value. In order to mimic this behavior the translator converts i to Matlab_i instead of I. Thus this variable can be used as, for example, a control variable in a loop.
\mathrm{with}\left(\mathrm{Matlab}\right):
\mathrm{FromMatlab}\left("\left[ 1 2 ; 3 4\right]"\right)
[\begin{array}{cc}\textcolor[rgb]{0,0,1}{1}& \textcolor[rgb]{0,0,1}{2}\\ \textcolor[rgb]{0,0,1}{3}& \textcolor[rgb]{0,0,1}{4}\end{array}]
\mathrm{FromMatlab}\left("A .* B"\right)
\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{B}
\mathrm{FromMatlab}\left("A .* B",\mathrm{string}\right)
\textcolor[rgb]{0,0,1}{"A *~ B;"}
\mathrm{FromMatlab}\left("function \left[x\right] = mysum\left(varargin\right)\n x = sum\left(\left[varargin\left\{:\right\}\right]\right)"\right)
mysum := proc( varargin )
x := ArrayTools:-AddAlongDimension(ArrayTools:-Concatenate(2, args));
\textcolor[rgb]{0,0,1}{\mathbf{proc}}\left(\textcolor[rgb]{0,0,1}{\mathrm{varargin}}\right)\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{\mathbf{local}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{ArrayTools}}\textcolor[rgb]{0,0,1}{:-}\textcolor[rgb]{0,0,1}{\mathrm{AddAlongDimension}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{\mathrm{ArrayTools}}\textcolor[rgb]{0,0,1}{:-}\textcolor[rgb]{0,0,1}{\mathrm{Concatenate}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{args}}\right)\right)\textcolor[rgb]{0,0,1}{;}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{\mathbf{return}}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{x}\phantom{\rule[-0.0ex]{0.5em}{0.0ex}}\textcolor[rgb]{0,0,1}{\mathbf{end proc}}
\mathrm{mysum}\left(1,2,3\right)
\textcolor[rgb]{0,0,1}{6}
Matlab[AddTranslator]
|
Enter a complex number - Maple Help
Enter a Complex Number?
In Maple, the default representation of the imaginary unit
\sqrt{-1}
is I. Thus,
1+2 I
is a complex number in Maple. The example below demonstrates two ways to enter a complex number in Maple.
Maple provides a way to customize the manner in which complex numbers are displayed. For instance, you may want to use i or j as the imaginary unit instead of I.
Example: Multiply Two Complex Numbers
Using a Different Symbol for the Imaginary Unit
Maple uses I for the constant \sqrt{-1}. In the example below, we multiply 1 + 2 I and 3 + 4 I and display the result inline. Type \left(1+2 I\right). Put a space between 2 and I to signify multiplication. (Alternatively, use * for multiplication.) Enter * and then type \left(3+4 I\right). Press Ctrl + = to evaluate the expression inline. Note that the multiplication symbol is needed between the two expressions in parentheses.
\left(1+2 I\right)\cdot \left(3+4 I\right)
\textcolor[rgb]{0,0,1}{-5}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{10}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{I}
You can also enter complex numbers using the Common Symbols palette. From the palette, the symbols
ⅈ
ȷ
I
can all be used to enter a complex number.
Note that simply typing
i
will not produce the imaginary unit, but typing
I
will.
When you type the letter i or j in Maple, it is understood as the name `i` or `j`. Only I is an initially known constant. This means:
You can use i or j as variables. For instance, as the index in expressions such as
\mathrm{seq}\left({i}^{2},i=1..5\right)
\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{9}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{16}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{25}
Typing I results in the imaginary constant. Thus,
{I}^{2}
\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}
From the Common Symbols palette:
ⅈ
ȷ
I
all mean the imaginary constant.
Results are displayed using I for the constant
\sqrt{-1}
{x}^{2}+1=0
\stackrel{\text{solve}}{\to }
\left\{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{I}\right\}\textcolor[rgb]{0,0,1}{,}\left\{\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{=}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{I}\right\}
You can customize the setting that specifies which symbol represents the imaginary unit. This will enable you to simply type this symbol to get
\sqrt{-1}
. Furthermore, ordinarily in output the imaginary unit is displayed with a capital I, no matter which symbol was used for input. By changing this setting, you change the output display.
To change the symbol used for the imaginary unit (both for input and output), use the interface command. The calling sequence is interface(imaginaryunit=symbol);
Here, we set
j
to be the symbol for the imaginary unit.
Thereafter, clicking on
ⅈ
ȷ
I
in the Common Symbols palette, or typing the letter
j,
will produce the imaginary unit, and its symbol as displayed in any output will be
ȷ
From the palette:
In computations:
|
% yield Wikipedia
Measure of a quantity of moles of a product formed in a chemical reaction
In the section "Calculations of yields in the monitoring of reactions" in the 1996 4th edition of Vogel's Textbook of Practical Organic Chemistry (1978), the authors write that, "theoretical yield in an organic reaction is the weight of product which would be obtained if the reaction has proceeded to completion according to the chemical equation. The yield is the weight of the pure product which is isolated from the reaction."[1]: 33 [Notes 2] In the 1996 edition of Vogel's Textbook, percentage yield is expressed as,[1]: 33 [Notes 3]
{\displaystyle {\mbox{percent yield}}={\frac {\mbox{weight of product}}{\mbox{theoretical yield}}}\times 100}
According to the 1996 edition of Vogel's Textbook, yields close to 100% are called quantitative, yields above 90% are called excellent, yields above 80% are very good, yields above 70% are good, yields above 50% are fair, and yields below 40% are called poor.[1]: 33 In their 2002 publication, Petrucci, Harwood, and Herring wrote that the names in Vogel's Textbook were arbitrary and not universally accepted, and that, depending on the nature of the reaction in question, these expectations may be unrealistically high. Yields may appear to be 100% or above when products are impure, as the measured weight of the product will include the weight of any impurities.[6]: 125
Theoretical, actual, and percent yields
{\displaystyle {\mbox{percent yield}}={\frac {\mbox{actual yield}}{\mbox{theoretical yield}}}\times 100}
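In code the formula is a one-liner; here both yields are masses in the same units (the example numbers are illustrative, not from the article):

```python
def percent_yield(actual, theoretical):
    """Percent yield = actual yield / theoretical yield * 100.

    Both arguments must be in the same units (e.g. grams).
    """
    return actual / theoretical * 100.0

# e.g. 4.1 g of isolated product against a 5.0 g theoretical yield
assert abs(percent_yield(4.1, 5.0) - 82.0) < 1e-9   # "very good" on Vogel's scale
```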
Purification of products
Internal standard yield
Reporting of yields
In their 2010 Synlett article, Martina Wernerova and organic chemist, Tomáš Hudlický, raised concerns about inaccurate reporting of yields, and offered solutions—including the proper characterization of compounds.[11] After performing careful control experiments, Wernerova and Hudlický said that each physical manipulation (including extraction/washing, drying over desiccant, filtration, and column chromatography) results in a loss of yield of about 2%. Thus, isolated yields measured after standard aqueous workup and chromatographic purification should seldom exceed 94%.[11] They called this phenomenon "yield inflation" and said that yield inflation had gradually crept upward in recent decades in chemistry literature. They attributed yield inflation to careless measurement of yield on reactions conducted on small scale, wishful thinking and a desire to report higher numbers for publication purposes.[11] Hudlický's 2020 article published in Angewandte Chemie—since retracted—honored and echoed Dieter Seebach's often-cited 1990 thirty-year review of organic synthesis, which had also been published in Angewandte Chemie.[12] In his 2020 Angewandte Chemie 30-year review, Hudlický said that the suggestions that he and Wernerova had made in their 2010 Synlett article, were "ignored by the editorial boards of organic journals, and by most referees."[13]
|
Kirchhoff's circuit laws - Wikipedia
Two equalities that deal with the current and potential difference
For other laws named after Gustav Kirchhoff, see Kirchhoff's laws.
Kirchhoff's circuit laws are two equalities that deal with the current and potential difference (commonly known as voltage) in the lumped element model of electrical circuits. They were first described in 1845 by German physicist Gustav Kirchhoff.[1] This generalized the work of Georg Ohm and preceded the work of James Clerk Maxwell. Widely used in electrical engineering, they are also called Kirchhoff's rules or simply Kirchhoff's laws. These laws can be applied in time and frequency domains and form the basis for network analysis.
Both of Kirchhoff's laws can be understood as corollaries of Maxwell's equations in the low-frequency limit. They are accurate for DC circuits, and for AC circuits at frequencies where the wavelengths of electromagnetic radiation are very large compared to the circuits.
Kirchhoff's current law[edit]
This law, also called Kirchhoff's first law, Kirchhoff's point rule, or Kirchhoff's junction rule (or nodal rule), states that, for any node (junction) in an electrical circuit, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node; or equivalently:
Recalling that current is a signed (positive or negative) quantity reflecting direction towards or away from a node, this principle can be succinctly stated as:
{\displaystyle \sum _{k=1}^{n}{I}_{k}=0}
where n is the total number of branches with currents flowing towards or away from the node.
Kirchhoff's circuit laws were originally obtained from experimental results. However, the current law can be viewed as an extension of the conservation of charge, since charge is the product of current and the time the current has been flowing. If the net charge in a region is constant, the current law will hold on the boundaries of the region.[2][3] This means that the current law relies on the fact that the net charge in the wires and components is constant.
A matrix version of Kirchhoff's current law is the basis of most circuit simulation software, such as SPICE. The current law is used with Ohm's law to perform nodal analysis.
The current law is applicable to any lumped network irrespective of the nature of the network; whether unilateral or bilateral, active or passive, linear or non-linear.
Kirchhoff's voltage law[edit]
The sum of all the voltages around a loop is equal to zero.
This law, also called Kirchhoff's second law, Kirchhoff's loop (or mesh) rule, or Kirchhoff's second rule, states the following:
The directed sum of the potential differences (voltages) around any closed loop is zero.
Similarly to Kirchhoff's current law, the voltage law can be stated as:
{\displaystyle \sum _{k=1}^{n}V_{k}=0}
Derivation of Kirchhoff's voltage law
A similar derivation can be found in The Feynman Lectures on Physics, Volume II, Chapter 22: AC Circuits.[3]
Consider some arbitrary circuit. Approximate the circuit with lumped elements, so that (time-varying) magnetic fields are contained to each component and the field in the region exterior to the circuit is negligible. Based on this assumption, the Maxwell–Faraday equation reveals that
{\displaystyle \nabla \times \mathbf {E} =-{\frac {\partial \mathbf {B} }{\partial t}}=\mathbf {0} }
in the exterior region. If each of the components has a finite volume, then the exterior region is simply connected, and thus the electric field is conservative in that region. Therefore, for any loop in the circuit, we find that
{\displaystyle \sum _{i}V_{i}=-\sum _{i}\int _{{\mathcal {P}}_{i}}\mathbf {E} \cdot \mathrm {d} \mathbf {l} =\oint \mathbf {E} \cdot \mathrm {d} \mathbf {l} =0}
where {\textstyle {\mathcal {P}}_{i}}
are paths around the exterior of each of the components, from one terminal to another.
Note that this derivation uses the following definition for the voltage rise from
{\displaystyle a}
{\displaystyle b}
{\displaystyle V_{a\to b}=-\int _{{\mathcal {P}}_{a\to b}}\mathbf {E} \cdot \mathrm {d} \mathbf {l} }
However, the electric potential (and thus voltage) can be defined in other ways, such as via the Helmholtz decomposition.
In the low-frequency limit, the voltage drop around any loop is zero. This includes imaginary loops arranged arbitrarily in space – not limited to the loops delineated by the circuit elements and conductors. In the low-frequency limit, this is a corollary of Faraday's law of induction (which is one of Maxwell's equations).
This has practical application in situations involving "static electricity".
Kirchhoff's circuit laws are the result of the lumped-element model and both depend on the model being applicable to the circuit in question. When the model is not applicable, the laws do not apply.
The current law is dependent on the assumption that the net charge in any wire, junction or lumped component is constant. Whenever the electric field between parts of the circuit is non-negligible, such as when two wires are capacitively coupled, this may not be the case. This occurs in high-frequency AC circuits, where the lumped element model is no longer applicable.[4] For example, in a transmission line, the charge density in the conductor may be constantly changing.
In a transmission line, the net charge in different parts of the conductor changes with time. In the direct physical sense, this violates KCL.
On the other hand, the voltage law relies on the fact that the action of time-varying magnetic fields are confined to individual components, such as inductors. In reality, the induced electric field produced by an inductor is not confined, but the leaked fields are often negligible.
Modelling real circuits with lumped elements[edit]
The lumped element approximation for a circuit is accurate at low frequencies. At higher frequencies, leaked fluxes and varying charge densities in conductors become significant. To an extent, it is possible to still model such circuits using parasitic components. If frequencies are too high, it may be more appropriate to simulate the fields directly using finite element modelling or other techniques.
To model circuits so that both laws can still be used, it is important to understand the distinction between physical circuit elements and the ideal lumped elements. For example, a wire is not an ideal conductor. Unlike an ideal conductor, wires can inductively and capacitively couple to each other (and to themselves), and have a finite propagation delay. Real conductors can be modeled in terms of lumped elements by considering parasitic capacitances distributed between the conductors to model capacitive coupling, or parasitic (mutual) inductances to model inductive coupling.[4] Wires also have some self-inductance.
According to the first law:
{\displaystyle i_{1}-i_{2}-i_{3}=0}
Applying the second law to the closed circuit s1, and substituting for voltage using Ohm's law gives:
{\displaystyle -R_{2}i_{2}+{\mathcal {E}}_{1}-R_{1}i_{1}=0}
The second law, again combined with Ohm's law, applied to the closed circuit s2 gives:
{\displaystyle -R_{3}i_{3}-{\mathcal {E}}_{2}-{\mathcal {E}}_{1}+R_{2}i_{2}=0}
This yields a system of linear equations in i1, i2, i3:
{\displaystyle {\begin{cases}i_{1}-i_{2}-i_{3}&=0\\-R_{2}i_{2}+{\mathcal {E}}_{1}-R_{1}i_{1}&=0\\-R_{3}i_{3}-{\mathcal {E}}_{2}-{\mathcal {E}}_{1}+R_{2}i_{2}&=0\end{cases}}}
{\displaystyle {\begin{cases}i_{1}+(-i_{2})+(-i_{3})&=0\\R_{1}i_{1}+R_{2}i_{2}+0i_{3}&={\mathcal {E}}_{1}\\0i_{1}+R_{2}i_{2}-R_{3}i_{3}&={\mathcal {E}}_{1}+{\mathcal {E}}_{2}\end{cases}}}
{\displaystyle {\begin{aligned}R_{1}&=100\Omega ,&R_{2}&=200\Omega ,&R_{3}&=300\Omega ,\\{\mathcal {E}}_{1}&=3{\text{V}},&{\mathcal {E}}_{2}&=4{\text{V}}\end{aligned}}}
{\displaystyle {\begin{cases}i_{1}={\frac {1}{1100}}{\text{A}}\\[6pt]i_{2}={\frac {4}{275}}{\text{A}}\\[6pt]i_{3}=-{\frac {3}{220}}{\text{A}}\end{cases}}}
The current i3 has a negative sign which means the assumed direction of i3 was incorrect and i3 is actually flowing in the direction opposite to the red arrow labeled i3. The current in R3 flows from left to right.
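As a sanity check (not part of the article), the system above can be solved with exact rational arithmetic; a sketch using Python's fractions module:

```python
from fractions import Fraction as F

def solve(A, b):
    """Gauss-Jordan elimination over exact fractions."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

R1, R2, R3 = F(100), F(200), F(300)
E1, E2 = F(3), F(4)
# i1 - i2 - i3 = 0;  R1*i1 + R2*i2 = E1;  R2*i2 - R3*i3 = E1 + E2
A = [[F(1), F(-1), F(-1)],
     [R1,   R2,    F(0)],
     [F(0), R2,    -R3]]
b = [F(0), E1, E1 + E2]
i1, i2, i3 = solve(A, b)
assert (i1, i2, i3) == (F(1, 1100), F(4, 275), F(-3, 220))
```

The negative i3 reproduces the article's conclusion that the assumed direction of i3 was wrong.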
Lumped matter discipline
^ Oldham, Kalil T. Swain (2008). The doctrine of description: Gustav Kirchhoff, classical physics, and the "purpose of all science" in 19th-century Germany (Ph. D.). University of California, Berkeley. p. 52. Docket 3331743.
^ Athavale, Prashant. "Kirchoff's current law and Kirchoff's voltage law" (PDF). Johns Hopkins University. Retrieved 6 December 2018.
^ a b "The Feynman Lectures on Physics Vol. II Ch. 22: AC Circuits". feynmanlectures.caltech.edu. Retrieved 2018-12-06.
^ a b Ralph Morrison, Grounding and Shielding Techniques in Instrumentation Wiley-Interscience (1986) ISBN 0471838055
Paul, Clayton R. (2001). Fundamentals of Electric Circuit Analysis. John Wiley & Sons. ISBN 0-471-37195-5.
Johnson, Howard; Graham, Martin (2002). High-speed signal propagation : advanced black magic (10. printing. ed.). Upper Saddle River, NJ: Prentice Hall PTR. ISBN 0-13-084408-X.
Wikimedia Commons has media related to Kirchhoff's circuit laws.
Divider Circuits and Kirchhoff's Laws chapter from Lessons In Electric Circuits Vol 1 DC free ebook and Lessons In Electric Circuits series
Retrieved from "https://en.wikipedia.org/w/index.php?title=Kirchhoff%27s_circuit_laws&oldid=1085618672"
|
The CodeGeneration package offers new support for translating Maple code to the Julia programming language.
With CodeGeneration[Julia], you can translate expressions to code fragments:
\mathrm{with}\left(\mathrm{CodeGeneration}\right):
\mathrm{Julia}\left(\sqrt{{a}^{2}+{b}^{2}+{c}^{2}}\right)
cg = sqrt(a ^ 2 + b ^ 2 + c ^ 2)
\mathrm{Julia}\left(m→\mathrm{add}\left(i,i=1..m\right)\right)
function cg0(m)
CodeGeneration[Julia] translates many Maple data structures and functions, including many routines for linear algebra and special functions, to equivalents in Julia.
\mathrm{Julia}\left(\left[\begin{array}{rr}1& 2\\ 3& 4\end{array}\right]\right)
cg1 = [1 2; 3 4]
\mathrm{Julia}\left(\begin{cases}-{v}^{3}+3v+1& v<1\\ 2{v}^{3}-9{v}^{2}+12v-2& v<2\\ -{v}^{3}+9{v}^{2}-24v+22& \mathrm{otherwise}\end{cases}\right)
cg2 = -v ^ 3 + 3 * v + 1 ? v < 1 : 2 * v ^ 3 - 9 * v ^ 2 + 12 * v - 2 ? v < 2 : -v ^ 3 + 9 * v ^ 2 - 24 * v + 22
\mathrm{Julia}\left(\left(M,n\right)→M - x \mathrm{LinearAlgebra}:-\mathrm{IdentityMatrix}\left(n\right)\right)
cg3(M,n) = M - x * eye(n)
\mathrm{Julia}\left(\mathrm{_C1}\mathrm{BesselJ}\left(\mathrm{\nu },x\right)+\mathrm{_C2}\mathrm{BesselY}\left(\mathrm{\nu },x\right)\right)
cg4 = _C1 * besselj(nu, x) + _C2 * bessely(nu, x)
Code Generation for Julia in Maple can also translate some key commands from Statistics:
\mathrm{Julia}\left('\mathrm{Statistics}:-\mathrm{Mean}\left(\left[5,2,1,4,3\right]\right)'\right)
cg5 = mean([5,2,1,4,3])
\mathrm{Julia}\left('\mathrm{Statistics}:-\mathrm{Median}\left(\left[5,2,1,4,3\right]\right)'\right)
cg6 = median([5,2,1,4,3])
\mathrm{Julia}\left('\mathrm{Statistics}:-\mathrm{StandardDeviation}\left(\left[5,2,1,4,3\right]\right)'\right)
cg7 = std([5,2,1,4,3])
When possible, Maple attempts to translate commands dealing with many distributions into equivalent commands. In this example, the evaluated probability density function is translated.
\mathrm{Julia}\left(\mathrm{Statistics}:-\mathrm{PDF}\left(\mathrm{LogNormal}\left(0,1\right),x\right)\right)
cg9 = 0 ? x < 0 : 1 / x * sqrt(2) * pi ^ (-1//2) * exp(-log(x) ^ 2 / 2) / 2
|
Why are NP-complete problems so different in terms of their approximation? - PhotoLens
I’d like to begin the question by saying I’m a programmer, and I don’t have a lot of background in complexity theory.
One thing that I’ve noticed is that while many problems are NP-complete, when extended to optimization problems, some are far more difficult to approximate than others.
A good example is TSP. Although all kinds of TSP are NP-complete, the corresponding optimization problems get easier and easier to approximate with successive simplifications. The general case is NPO-complete, the metric case is APX-complete, and the Euclidean case actually has a PTAS.
This seems counter-intuitive to me, and I’m wondering whether there is a reason for this.
One reason that we see different approximation complexities for NP-complete problems is that the conditions for NP-completeness constitute a very coarse-grained measure of a problem's complexity. You may be familiar with the basic definition of a problem \Pi
being NP-complete:
\Pi
is in NP, and
For every other problem \Xi
in NP, we can turn an instance x
of \Xi
into an instance y
of \Pi
in polynomial time such that y
is a yes-instance of \Pi
if and only if x
is a yes-instance of \Xi
Consider condition 2: all that is required is that we can take x
and turn it into some y
that preserves the “single-bit” yes/no answer. There’s no conditions about, for example, the relative size of the witnesses to the yes or no (that is, the size of the solution in the optimization context). So the only measure that’s used is the total size of the input which only gives a very weak condition on the size of the solution. So it’s pretty “easy” to turn a \Xi
into a \Pi
We can see the difference in various NP-complete problems by looking at the complexity of some simple algorithms. k-Coloring has a brute force O(k^{n}) algorithm (where n is the input size). For k-Dominating Set, a brute force approach takes O(n^{k}). These are, in essence, the best exact algorithms we have. k-Vertex Cover, however, has a very simple O(2^{k}n^{c}) algorithm (pick an edge, branch on which endpoint to include, mark all covered, keep going until you have no edges unmarked or you hit your budget of k and backtrack). Under polynomial-time many-one reductions (Karp reductions, i.e. what we're doing in condition 2 above), these problems are equivalent.
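The O(2^{k}n^{c}) branching algorithm for k-Vertex Cover sketched above is short enough to write out; an illustrative implementation (not from the answer):

```python
def vertex_cover_at_most_k(edges, k):
    """Bounded search tree for k-Vertex Cover: pick any uncovered edge
    and branch on which endpoint joins the cover. O(2^k * |E|) time."""
    if not edges:
        return True             # all edges covered
    if k == 0:
        return False            # edges remain but budget exhausted
    u, v = edges[0]
    for pick in (u, v):         # branch: cover this edge via u or via v
        rest = [e for e in edges if pick not in e]
        if vertex_cover_at_most_k(rest, k - 1):
            return True
    return False

# The cycle C4 has a vertex cover of size 2 but not of size 1.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert vertex_cover_at_most_k(c4, 2)
assert not vertex_cover_at_most_k(c4, 1)
```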
When we start to approach complexity with even slightly more delicate tools (approximation complexity, parameterized complexity, any others I can’t think of), the reductions we use become more strict, or rather, more sensitive to the structure of the solution, and the differences start to appear; k
-Vertex Cover (as Yuval mentioned) has a simple 2-approximation (but doesn’t have an FPTAS unless some complexity classes collapse), k
-Dominating Set has a (1+\log n)
-approximation algorithm (but no (c\log n)
-approximation for some c>0
), and approximation doesn’t really make sense at all for the straightforward version of k
-Coloring.
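For concreteness, the simple 2-approximation for Vertex Cover mentioned above is just a greedy maximal matching; a short Python sketch (assuming an edge-list input; names are mine):

```python
def vertex_cover_2approx(edges):
    """Take both endpoints of every edge not yet covered.

    The chosen edges form a maximal matching; any optimal cover must
    contain at least one endpoint per matching edge, so the result has
    size at most 2 * OPT."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover
```

On the path 1–2–3–4 this returns {1, 2, 3, 4}: exactly twice the optimum {2, 3}, so the factor-2 bound is tight.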
Source : Link , Question Author : GregRos , Answer Author : Luke Mathieson
|
Revision as of 18:28, 31 May 2015 by MathAdmin (talk | contribs) (Created page with "''' Question ''' Prove the following identity, <br> <center><math>\frac{1-\sin(\theta)}{\cos(\theta)}=\frac{\cos(\theta)}{1+\sin(\theta)}</math></center> {| class="mw-collap...")
Question Prove the following identity,
{\displaystyle {\frac {1-\sin(\theta )}{\cos(\theta )}}={\frac {\cos(\theta )}{1+\sin(\theta )}}}
1) What can you multiply
{\displaystyle 1-\sin(\theta )}
by to obtain a formula that is equivalent to something involving
{\displaystyle \cos }
1) You can multiply
{\displaystyle 1-\sin(\theta )}
by
{\displaystyle {\frac {1+\sin(\theta )}{1+\sin(\theta )}}}
We start with the left hand side. We have
{\displaystyle {\frac {1-\sin(\theta )}{\cos(\theta )}}={\frac {1-\sin(\theta )}{\cos(\theta )}}{\Bigg (}{\frac {1+\sin(\theta )}{1+\sin(\theta )}}{\Bigg )}}
{\displaystyle {\frac {1-\sin(\theta )}{\cos(\theta )}}={\frac {1-\sin ^{2}(\theta )}{\cos(\theta )(1+\sin(\theta ))}}}
{\displaystyle 1-\sin ^{2}(\theta )=\cos ^{2}(\theta )}
{\displaystyle {\frac {1-\sin(\theta )}{\cos(\theta )}}={\frac {\cos ^{2}(\theta )}{\cos(\theta )(1+\sin(\theta ))}}={\frac {\cos(\theta )}{1+\sin(\theta )}}}
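The identity can also be spot-checked numerically; a quick Python sketch, sampling angles away from the zeros of cos θ:

```python
import math
import random

# spot-check (1 - sin t)/cos t == cos t/(1 + sin t) at many random angles
random.seed(0)
for _ in range(1000):
    t = random.uniform(-1.5, 1.5)   # keeps cos t bounded away from 0
    lhs = (1 - math.sin(t)) / math.cos(t)
    rhs = math.cos(t) / (1 + math.sin(t))
    assert abs(lhs - rhs) < 1e-12
```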
|
APL Wiki:Formatting - APL Wiki
APL Wiki:Formatting
APL Wiki is a standard MediaWiki, so the MediaWiki documentation applies. In particular, the following sections may be useful:
APL Wiki doesn't have all the extensions and templates Wikipedia has, but does have some of its own. This page serves as a reference for many of them. If you are interested in making APL Wiki easier to edit, and know a bit about MediaWiki templates, you are also free to create your own.
Linking to a Wikipedia page is more like an internal link than an external one: both technically, because it uses MediaWiki just like APLWiki, and conceptually, because Wikipedia also has encyclopedic information on a variety of topics. And in fact Wikipedia links are supported by our internal link syntax. This form is preferred because it is displayed with a "W" as an indication to the reader. Write an internal link as it would appear on Wikipedia, but put wikipedia: in front.
A Wikipedia link always needs to be given displayed text with |, because otherwise it will appear with that wikipedia: in the link, a very cluttered display! However, when editing there is a shortcut that can often be used. Writing the | character with no text afterwards will strip wikipedia: and any trailing material in parentheses from the display, so that for example [[wikipedia:APL (programming language)|]] displays as APL.
Mentioning subjects that have their own page
It is fine for a page to have a section about a subject that also has its own page. When that happens, begin the section with {{Main|subject}} where subject is the target page title. Don't include double brackets (as a page link) — it is done for you.
If a subject is notable for reasons unrelated to APL, in addition to being APL notable, then you can indicate this fact using the Also on Wikipedia template. By default, both the text and the Wikipedia link target are taken from the page's title. The optional first argument changes the name used in the text (for example, to use only a person's last name) and the second gives the Wikipedia article to link to.
As the template suggests, only APL-related topics, and important background information, should be covered on the APL Wiki if there is also a substantial Wikipedia article. In contrast, topics which are on Wikipedia only because of APL should eventually have APL Wiki articles with at least as much information as Wikipedia, and should not use the Also on Wikipedia template—instead, these articles should include a link to the Wikipedia article in an "External links" section. However, content should not be copied directly from Wikipedia. Rewrite it from an APL perspective instead.
Yes/No tables
For comparison tables, use the {{Yes}}, {{No}}, and {{Maybe}} templates, e.g.:
! Get wet?
| Indoors || Outdoors || Swimming
| Sunshine || {{No}} || {{No}} || {{Yes}}
| Rain || {{No}} || {{Maybe}} || {{Yes}}
Indoors Outdoors Swimming
Sunshine No No Yes
Rain No Maybe Yes
You can customise the text by supplying an argument:
| Indoors || Outdoors || Swimming
| Sunshine || {{No}} || {{No|Obviously not}} || {{Yes}}
| Rain || {{No|Depends}} || {{Maybe|Umbrella?}} || {{Yes|Obviously}}
Sunshine No Obviously not Yes
Rain Depends Umbrella? Obviously
Inline code is used for single primitives and short expressions and uses the format
<source lang=apl inline>(2=+⌿0=N∘.|N)/N←⍳100</source>
(2=+⌿0=N∘.|N)/N←⍳100
For session transcripts, function definitions and longer expressions, use code blocks like
<source lang=apl>
Tagging dialects
Optionally, you can indicate one or more APL dialects which are able to run the code by using a special template immediately after the source tag, like
{(2=+⌿0=⍵∘.|⍵)/⍵}⍳100
{{Works in|[[Dyalog APL]], [[dzaima/APL]], [[GNU APL]], [[ngn/apl]]}}
Works in: Dyalog APL, dzaima/APL, GNU APL, ngn/apl
You can also include a permalink to TryAPL, Try It Online, repl.it, abrudz.github.io/ngn-apl etc. right before the source tag, for example
{{try|1=https://tryapl.org/?clear&q=%7B(2%3D%2B%E2%8C%BF0%3D%E2%8D%B5%E2%88%98.%7C%E2%8D%B5)%2F%E2%8D%B5%7D%E2%8D%B3100&run}}
Note: Make sure to spell the template exactly as {{try|1= including 1= as otherwise any equal sign in the URL will prevent the template from working properly!
Very long code blocks
{{Collapse|The below code generates a tall column of numbers.|
⍪⍳10
The below code generates a tall column of numbers.
If your code is in a language the highlighter doesn't support, then there are two ways to present code without highlighting it. Don't use a <source> tag with no lang attribute as this puts the page in the "Pages with syntax highlighting errors" category, which is visible at the bottom of the page.
Use <code> tags for inline code and <pre> tags for blocks, avoiding <source> entirely
Use lang=text.
MathJax is enabled, so you can insert mathematical notation (for example for Iverson notation) inline using
included the expression <math>\bot p_{32,33}:+/\alpha^2/I^0</math> in its description
included the expression
{\displaystyle \bot p_{32,33}:+/\alpha ^{2}/I^{0}}
For multiple and larger mathematical expressions, use
:<math>i \gets O^{\bot I^0_{0,1,2,3}}_{\bot I^0_{4,5,6,7}}</math>
{\displaystyle i\gets O_{\bot I_{4,5,6,7}^{0}}^{\bot I_{0,1,2,3}^{0}}}
It is quite common to state equivalences. Please use a proper equivalence arrow, which is easy to insert with the template:
this {{←→}} that
{\displaystyle \Leftrightarrow }
Most articles should have a navigation template giving links to other articles of interest. The navbox should be added after all of the article text, but before the article's categories. A navbox is a template such as {{APL development}}, which displays as an expandable table of links:
The following navboxes exist at the moment:
Template:APL built-ins
Template:APL community
Template:APL dialects
Template:APL features
Template:APL glyphs
Template:APL syntax
If you create a page that belongs in one of these navboxes but isn't already there, add it in the appropriate place in addition to placing the navbox in the new page.
The APL Wiki has a full Wikipedia-style category tree, which you can see by starting at the top-level category Contents. While all of Wikipedia's user-facing content is in encyclopedic articles, APL Wiki also has non-encyclopedic essays that discuss a topic such as how to use APL for a particular problem. When adding a new page, try and fit it into at least one category, but it's fine to leave it out if you're not yet comfortable with the category system (other editors can easily see pages with no category). Categories are added with syntax such as [[Category:IBM employees]] at the end of the page.
Create new categories if needed, but please avoid Overcategorization. As with articles, each category should be placed in a parent category (the only uncategorized category is Contents). Make sure to read Wikipedia's guidelines about categories carefully and understand the existing category tree if you plan to make major changes to the category system.
Specific types of articles
Primitives and other built-ins
When creating a page for a primitive function, operator, or quad name, begin the page with the following template:
{{Built-in|Log|⍟}}
This inserts the text
Log (⍟)
and also adds a large floating illustration of the glyph, as seen at the right. You can use the Built-ins template for primitives with two different glyphs (for example {{Built-ins|Reverse|⌽|⊖}}), and Glyphbox if you just want a box at the right (for example {{Glyphbox|⍤}}). Since all of these templates must be put at the very beginning of the article's first paragraph, Glyphbox is useful if there should be leading text before the primitive's name.
At the bottom of such pages, include {{APL built-ins}} and edit that template's content if the page you're creating isn't already listed there.
Begin your page about an APL dialect with the Infobox array language info box. These are all the optional parameters it supports:
{{Infobox array language
| withdrawn =
| array model =
| index origin =
| function styles =
| numeric types =
| unicode support =
| implementation language =
| implementation languages =
| operating systems =
| documentation =
| run online =
Have a look at the existing dialect pages, e.g. Dyalog APL, to see how these values are used. Try to fill in as many as you can (but use only one of each singular/plural pair). You only need to use title if the language title differs from the page title (e.g. for technical reasons like the inability to create a page that begins with a lowercase letter). In that case, begin the page with {{DISPLAYTITLE:real name}} where real name should be the proper name of the dialect.
At the bottom of the page, include {{APL programming language}} and edit that template's content if the dialect you're creating a page about isn't already listed there.
|
Availability model based on task and risk model based on safety for condition based maintenance | JVE Journals
Menghan Li1 , Xin Liu2 , Kun Yang3 , Zhenguo Li4
1, 4National Engineering Laboratory for Mobile Source Emission Control Technology, China Automotive Technology and Research Center Co., Ltd., Tianjin, China
2, 3Institute of Medical Support Technology, Academy of Systems Engineering, Academy of Military Science of PLA, Tianjin, China
Copyright © 2021 Menghan Li, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
The most important thing for condition based maintenance (CBM) is the maintenance decision-making, while the existing research mainly considers system maintenance decision-making from the economic point of view, but the actual situation is that the availability and safety of the system are more important in most cases. In this paper, the maintenance decision-making model of the system is established by giving priority to the availability and safety of the system, which is more in line with the engineering practice and lays a foundation for the application of condition based maintenance.
The availability model based on task for condition based maintenance is proposed
The risk model based on safety for condition based maintenance is proposed
The model is useful to improve the availability of maintenance decision
Keywords: maintenance decision-making, availability model, risk model, gearbox.
Nowadays, condition based maintenance (CBM) optimization of multi-component systems is widely studied [1-4], and the relationship between the main popular maintenance modes has been analyzed [5]. Many new methods have been proposed. For example, a maintenance decision-making approach combining an artificial neural network, an expert system and emulation technology was proposed to decrease life-cycle cost [6]. A maintenance decision-making optimization considering the influence of human reliability was proposed to minimize cost [7]. A discounted-cost model for a maintenance policy combining both condition-based and age-based criteria for preventive maintenance was proposed, providing a more realistic basis for optimizing maintenance policies [8]. A cost-based selective maintenance decision-making method was presented considering the influence of machine failure intensity [1].
From the above description, we can see that maintenance decision-making models are becoming richer and more practical. However, the existing models focus on cost, ignoring the availability and risk models, which play a more important role in maintenance decision-making. Therefore, in this paper we propose an availability model based on task and a risk model based on safety for condition based maintenance, in order to obtain the maximum availability and the expected risk for the system.
The rest of this paper is organized as follows. In Section 2, the availability model based on task and the risk model based on safety for condition based maintenance are proposed. In Section 3, an experiment on a gearbox is introduced. Then, the proposed method is applied to determine the optimal maintenance interval for the gearbox, which shows that different intervals are obtained from the two models. Finally, the conclusions of the paper are drawn in Section 4.
2. Availability model and risk model
In actual engineering, the risk model based on safety is taken into account first to ensure safety and the success of the task. Secondly, the availability model based on task is considered. In this paper, the components are assumed to be as good as new after maintenance. Moreover, a maintenance decision-making model is proposed to determine the optimal maintenance interval.
2.1. Availability model based on task
In some cases, we cannot consider only the economic efficiency of components; we must also pay attention to task indicators. For example, on the battlefield we focus more on the availability and reliability of aircraft, artillery and firearms than on how to reduce cost. Economic factors are considered only on the premise of ensuring availability. An availability model based on task therefore needs to be established.
In an update cycle, the expected working time is:
E\left({T}_{w}\right)={t}_{i}+{T}_{i}\cdot p\left({x}_{i}\ge {T}_{i}|{Y}_{i}\right)+{\int }_{0}^{{T}_{i}}{x}_{i}\cdot p\left({x}_{i}|{Y}_{i}\right)d{x}_{i},
where
{T}_{i}
is the optimal maintenance interval and
{t}_{i}
is the
i
th monitoring time of the system.
The time definition for a maintenance cycle is shown as follows:
E\left(T\right)={t}_{i}+\left({T}_{i}+{T}_{p}\right)\cdot p\left({x}_{i}\ge {T}_{i}|{Y}_{i}\right)+{\int }_{0}^{{T}_{i}}\left({x}_{i}+{T}_{f}\right)p\left({x}_{i}|{Y}_{i}\right)d{x}_{i},
where
{T}_{p}
is the average time for preventive maintenance and
{T}_{f}
is the average time for corrective maintenance.
The availability model is obtained as follows:
A\left({T}_{i}\right)=\frac{E\left({T}_{w}\right)}{E\left(T\right)}=\frac{{t}_{i}+{T}_{i}\cdot p\left({x}_{i}\ge {T}_{i}|{Y}_{i}\right)+{\int }_{0}^{{T}_{i}}{x}_{i}\cdot p\left({x}_{i}|{Y}_{i}\right)d{x}_{i}}{{t}_{i}+\left({T}_{i}+{T}_{p}\right)\cdot p\left({x}_{i}\ge {T}_{i}|{Y}_{i}\right)+{\int }_{0}^{{T}_{i}}\left({x}_{i}+{T}_{f}\right)p\left({x}_{i}|{Y}_{i}\right)d{x}_{i}}.
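As a sketch of how the availability expression can be evaluated, the following Python code computes A(T) by trapezoidal integration of the two integrals and scans for the maximizing interval. The exponential RUL density used at the bottom is purely illustrative: in the paper, p(x_i|Y_i) would come from the prognostic model, and the 40 h mean, t_i = 10 h, T_p = 5 h and T_f = 15 h are assumed values for the sketch.

```python
import math

def availability(T, t_i, Tp, Tf, rul_pdf, rul_surv, steps=2000):
    """A(T) = E(T_w)/E(T); both integrals evaluated by the trapezoidal rule."""
    dx = T / steps
    int_x = 0.0    # integral of x * p(x|Y) over [0, T]
    int_xf = 0.0   # integral of (x + Tf) * p(x|Y) over [0, T]
    for j in range(steps):
        x0, x1 = j * dx, (j + 1) * dx
        p0, p1 = rul_pdf(x0), rul_pdf(x1)
        int_x += 0.5 * (x0 * p0 + x1 * p1) * dx
        int_xf += 0.5 * ((x0 + Tf) * p0 + (x1 + Tf) * p1) * dx
    s = rul_surv(T)                          # p(x >= T | Y)
    e_work = t_i + T * s + int_x             # expected working time
    e_cycle = t_i + (T + Tp) * s + int_xf    # expected cycle length
    return e_work / e_cycle

# illustrative exponential RUL with a 40 h mean (assumed, not from the paper)
mean = 40.0
pdf = lambda x: math.exp(-x / mean) / mean
surv = lambda x: math.exp(-x / mean)
best_T = max(range(1, 200), key=lambda T: availability(T, 10.0, 5.0, 15.0, pdf, surv))
```

A sanity check on the model: with zero repair times (T_p = T_f = 0) the cycle consists only of working time, so A(T) is exactly 1 for any T.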
2.2. Risk model based on safety
In practice, the safety of the component is the most important consideration. The purpose of maintenance decision-making is to realize accurate, timely and effective maintenance while decreasing maintenance costs. However, this does not mean that safety can be ignored. Once safety problems occur, they may threaten the lives of operating personnel or cause serious loss. As a result, the risk model based on safety needs to be considered first.
The reliability or failure probability can be selected as the objective function in the risk model based on safety. The optimal maintenance interval is determined by lowering the failure probability of the component to a pre-determined level. Assuming the component risk threshold is
\alpha
, the failure probability cannot be higher than
\alpha
in any monitoring interval. Therefore, the probability that the component does not fail is:
p\left({x}_{i}\ge {T}_{i}|{Y}_{i}\right)>1-\alpha .
It can be described via the remaining useful life:
1-{\int }_{0}^{{T}_{i}}p\left({x}_{i}|{Y}_{i}\right)d{x}_{i}>1-\alpha .
The maintenance interval of the risk model based on safety can be obtained from Eq. (5) when the remaining useful life is known.
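Concretely, the safety condition says the largest admissible interval is the α-quantile of the RUL distribution. A small Python sketch that inverts an arbitrary (continuous, increasing) RUL CDF by bisection; the exponential CDF and its 40 h mean below are illustrative assumptions, not values from the paper:

```python
import math

def safe_interval(rul_cdf, alpha, hi, tol=1e-8):
    """Largest T with P(failure before T) <= alpha, found by bisection.

    Assumes rul_cdf is continuous and increasing on [0, hi]."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if rul_cdf(mid) <= alpha:
            lo = mid
        else:
            hi = mid
    return lo

# illustrative exponential RUL with a 40 h mean (assumed)
mean, alpha = 40.0, 0.05
cdf = lambda t: 1.0 - math.exp(-t / mean)
T = safe_interval(cdf, alpha, 400.0)
# closed form for the exponential case: -mean * log(1 - alpha)
```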
Maintenance decision-making usually takes reliability or risk rate as the objective function under the safety requirement. Thus, reducing the failure probability to an acceptable level is used to determine the optimal maintenance interval.
3.1. The introduction of experiment
In the experiment, the gearbox was driven by an electric motor, and a load was added to simulate the real working state of the gearbox. The gearbox is shown in Fig. 1, and the test results are shown in Table 1.
Fig. 1. The experiment of gearbox
Table 1. The test results (columns: monitoring time
{x}_{i}\left(h\right)
and predicted RUL (h))
3.2. The results analysis
In the experiment, we assume that
{T}_{p}=
5 hours,
{T}_{f}=
15 hours and the risk is
\alpha =
0.05. The results of the availability model based on task are shown in Fig. 2.
Fig. 2. The results of availability model based on task
Table 2. The results of the availability model based on task (columns: monitoring point
i
,
{A}_{i}
,
{x}_{i}
,
{T}_{i}
, and prediction of RUL)
The results of the availability model based on task are shown in Table 2, from which we can determine the maintenance interval. When the maintenance interval determined by the availability model is earlier than the RUL, the decision for the system is obtained via the availability model. On the other hand, when the maintenance interval determined by the availability model is later than the RUL, the maintenance decision is based on the RUL. As we can see from Table 2, the maintenance interval is earlier than the RUL at all times, so the optimal maintenance interval for the maintenance decision is always based on the availability model.
In the risk model based on safety, we assume that risk
\alpha =
0.05. Then, the result of maintenance decision-making based on the risk model can be obtained, as shown in Fig. 3. As we can see from the figure, the safety index increases as the maintenance interval decreases, which is consistent with our subjective experience. When
1-\alpha >
0.95 is satisfied, we obtain the optimal maintenance interval.
Fig. 3. The results of risk model based on safety
The results of the risk model based on safety are shown in Table 3, from which we can determine the maintenance interval. As we can see, the results based on the risk model are earlier than the RUL of the gearbox, so it is necessary to propose a maintenance decision based on the risk model.
Table 3. The results of the risk model based on safety (columns:
i
,
{x}_{i}
, and
{T}_{i}
)
In actual engineering, the risk model based on safety is taken into account first to ensure safety and the success of the task. Secondly, the availability model based on task is considered. In this paper, the availability model based on task and the risk model based on safety for condition based maintenance are proposed, and the effectiveness of the method is verified by an experiment on a gearbox. The two models are more in line with the actual situation of the project and can effectively improve the availability of maintenance decisions.
The research is supported by National Engineering Laboratory for Mobile Source Emission Control Technology (No. NELMS2018C01), Science and Technology Directorate Project of Tianjin City (No. 18ZXSZSF00060) and the Natural Science Foundation of Hebei (No. E2019202198). The authors are grateful to all the reviewers and the editor for their valuable comments.
Liu Fanmao, Zhu Haiping, Liu Boxing. Maintenance decision-making method for manufacturing system based on cost and arithmetic reduction of intensity model. Journal of Central South University, Vol. 20, 2013, p. 1559-1571.
Jinmao Guo, Yaohui Zhang, Rui He, et al. The process of mission based maintenance decision-making for material. 3rd International Conference on Maintenance Engineering, Chengdu, China, 2012, p. 15-18.
Nipat Rasmekomen, Ajith Kumar Parlikad. Condition-based maintenance of multi-component systems with degradation state-rate interactions. Reliability Engineering and System Safety, Vol. 148, 2016, p. 1-10.
Si Xiao Sheng, Wang Wenbin, Hu Chang Hua, et al. A Wiener-process-based degradation model with a recursive filter algorithm for remaining useful life estimation. Mechanical Systems and Signal Processing, Vol. 35, 2013, p. 219-237.
Zhou W., Nessim M. A. Optimal design of onshore natural gas pipelines. Journal of Pressure Vessel Technology, Vol. 133, Issue 3, 2011, p. 031702.
Yang Mingzhong, Huang Jinguo, Zang Tiegang. Research on an intelligent maintenance decision-making support system. International Journal of Plant Engineering and Management, Vol. 9, Issue 2, 2004, p. 85-90.
Asadzadeh S. M., Azadeh A. An integrated systemic model for optimization of condition-based maintenance with human error. Reliability Engineering and System Safety, Vol. 124, 2014, p. 117-131.
Vander Weide J. A. M., Pandey M. D., Van Noortwijk J. M. Discounted cost model for condition-based maintenance optimization. Reliability Engineering and System Safety, Vol. 95, 2010, p. 236-246.
|
Multi-index_notation Knowpia
Multi-index notation is a mathematical notation that simplifies formulas used in multivariable calculus, partial differential equations and the theory of distributions, by generalising the concept of an integer index to an ordered tuple of indices.
Definition and basic propertiesEdit
An n-dimensional multi-index is an n-tuple
{\displaystyle \alpha =(\alpha _{1},\alpha _{2},\ldots ,\alpha _{n})}
of non-negative integers (i.e. an element of the n-dimensional set of natural numbers, denoted
{\displaystyle \mathbb {N} _{0}^{n}}
For multi-indices
{\displaystyle \alpha ,\beta \in \mathbb {N} _{0}^{n}}
{\displaystyle x=(x_{1},x_{2},\ldots ,x_{n})\in \mathbb {R} ^{n}}
Componentwise sum and difference
{\displaystyle \alpha \pm \beta =(\alpha _{1}\pm \beta _{1},\,\alpha _{2}\pm \beta _{2},\ldots ,\,\alpha _{n}\pm \beta _{n})}
{\displaystyle \alpha \leq \beta \quad \Leftrightarrow \quad \alpha _{i}\leq \beta _{i}\quad \forall \,i\in \{1,\ldots ,n\}}
Sum of components (absolute value)
{\displaystyle |\alpha |=\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}}
{\displaystyle \alpha !=\alpha _{1}!\cdot \alpha _{2}!\cdots \alpha _{n}!}
{\displaystyle {\binom {\alpha }{\beta }}={\binom {\alpha _{1}}{\beta _{1}}}{\binom {\alpha _{2}}{\beta _{2}}}\cdots {\binom {\alpha _{n}}{\beta _{n}}}={\frac {\alpha !}{\beta !(\alpha -\beta )!}}}
{\displaystyle {\binom {k}{\alpha }}={\frac {k!}{\alpha _{1}!\alpha _{2}!\cdots \alpha _{n}!}}={\frac {k!}{\alpha !}}}
{\displaystyle k:=|\alpha |\in \mathbb {N} _{0}}
{\displaystyle x^{\alpha }=x_{1}^{\alpha _{1}}x_{2}^{\alpha _{2}}\ldots x_{n}^{\alpha _{n}}}
Higher-order partial derivative
{\displaystyle \partial ^{\alpha }=\partial _{1}^{\alpha _{1}}\partial _{2}^{\alpha _{2}}\ldots \partial _{n}^{\alpha _{n}}}
{\displaystyle \partial _{i}^{\alpha _{i}}:=\partial ^{\alpha _{i}}/\partial x_{i}^{\alpha _{i}}}
(see also 4-gradient). Sometimes the notation
{\displaystyle D^{\alpha }=\partial ^{\alpha }}
is also used.[1]
Some applicationsEdit
The multi-index notation allows the extension of many formulae from elementary calculus to the corresponding multi-variable case. Below are some examples. In all the following,
{\displaystyle x,y,h\in \mathbb {C} ^{n}}
{\displaystyle \mathbb {R} ^{n}}
{\displaystyle \alpha ,\nu \in \mathbb {N} _{0}^{n}}
{\displaystyle f,g,a_{\alpha }\colon \mathbb {C} ^{n}\to \mathbb {C} }
{\displaystyle \mathbb {R} ^{n}\to \mathbb {R} }
{\displaystyle \left(\sum _{i=1}^{n}x_{i}\right)^{k}=\sum _{|\alpha |=k}{\binom {k}{\alpha }}\,x^{\alpha }}
Multi-binomial theorem
{\displaystyle (x+y)^{\alpha }=\sum _{\nu \leq \alpha }{\binom {\alpha }{\nu }}\,x^{\nu }y^{\alpha -\nu }.}
Note that, since x + y is a vector and α is a multi-index, the expression on the left is short for (x1 + y1)α1⋯(xn + yn)αn.
For smooth functions f and g
{\displaystyle \partial ^{\alpha }(fg)=\sum _{\nu \leq \alpha }{\binom {\alpha }{\nu }}\,\partial ^{\nu }f\,\partial ^{\alpha -\nu }g.}
For an analytic function f in n variables one has
{\displaystyle f(x+h)=\sum _{\alpha \in \mathbb {N} _{0}^{n}}{{\frac {\partial ^{\alpha }f(x)}{\alpha !}}h^{\alpha }}.}
In fact, for a smooth enough function, we have the similar Taylor expansion
{\displaystyle f(x+h)=\sum _{|\alpha |\leq n}{{\frac {\partial ^{\alpha }f(x)}{\alpha !}}h^{\alpha }}+R_{n}(x,h),}
where the last term (the remainder) depends on the exact version of Taylor's formula. For instance, for the Cauchy formula (with integral remainder), one gets
{\displaystyle R_{n}(x,h)=(n+1)\sum _{|\alpha |=n+1}{\frac {h^{\alpha }}{\alpha !}}\int _{0}^{1}(1-t)^{n}\partial ^{\alpha }f(x+th)\,dt.}
General linear partial differential operator
A formal linear N-th order partial differential operator in n variables is written as
{\displaystyle P(\partial )=\sum _{|\alpha |\leq N}{a_{\alpha }(x)\partial ^{\alpha }}.}
For smooth functions with compact support in a bounded domain
{\displaystyle \Omega \subset \mathbb {R} ^{n}}
{\displaystyle \int _{\Omega }u(\partial ^{\alpha }v)\,dx=(-1)^{|\alpha |}\int _{\Omega }{(\partial ^{\alpha }u)v\,dx}.}
An example theoremEdit
{\displaystyle \alpha ,\beta \in \mathbb {N} _{0}^{n}}
{\displaystyle x=(x_{1},\ldots ,x_{n})}
{\displaystyle \partial ^{\alpha }x^{\beta }={\begin{cases}{\frac {\beta !}{(\beta -\alpha )!}}x^{\beta -\alpha }&{\text{if}}~\alpha \leq \beta ,\\0&{\text{otherwise.}}\end{cases}}}
The proof follows from the power rule for the ordinary derivative; if α and β are in {0, 1, 2, …}, then
{\displaystyle {\frac {d^{\alpha }}{dx^{\alpha }}}x^{\beta }={\begin{cases}{\frac {\beta !}{(\beta -\alpha )!}}x^{\beta -\alpha }&{\hbox{if}}\,\,\alpha \leq \beta ,\\0&{\hbox{otherwise.}}\end{cases}}}
{\displaystyle \alpha =(\alpha _{1},\ldots ,\alpha _{n})}
{\displaystyle \beta =(\beta _{1},\ldots ,\beta _{n})}
{\displaystyle x=(x_{1},\ldots ,x_{n})}
{\displaystyle {\begin{aligned}\partial ^{\alpha }x^{\beta }&={\frac {\partial ^{\vert \alpha \vert }}{\partial x_{1}^{\alpha _{1}}\cdots \partial x_{n}^{\alpha _{n}}}}x_{1}^{\beta _{1}}\cdots x_{n}^{\beta _{n}}\\&={\frac {\partial ^{\alpha _{1}}}{\partial x_{1}^{\alpha _{1}}}}x_{1}^{\beta _{1}}\cdots {\frac {\partial ^{\alpha _{n}}}{\partial x_{n}^{\alpha _{n}}}}x_{n}^{\beta _{n}}.\end{aligned}}}
For each i in {1, …, n}, the function
{\displaystyle x_{i}^{\beta _{i}}}
{\displaystyle x_{i}}
. In the above, each partial differentiation
{\displaystyle \partial /\partial x_{i}}
therefore reduces to the corresponding ordinary differentiation
{\displaystyle d/dx_{i}}
. Hence, from equation (1), it follows that
{\displaystyle \partial ^{\alpha }x^{\beta }}
vanishes if αi > βi for at least one i in {1, …, n}. If this is not the case, i.e., if α ≤ β as multi-indices, then
{\displaystyle {\frac {d^{\alpha _{i}}}{dx_{i}^{\alpha _{i}}}}x_{i}^{\beta _{i}}={\frac {\beta _{i}!}{(\beta _{i}-\alpha _{i})!}}x_{i}^{\beta _{i}-\alpha _{i}}}
{\displaystyle i}
and the theorem follows. Q.E.D.
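The theorem can also be machine-checked by applying the power rule one derivative at a time and comparing the accumulated coefficient against the closed form; a short Python sketch (function names are mine):

```python
from math import factorial

def diff_monomial(alpha, beta):
    """Apply d^alpha to x^beta one derivative at a time.

    Returns (coefficient, remaining exponents); a zero coefficient means
    the monomial vanished (some alpha_i exceeded beta_i)."""
    coeff, exps = 1, list(beta)
    for i, a in enumerate(alpha):
        for _ in range(a):
            coeff *= exps[i]          # power rule on variable x_i
            exps[i] -= 1
            if coeff == 0:
                return 0, None
    return coeff, tuple(exps)

def closed_form(alpha, beta):
    """The theorem's formula: beta!/(beta-alpha)! * x^(beta-alpha) if alpha <= beta, else 0."""
    if any(a > b for a, b in zip(alpha, beta)):
        return 0, None
    coeff = 1
    for a, b in zip(alpha, beta):
        coeff *= factorial(b) // factorial(b - a)
    return coeff, tuple(b - a for a, b in zip(alpha, beta))

assert diff_monomial((1, 2), (3, 2)) == closed_form((1, 2), (3, 2))
assert diff_monomial((2, 0), (1, 5)) == closed_form((2, 0), (1, 5))
```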
^ Reed, M.; Simon, B. (1980). Methods of Modern Mathematical Physics: Functional Analysis I (Revised and enlarged ed.). San Diego: Academic Press. p. 319. ISBN 0-12-585050-6.
Saint Raymond, Xavier (1991). Elementary Introduction to the Theory of Pseudodifferential Operators. Chap. 1.1. CRC Press. ISBN 0-8493-7158-9.
This article incorporates material from multi-index derivative of a power on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
DelaunayGraph - Maple Help
Home : Support : Online Help : Mathematics : Discrete Mathematics : Graph Theory : GraphTheory Package : GeometricGraphs : DelaunayGraph
construct Delaunay graph
DelaunayGraph(P, opts)
The DelaunayGraph(P, opts) command returns a Delaunay graph for the point set P.
Let P be a set of points.
A Delaunay graph is an undirected graph corresponding to a Delaunay triangulation on P. Its vertices correspond to the points in P, and an edge exists between points p and q in P if p and q form the corners of a triangle in a Delaunay triangulation.
The Delaunay graph is an example of a geometric spanner: a graph for which the weighted graph distance connecting any two vertices p and q is a bounded multiple of the Euclidean distance between p and q.
The Gabriel graph on P is a subgraph of the Delaunay graph on P.
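For intuition, the empty-circumcircle characterization of Delaunay triangles can be coded directly; a brute-force O(n^4) Python sketch for small 2-D point sets in general position (this is not the algorithm Maple uses; names are mine):

```python
from itertools import combinations

def delaunay_edges(pts, eps=1e-9):
    """Brute-force Delaunay graph: keep a triangle iff its circumcircle
    contains no other input point (assumes general position)."""
    edges = set()
    for i, j, k in combinations(range(len(pts)), 3):
        (ax, ay), (bx, by), (cx, cy) = pts[i], pts[j], pts[k]
        d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
        if abs(d) < eps:
            continue                      # skip collinear triples
        ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
              + (cx * cx + cy * cy) * (ay - by)) / d
        uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
              + (cx * cx + cy * cy) * (bx - ax)) / d
        r2 = (ax - ux) ** 2 + (ay - uy) ** 2
        empty = all((px - ux) ** 2 + (py - uy) ** 2 >= r2 - eps
                    for m, (px, py) in enumerate(pts) if m not in (i, j, k))
        if empty:
            edges |= {frozenset((i, j)), frozenset((j, k)), frozenset((i, k))}
    return edges

# a triangle with one interior point triangulates as K4: 6 Delaunay edges
pts = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (1.0, 1.0)]
assert len(delaunay_edges(pts)) == 6
```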
Generate a set of random two-dimensional points and draw the Delaunay graph.
\mathrm{with}\left(\mathrm{GraphTheory}\right):
\mathrm{with}\left(\mathrm{GeometricGraphs}\right):
\mathrm{points}≔\mathrm{LinearAlgebra}:-\mathrm{RandomMatrix}\left(60,2,\mathrm{generator}=0..100.,\mathrm{datatype}=\mathrm{float}[8]\right)
\textcolor[rgb]{0,0,1}{\mathrm{points}}\textcolor[rgb]{0,0,1}{≔}\begin{array}{c}[\begin{array}{cc}\textcolor[rgb]{0,0,1}{9.85017697341803}& \textcolor[rgb]{0,0,1}{82.9750304386195}\\ \textcolor[rgb]{0,0,1}{86.0670183749663}& \textcolor[rgb]{0,0,1}{83.3188659363996}\\ \textcolor[rgb]{0,0,1}{64.3746795546741}& \textcolor[rgb]{0,0,1}{73.8671607639673}\\ \textcolor[rgb]{0,0,1}{57.3670557294666}& \textcolor[rgb]{0,0,1}{2.34399775883031}\\ \textcolor[rgb]{0,0,1}{23.6234264844933}& \textcolor[rgb]{0,0,1}{52.6873367387328}\\ \textcolor[rgb]{0,0,1}{47.0027547350003}& \textcolor[rgb]{0,0,1}{22.2459488367552}\\ \textcolor[rgb]{0,0,1}{74.9213491558963}& \textcolor[rgb]{0,0,1}{62.0471820220718}\\ \textcolor[rgb]{0,0,1}{92.1513434709073}& \textcolor[rgb]{0,0,1}{96.3107262637080}\\ \textcolor[rgb]{0,0,1}{48.2319624355944}& \textcolor[rgb]{0,0,1}{63.7563267144141}\\ \textcolor[rgb]{0,0,1}{90.9441877431805}& \textcolor[rgb]{0,0,1}{33.8527464913022}\\ \textcolor[rgb]{0,0,1}{⋮}& \textcolor[rgb]{0,0,1}{⋮}\end{array}]\\ \hfill \textcolor[rgb]{0,0,1}{\text{60 × 2 Matrix}}\end{array}
\mathrm{DG}≔\mathrm{DelaunayGraph}\left(\mathrm{points}\right)
\textcolor[rgb]{0,0,1}{\mathrm{DG}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{\mathrm{Graph 1: an undirected unweighted graph with 60 vertices and 167 edge\left(s\right)}}
\mathrm{DrawGraph}\left(\mathrm{DG}\right)
Chew, L. Paul (1986), "There is a planar graph almost as good as the complete graph", Proc. 2nd Annual Symposium on Computational Geometry, pp. 169–177. doi:10.1145/10515.10534
The GraphTheory[GeometricGraphs][DelaunayGraph] command was introduced in Maple 2020.
|
Reliability analysis of railway freight car wheelset based on Birnbaunm-Saunders fatigue life distribution | JVE Journals
Yaxuan Zhang1 , Ziliang An2 , Mingxu Lu3 , Qingbiao Meng4
1, 4School of Railway Transportation, Shanghai Institute of Technology, Shanghai, 201418, China
2, 3School of Mechanical Engineering and Information, Shanghai Urban Vocational College, Shanghai 201415, China
Received 2 November 2020; received in revised form 11 November 2020; accepted 17 November 2020; published 26 November 2020
Copyright © 2020 Yaxuan Zhang, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Among the running faults of railway freight cars, wheelset faults still occupy a very large proportion of all operation faults and seriously threaten the operational safety of railway freight cars. Focusing on the operating-mileage data of C80 wheelsets, the two-parameter Birnbaum-Saunders (BS) fatigue life distribution and the Weibull distribution are applied to analyze the data of three high-risk wheelset faults, namely the wheel diameter difference, the circumferential wear of the tread and the local depression of the tread. The analysis results show that applying the BS fatigue life distribution to the prediction of wheelset reliability is feasible.
Keywords: railway freight car, wheelset damage, Weibull distribution, Birnbaum-Saunders fatigue life distribution, operational safety.
For the railway freight car, operational safety is largely determined by the reliability of the wheelset. Unfortunately, wheelset failures such as wheel diameter difference, circumferential wear of the tread, and local depression of the tread frequently occur while the train is in service [1]. According to the standard ‘Railway freight car axle assembly, maintenance and management rules’ (TG/CL 224-2016) [2], Part 4.7 sets high standards for wheelset maintenance and repair procedures, especially for high-speed and heavy-haul railway freight cars.
To investigate the wear mechanism of wheelsets, different methodologies have been studied in recent years to predict the wear of railway wheelsets, especially for rolling contact fatigue [3-5]. However, the predictions are not accurate because the experimental data are scarce and scattered, as wheelset damage testing is time-consuming. Since wheelset failure is a random process, mathematical-statistical methodologies have been used in many works to predict the wear life of wheelsets, such as statistical distributions [6], the Fastsim algorithm [7] and time-to-failure distributions [8]. Lin and Asplund [9] also used a more complex Weibull frailty model to model the data for a sample of locomotive wheels. Besides, the literature [10, 11] points out that the Birnbaum-Saunders (BS) fatigue life distribution has advantages in processing failure data based on fatigue mechanisms. In addition, there are few studies on the failure rule derived from the relationship between the frequency of wheelset depot repair and operating mileage.
In this paper, the C80 wheelset is the research object. The contribution of this work is to analyze the depot-repair data of wheelsets by comparing the Weibull distribution and the BS distribution, applying the BS distribution to the field of wheelset failures while simultaneously considering the relationship between wheel damage and operating mileage. The characteristic curves of the wheelsets with serious faults are obtained.
2. Experiment and statistics model
This paper selects the wheelsets of C80 heavy-haul freight cars owned by one railway company as a case study, including the unit running miles (cumulative mileage). As can be seen from the maintenance statistics in Table 1, among the 15354 inspected wheelsets, 5346 exceeded the wheel diameter difference limit, 4896 exceeded the tread wear limit, 4292 had local tread depression, 228 exceeded the rim thickness limit, and 586 had tread stripping. Multiple kinds of faults generally co-exist in depot repair; the main fault types of railway freight car wheelset repair in this operating unit are wheel diameter difference exceeded (34.82 %), excessive wear of the tread circumference (31.89 %), and local depression of the tread (27.95 %). These three types account for about 94.66 % of the total depot records, while flange thickness over limit (1.48 %) and tread stripping (3.82 %) account for relatively small shares.
Table 1. Statistical table of C80 wheelset maintenance
Fault type | Number of wheelsets | Frequency of failure (%)
Wheel diameter difference exceeded | 5346 | 34.82
Excessive wear of tread circumference | 4896 | 31.89
Local depression of tread | 4292 | 27.95
Flange thickness over limit | 228 | 1.48
Tread stripping | 586 | 3.82
According to the depot-repair data of C80 wheelsets, data beyond a mileage of 700,000 kilometers were selected for small-sample statistical analysis. The number of specimens with wheel diameter difference exceeded was 128; 27 specimens with excessive tread circumference wear and 93 with local tread depression were selected for reliability analysis.
The Weibull distribution, as well as its modified form, can describe various shapes for the failure rate functions and is widely used in modeling lifetime in reliability engineering [12]. The cumulative distribution function (CDF) of the two-parameter Weibull distribution can be expressed as:
F\left(x\right)=1-\mathrm{e}\mathrm{x}\mathrm{p}\left\{-{\left(\frac{x}{\eta }\right)}^{\theta }\right\},\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }x>0,
where \theta >0 is the shape parameter and \eta >0 is the scale parameter or characteristic life. Accordingly, its probability density function is:
f\left(x\right)=\frac{\theta }{\eta }{\left(\frac{x}{\eta }\right)}^{\theta -1}\mathrm{e}\mathrm{x}\mathrm{p}\left\{-{\left(\frac{x}{\eta }\right)}^{\theta }\right\},\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }x>0.
The corresponding reliability function is:
R\left(x\right)=1-F\left(x\right)=\mathrm{e}\mathrm{x}\mathrm{p}\left\{-{\left(\frac{x}{\eta }\right)}^{\theta }\right\},\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }x>0.
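Eqs. (1-3) translate directly into code. The following Python sketch (illustrative, with arbitrary parameter values rather than the fitted ones from Table 2) evaluates the Weibull CDF, PDF and reliability function:

```python
import math

def weibull_cdf(x, theta, eta):
    """F(x) = 1 - exp(-(x/eta)^theta), Eq. (1)."""
    return 1.0 - math.exp(-(x / eta) ** theta)

def weibull_pdf(x, theta, eta):
    """f(x) = (theta/eta) (x/eta)^(theta-1) exp(-(x/eta)^theta), Eq. (2)."""
    return (theta / eta) * (x / eta) ** (theta - 1) * math.exp(-(x / eta) ** theta)

def weibull_reliability(x, theta, eta):
    """R(x) = 1 - F(x) = exp(-(x/eta)^theta), Eq. (3)."""
    return math.exp(-(x / eta) ** theta)

# At the characteristic life x = eta, R(eta) = exp(-1) for any shape theta
print(round(weibull_reliability(100.0, 2.5, 100.0), 3))
```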
The Birnbaum-Saunders fatigue life distribution was first proposed by Birnbaum and Saunders in 1969 [13]. Since the BS distribution is derived from the basic characteristics of the fatigue process, it has been widely used in mechanical product reliability studies. Suppose that x obeys the two-parameter Birnbaum-Saunders fatigue life distribution BS\left(\alpha ,\beta \right); its distribution function F\left(x\right) and probability density function f\left(x\right) are as follows [14]:
F\left(x;\alpha ,\beta \right)=\mathrm{\Phi }\left[\frac{1}{\alpha }\left({\left(\frac{x}{\beta }\right)}^{\frac{1}{2}}-{\left(\frac{x}{\beta }\right)}^{-\frac{1}{2}}\right)\right], x>0,
f\left(x\right)=\frac{1}{2\alpha \sqrt{\beta }}\left(\frac{1}{\sqrt{x}}+\frac{\beta }{x\sqrt{x}}\right)\phi \left[\frac{1}{\alpha }\left(\sqrt{\frac{x}{\beta }}-\sqrt{\frac{\beta }{x}}\right)\right],\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }x>0,
where \alpha >0 is called the shape parameter and \beta >0 is called the scale parameter, with:
\phi \left(x\right)=\frac{1}{\sqrt{2\pi }}\mathrm{e}\mathrm{x}\mathrm{p}\left(-\frac{{x}^{2}}{2}\right),\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{\Phi }\left(x\right)={\int }_{-\infty }^{x}\phi \left(y\right)dy.
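The BS distribution function in Eq. (4) needs nothing beyond the standard normal CDF, which can be written with the error function. The sketch below uses illustrative parameters (only the scale value 98.546 appears later in the paper; the shape value here is an assumption) and checks the defining property that the argument of Φ vanishes at x = β, so β is the median life:

```python
import math

def std_normal_cdf(z):
    """Phi(z) via the error function: Phi(z) = (1 + erf(z/sqrt(2)))/2."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_cdf(x, alpha, beta):
    """F(x; alpha, beta) = Phi((1/alpha)(sqrt(x/beta) - sqrt(beta/x))), Eq. (4)."""
    z = (math.sqrt(x / beta) - math.sqrt(beta / x)) / alpha
    return std_normal_cdf(z)

def bs_reliability(x, alpha, beta):
    return 1.0 - bs_cdf(x, alpha, beta)

# At x = beta the argument of Phi is zero, so F(beta) = 0.5
print(bs_cdf(98.546, 0.3, 98.546))
```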
A specific reliability analysis is made on the repair data for wheel diameter difference exceeded. Fig. 1 shows the probability density diagrams of the Weibull (WBL) model and the BS model for the wheel diameter difference depot-repair data. Fig. 1(a) and Fig. 1(b) show that the probability density functions of both models rise continuously, reaching their highest point at about 1 million kilometers, which confirms the relationship between operating mileage and the reliable operation of the wheelset. As train running mileage increases, the vehicle safety index gradually worsens and wheelset reliability gradually declines; after the train reaches a certain mileage, the probability of vehicle failure increases rapidly. The two fault types of the tread are related to the wheel diameter difference, and the results of data processing are similar.
Fig. 1. Probability density functions for wheel diameter difference exceeded: a) WBL model; b) BS model
Fig. 2. Reliability function curve of wheel diameter difference
In this research, least squares estimation (LSE), inverse moment estimation (IME) and maximum likelihood estimation (MLE) are used to solve for the Weibull model parameters of the wheel diameter difference overrun fault data, and the three reliability curves obtained are shown in Fig. 2.
According to Eqs. (1-5), combined with the data processing method in [15] and the curve comparison results in Fig. 2, the maximum likelihood method is adopted to identify the parameters of the Weibull model and the BS model. The results are shown in Table 2.
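The maximum likelihood step can be sketched as follows. This is a generic two-parameter Weibull MLE on synthetic mileage data (the paper's actual depot-repair records are not reproduced here, so the true parameter values below are assumptions), using `scipy.stats.weibull_min` with the location fixed at zero:

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
theta_true, eta_true = 2.0, 100.0            # assumed shape and scale (characteristic life)
mileage = weibull_min.rvs(theta_true, scale=eta_true, size=2000, random_state=rng)

# Maximum likelihood fit; location fixed at 0 gives the two-parameter model of Eq. (1)
theta_hat, _, eta_hat = weibull_min.fit(mileage, floc=0)
print(theta_hat, eta_hat)
```

With 2000 samples the estimates land close to the generating parameters, which is the same consistency check one would apply to the fitted values in Table 2.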
Table 2. Parameter estimation results: Weibull parameters (shape \theta , scale \eta ) and BS parameters (shape \alpha , scale \beta ) for the tread circumference wear limit and local depression on tread fault data
Fig. 3. Reliability characteristics of wheelset
In Fig. 3, the three panels a), b) and c) are the reliability characteristic diagrams for wheel diameter difference exceeding the limit, tread circumference wear exceeding the limit and local depression of the tread; the marked lines in the figure are the reliability characteristic curves fitted by the Weibull model and the BS model, respectively.
In addition, it can be seen from Fig. 3(a, b) that the reliability curves of the two fault types, wheel diameter difference exceeding the limit and tread circumference wear exceeding the limit, have similar trends. The two faults are closely related: the wear of the two wheels of the same wheelset is not the same, which leads to an increase in the frequency of the wheel diameter difference being exceeded. Compared with the reliability curve obtained by the Weibull model, the curve obtained by the BS model gives a relatively conservative wheelset maintenance mileage. Therefore, the minimum scale parameter (98.546) of the BS model is selected as the minimum maintenance mileage limit.
In Fig. 4, the marked lines a, b and c are the reliability curves of wheel diameter difference exceeded, excessive wear of the tread circumference and local depression of the tread, respectively, fitted by the BS model. The three reliability curves have the same downward trend, but decline at different rates. The curve for tread circumferential wear drops the fastest, indicating that the C80 wheelsets in operation have a serious failure rate for tread circumferential wear. According to the BS model characteristic parameters of the tread wear data in Table 2, it is necessary to ensure that the wheelsets are fully overhauled before the train runs to 98.546 million kilometers.
Fig. 4. Reliability function curves of wheelset
Through sampling statistical analysis of a large number of railway freight car wheelset repair data, the characteristic values and reliability curves of the C80 wheelset are given by comparing the application of the Weibull distribution and the Birnbaum-Saunders distribution to the wheelset maintenance data. The reliability curves of wheel tread circumferential wear and wheel diameter difference decrease rapidly. The results show that the BS distribution is applicable and effective for analyzing the depot-repair data of wheelsets. In addition, it is necessary to ensure that the wheelsets of the C80 are fully overhauled before the train runs to 98.546 million kilometers.
Lyu K., Liu P., et al. Analysis on the features and potential causes of wheel surface damage for heavy-haul locomotives. Engineering Failure Analysis, Vol. 109, 2019, p. 104292.
TG/CL 224-2016, Railway Freight Car Axle Assembly, Maintenance and Management Rules. China Railway Publishing House, 2016.
Fa Xiong. Analysis and research on the wear law of wheelsets used in C80B gondola cars on Daqin line. Internal Combustion Engine and Parts, Vol. 44, Issue 5, 2020, p. 44-46.
Six K., Mihalj T., Trummer G., et al. Assessment of running gear performance in relation to rolling contact fatigue of wheels and rails based on stochastic simulations. Journal of Rail and Rapid Transit, Vol. 234, Issue 4, 2019, p. 405-416.
Wang Ling, Xu Hong, Zhao Wenjie, et al. Optimizing the re-profiling strategy of metro wheels based on a data-driven wear model. European Journal of Operational Research, Vol. 243, Issue 3, 2015, p. 975-986.
Zou R., Ma W., Luo S. Influence of the wheel diameter difference on the wheel/rail dynamic contact relationship of the heavy haul locomotive. Australian Journal of Mechanical Engineering, Vol. 16, Issue 2, 2018, p. 98-108.
Andrade A. R., Stow J. Statistical modelling of wear and damage trajectories of railway wheelsets. Quality and Reliability Engineering International, Vol. 32, Issue 8, 2016, p. 2909-2023.
Tao Guo, Qi Yayun, Gan Feng. Simulation and analysis of wheel wear prediction with rigid flexible couple wheelset. Earth and Environmental Science, Vol. 189, 2018, p. 032012.
Freitas M. A., Toledo M. L. G., Colosimo E. A. Using degradation data to assess reliability: a case study on train wheel degradation. Quality and Reliability Engineering International, Vol. 25, Issue 5, 2009, p. 607-629.
Lin J., Asplund M., Parida A. Reliability analysis for degradation of locomotive wheels using parametric Bayesian approach. Quality and Reliability Engineering International, Vol. 30, Issue 5, 2013, p. 657-667.
Bingxing W., Lingling W. Estimation for the Birnbaum-Saunders fatigue life distribution. Chinese Journal of Applied Probability and Statistics, Vol. 12, Issue 4, 1996, p. 10-15.
Tibor Tallian. Weibull distribution of rolling contact fatigue life and deviations therefrom. ASLE Transactions, Vol. 5, Issue 5, 1962, p. 183-196.
Kasra Mohammadi, Omid Alavi, Jon G. McGowan. Use of Birnbaum-Saunders distribution for estimating wind speed and wind power probability distributions: a review. Energy Conversion and Management, Vol. 143, 2017, p. 109-122.
Sanhueza Antonio, Leiva Victor, Balakrishnan N. The generalized Birnbaum-Saunders distribution and its theory, methodology, and application. Communications in Statistics – Theory and Methods, Vol. 37, Issue 5, 2008, p. 645-670.
Sun Zhuling. Regression estimation of the parameters of the Birnbaum-Saunders fatigue life distribution. Acta Armamentarii, Vol. 31, Issue 9, 2010, p. 1259-1262.
|
Cryosurgery of Normal and Tumor Tissue in the Dorsal Skin Flap Chamber: Part I—Thermal Response | J. Biomech Eng. | ASME Digital Collection
Department of Biomedical Engineering University of Minnesota, Minneapolis, MN 55455
Departments of Biomedical Engineering, Mechanical Engineering, and Urologic Surgery University of Minnesota, Minneapolis, MN 55455
Hoffmann, N. E., and Bischof, J. C. (February 27, 2001). "Cryosurgery of Normal and Tumor Tissue in the Dorsal Skin Flap Chamber: Part I—Thermal Response ." ASME. J Biomech Eng. August 2001; 123(4): 301–309. https://doi.org/10.1115/1.1385838
Current research in cryosurgery is concerned with finding a thermal history that will definitively destroy tissue. In this study, we measured and predicted the thermal history obtained during freezing and thawing in a cryosurgical model. This thermal history was then compared to the injury observed in the tissue of the same cryosurgical model (reported in companion paper (Hoffmann and Bischof, 2001)). The dorsal skin flap chamber, implanted in the Copenhagen rat, was chosen as the cryosurgical model. Cryosurgery was performed in the chamber on either normal skin or tumor tissue propagated from an AT-1 Dunning rat prostate tumor. The freezing was performed by placing a ∼1 mm diameter liquid-nitrogen-cooled cryoprobe in the center of the chamber and activating it for approximately 1 minute, followed by a passive thaw. This created a 4.2 mm radius iceball. Thermocouples were placed in the tissue around the probe at three locations
(r = 2, 3, and 3.8 mm from the center of the window) in order to monitor the thermal history produced in the tissue. The conduction error introduced by the presence of the thermocouples was investigated using an in vitro simulation of the in vivo case and found to be <10°C for all cases. The corrected temperature measurements were used to investigate the validity of two models of freezing behavior within the iceball. The first model used to approximate the freezing and thawing behavior within the DSFC was a two-dimensional transient axisymmetric numerical solution using an enthalpy method and incorporating heating due to blood flow. The second model was a one-dimensional radial steady state analytical solution without blood flow. The models used constant thermal properties for the unfrozen region, and temperature-dependent thermal properties for the frozen region. The two-dimensional transient model presented here is one of the first attempts to model both the freezing and thawing of cryosurgery. The ability of the model to calculate freezing appeared to be superior to the ability to calculate thawing. After demonstrating that the two-dimensional model sufficiently captured the freezing and thawing parameters recorded by the thermocouples, it was used to estimate the thermal history throughout the iceball. This model was used as a basis to compare thermal history to injury assessment (reported in companion paper (Hoffmann and Bischof, 2001)).
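The second (one-dimensional radial steady-state) model admits a closed-form temperature profile for pure conduction in cylindrical geometry: with constant conductivity and no blood flow, temperature varies logarithmically between the probe surface and the iceball edge. The sketch below illustrates that generic profile, not the paper's calibrated model; the probe and edge temperatures are assumptions, and the 0.5 mm probe radius is inferred only loosely from the stated ~1 mm probe diameter and 4.2 mm iceball radius:

```python
import math

def radial_temperature(r, r_probe, r_edge, T_probe, T_edge):
    """Steady-state 1-D radial conduction around a cylindrical probe:
    T(r) = T_probe + (T_edge - T_probe) * ln(r/r_probe) / ln(r_edge/r_probe)."""
    return T_probe + (T_edge - T_probe) * math.log(r / r_probe) / math.log(r_edge / r_probe)

# Assumed: probe surface at -150 C (0.5 mm radius), iceball edge at 0 C (4.2 mm radius);
# evaluate at the three thermocouple radii used in the experiment
for r_mm in (2.0, 3.0, 3.8):
    print(round(radial_temperature(r_mm, 0.5, 4.2, -150.0, 0.0), 1))
```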
skin, tumours, freezing, surgery, biothermics, physiological models, haemodynamics, Cryoinjury, Cell Injury, Cryosurgery, Vascular Injury, Intravital Microscopy, Skin Flap Chamber, Microvasculature
Biological tissues, Freezing, Heat conduction, Probes, Skin, Temperature, Thermocouples, Tumors, Errors, Transients (Dynamics), Temperature measurement, Wounds, Blood flow, Steady state
|
Implement Incremental Learning for Classification Using Succinct Workflow - MATLAB & Simulink - MathWorks Benelux
This example shows how to use the succinct workflow to implement incremental learning for binary classification with prequential evaluation. Specifically, this example does the following:
Create a default incremental learning model for binary classification.
Although this example treats the application as a binary classification problem, you can implement multiclass incremental learning using an object for a multiclass problem by following this same workflow.
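Prequential ("test-then-train") evaluation itself is tool-agnostic. A minimal Python sketch with a hand-rolled online logistic learner (an assumption standing in for MATLAB's incremental learning objects, on a synthetic stream) shows the loop: predict on each incoming observation first, score the prediction, then update the model on that same observation:

```python
import numpy as np

rng = np.random.default_rng(1)

def stream(n, d=2):
    """Synthetic binary-classification stream: label = sign of a fixed linear rule."""
    w_true = np.array([1.5, -2.0])
    for _ in range(n):
        x = rng.normal(size=d)
        yield x, 1 if x @ w_true > 0 else 0

# Prequential evaluation of an online logistic learner
w, b, lr = np.zeros(2), 0.0, 0.1
correct = total = 0
for x, y in stream(2000):
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # 1. predict first ...
    correct += int((p > 0.5) == bool(y))
    total += 1
    g = p - y                                 # 2. ... then update on the same example
    w -= lr * g * x
    b -= lr * g
print(correct / total)                        # cumulative prequential accuracy
```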
Store the cumulative metrics, the window metrics, and the first coefficient {\beta }_{1}.
|
A Note on the Estimation of the Maximum Possible Earthquake Magnitude Based on Extreme Value Theory for the Groningen Gas Field | Bulletin of the Seismological Society of America | GeoScienceWorld
A Note on the Estimation of the Maximum Possible Earthquake Magnitude Based on Extreme Value Theory for the Groningen Gas Field
Gert Zöller *
University of Potsdam, Institute of Mathematics, Potsdam (Golm), Germany
Corresponding author: zoeller@uni-potsdam.de
Gert Zöller; A Note on the Estimation of the Maximum Possible Earthquake Magnitude Based on Extreme Value Theory for the Groningen Gas Field. Bulletin of the Seismological Society of America 2022; doi: https://doi.org/10.1785/0120210307
Extreme value statistics is a popular and frequently used tool to model the occurrence of large earthquakes. The problem of poor statistics arising from rare events is addressed by taking advantage of the validity of general statistical properties in asymptotic regimes. In this note, I argue that the use of extreme value statistics for the purpose of practically modeling the tail of the frequency–magnitude distribution of earthquakes can produce biased and thus misleading results because it is unknown to what degree the tail of the true distribution is sampled by data. Using synthetic data allows one to quantify this bias in detail. The implicit assumption that the true Mmax is close to the maximum observed magnitude Mmax,observed restricts the class of potential models a priori to those with Mmax = Mmax,observed + ΔM with an increment ΔM ≈ 0.5…1.2. This corresponds to the simple heuristic method suggested by Wheeler (2009) and labeled “Mmax equals Mobs plus an increment.” The incomplete consideration of the entire model family for the frequency–magnitude distribution neglects, however, the scenario of a large so far unobserved earthquake.
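The bias the note describes can be reproduced with a few lines of simulation: when magnitudes are drawn from a doubly truncated Gutenberg-Richter distribution, the maximum observed magnitude systematically falls short of the true Mmax, and the "Mobs plus an increment" heuristic merely brackets it. The b-value, completeness magnitude and true Mmax below are illustrative assumptions, not values from the Groningen study:

```python
import numpy as np

rng = np.random.default_rng(42)
b, m0, m_max_true = 1.0, 1.5, 4.5     # assumed b-value, completeness magnitude, true Mmax
n = 5000

# Inverse-CDF sampling from the doubly truncated Gutenberg-Richter distribution
u = rng.uniform(size=n)
tail = 1.0 - 10.0 ** (-b * (m_max_true - m0))
mags = m0 - np.log10(1.0 - u * tail) / b

m_obs = mags.max()
print(m_obs)                      # maximum observed magnitude: always below the true Mmax
print(m_obs + 0.5, m_obs + 1.2)   # the "Mobs plus an increment" bracket for Mmax
```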
|
Robust Controller Design - MATLAB & Simulink - MathWorks 한국
Examine Controller Performance
Compare Nominal and Worst-Case Behavior
This example shows how to design a feedback controller for a plant with uncertain parameters and uncertain model dynamics. The goals of the controller design are good steady-state tracking and disturbance-rejection properties.
Design a controller for the plant G described in Robust Controller Design. This plant is a first-order system with an uncertain time constant. The plant also has some uncertain dynamic deviations from first-order behavior beyond about 9 rad/s.
bw = ureal('bw',5,'Percentage',10);
Gnom = tf(1,[1/bw 1]);
W = makeweight(.05,9,10);
G = Gnom*(1+W*Delta)
bw: Uncertain real, nominal = 5, variability = [-10,10]%, 1 occurrences
Because of the nominal first-order behavior of the plant, choose a PI control architecture. For a desired closed-loop damping ratio ξ and natural frequency
{\omega }_{n}
, the design equations for the proportional and integral gains (based on the nominal open-loop time constant of 0.2) are:
{K}_{p}=\frac{2\xi {\omega }_{n}}{5}-1,\phantom{\rule{0.2777777777777778em}{0ex}}{K}_{i}=\frac{{\omega }_{n}^{2}}{5}.
To study how the uncertainty in G affects the achievable closed-loop bandwidth, design two controllers, both achieving ξ = 0.707, but with different
{\omega }_{n}
values, 3 and 7.5.
xi = 0.707;
wn1 = 3;
Kp1 = 2*xi*wn1/5 - 1;
Ki1 = (wn1^2)/5;
C1 = tf([Kp1,Ki1],[1 0]);
The nominal closed-loop bandwidth achieved by C2 is in a region where G has significant model uncertainty. It is therefore expected that the model variations cause significant degradations in the closed-loop performance with that controller. To examine the performance, form the closed-loop systems and plot the step responses of samples of the resulting systems.
step(T1,'b',T2,'r',tfinal)
The step responses for T2 exhibit a faster rise time because C2 sets a higher closed-loop bandwidth. However, as expected, the model variations have a greater impact.
You can use robstab to check the robustness of the stability of the closed-loop systems to model variations.
stabmarg1 = robstab(T1,opt);
The display gives the amount of uncertainty that the system can tolerate without going unstable. In both cases, the closed-loop systems can tolerate more than 100% of the modeled uncertainty range while remaining stable. stabmarg contains lower and upper bounds on the stability margin. A stability margin greater than 1 means the system is stable for all values of the modeled uncertainty. A stability margin less than 1 means there are allowable values of the uncertain elements that make the system unstable.
While both systems are stable for all variations, their performance is affected to different degrees. To determine how the uncertainty affects closed-loop performance, you can use wcgain to compute the worst-case effect of the uncertainty on the peak magnitude of the closed-loop sensitivity function, S = 1/(1+GC). This peak gain of this function is typically correlated with the amount of overshoot in a step response; peak gain greater than one indicates overshoot.
Form the closed-loop sensitivity functions and call wcgain.
S1 = feedback(1,G*C1);
[maxgain1,wcu1] = wcgain(S1);
maxgain gives lower and upper bounds on the worst-case peak gain of the sensitivity transfer function, as well as the specific frequency where the maximum gain occurs. Examine the bounds on the worst-case gain for both systems.
maxgain1
maxgain1 = struct with fields:
wcu contains the particular values of the uncertain elements that achieve this worst-case behavior. Use usubs to substitute these worst-case values for uncertain elements, and compare the nominal and worst-case behavior.
wcS1 = usubs(S1,wcu1);
bodemag(S1.NominalValue,'b',wcS1,'b');
bodemag(S2.NominalValue,'r',wcS2,'r');
While C2 achieves better nominal sensitivity than C1, the nominal closed-loop bandwidth extends too far into the frequency range where the process uncertainty is very large. Hence the worst-case performance of C2 is inferior to C1 for this particular uncertain model.
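The qualitative conclusion can be spot-checked outside MATLAB. The sketch below (plain Python/NumPy, a deliberately crude approximation: it sweeps only the uncertain bandwidth bw over its ±10% range on a grid and ignores the dynamic uncertainty Delta, so it is not equivalent to wcgain) evaluates the peak of |S(jω)| for both designs:

```python
import numpy as np

def sens_peak(bw, Kp, Ki, w=np.logspace(-1, 3, 2000)):
    """Peak of |S(jw)| = |1/(1+G*C)| for G = 1/(s/bw + 1) and C = Kp + Ki/s."""
    s = 1j * w
    G = 1.0 / (s / bw + 1.0)
    C = Kp + Ki / s
    return np.abs(1.0 / (1.0 + G * C)).max()

xi = 0.707
for wn in (3.0, 7.5):
    Kp = 2 * xi * wn / 5 - 1          # design equations from the example
    Ki = wn ** 2 / 5
    # crude worst case: sweep only the uncertain bandwidth over its +/-10% range
    worst = max(sens_peak(bw, Kp, Ki) for bw in np.linspace(4.5, 5.5, 11))
    print(wn, round(worst, 3))
```

Even this parameter-only sweep shows sensitivity peaks above one for both controllers; the full worst case over the dynamic uncertainty (what wcgain computes) is larger still.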
robstab | wcgain | usubs
|
8. Find the derivative of the function f(x)=\frac{(3x-1)^{2}}{x^{3}-7}. You do not need to simplify your answer.
{\displaystyle f(x)={\frac {(3x-1)^{2}}{x^{3}-7}}}
The Chain Rule: If {\displaystyle f} and {\displaystyle g} are differentiable functions, then
{\displaystyle (f\circ g)'(x)=f'(g(x))\cdot g'(x).}
The Quotient Rule: If {\displaystyle f} and {\displaystyle g} are differentiable functions and {\displaystyle g(x)\neq 0}, then
{\displaystyle \left({\frac {f}{g}}\right)'(x)={\frac {f'(x)\cdot g(x)-f(x)\cdot g'(x)}{\left(g(x)\right)^{2}}}.}
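Applying the two rules to f(x) = (3x−1)²/(x³−7) gives f′(x) = [6(3x−1)(x³−7) − 3x²(3x−1)²]/(x³−7)². A short Python check (not part of the original solution page) confirms the formula against a central finite difference:

```python
def f(x):
    return (3 * x - 1) ** 2 / (x ** 3 - 7)

def f_prime(x):
    """Quotient rule, with the chain rule applied to the numerator (3x-1)^2."""
    num = 6 * (3 * x - 1) * (x ** 3 - 7) - (3 * x - 1) ** 2 * 3 * x ** 2
    return num / (x ** 3 - 7) ** 2

# Sanity check against a central finite difference at x = 1
h = 1e-6
fd = (f(1 + h) - f(1 - h)) / (2 * h)
print(abs(f_prime(1) - fd) < 1e-4)
```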
|
Mathematical modeling of the solution to the problem of plate bending given various fastenings along the contour and various loads | JVE Journals
Received 20 February 2020; accepted 26 May 2020; published 26 November 2020
ERRATUM. The institutional affiliation was incorrect for Bidakhmet Zhanar in the paper finally approved (after the acceptance) by the authors. For more information read Editor's Note.
The classical equations of plate bending under various fastenings along the contour and various loads are considered. Using the decomposition method, three auxiliary problems are formulated to determine the deflection of the plate. The task is to determine the individual terms of the decomposition coefficients {a}_{nm}^{\left(i\right)} from the individual terms of the load series. A system of algebraic equations is then obtained to determine {a}_{nm}^{\left(i\right)}. If the plate is pivotally fixed along the contour, the system of equations takes a known form and the expression for the deflection function is found explicitly. Approximate analytical solutions are obtained for the functions of deflections, internal force factors and stresses under various boundary conditions of the plate.
Keywords: mathematical modeling, high-strength, composite material, contour, solving problems.
In our century, with the increasing complexity of the forms of building structures, the emergence of aircraft manufacturing, and the diverse demands of mechanical engineering, the role of elasticity theory methods has changed dramatically [1]. They now form the basis for constructing practical methods for calculating deformable bodies and systems of bodies of various shapes [2]. At the same time, modern calculations take into account not only the complexity of the body shape and the variety of effects (force, temperature, etc.), but also the specific physical properties of the materials from which the bodies are made [3]. In modern designs, along with traditional materials (steel, wood, concrete, etc.), new materials are widely used, in particular composites with a number of specific properties [4]. Thus, reinforcing polymers with fibers of high-strength materials makes it possible to obtain a new lightweight structural material with strength properties that exceed even those of modern steels [5]. However, the polymer base gives such a composite material viscous properties in addition to elasticity, which must be taken into account in the calculations [6]. Exact solutions in analytical form of the equations of elasticity theory subject to boundary conditions, i.e. of the so-called boundary value problem, are possible only in some special cases of body loading and fastening [7]. Therefore, approximate but fairly general methods for solving problems of the applied theory of elasticity are especially important for engineering practice [8].
Consider a rectangular plate of constant thickness, having geometric dimensions in plan 0\le x\le {l}_{1}, 0\le y\le {l}_{2}. We assume that the plate along the contour in the general case is fixed elastically. To determine the bending deflections, we have the differential equation describing the deformations:
D\left(\frac{{\partial }^{4}W}{\partial {x}^{4}}+2\frac{{\partial }^{4}W}{\partial {x}^{2}\partial {y}^{2}}+\frac{{\partial }^{4}W}{\partial {y}^{4}}\right)=q\left(x,y\right),
where D=\frac{{E}_{1}{\delta }^{3}}{12}=\frac{E{\delta }^{3}}{12\left(1-{\mu }^{2}\right)} is the bending stiffness and q\left(x,y\right) is the external load. Boundary conditions for elastic fastening of the plate along the contour:
\begin{array}{l}W=0,\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }{l}_{1}k\frac{{\partial }^{2}W}{\partial {x}^{2}}+\left(1-k\right)\frac{\partial W}{\partial x}=0,\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }x=0,\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }x={l}_{1},\\ W=0,\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }{l}_{2}k\frac{{\partial }^{2}W}{\partial {y}^{2}}+\left(1-k\right)\frac{\partial W}{\partial y}=0,\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }y=0,\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }y={l}_{2}.\end{array}
Introducing dimensionless coordinates and the aspect ratio \lambda ={l}_{1}/{l}_{2}, the boundary value problem Eqs. (1)-(2) takes the form:
\frac{{\partial }^{4}\nu }{\partial {\alpha }^{4}}+2{\lambda }^{2}\frac{{\partial }^{4}\nu }{\partial {\alpha }^{2}\partial {\beta }^{2}}+{\lambda }^{4}\frac{{\partial }^{4}\nu }{\partial {\beta }^{4}}=\frac{q}{D}.
The purpose of this work is to determine the bending of the plate. In accordance with the decomposition method, we formulate three auxiliary problems for Eq. (3):
\frac{{\partial }^{4}{\nu }^{\left(1\right)}}{\partial {\alpha }^{4}}={f}^{\left(1\right)}\left(\alpha ,\beta \right), {\nu }^{\left(1\right)}=0, \pi k\frac{{\partial }^{2}{\nu }^{\left(1\right)}}{\partial {\alpha }^{2}}+\left(1-k\right)\frac{\partial {\nu }^{\left(1\right)}}{\partial \alpha }=0, \alpha =0, \alpha =\pi .
{\lambda }^{4}\frac{{\partial }^{4}{\nu }^{\left(2\right)}}{\partial {\beta }^{4}}={f}^{\left(2\right)}\left(\alpha ,\beta \right), {\nu }^{\left(2\right)}=0, \pi k\frac{{\partial }^{2}{\nu }^{\left(2\right)}}{\partial {\beta }^{2}}+\left(1-k\right)\frac{\partial {\nu }^{\left(2\right)}}{\partial \beta }=0, \beta =0, \beta =\pi .
2{\lambda }^{2}\frac{{\partial }^{4}{\nu }^{\left(3\right)}}{\partial {\alpha }^{2}\partial {\beta }^{2}}=\frac{q}{D}-{f}^{\left(1\right)}\left(\alpha ,\beta \right)-{f}^{\left(2\right)}\left(\alpha ,\beta \right).
{\nu }^{\left(1\right)}={\nu }^{\left(2\right)}={\nu }^{\left(3\right)}=\nu ,
which needs to hold only approximately. We expand the right-hand sides of Eqs. (4-6) in Fourier series:
{f}^{\left(i\right)}\left(\alpha ,\beta \right)={\sum }_{n=1}^{\mathrm{\infty }}{\sum }_{m=1}^{\mathrm{\infty }}{a}_{nm}^{\left(i\right)}\mathrm{s}\mathrm{i}\mathrm{n}\left(n\alpha \right)\mathrm{s}\mathrm{i}\mathrm{n}\left(m\beta \right),\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\mathrm{ }\left(i=1,2\right),
where {a}_{nm}^{\left(i\right)} are unknown constants approximating the unknown functions of the decomposition method.
We likewise expand the load function q in a Fourier series:
q\left(\alpha ,\beta \right)={\sum }_{n=1}^{\mathrm{\infty }}{\sum }_{m=1}^{\mathrm{\infty }}{q}_{nm}\mathrm{s}\mathrm{i}\mathrm{n}\left(n\alpha \right)\mathrm{s}\mathrm{i}\mathrm{n}\left(m\beta \right),
where {q}_{nm} are the coefficients of the load expansion, determined by the formula:
{q}_{nm}=\frac{4}{{\pi }^{2}}\underset{0}{\overset{\pi }{\int }}\underset{0}{\overset{\pi }{\int }}q\left(\alpha ,\beta \right)\mathrm{s}\mathrm{i}\mathrm{n}\left(n\alpha \right)\mathrm{s}\mathrm{i}\mathrm{n}\left(m\beta \right)d\alpha d\beta .
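As a quick numerical check of this formula (a sketch in Python, not part of the paper; the function names are mine), one can evaluate the double integral for a uniform load q(α, β) = q₀ and compare it with the resulting closed form 16q₀/(π²nm), which is nonzero only when both n and m are odd:

```python
import math

def q_nm_uniform(q0, n, m, steps=200):
    """Midpoint-rule evaluation of
    q_nm = (4/pi^2) * int_0^pi int_0^pi q(a, b) sin(n a) sin(m b) da db
    for the uniform load q(a, b) = q0."""
    h = math.pi / steps
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) * h          # midpoint in alpha
        sa = math.sin(n * a)
        for j in range(steps):
            b = (j + 0.5) * h      # midpoint in beta
            total += q0 * sa * math.sin(m * b)
    return 4.0 / math.pi ** 2 * total * h * h

def q_nm_closed(q0, n, m):
    """Closed form for the uniform load: 16*q0 / (pi^2 * n * m) when
    both n and m are odd, and 0 otherwise (the sine integrals vanish)."""
    if n % 2 == 0 or m % 2 == 0:
        return 0.0
    return 16.0 * q0 / (math.pi ** 2 * n * m)
```

The quadrature agrees with the closed form to a few decimal places already at a modest grid resolution, which is a useful sanity check before using the coefficients in the series solutions.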
The auxiliary problems Eqs. (4-6) are then written as follows.
\frac{{\partial }^{4}{\nu }^{\left(1\right)}}{\partial {\alpha }^{4}}={\sum }_{n=1}^{\infty }{\sum }_{m=1}^{\infty }{a}_{nm}^{\left(1\right)}\mathrm{s}\mathrm{i}\mathrm{n}\left(n\alpha \right)\mathrm{s}\mathrm{i}\mathrm{n}\left(m\beta \right), {\nu }^{\left(1\right)}=0, \pi k\frac{{\partial }^{2}{\nu }^{\left(1\right)}}{\partial {\alpha }^{2}}+\left(1-k\right)\frac{\partial {\nu }^{\left(1\right)}}{\partial \alpha }=0,
\alpha =0, \alpha =\pi .
{\lambda }^{4}\frac{{\partial }^{4}{\nu }^{\left(2\right)}}{\partial {\beta }^{4}}={\sum }_{n=1}^{\infty }{\sum }_{m=1}^{\infty }{a}_{nm}^{\left(2\right)}\mathrm{s}\mathrm{i}\mathrm{n}\left(n\alpha \right)\mathrm{s}\mathrm{i}\mathrm{n}\left(m\beta \right), {\nu }^{\left(2\right)}=0, \pi k\frac{{\partial }^{2}{\nu }^{\left(2\right)}}{\partial {\beta }^{2}}+\left(1-k\right)\frac{\partial {\nu }^{\left(2\right)}}{\partial \beta }=0,
\beta =0, \beta =\pi .
2{\lambda }^{2}\frac{{\partial }^{4}{\nu }^{\left(3\right)}}{\partial {\alpha }^{2}\partial {\beta }^{2}}=\frac{1}{D}{\sum }_{n=1}^{\infty }{\sum }_{m=1}^{\infty }{q}_{nm}\mathrm{s}\mathrm{i}\mathrm{n}\left(n\alpha \right)\mathrm{s}\mathrm{i}\mathrm{n}\left(m\beta \right)
-{\sum }_{n=1}^{\infty }{\sum }_{m=1}^{\infty }{a}_{nm}^{\left(1\right)}\mathrm{s}\mathrm{i}\mathrm{n}\left(n\alpha \right)\mathrm{s}\mathrm{i}\mathrm{n}\left(m\beta \right)-{\sum }_{n=1}^{\infty }{\sum }_{m=1}^{\infty }{a}_{nm}^{\left(2\right)}\mathrm{s}\mathrm{i}\mathrm{n}\left(n\alpha \right)\mathrm{s}\mathrm{i}\mathrm{n}\left(m\beta \right).
And meeting the boundary conditions Eq. (11), we find:
{\nu }^{\left(1\right)}={\sum }_{n=1}^{\infty }{\sum }_{m=1}^{\infty }\frac{{a}_{nm}^{\left(1\right)}}{{n}^{3}}\left[\frac{1}{n}\mathrm{sin}\left(n\alpha \right)+\frac{4{\alpha }^{3}k\left(k-1\right)}{{\pi }^{2}\left(1-2k-11{k}^{2}\right)}+\frac{{\alpha }^{2}\left(4k-5{k}^{2}+1\right)}{\pi \left(1-2k-11{k}^{2}\right)}+\frac{\alpha \left({k}^{2}-1\right)}{1-2k-11{k}^{2}}\right]\mathrm{sin}\left(m\beta \right),
and solution of the boundary value problem Eq. (12) will be similar:
{\nu }^{\left(2\right)}={\sum }_{n=1}^{\infty }{\sum }_{m=1}^{\infty }\frac{{a}_{nm}^{\left(2\right)}}{{\lambda }^{4}{m}^{3}}\left[\frac{1}{m}\mathrm{sin}\left(m\beta \right)+\frac{4{\beta }^{3}k\left(k-1\right)}{{\pi }^{2}\left(1-2k-11{k}^{2}\right)}+\frac{{\beta }^{2}\left(4k-5{k}^{2}+1\right)}{\pi \left(1-2k-11{k}^{2}\right)}+\frac{\beta \left({k}^{2}-1\right)}{1-2k-11{k}^{2}}\right]\mathrm{sin}\left(n\alpha \right),
where {a}_{nm}^{\left(i\right)} are the deflection-surface expansion coefficients to be determined. To find them, the term with indices n and m of the load series Eq. (9) is matched with the corresponding term of the deflection series Eqs. (14-15). In accordance with Eq. (7), we assume:
{\nu }^{\left(3\right)}\left(\frac{\pi }{2},\frac{\pi }{2}\right)=\frac{1}{2}\left[{\nu }^{\left(1\right)}\left(\frac{\pi }{2},\frac{\pi }{2}\right)+{\nu }^{\left(2\right)}\left(\frac{\pi }{2},\frac{\pi }{2}\right)\right], {\nu }^{\left(1\right)}\left(\frac{\pi }{2},\frac{\pi }{2}\right)={\nu }^{\left(2\right)}\left(\frac{\pi }{2},\frac{\pi }{2}\right).
The problem now reduces to determining the individual decomposition coefficients {a}_{nm}^{\left(i\right)} from the individual terms of the load series Eq. (9). Substituting into Eq. (13) and Eq. (16) then yields a system of algebraic equations for these coefficients. The functions Eqs. (14-15) thus solve the posed problem: they satisfy the conditions on the plate contour, and their coefficients follow from the solution of the algebraic system. We introduce the notation:
Fig. 1. Computed plot over the coordinates α, β
Under the action of a uniform load over the entire area, we have:
{G}_{nm}=\frac{16q}{{D}_{nm}{\pi }^{2}}\mathrm{s}\mathrm{i}\mathrm{n}\left(\frac{n\pi }{2}\right)\mathrm{s}\mathrm{i}\mathrm{n}\left(\frac{m\pi }{2}\right).
Fig. 2. Computed plot of the evenly distributed load
Under the action of a load varying parabolically along the α axis over the entire area, we have:
{G}_{nm}=\frac{32q}{{D}_{nm}^{3}{\pi }^{4}}\left[1-{\left(-1\right)}^{n}\right]\mathrm{s}\mathrm{i}\mathrm{n}\left(\frac{n\pi }{2}\right)\mathrm{s}\mathrm{i}\mathrm{n}\left(\frac{m\pi }{2}\right).
Fig. 3. Computed plot for the parabolically varying load
When the load varies as an isosceles triangle along the α axis, we have:
{G}_{nm}=\frac{16q}{{D}_{nm}^{2}{\pi }^{3}}\mathrm{s}\mathrm{i}{\mathrm{n}}^{2}\left(\frac{n\pi }{2}\right)\mathrm{s}\mathrm{i}\mathrm{n}\left(\frac{m\pi }{2}\right).
Fig. 4. Computed plot for the triangular load
The paper obtained equations for plate bending, internal force factors, and stresses, together with the following new scientific results:
1) approximate analytical solutions were obtained for the functions of bends, the formulas of internal force factors and stresses given various boundary conditions of the plate and load;
2) the results of a numerical experiment for the plate were obtained given various fastenings of the edges and load;
3) mechanical effects, conditions of fastening along the contour and load were revealed.
Khairnasov K. Z. Mathematical modeling of dynamic stability of shell structures under large elastoplastic deformations. IOP Conference Series: Journal of Physics, Vol. 991, 2018, p. 012042. [Search CrossRef]
Omarov T., Tulegenova K., Bekenov Y., Abdraimova G., Zhauyt A., Ibadullayev M. Determination of reduced mass and stiffness of flexural vibrating cantilever beam. Journal of Measurements in Engineering, Vol. 6, Issue 1, 2018, p. 1-9. [Publisher]
Mogilevich L. I., Popov V. S., Popova A. A., Christoforova A. V. Mathematical modeling of hydroelastic walls oscillations of the channel on Winkler foundation under vibrations. Vibroengineering Procedia, Vol. 8, 2016, p. 294-299. [Search CrossRef]
Kondratov D. V., Kalinina A. V., Mogilevich L. I., Popova A. A., Kondratova Y. N. Mathematical model of elastic ribbed shell dynamics interaction with viscous liquid under vibrations. Vibroengineering Procedia, Vol. 8, 2016, p. 300-305. [Search CrossRef]
Andreev V., Chepurnenko A. On the bending of a thin plate at nonlinear creep. Advanced Materials Research, Vol. 900, 2014, p. 707-710. [Publisher]
Dickey R. W. Nonlinear bending of circular plates. Journal of Applied Mathematics, Vol. 30, Issue 1, 1976, p. 1-9. [Search CrossRef]
Reissner E. On the theory of bending of elastic plates. Journal of Mathematics and Physics, Vol. 23, 1944, p. 184-191. [Publisher]
Charles Chinwuba Ike Mathematical solutions for the flexural analysis of Mindlin’s first order shear deformable circular plates. Mathematical Models in Engineering, Vol. 4, Issue 2, 2018, p. 50-72. [Publisher]
|
Look at the algebra tile shape at right.
Write an algebraic expression for the perimeter of the shape in two ways: first, by finding the length of each of the sides and adding them all together; second, by writing an equivalent, simplified expression.
Label all the side lengths.
x + x + (x - 2) + 1 + 1 + (x - 1) + 1 + x + x, which simplifies to 6x.
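One quick way to confirm that the long sum of side lengths and its simplified form are equivalent is to evaluate both at several values of x (a sketch; the helper names are mine):

```python
def perimeter_sum(x):
    # Add the labeled side lengths one by one, as read off the figure.
    return x + x + (x - 2) + 1 + 1 + (x - 1) + 1 + x + x

def perimeter_simplified(x):
    # Equivalent simplified expression: the x-terms total 6x and the
    # constants (-2 + 1 + 1 - 1 + 1) cancel to 0.
    return 6 * x

for x in (0, 1, 2.5, 10):
    assert perimeter_sum(x) == perimeter_simplified(x)
```

Agreement at several sample points is not a proof, but combined with the term-by-term simplification it is a reliable check against arithmetic slips.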
Write an algebraic expression for the area of the shape.
To find the area, add up the values of all the tiles within the perimeter.
|
Tolerance - Maple Help
Introduction to the Tolerances Package
Entering Tolerances
Computing with Tolerances
Nominal Value and Tolerance Value
Tolerances and Units
with(Tolerances)
The Tolerances package provides an environment to perform computations with quantities involving tolerances.
After issuing the with(Tolerances) command, quantities involving tolerances can now be entered using a natural notation and any computations involving such quantities will return a value with the associated tolerance result.
The Tolerances package uses interval arithmetic to perform its computations.
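The interval arithmetic behind these computations can be sketched in a few lines of Python (a hypothetical re-implementation for illustration, not Maple's actual code; the class and method names are mine). A value v ± t is treated as the interval [v - t, v + t]; addition adds endpoints, and multiplication takes the extreme products of endpoints:

```python
class Tol:
    """Minimal interval-arithmetic model of a value with tolerance."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    @classmethod
    def pm(cls, value, tol):
        # value +- tol is the interval [value - tol, value + tol]
        return cls(value - tol, value + tol)

    @property
    def nominal(self):
        # center of the confidence interval
        return (self.lo + self.hi) / 2

    @property
    def tolerance(self):
        # half the width of the confidence interval
        return (self.hi - self.lo) / 2

    def __add__(self, other):
        return Tol(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Tol(min(ps), max(ps))

a = Tol.pm(2.0, 0.10)   # 2.000 +- 0.100
b = Tol.pm(3.0, 0.05)   # 3.000 +- 0.050
s = a + b               # 5.000 +- 0.150
p = a * b               # 6.005 +- 0.400
```

With these rules, (2 ± 0.1)·(3 ± 0.05) comes out as 6.005 ± 0.400, which is the kind of result the package returns in the examples on this page.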
Tolerances are entered by specifying the value, followed by the (absolute) tolerance, separated by a plus/minus (±) sign. To enter a plus/minus sign:
Use the palettes. See View>Palettes>Common Symbols. (Standard Worksheet interface only)
Use symbol completion. Enter plusmn, and then press Esc. (Standard Worksheet interface 2-D input only) For more information, see symbol completion.
Enter &+-.
\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{2.000}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.100}
r := (Pi/2) &+- 0.2;
\textcolor[rgb]{0,0,1}{r}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{1.571}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.200}
Note: The plus/minus operator takes precedence over most operators, like ^ and *. As a result, parentheses are required around any non-atomic expression for a nominal value or a tolerance value. For more information, see operators/precedence.
In addition to simple arithmetic, the following functions can be used with Tolerances:
\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{2.000}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.100}
\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{3.000}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.050}
\textcolor[rgb]{0,0,1}{5.000}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.150}
\textcolor[rgb]{0,0,1}{6.005}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.400}
\textcolor[rgb]{0,0,1}{8.127}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{1.484}
\textcolor[rgb]{0,0,1}{0.905}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.042}
exp(1/b);
\textcolor[rgb]{0,0,1}{1.396}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.008}
By default, the Tolerances package displays values with 3 digits even though computations are always performed at the precision indicated by the Digits variable. Use interface(displayprecision) to control the number of displayed digits.
interface(displayprecision=10);
\textcolor[rgb]{0,0,1}{-1}
a := (1/3) &+- 0.1;
\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{0.3333333332}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.1000000000}
interface(displayprecision=3);
\textcolor[rgb]{0,0,1}{10}
\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{0.333}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.100}
The NominalValue command computes the nominal value of a quantity with a tolerance. The nominal value is the center of the confidence interval.
The ToleranceValue command computes the tolerance value of a quantity with a tolerance. The tolerance is the width of the confidence interval divided by two.
\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{2.000}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.100}
\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{3.000}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.050}
\textcolor[rgb]{0,0,1}{5.000}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.150}
NominalValue(a+b);
\textcolor[rgb]{0,0,1}{5.000}
ToleranceValue(a+b);
\textcolor[rgb]{0,0,1}{0.150}
Tolerances and Units can be used together in the same computation.
a := 150 &+- 10 * Unit(m);
\textcolor[rgb]{0,0,1}{a}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{150.000}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{10.000}\textcolor[rgb]{0,0,1}{}⟦\textcolor[rgb]{0,0,1}{m}⟧
b := 0.2 &+- 0.001 * Unit(km);
\textcolor[rgb]{0,0,1}{b}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{0.200}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{0.001}\textcolor[rgb]{0,0,1}{}⟦\textcolor[rgb]{0,0,1}{\mathrm{km}}⟧
C := 2*a+2*b;
\textcolor[rgb]{0,0,1}{C}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{700.000}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{22.000}\textcolor[rgb]{0,0,1}{}⟦\textcolor[rgb]{0,0,1}{m}⟧
A := a*b;
\textcolor[rgb]{0,0,1}{A}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{3.001}\textcolor[rgb]{0,0,1}{×}{\textcolor[rgb]{0,0,1}{10}}^{\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{±}\textcolor[rgb]{0,0,1}{2.150}\textcolor[rgb]{0,0,1}{×}{\textcolor[rgb]{0,0,1}{10}}^{\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{}⟦{\textcolor[rgb]{0,0,1}{m}}^{\textcolor[rgb]{0,0,1}{2}}⟧
|
Simplify each expression using the rules for exponents.
\frac { 3 ^ { 5 } } { 3 ^ { 10 } }
3^{5-10}
3^{-5}=\frac{1}{3^{5}}
10x^{4}\left(10x\right)^{−2}
Remember, when an exponent is located outside of a group of parentheses, the exponent applies to each and every value within the parentheses.
10x^{4}\left(10^{−2}x^{−2}\right)
\frac{{\it x}^{2}}{10}
( \frac { 1 } { 4 } ) ^ { 3 } \cdot ( 4 ) ^ { 2 }
\left(4^{−1}\right)^{3}\left(4\right)^{2}
\frac { ( x y ) ^ { 3 } } { x y ^ { 3 } }
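These exponent-rule simplifications can be spot-checked with exact rational arithmetic (a quick sketch; the sample values are mine):

```python
from fractions import Fraction

# 3^5 / 3^10 = 3^(5-10) = 3^-5 = 1/3^5
assert Fraction(3**5, 3**10) == Fraction(1, 3**5)

# 10 x^4 (10x)^-2 = x^2 / 10, checked at a sample point
x = Fraction(7)
assert 10 * x**4 * Fraction(1, (10 * x)**2) == x**2 / 10

# (1/4)^3 * 4^2 = 4^(-3) * 4^2 = 4^-1 = 1/4
assert Fraction(1, 4)**3 * 4**2 == Fraction(1, 4)

# (xy)^3 / (x y^3) = x^3 y^3 / (x y^3) = x^2, checked at a sample point
y = Fraction(5)
assert (x * y)**3 / (x * y**3) == x**2
```

Using `Fraction` avoids floating-point rounding, so each identity either holds exactly or the assertion fails.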
|
closelink - Maple Help
Close an open link to a MATLAB® session
closelink()
The closelink command closes a MATLAB® session linked to the current Maple session. Note that closing the session removes all data associated with that MATLAB® session.
There are three ways of opening a MATLAB® session from within Maple: using with(Matlab) to establish a link to all Matlab commands; using a specific Matlab command such as Matlab[fft]; or using the openlink() command. The closelink command closes the link to the MATLAB® session no matter which method was used to start it.
After closing the link with MATLAB®, you lose all access to the MATLAB® data associated with that session. Any subsequent calls that use the Matlab library (for example, executing another Matlab[fft] command from within Maple) must open a new link.
If the MATLAB® session is terminated while the link is still open, you must call closelink before establishing a new link.
Executing the closelink command returns NULL.
Opening a link to a new MATLAB® session
\mathrm{Matlab}[\mathrm{openlink}]\left(\right)
Closing the current link to MATLAB®
\mathrm{Matlab}[\mathrm{closelink}]\left(\right)
|
Measuring the difficulty of SAT instances
Given an instance of SAT, I would like to be able to estimate how difficult it will be to solve the instance.
One way is to run existing solvers, but that kind of defeats the purpose of estimating difficulty. A second way might be looking at the ratio of clauses to variables, as is done for phase transitions in random-SAT, but I am sure better methods exist.
Given an instance of SAT, are there some fast heuristics to measure the difficulty? The only condition is that these heuristics be faster than actually running existing SAT solvers on the instance.
Related: Which SAT problems are easy? on cstheory.SE. That question asks about tractable sets of instances. This is a similar question, but not exactly the same. I am really interested in a heuristic that, given a single instance, makes some sort of semi-intelligent guess as to whether the instance will be a hard one to solve.
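For reference, the clause-to-variable ratio mentioned in the question is cheap to compute. The sketch below (my own helper names; input assumed to be a DIMACS-style list of clauses) flags instances near the random 3-SAT phase-transition ratio of roughly 4.27, though that threshold is meaningful only for uniformly random 3-SAT:

```python
def clause_variable_ratio(clauses):
    """clauses: list of clauses, each a list of nonzero ints
    (DIMACS-style literals). Returns m/n, clauses over variables."""
    variables = {abs(lit) for clause in clauses for lit in clause}
    return len(clauses) / len(variables)

def near_phase_transition(clauses, threshold=4.27, window=0.5):
    """Crude hardness flag: random 3-SAT instances with m/n near the
    satisfiability phase transition tend to be hardest for solvers."""
    return abs(clause_variable_ratio(clauses) - threshold) <= window

# Toy instance: 3 clauses over variables {1, 2, 3}
cnf = [[1, -2, 3], [-1, 2], [2, -3]]
ratio = clause_variable_ratio(cnf)   # 3 clauses / 3 variables = 1.0
```

This heuristic says nothing about structured industrial instances, which is exactly the gap the structural measures discussed in the answer (treewidth, backdoors, space complexity) try to fill.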
In general, this is a very relevant and interesting research question. “One way is to run existing solvers…” and what would this even tell us exactly? We could see empirically that an instance seems hard for a specific solver or a specific algorithm/heuristic, but what does it really tell about the hardness of the instance?
One way that has been pursued is the identification of various structural properties of instances that lead to efficient algorithms. These properties are preferably "easily" identifiable. An example is the topology of the underlying constraint graph, measured using various graph width parameters. For example, it is known that an instance is solvable in polynomial time if the treewidth of the underlying constraint graph is bounded by a constant.
Another approach has focused on the role of hidden structure of instances. One example is the backdoor set, meaning the set of variables such that when they are instantiated, the remaining problem simplifies to a tractable class. For example, Williams et al., 2003 [1] show that even when taking into account the cost of searching for backdoor variables, one can still obtain an overall computational advantage by focusing on a backdoor set, provided the set is sufficiently small. Furthermore, Dilkina et al., 2007 [2] note that a solver called Satz-Rand is remarkably good at finding small strong backdoors on a range of experimental domains.
More recently, Ansótegui et al., 2008 [3] propose the use of tree-like space complexity as a measure for DPLL-based solvers. They prove that constant-bounded space likewise implies the existence of a polynomial-time decision algorithm, with the space being the degree of the polynomial (Theorem 6 in the paper). Moreover, they show the space is smaller than the size of cycle-cutsets. In fact, under certain assumptions, the space is also smaller than the size of backdoors.
They also formalize what I think you are after, that is:
Find a measure \psi and an algorithm that, given a formula \Gamma, decides satisfiability in time O(n^{\psi(\Gamma)}). The smaller the measure is, the better it characterizes the hardness of a formula.
[1] Williams, Ryan, Carla P. Gomes, and Bart Selman. “Backdoors to typical case complexity.” International Joint Conference on Artificial Intelligence. Vol. 18, 2003.
[2] Dilkina, Bistra, Carla Gomes, and Ashish Sabharwal. “Tradeoffs in the Complexity of Backdoor Detection.” Principles and Practice of Constraint Programming (CP 2007), pp. 256-270, 2007.
[3] Ansótegui, Carlos, Maria Luisa Bonet, Jordi Levy, and Felip Manya. “Measuring the Hardness of SAT Instances.” In Proceedings of the 23rd National Conference on Artificial Intelligence (AAAI’08), pp. 222-228, 2008.
Source : Link , Question Author : Artem Kaznatcheev , Answer Author : Juho
|
Analysis of influence of bridge deck damage on first order natural frequency of simply supported T-beam bridge | JVE Journals
Shuichang Li1
1Guangxi Nantian Expressway Co., Ltd, Nanning, China
Received 21 December 2020; received in revised form 5 January 2021; accepted 12 January 2021; published 25 March 2021
Copyright © 2021 Shuichang Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
For a simply supported T beam bridge of the girder grid system, damage to the bridge deck inevitably changes the overall stiffness of the bridge and thereby affects its natural dynamic characteristics. The variation of the first order natural frequency of the bridge before and after damage to the T beam flange was analyzed using the finite element method, and the influence law of T beam flange damage on the natural frequency of the simply supported T beam bridge was obtained. The analysis showed that damage to the T beam flange brings about a significant change in the first order natural frequency. Taking the natural frequency test of a multi-span simply supported beam bridge in practical engineering as an example, the first order natural frequencies of the undamaged spans and the damaged span of the bridge deck were tested, and the influence of the damage on the first order natural frequency was analyzed. The results showed that the first order natural frequency decreases by 15.63 % when a single T beam flange is completely damaged at the midspan position. The damage of a T beam bridge can therefore be preliminarily judged from the change of the natural frequency of the simply supported beam bridge in practical engineering.
Keywords: simply-supported T beam, flange damage, natural frequency, dynamic test.
With the extension of operation time, the service performance of a bridge declines during actual operation, resulting in defects of varying severity [1]. Heavy vehicle traffic, which is especially common on some Chinese national roads, accelerates the deterioration of bridge performance and easily leads to damage and other defects [2, 3].
Reinforced concrete simply supported T beams have been widely used in bridge construction since the 1990s. A main feature of this bridge type is the beam lattice (grillage) system, formed by assembling multiple T beams. During assembly, the T beams are connected by wet joints at the flanges, which also carry the bridge deck. The T beam flange is the most vulnerable position under the reciprocating action of heavy vehicles, especially near the centerline of the bridge deck at midspan. Such damage inevitably changes the dynamic and static characteristics of the bridge [4, 5]. Identifying such damage from changes in dynamic characteristics has been a hot topic among scholars in recent years [6, 7]. The literature has shown that bridge damage changes the natural dynamic characteristics of a bridge, such as its natural frequencies and mode shapes [8]. It is therefore of great significance to study how strongly bridge damage affects the dynamic characteristics, and to identify bridge damage using measured dynamic characteristics.
Taking a reinforced concrete simply supported T beam bridge as the object, this paper studies and analyzes the influence of T beam flange damage (bridge deck damage) on the natural frequency of the bridge. Using the finite element method, calculation models of the damaged and undamaged bridge deck were established, and the theoretically calculated natural frequency variation was compared. Based on an actual multi-span simply supported beam bridge, the measured natural frequencies of an undamaged span and a damaged span were also compared. Finally, the effect of deck damage on the first order natural frequency of simply supported T beam bridges was analyzed by both theoretical and experimental methods. The results provide a basis for identifying bridge damage from the variation of the natural frequency of simply supported T beam bridges in engineering.
2. Theoretical calculation method for the first order natural frequency of simple supported beam
For dynamic analysis, the simply supported beam bridge can be simplified, according to mechanics, to a simply supported beam model.
The vibration equilibrium equation of simply supported beam can be formed by the moment balance equation and force balance equation:
EI\frac{{\partial }^{4}y\left(x,t\right)}{\partial {x}^{4}}+m\frac{{\partial }^{2}y\left(x,t\right)}{\partial {t}^{2}}=0.
The formula of natural frequency theory of simply supported beam is obtained by solving Eq. (1):
{\omega }_{n}={\left(\frac{n\pi }{L}\right)}^{2}\sqrt{\frac{EI}{m}}, n=1,2,3\dots ,
where {\omega }_{n} is the n-th order natural circular frequency of the simply supported beam, m is the mass per unit length, L is the calculated span, and EI is the bending stiffness of the beam section.
From Eq. (2) it can be seen that, for a constant span, any change in the bending stiffness EI of the beam section necessarily changes the circular frequency {\omega }_{n}. For a T beam bridge, damage to the wing plate (flange) of the T section reduces the moment of inertia I of the section, and hence the bending stiffness EI. According to Eq. (2), the natural frequency of the T beam is therefore reduced by breakage of the flange.
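Eq. (2) can be evaluated directly. The sketch below uses illustrative values (assumed for demonstration, not the actual parameters of the bridge studied here) to show how a reduction in EI lowers the first natural frequency; since the frequency scales with the square root of EI, a stiffness loss of about 29 % corresponds to roughly a 15.7 % frequency drop, comparable in magnitude to the 15.63 % reduction reported in this paper:

```python
import math

def first_natural_frequency_hz(L, EI, m, n=1):
    """Eq. (2): omega_n = (n*pi/L)^2 * sqrt(EI/m) in rad/s,
    converted to Hz by dividing by 2*pi."""
    omega_n = (n * math.pi / L) ** 2 * math.sqrt(EI / m)
    return omega_n / (2.0 * math.pi)

# Illustrative values (assumed, not the bridge's measured properties):
L = 22.0      # span, m
EI = 1.2e10   # section bending stiffness, N*m^2
m = 1.5e4     # mass per unit length, kg/m

f_intact = first_natural_frequency_hz(L, EI, m)

# Flange damage reduces I and hence EI; a ~29 % stiffness loss gives
# a frequency ratio of sqrt(0.71), i.e. roughly a 15.7 % drop.
f_damaged = first_natural_frequency_hz(L, 0.71 * EI, m)
```

The relative frequency drop depends only on the stiffness ratio, not on the assumed L, EI, or m, which is why frequency change is a convenient damage indicator.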
3. Finite element analysis of the effect of flange breakage on first order natural frequency
Taking a simply supported T beam bridge with a span of 22 m as an example, a finite element model was established to analyze the effect of bridge deck damage on the first order natural frequency. The bridge cross section consists of 7 T beams; the middle and side beams have the same width, and the T beams are connected by welded steel plates. Each T beam has a web bottom width of 0.36 m, a web height of 1.30 m, and a flange cantilever width of 0.70 m on each side (see Fig. 1 for the T beam section dimensions). The beam concrete is C40. The finite element model of the bridge was used to simulate both the intact deck condition and a damaged condition in which the deck (flange) between the 3# and 4# T beams at midspan is completely damaged. The finite element model is shown in Fig. 2.
3.2. Calculation results and comparison of natural frequencies before and after damage
From the finite element calculation result, the first mode of simply supported beam was obtained as shown in Fig. 3. From this, the calculation results of natural frequency before and after damage were obtained, as shown in Table 1.
Fig. 1. Cross section of T beam
Fig. 2. Finite element model of simply supported T beam bridge based on beam grid method
Table 1. Comparison of natural frequencies before and after damage
First order natural frequency (Hz)
Frequency reduction percentage
Fig. 3. First order natural mode
It can be seen from Table 1 that the first order natural frequency is clearly reduced under the damaged condition, with the reduction ratio reaching 15 %.
4. Real bridge test on the effect of wing plate breakage on first order natural frequency
The bridge is located on the second-class highway (National Highway 325) from Nanning to Beihai in Hepu County, Guangxi, and was built in 1990. The total length of the bridge is 330.94 m and the total deck width is 12.50 m. The superstructure spans are arranged as 8.50 m (cast-in-place reinforced concrete slab beam) + 14×22.20 m (precast reinforced concrete simply supported T beams) + 8.50 m (cast-in-place reinforced concrete slab beam), and the T beams rest on plate rubber bearings. There are 14 spans of reinforced concrete simply supported T beams with a span of 22.2 m. The bridge cross section is shown in Fig. 4.
4.2. Damage to bridges
Because of the bridge's long service time, its performance has decayed, and the heavy vehicle traffic on the national highway has broken the bridge deck (wing plates) of several T beam spans to varying degrees. The 6# span T beam flanges are the most seriously damaged: at midspan, between the two transverse diaphragms, the flanges between two T beams are completely broken, as shown in Fig. 5. The 14# and 15# spans are undamaged.
Fig. 4. Cross-sectional view (cm)
Fig. 5. Serious breakage of T beam flange in middle span of 6# span
4.3. Testing and comparative analysis of first order natural frequency
According to the above analysis, the natural frequency of the 6# span will decrease after the T beam flange damage and will be significantly smaller than that of the undamaged 14# and 15# spans. The natural frequencies of the damaged and undamaged spans were therefore identified by a pulsating (ambient vibration) test. Using DH610V acceleration transducers made by Donghua, the vibration acceleration time histories of the 6#, 14#, and 15# spans were collected. Typical acceleration time history curves are shown in Fig. 6.
Spectral analysis of the acceleration time history data yielded the vibration spectra of the 6#, 14#, and 15# spans, shown in Figs. 7 and 8. The first order natural frequency of each span was then identified; the comparison is shown in Table 2.
It can be seen from Table 2 that the first order natural frequency of 6# span with serious deck damage is significantly lower than that of 14# and 15# spans with undamaged deck, and the reduction percentage is 15.63 %, which is consistent with the theoretical calculation results in Table 1. Hence, the change of the natural frequency of simply supported T beam bridge can reflect the damage of the bridge in engineering.
Fig. 6. Time history curve of acceleration in midspan
Fig. 7. Spectrogram of 14# span
Fig. 8. Spectrogram of 6# span
Table 2. Comparison of first order natural frequency
First order natural frequency
Calculated value (Hz)
Measured value (Hz)
Measured value / Calculated value
6# span
14# span
In this paper, the change of the first order natural frequency of a simply supported T beam bridge before and after bridge deck damage was analyzed with the finite element method, and the influence law of T beam flange damage on the natural frequency of the simply supported T beam bridge was obtained. Using an engineering contrast test, the first order natural frequencies of a damaged span and an undamaged span of the same span length and cross section were measured. The following conclusions were drawn:
1) For T beam bridge, the damage of T shaped section flange will lead to the reduction of the cross-section moment of inertia, which will lead to the reduction of the bending stiffness of the section, and finally cause the obvious reduction of the natural frequency of the T beam.
2) The first order natural frequency of the simply supported T beam bridge is clearly reduced when the flange between two transverse diaphragms and two beams is completely broken at midspan. The contrast test results showed that the reduction percentage reached 15.63 %.
3) The change in natural frequency of a simply supported T beam bridge can reflect bridge damage in engineering practice.
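Conclusion 1 can be made concrete: for a simply supported Euler-Bernoulli beam the first natural frequency scales with the square root of the bending stiffness EI, so the measured 15.63 % frequency drop corresponds to the moment of inertia falling to roughly (1 − 0.1563)² ≈ 0.71 of its intact value. A minimal sketch (all numeric inputs below are assumptions for illustration, not the tested bridge):

```python
import math

# First natural frequency of a simply supported beam (Euler-Bernoulli):
#   f1 = (pi / (2 L^2)) * sqrt(E I / m_bar)
# so f1 scales with sqrt(I): flange damage that reduces the section moment
# of inertia I lowers f1 by the factor sqrt(I_damaged / I_intact).
def f1(E, I, L, m_bar):
    return (math.pi / (2 * L**2)) * math.sqrt(E * I / m_bar)

# Illustrative (assumed) numbers: Pa, m, kg/m, m^4
E, L, m_bar, I0 = 3.45e10, 20.0, 1.5e4, 0.12
ratio = f1(E, 0.712 * I0, L, m_bar) / f1(E, I0, L, m_bar)
print(f"frequency reduction: {1 - ratio:.2%}")  # ~15.6 % for an I ratio of 0.712
```

Note that E, L and the mass per unit length cancel in the ratio, so the frequency reduction depends only on the stiffness ratio.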
|
Rescuing Legacy Seismic Data FAIR’ly
Lorraine J. Hwang; Tim Ahern; Cynthia J. Ebinger; William L. Ellsworth; Garrett G. Euler; Emile A. Okal; Paul G. Okubo; William R. Walter
Paul A. Spudich (1950–2019)
Ralph J. Archuleta; Jon B. Fletcher
A Plan for a Long‐Term, Automated, Broadband Seismic Monitoring Network on the Global Seafloor
Monica D. Kohler; Katrin Hafner; Jeffrey Park; Jessica C. E. Irving; Jackie Caplan‐Auerbach; John Collins; Jonathan Berger; Anne M. Tréhu; Barbara Romanowicz; Robert L. Woodward
Preface to the Focus Section on Historical Seismograms
Allison L. Bent; Diane I. Doser; Lorraine J. Hwang
Calibration Analysis and Noise Estimates of WWSSN Station ALQ (Albuquerque, New Mexico)
Adam T. Ringler; David C. Wilson; Emily Wolin; Tyler Storm; Leo Sandoval
Seismological Research Letters November 06, 2019, Vol.91, 1359-1366. doi:https://doi.org/10.1785/0220190201
Analysis of Intraslab Predigital Earthquakes of the South‐Central Alaska Region
Did Oldham Discover the Core After All? Handling Imprecise Historical Data with Hierarchical Bayesian Model Selection Methods
Jack B. Muir; Victor C. Tsai
Analog Seismogram Archives at the Earthquake Research Institute, the University of Tokyo
Kenji Satake; Hiroshi Tsuruoka; Satoko Murotani; Kenshiro Tsumura
Conservation and Utilization of Historical Seismograms from Early Stage (A.D. 1904–1948), Mainland China
Data Retrieval System of JMA Analog Seismograms in the Headquarters for Earthquake Research Promotion of the Japanese Government
Mitsuko Furumura; Koji Iwasa; Yasunori Suzuki; Tomotsugu Demachi; Takeo Ishibe; Ritsuko S. Matsu’ura
Finding, Organizing, and Preserving Legacy Nuclear Test Monitoring Data—Examples from the Livermore Nevada Network
Recovering Analog‐Tape Seismograms from the 1980 Mount St. Helens Pre‐Eruption Period
Stephen D. Malone
Digitization of the Carnegie Analog Broadband Instruments Tape Records (1965–1996)
Steven Golden; Lara S. Wagner; Brian Schleigh; Daniela Power; Diana C. Roman; Selwyn I. Sacks; Helen Janiszewski
Ritsuko S. Matsu’ura; Norihito Umino; Yoshiaki Tamura; Yoshihisa Iio; Minoru Kasahara; Takahiro Ohkura
Satoko Murotani; Kenji Satake; Hiroshi Tsuruoka; Hiroe Miyake; Toshiaki Sato; Tetsuo Hashimoto; Hiroo Kanamori
Ming‐Che Hsieh; Yen‐Yu Lin; Kuo‐Fong Ma; Li Zhao; Yi‐Wun Liao
Preservation and Reuse of Historical Seismic Data in Mexico: SISMOMex and the Online “National Seismogram Library”
Xyoli Pérez‐Campos; Saul Armendáriz‐Sánchez; Víctor H. Espíndola; Minerva Castro‐Escamilla; Jesus Perez; Luis Manuel Casiano; Ivan Rodriguez Rasilla; Caridad Cárdenas Monroy; Arturo Cárdenas
Recovery and Calibration of Legacy Underground Nuclear Test Seismic Data from the Leo Brady Seismic Network
Brian A. Young; Robert E. Abbott
Probabilistic Seismic Hazard Assessment Using Legacy Data in Georgia
Tuna Onur; Rengin Gok; Tea Godoladze; Irakli Gunia; Giorgi Boichenko; Albert Buzaladze; Nino Tumanova; Manana Dzmanashvili; Lasha Sukhishvili; Zurab Javakishvili; Eric Cowgill; István Bondár; Gurban Yetirmishli
On the Extraction of Microseismic Ground Motion from Analog Seismograms for the Validation of Ocean‐Climate Models
Thomas Lecocq; Fabrice Ardhuin; Fabienne Collin; Thierry Camelbeeck
Paul G. Richards; Margaret Hellweg
Near‐Field Ground Motions from the July 2019 Ridgecrest, California, Earthquake Sequence
Susan E. Hough; Eric Thompson; Grace A. Parker; Robert W. Graves; Kenneth W. Hudnut; Jason Patton; Timothy Dawson; Tyler Ladinsky; Michael Oskin; Krittanon Sirorattanakul; Kelly Blake; Annemarie Baltay; Elizabeth Cochran
Automatic Inversion of Rupture Processes of the Foreshock and Mainshock and Correlation of the Seismicity during the 2019 Ridgecrest Earthquake Sequence
Yijun Zhang; Xujun Zheng; Qiang Chen; Xianwen Liu; Xiaomei Huang; Yinghui Yang; Qian Xu; Jingjing Zhao
Operational Earthquake Forecasting during the 2019 Ridgecrest, California, Earthquake Sequence with the UCERF3‐ETAS Model
Kevin R. Milner; Edward H. Field; William H. Savran; Morgan T. Page; Thomas H. Jordan
Watts Bar Nuclear Power Plant Strong‐Motion Records of the 12 December 2018 M 4.4 Decatur, Tennessee, Earthquake
Vladimir Graizer; Dogan Seber; Scott Stovall
Seismotectonics of the 2017–2018 Songyuan Earthquake Sequence, Northeastern China: Passive Bookshelf Faulting and Block Rotation in the Songliao Basin
Zhe Su; Xi‐Wei Xu; Shan‐Shan Liang; Erchie Wang
Significance of Nonplanar Rupture of the Mainshock and Optimal Faulting in Forecasting Aftershocks of the 2015 Mw 7.8 Gorkha Earthquake
Neng Xiong; Fenglin Niu; Rongjiang Wang
Mw 8.1 Concepción Earthquake: A Deep Megathrust Foreshock That Started the 1960 Central‐South Chilean Seismic Sequence
Javier Ojeda; Sergio Ruiz; Francisco del Campo; Matías Carvajal
Evaluation of Earthquake Magnitude Estimation and Event Detection Thresholds for Real‐Time GNSS Networks: Examples from Recent Events Captured by the Network of the Americas
Kathleen M. Hodgkinson; David J. Mencin; Karl Feaux; Charles Sievers; Glen S. Mattioli
High‐Accuracy Discrimination of Blasts and Earthquakes Using Neural Networks With Multiwindow Spectral Data
Fajun Miao; N. Seth Carpenter; Zhenming Wang; Andrew S. Holcomb; Edward W. Woolery
Sensor Orientation of Iranian Broadband Seismic Stations from P‐Wave Particle Motion
Jochen Braunmiller; John Nabelek; Abdolreza Ghods
Pre‐Sinkhole Seismicity at the Napoleonville Salt Dome: Implications for Local Seismic Monitoring of Underground Caverns
Sean R. Ford; Douglas S. Dreger
Spatiotemporal Seismotectonic Implications for the Izu–Bonin–Mariana Subduction Zone from b‐Values
Zhou Gui; Yongliang Bai; Zhenjie Wang; Dongdong Dong; Shiguo Wu; Tongfei Li
Robert E. Anthony; Adam T. Ringler; David C. Wilson; Manochehr Bahavar; Keith D. Koper
Mapping Seismic Tonal Noise in the Contiguous United States
Omar E. Marcillo; Jonathan MacCarthy
Evaluating Uncertainties of Phase Velocity Measurements from Cross‐Correlations of Ambient Seismic Noise
Yinhe Luo; Yingjie Yang; Jinyun Xie; Xiaozhou Yang; Fengru Ren; Kaifeng Zhao; Hongrui Xu
Observation by Means of An Underground Ring Laser Gyroscope of Love Waves Generated in the Mediterranean Sea: Source Direction and Comparison with Models
Andreino Simonelli; Gaetano De Luca; Umberto Giacomelli; Giuseppe Terreni; Angela Di Virgilio
Using Deep Learning to Derive Shear‐Wave Velocity Models from Surface‐Wave Dispersion Data
Jing Hu; Hongrui Qiu; Haijiang Zhang; Yehuda Ben‐Zion
Julie Schnurr; Keehoon Kim; Milton A. Garces; Arthur Rodgers
Earthquake Early Warning ShakeAlert 2.0: Public Rollout
Monica D. Kohler; Deborah E. Smith; Jennifer Andrews; Angela I. Chung; Renate Hartog; Ivan Henson; Douglas D. Given; Robert de Groot; Stephen Guiwits
Evidence for Holocene Activity on the Jiali Fault, an Active Block Boundary in the Southeastern Tibetan Plateau
Hu Wang; Kaijin Li; Lichun Chen; Xingqiang Chen; An Li
Crustal Characteristics in the Subduction Zone of Mexico: Implication of the Tectonostratigraphic Terranes on Slab Tearing
Dana Carciumaru; Roberto Ortega; Jorge Castillo Castellanos; Eduardo Huesca‐Pérez
Comparison of Single‐Trace and Multiple‐Trace Polarity Determination for Surface Microseismic Data Using Deep Learning
Xiao Tian; Wei Zhang; Xiong Zhang; Jie Zhang; Qingshan Zhang; Xiangteng Wang; Quanshi Guo
Seismology in the Cloud: A New Streaming Workflow
Jonathan MacCarthy; Omar Marcillo; Chad Trabant
Sp Receiver‐Function Images of African and Arabian Lithosphere: Survey of Newly Available Broadband Data
Lin Liu; Siyou Tong; Sanzhong Li; Saleh Qaysi
Inversion of Source Mechanisms for Single‐Force Events Using Broadband Waveforms
Minhan Sheng; Risheng Chu; Yong Wang; Qingdong Wang
Natural Seismicity in and around the Rome Trough, Eastern Kentucky, from a Temporary Seismic Network
N. Seth Carpenter; Andrew S. Holcomb; Edward W. Woolery; Zhenming Wang; John B. Hickman; Steven L. Roche
Seismological Observatory Software: 30 Yr of SEISAN
Jens Havskov; Peter H. Voss; Lars Ottemöller
NoisePy: A New High‐Performance Python Tool for Ambient‐Noise Seismology
Chengxin Jiang; Marine A. Denolle
SHAPE: A MATLAB Software Package for Time‐Dependent Seismic Hazard Analysis
Konstantinos Leptokaropoulos; Stanisław Lasocki
Marc Wathelet; Jean‐Luc Chatelain; Cécile Cornou; Giuseppe Di Giulio; Bertrand Guillier; Matthias Ohrnberger; Alexandros Savvaidis
The Damaging Earthquake of 9 October 859 in Kairouan (Tunisia): Evidence from Historical and Archeoseismological Investigations
Nejib Bahrouni; Mustapha Meghraoui; Klaus Hinzen; Mohamed Arfaoui; Faouzi Mahfoud
The Deployment of the Seismometer to Investigate Ice and Ocean Structure (SIIOS) on Gulkana Glacier, Alaska
Angela G. Marusiak; Nicholas C. Schmerr; Daniella N. DellaGiustina; Erin C. Pettit; Peter H. Dahl; Brad Avenson; S. Hop Bailey; Veronica J. Bray; Natalie Wagner; Chris G. Carr; Renee C. Weber
A Report on Broadband Seismological Experiment in the Jammu and Kashmir Himalaya (JAKSNET)
Swati Sharma; Supriyo Mitra; Shubham Sharma; Keith Priestley; Sunil K. Wanchoo; Debarchan Powali; Liyaqet Ali
Erratum to Analysis of the 2014 Mw 7.3 Papanoa (Mexico) Earthquake: Implications for Seismic Hazard Assessment
Pouye Yazdi; Jorge M. Gaspar‐Escribano; Miguel A. Santoyo; Alejandra Staller
Seismological Research Letters April 08, 2020, Vol.91, 1927. doi:https://doi.org/10.1785/0220200098
Seismological Research Letters May 01, 2020, Vol.91, 1928. doi:https://doi.org/10.1785/0220200099
The medieval Prague (Czech Republic) Astronomical Clock serves as a reminder that accurate timing is an integral part of seismology, a topic highlighted in an article by Agnew (this issue) covering the history and importance of timekeeping in seismology. Within the Historical Seismogram focus section in this issue are many success stories involving the preservation and analysis of analog seismograms but many more seismograms are in danger of being lost due to natural deterioration and pressures to reduce storage space. Time is of the essence in preserving this valuable data set.
Image credit: iStock.com/Bill_Vorasate
|
{\displaystyle \int e^{-2x}\sin(2x)~dx}
1. Integration by parts tells us
{\displaystyle \int u~dv=uv-\int v~du.}
2. How would we evaluate

{\displaystyle \int e^{x}\sin x~dx?}
Let {\displaystyle u=\sin(x)} and {\displaystyle dv=e^{x}~dx.} Then {\displaystyle du=\cos(x)~dx} and {\displaystyle v=e^{x}.}
{\displaystyle \int e^{x}\sin x~dx=e^{x}\sin(x)-\int e^{x}\cos(x)~dx.}
Now, we need to use integration by parts a second time.
Let {\displaystyle u=\cos(x)} and {\displaystyle dv=e^{x}~dx.} Then {\displaystyle du=-\sin(x)~dx} and {\displaystyle v=e^{x}.}
{\displaystyle {\begin{array}{rcl}\displaystyle {\int e^{x}\sin x~dx}&=&\displaystyle {e^{x}\sin(x)-\left(e^{x}\cos(x)-\int -e^{x}\sin(x)~dx\right)}\\&&\\&=&\displaystyle {e^{x}(\sin(x)-\cos(x))-\int e^{x}\sin(x)~dx.}\\\end{array}}}
Notice, we are back where we started.
Therefore, adding the last term on the right hand side to the opposite side, we get
{\displaystyle 2\int e^{x}\sin(x)~dx=e^{x}(\sin(x)-\cos(x)).}
Dividing both sides by 2 and adding the constant of integration, we get

{\displaystyle \int e^{x}\sin(x)~dx={\frac {e^{x}}{2}}(\sin(x)-\cos(x))+C.}
We proceed using integration by parts.
Let {\displaystyle u=\sin(2x)} and {\displaystyle dv=e^{-2x}dx.} Then {\displaystyle du=2\cos(2x)dx} and {\displaystyle v={\frac {e^{-2x}}{-2}}.}
{\displaystyle {\begin{array}{rcl}\displaystyle {\int e^{-2x}\sin(2x)~dx}&=&\displaystyle {{\frac {\sin(2x)e^{-2x}}{-2}}-\int {\frac {e^{-2x}2\cos(2x)~dx}{-2}}}\\&&\\&=&\displaystyle {{\frac {\sin(2x)e^{-2x}}{-2}}+\int e^{-2x}\cos(2x)~dx.}\end{array}}}
Now, we need to use integration by parts again.
Let {\displaystyle u=\cos(2x)} and {\displaystyle dv=e^{-2x}dx.} Then {\displaystyle du=-2\sin(2x)dx} and {\displaystyle v={\frac {e^{-2x}}{-2}}.}
{\displaystyle \int e^{-2x}\sin(2x)~dx={\frac {\sin(2x)e^{-2x}}{-2}}+{\frac {\cos(2x)e^{-2x}}{-2}}-\int e^{-2x}\sin(2x)~dx.}
Notice that the integral on the right of the last equation in Step 2
is the same integral that we had at the beginning of the problem.
Thus, if we add the integral on the right to the other side of the equation, we get
{\displaystyle 2\int e^{-2x}\sin(2x)~dx={\frac {\sin(2x)e^{-2x}}{-2}}+{\frac {\cos(2x)e^{-2x}}{-2}}.}
Now, we divide both sides by 2 to get
{\displaystyle \int e^{-2x}\sin(2x)~dx={\frac {\sin(2x)e^{-2x}}{-4}}+{\frac {\cos(2x)e^{-2x}}{-4}}.}
Thus, the final answer is
{\displaystyle \int e^{-2x}\sin(2x)~dx={\frac {e^{-2x}}{-4}}(\sin(2x)+\cos(2x))+C.}
{\displaystyle {\frac {e^{-2x}}{-4}}(\sin(2x)+\cos(2x))+C}
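As a quick sanity check on the final answer (not part of the original solution), one can differentiate the claimed antiderivative numerically and compare it with the integrand:

```python
import math

def f(x):   # the integrand e^(-2x) sin(2x)
    return math.exp(-2 * x) * math.sin(2 * x)

def F(x):   # the claimed antiderivative (constant C omitted)
    return math.exp(-2 * x) / -4 * (math.sin(2 * x) + math.cos(2 * x))

# A central-difference derivative of F should reproduce f at any point:
for x in (0.0, 0.5, 1.3):
    h = 1e-6
    dF = (F(x + h) - F(x - h)) / (2 * h)
    print(abs(dF - f(x)) < 1e-6)  # True
```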
|
dinterp - Maple Help
probabilistic degree interpolation
dinterp(f, n, k, d, p)
Given an integer valued function f : (x1,...,xn, p) -> Z that evaluates a polynomial in n variables modulo p, and a degree bound d on the kth variable, determine probabilistically the degree of the kth variable.
The dinterp function may return FAIL if it encounters a division by zero when evaluating f. It may also return a result for the degree of the kth variable which is too low. The probability that this happens can be decreased by using a larger modulus. A 12 to 20 digit modulus is considered ideal.
f := proc(x,y,z,p) x^2+y^3+z^4 mod p end proc:
\mathrm{dinterp}\left(f,3,1,6,997\right)
\textcolor[rgb]{0,0,1}{2}
\mathrm{dinterp}\left(f,3,2,6,997\right)
\textcolor[rgb]{0,0,1}{3}
\mathrm{dinterp}\left(f,3,3,6,997\right)
\textcolor[rgb]{0,0,1}{4}
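The behavior of dinterp can be sketched in plain Python: evaluate the black-box polynomial along the kth variable and take successive finite differences modulo p, since a degree-m polynomial has vanishing (m+1)-th differences. This is a simplified, deterministic sketch, not Maple's implementation: the other variables are held at a fixed value rather than random ones, so it can under-report the degree in unlucky cases, mirroring the caveat above:

```python
def dinterp(f, n, k, d, p, fixed=1):
    # sample the kth variable at j = 0 .. d+1, holding the others fixed
    vals = []
    for j in range(d + 2):
        args = [fixed] * n
        args[k - 1] = j
        vals.append(f(*args, p))
    deg = -1                      # -1 would mean the zero polynomial
    for m in range(d + 2):
        if any(v % p for v in vals):
            deg = m               # m-th differences are not all zero yet
        # next round of finite differences mod p
        vals = [(b - a) % p for a, b in zip(vals, vals[1:])]
    return deg

f = lambda x, y, z, p: (x**2 + y**3 + z**4) % p
print(dinterp(f, 3, 1, 6, 997))  # 2
print(dinterp(f, 3, 2, 6, 997))  # 3
print(dinterp(f, 3, 3, 6, 997))  # 4
```

The three calls reproduce the Maple results above for the same example polynomial.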
|
Heteroskedasticity | The SAGE Encyclopedia of Communication Research Methods
One of the standard assumptions of the classical linear regression model
{y}_{i}={\mathrm{\beta }}_{0}+{\mathrm{\beta }}_{1}{x}_{i1}+{\mathrm{\beta }}_{2}{x}_{i2}+\cdots +{\mathrm{\beta }}_{k}{x}_{ik}+\mathrm{\epsilon };\phantom{\rule{0.25em}{0ex}}i=1\cdots N
is that the variance of the error term
\left({\mathrm{\epsilon }}_{i}\right)
is the same for all observations, that is
\mathrm{Var}\phantom{\rule{0.25em}{0ex}}\left({\mathrm{\epsilon }}_{i}|{x}_{1i},\phantom{\rule{0.25em}{0ex}}{x}_{2i},\phantom{\rule{0.25em}{0ex}}\cdots \phantom{\rule{0.25em}{0ex}},\phantom{\rule{0.25em}{0ex}}{x}_{ki}\right)={\sigma }^{2}
. The assumption of a constant error variance is known as homoskedasticity and its failure is referred to as heteroskedasticity, or unequal variance. Heteroskedasticity is expressed as
\mathrm{Var}\phantom{\rule{0.25em}{0ex}}\left({\mathrm{\epsilon }}_{i}|{x}_{1i},\phantom{\rule{0.25em}{0ex}}{x}_{2i},\phantom{\rule{0.25em}{0ex}}\cdots \phantom{\rule{0.25em}{0ex}},\phantom{\rule{0.25em}{0ex}}{x}_{ki}\right)={\sigma }_{i}^{2}
, where an i subscript on ...
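The idea can be made concrete with a small simulation (a hedged sketch; the data-generating process below is invented for illustration and assumes numpy is available): errors whose standard deviation grows with x produce residuals that fan out, violating the homoskedasticity assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(1, 10, n)
# Heteroskedastic errors: the conditional standard deviation grows with x,
# i.e. Var(eps_i | x_i) = (0.5 * x_i)^2 instead of a constant sigma^2.
eps = rng.normal(0.0, 0.5 * x)
y = 2.0 + 1.5 * x + eps

# Fit OLS and compare the residual variance for small vs. large x
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)
var_lo = resid[x < 5].var()
var_hi = resid[x >= 5].var()
print(var_lo < var_hi)  # True: the residuals fan out as x grows
```

OLS point estimates remain unbiased here, but the usual standard errors are no longer valid, which is why heteroskedasticity matters for inference.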
|
Suppose the size of a population at time {\displaystyle t} is given by

{\displaystyle N(t)={\frac {1000t}{5+t}},~t\geq 0.}
(a) Determine the size of the population as
{\displaystyle t\rightarrow \infty .}
We call this the limiting population size.
(b) Show that at time
{\displaystyle t=5,}
the size of the population is half its limiting size.
{\displaystyle \lim _{x\rightarrow \infty }{\frac {a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{0}}{b_{n}x^{n}+b_{n-1}x^{n-1}+\cdots +b_{0}}}={\frac {a_{n}}{b_{n}}}}

where {\displaystyle a_{n}\neq 0} and {\displaystyle b_{n}\neq 0.}
{\displaystyle \lim _{t\rightarrow \infty }N(t)=\lim _{t\rightarrow \infty }{\frac {1000t}{5+t}}.}
Using the Background Information, we have
{\displaystyle {\begin{array}{rcl}\displaystyle {\lim _{t\rightarrow \infty }N(t)}&=&\displaystyle {\frac {1000}{1}}\\&&\\&=&\displaystyle {1000.}\end{array}}}
{\displaystyle {\begin{array}{rcl}\displaystyle {N(5)}&=&\displaystyle {\frac {1000(5)}{5+5}}\\&&\\&=&\displaystyle {\frac {1000(5)}{10}}\\&&\\&=&\displaystyle {100(5)}\\&&\\&=&\displaystyle {500}\\&&\\&=&\displaystyle {{\frac {1000}{2}}.}\end{array}}}
(a) {\displaystyle 1000}

(b) {\displaystyle N(5)=500}
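Both parts can be checked numerically (a quick sketch, not part of the original solution):

```python
def N(t):
    return 1000 * t / (5 + t)

# (a) as t grows without bound, N(t) approaches the limiting size 1000
print(round(N(1e9)))  # 1000 (to the nearest integer)

# (b) at t = 5 the population is half the limiting size
print(N(5))           # 500.0
```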
|
Study on control measures of the influence of shallow buried tunnel excavation on the subgrade settlement of high speed railway in operation | JVE Journals
Qiyu Song1
1China Railway Guangzhou Group Co., Ltd, Guangzhou, China
Received 21 January 2021; received in revised form 8 February 2021; accepted 16 February 2021; published 25 March 2021
Copyright © 2021 Qiyu Song. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
A finite element model is established to study settlement control for the ballastless track of a high-speed railway undercrossed by a shallow buried shield tunnel. In particular, the reinforcement mechanism and construction control technology of the Metro Jet System (MJS) method for horizontal reinforcement under complex geological conditions (medium coarse sand, a strongly permeable layer) are discussed. The results show that when the ground is not reinforced and the shield tunnel passes directly under the high-speed railway, the settlement of the railway subgrade directly above the tunnel greatly exceeds the limit of the settlement control standard. Because of site, planning and construction constraints, the horizontal jet grouting pile of the MJS construction method is selected as an auxiliary measure after comprehensive comparison. After completion of the MJS construction, the maximum settlement of the subgrade is 30 mm, and the settlement of the high-speed railway subgrade is about 25 mm.
A finite element model is established to study the settlement control technology
Metro Jet System method for horizontal reinforcement under complex geological conditions is discussed
Embankment deformation of shield crossing high-speed railway under reinforced and unreinforced conditions is compared
Keywords: tunnel, excavation, railway, settlement, subgrade.
As a core technology of underground space development, shield tunneling has been widely used in energy and transportation tunnel construction. Although shield construction has many advantages, and shield machine performance has improved over more than one hundred years of development, the ground movement caused by construction still affects surface structures, and hazards and accidents still happen. If the characteristics of the surrounding strata cannot be accurately predicted and the deformation of the soil around the tunnel cannot be obtained during tunneling, the construction may cause great damage to structures above the tunnel, especially when a subway tunnel must run under a high-speed railway with a maximum speed of 350 km/h. It is therefore important and of practical significance to study settlement control measures that ensure the safety of facilities such as high-speed railways in the construction area.
There are few projects in which a tunnel underpasses a high-speed railway, and existing studies mainly remain at the theoretical analysis level. Zhang et al. [1] analyzed in detail the causes of, and solutions to, the ground displacement caused by the underpassing of a three-tube shield tunnel, providing a reference for similar projects. Tao et al. [2] compared reinforcement schemes for the railway subgrade above a shield tunnel and obtained a better supporting effect. Chen et al. [3] studied the construction scheme and reinforcement measures for a shield tunnel under an existing railway in weak strata. Huang et al. [4] proposed an accurate method for predicting the stability of the rock surrounding a tunnel.
High-speed railways impose almost instantaneous requirements on subgrade deformation, so studying and controlling the subgrade deformation law is very important and of great practical value. In this paper, a finite element model is established to study settlement control for the ballastless track of a high-speed railway undercrossed by a shallow buried shield tunnel. In particular, the reinforcement mechanism and construction control technology of the Metro Jet System (MJS) method for horizontal reinforcement under complex geological conditions (medium coarse sand, a strongly permeable layer) are discussed. This is the first engineering application in China of the new MJS horizontal reinforcement technology under an operating high-speed railway subgrade, and it is also the deepest MJS horizontal reinforcement project in China. The results can provide a reference for similar projects.
According to the general situation of the project, the modeled strata extend 2.5 times the tunnel diameter (15 m) to the left and right of the tunnel structure, and about 2.5 times the tunnel diameter (15 m) below the bottom of the tunnel. The shield tunnel segment ring is 1.5 m long, and the model length in the excavation direction is an integer multiple of the segment ring, 120 m. The model dimensions are 50×120×30 m. The angle between the track slabs of the Beijing-Guangzhou high-speed railway and the shield tunnel is 70°, the distance between the two crossing lines is 5 m, and the distance between the crossing lines and the arrival and departure lines is 6.5 m. The four Wuhan-Guangzhou railway ballastless track slabs and their supporting layer are modeled as a single layer 2.8 m wide and 0.5 m thick. The gravel layer is 0.5 m thick. Below the gravel layer are, in order, an artificial fill layer, a silty clay layer, a medium coarse sand layer, another silty clay layer and stable bedrock.
The surrounding rock is assumed to be an ideal elastoplastic material and obey the Mohr-Coulomb yield criterion. The element types are all 8-node hexahedral three-dimensional solid elements and 4-node tetrahedral three-dimensional solid elements, and a total of 157991 solid elements are divided. The established numerical model is shown in Fig. 1.
Table 1. Physical and mechanical parameters of surrounding rock (strata: artificial filled soil, plastic silty clay, medium coarse sand, hard plastic silty clay, limestone; columns include compression modulus (MPa); numeric values not recovered)
The following boundary conditions are applied: the planes X = 0 and X = 50 are constrained against displacement in the X direction; the planes Y = 0 and Y = 120 are constrained against displacement in the Y direction; and the plane Z = –30 is constrained against displacement in the X, Y and Z directions. The design load of the high-speed railway subgrade is set to 64.48 kPa.
The stratigraphic division, physical and mechanical parameters of the strata and shield structure parameters in the numerical model are shown in Table 1 and Table 2.
Table 2. Shield structure parameters (including the grouting layer after solidification; numeric values not recovered)
3. Analysis of deformation caused by shield excavation in unreinforced stratum
Fig. 2 shows the ground subsidence surface and isolines after completion of the two tunnels. The surface settlement is largest along the centerline between the two tunnels, with a maximum of 66 mm; the maximum surface settlement at the high-speed railway subgrade is about 59 mm.
Fig. 2. Ground subsidence surface and isolines
Fig. 3. Subgrade settlement groove curve
Fig. 3 shows the settlement groove curve of the subgrade after the left and right tunnels are completed without stratum reinforcement. The settlement of the railway subgrade directly above the tunnel exceeds the settlement control limit of 10 mm. When the left tunnel is holed through, the maximum settlement of the high-speed railway subgrade occurs above the left tunnel; after both tunnels are completed, it lies on the symmetry plane between the two tunnels. Thus the position of the maximum settlement changes as the left and right lines are completed, and the maximum settlement value increases.
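The settlement groove in Fig. 3 has the bell shape typical of tunneling-induced troughs. A minimal sketch of Peck's empirical Gaussian trough model (a standard hand calculation, not the paper's finite element analysis; the maximum settlement and trough width parameter below are illustrative assumptions only):

```python
import math

# Peck's Gaussian settlement trough:
#   S(x) = S_max * exp(-x^2 / (2 i^2))
# x: horizontal offset from the tunnel axis (m), i: trough width parameter (m)
def peck(x, s_max, i):
    return s_max * math.exp(-x**2 / (2 * i**2))

# Assumed values loosely echoing the unreinforced case in the text
s_max, i = 66.0, 12.0  # mm, m -- illustrative, not fitted to the paper's model
for x in (0, 6, 12, 24):
    print(f"x = {x:>2} m: S = {peck(x, s_max, i):.1f} mm")
```

The settlement peaks over the tunnel axis and decays smoothly with offset, matching the qualitative shape of the computed groove curves.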
The excavation of the shield tunnel causes settlement of the ground surface and the roadbed, and the tracks on the roadbed settle with it. Because the absolute settlements of the two rails of a track differ, differential settlement occurs between them; if it is too large, it affects the safety of railway operation. According to the calculation results without strata reinforcement, the maximum differential settlement of the 1# track (Fig. 4) is 0.7 mm, that of the 2# track is 0.5 mm, that of the 3# track is 0.5 mm, and that of the 4# track is 0.6 mm. The differential settlement of the tracks does not exceed the settlement limit of 2 mm.
Fig. 5 shows the plastic zone distribution after completion of the double-line shield construction without stratum reinforcement. Because the shield tunnel lies in the sand layer, a shear failure zone is generated, and excavation disturbance causes shear failure of varying degrees around the tunnel. However, owing to the supporting effect of the shield shell and the timely installation of segments, the extent of the shear failure zone caused by shield excavation is limited.
Fig. 4. High-speed railway track location diagram
Fig. 5. Plastic zone distribution after completion of double-line shield construction
4. Influence of MJS reinforcement method on subgrade settlement of high-speed railway
MJS horizontal jet grouting piles were used to reinforce the stratum. The jet grouting covers a 170° arc of the lower semicircle; the effective diameter of the improved body is 2.0 m, the transverse pile spacing is 1.7 m with an overlap of 0.3 m, and the vertical spacing is 0.7 m with an overlap of 0.3 m. The jet pressure of the curing material is 40 MPa, and the discharge air pressure is 0.7 MPa. The layout scheme is shown in Fig. 6.
Fig. 6. Jetting pile layout drawing
According to Fig. 7, due to the formation disturbance caused by MJS construction, the vertical displacement of soil layer in a certain range directly above the area where the jet grouting pile is located is large. After the completion of the MJS construction, the maximum settlement of the subgrade is 30 mm, and the settlement of the high-speed railway subgrade is about 25 mm.
Fig. 7. Subgrade settlement surface and isoline after MJS Construction (Unit: mm)
Fig. 8 shows the subgrade settling groove curve after the completion of MJS construction. The maximum settlement of high-speed railway subgrade occurs directly above the construction of MJS jet grouting pile, and the maximum settlement value is about 25 mm.
Fig. 8. Subgrade settling groove curve after completion of MJS construction
According to the calculation results, the maximum differential settlement of 1# track from Fig. 4 is 0.8 mm, that of 2# track is 0.6 mm, that of 3# track is 0.2 mm and that of 4# track is 0.3 mm. The differential settlement of tracks did not exceed the settlement limit of 2 mm.
Fig. 9. Plastic zone distribution after completion of MJS jet grouting construction
Fig. 9 shows the plastic zone distribution after completion of the MJS jet grouting construction. Around the MJS jet grouting piles, construction disturbance causes shear failure of varying degrees, but the extent of the shear failure is relatively small.
From the above analysis, the ground subsidence produced by shield tunneling without stratum reinforcement, about 60 mm, is much larger than the allowable value and cannot meet the requirements for safe train operation. Therefore, auxiliary stratum reinforcement measures must be taken.
In the actual construction of this project, MJS horizontal jet grouting was used to reinforce the stratum, and the simulated settlement value is about 30 mm. Compared with direct shield tunneling without reinforcement, the settlement of the high-speed railway subgrade is reduced by about 50 %. Although this still fails to meet the limit requirement, the settlement of the ballastless track caused by the construction of each individual MJS horizontal jet grouting pile is small, as is the settlement caused by the shield crossing beneath the reinforced structure. The settlement over the whole construction process is slow and controllable, so safe high-speed railway operation can be ensured during construction by means of speed limits, track fine adjustment and lifting.
In this paper, the numerical simulation analysis of tunnel construction under high-speed railway is carried out, and the embankment deformation of shield crossing high-speed railway under reinforced and unreinforced conditions is compared in detail, and the following conclusions are drawn:
1) When the stratum is not reinforced, the surface settlement is largest along the centerline between the two tunnels, at 66 mm, and the width of the surface settlement groove increases from 45 m when the left line alone is through to about 60 m.
2) When the ground is not reinforced and the shield tunnel passes directly under the high-speed railway, the settlement of the railway subgrade directly above the tunnel greatly exceeds the limit of the settlement control standard. Because of site, planning and construction constraints, the horizontal jet grouting pile of the MJS construction method is selected as an auxiliary measure after comprehensive comparison.
3) After the completion of the MJS construction, the maximum settlement of the subgrade is 30 mm, and the settlement of the high-speed railway subgrade is about 25 mm.
Zhang X., Zhou S., He C. Experimental investigation on train-induced vibration of the ground railway embankment and under-crossing subway tunnels. Transportation Geotechnics, Vol. 26, 2021, p. 100422.
Tao K., Zhang Y., Hou K. Experimental study on temperature distribution and smoke control in emergency rescue stations of a slope railway tunnel with semi-transverse ventilation. Tunnelling and Underground Space Technology, Vol. 106, 2020, p. 103616.
Cheng Y., Qiu W., Duan D. Automatic creation of as-is building information model from single-track railway tunnel point clouds. Automation in Construction, Vol. 106, 2019, p. 102911.
Huang S., Qi Q., Liu J., Liu W. Tunnel surrounding rock stability prediction using improved KNN algorithm. Journal of Vibroengineering, Vol. 22, Issue 7, 2020, p. 1674-1691.
Numerical Study on the Reinforcement Measures of Tunneling on Adjacent Piles
Hongsheng Qiu, Zhe Wang, Mo’men Ayasrah, Chuanbang Fu, Luo Gang
|
Pascal's rule - Wikipedia
Combinatorial identity about binomial coefficients
Not to be confused with Pascal's law.
In mathematics, Pascal's rule (or Pascal's formula) is a combinatorial identity about binomial coefficients. It states that for positive natural numbers n and k,
{\displaystyle {n-1 \choose k}+{n-1 \choose k-1}={n \choose k},}
where
{\displaystyle {\tbinom {n}{k}}}
is a binomial coefficient; one interpretation of which is the coefficient of the xk term in the expansion of (1 + x)n. There is no restriction on the relative sizes of n and k,[1] since, if n < k the value of the binomial coefficient is zero and the identity remains valid.
Pascal's rule can also be viewed as a statement that the formula
{\displaystyle {\frac {(x+y)!}{x!y!}}={x+y \choose x}={x+y \choose y}}
solves the linear two-dimensional difference equation
{\displaystyle N_{x,y}=N_{x-1,y}+N_{x,y-1},\quad N_{0,y}=N_{x,0}=1}
over the natural numbers. Thus, Pascal's rule is also a statement about a formula for the numbers appearing in Pascal's triangle.
Pascal's rule can also be generalized to apply to multinomial coefficients.
Combinatorial proof
Illustration of the combinatorial proof:
{\displaystyle {\binom {4}{1}}+{\binom {4}{2}}={\binom {5}{2}}.}
Pascal's rule has an intuitive combinatorial meaning, that is clearly expressed in this counting proof.[2]
Proof. Recall that
{\displaystyle {\tbinom {n}{k}}}
equals the number of subsets with k elements from a set with n elements. Suppose one particular element is uniquely labeled X in a set with n elements.
To construct a subset of k elements containing X, include X and choose k − 1 elements from the remaining n − 1 elements in the set. There are
{\displaystyle {\tbinom {n-1}{k-1}}}
such subsets.
To construct a subset of k elements not containing X, choose k elements from the remaining n − 1 elements in the set. There are
{\displaystyle {\tbinom {n-1}{k}}}
such subsets.
Every subset of k elements either contains X or it does not. The total number of subsets with k elements in a set of n elements is the sum of the number of subsets containing X and the number of subsets that do not contain X,
{\displaystyle {\tbinom {n-1}{k-1}}+{\tbinom {n-1}{k}}}
. This equals
{\displaystyle {\tbinom {n}{k}}}
; therefore,
{\displaystyle {\tbinom {n}{k}}={\tbinom {n-1}{k-1}}+{\tbinom {n-1}{k}}}
Alternatively, the algebraic derivation of the binomial case follows.
{\displaystyle {\begin{aligned}{n-1 \choose k}+{n-1 \choose k-1}&={\frac {(n-1)!}{k!(n-1-k)!}}+{\frac {(n-1)!}{(k-1)!(n-k)!}}\\&=(n-1)!\left[{\frac {n-k}{k!(n-k)!}}+{\frac {k}{k!(n-k)!}}\right]\\&=(n-1)!{\frac {n}{k!(n-k)!}}\\&={\frac {n!}{k!(n-k)!}}\\&={\binom {n}{k}}.\end{aligned}}}
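The identity is easy to spot-check numerically; here is a quick sanity check in Python using the standard library's math.comb:

```python
from math import comb

# Spot-check Pascal's rule C(n-1, k) + C(n-1, k-1) == C(n, k)
# for positive n and k, including k > n, where both sides vanish.
for n in range(1, 20):
    for k in range(1, 25):
        assert comb(n - 1, k) + comb(n - 1, k - 1) == comb(n, k)

# The instance from the illustration: C(4,1) + C(4,2) = C(5,2) = 10.
assert comb(4, 1) + comb(4, 2) == comb(5, 2) == 10
```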
Pascal's rule can be generalized to multinomial coefficients.[3] For any integer p such that
{\displaystyle p\geq 2}
,
{\displaystyle k_{1},k_{2},k_{3},\dots ,k_{p}\in \mathbb {N} ^{+}\!,}
and
{\displaystyle n=k_{1}+k_{2}+k_{3}+\cdots +k_{p}\geq 1}
, it holds that
{\displaystyle {n-1 \choose k_{1}-1,k_{2},k_{3},\dots ,k_{p}}+{n-1 \choose k_{1},k_{2}-1,k_{3},\dots ,k_{p}}+\cdots +{n-1 \choose k_{1},k_{2},k_{3},\dots ,k_{p}-1}={n \choose k_{1},k_{2},k_{3},\dots ,k_{p}}}
where
{\displaystyle {n \choose k_{1},k_{2},k_{3},\dots ,k_{p}}}
is the coefficient of the
{\displaystyle x_{1}^{k_{1}}x_{2}^{k_{2}}\dots x_{p}^{k_{p}}}
term in the expansion of
{\displaystyle (x_{1}+x_{2}+\dots +x_{p})^{n}}
.
The algebraic derivation for this general case is as follows.[3] Let p be an integer such that
{\displaystyle p\geq 2}
,
{\displaystyle k_{1},k_{2},k_{3},\dots ,k_{p}\in \mathbb {N} ^{+}\!,}
and
{\displaystyle n=k_{1}+k_{2}+k_{3}+\cdots +k_{p}\geq 1}
. Then
{\displaystyle {\begin{aligned}&{}\quad {n-1 \choose k_{1}-1,k_{2},k_{3},\dots ,k_{p}}+{n-1 \choose k_{1},k_{2}-1,k_{3},\dots ,k_{p}}+\cdots +{n-1 \choose k_{1},k_{2},k_{3},\dots ,k_{p}-1}\\&={\frac {(n-1)!}{(k_{1}-1)!k_{2}!k_{3}!\cdots k_{p}!}}+{\frac {(n-1)!}{k_{1}!(k_{2}-1)!k_{3}!\cdots k_{p}!}}+\cdots +{\frac {(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots (k_{p}-1)!}}\\&={\frac {k_{1}(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}+{\frac {k_{2}(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}+\cdots +{\frac {k_{p}(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}={\frac {(k_{1}+k_{2}+\cdots +k_{p})(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}\\&={\frac {n(n-1)!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}={\frac {n!}{k_{1}!k_{2}!k_{3}!\cdots k_{p}!}}={n \choose k_{1},k_{2},k_{3},\dots ,k_{p}}.\end{aligned}}}
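The multinomial generalization can likewise be spot-checked numerically; a small sketch for p = 3, with a helper for the multinomial coefficient:

```python
from math import factorial
from itertools import product

def multinomial(n, ks):
    """n! / (k_1! k_2! ... k_p!), assuming sum(ks) == n."""
    out = factorial(n)
    for k in ks:
        out //= factorial(k)
    return out

# Check the generalized rule for p = 3 over small positive k_i:
# sum_i C(n-1; ..., k_i - 1, ...) == C(n; k_1, k_2, k_3), n = sum(ks).
for ks in product(range(1, 5), repeat=3):
    n = sum(ks)
    lhs = sum(multinomial(n - 1, ks[:i] + (ks[i] - 1,) + ks[i + 1:])
              for i in range(3))
    assert lhs == multinomial(n, ks)
```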
^ Mazur, David R. (2010), Combinatorics / A Guided Tour, Mathematical Association of America, p. 60, ISBN 978-0-88385-762-5
^ Brualdi, Richard A. (2010), Introductory Combinatorics (5th ed.), Prentice-Hall, p. 44, ISBN 978-0-13-602040-0
^ a b Brualdi, Richard A. (2010), Introductory Combinatorics (5th ed.), Prentice-Hall, p. 144, ISBN 978-0-13-602040-0
Merris, Russell. Combinatorics. John Wiley & Sons. 2003 ISBN 978-0-471-26296-1
"Central binomial coefficient". PlanetMath.
"Binomial coefficient". PlanetMath.
This article incorporates material from Pascal's triangle on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
This article incorporates material from Pascal's rule proof on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
MaplePortal/LinearInterpolation - Maple Help
Maple has multiple tools for interpolation, including linear and spline interpolation. Here we demonstrate how to linearly interpolate data.
This data was recorded during experiments on the outflow of a pump. The first column is time, while the second column is volumetric flowrate.
\mathrm{flowData}≔\left[\begin{array}{cc}0.0& 0.2\\ 1.0& 2.0\\ 2.0& 3.5\\ 5.0& 6.0\\ 7.0& 9.0\\ 9.0& 8.0\\ 10.0& 8.0\\ 15.0& 10.0\\ 16.0& 11.0\\ 19.0& 12.0\end{array}\right]:
\mathrm{T}≔\mathrm{flowData}\left[..,1\right]:\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathrm{Q}≔\mathrm{flowData}\left[..,2\right]:
Create a Linear Interpolation Function
\mathrm{flowInterpolated} ≔\mathrm{t}→\mathrm{CurveFitting}:-\mathrm{ArrayInterpolation}\left(\mathrm{T}, \mathrm{Q}, \mathrm{t}\right)
\textcolor[rgb]{0,0,1}{\mathrm{flowInterpolated}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{t}\textcolor[rgb]{0,0,1}{↦}\textcolor[rgb]{0,0,1}{\mathrm{CurveFitting}}\textcolor[rgb]{0,0,1}{:-}\textcolor[rgb]{0,0,1}{\mathrm{ArrayInterpolation}}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{T}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{Q}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{t}\right)
Hence, at time 2.7, the interpolated flowrate is
\mathrm{flowInterpolated}\left(2.7\right)
\textcolor[rgb]{0,0,1}{4.08333333333333}
Plot the original experimental data against the linearly interpolated values.
\mathrm{f}≔\mathrm{plot}\left(\mathrm{flowInterpolated},\mathrm{min}\left(\mathrm{T}\right)..\mathrm{max}\left(\mathrm{T}\right)\right):
\mathrm{g}≔\mathrm{plot}\left(\mathrm{T},\mathrm{Q},\mathrm{style}=\mathrm{point},\mathrm{symbol}=\mathrm{solidcircle},\mathrm{symbolsize}=20\right):
\mathrm{plots}:-\mathrm{display}\left(\mathrm{f},\mathrm{g}\right)
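Outside Maple, the same piecewise-linear interpolation can be reproduced with NumPy's np.interp, using the flowData values above:

```python
import numpy as np

# Pump outflow data from the worksheet: time and volumetric flowrate.
T = np.array([0.0, 1.0, 2.0, 5.0, 7.0, 9.0, 10.0, 15.0, 16.0, 19.0])
Q = np.array([0.2, 2.0, 3.5, 6.0, 9.0, 8.0, 8.0, 10.0, 11.0, 12.0])

# np.interp performs piecewise-linear interpolation, matching the
# linear method used by CurveFitting:-ArrayInterpolation here.
q27 = np.interp(2.7, T, Q)
print(q27)  # ~4.0833, the same value the Maple worksheet reports
```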
|
Finance - Maple Help
In mathematical finance, the Greeks are quantities often used in risk management to represent the sensitivity of the price of a derivative, such as an option, to changes in underlying parameters on which the value of a financial instrument depends. This can include the sensitivity of a derivative to changes in the price of the underlying asset, the implied volatility, time-value decay, or other factors.
Maple 2015 includes 10 new commands for computing values for the Greeks, which appear in bold in the table below. The following table shows the definition of the various Greeks in terms of the underlying parameters in the first row. For example, Delta is the sensitivity of the derivative value (V) to the spot price (S), expressed as
\frac{∂}{∂ S} V
. The first order Greeks are shown in the first row in green, the second order Greeks are in rows two through four in blue, and the third order Greeks are in rows five and six in pink.
[Table of Greeks by underlying parameter; columns include Spot Price (S), Volatility (σ), and Time to Expiry (τ), and entries include Vega (ν) and Rho (ρ).]
A notable omission from the table above is Lambda, which is used as a measure of leverage, namely the percent change in option value per percentage change in the price of the underlying asset, expressed as
\frac{∂}{∂ S} V \cdot \frac{S}{V} .
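As a concrete illustration of Delta and Lambda, here is a sketch for a Black-Scholes European call with the parameters used in this section (K = 100, σ = 0.1, r = 0.05, no dividends); the spot price S = 100 and time to expiry τ = 1 year are illustrative choices, not values fixed by the text:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function (stdlib only).
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_price_and_delta(S, K, r, sigma, tau):
    """Black-Scholes price and Delta = dV/dS = N(d1) for a call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    price = S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)
    return price, norm_cdf(d1)

S, K, r, sigma, tau = 100.0, 100.0, 0.05, 0.1, 1.0
V, delta = bs_call_price_and_delta(S, K, r, sigma, tau)
lam = delta * S / V   # Lambda: percent change in V per percent change in S
print(delta, lam)     # Delta ~ 0.71; Lambda well above 1 (leverage)
```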
The following plot shows the characteristics for each of the Greeks in terms of time to maturity, spot price, and value, for a call option with given strike price (K) = 100, volatility (σ) = 0.1, interest rate (r) = 0.05, and paying no dividends.
[Interactive plot selector: Delta, Lambda, Rho, Theta, Vega, Charm, Gamma, Vanna, Vera, Veta, Vomma, Color, Speed, Ultima, Zomma]
Other Updates in Finance
With the advances in programmatic content generation, it is now possible to programmatically generate interactive embedded components. Several Maple commands have been updated to utilize this technology, including the Finance:-amortization command, which now returns an embedded datatable when the output option is set to embed:
\mathrm{Finance}:-\mathrm{amortization}\left(1000.00,400.00,0.075,\mathrm{output}=\mathrm{embed}\right):
|
(−1)F - Wikipedia
Term in quantum field theory
In a quantum field theory with fermions, (−1)F is a unitary, Hermitian, involutive operator where F is the fermion number operator. For the example of particles in the Standard Model, it is equal to the sum of the lepton number plus the baryon number, F = B + L. The action of this operator is to multiply bosonic states by 1 and fermionic states by −1. This is always a global internal symmetry of any quantum field theory with fermions and corresponds to a rotation by 2π. This splits the Hilbert space into two superselection sectors. Bosonic operators commute with (−1)F whereas fermionic operators anticommute with it.[1]
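These (anti)commutation relations can be made concrete in the smallest possible example, a single fermionic mode with Fock space spanned by |0⟩ and |1⟩; this is an illustrative sketch, not tied to any particular field theory:

```python
import numpy as np

# Two-state fermionic Fock space {|0>, |1>}: F = diag(0, 1),
# so (-1)^F = diag(+1, -1).  a annihilates the fermion: a|1> = |0>.
P = np.diag([1.0, -1.0])          # (-1)^F
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])        # fermionic annihilation operator
N = a.T @ a                       # number operator, diag(0, 1) -- "bosonic" (even)

# (-1)^F is Hermitian, involutive (and hence unitary) ...
assert np.allclose(P, P.T)
assert np.allclose(P @ P, np.eye(2))

# ... it anticommutes with the fermionic operator a ...
assert np.allclose(P @ a + a @ P, 0.0)

# ... and commutes with the even operator N.
assert np.allclose(P @ N - N @ P, 0.0)
print("checks passed")
```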
This operator really shows its utility in supersymmetric theories.[1] Its trace is the spectral asymmetry of the fermion spectrum, and can be understood physically as the Casimir effect.
^ a b Terning, John (2006). Modern Supersymmetry: Dynamics and Duality. New York: Oxford University Press. ISBN 0-19-856763-4.
Shifman, Mikhail A. (2012). Advanced Topics in Quantum Field Theory: A Lecture Course. Cambridge: Cambridge University Press. ISBN 978-0-521-19084-8.
Ibáñez, Luis E.; Uranga, Angel M. (2012). String Theory and Particle Physics: An Introduction to String Phenomenology. Cambridge: Cambridge University Press. ISBN 978-0-521-51752-2.
Bastianelli, Fiorenzo (2006). Path Integrals and Anomalies in Curved Space. Cambridge: Cambridge University Press. ISBN 978-0-521-84761-2.
|
Compare computational methods for least squares regression » SAS Blog List
Compare computational methods for least squares regression
In a previous article, I discussed various ways to solve a least-square linear regression model. I discussed the SWEEP operator (used by many SAS regression routines), the LU-based methods (SOLVE and INV in SAS/IML), and the QR decomposition (CALL QR in SAS/IML). Each method computes the estimates for the regression coefficients, b, by using the normal equations (X`X) b = X`y, where X is a design matrix for the data.
This article describes a QR-based method that does not use the normal equations but works directly with the overdetermined system X b = y. It then compares the performance of the direct QR method to the computational methods that use the normal equations.
The QR solution of an overdetermined system
As shown in the previous article, you can use the QR algorithm to solve the normal equations. However, if you search the internet for "QR algorithm and least squares," you find many articles that show how you can use the QR decomposition to directly solve the overdetermined system X b = y. How does the direct QR method compare to the methods that use the normal equations?
Recall that X is an n x m design matrix, where n > m, and X is assumed to have full rank m. For simplicity, I will ignore column pivoting. If you decompose X = QRL, the orthogonal matrix Q is n x n, but the matrix RL is not square. ("L" stands for "long.") However, RL is the vertical concatenation of a square triangular matrix and a rectangular matrix of zeros:
\({\bf R_L} = \begin{bmatrix} {\bf R} \\ {\bf 0} \end{bmatrix}\)
If you let Q1 be the first m columns of Q and let Q2 be the remaining (n-m) columns, you get a partitioned matrix equation:
\(\begin{bmatrix} {\bf Q_1} & {\bf Q_2} \end{bmatrix} \begin{bmatrix} {\bf R} \\ {\bf 0} \end{bmatrix} {\bf b} = {\bf y}\)
If you multiply both sides by Q` (the inverse of the orthogonal matrix, Q), you find that the important matrix equation to solve is \({\bf R b} = {\bf Q_1^{\prime} y}\). The vector \({\bf Q_1^{\prime} y}\) is the first m rows of the vector \({\bf Q^{\prime} y}\). The QR call in SAS/IML enables you to obtain the triangular R matrix and the vector Q`y directly from the data matrix and the observed vector. The following program uses the same design matrix as for my previous article. Assuming X has rank m, the call to the QR subroutine returns the m x m triangular matrix, R, and the vector Q`y. You can then extract the first m rows of that vector and solve the triangular system, as follows:
/* Use PROC GLMSELECT to write a design matrix */
proc glmselect data=Sashelp.Class outdesign=DesignMat;
/* The QR algorithm can work directly with the design matrix and the observed responses. */
call QR(Qty, R, piv, lindep, X, , y); /* return Q`*y and R (and piv) */
m = ncol(X);
c = QTy[1:m]; /* we only need the first m rows of Q`*y */
b = trisolv(1, R, c, piv); /* solve triangular system */
print b[L="Direct QR" F=D10.4];
This is the same least-squares solution that was found by using the normal equations in my previous article.
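For readers who want to experiment outside SAS/IML, the same direct QR method can be sketched in NumPy on simulated data (the data and model here are hypothetical, not the Sashelp.Class example):

```python
import numpy as np

# Simulated overdetermined system X b = y (hypothetical data).
rng = np.random.default_rng(0)
n, m = 1000, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, m - 1))])  # design matrix
b_true = np.arange(1.0, m + 1.0)
y = X @ b_true + 0.1 * rng.normal(size=n)

# Economy-size QR: Q1 is n x m with orthonormal columns and R is m x m
# upper triangular, so R b = Q1' y is the system to solve.
Q1, R = np.linalg.qr(X)     # mode='reduced' is the default
c = Q1.T @ y                # the first m rows of Q' y
b = np.linalg.solve(R, c)   # solve the triangular system

# Same estimates as solving the normal equations (X`X) b = X`y.
b_ne = np.linalg.solve(X.T @ X, X.T @ y)
assert np.allclose(b, b_ne)
print(b)
```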
Compare the performance of least-squares solutions
How does this direct method compare with the methods that use the normal equations? You can download a program that creates simulated data and runs each algorithm to estimate the least-squares regression coefficients. The simulated data has 100,000 observations; the number of variables is chosen to be m={10, 25, 50, 75, 100, 250, 500}. The program uses SAS/IML 15.1 on a desktop PC to time the algorithms. The results are shown below:
The most obvious feature of the graph is that the "Direct QR" method that is described in this article is not as fast as the methods that use the normal equations. For 100 variables and 100,000 observations, the "Direct QR" call takes more than 12 seconds on my PC. (It's faster on a Linux server). The graph shows that the direct method shown in this article is not competitive with the normal-equation-based algorithms when using the linear algebra routines in SAS/IML 15.1.
The graph shows that the algorithms that use the normal equations are relatively faster. For the SAS/IML calls on my PC, you can compute the regression estimates for 500 variables in about 2.6 seconds. The graph has a separate line for the time required to form the normal equations (which you can think of as forming the X`X matrix). Most of the time is spent computing the normal equations; only a fraction of the time is spent actually solving the normal equations. The following table shows computations on my PC for the case of 500 variables:
The table shows that it takes about 2.6 seconds to compute the X`X matrix and the vector X`y. After you form the normal equations, solving them is very fast. For this example, the SOLVE and INV methods take only a few milliseconds to solve a 500 x 500 system. The QR algorithms take 0.1–0.2 seconds longer. So, for this example, forming the normal equations accounts for more than 90% of the total time.
Another article compares the performance of the SOLVE and INV routines in SAS/IML.
SAS regression procedures
These results are not the best that SAS can do. SAS/IML is a general-purpose tool. SAS regression procedures like PROC REG are optimized to compute regression estimates even faster. They also use the SWEEP operator, which is faster than the SOLVE function. For more than 20 years, SAS regression procedures have used multithreaded computations to optimize the performance of regression computations (Cohen, 2002). More recently, SAS Viya added the capability for parallel processing, which can speed up the computations even more. And, of course, they compute much more than only the coefficient estimates! They also compute standard errors, p-values, related statistics (MSE, R square,....), diagnostic plots, and more.
This article compares several methods for obtaining least-squares regression estimates. It uses simulated data where the number of observations is much greater than the number of variables. It shows that methods that use the normal equations are faster than a "Direct QR" method, which does not use the normal equations. When you use the normal equations, most of the time is spent actually forming the normal equations. After you have done that, the time required to solve the system is relatively fast.
You can download the SAS program that computes the tables and graphs in this article.
The post Compare computational methods for least squares regression appeared first on The DO Loop.
Related posts: The QR algorithm for least-squares regression · More on the SWEEP operator for least-square regression models
|
1. Use the definition of derivative to find the derivative of
{\displaystyle f(x)={\sqrt {x-5}}}
Recall that the derivative is actually defined through the limit
{\displaystyle f'(x)=\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}.}
The goal in solving this is to plug the values
{\displaystyle x+h}
and
{\displaystyle x}
into the appropriate
{\displaystyle f}
in the numerator, and then find a way to cancel the
{\displaystyle h}
in the denominator. Unlike simplifying a rational expression containing radicals, here it's appropriate to have a radical in the denominator.
Again, the goal is to cancel the
{\displaystyle h}
in the denominator.
Following the hints above, we initially have
{\displaystyle f'(x)\,=\,\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}}\,=\,\lim _{h\rightarrow 0}{\frac {{\sqrt {x+h-5}}-{\sqrt {x-5}}}{h}}.}
We can't plug in
{\displaystyle h=0}
, as this would lead to the unallowed "division by zero". Instead, we multiply by the conjugate of the numerator and clean up:
{\displaystyle \lim _{h\rightarrow 0}{\frac {{\sqrt {x+h-5}}-{\sqrt {x-5}}}{h}}\cdot {\frac {{\sqrt {x+h-5}}+{\sqrt {x-5}}}{{\sqrt {x+h-5}}+{\sqrt {x-5}}}}}
{\displaystyle =\lim _{h\rightarrow 0}{\frac {\left({\sqrt {x+h-5}}\right)^{2}-\left({\sqrt {x-5}}\right)^{2}}{h\left({\sqrt {x+h-5}}+{\sqrt {x-5}}\right)}}}
{\displaystyle =\lim _{h\rightarrow 0}{\frac {x+h-5-\left(x-5\right)}{h\left({\sqrt {x+h-5}}+{\sqrt {x-5}}\right)}}}
{\displaystyle =\,\,\lim _{h\rightarrow 0}{\frac {h}{h\left({\sqrt {x+h-5}}+{\sqrt {x-5}}\right)}}}
{\displaystyle =\,\,\lim _{h\rightarrow 0}{\frac {1}{\left({\sqrt {x+h-5}}+{\sqrt {x-5}}\right)}}}
{\displaystyle =\,\,{\frac {1}{2{\sqrt {x-5}}}}.}
Notice that this is the same result you would get using our more convenient rules of differentiation, including the chain rule, but that's not the point of this problem. You specifically need to treat the derivative as a limit.
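You can also check the limit numerically by evaluating the difference quotient at shrinking h (illustrated here at the arbitrary point x = 9, where the exact derivative is 1/(2·√4) = 0.25):

```python
from math import sqrt

def f(x):
    return sqrt(x - 5)

# The difference quotient (f(x+h) - f(x)) / h should approach
# 1 / (2*sqrt(x - 5)) as h -> 0.
x = 9.0
exact = 1.0 / (2.0 * sqrt(x - 5.0))   # = 0.25 at x = 9
for h in (1e-2, 1e-4, 1e-6):
    dq = (f(x + h) - f(x)) / h
    print(h, dq)                      # converges toward 0.25
```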
|
Overview of the ImageTools:-Draw Package
List of ImageTools:-Draw Package Commands
ImageTools[Draw][command]( arguments )
ImageTools:-Draw is a subpackage of ImageTools that provides primitives for drawing into an ImageTools:-Image. All primitives work in terms of continuous mathematical coordinates, not discrete pixels, and are rendered using anti-aliasing.
The following is a list of the commands in the ImageTools[Draw] package.
Each primitive is a member of the ImageTools:-Draw package, and can be referenced by prefixing the primitive name with the package name (e.g., ImageTools:-Draw:-Poly), or using the bare primitive name in a context in which ImageTools:-Draw is in scope: after a call to with(ImageTools:-Draw), within a procedure having a uses ImageTools:-Draw clause, or within a use ImageTools:-Draw in ... end statement.
\mathrm{with}\left(\mathrm{ImageTools}\right):
\mathrm{with}\left(\mathrm{ImageTools}:-\mathrm{Draw}\right):
\mathrm{img}≔\mathrm{Create}\left(240,240,\mathrm{channels}=3,\mathrm{background}=\mathrm{white}\right):
\mathrm{Circle}\left(\mathrm{img},120,120,240-64,\mathrm{color}=0.5,\mathrm{thickness}=3\right)
\mathrm{Poly}\left(\mathrm{img},[[60,60],[60,180],[180,180],[180,60],[60,60]],\mathrm{color}=0,\mathrm{fill_color}=[0.9,1,0.8],\mathrm{fill_pattern}="sphere"\right)
\mathrm{Text}\left(\mathrm{img},121,131,"Draw",\mathrm{color}=[0.6,0.2,0.2],\mathrm{font}=18,\mathrm{font_size}=30\right):
\mathrm{Text}\left(\mathrm{img},120,130,"Draw",\mathrm{color}=[0.4,0.1,0.1],\mathrm{font}=18,\mathrm{font_size}=30\right):
\mathrm{expr}≔{\left(x-120\right)}^{2}+{\left(y-120\right)}^{2}={\left(\frac{240-64}{2}\right)}^{2}:
r≔\mathrm{solve}\left(\mathrm{expr},y\right)
\textcolor[rgb]{0,0,1}{r}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{120}\textcolor[rgb]{0,0,1}{+}\sqrt{\textcolor[rgb]{0,0,1}{-}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{240}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{6656}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{120}\textcolor[rgb]{0,0,1}{-}\sqrt{\textcolor[rgb]{0,0,1}{-}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{240}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{6656}}
\mathbf{for}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathrm{x1}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{from}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}32.\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{by}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}11.\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{to}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}208.\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{do}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{2.0em}{0.0ex}}\mathrm{SolidCircle}\left(\mathrm{img},\mathrm{x1},\mathrm{eval}\left(r[1],'x'=\mathrm{x1}\right),5,\mathrm{color}=\mathrm{black}\right);\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{2.0em}{0.0ex}}\mathrm{SolidCircle}\left(\mathrm{img},\mathrm{x1},\mathrm{eval}\left(r[2],'x'=\mathrm{x1}\right),5,\mathrm{color}=\mathrm{red}\right)\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathbf{end}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathbf{do}
\mathrm{Embed}\left(\mathrm{img}\right)
The ImageTools[Draw] package was introduced in Maple 2018.
|
Abstract of Plutos Network - PlutosNetwork
Plutos Network is a multi-chain synthetic issuance and derivative trading platform that introduces mining incentives and staking rewards to users. By integrating Blockchains such as Solana, Polkadot and BSC and enabling on-chain and cross-chain liquidity and trading, Plutos Network aims to offer users issuance and trading services for a wide range of synthetic products that are sustainable, profitable and disruptive to the traditional derivative market.
I. Key components of Plutos Network
Plutos Staking
Users can earn rewards by staking derivative assets on Plutos Network. The Staking service will first be built on Ethereum and then migrate to Polkadot once the sub-parachain and OCW (off-chain worker) components are ready for deployment. It will soon be available on other leading Blockchains such as BSC and Solana.
Plutos Market
Your one-stop platform for issuing and trading crypto derivatives. By applying an AMM mechanism and enabling a cross-chain wrapping function to build bridges between mainstream assets of Blockchains including Ethereum, Polkadot, BSC and Solana, Plutos Market will be the marketplace where users can trade cross-chain derivatives with low cost, fast transactions and high security. In Plutos Market, users can trade derivatives such as contracts, options, swaps, etc.
Plutos Pool
Through PLUT staking and minting, and by integrating liquidity from leading DEXes such as Uniswap, 1inch, Balancer, PancakeSwap, etc., Plutos Pool can offer infinite liquidity for users with the best trading experience and passive income. With Plutos Pool, slippage will be much improved thanks to the liquidity integration and cross-chain liquidity pools. There will be no need for dedicated liquidity providers on Plutos Network. Users will only need to stake PLUT and mint assets such as pUSD to convert one asset to another, or to open long or short positions of leveraged contracts on various underlying assets without any restriction.
II. Technical Architecture of Plutos Network
Sub-parachain Component
A Sub-parachain will be built to be the underlying module for communications with parachains on Polkadot. Thanks to the outstanding features of Plutos Sub-parachain, Plutos will enable fast communications through the module with various parachains such as Rococo etc. The component will be a valuable improvement in order to realize interoperability, scalability and fast transactions.
Plutos Marketplace for Derivative Issuance and Trading
The workflow of derivative issuance and trading on Plutos Network includes several key components: Plutos Market, the synthetic issuance and trading marketplace; the Plutos Multi-chain Asset Collateral Pool, which aggregates all assets on Plutos Network; and integration with leading liquidity and swap protocols, including Uniswap, 1inch, Pancake and more, for infinite liquidity.
Plutos Market is a synthetic/derivative DEX without an order book. The dApp provides both leveraged and un-leveraged products. Users can convert one synthetic asset to another without worrying about liquidity depth or slippage, because the debt-pool smart contract is backed by the staked collateral, which plays the role of liquidity provider. Users can also benefit by taking short or long positions with leverage.
Plutos supports a multi-asset collateral service in a permission-less and decentralized manner. Users can deposit supported assets such as ETH and DAI to issue synthetic assets such as bonds, and earn high interest on deposits and borrowed assets. Plutos also offers the lowest Collateral Ratio (C-ratio), thanks to its DAO governance model and full-scale security measures.
Derivative Issuance and Trading
III. Infinite Liquidity & Synthetic variety by Staking & Minting
Plutos Pool is created in the process of PLUT holders staking PLUT and minting assets including pUSD, pBTC, pETH, etc. The pool acts as the liquidity provider and the counter-party when trading pUSD to pBTC, for example. Therefore, the liquidity can be said to be infinite. In other words, users need not worry about the lack of liquidity or slippage found in traditional financial markets.
However, if all users hold only pBTC and the BTC price rises by 50%, the total debt also increases by 50%. In this case, the collateral (staking) pool holds the opposite position, a pBTC short (or inverse piBTC), so the buy/sell ratio plays an important role in the overall collateral ratio of the system. Plutos Network will increase the stability of the system by reducing this position-imbalance risk through various methods, such as on/off-chain position hedging.
Product variety can be maximized by virtue of the Pool design:
Under such framework, the use cases are expanded largely, including but not limited to the following:
Natural hedging.
The borrower does not need to bear foreign-exchange risk. For Bitcoin miners in Europe, costs and fixed-asset investment are denominated in euros, so a US-dollar stablecoin is of little use to them. Now, European miners can hedge foreign-exchange risk by collateralizing Bitcoin or Ethereum and borrowing EUX. This also applies to traders who settle profits in euros.
Short/long positions.
Multi-currency stablecoins make it convenient for traders to invest in the foreign exchange market. For example, a trader who is bearish on the U.S. dollar and bullish on the euro can borrow U.S. dollars and exchange them for euros.
Low-friction global carry trading.
For depositors in the euro zone facing negative interest rates, the liquidity-mining yields of USD stablecoins are attractive, but the frictional costs of carrying out carry trades are still high: the trader must first convert euros into USDC or USDT, which brings additional transaction costs and foreign-exchange risk. The EUX currency pair will effectively reduce these friction costs and make it convenient for euro holders to participate in liquidity mining and other DeFi protocols.
The global interest rate market.
Since market supply and demand adjust dynamically, the interest rates of USX and EUX will change accordingly, creating an active global interest rate market.
The global foreign exchange market.
As the demand for multi-currency stablecoins increases, liquid currency pairs will gradually grow in scale and number, eventually forming a multi-currency global foreign exchange market.
IV. Plutos AMM Structure for Trading Synthetics
Plutos Network applies pAMM mechanism during the trading of products such as options, swaps, contracts etc.
After users place collateral in assets supported on Plutos Network, for example pETH or pUSD, they can trade the synthetic products provided without worrying about latency or large slippage.
The traditional order book is replaced by pre-funded on-chain liquidity pools holding both assets of the trading pair. The liquidity is provided by other users, who earn passive income on their deposits through trading fees in proportion to the share of the liquidity pool they provide.
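As a rough illustration, a pre-funded two-asset pool with a constant-product invariant (a common AMM design; the whitepaper does not pin down Plutos's exact pricing curve, and the pool sizes and fee below are hypothetical) behaves like this:

```python
# Sketch of a constant-product (x * y = k) liquidity pool of the kind
# described above. All numbers are hypothetical illustrations.
def swap(reserve_in, reserve_out, amount_in, fee=0.003):
    """Return (amount_out, new_reserve_in, new_reserve_out) for one swap."""
    amount_in_after_fee = amount_in * (1.0 - fee)
    k = reserve_in * reserve_out                 # invariant preserved by trades
    new_reserve_in = reserve_in + amount_in_after_fee
    amount_out = reserve_out - k / new_reserve_in
    return amount_out, reserve_in + amount_in, reserve_out - amount_out

# Against a 1000/1000 pool, a small trade gets a better per-unit price
# than a large one -- this is the slippage the text refers to.
small_out, _, _ = swap(1000.0, 1000.0, 1.0)
big_out, _, _ = swap(1000.0, 1000.0, 100.0)
print(small_out, big_out / 100.0)
```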
V. Plutos Market: how does synthetics trading work
Plutos Market offers a better trading experience, including reduced friction and slippage and expanded accessibility of mainstream crypto assets.
The lack of an order book in Plutos Market offers better deals when people trade synthetic assets. Assets are priced in real time through on-chain and off-chain price feeds from oracles such as Chainlink and Umbrella, and can then be easily converted. Due to the large amount of collateral placed in Plutos Market, issues such as slippage are avoided by enabling infinite liquidity and an on-chain trading process.
PLUT Mint - Collaterals
Holders of PLUT can mint assets such as pUSD, pETH, and pBTC by placing collateral through Plutos smart contracts. The Collateral Ratio is required to be no lower than 500%. Once collateral is placed, the resulting debt is minted and denominated in pUSD.
As the PLUT price fluctuates, so does this Collateral Ratio (C-Ratio). When the price of PLUT rises, the increased value of the staked PLUT can be used to mint additional pUSD, which can be exchanged for additional synthetic assets, and vice versa. Users who maintain a collateral ratio above the optimal C-Ratio can claim exchange-fee rewards and inflation rewards in proportion to their stake.
The collateral ratio (C-ratio) is simply the ratio of the value of a user's locked collateral to the value of their currently minted tokens.
Enough collateral must be kept locked to maintain a C-ratio above the pAsset's minimum requirement; otherwise the protocol initiates a margin call and liquidates collateral in an attempt to restore the position's C-ratio. The protocol determines whether a position is below the required threshold by re-denominating all pAsset values into assets such as ETH, BTC, or USDT via oracle price feeds.
Let the pAsset's minimum C-ratio be
r_{\text{min}}
. Given the current quantities of collateral and minted pAssets
Q_c, Q_m
and their prices
P_c, P_m
at time
t
, the position's collateral ratio is
r = \frac{Q_c P_c}{Q_m P_m},
and the protocol requires
r \geq r_{\text{min}}
at all times. In parallel to depositing and withdrawing collateral, the user can also mint and burn pAssets to adjust the position's effective C-ratio.
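Under the definition above (C-ratio as collateral value over minted value, both priced by oracle feeds), the liquidation check reduces to a few lines. A minimal sketch; the function names are illustrative, not Plutos contract code:

```python
def collateral_ratio(q_c: float, p_c: float, q_m: float, p_m: float) -> float:
    """C-ratio r = (Q_c * P_c) / (Q_m * P_m): value of locked collateral
    divided by value of minted pAssets."""
    return (q_c * p_c) / (q_m * p_m)

def subject_to_liquidation(q_c: float, p_c: float, q_m: float, p_m: float,
                           r_min: float) -> bool:
    """A position below the minimum C-ratio triggers a margin call."""
    return collateral_ratio(q_c, p_c, q_m, p_m) < r_min
```

For example, 10 units of collateral at 100 backing 1 minted unit at 250 gives r = 4, which is safe against r_min = 3 but liquidatable against r_min = 5.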
Plutos Market - Exchange
Plutos Market offers users a way to trade a large variety of synthetic assets without holding the underlying asset. Users can easily move from one asset to another to take profits, without the procedures or censorship mandatorily enforced in traditional financial systems.
No counter-party is needed during trading, as the system automatically converts debt into synthetic assets. When users want to exit or reduce their debt, they must pay it down by burning an identical amount of debt to unlock their collateral.
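A debt-for-synth conversion of this kind is a pure price calculation: burn the source synth and mint the destination synth at oracle prices, with no order book involved. A minimal sketch, assuming a flat exchange fee (the fee value and names are assumptions, not Plutos parameters):

```python
def exchange(amount_src: float, price_src: float, price_dst: float,
             fee: float = 0.003) -> float:
    """Burn amount_src of the source synth and mint the destination synth
    at oracle prices; no counter-party is required."""
    value = amount_src * price_src            # oracle value of the burned synth
    return value * (1 - fee) / price_dst      # minted amount after the fee
```

With a zero fee, burning 2 units priced at 1500 against a destination priced at 30000 mints exactly 0.1 units.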
Take Plutos Market's AMM in Balancer as an example:
Due to the separation of token management and accounting between the AMM and the vault, a pool can implement arbitrary, customizable AMM logic, including weighted pools (for constant-weight index funds), stable pools (for soft-pegged tokens), and smart pools (for ongoing parameter changes).
With the launch of the Balancer Asset Manager (a trusted external smart contract that can deploy the underlying tokens a pool keeps in the vault), capital efficiency and yield can be improved by lending unused AMM assets to lending protocols.
|
Solvable group
In mathematics, more specifically in the field of group theory, a solvable group or soluble group is a group that can be constructed from abelian groups using extensions. Equivalently, a solvable group is a group whose derived series terminates in the trivial subgroup.
Historically, the word "solvable" arose from Galois theory and the proof of the general unsolvability of the quintic equation. Specifically, a polynomial equation is solvable in radicals if and only if the corresponding Galois group is solvable[1] (note this theorem holds only in characteristic 0). This means associated to a polynomial
{\displaystyle f\in F[x]}
there is a tower of field extensions
{\displaystyle F=F_{0}\subseteq F_{1}\subseteq F_{2}\subseteq \cdots \subseteq F_{m}=K}
such that
{\displaystyle F_{i}=F_{i-1}[\alpha _{i}]}
, where
{\displaystyle \alpha _{i}^{m_{i}}\in F_{i-1}}
, i.e.,
{\displaystyle \alpha _{i}}
is a root of
{\displaystyle x^{m_{i}}-a}
for some
{\displaystyle a\in F_{i-1}}
, and
{\displaystyle F_{m}}
contains a splitting field for
{\displaystyle f(x)}
.
For example, the smallest Galois field extension of
{\displaystyle \mathbb {Q} }
containing the element
{\displaystyle a={\sqrt[{5}]{{\sqrt {2}}+{\sqrt {3}}}}}
gives a solvable group. It has associated field extensions
{\displaystyle \mathbb {Q} \subseteq \mathbb {Q} ({\sqrt {2}},{\sqrt {3}})\subseteq \mathbb {Q} ({\sqrt {2}},{\sqrt {3}})\left(e^{2\pi i/5}{\sqrt[{5}]{{\sqrt {2}}+{\sqrt {3}}}}\right)}
giving a solvable group containing
{\displaystyle \mathbb {Z} /5}
(acting on
{\displaystyle e^{2\pi i/5}}
) and
{\displaystyle \mathbb {Z} /2\times \mathbb {Z} /2}
(acting on
{\displaystyle {\sqrt {2}}+{\sqrt {3}}}
).
A group G is called solvable if it has a subnormal series whose factor groups (quotient groups) are all abelian, that is, if there are subgroups 1 = G0 < G1 < ⋅⋅⋅ < Gk = G such that Gj−1 is normal in Gj, and Gj /Gj−1 is an abelian group, for j = 1, 2, …, k.
Or equivalently, if its derived series, the descending normal series
{\displaystyle G\triangleright G^{(1)}\triangleright G^{(2)}\triangleright \cdots ,}
where every subgroup is the commutator subgroup of the previous one, eventually reaches the trivial subgroup of G. These two definitions are equivalent, since for every group H and every normal subgroup N of H, the quotient H/N is abelian if and only if N includes the commutator subgroup of H. The least n such that G(n) = 1 is called the derived length of the solvable group G.
For finite groups, an equivalent definition is that a solvable group is a group with a composition series all of whose factors are cyclic groups of prime order. This is equivalent because a finite group has finite composition length, and every simple abelian group is cyclic of prime order. The Jordan–Hölder theorem guarantees that if one composition series has this property, then all composition series will have this property as well. For the Galois group of a polynomial, these cyclic groups correspond to nth roots (radicals) over some field. The equivalence does not necessarily hold for infinite groups: for example, since every nontrivial subgroup of the group Z of integers under addition is isomorphic to Z itself, it has no composition series, but the normal series {0, Z}, with its only factor group isomorphic to Z, proves that it is in fact solvable.
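For very small finite groups, the derived-series definition can be checked directly by machine. The sketch below (permutations stored as tuples, plain Python; illustrative only, far from an efficient group-theory implementation) computes the derived length of the symmetric group S_n:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def generated_subgroup(gens, n):
    """Closure of a generating set under composition."""
    identity = tuple(range(n))
    elems = {identity} | set(gens)
    frontier = set(elems)
    while frontier:
        new = set()
        for a in frontier:
            for g in gens:
                c = compose(a, g)
                if c not in elems:
                    elems.add(c)
                    new.add(c)
        frontier = new
    return elems

def commutator_subgroup(group, n):
    """Subgroup generated by all commutators [g, h] = g^-1 h^-1 g h."""
    comms = {compose(compose(inverse(g), inverse(h)), compose(g, h))
             for g in group for h in group}
    return generated_subgroup(comms, n)

def derived_length(n):
    """Derived length of S_n, or None if the derived series stalls
    at a perfect (hence non-trivial) subgroup, i.e. S_n is not solvable."""
    G = set(permutations(range(n)))
    identity = {tuple(range(n))}
    length = 0
    while G != identity:
        H = commutator_subgroup(G, n)
        if H == G:  # perfect subgroup: the series will never reach the trivial group
            return None
        G = H
        length += 1
    return length
```

It reports derived length 2 for S3 (via A3) and 3 for S4 (via A4 and the Klein four-group), and detects that S5 is not solvable because its derived series stalls at the perfect subgroup A5.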
Abelian groups
The basic examples of solvable groups are the abelian groups. They are trivially solvable, since a subnormal series is given by just the group itself and the trivial group. But non-abelian groups may or may not be solvable.
Nilpotent groups
More generally, all nilpotent groups are solvable. In particular, finite p-groups are solvable, as all finite p-groups are nilpotent.
Quaternion groups
In particular, the quaternion group is a solvable group, given by the group extension
{\displaystyle 1\to \mathbb {Z} /2\to Q\to \mathbb {Z} /2\times \mathbb {Z} /2\to 1}
where the kernel
{\displaystyle \mathbb {Z} /2}
is the subgroup generated by
{\displaystyle -1}
.
Group extensions
Group extensions form the prototypical examples of solvable groups. That is, if
{\displaystyle G}
and
{\displaystyle G'}
are solvable groups, then any extension
{\displaystyle 1\to G\to G''\to G'\to 1}
defines a solvable group
{\displaystyle G''}
. In fact, all solvable groups can be formed from such group extensions.
Nonabelian group which is non-nilpotent
A small example of a solvable, non-nilpotent group is the symmetric group S3. In fact, as the smallest non-abelian simple group is A5 (the alternating group of degree 5), it follows that every group with order less than 60 is solvable.
Finite groups of odd order
The celebrated Feit–Thompson theorem states that every finite group of odd order is solvable. In particular, this implies that if a finite group is simple, it is either cyclic of prime order or of even order.
The group S5 is not solvable — it has a composition series {E, A5, S5} (and the Jordan–Hölder theorem states that every other composition series is equivalent to that one), giving factor groups isomorphic to A5 and C2; and A5 is not abelian. Generalizing this argument, coupled with the fact that An is a normal, maximal, non-abelian simple subgroup of Sn for n > 4, we see that Sn is not solvable for n > 4. This is a key step in the proof that for every n > 4 there are polynomials of degree n which are not solvable by radicals (Abel–Ruffini theorem). This property is also used in complexity theory in the proof of Barrington's theorem.
Subgroups of GL2
Consider the subgroups
{\displaystyle B=\left\{{\begin{bmatrix}*&*\\0&*\end{bmatrix}}\right\}{\text{, }}U=\left\{{\begin{bmatrix}1&*\\0&1\end{bmatrix}}\right\}}
of
{\displaystyle GL_{2}(\mathbb {F} )}
for a field
{\displaystyle \mathbb {F} }
. Then, the group quotient
{\displaystyle B/U}
can be found by taking arbitrary elements in
{\displaystyle B,U}
, multiplying them together, and figuring out what structure this gives. So
{\displaystyle {\begin{bmatrix}a&b\\0&c\end{bmatrix}}\cdot {\begin{bmatrix}1&d\\0&1\end{bmatrix}}={\begin{bmatrix}a&ad+b\\0&c\end{bmatrix}}}
Note the determinant condition on
{\displaystyle GL_{2}}
implies
{\displaystyle ac\neq 0}
, hence
{\displaystyle \mathbb {F} ^{\times }\times \mathbb {F} ^{\times }\subset B}
is a subgroup (which are the matrices where
{\displaystyle b=0}
). For fixed
{\displaystyle a,b}
, the linear equation
{\displaystyle ad+b=0}
has the solution
{\displaystyle d=-b/a}
, which is an arbitrary element in
{\displaystyle \mathbb {F} }
since
{\displaystyle b\in \mathbb {F} }
. Since we can take any matrix in
{\displaystyle B}
and multiply it by the matrix
{\displaystyle {\begin{bmatrix}1&d\\0&1\end{bmatrix}}}
with
{\displaystyle d=-b/a}
, we can get a diagonal matrix in
{\displaystyle B}
. This shows the quotient group
{\displaystyle B/U\cong \mathbb {F} ^{\times }\times \mathbb {F} ^{\times }}
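This count, |B/U| = (p − 1)^2, can be verified computationally for a small finite field such as F_3 (a quick illustrative sketch, not part of the original argument):

```python
p = 3  # work over the finite field F_3

# Upper-triangular invertible matrices [[a, b], [0, c]], encoded as (a, b, c)
B = [(a, b, c) for a in range(1, p) for b in range(p) for c in range(1, p)]
U = [(1, b, 1) for b in range(p)]  # the unipotent subgroup

def mul(m1, m2):
    """[[a1,b1],[0,c1]] * [[a2,b2],[0,c2]] = [[a1*a2, a1*b2 + b1*c2], [0, c1*c2]] mod p."""
    a1, b1, c1 = m1
    a2, b2, c2 = m2
    return ((a1 * a2) % p, (a1 * b2 + b1 * c2) % p, (c1 * c2) % p)

# Left cosets m*U partition B; each coset is determined by the diagonal (a, c)
cosets = {frozenset(mul(m, u) for u in U) for m in B}
```

Here |B| = 12, |U| = 3, and there are (3 − 1)^2 = 4 cosets, matching B/U ≅ F^× × F^×.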
Notice that this description gives the decomposition of
{\displaystyle B}
as
{\displaystyle \mathbb {F} \rtimes (\mathbb {F} ^{\times }\times \mathbb {F} ^{\times })}
where
{\displaystyle (a,c)}
acts on
{\displaystyle b}
by
{\displaystyle (a,c)(b)=ab}
. This implies
{\displaystyle (a,c)(b+b')=(a,c)(b)+(a,c)(b')=ab+ab'}
. Also, a matrix of the form
{\displaystyle {\begin{bmatrix}a&b\\0&c\end{bmatrix}}}
corresponds to the element
{\displaystyle (b)\times (a,c)}
Borel subgroups
For a linear algebraic group
{\displaystyle G}
its Borel subgroup is defined as a subgroup which is closed, connected, and solvable in
{\displaystyle G}
, and it is the maximal possible subgroup with these properties (note the first two are topological properties). For example, in
{\displaystyle GL_{n}}
and
{\displaystyle SL_{n}}
the groups of upper-triangular and of lower-triangular matrices are two of the Borel subgroups. The example given above, the subgroup
{\displaystyle B}
in
{\displaystyle GL_{2}}
is the Borel subgroup.
Borel subgroup in GL3
In
{\displaystyle GL_{3}}
there are the subgroups
{\displaystyle B=\left\{{\begin{bmatrix}*&*&*\\0&*&*\\0&0&*\end{bmatrix}}\right\},{\text{ }}U_{1}=\left\{{\begin{bmatrix}1&*&*\\0&1&*\\0&0&1\end{bmatrix}}\right\}}
Notice that
{\displaystyle B/U_{1}\cong \mathbb {F} ^{\times }\times \mathbb {F} ^{\times }\times \mathbb {F} ^{\times }}
, hence the Borel group has the form
{\displaystyle U\rtimes (\mathbb {F} ^{\times }\times \mathbb {F} ^{\times }\times \mathbb {F} ^{\times })}
Borel subgroup in product of simple linear algebraic groups
In the product group
{\displaystyle GL_{n}\times GL_{m}}
the Borel subgroup can be represented by matrices of the form
{\displaystyle {\begin{bmatrix}T&0\\0&S\end{bmatrix}}}
where
{\displaystyle T}
is an
{\displaystyle n\times n}
upper triangular matrix and
{\displaystyle S}
is an
{\displaystyle m\times m}
upper triangular matrix.
Z-groups
Any finite group whose p-Sylow subgroups are cyclic is a semidirect product of two cyclic groups, in particular solvable. Such groups are called Z-groups.
OEIS values
The numbers of solvable groups of order n are (starting with n = 0)
0, 1, 1, 1, 2, 1, 2, 1, 5, 2, 2, 1, 5, 1, 2, 1, 14, 1, 5, 1, 5, 2, 2, 1, 15, 2, 2, 5, 4, 1, 4, 1, 51, 1, 2, 1, 14, 1, 2, 2, 14, 1, 6, 1, 4, 2, 2, 1, 52, 2, 5, 1, 5, 1, 15, 2, 13, 2, 2, 1, 12, 1, 2, 4, 267, 1, 4, 1, 5, 1, 4, 1, 50, ... (sequence A201733 in the OEIS)
Orders of non-solvable groups are
60, 120, 168, 180, 240, 300, 336, 360, 420, 480, 504, 540, 600, 660, 672, 720, 780, 840, 900, 960, 1008, 1020, 1080, 1092, 1140, 1176, 1200, 1260, 1320, 1344, 1380, 1440, 1500, ... (sequence A056866 in the OEIS)
Solvability is closed under a number of operations.
If G is solvable, and H is a subgroup of G, then H is solvable.[2]
If G is solvable, and there is a homomorphism from G onto H, then H is solvable; equivalently (by the first isomorphism theorem), if G is solvable, and N is a normal subgroup of G, then G/N is solvable.[3]
The previous properties can be expanded into the following "three for the price of two" property: G is solvable if and only if both N and G/N are solvable.
In particular, if G and H are solvable, the direct product G × H is solvable.
Solvability is closed under group extension:
If H and G/H are solvable, then so is G; in particular, if N and H are solvable, their semidirect product is also solvable.
It is also closed under wreath product:
If G and H are solvable, and X is a G-set, then the wreath product of G and H with respect to X is also solvable.
For any positive integer N, the solvable groups of derived length at most N form a subvariety of the variety of groups, as they are closed under the taking of homomorphic images, subalgebras, and (direct) products. The direct product of a sequence of solvable groups with unbounded derived length is not solvable, so the class of all solvable groups is not a variety.
Burnside's theorem
Burnside's theorem states that if G is a finite group of order paqb where p and q are prime numbers, and a and b are non-negative integers, then G is solvable.
Supersolvable groups
As a strengthening of solvability, a group G is called supersolvable (or supersoluble) if it has an invariant normal series whose factors are all cyclic. Since a normal series has finite length by definition, uncountable groups are not supersolvable. In fact, all supersolvable groups are finitely generated, and an abelian group is supersolvable if and only if it is finitely generated. The alternating group A4 is an example of a finite solvable group that is not supersolvable.
If we restrict ourselves to finitely generated groups, we can consider the following arrangement of classes of groups:
cyclic < abelian < nilpotent < supersolvable < polycyclic < solvable < finitely generated group.
Virtually solvable groups
A group G is called virtually solvable if it has a solvable subgroup of finite index. This is similar to virtually abelian. Clearly all solvable groups are virtually solvable, since one can just choose the group itself, which has index 1.
Hypoabelian
A solvable group is one whose derived series reaches the trivial subgroup at a finite stage. For an infinite group, the finite derived series may not stabilize, but the transfinite derived series always stabilizes. A group whose transfinite derived series reaches the trivial group is called a hypoabelian group, and every solvable group is a hypoabelian group. The first ordinal α such that G(α) = G(α+1) is called the (transfinite) derived length of the group G, and it has been shown that every ordinal is the derived length of some group (Malcev 1949).
Prosolvable group
^ Milne. Field Theory (PDF). p. 45.
^ Rotman (1995), Theorem 5.15, p. 102
Rotman, Joseph J. (1995), An Introduction to the Theory of Groups, Graduate Texts in Mathematics, vol. 148 (4 ed.), Springer, ISBN 978-0-387-94285-8
OEIS sequence A056866 (Orders of non-solvable groups)
Solvable groups as iterated extensions
|
Model Array with Variations in Two Parameters - MATLAB & Simulink
This example shows how to create a two-dimensional (2-D) array of transfer functions using for loops. One parameter of the transfer function varies in each dimension of the array.
You can use the technique of this example to create higher-dimensional arrays with variations of more parameters. Such arrays are useful for studying the effects of multiple-parameter variations on system response.
The second-order single-input, single-output (SISO) transfer function
H\left(s\right)=\frac{{\omega }^{2}}{{s}^{2}+2\zeta \omega s+{\omega }^{2}}
depends on two parameters: the damping ratio
\zeta
and the natural frequency
\omega
. If both
\zeta
and
\omega
vary, you obtain multiple transfer functions of the form:
{H}_{ij}\left(s\right)=\frac{{\omega }_{j}^{2}}{{s}^{2}+2{\zeta }_{i}{\omega }_{j}s+{\omega }_{j}^{2}},
where
{\zeta }_{i}
and
{\omega }_{j}
represent different measurements or sampled values of the variable parameters. You can collect all of these transfer functions in a single variable to create a two-dimensional model array.
Preallocate memory for the model array. Preallocating memory is an optional step that can enhance computation efficiency. To preallocate, create a model array of the required size and initialize its entries to zero.
H = tf(zeros(1,1,3,3));
In this example, there are three values for each parameter in the transfer function H. Therefore, this command creates a 3-by-3 array of single-input, single-output (SISO) zero transfer functions.
Create arrays containing the parameter values.
Build the array by looping through all combinations of parameter values.
for i = 1:3
    for j = 1:3
        H(:,:,i,j) = tf(w(j)^2,[1 2*zeta(i)*w(j) w(j)^2]);
    end
end
H is a 3-by-3 array of transfer functions.
The parameter
\zeta
varies as you move from model to model along a single column of H. The parameter
\omega
varies as you move along a single row.
Plot the step response of H to see how the parameter variation affects the step response.
You can set the SamplingGrid property of the model array to help keep track of which set of parameter values corresponds to which entry in the array. To do so, create a grid of parameter values that matches the dimensions of the array. Then, assign these values to H.SamplingGrid with the parameter names.
[zetagrid,wgrid] = ndgrid(zeta,w);
H.SamplingGrid = struct('zeta',zetagrid,'w',wgrid);
When you display H, the parameter values in H.SamplingGrid are displayed along with each transfer function in the array.
|
Structural modal identification based on mobile phone sensor | JVE Journals
Feiyu Guo1 , Yinfeng Dong2 , Yutong Li3 , Yuanjun He4
Received 3 February 2021; received in revised form 20 February 2021; accepted 28 February 2021; published 25 March 2021
Copyright © 2021 Feiyu Guo, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Mobile phones can serve as a potentially useful tool in structural modal identification. Using acceleration data collected by mobile phone sensors under ambient excitation, a method for structural modal identification based on variational mode decomposition (VMD) is proposed. Firstly, the feasibility and applicability of mobile phone sensors for measuring vibration data are verified by a shaking table test of a steel frame structure. Then, a mobile phone is used as an accelerometer to record the vibration acceleration of a pedestrian overpass under ambient excitation. In this test, the vibration acceleration is effectively decomposed by VMD, and the major components of the vibration signal are obtained through division and screening in the frequency domain. Finally, the modal parameters of the pedestrian overpass are identified, and the results show that the identification method based on VMD is effective and feasible.
The feasibility and applicability of mobile phone sensors for measuring vibration data are verified by a shaking table test of a steel frame structure.
Using the acceleration data collected by mobile phone sensors under ambient excitation, a method for structural modal identification based on variational mode decomposition (VMD) is proposed.
The signal is interpolated before VMD to reduce the error caused by asynchronous sampling.
The frequency domain method is used for modal identification of the pedestrian overpass in this paper. The identification procedure consists of two steps, i.e., (1) identification of the natural frequencies and (2) identification of the mode shapes.
To verify the accuracy of the identification results, the numerical model of the tested structure is established for comparative analysis.
Keywords: modal identification, variational mode decomposition, mobile phone sensor, ambient excitation.
Structural dynamic identification is an important means of damage detection and health monitoring, and automatic modal parameter extraction techniques have promising application prospects [1]. However, several problems exist at present, e.g., 1) the number of channels of data acquisition equipment is limited, 2) proprietary sensors and data acquisition equipment are expensive, and 3) installation and commissioning are complex and time-consuming. Many wireless accelerometer systems and data networks have been developed to solve these problems [2, 3], and mobile sensors are also widely used in dynamic identification [4]. As a simple wireless device with a built-in accelerometer, the mobile phone has potential in dynamic detection and can enable real-time, efficient, simple, and low-cost testing of structures. In this paper, the feasibility and applicability of structural dynamic identification based on mobile phone sensors are studied. We compare the accuracy and precision of data collected by mobile phone sensors and by professional sensors installed for a shaking table test to verify the feasibility of the mobile phone as an accelerometer [2]. Then, a pedestrian overpass was tested with mobile phone sensors under ambient excitation, and its modal parameters are identified. In this process, the acceleration data were interpolated [5-7] and then processed based on VMD to improve the accuracy of identification [8]. Finally, the identification results are compared with the theoretical results to verify the effectiveness of the VMD-based data processing method and the applicability of structural modal identification based on mobile phone sensors.
2. Verification of accuracy and reliability of mobile phone sensor data
The structure and sensor arrangement of the shaking table test are shown in Fig. 1. The acceleration time histories of the base slab of the structure under seismic excitation, recorded by mobile phone sensors and by professional sensors, are presented in Fig. 2, and the corresponding response spectra and power spectra are given in Figs. 3 and 4. The time history curves are close, indicating that the data from the two types of sensors are in good agreement. As shown in Figs. 3-4, the spectrum analysis results of both sensors are also close, which verifies the accuracy and reliability of the data from mobile phone sensors.
Fig. 1. Structure and sensor arrangement of the shaking table test
Fig. 2. Acceleration time history curves
Fig. 3. Acceleration response spectrum
3. Test of pedestrian overpass under ambient excitation
The pedestrian overpass and the corresponding layout of test points are shown in Figs. 5 and 6, and the test order is presented in Fig. 7. Two mobile phones are used in the test. As shown in Figs. 6-7, one mobile phone sensor is moved along the pedestrian overpass to capture the vibration acceleration while the other remains stationary [9]. The acceleration data obtained are used for subsequent analysis. The main procedure of the proposed identification method is summarized in Fig. 8.
Fig. 5. Tested pedestrian overpass
Fig. 6. Sketch map of pedestrian overpass
Fig. 7. Test points arrangement
Fig. 8. The procedure of the proposed identification method
3.2. Interpolation of vibration data
Two mobile phones are used in the test, and their sampling instants may differ, which causes errors. Therefore, the signal should be interpolated before VMD to reduce the error caused by asynchronous sampling [7]. The interpolation process is shown in Fig. 9.
Fig. 9. Interpolation diagram
a) Process of interpolation (before interpolation)
b) Process of interpolation (after interpolation)
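A minimal version of this resampling step is plain linear interpolation onto a common time grid. The paper does not specify its interpolation scheme, so the following is only an illustrative sketch with hypothetical names:

```python
def resample_linear(t_src, x_src, t_new):
    """Linearly interpolate samples (t_src, x_src) onto the common grid t_new,
    so that records from the two phones share the same time stamps."""
    out = []
    j = 0
    for t in t_new:
        # advance to the source interval [t_src[j], t_src[j+1]] containing t
        while j < len(t_src) - 2 and t_src[j + 1] <= t:
            j += 1
        t0, t1 = t_src[j], t_src[j + 1]
        w = (t - t0) / (t1 - t0)
        out.append((1 - w) * x_src[j] + w * x_src[j + 1])
    return out
```

After both records are resampled onto the same grid, the cross-spectral quantities below can be formed sample by sample.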
3.3. Dynamic characteristic identification
The frequency domain method is used for modal identification of the pedestrian overpass in this paper. The identification procedure consists of two steps, i.e., (1) identification of the natural frequencies and (2) identification of the mode shapes. By analyzing the power spectrum of the tested structure, the natural frequency can be determined from the frequency corresponding to the peak of the power spectrum. In the test of the pedestrian overpass, the auto- and cross-power spectral densities of the data can be used to identify the mode shapes of the structure. The equation for determining the mode shape can be expressed as follows:
\frac{{\varphi }_{ki}}{{\varphi }_{pi}}=\frac{{S}_{pk}\left({\omega }_{i}\right)}{{S}_{pp}\left({\omega }_{i}\right)},
where
{\varphi }_{pi}
and
{\varphi }_{ki}
are the mode coefficients of the
i
th mode shape at test points
p
and
k
, respectively;
{\omega }_{i}
is the natural frequency of the
i
th mode;
{S}_{pp}\left({\omega }_{i}\right)
is the auto-power spectral density of the response signal at test point
p
; and
{S}_{pk}\left({\omega }_{i}\right)
is the cross-power spectral density of the response signals at test points
p
and
k
:
{S}_{pp}\left({\omega }_{i}\right)=\frac{1}{M{N}_{FFT}}{\sum }_{j=1}^{M}{X}_{pj}\left({\omega }_{i}\right){X}_{pj}^{*}\left({\omega }_{i}\right),
{S}_{pk}\left({\omega }_{i}\right)=\frac{1}{M{N}_{FFT}}{\sum }_{j=1}^{M}{X}_{pj}\left({\omega }_{i}\right){X}_{kj}^{*}\left({\omega }_{i}\right),
where
{X}_{pj}\left({\omega }_{i}\right)
is the Fourier spectrum of the
j
th data segment of the response at test point
p
;
{X}_{pj}^{*}\left({\omega }_{i}\right)
is the complex conjugate of
{X}_{pj}\left({\omega }_{i}\right)
;
{N}_{FFT}
is the number of FFT points; and
M
is the number of averaged segments.
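The mode-shape ratio above can be sketched numerically with a direct DFT (pure Python and O(N^2), for illustration only; a practical implementation would use an FFT library):

```python
import cmath
import math

def dft(x):
    """Direct discrete Fourier transform (O(N^2); fine for a small sketch)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def mode_ratio(x_p, x_k, nfft, nseg):
    """Estimate S_pk(w)/S_pp(w) per frequency bin, averaged over nseg
    segments of nfft samples; the 1/(M*N_FFT) scaling cancels in the ratio."""
    s_pp = [0.0] * nfft
    s_pk = [0j] * nfft
    for j in range(nseg):
        seg_p = x_p[j * nfft:(j + 1) * nfft]
        seg_k = x_k[j * nfft:(j + 1) * nfft]
        Xp, Xk = dft(seg_p), dft(seg_k)
        for i in range(nfft):
            s_pp[i] += abs(Xp[i]) ** 2              # X_pj * conj(X_pj)
            s_pk[i] += Xp[i] * Xk[i].conjugate()    # X_pj * conj(X_kj)
    return [s_pk[i] / s_pp[i] if s_pp[i] > 1e-12 else 0j for i in range(nfft)]
```

At the natural-frequency bin, the ratio approximates the mode-coefficient ratio: for a test signal where point k responds twice as strongly as point p, the ratio at the excited bin is 2.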
Two time history curves of the vertical vibration acceleration of the pedestrian overpass are given in Figs. 10 and 11: one before VMD and one after VMD. The curve after VMD is smoother and has fewer "spike" points, which shows that the effective components of the signal are retained and that VMD effectively reduces the effect of ambient noise on the vibration waveform.
Fig. 10. Time history curve before VMD
Fig. 11. Time history curve after VMD
The power spectra corresponding to the above time histories are shown in Figs. 12 and 13. There is ambient noise with frequency content below 1 Hz in the vibration data before VMD, which may affect the accuracy of identification.
Fig. 12. Power spectrum before VMD
Fig. 13. Power spectrum after VMD
By picking the frequency corresponding to the peak of the power spectrum, the natural frequency is identified from Fig. 13 as about 5.22 Hz. Due to the limited minimum range of the mobile phone sensor and the influence of ambient noise, only the first-order mode shape (Fig. 14) can be obtained. The identification result after VMD is clearly consistent with the theoretical result, which demonstrates the effectiveness of the identification method. To verify the accuracy of the identification results, a numerical model of the tested structure is established for comparative analysis.
Fig. 14. First order mode shape
Fig. 15. Simulation results of first order mode shape
The simulated natural frequency is 5.29 Hz, in good agreement with the identified value of 5.22 Hz, and the first-order mode shape of the numerical model (Fig. 15) is similar to the identification result in Fig. 14.
In this paper, mobile phone sensors replace professional accelerometers as the tool for structural dynamic testing of a pedestrian overpass, and a signal preprocessing method based on VMD is proposed to improve the accuracy of modal identification. From the acceleration response data picked up by the mobile phones, the identification results are obtained. The conclusions are as follows.
1) The accuracy and precision of mobile phone sensors meet the requirements of simple tests, and structural dynamic testing based on mobile phone sensors is feasible.
2) VMD is an effective way to obtain useful vibration signal which can reduce the influence of ambient noise and improve the accuracy of modal identification.
3) The structural dynamic testing based on mobile phone sensors is proved to be applicable.
To sum up, dynamic detection based on mobile phones has the potential to be a real-time, efficient, simple, and low-cost means of structural modal identification.
Marica P., Rosario C., Rodolfo E. An automatic modal identification procedure for the permanent dynamic monitoring of the Sanctuary of Vicoforte. International Journal of Architectural Heritage, Vol. 14, Issue 4, 2020, p. 630-644.
Koray G., Güray G., Ahmet D. Design and realization of multi-channel wireless data acquisition system for laboratory-scale experiments on structural health monitoring. Journal of Measurements in Engineering, Vol. 6, Issue 1, 2018, p. 64-73.
Chalioris E., Kytinou K., Voutetaki E., et al. Flexural damage diagnosis in reinforced concrete beams using a wireless admittance monitoring system – tests and finite element analysis. Sensors, Vol. 21, Issue 3, 2021, p. 679.
Marmolejo M. Modal Identification with Mobile Sensors Using Cohen's Class Time-Frequency Distributions. Gaceta Técnica, Vol. 19, 2019.
Xia X. Comparison of Modal Analysis Methods of Environmental Excitation. Ph.D. Thesis, Central South University, 2013, (in Chinese).
Ren W. Comparative analysis of identification methods of environmental vibration system. Journal of Fuzhou University (Natural Science Edition), Vol. 1, Issue 6, 2001, p. 80-86, (in Chinese).
Li Z., Lin Q. Modal identification of high-rise buildings with asynchronous sampling. Journal of Natural Disasters, Vol. 05, 2009, p. 48-58, (in Chinese).
Zheng J. Research on Instantaneous Frequency Identification of Time-Varying Bridge Structure Based on New Signal Processing Technology. Ph.D. Thesis, Fujian Agriculture and Forestry University, 2018, (in Chinese).
Johannio M., Juan M., Peter T. Modal identification using mobile sensors under ambient excitation. Journal of Computing in Civil Engineering, Vol. 31, Issue 2, 2017, https://doi.org/10.1061/(ASCE)CP.1943-5487.0000619.
|
Wake (physics)
In fluid dynamics, a wake may either be:
the region of recirculating flow immediately behind a moving or stationary blunt body, caused by viscosity, which may be accompanied by flow separation and turbulence, or
the wave pattern on the water surface downstream of an object in a flow, or produced by a moving object (e.g. a ship), caused by density differences of the fluids above and below the free surface and gravity (or surface tension).
Kelvin wake pattern generated by a small boat.
Kelvin wake simulation plot.
Visualisation of the Kármán vortex street in the wake behind a circular cylinder in air; the flow is made visible through release of oil vapour in the air near the cylinder.
The wake is the region of disturbed flow (often turbulent) downstream of a solid body moving through a fluid, caused by the flow of the fluid around the body.
For a blunt body in subsonic external flow, for example the Apollo or Orion capsules during descent and landing, the wake is massively separated and behind the body is a reverse flow region where the flow is moving toward the body. This phenomenon is often observed in wind tunnel testing of aircraft, and is especially important when parachute systems are involved, because unless the parachute lines extend the canopy beyond the reverse flow region, the chute can fail to inflate and thus collapse. Parachutes deployed into wakes suffer dynamic pressure deficits which reduce their expected drag forces. High-fidelity computational fluid dynamics simulations are often undertaken to model wake flows, although such modeling has uncertainties associated with turbulence modeling (for example RANS versus LES implementations), in addition to unsteady flow effects. Example applications include rocket stage separation and aircraft store separation.
Density differences
In incompressible fluids (liquids) such as water, a bow wake is created when a watercraft moves through the medium; as the medium cannot be compressed, it must be displaced instead, resulting in a wave. As with all wave forms, it spreads outward from the source until its energy is overcome or lost, usually by friction or dispersion.
The non-dimensional parameter of interest is the Froude number.
Wave cloud pattern in the wake of the Île Amsterdam (lower left, at the "tip" of the triangular formation of clouds) in the southern Indian Ocean
Cloud wakes from the Juan Fernández Islands
Wake patterns in cloud cover over Possession Island, East Island, Ile aux Cochons, Île des Pingouins
Kelvin wake pattern
Waterfowl and boats moving across the surface of water produce a wake pattern, first explained mathematically by Lord Kelvin and known today as the Kelvin wake pattern.[1]
This pattern consists of two wake lines that form the arms of a chevron, V, with the source of the wake at the vertex of the V. For sufficiently slow motion, each wake line is offset from the path of the wake source by around arcsin(1/3) = 19.47° and is made up of feathery wavelets angled at roughly 53° to the path.
The inside of the V (of total opening 39° as indicated above) is filled with transverse curved waves, each of which is an arc of a circle centered at a point lying on the path at a distance twice that of the arc to the wake source. This pattern is independent of the speed and size of the wake source over a significant range of values.
However, the pattern changes at high speeds (only), viz., above a hull Froude number of approximately 0.5. Then, as the source's speed increases, the transverse waves diminish and the points of maximum amplitude on the wavelets form a second V within the wake pattern, which grows narrower with the increased speed of the source.[2]
The angles in this pattern are not intrinsic properties of merely water: Any isentropic and incompressible liquid with low viscosity will exhibit the same phenomenon. Furthermore, this phenomenon has nothing to do with turbulence. Everything discussed here is based on the linear theory of an ideal fluid, cf. Airy wave theory.
Parts of the pattern may be obscured by the effects of propeller wash, and tail eddies behind the boat's stern, and by the boat being a large object and not a point source. The water need not be stationary, but may be moving as in a large river, and the important consideration then is the velocity of the water relative to a boat or other object causing a wake.
This pattern follows from the dispersion relation of deep water waves, which is often written as

ω = √(gk),

where
g is the strength of the gravity field,
ω is the angular frequency in radians per second, and
k is the angular wavenumber in radians per metre.
"Deep" means that the depth is greater than half of the wavelength. This formula implies that the group velocity of a deep water wave is half of its phase velocity, which, in turn, goes as the square root of the wavelength. Two velocity parameters of importance for the wake pattern are:
v is the relative velocity of the water and the surface object that causes the wake.
c is the phase velocity of a wave, varying with wave frequency.
As the surface object moves, it continuously generates small disturbances which are the sum of sinusoidal waves with a wide spectrum of wavelengths. Those waves with the longest wavelengths have phase speeds above v and dissipate into the surrounding water and are not easily observed. Other waves with phase speeds at or below v, however, are amplified through constructive interference and form visible shock waves, stationary in position w.r.t. the boat.
Typical duck wake
The angle θ between the phase shock wave front and the path of the object is θ = arcsin(c/v). If c/v > 1 or < −1, no later waves can catch up with earlier waves and no shockwave forms.
In deep water, shock waves form even from slow-moving sources, because waves with short enough wavelengths move slower. These shock waves are at sharper angles than one would naively expect, because it is group velocity that dictates the area of constructive interference and, in deep water, the group velocity is half of the phase velocity.
All shock waves, that each by itself would have had an angle between 33° and 72°, are compressed into a narrow band of wake with angles between 15° and 19°, with the strongest constructive interference at the outer edge (angle arcsin(1/3) = 19.47°), placing the two arms of the V in the celebrated Kelvin wake pattern.
A concise geometric construction[3] demonstrates that, strikingly, this group shock angle w.r.t. the path of the boat, 19.47°, for any and all of the above θ, is actually independent of v, c, and g; it merely relies on the fact that the group velocity is half of the phase velocity c. On any planet, slow-swimming objects have "effective Mach number" 3!
Envelope of the disturbance emitted at successive times, fig 12.3 p.410 of G.B. Whitham (1974) Linear and Nonlinear Waves. The circles represent wavefronts.
For slow swimmers, i.e. low Froude number, the Lighthill−Whitham geometric argument that the opening of the Kelvin chevron (wedge, V pattern) is universal goes as follows. Consider a boat moving from right to left with constant speed v, emitting waves of varying wavelength, and thus wavenumber k and phase velocity c(k), of interest when c < v, as required for a shock wave (cf., e.g., Sonic boom or Cherenkov radiation). Equivalently, and more intuitively, fix the position of the boat and have the water flow in the opposite direction, like a piling in a river.
Focus first on a given k, emitting (phase) wavefronts whose stationary position w.r.t. the boat assemble to the standard shock wedge tangent to all of them, cf. Fig.12.3.
As indicated above, the openings of these chevrons vary with wavenumber, the angle θ between the phase shock wavefront and the path of the boat (the water) being θ = arcsin(c/v) ≡ π/2 − ψ. Evidently, ψ increases with k. However, these phase chevrons are not visible: it is their corresponding group wave manifestations which are observed.
Envelope of the disturbance emitted at successive times, fig 12.2 p.409 of G.B. Whitham (1974) Linear and Nonlinear Waves. Here ψ is the angle between the path of the wave source and the direction of wave propagation (the wave vector k), and the circles represent wavefronts.
Consider one of the phase circles of Fig.12.3 for a particular k, corresponding to the time t in the past, Fig.12.2. Its radius is QS, and the phase chevron side is the tangent PS to it. Evidently, PQ= vt and SQ = ct = vt cosψ, as the right angle PSQ places S on the semicircle of diameter PQ.
Since the group velocity is half the phase velocity for any and all k, however, the visible (group) disturbance point corresponding to S will be T, the midpoint of SQ. Similarly, it lies on a semicircle now centered on R, where, manifestly, RQ=PQ/4, an effective group wavefront emitted from R, with radius vt/4 now.
Significantly, the resulting wavefront angle with the boat's path, the angle of the tangent from P to this smaller circle, obviously has a sine of TR/PR=1/3, for any and all k, c, ψ, g, etc.: Strikingly, virtually all parameters of the problem have dropped out, except for the deep-water group-to-phase-velocity relation! Note the (highly notional) effective group disturbance emitter moves slower, at 3v/4.
Thus, summing over all relevant k and ts to flesh out an effective Fig.12.3 shock pattern, the universal Kelvin wake pattern arises: the full visible chevron angle is twice that, 2arcsin(1/3) ≈ 39°.
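The construction can also be verified numerically. For a wave component emitted at angle ψ to the path, stationarity gives phase speed c = v cos ψ; travelling at the group speed c/2, the disturbance emitted a time t ago ends up a distance vt(1 − ½cos²ψ) behind the boat and (vt/2)cos ψ sin ψ to the side, so tan θ = cos ψ sin ψ/(1 + sin²ψ). Maximizing over ψ recovers arcsin(1/3) ≈ 19.47°, independent of v and g. A short Python sketch of this check (an illustration, not from the sources cited above):

```python
import math

def wake_angle_deg(psi):
    """Angle (degrees) between the source's path and the group disturbance
    from the wave component emitted at angle psi to the path.
    Uses c = v*cos(psi) (stationarity) and group speed c/2."""
    x = 1.0 - 0.5 * math.cos(psi) ** 2        # distance behind boat, in units of v*t
    y = 0.5 * math.cos(psi) * math.sin(psi)   # lateral offset, in units of v*t
    return math.degrees(math.atan2(y, x))

# Sweep emission angles over (0, pi/2) and take the outermost disturbance angle.
kelvin = max(wake_angle_deg(i * (math.pi / 2) / 10000.0) for i in range(1, 10000))
print(round(kelvin, 2))   # 19.47, matching arcsin(1/3)
```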
The wavefronts of the wavelets in the wake are at 53°, which is roughly the average of 33° and 72°. The wave components with would-be shock wave angles between 73° and 90° dominate the interior of the V. They end up half-way between the point of generation and the current location of the wake source. This explains the curvature of the arcs.
Those very short waves with would-be shock wave angles below 33° lack a mechanism to reinforce their amplitudes through constructive interference and are usually seen as small ripples on top of the interior transverse waves.
The above describes an ideal wake, where the body's means of propulsion has no other effect on the water. In practice the wave pattern between the V-shaped wavefronts is usually mixed with the effects of propeller backwash and eddying behind the boat's (usually square-ended) stern.
The Kelvin angle is also derived for the case of deep water in which the fluid does not flow at different speeds or in different directions as a function of depth ("shear"). In cases where the water (or fluid) has shear, the results may be more complicated.[4]
"No wake zones" may prohibit wakes in marinas, near moorings and within some distance of shore[5] in order to facilitate recreation by other boats and reduce the damage wakes cause. Powered narrowboats on British canals are not permitted to create a breaking wash (a wake large enough to create a breaking wave) along the banks, as this erodes them. This rule normally restricts these vessels to 4 knots (4.6 mph; 7.4 km/h) or less.
Wakes are occasionally used recreationally. Swimmers, people riding personal watercraft, and aquatic mammals such as dolphins can ride the leading edge of a wake. In the sport of wakeboarding the wake is used as a jump. The wake is also used to propel a surfer in the sport of wakesurfing. In the sport of water polo, the ball carrier can swim while advancing the ball, propelled ahead with the wake created by alternating armstrokes in crawl stroke, a technique known as dribbling.
^ William Thomson (1887) "On ship waves," Institution of Mechanical Engineers, Proceedings, 38 : 409–34; illustrations, pp. 641–49.
^ The "hull Froude number" (Fr) of a ship is Fr = U / √gL, where U is the ship's speed, g is the acceleration of gravity at the earth's surface, and L is the length of the ship's hull, a characteristic wavelength. See Marc Rabaud and Frédéric Moisy (2013) "Ship wakes: Kelvin or Mach angle?," Physical Review Letters, 110 (21) : 214503. Available on-line at: University of Paris, Sud; Alexandre Darmon, Michael Benzaquen, and Elie Raphaël (2014) "Kelvin wake pattern at large Froude numbers," Journal of Fluid Mechanics, 738 : R3-1–R3-8. Available on-line at: ESPCI ParisTech
^ G.B. Whitham (1974). Linear and Nonlinear Waves (John Wiley & Sons Inc., 1974) pp. 409–10 Online scan
^ Norwegian University of Science and Technology, "A 127-year-old physics riddle solved", Phys.org, Aug 21, 2019. Retrieved 22 August 2019
^ BoatWakes.org, Table of distances
Erosion caused by boat wakes
NIST detailed derivation
|
If a = 0, a is set to the smallest value of the form 1eN such that b + 1eN > b. Similarly, if b = 0, b is set to the value -1eN such that a - 1eN < a (covering the case b < a).
with(RandomTools):
Generate(float)
        0.2342493224
Generate(float(range = 2.532..7.723, digits = 4))
        2.537
[seq(Generate(float), i = 1..10)]
        [0.4077358422, 0.5684678711, 0.3475034102, 0.2026591600, 0.5479226237, 0.008823971507, 0.2074604148, 0.9290936515, 0.6664697983, 0.3909313924]
sort([seq(Generate(float(range = 0.0321..162.0, digits = 3)), i = 1..10)])
        [10.4, 15.9, 58.9, 64.2, 66.7, 87.8, 89.0, 91.4, 124., 129.]
sort([seq(Generate(float(range = 0.0321..162.0, digits = 3, method = logarithmic)), i = 1..10)])
        [0.0475, 0.0522, 0.0778, 0.0963, 0.237, 0.289, 0.801, 0.918, 9.15, 87.1]
Matrix(3, 3, Generate(float(range = 2..7)*identical(x) + float(range = 2..7), makeproc = true))
        [6.832623175 x + 4.850164008    2.540721923 x + 2.219972873    2.156513983 x + 3.755486969]
        [4.094092595 x + 5.955127111    5.468319344 x + 2.012105075    5.052449900 x + 5.323948758]
        [5.765644424 x + 5.471087299    2.076338300 x + 5.527224675    3.817480335 x + 2.678852156]
plots[listplot](sort([seq(Generate(float(range = 0.0321..162.0, digits = 3)), i = 1..20)]))
plots[listplot](sort([seq(Generate(float(range = 0.0321..162.0, digits = 3, method = uniform)), i = 1..20)]))
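As the sorted outputs suggest, method = uniform spreads samples evenly across the range, while method = logarithmic concentrates them toward the small end. A rough Python analogue of the two sampling methods (an illustration of the idea, not Maple's implementation):

```python
import math
import random

def gen_float(a, b, method="uniform", rng=random):
    """Rough analogue of Maple's Generate(float(range = a..b, method = ...))."""
    if method == "uniform":
        return rng.uniform(a, b)
    if method == "logarithmic":
        # Uniform on a log scale, mapped back to [a, b]; requires 0 < a < b.
        return math.exp(rng.uniform(math.log(a), math.log(b)))
    raise ValueError("unknown method: %s" % method)

random.seed(0)  # fixed seed so the sketch is reproducible
uni = sorted(gen_float(0.0321, 162.0) for _ in range(10))
log = sorted(gen_float(0.0321, 162.0, "logarithmic") for _ in range(10))
print(uni)  # spread roughly evenly over the range
print(log)  # clustered toward the small end, as in the Maple output above
```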
|
mikeash.com: Friday Q&A 2012-11-02: Building the FFT
Next article: Friday Q&A 2012-11-09: dyld: Dynamic Linking On OS X
Previous article: Friday Q&A 2012-10-26: Fourier Transforms and FFTs
Tags: audio fft fridayqa guest
Friday Q&A 2012-11-02: Building the FFT
by Chris Liscio
In this installment, I'll show you how we get from point A to point B. Specifically, I'll talk a bit about the magic behind the Fast Fourier Transform.
A bit of math (British localization: Some maths)
Fourier Transforms have some interesting mathematical properties. Most importantly, the Fourier Transform is a linear operation. That is, if we use \mathcal{F}(h(t)) to denote the Fourier Transform of h(t), then \mathcal{F}(a h(t) + b g(t)) = a \mathcal{F}(h(t)) + b \mathcal{F}(g(t)).
So whether you scale or add two signals in the time or frequency domain is up to you. That's pretty handy when you're working on signal processing code. But I digress.
In addition to the above, there are some more interesting properties relating the time and frequency domain representations of a function. We'll use
h(t) \Leftrightarrow H(f)
to relate the time and frequency domain representations.
Don't focus too much on the individual relations themselves. The main point that I'm trying to get across is that we can manipulate the Fourier Transform and the signal in meaningful ways, and relate those changes between domains.
Before we continue, I'd like to make a clarification of sorts. The mathematical properties above are using terminology specific to Fourier Transforms of continuous functions defined over infinite time and frequency.
Obviously we're working in a digital world, and we don't have the luxury of continuous signals to work with. In software, we're dealing with sampled data, which is where the Discrete Fourier Transform comes in.
This is basically what Mike already gave you in code form in his last post on the topic, and I encourage you to take another look at his code to understand how it relates to this equation.
What's important is that you understand that the Discrete Fourier Transform and the Continuous Fourier Transform are closely related, and have almost exactly the same mathematical properties as described above. For what we're discussing, you can safely ignore the "almost" part, and look to Wikipedia's definition for more discussion.
The Danielson-Lanczos lemma
Using a combination of the mathematical properties of the Fourier Transform above, Danielson and Lanczos discovered that you can rewrite a Discrete Fourier Transform of length N as a sum of two Discrete Fourier Transforms of length N/2: one from the even-numbered and one from the odd-numbered points of input.
This is their proof, the core of which is the identity

H_k = H_k^e + W^k H_k^o

where W^k = e^{2\pi i k/N}, and H_k^e and H_k^o are the even and odd terms of H_k, i.e. the length-N/2 DFTs of the even- and odd-indexed input points.
Again, it's not important to totally understand the above, or how they got from A to B. This is where I wave my hands and defer to the fact that the mathematical properties of the Discrete Fourier Transform have been used in the derivation.
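The lemma is easy to verify numerically, though. Below is a hedged Python sketch (not the article's code) that compares a naive DFT, written with the same positive-exponent convention W^k = e^{2πik/N} used in this post, against the even/odd recombination:

```python
import cmath

def dft(x):
    """Naive DFT, positive-exponent convention: H_k = sum_n x_n e^{2*pi*i*k*n/N}."""
    N = len(x)
    return [sum(x[n] * cmath.exp(2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

x = [0.5, 2.0, -1.0, 3.0, 0.0, 1.5, -2.5, 4.0]
N = len(x)
H = dft(x)
He = dft(x[0::2])   # DFT of the even-indexed samples (length N/2)
Ho = dft(x[1::2])   # DFT of the odd-indexed samples (length N/2)

# Danielson-Lanczos: H_k = He_k + W^k Ho_k, with He/Ho periodic in k (period N/2).
for k in range(N):
    W_k = cmath.exp(2j * cmath.pi * k / N)
    assert abs(H[k] - (He[k % (N // 2)] + W_k * Ho[k % (N // 2)])) < 1e-9
```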
The discovery by Danielson and Lanczos, combined with the fact that it can be applied recursively, is the basis of the Fast Fourier Transform. Breaking the problem down one more step, we will end up with a combination of
H_k^{ee}
H_k^{eo}
H_k^{oe}
H_k^{oo}
If we stick with power-of-two inputs to the Fourier Transform, we will guarantee that the problem continues to decompose until we reach a Fourier Transform on one element. And, guess what? That's just a copy of the input value: the length-1 DFT of a single sample h_n is h_n itself.
There is a way to derive the value of n for each of these single-element transforms, but I'm choosing to wave my hands again.
Instead, I'm going to let recursion do the work for us. I vote for this option, to keep this post from exploding out of control. Also, get your hands on a copy of Numerical Recipes to really go deep.
(Fairly) Straightforward Implementation
I put together a compact implementation that demonstrates how this all works. It is based on the first optimization, because I think it's a good balance of readability with just a hint of cleverness. (I started out based on this resource, but massaged the implementation for clarity, and to closely match my math above.)
// Requires <stdlib.h>, <math.h>, and <complex.h>.
static complex double *FFT_recurse( complex double *x, int N, int skip ) {
    complex double *X = (complex double *)malloc( sizeof(complex double) * N );
    complex double *O, *E;

    if ( N == 1 ) {
        // We've hit the scalar case, and copy the input to the output.
        X[0] = x[0];
        return X;
    }

    E = FFT_recurse( x, N/2, skip * 2 );
    O = FFT_recurse( x + skip, N/2, skip * 2 );

    for ( int k = 0; k < N / 2; k++ ) {
        O[k] = ( cexp( 2.0 * I * M_PI * k / N ) * O[k] );

        // While E[k] and O[k] are of length N/2, and X[k] is of length N, E[k] and
        // O[k] are periodic in k with length N/2. See p.609 of Numerical Recipes
        // in C (3rd Ed, 2007). [CL]
        X[k] = E[k] + O[k];
        X[k + N/2] = E[k] - O[k];
    }

    free( E );
    free( O );

    return X;
}

complex double *FFT( complex double *x, int N ) {
    return FFT_recurse( x, N, 1 );
}
It's really not that complicated, but the improvement in performance is immense. I put together some driver code and tossed it all up on my github account. Some simple timings revealed that a straightforward "math definition" of the algorithm took approximately 12.1s, and the FFT implementation above took a mere 0.1s. More than a 100x speed increase, and that's with a whole bunch of malloc()s and free()s strewn about!
I hope that this explanation was somewhat helpful in demystifying the Fast Fourier Transform and how it works. It's one of many examples of algorithms that exploit mathematics in order to gain an order-of-magnitude speedup.
Oh, and in case Mike didn't make it clear, you should never implement this yourself. Use the vDSP routines in Accelerate.framework!
Apple's performance team continues to push the limits of their FFT implementation year after year on all platforms. A combination of mathematicians, physicists, engineers, scientists, and assembly language wizards are working hard to ensure that Accelerate.framework is always running as fast, and power-efficient as possible.
I'm of the opinion that Apple's performance team is largely responsible for what makes the "cool stuff" on the iPhone possible. Think about live audio effects in Garage Band, video effects in iMovie, the processing in iPhoto, and so forth. All of that stuff is depending on Accelerate.framework in some capacity.
The next time you visit WWDC, make some time to stop by their lab with any performance questions you may have that relate to Accelerate.framework. They're really nice folks, and astoundingly smart, too!
Arseny Kapoulkine at 2012-11-07 20:44:22:
Something that's been bugging me for a while but that I haven't looked into yet - maybe you know the answer! - is the problem of non-power-of-two sized data sets.
Is it possible to compute the FT in O(N log N) in case the size is not a power of two? (Padding the data with zeroes and applying the FFT obviously generates wrong results; I think the same goes for padding with a repeated version of the data.)
Chris Liscio at 2012-11-07 21:40:31:
Zero-padding the FFT does not produce incorrect results. In fact, it's the recommended approach when you're dealing with NPOT (non-power-of-two) data sets.
If you're getting bad results, it's because you're either doing something wrong, or perhaps your expectations are incorrect. Care to elaborate more on this?
As for answering your question, I believe there are algorithms that 'degrade gracefully' such that they're not as slow as the straightforward implementation, but they're definitely nowhere near the performance of a power-of-two data set. I'm not sure on their O notation runtimes though.
Check out the KissFFT implementation if you're interested in their approach. It's a fairly readable implementation of the FFT that also handles NPOT data sets.
Thanks for the KissFFT link - this does give me the info I wanted. What their implementation does is get the prime factors of the dataset size and then use the divide&conquer approach for each prime factor. As far as I can tell, each butterfly step is quadratic in the prime factor size, something like O((N/p) * p^2) = O(N*p), so for powers of two you get O(N) for each stage, but for i.e. a prime N you get O(N^2).
On to padding; I guess it depends on what you're using FFT for. I'm remarking that AFAICS there is no way to compute DFT for a NPOT dataset using the outlined algorithm with any padding - for example, the sine wave frequencies of the distribution are different?
For some applications it might not matter, I guess - i.e. using FFT as a means of fast convolution should work with zero-padding.
If you *are* interested in frequency response by itself, however, why should you pad with zeroes? The padding values change the result of the FFT, which means that you can't choose the padding values arbitrarily. If your 5-sample signal is meant to be periodic (with a period of 5), then the "right" way is to pad with the same sequence - but adjusting the size to 8 or 16 should give you different frequency responses, i.e.:
http://www.wolframalpha.com/input/?i=DFT%5B1%2C+2%2C+3%5D%2C+DFT%5B1%2C+2%2C+3%2C+0%5D%2C+DFT%5B1%2C+2%2C+3%2C+1%5D%2C+DFT%5B1%2C+2%2C+3%2C+1%2C+2%2C+3%2C+1%2C+2%5D
"If your 5-sample signal is meant to be periodic (with a period of 5), then the "right" way is to pad with the same sequence - but adjusting the size to 8 or 16 should give you different frequency responses"
You're presuming:
- a steady-state signal,
- with a known period,
- and all higher components (harmonic or otherwise) are even multiples of the fundamental period.
If any of those aren't true, your padding scheme wouldn't work.
An 8-sample window of a 5-sample signal might give a wrong zero-padded result, but you are remembering to window your samples with an apodization function, right? You can't just stick zeroes at the end of {0,1,2,0,1} and expect it to work; the samples have to be tapered. This causes spectral leakage, but that's a consequence of sampling a non-contrived signal, and it's unavoidable.
You can only avoid having to use a windowing function by having prior knowledge of the signal, and if you require that level of precision, for an input of arbitrary length, I don't think the FFT is the droid you're looking for.
aReader at 2016-05-17 22:57:00:
There are some missing parts, I assume due to non-escaped characters.
|
Effect of Carbon Nanotube (CNT) Loading on the Thermomechanical Properties and the Machinability of CNT-Reinforced Polymer Composites | J. Manuf. Sci. Eng. | ASME Digital Collection
Samuel, J., Dikshit, A., DeVor, R. E., Kapoor, S. G., and Hsia, K. J. (May 5, 2009). "Effect of Carbon Nanotube (CNT) Loading on the Thermomechanical Properties and the Machinability of CNT-Reinforced Polymer Composites." ASME. J. Manuf. Sci. Eng. June 2009; 131(3): 031008. https://doi.org/10.1115/1.3123337
The machinability of carbon nanotube (CNT)-reinforced polymer composites is studied as a function of CNT loading, in light of the trends seen in their material properties. To this end, the thermomechanical properties of the CNT composites with different loadings of CNTs are characterized. Micro-endmilling experiments are also conducted on all the materials under investigation. Chip morphology, burr width, surface roughness, and cutting forces are used as the machinability measures to compare the composites. For composites with lower loadings of CNTs (1.75% by weight), the visco-elastic/plastic deformation of the polymer phase plays a significant role during machining, whereas, at loadings ≥ 5% by weight, the CNT distribution and interface effects dictate the machining response of the composite. The ductile-to-brittle transition that occurs with an increase in CNT loading results in reduced minimum chip thickness values and burr dimensions in the CNT composite. The increase in thermal conductivity with the increase in CNT loading results in reduced number of adiabatic shear bands being observed on the chips and reduced thermal softening effects at high cutting velocities. Thus, overall, an increase in CNT loading appears to improve the machinability of the composite.
carbon nanotubes, composite materials, machinability, CNT composites, micro-machining
Carbon nanotubes, Composite materials, Cutting, Machinability, Machining, Polymers, Thermal conductivity, Surface roughness, Thermomechanics, Materials properties, Polymer composites
|
Transfer function to coupled allpass lattice - MATLAB tf2cl - MathWorks India
Transfer function to coupled allpass lattice
[k1,k2] = tf2cl(b,a)
[k1,k2,beta] = tf2cl(b,a)
[k1,k2] = tf2cl(b,a) where b is a real, symmetric vector of numerator coefficients and a is a real vector of denominator coefficients, corresponding to a stable digital filter, will perform the coupled allpass decomposition
H\left(z\right)=\frac{B\left(z\right)}{A\left(z\right)}=\left(\frac{1}{2}\right)\left[H1\left(z\right)+H2\left(z\right)\right]
of a stable IIR filter H(z) and convert the allpass transfer functions H1(z) and H2(z) to a coupled lattice allpass structure with coefficients given in vectors k1 and k2.
[k1,k2] = tf2cl(b,a) where b is a real, antisymmetric vector of numerator coefficients and a is a real vector of denominator coefficients, corresponding to a stable digital filter, performs the coupled allpass decomposition
H\left(z\right)=\frac{B\left(z\right)}{A\left(z\right)}=\left(\frac{1}{2}\right)\left[H1\left(z\right)-H2\left(z\right)\right]
of a stable IIR filter H(z) and converts the allpass transfer functions H1(z) and H2(z) to a coupled lattice allpass structure with coefficients given in vectors k1 and k2.
In some cases, the decomposition is not possible with real H1(z) and H2(z). In those cases, a generalized coupled allpass decomposition may be possible, using the syntax described below.
[k1,k2,beta] = tf2cl(b,a) performs the generalized allpass decomposition of a stable IIR filter H(z) and converts the complex allpass transfer functions H1(z) and H2(z) to corresponding lattice allpass filters
H\left(z\right)=\frac{B\left(z\right)}{A\left(z\right)}=\left(\frac{1}{2}\right)\left[\overline{\beta }\cdot H1\left(z\right)+\beta \cdot H2\left(z\right)\right]
where beta is a complex scalar of magnitude equal to 1.
Coupled allpass decomposition is not always possible. Nevertheless, Butterworth, Chebyshev, and Elliptic IIR filters, among others, can be factored in this manner. For details, refer to Signal Processing Toolbox™ User's Guide.
[b,a]=cheby1(9,.5,.4);
[k1,k2]=tf2cl(b,a); % Get the reflection coeffs. for the lattices.
[num1,den1]=latc2tf(k1,'allpass'); % Convert each allpass lattice
[num2,den2]=latc2tf(k2,'allpass'); % back to transfer function.
num = 0.5*conv(num1,den2)+0.5*conv(num2,den1);
den = conv(den1,den2); % Reconstruct numerator and denominator.
MaxDiff=max([max(b-num),max(a-den)]); % Compare original and reconstructed
% numerator and denominators.
ca2tf | cl2tf | iirpowcomp | latc2tf | tf2ca | tf2latc
|
Processing method of EMD endpoint effect based on SVRM extension | JVE Journals
Kai Chai1 , Shao-Wei Feng2
1, 2College of Naval Architecture and Ocean, Naval University of Engineering, Wuhan, 430033, China
Received 19 October 2020; received in revised form 5 November 2020; accepted 15 November 2020; published 26 November 2020
Copyright © 2020 Kai Chai, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Aiming at the problem of the endpoint effect in empirical mode decomposition (EMD), the application of the support vector regression machine (SVRM) to EMD extension data prediction is studied. Firstly, the basic principle, data extension method, and parameter setting of SVRM are introduced. Secondly, several ways of applying SVRM to EMD extension are studied, and the operational efficiency and decomposition accuracy of each are analysed and verified. Finally, the proposed extension method based on SVRM extreme-point prediction greatly improves the operational efficiency of long SVRM extensions. Simulation signal analysis shows that the SVRM extreme-point prediction extension method not only improves the accuracy and reliability of EMD decomposition, but also effectively suppresses the endpoint effect, significantly reduces the SVRM extension time, and improves the practicability of the EMD method.
Keywords: empirical modal decomposition, support vector regression machine, endpoint effect, data extension.
Empirical mode decomposition (EMD) has a unique adaptability in the processing of nonlinear, non-stationary signals, which has led to in-depth research and application; however, the endpoint effect has been restricting the spread of EMD applications in seismic signal processing, structural analysis, and mechanical fault diagnosis [1].
The extension method based on the original data relies on the data itself, is fast, and is suitable for long extensions. Huang et al. identified the endpoint-effect problem when the EMD method was proposed, put forward the characteristic wave method, and applied for a patent [2]. The mirror closed extension method is simple in operation and efficient for long data; however, truncating the signal at an extreme value causes partial information loss, so it is not suitable for short data and is ineffective when the signal symmetry is poor [3]. The sinusoidal matching method alleviates the endpoint effect to a certain extent, but in practice the constructed waveform hardly reflects the trend of the signal, so its practicability is poor [4]. The waveform matching method preserves the trend of the original signal well and works well for signals with regular waveforms and strong periodicity [5]; for signals with poor periodicity, however, it may be difficult to find a sub-waveform with a high matching degree. Algorithms based on data prediction generally have high accuracy and good extension smoothness, but the large amount of computation limits the range of EMD applications [6]. Prediction algorithms based on neural networks are accurate but require many training samples and long learning times, which precludes real-time online processing [7]. The adaptive autoregressive model has high operational efficiency, but the autoregressive model itself is a linear operation and performs poorly on non-stationary signals [8]. On the basis of summarizing the mechanism of the endpoint effect in EMD decomposition, this paper studies the application of the support vector regression machine (SVRM) to EMD extension data prediction.
2. Overview of SVRM theory
2.1. Data series extension method of SVRM
After constructing the regression training model of SVRM, the data points can be extended forward and backward by using it. The following is an example to introduce the extension method and steps of SVRM:
(a) Construct the training set according to the existing data sequence.
For a given data sequence $s(1), s(2), \dots, s(N)$, where $N$ is the number of sampling points, the number of training samples $l$ is first determined, and a training set $L=\{(x_1,y_1),\dots,(x_l,y_l)\}$ is constructed with:

$x_i = [s(i)\ s(i+1)\ \cdots\ s(N-l+i-1)]^T$, $y_i = s(N-l+i)$, $1 \le i \le l$.
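The sliding-window construction above can be sketched with NumPy as follows (a minimal illustration; the toy sequence and window sizes are my own, not from the paper):

```python
import numpy as np

def build_training_set(s, l):
    """Build training pairs (x_i, y_i), 1 <= i <= l, from a sequence s(1..N).

    x_i = [s(i), ..., s(N-l+i-1)]^T  (a window of length N-l),
    y_i = s(N-l+i)                   (the value just after the window).
    """
    N = len(s)
    w = N - l                                       # window length
    X = np.array([s[i:i + w] for i in range(l)])    # shape (l, N-l)
    y = np.array([s[w + i] for i in range(l)])      # targets s(N-l+i)
    return X, y

s = np.sin(0.3 * np.arange(20))     # toy sequence, N = 20
X, y = build_training_set(s, l=5)
print(X.shape, y.shape)             # (5, 15) (5,)
```

Each row of `X` is one training window and the matching entry of `y` is the next sample, exactly the $(x_i, y_i)$ pairs defined above.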
(b) Construct regression model.
The regression model is constructed according to the above method.
(c) Predict the first sequence value.
The first predicted value outside the boundary can be obtained:
$s(N+1) = \sum_{i=1}^{l} (\overline{a}_i^* - \overline{a}_i)\, k(x_i, x_{l+1}) + \overline{b}$, where $x_{l+1} = [s(l+1)\ s(l+2)\ \cdots\ s(N)]^T$.
(d) Iterate step by step to obtain the predicted sequence values.

Taking $s(N+1)$ as a new boundary point of the original data, the second extension value $s(N+2)$ is obtained, and so on. For a required extension length $M$, the whole extension sequence $s(N+1), \dots, s(N+M)$ is obtained, where the value of any extension $s(N+m)$ is:

$s(N+m) = \sum_{i=1}^{l} (\overline{a}_i^* - \overline{a}_i)\, k(x_i, x_{l+m}) + \overline{b}$, with $x_{l+m} = [s(l+m)\ s(l+m+1)\ \cdots\ s(N+m-1)]^T$.
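The one-step-ahead iteration of step (d) can be sketched as follows. Since the paper's SVRM training is not given in code, this sketch substitutes a simple RBF kernel ridge regression for the support-vector machinery (a stand-in, not the authors' implementation; all names and parameter values here are assumptions):

```python
import numpy as np

def rbf(A, B, g):
    """RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-g * d2)

def extend(s, l, M, lam=1e-3):
    """Extend sequence s by M points, one predicted point per iteration."""
    s = list(s)
    for _ in range(M):
        N = len(s)
        w = N - l                                    # window length
        g = 1.0 / w                                  # kernel width scaled to the window
        X = np.array([s[i:i + w] for i in range(l)])
        y = np.array([s[w + i] for i in range(l)])
        K = rbf(X, X, g)
        alpha = np.linalg.solve(K + lam * np.eye(l), y)  # regularized dual weights
        x_new = np.array(s[-w:])[None, :]            # most recent window
        s.append((rbf(x_new, X, g) @ alpha).item())  # predicted boundary point
    return np.array(s)

t = np.arange(40)
s = np.sin(0.4 * t)
s_ext = extend(s, l=8, M=5)
print(len(s_ext))   # 45
```

As in step (d), each predicted value becomes a new boundary point and the window slides forward before the next prediction; here the model is simply refit at each step for clarity.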
2.2. Setting of extension parameters
The extension precision of SVRM increases with the number of training samples, while the operational efficiency decreases as the sample number grows. Therefore, to guarantee extension accuracy, the sample length $N_s$ must be increased or decreased together with the extension length $N_e$, with ratio $k = N_e / N_s$. In general, a short extension requires higher accuracy, so $k$ is set to a smaller value; for a long extension the accuracy requirement can be relaxed and a larger $k$ selected to improve operational efficiency. Specifically, taking the spacing between the signal's extreme points as the reference scale:

$l = \max[\mathrm{indmin}(2) - \mathrm{indmin}(1), \mathrm{indmax}(2) - \mathrm{indmax}(1)] / 2$,

$N_e = k_e \cdot l$, $N_s = k_s \cdot l$,

where $l$ is the reference distance between extreme points, $\mathrm{indmin}(i)$ and $\mathrm{indmax}(i)$ are the sample indices of the $i$-th minimum and maximum of the signal, $k_s$ is the sample length coefficient, and $k_e$ is the extension length coefficient.
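The reference scale $l$ and the derived lengths can be computed directly from the first extrema of the signal; a minimal sketch (the helper names and the coefficient values $k_s$, $k_e$ below are illustrative assumptions):

```python
import numpy as np

def local_extrema(s):
    """Indices of local minima and maxima of a 1-D sequence."""
    d = np.diff(s)
    maxima = np.where((d[:-1] > 0) & (d[1:] < 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] > 0))[0] + 1
    return minima, maxima

def extension_params(s, k_s=2, k_e=5):
    """l = max(indmin(2)-indmin(1), indmax(2)-indmax(1)) / 2; N_s = k_s*l, N_e = k_e*l."""
    indmin, indmax = local_extrema(s)
    l = max(indmin[1] - indmin[0], indmax[1] - indmax[0]) / 2
    return int(k_s * l), int(k_e * l)

t = np.arange(0, 500, 0.25)        # N = 2000 samples at f_s = 4 Hz
s = 3 * np.sin(0.08 * np.pi * t) + np.sin(0.32 * np.pi * t)
N_s, N_e = extension_params(s)
print(N_s, N_e)
```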
3. EMD extension algorithm based on SVRM
To prevent the endpoint effect of EMD from spreading into the signal as the decomposition progresses, it is desirable to obtain an accurate data extension of sufficient length in a relatively short time. Ensuring prediction accuracy requires a sufficient number of samples, and a long extension requires a large sample count, which results in low computational efficiency. Improving the efficiency of SVRM extension in EMD applications has therefore become an urgent problem.
3.1. EMD primary extension algorithm based on SVRM
The one-time extension method extends the data once before EMD decomposition and then truncates the extended portions of the resulting IMFs, recovering components of the original data length. Its characteristic is that the whole decomposition requires only a single extension; the algorithm is simple to implement, and there is no error accumulation from repeated extension. The simulated signal $y_2(t)$, whose waveform is shown in Fig. 1, is taken as an example:

$y_2(t) = 3\sin(0.08\pi t) + \sin(0.32\pi t) + \cos(0.01\pi t) + 2\sin(0.04\pi t)$,

$N = 2000$, $t = 500\ \mathrm{s}$, $f_s = 4$.

The SVRM method is used to perform a long extension, followed by EMD decomposition. The sample length $N_s$ and extension length $N_e$ were controlled by adjusting the values of $k_s$ and $k_e$. The decomposition results are shown in Fig. 2.
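As a quick check, the simulated signal defined above can be generated directly (a minimal sketch using the stated N and sampling frequency):

```python
import numpy as np

fs = 4.0                 # sampling frequency, Hz
N = 2000                 # number of samples -> t spans 500 s
t = np.arange(N) / fs
y2 = (3 * np.sin(0.08 * np.pi * t) + np.sin(0.32 * np.pi * t)
      + np.cos(0.01 * np.pi * t) + 2 * np.sin(0.04 * np.pi * t))
print(y2.shape, t[-1])   # (2000,) 499.75
```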
Fig. 1. Waveform of the simulated signal $y_2(t)$
As shown in Fig. 2(a), with $k_e = 5$ the sample length is $N_s = 16$ and the extension length $N_e = 90$; the decomposition results show large deformation at both ends, and the endpoint effect is not effectively contained. When $k_s$ is increased, $N_s$ grows to 90, yet the decomposition results still show large deformation at both ends; that is, the endpoint effect is not contained merely by increasing the sample length. It can be inferred that with $k_e = 5$ the extension is too short, so the endpoint effect is not pushed outside the real signal. When $k_e = 20$, the extension length $N_e$ increases to 320 points and the decomposition results improve greatly: there is no obvious distortion at the endpoints, but a residual term $r_5$ is generated. It can be inferred that with $k_s = 1$ the regression model cannot accurately predict the signal trend because of the small sample size. When $k_s$ is also increased with $k_e = 20$, the sample length grows to 80 points; a more accurate decomposition result is obtained, but the running time of the program increases to 71.8776 s.
3.2. EMD decomposition and extension simultaneously based on SVRM
To avoid the efficiency problem caused by long SVRM extensions, an effective approach is to exploit the accuracy of short SVRM extensions and avoid long extensions altogether, i.e., to extend while decomposing. The idea is to perform, before each EMD sifting, a short extension covering a little more than two extreme points, and to truncate the extended part after the envelopes are generated. Since the extension is repeated before every sifting, each step only needs to extend one pair of extreme points (one maximum and one minimum) beyond each end of the signal. Repeated simulation experiments verify that, under the same conditions, once each end is extended by more than four extreme points the decomposition results no longer change significantly; to ensure decomposition accuracy, this paper therefore extends four extreme points at each end. Setting $k_s = 4$, the EMD processing results of $y_2(t)$ are shown in Fig. 3.
Fig. 2. Decomposition of the simulated signal after a single extension, for four $(k_s, k_e)$ settings; the run time $T$ is given for each case (e.g., $T = 22.2817$ s)
In Fig. 3, when the sample length is 10, residual terms are generated in the decomposition result; when the sample length is increased to 20, the decomposition reaches a very high precision, and the whole decomposition process takes only 0.9045 s.
It can be seen from Fig. 3 that when $k_s = 1$, due to the small sample number and poor extension accuracy, the endpoint effect is restrained to some extent, but residual terms appear. When $k_s = 2$, a relatively accurate decomposition result was obtained, and the operation time reached 388.2863 s.
Fig. 3. Decomposition of the simulated signal with simultaneous decomposition and extension, for two $(k_s, k_e)$ settings; one case took $T = 139.4893$ s
3.3. EMD continuation based on SVRM extremum prediction
In the extremum-prediction extension process, the SVRM training sequence is the extremum sequence of the original signal, so the method requires the original signal to contain enough extreme points (experiments show generally no fewer than 6). When the number of extrema is small, the SVRM can hardly predict the positions and values of the extended signal's extrema accurately. It should be noted that, as EMD decomposition progresses, the frequency of the obtained components gradually decreases; if this method is used while decomposing and extending simultaneously, inaccurate extension may occur in the low-frequency stages because of the small number of extreme points. Therefore, this method is better suited to a single extension performed before decomposition. The decomposition results are shown in Fig. 4.
Fig. 4. EMD results of extension based on SVRM extreme-value prediction, for two $(N_s, N_e)$ settings with run time $T$
To verify the suppression of the endpoint effect and the practicability of the method on a real hydraulic system, a measured hydraulic fault signal is analysed, with a sampling frequency of 5 kHz and 2048 sampling points. The results of EMD extension based on SVRM extremum prediction are shown in Fig. 5: the distortion at both ends is weak, spurious similar-frequency components do not appear near the endpoints, and the endpoint effect in the IMF components is obviously suppressed.
Fig. 5. EMD results of hydraulic fault signal
This paper has studied the application of SVRM to EMD extension: the prediction principle of SVRM was introduced, and a method of setting the extension length and sample number according to the signal's extremum scale was proposed. The advantages and disadvantages of the various SVRM application methods in EMD extension were analysed from the angles of decomposition accuracy and efficiency.
Huang N. E., Shen Zheng, Long S. R., et al. The empirical mode decomposition and the Hilbert spectrum for nonlinear and nonstationary time series analysis. Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, Vol. 454, Issue 1971, 1998, p. 903-995. [Publisher]
Huang N. E., Wu M. L., Qu W. D. Applications of Hilbert-Huang transform to non-stationary financial time series analysis. Applied Stochastic Models in Business and industry, Vol. 19, Issue 3, 2003, p. 245-268. [Publisher]
Zhao Jinping. Improvement of the mirror extending in empirical mode decomposition method and the technology for eliminating frequency mixing. High Technology Letters, Vol. 8, Issue 3, 2002, p. 40-47. [Search CrossRef]
Lin D. C., Guo Z. L., An F. P., et al. Elimination of end effects in empirical mode decomposition by mirror image coupled with support vector regression. Mechanical Systems and Signal Processing, Vol. 31, 2012, p. 13-28. [Publisher]
Tang B. P., Dong S. J., Song T. Method for eliminating mode mixing of empirical mode decomposition based on the revised blind source separation. Signal Processing, Vol. 92, Issue 1, 2012, p. 248-258. [Publisher]
Guhathakurta K., Mukherjee I., Chowdhury A. R. Empirical mode decomposition analysis of two different financial time series and their comparison. Chaos, Solitons and Fractals, Vol. 37, 2008, p. 1214-1227. [Publisher]
Yu L., Wang S., Lai K. K. Forecasting crude oil price with an EMD-based neural network ensemble learning paradigm. Energy Economics, Vol. 30, 2008, p. 2623-2635. [Publisher]
Wu F. J., Qu L. S. An improved method for restraining the end effect in empirical mode decomposition and its applications to the fault diagnosis of large rotating machinery. Journal of Sound and Vibration, Vol. 314, 2008, p. 586-602. [Publisher]
|
Monitoring the dynamics of exhausting exercises: the time-variability of acceleration | JVE Journals
Lluc Montull1 , Pablo Vázquez2 , Robert Hristovski3 , Natàlia Balagué4
1, 2, 4Complex Systems in Sport Research Group, Institut Nacional d’Educació Física de Catalunya (INEFC), Universitat de Barcelona (UB), Barcelona, Spain
3Ss. Cyril and Methodius, Faculty of Physical Education, Sport and Health, Skopje, Republic of Macedonia
Copyright © 2019 Lluc Montull, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Fatigue has been related to changes in the time-variability properties of coordinative variables during an exhausting isometric exercise [1]. In this study we aimed to investigate the qualitative changes in acceleration (a kinematic collective variable) during exhausting running (n = 8) and ski mountaineering (n = 5 males, n = 5 females) exercises. The time-variability of acceleration was quantified using Multifractal Detrended Fluctuation Analysis (MFDFA). Initial and final time series of both exercises were compared through the Wilcoxon test. A reduction of the MFDFA spectrum was observed in the final period of both exercises as the participants approached exhaustion, except in the male group of ski mountaineers, who increased their speed at the end of the exercise. Among runners, those who approached psychobiological exhaustion showed a larger reduction in the MFDFA spectrum than those who did not. Although more research is needed to model this dynamic behavior under different constraints, the time-variability of acceleration analysed through a multifractal approach seems to provide valid information about system adaptation during exhausting dynamic exercises.
Keywords: multifractal analysis, self-organization, kinematic variability, fatigue, nonlinear dynamics.
The study of the time-variability properties, widely used in scientific fields to investigate the complex systems behavior, has revealed changes in the structure of several collective variables during exercise [1-3]. The dynamic behavior of such collective variables informs about the temporal coupling of the system’s components, and the ability of the system to control its behavior [1]. It has been investigated under different constraints (e.g., diseases, stress, etc.) to study phenomena as the self-organization and the interactions of processes across different levels and timescales: from micro- (e.g., cellular, etc.) to macro-timescales (e.g., psycho-emotional, etc.) [4-6]. Multifractal Detrended Fluctuation Analysis (MFDFA) has been used to identify the deviations in the fractal structures of the variability within time periods with large and small fluctuation, and thereby allowing to study this dynamic behavior [4, 5, 7].
A rapid and flexible control of the system's timescales has been demonstrated at the kinematic level in high-skilled athletes, who are able to coordinate their movements better and show higher effectivity and task performance [8-10]. On the contrary, a rigid and slower control of the components' interactions characterizes less skilled and experienced performers, who are also less coordinated and efficient.
Taking fatigue as a common constraint on exercise performance, it has been shown to produce maladjustments of the timescales' dynamics [1-3, 11]. In that sense, participants have been observed to lose their initial fine regulation and control during a quasi-isometric exercise (i.e., a static task), progressively disabling task performance as they approached exhaustion [1, 11]. However, the time-variability of kinematic variables during dynamic exercises performed until exhaustion has not been studied yet. This information can be useful for training and health monitoring: concretely, it can help detect the coordinative changes and loss of control produced by exercise-induced fatigue and anticipate the exhaustion point. It is worth noting that although this type of analysis has been applied to evaluate physiological processes and diagnose diseases [5], its application to monitoring sport and exercise activities is still scarce.
Accordingly, this study, divided into two parts, aimed to investigate the qualitative changes in the time-variability of a kinematic collective variable, the acceleration, during two competitive, maximal (exhausting) dynamic exercises in runners and ski mountaineers.
– Running exercise: Eight experienced runners (39.37 ± 6.19 y.o.) performed a Cooper Test covering the maximum distance in 12 min.
– Ski mountaineering exercise: Ten ski mountaineering athletes (5 males and 5 females; 25.8 ± 5.3 y.o.) competing at international level performed a trial vertical race (1980 m of distance and 415 m of positive gain). Following the federation's rules, the groups competed separately by gender.
The athletes' acceleration was recorded using WIMU devices (Realtrack Systems SL, Almería, Spain) placed on L3 [12]. The sampling frequency was 100 Hz. The acceleration time series were analysed through MFDFA [7]. The first and last minute of exercise were compared in the runners, and the first and third portions of the time series were compared in the ski mountaineers. Each portion contained N = 30000 data points (male group) and N = 41900 data points (female group). Velocity was also calculated and recorded during these periods to compare performance in both the running and skiing exercises.
The differences between the initial and final periods of the MFDFA spectrum were analysed with the Wilcoxon non-parametric test, and the differences between final and initial values of the MFDFA spectrum and of velocity were also calculated. Heart rate (HR) was continuously recorded in both exercises to assess physiological stress, and Borg's RPE (CR-10) was monitored only at the end of the running exercise. Data analysis was conducted via Matlab R2013b and SPSS.
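The MFDFA computation used to quantify the time-variability can be sketched as follows, a simplified single-pass implementation after the Kantelhardt/Ihlen procedure cited in [7]; the scale range, q values, and detrending order below are illustrative choices, not the study's exact settings:

```python
import numpy as np

def mfdfa_width(x, scales, qs, order=1):
    """Width of the generalized Hurst exponents h(q) from MFDFA.

    Returns max(h(q)) - min(h(q)), a simple proxy for the multifractal
    spectrum width compared between exercise periods in the study.
    """
    y = np.cumsum(x - np.mean(x))                    # profile
    Fq = np.zeros((len(qs), len(scales)))
    for j, s in enumerate(scales):
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        # variance of residuals after polynomial detrending in each segment
        F2 = np.array([np.mean((seg - np.polyval(np.polyfit(t, seg, order), t)) ** 2)
                       for seg in segs])
        for i, q in enumerate(qs):
            if abs(q) < 1e-9:                        # q = 0: logarithmic average
                Fq[i, j] = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq[i, j] = np.mean(F2 ** (q / 2)) ** (1.0 / q)
    # h(q): slope of log F_q(s) versus log s
    h = np.array([np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0]
                  for i in range(len(qs))])
    return h.max() - h.min()

rng = np.random.default_rng(0)
noise = rng.standard_normal(5000)                    # monofractal test signal
scales = np.array([16, 32, 64, 128, 256])
qs = np.array([-5, -3, -1, 0, 1, 3, 5], dtype=float)
print(round(mfdfa_width(noise, scales, qs), 3))      # narrow spectrum for white noise
```

A reduction of this width between the initial and final periods corresponds to the spectrum narrowing reported in the results.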
Both competitive exercises imposed a high psychobiological stress on the athletes (HR running = 181.75 ± 9.31; RPE running = 8.38 ± 0.74; HR skiing = 187.23 ± 10.05). The results showed a reduction in the MFDFA spectrum of the acceleration time series (see the example in Fig. 1) in both exercises as the effort accumulated, even though no significant differences were found between the first and last periods of exercise during running (Wilcoxon z = –1.402, p = 0.161) or ski mountaineering (Wilcoxon z = –1.38).
Fig. 1. Example of how the MFDFA of the acceleration spectrum was reduced from the initial to the final period of exercise
In the running study, those who approached exhaustion (n = 4; RPE ≥ 9; average HR ≥ 180) showed a higher reduction (0.28) in the MFDFA of the acceleration spectrum at the end of the exercise, compared with those who reached less psychobiological stress (n = 4; RPE < 9; average HR < 180), whose reduction was 0.09. The loss of velocity between the initial and final periods of the run matched a reduction in the MFDFA of the acceleration spectrum (Table 1).
In the skiing exercise, gender groups followed different strategies during the race. While the female skiers group showed a reduction of the MFDFA of the acceleration spectrum and a velocity decrease at the end, the male group displayed a small increment of the MFDFA spectrum, probably due to a lower slope in the final part of the trial and reflected in an increase of the velocity (Table 1).
Table 1. Multifractal Detrended Fluctuation Analysis (MFDFA) spectrum of the acceleration and velocity of the first and last periods of the running and ski mountaineering exercises and their differences
Groups: runners (n = 8); male ski mountaineers (n = 5); female ski mountaineers (n = 5). Measures: initial MFDFA spectrum; final MFDFA spectrum; final velocity (m/s); MFDFA spectrum differences; velocity differences.
These results are in accordance with previous studies of an isometric exercise [1, 11]. The reduction in the variability of the collective variable suggests a reduced adaptability to the task demands over time due to the accumulated effort, and hence a more difficult control of the task.
The time-variability of acceleration seems to provide valid information about system adaptation during exhausting dynamic exercises. More evidence is needed to model the way the collective variable changes its behaviour and how the system loses control at different scales as a consequence of fatigue during exercise.
In future research, constraints that can affect the kinematic variability, such as the slope or the terrain, should be carefully controlled. The multifractal analysis of the acceleration time-variability points towards being a useful tool for monitoring and evaluating the adaptation to exercise. In this sense, it may complement the commonly used quantitative variables (e.g., HR or lactate values) in the control of training and competition. Such multifractal analysis may be applied as well to other variables with medical purposes.
Vázquez P., Hristovski R., Balagué N. The path to exhaustion: Time-variability properties of coordinative variables during continuous exercise. Frontiers in Physiology, Vol. 7, 2016, p. 37. [Publisher]
Cashaback J. G., Cluff T., Potvin J. R. Muscle fatigue and contraction intensity modulates the complexity of surface electromyography. Journal of Electromyography and Kinesiology, Vol. 23, 2013, p. 78-83. [Publisher]
Pethick J., Winter S. L., Burnley M. Fatigue reduces the complexity of knee extensor torque fluctuations during maximal and submaximal intermittent isometric contractions in man. Journal of Physiology, Vol. 593, 2015, p. 2085-2096. [Publisher]
Dutta S., Ghosh D., Chatterjee S. Multifractal detrended fluctuation analysis of human gait diseases. Frontiers in Physiology, Vol. 4, 2013, p. 274. [Publisher]
Cleetus H. M. M., Singh D. Multifractal application on electrocardiogram. Medical Engineering and Technology, Vol. 38, Issue 1, 2014, p. 55-61. [Publisher]
Wijnants M. L. A review of theoretical perspectives in cognitive science on the presence of scaling in coordinated physiological and cognitive processes. Journal of Nonlinear Dynamics, Vol. 12, 2014, p. 962043. [Search CrossRef]
Ihlen E. A. Introduction to multifractal detrended fluctuation analysis in Matlab. Frontiers in Physiology, Vol. 3, Issue 141, 2012, https://doi.org/10.3389/fphys.2012.00141. [Search CrossRef]
Den Hartigh R. J., Cox R. F., Gernigon C., Van Yperen N. W., Van Geert P. L. Pink noise in rowing ergometer performance and the role of skill level. Motor Control, Vol. 19, Issue 4, 2015, p. 355-369. [Publisher]
Nourrit Lucas D., Tossa A. O., Zélic G., Delignières D. Learning, motor skill, and long-range correlations. Journal of Motor Behavior, Vol. 47, Issue 3, 2014, p. 182-189. [Publisher]
Terrier P., Dériaz O. Persistent and anti-persistent pattern in stride-to-stride variability of treadmill walking: Influence of rhythmic auditory cueing. Human Movement Science, Vol. 31, 2012, p. 1585-1597. [Publisher]
Hristovski R., Balagué N. Fatigue-induced spontaneous termination point-Non equilibrium phase transitions and critical behavior in quasi-isometric exertion. Human Movement Science, Vol. 29, Issue 4, 2010, p. 483-493. [Publisher]
Schütte K. H., Aeles J., De Beéck T. O., Van Der Zwaard B. C., Venter R., Vanwanseele B. Surface effects on dynamics stability and loading during outdoor running using wireless trunk accelerometry. Gait and Posture, Vol. 48, 2016, p. 220-225. [Publisher]
|
(1-4)-beta-D-glucan:phosphate alpha-D-glucosyltransferase Wikipedia
In enzymology, a cellodextrin phosphorylase (EC 2.4.1.49) is an enzyme that catalyzes the chemical reaction
(1,4-beta-D-glucosyl)n + phosphate ⇌ (1,4-beta-D-glucosyl)n−1 + alpha-D-glucose 1-phosphate
Thus, the two substrates of this enzyme are (1,4-beta-D-glucosyl)n and phosphate, whereas its two products are (1,4-beta-D-glucosyl)n-1 and alpha-D-glucose 1-phosphate.
This enzyme belongs to GH (glycoside hydrolases) family 94. The systematic name of this enzyme class is 1,4-beta-D-oligo-D-glucan:phosphate alpha-D-glucosyltransferase. This enzyme is also called beta-1,4-oligoglucan:orthophosphate glucosyltransferase.
Sheth K, Alexander JK (1969). "Purification and properties of beta-1,4-oligoglucan:orthophosphate glucosyltransferase from Clostridium thermocellum". J. Biol. Chem. 244 (2): 457–64. PMID 5773308.
|
A distinctive narrow spectral feature of chemical species
Emission lines (discrete spectrum)
Absorption spectrum with Absorption lines (discrete spectrum)
Absorption lines for air under indirect illumination, with the direct light source not visible, so that the gas is not directly between source and detector. Here, Fraunhofer lines in sunlight and Rayleigh scattering of that sunlight act as the "source." This is the spectrum of a blue sky somewhat close to the horizon, pointing east at around 3 or 4 pm (i.e., with the Sun toward the west) on a clear day.
A spectral line is a dark or bright line in an otherwise uniform and continuous spectrum, resulting from emission or absorption of light in a narrow frequency range, compared with the nearby frequencies. Spectral lines are often used to identify atoms and molecules. These "fingerprints" can be compared to the previously collected ones of atoms[1] and molecules,[2] and are thus used to identify the atomic and molecular components of stars and planets, which would otherwise be impossible.
Types of line spectra
Continuous spectrum of an incandescent lamp (mid) and discrete spectrum lines of a fluorescent lamp (bottom)
Spectral lines are the result of interaction between a quantum system (usually atoms, but sometimes molecules or atomic nuclei) and a single photon. When a photon has about the right amount of energy (which is connected to its frequency)[3] to allow a change in the energy state of the system (in the case of an atom this is usually an electron changing orbitals), the photon is absorbed. Then the energy will be spontaneously re-emitted, either as one photon at the same frequency as the original one or in a cascade, where the sum of the energies of the photons emitted will be equal to the energy of the one absorbed (assuming the system returns to its original state).
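The matching condition rests on the energy–frequency relation E = hν = hc/λ; for example, a visible-light photon's energy can be computed directly (a minimal sketch using standard constant values):

```python
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
e = 1.602176634e-19     # J per eV

lam = 500e-9            # a visible-light photon at 500 nm
E = h * c / lam         # photon energy in joules
print(round(E / e, 2), "eV")   # about 2.48 eV
```

An atom absorbs such a photon only when this energy matches an allowed transition between its energy levels.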
A spectral line may be observed either as an emission line or an absorption line. Which type of line is observed depends on the type of material and its temperature relative to another emission source. An absorption line is produced when photons from a hot, broad spectrum source pass through a cooler material. The intensity of light, over a narrow frequency range, is reduced due to absorption by the material and re-emission in random directions. By contrast, a bright emission line is produced when photons from a hot material are detected, perhaps in the presence of a broad spectrum from a cooler source. The intensity of light, over a narrow frequency range, is increased due to emission by the hot material.
Spectral lines are highly atom-specific, and can be used to identify the chemical composition of any medium. Several elements, including helium, thallium, and caesium, were discovered by spectroscopic means. Spectral lines also depend on the temperature and density of the material, so they are widely used to determine the physical conditions of stars and other celestial bodies that cannot be analyzed by other means.
Depending on the material and its physical conditions, the energy of the involved photons can vary widely, with the spectral lines observed across the electromagnetic spectrum, from radio waves to gamma rays.
Strong spectral lines in the visible part of the spectrum often have a unique Fraunhofer line designation, such as K for a line at 393.366 nm emerging from singly-ionized Ca+, though some of the Fraunhofer "lines" are blends of multiple lines from several different species. In other cases, the lines are designated according to the level of ionization by adding a Roman numeral to the designation of the chemical element. Neutral atoms are denoted with the Roman numeral I, singly ionized atoms with II, and so on, so that, for example, Fe IX represents eight times ionized iron.
More detailed designations usually include the line wavelength and may include a multiplet number (for atomic lines) or band designation (for molecular lines). Many spectral lines of atomic hydrogen also have designations within their respective series, such as the Lyman series or Balmer series. Originally all spectral lines were classified into series: the Principal series, Sharp series, and Diffuse series. These series exist across atoms of all elements, and the patterns for all atoms are well-predicted by the Rydberg-Ritz formula. These series were later associated with suborbitals.
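As an illustrative sketch (not part of the original article), the hydrogen series mentioned above can be computed from the Rydberg formula, 1/λ = R·(1/n₁² − 1/n₂²); the constant below is the standard Rydberg constant for hydrogen:

```python
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in m^-1

def wavelength_nm(n1, n2):
    """Wavelength (nm) of the photon emitted when an electron
    drops from level n2 to level n1 in hydrogen."""
    return 1e9 / (R_H * (1.0 / n1**2 - 1.0 / n2**2))

print(round(wavelength_nm(1, 2), 1))  # Lyman-alpha, ~121.6 nm (ultraviolet)
print(round(wavelength_nm(2, 3), 1))  # Balmer H-alpha, ~656.5 nm (visible)
```

The Lyman series (n₁ = 1) falls entirely in the ultraviolet, while the first Balmer lines (n₁ = 2) fall in the visible band, matching the description above.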
Line broadening and shift
There are a number of effects which control spectral line shape. A spectral line extends over a range of frequencies, not a single frequency (i.e., it has a nonzero linewidth). In addition, its center may be shifted from its nominal central wavelength. There are several reasons for this broadening and shift. These reasons may be divided into two general categories – broadening due to local conditions and broadening due to extended conditions. Broadening due to local conditions is due to effects which hold in a small region around the emitting element, usually small enough to assure local thermodynamic equilibrium. Broadening due to extended conditions may result from changes to the spectral distribution of the radiation as it traverses its path to the observer. It also may result from the combining of radiation from a number of regions which are far from each other.
Broadening due to local effects
Natural broadening
The lifetime of excited states results in natural broadening, also known as lifetime broadening. The uncertainty principle relates the lifetime of an excited state (due to spontaneous radiative decay or the Auger process) with the uncertainty of its energy. Some authors use the term "radiative broadening" to refer specifically to the part of natural broadening caused by the spontaneous radiative decay.[4] A short lifetime will have a large energy uncertainty and a broad emission. This broadening effect results in an unshifted Lorentzian profile. The natural broadening can be experimentally altered only to the extent that decay rates can be artificially suppressed or enhanced.[5]
Doppler broadening
Main article: Doppler broadening
The atoms in a gas which are emitting radiation will have a distribution of velocities. Each photon emitted will be "red"- or "blue"-shifted by the Doppler effect depending on the velocity of the atom relative to the observer. The higher the temperature of the gas, the wider the distribution of velocities in the gas. Since the spectral line is a combination of all of the emitted radiation, the higher the temperature of the gas, the broader the spectral line emitted from that gas. This broadening effect is described by a Gaussian profile and there is no associated shift.
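The Gaussian width can be made concrete. A standard result (not derived in the article) is that the full width at half maximum of the thermal Doppler profile is Δλ = λ₀·sqrt(8·ln 2·k_B·T / (m·c²)); the sketch below evaluates it for the hydrogen H-alpha line:

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def doppler_fwhm_nm(lam0_nm, mass_kg, temp_k):
    """Gaussian FWHM of thermal Doppler broadening for a line at rest
    wavelength lam0_nm, emitter mass mass_kg, gas temperature temp_k."""
    return lam0_nm * math.sqrt(8 * math.log(2) * K_B * temp_k / (mass_kg * C * C))

# Hydrogen H-alpha (656.3 nm) in a gas at roughly 6000 K
m_h = 1.6735e-27  # mass of a hydrogen atom, kg
print(round(doppler_fwhm_nm(656.3, m_h, 6000.0), 3))  # ~0.036 nm
```

The width grows with the square root of the temperature and shrinks for heavier emitters, as the paragraph above describes.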
Pressure broadening
The presence of nearby particles will affect the radiation emitted by an individual particle. There are two limiting cases by which this occurs:
Impact pressure broadening or collisional broadening: The collision of other particles with the light emitting particle interrupts the emission process, and by shortening the characteristic time for the process, increases the uncertainty in the energy emitted (as occurs in natural broadening).[6] The duration of the collision is much shorter than the lifetime of the emission process. This effect depends on both the density and the temperature of the gas. The broadening effect is described by a Lorentzian profile and there may be an associated shift.
Quasistatic pressure broadening: The presence of other particles shifts the energy levels in the emitting particle, thereby altering the frequency of the emitted radiation. The duration of the influence is much longer than the lifetime of the emission process. This effect depends on the density of the gas, but is rather insensitive to temperature. The form of the line profile is determined by the functional form of the perturbing force with respect to distance from the perturbing particle. There may also be a shift in the line center. The general expression for the lineshape resulting from quasistatic pressure broadening is a 4-parameter generalization of the Gaussian distribution known as a stable distribution.[7]
Pressure broadening may also be classified by the nature of the perturbing force as follows:
Linear Stark broadening occurs via the linear Stark effect, which results from the interaction of an emitter with an electric field of a charged particle at a distance {\displaystyle r}, causing a shift in energy that is linear in the field strength. {\displaystyle (\Delta E\sim 1/r^{2})}
Resonance broadening occurs when the perturbing particle is of the same type as the emitting particle, which introduces the possibility of an energy exchange process. {\displaystyle (\Delta E\sim 1/r^{3})}
Quadratic Stark broadening occurs via the quadratic Stark effect, which results from the interaction of an emitter with an electric field, causing a shift in energy that is quadratic in the field strength. {\displaystyle (\Delta E\sim 1/r^{4})}
Van der Waals broadening occurs when the emitting particle is being perturbed by Van der Waals forces. For the quasistatic case, a Van der Waals profile[note 1] is often useful in describing the profile. The energy shift as a function of distance is given in the wings by e.g. the Lennard-Jones potential. {\displaystyle (\Delta E\sim 1/r^{6})}
Inhomogeneous broadening is a general term for broadening that arises because some emitting particles are in a different local environment from others, and therefore emit at a different frequency. This term is used especially for solids, where surfaces, grain boundaries, and stoichiometry variations can create a variety of local environments for a given atom to occupy. In liquids, the effects of inhomogeneous broadening are sometimes reduced by a process called motional narrowing.
Broadening due to non-local effects
Certain types of broadening are the result of conditions over a large region of space rather than simply upon conditions that are local to the emitting particle.
Opacity broadening
Electromagnetic radiation emitted at a particular point in space can be reabsorbed as it travels through space. This absorption depends on wavelength. The line is broadened because the photons at the line center have a greater reabsorption probability than the photons at the line wings. Indeed, the reabsorption near the line center may be so great as to cause a self reversal in which the intensity at the center of the line is less than in the wings. This process is also sometimes called self-absorption.
Macroscopic Doppler broadening
Radiation emitted by a moving source is subject to Doppler shift due to a finite line-of-sight velocity projection. If different parts of the emitting body have different velocities (along the line of sight), the resulting line will be broadened, with the line width proportional to the width of the velocity distribution. For example, radiation emitted from a distant rotating body, such as a star, will be broadened due to the line-of-sight variations in velocity on opposite sides of the star (this effect is usually referred to as rotational broadening). The greater the rate of rotation, the broader the line. Another example is an imploding plasma shell in a Z-pinch.
Each of these mechanisms can act in isolation or in combination with others. Assuming each effect is independent, the observed line profile is a convolution of the line profiles of each mechanism. For example, a combination of the thermal Doppler broadening and the impact pressure broadening yields a Voigt profile.
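The convolution described above can be sketched numerically. The function below (an illustrative implementation, not from the article) builds a crude Voigt profile by directly convolving a Gaussian with a Lorentzian on a midpoint grid:

```python
import math

def gaussian(x, sigma):
    """Normalized Gaussian (thermal Doppler) profile."""
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def lorentzian(x, gamma):
    """Normalized Lorentzian (pressure/natural) profile."""
    return gamma / (math.pi * (x * x + gamma * gamma))

def voigt(x, sigma, gamma, half_width=20.0, n=2000):
    """Voigt profile as the convolution V(x) = ∫ G(t) L(x - t) dt,
    approximated with a midpoint rule on [-half_width, half_width]."""
    dt = 2 * half_width / n
    return sum(gaussian(t, sigma) * lorentzian(x - t, gamma) * dt
               for t in (-half_width + (k + 0.5) * dt for k in range(n)))

# Peaked at the line center and symmetric about it
assert voigt(0.0, 0.5, 0.5) > voigt(1.0, 0.5, 0.5)
assert abs(voigt(1.0, 0.5, 0.5) - voigt(-1.0, 0.5, 0.5)) < 1e-9
```

In practice the Voigt profile is evaluated via the Faddeeva function rather than a direct convolution, but the brute-force version makes the "convolution of independent mechanisms" idea explicit.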
However, the different line broadening mechanisms are not always independent. For example, the collisional effects and the motional Doppler shifts can act in a coherent manner, resulting under some conditions even in a collisional narrowing, known as the Dicke effect.
Spectral lines of chemical elements
See also: Hydrogen spectral series
The phrase "spectral lines", when not qualified, usually refers to lines having wavelengths in the visible band of the full electromagnetic spectrum. Many spectral lines occur at wavelengths outside this range. At shorter wavelengths, which correspond to higher energies, ultraviolet spectral lines include the Lyman series of hydrogen. At the much shorter wavelengths of X-rays, the lines are known as characteristic X-rays because they remain largely unchanged for a given chemical element, independent of their chemical environment. Longer wavelengths correspond to lower energies, where the infrared spectral lines include the Paschen series of hydrogen. At even longer wavelengths, the radio spectrum includes the 21-cm line used to detect neutral hydrogen throughout the cosmos.
For each element, the following table shows the spectral lines which appear in the visible spectrum at about 400-700 nm.
Spectral lines of the chemical elements
hydrogen 1 H
helium 2 He
beryllium 4 Be
boron 5 B
oxygen 8 O
fluorine 9 F
sodium 11 Na
aluminium 13 Al
phosphorus 15 P
sulfur 16 S
chlorine 17 Cl
potassium 19 K
calcium 20 Ca
scandium 21 Sc
titanium 22 Ti
vanadium 23 V
chromium 24 Cr
manganese 25 Mn
iron 26 Fe
copper 29 Cu
zinc 30 Zn
gallium 31 Ga
germanium 32 Ge
selenium 34 Se
bromine 35 Br
rubidium 37 Rb
zirconium 40 Zr
niobium 41 Nb
molybdenum 42 Mo
technetium 43 Tc
ruthenium 44 Ru
rhodium 45 Rh
palladium 46 Pd
silver 47 Ag
cadmium 48 Cd
indium 49 In
tin 50 Sn
antimony 51 Sb
tellurium 52 Te
iodine 53 I
caesium 55 Cs
barium 56 Ba
lanthanum 57 La
praseodymium 59 Pr
promethium 61 Pm
holmium 67 Ho
hafnium 72 Hf
tantalum 73 Ta
tungsten 74 W
osmium 76 Os
iridium 77 Ir
platinum 78 Pt
gold 79 Au
thallium 81 Tl
lead 82 Pb
radium 88 Ra
thorium 90 Th
uranium 92 U
plutonium 94 Pu
curium 96 Cm
berkelium 97 Bk
californium 98 Cf
einsteinium 99 Es
Table of emission spectra of gas discharge lamps
Hydrogen line (21-cm line)
^ "Van der Waals profile" appears as lowercase in almost all sources, such as: Statistical mechanics of the liquid surface by Clive Anthony Croxton, 1980, A Wiley-Interscience publication, ISBN 0-471-27663-4, ISBN 978-0-471-27663-0; and in Journal of technical physics, Volume 36, by Instytut Podstawowych Problemów Techniki (Polska Akademia Nauk), publisher: Państwowe Wydawn. Naukowe., 1995,
^ Kramida, Alexander; Ralchenko, Yuri (1999), NIST Atomic Spectra Database, NIST Standard Reference Database 78, National Institute of Standards and Technology, retrieved 2021-06-27
^ Rothman, L.S.; Gordon, I.E.; Babikov, Y.; Barbe, A.; Chris Benner, D.; Bernath, P.F.; Birk, M.; Bizzocchi, L.; Boudon, V.; Brown, L.R.; Campargue, A.; Chance, K.; Cohen, E.A.; Coudert, L.H.; Devi, V.M.; Drouin, B.J.; Fayt, A.; Flaud, J.-M.; Gamache, R.R.; Harrison, J.J.; Hartmann, J.-M.; Hill, C.; Hodges, J.T.; Jacquemart, D.; Jolly, A.; Lamouroux, J.; Le Roy, R.J.; Li, G.; Long, D.A.; et al. (2013). "The HITRAN2012 molecular spectroscopic database". Journal of Quantitative Spectroscopy and Radiative Transfer. 130: 4–50. Bibcode:2013JQSRT.130....4R. doi:10.1016/j.jqsrt.2013.07.002. ISSN 0022-4073.
^ Einstein, Albert (1905). "On a Heuristic Viewpoint Concerning the Production and Transformation of Light".
^ Krainov, Vladimir; Reiss, Howard; Smirnov, Boris (1997). Radiative Processes in Atomic Physics. Wiley. doi:10.1002/3527605606. ISBN 978-0-471-12533-4.
^ For example, in the following article, decay was suppressed via a microwave cavity, thus reducing the natural broadening: Gabrielse, Gerald; H. Dehmelt (1985). "Observation of Inhibited Spontaneous Emission". Physical Review Letters. 55 (1): 67–70. Bibcode:1985PhRvL..55...67G. doi:10.1103/PhysRevLett.55.67. PMID 10031682.
^ "Collisional Broadening". Fas.harvard.edu. Archived from the original on 2015-09-24. Retrieved 2015-09-24.
^ Peach, G. (1981). "Theory of the pressure broadening and shift of spectral lines". Advances in Physics. 30 (3): 367–474. Bibcode:1981AdPhy..30..367P. doi:10.1080/00018738100101467. Archived from the original on 2013-01-14.
|
Artificial neuron - Wikipedia
An artificial neuron is a mathematical function conceived as a model of biological neurons in a neural network. Artificial neurons are elementary units in an artificial neural network.[1] The artificial neuron receives one or more inputs (representing excitatory postsynaptic potentials and inhibitory postsynaptic potentials at neural dendrites) and sums them to produce an output (or activation, representing a neuron's action potential which is transmitted along its axon). Usually each input is separately weighted, and the sum is passed through a non-linear function known as an activation function or transfer function. The transfer functions usually have a sigmoid shape, but they may also take the form of other non-linear functions, piecewise linear functions, or step functions. They are also often monotonically increasing, continuous, differentiable and bounded. Non-monotonic, unbounded and oscillating activation functions with multiple zeros that outperform sigmoidal and ReLU-like activation functions on many tasks have also been explored recently. The thresholding function has inspired logic gates referred to as threshold logic, applicable to building logic circuits that resemble brain processing. For example, new devices such as memristors have been used extensively to develop such logic in recent times.[2]
The artificial neuron transfer function should not be confused with a linear system's transfer function.
For a given artificial neuron k, let there be m + 1 inputs with signals x0 through xm and weights wk0 through wkm. Usually, the x0 input is assigned the value +1, which makes it a bias input with wk0 = bk. This leaves only m actual inputs to the neuron: from x1 to xm.
The output of the kth neuron is:
{\displaystyle y_{k}=\varphi \left(\sum _{j=0}^{m}w_{kj}x_{j}\right)}
where {\displaystyle \varphi } (phi) is the transfer function (commonly a threshold function).
The output is analogous to the axon of a biological neuron, and its value propagates to the input of the next layer, through a synapse. It may also exit the system, possibly as part of an output vector.
It has no learning process as such. Its transfer-function weights are calculated and its threshold value is predetermined.
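The basic structure above can be sketched in a few lines of Python (the names are illustrative, not a standard API):

```python
import math

def neuron_output(x, w, phi):
    """y_k = phi(sum_j w_kj * x_j); by convention x[0] = +1 is the
    bias input, so w[0] plays the role of the bias b_k."""
    u = sum(wj * xj for wj, xj in zip(w, x))
    return phi(u)

heaviside = lambda u: 1.0 if u >= 0 else 0.0      # threshold transfer function
logistic = lambda u: 1.0 / (1.0 + math.exp(-u))   # sigmoid transfer function

x = [1.0, 0.5, -0.3]   # x[0] = +1 is the bias input
w = [0.2, 0.8, 0.4]    # w[0] = b_k
print(neuron_output(x, w, heaviside))  # 1.0
```

Swapping the transfer function, as in the two examples above, is exactly the design choice the "Types of transfer functions" section below explores.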
Main article: Nv network
Depending on the specific model used they may be called a semi-linear unit, Nv neuron, binary neuron, linear threshold function, or McCulloch–Pitts (MCP) neuron.
Simple artificial neurons, such as the McCulloch–Pitts model, are sometimes described as "caricature models", since they are intended to reflect one or more neurophysiological observations, but without regard to realism.[3]
Biological models
Main article: Biological neuron model
Artificial neurons are designed to mimic aspects of their biological counterparts. However, a significant performance gap exists between biological and artificial neural networks. In particular, single biological neurons in the human brain with an oscillating activation function capable of learning the XOR function have been discovered.[4] Single artificial neurons with popular sigmoidal and ReLU-like activation functions, by contrast, cannot learn the XOR function.[5]
Dendrites – In a biological neuron, the dendrites act as the input vector. These dendrites allow the cell to receive signals from a large (>1000) number of neighboring neurons. As in the above mathematical treatment, each dendrite is able to perform "multiplication" by that dendrite's "weight value." The multiplication is accomplished by increasing or decreasing the ratio of synaptic neurotransmitters to signal chemicals introduced into the dendrite in response to the synaptic neurotransmitter. A negative multiplication effect can be achieved by transmitting signal inhibitors (i.e. oppositely charged ions) along the dendrite in response to the reception of synaptic neurotransmitters.
Soma – In a biological neuron, the soma acts as the summation function, seen in the above mathematical description. As positive and negative signals (exciting and inhibiting, respectively) arrive in the soma from the dendrites, the positive and negative ions are effectively added in summation, by simple virtue of being mixed together in the solution inside the cell's body.
Axon – The axon gets its signal from the summation behavior which occurs inside the soma. The opening to the axon essentially samples the electrical potential of the solution inside the soma. Once the soma reaches a certain potential, the axon will transmit an all-or-none signal pulse down its length. In this regard, the axon provides the connection from one artificial neuron to other artificial neurons.
Unlike most artificial neurons, however, biological neurons fire in discrete pulses. Each time the electrical potential inside the soma reaches a certain threshold, a pulse is transmitted down the axon. This pulsing can be translated into continuous values. The rate (activations per second, etc.) at which an axon fires converts directly into the rate at which neighboring cells get signal ions introduced into them. The faster a biological neuron fires, the faster nearby neurons accumulate electrical potential (or lose electrical potential, depending on the "weighting" of the dendrite that connects to the neuron that fired). It is this conversion that allows computer scientists and mathematicians to simulate biological neural networks using artificial neurons which can output distinct values (often from −1 to 1).
Research has shown that unary coding is used in the neural circuits responsible for birdsong production.[6][7] The use of unary in biological networks is presumably due to the inherent simplicity of the coding. Another contributing factor could be that unary coding provides a certain degree of error correction.[8]
The first artificial neuron was the Threshold Logic Unit (TLU), or Linear Threshold Unit,[9] first proposed by Warren McCulloch and Walter Pitts in 1943. The model was specifically targeted as a computational model of the "nerve net" in the brain.[10] As a transfer function, it employed a threshold, equivalent to using the Heaviside step function. Initially, only a simple model was considered, with binary inputs and outputs, some restrictions on the possible weights, and a more flexible threshold value. From the beginning it was noticed that any boolean function could be implemented by networks of such devices, which follows easily from the fact that one can implement the AND and OR functions and use them in the disjunctive or the conjunctive normal form. Researchers also soon realized that cyclic networks, with feedback through neurons, could define dynamical systems with memory, but most of the research concentrated (and still does) on strictly feed-forward networks because of the smaller difficulty they present.
One important and pioneering artificial neural network that used the linear threshold function was the perceptron, developed by Frank Rosenblatt. This model already considered more flexible weight values in the neurons, and was used in machines with adaptive capabilities. The representation of the threshold values as a bias term was introduced by Bernard Widrow in 1960 – see ADALINE.
In the late 1980s, when research on neural networks regained strength, neurons with more continuous shapes started to be considered. The possibility of differentiating the activation function allows the direct use of the gradient descent and other optimization algorithms for the adjustment of the weights. Neural networks also started to be used as a general function approximation model. The best known training algorithm called backpropagation has been rediscovered several times but its first development goes back to the work of Paul Werbos.[11][12]
Types of transfer functions
The transfer function (activation function) of a neuron is chosen to have a number of properties which either enhance or simplify the network containing the neuron. Crucially, for instance, any multilayer perceptron using a linear transfer function has an equivalent single-layer network; a non-linear function is therefore necessary to gain the advantages of a multi-layer network.[citation needed]
Below, u refers in all cases to the weighted sum of all the inputs to the neuron, i.e. for n inputs,
{\displaystyle u=\sum _{i=1}^{n}w_{i}x_{i}}
where w is a vector of synaptic weights and x is a vector of inputs.
Step function
The output y of this transfer function is binary, depending on whether the input meets a specified threshold, θ. The "signal" is sent, i.e. the output is set to one, if the activation meets the threshold.
{\displaystyle y={\begin{cases}1&{\text{if }}u\geq \theta \\0&{\text{if }}u<\theta \end{cases}}}
This function is used in perceptrons and often shows up in many other models. It performs a division of the space of inputs by a hyperplane. It is specially useful in the last layer of a network intended to perform binary classification of the inputs. It can be approximated from other sigmoidal functions by assigning large values to the weights.
Linear combination
In this case, the output unit is simply the weighted sum of its inputs plus a bias term. A number of such linear neurons perform a linear transformation of the input vector. This is usually more useful in the first layers of a network. A number of analysis tools exist based on linear models, such as harmonic analysis, and they can all be used in neural networks with this linear neuron. The bias term allows us to make affine transformations to the data.
See: Linear transformation, Harmonic analysis, Linear filter, Wavelet, Principal component analysis, Independent component analysis, Deconvolution.
Sigmoid
See also: Sigmoid function
The sigmoid, such as the logistic function, is a fairly simple non-linear function with an easily calculated derivative, which can be important when calculating the weight updates in the network. It thus makes the network more easily manipulable mathematically, and was attractive to early computer scientists who needed to minimize the computational load of their simulations. It was previously commonly seen in multilayer perceptrons. However, recent work has shown sigmoid neurons to be less effective than rectified linear neurons. The reason is that the gradients computed by the backpropagation algorithm tend to diminish towards zero as activations propagate through layers of sigmoidal neurons, making it difficult to optimize neural networks using multiple layers of sigmoidal neurons.
Rectifier
See also: Rectifier (neural networks)
In the context of artificial neural networks, the rectifier is an activation function defined as the positive part of its argument:
{\displaystyle f(x)=x^{+}=\max(0,x),}
where x is the input to a neuron. This is also known as a ramp function and is analogous to half-wave rectification in electrical engineering. This activation function was first introduced to a dynamical network by Hahnloser et al. in a 2000 paper in Nature[13] with strong biological motivations and mathematical justifications.[14] It was first demonstrated in 2011 to enable better training of deeper networks,[15] compared to the widely used activation functions prior to 2011, i.e., the logistic sigmoid (which is inspired by probability theory; see logistic regression) and its more practical[16] counterpart, the hyperbolic tangent.
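The definition f(x) = max(0, x) is a one-liner in any language; a minimal Python sketch:

```python
def relu(x):
    """Rectifier (ramp) activation: the positive part of its argument."""
    return max(0.0, x)

print(relu(3.2), relu(-1.7))  # 3.2 0.0
```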
Pseudocode algorithm
The following is a simple pseudocode implementation of a single TLU which takes boolean inputs (true or false), and returns a single boolean output when activated. An object-oriented model is used. No method of training is defined, since several exist. If a purely functional model were used, the class TLU below would be replaced with a function TLU with input parameters threshold, weights, and inputs that returned a boolean value.
class TLU defined as:
    data member threshold : number
    data member weights : list of numbers of size X
    function member fire(inputs : list of booleans of size X) : boolean defined as:
        variable T : number
        T ← 0
        for each i in 1 to X do
            if inputs(i) is true then
                T ← T + weights(i)
            end if
        end for each
        if T > threshold then
            return true
        else
            return false
        end if
    end function
end class
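The pseudocode translates directly into a runnable (illustrative) Python class; the threshold and weight values below are chosen to make the unit compute boolean AND:

```python
class TLU:
    """Threshold Logic Unit: fires (returns True) when the weighted
    sum of its boolean inputs exceeds the threshold."""

    def __init__(self, threshold, weights):
        self.threshold = threshold
        self.weights = weights

    def fire(self, inputs):
        # Sum the weights of the inputs that are active (True)
        t = sum(w for w, active in zip(self.weights, inputs) if active)
        return t > self.threshold

# A two-input TLU computing boolean AND
and_gate = TLU(threshold=1.5, weights=[1.0, 1.0])
print(and_gate.fire([True, True]))   # True
print(and_gate.fire([True, False]))  # False
```

As the history section notes, networks of such units suffice to implement any boolean function.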
Binding neuron
^ Maan, A. K.; Jayadevi, D. A.; James, A. P. (1 January 2016). "A Survey of Memristive Threshold Logic Circuits". IEEE Transactions on Neural Networks and Learning Systems. PP (99): 1734–1746. arXiv:1604.07121. Bibcode:2016arXiv160407121M. doi:10.1109/TNNLS.2016.2547842. ISSN 2162-237X. PMID 27164608. S2CID 1798273.
^ F. C. Hoppensteadt and E. M. Izhikevich (1997). Weakly connected neural networks. Springer. p. 4. ISBN 978-0-387-94948-2.
^ Gidon, Albert; Zolnik, Timothy Adam; Fidzinski, Pawel; Bolduan, Felix; Papoutsi, Athanasia; Poirazi, Panayiota; Holtkamp, Martin; Vida, Imre; Larkum, Matthew Evan (2020-01-03). "Dendritic action potentials and computation in human layer 2/3 cortical neurons". Science. 367 (6473): 83–87. Bibcode:2020Sci...367...83G. doi:10.1126/science.aax6239. PMID 31896716. S2CID 209676937.
^ Noel, Mathew Mithra; L, Arunkumar; Trivedi, Advait; Dutta, Praneet (2021-09-04). "Growing Cosine Unit: A Novel Oscillatory Activation Function That Can Speedup Training and Reduce Parameters in Convolutional Neural Networks". arXiv:2108.12943 [cs.LG].
^ Squire, L.; Albright, T.; Bloom, F.; Gage, F.; Spitzer, N., eds. (October 2007). Neural network models of birdsong production, learning, and coding (PDF). New Encyclopedia of Neuroscience: Elservier. Archived from the original (PDF) on 2015-04-12. Retrieved 12 April 2015.
^ Moore, J.M.; et al. (2011). "Motor pathway convergence predicts syllable repertoire size in oscine birds". Proc. Natl. Acad. Sci. USA. 108 (39): 16440–16445. Bibcode:2011PNAS..10816440M. doi:10.1073/pnas.1102077108. PMC 3182746. PMID 21918109.
^ Potluri, Pushpa Sree (26 November 2014). "Error Correction Capacity of Unary Coding". arXiv:1411.7406 [cs.IT].
^ Martin Anthony (January 2001). Discrete Mathematics of Neural Networks: Selected Topics. SIAM. pp. 3–. ISBN 978-0-89871-480-7.
^ Charu C. Aggarwal (25 July 2014). Data Classification: Algorithms and Applications. CRC Press. pp. 209–. ISBN 978-1-4665-8674-1.
^ Paul Werbos, Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences. PhD thesis, Harvard University, 1974
^ Werbos, P.J. (1990). "Backpropagation through time: what it does and how to do it". Proceedings of the IEEE. 78 (10): 1550–1560. doi:10.1109/5.58337. ISSN 0018-9219.
^ Hahnloser, Richard H. R.; Sarpeshkar, Rahul; Mahowald, Misha A.; Douglas, Rodney J.; Seung, H. Sebastian (2000). "Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit". Nature. 405 (6789): 947–951. Bibcode:2000Natur.405..947H. doi:10.1038/35016072. ISSN 0028-0836. PMID 10879535. S2CID 4399014.
^ R Hahnloser, H.S. Seung (2001). Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks. NIPS 2001. {{cite conference}}: CS1 maint: uses authors parameter (link)
^ Xavier Glorot, Antoine Bordes and Yoshua Bengio (2011). Deep sparse rectifier neural networks (PDF). AISTATS. {{cite conference}}: CS1 maint: uses authors parameter (link)
^ Yann LeCun, Leon Bottou, Genevieve B. Orr and Klaus-Robert Müller (1998). "Efficient BackProp" (PDF). In G. Orr; K. Müller (eds.). Neural Networks: Tricks of the Trade. Springer. {{cite encyclopedia}}: CS1 maint: uses authors parameter (link)
McCulloch, Warren S.; Pitts, Walter (1943). "A logical calculus of the ideas immanent in nervous activity". Bulletin of Mathematical Biophysics. 5 (4): 115–133. doi:10.1007/bf02478259.
Samardak, A.; Nogaret, A.; Janson, N. B.; Balanov, A. G.; Farrer, I.; Ritchie, D. A. (2009-06-05). "Noise-Controlled Signal Transmission in a Multithread Semiconductor Neuron". Physical Review Letters. 102 (22): 226802. Bibcode:2009PhRvL.102v6802S. doi:10.1103/physrevlett.102.226802. PMID 19658886.
Artifical [sic] neuron mimicks function of human cells
McCulloch-Pitts Neurons (Overview)
|
For a fish that starts life with a length of 1 cm and has a maximum length of 30 cm, the von Bertalanffy growth model predicts that the growth rate is {\displaystyle 29e^{-t}} cm/year, where {\displaystyle t} is the age of the fish in years. What is the average length of the fish over its first five years?
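One way to check the answer (an illustrative sketch, not part of the original problem): integrating the growth rate with L(0) = 1 gives L(t) = 30 − 29e^{−t}, and the average over [0, 5] follows from the closed-form integral, cross-checked numerically below.

```python
import math

# Growth rate dL/dt = 29*exp(-t) with L(0) = 1 cm gives
#   L(t) = 30 - 29*exp(-t),  which approaches the 30 cm maximum.
def length(t):
    return 30.0 - 29.0 * math.exp(-t)

# Average length over the first five years:
#   (1/5) * integral_0^5 L(t) dt = 30 - (29/5)*(1 - exp(-5))
avg_closed = 30.0 - (29.0 / 5.0) * (1.0 - math.exp(-5.0))

# Numerical cross-check with a midpoint Riemann sum
n = 100_000
dt = 5.0 / n
avg_numeric = sum(length((k + 0.5) * dt) for k in range(n)) * dt / 5.0

assert abs(avg_closed - avg_numeric) < 1e-6
print(round(avg_closed, 2))  # ~24.24 cm
```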
|
Jessica wants to make a spinner that has all of the following characteristics. Sketch a possible spinner for Jessica. Be sure to label each section of the spinner with a name and with its theoretical probability.
Blue, red, purple, and green are the only colors on the spinner.
It is half as likely to land on blue as to land on red.
It is three times as likely to land on purple as green.
There is a 50% probability of landing on either blue or red and a 50% probability of landing on either purple or green.
You know from characteristics 1 and 4 that blue + red and purple + green each equal one-half.
Use the eTool below to create the spinner with the characteristics outlined in the problem.
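The arithmetic behind the labels can be checked exactly (a sketch using Python's fractions, not part of the original exercise):

```python
from fractions import Fraction

half = Fraction(1, 2)

# Blue is half as likely as red, and blue + red = 1/2:
#   r/2 + r = 1/2  =>  red = 1/3, blue = 1/6
red = half * Fraction(2, 3)
blue = red / 2

# Purple is three times as likely as green, and purple + green = 1/2:
#   3g + g = 1/2  =>  green = 1/8, purple = 3/8
green = half / 4
purple = 3 * green

assert blue + red + purple + green == 1
print(blue, red, purple, green)  # 1/6 1/3 3/8 1/8
```

So a valid spinner has sections of 1/6 (blue), 1/3 (red), 3/8 (purple), and 1/8 (green).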
|
The value which 5% of non-missing observations are equal to or below. This is referred to as {\displaystyle {\hat {Q}}_{1}(p)} in Hyndman and Fan (1996). In the edge-case where exactly 5% of values are less than a specific value ({\displaystyle x_{i}}), the 5th percentile is computed as {\displaystyle 0.95\times x_{i}+0.05\times x_{i+1}}, where {\displaystyle x_{i+1}} is the next highest value; this method is sometimes known as the HAVERAGE method and is referred to as {\displaystyle {\hat {Q}}_{6}(p)} in Hyndman and Fan (1996).
The value which 25% of non-missing observations are equal to or below. This is referred to as {\displaystyle {\hat {Q}}_{1}(p)} in Hyndman and Fan (1996). In the edge-case where exactly 25% of values are less than a specific value ({\displaystyle x_{i}}), the 25th percentile is computed as {\displaystyle 0.75\times x_{i}+0.25\times x_{i+1}}, where {\displaystyle x_{i+1}} is the next highest value; this method is sometimes known as the HAVERAGE method and is referred to as {\displaystyle {\hat {Q}}_{6}(p)} in Hyndman and Fan (1996).
The value which 75% of non-missing observations are equal to or below. This is referred to as {\displaystyle {\hat {Q}}_{1}(p)} in Hyndman and Fan (1996). In the edge-case where exactly 75% of values are less than a specific value ({\displaystyle x_{i}}), the 75th percentile is computed as {\displaystyle 0.25\times x_{i}+0.75\times x_{i+1}}, where {\displaystyle x_{i+1}} is the next highest value; this method is sometimes known as the HAVERAGE method and is referred to as {\displaystyle {\hat {Q}}_{6}(p)} in Hyndman and Fan (1996).
The value which 95% of non-missing observations are equal to or below. This is referred to as {\displaystyle {\hat {Q}}_{1}(p)} in Hyndman and Fan (1996). In the edge-case where exactly 95% of values are less than a specific value ({\displaystyle x_{i}}), the 95th percentile is computed as {\displaystyle 0.05\times x_{i}+0.95\times x_{i+1}}, where {\displaystyle x_{i+1}} is the next highest value; this method is sometimes known as the HAVERAGE method and is referred to as {\displaystyle {\hat {Q}}_{6}(p)} in Hyndman and Fan (1996).
The value which 50% of non-missing observations are equal to or below. For example, if 51% of respondents have a value of 0 and 49% have a value of 1, then the median will be reported as 0. This definition is different from that employed in some other statistics and market research programs. This is referred to as $\hat{Q}_1(p)$ in Hyndman and Fan (1996). In the edge case where exactly 50% of values are less than a specific value $x_i$, the median is computed as $0.5\,x_i + 0.5\,x_{i+1}$, where $x_{i+1}$ is the next highest value; this method is sometimes known as the HAVERAGE method and is referred to as $\hat{Q}_6(p)$ in Hyndman and Fan (1996).
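As an illustration of the basic $\hat{Q}_1(p)$-style rule described above (my own sketch, not code from the software), the median is the smallest value with at least half of the observations at or below it:

```python
import math

def median_q1(values):
    """Smallest value with at least 50% of observations equal to or below it."""
    xs = sorted(values)
    return xs[math.ceil(len(xs) / 2) - 1]

# The example from the text: 51% of respondents at 0 and 49% at 1.
sample = [0] * 51 + [1] * 49
print(median_q1(sample))  # 0
```

Note this reports an observed value (0 here) rather than interpolating, matching the example above; the HAVERAGE interpolation applies only in the exact-50% edge case.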
Where the data is not weighted, or Weights and significance is computed using Kish's correction or Set to a specified value, the standard error of the proportion $p$ is

$\sigma_p = \sqrt{\dfrac{p(1-p)}{\sum_{i=1}^{n} w_i - b}}$

where $b = 1$ if Bessel's correction is selected and 0 otherwise, $w_i = 1$ where no weight has been applied, and $w_i$ is the calibrated weight if Weights and significance is set to Kish correction or Set to a specified value (see Weights, Effective Sample Size and Design Effects). Where Weights and significance has been set to Automatic or Taylor series linearization, Taylor series linearization is used to compute the standard error, and the Bessel correction is always assumed to be 1 (which will give a different result from those used in most statistical texts, which instead have no Bessel correction).
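A minimal sketch of the simple (non-Taylor-linearized) branch of the proportion formula above; the function name and the equal-weight example are my own illustration:

```python
import math

def se_proportion(p, weights, bessel=True):
    """sigma_p = sqrt(p * (1 - p) / (sum(w_i) - b)), with b = 1 when
    Bessel's correction is selected and 0 otherwise."""
    b = 1 if bessel else 0
    return math.sqrt(p * (1 - p) / (sum(weights) - b))

# Unweighted case: w_i = 1 for each of n respondents, so the denominator
# is n - 1 with Bessel's correction.
print(se_proportion(0.4, [1.0] * 100))
```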
$\sigma_{\bar{x}} = \sqrt{\dfrac{1}{\left(\sum_{i=1}^{n} w_i\right)\left(\sum_{i=1}^{n} w_i - b\right)} \sum_{i=1}^{n} w_i \left(x_i - \dfrac{1}{\sum_{i=1}^{n} w_i}\sum_{i=1}^{n} w_i x_i\right)^2}$

where $b = 1$ if Bessel's correction is selected and 0 otherwise, $w_i = 1$ where no weight has been applied, and $w_i$ is the calibrated weight if Design effect for weight is set to Kish correction or Set to a specified value (see Weights, Effective Sample Size and Design Effects). Where the Design effect for weight has been set to Automatic or Taylor series linearization, Taylor series linearization is used to compute the standard error.
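The weighted standard error of the mean can be sketched directly from the formula above (Kish-style weighting only; the Taylor series linearization branch is not reproduced here, and the function name is my own):

```python
import math

def weighted_se_mean(x, w, bessel=True):
    """sigma_xbar per the formula above: weighted sum of squared deviations
    from the weighted mean, divided by (sum w) * (sum w - b)."""
    b = 1 if bessel else 0
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ss = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    return math.sqrt(ss / (sw * (sw - b)))

# With unit weights and Bessel's correction this reduces to the
# textbook s / sqrt(n).
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(weighted_se_mean(data, [1.0] * len(data)))
```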
Retrieved from ‘https://wiki.q-researchsoftware.com/index.php?title=Statistics&oldid=58345#Standard_Error’
|
Talk:QB/d Bell.solitaire - Wikiversity
This page has two purposes: first, to allow and display discussion of each question, and second, to store the quiz in raw-script form.
QB/d_Bell.solitaire
Your solitaire deck uses ♥ ♠ ♣ and your answer cards are 4 and 5. You select 4♠, 4♣, and 5♥. If the questions were Q♠ and Q♣, you would__
+ lose 3 points - lose 1 point - win 1 point - win 3 points - be disqualified for cheating
different answers to the same question loses 3 points
- lose 3 points - lose 1 point + win 1 point - win 3 points - be disqualified for cheating
When you win, you get only one point (since winning is more likely than losing)
Your solitaire deck uses ♥ ♠ ♣ and your answer cards are 4 and 5. You select 4♠, 5♣, and 5♥. If the questions were Q♠ and Q♣, which of the following wins?
- K♥ and K♠ - K♠ and K♣ - K♥ and K♣ + two of these are true - none of these are true
The "losing pair" is 5♣ and 5♥, but both suits (♣ and ♥) must be turned up as question cards. Otherwise you win.
Your solitaire deck uses ♥ ♠ ♣ and your answer cards are 4 and 5. You select 4♠, 5♣, and 5♥. If the questions were Q♠ and Q♣, which of the following loses?
- K♥ and K♠ - K♠ and K♣ + K♥ and K♣ - two of these are true - none of these are true
The "losing pair" is ♣ and ♥; both must appear as question cards in order for you to lose.
If you play the solitaire game 6 times, you will on average win ___ times.
+ 4 - 2 - 3 - 6 - 5
The probability of winning is 2/3, and 2/3 × 6 = 4.
If you play the solitaire game 3 times, you will on average lose ___ times.
The probability of losing is 1/3, and 1/3 × 3 = 1.
If you play the solitaire game 6 times, you will on average lose ___ times.
- 4 + 2 - 3 - 6 - 5
The probability of losing is 1/3, and 1/3 × 6 = 2.
If you play the solitaire game 3 times, you will on average win ___ times.
The probability of winning is 2/3, and 2/3 × 3 = 2.
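The 2/3 win probability asserted in the explanations above can be checked by enumeration and by simulation. This sketch (mine, not part of the quiz) counts a draw as a win when the two question cards show different values:

```python
import itertools
import random

random.seed(1)

hand = {"spades": 4, "clubs": 5, "hearts": 5}   # 4 of spades, 5 of clubs, 5 of hearts

# Exact count: of the 3 equally likely suit pairs, only clubs+hearts
# shows matching values and loses.
exact = sum(hand[a] != hand[b]
            for a, b in itertools.combinations(hand, 2)) / 3

def play(hand, trials=60_000):
    """Draw two of the three suits as question cards; differing values win."""
    suits = list(hand)
    wins = 0
    for _ in range(trials):
        a, b = random.sample(suits, 2)
        wins += hand[a] != hand[b]
    return wins / trials

print(exact, play(hand))  # exact value is 2/3; the simulation lands close to it
```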
Raw script
t QB/d_Bell.solitaire
! CC0 user:Guy vandegrift q1
? Your solitaire deck uses ♥ ♠ ♣ and your answer cards are 4 and 5. You select 4♠, 4♣, and 5♥. If the questions were Q♠ and Q♣, you would__
+ lose 3 points
- lose 1 point
- win 1 point
- win 3 points
- be disqualified for cheating
$different answers to the same question loses 3 points
- lose 3 points
+ win 1 point
$When you win, you get only one point (since winning is more likely than losing)
? Your solitaire deck uses ♥ ♠ ♣ and your answer cards are 4 and 5. You select 4♠, 5♣, and 5♥. If the questions were Q♠ and Q♣, which of the following wins?
- K♥ and K♠
- K♠ and K♣
- K♥ and K♣
+ two of these are true
$ The "losing pair" is 5♣ and 5♥, but both suits (♣ and ♥) must be turned up as question cards. Otherwise you win.
? Your solitaire deck uses ♥ ♠ ♣ and your answer cards are 4 and 5. You select 4♠, 5♣, and 5♥. If the questions were Q♠ and Q♣, which of the following loses?
+ K♥ and K♣
- two of these are true
$ The "losing pair" is ♣ and ♥; both must appear as question cards in order for you to lose.
? If you play the solitaire game 6 times, you will on average win ___ times.
$ The probability of winning is 2/3, and 2/3 × 6 = 4.
? If you play the solitaire game 3 times, you will on average lose ___ times.
$ The probability of losing is 1/3, and 1/3 × 3 = 1.
? If you play the solitaire game 6 times, you will on average lose ___ times.
$ The probability of losing is 1/3, and 1/3 × 6 = 2.
? If you play the solitaire game 3 times, you will on average win ___ times.
$ The probability of winning is 2/3, and 2/3 × 3 = 2.
Retrieved from "https://en.wikiversity.org/w/index.php?title=Talk:QB/d_Bell.solitaire&oldid=1918079"
|
poltodiffeq - Maple Help
poltodiffeq
determine the differential equation satisfied by a polynomial in holonomic functions
poltodiffeq(P, listdiffeq, list_unknowns, y(z))
P - polynomial in z and y1(z), y2(z), ... and possibly their derivatives and repeated derivatives
listdiffeq - list containing, for each of y1(z), y2(z), ..., either a linear differential equation it satisfies or a set containing the equation together with initial conditions
list_unknowns - list of function names [y1(z), y2(z), ...]
The poltodiffeq(P, listdiffeq, list_unknowns, y(z)) command returns a linear differential equation satisfied by the polynomial P.
If y1(z), y2(z), ... are holonomic function solutions of listdiffeq[1], listdiffeq[2], ..., the poltodiffeq function returns a linear differential equation satisfied by
P\left(z,\mathrm{y1}\left(z\right),...\right)
\mathrm{with}\left(\mathrm{gfun}\right):
\mathrm{Sin}≔{\mathrm{diff}\left(\mathrm{y1}\left(z\right),z,z\right)=-\mathrm{y1}\left(z\right),\mathrm{y1}\left(0\right)=0,\mathrm{D}\left(\mathrm{y1}\right)\left(0\right)=1}:
\mathrm{Cos}≔{\mathrm{diff}\left(\mathrm{y2}\left(z\right),z,z\right)=-\mathrm{y2}\left(z\right),\mathrm{y2}\left(0\right)=1,\mathrm{D}\left(\mathrm{y2}\right)\left(0\right)=0}:
\mathrm{poltodiffeq}\left({\mathrm{y1}\left(z\right)}^{2}+{\mathrm{y2}\left(z\right)}^{2},[\mathrm{Sin},\mathrm{Cos}],[\mathrm{y1}\left(z\right),\mathrm{y2}\left(z\right)],y\left(z\right)\right)
$\left\{\dfrac{d^{3}}{dz^{3}}\,y(z) + 4\,\dfrac{d}{dz}\,y(z),\; y(0)=1,\; \mathrm{D}(y)(0)=0,\; \mathrm{D}^{(2)}(y)(0)=0\right\}$
\mathrm{poltodiffeq}\left({\mathrm{y1}\left(z\right)}^{2}+{\mathrm{diff}\left(\mathrm{y1}\left(z\right),z\right)}^{2},[\mathrm{Sin}],[\mathrm{y1}\left(z\right)],y\left(z\right)\right)
$\left\{\dfrac{d}{dz}\,y(z),\; y(0)=1\right\}$
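The second result says that y(z) = y1(z)² + y1'(z)² = sin²z + cos²z satisfies y' = 0 with y(0) = 1, i.e. it is identically 1. A quick numeric cross-check outside Maple (an illustration only, not part of the gfun documentation):

```python
import math

def y(z):
    # P(z, y1) = y1^2 + (y1')^2 with y1 = sin, so y = sin^2 + cos^2
    return math.sin(z) ** 2 + math.cos(z) ** 2

# y' = 0 together with y(0) = 1 implies y is identically 1.
for z in (0.0, 0.7, 2.5, -4.0):
    assert abs(y(z) - 1.0) < 1e-12

dy = (y(1.0 + 1e-6) - y(1.0 - 1e-6)) / 2e-6   # central-difference derivative
print(dy)  # numerically ~0
```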
gfun[`diffeq+diffeq`]
gfun[`diffeq*diffeq`]
gfun[poltorec]
|
classify_function - Maple Help
return the class(es) to which the given mathematical function belongs
FunctionAdvisor(classify_function, math_function)
literal name; 'classify_function'
The FunctionAdvisor(classify_function, math_function) command returns the class(es) to which the given function belongs.
\mathrm{FunctionAdvisor}\left(\mathrm{classify_function},\mathrm{BesselK}\right)
BesselK belongs to the subclass "Bessel_related" of the class "0F1" and so, in principle, it can be related to various of the 26 functions of those classes - see FunctionAdvisor( "Bessel_related" ); and FunctionAdvisor( "0F1" );
\textcolor[rgb]{0,0,1}{\mathrm{Bessel_related}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{0F1}}
\mathrm{FunctionAdvisor}\left(\mathrm{classify_function},\mathrm{tan}\right)
tan belongs to the subclass "trig" of the class "elementary" and so, in principle, it can be related to various of the 26 functions of those classes - see FunctionAdvisor( "trig" ); and FunctionAdvisor( "elementary" );
\textcolor[rgb]{0,0,1}{\mathrm{trig}}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{elementary}}
\mathrm{FunctionAdvisor}\left(\mathrm{classify_function},\mathrm{JacobiTheta1}\right)
JacobiTheta1 belongs to the class "Jacobi_related" and so, in principle, it can be related to various of the 18 functions of that class - see FunctionAdvisor( "Jacobi_related" );
\textcolor[rgb]{0,0,1}{\mathrm{Jacobi_related}}
|
Modular Identities and Explicit Evaluations of a Continued Fraction of Ramanujan
Nipen Saikia, "Modular Identities and Explicit Evaluations of a Continued Fraction of Ramanujan", International Journal of Mathematics and Mathematical Sciences, vol. 2012, Article ID 694251, 10 pages, 2012. https://doi.org/10.1155/2012/694251
Nipen Saikia 1
1Department of Mathematics, Rajiv Gandhi University, Rono Hills, Doimukh-791112, Arunachal Pradesh, India
Academic Editor: Stefaan Caenepeel
We study a new continued fraction of Ramanujan. We prove its modular identities and give some explicit evaluations.
Throughout the paper, we assume . As usual, for positive integers and any complex number , we write
Ramanujan's general theta-function is defined by where . After Ramanujan, we define Ramanujan recorded many -continued fractions and some of their explicit values in his second notebook [1] and in his lost notebook [2]. The following beautiful continued fraction identity was recorded by Ramanujan in his second notebook and can be found in [3, p. 11, Entry 11]: where either , , and are complex numbers with , or , , and are complex numbers with for some integer . Several elegant -continued fractions have representations as -products and some of them can be expressed in terms of Ramanujan’s theta-functions. An account of this can be found in Chapter 32 of Berndt's book [4] (also see [5]). The most famous one, of course, is the Rogers-Ramanujan continued fraction defined by The continued fraction has a very beautiful and extensive theory, almost all of which was developed by Ramanujan. In particular, his lost notebook [2] contains several results on the Rogers-Ramanujan continued fraction. We refer to the papers by Berndt et al. [6] and Kang [7, 8] for proofs of many of these theorems.
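For reference, the Rogers-Ramanujan continued fraction is standardly written, for $|q| < 1$, as:

```latex
R(q) \;=\; \cfrac{q^{1/5}}{1 + \cfrac{q}{1 + \cfrac{q^{2}}{1 + \cfrac{q^{3}}{1 + \cdots}}}}\,,
\qquad |q| < 1 .
```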
In this paper, we examine another continued fraction of Ramanujan, which arises from (1.7) and is defined by Note that, replacing by and then setting and in (1.7), we obtain (1.9).
In Section 2, we record some preliminary results. Section 3 is devoted to prove some modular identities for the continued fraction . Finally, in Section 4, we give some explicit evaluations of .
We complete this introduction by defining Ramanujan’s modular equation from Berndt’s book [3]. The complete elliptic integral of the first kind is defined by where , denotes the ordinary or Gaussian hypergeometric function. The number is called the modulus of , and is called the complementary modulus. Let , and denote the complete elliptic integrals of the first kind associated with the moduli , , , and , respectively. Suppose that the equality holds for some positive integer . Then, a modular equation of degree is a relation between the moduli and which is implied by (1.11). If we set we see that (1.11) is equivalent to the relation . Thus, a modular equation can be viewed as an identity involving theta-functions at the arguments and . Ramanujan recorded his modular equations in terms of and , where and . We say that has degree over . The multiplier connecting and is defined by where .
In this section, we record some results that will be used in the subsequent sections.
Lemma 2.1 (see [3, p. 124, Entry 12(i) and (ii)]). One has
Lemma 2.2 (see [3, p. 214, Entry 24(iii)]). If has degree 2 over , then
Lemma 2.3 (see [3, p. 230, Entry 5(ii)]). If has degree 3 over , then
Lemma 2.4 (see [3, p. 215, (24.22)]). If has degree 4 over , then
Lemma 2.5 (see [3, p. 280-281, Entry 13(v) and (vi)]). If has degree 5 over , then
Lemma 2.6 (see [3, p. 314, Entry 19(i)]). If has degree 7 over , then
3. Modular Identities for
In this section, we use Ramanujan's modular equations to prove certain modular identities for .
Theorem 3.1. One has
Proof. Replacing by and then setting and in (1.7) and simplifying, we obtain Employing (1.6) and (1.9) in (3.2) and simplifying, we complete the proof.
Corollary 3.2. One has
Proof. Dividing the numerator and denominator on the right-hand side of the identity in Theorem 3.1 by and simplifying, we complete the proof.
Theorem 3.3. One has where has degree n over .
Proof. We employ Lemma 2.1 in Corollary 3.2 to complete the proof.
Theorem 3.4. Let and . Then,
Proof. Replacing by in Corollary 3.2, we obtain Now, eliminating between (3.6) and Corollary 3.2 and simplifying, we complete the proof.
Proof. Eliminating in (2.2) and then simplifying, we deduce that From Theorem 3.3(i), we have Now, employing Theorem 3.3(ii) with and (3.9) in (3.8) and factorizing using Mathematica, we obtain It can be seen that the first and the last factors in (3.10) do not vanish for . So, by identity theorem, we have
Proof. From Lemma 2.3, we obtain From Theorem 3.3, we deduce that where has degree 3 over .
Employing (3.14) in (3.13) and factorizing using Mathematica, we arrive at It can be seen that the second factor of (3.15) does not vanish for , so by identity theorem, we have
Proof. Squaring the modular equation in Lemma 2.4 and simplifying, we obtain From Theorem 3.3(i), we have Now, employing Theorem 3.3(ii) with and (3.19) in (3.18) and simplifying, we complete the proof.
Proof. From Theorem 3.3, we obtain where has degree 5 over .
Employing (3.21) in (2.5), we find that respectively.
Eliminating between (3.22) and (3.23) and simplifying, we deduce that Substituting for and from (3.21) in (3.24) and simplifying, we arrive at
Proof. From Lemma 2.6, we obtain Again, from Theorem 3.3, we deduce that where has degree 7 over .
Employing (3.28) in (3.27) and simplifying using Mathematica, we arrive at
4. Explicit Evaluations of
In this section, we establish some general theorems for the explicit evaluations of the continued fraction and give examples.
For , Ramanujan's two class invariants and are defined by The class invariants and are connected by the relation [4, p. 187, Entry 2.1]:
The singular modulus is defined by , where is a positive integer and is the unique positive number between 0 and 1 satisfying
Class invariants and singular moduli are intimately related by the equalities [4, p. 185, ]: An account of Ramanujan's class invariants and singular moduli can be found in Chapter 34 of Berndt's book [4].
Proof. We set in Theorem 3.3(i) and use the definition of singular moduli and simplifying, we complete the proof.
In the scattered places of his first notebook [1], Ramanujan calculated over 30 singular moduli . See Chapter 34 of Berndt's book [4] for details. Thus, one can use Theorem 4.1 to find the values of if the corresponding values of are known. For example, from [4, p. 281, Theorem 9.2], we note that Employing (4.6) in Theorem 4.1, we calculate Many other values of can be computed by using the known values of .
Proof. Dividing the numerator and denominator of the right-hand side of Theorem 3.1 and employing (1.6), we obtain Setting , employing the definitions of and from (4.1) in (4.9) and simplifying, we obtain Substituting for from (4.2) in (4.10) and simplifying, we complete the proof.
Theorem 4.2 implies that if we know the values of and for any positive number , then corresponding values of can easily be calculated. Saikia [9] evaluated several values of and for positive number . For example, noting from [9, Theorem 3.5], we have Employing (4.11) in Theorem 4.2, we obtain Many other values of can be determined by using the values of and evaluated in [9].
S. Ramanujan, Notebooks, vol. 1,2, Tata Institute of Fundamental Research, Bombay, India, 1957.
S. Ramanujan, The Lost Notebook and Other Unpublished Papers, Narosa, New Delhi, India, 1988.
B. C. Berndt, Ramanujan's Notebooks, part 3, Springer, New York, NY, USA, 1991. View at: Publisher Site | Zentralblatt MATH
B. C. Berndt, “Flowers which we cannot yet see growing in Ramanujan's garden of hypergeometric series, elliptic functions, and q's,” in Special Functions 2000: Current Perspective and Future Directions, J. Bustoz, M. E. H. Ismail, and S. K. Suslov, Eds., vol. 30, pp. 61–85, Kluwer Academic, Dordrecht, The Netherlands, 2001. View at: Publisher Site | Google Scholar
B. C. Berndt, S. S. Huang, J. Sohn, and S. H. Son, “Some theorems on the Rogers-Ramanujan continued fraction in Ramanujan's lost notebook,” Transactions of the American Mathematical Society, vol. 352, no. 5, pp. 2157–2177, 2000. View at: Publisher Site | Google Scholar
S.-Y. Kang, “Some theorems on the Rogers-Ramanujan continued fraction and associated theta function identities in Ramanujan's lost notebook,” The Ramanujan Journal, vol. 3, no. 1, pp. 91–111, 1999. View at: Publisher Site | Google Scholar | Zentralblatt MATH
S.-Y. Kang, “Ramanujan's formulas for the explicit evaluation of the Rogers-Ramanujan continued fraction and theta-functions,” Acta Arithmetica, vol. 90, no. 1, pp. 49–68, 1999. View at: Google Scholar | Zentralblatt MATH
N. Saikia, “Ramanujan's modular equations and Weber-Ramanujan's class invariants Gn and gn,” Bulletin of Mathematical Sciences, vol. 2, no. 1, pp. 205–223, 2012. View at: Google Scholar
Copyright © 2012 Nipen Saikia. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
Seismic response analysis on shear lag effect of continuous curved box girder with three spans | JVE Journals
Hailin Lu1 , Heng Cai2 , Hongyin Yang3 , Kaiyi Xue4
1, 2, 3, 4School of Resource and Civil engineering, Wuhan Institute of Technology, Wuhan, China
Received 7 March 2017; accepted 12 March 2017; published 30 May 2017
The shear lag effect of a continuous curved box girder with three spans under seismic excitation is studied in this paper. Firstly, a spatial shell finite element model is built in ANSYS, and the EL-Centro seismic wave is chosen as the seismic excitation. Secondly, the shear lag effect at different cross sections is investigated with dynamic time-history analysis. The results show that under seismic excitation there is a prominent shear lag effect in the continuous curved box girder: the maximum shear lag coefficient is 3.02, so the shear lag effect is severe; the shear lag effect at mid-span cross sections is more prominent than at support cross sections; and the inside peak shear lag coefficients are generally greater than the outside ones. Finally, the numerical results are compared with experimental results from a vibration table test and show good agreement.
Keywords: continuous curved box girder, shear lag, seismic excitation, dynamic time-history analysis, finite element.
Thin-walled curved box girders are widely used in urban overpasses, viaducts and long-span bridge structures because of their advantages. In a thin-walled box girder, the bending normal stress is non-uniformly distributed along the flange width direction because of shear deformation. This phenomenon, known as the shear lag effect [1], is an important factor influencing the safety and durability of bridge structures. Many researchers have indicated that shear lag will not only cause local stress concentration and cracks [2, 3] but also impair the bending stiffness of the beam, which leads to increased vertical deflection [4]; the primary beam theory therefore cannot meet the demands of engineering design. In recent years, many scholars have studied the shear lag effect of curved box girders under dynamic loads. Hugo C. investigated the vibration characteristics of curved box girder bridges under vehicle loads by long-term field testing [5]; Wangbao Zhou deduced the governing differential equations and corresponding boundary conditions of a steel-concrete composite continuous box girder and put forward a method for calculating its vibration characteristics [6]. These studies, however, are limited to the influence of shear lag and shear deformation on free vibration characteristics, and there seem to be few studies of the dynamic response. In view of this, the finite element method is used in the present work, and the shear lag effect of a continuous curved box girder with three spans is studied by dynamic time-history analysis.
2. Continuous curved box girder model
2.1. Calculating parameters
The material properties and structural dimensions of the curved box girder are given in the literature [7]. Fig. 1 shows the cross-section type. Mass density $\rho = 1180\ \mathrm{kg/m^3}$; structural parameters: $t_1 = t_2 = t_w = h = 100$ mm, $b_1 = b_2 = 100$ mm; curvature radius $r = 2$ m; curvature angle $\theta = 30^{\circ}$. The material parameters are as follows: Young's modulus $E = 3000$ MPa, Poisson's ratio $\mu =$
Fig. 1. Cross section type
In this paper, a 3D finite element model using 4-node shell elements is built for the continuous curved box girder with three spans. The finite element model of the curved box girder is shown in Fig. 2; there are 1152 elements and 1176 nodes. As for the boundary conditions, $U_x$, $U_y$, $U_z$ and $U_x$, $U_z$ are restricted at the junctions between web and bottom flange on one side, while $U_y$, $U_z$ and $U_z$ are restricted at the two middle sides and the other side, respectively.
Fig. 3. Plate form of continuous curved box girder
2.3. Seismic excitation
The EL-Centro seismic wave (north-south component), recorded at the second site, is chosen as the seismic excitation in this paper. Its peak value and acceleration are revised according to the basic earthquake protection intensity, as shown in Eq. (1); Fig. 4 shows the EL-Centro seismic wave:

$a'(t) = \left(\dfrac{A_0}{A}\right) a(t),$

where $a(t)$ is the acceleration of the original seismic wave at time $t$, $A$ is the corresponding peak value, $a'(t)$ is the revised acceleration, and $A_0$ is the maximum ground horizontal acceleration.
Considering that the main causes of shear lag are vertical bending deformation and in-plane shear deformation, the peak accelerations in the different directions are set to 1 (longitudinal direction) and 0.65 (vertical direction).
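Eq. (1) is a simple peak-scaling of the record. A sketch of the operation (the record values below are made up for illustration, not the EL-Centro data):

```python
def scale_record(accel, target_peak):
    """a'(t) = (A0 / A) * a(t): scale a record so its peak matches A0."""
    peak = max(abs(a) for a in accel)
    factor = target_peak / peak
    return [factor * a for a in accel]

# Illustrative record only; the vertical direction is scaled to a 0.65
# peak relative to 1.0 longitudinally, as described above.
record = [0.05, -0.21, 0.34, -0.12, 0.08]
longitudinal = scale_record(record, 1.0)
vertical = scale_record(record, 0.65)
print(max(abs(a) for a in vertical))
```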
Fig. 4. EL-centro seismic wave
3. Shear lag effect analysis
The shear lag coefficient is defined in Eq. (4) [1]:

$\lambda = \dfrac{\sigma}{\overline{\sigma}},$

where $\sigma$ is the real stress value and $\overline{\sigma}$ is the stress calculated by primary beam theory.
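Given stresses sampled across a flange width, the coefficients are pointwise ratios. In this sketch (my own illustration), the primary-beam stress is approximated by the width-averaged stress, which is an assumption of the example rather than the paper's procedure:

```python
def shear_lag_coefficients(stresses):
    """lambda_i = sigma_i / sigma_bar; here sigma_bar is approximated by the
    average stress across the flange (an assumption of this sketch)."""
    sigma_bar = sum(stresses) / len(stresses)
    return [s / sigma_bar for s in stresses]

# Made-up flange stress samples (MPa), peaking at the web rib.
sigma = [0.8, 1.1, 1.6, 1.1, 0.8]
lam = shear_lag_coefficients(sigma)
print(lam)  # values above 1 mark positive shear lag near the rib
```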
Fig. 5 shows the shear lag coefficients at the mid-span cross sections. On the upper flange, there is a prominent shear lag effect at the mid-span 1 and mid-span 3 cross sections: the coefficients reach their peaks at the ribs and decrease towards the two sides, with a maximum of 2.656; the shear lag is severe and cannot be neglected. For the mid-span 2 cross section, the shear lag coefficients are generally less than 1, presenting a negative shear lag effect. On the bottom flange, the shear lag coefficients increase from outside to inside at both the mid-span 1 and mid-span 2 cross sections, with maxima of 1.528 and 1.537, respectively.
Shear lag coefficients at the support cross sections are shown in Fig. 6. As at the mid-span 1 and mid-span 3 cross sections, there is also a prominent shear lag effect on the upper flange, and the inside peak shear lag coefficient is greater than the outside one, which means the inside upper flange is more easily damaged under seismic action. For the bottom flange, there is no clear rule.
Fig. 5. Shear lag coefficient at the mid-spans
a) Upper flange
b) Bottom flange
Fig. 6. Shear lag coefficient at the supports
The shear lag effect reflects the non-uniformity of the stress distribution. In order to compare the degree of stress concentration between mid-span cross sections and support cross sections, the peak shear lag coefficients of the upper flange are analyzed; the results are shown in Table 1.
From Table 1, it can be seen that for the continuous curved box girder, shear lag effects at the mid-span cross sections are more prominent than at the support cross sections, so full attention should be paid to them in engineering design.
In order to verify the correctness and reliability of the finite element model, the continuous curved box girder is changed to a cantilever curved box girder so that the results can be compared with those of the shaking table test [7]. The shear lag coefficients at mid-span are shown in Fig. 7; the numerical analysis from ANSYS agrees well with the experiment.
Table 1. Peak shear lag coefficients of the upper flange at different cross sections
Mid-span 1
Mid-span 3
Fig. 7. Comparison of experiment and numeric results
The following conclusions are drawn:
1) There is a prominent shear lag effect in the single-box single-cell continuous curved box girder under seismic excitation; this stress concentration phenomenon cannot be neglected.
2) The shear lag effect at mid-span cross sections is more prominent than at support cross sections.
3) Inside peak shear lag coefficients are greater than outside ones, which means that for a continuous curved box girder the inside is more easily damaged than the outside under seismic excitation.
4) The analysis results in this paper agree well with the experiment, which verifies the validity of the finite element model.
The present work is supported by the National Science Fund of China (No. 51378404) and Innovation Fund of Wuhan Institute of Technology (CX2016036), these supports are sincerely acknowledged.
Xiang Haifan, Fan Lichu Advanced Theory of Bridge Structures. Second Edition, China Communication Press, Beijing, 2008, (in Chinese). [Search CrossRef]
Sun Lu, Wang Wenlei, Wu Guoqi, Li Yaoxiong Analyses of the causes for crack on the end side of box girder for railway passenger dedicated line and its construction control. Journal of Railway Engineering Society, Vol. 10, 2007, p. 46-51, (in Chinese). [Search CrossRef]
Qiao Peng, Zhou Xuhong, Di Jin Shear lag effect analysis of flat steel curved box girder. Journal of Traffic and Transportation Engineering, Vol. 14, Issue 4, 2014, p. 36-44, (in Chinese). [Search CrossRef]
Zhang Yuanhai, Li Lin, Lin Lixia, Sun Xuexian Beam-segment finite element analysis on shear lag effect of thin-walled box girder adopting additional deflection as generalized displacement. China Civil Engineering Journal, Vol. 46, Issue 10, 2013, p. 100-107, (in Chinese). [Search CrossRef]
Gomeza Hugo C., Fanning Paul J., Fenga Maria Q., Lee Sungchil Testing and long-term monitoring of a curved concrete box girder bridge. Engineering Structures, Vol. 33, Issue 10, 2011, p. 2861-2869. [Search CrossRef]
Wang Bao Zhou, Li Zhong Jiang, Zhi Wu Yu Analysis of free vibration characteristic of steel-concrete composite box-girder considering shear lag and slip. Journal of Central South University, Vol. 20, Issue 9, 2013, p. 2570-2577. [Search CrossRef]
Lu H. L., Peng T. F., Huang M. S., Zhu S. B. Experimental study on shear lag of curved box girder under earthquake excitation. Vibroengineering Procedia, Vol. 5, 2015, p. 417-422. [Search CrossRef]
|
Building a World-Class CIFAR-10 Model From Scratch
In this post, I walk through how to build and train a world-class deep learning image recognition model. Deep learning models tout amazing results in competitions, but it can be difficult to go from a dense, technical research paper to actual working code. Here I take one of those papers, break down the important steps, and translate the words on the page into code you can run to get near state-of-the-art results on a popular image recognition benchmark.
The problem we will be solving is one of the most common in deep learning: image recognition. Here, our model is presented with an image (typically raw pixel values) and is tasked with outputting the object inside that image from a set of possible classes.
The dataset we will be using is CIFAR-10, which is one of the most popular datasets in current deep learning research. CIFAR-10 is a collection of 60,000 images, each one containing one of 10 potential classes. These images are tiny: just 32x32 pixels (for reference, an HDTV will have over a thousand pixels in width and height). This means the resulting images are grainy and it’s potentially difficult to determine exactly what’s in them, even for a human. A few examples are depicted below.
Images of a boat and frog from the CIFAR-10 dataset
The training set consists of 50,000 images, and the remaining 10,000 are used for evaluating models. At the time of this writing, the best reported model is 97.69% accurate on the test set. The model we will create here won’t be quite as accurate, but still very impressive.
The architecture we will use is a variation of residual networks known as a wide residual network. We’ll use PyTorch as our deep learning library, and automate some of the data loading and processing with the Fast.ai library. But first, let’s dig into the architecture of ResNets and the particular variant we’re interested in.
The Residual Network
Deep neural networks function as a stack of layers. The input moves from one layer to the next, with some kind of transformation (e.g. convolution) followed by a non-linear activation function (e.g. ReLU). With the exception of RNNs, this process of pushing inputs directly through the network one layer at a time was standard practice in top-performing deep neural networks.
Then, in 2015, Kaiming He and his colleagues at Microsoft Research introduced the Residual Network architecture. In a residual network (resnet, for short), activations are able to “skip” past layers at certain points and be summed with the activations of the layers they skipped. These skip connections form what is typically referred to as a residual block. The image below depicts one block in a resnet.
The structure of a resnet block. Inputs are allowed to skip past layers and be summed up with the activations of the layers they skipped.
Architectures built by stacking together residual blocks (i.e. resnets) train much more efficiently and to lower error. The original paper explores various depths, and its authors are able to train networks of over 1,200 layers; before, it was difficult to train networks with just 19 layers. One potential reason resnets allow for deeper networks is that they let the gradient signal from backpropagation travel further back through the network, using the skip connections like a highway to get closer to the input layer. In 2015, a residual network won the ImageNet competition with 3.57% test error.
The authors explain the intuition (and the name) of the residual block as a recharacterization of the learning process. Consider just a few layers, like those that make up a single residual block. There should be some ideal mapping from the block’s inputs to its output; let’s call this mapping H(x). Typical learning tries to derive this mapping directly: that is, find an F(x, W) similar to our ideal H(x). But we can change this and instead allow F to approximate the residual, or the difference, between H(x) and x. The ideal mapping is then recovered by adding the input back in: H(x) = F(x) + x, which is the definition of our residual block.
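The reformulation is easy to see in a toy sketch (plain Python, not the network code): if a block computes some function F and adds its input back in, then learning the identity mapping just means driving F toward zero.

```python
# Toy illustration of the residual reformulation H(x) = F(x) + x:
# the block only has to learn the residual F; the identity comes free.
def residual_block(x, f):
    return f(x) + x

# Ideal mapping is the identity -> the block learns F(x) = 0:
print(residual_block(3.0, lambda x: 0.0))  # 3.0

# Ideal mapping is H(x) = 2x -> the block learns the residual F(x) = x:
print(residual_block(3.0, lambda x: x))    # 6.0
```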
The Wide ResNet
Since their introduction, resnets have become a standard choice for deep learning architectures dealing with computer vision. Several variations of the residual blocks and architectures presented in the original paper have been explored, one of which currently holds the state of the art test accuracy for CIFAR-10.
The variation we are going to implement here is the wide residual network. Here, the authors point out that the depth of resnets was the focal point in their introduction, rather than the width (that is, the number of convolutional filters in the layers). They explore some different kinds of resnet blocks, and show that shallow and wide can be faster and more accurate than the original deep and thin.
Comparison of the different block structures in vanilla and wide resnets. The two on the left are those found in a traditional resnet: a basic block of two thin 3x3 convolutions and a "bottleneck" block. On the right, the wide resnet uses blocks similar to the original basic block, but much wider convolutions (i.e. more filters). There may or may not be dropout between the convolutions to regularize the model.
The Structure of a Wide ResNet
The wide resnet consists of three main architectural components:
An initial convolution. This is done to pull out any high level features and help upsample our initial image from only three channels to a high-dimensional convolutional activation.
A number of “groups”. Each group will consist of a set of N residual blocks. More on this in a moment.
A pooling and linear layer. This will downsample our convolutions and convert them into class predictions.
The real meat of the wide resnet will lie in the groups: that’s where all of our residual blocks will live. The original paper always used three groups in their experiments, but we will write our code to be modular to the number of groups.
Outline of the wide resnet architecture. `conv1` is the initial convolution and `conv2` through `conv4` make up the three groups, each consisting of \(N\) blocks. In this case, the blocks are the wide 3x3 basic blocks, where the width is initially 16\(\cdot k\) and doubled after each group. Every group after the first also downsamples to reduce the width and height of the convolutional activations.
There are a few considerations that will become key to implementing the blocks in each of our groups:
The first block of each group after the first will downsample the size of the activations. This means that a 32x32 activation map will shrink to 16x16. We’ll do this by setting the stride of the first convolution in those blocks to 2.
Each group will double the number of filters from the previous group.
The first block of each group will need to have a convolution in its shortcut to get it to the right dimensions for the addition operation.
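The downsampling arithmetic behind the first consideration is worth checking. For a convolution over an n-by-n input with kernel size k, padding p, and stride s, the output size is floor((n + 2p − k)/s) + 1; a quick sketch with values matching our 3x3, padding-1 convolutions:

```python
def conv_out_size(n, k=3, pad=1, stride=1):
    # floor((n + 2*pad - k) / stride) + 1
    return (n + 2 * pad - k) // stride + 1

print(conv_out_size(32, stride=1))  # 32 -- stride 1 preserves width/height
print(conv_out_size(32, stride=2))  # 16 -- stride 2 halves it
print(conv_out_size(16, stride=2))  # 8
```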
So how wide will these convolutions be? Our initial convolution will turn our three channels into 16. The first group will multiply the number of channels by the widening factor k, and every subsequent group will double the width of the convolutions. Essentially, the i-th group will have (16 \cdot k)\cdot 2^i filters in its convolutions (where i starts from 0).
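For the configuration we will eventually train (three groups, k = 10), this formula gives widths of 160, 320, and 640; a one-liner to check:

```python
def group_widths(n_grps=3, k=10, first_width=16):
    # the i-th group has (16 * k) * 2**i filters, with i starting from 0
    return [first_width * k * 2 ** i for i in range(n_grps)]

print(group_widths())  # [160, 320, 640]
```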
Implementing the Wide ResNet
Now that the architecture is all settled, it’s time to write some code. I’m going to implement this in PyTorch, with a little help from the fastai library. Fastai is a fantastic library for quickly building high quality models. It’s also really helpful for automating the more mundane aspects of writing deep learning code, like building data loaders and training loops, which is what I’ll use it for here.
The implementation will be done piece by piece: starting with the basic block, then fleshing out the whole network, and finally building our data pipeline and training loop. You can find the complete implementation here.
Note: Some of this code is not going to be as tidy as it could be. In this article, I’m optimizing for understanding, not necessarily style or cleanliness.
Since the majority of the model will consist of basic residual blocks, it makes sense to define a reusable component that we can fill our model with. Fortunately, PyTorch makes this really easy by allowing us to subclass the nn.Module class.
The full implementation of the BasicBlock class can be seen below:

class BasicBlock(nn.Module):
    def __init__(self, inf, outf, stride, drop):
        super().__init__()
        self.bn1 = nn.BatchNorm2d(inf)
        self.conv1 = nn.Conv2d(inf, outf, kernel_size=3, padding=1,
                               stride=stride, bias=False)
        self.drop = nn.Dropout(drop, inplace=True)
        self.bn2 = nn.BatchNorm2d(outf)
        self.conv2 = nn.Conv2d(outf, outf, kernel_size=3, padding=1,
                               stride=1, bias=False)
        # If the width changes, the shortcut needs its own convolution
        if inf == outf:
            self.shortcut = lambda x: x
        else:
            self.shortcut = nn.Sequential(
                nn.BatchNorm2d(inf), nn.ReLU(inplace=True),
                nn.Conv2d(inf, outf, 3, padding=1, stride=stride, bias=False))

    def forward(self, x):
        # Pre-activation ordering: batchnorm -> relu -> convolution
        x2 = self.conv1(F.relu(self.bn1(x)))
        x2 = self.drop(x2)  # dropout between the two convolutions
        x2 = self.conv2(F.relu(self.bn2(x2)))
        r = self.shortcut(x)
        return x2.add_(r)
After the first group, the first block of each group will need to downsample the height and width of the convolutional activation. This can be done by passing in a 2 to the stride parameter when instantiating the first BasicBlock of the group.
With the exception mentioned above, each convolution should preserve the width and height of the convolutional activation. We achieve this by always using a kernel_size of 3 and a padding of 1. Additionally, since we’re using batchnorm, our convolutions don’t need a bias parameter, hence bias=False.
We follow the order of batchnorm -> relu -> convolution. Although the original batchnorm paper used a different order, this has since been shown to be more effective during training.
If this is the first block in a group, we’re going to double the width via our convolutions. In that case, the dimensions won’t match for the shortcut connection, so shortcut will need its own convolution (preceded by batchnorm and relu) to increase its width to outf. Also, since we only double on the first block in a group, and we may be downsampling there too, we’ll need to use the stride parameter in this convolution as well.
The WideResNet Class
Now that we have our BasicBlock implementation, we can flesh out the rest of the wide resnet architecture.
class WideResNet(nn.Module):
    def __init__(self, n_grps, N, k=1, drop=0.3, first_width=16):
        super().__init__()
        layers = [nn.Conv2d(3, first_width, kernel_size=3, padding=1, bias=False)]
        # Double feature depth at each group, after the first
        widths = [first_width]
        for grp in range(n_grps):
            widths.append(first_width*(2**grp)*k)
            layers += self._make_group(N, widths[grp], widths[grp+1],
                                       (1 if grp == 0 else 2), drop)
        layers += [nn.BatchNorm2d(widths[-1]), nn.ReLU(inplace=True),
                   nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                   nn.Linear(widths[-1], 10)]
        self.features = nn.Sequential(*layers)

    def _make_group(self, N, inf, outf, stride, drop):
        # N blocks; only the first changes width and (possibly) downsamples
        return [BasicBlock(inf=(inf if i == 0 else outf), outf=outf,
                           stride=(stride if i == 0 else 1), drop=drop)
                for i in range(N)]

    def forward(self, x):
        return self.features(x)
You can see the outline of the architecture in the code. Right after we call the super constructor, we initialize the first convolutional layer (conv1 in the architecture table).
After the initial convolution, we calculate the widths (i.e. number of filters) in each block, creating a list that will become our inf and outf parameters during block construction. Then we construct each group in a for loop. If this is the first group, we use stride=1 since this is the only time we don’t want to decrease the width and height of the convolutional activations. Making a group involves calling our _make_group helper function, which will construct N instances of BasicBlock with the appropriate inf, outf, and stride parameters.
Finally, we average pool our activations, turning each of the 64\(\cdot k\) convolutional activation maps into a single value, which is the input to our last linear layer used for classification.
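That average pooling is just a mean over the spatial dimensions; a small sketch with numpy standing in for the tensor math:

```python
import numpy as np

# A fake activation: 2 channels of 4x4 feature maps
acts = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)

# Global average pooling collapses each HxW map to one value per channel
pooled = acts.mean(axis=(1, 2))
print(pooled.tolist())  # [7.5, 23.5]
```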
Data Loading and Training
Our model is locked and loaded, now we just need some data to feed it and a training loop to optimize it. Since this is the least interesting part of building a model, I’m going to rely heavily on the fastai library. Note that for this code to run, the library will need to be importable, which is most simply done by cloning the repository and then symlinking the library directory into the same directory that the model is in.
To start, we’ll set up our data folder and download our dataset via torchvision.datasets. We’ll also convert the dataset to numpy arrays of floating point values, and scale the inputs to between 0 and 1.
from torchvision.datasets import CIFAR10
import numpy as np

PATH = 'data'
trn_ds = CIFAR10(PATH, train=True, download=True)
tst_ds = CIFAR10(PATH, train=False, download=True)
trn = trn_ds.train_data.astype('float32')/255, np.array(trn_ds.train_labels)
tst = tst_ds.test_data.astype('float32')/255, np.array(tst_ds.test_labels)
Now trn and tst are tuples containing our training and test inputs/outputs, respectively. Next we’ll set up our preprocessing transformations using fastai.
sz, bs = 32, 128
stats = (np.array([ 0.4914 , 0.48216, 0.44653]),
np.array([ 0.24703, 0.24349, 0.26159]))
aug_tfms = [RandomFlip(), Cutout(1, 16)]
tfms = tfms_from_stats(stats, sz, aug_tfms=aug_tfms, pad=4)
Our inputs will be size 32x32, with batch size 128 (you may need to decrease this depending on your hardware; this is the value used in the original paper). We set up our tfms object to be a list of transformations for our inputs: we normalize based on the known means and standard deviations, take a random crop after padding each side by 4 pixels, and randomly flip the image 50% of the time. Additionally, we use cutout, which randomly zeroes out a square in our input image. Here we set cutout to use 1 square of side length 16.
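Cutout itself is simple enough to sketch directly; this is a hypothetical standalone version for illustration, not fastai's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def cutout(img, size=16):
    # zero out one size x size square at a random location
    h, w = img.shape[:2]
    y = rng.integers(0, h - size + 1)
    x = rng.integers(0, w - size + 1)
    out = img.copy()
    out[y:y + size, x:x + size] = 0.0
    return out

img = np.ones((32, 32, 3), dtype='float32')
aug = cutout(img)
print(int((aug == 0).sum()))  # 768 = 16 * 16 * 3 zeroed values
```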
Finally, we’ll put everything together by creating a dataset object, instantiating our model, and creating a learner object.
data = ImageClassifierData.from_arrays('data', trn, tst, bs=bs, tfms=tfms)
wrn = WideResNet(n_grps=3, N=4, k=10)
learn = ConvLearner.from_model_data(wrn, data)
Here we’re using a wide resnet with 3 groups, four blocks per group, and a widening factor of 10. We’ll also let dropout be 0.3, which is the default we picked when we defined the class. This results in a 28-layer network and produced the best results for our dataset.
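The "28 layers" come from a simple count. With three groups of N blocks, two convolutions per block, one initial convolution, and one projection convolution in the first block of each group, the depth works out to 6N + 4 (this counting convention, which matches the wide resnet naming scheme, is an assumption on my part):

```python
def wrn_depth(N, n_grps=3):
    # 2 convs per block * N blocks * n_grps groups,
    # + 1 initial conv + 1 projection conv per group
    return 2 * N * n_grps + 1 + n_grps

print(wrn_depth(4))  # 28 -> "WRN-28-10" when k = 10
```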
To train, we will follow the same training procedure outlined in the original paper.
lr, wds = 0.1, 5e-4  # initial learning rate (the paper starts at 0.1) and weight decay
for i, epochs in enumerate([60, 60, 40, 40]):
    learn.fit(lr, epochs, wds=wds, best_save_name=f'wrl-10-28-p{i}')
    lr /= 5
We train for 200 epochs in total, dividing the learning rate by 5 between phases. Fastai will automatically save the best-performing model of each phase in our data directory, since we set the best_save_name parameter.
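Writing the schedule out makes the 200 epochs explicit; each phase runs at one fifth of the previous learning rate:

```python
lr, schedule = 0.1, []
for epochs in [60, 60, 40, 40]:
    schedule.append((epochs, lr))
    lr /= 5

# phase learning rates: 0.1, 0.02, 0.004, 0.0008 (approximately, in floats)
print(sum(e for e, _ in schedule))  # 200 epochs in total
```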
In my own tests, this model achieved a final test time accuracy of 95.84%. The current state of the art for CIFAR-10 is about 98% (though they also trained for 9 times as long). Not bad for less than 100 lines of code!
In this post, I walked through implementing the wide residual network. Leveraging PyTorch’s modular API, we were able to construct the model with just a few dozen lines of code. We also were able to skip past the mundane image processing and training loop using the fastai library.
Our final results got us almost 96% accuracy on a rather challenging dataset, within 2% of the best that anybody has ever done. While deep learning moves at breakneck speed, papers often present ideas that are fairly straightforward to reimplement yourself. This isn’t always the case, as with experiments that require absolutely enormous computational power. But in cases like the wide resnet, it can be really fun and extremely rewarding to recreate a paper’s experiments from scratch.
|
Cauchy's_theorem_(group_theory) Knowpia
Proof 1Edit
The set that our cyclic group shall act on is the set
{\displaystyle X=\{\,(x_{1},\ldots ,x_{p})\in G^{p}:x_{1}x_{2}\cdots x_{p}=e\,\}}
{\displaystyle (x_{1},x_{2},\ldots ,x_{p})\mapsto (x_{2},\ldots ,x_{p},x_{1})}
As remarked, orbits in X under this action either have size 1 or size p. The former happens precisely for those tuples {\displaystyle (x,x,\ldots ,x)} with {\displaystyle x^{p}=e}. Counting the elements of X by orbits, and reducing modulo p, one sees that the number of elements satisfying {\displaystyle x^{p}=e} is divisible by p. But x = e is one such element, so there must be at least p − 1 other solutions for x, and these solutions are elements of order p. This completes the proof.
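As a small illustration of this counting argument (not part of the original proof), take G = S3, the symmetric group on three letters, and p = 3; the number of solutions of x^3 = e is indeed divisible by 3:

```python
from itertools import permutations

def compose(a, b):
    # permutation composition: (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(len(b)))

identity = (0, 1, 2)
elements = list(permutations(range(3)))  # the 6 elements of S3

cubes_to_e = [x for x in elements
              if compose(x, compose(x, x)) == identity]
print(len(cubes_to_e))  # 3: the identity plus the two 3-cycles
```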
Let G be a finite group in which x^2 = e for every element x of G. Then G has order 2^n for some non-negative integer n. Let |G| equal m. If m is 1, then G = {e}. If m ≥ 2 and m has an odd prime factor p, then by Cauchy's theorem G has an element x with x^p = e, contradicting the assumption. Therefore m must be 2^n.[5] G is an abelian group, and G is called an elementary abelian 2-group or Boolean group. The best-known example is the Klein four-group.
An abelian simple group is either {e} or a cyclic group Cp whose order is a prime number p. Let G be an abelian group; then all subgroups of G are normal subgroups. So, if G is a simple group, the only normal subgroups of G are {e} and G. If |G| = 1, then G is {e}, as required. If |G| ≥ 2, let a ∈ G with a ≠ e; the cyclic group
{\displaystyle \langle a\rangle }
is a subgroup of G and
{\displaystyle \langle a\rangle }
is not {e}, then
{\displaystyle G=\langle a\rangle .}
Let n be the order of
{\displaystyle \langle a\rangle }
. If n is infinite, then
{\displaystyle G=\langle a\rangle \supsetneqq \langle a^{2}\rangle \supsetneqq \{e\}.}
^ Cauchy 1845.
^ Finite groups where x2=e has order 2n, Stack Exchange, 2015-09-23
Cauchy, Augustin-Louis (1845), "Mémoire sur les arrangements que l'on peut former avec des lettres données, et sur les permutations ou substitutions à l'aide desquelles on passe d'un arrangement à un autre", Exercises d'analyse et de physique mathématique, Paris, 3: 151–252
Cauchy, Augustin-Louis (1932), "Oeuvres complètes" (PDF), Lilliad - Université de Lille - Sciences et Technologies, second series (reprinted ed.), Paris: Gauthier-Villars, 13: 171–282
Jacobson, Nathan (2009) [1985], Basic Algebra, Dover Books on Mathematics, vol. I (Second ed.), Dover Publications, p. 80, ISBN 978-0-486-47189-1
McKay, James H. (1959), "Another proof of Cauchy's group theorem", American Mathematical Monthly, 66 (2): 119, CiteSeerX 10.1.1.434.3544, doi:10.2307/2310010, JSTOR 2310010, MR 0098777, Zbl 0082.02601
"Cauchy's theorem". PlanetMath.
"Proof of Cauchy's theorem". PlanetMath.
|
Anti-Galling Effects of α-Zirconium Phosphate Nanoparticles as Grease Additives | J. Tribol. | ASME Digital Collection
Anti-Galling Effects of α-Zirconium Phosphate Nanoparticles as Grease Additives
e-mail: yanchen876@tamu.edu
Xuezhen Wang,
e-mail: xuezhenwang@tamu.edu
Abraham Clearfield,
e-mail: clearfield@chem.tamu.edu
e-mail: hliang@tamu.edu
Contributed by the Tribology Division of ASME for publication in the JOURNAL OF TRIBOLOGY. Manuscript received July 22, 2018; final manuscript received September 17, 2018; published online November 1, 2018. Assoc. Editor: Min Zou.
Chen, Y., Wang, X., Clearfield, A., and Liang, H. (November 1, 2018). "Anti-Galling Effects of α-Zirconium Phosphate Nanoparticles as Grease Additives." ASME. J. Tribol. March 2019; 141(3): 031801. https://doi.org/10.1115/1.4041538
Grease plays important roles in reducing frictional loss and providing protection of rubbing surfaces. In this research, we investigated the effects of α-zirconium phosphate nanoparticles as additives in grease on the galling behavior of a pair of steels (4130 against P530). The results showed that the addition of 0.5 wt% of nanoparticles in petroleum jelly could reduce the friction by 10% and the galled area by 80%. In terms of particle sizes, the 1 μm sized particles have a profound influence on galling reduction. This is due to the increased contribution of van der Waals forces in the stacked layers of those particles. Under shear, those particles are exfoliated, resulting in low friction and more surface coverage to protect surfaces from galling.
Galling, Nanoparticles, Zirconium, Particulate matter, Petroleum
|
The Rational(f, k) command computes a closed form of the indefinite sum of f(k) with respect to k. It returns s(k) together with a remaining term t(k) such that f(k) = s(k+1) - s(k) + t(k), where t(k) is a rational function of k whose indefinite sum ∑_k t(k) cannot be expressed in closed form.
If the 'failpoints' option is specified, the output is the expression sequence g, [p, q], where g is the closed form of the indefinite sum of f with respect to k, p is a list containing the integer poles of f, and q is a list containing the integer poles of g that are not poles of f.
\mathrm{with}\left(\mathrm{SumTools}[\mathrm{IndefiniteSum}]\right):
f≔\frac{1}{{n}^{2}+\mathrm{sqrt}\left(5\right)n-1}
\textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{≔}\frac{\textcolor[rgb]{0,0,1}{1}}{{\textcolor[rgb]{0,0,1}{n}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\sqrt{\textcolor[rgb]{0,0,1}{5}}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}}
g≔\mathrm{Rational}\left(f,n\right)
\textcolor[rgb]{0,0,1}{g}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{3}}{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\frac{\sqrt{\textcolor[rgb]{0,0,1}{5}}}{\textcolor[rgb]{0,0,1}{2}}\right)}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\frac{\sqrt{\textcolor[rgb]{0,0,1}{5}}}{\textcolor[rgb]{0,0,1}{2}}\right)}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\frac{\sqrt{\textcolor[rgb]{0,0,1}{5}}}{\textcolor[rgb]{0,0,1}{2}}\right)}
\mathrm{evala}\left(\mathrm{Normal}\left(\mathrm{eval}\left(g,n=n+1\right)-g\right),\mathrm{expanded}\right)
\frac{\textcolor[rgb]{0,0,1}{1}}{{\textcolor[rgb]{0,0,1}{n}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\sqrt{\textcolor[rgb]{0,0,1}{5}}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}}
f≔\frac{13-57x+2y+20{x}^{2}-18xy+10{y}^{2}}{15+10x-26y-25{x}^{2}+10xy+8{y}^{2}}
\textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{≔}\frac{\textcolor[rgb]{0,0,1}{20}\textcolor[rgb]{0,0,1}{}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{18}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{10}\textcolor[rgb]{0,0,1}{}{\textcolor[rgb]{0,0,1}{y}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{57}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{13}}{\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{25}\textcolor[rgb]{0,0,1}{}{\textcolor[rgb]{0,0,1}{x}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{10}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{8}\textcolor[rgb]{0,0,1}{}{\textcolor[rgb]{0,0,1}{y}}^{\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{10}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{26}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{15}}
g≔\mathrm{Rational}\left(f,x\right)
\textcolor[rgb]{0,0,1}{g}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{x}}{\textcolor[rgb]{0,0,1}{5}}\textcolor[rgb]{0,0,1}{+}\left(\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{7}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}}{\textcolor[rgb]{0,0,1}{25}}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{34}}{\textcolor[rgb]{0,0,1}{25}}\right)\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{\mathrm{\Psi }}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}}{\textcolor[rgb]{0,0,1}{5}}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{3}}{\textcolor[rgb]{0,0,1}{5}}\right)\textcolor[rgb]{0,0,1}{+}\left(\frac{\textcolor[rgb]{0,0,1}{17}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}}{\textcolor[rgb]{0,0,1}{25}}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{3}}{\textcolor[rgb]{0,0,1}{5}}\right)\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{\mathrm{\Psi }}\textcolor[rgb]{0,0,1}{}\left(\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{}\textcolor[rgb]{0,0,1}{y}}{\textcolor[rgb]{0,0,1}{5}}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}\right)
\mathrm{simplify}\left(\mathrm{combine}\left(f-\left(\mathrm{eval}\left(g,x=x+1\right)-g\right),\mathrm{\Psi }\right)\right)
\textcolor[rgb]{0,0,1}{0}
f≔\frac{1}{n}-\frac{2}{n-3}+\frac{1}{n-5}
\textcolor[rgb]{0,0,1}{f}\textcolor[rgb]{0,0,1}{≔}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{n}}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{2}}{\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{5}}
g,\mathrm{fp}≔\mathrm{Rational}\left(f,n,'\mathrm{failpoints}'\right)
\textcolor[rgb]{0,0,1}{g}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{\mathrm{fp}}\textcolor[rgb]{0,0,1}{≔}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{5}}\textcolor[rgb]{0,0,1}{-}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{4}}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{3}}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{2}}\textcolor[rgb]{0,0,1}{+}\frac{\textcolor[rgb]{0,0,1}{1}}{\textcolor[rgb]{0,0,1}{n}\textcolor[rgb]{0,0,1}{-}\textcolor[rgb]{0,0,1}{1}}\textcolor[rgb]{0,0,1}{,}[[\textcolor[rgb]{0,0,1}{0}\textcolor[rgb]{0,0,1}{..}\textcolor[rgb]{0,0,1}{0}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{..}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{..}\textcolor[rgb]{0,0,1}{5}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}]]
That is, f has integer poles at n = 0, 3, 5, while g has the additional poles n = 1, 2, 4, which are not poles of f.
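The antidifference relation f(n) = g(n+1) − g(n) can be checked away from the failpoints with exact rational arithmetic (a Python sketch, independent of Maple):

```python
from fractions import Fraction as F

def f(n):
    return F(1, n) - F(2, n - 3) + F(1, n - 5)

def g(n):
    # the closed form returned by Rational above
    return (-F(1, n - 5) - F(1, n - 4) + F(1, n - 3)
            + F(1, n - 2) + F(1, n - 1))

# valid once n is past all the poles listed in the failpoints
print(all(g(n + 1) - g(n) == f(n) for n in range(6, 30)))  # True
```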
|
In the last exercise, you were probably able to make a rough estimate about the next data point for Sandra’s lemonade stand without thinking too hard about it. For our program to make the same level of guess, we have to determine what a line would look like through those data points.
A line is determined by its slope and its intercept. In other words, for each point (x, y) on a line we can say:
y = m x + b
where m is the slope and b is the intercept: y is the value on the y-axis corresponding to a given x on the x-axis.
The slope is a measure of how steep the line is, while the intercept is a measure of where the line hits the y-axis.
When we perform Linear Regression, the goal is to get the “best” m and b for our data. We will determine what “best” means in the next exercises.
We have provided a slope, m, and an intercept, b, that seems to describe the revenue data you have been given.
Create a new list, y, that has every element in months, multiplied by m and added to b.
A list comprehension is probably the easiest way to do this!
Plot the y values against months as a line on top of the scatterplot that was plotted with the line plt.plot(months, revenue, "o").
Change m and b to the values that you think match the data the best.
What does the slope look like it should be? And the intercept?
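The steps above can be sketched with made-up values for m, b, and months (the real exercise supplies its own):

```python
# hypothetical slope, intercept, and x-values for illustration
m, b = 12, 40
months = [1, 2, 3, 4, 5, 6]

# y = m*x + b for each x, via a list comprehension
y = [m * x + b for x in months]
print(y)  # [52, 64, 76, 88, 100, 112]
```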
|
2^2 - 3
Of course we must then first agree on the definition of a state machine... that's why I clearly stated the definition I used.
Hi, I am reading the "no-more-mvc-frameworks" article and Jean posted an example about an "increment" action and a model. I am trying to map this to CQRS, (but I am unsure if it even makes sense). Since the model checks the validity of the data, the action is not even a command in the CQRS sense but a mapping of user input?
So if you used CQRS for this system, the model would split its mutation in two steps:
validate data + apply "increment" command
mutate + generate "incremented" event
Hi, there is a reasonable alignment with Event Sourcing, it may even well be a missing link in the CQRS architecture. SAM decouples the invocation of back-end APIs from the view, therefore, there is no need/urge to map the view events to the commands. It may be more logical to map the action proposals to commands. You could as you mention in option two pick the event in the model, but I believe the proposal is more natural.
@sladiri I would argue that we have been missing that clear distinction between model and everything else (CQRS being completely on the model side) and SAM (via TLA+ semantics) provide a much clearer isolation of the model business logic. All other approaches tend to mix action and state code with the model.
Marcus Feitoza
@mfeitoza
Hey guys I have one question in the above code:
const model = {}
// cat name space
model.Cat = {}
model.Cat.list = []
model.Dog = {}
model.Dog.list = []
model.present = (data) => { /* ... */ }
// only one present for both Cat and Dog
// OR I can have present for Cat and Dog?
model.Cat.present = (data) => { /* ... */ }
model.Dog.present = (data) => { /* ... */ }
There is no general rule, it's really about the "protocol" and the roles and responsibilities of Actions, Model and State. There is no problem modularizing the model as needed, as long as Actions "present" proposals to the model and these proposals are serialized, unless you know what you are doing (there could still be parts of the model that could be updated concurrently, but IMHO, that's not a good thing).
Thank you again JJ.
|
{\displaystyle g(x)={\frac {\ln(x^{3}+7)}{(x^{4}+2x^{2})}}}
This problem requires some more advanced rules of differentiation. In particular, it needs
The Chain Rule: If {\displaystyle f} and {\displaystyle g} are differentiable functions, then
{\displaystyle (f\circ g)'(x)=f'(g(x))\cdot g'(x).}
The Quotient Rule: If {\displaystyle f} and {\displaystyle g} are differentiable functions and {\displaystyle g(x)\neq 0}, then
{\displaystyle \left({\frac {f}{g}}\right)'(x)={\frac {f'(x)\cdot g(x)-f(x)\cdot g'(x)}{\left(g(x)\right)^{2}}}.}
Find the derivative of the numerator:
We need to use the chain rule, where the inner function is {\displaystyle x^{3}+7} and the outer function is the natural log:
{\displaystyle {\begin{array}{rcl}\left[\ln(x^{3}+7)\right]'&=&{\displaystyle {\frac {1}{x^{3}+7}}\cdot 3x^{2}}\\\\&=&{\displaystyle {\frac {3x^{2}}{x^{3}+7}}.}\end{array}}}
Apply the Quotient Rule:
$\left[\frac{\ln(x^3+7)}{x^4+2x^2}\right]' = \frac{\left[\ln(x^3+7)\right]'\cdot\left(x^4+2x^2\right) - \left(x^4+2x^2\right)'\cdot\ln(x^3+7)}{\left(x^4+2x^2\right)^2} = \frac{\frac{3x^2}{x^3+7}\cdot\left(x^4+2x^2\right) - \left(4x^3+4x\right)\cdot\ln(x^3+7)}{\left(x^4+2x^2\right)^2}.$
Final answer:
$\left[\frac{\ln(x^3+7)}{x^4+2x^2}\right]' = \frac{\frac{3x^2}{x^3+7}\cdot\left(x^4+2x^2\right) - \left(4x^3+4x\right)\cdot\ln(x^3+7)}{\left(x^4+2x^2\right)^2}.$
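As a sanity check, the final formula can be compared against a numerical derivative; a minimal sketch (the test point x = 1.5 is an arbitrary choice, not from the problem):

```python
import math

def g(x):
    """g(x) = ln(x^3 + 7) / (x^4 + 2x^2), the function being differentiated."""
    return math.log(x**3 + 7) / (x**4 + 2*x**2)

def g_prime(x):
    """The derivative obtained above via the chain and quotient rules."""
    num = (3*x**2 / (x**3 + 7)) * (x**4 + 2*x**2) - (4*x**3 + 4*x) * math.log(x**3 + 7)
    return num / (x**4 + 2*x**2)**2

# Compare with a central-difference approximation at x = 1.5
h = 1e-6
approx = (g(1.5 + h) - g(1.5 - h)) / (2 * h)
print(abs(approx - g_prime(1.5)) < 1e-6)  # True
```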
|
Determinism - Wikiquote
philosophical belief that all events are determined completely by previously existing causes
Determinism is the philosophical position that for every event, including human action, there exist conditions that could cause no other event.
Unlike the Hindu concept of karma, however, karma in Buddhism is not deterministic since there is in Buddhism no idea of a God who is the controller of karma; rather Buddhism takes karma as moral power, emphasizing the possibility of final release from the round of transmigration through a free decision of the will. Accordingly, on the one hand, we are bound by our own karma which shares in and inseparably linked to karma operating in the universe.
Masao Abe, Buddhism and Interfaith Dialogue, p. 84
It must be admitted that the new theoretical conception owes its origin not to any flight of fancy but to the compelling force of the facts of experience. All attempts to represent the particle and wave features displayed in the phenomena of light and matter, by direct course to a space-time model, have so far ended in failure. And Heisenberg has convincingly shown, from an empirical point of view, any decision as to a rigorously deterministic structure of nature is definitely ruled out, because of the atomistic structure of our experimental apparatus. Thus it is probably out of the question that any future knowledge can compel physics again to relinquish our present statistical theoretical foundation in favor of a deterministic one which would deal directly with physical reality.
Albert Einstein, (May 24, 1940)"Considerations Concerning the Fundaments of Theoretical Physics". Science 91 (2369): 487–492. (quote from pp. 491–492)
The statement that "the future is predetermined" seems to us to belong to the language of common sense because we are, from our religious—Judeo-Christian—tradition, accustomed to the idea of an omniscient Intelligence in whose mind this predetermination takes place. To the pagans, since their gods were imagined as more human, this predetermination took place, not in the minds of the gods, but in the mind of "Fate" above the gods...
If science does not care to include an omniscient Intelligence in its conceptual scheme... it can only mean that it is determined by law.
Philipp Frank, Philosophy of Science: The Link Between Science and Philosophy (1957) pp. 261-262.
Newtonian laws of motion allow a prediction of the future based on knowledge of the present because these laws are of the form
$d\xi_k/dt = F_k(\xi_1, \xi_2, \cdots \xi_n) \qquad (k = 1, 2, \cdots n)$
...if the values of the "state variables" are known for the present instant of time $t=0$, one can "predict" their values for any past or future time $t$. All laws of this kind are called "causal laws." The general "principle of causality" would claim that all phenomena are governed by causal laws which would have the [above] form... where $\xi_1, \cdots \xi_n$ are any variables that determine the "state" of a physical system at the time $t$. ...belief in this general principle is supported by the special case of astronomy where $\xi_k$ are the coordinates and velocities of mass-points and the functions $F_k$ are known to be simple mathematical formulae derived from Newton's laws of gravitation. ...What caused the success was the simplicity of the laws in comparison of the complexity of the observed facts. If we regard the $F_k$ as arbitrary functions... and admit complicated initial conditions, the causal law... may be "valid" but will not guarantee the same kind of success. It may be that the law is as complex as the observed facts. Then there is no advantage...
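The causal-law form above can be made concrete with a toy computation; the sketch below (a simple harmonic oscillator advanced by Euler steps; the system and step size are illustrative choices, not from the text) "predicts" a later state from the state at t = 0:

```python
# State variables (xi_1, xi_2) = (position, velocity) evolve by
# d(xi_k)/dt = F_k(xi_1, xi_2); here F encodes a harmonic oscillator.
def step(state, dt):
    x, v = state
    return (x + v * dt, v - x * dt)  # Euler update for x' = v, v' = -x

state = (1.0, 0.0)        # the "present instant of time t = 0"
for _ in range(1000):     # march forward to t = 1
    state = step(state, 0.001)
# state now approximates (cos 1, -sin 1), the exact deterministic prediction
```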
William James, The Dilemma of Determinism (1884), republished in The Will to Believe, Dover, (1956), p. 149.
William James, The Dilemma of Determinism (1884), p.153.
The curve described by a single molecule of air or vapor is regulated in a manner just as certain as the planetary orbits; the only difference between them is that which comes from our ignorance.
Pierre-Simon Laplace, A Philosophical Essay on Probabilities from a École Normale Lecture (1795) Tr. (1902 from 6th edition) Frederick Wilson Truscott, Frederick Lincoln Emory, p. 6.
We ought then to regard the present state of the universe as the effect of its anterior state and as the cause of the one which is to follow. Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it—an intelligence sufficiently vast to submit these data to analysis—it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes. The human mind offers, in the perfection which it has been able to give to astronomy, a feeble idea of this intelligence. Its discoveries in mechanics and geometry, added to that of universal gravity, have enabled it to comprehend in the same analytical expressions the past and future states of the system of the world. Applying the same method to some other objects of its knowledge, it has succeeded in referring to general laws observed phenomena and in foreseeing those which given circumstances ought to produce. All these efforts in the search for truth tend to lead it back continually to the vast intelligence which we have just mentioned, but from which it will always remain infinitely removed.
Pierre-Simon Laplace, Théorie Analytique des Probabilités (1812) Tr. Frederick Wilson Truscott as Philosophical Essay on Probabilities (1902) p. 4.
Ervin László (1996) The systems view of the world: A holistic vision for our time p. 10-11.
This is a funny question: we all know what it means to do something. But the problem is, if the act wasn't determined in advance, by your desires, beliefs, and personality, among other things, it seems to be something that just happened, without any explanation. And in that case, how was it your doing?
Thomas Nagel, What Does It All Mean?: A Very Short Introduction to Philosophy (1987), Ch. 6. Free Will
If determinism is true for everything that happens, it was already determined before you were born that you would choose cake. Your choice was determined by the situation immediately before, and that situation was determined by the situation before it, and so on as far back as you want to go.
Even if determinism isn't true for everything that happens -- even if some things just happen without being determined by causes that were there in advance -- it would still be very significant if everything we did were determined before we did it. However free you might feel when choosing between fruit and cake, or between two candidates in an election, you would really be able to make only one choice in those circumstances, though if the circumstances or your desires had been different, you would have chosen differently.
Biological determinism works as a phenomenon that normalizes same-sex desire while leaving heterosexism in place and disenfranchising certain queer people from fully participating in an accurate articulation of their experiences in political and popular discourse.
Shannon Weber "What's Wrong With Becoming Queer Biological Determinism as Discursive Queer Hegemony"; as quoted in Ages of the X-Men: Essays on the Children of the Atom in Changing Times, p.170.
|
Jill is studying a strange bacterium. When she first looks at the bacteria, there are 1000 cells in her sample. The next day, there are 2000 cells. Intrigued, she comes back the next day to find that there are 4000 cells! (Desmos)
Hint(a)
Is the growth linear or exponential?
Create a table and graph for this situation. The inputs are the days that have passed after she first began to study the sample, and the outputs are the numbers of cells of bacteria.
How many cells of bacteria were there to start with?
Make a table starting with day 0. What is the multiplier?
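The doubling pattern can be tabulated directly; a minimal sketch (the model N(d) = 1000 · 2^d is inferred from the given counts):

```python
initial = 1000    # cells on day 0
multiplier = 2    # the count doubles each day: 1000, 2000, 4000, ...

# table of (day, cell count) pairs for the first few days
table = {day: initial * multiplier**day for day in range(5)}
print(table)  # {0: 1000, 1: 2000, 2: 4000, 3: 8000, 4: 16000}
```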
|
Fit a mixture of Weibull distributions in SAS
A previous article discusses how to use SAS regression procedures to fit a two-parameter Weibull distribution in SAS. The article shows how to convert the regression output into the more familiar scale and shape parameters for the Weibull probability distribution, which are fit by using PROC UNIVARIATE.
Although PROC UNIVARIATE can fit many univariate distributions, it cannot fit a mixture of distributions. For that task, you need to use PROC FMM, which fits finite mixture models. This article discusses how to use PROC FMM to fit a mixture of two Weibull distributions and how to interpret the results. The same technique can be used to fit other mixtures of distributions. If you are going to use the parameter estimates in SAS functions such as the PDF, CDF, and RAND functions, you cannot use the regression parameters directly. You must transform them into the distribution parameters.
Simulate a mixture of Weibull data
You can use the RAND function in the SAS DATA step to simulate a mixture distribution that has two components, each drawn from a Weibull distribution. The RAND function samples from a two-parameter Weibull distribution Weib(α, β) whose density is given by
f(x; \alpha, \beta) = \frac{\alpha}{\beta^{\alpha}} x^{\alpha -1} \exp \left(-\left(\frac{x}{\beta}\right)^{\alpha }\right)
where α is a shape parameter and β is a scale parameter. This parameterization is used by most Base SAS functions and procedures, as well as many regression procedures in SAS. The following SAS DATA step simulates data from two Weibull distributions. The first component is sampled from Weib(α=1.5, β=0.8) and the second component is sampled from Weib(α=4, β=2). For the mixture distribution, the probability of drawing from the first distribution is 0.667 and the probability of drawing from the second distribution is 0.333.
After generating the data, you can call PROC UNIVARIATE to estimate the parameters for each component. Notice that this fits each component separately. If the parameter estimates are close to the parameter values, that is evidence that the simulation generated the data correctly.
/* sample from a mixture of two-parameter Weibull distributions */
data Have;
call streaminit(12345);                /* hypothetical seed */
array prob [2] _temporary_ (0.667 0.333);
do i = 1 to 3000;
   component = rand("Table", of prob[*]);
   if component=1 then
      d = rand("weibull", 1.5, 0.8);   /* C=Shape=1.5; Sigma=Scale=0.8 */
   else d = rand("weibull", 4, 2);     /* C=Shape=4; Sigma=Scale=2 */
   output;
end;
run;
title "Weibull Estimates for Each Component";
proc univariate data=Have;
   class component;
   histogram d / weibull NOCURVELEGEND; /* fit (Sigma, C) for each component */
   inset weibull(shape scale) / pos=NE;
   ods select Histogram ParameterEstimates Moments;
   ods output ParameterEstimates = UniPE;
run;
proc print data=UniPE noobs;
   where Parameter in ('Scale', 'Shape');
   var Component Parameter Symbol Estimate;
run;
The graph shows a histogram for data in each component. PROC UNIVARIATE overlays a Weibull density on each histogram, based on the parameter estimates. The estimates for both components are close to the parameter values. The first component contains 1,970 observations, which is 65.7% of the total sample, so the estimated mixing probabilities are close to the mixing parameters. I used ODS OUTPUT and PROC PRINT to display one table that contains the parameter estimates from the two groups. PROC UNIVARIATE calls the shape parameter c and the scale parameter σ.
Fitting a finite mixture distribution
The PROC UNIVARIATE call uses the Component variable to identify the Weibull distribution to which each observation belongs. If you do not have the Component variable, is it still possible to estimate a two-component Weibull model?
The answer is yes. The FMM procedure fits statistical models for which the distribution of the response is a finite mixture of distributions. In general, the component distributions can be from different families, but this example is a homogeneous mixture, with both components from the Weibull family. When fitting a mixture model, we assume that we do not know which observations belong to which component. We must estimate the mixing probabilities and the parameters for the components. Typically, you need a lot of data and well-separated components for this effort to be successful.
The following call to PROC FMM fits a two-component Weibull model to the simulated data. As shown in a previous article, the estimates from PROC FMM are for the intercept and scale of the error term for a Weibull regression model. These estimates are different from the shape and scale parameters in the Weibull distribution. However, you can transform the regression estimates into the shape and scale parameters, as follows:
title "Weibull Estimates for Mixture";
proc fmm data=Have plots=density;
   model d = / dist=weibull k=2;
   ods select ParameterEstimates MixingProbs DensityPlot;
   ods output ParameterEstimates=PE0;
run;
/* Add the estimates of Weibull scale and shape to the table of regression estimates.
See https://blogs.sas.com/content/iml/2021/10/27/weibull-regression-model-sas.html */
data FMMPE;
set PE0(rename=(ILink=WeibScale));
if Parameter="Scale" then WeibShape = 1/Estimate;
else WeibShape = ._;   /* ._ is one of the 28 missing values in SAS */
run;
proc print data=FMMPE;
var Component Parameter Estimate WeibShape WeibScale;
run;
The program renames the ILink column to WeibScale. It also adds a new column (WeibShape) to the ParameterEstimates table. These two columns display the Weibull shape and scale parameter estimates for each component. Despite not knowing which observation came from which component, the procedure provides good estimates for the Weibull parameters. PROC FMM estimates the first component as Weib(α=1.52, β=0.74) and the second component as Weib(α=3.53, β=1.88). It estimates the mixing parameter for the first component as 0.6 and the parameter for the second component as 0.4.
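The same transform can be sketched outside SAS; the helper below is hypothetical (the name and structure are mine), with the numeric values taken from the estimates quoted above:

```python
def fmm_to_weibull(exp_intercept, reg_scale):
    """Convert Weibull-regression output (exp(Intercept), regression Scale)
    to the distribution's (shape, scale): shape = 1/Scale, scale = exp(Intercept)."""
    return 1.0 / reg_scale, exp_intercept

# First component: exp(Intercept) ~ 0.7351, regression Scale ~ 1/1.52207
shape1, scale1 = fmm_to_weibull(0.7351, 1 / 1.52207)
print(round(shape1, 2), round(scale1, 2))  # 1.52 0.74
```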
The PLOTS=DENSITY option on the PROC FMM statement produces a plot of the data and overlays the component and mixture distributions. The plot is shown below and is discussed in the next section.
The graph of the component densities
The PLOTS=DENSITY option produces a graph of the data and overlays the component and mixture distributions. In the graph, the red curve shows the density of the first Weibull component (W1(d)), the green curve shows the density of the second Weibull component (W2(d)), and the blue curve shows the density of the mixture. Technically, only the blue curve is a "true" density that integrates to unity (or 100% on a percent scale). The components are scaled densities. The integral of a component equals the mixing probability, which for these data are 0.6 and 0.4, respectively. The mixture density equals the sum of the component densities.
Look closely at the legend in the plot, which identifies the component curves by the parameter estimates. Notice that the estimates in the legend are the REGRESSION estimates, not the shape and scale estimates for the Weibull distribution. Do not be misled by the legend. If you plot the PDF
density = PDF("Weibull", d, 0.74, 0.66); /* WRONG! */
you will NOT get the density curve for the first component. Instead, you need to convert the regression estimates into the shape and scale parameters for the Weibull distribution. The following DATA step uses the transformed parameter estimates and demonstrates how to graph the component and mixture densities:
/* plot the Weibull component densities and the mixture density */
data WeibComponents;
retain d1 d2;
array WeibScale[2] _temporary_ (0.7351, 1.8820);   /* =exp(Intercept) */
array WeibShape[2] _temporary_ (1.52207, 3.52965); /* =1/Scale */
array MixParm[2] _temporary_ (0.6, 0.4);
do d = 0.01, 0.05 to 3.2 by 0.05;
   d1 = MixParm[1]*pdf("Weibull", d, WeibShape[1], WeibScale[1]);
   d2 = MixParm[2]*pdf("Weibull", d, WeibShape[2], WeibScale[2]);
   Component = "Mixture        "; density = d1+d2; output;
   Component = "Weib(1.52,0.74)"; density = d1; output;
   Component = "Weib(3.53,1.88)"; density = d2; output;
end;
run;
title "Weibull Mixture Components";
proc sgplot data=WeibComponents;
   series x=d y=density / group=Component;
   keylegend / location=inside position=NE across=1 opaque;
   xaxis values=(0 to 3.2 by 0.2) grid offsetmin=0.05 offsetmax=0.05;
run;
The density curves are the same, but the legend for this graph displays the shape and scale parameters for the Weibull distribution. If you want to reproduce the vertical scale (percent), you can multiply the densities by 100*h, where h=0.2 is the width of the histogram bins.
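The component-plus-mixture evaluation in the DATA step can be mirrored in Python as a cross-check; the sketch below uses the two-parameter density given at the start of the article and the estimates quoted above:

```python
import math

def weibull_pdf(x, shape, scale):
    """Two-parameter Weibull density with shape alpha and scale beta."""
    return (shape / scale**shape) * x**(shape - 1) * math.exp(-(x / scale)**shape)

def mixture_pdf(x):
    """0.6*Weib(1.52, 0.74) + 0.4*Weib(3.53, 1.88), as in the density plot."""
    return (0.6 * weibull_pdf(x, 1.52207, 0.7351)
            + 0.4 * weibull_pdf(x, 3.52965, 1.8820))

# A true density integrates to ~1; check with a simple Riemann sum
xs = [0.001 + i * 0.001 for i in range(10000)]
area = sum(mixture_pdf(x) for x in xs) * 0.001
print(round(area, 2))  # 1.0
```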
In general, be aware that the PLOTS=DENSITY option produces a graph in which the legend labels refer to the REGRESSION parameters. For example, if you use PROC FMM to fit a mixture of normal distributions, the parameter estimates in the legend are for the mean and the VARIANCE of the normal distributions. However, if you intend to use those estimates in other SAS functions (such as PDF, CDF, and RAND), you must take the square root of the variance to obtain the standard deviation.
This article uses PROC FMM to fit a mixture of two Weibull distributions. The article shows how to interpret the parameter estimates from the procedure by transforming them into the shape and scale parameters for the Weibull distribution. The article also emphasizes that if you use the PLOTS=DENSITY option to produce a graph, the legend in the graph contains the regression parameters, which are not the same as the parameters that are used for the PDF, CDF, and RAND functions.
The post Fit a mixture of Weibull distributions in SAS appeared first on The DO Loop.
|
Ground vibration mitigation by using ballast mat | JVE Journals
Huan Yuan1 , Ruijun Zhu2 , Junwei Huang3 , Junjie Li4 , Chao Zou5
1, 2, 3, 4, 5School of Civil and Transportation Engineering, Guangdong University of Technology, Guangzhou, China
Received 1 February 2021; received in revised form 13 February 2021; accepted 20 February 2021; published 7 May 2021
Frequently moving subway trains often cause large vibrations of over-track buildings, especially tall buildings, as well as noise problems, which lead to discomfort for the occupants. Therefore, it is highly necessary to adopt measures to decrease the vibrations. A finite element model of the vehicle-rail-foundation system was established using the ABAQUS program, and the vibration mitigation ability of ballast mats was investigated. The results showed that the ballast mat can mitigate train-induced environmental vibrations. The acceleration level can be greatly reduced within 30 m of the track centerline on the ground. Increasing the thickness of the ballast mat and increasing the static foundation modulus can enhance the vibration mitigation effect. The research results can provide references for train-induced environmental vibration mitigation design.
Finite element model of vehicle-rail-foundation was established
Vibration mitigation ability by using ballast mats was investigated
Ballast mat has ability to mitigate train-induced environmental vibrations
Keywords: ballast mat, subway, finite element.
Ballast mat is made of natural and synthetic rubber, which can mitigate vibration and even prevent the propagation of vibration. In addition, it can also adjust the rigidity of the track superstructure and prevent excessive wear of the ballast [1]. The use of ballast mats plays an important role in environmental vibration control, for two reasons. First, many cities are planning or implementing the development of subway superstructures. However, within 30 m of the track centerline, ground vibration and building vibration have exceeded the limits of relevant vibration standards. The large track stiffness is one of the main causes of environmental vibration and secondary noise. Second, the ballasted track often suffers from defects such as rapid ballast pulverization, whitening of the ballast bed, serious compaction, and "liquefaction" of the ballast due to insufficient thickness of the ballast bed. These defects not only greatly increase the workload of track maintenance and repair, but also greatly reduce the environmental friendliness of rail transportation. Accordingly, it is particularly important to improve and strengthen the ballasted track.
A number of works have shown that ballast mats can effectively reduce the overall rigidity of the ballasted track, reduce wheel-rail impact, extend the track maintenance cycle and reduce track vibration and noise [2]. Costa et al. [3] proposed a numerical study of the dynamic behavior of ballast mats using a train-track-ground model. Sheng et al. [4] built a finite element model of a ballastless track of the China Railway System and used mats as a vibration isolation method to obtain the vibration mitigation effect. Diego et al. [5] proposed a new material for isolation mats, which measurements have shown to be suitable for railway vibration mitigation. However, detailed research on ballast mat properties for vibration mitigation remains insufficient.
In order to discuss the ballast mat properties for vibration mitigation, this paper established a vehicle-rail-foundation numerical model. The propagation characteristics of the vibrations were analyzed in the time domain to determine how the vibration mitigation effect changes with the thickness and the static foundation modulus of the ballast mat. The research results can enrich the vibration mitigation measures for subway projects that use ballast mats to achieve vibration reduction.
Wheel-rail forces mainly appear in three frequency ranges: (1) a low-frequency range (0.5 Hz-10 Hz), caused by the relative motion of the train body and suspension; (2) an intermediate-frequency range (30 Hz-60 Hz), caused by the spring-back effect of the wheelset mass on the rail; (3) a high-frequency range (100 Hz-140 Hz), produced by the resistance of the wheel-rail contact surface due to the movement of the rail [6].
Therefore, it is feasible to simulate the train load with an excitation force function, as shown in Eq. (1):
$F(t) = k_1 k_2 \left(P_0 + P_1 \sin\omega_1 t + P_2 \sin\omega_2 t + P_3 \sin\omega_3 t\right),$ (1)
where $k_1$ and $k_2$ are the superposition and dispersion coefficients, $P_0$ is the static train load, and $P_1$, $P_2$, $P_3$ are the loads related to the above three frequency ranges.
Let the unsprung mass of the train be $M_0$; then the load can be defined by Eq. (2):
$P_i = M_0 a_i \omega_i^2.$ (2)
The relevant parameters of the train are shown in Table 1. According to the calculation conditions, the selected A-type subway train load simulation equation is Eq. (3):
$F(t) = 1.08 \times \left(20 + 0.3701\sin 31.42t + 0.6940\sin 157.08t + 0.2024\sin 392.7t\right).$ (3)
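Eq. (3) is straightforward to evaluate numerically; a minimal sketch (coefficients copied from the equation; angular frequencies are in rad/s):

```python
import math

def excitation_force(t):
    """A-type subway train excitation force F(t) from Eq. (3)."""
    return 1.08 * (20
                   + 0.3701 * math.sin(31.42 * t)
                   + 0.6940 * math.sin(157.08 * t)
                   + 0.2024 * math.sin(392.7 * t))

# At t = 0 only the static part remains: F(0) = 1.08 * 20 = 21.6
print(excitation_force(0.0))
```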
Table 1. Related parameters of subway train
Mass of wheelset
The track model consists of steel rails, sleepers and ballast. The rail is U75V hot-rolled steel with a linear density of 60 kg/m. The rail gauge is 1435 mm; the track bed is a gravel bed with a thickness of 0.45 m. Table 2 shows the parameters for the track model.
Table 2. Related parameters of track structure model
2.3. Soil model
The soil model dimension is selected as 130 m × 130 m × 55 m, as shown in Fig. 2. Since the vibration caused by the subway train is a micro-vibration, a linear elastic model is adopted for the soil. The soil properties were obtained from the results of the geological survey.
Fig. 1. Finite element model of track and sleeper
Fig. 2. Foundation soil model
2.4. Ballast mat
A ballast mat is set under the ballast. The ballast mat is made of high-quality rubber and is wear-resistant, oil-resistant, and ozone-resistant (Fig. 3). Its width is 1500 mm, its thickness is 27 mm, and its unit area weight is 18 kg/m². When the load is 0.020-0.100 N/mm², the static foundation modulus is 0.030 N/mm³ ±15 %; when the preload is 0.060 N/mm² at 40 Hz, the dynamic foundation modulus is 0.080 N/mm³ ±15 %. A fixed hysteretic damping value of 0.1 is assumed for the mat. Fig. 4 shows the finite element model after adding the ballast mat, which was inserted between the sub-ballast and the embankment.
Fig. 3. Ballast mat
The Type A train model was set as the load input, and the calculated results of the numerical model at different distances from the track were compared with field measurement data to verify the reliability of the model, as shown in Fig. 5. The calculated values matched the measured values well, which indicates that the established numerical model can simulate the vibration damping characteristics of the ballast mat.
Fig. 5. Model verification
Fig. 6. Ballast mat vibration reduction effect
3.1. Influence of setting ballast mat
The vibration mitigation characteristics of the ballast mat are shown in Fig. 6. The ballast mat effectively attenuates the vibration: the ground vibration can be reduced by about 10 dB within 30 m of the track centerline, and the attenuation decreases with increasing distance from the centerline.
3.2. Impact of the thickness of the ballast mat on vibration reduction effect
In order to determine the influence of the thickness of the ballast mat on vibration mitigation, the thickness was varied without changing the width or the static foundation modulus; in line with the actual ballast mat model, thicknesses of 15 mm, 23 mm, 27 mm and 32 mm were compared and analyzed.
In Fig. 7, the vibration isolation effect of the ballast mat improves as its thickness increases. However, the improvement becomes marginal once the thickness is sufficient. The closer to the track centerline, the more pronounced the impact of increasing the mat thickness.
Fig. 7. Vibration mitigation of ballast mats of different thicknesses
Fig. 8. Vibration mitigation of ballast mats under different static foundation modulus
3.3. Impact of the static foundation modulus of ballast mat
To study the effect of the static foundation modulus of the ballast mat on vibration mitigation, the static foundation modulus was varied without changing the width and thickness, in line with the actual production range of ballast mats; the results are shown in Fig. 8.
As the static foundation modulus increases, the vibration mitigation of the ballast mat increases; the closer to the center line of the rail, the more obvious the effect. However, at some distance from the center line of the rail, the vibration isolation effect tends to be consistent.
High-quality natural rubber synthetic ballast mats have the advantages of wear-resistance, oil-resistance, ozone-resistance and weather-resistance. Research on their vibration isolation effect shows:
1) The ballast mat has obvious damping effect on the vibration transmission. Within 30 m from the track centerline, the ground vibration can be reduced by about 10 dB, and the attenuation of vibration decreases with the increase of the distance from the center line of track.
2) Increasing the thickness of the ballast mat and increasing the static foundation modulus benefit vibration mitigation; the closer to the rail center line, the greater the attenuation of the vibration level.
Tao Z., Wang Y., Sanayei M., Moore J. A., Zou C. Experimental study of train-induced vibration in over-track buildings in a metro depot. Engineering Structures, Vol. 198, 2019, p. 109473. [Publisher]
Zou C., Wang Y., Moore J. A., Sanayei M. Train-induced field vibration measurements of ground and over-track buildings. Science of the Total Environment, Vol. 575C, 2017, p. 1339-1351. [Publisher]
Alves Costa P., Calçada R., Silva Cardoso A. Ballast mats for the reduction of railway traffic vibrations. Numerical study. Soil Dynamics and Earthquake Engineering, Vol. 42, 2012, p. 137-150. [Publisher]
Sheng X., Zheng W., Zhu Z., Luo T., Zheng Y. Properties of rubber under-ballast mat used as ballastless track isolation layer in high-speed railway. Construction and Building Materials, Vol. 240, 2020, p. 117822. [Publisher]
Diego S., Casado J. A., Carrascal I., Ferreño D., Cardona J., Arcos R. Numerical and experimental characterization of the mechanical behavior of a new recycled elastomer for vibration isolation in railway applications. Construction and Building Materials, Vol. 134, 2017, p. 18-31. [Publisher]
Zou C., Moore J. A., Sanayei M., Wang Y. Impedance model for estimating train-induced building vibrations. Engineering Structures, Vol. 172, 2018, p. 739-750. [Publisher]
|
Oscillations of transonic flow past a symmetric profile with a blunt trailing edge | JVE Journals
Alexander G. Kuzmin1
1Department of Fluid Dynamics, St. Petersburg State University, St. Petersburg, Russia
Received 23 January 2021; received in revised form 10 February 2021; accepted 18 February 2021; published 7 May 2021
Copyright © 2021 Alexander G. Kuzmin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We study the two-dimensional turbulent transonic flow past a symmetric profile with a blunt base and flat sides. The numerical simulation is based on the unsteady Reynolds-averaged Navier-Stokes equations. The obtained solutions reveal both symmetric and asymmetric oscillating flows past the profile at zero angle of attack. The occurrence of the symmetric or asymmetric flow regime depends on the time history of the boundary conditions. A jet injection into the wake does not eliminate the flow non-uniqueness, though it attenuates the flow oscillations.
Keywords: turbulent flow, numerical simulation, lift oscillations, non-uniqueness.
The thermal protection of blades is an important issue in turbine engineering. An up-to-date blade design usually incorporates a coolant flow injection through an orifice in the trailing edge of the blade. The injection and flow behavior in the base region were investigated experimentally and numerically in a number of works [1-3]. The paper [3], e.g., focused on the development of oscillations in the wake of a flat-sided profile with a blunt base. The obtained time-averaged flows were symmetric about the airfoil's chord. Meanwhile, in the 2000s, numerical studies showed that flat-sided airfoils admit asymmetric flows at zero angle of attack apart from symmetric ones [4], due to instability of the interaction between a shock wave and the flow acceleration region developed at the rear of the airfoil.
In the present paper, we consider transonic flow over a simple flat-sided profile similar to the one examined in [3] with an emphasis on the asymmetric flow regimes. An effect of a jet injection on the lift coefficient oscillations is discussed.
2. Problem formulation and numerical method
The symmetric profile No. 1 under consideration is constituted by:
(a) two parallel segments y(x) = ±0.04 at 0.3 ≤ x ≤ 0.7;
(b) the nose y(x) = ±0.4x/3 at 0 ≤ x < 0.3;
(c) the rear y(x) = ±0.4(1 − x)/3 at 0.7 < x < 0.8;
(d) the blunt base −0.08/3 ≤ y ≤ 0.08/3 at x = 0.8,
where (x, y) are non-dimensional Cartesian coordinates. Profile No. 1 is actually a 20 percent truncation of the double wedge discussed in [5]. The fully turbulent flow over profile No. 1 is governed by the unsteady Reynolds-averaged Navier-Stokes equations (URANS) with respect to the static temperature T(x, y, t), density ρ(x, y, t), and velocity components U(x, y, t), V(x, y, t), where t is time.
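As a quick sanity check of the geometry, the piecewise definition can be evaluated numerically. The Python sketch below is not part of the original study; the base location x = 0.8 is inferred from the orifice position given in Section 4. It confirms that the three branches join continuously and that the base half-height equals 0.08/3:

```python
def upper_surface(x):
    """Upper surface of profile No. 1 (non-dimensional coordinates)."""
    if 0.0 <= x < 0.3:      # nose
        return 0.4 * x / 3
    if 0.3 <= x <= 0.7:     # flat side
        return 0.04
    if 0.7 < x <= 0.8:      # rear, truncated at the blunt base x = 0.8 (assumed)
        return 0.4 * (1 - x) / 3
    raise ValueError("x outside the profile")

# Branches join continuously at x = 0.3 and x = 0.7; base half-height is 0.08/3
assert abs(upper_surface(0.3) - 0.04) < 1e-12
assert abs(upper_surface(0.75) - 0.4 * 0.25 / 3) < 1e-12
assert abs(upper_surface(0.8) - 0.08 / 3) < 1e-12
```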
A lens-type outer boundary of the computational domain is constituted by arcs Γ1 and Γ2, which extend from −100 to 100 in the y-direction and from −40 to 120 in the x-direction, see Fig. 1. On the inflow part Γ1, we prescribe the temperature T∞ = 250 K, angle of attack α = 0, free-stream Mach number M∞ < 1, and a turbulence level of 1 %. The air is treated as a perfect gas whose heat capacity ratio γ is 1.4 and whose specific heat at constant pressure cp is 1004.4 J/(kg·K).
On Γ2, we set the static pressure p∞ = 3×10⁵ N/m², which is related to ρ∞ and T∞ by the equation of state p = ρRT, where R = cp − cp/γ. On profile No. 1, we impose the no-slip condition and zero heat flux. Initial conditions are either the parameters of the free stream or a nonuniform flow obtained for other values of M∞. In Section 4, we will formulate an extra condition that simulates a jet injection through an orifice at x = 0.8.
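The gas constant and the free-stream density follow directly from the stated values; a quick Python check (the input numbers are exactly those quoted in the text):

```python
gamma = 1.4      # heat capacity ratio
cp = 1004.4      # specific heat at constant pressure, J/(kg*K)
T_inf = 250.0    # free-stream temperature, K
p_inf = 3.0e5    # outflow static pressure, N/m^2

# R = cp - cp/gamma, equivalently cp*(gamma - 1)/gamma
R = cp - cp / gamma
rho_inf = p_inf / (R * T_inf)   # equation of state p = rho*R*T

print(round(R, 1))        # 287.0 -- the familiar gas constant of air, J/(kg*K)
print(round(rho_inf, 2))  # 4.18 kg/m^3
```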
The URANS equations were solved with the ANSYS-18.2 CFX finite-volume solver [6] using the SST k–ω turbulence model [7]. The computational mesh was constituted by quadrilaterals in 40 layers on profile No. 1 and by triangles in the rest of the computational domain. The total number of mesh cells was 467,904, and the dimensionless thickness y⁺ of the first mesh layer on profile No. 1 was less than 1. The cells were clustered near the profile for an accurate resolution of the boundary layer and shocks. The time step of 10⁻⁵ s ensured a root-mean-square Courant-Friedrichs-Lewy number smaller than 2.
Fig. 1. Sketch of the computational mesh
3. Symmetric and asymmetric flows at zero angle of attack
First, we used the uniform free stream for initialization of time-dependent solutions and flow computation in the bands:

0.8300 ≤ M∞ ≤ 0.8445, (1)
0.8445 < M∞ < 0.8451, (2)
0.8451 ≤ M∞ ≤ 0.8490. (3)
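For reference, the three bands can be encoded as a small lookup. The Python sketch below merely summarizes the regimes described in the following paragraphs; the band edges are exactly those of Eqs. (1)-(3):

```python
def flow_regime(M_inf):
    """Map the free-stream Mach number to the flow-regime bands (1)-(3)."""
    if 0.8300 <= M_inf <= 0.8445:
        return "band 1: oscillating symmetric flow, two small supersonic regions"
    if 0.8445 < M_inf < 0.8451:
        return "band 2: oscillating symmetric flow, large shock-terminated supersonic regions"
    if 0.8451 <= M_inf <= 0.8490:
        return "band 3: transition to an asymmetric regime (positive or negative lift)"
    return "outside the studied bands"

print(flow_regime(0.8448))  # falls in band (2)
```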
In band (1), numerical simulations demonstrated the development of an oscillating symmetric flow with two small local supersonic regions (in which M(x, y) > 1), one on each side of the profile. The oscillations are caused by instability of the boundary layer separation at the rear and instability of the vortex pattern in the wake.
In band (2), the solutions showed the development of an oscillating symmetric flow in which a large local supersonic region, terminated by a shock wave, forms on each side of the profile.
In band (3), the numerical simulations revealed that, due to instability of the symmetric flows, there is a transition to an asymmetric regime with either negative or positive y-component of the aerodynamic force, i.e., lift L, obtained by integration of the flow pressure over the profile. To provoke a transition to, e.g., the flow regime with positive lift, one can prescribe a perturbation α = 0.1° of the angle of attack on Γ1, followed by a reset of α to 0. For example, Fig. 2 shows instantaneous iso-Mach lines in the asymmetric flow with L > 0 obtained at M∞ = 0.845.
The calculated asymmetric flow at M∞ = 0.845 was then used for flow computations step-by-step at smaller and larger M∞. This scenario made it possible to determine the bifurcation band (4):

0.841 ≤ M∞ ≤ 0.847, (4)

in which the asymmetric flows (with positive or negative lift L) are stable with respect to small perturbations. We notice that band (4) is 1.4 times longer than the bifurcation band for the full (non-truncated) double wedge [5]. Margins of the oscillating lift coefficient CL = 2L/(ρ∞U∞²·l·1 m) are displayed in Fig. 3, where the sketches next to regions 1-4 point out the number and locations of the supersonic regions in the flow. The frequency of lift oscillations is approximately 2100 Hz in regions 1, 2, 4, and it increases to 4000 Hz in region 3. The Reynolds number based on the length of the profile l = 0.4 m is 1.1×10⁷.
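With the free-stream values quoted above, the normalization in the lift-coefficient definition can be evaluated explicitly. The Python sketch below uses a hypothetical lift of 500 N per metre of span, purely for illustration (it is not a value reported in the paper):

```python
import math

gamma, cp = 1.4, 1004.4
R = cp - cp / gamma               # gas constant, ~287 J/(kg*K)
T_inf, p_inf = 250.0, 3.0e5       # free-stream temperature and outflow pressure
l = 0.4                           # profile length, m
M_inf = 0.845                     # a Mach number inside band (4)

rho_inf = p_inf / (R * T_inf)
U_inf = M_inf * math.sqrt(gamma * R * T_inf)   # free-stream speed, m/s

L_span = 500.0                    # hypothetical lift per metre of span, N
# C_L = 2L / (rho_inf * U_inf^2 * l * 1 m), as in the definition above
C_L = 2 * L_span / (rho_inf * U_inf**2 * l * 1.0)
print(round(U_inf, 1), round(C_L, 4))  # 267.8 0.0083
```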
Fig. 2. Instantaneous iso-Mach lines in the flow with L > 0 over profile No. 1 at M∞ = 0.845
Fig. 3. Margins of lift coefficient oscillations over profile No. 1 versus M∞ at zero angle of attack: regions 1 and 3 – symmetric flows, regions 2 and 4 – asymmetric ones
4. Flow control by a jet injection
To control the flow oscillations, we set a jet emanating from the orifice −9.2×10⁻³ < y < 9.2×10⁻³, x = 0.8 in the base of profile No. 1. In the orifice, we prescribe the static pressure p_jet along with the static temperature T_jet = 250 K. Computations showed a high sensitivity of the amplitude of lift coefficient oscillations to the ratio p_jet/p∞. This is explained by the small velocities of the flow in the base region and its intricate structure, which is sensitive to weak perturbations.
For example, in the asymmetric flow at M∞ = 0.843, an increase in p_jet/p∞ from 0.73 to 0.933 yields a fourfold decrease in the amplitude of the CL oscillations, see region 2 in Fig. 4. Though this pressure rise produces very small Mach numbers in the jet, it considerably attenuates the flow oscillations in the wake. The attenuated oscillations in the wake, in turn, have a damping effect on the oscillations of CL via the boundary layer and the subsonic region over the rear of the profile.
A further increase in p_jet/p∞ from 0.933 to 1.0 leads to full damping of the oscillations, see Figs. 4, 5, though it does not change the mean value of CL. We notice that an increase of p_jet/p∞ from 0.950 to 0.967 triggers the development of low-frequency oscillations, on which a higher frequency is superposed, see Fig. 6.
In the symmetric flow regime, the jet injection also produces a damping effect on the oscillations, see region 1 in Fig. 4. In addition, the rise of p_jet/p∞ is accompanied by a reduction of the spacing between the local supersonic regions on the two sides of profile No. 1. This eventually triggers a transition from the symmetric regime to an asymmetric one at sufficiently large p_jet/p∞.
At the larger free-stream Mach number M∞ = 0.8455, computations showed a similar effect of the jet injection on the asymmetric and symmetric flows.
Fig. 4. Margins of the lift coefficient oscillations at M∞ = 0.843 versus the relative jet pressure p_jet/p∞: region 1 – symmetric flow, region 2 – asymmetric flow
Fig. 5. Static pressure contours and velocity vectors in the near wake of the asymmetric flow at M∞ = 0.843 with jet injection, p_jet/p∞ =
Fig. 6. A change of lift coefficient oscillations in time caused by the change of p_jet/p∞ from 0.950 to 0.967 at t =
5. Conclusions
The turbulent transonic flow over profile No. 1 exhibits oscillations and non-uniqueness of flow regimes at zero angle of attack and free-stream Mach numbers in band (4). Oscillations of the lift coefficient are very sensitive to the subsonic jet emanating from the blunt trailing edge (base). The numerical simulation showed that, at M∞ = 0.843 and M∞ = 0.8455, the jet injection attenuates the oscillations, though it does not eliminate the flow non-uniqueness.
This research was performed using computational resources provided by the Computational Center of St. Petersburg State University (http://cc.spbu.ru). The work was partially supported by the Russian Foundation for Basic Research under grant no. 19-01-00242.
References
[1] Raffel M., Kost F. Investigation of aerodynamic effects of coolant ejection at the trailing edge of a turbine blade model by PIV and pressure measurements. Experiments in Fluids, Vol. 24, 1998, p. 447-461.
[2] Martinez-Cava A., Wang Y., Vicente J., Valero E. Pressure bifurcation phenomenon on supersonic blowing trailing edges. AIAA Journal, Vol. 57, 2019, p. 153-164.
[3] Martinez-Cava A., Valero E., Vicente J., Paniagua G. Coanda flow characterization on base bleed configurations using global stability analysis. AIAA Paper, 2019, https://doi.org/10.2514/6.2019-0881.
[4] Kuzmin A. Non-unique transonic flows over airfoils. Computers and Fluids, Vol. 63, 2012, p. 1-8.
[5] Kuzmin A. Transonic flow bifurcations over a double wedge. Journal of Physics: Conference Series, Vol. 1697, 2020, p. 012207.
[6] ANSYS Fluids – Computational Fluid Dynamics, 2020, https://www.ansys.com/products/fluids.
[7] Menter F. R. Review of the shear-stress transport turbulence model experience from an industrial perspective. International Journal of Computational Fluid Dynamics, Vol. 23, 2009, p. 305-316.
Automatic Target Recognition (ATR) in SAR Images
This example shows how to train a region-based convolutional neural networks (R-CNN) for target recognition in large scene synthetic aperture radar (SAR) images using the Deep Learning Toolbox™ and Parallel Computing Toolbox™.
The Deep Learning Toolbox provides a framework for designing and implementing deep neural networks with algorithms, pretrained models, and apps.
The Parallel Computing Toolbox lets you solve computationally and data-intensive problems using multicore processors, GPUs, and computer clusters. It enables you to use GPUs directly from MATLAB® and accelerate the computation capabilities needed in deep learning algorithms.
Neural-network-based algorithms have shown remarkable achievements in diverse areas ranging from natural scene detection to medical imaging, with large improvements over standard detection algorithms. Inspired by these advancements, researchers have applied deep-learning-based solutions to the field of SAR imaging. In this example, the approach is applied to the problem of target detection and recognition. The R-CNN network employed here not only integrates detection and recognition but also provides an effective and efficient solution that scales to large scene SAR images.
Download the dataset and the pretrained model
Load and analyze the image data
To illustrate this workflow, the example uses the Moving and Stationary Target Acquisition and Recognition (MSTAR) clutter dataset published by the Air Force Research Laboratory. The dataset is available for download here. Alternatively, the example also includes a subset of the data used to showcase the workflow. The goal is to develop a model that can detect and recognize the targets.
This example uses a subset of the MSTAR clutter dataset that contains 300 training and 50 testing clutter images with five different targets. The data was collected using an X-band sensor in the spotlight mode with a one-foot resolution. The data contains rural and urban types of clutters. The types of targets used are BTR-60 (armoured car), BRDM-2 (fighting vehicle), ZSU-23/4 (tank), T62 (tank), and SLICY (multiple simple geometric shaped static target). The images were captured at a depression angle of 15 degrees. The clutter data is stored in the PNG image format and the corresponding ground truth data is stored in the groundTruthMSTARClutterDataset.mat file. The file contains 2-D bounding box information for five classes, which are SLICY, BTR-60, BRDM-2, ZSU-23/4, and T62 for training and testing data. The size of the dataset is 1.6 GB.
Download the dataset using the helperDownloadMSTARClutterData helper function, defined at the end of this example.
outputFolder = pwd;
dataURL = ('https://ssd.mathworks.com/supportfiles/radar/data/MSTAR_ClutterDataset.tar.gz');
helperDownloadMSTARClutterData(outputFolder,dataURL);
Depending on your Internet connection, the download process can take some time. The code suspends MATLAB® execution until the download process is complete. Alternatively, download the dataset to a local disk using your web browser and extract the file. When using this approach, change the <outputFolder> variable in the example to the location of the downloaded file.
Download the pretrained network from the link here using the helperDownloadPretrainedSARDetectorNet helper function, defined at the end of this example. The pretrained model allows you to run the entire example without having to wait for the training to complete. To train the network, set the doTrain variable to true.
doTrain = false;
pretrainedNetURL = ('https://ssd.mathworks.com/supportfiles/radar/data/TrainedSARDetectorNet.tar.gz');
if ~doTrain
    helperDownloadPretrainedSARDetectorNet(outputFolder,pretrainedNetURL);
end
Load the ground truth data (training set and test set). These images are generated by placing target chips at random locations on a background clutter image constructed from the downloaded raw data. The generated targets are used as ground truth to train and test the network.
load('groundTruthMSTARClutterDataset.mat', "trainingData", "testData");
The ground truth data is stored in a six-column table, where the first column contains the image file paths and the second to the sixth columns contain the different target bounding boxes.
% Display the first few rows of the data set
trainingData(1:4,:)
imageFilename SLICY BTR_60 BRDM_2 ZSU_23_4 T62
______________________________ __________________ __________________ __________________ ___________________ ___________________
"./TrainingImages/Img0001.png" {[ 285 468 28 28]} {[ 135 331 65 65]} {[ 597 739 65 65]} {[ 810 1107 80 80]} {[1228 1089 87 87]}
"./TrainingImages/Img0002.png" {[595 1585 28 28]} {[ 880 162 65 65]} {[308 1683 65 65]} {[1275 1098 80 80]} {[1274 1099 87 87]}
"./TrainingImages/Img0003.png" {[200 1140 28 28]} {[961 1055 65 65]} {[306 1256 65 65]} {[ 661 1412 80 80]} {[ 699 886 87 87]}
"./TrainingImages/Img0004.png" {[ 623 186 28 28]} {[ 536 946 65 65]} {[ 131 245 65 65]} {[1030 1266 80 80]} {[ 151 924 87 87]}
Display one of the training images and box labels to visualize the data.
img = imread(trainingData.imageFilename(1));
bbox = reshape(cell2mat(trainingData{1,2:end}),[4,5])';
labels = {'SLICY', 'BTR_60', 'BRDM_2', 'ZSU_23_4', 'T62'};
annotatedImage = insertObjectAnnotation(img,'rectangle',bbox,labels,...
    'TextBoxOpacity',0.9,'FontSize',45);
figure
imshow(annotatedImage);
title('Sample Training Image With Bounding Boxes and Labels')
Create an R-CNN object detector for five targets: SLICY, BTR_60, BRDM_2, ZSU_23_4, T62.
objectClasses = {'SLICY', 'BTR_60', 'BRDM_2', 'ZSU_23_4', 'T62'};
The network must be able to classify the five targets and a background class in order to be trained using the trainRCNNObjectDetector function available in Deep Learning Toolbox™. 1 is added in the code below to include the background class.
The final fully connected layer of the network defines the number of classes that it can classify. Set the final fully connected layer to have an output size equal to numClassesPlusBackground.
% Define input size (the target chips are assumed here to be 128-by-128 grayscale)
inputSize = [128 128 1];
% Define number of classes plus the background class
numClassesPlusBackground = numel(objectClasses) + 1;
% Define network
layers = createNetwork(inputSize,numClassesPlusBackground);
Now, these network layers can be used to train an R-CNN based five-class object detector.
Use trainingOptions to specify network training options. trainingOptions by default uses a GPU if one is available (requires Parallel Computing Toolbox™ and a CUDA® enabled GPU with compute capability 3.0 or higher). Otherwise, it uses a CPU. You can also specify the execution environment by using the ExecutionEnvironment name-value argument of trainingOptions. To automatically detect if you have a GPU available, set ExecutionEnvironment to auto. If you do not have a GPU, or do not want to use one for training, set ExecutionEnvironment to cpu. To ensure the use of a GPU for training, set ExecutionEnvironment to gpu.
% Note: the hyperparameter values below are illustrative assumptions
options = trainingOptions('sgdm', ...
    'MiniBatchSize', 128, ...
    'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 10, ...
    'Verbose', true, ...
    'ExecutionEnvironment', 'auto');
Use trainRCNNObjectDetector to train R-CNN object detector if doTrain is true. Otherwise, load the pretrained network. If training, adjust NegativeOverlapRange and PositiveOverlapRange to ensure that training samples tightly overlap with ground truth.
if doTrain
    % Train an R-CNN object detector. This will take several minutes
    detector = trainRCNNObjectDetector(trainingData, layers, options, ...
        'PositiveOverlapRange',[0.5 1], 'NegativeOverlapRange',[0.1 0.5]);
else
    % Load a previously trained detector
    preTrainedMATFile = fullfile(outputFolder,'TrainedSARDetectorNet.mat');
    load(preTrainedMATFile);
end
Evaluate Detector on a Test Image
To get a qualitative idea of the functioning of the detector, pick a random image from the test set and run it through the detector. The detector is expected to return a collection of bounding boxes where it thinks the detected targets are, along with scores indicating confidence in each detection.
imgIdx = randi(height(testData));
testImage = imread(testData.imageFilename(imgIdx));
% Detect SAR targets in the test image
[bboxes,score,label] = detect(detector,testImage,'MiniBatchSize',16);
To understand the results achieved, overlay the results with the test image. A key parameter is the detection threshold, the score above which the detector detected a target. A higher threshold will result in fewer false positives; however, it also results in more false negatives.
scoreThreshold = 0.8;
outputImage = testImage;
for idx = 1:length(score)
    thisScore = score(idx);
    if thisScore > scoreThreshold
        annotation = sprintf('%s: (Confidence = %0.2f)', label(idx),...
            round(thisScore,2));
        outputImage = insertObjectAnnotation(outputImage, 'rectangle', bboxes(idx,:),...
            annotation,'TextBoxOpacity',0.9,'FontSize',45,'LineWidth',2);
    end
end
f = figure;
imshow(outputImage)
f.Position(3:4) = [860,740];
title('Predicted Boxes and Labels on Test Image')
By looking at the images sequentially, you can understand the detector performance. To perform more rigorous analysis using the entire test set, run the test set through the detector.
% Create a table to hold the bounding boxes, scores and labels output by the detector
numImages = height(testData);
results = table('Size',[numImages 3],...
    'VariableTypes',{'cell','cell','cell'},...
    'VariableNames',{'Boxes','Scores','Labels'});
% Run detector on each image in the test set and collect results
for i = 1:numImages
    imgFilename = testData.imageFilename{i};
    I = imread(imgFilename);
    % Run the detector
    [bboxes, scores, labels] = detect(detector, I,'MiniBatchSize',16);
    % Collect the results
    results.Boxes{i} = bboxes;
    results.Scores{i} = scores;
    results.Labels{i} = labels;
end
The possible detections and their bounding boxes for all images in the test set can be used to calculate the detector's average precision (AP) for each class. The AP is the average of the detector's precision at different levels of recall, so let us define precision and recall.
Precision=\frac{tp}{tp+fp}
Recall=\frac{tp}{tp+fn}
where:
tp – number of true positives (the detector predicts a target when it is present),
fp – number of false positives (the detector predicts a target when it is not present),
fn – number of false negatives (the detector fails to detect a target when it is present).
A detector with a precision of 1 is considered good at detecting targets that are present, while a detector with a recall of 1 is good at avoiding false detections. Precision and recall have an inverse relationship.
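Both metrics follow directly from the detection counts. A small standalone Python sketch (not part of the MATLAB example) with purely illustrative counts:

```python
def precision_recall(tp, fp, fn):
    """Precision = tp/(tp+fp); Recall = tp/(tp+fn)."""
    return tp / (tp + fp), tp / (tp + fn)

# Illustrative counts: 45 correct detections, 5 false alarms, 10 missed targets
p, r = precision_recall(tp=45, fp=5, fn=10)
print(p, round(r, 3))  # 0.9 0.818
```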
Plot the relationship between precision and recall for each class. The average value of each curve is the AP. Plot curves for detection thresholds with the value of 0.5.
For more details, see evaluateDetectionPrecision.
% Extract expected bounding box locations from test data
expectedResults = testData(:, 2:end);
% Evaluate the object detector using the average precision metric at a threshold of 0.5
threshold = 0.5;
[ap, recall, precision] = evaluateDetectionPrecision(results, expectedResults, threshold);
% Plot precision recall curve
f = figure; ax = gca; f.Position(3:4) = [860,740];
xlabel('Recall'); ylabel('Precision');
grid on; hold on; legend('Location', 'southeast');
title('Precision Vs Recall Curve for Threshold Value 0.5 for Different Classes');
for i = 1:length(ap)
    % Plot precision/recall curve
    plot(ax,recall{i},precision{i},'DisplayName', ...
        ['Average Precision for Class ' trainingData.Properties.VariableNames{i+1} ' is ' num2str(round(ap(i),3))])
end
The AP for most of the classes is more than 0.9. Out of these, the trained model appears to struggle the most in detecting the SLICY targets. However, it is still able to achieve an AP of 0.7 for the class.
The function createNetwork takes as input the image size inputSize and number of classes numClassesPlusBackground. The function returns a CNN.
function layers = createNetwork(inputSize,numClassesPlusBackground)
layers = [
    imageInputLayer(inputSize)                    % Input Layer
    convolution2dLayer(3,32,'Padding','same')     % Convolution Layer
    reluLayer                                     % Relu Layer
    batchNormalizationLayer                       % Batch normalization Layer
    maxPooling2dLayer(2,'Stride',2)               % Max Pooling Layer
    convolution2dLayer(3,128,'Padding','same')
    convolution2dLayer(6,512)
    dropoutLayer(0.5)                             % Dropout Layer
    fullyConnectedLayer(512)                      % Fully connected Layer
    fullyConnectedLayer(numClassesPlusBackground)
    softmaxLayer                                  % Softmax Layer
    classificationLayer                           % Classification Layer
    ];
end
function helperDownloadMSTARClutterData(outputFolder,DataURL)
radarDataTarFile = fullfile(outputFolder,'MSTAR_ClutterDataset.tar.gz');
if ~exist(radarDataTarFile,'file')
    disp('Downloading MSTAR Clutter data (1.6 GB)...');
    websave(radarDataTarFile,DataURL);
    untar(radarDataTarFile,outputFolder);
end
end
function helperDownloadPretrainedSARDetectorNet(outputFolder,pretrainedNetURL)
preTrainedMATFile = fullfile(outputFolder,'TrainedSARDetectorNet.mat');
preTrainedZipFile = fullfile(outputFolder,'TrainedSARDetectorNet.tar.gz');
if ~exist(preTrainedMATFile,'file')
    if ~exist(preTrainedZipFile,'file')
        disp('Downloading pretrained detector (29.4 MB)...');
        websave(preTrainedZipFile,pretrainedNetURL);
    end
    untar(preTrainedZipFile,outputFolder);
end
end
This example shows how to train an R-CNN for target recognition in SAR images. The pretrained network attained an accuracy of more than 0.9.
[1] MSTAR Overview. https://www.sdms.afrl.af.mil/index.php?collection=mstar