Existence of Multiple Solutions for a Class of -Dimensional Discrete Boundary Value Problems
Weiming Tan, Zhan Zhou, "Existence of Multiple Solutions for a Class of -Dimensional Discrete Boundary Value Problems", International Journal of Mathematics and Mathematical Sciences, vol. 2010, Article ID 198465, 14 pages, 2010. https://doi.org/10.1155/2010/198465
Weiming Tan1,2 and Zhan Zhou1
2Institute of Mathematics and Physics, Wuzhou University, Wuzhou 543002, China
Academic Editor: Raul F. Manasevich
By using critical point theory, we obtain some new results on the existence of multiple solutions for a class of -dimensional discrete boundary value problems. The results obtained extend or improve existing ones.
Let , , denote the sets of all natural numbers, integers, and real numbers, respectively. For any , define , when .
In this paper, we consider the existence of multiple solutions for the following -dimensional discrete nonlinear boundary value problem:
where , , is the forward difference operator defined by , with , and is continuous for . with , and are constants, and are -dimensional vectors, and are real value functions defined on and , respectively, and .
Existence of solutions of discrete boundary value problems has been the subject of many investigations. Motivated by Hartman's landmark paper [1], Henderson [2] showed that the uniqueness of solutions implies the existence of solutions for some conjugate boundary value problems. In recent years, by using various methods and techniques, such as the nonlinear alternative of Leray-Schauder type, the cone-theoretic fixed point theorem, and the method of upper and lower solutions, a series of existence results for the solutions of the BVP (1.1) in some special cases, for example, ; have been obtained in the literature. We refer to [3–7].
Critical point theory has been an important tool for investigating the periodic solutions and boundary value problems of differential equations [8–10]. In recent years, it has been applied to the study of periodic solutions [11–13] and boundary value problems [14–17] of difference equations.
For the case when , the scalar BVP (1.1) was studied by Yu and Guo in [17]. By using critical point theory, they obtained various conditions that guarantee the existence of one solution, but they did not obtain existence conditions for multiple solutions. In this scalar case, the BVP can be viewed as the discrete analogue of the following self-adjoint differential equation:
which is a generalization of Emden-Fowler equation:
The Emden-Fowler equation originated from earlier theories of gaseous dynamics in astrophysics [18], and later, found applications in the study of fluid mechanics, relative mechanics, nuclear physics, and in the study of chemical reaction system [19].
For the case where , , (the zero vector of ), BVP (1.1) reduces to
which was studied by Jiang and Zhou in [15]. They obtained existence results for multiple solutions, again by using critical point theory.
In the aforementioned references, most of the difference equations involved are scalar. The purpose of this paper is to further demonstrate the power of critical point theory in the study of the existence of solutions of discrete boundary value problems, and to obtain various conditions for the existence of at least two nontrivial solutions of the BVP (1.1).
The remainder of this paper is organized as follows. First, in Section 2, we establish the variational framework associated with (1.1) and transfer the problem of the existence of solutions of (1.1) into that of the existence of critical points of the corresponding functional. Some basic results are also recalled. Then, in Section 3, we present various new conditions on the existence of at least two nontrivial solutions of the BVP (1.1). Some examples are given to illustrate the conclusions. We mention that our results generalize the ones in [15] and improve the ones in [20].
To conclude the introduction, we refer to [21, 22] for the general background on difference equations.
2. Preliminary and Variational Framework
Let be a real Hilbert space, , which means that is a continuously Fréchet-differentiable functional defined on . is said to satisfy the Palais-Smale condition (P-S condition for short) if any sequence for which is bounded and as possesses a convergent subsequence in .
Let be the open ball in with radius and centered at and let denote its boundary. The following lemmas will be useful in the proofs of our main results.
Lemma 2.1 (Mountain Pass Lemma [10]). Let be a real Hilbert space, and assume that satisfies the P-S condition and the following conditions. (1) There exist constants and such that , where . (2) and there exists such that . Then possesses a critical value . Moreover, where
Lemma 2.2. Let , , , then
Without loss of generality, we assume that , where , and there exists a function , such that for any , , where is the gradient of in , and where is the zero vector in .
Let , then BVP (1.1) reduces to
Then, can be equipped with the inner product
by which the norm can be induced by
where , , and and are the norm and the inner product in , respectively.
Define a linear mapping by
Then is a linear, one-to-one mapping. Clearly, .
With this mapping, BVP (2.3) and (2.4) can be represented by the matrix equation:
Define a functional on as
where . Then .
For any , denote
where satisfies (2.4). Thus, there is a one-to-one correspondence from to
Clearly, if and only if satisfies (2.3) and (2.4). Therefore, the existence of solutions to BVP (2.3) and (2.4) is transferred to the existence of the critical point of the functional on .
By a solution of (2.3) and (2.4), we mean that satisfies (2.3), and (2.4) holds. is nontrivial if .
In this section, we suppose that the matrix defined in (2.11) is positive definite and that , are the minimal and maximal eigenvalues of , respectively. It is clear that , are also the minimal and maximal eigenvalues of , respectively. The first result is as follows.
Theorem 3.1. If there exist constants , , and , , satisfying then BVP (2.3) and (2.4) has at least two nontrivial solutions.
Proof. Since , we know that where denotes the zero element of . By (3.1), we get and . Let Then by (3.2), we get where . Then , we have
Since , we see that as . Thus is bounded from above on , and can achieve its maximum on . In other words, there exists , such that . So is a critical point of , and is a solution of BVP (2.3) and (2.4).
For any with , we have
So and is a nontrivial solution to BVP (2.3) and (2.4).
To obtain another nontrivial solution of BVP (2.3) and (2.4), we will use the Mountain Pass Lemma. We first show that satisfies the P-S condition.
In fact, for any sequence in for which is bounded and , there exists such that , and it follows from (3.5) that that is, Since , this implies that is bounded and possesses a convergent subsequence. So satisfies the P-S condition on .
Choosing , then for , from (3.6), we get This shows that satisfies condition (1) of the Mountain Pass Lemma.
On the other hand, from (3.5), as , so there exists such that for . Pick such that , then , and . So condition (2) of the Mountain Pass Lemma is satisfied. Therefore, possesses a critical value where
A critical point corresponding to is nontrivial as . Let be a critical point corresponding to the critical value of . If , then we are done. Otherwise, , which gives Pick , then , and we have Thus, there exists such that , and is also a critical point of in .
If , then Theorem 3.1 holds. Otherwise, . In this situation, we replace with in the above arguments; then possesses a critical value and where Assume that is a critical point corresponding to . If , then the proof is complete. Otherwise, . Similarly, we can find a critical point of such that holds for some . Clearly, . The proof of Theorem 3.1 is now complete.
From Theorem 3.1, we have the following corollaries.
Corollary 3.2. If there exist constants , , and , , satisfying Then BVP (2.3) and (2.4) has at least two nontrivial solutions.
Proof. For any , , then , . So and condition (3.1) of Theorem 3.1 is satisfied. Let Then where . So, for any , and condition (3.2) of Theorem 3.1 is satisfied. The conclusion follows from Theorem 3.1.
Remark 3.3. For the special case where , the BVP (2.3) and (2.4) was studied in [15]. Here, the corresponding matrix becomes which is positive definite when and . So, . In this case, Corollary 3.2 reduces to Theorem of [15]. Therefore, our results extend the ones in [15]. Corollary 3.2 also improves the conclusion of Theorem in [20].
Corollary 3.4. If there exist constants , , , , and such that (3.1) holds and then BVP (2.3) and (2.4) has at least two nontrivial solutions.
Proof. It suffices to prove that (3.20) implies (3.2). In fact, pick , then for , so condition (3.2) of Theorem 3.1 is satisfied, and the proof is complete.
The following corollaries are obvious.
Corollary 3.5. If there exist , such that (3.1) holds and Then BVP (2.3) and (2.4) has at least two nontrivial solutions.
Corollary 3.6. If there exist constants , such that (3.14) holds and Then BVP (2.3) and (2.4) has at least two nontrivial solutions.
Now we will give an example to illustrate our results.
Example 3.7. Consider the discrete boundary value problem: with where for and , .
In this example, , , , , , , , , So, . With Matlab, we can compute the approximate eigenvalues of the matrix : Thus is positive definite and , . Taking , we find that Picking , , , then for any , , we have For any , , ,
In view of Corollary 3.2, we see that the boundary value problem (3.24) and (3.25) has at least two nontrivial solutions.
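The eigenvalue check in Example 3.7 can also be reproduced without Matlab. Since the specific matrix of the example is not recoverable here, the sketch below uses the standard k × k second-difference matrix tridiag(−1, 2, −1) as an illustrative stand-in; its eigenvalues have the closed form λ_j = 2 − 2 cos(jπ/(k+1)):

```python
import math

def second_difference_eigenvalues(k):
    """Closed-form eigenvalues of the k x k tridiagonal matrix tridiag(-1, 2, -1),
    the matrix of the discrete second-order difference operator with
    zero boundary conditions."""
    return [2.0 - 2.0 * math.cos(j * math.pi / (k + 1)) for j in range(1, k + 1)]

eigs = second_difference_eigenvalues(5)
lam_min, lam_max = min(eigs), max(eigs)
# All eigenvalues are positive, so this matrix is positive definite,
# the kind of hypothesis required in Section 3.
print(lam_min > 0)  # True
```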
By a similar method, we can obtain the following results.
Theorem 3.8. Assume that for any . If there exist constants , , , , and satisfying Then BVP (2.3) and (2.4) has at least two nontrivial solutions.
Corollary 3.9. Assume that for any . If there exist constants , , , , and satisfying then BVP (2.3) and (2.4) has at least two nontrivial solutions.
Corollary 3.10. Assume that for any . If there exist constants , , and such that (3.32) holds and then BVP (2.3) and (2.4) has at least two nontrivial solutions.
Corollary 3.11. Assume that for any . If there exist constants , and such that (3.32) holds and then BVP (2.3) and (2.4) has at least two nontrivial solutions.
Corollary 3.12. Assume that for any . If there exist , and such that (3.34) holds and then BVP (2.3) and (2.4) has at least two nontrivial solutions.
Finally, we give another example.
Example 3.13. Consider the boundary value problem (3.24) and (3.25) where is replaced by Clearly, and . Pick , , then we have , and Pick , , then for any , we can see that For any , we can get According to Corollary 3.9, we know that the given BVP in this example has at least two nontrivial solutions.
Remark 3.14. When is negative definite, we can obtain similar conclusions. We do not repeat them here.
This work is supported by the Specialized Fund for the Doctoral Program of Higher Education (no. 20071078001), by the Natural Science Foundation of GuangXi (no. 0991279), by the Foundation of the Education Department of GuangXi Province (no. 200807MS121), and by the project of Scientific Research Innovation Academic Group for the Education System of Guangzhou City.
P. Hartman, “Difference equations: disconjugacy, principal solutions, Green's functions, complete monotonicity,” Transactions of the American Mathematical Society, vol. 246, pp. 1–30, 1978.
J. Henderson, “Existence theorems for boundary value problems for nth-order nonlinear difference equations,” SIAM Journal on Mathematical Analysis, vol. 20, no. 2, pp. 468–478, 1989.
R. P. Agarwal and D. O'Regan, “Boundary value problems for discrete equations,” Applied Mathematics Letters, vol. 10, no. 4, pp. 83–89, 1997.
R. P. Agarwal and D. O'Regan, “A fixed-point approach for nonlinear discrete boundary value problems,” Computers & Mathematics with Applications, vol. 36, no. 10–12, pp. 115–121, 1998.
R. P. Agarwal and D. O'Regan, “Nonpositone discrete boundary value problems,” Nonlinear Analysis: Theory, Methods & Applications, vol. 39, no. 2, pp. 207–215, 2000.
F. M. Atici, “Existence of positive solutions of nonlinear discrete Sturm-Liouville problems,” Mathematical and Computer Modelling, vol. 32, no. 5-6, pp. 599–607, 2000.
J. R. Graef and J. Henderson, “Double solutions of boundary value problems for 2mth-order differential equations and difference equations,” Computers & Mathematics with Applications, vol. 45, no. 6–9, pp. 873–885, 2003.
K. C. Chang, Critical Point Theory and Its Applications, Modern Mathematics Series, Science and Technical Press, Shanghai, China, 1986.
Z. Guo and J. Yu, “Existence of periodic and subharmonic solutions for second-order superlinear difference equations,” Science in China, Series A: Mathematics, vol. 46, no. 4, pp. 506–515, 2003.
Z. Zhou, J. Yu, and Z. Guo, “Periodic solutions of higher-dimensional discrete systems,” Proceedings of the Royal Society of Edinburgh, Section A, vol. 134, no. 5, pp. 1013–1022, 2004.
Z. Zhou, J. Yu, and Z. Guo, “The existence of periodic and subharmonic solutions to subquadratic discrete Hamiltonian systems,” The ANZIAM Journal, vol. 47, no. 1, pp. 89–102, 2005.
L. Q. Jiang and Z. Zhou, “Multiple nontrivial solutions for a class of higher dimensional discrete boundary value problems,” Applied Mathematics and Computation, vol. 203, no. 1, pp. 30–38, 2008.
H. H. Liang and P. X. Weng, “Existence and multiple solutions for a second-order difference boundary value problem via critical point theory,” Journal of Mathematical Analysis and Applications, vol. 326, no. 1, pp. 511–520, 2007.
R. H. Fowler, “The solution of Emden's and similar differential equations,” Monthly Notices of the Royal Astronomical Society, vol. 91, pp. 63–91, 1930.
J. S. W. Wong, “On the generalized Emden-Fowler equation,” SIAM Review, vol. 17, pp. 339–360, 1975.
G. Zhang and Z. Yang, “Existence of 2^n nontrivial solutions for discrete two-point boundary value problems,” Nonlinear Analysis: Theory, Methods & Applications, vol. 59, no. 7, pp. 1181–1187, 2004.
R. P. Agarwal, Difference Equations and Inequalities: Theory, Methods, and Applications, vol. 228 of Monographs and Textbooks in Pure and Applied Mathematics, Marcel Dekker, New York, NY, USA, 2nd edition, 2000.
W. G. Kelley and A. C. Peterson, Difference Equations: An Introduction with Applications, Academic Press, Boston, Mass, USA, 2nd edition, 1991.
Copyright © 2010 Weiming Tan and Zhan Zhou. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
How to calculate hydraulic gradient
The hydraulic gradient calculator helps you determine the magnitude and direction of the water flow. It is a hydrogeological concept and is helpful in determining the groundwater movement. You can also track the flow of contaminants using the same hydraulic gradient that causes the water to move or flow. You can also use the value of hydraulic gradient in the hydraulic conductivity calculator to find out how much water flows through the soil.
The present article will briefly explain what a hydraulic gradient is and how to calculate a hydraulic gradient.
Before we get to the formula for the hydraulic gradient, let's define what it is.
The hydraulic gradient is the ratio of the hydraulic head difference to the distance between two sample points. In simple words, it tells us how the hydraulic head changes over a certain distance. If you plot the hydraulic head vs. distance on a graph, the hydraulic gradient is the slope of the line. One can also define it as the change in water level per unit distance.
A hydraulic gradient is a vector quantity whose sign or direction denotes the movement of water: if the sign is negative, the water flow is downwards, whereas a positive hydraulic gradient signifies that the flow is upwards. For the heads h_1 and h_2 at two points over a distance l, the hydraulic gradient h_g is:

h_g = \frac{h_1 - h_2}{l}
Change in head across a distance L is observable using hydraulic gradient.
Find the hydraulic gradient if the heads are 5 and 14 meters at points 1 and 2, respectively. The distance between the two points is 9 meters.
To calculate the hydraulic gradient:
Enter the head at point 1, h_1 = 5 m.
Insert the head at point 2, h_2 = 14 m. The calculator will return the change in head, \Delta h = -9 m.
Fill in the distance between points 1 and 2, l = 9 m.
The hydraulic gradient calculator will return the answer as:
\small \begin{align*} \qquad h_g &= \frac{h_1 - h_2}{l} \\\\ &= \frac{5 - 14}{9} = -1 \end{align*}
The answer -1 m/m implies that the hydraulic head decreases by 1 m for every meter of distance from point 1, and that the water flow is downwards.
What do you mean by hydraulic gradient?
The hydraulic gradient is the ratio of the change in the hydraulic head to the distance between two points. It is a vector: the direction of the hydraulic gradient (or head gradient) gives you the direction of water movement, while the magnitude tells you how rapidly the head changes over that distance.
How do I calculate hydraulic gradient?
Find the hydraulic heads, h₁, and h₂.
Find the difference between hydraulic heads, Δh.
Divide the difference by the distance between the two points to obtain the hydraulic gradient. Mathematically, this is: h_g = (h₁ - h₂)/L.
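The three steps above can be sketched in a few lines of Python (the function name is ours, purely illustrative):

```python
def hydraulic_gradient(h1, h2, distance):
    """Return the hydraulic gradient h_g = (h1 - h2) / distance."""
    if distance <= 0:
        raise ValueError("distance between the two points must be positive")
    return (h1 - h2) / distance

# Worked example from the text: h1 = 5 m, h2 = 14 m, l = 9 m.
print(hydraulic_gradient(5.0, 14.0, 9.0))  # -1.0, i.e. the flow is downwards
```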
What do you mean by positive hydraulic gradient?
A positive hydraulic gradient denotes an upwards flow, whereas a negative hydraulic head gradient implies the flow is downwards. The sign tells us the direction of water movement in aquifers and underground. It is also helpful for drawing potentiometric surface maps of areas of interest.
What is the hydraulic gradient if the head varies 5 m over the length of 10 m?
The hydraulic gradient for the given condition is 0.5 m/m. The variation or difference in head is 5 m, i.e., Δh = 5 m, and the distance between the two points is 10 m. Mathematically, h_g = Δh/L = 5 / 10 = 0.5 m/m.
Head at point 1 (h₁)
Head at point 2 (h₂)
Change in head (Δh)
Hydraulic gradient (hg)
|
Hydraulic and Water Resources Engineering Department, Debre Markos University, Debre Markos, Ethiopia.
Abebe, S. (2018) Application of Time Series Analysis to Annual Rainfall Values in Debre Markos Town, Ethiopia. Computational Water, Energy, and Environmental Engineering, 7, 81-94. doi: 10.4236/cweee.2018.73005.
An autoregressive model of order p, AR(p), has the form

{x}_{t}={\varnothing }_{1}{x}_{t-1}+{\varnothing }_{2}{x}_{t-2}+\cdots +{\varnothing }_{p}{x}_{t-p}+{w}_{t}

where {x}_{t} is the current value of the series, {\varnothing }_{1},{\varnothing }_{2}\cdots {\varnothing }_{p} are the model parameters with {\varnothing }_{p}\ne 0, {x}_{t-1},{x}_{t-2}\cdots {x}_{t-p} are past values of the series, and {w}_{t} is white noise with variance {\sigma }_{w}^{2}. Using the backshift operator B, defined by B{x}_{t}={x}_{t-1}, the AR(p) model can be written concisely as \varnothing \left(B\right){x}_{t}={w}_{t}, where \varnothing \left(B\right) is the autoregressive operator

\varnothing \left(B\right)=1-{\varnothing }_{1}B-{\varnothing }_{2}{B}^{2}-\cdots -{\varnothing }_{p}{B}^{p}

For p = 1, this reduces to the AR(1) model {x}_{t}=\varnothing {x}_{t-1}+{w}_{t}, where {w}_{t} is white noise.

A moving average model of order q, MA(q), has the form

{x}_{t}={w}_{t}+{\theta }_{1}{w}_{t-1}+{\theta }_{2}{w}_{t-2}+\cdots +{\theta }_{q}{w}_{t-q}

where {\theta }_{1},{\theta }_{2}\cdots {\theta }_{q} are the model parameters with {\theta }_{q}\ne 0. In operator form, {x}_{t}=\theta \left(B\right){w}_{t}, where \theta \left(B\right) is the moving average operator

\theta \left(B\right)=1+{\theta }_{1}B+{\theta }_{2}{B}^{2}+\cdots +{\theta }_{q}{B}^{q}

For q = 1, this reduces to the MA(1) model {x}_{t}={w}_{t}+\theta {w}_{t-1}.

Combining the two, the ARMA(p, q) model is

{x}_{t}={\varnothing }_{1}{x}_{t-1}+\cdots +{\varnothing }_{p}{x}_{t-p}+{w}_{t}+{\theta }_{1}{w}_{t-1}+\cdots +{\theta }_{q}{w}_{t-q}

with {\varnothing }_{p}\ne 0 and {\theta }_{q}\ne 0, or in operator form \varnothing \left(B\right){x}_{t}=\theta \left(B\right){w}_{t}. For a nonstationary series, differencing of order d, {\nabla }^{d}{x}_{t}={\left(1-B\right)}^{d}{x}_{t}, where {\nabla }^{d} is the differencing operator applied to {x}_{t} (with \nabla {x}_{t}=\left(1-B\right){x}_{t} for d = 1), leads to the ARIMA(p, d, q) model

\varnothing \left(B\right){\left(1-B\right)}^{d}{x}_{t}=\theta \left(B\right){w}_{t}

where {w}_{t} is white noise with variance {\sigma }_{w}^{2}.
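The AR(1) model above is easy to illustrate with a short simulation. The sketch below (pure Python; the parameter value φ = 0.6 and the sample size are arbitrary choices for illustration, not values from the paper) simulates an AR(1) series and recovers φ by least squares:

```python
import random

def simulate_ar1(phi, n, seed=42):
    """Simulate x_t = phi * x_{t-1} + w_t with standard Gaussian white noise."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, 1.0)
        series.append(x)
    return series

def estimate_phi(series):
    """Least-squares estimate of phi from the lag-1 regression."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

xs = simulate_ar1(phi=0.6, n=5000)
print(estimate_phi(xs))  # close to the true value 0.6
```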
|
How to use this mass to density calculator
Other relevant calculators
This mass to density calculator is a simple tool to convert mass to density or density to mass if you know the object's volume. In this article, we shall briefly discuss:
How to calculate density from an object's mass (or weight).
How to find mass given density and volume.
The density of an object is the amount of mass per unit volume. We can express it as:

\rho = \frac{m}{V}

where:
\rho — The density of the substance;
m — The mass (or weight) of the substance; and
V — The volume of the substance.

Its SI unit is kilograms per cubic meter (\text{kg}/\text{m}^3), and its imperial unit is pounds per cubic foot (\text{lb}/\text{ft}^3).
Using this equation, we can find:
The density of the object from its mass and volume;
The mass of the object from its density and volume; or
The volume of the object from its density and mass.
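The three rearrangements of ρ = m/V listed above can be written directly as code (a minimal sketch; the function names are ours):

```python
def density(mass, volume):
    """rho = m / V"""
    return mass / volume

def mass_from(density_value, volume):
    """m = rho * V"""
    return density_value * volume

def volume_from(mass, density_value):
    """V = m / rho"""
    return mass / density_value

# Example: 2 kg of water occupies 0.002 m^3, so rho = 1000 kg/m^3.
print(density(2.0, 0.002))       # 1000.0
print(mass_from(1000.0, 0.002))  # 2.0
```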
This mass to density calculator is easy to use and versatile. You can convert mass to density or density to mass, so long as you know the volume:
To find density, enter the object's mass and volume along with their units.
To find the mass with density and volume, enter the object's density and volume and their units.
If you need to calculate the volume of a cuboidal object at any step, simply use our Advanced mode option at the bottom left of the calculator, and enter its dimensions!
If you enjoyed our mass to density calculator, you will find the following collection useful too:
What is the density of the Earth?
5.514 g/cm³ or 0.1992 lb/in³. You can arrive at this answer through these steps:
The mass of the Earth is 5.97237 × 10²⁴ kg, and its volume is 1.08321 × 10¹² km³.
Divide the mass of the Earth by its volume to get its density, about 5.514 × 10¹² kg/km³, which is equal to 5.514 g/cm³.
Congratulations, you've calculated the mean density of the Earth!
How do you find mass from density and volume?
To find the mass of a substance with its density and volume, follow these steps:
Multiply the given density by the volume of the object.
The value obtained in step 1 is the mass of the object. You can verify your answer using our mass to density calculator.
|
A Light Weight Compliant Hand Mechanism With High Degrees of Freedom | J. Biomech Eng. | ASME Digital Collection
Jason Potratz,
Virtual Soldier Research (VSR) Program, Center for Computer-Aided Design,
, 111 Engineering Research Facility, Iowa City, IA 52242-1000
e-mail: jyang@engineering.uiowa.edu
Virtual Soldier Research (VSR) Program, Center for Computer-Aided Design, and Department of Biomedical Engineering
, Iowa City, IA 52242-1000
Esteban Peña Pitarch,
Department Enginyeria Mecanica,
, Av. Bases de Manresa, 61-73, 08240 Manresa, Spain
Virtual Soldier Research (VSR) Program, Center for Computer-Aided Design, and Department of Biomedical Engineering, and Department of Orthopaedic Surgery and Rehabilitation,
Potratz, J., Yang, J., Abdel-Malek, K., Pitarch, E. P., and Grosland, N. (August 12, 2005). "A Light Weight Compliant Hand Mechanism With High Degrees of Freedom." ASME. J Biomech Eng. November 2005; 127(6): 934–945. https://doi.org/10.1115/1.2052805
This paper presents the design and prototyping of an inherently compliant lightweight hand mechanism. The hand mechanism itself has 15 degrees of freedom and five fingers. Although the degrees of freedom in each finger are coupled, reducing the number of independent degrees of freedom to 5, the 15 degrees of freedom of the hand could potentially be individually actuated. Each joint consists of a novel flexing mechanism that is based on the loading of a compression spring in the axial and transverse direction via a cable and conduit system. Currently, a bench top version of the prototype is being developed; the three joints of each finger are coupled together to simplify the control system. The current control scheme under investigation simulates a control scheme where myoelectric signals in the wrist flexor and extensor muscles are converted into x-y coordinates on a control scheme chart. Static load-deformation analysis of finger segments is studied based on a 3-dimensional model without taking the stiffener into account, and the experiment validates the simulation.
artificial limbs, electromyography, prototypes, medical control systems, High Degree of Freedom, Hand Mechanism, Compliant Lightweight Mechanism
Cables, Compression, Deformation, Degrees of freedom, Design, Grasping, Motors, Springs, Stress, Weight (Mass), Control systems, Engineering prototypes, Artificial limbs, Signals
|
Hohmann Transfer Calculator
Before we start: a Hohmann transfer example
How to use Hohmann transfer calculator
Preliminary definition and concepts
Rocket equation and "delta-v"
Hohmann transfer & delta-v calculation
Traveling from Earth to Mars by using Hohmann transfer calculator
The Hohmann transfer calculator can help you find the transfer trajectory requiring the minimum amount of propellant (fuel and oxidizer) to go from one circular orbit to another. Our calculator can quickly find the characteristics of the Hohmann transfer, including delta-v. Do you have a rocket engine with you? Our calculator can also provide you with the required mass of propellant for a Hohmann transfer.
Are you hearing about Hohmann's transfer for the first time? Are the words "Hohmann transfer delta-v calculation" and "Hohmann transfer orbit" confusing, or are you curious and want to understand what all the fuss is about a Hohmann transfer? If yes, the article below is a perfect place to understand all the concepts with Hohmann transfer examples. Please continue to read on!
Before you start reading our article, we would like to give you a chance to ponder about the Hohmann transfer and its application. Let us say you are the commander of a spaceship that needs to go from Earth to Mars. Your spacecraft uses chemical propulsion, and you want to know how much propellant (including both fuel and oxidizer) is needed for the travel.
What are the assumptions you can make on the nature of Earth's and Mars's orbit such that you can make use of Hohmann transfer? Okay! Do not worry if you do not know the answer yet. We assure you that you will know the answer by the end of the article (psst! the answer is given at the end of this article).
To use our calculator, you need very few details:
First, you should know your primary body characteristics: Mass and radius.
Note: You can use our unit switcher to change the mass unit to "Weights of Earth" if your primary body is Earth or to "Weights of Sun" if it is Sun (by doing this, you are measuring your mass in terms of multiples of the mass of the Earth or Sun).
Next is your initial and final orbit's altitude from the surface of the primary body.
With these pieces of information, our Hohmann transfer calculator will provide you with the properties of the Hohmann transfer orbit (or transfer ellipse) and the delta-v (\Delta V) required. After the delta-v calculation, if you have your rocket engine's I_{sp} and initial mass, our calculator also embeds the rocket equation that provides you with the amount of propellant needed for the transfer.
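The delta-v bookkeeping our calculator performs can be sketched with the vis-viva equation and the Tsiolkovsky rocket equation. The orbit altitudes, specific impulse, and initial mass below are illustrative values, not defaults from the calculator:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6          # mean Earth radius, m
G0 = 9.80665               # standard gravity, m/s^2

def hohmann_delta_v(r1, r2, mu=MU_EARTH):
    """Total delta-v (m/s) for a Hohmann transfer between circular orbits of radii r1 and r2."""
    a_transfer = (r1 + r2) / 2.0  # semi-major axis of the transfer ellipse
    # vis-viva: v = sqrt(mu * (2/r - 1/a))
    dv1 = abs(math.sqrt(mu * (2 / r1 - 1 / a_transfer)) - math.sqrt(mu / r1))
    dv2 = abs(math.sqrt(mu / r2) - math.sqrt(mu * (2 / r2 - 1 / a_transfer)))
    return dv1 + dv2

def propellant_mass(m0, delta_v, isp):
    """Tsiolkovsky rocket equation: propellant an m0-kg craft burns to achieve delta_v."""
    return m0 * (1.0 - math.exp(-delta_v / (isp * G0)))

# Illustrative transfer: 300 km LEO up to geostationary radius (42,164 km).
dv = hohmann_delta_v(R_EARTH + 300e3, 4.2164e7)
print(dv)                                  # roughly 3.9 km/s in total
print(propellant_mass(2000.0, dv, 300.0))  # propellant for a 2000 kg craft, Isp = 300 s
```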
At the bottom of the calculator, we have an advanced mode option. You can use that to get more details such as specific angular momentum and geometrical properties of the transfer orbit.
To go directly to an example of a Hohmann transfer, please see the "traveling from Earth to Mars by using Hohmann transfer calculator" section of this article.
Astrodynamics is the field of physics that deals with the study of the motion of artificial objects under the influence of the gravitational field of one or more natural bodies. Knowledge of astrodynamics is used in interplanetary trajectory design (e.g., Earth to Mars), constellation design of satellites for navigation, communication, Earth observation, futuristic space tourism, etc.
Trajectory of an object
The path traversed by an object in space is called its trajectory. For example, the Moon follows a nearly circular trajectory around the Earth. An object can traverse two different types of trajectories. In astrodynamics, the motion of an object under the influence of a single primary body is called a two-body problem and is characterized by conic sections. In the figure below, you can see that by cutting a right circular cone at different angles, we get the different trajectories (Kepler orbits).
The figure shows a right circular cone with lines cutting at different angles.
Two trajectories are described below:
Open trajectories – When the plane passing through the cone's base cuts the cone either perpendicularly (line $H^1-H$ in the figure above) or at an inclined angle to the horizontal axis (line $P^1-P$), we end up with either a hyperbola or a parabola, respectively. The figure shows this clearly.
Closed trajectories – When the plane cuts the cone either perpendicular (line $C^1-C$) or at an inclined angle (line $E^1-E$) to the vertical axis, we end up with either a circle or an ellipse, respectively. The velocity at each point on a circle is always tangential, whereas on an ellipse the velocity is tangential only at the perigee (p) and apogee (a), the extreme points of the ellipse. The Hohmann transfer uses these properties to calculate the most fuel-efficient transfer. Please check out our ellipse calculator to learn more about the ellipse.
Specific angular momentum is defined as the angular momentum of an object divided by its mass. Each orbit is characterized by a specific angular momentum that remains constant. It is given by the cross product between the position vector $\vec r$ and the velocity vector $\vec v$: $\vec h=\vec r \times \vec v$. Below, we have equations to compute the magnitude of $\vec h$ for circular and elliptical orbits:
Elliptical orbit: The magnitude of specific angular momentum equation is given as:
\begin{equation} \quad\ \ h=\sqrt{2\mu}\sqrt{\frac{r_a r_p}{r_a+r_p}} \end{equation}
where:
$\mu$ – Gravitational parameter, defined as the product of the primary body's mass $M$ and the gravitational constant $G$, i.e., $\mu=GM$. For Earth, $\mu \approx 398600.418\ \text{km}^3/\text s^2$; and
$r_a$ and $r_p$ – Apogee (a) and perigee (p) radii of the ellipse, respectively.
Circular orbit: The circular orbit is a special case of the elliptical orbit in which $r_a$ and $r_p$ are equal. Thus, the equation for $h$ reduces to $h=\sqrt{\mu r}$. If the orbital velocity $v$ is known for a circular orbit, you can instead use the fundamental definition of specific angular momentum, $h = rv$.
If you want to compare the performance of two rocket engines, you will first look at their specific impulse. Specific impulse ($I_{sp}$) is the parameter that tells us how much thrust a rocket produces per unit weight flow rate of propellant. It is given by:
\begin{equation} \quad I_{sp}=\frac{T}{\dot m_pg_o} \end{equation}
where:
$T$ – Thrust produced by the rocket in $\text N$;
$\dot m_p$ – Rate of propellant consumption in $\text{kg/s}$; and
$g_o$ – Acceleration due to gravity at sea level on Earth (also called the standard acceleration due to gravity) in $\text{m/s}^2$.
The unit of specific impulse is the second ($\text s$).
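As a quick sketch of the specific impulse formula above (the thrust and flow-rate numbers below are made-up illustrative values, not a real engine):

```python
G0 = 9.80665  # standard acceleration due to gravity at sea level, m/s^2

def specific_impulse(thrust: float, mdot: float, g0: float = G0) -> float:
    """I_sp = T / (mdot * g0), with thrust in N and mdot in kg/s; result in seconds."""
    return thrust / (mdot * g0)

# Illustrative numbers: 1,000 kN of thrust while burning 300 kg/s of propellant.
print(specific_impulse(1_000_000.0, 300.0))
```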
Transfers are essential aspects of astrodynamics used to move an object from one point to another in space. Specifically, this article focuses on transfers between two closed orbits (also known as periodic orbits) called orbital transfers. In the following paragraphs, let us understand different orbital transfers present in the literature.
An orbital transfer is the transfer of a spacecraft from one orbit to another, for example, moving a spaceship from one circular orbit to another circular orbit. Based on the thrusting strategy used while transferring, orbital transfers can be divided into two types:
Impulse transfer
Impulse transfers are transfers whose thrusting time $t$ is very small compared to the time of flight $\text{ToF}$ (also known as the total transfer time). Before electric propulsion technology, all transfers were impulse transfers accomplished using chemical rocket propulsion. This kind of transfer requires a tremendous amount of fuel, but you reach the destination orbit in a reasonable time (compared to a "low thrust transfer"). The Hohmann transfer is one of the impulse transfer strategies; we will see it in some detail later in the article.
Low thrust transfer
Low thrust transfers are transfers whose thrusting time $t$ is not small compared to the time of flight $\text{ToF}$. Currently, there is a shift in the space industry toward electric propulsion technologies, as they avoid carrying an enormous amount of propellant onboard the spacecraft. This advantage comes at a price: electric propulsion imparts only a small amount of momentum to the spacecraft at a time, so you need more time to reach your destination.
Before going into details of the Hohmann transfer, let us take a detour and understand the famous equation in rocket science.
Suppose you have a rocket engine that uses chemical propulsion (once the rocket is chosen, its performance parameter $I_{sp}$ is fixed). How much velocity change the rocket can achieve using the available propellant $m_p$ is given by the Tsiolkovsky rocket equation:
\begin{equation} \quad \Delta V = I_{sp}g_0 \ln\left(\frac{m_0}{m_f}\right) \end{equation}
where:
$\Delta V$ – Maximum velocity change that can be imparted to the spacecraft, in $\text{m/s}$. This critical quantity indicates how much change in velocity a rocket engine can impart and is commonly called "delta-v" (pronounced "delta-vee") in the space industry;
$I_{sp}$ – Specific impulse in $\text s$;
$g_0$ – Standard acceleration due to gravity at sea level on Earth, in $\text{m/s}^2$;
$m_0$ – Initial mass of the rocket in $\text{kg}$; and
$m_f$ – Final mass in $\text{kg}$ at burnout time $t_b$, after consuming all the propellant $m_p$, so that $m_f=m_0-m_p$.
This equation is valid only for impulse transfers, where the burnout time $t_b$ is much shorter than the time of flight ($\text{ToF}$).
In situations where the required $\Delta V$ is already known for a given $I_{sp}$ and initial mass $m_0$ of the rocket, the propellant mass $m_p$ can be found using:
\footnotesize \begin{equation} m_p =m_0\left(1-\exp{\left(-\frac{\Delta V}{I_{sp}g_0}\right)}\right) \end{equation}
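The rocket equation and its inverted form can be sketched in a few lines of Python. The masses and $I_{sp}$ below are illustrative assumptions of ours, chosen only to show that the two forms are inverses of each other:

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp: float, m0: float, mf: float) -> float:
    """Tsiolkovsky rocket equation: dV = Isp * g0 * ln(m0 / mf)."""
    return isp * G0 * math.log(m0 / mf)

def propellant_mass(isp: float, m0: float, dv: float) -> float:
    """Inverted form: m_p = m0 * (1 - exp(-dV / (Isp * g0)))."""
    return m0 * (1.0 - math.exp(-dv / (isp * G0)))

# Round trip with illustrative numbers: recovering the propellant mass from the
# delta-v should give back exactly m0 - mf.
m0, mf, isp = 1000.0, 600.0, 320.0
dv = delta_v(isp, m0, mf)
print(dv, propellant_mass(isp, m0, dv))
```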
We will make use of this equation later in the article. Now that we have learned about the rocket equation and $\Delta V$, it is the right time to go back to impulse transfers and look at the Hohmann orbit transfer.
A Hohmann transfer is a type of impulse transfer that requires minimum fuel to move an object from one circular orbit to another. Essentially, we have two circular orbits; one is our initial orbit, and another is the destination. These two orbits will be connected by an orbit called a transfer orbit or Hohmann transfer orbit. In a Hohmann transfer, the transfer orbit is an ellipse.
A Hohmann transfer includes two $\Delta V$s (thrustings), as shown in the figure below. As you might expect, there are two scenarios involving a Hohmann transfer: one where you move your object from a lower to a higher altitude orbit (subfigure a), and another from a higher to a lower altitude orbit (subfigure b).
In both cases, the total $\Delta V$ magnitude is the same. As we can see from the figure, the only difference is the direction in which the $\Delta V$ is applied. In the following paragraphs, we consider only the scenario of subfigure a for further analysis.
The figure shows the two scenarios of Hohmann transfer.
The first impulse $\Delta V_1$ is applied at a point p on the initial orbit, which becomes the perigee of the transfer ellipse. Let $v_{1_p}$ be the velocity at point p on the initial orbit. To move the object onto the Hohmann transfer orbit at point p, where the velocity on the transfer orbit is $v_{t_p}$, we need the velocity increment $\Delta V_1$. The following equations are used to find $\Delta V_1$ at point p.
First, the velocity $v_{1_p}$ on the initial orbit is given by:
\begin{equation} \quad v_{1_p}=\frac{h_1}{r_{1_p}} =\frac{\sqrt{\mu r_{1_p}}}{r_{1_p}} \end{equation}
where:
$h_1$ – Specific angular momentum of the initial orbit in $\text m^2/\text s$; and
$r_{1_p}$ – Radius of the initial orbit at p in $\text m$; since the orbit is a circle, $r_{1_p}$ is the same at every point.
Now, the transfer orbit's velocity $v_{t_p}$ at point p is given by:
\begin{equation} \quad v_{t_p}=\frac{h_t}{r_{t_p}} \end{equation}
\begin{equation} \quad v_{t_p}=\frac{\sqrt{2\mu}\sqrt{\frac{r_{t_a} r_{t_p}}{r_{t_a}+r_{t_p}}}}{r_{t_p}} \end{equation}
where:
$h_t$ – Specific angular momentum of the transfer orbit in $\text m^2/\text s$;
$r_{t_a}$ and $r_{t_p}$ – Apogee and perigee radii of the transfer ellipse in $\text m$, respectively; and
$\mu$ – Gravitational parameter in $\text m^3/\text s^2$.
Therefore, the magnitude of $\Delta V_1$ is:
\begin{equation} \quad \Delta V_1=\lvert v_{t_p}-v_{1_p} \rvert \end{equation}
After applying $\Delta V_1$, the object travels along the transfer orbit and reaches point a, the apogee of the transfer ellipse. At point a, which also lies on the destination orbit, the second impulse is applied.
Similarly, to move the object from the transfer ellipse onto the destination orbit, we increment the velocity $v_{t_a}$ of the transfer ellipse at a so that it matches the velocity $v_{2_a}$ of the destination orbit, using the second impulse $\Delta V_2$ at point a.
The velocity $v_{t_a}$ on the transfer orbit at point a can be found just by replacing the $r_{t_p}$ term in the denominator of equation 7 with $r_{t_a}$, and it is given by:
\begin{gather} \quad v_{t_a}=\frac{\sqrt{2\mu}\sqrt{\frac{r_{t_a} r_{t_p}}{r_{t_a}+r_{t_p}}}}{r_{t_a}} \end{gather}
Finally, the destination orbit's velocity $v_{2_a}$ at point a is found using:
\begin{equation} \quad v_{2_a}=\frac{h_2}{r_{2_a}} =\frac{\sqrt{\mu r_{2_a}}}{r_{2_a}} \end{equation}
where:
$h_2$ – Specific angular momentum of the destination orbit in $\text m^2/\text s$; and
$r_{2_a}$ – Radius of the destination orbit at a in $\text m$; since the orbit is a circle, $r_{2_a}$ is the same at every point.
The magnitude of $\Delta V_2$ is then:
\begin{equation} \quad \Delta V_2=\lvert v_{2_a}-v_{t_a} \rvert \end{equation}
Now that we have the two $\Delta V$'s at p and a, our final total $\Delta V_{\text{total}}$ is:
\begin{equation} \quad \Delta V_{\text{total}}=\Delta V_2+\Delta V_1 \end{equation}
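The chain of equations above can be sketched as a single Python function. The structure follows the derivation (circular velocities at p and a, the transfer ellipse's angular momentum, then the two burns); the orbit radii in the example are illustrative assumptions, roughly a 300 km altitude Earth orbit and the geostationary radius:

```python
import math

MU_EARTH = 398_600.418e9  # Earth's gravitational parameter, m^3/s^2

def hohmann_delta_v(r1: float, r2: float, mu: float = MU_EARTH):
    """Delta-v magnitudes for the two burns of a Hohmann transfer between
    circular orbits of radii r1 (initial) and r2 (destination)."""
    v1p = math.sqrt(mu / r1)  # circular velocity on the initial orbit, h1/r1
    v2a = math.sqrt(mu / r2)  # circular velocity on the destination orbit, h2/r2
    # Specific angular momentum of the transfer ellipse (perigee r1, apogee r2):
    h_t = math.sqrt(2.0 * mu) * math.sqrt(r1 * r2 / (r1 + r2))
    dv1 = abs(h_t / r1 - v1p)  # first burn, at p (perigee of the transfer)
    dv2 = abs(v2a - h_t / r2)  # second burn, at a (apogee of the transfer)
    return dv1, dv2, dv1 + dv2

# Orbit raising from a ~300 km altitude orbit to the geostationary radius:
dv1, dv2, total = hohmann_delta_v(6_678_000.0, 42_164_000.0)
print(dv1, dv2, total)
```

The absolute values make the same function usable for the higher-to-lower scenario as well, where only the direction of the burns reverses.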
The figure shows the Hohmann transfer to move from a lower circular orbit to a higher one (commonly called orbit raising in space industry jargon).
In essence, a Hohmann transfer provides the $\Delta V$ required at each of the two points, p and a, which happen to be the perigee and apogee of the transfer orbit, to move an object from one orbit to another.
The analysis is very similar for a transfer from a higher to a lower altitude orbit. The result changes only in the sign of the obtained $\Delta V$, indicating the change in the direction of application.
❓ Why do we wait for approximately two years to send spacecraft to Mars?
Have you wondered why we wait around two years every time we send probes or satellites to Mars? To give you some examples:
Exomars was launched in 2016;
Insight was launched in 2018; and
Mars 2020 was launched in 2020.
As you can see, there is a gap of around two years. This is because Mars's synodic period with respect to Earth is 779.9400 days (or 2.1368 years). We launch our spacecraft around this time to obtain a transfer orbit that closely resembles a Hohmann transfer orbit (we don't get an exact Hohmann transfer orbit because the actual orbits of Earth and Mars are not perfectly circular).
Amount of propellant required
Once we have the $\Delta V$ for our transfer, we can make use of the rocket equation from the rocket equation and "delta-v" section above to find the amount of propellant required:
\footnotesize \begin{equation} \tag{4} m_p =m_0\left(\!1-\exp{\left(-\frac{\Delta V}{I_{sp}g_0}\right)}\!\right) \end{equation}
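Putting the two pieces together, a self-contained end-to-end sketch computes the transfer's total delta-v and then the propellant it demands. The spacecraft mass, $I_{sp}$, and orbit radii are illustrative assumptions of ours:

```python
import math

MU_EARTH = 398_600.418e9  # Earth's gravitational parameter, m^3/s^2
G0 = 9.80665              # standard gravity, m/s^2

def hohmann_total_dv(r1: float, r2: float, mu: float = MU_EARTH) -> float:
    """Total delta-v for a Hohmann transfer between circular orbits r1 -> r2."""
    h_t = math.sqrt(2.0 * mu) * math.sqrt(r1 * r2 / (r1 + r2))
    return abs(h_t / r1 - math.sqrt(mu / r1)) + abs(math.sqrt(mu / r2) - h_t / r2)

def propellant(m0: float, isp: float, dv: float) -> float:
    """Equation 4: m_p = m0 * (1 - exp(-dv / (Isp * g0)))."""
    return m0 * (1.0 - math.exp(-dv / (isp * G0)))

# Illustrative run: raise a 2000 kg spacecraft (Isp = 320 s) from a ~300 km
# altitude Earth orbit to the geostationary radius.
dv = hohmann_total_dv(6_678_000.0, 42_164_000.0)
print(dv, propellant(2000.0, 320.0, dv))
```

Note how quickly the required propellant fraction grows with delta-v: the exponential in the rocket equation is what makes fuel-optimal transfers like Hohmann's so valuable.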
|
Virtual work - Wikipedia @ WordDisk
In mechanics, virtual work arises in the application of the principle of least action to the study of forces and movement of a mechanical system. The work of a force acting on a particle as it moves along a displacement is different for different displacements. Among all the possible displacements that a particle may follow, called virtual displacements, one will minimize the action. This displacement is therefore the displacement followed by the particle according to the principle of least action. The work of a force on a particle along a virtual displacement is known as the virtual work.
$\mathbf{F}=\frac{d}{dt}(m\mathbf{v})$
This article uses material from the Wikipedia article Virtual work, and is written by contributors. Text is available under a CC BY-SA 4.0 International License; additional terms may apply. Images, videos and audio are available under their respective licenses.
|
[Equation listings extracted from the Wikipedia article Deformation (physics): strain measures (engineering strain, stretch ratio, logarithmic, Green, and Almansi strains), normal and shear strain derivations, the infinitesimal strain tensor, affine and rigid-body deformations, displacement fields and their gradients, and the deformation gradient for plane deformation and simple shear.]
Retrieved from "https://en.wikipedia.org/w/index.php?title=Deformation_(physics)&oldid=1087523231"
|
Apply boundary conditions to electromagnetic model - MATLAB electromagneticBC - MathWorks India
electromagneticBC(emagmodel,RegionType,RegionID,"Voltage",V)
electromagneticBC(emagmodel,RegionType,RegionID,"MagneticPotential",A)
electromagneticBC(emagmodel,RegionType,RegionID,"ElectricField",E)
electromagneticBC(emagmodel,RegionType,RegionID,"MagneticField",H)
electromagneticBC(emagmodel,RegionType,RegionID,"FarField","absorbing","Thickness",h)
electromagneticBC(emagmodel,RegionType,RegionID,"FarField","absorbing","Thickness",h,"Exponent",e,"Scaling",s)
electromagneticBC(___,"Vectorized","on")
emagBC = electromagneticBC(___)
electromagneticBC(emagmodel,RegionType,RegionID,"Voltage",V) adds a voltage boundary condition to emagmodel. The boundary condition applies to regions of type RegionType with ID numbers in RegionID. The solver uses a voltage boundary condition for an electrostatic analysis.
electromagneticBC(emagmodel,RegionType,RegionID,"MagneticPotential",A) adds a magnetic potential boundary condition to emagmodel. The boundary condition applies to regions of type RegionType with ID numbers in RegionID. The solver uses a magnetic potential boundary condition for a magnetostatic analysis.
electromagneticBC(emagmodel,RegionType,RegionID,"ElectricField",E) adds an electric field boundary condition to emagmodel. The boundary condition applies to regions of type RegionType with ID numbers in RegionID. The solver uses an electric field boundary condition for a harmonic analysis with the electric field type.
electromagneticBC(emagmodel,RegionType,RegionID,"MagneticField",H) adds a magnetic field boundary condition to emagmodel. The boundary condition applies to regions of type RegionType with ID numbers in RegionID. The solver uses a magnetic field boundary condition for a harmonic analysis with the magnetic field type.
electromagneticBC(emagmodel,RegionType,RegionID,"FarField","absorbing","Thickness",h) adds an absorbing boundary condition to emagmodel and specifies the thickness of the absorbing region. The boundary condition applies to regions of type RegionType with ID numbers in RegionID. The solver uses an absorbing boundary condition for a harmonic analysis.
electromagneticBC(emagmodel,RegionType,RegionID,"FarField","absorbing","Thickness",h,"Exponent",e,"Scaling",s) specifies the rate of attenuation of the waves entering the absorbing region. You can specify e, s, or both.
electromagneticBC(___,"Vectorized","on") uses vectorized function evaluation when you pass a function handle as an argument. If your function handle computes in a vectorized fashion, then using this argument saves time. For details on this evaluation, see More About and Vectorization.
Use this syntax with any of the input argument combinations in the previous syntaxes.
emagBC = electromagneticBC(___) returns the electromagnetic boundary condition object.
Import and plot a geometry representing a plate with a hole.
Apply the voltage boundary condition on the side faces of the geometry.
bc1 = electromagneticBC(emagmodel,"Voltage",0,"Face",3:6)
ElectromagneticBCAssignment with properties:
MagneticPotential: []
RegionID: [3 4 5 6]
Apply the voltage boundary condition on the face bordering the hole.
bc2 = electromagneticBC(emagmodel,"Voltage",1000,"Face",7)
Apply a magnetic potential boundary condition on the boundary of a circle.
geometryFromEdges(emagmodel,@circleg);
electromagneticBC(emagmodel,"Edge",1,"MagneticPotential",0)
MagneticPotential: 0
Voltage: []
Use a function handle to specify a boundary condition that depends on the coordinates.
Create a unit circle geometry and include it in the model.
Specify the voltage on the boundary using the function $V(x,y)=x^2$.
bc = @(location,~)location.x.^2;
electromagneticBC(emagmodel,"Edge",1:emagmodel.Geometry.NumEdges, ...
"Voltage",bc)
Voltage: @(location,~)location.x.^2
Specify an absorbing boundary condition and an electric field on a boundary for harmonic analysis.
Import and plot a 2-D geometry representing a plate with a hole.
gm = importGeometry(emagmodel,"PlateHolePlanar.stl");
pdegplot(gm,"EdgeLabels","on")
Specify the electric field on the circular edge.
electromagneticBC(emagmodel,"Edge",5,"ElectricField",[10 0])
ElectricField: [10 0]
MagneticField: []
FarField: []
Thickness: []
Specify absorbing regions with the thickness 2 on the edges of the rectangle. Use the default attenuation rate for the absorbing regions.
"Thickness",2)
ElectricField: [2x1 double]
MagneticField: [2x1 double]
FarField: "absorbing"
Now specify the attenuation rate for the absorbing regions by using the Exponent and Scaling arguments.
"Thickness",2, ...
"Exponent",3, ...
"Scaling",100)
Apply a magnetic field on the boundary of a square for harmonic analysis.
emagmodel = createpde("electromagnetic","harmonic")
emagmodel =
Change the field type from the default electric to magnetic.
emagmodel.FieldType = "magnetic"
FieldType: "magnetic"
Include a square geometry in the model. Plot the geometry with the edge labels.
geometryFromEdges(emagmodel,@squareg);
Specify a magnetic field on the edges of the square.
electromagneticBC(emagmodel,"Edge",1:4,"MagneticField",[10 10])
ElectricField: []
MagneticField: [10 10]
Electromagnetic model, specified as an ElectromagneticModel object. The model contains a geometry, a mesh, electromagnetic properties of the material, the electromagnetic sources, and the boundary conditions.
"Edge" for a 2-D model | "Face" for a 3-D model
Geometric region type, specified as "Edge" for a 2-D model or "Face" for a 3-D model.
Example: electromagneticBC(emagmodel,"Edge",1,"Voltage",100)
Region ID, specified as a vector of positive integers. Find the edge or face IDs by using pdegplot with the "EdgeLabels" or "FaceLabels" name-value argument set to "on".
V — Voltage
real number | function handle
Voltage, specified as a real number or a function handle. Use a function handle to specify a voltage that depends on the coordinates. For details, see More About.
The solver uses a voltage boundary condition for an electrostatic analysis.
A — Magnetic potential
real number | column vector | function handle
Magnetic potential, specified as a real number, a column vector of three elements for a 3-D model, or a function handle. Use a function handle to specify a magnetic potential that depends on the coordinates. For details, see More About.
The solver uses a magnetic potential boundary condition for a magnetostatic analysis.
E — Electric field
column vector | function handle
Electric field, specified as a column vector of two elements for a 2-D model, a vector of three elements for a 3-D model, or a function handle. Use a function handle to specify an electric field that depends on the coordinates. For details, see More About.
The solver uses an electric field boundary condition for a harmonic analysis with the electric field type.
H — Magnetic field
Magnetic field, specified as a column vector of two elements for a 2-D model, a column vector of three elements for a 3-D model, or a function handle. Use a function handle to specify a magnetic field that depends on the coordinates. For details, see More About.
The solver uses a magnetic field boundary condition for a harmonic analysis with the magnetic field type.
h — Width of far field absorbing region
Width of the far field absorbing region, specified as a nonnegative number. The solver uses an absorbing boundary condition for a harmonic analysis.
e — Exponent defining attenuation rate
Exponent defining the attenuation rate of the waves entering the absorbing region, specified as a nonnegative number. The solver uses an absorbing boundary condition for a harmonic analysis.
s — Scaling parameter defining attenuation rate
Scaling parameter defining the attenuation rate of the waves entering the absorbing region, specified as a nonnegative number. The solver uses an absorbing boundary condition for a harmonic analysis.
emagBC — Handle to electromagnetic boundary condition
ElectromagneticBCAssignment object
Handle to the electromagnetic boundary condition, returned as an ElectromagneticBCAssignment object. For more information, see ElectromagneticBCAssignment Properties.
In Partial Differential Equation Toolbox™, use a function handle to specify these electromagnetic parameters when they depend on the coordinates and, for a harmonic analysis, on the frequency:
Relative permittivity of the material
Relative permeability of the material
Conductivity of the material
Charge density as source (can depend on space only)
Current density as source (can depend on space only)
Voltage on the boundary (can depend on space only)
Magnetic potential on the boundary (can depend on space only)
Electric field on the boundary (can depend on space only)
Magnetic field on the boundary (can depend on space only)
For example, use function handles to specify the relative permittivity, charge density, and voltage on the boundary for emagmodel.
electromagneticProperties(emagmodel, ...
"RelativePermittivity", ...
@myfunPermittivity)
electromagneticSource(emagmodel, ...
"ChargeDensity",@myfunCharge, ...
"Face",2)
electromagneticBC(emagmodel, ...
"Voltage",@myfunBC, ...
"Edge",2)
function emagVal = myfun(location,state)
The solver computes and populates the data in the location and state structure arrays and passes this data to your function. You can define your function so that its output depends on this data. You can use any names in place of location and state.
If you call electromagneticBC with Vectorized set to "on", then location can contain several evaluation points. If you do not set Vectorized or set it to "off", then the solver passes just one evaluation point in each call.
location — A structure array containing these fields:
Furthermore, for boundary conditions, the solver passes this data in the location structure:
location.nx — The x-component of the normal vector at the evaluation point or points
location.ny — The y-component of the normal vector at the evaluation point or points
location.nz — For a 3-D or an axisymmetric geometry, the z-component of the normal vector at the evaluation point or points
location.nr — For an axisymmetric geometry, the r-component of the normal vector at the evaluation point or points
state — A structure array containing this field for a harmonic electromagnetic problem:
state.frequency - Frequency at evaluation points
Relative permittivity, relative permeability, and conductivity get this data from the solver:
state.frequency for a harmonic analysis
Charge density, current density, electric or magnetic field on the boundary get this data from the solver:
Voltage or magnetic potential on the boundary get these data from the solver:
When you solve an electrostatic or magnetostatic problem, the output returned by the function handle must be of the following size. Here, Np = numel(location.x) is the number of points.
1-by-Np if a function specifies the nonconstant relative permittivity, relative permeability, and charge density. For the charge density, the output can also be Np-by-1.
1-by-Np for a 2-D model and 3-by-Np for a 3-D model if a function specifies the nonconstant current density and magnetic potential on the boundary. For the current density, the output can also be Np-by-1 or Np-by-3.
When you solve a harmonic problem, the output returned by the function handle must be of the following size. Here, Np = numel(location.x) is the number of points.
1-by-Np if a function specifies the nonconstant relative permittivity, relative permeability, and conductivity.
2-by-Np for a 2-D problem and 3-by-Np for a 3-D problem if a function specifies the nonconstant electric or magnetic field.
2-by-Np or Np-by-2 for a 2-D problem and 3-by-Np or Np-by-3 for a 3-D problem if a function specifies the nonconstant current density and the field type is electric.
1-by-Np or Np-by-1 for a 2-D problem and 3-by-Np or Np-by-3 for a 3-D problem if a function specifies the nonconstant current density and the field type is magnetic.
If relative permittivity, relative permeability, or conductivity for a harmonic analysis depends on the frequency, ensure that your function returns a matrix of NaN values of the correct size when state.frequency is NaN. Solvers check whether a problem is nonlinear or time dependent by passing NaN state values and looking for returned NaN values.
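The NaN-probing convention above can be illustrated outside MATLAB. Below is a Python analogue of a frequency-dependent coefficient function; the function name and the dispersion model are hypothetical, not part of the toolbox — only the NaN-in/NaN-out pattern mirrors the documented behavior.

```python
import math

def relative_permittivity(x_coords, frequency):
    """Illustrative frequency-dependent coefficient following the
    NaN-probing convention: when the solver passes NaN as the frequency,
    return NaN values of the matching size so the solver can detect
    that the coefficient depends on frequency."""
    if math.isnan(frequency):
        return [math.nan] * len(x_coords)  # signal frequency dependence
    # Hypothetical dispersion model, for illustration only
    return [4.0 / (1.0 + 1e-9 * frequency) for _ in x_coords]
```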
emagVal = @(location,state) myfunWithAdditionalArgs(location,arg1,arg2...)
electromagneticBC(model,"Edge",3,"Voltage",emagVal)
ElectromagneticModel | ElectromagneticBCAssignment Properties | createpde | electromagneticSource | solve | assembleFEMatrices | electromagneticProperties
|
Ator Ali's Income | Toph
Ator Ali loves to make money. He does not care whether his source of income is good or evil. As his intentions are unclear, he cannot find any honest way to make money. As a result, all his income sources are illegal.
Ator Ali doesn't know that a group is secretly observing his income pattern and keeping track of it. The group has set a penalty amount against his monthly income. When the right time comes, they will send him a legal notice and demand the penalty amount from him. After a few months, their analyst figured out that Ator Ali's income increases by d after every month. So there is no need to store the exact income of Ator Ali after each month; instead, his income after any month n can be calculated from his first month's income and the value of d. As there are many dishonest persons in society, the group is busy keeping track of the income of a lot of people, so they need your help to calculate the income of Ator after the n-th month.
An integer T (1 \leq T \leq 1000) denotes the number of test cases. Then for each test case there will be an integer Q (1 \leq Q \leq 500) which denotes the number of queries. Then for each query, there will be three integers x, d, n (0 \leq x, d, n \leq 10^9) that denote Ator Ali's income in the first month from when the group started tracking his income, the increase in income between any two consecutive months, and the month for which they want to calculate Ator Ali's income, respectively.
For each input, output the amount of money earned by Ator Ali on the n-th month.
To solve this problem, the main task is to calculate the income of Ator in any month. As we know the...
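The income follows an arithmetic progression, so no loop is needed. A minimal sketch: the statement excerpt does not fully pin down the month indexing (n may be 0), so this assumes the income in month n is x + (n − 1)·d for n ≥ 1; Python integers handle the 10^9 bounds without overflow.

```python
def income(x, d, n):
    """Income in month n, assuming it starts at x in month 1 and
    grows by d every month (assumed reading of the statement)."""
    return x + (n - 1) * d
```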
|
What is the beat frequency?
How to find beat frequency?
Applications of beat frequency
This beat frequency calculator is a tool that finds the beat frequency of two waves. If you are a music lover, you probably know all about beats, but have you ever wondered what the frequency of the beat you are listening to is? Knowing how to calculate beat frequency helps to understand, e.g., the logic behind the famous binaural beats and their frequencies.
In the article below, you will learn how to use the beat frequency equation and a few significant applications of the beat frequency.
The beat frequency calculator is a practical and efficient tool that lets you calculate the beat frequency of two sound waves.
Frequency refers to the occurrence of an event per unit time. Beats are the interference pattern between two waves that differ in frequencies. Thus, the beat frequency is:
The absolute difference between the frequencies of two waves.
The beat frequency calculator takes the frequencies of two sound waves as its input and calculates their beat frequency. For example, the beat frequency of two waves having frequencies 235 Hz and 335 Hz would be 100 Hz.
When two waves with similar frequencies travel and coincide in the same medium, they produce beats. The beat frequency, which is the absolute difference between the two waves making the beats, is the rate at which you hear the oscillating volume.
If the two waves differ too much in their wavelengths, then the resulting sound will be irregular and non-repeating, which means it will be unpleasant to listen to. Pure beats form if the interfering waves' amplitudes are identical; if the amplitudes differ, you may not detect the beats as easily.
Binaural beats are an exciting kind of beat, also known as auditory illusions, and they play a role in helping you get to sleep.
They are perceived this way because the tone we hear is a perception of sound made exclusively by our brain. When we hear two waves of different frequencies through each ear, our brain materializes a third sound wave, the one we hear, which is called binaural beat frequency.
Not all beats can be categorized as a binaural beat because there are certain conditions that the waves need to fulfill to produce binaural beats:
The two waves must have frequencies less than 1000 Hz;
The beat frequency of the two waves must be less than 30 Hz; and
Each of the two waves must be heard through the respective ear.
The main part of the beat frequency equation is the time period of the two different waves. The time period of an object is inversely proportional to its frequency, and hence you can derive one from the other.
Since we understand that the beat frequency is the absolute difference between the frequencies of two waves, we can summarize the beat frequency formula as:
f_b = |f_2 - f_1|
where:
f_b - Beat frequency;
f_1 - Frequency of the 1st wave; and
f_2 - Frequency of the 2nd wave.
To calculate the beat frequency using our calculator, use the following steps:
Input the frequency of the first wave.
Input the frequency of the second wave.
And that's all you have to do! The rest of it is up to the beat frequency calculator. It will find the beat frequency in your selected unit based on the above beat frequency formula.
The default unit for frequency is Hertz (Hz).
If you find yourself in a situation where you cannot use the calculator, simply subtract the frequency of the first wave from the frequency of the second wave. The frequencies of either wave can be greater than the other, so to avoid having negative values for every beat frequency, we take the absolute value of the difference of the two waves. This is how to calculate beat frequency in your head.
For example, if the frequency of the first wave is 1400 Hz and the second is 1560 Hz, the beat frequency will be 160 Hz. Even if the frequency values were reversed, the result would be the same because of the absolute value function: the absolute value of -160 is 160.
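The mental method above is a one-liner in code; a minimal Python sketch:

```python
def beat_frequency(f1, f2):
    """Beat frequency: the absolute difference of the two wave frequencies."""
    return abs(f2 - f1)
```

Either frequency can be the larger one; the absolute value keeps the result positive.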
Beat frequency is widespread, providing many beats applications, ranging from your regular radio transmissions to radars.
Musicians use the concept of interference beats in tuning their instruments, and the frequency of waves helps with the music scale. Below are some other critical applications:
If the beat frequency of two waves or tones is in the mid-frequency range (500 - 2000 Hz), then our ear perceives it as a third wave/tone. This third tone is called the subjective tone.
Radar detectors emit microwave radiations on vehicles and detect the reflected waves. These waves experience doppler effect/shift (they change in frequency in relation to an observer), and the beat frequency of the emitted and reflected waves give the measure of the speed of the vehicle in motion.
Missing fundamental effect
In music, the fundamental frequency refers to the lowest frequency that an instrument can produce.
The missing fundamental effect applies to the human capacity to hear the sound of many tones in a situation where the fundamental pitch is either missing or diminished.
Have you ever heard a brass instrument played? A single instrument produces three tones, and that is because the musician plays a regular note and a hummed note. The beat frequency of these two notes gives rise to a third tone, which we refer to as multiphonics.
🔊 You might want to check out some more sound and waves calculators:
Speed of sound calculator;
Reverberation time calculator; and
Harmonic wave equation calculator.
How are beats formed?
Beats are formed when two sound waves traveling in the same medium interfere with each other. It causes the sound to alternate between soft and loud tones.
Beats form under the following conditions:
The amplitude of the two waves should be identical.
The frequency difference must be small.
Usually, the difference between frequencies should be less than 10 Hz to hear beats.
What is the beat frequency of two waves with frequencies 12 Hz and 10 Hz?
The beat frequency of the two waves with frequencies 12 and 10 Hz is 2 Hz. The formula for beat frequency is fb = |f2 - f1|, where:
fb - Beat frequency;
f1 - Frequency of 1st wave; and
f2 - Frequency of 2nd wave.
The absolute function in the formula makes the difference always result in a positive value.
How can I calculate beat frequency?
The beat frequency is the rate of the oscillating volume you hear. You can calculate it as the absolute difference between the frequencies of two waves. So, if you want to know how to find the beat frequency of two waves:
Subtract the first wave's frequency from the second wave's frequency; and
Take the absolute value of the result. The beat frequency is always a positive value.
Can sound waves cancel out each other?
Yes, sound waves can cancel each other. The interaction between waves can be constructive and destructive depending on what their wavelengths, frequencies, and relative phases are.
If the peaks of one wave line up with the troughs of the other, the waves interfere destructively and cancel each other out. Noise-canceling headphones are based on this destructive-interference phenomenon.
|
How do I use the low-pass filter calculator?
Different types of low-pass filters — passive vs. active low-pass filters
The RL low-pass filter
The inverting op-amp low-pass filter
The non-inverting op-amp low-pass filter
Welcome to Omni's low pass filter calculator. Whether you're designing an entire sound system complete with a bass boost, or just want to remove high-frequency noise in a signal, the low-pass filter calculator can help you create the perfect low-pass filter circuit for your needs. Read on to learn:
What a low-pass filter is;
The difference between passive and active low-pass filters; and
Whether inductors can be used for low-pass filters.
Using the low-pass filter calculator is easy! Here's how:
Select the filter type you're designing. Based on this choice, the low-pass filter calculator can magically transform into an RC low-pass filter calculator, an op-amp low-pass filter calculator, and others. We offer the following filter types:
An RC low-pass filter;
An RL low-pass filter;
A non-inverting op-amp low-pass filter; and
An inverting op-amp low-pass filter.
Input the values you are using. For the passive (RC and RL) filters, you can enter component values and the desired cutoff frequency. With the active (op-amp) filters, you can also add a gain to your AC signal. Read on if you want to learn how they work!
💡 The low-pass filter calculator is omnidirectional. You can enter whatever values you know and the calculator will work out the rest according to the selected filter type.
A low-pass filter is an electronic circuit that removes higher-frequency components from a given AC signal. In other words, it blocks high frequencies and lets low frequencies pass — hence the name "low-pass filter".
Let's illustrate a low-pass filter's frequency response (a fancy word for how a filter amplifies or dampens signals of certain frequencies). Take a look at the typical Bode plot for a low-pass filter below:
The Bode plot (frequency response) of a low-pass filter. The blue line represents an ideal filter, while the red line represents a real filter.
In the graphic above, we can see that a low-pass filter's frequency response is relatively flat up to f_c, after which it descends quickly. That point f_c is called the cutoff frequency, and it's the defining parameter of a low-pass filter.
💡 The low-pass filter has an ideal (theoretical) and a real (practical) version. Ideally, the cutoff frequency f_c marks the sharp transition point between the Bode plot's flat and sloped regions. In reality, that transition is gradual, and f_c instead marks where the filter's frequency response hits the -3\ \text{dB} point.
Past the cutoff frequency, the filter's frequency response drops at a slope of 20 decibels per decade — or, equivalently, the amplitude of a signal passing through the filter decreases by a factor of 10 for every tenfold increase in the signal's frequency.
💡 Note that the explanation above is mostly true for all low-pass filters, even though the ones we discuss in this article are only first-order filters. The only difference between first-order and higher-order filters (such as RLC circuits) is that the higher-order filters' high-frequency response drops at a rate greater than 20 decibels per decade.
When dealing with signals containing more than one frequency, a low-pass filter will remove the high-frequency components while leaving low-frequency components untouched. For audio systems, this would typically mean the treble is dampened, and the bass is seemingly amplified.
A low-pass filter can be seen as a system that removes high-frequency components from input signals, leaving behind only the low-frequency components.
While all low-pass filters perform the same function, many different low-pass filter circuits exist. They are split into two categories, passive and active, and this dichotomy can be categorized further:
Passive low-pass filters are built with only the three linear passive components: the resistor, the capacitor, and the inductor. They include:
RC low-pass filters; and
RL low-pass filters.
Active low-pass filters can be built with active components, most notably the operational amplifier (or op-amp). They include:
Inverting op-amp low-pass filters; and
Non-inverting op-amp low-pass filters.
The RC low-pass filter consists of a resistor (with resistance R) and a capacitor (with capacitance C) in the configuration shown below:
An RC low-pass filter, built with a resistor and a capacitor.
The RC low-pass filter is probably the most well-known passive low-pass filter. It is simple to design and build thanks to the simple formula for its cutoff frequency f_c:
f_c = \frac{1}{2\pi RC}
The RC low-pass filter takes advantage of the reactive properties of the capacitor, whose impedance Z_C decreases as the signal frequency f increases:
Z_C = \frac{1}{j\cdot (2\pi f)\cdot C}
Higher frequencies can easily pass through the capacitor and skip the load at v_\text{out}; lower frequencies are blocked from flowing through the capacitor and must instead travel through the output terminals. In this way, lower frequencies are delivered to the load, and higher frequencies are filtered out.
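The cutoff formula above is easy to script. A minimal Python sketch (the function name and component values are illustrative):

```python
import math

def rc_cutoff_frequency(R, C):
    """Cutoff frequency of an RC low-pass filter: f_c = 1 / (2*pi*R*C).
    R in ohms, C in farads; returns hertz."""
    return 1.0 / (2.0 * math.pi * R * C)
```

For instance, a 3.3 kΩ resistor with a 47 nF capacitor gives a cutoff near 1 kHz.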
We haven't seen any inductors yet, but don't worry: inductors can be used for a low-pass filter just as easily as capacitors and resistors! Similar to the RC filter, the RL low-pass filter is another passive filter, constructed with a resistor R and an inductor L in this configuration:
An RL low-pass filter, built with a resistor and an inductor.
Its cutoff frequency can be determined with this formula:
f_c = \frac{R}{2\pi L}
Inductors behave in the opposite way to capacitors: their impedance Z_L grows with the frequency f of the signal they conduct:
Z_L = j\cdot(2\pi f)\cdot L
As a result, the inductor in the RL low-pass filter blocks higher frequencies from ever reaching v_\text{out}, while allowing lower frequencies to pass through the inductor and reach the output.
The inverting op-amp low-pass filter is an active filter, meaning it doesn't use just passive components (resistors, capacitors, and inductors). This particular filter incorporates an operational amplifier (op-amp) that feeds back into itself through the feedback resistor R_f and capacitor C.
An inverting op-amp low-pass filter, built with resistors and a capacitor.
Lucky for us, the formula for the cutoff frequency is simple:
f_c = \frac{1}{2\pi R_f C}
Because op-amps are powered by an external voltage source that is independent of the input signal, the inverting op-amp low-pass filter introduces a gain G by which the input signal v_\text{in} is multiplied to obtain v_\text{out}:
\begin{split} v_\text{out} &= G\cdot v_\text{in} \\ G &= -\tfrac{R_f}{R_i} \\ \therefore v_\text{out} &= -\tfrac{R_f}{R_i}\cdot v_\text{in} \\ \end{split}
It's important to note that the inverting op-amp low-pass filter's gain G is negative. Therefore, your output signal v_\text{out} will be flipped to be exactly 180° out of phase with the input signal v_\text{in}; that's why it's called an "inverting filter". For some circuits (like audio systems) this effect doesn't matter much, as speakers don't care about polarity, but in other applications the flip must be kept in mind. If you want to avoid this flip, jump over to the section on non-inverting op-amp low-pass filters.
💡 Remember that op-amps have a maximum DC voltage that can be supplied to their rails; consult the component's datasheet to find it. Whatever its value, this DC supply voltage limits the output of your op-amp filters. If your gain G is too large, or you supply v_\text{in} with signals that are too large, your output will be distorted.
The non-inverting op-amp low-pass filter doesn't flip the signal like the inverting op-amp filter does: its output v_\text{out} retains the polarity of its input v_\text{in}.
A non-inverting op-amp low-pass filter, built with resistors and a capacitor.
The formula for its cutoff frequency is:
f_c = \frac{1}{2\pi R_i C}
As an active filter, the non-inverting op-amp low-pass filter also introduces a gain G:
\begin{split} v_\text{out} &= G\cdot v_\text{in} \\ G &= 1 + \tfrac{R_f}{R_g} \\ \therefore v_\text{out} &= (1+\tfrac{R_f}{R_g})\cdot v_\text{in} \\ \end{split}
Because G = 1 + R_f/R_g and resistances are positive, the gain will always be at least 1 (G \ge 1). So, if you were planning on reducing the amplitude of your signal with G < 1, you might want to consider the inverting op-amp low-pass filter instead.
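The two gain formulas can be compared side by side. A small Python sketch (the function names and resistor values are illustrative):

```python
def inverting_gain(Rf, Ri):
    """Passband gain of the inverting op-amp low-pass filter: G = -Rf/Ri."""
    return -Rf / Ri

def non_inverting_gain(Rf, Rg):
    """Passband gain of the non-inverting filter: G = 1 + Rf/Rg (always >= 1)."""
    return 1.0 + Rf / Rg
```

Note how the non-inverting gain can never drop below 1, while the inverting gain's magnitude can be any positive value.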
What does a low-pass filter do? Why use a low-pass filter?
Low-pass filters block high frequencies and admit low frequencies. With suitable cutoff frequencies, low-pass filters can be applied to:
Sound engineering, specifically amplifier design;
Noise reduction (useful in telecommunications); and
Biomedical devices, such as vital sign monitoring and pacemakers.
A low-pass filter's cutoff frequency, fc, is the frequency at which the filter's gain is −3dB. Frequencies lower than the cutoff frequency are admitted through the filter, and higher frequencies are blocked. For the typical RC low-pass filter, fc = 1 / (2πRC).
How do I build a low-pass filter?
To build a low-pass filter, follow these easy steps:
Calculate the components' values based on the above.
What components do I need for a 1 kHz low-pass filter?
You can build an RC low-pass filter with a cutoff frequency of 1 kHz using a 3.3 kΩ resistor and a 47 nF capacitor (which are standard resistor and capacitor values). Such a circuit will deliver a cutoff frequency of approximately 1.026 kHz, from f_c = 1/(2πRC).
|
Angular Displacement Calculator
How to find angular displacement
Our angular displacement calculator has your back if you are a physics enthusiast or a student looking to complete an assignment. Even if you are neither, we are still here for you; wouldn't you say that learning the various methods to calculate angular displacement belongs on your bucket list?
You'd be glad to know the range of topics we have in store, from the angular displacement formula, its unit, and equations, to understanding angular displacement from angular velocity and angular acceleration.
Imagine an object moving along a circular path: the angle it sweeps out along the radius is the angular displacement of the object. It means that the body or object is in rotational motion. It is a vector quantity, as it has a magnitude and a direction.
To denote angular displacement, the symbol we use is \theta. Remember not to confuse it with linear displacement, the distance covered by an object.
Angular displacement is the angle between the point of rest or starting point and radius.
The angular displacement calculator is based on multiple formulas, to cater to all the various ways it can be determined. Use it when:
Radius of the circular path is known;
Angular velocity is known; or
Angular acceleration is known.
When selecting the first option, you know the radius of the circular path. Input the radius and the distance traveled. As a result, you have angular displacement. The default angular displacement unit is radians, but you have a list of units to choose from.
The second option is when you have the angular velocity. This calculation is based on the simple fact that angular velocity is the rate of change of angular displacement. To use this method, input the angular velocity and the time taken for the object to cover the distance, and the result is angular displacement.
The third option is when you may want to determine the angular displacement with respect to angular acceleration. This method then does not need the radius or distance covered. Here you can input the angular velocity, the time taken for the object to cover the particular distance, and the angular acceleration. This is one of the most common formulas used to calculate angular displacement because it considers all the aspects of that object in a circular motion.
One thing to remember is that angular acceleration is in radians per second squared. Be mindful when adjusting the units of other variables.
✅ Since we are all about rotational motions today, why not take a look at our angular momentum calculator.
We have used various angular displacement equations to formulate our calculator. Let's take a look at all three of them one by one.
Angular displacement from the radius of the circular path
This method uses the radius of the circular path r and the distance covered along the circular path s. As a result, you get the angular displacement. The formula looks like this:
\theta = s / r
Angular displacement from angular velocity
Angular velocity is the rate of change of angular displacement. If we rearrange this definition, we can quickly determine the angular displacement from the angular velocity:
\theta = \omega \times t
Angular displacement from angular acceleration
The most common method used to determine angular displacement is through angular acceleration. This formula uses angular velocity, angular acceleration, and time to estimate the angular displacement of the object:
\theta = (\omega \times t) + (1 / 2 \times \alpha \times t^2)
where:
\theta – Angular displacement;
s – Distance;
r – Radius of the circular path;
\omega – Angular velocity;
t – Time; and
\alpha – Angular acceleration.
Let us consider an example to understand how to find angular displacement!
Imagine you are out for your morning jog on your favorite track in the neighborhood park. You get curious: what is your angular displacement? You are determined to find out the answer.
You estimate the radius by measuring the distance from the fountain in the middle of the park to the edge: the radius is 9 \text{ meters}. You run twice around the track and cover a distance of 185 \text{ meters}, measured by your smartwatch.
Your angular displacement is:
\small \begin{align*} \theta &= s / r \\ &= 185 / 9\\ &= 20.556 \text { radians} \end{align*}
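The worked example above can be checked with a few lines of Python (the function names are mine, not the calculator's):

```python
def angular_displacement_from_distance(s, r):
    """theta = s / r, in radians, from arc length s and radius r."""
    return s / r

def angular_displacement_kinematic(omega, t, alpha=0.0):
    """theta = omega*t + 0.5*alpha*t**2, from angular velocity, time,
    and (optionally) angular acceleration."""
    return omega * t + 0.5 * alpha * t ** 2
```

Plugging in s = 185 m and r = 9 m reproduces the roughly 20.556 radians computed above.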
How can I calculate angular displacement from angular acceleration?
The formula for angular displacement given angular acceleration is:
θ = (ω × t) + (1 / 2 × ɑ × t²)
θ – Angular displacement;
ω – Angular velocity;
t – Time; and
ɑ – Angular acceleration.
If you observe, this formula mirrors the second kinematic equation of motion, s = ut + ½at², which determines the distance covered by an object moving with uniform acceleration.
How do I calculate angular displacement from angular velocity?
The formula for angular displacement given angular velocity is:
θ = ω × t
ω – Angular velocity; and
So this means all you are required to do is multiply the value of angular velocity by time, and your result is angular displacement.
A wheel rotates 3 times, what is its angular displacement?
A wheel rotating three times has an angular displacement of 1080°. One complete rotation of a circle is equal to 360°. To estimate the value of θ (angular displacement) from it:
Note the number of rotations;
Multiply the number of rotations by 360.
The result is angular displacement.
Remember, this is a general estimate as you might get different results using the angular displacement equations.
What is the difference between angular and linear displacement?
Angular displacement is the angle between the distance covered by an object on a circular path and the radius of the said circular path. In contrast, linear displacement is the shortest distance covered by an object from one point to another.
The angular displacement unit is radians or degrees as it measures the angle, whereas linear displacement is in meters.
Both angular and linear displacement are vector quantities, which means they have a magnitude and direction.
|
Cross-validation - Command-line version | CatBoost
--cv-rand
--cv-no-shuffle
Training can be launched in cross-validation mode. In this case, only the training dataset is required. This dataset is split, and the resulting folds are used as the learning and evaluation datasets. If the input dataset contains the GroupId column, all objects from one group are added to the same fold.
Each cross-validation run from the command-line interface launches one training out of N trainings in N-fold cross-validation.
Use one of the following methods to get aggregated N-fold cross-validation results:
Run the training in cross-validation mode from the command-line interface N times with different validation folds and aggregate results by hand.
Use the cv function of the Python package instead of the command-line version. It returns aggregated results out-of-the-box.
catboost fit -f <file path> --cv <cv_type>:<fold_index>;<fold_count> [--cv-rand <value>] [other parameters]
catboost fit -f train.tsv --cv Classical:0;5
The path to the dataset to cross-validate.
Enable the cross-validation mode and specify the launching parameters.
<cv_type>:<fold_index>;<fold_count>
The following cross-validation types (cv_type) are supported:
Format: Classical:<fold_index>;<fold_count>
fold_index is the index of the fold to exclude from the learning data and use for evaluation (indexing starts from zero).
fold_count is the number of folds to split the input data into.
All folds, except the one indexed n, are used as the learning dataset. The fold indexed n is used as the validation dataset.
fold_index < fold_count
The data is randomly shuffled before splitting.
Format: Inverted:<fold_index>;<fold_count>
fold_index is the index of the fold to use for learning (indexing starts from zero).
The fold indexed fold_index is used as the learning dataset. All other folds are used as the validation dataset.
fold_index < fold_count
Split the input dataset into 5 folds, use the one indexed 0 for validation and all others for training:
--cv Classical:0;5
Required parameter for cross-validation
It must be used with the --cv parameter type set to Classical or Inverted.
Do not shuffle the dataset before cross-validation.
Any combination of the training parameters.
See the full list of default values in the Train a model section.
Launch the training three times with the same partition random seed and different validation folds (fold indexes 0, 1, and 2) to run a three-fold cross-validation. For fold 0:
catboost fit -f train.tsv --cv Classical:0;3 --cv-rand 17 --test-err-log fold_0_error.tsv
These trainings generate files with metric values, which can be aggregated manually.
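The manual aggregation step can be sketched in a few lines. Assuming each fold k was trained with --test-err-log fold_<k>_error.tsv as in the example above, a small helper (hypothetical, not part of CatBoost) can average the final-iteration metric across folds; CatBoost's error logs are tab-separated with a header row and one row per iteration.

```python
import csv

def aggregate_cv_errors(paths, metric_col=1):
    """Average the last-iteration metric value (column metric_col) across
    per-fold --test-err-log TSV files (header row, one row per iteration)."""
    finals = []
    for path in paths:
        with open(path, newline="") as f:
            rows = list(csv.reader(f, delimiter="\t"))
        finals.append(float(rows[-1][metric_col]))  # final iteration's metric
    return sum(finals) / len(finals)
```

For aggregated results without this manual step, the cv function of the Python package remains the simpler option.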
|
A Brief Look at Mixture Discriminant Analysis | R-bloggers
A Brief Look at Mixture Discriminant Analysis
Posted on July 2, 2013 by John Ramey in R bloggers | 0 Comments
Lately, I have been working with finite mixture models for my postdoctoral work on data-driven automated gating. Although I had barely scratched the surface of mixture models in the classroom, I am becoming increasingly comfortable with them. With this in mind, I wanted to explore their application to classification, because there are times when a single class is clearly made up of multiple subclasses that are not necessarily adjacent.
As far as I am aware, there are two main approaches (there are lots and lots of variants!) to applying finite mixture models to classification:
The Fraley and Raftery approach via the mclust R package
The Hastie and Tibshirani approach via the mda R package
Although the methods are similar, I opted to explore the latter. Here is the general idea. There are K \ge 2 classes, and each class is assumed to be a Gaussian mixture of subclasses. Hence, the model formulation is generative, and the posterior probability of class membership is used to classify an unlabeled observation. Each subclass is assumed to have its own mean vector, but all subclasses share the same covariance matrix for model parsimony. The model parameters are estimated via the EM algorithm.
Because the details of the likelihood in the paper are brief, I realized I was a bit confused with how to write the likelihood in order to determine how much each observation contributes to estimating the common covariance matrix in the M-step of the EM algorithm. Had each subclass had its own covariance matrix, the likelihood would simply be the product of the individual class likelihoods and would have been straightforward. The source of my confusion was how to write the complete data likelihood when the classes share parameters.
I decided to write up a document that explicitly defined the likelihood and provided the details of the EM algorithm used to estimate the model parameters. The document is available here along with the LaTeX and R code. If you are inclined to read the document, please let me know if any notation is confusing or poorly defined. Note that I did not include the additional topics on reduced-rank discrimination and shrinkage.
To see how well the mixture discriminant analysis (MDA) model worked, I constructed a simple toy example consisting of 3 bivariate classes each having 3 subclasses. The subclasses were placed so that within a class, no subclass is adjacent. The result is that no class is Gaussian. I was interested in seeing if the MDA classifier could identify the subclasses and also comparing its decision boundaries with those of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA). I used the implementation of the LDA and QDA classifiers in the MASS package. From the scatterplots and decision boundaries given below, the LDA and QDA classifiers yielded puzzling decision boundaries as expected. Contrarily, we can see that the MDA classifier does a good job of identifying the subclasses. It is important to note that all subclasses in this example have the same covariance matrix, which caters to the assumption employed in the MDA classifier. It would be interesting to see how sensitive the classifier is to deviations from this assumption. Moreover, perhaps a more important investigation would be to determine how well the MDA classifier performs as the feature dimension increases relative to the sample size.
```r
# Comparison of LDA, QDA, and MDA
library(MASS)
library(mvtnorm)
library(mda)
library(ggplot2)

set.seed(42)
n <- 500

# Randomly sample data
x11 <- rmvnorm(n = n, mean = c(-4, -4))
x12 <- rmvnorm(n = n, mean = c(0, 4))
x13 <- rmvnorm(n = n, mean = c(4, -4))

x21 <- rmvnorm(n = n, mean = c(-4, 4))
x22 <- rmvnorm(n = n, mean = c(4, 4))
x23 <- rmvnorm(n = n, mean = c(0, 0))

x31 <- rmvnorm(n = n, mean = c(-4, 0))
x32 <- rmvnorm(n = n, mean = c(0, -4))
x33 <- rmvnorm(n = n, mean = c(4, 0))

x <- rbind(x11, x12, x13, x21, x22, x23, x31, x32, x33)
train_data <- data.frame(x, y = gl(3, 3 * n))

# Train classifiers
lda_out <- lda(y ~ ., data = train_data)
qda_out <- qda(y ~ ., data = train_data)
mda_out <- mda(y ~ ., data = train_data)

# Generate test data used to draw the decision boundaries via contours
contour_data <- expand.grid(X1 = seq(-8, 8, length = 300),
                            X2 = seq(-8, 8, length = 300))

# Classify the test data
lda_predict <- data.frame(contour_data,
                          y = as.numeric(predict(lda_out, contour_data)$class))
qda_predict <- data.frame(contour_data,
                          y = as.numeric(predict(qda_out, contour_data)$class))
mda_predict <- data.frame(contour_data,
                          y = as.numeric(predict(mda_out, contour_data)))

p <- ggplot(train_data, aes(x = X1, y = X2, color = y)) + geom_point()
p + stat_contour(aes(x = X1, y = X2, z = y), data = lda_predict) +
  ggtitle("LDA Decision Boundaries")
p + stat_contour(aes(x = X1, y = X2, z = y), data = qda_predict) +
  ggtitle("QDA Decision Boundaries")
p + stat_contour(aes(x = X1, y = X2, z = y), data = mda_predict) +
  ggtitle("MDA Decision Boundaries")
```
|
BPL Mubarak! | Toph
Bangladesh Premier League is in Sylhet for the first time. Today's match is Sylhet Sixers vs the Dhaka Dynamites. Dhaka Dynamites seems to have a good batting line-up. There are many people coming to Sylhet International Stadium to see this match and you are one of them.
Now you want to count how many balls there are in an over. Dhaka Dynamites is batting first and Nasir Hossain from Sylhet Sixers is bowling his first over.
Nasir bowled 9 times in his first over.
This gave you an idea: Why not write a program that can determine the number of legal balls based on the outcomes of each ball and hence the number of overs played.
Given a series of outcomes of balls bowled by a bowler, you have to determine the number of legal balls and the number of overs that have been played. The following are the possible outcomes of each ball:
N No ball
W Wide ball
D Dead ball
0-6 Runs scored
As always, no balls, wide balls, and dead balls do not count as legal deliveries.
The input starts with an integer T (0 < T ≤ 100) denoting the number of test cases. Each case starts with a line containing a string S (0 < length of S ≤ 100) representing the outcomes of each ball. It is guaranteed that there is at least one legal ball.
For each test case, print the number of legal balls and overs played. Make sure you are using the correct pluralization (OVER vs OVERS, and BALL vs BALLS).
W123NW6WD64
1 OVER 1 BALL
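The counting logic can be sketched in a few lines of Python (this is my own illustration, not an official solution; in particular, the output format when the legal-ball count is an exact multiple of six is my assumption):

```python
def count_legal(outcomes: str) -> int:
    """Legal deliveries are the run outcomes '0'-'6'; 'N' (no ball),
    'W' (wide), and 'D' (dead ball) do not count."""
    return sum(ch.isdigit() for ch in outcomes)

def plural(n: int, word: str) -> str:
    # "1 BALL" but "2 BALLS", "1 OVER" but "2 OVERS"
    return f"{n} {word}" + ("S" if n != 1 else "")

def overs_report(outcomes: str) -> str:
    balls = count_legal(outcomes)
    overs, rem = divmod(balls, 6)   # 6 legal balls make an over
    parts = []
    if overs:
        parts.append(plural(overs, "OVER"))
    if rem:
        parts.append(plural(rem, "BALL"))
    return " ".join(parts) or plural(0, "BALL")
```

For example, a string with seven legal deliveries would be reported as `1 OVER 1 BALL`.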
|
What is a good signal-to-noise ratio?
How to calculate signal-to-noise ratio? SNR formula
Signal-to-noise ratio requirements and signal-to-noise ratio example
Our signal-to-noise ratio calculator is a tool that will help you find the ratio of the desired signal to the background noise. You might wonder: "Why is calculating this ratio so important?" You will get your answer in a while, but before that, you need to know what SNR is and what a good signal-to-noise ratio is.
Continue reading to learn some significant signal-to-noise ratio requirements and the difference between low and high signal-to-noise ratios.
SNR stands for signal-to-noise ratio. This ratio is a measure of the strength of the desired signal to the current level of background noise. Below are some signal-to-noise ratio examples we use in the real world:
Finding isotope levels in ice cores;
Measuring the efficiency of a cell's biochemical signaling;
Determining a car amplifier's sound clarity; and
Computing a communication channel's bandwidth and capacity 📡.
Our signal-to-noise ratio calculator is a convenient tool that calculates the ratio between the desired signal level and the acceptable background noise.
Now, let's come back to our original question, "Why is calculating this ratio so important?"
Imagine you are having a conversation with your friend, and it's going swimmingly. Now imagine a lot more people arrive, and everyone is talking to someone while you are trying to keep your conversation going. It's still manageable, but your friend gets cut off more and more. Finally, imagine you are at a concert and want to chat with your friend. Repeating what you're saying over the loud music, which for you is the background noise, quickly becomes tedious and exhausting.
Your voice here is the signal, and the concert music is the noise.
Our calculator lets you calculate the signal-to-noise ratio using five different methods.
In the tool, you have the option to select the type of SNR; whichever type you choose, you need to input:
The magnitude of the signal; and
The magnitude of the noise.
Each type has various units associated with it. Also, some of them may appear unitless, but remember that signal and noise can be measured in any appropriate unit as long as both are in the same units.
Signal-to-noise ratio calculation is an essential concept in science and engineering, and we can use it to measure any form of signal or transmission. So, after learning what the signal-to-noise ratio is, let's find out what a good SNR is.
The signal-to-noise ratio is expressed as a single numerical value, in decibels (dB).
The ratio can be zero, positive, or negative. An SNR greater than zero indicates that the signal strength is higher than the noise level. When the value from the SNR formula is zero, the signal has the same strength as the noise. Finally, a negative SNR means that the signal is weaker than the noise.
So, what is the signal-to-noise ratio value that we should aim for?
High signal-to-noise ratio 👍
A high signal-to-noise ratio means anything greater than zero. But the greater the SNR value, the greater the signal is in comparison to the noise. This means that, no matter which type of transmission has to take place, it will be efficient and accurate.
Low signal-to-noise ratio 👎
If the SNR value is low, it means that the noise is greater than the acceptable value and this will disrupt any form of data transfer. For example, this can occur during the transfer of text, image, audio & video streams, and telemetry.
The signal-to-noise ratio calculator has five different SNR formulae within it. This allows you to calculate various types of SNR based on what your input signal is or which unit you measure the data in:
Signals ratio;
SNR from decibels;
Power SNR;
Voltage SNR; and
SNR from the coefficient of variation.
All five types of SNR require you to input:
The strength of the signal; and
The level of noise.
As a result, you will obtain the signal-to-noise ratio based on the type you have selected.
For instance, if your signal is 6 volts and the noise is 4 volts, you need to select Voltage SNR. Then, the result is 20 × log10(6/4) ≈ 3.522 dB.
The variation in the type of signal-to-noise ratio lies in the formula. As we mentioned above, there are five ways to calculate the SNR:
SNR as signal ratio in absolute units:
SNR = \text{signal} / \text{noise}
SNR as signal difference in decibels:
SNR \text{(dB)} = \text{signal (dB)} - \text{noise (dB)}
Power SNR:
pSNR = 10 \times \log{(\text{signal / noise})}
Voltage SNR:
vSNR = 20 \times \log{(\text{signal / noise})}
SNR from the coefficient of variation (amplitude and power forms):
SNR = \mu / \sigma
SNR = \mu^2 / \sigma^2
where:
SNR(dB) - the signal-to-noise ratio in decibels;
pSNR - the power signal-to-noise ratio;
vSNR - the voltage signal-to-noise ratio;
\mu - the signal mean; and
\sigma - the standard deviation of the noise.
💡 Remember, the logarithmic base is always in 10 in the above equations.
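The five formulas are easy to check in code. Here is a small Python sketch (function names are mine) using the standard engineering conventions: 10·log10 for power quantities and 20·log10 for amplitude quantities such as voltage:

```python
import math

def snr_ratio(signal, noise):
    """Plain ratio of two like-unit quantities."""
    return signal / noise

def snr_db_from_db(signal_db, noise_db):
    """Both levels already in dB: simply subtract."""
    return signal_db - noise_db

def power_snr_db(p_signal, p_noise):
    """Power quantities (e.g. watts): 10 * log10 of the ratio."""
    return 10.0 * math.log10(p_signal / p_noise)

def voltage_snr_db(v_signal, v_noise):
    """Amplitude quantities (e.g. volts): 20 * log10, since power ~ V^2."""
    return 20.0 * math.log10(v_signal / v_noise)

def cv_snr(mean, std, power=False):
    """SNR from the coefficient of variation: mu/sigma, or its square."""
    return (mean / std) ** 2 if power else mean / std
```

For the worked example above, `voltage_snr_db(6, 4)` gives about 3.522 dB.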
📶 Generally, the recommended SNR for wireless networks to use the internet is 20 dB. The table below shows some SNR values and what their requirements for connectivity are:
Can establish an unreliable connection
Acceptable level to establish a poor connection
Considered a good connection
Considered to be an excellent connection
These signal-to-noise ratio statistics play a significant role in the field of wireless communication. Another good thing to remember is that the increase in SNR may increase the wireless network's channel capacity. Check out our modulation calculator to understand how data is transmitted using signals.
How do I calculate the signal-to-noise ratio?
To calculate the signal-to-noise ratio, you need the level of both the signal and the noise. Then:
If you have the signals in decibels (dB), subtract noise from the signal.
If your calculations are in watts (a power quantity), use the power signal-to-noise ratio formula SNR = 10 × log(signal / noise).
If your calculations are in volts (an amplitude quantity), use the voltage signal-to-noise ratio formula SNR = 20 × log(signal / noise).
log denotes the common logarithm.
For instance, in terms of a data network, a good SNR (signal-to-noise ratio) is 20 dB or above. And if the network is meant to be used for voice applications, then it needs to be 25 dB or above.
A good signal-to-noise ratio is one that has signal levels much higher than noise levels, as the greater the noise levels, the more disruption is caused. A low signal-to-noise ratio means that the level of background noise is more than it should be in comparison to the required signal.
What is the signal-to-noise ratio of a 450 dB signal and a 350 dB noise?
The signal-to-noise ratio (SNR) for a signal of 450 dB and noise of 350 dB is 100 dB.
The signal and noise level values are already in decibels (dB), so the signal-to-noise ratio formula is:
SNR(dB) = signal - noise
What kind of noises can impact the signal-to-noise ratio?
Noise that impacts signal-to-noise ratio can be electronic, thermal, quantum, biological, or acoustic. Also, we can consider humidity as noise. Generally, you may treat any unwanted disturbance that impacts the quality of the signal as noise.
A significant amount of noise can cause disruption in text, graphics, audio, and video transfers.
The noise figure calculator takes the signal-to-noise ratio (SNR) at input and output to calculate the noise figure value in decibels.
|
Day count conventions - Anaplan Technical Documentation
Many investment management functions rely on knowing the number of days between two dates. As the number of days in a year or month can vary, there are conventions that enable you to calculate the number of days in the year, which is known as the basis.
Anaplan defaults to the US 30/360 convention for day count, with a few differences. However, you can also choose to use other day count conventions.
US 30/360 day count conventions
The US 30/360 day count convention assumes 30 days for every month, and 360 days for the year. This convention was originally defined by the Financial Industry Regulatory Authority (FINRA).
US 30/360 uses the DayCountFactor formula to determine day count:
DayCountFactor=\frac{360\times(Y_2-Y_1)+30\times(M_2-M_1)+(D_2-D_1)}{360}
where Y is the year, M is the month, and D is the day.
There are then various conventions by which you can adjust D1 and D2 to determine the end of the month, as some months are not 30 days long.
The US 30/360 conventions are:
If the investment is End of Month (EOM), the start date is the last day of February, and the end date is the last day of February, then change D2 to 30.
If the investment is EOM and the start date is the last day of February, then change D1 to 30.
If D2 is 31 and D1 is 30 or 31, then change D2 to 30.
If D1 is 31, then change D1 to 30.
Differences in Anaplan
Anaplan conventions differ from these in that the full set of rules is only applied when calculating COUPDAYSNC. For other calculations, the start date check is not performed for the first and third conventions outlined above. Instead, these modified conventions apply:
If the investment is EOM and the end date is the last day of February, then change D2 to 30.
If D2 is 31, then change D2 to 30.
This allows the date adjustments for D2 to be independent of D1.
US 30/360 is the convention used by default in Anaplan. However, these conventions are also accommodated in the basis argument of the management functions:
Actual/360 and EUR 30/360, for which a year has 360 days.
Actual/365, for which a year has 365 days.
Actual/Actual, for which a year may have 365 or 366 days.
Note: Anaplan uses the International Swaps and Derivatives Association (ISDA) convention for Actual/Actual. In this convention, the number of days in leap and non-leap years are calculated separately.
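The US 30/360 rules above can be sketched in Python (this is an illustrative implementation of the four adjustment rules as listed, not Anaplan's code; the `eom` flag marks an End-of-Month investment):

```python
from datetime import date, timedelta

def us_30_360_factor(start: date, end: date, eom: bool = False) -> float:
    """Day count factor under the US 30/360 convention, applying the
    four date-adjustment rules in order before the 360/30 formula."""
    def is_last_day_of_feb(d: date) -> bool:
        return d.month == 2 and (d + timedelta(days=1)).month == 3

    d1, d2 = start.day, end.day
    if eom and is_last_day_of_feb(start) and is_last_day_of_feb(end):
        d2 = 30                        # rule 1
    if eom and is_last_day_of_feb(start):
        d1 = 30                        # rule 2
    if d2 == 31 and d1 in (30, 31):
        d2 = 30                        # rule 3
    if d1 == 31:
        d1 = 30                        # rule 4
    days = (360 * (end.year - start.year)
            + 30 * (end.month - start.month)
            + (d2 - d1))
    return days / 360
```

For example, 2020-01-31 to 2020-03-31 counts as 60 days, giving a factor of 60/360.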
|
Graph_(discrete_mathematics) Knowpia
A graph with six vertices and seven edges
A graph with three vertices and three edges
The vertices x and y of an edge {x, y} are called the endpoints of the edge. The edge is said to join x and y and to be incident on x and y. A vertex may belong to no edge, in which case it is not joined to any other vertex.
The edges of a graph define a symmetric relation on the vertices, called the adjacency relation. Specifically, two vertices x and y are adjacent if {x, y} is an edge. A graph may be fully specified by its adjacency matrix A, an n × n square matrix with Aij specifying the number of connections from vertex i to vertex j. For a simple graph, Aij ∈ {0, 1}, indicating disconnection or connection respectively, while Aii = 0 (that is, an edge cannot start and end at the same vertex). Graphs with self-loops are characterized by some or all Aii being equal to a positive integer, and multigraphs (with multiple edges between vertices) are characterized by some or all Aij being equal to a positive integer. Undirected graphs have a symmetric adjacency matrix (Aij = Aji).
Directed graph

In one restricted but very common sense of the term,[8] a directed graph is a pair G = (V, E) comprising:

V, a set of vertices; and
E ⊆ {(x, y) | (x, y) ∈ V² and x ≠ y}, a set of edges (also called directed edges or arcs), which are ordered pairs of distinct vertices.

In the edge (x, y) directed from x to y, the vertices x and y are called the endpoints of the edge, x the tail of the edge, and y the head of the edge. The edge is said to join x and y and to be incident on x and on y; y is a direct successor of x, and x is a direct predecessor of y. The edge (y, x) is called the inverted edge of (x, y).

In a more general sense of the term allowing multiple edges, a directed graph is an ordered triple G = (V, E, φ) comprising a set V of vertices, a set E of edges, and an incidence function φ : E → {(x, y) | (x, y) ∈ V² and x ≠ y} mapping every edge to an ordered pair of distinct vertices. To avoid ambiguity, this type of object may be called a directed multigraph.

A loop is an edge that joins a vertex to itself. Directed graphs as defined in the two definitions above cannot have loops, because a loop joining a vertex x to itself is the edge (x, x), which is not in {(x, y) | (x, y) ∈ V² and x ≠ y}. To allow loops, the definitions must be expanded: for a directed simple graph permitting loops, E is a subset of {(x, y) | (x, y) ∈ V²}; for a directed multigraph permitting loops (a quiver), the incidence function becomes φ : E → {(x, y) | (x, y) ∈ V²}.

The edges of a directed simple graph permitting loops G form a homogeneous relation on the vertices of G, called the adjacency relation of G. Specifically, for each edge (x, y), its endpoints x and y are said to be adjacent to one another.
Mixed graph
Weighted graph
A weighted graph with ten vertices and twelve edges
Types of graphs
Oriented graph
Regular graph
Complete graph
Finite graph
Connected graph
Bipartite graph
Path graph
A path graph or linear graph of order n ≥ 2 is a graph in which the vertices can be listed in an order v1, v2, …, vn such that the edges are the {vi, vi+1} where i = 1, 2, …, n − 1. Path graphs can be characterized as connected graphs in which the degree of all but two vertices is 2 and the degree of the two remaining vertices is 1. If a path graph occurs as a subgraph of another graph, it is a path in that graph.
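The degree characterization above is easy to check programmatically. A small Python sketch (helper names are my own), representing a graph as a dict mapping each vertex to its set of neighbors:

```python
from collections import deque

def is_path_graph(adj) -> bool:
    """True iff the graph is connected, exactly two vertices have
    degree 1, and every other vertex has degree 2."""
    if len(adj) < 2:
        return False
    degrees = sorted(len(neighbors) for neighbors in adj.values())
    if degrees != [1, 1] + [2] * (len(adj) - 2):
        return False
    # Connectivity check via breadth-first search.
    start = next(iter(adj))
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(adj)
```

A path 1–2–3–4 passes the test, while a 4-cycle fails (all its degrees are 2).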
Planar graph
Cycle graph
Polytree
Advanced classes
Properties of graphs
The diagram is a schematic representation of the graph with vertices V = {1, 2, 3, 4, 5, 6} and edges E = {{1, 2}, {1, 5}, {2, 3}, {2, 5}, {3, 4}, {4, 5}, {4, 6}}.
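For instance, this vertex set and edge set can be turned into an adjacency matrix in a few lines of Python (an illustrative sketch):

```python
# The example graph: V = {1,...,6}, E = {{1,2},{1,5},{2,3},{2,5},{3,4},{4,5},{4,6}}
V = [1, 2, 3, 4, 5, 6]
E = [{1, 2}, {1, 5}, {2, 3}, {2, 5}, {3, 4}, {4, 5}, {4, 6}]

n = len(V)
A = [[0] * n for _ in range(n)]
for edge in E:
    x, y = sorted(edge)
    A[x - 1][y - 1] = 1   # vertices are 1-indexed in the text
    A[y - 1][x - 1] = 1   # undirected: the adjacency matrix is symmetric

# A simple graph has a zero diagonal and a symmetric matrix.
assert all(A[i][i] == 0 for i in range(n))
assert all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

# Row sums give vertex degrees; they sum to twice the number of edges.
degree = [sum(row) for row in A]
```

Here `degree` comes out to [2, 3, 2, 3, 3, 1], whose sum 14 equals 2 × 7 edges, as the handshake lemma requires.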
In category theory, every small category has an underlying directed multigraph whose vertices are the objects of the category, and whose edges are the arrows of the category. In the language of category theory, one says that there is a forgetful functor from the category of small categories to the category of quivers.
Graph operations
Trudeau, Richard J. (1993). Introduction to Graph Theory (corrected, enlarged republication ed.). New York: Dover. p. 19. ISBN 978-0-486-67870-2. "A graph is an object consisting of two sets called its vertex set and its edge set."
Sylvester, J. J. (February 7, 1878). "Chemistry and algebra". Nature, 17: 284. doi:10.1038/017284a0. From page 284: "Every invariant and covariant thus becomes expressible by a graph precisely identical with a Kekuléan diagram or chemicograph."
Sylvester, J. J. (1878). "On an application of the new atomic theory to the graphical representation of the invariants and covariants of binary quantics, with three appendices". American Journal of Mathematics, Pure and Applied, 1(1): 64–90. doi:10.2307/2369436. JSTOR 2369436. The term "graph" first appears in this paper on page 65.
Gross, Jonathan L.; Yellen, Jay (2004). Handbook of Graph Theory. CRC Press. p. 35. ISBN 978-1-58488-090-5.
Strang, Gilbert (2005). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 978-0-03-010567-8.
Lewis, John (2013). Java Software Structures (4th ed.). Pearson. p. 405. ISBN 978-0133250121.
Fletcher, Peter; Hoyle, Hughes; Patty, C. Wayne (1991). Foundations of Discrete Mathematics (international student ed.). Boston: PWS-KENT. p. 463. ISBN 978-0-53492-373-0. "A weighted graph is a graph in which a number w(e), called its weight, is assigned to each edge e."
Balakrishnan, V. K. (1997). Graph Theory (1st ed.). McGraw-Hill. ISBN 978-0-07-005489-9.
Berge, Claude (1958). Théorie des graphes et ses applications (in French). Paris: Dunod.
Biggs, Norman (1993). Algebraic Graph Theory (2nd ed.). Cambridge University Press. ISBN 978-0-521-45897-9.
Bollobás, Béla (2002). Modern Graph Theory (1st ed.). Springer. ISBN 978-0-387-98488-9.
Diestel, Reinhard (2005). Graph Theory (3rd ed.). Berlin, New York: Springer-Verlag. ISBN 978-3-540-26183-4.
Graham, R.L.; Grötschel, M.; Lovász, L. (1995). Handbook of Combinatorics. MIT Press. ISBN 978-0-262-07169-7.
Gross, Jonathan L.; Yellen, Jay (1998). Graph Theory and Its Applications. CRC Press. ISBN 978-0-8493-3982-0.
Gross, Jonathan L.; Yellen, Jay (2003). Handbook of Graph Theory. CRC. ISBN 978-1-58488-090-5.
Harary, Frank (1995). Graph Theory. Addison Wesley Publishing Company. ISBN 978-0-201-41033-4.
Iyanaga, Shôkichi; Kawada, Yukiyosi (1977). Encyclopedic Dictionary of Mathematics. MIT Press. ISBN 978-0-262-09016-2.
Zwillinger, Daniel (2002). CRC Standard Mathematical Tables and Formulae (31st ed.). Chapman & Hall/CRC. ISBN 978-1-58488-291-6.
|
Low-Speed Go-Kart Crash Tests and a Comparison to Activities of Daily Living | ASME J. Risk Uncertainty Part B | ASME Digital Collection
Manuscript received January 28, 2016; final manuscript received February 8, 2018; published online May 2, 2018. Assoc. Editor: Chimba Mkandawire.
Kloppenborg, N., Amenson, T., Wernik, J., and Wiechel, J. (May 2, 2018). "Low-Speed Go-Kart Crash Tests and a Comparison to Activities of Daily Living." ASME. ASME J. Risk Uncertainty Part B. December 2018; 4(4): 041010. https://doi.org/10.1115/1.4039357
Go-karts are a common amusement park feature enjoyed by people of all ages. While intended for racing, contact between go-karts does occur. To investigate and quantify the accelerations and forces which result from contact, 44 low-speed impacts were conducted between a stationary (target) and a moving (bullet) go-kart. The occupant of the bullet go-kart was one of two human volunteers. The occupant of the target go-kart was a Hybrid III 50th percentile male anthropomorphic test device (ATD). Impact configurations consisted of rear-end impacts, frontal impacts, side impacts, and oblique impacts. Results demonstrated high repeatability for the vehicle performance and occupant response. Go-kart accelerations and speed changes increased with increased impact speed. Impact duration and restitution generally decreased with increased impact speed. All ATD acceleration, force, and moment values increased with increased impact speed. Common injury metrics such as the head injury criterion (HIC), Nij, and Nkm were calculated and were found to be below injury thresholds. Occupant response was also compared to published activities of daily living data.
Collision, Crash, Safety
Bullets, Impact testing, Kinematics, Shear (Mechanics), Vehicles, Wounds, Stress, Collisions (Physics), Instrumentation, Safety
|
Combined Cycles With CO2 Capture: Two Alternatives for System Integration | J. Eng. Gas Turbines Power | ASME Digital Collection
Combined Cycles With CO2 Capture: Two Alternatives for System Integration

Nikolett Sipöcz, Department of Mechanical and Structural Engineering and Material Science, N-4036 Stavanger, Norway; e-mail: nikolett.sipocz@uis.no
Sipöcz, N., and Assadi, M. (March 17, 2010). "Combined Cycles With CO2 Capture: Two Alternatives for System Integration." ASME. J. Eng. Gas Turbines Power. June 2010; 132(6): 061701. https://doi.org/10.1115/1.4000122
As carbon capture and storage technology has grown as a promising option to significantly reduce CO2 emissions, system integration and optimization play a crucial role. This paper presents a comparative study of a gas turbine cycle with postcombustion CO2 separation using an amine-based absorption process with monoethanolamine. The study has been made for a triple pressure reheated 400 MWe natural gas-fuelled combined cycle with exhaust gas recirculation (EGR) to improve capture efficiency. Two different options for the energy supply to the solvent regeneration have been evaluated and compared concerning plant performance. In the first alternative, heat is provided by steam extracted internally from the bottoming steam cycle, while in the second option an external biomass-fuelled boiler is utilized to generate the required heat. With this novel configuration, the amount of CO2 captured can exceed 100% if the exhaust gas from the biofuelled boiler is mixed and cleaned together with the main exhaust gas flow from the combined cycle. In order to make an unprejudiced comparison between the two alternatives, the reduced steam turbine efficiency has been estimated and taken into consideration for the alternative with internal steam extraction. The cycles have been modeled in the commercial heat and mass balance program IPSEPRO™ using detailed component models. Utilizing EGR can double the CO2 content of the exhaust gases and reduce the energy need for the separation process by approximately 2 percentage points. Using an external biomass-fuelled boiler as the heat source for amine regeneration turns out to be an interesting option due to its high CO2 capture effectiveness. However, the electrical efficiency of the power plant is reduced compared with the option with internal steam extraction. Another drawback of the external boiler is the higher investment cost; nevertheless, its flexibility, owing to its independence from the rest of the power generation system, represents a major operational advantage.
air pollution control, biofuel, boilers, combined cycle power stations
Biomass, Boilers, Carbon capture and storage, Carbon dioxide, Combined cycles, Cycles, Exhaust gas recirculation, Heat, Power stations, Separation (Technology), Steam, Exhaust systems, Steam turbines, Pressure, Absorption, Emissions, Temperature, Turbines, Gases
|
Determine the inverse Laplace transform of the function
{L}^{-1}\left\{R\left(s\right)\right\}={L}^{-1}\left\{\frac{7}{\left(s+3\right)\left(s-3\right)}\right\}
R\left(s\right)=\frac{7}{\left(s+3\right)\left(s-3\right)}
By partial fraction decomposition,
R\left(s\right)=\frac{7}{6}\left[\frac{1}{s-3}-\frac{1}{s+3}\right]
Inverse Laplace is given by,
{L}^{-1}\left\{\frac{1}{s+a}\right\}={e}^{-at}\text{ and }{L}^{-1}\left\{\frac{1}{s-a}\right\}={e}^{at}
Hence, Laplace inverse of the given function is,
{L}^{-1}\left\{R\left(s\right)\right\}=\frac{7}{6}\left[{L}^{-1}\left\{\left[\frac{1}{s-3}\right]\right\}-{L}^{-1}\left\{\frac{1}{s+3}\right\}\right]
=\frac{7}{6}\left({e}^{3t}-{e}^{-3t}\right)
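As a quick sanity check (my own numerical sketch, not part of the solution), the forward Laplace transform of the recovered time function can be approximated numerically and compared with R(s) at a sample point, say s = 5:

```python
import math

def laplace_numeric(f, s, T=20.0, n=200_000):
    """Trapezoidal approximation of the Laplace integral
    ∫_0^T e^{-s t} f(t) dt (the tail beyond T is negligible here)."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

# f(t) = (7/6)(e^{3t} - e^{-3t}), the inverse transform found above
f = lambda t: (7.0 / 6.0) * (math.exp(3.0 * t) - math.exp(-3.0 * t))
approx = laplace_numeric(f, s=5.0)
exact = 7.0 / ((5.0 + 3.0) * (5.0 - 3.0))   # R(5) = 7/16 = 0.4375
```

The numerical transform agrees with R(5) to high accuracy, confirming the partial-fraction result.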
|
Analysis of the Motion of Frenkel-Kontorova Dislocations in Single Crystals of Aluminum with Allowance for the Peierls Barrier ()
The regularities of the motion of a one-dimensional Frenkel-Kontorova dislocation in pure aluminum at helium temperatures are studied. Computer simulation was carried out using the sine-Gordon equation, written in dimensionless variables. It is proven that the discreteness of the model is preserved under the transition to dimensionless variables. The dependence of the true stress on strain in the Euler variables, as well as the velocity distribution of the dislocation fragments along the coordinate at successive instants of time, are obtained. It is shown that under these conditions dislocation motion is realized by quantum tunneling of the dislocation kinks. A quantum-mechanical estimate confirms the possibility of quantum tunneling of dislocation kinks in aluminum at low temperatures.
Aluminum, Frenkel-Kontorova Dislocation, Sine Gordon Equation, Computer Simulation, Dislocation Kinks, Quantum Tunneling
Arakelyan, M. (2018) Analysis of the Motion of Frenkel-Kontorova Dislocations in Single Crystals of Aluminum with Allowance for the Peierls Barrier. Open Access Library Journal, 5, 1-11. doi: 10.4236/oalib.1104390.
The strain hardening law for aluminum is taken in the form
\sigma ={a}_{0}{\epsilon }^{{a}_{1}}{\zeta }^{{a}_{2}}{\mathrm{e}}^{-{a}_{3}\theta }
with the constants
{a}_{0}=3.6\times {10}^{6}\ \text{MPa},\ {a}_{1}=0.255,\ {a}_{2}=0.05,\ {a}_{3}=-0.01
where \epsilon is the strain, \zeta is the strain rate, and \theta is the temperature.
The Frenkel-Kontorova equation of motion for the chain of atoms is
m{\ddot{y}}_{n}=-{f}_{0}\mathrm{sin}\left(2\text{π}{y}_{n}/a\right)+k\left({y}_{n+1}+{y}_{n-1}-2{y}_{n}\right)
where {y}_{n} is the displacement of the n-th atom, a is the lattice period, and {f}_{0}\mathrm{sin}\left(2\text{π}{y}_{n}/a\right) is the periodic substrate (Peierls) force. In dimensionless variables it reduces to the discrete sine-Gordon equation
{\ddot{\phi }}_{n}+\mathrm{sin}{\phi }_{n}-{{\phi }^{″}}_{n}=0
for the dimensionless displacements {\phi }_{n}. The characteristic kink width is
{l}_{0}=a\sqrt{m{v}_{0}^{2}/2\text{π}{f}_{0}a},\quad {v}_{0}=a\sqrt{k/m}
and the continuum approximation is valid when {l}_{0}\gg a. Free-end boundary conditions
{\partial \phi /\partial x|}_{x=0}={\partial \phi /\partial x|}_{x=l}=0
are imposed on the solution {\phi }_{n}\left(x,t\right).
The true stress is related to the conventional stress by {\sigma }_{tru}={\sigma }_{con}\left(1+\epsilon \right), i.e. 8\times {10}^{8}\left(1+\epsilon \right) here, where \epsilon is the strain; the applied stresses {\sigma }_{n} satisfy {\sigma }_{n}<8\times {10}^{8}\left(1+\epsilon \right), and the dependence \sigma \left(\epsilon \right) is obtained in the Euler variables.
The quantum tunneling of kinks is estimated with the WKB criterion
\frac{2}{\hslash }\sqrt{2{m}_{s}\left({W}_{p}-{E}_{ks}\right)}l\approx 1
where \hslash is the reduced Planck constant and {m}_{s} is the effective kink mass; the computed value
\frac{2}{\hslash }\sqrt{2{m}_{s}\left({W}_{p}-{E}_{ks}\right)}l\approx 0.73
confirms that tunneling is possible. The quantum parameter
\Delta \sim \frac{h}{a{\left(mu\right)}^{1/2}}
with a the lattice period and u\sim {10}^{-14}\ \text{erg} gives \Delta \sim 1.
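As an illustration, the discrete sine-Gordon chain described above can be integrated numerically. The following sketch (with illustrative parameters, not the paper's actual values) propagates a static kink profile with free ends:

```python
import numpy as np

# Minimal sketch: symplectic-Euler integration of the discrete sine-Gordon
# chain  phi_ddot_n = -sin(phi_n) + (phi_{n+1} + phi_{n-1} - 2 phi_n)
# with free (zero-gradient) ends, as in the dimensionless model above.

N, dt, steps = 200, 0.05, 400

# Initial condition: the continuum kink phi(x) = 4 arctan(exp(x - x0))
x = np.arange(N, dtype=float)
phi = 4.0 * np.arctan(np.exp(x - N / 2))
vel = np.zeros(N)

def accel(phi):
    lap = np.empty_like(phi)
    lap[1:-1] = phi[2:] + phi[:-2] - 2.0 * phi[1:-1]
    # free-end (Neumann) boundaries: zero gradient at both ends
    lap[0] = phi[1] - phi[0]
    lap[-1] = phi[-2] - phi[-1]
    return -np.sin(phi) + lap

for _ in range(steps):
    vel += accel(phi) * dt
    phi += vel * dt

# The total winding phi[N-1] - phi[0] stays close to 2*pi: the kink persists.
print(phi[-1] - phi[0])
```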
|
Sphere Density Calculator
Density and density formula of a sphere
How to use sphere density calculator?
More relevant calculators
If you're struggling with questions about spheres, you've come to the right place — the sphere density calculator will help you with all the issues referring to spheres and the density formula of a sphere. Read on and find out more about calculating the density of a sphere and more, e.g., 'how to find the mass of a sphere given density and radius'?
Density is a physical term to define mass per unit volume of space.
\rho = \frac{m}{V}
where:
\rho — density;
m — mass; and
V — volume.
If we've got two objects (let's say watermelons) that are the same size, but one of them is heavier (i.e., its mass is greater), then the heavier one has the greater density.
If we are interested in the density of a sphere, we need to use the volume of the sphere in the main equation.
The volume of a sphere is defined as:
V = \frac{4}{3} \times \pi \times r^3
where:
V — volume; and
r — radius of the sphere.
Combining the two formulas, we get the density formula of a sphere:
\rho = \frac{m}{\frac{4}{3} \times \pi \times r^3}
To calculate the desired values using our calculator:
Take a look at the left side of the page on the calculator panel.
Input the mass (weight) of a sphere. You can choose whichever unit you want.
Input one of the two — either volume of a sphere or simply its radius.
You can look at the picture above the sphere density calculator if needed.
The result is ready immediately. You'll also see that the rest of the data was automatically calculated.
If you want to know more about density and various shapes, check out our collection:
Cube density calculator; and
Density of a cylinder calculator.
How do I calculate the radius of a sphere given density and mass?
To calculate the radius of a sphere with a given density and mass:
Recalculate the data to the same units. For example, if your mass comes in kg, make sure the density refers to kgs as well.
Calculate the volume of a sphere with the formula volume = mass/density.
Knowing that the volume of a sphere is defined with (4/3) × π × r³, calculate the radius.
That's it! You can also use a combined formula right away: mass/density = (4/3) × π × r³
How do I find the mass of a sphere given density and radius?
To calculate the mass of a sphere with known density and radius:
Convert the units so that they're consistent. If your radius comes in cm, density should also refer to centimeters.
Knowing the radius, you can calculate the volume of a sphere with the equation volume = (4/3) × π × r³.
Then, using the density formula density = mass/volume, you can find the mass by multiplying density and volume: mass = volume × density.
You can also use a combined formula: mass = (4/3) × π × r³ × density
How do I find the density of a sphere whose radius is 3 in and mass is 0.5 lb?
To find the density of a sphere with a known radius and mass:
Calculate the volume of the sphere, using the equation volume = (4/3) × π × r³
In our case: volume = (4/3) × π × 3³ = 113.1 cu in (cubic inches)
Then let's consider the formula for density, which is density = mass/volume.
We've already got all the data needed!
Let's put that into practice. density = 0.5 lb / 113.1 cu in = 0.004421 lb/cu in
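The steps above translate directly into code. Here's a minimal sketch reproducing the worked example:

```python
import math

# Minimal sketch of the sphere density formulas above.
def sphere_volume(radius):
    return (4.0 / 3.0) * math.pi * radius ** 3

def sphere_density(mass, radius):
    return mass / sphere_volume(radius)

# Worked example from the text: r = 3 in, m = 0.5 lb
volume = sphere_volume(3)          # ~113.1 cubic inches
density = sphere_density(0.5, 3)   # ~0.004421 lb/cu in
print(round(volume, 1), round(density, 6))
```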
|
Industry Problems - Sentre
Sentre sets out to tackle the four main problems in the DeFi market.
Liquidity Openness
In 2020, Uniswap introduced its version 2, which put forward the concept of programmable liquidity. In a sense, this is similar to the concept of open liquidity. They opened up a method to implement on-chain price oracles, which can be considered open data. Uniswap also utilizes the atomicity of Ethereum transactions to offer flash swaps. In short, you can receive tokens before paying for them, subject to some conditions; if the conditions aren't met, the transaction is rolled back. Flash swaps are a superpower for arbitrageurs exploiting price differentials.[1]
It’s clear that liquidity openness on Uniswap is still limited, since the Liquidity Provider Token (LPT) cannot be consumed by other financial services. In the second half of 2020, the term “Yield Farming” was gaining attention, as protocols accepted LPT to earn other tokens. However, these services depend on third parties, without native support from Uniswap.[2]
SushiSwap is a younger AMM that is more full-featured than the platforms that came before it. It provides a comprehensive DeFi market with many services, including swap, yield farming, lending & borrowing, staking, and so on. All these services are built in-house, which means SushiSwap fully controls the liquidity and circulates it within its own platform. On Solana, Raydium is the current top DEX, with services such as swapping, staking, and farming. However, Raydium seems to share the same vision as SushiSwap, in that DeFi service development remains permissioned.[3]
Symmetric Deposit
It’s worth mentioning the constant product function (CPF), which expresses the pricing curve in the aforementioned protocols. The main idea behind it seems very simple, but it has strongly proved its functionality and possibility both experimentally[4] and practically. The CPF formulates the quoted price via the pair of token reserves. Given tokens A and B with corresponding reserves R_A and R_B, the algorithm will maintain the following equation,
R_A\times R_B=k
where k>0 is a constant defined at the initial state[5]. In other words, let α (with 0<α<1) be the “changing rate” of R_A after an exchange (sell A for B, for example). Then
R'_A=\frac{1}{α} R_A,\quad R'_B = αR_B
and, to maintain the constant product,
R_A× R_B = R'_A×R'_B= k
At an arbitrary timestamp t, the quoted price is
p^{(t)}=R^{(t)}_B/R^{(t)}_A
Therefore, the first liquidity provider (LP) must deposit both tokens, A and B, with reserves that satisfy the reference market price at the initial state, p^{(0)}=R^{(0)}_B/R^{(0)}_A. After the pool’s setup, subsequent LPs must follow the quoted price by depositing both tokens accordingly. But what if users only have one type of token? We call this problem Symmetric Deposit.
Swap Possibilities
Let p be the current quoted price and p' be the next quoted price; then the slippage rate is
s=\frac{p'}{p}
Applying this to the CPF,
s=\frac{R'_B/R'_A}{R_B/R_A}=α^2
The slippage rate is amplified when routing. When there are no direct pools for the desired pair of tokens, traders need to swap through other middle tokens before reaching the destination, and even then, there might be no route for said desired pair at all. This problem, called Swap Possibilities, reduces the liquidity effectiveness and affects user experience.
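The constant-product mechanics and the slippage identity s = α² can be sketched in a few lines of code (fee-free and purely illustrative, not any protocol's actual implementation):

```python
# Minimal sketch of a constant-product swap and its slippage.

def swap_a_for_b(r_a, r_b, amount_in):
    """Sell `amount_in` of token A into a pool with reserves (r_a, r_b)."""
    k = r_a * r_b
    new_r_a = r_a + amount_in
    new_r_b = k / new_r_a          # constant product: new_r_a * new_r_b == k
    amount_out = r_b - new_r_b
    return new_r_a, new_r_b, amount_out

r_a, r_b = 1000.0, 1000.0          # initial reserves, quoted price p = 1
new_r_a, new_r_b, out = swap_a_for_b(r_a, r_b, 100.0)

alpha = r_a / new_r_a              # "changing rate": new_r_a = r_a / alpha
p, p_new = r_b / r_a, new_r_b / new_r_a
slippage = p_new / p
print(round(slippage, 6), round(alpha ** 2, 6))  # equal: s = alpha^2
```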
Lastly, one of the biggest risks of liquidity provision is impermanent loss[6]. Due to price or reserve deviation from the initial state, the value of the deposited assets may fall below that of a simple HODL strategy[7].
[1] H. Adams, N. Zinsmeister, and D. Robinson, “Uniswap v2 core,” 02 2020. [Online].
[2] K. Rapoza, “DeFi ‘Yield Farming’: How To Get DeFi Yield, And Why Invest In It,” 06 2021. [Online].
[3] Raydium Team, “Raydium Protocol Litepaper,” 03 2021. [Online].
[4] G. Angeris, H.-T. Kao, R. Chiang, C. Noyes, and T. Chitra, “An Analysis of Uniswap Markets,” Cryptoeconomic Systems Journal, 2019.
[5] H. Adams, “Uniswap Whitepaper,” 11 2018. [Online].
[6] N. Hindman, “Beginner’s Guide to (Getting Rekt by) Impermanent Loss,” 11 2020. [Online].
[7] HODL: Hold On for Dear Life, a slang among the cryptocurrency community, meaning you should hold a cryptocurrency instead of selling it.
|
get_feature_importance - CatBoostClassifier | CatBoost
get_feature_importance(data=None,
                       reference_data=None,
                       type=EFstrType.FeatureImportance,
                       prettified=False,
                       ...)
The dataset for feature importance calculation.
The required dataset depends on the selected feature importance calculation type (specified in the type parameter):
PredictionValuesChange — Either None or the same dataset that was used for training if the model does not contain information regarding the weight of leaves. All models trained with CatBoost version 0.9 or higher contain leaf weight information by default.
LossFunctionChange — Any dataset. Feature importances are calculated on a subset for large datasets.
PredictionDiff — A list of object pairs.
Required parameter for the LossFunctionChange and ShapValues type of feature importances and in case the model does not contain information regarding the weight of leaves.
Reference data for Independent Tree SHAP values from Explainable AI for Trees: From Local Explanations to Global Understanding. If type is ShapValues and reference_data is not None, then Independent Tree SHAP values are calculated.
Alias: fstr_type (deprecated, use type instead)
The type of feature importance to calculate.
FeatureImportance: Equal to PredictionValuesChange for non-ranking metrics and LossFunctionChange for ranking metrics (the value is determined automatically).
ShapValues: A vector v with contributions of each feature to the prediction for every input object.
Interaction: The value of the feature interaction strength for each pair of features.
PredictionDiff: A vector with contributions of each feature to the RawFormulaVal difference for each pair of objects.
It is recommended to use the EFstrType enum for this parameter.
Return the feature importances as a list of the following pairs sorted by feature importance:
(feature_id, feature importance)
Should be used if one of the following values of the type parameter is selected:
bool — Output progress to stdout.
Works with the ShapValues type of feature importance calculation.
int — The logging period.
Depends on the selected feature strength calculation method:
PredictionValuesChange, LossFunctionChange or PredictionValuesChange with the prettified parameter set to False: a list of length [n_features] with float feature importance values for each feature.
PredictionValuesChange or LossFunctionChange with the prettified parameter set to True: a list of length [n_features] with (feature_id (string), feature_importance (float)) pairs, sorted by feature importance values in descending order.
ShapValues: np.array of shape (n_objects, n_features + 1) with float ShapValues for each (object, feature).
Interaction: list of length [n_features] of three-element lists of (first_feature_index, second_feature_index, interaction_score (float)).
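As a rough illustration of how the prettified flag changes the return shape, consider the following sketch. The importance values and feature names are made up, not the output of a real model:

```python
# Hypothetical importances, shaped like the prettified=False return value:
# one float per feature, in feature order.
raw_importances = [12.4, 0.8, 31.9, 5.5]
feature_ids = ["f0", "f1", "f2", "f3"]

# prettified=True instead yields (feature_id, importance) pairs,
# sorted by importance in descending order.
prettified = sorted(
    zip(feature_ids, raw_importances),
    key=lambda pair: pair[1],
    reverse=True,
)
print(prettified)  # [('f2', 31.9), ('f0', 12.4), ('f3', 5.5), ('f1', 0.8)]
```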
|
What is an ellipse standard form?
How to use the ellipse standard form calculator?
How to find the standard form of an ellipse
Other ellipse related calculators
To help you calculate the standard equation of an ellipse from its vertices and co-vertices, here's our ellipse standard form calculator. It uses the ellipse standard form equation to find the center and vertices of an ellipse or acts as the calculator for writing the equation of the ellipse in standard form.
The following article will also share how to find this standard form of an ellipse from its vertices.
To calculate the standard equation of an ellipse, we first need to know what makes an ellipse. Simply speaking, when we stretch a circle in one direction to create an oval, that makes an ellipse.
Here's the standard form or equation of an ellipse with center (c_1, c_2) and semi-major axis on the x-axis (if a > b):
\frac{(x - c_1)^2}{a^2} + \frac{(y - c_2)^2}{b^2} = 1
And here's the standard form or equation of an ellipse with its semi-major axis on the y-axis:
\frac{(x - c_1)^2}{b^2} + \frac{(y - c_2)^2}{a^2} = 1
where:
(x, y) - The coordinates of an arbitrary point on the ellipse;
(c_1, c_2) - The coordinates of the ellipse's center;
a - Semi-major axis (the longest distance from the ellipse's center to a point on the ellipse); and
b - Semi-minor axis (the shortest distance from the ellipse's center to a point on the ellipse).
The vertices ±a, co-vertices ±b, and foci ±c are related by the following equation:
c^2=a^2 - b^2
When given the foci and vertices coordinates of an ellipse, we can find the standard form of the ellipse.
It is very easy to use our ellipse standard form calculator:
Input the vertices and co-vertices to obtain the ellipse standard form, e.g.,
First vertex V1: (-10, 0)
Second vertex V2: (10, 0)
First co-vertex V3: (0, -6)
Second co-vertex V4: (0, 6)
Based on the input values:
The center or origin of our Ellipse is (0, 0); and
The calculator writes the equation of the ellipse in standard form:
\frac{x^2}{10^2} + \frac{y^2}{6^2} = 1
The following section explains how to find the standard form of an ellipse with an example. Let's calculate the standard form of an ellipse with vertices (0, ±8) and foci (0, ±4):
Rearrange the previously mentioned formula to:
b^2 = a^2 - c^2
Place the values:
b^2 = 8^2 - 4^2
b^2 = 48
b = \sqrt{48}
Since our ellipse's vertices are at (0, ±8), we know its major axis is in the vertical direction.
Thus, our calculated standard equation of the ellipse is:
\frac{x^2}{{\sqrt48}^2} + \frac{y^2}{8^2} = 1
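The worked example above boils down to one line of arithmetic, which can be sketched as:

```python
import math

# Minimal sketch of the worked example: ellipse with vertices (0, +-8)
# and foci (0, +-4); solve c^2 = a^2 - b^2 for the semi-minor axis b.
a = 8          # semi-major axis, from vertices (0, +-8)
c = 4          # linear eccentricity, from foci (0, +-4)

b_squared = a ** 2 - c ** 2
b = math.sqrt(b_squared)

print(b_squared)        # 48
print(round(b, 4))      # 6.9282, i.e. sqrt(48)
# The major axis is vertical, so the standard form is
#   x^2 / b^2 + y^2 / a^2 = 1  ->  x^2 / 48 + y^2 / 64 = 1
```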
Here's a list of other ellipse related calculators that you may find helpful:
Foci of an ellipse calculator;
Ellipse Circumference; and
Ellipse perimeter calculator.
Here is the standard equation of an ellipse with its center at (0, 0) and major axis on the x-axis:
x² / a² + y² / b² = 1
(x, y) - The coordinates of an arbitrary point on the ellipse;
a and b - semi-major and semi-minor axes.
What is the ellipse standard form with vertices at (±13, 0) and (0, ±12)?
The equation x² / 13² + y² / 12² = 1 is the ellipse standard form with vertices at (±13, 0) and (0, ±12), where:
(-13, 0) - First vertex on the horizontal axis;
(13, 0) - Second vertex on horizontal axis;
(0, -12) - First co-vertex on vertical axis;
(0, 12) - Second co-vertex on vertical axis; and
(0, 0) - Ellipse center.
(x - c1)² / a² + (y - c2)² / b² = 1
|
The Rational(f, k) command computes a closed form of the indefinite sum of f(k) with respect to k. It computes rational functions s(k) and t(k) such that
f\left(k\right)=s\left(k+1\right)-s\left(k\right)+t\left(k\right)
where t(k) is minimal in some sense, and expresses the remaining sum {\sum }_{k}t\left(k\right) in terms of the Ψ function.
If the failpoints option is specified, the output is the sequence g, [p, q], where g is the closed form of the indefinite sum of f with respect to k, p is a list of ranges containing the integer poles of f, and q is a list containing the integer poles of g that are not poles of f.
\mathrm{with}\left(\mathrm{SumTools}[\mathrm{IndefiniteSum}]\right):
f≔\frac{1}{{n}^{2}+\sqrt{5}\,n-1}
g≔\mathrm{Rational}\left(f,n\right)
g=-\frac{1}{3\left(n-\frac{3}{2}+\frac{\sqrt{5}}{2}\right)}-\frac{1}{3\left(n-\frac{1}{2}+\frac{\sqrt{5}}{2}\right)}-\frac{1}{3\left(n+\frac{1}{2}+\frac{\sqrt{5}}{2}\right)}
A check that g is indeed an antidifference of f:
\mathrm{evala}\left(\mathrm{Normal}\left(\mathrm{eval}\left(g,n=n+1\right)-g\right),\mathrm{expanded}\right)=\frac{1}{{n}^{2}+\sqrt{5}\,n-1}
f≔\frac{13-57x+2y+20{x}^{2}-18xy+10{y}^{2}}{15+10x-26y-25{x}^{2}+10xy+8{y}^{2}}
g≔\mathrm{Rational}\left(f,x\right)
g=-\frac{4x}{5}+\left(-\frac{7y}{25}+\frac{34}{25}\right)\mathrm{\Psi }\left(x-\frac{4y}{5}+\frac{3}{5}\right)+\left(\frac{17y}{25}+\frac{3}{5}\right)\mathrm{\Psi }\left(x+\frac{2y}{5}-1\right)
\mathrm{simplify}\left(\mathrm{combine}\left(f-\left(\mathrm{eval}\left(g,x=x+1\right)-g\right),\mathrm{\Psi }\right)\right)=0
f≔\frac{1}{n}-\frac{2}{n-3}+\frac{1}{n-5}
g,\mathrm{fp}≔\mathrm{Rational}\left(f,n,'\mathrm{failpoints}'\right)
g,\mathrm{fp}=-\frac{1}{n-5}-\frac{1}{n-4}+\frac{1}{n-3}+\frac{1}{n-2}+\frac{1}{n-1},\ [[0..0,3..3,5..5],[1,2,4]]
Here f has integer poles at n = 0, 3, 5, and g has the additional integer poles n = 1, 2, 4.
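The third example can be checked independently, e.g. with SymPy rather than Maple (a sketch verifying the telescoping identity g(n+1) − g(n) = f(n)):

```python
import sympy as sp

n = sp.symbols('n')

# f and the antidifference g from the failpoints example above
f = 1/n - 2/(n - 3) + 1/(n - 5)
g = -1/(n - 5) - 1/(n - 4) + 1/(n - 3) + 1/(n - 2) + 1/(n - 1)

# g is an indefinite sum of f: the forward difference of g equals f
diff = sp.simplify(g.subs(n, n + 1) - g - f)
print(diff)  # 0
```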
|
Parity Bit Calculator | Transmission Errors
How to calculate an odd or even parity bit
Parity bits for the detection of errors
How to use our parity bit calculator
A final bit on parity bits
Even if it sounds odd, our parity bit calculator will introduce you to the field of error detection.
Errors happen all the time when transmissions are involved. Here you will learn how to quickly understand if corruption of a message occurred — at least when it's possible to do so!
In the following text, we explain:
What is an error, and why do we need to look for them;
What are parity bits and the types of parity bits;
How to use parity bits in error detection;
The limitations of parity bits; and
Some examples and instructions for our parity bit calculator.
In computer science and information theory, errors are corruptions of a message. Usually, messages are binary strings, which means sequences of 0s and 1s.
We can call the messages "words". This allows us to say that the alphabet, in our case, is made of binary digits.
If your alphabet is binary, your errors are bit flips, instances where a 1 turns into a 0, or a 0 turns into a 1.
Errors are dangerous: they can cause computers to crash or send the wrong message, and we don't want that! What to do, then?
With the coming of the Computer Age and the proliferation of transmissions (particularly digital transmission) came the need for protocols to identify errors. Richard Hamming was a pioneer in error detection and correction.
Here, at Omni Calculator, we have already covered many of his works: check out our Hamming codes calculator!
The general idea behind searching — and correcting — errors is to count bits. The technique used may be relatively complex (but efficient) like in Hamming codes or simple like the parity check.
The laws of Nature tell you that if you want to inform the receiver of your message about the presence — or lack thereof — of errors in your transmission, you have to add bits to your message, bits that will contain that information.
The simplest example of this addition is the parity bit.
There are two types of parity bits:
Odd parity bits; and
Even parity bits.
Calculating a parity bit, even or odd, is a straightforward thing to do.
For odd parity, count the 1s in the message, and if their number is even, add a 1 at a specified position; otherwise, add a 0.
For even parity, do the opposite: if the number of 1s is even, add a 0; otherwise, add a 1.
🙋 There is no rule that fixes the position of the parity bit; however, it is added conventionally at the end of the message.
Ok, it may not sound that easy: look at these examples.
Take the message 1101: it contains three 1s.
If you chose odd parity, you have to add a 0, obtaining 11010.
If you chose even parity, the added bit is a 1, obtaining 11011.
There is a helpful mathematical operation to write this process in rigorous terms: binary addition. This binary operation is nothing but the sum of the values of the bits in a binary message, modulo 2. Calling our message m, we define the binary addition as:
\sum_i m_i\ \text{mod}\ 2
If we calculate an even parity bit, we take the result as is. If we are using an odd parity bit, we take the one's complement of the result.
For the even parity of the message 1101, the value of the parity bit is:
\begin{align*} & (1+1+0+1)\ \text{mod}\ 2 = \\ & 3\ \text{mod}\ 2= 1 \end{align*}
When we are considering an odd parity bit, instead, the result must be "reversed" before use:
\begin{align*} & (1+1+0+1)\ \text{mod}\ 2 = \\ & 3\ \text{mod}\ 2= 1 \rightarrow 0 \end{align*}
Now you know how to add a parity bit to your message. But what's their purpose?
Imagine that Alice sends Bob a binary message. They agreed beforehand to use a specific type of parity, let's say, odd parity.
💡 Alice and Bob are the two people traditionally used in information theory when making examples. The names were chosen randomly to follow the alphabetic order; Charlie comes next. If someone is listening to the message, it's usually a bad guy (or girl, in fact), and we call her Eve, from eavesdropper.
The message is 0110: Alice computes its parity:
(0+1+1+0)\ \text{mod}\ 2= 0 \rightarrow 1
And adds the parity bit at the end: 01101. Bob receives the message and computes the overall parity:
(0+1+1+0+1 )\ \text{mod}\ 2= 1
🙋 Notice how the addition of an even (odd) parity bit makes the overall number of 1s in a message even (odd)!
He finds an odd result: the parity of the message and the parity bit agree. Bob knows that there are no single-bit errors in the message.
Let's flip a bit and see what happens. Alice transmits 01101, but Bob receives 01\textcolor{red}{0}01. He computes the parity:
(0+1+0+0+1 )\ \text{mod}\ 2= 0
Bob knows that the number of 1s in the message should be odd, and he understands that something went wrong. Alice has to send the message again.
🔎 The same reasoning applies if the parity bit is the one flipping: check it!
The position of the parity bit doesn't really matter, since the overall parity of the message doesn't depend on where the bit is placed. However, the convention is to add it to the end of the message. Try it without the calculator!
What if two bits flipped during the transmission?
We will now see the limitations of parity bits in error detection. Alice transmitted the usual message 01101, but she was very unlucky this time, and Bob received 10101. He computes the parity:
(1+0+1+0+1 )\ \text{mod}\ 2= 1
It checks out! The number of 1s is odd, so everything should be fine, right?
No. A single parity bit can't give information on an error involving two bits. It can inform of a mistake happening on three bits, but we would not be able to discriminate it from single-bit errors.
🙋 We say that parity bit error detection has a Hamming distance equal to 2. This means that it is enough to flip two bits to obtain a new "uncorrupted" message from another.
It doesn't matter if you want to calculate even parity bit or odd parity bit or if you want to encode or analyze a message. Our parity bit calculator can help you!
If you want to generate a parity bit, select generate in the first menu. It is the default choice. Then choose the type of parity, even or odd, and finally insert both the message and the position where you want the parity bit to be (the default is at the end of the message). We will generate the parity bit and print the message, ready for your transmissions!
If you want to check if a message was corrupted or not, select check, choose the parity, and insert the message: we will tell you as much as we can, but remember: only single-bit errors can be detected!
Hamming codes, parity bits, and Hamming distance are only some of the inhabitants of the complex and interconnected world of information theory. Our dedicated calculators can help you understand it. We hope our message reached you without errors! 😉
What are parity bits used for?
We use parity bits to detect errors during the communication of information. A message can suffer from interference, and the addition of a bit that gives information on the number of ones or zeroes in a string can tell us if corruption happened. Parity bits are limited because they can't help locate the error.
What is a parity bit?
A parity bit is a bit added to a binary message that carries information about the number of 1s in the string itself. There are two types of parity, even and odd. Even parity bits ensure that the number of ones in the message is even, and vice-versa for odd parity bits.
How to calculate the parity bit of the message 011101?
To calculate the parity of 011101 take the sum of the ones in the message and compute its modulo 2:
(1 + 1 + 1 + 1) mod 2 = 4 mod 2 = 0
The even parity bit is 0, and we transmit the message 0111010.
How to calculate the parity bit?
To calculate an even parity bit, follow these instructions:
Calculate the sum of the ones in the message;
Take the modulo 2 of this result; and
Attach the result to your message.
The number of ones in the message is now even. To calculate an odd parity bit:
Count the ones in your message;
Take the modulo 2 of the sum;
Take the complement; and
Attach it to the message.
The number of ones is now odd.
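The procedures above fit in a few lines of code. Here's a minimal sketch of a parity-bit generator and checker, reproducing the article's examples:

```python
# Minimal sketch of the even/odd parity procedures described above.

def parity_bit(message: str, odd: bool = False) -> str:
    """Return the parity bit for a string of '0'/'1' characters."""
    ones = message.count("1")
    bit = ones % 2                 # even-parity bit: sum of bits mod 2
    if odd:
        bit = 1 - bit              # odd parity: take the complement
    return str(bit)

def add_parity(message: str, odd: bool = False) -> str:
    """Append the parity bit at the end of the message (the convention)."""
    return message + parity_bit(message, odd)

def check(message_with_parity: str, odd: bool = False) -> bool:
    """True if the overall number of 1s matches the agreed parity."""
    ones = message_with_parity.count("1")
    return ones % 2 == (1 if odd else 0)

print(add_parity("1101"))             # '11011' (even parity)
print(add_parity("0110", odd=True))   # '01101' (odd parity)
print(check("01001", odd=True))       # False: a bit flipped in transit
```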
|
National Flag | Toph
Today is the 16th of December, a red-letter day in our national history. The whole country is adorned with red-green flags. Unlike every year, Afia can't go out to celebrate the day this year, because she has an important exam, 'Computer Interfacing', the next day. But she really wants to do at least something to make the day special. She has a large piece of green cloth and also a piece of red cloth. She wants to make a large national flag and fly it on the roof of her house. Before that, she may need to cut the cloth to give it the shape of our national flag. We know our national flag is bottle green in color and rectangular, in the proportion of 10:6, with a red circle in the middle. (Fig: A, B)
Suppose Afia has a rectangular piece of green cloth whose length is x meters and width is y meters. She agrees to cut exactly d meters (possibly zero) from the length and the same d meters from the width of the rectangular piece so that the remaining part maintains the correct ratio of our national flag (10:6). (See Fig: C for more details.) She is not yet concerned about the red circle in the middle. As she is busy studying for her exam, your task is to help her determine the new x′ and y′ generated after cutting d meters from both sides. Remember, she always wants to get the largest possible flag, and you have a normal scale by which you can measure only integer values (in meters), not fractional values. Print "Not possible" if it is not possible to obtain the ratio of our national flag mentioned above.
Input contains two space-separated integers x and y (1 ≤ x, y ≤ 10^15) as described above.
If it is possible to obtain the ratio x′ : y′ = 10 : 6, print x′ and y′ (separated by a space), as described above. Print "Not possible" (without quotes) otherwise.
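A minimal sketch of the arithmetic (our own illustration, not an official solution): requiring (x − d) : (y − d) = 10 : 6 gives 6(x − d) = 10(y − d), i.e. 4d = 10y − 6x, so d = (5y − 3x)/2 must be a non-negative integer that leaves positive sides.

```python
def cut_to_flag(x: int, y: int) -> str:
    # 6(x - d) = 10(y - d)  =>  4d = 10y - 6x  =>  d = (5y - 3x) / 2
    num = 5 * y - 3 * x
    if num < 0 or num % 2 != 0:      # d must be a non-negative integer
        return "Not possible"
    d = num // 2
    xp, yp = x - d, y - d
    if yp <= 0:                      # cutting everything away leaves no flag
        return "Not possible"
    return f"{xp} {yp}"
```

Since the ratio pins down d uniquely, the "largest possible flag" is automatic; for example, cloth of 12 × 10 meters yields a 5 × 3 flag.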
|
Borrowing - Teddy Cash
Why would I use Teddy Cash for borrowing?
The Teddy Cash protocol offers interest-free loans and is more capital efficient than other borrowing systems (i.e. less collateral is needed for the same loan). Instead of selling AVAX to have liquid funds, you can use the protocol to lock up your AVAX, borrow against the collateral to withdraw TSD, and then repay your loan at a future date.
For example: Borrowers speculating on future AVAX price increases can use the protocol to leverage their AVAX positions up to 11 times, increasing their exposure to price changes. This is possible because TSD can be borrowed against AVAX, sold on the open market to purchase more AVAX — rinse and repeat.*
*Note: This is not a recommendation for how to use Teddy Cash. Leverage can be risky and should be used only by those with experience.
Collateral is any asset which a borrower must provide to take out a loan, acting as a security for the debt. Currently, Teddy Cash only supports AVAX as collateral.
Is AVAX the only collateral accepted by Teddy Cash?
Yes, AVAX is the only collateral type accepted by Teddy Cash.
The protocol charges one-time borrowing and redemption fees that algorithmically adjust based on the last redemption time. For example: If more redemptions are happening (which means TSD is likely trading at less than 1 USD), the borrowing fee would continue to increase, discouraging borrowing.
Other systems (e.g. MakerDAO) require variable interest rates to make borrowing more or less favorable, but do so implicitly since borrowers would not feel the impact upfront. Given that this also needs to be managed via governance, Teddy Cash instead opts for a fully decentralized and direct feedback mechanism via one-off fees.
How can I borrow with Teddy Cash?
To borrow you must open a Trove and deposit a certain amount of collateral (AVAX) to it. Then you can draw TSD as long as you maintain a collateral ratio of at least 110%. A minimum debt of 2,000 TSD is required.
A Trove is where you take out and maintain your loan. Each Trove is linked to an Avalanche address and each address can have just one Trove. If you are familiar with Vaults or CDPs from other platforms, Troves are similar in concept.
Troves maintain two balances: one is an asset (AVAX) acting as collateral and the other is a debt denominated in TSD. You can change the amount of each by adding collateral or repaying debt. As you make these balance changes, your Trove’s collateral ratio changes accordingly.
Every time you draw TSD from your Trove, a one-off borrowing fee is charged on the drawn amount and added to your debt. Please note that the borrowing fee is variable (and determined algorithmically) and has a minimum value of 0.5% under normal operation. The fee is 0% during Recovery Mode. A 200 TSD Liquidation Reserve charge will be applied as well, but returned to you upon repayment of debt.
For example: With the borrowing fee at 0.5%, a borrower who draws 4,000 TSD of debt from his open Trove receives 3,781.09 TSD: the 200 TSD Liquidation Reserve is set aside and an issuance fee of 18.91 TSD (0.5% of the amount received) is deducted, so that 3,781.09 + 18.91 + 200 = 4,000.
This is the ratio between the Dollar value of the collateral in your Trove and its debt in TSD. The collateral ratio of your Trove will fluctuate over time as the price of AVAX changes. You can influence the ratio by adjusting your Trove’s collateral and/or debt — i.e. adding more AVAX collateral or paying off some of your debt.
For example: Let’s say the current price of AVAX is $3,000 and you decide to deposit 10 AVAX. If you borrow 10,000 TSD, then the collateral ratio for your Trove would be 300%.
If you instead took out 25,000 TSD that would put your ratio at 120%.
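The two examples can be checked with a one-line calculation (an illustration only, not protocol code):

```python
def collateral_ratio(collateral_avax: float, avax_price_usd: float,
                     debt_tsd: float) -> float:
    """Trove collateral ratio as a percentage:
    100 * (collateral value in USD) / (debt in TSD)."""
    return 100 * collateral_avax * avax_price_usd / debt_tsd
```

With 10 AVAX at $3,000 and 10,000 TSD of debt this returns 300 (%), and with 25,000 TSD of debt it returns 120 (%), matching the examples above.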
The minimum collateral ratio (or MCR for short) is the lowest ratio of collateral to debt that will not trigger a liquidation under normal operations (aka Normal Mode). This is a protocol parameter that is set to 110%. So if your Trove has a debt of 10,000 TSD, you would need at least $11,000 worth of AVAX posted as collateral to avoid being liquidated.
When you open a Trove and draw a loan, 200 TSD is set aside as a way to compensate gas costs for the transaction sender in the event your Trove being liquidated. The Liquidation Reserve is fully refundable if your Trove is not liquidated, and is given back to you when you close your Trove by repaying your debt. The Liquidation Reserve counts as debt and is taken into account for the calculation of a Trove's collateral ratio, slightly increasing the actual collateral requirements.
When TSD is redeemed, the AVAX provided to the redeemer is allocated from the Trove(s) with the lowest collateral ratio (even if it is above 110%). If at the time of redemption you have the Trove with the lowest ratio, you will give up some of your collateral, but your debt will be reduced accordingly.
The USD value by which your AVAX collateral is reduced corresponds to the nominal TSD amount by which your Trove’s debt is decreased. You can think of redemptions as if somebody else is repaying your debt and retrieving an equivalent amount of your collateral. As a positive side effect, redemptions improve the collateral ratio of the affected Troves, making them less risky.
Let’s say you own a Trove with 2 AVAX collateralized and a debt of 3,200 TSD. The current price of AVAX is $2,000. This puts your collateral ratio (CR) at 125% (= 100% * (2 * 2,000) / 3,200). Let’s imagine this is the lowest CR in the Teddy Cash system and look at two examples of a partial redemption and a full redemption:
Somebody redeems 1,200 TSD for 0.6 AVAX and thus repays 1,200 TSD of your debt, reducing it from 3,200 TSD to 2,000 TSD. In return, 0.6 AVAX, worth $1,200, is transferred from your Trove to the redeemer. Your collateral goes down from 2 to 1.4 AVAX, while your collateral ratio goes up from 125% to 140% (= 100% * (1.4 * 2,000) / 2,000).
Somebody redeems 6,000 TSD for 3 AVAX. Given that the redeemed amount is larger than your debt minus 200 TSD (set aside as a Liquidation Reserve), your debt of 3,200 TSD is entirely cleared and your collateral gets reduced by $3,000 worth of AVAX, leaving you with a collateral of 0.5 AVAX (= 2 - 3,000 / 2,000).
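The partial-redemption example above can be sketched as follows (a simplified model that ignores redemption fees; the function name is our own):

```python
def redeem_against_trove(collateral_avax: float, debt_tsd: float,
                         price_usd: float, redeemed_tsd: float):
    """Apply a partial redemption: the redeemer repays `redeemed_tsd` of the
    Trove's debt and receives AVAX of equal dollar value in exchange."""
    assert redeemed_tsd <= debt_tsd - 200, "cannot redeem past debt minus the reserve"
    new_debt = debt_tsd - redeemed_tsd
    new_coll = collateral_avax - redeemed_tsd / price_usd
    cr_percent = 100 * new_coll * price_usd / new_debt
    return new_coll, new_debt, cr_percent
```

Running it on the example Trove (2 AVAX, 3,200 TSD debt, AVAX at $2,000, 1,200 TSD redeemed) reproduces 1.4 AVAX, 2,000 TSD, and a 140% collateral ratio.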
By making liquidation instantaneous and more efficient, Teddy Cash needs less collateral to provide the same security level as similar protocols that rely on lengthy auction mechanisms to sell off collateral in liquidations.
You can sell the borrowed TSD on the market for AVAX and use the latter to top up the collateral of your Trove. That allows you to draw and sell more TSD, and by repeating the process you can reach the desired leverage ratio.
Assuming perfect price stability (1 TSD = $1), the maximum achievable leverage ratio is 11x. It is given by the formula:
\text{maximum leverage ratio} = \frac{\text{MCR}}{\text{MCR} - 100\%}
where MCR is the Minimum Collateral Ratio; with MCR = 110%, the formula yields 110% / 10% = 11x.
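Plugging MCR = 110% into the formula can be done as a quick numerical check (an illustration, not protocol code):

```python
def max_leverage(mcr: float) -> float:
    """Maximum leverage ratio = MCR / (MCR - 100%),
    with MCR expressed as a fraction (110% -> 1.10)."""
    return mcr / (mcr - 1.0)
```

At the protocol's MCR of 110% this gives 11x; a hypothetical 150% MCR would cap leverage at 3x.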
|
Gold standards – fineness
How to calculate gold weight – Weight of a gold coin
Example: Using the gold weight calculator
The gold weight calculator will return the amount of gold or silver you need to make basic shapes like wires, cylinder bars, or sheets out of the precious metals. The calculator takes into account the dimensions of the basic shapes and estimates the volume. You can select from different types of gold standards to find the mass of gold or silver needed to build the chosen shape. You can also enter the cost of gold or silver per unit mass to estimate the expenses. Read on to understand how to calculate gold weight for coins or bars.
🔎 Also, check out our gold melt calculator to find the value of the gold based on its weight and purity.
Before estimating the mass and cost for the project, let's look at different gold standards to understand the calculation of gold weight. Fineness denotes the purity of a precious metal like gold, platinum, or silver. Often other metals are added to the pure base metal to improve its overall properties. Such additions enhance the durability and hardness of the objects made from the alloy. Alloying also makes different colors possible and reduces the cost of jewelry and other products.
The fineness denotes the purity of a platinum, gold, or silver alloy in parts per thousand of pure metal. For instance, the minimum gold standard is 333, which contains 33.3% pure gold and 66.7% other metals. Similarly, other standards for gold are:
Gold also has a karat system (K), which measures gold purity in parts per 24 parts whole. Mathematically:
\quad K = 24 \frac{M_g}{M_m}
where:
K – Karat rating of the material;
M_g – Mass of pure gold in the alloy; and
M_m – Total mass of the material.
To calculate gold weight:
Select the shape from the list – sheet, wire, cylinder, or rondelle.
Select the material to pick fineness, or you can directly enter the density of the material.
Enter the percent of material wasted during melting and fabrication.
Fill in the respective dimensions of the shapes.
The gold weight calculator will return the mass of the object using the volume.
Insert the cost per unit mass for the metal/alloy to receive a cost estimate.
The calculator will return the cost of material used.
The calculator covers the basic shapes, and if you wish to try other forms, say, the weight of a gold bar, you can try our metal weight calculator. Otherwise, if you know the price of your gold bar, you can use this tool as a gold price per gram calculator. If you're interested in how much a certain amount of money would weigh in various denominations, you can check out the money weight calculator.
The cost estimate considers the wastage; therefore, it is the cost of material used and not the cost of shape fabricated.
Calculate the mass and cost of a sheet having the following dimensions:
1 \text{ m} \times 0.5 \text{ m} \times 5 \text{ mm}
made using the fine gold. Take wastage percentage as 5% and price of gold as $65.56 / g.
Select the shape from the list as sheet.
Select the material to pick fineness as Gold - fine.
Enter the percent of material wasted as 5%.
Fill in the sheet's length, width, and thickness as 1 m, 0.5 m, and 5 mm, respectively.
The gold weight calculator will return the mass of the object, after accounting for wastage as 45.84 kg.
\scriptsize \qquad \begin{align*} \text{Mass} &= \text{Volume} \times \text{Density}\\ &= 1 \times 0.5 \times 0.005 \times 19300 \\ &= 48.25 \text{ kg}\\ \text{Mass less wastage} &= 48.25 \times (1 - 0.05)\\ &= 45.84\ \text{kg} \end{align*}
Insert the cost per unit mass for the fine gold as $65.56 / g.
The calculator will return the cost of material used as $3,163,270.
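The worked example can be reproduced in a short script mirroring the calculator's steps (a sketch; the density of fine gold, 19,300 kg/m³, is taken from the calculation above):

```python
DENSITY_FINE_GOLD = 19300.0  # kg/m^3

def sheet_mass_and_cost(length_m: float, width_m: float, thickness_m: float,
                        wastage_fraction: float, price_per_gram: float):
    """Mass of the finished sheet and cost of the material used.
    The wasted material does not end up in the sheet but is still paid for."""
    volume = length_m * width_m * thickness_m            # m^3
    mass_used = volume * DENSITY_FINE_GOLD               # kg of material bought
    mass_object = mass_used * (1 - wastage_fraction)     # kg in the finished sheet
    cost = mass_used * 1000 * price_per_gram             # convert kg -> g
    return mass_object, cost
```

For the 1 m × 0.5 m × 5 mm fine-gold sheet with 5% wastage at $65.56/g, this returns approximately 45.84 kg and $3,163,270.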
What do you mean by 18 karat gold?
18 karat gold refers to 18 parts of pure gold per 24 parts whole. In other words, 18 karat gold contains 18 / 24 = 0.75, or 75%, pure gold, and the remaining 25% is an alloying metal like copper, zinc, or nickel.
How do I calculate mass of a gold sheet?
To calculate the mass of a gold sheet:
Multiply the length, width, and thickness to obtain the volume of the gold sheet.
Find the density of the gold using the fineness or karat value.
Multiply the density and volume to obtain the mass of the gold sheet.
Similarly, you can also find the weight of the gold bar or any other shape using its dimensions.
What do you mean by gold fineness?
Gold fineness refers to the amount of pure gold in the whole per thousand parts. It is represented as 333, 585, 916, or 999. The 999 is the purest gold in quality and the most expensive. The minimum acceptable gold standard is the 333 or 8 karat gold.
How do I find karat value from mass of gold?
To find the karat value from the mass of gold:
Divide the mass of pure gold in the alloy by the total mass.
Multiply the resultant by 24 to obtain the karat value from the mass of gold.
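The karat relations K = 24 · M_g / M_m and its inverse can be expressed directly (an illustrative snippet):

```python
def karat(mass_gold: float, mass_total: float) -> float:
    """Karat rating K = 24 * M_g / M_m."""
    return 24 * mass_gold / mass_total

def purity_from_karat(k: float) -> float:
    """Fraction of pure gold in a k-karat alloy."""
    return k / 24
```

For example, an alloy that is 75% pure gold rates as 18 karat, and 18 karat gold is 0.75 pure by mass.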
|
Lobbying Is a Biologically Necessary Transaction Cost in a Democracy
2Financial Supervisory Service, Government of Korea, Seoul, South Korea
\begin{array}{l}Q=PT/S=\left[{P}_{2}\left(1+t\right)\right]/\left[{P}_{1}\left(1+s\right)\right]\\ S=\left(1+s\right)\\ T=\left(1+t\right)\end{array}
P: world product price ratio = {P}_{2}/{P}_{1}
It is also assumed that the probability of victory by the pro-capital party is
\pi =\pi \left(K,L,S,T\right)
where π is increasing in K and T and decreasing in L and S. Young and Magee (1986) suppose that each lobby chooses its political contributions in order to maximize the expected utility of a representative owner of the corresponding factor, while each party chooses its policy in order to maximize its probability of election. Assume that the capital lobby’s choices are made as if made by a representative individual who owns K0 units of capital and has an indirect utility function,
{V}_{R}={V}_{R}\left(Q,{I}_{R}\right)
where IR is his or her income. K is the amount of capital devoted to politics. K is chosen to maximize expected utility, i.e., the lobby maximizes its expected utility. The capital owner takes account of the impact of the election on his cost of living because Q appears independently in his indirect utility function. The representative labor owner owns L0 units of labor and has an indirect utility function,
{V}_{W}={V}_{W}\left(Q,{I}_{W}\right)
, where IW is his or her income. L is the amount of labor devoted to politics and is chosen to maximize the laborer’s expected utility. Let
Let {K}^{*}\left(L,S,T\right) and {L}^{*}\left(K,S,T\right) be the optimal policies of the two lobbies. Assume that {K}_{S}^{\ast }>0 and {L}_{T}^{\ast }>0, i.e., if a party proposes a domestic price more favorable to the lobby that it leads, then it attracts more resources from that lobby.
Since the pro-capital party is a Stackelberg leader with respect to the capital lobby and adopts Nash behavior toward the other players, it maximizes its probability of victory by maximizing
\pi \left({K}^{*}\left(L,S,T\right),L,S,T\right)
with respect to S. Similarly, the pro-labor party maximizes its probability of victory by minimizing
\pi \left(K,{L}^{*}\left(K,S,T\right),S,T\right)
with respect to T. Let the optimal policies of the two parties be {S}^{*}\left(L,T\right) and {T}^{*}\left(K,S\right), respectively. The action of each player in the political game depends on the actions of two or three of the other players, as expressed by the reaction functions {K}^{*}\left(L,S,T\right), {L}^{*}\left(K,S,T\right), {S}^{*}\left(L,T\right), and {T}^{*}\left(K,S\right). An equilibrium is a set of mutually consistent actions \left({K}_{e},{L}_{e},{S}_{e},{T}_{e}\right):
{K}_{e}={K}^{*}\left({L}_{e},{S}_{e},{T}_{e}\right), \quad {L}_{e}={L}^{*}\left({K}_{e},{S}_{e},{T}_{e}\right), \quad {S}_{e}={S}^{*}\left({L}_{e},{T}_{e}\right), \quad {T}_{e}={T}^{*}\left({K}_{e},{S}_{e}\right)
{C}_{1}\left(R,W\right)={R}^{\alpha }{W}^{1-\alpha }, \quad {C}_{2}\left(R,W\right)={R}^{\beta }{W}^{1-\beta }
W={Q}^{N}, \quad R={Q}^{-M}
where N\equiv \alpha /\left(\alpha -\beta \right) and M\equiv \left(1-\alpha \right)/\left(\alpha -\beta \right).
{V}_{R}\left({I}_{R},Q\right)={I}_{R}{Q}^{-\gamma },\text{\hspace{0.17em}}\text{\hspace{0.17em}}{V}_{W}\left({I}_{W},Q\right)={I}_{W}{Q}^{-\delta }
where IR, IW are the incomes of the respective factor owners and 0 ≤ γ, δ ≤ 1. By (2),
{I}_{R}=\left({K}_{0}-K\right){Q}^{-M}. Therefore, the utility of the capital lobby when the domestic price is Q is {V}_{R}=\left({K}_{0}-K\right){Q}^{-m}, where m\equiv M+\gamma. Similarly, the utility of the labor lobby is {V}_{W}=\left({L}_{0}-L\right){Q}^{n}, where n\equiv N-\delta.
Since 1 > α > β > 0, we have m, n > 0. Thus an increase in Q harms the capital lobby and benefits the labor lobby, as implied by the Stolper-Samuelson theorem. By (3), combinations of income IR and commodity price Q such that IRQ−γ is fixed will yield the same utility for the capital owner. Hence, Qγ can be seen as a cost of living index.
{Q}^{-m}={Q}^{-M}/{Q}^{\gamma } can be interpreted as the "real" return to capital: the return Q−M in terms of the numeraire, adjusted by the cost of living index Qγ. Similarly, Qn is the real return to labor.
Assume that \pi \left(L,K,S,T\right)—giving the pro-capital party’s probability of election—has the logit form. Thus, the pro-capital party’s odds of victory, Φ, is a log-linear function of the explanatory variables:
\mathrm{log}\Phi =\mathrm{log}\left[\pi /\left(1-\pi \right)\right]=\epsilon +\kappa \mathrm{log}K-\lambda \mathrm{log}L-\sigma \mathrm{log}S+\tau \mathrm{log}T
where κ, λ, σ, τ are positive constants. For the sake of simplicity, assume that the elasticities of the electoral odds with respect to the resources K and L are unity, so that κ = λ = 1. We also assume that ε = 0. This yields the probability of election for the pro-capital party as
\pi =1/\left(1+L{S}^{\sigma }/K{T}^{\tau }\right)=K{T}^{\tau }/\left(K{T}^{\tau }+L{S}^{\sigma }\right)
when σ = τ is also imposed.
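With κ = λ = 1 and ε = 0, the election probability π = KT^τ / (KT^τ + LS^σ) can be evaluated numerically (an illustrative sketch of the logit form above, not part of the paper's estimation code):

```python
def win_probability(K: float, L: float, S: float, T: float,
                    sigma: float, tau: float) -> float:
    """Pro-capital party's election probability under the logit form,
    pi = K*T^tau / (K*T^tau + L*S^sigma)."""
    capital_side = K * T ** tau
    labor_side = L * S ** sigma
    return capital_side / (capital_side + labor_side)
```

When the two lobbies commit equal resources and the parties propose symmetric policies, the election is a coin flip; raising K (or T) raises π, while raising L (or S) lowers it, matching the assumed monotonicity.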
\underset{K\left(0<K<{K}_{0}\right)}{\mathrm{max}}e\left(K\right)=\left({K}_{0}-K\right)\left\{\left[K{T}^{\tau }{\left(P{S}^{\sigma }\right)}^{-m}+L{S}^{\sigma }{\left(P{T}^{\tau }\right)}^{-m}\right]/\left[K{T}^{\tau }+L{S}^{\sigma }\right]\right\}
\underset{L\left(0<L<{L}_{0}\right)}{\mathrm{max}}f\left(L\right)=\left({L}_{0}-L\right)\left\{\left[K{T}^{\tau }{\left(P{S}^{\sigma }\right)}^{n}+L{S}^{\sigma }{\left(P{T}^{\tau }\right)}^{n}\right]/\left[K{T}^{\tau }+L{S}^{\sigma }\right]\right\}
K\left(L,S,T\right)=\left(L{S}^{\sigma }/{T}^{\tau }\right)\left\{{\left[\left(1-{\left({S}^{\sigma }{T}^{\tau }\right)}^{-m}\right)\left(1+{K}_{0}{T}^{\tau }/L{S}^{\sigma }\right)\right]}^{1/2}-1\right\}
The optimal policy of the capital lobby is:
{K}^{*}\left(L,S,T\right)=\mathrm{max}\left(0,K\left(L,S,T\right)\right)
L\left(K,S,T\right)=\left(K{T}^{\tau }/{S}^{\sigma }\right)\left\{{\left[\left(1-{\left({S}^{\sigma }{T}^{\tau }\right)}^{-n}\right)\left(1+{L}_{0}{S}^{\sigma }/K{T}^{\tau }\right)\right]}^{1/2}-1\right\}
The optimal policy of the labor lobby is:
{L}^{*}\left(K,S,T\right)=\mathrm{max}\left(0,L\left(K,S,T\right)\right)
1+m+m{S}^{\sigma }L/{K}_{0}{T}^{\tau }-{\left({S}^{\sigma }{T}^{\tau }\right)}^{m}=0
m>1
m=1
L<{K}_{0}{\left({T}^{\tau }\right)}^{2}
m<1
L<{K}_{0}{\left({T}^{\tau }\right)}^{2}m{\left(1-m\right)}^{\left(1-m\right)/m}
0<S<{\left\{{K}_{0}{\left({T}^{\tau }\right)}^{\left(1+m\right)}/L\right\}}^{1/\left(1-m\right)}
\left\{{\left({T}^{\tau }\right)}^{m}-1-m\right\}\left({T}^{\tau }\right){K}_{0}/m<L
1=\frac{{\left\{1-\left(1+\left(m/\sigma \right)\right){r}^{-m}\right\}}^{1/2}}{1-{r}^{-m}}+\frac{{\left\{1-\left(1+\left(n/\tau \right)\right){r}^{-n}\right\}}^{1/2}}{1-{r}^{-n}}
a\left(m,n\right)\equiv -1+\left(1-{r}_{e}^{-m}\right)/{\left\{1-\left(1+\left(m/\sigma \right)\right){r}_{e}^{-m}\right\}}^{1/2}
b\left(m,n\right)\equiv -1+\left(1-{r}_{e}^{-n}\right)/{\left\{1-\left(1+\left(n/\tau \right)\right){r}_{e}^{-n}\right\}}^{1/2}
c\left(m,n\right)\equiv \left(m/\sigma \right)/\left({r}_{e}^{m}-\left(m/\sigma \right)-1\right)
d\left(m,n\right)\equiv \left(n/\tau \right)/\left({r}_{e}^{n}-\left(n/\tau \right)-1\right)
Suppose 1/{r}_{e}<\left(d/bc\right){K}_{0}/{L}_{0}<{r}_{e}. Then the country has a unique political-economic interior equilibrium \left({K}_{e},{L}_{e},{S}_{e},{T}_{e}\right), where
{K}_{e}={K}_{0}a/c, \quad {L}_{e}={L}_{0}b/d,
{S}_{e}={\left\{\left({K}_{0}/{L}_{0}\right){r}_{e}^{\sigma }d/bc\right\}}^{1/\left(\sigma +\tau \right)}, \quad {T}_{e}={\left\{\left({L}_{0}/{K}_{0}\right){r}_{e}^{\tau }c/ad\right\}}^{1/\left(\sigma +\tau \right)},
and {r}_{e}={S}_{e}{T}_{e}=\left(1+{s}_{e}\right)\left(1+{t}_{e}\right).
Define the policy product of the tariff and the export subsidy as r=\left(1+s\right)\left(1+t\right), so that Equation (13) can be rewritten as
1=\frac{{\left\{1-\left(1+\left(m/\sigma \right)\right){r}^{-m}\right\}}^{1/2}}{1-{r}^{-m}}+\frac{{\left\{1-\left(1+\left(n/\tau \right)\right){r}^{-n}\right\}}^{1/2}}{1-{r}^{-n}}
\begin{array}{l}{K}_{e}/{K}_{0}=a/c\\ =\left[-1+\left(1-{r}_{e}^{-m}\right)/{\left\{1-\left(1+\left(m/\sigma \right)\right){r}_{e}^{-m}\right\}}^{1/2}\right]/\left[\left(m/\sigma \right)/\left({r}_{e}^{m}-\left(m/\sigma \right)-1\right)\right]\end{array}
\begin{array}{l}{L}_{e}/{L}_{0}=b/d\\ =\left[-1+\left(1-{r}_{e}^{-n}\right)/{\left\{1-\left(1+\left(n/\tau \right)\right){r}_{e}^{-n}\right\}}^{1/2}\right]/\left[\left(n/\tau \right)/\left({r}_{e}^{n}-\left(n/\tau \right)-1\right)\right]\end{array}
\alpha =\sum _{i}{\alpha }_{i}\frac{{\left(\text{netexport}\right)}_{i}}{\sum _{j}{\left(\text{netexport}\right)}_{j}}
\mathrm{ln}\left(1+t\right)=1/\left(\sigma +\tau \right)\mathrm{ln}\left\{\left({L}_{0}/{K}_{0}\right)c/ad\right\}+\tau /\left(\sigma +\tau \right)\mathrm{ln}r
\left\{\mathrm{ln}\left(1+t\right)-0.5\mathrm{ln}r\right\}=1/\left(\sigma +\tau \right)\mathrm{ln}\left\{\left({L}_{0}/{K}_{0}\right)c/ad\right\}
\left\{\mathrm{ln}{\left(1+t\right)}_{it}-0.5\mathrm{ln}{r}_{i}\right\}={f}_{i}+{g}_{t}+1/\left(\sigma +\tau \right)\mathrm{ln}\left\{\left({L}_{it}/{K}_{it}\right){c}_{i}/{a}_{i}{d}_{i}\right\}
Magee, S.P. and Yoo, K.-Y. (2019) Lobbying Is a Biologically Necessary Transaction Cost in a Democracy. Modern Economy, 10, 1589-1612. https://doi.org/10.4236/me.2019.106105
1. Magee, S., Brock, W. and Young, L. (1989) Black Hole Tariffs and Endogenous Policy Theory: Political Economy in General Equilibrium. Cambridge University Press, New York.
2. Young, L. and Magee, S. (1986) Endogenous Protection, Factor Returns and Resource Allocation. Review of Economic Studies, 53, 407-419. https://doi.org/10.2307/2297636
3. Magee, C. and Magee, S. (2004) The Madison Paradox and the Low Cost of Special-Interest Legislation. In: Nelson, D., Ed., The Political Economy of Policy Reform, Elsevier, New York, 131-154.
4. Magee, S. (1993) Bioeconomics and the Survival Model: The Economic Lessons of Evolutionary Biology. Public Choice, 77, 117-132. https://doi.org/10.1007/BF01049225
5. Dorfman, S.S. (1958) Efficient Programs of Capital Accumulation. In: Linear Programming and Economic Analysis, McGraw Hill, New York, 331.
6. Tullock, G. (1967) The Welfare Cost of Tariffs, Monopolies and Theft. Western Economic Journal, 5, 224-232. https://doi.org/10.1111/j.1465-7295.1967.tb01923.x
7. Krueger, A. (1974) The Political Economy of the Rent Seeking Society. American Economic Review, 64, 291-303.
8. Brock, W. and Magee, S. (1978) The Economics of Special Interest Politics: The Case of the Tariff. American Economic Review, 68, 246-250.
9. Magee, C. and Magee, S. (2003) The Effects of Rent Seeking on Economic Development: An Increasing Divergence between Rich and Poor Countries? In: Ramaswamy, S. and Cason, J.W., Eds., Development and Democracy, Middlebury College Press, Hanover, 123-144.
10. Becker, G. (1983) A Theory of Competition among Pressure Groups for Political Influence. Quarterly Journal of Economics, 98, 371-400. https://doi.org/10.2307/1886017
11. Wittman, D.A. (1989) Why Democracies Produce Efficient Results. Journal of Political Economy, 97, 1395-1422. https://doi.org/10.1086/261660
12. Barro, R. (1999) The Determinants of Democracy. Journal of Political Economy, 107, 158-183. https://doi.org/10.1086/250107
13. United Nations (1971) Classification of Commodities by Industrial Origin: Links between the Standard International Trade Classification and the International Standard Industrial Classification, Stat. Papers Series M, No. 43, Rev. 1.
14. Mitchell, B.R. (1992) a. International Historical Statistics, Europe, 1750-1988, Stockton. b. International Historical Statistics—The Americas, Stockton. c. International Historical Statistics: Africa, Asia and Oceania, 1750-1988. https://doi.org/10.1007/978-1-349-12791-7
15. Magee, S., Lee, H. and Kim, J.Y. (2019) Evidence for the Tariff-Lobbying Paradox: Endogenous Tariffs Fall as Protectionist Lobbying Rises. Applied Economics, 51, 4368-4384. https://doi.org/10.1080/00036846.2019.1591604
|
Quantum Tunneling - Maple Help
Home : Support : Online Help : Math Apps : Natural Sciences : Physics : Quantum Tunneling
In the world of classical physics, a particle with energy E cannot pass through a potential barrier of height {V}_{0} when E<{V}_{0}. Yet in quantum mechanics, a particle can access regions that are classically forbidden even though E<{V}_{0}. This phenomenon is called quantum tunneling, and it is responsible for the functionality of the transistors which are used to make computers. Because the particle's wavefunction must be continuous everywhere, the probability of finding the particle cannot vanish instantly where E<{V}_{0}; instead, it decays exponentially in these regions. Quantum tunneling also plays a critical role in nuclear fusion, electronics, and quantum computation. In this document, you can derive and animate the effects of a stationary wavefunction tunneling through a potential barrier.
Derivation of time-independent Schrödinger equation
The time-dependent Schrödinger equation for the wavefunction \mathrm{Ψ}\left(\stackrel{→}{r},t\right) is
i \mathrm{ℏ}\mathit{ }\frac{∂\mathrm{Ψ}}{∂t}=\stackrel{ˆ}{H }\mathrm{Ψ}
where \stackrel{ˆ}{H } is the Hamiltonian operator, and \mathrm{ℏ} is the reduced Planck constant. For the case of a single non-relativistic particle, the Hamiltonian takes the form -\frac{{\mathrm{ℏ}}^{2}}{2 m}{\nabla }^{2}+V\left(\stackrel{→}{r},t\right), giving
i\mathit{ }\mathrm{ℏ}\mathit{ }\frac{∂\mathrm{Ψ}}{∂t}=\left(-\frac{{\mathrm{ℏ}}^{2}}{2 m}{\nabla }^{2}+V\left(\stackrel{→}{r},t\right)\right) \mathrm{Ψ}
where m is the mass of the particle, V is its potential energy, and {\nabla }^{2} is the Laplacian. Simplifying to one dimension, and assuming the potential is constant in time, you get:
i\mathit{ }\mathrm{ℏ}\mathit{ }\frac{∂\mathrm{Ψ}}{∂t}=\left(-\frac{{\mathrm{ℏ}}^{2}}{2 m}{}^{}\frac{{∂}^{2}}{∂{x}^{2}}+V\left(x\right)\right) \mathrm{Ψ}
Recalling that the modulus squared of the wavefunction,
{\left|\mathrm{Ψ}\left(x, t\right)\right|}^{2}
, is the probability density of the particle, a stationary state is a state where this probability density does not change in time, that is, the particle stays in the same state in every observable way. As such, you observe that
\mathrm{Ψ}\left(x, t\right)
must take the form:
\mathrm{\Psi }\left(x, t\right) = \mathrm{ψ}\left(x\right) {ⅇ}^{i \mathrm{θ}\left(x,t\right)}
for some real-valued function \mathrm{θ}\left(x,t\right). Including the further restriction that the potential barrier V\left(x\right) is a well-behaved function with no vertical asymptotes, it can be shown that \mathrm{θ}\left(x,t\right) = -\frac{E}{\mathrm{ℏ}}\mathit{ }t, where E is the energy of the particle. Plugging this solution into the time-dependent Schrödinger equation produces the time-independent Schrödinger equation:
\left(-\frac{{\mathrm{ℏ}}^{2}}{2 m} \frac{{∂}^{2}}{∂{x}^{2}}+ V\left(x\right) \right) \mathrm{ψ}\left(x\right)=E \mathrm{ψ}\left(x\right)
\stackrel{ˆ}{H }\mathrm{ψ}=E \mathrm{ψ}
Derivation of tunneling
Consider the one-dimensional time-independent Schrödinger equation for a piecewise constant potential
V\left(x\right)
Assume that Region 1 of space, on the interval \left(-\infty , 0\right], has potential energy {V}_{1}. Region 2 has potential energy {V}_{2} and is on the interval \left(0, a\right], and lastly, Region 3 has potential energy {V}_{3} on \left(a, \infty \right). The total potential can be mathematically written as:
V\left(x\right) = \left\{\begin{array}{cc}{V}_{1},& x≤0,\\ {V}_{2},& 0<x ≤ a,\\ {V}_{3},& a<x,\end{array}\right.
This is a very general derivation because you can vary the height of each potential region along with the central width a. The time-independent Schrödinger equation must be solved in the three regions and the solutions connected by junction conditions, that is, the requirement that the wavefunction and its derivative be continuous on the boundaries. If you call the three solutions {\mathrm{ψ}}_{1}\mathit{,}{\mathrm{ψ}}_{2}\mathit{,}{\mathrm{ψ}}_{3} respectively, then the junction conditions are:
{\psi }_{1}\left(0\right)= {\psi }_{2}\left(0\right),\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}{\psi }_{2}\left(a\right) = {\psi }_{3}\left(a\right), \phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathit{ }\mathrm{ψ}{\mathit{'}}_{1}\left(0\right)= \mathrm{ψ}{\mathit{'}}_{2}\left(0\right),\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\mathit{ }\mathrm{ψ}{\mathit{'}}_{2}\left(a\right) = \mathrm{ψ}{\mathit{'}}_{3}\left(a\right),
where the primes denote differentiation with respect to x. The solutions to the Schrödinger equation for E>{V}_{1} in these three regions are:
{\psi }_{1}\left(x\right) = A\cdot {e}^{i\cdot {k}_{1}\cdot x}+B\cdot {e}^{-i\cdot {k}_{1}\cdot x},\phantom{\rule[-0.0ex]{0.0em}{0.0ex}} {\mathrm{ψ}}_{2} \left(x\right) = C\cdot {e}^{i\cdot {k}_{2}\cdot x}+D\cdot {e}^{-i\cdot {k}_{2}\cdot x},\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}{\psi }_{3}\left(x\right) = F\cdot {e}^{i\cdot {k}_{3}\cdot x}+G\cdot {e}^{-i\cdot {k}_{3}\cdot x},\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}
{k}_{1}= \sqrt{\frac{2\cdot m\cdot \left(E-{V}_{1}\right)}{{\mathrm{ℏ}}^{2}}},
{k}_{2}= \sqrt{\frac{2\cdot m\cdot \left(E-{V}_{2}\right)}{{\mathrm{ℏ}}^{2}}},{k}_{3}= \sqrt{\frac{2\cdot m\cdot \left(E-{\mathrm{V}}_{3}\right)}{{\mathrm{ℏ}}^{2}}}.
When E>V, the wavefunction is a complex plane wave with the form {e}^{i k x} with real-valued k. When E<V, the form of the solution is {e}^{k x} with a real-valued k. The case E=V has a different solution and is not considered here.
To extract more physically realizable quantities, it is necessary to assume that the particle or wavefunction 'originates' from only one side of the potential barrier. Physically, this follows from the assumption that there is no source of particles on the right-hand side traveling in the -x direction. This makes the constant G=0, since G\cdot {e}^{-i\cdot {k}_{3}\cdot x} is a plane wave traveling in the -x direction.
Now by applying the junction conditions above, you can derive the constants B, C, D, F, in terms of A, the amplitude of the wave. This can be very messy, especially in our highly general case of arbitrary
{V}_{1}, {V}_{2}, {V}_{3}.
To extract physically meaningful quantities from these abstract functions, the reflection and transmission coefficients are defined as the ratios:
R = \frac{{\left|B\right|}^{2}}{{|A|}^{2}},\phantom{\rule[-0.0ex]{0.0em}{0.0ex}}T = \frac{{\left|F\right|}^{2}}{{\left|A\right|}^{2}},
R + T = 1.
This gives the amount of the probability density that is "reflected" or "transmitted", similar to the coefficients used in optics. Remember that this is a solution of the time-independent Schrödinger equation, so the particle is in a stationary state. It is not traveling through the potential barrier in time; rather, it leaks through the barrier as a result of the plane-wave solution to the Schrödinger equation.
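For the special case V1 = V3 = 0 with 0 < E < V2 (a rectangular barrier in free space, simpler than the general three-region setup above), the transmission coefficient has the well-known closed form T = [1 + V2² sinh²(κa) / (4E(V2 − E))]⁻¹ with κ = √(2m(V2 − E))/ℏ. A numerical sketch for an electron:

```python
import math

HBAR = 1.054_571_817e-34    # reduced Planck constant, J*s
M_E = 9.109_383_7015e-31    # electron mass, kg
EV = 1.602_176_634e-19      # joules per electronvolt

def transmission(E_eV: float, V2_eV: float, a_m: float, m: float = M_E) -> float:
    """Transmission coefficient through a rectangular barrier with
    V1 = V3 = 0 and 0 < E < V2 (barrier width a_m in meters)."""
    E, V2 = E_eV * EV, V2_eV * EV
    kappa = math.sqrt(2 * m * (V2 - E)) / HBAR   # decay constant inside barrier
    return 1.0 / (1.0 + (V2**2 * math.sinh(kappa * a_m)**2) / (4 * E * (V2 - E)))
```

Widening the barrier makes sinh(κa) grow, so T falls off roughly exponentially, which is the answer to "what happens when you make the potential width very thick?" below.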
Use the sliders to adjust the shape of the potential barrier, and the energy and amplitude of the incoming wave. You can also stop or play the animation using the checkbox, or move through time manually with the slider. Use the checkboxes above the plot to show different components of the wavefunction.
What happens when you make the potential width very thick?
Try adjusting the height of the potentials.
Try plotting the real, imaginary, or square of the wavefunction.
Interactive parameters: barrier width a; initial potential V_1; potential barrier V_2; final potential V_3; energy E; amplitude A; time t; transmission coefficient T; reflection coefficient R; wavefunction components.
Radar Horizon Calculator | Distance of a Target
The effects of atmospheric refraction on radar
How to calculate the radar horizon
Practical examples of radar horizon
Pushing the envelope of radar detection: over the horizon radar
How to use our radar horizon calculator
Radars revolutionized the war and then our daily lives: let's learn how far they allow you to see with our radar horizon calculator.
Radar technology may sound complex, but it relies on simple physical phenomena, and its use is limited by the geometry of the planet we live on. Here you will learn:
How far does a radar see?
What is the radar horizon formula?
How to calculate the radar horizon?
We'll also see some examples of radar horizon distance. Keep reading, don't wait for the return signal!
Radar, the acronym for "radio detection and ranging", is a technology that uses radio frequencies to detect objects at a distance using the reflection principles. The core concept is to have a transmitter (that sends radio waves in a particular direction) and a receiver (that listens to the reflected "echoes").
Radars had been under development for a few decades before WWII, but only with the conflict brewing (and then exploding) did the technology really take off. It helped fight the Germans during the Battle of Britain and proved invaluable.
🔎 Cavity magnetrons helped reduce the size of radars, giving a fundamental advantage to the Allies in the war effort. The British invented the magnetron, and it traveled across the ocean in a regular suitcase: probably the most precious luggage in history.
Once the war ended, radar technology (like many other wartime inventions) was made available to the public, offering its services for peaceful purposes. Instead of detecting enemy planes and ships, the radar started helping in weather forecasting and civilian transport. In recent years, the automotive industry has been eyeing the radar for application in autonomous vehicles, where the radar's detection ability will help avoid collisions.
🔎 Radars often use microwaves. In fact, the microwave oven is a direct offspring of this technology. We need to thank the physicist Percy Spencer, a cavity magnetron, and a candy bar. Spencer walked too close to an active magnetron and noticed that the candy bar had melted. It was just a matter of deduction then!
The distance at which radars can detect a target limits their usefulness. To calculate such distance, we need to take into account some basic geometric considerations, which we can complicate further to obtain a more lifelike result.
The first important parameter to keep in mind is the height of the emitter. The higher, the better. This parameter uniquely identifies the distance at which the radar can see the surface of the Earth. But since Earth is (almost) spherical, a radar can see further than that. How? Let's introduce the target.
An airplane flying over the ground can still be visible to a radar if its height is higher than a certain value. But below the "line of sight" of the radar, the Earth's curvature would effectively hide it. That area is known as the shadow zone.
🔎 During the '60s, with missiles able to intercept bombers flying high and fast, both the US and USSR changed approaches. They began preferring low-flying, relatively slow bombers, like the B-1 Lancer. The planes would have remained in the radar's shadow zone by flying near the terrain until the last moment.
Radars suffer from another problem: the portion of the projected energy from the transmitter closer to the ground meets the various interferences caused by turbulence, bird flocks, and other factors. This phenomenon causes, in turn, a set of return echoes that pollute the signal received by the radar. We call that area clutter zone: a target moving in that area would be hard to detect!
Simple geometrical deductions deliver ideal values for the radar horizon: to obtain accurate results, we need to introduce a correction that considers atmospheric refraction.
The air density drops with altitude, and with it the index of refraction of that layer of atmosphere. Consequently, the radio waves are effectively bent downward (almost following the Earth's surface), allowing radar to illuminate targets beyond the geometrical horizon. In the radar horizon formulas, this phenomenon is represented by a corrective factor of 4/3 applied to the Earth's radius: radars effectively "see" as if they were on a larger planet!
The radar horizon formula comes in two flavors: ignoring the atmospheric refraction or not. The first case is easier to understand, but yields less accurate results. The second case introduces a correction and delivers results that agree with radars used in the field.
Let's analyze the first case.
We can calculate the radar horizon with a simple application of the Pythagorean theorem. If we consider the Earth a perfect sphere with radius R_E=6,371.009\ \text{km} and place a radar at height h_r above the Earth's surface, we create a right triangle with hypotenuse R_E+h_r (from the Earth's center to the radar) and one side R_E (from the Earth's center to the surface). The remaining side is the radar's geometrical horizon (the distance it can see), and we name it d_r:
\begin{align*} \footnotesize d_r & \footnotesize =\sqrt{\left(R_E+h_r\right)^2-R_E^2}\\ & \footnotesize =\sqrt{\left(R_E^2+h_r^2+2R_Eh_r\right) - R_E^2}\\ & \footnotesize = \sqrt{h_r^2 + 2R_Eh_r} \end{align*}
💡 For small values of h_r, the term 2R_Eh_r dominates the expression under the square root, and the formula simplifies to d_r=\sqrt{2R_E h_r}. If the height of the radar is less than 250\ \text{km}, the error introduced is less than 1%.
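A quick numerical check of this approximation (the function names and sample heights are illustrative):

```python
import math

R_E = 6371.009  # Earth's mean radius, km

def horizon_exact(h_km):
    """Exact geometrical horizon from the Pythagorean derivation."""
    return math.sqrt(h_km**2 + 2 * R_E * h_km)

def horizon_approx(h_km):
    """Small-height approximation d = sqrt(2 * R_E * h)."""
    return math.sqrt(2 * R_E * h_km)

# For an airborne radar at 9.15 km the relative error is tiny
err = 1 - horizon_approx(9.15) / horizon_exact(9.15)
```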
The curvature of the Earth influences the detection of targets beyond a certain distance. In the grey area (the shadow zone) targets are impossible to detect. The orange zone suffers from clutter due to interference. Only the light blue area is good!
Consider now a target flying at height
h_t
over the sea level. We can find the maximum distance at which our radar can spot it by applying the same reasoning: we call it target visibility.
\begin{align*} \footnotesize d_t & \footnotesize =\sqrt{\left(R_E+h_t\right)^2-R_E^2} \\ & \footnotesize=\sqrt{\left(R_E^2+h_t^2+2 R_E h_t\right) - R_E^2} \\ & \footnotesize=\sqrt{h_t^2 + 2 R_E h_t} \end{align*}
The small height correction we applied before for the radar horizon distance works here, too: if the target flies low enough, we can write
d_t=\sqrt{2 R_E h_t}
Summing the two values of the horizons gives us the maximum distance from the radar at which we can identify the flying target: we call this distance
D
\footnotesize D=d_r+d_t
The correction introduced by the atmospheric refraction slightly changes the formulas we saw before. Considering the small height simplified formulas, we have:
\footnotesize \begin{split} d '_r &= \sqrt{2\cdot\tfrac{4}{3}\cdot R_E\cdot h_r}\\ d '_t &= \sqrt{2\cdot\tfrac{4}{3}\cdot R_E\cdot h_t} \end{split}
The difference may be substantial!
Radar operators would love a flat Earth: their radar horizon would be theoretically limitless... How are we sure that Earth is round? Try our flat vs. round Earth calculator!
Imagine you are flying on an E-3 airborne early warning and control (AWACS) aircraft, cruising at an altitude of 9,\!150\ \text{m} (30,\!000 feet). An enemy bomber using a terrain-following radar is flying at 122\ \text{m} (400 feet) — at what distance will we detect it?
Let's calculate the radar horizon and the target visibility with the modified radar horizon formulas:
\begin{align*} \footnotesize d'_r & \footnotesize =\sqrt{2\!\cdot\!\tfrac{4}{3}\!\cdot\! 6,\!371.009\ \text{km}\!\cdot\! 9.150\ \text{km}} \\ & \footnotesize = 394.3\ \text{km} \\[0.5em] \footnotesize d'_t & \footnotesize =\sqrt{2\!\cdot\!\tfrac{4}{3}\!\cdot\! 6,\!371.009\ \text{km}\!\cdot\! 0.122\ \text{km}} \\ & \footnotesize = 45.53\ \text{km} \end{align*}
The total distance is then:
D=d'_r+d'_t=439.8\ \text{km}
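The example above can be reproduced in a few lines; the sketch below applies the refraction-corrected formula with the same heights (function name illustrative):

```python
import math

R_E = 6371.009  # Earth's mean radius, km
K = 4 / 3       # standard atmospheric refraction factor

def radar_horizon(h_km):
    """Refraction-corrected horizon: d = sqrt(2 * (4/3) * R_E * h)."""
    return math.sqrt(2 * K * R_E * h_km)

d_r = radar_horizon(9.150)   # AWACS flying at 9,150 m
d_t = radar_horizon(0.122)   # bomber flying at 122 m
D = d_r + d_t                # maximum detection distance
```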
Imagine now that you are manning a radar station on the ground, with your antenna a mere
10\ \text{m}
above the ground. The new distances would be:
\begin{align*} \footnotesize d'_r & \footnotesize =\sqrt{2\!\cdot\!\tfrac{4}{3}\!\cdot\! 6,\!371.009\ \text{km}\!\cdot\! 0.01\ \text{km}} \\ & \footnotesize = 13.03\ \text{km} \\[0.5em] \footnotesize d'_t & \footnotesize =\sqrt{2\!\cdot\!\tfrac{4}{3}\!\cdot\! 6,\!371.009\ \text{km}\!\cdot\! 0.122\ \text{km}} \\ & \footnotesize = 45.53\ \text{km} \end{align*}
🙋 In these calculations, we didn't include the degradation due to the clutter zone. Remember that the effective detection of a low flying plane would be greatly affected by it!
D=d'_r+d'_t=58.56\ \text{km}
For a plane traveling at the speed of sound, the use of an early warning aircraft would give you more than 20 minutes before the arrival. This time would be reduced to a mere three minutes for the station on the ground: blink and you'll miss it!
What if the radar horizon is not enough for your detection needs? This problem surfaced during the Cold War, when the threat of missile attacks was high: the early detection of a launch of a naval unit or a bomber was considered a fundamental means of protecting a country's territories.
As we just saw, a "standard" radar has limits set by geometrical factors. Even an airborne radar can't see farther than a few hundred kilometers: radio waves must take another way. In over-the-horizon radars, the signal is directed toward the ionosphere, a radio-reflective layer of the atmosphere, which redirects them to the surface. The signal reflected by the eventual target is then re-reflected by the ionosphere and picked up by the receiver.
🔎 A Soviet over-the-horizon radar gained the nickname "woodpecker" during the Cold War because it was emitting a continuous signal at a "pecking" frequency. Amateur radio hobbyists constantly picked up the signal. Its antenna was located near Chernobyl, in Ukraine, giving a wide sight over Europe.
We created a useful and simple tool to calculate the radar horizon! You only have to select if you're considering the atmospheric refraction or not, and then insert your data. Moments later, you will learn the distances of the radar horizon and of the target visibility!
What is the radar horizon?
The radar horizon is the maximum distance at which a radar system can see ground-level targets. Its value depends only on the height of the radar emitter and receiver: the higher they are, the farther the horizon.
How do you calculate the radar horizon?
To calculate the radar horizon, take the height of the radar system h, and feed it in the equation: d=√(2×Rₑ×h), where:
Rₑ is the Earth's radius; and
d is the radar horizon.
If you consider the atmospheric refraction, you must multiply the expression under the square root by an additional factor of 4/3.
What is the clutter zone?
The clutter zone is an area of the radar signal close to the surface of the Earth, where numerous sources of interference greatly affect the performance of a radar. Low flying planes may take advantage of this phenomenon to sneak into enemy territory without being detected.
Why do planes mount radars?
Flying high above the ground gives a far greater radar horizon than ground-based stations can achieve: that's why many countries employ planes purposefully designed to host powerful radars to detect incoming threats. By orbiting at about ten kilometers, early warning aircraft have a radar horizon of almost 500 km.
How to calculate insertion loss – Insertion loss formula
How to use the insertion loss calculator
The insertion loss calculator determines the insertion loss in signal transmission by comparing the input and output power (or voltage) levels.
Please continue reading to learn the definition of insertion loss and the formula to calculate it. You will also find an example of how to calculate the insertion loss using this tool.
Let us start with understanding what insertion loss is.
When we insert a network between the source and load of a circuit (see figure 1), a part of the power is either reflected by the network towards the source or is dissipated within the network. This results in a reduction of the power delivered to the load.
So what is the definition of insertion loss? Insertion loss is a parameter that measures this loss or attenuation in power (or signal strength). It is expressed in decibels (dB) and is crucial when designing microwave and RF transmission circuit components, e.g., filters, equalizers, etc.
For example, in power line communication (PLC) systems, we use impedance matching networks between PLC modems and power line channels. These impedance matching networks are made from passive elements like resistors, capacitors, and inductors that dissipate some power. Hence, the power delivered to the receiver is reduced, and we get insertion loss.
Fig 1: Two-port network inserted between source and load.
Another important performance parameter associated with the transmission circuits is the VSWR (Voltage Standing Wave Ratio).
To find the insertion loss of a 2-port network (for example, an attenuator or a filter), we measure the voltage across the load before and after the insertion of the network. Then we can calculate the insertion loss in decibels
IL
IL = 20\ \text{log} \left [ \frac{V_2}{V_1} \right ]
V_2
– Voltage across the load before insertion of the network; and
V_1
– Voltage across the load after insertion of the network.
As the power delivered is directly proportional to the square of the voltage, i.e.,
P \propto V^2
, we can also use the following formula for insertion loss calculation:
IL = 10\ \text{log} \left [ \frac{P_L}{P_T} \right ]
P_L
– Power delivered to the load before insertion; and
P_T
– Power delivered to the load after insertion.
If you are interested in calculating the loss in signal strength of radiofrequency signal emitted by an antenna, check out our free space path loss calculator.
Now let us see how to use our calculator to compute the insertion loss if the power delivered to the load before insertion is 12 W and after insertion is 4 W.
Using the drop-down menu, choose to calculate the insertion loss from power delivered to the load.
Enter the values of power before (12 W) and after insertion (4 W) in the respective fields.
The tool will calculate the insertion loss in dB and display the result (4.77 dB).
Alternatively, if you know the voltages before and after the insertion, choose the voltage across load option from the drop-down menu.
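Both versions of the calculation can be sketched in a few lines (function names are illustrative):

```python
import math

def insertion_loss_power(p_before, p_after):
    """IL in dB from the power delivered to the load before/after insertion."""
    return 10 * math.log10(p_before / p_after)

def insertion_loss_voltage(v_before, v_after):
    """IL in dB from the load voltage before/after insertion."""
    return 20 * math.log10(v_before / v_after)

il = insertion_loss_power(12, 4)       # example from the text: 12 W -> 4 W
il90 = insertion_loss_power(100, 90)   # 90% of the power transmitted
```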
How do I calculate insertion loss?
To calculate the insertion loss for a two-port network, follow these instructions:
Measure the power delivered to the load before inserting the two-port network.
Measure the power delivered to the load after inserting the two-port network.
Divide the value from step 1 by that from 2 and take the logarithm of the result.
Multiply the result from step 3 by 10 to get the insertion loss.
What is the insertion loss if 90% of the power is transmitted?
0.46 dB. To arrive at the answer, proceed as follows:
Use the formula for insertion loss: IL = 10 × log (Pi / Pt), where Pi is the incident power and Pt is the transmitted power.
You will get: IL = 10 × log (100 / 90) = 0.46
Hence the insertion loss is 0.46 dB.
What causes insertion loss?
Insertion loss is unavoidable, and it may be due to one of the following reasons:
Ohmic loss due to power dissipated in the conductor used to make the transmission line components, i.e., cables, connectors, etc.
Poor connection and field termination can also cause a significant loss in transmitted signal strength.
Dielectric loss due to the absorption of the signal by the dielectric material that forms the conductor insulation and cable jacket.
How is insertion loss related to frequency?
For a given cable length and specification, the insertion loss is directly proportional to the frequency of the transmitted signal. The higher the frequency, the greater the loss.
Commensurability (mathematics)
In mathematics, two non-zero real numbers a and b are said to be commensurable if their ratio a/b is a rational number; otherwise, a and b are called incommensurable. For example, the numbers {\sqrt{3}} and {2\sqrt{3}} are commensurable because their ratio, {\frac{\sqrt{3}}{2\sqrt{3}}}={\frac{1}{2}}, is rational; however, the numbers {\sqrt{3}} and 2 are incommensurable because their ratio, {\frac{\sqrt{3}}{2}}, is irrational.
The Pythagoreans are credited with the proof of the existence of irrational numbers.[1][2] When the ratio of the lengths of two line segments is irrational, the line segments themselves (not just their lengths) are also described as being incommensurable.
A separate, more general and circuitous ancient Greek doctrine of proportionality for geometric magnitude was developed in Book V of Euclid's Elements in order to allow proofs involving incommensurable lengths, thus avoiding arguments which applied only to a historically restricted definition of number.
Euclid's notion of commensurability is anticipated in passing in the discussion between Socrates and the slave boy in Plato's dialogue entitled Meno, in which Socrates uses the boy's own inherent capabilities to solve a complex geometric problem through the Socratic Method. He develops a proof which is, for all intents and purposes, very Euclidean in nature and speaks to the concept of incommensurability.[3]
The usage primarily comes from translations of Euclid's Elements, in which two line segments a and b are called commensurable precisely if there is some third segment c that can be laid end-to-end a whole number of times to produce a segment congruent to a, and also, with a different whole number, a segment congruent to b. Euclid did not use any concept of real number, but he used a notion of congruence of line segments, and of one such segment being longer or shorter than another.
That a/b is rational is a necessary and sufficient condition for the existence of some real number c, and integers m and n, such that
a = mc and b = nc.
Assuming for simplicity that a and b are positive, one can say that a ruler, marked off in units of length c, could be used to measure out both a line segment of length a, and one of length b. That is, there is a common unit of length in terms of which a and b can both be measured; this is the origin of the term. Otherwise the pair a and b are incommensurable.
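For rational lengths, such a common unit c can be computed explicitly. A minimal sketch, assuming the two lengths are given as exact fractions; the helper `common_unit` is a hypothetical name, not from the source:

```python
from fractions import Fraction
from math import gcd

def common_unit(a: Fraction, b: Fraction) -> Fraction:
    """Largest c with a = m*c and b = n*c for integers m, n.
    Any two positive rationals are commensurable, so c always exists."""
    num = gcd(a.numerator * b.denominator, b.numerator * a.denominator)
    return Fraction(num, a.denominator * b.denominator)

a, b = Fraction(3, 4), Fraction(5, 6)
c = common_unit(a, b)   # the shared "ruler" unit
m, n = a / c, b / c     # both exact integers
```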
In group theory
In group theory, two subgroups Γ1 and Γ2 of a group G are said to be commensurable if the intersection Γ1 ∩ Γ2 is of finite index in both Γ1 and Γ2.
Example: Let a and b be nonzero real numbers. Then the subgroup of the real numbers R generated by a is commensurable with the subgroup generated by b if and only if the real numbers a and b are commensurable, in the sense that a/b is rational. Thus the group-theoretic notion of commensurability generalizes the concept for real numbers.
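This example can be made concrete for subgroups mZ and nZ of the integers, whose intersection is lcm(m, n)Z; the index of lcm(m, n)Z in mZ is lcm(m, n)/m, which is always finite, so mZ and nZ are commensurable. A sketch (the helper name is illustrative):

```python
from math import gcd

def intersection_indices(m, n):
    """For subgroups mZ and nZ of Z: mZ ∩ nZ = lcm(m, n)Z.
    Return the index of the intersection in mZ and in nZ."""
    l = abs(m * n) // gcd(m, n)   # lcm(m, n), generator of the intersection
    return l // m, l // n         # both finite => mZ and nZ commensurable

i1, i2 = intersection_indices(4, 6)   # lcm = 12, indices 3 and 2
```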
There is a similar notion for two groups which are not given as subgroups of the same group. Two groups G1 and G2 are (abstractly) commensurable if there are subgroups H1 ⊂ G1 and H2 ⊂ G2 of finite index such that H1 is isomorphic to H2.
Two path-connected topological spaces are sometimes said to be commensurable if they have homeomorphic finite-sheeted covering spaces. Depending on the type of space under consideration, one might want to use homotopy equivalences or diffeomorphisms instead of homeomorphisms in the definition. If two spaces are commensurable, then their fundamental groups are commensurable.
Example: any two closed surfaces of genus at least 2 are commensurable with each other.
^ Kurt von Fritz (1945). "The Discovery of Incommensurability by Hippasus of Metapontum". The Annals of Mathematics. 46 (2): 242–264. doi:10.2307/1969021. JSTOR 1969021.
^ James R. Choike (1980). "The Pentagram and the Discovery of an Irrational Number". The Two-Year College Mathematics Journal. 11 (5): 312–316. doi:10.2307/3026893. JSTOR 3026893.
^ Plato's Meno. Translated with annotations by George Anastaplo and Laurence Berns. Focus Publishing: Newburyport, MA. 2004. ISBN 0-941051-71-4
Overview of Filter Banks - MATLAB & Simulink - MathWorks 한국
Analysis Filter Bank (Channelizer)
Synthesis Filter Bank (Channel synthesizer)
Two-Channel (Halfband) Filter Bank
A digital filter bank is an array of digital bandpass filters with either a common input or a common output. A filter bank can be an analysis filter bank with a series of analysis filters, or a synthesis filter bank with a series of synthesis filters. The analysis filter bank separates the input broadband signal x[n] into multiple components, each carrying a subband of the original signal. The synthesis filter bank merges these subbands into a single broadband signal, a reconstructed version of the original input signal.
The generic analysis filter bank, also known as the channelizer, consists of a series of parallel bandpass filters that split an input broadband signal, x[n], into a series of narrow subbands. Each bandpass filter retains a different portion of the input signal. After the bandwidth is reduced by one of the bandpass filters, the signal is downsampled to a lower sample rate commensurate with the new bandwidth.
The first branch in the filter bank contains a lowpass filter, H0(z), which acts as a prototype filter. The remaining filters H1(z) through HM−1(z) are modulated versions of this filter. These modulated versions can be rearranged in terms of a complex exponential (modulation factor) followed by the prototype lowpass filter H0(z).
y1[m], y2[m], …, yM-1[m] are narrow subband signals translated into baseband.
For more details on this structure, see Analysis Filter Bank.
This filter bank can be implemented efficiently using a polyphase structure. For more details on the polyphase structure and how it is implemented in the dsp.Channelizer object and the Channelizer block, see Channelizer Algorithm. The chief advantage of this polyphase implementation is that you can downsample the signal prior to filtering, thereby allowing you to filter the signal at a lower sample rate.
The synthesis filter bank, also known as the channel synthesizer, consists of a set of parallel bandpass filters that merge multiple input narrowband signals, y0[m], y1[m], y2[m], … , yM-1[m] into a single broadband signal, v[n]. The input narrowband signals are in the baseband. Each narrowband signal is interpolated to a higher sample rate by using the upsampler, and then filtered by the lowpass filter. A complex exponential that follows the lowpass filter centers the baseband signal around wk, where
{w}_{k}=2\pi k/M
k=0,1,2,\dots,M-1
This filter bank is implemented efficiently using the polyphase structure described in Channel Synthesizer Algorithm. The dsp.ChannelSynthesizer object and the Channel Synthesizer block in DSP System Toolbox™ use this implementation.
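The modulation relationship between the prototype lowpass filter and the bandpass filters can be sketched numerically. This toy example uses a moving-average prototype (not a designed filter, and not the polyphase implementation the toolbox uses), so it only illustrates the frequency shift by the complex exponential:

```python
import numpy as np

M = 8                      # number of channels
L = 32                     # prototype filter length
n = np.arange(L)
h0 = np.ones(L) / L        # toy lowpass prototype (moving average)

def modulated_filter(k):
    """k-th bandpass filter: prototype modulated by exp(j*2*pi*k*n/M)."""
    return h0 * np.exp(2j * np.pi * k * n / M)

# The response of channel k = 1 is now centered at w = 2*pi/M,
# i.e. bin 64/M = 8 of a 64-point FFT
H1 = np.fft.fft(modulated_filter(1), 64)
peak_bin = np.argmax(np.abs(H1))
```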
The two-channel filter bank is a special case of the generic M-channel filter bank in which the number of filter branches is two.
The DFT matrix of the analysis portion looks like the following matrix.
\left[\begin{array}{cc}1& 1\\ 1& -1\end{array}\right]
The first row adds the two polyphase branches to give the lowpass subband output. The second row subtracts the two polyphase branches to give the highpass subband output. The halfband decimator objects and blocks in DSP System Toolbox implement their algorithm as shown in this diagram. A0(z) and A1(z) are the allpass polyphase components. This structure is the analysis portion of the two-channel halfband filter bank. Due to the halfband nature of the filters, one of the branches in this polyphase structure becomes a pure delay component.
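The action of this DFT matrix on the two polyphase branch outputs can be sketched as follows (toy branch signals, not an actual halfband decomposition): when both branches carry the same signal, all the energy lands in the lowpass output and the highpass output is zero.

```python
import numpy as np

# 2x2 DFT matrix of the analysis portion
W = np.array([[1, 1],
              [1, -1]])

b0 = np.array([1.0, 1.0, 1.0])   # toy polyphase branch outputs; equal
b1 = np.array([1.0, 1.0, 1.0])   # branches model a purely lowpass input

# Row 0 adds the branches (lowpass subband),
# row 1 subtracts them (highpass subband)
low, high = W @ np.vstack([b0, b1])
```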
For more details on this structure and its derivation, see Polyphase Implementation under Algorithms on these reference pages.
Analysis portion using FIR halfband filter dsp.FIRHalfbandDecimator FIR Halfband Decimator
Analysis portion using IIR halfband filter dsp.IIRHalfbandDecimator IIR Halfband Decimator
Similarly, the halfband interpolator objects and blocks in DSP System Toolbox implement their algorithm as shown in this diagram. This structure is the synthesis portion of the two-channel halfband filter bank. Due to the halfband nature of the filters, one of the branches in this polyphase structure becomes a pure delay component.
Synthesis portion using FIR halfband filter dsp.FIRHalfbandInterpolator FIR Halfband Interpolator
Synthesis portion using IIR halfband filter dsp.IIRHalfbandInterpolator IIR Halfband Interpolator
The other two-channel filter bank features that DSP System Toolbox offers let you specify the lowpass and highpass filter coefficients. These features can customize the partitioning of the broadband signal. For an example, see Reconstruction Through Two-Channel Filter Banks.
Analysis filter bank dsp.SubbandAnalysisFilter Two-Channel Analysis Subband Filter
Synthesis filter bank dsp.SubbandSynthesisFilter Two-Channel Synthesis Subband Filter
You can use the subband analysis and synthesis filter banks as basic units and create multilevel filter banks. For more details, see Multilevel Filter Banks.
|
Solve PDE with Discontinuity - MATLAB & Simulink - MathWorks América Latina
This example shows how to solve a PDE that interfaces with a material. The material interface creates a discontinuity in the problem at
\mathit{x}=0.5
, and the initial condition has a discontinuity at the right boundary
\mathit{x}=1
Consider the piecewise PDE
\left\{\begin{array}{ll}\dfrac{\partial u}{\partial t}=x^{-2}\dfrac{\partial}{\partial x}\left(x^{2}\,5\,\dfrac{\partial u}{\partial x}\right)-1000e^{u} & (0\le x\le 0.5)\\[1ex] \dfrac{\partial u}{\partial t}=x^{-2}\dfrac{\partial}{\partial x}\left(x^{2}\,\dfrac{\partial u}{\partial x}\right)-e^{u} & (0.5\le x\le 1)\end{array}\right.
\begin{array}{l}u(x,0)=0 \quad (0\le x<1),\\ u(1,0)=1 \quad (x=1).\end{array}
\begin{array}{l}\dfrac{\partial u}{\partial x}=0 \quad (x=0),\\ u(1,t)=1 \quad (x=1).\end{array}
Before you can code the equation, you need to make sure that it is in a form that the pdepe solver expects. The standard form that pdepe expects is
c\left(x,t,u,\dfrac{\partial u}{\partial x}\right)\dfrac{\partial u}{\partial t}=x^{-m}\dfrac{\partial}{\partial x}\left(x^{m}f\left(x,t,u,\dfrac{\partial u}{\partial x}\right)\right)+s\left(x,t,u,\dfrac{\partial u}{\partial x}\right)
In this case, the PDE is in the proper form, so you can read off the values of the coefficients.
\left\{\begin{array}{ll}\dfrac{\partial u}{\partial t}=x^{-2}\dfrac{\partial}{\partial x}\left(x^{2}\,5\,\dfrac{\partial u}{\partial x}\right)-1000e^{u} & (0\le x\le 0.5)\\[1ex] \dfrac{\partial u}{\partial t}=x^{-2}\dfrac{\partial}{\partial x}\left(x^{2}\,\dfrac{\partial u}{\partial x}\right)-e^{u} & (0.5\le x\le 1)\end{array}\right.
The values for the flux term f\left(x,t,u,\frac{\partial u}{\partial x}\right) and source term s\left(x,t,u,\frac{\partial u}{\partial x}\right) change depending on the value of x. The coefficients are:
m=2,
c\left(x,t,u,\dfrac{\partial u}{\partial x}\right)=1,
\left\{\begin{array}{ll}f\left(x,t,u,\dfrac{\partial u}{\partial x}\right)=5\,\dfrac{\partial u}{\partial x} & (0\le x\le 0.5)\\[1ex] f\left(x,t,u,\dfrac{\partial u}{\partial x}\right)=\dfrac{\partial u}{\partial x} & (0.5\le x\le 1)\end{array}\right.
\left\{\begin{array}{ll}s\left(x,t,u,\dfrac{\partial u}{\partial x}\right)=-1000e^{u} & (0\le x\le 0.5)\\[1ex] s\left(x,t,u,\dfrac{\partial u}{\partial x}\right)=-e^{u} & (0.5\le x\le 1)\end{array}\right.
Now code these coefficients in a MATLAB function, where the input dudx represents \partial u/\partial x:
function [c,f,s] = pdex2pde(x,t,u,dudx) % Equation coefficients
c = 1;
if x <= 0.5
    f = 5*dudx;
    s = -1000*exp(u);
else
    f = dudx;
    s = -exp(u);
end
end
Next, write a function that returns the initial conditions. The initial condition is applied at the first time value and provides the value of u\left(x,t_0\right) for all x. Use the function signature u0 = pdex2ic(x) to write the function. The initial conditions are
\begin{array}{l}u(x,0)=0 \quad (0\le x<1),\\ u(1,0)=1 \quad (x=1).\end{array}
Now write a function that evaluates the boundary conditions. For problems posed on an interval a\le x\le b, the boundary conditions apply for all t and either x=a or x=b. The standard form for the boundary conditions expected by the pdepe solver is
p\left(x,t,u\right)+q\left(x,t\right)f\left(x,t,u,\dfrac{\partial u}{\partial x}\right)=0.
Since this example has spherical symmetry (m=2), the pdepe solver automatically enforces the left boundary condition to bound the solution at the origin, and it ignores any conditions specified for the left boundary in the boundary function. So for the left boundary condition, you can specify p_L=q_L=0. For the right boundary condition, you can rewrite the boundary condition in the standard form and read off the coefficient values for p_R and q_R. At x=1:
u(1,t)=1 \to (u-1)+0\cdot\dfrac{\partial u}{\partial x}=0,
so p_R(1,t,u)=u-1 and q_R(1,t)=0. In the boundary function, p_L(x,t,u) and q_L(x,t) are the coefficients evaluated at x=0, while p_R(x,t,u) and q_R(x,t) are the coefficients evaluated at x=1.
The spatial mesh should include several values near x = 0.5 to account for the discontinuous interface, as well as points near x = 1, where the initial value (u(x,0) = 0 for x < 1) is inconsistent with the value u(1,0) = 1 imposed to match the boundary condition u(1,t) = 1. The solution changes rapidly for small t, so use a time step that can resolve this sharp change.
x = [0 0.1 0.2 0.3 0.4 0.45 0.475 0.5 0.525 0.55 0.6 0.7 0.8 0.9 0.95 0.975 0.99 1];
t = [0 0.001 0.005 0.01 0.05 0.1 0.5 1];
The solver returns the values u_k of the solution u at each mesh point in x and t. Since m = 2, the problem is posed in a spherical geometry with spherical symmetry, so the solution only changes in the radial direction x.
title('Numerical solution with nonuniform mesh')
zlabel('Solution u')
Plot x against u to get a side view of the contours in the surface plot. Add a line at x = 0.5 to highlight the effect of the material interface.
plot(x,u,x,u,'*')
line([0.5 0.5], [-3 1], 'Color', 'k')
ylabel('Solution u')
function u0 = pdex2ic(x) %Initial conditions
% u(x,0) = 0 for 0 <= x < 1 and u(1,0) = 1, matching the conditions above
if x < 1
    u0 = 0;
else
    u0 = 1;
end
|
Section 59.67 (0A2M): Galois cohomology—The Stacks project
59.67 Galois cohomology
In this section we prove a result on Galois cohomology (Proposition 59.67.4) using étale cohomology and the trick from Section 59.66. This will allow us to prove vanishing of higher étale cohomology groups over the spectrum of a field.
Lemma 59.67.1. Let $\ell $ be a prime number and $n$ an integer $> 0$. Let $S$ be a quasi-compact and quasi-separated scheme. Let $X = \mathop{\mathrm{lim}}\nolimits _{i \in I} X_ i$ be the limit of a directed system of $S$-schemes each $X_ i \to S$ being finite étale of constant degree relatively prime to $\ell $. The following are equivalent:
there exists an $\ell $-power torsion sheaf $\mathcal{G}$ on $S$ such that $H_{\acute{e}tale}^ n(S, \mathcal{G}) \neq 0$ and
there exists an $\ell $-power torsion sheaf $\mathcal{F}$ on $X$ such that $H_{\acute{e}tale}^ n(X, \mathcal{F}) \neq 0$.
In fact, given $\mathcal{G}$ we can take $\mathcal{F} = g^{-1}\mathcal{G}$ and given $\mathcal{F}$ we can take $\mathcal{G} = g_*\mathcal{F}$.
Proof. Let $g : X \to S$ and $g_ i : X_ i \to S$ denote the structure morphisms. Fix an $\ell $-power torsion sheaf $\mathcal{G}$ on $S$ with $H^ n_{\acute{e}tale}(S, \mathcal{G}) \not= 0$. The system given by $\mathcal{G}_ i = g_ i^{-1}\mathcal{G}$ satisfies the conditions of Theorem 59.51.3 with colimit sheaf given by $g^{-1}\mathcal{G}$. This tells us that:
\[ \mathop{\mathrm{colim}}\nolimits _{i\in I} H^ n_{\acute{e}tale}(X_ i, g_ i^{-1}\mathcal{G}) = H^ n_{\acute{e}tale}(X, g^{-1}\mathcal{G}) \]
By virtue of the $g_ i$ being finite étale morphisms of degree prime to $\ell $ we can apply “la méthode de la trace” and we find the maps
\[ H^ n_{\acute{e}tale}(S, \mathcal{G}) \to H^ n_{\acute{e}tale}(X_ i, g_ i^{-1}\mathcal{G}) \]
are all injective (and compatible with the transition maps). See Section 59.66. Thus, the colimit is non-zero, i.e., $H^ n(X,g^{-1}\mathcal{G}) \neq 0$, giving us the desired result with $\mathcal{F} = g^{-1}\mathcal{G}$.
Conversely, suppose given an $\ell $-power torsion sheaf $\mathcal{F}$ on $X$ with $H^ n_{\acute{e}tale}(X, \mathcal{F}) \not= 0$. We note that since the $g_ i$ are finite morphisms the higher direct images vanish (Proposition 59.55.2). Then, by applying Lemma 59.51.7 we may also conclude the same for $g$. The vanishing of the higher direct images tells us that $H^ n_{\acute{e}tale}(X, \mathcal{F}) = H^ n(S, g_*\mathcal{F}) \neq 0$ by Leray (Proposition 59.54.2) giving us what we want with $\mathcal{G} = g_*\mathcal{F}$. $\square$
Lemma 59.67.2. Let $\ell $ be a prime number and $n$ an integer $> 0$. Let $K$ be a field with $G = Gal(K^{sep}/K)$ and let $H \subset G$ be a maximal pro-$\ell $ subgroup with $L/K$ being the corresponding field extension. Then $H^ n_{\acute{e}tale}(\mathop{\mathrm{Spec}}(K), \mathcal{F}) = 0$ for all $\ell $-power torsion $\mathcal{F}$ if and only if $H^ n_{\acute{e}tale}(\mathop{\mathrm{Spec}}(L), \underline{\mathbf{Z}/\ell \mathbf{Z}}) = 0$.
Proof. Write $L = \bigcup L_ i$ as the union of its finite subextensions over $K$. Our choice of $H$ implies that $[L_ i : K]$ is prime to $\ell $. Thus $\mathop{\mathrm{Spec}}(L) = \mathop{\mathrm{lim}}\nolimits _{i \in I} \mathop{\mathrm{Spec}}(L_ i)$ as in Lemma 59.67.1. Thus we may replace $K$ by $L$ and assume that the absolute Galois group $G$ of $K$ is a profinite pro-$\ell $ group.
Assume $H^ n(\mathop{\mathrm{Spec}}(K), \underline{\mathbf{Z}/\ell \mathbf{Z}}) = 0$. Let $\mathcal{F}$ be an $\ell $-power torsion sheaf on $\mathop{\mathrm{Spec}}(K)_{\acute{e}tale}$. We will show that $H^ n_{\acute{e}tale}(\mathop{\mathrm{Spec}}(K), \mathcal{F}) = 0$. By the correspondence specified in Lemma 59.59.1 our sheaf $\mathcal{F}$ corresponds to an $\ell $-power torsion $G$-module $M$. Any finite set of elements $x_1, \ldots , x_ m \in M$ must be fixed by an open subgroup $U$ by continuity. Let $M'$ be the module spanned by the orbits of $x_1, \ldots , x_ m$. This is a finite abelian $\ell $-group as each $x_ i$ is killed by a power of $\ell $ and the orbits are finite. Since $M$ is the filtered colimit of these submodules $M'$, we see that $\mathcal{F}$ is the filtered colimit of the corresponding subsheaves $\mathcal{F}' \subset \mathcal{F}$. Applying Theorem 59.51.3 to this colimit, we reduce to the case where $\mathcal{F}$ is a finite locally constant sheaf.
Let $M$ be a finite abelian $\ell $-group with a continuous action of the profinite pro-$\ell $ group $G$. Then there is a $G$-invariant filtration
\[ 0 = M_0 \subset M_1 \subset \ldots \subset M_ r = M \]
such that $M_{i + 1}/M_ i \cong \mathbf{Z}/\ell \mathbf{Z}$ with trivial $G$-action (this is a simple lemma on representation theory of finite groups; insert future reference here). Thus the corresponding sheaf $\mathcal{F}$ has a filtration
\[ 0 = \mathcal{F}_0 \subset \mathcal{F}_1 \subset \ldots \subset \mathcal{F}_ r = \mathcal{F} \]
with successive quotients isomorphic to $\underline{\mathbf{Z}/\ell \mathbf{Z}}$. Thus by induction and the long exact cohomology sequence we conclude. $\square$
Lemma 59.67.3. Let $\ell $ be a prime number and $n$ an integer $> 0$. Let $K$ be a field with $G = Gal(K^{sep}/K)$ and let $H \subset G$ be a maximal pro-$\ell $ subgroup with $L/K$ being the corresponding field extension. Then $H^ q_{\acute{e}tale}(\mathop{\mathrm{Spec}}(K),\mathcal{F}) = 0$ for $q \geq n$ and all $\ell $-power torsion sheaves $\mathcal{F}$ if and only if $H^ n_{\acute{e}tale}(\mathop{\mathrm{Spec}}(L), \underline{\mathbf{Z}/\ell \mathbf{Z}}) = 0$.
Proof. The forward direction is trivial, so we need only prove the reverse direction. We proceed by induction on $q$. The case of $q = n$ is Lemma 59.67.2. Now let $\mathcal{F}$ be an $\ell $-power torsion sheaf on $\mathop{\mathrm{Spec}}(K)$. Let $f : \mathop{\mathrm{Spec}}(K^{sep}) \rightarrow \mathop{\mathrm{Spec}}(K)$ be the inclusion of a geometric point. Then consider the exact sequence:
\[ 0 \rightarrow \mathcal{F} \xrightarrow {res} f_* f^{-1} \mathcal{F} \rightarrow f_* f^{-1} \mathcal{F}/\mathcal{F} \rightarrow 0 \]
Note that $K^{sep}$ may be written as the filtered colimit of finite separable extensions. Thus $f$ is the limit of a directed system of finite étale morphisms. We may, as was seen in the proof of Lemma 59.67.1, conclude that $f$ has vanishing higher direct images. Thus, we may express the higher cohomology of $f_* f^{-1} \mathcal{F}$ as the higher cohomology on the geometric point which clearly vanishes. Hence, as everything here is still $\ell $-torsion, we may use the inductive hypothesis in conjunction with the long-exact cohomology sequence to conclude the result for $q + 1$. $\square$
Proposition 59.67.4. Let $K$ be a field with separable algebraic closure $K^{sep}$. Assume that for any finite extension $K'$ of $K$ we have $\text{Br}(K') = 0$. Then
$H^ q(\text{Gal}(K^{sep}/K), (K^{sep})^*) = 0$ for all $q \geq 1$, and
$H^ q(\text{Gal}(K^{sep}/K), M) = 0$ for any torsion $\text{Gal}(K^{sep}/K)$-module $M$ and any $q \geq 2$.
Proof. Set $p = \text{char}(K)$. By Lemma 59.59.2, Theorem 59.61.6, and Example 59.59.3 the proposition is equivalent to showing that if $H^2(\mathop{\mathrm{Spec}}(K'),\mathbf{G}_ m|_{\mathop{\mathrm{Spec}}(K')_{\acute{e}tale}}) = 0$ for all finite extensions $K'/K$ then:
$H^ q(\mathop{\mathrm{Spec}}(K),\mathbf{G}_ m|_{\mathop{\mathrm{Spec}}(K)_{\acute{e}tale}}) = 0$ for all $q \geq 1$, and
$H^ q(\mathop{\mathrm{Spec}}(K),\mathcal{F}) = 0$ for any torsion sheaf $\mathcal{F}$ and any $q \geq 2$.
We prove the second part first. Since $\mathcal{F}$ is a torsion sheaf, we may use the $\ell $-primary decomposition as well as the compatibility of cohomology with colimits (i.e., direct sums, see Theorem 59.51.3) to reduce to showing $H^ q(\mathop{\mathrm{Spec}}(K),\mathcal{F}) = 0$, $q \geq 2$ for all $\ell $-power torsion sheaves for every prime $\ell $. This allows us to analyze each prime individually.
Suppose that $\ell \neq p$. For any extension $K'/K$ consider the Kummer sequence (Lemma 59.28.1)
\[ 0 \to \mu _{\ell , \mathop{\mathrm{Spec}}{K'}} \to \mathbf{G}_{m, \mathop{\mathrm{Spec}}{K'}} \xrightarrow {(\cdot )^{\ell }} \mathbf{G}_{m, \mathop{\mathrm{Spec}}{K'}} \to 0 \]
We have $H^ q(\mathop{\mathrm{Spec}}{K'},\mathbf{G}_ m|_{\mathop{\mathrm{Spec}}(K')_{\acute{e}tale}}) = 0$ for $q = 2$ by assumption and for $q = 1$ by Theorem 59.24.1 combined with $\mathop{\mathrm{Pic}}\nolimits (K') = (0)$. Thus, by the long-exact cohomology sequence we may conclude that $H^2(\mathop{\mathrm{Spec}}{K'}, \mu _\ell ) = 0$ for any separable $K'/K$. Now let $H$ be a maximal pro-$\ell $ subgroup of the absolute Galois group of $K$ and let $L$ be the corresponding extension. We can write $L$ as the colimit of finite extensions; applying Theorem 59.51.3 to this colimit we see that $H^2(\mathop{\mathrm{Spec}}(L), \mu _\ell ) = 0$. Now $\mu _\ell $ must be the constant sheaf: if it weren't, there would exist a Galois extension of $L$ of degree relatively prime to $\ell $ (namely, the extension obtained by adjoining the $\ell $th roots of unity to $L$), which is impossible by the definition of $L$. Hence, via Lemma 59.67.3, we conclude the result for $\ell \neq p$.
Now suppose that $\ell = p$. We consider the Artin-Schreier exact sequence (Section 59.63)
\[ 0 \longrightarrow \underline{\mathbf{Z}/p\mathbf{Z}}_{\mathop{\mathrm{Spec}}{K}} \longrightarrow \mathbf{G}_{a, \mathop{\mathrm{Spec}}{K}} \xrightarrow {F-1} \mathbf{G}_{a, \mathop{\mathrm{Spec}}{K}} \longrightarrow 0 \]
where $F - 1$ is the map $x \mapsto x^ p - x$. Then note that the higher cohomology of $\mathbf{G}_{a, \mathop{\mathrm{Spec}}{K}}$ vanishes, by Remark 59.23.4 and the vanishing of the higher cohomology of the structure sheaf of an affine scheme (Cohomology of Schemes, Lemma 30.2.2). Note that this applies to any field of characteristic $p$. In particular, we can apply it to the field extension $L$ defined by a maximal pro-$p$ subgroup $H$. This allows us to conclude $H^ n(\mathop{\mathrm{Spec}}{L}, \underline{\mathbf{Z}/p\mathbf{Z}}_{\mathop{\mathrm{Spec}}{L}}) = 0$ for $n \geq 2$, from which the result follows for $\ell = p$ by Lemma 59.67.3.
To finish the proof we still have to show that $H^ q(\text{Gal}(K^{sep}/K), (K^{sep})^*) = 0$ for all $q \geq 1$. Set $G = \text{Gal}(K^{sep}/K)$ and set $M = (K^{sep})^*$ viewed as a $G$-module. We have already shown (above) that $H^1(G, M) = 0$ and $H^2(G, M) = 0$. Consider the exact sequence
\[ 0 \to A \to M \to M \otimes \mathbf{Q} \to B \to 0 \]
of $G$-modules. By the above we have $H^ i(G, A) = 0$ and $H^ i(G, B) = 0$ for $i > 1$ since $A$ and $B$ are torsion $G$-modules. By Lemma 59.57.6 we have $H^ i(G, M \otimes \mathbf{Q}) = 0$ for $i > 0$. It is a pleasant exercise to see that this implies that $H^ i(G, M) = 0$ also for $i \geq 3$. $\square$
Definition 59.67.5. A field $K$ is called $C_ r$ if for every $0 < d^ r < n$ and every $f \in K[T_1, \ldots , T_ n]$ homogeneous of degree $d$, there exist $\alpha = (\alpha _1, \ldots , \alpha _ n)$, $\alpha _ i \in K$ not all zero, such that $f(\alpha ) = 0$. Such an $\alpha $ is called a nontrivial solution of $f$.
Example 59.67.6. An algebraically closed field is $C_ r$.
In fact, we have the following simple lemma.
Lemma 59.67.7. Let $k$ be an algebraically closed field. Let $f_1, \ldots , f_ s \in k[T_1, \ldots , T_ n]$ be homogeneous polynomials of degree $d_1, \ldots , d_ s$ with $d_ i > 0$. If $s < n$, then $f_1 = \ldots = f_ s = 0$ have a common nontrivial solution.
Proof. This follows from dimension theory, for example in the form of Varieties, Lemma 33.34.2 applied $s - 1$ times. $\square$
The following result computes the Brauer group of $C_1$ fields.
Theorem 59.67.8. Let $K$ be a $C_1$ field. Then $\text{Br}(K) = 0$.
Proof. Let $D$ be a finite dimensional division algebra over $K$ with center $K$. We have seen that
\[ D \otimes _ K K^{sep} \cong \text{Mat}_ d(K^{sep}) \]
uniquely up to inner isomorphism. Hence the determinant $\det : \text{Mat}_ d(K^{sep}) \to K^{sep}$ is Galois invariant and descends to a homogeneous degree $d$ map
\[ \det = N_\text {red} : D \longrightarrow K \]
called the reduced norm. Since $K$ is $C_1$, if $d > 1$, then there exists a nonzero $x \in D$ with $N_\text {red}(x) = 0$. This clearly implies that $x$ is not invertible, which is a contradiction. Hence $\text{Br}(K) = 0$. $\square$
Definition 59.67.9. Let $k$ be a field. A variety is a separated, integral scheme of finite type over $k$. A curve is a variety of dimension $1$.
Theorem 59.67.10 (Tsen's theorem). The function field of a variety of dimension $r$ over an algebraically closed field $k$ is $C_ r$.
Proof. For projective space one can show directly that the field $k(x_1, \ldots , x_ r)$ is $C_ r$ (exercise).
General case. Without loss of generality, we may assume $X$ to be projective. Let $f \in k(X)[T_1, \ldots , T_ n]_ d$ with $0 < d^ r < n$. Say the coefficients of $f$ are in $\Gamma (X, \mathcal{O}_ X(H))$ for some ample $H \subset X$. Let $\mathbf{\alpha } = (\alpha _1, \ldots , \alpha _ n)$ with $\alpha _ i \in \Gamma (X, \mathcal{O}_ X(eH))$. Then $f(\mathbf{\alpha }) \in \Gamma (X, \mathcal{O}_ X((de + 1)H))$. Consider the system of equations $f(\mathbf{\alpha }) =0$. Then by asymptotic Riemann-Roch (Varieties, Proposition 33.45.13) there exists a $c > 0$ such that
the number of variables is $n\dim _ k \Gamma (X, \mathcal{O}_ X(eH)) \sim n e^ r c$, and
the number of equations is $\dim _ k \Gamma (X, \mathcal{O}_ X((de + 1)H)) \sim (de + 1)^ r c$.
Since $n > d^ r$, there are more variables than equations. The equations are homogeneous hence there is a solution by Lemma 59.67.7. $\square$
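The dimension count in the last step can be made concrete with a small numerical check (not part of the proof). The sketch below takes the case where $\dim _ k \Gamma (X, \mathcal{O}_ X(eH)) = \binom{e+r}{r}$, as for projective space with $H$ a hyperplane; the values of r, d, n, e are arbitrary choices satisfying n > d^r and e large enough to see the asymptotics:

```python
# Dimension count behind Tsen's theorem, in the case where
# dim_k Gamma(X, O(e*H)) = C(e + r, r). The symbols r, d, n, e
# match the proof; the concrete values are illustrative only.
from math import comb

r, d, n = 2, 3, 10   # n = 10 > d^r = 9, as C_r requires
e = 100              # arbitrary large twist

num_variables = n * comb(e + r, r)       # ~ n * e^r * c
num_equations = comb(d * e + 1 + r, r)   # ~ (d*e + 1)^r * c

# More variables than equations, so a nontrivial common solution exists.
assert num_variables > num_equations
```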
Lemma 59.67.11. Let $C$ be a curve over an algebraically closed field $k$. Then the Brauer group of the function field of $C$ is zero: $\text{Br}(k(C)) = 0$.
Proof. This is clear from Tsen's theorem, Theorem 59.67.10 and Theorem 59.67.8. $\square$
Lemma 59.67.12. Let $k$ be an algebraically closed field and $K/k$ a field extension of transcendence degree 1. Then for all $q \geq 1$, $H_{\acute{e}tale}^ q(\mathop{\mathrm{Spec}}(K), \mathbf{G}_ m) = 0$.
Proof. Recall that $H_{\acute{e}tale}^ q(\mathop{\mathrm{Spec}}(K), \mathbf{G}_ m) = H^ q(\text{Gal}(K^{sep}/K), (K^{sep})^*)$ by Lemma 59.59.2. Thus by Proposition 59.67.4 it suffices to show that if $K'/K$ is a finite field extension, then $\text{Br}(K') = 0$. Now observe that $K' = \mathop{\mathrm{colim}}\nolimits K''$, where $K''$ runs over the finitely generated subextensions of $K'/k$ of transcendence degree $1$. Note that $\text{Br}(K') = \mathop{\mathrm{colim}}\nolimits \text{Br}(K'')$ which reduces us to a finitely generated field extension $K''/k$ of transcendence degree $1$. Such a field is the function field of a curve over $k$, hence has trivial Brauer group by Lemma 59.67.11. $\square$
|
Linear Equations in One Variable - Practically Study Material
Most of the equations that we have worked with had integer coefficients and integer solutions. In this chapter, we shall deal with equations involving rational numbers as the coefficients and their solutions can also be rational numbers.
Equation : A statement of equality which contains one or more unknown quantities or variables (literals) is called an equation.
3x + 7 = 12, and
\frac{x}{3}+5=\frac{x}{2}-3
are equations in one variable x.
2x + 3y = 15, are equations in two variables x and y.
An equation involving only linear polynomials is called a linear equation.
3x – 2 = 7,
\frac{3}{2}x+9=\frac{1}{2}
are linear equations in one variable, because the highest power of the variable in each equation is one, whereas the equations
3x² – 2x + 1 = 0 and y² – 1 = 8 are not linear equations, because the highest power of the variable in each of them is not one.
2.4 SOLUTION OF A LINEAR EQUATION
A value of the variable which when substituted for the variable in an equation makes L.H.S. = R.H.S. is said to satisfy the equation and is called a solution or a root of the equation.
In other words, a value of the variable which makes the equation a true statement is called a solution or a root of the equation.
Example : Verify that x = 4 is a root of the equation 2x – 3 = 5.
Solution : Substituting x = 4 in the given equation,
We get L.H.S. = 2x – 3 = 2 × 4 – 3 = 8 – 3 = 5 = R.H.S.
Hence, x = 4 is a root of the equation 2x – 3 = 5.
Solving an equation means determining its roots i.e., determining value of the variable which satisfies it.
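The substitution check from the example above can be mirrored in a short script. This is only a sketch: the helper name is_root is invented here, and the equation 2x – 3 = 5 is the one from the example.

```python
# Verify a root by direct substitution: a value x is a root of an
# equation when substituting it makes L.H.S. = R.H.S.
def is_root(lhs, rhs, x):
    return lhs(x) == rhs(x)

# The worked example: x = 4 is a root of 2x - 3 = 5.
assert is_root(lambda x: 2*x - 3, lambda x: 5, 4)      # 2*4 - 3 = 5
assert not is_root(lambda x: 2*x - 3, lambda x: 5, 3)  # 2*3 - 3 = 3, not 5
```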
Rules for Solving Linear Equations in One Variable
We learnt the rules for solving an equation in one variable. Let us recall them. They are :
Rule 1 : Same quantity (number) can be added to both sides of an equation without changing the equality.
Rule 2 : Same quantity can be subtracted from both sides of an equation without changing the equality.
Rule 3 : Both sides of an equation may be multiplied by the same non-zero number without changing the equality.
Rule 4 : Both sides of an equation may be divided by the same non-zero number without changing the equality. It should be noted that some complicated equations can be solved by using two or more of these rules together.
Solving Equations having variable terms on one side and number(s) on the other side
The following examples will illustrate the method of solving equations in one variable having variable terms on one side and numbers on the other side.
Example : Solve the equation
\frac{x}{5}+11=\frac{1}{15}
Solution : We have
\frac{x}{5}+11=\frac{1}{15}
⇒
\frac{x}{5}+11-11=\frac{1}{15}-11
[Subtracting 11 from both sides]
⇒
\frac{x}{5}=\frac{1}{15}-11
⇒
\frac{x}{5}=\frac{1-165}{15}
⇒
\frac{x}{5}=-\frac{164}{15}
⇒
5×\frac{x}{5}=5×\left(-\frac{164}{15}\right)
[Multiplying both sides by 5]
⇒
x=-\frac{164}{3}
Hence, x=-\frac{164}{3} is the solution of the given equation.
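As always, the root can be verified by substitution. The sketch below uses Python's exact rational arithmetic so that no floating-point rounding can hide an error:

```python
# Check that x = -164/3 satisfies x/5 + 11 = 1/15, using exact fractions.
from fractions import Fraction

x = Fraction(-164, 3)
lhs = x / 5 + 11          # -164/15 + 165/15 = 1/15
rhs = Fraction(1, 15)
assert lhs == rhs
```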
2.6 TRANSPOSITION METHOD FOR SOLVING LINEAR EQUATIONS IN ONE VARIABLE
Sometimes the two sides of an equation contain both variable (unknown quantity) and constants (numerals).
In such cases, we first simplify two sides in their simplest forms and then transpose (shift) terms containing variable on R.H.S. to L.H.S. and constant terms on L.H.S. to R.H.S. By transposing a term from one side to the other side, we mean changing its sign and carrying it to the other side. In transposition the plus sign of the term changes into minus sign on the other side and vice-versa.
The transposition method involves the following steps :
Step I : Obtain the linear equation.
Step II : Identify the variable (unknown quantity) and constants (numerals).
Step III : Simplify the L.H.S. and R.H.S. to their simplest forms by removing brackets.
Step IV : Transpose all terms containing variable on L.H.S. and constant terms on R.H.S. Note that the sign of the terms will change in shifting them from L.H.S. to R.H.S. and vice-versa.
Step V : Simplify L.H.S. and R.H.S. in the simplest form so that each side contains just one term.
Step VI : Solve the equation obtained in step V by dividing both sides by the coefficient of the variable on L.H.S. Following examples will illustrate the above procedure.
Example : Solve
\frac{x}{2}-\frac{1}{5}=\frac{x}{3}+\frac{1}{4}
Solution : We have,
\frac{x}{2}-\frac{1}{5}=\frac{x}{3}+\frac{1}{4}
The denominators on the two sides are 2, 5, 3, and 4. Their LCM is 60.
Multiplying both sides of the given equation by 60, we get
60×\left(\frac{x}{2}-\frac{1}{5}\right)=60×\left(\frac{x}{3}+\frac{1}{4}\right)
⇒60×\frac{x}{2}-60×\frac{1}{5}=60×\frac{x}{3}+60×\frac{1}{4}
⇒30x-12=20x+15
⇒30x-20x=15+12 [On transposing 20x to LHS and -12 to RHS]
⇒10x=27
⇒x=\frac{27}{10} [On dividing both sides by 10]
Hence, x=\frac{27}{10} is the solution of the given equation.
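The solution x = 27/10 can be checked by substituting it into both sides of the original equation; a sketch with exact fractions:

```python
# Both sides of x/2 - 1/5 = x/3 + 1/4 should agree at x = 27/10.
from fractions import Fraction

x = Fraction(27, 10)
lhs = x / 2 - Fraction(1, 5)   # 27/20 - 4/20 = 23/20
rhs = x / 3 + Fraction(1, 4)   # 18/20 + 5/20 = 23/20
assert lhs == rhs == Fraction(23, 20)
```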
2.7 CROSS-MULTIPLICATION METHOD FOR SOLVING EQUATIONS OF THE FORM
\frac{ax+b}{cx+d}=\frac{m}{n}
Consider the equation
\frac{2x+5}{3x+7}=\frac{3}{5}
Clearly, it is an equation of the form
\frac{ax+b}{cx+d}=\frac{m}{n}
, where a = 2, b = 5, c = 3, d = 7, m = 3 and n = 5.
Evidently, it is an equation in one variable x but it is not a linear equation, because the LHS is not a linear polynomial. However, it can be converted into a linear equation by applying the rules for solving an equation as discussed below.
\frac{2x+5}{3x+7}=\frac{3}{5}
As x represents a number, so 3x + 7 also represents a number. Multiplying both sides of (i) by (3x + 7) × 5 i.e., the product of numbers in the denominators on L.H.S. and RHS, we get
\left(3x+7\right)×5×\left(\frac{2x+5}{3x+7}\right)=\frac{3}{5}×\left(3x+7\right)×5
⇒5×\left(2x+5\right)=3×\left(3x+7\right)
⇒10x+25=9x+21
⇒10x-9x=21-25
⇒x=-4
This is the required solution of equation
\frac{2x+5}{3x+7}=\frac{3}{5}
Note that in solving this equation, we have first converted it into a linear equation given in (ii) by applying the rules of solving equations. Equation (ii) can also be obtained directly from equation (i) by equating the product of numerator of LHS and denominator of RHS to the product of denominator of LHS and numerator of RHS. This can be exhibited as follow :
This process of multiplying the numerator on LHS with the denominator on RHS and equating it to the product of the denominator on LHS with the numerator on RHS is called cross-multiplication.
It is evident from the above discussion that by using cross-multiplication we can convert an equation of the form
\frac{ax+b}{cx+d}=\frac{m}{n}
to a linear equation n(ax + b) = m(cx + d).
This equation can now be solved by using the rules for solving equations.
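The reduction to n(ax + b) = m(cx + d) can be packaged as a small solver. This is a sketch: the helper name solve_cross_mult is invented here, and the check at the end uses the worked example from the text.

```python
# Cross-multiplication turns (a*x + b)/(c*x + d) = m/n into the linear
# equation n*(a*x + b) = m*(c*x + d), which rearranges to
#   x = (m*d - n*b) / (n*a - m*c).
from fractions import Fraction

def solve_cross_mult(a, b, c, d, m, n):
    return Fraction(m*d - n*b, n*a - m*c)

# The worked example (2x + 5)/(3x + 7) = 3/5 from the text:
x = solve_cross_mult(2, 5, 3, 7, 3, 5)
assert x == -4
# Substituting back: (2*(-4) + 5)/(3*(-4) + 7) = (-3)/(-5) = 3/5.
assert Fraction(2*x + 5, 3*x + 7) == Fraction(3, 5)
```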
2.8 APPLICATIONS OF LINEAR EQUATIONS TO PRACTICAL PROBLEMS
In this section, we will study the formulation and solution of some practical problems. These problems involve relations among unknown quantities (variables) and known quantities (numbers) and are often stated in words. That is why we often refer to these problems as word problems. A word problem is first translated into the form of an equation containing unknown quantities (variables) and known quantities (numbers or constants), and then we solve it by using any one of the methods discussed in the earlier sections. The procedure of translating a word problem into the form of an equation is known as the formulation of the problem. Thus, the process of solving a word problem consists of two parts, namely, formulation and solution.
The following steps should be followed to solve a word problem:
Step I : Read the problem carefully and note what is given and what is required.
Step II : Denote the unknown quantity by some letters, say x, y, z etc.
Step III : Translate the statements of the problem into mathematical statements.
Step IV : Using the condition(s) given in the problem, form the equation.
Step V : Solve the equation for the unknown.
Step VI : Check whether the solution satisfies the equation.
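As a minimal illustration of these steps, consider a made-up word problem (not from the text): "A number increased by 5 gives 20. Find the number."

```python
# Step II: let x denote the unknown number.
# Step IV: the condition "x increased by 5 gives 20" forms the
#          equation x + 5 = 20.
# Step V:  transpose 5 to the R.H.S. to solve for x.
x = 20 - 5
# Step VI: check that the solution satisfies the original equation.
assert x + 5 == 20
assert x == 15
```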
|
Machinability Knowpia
Machinability is the ease with which a metal can be cut (machined) permitting the removal of the material with a satisfactory finish at low cost.[1] Materials with good machinability (free machining materials) require little power to cut, can be cut quickly, easily obtain a good finish, and do not wear the tooling much. The factors that typically improve a material's performance often degrade its machinability. Therefore, to manufacture components economically, engineers are challenged to find ways to improve machinability without harming performance.
Machinability can be difficult to predict because machining has so many variables. Two sets of factors are the condition of work materials and the physical properties of work materials.[2] The condition of the work material includes eight factors: microstructure, grain size, heat treatment, chemical composition, fabrication, hardness, yield strength, and tensile strength.[3] Physical properties are those of the individual material groups, such as the modulus of elasticity, thermal conductivity, thermal expansion, and work hardening.[3] Other important factors are operating conditions, cutting tool material and geometry, and the machining process parameters.[3]
Machinability of steels
Steels are among the most important and commonly used engineering materials. Their machinability has been greatly improved by adding lead and sulfur, with the resulting material being known as free machining steel.[4]
Quantifying machinability
There are many factors affecting machinability, but no widely accepted way to quantify it. Instead, machinability is often assessed on a case-by-case basis, and tests are tailored to the needs of a specific manufacturing process. Common metrics for comparison include tool life, surface finish, cutting temperature, and tool forces and power consumption.[5][6]
Tool life method
Machinability can be based on the measure of how long a tool lasts. This can be useful when comparing materials that have similar properties and power consumptions, but one is more abrasive and thus decreases the tool life. The major downfall with this approach is that tool life is dependent on more than just the material it is machining; other factors include cutting tool material, cutting tool geometry, machine condition, cutting tool clamping, cutting speed, feed, and depth of cut. Also, the machinability for one tool type cannot be compared to another tool type (i.e. HSS tool to a carbide tool).[6]
{\displaystyle {\text{Machinability index (}}\%{)}={\frac {\text{cutting speed of material for 20 minute tool life}}{\text{cutting speed of free-cutting steel for 20 minute tool life}}}\times 100}
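The index formula above translates directly into code. In this sketch the function name and the cutting speeds are made-up illustrative values, not measured data:

```python
# Machinability index: the cutting speed giving a 20 minute tool life in
# the test material, as a percentage of the speed giving the same tool
# life in the reference free-cutting steel.
def machinability_index(speed_material, speed_reference):
    return speed_material / speed_reference * 100

assert machinability_index(120, 100) == 120.0  # cuts faster than reference
assert machinability_index(70, 100) == 70.0    # cuts slower than reference
```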
Tool forces and power consumption method
The forces required for a tool to cut through a material are directly related to the power consumed. Therefore, tool forces are often given in units of specific energy. This leads to a rating method where higher specific energies equal lower machinability. The advantage of this method is that outside factors have little effect on the rating.[6]
Surface finish method
The surface finish is sometimes used to measure the machinability of a material. Soft, ductile materials tend to form a built-up edge. Stainless steel and other materials with a high strain-hardening ability also tend to form a built-up edge. Aluminium alloys, cold-worked steels, and free machining steels, as well as materials with a high shear zone, don't tend to form built-up edges, so these materials rank as more machinable.[7]
The advantage of this method is that it is easily measured with the appropriate equipment. The disadvantage of this criterion is that it is often irrelevant. For instance when making a rough cut, the surface finish is of no importance. Also, finish cuts often require a certain accuracy that naturally achieves a good surface finish. This rating method also doesn't always agree with other methods. For instance titanium alloys would rate well by the surface finish method, low by the tool life method, and intermediate by the power consumption method.[7][8]
Machinability rating
The machinability rating of a material attempts to quantify the machinability of various materials. It is expressed as a percentage or a normalized value. The American Iron and Steel Institute (AISI) determined machinability ratings for a wide variety of materials by running turning tests at 180 surface feet per minute (sfpm).[9] It then arbitrarily assigned 160 Brinell B1112 steel a machinability rating of 100%.[9] The machinability rating is determined by measuring the weighted averages of the normal cutting speed, surface finish, and tool life for each material.[9] Note that a material with a machinability rating less than 100% would be more difficult to machine than B1112 and material with a value more than 100% would be easier.
Machinability rating = (speed of machining the workpiece giving a 60 min tool life) / (speed of machining the standard metal giving a 60 min tool life)
Machinability ratings can be used in conjunction with the Taylor tool life equation, VT^n = C, in order to determine cutting speeds or tool life. It is known that B1112 has a tool life of 60 minutes at a cutting speed of 100 sfpm. If a material has a machinability rating of 70%, it can be determined, with the above knowns, that in order to maintain the same tool life (60 minutes) the cutting speed must be 70 sfpm (assuming the same tooling is used).[1]
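The 70% example works out as follows. This is only a sketch: the constant and function names are invented, and the 100 sfpm reference value is the B1112 figure quoted above.

```python
# A machinability rating scales the reference cutting speed (100 sfpm for
# B1112 at a 60 minute tool life) to the speed giving the same tool life
# in the rated material, assuming the same tooling is used.
B1112_SPEED_SFPM = 100  # cutting speed for a 60 min tool life in B1112

def cutting_speed(machinability_rating_percent):
    return B1112_SPEED_SFPM * machinability_rating_percent / 100

assert cutting_speed(70) == 70.0    # the 70% rating example from the text
assert cutting_speed(100) == 100.0  # B1112 itself, by definition
```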
The carbon content of steel greatly affects its machinability. High-carbon steels are difficult to machine because they are strong and because they may contain carbides that abrade the cutting tool. On the other end of the spectrum, low-carbon steels are troublesome because they are too soft. Low-carbon steels are "gummy" and stick to the cutting tool, resulting in a built up edge that shortens tool life. Therefore, steel has the best machinability with medium amounts of carbon, about 0.20%.[5]
Chromium, molybdenum and other alloying metals are often added to steel to improve its strength. However, most of these metals also decrease machinability.
Inclusions in steel, especially oxides, may abrade the cutting tool. Machinable steel should be free of these oxides.
There are a variety of chemicals, both metal and non-metal, that can be added to steel to make it easier to cut. These additives may work by lubricating the tool-chip interface, decreasing the shear strength of the material, or increasing the brittleness of the chip. Historically, sulfur and lead have been the most common additives, but bismuth and tin are increasingly popular for environmental reasons.
Lead can improve the machinability of steel because it acts as an internal lubricant in the cutting zone.[10] Since lead has poor shear strength, it allows the chip to slide more freely past the cutting edge. When it is added in small quantities to steel, it can greatly improve its machinability while not significantly affecting the steel's strength.
Sulfur improves the machinability of steel by forming low shear strength inclusions in the cutting zone. These inclusions are stress risers that weaken the steel, allowing it to deform more easily.
Stainless steels have poor machinability compared to regular carbon steel because they are tougher, gummier and tend to work harden very rapidly.[5] Slightly hardening the steel may decrease its gumminess and make it easier to cut. AISI grades 303 and 416 are easier to machine because of the addition of sulfur and phosphorus.[11]
Aluminium is a much softer metal than steel, and the techniques to improve its machinability usually rely on making it more brittle. Alloys 2007, 2011 and 6020 have very good machinability.[11]
Thermoplastics are difficult to machine because they have poor thermal conductivity.[10] This creates heat that builds up in the cutting zone, which degrades the tool life and locally melts the plastic. Once the plastic melts, it just flows around the cutting edge instead of being removed by it. Machinability can be improved by using high lubricity coolant and keeping the cutting area free of chip build up.
Composites often have the worst machinability because they combine the poor thermal conductivity of a plastic resin with the tough or abrasive qualities of the fiber (glass, carbon etc.) material.
The machinability of rubber and other soft materials improves by using a very low temperature coolant, such as liquid carbon dioxide. The low temperatures chill the material prior to cutting so that it cannot deform or stick to the cutting edge. This means less wear on the tools and easier machining.
^ Schneider, George, "Machinability of Metals," American Machinist, December, 2009.
^ a b c Schneider, "Machinability."
^ Engineering book, Kalpak Jain. "Machinability".
^ a b c Bakerjian, Ramon; Cubberly, W. H. (1989). Tool and manufacturing engineers handbook. Dearborn, Mich: Society of Manufacturing Engineers. pp. 15–3, 15–10, 19–13 to 19–18. ISBN 0-87263-351-9.
^ a b c Schneider, p. 8.
^ a b Schneider, p. 9.
^ Schneider, p. 10.
^ a b Kalpakjian, Serope; Steven R. Schmid (2003). Manufacturing Processes for Engineering Materials. Pearson Education. pp. 437–440. ISBN 81-7808-990-4.
^ a b "McMaster-Carr Catalog". Retrieved 2008-04-01.
Schneider, George Jr (2002). Cutting Tool Applications (PDF). Archived from the original (PDF) on November 30, 2006.
Machinability ratings from an industry publication
|
Moby-Dick#Epilogue (links to non-existent sections aren't really broken, they are treated as links to the page, i.e. to the top)
[[Moby-Dick#Epilogue]]
Endings are blended into the link: [[machine]]s, [[design]]ing
When adding a comment to a Talk page, you should sign it. You can do this by typing four tildes (~~~~), which expands to your user name plus a timestamp; three tildes give the user name alone.
Redirect one article title to another by putting text like this in its first line. This text on the United States of America page redirects browsers to United States.
[[Image:Wikiquote-logo-en.png|Wikiquote]]
Clicking on an uploaded image displays a description page, which you can also link directly to: Image:Wikiquote-logo-en.png
Similar markups are used for categories:
A plain wiki link categorizes the page under a certain category (see elsewhere on this page: you will find a link to the category "Wikiquote"). Clicking on a category link leads you to a category page, which you can also link to directly elsewhere in the page: Category:Categories
{\displaystyle \sum _{n=0}^{\infty }{\frac {x^{n}}{n!}}}
In the current wiki markup language, at least four headers automatically trigger a TOC in front of the first header (or after introductory sections). Adding __TOC__ anywhere in the page forces a TOC on pages with fewer than four headers. Putting __NOTOC__ anywhere suppresses the TOC.
|
Created by Mehjabin Abdurrazaque and Wojciech Sas, PhD candidate
How to determine the wattage of a resistor?
How to find the power dissipated by a resistor?
How to use the resistor wattage calculator?
How to use the resistor wattage calculator for circuits with multiple resistors
Is this resistor wattage calculator suitable for resistors in AC circuits?
The Omni resistor wattage calculator lets you figure out how much electrical power a resistor absorbs and dissipates as heat or light. This article also explains:
How to determine the wattage of a resistor;
The derivation of the electrical power formula for a resistor; and
How to find the power dissipated by a resistor.
Our tool comes in handy in several ways. You can determine the unknown variables among resistance, power, voltage, and current from any two of these variables!
Alternatively, you can use this resistor power calculator to find the power dissipated by each resistor in a parallel circuit or a series circuit comprised of up to ten resistors! This part of our tool also functions as a parallel/series resistance calculator, voltage divider calculator, and current divider calculator. So, why not try it out?
We can't determine a resistor's wattage from its color code, but its size can help us here. A resistor's size varies according to its wattage or power rating. For example, the smallest carbon composition resistor's power rating is 1/8 W, while the largest resistor comes with a 5 W rating. A thick-film chip resistor of size 20 × 10 mils (the 0201 package) has a power rating of 1/20 W, whereas a 250 × 120 mils (2512) thick-film chip resistor's power rating is 1 W.
We know that electricity is the flow of electrons. The potential difference
V
is the amount of work done per unit charge to move a test charge from point A to B without changing its kinetic energy. The total work done when electrons flow through a resistor is:
W = Q \cdot V
W
— the total work done;
Q
— the total charge of the electrons that passed through the resistor over a given period of time; and
V
— the potential difference (or the potential drop) across the resistor.
We know that the current
I
is the total charge flowing over a time period
\Delta t
I = Q\ /\ \Delta t
Thus, we can rewrite the work done as:
W = \left(I \cdot \Delta t\right) \cdot V
Power is the rate of work done, so the electrical power is:
\begin{split} P &= W/ \Delta t \\ &= V\cdot I \end{split}
⚠️ Do not confuse unit charge with an electron. A unit charge is one coulomb; the charge of an electron has a magnitude of 1.60217662 × 10⁻¹⁹ coulombs.
To learn more about charges, visit Omni's electrostatic calculators on Coulomb's law and electric field!
To get the power dissipated by a resistor, we can begin with Ohm's law:
V=I\cdot R
R
is the resistance of the resistor.
Therefore, we can rewrite the electrical power formula,
P = V \cdot I
, to estimate the power dissipated by the resistor as:
\begin{split} P &= I^2 \cdot R \\ &= V^2 / R \\ \end{split}
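As a quick numerical sanity check of the three equivalent forms of the power formula (example values here are arbitrary illustrations):

```python
V, R = 10.0, 5.0          # volts, ohms (arbitrary example values)
I = V / R                 # Ohm's law: 2.0 A

p_vi = V * I              # P = V * I
p_ir = I**2 * R           # P = I^2 * R
p_vr = V**2 / R           # P = V^2 / R

print(p_vi, p_ir, p_vr)   # all three give 20.0 W
```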
So we know what the formula for electrical power is, and we've learned all the theory about calculating the power dissipated by a resistor. Let's try to use this knowledge in practice!
Let's assume we have the following problem:
❓ Three resistors of 20 Ω, 30 Ω, and 50 Ω are connected in series across a 125 V battery. Determine the total power dissipated by the resistors.
Let's see how to use the resistor wattage calculator to solve this problem:
Select the appropriate units for each quantity. The units of resistance, current, voltage, and power are ohm (Ω), ampere (A), volt (V), and watt (W) respectively by default.
Identify the variables given in the question — in the above question, the quantities given are resistance and voltage.
Enter 100 Ω (equivalent resistance) in the input box for resistance.
Enter 125 V in the input box for voltage.
There you have it! Our resistor power calculator displays both the current flown through (1.25 A) and the power dissipated (156.25 W) by the resistor.
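The worked example above can be reproduced in a few lines of Python; this is a sketch of the same arithmetic, not the calculator's internals:

```python
resistances = [20.0, 30.0, 50.0]   # ohms, connected in series
voltage = 125.0                    # volts across the battery

r_eq = sum(resistances)            # 100 ohm equivalent resistance
current = voltage / r_eq           # 1.25 A through every resistor
power = voltage * current          # 156.25 W total dissipation

print(r_eq, current, power)        # 100.0 1.25 156.25
```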
To use the resistor wattage calculator for circuits with multiple resistors:
Select the circuit type from the drop-down list labelled Circuit type.
Choose the known parameter between the power supply's current and voltage from the drop-down list for My power supply has constant. Enter the known parameter's value in the next row.
Start entering the resistance of resistors from Resistor 1 (R₁). Each time you enter the value for resistance, a new row shows up to add the next resistance. You can add up to ten resistors.
Easy peasy! Our resistor power calculator displays the equivalent resistance, the current through each resistor, the voltage drop across each resistor, and the power dissipated in each resistor!
Our calculator uses the equation for power in a DC circuit to determine the power absorbed by a resistor, as given by
P = V\cdot I
. The average power of an AC circuit is the product of the root mean square (RMS) values of the voltage across and the current from the power supply, and the power factor:
P = V_{RMS} \cdot I_{RMS} \cdot \text{PF}
V_{RMS}
I_{RMS}
denote the RMS values of voltage and current.
\text{PF}
is the circuit's power factor.
The RMS values of voltage and current are equivalent to a DC voltage and current respectively. For a purely resistive circuit (a circuit that contains only resistors and does not contain capacitors or inductors, or one where only resistors dissipate all circuit power), the power factor will be 1.
Hence, the power dissipated by a resistor in an AC circuit with no capacitors and inductors is
P = V_{RMS} \cdot I_{RMS}
. This means you can use our tool to calculate the power dissipated by a resistor in an AC circuit, but only if it's a purely resistive one.
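The AC average-power formula above can be sketched as follows (the function name is ours; a purely resistive load has PF = 1):

```python
def ac_average_power(v_rms, i_rms, power_factor=1.0):
    """Average power drawn from an AC supply: P = Vrms * Irms * PF."""
    return v_rms * i_rms * power_factor

print(ac_average_power(230.0, 2.0))        # purely resistive: 460.0 W
print(ac_average_power(230.0, 2.0, 0.8))   # with PF = 0.8
```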
Resistors slow down the electrons flowing in their circuit and reduce the overall current. The high electron affinity of a resistor's atoms causes the electrons inside the resistor to slow down. These electrons exert a repulsive force on the electrons moving away from the battery's negative terminal, slowing them. The electrons between the resistor and the positive terminal experience little of this repulsive force, and therefore do not accelerate.
Can a resistor supply power?
No. The process of supplying power involves converting other forms of energy into electrical energy. Resistors convert electrical energy into heat. So, a resistor cannot supply power to a circuit, but instead absorbs and dissipates power.
How do I find the power dissipated by a 10 Ω resistor connected parallel to a 5 Ω resistor of 40 W?
In a parallel connection of resistors, the voltage across each resistor is the same.
Find the voltage (V) across resistor R1 of power rating P1 using the formula:
V = √(P1 × R1).
Calculate the power dissipated by the second resistor (R2): P2 = V²/R2.
The overall voltage is 14.14 V, so the resulting power equals 20 W.
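The two steps above, checked numerically with the numbers from the question:

```python
import math

p1, r1 = 40.0, 5.0        # the 5-ohm resistor dissipates 40 W
r2 = 10.0                 # the parallel 10-ohm resistor

v = math.sqrt(p1 * r1)    # shared voltage: sqrt(200) ≈ 14.14 V
p2 = (p1 * r1) / r2       # since v**2 = p1*r1, P2 = 200/10 = 20 W

print(round(v, 2), p2)    # 14.14 20.0
```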
How do I find which resistor dissipates the most power in a circuit?
In a series circuit, the component with the greatest resistance dissipates the most power: the same current flows through all resistors, and power is the product of the square of the current and the resistance, I²R.
In a parallel circuit, the component with the least resistance dissipates the most power: the voltage across every resistor is the same, and power is V²/R.
What are power resistors used for?
Power resistors are used for dissipating large amounts of energy as heat, as their resistance doesn't change significantly with rising temperatures.
Want to learn more about resistors? Check out our wire resistance calculator!
Mehjabin Abdurrazaque and Wojciech Sas, PhD candidate
Single resistor circuit
Multiple resistors circuit
My power supply has a constant
You can add up to ten resistors — their fields will appear as you need them.
Input at least one resistor to obtain a result.
|
MapleGcAllow - Maple Help
prevent garbage collection on an object in external code
allow garbage collection on an object in external code
test if an object is protected from garbage collection in external code
mark an object contained in a MaplePointer during a garbage collection
MapleGcProtect(kv, s)
MapleGcAllow(kv, s)
MapleGcIsProtected(kv, s)
MapleGcMark(kv, s)
MapleGcProtect prevents the object, s, from being collected by the Maple garbage collector. The memory pointed to by s is not freed until Maple exits, is restarted, or a call to MapleGcAllow is issued. Any Maple objects that must persist between external function invocations must be protected, or associated with a MaplePointer mark function. This includes any external global or static ALGEB variables that will be referred to in a later external call. Failure to protect such a persistent variable can lead to unexpected results if the Maple garbage collector disposes of it between function calls.
MapleGcAllow allows the Maple garbage collector to reclaim storage used by the object, s. This does not necessarily mean that the storage will be reclaimed. It just means that the object obeys the same rules applied to all other Maple objects -- that they can be collected whenever there is no longer anything actively referring to them. Be careful not to allow garbage collection on any object that was not protected by you, or was protected prior to when your code was to protect it.
MapleGcIsProtected returns TRUE if the given object is protected from garbage collection.
MapleGcMark is used in combination with the MaplePointer mark function callback supplied by MaplePointerSetMarkFunction. It must be called only by such a callback. It is recommended that you use a MaplePointer with a mark function instead of protecting and unprotecting objects.
ALGEB M_DECL MySaveLast( MKernelVector kv, ALGEB *args )
{
    /* Persistent state must be GC-protected between external calls. */
    static ALGEB last = NULL;
    static M_BOOL was_protected = FALSE;
    ALGEB prev;
    if( !last ) {
        last = ToMapleNULL(kv);
        was_protected = MapleGcIsProtected(kv,last);
        MapleGcProtect(kv,last);
    }
    prev = last;
    if( MapleNumArgs(kv,(ALGEB)args) > 0 ) {
        if( !was_protected )
            MapleGcAllow(kv,last);
        last = args[1];
        MapleGcProtect(kv,last);
        return( prev );
    }
    return( last );
}
\mathrm{with}\left(\mathrm{ExternalCalling}\right):
\mathrm{dll}≔\mathrm{ExternalLibraryName}\left("HelpExamples"\right):
\mathrm{last}≔\mathrm{DefineExternal}\left("MySaveLast",\mathrm{dll}\right):
\mathrm{last}\left(x+y\right)
\mathrm{last}\left({z}^{3}\right)
\textcolor[rgb]{0,0,1}{x}\textcolor[rgb]{0,0,1}{+}\textcolor[rgb]{0,0,1}{y}
\mathrm{last}\left(\right)
{\textcolor[rgb]{0,0,1}{z}}^{\textcolor[rgb]{0,0,1}{3}}
|
Padding and Shearing an Image Simultaneously - MATLAB & Simulink Example - MathWorks América Latina
Padding and Shearing an Image Simultaneously
Step 1: Transform an Image Using Simple Shear
Step 2: Explore the Transformation
Step 3: Compare the 'fill', 'replicate', and 'bound' Pad Methods
Step 4: Exercise the 'circular' and 'symmetric' Pad Methods
This example shows how to construct a tform struct that represents a simple shear transformation and then applies it to an image. We explore how the transformation affects straight lines and circles, and then use it as a vehicle to explore the various options for image padding that can be used with imtransform and tformarray.
In two dimensions, a simple shear transformation that maps a pair of input coordinates [u v] to a pair of output coordinates [x y] has the form
x=u+a*v
y=v
Any simple shear is a special case of an affine transformation. You can easily verify that
\left[\begin{array}{ccc}x&y&1\end{array}\right]=\left[\begin{array}{ccc}u&v&1\end{array}\right]\left[\begin{array}{ccc}1&0&0\\ a&1&0\\ 0&0&1\end{array}\right]
yields the values for x and y given by the first two equations.
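The matrix form can be spot-checked with a small sketch (written in Python for illustration, although the example itself uses MATLAB):

```python
a = 0.45  # shear factor used in this example

def shear(u, v):
    """Apply [x y 1] = [u v 1] * [[1,0,0],[a,1,0],[0,0,1]]."""
    row = (u, v, 1.0)
    m = ((1.0, 0.0, 0.0),
         (a,   1.0, 0.0),
         (0.0, 0.0, 1.0))
    x, y, w = (sum(row[k] * m[k][j] for k in range(3)) for j in range(3))
    assert w == 1.0   # homogeneous coordinate is preserved
    return x, y

# Matches x = u + a*v, y = v:
print(shear(2.0, 4.0))   # approximately (3.8, 4.0)
```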
Setting a = 0.45, we construct an affine tform struct using maketform.
a = 0.45;
T = maketform('affine', [1 0 0; a 1 0; 0 0 1] );
We select, read, and view an image to transform.
A = imread('football.jpg');
h1 = figure; imshow(A); title('Original Image');
We choose a shade of orange as our fill value.
orange = [255 127 0]';
We are ready to use T to transform A. We could call imtransform as follows:
B = imtransform(A,T,'cubic','FillValues',orange);
but this is wasteful since we would apply cubic interpolation along both columns and rows. (With our pure shear transform, we really only need to interpolate along each row.) Instead, we create and use a resampler that applies cubic interpolation along the rows but simply uses nearest neighbor interpolation along the columns, then call imtransform and display the result.
R = makeresampler({'cubic','nearest'},'fill');
B = imtransform(A,T,R,'FillValues',orange);
h2 = figure; imshow(B);
title('Sheared Image');
Transforming a grid of straight lines or an array of circles with tformfwd is a good way to understand a transformation (as long as it has both forward and inverse functions).
Define a grid of lines covering the original image, and display it over the image. Then use tformfwd to apply the pure shear to each line in the grid, and display the result over the sheared image.
[U,V] = meshgrid(0:64:320,0:64:256);
gray = 0.65 * [1 1 1];
figure(h1);
line(U, V, 'Color',gray);
line(U',V','Color',gray);
[X,Y] = tformfwd(T,U,V);
figure(h2);
line(X, Y, 'Color',gray);
line(X',Y','Color',gray);
You can do the same thing with an array of circles.
for u = 0:64:320
for v = 0:64:256
theta = (0 : 32)' * (2 * pi / 32);
uc = u + 20*cos(theta);
vc = v + 20*sin(theta);
[xc,yc] = tformfwd(T,uc,vc);
figure(h1); line(uc,vc,'Color',gray);
figure(h2); line(xc,yc,'Color',gray);
end
end
When we applied the shear transformation, imtransform filled in the orange triangles to the left and right, where there was no data. That's because we specified a pad method of 'fill' when calling makeresampler. There are a total of five different pad method choices ('fill', 'replicate', 'bound', 'circular', and 'symmetric'). Here we compare the first three.
First, to get a better look at how the 'fill' option worked, use the 'XData' and 'YData' options in imtransform to force some additional space around the output image.
Bf = imtransform(A,T,R,'XData',[-49 500],'YData',[-49 400],...
'FillValues',orange);
figure, imshow(Bf);
title('Pad Method = ''fill''');
Now, try the 'replicate' method (no need to specify fill values in this case).
R = makeresampler({'cubic','nearest'},'replicate');
Br = imtransform(A,T,R,'XData',[-49 500],'YData', [-49 400]);
figure, imshow(Br);
title('Pad Method = ''replicate''');
And try the 'bound' method.
R = makeresampler({'cubic','nearest'}, 'bound');
Bb = imtransform(A,T,R,'XData',[-49 500],'YData',[-49 400],...
'FillValues',orange);
figure, imshow(Bb);
title('Pad Method = ''bound''');
Results with 'fill' and 'bound' look very similar, but look closely and you'll see that the edges are smoother with 'fill'. That's because the input image is padded with the fill values, then the cubic interpolation is applied across the edge, mixing fill and image values. In contrast, 'bound' recognizes a strict boundary between the inside and outside of the input image. Points falling outside are filled. Points falling inside are interpolated, using replication when they're near the edge. A close-up look helps show this more clearly. We choose XData and YData to bracket a point near the lower right corner of the image in the output image space, then resize with 'nearest' to preserve the appearance of the individual pixels.
R = makeresampler({'cubic','nearest'},'fill');
Cf = imtransform(A,T,R,'XData',[423 439],'YData',[245 260],...
'FillValues',orange);
R = makeresampler({'cubic','nearest'},'bound');
Cb = imtransform(A,T,R,'XData',[423 439],'YData',[245 260],...
'FillValues',orange);
Cf = imresize(Cf,12,'nearest');
Cb = imresize(Cb,12,'nearest');
subplot(1,2,1); imshow(Cf); title('Pad Method = ''fill''');
subplot(1,2,2); imshow(Cb); title('Pad Method = ''bound''');
The remaining two pad methods are 'circular' (circular repetition in each dimension) and 'symmetric' (circular repetition of the image with an appended mirror image). To show more of the pattern that emerges, we redefine the transformation to cut the scale in half.
Thalf = maketform('affine',[1 0; a 1; 0 0]/2);
R = makeresampler({'cubic','nearest'},'circular');
Bc = imtransform(A,Thalf,R,'XData',[-49 500],'YData',[-49 400],...
'FillValues',orange);
figure, imshow(Bc);
title('Pad Method = ''circular''');
R = makeresampler({'cubic','nearest'},'symmetric');
Bs = imtransform(A,Thalf,R,'XData',[-49 500],'YData',[-49 400],...
'FillValues',orange);
figure, imshow(Bs);
title('Pad Method = ''symmetric''');
|
Consider the graph at right as you answer the following questions.
Use the vertex to help you write an equation in graphing form.
(−3,−2)
y=a(x−h)^2+k
Use the graph to solve
x+5=\frac{1}{2}(x+3)^2-2
. How can you verify that your solution(s) are correct?
x=−5
x=1
Use the graph to solve the system below.
y=\frac{1}{2}(x+3)^2-2
y=x+5
Your answers should be in the form
(x,y)
Use the graph to solve the inequality
x+5<\frac{1}{2}(x+3)^2-2
x<−5
x>1
\frac{1}{2}(x+3)^2-2=0
x=−5
x=−1
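The solutions read off the graph can be verified by substitution, for instance:

```python
f = lambda x: 0.5 * (x + 3) ** 2 - 2   # the parabola
g = lambda x: x + 5                    # the line

# The graphs intersect where f(x) = g(x): at x = -5 and x = 1
assert f(-5) == g(-5) == 0
assert f(1) == g(1) == 6

# The parabola crosses y = 0 at x = -5 and x = -1
assert f(-5) == 0 and f(-1) == 0
print("all solutions check out")
```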
How could you change the equation of the parabola so that the parabola and the lines do not intersect? Is there more than one way?
Consider moving it up or flipping it over.
|
Redox - Simple English Wikipedia, the free encyclopedia
chemical reaction involving reduction and oxidation of different species
Redox (shorthand for reduction/oxidation) describes all chemical reactions in which atoms have an increase or decrease in oxidation number (oxidation state).[1]
In an oxidation-reduction reaction, one species transfers electrons to another: the oxidizing reagent pulls an electron from the other atom and takes on a net negative charge, while the reducing reagent gives up an electron and takes on a net positive charge. The oppositely charged ions then attract each other. However, there are exceptions.[2]
Chemical process
1) C + O₂ ⟶ CO₂
2) CO₂ + C ⇌ 2 CO
3) Fe₂O₃ + 3 CO ⟶ 3 CO₂ + 2 Fe
↑ This can be a simple redox process, such as the oxidation of carbon to yield carbon dioxide, the reduction of carbon by hydrogen to yield methane (CH4), or a complex process such as the oxidation of sugar in the human body, through a series of very complex electron transfer processes.
↑ Oxidation and reduction properly refer to a change in oxidation number – the actual transfer of electrons may never occur. Thus, oxidation is better defined as an increase in oxidation number, and reduction as a decrease in oxidation number. In practice, the transfer of electrons will always cause a change in oxidation number, but there are many reactions which are classed as "redox" even though no electron transfer occurs (such as those involving covalent bonds).
General Chemistry/Redox Reactions
|
Master reward book - The RuneScape Wiki
Master reward book
You cannot get a replacement for this book. Read it to get your reward.
The master reward book is a gift from Xuan once the regular skilling requirement of the Master jack of trades aura has been completed. Once the aura depletes on its own, or the owner removes it after training in 15 different skills, they may claim the book, but will not receive another book until they use the first one. The book works like an XP lamp or penguin points in that players choose a single skill in which to receive their bonus. It cannot be banked, but can be received as a free player after membership expires; in that case it cannot be used until membership is resumed.
{\displaystyle y=2\left(x^{2}-2x+100\right)}
where y is the experience earned. It gives 50% more experience than the reward book from the regular jack of trades aura.
|
Given that the power series {\displaystyle \sum _{n=0}^{\infty }c_{n}x^{n}} has radius of convergence {\displaystyle R,} find the radius of convergence of the series (a) {\displaystyle \sum _{n=0}^{\infty }c_{n}{\bigg (}{\frac {x}{2}}{\bigg )}^{n}} and (b) {\displaystyle \sum _{n=0}^{\infty }c_{n}(-x)^{n}.}
Recall the Ratio Test: for a series {\displaystyle \sum a_{n}}, let
{\displaystyle L=\lim _{n\rightarrow \infty }{\bigg |}{\frac {a_{n+1}}{a_{n}}}{\bigg |}.}
If {\displaystyle L<1,} the series converges absolutely; if {\displaystyle L>1,} it diverges; and if {\displaystyle L=1,} the test is inconclusive.
Assume that the power series {\displaystyle \sum _{n=0}^{\infty }c_{n}x^{n}} converges, and let {\displaystyle R} be the radius of convergence of this power series.
We can use the Ratio Test to find
{\displaystyle R.}
Using the Ratio Test, we have
{\displaystyle {\begin{array}{rcl}\displaystyle {\lim _{n\rightarrow \infty }{\bigg |}{\frac {c_{n+1}x^{n+1}}{c_{n}x^{n}}}{\bigg |}}&=&\displaystyle {\lim _{n\rightarrow \infty }{\bigg |}{\frac {c_{n+1}x}{c_{n}}}{\bigg |}}\\&&\\&=&\displaystyle {|x|\lim _{n\rightarrow \infty }{\bigg (}{\frac {c_{n+1}}{c_{n}}}{\bigg )}.}\end{array}}}
Since the radius of convergence of the series {\displaystyle \sum _{n=0}^{\infty }c_{n}x^{n}} is {\displaystyle R,} we get {\displaystyle R={\frac {1}{\displaystyle {\lim _{n\rightarrow \infty }{\bigg (}{\frac {c_{n+1}}{c_{n}}}{\bigg )}}}}.}
Now, we use the Ratio Test to find the radius of convergence of the series
{\displaystyle \sum _{n=0}^{\infty }c_{n}{\bigg (}{\frac {x}{2}}{\bigg )}^{n}.}
{\displaystyle {\begin{array}{rcl}\displaystyle {\lim _{n\rightarrow \infty }{\bigg |}{\frac {c_{n+1}2^{n}x^{n+1}}{c_{n}2^{n+1}x^{n}}}{\bigg |}}&=&\displaystyle {\lim _{n\rightarrow \infty }{\bigg |}{\frac {c_{n+1}x}{2c_{n}}}{\bigg |}}\\&&\\&=&\displaystyle {{\frac {|x|}{2}}\lim _{n\rightarrow \infty }{\bigg (}{\frac {c_{n+1}}{c_{n}}}{\bigg )}.}\end{array}}}
Hence, the radius of convergence of this power series is
{\displaystyle {\frac {2}{\displaystyle {\lim _{n\rightarrow \infty }{\bigg (}{\frac {c_{n+1}}{c_{n}}}{\bigg )}}}}=2R.}
Therefore, the radius of convergence of this power series is {\displaystyle 2R,} and it converges for {\displaystyle |x|<2R.}
(b) Assume again that the power series {\displaystyle \sum _{n=0}^{\infty }c_{n}x^{n}} has radius of convergence {\displaystyle R}. As before, we use the Ratio Test to find {\displaystyle R.}
{\displaystyle {\begin{array}{rcl}\displaystyle {\lim _{n\rightarrow \infty }{\bigg |}{\frac {c_{n+1}x^{n+1}}{c_{n}x^{n}}}{\bigg |}}&=&\displaystyle {\lim _{n\rightarrow \infty }{\bigg |}{\frac {c_{n+1}x}{c_{n}}}{\bigg |}}\\&&\\&=&\displaystyle {|x|\lim _{n\rightarrow \infty }{\bigg (}{\frac {c_{n+1}}{c_{n}}}{\bigg )}.}\end{array}}}
Since the radius of convergence of the series {\displaystyle \sum _{n=0}^{\infty }c_{n}x^{n}} is {\displaystyle R,} we get {\displaystyle R={\frac {1}{\displaystyle {\lim _{n\rightarrow \infty }{\bigg (}{\frac {c_{n+1}}{c_{n}}}{\bigg )}}}}.}
Now, we use the Ratio Test to find the radius of convergence of the series {\displaystyle \sum _{n=0}^{\infty }c_{n}(-x)^{n}.}
{\displaystyle {\begin{array}{rcl}\displaystyle {\lim _{n\rightarrow \infty }{\bigg |}{\frac {c_{n+1}(-x)^{n+1}}{c_{n}(-x)^{n}}}{\bigg |}}&=&\displaystyle {\lim _{n\rightarrow \infty }{\bigg |}{\frac {c_{n+1}(-x)}{c_{n}}}{\bigg |}}\\&&\\&=&\displaystyle {|x|\lim _{n\rightarrow \infty }{\bigg (}{\frac {c_{n+1}}{c_{n}}}{\bigg )}.}\end{array}}}
Hence, the radius of convergence of this power series is {\displaystyle {\frac {1}{\displaystyle {\lim _{n\rightarrow \infty }{\bigg (}{\frac {c_{n+1}}{c_{n}}}{\bigg )}}}}=R.}
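A concrete sanity check, taking cₙ = 1 so that R = 1: the series in x/2 is then geometric with ratio x/2 (radius 2R = 2), and the series in −x is geometric with ratio −x (radius R = 1):

```python
# Partial sums of the two modified series with c_n = 1 (so R = 1)
s_half = sum((1.0 / 2.0) ** n for n in range(200))   # x = 1, inside |x| < 2R = 2
s_neg  = sum((-0.5) ** n for n in range(200))        # x = 0.5, inside |x| < R = 1

print(s_half)   # ≈ 2, the geometric sum 1/(1 - 1/2)
print(s_neg)    # ≈ 2/3, the geometric sum 1/(1 + 1/2)
```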
(a) The radius of convergence is {\displaystyle 2R.}
(b) The radius of convergence is {\displaystyle R.}
|
Playing with Numbers - Practically Study Material
Dealing with numbers has been quite interesting. There are many tricks and puzzles of numbers that we may like to solve. In this Chapter, by using algebra, we shall learn how such tricks and puzzles can be scientifically worked out.
16.2. NUMBERS IN GENERALISED FORM
We have learned to write a number in the expanded form. For example, we can write:
76 as 70 + 6 = 10 × tens digit + ones digit
213 as 200 + 10 + 3 = 100 × hundreds digit + 10 × tens digit + ones digit
2498 as 2000 + 400 + 90 + 8 = 1000 × thousands digit + 100 × hundreds digit + 10 × tens digit + ones digit. Thus,
A 2-digit number whose tens digit is ‘a’ and ones digit is ‘b’ can be written as 10a + b, where a (a ≠ 0) and b are whole numbers from 0 to 9.
Similarly, we can write a 3-digit or a 4-digit number as follows
1. A 3-digit number with hundreds digit ‘a’, tens digit ‘b’ and ones digit ‘c’ is written as 100a + 10b + c, where a ≠ 0 (in short, as abc)
2. A 4-digit number with thousands digit ‘a’; hundreds digit ‘b’, tens digit ‘c’ and ones digit ‘d’ is written as 1000a + 100b + 10c + d (in short, as abcd).
Example 1 : What is the general form of the following numbers ?
(a) 375 (b) 901 (c) ps (d) 5z7
Solution : (a) General form of 375 = 300 + 70 + 5 = 100 × 3 + 10 × 7 + 5
(b) General form of 901 = 900 + 0 + 1 = 100 × 9 + 10 × 0 + 1
(c) General form of ps = 10 × p + s = 10p + s
(d) General form of 5z7 = 500 + 10z + 7 = 100 × 5 + 10 × z + 7
Example 2 : Write the following expressions in their usual form.
1. 100 × x + 10 × z+ r
2. 100 × 7 + 10 × 2 + 0
Solution : (a) Usual form of 100 × x + 10 × z + r = 100x + 10z + r = xzr
(b) Usual form of 100 × 7 + 10 × 2 + 0 = 700 + 20 + 0 = 720
16.3. GAMES WITH NUMBERS
Let us enjoy some games with numbers.
1. Reversing the digits – 2-digit number
The sum of a two-digit number and the number obtained by reversing its digits is always divisible by 11.
Think of any 2-digit number, say : 47
Reverse its two digits and get a new number : 74
Add the two numbers : 47 + 74, i.e., 121
Divide the sum by 11 : 121 ÷ 11, i.e., 11
Let the 2-digit number be ab.
Then, ab + ba = (10a + b) + (10b + a)
= 11a + 11b = 11(a + b)
⇒ (ab + ba) ÷ 11 = 11(a + b) ÷ 11 = a + b.
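The identity can be confirmed by brute force over every 2-digit number (a quick illustrative script):

```python
def reverse_sum_is_multiple_of_11(n):
    a, b = divmod(n, 10)          # tens digit a, ones digit b
    total = n + (10 * b + a)      # ab + ba
    return total % 11 == 0 and total // 11 == a + b

assert all(reverse_sum_is_multiple_of_11(n) for n in range(10, 100))
print(47 + 74, (47 + 74) // 11)   # 121 11
```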
The difference between a two-digit number and the number obtained by reversing its digits is always divisible by 9.
Think of any 2-digit number, say : 38
Reverse its two digits and get a new number : 83
Subtract the smaller number from the greater : 83 – 38, i.e., 45
Divide the difference by 9 : 45 ÷ 9, i.e., 5
Then, ab – ba = (10a + b) – (10b + a)
= 9a – 9b = 9(a – b)
⇒ (ab – ba) ÷ 9 = 9(a – b) ÷ 9 = a – b.
Reversing the digits – 3-digit number
The difference between a three digit number and the number obtained by reversing its digits is always divisible by 99.
Think of a 3-digit number, say : 249
Reverse its three digits and get a new number : 942
Subtract the smaller number from the greater number : 942 – 249, i.e., 693
Divide the difference by 99 : 693 ÷ 99, i.e., 7
Let the 3-digit number be abc.
Then, abc – cba = (100a + 10b + c) – (100c + 10b + a) = 99(a – c)
So, (abc – cba) ÷ 99 = 99(a – c) ÷ 99 = a – c
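The same kind of brute-force check works for every 3-digit number:

```python
def reverse_diff_is_multiple_of_99(n):
    a, b, c = n // 100, (n // 10) % 10, n % 10
    d = abs(n - (100 * c + 10 * b + a))   # |abc - cba|
    return d % 99 == 0 and d // 99 == abs(a - c)

assert all(reverse_diff_is_multiple_of_99(n) for n in range(100, 1000))
print(942 - 249, (942 - 249) // 99)   # 693 7
```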
If a three digit number has the same digit in its hundreds and one’s places, then the difference between the number and the number formed by reversing its digit is always zero.
abc – cba = 0, when a = c, a – c = 0
Forming 3-digit numbers with three given digits.
The sum of the three three-digit numbers obtained by cyclically rearranging the digits of a three-digit number is always divisible by 37 and by 3, whatever the values of a, b, c.
Let the three-digit number be abc, so abc = 100a + 10b + c.
Adding the three cyclic rearrangements gives abc + bca + cab = 111a + 111b + 111c = 3 × 37 × (a + b + c).
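A short check of the cyclic-sum identity (the digits here are chosen arbitrarily):

```python
def cyclic_sum(a, b, c):
    """abc + bca + cab for digits a, b, c."""
    return (100*a + 10*b + c) + (100*b + 10*c + a) + (100*c + 10*a + b)

s = cyclic_sum(2, 4, 9)           # 249 + 492 + 924
assert s == 111 * (2 + 4 + 9)     # 111 * 15 = 1665
assert s % 37 == 0 and s % 3 == 0
print(s)                          # 1665
```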
16.4. LETTERS FOR DIGITS
We all are familiar with puzzles in books, magazines, newspapers etc. Here, we introduce a new type of puzzle where you will need to find some missing numbers using the basic properties of mathematics. Let us look at the following puzzle.
How can we find the missing numbers ?
In the first column, the sum of 7 and another digit cannot be 2, as 2 is less than 7. Hence, it is logical that the number may be 12 as 7 + 5 = 12.
Thus, in the first column the missing number is 5.
The second column has a carryover value of 1.
Thus, it becomes 1 + 2 + ___ = 6
Now, it is clear that the missing number is 3.
We can also write the same problem as
\begin{array}{cc} \mathrm{B}& 7\\ +2& \mathrm{A}\\ 6& 2\end{array}
Here, the digits are denoted by the letters of the English alphabet and only one digit is denoted by one letter.
For example, let us try to find the value of A in the following example.
\begin{array}{r}\mathrm{A}\\ \mathrm{A}\\ +\mathrm{A}\\ 1 \mathrm{A}\end{array}
Here, we add a digit three times and get the same digit at the units place.
Such digits are 0 and 5.
If we take A as 0, then the addition of three 0s equals 0. However, we cannot get any number in the tens place. In the problem, we can see that the sum has 1 in its tens place. Let us now take the digit 5 as the value of A.
\begin{array}{r}5\\ 5\\ +5\\ 15\end{array}
Example 3 : Find the values of the letters in the following puzzles :
+\begin{array}{ccc}1& B& A\\ A& B& 2\\ 5& 6& 6\end{array}
\begin{array}{cc} 3& \mathrm{A}\\ ×& 3\\ 1 0& \mathrm{A}\end{array}
\begin{array}{cc} \mathrm{P}& \mathrm{Q}\\ ×& 6\\ \mathrm{Q} \mathrm{Q}& \mathrm{Q}\end{array}
+\begin{array}{ccc}1& B& A\\ A& B& 2\\ 5& 6& 6\end{array}
Here, the sum of A and 2 is 6.
Thus, A can be 4.
On replacing letter A with its value i.e., 4, the puzzle becomes
+\begin{array}{ccc}1& \mathrm{B}& 4\\ 4& \mathrm{B}& 2\\ 5& 6& 6\end{array}
We can see that the sum of B and B is 6.
Thus, B can be 3
+\begin{array}{llr}1& 3& 4\\ 4& 3& 2\\ 5& 6& 6\end{array}
Hence, the value of A is 4 and the value of B is 3.
\begin{array}{cc} 3& \mathrm{A}\\ ×& 3\\ 1 0& \mathrm{A}\end{array}
Here, the ones digit of A × 3 = A.
Thus, A must be 0 or 5.
If we take A = 0, then the puzzle becomes
\begin{array}{l} 3 0\\ × 3\\ 100\end{array}
But 30 × 3 = 90 ≠ 100
Therefore, we take A = 5.
\begin{array}{l} 3 5\\ × 3\\ 105\end{array}
This gives us the correct product, as given in the puzzle.
Hence, the answer is A = 5.
\begin{array}{cc} \mathrm{P}& \mathrm{Q}\\ ×& 6\\ \mathrm{Q} \mathrm{Q}& \mathrm{Q}\end{array}
In the puzzle, we have 6 × Q = Q at the units place.
Hence, Q may be 0, 2, 4, 6, or 8.
If we take Q = 0, then the puzzle becomes
\begin{array}{ll} \mathrm{P}& 0\\ ×& 6\\ 0 0& 0\end{array}
This can only be possible if P = 0.
Since P and Q are different numbers, this answer is not acceptable.
Let us now take Q = 2.
On doing so, the puzzle becomes
\begin{array}{rr}\mathrm{P}& 2\\ ×& 6\\ 2 2& 2\end{array}
We know that 37 × 6 = 222. However, this is not our answer, because the two-digit number being multiplied must have 2 at its units place, and 37 does not.
\begin{array}{cc} \mathrm{P}& 4\\ ×& 6\\ 4 4& 4\end{array}
We know that 74 × 6 = 444. Hence, the value of P is 7.
\begin{array}{cc} \mathrm{P}& 6\\ ×& 6\\ 6 6& 6\end{array}
However, 111 × 6 = 666
This cannot be our answer because 6 is not at the units place of 111 and 111 is a three-digit number.
\begin{array}{rr}\mathrm{P}& 8\\ ×& 6\\ 8 8& 8\end{array}
We know that 148 × 6 = 888, but this cannot be our answer because 148 is a three-digit number.
Hence, the only solution is Q = 4 and P = 7.
16.5. TESTS OF DIVISIBILITY
Hotel Grand Mansion has 432 rooms and 10 floors. Surbhi, who went to the hotel for the first time with her sister Shweta, was quite amazed by the numbers. She asked Shweta if she could tell whether each of the floors had the same number of rooms. Shweta replied that this is not possible as 432 is not exactly divisible by 10.
Surbhi wanted to check if what Shweta said was true. She quickly divided the numbers as
She found that since the remainder is 2, 432 is not exactly divisible by 10.
Shweta, however, knew that 432 is not divisible by 10 without performing the division: there is a rule to check whether a number is divisible by 10, and applying it saves the time used in division.
The rule to check whether a number is divisible by 10 or not is as follows :
A number is said to be divisible by 10 if the digit at the units place is 0.
For example, 25690 is divisible by 10 as the digit at its units place is 0, whereas 759 is not divisible by 10 as the digit at its units place is 9.
Using this method, one can check the divisibility of a number by 10 without doing the actual division.
In a school, there were 495 students in Grade 8. The students had to be divided among the 5 houses of the school. The school’s principal asked the students whether they will be divided equally among the five houses or will there be some houses with more students than the other houses. Dinesh, the topper of Grade 8, instantly said that the 495 students could be equally divided into 5 houses. While Ashish, Dinesh’s rival, did the following.
Ashish said that when 495 is divided by 5, the remainder is 0. Hence, 495 is divisible by 5.
We can see that although both Ashish and Dinesh gave the same answer, Ashish took more time to give the answer.
Dinesh used the divisibility rule of 5. This rule saved him the time that he would have otherwise used in actual division.
The rule to check whether a number is divisible by 5 or not is as follows :
A number is said to be divisible by 5 if the digit at its units place is 0 or 5.
For example, numbers such as 14365, 256030, 1720, 985, 3685 etc. are divisible by 5 as the digits at the units places of these numbers are either 0 or 5. However, numbers such as 7586, 6523, 458 etc. are not divisible by 5 as the digits at the units places of these numbers are neither 5 nor 0.
We use this method to check the divisibility of a number by 5 without performing the actual division.
Ashok was standing with his friend Nikhil at a bus stop. The first bus that came was of route 432. Ashok looked at it and said to Nikhil that the route number of the bus was divisible by 2. Nikhil wanted to verify what Ashok had said and started calculating in his notebook. This is what he did
Although he verified what his friend had said, Nikhil was amazed at Ashok’s quick calculation. He asked Ashok how he did it.
Ashok said he used a trick to check the divisibility of 432 by 2.
Ashok used the following rule :
A number is said to be divisible by 2 if the digit at its units place is 2, 4, 6, 8, or 0.
For example, numbers such as 4782, 69120, 5736, 9218, 724 etc. are divisible by 2 as the digits at the units places of the numbers are 2, 4, 6, 8, or 0, whereas numbers such as 817, 1221, 5693, 625, 8569 etc. are not divisible by 2 as the digits at the units places of these numbers are not 2, 4, 6, 8, or 0.
Imagine if you had to check if numbers such as 27, 18, or 33 were divisible by 3 and 9. You will obviously check it mentally and give the answer without performing any long division. But what if you are asked to check if the number 123456789 is divisible by 3 and 9? You will take a lot of time calculating the divisibility of the number with both 3 and 9 before you give the answer.
A teacher asked the students in a class to check whether 624 is divisible by 3 and 9. Mayank said that 624 is divisible by 3 but not divisible by 9 by just looking at the number. Shikhar, another student, started performing the divisions.
This is what Shikhar did.
Although Shikhar and Mayank gave the same answer, Mayank took very little time to give the answer, while Shikhar had to perform the division.
Mayank used two rules of divisibility to get the answer. These can be stated as follows :
A number is said to be divisible by 3 if the sum of its digits is divisible by 3.
For example, we can say that 444 is divisible by 3 without performing the division.
Sum of digits = 4 + 4 + 4 = 12
12 is divisible by 3. Hence, 444 is divisible by 3.
Similarly, a number is said to be divisible by 9 if the sum of its digits is divisible by 9. We can say that 234 is divisible by 9 without performing the division.
Sum of digits = 2 + 3 + 4 = 9
9 is divisible by 9. Hence, 234 is divisible by 9.
Let us once again look at the number Mayank and Shikhar were working on.
Sum of the digits of the number 624 = 6 + 2 + 4 = 12
12 is divisible by 3 but not by 9. Hence, 624 is divisible by 3 but not by 9.
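The digit-based rules above can be checked against actual division. A small Python sketch covering the divisors discussed in this section (the function names are ours, for illustration):

```python
def digit_sum(n: int) -> int:
    """Sum of the decimal digits of n."""
    return sum(int(d) for d in str(n))

def divisible_by(n: int, d: int) -> bool:
    """Apply the digit rules from this section for d in {2, 3, 5, 9, 10}."""
    last = n % 10
    if d == 10:
        return last == 0
    if d == 5:
        return last in (0, 5)
    if d == 2:
        return last in (0, 2, 4, 6, 8)
    if d in (3, 9):
        return digit_sum(n) % d == 0
    raise ValueError("no digit rule implemented for this divisor")

# The rules agree with actual division on the numbers used in this section:
for n in (432, 495, 624, 234, 444, 759, 25690):
    for d in (2, 3, 5, 9, 10):
        assert divisible_by(n, d) == (n % d == 0)
```

For 624, `digit_sum` gives 12, which is divisible by 3 but not by 9, matching Mayank's answer.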
|
Since the beginning of this Reinforcement Learning tutorial series, I've covered two different reinforcement learning methods: Value-based methods (Q-learning, Deep Q-learning…) and Policy-based methods (REINFORCE with Policy Gradients).
Both of these methods have considerable drawbacks. That's why, today, I'll try another type of Reinforcement Learning method, which we can call a 'hybrid' method: Actor-Critic. The Actor-Critic algorithm is a Reinforcement Learning agent that combines value optimization and policy optimization approaches. More specifically, it combines the Q-learning and Policy Gradient algorithms. At a high level, the resulting algorithm involves a cycle in which two components share information:
Actor: a PG algorithm that decides on an action to take;
Critic: Q-learning algorithm that critiques the action that the Actor selected, providing feedback on how to adjust. It can take advantage of efficiency tricks in Q-learning, such as memory replay.
The advantage of the Actor-Critic algorithm is that it can solve a broader range of problems than DQN, while it has lower variance in performance relative to REINFORCE. That said, because of the PG algorithm within it, the Actor-Critic is still somewhat sample inefficient.
The problem with Policy Gradients:
In my previous tutorial, we derived policy gradients and implemented the REINFORCE algorithm (also known as Monte Carlo policy gradients). There are, however, some issues with vanilla policy gradients: noisy gradients and high variance.
Recall the policy gradient function:
∆J\left(Q\right)={E}_{\tau }\left[\sum _{t=0}^{T-1}{\nabla }_{Q}\mathrm{log}{\pi }_{Q}\left({a}_{t}, {s}_{t}\right){G}_{t}\right]
The REINFORCE algorithm updates the policy parameters through Monte Carlo updates (i.e., from random samples). This introduces high variability in the log probabilities (log of the policy distribution) and in the cumulative reward values, because individual training trajectories can deviate from each other to a great degree. Consequently, this variability produces noisy gradients, causing unstable learning and/or a policy distribution that skews in a non-optimal direction. Besides the high variance of gradients, another problem with policy gradients occurs when a trajectory has a cumulative reward of 0. The essence of policy gradients is to increase the probabilities of "good" actions and decrease those of "bad" actions in the policy distribution; neither good nor bad actions can be learned when the cumulative reward is 0. Overall, these issues contribute to the instability and slow convergence of vanilla policy gradient methods. One way to reduce variance and increase stability is to subtract a baseline b(s) from the cumulative reward:
∆J\left(Q\right)={E}_{\tau }\left[\sum _{t=0}^{T-1}{\nabla }_{Q}\mathrm{log}{\pi }_{Q}\left({a}_{t}, {s}_{t}\right)\left({G}_{t}-b\left({s}_{t}\right)\right)\right]
Intuitively, subtracting a baseline makes the cumulative reward smaller, which yields smaller gradients and thus smaller, more stable updates.
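The discounted return and the baseline subtraction can be sketched in a few lines of NumPy. This is a simplified illustration, not the tutorial's code: a plain mean baseline stands in for the learned V(s) that Actor-Critic uses.

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """Compute G_t = r_t + gamma * G_{t+1}, working backwards along one trajectory."""
    G = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G

rewards = [0.0, 0.0, 1.0]                    # reward only at the end of the episode
G = discounted_returns(rewards, gamma=0.9)   # ≈ [0.81, 0.9, 1.0]
baseline = G.mean()                          # simple baseline b(s); A2C learns V(s) instead
advantages = G - baseline                    # smaller, centred targets -> less noisy gradients
```

Note that the advantages are centred around zero, so actions that did better than average get positive weight and the rest get negative weight.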
How Actor-Critic works:
Imagine you play a video game with a friend that provides you some feedback. You're the Actor, and your friend is the Critic:
In the beginning, you don't know how to play, so you try some action randomly. The Critic observes your action and provides feedback. Let's first take a look at the vanilla policy gradient again to see how the Actor-Critic architecture comes in (and what it is):
∆J\left(Q\right)={E}_{\tau }\left[\sum _{t=0}^{T-1}{\nabla }_{Q}\mathrm{log}{\pi }_{Q}\left({a}_{t}, {s}_{t}\right){G}_{t}\right]
We can then decompose the expectation into:
∆J\left(Q\right)={E}_{\tau }\left[\sum _{t=0}^{T-1}{\nabla }_{Q}\mathrm{log}{\pi }_{Q}\left({a}_{t}, {s}_{t}\right)\right]{E}_{{r}_{t+1}, {s}_{t+1}, ..., {r}_{\tau }{s}_{\tau }}\left[{G}_{t}\right]
The second expectation term should be familiar; it is the Q value!
{E}_{{r}_{t+1}, {s}_{t+1}, ..., {r}_{\tau }{s}_{\tau }}\left[{G}_{t}\right]=Q\left({s}_{t}, {a}_{t}\right)
Plugging that in, we can rewrite the update equation as such:
∆J\left(Q\right)={E}_{{s}_{0}, {a}_{0}, ..., {s}_{t}, {a}_{t}}\left[\sum _{t=0}^{T-1}{\nabla }_{Q}\mathrm{log}{\pi }_{Q}\left({a}_{t}, {s}_{t}\right)\right]Q\left({s}_{t}, {a}_{t}\right)={E}_{\tau }\left[\sum _{t=0}^{T-1}{\nabla }_{Q}\mathrm{log}{\pi }_{Q}\left({a}_{t}, {s}_{t}\right)\right]Q\left({s}_{t}, {a}_{t}\right)
As we know, the Q value can be learned by parameterizing the Q function with a neural network. This leads us to Actor-Critic Methods, where:
The "Critic" estimates the value function. This could be the action-value (the Q value) or state-value (the V value).
We update both the Actor network and the Critic network at each update step.
Intuitively, the advantage measures how much better it is to take a specific action than the average action at the given state. So, using the value function as the baseline function, we subtract the Q value term by the V value and call the result the advantage value:
A\left({s}_{t}, {a}_{t}\right)=Q\left({s}_{t}, {a}_{t}\right)-V\left({s}_{t}\right)
This is the so-called Advantage Actor-Critic; in code, as you will see, it looks much more straightforward.
Advantage Actor-Critic implementation:
I am building on my previous tutorial's code; following the same principle, we need to add the Critic model. In Policy Gradient, our model looked as follows:
To make it Actor-Critic, we add the 'value' parameter, and we compile not only the Actor model but also the Critic model, with 'mse' loss:
The other important function we change is def replay(self). In Policy Gradient, it looked as follows:
To make it work as an Actor-Critic algorithm, we now predict states with the Critic model to get values, which we subtract from the discounted rewards; this is how we calculate advantages. Instead of training the Actor with discounted rewards, we use advantages, and for the Critic network we use the discounted rewards:
values = self.Critic.predict(states)[:, 0]
That's it; we just needed to change a few lines of code. You can also adapt the 'save' and 'load' model functions accordingly. Here is the complete code:
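Since the full listing is not reproduced here, the essential replay change can be sketched with a stand-in critic. `critic_predict` below is a hypothetical placeholder for `self.Critic.predict(states)[:, 0]`; everything else mirrors the logic described above.

```python
import numpy as np

def critic_predict(states):
    """Hypothetical stand-in for self.Critic.predict(states)[:, 0]: returns V(s) guesses."""
    return np.zeros(len(states))   # an untrained critic predicting 0 everywhere

def a2c_targets(states, discounted_rewards):
    values = critic_predict(states)            # V(s_t) from the Critic
    advantages = discounted_rewards - values   # A(s_t, a_t): the Actor trains on these
    critic_targets = discounted_rewards        # the Critic regresses towards G_t
    return advantages, critic_targets

adv, tgt = a2c_targets([[0.0], [0.0]], np.array([1.0, 0.5]))
```

With an untrained critic the advantages equal the discounted rewards; as the critic learns, the advantages shrink towards zero-centred corrections.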
#agent.test('Pong-v0_A2C_2.5e-05_Actor.h5', '')
Same as in my previous tutorial, I first trained 'PongDeterministic-v4' for 1000 steps; the results are shown in the graph below:
So, from the training results, we can say that the A2C model played Pong relatively smoothly. Although it took a little longer to reach the maximum score, its games were much more stable than with PG, where we had many spikes. Then I thought, OK, let's give our 'Pong-v0' environment a chance:
Now our 'Pong-v0' training graph looks much better than with Policy Gradient: the games are much more stable. Sadly, though, our average score couldn't exceed 11 points per game. Keep in mind that I am using a network with one deep layer; you can experiment with the architecture.
So, in this tutorial, we implemented a hybrid between value-based and policy-based algorithms. But we still face the problem that these models take a long time to train. So in the next tutorial part, I will implement an Asynchronous A2C algorithm: we will run, for example, four environments at once and train the same main model from all of them. In theory, this trains our agent four times faster, but you will see how it looks in practice in the next tutorial part.
|
GHGs - GCAM - IAMC-Documentation
GCAM can be considered as a process model for CO2 emissions and reductions. CO2 emissions change over time as fuel consumption in GCAM endogenously changes. Application of Carbon Capture and Storage (CCS) is explicitly considered as separate technological options for a number of processes, such as electricity generation and fertilizer manufacturing. GCAM, in effect, produces a Marginal Abatement Curve for CO2 as a carbon-price is applied within the model. Documentation for CO2 emissions can be found here.
{\displaystyle E_{t}=A_{t}*F_{t0}*(1-MAC(Cprice_{t}))}
where E_t denotes emissions, A_t the driving activity, F_t0 the base-year emissions factor, MAC the marginal abatement cost curve (as a function of the carbon price), and Cprice_t the carbon price.
Non-CO2 GHG emissions are proportional to the activity except for any reductions in emission intensity due to the MAC curve. As noted above, the MAC curves are assigned to a wide variety of technologies, mapped directly from EPA 2013. Under a carbon policy, emissions are reduced by an amount determined by the MAC curve. Documentation for non CO2 emissions can be found here.
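As a rough numerical illustration of the emissions equation above, here is a sketch with an entirely hypothetical MAC curve (the real MAC curves are mapped from EPA 2013 and are technology-specific):

```python
def emissions(activity, base_emission_factor, abated_fraction):
    """E_t = A_t * F_t0 * (1 - MAC(Cprice_t)), with MAC expressed as an abated fraction."""
    return activity * base_emission_factor * (1.0 - abated_fraction)

def mac(cprice, max_abatement=0.6, halfway_price=50.0):
    """Hypothetical saturating MAC curve: fraction of emissions abated at a carbon price."""
    return max_abatement * cprice / (cprice + halfway_price)

e_no_policy = emissions(100.0, 0.02, mac(0.0))    # no carbon price -> no abatement
e_policy    = emissions(100.0, 0.02, mac(50.0))   # at the halfway price -> 30% abated here
assert e_policy < e_no_policy
```

The shape and parameters of `mac` are illustrative only; the point is that a positive carbon price moves emissions below the no-policy level, which is how the model traces out a marginal abatement curve.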
Most fluorinated gas emissions are linked either to the industrial sector as a whole (e.g., semiconductor-related F-gas emissions are driven by growth in the “industry” sector), or population and GDP (e.g., fire extinguishers). As those drivers change, emissions will change. Additionally, we include abatement options based on EPA MAC curves. Documentation for fluorinated gas emissions can be found here.
Retrieved from "https://www.iamcdocumentation.eu/index.php?title=GHGs_-_GCAM&oldid=14417"
|
Section 55.17 (0CEI): Semistable reduction in genus at least two—The Stacks project
Section 55.17: Semistable reduction in genus at least two (cite)
55.17 Semistable reduction in genus at least two
In this section we prove the semistable reduction theorem (Theorem 55.18.1) for curves of genus $\geq 2$. Fix $g \geq 2$.
Let $R$ be a discrete valuation ring with fraction field $K$. Let $C$ be a smooth projective curve over $K$ with $H^0(C, \mathcal{O}_ C) = K$. Assume the genus of $C$ is $g$. Choose a prime $\ell > 768g$ different from the characteristic of $k$. Choose a finite separable extension $K'/K$ such that $C(K') \not= \emptyset $ and such that $\mathop{\mathrm{Pic}}\nolimits (C_{K'})[\ell ] \cong (\mathbf{Z}/\ell \mathbf{Z})^{\oplus 2g}$. See Algebraic Curves, Lemma 53.17.2. Let $R' \subset K'$ be the integral closure of $R$, see discussion in More on Algebra, Remark 15.111.6. We may replace $R$ by $R'_{\mathfrak m}$ for some maximal ideal $\mathfrak m$ in $R'$ and $C$ by $C_{K'}$. This reduces us to the case discussed in the next paragraph.
In the rest of this section $R$ is a discrete valuation ring with fraction field $K$, $C$ is a smooth projective curve over $K$ with $H^0(C, \mathcal{O}_ C) = K$, with genus $g$, having a $K$-rational point, and with $\mathop{\mathrm{Pic}}\nolimits (C)[\ell ] \cong (\mathbf{Z}/\ell \mathbf{Z})^{\oplus 2g}$ for some prime $\ell \geq 768g$ different from the characteristic of $k$. We will prove that $C$ has semistable reduction.
In the rest of this section we will use without further mention that the conclusions of Lemma 55.11.7 are true.
Let $X$ be a minimal model for $C$, see Proposition 55.8.6. Let $T = (n, m_ i, (a_{ij}), w_ i, g_ i)$ be the numerical type associated to $X$ (Definition 55.11.4). Then $T$ is a minimal numerical type of genus $g$ (Lemma 55.11.5). By Proposition 55.7.4 we have
\[ \dim _{\mathbf{F}_\ell } \mathop{\mathrm{Pic}}\nolimits (T)[\ell ] \leq g_{top} \]
By Lemmas 55.13.3 and 55.13.4 we conclude that there is an embedding
\[ (\mathbf{Z}/\ell \mathbf{Z})^{\oplus 2g - g_{top}} \subset \mathop{\mathrm{Pic}}\nolimits ((X_ k)_{red})[\ell ]. \]
\[ 2g - g_{top} \leq \dim _ k H^1((X_ k)_{red}, \mathcal{O}_{(X_ k)_{red}}) + g_{geom}(X_ k/k) \]
By Lemmas 55.11.8 and 55.11.9 we have
\[ g \geq \dim _ k H^1((X_ k)_{red}, \mathcal{O}_{(X_ k)_{red}}) \geq g_{top} + g_{geom}(X_ k/k) \]
Elementary number theory tells us that the only way these $3$ inequalities can hold is if they are all equalities. Looking at Lemma 55.11.8 we conclude that $m_ i = 1$ for all $i$. Looking at Lemma 55.11.10 we conclude that every irreducible component of $X_ k$ is smooth over $k$.
In particular, since $X_ k$ is the scheme theoretic union of its irreducible components $C_ i$ we see that $X_{\overline{k}}$ is the scheme theoretic union of the $C_{i, \overline{k}}$. Hence $X_{\overline{k}}$ is a reduced connected proper scheme of dimension $1$ over $\overline{k}$ with $\dim _{\overline{k}} H^1(X_{\overline{k}}, \mathcal{O}_{X_{\overline{k}}}) = g$. Also, by Varieties, Lemma 33.30.3 and the above we still have
\[ \dim _{\mathbf{F}_\ell }(\mathop{\mathrm{Pic}}\nolimits (X_{\overline{k}})[\ell ]) \geq 2g - g_{top} = \dim _{\overline{k}} H^1(X_{\overline{k}}, \mathcal{O}_{X_{\overline{k}}}) + g_{geom}(X_{\overline{k}}) \]
By Algebraic Curves, Proposition 53.17.3 we see that $X_{\overline{k}}$ has only multicross singularities. But since $X_ k$ is Gorenstein (Lemma 55.9.2), so is $X_{\overline{k}}$ (Duality for Schemes, Lemma 48.25.1). We conclude $X_{\overline{k}}$ is at-worst-nodal by Algebraic Curves, Lemma 53.16.4. This finishes the proof.
Comment #2948 by Maciek Zdanowicz on October 10, 2017 at 20:54
It seems that the two occurrences of $(\mathbf{Z}/\ell \mathbf{Z})^{\oplus 2}$ in the first paragraphs should be substituted with $(\mathbf{Z}/\ell \mathbf{Z})^{\oplus 2g}$.
|
Scheduling and Timing - MATLAB & Simulink - MathWorks Australia
Timer-Based Interrupt Processing
High-Speed Peripheral Clock
External Interrupt Processing
ADC Interrupt Based Scheduling
Often, developers choose to run the code generated by Embedded Coder® in the context of a timer interrupt. Model blocks run periodically, clocked by a periodic interrupt whose period is tied to the base sample time of the model.
This execution scheduling model is not flexible enough for many systems, especially control and communication systems, which must respond to external events in real time. Such systems require the ability to handle various hardware interrupts in an asynchronous fashion.
Embedded Coder software lets you model and generate code for such systems by creating tasks driven by Hardware Interrupt blocks in addition to the tasks that are left to be handled in the context of the timer interrupt.
For code that runs in the context of the timer interrupt, each iteration of the model solver is run after an interrupt has been posted and serviced by an interrupt service routine (ISR). The code generated for the C2000 processors uses CPU_timer0 by default.
The timer is configured so that the base rate sample time of the model corresponds to the interrupt rate. The timer period and prescaler are calculated and set up to produce the desired rate as follows:
BaseRateSampleTime=\frac{TimerPeriod}{TimerClockSpeed}
The minimum achievable base rate sample time depends on the model complexity. The maximum value depends on the maximum timer period value (2^32 − 1) and the CPU clock speed.
If the blocks in the model inherit their sample time value, and a sample time is not explicitly defined, the default value is 0.2 s.
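The timer-period relationship above can be illustrated with a small calculation. The 150 MHz CPU clock and the 32-bit period limit are taken from this page; the function name and the prescaler handling are ours, for illustration:

```python
def timer_period(sample_time_s, cpu_clock_hz=150e6, prescaler=1):
    """Solve BaseRateSampleTime = TimerPeriod / TimerClockSpeed for the period count."""
    timer_clock_hz = cpu_clock_hz / prescaler
    period = round(sample_time_s * timer_clock_hz)
    if not 1 <= period <= 2**32 - 1:              # 32-bit timer period register
        raise ValueError("sample time not reachable with this clock/prescaler")
    return period

# A 1 ms base rate on a 150 MHz CPU with prescaler 1 needs a period of 150000 counts.
assert timer_period(1e-3) == 150_000
```

With a 150 MHz timer clock, the 32-bit period register caps the base rate sample time at roughly 28.6 s before a larger prescaler is required.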
For more information about timer-based interrupt processing, see C28x-Scheduler Options.
The Event Managers and their general-purpose timers, which drive PWM waveform generation, use the high-speed peripheral clock (HISCLK). By default, this clock is selected in Embedded Coder software. This clock is derived from the system clock (SYSCLKOUT):
HISCLK = [SYSCLKOUT / (high-speed peripheral prescaler)]
The high-speed peripheral prescaler is determined by the HSPCLK bits set in SysCtrl. The default value of HSPCLK is 1, which corresponds to a high-speed peripheral prescaler value of 2.
For example, on the F2812, the HISCLK rate becomes
HISCLK = 150 MHz / 2 = 75 MHz
For code that runs in the context of an external interrupt, the model uses the C28x Hardware Interrupt block. Configure the interrupt operation with Configuration Parameters > Hardware Implementation > Hardware board settings > Target hardware resources > External Interrupt. For more information, see Model Configuration Parameters for Texas Instruments C2000 Processors.
You can set the ADC interrupt as the Simulink model base rate. This means that every periodic event in the model will occur at the rate decided by the ADC interrupt.
In order to set the base rate, the ADC interrupt must be triggered. In general, an ePWM module can be used to trigger the ADC interrupt. An ePWM is a free-running counter that can be configured to trigger the Start of Conversion for the ADC module at a periodic interval. The ADC module can then trigger an interrupt at the end of its conversion.
Other sources for ADC Start of Conversion include external interrupt and software.
In any model with base rate trigger:
No task in the model should have a sample time shorter than that of the base rate trigger.
The sample time of the ADC block should match the base rate of the model.
For more information about ADC Interrupt based scheduling, see C28x-Scheduler Options and Field-Oriented Control of PMSM with Quadrature Encoder Using C2000 Processors.
|
Global Securities Markets - Course Hero
Introduction to Finance/Capital Markets Efficiency/Global Securities Markets
Trading of Global Securities
Prior to the 2000s, global trading of securities was conducted primarily through major stock exchanges around the world. Today, a large portion of securities trades are conducted privately via "off-exchanges" through electronic private networks. Stock exchanges were originally created in the leading "money-centered" cities of the world, such as London and New York, where institutional and wealthy individual investors were concentrated. Thus, the strength of the local economy and investor interest were two of the most significant factors that drove the need for a stock market in a particular location. However, with private electronic networks now carrying a heavy load of securities trading of bonds and stock, it has become less important to have a local exchange presence. One exception might be China, as these electronic networks do not connect to its Shanghai Stock Exchange because of government restrictions. Therefore, the Chinese government built a connecting network with the Hong Kong Stock Exchange, which is electronically connected to leading global trading centers, in order to permit the trading of Chinese-based public companies via that exchange.
Sophisticated individual investors, as well as institutional investors, also seek to trade in countries that provide strong laws protecting investors’ rights. Thus, global investors are drawn to the capital markets of the United States, Japan, the United Kingdom, and the European Union because of their strong investor protection laws. Developing countries often have stock exchanges as well, but they frequently lack strong investor protection laws. Another factor is the limited trading volume found in developing countries' stock exchanges, which forces them to focus more on domestic companies' capital needs. For large, growing global companies with significant capital funding needs located in these developing regions, the Eurodollar market and U.S. capital markets present the only alternatives for raising large sums via the sale of their bonds or stocks to global investors. In summary, strong local investor protections coupled with highly liquid local exchange markets form the ideal foundation for a successful securities marketplace.
The two largest stock markets in the world are located within a few miles of each other in New York City. They are the New York Stock Exchange (NYSE) and the National Association of Securities Dealers Automated Quotations (Nasdaq). There are a couple of reasons for this. The first reason is that New York has been a money-centered city for more than 200 years. The second reason is that the Nasdaq focused on entrepreneurial technology companies that were more prevalent in the United States beginning in the 1980s.
In Asia the major stock exchanges are located in Japan, China, India, South Korea, and Taiwan. The major European stock exchanges are located in England, Italy, Germany, Switzerland, Spain, Ireland, and France. Other major global stock exchanges are located in Australia, South Africa, and Brazil.
Most global stock exchanges trade in foreign securities. The NYSE and the Nasdaq, for example, collectively trade many non-U.S. foreign stocks; such trades are allowed if the shares are registered with the U.S. Securities and Exchange Commission (SEC). Some firms choose to tap into overseas securities markets in addition to their domestic financial markets to gain access to new sources of capital for the company and to help facilitate the financing of overseas assets. More importantly, the low cost of raising debt, for example, has attracted U.S. businesses to Japan over the last twenty years due to lower interest rates. Some countries also have investors who are more likely to buy bonds rather than stocks because of conservative investment practices and because there are significant pension systems that limit investing aggressively for long-term retirement needs. Companies in such countries often seek to be traded on foreign exchanges where consumers are more likely to purchase stocks.
Major Global Securities Exchanges
NORTH AMERICAN SECURITIES EXCHANGES
New York Stock Exchange (NYSE) United States New York
National Association of Securities Dealers Automated Quotations (Nasdaq) United States New York
TMX Group Canada Toronto
ASIAN SECURITIES EXCHANGES
Japan Exchange Group Japan Tokyo
Hong Kong Stock Exchange China Hong Kong
Shanghai Stock Exchange China Shanghai
Shenzhen Stock Exchange China Shenzhen
Bombay Stock Exchange India Mumbai
National Stock Exchange of India India Mumbai
Korea Exchange South Korea Seoul
Taiwan Stock Exchange Taiwan Taipei
EUROPEAN SECURITIES EXCHANGES
Euronext European Union Amsterdam
London Stock Exchange Group United Kingdom, Italy London
Deutsche Börse Germany Frankfurt
SIX Swiss Exchange Switzerland Zurich
Nasdaq Nordic Armenia, Nordic and Baltic countries Stockholm
BME Spanish Exchanges Spain Madrid
OTHER GLOBAL MAJOR SECURITIES EXCHANGES
Australian Securities Exchange Australia Sydney
JSE Limited South Africa Johannesburg
B3 Brazil São Paulo
Numerous stock exchanges around the world trade securities, stocks, and bonds. Some of the largest include the New York Stock Exchange, the Japan Exchange Group, and Euronext.
How Securities Markets Impact the Economy
Changes in the valuation of stocks and bonds in securities markets can profoundly affect the economy of the nation in which the securities are traded, as well as global economic conditions. This is because a healthy economy depends in part on consumer and business confidence in prevailing and future economic conditions. For example, it was the U.S. stock market crash in 1929 that triggered the Great Depression, the greatest economic collapse in the history of the modern industrialized world. The Great Depression started in late 1929 and continued throughout the 1930s.
The collapse of the securities markets in the United States and the onset of the Great Depression had a significant economic impact on the rest of the country. Unemployment soared to record highs as many companies went out of business. People who had assets began to hoard them, not trusting banks to hold cash and not investing in stocks or bonds. The cyclical negative impact led to food and resource shortages, reduced tax revenues for the government to operate with, and overall fiscal decline. Negative news and events in securities markets can have a cumulative impact: investors become less confident about future financial security, so they are less willing to invest and demand greater returns for any investments they do make. This siphoning of money out of the economy can become self-reinforcing, in which securities market declines lead to bad news, which in turn leads to further market declines.
The opposite can also happen, in which a healthy economy leads to higher levels of consumer confidence and greater investment in corporate securities. Corporations use this new money to expand their offerings and raise pay for their current workforce. Their employees, feeling more secure at work, spend more and strengthen the consumer marketplace, which encourages other companies to expand and increase hiring. Overall, what people believe about their short- and long-term economic prospects heavily influences what they think about the securities markets and their willingness to invest at any given time.
The capital asset pricing model (CAPM) is based on the economic theory that postulates that the expected return of a securities asset depends on its level of systematic risk. The model provides a method to measure the amount of risk for a given individual security traded in the marketplace. Essentially the CAPM is used in finance for pricing risky securities and generating expected returns for assets given the risk of those assets and cost of capital. Thus, an investor can use the CAPM to determine the expected return on a security relative to the risk-free return plus the risk premium of that security. The risk-free investment rate is what a long-term government bond will pay in that securities market.
In building individual investment portfolios, investors can use the CAPM to predict investment risk and price volatility in any given security. Price volatility is risk related to the size of changes in a security's value. This ensures that the security portfolio created is commensurate with the level of risk the investor wants to take.
The CAPM formula calculates the return an investor can expect to receive from an investment in a corporate security in comparison to the risk-free return that could be received from a safe asset class, such as a government bond. The formula is made up of the return, the risk-free rate, the beta, and the risk premium. The return is what the investor expects the asset to generate. The risk-free rate of interest is the rate that can be earned in a market by investing in a safe asset class, like government bonds. The beta (\beta) is the sensitivity of the expected asset return in relation to the market return.
{\rm E}({\rm R}_i) = {\rm R}_f + \beta_i\,({\rm E}({\rm R}_m) - {\rm R}_f)
\begin{array}{rcl}{\rm E}({\rm R}_i)&=&\text{the return the investor expects the asset to generate}\\{\rm R}_f&=&\text{the risk-free rate of interest that can be earned in that market}\\&&\text{by investing in a safe asset class like government bonds}\\\beta_i&=&\text{the beta, or sensitivity of the expected asset return in relation to the market return}\\{\rm E}({\rm R}_m)-{\rm R}_f&=&\text{the risk premium}\end{array}
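The formula above is simple enough to compute directly. The following sketch is illustrative; the function name and the sample rates are my own choices, not values from the text:

```python
def capm_expected_return(risk_free, beta, market_return):
    """E(R_i) = R_f + beta_i * (E(R_m) - R_f)."""
    return risk_free + beta * (market_return - risk_free)

# e.g. a 3% risk-free rate, a beta of 1.2, and a 10% expected market return
expected = capm_expected_return(0.03, 1.2, 0.10)
print(round(expected, 3))  # 0.114, i.e. an 11.4% expected return
```

A beta above 1 amplifies the risk premium, so the stock must be expected to return more than the market to compensate for its extra systematic risk.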
The CAPM considers three factors. For example, if Widget Inc. issues stock, investors will first want to consider the time value of money. The time value of money is the reward that investors could expect if they purchase stock in Widget Inc. instead of investing in a risk-free asset, such as U.S. government savings bonds. Second, investors will look at the investment class and type of stock that Widget Inc. is offering. Investors will consider the systematic risk of the stock as well as the expected return a portfolio should generate when investing in the same type of stock class, or level of voting rights of shareholders, that Widget Inc. is offering. After all, investors will want to know whether Widget Inc. stock is safer than simply investing in a fund of stocks similar to Widget Inc. in size and business field, or whether Widget Inc. is likely to produce greater returns than its peer stocks. Finally, those considering an investment in Widget Inc. will evaluate the stock by looking at whether its expected return is above or below the security market line. The security market line (SML) is a positively sloping straight line that shows the relationship between the expected return of an asset class and beta. Beta is the amount of systematic risk, or overall market risk, an asset or portfolio has with respect to the market.
When making investment decisions, an investor will want to weigh the expected rate of return on an asset and its risk against the expected return on a risk-free asset, such as a long-term government bond.
For investors, some securities produce returns that are more sensitive to system-wide market risk than others. For these investments, the gains will correlate more consistently with the returns of the other stocks in the investment portfolio and will, therefore, more heavily influence the portfolio's overall risk. This is because the price of such a security goes up and down with similar stocks in the portfolio rather than being driven by factors unique to that company. Factors unique to a particular company may be the death of a CEO, which may cause share value to go down, or the success of a new product, which may cause share value to go up. If a commodity, such as a natural resource or an agricultural product, moves closely with other stocks in a portfolio, any dramatic swings it has will be driven more by the stock market as a whole in that industry than by that commodity's individual influencers.
|
numtheory(deprecated)/phi - Maple Help
inverse of totient function
invphi(n)
Important: The numtheory package has been deprecated. Use the superseding commands NumberTheory[Totient] and NumberTheory[InverseTotient] instead.
The phi(n) calling sequence computes Euler's totient function of n, which is the number of positive integers not exceeding n and relatively prime to n.
The invphi(n) calling sequence returns a list of increasing integers [m1, m2, ..., mk] such that phi(mi) = n for i from 1 to k.
These functions are part of the numtheory package, and so can be used in the form phi(..) only after performing the command with(numtheory) or with(numtheory,phi) (and similarly for invphi). The functions can always be accessed in the long form numtheory[phi](..) or numtheory[invphi](..).
\mathrm{with}\left(\mathrm{numtheory}\right):
\mathrm{\phi }\left(6\right)
\textcolor[rgb]{0,0,1}{2}
\mathrm{invphi}\left(2\right)
[\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{6}]
\mathrm{\phi }\left(15\right)
\textcolor[rgb]{0,0,1}{8}
\mathrm{invphi}\left(8\right)
[\textcolor[rgb]{0,0,1}{15}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{16}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{20}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{24}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{30}]
\mathrm{map}\left(\mathrm{\phi },[15,16,20,24,30]\right)
[\textcolor[rgb]{0,0,1}{8}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{8}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{8}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{8}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{8}]
\mathrm{invphi}\left(15\right)
[]
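For readers without Maple, both functions can be mimicked with a brute-force sketch in Python. The search bound 2n² in `invphi` is my own assumption, justified by the inequality φ(m) ≥ √(m/2), which holds for every positive integer m:

```python
from math import gcd

def phi(n):
    """Euler's totient: the number of integers in 1..n coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def invphi(n):
    """All m with phi(m) == n, in increasing order.
    Search bound: phi(m) >= sqrt(m/2), so phi(m) == n implies m <= 2*n*n."""
    return [m for m in range(1, 2 * n * n + 1) if phi(m) == n]

print(phi(6))       # 2
print(invphi(2))    # [3, 4, 6]
print(phi(15))      # 8
print(invphi(8))    # [15, 16, 20, 24, 30]
print(invphi(15))   # [] -- phi(m) is even for every m > 2
```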
numtheory(deprecated)[lambda]
|
Generic hydraulic variable orifice - MATLAB
Generic hydraulic variable orifice
The block represents a variable orifice of any type as a data-sheet-based model. Depending on data listed in the manufacturer's catalogs or data sheets for your particular orifice, you can choose one of the following model parameterization options:
By maximum area and opening — Use this option if the data sheet provides only the orifice maximum area and the control member maximum stroke.
By area vs. opening table — Use this option if the catalog or data sheet provides a table of the orifice passage area based on the control member displacement A=A(h).
By pressure-flow characteristic — Use this option if the catalog or data sheet provides a two-dimensional table of the pressure-flow characteristics q=q(p,h).
In the first case, the passage area is assumed to be linearly dependent on the control member displacement, that is, the orifice is assumed to be closed at the initial position of the control member (zero displacement), and the maximum opening takes place at the maximum displacement. In the second case, the passage area is determined by one-dimensional interpolation from the table A=A(h). In both cases, a small leakage area is assumed to exist even after the orifice is completely closed. Physically, it represents a possible clearance in the closed valve, but the main purpose of the parameter is to maintain numerical integrity of the circuit by preventing a portion of the system from getting isolated after the valve is completely closed. An isolated or “hanging” part of the system could affect computational efficiency and even cause failure of computation.
In the first and second cases, the flow rate is computed according to the following equations:
q={C}_{D}\cdot A\left(h\right)\sqrt{\frac{2}{\rho }}\frac{\Delta p}{{\left(\Delta {p}^{2}+{p}_{\text{Cr}}^{2}\right)}^{1/4}},
\Delta p={p}_{\text{A}}-{p}_{\text{B}},
h={x}_{0}+x\cdot \mathit{or}
For the first parameterization, the opening area (A) is a piecewise function of the control member position (h). The area saturates at its leakage value when the control member is in the fully closed position (hmin). It saturates at its maximum value when the control member is in the fully open position (hmax).
A\left(h\right)=\left\{\begin{array}{ll}{A}_{\text{leak}},\hfill & h\le {h}_{\text{min}}\hfill \\ \frac{{A}_{\text{max}}}{{h}_{\text{max}}}h,\hfill & {h}_{\text{min}}<h<{h}_{\text{max}}\hfill \\ {A}_{\text{max}},\hfill & h\ge {h}_{\text{max}}\hfill \end{array}\right.
The minimum control member position is calculated as:
{h}_{\text{min}}=\frac{{h}_{\text{max}}}{{A}_{\text{max}}}{A}_{\text{leak}}
For the second parameterization, the opening area is a tabulated function of control member displacement. As with the linear parameterization, the area saturates at its leakage value when the control member is in the fully closed position, and it saturates at its maximum value when the control member is in the fully open position. Between the closed and open positions:
A=f\left(h\right)
The table summarizes the parameters used in the equations.
Amax: Orifice maximum area
hmax: Control member maximum displacement
x: Control member displacement from initial position
or: Orifice orientation indicator. The variable assumes the value +1 if control member displacement in the globally assigned positive direction opens the orifice, and –1 if positive motion decreases the opening.
{p}_{cr}=\frac{\rho }{2}{\left(\frac{{\mathrm{Re}}_{cr}\cdot \nu }{{C}_{D}\cdot {D}_{H}}\right)}^{2}
{D}_{H}=\sqrt{\frac{4A}{\pi }}
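For the linear (first) parameterization, the equations for A(h), D_H, p_cr, and q above can be combined into a short numerical sketch. The function name is mine; the area, opening, and leakage defaults mirror the block's documented parameter defaults, while the values for C_D, ρ, ν, and Re_cr are illustrative assumptions rather than block defaults:

```python
import math

def variable_orifice_flow(p_a, p_b, h,
                          a_max=5e-5, h_max=5e-4, a_leak=1e-12,
                          c_d=0.7, rho=850.0, nu=1.8e-5, re_cr=12.0):
    """Flow rate q through a linearly parameterized variable orifice."""
    # Piecewise opening area: saturates at A_leak (closed) and A_max (open)
    a = min(max(a_max / h_max * h, a_leak), a_max)
    # Hydraulic diameter and the critical pressure separating the
    # laminar and turbulent regimes
    d_h = math.sqrt(4 * a / math.pi)
    p_cr = rho / 2 * (re_cr * nu / (c_d * d_h)) ** 2
    dp = p_a - p_b
    return c_d * a * math.sqrt(2 / rho) * dp / (dp**2 + p_cr**2) ** 0.25
```

Note that the formula is odd in Δp, so reversing the pressure differential reverses the flow, and even at h = 0 the leakage area keeps the flow rate nonzero, preserving the numerical integrity discussed above.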
In the third case, when an orifice is defined by its pressure-flow characteristics, the flow rate is determined by two-dimensional interpolation. In this case, neither flow regime nor leakage flow rate is taken into account, because these features are assumed to be introduced through the tabulated data. Pressure-flow characteristics are specified with three data sets: array of orifice openings, array of pressure differentials across the orifice, and matrix of flow rate values. Each value of a flow rate corresponds to a specific combination of an opening and pressure differential.
\Delta p={p}_{\text{A}}-{p}_{\text{B}},
Positive signal at the physical signal port S opens or closes the orifice, depending on the value of the orifice orientation indicator.
For orifices specified by pressure-flow characteristics (the third parameterization option), the model does not explicitly account for the flow regime or leakage flow rate, because the tabulated data is assumed to account for these characteristics.
Select one of the following methods for specifying the orifice:
By maximum area and opening — Provide values for the maximum orifice area and the maximum orifice opening. The passage area is linearly dependent on the control member displacement, that is, the orifice is closed at the initial position of the control member (zero displacement), and the maximum opening takes place at the maximum displacement. This is the default method.
By area vs. opening table — Provide tabulated data of orifice openings and corresponding orifice areas. The passage area is determined by one-dimensional table lookup. You have a choice of two interpolation methods and two extrapolation methods.
By pressure-flow characteristic — Provide tabulated data of orifice openings, pressure differentials, and corresponding flow rates. The flow rate is determined by two-dimensional table lookup. You have a choice of two interpolation methods and two extrapolation methods.
Orifice maximum area
Specify the area of a fully opened orifice. The parameter value must be greater than zero. The default value is 5e-5 m^2. This parameter is used if Model parameterization is set to By maximum area and opening.
Orifice maximum opening
Specify the maximum displacement of the control member. The parameter value must be greater than zero. The default value is 5e-4 m. This parameter is used if Model parameterization is set to By maximum area and opening.
Orifice opening vector, s
Specify the vector of input values for orifice openings as a one-dimensional array. The input values vector must be strictly increasing. The values can be nonuniformly spaced. The minimum number of values depends on the interpolation method: you must provide at least two values for linear interpolation, at least three values for smooth interpolation. The default values, in meters, are [-0.002 0 0.002 0.005 0.015]. If Model parameterization is set to By area vs. opening table, the Tabulated orifice openings values will be used together with Tabulated orifice area values for one-dimensional table lookup. If Model parameterization is set to By pressure-flow characteristic, the Tabulated orifice openings values will be used together with Tabulated pressure differentials and Tabulated flow rates for two-dimensional table lookup.
Specify the vector of orifice areas as a one-dimensional array. The vector must be of the same size as the orifice openings vector. All the values must be positive. The default values, in m^2, are [1e-09 2.0352e-07 4.0736e-05 0.00011438 0.00034356]. This parameter is used if Model parameterization is set to By area vs. opening table.
Pressure differential vector, dp
Specify the pressure differential vector as a one-dimensional array. The vector must be strictly increasing. The values can be nonuniformly spaced. The minimum number of values depends on the interpolation method: you must provide at least two values for linear interpolation, at least three values for smooth interpolation. The default values, in Pa, are [-1e+07 -5e+06 -2e+06 2e+06 5e+06 1e+07]. This parameter is used if Model parameterization is set to By pressure-flow characteristic.
Specify the flow rates as an m-by-n matrix, where m is the number of orifice openings and n is the number of pressure differentials. Each value in the matrix specifies flow rate taking place at a specific combination of orifice opening and pressure differential. The matrix size must match the dimensions defined by the input vectors. The default values, in m^3/s, are:
[-1e-07 -7.0711e-08 -4.4721e-08 4.4721e-08 7.0711e-08 1e-07;
-2.0352e-05 -1.4391e-05 -9.1017e-06 9.1017e-06 1.4391e-05 2.0352e-05;
-0.0040736 -0.0028805 -0.0018218 0.0018218 0.0028805 0.0040736;
-0.011438 -0.0080879 -0.0051152 0.0051152 0.0080879 0.011438;
-0.034356 -0.024293 -0.015364 0.015364 0.024293 0.034356;]
This parameter is used if Model parameterization is set to By pressure-flow characteristic.
Smooth — Select this option to produce a continuous curve (By area vs. opening table) or surface (By pressure-flow characteristic) with continuous first-order derivatives.
For more information on interpolation algorithms, see the PS Lookup Table (1D) and PS Lookup Table (2D) block reference pages.
Linear — Select this option to produce a curve or surface with continuous first-order derivatives in the extrapolation region and at the boundary with the interpolation region.
For more information on extrapolation algorithms, see the PS Lookup Table (1D) and PS Lookup Table (2D) block reference pages.
The total area of possible leaks in the completely closed orifice. The main purpose of the parameter is to maintain numerical integrity of the circuit by preventing a portion of the system from getting isolated after the valve is completely closed. The parameter value must be greater than 0. The default value is 1e-12 m^2.
Flow rate specified as the fluid volume per unit time at time zero. The value entered serves as a guide during model compilation. The actual flow rate can differ if needed to satisfy all model constraints. Set the Priority level to High to prioritize the specified value during model assembly.
Pressure change from port A to port B at time zero. The value entered serves as a guide during model compilation. The actual pressure can differ if needed to satisfy all model constraints. Set the Priority level to High to prioritize the specified value during model assembly.
The Hydraulic Flapper-Nozzle Amplifier example illustrates the use of the Variable Orifice block in hydraulic systems.
Annular Orifice | Constant Area Hydraulic Orifice | Fixed Orifice | Orifice with Variable Area Round Holes | Orifice with Variable Area Slot | PS Lookup Table (1D) | PS Lookup Table (2D) | Variable Area Hydraulic Orifice
|
Stephen Cole Kleene — Wikipedia Republished // WIKI 2
Kleene–Mostowski hierarchy
Kleene–Rosser paradox
Kleene star
Kleene's algorithm
Kleene's theorem
Regular expressions
Kleene's smn theorem
Leroy P. Steele Prize (1983)
Robert Constable
Joan Moschovakis
Yiannis Moschovakis
Nels David Nelson
Dick de Jongh
Stephen Cole Kleene (/ˈkleɪni/ KLAY-nee;[a] January 5, 1909 – January 25, 1994) was an American mathematician. One of the students of Alonzo Church, Kleene, along with Rózsa Péter, Alan Turing, Emil Post, and others, is best known as a founder of the branch of mathematical logic known as recursion theory, which subsequently helped to provide the foundations of theoretical computer science. Kleene's work grounds the study of computable functions. A number of mathematical concepts are named after him: Kleene hierarchy, Kleene algebra, the Kleene star (Kleene closure), Kleene's recursion theorem and the Kleene fixed-point theorem. He also invented regular expressions in 1951 to describe McCulloch-Pitts neural networks, and made significant contributions to the foundations of mathematical intuitionism.
Kleene was awarded a bachelor's degree from Amherst College in 1930. He was awarded a Ph.D. in mathematics from Princeton University in 1934, where his thesis, entitled A Theory of Positive Integers in Formal Logic, was supervised by Alonzo Church. In the 1930s, he did important work on Church's lambda calculus. In 1935, he joined the mathematics department at the University of Wisconsin–Madison, where he spent nearly all of his career. After two years as an instructor, he was appointed assistant professor in 1937.
While a visiting scholar at the Institute for Advanced Study in Princeton, 1939–1940, he laid the foundation for recursion theory, an area that would be his lifelong research interest. In 1941, he returned to Amherst College, where he spent one year as an associate professor of mathematics.
During World War II, Kleene was a lieutenant commander in the United States Navy. He was an instructor of navigation at the U.S. Naval Reserve's Midshipmen's School in New York, and then a project director at the Naval Research Laboratory in Washington, D.C.
Kleene's teaching at Wisconsin resulted in three texts in mathematical logic, Kleene (1952, 1967) and Kleene and Vesley (1965). The first two are often cited and still in print. Kleene (1952) wrote alternative proofs of Gödel's incompleteness theorems that enhanced their canonical status and made them easier to teach and understand. Kleene and Vesley (1965) is the classic American introduction to intuitionistic logic and mathematics.
[...] recursive function theory is of central importance in computer science. Kleene is responsible for many of the fundamental results in the area, including the Kleene normal form theorem (1936), the Kleene recursion theorem (1938), the development of the arithmetical and hyper-arithmetical hierarchies in the 1940s and 1950s, the Kleene-Post theory of degrees of unsolvability (1954), and higher-type recursion theory, which he began in the late 1950s and returned to in the late 1970s. [...] Beginning in the late 1940s, Kleene also worked in a second area, Brouwer's intuitionism. Using tools from recursion theory, he introduced recursive realizability, an important technique for interpreting intuitionistic statements. In the summer of 1951 at the Rand Corporation, he produced a major breakthrough in a third area when he gave an important characterization of events accepted by a finite automaton.[4]
Kleene served as president of the Association for Symbolic Logic, 1956–1958, and of the International Union of History and Philosophy of Science,[5] 1961. The importance of Kleene's work led to Daniel Dennett coining the saying, published in 1978, that "Kleeneness is next to Gödelness."[6] In 1990, he was awarded the National Medal of Science.
At each conference of the Symposium on Logic in Computer Science the Kleene award, in honour of Stephen Cole Kleene, is given for the best student paper.[7]
1935. Stephen Cole Kleene (Jan 1935). "A Theory of Positive Integers in Formal Logic. Part I". American Journal of Mathematics. 57 (1): 153–173. doi:10.2307/2372027. JSTOR 2372027.
1935. Stephen Cole Kleene (Apr 1935). "A Theory of Positive Integers in Formal Logic. Part II". American Journal of Mathematics. 57 (2): 219–244. doi:10.2307/2371199. JSTOR 2371199.
1936. "General recursive functions of natural numbers". Mathematische Annalen (112): 727–742. 1936.
1936. "λ-definability and recursiveness". Duke Mathematical Journal. 2 (2): 340–352. 1936.
1938. "On Notations for Ordinal Numbers" (PDF). Journal of Symbolic Logic. 3 (4): 150–155. 1938. doi:10.2307/2267778. JSTOR 2267778.
1943. "Recursive predicates and quantifiers". Transactions of the American Mathematical Society. 53 (1): 41–73. Jan 1943. doi:10.1090/S0002-9947-1943-0007371-8.
1951. Kleene, Stephen Cole (15 December 1951). "Representation of Events in Nerve Nets and Finite Automata" (PDF). U. S. Air Force Project Rand Research Memorandum. No. RM-704. The RAND Corporation.
1952. Introduction to Metamathematics. New York: Van Nostrand. (Ishi Press: 2009 reprint).[8]
1956. Kleene, Stephen Cole (1956). Shannon, Claude; McCarthy, John (eds.). Representation of Events in Nerve Nets and Finite Automata. Automata Studies. OCLC 564148.
1967. Mathematical Logic. John Wiley & Sons. Dover reprint, 2002. ISBN 0-486-42533-9.
1981. "Origins of Recursive Function Theory" in Annals of the History of Computing 3, No. 1.
1987. "Reflections on Church's thesis". Notre Dame Journal of Formal Logic. 28 (4): 490–498. Oct 1987. doi:10.1305/ndjfl/1093637645.
Kleene–Brouwer order
Kleene's O
Kleene's T predicate
^ Pace, Eric (January 27, 1994). "Stephen C. Kleene Is Dead at 85; Was Leader in Computer Science". The New York Times.
^ In Entry "Stephen Kleene" at Free Online Dictionary of Computing.
^ "S. C. Kleene". Retrieved February 8, 2021.
^ Keisler, H. Jerome (September 1994). "Stephen Cole Kleene 1909–1994". Notices of the AMS. 41 (7): 792.
^ IUHPS website; also known as "International Union of the History and the Philosophy of Science". A member of ICSU, the International Council for Science (formerly named International Council of Scientific Unions).
^ "LICS – Archive". lics.siglog.org.
^ Bishop, Errett (1965). "Review: The foundations of intuitionistic mathematics, by Stephen Cole Kleene and Richard Eugene Vesley" (PDF). Bulletin of the American Mathematical Society. 71 (6): 850–852. doi:10.1090/s0002-9904-1965-11412-4.
O'Connor, John J.; Robertson, Edmund F., "Stephen Cole Kleene", MacTutor History of Mathematics archive, University of St Andrews
Biographical memoir – by Saunders Mac Lane
Kleene bibliography
"The Princeton Mathematics Community in the 1930s: Transcript Number 23 (PMC23): Stephen C. Kleene and J. Barkley Rosser". Archived from the original on 10 March 2015. – Interview with Kleene and John Barkley Rosser about their experiences at Princeton
Stephen Cole Kleene at DBLP Bibliography Server
|
Section 59.30 (06VW): Points in other topologies—The Stacks project
59.30 Points in other topologies
In this section we briefly discuss the existence of points for some sites other than the étale site of a scheme. We refer to Sites, Section 7.38 and Topologies, Section 34.2 ff for the terminology used in this section. All of the geometric sites have enough points.
Lemma 59.30.1. Let $S$ be a scheme. All of the following sites have enough points $S_{Zar}$, $S_{\acute{e}tale}$, $(\mathit{Sch}/S)_{Zar}$, $(\textit{Aff}/S)_{Zar}$, $(\mathit{Sch}/S)_{\acute{e}tale}$, $(\textit{Aff}/S)_{\acute{e}tale}$, $(\mathit{Sch}/S)_{smooth}$, $(\textit{Aff}/S)_{smooth}$, $(\mathit{Sch}/S)_{syntomic}$, $(\textit{Aff}/S)_{syntomic}$, $(\mathit{Sch}/S)_{fppf}$, and $(\textit{Aff}/S)_{fppf}$.
Proof. For each of the big sites the associated topos is equivalent to the topos defined by the site $(\textit{Aff}/S)_\tau $, see Topologies, Lemmas 34.3.10, 34.4.11, 34.5.9, 34.6.9, and 34.7.11. The result for the sites $(\textit{Aff}/S)_\tau $ follows immediately from Deligne's result Sites, Lemma 7.39.4.
The result for $S_{Zar}$ is clear. The result for $S_{\acute{e}tale}$ either follows from (the proof of) Theorem 59.29.10 or from Lemma 59.21.2 and Deligne's result applied to $S_{affine, {\acute{e}tale}}$. $\square$
The lemma above guarantees the existence of points, but it doesn't tell us what these points look like. We can explicitly construct some points as follows. Suppose $\overline{s} : \mathop{\mathrm{Spec}}(k) \to S$ is a geometric point with $k$ algebraically closed. Consider the functor
\[ u : (\mathit{Sch}/S)_{fppf} \longrightarrow \textit{Sets}, \quad u(U) = U(k) = \mathop{\mathrm{Mor}}\nolimits _ S(\mathop{\mathrm{Spec}}(k), U). \]
Note that $U \mapsto U(k)$ commutes with finite limits as $S(k) = \{ \overline{s}\} $ and $(U_1 \times _ U U_2)(k) = U_1(k) \times _{U(k)} U_2(k)$. Moreover, if $\{ U_ i \to U\} $ is an fppf covering, then $\coprod U_ i(k) \to U(k)$ is surjective. By Sites, Proposition 7.33.3 we see that $u$ defines a point $p$ of $(\mathit{Sch}/S)_{fppf}$ with stalks
\[ \mathcal{F}_ p = \mathop{\mathrm{colim}}\nolimits _{(U, x)} \mathcal{F}(U) \]
where the colimit is over pairs $U \to S$, $x \in U(k)$ as usual. But... this category has an initial object, namely $(\mathop{\mathrm{Spec}}(k), \text{id})$, hence we see that
\[ \mathcal{F}_ p = \mathcal{F}(\mathop{\mathrm{Spec}}(k)) \]
which isn't terribly interesting! In fact, in general these points won't form a conservative family of points. A more interesting type of point is described in the following remark.
Remark 59.30.2. Let $S = \mathop{\mathrm{Spec}}(A)$ be an affine scheme. Let $(p, u)$ be a point of the site $(\textit{Aff}/S)_{fppf}$, see Sites, Sections 7.32 and 7.33. Let $B = \mathcal{O}_ p$ be the stalk of the structure sheaf at the point $p$. Recall that
\[ B = \mathop{\mathrm{colim}}\nolimits _{(U, x)} \mathcal{O}(U) = \mathop{\mathrm{colim}}\nolimits _{(\mathop{\mathrm{Spec}}(C), x_ C)} C \]
where $x_ C \in u(\mathop{\mathrm{Spec}}(C))$. It can happen that $\mathop{\mathrm{Spec}}(B)$ is an object of $(\textit{Aff}/S)_{fppf}$ and that there is an element $x_ B \in u(\mathop{\mathrm{Spec}}(B))$ mapping to the compatible system $x_ C$. In this case the system of neighbourhoods has an initial object and it follows that $\mathcal{F}_ p = \mathcal{F}(\mathop{\mathrm{Spec}}(B))$ for any sheaf $\mathcal{F}$ on $(\textit{Aff}/S)_{fppf}$. It is straightforward to see that if $\mathcal{F} \mapsto \mathcal{F}(\mathop{\mathrm{Spec}}(B))$ defines a point of $\mathop{\mathit{Sh}}\nolimits ((\textit{Aff}/S)_{fppf})$, then $B$ has to be a local $A$-algebra such that for every faithfully flat, finitely presented ring map $B \to B'$ there is a section $B' \to B$. Conversely, for any such $A$-algebra $B$ the functor $\mathcal{F} \mapsto \mathcal{F}(\mathop{\mathrm{Spec}}(B))$ is the stalk functor of a point. Details omitted. It is not clear what a general point of the site $(\textit{Aff}/S)_{fppf}$ looks like.
|
Mathematics/Calculus - Thalesians Wiki
Revision as of 09:25, 24 December 2020 by Admin (talk | contribs) (Created page with "= Quotes on Calculus = From [http://www.gutenberg.org/ebooks/33283 ''Calculus Made Easy: being a Very Simplest Introduction to those Beautiful Methods of Reckoning which are...")
Quotes on Calculus
From Calculus Made Easy: being a Very Simplest Introduction to those Beautiful Methods of Reckoning which are Generally Called by the Terrifying Names of the Differential Calculus and the Integral Calculus by Silvanus Phillips Thompson (1851–1916):
The preliminary terror, which chokes off most fifth-form boys from even attempting to learn how to calculate, can be abolished once for all by simply stating what is the meaning—in common-sense terms—of the two principal symbols that are used in calculating.
(1) $d$, which merely means "a little bit of." Thus $dx$ means a little bit of $x$; or $du$ means a little bit of $u$. Ordinary mathematicians think it more polite to say "an element of," instead of "a little bit of." Just as you please. But you will find that these little bits (or elements) may be considered to be indefinitely small.
(2) $\int$, which is merely a long $S$, and may be called (if you like) "the sum of." Thus $\int dx$ means the sum of all the little bits of $x$; or $\int dt$ means the sum of all the little bits of $t$. Ordinary mathematicians call this symbol "the integral of." Now any fool can see that if $x$ is considered as made up of a lot of little bits, each of which is called $dx$, then, if you add them all up together, you get the sum of all the $dx$'s (which is the same thing as the whole of $x$). The word "integral" simply means "the whole." If you think of the duration of time for one hour, you may (if you like) think of it as cut up into 3600 little bits called seconds. The whole of the 3600 little bits added up together make one hour.
When you see an expression that begins with this terrifying symbol, you will henceforth know that it is put there merely to give you instructions that you are now to perform the operation (if you can) of totalling up all the little bits that are indicated by the symbols that follow.
|
Hexagonal Pyramid Calculator - Area & Volume
Example: Using the hexagonal pyramid calculator
The hexagonal pyramid calculator is useful if you are looking to find out the volume and surface area of hexagonal pyramids. A pyramid is a 3D shape that has a polygonal base and an apex point that connects with all the vertices of the base. The lines joining the apex point and the base vertices are called lateral edges. Each lateral face of a pyramid is a triangle, and in the case of a regular pyramid, it is an isosceles triangle.
You can find more information on how to find the surface area of a hexagonal pyramid as well as its volume in the article below.
A hexagonal pyramid is a three-dimensional shape that has a hexagonal base and an apex vertex. Each lateral edge joins a vertex of the base to the apex. In addition to this, it has six isosceles triangles as its faces. It has 12 edges and 7 vertices.
The surface area of a hexagonal pyramid has two components:
Lateral surface area, A_l; and
Base surface area, A_b.
The lateral surface area is the sum of the areas of all the lateral faces. A hexagonal pyramid has 6 lateral faces, which are in the shape of isosceles triangles. To find the area of such a triangle, you need:
Length of the base, a; and
Height of the triangle, l.
The height of the triangular face of a pyramid is also known as the slant height, h_s. In terms of the base length a and the pyramid height h, the lateral surface area of a hexagonal pyramid is:
\scriptsize A_l = 3 a \sqrt{h^2 + \frac{3a^2}{4}}
Similarly, the base area, A_b, is:
\scriptsize A_b = \frac {3\sqrt{3}}{2} a^2
The volume of a hexagonal pyramid in terms of the base length and the pyramid height is given by the equation:
V = \frac{\sqrt{3}}{2} a^2 h
Find the surface area and volume of the hexagonal pyramid having base length 4 mm and height 5 mm.
To find the volume and surface area of the hexagonal pyramid:
Enter the base length as 4 mm.
Insert the height as 5 mm.
The hexagonal pyramid calculator will return the following areas and volume:
Face area = 12.166 \text{ mm}^2
Base area = 41.57 \text{ mm}^2
Lateral surface area = 73 \text{ mm}^2
Total surface area = 114.56 \text{ mm}^2
Volume = 69.28 \text{ mm}^3
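The example above can be reproduced with a short Python sketch of the formulas (a minimal, self-contained version; the function name is my own):

```python
import math

def hexagonal_pyramid(a, h):
    """Surface areas and volume of a regular hexagonal pyramid.

    a -- base edge length, h -- pyramid height.
    """
    slant = math.sqrt(h**2 + 3 * a**2 / 4)   # slant height of a face
    face = 0.5 * a * slant                   # area of one triangular face
    lateral = 6 * face                       # A_l = 3a * slant
    base = 3 * math.sqrt(3) / 2 * a**2       # A_b
    volume = math.sqrt(3) / 2 * a**2 * h     # V
    return face, lateral, base, lateral + base, volume

face, lateral, base, total, volume = hexagonal_pyramid(4, 5)
# face ≈ 12.166, lateral ≈ 73, base ≈ 41.57, total ≈ 114.56, volume ≈ 69.28
```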
Similar to the hexagonal pyramid calculator, there are other tools based on pyramids that you can use to learn more cool things about this 3-dimensional shape, such as:
Pyramid volume calculator;
Right square pyramid calculator;
Right rectangular pyramid calculator;
Triangular pyramid volume calculator; and
Height of a square pyramid calculator.
How do I find the base area for a hexagonal pyramid?
To find the base area of a hexagonal pyramid:
Find the square of the base length.
Multiply the result by 3.
Multiply the product by the square root of 3.
Divide by 2 to obtain the base area of the hexagonal pyramid.
What is the surface area of hexagonal pyramid having side length 5 cm and height 7 cm?
The surface area of the hexagonal pyramid is 188.41 sq cm. Of that, 123.46 sq. cm is the lateral surface area, which is calculated as: 3 × 5 × √((0.75×25) + 49) = 123.46 sq. cm., while 64.95 sq. cm is the base area.
This calculator helps you to find the area of crescents and lunes. These shapes consist of two circular arcs and are determined by their radii and the distance between the circle centers.
|
InflationBond instrument object - MATLAB - MathWorks Deutschland
InflationBond
Price Inflation Bond Instrument Using inflationcurve and Inflation Pricer
Price Multiple Inflation Bond Instruments Using inflationcurve and Inflation Pricer
InflationBond instrument object
Create and price an InflationBond instrument object for one or more Inflation Bond instruments using this workflow:
Use fininstrument to create an InflationBond instrument object for one or more Inflation Bond instruments.
Use ratecurve to specify an interest-rate model for the InflationBond instrument object.
Use inflationcurve to specify an inflation curve model for the InflationBond instrument object.
Use finpricer to specify an Inflation pricing method for one or more InflationBond instruments.
Use inflationCashflows to compute cash flows for each one of the InflationBond instruments.
For more information on the available models and pricing methods for an InflationBond instrument, see Choose Instruments, Models, and Pricers.
InflationBond = fininstrument(InstrumentType,'CouponRate',couponrate_value,'Maturity',maturity_date)
InflationBond = fininstrument(___,Name,Value)
InflationBond = fininstrument(InstrumentType,'CouponRate',couponrate_value,'Maturity',maturity_date) creates an InflationBond object for one or more Inflation Bond instruments by specifying InstrumentType and sets the properties for the required name-value pair arguments CouponRate and Maturity.
InflationBond = fininstrument(___,Name,Value) sets optional properties using additional name-value pairs in addition to the required arguments in the previous syntax. For example, InflationBond = fininstrument("InflationBond",'Maturity',Maturity,'CouponRate',CouponRate,'IssueDate',IssueDate) creates an InflationBond object.
string with value "InflationBond" | string array with values of "InflationBond" | character vector with value 'InflationBond' | cell array of character vectors with values of 'InflationBond'
Instrument type, specified as a string with the value of "InflationBond", a character vector with the value of 'InflationBond', an NINST-by-1 string array with values of "InflationBond", or an NINST-by-1 cell array of character vectors with values of 'InflationBond'.
Example: InflationBond = fininstrument("InflationBond",'Maturity',Maturity,'CouponRate',CouponRate,'IssueDate',IssueDate)
Required InflationBond Name-Value Pair Arguments
CouponRate — InflationBond coupon rate
InflationBond coupon rate, specified as the comma-separated pair consisting of 'CouponRate' and a scalar decimal or an NINST-by-1 vector of decimals for an annual rate.
Maturity — InflationBond maturity date
InflationBond maturity date, specified as the comma-separated pair consisting of 'Maturity' and a scalar datetime, serial date number, date character vector, date string or an NINST-by-1 vector of datetimes, serial date numbers, cell array of date character vectors, or date string array.
Optional InflationBond Name-Value Pair Arguments
Principal — Initial principal amount
Initial principal amount, specified as the comma-separated pair consisting of 'Principal' and a scalar numeric or an NINST-by-1 numeric vector.
InflationBondObj = fininstrument("InflationBond",'CouponRate',0.34,'Maturity',datetime(2025,12,15),'Holidays',H)
true (in effect) (default) | scalar logical values of true or false | vector of logical values with true or false
Irregular first coupon date, specified as the comma-separated pair consisting of 'FirstCouponDate' and a scalar datetime, serial date number, date character vector, or date string, or an NINST-by-1 vector of datetimes, serial date numbers, cell array of date character vectors, or date string array.
Irregular last coupon date, specified as the comma-separated pair consisting of 'LastCouponDate' and a scalar datetime, serial date number, date character vector, or date string, or an NINST-by-1 vector of datetimes, serial date numbers, cell array of date character vectors, or date string array.
CouponRate — InflationBond coupon annual rate
InflationBond coupon annual rate, returned as a scalar decimal or an NINST-by-1 vector of decimals.
InflationBond maturity date, returned as a scalar datetime or an NINST-by-1 vector of datetimes.
Initial principal amount, returned as a scalar numeric or an NINST-by-1 numeric vector.
Bond issue date, returned as a datetime or an NINST-by-1 datetime vector.
true (in effect) (default) | scalar logical value of true or false | vector of logicals with value of true or false
End-of-month rule flag for generating dates when Maturity is an end-of-month date for a month having 30 or fewer days, returned as a scalar logical or an NINST-by-1 vector of logicals.
This example shows the workflow to price an InflationBond instrument when you use an inflationcurve and an Inflation pricing method.
InflationBond = fininstrument("InflationBond", 'IssueDate', IssueDate, 'Maturity', Maturity, 'CouponRate', CouponRate,'Name',"inflation_bond_instrument")
[Price, outPR] = price(outPricer, InflationBond)
This example shows the workflow to price multiple InflationBond instruments when you use an inflationcurve and an Inflation pricing method.
An inflation-indexed bond is a security that guarantees a return higher than the rate of inflation if it is held to maturity. Inflation-indexed securities link their capital appreciation, or coupon payments, to inflation rates.
To price an inflation-indexed bond, use an inflation curve and a nominal discount curve (model-free approach), where the cash flows are discounted using the nominal discount curve.
\begin{aligned}
I(0,T)\,P_n(0,T) &= I(0)\,P_r(0,T)\\
B_{\text{TIPS}}(0,T_M) &= \frac{1}{I(T_0)}\left[\sum_{i=1}^{M} c\,I(0)\,P_r(0,T_i) + F\,I(0)\,P_r(0,T_M)\right]\\
&= \frac{1}{I(T_0)}\left[\sum_{i=1}^{M} c\,I(0,T_i)\,P_n(0,T_i) + F\,I(0,T_M)\,P_n(0,T_M)\right]
\end{aligned}
Pn is the nominal zero-coupon bond price.
Pr is the real zero-coupon bond price.
I(0,T) is the breakeven inflation index for period (0,T).
I(0) is the inflation index at (t = 0).
I(T0) is the base inflation index at the issue date (t = T0).
BTIPS(0,TM) is the inflation-indexed bond price.
F is the face value.
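A minimal Python sketch of the model-free pricing formula above (the function name and the example inputs are illustrative and not part of the MATLAB toolbox):

```python
def tips_price(c, F, I0, I_T0, Pr):
    """Inflation-indexed bond price from real zero-coupon bond prices.

    c    -- coupon payment per period
    F    -- face value
    I0   -- inflation index at t = 0
    I_T0 -- base inflation index at the issue date
    Pr   -- real zero-coupon bond prices P_r(0, T_i) for i = 1..M
    """
    coupons = sum(c * I0 * p for p in Pr)   # coupon leg
    redemption = F * I0 * Pr[-1]            # face value repaid at T_M
    return (coupons + redemption) / I_T0

# Two annual coupons of 2 on a face value of 100, with flat real discounting:
price = tips_price(c=2.0, F=100.0, I0=1.0, I_T0=1.0, Pr=[0.98, 0.96])
# price ≈ 99.88
```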
YearYearInflationSwap | ZeroCouponInflationSwap | finpricer
|
This is an introductory tutorial to Reinforcement Learning. To understand everything from the basics, I will start with a simple game called CartPole.
The idea of CartPole is that there is a pole standing up on top of a cart. The goal is to balance this pole by moving the cart from side to side to keep the stick balanced upright.
We consider the environment won if we balance it for 500 frames and fail once the pole is tilted more than 15 degrees from totally vertical or the cart moves more than 2.4 units from the middle position.
For every frame that we go with the pole "balanced" (less than 15 degrees from vertical), our "score" gets +1, and our target is a score of 500.
Now, however, how can we do this? There are countless ways in which we can do this. Some are very complex, and some are very specific to the environment. I chose to demonstrate how deep reinforcement learning (deep Q-learning) can be implemented and applied to play a CartPole game using Keras and Gym. I will try to explain everything without requiring any prerequisite knowledge about reinforcement learning.
Before starting, take a look at this YouTube video with a real-life demonstration of a CartPole problem learning process. Looks impressive, right? Implementing such a self-learning system is easier than you may think. Let's dive in!
To achieve the desired behavior of an agent that learns from its mistakes and improves its performance, we need to get more familiar with the concept of Reinforcement Learning (RL).
RL is a type of machine learning that allows us to create AI agents that learn from the environment by interacting with it to maximize their cumulative reward. In the same way that we learn to ride a bicycle by trial and error, agents in RL algorithms are incentivized with punishments for wrong actions and rewards for good ones.
After each action, the agent receives feedback. The feedback consists of the reward and the next state of the environment. A human usually defines the reward. Using the bicycle analogy, we can define reward as the distance from the original starting point.
Cartpole Game
CartPole is one of the most straightforward environments in OpenAI Gym (a collection of environments to develop and test RL algorithms). CartPole is built on a Markov chain model, which I illustrate below.
Then, for each iteration, an agent takes the current state (St), picks the best (based on model prediction) action (At), and executes it on the environment. Subsequently, the environment returns a reward (Rt+1) for the given action, a new state (St+1), and information on whether the new state is terminal. The process repeats until termination.
The goal of CartPole is to balance a pole connected with one joint on top of a moving cart. An agent can move the cart by performing a series of 0 or 1 actions, pushing it left or right. To simplify our task, instead of reading pixel information, the state gives us four kinds of information: the cart's position and velocity, and the pole's angle and tip velocity.
The gym makes interacting with the game environment really simple:
Here, action can be either 0 or 1. If we pass those numbers, env, which represents the game environment, will emit the results. done is a Boolean value telling whether the game ended or not. The next_state space holds all possible state values:
[Cart Position from -4.8 to 4.8],
[Cart Velocity from -Inf to Inf],
[Pole Angle from -24° to 24°],
[Pole Velocity At Tip from -Inf to Inf]
The old state information paired with action, next_state, and reward is the information we need for training the agent.
So, to understand everything from the basics, let's first create a CartPole environment that our Python script plays randomly:
# This will display the environment
# Only display if you really want to see it.
# Takes much longer to display it.
Learn with Simple Neural Network using Keras
This tutorial is not about deep learning or neural networks, so I will not explain how they work in detail; I'll treat a neural network as a black-box algorithm that approximately maps inputs to outputs. It learns from pairs of input and output examples, detects patterns, and predicts the outcome for unseen input data.
Neural networks are not the focus of this tutorial, but we should understand how it's used to learn in deep Q-learning algorithms.
Keras makes it simple to implement a basic neural network. With the code below, we will create an empty NN model. Activation, loss, and optimizer are the parameters that define the characteristics of the neural network, but we are not going to discuss them here.
# Neural Network model for Deep Q Learning
from keras.models import Model
from keras.layers import Input, Dense
from keras.optimizers import RMSprop

def OurModel(input_shape, action_space):
    X_input = Input(input_shape)
    X = Dense(512, input_shape=input_shape, activation="relu", kernel_initializer='he_uniform')(X_input)
    X = Dense(256, activation="relu", kernel_initializer='he_uniform')(X)
    X = Dense(64, activation="relu", kernel_initializer='he_uniform')(X)
    # Output layer: one linear Q-value per action
    X = Dense(action_space, activation="linear", kernel_initializer='he_uniform')(X)
    model = Model(inputs = X_input, outputs = X, name='CartPole DQN model')
    model.compile(loss="mse", optimizer=RMSprop(lr=0.00025, rho=0.95, epsilon=0.01), metrics=["accuracy"])
    return model
For a NN to understand and predict based on the environment data, we initialize our model (shown in the complete code) and feed the information to it. The model then trains on those data to approximate the output based on the input. Later in the complete code, you will see that the fit() method provides input and output pairs to the model.
In the above model, I used a neural network with three hidden layers of 512, 256, and 64 neurons. Feel free to play with its structure and parameters.
Later in the training process, you will see what makes the NN predict the reward value from a particular state. You will see that in code, I will use model.fit(next_state, reward), same as in the standard Keras NN model.
After training, the model will be able to predict the output from unseen input. When we call the predict() function on the model, it will predict the reward of the current state based on the data we trained on. Like so: prediction = model.predict(next_state)
Implementing Deep Q Network (DQN)
Generally, in games, the reward directly relates to the score of the game. But, imagine a situation where the pole from the CartPole game is tilted to the left. The expected future reward of pushing the left button will then be higher than that of pushing the right button since it could yield a higher score of the game as the pole survives longer.
To logically represent this intuition and train it, we need to express it as a formula that we optimize. The loss is just a value that indicates how far our prediction is from the actual target. For example, the model's prediction could suggest that pushing the left button holds more value, when in fact more reward could be gained by pressing the right button. We want to decrease this gap between the prediction and the target (loss). So, we will define our loss function as follows:
loss=\left(r+\gamma \max_{a'} Q\left(s', a'\right)-Q\left(s, a\right)\right)^{2}
We first carry out an action a and observe the reward r and the resulting new state s'. Based on the result, we calculate the maximum attainable Q-value for the new state and then discount it, so that the future reward is worth less than the immediate reward. Lastly, we add the current reward to the discounted future reward to get the target value. Subtracting our current prediction from the target gives the loss. Squaring this value penalizes large losses more and treats negative values the same as positive ones.
But it's not as difficult as you may think: Keras takes care of most of the hard work for us. We just need to define our target, which we can express in one magical line of Python: target = reward + gamma * np.max(model.predict(next_state))
Keras does all the work of subtracting the target from the NN output and squaring it. It also applies the learning rate that we can define when creating the neural network model (otherwise, a default is used); all this happens inside the fit() function. This function decreases the gap between our prediction and the target by the learning rate. The approximation of the Q-value converges to the true Q-value as we repeat the update: the loss decreases, and the score grows higher.
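To make the arithmetic concrete, here is a tiny pure-Python sketch of the target and loss for one transition (the Q-values are made-up numbers standing in for model.predict() outputs):

```python
gamma = 0.95                 # discount rate
reward = 1.0                 # reward observed after acting
q_next = [0.8, 1.4]          # predicted Q-values for the next state
q_current = 1.2              # current prediction Q(s, a)

target = reward + gamma * max(q_next)   # r + gamma * max_a' Q(s', a')
loss = (target - q_current) ** 2        # squared TD error
# target ≈ 2.33, loss ≈ 1.2769
```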
The most notable features of the DQN algorithm are the "remember" and "replay" methods. Both are simple concepts. The original DQN design contains a lot of tweaks for a better learning process, but we will stick to a simpler version for better understanding.
Implementing "Remember" function
One specific thing about DQN is that the neural network used in the algorithm tends to forget previous experiences as it overwrites them with new ones. So, we need a memory (list) of previous experiences and observations to re-train the model with the earlier experiences. Experience replay is a biologically inspired method that samples experiences from the memory uniformly (which scales back the correlation between sequential actions) and updates the Q-values for every entry. We will call this array of experiences memory and use a remember() function to append state, action, reward, and next state to the memory.
In our example, the memory list will have a form of:
memory = [(state, action, reward, next_state, done)...]
And remember function will store states, actions, and resulting rewards to the memory like:
To make the agent perform well in the long term, we need to consider the immediate rewards and the future rewards we will get. To do this, we will have a discount rate or gamma and ultimately add it to the current state reward. This way, the agent will learn to maximize the discounted future reward based on the given state. In other words, we are updating our Q value with the cumulative discounted future rewards.
done is just a Boolean that indicates if the state is the final state (cartpole failed).
Implementing Replay function
A method that trains NN with experiences in the memory we will call replay() function. First, we will sample some experiences from the memory and call them minibatch. minibatch = random.sample(memory, min(len(memory), batch_size))
The above code will make a minibatch of elements randomly sampled from the full memory, of size batch_size. I will set the batch size to 64 for this example. If the memory size is less than 64, we will take everything that is in our memory.
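A minimal pure-Python sketch of the memory and minibatch sampling (using a deque so the oldest experiences are dropped automatically; the text only specifies a list, so the deque and the dummy transitions are my own choices):

```python
import random
from collections import deque

memory = deque(maxlen=2000)   # oldest experiences fall off the far end

def remember(state, action, reward, next_state, done):
    memory.append((state, action, reward, next_state, done))

# Fill the memory with dummy transitions for illustration.
for t in range(100):
    remember(t, t % 2, 1.0, t + 1, False)

batch_size = 64
minibatch = random.sample(memory, min(len(memory), batch_size))
```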
For those of you who wonder how such a function can converge when it looks like it is trying to predict its own output (in some sense it is), don't worry: it's possible, and in our simple case, it does. However, convergence is not always that easy, and in problems more complex than CartPole there comes a need for more advanced techniques to stabilize training, such as Double DQN or Dueling DQN, but that's a topic for another article (stay tuned).
# Standard - DQN
There are some parameters that have to be passed to a reinforcement learning agent. You will see similar parameters in all DQN models:
EPISODES — number of games we want the agent to play;
Gamma — decay or discount rate, to calculate the future discounted reward;
epsilon — exploration rate is the rate in which an agent randomly decides its action rather than a prediction;
epsilon_decay — we want to decrease the number of explorations as it gets good at playing games;
epsilon_min — we want the agent to explore at least this amount;
learning_rate — determines how much the neural net learns in each iteration (if used);
batch_size — determines how much memory DQN will use to train;
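These parameters come together in the agent's action choice. Here is a pure-Python sketch of epsilon-greedy selection (the predict callable stands in for the Keras model, and the function name is my own):

```python
import random

def act(state, epsilon, predict, action_size=2):
    """Epsilon-greedy: explore with probability epsilon, otherwise exploit."""
    if random.random() <= epsilon:
        return random.randrange(action_size)      # random exploration
    q_values = predict(state)                     # model's Q-value estimates
    return max(range(action_size), key=lambda a: q_values[a])

# With epsilon = 0 the agent always picks the highest-valued action:
best = act(state=None, epsilon=0.0, predict=lambda s: [0.1, 0.9])
# best = 1
```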
Putting It All Together: Coding The Deep Q-Learning Agent
I tried to explain each part of the agent above. In the code below, I'll implement everything we've talked about as a nice and clean class called DQNAgent.
self.epsilon_min = 0.001
# create main model
self.model = OurModel(input_shape=(self.state_size,), action_space = self.action_size)
print("episode: {}/{}, score: {}, e: {:.2}".format(e, self.EPISODES, i, self.epsilon))
print("Saving trained model as cartpole-dqn.h5")
self.save("cartpole-dqn.h5")
self.load("cartpole-dqn.h5")
agent = DQNAgent()
Below is the part of the code responsible for training our DQN model. I will not go deep into a line-by-line explanation because everything is already explained above. In our code, we run 1000 episodes of the game to train. If you don't want to watch the training, you can comment out the line self.env.render(). Every step is rendered here, and while done is equal to False, our model keeps training. We save the results of every step to memory, which we use for training at every step. When our model hits a score of 500, we save it, and we can already use it for testing. But I recommend not stopping training at the first save and giving the model more time to train before testing. It may take up to 100 episodes before it reaches a score of 500. You may ask why it takes so long. The answer is simple: because of the Dropout layer in our model. It might reach 500 much faster without dropout, but then our testing results would be worse. So, here is the code for this short explanation:
For me, the model reached a score of 500 in the 73rd episode; here, my model was saved:
DQN CartPole testing part
So now, when you have trained your model, it's time to test it! Comment agent.run() line and uncomment agent.test(). And check how your first DQN model works!
So here are 20 test episodes of our trained model. As you can see, it hit the maximum score 16 times. It would be interesting to see what maximum score it could reach, but sadly the limit is 500:
And here is a short gif, which shows how our agent performs:
We reached our goal for this task. A short recap of what we did:
Learned how DQN works;
Wrote simple DQN model;
Taught NN model to play CartPole game.
This is the end of this tutorial. I challenge you to try creating your own RL agents! Let me know how they perform in solving the cartpole problem. Furthermore, stay tuned for more future tutorials.
|
Partials whose frequencies are not integer multiples of the fundamental are referred to as inharmonic partials. Some acoustic instruments emit a mix of harmonic and inharmonic partials but still produce an effect on the ear of having a definite fundamental pitch, such as pianos, strings plucked pizzicato, vibraphones, marimbas, and certain pure-sounding bells or chimes. Antique singing bowls are known for producing multiple harmonic partials or multiphonics.[3][4] Other oscillators, such as cymbals, drum heads, and other percussion instruments, naturally produce an abundance of inharmonic partials and do not imply any particular pitch, and therefore cannot be used melodically or harmonically in the same way other instruments can.
Partials, overtones, and harmonics
On stringed instruments
| Harmonic | Sounded note | Interval | Audio frequency (Hz) | Cents above fundamental (offset by octave) |
|---|---|---|---|---|
| 1 | perfect unison | P1 | 600 | 0.0 |
| 2 | first perfect octave | P8 | 1,200 | 0.0 |
| 3 | perfect fifth | P8 + P5 | 1,800 | 702.0 |
| 4 | doubled perfect octave | 2·P8 | 2,400 | 0.0 |
| 5 | just major third | 2·P8 + M3 | 3,000 | 386.3 |
| 6 | perfect fifth | 2·P8 + P5 | 3,600 | 702.0 |
| 7 | harmonic seventh, septimal minor seventh ('the lost chord') | 2·P8 + m7↓ | 4,200 | 968.8 |
| 8 | third perfect octave | 3·P8 | 4,800 | 0.0 |
| 9 | Pythagorean major second | 3·P8 + M2 | 5,400 | 203.9 |
| 10 | just major third | 3·P8 + M3 | 6,000 | 386.3 |
| 11 | undecimal semi-augmented fourth | 3·P8 + a4 | 6,600 | 551.3 |
| 12 | perfect fifth | 3·P8 + P5 | 7,200 | 702.0 |
| 13 | tridecimal neutral sixth | 3·P8 + n6↓ | 7,800 | 840.5 |
| 14 | harmonic seventh ('the lost chord') | 3·P8 + m7⤈ | 8,400 | 968.8 |
| 15 | just major seventh | 3·P8 + M7 | 9,000 | 1,088.3 |
| 16 | fourth perfect octave | 4·P8 | 9,600 | 0.0 |
| 17 | septidecimal semitone | 4·P8 + m2⇟ | 10,200 | 105.0 |
| 18 | Pythagorean major second | 4·P8 + M2 | 10,800 | 203.9 |
| 19 | nanodecimal minor third | 4·P8 + m3 | 11,400 | 297.5 |
| 20 | just major third | 4·P8 + M3 | 12,000 | 386.3 |
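The cents figures can be recomputed from the harmonic number alone. A small Python sketch (octave-reduced cents of the n-th harmonic above the fundamental):

```python
import math

def cents_above_fundamental(n):
    """Cents of the n-th harmonic above the fundamental, offset by octaves."""
    return (1200 * math.log2(n)) % 1200

# e.g. the 3rd harmonic is a perfect fifth, about 702.0 cents
```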
Artificial harmonics
|
How Hasinur Met His Girlfriends | Toph
By Nobel_ruettt · Limits 1s, 512 MB
Hasinur’s love interest is always curious about different strings and their properties. He wants to impress her and needs your help.
You will be given several strings and you will have to find the number of distinct sub-strings of that string.
Input will start with an integer T (1 ≤ T ≤ 100), the number of test cases. In each test case a string will be given. The length of each string will be at most 50. A string will contain only lowercase Latin letters ('a' - 'z').
For each test case, output an integer in a single line—the answer of the corresponding test case according to the problem statement.
A substring is a contiguous sequence of characters within a string. For example, the substrings of the string "abc" are "a", "ab", "abc", "b", "bc", and "c".
Constraints are flexible. So we can do bruteforce for this problem. We can generate all the substrin...
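Following the brute-force hint, a Python sketch that generates all substrings and counts the distinct ones with a set (well within the limits for strings of length at most 50):

```python
def distinct_substrings(s):
    """Count distinct substrings by generating all O(n^2) of them."""
    return len({s[i:j] for i in range(len(s)) for j in range(i + 1, len(s) + 1)})

print(distinct_substrings("abc"))  # 6, the substrings listed in the statement
```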
|
The maternal weight gain in twin pregnancies has its own rules that are a bit different from singleton pregnancies. Let's find out how much weight you should gain every week, how to keep yourself healthy carrying two or multiple babies, and discover the most significant differences in pregnancy weight gain in different types of gestations.
💡 This article is a part of a bigger series, based on our pregnancy weight gain calculator.
Gestational weight gain is a must when it comes to pregnancy — your entire body is changing to support a baby growing inside of you. The amount of weight you gain depends not only on your baby's birth weight or the number of children you're carrying; all parts of your body need to adjust to this new situation:
Your breasts and uterus both grow 2 lb.
Your blood increases its volume by 50%.
Your body accumulates water and starts producing amniotic fluid.
A new organ is created — a 1.5 lb placenta!
If you're pregnant with twins, the situation is a little bit different. If the babies have separate placentas, each one of them is on average 17% smaller. The difference grows to 24% in triplet pregnancies.
A lot of extra fat tissue accumulates, too. It serves as a source of ingredients for pregnancy hormone production.
These values are usually different for singleton and twin pregnancies — women carrying twins and multiple pregnancies are expected to hold even more considerable weight.
As we've already mentioned in multiple articles, the expected weight gain during pregnancy depends on your pre-pregnancy weight and height, computed into BMI (body mass index):
\text{BMI} = \frac{ \text{weight [kg]} }{ (\text{height [m]})^2 }
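The BMI formula above as a short Python sketch (the example figures are illustrative):

```python
def bmi(weight_kg, height_m):
    """Body mass index: weight in kilograms over height in meters squared."""
    return weight_kg / height_m ** 2

# e.g. 68 kg at 1.70 m gives a BMI of about 23.5
```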
As you may expect, these values are slightly higher for women with twin pregnancies. After all, you're carrying two souls under your heart! 👶👶
Here's the chart of weight gain recommendations for twin pregnancies:
Expected weight gain: 28–40 pounds (12.7–18.1 kg)
Twin pregnancy expected weight gain: 37–54 pounds (16.8–24.5 kg)
Twin pregnancy expected weight gain: 14–50 pounds (6.4–22.7 kg)
Twin pregnancy expected weight gain: 25–42 pounds (11.3–19 kg)
Some studies estimate that the weekly weight gain while pregnant with twins should be close to approximately 0.75 kg (1.5 lb) during pregnancy's second and third trimesters. However, every pregnancy and every woman is different — talk to your health care provider about your individual, healthy weight range.
Remember that excessive gestational weight gain may harm pregnancy outcomes! Weight gain during twin pregnancies is twice as significant. These gestations are already considered high risk or very high risk, depending on the chorionicity, which is the number of placentas in the womb.
💡 Dichorionic (two placentas) twin pregnancies are the safest and most common type.
Possible threats for obese and overweight women include preterm birth (already increased in twins) and an even greater risk of preeclampsia.
What about low gestational weight?
Lower weight gain in twin pregnancies is associated with a higher incidence of low birth weight and an even greater risk of anemia.
As we already know, an average woman gains around 30–50 pounds when pregnant with twins. What about multiple pregnancies? It is estimated that every additional fetus adds another 10 pounds (4.5 kg).
Normal-BMI women with twins should eat around 30–45 kcal/kg extra each day.
Some sources recommend increasing the daily intake by:
300 kcal per baby in the 1st trimester;
340 kcal per baby in the 2nd trimester; and
452 kcal per baby in the 3rd trimester.
Keep a maternal weight gain record — step on a scale once a week at the same time of the day and write it all down.
Chart by CDC.gov — Tracking your weight
It is essential to recognize multiple and singleton pregnancies early on. If you're pregnant with twins, but don't know it yet, you may needlessly worry about your "excessive" weight gain and try to impose some interventions that are not needed at all.
Try to exercise as long as you can — walk at least 150 minutes per week.
Fill your diet with proteins, vitamins, dairy products, and wholemeal products.
Maximize your calcium intake. Pregnancy may have a significant impact on your bones and teeth.
|
Lever Calculator | Mechanical Advantage
The elements of a lever
The lever equation
The mechanical advantage of a lever and the law of the lever
The three types of levers
An example: calculate the lever arm to lift the world
How to calculate the mechanical advantage of a lever using our lever calculator
Archimedes said, "Give me a lever long enough" - use our lever calculator to find out how long that lever should be!
The physics behind levers and the lever equation;
The law of the lever, by Archimedes;
What is the mechanical advantage of a lever; and
Some examples of levers in action, both practical and impractical!
A lever is probably the most straightforward mechanism ever devised by humanity. Take a plank of wood (or any rigid material), find a place where to put it, and you are all set.
Regardless of their lack of complexity, levers are marvelous machines everywhere in our daily lives. Look around you wherever you are: scissors, nail clippers, bottle openers, and so on, are all levers. Let's discover more about them.
🔎 Levers appear all the time in nature, in particular in the animal kingdom. Muscles, joints, jaws: evolution engineered the living being to be as good as possible.
A lever allows you to move an object with a specific advantage - that is, you'll be able to move it with either a reduced effort or an increased speed or displacement.
Levers work thanks to the application of forces and the exploitation of the torques which derive from them. This alters the balance, allowing us to lift heavy loads with very little effort.
We can describe levers with a relatively small number of elements:
The fulcrum is where the lever pivots. The fulcrum doesn't need to be in the middle of the lever - in fact, its position allows for entirely different uses of this simple machine.
The resistance (or load) is the force applied by the object you want to move, cut, or whatever else your lever does. We identify the resistance with the symbol F_b. The segment between the fulcrum and the point of application of the resistance is denoted as b.
The effort is the force you apply when operating the lever. The distance from its point of application to the fulcrum is a, while its value is F_a.
The operational principle of a lever is straightforward. We can identify an equilibrium situation where the lever is stationary.
In that condition, the torques applied to the lever by the two forces equal each other.
The torque is the rotational analog of a linear force and, in layman's words, describes the effect of a force F applied at a certain distance x (the arm) from a pivot:
\boldsymbol{\tau}= \mathbf{x}\times\mathbf{F}, \qquad \tau = x\times F\times \sin{\theta}
The first form is written in bold because the quantities are vectors: the torque's magnitude depends on the sine of the angle between the applied force and the arm.
Good news! In a lever, the angle \theta is usually equal to 90\degree, and we can happily ignore it since \sin{90\degree}=1. To learn more about torque, check out our torque calculator!
Back to the equilibrium condition. We said that the torques equal each other:
\tau_a = a \times F_a = b \times F_b = \tau_b
This equation allows us to find out every quantity we need in a problem involving levers. For example, if the arms' length is known, it is possible to calculate the force required to attain equilibrium against a specific resistance.
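As a sketch of how the equilibrium condition is used in practice, here is a small Python helper (the function name is illustrative, not from any library) that solves a × F_a = b × F_b for the unknown effort:

```python
def effort_for_equilibrium(resistance, resistance_arm, effort_arm):
    """Effort F_a that balances a resistance F_b,
    from the equilibrium condition a * F_a = b * F_b."""
    return resistance * resistance_arm / effort_arm

# A 400 N load 0.5 m from the fulcrum, with the effort applied 2 m away:
print(effort_for_equilibrium(400, 0.5, 2.0))  # 100.0 N
```

With the effort arm four times longer than the resistance arm, a quarter of the load's force suffices.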
The quantity that measures the "performance" of a lever is called mechanical advantage. It derives from analyzing the torque applied by both forces on the lever.
The famous law of the lever says that the multiplication of the force in a lever is given by:
\text{MA} = \frac{F_b}{F_a} = \frac{a}{b}
Take a look at the formula and remember that a corresponds to the side where you apply the effort, and b relates to the resistance.
First thing: it is easier to think in terms of the arms rather than the forces. Imagine a really "unbalanced" lever, with the effort applied at a distance far greater than that of the resistance: a\gg b. The mechanical advantage of such a lever would be extremely high: a tiny effort is enough to balance a large resistance.
🔎 Archimedes said: "Give me a lever long enough and a fulcrum on which to place it, and I shall move the world". He had in mind a lever with a particularly good mechanical advantage!
The value of the mechanical advantage tells us how the lever behaves. The higher the mechanical advantage, the smaller the effort applied to balance the same resistance. In that case, the lever is called a force multiplier.
If the mechanical advantage equals 1, the lever gives you no edge: it would be like applying the effort directly. Finally, if the mechanical advantage is smaller than 1, the lever is a speed multiplier, which means the point where the resistance is applied moves farther than the point of effort over the same time, hence increasing the speed.
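The three regimes can be summarized in a short snippet (the function name is illustrative):

```python
def lever_behavior(effort_arm, resistance_arm):
    """Classify a lever from its mechanical advantage MA = a / b."""
    ma = effort_arm / resistance_arm
    if ma > 1:
        return "force multiplier"
    if ma < 1:
        return "speed multiplier"
    return "neutral"

print(lever_behavior(2.0, 0.5))  # force multiplier
print(lever_behavior(1.6, 2.0))  # speed multiplier
```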
🙋 We dedicated an entire calculator to the mechanical advantage in various simple machines - check it out at our mechanical advantage calculator!
How the various elements of a lever relate to one another allows us to define three different types of levers:
Class I levers - The levers we picture in our mind when someone asks us to think of one. The fulcrum is between resistance and effort.
Class II levers - The resistance and the effort are placed on the same side of the fulcrum, with a>b.
Class III levers - The resistance and the effort are placed on the same side of the fulcrum, but this time a<b.
The three types of levers. From the top, a class I lever, a class II lever, and a class III lever.
Now we know how to characterize them. Let's take a look at the possible mechanical advantages!
For a class I lever, a and b can take every possible value greater than 0 (and bounded by the length of the lever, of course). According to the ratio of a and b, such a lever can be either a force multiplier or a speed multiplier.
For a class II lever, a>b. This implies that the mechanical advantage is greater than 1, and the lever always acts as a force multiplier.
For a class III lever, the opposite is true. Since b>a, the mechanical advantage is always smaller than 1, and so the lever always acts as a speed multiplier.
We can calculate the characteristics of a lever able to lift the world using the lever equation. There's only a condition: you're the one doing the lifting!
Assuming a mass of 70\ \text{kg} and that you're standing on your end of the lever, we can compute an effort of:
\footnotesize F_a=70\ \text{kg}\times9.81\ {\text{m/}}{\text{s}^2} = 686.7\ \text{N}
What about the Earth? Eehh... its mass is M_E=5.9722\times10^{24}\ \text{kg}, which corresponds to a resistance of:
\footnotesize \begin{align*} F_b &=5.9722\times10^{24}\ \text{kg}\times9.81\ {\text{m/}}{\text{s}^2}\\ &= 5.8587\times10^{25}\ \text{N} \end{align*}
We need to consider a lever long enough: let's say as long as the distance between the Earth and the Sun (148.81\times10^9\ \text{m}). Calling the length l, and using the equality a = l-b, we can write the equation of the lever as:
l-b=\frac{F_b\times b}{F_a}
Let's isolate b:
\begin{align*} (l-b)\times F_a &= b\times F_b\\\\ l\times F_a &= (F_a + F_b)\times b\\\\ b&=\frac{l\times F_a}{F_a+F_b} \end{align*}
This equation allows us to find the length of the lever's arm associated with the resistance, in our case, the Earth. We substitute the numerical values (neglecting F_a in the denominator), and we calculate the arm of the lever:
\footnotesize \begin{align*} b&=\frac{1.4881\times 10^{11}\ \text{m} \times 686.7\ \text{N}}{5.8587\times 10^{25}\ \text{N}}\\\\ &=1.74\times10^{-12}\ \text{m} \end{align*}
That's one long lever: the arm on which the Earth rests is smaller than a hydrogen atom!
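The whole worked example can be reproduced numerically; the values below follow the article's assumptions (g = 9.81 m/s², and an Earth-Sun-length lever):

```python
g = 9.81                   # gravitational acceleration, m/s^2
F_a = 70 * g               # your effort: a 70 kg person, ~686.7 N
F_b = 5.9722e24 * g        # the Earth's weight, ~5.8587e25 N
l = 1.4881e11              # lever length: Earth-Sun distance, m

# Arm on the Earth's side, from b = l * F_a / (F_a + F_b):
b = l * F_a / (F_a + F_b)
print(b)                   # ~1.74e-12 m, smaller than a hydrogen atom
```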
Our Archimedes is trying to lift the world using only a lever. Will he manage to do so?
Back to some more practical examples that don't involve lifting planets. Let's say you are at the playground, playing on the seesaw with one of your friends. By now, you know that the seesaw is a class I lever. We assume that your weight is 75\ \text{kg} and your friend, being on the thin side, is only 60\ \text{kg}. The seesaw is 4\ \text{m} long. Where do you have to sit to balance your friend? And what is the mechanical advantage of the lever you created?
Apply the lever equation to find the arm on your side of the lever. Notice that in the equation you can use the masses instead of the weights: they differ only by the multiplicative constant g, the gravitational acceleration, which cancels out.
\footnotesize \begin{align*} a&=\frac{g\times 60\ \text{kg}\times 2\ \text{m}}{g\times 75\ \text{kg}}\\\\ &=\frac{60\ \text{kg}\times 2\ \text{m}}{75\ \text{kg}}=1.6\ \text{m} \end{align*}
You have to sit 40\ \text{cm} before the end of the seesaw to equal your friend's resistance - now the game is... balanced!
Now we can calculate the lever's mechanical advantage as the quotient of the two arms' lengths:
\text{MA}=\frac{a}{b}=\frac{1.6\ \text{m}}{2\ \text{m}} = 0.8
It is smaller than 1 because you are balancing a weight smaller than your own.
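The seesaw numbers can be checked in a couple of lines (g cancels, so masses work directly):

```python
m_you, m_friend = 75, 60   # kg; g cancels out of the lever equation
b = 2.0                    # friend's arm: the end of a 4 m seesaw, m

a = m_friend * b / m_you   # your arm for balance
ma = a / b                 # mechanical advantage of this lever
print(a, ma)               # 1.6 m, 0.8
```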
Our lever calculator is a helpful tool that allows you to calculate everything you need for your physics homework, and beyond: we hope it can help you with your everyday problems too!
Leverage your time and quickly find the results: insert the quantities you know, and the calculator does the rest. You can enter the forces acting on the lever, the arms, or the mechanical advantage together with just one other quantity to find the remaining ones!
What is the lever equation?
The lever equation defines the forces and the physical features of a lever in its equilibrium status. It derives from the comparison of the torque acting on the lever:
Fa × a = Fb × b
where:
Fa and Fb are the forces, either the effort or the resistance; and
a and b are the arms of the lever.
Manipulate that simple equation to isolate the desired quantity.
How long should the arm of a lever be to balance a 1500 kg car with my weight?
Let's say you weigh 70 kg. To balance a car with a mass of 1500 kg, you can use a lever defined by the mechanical advantage:
MA = (1500 × g)/(70 × g) = 21.43
where g is the gravitational acceleration, equal to 9.81 m/s². The arm of the lever on your side should be 21.43 times longer than the arm on the car side.
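Reproducing the FAQ's arithmetic (masses suffice, since g cancels in the ratio):

```python
m_car, m_you = 1500, 70    # kg; g cancels out of the ratio
ma = m_car / m_you         # required mechanical advantage
print(round(ma, 2))        # 21.43: effort arm must be 21.43x longer
```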
How to calculate the mechanical advantage of a lever?
You can calculate a lever's mechanical advantage by computing the ratio of the forces acting on the lever or, interchangeably, the ratio of the lever's arms:
MA = Fb/Fa = a/b
The mechanical advantage can be smaller, greater, or equal to one.
What is the mechanical advantage of a lever?
The mechanical advantage of a lever tells you if the lever you are using will give you an edge when doing some mechanical work. A mechanical advantage greater than 1 defines levers that help you lift heavy loads. In contrast, a mechanical advantage smaller than 1 defines levers that reduce the force you apply, but instead return an increased speed.
|
Continuous Mean - MapleSim Help
Calculates the empirical expectation (mean) value of its input signal
The Continuous Mean component continuously calculates the mean value of its input signal.
This can be used to determine the empirical expectation value of a random signal, such as generated by the Noise blocks. The parameter {t}_{\mathrm{\epsilon }} is used to guard against division by zero (the mean value computation starts at {t}_{0}+{t}_{\mathrm{\epsilon }}):
y=\begin{cases}\mu & {t}_{0}+{t}_{\mathrm{\epsilon }}\le t\\ u & \text{otherwise}\end{cases}
\frac{d\mu }{dt}=\begin{cases}\frac{u-\mu }{t-{t}_{0}} & {t}_{0}+{t}_{\mathrm{\epsilon }}\le t\\ 0 & \text{otherwise}\end{cases}
Connections:
u - input signal
y - output signal; expectation (mean) value of the input signal

Parameters:
{t}_{\mathrm{\epsilon }} - default 1·{10}^{-7} s; mean value calculation starts at {t}_{0}+{t}_{\mathrm{\epsilon }} (Modelica ID: t_eps)
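As a rough illustration of what the block computes, here is a discrete-time analogue in Python (the recursive running-mean update, which plays the role of integrating dμ/dt = (u − μ)/(t − t₀); names are illustrative, not MapleSim code):

```python
def running_mean(signal):
    """Incremental mean of the samples seen so far:
    mu_k = mu_{k-1} + (u_k - mu_{k-1}) / k,
    the discrete analogue of d(mu)/dt = (u - mu) / (t - t0)."""
    mu = 0.0
    means = []
    for k, u in enumerate(signal, start=1):
        mu += (u - mu) / k
        means.append(mu)
    return means

print(running_mean([1.0, 2.0, 3.0, 4.0]))  # [1.0, 1.5, 2.0, 2.5]
```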
|
Fazal M. Mahomed, Asghar Qadir, Mehmet Pakdemirli
Affine Differential Invariants of Functions on the Plane
Yuanbin Wang, Xingwei Wang, Bin Zhang
A differential invariant is a function defined on the jet space of functions that remains the same under a group action. It is an important concept to solve the equivalence problem. This paper presents an effective method to derive a special type of affine differential invariants. Given some functions defined on the plane and an affine group acting on the plane, there are induced actions of the group on the functions and on the derivative functions of the functions. Affine differential invariants of these functions are useful in many applications. However, there has been little systematic study of this problem at present. No clear and simple results are available for application users to use directly. We propose a direct and simple method to construct affine differential invariants in this situation. Some useful explicit formulas of affine differential invariants of 2D functions are presented.
Muhammad Ayub, Masood Khan, F. M. Mahomed
We present a systematic procedure for the determination of a complete set of kth-order (k \ge 2) differential invariants corresponding to vector fields in three variables for three-dimensional Lie algebras. In addition, we give a procedure for the construction of a system of two kth-order ODEs admitting three-dimensional Lie algebras from the associated complete set of invariants and show that there are 29 classes for the case of k = 2 and 31 classes for the case of k \ge 3. We discuss the singular invariant representations of canonical forms for systems of two second-order ODEs admitting three-dimensional Lie algebras. Furthermore, we give an integration procedure for canonical forms for systems of two second-order ODEs admitting three-dimensional Lie algebras which comprises two approaches, namely, division into four types I, II, III, and IV and that of integrability of the invariant representations. We prove that if a system of two second-order ODEs has a three-dimensional solvable Lie algebra, then its general solution can be obtained from a partially linear, partially coupled or reduced invariantly represented system of equations. A natural extension of this result is provided for a system of two kth-order (k \ge 3) ODEs. We present illustrative examples of familiar integrable physical systems which admit three-dimensional Lie algebras such as the classical Kepler problem and the generalized Ermakov systems that give rise to closed trajectories.
Permeability Models for Magma Flow through the Earth's Mantle: A Lie Group Analysis
N. Mindu, D. P. Mason
The migration of melt through the mantle of the Earth is governed by a third-order nonlinear partial differential equation for the voidage or volume fraction of melt. The partial differential equation depends on the permeability of the medium which is assumed to be a function of the voidage. It is shown that the partial differential equation admits, as well as translations in time and space, other Lie point symmetries provided the permeability is either a power law or an exponential law of the voidage or is a constant. A rarefactive solitary wave solution of the partial differential equation is derived in the form of a quadrature for the exponential law for the permeability.
Group Classification of a Generalized Lane-Emden System
Ben Muatjetjeja, Chaudry Masood Khalique, Fazal Mahmood Mahomed
We perform the group classification of the generalized Lane-Emden system xu''+nu'+xH(v)=0,\; xv''+nv'+xg(u)=0, which occurs in many applications of physical phenomena such as pattern formation, population evolution, and chemical reactions. We obtain four cases depending on the values of n.
Numerical Investigation of the Steady State of a Driven Thin Film Equation
A. J. Hutchinson, C. Harley, E. Momoniat
A third-order ordinary differential equation with application in the flow of a thin liquid film is considered. The boundary conditions come from Tanner's problem for the surface tension driven flow of a thin film. Symmetric and nonsymmetric finite difference schemes are implemented in order to obtain steady state solutions. We show that a central difference approximation to the third derivative in the model equation produces a solution curve with oscillations. A difference scheme based on a combination of forward and backward differences produces a smooth accurate solution curve. The stability of these schemes is analysed through the use of a von Neumann stability analysis.
A Note on Four-Dimensional Symmetry Algebras and Fourth-Order Ordinary Differential Equations
A. Fatima, Muhammad Ayub, F. M. Mahomed
We provide a supplementation of the results on the canonical forms for scalar fourth-order ordinary differential equations (ODEs) which admit four-dimensional Lie algebras obtained recently. Together with these new canonical forms, a complete list of scalar fourth-order ODEs that admit four-dimensional Lie algebras is available.
Abdullahi Rashid Adem, Chaudry Masood Khalique
|
How do I find the density of a cylinder?
How to use this cylinder density calculator?
This density of a cylinder calculator is helpful to determine the density of any cylindrical structure, given its mass and volume. If you're uncertain about the cylinder's volume, you can use its dimensions instead.
In this article, let's discuss some basic concepts:
What is the formula for cylinder density?
How to find the density of a cylinder?
Density is the amount of mass of a substance per unit volume. It is given by:
\rho = \frac{m}{V}
where:
\rho is the density of the substance;
m is the mass of the substance; and
V is the volume of the substance.
The SI unit of density is kilogram per cubic meter (\text{kg}/\text{m}^3), while its imperial unit is pound per cubic foot (\text{lb}/\text{ft}^3).
To derive a cylinder density formula, let's start by calculating its volume:
V = \pi r^2 h
where:
V is the cylinder's volume;
r is the cylinder's radius; and
h is the cylinder's height.
A cylinder with radius r and height h.
Combining this with the general formula for density we've seen in the previous section, we get the formula for cylinder density:
\rho = \frac{m}{\textcolor{red}{V}} = \frac{m}{\textcolor{red}{\pi r^2 h}}
To calculate the density of a cylinder, follow these steps:
Measure the mass m of the cylinder if it is not already known.
Determine the cylinder's volume V from its radius r and height h using the formula V = πr²h.
Divide the cylinder's mass by its volume to get its density ρ = m/V.
Verify your results using our density of a cylinder calculator.
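The steps above can be sketched in a few lines of Python (the function name is illustrative; consistent SI units are assumed):

```python
import math

def cylinder_density(mass, radius, height):
    """rho = m / (pi * r^2 * h), with mass in kg and lengths in m."""
    return mass / (math.pi * radius**2 * height)

# The worked example below: a 500 g cylinder with r = 5 cm, h = 10 cm.
rho = cylinder_density(0.5, 0.05, 0.10)
print(round(rho, 1))  # 636.6 kg/m^3
```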
This density of a cylinder calculator is easy to use. It has two modes of calculation to suit your needs:
In the default Simple mode, you can enter the cylinder's Mass and Volume to calculate its density. Use this mode if both the mass and volume are known quantities.
Using the Advanced mode, you can also feed the cylinder's Height and Radius instead of its volume to calculate the cylinder's density. Use this mode if you don't know the cylinder's volume.
The following is a useful collection of other relevant density calculators at your disposal:
What is the density of a 500g cylinder with a 5cm radius and 10cm height?
636.6 kg/m³ is the density of this cylinder. To calculate this answer yourself, follow these steps:
Calculate the cylinder's volume using V = πr²h to get V = π×(5 cm)²×(10 cm) = 785.40 cm³.
Divide the cylinder's mass by its volume to get its density as ρ = (500 g)/(785.40 cm³) = 0.6366 g/cm³.
Convert this density into SI unit to get ρ = 0.6366 × 1000 = 636.6 kg/m³.
Verify this result using our density of a cylinder calculator.
How do I calculate the density of an oblique cylinder?
The density of an oblique cylinder is given by: ρ = m/πr²h. As long as you measure the cylinder's height h perpendicular to its base, the formula for the volumes of oblique and right cylinders are the same: V = πr²h. Hence, their densities will also be the same.
|
V2 Migration - Olympus
V2 migration introduces new features such as on-chain governance and auto-staking for bonds.
Transitioning from sOHM V1 to gOHM allows for multiple bonds to be taken at one time, as opposed to one bond per vesting period as it was in v1.
Partial liquidity will remain for v1 OHM while the migration is in progress. This provides sufficient liquidity for borrowers to close or move their borrowing position.
You can read more about this on the Olympus Medium page.
For this article, we added V1 and V2 after each token name to help you differentiate between the old and new tokens. Partner websites, price aggregators, or your wallet will not display the version information.
OHM and sOHM tokens will have their identical V2 counterparts. OHM V1 becomes OHM V2, and sOHM V1 becomes sOHM V2.
Token tickers will remain the same for V1 tokens. For example, after migration, your wallet will show "OHM" instead of "OHM V1". Make sure to update the token contract in your wallet with the V2 addresses to show your balances.
When migrating OHM V1 and/or sOHM V1, you will get gOHM in return. Although the token balance will be different (gOHM price is calculated differently, which is based on the Current Index), the dollar amount remains the same.
After the migration, OHM V1 pools such as OHM-DAI will utilize OHM V2. This applies to new bonds as well. Partners like Abracadabra will only accept new deposits in gOHM. So, you will need to migrate if you want to use these features. Otherwise, you can sit tight and migrate only when you want to.
You don't get to enjoy the new features introduced by V2. Some partners such as Rari Capital will only accept V2 tokens once they are deployed, so you would need to migrate if you want to spend more tokens (e.g. make new deposits) on these platforms.
You would not miss any rebase rewards if you don't migrate. Even if you choose to migrate at a later time, you will receive the same amount of gOHM that you would have received had you migrated immediately.
Gas fees are high now, will I lose my rebase rewards if I delay the migration?
No, you can migrate at your leisure once it goes live. The smart contract will keep track of your entitled rebase rewards so you wouldn't miss any of them.
When the migration is live, the Olympus front-end will be updated to allow the migration of all your V1 tokens (i.e. OHM, sOHM, and wsOHM) to gOHM.
The migration process requires two steps: one to approve the contract for each of your V1 tokens, and another that actually migrates all your tokens to gOHM.
Each V1 token type requires its own approval step. For example, if you have OHM V1 and sOHM V1 in your wallet, you need to perform two token approvals, but only one migration operation (three transactions in total).
Can I migrate a specific type of V1 token and leave out the others?
No. You can either migrate all your V1 tokens (i.e. OHM, sOHM, and wsOHM) or none at all.
Can I switch back to V1 tokens after migrating them to gOHM?
No, you can't switch back from gOHM to V1 tokens through our migration tool.
How much gOHM can I expect from the migration?
The Index at the time of migration was 46.721314322 and will be used for the migration.
For OHM v1 and sOHM v1 the amount of gOHM you will receive is (amount of [s]OHM v1) / 46.721314322. So if you have 10 OHM or sOHM you will receive 10 / 46.721314322 → 0.214035074678779 gOHM. That gOHM will be worth much more (s)OHM v2 than you had (s)OHM v1 because it accounts for missing rewards.
For wsOHM you will receive exactly the same amount in gOHM. So if you have 0.5 wsOHM you will receive 0.5 gOHM.
As a reminder, if you're migrating from a non-index-based token (OHM, sOHM) to an index-based token (gOHM), you won't receive the same number of tokens after the migration, but they are still worth the same in dollar terms.
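The migration arithmetic above can be written as a small helper (hypothetical Python, mirroring the quoted index of 46.721314322; not Olympus contract code):

```python
INDEX = 46.721314322          # index quoted at migration time

def to_gohm(amount, token):
    """Hypothetical helper mirroring the migration arithmetic."""
    if token in ("OHM", "sOHM"):   # non-index-based V1 tokens
        return amount / INDEX
    if token == "wsOHM":           # already index-based: 1:1
        return amount
    raise ValueError("not a V1 token")

print(to_gohm(10, "sOHM"))    # ~0.2140350747 gOHM
print(to_gohm(0.5, "wsOHM"))  # 0.5 gOHM
```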
Will my gOHM still earn rebase rewards?
Yes. Although gOHM does not change in quantity upon rebase like sOHM does, it still earns you rebase rewards. This is because the price of gOHM is tied to the Current Index:
gOHM_{price} = OHM_{price} * CurrentIndex
Every rebase event will cause the Current Index to go up, and your gOHM is worth more as a result (provided that OHM's price stays constant).
How are bonds affected after the migration?
In V2, you can purchase multiple bonds of the same type without resetting the bond vesting period.
Also, there is no need to claim bond rewards and stake them manually, as this process will be automated. The bonders will receive their entitled sOHM at the end of the vesting period.
Learn more about how bonds will behave in V2 from the Olympus Medium page.
Is Olympus V2 audited?
All V2-related contracts are live, and some of them are still undergoing audit. We are working with Runtime Verification on the audit, and the results will be published once they become available.
|
fi Objects and C Integer Data Types - MATLAB & Simulink - MathWorks Deutschland
fi Objects and C Integer Data Types
C Integer Data Types
fi Integer Data Types
Unary Conversions
ANSI C Usual Unary Conversions
fi Usual Unary Conversions
ANSI C Usual Binary Conversions
fi Usual Binary Conversions
ANSI C Overflow Handling
fi Overflow Handling
The sections in this topic compare the fi object with fixed-point data types and operations in C. In these sections, the information on ANSI® C is adapted from Samuel P. Harbison and Guy L. Steele Jr., C: A Reference Manual, 3rd ed., Prentice Hall, 1991.
This section compares the numerical range of fi integer data types to the minimum numerical range of C integer data types, assuming a two's complement representation.
Many C compilers support a two's complement representation of signed integer data types. The following table shows the minimum ranges of C integer data types using a two's complement representation. The integer ranges can be larger than or equal to the ranges shown, but cannot be smaller. The range of a long must be larger than or equal to the range of an int, which must be larger than or equal to the range of a short.
In the two's complement representation, a signed integer with n bits has a range from -{2}^{n-1} to {2}^{n-1}-1, inclusive. An unsigned integer with n bits has a range from 0 to {2}^{n}-1, inclusive. The negative side of the range has one more value than the positive side, and zero is represented uniquely.
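These ranges can be checked with a small Python helper (illustrative, not part of the toolbox):

```python
def twos_complement_range(n, signed=True):
    """Value range of an n-bit integer in two's complement."""
    if signed:
        return (-2**(n - 1), 2**(n - 1) - 1)
    return (0, 2**n - 1)

print(twos_complement_range(8))          # (-128, 127)
print(twos_complement_range(16, False))  # (0, 65535)
```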
The following table lists the numerical ranges of the integer data types of the fi object, in particular those equivalent to the C integer data types. The ranges are large enough to accommodate the two's complement representation, which is the only signed binary encoding technique supported by Fixed-Point Designer™ software.
fi Object - Closest ANSI C Equivalent - Range
fi(x,1,n,0) - signed n-bit integer - -{2}^{n-1} to {2}^{n-1}-1
fi(x,0,n,0) - unsigned n-bit integer - 0 to {2}^{n}-1
fi(x,1,8,0) - signed char - -128 to 127
fi(x,1,16,0) - short - -32768 to 32767
Unary conversions dictate whether and how a single operand is converted before an operation is performed. This section discusses unary conversions in ANSI C and of fi objects.
Unary conversions in ANSI C are automatically applied to the operands of the unary !, –, ~, and * operators, and of the binary << and >> operators, according to the following table:
Original Operand Type -> ANSI C Conversion:
char or short -> int
unsigned char or unsigned short -> int or unsigned int1
Array of T -> Pointer to T
Function returning T -> Pointer to function returning T

1If type int cannot represent all the values of the original data type without overflow, the converted type is unsigned int.
The following list shows the fi unary conversions (C operator -> fi equivalent -> result):
!x -> not(x) -> Result is logical.
~x -> bitcmp(x) -> Result is same numeric type as operand.
x<<n -> bitshift(x,n) -> Result is same numeric type as operand. Round mode is always floor. Overflow mode is obeyed. 0-valued bits are shifted in on the right.
x>>n -> bitshift(x,-n) -> Result is same numeric type as operand. Round mode is always floor. Overflow mode is obeyed. 0-valued bits are shifted in on the left if the operand is unsigned, or signed and positive. 1-valued bits are shifted in on the left if the operand is signed and negative.
-x -> -x -> Result is same numeric type as operand. Overflow mode is obeyed. For example, overflow might occur when you negate an unsigned fi or the most negative value of a signed fi.
This section describes the conversions that occur when the operands of a binary operator are different data types.
In ANSI C, operands of a binary operator must be of the same type. If they are different, one is converted to the type of the other according to the first applicable conversion in the following table:
Type of One Operand
Type of Other Operand
long or unsigned long1
int or unsigned
1Type long is only used if it can represent all values of type unsigned.
When one of the operands of a binary operator (+, –, *, .*) is a fi object and the other is a MATLAB® built-in numeric type, then the non-fi operand is converted to a fi object before the operation is performed, according to the following table:
Type of Other Operand -> Properties After Conversion to a fi Object:
Built-in double or single ->
Signed = same as the original fi operand
WordLength = same as the original fi operand
FractionLength = set to best precision possible
Built-in int8 ->
WordLength = 8
FractionLength = 0
Built-in int16 ->
WordLength = 16
FractionLength = 0
The following sections compare how ANSI C and Fixed-Point Designer software handle overflows.
In ANSI C, the result of signed integer operations is whatever value is produced by the machine instruction used to implement the operation. Therefore, ANSI C has no rules for handling signed integer overflow.
The results of unsigned integer overflows wrap in ANSI C.
Addition and multiplication with fi objects yield results that can be exactly represented by a fi object, up to word lengths of 65,535 bits or the available memory on your machine. This is not true of division, however, because many ratios result in infinite binary expressions. You can perform division with fi objects using the divide function, which requires you to explicitly specify the numeric type of the result.
The conditions under which a fi object overflows and the results then produced are determined by the associated fimath object. You can specify certain overflow characteristics separately for sums (including differences) and products. Refer to the following table:
fimath Object Properties Related to Overflow Handling
'Saturate' - Overflows are saturated to the maximum or minimum value in the range.
'Wrap' - Overflows wrap using modulo arithmetic if unsigned, two's complement wrap if signed.
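The two overflow behaviors for a signed n-bit result can be sketched as follows (illustrative Python, not Fixed-Point Designer code):

```python
def handle_overflow(value, n, mode):
    """Apply 'Wrap' or 'Saturate' to a signed n-bit two's complement result."""
    lo, hi = -2**(n - 1), 2**(n - 1) - 1
    if mode == "Saturate":
        return max(lo, min(hi, value))
    # 'Wrap': two's complement modulo arithmetic
    return (value - lo) % 2**n + lo

print(handle_overflow(130, 8, "Saturate"))  # 127
print(handle_overflow(130, 8, "Wrap"))      # -126
```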
'FullPrecision'
Full-precision results are kept. Overflow does not occur. An error is thrown if the resulting word length is greater than MaxProductWordLength.
The rules for computing the resulting product word and fraction lengths are given in fimath Object Properties in the Property Reference.
'KeepLSB'
The least significant bits of the product are kept. Full precision is kept, but overflow is possible. This behavior models the C language integer operations.
The ProductWordLength property determines the resulting word length. If ProductWordLength is greater than is necessary for the full-precision product, then the result is stored in the least significant bits. If ProductWordLength is less than is necessary for the full-precision product, then overflow occurs.
The rule for computing the resulting product fraction length is given in fimath Object Properties in the Property Reference.
'KeepMSB'
The most significant bits of the product are kept. Overflow is prevented, but precision may be lost.
The ProductWordLength property determines the resulting word length. If ProductWordLength is greater than is necessary for the full-precision product, then the result is stored in the most significant bits. If ProductWordLength is less than is necessary for the full-precision product, then rounding occurs.
'SpecifyPrecision'
You can specify both the word length and the fraction length of the resulting product.
ProductWordLength - The word length of product results when ProductMode is 'KeepLSB', 'KeepMSB', or 'SpecifyPrecision'.
MaxProductWordLength - The maximum product word length allowed when ProductMode is 'FullPrecision'. The default is 65,535 bits. This property can help ensure that your simulation does not exceed your hardware requirements.
ProductFractionLength - The fraction length of product results when ProductMode is 'SpecifyPrecision'.
Full-precision results are kept. Overflow does not occur. An error is thrown if the resulting word length is greater than MaxSumWordLength.
The rules for computing the resulting sum word and fraction lengths are given in fimath Object Properties in the Property Reference.
The least significant bits of the sum are kept. Full precision is kept, but overflow is possible. This behavior models the C language integer operations.
The SumWordLength property determines the resulting word length. If SumWordLength is greater than is necessary for the full-precision sum, then the result is stored in the least significant bits. If SumWordLength is less than is necessary for the full-precision sum, then overflow occurs.
The rule for computing the resulting sum fraction length is given in fimath Object Properties in the Property Reference.
The most significant bits of the sum are kept. Overflow is prevented, but precision may be lost.
The SumWordLength property determines the resulting word length. If SumWordLength is greater than is necessary for the full-precision sum, then the result is stored in the most significant bits. If SumWordLength is less than is necessary for the full-precision sum, then rounding occurs.
You can specify both the word length and the fraction length of the resulting sum.
The word length of sum results when SumMode is 'KeepLSB', 'KeepMSB', or 'SpecifyPrecision'.
The maximum sum word length allowed when SumMode is 'FullPrecision'. The default is 65,535 bits. This property can help ensure that your simulation does not exceed your hardware requirements.
The fraction length of sum results when SumMode is 'SpecifyPrecision'.
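As a rough illustration of the 'KeepLSB' and 'KeepMSB' behaviour described above, here is a hypothetical Python sketch (not the MATLAB fimath API) using plain unsigned integers:

```python
# Hypothetical sketch of the 'KeepLSB'/'KeepMSB' behaviour described above.
# Not the MATLAB fimath API: just unsigned integers and bit operations.

def keep_lsb(full_sum: int, word_length: int) -> int:
    """Keep the least significant bits; overflow (wrap-around) is possible."""
    return full_sum & ((1 << word_length) - 1)

def keep_msb(full_sum: int, full_length: int, word_length: int) -> int:
    """Keep the most significant bits; low-order precision may be lost."""
    return full_sum >> (full_length - word_length)

full = 0b110101100          # a 9-bit full-precision sum (428)
print(keep_lsb(full, 8))    # 172: top bit discarded, value wrapped
print(keep_msb(full, 9, 8)) # 214: bottom bit discarded, value truncated
```

Storing into a word length larger than necessary leaves the value intact in either mode; the interesting cases are the ones above, where the result word is too narrow for the full-precision sum.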
|
2147483647 - Simple English Wikipedia, the free encyclopedia
2,147,483,647 or 2147483647 is a Mersenne prime. It is the result of 2^31 − 1, and it is one of only four known double Mersenne primes.[1]
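The claim is easy to verify directly; a quick Python check (trial division is fast enough here, since the square root of 2^31 is only about 46,341):

```python
# Verify that 2^31 - 1 equals 2,147,483,647 and is prime.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:   # trial division up to sqrt(n)
        if n % i == 0:
            return False
        i += 1
    return True

m = 2**31 - 1
print(m)            # 2147483647
print(is_prime(m))  # True
```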
↑ Weisstein, Eric W. "Double Mersenne Number". MathWorld. Wolfram Research. Retrieved January 29, 2018.
Retrieved from "https://simple.wikipedia.org/w/index.php?title=2147483647&oldid=7961794"
|
Overridingly, while all of these needs could be addressed in, for example, a “small multiples” fashion by something as simple as graphically represented contingency tables, a medium-sized sequence family with 300 positions would require visualizing
\binom{300}{2}=44850
contingency tables. Visually integrating these to develop an understanding of patterns in the data quickly falls prey to inattention and change-blindness issues, and so ideally the end user needs all of this data to be presented seamlessly within a single visualization.
With categorical parallel-coordinates axes arrayed around a cylinder, and fixed categories arrayed at specific locations along the cylinder’s length, we can overcome the Markov-Property-like character of the polyline used to represent each feature vector in traditional planar parallel coordinates, and replace this polyline with a formally complete undirected graph between all of the subnodes traversed by the feature vector. If we cast a set of feature vectors into this space, and weight the subnode-to-subnode edges based on the number of features sharing those sub-nodes, we can visualize the entire
\binom{\text{SequenceLength}}{2}
set of contingency table joint and marginal distributions in the same figure. Figure 6 shows the results of this approach. It is clearly cluttered, and on paper, densely occluded and difficult to interpret, but even with these impediments it is already showing us that there are quite strong patterns of co-occurrence between the A at position 2, with C at 1, T at 3, A at 4, and also with occluded sub-nodes at several other positions. The strengths of the A2⇔C1 and A2⇔T3 relationships are visible with some study of Figure 6; however, the other relationships with A2 are not conveyed by any canonical alternatives. Because these dependencies involve non-sequential columns, even with this limited view, this intuition is beyond what is easily attainable with traditional parallel coordinates or parallel sets.
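The pair count quoted above can be checked directly in Python:

```python
# Number of position pairs (and hence pairwise contingency tables)
# for a 300-position alignment: "300 choose 2".
from math import comb

print(comb(300, 2))  # 44850
```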
|
IRR - Anaplan Technical Documentation
The internal rate of return is the annual rate of return an investment is expected to generate. You could use the IRR function to assess a number of investments and find the most effective investment.
The IRR function has two different syntaxes. The syntax that applies depends on whether you use the function with two or fewer arguments, or with more than two.
IRR with time scale
IRR(Cash flow [, Estimate])
IRR with dates
IRR(Cash flow, Dates, Transactions [, Estimate])
Cash flow (required) Number
A line item that contains a series of positive and negative values that represent cash inflow and outflow.
The line item used for this argument must have a time scale.
An estimate of the IRR. This argument uses percentage format, so 0.1 is equal to 10%.
This argument is optional and helps the IRR function to calculate a result more quickly.
A series of positive and negative values that represent cash inflow and outflow.
Must have the list used for the Transactions argument as a dimension.
Dates (required) Date The date associated with each value of the Cash flow argument.
Transactions (required) List A list of transactions, which must be a common dimension of the Cash flow and Dates argument.
Estimate of rate Number
The IRR function returns a number.
How IRR is calculated
IRR is the iterative solution to this equation:
0 = \sum_{i=1}^{n} \dfrac{p_i}{(1+\mathit{IRR})^{d_i/365}}
n is the number of payments in and out from the start of the first period
Use IRR with the Users list
You can reference the Users list with the IRR function. However, you cannot reference specific users within the Users list as this is production data, which can change and make your formula invalid.
The Cash flow argument must contain at least one positive and one negative value.
Example of IRR with time scale
The second module contains a single Percent format line item, which contains a formula that uses IRR with the Cash flow line item from the first module. As the IRR function returns a single value, the result does not need a time dimension.
IRR of annual cash flow
IRR('Annual cash flow'.Cash flow)
Example of IRR with dates
The second module uses the data from the Plant Transaction Data module with the IRR function to calculate the internal rate of return for each plant. The column for Plant 1 contains the internal rate of return for data displayed in the Plant Transaction Data module.
Internal Return Rate for Plant
IRR('Plant Transaction Data'.Cash flow, 'Plant Transaction Data'.Date, Transactions, 0.1)
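The equation above can be solved numerically; the following is an illustrative Python sketch (not Anaplan's implementation), with hypothetical payment values p_i and day offsets d_i, using simple bisection:

```python
# Solve 0 = sum_i p_i / (1 + IRR)**(d_i / 365) by bisection, where p_i is
# the i-th payment (positive inflow, negative outflow) and d_i is the
# number of days from the first transaction.

def irr(payments, days, lo=-0.99, hi=10.0):
    def npv(rate):
        return sum(p / (1.0 + rate) ** (d / 365.0)
                   for p, d in zip(payments, days))
    for _ in range(100):                 # bisect the bracketing interval
        mid = (lo + hi) / 2.0
        if npv(lo) * npv(mid) <= 0.0:    # root lies in [lo, mid]
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

# Pay out 1000 today, receive 1100 one year later: IRR is 10%.
print(round(irr([-1000.0, 1100.0], [0, 365]), 6))  # 0.1
```

An Estimate argument, as in the documentation above, would serve as a starting guess for a faster root-finder (e.g. Newton's method); bisection is used here only for robustness and brevity.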
|
Alayna's Adventure Journey | Toph
Alayna recently went on her first adventure journey to Dumbulla Safari Park. She enjoyed the bumpy roads and the wild animals, especially the elephants! And now she keeps saying, "Hati!".
Elephants tend to live in family groups, and Alayna noticed many groups of elephants during her visit.
As her father Shakib was busy with his match practice, he could not join Alayna on her adventure visit. So when Shakib came to know about the elephant groups from Alayna, he wondered about the maximum possible size of such groups.
As there were a huge number of elephant groups, she could not remember the maximum group size. But she wrote down the number of elephants in each group in her notebook. As she is very busy with her playing, she asked you to help her find the maximum number of elephants among all groups.
The first line contains an integer n (1 ≤ n ≤ 1000) — the number of elephant groups.
The second line contains n space-separated integers — the number of elephants in each group. There will be at most 10^9 elephants in each group.
Print a single number—the maximum number of elephants among all groups.
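A straightforward solution sketch in Python: read the group sizes and print the largest.

```python
# Read n, then the n group sizes, and return the maximum group size.
def max_group(text: str) -> int:
    lines = text.split("\n")
    n = int(lines[0])
    sizes = list(map(int, lines[1].split()))[:n]
    return max(sizes)

print(max_group("4\n7 1000000000 3 42"))  # 1000000000
```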
Ardent Programmers' Introductory Contest, Mar 2017
|
3 Merkle Trees, Merkle Roots, Merkle Paths and Merkle Proofs
4 SPV Wallet
5 Offline Payment
Simplified Payment Verification (SPV) is described in section 8 of the Bitcoin whitepaper. It allows a transaction recipient to prove that the sender has control of the source funds of the payment they are offering without downloading the full Blockchain, by utilising the properties of Merkle proofs. This does not guarantee that the funds have not been previously spent; that assurance is received by submitting the transaction to the Bitcoin miners. However, in such a case the SPV proof acts as strong evidence of fraud backed by legally recognised digital signature technology.
SPV allows users to securely transact with each other, peer-to-peer, while nodes act to form the settlement layer.
The advantages of using SPV are clear in terms of the volume of data required:
a wallet can store all necessary block headers in around 50MB - this covers the entire block chain (as of January 2020, with 80 bytes per block and around 620,000 blocks in the chain). The total grows linearly at around 4MB per year (i.e. it increases by 80 bytes with each block mined, regardless of the size of that block).
contrast this with the hundreds of gigabytes which would be required to store the entire chain, if SPV were not being used.
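The storage arithmetic above is easy to reproduce (Python):

```python
# 80-byte headers for ~620,000 blocks (January 2020), plus the growth from
# one block every 10 minutes: 6 per hour * 24 hours * 365 days.
HEADER_BYTES = 80

total_bytes = HEADER_BYTES * 620_000          # whole chain of headers
yearly_bytes = HEADER_BYTES * 6 * 24 * 365    # new headers per year

print(total_bytes / 1e6)   # 49.6   (about 50 MB)
print(yearly_bytes / 1e6)  # 4.2048 (about 4.2 MB, matching the whitepaper)
```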
The size of the data required for the Merkle paths is at most 64·log2(n) bytes, where n is the total number of transactions in one block.
As explained in Section 8 of the Bitcoin whitepaper:
" ... [An SPV client] only needs to keep a copy of the block headers of the longest proof-of-work chain, which he can get by querying network nodes until he's convinced he has the longest chain, and obtain the Merkle branch linking the transaction to the block it's timestamped in ...
" ... A block header with no transactions would be about 80 bytes. If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = 4.2MB per year ..."
There have been a lot of previous misunderstandings around SPV and peer-to-peer transacting. Previously, the custom had been for the sender of the payment to just broadcast the payment to the Bitcoin network nodes. The receiver of the payment would then need to somehow filter through all of the transactions coming onto the network for specific transactions relating to them (an extremely difficult task in and of itself). Even if the sender sent the transaction to the receiver as well as the network nodes, the custom had been for the receiver to always wait for the transaction to be confirmed at least 6 times, regardless of the transaction type, amount or situation.
The better approach is that transactions between SPV clients are negotiated peer-to-peer and settled on the ledger through the network nodes. An analogy for this is a transaction done using a cheque, at a much faster speed. The customer hands the signed cheque (transaction) to the merchant, who then banks or cashes the cheque (settles the transaction on chain). When/if the merchant is satisfied according to the situational risk of the transaction, then they can hand over the goods or services.
There is no such thing as absolute security; there is always a risk to weigh against the cost of being defrauded (which decreases exponentially as time goes by). If the transaction is only for a cup of coffee, then the merchant is exposed to less risk than if the transaction is to buy a car, for example, and they would behave differently. If selling a cup of coffee, they can satisfy themselves that the transaction they have received appears to be valid using the SPV process detailed above, and submit the transaction themselves to the network (or even to a trusted miner if using a Merchant API). Given that they will likely receive notification and proof of a fraud attempt within seconds, they will not want to maintain a copy of the entire ledger, or even of the UTXO set to check against, because the risk they face does not justify the cost. SPV is adequate here, just as an instant contactless payment without a PIN is adequate, although arguably the security of SPV is far superior given that discovery of fraud attempts is rapid. Likewise, they will not want to detain their customer while they wait for 6 confirmations - it simply is not necessary - they have received a transaction which appears to be valid, and it has been accepted by the network without a double-spend alert. This will probably be enough for them to risk the cost of the coffee.
Merkle Trees, Merkle Roots, Merkle Paths and Merkle Proofs
A Merkle Tree is a structure used in computer science to validate data - see wikipedia definition for more information.
The Merkle Root in a Bitcoin block is the hash contained in the block header, which is derived from the hashes of all other transactions in the block.
A Merkle Path in SPV represents the information which the user needs to calculate the expected value of the Merkle root for a block, from their own transaction hash contained in that block. The Merkle path is used as part of the Merkle Proof.
A Merkle Proof in SPV proves the existence of a specific transaction in a specific block (without the user needing to examine all the transactions in the Block). It includes the Merkle Root and the Merkle Path.
To create a Merkle proof, a user (or their wallet) simply needs the Merkle path of the transaction as well as the block header for a given block (80 bytes).
To validate a proof, a user (or their wallet) only needs the chain of block headers (as opposed to the whole blocks themselves). I.e. they need their own copy of the block header of each block, that they know to be accurate. Using their own block header chain, together with the transaction (or its hash/id) they want to verify, as well as its Merkle proof (also sometimes referred to as an inclusion proof), a user can verify the transaction was time stamped in a specific block, without examining every transaction in that block.
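The validation step can be sketched in Python. Note this is simplified: real Bitcoin Merkle trees hash little-endian transaction ids and duplicate the last entry on odd-width levels, which this sketch omits:

```python
# Hash a transaction up its Merkle path and compare against the root from
# the block header. The path is a list of (sibling_hash, sibling_is_left).
import hashlib

def dhash(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(tx_hash: bytes, path, merkle_root: bytes) -> bool:
    h = tx_hash
    for sibling, sibling_is_left in path:
        h = dhash(sibling + h) if sibling_is_left else dhash(h + sibling)
    return h == merkle_root

# Tiny two-transaction block: the root is the hash of the two leaves.
leaf0, leaf1 = dhash(b"tx0"), dhash(b"tx1")
root = dhash(leaf0 + leaf1)
print(verify_merkle_proof(leaf0, [(leaf1, False)], root))  # True
```

The path length grows with log2 of the number of transactions in the block, which is where the 64·log2(n)-byte bound above comes from.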
An article in March 2019 entitled Merkle Trees and SPV (Craig Wright, 2019) clarified some previous misunderstandings around SPV and transaction verification. The article included the following diagram which shows how transaction hashes can be related to the Merkle root in a block header:
An SPV wallet is a lightweight wallet that uses the mechanism of SPV to construct Bitcoin transactions and payments.
To spend a UTXO, a user of an SPV wallet will pass on the following information to the receiver:
- Transaction_0, the transaction that contains the UTXO as an output,
- the Merkle path of Transaction_0,
- the block header that contains the Merkle root derived from the Merkle path (or its identifier, e.g., block height),
- Transaction_1, the transaction that spends the UTXO.
To validate the information, a user computes the Merkle root from the Merkle path of Transaction_0. The user then compares it with the Merkle root specified in the block header. If they are the same, the user accepts that Transaction_0 is in the chain.
Note that by storing Transaction_0 locally, a user will be able to sign Transaction_1 offline, as any signature on Transaction_1 requires the scriptPubKey (locking script) part from Transaction_0.
Retrieved from "https://wiki.bitcoinsv.io/index.php?title=Simplified_Payment_Verification&oldid=3009"
|
The Difference in Mass between Relativity and Quantum Mechanics, Also Novel Effects of the Axial Doppler Shift
1Warrensville Heights, OH, USA
If a particle has a wave function or is in other ways a moving wave, it should have an axial Doppler shift. Writers on relativity do not give moving particles that. The classic equation of quantum mechanics requires that frequency and mass have the same distortion from velocity (Doppler shift). But in the common writings on relativity, mass always goes up with increases of velocity, and the transverse shift of frequency always goes down with increases of velocity [1] [2] [3] [4]. Most of this is due to simplifications and errors in the Lorentz transformation; some came from being in the aether wind era originally and because accelerators are noisy. It is not valid to say that because the aether axial wind averages to zero between reflections, so do axial Doppler shifts. After the first reflection in the Lorentz transformation, the light from the Sun is in Earth's reference frame and there are no more Doppler shifts. Also, the Michelson-Morley experiment does not cover all cases, and light is not the only thing deformed by velocity. The axial shift's formula has the cosine of the observation angle in it. The implications are not just quantitative but also qualitative, because anything with an axial Doppler shift has different values in different directions from an observer. That is the defining property of a vector, and that changes its dimensions and the dimensions of the differential relations it is in. This happens with other scalar qualities as well. That means scalars such as mass and charge are now vectors and have additional dimensions. Therefore differential equations with them have additional dimensions. This includes Faraday-Maxwell's equations and Schrodinger's equations. Also, the Doppler blue shift seems to imply additional dimensions of time in another way. That is the first Lorentz transformation error; the second is the assumption of non-existent symmetry.
Wave-Function, Relativity, Doppler, Mass, De Broglie, Schrodinger, Matter-Waves, Space-Time
c\Delta {t}^{\prime }=\sqrt{{\left(c-{v}_{a}\right)}^{2}+{v}_{t}^{2}}\Delta t
{v}_{r}={\left({v}_{t}^{2}+{v}_{a}^{2}\right)}^{1/2}
{v}_{a}={v}_{r}\mathrm{cos}\theta
{v}_{t}={v}_{r}\mathrm{sin}\theta
c={c}^{\prime }
{c}^{\prime }={\left({\left(c-{v}_{a}\right)}^{2}+{v}_{t}^{2}\right)}^{1/2}
\begin{array}{l}c{t}^{\prime }={c}^{\prime }t\\ {t}^{\prime }/t={\left[{\left(1-\left({v}_{a}/c\right)\right)}^{2}+{\left({v}_{t}/c\right)}^{2}\right]}^{1/2}=1/{K}_{r}\end{array}
{\omega }^{\prime }/\omega =t/{t}^{\prime }=1/{\left[{\left(1-\left({v}_{a}/c\right)\right)}^{2}+{\left({v}_{t}/c\right)}^{2}\right]}^{1/2}
1/{\left[{\left(1+\left({v}_{a}/c\right)\right)}^{2}+{\left({v}_{t}/c\right)}^{2}\right]}^{1/2}
v={\left({v}_{a}^{2}+{v}_{t}^{2}\right)}^{1/2}
{K}_{r}=1/{\left[1-2\left({v}_{a}/c\right)+{\left(v/c\right)}^{2}\right]}^{1/2}
{v}_{a}=v\mathrm{cos}\left(\theta \right)
{v}_{t}=v\mathrm{sin}\left(\theta \right)
{K}_{r}=1/{\left[1-2\left(v/c\right)\mathrm{cos}\left(\theta \right)+{\left(v/c\right)}^{2}\right]}^{1/2}
{v}_{a}=0
t/{t}^{\prime }={\omega }^{\prime }/\omega =1/{\left(1+{\left({v}_{t}/c\right)}^{2}\right)}^{1/2}
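As a numeric sanity check of the formulas as written (with c set to 1), the frequency ratio reduces to the purely transverse form when the axial component vanishes:

```python
# omega'/omega = 1 / sqrt((1 - v_a/c)**2 + (v_t/c)**2),
# with v_a = v*cos(theta) and v_t = v*sin(theta).
import math

def freq_ratio(v, theta, c=1.0):
    v_a = v * math.cos(theta)
    v_t = v * math.sin(theta)
    return 1.0 / math.sqrt((1.0 - v_a / c) ** 2 + (v_t / c) ** 2)

v = 0.3
transverse = 1.0 / math.sqrt(1.0 + v**2)   # the theta = 90 degrees case
print(abs(freq_ratio(v, math.pi / 2) - transverse) < 1e-12)  # True
```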
-\left({\hslash }^{2}/2\right){\nabla }^{2}\psi +mV\psi =mE\psi
-\left({\hslash }^{2}/2\right){\nabla }^{2}\psi +m\cdot V\psi =m\cdot E\psi
\text{De Broglie's wavelength}=\lambda =h/p=h/\left(mv\right)
m{v}^{2}/\lambda
m{v}^{2}=hv/\lambda =hf
m=hf/{v}^{2}
m={K}_{r}{m}^{\prime }={m}^{\prime }/{\left[1-2\left(v/c\right)\mathrm{cos}\left(\theta \right)+{\left(v/c\right)}^{2}\right]}^{1/2}
m={m}^{\prime }/{\left[1-{\left(v/c\right)}^{2}\right]}^{1/2}
-\left({\hslash }^{2}/\left(2m\right)\right){\nabla }^{2}\psi +V\psi =E\psi
\nabla
\nabla
\nabla
\left(1/\text{distance}\right)=\left(\nabla \text{operator}\right)
Kv/\lambda
m={m}^{\prime }K
m={m}_{o}K
{K}_{r}=1/{\left[1-2\left(v/c\right)\mathrm{cos}\left(\theta \right)+{\left(v/c\right)}^{2}\right]}^{1/2}
v\cdot c=‖c‖‖v‖\mathrm{cos}\theta
c-v\mathrm{cos}\theta
\left[\omega \left(c-v\mathrm{cos}\theta \right)/c\right]
\left(c-v\mathrm{cos}\theta \right)/c
{\omega }^{\prime }/\omega ={K}_{t}
{\omega }^{\prime }
K={K}_{a}{K}_{t}
{\Sigma }_{i}^{3}\partial {G}_{i}/\partial {T}_{i}{1}_{i}
{1}_{i}
{\Sigma }_{i}^{3}\partial {G}_{i}/\partial {T}_{i}{1}_{i}
{\Sigma }_{i}^{3}\partial {L}_{i}/\partial {T}_{i}{1}_{i}
curlE+\left(1/c\right)\partial B/\partial t=0
curlE+\left(1/c\right)tradB=0
{\nabla }^{2}A-\left(1/{c}^{2}\right){\partial }^{2}A/\partial {t}^{2}=0
{\nabla }^{2}A-\left(1/{c}^{2}\right){\Sigma }_{i}^{3}{\partial }^{2}{A}_{i}/\partial {t}_{i}{}^{2}=0
curl{E}^{\prime }+\left(1/c\right)trad{B}^{\prime }=0
{\nabla }^{2}{A}^{\prime }-\left(1/{c}^{2}\right){\Sigma }_{i}^{3}{\partial }^{2}{{A}^{\prime }}_{i}/\partial {t}_{i}{}^{2}=0
{\Sigma }_{i}^{3}{F}_{i}\left(\mathrm{sin}\left({k}_{i}{s}_{i}-{\omega }_{i}{t}_{i}\right)\right)
\left(\mathrm{sin}\left({k}_{i}{s}_{i}-{\omega }_{i}{t}_{i}\right)\right)
i\hslash \text{d}\Psi /\text{d}t=H\Psi \left(H=\text{Hamiltonian of the mass}\right).
i\hslash \int \left({\partial }^{j}\left({}^{w}\Psi \right)/{\partial }^{j}t\right)\text{d}j={}^{w}H{}^{w}\Psi .
i\hslash {\Sigma }^{j}\partial {}^{w}\Psi /{\partial }^{j}t={}^{w}H{}^{w}\Psi .
m={m}^{\prime }K
Reich, S.L. and Perera, W.G. (2019) The Difference in Mass between Relativity and Quantum Mechanics, Also Novel Effects of the Axial Doppler Shift. Journal of High Energy Physics, Gravitation and Cosmology, 5, 629-637. https://doi.org/10.4236/jhepgc.2019.53035
1. Einstein, A. (1955) The Meaning of Relativity. 5th Edition, Princeton University Press, Princeton, 36.
2. Ugarov, V.A. (1979) Special Theory of Relativity. MIR Publishers, Moscow, 83-84.
3. Fowles, G.R. (1989) Introduction to Modern Optics. Dover Publications, New York.
4. (2016) Search “Time Dilation”. https://www.wikipedia.org/
5. Hecht, K.T. (2000) Quantum Mechanics. Springer-Verlag, New York.
6. Newton, R.G. (2002) Quantum Physics. Springer-Verlag, New York.
7. Jackson, J.D. (1975) Classical Electrodynamics. John Wiley and Sons.
|
A Spectral Projected Gradient-Newton Two Phase Method for Constrained Nonlinear Equations
Yuezhe Zhang
In this paper, we propose a spectral projected gradient-Newton two-phase method for constrained semismooth equations. In the first stage, we use the spectral projected gradient method to obtain the global convergence of the algorithm; we then use the final point of the first stage as a new initial point and switch to a projected semismooth asymptotic Newton method for fast local convergence.
Constrained Semismooth Equations, Spectral Projected Gradient Method, Newton Method, Two-Phase
In this paper, we consider the constrained nonlinear semismooth equations problem: finding a vector
{x}_{\ast }\in \Omega
\begin{array}{l}H\left(x\right)=0,\\ x\in \Omega :=\left\{x\in {ℝ}^{n}|l\le x\le u\right\},\end{array}
\Omega :=\left\{x\in {ℝ}^{n}|l\le x\le u\right\},\text{}{l}_{i}\in ℝ\cup \left\{-\infty \right\},\text{}{u}_{i}\in ℝ\cup \left\{+\infty \right\},\text{}{l}_{i}<{u}_{i},\text{}i=1,\cdots ,n
H:{ℝ}^{n}\to {ℝ}^{n}
is a semismooth mapping. The notion of semismoothness was introduced for functionals by Mifflin [1] and extended to vector-valued functions by Qi and Sun [2].
Systems of constrained semismooth equations arise in various applications, for instance complementarity problems, box constrained variational inequality problems, the KKT systems of variational inequality problems, and so on. The solution of such nonlinear equations can be transformed into solving the following constrained optimization problem:
\begin{array}{l}\mathrm{min}f\left(x\right)=\frac{1}{2}{‖H\left(x\right)‖}^{2}\\ \text{s}\text{.t}\text{.}x\in \Omega \end{array}
f:{R}^{n}\to R
is continuously differentiable and its gradient denoted by
\nabla f\left(x\right)
. Many researchers have studied constrained optimization problems such as (2) and have given many effective algorithms. For example, a new class of adaptive non-monotone spectral gradient methods is given in reference [3], and an active set projected trust region algorithm in [4]. Methods for optimization problems include first-order methods and second-order methods. Classical first-order algorithms include the gradient method, the subgradient method, the conjugate gradient method, etc. The main advantage of first-order methods is their small storage requirement, which makes them particularly suitable for large-scale problems. However, their disadvantage is that the convergence speed is at most linear, which cannot meet the requirements of high precision. Second-order methods have the advantage of fast convergence: under certain conditions they can achieve superlinear or even quadratic convergence. Their disadvantage is that they need a good initial point, sometimes even one close to a local optimal point.
Motivated by this, in this paper we combine the advantages of the first-order method with those of the second-order method. We consider a two-stage combination algorithm to solve the optimization problem. First, we use the first-order method to obtain the global convergence of the algorithm, and then use the final point obtained by the first-order method as the new initial point for the second-order method to obtain a fast convergence speed. At the same time, we use projection techniques to handle the constraints.
In this section, we present some definitions and theorems that are useful to our main result.
H:{R}^{n}\to {R}^{n}
is a locally Lipschitzian function, according to Rademacher theorem, H is differentiable almost everywhere. Denote the set of points at which H is differentiable by
{D}_{H}
{H}^{\prime }\left(x\right)
for the usual
n\times n
Jacobian matrix of partial derivatives whenever x is a point at which the necessary partial derivatives exist. Let
\partial H\left(x\right)
be the generalized Jacobian defined by Clarke in [5] . Then
\partial H\left(x\right)={C}_{0}\left({\partial }_{B}H\left(x\right)\right)
{C}_{0}
denotes the convex hull of a set,
{\partial }_{B}H\left(x\right)=\left\{\underset{\begin{array}{l}\text{\hspace{0.17em}}{x}_{j}\to x\\ {x}_{j}\in {D}_{H}\end{array}}{\mathrm{lim}}{H}^{\prime }\left({x}_{j}\right)\right\}
Definition 2.1 [2] : Suppose
H:{R}^{n}\to {R}^{n}
is a locally Lipschitzian function, we say that H is semismooth at x if
\underset{\begin{array}{l}V\in \partial H\left(x+t{h}^{\prime }\right)\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}{h}^{\prime }\to h,t↓0\end{array}}{\mathrm{lim}}\left\{V{h}^{\prime }\right\}
exists for any
h\in {R}^{n}
Lemma 2.2 [2] : Suppose
H:{R}^{n}\to {R}^{n}
is a locally Lipschitzian function, the following statements are equivalent:
1) H is semismooth at x;
V\in \partial H\left(x+h\right),h\to 0
Vh-{H}^{\prime }\left(x;h\right)=ο\left(‖h‖\right)
H\left(x+h\right)-H\left(x\right)-Vh=ο\left(‖h‖\right)
H:{R}^{n}\to {R}^{n}
is a locally Lipschitzian function, H is semismooth at x if each component of H is semismooth at x.
Definition 2.4 [2] : Suppose
H:{R}^{n}\to {R}^{n}
is a locally Lipschitzian function, If for any
V\in \partial H\left(x+h\right),h\to 0
Vh-{H}^{\prime }\left(x;h\right)=Ο\left({‖h‖}^{1+p}\right)
0<p\le 1
, then we call H is p-order semismooth at x.
H:{R}^{n}\to {R}^{n}
is a locally Lipschitzian function, we say H is strongly BD-regular at x if all
V\in {\partial }_{B}H\left(x\right)
are nonsingular.
Lemma 2.6 [6] : Suppose that
H:{ℝ}^{n}\to {ℝ}^{n}
is locally Lipschitz continuous and H is BD-regular at
x\in {ℝ}^{n}
. Then there exist a neighborhood
ℕ\left(x\right)
of x and a constant K such that for any
y\in ℕ\left(x\right)
V\in {\partial }_{B}H\left(y\right)
, V is nonsingular and
‖{V}^{-1}‖\le K
H:{ℝ}^{n}\to {ℝ}^{n}
is locally Lipschitz continuous and H is BD-regular at a solution
{x}_{\ast }
H\left(x\right)=0
. If H is semismooth at
{x}_{\ast }
, then there exist a neighborhood
ℕ\left({x}_{\ast }\right)
{x}_{\ast }
k>0
x\in ℕ\left(x\ast \right)
‖H\left(x\right)‖\ge k‖x-{x}_{\ast }‖
Lemma 2.8 [7] : The projection operator
{\Pi }_{X}\left(\cdot \right)
satisfies.
x\in X
{\left[{\Pi }_{X}\left(z\right)-z\right]}^{\text{T}}\left[{\Pi }_{X}\left(z\right)-x\right]\le 0
z\in {ℝ}^{n}
‖{\Pi }_{X}\left(y\right)-{\Pi }_{X}\left(z\right)‖\le ‖y-z‖
y,z\in {ℝ}^{n}
Lemma 2.9 [8] : Given
x\in {ℝ}^{n}
d\in {ℝ}^{n}
\xi
\xi \left(\lambda \right)=‖{\prod }_{X}\left(x+\lambda d\right)-x‖/\lambda ,\text{\hspace{0.17em}}\lambda \ge 0
Lemma 2.9 actually implies that if
x\in X
is a stationary point of (2), then
{\stackrel{¯}{d}}_{G}\left(\lambda \right)={\Pi }_{X}\left[x+\lambda {d}_{G}\right]-x=0,\text{}\forall \lambda \ge \text{0}
In order to obtain the global convergence of the algorithm, in the first stage we adopt the non-monotone spectral projected gradient method, a first-order method. The one-dimensional search procedure of Algorithm 3.1 will be called SPG1 from now on and Algorithm 3.2 will be called SPG2 in the rest of the paper.
z\in {ℝ}^{n}
P\left(z\right)
as the orthogonal projection on
\Omega
g\left(x\right)=\nabla f\left(x\right)
{x}_{0}\in \Omega
, integer
M\ge 1
, a small parameter
{\alpha }_{\mathrm{min}}>0
, a large parameter
{\alpha }_{\mathrm{max}}>{\alpha }_{\mathrm{min}}
, sufficient decrease parameter
\gamma \in \left(0,1\right)
0<{\sigma }_{1}<{\sigma }_{2}<1
{\alpha }_{0}\in \left[{\alpha }_{\mathrm{min}},{\alpha }_{\mathrm{max}}\right]
{x}_{0}\in \Omega
Algorithm 3.1 [9] (SPG1)
‖P\left({x}_{k}-g\left({x}_{k}\right)\right)-{x}_{k}‖<{\epsilon }_{1}
, stop, input
{x}_{k}
Step 2. (Backtracking)
Step 2.1 Set
\lambda ={\alpha }_{k}
{x}_{+}=P\left({x}_{k}-\lambda g\left({x}_{k}\right)\right)
Step 2.3 If
f\left({x}_{+}\right)\le \underset{0\le j\le \mathrm{min}\left\{k,M-1\right\}}{\mathrm{max}}f\left({x}_{k-j}\right)+\gamma 〈{x}_{+}-{x}_{k},g\left({x}_{k}\right)〉,
{\lambda }_{k}=\lambda
{x}_{k+1}={x}_{+}
{s}_{k}={x}_{k+1}-{x}_{k}
{y}_{k}=g\left({x}_{k+1}\right)-g\left({x}_{k}\right)
If (11) does not hold, define
{\lambda }_{new}\in \left[{\sigma }_{1}\lambda ,{\sigma }_{2}\lambda \right]
\lambda ={\lambda }_{new}
, and go to step 2.2.
{b}_{k}=〈{s}_{k},{y}_{k}〉
{b}_{k}\le 0
{\alpha }_{k+1}={\alpha }_{\mathrm{max}}
; else compute
{a}_{k}=〈{s}_{k},{s}_{k}〉
{\alpha }_{k+1}=\mathrm{min}\left\{{\alpha }_{\mathrm{max}},\mathrm{max}\left\{{\alpha }_{\mathrm{min}},{a}_{k}/{b}_{k}\right\}\right\}
{d}_{k}=P\left({x}_{k}-{\alpha }_{k}g\left({x}_{k}\right)\right)-{x}_{k}
\lambda =1
Step 2.2. Set
{x}_{+}={x}_{k}+\lambda {d}_{k}
f\left({x}_{+}\right)\le \underset{0\le j\le \mathrm{min}\left\{k,M-1\right\}}{\mathrm{max}}f\left({x}_{k-j}\right)+\gamma \lambda 〈{d}_{k},g\left({x}_{k}\right)〉,
{\lambda }_{k}=\lambda ,{x}_{k+1}={x}_{+},{s}_{k}={x}_{k+1}-{x}_{k},{y}_{k}=g\left({x}_{k+1}\right)-g\left({x}_{k}\right)
{\lambda }_{new}\in \left[{\sigma }_{1}\lambda ,{\sigma }_{2}\lambda \right]
\lambda ={\lambda }_{new}
The output point of the first stage is used as the initial point of the next stage.
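The two key ingredients of the first stage, the projection onto the box and the spectral (Barzilai-Borwein) step length, can be sketched in a few lines of illustrative Python (not the author's code):

```python
# Projection onto Omega = {x : l <= x <= u}, and the spectral step
# alpha_{k+1} = <s_k, s_k> / <s_k, y_k> safeguarded to [a_min, a_max],
# with alpha_{k+1} = a_max when the curvature b_k = <s_k, y_k> is <= 0,
# as in Step 3 of Algorithm SPG1.

def project(x, l, u):
    """Componentwise orthogonal projection onto the box [l, u]."""
    return [min(max(xi, li), ui) for xi, li, ui in zip(x, l, u)]

def spectral_step(s, y, a_min=1e-10, a_max=1e10):
    b = sum(si * yi for si, yi in zip(s, y))   # b_k = <s_k, y_k>
    if b <= 0.0:
        return a_max
    a = sum(si * si for si in s)               # a_k = <s_k, s_k>
    return min(a_max, max(a_min, a / b))

print(project([2.0, -3.0], [0.0, 0.0], [1.0, 1.0]))  # [1.0, 0.0]
print(spectral_step([1.0, 0.0], [2.0, 0.0]))         # 0.5
```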
Algorithm 3.3 [10] (A Projected Semismooth Asymptotic Newton Method)
Step 0. Choose constants
\rho ,\sigma ,\eta \in \left(0,1\right),{p}_{1}>0,{p}_{2}>2
{x}_{0}={x}_{N}\in \Omega
k:=0
{V}_{k}\in {\partial }_{B}H\left({x}_{k}\right)
\nabla f\left({x}_{k}\right)={V}_{k}^{\text{T}}H\left({x}_{k}\right)
{x}_{k}
is a stationary point, stop. Otherwise let
{d}_{G}^{k}=-{\gamma }_{k}\nabla f\left({x}_{k}\right)
{\gamma }_{k}=\mathrm{min}\left\{1,\eta f\left({x}_{k}\right)/{‖\nabla f\left({x}_{k}\right)‖}^{2}\right\}
Step 3. If the linear system
H\left({x}_{k}\right)+{V}_{k}d=0
{d}_{N}^{k}
-\nabla f{\left({x}_{k}\right)}^{\text{T}}{d}_{N}^{k}\ge {p}_{1}{‖{d}_{N}^{k}‖}^{{p}_{2}}
then use the direction
{d}_{N}^{k}
{d}_{N}^{k}={d}_{G}^{k}
{m}_{k}
be the smallest nonnegative integer m satisfying
f\left({x}_{k}+{\stackrel{¯}{d}}^{k}\left({\rho }^{m}\right)\right)\le f\left({x}_{k}\right)+\sigma \nabla f{\left({x}_{k}\right)}^{\text{T}}{\stackrel{¯}{d}}_{G}^{k}\left({\rho }^{m}\right)
where for any
\lambda \in \left[0,1\right]
{\stackrel{¯}{d}}^{k}\left(\lambda \right)={t}_{k}^{\ast }\left(\lambda \right){\stackrel{¯}{d}}_{G}^{k}\left(\lambda \right)+\left[1-{t}_{k}^{\ast }\left(\lambda \right)\right]{\stackrel{¯}{d}}_{N}^{k}\left(\lambda \right)
{\stackrel{¯}{d}}_{G}^{k}\left(\lambda \right)={\prod }_{X}\left[x+\lambda {d}_{G}^{k}\right]-{x}_{k},{\stackrel{¯}{d}}_{N}^{k}\left(\lambda \right)={\prod }_{X}\left[x+\lambda {d}_{N}^{k}\right]-{x}_{k}
{t}_{k}^{\ast }\left(\lambda \right)
is an optimal solution to
\underset{t\in \left[0,1\right]}{\mathrm{min}}\frac{1}{2}{\Vert H\left({x}_{k}\right)+{V}_{k}\left[t{\stackrel{¯}{d}}_{G}^{k}\left(\lambda \right)+\left(1-t\right){\stackrel{¯}{d}}_{N}^{k}\left(\lambda \right)\right]\Vert }^{2}
{t}^{\ast }\left(\lambda \right)=\mathrm{max}\left\{0,\mathrm{min}\left\{1,t\left(\lambda \right)\right\}\right\}
{\lambda }_{k}={\rho }^{{m}_{k}}
{x}_{k+1}={x}_{k}+{\stackrel{¯}{d}}^{k}\left({\lambda }_{k}\right)
k=:k+1
Theorem 4.1 [9] : Algorithm SPG1 is well defined, and any accumulation point of the sequence
\left\{{x}_{k}\right\}
that it generates is a constrained stationary point.
Theorem 4.2 [9] : Algorithm SPG2 is well defined, and any accumulation point of the sequence it generates is a constrained stationary point.
Theorem 4.3 [10] : Let
\left\{{x}_{k}\right\}\subset X
be a sequence generated by Algorithm 3.3, then any accumulation point of
\left\{{x}_{k}\right\}
is a stationary point of (2).
Many practical problems can be solved by transforming them into constrained semismooth equations. For example, the mixed complementarity problem (MCP):
F:{R}^{n}\to {R}^{n}
is a continuously differentiable function, finding a vector
x\in X
F{\left(x\right)}^{\text{T}}\left(y-x\right)\ge 0,\text{}\forall y\in X
{\psi }_{\alpha }:{ℝ}^{2}\to ℝ
\alpha \in \left[0,1\right]
{\psi }_{\alpha }\left(a,b\right):={\left({\left[{\varphi }_{\alpha }\left(a,b\right)\right]}_{+}\right)}^{2}+{\left({\left[-a\right]}_{+}\right)}^{2}
{\left[a\right]}_{+}:=\mathrm{max}\left\{0,a\right\}
a\in ℝ
{\varphi }_{\alpha }:{ℝ}^{2}\to ℝ
is the penalized Fischer-Burmeister function introduced by Chen et al. [11] and has the form:
{\varphi }_{\alpha }\left(a,b\right):=\alpha {\varphi }_{FB}\left(a,b\right)+\left(1-\alpha \right){a}_{+}{b}_{+}
{\varphi }_{FB}:{ℝ}^{2}\to ℝ
is the Fischer-Burmeister NCP function, which is given by
{\varphi }_{FB}\left(a,b\right):=\left(a+b\right)-\sqrt{{a}^{2}+{b}^{2}}
The mixed complementarity problem can be transformed into a semi-smooth system of equations by means of the above functions. Let
N=\left\{1,\cdots ,n\right\}
\begin{array}{l}{I}_{f}:=\left\{i\in N|{l}_{i}=-\infty ,{u}_{i}=\infty \right\},\text{}{I}_{l}:=\left\{i\in N|{l}_{i}>-\infty ,{u}_{i}=\infty \right\},\\ {I}_{u}:=\left\{i\in N|{l}_{i}=-\infty ,{u}_{i}<\infty \right\},\text{}{I}_{lu}:=N\setminus \left({I}_{l}\cup {I}_{u}\cup {I}_{f}\right)\end{array}
MCP can be reformulated as
H\left(x\right)=0
{H}_{i}\left(x\right):=\left\{\begin{array}{l}|{F}_{i}\left(x\right)|\text{}\text{ }\text{ }\text{if}\text{\hspace{0.17em}}i\in {I}_{f}\\ |{\varphi }_{\alpha }\left({x}_{i}-{l}_{i},{F}_{i}\left(x\right)\right)|\text{if}\text{\hspace{0.17em}}i\in {I}_{l}\\ |{\varphi }_{\alpha }\left({u}_{i}-{x}_{i},-{F}_{i}\left(x\right)\right)|\text{}\text{ }\text{ }\text{if}\text{\hspace{0.17em}}i\in {I}_{u}\\ \sqrt{{\psi }_{\alpha }\left({x}_{i}-{l}_{i},{F}_{i}\left(x\right)\right)+{\psi }_{\alpha }\left({u}_{i}-{x}_{i},-{F}_{i}\left(x\right)\right)}\text{if}\text{\hspace{0.17em}}i\in {I}_{lu}\end{array},i=1,\cdots ,n\text{}
Then we can use the two phase method to solve this problem.
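As a minimal sketch of this reformulation (not the paper's implementation): the NCP functions above and the residual map H can be coded directly from their definitions. The function F, bounds l and u, and the test points below are illustrative assumptions.

```python
import math

def phi_fb(a, b):
    # Fischer-Burmeister NCP function, with the sign convention used in the text
    return (a + b) - math.sqrt(a * a + b * b)

def phi_alpha(a, b, alpha=0.95):
    # Penalized Fischer-Burmeister function of Chen, Chen and Kanzow [11]
    pos = lambda t: max(0.0, t)
    return alpha * phi_fb(a, b) + (1 - alpha) * pos(a) * pos(b)

def psi_alpha(a, b, alpha=0.95):
    pos = lambda t: max(0.0, t)
    return pos(phi_alpha(a, b, alpha)) ** 2 + pos(-a) ** 2

def H(x, F, l, u, alpha=0.95):
    """Residual map: MCP solutions are exactly the zeros of H on the box [l, u],
    with the four index-set cases I_f, I_l, I_u, I_lu handled componentwise."""
    Fx = F(x)
    out = []
    for i in range(len(x)):
        li, ui = l[i], u[i]
        if li == -math.inf and ui == math.inf:      # i in I_f: free variable
            out.append(abs(Fx[i]))
        elif ui == math.inf:                        # i in I_l: lower bound only
            out.append(abs(phi_alpha(x[i] - li, Fx[i], alpha)))
        elif li == -math.inf:                       # i in I_u: upper bound only
            out.append(abs(phi_alpha(ui - x[i], -Fx[i], alpha)))
        else:                                       # i in I_lu: both bounds finite
            out.append(math.sqrt(psi_alpha(x[i] - li, Fx[i], alpha)
                                 + psi_alpha(ui - x[i], -Fx[i], alpha)))
    return out
```

For example, with F(x) = x and the bound l = 0, u = ∞, the point x = 0 solves the complementarity problem and the residual vanishes there.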
In this paper, we have proposed a two-phase method for constrained equations; the framework can also incorporate other first-order and second-order methods. An iteration complexity analysis of the first-order phase would be meaningful, and we leave it for future research.
Zhang, Y.Z. (2019) A Spectral Projected Gradient-Newton Two Phase Method for Constrained Nonlinear Equations. Journal of Applied Mathematics and Physics, 7, 104-110. https://doi.org/10.4236/jamp.2019.71009
1. Mifflin, R. (1977) Semismooth and Semiconvex Functions in Constrained Optimization. SIAM Journal on Control and Optimization, 15, 959-972. https://doi.org/10.1137/0315061
2. Qi, L. and Sun, J. (1993) A Nonsmooth Version of Newton’s Method. Mathematical Programming, 58, 353-368. https://doi.org/10.1007/BF01581275
3. Ji, L. and Yu, Z.S. (2009) New Class of Adaptive Nonmonotone Spectral Projected Gradient Method. Journal of University of Shanghai for Science & Technology.
4. Qi, L.Q., Tong, X.J. and Li, D.H. (2004) Active-Set Projected Trust-Region Algorithm for Box-Constrained Nonsmooth Equations. Journal of Optimization Theory and Applications, 120, 601-625. https://doi.org/10.1023/B:JOTA.0000025712.43243.eb
5. Clarke, F.H. (1983) Optimization and Nonsmooth Analysis. John Wiley & Sons, New York.
6. Qi, L. (1993) Convergence Analysis of Some Algorithms for Solving Nonsmooth Equations. Mathematics of Operations Research, 18, 227-244. https://doi.org/10.1287/moor.18.1.227
7. Zarantonello, E.H. (1971) Projections on Convex Sets in Hilbert Space and Spectral Theory: Part I. Projections on Convex Sets: Part II. Spectral Theory. Revista de la Unión Matemática Argentina, 26, 237-424.
8. Powell, M.J.D. (1983) Variable Metric Methods for Constrained Optimization. Mathematical Programming: The State of the Art, Springer, Berlin Heidelberg. https://doi.org/10.1007/978-3-642-68874-4_12
9. Birgin, E.G., Martínez, J.M. and Raydan, M. (2000) Nonmonotone Spectral Projected Gradient Methods on Convex Sets. SIAM Journal on Optimization, 10, 1196-1211. https://doi.org/10.1137/S1052623497330963
10. Sun, D., Womersley, R.S. and Qi, H. (2002) A Feasible Semismooth Asymptotically Newton Method for Mixed Complementarity Problems. Mathematical Programming, 94, 167-187. https://doi.org/10.1007/s10107-002-0305-2
11. Chen, B., Chen, X. and Kanzow, C. (2000) A Penalized Fischer-Burmeister NCP-Function. Mathematical Programming, 88, 211-216. https://doi.org/10.1007/PL00011375
|
Cubes and Cube Roots - Practically Study Material
This is a story about one of India’s great mathematical geniuses, S. Ramanujan. Once another famous mathematician, Prof. G.H. Hardy, came to visit him in a taxi whose number was 1729. While talking to Ramanujan, Hardy described this number as “a dull number”. Ramanujan quickly pointed out that 1729 was indeed interesting. He said, “It is the smallest number that can be expressed as a sum of two cubes in two different ways”.
1729 = 1728 + 1 =
{12}^{3}+{1}^{3}
1729 = 1000 + 729 =
{10}^{3}+{9}^{3}
1729 has since been known as the Hardy – Ramanujan Number, even though this feature of 1729 was known more than 300 years before Ramanujan.
How did Ramanujan know this? Well, he loved numbers. All through his life, he experimented with numbers. He probably found numbers that were expressed as the sum of two squares and sum of two cubes also. There are many other interesting patterns of cubes. Let us learn about cubes, cube roots and many other interesting facts related to them.
In the previous chapter, we have learnt about squares and square roots of the numbers. In this lesson we try to understand the cubes and cube roots.
7.2. Cube of a number
If a number is multiplied by itself, we say that the number is squared. In the similar manner, if a number is multiplied by itself three times, we say that the number is cubed. In other words, the square of a number when multiplied by the number itself gives the cube of the number. If n is the number, then cube of n is n × n × n (3 factors) and it is denoted by
{n}^{3}
. Thus, the exponent of the cube of a number is 3.
Cube of the Number / Number Cubed
{1}^{3} = 1
{2}^{3} = 8
{3}^{3} = 27
{4}^{3} = 64
{5}^{3} = 125
{6}^{3} = 216
{7}^{3} = 343
{8}^{3} = 512
{9}^{3} = 729
The above table gives the cubes of numbers from 1 to 9.
7.3. Perfect cube
In the above table, 1, 8, 27, …, 729 are called perfect cubes or perfect third powers of 1, 2, 3 , …, 9 respectively.
If small numbers are given, we can identify whether it is a perfect cube or not. But if a larger number is given then it is difficult to do so. Hence, we need a method to check whether the number is a perfect cube or not.
Test for a Perfect Cube
Let us now have a look at the method :
We know that if a prime p divides a perfect cube, then
{p}^{3}
also divides this perfect cube.
Also, in the prime factorisation of a perfect cube, every prime occurs three times or a multiple of three times. Thus, to check whether a number is a perfect cube or not,
i) We first prime-factorize the given number.
ii) Then group together triplets of the same prime factors.
iii) If no factor is left out, the number is a perfect cube. Otherwise, it is not a perfect cube.
Example 1: Is 64 a perfect cube?
Let us check whether 64 is a perfect cube or not.
Step-1: 64 = 2 × 2 × 2 × 2 × 2 × 2 (prime factorisation)
Step-2: 64 = (2 × 2 × 2) × (2 × 2 × 2) (grouping together triplets of same prime factors)
Step-3: Here, no prime factor is left out. So 64 is a perfect cube.
Example 2: Is 392 a perfect cube ?
392 = 2 × 2 × 2 ×7 × 7 (prime factorisation)
= (2 × 2 × 2) × 7 × 7 (grouping together triplets)
7 does not appear in a group of three.
So 392 is not a perfect cube.
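The three steps above can be sketched in a short program (a simple trial-division factorisation, which is adequate for small numbers):

```python
def is_perfect_cube(n):
    """Check whether a positive integer is a perfect cube by grouping prime
    factors into triplets: n is a perfect cube exactly when every prime in
    its factorisation occurs a multiple of 3 times, with no factor left out."""
    counts = {}
    d = 2
    while d * d <= n:                 # Step (i): prime factorisation
        while n % d == 0:
            counts[d] = counts.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        counts[n] = counts.get(n, 0) + 1
    # Steps (ii)-(iii): every prime must group into triplets
    return all(c % 3 == 0 for c in counts.values())
```

For example, 64 = (2 × 2 × 2) × (2 × 2 × 2) passes the test, while 392 = (2 × 2 × 2) × 7 × 7 fails because the two 7s do not form a triplet.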
7.4. Cubes of negative numbers
Let us consider the cubes of negative numbers :
(–1) × (–1) × (–1) =
{\left(–1\right)}^{3}
= –1 = –{1}^{3}
(–3) × (–3) × (–3) =
{\left(–3\right)}^{3}
= –27 = –{3}^{3}
(–2) × (–2) × (–2) =
{\left(–2\right)}^{3}
= –8 = –{2}^{3}
(–5) × (–5) × (–5) =
{\left(–5\right)}^{3}
= –125 = –{5}^{3}
From the above examples, we can see that –1 , –27 , –8 and –125 are perfect cubes. Here, we can note an important idea about perfect squares and perfect cubes.
We know from the previous chapter that a negative number cannot be a perfect square.
But from the above examples we see that negative numbers may also be perfect cubes. That is,
(–1) = ( –1 × –1 × –1) is a perfect cube.
(–27) = ( –3 × –3 × –3) is a perfect cube.
(–125) = ( –5 × –5 × –5) is a perfect cube.
If smaller numbers are given, finding the cube of the number is easy, as we can multiply the number by itself three times mentally. But finding cubes of two-digit and three-digit numbers is difficult, as it involves a lot of calculation. Let us now look at an alternative method to do so.
Let us now understand the pattern of the units digits of the cubes of the numbers.
Units digit of the number (n) → units digit of its cube (
{n}^{3}
): 0 → 0, 1 → 1, 2 → 8, 3 → 7, 4 → 4, 5 → 5, 6 → 6, 7 → 3, 8 → 2, 9 → 9.
For numbers with units digits 0, 1, 4, 5, 6, 9, the units digits of their cubes are again the same digits 0, 1, 4, 5, 6, 9. The digits 2 and 8 interchange, as do the digits 3 and 7.
Till now we have understood the idea of a cube and cube of a negative number and some other properties of cubes.
7.7. Cube root through prime factorisation method
Step-1: Express the given number as the product of primes.
Step-2: Make groups in triplets of the same prime.
Step-3: Find the product of the primes choosing one from each triplet.
Step-4: The product from Step 3 is the required cube root of the given number.
Example: Find the cube roots of 512 and 531441.
512 = (2 × 2 × 2) ×(2 × 2 × 2) ×(2 × 2 × 2)
\sqrt[3]{512}=2×2×2=8
\therefore \sqrt[3]{512}=8
531441 = (3 × 3 × 3) × (3 × 3 × 3)× (3 × 3 × 3) × (3 × 3 × 3)
\sqrt[3]{531441}=3×3×3×3=9×9=81
\therefore \sqrt[3]{531441}=81
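Steps 1-4 translate directly into code (again using trial division for the prime factorisation):

```python
def cube_root_by_factorisation(n):
    """Cube root of a perfect cube: factorise, group the primes into
    triplets, and multiply together one prime taken from each triplet."""
    counts = {}
    d = 2
    m = n
    while d * d <= m:                 # Step 1: express n as a product of primes
        while m % d == 0:
            counts[d] = counts.get(d, 0) + 1
            m //= d
        d += 1
    if m > 1:
        counts[m] = counts.get(m, 0) + 1
    root = 1
    for p, c in counts.items():       # Steps 2-3: one prime from each triplet
        assert c % 3 == 0, "not a perfect cube"
        root *= p ** (c // 3)
    return root                       # Step 4: the required cube root
```

For example, 512 = 2^9 gives the cube root 2^3 = 8, and 531441 = 3^12 gives 3^4 = 81, matching the worked example above.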
Note: The cube root of the product of two perfect cubes is the product of their cube roots. For two perfect cubes x and
y,\sqrt[3]{x×y}=\sqrt[3]{x}×\sqrt[3]{y}.
\sqrt[3]{9261}=21=3×7=\sqrt[3]{27}×\sqrt[3]{343}=\sqrt[3]{27×343}
\sqrt[3]{91125}=45=5×9=\sqrt[3]{125}×\sqrt[3]{729}=\sqrt[3]{125×729}
\sqrt[3]{551368}=82=2×41=\sqrt[3]{8}×\sqrt[3]{68921}=\sqrt[3]{8×68921}
7.8. Cube root through Unit digit method
In this method, we can find cube root of a perfect cube by using units digit.
Let us understand the steps to be used by finding the cube root of 2197.
Step-1: Look at the digit in the units place of the perfect cube and determine the digit in the units place of the cube root.
Units digit in 2197 is 7.
Therefore, units digit in its cube root is 3.
Step-2: Strike out from the right, last three (i.e., units, tens and hundreds) digits of the number.
On striking out the units, tens and hundreds digits of 2197, the number left out is 2.
Note: If nothing is left, we stop and the digit in step 1 is the cube root.
Step-3: Consider the number left out from step 2. Find the largest single digit number whose cube is less than or equal to the left out number. This is the tens digit of the cube root.
The largest single digit number whose cube is less than 2 is 1
\left(\because {1}^{3}=1<2\right)
\therefore \sqrt[3]{2197}=13
Example: Find the cube root of 636056.
i) Units digit in 636056 is 6.
Therefore, the units digit in its cube root is 6.
ii) On striking out the units, tens and hundreds digits of 636056, the number left out is 636.
iii) The largest single digit number whose cube is less than 636 is 8
\left(\because {8}^{3}=512<636\right)
\therefore \sqrt[3]{636056}=86.
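The three steps can be sketched as follows; the dictionary encodes the units-digit pattern of cubes (2 and 8 interchange, 3 and 7 interchange, the rest map to themselves):

```python
# Units digit of a perfect cube -> units digit of its cube root
CUBE_UNITS = {0: 0, 1: 1, 8: 2, 7: 3, 4: 4, 5: 5, 6: 6, 3: 7, 2: 8, 9: 9}

def cube_root_by_units_digit(n):
    """Cube root of a perfect cube of up to six digits, via the units-digit method."""
    ones = CUBE_UNITS[n % 10]      # Step 1: units digit of the cube root
    left = n // 1000               # Step 2: strike out the last three digits
    if left == 0:
        return ones                # nothing left: the root is a single digit
    # Step 3: largest single digit whose cube is at most the left-out number
    tens = max(d for d in range(1, 10) if d ** 3 <= left)
    return 10 * tens + ones
```

Running it on the two worked examples reproduces 13 for 2197 and 86 for 636056.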
We have seen various methods of finding the cube roots of numbers. Let us now have a look at the methods of finding cube roots of positive and negative numbers.
7.9. Cube roots of Positive Numbers
Let us now have a look at the method of finding cube roots of positive numbers.
The cube root of the product of two perfect cubes is the product of their cube roots.
Note: For two perfect cubes x and y,
\sqrt[3]{x×y}=\sqrt[3]{x}×\sqrt[3]{y}
\sqrt[3]{9261}=\sqrt[3]{27×343}=\sqrt[3]{27}×\sqrt[3]{343}
\sqrt[3]{91125}=\sqrt[3]{125×729}=\sqrt[3]{125}×\sqrt[3]{729}
\sqrt[3]{551368}=\sqrt[3]{8×68921}=\sqrt[3]{8}×\sqrt[3]{68921}
7.10. Cube roots of Negative Numbers
Let us observe the following examples
Example 1 :
\sqrt[3]{–512}=\sqrt[3]{–1×512}
\begin{array}{l}=\sqrt[3]{–1}×\sqrt[3]{512}=–1×8=–8\\ \therefore \sqrt[3]{–512}=–8\end{array}
Example 2 :
\sqrt[3]{–125}=\sqrt[3]{–1×125}
\begin{array}{l}=\sqrt[3]{–1}×\sqrt[3]{125}=–1×5=–5\\ \therefore \sqrt[3]{–125}=–5\end{array}
Example 3 :
\sqrt[3]{–343}=\sqrt[3]{–1×343}
=\sqrt[3]{–1}×\sqrt[3]{343}=–1×7=–7
From the above examples, we can conclude that
\begin{array}{l}\sqrt[3]{–m}=\sqrt[3]{–1}×\sqrt[3]{m}=–1×\sqrt[3]{m}=–\sqrt[3]{m}\\ \sqrt[3]{–m}=–\sqrt[3]{m}\end{array}
Cube root of a negative perfect cube is negative.
Cube roots of Rational Numbers
The cube root of the quotient of two perfect cubes is the quotient of their cube roots.
For any two perfect cubes x and y,
y\ne 0,\sqrt[3]{\frac{x}{y}}=\frac{\sqrt[3]{x}}{\sqrt[3]{y}}
1. Find the cube roots of
\frac{343}{125}
\sqrt[3]{\frac{343}{125}}=\frac{\sqrt[3]{343}}{\sqrt[3]{125}}=\frac{\sqrt[3]{7×7×7}}{\sqrt[3]{5×5×5}}=\frac{7}{5}
\frac{–2197}{1331}
\sqrt[3]{\frac{–2197}{1331}}=\frac{–\sqrt[3]{2197}}{\sqrt[3]{1331}}=\frac{–\sqrt[3]{13×13×13}}{\sqrt[3]{11×11×11}}=\frac{–13}{11}
Observe the following examples :
\sqrt[3]{125×64}=
\sqrt[3]{5×5×5×4×4×4}=5×4=20 _______\left(1\right)
\sqrt[3]{125}×\sqrt[3]{64}=\sqrt[3]{5×5×5}×\sqrt[3]{4×4×4}
=5×4=20_________\left(2\right)
\sqrt[3]{125×64}=\sqrt[3]{125}×\sqrt[3]{64}.
\sqrt[3]{216×\left(–343\right)}=\sqrt[3]{6×6×6×\left(–7\right)×\left(–7\right)×\left(–7\right)}
=6×\left(–7\right)=–42_________\left(1\right)
\sqrt[3]{216}×\sqrt[3]{\left(–343\right)}
=\sqrt[3]{6×6×6}×\sqrt[3]{\left(–7\right)×\left(–7\right)×\left(–7\right)}
=6×\left(–7\right)=–42____\left(2\right)
\sqrt[3]{216×\left(–343\right)}=\sqrt[3]{216}×\sqrt[3]{\left(–343\right)}
From the above examples, we can conclude that for any two integers a and b,
\sqrt[3]{a×b}=\sqrt[3]{a}×\sqrt[3]{b}·
|
Shapley Values for Machine Learning Model - MATLAB & Simulink - MathWorks Deutschland
This topic defines Shapley values, describes the two algorithms available in the Statistics and Machine Learning Toolbox™ feature that computes Shapley values, provides examples for each, and shows how to reduce the computational cost.
In game theory, the Shapley value of a player is the average marginal contribution of the player in a cooperative game. That is, Shapley values are fair allocations, to individual players, of the total gain generated from a cooperative game. In the context of machine learning prediction, the Shapley value of a feature for a query point explains the contribution of the feature to a prediction (response for regression or score of each class for classification) at the specified query point. The Shapley value corresponds to the deviation of the prediction for the query point from the average prediction, due to the feature. For each query point, the sum of the Shapley values for all features corresponds to the total deviation of the prediction from the average.
The Shapley value of the ith feature for the query point x is defined by the value function v:
{\phi }_{i}\left({v}_{x}\right)=\frac{1}{M}\sum _{S\subseteq ℳ\\left\{i\right\}}\frac{{v}_{x}\left(S\cup \left\{i\right\}\right)-{v}_{x}\left(S\right)}{\frac{\left(M-1\right)!}{|S|!\left(M-|S|-1\right)!}}
M is the number of all features.
ℳ
is the set of all features.
|S| is the cardinality of the set S, or the number of elements in the set S.
vx(S) is the value function of the features in a set S for the query point x. The value of the function indicates the expected contribution of the features in S to the prediction for the query point x.
You can compute Shapley values for a machine learning model by using a shapley object. Use the values to interpret the contributions of individual features in the model to the prediction for a query point. There are two ways to compute Shapley values:
Create a shapley object for a machine learning model with a specified query point by using the shapley function. The function computes the Shapley values of all features in the model for the query point.
Create a shapley object for a machine learning model by using the shapley function, and then compute the Shapley values for a specified query point by using the fit function.
shapley offers two algorithms: kernelSHAP [1], which uses interventional distributions for the value function, and the extension to kernelSHAP [2], which uses conditional distributions for the value function. You can specify the algorithm to use by setting the 'Method' name-value argument of the shapley function or the fit function.
The difference between the two algorithms is the definition of the value function. Both algorithms define the value function such that the sum of the Shapley values of a query point over all features corresponds to the total deviation of the prediction for the query point from the average.
\sum _{i=1}^{M}{\phi }_{i}\left({v}_{x}\right)=f\left(x\right)-E\left[f\left(x\right)\right].
Therefore, the value function vx(S) must correspond to the expected contribution of the features in S to the prediction (f) for the query point x. The two algorithms compute the expected contribution by using artificial samples created from the specified data (X). You must provide X through the machine learning model input or a separate data input argument when you create a shapley object. In the artificial samples, the values for the features in S come from the query point. For the rest of the features (features in Sc, the complement of S), the kernelSHAP algorithm generates samples using interventional distributions, whereas the extension to the kernelSHAP algorithm generates samples using conditional distributions.
shapley uses the kernelSHAP algorithm by default.
The kernelSHAP algorithm defines the value function of the features in S at the query point x as the expected prediction with respect to the interventional distribution D, which is the joint distribution of the features in Sc:
{v}_{x}\left(S\right)={E}_{D}\left[f\left({x}_{S},{X}_{{S}^{c}}\right)\right],
where xS is the query point value for the features in S, and XSc are the features in Sc.
To evaluate the value function vx(S) at the query point x, with the assumption that the features are not highly correlated, shapley uses the values in the data X as samples of the interventional distribution D for the features in Sc:
{v}_{x}\left(S\right)={E}_{D}\left[f\left({x}_{S},{X}_{{S}^{c}}\right)\right]\approx \frac{1}{N}\sum _{j=1}^{N}f\left({x}_{S},{\left({X}_{{S}^{c}}\right)}_{j}\right),
where N is the number of observations, and (XSc)j contains the values of the features in Sc for the jth observation.
For example, suppose you have three features in X and four observations: (x11,x12,x13), (x21,x22,x23), (x31,x32,x33), and (x41,x42,x43). Assume that S includes the first feature, and Sc includes the rest. In this case, the value function of the first feature evaluated at the query point (x41,x42,x43) is
{v}_{x}\left(S\right)=\frac{1}{4}\left[f\left({x}_{41},{x}_{12},{x}_{13}\right)+f\left({x}_{41},{x}_{22},{x}_{23}\right)+f\left({x}_{41},{x}_{32},{x}_{33}\right)+f\left({x}_{41},{x}_{42},{x}_{43}\right)\right].
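The worked example above can be reproduced in a small brute-force sketch (written in Python rather than MATLAB, and not the shapley function's actual implementation; the model f and the data are made up for illustration):

```python
import itertools
from math import comb
import numpy as np

def value_fn(f, X, x, S):
    """Interventional value function: features in S are fixed to the query
    point x; the remaining features take their values from the rows of X."""
    Z = np.array(X, dtype=float)
    idx = list(S)
    Z[:, idx] = np.asarray(x, dtype=float)[idx]
    return float(np.mean([f(z) for z in Z]))

def shapley_values(f, X, x):
    """Exact Shapley values by enumerating all subsets S not containing i,
    weighted by 1/(M * C(M-1, |S|)) as in the defining formula."""
    M = len(x)
    phi = np.zeros(M)
    for i in range(M):
        others = [j for j in range(M) if j != i]
        for k in range(M):
            for S in itertools.combinations(others, k):
                gain = value_fn(f, X, x, set(S) | {i}) - value_fn(f, X, x, set(S))
                phi[i] += gain / (M * comb(M - 1, k))
    return phi
```

By the efficiency property, the values sum to f(x) minus the average prediction over X; and for a linear model each value reduces to the coefficient times the feature's deviation from its mean, which is the coefficient-based computation mentioned below.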
The kernelSHAP algorithm is computationally less expensive than the extension to the kernelSHAP algorithm, supports ordered categorical predictors, and can handle missing values in X. However, the algorithm requires the feature independence assumption and uses out-of-distribution samples [3]. The artificial samples created with a mix of the query point and the data X can contain unrealistic observations. For example, (x41,x12,x13) might be a sample that does not occur in the full joint distribution of the three features.
Specify 'Method','conditional-kernel' to use the extension to the kernelSHAP algorithm.
The extension to the kernelSHAP algorithm defines the value function of the features in S at the query point x using the conditional distribution of XSc, given that XS has the query point values:
{v}_{x}\left(S\right)={E}_{{X}_{{S}^{c}}|{X}_{S}={x}_{S}}\left[f\left({x}_{S},{X}_{{S}^{c}}\right)\right].
To evaluate the value function vx(S) at the query point x, shapley uses nearest neighbors of the query point, which correspond to 10% of the observations in the data X. This approach uses more realistic samples than the kernelSHAP algorithm and does not require the feature independence assumption. However, this algorithm is computationally more expensive, does not support ordered categorical predictors, and cannot handle NaNs in continuous features. Also, the algorithm might assign a nonzero Shapley value to a dummy feature that does not contribute to the prediction, if the dummy feature is correlated with an important feature [3].
This example trains a linear classification model and computes Shapley values using both the kernelSHAP algorithm ('Method','interventional-kernel') and the extension to the kernelSHAP algorithm ('Method','conditional-kernel').
Train a linear classification model. Specify the objective function minimization technique ('Solver' name-value argument) as the limited-memory Broyden-Fletcher-Goldfarb-Shanno quasi-Newton algorithm ('lbfgs') for better accuracy of linear coefficients.
Compute the Shapley values for the first observation using the kernelSHAP algorithm, which uses the interventional distribution for the value function evaluation. You do not have to specify the 'Method' value because 'interventional-kernel' is the default.
For a classification model, shapley computes Shapley values using the predicted class score for each class. Plot the Shapley values for the predicted class by using the plot function.
The horizontal bar graph shows the Shapley values for the 10 most important variables, sorted by their absolute values. Each value explains the deviation of the score for the query point from the average score of the predicted class, due to the corresponding variable.
For a linear model where you assume features are independent from one another, you can compute the interventional Shapley values for the positive class (or the second class in Mdl.ClassNames, 'g') from the estimated coefficients (Mdl.Beta) [1].
Create a table containing the Shapley values computed from the kernelSHAP algorithm and the values from the coefficients.
"x1" 0.28789 0.28789
"x2" 2.315e-15 0
"x4" -0.01998 -0.01998
"x6" -0.076991 -0.076991
"x10" -0.030049 -0.030049
"x11" -0.23132 -0.23132
"x12" 0.1422 0.1422
"x15" 0.21051 0.21051
Compute the Shapley values for the first observation using the extension to the kernelSHAP algorithm, which uses the conditional distribution for the value function evaluation.
The two algorithms identify different sets for the 10 most important variables. Only the two variables x8 and x22 are common to both sets.
The computational cost for Shapley values increases if the number of observations or features is large.
Computing the value function (v) can be computationally expensive if you have a large number of observations, for example, more than 1000. For faster computation, use a smaller sample of the observations when you create a shapley object, or run in parallel by specifying 'UseParallel' as true when you compute the values using the shapley or fit function. Computing in parallel requires Parallel Computing Toolbox™.
Computing the summand in Equation 1 for all available subsets S can be computationally expensive when M (the number of features) is large. The total number of subsets to consider is 2M. Instead of computing the summand for all subsets, you can specify the maximum number of subsets by using the 'MaxNumSubsets' name-value argument. shapley chooses subsets to use based on their weight values. The weight of a subset is proportional to 1/(denominator of the summand), which corresponds to 1 over the binomial coefficient:
1/\left(\begin{array}{c}M-1\\ |S|\end{array}\right)
. Therefore, a subset with a high or low value of cardinality has a large weight value. shapley includes the subsets with the highest weight first, and then includes the other subsets in descending order based on their weight values.
This example shows how to reduce the computational cost for Shapley values when you have a large number of both observations and features.
The data set includes 55,246 observations of 10 variables with information on the sales of properties in New York City in 2015. This example uses these variables to analyze the sale prices (SALEPRICE).
Preprocess the data set. Convert the datetime array (SALEDATE) to the month numbers.
Compute the Shapley values of all predictor variables for the first observation. Measure the time required for the computation by using tic and toc.
As the warning message indicates, the computation can be slow because the predictor data has over 1000 observations.
shapley provides several options to reduce the computational cost when you have a large number of observations or features.
Large number of observations — Use a smaller sample of the training data and run in parallel by specifying 'UseParallel' as true.
Large number of features — Specify the 'MaxNumSubsets' name-value argument to limit the number of subsets included in the computation.
Compute the Shapley values again using a smaller sample of the training data and the parallel computing option. Also, specify the maximum number of subsets as 2^5.
Specifying the additional options reduces the computation time.
|
Solvation shell - Knowpia
Relation to activity coefficient of an electrolyte and its solvation shell number
{\displaystyle \ln \gamma _{s}={\frac {h-\nu }{\nu }}\ln \left(1+{\frac {br}{55.5}}\right)-{\frac {h}{\nu }}\ln \left(1-{\frac {br}{55.5}}\right)+{\frac {br(r+h-\nu )}{55.5\left(1+{\frac {br}{55.5}}\right)}}}
Hydration shells of proteins
|
Generate constant mass flow rate - MATLAB
Mixture mass flow rate
Generate constant mass flow rate
The Mass Flow Rate Source (MA) block represents an ideal mechanical energy source in a moist air network. The source can maintain a constant mass flow rate regardless of the pressure differential. There is no flow resistance and no heat exchange with the environment. A positive mass flow rate causes moist air to flow from port A to port B.
\stackrel{˙}{m}
\begin{array}{l}{\stackrel{˙}{m}}_{A}+{\stackrel{˙}{m}}_{B}=0\\ {\stackrel{˙}{m}}_{wA}+{\stackrel{˙}{m}}_{wB}=0\\ {\stackrel{˙}{m}}_{gA}+{\stackrel{˙}{m}}_{gB}=0\end{array}
{\Phi }_{A}+{\Phi }_{B}+{\Phi }_{work}=0
{\Phi }_{work}=0
{\Phi }_{work}={\stackrel{˙}{m}}_{A}\left({h}_{tB}-{h}_{tA}\right)
\begin{array}{l}{h}_{tA}={h}_{A}+\frac{1}{2}{\left(\frac{{\stackrel{˙}{m}}_{A}}{{\rho }_{A}{S}_{A}}\right)}^{2}\\ {h}_{tB}={h}_{B}+\frac{1}{2}{\left(\frac{{\stackrel{˙}{m}}_{B}}{{\rho }_{B}{S}_{B}}\right)}^{2}\end{array}
{\int }_{{T}_{A}}^{{T}_{B}}\frac{1}{T}dh\left(T\right)=R\mathrm{ln}\left(\frac{{p}_{B}}{{p}_{A}}\right)
The quantity specified by the Mixture mass flow rate parameter of the source is
{\stackrel{˙}{m}}_{A}={\stackrel{˙}{m}}_{specified}
Mixture mass flow rate — Constant mass flow rate through the source
Desired mass flow rate of the moist air mixture through the source.
|
Are Black Holes 4-D Spatial Balls Filled with Black Body Radiation? Generalization of the Stefan-Boltzmann Law and Young-Laplace Relation for Spatial Radiative Transfers
This is the first paper in a two part series on black holes. In this work, we concern ourselves with the event horizon. A second follow-up paper will deal with its internal structure. We hypothesize that black holes are 4-dimensional spatial, steady state, self-contained spheres filled with black-body radiation. As such, the event horizon marks the boundary between two adjacent spaces, 4-D and 3-D, and there, we consider the radiative transfers involving black-body photons. We generalize the Stefan-Boltzmann law assuming that photons can transition between different dimensional spaces, and we can show how for a 3-D/4-D interface, one can only have zero, or net positive, transfer of radiative energy into the black hole. We find that we can predict the temperature just inside the event horizon, on the 4-D side, given the mass, or radius, of the black hole. For an isolated black hole with no radiative heat inflow, we will assume that the temperature, on the outside, is the CMB temperature,
{T}_{2}=2.725\text{\hspace{0.17em}}\text{K}
. We take into account the full complement of radiative energy, which for a black body will consist of internal energy density, radiative pressure, and entropy density. It is specifically the entropy density which is responsible for the heat flowing in. We also generalize the Young-Laplace equation for a 4-D/3-D interface. We derive an expression for the surface tension, and prove that it is necessarily positive, and finite, for a 4-D/3-D membrane. This is important as it will lead to an inherently positively curved object, which a black hole is. With this surface tension, we can determine the work needed to expand the black hole. We give two formulations, one involving the surface tension directly, and the other involving the coefficient of surface tension. Because two surfaces are expanding, the 4-D and the 3-D surfaces, there are two radiative contributions to the work done, one positive, which assists expansion. The other is negative, which will resist an increase in volume. The 4-D side promotes expansion whereas the 3-D side hinders it. At the surface itself, we also have gravity, which is the major contribution to the finite surface tension in almost all situations, which we calculate in the second paper. The surface tension depends not only on the size, or mass, of the black hole, but also on the outside surface temperature, quantities which are accessible observationally. Outside surface temperature will also determine inflow. Finally, we develop a “waterfall model” for a black hole, based on what happens at the event horizon. There we find a sharp discontinuity in temperature upon entering the event horizon, from the 3-D side. This is due to the increased surface area in 4-D space,
{A}_{R}^{\left(4\right)}=2{\text{π}}^{2}{R}^{3}
, versus the 3-D surface area,
{A}_{R}^{\left(3\right)}=4\text{π}{R}^{2}
. This leads to much reduced radiative pressures, internal energy densities, and total energy densities just inside the event horizon. All quantities are explicitly calculated in terms of the outside surface temperature, and size of a black hole. Any net radiative heat inflow into the black hole, if it is non-zero, is restricted by the condition that,
0<1/c\text{d}Q/\text{d}t<4{F}_{R}^{\left(3\right)}
{F}_{R}^{\left(3\right)}
, is the 3-D radiative force applied to the event horizon, pushing it in. We argue throughout this paper that a 3-D/3-D interface would not have the same desirable characteristics as a 4-D/3-D interface. This includes allowing for only zero or net positive heat inflow into the black hole, an inherently positive finite radiative surface tension, much reduced temperatures just inside the event horizon, and limits on inflow.
Black Holes, 4-D Spatial Balls, Black Body Radiation, Stefan-Boltzmann Law, Young-Laplace Relation
\text{d}Q/\text{d}t=0
\text{d}Q/\text{d}t>0
r\to 0
{T}_{2}=2.725\text{\hspace{0.17em}}\text{K}
{T}_{2}>2.725\text{\hspace{0.17em}}\text{K}
\text{d}Q/\text{d}t>0
{T}_{2}
{T}_{1}
{S}_{\text{Bekenstein}}=\left(1/4\right){c}^{3}{k}_{B}/\left(G\hslash \right)\left(4\text{π}{R}^{2}\right)
{S}_{\text{Bekenstein}}~{R}^{2}
{R}^{2}
{T}_{H}=\hslash {c}^{3}/\left(8\text{π}GM{k}_{B}\right)
r=0
{A}_{R}^{\left(3\right)}=4\text{π}{R}^{2}
{A}_{R}^{\left(4\right)}=2{\text{π}}^{2}{R}^{3}
{\Phi }^{\left(N\right)}\left(T\right)
{\Phi }^{\left(N\right)}\left(T\right)=\frac{\text{d}Q/\text{d}t}{{A}^{\left(N\right)}}={\sigma }^{\left(N\right)}{T}^{N+1}
{A}^{\left(N\right)}
{\sigma }^{\left(N\right)}
{A}^{\left(N\right)}
{A}^{\left(N\right)}={A}^{\left(N\right)}\left(R\right)=2{\text{π}}^{N/2}{R}^{N-1}/\Gamma \left(N/2\right)
\Gamma \left(x\right)
{\sigma }^{\left(N\right)}
{\sigma }^{\left(N\right)}={\left(2/c\right)}^{N-1}{\left(\sqrt{\text{π}}\right)}^{N-2}\left({k}_{B}^{N+1}/{h}^{N}\right)N\left(N-1\right)\Gamma \left(\frac{N}{2}\right)\zeta \left(N+1\right)
{k}_{B}
\zeta \left(x\right)
\Gamma \left(x\right)
{A}_{R}^{\left(4\right)}=2{\text{π}}^{2}{R}^{3}
{A}_{R}^{\left(3\right)}=4\text{π}{R}^{2}
{\sigma }^{\left(4\right)}=3.021\times {10}^{-5}\text{Watts}/\left({\text{m}}^{3}\cdot {\text{K}}^{5}\right)
{\sigma }^{\left(3\right)}=5.670\times {10}^{-8}\text{Watts}/\left({\text{m}}^{2}\cdot {\text{K}}^{4}\right)
{\sigma }^{\left(2\right)}=9.614\times {10}^{-11}\text{Watts}/\left(\text{m}\cdot {\text{K}}^{3}\right)
{\Phi }^{\left(N\right)}
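As an illustrative cross-check (not part of the paper), the generalized Stefan-Boltzmann constant $\sigma^{(N)}$ can be evaluated numerically from the formula above; the zeta function is approximated here by a truncated series:

```python
import math

# CODATA exact values of the physical constants (SI units)
c   = 2.99792458e8      # speed of light, m/s
h   = 6.62607015e-34    # Planck constant, J*s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def zeta(s, terms=200_000):
    """Riemann zeta function via a truncated series (adequate for s >= 3)."""
    return sum(k ** -s for k in range(1, terms + 1))

def sigma_N(N):
    """Generalized Stefan-Boltzmann constant in N spatial dimensions."""
    return ((2 / c) ** (N - 1) * math.sqrt(math.pi) ** (N - 2)
            * k_B ** (N + 1) / h ** N
            * N * (N - 1) * math.gamma(N / 2) * zeta(N + 1))

# Reproduces the quoted values for N = 2, 3, 4;
# N = 3 recovers the ordinary constant 5.670e-8 W/(m^2 K^4)
print(sigma_N(2), sigma_N(3), sigma_N(4))
```

For $N = 3$ the expression reduces algebraically to the familiar $\sigma = 2\pi^5 k_B^4/(15 h^3 c^2)$.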
The net heat flow into the black hole is

$$\frac{\text{d}Q}{\text{d}t} = \frac{\text{d}Q^{(3)}}{\text{d}t} - \frac{\text{d}Q^{(4)}}{\text{d}t} = A^{(3)}\sigma^{(3)} T_2^4 - A^{(4)}\sigma^{(4)} T_1^5,$$

where $\text{d}Q^{(3)}/\text{d}t$ is the 3-D inflow at the outside temperature $T_2$, $\text{d}Q^{(4)}/\text{d}t$ is the 4-D outflow at the inside temperature $T_1$, and $A^{(3)}$, $A^{(4)}$ are the corresponding surface areas. In equilibrium, $\text{d}Q/\text{d}t = 0$ and

$$A^{(4)}\sigma^{(4)} T_1^5 = A^{(3)}\sigma^{(3)} T_2^4.$$

Taking $T_2 = 2.725\ \text{K}$ and the Schwarzschild radius $R = 2GM/c^2$,

$$2\pi^2 R^3 \sigma^{(4)} T_1^5 = 4\pi R^2 \sigma^{(3)} (2.725)^4,$$

which, solving for $T_1$, gives the inside temperature $T_1 = 0.581\,R^{-1/5}$.
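Plugging in the numerical constants confirms the quoted coefficient; a minimal check (our own sketch, using $\sigma^{(3)}$, $\sigma^{(4)}$, and $T_2$ from above):

```python
import math

sigma3 = 5.670e-8    # W/(m^2 K^4), 3-D Stefan-Boltzmann constant
sigma4 = 3.021e-5    # W/(m^3 K^5), 4-D analogue
T2 = 2.725           # K, outside (CMB) temperature

# Equilibrium: 2*pi^2*R^3*sigma4*T1^5 = 4*pi*R^2*sigma3*T2^4
# => T1 = [2*sigma3*T2^4 / (pi*sigma4)]^(1/5) * R^(-1/5)
coef = (2 * sigma3 * T2 ** 4 / (math.pi * sigma4)) ** 0.2
print(coef)  # ~0.58, matching T1 = 0.581 R^(-1/5)
```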
Pilot, C. (2019) Are Black Holes 4-D Spatial Balls Filled with Black Body Radiation? Generalization of the Stefan-Boltzmann Law and Young-Laplace Relation for Spatial Radiative Transfers. Journal of High Energy Physics, Gravitation and Cosmology, 5, 638-682. https://doi.org/10.4236/jhepgc.2019.53036
|
Star Shape Calculator
Star shapes: pointy geometry
What is the pentagram shape?
More than pentagram: different star shapes
Some facts about star shapes!
How to use our star shape calculator
Twinkle, twinkle, little star... our star shape calculator will calculate all of the important elements of a star polygon in the blink of an eye!
Here you will learn everything about star-shaped polygons. Well, everything apart from occultism: we deal with geometry, not magic! Keep reading to find out:
What are star-shaped polygons;
How to build them;
What is the pentagram: a down to Earth introduction;
Other star-shaped polygons; and
How to use our star shape calculator.
We also added some fun facts about these shapes and a challenge: how many triangles are in a pentagram? To the stars, with Omni Star Shape Calculator!
Star polygons are the shiniest shapes in geometry. By definition, they are:
Non-convex;
Self-intersecting;
Equilateral; and
Equiangular polygons.
Let's look at all of these features one by one!
Non-convex means that you can draw a line segment between two points inside the star-shaped polygon that passes outside its perimeter.
Self-intersecting means that you can build all the different star shapes starting from a regular polygon and extending its sides until the extensions cross each other.
Equilateral signifies that all the outer sides of the star-shaped polygon have the same length (latus is the Latin for side).
Equiangular means that all the angles lying in the same "region" are equal.
The last two attributes of the star polygons automatically qualify them as regular polygons. Of course, you can build irregular star polygons, but they would be impossible to calculate, and we are here for this!
We can uniquely identify every star polygon using the Schläfli symbol. This symbol consists of a pair of numbers in the form {n/m}, where:
n is the number of corners/sides of the polygon; and
m is its starriness, or how many distinct boundaries you can identify around the center.
Let's see how to build a star polygon, and then this notation will be clearer!
The animation shows the construction of a five-pointed star from a regular polygon.
This five-pointed star polygon has the Schläfli symbol
{5/2}
, and it is so important we will deal with it in the next section.
Let's meet the VIP (very important polygon) among the star polygons: the pentagram!
What is a pentagram? The pentagram is nothing but a five-pointed star shape. However, humanity covered it with a fair dose of mysticism. It is the first regular star polygon we can build by extending the sides of a polygon (try to do that with a square or a triangle, and you will understand), and also one of the most drawn shapes in the world. In particular, you can draw it from a pentagon.
🔎 You can draw a pentagram without separating the pen from the paper. But you knew this already: everyone draws stars in their lives!
The pentagram got a bad reputation due to its association with evil: don't trust such voices and keep exploring this fascinating shape's geometry!
Here is a pentagram with all the important features highlighted!
Don't worry: the only magical thing in this shape is the math!
You will meet these elements in all the other star polygons in this calculator: pay attention!
a, the distance between two contiguous "points" of the star;
b, the length of the first set of "rays" winding around the polygon;
c, the side of the original polygon; and
l, the distance between two "points" of the star belonging to the same side of the polygon.
Note that connecting the points of the star creates a new n-sided regular polygon with side a.
Some relationships between those quantities always hold (for every star polygon); others are specific for a given shape. Let's check the general ones first:
The "side" of a star is given by the sum l=c+2\cdot b.
You can calculate the perimeter of the first boundary of an n-pointed star with the formula:
\footnotesize P = 2\cdot n \cdot b
However, there are many more "specific" formulas. Let's see the ones for the pentagram:
\footnotesize \begin{align*} &l=a\cdot \varphi\\\\ &b=\frac{a}{\varphi}\\\\ &c=\frac{b}{\varphi} \end{align*}
where \varphi is the golden ratio:
\footnotesize \varphi = \frac{1+\sqrt{5}}{2}\simeq1.618
Only the pentagram among the star polygons has sides related by the golden ratio. Admire the proportions of these quantities:
\footnotesize \frac{l}{a}=\frac{a}{b}=\frac{b}{c}=\varphi
Knowing this, we can almost understand why the pentagram got its reputation! It looks like someone really put a lot of thought into such a simple geometric shape. Learn everything about this "magic" number at the golden ratio calculator!
Apart from those golden ratios, there are a couple more relationships holding in a five-pointed star shape:
\footnotesize \begin{align*} &a=b+c\\ &l=a+b \end{align*}
We can also calculate the value of the perimeter and the area. For the perimeter, use the general formula above, substituting n=5. For the area, a specific formula is needed:
\footnotesize A=\frac{a^2}{2}\sqrt{5 \cdot ( 5-2\sqrt{5} ) }
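The golden-ratio relations above are easy to check numerically. Here is a small sketch (our own illustration) computing every pentagram quantity from the side a:

```python
import math

phi = (1 + math.sqrt(5)) / 2          # golden ratio, ~1.618

a = 1.0            # distance between two contiguous points of the star
l = a * phi        # "side" of the star
b = a / phi        # length of one ray
c = b / phi        # side of the inner pentagon

perimeter = 2 * 5 * b                                        # P = 2*n*b with n = 5
area = a ** 2 / 2 * math.sqrt(5 * (5 - 2 * math.sqrt(5)))    # ~0.812 for a = 1

# The general relations l = c + 2*b and a = b + c hold automatically.
print(perimeter, area)
```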
Let's check out different star shapes!
If we start from the hexagon (the bestagon), we can build the hexagram. In a hexagram, we can use the following formulas to calculate the various characteristic quantities:
\footnotesize \begin{align*} &l=a\cdot \sqrt{3}\\\\ &b=\frac{a}{\sqrt{3}}\\\\ &c=b \end{align*}
The \sqrt{3} comes from the intimate connection between hexagons and equilateral triangles: the angles of such a shape are, in fact, 60\degree (visit Omni's equilateral triangle calculator if you want to refresh your knowledge!). When we use this value in a trigonometric function like the tangent, we find \tan{60\degree}=\sqrt{3}. Find out more about hexagon symmetries with our hexagon calculator.
A hexagram with all its element highlighted.
We can calculate the area of a hexagram by splitting it into an inner hexagon with side b plus six equilateral triangles with side b, which gives the formula:
\footnotesize A = 3\cdot \sqrt{3} \cdot b^2
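As a check on the hexagram's geometry (an illustrative sketch, not part of the calculator), we can build the twelve vertices of the {6/2} outline with edge length b and compute the enclosed area with the shoelace formula:

```python
import math

b = 1.0                       # edge length of the hexagram outline
R_out = math.sqrt(3) * b      # circumradius of the six star points
r_in = b                      # circumradius of the inner hexagon

# Twelve vertices in order, alternating star points and inner corners
verts = []
for k in range(6):
    verts.append((R_out * math.cos(k * math.pi / 3),
                  R_out * math.sin(k * math.pi / 3)))
    verts.append((r_in * math.cos(k * math.pi / 3 + math.pi / 6),
                  r_in * math.sin(k * math.pi / 3 + math.pi / 6)))

# Shoelace formula for a simple polygon
area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(verts, verts[1:] + verts[:1])))
print(area)  # ~5.196, i.e. 3*sqrt(3)*b^2 for b = 1
```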
Increase the number of sides and get a heptagram, a seven-pointed star. The heptagram has a particularity: it is the first star-shaped polygon for which we can naturally find a third boundary.
The reason for this peculiarity is that two sides of a heptagon separated by two others can finally meet if prolonged. In the case of a pentagram, the same couple would be a pair separated by a single side (thus a "first-order" ray), while in a hexagram, the same couple is parallel.
When drawing a heptagram, then, we meet two star-shaped polygons: a stubbier one, associated with the Schläfli symbol {7/2}, and a more slender one with symbol {7/3}.
The heptagram is the first star-shaped polygon with two additional boundaries.
We call the side of the {7/3} boundary d. We can calculate the geometric relationships for both shapes, but we warn you: we will use some trigonometry, and the formulas will start to look "ugly"!
\footnotesize \begin{align*} &a = 2\cdot b \cdot \sin{\left(\frac{180-(360/7)}{2}\right)}\\\\ &c = 2 \cdot b \cdot \cos{\left(\frac{360}{7}\right)} \end{align*}
The side of the boundary {7/3} comes from the formula:
\footnotesize d = \frac{l}{2\times \cos{\left(180 - 2\times \frac{360}{7}\right)}}-l
The formula for the area requires we compute:
The area of a regular heptagon with side equal to a; and
Seven times the area of an isosceles triangle with equal sides d and base a.
\footnotesize \begin{align*} &A = \left(\frac{7}{2} \cdot \frac{a}{2}\cdot \sqrt{d^2 - \left(\frac{a}{2}\right)^2}\right)+\\ &\qquad\left(\frac{7}{4}\cdot a^2 \cdot \cot{\left(\frac{\pi}{7}\right)}\right) \end{align*}
To conclude, the last "manageable" star-shaped polygon is the octagram, an eight-pointed star shape with an intriguing figure, rich in angles of 45\degree. We start drawing it from an octagon, an eight-sided polygon.
The elements of an octagram. Can you see the squares?
As for the heptagram, we deal with a shape with three boundaries (we can identify both {8/2} and {8/3}). Here are the formulas for the sides: don't worry if you see some \sqrt{2} factors; blame trigonometry instead!
\footnotesize \begin{align*} &a = 2 \cdot b \cdot \sin{(135)}\\ &c= b \cdot \sqrt{2}\\ &d = b+c \end{align*}
As for the heptagram, the fastest way to compute the area involves the separation of the octagram into an octagon with side a and eight triangles. The formula is:
\footnotesize \begin{align*} &A = \left(4 \cdot \frac{a}{2}\cdot \sqrt{d^2 - \left(\frac{a}{2}\right)^2}\right)+\\ &\qquad\left(2\cdot a^2 \cdot \cot{\left(\frac{\pi}{8}\right)}\right) \end{align*}
You can build star shapes from any regular polygon: the series continues with the enneagram, the decagram, etc. A generalization of the formulas is hardly helpful, since every new odd-numbered polygon is associated with an additional boundary: for the enneagram, we can identify the Schläfli symbol {9/4}; for the hendecagram, we have the boundary {11/5}.
Star polygons are simple shapes, but somehow they keep appearing all around us.
Consider flags, for example. Of the 196 country flags in the world, 40 of them contain stars. However, the number of stars rises quickly if you count the shapes: the US flag alone contributes 50! 😉
Burundi's and Israel's flags contain hexagrams. The flag of Jordan depicts a heptagram; the shape was also part of the old Georgian coat of arms. Only Azerbaijan contains an octagram!
Many countries have pentagrams in their flags: there is quite a constellation from the US to Somalia!
An old flag of Australia contained six-pointed stars, while the modern one has both a pentagram and some heptagrams! The designers of the national colors couldn't make up their minds!
Our favorite starred flag is the flag of Nepal: the only non-rectangular flag in the world also has a beautiful 12-pointed star which really makes it stand out!
Star-shaped polygons appear widely through sacred geometry. Do we need to mention the Star of David, a hexagram, or the shared symbolism associated with the pentagram from Hinduism to Neo-paganism?
You can also notice that the constructions of a star-shaped polygon create many triangles, the simplest bi-dimensional shape.
To make the game more entertaining, connect the outside points of the n-gram! How many triangles are in a pentagram shape? We counted... 35, how about you?
Our star shape calculator can help you calculate all of the characteristic lengths of the four most common star-shaped polygons:
Pentagram;
Hexagram;
Heptagram; and
Octagram.
Select the desired number of sides at the top of the calculator and insert the value of one of the available variables there: we will calculate all the others! For example, you can insert the area, and we will tell you all the other lengths.
⚠️ The cotangent function in the area formulas of heptagrams and octagrams prevents the calculations from running in the inverse direction!
What is a star-shaped polygon?
A star-shaped polygon (n-gram) is a non-convex regular polygon. To build it starting from a regular n-polygon like a pentagon (n = 5), hexagon (n = 6), etc., follow these steps:
Prolong the sides of the starting polygon.
Extend each side until it intersects the prolongation of the next non-adjacent side.
For odd-sided polygons, you can have an intersection between prolongations separated by an increasing number of sides: check all the possibilities!
Are there star-shaped polygons you can't draw without lifting the pen from the paper?
No! If you are dealing with regular star-shaped polygons, you can always find a way to draw the entire shape without lifting the pen from the paper. From time to time, it may be cumbersome, though. Try to follow these tips:
Identify the components of the shape: other stars or convex polygons.
If there are separated elements, start from an intersection.
Retrace an already drawn shape.
What is the perimeter of a pentagram with side 5?
This perimeter is 50. The formula for the perimeter of a pentagram shape is:
perimeter = 10 × side
where side is the length of the side of the "rays" of our star. Since there are five rays, with two sides each, to calculate the perimeter of the pentagram, it suffices to count ten times the length of side.
What is the side of a hexagram built from a hexagon with side 3?
The answer is 3. The hexagram "rays" are nothing but equilateral triangles. In fact, you can consider the hexagram itself as the result of the intersection of two equilateral triangles with side 3 times larger than the side of the hexagram.
This interesting symmetry comes from the extreme regularity of hexagons. Learn more with the star shape calculator at Omni Calculator!
|
Buckling Calculator — Column Buckling
Euler's formula for column buckling
Using the buckling calculator
This buckling calculator helps you estimate the behavior of a column under axial loads. The calculator determines the critical load value based on Euler's formula for columns. Buckling occurs when a column is under high compressive load, and it is observable before the stress reaches the yield stress value in the column, making the phenomenon catastrophic.
Columns are a load-bearing member, be it the concrete columns for your house or the desk where you have set up your 3D printer! They can fail suddenly if the buckling load goes beyond the critical load. For instance, it could be the excessive snow load on the roof due to rough weather! Therefore, engineers need to understand this concept to design components considering the buckling loads. To this end, we hope the article below will help you get started on how to calculate buckling load.
Buckling is a phenomenon under which a member can suddenly fail due to excessive compressive load. The load at which the member fails is known as the critical load, F_{crit} (or simply F). The buckling causes a reduction in the axial stiffness of the column that results in displacements and rotations with catastrophic consequences.
The buckling in a column depends on the elastic stiffness of the material rather than its load-bearing compressive strength. That is why the buckling load requires a separate consideration in the design and the stresses, as the failure due to buckling could occur before the stresses in the column satisfy the yield criterion.
The buckling induces instabilities in the structure causing it to fail. The buckling can be due to flexural or torsional loads and is categorized as flexural and torsional buckling. There's also a mixed case in which the combination of flexural and torsional load causes the structure to buckle, known as flexural-torsional buckling.
Euler's buckling formula helps estimate the critical load. However, that formula applies only to long columns; for short columns, you need to use Johnson's formula. So how do we know if the column is long or short? We check the slenderness ratio: if it is less than the critical slenderness ratio, the column is short; otherwise, it is long.
The slenderness ratio S, in simple terms, is a means to define the aspect ratio of a column or a building. The more slender the object, the larger the reinforcement it needs to dampen its vibration and swaying. The slenderness ratio is the ratio of the effective length of the column, L_{e}, to the least radius of gyration, R:
S = \frac{L_{e}}{R}
The radius of gyration is the square root of the ratio of the area moment of inertia, I, to the cross-sectional area, A. Mathematically, this is:
R = \sqrt{\frac {I}{A}}
Further, the critical slenderness ratio, S_{crit}, is a function of the yield stress, \sigma_y, and Young's modulus, E:
S_{crit} = \sqrt {\frac {2 \pi^2 E}{\sigma_y}}
The column buckling calculator first calculates the slenderness ratio and compares it with the critical slenderness ratio, S_{crit}. If the slenderness ratio is less than its critical counterpart, then we classify the column as a short column. Short columns are more prone to fail due to compression, without much buckling. Therefore, Johnson's formula is used to calculate the load in such cases. Johnson's formula for short columns is:
\footnotesize F = \sigma_y A \left [ 1 - \left (\frac{\sigma_y}{4 \pi^2 E} \right ) \left ( \frac{L_e}{R} \right )^2 \right ]
Suppose the slenderness ratio is larger than the critical slenderness ratio. In that case, the column is considered a long one, and you must use Euler's column buckling equation to calculate the critical buckling load for the said column.
The column buckling equation gives the critical load for buckling to occur in a column. For a column having an effective length L_e, area moment of inertia I, and Young's modulus E, the buckling load F is:
F = \frac{\pi^2 E I}{L_e^2}
The effective length, L_e, is a function of the measured length of the column, L, and the effective length factor, K: L_e = K \cdot L. The effective length factor is obtained from the boundary conditions applied to the column. The table below mentions the value of the K factor for each support condition.
Fixed — Fixed
Fixed — Free
Fixed — Pinned
Fixed — Guided
Guided — Guided
Pinned — Pinned
Pinned — Free
Guided — Free
Guided — Pinned
Boundary conditions for column
Each boundary condition has different restrictions on the associated degrees of freedom at the top and bottom ends of the column. Use the following table if you get confused among these. Considering the displacements along the 1, 2, and 3 directions as U_1, U_2, U_3 and the rotations as UR_1, UR_2, UR_3: a 0 implies the degree of freedom is restricted, whereas a dash means the degree of freedom is free.
Boundary conditions for column buckling
Let's find out the critical buckling load for a column with the following properties:
Boundary conditions: Free-Free;
Area moment of inertia: 100 mm⁴;
Area of cross section: 1.2 mm²;
Length of the column: 1 m;
Material: Al6061-O.
To find the buckling load:
Select the boundary condition as Free-Free.
Enter the area moment of inertia as 100 mm⁴.
Insert the area of cross-section as 1.2 mm².
The calculator returns the radius of gyration:
\qquad \scriptsize R = \sqrt{\frac {I}{A}} = \sqrt{\frac {100}{1.2}}\ \text{mm} = 9.13\ \text{mm} = 0.00913 \text{ m}
Fill in the length of the column as 1 m.
The calculator determines the effective length as 1.2 m.
Select the material as Al6061-O to get properties such as Young's modulus and yield stress.
The calculator now checks the slenderness ratio to classify the column:
\scriptsize \begin{align*} \qquad S_{crit} &= \sqrt {\frac {2 \times \pi^2 \times 69 \times 10^9}{83 \times 10^6}} = 128 \\ S &= \frac{1.2}{0.00913} = 131.45 \end{align*}
The slenderness ratio is greater than the critical value; therefore, the buckling calculator determines Euler's critical load (converting I to SI units, 100 mm⁴ = 100 × 10⁻¹² m⁴):
\qquad \scriptsize F = \frac{\pi^2 \times 69 \times 10^9 \times 100 \times 10^{-12}}{1.2^2} = 47.3\ \text N
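The worked example can be reproduced in a few lines. The following sketch (our own, in SI units throughout) mirrors the calculator's logic, including the short/long classification:

```python
import math

# Al6061-O column from the example (SI units)
E = 69e9           # Young's modulus, Pa
sigma_y = 83e6     # yield stress, Pa
I = 100e-12        # area moment of inertia: 100 mm^4 -> m^4
A = 1.2e-6         # cross-sectional area: 1.2 mm^2 -> m^2
L = 1.0            # measured length, m
K = 1.2            # effective length factor returned by the calculator
Le = K * L         # effective length, m

R = math.sqrt(I / A)                                # radius of gyration
S = Le / R                                          # slenderness ratio
S_crit = math.sqrt(2 * math.pi ** 2 * E / sigma_y)  # critical slenderness ratio

if S > S_crit:
    # long column: Euler's formula
    F = math.pi ** 2 * E * I / Le ** 2
else:
    # short column: Johnson's formula
    F = sigma_y * A * (1 - sigma_y / (4 * math.pi ** 2 * E) * (Le / R) ** 2)

print(round(F, 1))  # ~47.3 N
```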
Using CAE simulations for buckling analysis
You can use the buckling analysis option available through most computer-aided engineering (CAE) tools like ANSYS or ABAQUS to simulate your column's buckling failure and determine the buckling load. The load obtained through the simulations can be verified with hand calculations.
What is buckling of a column?
The sudden reduction in stiffness of a tall structure under compression loads is known as buckling. It may occur at stress values lower than the yield stress, and it is a function of material stiffness and slenderness rather than strength. The load at which a column begins to buckle is the critical buckling load, and one can estimate it using Euler's formula.
How do I calculate critical buckling load for a column?
To calculate critical load using Euler's formula for a column:
Find the square of pi.
Multiply the square by the Young's modulus of the material.
Multiply the product by the area moment of inertia.
Divide the resultant by the square of effective length of the column to obtain the critical buckling load. Mathematically, π² × E × I / L_e².
What are the factors affecting critical load?
The critical buckling load depends on the stiffness of columns. It is a function of area moment of inertia and effective length of the column, and the Young's modulus of the column material. Further, the effective length depends upon the measurable length of the column and the effective length factor, which is obtained from the boundary conditions for the column.
What is the effective length factor for both fixed ends?
The effective length factor when both ends of a column are fixed is 0.65. The fixed end corresponds to the boundary condition in which the ends are restricted in both displacement and rotation along all three axes. Mathematically, Ux = Uy = Uz = 0; URx = URy = URz = 0.
|
Geographic coordinate system
For broader coverage of this topic, see Spatial reference system.
The geographic coordinate system (GCS) is a spherical or ellipsoidal coordinate system for measuring and communicating positions directly on the Earth as latitude and longitude.[1] It is the simplest, oldest, and most widely used of the various spatial reference systems in use, and forms the basis for most others. Although latitude and longitude form a coordinate tuple like a Cartesian coordinate system, the geographic coordinate system is not Cartesian because the measurements are angles and are not on a planar surface.[2][self-published source?]
A full GCS specification, such as those listed in the EPSG and ISO 19111 standards, also includes a choice of geodetic datum (including an Earth ellipsoid), as different datums will yield different latitude and longitude values for the same location.[3]
Further information: History of geodesy, history of longitude, and history of prime meridians
The invention of a geographic coordinate system is generally credited to Eratosthenes of Cyrene, who composed his now-lost Geography at the Library of Alexandria in the 3rd century BC.[4] A century later, Hipparchus of Nicaea improved on this system by determining latitude from stellar measurements rather than solar altitude and determining longitude by timings of lunar eclipses, rather than dead reckoning. In the 1st or 2nd century, Marinus of Tyre compiled an extensive gazetteer and mathematically plotted world map using coordinates measured east from a prime meridian at the westernmost known land, designated the Fortunate Isles, off the coast of western Africa around the Canary or Cape Verde Islands, and measured north or south of the island of Rhodes off Asia Minor. Ptolemy credited him with the full adoption of longitude and latitude, rather than measuring latitude in terms of the length of the midsummer day.[5]
Ptolemy's 2nd-century Geography used the same prime meridian but measured latitude from the Equator instead. After their work was translated into Arabic in the 9th century, Al-Khwārizmī's Book of the Description of the Earth corrected Marinus' and Ptolemy's errors regarding the length of the Mediterranean Sea,[note 1] causing medieval Arabic cartography to use a prime meridian around 10° east of Ptolemy's line. Mathematical cartography resumed in Europe following Maximus Planudes' recovery of Ptolemy's text a little before 1300; the text was translated into Latin at Florence by Jacobus Angelus around 1407.
In 1884, the United States hosted the International Meridian Conference, attended by representatives from twenty-five nations. Twenty-two of them agreed to adopt the longitude of the Royal Observatory in Greenwich, England as the zero-reference line. The Dominican Republic voted against the motion, while France and Brazil abstained.[6] France adopted Greenwich Mean Time in place of local determinations by the Paris Observatory in 1911.
The "latitude" (abbreviation: Lat., φ, or phi) of a point on Earth's surface is the angle between the equatorial plane and the straight line that passes through that point and through (or close to) the center of the Earth.[note 2] Lines joining points of the same latitude trace circles on the surface of Earth called parallels, as they are parallel to the Equator and to each other. The North Pole is 90° N; the South Pole is 90° S. The 0° parallel of latitude is designated the Equator, the fundamental plane of all geographic coordinate systems. The Equator divides the globe into Northern and Southern Hemispheres.
The "longitude" (abbreviation: Long., λ, or lambda) of a point on Earth's surface is the angle east or west of a reference meridian to another meridian that passes through that point. All meridians are halves of great ellipses (often called great circles), which converge at the North and South Poles. The meridian of the British Royal Observatory in Greenwich, in southeast London, England, is the international prime meridian, although some organizations—such as the French Institut national de l'information géographique et forestière—continue to use other meridians for internal purposes. The prime meridian determines the proper Eastern and Western Hemispheres, although maps often divide these hemispheres further west in order to keep the Old World on a single side. The antipodal meridian of Greenwich is both 180°W and 180°E. This is not to be conflated with the International Date Line, which diverges from it in several places for political and convenience reasons, including between far eastern Russia and the far western Aleutian Islands.
The combination of these two components specifies the position of any location on the surface of Earth, without consideration of altitude or depth. The visual grid on a map formed by lines of latitude and longitude is known as a graticule.[7] The origin/zero point of this system is located in the Gulf of Guinea about 625 km (390 mi) south of Tema, Ghana, a location often facetiously called Null Island.
Main article: Geodetic datum
Further information: Figure of the Earth, Reference ellipsoid, Geographic coordinate conversion, and Spatial reference system
In order to be unambiguous about the direction of "vertical" and the "horizontal" surface above which they are measuring, map-makers choose a reference ellipsoid with a given origin and orientation that best fits their need for the area to be mapped. They then choose the most appropriate mapping of the spherical coordinate system onto that ellipsoid, called a terrestrial reference system or geodetic datum.
Datums may be global, meaning that they represent the whole Earth, or they may be local, meaning that they represent an ellipsoid best-fit to only a portion of the Earth. Points on the Earth's surface move relative to each other due to continental plate motion, subsidence, and diurnal Earth tidal movement caused by the Moon and the Sun. This daily movement can be as much as a meter. Continental movement can be up to 10 cm a year, or 10 m in a century. A weather system high-pressure area can cause a sinking of 5 mm. Scandinavia is rising by 1 cm a year as a result of the melting of the ice sheets of the last ice age, but neighboring Scotland is rising by only 0.2 cm. These changes are insignificant if a local datum is used, but are statistically significant if a global datum is used.[8]
Examples of global datums include World Geodetic System (WGS 84, also known as EPSG:4326[9]), the default datum used for the Global Positioning System,[note 3] and the International Terrestrial Reference System and Frame (ITRF), used for estimating continental drift and crustal deformation.[10] The distance to Earth's center can be used both for very deep positions and for positions in space.[8]
Local datums chosen by a national cartographical organization include the North American Datum, the European ED50, and the British OSGB36. Given a location, the datum provides the latitude φ and longitude λ.
The latitude and longitude on a map made against a local datum may not be the same as one obtained from a GPS receiver. Converting coordinates from one datum to another requires a datum transformation such as a Helmert transformation, although in certain situations a simple translation may be sufficient.[11]
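As an illustration of such a datum transformation, the standard seven-parameter Helmert transformation (small-angle form) can be sketched as follows. The parameter values used in the example are purely illustrative, not those of any published datum pair:

```python
import math

def helmert_transform(xyz, tx, ty, tz, s_ppm, rx_as, ry_as, rz_as):
    """Seven-parameter Helmert transformation (small-angle form).

    xyz            : geocentric Cartesian coordinates in metres
    tx, ty, tz     : translations in metres
    s_ppm          : scale change in parts per million
    rx_as .. rz_as : rotations about the axes in arc-seconds
    """
    to_rad = math.pi / (180 * 3600)          # arc-seconds -> radians
    rx, ry, rz = rx_as * to_rad, ry_as * to_rad, rz_as * to_rad
    s = 1 + s_ppm * 1e-6                     # scale factor
    x, y, z = xyz
    # Small-angle rotation applied row by row, then scale and translate
    return (tx + s * (x - rz * y + ry * z),
            ty + s * (rz * x + y - rx * z),
            tz + s * (-ry * x + rx * y + z))

# With zero rotation and zero scale change, the transform reduces to a translation
shifted = helmert_transform((3980000.0, -100.0, 4970000.0),
                            89.5, -93.8, 123.1, 0.0, 0.0, 0.0, 0.0)
```

Real datum pairs publish their own parameter sets (and sometimes higher-accuracy grid-based methods), so this is only the shape of the computation.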
In popular GIS software, data projected in latitude/longitude is often represented as a Geographic Coordinate System. For example, latitude/longitude data referenced to the North American Datum of 1983 is denoted 'GCS North American 1983'.
Main articles: Length of a degree of latitude and Length of a degree of longitude
See also: Arc length § Great circles on Earth
On the GRS80 or WGS84 spheroid at sea level at the Equator, one latitudinal second measures 30.715 meters, one latitudinal minute is 1843 meters and one latitudinal degree is 110.6 kilometers. The circles of longitude, meridians, meet at the geographical poles, with the west–east width of a second naturally decreasing as latitude increases. On the Equator at sea level, one longitudinal second measures 30.92 meters, a longitudinal minute is 1855 meters and a longitudinal degree is 111.3 kilometers. At 30° a longitudinal second is 26.76 meters, at Greenwich (51°28′38″N) 19.22 meters, and at 60° it is 15.42 meters.
The length in meters of a degree of latitude at latitude φ is approximately

111132.92 − 559.82 cos 2φ + 1.175 cos 4φ − 0.0023 cos 6φ

and the length in meters of a degree of longitude is approximately

111412.84 cos φ − 93.5 cos 3φ + 0.118 cos 5φ

which can also be approximated as (π/180) M_r cos φ, where Earth's average meridional radius M_r is 6,367,449 m. Since the Earth is an oblate spheroid, not spherical, that result can be off by several tenths of a percent; a better approximation of a longitudinal degree at latitude φ is (π/180) a cos β, where a is the Earth's equatorial radius, tan β = (b/a) tan φ, and β is known as the reduced (or parametric) latitude. Aside from rounding, this is the exact distance along a parallel of latitude; getting the distance along the shortest route will be more work, but those two distances are always within 0.6 meter of each other if the two points are one degree of longitude apart.
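The two series can be evaluated directly. A small Python sketch, with results in metres, for the WGS84/GRS80 figures quoted in this section:

```python
import math

def lat_degree_length_m(phi_deg):
    """Length in metres of one degree of latitude at geodetic latitude phi,
    using the series expansion quoted above (WGS84/GRS80)."""
    p = math.radians(phi_deg)
    return (111132.92 - 559.82 * math.cos(2 * p)
            + 1.175 * math.cos(4 * p) - 0.0023 * math.cos(6 * p))

def lon_degree_length_m(phi_deg):
    """Length in metres of one degree of longitude at geodetic latitude phi."""
    p = math.radians(phi_deg)
    return (111412.84 * math.cos(p) - 93.5 * math.cos(3 * p)
            + 0.118 * math.cos(5 * p))

# At the Equator a longitudinal degree is ~111.32 km; at 60° it is ~55.80 km,
# matching the figures given for Saint Petersburg.
equator_lon = lon_degree_length_m(0.0)
high_lat_lon = lon_degree_length_m(60.0)
```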
At 60° (Saint Petersburg), a degree of longitude is 55.80 km, a minute 0.930 km, a second 15.50 m, and 0.0001° is 5.58 m. At 30° (New Orleans), the corresponding lengths are 96.49 km, 1.61 km, 26.80 m, and 9.65 m.
Alternate encodings of geographic coordinates include:
the Maidenhead Locator System, popular with radio operators.
the World Geographic Reference System (GEOREF), developed for global military operations, replaced by the current Global Area Reference System (GARS).
Open Location Code or "Plus Codes," developed by Google and released into the public domain.
Geohash, a public domain system based on the Morton Z-order curve.
Decimal degrees – Angular measurements, typically for latitude and longitude
Geographical distance – Distance measured along the surface of the earth
Geographic information system – System to capture, manage and present geographic data
Geo URI scheme
ISO 6709, standard representation of geographic point location by coordinates
Linear referencing
Primary direction – Celestial coordinate system
Planetary coordinate system
Spatial reference system – System to specify locations on Earth
^ The pair had accurate absolute distances within the Mediterranean but underestimated the circumference of the Earth, causing their degree measurements to overstate its length west from Rhodes or Alexandria, respectively.
^ Taylor, Chuck. "Locating a Point On the Earth". Archived from the original on 3 March 2016. Retrieved 4 March 2014.
^ "Using the EPSG geodetic parameter dataset, Guidance Note 7-1". EPSG Geodetic Parameter Dataset. Geomatic Solutions. Retrieved 15 December 2021.
^ McPhail, Cameron (2011), Reconstructing Eratosthenes' Map of the World (PDF), Dunedin: University of Otago, pp. 20–24.
^ Evans, James (1998), The History and Practice of Ancient Astronomy, Oxford, England: Oxford University Press, pp. 102–103, ISBN 9780199874453.
^ Greenwich 2000 Limited (9 June 2011). "The International Meridian Conference". Wwp.millennium-dome.com. Archived from the original on 6 August 2012. Retrieved 31 October 2012.
^ American Society of Civil Engineers (1 January 1994). Glossary of the Mapping Sciences. ASCE Publications. p. 224. ISBN 9780784475706.
^ a b c A guide to coordinate systems in Great Britain (PDF), D00659 v3.6, Ordnance Survey, 2020, retrieved 17 December 2021
^ "WGS 84: EPSG Projection -- Spatial Reference". spatialreference.org. Retrieved 5 May 2020.
^ Bolstad, Paul (2012). GIS Fundamentals (PDF) (5th ed.). Atlas books. p. 102. ISBN 978-0-9717647-3-6.
^ "Making maps compatible with GPS". Government of Ireland 1999. Archived from the original on 21 July 2011. Retrieved 15 April 2008.
Portions of this article are from Jason Harris' "Astroinfo" which is distributed with KStars, a desktop planetarium for Linux/KDE. See The KDE Education Project - KStars
Media related to Geographic coordinate system at Wikimedia Commons
How to Calculate Relative Error: 9 Steps (with Pictures) - wikiHow
Absolute error is the actual amount you were off, or mistaken by, when measuring something. Relative error compares the absolute error against the size of the thing you were measuring. In order to calculate relative error, you must calculate the absolute error as well. If you tried to measure something that was 12 inches long and your measurement was off by 6 inches, the relative error would be very large. But, if you tried to measure something that was 120 feet long and only missed by 6 inches, the relative error would be much smaller -- even though the value of the absolute error, 6 inches, has not changed.[1]
Calculating Absolute Error
When given an expected value, subtract the value you got from the expected value to get the Absolute Error. An expected value is usually found on tests and in school labs; it represents the accepted, most precise measurement for a common equation or reaction. You can compare your own results against it to find the Absolute Error, which measures how far off you were from the expected result. To do so, simply subtract the measured value from the expected one. Even if the result is negative, make it positive. This is your absolute error![2]
Example: You want to know how accurately you estimate distances by pacing them off. You pace from one tree to another and estimate that they're 18 feet apart. This is the experimental value. Then you come back with a long measuring tape to measure the exact distance, finding out that the trees are in fact 20 feet (6 meters) apart. That is the "real" value. Your absolute error is 20 - 18 = 2 feet (60.96 centimeters).[3]
Alternatively, when measuring something, assume the absolute error to be the smallest unit of measurement at your disposal. For example, if you're measuring something with a meter stick, the smallest unit marked on the meter stick is 1 millimeter (mm). So you know that your measurement is accurate to within + or - 1 mm; your absolute error is 1 mm.
This works for any measurement system. Many scientific tools, like precision droppers and measurement equipment, often have the absolute error labeled on the side as "+/- ____".
Always add the appropriate units. Say your Absolute Error was "2 meters." This tells your viewers exactly how far off your error was. But if you write that your error was simply "2," this doesn't tell your audience anything. Use the same units as the ones in your measurements.
Practice with several examples. The best way to learn how to calculate error is to go ahead and calculate it. Take a stab at the following problems, then highlight the space after the colon (:) to see your answer.
Jill is studying chemical reactions. After mixing and matching, her test tube contains 32 grams of substrate. The accepted value for her experiment was 34 grams. Her Absolute Error is: +/- 2 grams
Clive is testing reactions in chemistry. It takes 10ml drops of water to cause a reaction, but his dropper claims it is "+/- .5ml." The Absolute Error in his measurements must be: +/- .5ml
Understand what causes error, and how you can work to eliminate it. No scientific study is ever perfectly error free -- even Nobel Prize winning papers and discoveries have a margin of error attached. Still, understanding where error comes from is essential to help try and prevent it:[4]
Human error is the most common. This is from bad measurements, faulty premises, or mistakes in the lab.
Incidental energy/material loss, such as the little fluid left in the beaker after pouring, changes in temperature due to the environment, etc.
Imperfect equipment used either for measurement or studies, such as very small, precise measurements or burners that provide uneven heat.[5]
Calculating Relative Error
Divide the Absolute Error by the Actual Value of the item in question. The result is the relative error.
Note that in most cases the unit of measurement of the absolute error will be the same as the unit of measurement of the actual value, and the units will cancel each other. This leaves the relative error without any units of measurement.
This simple equation tells you how far off you were in comparison to the overall measurement. A low relative error is, of course, desirable. To continue the example of measuring between two trees:
Your Absolute Error was 2 feet, and the Actual Value was 20 feet.
2 ft / 20 ft = 0.1
Multiply the answer by 100 to get an easier to understand percentage. Leave the relative error in fraction form, complete the division to render it in decimal form, or multiply the resulting decimal form by 100 to render your answer as a percentage. This tells you what percentage of the final measurement you messed up by. If you are measuring a 200-foot boat and miss the measurement by 2 feet, your percentage error will be much lower than missing the 20-foot tree measurement by 2 feet. The error is a smaller percentage of the total measurement.[6]
2 ft / 20 ft = 0.1, and 0.1 × 100 = 10% Relative Error.
Calculate Relative Error all at once by turning the numerator (top of fraction) into your Absolute Error equation. Once you understand the difference between Absolute and Relative Error, there is really no reason to do each step separately. Simply substitute the Absolute Error equation into the numerator. Note that the vertical bars are absolute value signs, meaning anything within them must be positive.
Relative Error = |Measured - Actual| / Actual
Multiply the whole thing by 100 to get the Relative Error Percentage all at once.[7]
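The whole calculation fits in a few lines of Python. Here is a quick sketch using the tree example from earlier:

```python
def relative_error(measured, actual, as_percent=False):
    """Relative error = |measured - actual| / actual, optionally as a percentage."""
    rel = abs(measured - actual) / abs(actual)
    return rel * 100 if as_percent else rel

# Tree example from above: paced off 18 ft, true distance 20 ft
r = relative_error(18, 20)            # 0.1
pct = relative_error(18, 20, True)    # 10.0
```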
Always provide units as context. Let the audience know the units you're using for measurement. However, the relative error does not employ units of measurement. It is expressed as a fraction or a percentage, such as a relative error of 10%.
What does the +/- sign tell about the relative percentage error?
It means the reported or estimated amount could be higher or lower than the true amount.
What is the difference between systematic and random errors?
Systematic errors are those which occur according to a certain pattern or system; these errors are due to known reasons. Random errors have no set pattern or cause.
If the absolute error was 0.94, then what will the relative error be?
Relative error, as mentioned in the answer, equals (Absolute Error)/(Actual Value). Hence, it isn't possible to calculate relative error just by knowing the absolute error.
Make sure that your experimental value and real value are all expressed in the same unit of measurement. For example, if your experimental value is in inches but your real value is in feet, you must convert one of them to the other unit of measurement.
If taking the Regents exam, make sure you round correctly.
↑ http://www2.phy.ilstu.edu/~wenning/slh/Absolute%20Relative%20Error.pdf
Before you can calculate relative error, you must calculate the absolute error in your calculations. To do this, subtract your answer from the expected value, or the correct answer. Write the answer as a positive number, even if it’s negative, and add the appropriate units. To get the relative error, divide the absolute error by the actual value of the item in question. If you’d like, you can multiply the answer by 100 to display it as a percentage. To understand when you would need to use relative error, read on!
Bond YTM Calculator | Yield to Maturity
What is yield to maturity (YTM)? The bond YTM meaning
How to calculate yield to maturity. The bond yield to maturity formula
Why is bond YTM important?
The bond YTM calculator is a handy tool that you can use to calculate the bond yield to maturity and determine the rate of return that an investor can expect from a bond. As this metric is one of the most significant factors that can impact the bond price, it is essential that an investor fully understands the calculation of bond yield to maturity.
We have written this article to help you understand the bond YTM meaning, how to calculate it using the bond YTM formula, and the factors that cause the bond YTM to rise and fall. We will also demonstrate some examples to help you understand the bond yield to maturity formula.
Before we talk about the bond YTM (yield to maturity) calculation, we must first understand what a bond is; only then can you understand what yield to maturity is. A bond is a financial instrument that governments and companies issue to get debt funding from the public.
If you invest in a bond, you are entitled to collect a fixed set of cash payments until the bond matures. The payments you receive can be seen as regular interest earnings, or coupon payments. When you arrive at the end of the bond's lifespan, or its maturity date, you get not only the last interest payment, but also recover the face value of the bond, that is, the bond's principal.
As bonds are a particular investment, their precise evaluation is crucial in the eyes of investors. The most apparent aspect of the assessment is whether money is made or lost on the investment, that is, what the return on the financial transaction is. And this is what the bond YTM definition represents.
The YTM can be thought of as the rate of return of a bond. If you hold the bond to maturity after buying it in the market and are able to reinvest the coupons at the YTM, the YTM will be the internal rate of return (IRR) of your bond investment.
Now that we know the YTM definition, let's look at some examples to understand how to find the YTM of a bond.
The bond yield to maturity formula needs five inputs, which you can find in our bond YTM calculator:
Let's take Bond A issued by Company Alpha, which has the following data, as an example of how to calculate yield to maturity:
Face value: $1,500;
Bond price: $1,350;
Annual coupon rate: 6%;
Coupon Frequency: Annual; and
Years to maturity: 15 years.
The face value is equivalent to the principal of the bond. In our example, face value = $1,500.
The bond price is the money an investor has to pay to acquire the bond. It can be found on most financial data websites. The bond price of Bond A is $1,350.
The coupon rate is the annual interest you will receive by investing in the bond, and the frequency is the number of times you will receive it in a year.
In our example, Bond A has a coupon rate of 6% and an annual frequency. This means that the bond will pay $1,500 * 6% = $90 as interest annually.
Calculate the bond YTM
The bond YTM can be seen as the internal rate of return of the bond investment if the investor holds it until it matures and reinvests the coupons at the same interest rate. Hence, the bond YTM formula involves solving for the bond YTM, r, in the equation below:
\text{bond price} = \sum_{k=1}^{n} [cf_k / (1 + r)^k]

where:
cf - the cash flows, i.e., coupons or the principal;
r - the bond YTM; and
n - the years to maturity.
This calculation involves a complex iteration, and it is nearly impossible to do it by hand in a reasonable time. That's why we have built this bond YTM calculator for you!
$1350 = $90/(1 + r)^1 + $90/(1 + r)^2 + $90/(1 + r)^3 + ... + $90/(1 + r)^14 + $1590/(1 + r)^15
After the estimation, our bond YTM calculator gives a bond YTM of r = 7.11%.
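Because the present value of the cash flows decreases monotonically as r grows, the iteration can be carried out by simple bisection. Below is a minimal Python sketch (an illustration, not Omni's actual implementation) using the example bond's stated inputs: face value $1,500, price $1,350, a 6% annual coupon (i.e., $90 per year), and 15 years to maturity:

```python
def bond_ytm(price, face, coupon_rate, years, lo=1e-4, hi=1.0, tol=1e-8):
    """Solve price = sum_k coupon/(1+r)^k + face/(1+r)^n for r by bisection.

    Assumes annual coupons and holding the bond to maturity.
    """
    coupon = face * coupon_rate

    def present_value(r):
        # Discounted coupons plus the discounted principal repayment
        return (sum(coupon / (1 + r) ** k for k in range(1, years + 1))
                + face / (1 + r) ** years)

    # present_value(r) decreases as r grows, so bisect on it
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if present_value(mid) > price:
            lo = mid          # discount rate too low
        else:
            hi = mid          # discount rate too high
    return (lo + hi) / 2

ytm = bond_ytm(price=1350, face=1500, coupon_rate=0.06, years=15)   # ~0.0711
```

Production code would typically use a dedicated root finder (e.g., Brent's method), but bisection is enough to recover the quoted 7.11%.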
Now that you understand the bond YTM meaning and how to calculate the bond YTM, let's explore its importance in analyzing bonds:
Bond dynamics are complex. A bond with a higher coupon rate does not necessarily give you a better return, and the magnitude of the face value tells investors little about the returns they can expect. This is where bond YTM comes in.
As the bond YTM represents the internal rate of return of a bond investment, calculating the bond YTM allows you to compare the returns of different bonds. Using the bond YTM, you can find which bond will bring the highest returns, regardless of their face value, coupon rate, etc.
Does bond yield equal to bond YTM?
Technically, yes. Bond yield will equal bond YTM if you hold the bond until its maturity and reinvest the coupons at the same rate as the bond YTM.
How do I calculate the yield to maturity of a bond?
You can calculate the yield to maturity of a bond in three steps:
Check the face value, bond price, annual coupon rate, and years to maturity of your bond.
Find the cash flows for each year.
Calculate the bond YTM from the below formula. Note that it involves complex iteration:
bond price = Σ_{k=1}^{n} [cash flow_k / (1 + YTM)^k]
What causes bond YTM to fall?
There are several factors that can make bond YTM fall. For instance, the lower the inflation, the lower the bond YTM. The less volatile the market condition, the lower the bond YTM.
Can bond YTM be negative?
Yes, bond YTM can be negative. It happens every now and then, even though it is not common. This situation normally happens when inflation is out of control and the market is unstable.
In such a situation, even a negative bond YTM is still better than storing cash since hyperinflation might happen.
Gospel of Matthew 10-11 †
Papyrus 19 (in the Gregory-Aland numbering), designated by 𝔓19, is an early copy of the New Testament in Greek. The manuscript has been assigned paleographically to the 4th or 5th century.[1]
The papyrus is currently housed at the Bodleian Library, Gr. bibl. d. 6 (P) at the University of Oxford.[1][2]
Papyrus 19 is a papyrus manuscript of the Gospel of Matthew, containing text for Matt 10:32-11:5. The leaf is complete at the top and bottom, but broken at the sides.[3]
The Greek text of this codex is a representative of the Alexandrian text-type. Aland placed it in Category II.[1]
Notable Readings
Matthew 10:34 has the variant ουν νομίσητε (Therefore, you [plural] think) instead of μη νομίσητε (Do not think).[3]
Matthew 10:37b-38 is also omitted in the Hebrew Shem Tov Matthew manuscript.[4]
^ a b c Aland, Kurt; Aland, Barbara (1995). The Text of the New Testament: An Introduction to the Critical Editions and to the Theory and Practice of Modern Textual Criticism. Erroll F. Rhodes (trans.). Grand Rapids: William B. Eerdmans Publishing Company. p. 97. ISBN 978-0-8028-4098-1.
^ "Handschriftenliste". Münster: Institute for New Testament Textual Research. Retrieved 23 August 2011.
^ a b Grenfell, B. P.; Hunt, A. S. (1912). Oxyrhynchus Papyri IX. London. p. 7.
^ Howard, George (1995). Hebrew Gospel of Matthew (2nd ed.). Macon, Georgia: Mercer University Press. pp. 186–187. ISBN 0-86554-442-5.
Grenfell, B. P.; Hunt, A. S. (1912). Oxyrhynchus Papyri IX. London. pp. 4–8.
P. Oxy. 1170 at the Oxyrhynchus online
Retrieved from "https://en.wikipedia.org/w/index.php?title=Papyrus_19&oldid=995528825"
Hazy synthetic data quality metrics explained - Hazy
By Armando Vieira on 15 Jan 2021.
Synthetic data enables fast innovation by providing a safe way to share very sensitive data, like banking transactions, without compromising privacy. After removing personal identifiers, like IDs, names and addresses, Hazy machine learning algorithms generate a synthetic version of real data that retains almost the same statistical aspects of the original data but that will not match any real record.
In other words, the synthetic data keeps all the data value while not compromising any of the privacy.
Because synthetic data is a relatively new field, many concerns are raised by stakeholders when dealing with it — mainly on quality and safety. Our most common questions are:
How do you know that the synthetic data preserves the same richness, correlations and properties of the original data?
How can we be sure the synthetic data is really safe and can’t be reverse engineered to disclose private information?
In order to answer these questions, Hazy has developed a set of metrics to quantify the quality and safety of our synthetic data generation. Today we will explain the quality metrics, bringing rigour to the discussion of the quality of our synthetic data. Another blog post will tackle the essential privacy and security questions.
Five ways to measure synthetic data quality
It’s important to our users that they are able to verify the quality of our synthetic data before they use it in production. With this in mind, Hazy has five major metrics to assess the quality of our synthetic data generation.
Synthetic Data Quality Metric #1: Histogram Similarity
Assuming data is tabular, this synthetic data metric quantifies the overlap of original versus synthetic data distributions corresponding to each column. If both distributions overlap perfectly this metric is 1, and it’s 0 if no overlap is found. Histogram Similarity is the easiest metric to understand and visualise. Any model should be able to generate synthetic data with a Histogram Similarity score above 0.80, with an 80 percent histogram overlap.
Histogram Similarity for the adult dataset for the variable age. The synthetic data distribution follows pretty closely the original data except on the tails.
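As a sketch of how such an overlap score can be computed (the exact binning Hazy uses is not specified in the post, so the equal-width binning below is an illustrative assumption, not Hazy's implementation):

```python
def histogram_similarity(real, synthetic, bins=10):
    """Overlap of two normalised histograms: 1.0 means identical
    distributions, 0.0 means no overlap at all. Bin edges are shared
    so both samples are counted against the same grid."""
    lo = min(min(real), min(synthetic))
    hi = max(max(real), max(synthetic))
    width = (hi - lo) / bins or 1.0  # guard against constant data

    def normalised_hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        return [c / len(xs) for c in counts]

    h_real, h_synth = normalised_hist(real), normalised_hist(synthetic)
    # The overlap of two densities is the sum of the bin-wise minima.
    return sum(min(a, b) for a, b in zip(h_real, h_synth))
```

Identical samples score 1.0; fully disjoint samples score 0.0, matching the description above.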
Synthetic Data Quality Metric #2:
Mutual Information (MI)¶
Histogram Similarity is important, but it fails to capture the dependencies between different columns in the data. For that purpose we use Mutual Information, which measures the co-dependencies — or correlations, if the data is numeric — between all pairs of variables. Quantifying information is an abstract but very powerful concept that allows us to understand the relationship between variables when we have no other way to do so.
Mutual information between a pair of variables X and Y quantifies how much information about Y can be obtained by observing variable X:
MI(X;Y) = \sum_{x \in X} \sum_{y \in Y} p(x, y) log \frac{p(x, y)}{p(x)p(y)}
where p(x) is the probability of observing x, p(y) is the probability of observing y, and p(x, y) is the joint probability of observing x and y together. It can be shown that
MI(X;Y) = H(Y) - H(Y \mid X)
where
H = - \sum_{i} p_{i} \log_{2} p_{i}
is the entropy, or information, contained in each variable.
Mutual Information is not an easy concept to grasp. Let's explore the following example to help explain its meaning. Suppose we want to evaluate the Mutual Information between X (blood type) and Y (blood pressure) as a potential indicator for the likelihood of skin cancer. The following table contains hypothetical probabilities of skin cancer for all combinations of X and Y:
[Table of hypothetical probabilities for each combination of X (Blood Type) and Y (Blood Pressure)]
The question is: how much information does each variable contain and how much information can we get from X, given Y? Information can be counterintuitive. It is equivalent to the uncertainty or randomness of a variable. In the series of events (head, tails) of tossing a coin each realization has maximum information (entropy) — it means that observing any length of past events would not help us predict the very next event. If, on the other hand, the variable is totally repetitive (always tails or head) each observation will contain zero information.
To evaluate these quantities we simply compute the marginals of X and Y (sums over rows and columns):
Marginal X = (\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \frac{1}{8})
Marginal Y = (\frac{1}{4}, \frac{1}{4}, \frac{1}{4}, \frac{1}{4})
And then the information H for variable X is obtained by summing over the marginals of X,
H(X) = - \sum_{i=1}^{4} p_{i} \log_{2} p_{i} = 7/4 \text{ bits}
The same calculation for Y gives H(Y) = 2 bits, so Y (blood pressure) is more informative about skin cancer than X (blood type).
The MI between X and Y is:
H(X) - H(X \mid Y) = 7/4 - 11/8 = 0.375 \text{ bits}
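The worked example can be checked numerically. The joint table below is a hypothetical distribution chosen to reproduce the marginals used in the text (it is an illustration, not real data):

```python
from math import log2

def mutual_information(joint):
    """MI(X;Y) = sum over x, y of p(x,y) * log2( p(x,y) / (p(x) p(y)) ).
    joint[i][j] holds the joint probability p(X=i, Y=j)."""
    p_x = [sum(row) for row in joint]
    p_y = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, p_xy in enumerate(row):
            if p_xy > 0:  # 0 * log(0) is taken as 0
                mi += p_xy * log2(p_xy / (p_x[i] * p_y[j]))
    return mi

# Hypothetical joint distribution matching the example's marginals:
# marginal X = (1/2, 1/4, 1/8, 1/8), marginal Y = (1/4, 1/4, 1/4, 1/4).
joint = [
    [1/8,  1/16, 1/16, 1/4],
    [1/16, 1/8,  1/16, 0],
    [1/32, 1/32, 1/16, 0],
    [1/32, 1/32, 1/16, 0],
]
```

With this table, `mutual_information(joint)` recovers the 0.375 bits derived above.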
As a side note, if X and Y are normal distributions with a correlation of \rho, then the mutual information will be
-\frac{1}{2}\log(1-\rho^2)
which grows logarithmically as \rho approaches 1.
The Mutual Information score is calculated over all possible pairs of variables in the data as the relative change in Mutual Information from the original to the synthetic data:
MI_{score} = \sum_{i=1}^{N} \sum_{j=1}^{N} \left[ \frac{ MI(x_{i},x_{j}) } { MI(\hat{x_{i}},\hat{x_{j}}) } \right]
where x is the original data and \hat{x} is the synthetic data.
Good synthetic data should have a Mutual Information score of no less than 0.5. The next figure shows an example of a (symmetric) Mutual Information matrix:
Mutual Information matrix. The synthetic data keeps the Mutual Information of most variable pairs except ('housing', 'risk'), ('housing', 'sex') and ('checking_account', 'risk').
When we developed this MI score alongside Nationwide Building Society, we were building on the work of Carnegie Mellon University’s DoppelGANger generator, which looks to make differentially private sequential synthetic data. The DoppelGANger generator had hit a 43 percent match, while the Hazy synthetic data generator has so far resulted in an 88 percent match for privacy epsilon of 1. We are pleased to be cited as having helped improve on their exceptional work.
Synthetic Data Quality Metric #3:
Predictive Capability¶
A further validation of the quality of synthetic data can be obtained by training a specific machine learning model on the synthetic data and testing its performance on the original data. For instance, we may use the synthetic data to predict the likelihood of customer churn using, say, an XGBoost algorithm. Normally this involves splitting the data into a Training Set to train the model and a Test Set to validate it, in order to avoid overfitting. If the synthetic data is of good quality, the performance of the model (measured by accuracy or AUC) when trained on synthetic data should be very similar to that of the model trained on original data. Note that the test set should always consist of the original data:
PC = Accuracy of model trained on synthetic data / Accuracy of model trained on original data
Typically Hazy models can generate synthetic data with scores higher than 0.9, with 1 being a perfect score.
Synthetic Data Quality Metric #4:
Feature Importance¶
This metric compares the order of feature importance of variables in the same model trained on the original data and on the synthetic data. Most machine learning algorithms are able to rank the variables in the data that are most informative for a specific task.
Synthetic data of good quality should be able to preserve the same order of importance of variables. In the example below, we see that within Hazy you are able to see the level of importance set by the algorithm and how accurately Hazy retains that level.
Feature importance displayed as a histogram, displaying the relative value of importance
Feature importance displayed as a ladder, showing whether the order of importance has shifted
Synthetic Data Quality Metric #5:
Query Quality¶
In some situations, synthetic data is used for reporting and business intelligence. For these cases, it is essential that queries made on the synthetic data retrieve the same number of rows as on the original data. For instance, if we query for users above 50 years old with an annual income below £50,000, the synthetic data should return the same number of rows as the original data.
This Query Quality score is obtained by running a battery of random queries and averaging the ratio of the number of rows retrieved in the original and in the synthetic data.
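As an illustrative sketch (the actual battery of queries Hazy runs is not specified in the post), a random range-query version for numeric columns might look like:

```python
import random

def query_quality(original, synthetic, n_queries=200, seed=0):
    """Average agreement in row counts between the same random range
    query run on the original and on the synthetic table.
    Rows are tuples of numeric values; columns are indexed by position."""
    rng = random.Random(seed)
    n_cols = len(original[0])
    total = 0.0
    for _ in range(n_queries):
        col = rng.randrange(n_cols)
        # Draw query bounds from the original column's observed values.
        a, b = rng.choice(original)[col], rng.choice(original)[col]
        lo, hi = min(a, b), max(a, b)
        n_orig = sum(lo <= row[col] <= hi for row in original)
        n_synth = sum(lo <= row[col] <= hi for row in synthetic)
        if max(n_orig, n_synth) == 0:
            total += 1.0  # both empty: perfect agreement
        else:
            total += min(n_orig, n_synth) / max(n_orig, n_synth)
    return total / n_queries
```

A score of 1.0 means every sampled query retrieved the same number of rows from both tables.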
Remember, it’s not just about quality.¶
The metrics above give a good understanding of the quality of synthetic data. However, some caution is necessary as, in some cases, a few extreme cases may be overwhelmingly important and, if not captured by the generator, could render the synthetic data useless — like rare events for fraud detection or money laundering. In these cases we may need to skew the sampling mechanism and the metrics to capture these extremes.
Bonus Synthetic Data Metric:
Autocorrelation¶
If you are dealing with sequential data, like data that has a time dependency, such as bank transactions, these temporal dependencies must be preserved in the synthetic data as well. For instance, in healthcare the order of exams and treatments must be preserved: chemotherapy treatments must follow x-rays, CT scans and other medical analysis in a specific order and timing. The synthetic data should preserve this temporal pattern as well as replicate the frequency of events, costs, and outcomes. When talking about fraud detection, it’s important that seasonality patterns, like weekends and holidays, are preserved.
Even more challenging is the replication of seemingly unique events, like the Covid-19 pandemic, which proves itself a formidable challenge for any generative model.
To capture these short and long-range correlations, the metric of choice is Autocorrelation with a variable lag parameter. Autocorrelation basically measures how events at time X(t) are related to events at time X(t - \delta), where \delta is a lag parameter.
The autocorrelation of a sequence y = (y_{1}, y_{2}, ... y_{n}) at lag k is
AC = \sum_{i=1}^{n-k} (y_{i} - \bar{y})(y_{i+k} - \bar{y}) / \sum_{i=1}^{n} (y_{i} - \bar{y})^2
where \bar{y} is the mean of y. We assume events occur at a fixed rate, but this restriction does not affect the generality of the concept.
If the events are categorical instead of numeric (for instance medical exams), the same concept still applies but we use Mutual Information instead.
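A direct implementation of the definition above is straightforward (a sketch; a real pipeline would vectorise this):

```python
def autocorrelation(y, k):
    """Lag-k autocorrelation, as defined above:
    sum_{i=1..n-k} (y_i - ybar)(y_{i+k} - ybar) / sum_{i=1..n} (y_i - ybar)^2."""
    n = len(y)
    y_bar = sum(y) / n
    numerator = sum((y[i] - y_bar) * (y[i + k] - y_bar) for i in range(n - k))
    denominator = sum((v - y_bar) ** 2 for v in y)
    return numerator / denominator
```

For example, a strictly alternating signal is anti-correlated at lag 1 and strongly correlated again at lag 2, exactly the kind of short-range structure this metric detects.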
To illustrate Autocorrelation, we consider the following EEG dataset because brainwaves are entirely unique identifiers and thus exceptionally sensitive information. This dataset contains records of EEG signals from 120 patients over a series of trials. Each sample contains measurements from 64 electrodes placed on the subjects’ scalps which were sampled at 256 Hz (3.9-msec epoch) for 1 second. As can be seen in Figure 4 the data has a complex temporal structure but with strong temporal and spatial correlations that have to be preserved in the synthetic version.
Original EEG data
Synthetic EEG data. The synthetic version of the data preserves some structure of the original data, but clearly misses some details
Autocorrelation with several lag delays for the EEG dataset. The synthetic data is able to capture the same trends as in the original data but with weaker intensity.
For temporal data, Hazy has a set of other metrics to capture the temporal dependencies on the data that we will discuss in detail in a subsequent post.
Whatever the metric or metrics our customers choose, we are happy that they are able to check the quality of our synthetic data for themselves, building trust and confidence in Hazy’s world-class, enterprise-grade generators.
|
Is BMI (Body Mass Index) a Weight Gain Indicator in Pregnancy?
Is BMI relevant during pregnancy?
What BMI is safe for pregnancy?
Can BMI affect fertility?
Does the obese BMI during pregnancy increase health risks?
Is BMI relevant in pregnancy? Does weight gain have anything to do with our health? How can we improve our fertility? We'll answer these and many other questions in the article below. ⤵️
Body mass index (BMI) is calculated with a simple equation that uses both height and weight of a person:
\text{BMI} = \frac{ \text{weight [kg]} }{ (\text{height [m]})^2 }
We use it to assess the weight and proportions of a given person. BMI allows us to determine whether a person is underweight, within normal range, overweight, or obese. The index is easy to use and widely applied in almost all medical sciences.
However, it does come with a few limitations:
BMI cannot be used for pregnant women or people with developed muscle tissue. The additional weight in both groups comes from different processes than simply putting on weight.
BMI should be modified for people with specific disabilities, e.g., missing a limb.
BMI in children shouldn't be used directly, but only via BMI percentiles.
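The formula above, together with the standard WHO adult cut-offs and the caveats just listed, can be sketched as:

```python
def bmi(weight_kg, height_m):
    """BMI = weight [kg] / (height [m])^2."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    # Standard WHO adult cut-offs; per the limitations above, do not
    # apply these directly to pregnant women, children, or athletes.
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"
```

For example, a person weighing 70 kg at 1.75 m tall has a BMI of about 22.9 and falls in the normal range.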
Yes, BMI is relevant, but only if we think about the pre-pregnancy BMI. Both your pre-pregnancy weight and height play a vital role in assessing your recommended pregnancy weight gain.
Why is it important, you may ask?
High maternal weight may lead to serious pregnancy complications, such as high blood pressure or gestational diabetes. Overweight and obese women risk having a more traumatic birth; the excessive weight gain may result in a greater occurrence of congenital anomalies and stillbirths.
Women who gain less weight than recommended also bear some risk of adverse pregnancy outcomes, including low birth weight, premature births, and increased mortality of neonates.
The gestational weight gain differs depending on the BMI and ranges between 28–40 lb for underweight women to only 11–20 lb for patients with obesity. For a complete list of recommendations for weight gain, visit the How to maintain pregnancy weight article.
The safest pre-pregnancy body mass index (BMI) is the one within normal range: 18.5–24.9. The maternal weight gain for this BMI group should oscillate between 25–35 pounds (11.3–15.9 kg) for single and 37–54 pounds (16.8–24.5 kg) for twin pregnancy.
Maternal and neonatal outcomes are influenced by both being underweight or obese in pregnancy. Being underweight does slightly less harm than being obese. One thing is certain — women of normal weight have lower chances for many disorders of pregnancy, such as fetal complications or blood-clotting disorders. And, of course, weight gained during pregnancy might also be very challenging to lose.
Talk to your health care provider about possible changes in your lifestyle that could help you regain your healthy weight and improve both your condition and your baby's! 👶
Yes, BMI does affect fertility.
Obesity and being overweight affect the function of ovaries and may result in a lower amount of hormones circulating in the bloodstream. Being in these BMI categories may cause problems with getting pregnant and complicate the initial IVF therapy.
Extremely low BMI may also harm your potential to conceive — a certain amount of fat tissue is required to produce pregnancy hormones. Women who weigh very little tend to lose their periods and stop ovulating.
Bear in mind that there are tight associations between pre-pregnancy BMIs and a higher risk of complications typical for gestation. Both underweight and overweight women may see worse pregnancy outcomes compared to normal-weight women.
To summarize, keeping a healthy weight gain both before and during pregnancy is one of the best investments you could offer to yourself and your future child!
Yes. High BMI increases health risks in both pregnant and non-pregnant women. The list of possible complications of gaining too much weight is long, and it includes disorders for both mother and child:
Hypertensive disorders of pregnancy, including dangerous conditions such as preeclampsia and eclampsia;
Delivery of large infants;
C-section complications, e.g., wound infections.
|
Differential topology
In mathematics, differential topology is the field dealing with the topological properties and smooth properties[a] of smooth manifolds. In this sense differential topology is distinct from the closely related field of differential geometry, which concerns the geometric properties of smooth manifolds, including notions of size, distance, and rigid shape. By comparison differential topology is concerned with coarser properties, such as the number of holes in a manifold, its homotopy type, or the structure of its diffeomorphism group. Because many of these coarser properties may be captured algebraically, differential topology has strong links to algebraic topology.[1]
The central goal of the field of differential topology is the classification of all smooth manifolds up to diffeomorphism. Since dimension is an invariant of smooth manifolds up to diffeomorphism type, this classification is often studied by classifying the (connected) manifolds in each dimension separately:
In dimension 1, the only smooth manifolds up to diffeomorphism are the circle, the real number line, and, allowing a boundary, the half-open interval [0,1) and the fully closed interval [0,1].
In dimension 2, every closed surface is classified up to diffeomorphism by its genus, the number of holes (or equivalently its Euler characteristic), and whether or not it is orientable. This is the famous classification of closed surfaces.[3][4] Already in dimension two the classification of non-compact surfaces becomes difficult, due to the existence of exotic spaces such as Jacob's ladder.
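The "equivalently" above is the classical Euler characteristic relation: for the closed orientable surface of genus g, and for the closed non-orientable surface built from k cross-caps,

```latex
\chi(\Sigma_g) = 2 - 2g, \qquad \chi(N_k) = 2 - k,
```

so genus together with orientability determines the Euler characteristic and vice versa.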
Beginning in dimension 4, the classification becomes much more difficult, for two reasons.[5][6] Firstly, every finitely presented group appears as the fundamental group of some 4-manifold, and since the fundamental group is a diffeomorphism invariant, this makes the classification of 4-manifolds at least as difficult as the classification of finitely presented groups. By the word problem for groups, which is equivalent to the halting problem, it is impossible to classify such groups, so a full topological classification is impossible. Secondly, beginning in dimension four it is possible to have smooth manifolds that are homeomorphic but with distinct, non-diffeomorphic smooth structures. This is true even for the Euclidean space \mathbb{R}^{4}, which admits many exotic \mathbb{R}^{4} structures. This means that the study of differential topology in dimensions 4 and higher must use tools genuinely outside the realm of the regular continuous topology of topological manifolds. One of the central open problems in differential topology is the four-dimensional smooth Poincaré conjecture, which asks if every smooth 4-manifold that is homeomorphic to the 4-sphere is also diffeomorphic to it. That is, does the 4-sphere admit only one smooth structure? This conjecture is true in dimensions 1, 2, and 3, by the above classification results, but is known to be false in dimension 7 due to the Milnor spheres.
Important tools in studying the differential topology of smooth manifolds include the construction of smooth topological invariants of such manifolds, such as de Rham cohomology or the intersection form, as well as smoothable topological constructions, such as smooth surgery theory or the construction of cobordisms. Morse theory is an important tool which studies smooth manifolds by considering the critical points of differentiable functions on the manifold, demonstrating how the smooth structure of the manifold enters into the set of tools available.[7] Oftentimes more geometric or analytical techniques may be used, by equipping a smooth manifold with a Riemannian metric or by studying a differential equation on it. Care must be taken to ensure that the resulting information is insensitive to this choice of extra structure, and so genuinely reflects only the topological properties of the underlying smooth manifold. For example, the Hodge theorem provides a geometric and analytical interpretation of the de Rham cohomology, and gauge theory was used by Simon Donaldson to prove facts about the intersection form of simply connected 4-manifolds.[8] In some cases techniques from contemporary physics may appear, such as topological quantum field theory, which can be used to compute topological invariants of smooth spaces.
Differential topology considers the properties and structures that require only a smooth structure on a manifold to be defined. Smooth manifolds are 'softer' than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold—that is, one can smoothly "flatten out" certain manifolds, but it might require distorting the space and affecting the curvature or volume.[citation needed]
On the other hand, smooth manifolds are more rigid than the topological manifolds. John Milnor discovered that some spheres have more than one smooth structure—see Exotic sphere and Donaldson's theorem. Michel Kervaire exhibited topological manifolds with no smooth structure at all.[9] Some constructions of smooth manifold theory, such as the existence of tangent bundles,[10] can be done in the topological setting with much more work, and others cannot.
Differential topology versus differential geometry
One major difference lies in the nature of the problems that each subject tries to address. In one view,[4] differential topology distinguishes itself from differential geometry by studying primarily those problems that are inherently global. Consider the example of a coffee cup and a donut. From the point of view of differential topology, the donut and the coffee cup are the same (in a sense). This is an inherently global view, though, because there is no way for the differential topologist to tell whether the two objects are the same (in this sense) by looking at just a tiny (local) piece of either of them. They must have access to each entire (global) object.
More mathematically, for example, the problem of constructing a diffeomorphism between two manifolds of the same dimension is inherently global, since locally two such manifolds are always diffeomorphic. Likewise, the problem of computing a quantity on a manifold that is invariant under differentiable mappings is inherently global, since any local invariant will be trivial in the sense that it is already exhibited in the topology of \mathbb{R}^{n}. Moreover, differential topology does not restrict itself necessarily to the study of diffeomorphism. For example, symplectic topology—a subbranch of differential topology—studies global properties of symplectic manifolds. Differential geometry concerns itself with problems—which may be local or global—that always have some non-trivial local properties. Thus differential geometry may study differentiable manifolds equipped with a connection, a metric (which may be Riemannian, pseudo-Riemannian, or Finsler), a special sort of distribution (such as a CR structure), and so on.
This distinction between differential geometry and differential topology is blurred, however, in questions specifically pertaining to local diffeomorphism invariants, such as the tangent space at a point. Differential topology also deals with questions like these, which specifically pertain to the properties of differentiable mappings on \mathbb{R}^{n} (for example the tangent bundle, jet bundles, the Whitney extension theorem, and so forth).
Lashof, Richard (Dec 1972). "The Tangent Bundle of a Topological Manifold". American Mathematical Monthly. 79 (10): 1090–1096. doi:10.2307/2317423. JSTOR 2317423.
Kervaire, Michel A. (Dec 1960). "A manifold which does not admit any differentiable structure". Commentarii Mathematici Helvetici. 34 (1): 257–270. doi:10.1007/BF02565940.
|
Hungru Chen, Naoto Umezawa, "Sensitization of Perovskite Strontium Stannate SrSnO3 towards Visible-Light Absorption by Doping", International Journal of Photoenergy, vol. 2014, Article ID 643532, 3 pages, 2014. https://doi.org/10.1155/2014/643532
Hungru Chen 1 and Naoto Umezawa1,2,3
1Environmental Remediation Materials Unit, National Institute for Materials Sciences, Ibaraki Prefecture 305-0044, Japan
2PRESTO, Japan Science and Technology Agency (JST), 4-1-8 Honcho, Kawaguchi, Saitama Prefecture 332-0012, Japan
3TU-NIMS Joint Research Center, School of Materials Science and Engineering, Tianjin University, 92 Weijin Road, Nankai District, Tianjin, China
Perovskite strontium stannate SrSnO3 is a promising photocatalyst. However, its band gap is too large for efficient solar energy conversion. In order to sensitize SrSnO3 toward visible-light activities, the effects of doping with various selected cations and anions are investigated by using hybrid density functional calculations. Results show that doping can result in dopant level to conduction band transitions which lie lower in energy compared to the original band gap transition. Therefore, it is expected that doping SrSnO3 can induce visible-light absorption.
Photocatalysis is an important technology for fuel production and environmental remediation from solar energy conversion [1–4]. Perovskite strontium stannate SrSnO3 has been reported to be a promising photocatalyst [5–7]. Due to its suitable valence band and conduction band edge positions, it is able to split water into hydrogen and oxygen. The main drawback of SrSnO3 is the large band gap of 4.1 eV [7] and therefore it can only absorb light in the ultraviolet region. A material for efficient solar energy conversion should be able to absorb the majority of the sunlight, namely, visible light. In order to sensitize wide gap materials to visible light, doping has been a common approach [8–10]. In this study, hybrid density functional theory calculations are carried out to investigate the effect of doping in SrSnO3. Cations Cr3+, Fe3+, and Rh3+ and anions , , and , which have been reported to be effective dopants for sensitizing titanates to visible light, are considered [10, 11].
All calculations are carried out based on density functional theory [12] with the range-separated HSE06 hybrid functional [13]. Valence electrons are described by a plane wave basis set with an energy cutoff of 500 eV. The interactions between core and valence electrons are treated with the projector augmented wave (PAW) method [14]. The k-space is sampled with a k-point spacing of less than 0.03 Å−1. The lattice vectors and atomic positions in all cells are optimized until the force at each atomic position is converged to less than 0.03 eV Å−1. The Vienna ab initio simulation package (VASP) [15] is used for all calculations.
SrSnO3 adopts the orthorhombic perovskite structure with space group symmetry Pbnm [16], as shown in Figure 1. To model doped SrSnO3, a SrSnO3 supercell, which contains 16 formula units (80 atoms), is constructed. Cation doping is simulated by substituting a dopant, Cr3+, Fe3+, or Rh3+, for one Sn4+, which corresponds to 6.25% of B-site cation impurity. To ensure the correct trivalent charge states of cation dopants, in each of the above cases La3+ is simultaneously substituted for Sr2+. Anion doping is modeled by substituting a dopant, , , or , for one , which corresponds to 2.08% of anion impurity. In the case is simultaneously substituted for .
The orthorhombic perovskite structure of SrSnO3. The green and red spheres denote strontium and oxygen, respectively. The gray octahedra denote SnO6 units.
The calculated lattice parameters for the orthorhombic perovskite SrSnO3 are Å, Å, and Å, which are in good agreement with experimental values Å, Å, and Å [17]. Figure 2 shows the density of states and band structure from our calculations. The calculated band gap of pristine SrSnO3 is 3.52 eV, smaller than experimentally reported optical gap 4.1 eV. The origin of this 0.5 eV discrepancy is not clear at the moment. It could be an underestimation from the functional used in calculations. It could be that the transition from the highest occupied state to the lowest occupied state is forbidden [18]. More detailed analysis is necessary to clarify this. The bottom of the conduction band is a mixture of Sn 5s states and O 2p states. Different from d state conduction bands, it is very dispersive and therefore electrons in this band should possess lower effective mass than in d bands. The top of valence band is comprised mainly of oxygen 2p states.
Calculated density of states and band structure of the pristine SrSnO3.
Figure 3 shows the calculated density of states of cation and anion doped SrSnO3. It is assumed that dopants do not influence the position of the SrSnO3 O 2p valence band and the top of O 2p valence band in each cell is aligned at 0 eV. In each case, dopant produces occupied states above the SrSnO3 valence band maximum (VBM). This indicates the emergence of dopant to conduction band minimum (CBM) transitions. However, in the cases of Cr3+, Fe3+, and Rh3+ doping, the position of CBM is also shifted up by about 0.3 eV compared to pristine SrSnO3. This is probably due to the s state interaction between Sn4+ ions, which form the lowest part of conduction band, being disturbed by the presence of cation impurity. Overall, the Cr3+ → CBM and Rh3+ → CBM transitions are 3.21 eV and 2.63 eV, respectively. The position of Fe3+ states above VBM is relatively shallow and therefore the Fe3+ → CBM transition 3.53 eV is even higher in energy than the pristine SrSnO3 band gap. In anion doping cases, the position of CBM is less affected. In the doping case, the CBM is actually downshifted by about 0.1 eV. Overall, the transitions from , , and to CBM are 2.94 eV, 2.45 eV, and 3.09 eV, respectively.
The calculated density of states in the pristine and doped cases. The shaded areas denote total density of states and the red lines denote the dopant states. The positions of the highest occupied state and the edge of conduction band minimum are marked by green lines. The magnitudes of the dopant to conduction band minimum transitions in each case are listed.
Through hybrid density functional calculations, it is shown that the doping of various cations and anions introduces occupied dopant states above the SrSnO3 O 2p valence band. Transitions from these dopant states to the SrSnO3 conduction band cost lower energies than the original band gap transition, apart from the Fe3+ case, and are hoped to contribute more towards visible-light photocatalytic activities. Among all considered cation dopants, Rh3+ gives rise to the deepest in-gap states. The transition from Rh3+ to the SrSnO3 conduction band is 0.89 eV lower than the band gap. Among all considered anion dopants, appears to show the best result. The transition from to the SrSnO3 conduction band is 1.07 eV lower than the SrSnO3 band gap. Nevertheless, one should be cautioned that whether dopant-associated localized states can really contribute to steady-state activities remains an open question [8, 19].
This work is partly supported by the Japan Science and Technology Agency (JST) Precursory Research for Embryonic Science and Technology (PRESTO) program and by the World Premier International Research Center Initiative on Materials Nanoarchitectonics (MANA), MEXT.
F. E. Osterloh and B. A. Parkinson, “Recent developments in solar water-splitting photocatalysis,” MRS Bulletin, vol. 36, no. 1, pp. 17–22, 2011. View at: Publisher Site | Google Scholar
F. E. Osterloh, “Inorganic materials as catalysts for photochemical splitting of water,” Chemistry of Materials, vol. 20, no. 1, pp. 35–54, 2008. View at: Publisher Site | Google Scholar
W. F. Zhang, J. Tang, and J. Ye, “Structural, photocatalytic, and photophysical properties of perovskite MSnO3 (M = Ca, Sr, and Ba) photocatalysts,” Journal of Materials Research, vol. 22, no. 7, pp. 1859–1871, 2007. View at: Publisher Site | Google Scholar
K. Mizushima, M. Tanaka, A. Asai, S. Iida, and J. B. Goodenough, “Impurity levels of iron-group ions in TiO2(II),” Journal of Physics and Chemistry of Solids, vol. 40, no. 12, pp. 1129–1140, 1979. View at: Google Scholar
H. P. Maruska and A. K. Ghosh, “Transition-metal dopants for extending the response of titanate photoelectrolysis anodes,” Solar Energy Materials, vol. 1, no. 3-4, pp. 237–247, 1979. View at: Publisher Site | Google Scholar
R. U. E. 't Lam, L. G. J. de Haart, A. W. Wiersma, G. Blasse, A. H. A. Tinnemans, and A. Mackor, “The sensitization of SrTiO3 photoanodes by doping with various transition metal ions,” Materials Research Bulletin, vol. 16, no. 12, pp. 1593–1600, 1981. View at: Publisher Site | Google Scholar
K. Iwashina and A. Kudo, “Rh-doped SrTiO3 photocatalyst electrode showing cathodic photocurrent for water splitting under visible-light irradiation,” Journal of the American Chemical Society, vol. 133, no. 34, pp. 13272–13275, 2011. View at: Publisher Site | Google Scholar
P. Hohenberg and W. Kohn, “Inhomogeneous electron gas,” Physical Review, vol. 136, pp. B864–B871, 1964. View at: Publisher Site | Google Scholar | MathSciNet
J. Heyd, G. E. Scuseria, and M. Ernzerhof, “Erratum: “Hybrid functionals based on a screened Coulomb potential” [J. Chem. Phys.118, 8207 (2003)],” The Journal of Chemical Physics, vol. 124, no. 21, Article ID 219906, 2006. View at: Publisher Site | Google Scholar
G. Kresse and J. Furthmüller, “Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set,” Physical Review B—Condensed Matter and Materials Physics, vol. 54, no. 16, pp. 11169–11186, 1996. View at: Google Scholar
E. H. Mountstevens, S. A. T. Redfern, and J. P. Attfield, “Order-disorder octahedral tilting transitions in SrSnO3 perovskite,” Physical Review B, vol. 71, no. 22, Article ID 220102, 2005. View at: Google Scholar
M. A. Green, K. Prassides, P. Day, and D. A. Neumann, “Structure of the
n=2
n=\infty
member of the Ruddlesden-Popper series,Sr(n+1)SnnO3n+1,” International Journal of Inorganic Materials, vol. 2, no. 1, pp. 35–41, 2000. View at: Publisher Site | Google Scholar
V. T. Agekyan, “Spectroscopic properties of semiconductor crystals with direct forbidden energy gap,” Physica Status Solidi A, vol. 43, no. 1, pp. 11–42, 1977. View at: Publisher Site | Google Scholar
J. B. Goodenough, A. Hamnett, M. P. Dare-Edwards, G. Campet, and R. D. Wright, “Inorganic materials for photoelectrolysis,” Surface Science, vol. 101, no. 1–3, pp. 531–540, 1980. View at: Publisher Site | Google Scholar
Copyright © 2014 Hungru Chen and Naoto Umezawa. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
|
How to calculate wavelength of a frequency?
More about frequency and wavelength!
If you are a curious physics enthusiast, then our wavelength to frequency calculator is your tool. Even if you aren't, you will find exciting information about frequencies and wavelengths. And we'll also help you understand how to calculate the wavelength of any frequency.
So keep on reading and have your curiosity satisfied.
A wavelength is the distance between adjacent crests or adjacent troughs — the distance over which a wave completes one full cycle of its shape. As it measures a distance, its SI unit is the metre (m).
Frequency is the measure of the number of waves per unit of time. Its SI unit is hertz (Hz). A wave with a high frequency has more energy, and a low-frequency wave has lower energy when measured at the same amplitude.
Our wavelength to frequency calculator is a time and energy-efficient tool that determines the frequency of a wave based on the wavelength. But that is not all; you can also input the frequency and estimate the wavelength. And if you still want more, we have got you covered because the calculator provides a list of wave velocities in different mediums, with the option to input a custom value for the velocity.
Look through the list of presets to locate your required wave and medium.
Selecting it will display its speed/velocity in the wave velocity section.
Input the wavelength in a unit of your choice.
The result is the frequency of the wave in Hz. You may select any unit to get your result.
If you want to enter a custom wave velocity, skip the selection of medium, and input the wave velocity directly.
For instance, if the wave velocity is 89 m/s and the wavelength is 8 m, then the frequency is 89/8 = 11.125 Hz.
How to calculate the wavelength of a frequency? or
How to calculate the frequency from wavelength?
Worry not: the formula is fairly simple. You may even be able to do it at your fingertips, but you don't really have to; our tool is here to rescue you. But let's still discuss the formula to enhance our knowledge.
\lambda = v/f
where:
\lambda = wavelength;
v = wave velocity; and
f = frequency.
This calculates the wavelength if you know the frequency and wave velocity. You may rearrange this formula to calculate the frequency from the wavelength.
f = v/\lambda
Say the speed of sound in glass is 4540 m/s and the wavelength is 20 m.
Place these values in the formula:
f = \frac{4540\ \text{m/s}}{20\ \text{m}} = 227\ \text{Hz}
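The two formulas above are easy to script. Below is a minimal sketch (the function names are my own) that reproduces both worked examples:

```python
def frequency_from_wavelength(v, wavelength):
    """f = v / wavelength: frequency (Hz) from wave velocity (m/s) and wavelength (m)."""
    return v / wavelength

def wavelength_from_frequency(v, f):
    """wavelength = v / f: wavelength (m) from wave velocity (m/s) and frequency (Hz)."""
    return v / f

print(frequency_from_wavelength(4540, 20))  # sound in glass: 227.0 Hz
print(frequency_from_wavelength(89, 8))     # 11.125 Hz
```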
The wavelength of waves is interesting to know about, so these calculators might tickle your fancy.
How can I calculate frequency from wavelength?
The formula to calculate the wavelength of a wave is λ = v/f, where:
λ = wavelength;
v = wave velocity; and
f = frequency.
You can rearrange this formula to calculate the frequency, given the wavelength of a wave.
Verify the speed of the wave.
Divide it by the wavelength of the wave.
The result is the frequency of the wave.
What is the frequency of sound in air?
The audible frequency of sound in air ranges from 20 Hz to 20 kHz: any sound with a frequency in this range can be heard by our ears and is called audible sound. The speed of sound in air is 345 m/s, which varies slightly with temperature. Based on these values, the wavelength of audible sound ranges from 0.01725 m (at 20 kHz) up to 17.25 m (at 20 Hz).
Say the wavelength of light is 740 nm (nanometers), i.e., 7.4 × 10⁻⁷ m (meters). You can use this value to determine the frequency of the light as well. We already know the speed of light in air is 3 × 10⁸ m/s. Placing these values in the formula f = v/λ:
f = 3 × 10⁸ / 7.4 × 10⁻⁷ ≈ 4.05 × 10¹⁴ Hz
Amazing, isn't it? So many waves in just one second. This is why we see light immediately, without any delay.
|
Table 2.4.1 Curvature of plane curves:
For the explicit Cartesian curve y = y(x):
\kappa = \frac{|y''|}{\left(1 + y'^2\right)^{3/2}}
For the parametric curve x = x(t), y = y(t):
\kappa = \frac{\left|\dot{x}\,\ddot{y} - \dot{y}\,\ddot{x}\right|}{\left(\dot{x}^2 + \dot{y}^2\right)^{3/2}}
For the polar curve r = r(\theta):
\kappa = \frac{\left|r^2 + 2 r'^2 - r\,r''\right|}{\left(r^2 + r'^2\right)^{3/2}}
For the explicit Cartesian curve y = y(x), the primes in the formula for \kappa represent derivatives with respect to the independent variable x. For the parametric curve given in Cartesian coordinates, the overdots represent derivatives with respect to the parameter t. For the polar curve given in the form r = r(\theta), the primes represent derivatives with respect to the independent variable \theta.
Most modern calculus texts take the curvature as positive; hence the absolute values in the numerators of the formulas for \kappa (the Greek letter "kappa"). Some older texts, and some applications in the sciences, use a signed curvature that omits this absolute value.
\kappa = \frac{d\theta}{ds}
where \theta is the angle made by the tangent line and the horizontal, and s = s(x) is the "arc length," or distance along the curve. Since y' = \tan(\theta), the tangent angle is \theta = \arctan(y').
The differential of the arc length function is obtained from Figure 2.4.2 by approximating the arc length s by the hypotenuse of the dotted right triangle:
ds = \sqrt{dx^2 + dy^2} = dx\,\sqrt{1 + \left(\frac{dy}{dx}\right)^2} = dx\,\sqrt{1 + y'^2}
so that
\frac{ds}{dx} = \sqrt{1 + y'^2}
The computation of \kappa as the derivative of \theta with respect to s is then as follows:
\begin{aligned}
\kappa = \frac{d\theta}{ds} &= \frac{d}{ds}\,\arctan(y') \\
&= \frac{d}{dx}\left(\arctan(y')\right)\,\frac{dx}{ds} \\
&= \frac{y''}{1 + y'^2}\,\frac{1}{ds/dx} \\
&= \frac{y''}{1 + y'^2}\,\frac{1}{\sqrt{1 + y'^2}} \\
&= \frac{y''}{\left(1 + y'^2\right)^{3/2}}
\end{aligned}
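As a numerical check of the explicit-curve formula, the following sketch (function names are my own) evaluates \kappa = |y''|/(1 + y'^2)^{3/2} on the upper semicircle of radius r, where the curvature should come out to 1/r at every point:

```python
import math

def curvature_explicit(yp, ypp):
    """kappa = |y''| / (1 + y'^2)^(3/2) for an explicit plane curve y = y(x)."""
    return abs(ypp) / (1.0 + yp * yp) ** 1.5

# Upper semicircle y = sqrt(r^2 - x^2): y' = -x/y and y'' = -r^2/y^3
r, x = 2.0, 0.7
y = math.sqrt(r * r - x * x)
kappa = curvature_explicit(-x / y, -r * r / y ** 3)
print(kappa)  # ≈ 0.5 = 1/r, independent of x
```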
For the space curve \mathbf{R}(p) = x(p)\,\mathbf{i} + y(p)\,\mathbf{j} + z(p)\,\mathbf{k}, the curvature \kappa can be computed either as
\kappa = \|\mathbf{T}'(s)\| = \left\|\frac{d\mathbf{T}}{dp}\,\frac{1}{\rho}\right\|
or as
\kappa = \frac{\|\mathbf{R}' \times \mathbf{R}''\|}{\rho^3}
Table 2.4.2 Curvature of space curves, with \rho = \|\mathbf{R}'\| = \frac{ds}{dp}
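A quick sketch of the cross-product formula from Table 2.4.2, applied to the circular helix (the same curve appears in the exercises below); the derivatives are entered by hand and the helper names are my own:

```python
import math

def curvature_space(Rp, Rpp):
    """kappa = ||R' x R''|| / rho^3 with rho = ||R'||, for a space curve R(p)."""
    cx = Rp[1] * Rpp[2] - Rp[2] * Rpp[1]
    cy = Rp[2] * Rpp[0] - Rp[0] * Rpp[2]
    cz = Rp[0] * Rpp[1] - Rp[1] * Rpp[0]
    cross = math.sqrt(cx * cx + cy * cy + cz * cz)
    rho = math.sqrt(Rp[0] ** 2 + Rp[1] ** 2 + Rp[2] ** 2)
    return cross / rho ** 3

# Helix R(p) = cos(p) i + sin(p) j + p k:
# R'(p) = (-sin p, cos p, 1), R''(p) = (-cos p, -sin p, 0)
p = 1.3
kappa = curvature_space((-math.sin(p), math.cos(p), 1.0),
                        (-math.cos(p), -math.sin(p), 0.0))
print(kappa)  # ≈ 0.5 for every p: the helix has constant curvature
```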
The circle of curvature is a circle of radius 1/\kappa that is tangent to a curve and makes second-order contact with it. Second-order contact means that the first and second derivatives agree at the point. The radius 1/\kappa is called the radius of curvature, and the center of the circle of curvature is called the center of curvature.
Determine the curvature of the straight line y = mx + b.
Show that the circle (x - h)^2 + (y - k)^2 = r^2 everywhere has constant curvature; that is, show \kappa = 1/r.
Use the appropriate formula from Table 2.4.1 to determine the curvature of y(x) = x^{3/2}, x \ge 0, then obtain the curvature from first principles, that is, by calculating the rate at which the tangent turns as arc length increases.
Obtain and graph the curvature of the cycloid defined by x = p - \sin(p), y = 1 - \cos(p), for p \in [0, 2\pi].
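A numerical sketch for this exercise (not part of the original text): evaluating the parametric curvature formula from Table 2.4.1 along the cycloid and comparing it with the closed form 1/(4 sin(p/2)), which the formula simplifies to on (0, 2π):

```python
import math

def curvature_parametric(xd, yd, xdd, ydd):
    """kappa = |x' y'' - y' x''| / (x'^2 + y'^2)^(3/2) for a parametric plane curve."""
    return abs(xd * ydd - yd * xdd) / (xd * xd + yd * yd) ** 1.5

# Cycloid x = p - sin p, y = 1 - cos p:
# x' = 1 - cos p, y' = sin p, x'' = sin p, y'' = cos p
for p in (0.5, 1.0, 2.0, 3.0):
    k = curvature_parametric(1 - math.cos(p), math.sin(p),
                             math.sin(p), math.cos(p))
    print(p, k, 1 / (4 * math.sin(p / 2)))  # the last two columns agree
```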
Obtain and graph the curvature of the catenary defined by y = \cosh(x).
For each of the following space curves, obtain the curvature \kappa both as \|\mathbf{T}'(s)\| and as \kappa = \|\mathbf{R}'(p) \times \mathbf{R}''(p)\| / \rho^3:
\mathbf{R}(p) = \cos(p)\,\mathbf{i} + \sin(p)\,\mathbf{j} + p\,\mathbf{k}
\mathbf{R}(p) = p\,\mathbf{i} + 3p^2\,\mathbf{j} + p^3\,\mathbf{k}
\mathbf{R}(p) = \ln(\cos(p))\,\mathbf{i} + \ln(\sin(p))\,\mathbf{j} + \sqrt{2}\,p\,\mathbf{k}, \quad p \in (0, \pi/2)
\mathbf{R}(p) = (3p - p^3)\,\mathbf{i} + 3p^2\,\mathbf{j} + (3p + p^3)\,\mathbf{k}
|
Avogadro's Law: Volume and Amount | Introduction to Chemistry | Course Hero
Avogadro's Law: Volume and Amount
State Avogadro's Law and its underlying assumptions
The number of molecules or atoms in a specific volume of ideal gas is independent of their size or the molar mass of the gas.
Avogadro's Law is stated mathematically as follows:
\frac{V}{n} = k, where V is the volume of the gas, n is the number of moles of the gas, and k is a proportionality constant.
Volume ratios must be related to the relative numbers of molecules that react; this relationship was crucial in establishing the formulas of simple molecules at a time when the distinction between atoms and molecules was not clearly understood.
Avogadro's Law: under the same temperature and pressure conditions, equal volumes of all gases contain the same number of particles; also referred to as Avogadro's hypothesis or Avogadro's principle.
Avogadro's Law (sometimes referred to as Avogadro's hypothesis or Avogadro's principle) is a gas law; it states that under the same pressure and temperature conditions, equal volumes of all gases contain the same number of molecules. The law is named after Amedeo Avogadro who, in 1811, hypothesized that two given samples of an ideal gas—of the same volume and at the same temperature and pressure—contain the same number of molecules; thus, the number of molecules or atoms in a specific volume of ideal gas is independent of their size or the molar mass of the gas. For example, 1.00 L of N2 gas and 1.00 L of Cl2 gas contain the same number of molecules at Standard Temperature and Pressure (STP).
\frac{V}{n} = k, where V is the volume of the gas, n is the number of moles of the gas, and k is a proportionality constant.
As an example, equal volumes of molecular hydrogen and nitrogen contain the same number of molecules and observe ideal gas behavior when they are at the same temperature and pressure. In practice, real gases show small deviations from the ideal behavior and do not adhere to the law perfectly; the law is still a useful approximation for scientists, however.
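The proportionality can be applied directly: at constant temperature and pressure, V₁/n₁ = V₂/n₂. A minimal sketch (the function names, and the use of 22.4 L/mol as the approximate molar volume of an ideal gas at STP, are my own illustration choices):

```python
def volume_after_change(v1, n1, n2):
    """Avogadro's Law as a ratio: V1/n1 = V2/n2 at constant T and P."""
    return v1 * n2 / n1

MOLAR_VOLUME_STP = 22.4  # L/mol, approximate molar volume of an ideal gas at STP

def volume_at_stp(n):
    """V = n * k, with the constant k taken as the molar volume at STP."""
    return n * MOLAR_VOLUME_STP

print(volume_after_change(1.0, 0.5, 1.5))  # 3.0 L: tripling the moles triples V
print(volume_at_stp(2.0))                  # 44.8 L for 2 mol
```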
Interactive: The Number-Volume Relationship. The model contains gas molecules under constant pressure. The barrier moves when the volume of gas expands or contracts. Run the model and select different numbers of molecules from the drop-down menu. What is the relationship between the number of molecules and the volume of a gas? (Note: Although the atoms in this model are in a flat plane, volume is calculated using 0.1 nm as the depth of the container.)
Significance of Avogadro's Law
Early chemists calculated the molecular weight of oxygen using the incorrect formula HO for water. This led to the molecular weight of oxygen being miscalculated as 8, rather than 16. However, when chemists found that an assumed reaction of H + Cl \rightarrow HCl yielded twice the volume of HCl, they realized hydrogen and chlorine were diatomic molecules. The chemists revised their reaction equation to H_2 + Cl_2 \rightarrow 2HCl.
Likewise, for the reaction assumed to be HO \rightarrow H + O, chemists discovered that the volume of hydrogen gas consumed was twice that of oxygen. By Avogadro's Law, this meant that hydrogen and oxygen were combining in a 2:1 ratio. This discovery led to the correct molecular formula for water (H2O) and the correct reaction 2H_2O \rightarrow 2H_2 + O_2.
Experiment confirming the correct formula for water: it was originally assumed that 1 hydrogen and 1 oxygen atom went into a water molecule. Using Avogadro's Law, this experiment confirmed that 2 hydrogen and 1 oxygen form 1 water molecule.
Avogadro: practice problems and examples, looking at the relationship between the volume and amount of gas (number of moles) in a gas sample.
"Avogadro's Law."
http://en.wikipedia.org/wiki/Avogadro's%20Law Wikipedia
http://en.wikipedia.org/wiki/Avogadro's_law Wikipedia
"Hofmann_voltameter.svg."
https://commons.wikimedia.org/wiki/File:Hofmann_voltameter.svg Wikimedia
|
Reviewed by Rahul Dhari
Wattage, voltage, and amperage
How to calculate amperage from wattage
How to use the wattage to amperage calculator?
Omni's wattage to amperage calculator allows you to calculate the current through an AC or DC circuit from power and voltage values.
If you have ever wondered how to convert watts to amps, i.e., power to current for a given circuit, you have come to the right place.
Continue reading to learn how to calculate amperage from wattage and voltage for various circuit types. You will also find an example of using this wattage voltage amperage calculator.
Before going any further, let us understand what exactly we mean by the terms wattage, voltage, and amperage.
In every electrical circuit, the circuit elements (e.g., resistors) produce or dissipate power. We can express the power dissipation by the equation:
\small P = VI
where:
P - power or wattage dissipated by the circuit element, expressed in watts (W);
I - current or amperage through the circuit element, measured in amperes (A); and
V - voltage across the element, expressed in volts (V).
Thus, when we talk about calculating amperage from wattage or converting wattage to amperage, we simply mean using the above equation to calculate the current from power and voltage values.
To calculate the current or voltage from power, we first need to know the circuit type, i.e., AC or DC. Then we can use one of the formulae given below to convert watts to amps:
For a DC circuit, the relation between power (P), voltage (V), and current (I) is given as:
\small P = VI
For the single-phase AC circuit, we can use the following formula to convert watts to amps:
\small I_{RMS} = \frac{P_{avg}}{V_{RMS} \times PF}
where:
I_{RMS} - root mean square current;
P_{avg} - average power;
V_{RMS} - root mean square voltage; and
PF - power factor, which is the ratio between the real and apparent power in a circuit.
Not sure how to calculate power factor? Worry not! We have a tool for that too.
If the circuit is a three-phase AC circuit, we can use the formulae:
\scriptsize \begin{align*} I_{RMS} &= \frac{P_{avg}}{\sqrt{3} \times V_{RMS} \times PF}, \\ &\text{for line-to-line voltage; and} \\ I_{RMS} &= \frac{P_{avg}}{3 \times V_{RMS} \times PF},\\ &\text{for line-to-neutral voltage.}\\ \end{align*}
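The three conversions can be sketched as small functions (the names are my own; the DC case reproduces the 1500 W / 120 V example worked through below):

```python
import math

def amps_dc(power, voltage):
    """DC circuit: I = P / V."""
    return power / voltage

def amps_single_phase(p_avg, v_rms, pf):
    """Single-phase AC: I_RMS = P_avg / (V_RMS * PF)."""
    return p_avg / (v_rms * pf)

def amps_three_phase_ll(p_avg, v_rms, pf):
    """Three-phase AC, line-to-line voltage: I_RMS = P_avg / (sqrt(3) * V_RMS * PF)."""
    return p_avg / (math.sqrt(3) * v_rms * pf)

print(amps_dc(1500, 120))  # 12.5 A
```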
Let us calculate the amperage of an electrical appliance using 1500 watts of power and running on a 120 V DC circuit:
Using the drop-down menu, choose the current type as DC.
Enter the power as 1500 W and voltage as 120 V.
The calculator will convert 1500 watts to amps for the given condition and show the resulting current as 12.5 A.
If you find this voltage amperage calculator useful, do check out our other similar tools:
3 phase motor amperage calculator
How does wattage relate to amperage for an AC single-phase circuit?
For an AC single-phase circuit, the relation between power in wattage (P) and current in amperage (I) is given by the formula:
I = P / (V × PF). Here PF is the power factor, and V is the RMS (root mean square) voltage in volts.
How do I calculate amperage from wattage and voltage for DC circuit?
To calculate amperage from wattage and voltage for the DC circuit, follow the given instructions:
Divide the power in wattage (W) by the voltage in volts (V).
Congrats! You have calculated the amperage from wattage and voltage.
|
Curved mirror - Wikipedia
Reflections in a convex mirror. The photographer is seen reflected at top right
A curved mirror is a mirror with a curved reflecting surface. The surface may be either convex (bulging outward) or concave (recessed inward). Most curved mirrors have surfaces that are shaped like part of a sphere, but other shapes are sometimes used in optical devices. The most common non-spherical type are parabolic reflectors, found in optical devices such as reflecting telescopes that need to image distant objects, since spherical mirror systems, like spherical lenses, suffer from spherical aberration. Distorting mirrors are used for entertainment. They have convex and concave regions that produce deliberately distorted images. They also provide highly magnified or highly diminished (smaller) images when the object is placed at certain distances.
Convex mirrors
A convex mirror or diverging mirror is a curved mirror in which the reflective surface bulges towards the light source.[1] Convex mirrors reflect light outwards, therefore they are not used to focus light. Such mirrors always form a virtual image, since the focal point (F) and the centre of curvature (2F) are both imaginary points "inside" the mirror that cannot be reached. As a result, images formed by these mirrors cannot be projected on a screen, since the image is inside the mirror. The image is smaller than the object, but gets larger as the object approaches the mirror.
A collimated (parallel) beam of light diverges (spreads out) after reflection from a convex mirror, since the normal to the surface differs at each spot on the mirror.
Uses of convex mirrors
The passenger-side mirror on a car is typically a convex mirror. In some countries, these are labeled with the safety warning "Objects in mirror are closer than they appear", to warn the driver of the convex mirror's distorting effects on distance perception. Convex mirrors are preferred in vehicles because they give an upright (not inverted), though diminished (smaller), image and because they provide a wider field of view as they are curved outwards.
These mirrors are often found in the hallways of various buildings (commonly known as "hallway safety mirrors"), including hospitals, hotels, schools, stores, and apartment buildings. They are usually mounted on a wall or ceiling where hallways intersect each other, or where they make sharp turns. They are useful for people to look at any obstruction they will face on the next hallway or after the next turn. They are also used on roads, driveways, and alleys to provide safety for motorists where there is a lack of visibility, especially at curves and turns.[2]
Convex mirror image
The image on a convex mirror is always virtual (rays haven't actually passed through the image; their extensions do, like in a regular mirror), diminished (smaller), and upright (not inverted). As the object gets closer to the mirror, the image gets larger, until it is approximately the size of the object when it touches the mirror. As the object moves away, the image diminishes in size and gets gradually closer to the focus, until it is reduced to a point in the focus when the object is at an infinite distance. These features make convex mirrors very useful: since everything appears smaller in the mirror, they cover a wider field of view than a normal plane mirror, making them useful for looking at cars behind a driver's car on a road, watching a wider area for surveillance, etc.
(Figure: convex-mirror image formation for object distances S > F, S = F, and S < F.)
Concave mirrors
A concave mirror, or converging mirror, has a reflecting surface that is recessed inward (away from the incident light). Concave mirrors reflect light inward to one focal point. They are used to focus light. Unlike convex mirrors, concave mirrors show different image types depending on the distance between the object and the mirror.
The mirrors are called "converging mirrors" because they tend to collect light that falls on them, refocusing parallel incoming rays toward a focus. This is because the light is reflected at different angles at different spots on the mirror as the normal to the mirror surface differs at each spot.
Uses of concave mirrors
Concave mirrors are used in reflecting telescopes.[5] They are also used to provide a magnified image of the face for applying make-up or shaving.[6] In illumination applications, concave mirrors are used to gather light from a small source and direct it outward in a beam as in torches, headlamps and spotlights, or to collect light from a large area and focus it into a small spot, as in concentrated solar power. Concave mirrors are used to form optical cavities, which are important in laser construction. Some dental mirrors use a concave surface to provide a magnified image. The mirror landing aid system of modern aircraft carriers also uses a concave mirror.
Concave mirror image
The image formed by a concave mirror depends on the object distance S relative to the focal length F:
S < F: the image is virtual, upright, and magnified.
S = F: the reflected rays are parallel and no image is formed. In the limit where S approaches F, the image distance approaches infinity, and the image can be either real or virtual and either upright or inverted depending on whether S approaches F from its left or right side.
F < S < 2F: the image is real, inverted, and magnified.
S = 2F: the image is real, inverted, and the same size as the object.
S > 2F: the image is real, inverted, and diminished.
Mirror shape
Most curved mirrors have a spherical profile.[7] These are the simplest to make, and it is the best shape for general-purpose use. Spherical mirrors, however, suffer from spherical aberration—parallel rays reflected from such mirrors do not focus to a single point. For parallel rays, such as those coming from a very distant object, a parabolic reflector can do a better job. Such a mirror can focus incoming parallel rays to a much smaller spot than a spherical mirror can. A toroidal reflector is a form of parabolic reflector which has a different focal distance depending on the angle of the mirror.
Mirror equation, magnification, and focal length
The Gaussian mirror equation, also known as the mirror and lens equation, relates the object distance d_o and image distance d_i to the focal length f:
{\displaystyle {\frac {1}{d_{\mathrm {o} }}}+{\frac {1}{d_{\mathrm {i} }}}={\frac {1}{f}}}
The sign convention used here is that the focal length is positive for concave mirrors and negative for convex ones, and d_o and d_i are positive when the object and image are in front of the mirror, respectively. (They are positive when the object or image is real.)[2]
For convex mirrors, if one moves the 1/d_o term to the right side of the equation to solve for 1/d_i, then the result is always a negative number, meaning that the image distance is negative: the image is virtual, located "behind" the mirror. This is consistent with the behavior described above.
For concave mirrors, whether the image is virtual or real depends on how large the object distance is compared to the focal length. If the 1/f term is larger than the 1/d_o term, then 1/d_i is positive and the image is real. Otherwise, the term is negative and the image is virtual. Again, this validates the behavior described above.
The magnification of the mirror, the ratio of image height to object height, is
{\displaystyle m\equiv {\frac {h_{\mathrm {i} }}{h_{\mathrm {o} }}}=-{\frac {d_{\mathrm {i} }}{d_{\mathrm {o} }}}}
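A small numerical sketch of the mirror and magnification equations (the function names are my own; sign conventions as stated above):

```python
def image_distance(d_o, f):
    """Solve 1/d_o + 1/d_i = 1/f for d_i.
    f > 0 for concave mirrors, f < 0 for convex; d_i < 0 means a virtual image."""
    return 1.0 / (1.0 / f - 1.0 / d_o)

def magnification(d_o, d_i):
    """m = -d_i / d_o; m < 0 means inverted, |m| < 1 means diminished."""
    return -d_i / d_o

# Concave mirror, f = 10, object beyond 2F: real (d_i > 0), inverted, diminished
d_i = image_distance(30.0, 10.0)
print(d_i, magnification(30.0, d_i))  # d_i ≈ 15.0, m ≈ -0.5

# Convex mirror, f = -10: d_i is always negative, i.e. the image is virtual
print(image_distance(30.0, -10.0))    # ≈ -7.5
```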
Ray tracing
The image location and size can also be found by graphical ray tracing, as illustrated in the figures above. A ray drawn from the top of the object to the mirror surface vertex (where the optical axis meets the mirror) will form an angle with the optical axis. The reflected ray has the same angle to the axis, but on the opposite side (See Specular reflection).
A second ray can be drawn from the top of the object, parallel to the optical axis. This ray is reflected by the mirror and passes through its focal point. The point at which these two rays meet is the image point corresponding to the top of the object. Its distance from the optical axis defines the height of the image, and its location along the axis is the image location. The mirror equation and magnification equation can be derived geometrically by considering these two rays. A ray that goes from the top of the object through the focal point can be considered instead. Such a ray reflects parallel to the optical axis and also passes through the image point corresponding to the top of the object.
Ray transfer matrix of spherical mirrors
The mathematical treatment is done under the paraxial approximation, meaning that under the first approximation a spherical mirror behaves like a parabolic reflector. The ray matrix of a concave spherical mirror is shown here. The {\displaystyle C} element of the matrix is {\displaystyle -{\frac {1}{f}}}, where {\displaystyle f} is the focal length of the optical device.
Boxes 1 and 3 feature summing the angles of a triangle and comparing to π radians (or 180°). Box 2 shows the Maclaurin series of {\displaystyle \arccos \left(-{\frac {r}{R}}\right)} up to order 1. The derivations of the ray matrices of a convex spherical mirror and a thin lens are very similar.
Alhazen's problem (reflection from a spherical mirror)
^ Nayak, Sanjay K.; Bhuvana, K.P. (2012). Engineering Physics. New Delhi: Tata McGraw-Hill Education. p. 6.4. ISBN 9781259006449.
^ a b c Hecht, Eugene (1987). "5.4.3". Optics (2nd ed.). Addison Wesley. pp. 160–1. ISBN 0-201-11609-X.
^ Venice Botteghe: Antiques, Bijouterie, Coffee, Cakes, Carpet, Glass Archived 2017-03-06 at the Wayback Machine
^ Lorne Campbell, National Gallery Catalogues (new series): The Fifteenth Century Netherlandish Paintings, pp. 178-179, 188-189, 1998, ISBN 1-85709-171-X
^ Joshi, Dhiren M. Living Science Physics 10. Ratna Sagar. ISBN 9788183322904. Archived from the original on 2018-01-18.
^ Sura's Year Book 2006 (English). Sura Books. ISBN 9788172541248. Archived from the original on 2018-01-18.
^ Al-Azzawi, Abdul (2006-12-26). Light and Optics: Principles and Practices. CRC Press. ISBN 9780849383144. Archived from the original on 2018-01-18.
Retrieved from "https://en.wikipedia.org/w/index.php?title=Curved_mirror&oldid=1089584220#Concave_mirrors"