Rings and Things and a Fine Array of Twentieth Century Associative Algebra: Second Edition
Softcover ISBN: 978-0-8218-3672-9
Product Code: SURV/65.R
List Price: $129.00
MAA Member Price: $116.10
AMS Member Price: $103.20
eBook ISBN: 978-1-4704-1292-0
Product Code: SURV/65.R.E
List Price: $125.00
MAA Member Price: $112.50
AMS Member Price: $100.00
Softcover ISBN: 978-0-8218-3672-9
eBook ISBN: 978-1-4704-1292-0
Product Code: SURV/65.R.B
List Price: $254.00 $191.50
MAA Member Price: $228.60 $172.35
AMS Member Price: $203.20 $153.20
• Mathematical Surveys and Monographs
Volume: 65; 2004; 475 pp
MSC: Primary 00; 01; 12; 13; 16; Secondary 03; 06; 08; 14; 15; 18
This book surveys more than 125 years of aspects of associative algebras, especially ring and module theory. It is the first to probe so extensively such a wealth of historical development. Moreover,
the author brings the reader up to date, in particular through his report on the subject in the second half of the twentieth century.
Included in the book are certain categorical properties from theorems of Frobenius and Stickelberger on the primary decomposition of finite Abelian groups; Hilbert's basis theorem and his
Nullstellensatz, including the modern formulations of the latter by Krull, Goldman, and others; Maschke's theorem on the representation theory of finite groups over a field; and the fundamental
theorems of Wedderburn on the structure of finite dimensional algebras and finite skew fields and their extensions by Brauer, Kaplansky, Chevalley, Goldie, and others. A special feature of the book
is the in-depth study of rings with chain condition on annihilator ideals pioneered by Noether, Artin, and Jacobson and refined and extended by many later mathematicians.
Two of the author's prior works, Algebra: Rings, Modules and Categories, I and II (Springer-Verlag, 1973), are devoted to the development of modern associative algebra and ring and module theory.
Those works serve as a foundation for the present survey, which includes a bibliography of over 1,600 references and is exhaustively indexed.
In addition to the mathematical survey, the author gives candid and descriptive impressions of the last half of the twentieth century in “Part II: Snapshots of Some Mathematical Friends and Places”.
Beginning with his teachers and fellow graduate students at the University of Kentucky and at Purdue, Faith discusses his Fulbright-NATO Postdoctoral Fellowship at Heidelberg and at the Institute for Advanced
Study (IAS) at Princeton, his year as a visiting scholar at Berkeley, and the many acquaintances he met there and in subsequent travels in India, Europe, and most recently, Barcelona.
Comments on the first edition:
“Researchers in algebra should find it both enjoyable to read and very useful in their work. In all cases, [Faith] cites full references as to the origin and development of the theorem .... I know of
no other work in print which does this as thoroughly and as broadly.”
—John O'Neill, University of Detroit at Mercy
“ ‘Part II: Snapshots of Some Mathematical Friends and Places’ is wonderful! [It is] a joy to read! Mathematicians of my age and younger will relish reading ‘Snapshots’.”
—James A. Huckaba, University of Missouri-Columbia
Graduate students, research mathematicians, and other scientists interested in the history of mathematics and science.
• Chapters
• 1. Direct product and sums of rings and modules and the structure of fields
• 2. Introduction to ring theory: Schur’s Lemma and semisimple rings, prime and primitive rings, Noetherian and Artinian modules, nil, prime and Jacobson radicals
• 3. Direct decompositions of projective and injective modules
• 4. Direct product decompositions of von Neumann regular rings and self-injective rings
• 5. Direct sums of cyclic modules
• 6. When injectives are flat: Coherent FP-injective rings
• 7. Direct decompositions and dual generalizations of Noetherian rings
• 8. Completely decomposable modules and the Krull-Schmidt-Azumaya theorem
• 9. Polynomial rings over Vamosian and Kerr rings, valuation rings and Prüfer rings
• 10. Isomorphic polynomial rings and matrix rings
• 11. Group rings and Maschke’s theorem revisited
• 12. Maximal quotient rings
• 13. Morita duality and dual rings
• 14. Krull and global dimensions
• 15. Polynomial identities and PI-rings
• 16. Unions of primes, prime avoidance, associated prime ideals, ACC on irreducible ideals, and Annihilator ideals in commutative rings
• 17. Dedekind’s theorem on the independence of automorphisms revisited
• 18. Snapshots of some mathematical friends and places
• From reviews of the first edition ...
[Regarding Chapter 18, “Snapshots of Some Mathematical Friends and Places”] These vignettes are really quite amusing as Faith has a keen (but kind) eye for the eccentricities and gifts of the
people he has come to know in the field.
The Times of Trenton
• This book offers a well-written and very detailed survey of a century of ring theory, module theory and, more generally, associative algebra. The author has selected a great many topics within
this ambit and has done an excellent job in presenting them.
Mathematical Reviews
Below you will find some links to useful material/projects.
Estimating Variance of Firm Effects
Estimating variance components of high-dimensional fixed effects is tricky because we end up squaring estimation noise, and this noise can be large due to limited mobility bias. I list some projects that develop procedures to deal with this problem.
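As a rough illustration (not from the page above, with made-up numbers), the sketch below shows why the naive plug-in variance of estimated firm effects overstates the true variance: each estimate equals the true effect plus noise, so squaring adds the noise variance. In practice the noise variance is not known and must itself be estimated, which is exactly what the procedures listed here address; the sketch simply assumes it is known.

```python
import numpy as np

rng = np.random.default_rng(0)

n_firms = 500
true_effects = rng.normal(0.0, 0.2, n_firms)        # true firm effects, sd = 0.2
noise_sd = 0.15                                      # estimation noise per firm effect (assumed known here)
estimated = true_effects + rng.normal(0.0, noise_sd, n_firms)

plugin_var = estimated.var()                         # naive plug-in variance (biased upward)
bias_corrected = plugin_var - noise_sd**2            # subtract the noise variance

print(f"true variance    : {true_effects.var():.4f}")
print(f"plug-in variance : {plugin_var:.4f}")        # roughly 0.04 + 0.0225
print(f"bias-corrected   : {bias_corrected:.4f}")
```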
Limitations of Ohm’s Law
The limitation of ohm’s law is that this law is not applicable to unilateral networks & nonlinear devices. The unilateral network permits the flow of current in one direction. Therefore, Ohm’s law is
not applicable for all electrical networks and devices. In this post, we will discuss what are its limitations & what are the reasons behind this.
Ohm’s law states that the electric current is proportional to the voltage and inversely proportional to the resistance.
Ohm’s Law
V = I × R (equivalently, I = V / R)
where V, I, and R are voltage, current, and resistance, respectively.
The V-I characteristic of an ohmic device is a straight line; hence, Ohm's law is applicable only when the resistance of the device remains constant.
The following are the limitations of Ohm's law:
1. Ohm's law is not applicable to unilateral electrical elements.
2. Ohm's law is not applicable to non-linear devices.
Example Cases of Ohm’s Law Limitations
In the following cases, Ohm's law is not applicable.
The diode is a non-ohmic, or non-linear, device. This means that the current flowing through the diode does not increase in linear proportion to the applied voltage. On increasing the applied voltage, the voltage across the depletion layer becomes almost constant, so no further increase of the voltage across the PN junction is possible; the current through the diode, however, keeps increasing with the applied voltage.
Thus, the diode current does not increase linearly with an increase of the applied voltage. Therefore, in this case, Ohm's law is not applicable.
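As an illustration (not part of the original article), the short sketch below contrasts an ohmic resistor with an idealized diode modelled by the Shockley equation; the saturation current and thermal voltage values are assumed purely for the example.

```python
import numpy as np

V = np.linspace(0.0, 0.8, 9)         # applied voltage in volts

# Ohmic resistor: current strictly proportional to voltage
R = 100.0                             # ohms
I_resistor = V / R

# Idealized diode (Shockley equation) -- assumed parameters for illustration
I_s = 1e-12                           # saturation current (A)
V_T = 0.025                           # thermal voltage (V) at room temperature
I_diode = I_s * (np.exp(V / V_T) - 1.0)

for v, i_r, i_d in zip(V, I_resistor, I_diode):
    print(f"V = {v:.1f} V   resistor: {i_r:.4e} A   diode: {i_d:.4e} A")
# The resistor column grows linearly with V; the diode column clearly does not.
```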
In the case of an incandescent lamp, the current through the lamp does not increase linearly with an increase in voltage. Why does this happen? The filament resistance increases with the rise in temperature, and thus the filament lamp has a non-linear characteristic. Here, Ohm's law is not applicable.
Almost all semiconductor devices, such as the diode, Zener diode, transistor, IGBT, SCR, TRIAC, and MOSFET, exhibit non-linear V-I characteristics. Therefore, we cannot apply Ohm's law to semiconductor devices.
Ohm’s law is not applicable for unilateral networks. The unilateral electrical network does not allow the flow of electric current in both directions. Examples of unilateral devices is diode and
Ohm’s law is not applicable for non-metallic conductors. All the insulating materials fall in the non-metallic conductor family. The current flowing through the insulating material is not
proportional to the applied voltage. Therefore, the VI equation is non-linear for the insulating materials.
The VI equation of insulating material is V=KI^m where k and m are constant and m<1,
Ohm’s law is not applicable in the case of arc discharge lamps. The resistance of the arc increase with the voltage, the current, the temperature, the pressure of the ionized gas, and the state of
the electrodes. As a result, the resistance of the arc lamp does not remain constant. And, therefore, the arc lamp does not follow ohm’s law.
Ohm’s law is not applicable in the case of electrolytes in which gases liberate on the electrodes. The reason behind this is the change in the resistance of the electrodes.
In a nutshell, ohm’s law is not applicable for those devices whose resistance does not remain constant.
High-Precision Positioning Using Plane-Constrained RTK Method in Urban Environments
High-precision positioning methods have drawn great attention in recent years due to the rapid development of smart vehicles as well as automatics driving technology. The Real-Time Kinematic (RTK)
technique is a mature tool to achieve centimeter-level positioning accuracy in open-sky areas. However, the users who drive under dense urban conditions are always confronted with harsh global
navigation satellite system (GNSS) environments. Skyscrapers and overpasses block the signals and reduce the number of visible satellites, making it difficult to achieve continuous and precise
positioning. Considering that the road is relatively smooth in most urban areas, vehicles are expected to travel on the same plane when they are close to each other. The road plane information is a
promising candidate to enhance the performance of the RTK method in constrained environments. In this paper, we propose a plane-constrained RTK (PCRTK) method using the positioning information from
cooperative vehicles. In a vehicle-to-vehicle (V2V) network, the positions of cooperative vehicles are used to fit a road plane for the target vehicle. The parameters of the plane fitting are treated
as new measurements to enhance the performance of the float estimator. The relationship between the plane parameters and the state of the estimator is derived in our study. To validate the
performance of the proposed method, several experiments with a four-vehicle fleet were carried out in open-sky areas and dense urban areas in Beijing, China. Simulations and experimental results show
that the proposed method can take advantage of the plane constraint and obtain more accurate positioning results compared to the traditional RTK method.
Global navigation satellite systems (GNSS) have been widely employed in vehicular applications to conduct real-time position estimation (Dabove, 2019; Du & Barth, 2008). One of the most competitive
and promising positioning methods, the Real-Time Kinematic (RTK) technique (Hofmann-Wellenhof et al., 2001), is feasible to provide centimeter-level positioning accuracy once carrier-phase
ambiguities have been correctly resolved. However, frequent signal blockages caused by skyscrapers and overpasses lead to degradation in satellite visibility (Alam & Dempster, 2013). In a harsh
environment, the positioning accuracy of the RTK method may decrease significantly, leading to unacceptable positioning errors for some safety-critical vehicular applications (Alam et al., 2012;
MacGougan et al., 2010).
To achieve reliable and precise positioning in urban scenarios, a feasible approach is to extend the RTK method by fusing other positioning sensors (Zhao et al., 2016). Lassoued et al. (2016)
proposed a tightly coupled integrated system using inertial navigation system (INS) and GNSS data, which realized robust positioning in various urban environments. Similarly, Qian et al. (2020)
proposed a cooperative RTK algorithm by jointly using INS and light detection and ranging (lidar) data, which achieved a higher fixed rate and position accuracy compared to the traditional RTK
method. Xiong et al. (2020) integrated ultra-wideband (UWB) sensors into the RTK method to improve the robustness of positioning. However, these methods require additional and expensive sensors to
obtain accurate sensing information, which is inapplicable to low-cost vehicles (hereinafter referred to as common vehicles). Common vehicles and sensor-rich vehicles coexist in current traffic
scenarios (Li et al., 2020). How to improve the positioning performance of common vehicles is still worth studying.
Owing to the fast development in the field of inter-vehicle communication (Zhang et al., 2018), vehicles equipped with communication devices are able to share navigation data with each other (Hu et
al., 2021). The cooperative positioning (CP) method based on vehicle-to-vehicle (V2V) communication is a promising approach to enhance the positioning accuracy and reliability of common vehicles (
Xiong et al., 2020; Song et al., 2020; Xiong et al., 2019). The theoretical features of achievable performance for CP have been deduced by Penna et al. (2010) and Schloemann and Buehrer (2015). Alam
and Dempster (2013) discussed the feasibility of conventional and modern CP systems in vehicular applications. Inspired by their studies, we previously proposed a cooperative positioning algorithm
that combined the benefits of the RTK method and the CP method (Zhuang et al., 2021). The accuracy of float solutions can be improved by fusing GNSS observations from neighboring vehicles. Thus, a
higher ambiguity fixed rate can be achieved compared to the conventional RTK method. However, the computational load would be relatively high because additional ambiguities between the ego vehicle
and cooperative vehicles must first be resolved.
As many researchers focus on the integration of cooperative GNSS measurements, some sensor-free environmental features that can also improve the positioning performance are seldom considered in CP
methods (Alam & Dempster, 2013). In urban scenarios, the road conditions are generally good and the roads are relatively smooth (Múčka, 2017), which means neighboring vehicles are usually traveling
on the same plane on which the ego vehicle is located. Therefore, we can use the plane information obtained from cooperative vehicles as new measurements to constrain the float estimator of the
traditional RTK method.
In this paper, we propose a plane-constrained RTK (PCRTK) method using positioning solutions from cooperative vehicles. By collecting positioning data of cooperative vehicles traveling on the same
road, we derive a plane equation and use the parameters of the plane equation to constrain the float estimator. The main contributions of our work include the design of a plane construction method
using the positioning solutions shared by cooperative vehicles and the derivation of a float estimator based on an adaptive Kalman filter that fuses the plane parameters with GNSS observations. We
carried out several field tests involving four vehicles to collect real data to verify the proposed method. Numerical simulations and experimental results validate the feasibility and superiority of
our method.
The paper is organized as follows: In Section 2, we introduce the construction of the height plane; in Section 3, we describe the main procedure of the PCRTK method; then, the proposed method is
validated by several simulations and experiments in Section 4 and Section 5; discussions on the limits of the proposed method are presented in Section 6; and finally, we conclude our work in Section 7.
Consider the following scenario: Vehicles traveling on a smooth road can share their positioning data and the distance of their GNSS antennas from the ground through inter-vehicle communications. A
vehicle can receive the data and then use them to fit a plane where the neighboring vehicles are traveling. To make it easier to understand the process of plane fitting, we define a target vehicle in
this section and the others are regarded as cooperative vehicles. The process of plane construction can be divided into two parts: the construction of the road plane and the construction of the
antenna height plane.
A GNSS antenna is generally placed on the roof of a vehicle, and vehicle heights differ from one model to another. If the plane is constructed using the positions of the antennas directly, large errors are inevitable in fitting the plane on which the vehicles are truly located, and the positions of the target vehicle are likely to be pulled toward a faulty
plane. Therefore, a road plane is first constructed by removing the height of the antennas of the vehicles. Then, we introduce the height of the antennas into the plane again. In this way, a plane
expression that constrains the positions of the vehicles can be obtained.
Generally, the Earth-centered, Earth-fixed (ECEF) coordinate system is used. To deal with the height information of vehicles, the coordinate system is converted from ECEF to the local Cartesian
coordinate system (East-North-Up or ENU) through an S matrix. Then, the baseline between the position of a vehicle cast to the road plane and the reference station can be expressed as: 1
where b[i,k] represents the baseline between the cooperative vehicle i and the reference station at the k-th epoch and h[i] is the antenna height of vehicle i. After eliminating the influence of
vehicle height, we add the location of the reference station b[o] and calculate a new conversion matrix S′ from the projection of the vehicle on the road to the center of the Earth, and then convert
the ENU coordinate system back to ECEF again: 2
Relative positions are used rather than absolute ones, as it is more convenient for us to introduce the plane constraint into the float estimator.
According to Equation (2), a point set can be obtained at epoch k. Only the data of one epoch may not be enough for fitting a plane. In our study, we selected the data of five consecutive epochs
closest to the current time to fit a plane. The set for fitting a plane is defined as V′ in this paper. We, by default, utilized all the available positioning solutions in V′ to fit the plane. The
least-squares method (Hurt & Colwell, 1980) was used to fit the plane α, whose distance to these positioning solutions was minimal: 3 4
where mean(·) is the mean function, SVD(·) is the singular value decomposition (SVD) function, and U, S, V are the corresponding matrices.
After the SVD, the eigenvector corresponding to the minimum singular value is the normal vector of the plane, which is expressed as: 5
where end represents the index of the minimum singular value. Meanwhile, the normal vectors also satisfy the following relationship: 6
The constant term can be expressed as: 7
Then, the expression of the road plane is defined as: 8
where x, y, z are the three-dimensional coordinates under the ECEF frame.
After the construction of the road plane, we can obtain the plane of the target vehicle by translating the road plane up by the height of the target vehicle h[o] as: 9
where: 10
where the left side of Equation (10) does not have the sign of absolute value since the direction of translation is known. This equation can be further simplified according to Equation (6) as: 11
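To make the plane-construction step concrete, here is a minimal numerical sketch (not the authors' code; variable names, the toy local coordinates, and the use of NumPy are assumptions). It fits a plane to a set of ground-projected antenna positions by centering the points and taking the right singular vector of the smallest singular value as the plane normal, then shifts the constant term by the target vehicle's antenna height.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3-D points via SVD.
    Returns a unit normal n and constant d such that n.x + d = 0 on the plane."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                       # singular vector of the smallest singular value
    d = -normal @ centroid                # constant term of the plane equation
    if normal[2] < 0:                     # orient the normal upward (toy local frame)
        normal, d = -normal, -d
    return normal, d

def target_vehicle_plane(road_normal, road_d, antenna_height):
    """Translate the road plane along its (unit, upward) normal by the target antenna height."""
    return road_normal, road_d - antenna_height

# toy example: slightly noisy points near the plane z = 0.01*x + 2.0
rng = np.random.default_rng(1)
xy = rng.uniform(-50, 50, size=(20, 2))
z = 0.01 * xy[:, 0] + 2.0 + rng.normal(0, 0.02, 20)
pts = np.column_stack([xy, z])

n, d = fit_plane(pts)
n_t, d_t = target_vehicle_plane(n, d, antenna_height=1.8)
print("road plane normal:", np.round(n, 4), " d =", round(d, 3))
print("target plane d   :", round(d_t, 3))
```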
It is worth noting that the premise of using the plane constraint is that the vehicles are traveling on the same plane and the positioning solutions used to fit the planes are accurate enough.
Therefore, it is necessary to check whether the positioning solutions used for fitting the planes meet this requirement before introducing the plane constraint into our method. In this paper, a plane
detection procedure was conducted by comparing the residual of plane fitting with a preset threshold.
In our study, we focus on the average value of the residuals in fitting a plane, which is calculated by: 12
where: 13
Here, N is the number of cooperative vehicles. It is recommended that at least two vehicles participate in fitting the plane, ideally traveling in different lanes. In this way, the plane can
be constructed from the perspective of geometry. k[start] and k[end] represent the starting epoch and the end epoch for selecting positioning results. d[i,k] is the distance between the position and
the fitted plane. If all the cooperative vehicles travel on the same plane and their positions are accurate enough, the residual of plane fitting would be small, which is regarded as a normal case in
our study. Otherwise, the positions of some cooperative vehicles would be far away from the plane and the residual would be larger, which is regarded as a faulty case. Therefore, whether the
positioning solutions used for plane fitting belong to the same plane or not can be determined by: 14
where T[fitting] is the threshold for plane detection. The value of the threshold depends on the distribution of the residuals of plane fitting and the false alarm rate. In our study, precise
positioning results collected in a flat and open-sky area are used to calculate the distribution function and the threshold. The details on how to use the distribution of the residuals and false
alarm rate to calculate the threshold are given in Section 4. If the plane fitting cannot pass detection, an iterative procedure is implemented to remove the positioning result with the largest
residual in plane fitting one by one, which is similar to the fault exclusion procedure in the Receiver Autonomous Integrity Monitoring (RAIM) method (Hsu et al., 2017). The removal procedure is
repeated until the plane detection has passed or the number of remaining positioning results used to fit the plane is less than a pre-set value, which is set to 10 in this paper.
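The detection-and-exclusion loop described above can be sketched as follows (an illustrative implementation under assumed interfaces, not the paper's code; it relies on fit_plane from the previous sketch): compute each point's distance to the fitted plane, compare the mean residual with the threshold, and drop the worst point and refit until the test passes or too few points remain.

```python
import numpy as np

def plane_residuals(points, normal, d):
    """Unsigned distances of points to the plane n.x + d = 0 (n assumed unit length)."""
    return np.abs(points @ normal + d)

def detect_and_exclude(points, threshold, min_points=10):
    """Iteratively refit the plane, removing the worst-fitting point each round.
    Returns (normal, d, kept_points) or None if detection never passes."""
    pts = np.asarray(points, dtype=float)
    while len(pts) >= min_points:
        normal, d = fit_plane(pts)                 # fit_plane from the previous sketch
        res = plane_residuals(pts, normal, d)
        if res.mean() <= threshold:                # plane detection passed
            return normal, d, pts
        pts = np.delete(pts, np.argmax(res), axis=0)   # exclude the largest residual
    return None                                    # fall back to the conventional RTK method
```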
The positions of the target vehicle can be constrained by this plane. Based on the constructed plane, a Kalman filter is employed to calculate float solutions. However, in high dynamic scenarios,
multipath, non-line-of-sight (NLOS), and other types of interference make it challenging to determine the noise covariance matrix Q and R correctly, which may greatly affect the accuracy of
estimation (Liao et al., 2017). Thus, we employ an adaptive Kalman filter to update the noise matrix and improve the stability of the estimator.
The architecture of the proposed method is illustrated in Figure 1. In the first stage, the vehicle plane is constructed by the positioning results of cooperative vehicles. Then, an adaptive Kalman
filter estimator is used to resolve the float solutions. Finally, the float solutions are sent to an integer ambiguity resolution model to calculate the fixed solutions. The final solutions are saved
and used for the plane construction of other vehicles in the next epoch.
3.1 GNSS Double-Differenced (DD) Observation Model
Double-differenced (DD) pseudorange and carrier-phase observations can be expressed as:

∇Δρ[rb]^ij = ∇Δp[rb]^ij + ∇Δε[ρ]   (15)

λ·∇Δϕ[rb]^ij = ∇Δp[rb]^ij + λ·∇ΔN[rb]^ij + ∇Δε[ϕ]   (16)

where ∇Δ represents the DD calculation. ρ and ϕ are the original observations of pseudorange and carrier phase, respectively, p is the actual distance between the satellite and receiver, λ and N are the wavelength and integer ambiguity of the carrier phase, and ε[ρ] and ε[ϕ] are the corresponding measurement noises. The subscript rb denotes the difference between the corresponding terms of rover and base, while the superscript ij denotes the difference between the j-th satellite and the i-th (reference) satellite. As the baseline between the vehicles and the base station is usually short, the ionospheric and tropospheric delays are negligible in the DD observations.

To further extract the baseline vector from the DD equations, we linearize and keep the first-order term:

∇Δp[rb]^ij ≈ (e^i − e^j)^T · b[ur]   (17)

where e^i and e^j represent the unit line-of-sight vectors from the receiver to the corresponding satellites and b[ur] represents the baseline vector.
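As a small illustration of the double-differencing operation itself (toy numbers, not from the paper's data):

```python
import numpy as np

def double_difference(obs_rover, obs_base, ref_idx=0):
    """Between-receiver, between-satellite double differences of raw observations.
    obs_rover/obs_base: observations to the same satellites; ref_idx selects the
    reference satellite."""
    single = np.asarray(obs_rover) - np.asarray(obs_base)     # between receivers
    return np.delete(single - single[ref_idx], ref_idx)       # between satellites

# toy pseudoranges (m) to four satellites
rover = np.array([21_100_500.3, 22_340_125.7, 20_870_410.2, 23_010_990.8])
base  = np.array([21_100_480.1, 22_340_110.9, 20_870_395.0, 23_010_970.2])
print(double_difference(rover, base))   # three DD observations w.r.t. satellite 0
```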
3.2 Float Estimator Based on Adaptive Kalman Filters
In our proposed method, the state vector is defined as:

X = [b[ur]^T, v[ur]^T, a[ur]^T, ∇ΔN^T]^T   (18)

where b[ur], v[ur], and a[ur] are the baseline, velocity, and acceleration vectors between the rover and base station, and ∇ΔN represents the DD float ambiguity vector.
The system model is defined as:

X[k,k−1] = F·X[k−1] + ω[k]   (19)

P[k,k−1] = F·P[k−1]·F^T + Q   (20)

where X[k,k−1] is the predicted state vector at the current epoch and X[k−1] is the state vector of the previous epoch. F represents the system state transition matrix and ω[k] is the process noise at epoch k, while Q is its covariance matrix. P[k,k−1] is the predicted covariance matrix propagated from the estimated covariance matrix P[k−1] of the previous epoch.
The state transition matrix is written as: 21 22 23
where τ is the filtering period and I[m] is the identity matrix with size m. The observation vector is defined as: 24
where ∇Δρ and ∇Δϕ are the DD pseudorange and carrier-phase observations and the remaining element is the constant coefficient in Equation (9). Then, the observation model is defined as:

Z[k] = H·X[k] + v[k]   (25)

where Z[k] is the current observation vector, H is the observation matrix, and v[k] is the observation noise, while R is its covariance matrix.
The observation matrix is written as: 26
where: 27 28 29
where P[A], P[B], and P[C] are the corresponding coefficients in Equation (9).
The basic equations of the Kalman filter are written as:

K[k] = P[k,k−1]·H^T·(H·P[k,k−1]·H^T + R)^(−1)   (30)

X[k] = X[k,k−1] + K[k]·(Z[k] − H·X[k,k−1])   (31)

P[k] = (I − K[k]·H)·P[k,k−1]   (32)

where K[k] represents the Kalman gain, X[k] is the posterior state vector, and P[k] is the estimated covariance matrix.
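The sketch below illustrates how the plane constraint can be folded into the measurement update as one extra linear row appended to the GNSS rows. It is an illustration only: the paper's observation matrix is defined by Equations (26)-(29), which are not reproduced here, and the dimensions, placeholder GNSS geometry, noise values, and the exact form of the plane pseudo-measurement are all assumptions.

```python
import numpy as np

def kf_update(x, P, z, H, R):
    """Standard Kalman measurement update (Equations 30-32)."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

def plane_constraint_row(normal, d, base_pos, n_states):
    """One extra linear measurement from the plane n.p + d = 0, with p = base_pos + baseline.
    The baseline is assumed to occupy the first three state elements."""
    h = np.zeros(n_states)
    h[:3] = normal
    z = -(d + normal @ base_pos)          # pseudo-measurement value for the baseline
    return h, z

# --- toy example with assumed dimensions and noise levels ---
n_states = 9 + 4                           # baseline, velocity, acceleration + 4 DD ambiguities
x = np.zeros(n_states)
P = np.eye(n_states) * 10.0

# placeholder GNSS rows; in the real filter these come from the DD geometry (Eqs. 26-29)
H_gnss = np.zeros((4, n_states))
H_gnss[:, :3] = np.random.default_rng(2).normal(size=(4, 3))
z_gnss = np.zeros(4)

normal, d = np.array([0.0, 0.0, 1.0]), -2.0            # plane z = 2 in local coordinates
h_plane, z_plane = plane_constraint_row(normal, d, base_pos=np.zeros(3), n_states=n_states)

H = np.vstack([H_gnss, h_plane])
z = np.append(z_gnss, z_plane)
R = np.diag([1.0] * 4 + [0.05])            # the plane row gets its own (assumed) noise variance

x, P = kf_update(x, P, z, H, R)
print("updated baseline estimate:", np.round(x[:3], 3))
```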
In a general Kalman filter estimator, the process noise matrix Q and the observation noise matrix R are preset values. Thus, the influence of the surrounding environments has been ignored. In order
to obtain a more accurate noise matrix, we adopted the innovation-based adaptive Kalman filter method proposed by Mohamed and Schwarz (1999) to update the noise matrix at each epoch. If Q and R are
estimated simultaneously, the estimator can be misled by their relation and even diverge. To avoid this problem, a feasible solution is to estimate just one of them instead of both of them.
Therefore, we only estimate the update measurement noise matrix R while the system noise matrix Q is regarded as a constant matrix.
The elements of the innovation-based sequence are defined as the difference between observations and prediction values, which is written as:

d[k] = Z[k] − H·X[k,k−1]   (33)

The variance of the innovation-based sequence is defined as:

C[d,k] = E[ d[k]·d[k]^T ]   (34)

In practical situations, it can be calculated by:

C[d,k] ≈ (1/L) · Σ_{j=j0}^{k} d[j]·d[j]^T   (35)

j0 = k − L + 1   (36)
where k represents the current epoch and L is the window size of the innovation-based sequence. Since divergence may occur if the number of equations required to estimate the unknown adaptive
parameters is smaller than the number of unknowns, themselves, a window size larger than the number of update measurements is needed when adapting R and a window size larger than the number of filter
states is required when adapting Q.
The theoretical value of such variance is defined as:

C[d,k] = H·P[k,k−1]·H^T + R   (37)

Let Ĉ[d,k] denote the value computed from Equations (35)–(36); then the Q and R matrices can be estimated by:

R[k] = Ĉ[d,k] − H·P[k,k−1]·H^T   (38)

Q[k] = K[k]·Ĉ[d,k]·K[k]^T   (39)
Although we give the derivation of both Q and R, only the update measurement noise matrix R is estimated while the system noise matrix Q is regarded as a constant matrix.
It is worth noting that if Q is not positive semi-definite or R is not positive definite, the estimator may diverge. This phenomenon occasionally occurs in severe multipath-affected areas. If R is not
positive definite or Q is not positive semi-definite, we use a constant R or Q instead. The output of this filter is the float baselines and float ambiguities. The next step is to convert the float
carrier-phase ambiguities into integer carrier-phase ambiguities.
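A compact sketch of the innovation-based adaptation of R in the style described above (the window size, the fallback matrix, and the class interface are assumptions, not the authors' implementation):

```python
import numpy as np
from collections import deque

class AdaptiveR:
    """Estimate the measurement-noise covariance R from a sliding window of innovations."""

    def __init__(self, window, r_fallback):
        self.innovations = deque(maxlen=window)
        self.r_fallback = r_fallback              # constant R used when the estimate is invalid

    def update(self, innovation, H, P_pred):
        self.innovations.append(np.asarray(innovation, dtype=float))
        d = np.array(self.innovations)
        C_d = d.T @ d / len(d)                    # sample covariance of the innovation sequence
        R_hat = C_d - H @ P_pred @ H.T            # from the relation C_d = H P H^T + R
        # keep R only if it is positive definite; otherwise fall back to the constant matrix
        try:
            np.linalg.cholesky(R_hat)
            return R_hat
        except np.linalg.LinAlgError:
            return self.r_fallback
```

As the text notes, the window should be larger than the number of update measurements for the estimate of R to be well conditioned.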
3.3 Integer Ambiguity Resolution
After the Kalman filter, the float state vector and the float ambiguity vector can be obtained. To fix the DD float ambiguities, the well-known least-squares ambiguity decorrelation adjustment (LAMBDA) method is used (Teunissen, 1995). By searching over a set of integer grid points near the float solution, LAMBDA finds candidates that satisfy the equation:

(N − N[float])^T · Q[N]^(−1) · (N − N[float]) ≤ χ^2   (40)

where N[float] is the float DD ambiguity vector, Q[N] is the estimated covariance matrix for the float DD ambiguities, and χ^2 is the size of the searching space. After the searching step, a ratio test is employed as the acceptance test:

( (N[2nd] − N[float])^T · Q[N]^(−1) · (N[2nd] − N[float]) ) / ( (N[1st] − N[float])^T · Q[N]^(−1) · (N[1st] − N[float]) ) ≥ ξ   (41)

where N[1st] and N[2nd] are the best and second-best candidates, respectively, and ξ is the threshold of the ratio test, which is set to 3.
When the best candidate passes the ratio test, we can get the fixed solutions by updating:

X[fixed] = X[float] − Q[XN]·Q[N]^(−1)·(N[float] − N[fixed])   (42)

where Q[XN] is the estimated covariance matrix between the float state vector and the DD ambiguity vector and N[fixed] is the fixed ambiguity vector.
In our study, we adopt an instantaneous mode rather than a fix-and-hold mode for handling the integer ambiguity resolution. This means that the integers are resolved in each epoch independently and
the integer fixes are not maintained. At each epoch, we try to fix all the ambiguities rather than just some part of them.
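For illustration only, here is a simplified acceptance step: real implementations use the LAMBDA decorrelation and search (Teunissen, 1995), whereas this sketch merely ranks nearby integer candidates by their weighted distance to the float solution and applies the ratio test with threshold ξ = 3. The float ambiguities and their covariance in the example are made up.

```python
import numpy as np
from itertools import product

def simple_ratio_test(N_float, Q_N, xi=3.0):
    """Rank integer candidates around the float ambiguities and apply the ratio test.
    Returns the best candidate if accepted, otherwise None (keep the float solution)."""
    Q_inv = np.linalg.inv(Q_N)
    base = np.round(N_float).astype(int)
    # enumerate candidates within +/-1 of the rounded float solution (a crude stand-in
    # for the LAMBDA search over an ellipsoidal region)
    candidates = [base + np.array(off) for off in product([-1, 0, 1], repeat=len(base))]

    def dist(N):
        e = N_float - N
        return float(e @ Q_inv @ e)

    candidates.sort(key=dist)
    best, second = candidates[0], candidates[1]
    if dist(second) / max(dist(best), 1e-12) >= xi:
        return best
    return None

# toy example with an assumed float solution and covariance
N_float = np.array([3.1, -2.05, 7.9])
Q_N = np.diag([0.01, 0.02, 0.015])
print(simple_ratio_test(N_float, Q_N))      # -> [ 3 -2  8] if the ratio test passes
```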
3.4 Extending the PCRTK Method to a Cooperative Network
In the above subsections, we focus on how to apply the plane-constrained method to a specific vehicle. Only the target vehicle is assumed to conduct the proposed PCRTK method, while the others are
regarded as information providers that calculate their positions by using a non-cooperative method. In this section, we discuss how to apply the proposed method to each vehicle in a cooperative network.
Figure 2 depicts the structure of a cooperative network involving a total of M vehicles. All the vehicles in this network belong to peer nodes, which means these vehicles will share information
between each other and conduct the same positioning algorithm. b[i,k] represents the positioning results of vehicle i at epoch k. h[i] denotes the distance from the GNSS antenna of vehicle i to the
ground, which is a fixed value measured for each vehicle. We assume that all the vehicles have obtained their positioning results of epoch k.
Taking Vehicle i as an example, it will broadcast its own positioning data b[i,k] of epoch k to the others and receive the positioning data from peer nodes simultaneously. The positioning results of
peer nodes from epoch k − N[fitting] + 1 to k − 1 are also stored by Vehicle i, where N[fitting] is the number of positioning solutions provided by each vehicle for fitting the planes. Vehicle i
utilizes these positioning solutions to fit a plane, and calculates its position of epoch k + 1 using the PCRTK method. Once Vehicle i obtains its position solution of epoch k + 1, it will broadcast
the latest positioning solution to the others and then receive new positioning solutions from peer nodes for calculating its own position at the next epoch.
The same process can be applied to the other vehicles in this network. In this way, all the vehicles in this network can benefit from the proposed method. Since the proposed method is distributed and
adopts the time series of positioning solutions to fit the planes, the demand for time delays is reduced compared to other ranging-based cooperative positioning methods.
It is worth noting that the resulting error correlation is inevitable due to the feedback from the other receivers in the network. If the PCRTK position solution of vehicle A suffers from a large
error, vehicle B cooperating with vehicle A will be affected when fitting the plane without any fault exclusion. Then the position solution of vehicle B will be contaminated and affect vehicle A in
turn, resulting in error correlation. Fortunately, the plane detection algorithm proposed in Section 2 can effectively reduce the influence of the resulting error correlation. An iterative procedure
is implemented to remove the outliers in plane fitting one by one. In this way, we can avoid utilizing position solutions with large errors to fit the plane to ensure the reliability of the plane
constraint and the stability of PCRTK method.
To evaluate the feasibility of the proposed method, simulation results are given in this section. These simulations were based on GNSS data collected in a ground vehicle test, which was conducted in
open-sky areas in Beijing, China. The road was extremely smooth with few bumps and there were few cars traveling in the test areas, so we could drive the test vehicles in different formations.
Figure 3 presents the test route in which the red star denotes the location of the reference station. Four vehicles (referred to as V1, V2, V3, and V4, respectively) were involved in the test and
traveled along the same route during the test. Each car was equipped with a GNSS receiver to collect GNSS measurements. In this test, V1 was equipped with a GNSS receiver named M300, which belongs to
a consumer-grade receiver. V2, V3, and V4 were equipped with NovAtel OEM628, OEM7500, and Trimble BD992 receivers, respectively. The data of GPS L1/L2 and BeiDou B1/B2 were collected at 1 Hz during
the test. The distance of the GNSS antennas from the ground was measured for each vehicle before the test.
The INS data were also collected by each vehicle as an input to the post-processing system (NovAtel Inertial Explorer). The raw GNSS observations of all-in-view satellites together with INS data and
the precisely known location of the reference station were processed using NovAtel Inertial Explorer to calculate the absolute reference solutions for each vehicle. Since all the GNSS data were
collected under good observation conditions, the horizontal root-mean-square error (RMSE) of the reference solutions could reach 0.01 m using RTK corrections under standard vehicle dynamics.
The sky plot of visible satellites observed by the reference station is shown in Figure 4. The initial G denotes GPS satellites and C denotes BeiDou satellites. The average number of satellites
observed by the reference receiver was about eight for GPS and 15 for BeiDou. To simulate signal blockages in urban areas, the measurements of the satellites whose elevation angles were less than 45
degrees were removed from the positioning process of both the proposed method and traditional RTK method in the following simulations.
4.1 Simulations on Plane Uncertainty
We imposed a constraint on the float estimator of the traditional RTK method by introducing a plane in which the vehicles would be traveling. The accuracy of this plane determines the performance of
the proposed method. Therefore, the influence of plane uncertainty on the performance gain for the PCRTK method is analyzed in this subsection.
Since the plane is constructed using the positioning results of cooperative vehicles, the precision of the positioning results provided by these cooperative vehicles would definitely affect the
uncertainty of the constructed plane and determine the benefit of using our method. To verify that the proposed method can benefit from the constraint of an accurate plane, we first adopted the
reference solutions of the test vehicles to construct the planes and conduct the PCRTK method. Since the post-processed reference solutions are extremely precise and the vehicles were traveling on a
flat road, the constructed planes would be quite accurate and the proposed method was expected to show its best performance in this case.
Considering that the vehicles can be treated as peer nodes and all the vehicles traveled under the same conditions, we only take Vehicle 4 (V4) as an example in this section. Figure 5 depicts the
positioning errors’ cumulative distribution function (CDF) of PCRTK for V4 in the case that the plane was constructed using the reference solutions of cooperative vehicles. To make a comparison, the
positioning errors of traditional RTK using the same data are also presented in this figure. Both PCRTK and RTK solutions were computed with a 45-degree mask angle to simulate signal blockages.
Compared to the traditional RTK method, the proposed method shows an improvement in positioning accuracy. The proportion of positioning errors less than 1 m was 88.48% for the RTK method, rising to
96.76% for the proposed method. The plane constraints can be treated as additional measurements, which make a great contribution to the improvement in positioning accuracy.
To simulate the uncertainty of the constructed planes, white Gaussian noise (WGN) with various standard deviations was added to the reference solutions of the test vehicles when we constructed the
planes. More specifically, the WGN with various standard deviations was added to the three directions of reference solutions in ENU coordinates. In simulations, the standard deviation of the WGN
ranged from 0 m to 3.5 m at intervals of 0.5 m. Figure 6 shows the RMSE of the PCRTK method for V4 in which the plane was constructed using reference solutions contaminated by WGN with various
standard deviations. For comparison, the RMSE of traditional RTK is also marked in this figure, which is equal to 1.72 m.
The proposed method saw an upward trend in RMSE when the standard deviation increased from 0 m to 3.5 m, with the figure climbing from 0.58 m to 2.06 m. The reason for the degradation of positioning
accuracy is that the inaccurate positioning solution used for constructing the plane resulted in an increased uncertainty of the constructed plane. If the plane parameters are treated as additional
measurements in the float estimator, the increased uncertainty of the plane means an increase in observation noise. The magnitude of plane uncertainty can be reflected by the residuals of plane
fitting, which was introduced in Equation (12). Figure 7 depicts the mean residuals of plane fitting when we introduced WGN with various standard deviations to reference solutions. It can be seen
that the mean residuals of plane fitting increase with the growth of standard deviation.
Since the positioning solutions provided by cooperative vehicles may not be accurate enough and we cannot rule out the possibility that vehicles travel on different planes, the method of selecting an
appropriate threshold for constraining the plane uncertainty is extremely important. It is worth noting that the RMSE of the proposed method is larger than the traditional RTK method when the standard
deviation of the WGN added to reference solutions is greater than 3 m, which can be found in Figure 6. To ensure that the proposed method can benefit from the constraint of the plane, it is better to
set a threshold for plane fitting, which was originally mentioned in Section 2. The threshold can be set according to the distribution of the residuals of plane fitting and a false alarm rate. We
recorded the residuals of plane fitting in the case that WGN with a standard deviation of 2.5 m was introduced into the reference solutions. Since the RMSE of the PCRTK method was still lower than
the RTK method in the case of a standard deviation of 2.5 m, the distribution of the residuals in this case was selected to calculate the threshold. Figure 8 presents the residuals of plane fitting
in the case that WGN with a standard deviation of 2.5 m is added to the reference solutions. In our study, the false alarm rate was set to 1%. The final threshold was set to 2.6 m, which is used in
the following simulations.
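The threshold selection described here amounts to an empirical-quantile rule; the sketch below is an illustration with synthetic residuals, not the authors' data (a 1% false alarm rate corresponds to the 99th percentile of fault-free residuals).

```python
import numpy as np

def fitting_threshold(residuals, false_alarm_rate=0.01):
    """Threshold on the mean plane-fitting residual for a given false alarm rate,
    taken as the (1 - P_fa) quantile of residuals observed under fault-free conditions."""
    return float(np.quantile(residuals, 1.0 - false_alarm_rate))

# synthetic fault-free residuals standing in for the recorded ones
rng = np.random.default_rng(3)
residuals = np.abs(rng.normal(0.0, 1.0, 5000))
print(f"T_fitting = {fitting_threshold(residuals):.2f} m")
```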
4.2 Performance of Extending PCRTK Method to a Cooperative Network
In Section 3, we introduced how to extend the PCRTK method to a cooperative network. If the cooperative vehicles of a vehicle also use the PCRTK method rather than non-cooperative methods such as
traditional RTK, this vehicle would be expected to obtain more accurate positioning results since the positioning results used to fit the planes would be more accurate. To verify the benefit of
extending the proposed method to a cooperative network, we make a comparison between the positioning results with and without access to a cooperative network. Taking V4 as an example, all the
cooperative vehicles would adopt the traditional RTK method in the case that they are without access to a cooperative network, while the PCRTK method would be used by all the cooperative vehicles in
the case that a cooperative network was available.
Table 1 gives the statistical results of V4 for the PCRTK method with and without cooperative network access. The results of the traditional RTK method are also given in this table so that we are
able to see if the proposed method could benefit from the constructed planes in two cases. CEP95 represents the circular error probable at the 95% level. It can be seen that the proposed method shows
better positioning performance than the traditional RTK method whether the cooperative network was used or not. Compared to the case without a cooperative network, the PCRTK method with a cooperative
network shows a drop in positioning errors, with the figure of RMSE decreasing from 1.01 m to 0.63 m. The improvement in positioning accuracy can be attributed to the improved precision of plane
fitting. When the cooperative vehicles also use the PCRTK method, it is feasible to provide a more accurate positioning solution for V4 to fit the plane. The reason that CEP95 and RMSE cannot reach
centimeter-level accuracy is that there are still some float solutions with relatively large errors. Therefore, it is difficult to achieve centimeter-level positioning accuracy overall.
Since we verified the benefit of cooperative networks, we adopted the PCRTK method with a cooperative network in the following simulations and experiments. This means that all the vehicles utilized
the proposed method to calculate their positions simultaneously.
4.3 Comparisons Between Different Formations
During the experiment, we attempted to drive the vehicles in different formations to evaluate the performance of the proposed method. Four kinds of formations were considered in the experiment that
are shown in Figure 9. Since Vehicle 4 was used in the previous simulations, we also take V4 as an example in this section. The green represents V4 and the blue denotes the other vehicles.
Table 2 gives the statistical results for V4 under different driving formations. It can be seen that the proposed method with formation (a) shows the worst positioning performance. Theoretically,
points on the same line cannot be used to fit a specific plane. However, the positions of GNSS antennas are unlikely to keep a line even if the vehicles were traveling in a line. Therefore, the
positioning results can also be used to fit the planes. In this case, a slight increase in the errors of the provided positioning solution might result in a great uncertainty of the constructed
plane. Since the plane was constructed inaccurately, the positioning accuracy for formation (a) was also degraded. As for the cases of formations (b), (c), and (d), the proposed method shows similar
positioning performance. The results of formation (b) show the best performance among these four cases because the vehicles traveled in formation (b) had better geometric distribution. The
constructed plane was, thus, more accurate and reliable in this case.
To evaluate the performance of the proposed method under degraded environments, several dynamic experiments were carried out in Zhongguancun E-park in Beijing, China, on November 24, 2021. The test routes are depicted in Figure 10. A reference station with a precisely known position was set up in an open-sky area within 2 kilometers of the test routes to collect raw observations for differential GNSS.
Four vehicles (also referred to as V1, V2, V3, and V4) traveled along the same routes in the experimental areas where high-rise buildings strongly challenge GNSS-based positioning performance.
During the experiments, V1 was equipped with a NovAtel SPAN-ISA-100C integrated navigation system as shown in Figure 11. The other three vehicles were also equipped with an integrated navigation
system named NPOS220, which consisted of a NovAtel OEM7500 receiver and an EPSON G320N inertial measurement unit (IMU). The collected raw GNSS measurements, together with IMU data of the vehicles,
were sent to the post-processing software NovAtel Inertial Explorer to calculate the reference solution for each vehicle using RTK corrections. Only GNSS data were used to analyze the positioning
performance of the proposed method and the control group.
Figure 12 presents the satellite visibility of V4 during the experiments; the initial G denotes GPS satellites, the initial J represents Quasi-Zenith Satellite System (QZSS) satellites, and the
initial C indicates BeiDou satellites. The green points represent the epochs when L1/2 or B1/2 could be received. The yellow points denote the epochs when only L1 or B1 could be received. The red
points indicate the epochs when only L2 or B2 were available. It can be seen that the loss of lock on GNSS signals occurs frequently in highly degraded environments. Figure 13 shows the number of
visible satellites (GPS and BeiDou) observed by V4.
The statistical results of all the vehicles are listed in Table 3. The RMSE in the vertical direction is also included in the statistics. To make a comparison, the results of the traditional RTK
method are given in this table. It is noted that we adopted the PCRTK method with access to a cooperative network, which means all the vehicles conducted the PCRTK method simultaneously. Besides, the
Receiver Autonomous Integrity Monitoring (RAIM) method was used to detect and isolate the multipath-affected measurements with large ranging errors before positioning. It can be seen that all the
vehicles could achieve better positioning performance by using the proposed plane-constraint method.
Taking Vehicle 2 (V2) as an example, the proposed method shows a decline in horizontal RMSE compared to the traditional RTK method, with the figure dropping from 6.74 m to 2.14 m. The PCRTK method
also saw a remarkable improvement in vertical positioning accuracy with the figure of vertical RMSE decreasing from 14.93 m to 2.36 m for V2. The performance gain in the vertical direction was
greater than that of the horizontal direction. For the traditional RTK method, the position errors in the vertical direction were obviously larger than those in the horizontal direction. However, for
the PCRTK method, the position errors in the vertical direction were closer to those in the horizontal direction. The greater benefit in the vertical direction can be attributed to the constraint of
the plane on the float estimator in this direction.
The fixed rate of the PCRTK and traditional RTK methods was almost the same for the four vehicles in the experiment. This means that the integer ambiguities were still difficult to resolve in
multipath-affected scenarios, even if the float solution was constrained to a plane. This phenomenon does not occur in simulations because the raw measurements used in simulations are relatively
clean and not contaminated by multipath signals. Although the ambiguities were still difficult to resolve, the overall positioning accuracy definitely improved, because the precision of float
solutions increased owing to the constraint of the road plane. Vehicle 1 showed the best positioning performance among the four vehicles, as its receiver had a stronger ability to resist multipath interference.
We imposed a constraint on the float estimator of the conventional RTK method by introducing a plane in which the vehicles were traveling. The accuracy of this plane determined the performance of the
proposed method. Since this plane was constructed using the positioning results of cooperative vehicles, the precision of the positioning results provided by these cooperative vehicles affected the
benefits of our method.
In our experiments, the vehicles kept close to each other so that they were more likely to travel on the same plane. These vehicles were likely to observe the same satellites and, thus, the
observation quality of the vehicles was similar to each other. In urban canyons, closely traveling vehicles suffer from similar signal blockages and multipath. The cooperative vehicles might, at times,
provide some inaccurate positioning results for plane fitting. In this case, the proposed method can be limited. Fortunately, we can detect the presence of inaccurate positioning solutions using the
plane detection algorithm from Section 2. In this case, the plane constraint would not be introduced to the float estimator and we would, instead, only adopt the traditional RTK method.
A feasible way to solve this problem is to cooperate with the vehicles that keep a certain distance from the host vehicle, since these cooperative vehicles might have better observation conditions
and could potentially provide more accurate positioning solutions for plane fitting. However, vehicles that are too far away from each other may belong to different planes. The maximum separation
between the host vehicle and the cooperative vehicle depends on the road conditions, which will be studied in future work.
Although the proposed method is mainly aimed at vehicles traveling on a flat road, it can also be applied to some hilly urban areas. This is the reason that we introduced a plane, rather than the
mean altitude of vehicles, into the float estimator to constrain the RTK method. As long as the slope of the ramp changes slowly and the vehicles driving on the ramp keep close to each other, the
proposed method would also be available. If the slope of the road were to change rapidly, however, vehicles might belong to different planes and the benefit of the proposed method would be limited.
As for the plane detection algorithm, a much tighter threshold is recommended if we focus on fixing the ambiguities. It is true that using centimeter-level positioning to fit the plane can improve
the ability of fixing ambiguities to the greatest extent. However, it is unnecessary to have a standalone vertical position at a centimeter-level to fit the plane. It is worth noting that the overall
positioning accuracy is between the decimeter level and meter level before integer ambiguity resolution. Even if the standalone vertical position cannot reach centimeter-level accuracy, the
possibility of fixing ambiguities would still increase as long as the accuracy of the float estimator could be improved by using plane constraint.
The test results show that the fixed rate of the PCRTK method can still reach 87% when the standalone vertical position error is 1.5 m (standard deviation), which is higher than that of traditional RTK (82.7%, as shown in Table 1). The fixed rate of the PCRTK method climbs to 92% when the standalone vertical position error is reduced to 1 m (standard deviation). Conservatively, a threshold of 1 m is recommended to better fix the ambiguities. The performance of the proposed method under a tighter threshold will be evaluated with more experimental data in the future.
The performance may be different when the size of the cooperative network is enlarged, so a much larger-scale field test or simulation will be carried out in future work. Compared to moving vehicles,
roadside units (RSUs) are more competent for providing precise and reliable positioning information for target vehicles (Li et al., 2018). We will consider employing RSUs to validate the performance
of our method in a vehicle-to-everything (V2X) scenario.
This paper describes a plane-constrained RTK method that can be applied to connected vehicles. By employing positioning data from cooperative vehicles, a height plane is constructed and the
parameters of this plane are used as new measurements for improving positioning accuracy. The results of field tests verify the feasibility and superiority of the proposed method.
This method is applicable to many dense urban scenarios in which vehicles can be connected to each other easily and the road conditions are good enough for plane fitting. We will further discuss the
possibility of applying this proposed method to more complicated scenarios such as environments that are uphill and downhill.
Zhao, H., Zhuang, C., He, Y., Hu, S., Hou, B., & Feng, W. (2022). High-precision positioning using plane-constrained RTK method in urban environments. NAVIGATION, 69(4). https://doi.org/10.33012/
Zhuang and He contributed to the conception of the study, performed the data analyses, and wrote the manuscript. Zhao and Feng helped perform the analysis with constructive discussions. Hu and Hou
contributed significantly to analysis and manuscript preparation.
This work was supported by the Chinese National Natural Science Foundation under Grant 61901015.
The authors would like to thank BDStar Corporation for providing us with the NPOS220 navigation system, SPAN-100C reference system, and test vehicles. We also thank Rui Peng, Yuli He, Zixuan Wang,
Zihan Mu, Yuhang Yang, Dan Gao, and Qiang Wang for their help with data collection.
This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Create a real-life scenario of a domain and range function
Sure, here's an example scenario of a domain and range function. Let's say you're a business owner who sells t-shirts online. Your t-shirts come in various sizes, ranging from XS to XXL, and are priced differently depending on the size. You want to create a function that will allow you to determine the price of any given t-shirt size.
In this scenario, the domain of the function is the set of all possible t-shirt sizes, which in this case is {XS, S, M, L, XL, XXL}. The range of the function is the set of all possible prices for each t-shirt size. To create the function, you could assign a price to each size of t-shirt, like so:
f(XS) = $15
f(S) = $18
f(M) = $20
f(L) = $22
f(XL) = $25
f(XXL) = $28
In this case, the domain is {XS, S, M, L, XL, XXL} and the range is {$15, $18, $20, $22, $25, $28}. Now, if a customer wants to purchase a t-shirt in size L, you can simply plug L into the function and determine the price: f(L) = $22. This allows you to quickly and easily determine the price for any given size of t-shirt, making it easier to manage your business and serve your customers.
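For readers who like to see this in code, here is a minimal Python sketch of the same hypothetical price function; the dictionary keys form the domain and its values form the range.

```python
# Hypothetical mapping from the example above: the sizes are the domain,
# the prices are the range.
prices = {"XS": 15, "S": 18, "M": 20, "L": 22, "XL": 25, "XXL": 28}

def f(size):
    return prices[size]

print(f("L"))                        # 22, the price of a size-L t-shirt
print(sorted(prices))                # the domain
print(sorted(set(prices.values())))  # the range
```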
MV-3021: Optimize an Impact Absorber
In this tutorial you will learn how to setup an optimization problem using MotionView's Optimization Wizard for an impact absorber.
You will learn about the following:
• Defining stiffness and damping of SpringDamper element as design variables
• Defining responses of type MaxVal
• Using the responses as objectives
• Running the optimization and post-processing the results
An impact absorber is modeled as a single degree of freedom system with a mass m, a linear stiffness k, and a linear damping coefficient c. The initial velocity of the mass is 1 m/s, the mass m = 1 kg, and a transient analysis end time of 12 seconds is used.
Figure 1.
The objective of the optimization is to minimize the maximum acceleration of the mass in the time interval 0 to 12 s, subject to the condition that the maximum displacement is less than 1 m. In order to achieve this, the stiffness k and damping coefficient c are modeled as design variables. MotionSolve's FD (Finite Differencing) capability is used to calculate sensitivities.
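The tutorial itself runs entirely inside MotionView/MotionSolve, but the underlying problem is easy to reproduce independently. The following standalone Python sketch (assuming NumPy and SciPy are available; it is not part of the Altair workflow) simulates the single degree of freedom system and evaluates the two responses used below; a crude grid search over the design-variable bounds from Table 1 should land near the optimum reported in Table 2.

```python
import numpy as np
from scipy.integrate import solve_ivp

M = 1.0        # mass [kg]
V0 = 1.0       # initial velocity [m/s]
T_END = 12.0   # transient analysis end time [s]

def responses(k, c):
    """Simulate m*x'' + c*x' + k*x = 0 with x(0) = 0, x'(0) = V0 and
    return the two 'MaxVal' responses: (max |acceleration|, max |displacement|)."""
    def rhs(t, y):
        x, v = y
        return [v, -(c * v + k * x) / M]
    sol = solve_ivp(rhs, (0.0, T_END), [0.0, V0], max_step=0.01)
    x, v = sol.y
    acc = -(c * v + k * x) / M
    return np.max(np.abs(acc)), np.max(np.abs(x))

# Crude grid search over the bounds in Table 1 (the wizard uses a
# gradient-based optimizer with FD sensitivities instead).
best = None
for k in np.linspace(0.2, 1.0, 17):
    for c in np.linspace(0.2, 1.0, 17):
        max_acc, max_disp = responses(k, c)
        if max_disp <= 1.0 and (best is None or max_acc < best[0]):
            best = (max_acc, k, c)
print(best)  # should land near Table 2: |acc| ~ 0.52 at k ~ 0.36, c ~ 0.49
```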
Add Design Variables
In this step, you will add design variables for the optimization.
Before you begin, copy the file mv_3021_initial_impact_absorber.mdl located in the mbd_modeling\motionsolve\optimization\MV-3021 into your <working directory>.
1. Open mv_3021_initial_impact_absorber.mdl in MotionView.
2. In the Project Browser, right-click on Model and select Optimization Wizard from the context menu.
3. Under the Design Variables page, click on the Springs tab.
4. Make k and c of SpringDamper 0 design variables: select the k and c data members from the Model Tree under the spring damper and click Add.
Figure 2.
5. Modify the upper and lower bounds of the design variables according to Table 1.
Table 1.
│ DV │ Lower Bounds │ Upper Bounds │
│ sd_0.k │ 0.2 │ 1.0 │
│ sd_0.c │ 0.2 │ 1.0 │
Add Response Variables
In this step you will add response variables for the optimization.
The objective of the optimization is to minimize the maximum acceleration while keeping the displacement of the mass less than 1.0 m. To achieve this, two responses are created:
• Maximum z direction acceleration of mass
• Maximum z direction displacement of mass
The maximum z-direction acceleration response is used to define an objective; the maximum z-direction displacement is used to define a constraint. Both responses are created using the 'MaxVal' response type. This response can be used to capture the maximum value of an expression throughout the simulation. Details of the 'MaxVal' response can be found in the 'Multibody Optimization User's Guide' and the 'MotionView User's Guide'.
1. Click on the Responses page.
2. Add a new response variable and click OK.
3. Once the response variable is created, under Response Type, choose MaxVal.
4. In the Response Expression field, enter `ABS(ACCZ({b_0.cm.id},{m_0.id}))` (the absolute value of ACCZ of CM of Mass).
The Response Variable should look as shown in Figure 3.
Figure 3.
5. Follow these steps again to create a second 'MaxVal' response. Use the Response Expression `ABS(DZ({b_0.cm.id},{m_0.id}))`.
You have created all the necessary Response Variables. The completed page will look as shown in Figure 4.
Figure 4.
Add Objectives and Constraints
Now you will add objectives and constraints to the problem.
You can use the responses you created in the previous section as objectives.
1. Navigate to the Goals page. Under Objectives, click Add.
This will add an objective with the response rv_0.
2. Choose a Weight of 1.0 and retain the Type as Min.
3. Under Constraints, click rv_1.
4. Retain the sign as '<=' and type 1.0 for the value.
This will ensure that the value of rv_1 is less than 1.0. Now all objectives and constraints are defined, and the model is ready to run.
Figure 5.
Run the Optimization
In this step you will run the optimization.
1. Navigate to the Solutions page to specify optimization settings and run the analysis.
2. Click Optimization Settings.
3. Change the DSA type to FD.
Note: You also have the option to choose AUTO. When you choose AUTO, MotionSolve will detect the simulation type and choose the best approach to compute sensitivity. The simulation type is
dynamic, so MotionSolve will choose FD.
Figure 6.
4. Click Save & Optimize to start the optimization.
While the optimization is running, a plot of total weighted cost vs. iteration number and constraint value is displayed in a separate window.
Once the optimization process is complete, the text window in the Solution page displays the optimized design variables values, final value of the responses and optimized cost function.
Figure 7.
The expected values of the design variables are provided in Table 2.
Table 2.
│ RV/DV │ Expected │ From Optimizer │
│ rv_0 │ 0.5206 │ 0.5206 │
│ sd_0.k │ 0.3606 │ 0.36003 │
│ sd_0.c │ 0.4851 │ 0.48569 │
Post-Process the Results
In this step, you will post-process the optimization results of the impact absorber.
1. Navigate to the Review Results page.
The Summary tab is displayed, with the values of the design variables, responses, and objective tabulated iteration-wise. For this tutorial, the optimized design variables come from the last iteration, iteration 5.
Figure 8.
2. Click the Plot tab to visualize variation of design variables, response variables and cost function using graphs.
Figure 9.
3. Click the Animation tab to animate the configuration generated during any iteration.
4. Choose Iteration 5 and click on Load Result to load the H3D file from that iteration.
Figure 10.
From this tab, you can also export an archive of the model in a state corresponding to any iteration.
5. Click the Archive Model Location browser to specify a file path.
6. Click the Export button.
This will create an archive folder which contains all files necessary to open/run/optimize the model. The design variable values are set to the values in the iteration number you choose.
Linear Algebra:- An Infinite Resource
Here, we provide Linear Algebra:- An Infinite Resource. Linear algebra is a continuous form of mathematics and is applied throughout science and engineering because it allows you to model natural phenomena and to compute with those models efficiently. Because it is a form of continuous rather than discrete mathematics, many computer scientists have little experience with it. Linear algebra is also central to almost all areas of mathematics, such as geometry and functional analysis.
You will mostly be dealing with matrices and vectors rather than with scalars (we will cover these terms in the following section). When you have the right libraries, like NumPy, at your disposal, you can compute complex matrix multiplications very easily with just a few lines of code.
Table of Contents:
• Linear equations
• Matrices
• Matrix decompositions
• Relations
• Computations
• Vector spaces
• Structures
• Multilinear algebra
• Affine space and related topics
• Projective space
Linear Algebra is very helpful for aspirants of CSIR UGC NET Mathematics, IIT JAM Mathematics, GATE Mathematics, NBHM, TIFR, and other tests with a similar syllabus. It is designed for students preparing for various national-level competitive examinations and also for those aiming to enter Ph.D. programs by qualifying the corresponding entrance examinations.
BOOK NAME – LINEAR ALGEBRA:- AN INFINITE RESOURCE
AUTHOR –
The content of the ebook explains the basic concepts starting from the real numbers. Sequences and series are elaborated in detail, and the various techniques and formulas for checking their convergence are discussed. Exercise sets are provided at the end of each topic, including a variety of questions from previous-year papers of CSIR UGC NET, IIT-JAM, TIFR, NBHM, and GATE. These questions are carefully selected so that students can apply their mathematical knowledge in solving them.
Starting with the basic concepts of vector spaces such as linear independence, basis and dimension, quotient space, linear transformation, and duality, with an exposition of the theory of linear operators on a finite-dimensional vector space, this book covers eigenvalues and eigenvectors, diagonalization, triangulation, and the Jordan and rational canonical forms.
Inner product spaces, covering finite-dimensional spectral theory and the elementary theory of bilinear forms, are also discussed. This new edition of the book incorporates the rich feedback of its readers. New subject matter has been introduced in the text to make the book more complete.
Many new examples are discussed to illustrate the text, and more exercises have been included, arranged in increasing order of difficulty. There is now a new section of hints for almost all exercises, except those that are straightforward, to enhance their usefulness.
Linear algebra is central to almost all areas of mathematics. For instance, it is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes, and rotations.
Also, functional analysis may be basically viewed as the application of linear algebra to spaces of functions. Linear algebra is used in most sciences and engineering areas because it allows modeling many natural phenomena and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra directly, linear algebra is often used as a first approximation.
DISCLAIMER: HUNT4EDU.COM does not own this book; it was neither created nor scanned by us. We simply provide hyperlinks already available on the internet. If this in any way violates the law or causes any problems, then kindly mail us or Contact Us for link removal.
We do not support piracy; this copy is provided for students who are financially poor but deserve the opportunity to learn. Thank you.
LINEAR ALGEBRA HAND WRITTEN NOTE BY SKM ACADEMY Download Now
LINEAR ALGEBRA HAND WRITTEN NOTE BY PI AIM INSTITUTE Coming Soon
LINEAR ALGEBRA HAND WRITTEN NOTE BY DIPS ACADEMY Download Now
LINEAR ALGEBRA HAND WRITTEN NOTE BY P KALIKA Download Now
LINEAR ALGEBRA BOOK Download Now
LINEAR ALGEBRA BY VIVEK SAHAI & VIKAS BIST Download Now
LINEAR ALGEBRA BY BALWAN MUDGIL Download Now
LINEAR ALGEBRA BY RISING STAR ACADEMY Download Now
3000 SOLVED PROBLEMS IN LINEAR ALGEBRA BY SCHAUM'S SERIES Download Now
LINEAR ALGEBRA PREVIOUS YEARS QUESTION PAPERS WITH SOLUTIONS FOR GATE Download Now
ADVANCED LINEAR ALGEBRA BY BRUCE N COOPERSTEIN Download Now
LINEAR ALGEBRA PROBLEM BOOK BY PAUL R HALMOS Download Now
LINEAR ALGEBRA BY SAMVEDNA PUBLICATION Download Now
Demystifying the math and implementation of Convolutions: Part II.
2019-03-31 · 10 min read
In Part 2 of the series, we discuss how to improve the performance of our convolution implementation using vectorization and matrix multiplication.
So, in the previous part of this series, we implemented convolutions in Python and gained a deeper understanding of what exactly a convolution is. In this part, we will discuss how to improve that implementation and represent it in terms of matrix multiplication.
Improving the performance through Vectorization
So far so good! Maybe not, since the previous implementation is quite naive. We had better reduce those for loops into vectorized code, but it is a bit tricky. When it comes to vectorized code, developers usually seek to use BLAS-like operations such as GEMM (GEneral Matrix Multiplication). These routines are optimized for speed and provided by various vendors, such as the generic open-source OpenBLAS and the Intel CPU-optimized MKL (Math Kernel Library). Since we are coding in Python (meanwhile, I am yet to implement this in C), numpy takes care of this for us, as long as we avoid for loops. So let's get started.
Turning Convolution Into Matrix Multiplication: im2col
The idea behind optimizing convolution is to transform each patch (or sub-matrix) into a flattened row in a new matrix. This process is called im2col. If we use a stride of 1, we have to slide the filter 16 times over the matrix m, thus the output shape of im2col is 16×9, where 9 is the total size of the 3×3 filter and 16 is the number of patches. Another way to figure it out is to calculate the convolution output shape, which was 4×4 = 16. But let us take a smaller example of size 5×5 to reduce the dimensions and thus the complexity of the process.
The first thing that would probably come to your mind is the huge data redundancy. Each optimization comes at the cost of something, and in our case, we are trading memory for speed. Here, we prioritize the speed of execution over the memory footprint. In a real application, this would be executed on the GPU, which might have tighter memory constraints but is actually favorable, as memory access in matrix multiplication can be optimized to avoid uncoalesced access, but that is a story for another day.
What we want to do next is to transform the filter into a single matrix column (called flattening), from k = 3×3 to k = 9×1 (a column vector). We will see the advantage of this operation soon. But first, here is the process:
Now let’s visualize the convolution operation as matrix multiplication:
The result is a 9×1 matrix. Recall that for this example our original matrix has a shape of 5×5, so when performing the convolution with a 3×3 filter and a stride of 1 across both rows and cols, we obtain an output of shape 3×3 (use the previous formula to verify the output shape), and we can reconstruct the original output matrix by reshaping.
Now before we get into code, let me remind you that in real CNNs, convolutions tend to go over volumes (which will be discussed later) with an increasing number of filters over the same input. This is extremely easy to model using our new convolution approach: each filter is simply a new column of the matrix K, so the number of columns in K is the number of filters you would like to apply. And notice how the output has exactly as many columns as there are filters.
Vectorized Implementation
Now that we have the theoretical aspects of vectorization clear (hopefully), let's write some code. We first define our im2col method, which will transform an image into the desired transformed matrix. Then, we simply perform a multiplication with the set of filters, flattened into a column matrix.
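The code embedded in the original post is not reproduced in this text; as a reconstruction rather than the author's exact code, a minimal NumPy sketch of an im2col-based convolution could look like this. Note that the patch extraction still uses Python loops; the heavy lifting is delegated to the single matrix multiplication.

```python
import numpy as np

def im2col(img, kh, kw, stride=1):
    """Flatten every (kh x kw) patch of a 2-D image into a row."""
    H, W = img.shape
    out_h = (H - kh) // stride + 1
    out_w = (W - kw) // stride + 1
    cols = np.empty((out_h * out_w, kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            patch = img[i * stride:i * stride + kh,
                        j * stride:j * stride + kw]
            cols[i * out_w + j] = patch.ravel()
    return cols, out_h, out_w

def conv2d_gemm(img, kernel, stride=1):
    """Convolution (cross-correlation, as usual in deep learning)
    expressed as a single matrix multiplication."""
    kh, kw = kernel.shape
    cols, out_h, out_w = im2col(img, kh, kw, stride)
    out = cols @ kernel.reshape(-1, 1)  # the GEMM does the heavy lifting
    return out.reshape(out_h, out_w)
```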
Since im2col is a bit tricky, it is fortunate that it is already implemented in scikit-image. Here is the API documentation: https://scikit-image.org/docs/dev/api/skimage.util.html#
To be fair, however, coding it is a bit tricky, and I chose to avoid a full treatment in this article, as it is deep enough already.
Words mean nothing, show me the stats!
You got it. Below is a speed comparison of the previous convolution operation from the first part against the new code, in terms of execution time! Before sharing the stats: if you clone and run the notebook yourself, you may not get the same results. I have tested them on a MacBook Pro i5 laptop; it would be great if you can share your results! I might upload them here as well.
Performing Convolution on N filters
So far, we have seen convolutions with a single filter. With the im2col matrix, we can compute N filters much faster than with the original for-loop version. Since we have already discussed the theory, let me just share the code with you.
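Again as a reconstruction rather than the original code, a sketch extending the previous `conv2d_gemm` to N filters could look like this; it reuses `im2col()` from the previous sketch and assumes the filters are stacked along the last axis.

```python
def conv2d_gemm_multi(img, kernels, stride=1):
    """Apply N filters at once. Each flattened filter becomes a column
    of K, so a single GEMM yields one output column per filter.
    Assumes kernels has shape (kh, kw, N)."""
    kh, kw, n = kernels.shape
    cols, out_h, out_w = im2col(img, kh, kw, stride)
    K = kernels.reshape(kh * kw, n)  # (kh*kw) x N column matrix
    return (cols @ K).reshape(out_h, out_w, n)
```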
End of the second part
So here comes the end of the second part of this series. I hope it was informative and that you are able to rewrite and hack the code, as we need more low-level deep learning hackers to create new innovative tools and techniques.
Articles written in this blog are my own opinions and do not reflect the views of my employer. Content on this website is original unless mentioned otherwise. Original content is licensed under a CC BY 4.0 Deed. Some of the content might have been preprocessed by AI for clarity and articulation.
Stylized Facts - The Empirical Properties of Assets – Toni Esteves
Over the past fifty years, researchers in empirical finance have explored the statistical properties of market indexes, commodities, and stock prices using data from a variety of markets and
instruments. However, in the last ten years, the availability of large data sets of high-frequency price series and the use of computationally intensive methods for property analysis have opened up
new avenues for research and have helped to solidify the use of data-based approaches in financial modeling.
Some long-standing disagreements about the nature of the data have been resolved as a result of the analysis of these new data sets, but it has also brought forth new difficulties. The ability to
synthesize and meaningfully represent the qualities and information contained in this massive amount of data is not the least of them. Independent investigations have identified a collection of
characteristics that are shared across numerous instruments, markets, and historical periods and have been categorized as “stylized facts”.
Empirical Approach
Let’s examine the daily adjusted closing values of the Bovespa index(IBOV) and the daily closing prices of Petrobras(PETR4) - Data includes values from January, 2000 to March, 2020.
The Bovespa Index (Ibovespa, or IBOV) is the most important indicator of the average performance of stocks traded on the São Paulo Stock Exchange. The index is composed of a theoretical portfolio
with stocks that represented 80% of the traded volume in the previous 12 months and were present on at least 80% of the days when there was trading activity. The number of index components is not
fixed, ranging between 60-80 assets.
A stationary time series is a time series whose statistical properties do not change over time. Stationarity is a desirable property of time series for statistical analysis. Prices of stocks are often not stationary, as they exhibit a trend or seasonal component. To overcome this issue, we consider log returns for statistical analysis instead of stock prices. Log returns are defined as
$$r_{t} = \ln\left(\frac{P_{t}}{P_{t-1}}\right) = \ln P_{t} - \ln P_{t-1}$$
where \(P_{t}\) is the price at time \(t\).
First of all, let's plot the adjusted closing price series, with its periods of appreciation and devaluation.
Let’s visualize three distinct periods of returns from the Ibovespa Index.
The three charts are similar, but monthly log returns are a smoother version of daily log returns, which display greater fluctuations. Besides, it can be observed that the high volatility during the period from 2008 to 2020 is more pronounced in the daily chart. In contrast to the price series, returns fluctuate around a constant level, close to zero. Additionally, high fluctuations tend to "cluster," reflecting more volatile market periods.
These characteristics are also evident in the data of PETR4, which follows.
Let’s visualize the same three distinct periods of returns from the PETR4.
Again the three charts are similar, but monthly log returns are a smoother version of daily log returns, which display greater fluctuations, with moments of higher volatility and moments of low volatility. It is worth noting that the returns typically fluctuate around a constant level, suggesting at least a constant average over time.
Let’s go ahead and evaluate the distribution of these returns.
Are stock returns normally distributed?
The distribution of returns
In statistics, a Q–Q plot (quantile-quantile plot) is a probability plot, which is a graphical method for comparing two probability distributions by plotting their quantiles against each other. If
the two distributions being compared are similar, the points in the Q–Q plot will approximately lie on the identity line \(y = x\).
To continue with our analysis, we will evaluate the distributions of IBOV and PETR4 returns and understand how close these distributions are to a normal distribution.
From the image above, we note that the tails of the return distributions are heavier than those of the normal distribution. Let's now look at histograms of the daily, weekly, and monthly log-returns of the IBOV index, with normal density functions having the same mean and variance.
There are some things to observe here. Note that as the return interval increases, from one day to one week and one month, the tails of the distribution become lighter. In particular, the distribution of monthly returns is relatively close to the normal distribution. The distributions are somewhat asymmetric due to the presence of high negative/positive returns. Not surprisingly, similar patterns can be observed in the log-return data of PETR4; see below.
Furthermore, it is possible to observe that the tails of the distributions contain a few very extreme returns, something very different from what a normal distribution would suggest. This phenomenon is observed most frequently in the distribution of daily returns, somewhat less in the distribution of weekly returns, and even less in the distribution of monthly returns.
Autocorrelation function of the price changes
In time series analysis, autocorrelation refers to any statistical relationship, whether causal or not, between past and present values of a random variable.
But what does this mean? In technical terms, autocorrelation measures the correlation between the values of a data series and its own lagged values over time (the value of the observation at index t against the indices \(t-1, t-2, t-3\), and so on). In simpler terms, autocorrelation assesses the relationship between consecutive observations in a time series.
The ACF \(\hat{\rho}_{k}\) is defined at a lag \(k\): given a series \(r_{1}, \ldots, r_{T}\), the autocorrelation function is defined as \(\hat{\rho}_{k} = \hat{\gamma}_{k}/\hat{\gamma}_{0}\), where
$$\hat{\gamma}_{k} = \frac{1}{T}\sum_{t=1}^{T-k}(r_{t}-\overline r)(r_{t+k}-\overline r), \ \ \overline r=\frac{1}{T}\sum_{t=1}^{T}r_{t}$$
and \(\hat{\gamma}_{k}\) is the autocovariance of the series at lag \(k\).
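As a hedged illustration (not code from the original post), a direct NumPy implementation of the sample ACF defined above could look like this:

```python
import numpy as np

def acf(r, max_lag):
    """Sample autocorrelation function, following the definition above."""
    r = np.asarray(r, dtype=float)
    T = len(r)
    rbar = r.mean()
    gamma0 = np.sum((r - rbar) ** 2) / T
    return np.array([
        np.sum((r[:T - k] - rbar) * (r[k:] - rbar)) / T / gamma0
        for k in range(1, max_lag + 1)
    ])

# 95% confidence band under the null of zero autocorrelation: +/- 1.96/sqrt(T)
```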
When plotting the autocorrelation function of the IBOV log returns, we have:
The two dashed horizontal lines, defined as \(\pm 1.96/\sqrt{T}\), are the limits of the 95% confidence interval for \(\rho_{k}\) if the actual value of \(\rho_{k} = 0\). Therefore, \(\rho_{k}\) would be seen as not significantly different from zero if the estimate \(\hat{\rho}_{k}\) lies between these lines.
It is possible to observe that the daily, weekly, and monthly log-returns do not exhibit significant autocorrelation. This corroborates the hypothesis that returns on a financial asset are uncorrelated over time. If such a correlation existed, it would be possible to predict the direction of the market based on historical data; in other words, it would be possible to create a strategy that predicts correctly and provides guaranteed profits. As a consequence, every investor would use this strategy, and the resulting buying pressure would push prices up until all possible gains were eliminated. A possible exception to this rule would be very short time intervals, corresponding to the time needed for the market to react to new information. The same holds for the PETR4 log-returns.
In the autocorrelation functions for squared and absolute log-returns, \(r_{t}\) is replaced by \(r^2_{t}\) and \(\vert r_{t} \vert\). There are small but significant autocorrelations in \(r^2_{t}\) and even larger ones in \(\vert r_{t} \vert\). The more pronounced and persistent autocorrelations are found in daily data compared to weekly or monthly data. Empirical evidence thus suggests linear independence among returns, but with some non-linear self-dependence.
The results seen so far, derived from two sets of real data, align with the so-called Stylized Facts. Stylized facts are theoretical approximations of phenomena observed empirically. These
“phenomena” are observed in different types of assets, including stocks, portfolios, commodities, and currencies.
What is a Stylized Fact?
According to an examination of most financial newspapers and journals, many market analysts have taken and continue to take an event-based approach, in which they attempt to "explain" or rationalize a given market movement by relating it to an economic or political event or announcement [1]. From this perspective, it is easy to conclude that, because different assets are not always influenced by the same events or information sets, price series obtained from different assets and, by extension, markets will exhibit distinct features.
But, on the other hand, is it wise to conclude corn prices behave similarly to IBM shares or the Dollar/Yen exchange rate?
The fact is that empirical studies of financial time series show that, when examined statistically, asset series that appear to present purely random variations share some non-trivial statistical characteristics. Stylized empirical facts refer to characteristics of asset series that prevail across a wide range of instruments, markets, and historical periods.
Stylized facts are obtained by combining qualities observed in many marketplaces and instruments. Obviously, doing so increases generality while decreasing the precision of statements about asset
returns specifically. Indeed, stylized facts are typically expressed in terms of qualitative asset return characteristics and may be insufficient to distinguish between different parametric models.
Stylized Empirical Facts
Let me introduce a set of stylized statistical facts which are common to a wide set of financial assets.
Univariate Distributional Stylized Empirical Facts
Gain/Loss Asymmetry
Huge drawdowns occur in stock prices and index values, but not equally large upward movements. The distribution of returns normally has negative skewness. In other words, while we might expect the left and right tails of a return distribution to be symmetrical, in the real world return distributions have asymmetric tails. This is because periods of decline are generally more abrupt than periods of recovery, and investors tend to react more strongly to negative news than to positive news.
Leverage effect
Most indicators of an asset's volatility are negatively linked to its returns, meaning periods of high volatility usually coincide with periods of decline. The term leverage derives precisely from the fact that as prices fall, companies become more leveraged (the ratio between debt and equity grows) and riskier, and therefore their prices become more volatile. As a result, the increase in volatility that follows a price drop is typically larger than the decrease that follows a price rise.
The majority of asset volatility metrics show a negative correlation with the asset's returns [2]. The variance of returns over a certain time period is one metric used to quantify a stock's volatility. Since mean returns for many of the stocks are close to zero, we consider squared returns as a measure of volatility. For random variables \(X\) and \(Y\), we define the correlation coefficient as follows:
$$\rho = \frac{\mathrm{cov}(X,Y)}{\sigma_{X}\sigma_{Y}}$$
where \(\mathrm{cov}(X,Y) = E(XY) - E(X)E(Y)\), \(\sigma^2_{X} = Var(X)\), and \(\sigma^2_{Y} = Var(Y)\).
We calculate the correlation coefficient between the log returns and the squared returns of the same stock (PETR4 or IBOV). A positive correlation coefficient signifies a positive association between two variables, whereas a negative correlation coefficient signifies a negative association. According to the stylized facts, the correlation coefficient between log returns and squared returns is expected to be negative, as we have shown before.
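A minimal sketch of this calculation, assuming a NumPy array of log returns:

```python
import numpy as np

def leverage_corr(log_returns):
    """Correlation between log returns and squared log returns;
    a negative value is consistent with the leverage effect."""
    r = np.asarray(log_returns, dtype=float)
    return np.corrcoef(r, r ** 2)[0, 1]
```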
Disclaimer: This absence of autocorrelation in time does not necessarily mean independence: simple non-linear functions of returns, such as the absolute value, exhibit positive and persistent autocorrelation, indicating long-term memory properties.
Autocorrelations in non-linear functions become weaker and less persistent when the interval of returns is changed from a day to a week or a month.
Aggregational Gaussianity
The distribution of returns resembles a normal distribution more closely as the time scale \(\Delta t\) over which they are computed grows; in particular, the distribution's shape varies across time scales. In other words, as the time horizon increases, the distribution of returns tends to approximate the normal distribution more closely. We can observe this in the Q-Q plots generated for the IBOV and PETR4 distributions.
The Kolmogorov-Smirnov test, Shapiro-Wilk test, and Jarque-Bera test were used to test the normality of daily, weekly, and monthly returns, and p-values were recorded for each test conducted. According to the stylized fact, the p-value is expected to increase as we move from daily to quarterly returns for the same stock.
Heavy tails
Return series generally exhibit heavier tails than those of a normal distribution. The (unconditional) distribution of returns appears to have a power-law or Pareto-like tail, with a tail index that is finite, greater than two but less than five for the majority of data sets investigated. This excludes both stable laws with infinite variance and the normal distribution. However, the exact shape of the tails is difficult to identify. This phenomenon can be visually observed in the histograms and Q-Q plots of the IBOV and PETR4 series.
One way to quantify the weight of the tail, as well as the size of the deviation from normality, is kurtosis (a.k.a. the normalized fourth statistical moment). The kurtosis of a theoretical normal distribution is \(3\). If the kurtosis of a given distribution is greater than \(3\), the distribution tends to have a heavier tail than the normal one, which is typically the case for distributions of financial returns.
Return distributions tend to be non-Gaussian, with a sharp peak and heavy tails, and these properties are more pronounced for intraday data, given that distributions of higher-frequency data tend to have even heavier tails.
Multivariate Stylized Empirical Facts
Volume-Volatility Correlation
Trading volume is correlated with all measures of volatility. The correlation coefficient between log returns and trading volume has been calculated for every stock under consideration. According to the stylized fact, the correlation coefficient between log returns and trading volume is expected to be positive.
Risk-Return Tradeoff
The risk incurred in investing in a particular financial instrument and the returns of that instrument are correlated. The volatility of a stock is considered a measure of risk; here, the measure of volatility used is the standard deviation of the returns of a particular stock over the full period of consideration. The correlation coefficient between the mean return (calculated over the full period of consideration) and the standard deviation of returns of stocks listed on a particular stock market has been calculated. According to the stylized fact, this correlation coefficient is expected to be positive for every market considered.
Time series Related Stylized Empirical facts
Absence of Autocorrelations
Autocorrelation measures the similarity between a time series and a lagged version of itself over different time intervals. (Linear) autocorrelations of asset returns are frequently minimal, particularly on very short intraday time scales (about 20 minutes), where microstructure effects come into play.
Returns in liquid markets do not exhibit significant autocorrelation. When calculating the autocorrelation of a series with itself at a lag of 1, it is possible to observe that the series presents very small and insignificant values. This particular fact is often cited as support for the efficient market hypothesis.
If there were significant autocorrelation, it would be possible to predict the direction of the market based on historical data. In other words, it would be possible to create a strategy that
predicts the market direction accurately and provides guaranteed profits (Arbitrage). This would cause the strength of the buying moves to increase which in turn would eliminate all possible gains. A
possible exception to this rule would be very short time intervals, which represent the time it takes the market to react to new information.
Long-term Dependence:
The absence of autocorrelations provided empirical support for models based on "random walks", where returns are considered independent random variables. Random walks therefore assume that the random variables governed by these stochastic processes are independent of each other; that is, the value of today's return is completely independent of yesterday's return.
However, this is not entirely true because, as we have seen, there is actually some type of non-linear dependence. This becomes clear when we examine the ACF of absolute log returns and squared log returns, especially absolute log returns. This type of non-linear dependence is present in return series, and it is possible to exploit it beneficially if we wish to create a strategy. One thing that can also be observed is a downward trend in the ACF of the two asset series until it eventually reaches zero. This statistical property is related to ergodicity: in a covariance-stationary stochastic process, we make no assumptions about the intensity of dependence between the variables in the sequence. For example, nothing rules out
$$\rho_{1} = \mathrm{corr}(Y_{t}, Y_{t-1}) = \rho_{100} = \mathrm{corr}(Y_{t}, Y_{t-100}) = 0.5$$
However, in many contexts it is reasonable to assume that the intensity of dependence decreases with distance in time. That is, \(\rho_{1} > \rho_{2} > \rho_{3} > ...\) and eventually \(\rho_{j} = 0
\) for a sufficiently large \(j\).
Volatility Clustering
Different measures of volatility show positive autocorrelation over several days, indicating that high-volatility events tend to cluster in time. In other words, non-linear dependence is directly related to the well-known phenomenon of volatility clustering: large changes in the price of an asset are usually followed by other large changes. Furthermore, it is common to observe this phenomenon in almost all markets on the planet.
For example, the standard deviation of IBOV daily returns was \(1.7\%\) in 2007, \(2.0\%\) in the first half of 2008, and \(1.9\%\) in 2009. In the second half of 2008 (the subprime crisis) it was \(4.1\%\), due to the many days of crisis and high volatility.
Intermittency
Returns show a significant degree of fluctuation across all time scales. This is measured by the occurrence of irregular bursts in time series from a wide range of volatility estimators. Intermittency can be characterized by high kurtosis. Kurtosis was computed for the return series and also for the residual series obtained after fitting a GARCH model (to eliminate the time series effect from the data).
Kurtosis for a random variable \(X\) is defined as
$$K = \frac{E(X - \mu)^4}{\sigma^4}$$
where \(\mu = E(X)\) and \(\sigma^2 = Var(X)\).
As mentioned before, the kurtosis of a normal distribution is \(3\). A one-sided test of the null hypothesis that the kurtosis is \(3\), against the alternative hypothesis that the kurtosis is greater than \(3\), was carried out using the test statistic \(\frac{\sqrt{n}(K-3)}{\sqrt{24}}\), which follows an asymptotic normal distribution. p-values were computed in each case. According to the stylized fact, the p-values are expected to be small, i.e., the null hypothesis that the kurtosis is \(3\) is expected to be rejected.
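A minimal sketch of this test, assuming NumPy and SciPy are available; the statistic follows the formula above:

```python
import numpy as np
from scipy.stats import norm

def kurtosis_test(r):
    """One-sided test of H0: kurtosis = 3 against H1: kurtosis > 3,
    using the asymptotically normal statistic sqrt(n)*(K - 3)/sqrt(24)."""
    r = np.asarray(r, dtype=float)
    n = len(r)
    K = np.mean((r - r.mean()) ** 4) / np.var(r) ** 2
    z = np.sqrt(n) * (K - 3) / np.sqrt(24)
    return K, 1 - norm.cdf(z)  # sample kurtosis, one-sided p-value
```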
In the previous sections, we presented statistical phenomena about asset returns that are consistent across multiple assets and markets. These features are based on qualitative hypotheses rather than parametric assumptions about the return process, and they are requirements that a stochastic process must satisfy to accurately reproduce the statistical features of returns. Current models struggle to reproduce all of these statistical traits simultaneously, highlighting their limitations.
Finally, we should mention a few difficulties that we haven't covered here. One significant question is whether a stylized empirical fact has economic relevance. In other words, can these empirical findings be utilized to corroborate or refute specific modeling methodologies used in economic theory? Another concern is whether these empirical findings are valuable for practitioners. For example, does the occurrence of volatility clustering suggest anything useful for volatility forecasting? Can this lead to a more effective risk measurement and management approach? Can one use these correlations to develop a volatility trading strategy? Or, how can one include a measure of irregularity, such as the singularity spectrum or the extremal index of returns, in a risk measure for portfolios? We will leave these questions for future research.
Here at Turing Quant, we take all these factors into consideration when we build our strategies and keep our portfolio optimized, aiming for the best gains. It's always a good time to take a stand. Do not let this chance go by.
Final Thoughts
If you’ve read this post so far, thank you very much, I hope this material has been useful and makes sense to you, and if any other topic related or not to the content of this post interests you, or
has left you in doubt, put it in the comments and I will be very happy to bring the content more clearly in a new post.
Remembering that any feedback, whether positive or negative, just get in touch via my twiter, linkedin, Github or in the comments below. Thanks. | {"url":"https://www.toniesteves.com/stylized-facts","timestamp":"2024-11-09T07:12:47Z","content_type":"text/html","content_length":"117438","record_id":"<urn:uuid:7f7fe7c6-e388-4f7c-8d15-1c72a4ee7e5d>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00742.warc.gz"} |
Can anyone explain why and how the 1.0 factor rule solves the AC coupling problem?
drcoolzic asked •
I have read the document "AC-coupling and the Factor 1.0 rule" carefully, and I can understand the potential problem posed by AC-coupling, but I have difficulty understanding how the "Factor 1.0" rule helps in this situation. The rule is simple: "The max PV power must be equal to or less than the VA rating of the inverter/charger." But I do not understand why this rule fixes the problem described in section 2.2, "Example and background", of the above document.
Let's assume that our installation has a MultiPlus II 5000VA connected to a battery and an AC coupled Fronius Primo 3.6 kW. Now let's apply the problem described in section 2.2 to this configuration.
Let’s assume that on a sunny day you need to draw 4000 W from your system: 3.6 kW comes directly from the Fronius and 400 W from the battery through the MP2. Now let's assume that the battery is full
and the load is suddenly cut off. The Fronius still delivers full power and the question is how fast the system regulates itself and where the power goes in the meantime.
I can understand that shutting down the Fronius takes time because it involves communication between the two systems: the MP2 has to change the frequency in response to the overload situation, and the Fronius has to interpret it before shutting down. However, I find it hard to understand why the MP2 cannot cope with the overload situation, because it is internal to the device and the electronics are supposed to react very quickly.
But suppose the MP2 is not fast enough to handle this situation properly, what does the Factor 1.0 rule change? There is no explanation in section 2.2 (or elsewhere in the document) as to why the PV
power must be lower than the inverter power and how it fixes the problem.
I would appreciate if someone has an explanation as to why this rule solves the problem and why this factor needs to be 1.0 instead of 0.5 or 2.0?
Tags: AC PV Coupling, power factor
marekp answered ·
The internal electronics of the MP are rated at its maximum VA.
If the system has no way to feed the grid or supply loads, it will charge the battery using power from the AC-coupled inverter. It cannot do this with power greater than its maximum VA rating. This is why the AC-coupled inverter cannot produce more than the maximum rating of the MP.
There is no problem charging an already-charged battery for the short time needed to throttle down the AC-coupled inverter.
And you really think there is no chance that the Multi can handle an overload for those few seconds?!
I assume there would also be no problem if you used a 1.1 or 1.2 rule...
But I might be wrong?!
Nevertheless, Victron needs to state approved and safe limits in their manuals, that's clear.
It would not be clever to allow people to spec above limits.
Alexandra answered ·
The factor also applies to the battery as well, and it is mostly so the system can handle the production overshoot, or pick up the load if the AC PV drops off suddenly.
This is mostly because the overshoot power has to be absorbed by something, and if that something is not big enough, bad things will happen.
You can go 0.5 PV to 1 inverter, not a problem.
If you want to add PV power, say for battery charging, it is best to use DC for this now. So if you have a 5 kVA inverter, 5 kVA of AC PV, and run all day at 4000 W loads, then DC is the more efficient way to charge the system if you are off-grid or have a bad grid.
davidrenaud answered ·
When talking about the factor 1.0 rule, it would be good to mention PV INVERTER power and not only PV power: for example, running a Fronius 4.0 overloaded with 6000 Wp of PV behind an MP2 5000 VA still respects the factor 1.0 rule.
I have tested this configuration at 100% SOC, with no grid (ESS mode): as Marekp said, in case a heavy load switches OFF, depending on the battery pack and max cell voltage, the MP2 charger overshoots the "charged voltage" by 0.1 to 0.2 V. After the Fronius throttles, the inverter draws power from the battery to bring the DC voltage back to the target value.
If one goes over this 1.0 rule, then in the particular case of no grid / whatever SOC / PV inverter at full blast / a heavy load switching off / low to no remaining loads, one takes the risk of seeing the voltage rise suddenly on the load side to unknown levels.
The PV inverter delivers at a voltage a bit higher than the lines so that current can flow. If there is no way, or only "too small" ways, for current to flow, we may expect the voltage to go higher and higher until a protection is triggered on the inverter side, or something breaks on the MP side, resulting in a "loss of mains" for the PV inverter.
The 1.0 rule is a simplification made for everyone, based on the worst-case situation of an inverter behind an MP2 with... ZERO LOAD.
The true equation should be:
MP2 power >= PV inverter power - uninterruptible loads.
But this doesn't match warranty contracts ;o)
Still, there is no rule 1.03: all power produced by the PV inverter should be able to be absorbed.
Anyway, this equation makes sense for specific cases with heavy continuous day loads and light night loads.
For example, we can imagine a small business with a Fronius 6.0 behind an MP2 3000 and an uninterruptible load of 3000 W (during the PV production period).
This case fits the factor 1.0 "full equation" because the MP2 is able to absorb 100% of the power excess (or remaining power).
But if you set up this case, you're on your own, and it would be wise to double-check the load power and cos φ, and to make a risk analysis of all potential unwanted load disconnections and the immediate actions to be taken to derate the PV inverter power.
Best regards.
In your "small business" example, what happens on Sunday when business is closed?
tuky answered ·
So if I understood it correctly, the main problem is that the MP has to deal with excess power when you switch off a load. For example: you have a 2 kW MP and a 5 kW Fronius. Multiple loads are drawing 4 kW of power; let's say one of them is a 2 kW water heater. The battery is full. The Fronius is producing 4 kW from PV, and the 2 kW MP controls the situation, keeping the balance between loads and production. The 2 kW heater turns off, and the other loads are still drawing 2 kW. Now the MP has to deal with only 2 kW of excess power, which is within its limits. After that, you can switch off the other 2 kW of loads, and the MP has to deal with the excess power again, still within its range. So the correct statement should be that you cannot use and disconnect loads greater than the rated power of the MP.
Is this correct? Probably it is, but since Victron has a broad customer base with various degrees of technical understanding (or misunderstanding), I presume there would be a lot of burned inverters and warranty claims. So the factor 1.0 rule is a much safer bet and easier to understand for the average user.
If you can 100% ensure that the loads are switched off one by one, this could work as you described.
But there is also another scenario that you can't control:
You are connected to the grid, only small loads are running (let's say 500 W), the PV inverter is running at 5 kW, and the battery is full.
500 W goes directly to the loads and 4.5 kW is fed back into the grid.
Now the grid fails, the MultiPlus disconnects from the grid, and you suddenly have 4.5 kW going through the MultiPlus to the battery; that will kill the MultiPlus within a few seconds.
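As a back-of-the-envelope illustration of this scenario (the PV and load numbers are from the comment above; the 3 kVA rating is an arbitrary example of a unit violating the rule):

```python
pv_power = 5000   # W, AC-coupled PV inverter at full output
loads = 500       # W, remaining local consumption
multi_va = 3000   # VA, inverter/charger rating (example below the PV power)

surplus = pv_power - loads  # power the Multi must absorb into the battery
print(surplus, "W through the Multi:",
      "overload!" if surplus > multi_va else "within rating")
```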
tuky Matthias Lange - DE ♦ commented ·
You are right, I didn't think of it that way.
2D Flow Aerofoil Sections
Aerofoil sections come in a variety of shapes and sizes. Some are classified by their geometric properties, while others by their aerodynamic properties. One of the earliest and simplest naming conventions is that for the NACA 4- and/or 5-digit aerofoil families. Here the designation numbers determine the mean line and thickness distribution of the section. More modern designation numbers, such as those of the 6 and 6A series sections, incorporate values related to the aerodynamic behaviour of the section and are constructed by mapping from the desired aerodynamic properties to a geometry that will produce them. All section geometries have methods for determining the surface x,y coordinates; the NACA systems below represent a first attempt at a parametric representation of camber and thickness.
NACA 4 and 5 Digit Aerofoil Sections
The NACA 4 and 5 Digit aerofoils represent two families of aerofoil section that can be generated by the use of a set of simple polynomial equations. While these sections are slightly out of date in
terms of current aircraft usage, they still represent useful sections and are easy to create.
The aerofoils are created by summing a thickness distribution with a given mean line equation.
For both families of aerofoil section the thickness distribution is as follows,
y[t] = t/0.2*(0.2969*sqrt(x) - 0.1260*x - 0.3516*x^2 + 0.2843*x^3 - 0.1015*x^4)
where x is a position along the chord line, given as a fraction of chord, and t is the value of maximum thickness as given by the last two digits of the aerofoil designation number (ie 0012 = symmetric section with t(max)=0.12c).
For the 4-digit family, the mean line is given as,
y[c]=m/(p^2)*(2*p*x-x^2) for 0<x<p
y[c]=m/((1-p)^2)*((1-2*p)+2*p*x-x^2) for p<x<1
Values p and m are given from the first two digits of the designation number. m being the value of maximum camber height (1/100ths chord) and p being the position of maximum camber height (1/10ths
chord). (ie 2412 = maximum camber height =0.02c located at 0.4c).
For the 5-digit family, the mean line is given as,
y[c]=k/6*(x^3-3*m*x^2+m^2*(3-m)*x) for 0<x<m
y[c]=k/6*m^3*(1-x) for m<x<1
Values p,k and m are found from the following table based on the first three digits of the designation number.
| Mean Line No. | p | m | k |
| 210 | 0.05 | 0.0580 | 361.4 |
| 220 | 0.10 | 0.1260 | 51.64 |
| 230 | 0.15 | 0.2025 | 15.957 |
| 240 | 0.20 | 0.2900 | 6.643 |
| 250 | 0.25 | 0.3910 | 3.230 |
The value of maximum camber height and its position will now be determined by the section construction process.(ie 23012 = maximum camber height =0.02c located at 0.15c).
The construction of the section is then done numerically by identifying surface points which combine the camber and thickness effects. Points are normally generated using a cosine distribution of chord x coordinates. For each x coordinate an upper (x[u],y[u]) and lower surface (x[l],y[l]) data point is created by applying the above equations and the construction method,
x[u] = x - y[t]*sin(theta) , y[u] = y[c] + y[t]*cos(theta)
x[l] = x + y[t]*sin(theta) , y[l] = y[c] - y[t]*cos(theta)
where x is the chordwise station, y[t] the half-thickness, y[c] the camber line height, and theta = atan(dy[c]/dx) is the local slope angle of the camber line.
A leading edge radius r is applied to smooth the front data points,
r = 1.1019*t^2
The program below creates coordinate data that can then be stored as an ASCII formatted data file for use with other applications.
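The downloadable program itself is not reproduced here; as a rough stand-in, the following Python sketch generates NACA 4-digit coordinates from the equations above. The cosine spacing and the application of thickness normal to the camber line follow the construction method described; the function name and defaults are illustrative, and the leading-edge radius smoothing is omitted.

```python
import numpy as np

def naca4(code="2412", n=100):
    """Upper/lower surface coordinates of a NACA 4-digit section."""
    m = int(code[0]) / 100.0    # maximum camber height (fraction of chord)
    p = int(code[1]) / 10.0     # position of maximum camber
    t = int(code[2:]) / 100.0   # maximum thickness to chord ratio
    x = 0.5 * (1 - np.cos(np.linspace(0, np.pi, n)))  # cosine spacing
    yt = t / 0.2 * (0.2969*np.sqrt(x) - 0.1260*x - 0.3516*x**2
                    + 0.2843*x**3 - 0.1015*x**4)
    if p > 0:
        yc = np.where(x < p,
                      m/p**2 * (2*p*x - x**2),
                      m/(1-p)**2 * ((1 - 2*p) + 2*p*x - x**2))
        dyc = np.where(x < p,
                       2*m/p**2 * (p - x),
                       2*m/(1-p)**2 * (p - x))
    else:                        # symmetric section, e.g. "0012"
        yc = dyc = np.zeros_like(x)
    th = np.arctan(dyc)          # local camber line slope angle
    xu, yu = x - yt*np.sin(th), yc + yt*np.cos(th)
    xl, yl = x + yt*np.sin(th), yc - yt*np.cos(th)
    return (xu, yu), (xl, yl)
```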
NACA 6 and 6A Series Aerofoil Sections
These aerofoil sections are designed to produce laminar flow and low drag over a reasonable range of angles of attack. The thickness distribution is thus based on a prescribed velocity distribution
for the specific symmetric section required. The camber line is a polynomial function based on the desired ideal lift coefficient.
For 6 Series Sections the designation numbers represent the aerofoil aerodynamic properties as shown in the following example,
6 -- 6 series designation number.
4 -- location of Cp(min) as 1/10ths chord.
(1)-- 1/2 width of drag bucket in CL counts.
2 -- Ideal (or Design) CL value.
15 -- Max thickness to chord ratio, 1/100ths chord
The following program can also construct NACA 6 and 6A series sections using the techniques described by Ladson, Brooks, Hill and Sproles. In these cases the data file contains 175 coordinate points
with compact spacing closer to the leading edge of the section.
Executable Program : NACA Sections(278k)
"Theory of Wing Sections" I.H.Abbott & A.E.Von Doenhoff, Dover, NY, 1959.
"Computer Program to Obtain Ordinates for NACA Aerofoils." Ladson, Brooks, Hill & Sproles, NACA Langley, NASA TM-4741
Data for Other Aerofoil Sections
Clark-Y Section
Web pages with lots of aerofoil sections :
NASG, Michael Selig's pages at UIUC
Return to Contents Page
© AMME, University of Sydney, 1998-2006 | {"url":"http://www-mdp.eng.cam.ac.uk/web/library/enginfo/aerothermal_dvd_only/aero/naca45/naca45.html","timestamp":"2024-11-09T13:55:48Z","content_type":"text/html","content_length":"8048","record_id":"<urn:uuid:ab5da144-83f0-4ddb-b4dd-b114f104ad3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00873.warc.gz"} |
Identify each number as prime, composite, or neither. If the number is composite, write it as a product of prime factors. $$ 30 $$
Short Answer
Expert verified
30 is a composite number. Prime factorization: 2 × 3 × 5
Step by step solution
- Understand the Type of Number
A number can either be prime, composite, or neither. A prime number has only two distinct positive divisors: 1 and itself. A composite number has more than two positive divisors. If a number is
neither prime nor composite, it is either 0 or 1.
- Check if the Number is Prime
To determine if 30 is a prime number, check if it has any divisors other than 1 and 30. Since 30 can be divided evenly by 1, 2, 3, 5, 6, 10, 15, and 30, it is not a prime number.
- Determine if the Number is Composite
Since 30 has more than two positive divisors, it is a composite number.
- Prime Factorization
To express 30 as a product of prime factors, use factorization. Start dividing 30 by the smallest prime number, which is 2. \[ 30 \text{ divided by 2} = 15 \] Next, divide 15 by the next smallest
prime number, which is 3. \[ 15 \text{ divided by 3} = 5 \] Since 5 is a prime number and cannot be divided further by other primes (except 5 itself), the prime factors of 30 are: \[ 30 = 2 \times 3
\times 5 \]
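The same trial-division procedure is easy to automate; a short Python sketch:

```python
def prime_factors(n):
    """Return the prime factorization of n (n >= 2) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:     # divide out each prime completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                 # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(30))      # [2, 3, 5]
```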
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
prime numbers
A prime number is a number greater than 1 that has no positive divisors other than 1 and itself. Primes are the building blocks of all whole numbers greater than 1, because every such number can be written as a product of primes. The first few prime numbers are 2, 3, 5, 7, 11, and 13.
Here's a simple test: if you want to know whether a number is prime, try dividing it by the primes (2, 3, 5, 7, ...) up to its square root and see if any of them divide it evenly. If none do, the number is prime. If even one prime divides it evenly, the number is not prime.
An important fact about prime numbers is that 2 is the only even prime number. All other even numbers can be divided by 2, so they can't be prime.
composite numbers
A composite number has more than two positive divisors. This means it can be divided evenly by numbers other than 1 and itself. Examples include 4, 6, 8, and 9.
To determine if a number is composite, you check if any smaller numbers (other than 1) can divide it evenly. If you find such a number, then the number is composite. For example, the number 30 can be
divided by 2, 3, 5, and so on.
Knowing whether a number is composite can help in various areas of mathematics, including prime factorization.
prime factorization
Prime factorization involves breaking down a composite number into a product of prime numbers. These prime numbers are called the 'prime factors'.
For example, let’s take the number 30. To start the prime factorization process, you would begin with the smallest prime number that can divide it, which is 2:
\[ 30 \div 2 = 15 \] Then take 15 and divide it by the next smallest prime number, which is 3:
\[ 15 \div 3 = 5 \] Since 5 is already a prime number, the process stops here.
So, the prime factors of 30 are 2, 3, and 5, and you can write it as:
\[ 30 = 2 \times 3 \times 5 \] Prime factorization is useful in simplifying fractions, finding least common multiples, and understanding the structure of numbers. | {"url":"https://www.vaia.com/en-us/textbooks/math/beginning-and-intermediate-algebra-7-edition/chapter-0/problem-15-identify-each-number-as-prime-composite-or-neithe/","timestamp":"2024-11-08T01:18:00Z","content_type":"text/html","content_length":"250610","record_id":"<urn:uuid:b853944b-8de4-4b1e-926e-ddc7c767bea1>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00060.warc.gz"} |
Excel Function
Horizontal well productivity index under steady state flow. The Joshi method for well in an anisotropic reservoir, [STB/(d.psi)]
Excel Function Syntax:
ProdIndexHorWellJoshi(L, Rw, Re, h, Kz, Kxy, Bl, Ul)
| Parameter | Description |
| L | Length of the horizontal part of the well, [ft] |
| Rw | Wellbore radius, [ft] |
| Re | Drainage radius of the horizontal well, [ft] |
| h | Reservoir height, [ft] |
| Kz | Vertical permeability, [mD] |
| Kxy | Horizontal permeability, [mD] |
| Bl | Liquid formation volume factor, [bbl/STB] |
| Ul | Liquid viscosity, [cP] |
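The add-in itself is closed, but a Python sketch of one commonly published form of Joshi's steady-state equation (with anisotropy factor beta = sqrt(Kxy/Kz) and the field-unit constant 141.2) gives the flavor; the exact variant the Excel function implements may differ slightly:

```python
import math

def prod_index_hor_well_joshi(L, Rw, Re, h, Kz, Kxy, Bl, Ul):
    """Steady-state horizontal-well productivity index, Joshi method
    with anisotropy. Units as in the table above; result in STB/(d.psi).
    This is one published form; the add-in's variant may differ."""
    beta = math.sqrt(Kxy / Kz)                    # anisotropy factor
    half_L = L / 2.0
    # half major axis of the drainage ellipse
    a = half_L * math.sqrt(0.5 + math.sqrt(0.25 + (2.0 * Re / L) ** 4))
    horiz = math.log((a + math.sqrt(a**2 - half_L**2)) / half_L)
    vert = (beta * h / L) * math.log(beta * h / (2.0 * Rw))
    return Kxy * h / (141.2 * Bl * Ul * (horiz + vert))
```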
Related Functions
Horizontal well productivity index under steady state flow. Borisov method for well in an isotropic reservoir, [STB/(d.psi)]
Horizontal well productivity index under steady state flow. The Giger-Reiss-Jourdan method for well in an anisotropic reservoir, [STB/(d.psi)]
Horizontal well productivity index under steady state flow. The Renard-Dupuy method for well in an anisotropic reservoir, [STB/(d.psi)] | {"url":"https://petroleumoffice.com/function/prodindexhorwelljoshi/","timestamp":"2024-11-13T11:10:19Z","content_type":"text/html","content_length":"31047","record_id":"<urn:uuid:c51c6f1d-0299-4120-bc0c-a3e4ef1a32e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00642.warc.gz"} |
An Introduction to Mathematical Probability
Lauretta J. Fox
Mutually exclusive events are two or more events that cannot occur simultaneously. If one die is thrown and comes up three, it cannot come up six or any other number at the same time. If a coin is
tossed and comes up tails, it cannot come up heads on the same toss. If a person weighs 125 pounds, he cannot have any other weight simultaneously.
The probability of one or the other of two mutually exclusive events happening is the sum of the separate probabilities of these events. If X and Y represent two mutually exclusive events
P(X or Y) = P(X) + P(Y)
This is known as the addition theorem and may be extended to any number of mutually exclusive events.
Example 1: If a bag contains four blue marbles, six yellow marbles, and five green marbles, what is the probability that in one drawing a person will pick either a blue marble or a green marble?
Solution: There are fifteen marbles in the bag. The probability that a blue marble will be selected is 4/15. The probability that a green marble will be drawn is 5/15 or 1/3.
P(B or G) = P(B) + P(G) = 4/15 + 5/15 = 9/15 = 3/5
The probability that either a blue marble or a green marble will be drawn is 3/5.
Example 2: If a die is thrown, what is the probability that either a two or a six will come up?
Solution: The die can come up any one of six ways. The probability that a two will come up is 1/6. The probability that a six will come up is 1/6.
P(2 or 6) = P(2) + P(6) = 1/6 + 1/6 = 2/6 = 1/3
The probability that either a two or a six will come up is 1/3.
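Both examples are easy to confirm by enumeration; here is a quick Python check of Example 1:

```python
from fractions import Fraction

marbles = ["blue"] * 4 + ["yellow"] * 6 + ["green"] * 5
p_blue = Fraction(4, len(marbles))
p_green = Fraction(5, len(marbles))
print(p_blue + p_green)   # 3/5, since the two events are mutually exclusive
```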
1.) Are the following pairs of events mutually exclusive?
____ a.) Living in New Haven and working in New York.
____ b.) Being a freshman and being a junior in high school.
____ c.) Being a professor and being an author of a book.
____ d.) Drawing a red card and drawing the ace of spades.
____ e.) Drawing a face card and drawing the six of hearts from a normal deck of cards.
2.) If the probabilities that Joan, Beverly and Evelyn will be elected secretary of a ski club are 1/8, 2/5, and 1/3 respectively, find the probability that one of the three will be elected.
3.) If the probabilities that John and Harry will be valedictorian of a high school class are 1/4 and 3/7 respectively, what is the probability that either John or Harry will be valedictorian?
4.) Chris and Janet are among twenty girls who enter a tennis tournament. What is the probability that either one of these two girls will win the tournament?
5.) In a drawer are six white gloves, four black gloves, and eight brown gloves. If a glove is picked at random, what is the probability that it will be either white or brown? | {"url":"https://teachersinstitute.yale.edu/curriculum/units/1987/5/87.05.02/7","timestamp":"2024-11-05T19:19:57Z","content_type":"text/html","content_length":"40483","record_id":"<urn:uuid:2057357d-6cec-48ab-bb69-6e3953abcb43>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00510.warc.gz"} |
This outlines my approach to creating a solver for the Galaxy Game.
1. The problem made me think of the eight-queens problem because of the directions and extent of what each tile can see. To find the mines in an 8x8 grid, one of the solutions to the eight-queens puzzle would give complete coverage of the board with the fewest checked tiles (at least I suspect so; it would be close to optimal regardless).
2. Using this set of eight hints, every tile unobstructed by a bomb is seen at least twice, which should give a fairly good set of information for finding the bombs. It may even be the case that enough information is gathered before all eight tiles are flipped, or that the bombs all lie on the initial eight.
3. Using this information, any tile that sees 0 bombs can be used to discard all the tiles it sees. Each tile that can see a bomb gives a likelihood weighting to all the tiles in its path of vision (unless, of course, they have been discarded by a 0 weighting). The likelihood rating for each tile is the sum of its observers' weightings. The tile with the highest weighting is tested, the weightings are recalculated, and the next tile is guessed, repeating until all the bombs are found.
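As a quick sanity check of the coverage claim in step 2, here is a small Python sketch (the placement below is one known eight-queens solution, not necessarily the one intended; obstruction by bombs is ignored):

```python
N = 8
cols = [0, 4, 7, 5, 2, 6, 1, 3]          # one known eight-queens solution
queens = list(enumerate(cols))

seen = [[0] * N for _ in range(N)]
rays = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
        (0, 1), (1, -1), (1, 0), (1, 1)]
for r, c in queens:
    seen[r][c] += 1                      # the flipped tile itself
    for dr, dc in rays:
        rr, cc = r + dr, c + dc
        while 0 <= rr < N and 0 <= cc < N:
            seen[rr][cc] += 1
            rr, cc = rr + dr, cc + dc

# Every non-queen square lies in some queen's row and some queen's
# column, so without obstructions it is seen at least twice.
print(min(seen[r][c] for r in range(N) for c in range(N)
          if (r, c) not in queens))      # prints a value >= 2
```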
It is possible to get full coverage of the board with seven queens (starting at, say, the top left and choosing each square on a diagonal except for the bottom-right square); however, this gives information that is symmetric, with no way to distinguish which half of the board the goals are lost in. The nine-queens solutions seem to give a lot more information, and it may be worth investigating which solution gives the most overlap of squares and whether this will in fact help. | {"url":"https://wiki.kram.nz/w/2008:GalaxyGameSolver","timestamp":"2024-11-11T06:29:34Z","content_type":"text/html","content_length":"16418","record_id":"<urn:uuid:9704e522-3942-4f4e-930a-7e97e7aec1fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00675.warc.gz"} |
Modern Portfolio Theory For Cryptocurrency | Mudrex Learn
Observations of the crypto market give the impression that “when Bitcoin sneezes, the cryptocurrency market catches a cold.” Technically, diversifying away risk in a crypto-only portfolio could be
difficult. Creating a two-asset portfolio with highly correlated assets gives an investor a greater risk of losing more wealth. When two assets have a strong correlation coefficient, they tend to
move in the same direction. If two assets in the same portfolio move in the same direction, then your gains in wealth will be greater and your losses more severe. That could be the reason why
investors try to create portfolios with negatively correlated assets. If one asset is declining in a portfolio consisting of two assets that are negatively correlated, then the other asset in the
portfolio should be increasing. This should, in effect, diminish the maximum amount of wealth that can be lost in a portfolio.
Just from checking out the digital asset prices on a cryptocurrency exchange, one can see that they are highly correlated with one another. If Bitcoin is in the red for the day, nearly every cryptocurrency on the homepage will be in the red; if Bitcoin is in the green, so will the others. That is why people say that "even the most unseaworthy boats will float when the tide rises."
Correlation of Bitcoin with other top cryptocurrencies
In particular, the Modern Portfolio Theory advocates diversification of securities and asset classes or the benefits of not putting all your eggs in one basket.
Introduction to Modern Portfolio Theory (MPT)
Modern portfolio theory is based on the expected returns and the variance and covariance between the returns of assets. Investors try to form a portfolio that achieves the highest expected return,
but that does not exceed a given level of risk measured by the portfolio variance.
While the expected return of a portfolio of assets is the weighted average of the expected returns of its components, the variance of a portfolio of assets can be smaller than the average of the
variance of its components because assets might move in opposite directions. If one asset gains while the other asset loses, the overall volatility of the portfolio is smaller since the movements of
the assets compensate for each other.
The way to overcome this dilemma, MPT proposes, is through diversification, which refers to the spread of money across different asset classes and investments. According to MPT, an investor can hold
a particular asset type or investment that is high in risk individually, but, when combined with several others of different types, the whole portfolio can be balanced in such a way that its risk is
lower than the individual risk of underlying assets or investments.
Instead of focusing on the risk of each individual asset, Markowitz demonstrated that a diversified portfolio is less volatile than the total sum of its individual parts. While each asset itself
might be quite volatile, the volatility of the entire portfolio can actually be quite low.
The expected return of the portfolio is calculated as a weighted sum of the individual assets’ returns. If a portfolio contained four equally weighted assets with expected returns of 4%, 6%, 10%, and
14%, the portfolio’s expected return would be:
(4% x 25%) + (6% x 25%) + (10% x 25%) + (14% x 25%) = 8.5%
The portfolio’s risk is a complicated function of the variances of each asset and the correlations of each pair of assets. To calculate the risk of a four-asset portfolio, an investor needs each of
the four assets’ variances and six correlation values since there are six possible two-asset combinations with four assets. Because of the asset correlations, the total portfolio risk, or standard
deviation, is lower than what would be calculated by a weighted sum.
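As a concrete Python sketch, using the expected returns from the example above together with made-up volatilities and an assumed 0.8 pairwise correlation (illustrative numbers only):

```python
import numpy as np

mu = np.array([0.04, 0.06, 0.10, 0.14])     # expected returns from above
w = np.full(4, 0.25)                        # equal weights
vol = np.array([0.30, 0.45, 0.60, 0.80])    # assumed asset volatilities
rho = np.full((4, 4), 0.8)                  # assumed pairwise correlation
np.fill_diagonal(rho, 1.0)
cov = np.outer(vol, vol) * rho              # covariance matrix

print(w @ mu)                    # expected return: 0.085
print(np.sqrt(w @ cov @ w))      # portfolio risk (standard deviation)
```

Lowering the assumed correlation in this sketch shrinks the portfolio standard deviation while leaving the expected return unchanged, which is exactly the diversification effect described above.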
Diversification is a portfolio allocation strategy that aims to minimize idiosyncratic risk by holding assets that are not perfectly positively correlated. Correlation is simply the relationship that two variables share, and it is measured using the correlation coefficient ρ, which satisfies −1 ≤ ρ ≤ 1.
• A correlation coefficient of -1 demonstrates a perfect negative correlation between two assets. It means that a positive movement in one is associated with a negative movement in the other.
• A correlation coefficient of 1 demonstrates a perfect positive correlation. Both assets move in the same direction in response to market movements.
A perfect positive correlation between assets within a portfolio increases the standard deviation/risk of the portfolio. Diversification reduces idiosyncratic risk by holding a portfolio of assets
that are not perfectly positively correlated.
For example, suppose a portfolio consists of assets A and B. The correlation coefficient for A and B is -0.9. This shows a strong negative correlation – a loss in A is likely to be offset by a gain
in B. It is the advantage of owning a diversified portfolio.
Kinds of Risk
Modern portfolio theory states that the risk for individual asset returns has two components:
• Systematic Risk: This refers to market risks that cannot be reduced through diversification or the possibility that the entire market and economy will show losses that negatively affect
investments. It’s important to note that MPT does not claim to be able to moderate this type of risk, as it is inherent to an entire market or market segment.
• Unsystematic Risk: Also called specific risk, unsystematic risk is specific to individual assets, meaning it can be diversified as you increase the number of assets in your portfolio.
For a well-diversified portfolio, the risk (average deviation from the mean) of each individual asset contributes little to portfolio risk. Instead, it is the co-movement (covariance) between the assets' returns that determines overall portfolio risk. As a result, investors benefit from holding diversified portfolios instead of individual assets; assets with low correlations are generally good diversifiers in a portfolio, whereas assets with negative correlations serve as a portfolio hedge.
Systematic vs. Unsystematic Risk
The Efficient Frontier
While the benefits of diversification are clear, investors must determine the level of diversification that best suits them. This can be determined through what is called the Efficient Frontier, a
graphical representation of all possible combinations of risky securities for an optimal level of return given a particular level of risk.
The Efficient Frontier is the cornerstone of MPT. You can think of it like this: Different cryptocurrencies will produce different returns. Say if you select three cryptos for your portfolio, then
the Efficient Frontier will represent the best combinations of these three cryptocurrencies. Every point on the Efficient Frontier represents the maximum expected return for a given level of risk.
The Efficient Frontier Curve
Any portfolio that falls outside the Efficient Frontier is considered suboptimal for one of two reasons: it carries too much risk relative to its return or too little return relative to its risk. A
portfolio that lies below the Efficient Frontier doesn’t provide enough return when compared to the level of risk. Portfolios found to the right of the Efficient Frontier have a higher level of risk
for the defined rate of return.
Examples of diverse portfolios on the Efficient Frontier curve
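One way to see the frontier concretely is to sample many random weightings and compare return against risk; the upper-left edge of the resulting cloud approximates the Efficient Frontier. A self-contained Python sketch, reusing the same made-up inputs as before:

```python
import numpy as np

mu = np.array([0.04, 0.06, 0.10, 0.14])      # illustrative expected returns
vol = np.array([0.30, 0.45, 0.60, 0.80])     # illustrative volatilities
rho = np.full((4, 4), 0.8)                   # assumed pairwise correlation
np.fill_diagonal(rho, 1.0)
cov = np.outer(vol, vol) * rho

rng = np.random.default_rng(0)
W = rng.dirichlet(np.ones(4), size=20_000)   # random long-only weights
rets = W @ mu
risks = np.sqrt(np.einsum("ij,jk,ik->i", W, cov, W))

# Plotting risks vs. rets traces the attainable region; its upper-left
# boundary approximates the Efficient Frontier.
i = np.argmin(risks)
print(risks[i], rets[i])          # the minimum-variance portfolio
```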
How to diversify away risk in crypto portfolios?
Although digital assets are highly correlated, it is possible to diversify away risk in a crypto-only portfolio by adding more crypto assets to the portfolio. It is possible to diminish standard
deviation when you move from a single-asset portfolio to a two-asset portfolio, from a two-asset portfolio to a three-asset portfolio, and then to a four-asset portfolio, and so on and so forth.
The reason you are able to diversify away risk in a crypto-only portfolio even though the crypto-assets are highly correlated could be because there are different types of risk.
1. Single asset risks: Risks of project failure, delisting from exchanges, ban from government, huge dump due to a major holder deciding to sell all his holdings one day.
2. Average industry growth: If you invest in just a single or few assets, it is like playing the lottery. Your assets can perform differently – one could grow fast, and another could just make +10%,
and that is all. So, portfolio diversification gives you the opportunity to receive profit from the whole market growth and not depend just on having faith in one coin.
3. You can make different portfolios (for example, high-risk, average, low risk) and receive profit that will be “averaged” on the risk type.
Despite its volatility, Bitcoin has not exhibited a significant correlation with other traditional asset classes, such as commodities, equities, or fixed-income products, since its creation in 2009
(with a median correlation coefficient with other asset classes below 0.10). Binance Research simulated different Bitcoin allocation techniques in existing diversified multi-asset portfolios.
All simulated portfolios, which included Bitcoin, exhibited overall better risk-return profiles than traditional multi-asset class portfolios. These results show that Bitcoin provides active
diversification benefits for investors worldwide who follow multi-asset strategies. Therefore, make sure you backtest your strategies against historical prices before relying on them.
Criticism of Modern Portfolio Theory (MPT)
Perhaps the most serious criticism of MPT is that it evaluates portfolios based on variance rather than downside risk. Two portfolios that have the same level of variance and returns are considered
equally desirable under modern portfolio theory. One portfolio may have that variance because of frequent small losses. In contrast, the other could have that variance because of rare spectacular
declines. Most investors would prefer frequent small losses, which would be easier to endure. Post-modern portfolio theory (PMPT) attempts to improve modern portfolio theory by minimizing downside
risk instead of variance.
The takeaway from this article is that spreading wealth over a number of assets instead of putting it all into one could diversify away the idiosyncratic risk that is unique to a particular digital
asset. Moreover, to manage risk while trading, the more one is able to diversify, the better situated he could be to protect himself against losses in the cryptocurrency portfolio. There are many
strategies to minimize risk in crypto trading, but you just have to find one that suits you the most.
If you wish to invest in multiple cryptocurrencies without manually trading, Invest With Mudrex using algo trading and generate consistent returns on autopilot.
Until next time! | {"url":"https://mudrex.com/learn/modern-portfolio-theory-for-cryptocurrency/","timestamp":"2024-11-15T03:44:22Z","content_type":"text/html","content_length":"122476","record_id":"<urn:uuid:3a10ba5e-4b40-4eec-8b2d-2d33066f8057>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00710.warc.gz"} |
Phase Transitions and the Conformal Bootstrap: Part One
This is a two-part post. Stay tuned for part two.
Hi all, I’m Brian—a recent PhD graduate of the University of Michigan. I wanted to write a post that connects some classic concepts in physics—phase transitions—with an important branch of
research—the conformal bootstrap. The stuff at the beginning of this post is pretty introductory, but watch out: the end gets pretty technical, and a little knowledge of Quantum Field Theory would
help too.
One of the exciting parts of theoretical physics is finding unexpected connections. Often we find that physical systems which look completely different at a superficial level are partially or
entirely the same when you consider their dynamics in a more abstract way—for instance at the level of the organization of the states in the theory. Erin wrote a post discussing an example of this,
which is both very surprising and very important in contemporary research: holography. One of the most important things that we’ve learned about fundamental physics in the last few decades is that
quantum gravity in Anti-de Sitter space has a dual description in terms of the quantum dynamics on the boundary. Other dualities in high energy physics abound, such as those between different types
of string theory, between different quantum field theories, or between the same theories in different regimes.
Critical Phenomena
Phase Transitions
Two-dimensional spin lattice, with spins aligned.
In this post, we will focus on a particular type of these coincidences that arise between different systems near certain kinds of phase transitions. The transitions that we most commonly encounter in
everyday life are the freezing/melting transition between solid and liquids, and the evaporation/condensation transition between liquids and gases. However, many physical systems on earth undergo
transitions, such as the formation of carbon into diamonds due to high temperatures and pressures in the earth’s mantle, or the onset of superconductivity when certain metals reach low enough
temperatures. In general, phase transitions are defined as discontinuities in the behavior of a system when you change external thermodynamic quantities such as temperature or pressure. They may be
broadly classified by where the discontinuity shows up. If the free energy of the system is a discontinuous function of the thermodynamic variables (typically temperature and pressure), it is called
discontinuous or first order. This discontinuity in the free energy appears as a latent heat, which is thermal energy required for a phase transition but which does not change the temperature. Most
of the transitions we encounter in day-to-day life are first order. For example, boiling water at atmospheric pressure has a latent heat of 40.65 kJ/mol. This means that even after you reach the
boiling point, you still have to put in a lot of energy just to convert the water to gas– 40.65 kJ (or about 10 Calories) for every mole (about 18 grams) of water!
In this post, however, we will be interested in the other class of phase transitions, which are called continuous or second order. These are transitions where the free energy is continuous across the
transition (but often its derivatives are not). In the case of continuous transitions, it is often useful to define an order parameter to describe the transition. These are quantities that are zero
in one phase and non-zero in another. We’ll see a few examples below.
Interacting across a distance with a string telephone.
Another really fundamental feature of continuous phase transitions is that when physical systems are near them, parts of the systems which are far away from each other can still interact. These
interactions are quantified by correlation functions, which describe how quantities in the system at different positions are related. These typically decay exponentially as the distance
increases—that is, a variable like the spin at some point may be highly correlated with the spins at nearby points, but it will be almost completely uncorrelated with far away spins.
A simple example is a lattice of spins, where a spin sits at each corner and can be up or down. In this simple example we might quantify this by
$\langle \sigma_i \sigma_j \rangle \sim e^{-r_{ij}/\xi}.$
Here we’ve used ∼ to indicate that we are considering only the scaling, and might be ignoring constant and subleading pre-factors. This equation tells us that the correlation between the spins at
site i and site j decays exponentially as the distance rij between i and j increases. Since we can’t put a dimensionful quantity like distance in an exponent, we are forced to introduce another
distance scale ξ, which is called the correlation length. This variable ξ can depend on the external parameters of the system, such as temperature and pressure. However we find that as the system
approaches a phase transition, the correlation length approaches infinity, implying that even greatly separated spins are now correlated.
Example 1: Boiling Water
The phase diagram of water.
Let’s illustrate this with an example. As we know, water boils at 100 degrees Celsius. Actually, that’s only true at atmospheric pressure. As you increase the pressure, the boiling point will
increase, but not forever. Eventually, you reach a point, called a critical point, above which there is no sharp phase transition between liquid and gas. This is shown in the figure below, which is
called a “phase diagram.” For water, this point occurs at 374 degrees Celsius and 218 atmospheres of pressure–this is far above the pressures we are used to in everyday life, which is only about one
atmosphere! Below the critical point, the phase transition is discontinuous, which means that there is a latent heat. At the critical pressure, however, the latent heat disappears and the transition
becomes continuous. The order parameter for this transition is the difference in density between the liquid and gas phase. We will denote this by ρ. Let’s consider the pressure to be fixed at its
critical value. Then, if we were to measure ρ as we approach the critical temperature Tc, we would find a simple behavior for ρ:
$\rho \sim |T - T_c|^{\beta}$
The absolute values mean that there will be a discontinuity in the first derivative of ρ. β is a constant called a critical exponent that is intrinsic to the system. For the water / gas transition, we have
$\beta \approx 0.326$
Recall also that the correlation length diverges as we approach the critical temperature. This is also characterized by a critical exponent:
$\xi \sim |T - T_c|^{-\nu}$
β and ν are only two of a number of critical exponents characterizing the system.
Example 2: The Ising Model
Let’s turn to another example: the lattice of spins we mentioned earlier. This model has a particle at each corner i of a square lattice with spin σi. We’ll allow the spin at each site to be up or
down, so σ = ±1. The spins in this model only interact with their nearest neighbor—the interaction energy is positive or negative according to whether the spins are the same or different. The total
energy of the system depends on the configuration of spins and is given by:
$E = -J \sum_{\langle ij \rangle} \sigma_i \sigma_j - h \sum_i \sigma_i$
Here ⟨ij⟩ means that we sum over all pairs of neighboring particles, while the sum over i is just a sum over every site. Therefore, J describes the interactions within the lattice and h models the
effect of an external magnetic field. This is a very simplified model of a magnet—the spin of each particle comprising the magnet points up or down, and the resulting magnetic field is the net number
of spins pointing up or down. This net field is called the magnetization, given by the net magnetic field averaged over the number of spins N:
$M = \frac{1}{N} \sum_i \sigma_i$
The magnetization plays the role of the order parameter of this system, just as the density did for the liquid / gas transition. At high temperatures, the spins are oriented more or less
randomly—each spin has an even chance of being up or down, so the magnetization is close to zero. But as the temperature drops, thermal fluctuations stop overriding the interaction energy, and nearby
spins will be more and more correlated. At zero temperature, the system relaxes into its ground state. For J > 0, this is the state where all the spins point the same way—then the material is said to
be ferromagnetic (J < 0 leads to a ground state with anti-aligned neighboring spins, a configuration called anti-ferromagnetic). With all the spins pointing the same way, the magnetization will
approach one. There is a phase transition between the disordered, high-temperature phase and the low-temperature, aligned phase. We can model this with a similar equation to the case of water:
$M \sim |T - T_c|^{\beta}$
and we can also consider the correlation length
$\xi \sim |T - T_c|^{-\nu}$
This model can exist in any number of dimensions: in one dimension the "lattice" is evenly spaced points on a line, in two dimensions it's squares on a plane, and so on. The critical exponents β and ν are different for each number of dimensions. In two dimensions, we find:
$\beta = \tfrac{1}{8}, \qquad \nu = 1,$
while in three dimensions, we find
$\beta \approx 0.326, \qquad \nu \approx 0.630.$
These three-dimensional values were the same ones that we found for the liquid/gas transition! The remarkable coincidence that we’ve been hinting at before is that completely different physical
systems can be described by the same critical exponents! This phenomenon is called universality. It tells us that, near a critical point, the behavior of a system depends on the dimension and the
symmetries of the problem, but not the underlying dynamics.
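Universality invites numerical experiments. Here is a minimal Metropolis Monte Carlo sketch of the 2D Ising model (J = 1, h = 0); the lattice size and sweep counts are only illustrative, and a lattice this small shows the transition only roughly:

```python
import numpy as np

def ising_abs_magnetization(n=16, T=2.0, sweeps=300, seed=0):
    """Average |magnetization| per spin of the 2D Ising model
    (J = 1, h = 0) on an n x n periodic lattice, via Metropolis."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps * n * n):          # single-spin updates
        i, j = rng.integers(n, size=2)
        nb = (s[(i + 1) % n, j] + s[(i - 1) % n, j]
              + s[i, (j + 1) % n] + s[i, (j - 1) % n])
        dE = 2 * s[i, j] * nb                # energy cost of flipping
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]
    return abs(s.mean())

# Below Tc = 2/ln(1 + sqrt(2)) ~ 2.269, |M| tends toward 1;
# above it, toward 0.
print(ising_abs_magnetization(T=1.5), ising_abs_magnetization(T=3.5))
```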
The Ising model has a long history. The one-dimensional model was solved by Ising himself in his PhD thesis way back in 1924. The two-dimensional model with no external magnetic field was solved by
Onsager in 1944. The two-dimensional model with a magnetic field was only solved exactly in 1989, by Zamolodchikov. In four or more dimensions, the exponents can be computed using an approach called
mean field theory. Therefore the three-dimensional model is of considerable theoretical interest, because it is notoriously difficult to study. Unlike the other dimensions, its exponents are not
believed to be rational numbers*, and so far they do not have a closed form expression.
Conformal Field Theory
In this section we will introduce a formalism, called Conformal Field Theory (CFT), which can be used to study the behavior of the Ising model (and many other systems) at the critical point. In some
cases, this formalism will allow us to fully solve the theory. This is the case for the two-dimensional Ising model, where CFT allows us to compute the exact values of the critical exponents. In
other cases, such as the three-dimensional Ising model, the theory cannot be fully solved, but CFT gives an efficient set of tools for putting rigorous bounds on the exponents. Those tools are called
the conformal bootstrap, and they will be the subject of the final section. However, for the sake of simplicity we will focus on the two-dimensional Ising model in what follows.
What is a CFT?
Let’s first go over what a conformal field theory (CFT) is. This is a huge field, and we won’t be able to do it justice in this post. Let’s briefly try to give a little of the flavor.
A CFT is a quantum field theory (QFT) for which an enlarged group of spacetime symmetries, the "conformal group", acts on the states. Typically, we study QFT in flat (rather than curved) spacetime, where the symmetries are translations, rotations, and Lorentz transformations. The conformal group, however, includes these symmetries and adds two more: dilatations and special conformal transformations. A typical introduction to CFTs usually involves determining the commutation relations between the generators of these symmetries and showing how they each act on the fields in the theory. Here, we're going to skip to the important part: scaling transformations. These act on the coordinates as
$x \to \lambda x,$
which means the fields transform as
$\phi(x) \to \lambda^{\Delta} \, \phi(\lambda x).$
In a CFT, each field φ has a positive real number ∆ associated with it—this is called the scaling dimension of φ. It’s conceptually similar to the mass dimension in a normal quantum field theory**.
This extra symmetry may seem innocuous, but it affects the structure of the theory on a fundamental level. For one thing, scaling transformations mean that there is no notion of asymptotic states,
because there is no real notion of particles getting “very far apart”. This means there is no way to define an S-matrix. Therefore, it is natural for the correlation functions to play the role of the
primary observables in CFTs. We observe that two-point functions in a CFT transform in the following way under scaling:
$\langle \phi(\lambda x) \, \phi(\lambda y) \rangle = \lambda^{-2\Delta} \langle \phi(x) \, \phi(y) \rangle.$
The rotational and translational symmetry imply that these functions can only depend on the difference between x and y. Only one function of |x − y| satisfies the scaling relationship we've written above, so the two-point function in CFT is fixed to be
$\langle \phi(x) \, \phi(y) \rangle = \frac{C}{|x - y|^{2\Delta}}.$
From Lattice Models to Continuous Fields
So much for our lightning outline of CFTs. As we’ve seen, they are basically collections of fields, and the physical content is described by the correlation functions of the fields, which transform a
certain way under scaling. The next question is: what do these models have to do with the lattice models we’ve outlined above? They are quite different after all—the CFT is a quantum field theory and
the fields take continuous values. In the lattice models, we also have correlation functions, but they are at discrete points in space, and the spin takes discrete values. But the remarkable fact is
there exists a CFT whose correlation functions are the same as the correlation functions for the Ising model, in the limit of zero lattice spacing. The basic idea can be summarized succinctly as:
$\langle \sigma(x) \, \sigma(y) \rangle_{\text{CFT}} = \lim_{a \to 0} \, a^{-2\Delta} \, \langle \sigma_{\lfloor x/a \rfloor} \, \sigma_{\lfloor y/a \rfloor} \rangle_{\text{lattice}}$
Here a is the spacing between sites, so the CFT operator at location x corresponds to a lattice spin at the site corresponding to the integer part of x/a. It is clear from this equation that the
scaling transformation of the CFT correlation function holds for the right-hand side of the equation as long as a is scaled along with x and y. To get a feeling for the fields in this CFT, let’s
consider the critical exponent η of the lattice model, defined at the critical point by
$\langle \sigma_i \sigma_j \rangle \sim \frac{1}{r_{ij}^{\eta}},$
where rij is the distance between the sites. We know from Onsager's solution that
$\eta = \tfrac{1}{4}$
in the two-dimensional Ising model. Since the lattice correlator must match the CFT two-point function above, a continuum CFT describing the model must have a field σ(x) with ∆ = η/2 = 1/8. Such a CFT does exist: it turns out to have three operators,
$I, \qquad \sigma, \qquad \varepsilon,$
with scaling dimensions $0$, $\tfrac{1}{8}$ and $1$ respectively.
In this case, I is the identity operator, which doesn’t do anything to the states it acts on. σ represents the local spin, and ε is the local energy density.
It is important to mention that this is not a proof that the models are the same. You might call it a “physicist’s proof”—if you find that enough quantities in two different-looking models are the
same, you can convince yourself they are the same model. Nonetheless, proving they are the same is more difficult***.
To be continued....
in part two, the post will focus on the details of
the conformal bootstrap.
Thank you to Andrew Hanlon for several rounds of thorough editing, and thank you to the Theory Girls for the opportunity to write this post! Please stay tuned for part two.
Citations and Acknowledgements
• The first section of this post was largely inspired by Henriette Elvang’s course on CFTs.
• Paul Ginsparg’s introduction to 2D CFTs is at (9108028)
• The modular bootstrap was introduced in (1608.06241).
• The modern conformal bootstrap was introduced in (0807.0004). For a nice introduction, see David Simmons-Duffin’s TASI lectures, (1602.07982).
• The bootstrap results for the three-dimensional Ising model and the liquid helium discrepancy were reported in (1603.04436).
**Like the mass dimension, it is essentially determined by the dimension of the theory and spin of the field in a free theory, but can be changed by renormalization effects which may be drastic in
strongly coupled theories.
***One approach involves a series of transformations between the two theories. First one must prove an equivalence between the two-dimensional (classical) Ising model we’ve described, and a 1D
quantum model. Then the operators of the 1D model are related to fermionic operators via a Jordan-Wigner transformation. Finally, a theory of free Majorana fermions is obtained in the limit where the
lattice spacing goes to zero, and this model has the three operators described above.
(2) By Matthieumarechal, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=4623701 | {"url":"https://www.theorygirls.com/post/phase-transitions-and-the-conformal-bootstrap-part-one","timestamp":"2024-11-14T16:27:05Z","content_type":"text/html","content_length":"1050039","record_id":"<urn:uuid:01cdc8c6-3efd-4e87-aa0d-98e3f1973273>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00524.warc.gz"} |
CTAN Update: computational-complexity class
Date: March 19, 2008 12:37:47 PM CET
On Tue, 18 Mar 2008, Michael Nüsken submitted an update to the computational-complexity package.
Location on CTAN: /macros/latex/contrib/computational-complexity
Summary description: Class originally designed for the journal Computational Complexity
License type: lppl
Announcement text: The LaTeX2e class cc was written for the journal Computational Complexity, and it can also be used for a lot of other articles. You may like it since it contains a lot of features such as more intelligent references, a set of theorem definitions, an algorithm environment, and more. The class requires natbib. This is the version: 2008/03/18 v2.08. It contains some bugfixes. (Note that the combination with hyperref is still problematic.) The author is Michael Nüsken. The package is located at /macros/latex/contrib/computational-complexity on CTAN; more information is on the package's CTAN page (if the package is new it may take a day for that information to appear). We are supported by the TeX Users Group. Please join a users group.
_______________________________________________
Thanks for the upload. For the CTAN Team, Rainer Schöpf
computational-complexity – Class for the journal Computational Complexity
The LaTeX2ε class cc was written for the journal Computational Complexity, and it can also be used for a lot of other articles. You may like it since it contains a lot of features such as more
intelligent references, a set of theorem definitions, an algorithm environment, and more.
The class requires natbib.
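A minimal usage sketch (untested; the exact front-matter commands and theorem-environment names the class provides may differ, so treat this as a guess and consult the class documentation):

```latex
\documentclass{cc}   % requires natbib

\begin{document}
\title{A Sample Article}
\author{A. Author}
\maketitle

\begin{theorem}  % assumed name for one of the class's theorem environments
  A sample statement.
\end{theorem}

\bibliographystyle{plainnat}  % any natbib-compatible style
\bibliography{refs}
\end{document}
```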
Package computational-complexity
Version 2.25f
Copyright 2000–2015 Michael Nüsken
Maintainer Michael Nüsken | {"url":"https://ctan.org/ctan-ann/id/alpine.DEB.1.00.0803191236070.19415@bk-ng-05.proteosys","timestamp":"2024-11-15T01:03:52Z","content_type":"text/html","content_length":"16728","record_id":"<urn:uuid:ac7ecd4a-731f-4e32-9a64-8bd1bb7ce362>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00494.warc.gz"} |
How do you combine (y/(4y+8)) - (1/(y^2+2y))? | HIX Tutor
How do you combine #(y/(4y+8)) - (1/(y^2+2y))#?
Answer 1
f(y) = y/(4(y+2)) - 1/(y(y+2)) = (y^2 - 4)/(4y(y+2)) = ((y-2)(y+2))/(4y(y+2)) = (y-2)/(4y)
Answer 2
To combine the expressions (y/(4y+8)) - (1/(y^2+2y)), we need to find a common denominator for the two fractions. The product (4y+8)(y^2+2y) works.
Next, we can rewrite the fractions with the common denominator:
y/(4y+8) = y(y^2+2y)/((4y+8)(y^2+2y))
1/(y^2+2y) = (4y+8)/((4y+8)(y^2+2y))
Now, we can subtract the fractions:
(y(y^2+2y) - (4y+8))/((4y+8)(y^2+2y))
Expanding the numerator:
(y^3 + 2y^2 - 4y - 8)/((4y+8)(y^2+2y))
The numerator factors by grouping: y^3 + 2y^2 - 4y - 8 = y^2(y+2) - 4(y+2) = (y^2-4)(y+2) = (y-2)(y+2)^2. The denominator is (4y+8)(y^2+2y) = 4y(y+2)^2. Cancelling (y+2)^2 gives the simplified result (y-2)/(4y), in agreement with Answer 1.
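A quick symbolic check (Python with sympy) confirms the simplified result:

```python
from sympy import symbols, simplify

y = symbols('y')
expr = y / (4*y + 8) - 1 / (y**2 + 2*y)
print(simplify(expr))        # (y - 2)/(4*y)
```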
| {"url":"https://tutor.hix.ai/question/how-do-you-combine-y-4y-8-1-y-2-2y-8f9af9c250","timestamp":"2024-11-08T06:07:26Z","content_type":"text/html","content_length":"574289","record_id":"<urn:uuid:86ffead8-6975-4c88-b2f4-78779252b34a>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00413.warc.gz"} |
[Solved] List the major classes of birds that are | SolutionInn
List the major classes of birds that are the most common pets.
Step by Step Answer:
The major classes of birds that are commonly kept as pets include parrots and parakeets (order Psittaciformes). This is one of the most popular classes o…
| {"url":"https://www.solutioninn.com/study-help/veterinary-assisting-fundamentals/list-the-major-classes-of-birds-that-are-the-most-1106446","timestamp":"2024-11-13T03:22:11Z","content_type":"text/html","content_length":"78770","record_id":"<urn:uuid:03753b99-0d40-4343-af9a-4cf8aba1b65d>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00222.warc.gz"} |
String Theory: Insight from the Holographic Principle - dummies
Another key insight into string theory comes from the holographic principle, which relates a theory in space to a theory defined only on the boundary of that space. The holographic principle isn’t
strictly an aspect of string theory (or M-theory), but applies more generally to theories about gravity in any sort of space. Because string theory is such a theory, some physicists believe the
holographic principle will lie at the heart of it.
Capturing multidimensional information on a flat surface
It turns out, as shown by Gerard 't Hooft in 1993 (and developed with much help from Leonard Susskind), that the amount of "information" a space contains may be related to the area of a region's boundary, not its volume. (In quantum field theory, everything can be viewed as information.) In short, the holographic principle amounts to the following two postulates:
• A gravitational theory describing a region of space is equivalent to a theory defined only on the surface area that encloses the region.
• The boundary of a region of space contains at most one piece of information per square Planck length.
In other words, the holographic principle says that everything that happens in a space can be explained in terms of information that’s somehow stored on the surface of that space. For example,
picture a 3-dimensional space that resides inside the 2-dimensional curled surface of a cylinder, as in this figure. You reside inside this space, but perhaps some sort of shadow or reflection
resides on the surface.
Now, here’s a key aspect of this situation that’s missing from our example: A shadow contains only your outline, but in ’t Hooft’s holographic principle, all of the information is retained.
Another example, and one that is perhaps clearer, is to picture yourself inside a large cube. Each wall of the cube is a giant television screen, which contains images of the objects inside the cube.
You could use the information contained on the 2-dimensional surface of the space to reconstruct the objects within the space.
Again, though, this example falls short because not all of the information is encoded. If you were to have objects blocking you in all six directions, your image wouldn’t be on any of the screens.
But in the holographic principle view of the universe, the information on the surface contains all the information that exists within the space.
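To get a feel for the second postulate, here is a back-of-the-envelope Python calculation of the maximum information on the surface of a one-meter sphere, taking one piece of information per square Planck length:

```python
import math

l_planck = 1.616e-35                 # Planck length in meters
r = 1.0                              # sphere radius in meters
area = 4 * math.pi * r**2
print(area / l_planck**2)            # ~4.8e70 pieces of information
```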
| {"url":"https://www.dummies.com/article/academics-the-arts/science/physics/string-theory-insight-from-the-holographic-principle-178049/","timestamp":"2024-11-10T16:25:35Z","content_type":"text/html","content_length":"74462","record_id":"<urn:uuid:77e1a613-bb61-49b8-8fb5-219d08cb48f6>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00077.warc.gz"} |
FUNCTION - Definition from the dictionary (Webster)
Func"tion (?), n. [L. functio, fr. fungi to perform, execute, akin to Skr. bhuj to enjoy, have the use of: cf. F. fonction. Cf. Defunct.] 1. The act of executing or performing any duty, office, or
calling; performance. “In the function of his public calling.” Swift.
2. (Physiol.) The appropriate action of any special organ or part of an animal or vegetable organism; as, the function of the heart or the limbs; the function of leaves, sap, roots, etc.; life is the
sum of the functions of the various organs and parts of the body.
3. The natural or assigned action of any power or faculty, as of the soul, or of the intellect; the exertion of an energy of some determinate kind.
“As the mind opens, and its functions spread.” Pope.
4. The course of action which peculiarly pertains to any public officer in church or state; the activity appropriate to any business or profession.
“Tradesmen . . . going about their functions.” Shak.
“The malady which made him incapable of performing his
regal functions.” Macaulay.
5. (Math.) A quantity so connected with another quantity, that if any alteration be made in the latter there will be a consequent alteration in the former. Each quantity is said to be a function of
the other. Thus, the circumference of a circle is a function of the diameter. If x be a symbol to which different numerical values can be assigned, such expressions as x^2, 3x, Log. x, and Sin. x, are
all functions of x.
6. (Eccl.) A religious ceremony, esp. one particularly impressive and elaborate.
“Every solemn ‘function' performed with the requirements of the liturgy.” Card. Wiseman.
7. A public or social ceremony or gathering; a festivity or entertainment, esp. one somewhat formal.
“This function, which is our chief social event.” W. D. Howells.
Algebraic function, a quantity whose connection with the variable is expressed by an equation that involves only the algebraic operations of addition, subtraction, multiplication, division, raising
to a given power, and extracting a given root; -- opposed to transcendental function. -- Arbitrary function. See under Arbitrary. -- Calculus of functions. See under Calculus. -- Carnot's function
(Thermo-dynamics), a relation between the amount of heat given off by a source of heat, and the work which can be done by it. It is approximately equal to the mechanical equivalent of the thermal
unit divided by the number expressing the temperature in degrees of the air thermometer, reckoned from its zero of expansion. -- Circular functions. See Inverse trigonometrical functions (below). --
Continuous function, a quantity that has no interruption in the continuity of its real values, as the variable changes between any specified limits. -- Discontinuous function. See under Discontinuous
. -- Elliptic functions, a large and important class of functions, so called because one of the forms expresses the relation of the arc of an ellipse to the straight lines connected therewith. --
Explicit function, a quantity directly expressed in terms of the independently varying quantity; thus, in the equations y = 6x^2, y = 10 - x^3, the quantity y is an explicit function of x. -- Implicit function, a quantity whose relation to the variable is expressed indirectly by an equation; thus, y in the equation x^2 + y^2 = 100 is an implicit function of x. -- Inverse trigonometrical functions, or Circular functions, the lengths of arcs relative to the sines, tangents, etc. Thus, AB is the arc whose sine is BD, and (if the length of BD is x) is written sin^-1 x, and so of the other lines. See Trigonometrical function (below). Other transcendental functions are the exponential functions, the elliptic functions, the gamma functions, the theta functions, etc. -- One-valued function, a quantity that has one, and only one, value for each value of the variable. -- Transcendental functions, a quantity whose connection with the variable cannot be expressed by algebraic operations; thus, y in the equation y = 10^x is a transcendental function of x. See Algebraic function (above). -- Trigonometrical function, a quantity whose relation to the variable is the same as that of a certain
straight line drawn in a circle whose radius is unity, to the length of a corresponding arc of the circle. Let AB be an arc in a circle whose radius OA is unity, let AC be a quadrant, and let OC, DB, and AF be drawn perpendicular to OA, and EB and CG parallel to OA, and let OB be produced to G and F. Then BD is the sine of the arc AB; OD or EB is the cosine, AF is the tangent, CG is the cotangent, OF is the secant, OG is the cosecant, AD is the versed sine, and CE is the coversed sine of the arc AB. If the length of AB be represented by x (OA being unity) then the lengths of these lines (OA being unity) are the trigonometrical functions of x, and are written sin x, cos x, tan x (or tang x), cot x, sec x, cosec x, versin x, coversin x. These quantities are also considered as functions of the angle BOA.
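In modern notation the less familiar lines in this construction reduce to simple formulas; a quick Python illustration (x here is the arc length on the unit circle, i.e., the angle in radians):

```python
import math

x = 0.7
print(1 - math.cos(x))   # versed sine (versin x)
print(1 - math.sin(x))   # coversed sine (coversin x)
```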
{ Func"tion (fŭ&nsmacr_;k"shŭn), Func"tion*ate (?), } v. i. To execute or perform a function; to transact one's regular or appointed business. | {"url":"https://www.archeus.ro/lingvistica/CautareWebster?query=FUNCTION","timestamp":"2024-11-09T20:19:35Z","content_type":"text/html","content_length":"83126","record_id":"<urn:uuid:6ec99fe9-c112-4a76-83bb-5cc03aded36e>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00761.warc.gz"} |
How do we use normalization?
Let $n \ge 2$ and $X = (X_1, \dots, X_n)$, and set $X’:= (X_1, \dots, X_{n-1})$.
While the Normalization Theorem explains how to simplify series towards normality, it is still awkward to use directly. The reason is the factoring out of monomials at various stages, such as after
using blow-up substitutions: if we want to do an induction on $\height_n(F)$ to establish some property of $F$, we can only apply the inductive hypothesis after factoring out this monomial. So,
unless this property is easily preserved under multiplication by monomials (most aren’t!), we need to extract a notion of height that decreases with every substitution, without having to factor our
A first possibility to do so is to simply define the new height $h_n(F)$ of $F \in \Ps{R}{X}$ by looking at the set of pairs $\left(X^r,G\right)$, with $r \in \NN^n$ and $G \in \Ps{R}{X}$, such that
$F = X^r G$, and to let $h_n(F)$ be the minimum of all $\height_n(G)$ of such pairs.
However, some of the substitutions will change monomials into series that aren’t even normal: for instance, the linear substitution $l_{2,1}$ changes the monomial $X_1$ into the non-normal series
What we need is to consider factorizations of $F$ into pairs that remain “acceptable” factorizations after any of our substitutions. The following is the “right” definition: for $F \in \Ps{R}{X}$, we
set $$\F(F):= \set{(H,G):\ H \in \Ps{R}{X’}, G \in \Ps{R}{X} \text{ and } F = HG}.$$
Let $(H,G) \in \F(F)$ and $s$ be one of the substitutions $t_\alpha$, $p^*_{i,q}$ or $\bl_\lambda^{i,j}$ with $\lambda \ne \infty$ if $j=n$. Show that $(sH,sG) \in \F(sF)$.
On the other hand, if $s$ is $l_{n,c}$ or one of the blow-up substitutions $\bl^{i,n}_\infty$, then the conclusion of the above exercise fails in general.
However, the Normalization Theorem shows that the former is only used on a series that has infinite order in $X_n$, while the latter is used only on series that are blow-up prepared and become normal
after applying it. Thus, in the latter situation, as long as we ensure that $F$ has a factorization $(H,G)$ with $H$ normal and $G$ blow-up prepared, the series $\bl_\infty F$ will be normal.
Therefore, we define $h_n(F)$ as follows: we first set $$\ord_n(F):= \min\set{\ord_{X_n}(G):\ \text{there exists } H \in \Ps{R}{X’} \text{ such that } (H,G) \in \F(F)}$$ and $$\F_1(F):= \set{(H,G) \
in \F(F):\ \ord_{X_n}(G) = \ord_n(F)} \ne \emptyset.$$
Second, we set $$\tp'_n(F):= \min\set{\tp_n(G):\ \text{there exists } H \in \Ps{R}{X'} \text{ such that } (H,G) \in \F_1(F)}$$ and $$\F_2(F):= \set{(H,G) \in \F_1(F):\ \tp_n(G) = \tp'_n(F)} \ne \emptyset.$$
Third, if $\tp’_n(F) = 0$, there exists $(H,G) \in \F_2(F)$ with $G$ Tschirnhausen prepared. In this situation, we need to not only normalize the coefficients of $G$, but also $H$. Therefore, for $
(H,G) \in \F_2(F)$, we set $\pn_n(H,G):= \infty$ if $G$ is not Tschirnhausen prepared and $\pn_n(H,G):= h_{n-1}\left(H\tilde G\right)$ if $G$ is Tschirnhausen prepared, where $\tilde G$ is obtained from $G$ as in the definition of $\pn_n(G)$. Then we set $$\pn'_n(F):= \min\set{\pn_n(H,G):\ (H,G) \in \F_2(F)}$$ and $$\F_3(F):= \set{(H,G) \in \F_2(F):\ \pn_n(H,G) = \pn'_n(F)} \ne \emptyset.$$
The remaining definitions are now straightforward: fourth, we set $$\bp’_n(F):= \min\set{\bp_n(G):\ \text{there exists } H \in \Ps{R}{X’} \text{ such that } (H,G) \in \F_3(F)}$$ and $$\F_4(F):= \set
{(H,G) \in \F_3(F):\ \bp_n(G) = \bp’_n(F)} \ne \emptyset;$$
and fifth, we set $$\rd’_n(F):= \min\set{\rd_n(G):\ \text{there exists } H \in \Ps{R}{X’} \text{ such that } (H,G) \in \F_4(F)}$$ and, finally, $$h_n(F):= \begin{cases} \left(\ord_n(F), \tp’_n(F), \
pn’_n(F), \bp’_n(F), \rd’_n(F)\right) &\text{if } F \text{ is not normal}, \\ 0 &\text{otherwise.} \end{cases}$$ Adapting the proof of the Normalization Theorem in the way described above, we
therefore obtain:
Normalization Corollary
Let $F \in \D_n$ be nonzero and not normal. Then $h_n(F) > 0$ and one of the following holds:
1. there exist $i \in \{2, \dots, n\}$ and $c \in \RR^{i-1}$ such that $\,h_n(l_{i,c} F) < h_n(F)$;
2. there exist $i \in \{2, \dots, n\}$ and $\alpha \in \D_{i-1}$ such that $\alpha(0) = 0$ and $\,h_n(t_\alpha F) < h_n(F)$;
3. there exist $i \in \{1, \dots, n-1\}$ and $q \in \NN$ such that $h_n\left(p^{\ast}_{i,q} F\right) < h_n(F)$ for $\ast \in \{+,-\}$;
4. there exist $1 \le i < j \le n$ such that, for $\lambda \in \RR \cup \{\infty\}$, we have $\,h_n\left(\bl^{i,j}_\lambda F\right) < h_n(F)$. $\qed$
Exercise 16
Prove the Normalization Corollary.
Finally, for later use we record the following consequences of the normalization algorithm.
Let $F \in \D_n$ be nonzero such that $h_n(F) > 0$, and let $\tau$ be a substitution as obtained from the algorithm such that $h_n(\tau F) < h_n(F)$.
1. Assume there are $i \in \{1, \dots, n\}$ and $k \in \NN$ such that $\tau \in \left\{p^{+}_{i,k}, p^{-}_{i,k}\right\}$. Then $i \lt n$ and, for $q \in \NN$, we have $\,h_n\left(\tau\left(X_i^q F\right)\right) < h_n(F)$.
2. Assume there are $1 \leq i < j \leq n$ and $\lambda \in \RR \cup \{\infty\}$ such that $\tau = \bl^{i,j}_\lambda$. Then for $q \in \NN$, we have $$h_n\left(\bl^{i,j}_{\infty}\left(X_j^q F\right)\right) < h_n(F)$$ and, for $\lambda \in \RR$, $$h_n\left(\bl^{i,j}_{\lambda}\left(X_i^q F\right)\right) < h_n(F).$$
Proof.
For part 1, we proceed by induction on $n$. If $n=1$, we never use power substitutions, so there is nothing to do; so we assume $n>1$ and part 1 holds for $n-1$. There are two possibilities: either
$F$ is Tschirnhausen prepared but not prenormalized, or $F$ is prenormalized but not blow-up prepared.
If $F$ is Tschirnhausen prepared but not prenormalized, the substitution $\tau$ must be used to lower $\pn’_n(F)$; in particular, we must have $i \le n-2$. Let $(H,G) \in \F_2(F)$; then $\left(X_i^qH,G\right) \in \F_2\left(X_i^q F\right)$ and, by the inductive hypothesis, $$h_{n-1}\left(\tau \left(X_i^q H \tilde G\right)\right) \lt h_{n-1}\left(H \tilde G\right),$$ which proves the lemma in this case.
If $F$ is prenormalized but not blow-up prepared, then $i < n$ and, for $(H,G) \in \F_3(F)$, the series $H$ is normal; hence, $X_i^qH$ is normal and $\left(X_i^q H,G\right) \in \F_3\left(X_i^q F\right)$, which implies the lemma in this case. For part 2, we also proceed by induction on $n$, assuming $n>1$ and the lemma holds for $n-1$. The algorithm implies that $F$ is blow-up prepared, so if $j \lt n$, the lemma holds by the inductive hypothesis. So we assume that $j=n$; since $\bl^{i,n}_\infty \left(X_n^q F\right) = X_n^q \cdot \bl^{i,n}_\infty F$ in this case, we obtain the first statement of the lemma. On the other hand, if $\lambda \in \RR$ then, since $i \lt n$, we have $\left(X_i^q H,G\right) \in \F_4\left(X_i^q F\right)$ for $(H,G) \in \F_4(F)$, so the lemma also follows in this case. $\qed$
| {"url":"http://ms.mcmaster.ca/~speisseg/blog/?p=2389","timestamp":"2024-11-02T21:22:46Z","content_type":"text/html","content_length":"71361","record_id":"<urn:uuid:17ab1d1c-3e5d-4c20-90d5-69d9a76ecf57>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00345.warc.gz"}
Evaluating Contact-Less Sensing and Fault Diagnosis Characteristics in Vibrating Thin Cantilever Beams with a MetGlas^® 2826MB Ribbon
Department of Industrial Design and Production Engineering, University of West Attica, 250 Thivon and P. Ralli, 12241 Athens, Greece
Author to whom correspondence should be addressed.
Submission received: 3 November 2023 / Revised: 29 November 2023 / Accepted: 4 January 2024 / Published: 6 January 2024
The contact-less sensing and fault diagnosis characteristics induced by fixing short Metglas^® 2826MB ribbons onto the surface of thin cantilever polymer beams are examined and statistically
evaluated in this study. Excitation of the beam’s free end generates magnetic flux from the vibrating ribbon (fixed near the clamp side), which, via a coil suspended above the ribbon surface, is
recorded as voltage with an oscilloscope. Cost-efficient design and operation are key objectives of this setup since only conventional equipment (coil, oscilloscope) is used, whereas filtering,
amplification and similar circuits are absent. A statistical framework for extending past findings on the relationship between spectral changes in voltage and fault occurrence is introduced.
Currently, different levels of beam excitation (within a frequency range) are shown to result in statistically different voltage spectral changes (frequency shifts). The principle is also valid for
loads (faults) of different magnitudes and/or locations on the beam for a given excitation. Testing with either various beam excitation frequencies or different loads (magnitude/locations) at a given
excitation demonstrates that voltage spectral changes are statistically mapped onto excitation levels or occurrences of distinct faults (loads). Thus, conventional beams may cost-efficiently acquire
contact-less sensing and fault diagnosis capabilities using limited hardware/equipment.
1. Introduction
The use of magnetoelastic materials in the design and production of contact-less sensors owes much to their characteristic property of exhibiting shape changes under external magnetic fields.
Conversely, these materials also emit magnetic flux when suffering shape deformation due to external loading, with flux dynamics related to those of the imposed loading [
]. Then, if magnetoelastic strips are, for instance, clamped on both sides and subjected to variable magnetic fields, the resulting shape changes will cause the strip to vibrate. The dynamics of such
vibration depend on the external variable magnetic field, the strip dimensions and its mass distribution. Thus, the accumulation on the strip surface of substances such as biological agents [ ], air pollutants [ ], volatile organic compounds [ ], H[2]O [ ] or H[2] [ ], which bind to a suitable surface coating, will change its mass distribution and consequently its vibration characteristics (resonant frequencies). Hence, shifted resonant frequencies indicate a significant concentration of substances on the strip and, accordingly, in the environment. This is the operational principle of magnetoelastic (magnetostrictive) sensors, which allow for monitoring dangerous
substances in hostile environments without requiring human presence on the field since the strip vibration signal can obviously be remotely recorded and assessed.
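To fix orders of magnitude, the first-order relation commonly quoted for uniformly distributed mass loading of a ribbon resonator (a textbook approximation, not a formula taken from the references above) is Δf ≈ −(f[0]/2)(Δm/m[0]), where f[0] is the unloaded resonant frequency and m[0] the ribbon mass; a mass uptake of 0.1% of the ribbon mass would thus shift a 100 kHz resonance down by roughly 50 Hz.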
Two main classes of sensor setups can be distinguished based on whether the magnetoelastic element is driven to resonance via electrical excitation used specifically for this purpose, or whether it
is simply vibrating because it receives mechanical excitation from its environment. The first class involves setups such as those presented in the previous paragraph, where an interrogation coil
(excited by a suitable electrical signal) is used to provide the magnetoelastic material with the magnetic flux required to drive it to resonance. Then, another coil placed above the vibrating ribbon
or film (usually referred to as the reception coil) picks up the emitted flux in a contact-less manner and transforms it into an electrical signal (voltage). The latter is then monitored for
frequency shifts indicating the accumulation of substances. This two-coil setup is referred to as active and was reviewed among others in [
]. The active setup is quite sensitive to mass accumulation on the magnetoelastic element’s surface, especially if the latter is selected based on its Young’s modulus and ∆E effect characteristics [
], its length-to-width ratio [
] or even its shape, with hourglass [
] or rhomboid forms [
] being more efficient.
The second class of sensor setups involves those not requiring interrogation coils to provide excitation to the magnetoelastic element since the latter is part of vibrating mechanical structures (or
machines with rotating parts) and, hence, vibrates due to its normal operation. Then, the vibrating magnetoelastic element produces magnetic flux, which is picked up by a reception coil in a
contact-less manner. Analyzing the signal’s spectral characteristics draws conclusions on the dynamics of the magnetoelastic part and, obviously, of the underlying mechanical structure [
]. This one-coil-only setup is referred to as the passive setup and was also reviewed in [
]. Note that, sometimes, setups involving magnetoelastic elements that are fixed as parts of a vibrating structure, with the intention of estimating spectral characteristics of the underlying
structure, do use two coils but with the interrogation coil fed by DC. This is done in order to induce a bias in the magnetic flux produced by the magnetoelastic element and seemingly achieve
better efficiency [
]. Due to the presence of the interrogation coil, such setups should really be included in the class of active setups. This remark also illustrates the fact that the magnetic flux produced by the
passive setups is notably weaker than that of the active ones, but the operational costs are lower, and the associated electrical or electronic circuits are far simpler. Even though signals obtained
with passive setups are noisy, faults/failures that influence the dynamics of the underlying structure (or machinery) are, indeed, detectable in the recorded voltage. In [
], it was shown that specific resonant frequencies of the structure’s dynamics, which were estimated using Finite Element Analysis, are present in the recorded voltage’s spectral characteristics.
Using a passive setup, fault diagnosis was achieved in polymer slabs with magnetoelastic ribbons integrated with 3D printing during slab manufacturing [
]. Fault diagnosis was also achieved for structures composed by bolting together such slabs, and specifically, for indicating loose connections between structural members [
]. Interestingly, this study demonstrated that although only one slab with integrated magnetoelastic material was used in the structure, more than one loose connection could be detected. Again,
cracks were diagnosed in metal cantilever beams involving magnetoelastic ribbons fixed on their surface [
] or metal rotating beams [
]. Most importantly, passive setups were proven adequate for obtaining sensing and diagnostic results both for short/sturdy and for long, thin/flexible polymer cantilever beams (see [
] and references therein). Pure sensing properties were also extensively evaluated for metal cantilever beams [
] and for plastic beams (in terms of bending frequencies) [
], although, strictly speaking, these works involved an interrogation coil inducing bias into the magnetic flux produced.
The current work aims at obtaining a novel two-fold extension of the preliminary results presented in [
]. First, the previously established sensing ability of the setup is consolidated with a statistical evaluation of the mapping between the level of excitation of the beam and the resulting frequency
shifts in the recorded voltage signal. Hence, sensing capabilities are obtained because the vibration level provided to the beam may now be deduced by monitoring frequency shifting patterns in the
voltage signal, with the uncertainty in the process quantified. Second, the fault diagnosis capabilities already shown in [
] are statistically consolidated: A mapping of complex-plane areas (containing poles linked to shifted voltage frequencies) onto faults (loads) of specific magnitude and position affecting the beam
is statistically established, so that the risk of wrong diagnosis is quantified. As already explained, the setup involves a thin, flexible cantilever beam clamped on one end and supported by an
exciter on the opposite end. The latter provides excitation as specified with a waveform generator. The magnetoelastic element (short ribbon of Metglas
2826MB) is attached at the clamped end (in contrast to [
]), with a low-cost reception coil suspended above the ribbon surface. The raw voltage induced in a contact-less manner is recorded using a conventional oscilloscope, without circuits for preliminary
conditioning/filtering or amplification. Thus, the objective of obtaining sensing and diagnostic capabilities out of a low complexity (in terms of hardware and operation) setup, by investing in the
optimization of the algorithmic framework used, is possible: It is shown that although the beam is only excited at the free end, the sensing of the beam excitation level and diagnosis of different
structural changes (magnitude/position on the beam) are both achievable. By virtue of the current results, conventional long, flexible beams equipped with magnetoelastic elements may be used: (i)
either for deducing the level of excitation (due to external forces, for instance) suffered by the beam (or any structure connected to it) or (ii) for detecting and localizing faults (loads) of
different magnitude affecting the beam for a given level of excitation.
2. Materials and Methods
The experimental setup is essentially that which was used in [
] and consists of a long, thin and flexible beam (with a length of 425 mm, width of 25 mm and thickness equal to 1 mm), an exciter (SMARTSHAKER
K2004E01), a 25 mm long ribbon of Metglas
2826MB magnetoelastic material and a low-cost Vishay IWAS reception coil (normally used for wireless charging). The beam is 3D-printed in FDM (fused deposition modeling) mode with a PET-G filament
and is used in a cantilever arrangement with one end clamped, as presented in
Figure 1
. The opposite (free) end is fixed to the exciter rod, thereby receiving the vibration of the user-defined profile. For this purpose, an external waveform generator (SIGLENT SDG 5122) is connected to
the exciter. The magnetoelastic ribbon is fixed on the beam surface near the clamp with glue, whereas the reception coil is fixed 5 mm above the ribbon, thus bearing no contact with it. The distance
of 5 mm was selected from sensitivity tests, as described in [
]. Magnetic flux created by the vibrating (along with the beam) ribbon induces voltage in the reception coil circuit, which is recorded with a digital oscilloscope. Based on the analysis of the
recorded voltage’s spectral characteristics, sensing and fault diagnosis results may be obtained. Especially in terms of fault diagnosis, this approach based on using only one signal is
representative of real-life applications (bridges, flexible structures and so on), because the excitation signal is often unavailable (or hard to measure) with only the structure’s response signal
being available.
2.1. Methodology for the Evaluation of the Sensing Characteristics of the Setup
The sensing principle of the proposed passive setup was previously examined in [
] using a first series of tests with voltage recorded while the beam was at rest and a second one with the beam excited with a triangular force at 160 Hz. Triangular waveforms were used instead of
pure sinusoids or pulses because they correspond better to real sources of vibrations such as those created by machine-reciprocating parts [
]. Comparison of the two series of recorded voltage signals led to two conclusions. First, the time histories of both voltage series were almost identical, as expected for passive setups [
]. Second, the spectral characteristics of the voltage with the beam under excitation were different than those from the voltage recorded with the beam at rest. Dominant frequencies at 1300–1370 Hz
showed consistent shifting with the beam under excitation, as presented in Figure 2b,c in [ ].
Based on this result, the current study systematically examines the link between the shifting of dominant frequencies in the recorded voltage and the excitation levels provided to the beam. For this
purpose (see also
Section 3.1
), the excitation level is set to as low as 10 Hz for the first series of tests, and keeps increasing for each test series, until reaching a value of 160 Hz. For each test series (and, hence,
excitation level), frequency bands with higher contributions to the voltage frequency content (those corresponding to prominent peaks in Fourier plots) are designated. In each such band, the values
of dominant frequency peaks form relevant groups, one per test series. Then, statistically comparing two (or more) of these groups (in the considered band) corresponds to statistically evaluating
whether the respective beam excitation levels cause similar shifting patterns in the recorded voltage’s spectral characteristics. Demonstrating that different excitation levels ultimately result in
statistically not similar groups of dominant frequency values in the considered band means that there exists a mapping between beam excitation levels and frequency shifts in the recorded voltage
signal. Then, conversely, one may use this mapping to estimate the level of beam excitation based on the dominant frequency shifting pattern exhibited.
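A minimal sketch of this peak-tracking step is given below (the sampling rate and band edges are illustrative assumptions, not values taken from the paper):

import numpy as np

def dominant_peak(signal, fs, band):
    # Frequency (Hz) of the largest spectral peak of `signal` inside `band`.
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

# Hypothetical usage: six recorded voltage signals per excitation level,
# tracked in the band around 1400 Hz.
# peaks_1400 = [dominant_peak(v, fs=50_000, band=(1300, 1500)) for v in recordings]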
Statistical evaluation of the similarity in two or more groups (intervals) of data may be formulated as a problem of deciding whether the data in both groups come from similar distributions or not.
In the current case, a hypothesis-testing problem may be formulated as follows:
H[0]: Frequency values in both (or all) groups follow a similar distribution.
H[1]: Frequency values in both (or all) groups follow different distributions. (1)
Here, H[0] is referred to as the null hypothesis and H[1] as the alternative hypothesis. Note that there is no available information on whether the data in the previously mentioned groups follow a normal distribution; hence, non-parametric statistical tests must be used to choose between the null and alternative hypotheses in (1) at a given risk level α (usually equal to 0.05). The latter is the probability of rejecting H[0] even though it is true. Such non-parametric statistical tests include the Kolmogorov–Smirnov two-sample test and the Kruskal–Wallis test [
]. As its name suggests, the Kolmogorov–Smirnov two-sample test is designed to address a hypothesis problem such as that in (1), when comparisons between only two groups are considered. The null
hypothesis is accepted (or rejected) based on the distance between the empirical distributions of data for each group estimated using the associated data. On the other hand, the Kruskal–Wallis test
may be used for two or more groups of data [
] and provides an answer to the question of whether data in the groups under consideration follow similar statistical distributions. If these distributions have similar shapes, then the
Kruskal–Wallis test accepts (or rejects) the null hypothesis based on whether the medians of all groups are sufficiently (in some statistical sense) close [
]. Furthermore, the Kruskal–Wallis test may be used with groups containing 5 or 6 data values, as shown in cases presented in [
], respectively. These characteristics motivated the choice of the Kruskal–Wallis test to address the hypothesis testing problem (1), which will be presented later (
Section 3.1
). The Kruskal–Wallis test is coded in most software packages like SPSS
or MATLAB
, with the relevant routines using data provided to instantly compute the probability value (referred to as the p-value), which evaluates the evidence against the null hypothesis. A lower p-value indicates more important evidence against accepting the null hypothesis. As will be explained in Section 3.1 and Section 3.2, the p-value may offer valuable information for quantifying the uncertainty (risk) involved when deciding on whether two or more frequency groups feature significant similarities: in other words, whether these groups potentially overlap in part or not.
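As a sketch of how the pairwise comparisons behind hypothesis problem (1) can be computed with standard routines (group contents are placeholders; the paper's own computations may differ in detail):

import numpy as np
from scipy.stats import kruskal

def pairwise_pvalues(groups):
    # Matrix of Kruskal-Wallis p-values for all pairs of groups.
    k = len(groups)
    p = np.full((k, k), np.nan)
    for i in range(k):
        for j in range(i + 1, k):
            p[i, j] = p[j, i] = kruskal(groups[i], groups[j]).pvalue
    return p

# Hypothetical usage: one group of six dominant-peak values per test series.
# p = pairwise_pvalues([peaks_s1, peaks_s2, peaks_s3])
# H[0] is rejected for a pair whenever p[i, j] < 0.05.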
2.2. Methodology for Evaluating Fault Diagnosis Characteristics of the Setup
The principle of fault diagnosis was also examined in [
] using several series of tests with faults affecting the beam. These were simulated as loads of two different magnitudes (EUR 1 cent designated with the suffix –1C or a bolt fixed with its nut on
the beam designated with –BN) potentially placed on the beam at three positions (A, B or C—see Figure 1). Table 1 presents the fault cases (also referred to as test scenarios) and their characteristics. For each series of tests, one load was placed at one position throughout the series with voltage recorded as
usual. The series also involved tests without load on the beam (designated with the prefix N). For each fault case, a prefix other than N designated the load position—so it should be A, B or C. The
associated voltage signals were recorded and analyzed, the bands of dominant frequencies were inspected and the patterns of the dominant peak shifting according to the load, its magnitude and its
position on the beam were studied. Once the impact of load position and magnitude was validated in specific frequency bands, a model-based fault diagnosis procedure was defined and applied to all
signals (obtained from testing according to the test scenarios in
Table 1
), with the following steps:
• The voltage signal considered was filtered and subsampled (details are given in Section 3.2 in [ ]);
• Discrete-time stochastic AutoRegressive (AR) time-series representations were identified on the signal resulting from step 1 (thus modeling its spectral characteristics), and the discrete-time AR
poles corresponding to specific bands of the dominant frequencies were computed and plotted on the z-plane;
• Using the AR poles from step 2, the corresponding continuous-time poles were computed and plotted on the s-plane, thus enabling the calculation of the natural frequencies ω[n] and damping ratios
ζ for the considered bands of dominant frequencies.
The reader is referred to [
] for specific details on signal filtering, identification of AR representations and their optimization for enhancing fault detectability. The application of the 3-step procedure allowed for mapping
areas of the s-plane onto each one of the test scenarios in
Table 1
. According to this mapping, the majority of poles from each test scenario would only be located inside their proper s-plane area, meaning that fault occurrence, localization and magnitude estimation
were in principle achievable.
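A minimal sketch of steps 2 and 3 is given below (the AR order, sampling rate and plain least-squares fit are illustrative simplifications; the identification and optimization actually used are those detailed in the reference cited above):

import numpy as np

def ar_poles(x, order, fs):
    # Least-squares AR fit x[t] = a1*x[t-1] + ... + ap*x[t-p] + e[t],
    # then the z-plane poles and their s-plane counterparts.
    X = np.column_stack([x[order - i - 1:-i - 1] for i in range(order)])
    a, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    z = np.roots(np.concatenate(([1.0], -a)))  # discrete-time AR poles
    s = np.log(z) * fs                         # continuous-time counterparts
    wn = np.abs(s)                             # natural frequencies (rad/s)
    zeta = -np.real(s) / np.abs(s)             # damping ratios
    return z, s, wn, zeta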
The current study addresses the remaining part of the problem, namely, the demonstration that each s-plane area (which contains mostly poles resulting from one of the specific fault scenarios in
Table 1
) may be statistically distinguishable from other neighboring areas with a specific level of confidence. This is crucial because, as described in [
] (and alluded to in the previous paragraph), it is hard to delimit (pole) areas in the s-plane corresponding to specific faults without using some kind of statistical inference to estimate the
level of accuracy of this process. In other terms, the risk of erroneously classifying poles with respect to the fault case they result from should be quantified. For instance, ref. [
] shows that in a few isolated cases, the poles from the signals corresponding to N-1C faults were in the s-plane areas designating A-1C faults. Again, an isolated case of poles from the signals
corresponding to A-1C faults inside the area designating the poles from B-1C fault scenarios may also be found in [
]. Obviously, such isolated cases do not cast doubt on the principle that fault diagnosis is, indeed, achievable using this setup. Nonetheless, it is important to quantify the risk of overlapping
s-plane areas potentially leading to wrong decisions when attempting to detect and isolate specific faults.
Interestingly, the current study also demonstrates that results similar to those obtained in continuous time (s-plane areas) may also be achievable in discrete time (z-plane areas). This could lead
to simplifying the previously presented 3-step procedure in terms of fault detection/classification, if the calculation of natural frequencies ω[n] and damping ratios ζ for the considered bands of dominant frequencies is not needed. This fact is particularly promising because in [
], no conclusive (or even indicative) evidence of z-plane pole areas being able to be mapped onto fault cases (test scenarios) was found.
For these purposes, the following statistical hypothesis problem may be formulated:
H[0]: Pole locations inside the considered groups follow a similar distribution.
H[1]: Pole locations inside the considered groups follow different distributions. (2)
Here, H[0] is the null hypothesis potentially corresponding to (neighboring) s- or z-plane areas with significant overlapping and H[1] is the alternative hypothesis designating substantially separable areas at a given risk level. As with the frequency data used to evaluate the setup sensing characteristics, there is no available knowledge of pole locations following a normal distribution. Then, non-parametric statistical tests should be used to decide between the null and alternative hypotheses in (2) at a given risk level α (usually equal to 0.05 or 5%), which is the probability of the examined pole areas not being considered as overlapping (or, in other terms, that H[0] is rejected) even though they are. The Kruskal–Wallis test may again be used to solve the hypothesis testing problem (2), as will be presented later (
Section 3.2).
3. Results and Discussion
The proposed setup is evaluated in the current section with extended testing and statistical evaluation of results using the methodology presented in
Section 2
. Sensing characteristics are assessed in
Section 3.1
by statistically evaluating the mapping between excitation frequency levels and experimentally obtained shifts in the voltage signal’s dominant frequency peaks inside the principal frequency bands.
The fault diagnosis characteristics are assessed in
Section 3.2
by statistically evaluating the connection between the recorded signal’s AR pole locations in the s-plane or z-plane and the occurrence of specific faults (type, magnitude and location as in
Table 1).
3.1. Results of the Statistical Evaluation of Sensing Characteristics
The testing procedure involved six experiments for each excitation level, as presented in
Table 2
. The reception coil was placed at a distance of 5 mm above the ribbon, following the relevant testing performed in [
], to define an optimal value for that distance. As seen in
Table 2
, the beam was excited with frequencies starting at 10 Hz for the first series of six experiments and finishing at 160 Hz for the last series. In general, at each test series, the excitation level
increased by 15 Hz with respect to the previous one. The only exception to this rule regarded the test series with the beam excited at 25 Hz, which presented very similar results to those obtained
when the beam was under an excitation of 10 Hz (see also the relevant comment later on) and was therefore omitted. Note also that a test series of six experiments with the beam at rest, namely,
series zero, was included for comparison purposes. The voltage signals were recorded and examined with respect to their frequency content. In
Figure 2
, the Fourier plot of one representative signal from each test series shows that the bands of the dominant frequencies are consistently situated (in a decreasing order of magnitude) around 316 KHz,
1400 Hz, 1800 Hz and 2100 Hz. For each of these four frequency bands, the values of the dominant peaks are collected for each test series (six values per series), and the respective groups are
plotted (in the form of error bars) in
Figure 3, Figure 4, Figure 5 and Figure 6.
An initial remark is related to the groups resulting from the excitations at 10 Hz and 40 Hz, whose upper and lower values, respectively, are very close in all four frequency bands. This is
indicative of the fact that an intermediate excitation level at 25 Hz was, indeed, not necessary since its group exhibited considerable overlapping with the neighboring groups. Then, it is easily noted that
all groups up to 100 Hz feature an increasing trend with no apparent overlapping in frequency bands around 1400, 1800 and 2100 Hz. This, in turn, means that by focusing on the band around 1400 Hz and
examining the frequency groups shown in
Figure 4, Figure 5 and Figure 6
, a one-to-one mapping may be established between the beam excitation levels and the respective frequency peak groups. Hence, if a (voltage) signal’s frequency peak around, for instance, 1400 Hz is
available, then one may deduce the excitation level of the beam simply using the previously mentioned mapping, provided that this level does not exceed 100 Hz.
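One simple way to exploit such a mapping (a sketch only; any interval-based decision rule would serve) is to assign a newly observed peak to the excitation level whose group median lies closest:

import numpy as np

def classify_excitation(peak_hz, groups_by_level):
    # Excitation level (Hz) whose group median is nearest to the observed peak.
    return min(groups_by_level,
               key=lambda lvl: abs(np.median(groups_by_level[lvl]) - peak_hz))

# Hypothetical usage: groups_by_level = {10: peaks_s1, 40: peaks_s2, ...}
# level = classify_excitation(1402.0, groups_by_level)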
This mapping becomes less consistent for excitation levels above 100 Hz, with
Figure 4
suggesting that the levels of 100 Hz are distinguishable from those of 115 Hz in the 1400 Hz band, or that the levels of 115 Hz are distinguishable from those of 130 Hz in the 316 KHz band (
Figure 3
). In any case, a measure of the probability that two or more frequency groups are mutually distinguishable in each frequency band is required. For this purpose, the Kruskal–Wallis test is used
between all possible pairs of groups to solve the hypothesis testing problem (1). The results, in terms of p-values, are given in Table 3, Table 4, Table 5 and Table 6 for the frequency bands at 316 KHz, 1400 Hz, 1800 Hz and 2100 Hz, respectively.
In these tables, the intersecting cell of the i-th line and the j-th column presents the p-value obtained using the Kruskal–Wallis test for the groups indicated in the respective line and column. Standard (non-shaded) cells correspond to cases where H[0] (groups with a similar distribution of data) is rejected in favor of H[1] (groups with a different distribution of data) at a risk level equal to α = 0.05, as explained in Section 2.1. In such cases, the two groups considered are mutually distinguishable at the indicated risk level, meaning that the previously mentioned mapping is valid, again, at the indicated risk level. On the other hand, the shaded cells correspond to cases where H[0] (groups with a similar distribution of data) is accepted for the pair of groups under consideration, at a risk level equal to α = 0.05. Then, the two considered excitation frequencies would result in
significantly overlapping groups of frequency peaks in the band of interest, meaning that no exclusive one-to-one mapping is possible. An examination of
Table 3, Table 4, Table 5 and Table 6
basically validates the conclusions drawn by visual inspection of (the corresponding)
Figure 3, Figure 4, Figure 5 and Figure 6
. The frequency band around 1400 Hz provides statistically non-overlapping groups at a risk level of 0.05 (
Table 4
) for beam excitation frequencies up to 115 Hz. Then, the excitation levels of 115 Hz and 130 Hz create overlapping groups (at a risk level of 0.05), since the p-value computed with the
Kruskal–Wallis test is 0.055 (see the intersection between the ninth line and the tenth column), or just larger than 0.05, which leads to accepting the null hypothesis. At the same time,
p-values just larger than the risk level indicate a statistical tendency of being close to rejecting H[0]. In other words, even though the 1400 Hz band does not actually allow for distinguishing between the excitation levels of 115 and 130 Hz, it would be relevant to look at other frequency bands for the null hypothesis being rejected when comparing the groups associated with the levels of 115 and 130 Hz. The band at 316 KHz offers this possibility since the p-value (related to comparing groups created by the excitation levels of 115 Hz and 130 Hz—Table 3) computed with the Kruskal–Wallis test is 0.0077 (see the intersection between the ninth line and the tenth column). However, in the band around 316 KHz, all groups created by the excitation levels from 130 Hz upward are statistically similar, as indicated by a p-value of the Kruskal–Wallis test statistic equal to 0.2738 > 0.05 when comparing all three groups corresponding to the excitation levels of 130, 145 and 160 Hz. Then, distinguishing between the
excitation levels of 115 and 130 Hz in the 316 KHz band may only be achieved in conjunction with p-values from the band around 2100 Hz in
Table 6
. The latter indicates that the group resulting from the beam excited at 130 Hz cannot be mistaken for that associated with the excitation of 145 Hz at the considered risk level of 0.05. Thus,
frequency peak groups resulting from excitation levels up to 145 Hz may be distinguished in a one-to-one comparison using the proposed setup and methodology.
Figure 3, Figure 4, Figure 5 and Figure 6 and Table 3, Table 4, Table 5 and Table 6
suggest that a meaningful solution for distinguishing groups associated with excitation levels up to 145 Hz from a group associated with the excitation levels of 160 Hz is not available in all four
bands considered.
A final remark regards these results with respect to the passive excitation principle used in this setup. Using mechanical excitation for the beam (and, hence, the magnetoelastic ribbon) without, for
instance, some form of DC bias from a second coil [
] mainly allows for inducing sensing capabilities in standard conventional beams in a cost-effective manner. On the other hand, the recorded signal is quite noisy, meaning that more (and surely
non-trivial) algorithmic effort has to be invested in rejecting noise effects and obtaining results on higher-order modes of vibrations and/or larger beam deflections.
3.2. Results of the Statistical Evaluation of Fault Diagnosis Characteristics
The testing procedure involved six experiments per fault case (or test scenario as referred to) in
Table 1
. The 48 voltage signals resulting from the testing were also used in [
] along with the three-step procedure outlined in
Section 2.2
, to deliver initially the discrete-time AR poles inside the bands of the dominant frequencies (around 1350–1400 Hz in this case) and then their corresponding continuous-time counterparts. This
yielded the pole areas in
Figure 7
a,b at the z-plane and the s-plane, respectively, for the –1C fault cases.
Figure 8
a,b presents the pole areas at the z-plane and the s-plane, respectively, for the –BN fault cases. From these figures, it seems quite hard to visually distinguish groups of poles corresponding to
specific pole scenarios in discrete time (z-plane), whereas it is relatively easier to distinguish these groups in continuous time (s-plane). But even when the s-plane is considered, poles
corresponding to the N-1C beam configuration may be found in the area involving poles corresponding to the A-1C-affected beam (
Figure 7
b). In other words, the corresponding pole groups seem to effectively (although slightly) overlap. The same is obvious for the A-1C and B-1C poles, as well as the poles from the B-BN- and
C-BN-affected beams (
Figure 8
b). Hence, although, in principle, –1C or –BN faults of all magnitudes and positions on the beam may be distinguished from each other, a statistical assessment of (overlapping) pole groups would be
desirable. This assessment would quantify the inherent uncertainty when fault diagnosis (detection, isolation and magnitude estimation) is carried out by examining regions where these pole groups are
located on the s-plane.
The statistical assessment of whether the regions of continuous-time pole groups corresponding to fault cases (test scenarios) are distinguishable between them may be carried out by formulating the
statistical hypothesis problem (2), as described in
Section 2.2
. The Kruskal–Wallis test is again used for pairs of pole groups and addresses the hypothesis testing problem at a risk level α equal to 0.05. Only imaginary parts of the poles for each group are
considered as pole coordinates since pole regions are delimited with respect to their imaginary part in
Figure 7
b and
Figure 8
b. The detection of fault occurrence for fault types –1C and –BN is examined by forming
Table 7 and Table 8, respectively. The intersecting cell of the i-th line and the j-th column presents the p-value obtained using the Kruskal–Wallis test for the fault cases indicated in the respective line and column. The shaded cells correspond to significantly overlapping groups of poles. In Table 7, H[0] is systematically rejected for any comparison of N-1C against the A, B or C-1C fault cases, at α = 0.05, since in all such cases, the p-value is lower than α. In other words, the (imaginary parts of the) poles from the N-1C configuration do not follow a similar distribution as those from the A, B or C-1C configurations at α = 0.05, and faults may be systematically detected. Again, in Table 7, H[0] is systematically rejected for any comparison between the A, B or C-1C fault cases at α = 0.05. Hence, all –1C fault cases have different impacts on the pole (imaginary) locations, meaning that all –1C faults may be identified. Note, however, that the p-value for comparing the A-1C and B-1C configurations is notably higher (see the intersection between the third line and the fourth column), although smaller than α = 0.05. This is related to the
overlap between the two groups, as seen in
Figure 7
b, which was commented upon earlier on. The same conclusions in terms of detection and identification may be drawn for the –BN configurations with a careful examination of
Table 8
. Again, comparing the B-BN and C-BN configurations results in a higher-than-usual
p-value (although again smaller than α = 0.05), which is related to the slight overlap of groups designating the B-BN and C-BN fault cases in
Figure 8
b. Lastly,
Table 9
allows for addressing the issue of distinguishing between the fault configurations –1C (small fault magnitude) and –BN (large fault magnitude), as defined in
Table 1
. Again, the shaded cells correspond to significantly overlapping groups of poles. In general, the
p-values are always smaller than α = 0.05, meaning that H[0] is systematically rejected at the risk level α = 0.05 for all comparisons between the –1C and –BN configurations. Then, no –BN fault may be mistaken for a –1C fault at the designated risk level.
Obviously, these results are valid for cases of a single fault (load) occurrence at a time.
Note that formulating the statistical hypothesis problem (2), as described in
Section 2.2
, proved again to be highly beneficial for discrete-time pole groups in the z-plane. As noted before, in [
], it was hard to delimit specific z-plane pole areas associated with the fault cases of
Table 1
using a simple visual inspection. In the current study, the Kruskal–Wallis test is again used for pairs of discrete-time pole groups (see
Figure 7
a and
Figure 8
a) in order to address the hypothesis testing problem at a risk level α equal to 0.05. For each such pole, its angle of rotation with respect to the origin is considered, and specifically, the ratio
of its imaginary over its real part (effectively corresponding to the tangent of that angle). The detection of fault occurrence for the fault types –1C and –BN is examined by forming
Table 10 and Table 11, respectively. As in the continuous-time case, the intersecting cell of the i-th line and the j-th column presents the p-value obtained using the Kruskal–Wallis test for the fault cases indicated in the respective line and column. The shaded cells correspond to significantly overlapping groups of poles. The results are equivalent to those obtained for the s-plane pole groups, with H[0] systematically rejected for any comparison of N-1C against the A, B or C-1C fault cases, at α = 0.05, as shown in
Table 10.
The same comments hold for results presented in
Table 11
, with
H[0] systematically rejected for any comparison of N-BN against the A, B or C-BN fault cases, at α = 0.05. Again, one may distinguish between the –1C (small fault) and –BN (large fault) configurations of Table 1 using rotation angles of the discrete-time poles in the z-plane to form
Table 12
. As in the continuous-time case, the
p-values are always smaller than α = 0.05, meaning that H[0] is systematically rejected at the risk level α = 0.05 for all comparisons between the –1C and –BN configurations. Then, no –BN fault may be mistaken for a –1C fault at the designated risk level, even
using discrete-time AR poles. As in the continuous-time case, these results are valid for cases of a single fault (load) occurrence at a time.
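The two pole coordinates fed to the Kruskal–Wallis comparisons can be sketched as follows (a simplified illustration of the features described above, with the s-plane poles obtained from the z-plane poles as in the identification step):

import numpy as np

def pole_features(z_poles, fs):
    # s-plane feature: imaginary part; z-plane feature: imaginary-over-real
    # ratio, i.e., the tangent of the pole's rotation angle about the origin.
    s_poles = np.log(z_poles) * fs
    return np.imag(s_poles), np.imag(z_poles) / np.real(z_poles)

# Each fault case yields one group of feature values; pairs of groups are then
# compared with the Kruskal-Wallis test exactly as in the sensing evaluation.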
A remark may at this point be made on having relatively few data values in each group when the comparisons between two or more groups are carried out using the Kruskal–Wallis test. As noted in
Section 2
, the Kruskal–Wallis test may be used with groups of five or more data values each [
], with relevant examples in [
], ch. 8, and in [
], ch. 25. It is clear that in the current study, this condition is fulfilled. Nonetheless, it would be advisable to use more data values per group since this would render the Kruskal–Wallis test
more powerful. Currently, the test is more conservative rather than powerful in the sense that it is more reluctant to reject the null hypothesis
at the designated risk level. This, in turn, means that the results presented both for sensing and for fault diagnosis purposes are rather conservative. In terms of evaluating sensing
characteristics, for instance, if more data per group were available, then comparisons between certain groups at the band around 1400 Hz (which indicated overlapping groups due to the
p-values being marginally higher than α = 0.05) could yield results toward rejecting H[0]
, thus enabling a distinction between the groups considered.
A final remark is related to these sensing and fault diagnosis results being obtained for a long, thin and flexible beam. The same basic setup with a model-based algorithmic framework was applied to
shorter, thicker and quite more rigid polymer beams in the previous works [
] and (in part) [
], with similarly successful fault diagnosis results. Nonetheless, sensing properties were not comprehensively evaluated in these studies since they both aimed at obtaining fault diagnosis results.
Moreover, no experiments with steel structures have been carried out yet. Specifically, for such cases, the applicability of the proposed setup and algorithmic analysis in terms of sensing and fault
diagnosis has yet to be tested, even though the currently obtained results are promising.
4. Conclusions
A thin cantilever polymer beam with a short Metglas
2826MB ribbon attached to its surface was statistically evaluated in terms of contact-less sensing and fault diagnosis characteristics. The vibration of the beam’s free end creates the emission of
magnetic flux by the Metglas
ribbon (fixed on the opposite end), which induces a voltage in a coil suspended over its surface. This voltage is, hence, obtained in a contact-less manner and is recorded with an oscilloscope. The
voltage signal analysis showed that shifting of the dominant frequencies may result either from changes in the excitation frequency provided to the beam or from faults (loads) of various magnitudes
and positions on the beam when the latter vibrates at a given frequency. A statistical framework based on the formulation of statistical hypothesis problems was introduced to evaluate such
frequency-shifting characteristics, which led to two main results. First, a mapping between the vibration frequency level of the beam and the resulting frequency shifts observed in the recorded
voltage was statistically established. Hence, sensing properties were obtained because the vibration level of the beam may be deduced by monitoring frequency shifting patterns in the voltage signal,
with the uncertainty in the process quantified. Second, s-plane or z-plane areas containing poles corresponding to shifted frequencies of the voltage signal (modeled as per
Section 2.2
) were statistically linked to faults (loads) of specific magnitude and position affecting the beam. Hence, fault diagnosis properties were obtained because the occurrence, magnitude and position of
faults (loads) on the beam may be deduced by checking the s-plane or z-plane pole locations, with the uncertainty in the process quantified. Future work will involve validating the setup’s sensing
and fault diagnosis performance for low excitation frequency and/or high beam deflection and/or cases of multiple fault occurrence. It would be equally useful to evaluate the impact of integrating
multiple sensing sets (ribbon and coils) on the beam.
Author Contributions
Conceptualization, D.D. and R.-G.S.; methodology, D.D.; software, A.D. and R.-G.S.; validation, A.D., R.-G.S. and D.D.; formal analysis, D.D.; investigation, A.D. and R.-G.S.; resources, D.D.; data
curation, A.D., R.-G.S. and D.D.; writing—original draft preparation, D.D.; visualization, D.D., A.D. and R.-G.S.; supervision, D.D. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Data are available on a personal need basis by contacting the corresponding author of this published article and upon agreement with the authors.
Conflicts of Interest
The authors declare no conflict of interest.
1. Le Bras, Y.; Greneche, J.M. Magneto-elastic resonance: Principles, modeling and applications. In Resonance; IntechOpen: London, UK, 2017. [Google Scholar] [CrossRef]
2. Hristoforou, E.; Ktena, A. Magnetostriction and magnetostrictive materials for sensing applications. J. Magn. Magn. Mater. 2007, 316, 372–378. [Google Scholar] [CrossRef]
3. Grimes, C.A.; Roy, S.C.; Cai, Q. Theory, instrumentation and applications of magnetoelastic resonance sensors: A review. Sensors 2011, 11, 2809–2844. [Google Scholar] [CrossRef] [PubMed]
4. Gonzalez, J.M. Magnetoelasticity and Magnetostriction for Implementing Biomedical Sensors. In Engineering Biomaterials for Neural Applications; López-Dolado, E., Serrano, M.C., Eds.; Springer:
Cham, Switzerland, 2022; pp. 127–147. [Google Scholar] [CrossRef]
5. Dimogianopoulos, D.G. Sensors and energy harvesters utilizing the magnetoelastic principle: Review of characteristic applications and patents. Recent Pat. Elec. Eng. 2012, 5, 103–119. [Google
Scholar] [CrossRef]
6. Ren, L.; Yu, K.; Tan, Y. Applications and advances of magnetoelastic sensors in biomedical engineering: A review. Materials 2019, 12, 1135. [Google Scholar] [CrossRef] [PubMed]
7. Grimes, C.A.; Jain, M.K.; Singh, R.S.; Cai, Q.; Mason, A.; Takahata, K.; Gianchandani, Y. Magnetoelastic microsensors for environmental monitoring. In Proceedings of the 14th IEEE International
Conference on Micro Electro Mechanical Systems, Interlaken, Switzerland, 25 January 2001; pp. 278–281. [Google Scholar] [CrossRef]
8. Baimpos, T.; Boutikos, P.; Nikolakakis, V.; Kouzoudis, D. A polymer-Metglas sensor used to detect volatile organic compounds. Sens. Actuator A Phys. 2010, 158, 249–253. [Google Scholar] [CrossRef]
9. Atalay, S.; Izgi, T.; Kolat, V.S.; Erdemoglu, S.; Orhan, O.I. Magnetoelastic humidity sensors with TiO[2] nanotube sensing layers. Sensors 2020, 20, 425. [Google Scholar] [CrossRef] [PubMed]
10. Samourgkanidis, G.; Nikolaou, P.; Gkovosdis-Louvaris, A.; Sakellis, E.; Blana, I.M.; Topoglidis, E. Hemin-modified SnO[2]/Metglas electrodes for the simultaneous electrochemical and
magnetoelastic sensing of H[2]O[2]. Coatings 2018, 8, 284. [Google Scholar] [CrossRef]
11. Sagasti, A.; Gutiérrez, J.; Lasheras, A.; Barandiarán, J.M. Size Dependence of the Magnetoelastic Properties of Metallic Glasses for Actuation Applications. Sensors 2019, 19, 4296. [Google
Scholar] [CrossRef] [PubMed]
12. Skinner, W.S.; Zhang, S.; Guldberg, R.E.; Ong, K.G. Magnetoelastic Sensor Optimization for Improving Mass Monitoring. Sensors 2022, 22, 827. [Google Scholar] [CrossRef] [PubMed]
13. Atalay, S.; Inan, O.O.; Kolat, V.S.; Izgi, T. Influence of Ferromagnetic Ribbon Width on Q Factor and Magnetoelastic Resonance Frequency. Acta Phys. Pol. A. 2021, 139, 159–163. [Google Scholar] [CrossRef]
14. Ren, L.; Cong, M.; Tan, Y. An Hourglass-Shaped Wireless and Passive Magnetoelastic Sensor with an Improved Frequency Sensitivity for Remote Strain Measurements. Sensors 2020, 20, 359. [Google
Scholar] [CrossRef] [PubMed]
15. Saiz, P.G.; Gandia, D.; Lasheras, A.; Sagasti, A.; Quintana, I.; Fdez-Gubieda, M.L.; Gutiérrez, J.; Arriortua, M.I.; Lopes, A.C. Enhanced mass sensitivity in novel magnetoelastic resonators
geometries for advanced detection systems. Sens. Actuators B Chem. 2019, 296, 126612. [Google Scholar] [CrossRef]
16. Saiz, P.G.; Porro, J.M.; Lasheras, A.; Fernández de Luis, R.; Quintana, I.; Arriortua, M.I.; Lopes, A.C. Influence of the magnetic domain structure in the mass sensitivity of magnetoelastic
sensors with different geometries. J. Alloys Compd. 2021, 863, 158555. [Google Scholar] [CrossRef]
17. Dimogianopoulos, D.G.; Charitidis, P.J.; Mouzakis, D.E. Inducing damage diagnosis capabilities in carbon fiber reinforced polymer composites by magnetoelastic sensor integration via 3D printing.
Appl. Sci. 2020, 10, 1029. [Google Scholar] [CrossRef]
18. Dimogianopoulos, D.G.; Mouzakis, D.E. Nondestructive Contactless Monitoring of Damage in Joints between Composite Structural Components Incorporating Sensing Elements via 3D-Printing. Appl. Sci.
2021, 11, 3230. [Google Scholar] [CrossRef]
19. Samourgkanidis, G.; Kouzoudis, D. A pattern matching identification method of cracks on cantilever beams through their bending modes measured by magnetoelastic sensors. Theor. Appl. Fract. Mech.
2019, 103, 102266. [Google Scholar] [CrossRef]
20. Samourgkanidis, G.; Kouzoudis, D. Characterization of magnetoelastic ribbons as vibration sensors based on the measured natural frequencies of a cantilever beam. Sens. Actuator A Phys. 2020, 301,
111711. [Google Scholar] [CrossRef]
21. Samourgkanidis, G.; Kouzoudis, D. Magnetoelastic Ribbons as Vibration Sensors for Real-Time Health Monitoring of Rotating Metal Beams. Sensors 2021, 21, 8122. [Google Scholar] [CrossRef]
22. Tapeinos, C.I.; Kamitsou, M.D.; Dassios, K.G.; Kouzoudis, D.; Christogerou, A.; Samourgkanidis, G. Contactless and Vibration-Based Damage Detection in Rectangular Cement Beams Using
Magnetoelastic Ribbon Sensors. Sensors 2023, 23, 5453. [Google Scholar] [CrossRef] [PubMed]
23. Kouzoudis, D.; Samourgkanidis, G.; Tapeinos, C.I. Contactless Detection of Natural Bending Frequencies using Embedded Metallic-Glass Ribbons inside Plastic Beams made of 3-D Printing. Recent
Prog. Mater. 2021, 3, 010. [Google Scholar] [CrossRef]
24. Sultana, R.G.; Dimogianopoulos, D. Contact-Less Sensing and Fault Detection/Localization in Thin Cantilever Beams via Magnetoelastic Film Integration and AR Model-based Methodology. In
Model-Based and Data-Driven Methods for Advanced Control and Diagnosis; Theilliol, D., Korbicz, J., Kacprzyk, J., Eds.; ACD 2022. Studies in Systems, Decision and Control; Springer: Cham,
Switzerland, 2023; Volume 467. [Google Scholar] [CrossRef]
25. Zar, J.H. Biostatistical Analysis, 5th ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2010. [Google Scholar]
26. Hoffman, J.I.E. Basic Biostatistics for Medical and Biomedical Practitioners, 2nd ed.; Elsevier Science: Amsterdam, The Netherlands, 2019; ISBN 978-0-12-817084-7. [Google Scholar] [CrossRef]
27. Getting Started with the Kruskal Wallis-Test|UVA Library. Available online: https://library.virginia.edu/data/articles/getting-started-with-the-kruskal-wallis-test (accessed on 24 September
Figure 1. Experimental setup indicating a load (coin) at position A, other load positions B and C (middle and left) and the arrangement of the reception coil and magnetoelastic film.
Table 1. Fault cases (test scenarios) and their characteristics.
Fault Case (Test Scenario) Load Used Load Mass (g) Load Position (From Free End)
N-1C No load 0 n/a
A-1C EUR 1 cent 2.3 A (35 mm)
B-1C EUR 1 cent 2.3 B (185 mm)
C-1C EUR 1 cent 2.3 C (360 mm)
N-BN No load 0 n/a
A-BN Bolt + nut 6 A (35 mm)
B-BN Bolt + nut 6 B (185 mm)
C-BN Bolt + nut 6 C (360 mm)
Table 2. Test series with the corresponding beam excitation frequencies and numbers of experiments.
Excitation Frequency (Hz) Test Series Number of Experiments
0 s0 6
10 s1 6
40 s2 6
55 s3 6
70 s4 6
85 s5 6
100 s6 6
115 s7 6
130 s8 6
145 s9 6
160 s10 6
Table 3. p-values for the comparison of two groups with the Kruskal–Wallis test in the band of 316 KHz.
Excitation 0 Hz 10 Hz 40 Hz 55 Hz 70 Hz 85 Hz 100 Hz 115 Hz 130 Hz 145 Hz 160 Hz
0 Hz N/A 0.0032 0.0032 0.0038 0.0036 0.0036 0.0032 0.0032 0.0036 0.0036 0.0032
10 Hz 0.0032 N/A 0.0027 0.0032 0.0031 0.0031 0.0028 0.0027 0.0031 0.0031 0.0027
40 Hz 0.0032 0.0027 N/A 0.0032 0.0031 0.0031 0.0028 0.0027 0.0031 0.0031 0.0027
55 Hz 0.0038 0.0032 0.0032 N/A 0.0037 0.0037 0.0033 0.0032 0.0037 0.0037 0.0032
70 Hz 0.0036 0.0031 0.0031 0.0037 N/A 0.0086 0.0032 0.0031 0.0036 0.0036 0.0031
85 Hz 0.0036 0.0031 0.0031 0.0037 0.0086 N/A 0.3880 0.4844 0.0086 0.0086 0.0031
100 Hz 0.0032 0.0028 0.0028 0.0033 0.0032 0.3880 N/A 0.8474 0.0203 0.0203 0.0045
115 Hz 0.0032 0.0027 0.0027 0.0032 0.0031 0.4844 0.8474 N/A 0.0077 0.0077 0.0027
130 Hz 0.0036 0.0031 0.0031 0.0037 0.0036 0.0086 0.0203 0.0077 N/A 1.0000 0.1620
145 Hz 0.0036 0.0031 0.0031 0.0037 0.0036 0.0086 0.0203 0.0077 1.0000 N/A 0.1620
160 Hz 0.0032 0.0027 0.0027 0.0032 0.0031 0.0031 0.0045 0.0027 0.1620 0.1620 N/A
Table 4. p-values for the comparison of two groups with the Kruskal–Wallis test in the band of 1400 Hz.
Excitation 0 Hz 10 Hz 40 Hz 55 Hz 70 Hz 85 Hz 100 Hz 115 Hz 130 Hz 145 Hz 160 Hz
0 Hz N/A 0.0038 0.0036 0.0037 0.0036 0.0032 0.0032 0.0020 0.0033 0.0027 0.0032
10 Hz 0.0038 N/A 0.0037 0.0038 0.0037 0.0032 0.0032 0.0021 0.0034 0.0028 0.0032
40 Hz 0.0036 0.0037 N/A 0.0036 0.0036 0.0031 0.0031 0.0020 0.0033 0.0026 0.0031
55 Hz 0.0037 0.0038 0.0036 N/A 0.0036 0.0032 0.0032 0.0020 0.0033 0.0027 0.0032
70 Hz 0.0036 0.0037 0.0036 0.0036 N/A 0.0031 0.0031 0.0020 0.0033 0.0026 0.0031
85 Hz 0.0032 0.0032 0.0031 0.0032 0.0031 N/A 0.0027 0.0017 0.0029 0.0023 0.0027
100 Hz 0.0032 0.0032 0.0031 0.0032 0.0031 0.0027 N/A 0.0190 0.0105 0.0037 0.0144
115 Hz 0.0020 0.0021 0.0020 0.0020 0.0020 0.0017 0.0190 N/A 0.0555 0.0051 0.1380
130 Hz 0.0033 0.0034 0.0033 0.0033 0.0033 0.0029 0.0105 0.0555 N/A 0.2410 0.5751
145 Hz 0.0027 0.0028 0.0026 0.0027 0.0026 0.0023 0.0037 0.0051 0.2410 N/A 0.0926
160 Hz 0.0032 0.0032 0.0031 0.0032 0.0031 0.0027 0.0144 0.1380 0.5751 0.0926 N/A
Table 5. p-values for the comparison of two groups with the Kruskal–Wallis test in the band of 1800 Hz.
Excitation 0 Hz 10 Hz 40 Hz 55 Hz 70 Hz 85 Hz 100 Hz 115 Hz 130 Hz 145 Hz 160 Hz
0 Hz N/A 0.0037 0.0036 0.0038 0.0036 0.0036 0.0035 0.0032 0.0020 0.0033 0.0027
10 Hz 0.0037 N/A 0.0036 0.0038 0.0036 0.0036 0.0035 0.0032 0.0020 0.0033 0.0027
40 Hz 0.0036 0.0036 N/A 0.0037 0.0036 0.0036 0.0035 0.0031 0.0020 0.0033 0.0026
55 Hz 0.0038 0.0038 0.0037 N/A 0.0037 0.0037 0.0036 0.0032 0.0021 0.0034 0.0028
70 Hz 0.0036 0.0036 0.0036 0.0037 N/A 0.0036 0.0035 0.0031 0.0020 0.0033 0.0026
85 Hz 0.0036 0.0036 0.0036 0.0037 0.0036 N/A 0.0084 0.0031 0.0020 0.0033 0.0026
100 Hz 0.0035 0.0035 0.0035 0.0036 0.0035 0.0084 N/A 0.0570 0.0068 0.0062 0.0074
115 Hz 0.0032 0.0032 0.0031 0.0032 0.0031 0.0031 0.0570 N/A 0.1380 0.0303 0.0917
130 Hz 0.0020 0.0020 0.0020 0.0021 0.0020 0.0020 0.0068 0.1380 N/A 0.0555 0.3173
145 Hz 0.0033 0.0033 0.0033 0.0034 0.0033 0.0033 0.0062 0.0303 0.0555 N/A 0.2410
160 Hz 0.0027 0.0027 0.0026 0.0028 0.0026 0.0026 0.0074 0.0917 0.3173 0.2410 N/A
Table 6. p-values for the comparison of two groups with the Kruskal–Wallis test in the band of 2100 Hz.
Excitation 0 Hz 10 Hz 40 Hz 55 Hz 70 Hz 85 Hz 100 Hz 115 Hz 130 Hz 145 Hz 160 Hz
0 Hz N/A 0.0037 0.0036 0.0038 0.0036 0.0035 0.0037 0.0033 0.0020 0.0032 0.0020
10 Hz 0.0037 N/A 0.0036 0.0038 0.0036 0.0035 0.0037 0.0033 0.0020 0.0032 0.0020
40 Hz 0.0036 0.0036 N/A 0.0038 0.0036 0.0035 0.0036 0.0033 0.0020 0.0031 0.0020
55 Hz 0.0038 0.0038 0.0038 N/A 0.0038 0.0036 0.0038 0.0035 0.0021 0.0033 0.0021
70 Hz 0.0036 0.0036 0.0036 0.0038 N/A 0.0035 0.0036 0.0033 0.0020 0.0031 0.0020
85 Hz 0.0035 0.0035 0.0035 0.0036 0.0035 N/A 0.0068 0.0032 0.0019 0.0030 0.0019
100 Hz 0.0037 0.0037 0.0036 0.0038 0.0036 0.0068 N/A 0.0750 0.0071 0.0051 0.0071
115 Hz 0.0033 0.0033 0.0033 0.0035 0.0033 0.0032 0.0750 N/A 0.0555 0.0105 0.0555
130 Hz 0.0020 0.0020 0.0020 0.0021 0.0020 0.0019 0.0071 0.0555 N/A 0.0190 1.0000
145 Hz 0.0032 0.0032 0.0031 0.0033 0.0031 0.0030 0.0051 0.0105 0.0190 N/A 0.0190
160 Hz 0.0020 0.0020 0.0020 0.0021 0.0020 0.0019 0.0071 0.0555 1.0000 0.0190 N/A
Table 7. p-values for the comparison of two groups in the s-plane with the Kruskal–Wallis test for the fault detection and localization of –1C cases.
Fault Case N-1C A-1C B-1C C-1C
N-1C N/A 0.0039 0.0039 0.0039
A-1C 0.0039 N/A 0.0104 0.0039
B-1C 0.0039 0.0104 N/A 0.0039
C-1C 0.0039 0.0039 0.0039 N/A
Table 8. p-values for the comparison of two groups in the s-plane with the Kruskal–Wallis test for the fault detection and localization of –BN cases.
Fault Case N-BN A-BN B-BN C-BN
N-BN N/A 0.0039 0.0039 0.0039
A-BN 0.0039 N/A 0.0039 0.0039
B-BN 0.0039 0.0039 N/A 0.0374
C-BN 0.0039 0.0039 0.0374 N/A
Table 9. p-values for the comparison of two groups in the s-plane with the Kruskal–Wallis test for fault localization and magnitude estimation between the –1C (small fault) and –BN (large fault) cases.
Fault Case A-1C B-1C C-1C A-BN B-BN C-BN
A-1C N/A 0.0104 0.0039 0.0039 0.0039 0.0039
B-1C 0.0104 N/A 0.0039 0.0039 0.0039 0.0039
C-1C 0.0036 0.0036 N/A 0.0038 0.0036 0.0035
A-BN 0.0038 0.0038 0.0038 N/A 0.0038 0.0036
B-BN 0.0036 0.0036 0.0036 0.0038 N/A 0.0035
C-BN 0.0035 0.0035 0.0035 0.0036 0.0035 N/A
Table 10. p-values for the comparison of two groups in the z-plane with the Kruskal–Wallis test for the fault detection and localization of –1C cases.
Fault Case N-1C A-1C B-1C C-1C
N-1C N/A 0.0163 0.0039 0.0039
A-1C 0.0163 N/A 0.0065 0.0039
B-1C 0.0039 0.0065 N/A 0.0374
C-1C 0.0039 0.0039 0.0374 N/A
Table 11. p-values for the comparison of two groups in the z-plane with the Kruskal–Wallis test for the fault detection and localization of –BN cases.
Fault Case N-BN A-BN B-BN C-BN
N-BN N/A 0.0039 0.0039 0.0039
A-BN 0.0039 N/A 0.0039 0.0039
B-BN 0.0039 0.0039 N/A 0.0374
C-BN 0.0039 0.0039 0.0374 N/A
Table 12. p-values for the comparison of two groups in the z-plane with the Kruskal–Wallis test for fault localization and magnitude estimation between the –1C (small load) and –BN (big load) cases.
Fault Case A-1C B-1C C-1C A-BN B-BN C-BN
A-1C N/A 0.0065 0.0039 0.0039 0.0039 0.0039
B-1C 0.0065 N/A 0.0374 0.0039 0.0039 0.0039
C-1C 0.0039 0.0374 N/A 0.0039 0.0039 0.0039
A-BN 0.0039 0.0039 0.0039 N/A 0.0039 0.0039
B-BN 0.0039 0.0039 0.0039 0.0039 N/A 0.0374
C-BN 0.0039 0.0039 0.0039 0.0039 0.0374 N/A
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s).
MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https:
Share and Cite
MDPI and ACS Style
Sultana, R.-G.; Davrados, A.; Dimogianopoulos, D. Evaluating Contact-Less Sensing and Fault Diagnosis Characteristics in Vibrating Thin Cantilever Beams with a MetGlas® 2826MB Ribbon. Vibration 2024
, 7, 36-52. https://doi.org/10.3390/vibration7010002
AMA Style
Sultana R-G, Davrados A, Dimogianopoulos D. Evaluating Contact-Less Sensing and Fault Diagnosis Characteristics in Vibrating Thin Cantilever Beams with a MetGlas® 2826MB Ribbon. Vibration. 2024; 7
(1):36-52. https://doi.org/10.3390/vibration7010002
Chicago/Turabian Style
Sultana, Robert-Gabriel, Achilleas Davrados, and Dimitrios Dimogianopoulos. 2024. "Evaluating Contact-Less Sensing and Fault Diagnosis Characteristics in Vibrating Thin Cantilever Beams with a
MetGlas® 2826MB Ribbon" Vibration 7, no. 1: 36-52. https://doi.org/10.3390/vibration7010002
| {"url":"https://www.mdpi.com/2571-631X/7/1/2","timestamp":"2024-11-07T19:24:11Z","content_type":"text/html","content_length":"507474","record_id":"<urn:uuid:c36a730e-9e1d-4887-8ed2-0d544473e0d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00676.warc.gz"}
Final Report Summary - MATLAN (High relative accuracy matrix techniques for linear and non-linear structured eigenvalue problems and applications) | FP7 | CORDIS | European Commission
Final Report Summary - MATLAN (High relative accuracy matrix techniques for linear and non-linear structured eigenvalue problems and applications)
The visitor Prof. Dr Ivan Slapnicar spent a scientifically highly interesting year at the Technische Universitaet Berlin. Most of the research was done on the first objective of the MATLAN project,
which was to develop highly accurate algorithms for Hamiltonian and skew-Hamiltonian eigenvalue problems (EVPs), focussing in particular on the Hamiltonian case. The problem proved to be harder than
expected and took almost the entire project time. The work continued after the visitor's return to his home institution. Since the Hamiltonian eigenvalue problem had all features of general
non-symmetric EVPs, the visitor first thoroughly studied existing Jacobi-type algorithms for these problems. The summary of the obtained results is as follows:
1. the Jacobi algorithm for normal matrices converged, and actually converged quadratically. It was sufficient to apply the standard Jacobi algorithm to the Hermitian, i.e. symmetric, part or
or the Paardekooper algorithm to the skew-symmetric part (Loizou, 1970). The algorithm was run implicitly on a given matrix, while each transformation was to be computed from the symmetric part of the
current two by two pivot submatrix. This algorithm was simpler than the known norm-reducing method, as described by Goldstine and Hurwitz, 1959.
2. for skew-symmetric matrices the algorithm of choice was the well known Paardekooper (1970) method which used four by four orthogonal transformations and reduced the starting matrix to the
Murnaghan form. The method converged and was quadratically convergent.
3. for complex symmetric matrices the norm-reducing Jacobi type algorithm which used plane rotations by complex orthogonal matrices (Seaton, 1968; Anderson and Loizou, 1974) was convergent (Eberlein,
1971) and converged quadratically (Anderson and Loizou, 1973).
4. the best convergent algorithm for the general real matrix was the norm reducing method which diagonalised the given matrix, as mentioned by Eberlein, 1970. This method, which employed complex
arithmetic, had better convergence properties than the methodology which used only real arithmetic (Eberlein and Boothroyd, 1968). The method converged quadratically (Ruhe, 1968) and was also more
reliable than other related techniques (Veselic, 1975 and 1979; Shroff, 1989). The drawback of the method was that nearly defective eigenvalues resulted in a highly ill-conditioned eigenvector
matrix, up to the inverse of the square root of machine precision.
5. for the general real matrix the Jacobi algorithm (Mehl, 2008), which was characterised by bottom-up-left-right pivoting, converged quadratically towards the complex Schur form whenever it converged, but was not guaranteed to converge in general. The algorithm utilised unitary transformations and was not norm-reducing. The adapted method for Hamiltonian matrices used unitary symplectic transformations to reduce the matrix
to the Hamiltonian Schur form. This variant was also quadratically convergent, but again not guaranteed to converge. This adaptation was an improvement over the method from Byers, 1990, with respect to the
pivoting scheme.
6. in case the Hamiltonian or skew-Hamiltonian matrix was additionally symmetric or skew-symmetric, the Jacobi-type methods based on unitary quaternion transformations (Mackey, 1995; Fassbender,
Mackey and Mackey, 2001) were convergent and quadratically convergent.
7. for the real Hamiltonian matrix the state of the art at the beginning of the project was the algorithm from Mehrmann, Schroeder and Watkins, 2009. The algorithm was developed from series of
results (Benner, Mehrmann and Xu, 1998; Chu, Liu and Mehrmann, 2004; Watkins, 2006) and used orthogonal symplectic transformations that were applied on the implicitly squared matrix and reductions to
real Schur forms of the parts of the current matrix. The symplectic transformations depended on their property of working on isotropic subspaces. However, the algorithm might not perform well in the
presence of purely imaginary eigenvalues or nearly purely imaginary eigenvalues. The visitor's task was to improve this algorithm in the sense that it always computed the Hamiltonian Schur form. The
visitor successfully fulfilled this task by developing three new algorithms.
The first development was the norm-reducing algorithm A1 which incorporated the Jacobi type transformations as in the method of Eberlein for general real matrices with special Hamiltonian pivoting
derived by Mehl. The algorithm was convergent and converged quadratically. However, since the algorithm diagonalised the given matrix in the presence of defective or nearly defective eigenvalues, the
condition of the eigenvalue matrix could grow up to the inverse of the square root of machine precision. In addition, the algorithm utilised complex transformations even in case the starting matrix
was real.
The second developed algorithm A2 was the modification of the algorithm by Mehrmann, Schroeder and Watkins (MSW). The MSW algorithm carefully advanced in blocks as small as possible, trying to
preserve all the necessary numerical properties of the current matrices. The new proposal was that the blocks of real, complex and purely imaginary eigenvalues were determined at the beginning from
the squared starting matrix, after which the computation proceeded through these three blocks. The proposal was somewhat slower than the MSW approach, but more stable. The final block B was the block
with purely imaginary eigenvalues and was then reduced by A1. Here the complex arithmetic was necessary, yet it was confined to the last block.
The final development was the algorithm A3, a modification of A2 in which the final block B was reduced to Hamiltonian Schur form by unitary symplectic transformations. The proposal was essentially
the method from MSW applied to the matrix i times B which had only real eigenvalues. In addition, the subspaces that the method worked on were isotropic. The only difference to the real case was
that, in this case, all transformation matrices were unitary and symplectic of the block-form (U1 U2; -conj(U2) conj(U1)). After the algorithm was finished, and if the lower triangular part of the
computed Schur form was not sufficiently small, the Hamiltonian Jacobi method by Mehl could be used to reduce it quadratically. The algorithm was numerically stable since only orthogonal and unitary
transformations were used.
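For readers unfamiliar with the Jacobi-type machinery referenced throughout this summary, the following minimal Python sketch shows one classical Jacobi iteration for a real symmetric matrix. It is background illustration only, not the project's code; the pivot strategy (largest off-diagonal entry) and the tolerance are assumptions chosen for clarity.

```python
import numpy as np

def jacobi_symmetric(A, tol=1e-12, max_iter=1000):
    """Classical Jacobi eigenvalue iteration for a real symmetric matrix A.

    Repeatedly annihilates the largest off-diagonal entry with a plane
    rotation; returns approximate eigenvalues and the accumulated
    orthogonal eigenvector matrix.
    """
    A = np.array(A, dtype=float)
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_iter):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), A.shape)
        if off[p, q] < tol:
            break
        # rotation angle that zeroes A[p, q]: tan(2t) = 2A_pq / (A_qq - A_pp)
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(n)
        J[p, p] = J[q, q] = c
        J[p, q], J[q, p] = s, -s
        A = J.T @ A @ J     # orthogonal similarity: eigenvalues preserved
        V = V @ J
    return np.diag(A), V
```

The Hamiltonian, skew-Hamiltonian and norm-reducing variants discussed above replace the plane rotations by unitary symplectic or complex orthogonal transformations, but the sweep-and-annihilate structure is the same.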
The socioeconomic impact of the performed research was that the algorithms would be used in subsequent projects, e.g. the European Research Council advanced grant 'Modelling, simulation and
regulation of multi-physics systems' for computations with Hamiltonian matrices. The appropriate publications were pending by the time of the project completion. | {"url":"https://cordis.europa.eu/project/id/220978/reporting","timestamp":"2024-11-04T18:33:57Z","content_type":"text/html","content_length":"72233","record_id":"<urn:uuid:5834ce47-3876-423d-a415-cfdfb3accd7e>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00774.warc.gz"} |
Unit Projects - Connected Mathematics Project
Unit Projects
Comparing and Scaling: Paper Pool
Students investigate the pattern of bounces for a pool ball as it makes its way around pool tables of various dimensions. As you might expect, ratio and scaling are involved in this Project. For a
pool table with given dimensions, students predict the number of times the ball “hits” the sides of the table and into which of the four pockets it will fall.
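As a supplementary note (a sketch of the standard mathematical answer, not part of the CMP materials), the Paper Pool outcome for a ball shot at 45° from the lower-left pocket of an m-by-n table follows from the greatest common divisor of the dimensions. The pocket names and the convention of counting only side hits between start and pocket are assumptions here.

```python
from math import gcd

def paper_pool(m, n):
    """Ball shot at 45 degrees from the lower-left pocket of an m x n table.

    Only the reduced dimensions a = m/g, b = n/g matter, where g = gcd(m, n):
    the ball hits the sides a + b - 2 times, and the landing pocket is
    determined by the parities of a and b (they cannot both be even).
    """
    g = gcd(m, n)
    a, b = m // g, n // g
    hits = a + b - 2
    if a % 2 == 1 and b % 2 == 1:
        pocket = "upper-right"
    elif a % 2 == 0:               # a even, b odd
        pocket = "lower-right"
    else:                          # a odd, b even
        pocket = "upper-left"
    return hits, pocket

print(paper_pool(3, 2))  # (3, 'upper-left')
```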
Growing, Growing, Growing Project: Half-Life
Students use cubes to simulate the radioactive decay of a substance and estimate its half-life. They then create a new situation involving radioactive decay and design and carry out their own simulation.
Say It With Symbols: Finding Surface Area of Rod Stocks
Students use digital Cuisenaire rods and find the surface areas of stacks of rods of certain lengths by varying the number of rods n. They describe a pattern and find the relationship between the
number of rods n and the surface area of the stack A. The equation for the surface area is a linear relationship in terms of the number of rods used. Students will find that different but equivalent
expressions can be used to model the data. You could include questions about volume by asking how many unit rods will be needed to build a stack of n rods of length x. Then ask the students to
determine the function that can model this situation. | {"url":"https://connectedmath.msu.edu/covid-19-cmp-resources/resources-for-teachers/unit-projects/index.aspx","timestamp":"2024-11-13T08:23:43Z","content_type":"text/html","content_length":"55565","record_id":"<urn:uuid:43da7849-5eea-430e-bd06-8b7d91d5d34b>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00631.warc.gz"} |
Video Library: S. Pilipović, Contributions to the convolution and $\Psi$DO's over ultradistribution spaces
Abstract: The convolution of distributions has been studied from the very beginning of distribution theory by many authors. An important contribution was given by Professor Vladimirov. I have studied,
with my students, convolution in various spaces of distributions and ultradistributions. The aim of this talk is to show that one can extend the Anti-Wick calculus over
$\mathcal D^{\{M_p\}}(\mathbb{R}^d)$ for ultradistributions in ${\mathcal S'}_{\{A_p\}}^{\{M_p\}}(\mathbb{R}^d)$ with very weak assumptions on $A_p$ and conditions on $M_p$ related to the sequence
$p!^m, m>1$ noted in the abstract. This is done by the use of the Wigner transform $W(\varphi,\varphi)$ with $\varphi $ being ultradifferentiable functions with the fast decrease as
$|x|\rightarrow \infty.$ We develop the theory for $\varphi=e^{-r{\langle \cdot\rangle^q}},\; r>0, q\geq 1,$ as well as for $\varphi$ satisfying even faster decay. Special example is $\varphi= \exp
{(-se^{{\langle\cdot\rangle}^q})}, s>0,q\geq 1.$ Note that we have given earlier a complete answer in our analysis related to the convolution with the kernel $e^{a|\cdot|^q},a>0$ and the related
Anti-Wick calculus, in the case when $\varphi$ is a Gaussian.
Language of the talk: English
T1: Coding Theory (1)
Reminder: This post contains 7616 words · 22 min read · by Xianbin
To encode some information, we use bits. In general, if there are \(M\) symbols, we need \(\log_2 M\) bits. It seems that we cannot do better.
However, if the symbols have some particular properties, we can do better. Now, things become interesting. What kind of properties can we use such that the size of the encoding is less than \(\log_2 M\)?
Example 1:
Imagine that we have a file with 2000 symbols, where there are four symbols “00, 01, 10, 11” with probabilities \(1/2,1/4,1/8,1/8\) respectively. (It is easy to learn this property by property testing algorithms.)
It is not optimal. We can use “0, 10,110,111” to represent these four symbols and the number of bits we use is smaller. In short, the main idea is to encode the frequent symbols with less bits. We
call such a strategy variable-length coding.
Of course, there are problems.
• unique decoding.
• instantaneous/prefix codes.
1. The first one says that, if we apply some variable-length coding, the decoding may have more than one result.
2. The second one says that even when there is a unique decoding, we may have to wait for later bits before decoding an earlier codeword. If the code is prefix-free, we can decode each codeword immediately.
Useful Inequalities
Kraft’s inequality shows an exact condition for the existence of a prefix code in terms of the lengths of codewords.
\(\textbf{Lemma 1}\)[Kraft’s Inequality]. A binary instantaneous/prefix code with codeword lengths \(\{\ell_i\}_{i=1}^N\) exists iff \(\sum_{i=1}^N 2^{-\ell_i} \leq 1\)
\(\textbf{Proof}\). We first prove that this condition is necessary. Imagine that we expand the binary prefix code in the form of a binary tree. For example, “000111” (0 -> go left; 1 -> go right). Then
we build a binary tree. Let \(C\) be a prefix code with the codewords \(w_1,\ldots,w_N\) of lengths \(\ell_1,\ldots,\ell_N\). Each codeword \(w_i\) corresponds to a dyadic interval \(I(w_i) \subseteq [0,1)\) of length \(2^{-\ell_i}\). Since it is a
prefix code, these intervals will not overlap.
\[\sum_{i=1}^N \lvert I(w_i) \rvert= \sum_{i=1}^N 2^{-\ell_i} \leq 1\]
Next, we prove that this condition is sufficient. That is to say, given a set of codeword lengths \(\{\ell_i\}\), we can construct a prefix code. To do this, we only need to give an algorithm.
After sorting, we obtain \(\ell_1 \leq \cdots \leq \ell_N\) (relabeled). We divide \([0,1)\) into intervals as follows.
\[[l_i,r_i) = \left[ \sum_{j=1}^{i-1} 2^{-\ell_j}, \sum_{j=1}^i 2^{-\ell_j}\right)\]
where \(i\in[N]\).
(Notice that the length of \([l_i,r_i)\) is exactly \(2^{-\ell_i}\). By the Kraft condition, these intervals all fit inside \([0,1)\), so we can do this for all \(i\).)
Then, \([l_i,r_i) = I(w_i)\) for the binary string \(w_i\) of length \(\ell_i\). Because these intervals do not overlap, the binary strings are codewords for a prefix code.
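The interval construction in this sufficiency proof translates almost line-for-line into code. Here is a small Python sketch of it (my own illustration; the floating-point arithmetic is exact here because every quantity is a dyadic rational):

```python
def kraft_prefix_code(lengths):
    """Build binary prefix codewords with the given lengths, when possible.

    Mirrors the proof: sort by length, walk a pointer through [0, 1),
    and read codeword i off as the first len_i bits of the left endpoint
    of its interval.
    """
    if sum(2 ** -l for l in lengths) > 1:
        raise ValueError("Kraft's inequality violated: no prefix code exists")
    order = sorted(range(len(lengths)), key=lambda i: lengths[i])
    codes = [None] * len(lengths)
    left = 0.0                      # left endpoint of the current interval
    for i in order:
        l = lengths[i]
        codes[i] = format(int(left * 2 ** l), "0{}b".format(l))
        left += 2 ** -l
    return codes

print(kraft_prefix_code([1, 2, 3, 3]))  # ['0', '10', '110', '111']
```

Note that the output for lengths (1, 2, 3, 3) is exactly the code “0, 10, 110, 111” used in Example 1.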
\(\textbf{Lemma 2}\) (Improved) Lemma 1 holds for a uniquely decodable binary code.
Shannon’s Source Encoding Theorem
Above discussion makes us think of the following question.
What is the minimum number of bits to encode words such that we can decode them uniquely and directly?
Now, let us the brilliant Shannon’s Source Coding Theorem.
\(\textbf{Theorem 1}\) Given a collection of \(n\) i.i.d random variables, each of which has entropy \(H(X)\), we can compress these variables into \(n H(X)\) bits on average with negligible errors as \(n\to \infty\). Also, no uniquely decodable code can compress them using less than \(nH(X)\) bits without loss of information.
The idea behind Shannon’s source encoding theorem is to encode only typical messages. As we talked before, we can save bits by encoding them. We will show that non-typical string tend to 0.
Intuitive Proof
Step 1.
We define a random message of length \(N\) as \(x_1,\ldots,x_N\), each letter of which is drawn from an alphabet \(\mathcal{A} = \{a_1,\ldots,a_T\}\) with probabilities \(p(a_i) = p_i \in(0,1]\), \(i = 1,\ldots,T\), where \(\sum p_i = 1\).
Now, let us look at \(y = x_1\ldots,x_N\). The probability that \(y\) appears is \(p(x_1,\ldots,x_N) = p(x_1)\ldots p(x_N)\).
Step 2.
Next, we consider a long enough string \(x\). We let \(N_i \approx N p_i\) denote the frequency of \(a_i\) in \(x\). Then, the probability of observing \(x\) is approximately:
\[p(x) \approx p_{typ} = p_1^{N_1}\cdots p_T^{N_T}\]
The typical messages are approximately uniformly distributed, each with probability \(p_{typ}\), which indicates that the set \(S\) of typical messages has size
\[\lvert S \rvert\approx\frac{1}{p_{typ}}\]
If we encode each member of \(S\) by a binary string we need
\(I_N = \log_2 \lvert S\rvert = -N\sum_{i=1}^T p_i \log p_i = NH(X)\) bits.
Therefore, the average number of bits per letter is
\[I = I_N/N = H(X).\]
Therefore, we can see that
\[I_N \approx N H(X)\]
\[p_{typ} \approx 2^{-NH(X)}.\]
Formal Proof
The above proof is not clear. At least, what is the typical message? Why can we think they are uniformly distributed?
1. Random Variable:
Given a letter ensemble \(S\), the function \(f: S\to \mathbb{R}\) defines a discrete random variable. The realization \(f(S)\) are the real numbers \(f(x), x\in S\). The average of \(f(S)\) is
defined as follows.
\[\overline {f(S)}:= \sum_{x\in S}p(x)f(x) = \sum_{i=1}^T p_i f(a_i)\]
2. Typical Set \(Tp\) of random variables
Roughly speaking, the typical set is some good sampling of the source. Recall that
\[p(x) \approx p_{typ} = p_1^{N_1}\ldots p_i^{N_i}\]
It says that a string \(x\) is typical if the frequency of each symbol in the string is approximately the same as the probability of the symbol in the source.
We formally show the above property as follows.
Let \(x^n\) be a sequence with elements drawn from finite alphabet \(\mathcal{A}\). Let \(\pi(x \mid x^n) = \frac{\lvert \{i: x_i = x \}\rvert}{n}\) for all \(x\in \mathcal{A}\) be the empirical
probability mass function of \(x^n\). For example, if \(x^n = (0,1,1,1,1,1,1,0)\), then
\[\pi(1 \mid x^n) = 6/8, \qquad \pi(0 \mid x^n) = 2/8.\]
Let \(X_1, X_2, \ldots\) be a sequence of i.i.d. random variables with \(X_i \sim p_X(x)\), i.e. sampled repeatedly from the source.
\(\textbf{Lemma}\) (Weak Law of Large Numbers (WLLN)). Let \(X_1,X_2, \ldots,X_N\) be i.i.d random variables with a finite expected value \(\mu\); then for any \(\epsilon > 0\), we have \(\lim_{n \to +\infty} P(\lvert \bar X - \mu \rvert \geq \epsilon) = 0\)
By WLLN, for each \(x\in \mathcal{A}\), we have
\[\pi(x \mid X^n) \to p(x)\]
Thus, with high probability, we can use \(\pi(x \mid X^n)\) to replace the true probability \(p(x)\). Therefore, for \(X \sim p(x)\) and \(\epsilon \in (0,1)\), we define \(\epsilon\)-typical \(n\)
-sequences \(x^n\) as
\(T^n_\epsilon(X) = \{x^n:\lvert \pi(x \mid x^n) -p(x) \rvert \leq \epsilon p(x) \}\) for all \(x\in \mathcal{A}\).
Then, we can easily obtain the following lemma by applying the above definition.
\(\textbf{Typical Set Lemma}\). Let \(x^n \in T^n_\epsilon(X)\). Then for any non-negative function \(f(x)\), we have \((1-\epsilon)E[f(X)] \leq \frac{1}{n} \sum_{i=1}^n f(x_i) \leq (1+\epsilon)E[f(X)]\).
So, we define the typical set \(T\) of a random sequence \(S\) as the set of realizations \(\hat x :=x_1,\cdots,x_N\) such that
\[\overline {f(S)} - \delta \leq \frac{1}{N}\sum_{i=1}^Nf(x_n) \leq \overline {f(S)} + \delta\]
It means that the typical set contains essentially all strings that are likely to appear!
\[P( X^n \in T_p) \geq 1-\epsilon\]
Now, we use \(P_T\) to represent the probability for a randomly chosen sequence \(\hat x\) to be in the typical set \(T_p\).
\[P_T = \sum_{\hat x\in T_p} p(\hat x) \geq 1-\epsilon\]
The above can be proved by Chebyshev’s Inequality.
\(\textbf{Chebyshev's Inequality}\). Let \(X\) be any random variable with \(\mu = \mathbb{E}[X]\) and finite variance \(Var(X)\). Then, for any real number \(\alpha > 0\), we have \(P(\lvert X - \mu \rvert \geq \alpha) \leq \frac{Var(X)}{\alpha^2}\)
Next, consider the function
\[f(S): = -\log p(S)\]
The average of \(f(S)\) is
\[\overline {f(S)} = -\sum_{x\in \mathcal{A}}p(x) \log p(x) = H(S)\]
Replacing \(f(S)\) by \(-\log p(S)\), we have \(H(S) - \delta \leq - \frac{1}{N}\sum_{i=1}^N \log p(x_i) \leq H(S) +\delta\)
The above inequality can be transformed as follows ( \(H = H(S)\) in short).
\[2^{-N(H+\delta)} \leq p(\hat x) \leq 2^{-N(H-\delta)}\]
\[\lvert T_p \rvert 2^{-N(H-\delta)} \geq \sum_{\hat x\in T_p} p(\hat x) \geq \lvert T_p \rvert 2^{-N(H+\delta)}\]
\[\lvert T_p \rvert 2^{-N(H-\delta)} \geq P_T \geq 1-\epsilon \quad\text{and}\quad 1 \geq P_T \geq \lvert T_p \rvert 2^{-N(H+\delta)}\]
\[2^{N(H+\delta)} \geq \lvert T_p \rvert \geq (1-\epsilon) 2^{N(H-\delta)}\]
When \(\epsilon, \delta \to 0\), we obtain that \(\lvert T_p \rvert \to 2^{NH}\)
So, \(I_N \to NH\).
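To see the typical set in action, here is a small simulation (my own illustration, not from the references): it draws sequences from the source of Example 1 and checks which fraction lands in \(T^n_\epsilon(X)\). By the WLLN, the fraction approaches 1 as \(n\) grows.

```python
import math, random
from collections import Counter

def in_typical_set(x, p, eps):
    """Empirical frequency of every symbol within (1 +/- eps) of its
    true probability -- the definition of the eps-typical set."""
    n = len(x)
    counts = Counter(x)
    return all(abs(counts[a] / n - pa) <= eps * pa for a, pa in p.items())

p = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
H = -sum(q * math.log2(q) for q in p.values())    # 1.75 bits/symbol
n, eps, trials = 2000, 0.1, 1000
symbols, weights = zip(*p.items())
hits = sum(
    in_typical_set(random.choices(symbols, weights=weights, k=n), p, eps)
    for _ in range(trials)
)
print(f"H = {H} bits; fraction of sampled sequences that were typical: {hits / trials}")
```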
Many days ago, I wanted to write a blog about Shannon’s source coding theorem. Now, I have finally done it. I feel the most important thing is the concept of the typical set. It is really a genius idea.
[1]. Shannon’s Source Coding Theorem. Kim Boström
[2]. C. E. Shannon and W. Weaver. A Mathematical Theory of Communication, The Bell System Technical Journal, 27, 379–423, 623–656, (1948)
[3]. El Gamal, Abbas, and Young-Han Kim. Network information theory. Cambridge university press, 2011. | {"url":"https://blog-aaronzhu.site/coding-theory/","timestamp":"2024-11-05T22:47:03Z","content_type":"text/html","content_length":"13053","record_id":"<urn:uuid:96a243ac-2f06-40bc-a951-518141503439>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00522.warc.gz"} |
Volume of cone calculator
The formula for the volume of a cone is:
V = (1/3)πr^2h
where r is the radius of the circular base, h is the height of the cone, and π is a constant equal to approximately 3.14.
To use the volume of cone calculator, follow these steps:
1. Enter the value of the radius (r) in the input field.
2. Enter the value of the height (h) in the input field.
3. Click on the “Calculate” button to obtain the volume (V) of the cone.
For example, let’s say you have a cone with a radius of 5 cm and a height of 10 cm. To calculate its volume, you would follow these steps:
1. Enter the value of the radius (r) as 5 cm.
2. Enter the value of the height (h) as 10 cm.
3. Click on the “Calculate” button.
The calculator would then use the formula to calculate the volume of the cone, which is approximately 261.8 cubic centimeters (cm^3).
Therefore, the volume of the cone with a radius of 5 cm and a height of 10 cm is 261.8 cm^3.
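For readers who prefer code to a web form, the same computation is a few lines of Python (a sketch; the function name is an arbitrary choice):

```python
import math

def cone_volume(radius, height):
    """Volume of a right circular cone: V = (1/3) * pi * r^2 * h."""
    return math.pi * radius ** 2 * height / 3

print(round(cone_volume(5, 10), 1))  # 261.8, matching the example above
```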
The formula for the volume of a cone is:
V = (1/3)πr^2h
where V is the volume, r is the radius of the circular base, h is the height of the cone, and π is a constant equal to approximately 3.14.
To calculate the volume of a cone, follow these steps:
1. Measure the radius of the circular base of the cone. If the radius is not given, you can measure it using a ruler or a tape measure.
2. Measure the height of the cone from the base to the top point.
3. Substitute the values of the radius (r) and height (h) into the formula:
V = (1/3)πr^2h
4. Simplify the formula using the order of operations (PEMDAS) by first squaring the radius and multiplying it by π, then multiplying the result by the height, and finally dividing the result by 3:
V = (1/3)πr^2h = πr^2h/3
5. Calculate the value of the volume using the simplified formula and the values of the radius and height:
V = πr^2h/3
For example, let’s say you have a cone with a radius of 5 cm and a height of 10 cm. Here’s how to calculate its volume:
1. Measure the radius of the circular base, which is 5 cm.
2. Measure the height of the cone, which is 10 cm.
3. Substitute the values of the radius and height into the formula:
V = (1/3)πr^2h = π(5 cm)^2(10 cm)/3
4. Simplify the formula:
V = πr^2h/3 = π(5 cm)^2(10 cm)/3 ≈ 261.8 cm^3
Therefore, the volume of the cone with a radius of 5 cm and a height of 10 cm is approximately 261.8 cubic centimeters (cm^3). | {"url":"https://calculator3.com/volume-of-cone-calculator/","timestamp":"2024-11-05T17:31:30Z","content_type":"text/html","content_length":"59439","record_id":"<urn:uuid:d5abd268-8410-4ec2-a573-3f9c817ceac5>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00617.warc.gz"} |
Multibody Dynamics
Multibody Dynamics of an Excavator using RecurDyn
In recent years, the use of MultiBody Simulation (MBS) in industry has progressively grown. This approach is used to investigate both the kinematics and dynamics of moving mechanisms, which are
composed by multiple bodies interacting with each other through joints and contacts. MultiBody Simulation is the recommended numerical method to quickly complete the following tasks:
• design of mechanisms for motion control (e.g. cams, links, guides)
• check of functionality and performance assessment (e.g. interference check, speed and acceleration analysis)
• estimation of joint loads and internal reactions in transient conditions, in order to choose actuators, brakes and other power devices.
The most basic MBS approach idealizes the system components as rigid bodies. Although this assumption is sometimes fairly acceptable, many applications cannot be virtualized while ignoring
body compliance. Flexibility affects the value of the participating inertia, influences the application points of loads and changes the way the kinetic energy is dissipated in the system. For all of
these reasons, different numerical methods have been developed in order to introduce flexible bodies in MultiBody Simulation. Although general guidelines cannot be formulated, flexible MultiBody
Simulation is recommended in, at least, three situations:
• when external loads have frequencies close to structural ones (resonances)
• when the system undergoes high speed dynamics and vibrations affect the requested outputs
• when the calculation of stress and strains in transient conditions is a mandatory output of the study
For the types of outputs it provides, a flexible MultiBody Simulation can be assimilated to a transient Finite Element Analysis. The big difference is in the numerical formulation of the two
problems, which makes MultiBody a little less precise, but much (much) faster. Moreover, the MultiBody approach, even when it includes flexible bodies, it keeps its distinctive natural connectivity
with control system design, pneumatics, hydraulics, and electronics.
RecurDyn, the premium multibody software from FunctionBay Korea, is a key technology in this scenario. It implements two alternative technologies for flexible body modeling. This paper compares the
two approaches and highlights the advantages of each one of them.
Case study
The multibody model of an excavator is chosen as reference case study. The simplicity of this example helps in maximizing the differences between the modeling approaches.
A series of revolute joints connects the arm bodies in a single kinematic chain going from the bucket to the cabin. The latter body is then connected to the vehicle base through one more revolute
joint with vertical axis. Three groups of hydraulic cylinders control the arm configuration. Each actuator consists of a piston and a cylinder, coupled together by a translational joint. Both ends of
each actuator are linked to the excavator structure by means of revolute/spherical joints. The overall kinematic scheme is fairly simple and represents the degrees of freedom of the actual excavator.
The model is initially built using rigid bodies only. This step is useful to check the appropriateness of joints, drivers, motion functions and contacts. The flexibility is applied to selected bodies
in a second phase. In general, it is not necessary to switch all bodies to flexible and, more important, it is not necessary to use the same approach for all flexible bodies. In our example, we will
convert just the excavator boom to a flexible, which is the main part of the machine arm.
The simulation reproduces a standard working cycle of the excavator. The bucket approaches a target object (not modeled), digs it, transports it, unloads it and finally moves back to the initial
position. All tasks are completed in about eleven seconds. The motion is obtained by governing the lengths of the hydraulic actuators and the angular position of the revolute joint between base and
Rigid Body Dynamics
The dynamic analysis with a rigid multibody excavator model returns the outputs of Figure 2. The plots show the reaction forces and torques of the revolute joints connecting the boom to the rest of
the model.
The loads coming from a rigid multibody analysis are on average higher than the true ones, because the moving inertia is overestimated. Accordingly, the power demand of the actuators is overestimated.
From a structural point of view the procedure is conservative, although it is not easy to rate how much. For mechanisms moving at slow speed (with respect to the first natural frequency of the
structure), the two step procedure is fairly applicable. For high speed dynamics, the use of rigid bodies easily leads to excessive loads and, therefore, excessive sizing of the components. Another
weak point of this procedure is that two separate codes (Multi-Body and Finite Element) are required. There is always a risk of error in the load data transfer, especially because the multibody loads
have spatial components over moving bodies.
Flexible Multi-Body Simulation is an interesting alternative to this traditional approach, which improves the quality of the results and, at the same time, makes the whole calculation process
straightforward and easier.
Reduced Flex Technology (RFlex)
In RecurDyn, the Reduced Flex method coincides with the well-known and widely accepted Craig-Bampton approach. The method was developed in the late 1960s in the aerospace industry, to reduce the
overall size of large FE models. The theory assumes that the response of a flexible body in static (and dynamic) conditions can be represented by a linear superposition of several mode shapes, which
is why this approach is also known as Component Mode Synthesis. By doing so, the initial meshed body is translated into a mathematical object whose unknowns are the linear multipliers of the base
modes. This results in an evident reduction in the number of unknowns. This theory has been expanded and adjusted along the years, but the original rules are still valid and applied to create
flexible bodies in almost every modern multibody software.
Figure 4 graphically describes the RFLex approach in RecurDyn. The modal basis, which will numerically describe the body flexibility, is created by combining two sets of structural modes: the fixed
interface vibration modes and the so-called constraint modes. The result is a mathematical object whose unknowns are the multipliers of the orthonormalized modes.
All of these operations require an external Finite Element code, which provides the tools for meshing the geometry and for performing the necessary FE analyses. Like all of its competitors,
RecurDyn can import RFLex data from ANSYS and NASTRAN. However, it also includes both an internal mesher and an internal FE solver that make it possible to prepare the RFlex data without the need of
an external FE code.
First the fixed-interface modal analysis is performed on the FE model. For our problem, we limited the calculation to the first 30 eigenvalues and eigenvectors. There is not a general rule to fix
this number; normally the user evaluates the mode frequencies and chooses the count according to the highest-frequency phenomena he would like to capture in the simulation. Then, 48 constraint modes are
calculated. This number is obtained by multiplying the number of master nodes (the boom has 8 joints) by the number of Degrees Of Freedom (DOF) of each master node (6). Each constraint mode consists
in a static analysis where a unit displacement is applied to a single DOF, while the remaining ones are kept to zero.
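To make the two mode sets concrete outside of any particular tool, here is a minimal numerical sketch of the Craig-Bampton reduction in Python (assuming NumPy/SciPy). It takes assembled stiffness and mass matrices and a list of boundary (master) DOFs; it illustrates the general method, not RecurDyn's implementation.

```python
import numpy as np
import scipy.linalg as sla

def craig_bampton(K, M, boundary, n_modes):
    """Reduce (K, M): keep `boundary` DOFs physically, represent the
    interior by `n_modes` fixed-interface vibration modes.

    Returns the transformation T and the reduced matrices (Kr, Mr).
    """
    n = K.shape[0]
    boundary = np.asarray(boundary)
    interior = np.setdiff1d(np.arange(n), boundary)
    Kii = K[np.ix_(interior, interior)]
    Kib = K[np.ix_(interior, boundary)]
    Mii = M[np.ix_(interior, interior)]
    # constraint modes: interior static response to unit boundary motions
    psi = -np.linalg.solve(Kii, Kib)
    # fixed-interface modes: eigenvectors of the clamped interior problem
    _, phi = sla.eigh(Kii, Mii)      # ascending eigenvalues
    phi = phi[:, :n_modes]
    # columns of T: [boundary DOFs | modal coordinates]
    T = np.zeros((n, len(boundary) + n_modes))
    T[boundary, :len(boundary)] = np.eye(len(boundary))
    T[interior, :len(boundary)] = psi
    T[interior, len(boundary):] = phi
    return T, T.T @ K @ T, T.T @ M @ T
```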
Once the RFlex boom is ready, it is incorporated in the excavator model (joints are automatically connected to the master nodes) and the MultiBody Simulation is performed as usual. Besides the
results we got earlier from the rigid multibody simulation, the model now outputs also stresses and strains. These structural quantities are available over the entire boom extension, over all
simulation time. This makes possible to easily identify where and when the most critical stress state occurs.
Figure 7 shows the distribution of the equivalent Von Mises stress over the boom deformed body, at the instant where it reaches the maximum value. In our example, the peak is much lower than the
material strength. The highest stress is even lower than the fatigue limit, excluding any type of structural problem for this structure. By watching the animation of the results, it is easy to
observe oscillations of the excavator arm that were not visible in the rigid multibody results. This is a realistic behavior, which can be easily observed in a true working excavator.
FullFlex Technology (FFlex)
Despite being very well performing, RFlex technology has intrinsic limits that make it unusable in several applications. First, the linear behavior is acceptable for small deformations only. Second,
it is almost impossible to properly describe the effects of contacts through a master node interface.
In order to overcome these limits, FunctionBay has introduced in RecurDyn an advanced approach for flexible body modeling. The FullFlex (FFlex) technology is a simplified implementation of the Finite
Element formulation. While RFlex method speeds up the solution by reducing the number of unknowns, the FFlex method is based on a smart simplification of the equations that describe the coordinates
of all mesh nodes.
A FFlex model keeps all of its native DOFs, but the solution time is drastically reduced. Every MultiBody Simulation performed with RecurDyn FullFlex technology is equivalent to a transient Finite
Element analysis. For this reason, this advanced approach has been called Multi-Flexible-Body Dynamics (MFBD).
The FullFlex technology breaks all limits of the ReducedFlex one. It calculates the structural response of structures undergoing large rotations and large displacements. It also manages large
deformations, with the ability to simulate a non-linear behavior of the material. It also makes possible the definition of contacts over the bodies (solids, shells, beams) with no restrictions.
From a numerical point of view, FullFlex models have a noticeable number of DOFs. However, thanks to both the smart formulation of the equations and the power of the hybrid solver, RecurDyn assures
very reasonable computing times.
In order to highlight the benefits of the FullFlex technology, we have modified the excavator model and, in particular, the flexible formulation of the boom body. A new solid mesh is created, with
specific refinements applied on the connecting areas (Figure 9). FDR interfaces are kept only for the connections of the actuators, whereas non-linear contacts are set between holes and pins on the
two boom ends.
Comparison of methods and conclusions
The table in Figure 12 points out the main differences between the three possible approaches to model a mechanical system in RecurDyn. The complexity of the model (and the calculation time) grows
going from left to right, as the number of unknowns grows as well. The user should always choose the approach that returns the desired output with the minimum computational effort.
The most significant advantages of the RecurDyn Full Flex approach can be summarized as follows:
• modeling of body connections in a very realistic way, without introducing any local stiffening spider
• large deformations, large rotations and large displacements are natively taken into account
• it is possible to use both linear and non-linear materials
• there is no limit in the use of non-linear contacts between flexible bodies. They can be set between solid, shell and beam elements
• the simulated dynamics of flexible bodies is exact, because it is no longer the output of a transfer function based on selected vibration modes (Craig-Bampton approach). | {"url":"https://enginsoftusa.com/RecurDynMBD/RecurDyn-MBD-Case_Study-Excavator.html","timestamp":"2024-11-14T08:30:30Z","content_type":"text/html","content_length":"47253","record_id":"<urn:uuid:4d441b63-27de-4eb0-9340-b9fd7f30905a>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00660.warc.gz"} |
Analytic Combinatorics - Free Computer, Programming, Mathematics, Technical Books, Lecture Notes and Tutorials
Analytic Combinatorics
• Title Analytic Combinatorics
• Author(s) Philippe Flajolet and Robert Sedgewick
• Publisher: Cambridge University Press; 1 edition (January 19, 2009)
• Hardcover: 824 pages
• eBook: PDF (826 pages, 11.8 MB)
• Language: English
• ISBN-10: 0521898064
• ISBN-13: 978-0521898065
Book Description
The definitive treatment of analytic combinatorics. This self-contained text covers the mathematics underlying the analysis of discrete structures, with thorough treatment of a large number of
applications. Exercises, examples, appendices and notes aid understanding: ideal for individual self-study or for advanced undergraduate or graduate courses.
About the Authors
• Philippe Flajolet is Research Director of the Algorithms Project at INRIA Rocquencourt.
• Robert Sedgewick is William O. Baker Professor of Computer Science at Princeton University and a member of the board of directors of Adobe Systems. In addition, he is the coauthor of the highly
acclaimed textbook, Algorithms, 4th Edition and Introduction to Programming in Java: An Interdisciplinary Approach
| {"url":"https://freecomputerbooks.com/Analytic-Combinatorics.html","timestamp":"2024-11-13T16:10:15Z","content_type":"application/xhtml+xml","content_length":"35733","record_id":"<urn:uuid:36a2507a-8481-4005-bb7f-b2a7a1fde955>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00287.warc.gz"}
Tutorial:Advanced Rocket Design
From Kerbal Space Program Wiki
By Vincent McConnell and Kosmo-not
Getting to learn basic rocket science for a space game like Kerbal Space Program can be very important to the success of building rockets that can perform a desired job. In this guide, we will be
covering things like calculating the full Delta-V of your ship, explaining how to perform transfer maneuvers, getting Thrust to Weight Ratio, calculating the Peak G-force experienced during a
particular burn, also calculating Delta-V needed for a full Hohmann transfer and much more.
${\displaystyle \Delta v}$ (change in velocity) is the bread and butter of rocket science. It is probably the most important thing to know about your rocket because it determines what your rocket is
capable of achieving. Among the several things we will explain in this basic tutorial, ${\displaystyle \Delta v}$ is most likely the most useful thing you will apply to Kerbal Space Program while
building a rocket. To find the ${\displaystyle \Delta v}$ of your rocket for each stage at a time we have to sum up the part masses of every single part of the stage.
• Total mass: ${\displaystyle m_{\text{total}}}$
• Fuel mass: ${\displaystyle m_{\text{fuel}}}$
• Dry Mass: ${\displaystyle m_{\text{dry}}=m_{\text{total}}-m_{\text{fuel}}}$
The equation only needs the total and dry mass, but it is often easier to get the dry mass by subtracting the fuel mass from the total mass. Of course other combinations, like calculating the total mass
and measuring the fuel and dry mass are also possible.
The next important part of this set of calculations is to find your engine's specific impulse. Specific impulse is a measure of how fuel efficient an engine is (the greater the specific impulse, the
more fuel efficient it is). For example, the non-vectoring stock engine LV-T30 has a vacuum specific impulse of 300 s. So here, we must apply the Tsiolkovsky rocket equation, more informally known as
"The Rocket Equation".
It states:
${\displaystyle \Delta v=I_{sp}\cdot \ln \left({\frac {m_{\text{total}}}{m_{\text{dry}}}}\right)}$
If the specific impulse is given in seconds it is necessary to multiply this value by ${\displaystyle 9.82{\frac {m}{s^{2}}}}$ (see also Terminology about I[sp]).
So go ahead and sum up your stage's total mass with fuel. Then, go ahead and sum up the mass minus the fuel (this can be done by just adding up the 'dry mass' where given). Input these into the
equation in the place of ${\displaystyle m_{\text{total}}}$ and ${\displaystyle m_{\text{dry}}}$. Following is a quick example, where the surface gravity of Earth ${\displaystyle 9.81{\frac {m}{s^
{2}}}}$ is used:
Stage 3 (TMI, Mun lander, Return)
${\displaystyle m_{\text{total}}}$ ${\displaystyle 3.72{\text{t}}}$
${\displaystyle m_{\text{dry}}}$ ${\displaystyle 1.72{\text{t}}}$
${\displaystyle I_{\text{sp}}}$ ${\displaystyle 400{\text{s}}}$
${\displaystyle \Delta v}$ ${\displaystyle 3027.0{\frac {m}{s}}}$
Stage 2 (Kerbin orbit insertion)
${\displaystyle m_{\text{total}}}$ ${\displaystyle 7.27{\text{t}}}$
${\displaystyle m_{\text{dry}}}$ ${\displaystyle 5.27{\text{t}}}$
${\displaystyle I_{\text{sp}}}$ ${\displaystyle 300{\text{s}}}$
${\displaystyle \Delta v}$ ${\displaystyle 946.8{\frac {m}{s}}}$
Stage 1 (Ascent):
${\displaystyle m_{\text{total}}}$ ${\displaystyle 38.52{\text{t}}}$
${\displaystyle m_{\text{dry}}}$ ${\displaystyle 14.52{\text{t}}}$
${\displaystyle I_{\text{sp}}}$ ${\displaystyle 350{\text{s}}}$ (estimated due to atmospheric flight)
${\displaystyle \Delta v}$ ${\displaystyle 3349.9{\frac {m}{s}}}$
Total ${\displaystyle \Delta v}$: ${\displaystyle 7323.7{\frac {m}{s}}}$
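The stage table above can be reproduced with a few lines of Python (a sketch using the same 9.81 m/s² conversion; masses in tonnes, Isp in seconds):

```python
import math

G0 = 9.81  # m/s^2, conversion between Isp in seconds and Isp in m/s

def stage_dv(m_total, m_dry, isp_s):
    """Tsiolkovsky delta-v for one stage, in m/s."""
    return isp_s * G0 * math.log(m_total / m_dry)

stages = [(3.72, 1.72, 400), (7.27, 5.27, 300), (38.52, 14.52, 350)]
for s in stages:
    print(round(stage_dv(*s), 1))                   # 3027.0, 946.8, 3349.9
print(round(sum(stage_dv(*s) for s in stages), 1))  # 7323.7
```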
Multiple engines
To calculate the I[sp] for multiple engines with different I[sp] values, you need to find total thrust and mass flow:
${\displaystyle I_{sp_{avg}}={\frac {\sum \limits _{i}^{n}(thrust_{i})}{\sum \limits _{i}^{n}({\dot {m}}_{i}\cdot g_{0})}}={\frac {\sum \limits _{i}^{n}(thrust_{i})}{\sum \limits _{i}^{n}\left({\
frac {thrust_{i}}{I_{sp_{i}}}}\right)}}={\frac {thrust_{1}+thrust_{2}+\dots +thrust_{n}}{thrust_{1}\div I_{sp_{1}}+thrust_{2}\div I_{sp_{2}}+\dots +thrust_{n}\div I_{sp_{n}}}}}$
This will give you the correct I[sp] to use for your Δv calculation. If all engines are the same, they act as one engine in this calculation so the sums aren't needed.
Calculating transfer maneuvers
The next part of this tutorial is how to perform a transfer maneuver. This kind of action is called a Hohmann Transfer and it requires two burns at opposite points in an orbit. Adding velocity will
boost our apoapsis higher. We would then simply wait until we hit our newly established apoapsis and then add more velocity to boost our periapsis to circularize. Or, we could drop our orbit by
subtracting velocity by burning retro-grade.
We can also apply some ${\displaystyle \Delta v}$ calculations to find out how much velocity change we will need to perform this maneuver. We will break this transfer up into two impulsive burns. For example purposes, we
will start at a 100 km orbit and then boost into a 200 km orbit. Both circularized. The formula for the first burn is the following:
${\displaystyle \Delta v_{1}={\sqrt {\frac {\mu }{r_{1}+R}}}{\Bigg (}{\sqrt {\frac {2(r_{2}+R)}{r_{1}+r_{2}+2R}}}-1{\Bigg )}}$
This is the formula for the final burn in the transfer:
${\displaystyle \Delta v_{2}={\sqrt {\frac {\mu }{r_{2}+R}}}{\Bigg (}1-{\sqrt {\frac {2(r_{1}+R)}{r_{1}+r_{2}+2R}}}{\Bigg )}}$
• ${\displaystyle \mu }$= Gravitational parameter of parent body (3530.461 km³/s² for Kerbin).
• ${\displaystyle r_{1}}$= The altitude of our first orbit (100 km in this case).
• ${\displaystyle r_{2}}$= The altitude of our second orbit (200 km in this case).
• ${\displaystyle R}$= The radius of parent body (600 km in this case).
This formula will give us our velocity for the burn in km/s (multiply by 1000 to convert it into m/s). It's important to make sure that you will have the ${\displaystyle \Delta v}$ in the stage to
make this burn. Again, you can do that by using the ${\displaystyle \Delta v}$ calculations above.
In our case we get a Δv[1] of 73.65 m/s, a Δv[2] of 71.23 m/s and a total Δv of 144.88 m/s.
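A quick script (illustrative; constants exactly as given above) reproduces these numbers:

```python
import math

MU = 3530.461  # km^3/s^2, Kerbin's gravitational parameter
R = 600.0      # km, Kerbin's radius

def hohmann_dv(r1, r2, mu=MU, R=R):
    """Delta-v in m/s for the two burns of a Hohmann transfer between
    circular orbits at altitudes r1 and r2 (km)."""
    s = r1 + r2 + 2 * R
    dv1 = math.sqrt(mu / (r1 + R)) * (math.sqrt(2 * (r2 + R) / s) - 1)
    dv2 = math.sqrt(mu / (r2 + R)) * (1 - math.sqrt(2 * (r1 + R) / s))
    return dv1 * 1000, dv2 * 1000

dv1, dv2 = hohmann_dv(100, 200)
print(round(dv1, 2), round(dv2, 2), round(dv1 + dv2, 2))  # 73.65 71.23 144.88
```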
Calculating fuel flow
Next, we will explain how to calculate fuel flow in mass to see how much fuel a burn uses up in a specific amount of time.
If we know the ${\displaystyle \Delta v}$ needed for the burn and the total mass of the rocket before the burn, we can calculate how much fuel is required to complete the burn.
First, we calculate the mass of the rocket after the burn is complete. To do this, we use the Tsiolkovsky Rocket Equation, inputting the initial mass and ${\displaystyle \Delta v}$ of the burn. We
can then solve the equation for the final mass (“dry mass”) after the burn. The difference between these two masses will be used to determine the length of time that is needed to complete the burn.
The equation for mass flow rate of fuel, given I[sp] and thrust, is:
${\displaystyle {\dot {m}}={\frac {thrust}{I_{sp}}}}$
where ${\displaystyle {\dot {m}}}$ is the mass flow rate of fuel consumed. Again, if the specific impulse is given in seconds it needs to be multiplied by 9.81 m·s⁻² (see also Terminology about I[sp]).
Dividing the difference between initial mass and final mass for the burn by the mass flow rate of fuel, we can determine how many seconds are required.
Usually, when the thrust is in kN and the specific impulse is in m/s the result is in Mg/s (= t/s). As the density of the liquid fuel/oxidizer mixture is 5 Mg/m³, each Mg/s of fuel flow corresponds to 1/5 m³/s = 200 dm³/s (= 200 l/s).
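Putting this section together, a burn-time sketch might look as follows; the 215 kN thrust in the usage line is an assumed engine value for illustration, not a figure from this tutorial:

```python
import math

def burn_time(m0, dv, thrust, isp_ms):
    """Seconds of burning to achieve `dv` (m/s), starting from mass m0 (t),
    with thrust in kN and specific impulse in m/s. thrust/isp is then t/s."""
    m1 = m0 / math.exp(dv / isp_ms)   # rocket equation solved for final mass
    mdot = thrust / isp_ms            # fuel mass flow rate, t/s
    return (m0 - m1) / mdot

# e.g. the 7.27 t stage above burning 946.8 m/s on an assumed 215 kN engine
print(round(burn_time(7.27, 946.8, 215, 300 * 9.81), 1))  # ~27.4 s
```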
Orbital velocity
Rather easy is the formula to calculate the orbital velocity of an orbit. This assumes circular orbit or the velocity of a specific point in an orbit. For this, we simply do this calculation:
${\displaystyle {\sqrt {\frac {\mu }{r}}}}$
${\displaystyle \mu }$ = Gravitational Parameter of parent body. (km³/s²)
${\displaystyle r}$ = radius of orbit. (km)
If we input the radius of the orbit in kilometers, our orbital velocity will come out in kilometers per second. In a 100 km orbit around Kerbin, our radius will be 700 km, meaning our velocity will be ~2.2458 kilometers per second (km/s), or 2245.8 m/s.
Delta-v map
A ${\displaystyle \Delta v}$ map consists of approximate amounts of ${\displaystyle \Delta v}$ needed to get from one place (whether it is on the ground or in space) to another. The ${\displaystyle \
Delta v}$ values we have for our ${\displaystyle \Delta v}$ map are approximate and include a fudge factor (in case we slip up on our piloting). Our map is as follows:
Launch to 100 km Kerbin orbit: 3500 m/s
Trans-Munar Injection: 900 m/s
Landing on the Mun: 1000 m/s
Launch from Mun and return to Kerbin: 1000 m/s
Total ${\displaystyle \Delta v}$: 6400 m/s
If we design our rockets to have 6400 total ${\displaystyle \Delta v}$, and the acceleration of the launch stages are adequate, we can have confidence that our rocket is able to land on the Mun and
return to Kerbin. A rocket with a little less ${\displaystyle \Delta v}$ can accomplish this goal, but it is less forgiving of less efficient piloting.
Calculate the acceleration
→ See also: Thrust-to-weight ratio
Calculating the thrust-to-weight ratio is very simple. It is important to know the thrust-to-weight ratio of your rocket to ensure your rocket will actually lift off. If your TWR is less than 1, you
can bet that you won't make an inch in altitude when starting from the launch pad. The minimum optimal TWR to have for your rocket at launch is 2.2.
To lift off, the rocket's thrust needs to exceed the gravitational force. The TWR is simply the thrust of all of your current stage engines divided by the weight of your ship, fully fuelled:
${\displaystyle F_{T}>F_{G}=m\cdot g\implies TWR={\frac {F_{T}}{F_{G}}}={\frac {F_{T}}{m\cdot g}}>1}$
To calculate the acceleration simply use Newton's second law:
${\displaystyle F=m\cdot a=F_{T}-F_{G}=F_{T}-m\cdot g=m\cdot a\implies a={\frac {F_{T}}{m}}-g}$
These calculations only work when counteracting gravity. While coasting on an orbit the gravitational acceleration isn't important and thus the TWR may be below one and still work. The acceleration
is at minimum directly after launch when the craft is heavy and at maximum immediately before running out of fuel, when the tanks are dry:
${\displaystyle a_{min}\approx {\frac {F_{T}}{m_{total}}}-g}$ and ${\displaystyle a_{max}\approx {\frac {F_{T}}{m_{dry}}}-g}$
The dry mass also includes the fully fuelled upper stages of the craft. To determine the g-force simply divide achieved acceleration by ${\displaystyle g_{0}=9.81{\frac {m}{s^{2}}}}$. As the craft is
in free fall, the gravitational acceleration isn't felt by the crew so the accelerations appear to be higher for the crew leading to cancelling out the factor g:
${\displaystyle {\text{g-force}}_{min}\approx {\frac {F_{T}}{m_{total}\cdot g_{0}}}}$ and ${\displaystyle {\text{g-force}}_{max}\approx {\frac {F_{T}}{m_{dry}\cdot g_{0}}}}$
As the weight of the ship depends on the current gravitation (${\displaystyle g}$) the TWR differs between the celestial bodies.
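These formulas script directly; here is a small sketch with the unit conventions of this tutorial (thrust in kN, mass in t, g in m/s², so kN/(t·m/s²) is dimensionless):

```python
def twr(thrust, mass, g=9.81):
    """Thrust-to-weight ratio; pass the local surface g of the body."""
    return thrust / (mass * g)

def accel_range(thrust, m_total, m_dry, g=9.81):
    """Min/max acceleration (m/s^2) of a stage working against gravity."""
    return thrust / m_total - g, thrust / m_dry - g
```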
This guide will hopefully have helped with designing your rockets to allow you to get the job done—whatever it may be—with no test flights first. We hope this guide has been helpful to new and
continuing KSP pilots alike. | {"url":"https://wiki.kerbalspaceprogram.com/wiki/Tutorial:Advanced_Rocket_Design","timestamp":"2024-11-02T13:48:00Z","content_type":"text/html","content_length":"109414","record_id":"<urn:uuid:1cff3182-979e-4ff0-ba6e-1c410e4ce6d7>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00268.warc.gz"} |
What Might Explain Ron Santo's Low Hall Of Fame Voting Percentages?
It seems like it might be for the reasons I have seen people give the last week or so: no post-season exposure, somewhat short career (he did not reach 10,000 PAs), lack of milestones like 3000
hits or 500 HRs and lack of MVP awards.
Last year and earlier this year I posted some regression generated equations that tried to explain the percentage of the Hall of Fame vote player got in their first year of eligibility (and also
their highest percentage). The model I came up with was based on some trial and error. That seemed unavoidable, since it is hard to have priors on what exactly the voters are thinking. The model
looked at all players that became eligible for the first time from 1980-2009.
The model uses the following data to explain vote percentage:
Reaching 10,000 PAs
500 HRs
3000 hits
500 SBs
Gold Gloves
All-Star games
World Series performance
MVP awards
Gold Gloves and All-Star games got capped at certain levels which were then squared. The idea was that those things have an exponential effect which tapers off. There were also interaction terms for
World Series performance, Gold Gloves and All-Star games. The idea there was that getting lots of Gold Gloves and playing in lots of All-Star games has more than an additive effect (after I discuss
what the model predicted for Santo, technical details like regression results and variable descriptons will be covered).
Santo's first year percentage was 3.9%. Normally, he would no longer be eligible in the writers' voting. But he and some other players were re-instated in 1985. He got 13.4%. The model predicted that
he would get 17.65%. The standard error was .08. So even if we give him 8% more, that only jumps him up to 21.4%. Still a pretty low total for a first year (Billy Williams got 23.4% in his first year
in 1982 and steadily increased until he got 85.7% in 1987).
Santo's highest percentage was 43%. The model predicted it would be 30%. So he actually did better than that. The standard error was .117. So he was predicted to be about 4 standard errors below what
is needed for induction, 75%. And his actual highest percentage was still about 3 standard errors below 75%. Billy Williams highest predicted percentage was 29.6% while it was actually 85.7%. That
differential of 56.1% is the highest positive differential. Why Williams is in and Santo isn't is an interesting question.
Here was the equation where the player's first year vote percentage was the dependent variable
PCT = -.010 + .00086(WSAS) + .048(GGAS) + .070(MVP) + .404(3000 HIT) + .280(500 HR) + .002(ASSQ10) - .00089(GGSQ7) + .071(500SB) - .006(WSIMPSQ50) + .100(10000PA)
The adjusted r-squared was .898. The standard error was .08.
Here was the equation where the player's highest vote percentage was the dependent variable
PCT = -.014 + .00037(WSAS/1000) + .025(GGAS/1000) + .067(MVP) + .257(3000 HIT) + .201(500 HR) + .0048(ASSQ10) - .0013(GGSQ7) + .071(500SB) - .00167(WSIMPSQ50/1000) + .137(10000PA)
The adjusted r-squared was .861. The standard error was .117.
MVP is the number of MVP awards won; 3000H is a dummy variable (1 if a player reached it, 0 otherwise). The 500HR is also a dummy variable, as are 500SB and 10000PA (if you made it to 10,000 career
plate appearances, you get a 1, 0 otherwise). I used all the voting data from 1990-2009.
What is ASSQ10? It is the square of the number of All-star games played in squared. But AS games played is maxed out at 10. The assumption here is that being an all-star has a positive exponential
effect but only up to a point where no more games helps (I have a graph below to help explain this). The GGSQ7 is the same thing for Gold Gloves.
WSIMPSQ50 involves World Series play. First, WSIMP is World Series PAs times OPS. The idea here that the more you play in the World Series the more votes you would get, but by multiplying it by OPS,
it also includes how well you played (or just hit). This gets maxed out at 50 and is squared, for the same reason as all-star games (yes, Reggie Jackson is first here and way ahead of everyone else
at 141, with Dave Justice and Lonnie Smith tied for 2nd at 101).
The last two variables are interaction variables. GGAS is the Gold Glove variable multiplied by the All-Star variable, and WSAS is the World Series variable times the All-Star game variable. It looks strange that the coefficient values on GGSQ7 and WSIMPSQ50 are negative, but notice that the coefficients on the interaction variables are positive. I think this is like when a regression uses both X and X-squared because the phenomenon is non-linear (an inverted parabola, for example): the coefficient on X ends up positive while the X-squared coefficient is negative. The reason I put in these interaction variables was to see if players who were strong in both got an extra boost, as if there were some synergy going on. It seems like they did get an extra boost.
Since the dependent variable can only go from 0 to 100, the coefficients on some variables would be very low. So I divided these three variables (the ones marked /1000 above) by 1000; my stat package was showing coefficient values of .00000 before I did this.
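To make the plug-in arithmetic concrete, here is a small Python sketch of the first-year equation. The coefficients are copied from above; the cap-and-square helper follows the variable descriptions; the function names and the sample profile are hypothetical; and since the post does not fully spell out how the GGAS and WSAS interaction terms are constructed, they are simply passed in (zero here).

def cap_and_square(x, cap):
    # ASSQ10 / GGSQ7 / WSIMPSQ50 construction: cap the raw value, then square it
    return min(x, cap) ** 2

def predicted_first_year_pct(v):
    # First-year vote-share equation; coefficients copied verbatim from the post.
    # v holds the already-constructed regressors, so the output is on the same
    # 0-to-1 share scale the coefficients imply.
    return (-.010 + .00086 * v["WSAS"] + .048 * v["GGAS"] + .070 * v["MVP"]
            + .404 * v["HIT3000"] + .280 * v["HR500"] + .002 * v["ASSQ10"]
            - .00089 * v["GGSQ7"] + .071 * v["SB500"]
            - .006 * v["WSIMPSQ50"] + .100 * v["PA10000"])

# Hypothetical profile: 9 All-Star games, 5 Gold Gloves, 1 MVP, no milestones,
# no World Series play (so both interaction terms are taken as zero here).
profile = {"WSAS": 0, "GGAS": 0, "MVP": 1, "HIT3000": 0, "HR500": 0,
           "ASSQ10": cap_and_square(9, 10),   # 81
           "GGSQ7": cap_and_square(5, 7),     # 25
           "SB500": 0, "WSIMPSQ50": 0, "PA10000": 0}
print(predicted_first_year_pct(profile))      # about 0.20, i.e., a ~20% share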
11 comments:
Very interesting, Cy. Great work.
One other thing that I think holds him back is that I've gotten the impression he wasn't well liked at the time he played and perhaps even less liked after he started campaigning to get into the Hall of Fame, though that campaigning was largely by the fans themselves.
Thanks for the comments. So you think the fans hurt his cause? Did Billy Williams campaign? Did the fans campaign on his behalf? Why is he in and not Santo?
Stick to economics.
Hey Cy,
I've got a paper under review right now modeling the same sort of thing. Interestingly enough, at least based on what the BBWAA likes, Ron Santo had none of it. And I didn't actually bother to
include playoff experience in the mix.
Without taking the era into account (which, of course, should be done), Santo is comparable offensively to Chili Davis (they're almost like twins if you ask me). By my model, his career gave him about an 11.3% chance of getting into the Hall, just ahead of Steve Garvey, Brett Butler, and Bernie Williams. In fact, according to the model, Robb Nen seems to have a career more in line with what the BBWAA likes to vote for.
Now, this says nothing about them being right, but rather tells us a bit how they think. As you say, the shorter career seems to have done him in.
Thanks. Interesting that we both have similar conclusions. Good luck on your paper and in grad school.
I think the fans did hurt his cause, Cy. I recall reading on more than one occasion that the Veteran's Committee was irritated with the campaign. That doesn't say anything as to why he was not
elected before the VC had a chance to do so. I think these numbers you post here pretty much explain that, but in recent years, yes, the fans campaign hurt his chances in my opinion.
As for why he wasn't elected initially, I think it's partly the numbers you've posted here and I do think there is reason to believe that silly heel tap frustrated the BBWAA.
All of that said, your work here further proves to me we need an improved system of electing hall of famers. I was never outspoken about Santo getting into the Hall of Fame because I really don't care. The HOF is important to the individual and while I would have liked to have seen Santo get in, there's a detailed record of everything he accomplished and all of the hardships he suffered along the way. That will never be forgotten. I think a lot of fans think it will, but just as I look at stats pages of some of the greats from the 1910s or 1920s and read more about them, many fans will do the same with regard to Santo in the future. Those who want to know will and those who don't care won't. In other words, nothing will change except that he does in fact deserve to be in Cooperstown. Then again, there are others who deserve to be and are not.
dwill66, this is a thoughtful article and it's well written. I don't believe he is stating his opinion one way or the other, but rather showing why he was not in the HOF. This is exactly the kind
of thing that keeps me and many others coming here.
I'm not sure if I'd call Santo's vote totals low. He exhausted his 15 years of eligibility on the BBWAA ballot and peaked at 43.2 percent of the vote his final year. I wrote a piece on Don Mattingly a while back where I noted that, historically, the Veterans Committee has had a better than 50 percent induction rate on players who peak between even 20 and 30 percent of the vote. My guess is Santo gets in somewhere within the next 10-15 years.
I admittedly did not read your post before submitting my first comment. Having now read it, I would be interested to see what it would show for Gil Hodges.
Good work!
Thanks for dropping by. I guess I meant low percentages as in they were not enough to get him elected. On Hodges, I am not sure if he was part of my study, since he started getting votes before 1980. I wanted to have some cutoff. I will check when I get home. One problem I wanted to avoid was going back too far in time. As you do that, who the voters are changes, and I think any results are less meaningful.
Thanks for the vote of confidence.
This is a couple months late and may have already been answered, but a good cutoff for looking back on Hall of Fame voting could be somewhere around 1967. That's about the time modern voting
procedures like 15 years of eligibility were implemented. | {"url":"https://cybermetric.blogspot.com/2010/12/what-might-explain-ron-santos-low-hall.html","timestamp":"2024-11-13T22:55:36Z","content_type":"text/html","content_length":"70651","record_id":"<urn:uuid:2191bce5-4e4c-4d3d-9702-757c5c4e3cbc>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00276.warc.gz"} |
Contents: Introduction; Literature Review; Adaptive Plant Propagation Algorithm (APPA); Single Anchor Node Localization Concept; Flowchart of optimized localization using APPA; Detailed description of 3D localization using APPA; Umbrella projection to find out the position of mobile target nodes; Sensor field in 3D environment with anchor and virtual anchor nodes; 3D centroid calculation; APPA particles deployed in 3D scenario; Estimated 3D location; Simulation Results and Discussion; Parameter settings; Optimized localization of sensor nodes with BBO, PSO, FA, HPSO, GWO, and APPA; Comparison of meta-heuristic algorithms; Comparison of average localization error for all the six algorithms; Conclusions; References
Computers, Materials & Continua (CMC), Tech Science Press, USA. ISSN 1546-2226, 1546-2218. Article 19171, DOI 10.32604/cmc.2022.019171. Vol. 70, no. 1, pp. 305-321, 2022.
Three Dimensional Optimum Node Localization in Dynamic Wireless Sensor Networks
Gagandeep Singh Walia (1), Parulpreet Singh (1), Manwinder Singh (1), Mohamed Abouhawwash (2,3), Hyung Ju Park (4), Byeong-Gwon Kang (4), Shubham Mahajan (5), Amit Kant Pandit (5)
(1) Department of Electronics and Communication Engineering, Lovely Professional University, Jalandhar, 144411, Punjab, India
(2) Department of Mathematics, Faculty of Science, Mansoura University, Mansoura, 35516, Egypt
(3) Department of Computational Mathematics, Science, and Engineering (CMSE), Michigan State University, East Lansing, MI, 48824, USA
(4) Department of ICT Convergence, Soonchunhyang University, Asan, 31538, Korea
(5) School of Electronics & Communication, Shri Mata Vaishno Devi University, Katra, 182320, India
Corresponding Author: Byeong-Gwon Kang. Email: bgkang@sch.ac.kr
© 2022 Walia et al. This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Location information plays an important role in most applications of Wireless Sensor Networks (WSNs). Recently, many localization techniques have been proposed, but most of these deal with two-dimensional applications, whereas in three-dimensional applications the task is more complex and there are large variations in altitude. In these 3D environments, sensors are placed on mountains for tracking and deployed in the air for monitoring pollution levels. For such applications, 2D localization models are not reliable, and the design of 3D localization systems in WSNs therefore faces new challenges. In this paper, only a single anchor node is used to find the unknown nodes in a three-dimensional environment. In the simulation-based environment, the nodes with unknown locations move at the middle and lower layers, whereas the top layer is equipped with the single anchor node. A novel soft computing technique, the Adaptive Plant Propagation Algorithm (APPA), is introduced to obtain the optimized locations of these mobile nodes. The mobile target nodes are heterogeneous and deployed in an anisotropic environment with the Degree of Irregularity (DOI) set to 0.01. The simulation results show that the proposed APPA algorithm outperforms other meta-heuristic optimization techniques in terms of localization error, computational time, and the number of located sensor nodes.
Keywords: wireless sensor networks; localization; particle swarm optimization; H-best particle swarm optimization; biogeography-based optimization; grey wolf optimizer; firefly algorithm; adaptive plant propagation algorithm
Wireless Sensor Networks (WSNs) contain many small, low-power sensor nodes (SNs) deployed randomly in the environment to observe physical behavior. Sensors are often used to obtain measurements of location, temperature, humidity, irradiance, sound, and pressure [1]. In most WSN applications, determining location is crucially important, and the sensor nodes deployed in these areas matter greatly because no one is present in the field to locate and place the nodes personally. In such applications, sensor nodes are randomly deployed and take up unknown positions in the sensor field, and if the exact location of an occurring event is not known, the information gathered by these sensors is useless [2]. GPS, one of the most widely used techniques for localization, was developed to overcome the limitations of earlier navigation systems [2]. GPS is used in military, industrial and, more recently, consumer/civilian applications. However, GPS does not work with obstacles that block line-of-sight (LOS) communication between the satellites and the GPS receiver; its utility is therefore limited in dense forests, in mountains, and in indoor environments. To overcome these GPS limitations, the sensor network itself can be used for localization. An alternative way to find all unknown nodes in the scenario is to deploy a few sensors with a built-in GPS feature, known as anchor nodes, whose exact locations are known after deployment in the WSN. Using the known locations of these anchors, many methods already available in the literature can evaluate the locations of the unknown (target) nodes. Range-based and range-free algorithms are the two families found in the literature. The former measure the distance between nodes using RSSI, AoA, or ToA [3–4]; range-free strategies, such as distance vector hop, multidimensional scaling, and the ad hoc positioning system, provide the locations of the target nodes with fewer infrastructure requirements. Providing exact localization is one of the greatest problems in WSNs: localization can be done precisely for static nodes, but it is much more difficult for moving nodes. We introduce the idea of using a novel APPA to locate unknown nodes with the help of only one node, called the anchor, about which virtual anchors are assumed in six different directions. Whenever a node whose location is to be found comes under the range of the anchor, virtual anchors are placed at 60-degree angles with the same range as the anchor, and out of the six only three are nominated, together with the anchor itself, to trace the exact position of the unknown node, because at least four SNs are needed to find a three-dimensional position. Here, we work to evaluate the efficiency of various meta-heuristics on the localization problem, with emphasis on APPA.
The remainder of this work is organized as follows. Section 2 reviews the literature and the challenges of 3D localization. Section 3 introduces the novel APPA approach. Section 4 explains the process of deploying a single anchor node in the sensing field. Section 5 presents the results and discussion. Finally, the conclusions and future scope are discussed in Section 6.
A great deal of research has been done in wireless communications. Liu et al. [5–6] have published several papers on a variety of wireless networks, including mobile ad hoc social networks and mobile opportunistic networks. This paper, in comparison, focuses on WSNs with multiple sensor nodes tracking a physical area. Various localization schemes have recently been proposed, with the majority of research proposals concentrating on two-dimensional localization techniques with a flat sensing region. Because of this, the design of three-dimensional localization systems in WSNs faces new challenges.
Chu et al. [7] developed a new global optimization algorithm called the Symbiotic Organism Search Algorithm with Multi-Group Quantum-Behavior Communication (MQSOS) by integrating the multi-group
communication and quantum behavior strategies with the symbiotic organisms search (SOS) algorithm. It is swift and convergent, and it is useful for solving practical problems involving multiple
arguments. Under the CEC2013 large-scale optimization test suite, they compared MQSOS to other intelligent algorithms including particle swarm optimization (PSO), parallel PSO (PPSO), adaptive PSO
(APSO), Quasi-Affine Transformation Evolutionary (QUATRE), and oppositional SOS (OSOS). The results of the experiments show that the MQSOS algorithm outperformed the other intelligent algorithms. Liu
et al. [8] proposed various strategies to accomplish the localization of nodes using distance data between neighbor nodes. They verified through experiments that the proposed algorithms provide better performance in localization accuracy and energy utilization. Distributed localization nodes, according to Kotwal et al. [9], use crude RSSI to estimate their minimum and maximum distance limits with
respect to anchor nodes. A simple binary search algorithm is used in the approximation. The rough distance limits assist in the creation of the node's feasibility area in relation to anchor nodes. To
solve the optimization problem of minimizing localization error, the feasibility area coordinates are used as initial particles in particle swarm optimization (PSO). It was discovered that nodes can
be localized with greater accuracy using simple calculations than current algorithms, and that fewer anchor nodes with limited communication range are needed. In a wireless sensor network (WSN)
system, Low et al. [10] present a localization system for unknown emitter nodes. For this scheme, four anchor nodes with known positions are presumed, as well as one or more unknown nodes
transmitting RF signals that can be received by the four anchor nodes. The system's only source of data is the obtained signal strength indicator, which is inaccurate. The particle swarm optimization
(PSO) scheme, which can be implemented in real time, is investigated in this paper to obtain a better approximate position of the sensor nodes. The suggested approach's simulation and experimental
findings are discussed. To improve the WSN localization accuracy, Wang et al. [11] proposed a new coupling algorithm based on Bacterial Foraging Algorithm (BFA) and Glow-worm Swarm Optimisation (GSO)
(BFO-GSO). The algorithm has good convergence speed and the optimization performance is verified by CEC2013 benchmarks. The RSSI method is used to measure the estimated distance between the reference
and target nodes deployed in the field using the trilateration approach, according to Graefenstein et al. [12]. Sumathi et al. [13] proposed an RSS method for locating unknown nodes that only needs a
single anchor node. This paper presents a least squares method for locating fixed target nodes. Guo et al. [14] developed a mobile-based method called perpendicular intersection (PI) that does not
map RSS distances directly; the geometric PI relationship is used to calculate the location of the node. Shi et al. [15] proposed a scheme in which a single mobile anchor sends ultra-wideband (UWB) signals to the sensor nodes for localizing the whole network. The Distance Vector-Hop based approach for locating sensor nodes was introduced by Wang et al. [16]; the shortcoming of this algorithm is primarily its complexity and increased cost. Xu et al. [17] proposed an improved 3D localization technique that combined DV-Distance with a quasi-Newton optimization approach to improve the
performance. The efficacy of the proposed algorithm was further checked by taking into account localization accuracy and coverage. The 3D WSN localization approach based on irregular RSSI model was
proposed by Li et al. [18]. The authors proposed this model to quantify the relationship between DOIs and signal transmission range variability. When the deployed sensors are positioned in an area
surrounded by a community of anchor nodes, Ahmad et al. [19] proposed a parametric loop-division algorithm for 3D localization. This approach accurately shrinks the network toward the center and
produces reliable localization performance. Gopakumar et al. [20] proposed a new and computationally efficient swarm intelligence method for locating static nodes that is easy to implement and
requires little memory. Chuang et al. [21] use the RSS ranging technique to efficiently locate sensor nodes using a PSO-based approach. In terms of localization, the scheme has a higher success rate.
PSO-Iterative is a distributed iterative localization algorithm developed by Kulkarni et al. [22–23]. There are more than three anchors for each target node, and PSO is used to reduce the
localization error. Kumar et al. [24] proposed localization strategies based on HPSO and BBO principles with minimal hardware specifications, dubbed Range free HPSO and BBO, respectively. The edge
weights are optimized using PSO and BBO applications. In order to optimize the position of unknown sensor nodes, Arora et al. [25] suggested using the BOA optimization algorithm. The performance of
PSO and FA in 2D scenarios is compared to the performance of BOA. As compared to other meta heuristic algorithms, their solution outperforms in terms of convergence time and position accuracy.
Range-based methods are widely used due to their higher precision, but flip ambiguity is a major disadvantage of range-based methods. References [26–30] proposed a PSO-based computational intelligence algorithm for determining the position of moving target nodes in WSNs. The algorithm is divided into two stages, with anchor nodes placed at the corners of the sensing area. In the first stage, distance calculations were made using RSSI. In a later stage, virtual anchor nodes were assumed in order to locate unknown nodes with the aid of the anchor. In these stages, centroid calculations are combined with the PSO optimization technique, and the results indicate a faster convergence time. In this article, the APPA algorithm has been used to address the localization problem in WSNs. The main goal is to investigate the efficiency of the APPA algorithm in the localization of WSNs and compare it to the output of other algorithms. The following section explains the basic
concept behind APPA.
This algorithm comprises a population of shoots, and every shoot represents a solution in the search space. Each shoot is assumed to have taken root, which corresponds to the objective function being evaluated at that point. Each shoot will then send runners out to explore the space around the solution.
A plant is considered to be at a location Yi = {yi,j, j = 1, 2, …, n}, where n is the dimension of the search space. Let the population size be denoted Np, which determines the number of strawberry plants used initially. Strawberry plants in poor spots propagate by sending out a few long runners, a process known as exploration. Plants in locations with an abundance of essential nutrients, minerals, and water propagate by sending out many short runners, a process known as exploitation. The maximum number of generations considered is gmax and the maximum number of permissible runners per plant is nmax.
The objective function values at the positions Yi, i = 1, 2, …, Np are calculated, and the candidate solutions are sorted according to their fitness scores, where fitness is a function of the value of the objective function under consideration. It is better to keep the fitness scores within a certain boundary between 0 and 1, that is, f(x) ∈ [0, 1]. To keep the fitness values within this range, a mapping is done using the sigmoid function, described by Eq. (1).
The effect of this mapping function is that it provides a means of further emphasizing better solutions over those which are not as good.
The number of runners sent out by a solution and the distance each of them propagates are then determined. There is a direct relationship between the number of runners produced by a candidate solution and its fitness, given by Eq. (2).
Here, nr is the number of runners produced for solution i in a particular generation or iteration after the population is sorted according to fitness, nmax is the maximum permissible number of runners, Ni is the mapped fitness as determined using Eq. (1), r is a random number between 0 and 1 drawn for each individual in every iteration or generation, and ceil refers to the ceiling function. The minimum number of runners is 1 and the maximum is nmax; this ensures that there is always at least one runner, which may correspond to the long runner described before. The distance of each runner is inversely related to its fitness, as shown in Eq. (3), where n represents the dimension of the search space. Each runner is thus restricted to a certain range between −0.5 and 0.5. The calculated runner distances are used to update the solution for further exploration and exploitation of the search space by Eq. (4).
The algorithm is made adaptive with respect to the limits of the search domain; hence the name Adaptive Plant Propagation Algorithm (APPA). If the limits are violated, the point is adjusted to lie within the search space. Here, aj and bj are the lower and upper boundaries of the jth coordinate of the search space, respectively. New plants are pooled and the entire extended population is sorted after every individual plant in the population has sent out its designated runners. To keep the size of the population fixed, it is ensured that the candidates with lower growth are removed from the population. Another strategy is adopted to avoid getting stuck in local minima: it might happen that for a certain number of generations a candidate solution does not improve, and the runners it sends out are also not fit enough to remain in the population. A threshold is therefore set such that if the number of generations in which a solution is not improving exceeds the threshold, the solution is discarded and a fresh candidate solution or individual is produced within the limits of the search space.
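Since Eqs. (1) through (4) are referenced above rather than reproduced here, the following Python sketch should be read as an illustration of the loop just described, not as the authors' exact implementation: the sigmoid fitness mapping, the runner count ceil(nmax * fitness * r), and the short-versus-long runner steps follow common plant propagation variants, and all names and constants are assumptions.

import math, random

def appa_minimize(objective, bounds, pop_size=30, n_max=3, generations=100, seed=0):
    random.seed(seed)
    dim = len(bounds)
    # initial population of "plants" scattered uniformly over the search space
    pop = [[random.uniform(a, b) for (a, b) in bounds] for _ in range(pop_size)]

    def clamp(x, j):
        # adaptive step: fold a runner that leaves [aj, bj] back onto the boundary
        a, b = bounds[j]
        return min(max(x, a), b)

    for _ in range(generations):
        pop = sorted(pop, key=objective)[:pop_size]   # keep the Np fittest plants
        f_best, f_worst = objective(pop[0]), objective(pop[-1])
        offspring = []
        for plant in pop:
            # normalize the objective value, then squash it with a sigmoid (Eq. (1)-style)
            z = 0.5 if f_worst == f_best else (f_worst - objective(plant)) / (f_worst - f_best)
            fitness = 1.0 / (1.0 + math.exp(-z))
            # fitter plants may send more runners; always at least one
            runners = max(1, math.ceil(n_max * fitness * random.random()))
            for _ in range(runners):
                # runner length is inversely related to fitness: short runners
                # exploit good spots, long runners explore from poor ones
                offspring.append([clamp(plant[j] + (1.0 - fitness)
                                        * (random.random() - 0.5)
                                        * (bounds[j][1] - bounds[j][0]), j)
                                  for j in range(dim)])
        pop = pop + offspring
    return min(pop, key=objective)

# Toy check on the 3D sphere function; the true minimizer is the origin.
print(appa_minimize(lambda p: sum(x * x for x in p), [(-5.0, 5.0)] * 3))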
In this 3D localization problem, a single anchor node with known location information is considered, and this location information is utilized to find the locations of randomly placed mobile nodes. The mobile nodes are grouped into three layers, with the anchor placed at the topmost position and the unknown nodes moving at the middle and bottom layers. The anchor node transmits a beacon signal that is sensed by the mobile nodes and, using the concept of virtual anchors, three of these virtual anchors together with the anchor node itself are selected to locate all the mobile nodes. Based on the received RSSI, the approximate distance between the anchor and a target node is estimated. The complete flow of the localization procedure is given by Fig. 1, and a detailed description of localization using the APPA algorithm is given in Fig. 2.
The proposed algorithm has the properties listed below; the further steps for estimating location information are discussed in this section.
A new method, based on the APPA algorithm, of projecting virtual nodes into the field to determine the exact locations of deployed sensor nodes in a three-dimensional scenario.
Line-of-sight (LoS) problems are reduced to a great extent by the virtual anchor nodes.
Flip ambiguity issues in range-based methods are also minimized.
First, the distance between the anchor and the moving targets is determined in the 3D scenario using RSS measures. Then the six virtual anchor nodes are placed at the same distance at angular separations of sixty degrees, as given by Fig. 3. For each target localization, the anchor together with three virtual anchor nodes is selected in order to find coordinates in the three-dimensional scenario, as shown by Fig. 4. This selection of virtual anchor nodes is done using the directional information of the target node. The anchor-target distance is given by Eq. (5).
Here, in three dimensions, the position of a target node is given by (xt, yt, zt) and the present location of the anchor node by (x, y, z). Further, the centroid (xc, yc, zc) is obtained by Eq. (6), as shown in Fig. 5. Each moving target localizes itself using APPA with the centroid value (xc, yc, zc) as the initial guess.
As shown in Fig. 6, the proposed APPA is used to find the coordinates of the target node, given by (xs, ys, zs). The distance between the estimated and actual deployment of the target nodes is minimized by the objective function in Eq. (7).
Here, for the 3D scenario, the estimated position of the target node is given by (xe, ye, ze) and the position of beacon node i by (xi, yi, zi), with M ≥ 4 beacons needed to compute a 3D location.
The error in the localization process, Et, is found by Eq. (8) and is shown in Fig. 7 for the three-dimensional scenario.
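Eqs. (5) through (8) are likewise referenced rather than reproduced here, so the short sketch below simply encodes their textual descriptions: the Euclidean anchor-target distance, the centroid of the selected anchors as the APPA initial guess, a squared-residual objective between estimated and measured ranges, and the Euclidean localization error. The anchor positions and function names are hypothetical.

import math

def centroid(points):
    # Eq. (6)-style seed: coordinate-wise average of the anchor and virtual anchors
    n = len(points)
    return tuple(sum(p[k] for p in points) / n for k in range(3))

def range_objective(est, anchors, ranges):
    # Eq. (7)-style cost: squared mismatch between estimated and measured ranges
    return sum((math.dist(est, a) - d) ** 2 for a, d in zip(anchors, ranges))

def localization_error(est, actual):
    # Eq. (8)-style error: distance between an estimate and the true position
    return math.dist(est, actual)

# Hypothetical anchor plus three virtual anchors on the top layer, and one target.
anchors = [(5.0, 5.0, 10.0), (8.0, 5.0, 10.0), (3.5, 7.6, 10.0), (3.5, 2.4, 10.0)]
target = (6.0, 4.0, 2.0)
ranges = [math.dist(a, target) for a in anchors]   # noiseless stand-ins for RSS ranges
guess = centroid(anchors)                          # initial guess handed to APPA
print(range_objective(guess, anchors, ranges), localization_error(guess, target))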
Here, the novel APPA technique is applied to the three-dimensional localization problem, in which one anchor and six virtual anchors, assumed in six directions at 60° separations, are used to find the exact positions of all unknown nodes. In the three-dimensional environment, the structure is a cube divided into layers, normally three. The unknown nodes whose positions are to be found are placed at the lower two layers, the known node is kept at the topmost layer, and the number of unknown nodes at each layer is forty. For finding the positions of the unknown nodes, an umbrella projection is created. Deploying more than six virtual anchors is also practically possible, but with more than six virtual anchor nodes there is hardly any change in the efficiency of the algorithm. The parameters required for the various meta-heuristic optimization algorithms are given in Tab. 1.
Algorithm Parameters
PSO NP = 30; D = 3; Gmax = 100; c1, c2, c3 = 1.494; w = 0.729
HPSO NP = 30; D = 3; Gmax = 100; c1, c2, c3 = 1.494; η = 0.1; w = 0.729
BBO NP = 30; D = 3; Gmax = 100; pm = 0.05
FA NP = 30; D = 3; Gmax = 100; α = 0.2; γ = 0.96
GWO NP = 20; D = 3; Gmax = 100; a = [2 to 0]; C = [0 to 2]
APPA NP = 30; D = 3; Gmax = 100; nmax = 3
Here, NP is the population size, D is the dimension of the problem, and Gmax is the number of iterations. c1, c2, and c3 are the cognitive, social, and neighborhood learning parameters; w is the inertia weight and pm is the probability of mutation. In FA, α and γ are the randomization and absorption coefficients. In the mobility-based scenario, the various optimization algorithms available in the literature are evaluated. The unknown nodes whose positions are to be found are placed at the lower two layers and the known node is kept at the topmost layer. All the unknown nodes are moving while the anchor node is kept static. The average of the localization error given in Eq. (8) is used as the fitness function. Figs. 8–13 present the outputs obtained by the various optimized algorithms. The line-of-sight disadvantage is also greatly reduced by assuming virtual nodes at different angles. The results show that, using APPA, more accurate locations are found than with the other algorithms, and the convergence characteristics are also faster. In the future, more accuracy could be achieved through hybridization of a few of these optimized algorithms.
The average localization error for all the competing algorithms is computed in Tab. 2 and shown in Fig. 14. When compared to the other competing algorithms tested in the same situation, APPA also has a much faster convergence time.
Algorithm Movement number Max localization error Min localization error Average error Number of located targets
PSO 1 3.9358 0.0554 0.9958 80
2 5.3379 0.0831 0.9839 80
3 5.0108 0.0800 0.9267 80
4 5.1655 0.0367 0.9757 80
5 5.1325 0.0812 0.9612 80
HPSO 1 3.1204 0.1044 0.6742 80
2 5.0134 0.0647 0.4876 80
3 4.8279 0.0976 0.4032 80
4 5.2376 0.0230 0.5546 80
5 5.2134 0.0316 0.5324 80
BBO 1 5.8904 0.1822 1.1892 80
2 5.3500 0.3318 1.2560 80
3 5.5989 0.1822 1.1585 80
4 5.6348 0.1528 1.2818 80
5 5.9014 0.1911 1.1916 80
GWO 1 3.1101 0.0944 0.6442 80
2 4.9834 0.0547 0.4776 80
3 4.8134 0.0876 0.3932 80
4 4.7976 0.0430 0.4946 80
5 4.9776 0.0513 0.4713 80
FA 1 6.1101 0.1922 2.2234 80
2 6.3120 0.3412 2.3124 80
3 6.6990 0.1923 2.4651 80
4 6.8912 0.1627 2.5123 80
5 6.9036 0.2010 2.2013 80
APPA 1 3.1101 0.0964 0.6415 80
2 4.3983 0.0437 0.4732 80
3 4.8032 0.0721 0.3841 80
4 4.7679 0.0412 0.4471 80
5 4.3108 0.0403 0.4312 80
Localization optimization using the algorithms PSO, HPSO, BBO, GWO, and FA is already available in the literature for static scenarios. In this paper, these algorithms are also implemented with the proposed technique, using a single anchor node with the umbrella-based projection, and are then compared with the APPA algorithm, as given by Tab. 2.
The performance of all the algorithms has been compared with the proposed scheme in dynamic scenarios. The results given in Tab. 2 show that the average localization error comes out to be the minimum for all the various numbers of movements when the APPA algorithm is used.
The single anchor node method was used to obtain the three-dimensional positions of unknown nodes with a range-based technique using the meta-heuristic APPA algorithm. The anchor and the virtual anchor nodes form an umbrella projection for finding all unknown nodes: when a mobile target node comes under the range of the known node, its position is determined with the help of the anchor as well as the virtual anchors (at least four anchor nodes are required to find a three-dimensional position). A variety of applications exists where sensor node location is essential and the proposed algorithm is helpful, including logistics, underwater scenarios, tracking of coal mine workers, monitoring of environmental aspects, and localization of events occurring in remote and hilly regions. The performance of the APPA algorithm proposed in this work for finding the exact locations of the nodes is found to be better than that of its competitors: using APPA, more accurate locations are found than with the other algorithms, and the convergence characteristics are also faster. In the future, more accuracy could be achieved through hybridization of a few of these optimized algorithms.
Funding Statement: This research was supported by X-mind Corps program of National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT (No. 2019H1D8A1105622) and the
Soonchunhyang University Research Fund.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
[1] R. Kulkarni, G. Venayagamoorthy and M. Cheng, “Bio-inspired node localization in wireless sensor networks,” in 22, pp. 205–210, 2009.
[2] D. Lavanya and S. K. Udgata, “Swarm intelligence based localization in wireless sensor networks,” in Int. Workshop on Multi-Disciplinary Trends in Artificial Intelligence, Berlin, Heidelberg, Springer, pp. 317–328, 2011.
[3] P. Singh, B. Tripathi and N. Singh, “Node localization in wireless sensor networks,” 2, no. 6, pp. 2568–2572, 2011.
[4] A. Boukerche, H. Oliveira, E. Nakamura and A. Loureiro, “Localization systems for wireless sensor networks,” 14, no. 6, pp. 6–12, 2006.
[5] Y. Liu, Z. Yang, T. Ning and W. Hongyi, “Efficient quality-of-service (QoS) support in mobile opportunistic networks,” 63, no. 9, pp. 4574–4584, 2014.
[6] Y. Liu, Z. Yang, T. Ning and W. Hongyi, “Efficient data query in intermittently-connected mobile ad hoc social networks,” 26, no. 5, pp. 1301–1312, 2014.
[7] S. Chu, Z. Du and J. Pan, “Symbiotic organism search algorithm with multi-group quantum-behavior communication scheme applied in wireless sensor networks,” 10, no. 3, pp. 930–952, 2020.
[8] D. Liu, S. Guo, W. Chen and F. Wang, “History based multi-node collaborative localisation in mobile wireless ad hoc networks,” 30, no. 2, pp. 59–72, 2019.
[9] S. Kotwal, S. Gill and K. Saini, “Development of range free three dimensional localisation in wireless sensor networks,” 31, no. 1, pp. 52–63, 2019.
[10] K. Low, H. Nguyen and H. Guo, “Optimization of sensor node locations in a wireless sensor network,” in IEEE, vol. 5, pp. 286–290, 2008.
[11] Y. Wang, P. Wang, J. Zhang, X. Cai, W. Li et al., “A novel DV-hop method based on coupling algorithm used for wireless sensor network localisation,” 16, no. 2, pp. 128–137, 2019.
[12] J. Graefenstein, A. Albert, P. Biber and A. Schilling, “Wireless node localization based on RSSI using a rotating antenna on a mobile robot,” in 6th Workshop on Positioning, Navigation and Communication, Hannover, Germany, IEEE, pp. 253–259, 2009.
[13] R. Sumathi and R. Srinivasan, “RSS-based location estimation in mobility assisted wireless sensor networks,” in Proc. of the 6th IEEE Int. Conf. on Intelligent Data Acquisition and Advanced Computing Systems, vol. 2, no. 3, pp. 848–852, 2011.
[14] Z. Guo, Y. Guo, F. Hong, Z. Jin, Y. He et al., “Perpendicular intersection: Locating wireless sensors with mobile beacon,” 59, no. 7, pp. 3501–3509, 2010.
[15] Q. Shi, H. Huo, T. Fang and D. Li, “A 3D node localization scheme for wireless sensor networks,” 6, no. 3, pp. 67–72, 2009.
[16] L. Wang, J. Zhang and D. Cao, “A new 3-dimensional DV-hop localization algorithm,” 8, no. 6, pp. 2463–2475, 2012.
[17] Y. Xu, Y. Zhuang and J. Gu, “An improved 3D localization algorithm for the wireless sensor network,” 11, no. 6, pp. 1–13, 2015.
[18] J. Li, X. Zhong and I. Lu, “Three-dimensional node localization algorithm for WSN based on differential RSS irregular transmission model,” 9, no. 5, pp. 391–397, 2014.
[19] T. Ahmad, X. Li and B. Seet, “Parametric loop division for 3D localization in wireless sensor networks,” 17, no. 7, pp. 1697–1729, 2017.
[20] A. Gopakumar and L. Jacob, “Localization in wireless sensor networks using particle swarm optimization,” in IET Conf. on Wireless, Mobile and Multimedia Networks, Beijing, China, pp. 227–230, 2008.
[21] P. Chuang and C. Wu, “An effective PSO-based node localization scheme for wireless sensor networks,” in Ninth Int. Conf. on Parallel and Distributed Computing, Applications and Technologies, Dunedin, New Zealand, IEEE, pp. 187–194, 2008.
[22] J. Kennedy, “Bare bones particle swarms,” in Swarm Intelligence Sym., Indianapolis, IN, USA, IEEE, pp. 80–87, 2003.
[23] R. Kulkarni and G. Venayagamoorthy, “Particle swarm optimization in wireless-sensor networks: a brief survey,” 41, no. 2, pp. 262–267, 2011.
[24] A. Kumar, A. Khosla, J. Saini and S. Singh, “Meta-heuristic range-based node localization algorithm for wireless sensor networks,” in Int. Conf. on Localization and GNSS, Starnberg, Germany, IEEE, pp. 1–7, 2012.
[25] S. Arora and S. Singh, “Node localization in wireless sensor networks using butterfly optimization algorithm,” 42, no. 8, pp. 1–11, 2017.
[26] P. Singh, A. Khosla, A. Kumar and M. Khosla, “A novel approach for localization of moving target nodes in wireless sensor networks,” 10, no. 10, pp. 33–44, 2017.
[27] P. Singh, A. Khosla, A. Kumar and M. Khosla, “Computational intelligence based localization of moving target nodes using single anchor node in wireless sensor networks,” Springer, vol. 69, no. 3, pp. 397–411, 2018.
[28] P. Singh, A. Khosla, A. Kumar and M. Khosla, “3D localization of moving target nodes using single anchor node in anisotropic wireless sensor networks,” 82, no. 1, pp. 543–552, 2017.
[29] P. Singh, A. Khosla, A. Kumar and M. Khosla, “Optimized localization of target nodes using single mobile anchor node in wireless sensor network,” 91, no. 3, pp. 55–65, 2018.
[30] P. Singh, A. Khosla, A. Kumar and M. Khosla, “A novel approach for localization of moving target nodes in wireless sensor networks,” 10, no. 10, pp. 33–44, 2017. | {"url":"https://cdn.techscience.cn/ueditor/files/cmc/TSP_CMC_70-1/TSP_CMC_19171/TSP_CMC_19171.xml?t=20220620","timestamp":"2024-11-03T07:19:55Z","content_type":"application/xml","content_length":"94171","record_id":"<urn:uuid:64f1d0fb-27cb-4204-bedb-c2c875b87efd>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00094.warc.gz"}
Quantum Awesome Learning Hub - Hoa T. Nguyen | Hoà IO
Get started
Get a feel for quantum computing with these interactive introductions:
1. Quantum Computing Journey by QuantumAI.Google
An interactive map to get started with Quantum Computing from Google QuantumAI
2. Introduction to Quantum Computing by Qiskit Learn
A short introductory course to get started with Quantum Computing
3. Quantum Country by Andy Matuschak and Michael Nielsen
An introduction to quantum computing and quantum mechanics
Programming Languages and SDKs
Quantum Providers
Quantum Blogs
1. Quantum Landscape 2022 by Samuel Jaques, University of Oxford
To be updated frequently… | {"url":"https://hoaio.com/quantum-awesome/","timestamp":"2024-11-05T16:24:33Z","content_type":"text/html","content_length":"167480","record_id":"<urn:uuid:7d8c6bfe-98ea-4b69-b624-9cba03e3d6dc>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00038.warc.gz"} |
Queueing Models
The History of the Analysis of Queues and Waiting Lines
A leading subject of study in operations research, queueing theory is the mathematical study of queues and waiting lines. Today’s queueing theory is indebted to the emergence and growth of operations
research after 1945 yet the origins of the field predate those of operations research, extending back by some measures to Siméon Denis Poisson’s 1837 work on criminal cases. A more generally accepted
starting date is the early 20^th century, when university-trained Danish and Norwegian mathematicians first began using probability theory while working at telephone companies. This essay takes the
latter perspective, tracing the history of queueing theory from the pioneering work of Agner Krarup Erlang, through the events of the Cold War, to the present.
Queueing theory’s greatest successes in real-world applications have been in telecommunications and data networking. While there have been vigorous attempts to adapt and extend the model to solve for
a variety of queueing problems, the complexity of calculations, challenges adapting stochastic methods to various real-world queues, and the need to have useable, timely results have all limited the
practical adoption of queueing theory. Despite these challenges the field has achieved a prominent place in ORMS practice today.
Queueing Theory's Origins: 1900 to 1917
The origins of modern queueing analysis lie in the growth of telephone systems in Denmark and Norway during the early 20^th century. There, telephone engineers (themselves university-trained
mathematicians) used statistical techniques to estimate capacity requirements for automatic telephone exchanges. With little prior experience on which to base design choices, these engineers used
mathematics, especially probabilities, as an aid to their work. The Copenhagen Telephone Company (CTC), formed in 1882, employed a thriving community of university-trained mathematicians thanks to
the efforts of its chief engineer, Johan Ludwig Jensen, perhaps best known for “Jensen’s Inequality.” Jensen himself had studied physics, chemistry, and mathematics at Denmark’s College of
Technology. As the president of the Danish Mathematical Society, Jensen attracted young mathematicians to the CTC, among them Agner Krarup Erlang. While studying problems of estimating telephone
exchange capacity requirements under Jensen’s direction, Erlang showed that inbound calls to a switch followed a Poisson distribution, and that a system of lines and calls would achieve statistical
equilibrium. In 1917, Erlang published “Solution of Some Problems in the Theory of Probabilities of Significance in Automatic Telephone Exchanges,” which described three formulas used to model call
activity. Two of those, the Erlang B (or "blocking") formula and the Erlang C (the "delay" or queueing formula, sometimes called Erlang D), are still in use today. Under the Erlang B regime, a customer who finds all servers (telephone lines) busy departs, never to return. Under the Erlang C regime, a customer who finds all servers busy waits in queue until a server is available.
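Both formulas are compact enough to state in code. The sketch below is a Python rendering of the standard recursion for Erlang B and the usual identity relating Erlang C to it, not Erlang's own notation; the function names and the example figures (10 lines offered 6 erlangs) are purely illustrative.

def erlang_b(c, a):
    # Blocking probability: an arrival finding all c servers busy is lost.
    # Standard recursion: B(0) = 1, B(k) = a*B(k-1) / (k + a*B(k-1)).
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

def erlang_c(c, a):
    # Waiting probability: an arrival finding all servers busy queues instead.
    # Uses the usual identity with Erlang B; valid only when a < c.
    b = erlang_b(c, a)
    rho = a / c
    return b / (1.0 - rho * (1.0 - b))

print(erlang_b(10, 6.0))   # share of calls blocked with 10 lines, 6 erlangs offered
print(erlang_c(10, 6.0))   # share of calls delayed under the queueing regime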
In neighboring Norway, Erlang’s contemporary Tore Olaus Engset also studied physics and mathematics, earning a degree in 1894 from the University of Oslo. At the state-owned Telegrafverket (today’s
Telenor), Engset worked on planning for an aggressive expansion of Telegrafverket’s phone service to all Norwegians, as mandated by an 1899 national law. As part of that work he developed a blocking
formula, and also addressed other concerns, including customer departures, traffic variations, traffic grading, and the finite source of the customer population. While Engset’s blocking formula found
some adoption outside Telegrafverket, his other contributions (which are described in a 1915 report) did not. Other telephone engineers appear to have preferred Erlang’s simpler model and formulas to
Engset’s both due to their being easier to calculate as well as the former’s tendency to give results which tended to be more conservative, that is, over-estimate switch capacity requirements.
Congestion Theory and Its Diffusion, 1917-1945
From its beginnings in Denmark and Norway queueing theory (or what was then called the study of congestion problems) spread with the growth of telephone systems in Europe, the United States, and the
Soviet Union. Telephone engineers in those nations, seeking mathematical approaches to estimating phone switch capacity, translated Erlang’s articles into English, French, German, and Russian. Before
English translations were available, one engineer at Bell Labs learned Danish in order to study Erlang’s work. Much of the work on congestion theory from 1917 to 1927, such as that completed by C. D.
Crommelin and G. F. O'Dell, centered on confirming the results of Erlang's formulas with empirical findings. Others, including E. C. Molina and Thornton C. Fry, advocated applying Erlang's formulas, and probability theory in general, to the management of phone systems in the United States. Erlang's formulae have long been used in setting staffing levels in inbound telephone call centers; see, for example, the description by Bruce Andrews and Henry Parsons at the retailer L. L. Bean.
Modelling work remained limited until after World War II, but there were several key contributions made during the 1930s. Conny Palm studied caller abandonment. Felix Pollaczek and Aleksandr
Yakovlevich Khinchin independently arrived at formulas for calculating the mean waiting time of a queueing model with an arbitrary service time distribution (their work eventually led them to form a
friendship in the years before the war). There were also some attempts to apply congestion theory to problems outside teletraffic, with T.C. Fry’s The Theory of Probability as Applied to Problems of
Congestion (1928) being one example. As tensions turned to open hostilities in Europe, however, the use of probability theory to analyze congestion problems was still limited to telephone system
engineering work. The blossoming of queueing theory out of the study of congestion in telephone systems would have to wait for the events of World War II to create a fertile ground.
Operations Research and the "Golden Age" of Queueing Theory, 1945-1975
The rise of operations research during World War II marked a new phase in the history of queueing theory. A new generation of practitioners became interested in the mathematical study of queues. The
formation of professional associations, conferences, and survey texts demonstrated a maturing field. The twenty-year long period of economic expansion after the war also seemed to offer numerous
opportunities for operations researchers to apply queueing theory in industry, urban planning, highway construction, jet travel, and recreation. These events led some to reflect on the decade and a
half after World War II as a "golden age" of queueing theory. However, this enthusiasm was tempered by the fact that, outside of communications, real-world application of queueing theory remained limited.
A key figure in the postwar era is the Oxford-trained mathematician David George Kendall. The Berlin Airlift of 1948-1949 inspired Kendall to consider how probability theory could be used to solve
queueing problems. He published several papers on the topic that significantly shaped the future of queueing theory, including the “A/B/C” shorthand notation for describing queues, embedded Markov
chains, and the first use of the phrase “queueing system” to describe the mathematical model. In Kendall’s A/B/C notation, the A argument identifies the arrival process, e.g., M for Markovian or
exponentially distributed interarrival times. The B identifies the service distribution, e.g., M for exponential, and G for general. The third argument denotes the number of servers. Kendall also
published one of the first comprehensive bibliographies covering work on teletraffic, congestion theory, and queues. Through this bibliography-building work of Kendall and others, the new generation
of operations researchers “rediscovered” the work of earlier teletraffic engineers. For example, while studying road traffic problems in the late 1950s Frank Haight came across Conny Palm’s 1937 work
on queueing abandonment.
Model building took off in the decade and a half after 1945. Alan Cobham (1954) wrote one of the first well-known papers on priority queueing systems. Cobham was a polymath; his 1965 paper, "The intrinsic computational difficulty of functions," was one of the first on the unrelated topic of computational complexity. A common use of priorities is to give priority to short jobs (think express lanes in a supermarket); at heavy loads this can substantially reduce average wait time at a modest expense for long jobs. J.F.C. Kingman and Linus Schrage published work on queue service disciplines, including "First In, First Out" (FIFO) and Shortest Remaining Processing Time. Several practitioners, including H. White and L.S. Christie (1958), Avi-Itzhak and Naor (1961), Miller and Gaver (1962), and Keilson and Kooharian, studied server breakdown. In 1961, John D. C. Little put forward the eponymous Little's law, which would later significantly impact operations management and computer architecture. This law states that for service systems that are, loosely speaking, stable, the average number of customers in system equals the average arrival rate times the average time in system, or in shorthand notation, N = λT. Lajos Takács made several contributions to time-dependent queueing processes, including queue feedbacks, priority queues, and balking. In 1956 Paul J. Burke noted that if the input to a queue is Poisson, then under certain circumstances (e.g., M/M/c) so is the output; this observation is now called Burke's theorem. A year later James R. Jackson, while studying ways to improve machine shop job scheduling, built on Burke's work to develop the idea of networks of queues. Jackson showed that there are certain classes of networks, now called Jackson networks, for which one can analyze each node independently to correctly compute expected wait times and other measures. This modeling work helped motivate a parallel effort to ease the increasing complexity of the calculations involved. Takács demonstrated the utility of combinatorics and showed how other innovations (such as Bertrand's ballot theorem) could help solve queueing problems. A particular area of focus was the use of transforms: in 1954, W. Ledermann and G.E. Reuter showed how spectral theory could be useful, and Norman T.J. Bailey demonstrated generating functions.
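Little's law lends itself to a quick numerical check. The sketch below, an illustrative Python simulation rather than anything from the historical record, runs customers through a single-server M/M/1 queue and compares λ times the measured average time in system against the known steady-state mean number in system, ρ/(1 − ρ).

import random

def mm1_mean_sojourn(lam, mu, n, seed=7):
    # Push n customers through a FIFO M/M/1 queue and return the average
    # time in system T; interarrival and service times are exponential.
    random.seed(seed)
    arrival = depart = total = 0.0
    for _ in range(n):
        arrival += random.expovariate(lam)    # next Poisson arrival epoch
        start = max(arrival, depart)          # wait if the server is still busy
        depart = start + random.expovariate(mu)
        total += depart - arrival             # this customer's sojourn time
    return total / n

lam, mu = 0.8, 1.0
T = mm1_mean_sojourn(lam, mu, 200000)
print(lam * T)                        # Little's law: N = lambda * T, about 4
print((lam / mu) / (1 - lam / mu))    # steady-state mean number in M/M/1: 4.0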
Several textbooks published during this time demonstrate a rapidly maturing field. Included among them are the first text by Philip McCord Morse (1958), and Lajos Takács (1962). In the United States,
Thomas L. Saaty’s 1961 Elements of Queueing Theory, the second English-language book on the subject, became a standard, accessible textbook used to teach queueing theory up through the early 1980s.
Saaty’s book is also noteworthy for including an exhaustive bibliography.
Most queueing theory publications during this era were devoted to modeling rather than applications. In 1966, Saaty noted that “real life queues are still primitive,” an assertion which prompted U.
Narayan Bhat to undertake a historical review of queueing theory literature. As Bhat concluded in his 1969 article “Sixty Years of Queueing Theory,” the complex calculations involved were a chief
factor contributing to queueing theory’s limited applications. In the case of road traffic queues, stochastic methods were not easily adapted to the behavior of vehicular traffic, thus dissuading
practitioners from using queueing theory over other available techniques. Clients also doubted the value of large-scale queueing theory-based systems to solve for particular challenges. The 1964-1965
New York World’s Fair Corporation, for example, rejected a proposal from Arthur D. Little, Inc. to implement a crowd control system based on a system of priority queues. At the same time, however,
Fair management did solicit IBM’s Service Bureau Corporation to complete a queueing theory simulation of turnstile requirements for the Fair’s main gates. John Magee, in his oral history interview,
described how queueing theory was used by the Arthur D. Little consulting firm to help American Airlines design the first computerized on-line reservations system, SABRE. Using queueing theory, Magee
and his colleagues were able to show that one expensive communication modem could serve multiple reservation agents. The catalog merchant, L.L. Bean, used the M/M/c queueing model to set customer
service agent staffing levels at its inbound call center, see Andrews and Parsons.
Yet the problem of finding applied uses of queueing theory may also be one of the historical record. Many published articles, for example, were concerned with demonstrating the relevance, rather than
reviewing applied examples, of particular models.
Queueing Theory in the Soviet Union after 1945
In the Soviet Union, where operations research did not have a foothold as it did in the United States and the United Kingdom, the mathematical study of queues continued thanks to the efforts of
Alexander Khinchin. In Russia, queueing theory went by the name of the theory of mass service. Khinchin became a key force in sustaining interest in the use of mathematics to solve queueing
problems east of the Iron Curtain. He published a well-regarded, highly readable study in 1954, “Mathematical methods of theory of queues” in the proceedings of the Steklov Institute of Mathematics.
This accessible work achieved widespread popularity in the Soviet Union, setting the paradigm for further developments of queueing analysis within the Soviet Block.
The 1970s Onward
Queueing theory’s history from the 1970s is one marked by shifting national priorities in the United States and the United Kingdom, which along with deregulation and the development of new markets
created new opportunities for model building and real-world applications. Federal spending in the United States moved away from military to domestic initiatives, leading to a shift among
practitioners studying military queueing problems to ones in urban service delivery, including fire suppression, emergency medical services, and law enforcement. New journals and professional
societies forming after 1980 also show a strong, sustained interest in queueing theory. The 1990s saw further real-world applications of queueing theory outside of communications systems.
Practitioners used Little’s law, for example, to help inform staffing level decisions at hospital emergency rooms, operations management, and computer architecture. Despite these achievements,
however, as recently as 2009 some still lamented that outside demands reduce the quality of queueing models.
Model-building continued, albeit with some important changes. Perhaps the biggest shift was away from what had been a long-standing stochastic, top-down orientation. Gordon Newell’s 1971 book
Applications of Queueing Theory notably took queueing theory in this new direction, stressing the use of deterministic models over stochastic ones, along with diffusion approximations. In 1987, Richard
Larson published “Psychology of Queueing and Social Justice,” which brought attention to how the customer experiences queues as a factor in queue design and helped define a new subfield. In 1990
Larson also proposed the idea of a “Queue Inference Engine,” which helped enable bottom-up model building through the analysis of transactional data associated with a queue’s functioning.
One of queueing theory’s greatest real-world achievements began during the 1970s, when the work of Leonard Kleinrock and his students became fundamental to the design of what would become today’s
data communications technologies, and the Internet. Kleinrock began working on queueing systems in 1960, building on the work of Burke and J.R.R. Jackson. Kleinrock published a two-volume collection
in 1975 and 1976 and sponsored a series of lectures to disseminate the ideas presented in the two books.
Queueing theory’s institutional supports continued to grow through the twenty-five years of the twentieth century. New journals and books also helped reshape the field. 1986 saw the first issue of
the journal, Queueing Systems. In 1985, Donald Gross and Carl M. Harris released a revised edition of Fundamentals of Queueing Theory, a book that has since been widely used to teach the subject (a
fourth edition was published in 2008).
The year 2009 saw the publication of Optimal Design of Queueing Systems by Shaler Stidham. In a 1964 article, Fred Hillier introduced the idea of optimization in a queueing system to choose
parameters such as the best service rate (faster processors cost more money but reduce customer wait time) or arrival rate (think of an expressway on-ramp with red-green lights to limit arrival
rate). Stidham’s book surveyed subsequent research, including the use of tolls or prices in networks of queues, the distinction between individual and system optima in queues, and how tolls can be
used to achieve a system optimum, as well as applications to flow control in communication networks.
Optimal control of queueing systems extends the idea of optimization to dynamically varying service rates, arrival rates, and other variables. Research has emphasized the application of dynamic
programming among other optimization methods to establish the structure of optimal control policies, e.g., admit fewer arriving customers when the queue is longer.
The first Canadian Queueing Conference, or “CanQueue,” was held in 1999. Innovations in calculation continued, with Marcel Neuts showing how matrix methods could solve queueing problems in 1981. In
the early 2000s, Joseph Abate and Ward Whitt proposed a framework for easing calculations.
In the 2000s, the U.S. journal Operations Research and the UK's Operational Research Society celebrated their fiftieth anniversaries. In commemoration of these events both Operations Research and the Journal of the Operational Research Society released special issues in January 2002 and May 2009, respectively, collecting articles reflecting on the field's history. Several contributors discussed the history of
queueing theory. Shaler Stidham's article in the 50th anniversary issue of Operations Research includes a comprehensive survey of work done up through the year 2000 on optimal design and control of
queueing systems.
Compiled by James Skee.
Edited by Linus Schrage.
Links and References
“Agner Krarup Erlang (1878 - 1929),” +plus Magazine. https://plus.maths.org/content/agner-krarup-erlang-1878-1929
“Aleksandr Yakovlevich Khinchin,” MacTutor History of Mathematics archive, School of Mathematics and Statistics, University of St Andrews, Scotland, Created by John J O'Connor and Edmund F Robertson,
http://www-history.mcs.st-andrews.ac.uk/Biographies/Khinchin.html, accessed 26 April 2019.
“An Interview with Leonard Kleinrock,” OH 190, Conducted by Judy O’Neill on 3 April 1990, Los Angeles, CA, Charles Babbage Institute, The Center for the History of Information Processing, University
of Minnesota, Minneapolis, https://conservancy.umn.edu/bitstream/handle/11299/107411/oh190lk.pdf?sequence=1&isAllowed=y, Accessed 8 July 2019.
Andrews, Bruce and Henry Parsons, "Establishing Telephone-Agent Staffing Levels through Economic Optimization," Interfaces, Vol. 23, No. 2 (Mar - Apr, 1993), pp. 14-20.
Bailey, Norman T.J., “A Continuous Time Treatment of a Simple Queue Using Generating Functions,” Journal of the Royal Statistical Society. Series B (Methodological), Vol. 16, No. 2 (1954), pp.
Basharin, G. P., K. E. Samouylov, N. V. Yarkina, and I. A. Gudkova, “A New Stage in Mathematical Teletraffic Theory,” trans from Russian, Automation and Remote Control, 2009, Vol. 70, No. 12, pp.
Beneš, V.E., Review of Saaty Elements of Queueing Theory, with Applications, The Annals of Mathematical Statistics, Vol. 34, No. 4 (Dec., 1963), pp. 1610-1612.
Bhat, U. Narayan, “Sixty Years of Queueing Theory,” Management Science, Vol. 15, No. 6, Application Series (Feb., 1969), pp. B280-B294.
Bingham, N. H., “A Conversation with David Kendall,” Statistical Science, Vol. 11, No. 3, (Aug., 1996), pp. 159-188, Institute of Mathematical Statistics,
Cobham, Alan, “The Intrinsic Computational Difficulty of Functions,” in Y. Bar-Hillel, ed., Logic, Methodology and Philosophy of Science: Proceedings of the 1964 International Congress, North-Holland
Publishing Company, Amsterdam, 1965, pp. 24-30.
Cobham, Alan, (1954) “Priority Assignment in Waiting Line Problems,” Journal of the Operations Research Society of America 2(1):70-76.
“History of Queueing Theory,” https://web2.uwindsor.ca/math/hlynka/qhist.html (accessed 8 May 2019)
“Johan Ludwig William Valdemar Jensen,” MacTutor, http://www-history.mcs.st-andrews.ac.uk/Biographies/Jensen.html.
“Lajos Takács”, https://www.informs.org/content/view/full/271236,
“Leo Félix Pollaczek,” MacTutor History of Mathematics archive, School of Mathematics and Statistics, University of St Andrews, Scotland, Created by John J O'Connor and Edmund F Robertson, http://
www-history.mcs.st-andrews.ac.uk/Biographies/Pollaczek.html, accessed 26 April 2019.
Daganzo, Carlos F., “In Memoriam: Gordon F. Newell, 1925–2001,” Transportation Science, Vol. 35, No. 2 (May 2001), pp. iii-v
Daley, D. J. and D. Vere-Jones, “David George Kendall and Applied Probability,” Journal of Applied Probability, Vol. 45, No. 2 (Jun., 2008), pp. 293-296.
Erlang, A.K., "Solution of some problems in the theory of probabilities of significance in automatic telephone exchanges," 1917, reprinted in Telektronikk, Volume 91, No. 2/3, 1995, 42-49.
Grimmett, Geoffrey, “Obituary: David George Kendall, 1918-2007” Journal of the Royal Statistical Society. Series A (Statistics in Society), Vol. 172, No. 1 (Jan., 2009), pp. 275-278.
Gross, Donald, John Shortle, James Thompson, and Carl Harris, Fundamentals of Queueing Theory, 4th ed., Wiley, 2008.
Haight, Frank A., "Queueing with Balking," Biometrika 44, 360-69 (1957); also, Rand Report P-995 (1956).
Haight, Frank A., “Two Queues in Parallel,” Biometrika, Vol. 45, No. 3/4 (Dec., 1958), pp. 401-410.
Hillier, Fred, “The application of waiting-line theory to industrial problems,” J. Industrial Engineering, Vol. 15, pp. 3-8, (1964).
INFORMS, “Leonard Kleinrock,” https://www.informs.org/Explore/History-of-O.R.-Excellence/Biographical-Profiles/Kleinrock-Leonard
Jackson, Jim, "How Networks of Queues Came About," Operations Research, Vol. 50, No. 1, 50th Anniversary Issue (Jan. - Feb., 2002), pp. 112-113.
Kendall, David G., “Some Problems in the Theory of Queues,” Journal of the Royal Statistical Society. Series B (Methodological), Vol. 13, No. 2 (1951), pp. 151-185.
Kleinrock, Leonard, “Creating a Mathematical Theory of Computer Networks,” Operations Research, Vol. 50, No.1, January–February 2002, pp. 125–131
Kleinrock, Leonard, “Time-shared Systems: A Theoretical Treatment” 1961.
Larson, Richard C., “Perspectives on Queues: Social Justice and the Psychology of Queueing.” Operations Research, Vol. 35, No. 6 (Nov. - Dec., 1987), pp. 895-905.
Larson, Richard C., “The Queue Inference Engine: Deducing Queue Statistics from Transactional Data,” Management Science, Vol. 36, No. 5 (May, 1990), pp. 586-601.
Lathrop, John B., “Reviewed Works: Elements of Queueing Theory with Applications by Thomas L. Saaty; Stochastic Service Systems by John Riordan; Introduction to the Theory of Queues by Lajos Takács;
Elements of the Theory of Markov Processes and Their Applications by A. T. Bharucha-Reid,” Operations Research, Vol. 11, No. 2 (Mar. - Apr., 1963), pp. 290-293
Little, John D. C., “Little’s Law as Viewed on Its 50th Anniversary,” Operations Research, Vol. 59, No. 3, May–June 2011, pp. 536–549.
Malyshev, V. A., “On Mathematical Models of the Service Networks,” translated from Russian, Automation and Remote Control, 2009, Vol. 70, No. 12, pp. 1947-1953.
Miranti, Paul J. Jr., “Corporate Learning and Traffic Management at the Bell System, 1900-1929: Probability Theory and the Evolution of Organizational Capabilities,” The Business History Review, Vol.
76, No. 4 (Winter, 2002), pp. 733-765.
Myskja, Arne, “The man behind the formula - Biographical Notes on Tore Olaus Engset,” Telektronikk, Vol. 94, No. 2, 1998, 154-164.
Myskja, Arne, “A Tribute to A.K. Erlang,” Telektronikk, Volume 91, No. 2/3, 1995, 41-49.
New York World’s Fair 1964-1965 Corporation Records, New York Public Library Special Collections.
Newell, G.F., “Memoirs on Highway Traffic Flow Theory in the 1950s,” Operations Research, Vol. 50, No.1, January–February 2002, pp.173-178.
Saaty, Thomas L., Elements of Queueing Theory, with Applications, McGraw-Hill Book Company, New York: 1961.
Shimshak, Daniel G., “Reviewed Work: Fundamentals of Queueing Theory by Donald Gross, Carl M. Harris,” Interfaces, Vol. 17, No. 2 (Mar. - Apr., 1987), pp. 121-122.
Simpson, N. C. and P. G. Hancock, “Fifty Years of Operational Research and Emergency Response,” The Journal of the Operational Research Society, Vol. 60, Supplement 1: Milestones in OR (May, 2009),
pp. s126-s139.
Stidham, S. (2002) "Analysis, Design, and Control of Queueing Systems," Operations Research, Vol. 50, No. 1, pp. 197-216.
Stidham, S., Optimal Design of Queueing Systems, CRC Press, (2009).
Switzer, Paul, “Editor's Note Regarding the Interview with Professor David Kendall, August 1996 Issue,” Statistical Science, Vol. 12, No. 3 (Aug., 1997), p. 220, Institute of Mathematical Statistics.
Worthington, D., “Reflections on Queue Modelling from the Last 50 Years,” The Journal of the Operational Research Society, Vol. 60, Supplement 1: Milestones in OR (May, 2009), pp. s83-s92.
Wright, Sarah H., “Avenue Queue: One long wait inspired career shift,” MIT News, https://news.mit.edu/2008/eureka-larson-tt0206, accessed 16 July 2019.
Yashkov, S. F., Foreword to the Thematical Issue “Centenary of the Queuing Theory” trans from Russian, Automation and Remote Control, 2009, Vol. 70, No. 12, pp. 1941–1946.
Associated Historic Individuals
Bass, Frank M.; Beightler, Charles S.; Bhat, U. Narayan; Blumstein, Alfred; Bradley, Hugh E.; Cherry, W. Peter; Cinlar, Erhan; Conway, Richard W.; Crane, Roger R.; Disney, Ralph L.; Edie, Leslie C.; Erlang, Agner Krarup; Flagle, Charles D.; Garber, H. Newton; Gaver, Donald P.; Gross, Donald; Harris, Carl M.; Harrison, J. Michael; Heyman, Daniel P.; Hillier, Frederick S.; Ho, Yu-Chi; Howard, Ronald A.; Iglehart, Donald L.; Kleinrock, Leonard; Kolesar, Peter J.; Larson, Richard C.; Lehoczky, John P.; Liebman, Judith; Little, John D. C.; Maxwell, William; Morse, Philip M.; Odoni, Amedeo R.; Pollaczek, Felix; Porteus, Evan L.; Ross, Sheldon M.; Saaty, Thomas L.; Schrage, Linus; Smith, Robert L.; Stidham Jr., Shaler; Takács, Lajos; White, Jr., John A.; Whitt, Ward; Wolman, Eric | {"url":"https://trsc.informs.org/Explore/History-of-O.R.-Excellence/O.R.-Methodologies/Queueing-Models","timestamp":"2024-11-13T01:34:07Z","content_type":"application/xhtml+xml","content_length":"121798","record_id":"<urn:uuid:5114a417-4c3e-4cf8-9421-cacd0bec1785>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00073.warc.gz"}
Compute dihedral angle in a polydata
I have a surface mesh made of triangle cells. I would like to compute the dihedral angle for each edge in the mesh.
I have found the Feature Edges filter, which must be computing the dihedral angle, but I haven't found any way to expose that angle, or any other filter that can provide this information.
Is there any way to compute the dihedral angle for the edges in a PolyData?
I do not see a filter doing that directly.
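One workaround, while no filter exposes this directly, is to compute it yourself from the points and triangle connectivity. Below is a minimal NumPy sketch (independent of ParaView, and assuming a clean, triangles-only mesh given as plain arrays) that returns, for every interior edge, the angle between the two incident triangle normals, which is the quantity the Feature Edges filter thresholds on; for a consistently oriented surface, the interior dihedral angle is pi minus this value.
import numpy as np

def edge_normal_angles(points, faces):
    # points: (N, 3) float array; faces: (M, 3) int array of triangle indices.
    # Assumes non-degenerate triangles. Returns {(v0, v1): angle in radians}
    # for every edge shared by exactly two triangles.
    n = np.cross(points[faces[:, 1]] - points[faces[:, 0]],
                 points[faces[:, 2]] - points[faces[:, 0]])
    n /= np.linalg.norm(n, axis=1, keepdims=True)  # unit face normals
    edge_cells = {}
    for ci, (a, b, c) in enumerate(faces):
        for e in ((a, b), (b, c), (c, a)):
            edge_cells.setdefault(tuple(sorted(e)), []).append(ci)
    angles = {}
    for edge, cells in edge_cells.items():
        if len(cells) == 2:  # skip boundary and non-manifold edges
            d = np.clip(np.dot(n[cells[0]], n[cells[1]]), -1.0, 1.0)
            angles[edge] = float(np.arccos(d))
    return angles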
This is a nice idea, I think you can move your post to the “Feature Request” category | {"url":"https://discourse.paraview.org/t/compute-dihedral-angel-in-a-polydata/15670","timestamp":"2024-11-08T12:22:07Z","content_type":"text/html","content_length":"14649","record_id":"<urn:uuid:78d87025-aa50-4ec1-abb7-f2c3eda1b63a>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00564.warc.gz"} |
Intro to Droplet Digital PCR | Utsav Bali
Intro to Droplet Digital PCR
The utility of droplet digital PCR in gene quantification/copy number determination is becoming increasingly prevalent in industry. Its immediate benefits over traditional absolute quantitative PCR
methods are hard to deny, as it obviates the need either to generate rigorous standard curves or to measure the PCR efficiency of every gene pair. ddPCR delivers the same sensitivity as
conventional approaches with the added benefit of smaller errors. ddPCR relies on Poisson statistics and is essentially an end-point measurement where the sample to be measured is randomly
distributed into discrete partitions containing either none, one or more nucleic acid template copies. These individual partitions are then thermally cycled as in conventional PCR to their end point
and then read to determine the fraction of partitions that are positive for the amplification.
Poisson distribution, essentially counting statistics, gives the probability Pr(n) that a droplet will contain n copies of target if the mean number of target copies per droplet is C. This can be
written as:
$Pr(n) = \frac{C^n \cdot e^{-C}}{n!}$
So, the probability of getting a droplet with zero copies of the target gene is given by:
$Pr(0) = e^{-C}$
In practice, C is defined as Copies Per Droplet (CPD) which essentially means the average number of copies per droplet. Take an instance where a typical 20 µl PCR reaction volume is divided up into
20,000 droplets each of 0.001 µl volume, then a CPD can be defined as either:
1. $CPD = \frac{\text{total number of molecules}}{\text{total number of droplets}}$, or
2. $CPD = \left(\frac{\text{total number of molecules}}{\text{total reaction volume}}\right) \times \text{droplet volume } (\mu l)$
So, assuming we had 100,000 molecules in a $20\ \mu l$ reaction volume, then by equation 1 we would have a CPD of 100,000 copies / 20,000 droplets = 5 copies per droplet. Alternatively, using eq. 2, you
would calculate (100,000 copies per $20\ \mu l$) × 0.001 $\mu l$ per droplet = 5 copies per droplet.
In other words, CPD can be thought of as the ‘expected value’ of the Poisson distribution. 100,000 copies (corresponding to a CPD of 5), as it so happens, is the limit of the dynamic range for this
technique as CPD values beyond 5 lead to saturation.
So, why exactly do we need the Poisson distribution anyway? Imagine we had only 6 copies of a gene in a total reaction volume of $20 \mu l$. If the $20 \mu l$ were divided into 20,000 droplets, then it
would be quite straightforward to find six droplets out of the 20,000 droplets that respond with a positive signal. In that case, we could simply count the number of positive droplets (6) out of
20,000 and know our concentration. But what happens when the gene copy number is high enough that droplets might contain one or more copies of the gene? Here, simply
counting a positive signal as corresponding to a single copy of a gene would result in underestimation of the mean concentration. Some droplets might contain 2, 3, 4 or more copies and likewise,
there might be more droplets that don’t contain any copies at all. Because some droplets can have more than one copy of the gene, Poisson statistics essentially tells us the true fraction of negative
droplets which would otherwise be underestimated by simply taking an average value of positive droplets.
In these instances, the fraction of positive droplets is then used to calculate the concentration using the following equation:
$\text{Concentration (copies per volume)} = \frac{-\ln\left(\frac{N_{neg}}{N}\right)}{V_{droplet}}$
where $\frac{N_{neg}}{N}$ is the fraction of negative droplets and $V_{droplet}$ is the volume of a single droplet. This equation can likewise be rewritten as:
$\text{Concentration} = \frac{-\ln(1-p)}{V_{droplet}}$, where $p$ is the fraction of positive droplets.
So, let’s take an example where a sample contains 5,000 copies of a target gene in a $20\ \mu l$ reaction volume partitioned into 20,000 droplets. This corresponds to a CPD of 5000/20000 = 0.25 copies per droplet. If
the gene copies were evenly distributed amongst the 20,000 droplets, one might be tempted into thinking that 25% of the droplets contain one copy of the gene and that 75% of the droplets contain zero
copies of the gene. However, Poisson statistics tells us to expect about 78% of the droplets to contain zero copies of the gene, as follows:
$Pr(0) = e^{-C} = e^{-0.25} = 0.7788 = 77.88\%$; alternatively,
$Pr(1) = \frac{0.25^1 \cdot e^{-0.25}}{1!} = 0.1947$, or 19.5% of droplets positive with exactly 1 copy of the gene, rather than the expected 25%.
The mean and the variance of a Poisson distribution are the same ($\mu = \lambda$ and $\sigma^2 = \lambda$). The CV, expressed as standard deviation divided by the mean, is therefore $\frac{\sqrt{\lambda}}{\lambda}$.
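For readers who want to run the numbers themselves, here is a minimal Python sketch of this Poisson correction (an illustration only; the 20,000-droplet partitioning and the 0.001 µl droplet volume are the assumed values from the example above):
import math

def ddpcr_concentration(n_positive, n_total, droplet_volume_ul=0.001):
    # CPD = -ln(fraction of negative droplets); concentration = CPD / droplet volume.
    frac_negative = (n_total - n_positive) / n_total
    cpd = -math.log(frac_negative)       # average copies per droplet
    return cpd / droplet_volume_ul       # copies per microliter

# Example from the text: CPD = 0.25, so 1 - e^(-0.25), about 22.1% of droplets, are positive.
n_total = 20000
n_positive = round(n_total * (1 - math.exp(-0.25)))   # ~4424 positive droplets
conc = ddpcr_concentration(n_positive, n_total)
print(conc, conc * 20)   # ~250 copies/µl, i.e. ~5000 copies in the 20 µl reaction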
| {"url":"https://utsavbali.com/2017/05/04/intro-to-droplet-digital-pcr/","timestamp":"2024-11-08T03:08:24Z","content_type":"text/html","content_length":"36814","record_id":"<urn:uuid:ddcac9e0-f4e6-4722-8af0-c4b9e055d0ae>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00634.warc.gz"}
A solution for simultaneous turbulent heat and vapor transfer between a water surface and the atmosphere
Interaction between sensible heat and water vapor diffusion in the lower atmosphere leads to the necessity of solving two simultaneous turbulent diffusion equations. This solution is obtained by the
construction of Green's function which when incorporated in the boundary conditions produces two integral equations. These are solved by transformation into two algebraic equations by means of the
Laplace Transformation. The results show how for a simple steady-state case, sensible heat and water vapor transfer and also the water surface temperature depend on the meteorological conditions and
the rate of change of energy content of the water body. Due to advection, the water surface temperature and the turbulent fluxes vary in the downwind direction. However, for practical calculations of
the mean evaporation or heat transfer, the error introduced by the use of an average temperature is usually quite small and negligible.
| {"url":"https://scholars.ncu.edu.tw/zh/publications/a-solution-for-simultaneous-turbulent-heat-and-vapor-transfer-bet","timestamp":"2024-11-13T06:54:32Z","content_type":"text/html","content_length":"54798","record_id":"<urn:uuid:4987fe10-14d5-41e5-8423-4fcd96f735fc>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00839.warc.gz"}
How to Calculate Poly Bag Costing in Garment Industry?
How to Calculate Poly Bag Costing in Apparel Industry?
Poly bag costing is an important matter in garments merchandising, and all garment merchandisers should know about it. Poly bag costing is not as tough a task as it may normally seem. Given its importance in the ready-made garments sector, today I will present the costing calculation method for poly bags.
Poly Bag Costing Calculation Method in Garments Industry:
At the start of poly bag costing, a garment merchandiser should identify the following information:
1. Polybag length in inch,
2. Polybag width in inch,
3. Poly bag thickness in gauge,
4. ½ flap of the poly bag (Flap is single layer and width of the poly bag is double layer),
5. Numbers of print on the polybag (Print can be any types of logo, text, warning, etc.),
6. Rate of polymer per pound (It can be PP, PE, and LDPE, etc.).
Important Tips:
• The next duty is to calculate poly bag consumption or calculate the amount of polymer needed to produce poly bags.
• Then, by multiplying, poly bag consumption (total amount of polymer needed to produce poly bag) with polymer rate, a garment merchandiser can easily calculate poly bag costing in the apparel
Now, I will present an example here for poly bags costing. Hope all the confusion will be cleared.
Suppose the buyer “ZARA” provides the following information about the poly bag for a garment export order:
Poly bag length – 28”,
Poly bag width- 24”,
Poly bag thickness – 160gauge,
½ flap of the poly bag- 5”,
Rate of LDPE polymer- $0.40 per pound.
Now, calculate the poly bags costing for the above order.
At first, we have to calculate poly consumption according to the given data.
Poly bag consumption (for 1000 pcs, in kg),
= {(Poly bag length + ½ flap) × Poly bag width × Thickness in gauge} ÷ 3300
= {(28 + 5) × 24 × 160} ÷ 3300
= 126720 ÷ 3300
= 38.4 kg per 1000 pcs poly bag.
So, for 1000 pcs of poly bags, 38.4 kg of LDPE polymer is needed.
For 1 pc poly bag, LDPE polymer needed,
= (38.4 ÷ 1000) kg = 0.0384 kg ≈ 0.04 kg = (0.0384 × 2.20) lbs ≈ 0.08 lbs
So, cost of LDPE polymer for 1pc poly bag ($),
= LDPE polymer needed for 1pc poly bag × LDPE polymer cost per pound (lb)
= 0.08 × 0.40
= 0.032 ≈ 0.03
So, the cost of LDPE polymer for 1 pc poly bag is about $0.03.
Cost of 1000pcs poly bag,
= $(1000 × 0.03) = $30
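For readers who want to automate this, here is a small Python sketch of the same calculation (an illustration only; the 3300 divisor is the article's industry constant, which numerically folds together the double layer of film, the gauge-to-inch conversion, and an LDPE density of roughly 0.92 g/cm³):
def poly_bag_cost(length_in, width_in, gauge, half_flap_in, rate_per_lb, qty=1000):
    # Consumption (kg per 1000 pcs) = (length + half flap) * width * gauge / 3300
    kg_per_1000 = (length_in + half_flap_in) * width_in * gauge / 3300
    lbs_per_bag = kg_per_1000 / 1000 * 2.2046   # convert kg per bag to pounds
    cost_per_bag = lbs_per_bag * rate_per_lb
    return kg_per_1000, cost_per_bag * qty

kg, total = poly_bag_cost(28, 24, 160, 5, 0.40)
print(round(kg, 1), round(total, 2))   # 38.4 kg per 1000 pcs, about $33.86 per 1000 bags
The unrounded total (about $33.86) differs from the $30 above only because the article rounds the per-bag cost down to $0.03 before multiplying.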
Speech from the writer:
If you read this article attentively, then you can easily answer the following questions in the interview:
1. How to calculate poly bag costing in the garments industry?
2. How to estimate poly bag costing in the apparel industry?
3. How to estimate poly bag costing in the clothing industry?
4. Mention the poly bag costing calculation method.
5. Describe poly bag costing for a garments merchandiser.
Mayedul Islam is a Founder and Editor of Garments Merchandising. He is an expert in garments merchandising, and writing is his passion. He loves to write articles about apparel, textile, and garment washing, especially on merchandising. Mail him at mayedul.islam66@gmail.com
12 thoughts on “How to Calculate Poly Bag Costing in Garment Industry?”
1. This is very important article about poly bag costing. It will be effective for beginners. Thanks for sharing.
2. Thanks for a helpful topic.
3. Your blog is enriched by so many useful contents. Luckily, I found it while I am searching another topic in google. I will frequently visit your blog. Please carry on your writing to deliver such
interesting article. You gave me an idea for posting on my own blog. Thanks for this post, let me check out your other posts.
4. what is 3300 for? Could you explain pls.
1. i am very great full to see post.
5. what is 3300 for? Could you explain pls.
6. how to calculate for PE bag?
7. How convert from GAUGE to MM thickness ?
1. Search in Google plz
8. For 1pc poly bag, LDPE polymer needed,
= “””” ….(38.4/1000) kg= 0.04kg…….”..?????/ = (0.04 × 2.20) lbs= 0.08lbs
So, cost of LDPE polymer for 1pc poly bag ($),
= LDPE polymer needed for 1pc poly bag × LDPE polymer cost per pound (lb)
= 0.08 × 0.40……???? HOW COME… FROM ” 0.04KG TO .04 “….???
= 0.03
So, the cost of LDPE polymer or cost for 1pc poly bag is $0.03
Cost of 1000pcs poly bag,
= $(1000 × 0.03) = $30
SO THAT MY CALCULATION NOT MATCHING
9. How to calculate polyproplene bag price
10. what is 3300 for? Could you explain pls. | {"url":"https://garmentsmerchandising.com/how-to-calculate-poly-bag-costing-in-garment-industry/","timestamp":"2024-11-11T22:45:29Z","content_type":"text/html","content_length":"109157","record_id":"<urn:uuid:879ab496-0887-479e-9621-575e9380ff6f>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00589.warc.gz"} |
Effect of window shape on the detection of hyperuniformity via the local number variance
Hyperuniform many-particle systems in d-dimensional space, which include crystals, quasicrystals, and some exotic disordered systems, are characterized by an anomalous suppression of density
fluctuations at large length scales such that the local number variance within a 'spherical' observation window grows more slowly than the window volume. In usual circumstances, this direct-space
condition is equivalent to the Fourier-space hyperuniformity condition that the structure factor vanishes as the wavenumber goes to zero. In this paper, we comprehensively study the effect of
aspherical window shapes with characteristic size L on the direct-space condition for hyperuniform systems. For lattices, we demonstrate that the variance growth rate can depend on the shape as well
as the orientation of the windows, and in some cases, the growth rate can be faster than the window volume (i.e. L^d), which may lead one to falsely conclude that the system is non-hyperuniform
solely according to the direct-space condition. We begin by numerically investigating the variance of two-dimensional lattices using 'superdisk' windows, whose convex shapes continuously interpolate
between circles (p = 1) and squares (p → ∞), as prescribed by a deformation parameter p, when the superdisk symmetry axis is aligned with the lattice. Subsequently, we analyze the variance for
lattices as a function of the window orientation, especially for two-dimensional lattices using square windows (superdisk when p → ∞). Based on this analysis, we explain the reason why the variance
for d = 2 can grow faster than the window area or even slower than the window perimeter (e.g., like ln(L)). We then study the generalized condition of the window orientation, under which the variance
can grow as fast as or faster than L^d (window volume), to the case of Bravais lattices and parallelepiped windows in ℝ^d. In the case of isotropic disordered hyperuniform systems, we prove that the
large-L asymptotic behavior of the variance is independent of the window shape for convex windows. We conclude that the orientationally-averaged variance, instead of the conventional one using
windows with a fixed orientation, can be used to resolve the window-shape dependence of the direct-space hyperuniformity condition. We suggest a new direct-space hyperuniformity condition that is
valid for any convex window. The analysis on the window orientations demonstrates an example of physical systems exhibiting commensurate-incommensurate transitions and is closely related to problems
in number theory (e.g. Diophantine approximation and Gauss' circle problem) and discrepancy theory.
All Science Journal Classification (ASJC) codes
• Statistical and Nonlinear Physics
• Statistics and Probability
• Statistics, Probability and Uncertainty
• fluctuation phenomena
• random/ordered microstructures
• structural correlations | {"url":"https://collaborate.princeton.edu/en/publications/effect-of-window-shape-on-the-detection-of-hyperuniformity-via-th","timestamp":"2024-11-09T08:15:07Z","content_type":"text/html","content_length":"58362","record_id":"<urn:uuid:b919bd47-0054-483f-9eb5-287a8c431e32>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00314.warc.gz"}
GR9367, Q 63
Any help with GR9367 question 63?
Let R be the circular region of the xy plane with the center at the origin and radius 2.
Double Integral (over R) of e ^ -(x^2 + y^2) dx dy = ?
A. 4 * Pi
B. Pi *exp(-4)
C. 4* Pi * exp(-4)
D. Pi * (1 - exp(-4))
E. 4*Pi*(exp(1) - exp(-4))
Re: GR9367, Q 63
The key here seems to be the power of the exponential (along with the fact we have a nice smooth curve to integrate over):
$$x^2+y^2$$, which implies we can convert this problem to polar coordinates, with $$r^2 = x^2+y^2$$ and
$$0 \le \theta \le 2\pi$$ and $$0 \le r \le 2$$. So, then you can use a u-substitution and integrate the following integral:
$$\int_0^{2\pi}\int_0^2 re^{-r^2} \, dr\, d\theta$$
With $$u = r^2$$, $$du = 2r\,dr$$, the inner integral becomes $$\frac{1}{2}\int_0^4 e^{-u}\,du = \frac{1}{2}(1-e^{-4})$$, so the whole integral equals $$2\pi \cdot \frac{1}{2}(1-e^{-4}) = \pi(1-e^{-4})$$, which shows that D is the correct solution. | {"url":"https://mathematicsgre.com/viewtopic.php?f=1&t=297","timestamp":"2024-11-02T14:02:38Z","content_type":"text/html","content_length":"18800","record_id":"<urn:uuid:8c6eece0-9cbb-4a70-93ee-e9e7cfec696f>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00588.warc.gz"}
DPSTF2 - Linux Manuals (3)
DPSTF2 (3) - Linux Manuals
dpstf2.f -
subroutine dpstf2 (UPLO, N, A, LDA, PIV, RANK, TOL, WORK, INFO)
DPSTF2 computes the Cholesky factorization with complete pivoting of a real symmetric or complex Hermitian positive semi-definite matrix.
Function/Subroutine Documentation
subroutine dpstf2 (characterUPLO, integerN, double precision, dimension( lda, * )A, integerLDA, integer, dimension( n )PIV, integerRANK, double precisionTOL, double precision, dimension( 2*n )WORK,
DPSTF2 computes the Cholesky factorization with complete pivoting of a real symmetric or complex Hermitian positive semi-definite matrix.
DPSTF2 computes the Cholesky factorization with complete
pivoting of a real symmetric positive semidefinite matrix A.
The factorization has the form
P**T * A * P = U**T * U , if UPLO = 'U',
P**T * A * P = L * L**T, if UPLO = 'L',
where U is an upper triangular matrix and L is lower triangular, and
P is stored as vector PIV.
This algorithm does not attempt to check that A is positive
semidefinite. This version of the algorithm calls level 2 BLAS.
UPLO is CHARACTER*1
Specifies whether the upper or lower triangular part of the
symmetric matrix A is stored.
= 'U': Upper triangular
= 'L': Lower triangular
N is INTEGER
The order of the matrix A. N >= 0.
A is DOUBLE PRECISION array, dimension (LDA,N)
On entry, the symmetric matrix A. If UPLO = 'U', the leading
n by n upper triangular part of A contains the upper
triangular part of the matrix A, and the strictly lower
triangular part of A is not referenced. If UPLO = 'L', the
leading n by n lower triangular part of A contains the lower
triangular part of the matrix A, and the strictly upper
triangular part of A is not referenced.
On exit, if INFO = 0, the factor U or L from the Cholesky
factorization as above.
PIV is INTEGER array, dimension (N)
PIV is such that the nonzero entries are P( PIV(K), K ) = 1.
RANK is INTEGER
The rank of A given by the number of steps the algorithm completed.
TOL is DOUBLE PRECISION
User defined tolerance. If TOL < 0, then N*U*MAX( A( K,K ) )
will be used. The algorithm terminates at the (K-1)st step
if the pivot <= TOL.
LDA is INTEGER
The leading dimension of the array A. LDA >= max(1,N).
WORK is DOUBLE PRECISION array, dimension (2*N)
Work space.
INFO is INTEGER
< 0: If INFO = -K, the K-th argument had an illegal value,
= 0: algorithm completed successfully, and
> 0: the matrix A is either rank deficient with computed rank
as returned in RANK, or is indefinite. See Section 7 of
LAPACK Working Note #161 for further information.
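For readers who want to see the algorithm rather than call it, here is a minimal NumPy sketch of the unblocked, complete-pivoting Cholesky factorization described above (an illustration only, not the LAPACK calling convention; in practice you would call LAPACK itself through your language's bindings):
import numpy as np

def pstf2_sketch(A, tol=-1.0):
    # P^T A P = L L^T with complete (diagonal) pivoting for a symmetric
    # positive semidefinite A. Returns (L, piv, rank). Unblocked sketch only.
    A = np.array(A, dtype=float)
    n = A.shape[0]
    piv = np.arange(n)
    if tol < 0:                      # mirror the documented default
        tol = n * np.finfo(float).eps * np.diag(A).max()
    rank = n
    for j in range(n):
        # complete pivoting: largest remaining diagonal entry becomes the pivot
        p = j + int(np.argmax(np.diag(A)[j:]))
        if A[p, p] <= tol:
            rank = j                 # remaining pivots negligible: stop early
            break
        A[[j, p], :] = A[[p, j], :]  # symmetric row/column swap
        A[:, [j, p]] = A[:, [p, j]]
        piv[[j, p]] = piv[[p, j]]
        A[j, j] = np.sqrt(A[j, j])   # standard Cholesky step
        A[j+1:, j] /= A[j, j]
        A[j+1:, j+1:] -= np.outer(A[j+1:, j], A[j+1:, j])
    L = np.tril(A)
    L[:, rank:] = 0.0                # zero out the unfactored trailing part
    return L, piv, rank
With piv encoding P, the factorization can be checked for positive semidefinite input A0 via np.allclose(A0[np.ix_(piv, piv)], L @ L.T), up to the tolerance.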
Univ. of Tennessee
Univ. of California Berkeley
Univ. of Colorado Denver
NAG Ltd.
September 2012
Definition at line 141 of file dpstf2.f.
Generated automatically by Doxygen for LAPACK from the source code. | {"url":"https://www.systutorials.com/docs/linux/man/docs/linux/man/3-DPSTF2/","timestamp":"2024-11-13T08:17:08Z","content_type":"text/html","content_length":"9938","record_id":"<urn:uuid:07aacf69-2a24-4c8a-906f-a83024b4df2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00451.warc.gz"} |
Topics in Algebra, Chapter 3.5
This page covers section 3.5 (“More Ideals and Quotient Rings”). Throughout, $R$ is a ring.
Topics covered: 3.5
• Lemma 3.5.1: Let $R$ be commutative and unital and only have the ideals $\left(0\right)$ and $R$. Then $R$ is a field.
• Definition: An ideal $I\subset R$ is a maximal ideal if any other ideal $J$ such that $I\subset J\subset R$ is either $J=I$ or $J=R$.
• The maximal ideals of $\mathbb{Z}$ are exactly the ideals $(p)=\{kp \mid k\in \mathbb{Z}\}$ for $p$ a prime integer. See also: problem 3.4.8.
• Theorem 3.5.1: Let $R$ be commutative and unital and let $M$ be an ideal of $R$. Then $M$ is a maximal ideal if and only if $R/M$ is a field.
The problems below are paraphrased from/inspired by those given in Topics in Algebra by Herstein. The solutions are my own unless otherwise noted. I will generally try, in my solutions, to stick to
the development in the text. This means that problems will not be solved using ideas and theorems presented further on in the book.
Let $a \neq 0$ be an element of $R$ and consider $aR=\{ar \mid r\in R\}$, which was shown to be a right ideal in problem 3.4.14. Because $1\in R$, we have that $a \cdot 1 = a \neq 0$ is a
non-zero element of $aR$. By the conditions of the problem, we conclude that $aR=R$. In particular, there exists $b\in R$ such that $ab=1$. We would like to show also that $ba=1$. If that is the
case, then we have exhibited a multiplicative inverse in $R$ for an arbitrary non-zero ring element.
Considering $bR$ and arguing as above, we find an element $c\in R$ such that $bc=1$. However, we see that $abc=a\left(bc\right)=a1=a$ while $abc=\left(ab\right)c=1c=c$ so that $a=c$. Therefore, $ab=
ba=1$ and we have proved that every non-zero element of $R$ has a multiplicative inverse, so that $R$ is a division ring.
This is related to problem 3.5.1 in that we only relax the condition on $R$ being unital. My solution was heavily influenced by the nice solution by AllTheCheese in this forum thread.
Case 1: there exists some $a\in R$ such that $aR=R$. In that case, there exists an element $e\in R$ such that $ae=a$. We would like to show that this $e$ is the multiplicative unit for the ring, which will put us in the realm of problem 3.5.1. First recall the annihilator $\mathrm{Ann}(a)=\{x\in R\mid ax=0\}$, shown in problem 3.4.17 to be a right ideal of $R$. As $e \notin \mathrm{Ann}(a)$, we must have that $\mathrm{Ann}(a)=(0)$, so $a$ annihilates nothing but the zero element. However, right-multiplying the equation $ae=a$ by $a$, we find $0=aea-aa=a(ea-a)$. In light of the comment about $\mathrm{Ann}(a)$, we have that $ea=a$.
Now take any other element $b\in R$; we would like to show that $eb=be=b$, so that $e$ is truly the unit element. Recalling that $aR=R$, there exists $x\in R$ such that $ax=b$, so $eb=eax=ax=b$. It is
more difficult to show that $be=b$.
Let $be=c$ and multiply on the right by $e$ to see that $be=ce$ (using $e^{2}=e$; see below for the proof*). Then $(b-c)e=0$. Recall problem 3.4.18, which tells us that $\lambda(Re)=\{x\in R \mid xre=0 \text{ for all } r\in R\}$ is a right ideal. Putting $x=a$ and $r=e$, we see that $aee=ae=a \neq 0$, so there exists at least one element of $R$ which doesn't belong to $\lambda(Re)$. With our strict condition on ideals, this means that $\lambda(Re)=(0)$. In other words, there does not exist a non-zero element $x\in R$ for which $xre=0$ for all $r\in R$. Letting $x=b-c$, and assuming $x \neq 0$, we see that there is at least some $r\in R$ such that $(b-c)re \neq 0$, which forces us to conclude that $\mathrm{Ann}(b-c)=(0)$. However, we have already seen that $(b-c)e=0$, a contradiction. The only way out is to have that $c=b$, which shows that $be=b$, after all. Now we have shown that case 1 leads to the ring having a unit, and therefore being a division ring by problem 3.5.1.
Case 2: there is no $a\in R$ such that $aR=R$; this is the complement of case 1. Then $aR=\left(0\right)$ always so that $ab=0$ for any $a,b\in R$. Now, let $a\in R$ be a non-zero element. If we
consider the set $I=\left\{0,a,2a,3a,\dots \right\}$, we see that it is a right ideal because multiplication from the outside always yields $0$ which belongs to $I$, while closure under addition is
clear. Also, $I$ contains the non-zero element $a$, so $I=R$. Now, if $|R|$ is finite, the sequence $0, a, 2a, \dots$ will yield $|R|$ distinct elements and then loop around to elements it has already visited. In particular, if $|R|$ is composite, say $|R|=mn$ with $m,n>1$ integers, then $ma$ is non-zero, and $J=\{0,ma,2ma,\dots\}$ is a right ideal with a non-zero element, but it is a proper subset of $I$. By the condition of the problem, such a $J$ may not exist. Therefore if $R$ is in case 2 and is finite, then $|R|$ must be prime.
Finally, we must also consider case 2 with $|R|$ infinite. In fact, this is not possible. Simply consider the ideal $J=\{0,2a,4a,\dots\}$, which is a proper subset of $I$ with a non-zero element. As before, this is a contradiction of the terms of the problem.
Here is a brief digression from the main course of the proof to show that $e^{2}=e$ for the alleged identity element $e$ of $R$. Recall that the non-zero element $a$ satisfied $ae=ea=a$ and that $ax=0$ implied $x=0$. Now, $ea \neq 0$ implies that $eR \neq (0)$, so $eR=R$. In particular, there exists $d\in R$ with $ed=e$. Multiplying on the left by $a$ gives $0=aed-ae=ad-ae=a(d-e)$. As $a$ only annihilates the zero element, we must have that $d=e$, so $e^{2}=e$ as claimed.
(a) There is a natural homomorphism to consider, namely $\varphi : \mathbb{Z}/(p) \to \mathbb{Z}_{p}$ given by $\varphi(a+(p)) = a \bmod p$, where on the right hand side $a$ is reduced modulo $p$ so as to live in $\mathbb{Z}_{p}$. To show that this is well-defined, let $a, a' \in \mathbb{Z}$ be such that $a+(p)=a'+(p)$. Then $a-a' \in (p)$, so they differ by a multiple of $p$ and the map is well-defined. It is also easily seen to be a homomorphism. Because the kernel is trivial, $\varphi$ is an isomorphism.
(b) By problem 3.4.8, we know that $\left(p\right)$ is a maximal ideal of $\mathbb{Z}$, and by theorem 3.5.1, we therefore have that $\mathbb{Z}\mathrm{/}\left(p\right)$ is a field, because it is a
commutative unital ring ($\mathbb{Z}$) modded out by a maximal ideal.
First we note that an invertible element in this ring is one which has no zeroes. The multiplicative identity element is the constant function $x↦1$, and if $f$ has no roots then it has a
multiplicative inverse given by $x↦1\mathrm{/}f\left(x\right)$. If an ideal contains an invertible element, then it is all of $R$. Therefore every element of a proper ideal must have a root.
The sketch of the proof is as follows:
1. Let $I$ be a proper ideal of $R$. Any two elements $f,g\in I$ have a root in common.
2. Every element of $I$ has at least one root in common. In other words, there is a non-empty set ${Z}_{I}$ such that every element $f$ of $I$ satisfies $f\left({Z}_{I}\right)=0$.
3. A maximal ideal $M$ must have $|Z_{M}|=1$ and thus be of the stated form.
(1) Let $f,g\in I$. We have that ${f}^{2}$, ${g}^{2}$ and therefore ${f}^{2}+{g}^{2}$ are also in $I$ because it is an ideal. Suppose that $f$ and $g$ share no root. Then at any root $x$ of $f$, we
have $f\left(x{\right)}^{2}+g\left(x{\right)}^{2}=g\left(x{\right)}^{2}>0$. Similarly $f\left(x{\right)}^{2}+g\left(x{\right)}^{2}=f\left(x{\right)}^{2}>0$ at any root $x$ of $g$. Of course, ${f}^{2}
+{g}^{2}>0$ elsewhere. Therefore if $f$ and $g$ share no root, then $I$ contains the invertible element ${f}^{2}+{g}^{2}$, and hence $I=R$. We are interested in proper ideals of $R$, in which, as we
have just seen, any pair of elements has a root in common.
We can say something more general than the pairwise statement, though. Given any finite subset of a proper ideal $I$, say $\left\{{f}_{1},\dots ,{f}_{n}\right\}\subset I$, the same argument shows
that all members of that subset share at least one zero. Otherwise ${\sum }_{i}{f}_{i}^{2}$ would be an invertible element which forces $I=R$.
With $f\in I$ and writing ${Z}_{f}$ for the (non-empty) set of zeroes of $f$, we have just shown that the collection $\left\{{Z}_{f}\mid f\in I\right\}$ has the finite intersection property. Any
finite subcollection of that collection has non-empty intersection.
(2) We should now like to extend the statement above to cover the existence of a root shared by everything in the ideal. That is, we would like to prove that $Z_{I}=\bigcap_{f\in I} Z_{f} \neq \varnothing.$
It is not true in general that the finite intersection property extends to a non-empty intersection on the whole collection. However, we have additional conditions at our disposal.
Let $f\in R$. Because $f$ is continuous, we know that the inverse image under $f$ of an open set is open. In particular, ${f}^{-1}\left(\mathbb{R}\setminus 0\right)$ is open. Its complement, ${f}^
{-1}\left(0\right)={Z}_{f}$, is therefore closed in $\left[0,1\right]$. That is, every zero set ${Z}_{f}$ that we consider is topologically closed.
Additionally, $\left[0,1\right]$ is a compact set. In a compact metric space such as $\left[0,1\right]$, any collection of closed sets with the finite intersection property has a non-empty
intersection (proof). This follows simply by taking the definition that “every open cover of a compact set has a finite subcover” and turning it into a statement about the closed sets given by the
complements of the sets in the cover. Therefore we have that, for any proper ideal $I$ of $R$, there is a nonempty set ${Z}_{I}$ on which every member of $I$ takes the value zero.
(3) Now we want to determine what a maximal ideal $M$ looks like. If $|Z_{M}|>1$, then $M$ may not be maximal: we could take a proper, non-empty subset $Z_{J}\subset Z_{M}$ and construct the proper ideal $J=\{f\in R\mid f(Z_{J})=0\}$. If $f\in M$, then $f(Z_{J})\subset f(Z_{M})=0$, so we also have $f\in J$ and therefore $M\subset J$. In other words, reducing the size of the zero set makes the ideal bigger: the bigger ideal contains all the functions that were zero on the expanded zero set, but it also contains new functions that take other values on the roots that were excluded. On a side note, it may be true that zero sets are in one-to-one correspondence with ideals, but I do not know at this time. For the purposes of this exercise, it suffices to construct one ideal from a given zero set, as we constructed $J$ earlier in this paragraph.
Thus it must be the case that a maximal ideal $M$ has exactly one shared root among its elements, $|Z_{M}|=1$. Calling that root $\gamma$, we must have $M$ as a subset of $M' = \{f\in R\mid f(\gamma)=0\}$. However, we easily verify that $M'$ is itself an ideal, so any proper subset of it would clearly not be maximal. Therefore, our maximal ideal is $M'$ and the claim is proven. | {"url":"https://jonathan.bergknoff.com/journal/topics-in-algebra-chapter-3-section-5/","timestamp":"2024-11-14T01:39:19Z","content_type":"text/html","content_length":"220469","record_id":"<urn:uuid:3bf941b7-34ad-4fe9-955d-833f44d18d12>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00845.warc.gz"}
The Stacks project
Lemma 51.2.1. Let $A$ be a ring and let $I$ be a finitely generated ideal. Set $Z = V(I) \subset X = \mathop{\mathrm{Spec}}(A)$. For $K \in D(A)$ corresponding to $\widetilde{K} \in D_\mathit{QCoh}(\
mathcal{O}_ X)$ via Derived Categories of Schemes, Lemma 36.3.5 there is a functorial isomorphism
\[ R\Gamma _ Z(K) = R\Gamma _ Z(X, \widetilde{K}) \]
where on the left we have Dualizing Complexes, Equation (47.9.0.1) and on the right we have the functor of Cohomology, Section 20.34.
| {"url":"https://stacks.math.columbia.edu/tag/0DWQ","timestamp":"2024-11-07T04:46:43Z","content_type":"text/html","content_length":"23823","record_id":"<urn:uuid:dfef2613-b118-4ffd-8032-99cd14210165>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00649.warc.gz"}
finding-mk-average | Leetcode
Finding Mk Average - Leetcode Solution
LeetCode: Finding Mk Average Leetcode Solution
Difficulty: Hard
Topics: design heap-priority-queue
The problem statement for the Finding Mk Average problem on LeetCode is as follows:
You are given two integers, m and k, and a stream of integers. Implement a data structure that calculates the MKAverage for the stream. The constructor MKAverage(m, k) initializes the structure with an empty stream and the two integers m and k. The method addElement(num) inserts a new element num into the stream. The method calculateMKAverage() computes the MKAverage of the current stream: if the stream contains fewer than m elements, it returns -1; otherwise it takes the last m elements, removes the smallest k and the largest k of them, and returns the average (using integer division) of the rest.
To solve the given problem, we need a data structure that can receive data in a stream and compute a trimmed average over the last m elements. A straightforward approach uses a deque as a sliding window:
1. Maintain a deque with a maximum size of m that always holds the last m integers of the stream.
2. In addElement, push the new integer onto the back of the deque; if the deque now holds more than m elements, pop the oldest one from the front.
3. In calculateMKAverage, return -1 if the deque holds fewer than m elements.
4. Otherwise, copy the window into a vector and sort it.
5. Discard the k smallest and k largest values, sum the remaining m - 2k values, and return their integer average.
This costs O(m log m) per query. Faster solutions keep the window split across three balanced multisets (the k smallest values, the middle m - 2k values with a running sum, and the k largest values) to answer each query in O(log m), at the price of trickier bookkeeping.
Here is the code implementation of the above algorithm:
class MKAverage {
    deque<int> window;   // the last m elements of the stream
    long m, k;
public:
    MKAverage(int m, int k) {
        this->m = m;
        this->k = k;
    }

    void addElement(int num) {
        window.push_back(num);
        if ((long)window.size() > m)
            window.pop_front();   // keep only the last m elements
    }

    int calculateMKAverage() {
        if ((long)window.size() < m)
            return -1;            // fewer than m elements seen so far
        vector<int> v(window.begin(), window.end());
        sort(v.begin(), v.end());
        // drop the k smallest and k largest, average the middle m - 2k
        long sum = accumulate(v.begin() + k, v.end() - k, 0L);
        return sum / (m - 2 * k);
    }
};
The implementation of the MKAverage class includes a constructor and two methods: addElement and calculateMKAverage. The constructor stores m and k. addElement appends the new element to the deque and, if the deque has grown past m elements, removes the oldest one, so the deque always holds exactly the current window. calculateMKAverage returns -1 when fewer than m elements have arrived; otherwise it copies the window, sorts it, skips the first k and last k entries, and returns the integer average of the remaining m - 2k values.
This implementation has an amortized O(1) time complexity for adding a new integer and O(m log m) for calculating the MKAverage (due to the sort). The space complexity is O(m) for the deque plus O(m) for the temporary sorted copy. If calculateMKAverage is called frequently, the three-multiset approach sketched above brings each query down to O(log m).
Finding Mk Average Solution Code | {"url":"https://prepfortech.io/leetcode-solutions/finding-mk-average","timestamp":"2024-11-08T08:38:14Z","content_type":"text/html","content_length":"59585","record_id":"<urn:uuid:3997f752-e22e-40f3-8f38-c0457362dbed>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00698.warc.gz"} |
January 2010 | 1,704 pages | SAGE Publications Ltd
Causality is a core problem in social science methodology, as the laws of causality found in physics, which state generalizations without exceptions, are not found in the social sciences. As a consequence, classical definitions of the causal relation, such as John Stuart Mill's definition in terms of invariant succession, need either to be modified and qualified or replaced by a different concept of causality entirely. This has led to a long and complex literature on the problems of causality.
This four-volume major reference work, Causality, covers the main issues, methods of analysis, and alternative concepts of causality, including the classic texts applying these alternative concepts and methods to empirical cases. The volumes give a substantial historical and philosophical introduction relevant to the concerns of practitioners. As a whole, the volumes represent a complete guide to the literature on social science causality from the beginning to the present.
VOLUME 1
On the Study of Causes
L.A. Quetelet
Quetelet on Probabilities
John F.W. Herschel
Jacques Bertillon
The Scientific Law
Karl Pearson
Cause and Effect-Probability
Karl Pearson
Contingency and Correlation-the Insufficiency of Causation
Karl Pearson
On the Correlation of Total Pauperism with Proportion of Out-Relief
G. Udny Yule
An Investigation into the Causes in Pauperism in England, Chiefly During the Last Two Intercensal Decades
G. Udny Yule
Partial Association
G. Udny Yule
VOLUME 2
The Generalizing Theories: Adequate cause
H.L.A. Hart and Tony Honoré
Concerning Cause and the Law of Torts
Guido Calabresi
Causation in Tort Law
Richard W. Wright
Causal Ordering and Identifiability
Herbert A. Simon
Spurious Correlation: A causal interpretation
Herbert A. Simon
Correlation and Causality: The multivariate case
H.M. Blalock, Jr.
The Introduction of Additional Variables and the Problem of Spuriousness
Herbert Hyman
The Introduction of Additional Variables and the Elaboration of Analysis
Herbert Hyman
The Environment and Disease: Association or causation?
Austin Bradford Hill
Investigating Causal Relations by Econometric Models and Cross-spectral Methods
C.W.J. Granger
Spurious Regressions in Econometrics
C.W.J. Granger and P. Newbold
Testing for Causality: A personal viewpoint
C.W.J. Granger
Statistics and Causal Inference
Paul W. Holland
Statistics and Causal Inference: Comment: Which ifs have causal answers?
Donald B. Rubin
Statistics and Causal Inference: Comment
D.R. Cox
Statistics and Causal Inference: Comment: Statistics and metaphysics
Clark Glymour
Statistics and Causal Inference: Comment
Clive Granger
Statistics and Causal Inference: Rejoinder
Paul W. Holland
Causal Inference, Path Analysis, and Recursive Structural Equations Models
Paul W. Holland
Dangers of Cigarette Smoking
Ronald A. Fisher
Cigarettes, Cancer, and Statistics
Ronald A. Fisher
The Nature of Probability
Ronald A. Fisher
Lung Cancer and Cigarettes
Ronald A. Fisher
Cancer and Smoking
Ronald A. Fisher
Ronald A. Fisher
Sample Selection Bias as a Specification Error
James J. Heckman
Choosing Among Alternative Nonexperimental Methods for Estimating the Impact of Social Programs: The case of manpower training
James J. Heckman and Joseph V. Hotz
Choosing Among Alternative Nonexperimental Methods for Estimating the Impact of Social Programs: The case of manpower training: Comment
Paul W. Holland
Choosing Among Alternative Nonexperimental Methods for Estimating the Impact of Social Programs: The case of manpower training: Comment
Robert Moffitt
Choosing Among Alternative Nonexperimental Methods for Estimating the Impact of Social Programs: The case of manpower training: Rejoinder
James J. Heckman and Joseph V. Hotz
VOLUME 3
Graphs, Causality, and Structural Equation Models
Judea Pearl
Confounding and Collapsibility in Causal Inference
Sander Greenland, Judea Pearl and James M. Robins
Causal Diagrams for Empirical Research
Judea Pearl
Graphical Models for Causation, and the Identification Problem
David A. Freedman
Measures of Association for Cross Classifications
Leo A. Goodman and William H. Kruskal
Simple Models for the Analysis of Association in Cross-classifications Having Ordered Categories
Leo A. Goodman
The Multivariate Analysis of Qualitative Data: Interactions among multiple classifications
Leo A. Goodman
The Analysis of Cross-classified Data: Independence, quasi-independence, and interactions in contingency tables with or without missing entries
Leo A. Goodman
The Central Role of the Propensity Score in Observational Studies for Causal Effects
Paul R. Rosenbaum and Donald B. Rubin
Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies
Donald B. Rubin
Constructing a Control Group Using Multivariate Matched Sampling Methods that Incorporate the Propensity Score
Paul R. Rosenbaum and Donald B. Rubin
Reducing Bias in Observational Studies Using Subclassification on the Propensity Score
Paul R. Rosenbaum and Donald B. Rubin
Identification of Causal Effects Using Instrumental Variables
Joshua D. Angrist, Guido W. Imbens and Donald B. Rubin
Bayesian Inference for Causal Effects: The role of randomization
Donald B. Rubin
Estimating Causal Effects from Large Data Sets Using Propensity Scores
Donald B. Rubin
Assignment of Treatment Group on the Basis of a Covariate
Donald B. Rubin
From Association to Causation in Observational Studies: The role of tests of strongly ignorable treatment assignment
Paul R. Rosenbaum
Notes on the Theory of Association of Attributes in Statistics
G. Udny Yule
The Interpretation of Interaction in Contingency Tables
E.H. Simpson
On Simpson's Paradox and the Sure-thing Principle
Colin R. Blyth
Confounding and Simpson's Paradox
Steven A. Julious and Mark A. Mullee
Comment on: 'Confounding and Simpson's Paradox'
C.R. Charig
Ecological Correlations and the Behavior of Individuals
W.S. Robinson
Social experiments: Some developments over the past fifteen years
T.D. Cook and W.R. Shadish
The Moderator-mediator Variable Distinction in Social Psychological Research: Conceptual, strategic and statistical considerations
Reuben M. Baron and David A. Kenny
VOLUME 4
Causal Laws and Effective Strategies
Nancy Cartwright
Causes and Conditions
J.L. Mackie
Causation as Influence
David Lewis
Small n's and Big Conclusions: An examination of the reasoning in comparative studies based on a small number of cases
Stanley Lieberson
Utilizing Causal Models to Discover Flaws in Experiments
Herbert L. Costner
Statistical Models and Shoe Leather
David A. Freedman
From Association to Causation Via Regression
David A. Freedman
Nuisance Variables and the Ex Post Facto Design
Paul Meehl
The Path Analysis Controversy: A new statistical approach to strong appraisal of verisimilitude
Paul E. Meehl and Nils G. Waller | {"url":"https://uk.sagepub.com/en-gb/asi/causality/book233193","timestamp":"2024-11-05T16:37:50Z","content_type":"text/html","content_length":"117779","record_id":"<urn:uuid:4018c9bb-d173-4447-ad1d-43f688bedc5d>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00709.warc.gz"} |
Support Vector Machines in Python - A Step-by-Step Guide
Support vector machines (SVMs) are one of the world's most popular machine learning algorithms.
SVMs can be used for either classification problems or regression problems, which makes them quite versatile.
In this tutorial, you will learn how to build your first Python support vector machines model from scratch using the breast cancer data set included with scikit-learn.
The Python Libraries We Will Need In This Tutorial
You will be using a number of open-source Python libraries in this tutorial, including NumPy, pandas, and matplotlib. Here are some imports that you'll need to run before getting started:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
Next up, you'll import the data set we will be using throughout this tutorial.
The Data Set We Will Use In This Tutorial
This tutorial makes use of the breast cancer data set that comes included with scikit-learn. Accordingly, we will now import that data set into our Python script.
First, import the load_breast_cancer function from the datasets module of scikit-learn with this command:
from sklearn.datasets import load_breast_cancer
Next, you need to create an instance of the breast cancer data set. The following statement should do the trick:
cancer_data = load_breast_cancer()
This cancer_data variable includes more than just the breast cancer data itself. As an example, we will see shortly that there is a useful description contained in this cancer_data data structure.
Because of this, the last step that we need to do in importing the data set is to store the data alone in its own DataFrame called raw_data. Here is the code to do this:
raw_data = pd.DataFrame(cancer_data['data'], columns = cancer_data['feature_names'])
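As a quick optional sanity check (not part of the original flow), you can confirm that the DataFrame matches the 569 observations and 30 features described in the data set documentation below:
print(raw_data.shape)        # expected: (569, 30)
print(raw_data.columns[:5])  # the first few feature names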
Let's investigate what's actually contained in this data set.
Every data set included in scikit-learn comes with a description field that can help you understand what the data set is describing.
Let's print this description. The following statement should do the trick:
print(cancer_data['DESCR'])
This generates:
.. _breast_cancer_dataset:
Breast cancer wisconsin (diagnostic) dataset
**Data Set Characteristics:**
:Number of Instances: 569
:Number of Attributes: 30 numeric, predictive attributes and the class
:Attribute Information:
- radius (mean of distances from center to points on the perimeter)
- texture (standard deviation of gray-scale values)
- perimeter
- area
- smoothness (local variation in radius lengths)
- compactness (perimeter^2 / area - 1.0)
- concavity (severity of concave portions of the contour)
- concave points (number of concave portions of the contour)
- symmetry
- fractal dimension ("coastline approximation" - 1)
The mean, standard error, and "worst" or largest (mean of the three
worst/largest values) of these features were computed for each image,
resulting in 30 features. For instance, field 0 is Mean Radius, field
10 is Radius SE, field 20 is Worst Radius.
- class:
- WDBC-Malignant
- WDBC-Benign
:Summary Statistics:
===================================== ====== ======
Min Max
===================================== ====== ======
radius (mean): 6.981 28.11
texture (mean): 9.71 39.28
perimeter (mean): 43.79 188.5
area (mean): 143.5 2501.0
smoothness (mean): 0.053 0.163
compactness (mean): 0.019 0.345
concavity (mean): 0.0 0.427
concave points (mean): 0.0 0.201
symmetry (mean): 0.106 0.304
fractal dimension (mean): 0.05 0.097
radius (standard error): 0.112 2.873
texture (standard error): 0.36 4.885
perimeter (standard error): 0.757 21.98
area (standard error): 6.802 542.2
smoothness (standard error): 0.002 0.031
compactness (standard error): 0.002 0.135
concavity (standard error): 0.0 0.396
concave points (standard error): 0.0 0.053
symmetry (standard error): 0.008 0.079
fractal dimension (standard error): 0.001 0.03
radius (worst): 7.93 36.04
texture (worst): 12.02 49.54
perimeter (worst): 50.41 251.2
area (worst): 185.2 4254.0
smoothness (worst): 0.071 0.223
compactness (worst): 0.027 1.058
concavity (worst): 0.0 1.252
concave points (worst): 0.0 0.291
symmetry (worst): 0.156 0.664
fractal dimension (worst): 0.055 0.208
===================================== ====== ======
:Missing Attribute Values: None
:Class Distribution: 212 - Malignant, 357 - Benign
:Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian
:Donor: Nick Street
:Date: November, 1995
This is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.
Features are computed from a digitized image of a fine needle
aspirate (FNA) of a breast mass. They describe
characteristics of the cell nuclei present in the image.
Separating plane described above was obtained using
Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree
Construction Via Linear Programming." Proceedings of the 4th
Midwest Artificial Intelligence and Cognitive Science Society,
pp. 97-101, 1992], a classification method which uses linear
programming to construct a decision tree. Relevant features
were selected using an exhaustive search in the space of 1-4
features and 1-3 separating planes.
The actual linear program used to obtain the separating plane
in the 3-dimensional space is that described in:
[K. P. Bennett and O. L. Mangasarian: "Robust Linear
Programming Discrimination of Two Linearly Inseparable Sets",
Optimization Methods and Software 1, 1992, 23-34].
This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/
.. topic:: References
- W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction
for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on
Electronic Imaging: Science and Technology, volume 1905, pages 861-870,
San Jose, CA, 1993.
- O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and
prognosis via linear programming. Operations Research, 43(4), pages 570-577,
July-August 1995.
- W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques
to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994)
The most important takeaways from this data set description are:
• There are 569 observations in the data set
• Each observation has 30 numeric attributes
Now that we have an understanding of how our data set is structured, let's move on to splitting our data set into training data and test data.
Splitting the Data Set Into Training Data and Test Data
To split our data set into training data and test data, the first thing we need to do is specify our x and y variables.
Our x variables will be the raw_data pandas DataFrame that we created earlier. Our y variables need to be parsed from the original cancer_data object that we created earlier, where it is stored under
the key target.
More specifically, here is the code to create our x and y variables:
x = raw_data
y = cancer_data['target']
We will be using scikit-learn's train_test_split function combined with list unpacking to split our data set into training data and test data (just like we did with linear regression and logistic
regression earlier in this course).
First you'll need to import the function with the following statement:
from sklearn.model_selection import train_test_split
Now you can create training data and test data along both the x and y axes with the following statement:
x_training_data, x_test_data, y_training_data, y_test_data = train_test_split(x, y, test_size = 0.3)
This splits the data such that the test data is 30% of the original data set (indicated by the parameter test_size = 0.3).
Now that our data is split, let's move on to training our first support vector machines model.
Training The Support Vector Machines Model
Before you can train your first support vector machine model, you'll need to import the model class from scikit-learn.
The SVC class lives within scikit-learn's svm module. Here is the statement to import it:
from sklearn.svm import SVC
Now let's create an instance of this class and assign it to the variable model:
model = SVC()
We can now train the SVM model using the same method as with our k-nearest neighbors model and our random forests model earlier in this course: by invoking the fit method on it, and passing in
x_training_data and y_training_data.
Here's the code to do this:
model.fit(x_training_data, y_training_data)
Our model has now been trained. Let's move on to making predictions with the model in the next section of this tutorial.
Making Predictions With Our Support Vector Machines Model
Any machine learning model created using scikit-learn can be used to make predictions by simply invoking the predict method on it and passing in the array of values that you'd like to generate
predictions from.
In this case, here is the Python statement that you would use to store predictions from the x_test_data in a variable called predictions:
predictions = model.predict(x_test_data)
We'll assess the performance of our model next.
Assessing the Performance of Our Support Vector Machines Model
We'll use the same performance measurement techniques for our support vector machines model as we did with the other classification models we've built in this course: a classification_report and a confusion_matrix.
To start, let's import these functions from scikit-learn:
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
First let's generate our classification_report:
print(classification_report(y_test_data, predictions))
This generates:
precision recall f1-score support
0 1.00 0.84 0.91 67
1 0.90 1.00 0.95 104
accuracy 0.94 171
macro avg 0.95 0.92 0.93 171
weighted avg 0.94 0.94 0.93 171
Next let's generate our confusion matrix:
print(confusion_matrix(y_test_data, predictions))
This generates:
The Full Code For This Tutorial
You can view the full code for this tutorial in this GitHub repository. It is also pasted below for your reference:
#Data imports
import pandas as pd
import numpy as np
#Visualization imports
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
#Import the data set from scikit-learn
from sklearn.datasets import load_breast_cancer
cancer_data = load_breast_cancer()
raw_data = pd.DataFrame(cancer_data['data'], columns = cancer_data['feature_names'])
# print(cancer_data['DESCR'])
#Split the data set into training data and test data
x = raw_data
y = cancer_data['target']
from sklearn.model_selection import train_test_split
x_training_data, x_test_data, y_training_data, y_test_data = train_test_split(x, y, test_size = 0.3)
#Train the SVM model
from sklearn.svm import SVC
model = SVC()
model.fit(x_training_data, y_training_data)
#Make predictions with the model
predictions = model.predict(x_test_data)
#Measure the performance of our model
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
print(classification_report(y_test_data, predictions))
print(confusion_matrix(y_test_data, predictions))
Final Thoughts
In this tutorial, you learned how to build Python support vector machines models.
Here is a brief summary of what was discussed in this tutorial:
• How to import and load the built-in breast cancer data set from scikit-learn
• How to print descriptions from the built-in datasets included with scikit-learn
• How to split your data set into training data and test data using scikit-learn
• How to import the SVC model from scikit-learn's svm module
• How to train an SVM model
• How to make predictions with a support vector machines model in Python
• How to measure the performance of a support vector machines model using the classification_report and confusion_matrix functions
A3 - Programming Help
A3 consists of three phases.
1. Implement a standard min-heap in Heap.java.
2. Implement a hash table in HashTable.java.
3. Augment your heap in Heap.java to implement efficient contains and changePriority methods.
Getting Started
The Github Classroom invitation link for this assignment is in Assignment 3 on Canvas. Begin by accepting the invitation and cloning a local working copy of your repository as you have for past assignments.
The Heap class depends on the AList class you wrote for Lab 4. Copy your AList.java from your lab 4 repository into your A3 repo's lib/src/main/java/heap/ directory.
Unit Tests and Gradle
AListTest.java, from Lab 4, has been included. If your implementation fails any of the AList tests, fix that first.
You are provided with unit tests for each phase in A3Test.java. The test methods are numbered with three-digit numbers; the hundreds place indicates which phase the test pertains to. Test often, and
make sure you pass any tests associated with a TODO item before moving on to the next one. You may find it helpful to run only the tests for the phase you're currently working on using the --tests
flag with a wildcard; for example, to run only the phase 2 tests, you could enter
gradle test --tests "test2*"
Phase 1
In Phase 1, you’ll complete the implementation of a min-heap given in the skeleton repository’s Heap.java file. For details on how these operations work, see the slides from Lecture 12.
The tasks listed below are marked in Heap.java as TODO 1.0, TODO 1.1, and so on. Phase 3 involves further modifications to Heap.java. For now, ignore anything in the code marked as Phase 3 Only, or
TODO 3.x.
1. Read and understand the class invariant in the comment at the top of Heap.java. Ignore the Phase 3 parts for now. This specifies the properties the Heap must satisfy. All public methods are
responsible for making sure that the class invariants are true before the method returns.
2. Implement the add method, using the bubbleUp helper according to its specification.
3. Implement the swap helper method, which you’ll use in both bubbling routines.
4. Implement the bubbleUp helper method. Feel free to use private helper methods to keep this code clean. (A self-contained sketch of this pattern appears after the list.)
5. Implement peek. Recall that peek returns the minimum element without modifying the heap.
6. Implement poll using bubbleDown, which you’ll implement next.
7. Implement bubbleDown. You are highly encouraged to use one or more helper methods to keep this code clean. In fact, we’ve included a suggested private method smallerChild, along with its
specification, that we used in our solution.
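To make the swap and bubbleUp tasks concrete, here is a hedged, self-contained sketch. It uses a plain int[] of priorities rather than the assignment's generic, AList-backed Heap, so every name here (MiniHeap, a, size) is illustrative only:

final class MiniHeap {
    private final int[] a = new int[64]; // priorities only, for illustration
    private int size = 0;

    void add(int priority) {
        a[size] = priority; // place at the end, then restore the invariant
        bubbleUp(size);
        size++;
    }

    private void bubbleUp(int k) {
        // The parent of index k in a 0-based array heap is (k - 1) / 2.
        while (k > 0 && a[k] < a[(k - 1) / 2]) {
            swap(k, (k - 1) / 2);
            k = (k - 1) / 2;
        }
    }

    private void swap(int i, int j) {
        int tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
}

bubbleDown is the mirror image: compare a node with its smaller child (hence the suggested smallerChild helper) and swap downward until the min-heap property holds.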
Phase 2
In Phase 2 you’ll implement a hash table with chaining for collision resolution. Although we could base it on our AList class, we need access to the internals of the growth process to handle growing
and rehashing as needed. Similarly, for the underlying storage we could use an array of LinkedLists, but dealing with Java’s LinkedList machinery ends up being a bit of a headache—more so than simply
writing a little linked list code to do the chaining by hand. For these reasons, you’ll complete a standalone hash table implementation in HashTable.java without using any tools from java.util. The
following major design decisions have been made for you:
• The hash table encapsulates its key-value pairs in an inner class called Pair.
• The hash table uses chaining for collision resolution.
• The Pair class doubles as a linked list node, so it has fields to store its key, value and a reference to the next Pair in the chain.
• The underlying array doubles in size when the load factor exceeds 0.8.
Here’s some sample code using the hash table and its output using my solution. The dump method shows the internal layout of the table: each bucket in the buckets array stores a reference to Pair
objects, each of which stores a reference to the next Pair in the chain, or null.
HashTable<Integer,Integer> hm = new HashTable<Integer,Integer>(4);
Table size: 3 capacity: 4
0. -->(4, 1)-->(0, 0)--|
1. --|
2. --|
3. -->(19, 4)--|
Your job in Phase 2 is to implement four methods: get, put, containsKey, and remove. Details of how you implement them are left up to you – be sure to read the specification of each method carefully, so you implement the specified behavior correctly, completely, and within the given efficiency bounds.
Your tasks:
1. To get familiar with the underlying storage mechanism, read the existing code in HashTable.java. In particular, read the javadoc comments above the class, the comments describing each field and
its purpose, and the Pair inner class. Then take a look at the provided dump method, which may be useful for debugging.
2. Implement get(K key).
3. Implement put without worrying about load factor and array growth. Make sure that you replace the value if the key is already in the map, and insert a new key-value pair otherwise. After
implementing get and put without growing the array, you should pass test210PutGet, test211PutGet, test212Put, test213Put, test230PutGet, and test231Put.
4. Implement containsKey. Your code should pass test240containsKey and test241containsKey.
5. Implement remove. Your code should pass test250Remove and test251Remove.
6. Finally, modify put to check the load factor and grow and rehash the array if the load factor exceeds 0.8 after the insertion. Use the createBucketArray helper method to create the new array.
We've also included a method stub for a growIfNeeded private helper method with a suggested specification; you are welcome to implement and use this, but not required to. At this point your code
should pass all Phase 2 tests. Note: If you're not careful, put can have worst-case O(n^2) runtime. (A self-contained sketch of a put that avoids this follows.)
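As a reference point, here is a self-contained miniature of the chained table described above. The names (buckets, Pair, growIfNeeded) follow this handout's description, but the real skeleton's signatures may differ, so treat this as a sketch rather than the solution:

@SuppressWarnings("unchecked")
final class MiniTable<K, V> {
    private static final class Pair<K, V> {
        final K key;
        V value;
        Pair<K, V> next; // the Pair doubles as a linked-list node
        Pair(K key, V value, Pair<K, V> next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    private Pair<K, V>[] buckets = (Pair<K, V>[]) new Pair[4];
    private int size = 0;

    public V put(K key, V value) {
        int b = index(key);
        for (Pair<K, V> p = buckets[b]; p != null; p = p.next) {
            if (p.key.equals(key)) { // key already present: replace the value
                V old = p.value;
                p.value = value;
                return old;
            }
        }
        buckets[b] = new Pair<>(key, value, buckets[b]); // prepend to the chain
        size++;
        growIfNeeded(); // grow at most once per put to avoid the O(n^2) trap
        return null;
    }

    private void growIfNeeded() {
        if ((double) size / buckets.length <= 0.8) return;
        Pair<K, V>[] old = buckets;
        buckets = (Pair<K, V>[]) new Pair[old.length * 2];
        for (Pair<K, V> head : old) {
            for (Pair<K, V> p = head; p != null; p = p.next) {
                int b = index(p.key); // rehash each pair exactly once
                buckets[b] = new Pair<>(p.key, p.value, buckets[b]);
            }
        }
    }

    private int index(K key) {
        return (key.hashCode() & 0x7fffffff) % buckets.length;
    }
}

The O(n^2) trap is a rehash that calls put on every element: each of those puts re-checks the load factor and may trigger further work. Rebuilding the chains directly, as above, keeps growth linear.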
Phase 3
In Phase 3, we turn back to the Heap class. Now that we have a working HashTable implementation, we can overlay the hash table on the heap in order to make the contains and changePriority operations
efficient. This is a common strategy when building data structures in the real world: often a textbook data structure does not provide efficient runtimes for all the operations you need, so you can
combine multiple textbook data structures to get more efficient operations at the cost of some extra bookkeeping and storage.
In the Phase 1 heap, finding a given value in the tree requires searching the whole tree, so the runtime is O(n). In Phase 3, we’ll use a HashTable to map from values to heap indices, which allows us
to find any value in the heap in expected O(1) time using a hash table lookup. Keeping the HashTable up to date requires small changes throughout the Heap class to make sure that the HashTable stays
consistent with the state of the heap – whenever the heap is modified, we need to update the HashTable to match.
One constraint imposed by the use of a HashTable is that each value can only map to a single index. This means that if we insert two entries with equal value into the table, we can’t differentiate
between them and store both indices in the HashTable. To deal with this, we will simply add the requirement that all values stored in the Heap must be distinct. Note that two different values may still have
equal priorities.
Your tasks are as follows:
1. Update the add method to keep the map consistent with the state of the heap. Also be sure to throw an exception if a duplicate value is inserted— the map makes it possible to check this
efficiently. For now, don’t worry about the map during bubbleUp—the next TODO will handle this. At this point, your code should pass test300Add.
2. Update the swap method to keep the HashTable consistent with the heap. If you used swap to implement both bubbling routines, bubbleUp and bubbleDown will stay consistent automatically. You should now pass test310Swap and the related swap tests. (A sketch of this bookkeeping follows the list.)
3. Update poll. Your code should now pass test330Poll_BubbleDown_NoDups and the related poll tests.
4. Implement contains. Once again, the map makes this easy and efficient. You should pass test350contains.
5. Implement changePriority by finding the value in question, updating its priority, and fixing the Heap property by bubbling it up or down. You should now pass test360ChangePriority.
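Here is a sketch of that bookkeeping, using java.util.HashMap as a stand-in for your Phase 2 HashTable (the real assignment uses your own table) and String values for simplicity; all names are illustrative:

import java.util.HashMap;

final class MiniIndexedHeap {
    private final String[] vals = new String[64]; // heap array of values
    private final HashMap<String, Integer> index = new HashMap<>(); // value -> heap index
    private int size = 0;

    void add(String v) {
        if (index.containsKey(v)) { // the map makes the duplicate check O(1)
            throw new IllegalArgumentException("duplicate value");
        }
        vals[size] = v;
        index.put(v, size); // keep the map consistent with the array
        size++;
    }

    void swap(int i, int j) {
        String tmp = vals[i];
        vals[i] = vals[j];
        vals[j] = tmp;
        index.put(vals[i], i); // re-point both moved values
        index.put(vals[j], j);
    }

    boolean contains(String v) {
        return index.containsKey(v); // expected O(1), versus O(n) tree search
    }
}

Because both bubbling routines go through swap, updating the map there is enough to keep it consistent during add, poll, and changePriority.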
At this point, your code should be correct! Check the following things before you submit:
1. Each method adheres to the asymptotic runtime given in its specification, if any.
2. Your code follows the style guidelines set out in the rubric and the syllabus.
3. Your submission compiles and passes all tests on the command line in Linux without modification.
4. All code is committed and pushed to your A3 GitHub repository.
Game Plan
Start small, test incrementally, and git commit often. Please keep track of the number of hours you spend on this assignment, as you will be asked to report it in A3 Survey. Hours spent will not
affect your grade.
A suggested timeline for completing the assignment in a stress-free manner is given below:
1. By 2/20: Complete TODO 1.0. Reading is hard, but it’s worth your while: don’t skip this step!
2. By 2/23: Complete TODO 1.1–1.6 (phase 1 heap functionality)
3. By 2/26: Complete TODO 2.0–2.5 (phase 2 hash table functionality)
4. By 3/1: Complete TODO 3.1–3.5 (phase 3 heap functionality)
How and What to Submit
Submit the assignment by pushing your final changes to GitHub before the deadline, then submitting A3 Survey on Canvas.
Extra Credit
For up to 5 points of extra credit, complete the following enhancement in an extensions branch of your repository. One piece of important functionality that’s missing from our hash table
implementation is the ability to iterate over
its key-value pairs. Augment the base assignment’s hash table class so that it implements Iterable<HashTable<K,V>.Pair>. This requires implementing the iterator method to return an instance of an
iterator class that you’ll need to implement as an inner class of the HashTable. You may find the Java documentation for Iterable<T> and Iterator<T> helpful.
If you complete any extra credit, please include a readme.txt file in your repository describing what you did, instructions for testing it (ideally by running some example code and/or unit tests
that you've written) and any design decisions you made. Please also mention your enhancements in the A3 Survey on Canvas as well as emailing the TA similar to A2.
Points are awarded for correctness and efficiency of your program, and points can be deducted for errors in commenting, style, clarity, and following assignment instructions. Correctness will be
judged based on the unit tests provided to you. A3 is out of a total of 50 points.
• (1 point) Code is pushed to github and hours are reported in A3 Survey
Code : Correctness
• (33 points) Each unit test is worth 1 point. These are from the A3Test file.
Code : Efficiency
• (2 points) Heap.add is average-case O(log n) and worst-case O(n), unless rehashing is necessary in which case the runtime is worst case O(n + C), where C is the smaller array's capacity.
• (1 point) Heap.peek is O(1)
• (2 points) Heap.poll is average-case O(log n) and worst-case O(n)
• (2 points) HashTable.get is average-case O(1), worst-case O(n)
• (2 points) HashTable.put is average-case O(1), worst-case O(n)
• (2 points) HashTable.containsKey is average-case O(1), worst-case O(n)
• (2 points) HashTable.remove is average-case O(1), worst-case O(n)
• (1 point) Heap.contains is average-case O(1) and worst-case O(n)
• (2 points) Heap.changePriority is average-case O(log n) and worst-case O(n)
Clarity deductions
• Include author, date and purpose in a comment at the top of each file you write any code in
• Methods you introduce should be accompanied by a precise specification
• Non-obvious code sections should be explained in comments
• Indentation should be consistent
• Methods should be written as concisely and clearly as possible
• Methods should not be too long – use private helper methods
• Code should not be cryptic and terse
• Variable and function names should be informative
How to Optimize Magnetizing Inductance for ZVS Zero Volt Switching
This article, based on the Frenetic newsletter by Sotiris Zorbas, Power Electronics Engineer, discusses optimisation of magnetizing inductance for ZVS (zero-voltage switching).
Our mission today is to understand how an LLC Converter achieves ZVS on the primary side.
The zero-voltage switching in an LLC Converter is directly linked to the magnetizing inductance of the Transformer and the output parasitic capacitance Coss of the mosfets that form the full/half
driving bridge.
Let’s start from an LTspice schematic as you can see in Figure 1.
I have used a simple trick for the Transformer model: I kept the series and magnetizing inductances separate and used an ideal Transformer utilizing high inductance perfectly coupled inductors.
That way, I could monitor the magnetizing current going through Lm (L4) and that going to L3, thus the load eventually. L7, L2 secondaries are sized at 1H/n^2, where n is the turns ratio.
What else is important in that schematic?
Switches S2, S3 are working like on/off switches using the Rds_on of the selected mosfet each time. The important part is putting a couple of capacitors in parallel with these to model the output
parasitic capacitance. That parameter can be found in the respective datasheets of the selected device. However, there is a trap here!
Is the output capacitance calculated based on time, or energy?
The output capacitance changes dynamically in mosfets when switching, ranging from a few picofarads to a few nanofarads, as the voltage on the drain-source pins drops from a high to a low potential.
The manufacturer has constructed an artificial effective capacitor that either gives the same stored energy value or the same charging time.
Looking at Figure 2 helps to better understand what I’ve just described.
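To make the distinction concrete, datasheets typically define two effective values over a stated drain-source voltage swing V_DS (exact symbols vary by manufacturer):

C_o(er) = 2 · E_oss / V_DS^2 (energy-related: stores the same energy as the real, nonlinear C_oss)
C_o(tr) = Q_oss / V_DS (time-related: holds the same charge, hence the same charging time at constant current)

For ZVS timing calculations like the one in this article, the time-related value Coss_tr is the relevant one, because the magnetizing current is roughly constant during the transition.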
An example design – case study.
I have used the worked example from application note SLUA923 to help the reader. The specs are listed in Table 1.
The first thing when looking to achieve ZVS is to realize the working mechanism that causes the parasitic capacitances to be discharged. Without getting into many details, the resonant current in an
LLC circulates through the LLC network and the power switches, causing ZVS of the latter. Part of the current is flowing through the magnetizing inductance Lm and part “flows” to the load through the
ideal Transformer.
In Figure 3, the total current through the LLC tank, marked as I_total, is L1's current. I_mag, as the name suggests, is the current through L_m. The difference of these currents flows into the ideal Transformer. Right at the switching transitions I_total equals I_mag, and we can assume that this is practically constant during the transitions.
The rate of voltage rise dv/dt of the switching transition depends on the total capacitance to be charged/discharged (2*Coss_tr) and the actual peak current level, as shown in Figure 3 above.
The usual 50-100ns of rise/fall times in 400V systems yield good results overall.
The magnetizing current swings triangularly from a maximum positive to a maximum negative value in T/2 as seen in Figure 3. At the moment of switching, to get the 1.28 A we just calculated, the current
needs to swing from -1.28 A to +1.28 A; thus ΔI = 2·I_mag is the current swing.
Using the basic formula for an inductor, V = L·(dI/dt):
The voltage at the magnetizing inductor is approximately half of the switching voltage because of the DC bias removal action of the resonant capacitor. So, in the end, we arrive at an upper bound on the magnetizing inductance.
To achieve 100 ns of switching time (0-100% of the voltage) we should select the magnetizing inductance below 332 μH. Checking out LTspice in Figure 4, we can verify the correctness of the previous calculation.
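For reference, here is a sketch of how that bound falls out; the symbols follow Figure 3, and the specific values of the switching period T, bus voltage V_bus and effective capacitance C_oss(tr) come from the design's Table 1 and the mosfet datasheet (not reproduced here):

ΔI = (V_bus / 2) · (T / 2) / L_m, so I_mag(peak) = ΔI / 2 = V_bus · T / (8 · L_m)
ZVS requires this current to slew both output capacitances through V_bus within t_sw:
I_mag(peak) ≥ 2 · C_oss(tr) · V_bus / t_sw
Cancelling V_bus and rearranging gives the design bound:
L_m ≤ T · t_sw / (16 · C_oss(tr))

Plugging in t_sw = 100 ns together with the design's switching period and C_oss(tr) should land near the 332 μH limit quoted above.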
Proper selection of the magnetizing inductance ensures ZVS conditions over an extensive input voltage range. Of course, lowering the magnetizing inductance comes with a price tag: reduced efficiency, especially at light loads, is observed because of the increase in resonant current.
ICFP Contest 2023 - JKRT Report
The International Conference on Functional Programming (ICFP) Programming Contest is an annual programming contest. The event lasted for 3 days, and out of curiosity, we created a team with my dear
colleague Jens Petersen.
In this post I present how we worked on the problems. I include many code samples for the curious reader, but feel free to skip over them or check out the repo. The event was divided in two parts: a
lightning division for the first 24 hours, and a full division for the remaining 48 hours.
Lightning division
At the ZuriHac we briefly talked about attending the contest, and a few days before we created a matrix chat room to get ready. We were on opposite timezones so we could only meet twice per day.
Haskell project scaffold
This was the first time I ever participated in such a competition, and we didn’t have a project template or toolbox. However we knew we wanted to use Haskell. Thus we setup a git repository with a
standard cabal file adjusted with the following settings:
• Set the default-language to GHC2021: Learn more about this.
• Enable the following extensions:
□ BlockArguments, ImportQualifiedPost, LambdaCase, MultiWayIf: useful small syntax change.
□ OverloadedRecordDot: enables using the record.field syntax.
□ DerivingStrategies: for better instances definition, e.g. deriving newtype.
□ PartialTypeSignatures along with -Wno-partial-type-signatures: enables omitting type signature, e.g. _ -> IO (Maybe _).
• Add the following extra build-depends in anticipation with -Wno-unused-packages:
□ MonadRandom, aeson, bytestring, directory, rio, time, vector: base enhancement
□ gloss: GUI
□ http-conduit, retry: network.
• Create a bunch of empty modules: Parser, Syntax, Eval, GUI and Solve.
• Setup the CI.
This enabled us to prepare the tool-chain before the event started so that we had a running ghcid --test Solve.main process ready to interpret our code.
The first specification
The contest started on Friday with a specification.pdf that described the following challenge: place musicians on a stage to provide the best possible sound for the attendees.
Here was the example problem input:
"room_width": 2000.0,
"room_height": 5000.0,
"stage_width": 1000.0,
"stage_height": 200.0,
"stage_bottom_left": [500.0, 0.0],
"musicians": [0, 1, 0],
"attendees": [
{"x": 100.0, "y": 500.0, "tastes": [1000.0, -1000.0]},
{"x": 200.0, "y": 1000.0, "tastes": [200.0, 200.0]},
{"x": 1100.0, "y": 800.0, "tastes": [800.0, 1500.0]}
And here was the example solution output:
"placements": [
{"x": 590.0, "y": 10.0},
{"x": 1100.0, "y": 100.0},
{"x": 1100.0, "y": 150.0}
Along with some placement rules (e.g. musicians must not overlap), we were given a scoring function to evaluate the solution, which we implement below.
Moreover musicians can block each other when they are on the path of an attendee.
Finally we had a few problems to solve.
After reading this specification, I got excited because it looked like we would be able to solve some problems. It was not the kind of problems where only a few solutions work. For example, simply
filling the stage randomly was a valid solution.
Decoding JSON
I started with the easy part. Using the excellent json-to-haskell website I generated the specification’s data types with their FromJSON and ToJSON instances. Then I tweaked the types to use a more
efficient list representation with unboxed vectors:
import Data.Vector.Unboxed qualified as UV
data Problem = Problem
{ problemStageHeight :: Int
, problemStageWidth :: Int
, problemMusicians :: UV.Vector Int
, problemRoomHeight :: Int
, problemRoomWidth :: Int
, problemAttendees :: [Attendee]
, problemStageBottomLeft :: (Int, Int)
data Solution = Solution
{ solutionPlacements :: UV.Vector (Int, Int)
Using a vector let us read a value in O(1) with UV.! instead of O(n) with Prelude.!!.
Gloss visualization
Next I wanted to see what the problems looked like. So I used the gloss library to draw a visualization:
drawProblem :: Problem -> Solution -> Picture
drawProblem problem solution = Pictures (room : stage : attendees)
room :: Picture
room =
Color red $
(int2Float problem.problemRoomWidth)
(int2Float problem.problemRoomHeight)
toAbs :: Float -> Float -> Picture -> Picture
toAbs x y = Translate (topX + x) (topY - y)
topX, topY :: Float
topX = -1 * fromIntegral problem.problemRoomWidth / 2
topY = fromIntegral problem.problemRoomHeight / 2
stage :: Picture
stage =
(stageWidth / 2 + int2Float stageX)
(stageHeight / 2 + int2Float stageY)
$ Color orange
$ Polygon
$ rectanglePath stageWidth stageHeight
stageWidth = int2Float problem.problemStageWidth
stageHeight = int2Float problem.problemStageHeight
(stageX, stageY) = problem.problemStageBottomLeft
attendees :: [Picture]
attendees = map drawAttendee problem.problemAttendees
drawAttendee :: Attendee -> Picture
drawAttendee attendee =
(fromIntegral attendee.attendeeX)
(fromIntegral attendee.attendeeY)
(Circle 3)
And finally the display code:
doRender :: Problem -> Solution -> IO ()
doRender problem solution = Gloss.display disp bg picture
disp = InWindow "ICFP Contest 2023" (winX, winY) (10, 10)
bg = greyN 0.6
picture = windowScale problem $ drawProblem problem solution
windowScale :: Problem -> Picture -> Picture
windowScale problem = Scale pscale pscale
pscale = fromIntegral scale * 0.98
| problem.problemRoomHeight > problem.problemRoomWidth =
winY `div` problem.problemRoomHeight
| otherwise =
winX `div` problem.problemRoomWidth
Gloss places the pictures’ origin at the center, so they needed to be translated and scaled with their absolute coordinates. Once this is done, the gloss display implementation is great because it
comes with navigation out of the box: you can move and zoom with the mouse.
Here is what the first and tenth problems looked like:
Implementing the happiness score
Next we needed a way to evaluate a solution. So we translated the specification into source code. I had some trouble reading the math notations, for example the half bracket meant that the happiness
had to be rounded up.
Later, Jens optimized this implementation by switching the coordinate representation from Float to Int. Here is the code we used for the lightning division:
score :: Problem -> Solution -> Int
score problem solution =
sum $ map (attendeeHappiness problem.problemMusicians solution) problem.problemAttendees
attendeeHappiness :: UV.Vector Int -> Solution -> Attendee -> Int
attendeeHappiness instruments solution attendee =
UV.sum $ UV.imap musicianImpact solution.solutionPlacements
musicianImpact :: Int -> (Int, Int) -> Int
musicianImpact musician placement
| isBlocked = 0
| otherwise =
let (d,m) = (1_000_000 * taste) `divMod` distance
in d + if m > 0 then 1 else 0
-- the musician's instrument
instrument = instruments ! musician
-- the attendee taste for this instrument
taste = attendee.attendeeTastes ! instrument
-- the distance between the attendee and the musician
distance = calcDistance attendee placement
-- is the musician blocked by another musician?
isBlocked = UV.any checkBlocked solution.solutionPlacements
checkBlocked :: (Int, Int) -> Bool
checkBlocked otherPlacement = otherDistance < distance && isCrossed
otherDistance = calcDistance attendee otherPlacement
isCrossed = lineCrossCircle attendee placement otherDistance 5 otherPlacement
calcDistance :: Attendee -> (Int, Int) -> Int
calcDistance attendee (px, py) = (attendee.attendeeX - px) ^ 2 + (attendee.attendeeY - py) ^ 2
-- | Check if the line between two points is blocked by a third point of a given radius (exclusive).
-- See: https://mathworld.wolfram.com/Circle-LineIntersection.html
lineCrossCircle :: Attendee -> (Int, Int) -> Int -> Int -> (Int, Int) -> Bool
lineCrossCircle attendee (mx, my) distance radius (px, py) = discriminant > 0
(x1, y1) = (attendee.attendeeX - px, attendee.attendeeY - py)
(x2, y2) = (mx - px, my - py)
d = x1 * y2 - x2 * y1
discriminant = radius ^ 2 * distance - d ^ 2
Thanks to the mathworld.wolfram.com website, I found a suitable circle-line intersection function.
Musician placements
I wanted to try a generative strategy, and to do that I needed to determine valid placements for the musicians. So I wrote a function to transforms the stage into a grid:
-- | Arranging the musicians in a grid, this function returns all the available placements.
allGridPlacement :: (Int, Int) -> UV.Vector (Int, Int)
allGridPlacement (width, height) = UV.fromList $ go radius radius []
-- go takes the current (x, y) position, and the list of accumulated position
go :: Int -> Int -> [(Int, Int)] -> [(Int, Int)]
go x y !acc
| -- there is room to fit another musician on this line, keep the y pos
x + nextMusician < width = go (x + diameter) y newAcc
| -- there is room to start another line, reset the x pos
y + nextMusician < height = go radius (y + diameter) newAcc
| -- this is the end
otherwise = newAcc
-- store the current pos in the accumulator
newAcc = (x, y) : acc
-- | Placement dimension: ( -r- o -r- )
radius, diameter :: Int
radius = 10
diameter = radius * 2
{- | nextMusician is the distance from the current position + a whole new musician
e.g: o -r- )( -r- o -r-)|
nextMusician :: Int
nextMusician = radius + diameter
Here is what the placements looked like:
Random mutation
I proposed the following strategy:
• generate a bunch of random placements.
• perform some small mutations.
• keep the best placements and repeat the step 2.
Here is the implementation context:
import Control.Monad.Random.Strict
type RandGen a = RandT StdGen IO a
-- | Helper to run the MonadRandom.
runRandGen :: RandGen a -> IO a
-- runRandGen action = evalRandT action (mkStdGen 42)
runRandGen action = do
stdg <- initStdGen
evalRandT action stdg
Note that the following implementation is incomplete: it should have tracked the different branches instead of only keeping the best solutions. This caused the solutions to quickly converge to
a local maximum. Nevertheless, here is the procedural implementation we used for the lightning division:
geneticSolve :: String -> Problem -> RandGen (Int, Solution)
geneticSolve name problem = do
initialSeeds <- replicateM seedCount (randomSolution problem placements)
((finalScore, finalSolution) : _) <- go genCount initialSeeds
solution <- toSolution problem finalSolution
pure (finalScore, solution)
genCount = 3
seedCount = 5
breedCount = 10
dim = (problem.problemStageWidth, problem.problemStageHeight)
placements = toAbsPlacement problem <$> allGridPlacement dim
total = UV.length placements
musicianCount = UV.length problem.problemMusicians
go :: Int -> [(Int, GenSolution)] -> RandGen [(Int, GenSolution)]
go 0 !seeds = pure seeds
go count !seeds = do
-- Generate a new population
population <- concat <$> traverse breedNewSolutions seeds
-- Order by score
let populationOrdered = sortOn (\(score, _) -> negate score) population
let best = case populationOrdered of
(score, _) : _ -> score
_ -> minBound
liftIO do
now <- getCurrentTime
sayString $ printf "%s %s: gen %2d - %10d" (take 25 $ iso8601Show now) name count best
-- Repeat the process, keeping only the best seed.
go (count - 1) (take seedCount populationOrdered)
breedNewSolutions :: (Int, GenSolution) -> RandGen [(Int, GenSolution)]
breedNewSolutions x@(_, s) = do
newSolutions <- replicateM breedCount (makeNewSeed s)
-- Keep the original seed
pure (x : newSolutions)
makeNewSeed :: GenSolution -> RandGen (Int, GenSolution)
makeNewSeed (GenSolution seedPlacements) = do
newSolution <- GenSolution <$> MV.clone seedPlacements
doMutate newSolution
score <- liftIO (scoreSolution problem newSolution)
pure (score, newSolution)
doMutate :: GenSolution -> RandGen ()
doMutate (GenSolution iov) = do
mutationCount <- getRandomR (genCount, MV.length iov `div` 5)
replicateM_ mutationCount do
-- Pick a random musician
musician <- getRandomR (0, musicianCount - 1)
-- Pick a random new position
swapPos <- getRandomR (0, total - 1)
-- Mutate
MV.swap iov musician swapPos
-- | All the positions are stored, that way the mutations happen in-place.
-- in 'toSolution' we keep only the one for the active musicians.
newtype GenSolution = GenSolution (MV.IOVector (Int, Int))
-- | Create a random solution.
randomSolution :: Problem -> UV.Vector (Int, Int) -> RandGen (Int, GenSolution)
randomSolution problem placements = do
iov <- V.thaw placements
-- Randomize the placements with the 'vector-shuffle' library
liftRandT \stdg -> do
newstdg <- stToIO $ VectorShuffling.Mutable.shuffle iov stdg
pure ((), newstdg)
let gs = GenSolution iov
score <- liftIO (scoreSolution problem gs)
pure (score, gs)
-- | Create the 'Solution' data from a 'GenSolution'.
toSolution :: Problem -> GenSolution -> IO Solution
toSolution problem (GenSolution iov) = do
xs <- UV.convert <$> V.freeze iov
pure $ Solution $ UV.take (UV.length problem.problemMusicians) xs
-- | Compute the score of a 'GenSolution'.
scoreSolution :: Problem -> GenSolution -> IO Int
scoreSolution problem gs = do
solution <- toSolution problem gs
pure $ scoreHappiness problem solution
This technique worked and it found acceptable solutions. The first day was almost over and we were missing some tooling to run the code efficiently.
Submitting the results
Jens added a command line interface so that we could run variations without rebuilding the project. We also worked on some network code to automate the submissions. We were running out of time, and
the organizers released more problems to be solved. At that time, we had 55 problems to crack. To do that, we created solve and submit commands. Then we stored all the problems and
their solutions in git.
For example, here was our submit function:
newtype SubmitID = SubmitID Text
deriving newtype (FromJSON)
deriving (Show)
submit :: ProblemID -> Solution -> IO (Maybe SubmitID)
submit pid solution = do
let obj = object ["problem_id" .= pid, "contents" .= decodeUtf8 (BSL.toStrict $ encode solution)]
token <- getEnv "ICFP_TOKEN"
manager <- newTlsManager
initialRequest <- parseRequest "https://api.icfpcontest.com/submission"
let request =
{ method = "POST"
, requestBody = RequestBodyLBS $ encode obj
, requestHeaders =
[ ("Content-Type", "application/json")
, ("Authorization", "Bearer " <> encodeUtf8 (pack token))
response <- httpLbs request manager
if statusCode (responseStatus response) == 201
then pure $ decode (responseBody response)
else do
putStrLn $ "The status code was: " ++ show (statusCode $ responseStatus response)
print $ responseBody response
return Nothing
As expected we had a bunch of bugs. Some solutions appeared unsolvable, some got a huge negative score. Sometimes the grid algorithm put musicians off the stage. Moreover, the scoreboard was constantly moving.
At the end of day 1 the lightning division was over.
The full specification
After 24 hour, the organizers released a new specification with the following changes:
• The room now had pillars that can block the sound.
• The musician had an extra closeness factor that affected the attendee’s happiness:
New problems were also released, for a grand total of 90 to be solved.
These extensions looked reasonable and we decided to give it a shot using the exact same strategy.
Overall code improvements
It was time to level-up our initial code. We added:
• exception handling.
• automatic saving.
• solution reloading.
• command line parameters.
I introduced a couple of new data types:
newtype ProblemID = ProblemID Int
deriving newtype (Show, Eq, Ord, Enum, Num, ToJSON)
data ProblemDescription = ProblemDescription
{ name :: ProblemID
, problem :: Problem
, -- a dense pillars representation using (x, y, radius)
pillars :: UV.Vector (Int, Int, Int)
data SolutionDescription = SolutionDescription
{ score :: Int
, musicianCount :: Int
, genPlacements :: MV.IOVector (Float, Float)
, genVolumes :: MV.IOVector Float
Haskell made such code refactoring really easy.
Updating the scoring function
To take into account the room’s pillars, we updated the attendeeHappiness implementation with:
-- is the musician blocked
isBlocked = isBlockedPillar || isBlockedMusician
-- … by a pillar (Extension 1)?
isBlockedPillar = UV.any checkBlockedPillar problemDesc.pillars
checkBlockedPillar :: (Int, Int, Int) -> Bool
checkBlockedPillar (px, py, radius) = isCrossed
otherPlacement = (px, py)
otherDistance = calcDistance attendee otherPlacement
isCrossed = lineCrossCircle attendee placement otherDistance radius otherPlacement
-- … by another musician?
isBlockedMusician = UV.any checkBlocked solution.solutionPlacements
To handle the closeness factor, we pre-computed the list of factor:
-- Extension 2: pre-compute all the factor in advance
calcClosenessFactor musician
| problemDesc.name > 0 && problemDesc.name < 56 = 1 -- extension is disabled for the first problems
| otherwise = 1 + UV.sum (UV.generate musicianCount calcMusicianDistance)
instrument = problemDesc.problem.problemMusicians UV.! musician
musicianPos = solution.solutionPlacements UV.! musician
calcMusicianDistance otherMusician
| otherMusician == musician || instrument /= otherInstrument = 0
| otherwise =
let d = calcDistance2 musicianPos (solution.solutionPlacements UV.! otherMusician)
in 1 / d
otherInstrument = problemDesc.problem.problemMusicians UV.! otherMusician
And we applied the formula as explained in the specification.
Better musician’s placement
Looking at the circle packing Wikipedia's page, I updated our placements generator to better pack the musicians. This unblocked most of the problems except number 38. That one had a very tight
stage, and I think we needed Float precision to accommodate all the musicians.
Here is the final placements technique we used:
-- | Packing the musicians in a grid, this function returns all the available placements.
allPackedPlacement :: (Int, Int) -> UV.Vector (Int, Int)
allPackedPlacement (width, height) = UV.fromList $ go 0 radius radius []
| -- do the offset per line
width > height = goLine
| -- do the offset per column
otherwise = goCol
goLine :: Int -> Int -> Int -> [(Int, Int)] -> [(Int, Int)]
goLine line x y !acc
| -- there is room to fit another musician on this line
x + nextMusician < width = goLine line (x + diameter) y newAcc
| -- there is room to start another line
y + nextMusician < height = goLine (line + 1) newX (y + newOffset) newAcc
| -- this is the end
otherwise = newAcc
newAcc = (x, y) : acc
-- we alternate the start position every two lines
newX | even line = diameter
| otherwise = radius
goCol :: Int -> Int -> Int -> [(Int, Int)] -> [(Int, Int)]
goCol col x y !acc
| -- there is room to fit another musician on this column
y + nextMusician < height = goCol col x (y + diameter) newAcc
| -- there is room to start another column
x + nextMusician < width = goCol (col + 1) (x + newOffset) newY newAcc
| -- this is the end
otherwise = newAcc
newAcc = (x, y) : acc
-- we alternate the start position every two columns
newY | even col = diameter
| otherwise = radius
-- best offset using int
newOffset = 19
Even though there are some gaps, this implementation may produce better results. Here is how it looked:
On some stages, the previous strategy was better, so we used this helper to pick the best one:
maximumPlacements :: Problem -> UV.Vector (Int, Int)
maximumPlacements problem = UV.map toAbsPlacement best
best | UV.length packed > UV.length grid = packed
| otherwise = grid
packed = allPackedPlacement dim
grid = allGridPlacement dim
dim = (problem.problemStageWidth, problem.problemStageHeight)
toAbsPlacement :: (Int, Int) -> (Int, Int)
toAbsPlacement (x, y) = (sx + x, sy + y)
(sx, sy) = problem.problemStageBottomLeft
Musician volumes
Halfway through the full division, the organizers released a third extension, and it turned out to be quite important. Musicians now had a volume attribute, and it affected the attendee happiness:
Using the same random strategy, we added volume permutations. After the solution reached a big score, we added command line arguments to control the amount of mutations used to improve the solutions.
We alternated between placement and volume mutations using these commands:
• cabal run progcon -- solve --placement 0 --volume 1 $PB_NUM
• cabal run progcon -- solve --placement 1 --volume 0 $PB_NUM
Driver mode
Instead of trying to solve the problem one by one, I implemented a driver mode to run the code autonomously:
mainDriver :: Int -> IO ()
mainDriver maxTime = do
let dryRun = False
putStrLn $ printf "Starting driver with max %d seconds" maxTime
-- start from the smallest score
solutions <- _sortProblemByScore
-- start from the oldest solution
-- solutions <- _sortProblemByDate
now <- getCurrentTime
let solutionsOrdered =
-- Focus on the first few problems
take 12 $
-- Start from the biggest/recent one
reverse solutions
-- Start from the smallest/oldest
-- solution
forM_ solutionsOrdered \(pid, time) -> do
ageSec :: Integer
ageSec = truncate (nominalDiffTimeToSeconds $ diffUTCTime now time) `div` 60
solution <- loadSolutionPath (solutionPath pid)
putStrLn $ printf "Trying to improve problem-%02s (%4s minutes old): %13s"
(show pid)
(show ageSec)
(showScore solution.score)
problem <- loadProblem pid
start_time <- getCurrentTime
unless dryRun do
improved <- runRandGen $ mainImprove maxTime problem start_time start_time solution 0
when improved do
void $ forkIO $ submitOne pid
-- | This function call 'tryImprove' repeatedly. It returns True on success.
mainImprove :: Int -> ProblemDescription -> UTCTime -> UTCTime -> SolutionDescription -> Int -> RandGen Bool
mainImprove maxTime problemDesc initial_time start_time solutionDesc idx = do
-- maybe get a new solution
mSolution <- tryImprove problemDesc solutionDesc (toEnum (idx `mod` 3))
(newTime, newSolution) <- case mSolution of
Nothing -> pure (start_time, solutionDesc) -- no improvement
Just sd -> liftIO do
saveSolutionPath sd (solutionPath problemDesc.name)
now <- getCurrentTime
sayString $ printf "%s #%02s new highscore:%14s (+%10s)"
(formatLogTime now)
(show problemDesc.name)
(showScore solutionDesc.score)
(showScore $ sd.score - solutionDesc.score)
pure (now, sd)
-- check how long we ran sinch the last improvement
end_time <- liftIO getCurrentTime
let elapsed = nominalDiffTimeToSeconds (diffUTCTime end_time start_time)
hasImproved = initial_time /= start_time -- start_time is increased by improvements
if elapsed < fromIntegral maxTime
then mainImprove maxTime problemDesc initial_time newTime newSolution (idx + 1)
else pure hasImproved
And here is the simpler tryImprove implementation:
data Mutation = Placement | Volume | Both
deriving (Enum, Bounded, Show)
-- | This function simply try to improve a given solution by applying a single mutation
tryImprove :: ProblemDescription -> SolutionDescription -> Mutation -> RandGen (Maybe SolutionDescription)
tryImprove problemDesc sd mutation = do
newSolution <- case mutation of
Placement -> do
genPlacements <- newPlacements
pure $ sd{genPlacements}
Volume -> do
genVolumes <- newVolumes
pure $ sd{genVolumes}
Both -> do
genPlacements <- newPlacements
genVolumes <- newVolumes
pure $ sd{genPlacements, genVolumes}
score <- scoreSolution problemDesc newSolution.genPlacements newSolution.genVolumes
pure $
if score > sd.score
then Just (newSolution{score})
else Nothing
musicianCount = UV.length problemDesc.problem.problemMusicians
newPlacements = do
-- Copy the previous placements and do one swap
iov <- MV.clone sd.genPlacements.iov
musician <- getRandomR (0, musicianCount - 1)
swapPos <- getRandomR (0, MV.length iov - 1)
MV.swap iov musician swapPos
pure $ GenPlacements iov
newVolumes = do
-- Copy the previous volumes and do one change
iov <- MV.clone sd.genVolumes
musician <- getRandomR (0, musicianCount - 1)
volume <- getRandomR (0, 10)
-- TODO: to a relative increase of the current volume?
MV.write iov musician volume
pure iov
To make the code runs in parallel, I used the scheduler library:
mainDriver :: Int -> IO ()
- mainDriver maxTime = do
+ mainDriver maxTime = withScheduler_ Par \scheduler -> do
let dryRun = False
- forM_ solutions \(pid, time) -> do
+ forM_ solutions \(pid, time) -> scheduleWork_ scheduler do
let ageSec :: Integer
… resulting in a load average like this:
We also updated the submit code to automatically git push the new solutions. That way we could let the code run in the background.
Last hours
The driver was making steady progress and at some point we reached position 68 on the scoreboard but the other teams were also busy updating their solutions. Unfortunately, our solver was not
improving the solutions fast enough and we started to fall off the scoreboard. Moreover, the organizers judiciously disabled the scoreboard in the last two hours. Our final position at that time was
91 with a total happiness score of 39.442 billion.
Lessons learned
I think the main issue was that we didn’t re-evaluate the initial strategy, and we didn’t leverage the gloss visualization to better understand how to solve the problems. We simply stuck with the
initial implementation and worked around it. Looking at it with a fresh mind, I can see how inefficient that was. For example, instead of picking a new random volume at each iteration, we should have
tried to increase or decrease the value, and kept what worked the best.
Moreover, before running the driver for the last run, I made a last minute change to better utilize the cores. Unfortunately, that small change made the new highscore to be discarded, resulting in
many cpu cycles to be wasted.
Lastly, being in opposite timezones was useful. The problem’s extensions were released during my day time and I worked on them until the evening. Then I wrote some documentations and explained the
code to Jens so that he could continue during his day time. It was like a programing relay race.
If I had to do this again, here is what I would do:
• Setup an infra: we could have used a VPN to share compute cores (though we only had two i5 laptops and one ryzen desktop), and to setup a database to keep track of the solutions.
• Invest in tooling: having a fast feedback loop and power-tools like the driver made a big difference.
• Make a bigger team: that way we could have assigned the problems and tried different strategies.
• Have fun: computers can be frustrating, but don't let that drain your motivation. For example, I got weird bugs trying to display the solutions in real-time with gloss, and I lost a bunch of
time trying in vain to make that work. Jens also struggled with network errors from the API. Eventually we kept working on it and we achieved some success.
I like working under time pressure, because you have to be pragmatic, and I think Haskell was a great fit for the competition. We were able to easily discuss and change the source code thanks to the
powerful type annotations.
Even though our results were not spectacular, I really enjoyed participating in the icfpc-2023. It was a lot of fun collaborating with Jens, and this weekend was quite memorable. Big thanks to my
wife for understanding, to the organizers and all the participants for the great competition!
If you would like to join our team next time, please let me know :)
3.7 Electromagnetic Energy: Units Conversion
The electromagnetic spectrum encompasses a wide range of wavelengths or energies of light. Scientists working in different disciplines will use units of wavelength or energy that are most convenient
for the region of electromagnetic energy in which they work. Different sub-disciplines of science working in the same region of light may also use different units of measurement because of the
established convention in their respective fields. For example, an astronomer working in the infrared region plots spectra using wavelength in micrometers, while a geochemist or chemist uses
wavenumbers, which is a unit proportional to energy.
It is useful to be able to convert among units so it is possible to compare data among scientific disciplines.
Learning Objectives
• Convert wavelengths of electromagnetic energy between commonly used units.
• Convert the wavelength of electromagnetic energy to wavenumbers (cm^-1), a commonly used unit in infrared and Raman spectroscopy.
• Describe the relationships between energy, wavelength, frequency, and wavenumber with the help of Planck’s equation.
Prior Knowledge and Skills
3.2 The Electromagnetic Spectrum
3.3 Wave Behavior
3.6 The Dual Nature of Electromagnetic Energy
Key Terms
Guided Inquiry
Units Conversion: Wavelength
Table 3.7.1. Selected common wavelength units and their symbols.
meter (m): 1 m
centimeter (cm): 10^-2 m
micrometer (μm): 10^-6 m
nanometer (nm): 10^-9 m
Angstrom* (Å): 10^-10 m
*Angstroms are not a standard unit of length but have been commonly used in the past to describe bond distances and atomic radii in minerals.
3.7.1. Convert orange light with a 630 nm wavelength to units of Angstroms:
3.7.2 Convert orange light with a 630 nm wavelength to units of centimeters:
Units Conversion: Wavelength to Wavenumbers
In 3.6, we saw that electromagnetic energy has characteristics of both a wave and a particle (photon). The Planck equation relates the energy of a photon to its frequency:
Equation 3.7.1. E = hν
Where E = energy, h is Planck’s constant (6.62607004 × 10^-34 m^2 kg / s), and ν “nu” is the frequency.
We can combine this with Equation 3.2.1 which relates wavelength (λ) and frequency (ν) of electromagnetic energy:
Equation 3.2.1 c = νλ
Where c = 299,792,458 m/s, or about 3×10^8 m/s.
Rearranging Equation 3.2.1 gives ν = c / λ,
and substituting for ν in Equation 3.7.1 results in:
Equation 3.7.2. E = hc / λ
3.7.3. Equation 3.7.2 relates wavelength to energy. Is wavelength proportional to or inversely proportional to energy?
Wavenumber (ν̃) is the number of waves per unit length, the reciprocal of the wavelength: ν̃ = 1 / λ, typically expressed in cm^-1 when λ is in centimeters.
Infrared and Raman spectra are commonly plotted as wavenumbers in units of cm^-1.
3.7.4. Is wavenumber proportional to or inversely proportional to energy?
3.7.5. Your astronomer friend reports seeing an absorption band in their measurements at 3 μm wavelength. What is this wavelength in units of wavenumbers (cm^-1)?
Scientists use different units of wavelength and energy depending upon the application and their specific sub-discipline. Spectroscopists use wavenumber as a unit that is proportional to energy and
is convenient to compare to wavelength.
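To make the arithmetic concrete, here is a minimal Python sketch of these conversions; the formulas and constants are the ones above, while the helper-function names are illustrative only.

import math

H = 6.62607004e-34   # Planck's constant (J s), Equation 3.7.1
C = 299_792_458.0    # speed of light (m/s), Equation 3.2.1

def nm_to_angstrom(wavelength_nm):
    return wavelength_nm * 10.0            # 1 nm = 10 Angstroms

def nm_to_cm(wavelength_nm):
    return wavelength_nm * 1e-7            # 1 nm = 10^-7 cm

def um_to_wavenumber(wavelength_um):
    return 1.0 / (wavelength_um * 1e-4)    # wavenumber (cm^-1) = 1 / lambda (cm)

def wavelength_to_energy(wavelength_m):
    return H * C / wavelength_m            # Equation 3.7.2: E = hc / lambda (J)

print(nm_to_angstrom(630))           # 6300.0 Angstroms  (question 3.7.1)
print(nm_to_cm(630))                 # 6.3e-05 cm        (question 3.7.2)
print(um_to_wavenumber(3))           # ~3333 cm^-1       (question 3.7.5)
print(wavelength_to_energy(630e-9))  # ~3.15e-19 J per photon of orange light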
There are many web calculators out there; for example, here is a page that converts to several units of energy at once: http://halas.rice.edu/conversions. It is also possible to simply Google "convert nanometers to centimeters" and have a calculator pop up in the search results window. However, it is useful to have a working knowledge of the approximate conversions between commonly used units in different fields.
Solution to 3.7.4. Wavenumber is directly proportional to energy: combining E = hc/λ with ν̃ = 1/λ gives E = hcν̃.
Solution to 3.7.5. 3 μm = 3 × 10^-4 cm, so ν̃ = 1/(3 × 10^-4 cm) ≈ 3333 cm^-1.
James Webb Space Telescope to Lift Off to L2 ~ But What’s L2?
Douglas MacDougal
There’s a buzz of activity at the Arianespace's launch complex near Kourou, French Guiana. Finally, after so many delays, the James Webb Space Telescope is ready for its trip into space this month by
an Ariane 5 rocket, the European Space Agency’s most reliable vehicle. After it’s up, JWST will spend a month journeying to the second Lagrangian point, the “L2 point,” where it will make its lasting
home. JWST will hover in an orbit not around the Earth, as does the Hubble Space Telescope, but around the Sun. It will mimic Earth’s orbit but be some distance out from it, at the L2 point. It will
also orbit around the L2 point, in a kind of perpendicular halo orbit with a six-month period, as the L2 point revolves around the Sun in lockstep with the Earth’s annual revolution. The Space
Telescope will always look away from the Sun and Earth. NASA tells us that the telescope will maintain a stable temperature out there, unaffected by passing in and out of the Earth's shadow.
Yet some are puzzled by this unusual arrangement, and how it could even work. What created these magical, mysterious places in the sky that scientists always want to send stuff to? There’s nothing
there, right? Others are vaguely aware of Lagrangian points but are in the dark about the not-so-simple physical theory behind them. All fair questions. To learn more, we’re going to take our own
quick mathematical tour to L2. But first things first. Let’s find out what these funky points are and why they are so-called.
We’ll imagine asking the eminent Joseph-Louis Lagrange himself, the brilliant French mathematician and astronomer who was born in 1736 in Turin, Italy [1]. Though he died in 1813, he has graciously
consented to answer a few questions for us. As moderator, I’ll let him do most of the talking.
Moderator: Professor Lagrange, I know you’ve been busy in new ways but glad you found time to join us. And you’re looking quite remarkably well, considering. Can you please explain your Lagrangian
points to us, including the L2 point?
Professor Lagrange talks about his mysterious points
Prof. Lagrange: Oui! Time puts everything back in its place, non? Merci for allowing me to be among you, and for telling me about the wonderful progress you’ve made in celestial mechanics from where
I left off two centuries ago. Consider three cases:
(1) Where the spacecraft is between the Earth and Sun, it is at the L1 point.
(2) Where it is outward of the Earth, it is at the L2 point.
(3) Where the spacecraft is opposite the Sun from us it is at the L3 point.
The L1, L2, and L3 points are all on the Earth-Sun axis. These, as I have shown, are points of only relative stability in any orbit.
Moderator: Are there other L points?
Prof. Lagrange: I found two other points of greater stability on our planet's orbital path, the L4 point, leading at sixty-degree angles from the Earth-Sun axis, and the L5 point, following at sixty
degrees. They thus form an equilateral triangle with the Earth-Sun line. En effet, my Lagrangian points can be found in almost any three-body system. These stable areas are spots where solar system
objects can accumulate and remain for eons. The best examples, confirmed long after my passing, are the Trojan asteroids clustering at the L4 and L5 points in Jupiter’s orbit. Here's a clear image
your moderator modified from NASA showing the L points:
Moderator: Come to think of it, professor, NASA just launched the Lucy mission in October to explore Jupiter’s Trojan Asteroids at both L4 and L5 points. It will reach four of the Trojan L4
asteroids: 3548 Eurybates (which has a tiny moon, Queta) and 15094 Polymele in 2027, and 11351 Leucus and 21900 Orus in 2028. Then Lucy will make a graceful arc to the other side of the solar system
to spy on the much larger binary Trojan asteroids 617 Patroclus and Menoetius in 2033 (binary because they orbit their common center of mass). The name Lucy comes from the australopithecine skeleton
discovered in 1974 revealing early secrets of evolution; it is hoped spacecraft Lucy will do the same for solar system evolution.
Prof. Lagrange: Mon Dieu! Quels miracles!
Student: I hate to interrupt, but sorry, Professor Lagrange, I’m confused. Thanks by the way for being here. What I am trying to figure out is: How can an object orbiting the Sun farther out from the
Earth, in a larger orbit, keep up with us? We learned in school that bodies in more distant orbits move more slowly. We overtake the slower, outer planets as we orbit the Sun, and the inner planets
outpace us. Why won’t the Space Telescope fall behind us if it is farther out?
The professor explains how his Lagrangian points work
Prof. Lagrange: Oh no, it will stay with us! But to know why requires more attentive understanding of my points. We will try a thought experiment to exercise our intuition before we stir up the pot
with equations. And from some simple deductions we will assuredly glean deeper insights that will prepare us for the mathematics.
First, any object – even a spacecraft – at Earth's distance from the Sun will also have Earth's one year period, correcte? Remember, by Kepler's remarkable Third Law, the period (squared) is proportional to the distance (cubed) from the Sun. So, if the distance is the same, the period is the same. Now let's look first at the L1 point and suppose your country has put a solar-orbiting
spacecraft into a smaller, more inward orbit, which we’ll assume for the sake of simplicity is circular. Venus moves with a quicker period than Earth, and Mercury is quicker still. The spacecraft’s
orbital period would, like the inner planets, also be less than a year, its increased velocity counteracting the increased pull of the Sun tending to draw it inward [2]. The actual period would of
course still be determined by Kepler’s Third Law.
But now let’s go one step more: What if the gravitational attraction of the Sun perceived by the space-faring craft were somehow made less? This could happen if the spacecraft is still close enough
to the Earth-Moon system to feel their gravity pulling it outward, offsetting the inward pull of the Sun and, in effect, reducing it; so, from the spacecraft's gravity-perceiving perspective, it feels
the same net solar pull just as the Earth does. If the ship is poised at some as-yet unknown distance between Earth’s system and the Sun, the net gravitational acceleration acting on it could be
reduced, so that the period of the spacecraft would not be as short as it otherwise would be at that distance [3]. In fact, if its distance from Earth were just so – the L1 point – its period could in
theory exactly match the Earth’s – it would keep up with the Earth exactly, like the camera car in one of your marathons. In that case, the sum of the positive pull of the Sun in one direction and
negative pull of the Earth-Moon system in the other direction must equal the centripetal force necessary to keep the spacecraft at L1. The balance of forces will result in the orbital velocity
necessary to keep the spacecraft's inertia from carrying it off in a straight line into space, n'est-ce pas vrai?
Student: OK, I get that, but how would it work for something that is farther out from the Earth, at L2, where the Space Telescope's going to be? Is it the same principle?
Prof. Lagrange: Exactement! A spacecraft or observatory such as JWST orbiting the Sun at a larger orbit than Earth would normally have a longer period than Earth (as in the case of the outer planets,
all of which orbit successively more slowly). But if it were positioned just close enough to Earth so that the gravitation of the Earth-Moon system supplemented the Sun’s pull, then it would have a
shorter orbital period than it otherwise would at that distance, i.e., were it experiencing the Sun’s gravity alone. At some particular distance – the L2 point – its period too would just match the
Earth’s period. That turns out to be (as we’ll demonstrate) about 1.5 million kilometers farther out from the Earth’s orbit.
Moderator: I know we’ve done this before. The Wilkinson Microwave Anisotropy Probe (WMAP) launched by NASA in 2001 went to the L2 point and lingered in a halo orbit. It measured the cosmic microwave
background (CMB) radiation from that thermally quiet environment. Other L2 examples I recall include the Herschel and Planck observatories launched by ESA back in 2009 and the ESA’s Gaia mission.
Student: Is it complicated to figure out exactly where these L points are?
Prof. Lagrange: No, but interestingly the study of my Lagrangian points yielded useful insights on how bodies orbit and interact in gravitational fields. While the interactions of two bodies are
straightforwardly analyzed with Newtonian mechanics (the so-called two body problem), things become much more difficult when three or more bodies are involved (the three body problem). Many such
problems cannot be solved by analysis but require computers that did not exist in my day to iterate approximate solutions. My Lagrangian points restrict the inquiry to certain spots in front and
behind and to each side of a planet’s (or moon’s) orbit, where solutions are much simpler. Thus, it is an interesting form of what is known as a restricted three body problem. I’m sure you’ve read my
1772 prize-winning paper on this, Essai sur le Problème des Trois Corps. But we need to put pencil to paper to mathematically calculate where the points are.
He shows how to compute the L2 distance
To find the distance from Earth to the L2 point, we need to think of the applicable forces. Here we use Newton’s brilliant gravitational force equation which says the gravitational force imparted
between two bodies is proportional to the product of their masses divided by the square of the distance between them [4]. All we need to do is calculate the gravitational force of the sun acting upon
the spacecraft and add it to the gravitational force of the earth acting upon the spacecraft; their sum will tell us the total gravitational force pulling the spacecraft inward. The other equation
tells us the centripetal force needed to buck its straight-line inertial force and hold it in orbit at the L2 point. Since we know its orbital period, we just need to find its distance out. We’ll
equate the two forces to find the magical spot where it will neither fall in nor fly off!
Student: Can you show us the actual equations please?
Prof. Lagrange: Oui. Let r be the distance between the Earth and the Sun, and R be the distance between the Earth and the L2 point that we want to solve for [5]. M is the mass (of the Earth-Moon system or the Sun, as indicated by the subscripts), and G is the Newtonian gravitational constant [6]. These numbers are in my notes below.
F1 is the Sun's gravitational pull on a spacecraft at L2, at distance r + R from the Sun:

F1 = G M_Sun m / (r + R)^2

F2 is the Earth-Moon system's gravitational pull on the spacecraft at distance R from Earth:

F2 = G M_EM m / R^2

F3 is the centripetal force needed to hold something in circular orbit around the Sun at distance R from Earth:

F3 = m ω^2 (r + R),

where ω is 2π/P, and P is the period of the Earth's orbit [7]. Our task is to solve the equation F1 + F2 = F3. That is to say, find the distance R from the Earth in the direction opposite the Sun (the L2 point) where the gravitational and rotating forces are equal. In other words, since the spacecraft's mass m cancels, we need to solve this equation for R:

G M_Sun / (r + R)^2 + G M_EM / R^2 = ω^2 (r + R)
I find R is 1,512,000 kilometers [8]. That means that the L2 point is a million and a half kilometers outward from Earth, 180° away from the Sun. This conforms to published data.
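Moderator's aside: for readers who want to check the professor's arithmetic, here is a minimal Python sketch (mine, not Lagrange's!) that solves the L2 equation above by bisection, using the constants from the notes.

import math

G     = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2), note [6]
M_SUN = 1.9891e30        # mass of the Sun (kg)
M_EM  = 6.0471e24        # mass of the Earth-Moon system (kg)
r     = 1.496e11         # Earth-Sun distance (m), note [5]
P     = 3.155811840e7    # Earth's orbital period (s), note [7]
omega = 2 * math.pi / P  # angular velocity (rad/s)

def residual(R):
    """F1 + F2 - F3 per unit spacecraft mass, at distance R beyond Earth."""
    return (G * M_SUN / (r + R) ** 2
            + G * M_EM / R ** 2
            - omega ** 2 * (r + R))

# Bisection: the residual changes sign between 1e8 m and 1e10 m.
lo, hi = 1e8, 1e10
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if residual(lo) * residual(mid) <= 0:
        hi = mid
    else:
        lo = mid
R = 0.5 * (lo + hi)
print(f"L2 distance: {R / 1e3:,.0f} km")  # about 1,512,000 km

# Component accelerations at L2, all within a fraction of a mm/s^2 of each other:
print(G * M_SUN / (r + R) ** 2)  # Sun's pull,          ~5.81e-3 m/s^2
print(G * M_EM / R ** 2)         # Earth-Moon pull,     ~1.77e-4 m/s^2
print(omega ** 2 * (r + R))      # needed centripetal,  ~5.99e-3 m/s^2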
Student: I see. So, to find the L1 point, too, could we just use the same equation but reverse some of the signs?
Prof. Lagrange: Oui! All of the plus signs in the above equation become minus and you end up with the right equation to find the L1 point. Do you see why? I find R for the L1 point to be about 1.49 million kilometers sunward from Earth. I am informed that the Solar and Heliospheric Observatory (SOHO) is there. See if you can come up with a formula for the L3 point on the opposite side of the Sun.
How stable is the L2 point?
Moderator: Professor, can you explain what you meant when you said the L2 point isn't all that stable?
Prof. Lagrange: Bien sûr! First, let's put some things in perspective. We know that while the Sun is massive (in kilograms, about 2 followed by 30 zeros) it is also far away – about 149.6 million
kilometers distant from the Earth. And because gravity’s force diminishes with the square of the distance away, the actual acceleration perceived at our earthly outpost is modest. The Sun pulls on
the Earth with a gravitational acceleration of only about .00593 m/s^2. That is, about 6 millimeters per second, every second [9]. That is Earth’s “free fall” acceleration toward the Sun. How can
such a tiny pull keep our big Earth from travelling off into space? While this seems like a tenuous hold on our home, the Sun’s attraction is felt by every particle of the Earth, so the Sun’s net
pull on the globe is about 3.54 × 10^22 newtons. It is this attraction on every particle composing our planet that keeps us in an orbital period of about 365¼ days. This solar acceleration is also
slight compared to the Earth’s own gravity at its surface. The downward pull felt by you and me on the ground is about 9.8 m/s^2. This is about 1,650 times more than the 6 mm/s^2 gravitational
acceleration from the Sun at Earth’s distance.
But by plugging the R value – the distance from the Earth to the L2 point – back into the above equations, we can see the gravitational acceleration contributed by each component: the Sun's pull at L2 is about 5.8 mm/s^2, the Earth-Moon system adds roughly 0.18 mm/s^2, and together they supply the roughly 6 mm/s^2 of centripetal acceleration needed there. The differences in the accelerations of the components are on the order of less than a millimeter per second per second, so any gravitational perturbation that you can imagine will affect the position of the space telescope [10]. There are no strong gravitational limits there to prevent it from sloshing around a bit in L2 space.
Moderator: Thank you Prof. Lagrange for that excellent presentation! It is remarkable how slight is the contribution of the Earth’s own gravity to keep JWST on station at L2. The velocity differences
are slight too. The Earth’s mean orbital velocity is 29.79 km/s. The mean velocity necessary to maintain the Space Telescope in orbit at the L2 point is 30.09 km/s, or only 300 meters per second
faster. That’s an increase of only 1% of Earth’s orbital velocity. The extra speed works out to 671 mph, just under Mach 1. Your average jetliner at L2 could keep up with us!
NOTES

Top picture: Joseph-Louis Lagrange looking on. The equation is the one derived in the text for determining Lagrange's L2 point; its form is mine, based on Lagrange's theory (the picture assumes he'd nod at it approvingly). The star field is a portion of a wide-field photograph of the southern sky in February. I took it a few years ago from the dark, dark skies of Haleakala on the Island of Maui, altitude about 10,000'.

[1] Lagrange mastered and extended Newtonian mechanics, and in 1788 published his most famous work, Mécanique Analytique (Analytical Mechanics), after 35 years of effort.

[2] In terms of the principle of conservation of energy, the increased kinetic energy of the spacecraft (determined by the square of its velocity) offsets the (negative) gravitational potential energy of the Sun (determined by its distance from it) minus that of the Earth (also determined by its distance from it).

[3] The square of the period of an orbiting body is proportional to the cube of the distance and inversely proportional to the net gravitational pull on it. So for a given distance from the Sun, the greater the net gravitational pull, the shorter the period; the less the gravitational pull, the longer the period. To keep the periods the same, as we do when working with L1 or L2, one must either decrease the distance from the Sun if the net gravitational pull is less (the L1 scenario) or increase the distance from the Sun if the net gravitational pull is greater (the L2 scenario).

[4] The equation is F = GMm/r^2, with the little m being the secondary mass here. We ignore the mass of the spacecraft because, no surprise, it is completely dwarfed by the mass of the Earth and/or Sun.

[5] The distance r is the astronomical unit. It is 1.496 x 10^11 meters. R is the unknown that we are solving for. We are employing the simplifying assumption throughout that the orbits are all circular.

[6] The mass of the Earth-Moon system is 6.0471 x 10^24 kg. The mass of the Sun is 1.9891 x 10^30 kg. G = 6.674 x 10^-11.

[7] We are using the MKS system, so the period of the Earth's orbit in seconds is 3.155811840 x 10^7. This equation can be derived from the centripetal force equation discovered by Christiaan Huygens, F = mv^2/r.

[8] 1,511,993.793 kilometers by my calculation, with the assumptions and constants noted.

[9] This can be readily calculated from the Newtonian equation for gravitational acceleration, f = GM/r^2.

[10] Nor, again, have we considered the eccentric orbit of the Earth, where the Moon is in its orbit, the variation in solar accelerations at the L2 point at different times of year, and so forth.
Moscow Center of Fundamental and Applied Mathematics
On August 29, 2019, the winners of the competition held within the framework of the national project "Science" for the creation of world-class mathematical centers in Russia were announced. Among the winners is the Moscow Center of Fundamental and Applied Mathematics, a consortium of Lomonosov Moscow State University (based on the Faculty of Mechanics and Mathematics, the Faculty of Computational Mathematics and Cybernetics, and the Research Computing Center), the Keldysh Institute of Applied Mathematics of the Russian Academy of Sciences, and the Marchuk Institute of Computational Mathematics of the Russian Academy of Sciences.
Russian President Vladimir Putin noted the value of the work of scientists and declared 2021 the Year of Science and Technology.
The formation of international mathematical centers will make a significant contribution to the accelerated development of the research and development sector.
Among the main tasks of the Center:
• conducting research in most relevant areas both in solving fundamental mathematical problems and in applied problems involving leading scientists and promising young researchers from Russia and
other countries;
• creating an environment for communication, cooperation and joint research by teams of participants of the center and leading experts from other scientific, educational and industrial centers in
the field of mathematical sciences;
• training highly qualified specialists in the field of mathematical sciences in most relevant areas of research.
The mathematical school of Moscow University gained worldwide fame over a hundred years ago. The school of Egorov, Luzin and their students obtained fundamental results on the widest range of problems faced by mathematicians in the first half of the 20th century. The names of Kolmogorov, Alexandrov, Gelfand and Arnold are known throughout the mathematical world. Since the 1930s and 1940s, the Faculty of Mechanics and Mathematics of Moscow State University has been one of the largest mathematical centers in the world, with research conducted in almost all relevant areas. It is enough to note that its graduates have won the Fields Medal six times.
The Faculty of Mechanics and Mathematics has always been one of the main centers of the Moscow School of Mathematics. Experts in all major mathematical areas work at its departments. One of the main
traditions of the faculty is the presence of a number of large mathematical schools headed by outstanding scientists and conducting research in the rapidly developing areas of modern mathematics. The
Faculty of Mechanics and Mathematics maintains constant strong ties with other mathematical centers; in particular, the director of MI RAS, his deputy and many employees of that institute (including department heads), the director of IAM RAS, and employees of the Moscow Institute of Physics and Technology, the Higher School of Economics and other institutions work at the faculty.
The Faculty of Computational Mathematics and Cybernetics of Lomonosov Moscow State University, founded in 1970 on the initiative and through the efforts of one of the greatest Russian scientists of the 20th century, academician Tikhonov, is today the leading educational center in Russia for training personnel in the field of basic research in applied mathematics, computer technology and computer science. The faculty conducts research on the widest range of fundamental and applied problems of modern mathematics.
Moscow University is a source of highly qualified mathematical personnel for the mathematical centers of Russia and the world; in particular, the vast majority of employees of Moscow mathematical
centers are graduates of Moscow State University.
The IAM RAS and INM RAS were founded by outstanding mathematicians, academicians Keldysh and Marchuk, each of whom made a huge contribution to the solution of many applied problems (in particular, in space exploration and the development of atomic energy) and served as president of the USSR Academy of Sciences. These institutes conduct research both on the fundamental principles of computational mathematics and mathematical modeling and in a wide range of areas related to solving important applied problems.
The solution of a large number of modern problems of mathematical modeling is possible only with the help of powerful computing technology: supercomputers. Moreover, the very task of transferring computations to this technology is a fundamental, non-trivial mathematical problem. Specialists of the Research Computing Center of Moscow State University, in collaboration with colleagues from Moscow State University, IAM, INM and other mathematical centers, are engaged in solving this problem both at a fundamental, theoretical level and in the practical implementation of the developed methods.
Employees of the Research computing center of Moscow State University possess world-class competencies in the development and practical use of mathematical models and methods for building scalable
computing systems and ultra-high performance environments, in creating scalable parallel algorithms and methods for solving applied problems in the natural sciences and humanities. It is this
potential that is laid in the foundation of the MSU supercomputer complex, combining the resources of the Lomonosov supercomputers and the Lomonosov-2 supercomputer - the most powerful in Russia at
present. The Center for the collective use of ultra-high-performance computing resources of Moscow University was created on the basis of the Research computing center of Moscow State University,
which makes it possible to efficiently use powerful supercomputer resources to carry out more than 700 projects from various fields of science, based on the potential of mathematical modeling and
computational technologies.
In many applied fields of science based on supercomputers and computational technologies, such as climate research, computational chemistry, cryptography, unique methods of big data processing and data compression, bioinformatics and bioengineering, and astrophysics, MSU scientists hold strong and authoritative positions in the world. A significant area of research and development at Moscow University is the development of models, methods and technologies for creating highly efficient parallel applications. The results of work in this area are being actively implemented in the Russian supercomputer community and are constantly used by thousands of users of high-performance computing systems.
It is planned to increase the number of young researchers participating in scientific programs and projects implemented by the center.
An increase in the number of papers published in journals indexed in international databases (Web of Science Core Collection / Scopus) is expected.
A branch of the Moscow Center of Fundamental and Applied Mathematics at Lomonosov Moscow State University (hereinafter referred to as the Center) announces a competition for the implementation of initiative projects on the following topics:
Embarking on a skyrmion odyssey
Optical skyrmions, an emergent cutting-edge topic in optics and photonics, extend the concept of non-singular topological defects to topological photonics, providing extra degrees of freedom for light–matter interaction manipulation, optical metrology, optical communications, etc.^[1]. The realization of artificial optical skyrmions did not occur until 2018^[2,3], yet the pursuit of optical skyrmions can be traced back to the era of Maxwell and Kelvin, as shown in Fig. 1. The historical development of the skyrmion concept is somewhat similar to the homeward journey of the Greek hero Odysseus, full of twists and turns.

The story goes all the way back to the days when scientists first uncovered electromagnetism. Inspired by the curl-field nature of magnetism, Maxwell believed that electromagnetism should have a rotational origin and proposed a model of ether vortices to derive the equations of electromagnetism^[4]. After that, Lord Kelvin went a step further and proposed an atomic model based on knots of swirling ethereal vortices immersed in an ether sea^[5]. In the 1870s there were huge debates over Kelvin's vortex-atom model. Maxwell, a vortex-atom enthusiast, promoted the model in his influential Encyclopedia Britannica article, "Atom." Opponents such as Boltzmann objected that the model lacked any proof of the validity of its equations. With the discovery of electrons and nuclei, the vortex-atom hypothesis was finally abandoned, whereas the attractive features of those knots, including discreteness and immutability, were never forgotten, and the idea of knots and knot invariants spawned a key concept of modern physics: topological defects in field theory.

Around 60 years later, as shown in Fig. 1, the general interest of physicists shifted from atoms to sub-atomic particles. The idea of knots returned to the stage and was employed by Skyrme to describe the nuclei^[6,7]. In Skyrme's picture, protons and neutrons are depicted as topological knot-defect excitations in the three-component pion field, well known as skyrmions. The number of knot twists, or knot invariant, is equal to the number of nucleons in the nucleus, and the skyrmion model also correctly predicted certain nuclear states. Moreover, unlike Kelvin's vortex-atom hypothesis, skyrmions in nuclei are based on a nonlinear field theory with pion-pion interactions, and the nonlinear interaction physically guarantees that skyrmions are stable under perturbations, in addition to the topological reason.

Although it is accepted that the skyrmion is historically the first example of a topological-defect model, as the saying goes, the course of true love never did run smoothly. With the discovery of quarks, the skyrmion model was overlooked. The unexpected turn came with the rise of condensed matter physics. In condensed matter physics, a large collection of atoms and electrons with rich symmetries, interactions and phases offers a platform to effectively construct various topological-defect excitations, for example, vortices in superconductors, monopoles in spin ices, and skyrmions in non-centrosymmetric magnetic systems^[8–10]. Most importantly, condensed matter systems are sensitive to diverse external fields, allowing those topological defects to be manipulated with both versatility and precision. Therefore, since the start of the condensed-matter era, the concept of topology has been used not only to explain natural matter but also to control and even design matter, becoming a core of modern physics and related disciplines. Nevertheless, the skyrmion's journey through condensed matter physics was still not trouble-free. In the 1960s and 70s, it was believed that skyrmions could not exist in most condensed matter systems because of the Hobart-Derrick theorem^[9,10]. Even in one of the most famous papers of Nobel laureates Kosterlitz and Thouless, it is said: "If we regard the direction of magnetization in space as giving a mapping of the space on to the surface of a unit sphere (actually it is exactly skyrmions), this invariant (skyrmions number) measures the number of times the map of the space encloses the sphere. This invariant is of no significance in statistical mechanics"^[11]. However, as marked in Fig. 1, in 1989 A.N. Bogdanov and collaborators^[12,13] uncovered that magnetic materials with broken inversion symmetry, in other words so-called non-centrosymmetric magnetic systems, could support magnetic skyrmions. It took another 20 years to realize magnetic skyrmions experimentally^[14,15]. Since then, magnetic skyrmions have become one of the hottest topics in condensed matter physics, constituting a promising new direction for data storage and spintronics^[10,16]. Of course, this is not the end of the skyrmion odyssey. Ahead lie many challenges and opportunities, such as simultaneously increasing the stability and the transition temperature of nanometer-scale magnetic skyrmions, and realizing skyrmions in other classical systems such as light. Recently, the storyline has turned to optical skyrmions, as shown in Fig. 1.
Tidy Finance - Construction of a Historical S&P 500 Total Return Index
I wanted to simulate simple equity savings plans over long time horizons and many different initiation periods for a story with the German news portal t-online. The good thing is that the S&P 500
index provides a great starting point as it is easily available since 1928 via Yahoo Finance. However, I wanted my savings plans to be accumulating, i.e., all cash distributions are reinvested in the
savings plan. The S&P index is inadequate for this situation as it is a price index that only tracks its components’ price movements. The S&P 500 Total Return Index tracks the overall performance of
the S&P 500 and would be the solution to my problem, but it is only available since 1988.
Fortunately, I came up with a solution using data provided by Robert Shiller and provide the complete code below for future reference. If you spot any errors or have better suggestions, please feel
free to create an issue.
This is the set of packages I use throughout this post.
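Judging from the functions called below, the following set presumably covers everything used in this post:

library(tidyverse)   # dplyr, tidyr, ggplot2, stringr, lubridate
library(tidyquant)   # tq_get() for Yahoo Finance data
library(readxl)      # read_excel() for Shiller's spreadsheet
library(scales)      # comma() axis labels
library(glue)        # glue() string interpolation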
First, let us download the S&P 500 Total Return Index from Yahoo Finance. I only consider the closing prices of the last day of each month because my savings plans only transfer funds once a month.
In principle, you could also approximate the daily time series, but I believe it would be noisier because Shiller only provides monthly data.
sp500_recent <- tq_get("^SP500TR", get = "stock.prices",
from = "1988-01-04", to = "2023-01-31") |>
select(date, total_return_index = close) |>
drop_na() |>
group_by(month = ceiling_date(date, "month")-1) |>
arrange(date) |>
filter(date == max(date)) |>
ungroup() |>
select(month, total_return_index)
Next, I download data from Robert Shiller's website that he used in his great book Irrational Exuberance. I create a temporary file and read the relevant sheet. In particular, the data contains monthly S&P 500 price and dividend data. The original file has a bit of an annoying date format that I have to correct before parsing.
temp <- tempfile(fileext = ".xls")
download.file(url = "http://www.econ.yale.edu/~shiller/data/ie_data.xls",
destfile = temp, mode='wb')
shiller_historical <- read_excel(temp, sheet = "Data", skip = 7) |>
transmute(month = ceiling_date(ymd(str_replace(str_c(Date, ".01"), "\\.1\\.", "\\.10\\.")), "month")-1,
price = as.numeric(P),
dividend = as.numeric(D))
To construct the total return index, I need a return that includes dividends. In the next code chunk, I compute monthly total returns of the S&P 500 index by incorporating the monthly dividend paid
on the index in the corresponding month. Note that Shiller's data contains the 12-month moving sum of monthly dividends, hence the division by 12. Admittedly, this is a brute-force approximation, but I haven't come up with a better solution so far.
Before I go back in time, let us check whether the total return computed above is able to match the actual total return since 1988. I start with the first total return index number that is available
and use the cumulative product of returns from above to construct the check time series.
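One plausible implementation, keeping only months where both series exist and anchoring the product at the first actual index value:

check <- sp500_recent |>
  inner_join(shiller_historical, by = "month") |>
  arrange(month) |>
  mutate(total_return_check = total_return_index[1] *
           cumprod(replace(1 + total_return, 1, 1)))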
The correlation between the actual time series and the check is remarkably high, which gives me confidence in the method I propose here.
                   total_return_index total_return_check
total_return_index              1.000              0.999
total_return_check              0.999              1.000
In addition, the visual inspection of the two time series in Figure 1 corroborates my confidence. Note that both the actual and the simulated total return indexes start at the same index value.
check |>
select(month, Actual = total_return_index, Simulated = total_return_check) |>
pivot_longer(cols = -month) |>
ggplot(aes(x = month, y = value, color = name)) +
geom_line() +
scale_y_continuous(labels = comma)+
labs(x = NULL, y = NULL, color = NULL,
title = "Actual and simulated S&P 500 Total Return index",
subtitle = glue("Both indexes start at {min(check$month)}"))
Now, let us use the same logic to construct the total return index for the time before 1988. Note that I just sort the months in descending order and divide by the cumulative product of the total
return from Shiller’s data.
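A reconstruction consistent with that description: anchor at the first month of the actual series, then walk backward by dividing out each month's total return.

anchor <- sp500_recent |> filter(month == min(month))

sp500_historical <- shiller_historical |>
  filter(month <= anchor$month) |>
  arrange(desc(month)) |>
  mutate(total_return_index = anchor$total_return_index /
           cumprod(lag(1 + total_return, default = 1))) |>
  arrange(month)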
Before we take a look at the results, I also add the S&P price index from Yahoo Finance for comparison.
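Presumably something along these lines, again via Yahoo Finance (ticker ^GSPC for the S&P 500 price index):

sp500_price_index <- tq_get("^GSPC", get = "stock.prices",
                            from = "1928-01-01", to = "2023-01-31") |>
  group_by(month = ceiling_date(date, "month") - 1) |>
  filter(date == max(date)) |>
  ungroup() |>
  select(month, price_index = close)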
Finally, let us combine (i) the actual S&P 500 Total Return Index from 1988 until 2023, (ii) the simulated S&P 500 total return index before 1988, and (iii) the S&P 500 price index from 1928 until 2023.
sp500_monthly <- sp500_recent |>
  bind_rows(sp500_historical |>
              filter(month < min(sp500_recent$month)) |>
              select(month, total_return_index)) |>
  full_join(sp500_price_index |>
              select(month, price_index), by = "month") |>
  filter(month >= "1928-01-01") |>
  arrange(month)
sp500_monthly
# A tibble: 1,141 × 3
month total_return_index price_index
<date> <dbl> <dbl>
1 1928-01-31 1.20 17.6
2 1928-02-29 1.21 17.3
3 1928-03-31 1.20 19.3
4 1928-04-30 1.26 19.8
5 1928-05-31 1.35 20
# ℹ 1,136 more rows
Figure 2 shows the dramatic differences in cumulative returns if you only consider price changes, as the S&P 500 Index does, versus total returns with reinvested capital gains. Note that I plot the
indexes in log scale, otherwise everything until the last couple of decades would look like a flat line. I believe it is also important to keep the differences between price and performance indexes
in mind whenever you compare equity indexes across countries. For instance, the DAX is a performance index by default and should never be compared with the S&P 500 price index.
sp500_monthly |>
  select(month,
         `Price Index` = price_index,
         `Total Return Index` = total_return_index) |>
pivot_longer(cols = -month) |>
group_by(name) |>
arrange(month) |>
mutate(value = value / value[1] * 100) |>
ggplot(aes(x = month, y = value, color = name)) +
geom_line() +
scale_y_log10(labels = comma) +
scale_x_date(expand = c(0, 0), date_breaks = "10 years", date_labels = "%Y") +
labs(x = NULL, y = NULL, color = NULL,
title = "S&P 500 Price index and Total Return index since 1928",
subtitle = glue("Both indexes are normalized to 100 at {min(sp500_monthly$month)}"))
Application of fuzzy random finite element method on rotor dynamics
Fuzzy and stochastic characteristics of parameters exist widely in rotating machinery, and researching these characteristics is of great significance in rotor dynamics. The dynamic characteristics of a rotor system are analyzed here taking into account coexisting fuzzy and stochastic uncertain properties. Fuzzy variables are transformed into stochastic variables based on information entropy theory. The Neumann stochastic finite element method, based on the Neumann expansion combined with the Newmark-β method, is used for linear and nonlinear rotor systems within the framework of Monte Carlo simulation. The critical speeds and dynamic responses of fuzzy stochastic rotor systems are obtained by the proposed method. The results show that the Neumann stochastic finite element method has good applicability and efficiency in rotor dynamics.
1. Introduction
Rotor dynamics is very important for predicting the responses of rotating machinery. In the past 20 years, much of the literature has focused on numerical simulation of rotor systems, and the dynamic models can be divided into three kinds: single-disc Jeffcott rotor systems with few degrees of freedom, multi-disc and multi-span rotor systems with more degrees of freedom, and complicated finite element models with nonlinear effects. Many numerical methods have also been applied, e.g., initial value methods such as the Runge-Kutta method [1], Newmark-β [2] and Wilson-θ methods, and boundary value methods such as
the shooting method [3]. For example, Tejas H. Patel and Ashish K. Darpe [4] presented the vibration response of a single-disc rotor system with multiple faults using the Runge-Kutta method. Jerome Didier et al. [5] investigated the nonlinear dynamic response of a two-disc rotor system with multiple faults and uncertainties using a finite element model. Nowadays the finite element model is becoming increasingly popular for analyzing complicated rotor systems [6-8], as it is capable of predicting the static and dynamic behavior of a rotor system based on its geometry and material characteristics; for example, Mzaki Dakel et al. [9] modeled a rotor system with hydrodynamic journal bearings using a Timoshenko beam finite element model to study rotor nonlinear dynamics.
However, it is often difficult to define a reliable finite element model of a rotor system with a number of uncertain physical properties. In fact, many components in a rotor system are subject to uncertainty, of which there are two types. One is probabilistic uncertainty, where uncertain parameters are described as random variables with known probability distributions. The other is fuzzy uncertainty, which, in contrast to probabilistic uncertainty, is non-probabilistic, as reliable statistical data are not available.
Uncertainty greatly affects the dynamic behavior of a rotor system, and the finite element model of a structure can be made more reliable by taking uncertainty into account. Many papers have focused on probabilistic uncertainty in rotor systems, from random structures to random external forces [10-12]: Yanhong Ma et al. [13] considered the support and connecting-structure stiffness and the phase and amount of rotor unbalance to present an interval analysis method for rotor systems, and Jean-Jacques Sinou and Béatrice Faverjon [14] obtained the dynamic response of a transverse crack in a rotating shaft by
treating the stiffness of the crack as a random variable. Yazhao Qiu and Singiresu S. Rao [15] applied a fuzzy approach to analyze a nonlinear rotor-bearing system. Uncertainty analysis has also developed quickly in other fields [16-19], such as management science, engineering and technology, social economy, communication and transportation, and finance and insurance. However, to the author's knowledge, there is little work in rotor dynamics under conditions where randomness and fuzziness coexist.
In this paper, coexisting fuzzy and random uncertain properties of a rotor system are considered simultaneously to study rotor dynamics. Firstly, fuzzy variables are transformed into random variables based on information entropy theory, so that the Neumann expansion stochastic finite element method can be used in rotor dynamics. Secondly, the Neumann expansion stochastic finite element method combined with the Newmark-β method is applied to analyze the rotor system within the framework of Monte Carlo simulation. Thirdly, critical speeds and responses of fuzzy random linear and nonlinear rotor systems are presented as examples of the proposed method, and its efficiency is verified by the examples.
2. Uncertainty in rotor system and entropy theory
2.1. Uncertainty in rotor system
Uncertainties exist widely in rotor-bearing systems due to randomness in material and geometric properties, variable operating circumstances, and so on. External loads such as bearing reactions and the rotating speed are all subject to variation; lubricant properties such as density and viscosity vary with oil temperature; and the performance of components such as bearings and shafts changes with wear and operating conditions over their lifetimes.
Some of these uncertainties are statistical and probabilistic. For example, manufacturing and assembly tolerances exist in all mechanical parts and components, so the dimensions of any mechanical part or component are probabilistic; the eccentricity of a wheel, the clearance between the journal and the bearing, and so on are probabilistic uncertainties.
Some of the uncertainties do not have sufficiently reliable stochastic data and are associated with human error or limits of professional knowledge. These uncertainties may be modeled by fuzziness. For example, it is sometimes hard to determine whether a boundary condition is simply supported or clamped, but it can be described using fuzzy terms such as "nearly simply supported" and "almost clamped". In this case, it is fuzzy uncertainty.
2.2. Entropy theory and transformation rules
Information entropy is used to measure the uncertainty of information; that is, entropy is a measure of the uncertainty of a random variable [11-13]. The probabilistic entropy of a continuous random variable $X$ can be defined as:

$$H = -\int_{-\infty}^{+\infty} p(x)\,\ln p(x)\,dx, \qquad (1)$$

where $H$ is the entropy of $X$ and $p(x)$ is the probability density function of $X$.

Fuzzy information can also be measured by information entropy, in which case it is called fuzzy entropy. The non-probabilistic entropy can be represented as:

$$\acute{G} = -\int_{-\infty}^{+\infty} \acute{f}(x)\,\ln \acute{f}(x)\,dx, \qquad \acute{f}(x) = \frac{f(x)}{\int_{-\infty}^{+\infty} f(x)\,dx},$$

where $\acute{G}$ is the entropy of $X$, $f(x)$ is the membership function of $X$, and $\acute{f}(x)$ is its normalization.
A fuzzy variable can be transformed into a random variable based on the entropy principle; that is, the fuzzy entropy is set equal to the probabilistic entropy. The principle of the transformation is that the equivalent probabilistic entropy equals the entropy of the original fuzzy variable:

$$H_{eq} = \acute{G}.$$

Normally, a fuzzy variable is transformed into an equivalent normal random variable. Assume a normal random variable with mean $m$ and standard deviation $\sigma$. Its probabilistic entropy can be obtained from Eq. (1):

$$H_{eq} = \ln\left(\sqrt{2\pi e}\,\sigma\right).$$

The equivalent standard deviation $\sigma$ is then:

$$\sigma = \frac{1}{\sqrt{2\pi}}\,e^{\acute{G}-0.5}.$$
In principle, fuzzy uncertainty can be transformed into any probability distribution. Here, the equivalent normal random variable is used to transform a fuzzy structure into a random structure, so that the fuzzy stochastic finite element method can be applied in rotor dynamics.
Fig. 1. Three familiar types of fuzzy distributions: a) triangular, b) trapezoidal, c) Γ

Three common types of fuzzy distributions are shown in Fig. 1: the triangular, trapezoidal, and Γ distributions. Their membership functions are as follows.

Triangular distribution:

$$f(x) = \begin{cases} \dfrac{x-a_1}{a-a_1}, & a_1 \le x \le a, \\[4pt] \dfrac{a_2-x}{a_2-a}, & a \le x \le a_2. \end{cases}$$

Trapezoidal distribution:

$$f(x) = \begin{cases} \dfrac{a_2+x-a}{a_2-a_1}, & a-a_2 \le x \le a-a_1, \\[4pt] 1, & a-a_1 \le x \le a+a_1, \\[4pt] \dfrac{a_2-x+a}{a_2-a_1}, & a+a_1 \le x \le a+a_2. \end{cases}$$

Γ distribution:

$$f(x) = \begin{cases} e^{k(x-a)}, & a-a_1 \le x \le a, \\ e^{-k(x-a)}, & a \le x \le a+a_1. \end{cases}$$
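As a numerical sanity check (a sketch of ours, not part of the original derivation), the Python snippet below integrates the normalized triangular membership function used later in Example 1 (a = 4.6×10^7 N/m, a_1 = 4.0×10^7 N/m, a_2 = 5.2×10^7 N/m) and applies the σ formula above; for a triangular shape the result reduces to σ = (a_2 - a_1)/(2·sqrt(2π)).

import numpy as np

# Triangular fuzzy support stiffness from Example 1 (N/m): peak a, support [a1, a2].
a1, a, a2 = 4.0e7, 4.6e7, 5.2e7

def trapz(y, x):
    """Trapezoid-rule integral, kept local for NumPy-version independence."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

x = np.linspace(a1, a2, 200_001)
f = np.where(x <= a, (x - a1) / (a - a1), (a2 - x) / (a2 - a))  # membership function
f_hat = f / trapz(f, x)                                          # normalized membership

# Fuzzy entropy G = -integral of f_hat * ln(f_hat); the zero endpoints contribute nothing.
mask = f_hat > 0
G = -trapz(np.where(mask, f_hat * np.log(np.where(mask, f_hat, 1.0)), 0.0), x)

sigma = np.exp(G - 0.5) / np.sqrt(2.0 * np.pi)  # equivalent normal standard deviation

print(G)      # ~16.11, i.e. 0.5 + ln((a2 - a1)/2) for a triangle
print(sigma)  # ~2.39e6 N/m, i.e. (a2 - a1) / (2 * sqrt(2 * pi))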
3. Neumann expansion stochastic finite element method
The stochastic finite element method based on the perturbation technique has been widely used to solve the dynamic response of structures with stochastic parameters under random excitation, but it is not suitable for structures whose uncertain parameters have coefficients of variation larger than 0.2.

Direct Monte Carlo simulation is suitable for problems with uncertain parameters whose coefficients of variation exceed 0.2, but the method is quite inefficient, as a large number of samples is required to guarantee accurate statistical results.

The Neumann expansion stochastic finite element method, which is based on the direct Monte Carlo method, overcomes this shortcoming of the direct Monte Carlo method and is used here to analyze the uncertain rotor system.
3.1. Neumann expansion theory and Newmark-β method
3.1.1. Neumann expansion theory
Assume $\mathbf{K}^{-1}$ is the inverse of $\mathbf{K}$, and let $\bar{\mathbf{K}} = \mathbf{K} + \Delta\mathbf{K}$. The inverse of $\bar{\mathbf{K}}$ is then given by the Neumann series:

$$\bar{\mathbf{K}}^{-1} = \mathbf{K}^{-1}\left(\mathbf{I} - \mathbf{P} + \mathbf{P}^{2} - \mathbf{P}^{3} + \cdots + (-1)^{m}\mathbf{P}^{m}\right),$$

where $\mathbf{I}$ is the identity matrix and $\mathbf{P} = \Delta\mathbf{K}\,\mathbf{K}^{-1}$.
3.1.2. Newmark-β method
Assume $\mathbf{M}$, $\mathbf{C}$, $\mathbf{K}$ and $\mathbf{F}$ are the mass, damping, stiffness and load matrices of the rotor system, respectively, and $\mathbf{x}_{0}$, $\dot{\mathbf{x}}_{0}$ and $\ddot{\mathbf{x}}_{0}$ are the initial values. The integration constants are:

$$b_0 = \frac{1}{\beta\,\Delta t^{2}},\quad b_1 = \frac{\delta}{\beta\,\Delta t},\quad b_2 = \frac{1}{\beta\,\Delta t},\quad b_3 = \frac{1}{2\beta}-1,\quad b_4 = \frac{\delta}{\beta}-1,$$
$$b_5 = \frac{\Delta t}{2}\left(\frac{\delta}{\beta}-2\right),\quad b_6 = \Delta t\,(1-\delta),\quad b_7 = \delta\,\Delta t,$$

with $\delta \ge 1/2$ and $\beta \ge \tfrac{1}{4}\left(\tfrac{1}{2}+\delta\right)^{2}$.

The effective stiffness matrix is $\tilde{\mathbf{K}} = b_0\mathbf{M} + b_1\mathbf{C} + \mathbf{K}$, with triangular decomposition $\tilde{\mathbf{K}} = \mathbf{L}\mathbf{D}\mathbf{L}^{T}$.

The effective load at $t+\Delta t$ is:

$$\tilde{\mathbf{F}}_{t+\Delta t} = \mathbf{F}_{t+\Delta t} + \mathbf{M}\left(b_0\mathbf{x}_{t} + b_2\dot{\mathbf{x}}_{t} + b_3\ddot{\mathbf{x}}_{t}\right) + \mathbf{C}\left(b_1\mathbf{x}_{t} + b_4\dot{\mathbf{x}}_{t} + b_5\ddot{\mathbf{x}}_{t}\right).$$

The displacement vector is then obtained from:

$$\mathbf{L}\mathbf{D}\mathbf{L}^{T}\mathbf{x}_{t+\Delta t} = \tilde{\mathbf{F}}_{t+\Delta t},$$

and the acceleration and velocity are calculated as:

$$\ddot{\mathbf{x}}_{t+\Delta t} = b_0\left(\mathbf{x}_{t+\Delta t}-\mathbf{x}_{t}\right) - b_2\dot{\mathbf{x}}_{t} - b_3\ddot{\mathbf{x}}_{t},\qquad \dot{\mathbf{x}}_{t+\Delta t} = \dot{\mathbf{x}}_{t} + b_6\ddot{\mathbf{x}}_{t} + b_7\ddot{\mathbf{x}}_{t+\Delta t}.$$
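As an illustration (our own sketch, not code from the paper), one Newmark-β step can be written in a few lines of Python with numpy:

import numpy as np

def newmark_step(M, C, K, F_next, x, v, a, dt, beta=0.25, delta=0.5):
    """One Newmark-beta step: returns (x, v, a) at t + dt."""
    b0 = 1.0 / (beta * dt**2)
    b1 = delta / (beta * dt)
    b2 = 1.0 / (beta * dt)
    b3 = 1.0 / (2.0 * beta) - 1.0
    b4 = delta / beta - 1.0
    b5 = dt / 2.0 * (delta / beta - 2.0)
    b6 = dt * (1.0 - delta)
    b7 = delta * dt

    K_eff = b0 * M + b1 * C + K                # effective stiffness matrix
    F_eff = (F_next
             + M @ (b0 * x + b2 * v + b3 * a)  # inertia contribution
             + C @ (b1 * x + b4 * v + b5 * a)) # damping contribution
    x_next = np.linalg.solve(K_eff, F_eff)     # stands in for the LDL^T solve
    a_next = b0 * (x_next - x) - b2 * v - b3 * a
    v_next = v + b6 * a + b7 * a_next
    return x_next, v_next, a_next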
3.2. Neumann expansion Newmark-β method
3.2.1. Linear system
The dynamic equation of a linear system at time $t+\Delta t$ is:

$$\mathbf{M}\ddot{\mathbf{x}}_{t+\Delta t} + \mathbf{C}\dot{\mathbf{x}}_{t+\Delta t} + \mathbf{K}\mathbf{x}_{t+\Delta t} = \mathbf{F}_{t+\Delta t}.$$

The procedure using the Newmark-β method is as follows. First, the effective stiffness matrix is formed:

$$\tilde{\mathbf{K}} = b_0\mathbf{M} + b_1\mathbf{C} + \mathbf{K}.$$

Then, for each time step, the effective load vector is calculated:

$$\tilde{\mathbf{F}}_{t+\Delta t} = \mathbf{F}_{t+\Delta t} + \mathbf{M}\left(b_0\mathbf{x}_{t} + b_2\dot{\mathbf{x}}_{t} + b_3\ddot{\mathbf{x}}_{t}\right) + \mathbf{C}\left(b_1\mathbf{x}_{t} + b_4\dot{\mathbf{x}}_{t} + b_5\ddot{\mathbf{x}}_{t}\right).$$

The triangular decomposition of $\tilde{\mathbf{K}}$ is $\tilde{\mathbf{K}} = \mathbf{L}\mathbf{D}\mathbf{L}^{T}$, and the node displacement vector at time $t+\Delta t$ is solved from:

$$\mathbf{L}\mathbf{D}\mathbf{L}^{T}\mathbf{x}_{t+\Delta t} = \tilde{\mathbf{F}}_{t+\Delta t}.$$

Finally, the node acceleration and velocity are calculated as in Section 3.1.2.

The random stiffness matrix $\mathbf{K}$ can be decomposed into a mean part $\mathbf{K}_0$ and a fluctuating deviator $\Delta\mathbf{K}$, i.e., $\mathbf{K} = \mathbf{K}_0 + \Delta\mathbf{K}$. According to the Neumann expansion method:

$$\mathbf{K}^{-1} = \left(\mathbf{K}_0 + \Delta\mathbf{K}\right)^{-1} = \mathbf{K}_0^{-1}\left(\mathbf{I} - \mathbf{P} + \mathbf{P}^{2} - \mathbf{P}^{3} + \cdots + (-1)^{m}\mathbf{P}^{m}\right), \qquad (18)$$

with $\mathbf{P} = \Delta\mathbf{K}\,\mathbf{K}_0^{-1}$. In a linear system only $\Delta\mathbf{K}$ changes from sample to sample, while $\mathbf{K}_0$ is fixed. Once the inverse of $\mathbf{K}_0$ is obtained, Eq. (18) can be used repeatedly to obtain the inverse of $\mathbf{K}$ without further decomposition and inversion, which saves a great deal of CPU time.

The expansion series may be truncated once it converges; in most cases an expansion up to the third order satisfies engineering requirements.
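To see why this pays off, here is a small numpy sketch (again ours, not from the paper) comparing the third-order Neumann approximation of $(\mathbf{K}_0+\Delta\mathbf{K})^{-1}$ against a direct inverse:

import numpy as np

rng = np.random.default_rng(0)
n = 6

# A well-conditioned mean stiffness and a small random fluctuation.
A = rng.standard_normal((n, n))
K0 = A @ A.T + n * np.eye(n)   # symmetric positive definite mean part
dK = 0.05 * (A + A.T)          # fluctuating deviator, small relative to K0

K0_inv = np.linalg.inv(K0)     # computed once, reused for every Monte Carlo sample
P = dK @ K0_inv

# Third-order Neumann series: K^{-1} ~ K0^{-1} (I - P + P^2 - P^3)
neumann = K0_inv @ (np.eye(n) - P + P @ P - P @ P @ P)
direct = np.linalg.inv(K0 + dK)

print(np.max(np.abs(neumann - direct)))  # small; shrinks as ||P|| decreases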
3.2.2. Nonlinear system
Assume the dynamic equation of a weakly nonlinear system (which applies to most rotor systems) is:

$$\mathbf{M}\ddot{\mathbf{x}} + \mathbf{C}\dot{\mathbf{x}} + \mathbf{K}(\mathbf{x})\,\mathbf{x} = \mathbf{F},$$

where $\mathbf{K}(\mathbf{x})$ is the nonlinear stiffness. The effective stiffness matrix is formed as before:

$$\tilde{\mathbf{K}} = b_0\mathbf{M} + b_1\mathbf{C} + \mathbf{K}(\mathbf{x}).$$

Let $\Delta\mathbf{K}_1$ be the variation of the stiffness matrix handled by the Neumann expansion, and let $\Delta\mathbf{K}_2^{i}$ be the variation of the stiffness matrix at the $i$th iteration within each time step:

$$\Delta\mathbf{K} = \Delta\mathbf{K}_1 + \Delta\mathbf{K}_2^{i}.$$

For the first iteration:

$$\Delta\mathbf{K} = \Delta\mathbf{K}_1 + \Delta\mathbf{K}_2 = \Delta\mathbf{K}_1 + \left(\frac{\partial \mathbf{F}}{\partial \mathbf{x}_{n}} - \mathbf{K}\right).$$

For the $i$th iteration:

$$\Delta\mathbf{K} = \Delta\mathbf{K}_1 + \Delta\mathbf{K}_2^{i} = \Delta\mathbf{K}_1 + \left(\frac{\partial \mathbf{F}}{\partial \mathbf{x}_{n+1}^{i}} - \mathbf{K}\right).$$

In this way the amount of matrix decomposition is greatly reduced, and $\mathbf{K}^{-1}$ can again be calculated by the Neumann expansion theory.
4. Numerical examples
4.1. Example 1
A rotor system as shown in Fig. 2 has been considered for analysis. The system consists of two elastic supports, an elastic shaft and a rigid disc. There are 6 elements, 7 nodes and 28 degrees of
freedom in the finite element model of the rotor system. The supporting stiffness and damping ratio are 4.6×10^7 N/m and 0.027, respectively. The rotating speed is 3000 rpm, and other data of the
elements are shown in Table 1.
Fig. 2. Element model of the rotor system
The dynamic equation of an $n$-node rotor system with the finite element method, following Jalan A. K. and Mohanty A. R. [20], can be written as:

$$\mathbf{M}\ddot{\mathbf{x}} + \left(\mathbf{C} + \omega\mathbf{G}\right)\dot{\mathbf{x}} + \mathbf{K}\mathbf{x} = \mathbf{F},$$

where $\mathbf{M}$, $\mathbf{C}$, $\mathbf{G}$ and $\mathbf{K}$ are the mass, damping, gyroscopic moment and stiffness matrices of the system, respectively, $\omega$ is the angular velocity, and $\mathbf{F}$ is the exciting force vector.
Table 1. Data of elements

Element        1   2   3   4   5   6
Length / mm   80  80  10  10  80  80
Diameter / mm 10  10  80  80  10  10
The first three critical speeds of the rotor system are $\omega_{n1} = 65.68$ Hz, $\omega_{n2} = 458.86$ Hz and $\omega_{n3} = 1276.24$ Hz, respectively.
Assume the supporting stiffness obeys a triangular distribution with parameters $a = 4.6\times10^{7}$ N/m, $a_1 = 4.0\times10^{7}$ N/m and $a_2 = 5.2\times10^{7}$ N/m, and assume the elastic modulus obeys a normal distribution with mean $2.1\times10^{11}$ Pa and variance 0.05. The probability distributions of the critical speeds of the rotor system were computed by the Neumann expansion stochastic finite element method with 100000 Monte Carlo samples. The distributions of the first three critical speeds are shown in Fig. 3; distribution parameters such as the mean value and variance can be obtained from these distributions.
Fig. 3. Probability distributions of the first three critical speeds of the fuzzy random rotor system: a) first order, b) second order, c) third order
4.2. Example 2
Rotor-stator rub faults often occur locally where the clearance between the rotor and the stator is small. Rubbing can cause an amplitude-jumping phenomenon as the rotor runs; this is a distinctly nonlinear phenomenon and can be explained by a nonlinear stiffness in the rotor system.
Fig. 4. Element model of the nonlinear rotor system
Assume the nonlinear stiffnesses at node 3 of the rotor system shown in Fig. 4 are $\acute{k}_x = \acute{k}_y = 5\times10^{7}$ N/m. The nonlinear force is given by:

$$F_x = \acute{k}_x\,x^{3}, \qquad F_y = \acute{k}_y\,y^{3}.$$
The amplitude-frequency response curve obtained by the Newmark-β method is shown in Fig. 5. The system exhibits an amplitude-jumping phenomenon: the jump occurs at point a during run-up and at point b during run-down.
Fig. 5. Amplitude jumping phenomenon in the nonlinear rotor system
Fig. 6. Probability distribution of the jumping point
Assume the supporting stiffness obeys a triangular distribution with parameters $a = 5\times10^{7}$ N/m, $a_1 = 2\times10^{7}$ N/m and $a_2 = 8\times10^{7}$ N/m, and assume the eccentric mass obeys a normal distribution with mean $1\times10^{-2}$ kg/m and variance 0.05. The probability distribution of the jumping point, calculated by the Neumann expansion stochastic finite element method with 1000 Monte Carlo samples, is shown in Fig. 6; distribution parameters such as the mean value and variance can be obtained from it.
4.3. Example 3
Rub faults often occur in compressor rotor systems. The finite element model of a compressor medium-pressure cylinder rotor system is shown in Fig. 7. There are 23 elements, 24 nodes and 96 degrees of freedom in the model. The supporting stiffness and damping are 12×10^8 N/m and 11×10^5 N·s/m, respectively. A local rub fault occurs at node 12, and mass eccentricity is applied at nodes 9 and 16.
Fig. 7. Element model of the compressor medium-pressure cylinder rotor system
The first three critical speeds of the rotor system are $\omega_{n1} = 58.42$ Hz, $\omega_{n2} = 182.26$ Hz and $\omega_{n3} = 72.98$ Hz, respectively. The amplitude-frequency response curve obtained by the Newmark-β method is shown in Fig. 8.
Fig. 8. Amplitude-frequency response curve of the system
Assume the supporting stiffness obeys a triangular distribution with parameters $a = 12\times10^{8}$ N/m, $a_1 = 8\times10^{8}$ N/m and $a_2 = 16\times10^{8}$ N/m, and assume the elastic modulus obeys a normal distribution with mean $2.1\times10^{11}$ Pa and variance 0.05. The probability distributions of the critical speeds, the peak and the peak frequency of the rotor system were computed by the Neumann expansion stochastic finite element method, using 100000, 10000 and 1000 Monte Carlo samples, respectively. The distributions of the first three critical speeds and of the peak and peak frequency are shown in Figs. 9 and 10; distribution parameters such as the mean value and variance can be obtained from these distributions.
Fig. 9 Probability distributions of the first three critical speeds in the compressor medium-pressure cylinder rotor system: a) First order, b) Second order, c) Third order
Fig. 10 Probability distributions of peak and peak frequency: a) peak, b) peak frequency
5. Conclusions
The Neumann stochastic finite element method, based on the Neumann expansion combined with the Newmark-β method, is applied to rotor dynamics with coexisting fuzzy and random uncertainty. The analysis indicates that dynamic characteristics such as the critical speeds, the amplitude-jumping phenomenon and the peak frequency are all affected in an uncertain system. The proposed method can be used for linear or nonlinear uncertain rotor systems within the framework of Monte Carlo simulation, and it overcomes the limitation that the coefficient of variation cannot be larger than 0.2. Meanwhile, a large amount of the computational cost of matrix decomposition and inversion can be avoided. The examples show that the method is effective when applied to rotor dynamics.
About this article
27 February 2014
rotor dynamics
Neumann expansion
The authors would like to gratefully acknowledge the National Basic Research Program of China (2011CB706504), the National Natural Science Foundation of China (51005042) and the Fundamental Research
Funds for the Central Universities of China (N120403007).
Copyright © 2014 JVE International Ltd.
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.extrica.com/article/15052","timestamp":"2024-11-08T23:34:15Z","content_type":"text/html","content_length":"161211","record_id":"<urn:uuid:63d027a7-88cd-439f-bdbc-70144ec9d7b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00623.warc.gz"}
DATEDIF in Excel (Formula, Example) | How To Use DATEDIF Function?
Updated June 9, 2023
DATEDIF in Excel
The DATEDIF function in Excel counts the days, months, or years between two dates. DATEDIF appeared in the function list only in older versions of MS Office (up to 2007); it is no longer listed, but we can still use it if we know the syntax. To use the DATEDIF function, we need a start date, an end date, and the unit we want to count: use "d" for days, "m" for months, "y" for years, "md" for days ignoring months and years, "ym" for months ignoring days and years, and "yd" for days ignoring years.
Unit Result
“y” A difference in complete years.
“m” A difference in a complete month.
“d” A difference in complete days.
“md” A difference in complete days, Ignoring months and years.
“ym” A difference in complete months, Ignoring days and years.
“yd” A difference in complete days, Ignoring years.
• If we have two different dates, the Excel DATEDIF formula and its three arguments let us find the difference between those dates in days, months, or years.
• After applying the DATEDIF formula, you get the result as a number representing the difference between the dates in Excel.
How to Use the DATEDIF Function in Excel?
• The DATEDIF function in Excel is widely used for different purposes; here, we take some examples.
• We know many tools for calculating your age, but Excel and this formula are really fun.
• In many corporate settings, we can use it to identify the ageing of a particular file/report/case; honestly, I use it a lot in my MIS to know the age of claims, so I can decide my priorities and give attention to the oldest.
Example #1
Find Yearly Differences
“y” A difference in complete years.
For example, let’s get the difference between two dates in years; we will use the same start and end dates in all examples for easier understanding. Assume the Start Date is 21/01/2016 and the End Date is 29/07/2019.
To Find out the difference between two dates in years in Excel, just follow the below steps:
Step 1 – Mention the start date and end date in the date format (Note: The date format can be changed from cell formatting); here, we have formatted the date in dd/mm/yyyy format, which is most
common nowadays.
Step 2 – Now, in a separate cell, mention the Excel DATEDIF formula, which is =DATEDIF(start_date, end_date, unit)
Step 3 – For this example, select the cells containing the start and end dates accordingly, and for the unit in the formula enter “y” (Note: units always have to be enclosed in double quotation marks); in this formula, “y” stands for years.
You can see that the answer is 3, meaning there is a difference of 3 years between the start and end dates.
Example #2
Find the Difference in Months
“m” A difference in a complete month
• Now we have to find the difference in months. You might expect that 3 years should be around 36 months — true, but since this formula does not return the answer as a fraction, we need to find it with another formula that uses “m” (month) instead of years.
• As you can see from the images above, we have to use the unit “m” instead of “y” in this formula.
• The rest of the formula will remain the same =DATEDIF(start_date, end_date,”m”)
• For our example, you can see in the image that there are 42 months of gap between the start and end dates.
Example #3
Find Difference in Days
“d” A difference in complete days
Using the same formula with different units, we can calculate the difference in days between these two dates.
• The difference in days is very useful because this formula cannot return fractional values, so for relatively small calculations it gives the most accurate data.
• With this formula, you can calculate the difference in days and then divide it by 30 to approximate the difference in months, and divide the months by 12 to approximate the difference in years.
• A formula to find the difference in days is =DATEDIF(start_date, end_date,”D”)
• We can see from our given example there is a difference of 1285 days.
Example #4
Unit “md.”
“md” A difference in complete days, Ignoring months and years.
• Unit “md” can be useful when you want to count only days between the given dates, irrespective of months and years. Formula for this unit is =DATEDIF(start_date, end_date,”md”)
• With this unit, the dates behave as if they were in the same month and year — effectively it shows the difference between the day numbers.
• For the given example, it gives a difference of 8 days. As we learned before, with this unit only the day numbers are compared, so the difference between 29 and 21 is 8.
Example #5
Unit “ym.”
“ym” A difference in complete months, Ignoring days and years.
• From this unit, we can identify the difference in months between a start date and an end date, irrespective of the days and years. Formula for this unit is =DATEDIF(start_date, end_date,”ym”)
• So the given example shows the answer 6, as the difference between month # 7 and month # 1 is 6.
• Now, if the start date were in month #9 and the end date in month #1, this formula would return 4, because counting from month #9 it takes 4 months to reach month #1.
• So this is the logic used behind this formula or unit.
Example #6
Unit “yd.”
“yd” A difference in complete days, Ignoring years.
• With this unit, you can find the difference in days between the given dates irrespective of the years; it may span months, but only up to the next year, and it always returns the result as a count of days.
• Formula for this unit is =DATEDIF(start_date, end_date,”yd”)
• For the given example, it shows the answer 190.
• As per the given image, you can see that the unit “yd” counts the difference from 21/01 to 29/07 for the given example.
So far, we have learned about six units for the DATEDIF function in Excel. As a conclusion from all the above examples: to get the most accurate figure, find the difference in days; for moderately accurate data, find the difference in months; and when a difference of even a month or so does not affect your calculation, use the difference in years.
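If you ever need the same counts outside Excel, a rough Python equivalent can be sketched with the dateutil package. The datedif helper below is our own illustration (not a standard library function), it covers only the first five units, and edge cases such as "yd" are not handled:
from datetime import date
from dateutil.relativedelta import relativedelta

def datedif(start, end, unit):
    delta = relativedelta(end, start)
    if unit == "y":        # complete years
        return delta.years
    if unit == "m":        # complete months
        return delta.years * 12 + delta.months
    if unit == "d":        # complete days
        return (end - start).days
    if unit == "md":       # days, ignoring months and years
        return delta.days
    if unit == "ym":       # months, ignoring days and years
        return delta.months
    raise ValueError("unsupported unit: " + unit)

start, end = date(2016, 1, 21), date(2019, 7, 29)
for unit in ("y", "m", "d", "md", "ym"):
    print(unit, datedif(start, end, unit))   # 3, 42, 1285, 8, 6
The printed values match the worked examples above for the same start and end dates.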
Things to Remember about DATEDIF Function in Excel
• Remember that the result might not be exact when calculating years or months, since the function works with whole calendar months and years rather than fractions.
• As we know, not all months have the same number of days. This can affect your calculation over very long spans, where the result could be off by about a month.
Recommended Articles
This has been a guide to DATEDIF in Excel. Here we discuss DATEDIF Formula and how to use the DATEDIF Function in Excel, with practical examples and a downloadable Excel template. You can also go
through our other suggested articles – | {"url":"https://www.educba.com/datedif-in-excel/","timestamp":"2024-11-04T18:21:59Z","content_type":"text/html","content_length":"353339","record_id":"<urn:uuid:93145af9-0ee0-46ab-acc7-1ecb21a0afb5>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00053.warc.gz"} |
Unscramble HACKLES
How Many Words are in HACKLES Unscramble?
By unscrambling letters hackles, our Word Unscrambler aka Scrabble Word Finder easily found 115 playable words in virtually every word scramble game!
Letter / Tile Values for HACKLES
Below are the values for each of the letters/tiles in Scrabble. The letters in hackles combine for a total of 16 points (not including bonus squares)
• H [4]
• A [1]
• C [3]
• K [5]
• L [1]
• E [1]
• S [1]
What do the Letters hackles Unscrambled Mean?
The unscrambled words with the most letters from HACKLES word or letters are below along with the definitions.
• hackle (n.) - A comb for dressing flax, raw silk, etc.; a hatchel.
• shackle (n.) - Stubble. | {"url":"https://www.scrabblewordfind.com/unscramble-hackles","timestamp":"2024-11-06T14:07:38Z","content_type":"text/html","content_length":"55795","record_id":"<urn:uuid:d877f874-990e-4e05-bb1c-255a3623b691>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00392.warc.gz"} |
Authors: Goubko M., Miloserdov O.
Title: Simple Alcohols with the Lowest Normal Boiling Point Using Topological Indices
Publisher: Kragujevac University
Year: 2016
Journal: MATCH Commun. Math. Comput. Chem., V. 75, No 1
Citation: M. Goubko, O. Miloserdov: Simple Alcohols with the Lowest Normal Boiling Point Using Topological Indices, MATCH Commun. Math. Comput. Chem. 2016, V. 75, No 1, P. 29-56
We find simple saturated alcohols with the given number of carbon atoms and the minimal normal boiling point. The boiling point is predicted with a weighted sum of the generalized first Zagreb index,
the second Zagreb index, the Wiener index for vertex-weighted graphs, and a simple index caring for the degree of a carbon atom being incident to the hydroxyl group. To find extremal alcohol
molecules we characterize chemical trees of order n, which minimize the sum of the second Zagreb index and the generalized first Zagreb index, and also build chemical trees, which minimize the Wiener
index over all chemical trees with given vertex weights.
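For readers who want to experiment with the indices mentioned above, they can be computed for a small molecular graph with networkx. The sketch below is illustrative only (it is not the authors' code and uses an unweighted graph, whereas the paper also needs the vertex-weighted Wiener index):
import networkx as nx

def first_zagreb(G):
    # M1: sum of squared vertex degrees
    return sum(d ** 2 for _, d in G.degree())

def second_zagreb(G):
    # M2: sum over edges of the product of the endpoint degrees
    return sum(G.degree(u) * G.degree(v) for u, v in G.edges())

def wiener(G):
    # W: sum of shortest-path distances over all unordered vertex pairs
    dist = dict(nx.all_pairs_shortest_path_length(G))
    return sum(dist[u][v] for u in G for v in G if u < v)

# Carbon skeleton of 2-methylpropane (a small chemical tree)
G = nx.Graph([(0, 1), (1, 2), (1, 3)])
print(first_zagreb(G), second_zagreb(G), wiener(G))   # 12 9 9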
: 5263, : 25945, : 20. | {"url":"http://mtas.ru/search/search_results_ubs_new.php?publication_id=20123&IBLOCK_ID=10","timestamp":"2024-11-03T10:58:48Z","content_type":"text/html","content_length":"11971","record_id":"<urn:uuid:c1f5c5d4-a945-4649-be1b-a7c4f1aa3c69>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00680.warc.gz"} |
Printable Calendars AT A GLANCE
Rounding To The Nearest Thousand Worksheet
Rounding To The Nearest Thousand Worksheet - Practise rounding numbers to the nearest 10 and 100; Rounding off to the nearest 10, 100, 1000 or 10000; Web using these sheets will help your child to:
Position numbers to 10000 on a number line. Web this file contains 30 task cards. Our generator will create the following worksheets: Web here is our generator for generating your own rounding off
numbers worksheets. Web round to the nearest thousand. 4,689 rounded to the nearest 1,000 is 5,000. Use these cards for classroom games, small group instruction, morning work, or learning centers.
Our generator will create the following worksheets: Position numbers to 10000 on a number line. 4,689 rounded to the nearest 1,000 is 5,000. Rounding off to the nearest 10, 100, 1000 or 10000;
Practise rounding numbers to the nearest 10 and 100; Rounding to the nearest whole, to 1dp, or 2dp. Web here is our generator for generating your own rounding off numbers worksheets.
Rounding to the nearest whole, to 1dp, or 2dp. Round 4,512 to the nearest thousand.) 3rd and 4th grades. Practise rounding numbers to the nearest 10 and 100; Rounding off to the nearest 10, 100, 1000
or 10000; 4,689 rounded to the nearest 1,000 is 5,000.
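Away from the printable sheets, the same rule can be checked in a few lines of Python. This sketch uses the "round half up" convention taught in school, which is slightly different from Python's built-in round() (that rounds halves to the even thousand):
import math

def round_to_nearest_thousand(n):
    # round half up: 4,689 -> 5,000; 4,500 -> 5,000; 4,499 -> 4,000
    return int(math.floor(n / 1000 + 0.5)) * 1000

for n in (4689, 4512, 4499, 4500):
    print(n, "->", round_to_nearest_thousand(n))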
rounding worksheet to the nearest 1000 rounding worksheets practice
Web here is our generator for generating your own rounding off numbers worksheets. Instruct kids in grade 3 to follow the rules of rounding to a tee, and they will accomplish this in a flash. Web
round to the nearest thousand. Round numbers to the nearest 1000; 4,689 rounded to the nearest 1,000 is 5,000.
Rounding To Nearest Ten Thousand Worksheet Worksheets For Kindergarten
All the free rounding worksheets in this section support the. Practise rounding numbers to the nearest 10 and 100; Web this file contains 30 task cards. Round 4,512 to the nearest thousand.) 3rd and
4th grades. Use these cards for classroom games, small group instruction, morning work, or learning centers.
Free Printable Math Worksheets For 3rd Grade Rounding Elcho Table
Web round to the nearest thousand. All the free rounding worksheets in this section support the. Round to the nearest thousand. Our generator will create the following worksheets: 4,689 rounded to
the nearest 1,000 is 5,000.
Rounding Worksheet to the nearest 1000
Web here is our generator for generating your own rounding off numbers worksheets. Round 4,512 to the nearest thousand.) 3rd and 4th grades. Web using these sheets will help your child to: Web round
to the nearest thousand. All the free rounding worksheets in this section support the.
Rounding To The Nearest Ten Thousand Worksheet Ivuyteq
Rounding to the nearest whole, to 1dp, or 2dp. Rounding off to 1sf, 2sf or 3sf Round 4,512 to the nearest thousand.) 3rd and 4th grades. Rounding off to the nearest 10, 100, 1000 or 10000; Practise
rounding numbers to the nearest 10 and 100;
Rounding Numbers to the Nearest 100,000 (U.S. Version) (A)
Practise rounding numbers to the nearest 10 and 100; Web using these sheets will help your child to: Round to the nearest thousand. Rounding off to the nearest 10, 100, 1000 or 10000; Web this file
contains 30 task cards.
Rounding to the Nearest Ten Thousand Worksheet Have Fun Teaching
Students round numbers to the nearest thousand with this inviting practice worksheet! All the free rounding worksheets in this section support the. Instruct kids in grade 3 to follow the rules of
rounding to a tee, and they will accomplish this in a flash. Web this file contains 30 task cards. Use these cards for classroom games, small group instruction,.
Rounding To The Nearest Thousand Worksheet Worksheets For Kindergarten
4,689 rounded to the nearest 1,000 is 5,000. Web round to the nearest thousand. Rounding to the nearest whole, to 1dp, or 2dp. Round 4,512 to the nearest thousand.) 3rd and 4th grades. Our generator
will create the following worksheets:
Rounding to the Nearest Hundred Thousand Worksheet by Teach Simple
Rounding to the nearest whole, to 1dp, or 2dp. Web using these sheets will help your child to: Practise rounding numbers to the nearest 10 and 100; Use these cards for classroom games, small group
instruction, morning work, or learning centers. Round to the nearest thousand.
Rounding To The Nearest Thousand Worksheet - Web round to the nearest thousand. Round 4,512 to the nearest thousand.) 3rd and 4th grades. Instruct kids in grade 3 to follow the rules of rounding to a
tee, and they will accomplish this in a flash. 4,689 rounded to the nearest 1,000 is 5,000. Web using these sheets will help your child to: Round to the nearest thousand. Practise rounding numbers to
the nearest 10 and 100; Students round numbers to the nearest thousand with this inviting practice worksheet! Rounding off to the nearest 10, 100, 1000 or 10000; Web here is our generator for
generating your own rounding off numbers worksheets.
Rounding to the nearest whole, to 1dp, or 2dp. 4,689 rounded to the nearest 1,000 is 5,000. Web this file contains 30 task cards. Instruct kids in grade 3 to follow the rules of rounding to a tee,
and they will accomplish this in a flash. Round 4,512 to the nearest thousand.) 3rd and 4th grades.
Round 4,512 to the nearest thousand.) 3rd and 4th grades. Practise rounding numbers to the nearest 10 and 100; Rounding to the nearest whole, to 1dp, or 2dp. Web round to the nearest thousand.
Web Round To The Nearest Thousand.
Rounding off to 1sf, 2sf or 3sf All the free rounding worksheets in this section support the. Instruct kids in grade 3 to follow the rules of rounding to a tee, and they will accomplish this in a
flash. Web here is our generator for generating your own rounding off numbers worksheets.
Our Generator Will Create The Following Worksheets:
Students round numbers to the nearest thousand with this inviting practice worksheet! Round numbers to the nearest 1000; Rounding to the nearest whole, to 1dp, or 2dp. Round to the nearest thousand.
4,689 Rounded To The Nearest 1,000 Is 5,000.
Position numbers to 10000 on a number line. Round 4,512 to the nearest thousand.) 3rd and 4th grades. Web using these sheets will help your child to: Rounding off to the nearest 10, 100, 1000 or
Use These Cards For Classroom Games, Small Group Instruction, Morning Work, Or Learning Centers.
Practise rounding numbers to the nearest 10 and 100; Web this file contains 30 task cards.
Related Post: | {"url":"https://ataglance.randstad.com/viewer/rounding-to-the-nearest-thousand-worksheet.html","timestamp":"2024-11-10T14:52:49Z","content_type":"text/html","content_length":"36449","record_id":"<urn:uuid:980c10d4-2151-43ad-b8ff-bd38e6ff24c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00348.warc.gz"} |
Week 45 Pool Banker Room 2024 – Pool Banker This Week | Sure Bet WayWeek 45 Pool Banker Room 2024 – Pool Banker This Week
Week 45 banker room 2024, Week 45 Pool Banker 2024, Pool Draw This Week 45
Welcome to the Sure Bet Way weekly one-banker room — the Pool Banker for this week, Week 45, 2024. In this banker forum, you are required to post just your best banker, pairs or winning lines, with concrete proofs and well-explained sequences backed up with appropriate references.
Please note that the selling of games is not accepted in the SureBetWay one-banker room; any advert or sales post will be deleted, and spam comments are strictly prohibited.
The aim of creating this forum is to enhance communication between stakers all over the world and to share ideas towards regular winnings weekly. So whatever you post here that is not in line with our guidelines will be deleted.
HERE IS A GUIDE ON HOW TO COMMENT
1. Click on the show comment
2. Type your comment
3. Comment as — Choose NAME/EMAIL
4. Publish your comment
If you wish to appreciate anyone in this banker room. Kindly contact the Admin and your gift will be delivered straight to the recipient
44 Comments
1. Week 45
Proof: Vacated position of Brighton last season to host Brentford in current week to draw.
Ref; week 44 and 45
2. X8X full time draw match
HI SCORE QUIZ MOVEMENT
Week 42
4F at game 2 of Hi score Quiz
Week 43
X4X✓ at game 3 of Hi score Quiz
Week 44
8F at game 2 of Hi score Quiz
Week 45
X8X to draw at game 3 of Hi score Quiz
Good luck
3. xxxxx05xxxxx ftd.
Whenever the sum additions of the week of play and the Saturday date of play equal 56 (exceed), mark the month of play as full-time draw. Then the following week, awaiting 05? to deliver this
week for the continuations.
Refs; Wk26-27 2023/2024.
And now wk45-wk46 2023/2024.
4. Week 45 2023/24
Using Special advance record.
Stating from 2021 week 45 of none Australia week
Bank on
*Addition of lotto sequence for your sure draw
Week 45 2021
Lotto sequence
B and J
2 + 10 = 12XFT✓
Week 45 2022
Lotto sequence
I and M
9 + 13 = 22XFT✓
Week 45 2023
Lotto sequence
H and M
8 + 13 = 21XFT✓
Week 45 2024
Lotto sequence
E and H
5 + 8 = 13XFT?
5. My banker for the week
Week 37.. Nottingham Forest vs Luton xxxxxx
Week 41.. Nottingham Forest @ no.7 xxxxxx
Week 45.. Luton @ no.7. xxxxxx
6. Week 45
Current season.
Real Madrid on coupon….
It position to play the next digit up.
Week 45… Real Madrid at no.17 to play no.7cbk
7. Banker no 16cbk
Prove, since week 38 away team last alphabet is a draw.
8. 1xxxxxx Banker
Follow this movement of any time BOURNEMOUTH plays with letter B.
Phase 1
Week 12 Purple
3ff Brighton vs BOURNEMOUTH failed
Return leg
Week 43 Red
2ff BOURNEMOUTH vs Brighton
Phase 2
Week 17 Brown
3ff BOURNEMOUTH vs Burnley
Return leg
Week 35 Red
2ff Burnley vs BOURNEMOUTH
Phase 3
Week 9 Brown
2xxx Brentford x BOURNEMOUTH
To play in Return in same color
Week 45
1xxx BOURNEMOUTH x Brentford
9. «»22«»BANKER.
ALALANTA vs ROMA.
week 45.
ROMA to position @ away in advance position of
VALENCIA @ home in week 46.
is a gazetted draw.
last season week 46.
VALENCIA @ home 21ff.
week 45.
ROMA @ away
current week 46.
VALENCIA. @ home
week 46.
ROMA @ away
xxx22 to draw.
10. In every week 45.
Just add the 1st letter home and away 1st letter game 24 together, the answer your draw.
Reference 1.
week 45, 2020/2021. Team @ 24. Was Ath. Madrid vs Osasuna=
played (A+O) = 1+15= 16xx√.
Reference 2.
week 45, 2021/2022.
Team @ 24 was
Celta vigo vs Elche = (C+E) = 3+5= 8xx√.
Reference 3.
week 45, 2022/2023.
Team @ 24 was
Juventus vs Cremonese = (J+C) = 10+3= 13xx√.
This week 45. 2023/2024.
Team @ 24 was
Juventus vs Salernitana = (10+19) = 10+19= 29xx.
Awaiting result.
11. WEEK 45
WHEN VILLARREAL VS SEVILLA IS ON TOP OF AC MILAN VS CAGLIARI WHICH BAR DIVIDE THE BOTH LEAGUES.
BANK ON U. BERLIN @ AWAY UNDER HOFFENHEIM @ AWAY.
REF: WK.45,2020/2021.
12. Banker is 27
Proof: Every wk of 5, SAF Treble Jinx 10 is a draw.
This week 45 2024 it is 27.
13. XX 20 XXCBK
Athl.Bilbao vs Real Madrid @ 23fff in WK 45 2020/21, opponent of that Athl.Bilbao, Madrid maintained same 23xx and drew away while Athl.Bilbao picked Osasuna under bar to die in WK 45 2021/22.
In week 45, 2022/23, Villarreal vs Athl.Bilbao @ 20fff, opponent of Athl.Bilbao i.e. Villarreal is still @ 20XX home this wk to draw while Athl.Bilbao is under bar meeting Osasuna.
Good luck!
14. WEEK 45 BROWN
BANKER 13BK
FROM WEEK 25, OPPONENT OF ABERDEEN TO MEET KILMARNOCK IN NEXT WEEK 5 TO REGISTER DRAW.
WEEK 25/35
WEEK 35/45
GREEN LUCK.
15. 11xxxxxx banker
Proof of 11
Whenever you see Brentford set on number 1 away, count the letters of the opponent, the answer to give you a draw.
Week 36 (Purple)
Opponent of Brentford @ no 1 was A-R-S-E-N-A-L (7 letters)
West Ham vs Burnley @ 7xxxxxxx
Week 40 (Purple)
Opponent of Brentford @ no 1 was A-S-T-O-N V. (6 letters)
Man UTD vs Liverpool @ 6xxxxxx
Week 45 (Brown) Current week
Opponent of Brentford @ no 1 is B-o-u-r-n-e-m-o-u-t-h (11 letters)
Livingston vs St. J’Stone @ 11xxxxx
16. Week 45 mark no 8cbk
Proof go to week 21 this season you will see Chelsea at no5 home or away that week 21 mark no48 last draw on coupon then following week 22 Chelsea will maintain no5 again you look for Crystal P.
at away on top of bar to draw with letters (W) check the same last week 44 and this week 45 so mark no8cbk full time draw.
17. Welcome to week 45 Brown color
XXX 10 xxx Banker for this week 45
Anytime you see Verona v Torino meeting together on coupon no 27 on top of bar, check everywhere u we see Chelsea, home or Away, from Chelsea count six (6) down to meet Aberdeen as a fixed draw.
Ref week 45, 2021/ 2022. Week 45, 2022/23 and now week 45, 2023/ 2024
18. Every week of 5, Add week number together, from the result take the last letter of away team, last family as your banker.xx45xx
Every week of 5 Add date of play digit answer, to play it’s next family down. xxx33xxx
19. 22XXX is my banker this wk 45.
Proof: Starting from week 33 this season, every brown pick third letter of home team @ saturday DOP as draw.
Ref; week 33, 37, 41, 45
20. WEEK 45 BANKER = 13XXXCBK
SINCE THE BEGINNING OF THIS SEASON 2023/2024
ST MIRREN VS KILMARNOCK = 13XXXCBK
21. Banker 19
Since week 42 game 2 of treble jink minus one answer to draw.
Good green luck
22. Appendix
XXX 8 XXX
WEEK 39
NOTTMFOR vs CRYSTAL P. XXX
WEEK 41
NOTT’M FOR. vs WOLVES XXX
WEEK 45
WOLVES vs CRYSTAL P. XXX
23. Week45.
Just nap 22xxxx proof is that when ever any number by the league table in soccer research paper at page 3 move to soccer banker box, it become a must draw. Therefore pls use it and money this
week. 22XXXX
Nap 22XXXX
24. Week 45 2022 Brown
Count 3 up from the second bar count 6 down from the third bar to give you one Banker
Week 45 2023 Brown
Count 3 up from the second bar count 6 down from the third bar to give you one solid Banker
Week 45 2024 Brown
Count 3 up from the second bar count 6 down from the third bar it will give you *33* as a solid Banker.
Admin you’re doing well.
25. Welcome to week 45.
Bank on 46.
Prove -Vitesse on week number at away to play game down. Reff.week 47 last season.
26. XXX 26 XXX cbk
Prove goes to Ross county position previous week to draw current week.
Since week 39
27. Since 2020/2021
Number 1 away team starting from first, second and third letter alphabet to produce a draw this 2023/2024 week 45 produces XX 5 XX as a banker this week
28. My banker is 36
Since 2021, 2022, 2023, this week in week 45 the letter of HARTLEPOOL has been drawing
In 2021, H = 8 , L = 12, 8 + 12 = 20 drew.
In 2022, A = 1, O = 15, 1 + 15 = 16 drew.
In 2023, R = 18, O = 15, 18 + 15 = 33 drew
In 2024, T = 20, P = 16, 20 + 16 = 36 to draw.
29. Week 45
When ever BRIGHTON enters the previous numbers of draws at away, is a key admin.
Ref week 14 to 15 2021.
30. Week 45 Bank On 4XBK
Since this Current Season, Any week bar Cut Accros 36/37
Bank on 31 by naija number, the following week Bank on 4 by naija number.
Wk5 31xbk_ by naija number
Wk6 4xbk_ by naija number
Wk27 31xbk_ by naija number
Wk28 4xbk_ by naija number
Wk28 31xb_ by naija number
Wk29 4xbk_ by naija number
Wk30 31xbk_ by naija number
Wk31 4xbk_ by naija number
Wk38 31xbk_ by naija number
Wk39 4xbk_ by naija number
Wk44 31xbk_ by naija number
Wk45 4XBK_ by naija number
31. Week 45
8XX Gazetted draw cbk
Crystal @ away ontop of bar vs letter W team taken to Soccer Treble chance 12 game 1 is a fixed draw.
Ref wk 22, wk 45
32. Every week 45
St Mirren to draw ether home or away.
Since 2018 till date
33. Week 45 bank on No5✓✓✓✓✓prove
Luton must position at No7 away,Wolves on top West ham.(key)
When next Luton is positioning at No7 away and wolves on top of west ham the previous opponent of Wolves must retain it previous position to meet Nott’m’for for a sure draw.
34. WK 45: Game moving from HOT PAIR back page CAPITAL INTERNATIONAL to THIS WEEKS DRAW PICTURE page 1 of SOCCER “X” RESEARCH.
WK 18 it moved from up to left side and drew.
WK 27 it moved from down to de right side of SOCCER and also drew to balance.
In WK 4 game moved from down to de right side of soccer and both washed.
Dis WK 45 game move from up to left side of soccer to clear and balance.
35. This WK 45 single bet:
3 cbk 1:1 cs
Proof; Arsenal away in Brown week of “5” to play a fixed 1:1 cs.
WEEK 5 ==== 1:1 == 1 drew
WEEK 25 === 1:1 == 2 drew
WEEK 45 === 1:1 == 3 fixed
36. Week 45 Banker 1CBK 100% FIXED DRAW
BOURNEMOUTH VS BRENTFORD
In week 42, 2017
Man Utd vs Swansea drew @ 3XX
Treble jinx box 3 = 1 fail
Following week (wk 43)
Arsenal vs Man Utd fail @ 1
Swansea vs Everton fail @ 7
Burnley being away team @ 1 in week 42 will enter 3XX and draw in week 43.
In current week 45
Man Utd vs Arsenal @ 3 to ??
Treble jinx box 3 = 1 to draw
Following week (wk 46)
Arsenal being the away team @ 3 in week 45 to enter no. 1 and repeat draw in week 46.
37. 12XXXXXXXXXXXBANKER
SINCE 2023/2024 SEASON IF FULL LIST GAME 10 AND DRAW BANKER BOX ARE SAME FAMILY NUMBERS MARK THEM FOR A DRAW.
(1) MARK GAME 16 OF TREBLE CHANCE 16 AS YOUR FIRST BANKER. 12XXXX
WEEK 20
*22XXXBK* OSS
WEEK 32
*20XXXBK* 0SS
WEEK 36
*42XXXBK* ‘X’
WEEK 45
*12XXXBK* ‘X’
38. WEEK 45 BROWN
8 ccbbkk
* Bob-Morton rsk brown WK sequence. Frontpage picture ,player Jersey number move to 3rd page down pair, LHS as banker
8✓ * 39 . This WK45, the player on picture faint background is man.Utd captain Bruno Fernandez Jersey number 8. Ref. Brown WK41 current.
39. Week 45 one banker update: no20 is d banker or no draw on coupon.
Prove: since three season, every week45, game no45 home last alphabet and no45 away last alphabet plus together, d answer to play game down. This week45, game no45 home last alphabet is 14 and
away last alphabet is 5 plus together it’s 19 to play game down which is 20.
40. My banker this week 45 is xxx7xxx
Proof BRENTFORD @ NUMBER 1 AWAY.
GO TO WEEK OF PLAY number 36 (-week 36 first setting) the number before the week of play must play and the first Home alphabet must start with G=7 is a confirmed draw.
In week 36 y have HARROGATE VS CRAWLEY in number 36 then a number before is 35 = GILLINGHAM VS TRANMERE .YOU SEE G IN GILLINGHAM = 7 PLAYED.
This week 45 you have HEERENVEN VS VITESSE, a number before the week of play 44 first Home alphabet = G.A.Eagles VS AZ Alkmaar. This week mark G = 7 number 44 as full time draw [xxx7xxx].
41. 10xxxxx banker.
Prove of 10.
Open the front page of your soccer research, from week 37, every Brown, bank on the 6th game under ” Special ‘X’ Tips for the Week.
Week 37 (Brown)
Luton Town appeared as the 6th game in Special ‘X’ Tips for the week.
Luton vs Nott’m. For @ 7xxxxxx
Week 41 (Brown)
NOTT’M FOR. appeared as the 6th game in “Special ‘X’ Tips for the week.
NOTT’M FOR. Vs Wolves @ 7xxxxxx.
Week 45 Brown ( Current week)
Hibernian appeared as the 6th game in “Special ‘X’ tips for the week.
Hibernian vs Aberdeen @ 10xxxxxx
Secondly, open your Bob Morton for this 3 weeks pair interpolation,
From week 43, two games in Joker pair has been drawing side by side.
Week 43
Joker pair 26xxxp44fff
Peterboro vs Bolton @ 26xxxxx
Week 44
Joker pair 17fffp48xxxx
W.Bremen vs B. M’Glabach @ 48xxxx
Week 45 (Current week)
Joker pair 10xxxp22??
Obviously the turn of 10 to draw.
Hibernian vs Aberdeen @ 10xxxxxxx
Bank on 10xxxxx with confidence
42. Welcome to week 45
1st phase
Man UTD Burnley xx
Newcastle vs Sheffield UTD ff
Burnley vs Newcastle ff
Crystal p vs West Ham ff
West ham vs Liverpool xx
2nd phase
Man UTD vs Arsenal xx
Everton vs Sheffield UTD??
Arsenal vs Everton??
Wolves vs Crystal p xx
Liverpool vs Wolves??
43. REF 2021/22, 2022/23 & 2023/24
THE MEETING OF VERONA vs TORINO
MARK THE FOLLOWING TO DRAW.
CHELSEA XXX
ST.MIRREN XXX
ROMA XXX
XXX 5*13*22 XXX
44. Welcome to wk.45 Banker Room. Play 1xx as draw. Proof: Whenever game in box 1 of hi-score quiz draws and if number 1 is placed there the following wk., mark it as draw. So play 1xx as draw.
Ref.wks. 5-6,11-12 and 44-45.
Creating Tailored Numeric Sequences in Python for Scalable Solutions
Written on
Chapter 1: Introduction to Numeric Sequences
Welcome back, inquisitive learners! Today, we’re diving into the fascinating realm of number sequences in Python. Our focus will be on crafting custom finite sequences designed for various
applications. So, grab your favorite code editor, and let’s get started!
Section 1.1: Utilizing the range() Function
When we think about finite sequences in Python, the range() function is often the first that comes to mind. It primarily serves to facilitate iteration over integer intervals. Here's a simple example
of its use:
for i in range(10):
    print(i)
Output: 0, 1, ..., 8, 9
You might have noticed that '10' is absent from the output. This is because range() stops just before the specified limit. If you want a sequence that starts at 10, pass the start value as an extra parameter:
for i in range(10, 14):
    print(i)
Output: 10, 11, 12, 13
Additionally, we can create sequences that have specific steps using range():
for i in range(0, 20, 3):
    print(i)
Output: 0, 3, 6, ..., 15, 18
Now, let’s explore how to construct nonlinear sequences.
Subsection 1.1.1: Creating Nonlinear Sequences
Imagine you need a nonlinear sequence, such as squares or cubes. Instead of calculating these values manually in loops, you can use list comprehensions:
squares = [x**2 for x in range(10)]
cubes = [x**3 for x in range(10)]
print("Squares:", squares)
print("Cubes:", cubes)
Output: Squares: [0, 1, 4, 9, 16, 25, 36, 49, 64, 81], Cubes: [0, 1, 8, 27, 64, 125, 216, 343, 512, 729]
This method enables the quick generation of complex sequences that can be utilized for various computational needs.
Section 1.2: Fibonacci Sequence Generation
A classic example is generating the Fibonacci sequence, where each number is the sum of the two preceding ones. While a recursive approach may lead to inefficiencies, Python's generators provide a
powerful alternative:
import itertools

def fib():
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

fib_sequence = list(itertools.islice(fib(), 10))
print(fib_sequence)
Output: [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
By using islice(), we can generate a sequence of a specified length efficiently.
Chapter 2: Conclusion
Finite sequences are vital for addressing many real-world computational challenges. With functions like range(), list comprehensions, and lambda expressions, you can easily manage linear sequences.
Furthermore, utilizing Python's generators allows for effective handling of more complex sequences, including those that require recursive definitions. | {"url":"https://hmrtexas.com/custom-numeric-sequences-python-scalable-solutions.html","timestamp":"2024-11-02T08:17:32Z","content_type":"text/html","content_length":"8418","record_id":"<urn:uuid:3de0e3dc-dca3-4117-8313-eedb5abf2d3b>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00405.warc.gz"} |
Row Echelon form - Definition, Theorem, Formulas, Solved Example Problems | Elementary Transformations of a Matrix
Row-Echelon form
Using the row elementary operations, we can transform a given non-zero matrix to a simplified form called a Row-echelon form. In a row-echelon form, we may have rows all of whose entries are zero.
Such rows are called zero rows. A non-zero row is one in which at least one of the entries is not zero. For instance, in the matrix,
R[1] and R[2] are non-zero rows and R[3] is a zero row
Definition 1.5
A non-zero matrix E is said to be in a row-echelon form if:
i. All zero rows of E occur below every non-zero row of E.
ii. The first non-zero element in any row i of E occurs in the j^th column of E , then all other entries in the j^th column of E below the first non-zero element of row i are zeros.
iii. The first non-zero entry in the ith row of E lies to the left of the first non-zero entry in (i +1)^th row of E .
A non-zero matrix is in a row-echelon form if all zero rows occur as bottom rows of the matrix, and if the first non-zero element in any lower row occurs to the right of the first non- zero entry in
the higher row.
The following matrices are in row-echelon form:
Consider the matrix in (i). Go up row by row from the last row of the matrix. The third row is a zero row. The first non-zero entry in the second row occurs in the third column and it lies to the
right of the first non-zero entry in the first row which occurs in the second column. So the matrix is in row- echelon form.
Consider the matrix in (ii). Go up row by row from the last row of the matrix. All the rows are non-zero rows. The first non-zero entry in the third row occurs in the fourth column and it occurs to
the right of the first non-zero entry in the second row which occurs in the third column. The first non-zero entry in the second row occurs in the third column and it occurs to the right of the first
non-zero entry in the first row which occurs in the first column. So the matrix is in row-echelon form.
The following matrices are not in row-echelon form:
Consider the matrix in (i). In this matrix, the first non-zero entry in the third row occurs in the second column and it is on the left of the first non-zero entry in the second row which occurs in
the third column. So the matrix is not in row-echelon form.
Consider the matrix in (ii). In this matrix, the first non-zero entry in the second row occurs in the first column and it is on the left of the first non-zero entry in the first row which occurs in
the second column. So the matrix is not in row-echelon form.
Method to reduce a matrix [a[ij]] of order m × n to a row-echelon form.
Step 1
Inspect the first row. If the first row is a zero row, then the row is interchanged with a non-zero row below the first row. If a[11] is not equal to 0, then go to step 2. Otherwise, interchange the
first row R[1] with any other row below the first row which has a non-zero element in the first column; if no row below the first row has non-zero entry in the first column, then consider a[12] . If
a[12] is not equal to 0, then go to step 2. Otherwise, interchange the first row R[1] with any other row below the first row which has a non-zero element in the second column; if no row below the
first row has non-zero entry in the second column, then consider a[13]. Proceed in the same way till we get a non-zero entry in the first row. This is called pivoting and the first non-zero element
in the first row is called the pivot of the first row.
Step 2
Use the first row and elementary row operations to transform all elements under the pivot to become zeros.
Step 3
Consider the next row as first row and perform steps 1 and 2 with the rows below this row only.
Repeat the step until all rows are exhausted.
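The three steps translate almost directly into code. The numpy sketch below is illustrative only: it uses floating-point arithmetic, picks the first non-zero entry as the pivot (without the partial pivoting a numerical library would prefer), and is applied to an arbitrary 3 × 3 matrix rather than the one in the examples.
import numpy as np

def row_echelon(A, tol=1e-12):
    E = np.array(A, dtype=float)
    m, n = E.shape
    pivot_row = 0
    for col in range(n):
        # Step 1: find a row at or below pivot_row with a non-zero entry in this column
        candidates = [r for r in range(pivot_row, m) if abs(E[r, col]) > tol]
        if not candidates:
            continue                     # no pivot in this column, move one column right
        E[[pivot_row, candidates[0]]] = E[[candidates[0], pivot_row]]   # row interchange
        # Step 2: make every entry below the pivot zero
        for r in range(pivot_row + 1, m):
            E[r] -= (E[r, col] / E[pivot_row, col]) * E[pivot_row]
        # Step 3: treat the next row as the new first row and repeat
        pivot_row += 1
        if pivot_row == m:
            break
    return E

A = [[3, -1, 2], [-6, 2, 4], [-9, 5, -1]]
print(row_echelon(A))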
Example 1.13
Reduce the matrix to a row-echelon form.
This is also a row-echelon form of the given matrix.
So, a row-echelon form of a matrix is not necessarily unique.
Example 1.14
Reduce the matrix to a row-echelon form. | {"url":"https://www.brainkart.com/article/Row-Echelon-form_39064/","timestamp":"2024-11-02T17:32:40Z","content_type":"text/html","content_length":"59551","record_id":"<urn:uuid:8378a471-3c1d-4200-9518-e231887bb338>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00020.warc.gz"} |
Leonardo Saud Maia Leite: A study about the chain polynomial of the lattice of flats of a matroid | KTH
Leonardo Saud Maia Leite: A study about the chain polynomial of the lattice of flats of a matroid
Time: Wed 2023-03-22 11.15 - 12.15
Location: 3721 KTH
Participating: Leonardo Saud Maia Leite (KTH)
Abstract: The chain polynomial of a finite lattice \(\mathcal{L}\) is given by \(p_\mathcal{L} = \sum_{k ≥ 0} c_k (\mathcal{L}) x^k\), where \(c_k (\mathcal{L})\) is the number of chains of length \
(k\) in \(\mathcal{L}\). There is a conjecture which states that, if \(\mathcal{L}\) is a geometric lattice, then its chain polynomial \(p_L\) is real-rooted. In particular, it is log-concave. Here,
we will consider a finite matroid \(M\), define its lattice of flats \(L(M)\), and study the polynomial \(p_{L(M)}\). We verified that the conjecture is true for paving matroids and for some
generalized paving matroids, a new class of matroids introduced during this study. This is an ongoing and joint work with Petter Brändén. | {"url":"https://www.kth.se/math/kalender/leonardo-saud-maia-leite-a-study-about-the-chain-polynomial-of-the-lattice-of-flats-of-a-matroid-1.1239431?date=2023-03-22&orgdate=2023-03-19&length=1&orglength=0","timestamp":"2024-11-12T06:15:32Z","content_type":"text/html","content_length":"56797","record_id":"<urn:uuid:c698b4dd-9542-4bca-a3c9-8e931a6c5930>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00629.warc.gz"} |
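To get a concrete feel for the polynomial, the chains of a very small lattice can be enumerated by brute force. The sketch below is ours, not the speaker's: it counts chains by their number of elements (conventions for the "length" of a chain differ by a shift), using the lattice of flats of the uniform matroid U(2,3) ordered by inclusion.
from itertools import combinations

# Lattice of flats of U(2,3): the empty flat, the three points, and the whole ground set
flats = [frozenset(), frozenset({1}), frozenset({2}), frozenset({3}), frozenset({1, 2, 3})]

def is_chain(elements):
    # a chain is a set of pairwise comparable flats (comparable = one contains the other)
    return all(a <= b or b <= a for a, b in combinations(elements, 2))

chain_counts = {k: sum(1 for c in combinations(flats, k) if is_chain(c))
                for k in range(1, len(flats) + 1)}
print(chain_counts)   # {1: 5, 2: 7, 3: 3, 4: 0, 5: 0}
The counts 5, 7, 3 then assemble into the chain polynomial under whichever length convention is adopted.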
Smoothness of the twistor space of a lorentzian manifold, or “convexity wrt null geodesics”
Null lines in Minkowski space form a 5-dimensional manifold, represented as a (real) quadric $\mathbf{PN}\subset\mathbb{C}\mathbf{P}^3$. This is a well-known fact, on which R. Penrose’s twistor
programme is based. It is also known that the space of null lines can be constructed locally (albeit without Cauchy–Riemann structure in the curved case) and that the resulting space $\mathfrak{N}$
of (non-parametrized) null geodesics possesses a natural contact structure, co-oriented one if original manifold is time-oriented.
As of global constructions, a Cauchy hypersurface $M$ permits to describe $\mathfrak{N} = ST^*M$; see e.g. arxiv:0810.5091. This requires global hyperbolicity, a strong condition on the original
manifold, that isn’t anywhere near necessity for $\mathfrak{N}$ to be a manifold.
On the other hand, for a lorentzian manifold $X$ (or d+1 pseudo-Riemann for arbitrary dimension) let’s define $\mathfrak{S}_x$ (called the sky of $x$) as the projectivization of all null vectors
in $T_x X$, diffeomorphic (and conformly equivalent) to the sphere $S^{d-1}$. Let $\mathfrak{S}X$ be the bundle of skies for all $x\in X$, with (d−1)-dimensional fibres. Its total space (2d
-dimensional) has a natural foliation with one-dimensional leaves, namely null geodesics. Now we build $\mathfrak{N}$ as the space of leaves (in other words, the quotient space of $\mathfrak{S}X$ by
equivalence relation to lie on the same null geodesic) and for a strongly causal lorentzian manifold it should result in a Hausdorff space. But it isn’t necessary a manifold, as the following
1+1-dimensional example shows:
Intuitively, the property looked for smoothness of $\mathfrak{N}$ is convexity of $X$ wrt null geodesics, but Ī̲’m unsure how to say it exactly. Convexity of subsets in aforementioned sense is
well-known in lorentzian geometry, but Ī̲ failed to find in papers anything like “convex”, “light-convex” or “causally convex” referring to entire manifold. Can anybody suggest a strict formulation
that is weaker than global hyperbolicity?
This post imported from StackExchange MathOverflow at 2015-12-22 18:43 (UTC), posted by SE-user Incnis Mrsi | {"url":"https://physicsoverflow.org/34664/smoothness-twistor-lorentzian-manifold-convexity-geodesics","timestamp":"2024-11-09T03:34:53Z","content_type":"text/html","content_length":"100877","record_id":"<urn:uuid:a9227adc-d1b8-454a-85de-1273bc6664d0>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00895.warc.gz"} |
VTU CGPA Calculator (2022,2021,2018 Scheme) » VTU Student Portal
VTU CGPA Calculator
Note : If SGPA is not available for any semester , leave blank.
Your CGPA is :
Percentage is : %
Division is :
Calculations are done as per VTU scheme regulations.
**Brief Information on VTU CGPA**
VTU CGPA stands for Visvesvaraya Technological University Cumulative Grade Point Average. It is a measure used to evaluate a student’s overall academic performance across all semesters completed so
far. The CGPA is calculated by taking the weighted average of the SGPA’s (Semester Grade Point Average) from all semesters, with each semester’s total credit points considered. This provides a
comprehensive view of a student’s academic progress throughout their course of study.
What is the formula to calculate VTU CGPA?
To calculate the VTU CGPA (Cumulative Grade Point Average), you can use the following formula :
CGPA = Σ(Subject Credits × Grade Points) / Σ(Subject Credits), where both sums run over all subjects up to that semester, excluding those with an F grade.
Award Of Class Based On VTU CGPA :
Range of CGPA Class
7.75 and above First Class with Distinction
6.75 - 7.74 First Class
6.74 and below Second Class
Range of CGPA Class
7.00 and above First Class with Distinction
6.00 - 6.99 First Class
5.00 - 5.99 Second Class
What is VTU CGPA Calculator?
A VTU CGPA calculator is a tool used by students studying under Visvesvaraya Technological University (VTU) to calculate their overall academic performance. VTU uses a credit-based grading system
where each course is assigned a certain number of credits, and students receive grades for each course that correspond to grade points.
Here’s a simple and easy-to-understand explanation of how the VTU CGPA calculator works:
1.Grade Points and Credits: Each course you take is worth a certain number of credits, and you receive a grade that corresponds to a grade point (e.g., A = 9, B = 8, etc.)
2.Calculating SGPA: For each semester, you calculate your Semester Grade Point Average (SGPA) by multiplying the grade points you earned in each course by the credits for that course, adding these up
for all courses, and then dividing by the total number of credits for that semester.
SGPA=∑(Grade Points×Course Credits) / ∑Course Credits
3.Calculating CGPA: To find your CGPA, you take the SGPAs of all the semesters completed and calculate the weighted average, considering the total credits of each semester.
CGPA=∑(SGPA×Total Semester Credits) / ∑Total Semester Credits
A VTU CGPA calculator automates this process by allowing you to input your SGPA for each semester and then computing your CGPA based on the formulas above. This helps you quickly determine your
academic standing without manual calculations.
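A minimal sketch of that automation in Python (our illustration, not an official VTU tool) takes each completed semester's SGPA and total credits and returns the CGPA:
def cgpa(semesters):
    # semesters: list of (sgpa, total_credits) pairs, one per completed semester
    total_points = sum(sgpa * credits for sgpa, credits in semesters)
    total_credits = sum(credits for _, credits in semesters)
    return round(total_points / total_credits, 2)

# Example with made-up SGPA and credit values for four semesters
print(cgpa([(8.2, 20), (7.9, 22), (8.5, 21), (8.0, 20)]))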
How to use VTU CGPA Calculator?
Step 1 : Select the scheme under which you are studying at VTU.
Step 2 : Enter the SGPA for the semesters for which you want to calculate your VTU CGPA.
Step 3 : Click the “Calculate” button. | {"url":"https://vtustudent.in/vtu-cgpa-calculator/","timestamp":"2024-11-04T11:44:31Z","content_type":"text/html","content_length":"216970","record_id":"<urn:uuid:bd292d97-1cc1-40da-9185-e4f00fa56585>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00495.warc.gz"} |
Excel COUNTIF Example | How to Implement COUNTIF Examples?
Updated June 8, 2023
Excel COUNTIF Example (Table of Contents)
Excel COUNTIF Examples
Excel COUNTIF Example counts the cells that meet certain criteria or conditions. It can count cells that match specific criteria with text, numbers or dates. By referring to some COUNTIF examples in Excel, you can understand the use and implementation of the COUNTIF function.
Syntax of COUNTIF Example in Excel
The Syntax of the COUNTIF Function includes 2 parameters. Before we apply COUNTIF, first let’s see the syntax of the COUNTIF Function in Excel, as shown below;
Range = The range we must select from where we will get the count.
Criteria = Criteria should be any exact word or number we need to count.
The return value of COUNTIF in Excel is a positive number. The value can be zero or non-zero.
How to implement Excel COUNTIF Examples?
Using the COUNTIF Function in Excel is very easy. Let’s understand the working of the COUNTIF Function in Excel by the examples below.
Excel COUNTIF Example – Illustration #1
The COUNTIF function in Excel is used for counting cell content in selected range data. The selected cells may contain numbers or text. Here we have a list of some products which are repeated
multiple times. Now we need to check how many times a product gets repeated.
As we can see in the above screenshot. We have some product types, and besides that, we have chosen a cell for counting cells of a specific product type.
For applying the COUNTIF Function example, go to the cell where we need to see the output and type the “=” (Equal) sign to enable all the inbuilt functions of Excel. Now type COUNTIF and select it.
Range = Select the range as A2:A18.
Criteria = For text, let’s enter the criteria as “Mobile” in quotation marks, since it is a text value.
As we see below the screenshot, how our applied COUNTIF final formula will look like. Blue colored cells are our range value, and in inverted commas, Mobile is our criteria to be calculated.
Once we press the Enter key, we will get the applied formula as shown below.
As we can see, the count of product type Mobile is coming as 5, which is also highlighted in yellow in the above screenshot.
We can test different criteria to check the correctness of the applied formula.
Excel COUNTIF Example – Illustration #2
There is one more method of applying the COUNTIF Function in Excel. For this, put the cursor on the cell where we need to apply COUNTIF and then go to the Formula menu tab and click on Insert
Function, as shown in the below screenshot.
Once we click on it, we will get the Insert Function box with the Excel list of inbuilt functions, as shown below. From the tab, select a category, and choose All to get the list of all functions.
And from the Select a function box, select COUNTIF and click OK. Or type COUNTIF or a keyword related to this and find related functions in the Search for a function box.
After that, we will see the function argument box, where we need to select the same range as we did in Illustration #1 but with different criteria as Desktop and click on OK.
If the formula is correct, we will see it resulting in the Function arguments box, as highlighted. After that, we will get the result in the output cell, as shown below.
As we can see in the above screenshot, the Desktop count is 4. Which are also highlighted in yellow color in the above screenshot?
For this process also, we can test different criteria to check the correctness of the applied formula.
This is how the COUNTIF function is used to calculate the numbers or words that are repeated multiple times. This is quite helpful where the data is so huge that we cannot apply filters.
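For comparison, the same count takes only a few lines outside Excel. The Python sketch below (our illustration, with made-up data) mirrors what COUNTIF does for text criteria, including its case-insensitive comparison:
products = ["Mobile", "Desktop", "mobile", "Tablet", "MOBILE", "Desktop"]

def countif(values, criteria):
    # COUNTIF-style text match: case-insensitive equality
    return sum(1 for v in values if str(v).lower() == str(criteria).lower())

print(countif(products, "Mobile"))    # 3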
Excel COUNTIF Example – Illustration #3
Let’s see one more example of the COUNTIF Function in Excel. We have a list of some students where student marks of Subject X and Subject Y are mentioned in columns B and C. Now with the help of the
COUNTIF Function Example, we will see how many students got 19 Marks out of 20.
For this, go to the cell where we need to see the output. Type = (Equal) sign and search for the COUNTIF function and select it as shown below.
Now select the range. Here, as we have two columns where we can count the values, we will select columns B and C from cell B2 to B6. By this, we will be covering the B2 to C6 cell range. For the
criteria, type 19 in inverted commas, as shown below.
After that, press the Enter key to apply the formula, as shown below.
As we can see in the above screenshot, the COUNTIF function counted that only 2 students got marks that are 19 in any subject.
Here, by applying COUNTIF functions where the range is more than one column, the function checks the criteria in the selected range and gives the result. As per the above marks, Kane and Reema are
students who got 19 marks in one of the subjects. There could be cases where we could get 19 marks against a single entry irrespective of the range selected, but the output will be the combined
result of data available in the complete selected range.
Things to Remember
• The second parameter in the formula, “Criteria”, is case-insensitive.
• As a result, only the cells whose values meet the criteria will be counted.
• If a wildcard character needs to be matched literally in the criteria, it must be preceded by the tilde operator, i.e., ‘~?’, ‘~*’.
Recommended Articles
This has been a guide to Examples of the COUNTIF Function in Excel. Here we discuss how to use COUNTIF Example in Excel, practical illustrations, and a downloadable Excel template. You can also go
through our other suggested articles – | {"url":"https://www.educba.com/countif-examples-in-excel/","timestamp":"2024-11-12T23:13:50Z","content_type":"text/html","content_length":"354280","record_id":"<urn:uuid:277a0d16-9509-4a1e-8e3f-a15112bfe071>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00090.warc.gz"} |
How To Turn A Column Into A Table Using Formulas
In a previous post I showed How To Turn A Table Into A Column Using Formulas, and in this post we’re going to explore how to do the inverse action and turn a column into a table.
You could do this in a number of different ways but these are the two that make the most sense given a column of data comprised of small blocks of related data like in the example. In this example
every three rows of the column relate to one person.
1. We could convert this to a table where each column in the table contains the data relating to one person.
2. We could convert this to a table where each row in the table contains the data relating to one person.
Option 1
To create a table where each column contains related data we can use this formula.
$B$3:$B$14 is the original column of data and $D$3:$G$5 is a 4 column and 3 row range because our original data has 4 blocks of related data and 3 items in each block.
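One formula that fits this description (a reconstruction from the ranges above and the formula breakdown below, not necessarily the post's verbatim formula) is =INDEX($B$3:$B$14,ROWS($D$3:$G$5)*(COLUMN()-COLUMN($D$3:$G$5))+ROW()-ROW($D$3:$G$5)+1,1), entered in D3 and copied across the whole $D$3:$G$5 range.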
Option 2
To create a table where each row contains related data we can use this formula.
$B$3:$B$14 is the original column of data and $E$11:$G$14 is a 3 column 4 row range because our original data has 4 blocks of related data and 3 items in each block.
Formula Breakdown
The INDEX function returns a value from a range based on a row number and column number. So, =INDEX($B$3:$B$14,4,1) refers to row 4 and column 1 of the range $B$3:$B$14, in our example this contains
the value Yoda.
Since our range only has one column, the column index in our formula will always be 1 and our formula will look like this =INDEX($B$3:$B$14,number representing the right row,1). We need to be clever
about how we get the row number.
Above are the row index numbers we would like (for option 2) when the formula is copied across the table. Of course, for this formula we will need to know in advance there are 4 blocks of data
containing 3 fields each so we can set up the range of our output table as a 4 row and 3 column table. In our option 2 example this table is $E$11:$G$14.
If we want this series of numbers we need a formula like this: number of columns in the range * current row of the range + current column of the range + 1. COLUMNS($E$11:$G$14) will give us the number
of columns in the range $E$11:$G$14. ROW()-ROW($E$11:$G$14) will give us the current row number (starting at 0) of the range, while COLUMN()-COLUMN($E$11:$G$14) gives us the current column number
(starting at 0) of the range.
Putting it all together we get our formula for the row index:
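Based on those pieces (again a reconstruction rather than a verbatim quote of the post), the complete Option 2 formula becomes =INDEX($B$3:$B$14,COLUMNS($E$11:$G$14)*(ROW()-ROW($E$11:$G$14))+COLUMN()-COLUMN($E$11:$G$14)+1,1), entered in E11 and filled across $E$11:$G$14.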
| {"url":"https://www.howtoexcel.org/how-to-turn-a-column-into-a-table-using-formulas/","timestamp":"2024-11-12T02:27:20Z","content_type":"text/html","content_length":"420043","record_id":"<urn:uuid:71faf29b-3c18-44c1-88b2-1b4e495cd7fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00343.warc.gz"} |
3.1.2 Semi-Algebraic Models
In both the polygonal and polyhedral models, $f$ was a linear function. In the case of a semi-algebraic model for a 2D world, $f$ can be any polynomial with real-valued coefficients and variables $x$ and $y$. For a
3D world, $f$ is a polynomial with variables $x$, $y$, and $z$. The class of semi-algebraic models includes both polygonal and polyhedral models, which use first-degree polynomials. A point set determined by a
single polynomial primitive is called an algebraic set; a point set that can be obtained by a finite number of unions and intersections of algebraic sets is called a semi-algebraic set.
Consider the case of a 2D world. A solid representation can be defined using algebraic primitives of the form
$H = \{(x, y) \in W \mid f(x, y) \leq 0\}.$
As an example, let $f = x^2 + y^2 - 4$. In this case, $H$ represents a disc of radius $2$ that is centered at the origin. This corresponds to the set of points $(x, y)$ for which $f(x, y) \leq 0$, as depicted in Figure 3.4a.
Figure 3.4: (a) Once again, $f$ is used to partition $W$ into two regions. In this case, the algebraic primitive $H$ represents a disc-shaped region. (b) The shaded "face" can be exactly modeled using only
four algebraic primitives.
Example 3.1
(Gingerbread Face)
Consider constructing a model of the shaded region shown in Figure 3.4b. Let the outer circle have radius $2$ and be centered at the origin. Suppose that the "eyes" have radius $1/4$ and $1/4$ and are centered at $(-1/2, 1/2)$ and $(1/2, 1/2)$, respectively. Let the "mouth" be an ellipse with
major axis $1$ and minor axis $1/2$ that is centered at $(0, -1/2)$. The functions are defined as
$f_1 = x^2 + y^2 - 4,$
$f_2 = -\left[(x - 1/2)^2 + (y - 1/2)^2 - 1/16\right],$
$f_3 = -\left[(x + 1/2)^2 + (y - 1/2)^2 - 1/16\right],$
$f_4 = -\left[4 x^2 + 16 (y + 1/2)^2 - 1\right].$
For $f_2$, $f_3$, and $f_4$, the familiar circle and ellipse equations were multiplied by $-1$ to yield algebraic primitives
that are satisfied for all points outside of the circle or ellipse. The shaded region $O$ is represented as $O = H_1 \cap H_2 \cap H_3 \cap H_4$.
In the case of semi-algebraic models, the intersection of primitives does not necessarily result in a convex subset of $W$. In general, however, it might be necessary to form $O$ by taking unions and
intersections of algebraic primitives.
A logical predicate, $\phi(x, y)$, can once again be formed, and collision checking is still performed in time that is linear in the number of primitives. Note that it is still very efficient to evaluate every
primitive; $f$ is just a polynomial that is evaluated on the point $(x, y)$.
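To make the linear-time evaluation concrete, here is a minimal Python sketch of such a predicate for the gingerbread face — the specific constants follow the reconstruction of Example 3.1 above and should be treated as assumptions rather than the book's code:

def f1(x, y):
    # outer circle of radius 2: inside or on it when f1 <= 0
    return x**2 + y**2 - 4

def f2(x, y):
    # right eye: the circle equation is negated, so f2 <= 0 holds outside the eye
    return -((x - 0.5)**2 + (y - 0.5)**2 - 1.0/16)

def f3(x, y):
    # left eye, negated in the same way
    return -((x + 0.5)**2 + (y - 0.5)**2 - 1.0/16)

def f4(x, y):
    # mouth ellipse, negated
    return -(4*x**2 + 16*(y + 0.5)**2 - 1)

def phi(x, y):
    # each primitive is evaluated once, so the check is linear in the number of primitives
    return f1(x, y) <= 0 and f2(x, y) <= 0 and f3(x, y) <= 0 and f4(x, y) <= 0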
The semi-algebraic formulation generalizes easily to the case of a 3D world. This results in algebraic primitives of the form
$H = \{(x, y, z) \in W \mid f(x, y, z) \leq 0\},$
which can be used to define a solid representation of a 3D obstacle and a logical predicate $\phi(x, y, z)$.
Equations (3.10) and (3.13) are sufficient to express any model of interest. One may define many other primitives based on different relations, such as $f(x, y, z) \geq 0$, $f(x, y, z) = 0$, $f(x, y, z) < 0$, $f(x, y, z) > 0$, and $f(x, y, z) \neq 0$; however, most of them do not
enhance the set of models that can be expressed. They might, however, be more convenient in certain contexts. To see that some primitives do not allow new models to be expressed, consider the primitive
$H = \{(x, y, z) \in W \mid f(x, y, z) \geq 0\}.$
The right part may be alternatively represented as $-f(x, y, z) \leq 0$, and $-f$ may be considered as a new polynomial function of $x$, $y$, and $z$. For an example that involves the $=$ relation, consider the primitive
$H = \{(x, y, z) \in W \mid f(x, y, z) = 0\}.$
It can instead be constructed as $H = H_1 \cap H_2$, in which
$H_1 = \{(x, y, z) \in W \mid f(x, y, z) \leq 0\}$ and $H_2 = \{(x, y, z) \in W \mid -f(x, y, z) \leq 0\}.$
The $<$ relation does add some expressive power if it is used to construct primitives.^3.2 It is needed to construct models that do not include the outer boundary (for example, the set of all points
inside of a sphere, which does not include points on the sphere). These are generally called open sets and are defined in Chapter 4.
Steven M LaValle 2020-08-14 | {"url":"https://lavalle.pl/planning/node83.html","timestamp":"2024-11-09T19:05:07Z","content_type":"text/html","content_length":"19617","record_id":"<urn:uuid:2d39962e-6d85-4bb6-9186-95c8c6ad60e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00132.warc.gz"} |
JEXPO Exam Pattern-Marking Scheme, Exam Mode, Exam Duration 99EntranceExam
JEXPO Exam Pattern-Marking Scheme, Exam Mode, Exam Duration
Candidates aspiring for admission in diploma programs of Engineering/Technology and Architecture offered through JEXPO should familiarize themselves with the JEXPO exam pattern. The exam will be
conducted by West Bengal state council of technical & vocational education & skill development to select the candidates for different diploma courses offered by polytechnic colleges/institutes
affiliated to it. Candidates will be given admission based on their performance in the exam. A fair number of applicants take this exam every year but only a few get through it. Therefore,
students must study for the exam strictly according to the JEXPO exam pattern and the syllabus associated with it.
JEXPO Exam Pattern
Given below is the complete JEXPO exam pattern with the help of which candidates can prepare an effective strategy to crack the exam-
• Exam Mode: Online or Offline
• Exam Duration: The duration of the exam is 2 Hours.
• Type of Questions: The questions asked will be of Objective types (Multiple Choice Questions)
• Subjects: Questions are to be asked from Physics, Chemistry and Mathematics.
• Marking: 1 mark will be provided for every correct attempt.
• Negative Marking: 0.25 will be deducted for each incorrect response.
• Use of Pen: Only Black/Blue ballpoint pens are allowed (if exam is conducted in offline mode)
Question Paper Pattern
Subjects No. of questions Marking Scheme
Physics 25 questions 25 marks
Chemistry 25 questions 25 marks
Mathematics 50 questions 50 marks
JEXPO 2022 Syllabus
After going through the JEXPO exam pattern, candidates should know that the syllabus for the exam will be as per the level of class 10th. Candidates should study the relevant subjects taught to them
in their secondary examinations. The questions in the exam will be strictly based on the syllabus prescribed for class 10th. Here are some important topics they need to prepare for the JEXPO exam-
Mathematics: Transversal & Mid-Point Theorem, Profit & Loss, Statistics, Real Numbers, Laws of Indices, Graph, Co-ordinate Geometry – Distance Formula, Linear Simultaneous Equations, Properties of
Parallelogram, Polynomial, Factorization, Theorems on Area, Construction of a Parallelogram whose measurement of one angle is given and equal in area of a Triangle, Circumference of Circle, Theorems
on concurrence, Area of circular region, Co-ordinate Geometry – Internal and External Division of Straight Line Segment, Co-ordinate Geometry – Area of Triangular Region, Logarithm, Quadratic
Equations with one variable, Construction of a Triangle equal in area of a quadrilateral, Area & Perimeter of Triangle & Quadrilateral shaped region, Simple Interest, Theorems related to circle,
Rectangular Parallelopiped or Cuboid, Ratio and Proportion, Compound Interest and Uniform Rate of Increase or Decrease, Theorems related to Angles in a Circle, Right Circular Cylinder, etc.
Physical Sciences: Mole Concept, Matter – Structure and Properties, Measurement, Force & Motion, Atomic Structure, Solution, Acids, Power & Energy, Sound, Heat, Bases & Salts, Work, Separation of
Components of Mixtures, Water, Light, Concerns about our Environment, Behavior of Gases, Periodic Table and Periodicity of the Properties of Elements, Ionic and Covalent Bonding, Chemical
Calculations, etc.
| {"url":"https://www.99entranceexam.in/jexpo-exam-pattern/","timestamp":"2024-11-09T03:34:42Z","content_type":"text/html","content_length":"126484","record_id":"<urn:uuid:0370e74c-3b3c-4fd3-abe4-77b2fb795081>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00620.warc.gz"} |
Kinematics(12th Grade > Physics ) Questions and answers for exam Preparation
12th Grade > Physics
Circular Kinematics
Total Questions : 41 | Page 3 of 5 pages
Question 21.
A balloon starts rising from the surface of the Earth. The ascension rate is constant and equal to
v0. Due to the wind the balloon gathers the horizontal velocity component vx = a·y, where a is a constant and y is the height of ascent. Find the total, tangential, and normal accelerations of the balloon.
A. ar = a·v0; at = a·v0·√(1 + (a·y/v0)²); aN = a·v0
B. ar = a·v0; at = a·v0; aN = 0
C. ar = a·v0; at = a²y/√(1 + a²y²/v0²); aN = a·v0/√(1 + a²y²/v0²)
D. ar = a²y/√(1 + a²y²/v0²); at = a·v0; aN = a·v0
Answer: Option C. -> ar = a·v0; at = a²y/√(1 + a²y²/v0²); aN = a·v0/√(1 + a²y²/v0²)
The path of the balloon will look something like this:
After t sec the balloon would have gone a height of y = v0·t,
then at that very instant the balloon's horizontal velocity
will be vx = a·y = a·v0·t.
The resultant velocity is v = √(v0² + vx²) = v0·√(1 + a²y²/v0²).
Now we know the total acceleration is ar = dvx/dt = a·v0 (the vertical velocity is constant), the tangential acceleration is at = dv/dt = a²y/√(1 + a²y²/v0²), and the normal acceleration follows from aN = √(ar² − at²) = a·v0/√(1 + a²y²/v0²).
Question 22. A hollow vertical cylinder of radius R and height h has smooth internal surface. A small particle is placed in contact with the inner side of the upper rim at a point P. It is given a
horizontal speed v0 tangential to the rim. It leaves the lower rim at point Q, vertically below P. The number of revolutions made by the particle will be
A. h/(2πR)
B. v0/√(2gh)
C. 2πR/h
D. (v0/(2πR))·√(2h/g)
Answer: Option D. -> (v0/(2πR))·√(2h/g)
Since the body has no initial velocity in the vertical direction, az = −g and the vertical displacement is z = −h.
Time taken to reach the bottom: t = √(2h/g).
Let T be the time taken to complete one revolution: T = 2πR/v0.
Number of revolutions = t/T = (v0/(2πR))·√(2h/g).
Question 23. Find the maximum velocity for skidding for a car moved on a circular track of radius 100 m. The coefficient of friction between the road and tyre is 0.2
A. 0.14 m/s
B. 140 m/s
C. 1.4 km/s
D. 14 m/s
Answer: Option D. -> 14 m/s
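As a quick check: at the skidding limit friction supplies the centripetal force, so μmg = mv²/R, giving v_max = √(μgR) = √(0.2 × 9.8 × 100) ≈ 14 m/s (taking g ≈ 9.8 m/s²), which matches option D.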
Question 24. A point P moves in counter-clockwise direction on a circular path as shown in the figure. The movement of 'P' is such that it sweeps out a length s = t³ + 5, where s is in meters and t is
in seconds. The radius of the path is 20 m. The acceleration of 'P' when t = 2 s is nearly
A. 13 m/s2
B. 12 m/s2
C. 7.2 m/s2
D. 14 m/s2
Answer: Option D. -> 14 m/s2
Speed, v = ds/dt = 3t², and rate of change of speed = dv/dt = 6t
∴ tangential acceleration at t = 2 s: at = 6 × 2 = 12 m/s²
At t = 2 s, v = 3 × 2² = 12 m/s, ∴ centripetal acceleration ac = v²/r = 12²/20 = 7.2 m/s²
∴ net acceleration = √(at² + ac²) = √(12² + 7.2²) ≈ 14 m/s²
Question 25. Which of these is a possible direction of acceleration for a point on a car that is going around a circular track and is speeding up?
A. a
B. b
C. c
D. d
Answer: Option D. -> d
The net acceleration is always directed between the radially inward and tangential directions.
Question 26.
A particle of mass M moves with constant speed along a circular path of radius r under the action of a force F. Its speed is
A. √(rF/M)
B. √(F/r)
C. √(F/(M·r))
D. √(F·M/r)
Answer: Option A. -> √(rF/M)
Question 27.
A body takes the following path and moves with constant speed. If aA and aB
are the magnitudes of its radial acceleration at A and B, then:
A. aA = aB
B. aA < aB
C. aA > aB
D. none of these
Question 28.
The speed of the truck is 40 m/s; after 10 seconds its speed decreases to 20 m/s. Its acceleration is
A. -1m/s2
B. -2m/s2
C. -4m/s2
D. -5m/s2
Answer: Option B. -> -2m/s2
Answer: Option C. -> km/hr
A. 10m/s
B. 10km/s
C. 10m/hr
D. 10km/hr
Answer: Option A. -> 10m/s
| {"url":"https://lakshyaeducation.in/topic/kinematics/16263676842126082ae7406/3/","timestamp":"2024-11-01T19:30:29Z","content_type":"text/html","content_length":"200137","record_id":"<urn:uuid:b46f8b70-250c-4e3a-ba7c-6a5f849ed89c>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00680.warc.gz"} |
Giving to Kent
An anonymous user
gave £50.00
An anonymous user
gave £80.00
An anonymous user
gave £50.00
An anonymous user
gave £2755.45
An anonymous user
gave £37.50
An anonymous user
gave £12.50
An anonymous user
gave £200.00
An anonymous user
gave £6.25
An anonymous user
gave £18.75
An anonymous user
gave £100.00
An anonymous user
gave £125.00
An anonymous user
gave £31.25
An anonymous user
gave £800.00
An anonymous user
gave £63.75
An anonymous user
gave £10.00
An anonymous user
gave £5.00
An anonymous user
gave £13.75
An anonymous user
gave £5000.00
An anonymous user
gave £225.00
An anonymous user
gave £1250.00
An anonymous user
gave £5000.00
An anonymous user
gave £2500.00
An anonymous user
gave £20000.00
An anonymous user
gave £1000.00
An anonymous user
gave £250.00
An anonymous user
gave £150.00
An anonymous user
gave £250.00
An anonymous user
gave £30000.00
An anonymous user
gave £750.00
An anonymous user
gave £1000.00
An anonymous user
gave £200.00
An anonymous user
gave £1450.00
Michimasa Kobayashi
All of us face a difficult time in our lives but I believe that someone cares about you and gives hands to you as long as you never give up and are doing your best you can every moment.
An anonymous user
gave £75.00
Flavio Iorio
United Kingdom
All should have access to further education and to better themselves and society
Robert Harris-Jones
An anonymous user
gave £3.00
William White
gave £500.00
United Kingdom
Tanya and I met at UKC in 1977 and have been married since 1981. We've been blessed by the University and we like to bless others when we can.
William & Denise Pettit
United Kingdom
Brian Macfarland
United Kingdom
Three of the best years of my life. Happy to help others feel the same about their time at UKC.
David Line
United Kingdom
An anonymous user
gave £3.00
David Hooper
gave £100.00
United Kingdom
John Partridge
United Kingdom
Alda Daci
United Kingdom
Janet Montefiore
gave £25.00 monthly
United Kingdom
An anonymous user
gave £5.00 monthly
Ann broadhead
Ashford Kent
United Kingdom | {"url":"https://giving.kent.ac.uk/donor-wall/?page=3","timestamp":"2024-11-03T15:50:35Z","content_type":"text/html","content_length":"205871","record_id":"<urn:uuid:548988c2-9ed0-4452-9916-19954b800a8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00809.warc.gz"} |
Nonparametric Bayesian Methods: Models, Algorithms, and Applications
This tutorial took place at the 2018 Machine Learning Summer School (MLSS) at the Universidad Torcuato Di Tella, Buenos Aires, Argentina. See this link for the latest versions and videos of all tutorials.
Part 1: Tuesday, June 19, 5:15 PM–6:15 PM
Part 2: Wednesday, June 20, 9:00 AM–10:30 AM
Part 3: Wednesday, June 20, 11:00 AM–12:30 PM
Professor Tamara Broderick
This tutorial introduces nonparametric Bayes (BNP) as a tool for modern data science and machine learning. BNP methods are useful in a variety of data analyses---including density estimation without
parametric assumptions and clustering models that adaptively determine the number of clusters. We will demonstrate that BNP allows the data analyst to learn more from a data set as the size of the
data set grows and see how this feat is accomplished. We will describe popular BNP models such as the Dirichlet process, Chinese restaurant process, Indian buffet process, and hierarchical BNP
models---and how they relate to each other.
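As a small taste of the kind of model involved, here is a minimal Chinese-restaurant-process simulation in Python — a generic sketch following the standard CRP definition, not part of the tutorial materials:

import random

def crp(n, alpha):
    """Sample table assignments for n customers from a CRP with concentration alpha."""
    counts = []        # counts[k] = number of customers seated at table k
    assignments = []
    for i in range(n):
        # customer i joins existing table k with probability counts[k] / (i + alpha),
        # or opens a new table with probability alpha / (i + alpha)
        r = random.uniform(0, i + alpha)
        acc = 0.0
        for k, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[k] += 1
                assignments.append(k)
                break
        else:
            counts.append(1)
            assignments.append(len(counts) - 1)
    return assignments

The number of occupied tables grows with the data, which is exactly the "learn more as the data set grows" behaviour described above.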
Working knowledge of Bayesian data analysis. Know how to use Bayes' Theorem to calculate a posterior for both discrete and continuous parametric distributions. Have a basic knowledge of Markov chain
Monte Carlo (especially Gibbs) sampling for posterior approximation.
What we won't cover
Gaussian processes are an important branch of nonparametric Bayesian modeling, but we won't have time to cover them here. We'll be focusing on the discrete, or Poisson point process, side of
nonparametric Bayesian inference. | {"url":"https://tamarabroderick.com/tutorial_2018_mlss_ba.html","timestamp":"2024-11-02T11:24:15Z","content_type":"text/html","content_length":"4628","record_id":"<urn:uuid:6ba15db7-bc3d-4ff1-b13a-a457cd6fb184>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00632.warc.gz"} |
cv::flann::GenericIndex< Distance >
The FLANN nearest neighbor index class. This class is templated with the type of elements for which the index is built. More...
template<typename Distance>
class cv::flann::GenericIndex< Distance >
The FLANN nearest neighbor index class. This class is templated with the type of elements for which the index is built.
Distance functor specifies the metric to be used to calculate the distance between two points. There are several Distance functors that are readily available:
cv::cvflann::L2_Simple - Squared Euclidean distance functor. This is the simpler, unrolled version. This is preferable for very low dimensionality data (eg 3D points)
cv::flann::L2 - Squared Euclidean distance functor, optimized version.
cv::flann::L1 - Manhattan distance functor, optimized version.
cv::flann::MinkowskiDistance - The Minkowski distance functor. This is highly optimised with loop unrolling. The computation of squared root at the end is omitted for efficiency.
cv::flann::MaxDistance - The max distance functor. It computes the maximum distance between two vectors. This distance is not a valid kdtree distance, it's not dimensionwise additive.
cv::flann::HammingLUT - Hamming distance functor. It counts the bit differences between two strings using a lookup table implementation.
cv::flann::Hamming - Hamming distance functor. Population count is performed using library calls, if available. Lookup table implementation is used as a fallback.
cv::flann::Hamming2 - Hamming distance functor. Population count is implemented in 12 arithmetic operations (one of which is multiplication).
cv::flann::DNAmmingLUT - Adaptation of the Hamming distance functor to DNA comparison. As the four bases A, C, G, T of the DNA (or A, G, C, U for RNA) can be coded on 2 bits, it counts the bits pairs
differences between two sequences using a lookup table implementation.
cv::flann::DNAmming2 - Adaptation of the Hamming distance functor to DNA comparison. Bases differences count are vectorised thanks to arithmetic operations using standard registers (AVX2 and AVX-512
should come in a near future).
cv::flann::HistIntersectionDistance - The histogram intersection distance functor.
cv::flann::HellingerDistance - The Hellinger distance functor.
cv::flann::ChiSquareDistance - The chi-square distance functor.
cv::flann::KL_Divergence - The Kullback-Leibler divergence functor.
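Although GenericIndex itself is a C++ template, the closely related FLANN index is also reachable from OpenCV's Python bindings; the following rough sketch gives a feel for building and querying an index (the parameter dictionary values and the availability of cv2.flann_Index depend on how OpenCV was built, so treat the details as assumptions):

import numpy as np
import cv2

data = np.random.rand(1000, 3).astype(np.float32)     # low-dimensional points, the case suggested for L2_Simple
queries = np.random.rand(5, 3).astype(np.float32)

# algorithm=1 requests a KD-tree index; the distance defaults to L2
index = cv2.flann_Index(data, {"algorithm": 1, "trees": 4})
indices, dists = index.knnSearch(queries, 3, params={})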
Although the provided implementations cover a vast range of cases, it is also possible to use a custom implementation. The distance functor is a class whose operator() computes the distance between
two features. If the distance is also a kd-tree compatible distance, it should also provide an accum_dist() method that computes the distance between individual feature dimensions.
In addition to operator() and accum_dist(), a distance functor should also define the ElementType and the ResultType as the types of the elements it operates on and the type of the result it
computes. If a distance functor can be used as a kd-tree distance (meaning that the full distance between a pair of features can be accumulated from the partial distances between the individual
dimensions) a typedef is_kdtree_distance should be present inside the distance functor. If the distance is not a kd-tree distance, but it's a distance in a vector space (the individual dimensions of
the elements it operates on can be accessed independently) a typedef is_vector_space_distance should be defined inside the functor. If neither typedef is defined, the distance is assumed to be a
metric distance and will only be used with indexes operating on generic metric distances. | {"url":"https://docs.opencv.org/3.4/db/d18/classcv_1_1flann_1_1GenericIndex.html","timestamp":"2024-11-07T13:58:23Z","content_type":"application/xhtml+xml","content_length":"45136","record_id":"<urn:uuid:745fcf35-75da-4bc5-b803-1d0e3e2c36ef>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00516.warc.gz"} |
The Stacks project
Definition 13.27.4. Let $\mathcal{A}$ be an abelian category. Let $A, B \in \mathop{\mathrm{Ob}}\nolimits (\mathcal{A})$. A degree $i$ Yoneda extension of $B$ by $A$ is an exact sequence
\[ E : 0 \to A \to Z_{i - 1} \to Z_{i - 2} \to \ldots \to Z_0 \to B \to 0 \]
in $\mathcal{A}$. We say two Yoneda extensions $E$ and $E'$ of the same degree are equivalent if there exists a commutative diagram
\[ \xymatrix{ 0 \ar[r] & A \ar[r] & Z_{i - 1} \ar[r] & \ldots \ar[r] & Z_0 \ar[r] & B \ar[r] & 0 \\ 0 \ar[r] & A \ar[r] \ar[u]^{\text{id}} \ar[d]_{\text{id}} & Z''_{i - 1} \ar[r] \ar[u] \ar[d] & \ldots \ar[r] & Z''_0 \ar[r] \ar[u] \ar[d] & B \ar[r] \ar[u]_{\text{id}} \ar[d]^{\text{id}} & 0 \\ 0 \ar[r] & A \ar[r] & Z'_{i - 1} \ar[r] & \ldots \ar[r] & Z'_0 \ar[r] & B \ar[r] & 0 } \]
where the middle row is a Yoneda extension as well.
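For instance, in degree $i = 1$ a Yoneda extension of $B$ by $A$ is simply a short exact sequence $0 \to A \to Z_0 \to B \to 0$, i.e., an extension of $B$ by $A$ in the classical sense.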
| {"url":"https://stacks.math.columbia.edu/tag/06XT","timestamp":"2024-11-11T18:06:19Z","content_type":"text/html","content_length":"14402","record_id":"<urn:uuid:e7f4a38f-943a-46d4-9e22-d511066580f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00729.warc.gz"} |
the truth value of an array with more than one element is ambiguous. use a.any() or a.all()
by Admin | Oct 16, 2023
The truth value of an array with more than one element is ambiguous. use a.any() or a.all(): When working with arrays in Python, specifically with libraries such as NumPy or pandas, a common error
that developers encounter is the ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() error. This error occurs when you attempt to evaluate an array
with more than one element in a boolean context. The error message is Python’s way of telling you that it doesn’t know how to interpret an array with multiple elements as a single truth value.
Key Takeaways:
Understanding Arrays in Python
Basic Understanding of Arrays
Arrays are data structures that can hold more than one value at a time. They are a collection of variables that are accessed with an index number. In Python, arrays can be created using the NumPy
library which provides a high-performance multidimensional array object, and tools for working with these arrays.
Truth Value of an Array with Multiple Elements
When an array has more than one element, its truth value becomes ambiguous. In Python, a single value can be evaluated as either True or False. However, an array with multiple elements cannot be
implicitly evaluated as True or False because it contains more than one value.
Evaluation Type | Single Element Array | Multi-Element Array
Implicit Boolean Evaluation | Allowed | Not Allowed
Explicit Boolean Evaluation (using a.any() or a.all()) | Allowed | Allowed
Encountering the Error
Common scenarios where this error occurs include conditional statements, looping through arrays, and other control flow structures where a boolean evaluation is required.
For example, consider the following code snippet:
import numpy as np
arr = np.array([True, False, True])
if arr:
    print("The array is evaluated as True.")
else:
    print("The array is evaluated as False.")
Executing the above code will raise the ValueError, as Python is unable to determine the truth value of the multi-element array.
Understanding the a.any() and a.all() Methods
Detailed Explanation of a.any() and a.all() Methods
The a.any() and a.all() methods provide a way to evaluate the truth value of an array with multiple elements. The a.any() method returns True if at least one element in the array is True, while the
a.all() method returns True only if all elements in the array are True.
Method | Return Value if at least one element is True | Return Value if all elements are True
a.any() | True | True
a.all() | False | True
Resolving the Truth Value Ambiguity
By using these methods, the truth value ambiguity can be resolved. These methods provide a way to explicitly state how the array should be evaluated in a boolean context.
if arr.any():
    print("At least one element in the array is True.")

if arr.all():
    print("All elements in the array are True.")
Alternative Methods to a.any() and a.all()
Explanation of Alternative Methods
While a.any() and a.all() are straightforward solutions to resolving the truth value ambiguity, other methods exist within the NumPy library that can also be employed. Two of these methods are
np.logical_and() and np.logical_or() which can be used to evaluate the truth values of two arrays element by element.
Method | Description | Use Case
np.logical_and() | Element-wise logical AND operation | When needing to compare two arrays element by element and return a new array with Boolean values
np.logical_or() | Element-wise logical OR operation | Similar to logical AND, but returns True if at least one of the elements is True
Code Examples Showcasing these Alternatives
Various code examples can further elaborate on how these methods can be employed to resolve the truth value ambiguity.
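For instance, a minimal sketch (the arrays are chosen arbitrarily for illustration):

import numpy as np

a = np.array([True, False, True])
b = np.array([True, True, False])

print(np.logical_and(a, b))  # [ True False False]
print(np.logical_or(a, b))   # [ True  True  True]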
Practical Applications
Handling the truth value error proficiently is crucial in many real-world scenarios, especially in data analysis and other fields where large datasets are handled.
Real-World Scenarios
• Data Analysis: When analyzing large datasets, understanding the truth value of arrays is fundamental to making correct interpretations and decisions.
• Machine Learning: In machine learning, arrays are often used to hold data. Understanding how to evaluate these arrays in boolean contexts is crucial.
Impact on Programming Efficiency
Mastering the handling of the truth value error can significantly impact one’s programming efficiency. It ensures that the code runs smoothly without unexpected errors, which in turn speeds up the
development process.
Frequently Asked Questions
1. What causes the truth value error in Python?
□ The error occurs when attempting to evaluate an array with more than one element in a boolean context without specifying how the evaluation should be done.
2. How can the a.any() and a.all() methods resolve this error?
□ The a.any() method returns True if at least one element in the array is True, while the a.all() method returns True only if all elements in the array are True.
3. Are there other methods besides a.any() and a.all() to resolve the truth value error?
□ Yes, methods like np.logical_and() and np.logical_or() can also be used to handle array evaluations.
4. Where is this error commonly encountered?
□ Common scenarios include conditional statements, looping through arrays, and other control flow structures where a boolean evaluation is required.
5. Why is mastering the handling of this error important?
□ Proficient handling of this error ensures accurate data computations, especially in fields like data analysis and machine learning, leading to more efficient programming. | {"url":"https://www.tracedynamics.com/the-truth-value-of-an-array-with-more-than-one-element-is-ambiguous-use-a-any-or-a-all/","timestamp":"2024-11-14T01:50:47Z","content_type":"text/html","content_length":"192798","record_id":"<urn:uuid:e3f10048-bdaf-4562-87b6-a74f0e989f56>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00182.warc.gz"} |
eMathHelp Math Solver - Free Step-by-Step Calculator
Solve math problems step by step
This advanced calculator handles algebra, geometry, calculus, probability/statistics, linear algebra, linear programming, and discrete mathematics problems, with steps shown.
At eMathHelp, we provide a wealth of mathematical calculators designed to simplify your daily computations, whether you need to tackle complex equations or perform fundamental math operations.
Our Calculator Categories
Explore our Algebra Calculator, designed to help solve equations, factor polynomials, and more, making algebra more accessible to you.
Our Geometry Calculator is your handy tool for working with triangles.
Solve pre-calculus problems with our specialized calculator, helping you master foundational math concepts before diving into advanced mathematics.
Improve your calculus knowledge with our Calculus Calculator, which makes complex operations like derivatives, integrals, and differential equations easy.
Perform matrix operations and solve systems of linear equations with our Linear Algebra Calculator, essential for fields like physics and engineering.
Tackle discrete mathematical problems confidently with our specialized calculator, ideal for computer science, cryptography, and more.
Make data analysis a breeze with our Probability and Statistics Calculator, which helps you extract meaningful insights from your data.
Optimize linear objective functions easily using our Linear Programming Calculator, which is valuable in resource allocation and economics.
Who Are We?
eMathHelp is a team of dedicated math enthusiasts who believe everyone should have access to powerful mathematical tools. Our mission is to make math more approachable and enjoyable for people of all
ages and backgrounds.
Why Choose Our Calculators?
• Versatility
We offer a wide range of calculators for math, including algebraic and calculus tools, making it your one-stop destination for all your mathematical needs.
• Simplicity
Our user-friendly interface ensures that even complex calculations can be performed effortlessly, making math accessible to everyone.
• Accuracy
Our calculator is designed to provide precise results, helping you save time and eliminate errors.
• Diverse Categories of Calculators
We cover various mathematical concepts and topics, from simple to complex.
Our Most Popular Math Calculators
Solve complex integration problems, including improper integrals, quickly.
Efficiently optimize resources by solving linear programming problems.
Easily find antiderivatives by applying different techniques.
Find the main properties of functions easily.
Quickly analyze quadratic equations.
What types of calculations can the online math calculator perform?
Our online math calculator offers a wide range of operations to perform. You can use it to solve equations, find derivatives, factor expressions, and more.
Are the online math calculators free to use?
Absolutely! We provide free calculators for math, ensuring you can access powerful mathematical tools without cost. No subscriptions or hidden fees.
How many online math calculators do you offer?
We offer a wide range of online math calculators covering a variety of math topics.
Do the calculators provide step-by-step solutions?
Many of our calculators provide detailed, step-by-step solutions. This will help you better understand the concepts that interest you. | {"url":"https://www.emathhelp.net/en/","timestamp":"2024-11-13T02:37:28Z","content_type":"application/xhtml+xml","content_length":"74021","record_id":"<urn:uuid:7445c813-bae4-44e9-b6db-d37735c42d40>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00821.warc.gz"} |
Example: Accumulated values
In this example, Budget and Revenue are the only two measures in the crosstab - m1 and m2 respectively. Also, we have a Time dimension - on the month level - along the vertical axis of the crosstab.
The scenario is, that we are at the end of September 2021, looking back at the Revenue achieved so far, compared to Budget of the full year.
The two calculations, 'Acc Budget' and 'Acc Revenue' are both added as calculated columns to the crosstab.
Acc Budget:
sum(d1, d1:0, m1)
Acc Revenue:
sum(d1, d1:0, m2)
In both calculations, the 'accumulation' comes from the row reference of 'd1:0' - which means that each calculated value is calculated as the sum 'from the first row to the current row'.
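In other words, each row's value is an ordinary running total of the measure from the first row down to the current row; as a plain-Python sketch of the same semantics (the numbers are illustrative only, and this is not TARGIT syntax):

from itertools import accumulate

revenue = [120, 90, 150]                 # m2 for the first three months, made up for illustration
acc_revenue = list(accumulate(revenue))  # [120, 210, 360]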
The line chart is a copy of the crosstab turned into a line chart. Furthermore, with the Visibility option from the Properties Smartpad, the two original measures have been made invisible.
3 comments
• Hi Ole,
I can't find the "Hide if empty result" check box when I want to make a nice Akk..sum. We use Targit 2022.11.29002
Best regards Jette
• Hi Jette,
I presume you want to hide the 2023 Akk.sum when there are no more values in the other column?
You will need to add a 'Visibility agent' to your Akk.sum calculation in the line chart.
Note that I cannot tell from your screenshot whether you are using measures, calculated measures or calculated columns. In my example below, I have one measure (No of Sales) and one calculation
(Akk sum) added 'As a new measure'.
Table on the left is what you have. Table on the right is what we want to achieve:
I have taken these steps:
Viewed as a line chart (No of Sales measure has been hidden):
BR / Ole
• Hi Ole,
Thats perfect - thanks a lot :-) It works for me now ..
BR / Jette
| {"url":"https://community.targit.com/hc/en-us/articles/6660219081105-Example-Accumulated-values?sort_by=created_at","timestamp":"2024-11-05T15:42:29Z","content_type":"text/html","content_length":"41940","record_id":"<urn:uuid:6fa74761-3048-411b-93f0-b9366b871ed9>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00471.warc.gz"} |
James O. C. Ezeilo
James Okoye Chukuka Ezeilo
place: Nigeria
thesis: Some Topics in the Theory of Ordinary Non-linear differential Equations of the Third Order
email: EZEILO@science.uniswa.sz
Also see the web page: Who are the greatest Black Mathematicians?
James Ezeilo took his B.Sc. of London University in 1953 with First Class Hons and the M.Sc. (also of London University) in 1955, and received his Ph.D. from the University of Cambridge (Queens' College).
Professor James Ezeilo, with Chike Obi and Adegoke Olubummo, was one of a trio of black mathematicians who pioneered modern mathematics research in Nigeria; he is sometimes called the "father of
mathematics" in Nigeria. Dr. James Ezeilo's early research dealt mainly with the problem of stability, boundedness, and convergence of solutions of third order ordinary differential equations. Apart
from extending known results and techniques to higher order equations, the main thrust of his work was the construction of Lyapunov-like functions, which he did elegantly and used to study the
qualitative properties of solutions. In addition he was a pioneer in the use of Leray-Schauder degree type arguments to obtain existence results for periodic solutions of ordinary differential equations.
James Okoye Chukuka Ezeilo received the degrees of DSc honoris causa from the University of Maiduguri (November 1989) and the University of Nigeria, Nsukka (April 1996), and the degree of DTech honoris causa
from the Federal University of Technology, Akure (November 1995).
Special issue in honour of Professor James O. C. Ezeilo: J. Nigerian Math. Soc. {11} (1992), no. 3. Nigerian Mathematical Society, University of Ibadan, Department of Mathematics, Ibadan, 1992. pp.
i--iv and 1--146.
Adichie, J. N. Professor J. O. C. Ezeilo: More than three decades of active academic work. Special issue in honour of Professor James O. C. Ezeilo. J. Nigerian Math. Soc. 11 (1992), no. 3, i--iv.
Selected Research
70. Ezeilo, J.O.C. Non-resonant oscillations for some third order differential equations II, J. Nigerian Math. Soc. 8 (1989), 25-48 (with J.O.C.)
69. Ezeilo, J. O. C.; Nkashama, M. N. Resonant and nonresonant oscillations for some third order nonlinear ordinary differential equations. Nonlinear Anal. 12 (1988), no. 10, 1029--1046.
68. Ezeilo, J. O. C.; Onyia, J. Nonresonant oscillations for some third-order differential equations. J. Nigerian Math. Soc. 3 (1984), 83--96 (1986).
67. Ezeilo, J. O. C. An application of a theorem of Güssefeldt in the proof of the existence of periodic solutions of a certain class of differential equations. J. Nigerian Math. Soc. 2 (1983),
66. Ezeilo, J. O. C. Uniqueness theorems for periodic solutions of certain fourth and fifth order differential systems. J. Nigerian Math. Soc. 2 (1983), 55--59.
65. Ezeilo, J. O. C. Some properties of the differential equation $f(u)=d^{p}u/dt^{p}$ of arbitrary order $p\geq 1$. Qualitative theory of differential equations, Vol. I, II (Szeged, 1979), pp.
231--241, Colloq. Math. Soc. János Bolyai, 30, North-Holland, Amsterdam-New York, 1981.
64. Ezeilo, J. O. C. Periodic solutions of certain sixth order differential equations. J. Nigerian Math. Soc. 1 (1982), 1--9.
63. Ezeilo, J. O. C. A Leray–Schauder technique for the investigation of periodic solutions of the equation $\ddot x+x+\mu x^{2}=\varepsilon \cos \omega t$ $(\varepsilon \neq 0)$. Acta
Math. Acad. Sci. Hungar. 39 (1982), no. 1-3, 59--63.
62. Ezeilo, J. O. C. Existence of periodic solutions of a certain system of fifth-order differential equations. Ninth international conference on nonlinear oscillations, Vol. 1 (Kiev, 1981),
420--422, 454, "Naukova Dumka", Kiev, 1984. 34C25
61. Ezeilo, James O. C. On the existence of periodic solutions of certain third order nondissipative differential systems. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 66 (1979), no.
2, 126--135. 34C25
60. Ezeilo, James O. C. Extension of certain instability theorems for some fourth and fifth order differential equations. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 66 (1979), no. 4,
239--242. 34D05 (34A30)
59. Ezeilo, James O. C. A further result on the existence of periodic solutions of the equation $\dddot x+\psi (\dot x)\ddot x+\varphi (x)\dot x+\theta (t,x,\dot x,\ddot x)=p(t)$ with a
bounded $\theta $. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 65 (1978), no. 1-2, 51--57 (1979). 34C25
58. Ezeilo, James O. C. Periodic solutions of certain third order differential equations of the nondissipative type. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 63 (1977), no. 3-4,
212--224 (1978).
57. Ezeilo, James O. C. Periodic solutions of a certain fourth order differential equation. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 63 (1977), no. 3-4, 204--211 (1978).
56. Ezeilo, J. O. C. An instability theorem for a certain sixth order differential equation. J. Austral. Math. Soc. Ser. A 32 (1982), no. 1, 129--133.
55. Ezeilo, James O. C.; Tejumola, Haroon O. Periodic solutions of a certain fourth order differential equation. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 66 (1979), no. 5,
54. Ezeilo, James O. C. Further results on the existence of periodic solutions of a certain third order differential equation . Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 63 (1977),
no. 6, 493--503 (1978).
53. Ezeilo, James O. C. Further results on the existence of periodic solutions of a certain third-order differential equation. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 64 (1978),
no. 1, 48--58.
52. Ezeilo, J. O. C. A further instability theorem for a certain fifth-order differential equation. Math. Proc. Cambridge Philos. Soc. 86 (1979), no. 3, 491--493.
51. Ezeilo, J. O. C. Instability theorems for certain fifth-order differential equations . Math. Proc. Cambridge Philos. Soc. 84 (1978), no. 2, 343--350.
50. Ezeilo, J. O. C. An instability theorem for a certain fourth order differential equation . Bull. London Math. Soc. 10 (1978), no. 2, 184--185.
49. Ezeilo, J. O. C.; Tejumola, H. O. Periodic solutions of certain fifth order differential equations . Nonlinear vibration problems, No. 15 (Proc. Sixth Internat. Conf. Nonlinear Oscillations,
Poznań, 1972, Part II), pp. 75--84. PWN---Polish Sci. Publ., Warsaw, 1974. 34C25
48. Ezeilo, J. O. C. New properties of the equation $\dddot x+a\ddot x+b\dot x+h(x)=p(t,x,\dot x,\ddot x)$ for certain special values of the incrementary ratio $y^{-1}\{h(x+y)-h(x)\}$. Équations différentielles et
fonctionnelles non linéaires (Actes Conférence Internat. "Equa-Diff 73", Brussels/Louvain-la-Neuve, 1973), pp. 447--462. Hermann, Paris, 1973.
47. Ezeilo, J. O. C.; Tejumola, H. O. On the boundedness and the stability properties of solutions of certain fourth order differential equations . Ann. Mat. Pura Appl. (4) 95 (1973), 131--145.
46. Ezeilo, James O. C.; Tejumola, Haroon O. Further remarks on the existence of periodic solutions of certain fifth order non-linear differential equations . Atti. Accad. Naz. Lincei Rend. Cl. Sci.
Fis. Mat. Natur. (8) 58 (1975), no. 3, 323--327.
45. Ezeilo, James O. C.; Tejumola, Haroon O. Further results for a system of third order differential equations. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 58 (1975), no. 2,
44. Ezeilo, J. O. C. Periodic solutions of certain third order differential equations . Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 57 (1974), no. 1-2, 54--60 (1975).
43. Ezeilo, James O. C. Some new criteria for the existence of periodic solutions of a certain second order differential equation . Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 56
(1974), no. 5, 675--683.
42. Ezeilo, James O. C.; Tejumola, H. O. Boundedness theorems for certain third order differential equations . Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 55 (1973), 194--201 (1974).
41. Ezeilo, J. O. C. A further result on the existence of periodic solutions of the equation $\dddot x+a\ddot x+b\dot x+h(x)=p(t,x,\dot x,\ddot x)$. Math. Proc. Cambridge Philos. Soc. 77 (1975),
40. Ezeilo, James Okoye Chukuka Periodic solutions of a certain third order differential equation. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 54 (1973), 34--41.
39. Ezeilo, J. O. C. A generalization of some boundedness results by Reissig and Tejumola . J. Math. Anal. Appl. 41 (1973), 411--419.
38. Ezeilo, J. O. C. A boundedness theorem for a certain $n$th order differential equation . Ann. Mat. Pura Appl. (4) 88 (1971), 135--142.
37. Ezeilo, J. O. C. A boundedness theorem for a certain fourth order differential equation . J. London Math. Soc. (2) 5 (1972), 376--384.
36. Ezeilo, J. O. C.; Tejumola, H. O. Boundedness theorems for some fourth order differential equations . Ann. Mat. Pura Appl. (4) 89 (1971), 259--275.
35. Ezeilo, J. O. C.; Tejumola, H. O. A boundedness theorem for a certain fourth order differential equation . Ann. Mat. Pura Appl. (4) 88 (1971), 207--216.
34. Ezeilo, James Okoye Chukuka A generalization of a boundedness theorem for the equation $\dddot x+\alpha \ddot x+\phi_{2}(\dot x)+\phi_{3}(x)=\psi (t,x,\dot x,\ddot x)$. Atti Accad. Naz.
Lincei Rend. Cl. Sci. Fis. Mat. Natur. (8) 50 (1971), 424--431.
33. Ezeilo, J. O. C. A generalization of a theorem of Reissig for a certain third order different equation . Ann. Mat. Pura Appl. (4) 87 (1970), 349--356.
32. Ezeilo, J. O. C. On the boundedness of the solutions of the equation $\dddot x+a\ddot x+f(x)\dot x+g(x)=p(t)$. Ann. Mat. Pura Appl. (4) 80 1968 281--299.
31. Ezeilo, J. O. C. On the stability of the solutions of some third order differential equations . J. London Math. Soc. 43 1968 161--167.
30. Ezeilo, J. O. C. A generalization of a boundedness theorem for a certain third-order differential equation . Proc. Cambridge Philos. Soc. 63 1967 735--742.
29. Ezeilo, J. O. C. $n$-dimensional extensions of boundedness and stability theorems for some third order differential equations . J. Math. Anal. Appl. 18 1967 395--416.
28. Ezeilo, J. O. C. On the stability of solutions of certain systems of ordinary differential equations . Ann. Mat. Pura Appl. (4) 73 1966 17--26.
27.Ezeilo, J. O. C.; Tejumola, H. O. Boundedness and periodicity of solutions of a certain system of third-order non-linear differential equations . Ann. Mat. Pura Appl. (4) 74 1966 283--316.
26. Ezeilo, J. O. C. Corrigendum: A boundedness theorem for a certain third-order differential equation . Proc. London Math. Soc. (3) 17 1967 382--384.
25. Ezeilo, J. O. C. A generalization of a result of Demidovič on the existence of a limiting regime of a system of differential equations. Portugal. Math. 24 1965 65--82.
24. Ezeilo, J. O. C. Erratum: On the existence of almost periodic solutions of some dissipative second order differential equations . Ann. Mat. Pura Appl. (4) 74 1966 399.
23. Ezeilo, J. O. C. A note on the convergence of solutions of certain second order differential equations . Portugal. Math. 24 1965 49--58.
22. Ezeilo, J. O. C. A stability result for a certain third order differential equation . Ann. Mat. Pura Appl. (4) 72 1966 1--9.
21. Ezeilo, J. O. C. On the convergence of solutions of certain systems of second order differential equations . Ann. Mat. Pura Appl. (4) 72 1966 239--252.
20. Ezeilo, J. O. C. Some boundedness results for a fourth order nonlinear differential equation . 1964 Nonlinear Vibration Problems, 5, Second Conf. on Nonlinear Vibrations, Warsaw, 1962 pp.
252--257 Państwowe Wydawnictwo Naukowe, Warsaw
19. Ezeilo, J. O. C. An estimate for the solutions of a certain system of differential equations . Nigerian J. Sci. 1 1966 5--10.
18. Ezeilo, J. O. C. A stability result for the solutions of certain third order differential equations . J. London Math. Soc. 37 1962 405--409.
17. Ezeilo, J. O. C. Stability results for the solutions of some third and fourth order differential equations . Ann. Mat. Pura Appl. (4) 66 1964 233--249.
16. Ezeilo, J. O. C. On the existence of an almost periodic solution of a non-linear system of differential equations . Contributions to Differential Equations 3 1964 337--349.
15. Ezeilo, J. O. C. On the existence of almost periodic solutions of some dissipative second order differential equations . Ann. Mat. Pura Appl. (4) 65 1964 389--405.
14. Ezeilo, J. O. C. A boundedness theorem for some non-linear differential equations of the third order. J. London Math. Soc. 37 1962 469--474.
13. Ezeilo, J. O. C. An extension of a property of the phase space trajectories of a third order differential equation. Ann. Mat. Pura Appl. (4) 63 1963 387--397.
12. Ezeilo, J. O. C. An elementary proof of a boundedness theorem for a certain third order differential equation. J. London Math. Soc. 38 1963 11--16.
11. Ezeilo, J. O. C. A boundedness theorem for a differential equation of the third order. 1963 Qualitative methods in the theory of non-linear vibrations (Proc. Internat. Sympos. Non-linear
Vibrations, Vol. II, 1961) pp. 513--538 Izdat. Akad. Nauk Ukrain. SSR, Kiev
10. Ezeilo, J. O. C. Some results for the solutions of a certain system of differential equations. J. Math. Anal. Appl. 6 1963 387--393.
9. Ezeilo, J. O. C. Further results for the solutions of a third-order differential equation. Proc. Cambridge Philos. Soc. 59 1963 111--116.
8. Ezeilo, J. O. C. On the boundedness and the stability of solutions of some differential equations of the fourth order. J. Math. Anal. Appl. 5 1962 136--146.
7. Ezeilo, J. O. C. A boundedness theorem for a certain third-order differential equation. Proc. London Math. Soc. (3) 13 1963 99--124.
6. Ezeilo, J. O. C. A property of the phase-space trajectories of a third-order non-linear differential equation. J. London Math. Soc. 37 1962 33--41.
5. Ezeilo, J. O. C. A stability result for solutions of a certain fourth order differential equation. J. London Math. Soc. 37 1962 28--32.
4. Ezeilo, J. O. C. A note on a boundedness theorem for some third order differential equations. J. London Math. Soc. 36 1961 439--444.
3. Ezeilo, J. O. C. On the existence of periodic solutions of a certain third-order differential equation. Proc. Cambridge Philos. Soc. 56 1960 381--389.
2. Ezeilo, J. O. C. On the stability of solutions of certain differential equations of the third order. Quart. J. Math. Oxford Ser. (2) 11 1960 64--69.
1. Ezeilo, J. O. C. On the boundedness of solutions of a certain differential equation of the third order. Proc. London Math. Soc. (3) 9 (1959) 74--114.
The web pages
MATHEMATICIANS OF THE AFRICAN DIASPORA
are brought to you by
The Mathematics Department of
The State University of New York at Buffalo.
They are created and maintained by
Scott W. Williams
Professor of Mathematics | {"url":"https://www.math.buffalo.edu/mad/PEEPS/ezeilo_james.html","timestamp":"2024-11-04T18:44:13Z","content_type":"text/html","content_length":"19755","record_id":"<urn:uuid:90e01180-8b94-4d8a-ba01-aa696a6ae769>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00028.warc.gz"} |
Hash rate headaches
One of the more infuriating challenges when trying to do any sort of analysis of Bitcoin mining is to understand the current world-wide hashing rate and how this affects difficulty changes. The very
best "live update" websites seem to show the hash rate being all over the place. Large spikes occur frequently and it appears that huge amounts of hashing capacity have either come online or gone
offline. This explanation may appeal to conspiracy theorists, and will sometimes be the real cause, but there is a much more mundane reason most of the time (but nonetheless surprising).
Isn't mining set up to generate a block once every 10 minutes?
The first thing to look at is the way mining operates. The use of the SHA256 hash is intended to make it effectively impossible to predict what will or won't give a particular hash result without
actually computing the hash and seeing if it solved a block. Essentially each minor change in an attempt to solve a block gives a totally random effect, so trying one hash means that the next
attempt is neither more likely nor less likely to succeed! This highly random nature means that mining is a Poisson process. As each attempt to solve a block is unpredictable, in theory
everyone might mine all day and never solve a block. Similarly it's also possible that a single miner might find 6 blocks in succession. Both outcomes are possible, but both are staggeringly
A Poisson process
Poisson Processes have some very well understood characteristics. We can prediction how many events (finding blocks in our case) will occur in a particular period of time when we know what the
average number of events will be.
For Bitcoin mining where the difficulty isn't changing (the hash rates are constant) then we should see an average of 6 blocks per hour, 144 per day, or 2016 per 2 weeks.
Here's what the probabilities look like for a single hour:
Probabilities of blocks in any given hour
The chart shows the probability (between 0 and 1) for each block count in yellow and the cumulative probability in red. Even though we might expect 6 blocks every hour we will actually see 2 or fewer
blocks around once every 16 hours; we'll also see 10 or more blocks once every 24 hours too. It may seem surprising but once every 2.8 days we'll find an hour between consecutive blocks [2015-02-05:
This originally stated 16.8 days and not 2.8 days, but I had mistakenly multiplied by 6.]
What happens when difficulty levels are increasing?
When difficulty levels are increasing we see a change in the probabilities. Let's look at our original cumulative probability chart and add in a chart for where the average block finding rate is 10%
higher (we're seeing 6.6 blocks per hour):
Cumulative probabilities
Our original statistics are in red and the new ones are in blue. It's now more likely that we'll see a slightly higher block finding rate, but we still see much lower and much higher numbers
occurring quite frequently!
Hash rate calculators
Hash rate calculators have a huge problem as a result of the randomness shown by the statistics. All they can do is measure the event rate and make an estimate of the rate, based on the block finding
rates. They have no way of telling if the statistics for any given period of time were normal, low, high, very low, very high, etc.
Difficulty changes
Difficulty changes occur every 2016 blocks. They play a very interesting role in hash rate statistics because they're computed by taking the time it took to find the previous set of blocks and to set
the difficulty to a level where they would have taken 14 days to find.
Let's look at the probabilities for a 14 day period:
Probabilities of finding different numbers of blocks in a 14 day period
The scale here is different to our original graphs, and we're only looking at the numbers closer to the nominal 2016 blocks that should be found in 14 days. There are some interesting markers shown.
As we might expect, the most likely outcome is that we will see 2016 blocks found, but 10% of the time we'll see fewer than 1958. Similarly 10% of the time we'll see more than 2073. Of course the
difficulty will be reset after 2016 anyway but in that case it would be set about 2.8% higher than it should be. If we think about those two 10% numbers this means that once in every five difficulty changes we will see a
difficulty level that is either 2.8% higher or lower than it should be. In the next difficulty change period we will probably see that counteracted, but there's no actual guarantee since we may see
two consecutive high estimates.
We can also look at the 1% and 99% markers. They represent things that between them will happen about once every 2 years. Approximately once every 2 years the hash rate estimates at the difficulty
change will be out by more than 5% and so the difficulty will be set incorrectly by as much as 5%!
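The 1958/2073 figures quoted above are consistent with a simple normal approximation to the Poisson window — a rough sketch, not output from the original application:

import math

sigma = math.sqrt(2016)         # ~44.9 blocks
q10 = 2016 - 1.2816 * sigma     # ~1958 blocks: the 10% low mark
q90 = 2016 + 1.2816 * sigma     # ~2074 blocks: the 10% high mark
q01 = 2016 - 2.3263 * sigma     # ~1912 blocks: at the 1% mark the difficulty lands ~5% off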
What's really important here is that even if the worldwide hash rate was constant we'd still appear to see significant difficulty changes occurring every 2016 blocks!
As for hash rate estimation, doesn't it now look much more complex than it seemed it would?
Source code
This article was written with the help of data from a C language application that generates the probability distributions. The data was rendered into charts using Excel. The source code can be found
on github: https://github.com/dave-hudson/hash-rate-headaches | {"url":"https://davehudson.io/blog/2014-05-20-0000","timestamp":"2024-11-12T04:02:24Z","content_type":"text/html","content_length":"13599","record_id":"<urn:uuid:6d175e0c-1be9-446d-8f80-3b97a4a607ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00888.warc.gz"} |
Flowgorithm Array Types - TestingDocs.com
In this tutorial, we will learn about Flowgorithm Array Types. In general, Arrays are classified based on their dimensions. For example,
• One-Dimensional Array ( 1D Array)
• Multi-dimensional Arrays
□ Two-Dimensional Array ( 2D Array )
□ Higher dimensions.
Flowgorithm directly supports only one-dimensional arrays as of now.
Single Dimensional Array
Single dimensional arrays have only one dimension either a row or a column. The fruits array is an example of a single-dimensional array. We can refer to the single-dimensional array as a 1D array in
short form. We need only one index to refer to the array element.
Two-Dimensional Array
Two-dimensional arrays have both rows and columns. A chessboard is an example of a two-dimensional array. We can refer to the two-dimensional array as a 2D Array in short form. We need two index
variables to refer to each array element. One index for the row and another for the column.
For example:
Each square can be thought of as an array element and can be referred to as arr[i][j], where i is the index for the row and j is the index for the column.
In Chess game terminology, the row is called rank and the column is called a file on a chessboard.
A normal Chessboard with 8 rows and 8 columns.
Note that 2D array support is not directly available in Flowgorithm; for example, we cannot declare a two-dimensional array such as arr[3,3].
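Flowgorithm can export flowcharts to several text languages, and a common workaround for the missing 2D support (not covered in this tutorial) is to declare a single 1D array of size rows * cols and flatten the two indices into one. Sketched in C++, with made-up names, the mapping looks like this:

#include <iostream>

int main()
{
    const int rows = 8, cols = 8;       // e.g. a chessboard
    int board[rows * cols];             // 1D array simulating a 2D array

    // Element (i, j) is stored at the flat index i * cols + j.
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            board[i * cols + j] = 0;

    board[3 * cols + 4] = 1;            // "row 3, column 4"
    std::cout << board[3 * cols + 4] << std::endl;
    return 0;
}

The same flat-index idea can be used inside a Flowgorithm flowchart with a 1D Integer array and an Assign step that computes i * cols + j.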
Flowgorithm Arrays
We can also classify Flowgorithm Arrays based on the data types of the elements. Flowgorithm supports
• Integer Array
• Real Array
• String Array
• Boolean Array
| {"url":"https://www.testingdocs.com/flowgorithm-array-types/","timestamp":"2024-11-11T19:51:31Z","content_type":"text/html","content_length":"126298","record_id":"<urn:uuid:f08de077-4f59-47dc-a314-911339281c04>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00039.warc.gz"}
Sims 4 Slave Mod | fukagawakiyuukai
Sims 4 Slave Mod
To calculate the value of using equation (21), we need to use a numerical method to estimate the sum of the series . However, this series is infinite and we need to truncate it somehow. One way to do
this is to approximate the parameter with a rational number , where and are integers. Then, we can use equation (17) to rewrite the series as a finite sum: (34)
The advantage of using this method is that we can control the accuracy of the approximation by choosing the values of and . The smaller the denominator , the closer the rational number is to the
parameter . However, this also means that the finite series will have more terms to sum up, which increases the computational cost. Therefore, we need to balance the trade-off between accuracy and computational cost.
To find a suitable rational approximation for , we can use a technique called continued fraction expansion. This technique expresses a real number as a nested fraction of integers, such as: (35)
where , , , ... are positive integers. The continued fraction can be truncated at any stage to obtain a rational approximation for . For example, if we truncate after the first term, we get: (36)
which is a good approximation for .
We can use an algorithm called Euclid's algorithm to find the continued fraction expansion of any real number. The algorithm works by repeatedly applying the division algorithm to the numerator and
denominator of the fraction. For example, to find the continued fraction expansion of , we start with: (37) Then, we apply the division algorithm to get: (38) where is the quotient and is the
remainder. We repeat this process with and until we get a zero remainder. The quotients are then the coefficients of the continued fraction expansion.
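The algorithm just described is easy to code. The sketch below (C++, not from the original article) prints the continued fraction coefficients of a fraction p/q; since the specific numbers of the worked example that follows were lost from this copy of the text, the fraction 355/113 is only an illustrative stand-in:

#include <cstdio>
#include <vector>

// Continued fraction coefficients of p/q via Euclid's algorithm:
// the quotient at each step is the next coefficient, then we continue
// with the old denominator and the remainder.
std::vector<long long> continued_fraction(long long p, long long q)
{
    std::vector<long long> a;
    while (q != 0)
    {
        a.push_back(p / q);      // quotient
        long long r = p % q;     // remainder
        p = q;
        q = r;
    }
    return a;
}

int main()
{
    for (long long ai : continued_fraction(355, 113))
        std::printf("%lld ", ai);    // prints: 3 7 16
    std::printf("\n");
    return 0;
}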
Let us see an example of Euclid's algorithm applied to . We start with: (39) Then, we apply the division algorithm to get: (40) where and . We repeat this process with and to get: (41) where and . We
continue with and to get: (42) where and . Finally, we get a zero remainder with and : (43) where and . Therefore, the continued fraction expansion of is: (44)
Now that we have the continued fraction expansion of , we can truncate it at any stage to obtain a rational approximation for . The more terms we include, the better the approximation. For example,
if we truncate after the first term, we get: (45) which is the same as equation (36). If we truncate after the second term, we get: (46) which is a better approximation than equation (45). If we
truncate after the third term, we get: (47) which is even better than equation (46). And so on.
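The successive truncations, known as convergents, can be generated with the standard recurrence p_k = a_k * p_(k-1) + p_(k-2) and q_k = a_k * q_(k-1) + q_(k-2). The article's own formula for this (equation (48), discussed next) was lost from this copy of the text, so the C++ sketch below uses that standard form, and the coefficients {3, 7, 15, 1} (the start of the expansion of pi) are only an illustration:

#include <cstdio>
#include <vector>

// Convergents p_k/q_k of a continued fraction [a0; a1, a2, ...],
// using the standard recurrence with seeds p_{-1}=1, p_{-2}=0, q_{-1}=0, q_{-2}=1.
void print_convergents(const std::vector<long long>& a)
{
    long long p2 = 0, p1 = 1;    // p_{k-2}, p_{k-1}
    long long q2 = 1, q1 = 0;    // q_{k-2}, q_{k-1}
    for (long long ak : a)
    {
        long long p = ak * p1 + p2;
        long long q = ak * q1 + q2;
        std::printf("%lld/%lld\n", p, q);
        p2 = p1; p1 = p;
        q2 = q1; q1 = q;
    }
}

int main()
{
    print_convergents({3, 7, 15, 1});   // prints 3/1, 22/7, 333/106, 355/113
    return 0;
}

Each successive convergent is a better rational approximation, exactly as described above.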
We can use a formula to find the rational approximation for any truncation of the continued fraction expansion. The formula is: (48) where and are the numerator and denominator of the rational
approximation, and are the coefficients of the continued fraction expansion, and is the index of truncation. For example, if we truncate after the second term, we get: (49) which is the same as
equation (46). If we truncate after the third term, we get: (50) which is the same as equation (47). And so on. 061ffe29dd | {"url":"https://www.fukagawakiyuukai.com/forum/deisukatusiyon/sims-4-slave-mod","timestamp":"2024-11-12T12:04:41Z","content_type":"text/html","content_length":"887568","record_id":"<urn:uuid:2c9cc87f-f860-4546-94b5-241c948f4456>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00074.warc.gz"} |
How to use Java BigDecimal to display floating point numbers
The Java class BigDecimal makes it possible to process complex floating point numbers with precision. Once they’ve been created, you can apply different methods to them. The structure of the syntax
is always the same, so it’s easy to quickly familiarize yourself with the class.
What is Java BigDecimal?
Java BigDecimal allows you to accurately display and process complex floating point numbers of theoretically any size. This article will show you different methods for using this class, be it for
rounding, arithmetic or format conversion. You’ll also learn how to implement it for hashing and for precise, sophisticated comparisons.
A BigDecimal consists of an arbitrary-precision unscaled integer value and a 32-bit integer scale; the represented number is the unscaled value times 10^(-scale). When the scale is zero or positive, it is simply the number of digits after the decimal point; when the scale is negative, the unscaled value is multiplied by ten to the power of the scale's absolute value. The size of a BigDecimal is limited only by the computer's memory. Though this is more of a
theoretical consideration, as it’s unlikely that a program will create a number that exceeds its available memory. BigDecimal in Java is intended exclusively for floating point numbers, while the
BigInteger class is used for processing integers.
What is the class needed for?
Java BigDecimal’s level of precision isn’t needed for every scenario. But there are situations where its precision is invaluable. For example, it serves its purpose well in e-commerce transactions,
where calculations can be impacted by even the smallest decimal place. The class is also used to conduct precise static analyses. Programs used for the control and navigation of airplanes or rockets
rely on the class, as does the medical sector. In these and other fields, the level of precision offered by Java BigDecimal provides the best possible safety.
How is an object created?
To use BigDecimal in Java, you'll first need to import the class into your Java program. Once you've done that, you can declare an object of this class and pass the desired value as an argument to the appropriate constructor. Once you've completed this process, you can use BigDecimals in Java. Within the class, you'll find various methods, which we'll explain in more
detail in the following section. First, we’re going to import the class and declare two BigDecimal objects:
// Your Java program for the BigDecimal class
import java.math.BigDecimal;

public class BigDecimalExample {
    public static void main(String[] args) {
        // Create two new BigDecimals
        BigDecimal ExampleOne =
            new BigDecimal("1275936001.744297361");
        BigDecimal ExampleTwo =
            new BigDecimal("4746691047.132719503");
    }
}
Now you can use these objects with the methods for the BigDecimal class.
Examples for Java BigDecimal
Once you have created the objects, you can call different methods on them to perform operations. Let's look at a few examples to show you how this works. The output is printed using the Java method System.out.println().
Adding two BigDecimals
If you want to add two BigDecimals in Java, you need to use the add() method. To do this, store the two values that you want to calculate the sum for. In our example, the value ExampleOne will be
added to the value ExampleTwo.
ExampleOne = ExampleOne.add(ExampleTwo);
System.out.println("Here is the result: " + ExampleOne);
Subtract numbers
To subtract two values from each other, you need the subtract() method. In the next example, we subtract ExampleTwo from ExampleOne.
ExampleOne = ExampleOne.subtract(ExampleTwo);
System.out.println("Here is the result: " + ExampleOne);
Multiply values
The method you use to multiply two BigDecimals in Java works similarly. It’s called multiply(). To multiply ExampleTwo by ExampleOne, use the following code:
ExampleOne = ExampleOne.multiply(ExampleTwo);
System.out.println("Here is the result: " + ExampleOne);
Dividing numbers
If you want to divide two BigDecimal objects in Java, use the divide() method. This follows the same syntax as the other examples and looks like this:
ExampleOne = ExampleOne.divide(ExampleTwo);
System.out.println("Here is the result: " + ExampleOne);
However, this only works if the quotient has an exact, terminating decimal representation. If this is not the case, the following error message will be output: java.lang.ArithmeticException: Non-terminating decimal expansion; no exact representable decimal result. This is a runtime error. To avoid it, there are various rounding options for the divide() method, which can be passed via java.math.RoundingMode.
You can choose from the following constants:
Constant Function
CEILING Rounds to positive infinity
DOWN Rounds to 0
FLOOR Rounds to negative infinity
HALF_DOWN Rounds to the nearest neighboring number, and towards 0 if both neighbors are equidistant
HALF_EVEN Rounds to the nearest neighboring number, and to the even neighbor if both are equidistant
HALF_UP Rounds to the nearest neighboring number, and away from 0 if both neighbors are equidistant
UNNECESSARY Omits rounding and only performs exact operations; can only be used if the division is exact
UP Rounds away from 0
Overview of the most important methods
Now that you’ve learned how to use BigDecimal in Java, here’s an overview of some of the most important methods you can use with it.
Method Function
abs() Returns a BigDecimal with its absolute value
add() Returns a BigDecimal whose value is composed of (this + Addend)
divide() Returns a BigDecimal whose value is (this / divisor)
max(BigDecimal val) Returns the greater of this BigDecimal and val
min(BigDecimal val) Returns the smaller of this BigDecimal and val
movePointLeft(int n) Outputs a BigDecimal where the decimal point has been moved to the left by the value “n”
movePointRight(int n) Outputs a BigDecimal where the decimal point has been moved to the right by the value “n”
multiply(BigDecimal multiplicand, MathContext mc) Returns a value that results from (this * multiplicand)
| {"url":"https://www.ionos.com/digitalguide/websites/web-development/java-class-bigdecimal/","timestamp":"2024-11-04T13:56:24Z","content_type":"text/html","content_length":"153239","record_id":"<urn:uuid:491fd0ae-fb7a-4106-a25c-0762ff045481>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00562.warc.gz"}
Best points w.r.t. non dominated sorting with hypervolume contribution. — nds_selection
Best points w.r.t. non dominated sorting with hypervolume contribution.
Select best subset of points by non dominated sorting with hypervolume contribution for tie breaking. Works on an arbitrary dimension of size two or higher.
Numeric matrix with each column corresponding to a point
Amount of points to select.
Reference point for hypervolume.
Should the ranking be based on minimization? Can be specified for each dimension or for all. Default is TRUE for each dimension. | {"url":"https://bbotk.mlr-org.com/reference/nds_selection.html","timestamp":"2024-11-12T20:30:00Z","content_type":"text/html","content_length":"9179","record_id":"<urn:uuid:f6b2ca0f-7cf4-4314-97f8-3d3620a8c64d>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00177.warc.gz"} |
Critically damped response - (Control Theory) - Vocab, Definition, Explanations | Fiveable
Critically damped response
from class:
Control Theory
A critically damped response occurs in a dynamic system when the damping ratio is exactly equal to one, allowing the system to return to equilibrium as quickly as possible without oscillating. This
response is significant because it represents an optimal balance between speed and stability, ensuring that the system settles down to its final value without overshooting or oscillations. In many
control applications, achieving a critically damped response is desired for fast and stable behavior.
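As a concrete illustration (the notation below is not taken from the original), consider the standard second-order system \( \ddot{x} + 2\zeta\omega_n \dot{x} + \omega_n^2 x = 0 \) with characteristic equation \( s^2 + 2\zeta\omega_n s + \omega_n^2 = 0 \). Setting the damping ratio \( \zeta = 1 \) gives \( (s + \omega_n)^2 = 0 \), a repeated real root \( s = -\omega_n \), so the response takes the form \( x(t) = (A + Bt)\,e^{-\omega_n t} \): it returns to equilibrium as quickly as possible without oscillating.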
congrats on reading the definition of critically damped response. now let's actually learn it.
5 Must Know Facts For Your Next Test
1. In a critically damped system, the time to reach steady state is minimized while avoiding oscillations, making it ideal for applications requiring quick responses.
2. A critically damped response can be mathematically represented using second-order differential equations where the characteristic equation has a repeated real root.
3. Critical damping is often used in engineering systems like shock absorbers and control systems, where rapid stabilization without overshoot is essential.
4. Systems with critical damping are often characterized by a specific trade-off between responsiveness and stability, making them highly effective in real-time applications.
5. Determining whether a system is critically damped involves analyzing the damping ratio and comparing it to the ideal value of one.
Review Questions
• How does a critically damped response compare to underdamped and overdamped responses in terms of system behavior?
□ A critically damped response represents an optimal situation where the system returns to equilibrium as quickly as possible without oscillating. In contrast, an underdamped response
oscillates before settling down, taking longer to reach equilibrium. On the other hand, an overdamped response returns to equilibrium slowly without oscillations but takes more time than a
critically damped system. Understanding these differences is crucial for designing systems that require specific dynamic behaviors.
• Describe the significance of achieving critical damping in practical control systems.
□ Achieving critical damping in control systems is essential because it allows for fast stabilization of the system's output while avoiding undesirable oscillations. This is particularly
important in applications like robotics and automotive suspension systems, where rapid response times are required for safety and performance. By ensuring that systems operate at critical
damping, engineers can optimize performance and enhance user experience by providing smooth and quick responses to changes.
• Evaluate how the concept of critical damping impacts design decisions in engineering applications.
□ The concept of critical damping significantly influences design decisions across various engineering fields. Engineers must consider the balance between responsiveness and stability when
designing systems like feedback controllers or mechanical devices. When aiming for critical damping, designers may need to adjust parameters such as mass, stiffness, or damping coefficients
to meet performance specifications. This evaluation not only optimizes functionality but also ensures reliability and safety in real-world applications, highlighting the importance of
understanding dynamic responses in engineering design.
"Critically damped response" also found in:
| {"url":"https://library.fiveable.me/key-terms/control-theory/critically-damped-response","timestamp":"2024-11-13T01:58:34Z","content_type":"text/html","content_length":"146664","record_id":"<urn:uuid:9fe7a724-fa0d-4046-9199-04b49957022e>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00282.warc.gz"}
16. Implementation#
We implement a function algebra, which allows us to write expressions like
func = Id + 3 * Compose (f, g);
where f and g are functions, and Id is the identity function. Then the composed function func shall be able to compute the function value and the derivative at a given point:
Matrix jacobi(func->DimF(), func->DimX());
func->EvaluateDeriv(x, jacobi);
The base class for such functions is
class NonlinearFunction {
public:
  virtual ~NonlinearFunction() = default;
  virtual size_t DimX() const = 0;
  virtual size_t DimF() const = 0;
  virtual void Evaluate (VectorView<double> x, VectorView<double> f) const = 0;
  virtual void EvaluateDeriv (VectorView<double> x, MatrixView<double> df) const = 0;
};
DimX and DimF provide the vector space dimensions of the domain, and the image. The Evaluate and EvaluateDeriv take vector- and matrix-views, such that we can take sub-vectors and sub-matrices when
calling the evaluations.
We build expression trees, similar to the expression templates for vectors and matrices. But now we use virtual functions instead of the Barton-Nackman trick (i.e. dynamic polymorphism instead of static polymorphism). This is more expensive to create, but it allows us to pass NonlinearFunction objects between C++ functions.
A SumFunction implements the sum \(f_A+f_B\). The two children are provided by pointers. Shared pointers allow simple lifetime management:
class SumFunction : public NonlinearFunction {
  shared_ptr<NonlinearFunction> fa, fb;
public:
  SumFunction (shared_ptr<NonlinearFunction> _fa,
               shared_ptr<NonlinearFunction> _fb)
    : fa(_fa), fb(_fb) { }
  size_t DimX() const override { return fa->DimX(); }
  size_t DimF() const override { return fa->DimF(); }
  void Evaluate (VectorView<double> x, VectorView<double> f) const override
  {
    fa->Evaluate(x, f);
    Vector<> tmp(DimF());
    fb->Evaluate(x, tmp);
    f += tmp;
  }
  void EvaluateDeriv (VectorView<double> x, MatrixView<double> df) const override
  {
    fa->EvaluateDeriv(x, df);
    Matrix<> tmp(DimF(), DimX());
    fb->EvaluateDeriv(x, tmp);
    df += tmp;
  }
};
To generate such a SumFunction object, we overload the operator+ for two NonlinearFunction objects, held as shared pointers:
auto operator+ (shared_ptr<NonlinearFunction> fa, shared_ptr<NonlinearFunction> fb)
{
  return make_shared<SumFunction>(fa, fb);
}
16.1. Implementing a Newton solver#
Newton’s method for solving the non-linear equation
\[ f(x) = 0 \]
is this iterative method:
\[ x^{n+1} = x^n - f^\prime(x^n)^{-1} f(x^n) \]
If the Jacobi-matrix at the solution \(x^\ast\) is regular, and the initial guess \(x^0\) is sufficiently close to \(x^\ast\), Newton’s method converges quadratically:
\[ \| x^{n+1} - x^\ast \| \leq c \, \| x^n - x^\ast \|^2 \]
This means the number of valid digits double in every iteration.
void NewtonSolver (shared_ptr<NonlinearFunction> func, VectorView<double> x,
                   double tol = 1e-10, int maxsteps = 10,
                   std::function<void(int,double,VectorView<double>)> callback = nullptr)
{
  Vector<> res(func->DimF());
  Matrix<> fprime(func->DimF(), func->DimX());
  for (int i = 0; i < maxsteps; i++)
    {
      func->Evaluate(x, res);
      func->EvaluateDeriv(x, fprime);
      CalcInverse(fprime);   // invert the Jacobian (the helper's name is assumed here; any linear solve of fprime*dx = res works)
      x -= fprime*res;
      double err = Norm(res);
      if (callback) callback(i, err, x);
      if (err < tol) return;
    }
  throw std::domain_error("Newton did not converge");
}
16.2. Coding the Implicit Euler method#
In every time-step we have to solve for the new value \(y^{n+1}\):
\[ y^{n+1} - y^n - h f(y^{n+1}) = 0 \]
The function \(f : {\mathbb R}^n \rightarrow {\mathbb R}^n\) is the right hand side of the ODE, which has been brought into autonomous form.
We use our function algebra to build this composed function, and throw it into the Newton solver. If we make the time-step not too large, the value \(y^n\) of the old time-step is a good starting value.
To express that the independent variable is \(y^{n+1}\), we create an IdentityFunction. The old value is considered a constant, i.e. a ConstantFunction. The right hand side function is given by the user. Then the implicit Euler method is coded up like this:
void SolveODE_IE(double tend, int steps,
                 VectorView<double> y, shared_ptr<NonlinearFunction> rhs,
                 std::function<void(double,VectorView<double>)> callback = nullptr)
{
  double dt = tend/steps;
  auto yold = make_shared<ConstantFunction>(y);
  auto ynew = make_shared<IdentityFunction>(y.Size());
  auto equ = ynew - yold - dt * rhs;
  double t = 0;
  for (int i = 0; i < steps; i++)
    {
      NewtonSolver (equ, y);
      t += dt;
      if (callback) callback(t, y);
    }
}
16.3. Using the time-stepping method#
A mass attached to a spring is described by the ODE
\[ m y^{\prime \prime}(t) = -k y(t) \]
where \(m\) is mass, \(k\) is the stiffness of the spring, and \(y(t)\) is the displacement of the mass. The equation comes from Newton’s law
force = mass \(\times\) acceleration
We replace the second-order equation with a first order system. For mathematical simplicity we set \(k = m = 1\). Then we can define the right-hand-side as a NonlinearFunction. The derivative is
needed for the Newton solver:
class MassSpring : public NonlinearFunction
{
public:
  size_t DimX() const override { return 2; }
  size_t DimF() const override { return 2; }
  void Evaluate (VectorView<double> x, VectorView<double> f) const override
  {
    f(0) = x(1);
    f(1) = -x(0);
  }
  void EvaluateDeriv (VectorView<double> x, MatrixView<double> df) const override
  {
    df = 0.0;
    df(0,1) = 1;
    df(1,0) = -1;
  }
};
Finally, we start the time-stepper with the time interval, number of steps, initial values, the right-hand-side function, and a callback function called at the end of every time-step:
double tend = 4*M_PI;
int steps = 100;
Vector<> y { 1, 0 };
auto rhs = make_shared<MassSpring>();
SolveODE_IE(tend, steps, y, rhs,
[](double t, VectorView<double> y) { cout << t << " " << y(0) << " " << y(1) << endl; });
This example is provided in demos/test_ode.cc.
16.4. Exercises#
• Implement an explicit Euler time-stepper, and the Crank-Nicolson method (a sketch of an explicit Euler stepper is given below, after the exercises).
• Compare the results of the mass-spring system for these methods, and various time-steps. Plot the solution function. What do you observe?
• Model an electric network by an ODE. Bring it to autonomous form. Solve the ODE numerically for various parameters with the three methods, and various time-steps.
Voltage source \(U_0(t) = \cos(100 \pi t)\), \(R = C = 1\) or \(R = 100, C = 10^{-6}\).
Ohm’s law for a resistor with resistance \(R\):
\[ U = R I \]
Equation for a capacitor with capacitance \(C\):
\[ I = C \frac{dU }{dt} \]
Kirchhoff’s laws:
• Currents in a node sum up to zero. Thus we have a constant current along the loop.
• Voltages around a loop sum up to zero. This gives:
\[ U_0 = U_R + U_C \]
\[ U_C(t) + R C \frac{dU_C}{dt}(t) = U_0(t) \]
Use initial condition for voltage at capacitor \(U_C(t_0) = 0\), for \(t_0=0\).
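For the first exercise, a minimal sketch of an explicit Euler stepper in the same style as SolveODE_IE is given here. It is one possible solution rather than the course's reference implementation, and it assumes the vector operations used earlier (+=, scalar times vector) are available:

void SolveODE_EE(double tend, int steps,
                 VectorView<double> y, shared_ptr<NonlinearFunction> rhs,
                 std::function<void(double,VectorView<double>)> callback = nullptr)
{
  double dt = tend/steps;
  Vector<> f(rhs->DimF());
  double t = 0;
  for (int i = 0; i < steps; i++)
    {
      rhs->Evaluate(y, f);     // f = f(y^n)
      y += dt * f;             // y^{n+1} = y^n + h f(y^n)
      t += dt;
      if (callback) callback(t, y);
    }
}

Since no nonlinear system has to be solved, each step is much cheaper than an implicit Euler step, but the method is only conditionally stable (see the next section).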
16.5. Stability function#
An ODE \(y^\prime(t) = A y(t)\) with \(A \in {\mathbb R}^{n \times n}\) diagonalizable can be brought to \(n\) scalar ODEs
\[ y_i^\prime(t) = \lambda_i y_i(t), \]
where \(\lambda_i\) are the eigenvalues of \(A\). If \(\lambda_i\) has negative (non-positive) real part, the solution is decaying (non-increasing). From now on we drop the index \(i\).
The explicit Euler method with time-step \(h\) leads to
\[ y_{i+1} = (1 + h \lambda) y_i, \]
the implicit Euler method to
\[ y_{i+1} = \frac{1}{1-h \lambda} y_i, \]
and the Crank-Nicolson to
\[ y_{i+1} = \frac{ 2 + h \lambda } { 2 - h \lambda } y_i \]
The stability function \(g(\cdot)\) of a method is defined such that
\[ y_{i+1} = g(h \lambda) y_i \]
These are for the explicit Euler, the implicit Euler, and the Crank-Nicolson:
\[ g_{EE}(z) = 1+z \qquad g_{IE}(z) = \frac{1}{1-z} \qquad g_{CN}(z) = \frac{2+z}{2-z} \]
The domain of stability is
\[ S = \{ z : | g(z) | \leq 1 \} \]
For the three methods these are:
\[ S_{EE} = \{ z : |z + 1| \leq 1 \} \]
\[ S_{IE} = \{ z : | z - 1 | \geq 1 \} \]
\[ S_{CN} = \{ z : \operatorname{Re} (z) \leq 0 \} \]
If Re\((\lambda) \leq 0\), then \(h \lambda\) is always in the domain of stability of the implicit Euler, and of the Crank-Nicolson. This property of a method is called \(A\)-stability. The explicit
Euler leads to (quickly) increasing numerical solutions if \(h\) is not small enough.
If \(\lim_{z \rightarrow -\infty} g(z) = 0\), quickly decaying solutions lead to quickly decaying numerical solutions. This is the case for the implicit Euler, but not for the Crank-Nicolson. This
property is called \(L\)-stability. One observes (slowly decreasing) oscillations with the Crank-Nicolson method when \(-h \lambda\) is large. | {"url":"https://jschoeberl.github.io/IntroSC/ODEs/implementation.html","timestamp":"2024-11-05T07:04:49Z","content_type":"text/html","content_length":"62366","record_id":"<urn:uuid:071d81a6-f35e-4f73-9135-0bff332d97ca>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00289.warc.gz"}
Literature suggestions about numerical errors on calculators
03-18-2020, 01:41 PM
Post: #1
erazor Posts: 16
Junior Member Joined: Nov 2019
Literature suggestions about numerical errors on calculators
the book "Rounding errors in algebraic processes" by Wilkinson is a good read but it's way out of date.
IEEE-754 is now the thing for both radix 2 and 10. Also there certainly are proceedings on this topic in 50 years.
Can anyone suggest up-to-date/more modern books which are easy to read?
"Accuracy and Stability of Numerical Algorithms" by Higham seems to go this way but might be too in-depth.
03-18-2020, 02:52 PM
(This post was last modified: 03-18-2020 02:55 PM by BobVA.)
Post: #2
BobVA Posts: 455
Senior Member Joined: Dec 2013
RE: Literature suggestions about numerical errors on calculators
The HP-15C Advanced Functions Handbook has a good, easy-to-read discussion of this topic (Appendix: Accuracy of Numerical Calculations). It doesn't specifically discuss IEEE 754 though.
03-18-2020, 04:04 PM
Post: #3
toml_12953 Posts: 2,191
Senior Member Joined: Dec 2013
RE: Literature suggestions about numerical errors on calculators
(03-18-2020 01:41 PM)erazor Wrote: Hello,
the book "Rounding errors in algebraic processes" by Wilkinson is a good read but it's way out of date.
IEEE-754 is now the thing for both radix 2 and 10. Also there certainly are proceedings on this topic in 50 years.
Can anyone suggest up-to-date/more modern books which are easy to read?
"Accuracy and Stability of Numerical Algorithms" by Higham seems to go this way but might be too in-depth.
Have you seen the Wikipedia entry? If not, go here:
Tom L
Cui bono?
03-18-2020, 04:26 PM
Post: #4
KeithB Posts: 542
Senior Member Joined: Jan 2017
RE: Literature suggestions about numerical errors on calculators
Goldberg: "What every computer scientist should know about floating point"
I think it is all summed up by Kernighan and Plauger in "Elements of Programming Style": "Working with floating point is like moving sand piles. Every time you move one you lose a little sand and
pick up a little dirt".
03-18-2020, 09:42 PM
(This post was last modified: 03-18-2020 09:45 PM by SlideRule.)
Post: #5
SlideRule Posts: 1,533
Senior Member Joined: Dec 2013
RE: Literature suggestions about numerical errors on calculators
An excerpt from Computer Arithmetic and Validity Theory, Implementation, and Applications 2e, De Gruyter, © 2013, e-ISBN 978-3-11-030179-3
Introduction (pg. 3)
"The task of numerical analysis is to develop and design algorithms which use floating-point numbers to deliver a reasonably good approximation to the exact result. An essential part of this task
is to quantify the error of the computed answer. Managing this quite natural error is the crucial challenge of numerical or scientific computing. In this respect, numerical analysis is completely
irrelevant to everyday applications of computers like those mentioned in the opening paragraph of the Preface. For solving problems of this kind, integer arithmetic, which is exact, is used, or
should be, whenever arithmetic is needed."
Emphasis mine.
the branch of mathematics dealing with the properties and manipulation of numbers.
03-19-2020, 05:45 AM
Post: #6
erazor Posts: 16
Junior Member Joined: Nov 2019
RE: Literature suggestions about numerical errors on calculators
Thanks for the suggestions.
Posting the link to the HP-15c advanced functions manual.
The Goldberg article is also available to the public in an edited version.
03-19-2020, 09:49 AM
Post: #7
EdS2 Posts: 608
Senior Member Joined: Apr 2014
RE: Literature suggestions about numerical errors on calculators
(03-18-2020 04:04 PM)toml_12953 Wrote:
(03-18-2020 01:41 PM)erazor Wrote: Hello,
the book "Rounding errors in algebraic processes" by Wilkinson is a good read but it's way out of date.
IEEE-754 is now the thing for both radix 2 and 10. Also there certainly are proceedings on this topic in 50 years.
Can anyone suggest up-to-date/more modern books which are easy to read?
"Accuracy and Stability of Numerical Algorithms" by Higham seems to go this way but might be too in-depth.
Have you seen the Wikipedia entry? If not, go here:
There are some great writings by Kahan, and some of them are linked as footnotes from that Wikipedia article section:
• How Java's floating-point hurts everyone everywhere (PDF)
• Why do we need a floating-point arithmetic standard? (PDF)
• The Baleful Effect of Computer Benchmarks upon Applied Mathematics, Physics and Chemistry (PDF)
• Marketing versus Mathematics (PDF)
Also, from his own web site: Mathematics Written in Sand (PDF)
Also of interest: Severance, Charles (1998-02-20). "An Interview with the Old Man of Floating-Point"
| {"url":"https://hpmuseum.org/forum/thread-14664-post-129190.html","timestamp":"2024-11-02T20:53:35Z","content_type":"application/xhtml+xml","content_length":"36815","record_id":"<urn:uuid:dea07acb-bfa0-4f10-894e-8a53f4593157>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00062.warc.gz"}
Effective Mathematics Instruction The Importance of Curriculum
I found a nice little study comparing a fourth grade Direct Instruction math program with a well regarded fourth grade constructivist program. The results were surprising, to say the least.
The study, Effective Mathematics Instruction: The Importance of Curriculum (2000) by Crawford and Snider, published in Education & Treatment of Children, compared the Direct Instruction 3rd grade math curriculum Connecting Math Concepts (CMC, level D) to the constructivist fourth grade math curriculum Invitation to Mathematics (SF) published by Scott Foresman.
Invitation to Mathematics (SF)
SF has a spiral design (but of course) and relies on discovery learning and problem-solving strategies to "teach" concepts. The SF text included chapters on addition and subtraction facts, numbers
and place value, addition and subtraction, measurement, multiplication facts, multiplication, geometry, division facts, division, decimals, fractions, and graphing. Each chapter in the SF text
interspersed a few activities on using problem solving strategies. Teacher B taught the 4th grade control class. He was an experienced 4th grade math teacher and had taught using the SF text for 11
Teacher B's math period was divided into three 15-minute parts. First, students checked their homework as B gave the answers. Then students told B their scores, which he recorded. Second, B lectured
or demonstrated a concept, and some students volunteered to answer questions from time-to-time. The teacher presentation was extemporaneous and included explanations, demonstrations, and references
to text objectives. Third, students were assigned textbook problems and given time for independent work.
The SF group completed 10 out of 12 chapters during the experiment.
Connecting Math Concepts (CMC)
CMC is a typical Direct Instruction program with a stranded design in which multiple skills/concepts are taught in each lesson; each skill/concept is taught for about 5-10 minutes per lesson and revisited day after day until it has been mastered. Explicit instruction is used to teach each skill/concept. CMC included strands on multiplication and division facts, calculator skills, whole number operations, mental arithmetic, column multiplication, column subtraction, division, equations and relationships, place value, fractions, ratios and proportions, number families,
word problems, geometry, functions, and probability. Teacher A had 14 years of experience teaching math. She had no previous experience with CMC or any other Direct Instruction programs. She received
4 hours of training at a workshop in August and about three hours of additional training from the experimenters.
Teacher A used the scripted presentation in the CMC teacher presentation book for her 45 minute class. She frequently asked questions to which the whole class responded, but she did not use a signal
to elicit unison responding. If she got a weak response she would ask the question again to part of the class (e.g., to one row or to all the girls) or ask individuals to raise their hands if they knew the answer. There were high levels of teacher-pupil interaction, but not every student was academically engaged.
Generally, one lesson was covered per day and the first 10 minutes were set aside to correct the previous day's homework. Then a structured, teacher-guided presentation followed, during which the
students responded orally or by writing answers to the teacher's questions. Student answers received immediate feedback and errors were corrected immediately. If there was time, students began their
homework during the remaining minutes.
The CMC group completed 90 out of 120 lessons during the experiment.
The Experiment
Despite the differences in content and organization, both programs covered math concepts generally considered to be important in 4th grade--addition and subtraction of multi-digit numbers,
multiplication and division facts and procedures, fractions, and problem solving with whole numbers.
Students were randomly assigned to each 4th grade classroom. The classes were heterogeneous and included the full range of abilities including learning disabled and gifted students. There were no
significant pretest differences between students in the two curriculum groups on the computation, concepts and problem solving subtests of the NAT nor on the total test scores. Nor did any
significant pretest differences show up on any of the curriculum-based measures.
The Results
Students did not use calculators on any of the tests.
The CMC Curriculum Test
For the CMC measure the experimenters designed a test that consisted of 55 production items for which students computed answers to problems, including both computational and word problems. The CMC
test was comprehensive as well as cumulative; problems were examples of the entire range of problems found in the last quarter of the CMC program. Problems were chosen from the last quarter of the
program because the various preskills taught in the early part of the program are integrated in problem types seen in the last quarter of the program.
The results here were not surprising, although the magnitude of the difference between the two groups may be.
The SF class averaged 15 out of 55 (27%) correct answers on the posttest up from 7 out of 55 correct on the pre-test. The CMC class averaged 41 (75%) correct on the posttest up from 6 out of 55 correct
on the pretest. I calculated the effect size to be 3.25 standard deviations which is enormous, though biased in favor of the CMC students.
The SF Curriculum Test
The SF test was published by Scott, Foresman to go along with the Invitation to Mathematics text and was the complete Cumulative Test for Chapters 1-12. It was intended to be comprehensive as well as
cumulative. The SF test consisted of 22 multiple-choice items (four choices) which assessed the range of concepts presented in the 4th grade SF textbook.
The SF class averaged 16 out of 22 (72%) correct answers on the posttest up from 4 out of 22 correct on the pre-test. However, surprisingly the CMC class averaged 19 (86%) correct on the posttest up
from 3 out of 15 correct on the pretest. I calculated the effect size to be 0.75 standard deviations which is large, even though the test was biased in favor of the SF students.
The NAT exam Math Facts Test
The CMC group also scored significantly higher on rapid recall of multiplication facts. Of 72 items, the mean correctly answered in 3 minutes for the CMC group was 66 compared to 48 for the SF group
for the multiplication facts posttest. I calculated the effect size to be 1.5 sd.
Posttest comparisons on the computation subtest of the NAT indicated a significant difference in favor of the CMC group. Effect size = 0.86. On the other hand, neither the scores for the concepts and
problem-solving portion of the NAT nor the total NAT showed any significant group differences. The total NAT scores put the CMC group at the 51st percentile and the SF group at the 46th percentile,
but this difference was not statistically significant.
The CMC implementation was less than optimal, yet it still achieved significantly better performance gains compared to the constructivist curriculum. The experimenters noted:
We believe this implementation of CMC was less than optimal because (a) students began the program in fourth grade rather than in first grade and (b) students could not be placed in homogeneous
instructional groups. A unique feature of the CMC program is that it's designed around integrated strands rather than in a spiraling fashion. Each concept is introduced, developed, extended, and
systematically reviewed beginning in Level A and culminating in Level F (6th grade). This design sequence means that students who enter the program at the later levels may lack the necessary
preskills developed in previous levels of CMC. This study with fourth graders indicated that even when students enter Level D, without the benefit of instruction at previous levels, they could
reach higher levels of achievement in certain domains. However, more students could have reached mastery if instruction were begun in the primary grades.
Another drawback in this implementation had to do with heterogeneous ability levels of the groups. Heterogeneity was an issue for both curricula. However, the emphasis on mastery in CMC created a
special challenge for teachers using CMC. To monitor progress CMC tests are given every ten lessons and mastery criteria for each skill tested are provided. Because of the integrated nature of
the strands, students who do not master an early skill will have trouble later on. Unlike traditional basals, concepts do not "go away," forcing teachers to continue to reteach until all students
master the skills. This emphasis on mastery created a challenge for teachers that was exacerbated in this case by the fact that students had not gone through the previous three levels of CMC.
Why didn't the CMC gains show up on the NAT problem solving subtest and total math measure? The experimenters opine:
Our guess is that a more optimal implementation of CMC would have increased achievement in the CMC group, which may have shown up on the NAT. In general, the tighter focus of curriculum-based
measures such as those used in this study makes them more sensitive to the effects of instruction than any published, norm-referenced test. Standardized tests have limited usefulness for program
evaluation when the sample is small, as it was in this study (Carver, 1974; Marston, Fuchs, & Deno, 1985). Nevertheless, we included the NAT as a dependent measure because it is
curriculum-neutral. The differences all favored the CMC program.
That no significant differences occurred either between teachers or across years on the NAT should be interpreted in the light of several other factors. One, the results do not indicate that the
SF curriculum outperformed CMC, only that the NAT did not detect a difference between the groups, despite the differences found in the curriculum-based measures. Two, performance on published
norm-referenced tests such as the NAT are more highly correlated to reading comprehension scores than with computation scores (Carver, 1974; Tindal & Marston, 1990). Three, the NAT concepts and
problem solving items were not well-aligned with either curriculum. The types of problems on the NAT were complex, unique, non-algorithmic problems for which neither program could provide
instruction. Performance on such problems has less to do with instruction than with raw ability. Four, significant differences on the calculation subtest of the NAT favored the CMC program during
year 1 (see Snider and Crawford, 1996 for a detailed discussion of those results). Because less instructional time is devoted to computation skills after 4th grade, the strong calculation skills
displayed by the CMC group would seem to be a worthy outcome. Five, although the NAT showed no differences in problem solving skills between curriculum groups or between program years, another
source of data suggests otherwise. During year 1, on the eight word problems on the curriculum-based test, the CMC group outscored the SF group with an overall mean of 56% correct compared to
32%. An analysis of variance found this difference to be significant...
And, here's the kicker. The high-performing kids liked the highly-structured Direct Instruction program better than the loosey goosey constructivist curriculum:
Both teachers reported anecdotally that the high-performing students seemed to respond most positively to the CMC curricula. One of Teacher A's highest performing students, when asked about the
program, wrote, "I wish we'd have math books like this every year.... it's easier to learn in this book because they have that part of a page that explains and that's easier than just having to
pick up on whatever."
It may be somewhat counter-intuitive that an explicit, structured program would be well received by more able students. We often assume that more capable students benefit most from a less
structured approach that gives them the freedom to discover and explore, whereas more didactic approaches ought to be reserved for low-performing students. It could be that high-performing
students do well and respond well to highly-structured approaches when they are sufficiently challenging. These reports are interesting enough to bear further investigation after collection of
objective data.
17 comments:
Are these changes in the department of education something to be worried about?
Sadly, my son is in a first grade TERC (i.e., constructivist math) classroom. As such, his ability to multiply, divide, square numbers, work competently with negative numbers (including
multiplication/division), and do multiple-digit addition/subtraction is largely ignored. However, should he make a counting error when counting up 30 or 40 blocks, it's a mark against him.
American primary educational philosophy seems to labor under a strange view that kids can't learn math. Teachers seem to think kids in other countries, particularly Asian countries, are simply
innately good at math. This is simply untrue (in fact, Richard Nisbett in his book The Geography of Thought cogently suggests that Westerners may think more reductionistically (i.e., in a more
math-friendly fashion)).
Somehow even with a back to basics curriculum, I doubt that your son would learn to "multiply, divide, square numbers, work competently with negative numbers (including multiplication/division),
and do multiple-digit addition/subtraction" in 1st grade.
My biggest issue with my 1st graders classroom, is that they aren't required to master the addition (and subtraction) facts. It is so annoying to see kids counting on their fingers while adding 7
+ 3.
Apologies, I misread your post...
It sounds like your 1st grader has the same problems that my 3rd grader has. I am willing to bet that he could survive in Algebra, but is stuck doing stuff he mastered last year.
Why oh why is Ability Grouping such a dirt phrase?
Ooh! I know! Pick me!!!
Abilities grouping is considered "bad" (even though if we were honest with ourselves we would know it was necessary) because even in second grade kids know if they are in the "smart", "average"
or "dummy" class. And believe me, those are the words we used in the 70s.
Can't have little Johnny thinking he's a dummy. It would be bad for his self-esteem. Maslow's Pyramid is more important than math, you know.
Don't forget that the struggling students learn just by being in the presence of the advanced students. Osmosis, you know.
I like Engelmann's take the best:
"The notion that the lower performers are humiliated if they are in a homogeneous group with other lower performers is actually backwards. They will suffer far more if they are placed far beyond
their level of skill and knowledge, because they will receive an uninterrupted flow of evidence that they are dumber than all the other children in the group"
Why is "ability grouping" the great unmentionable? Because the racial and ethnic makeup of the groups is politically intolerable.
Even though there may be a race/class component in many districts, I went to a lily white suburban school (a local apartment manager was regularly threatened by the locals about what might happen
should she rent to any of those **** [insert racial slur here]) with a solid tax base of mostly middle class housing. This school was as culturally homogenous as you can get outside a rerun of
some old sitcom.
Even in the absence of stereotypical low-performing minorities, abilities grouping had to be abandoned.
Success for All groups students by reading performance so that the teacher can focus on the skills the whole group needs to learn. I reported on a number of Success for All schools. Teachers said
student behavior improved significantly. Poor readers, they thought, had been living in fear of being called on to read in class. So they kept their classes in an uproar to hide their
incompetence. In a small group of readers with similar skills, students felt safe from humiliation.
I think "ability" grouping is a poor term. There are many reasons why some kids have trouble with reading or math that have nothing to do with innate ability. For example, they may have been
taught poorly.
"I think "ability" grouping is a poor term. There are many reasons why some kids have trouble with reading or math that have nothing to do with innate ability. For example, they may have been
taught poorly."
This is why many don't like tracking in the lower grades. It's quite understandable. When schools have such trouble with curricula and teaching methods, who can trust their ability to correctly
place students. It may just allow them to ignore real problems and blame it on the students.
But just because schools are bad at teaching and selecting curricula doesn't mean that ability grouping shouldn't be done. Perhaps it would sound better if you called it capability grouping. It
won't make their teaching problems go away, but it will sure help those who are ready and willing for more.
Look at it this way, if you don't separate kids by capability, then that doesn't guarantee better teaching methods and curricula either. Perhaps capability grouping would get parents to question
why their child is not in the faster paced group. Schools would have to justify their decisions. Without grouping, it's like Sergeant Schultz ..."I see nothing! NOTHING!"
By high school, everyone does ability grouping, but by then, everything looks like external reasons.
I don't think I buy the arguments against ability grouping. The arguments against it seem to argue that kids will be put into a "slower" group because they haven't been taught the material
earlier, but this doesn't have to be the case.
Ideally kids would be grouped into levels based on current "level" and "speed". For example...
For example, imagine a program that had several levels (roughly equivalent to grades) 1, 2 3, 4, 5
Now within each level there would be three groups corresponding to the pace upon which that group is able to get through the program... i.e. a for slower kids, b for average, c for faster kids.
This means my 3rd grader might be placed in level 2, but he could also be placed in group 2c which would have him up to grade level within a year… and beyond it in another year.
Also note, because of the grouping, even kids in the slowest group would still move at a pace that ensured they were progressing trough the level in one calendar year.
Summary: I want ability groups based on "potential" not on simply current level.
Yes... some kids will move through the system exponentially quicker... so what... at least the kids who struggled a little more would move through the system at a quicker pace than they do now
and not get left behind.
Right now our system takes all kids through to about 6th grade level at the same time. 1/3 are held back from what they are capable of… 1/3 are just right… and 1/3 are completely left behind with
no way to catch up.
My daughter is sinking in reading in her current 1st grade class, but she is still tested and expected to read books way above her current level. She is slowly learning to hate reading, homework
is torture for her, and she has recently taken to calling herself a dummy. I would much rather she get placed in a class that is on her level and moves at a pace that she can master what she
needs too.
(Note: she also needs to get out of a classroom that promotes whole language)
When I was in Junior High School, there were 5 levels in 8th grade math. I was in the second highest group, and performed rather poorly there (I believe because of my teacher's rather strict
insistence on memorizing formulae -- when we had a substitute for about a week, my performance suddenly shot up).
The teacher decided that due to performance I should be moved down a level -- in 9th grade there were only four levels, so I was moved into the third group (even though those who had been in the
third group the prior year went into the second group in the revised system -- I'm not sure, I think there was some confusion somewhere). Yes, being moved from the "pretty smarts" to the "pretty
dumbs" did bother me for a bit, though I never actually heard any comments on it from other students.
I had no idea at the time, but this ended up being one of the best things that happened during my high school career.
We went over things laboriously and repeatedly. Over and over, again and again long after I knew the material. Immediately I aced every test, of course, which wasn't a great motivator but made me
feel good anyway.
It's also worthwhile to note that it helped me develop more sympathy for those with lesser abilities.
By early 11th grade, I would correct the teacher regularly. By the end of 11th grade, I would see proofs of problems the teacher wasn't aware existed. In 12th grade, I started coming in to school
early, talking to the calculus teacher for 15 minutes or so about his lesson, and then going out and helping kids doing their calculus homework in the hall. My base of knowledge was solid enough
that picking up new information was a breeze.
I disagree with stevenh on changing it to "capability" grouping. In fact, to my ears this sounds even worse -- it's like saying the kids will always be at the lower rungs. Children do progress at
different paces at different times, and have spurts in their intellectual growth just as in their physical growth. Ability grouping is just that -- current ability grouping. Joanne, I understand
your concerns, but the reasons behind the differing ability are not necessarily at issue (except for the kids). It's not "innate ability" they're being grouped by. No one can judge their innate
ability anyway.
The sad part about school today is the fact that the people trying all this new, progressive pedagogy like constructivist math don't have a clue what basic education is really about. They spend
so much time playing games and doing tricks that the students don't learn the fundamentals.
I have seen from experience the damage done to children who do not learn the critical fundamentals at the early grades. Being a fourth grade teacher for four years, I have seen the disaster
"progressive" education has done to some children. I personally have a student who is in my EIP (Early Intervention Program- a class limited to 14 low performing regular education students) class
who has been retained once, been in the second and third grade "taught" by two supposedly "good" teachers and the child cannot decode at all. She was never taught it. It is not a learning
disability. I am now trying to teach this child basic decoding, AFTER BEING IN AN AMERICAN PUBLIC SCHOOL FOR 5 3/4 YEARS.
I have encountered NUMEROUS students who, even to this day, do not know basic addition, subtraction, and multiplication facts entering the fourth grade. Fifth grade teachers lament the fact that
students do not know basic facts.
I think the problem is the lack of SYSTEMATIC instruction. In many places, like my current school, some teachers are allowed to do whatever they damn well please. Unfortunately, the student
suffers under the tutelage of the "all high and mighty." These all-high-and-mightys are either the ones who insist on games and gimmicks or are too lazy to properly teach and subsist on busy work.
I have used a program with components similar to DI (Voyager Time Warp). I like DI because of its simplicity and effectiveness. It takes the mystery out of instruction. It is consistent. It
works for both the fresh rookie and the experienced veteran. It engages students. No child is left behind. If these teachers had been using DI instead of doing their own thing, these kids would
be able to read now.
Rather than "ability" or "capability" grouping, why not call it "readiness" grouping. There are all sorts of reasons for being ready, or not ready, for a particular level of instruction. Prior
instruction, missing school because of illness, innate ability, emotional upset -- the list goes on.
re: Grouping
I wish this were just a matter of semantics, but the politics of hurt feelings has a powerful lobby. So, we must be careful with our word choice.
Joanne hit the nail on the head. These are skill based groups. These groups can be formed without assessment of fuzzy measures such as potential and readiness, or, ideally, even their age.
Rather, their membership should be based on what skills the child has attained at what rate they can acquire new skills.
Skill based groups should be unique for each subject area, including art, music, gym. Such grouping should allow fluid movement between groups throughout the year. The name "skill based" should
remind teacher and student that their task is to demonstrate skill mastery through any number of measurable objectives.
An art teacher I once worked with defined talent as the outward representation of acquired skills. Anyone could learn these skills. Some will master them faster or with a greater degree of
perfection than others, but we can all improve our skills. The physicist Richard Feynman knew this when he sought out someone who could teach him portraiture.
There should be no shame associated with skill based grouping, whatever it is called. I believe that by ensuring all subjects are so grouped, almost everyone is going to find themselves in a
variety of levels as compared to their peers. Most importantly, the requirements for advancing to the next group should be clear and explicit, with all children given the opportunities to stretch
themselves or be comfortable as they choose.
Ability grouping happens all of the time in schools. The reason it is frowned upon, is because these groups were historically static, keeping children on the same track with the same other
students at the same rate of learning.
The newest term is invitational groups, based on specific skills. So, if you had a class with a portion of fifth graders that needed decoding work, you could work with that group for just that
skill. Students could move in and out of these groups as skills were brought up to grade level (mastery is such a false term). This flexible grouping allows for students to spend most of their
time in independent practice and inquiry, while meeting for specific skill work that is needed.
In ability grouping, students are usually given a large assessment that measures a general level. If students are grouped based on this assessment, they will not receive specific skill work,
rather, they will progress at a learning rate at this level with similar peers. This just puts children on a bell curve in a smaller group. This approach is better than the whole class model, but
still not efficient or effective for each child.
A teacher that wants to effectively group students homogeneously will look at the results of a large assessment, and then get more information about students at levels of need by conducting
authentic assessments such as interviews, "kid watching," running records, informal reading assessments from student chosen text and teacher chosen text. This will allow the teacher to get a
better view of the whole child.
As for constructivism, it is a misconception that it is a pedagogical strategy. Constructivism is Piaget's theory of how students learn. When applied to education, it is very effective, but it takes a
masterful teacher and classroom manager. It is not an easy philosophy to be successful with because our students are tested, schools are organized and designed, and the culture of the public is
in line with traditional practices. In mathematics, children should learn the same content as traditional courses yet not rely on the same strategies, such as memorization of an algorithm.
Instead, they should be coached to acquire more and more sophisticated problem strategies that are elegant and efficient and practiced. Children should have strategies for solving a multi-digit
multiplication problem that are based on their own construction of knowledge that are accurate and work well for the child.
A child that has been in an effective constructivist setting should view each problem and devise a strategy that is most efficient for that problem (and accurate, logical of course). So for math
facts, memory is typically the best first strategy. If memory is not sufficient or developed around these facts, a child should still be able to construct an accurate response in a reasonable
amount of time.
The hardest thing to see is the role of the teacher. Teachers should be like facilitators, coaches, and learners all in one. When working with a math concept, a teacher can control the direction
of the class by having clear learning targets and seeking out mathematical thinking from students to meet these targets. So, for a multiplication lesson, if I wanted to see my class move from
counting on fingers to using an algorithm, I would provide time for my students to work independently and together on a situation or problem(s), and then look for a progression of strategies that
students were using that I could teach from via student presenting. I would allow the students to share and discuss a few strategies, and use this work to help my class see the connections, and
practice the more efficient strategies.
This teaching keeps all students in their zones of proximal development and allows them to accelerate by seeing and practicing strategies that are on the cusps of their learning edges.
We have a long way to go as educators as far as professional development goes, but many great things are happening in progressive education that cannot be ignored. | {"url":"https://d-edreckoning.blogspot.com/2007/04/effective-mathematics-instruction.html","timestamp":"2024-11-07T06:44:13Z","content_type":"text/html","content_length":"117976","record_id":"<urn:uuid:50d308ca-c0c4-4937-a9df-4cb6c97d5981>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00155.warc.gz"}
How to Design a Sleeve and Cotter Joint? - ExtruDesign
As you know, there are 3 different types of cotter joints. The Sleeve and Cotter Joint is one of them. The other two are the Socket and Spigot Cotter Joint and the Gib and Cotter Joint. In this article,
we discuss the construction of the Sleeve and Cotter Joint and how to design it to avoid possible failures in its different parts.
Cotter Joint
A cotter joint is a joint that is locked by a cotter. A cotter is a flat, wedge-shaped piece of rectangular cross-section whose width is tapered from one end to the other for easy adjustment.
Usually it is tapered on one side, but sometimes it may be tapered on both sides.
• A typical cotter has a taper that varies from 1 in 48 to 1 in 24, and it may be increased up to 1 in 8 if a locking device is provided.
• In order to lock the cotter in place, we need an additional locking device, such as a taper pin or a set screw at the lower end of the cotter, to keep it in place.
• The cotter is usually made of mild steel or wrought iron.
• The cotter joint comes under the temporary fastening method.
• Cotter joints are used to rigidly connect two co-axial rods or bars which are subjected to axial tensile or compressive forces.
• Cotter joints can be found in well-known applications such as:
□ Connecting a piston rod to the cross-head of a reciprocating steam engine,
□ Connecting a piston rod and its extension as a tail or pump rod,
□ The strap end of a connecting rod.
Sleeve and Cotter Joint
Following is the Sleeve and Cotter Joint schematic diagram.
As we have seen above, the sleeve and cotter joint is used to connect two round rods or bars.
• In this type of joint, a sleeve (also called a muff) is used over the two rods, and then two cotters (one at each rod end) are inserted in the holes provided for them in the sleeve and rods.
• The cotters usually have a taper of 1 in 24. It may be noted that the taper sides of the two cotters should face each other, as shown in the schematic diagram above.
• The clearance is so adjusted that when the cotters are driven in, the two rods come closer to each other thus making the joint tight.
Design of Sleeve and Cotter Joint
Let us assume
P = Load carried by the rods,
d = Diameter of the rods,
d[1] = Outside diameter of sleeve,
d[2] = Diameter of the enlarged end of the rod.
t = Thickness of cotter,
l = Length of cotter,
b = Width of cotter,
a = Distance of the rod end from the beginning to the cotter hole (inside the sleeve ends as mentioned above),
c = Distance of the rod end from its end to the cotter hole,
σ[t]= Permissible tensile stress for the material of the rods and cotter
τ = Permissible shear stress for the material of the rods and cotter
σ[c] = Permissible crushing stress for the material of the rods and cotter
Standard dimensions of Sleeve and Cotter Joint
The various proportions of the dimensions for the sleeve and cotter joint in terms of the diameter of the rod (d) are prescribed as follows.
Outside diameter of sleeve d[1] = 2.5d
Diameter of enlarged end of rod d[2] = Inside diameter of sleeve = 1.25 d
Length of sleeve L = 8 d
The thickness of cotter t = d[2]/4 or 0.31 d
Width of cotter b = 1.25 d
Length of cotter l = 4 d
Distance of the rod end (a) from the beginning to the cotter hole (inside the sleeve end) = Distance of the rod end (c) from its end to the cotter hole = 1.25d
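Since the article itself contains no code, the following is a minimal Python sketch (the function name and dictionary layout are my own) that tabulates these empirical proportions for a given rod diameter d:

def sleeve_cotter_proportions(d):
    """Empirical proportions for a sleeve and cotter joint,
    all in terms of the rod diameter d (same units as d)."""
    d2 = 1.25 * d  # enlarged rod end = inside diameter of sleeve
    return {
        "rod diameter d": d,
        "sleeve outside diameter d1": 2.5 * d,
        "enlarged rod end d2": d2,
        "sleeve length L": 8 * d,
        "cotter thickness t": d2 / 4,  # roughly 0.31 d
        "cotter width b": 1.25 * d,
        "cotter length l": 4 * d,
        "distance a": 1.25 * d,
        "distance c": 1.25 * d,
    }

print(sleeve_cotter_proportions(36))  # e.g. for a 36 mm rod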
In order to determine these dimensions for a sleeve and cotter joint, we need to consider the possible failures that may occur in the different parts of the joint and the permissible stresses for the
materials used. These possible failures are listed below.
• Failure of the rods in tension
• Failure of the rod in tension across the weakest section
• Failure of the rod or cotter in crushing
• Failure of sleeve in tension across the slot
• Failure of cotter in shear
• Failure of rod end in shear
• Failure of sleeve end in shear
These possible failures of different parts of the Sleeve and cotter joint are considered to determine the above parameters.
1. Diameter of the Rods
By considering the failure of the rods in tension, we can determine the diameter of the rods to be connected by the sleeve and cotter joint.
The rods may fail in tension due to the tensile load P.
The cross-sectional area resisting tearing of each rod is
= (π/4) × d^2
∴ The tearing strength of the rods is the product of the permissible tensile stress (σ[t]) and this area:
= (π/4) × d^2 × σ[t]
Equating this tearing strength to the load (P), we can write
P = (π/4) × d^2 × σ[t]
From this relation, we can determine the required diameter of the rods (d).
2. Diameter Of The Enlarged End Of The Rod (d[2])
By considering the Failure of the rod in tension across the weakest section, we can determine the diameter of the enlarged end of the rod as shown in the above diagram.
Since the weakest section is the section of the rod that has a slot in it for the cotter, the cross-sectional area resisting tearing of the rod across the slot is
= (π/4) × (d[2])^2 – d[2] × t
and the tearing strength of the rod across the slot is the product of the permissible tensile stress (σ[t]) and this area:
= [(π/4) × (d[2])^2 – d[2] × t] × σ[t]
Equating this tearing strength to the load (P), we can write
P = [(π/4) × (d[2])^2 – d[2] × t] × σ[t]
From this equation, the diameter of the enlarged end of the rod (d[2]) may be obtained.
Also, the thickness of the cotter is usually taken as d[2]/4.
3. Induced crushing stress (σ[c]) in rods or cotter
By considering the Failure of the rod or cotter in crushing, we can determine the Induced crushing stress in the rod or cotter.
We know that the cross-sectional area that resists crushing of the rod or cotter is
= d[2] × t
The Crushing strength will be the product of the area that resists the crushing and the permissible crushing stress.
We can write the relation as = d[2] × t × σ[c]
In order to resist the load, we must equate this crushing strength to the applied load (P), so we can write
P = d[2] × t × σ[c]
From this relation, the induced crushing stress in the rods or the cotter can be determined.
4. Outside Diameter Of The Sleeve (d[1])
By considering the Failure of the sleeve in tension across the slot, we can determine the outside diameter of the sleeve.
We know that the cross-sectional area of the sleeve resisting tearing across the slot is
= (π/4) × [(d[1])^2 – (d[2])^2] – (d[1] – d[2]) × t
∴ The tearing strength of the sleeve across the slot is the product of the permissible tensile stress (σ[t]) and this area:
= {(π/4) × [(d[1])^2 – (d[2])^2] – (d[1] – d[2]) × t} × σ[t]
Equating this tearing strength to the load (P), we can write
P = {(π/4) × [(d[1])^2 – (d[2])^2] – (d[1] – d[2]) × t} × σ[t]
From this relation, we can determine the outside diameter of the sleeve (d[1]).
5. Width Of Cotter (b)
The most critical part of the cotter joint is the cotter itself, so we need to prevent any chance of failure in it.
By considering the failure of the cotter in shear, we can determine the width of the cotter.
As the schematic representation shows, the cotter is pulled by two equal and opposite loads, which puts it in double shear.
The shearing area of the cotter is
= 2 b × t
and the shearing strength of the cotter is the product of the permissible shear stress (τ) and this area:
= 2 b × t × τ
Equating this to the load (P), we can write
P = 2 b × t × τ
From this relation, we can determine the width of cotter (b).
6. Distance of the rod end from the beginning to the cotter hole (a)
By considering the Failure of the rod end in shear, we can determine the distance of the rod end from the beginning to the cotter hole.
Since the rod end is in double shear, therefore the area resisting the shear of the rod end
= 2 a × d[2]
and the shear strength of the rod end is the product of the area resisting shear and the permissible shear stress (τ):
= 2 a × d[2] × τ
Equating this to the load (P), we can write
P = 2 a × d[2] × τ
From this relation, we can determine the distance of the rod end from the beginning to the cotter hole (a).
7. Distance of the rod end from its end to the cotter hole (c)
By considering the Failure of sleeve end in shear, we can determine the distance of the rod end from its end to the cotter hole.
Since the sleeve end is in double shear, the area resisting shear of the sleeve end is
= 2 (d[1] – d[2]) c
and the shear strength of the sleeve end is = 2 (d[1] – d[2]) c × τ
Equating this to the load (P), we can write
P = 2 (d[1] – d[2]) c × τ
From this relation, we can determine the distance (c).
These are all the required parameters that we need to determine for the Sleeve and the Cotter Joint.
Let us solve a sample problem to design a Sleeve and Cotter Joint.
Example Problem Statement
Design a sleeve and cotter joint to resist a tensile load of 60 kN. All parts of the joint are made of the same material with the following allowable stresses :
Permissible tensile Stress σ[t]= 60 MPa = 60 N/mm^2
Permissible Shear Stress τ = 70 MPa = 70 N/mm^2
Permissible Crushing Stress σ[c]= 125 MPa = 125 N/mm^2
Given load P = 60 kN = 60 × 10^3 N
1. Diameter of the rods
Let d = Diameter of the rods. Considering the failure of the rods in tension, we know that load (P),
60 × 10^3 = (π/4) × d^2 × 60
d^2 = 60 × 10^3 / 47.13
d^2 = 1273
d = 35.7 mm
We got the diameter of the rod as 35.7 mm, so we can take the rod diameter d as 36 mm.
2. Diameter of enlarged end of rod and thickness of cotter
Let d[2] = Diameter of enlarged end of the rod, and
t = Thickness of cotter. It may be taken as d[2] / 4.
Considering the failure of the rod in tension across the weakest section (i.e. slot). We know that load (P),
60 × 10^3 = [(π/4) × (d[2])^2 – d[2] × (d[2] / 4)] × 60
(d[2])^2 = 60 × 10^3 / 32.13
(d[2])^2 = 1867
d[2] = 43.2
The diameter of the enlarged end of the rod is 43.2 mm; we can take it as 44 mm,
and thickness of cotter will be t = d[2] / 4 = 44/4 = 11mm.
Let us now check the induced crushing stress in the rod or cotter. We know that load (P),
P = d[2] × t × σ[c]
60 × 10^3 = d[2] × t × σ[c]
60 × 10^3 = 44 × 11 × σ[c]
60 × 10^3 = 484 σ[c]
σ[c] = 60 × 10^3 / 484
σ[c] = 124 N/mm^2
We have the crushing stress of 124N/mm^2, which is within the given permissible crushing stress value of 125 N/mm^2, therefore the dimensions d[2] and t are within safe limits.
3. Outside diameter of sleeve
Let d[1] = Outside diameter of sleeve.
Considering the failure of the sleeve in tension across the slot. We know that load (P)
60 × 10^3 = {(π/4) × [(d[1])^2 – (44)^2] – (d[1] – 44) × 11} × 60
60 × 10^3 = {0.7854 (d[1]) ^2 – 1520.7 – 11 d[1] + 484} × 60
(d[1])^2 – 14 d[1] – 2593 = 0
Solving this quadratic equation, we get d[1] = 58.4 mm. Let us take the outside diameter of the sleeve as 60 mm.
4. Width of cotter
Let b = Width of cotter.
Considering the failure of cotter in shear. Since the cotter is in double shear, therefore load (P),
P = 2 b × t × τ
60 × 10^3 = 2 b × t × τ
60 × 10^3 = 2 × b × 11 × 70
60 × 10^3 = 1540 b
b = 60 × 10^3 / 1540
b = 38.96 mm
We got the width of the cotter as 38.96 mm, so let us take the width of the cotter as 40 mm.
5. Distance of the rod from the beginning to the cotter hole (a)
Let a = Required distance.
Considering the failure of the rod end in shear. Since the rod end is in double shear, therefore load (P),
P = 2 a × d[2 ]× τ
60 × 10^3 = 2 a × d[2] × τ
60 × 10^3 = 2 a × 44 × 70
60 × 10^3 = 6160 a
a = 60 × 10^3 / 6160
a = 9.74 mm
We got the distance of the rod from the beginning to the cotter hole (a) as 9.74 mm, so let us take this value as 10 mm.
6. Distance of the rod end from its end to the cotter hole
Let c = Required distance.
Considering the failure of the sleeve end in shear. Since the sleeve end is in double shear, therefore load (P),
P = 2 (d[1] – d[2]) c × τ
60 × 10^3 = 2 (d[1] – d[2]) c × τ
60 × 10^3 = 2 (60 – 44) c × 70
60 × 10^3 = 2240 c
c = 60 × 10^3 / 2240
c = 26.79 mm
We got the distance of the rod end from its end to the cotter hole (c) as 26.79 mm, so let us take this value as 28 mm.
These are all the parameters of the sleeve and cotter joint.
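To tie the whole procedure together, here is a hedged Python sketch (the structure and variable names are my own; it simply chains the strength equations derived above) that reproduces the numbers of this worked example:

import math

# Given data (load in N, allowable stresses in N/mm^2)
P, sigma_t, tau, sigma_c = 60e3, 60.0, 70.0, 125.0

# 1. Rod diameter from tension in the rod: P = (pi/4) d^2 sigma_t
d = math.sqrt(P / (math.pi / 4 * sigma_t))  # 35.7 mm -> adopt 36 mm
d = 36

# 2. Enlarged rod end across the slot, with t = d2/4:
#    P = [(pi/4) - 1/4] d2^2 sigma_t
d2 = math.sqrt(P / ((math.pi / 4 - 0.25) * sigma_t))  # 43.2 mm -> adopt 44 mm
d2, t = 44, 44 / 4  # t = 11 mm

# Check induced crushing stress: P / (d2 t) must not exceed sigma_c
assert P / (d2 * t) <= sigma_c  # 124 <= 125 N/mm^2, safe

# 3. Sleeve outside diameter: quadratic (pi/4) d1^2 - t d1 - k = 0
k = math.pi / 4 * d2**2 - t * d2 + P / sigma_t
d1 = (t + math.sqrt(t**2 + 4 * (math.pi / 4) * k)) / (2 * (math.pi / 4))
d1 = 60  # 58.4 mm -> adopt 60 mm

# 4. Cotter width from double shear of the cotter: P = 2 b t tau
b = P / (2 * t * tau)  # 38.96 mm -> adopt 40 mm

# 5. Rod-end distance from double shear: P = 2 a d2 tau
a = P / (2 * d2 * tau)  # 9.74 mm -> adopt 10 mm

# 6. Sleeve-end distance from double shear: P = 2 (d1 - d2) c tau
c = P / (2 * (d1 - d2) * tau)  # 26.79 mm -> adopt 28 mm

print(d, d2, t, d1, round(b, 2), round(a, 2), round(c, 2))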
Applications of Cotter Joints
• Historically, the cotter joint has been used to connect connecting rods to steam engines and to pumps used in mines.
• Cotter Joints are used between the piston rod and the tail of the pump rod.
• Cotter joints are used between the slide spindle and the fork of the valve mechanism.
• A cotter and dowel arrangement is used to join the two parts of a flywheel.
• Foundation bolts, mainly used for fastening heavy construction machines to their foundations, also employ cotters.
• In an automobile engine, the cotter joint is used to connect the extension of the piston rod with the connecting rod in the crosshead.
• It is used in bicycles to connect the pedal crank to the sprocket wheel.
• In a wet air pump, it is used to join the tail rod with the piston rod.
• It is used to connect two rods of equal diameter subjected to axial forces. | {"url":"https://extrudesign.com/how-to-design-a-sleeve-and-cotter-joint/","timestamp":"2024-11-06T07:17:26Z","content_type":"text/html","content_length":"107930","record_id":"<urn:uuid:739936a6-bca1-4ee3-abfd-76d6962bcb6e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00354.warc.gz"}
Solving impossible equations
Eric Michielssen has discovered a new way to rapidly analyze electromagnetic phenomena, and it’s catching on.
Eric Michielssen, professor of Electrical and Computer Engineering, has received the Sergei A. Schelkunoff Transactions Prize Paper Award for research impacting the ability to rapidly analyze
electromagnetic phenomena.
This award is presented to the authors of the best paper published in the IEEE Transactions on Antennas and Propagation during the previous year.
The 2017 paper, “A Butterfly-Based Direct Integral-Equation Solver Using Hierarchical LU Factorization for Analyzing Scattering From Electrically Large Conducting Objects,“ co-authored by Han Guo
(ECE doctoral student), Yang Liu (MSE PHD, EE, 2013 2015; Lawrence Berkeley National Lab), and Prof. Jun Hu (UESTC), describes a new algorithm for solving Maxwell’s equations that is orders of
magnitude faster than prior algorithms, opening the door to its use for the design and optimization of electromagnetic devices.
Maxwell’s equations are a set of four partial differential equations published in 1865 that scientifically explained light, electricity, and magnetism for the first time. Called the “second great
unification in physics,” these equations continue to hold the key to advancements in a wide array of applications, in particular high-frequency electronic devices and systems, and optics.
To solve Maxwell’s equations, Michielssen and his group convert them into a system of linear equations. In the last two decades, the number of unknowns in these linear equations has increased from
about 100,000 to tens of millions.
“Just writing out these equations by hand would require a sheet of paper the size of the United States,” says Michielssen. “Using traditional methods, not even a suite of high-performance computers
would be able to solve a matrix of this size.”
That is, until Michielssen revisited an algorithm that he developed back in 1996. This algorithm, which came to be known as Butterfly because of its resemblance to the Fast Fourier transform, was
able to compress a system of linear equations. Michielssen originally used the algorithm to efficiently store systems of equations in computer memory.
At the time, a scheme for using the butterfly compression scheme to actually solve the equations remained elusive, thus limiting the practical applicability of the method.
However, that all changed when Michielssen and his co-workers adapted his old Butterfly compression scheme to directly solve highly complex equations quickly and efficiently.
The solution scheme is remarkably similar to the old tried and true method of Gaussian elimination for solving linear systems of equations by eliminating unknowns, a simple concept taught in most
high schools.
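For readers who want to see the high-school concept the article alludes to, here is a minimal Gaussian-elimination sketch in Python with partial pivoting. To be clear, this is only the textbook algorithm, not the butterfly-compressed solver described in the paper:

import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # Forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))  # partial pivoting for stability
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # Back substitution, starting from the last unknown
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(gauss_solve(A, b), np.linalg.solve(A, b))  # the two results should agree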
“Now that we can rapidly analyze devices,” said Michielssen, “we’re asking ourselves – how can we optimize and synthesize them. That’s the next step. That means you have to do these equations over
and over again – and that changes the game.”
And this is where the new method is expected to really shine.
Traditional techniques for analyzing electromagnetic phenomena rely on indirect, iterative solution methods that are notoriously expensive in terms of computational resources as well as time when
used in optimization settings. In contrast, Michielssen’s method, rooted in Gaussian elimination, can be applied rapidly to problems that require the repeated solution of perturbed (i.e., slightly
altered) equations, a situation that naturally arises when synthesizing the shape or material composition of a device.
“Our butterfly scheme has blown new life into direct solution methods for Maxwell’s equations,” Michielssen said. “For many real-world problems out there, our Butterfly method is orders of magnitude
faster than prior algorithms, while using fewer computing resources.”
Michielssen’s approach is applicable anywhere you need to solve Maxwell’s equations. While he has used it primarily for radar cross section, the same technique can be used for antenna design,
wireless system analysis, monitoring signal integrity, as well as high-frequency terahertz and imaging systems.
The calculations were done on the FLUX high-performance computing cluster at Michigan.
“This research would not have been possible without the vast computing resources available here at Michigan,” said Michielssen.
Michielssen, Louise Ganiard Johnson Professor of Engineering, is a Professor of Electrical Engineering & Computer Science, Associate Vice President for Advanced Research Computing, and Co-Director of
the Precision Health Initiative. He is also Editor-in-Chief of the International Journal of Numerical Modelling, Electronic Networks, Devices, and Fields. | {"url":"https://eecsnews.engin.umich.edu/solving-impossible-problems/","timestamp":"2024-11-07T01:22:51Z","content_type":"text/html","content_length":"38539","record_id":"<urn:uuid:26146fbc-72b5-4e37-98d6-0c30a03e52aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00107.warc.gz"} |
How to assign clusters to new observations (test data) using scipy's hierarchical clustering
from scipy.cluster.hierarchy import dendrogram, linkage,fcluster
import numpy as np
import matplotlib.pyplot as plt
# data
np.random.seed(4711) # for repeatability of this tutorial
a = np.random.multivariate_normal([10, 0], [[3, 1], [1, 4]], size=[100,])
b = np.random.multivariate_normal([0, 20], [[3, 1], [1, 4]], size=[50,])
X = np.concatenate((a, b),)
plt.scatter(X[:,0], X[:,1])
# fit clusters
Z = linkage(X, method='ward', metric='euclidean')  # 'preserve_input' removed: it is a fastcluster option, not a scipy one
# form flat clusters by cutting the dendrogram at distance max_d
max_d = 50
clusters = fcluster(Z, max_d, criterion='distance')
# now if I have new data
a = np.random.multivariate_normal([10, 0], [[3, 1], [1, 4]], size=[10,])
b = np.random.multivariate_normal([0, 20], [[3, 1], [1, 4]], size=[5,])
X_test = np.concatenate((a, b),)
print(X_test.shape) # 15 samples with 2 dimensions
plt.scatter(X_test[:,0], X_test[:,1])
How do I compute distances for the new data and assign clusters using the clusters from the training data?
Code reference: joernhees.de
You do not have to compute distances for the new data and assign clusters using the clusters from the training data, because clustering is an exploratory approach: it does not have separate training and test stages.
For this algorithm, you cannot assign new data to the old structure, as the new data can change the structure that was previously discovered.
You can use a classifier if you want classification.
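One pragmatic workaround, continuing the variables from the question's code, is to freeze the discovered structure: treat the fcluster labels as class labels and assign each new point to the nearest training-cluster centroid. Note that this is not part of scipy's hierarchical-clustering API; it is exactly the kind of classifier shortcut suggested above:

import numpy as np

def assign_to_clusters(X_train, labels, X_new):
    # centroid of each training cluster, in label order
    uniq = np.unique(labels)
    centroids = np.array([X_train[labels == c].mean(axis=0) for c in uniq])
    # pairwise distances: (n_new, n_clusters)
    dists = np.linalg.norm(X_new[:, None, :] - centroids[None, :, :], axis=2)
    return uniq[np.argmin(dists, axis=1)]

test_labels = assign_to_clusters(X, clusters, X_test)
print(test_labels)

# Equivalently, any off-the-shelf classifier works, e.g.:
# from sklearn.neighbors import KNeighborsClassifier
# test_labels = KNeighborsClassifier(1).fit(X, clusters).predict(X_test)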
| {"url":"https://intellipaat.com/community/4575/how-to-assign-clusters-to-new-observations-test-data-using-scipys-hierchical-clustering","timestamp":"2024-11-14T01:09:48Z","content_type":"text/html","content_length":"100314","record_id":"<urn:uuid:9f8714c4-b990-4f25-9c44-0a34a58c98e6>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00427.warc.gz"}
Digestive Tract Diagram Letters And Numbers Worksheet
Digestive Tract Diagram Letters And Numbers Worksheet act as fundamental tools in the world of maths, supplying a structured yet versatile platform for learners to discover and understand numerical
ideas. These worksheets provide an organized approach to understanding numbers, supporting a solid structure whereupon mathematical efficiency grows. From the simplest counting exercises to the ins
and outs of innovative calculations, Digestive Tract Diagram Letters And Numbers Worksheet satisfy students of diverse ages and skill levels.
Unveiling the Essence of Digestive Tract Diagram Letters And Numbers Worksheet
Digestive Tract Diagram Letters And Numbers Worksheet
Learners will use a word bank to label 15 parts of the human digestive system in this life sciences diagramming activity This worksheet designed for fifth graders is a memorable introduction to human
anatomy and vocabulary and
Test your knowledge of the digestive system with this worksheet Label the parts of the digestive system by cutting out the organ names at the bottom and pasting them in the correct places
At their core, Digestive Tract Diagram Letters And Numbers Worksheet are automobiles for theoretical understanding. They encapsulate a myriad of mathematical concepts, guiding learners with the
labyrinth of numbers with a series of appealing and purposeful exercises. These worksheets go beyond the limits of conventional rote learning, motivating active involvement and cultivating an
instinctive understanding of numerical relationships.
Supporting Number Sense and Reasoning
Free Digestive System Printables Printable Form Templates And Letter
Free Digestive System Printables Printable Form Templates And Letter
Use this cut and stick worksheet to support learning on the digestive system in KS2 science Students can follow the instructions to produce a differentiated labelled and interactive diagram to help
them explore the human digestive system If you want to use some themed classroom displays for your lesson you can download this Digestive
Finish This interactive worksheet will help students learn about the digestive system
The heart of Digestive Tract Diagram Letters And Numbers Worksheet depends on cultivating number sense-- a deep comprehension of numbers' meanings and interconnections. They encourage expedition,
welcoming students to study arithmetic procedures, decode patterns, and unlock the mysteries of sequences. With provocative challenges and logical challenges, these worksheets end up being gateways
to honing reasoning abilities, supporting the analytical minds of budding mathematicians.
From Theory to Real-World Application
DIAGRAM Digestive System Diagram Worksheet For Kids MYDIAGRAM ONLINE
DIAGRAM Digestive System Diagram Worksheet For Kids MYDIAGRAM ONLINE
Digestive system diagram worksheet Live Worksheets Home Worksheets Digestive system diagram
Included are handouts and fillable worksheets in color and black and white that outline the parts of the digestive system Handouts with labels are provided as well as fillable worksheets This
Digestive Tract Diagram Letters And Numbers Worksheet serve as channels linking academic abstractions with the palpable realities of day-to-day life. By infusing sensible circumstances right into
mathematical exercises, learners witness the relevance of numbers in their surroundings. From budgeting and dimension conversions to comprehending analytical data, these worksheets encourage trainees
to possess their mathematical prowess past the confines of the class.
Varied Tools and Techniques
Adaptability is inherent in Digestive Tract Diagram Letters And Numbers Worksheet, using a toolbox of instructional devices to accommodate different knowing styles. Aesthetic aids such as number
lines, manipulatives, and electronic resources act as buddies in visualizing abstract concepts. This varied technique makes sure inclusivity, accommodating students with different preferences,
toughness, and cognitive designs.
Inclusivity and Cultural Relevance
In a progressively varied world, Digestive Tract Diagram Letters And Numbers Worksheet welcome inclusivity. They transcend cultural boundaries, integrating examples and problems that resonate with
students from diverse backgrounds. By including culturally relevant contexts, these worksheets promote an atmosphere where every student really feels represented and valued, enhancing their
connection with mathematical concepts.
Crafting a Path to Mathematical Mastery
Digestive Tract Diagram Letters And Numbers Worksheet chart a training course in the direction of mathematical fluency. They impart willpower, important thinking, and problem-solving skills, crucial
attributes not only in maths but in different elements of life. These worksheets empower learners to browse the elaborate surface of numbers, supporting an extensive recognition for the
sophistication and logic inherent in maths.
Accepting the Future of Education
In an era noted by technical development, Digestive Tract Diagram Letters And Numbers Worksheet flawlessly adjust to electronic platforms. Interactive user interfaces and digital resources augment
conventional discovering, supplying immersive experiences that go beyond spatial and temporal borders. This combinations of standard techniques with technological technologies heralds an encouraging
age in education and learning, promoting a more dynamic and appealing discovering environment.
Final thought: Embracing the Magic of Numbers
Digestive Tract Diagram Letters And Numbers Worksheet illustrate the magic inherent in maths-- an enchanting trip of expedition, exploration, and proficiency. They transcend traditional pedagogy,
acting as drivers for sparking the fires of inquisitiveness and inquiry. Via Digestive Tract Diagram Letters And Numbers Worksheet, students embark on an odyssey, unlocking the enigmatic globe of
numbers-- one trouble, one remedy, at a time.
Digestive System Diagram Worksheets
Draw A Labelled Diagram Of The Human Digestive System And Explain It
Check more of Digestive Tract Diagram Letters And Numbers Worksheet below
Human Anatomy Diagram Worksheet
Parts Of The Digestive System Worksheet
Digestive System Worksheet Answer Key Beautiful 13 Best Of Biology Corner Worksheets Answer Key
Vector Illustration Of A Black And White Digestive Tract Diagram Labeled With Text By
Digestive Tract Basicmedical Key
3a1 Digestive System Nature Journals
Label The Digestive System Cut And Paste All Kids Network
Test your knowledge of the digestive system with this worksheet Label the parts of the digestive system by cutting out the organ names at the bottom and pasting them in the correct places
Human Digestive System Labeling Sendat academy
Cut out the labels and stick them on to the correct digestive parts in this diagram Label the digestive parts in this diagram
Vector Illustration Of A Black And White Digestive Tract Diagram Labeled With Text By
Parts Of The Digestive System Worksheet
Digestive Tract Basicmedical Key
3a1 Digestive System Nature Journals
Digestive System Diagram Diabetes Inc
Digestive System Unit Reading Diagrams Worksheets Advanced Downloadable Only
Directions Label The Parts Of Human Digestive System And Pair It With Its Proper Function By | {"url":"https://szukarka.net/digestive-tract-diagram-letters-and-numbers-worksheet","timestamp":"2024-11-09T00:27:56Z","content_type":"text/html","content_length":"25806","record_id":"<urn:uuid:41f39712-44c0-4a70-bec9-ccfab7728f5c>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00173.warc.gz"} |
The new solution framework for Ordinary Differential Equations (ODEs) in MATLAB R2023b
Along with linear algebra, one of the iconic features of MATLAB in my mind is how it handles ordinary differential equations (ODEs). ODEs have been part of MATLAB almost since the very beginning.
One of the features of how MATLAB traditionally allows users to solve ODEs is that it provides a suite of functions. For many years, there were 7 routines in the MATLAB suite, described in this 2014
post by Cleve Moler and in more detail in the 1997 paper, The MATLAB ODE Suite. In 2021b, 2 new high-order methods were added to the suite, ode78 and ode89 based on the algorithms described in the
paper Numerically optimal Runge–Kutta pairs with interpolants.
The design of the suite was elegant and has served the community well for over 25 years! However, MATLAB has evolved a lot since it was designed and user-expectations have evolved with it. It was
felt that it was time for a fresh look at how to solve ODEs in MATLAB, one that would additionally support our future plans in a modern and elegant manner.
Before I dive into the new interface, I'd like to make it clear that the existing suite of ODE functions are not going anywhere! Millions of lines of code make use of ode45 and friends and we have no
plans on doing anything that would break that code. This is about adding functionality, not taking anything away.
Furthermore, the focus of this release is the new interface. There are no new solvers or any fundamentally new functionality....yet! However, some things will be significantly easier to do using the
new interface.
With that said, let's take a look at how the new design looks.
Solving ODEs in MATLAB the OOP way
Say we want to solve the ODE
$y' = 2t$
with the initial condition $y_0 = 0$ over the interval $t = [0, 10]$
With the new interface, we can set up and solve this problem as follows
F = ode(ODEFcn=@(t,y) 2*t,InitialTime=0,InitialValue=0); % Set up the problem by creating an ode object
sol = solve(F,0,10); % Solve it over the interval [0,10]
plot(sol.Time,sol.Solution,"-o") % Plot the solution
An alternative way to proceed would have been to start with an empty ode object and add one property at a time:
F = ode; % Empty ode object called F
F.ODEFcn = @(t,y) 2*t; % Add the function we want to solve to F
F.InitialTime = 0; % Add the initial time to F
F.InitialValue = 0; % Add the initial value to F
Once our problem is set up, we pass it to the solve function.
The solution is an ODEResults object with two properties: Time and Solution
You access either property with . notation
Automatic solver selection
One of the first things to note in the above workflow is that we didn't choose a solver. That is, I didn't have to think about whether I should use ode45, ode78, etc. All I did was state the problem mathematically and ask MATLAB to solve it. Let's take a closer look at the details.
I set up my problem like this:
F = ode(ODEFcn=@(t,y) 2*t,InitialTime=0,InitialValue=0);
F is an ode object:
I can see the details by evaluating F
The object display separates the mathematical problem definition from how we are going to solve it. By default, the Solver property of the class is set to "auto" which means that MATLAB attempts to
choose a suitable solver based on various properties of the problem we've asked it to solve. In this case it has chosen ode45 which is a pretty safe bet for many problems.
Ask for a tighter tolerance for this problem, however, and it switches to using ode78.
F.RelativeTolerance = 1e-7
At this stage, MATLAB hasn't solved anything. We've just set up our problem and made decisions about how we are going to solve it. We need the solve function to complete the job.
One thing to bear in mind is that the solver that "auto" chooses may change in future releases for a number of reasons. For example, we may improve the heuristics used to select the best solver or
maybe add new solvers that do a better job for a given problem.
If you want to override what MATLAB chooses and fix the solver type you can do that as follows
F.Solver = "ode45" % Force the framework to use ode45.
Trying a stiff ODE problem
I was curious about how it would handle a stiff ODE and so fed it the example given by Cleve in his blog post Ordinary Differential Equations, Stiffness.
delta = 0.01; % value assumed here so the snippet runs; Cleve's post uses a small delta of this order
CleveF = ode(ODEFcn=@(t,y) y^2 - y^3);
CleveF.InitialValue = delta;
sol = solve(CleveF,0,2/delta);
Looks reasonable. Following Cleve's suggestion, we zoom in on the steady state that begins at x=1 and see that the solver is working hard to do its job, just as it did when Cleve explicitly chose
Sure enough, the "auto" option has also chosen ode45 in this case.
Using the old solver suite, I'd now have to dig into the documentation and read about the stiff solvers available to me before trying those out. With the new interface, however, I just tell it that I
think the problem is stiff and it will choose a stiff solver for me.
This time it has selected ode15s for me. Let's see how that does
stiffSol = solve(CleveF,0,2/delta);
Zooming in at x=1:
Much better behaved!
Being able to tell MATLAB that the problem is "stiff" makes life a little easier than before, but I was disappointed that "auto" didn't realise that my problem was stiff and choose a relevant solver
for me.
I reached out to development to ask why it failed me. They told me that the automatic solver selection is a heuristic that operates without peering inside your equations. It reacts to the data you
supply to define the problem, e.g. if you've supplied a Jacobian, what the relative tolerances are, etc. There's no attempt to diagnose stiffness at all. That's why there are "auto", "stiff", and
"nonstiff" automatic modes.
Of course, this may change in future releases. Now this feature exists, it will be possible to improve on it and its already pretty useful!
More flexible in time
Consider the following classic system of first-order ODEs that describes simple harmonic motion
$\frac{dy_1}{dt} = y_2, \qquad \frac{dy_2}{dt} = -y_1$
subject to the initial conditions $y_1(0) = 1$ and $y_2(0) = 0$
With the classic suite of solvers, you might solve this as follows
y0 = [1; 0]; % initial conditions: y1(0)=1, y2(0)=0
tspan = [0 10]; % time interval from 0 to 10 for the solution
[t,y] = ode45(@(t,y)[y(2);-y(1)], tspan,y0);
So far so simple. Imagine now that I want to go backwards in time from my initial condition as well as forwards. That is, I want my time span to be
tspan2 = [-10 10]; % time interval from -10 to 10 for the solution
This is a little complicated using ode45 since it assumes that the initial condition you set using y0 corresponds to the beginning of the span. That is, if I do
[t,y] = ode45(@(t,y)[y(2);-y(1)], tspan2,y0);
title("Not the boundary condition I wanted!")
it applies the boundary condition $y_1(-10) = 1$ and $y_2(-10) = 0$, which is not the problem I wanted it to solve! Instead I have to make two calls to ode45 -- one that goes forwards in time and the other that goes backwards.
y0 = [1; 0]; % initial conditions: y1(0)=1, y2(0)=0
[tf,yf] = ode45(@(t,y)[y(2);-y(1)], [0 10],y0);
[tb,yb] = ode45(@(t,y)[y(2);-y(1)], [0 -10],y0);
We got there but it took a couple of backflips. With the new interface, I don't need to worry about this as its much more flexible.
shm = ode(ODEFcn=@(t,y)[y(2);-y(1)],InitialTime=0,InitialValue=[1;0],Solver="ode45");
shmSol = solve(shm,-10,10);
title("The same result but we got there more easily")
Unlike the traditional suite, the solution span we ask for doesn't even need to include our initial conditions.
% Request a solution span that doesn't include the initial conditions at t=0
shmSol2 = solve(shm,2*pi,4*pi)
The closeness of the solution points around t = 0 reflects the solver starting the two integrations with a small initial step size. If we would rather choose the output points ourselves, we can simply
specify them.
shmSol3 = solve(shm,linspace(-pi,pi));
We can get all of these results using the traditional suite, of course, it's just that its easier and more intuitive now.
Event detection
Event detection has always been part of the MATLAB ODE suite and so, of course, this is also possible with the new framework. The documentation for ode demonstrates how to solve the classic bouncing
ball problem using event detection. Here, I'll demonstrate how to use it to find maxima and minima of the solution to an ODE problem. The original problem comes from a 2011 blog post by John Kitchin.
First, let's solve the ODE $y' = e^{-0.05t}\sin(t)$
myODE = ode(ODEFcn=@(t,y) exp(-0.05*t)*sin(t),InitialTime=0,InitialValue=0);
sol = solve(myODE,0,20,Refine=10);
We want to find the maxima and minima of this solution. We know from calculus that these occur when our original equation $y' = e^{-0.05t}\sin(t)$ is equal to zero.
odeEvent objects define events that the solver detects while solving an ordinary differential equation. An event occurs when one of the event functions you specify crosses zero.
So, we create an odeEvent object with an event function equal to our original ODE.
E = odeEvent(EventFcn = myODE.ODEFcn)
This time, let's solve the ODE with this event definition
myODE = ode(ODEFcn=@(t,y) exp(-0.05*t)*sin(t),InitialTime=0,InitialValue=0,EventDefinition=E);
sol = solve(myODE,0,20,Refine=10)
The solution includes all of the times the event occurs. Let's plot them
Exactly what we were looking for.
Obtaining the ODE solution as a function
Until now, we have been working with discrete representations of the solutions to our ODE problems and this can present limitations on further analysis. For example, what if we wanted to integrate
the solution? What we need is a function that represents the solution and we can get one using solutionFcn, passing in the ode object we are aiming to solve.
myODE = ode(ODEFcn=@(t,y) exp(-0.05*t)*sin(t),InitialTime=0,InitialValue=0); % Define the ODE problem
solFcn = solutionFcn(myODE,0,20); % Use it to create a solutionFcn
I can now evaluate the solution at any point I like, for example solFcn(1.5),
or pass it to MATLAB's integral function, for example integral(solFcn,0,20)
Again, you could have done something similar with the old framework using deval and friends but I much prefer the new way!
Mass Matrices, Jacobians, Parameters and everything else I haven't covered here
This is just an introduction to the new framework and I encourage you to read the documentation to explore everything else it offers. Let me know what you think in the comments section.
| {"url":"https://blogs.mathworks.com/matlab/2023/10/03/the-new-solution-framework-for-ordinary-differential-equations-odes-in-matlab-r2023b/?s_tid=prof_contriblnk","timestamp":"2024-11-15T04:41:36Z","content_type":"text/html","content_length":"267179","record_id":"<urn:uuid:5bb03c05-2098-41df-873f-9b9e823b9d12>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00823.warc.gz"}
ACO Seminar
The ACO Seminar (2021–2022)
October 21
, 3:30pm, Wean Hall 8220
Joel Spencer
, New York University
Balancing Problems: A Fourfold Approach
The balancing of items -- or discrepancy -- arises naturally in Computer Science and Discrete Mathematics. Here we consider n vectors in n-space, each with all coordinates +1 or -1. We create a signed sum of the
vectors, with the goal that this signed sum be as small as possible. Here we use the max [or L-infinity] norm, though many variants are possible. We create a game with Paul (Erdos) selecting the
vectors and Carole (find the anagram!) choosing to add or subtract. This becomes four (two times two) different problems. The vectors (Paul) can be chosen randomly or adversarially, equivalently
average case and worst case analysis for Carole. The choice of signed sum (Carole) can be done on-line or off-line. All four variants are interesting and are at least partially solved. We emphasize
the random (Paul) on-line (Carole) case, joint work with Nikhil Bansal | {"url":"https://aco.math.cmu.edu/abs-21-22/oct21.html","timestamp":"2024-11-11T19:43:13Z","content_type":"text/html","content_length":"2775","record_id":"<urn:uuid:f22d55cd-783f-4285-be2b-798b52ce5e4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00319.warc.gz"} |
Is physics or mathematics? - IHSSNC
Is physics or mathematics?
Physics is a fascinating field that bridges the gap between science and mathematics. While physics itself is a science, it heavily relies on mathematical principles to explain and predict natural
phenomena. The relationship between physics and math is intricate, with math providing the language through which the laws of physics can be articulated and understood.
Mathematics serves as the foundation upon which the theories and concepts of physics are built, allowing scientists to develop mathematical models to describe the behavior of physical systems.
However, it is important to recognize that physics goes beyond mere mathematical equations, as it involves experimentation, observation, and testing of hypotheses to validate theoretical predictions.
Ultimately, physics and mathematics are closely intertwined disciplines that complement each other in unraveling the mysteries of the universe.
The eternal debate over whether physics is a science or a branch of mathematics is complex – and the answer is equally layered. Physics does involve a great deal of mathematics; however, at its core, physics is
a science. In what follows, we examine the fundamental questions around our understanding of physics and unravel its connection with both science and math.
Physics, frequently known as “the science of matter, energy, space, and time”, is intrinsically a scientific branch of knowledge. It is a science because it relies on empirical evidence, experimental
methods, observable phenomena, and logical analysis to understand the natural world. The purpose of physics as a science is to understand the behavior of the universe and explain its various
Role of Mathematics in Physics
That being said, one cannot overlook the crucial role mathematics plays in physics. Mathematics is the abstract tool physicists use to build conceptual frameworks. These frameworks not only explain
current observations but also predict future events. Thus, it is not inaccurate to regard the components of physics as mathematical.
Physics: The Interplay Between Science and Mathematics
The relationship between physics (science) and mathematics is rich, dynamic, and complicated. Understanding physics requires more than just understanding the mathematics it employs. In physics, math
can serve as a predictive tool, aiding in expressing theories and directly linking concepts with empirical data and observation.
The Predictive Role of Mathematics in Physics
From Newton’s laws of motion to Einstein’s theory of general relativity, the essential role of mathematics in forming, expressing, and testing physical theories is evident. Mathematics allows
scientists to quantify physical properties and derive other properties. Thus, it is used as a predictive tool in physics.
How Physics is More Than Just Mathematics
While math plays a vital role in physics, to view physics merely as math would be understating the richness and depth of physics as a science. Physics goes beyond just the numbers; it deals with
conceptual reality, something that mathematics alone cannot provide.
Physics: A Synthesis of Empirical Knowledge and Mathematical Representation
Physics is the bridge between the empirical world and abstract mathematical structures. It integrates the precision of mathematics with the raw empirical data and the observed universe, making it
intrinsically a science.
The Balance Between Physics as a Science and its Mathematical Nature
In conclusion, while physics does heavily employ mathematical constructs and methodologies, it remains fundamentally a science. It integrates the empirical with the abstract, making it far more than
just a branch of mathematics. The question is not whether physics is a science or math, but rather understanding that it is a science that uses math as a crucial tool. The relationship between
physics and mathematics is thus not one of competition, but of coexistence and symbiosis.
Physics can be considered both a science and a branch of mathematics. While it uses mathematical concepts to describe and analyze physical phenomena, it also relies on empirical observation,
experimentation, and the scientific method to discover new insights about the natural world. Overall, the interdisciplinary nature of physics blurs the boundaries between science and math,
highlighting the interconnectedness of different fields of study in advancing our understanding of the universe.
Leave a Comment | {"url":"https://ihssnc.org/is-physics-or-mathematics/","timestamp":"2024-11-05T14:00:34Z","content_type":"text/html","content_length":"138303","record_id":"<urn:uuid:afc17dac-30b5-436b-b686-098041018851>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00391.warc.gz"} |
Intermediate Value Theorem - (Universal Algebra) - Vocab, Definition, Explanations | Fiveable
Intermediate Value Theorem
The Intermediate Value Theorem states that for any continuous function defined on a closed interval, if the function takes on two values at the endpoints of the interval, then it must take on every
value in between at least once. This theorem emphasizes the behavior of continuous functions and is essential in understanding the characteristics of polynomial functions, as well as the
correspondence between congruences and subalgebras in algebraic structures.
5 Must Know Facts For Your Next Test
1. The Intermediate Value Theorem applies specifically to continuous functions, meaning that there are no interruptions in their graph.
2. For polynomial functions, which are always continuous, this theorem guarantees that if you know the values at two points, you can find any value between those points in the output.
3. This theorem can also be used to show the existence of roots for equations, meaning if a polynomial changes sign over an interval, there must be at least one root within that interval.
4. In relation to congruences and subalgebras, this theorem illustrates how values can map consistently through equivalence classes, ensuring each class can represent intermediate values.
5. The theorem not only helps in understanding polynomial behaviors but also plays a critical role in numerical methods for finding roots.
Review Questions
• How does the Intermediate Value Theorem relate to the properties of polynomial functions?
□ The Intermediate Value Theorem is directly linked to polynomial functions because these functions are continuous everywhere. This means that if you have a polynomial function with values at
two points that differ in sign, the theorem ensures that there is at least one root within that interval. Understanding this connection helps clarify why polynomial functions behave
predictably and ensures that they cover all values between their endpoints.
• Discuss how the Intermediate Value Theorem supports the existence of congruence relations within algebraic structures.
□ The Intermediate Value Theorem supports congruence relations by showing how values transition smoothly within algebraic structures. Since congruences create partitions within a structure into
distinct classes, the theorem implies that if a function is continuous over these classes, it must attain all intermediate values between any two classes. This helps establish a framework for
understanding how functions behave under congruences and assures us that transitions between classes are consistent and represent all necessary values.
• Evaluate the implications of the Intermediate Value Theorem on numerical methods used for finding roots of equations.
□ The implications of the Intermediate Value Theorem on numerical methods are significant as it provides a theoretical foundation for algorithms such as bisection or Newton's method. These
methods rely on identifying intervals where a function changes sign, implying the existence of roots based on the theorem. By confirming that roots must exist within specific intervals due to
continuity, these numerical methods can effectively approximate solutions and enhance our understanding of function behavior in various contexts.
| {"url":"https://library.fiveable.me/key-terms/universal-algebra/intermediate-value-theorem","timestamp":"2024-11-07T00:29:29Z","content_type":"text/html","content_length":"153148","record_id":"<urn:uuid:7698b487-f08b-463b-8256-c9d77221ac81>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00419.warc.gz"}
What is the polar form of (2,3)? | HIX Tutor
What is the polar form of #(2,3)#?
Answer 1
$\left(\sqrt{13} , {\tan}^{- 1} \left(\frac{3}{2}\right)\right)$
To write in polar form, you need to know:
1. the distance #r# of the point from the origin, and
2. the angle #theta# the point makes with the positive #x#-axis.
To solve 1. we use Pythagoras' Theorem
#r = sqrt(2^2 + 3^2)#
#= sqrt13#
To solve 2. we first find the quadrant that the point lies in.
#x# is positive and #y# is positive #=># quadrant I
Then we can find the angle by directly taking the inverse tangent of #y/x#. (Note that this is only applicable to quadrant I.)
#theta = tan^{-1}(3/2)#
#~~ 0.983# (in radians)
Therefore, the polar coordinate is #(sqrt13,tan^{-1}(3/2))#
Note that the answer above is not unique. You can add any integer multiples of #2pi# to #theta# to get other representations of the same point.
Answer 2
The polar form of the point (2,3) in the Cartesian coordinate system can be found using the conversion formulas from rectangular (Cartesian) coordinates to polar coordinates:
r = sqrt(x^2 + y^2)
θ = arctan(y/x)
For the point (2,3):
r = sqrt(2^2 + 3^2) = sqrt(4 + 9) = sqrt(13)
θ = arctan(3/2) ≈ 56.31 degrees (≈ 0.98 radians)
Therefore, the polar form of the point (2,3) is (sqrt(13), 56.31°).
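As a quick cross-check, the conversion takes only a few lines in Python; math.atan2 also handles the quadrant automatically, which the plain inverse-tangent shortcut only does for quadrant I:

import math

x, y = 2, 3
r = math.hypot(x, y)      # sqrt(x^2 + y^2) = sqrt(13)
theta = math.atan2(y, x)  # angle in radians, correct in every quadrant
print(r, theta, math.degrees(theta))  # 3.6056, 0.9828, 56.31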
| {"url":"https://tutor.hix.ai/question/what-is-the-polar-form-of-2-3-8f9afa2011","timestamp":"2024-11-04T17:40:51Z","content_type":"text/html","content_length":"571746","record_id":"<urn:uuid:3b7b78e3-1cf7-48be-969f-e5e4ef6f1a3a>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00231.warc.gz"}
Hi, I am Falk Hassler, an assistant professor at the Institute of Theoretical Physics at the University of Wrocław.
Before coming here in 2021, my life as a postdoc started in 2015, when I finished my PhD at the Ludwig Maximilian University of Munich. Since then, it has not been short of adventures. I had the
opportunity to work in many exciting places with great people from all over the world: New York City, Chapel Hill (North Carolina), Philadelphia, Oviedo (Spain) and College Station (Texas) have been
home to my family (wife Antje and our daughter Amy who was born in Philly) and me. After growing up in a small town in the northeast of Germany, I would have never dreamed that physics would lead me
to all these incredible places one day. You can find more details in my CV.
Lay version
Imagine we take a coffee mug and zoom in with a very powerful microscope. Eventually, we will discover that it is made of atoms. These atoms have protons and neutrons in their cores, and these in turn consist of quarks held together by gluons. We don't have machines yet to zoom in much further. But one thing is certain: something dramatic has to happen at the incredibly small scale of $10^{-35}$ meters.
At this point, the two fundamental ingredients of physics, general relativity and quantum field theory, start to contradict each other.
My research takes us exactly to this point. Although we do not have any experimental data at this scale yet, the last 50 years have produced some incredible ideas of what we might find. All of them
are based on the fundamental mechanisms in physics that we have already confirmed experimentally. The most studied idea is that extended objects, strings, should ultimately replace point particles.
Strings are so fundamental that not only are particles made of them, but so are the interactions between them and even spacetime itself.
Hence, we face a crucial change of paradigms. Point particles have a natural notion of distance. Take, as a simple example, a free particle on a ring. Its energy levels scale inversely with the square of the radius, so we could easily distinguish between large and small rings. Distance between points is also the defining concept in Riemannian geometry, which underpins general relativity.
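For concreteness, this is the standard quantum-mechanics result (stated here for illustration): a free particle of mass $m$ on a ring of radius $R$ has energy levels
$$E_n = \frac{\hbar^2 n^2}{2 m R^2}, \qquad n \in \mathbb{Z},$$
so shrinking the ring pushes every excited level up, and the radius can be read off directly from the spectrum.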
Things become more subtle if we look at strings because, in addition to the centre-of-mass motion that point particles also have, they can wind around the circle. Hence, their spectrum is characterised by two quantum numbers. Remarkably, the spectrum is the same on two circles, one larger and the other smaller than the length of the string; only the roles of momentum and winding get exchanged. This effect is called T-duality, and it obfuscates the clear notion of distance needed to define geometry. Therefore, strings ultimately require us to work with a generalisation of geometry.
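The textbook formula behind this statement makes the duality explicit. For the bosonic closed string on a circle of radius $R$, with momentum number $n$, winding number $w$, and oscillator levels $N$, $\bar{N}$, the mass spectrum is
$$M^2 = \frac{n^2}{R^2} + \frac{w^2 R^2}{\alpha'^2} + \frac{2}{\alpha'}\left(N + \bar{N} - 2\right),$$
which is invariant under $R \to \alpha'/R$ combined with $n \leftrightarrow w$.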
My work has revealed how this adapted version of geometry can capture T-dualities far beyond the simple example we have just discussed. In contrast to a circle or a torus, the spacetimes I am
interested in are curved.
Strings in curved backgrounds automatically induce higher curvature corrections that modify the Einstein-Hilbert action of point particles. These corrections are essential to understanding how a
quantum theory of gravity might resolve singularities at the centre of black holes or the Big Bang. Thus, my current efforts focus on how T-duality allows these corrections to be computed explicitly.
Moreover, my work gives a new handle on integrable string models, which are an indispensable tool in the long-standing quest to prove the AdS/CFT correspondence, perhaps the most successful spin-off of string theory.
For experts
Using dualities in string theory, I explore quantum field theory and quantum gravity at strong coupling, very high energies and small distances. Important ingredients in my work are double/
exceptional field theory, an effective target space description of string/M-theory, which makes T-/U-dualities manifest, and (super)conformal field theories, (S)CFTs, in two and more dimensions. On
the formal side, I look into the underlying principles of generalised geometry, non-commutative or even non-associative geometry and, especially, how they naturally arise from strings and higher
dimensional membranes probing spacetime. String field theory and worldsheet renormalisation group flow, which allow extracting new mathematical structures from the $\sigma$-model that underlies the
string's dynamics, are powerful tools I rely on. Although this is still a perturbative approach, it can point out underlying symmetry principles that allow accessing the non-perturbative regime.
A prominent example is supersymmetry. It allows one to study certain protected sectors of a theory (such as BPS solutions) at strong coupling. These symmetry-protected sectors are also indispensable for approaching higher-dimensional SCFTs, which usually do not have a weak-coupling limit. Besides all these fundamental aspects, I am also interested in applications. They
range from flux compactifications and consistent truncations to simple toy models for inflation in cosmology. In recent years, I have revealed an elementary link between Poisson-Lie T-duality and
generalised geometry. It brings together two thriving research communities and paves the way for important discoveries. Among them is a new approach to one of the biggest questions in contemporary
physics: What is the fundamental structure of space and time? I am exploring this and related questions together with my research team.
Institute of Theoretical Physics
University of Wrocław
pl. M. Borna 9
50-204 Wrocław, Poland
+48 71 375-9241
+48 573 551 052
Credit Analysis Models
2024 Curriculum CFA Program Level II Fixed Income
Credit analysis plays an important role in the broader fixed-income space. Our coverage will go over important concepts, tools, and applications of credit analysis. We first look at modeling credit
risk. The inputs to credit risk modeling are the expected exposure to default loss, the loss given default, and the probability of default. We explain these terms and use a numerical example to
illustrate the calculation of the credit valuation adjustment for a corporate bond and its credit spread over a government bond yield taken as a proxy for a default-risk-free rate (or default-free rate).
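To make the CVA construction concrete, here is a minimal Python sketch under illustrative assumptions (flat annual hazard rate, flat risk-free curve, fixed recovery rate); the function name and the numbers are hypothetical and are not taken from the curriculum.

def cva(exposures, hazard, recovery, rf):
    # exposures[t-1]: expected exposure at the end of year t (per 100 par).
    # hazard: constant annual risk-neutral default probability.
    # recovery: recovery rate; rf: flat annual risk-free rate.
    total, survival = 0.0, 1.0
    for t, exposure in enumerate(exposures, start=1):
        pd_t = survival * hazard                  # probability of default in year t
        survival -= pd_t                          # update survival probability
        expected_loss = pd_t * exposure * (1 - recovery)
        total += expected_loss / (1 + rf) ** t    # discount at the risk-free rate
    return total

# Illustrative 3-year bond: exposures near par, 1.5% hazard, 40% recovery, 3% risk-free.
print(round(cva([103.0, 102.0, 101.0], 0.015, 0.40, 0.03), 4))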
We then discuss credit scoring and credit ratings. Credit scoring is a measure of credit risk used in retail loan markets, and ratings are used in the wholesale bond market. We explain two types of
credit analysis models used in practice—structural models and reduced-form models. Both models are highly mathematical and beyond the scope of our coverage. Therefore, we provide only an overview to
highlight the key ideas and the similarities and differences between them. We then use the arbitrage-free framework and a binomial interest rate tree to value risky fixed-rate and floating-rate bonds
for different assumptions about interest rate volatility. We also build on the credit risk model to interpret changes in credit spreads that arise from changes in the assumed probability of default,
the recovery rate, or the exposure to default loss. We also explain the term structure of credit spreads and finally compare the credit analysis required for securitized debt with the credit analysis
of corporate bonds.
Learning Outcomes
The member should be able to:
• explain expected exposure, the loss given default, the probability of default, and the credit valuation adjustment;
• explain credit scores and credit ratings;
• calculate the expected return on a bond given a transition in its credit rating (a numerical sketch follows this list);
• explain structural and reduced-form models of corporate credit risk, including assumptions, strengths, and weaknesses;
• calculate the value of a bond and its credit spread, given assumptions about the credit risk parameters;
• interpret changes in a credit spread;
• explain the determinants of the term structure of credit spreads and interpret a term structure of credit spreads;
• compare the credit analysis required for securitized debt to the credit analysis of corporate debt.
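As a sketch of the rating-transition calculation referenced above (one-year horizon; the toy transition probabilities, spread changes, and bond characteristics below are hypothetical, not actual rating-agency data), the expected price impact is the probability-weighted sum of each spread change multiplied by minus the duration, added to the bond's yield:

# One-year expected return, adjusted for possible rating migrations.
# Each scenario: (probability, change in credit spread, in decimal).
ytm, duration = 0.045, 6.0               # hypothetical 4.5% yield, 6-year duration
scenarios = [
    (0.85,  0.0000),   # rating unchanged
    (0.10, +0.0080),   # downgrade: spread widens 80 bp
    (0.05, -0.0030),   # upgrade: spread narrows 30 bp
]
price_impact = sum(p * (-duration) * dspread for p, dspread in scenarios)
expected_return = ytm + price_impact
print(f"expected 1-year return ~ {expected_return:.4%}")  # below 4.5%: migration reduces it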
This reading has covered several important topics in credit analysis. Among the points made are the following:
• Three factors important to modeling credit risk are the expected exposure to default, the recovery rate, and the loss given default.
• These factors permit the calculation of a credit valuation adjustment that is subtracted from the (hypothetical) value of the bond, if it were default risk free, to get the bond’s fair value
given its credit risk. The credit valuation adjustment is calculated as the sum of the present values of the expected loss for each period in the remaining life of the bond. Expected values are
computed using risk-neutral probabilities, and discounting is done at the risk-free rates for the relevant maturities.
• The CVA captures investors’ compensation for bearing default risk. The compensation can also be expressed in terms of a credit spread.
• Credit scores and credit ratings are third-party evaluations of creditworthiness used in distinct markets.
• Analysts may use credit ratings and a transition matrix of probabilities to adjust a bond’s yield-to-maturity to reflect the probabilities of credit migration. Credit spread migration typically
reduces expected return.
• Credit analysis models fall into two broad categories: structural models and reduced-form models.
• Structural models are based on an option perspective of the positions of the stakeholders of the company. Bondholders are viewed as owning the assets of the company; shareholders have call
options on those assets (a minimal sketch of this setup follows the list).
• Reduced-form models seek to predict when a default may occur, but they do not explain the why, as structural models do. Reduced-form models, unlike structural models, are based only on observable variables.
• When interest rates are assumed to be volatile, the credit risk of a bond can be estimated in an arbitrage-free valuation framework.
• The discount margin for floating-rate notes is similar to the credit spread for fixed-coupon bonds. The discount margin can also be calculated using an arbitrage-free valuation framework.
• Arbitrage-free valuation can be applied to judge the sensitivity of the credit spread to changes in credit risk parameters.
• The term structure of credit spreads depends on macro and micro factors.
• Regarding macro factors, the credit spread curve tends to steepen and widen in conditions of weak economic activity. Market supply and demand dynamics are important; the most frequently traded securities tend to determine the shape of this curve.
• Issuer- or industry-specific factors, such as the chance of a future leverage-decreasing event, can cause the credit spread curve to flatten or invert.
• When a bond is very likely to default, it often trades close to its recovery value at various maturities; moreover, the credit spread curve is less informative about the relationship between
credit risk and maturity.
• For securitized debt, the characteristics of the asset portfolio itself suggest the best approach for a credit analyst to take when deciding among investments. Important considerations include the relative concentration of assets and their similarity or heterogeneity with respect to credit risk.
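To illustrate the option perspective behind structural models (see the bullet above), here is a minimal sketch of the classic Merton setup, in which equity is valued as a European call option on the firm's assets; the inputs are hypothetical, and the pricing uses only Python's standard library.

import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_equity(assets, debt_face, r, sigma, T):
    # Equity value as a call on firm assets (Merton-style structural model).
    d1 = (math.log(assets / debt_face) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    equity = assets * norm_cdf(d1) - debt_face * math.exp(-r * T) * norm_cdf(d2)
    risk_neutral_pd = norm_cdf(-d2)   # probability assets end below the debt face value
    return equity, risk_neutral_pd

equity, pd = merton_equity(assets=120.0, debt_face=100.0, r=0.03, sigma=0.25, T=1.0)
print(f"equity ~ {equity:.2f}, risk-neutral PD ~ {pd:.2%}")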