Facets of Random Symmetric Edge Polytopes, Degree Sequences, and Clustering
Symmetric edge polytopes are lattice polytopes associated with finite simple graphs that are of interest in both theory and applications. We investigate the facet structure of symmetric edge
polytopes for various models of random graphs. For an Erdos-Renyi random graph, we identify a threshold probability at which with high probability the symmetric edge polytope shares many
facet-supporting hyperplanes with that of a complete graph. We also investigate the relationship between the average local clustering, also known as the Watts-Strogatz clustering coefficient, and the
number of facets for graphs with either a fixed number of edges or a fixed degree sequence. We use well-known Markov Chain Monte Carlo sampling methods to generate empirical evidence that for a fixed
degree sequence, higher average local clustering in a connected graph corresponds to higher facet numbers in the associated symmetric edge polytope.
Bibliographical note
Publisher Copyright:
© 2023 Discrete Mathematics and Theoretical Computer Science. All rights reserved.
MK was partially supported by National Science Foundation award DMS-2005630. BB and KB were partially supported by National Science Foundation award DMS-1953785. The authors thank Tianran Chen and
Rob Davis for helpful discussions that motivated this project. The authors thank Dhruv Mubayi for helpful suggestions regarding random graphs.
Funders: National Science Foundation Arctic Social Science Program (DMS-2005630, DMS-1953785)
• clustering
• facets
• random graphs
• Symmetric edge polytope
ASJC Scopus subject areas
• Theoretical Computer Science
• General Computer Science
• Discrete Mathematics and Combinatorics
How to Solve Percentage Questions?
A value or ratio that can be stated as a fraction of 100 is called a percentage in mathematics. If we need to find what percentage a given number is of a whole, we divide the number by the whole and multiply the result by 100. A percentage therefore denotes a fraction per hundred. Percent means per 100, and “%” is the symbol used to represent it.
Percentage Formula:
To determine the percentage, we have to divide the value by the total value and then multiply the resultant by 100.
Percentage formula = (Value/Total value) × 100
Example: 6/4 × 100 = 1.5 × 100 = 150 per cent
How to Calculate Percentage?
To determine P percent of a number, we use a slightly different formula:
P% of Number = X
where X is the required part of the number.
To apply the formula, we remove the % symbol by dividing P by 100:
(P/100) × Number = X
Example: Calculate 10% of 60
Let 10% of 60 = x
10/100*60= x
x = 6
Converting Fractions into Percentage:
A fraction can be represented by a/b. Multiplying and dividing the fraction by 100, we have:
(a/b) × (100/100) = [(a/b) × 100] × (1/100)   ... (i)
From the definition of percentage, we have;
(1/100) = 1%
Thus, equation (i) can be written as:
(a/b) × 100%
Therefore, a fraction can be converted to a percentage simply by multiplying the given fraction by 100.
Conversion of Fraction into Percentage
Percentage Difference Formula :
The following formula can be used to determine the percentage difference between two values if we are given two values:
Percentage Difference = (|Difference between the two values| / Average of the two values) × 100
Percentage Increase and Decrease:
The original number is subtracted from a new number, divided by the original number, and then multiplied by 100 to get the percentage increase.
% increase = [(New number – Original number)/Original number] x 100
where, increase in number = New number – original number
Comparably, a percentage drop is calculated by deducting a new figure from the base number, dividing the result by the base number, and then multiplying the result by 100.
% decrease = [(Original number – New number)/Original number] x 100
Where decrease in number = Original number – New number
So basically if the answer is negative then there is a percentage decrease.
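For readers comfortable with a little programming, the formulas above can be sketched in Python (the helper names here are just for illustration):

```python
def percent_of(part, whole):
    """Percentage formula: (Value / Total value) x 100."""
    return part / whole * 100

def percent_change(original, new):
    """Positive result = percentage increase, negative result = percentage decrease."""
    return (new - original) / original * 100

print(percent_of(6, 4))         # 150.0, matching the 6/4 example above
print(percent_change(50, 60))   # 20.0, i.e. a 20% increase
print(percent_change(60, 50))   # about -16.67, i.e. a 16.67% decrease
```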
Percentage Questions
Q.1. If 12% of 50% of a number is 12, then find the number?
Let the required number be x,
Therefore, as per the given question
(12/100) * (50/100)* x = 12
So, x = (12 × 100 × 100)/(12 × 50) = 200
Q.2. What percentage of 1/5 is 1/25?
Let x% of 1/5 be 1/25.
Therefore, [(1/5)/100] × x = 1/25
x = (1/25) × 5 × 100 = 20%
Q.3. Which number is 60% less than 80?
Required number= 40% of 80
(80*40)/100 = 32
Therefore, the number 32 is 60% less than 80.
Q.4. The sum of (10% OF 12.5) and (6% of 25.2) is equal to what value?
As per the given question,
Sum= (10% of 12.5) + (6% of 25.2)
= (12.5 × 10)/100 + (25.2 × 6)/100
= 1.25 + 1.512 = 2.762
Word Problem
Q.1 A lady spends her total income as follows-30% on food, 25% on rent, 16% on travel and 19% on education. After all expenses made, she saves Rs. 8,900. Find the amount spent on education.
Given, A lady spends 30% of her income on food, 25% on rent, 16% on travel and 19% on education.
Her total saving: 8,900
Concept used: Income = Expenditure + Savings
Let the income earned by lady be Rs. x
Expenditure made by lady on food= 30% of x = 0.30x
Expenditure made by lady on rent= 25% of x= 0.25x
Expenditure made by lady on travel= 16% of x= 0.16x
Expenditure made by lady on education = 19% of x = 0.19x
Savings= Rs. 8,900
Income= Expenditure + Savings
x = 0.90x + Rs. 8,900
0.10x = Rs. 8,900, so x = Rs. 8,900/0.10 = Rs. 89,000
Expenditure made by lady on education= 19% of x
0.19*Rs.89,000= Rs.16,910
Thus, the expenditure made on education by the lady is equal to Rs.16,910
Q.2. On a shelf, the first row contains 15% more books than the second row and the third row contains 15% less books than the second row. If the total number of books contained in all the rows is
900, then find the number of books in the first row.
Let the number of books on second row be 100x
Given, the first row contains 15% more books than the second row.
Thus, 1st row = 100x + 15x = 115x
2nd row = 100x
Given, the third row contains 15% fewer books than the second row.
Thus, 3rd row = 100x - 15x = 85x
Total number of books= 115x+100x+85x= 300x
According to the question, 300x=900
x= 3
So, on 1st row = 115*3= 345 books.
Q.3. A television depreciates in value each year at the rate of 10% of its previous value. However, every second year there is some maintenance work so that in that particular year, depreciation is
only 5% of its previous value. If at the end of the fourth year the value of the television stands at Rs. 1,46,205, then find the value of the television at the start of the first year.
Let the value of television at the start of first year be Rs. x
According to the question, x*90/100*95/100*90/100*95/100= 1,46,205
x = (1,46,205 × 10,00,000)/(81 × 95 × 95)
x= Rs. 2,00,000
Q.4. In a mixture of 80 litres of mineral oil and edible oil, 25% of the mixture is mineral oil. How much edible oil should be added to the mixture so that mineral oil becomes 20% of the mixture?
In a mixture of 80 litres of mineral and edible oil, 25% of the mixture is mineral oil.
Amount of Mineral oil= 80*25/100= 20 litres
Let x be the amount of edible oil added
According to the question, 20 = 20% of (80 + x)
20 = (20/100) × (80 + x)
80 + x = 100, so x = 20 litres of edible oil should be added
Q.5. There are three solutions, A, B and C containing milk and water. The strength of the three solutions are 80%,90%, and 70% respectively. When 1 litre each of A and B is mixed with C, the strength
of C increases to 75%. What would be strength of C if two litres of A and 0.5 litres of B were mixed with C?
Let the volume of C be x litres.
As the strengths of A and B are 80% and 90%, by mixing 1 litre each of A and B with C, we are adding 800 ml and 900 ml respectively of milk to C, i.e. 1.7 litres in total. The strength of C is 70%, which increases to 75%.
So, the new strength = 75%
(0.7x + 1.7)/(x + 2) = 75/100
On solving we get x = 4 litres
So, milk present in C = 70% of 4 litres = 2.8 litres
In the second case, total milk added = (1.6 + 0.45) litres = 2.05 litres
Required strength = (2.8 + 2.05)/(4 + 2.5) = 4.85/6.5 ≈ 74.6%
Toolbox: CRC calculator
Explanation of the CRC calculation steps
1. Line the input bits in a row, a[0] at the left-most position and a[A-1] at the right most position.
2. Pad the input bits by L zeros to the right side. L + 1 is the length of the polynomial.
3. Divide the padded bits by the coefficients of the polynomial. The remainder is the CRC (parity) bits.
Denote the input bits to the CRC computation by a[0], a[1], a[2], a[3], ..., a[A-1], and the parity bits by p[0], p[1], p[2], p[3], ..., p[L-1], where A is the size of the input sequence and L is the
number of parity bits. The parity bits are generated by one of the following cyclic generator polynomials:
1. - g[CRC24A](D)=[D^24+D^23+D^18+D^17+D^14+D^11+D^10+D^7+D^6+D^5+D^4+D^3+D^1+1] for a CRC length L=24;
2. - g[CRC24B](D)=[D^24+D^23+D^6+D^5+D^1+1] for a CRC length L=24;
3. - g[CRC24C](D)=[D^24+D^23+D^21+D^20+D^17+D^15+D^13+D^12+D^8+D^4+D^2+D+1] for a CRC length L=24;
4. - g[CRC16](D)=[D^16+D^12+D^5+1] for a CRC length L=16;
5. - g[CRC11](D)=[D^11+D^10+D^9+D^5+1] for a CRC length L=11;
6. - g[CRC6](D)=[D^6+D^5+1] for a CRC length L=6.
The encoding is performed in a systematic form, which means that in GF(2), the polynomial a[0]·D^(A+L-1) + a[1]·D^(A+L-2) + ... + a[A-1]·D^L + p[0]·D^(L-1) + p[1]·D^(L-2) + ... + p[L-2]·D^1 + p[L-1] yields a remainder equal to 0 when divided by the corresponding CRC generator polynomial.
The bits after CRC attachment are denoted by b[0], b[1], b[2], b[3],..., b[B-1] where B = A + L. The relation between a[k] and b[k] is:
b[k] = a[k] for k = 0,1,2,...,A-1
b[k] = p[k-A] for k = A, A+1, A+2, ..., A+L-1
CRC is an error detecting code to detect accidental changes to raw data. Blocks of input data get a short check value attached, based on the remainder of a polynomial division of their contents.
Receiver performs the same CRC calculation to detect data corruption. Specification of a CRC code requires definition of a generator polynomial. This polynomial becomes the divisor in a polynomial
long division, which takes the input data as the dividend and in which the quotient is discarded and the remainder becomes the result.
Commonly used CRCs employ the Galois field of two elements, GF(2). The two elements are usually called 0 and 1, matching computer architecture.
A CRC is called an n-bit CRC when its check value is n bits long. For a given n, multiple CRCs are possible, each with a different polynomial. Such a polynomial has highest degree n, which means it
has n + 1 terms. In other words, the polynomial has a length of n + 1.
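As a rough illustration of the procedure described above, here is a minimal Python sketch of the GF(2) long division, using the CRC16 generator g(D) = D^16 + D^12 + D^5 + 1 as an example (the function and variable names are only for this sketch):

```python
def crc_parity(a, poly):
    """a: input bits a[0..A-1]; poly: generator coefficients, highest
    degree first, length L+1. Returns the L parity bits p[0..L-1]."""
    L = len(poly) - 1
    reg = list(a) + [0] * L              # step 2: pad the input with L zeros
    for i in range(len(a)):              # step 3: long division in GF(2)
        if reg[i]:
            for j in range(L + 1):
                reg[i + j] ^= poly[j]
    return reg[-L:]                      # the remainder is the CRC

# g_CRC16(D) = D^16 + D^12 + D^5 + 1
CRC16 = [1,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,1]
a = [1, 0, 1, 1, 0, 0, 1, 0]
p = crc_parity(a, CRC16)
b = a + p                                # b[k] = a[k], then b[k] = p[k-A]
print(p)
```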
A major error in a math calculation
I'm new to MyScript and just found errors in this app's basic math handling, as shown in the attached file.
While looking for answers and solutions, I realised that the last update of this app in Apple's App Store was a year ago.
So I'm not sure whether the crew is still working on MyScript and keeping it regularly updated, or whether you will by any chance see this post.
1 Comment
Dear Guochao Wang,
Thank you for your question.
The reason why you get this result, is because the real operator that is implemented in our math solver is x+y%, thus taking precedence over the multiplication.
We have chosen to define the x+y% operator that translates to x+x*y/100.
This allows you to write 120+10%=132, which is what is usually expected in everyday life, for a calculator use.
Best regards,
Refuted Yet Again
This article is written in response to Matt Young’s “How to Evolve Specified Complexity by Natural Means.” Both pieces appeared on Metanexus (http://www.metanexus.net).
The mathematician George Polya used to quip that if you can’t solve a problem, find an easier problem and solve it. Matt Young seems to have taken Polya’s advice to heart. Young has taken Shannon’s
tried-and-true theory of information and shoehorned my notion of specified complexity into it. The shoe no longer fits, and so there must be something wrong with specified complexity and the
implications I draw from it--notably the law of conservation of information.
Young’s basic argument is that information conceived as improbability is subject to Shannon’s theory and the 2nd law of thermodynamics, but that information conceived as the specification of
possibilities is not and actually runs counter to the first construal of information. Thus my entire work on intelligent design is supposed to devolve to an equivocation over the use of the term "information."
There is no equivocation in my work over information. What I do is define a type of information--specified complexity--that enriches Shannon information but at the same time is not reducible to it. What
gives specified complexity its traction in detecting design is a coincidence of two things: (1) an event that under a chance hypothesis has small probability and therefore high information content in
the first of Young’s senses; and (2) a pattern that is objectively given and complexity-theoretically tractable, and yet matches the event.
It’s that coincidence that makes the design inference work. It is the same coincidence that makes ETI detection work. And it is the same coincidence that makes Fisher’s theory of significance testing
work (in which events--i.e., the high information carriers--coincide with rejection regions--i.e., the objectively given patterns).
Even a cursory reading of my Cambridge monograph The Design Inference would have made this clear. But instead, Young cites semi-popular work of mine directed toward a theological audience (in
particular, Intelligent Design: The Bridge Between Science & Theology). Young as a physicist claiming expertise in information theory has no excuse for not engaging my technical work (no mention of
The Design Inference in his article).
Even so, let me offer one concession. Although I developed my theoretical apparatus for design-detection at length in The Design Inference, I did not there develop the information-theoretic
connections. Moreover, until No Free Lunch (2002), my treatment of the information-theoretic connections was semi-popular. Charitable readers with the requisite technical background were thus able to
fill in the details and see the merit of my previous information-theoretic work. Uncharitable readers like Young, on the other hand, have been eager to attribute confusion on my part.
Young seems especially to be taking his cues from Victor Stenger, Mark Perakh, and others who claim that I’m all mixed up about information theory. Perhaps I am. But let’s make a deal. Start to
engage my technical work on the information-theoretic underpinnings of intelligent design by reading and citing The Design Inference and especially chs. 2-4 of my newest book No Free Lunch. Having
engaged that material, give me your best shot.
Correlation
In probability theory, the correlation, also called the correlation coefficient, between two random variables is found by dividing their covariance by the product of their standard deviations. (It is only defined if these standard deviations are finite.) It is a corollary of the Cauchy-Schwarz inequality that the correlation cannot exceed 1 in absolute value.
The correlation is 1 in the case of an increasing linear relationship, -1 in the case of a decreasing linear relationship, and some value in between in all other cases, indicating the degree of
linear dependence between the variables.
If the variables are independent then the correlation is 0, but the converse is not true because the correlation coefficient detects only linear dependencies between two variables. Here is an
example: Suppose the random variable X is uniformly distributed on the interval from -1 to 1, and Y = X^2. Then Y is completely determined by X, so that X and Y are as far from being independent as
two random variables can be, but their correlation is zero; they are uncorrelated.
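A quick numerical sketch of this example (using NumPy, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1_000_000)   # X uniform on the interval from -1 to 1
y = x ** 2                               # Y is completely determined by X

# correlation = covariance divided by the product of the standard deviations
print(np.corrcoef(x, y)[0, 1])           # close to 0: uncorrelated, yet not independent
```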
If several values of X and Y have been measured, then the Pearson product-moment correlation coefficient can be used to estimate the correlation of X and Y. The coefficient is especially important if
X and Y are both normally distributed and follow the linear regression model.
Pearson's correlation coefficient is a parametric statistic, and it is less useful if the underlying assumption of normality is violated. Less powerful non-parametric correlation methods, such as
Spearman's ρ may be useful when distributions are not normal.
To get a measure for more general dependencies in the data (also nonlinear) it is better to use the so-called correlation ratio, which is able to detect almost any functional dependency, or mutual information, which detects even more general dependencies.
All Wikipedia text is available under the terms of the GNU Free Documentation License
Sinking the 9-Ball
Jeanette is playing in a 9-ball pool tournament. She will win if she sinks the 9-ball from the final rack, so she needs to line up her shot precisely. Both the cue ball and the 9-ball have mass m, and the cue ball is hit at an initial speed of v_i. Jeanette carefully hits the cue ball into the 9-ball off center, so that when the balls collide, they move away from each other at the same angle theta from the direction in which the cue ball was originally traveling (see figure). Furthermore, after the collision, the cue ball moves away at speed v_f, while the 9-ball moves at speed v_9.
For the purposes of this problem, assume that the collision is perfectly elastic, neglect friction, and ignore the spinning of the balls.
Find the angle theta through which the 9-ball travels away from the horizontal, as shown in the figure. Perhaps surprisingly, you should be able to obtain an expression that is independent of any of the given variables!
Express your answer in degrees to three significant figures.
Relevant knowledge:
Complementary angles – Any two angles whose sum equals 90° are called complementary angles.
If A and B are two complementary angles, then A + B = 90°.
In a right triangle, the two acute angles are complementary to one another.
Supplementary angles – Two angles whose sum is 180° are called supplementary angles.
If X and Y are two supplementary angles, then X + Y = 180°.
Any two angles of a rectangle are supplementary, since each angle of a rectangle is 90°.
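One way to sanity-check the expected result numerically, as a minimal sketch with illustrative values (assuming, as stated, a perfectly elastic collision between equal masses): momentum perpendicular to the cue ball's original direction forces v_f = v_9, momentum along that direction gives v_f = v_i/(2 cos theta), and kinetic energy is then conserved only when theta = 45.0 degrees.

```python
import numpy as np

m, v_i = 1.0, 2.0                      # arbitrary values; the angle should not depend on them
theta = np.deg2rad(45.0)               # candidate answer
v_f = v_9 = v_i / (2 * np.cos(theta))  # chosen so both momentum components are conserved

ke_in  = 0.5 * m * v_i**2
ke_out = 0.5 * m * v_f**2 + 0.5 * m * v_9**2
print(np.isclose(ke_in, ke_out))       # True only when theta = 45 degrees (elastic collision)
```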
Introduction | Welcome to my digital notebooks! 📚
These notes are taken from YouTube video lectures by Matt Woerman.
What is Structural Econometrics?
• Structural econometrics is defined as combining explicit economic theories with statistical models to identify parameters of economic models based on individual choices or aggregate relations.
• Structural econometrics is a branch of economics that combines economic theory, statistical methods, and empirical analysis to model and understand the underlying structures of economic systems.
It aims to uncover the relationships between different economic variables by developing and estimating models based on economic theory.
Contrast with Nonstructural (reduced form) Econometrics
Reduced form econometrics emphasises on:
• Less direct incorporation of economic theory.
• More focus on data-driven, empirical findings without a strong theoretical foundation.
Why Add Structure to an Econometric Model?
1. Estimation of Unobservable Parameters:
□ Examples include marginal utility, marginal cost, risk preferences, discount rates, etc.
2. Counterfactual Simulations:
□ Assessing what would happen under different economic scenarios.
3. Comparing Economic Theories:
□ Testing competing theories by modeling their implications.
Balance and Credibility
• The choice between structural and nonstructural approaches depends on research context and questions.
• Structural models can sometimes add credibility, especially in policy analysis or forecasting.
Constructing a Structural Econometric Model
1. Start with Economic Theory:
□ Define economic setting, list primitives (preferences, technologies), and equilibrium concepts.
2. Transform into Econometric Model:
□ Incorporate statistical elements like unobservables and errors.
3. Estimation:
□ Define functional forms, distributional assumptions, and select estimation methods.
A Simple Example of a Structural Model
This example demonstrates the estimation of output elasticities of capital and labor for a firm using a structural econometric model.
• Output $(Y_t)$
• Capital $(K_t)$
• Labor $(L_t)$
1. Start with a Cobb-Douglas Production Function
The initial economic model is based on the Cobb-Douglas production function, which is a common representation in economics to describe the relationship between outputs and inputs.
• Functional Form: $Y_t = A K_t^\alpha L_t^\beta$
• Rewritten as a Log-Linear Model: To facilitate estimation and interpretation, this production function is transformed into a log-linear form.
$\ln(Y_t) = \gamma + \alpha \ln(K_t) + \beta \ln(L_t)$
2. Incorporate an Error Term
An error term $(\varepsilon_t)$ is added to the model to account for measurement error and other unobserved factors.
• Assumptions on Error Term:
□ The error term is assumed to follow a normal distribution with mean zero and variance $\sigma^2: \varepsilon_t \sim N(0, \sigma^2)$.
□ It is assumed that the expectation of the error term, given capital and labor, is zero: $E(\varepsilon_t | K_t, L_t) = 0$.
3. Estimation Using Ordinary Least Squares (OLS)
The final step involves estimating the output elasticities $\alpha$ and $\beta$ using OLS, a standard method in econometrics for estimating the parameters of a linear regression model.
• OLS Estimation Model:
$\ln(Y_t) = \gamma + \alpha \ln(K_t) + \beta \ln(L_t) + \varepsilon_t$
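As a rough illustration, the following Python sketch simulates Cobb-Douglas data and recovers the elasticities by OLS; the names and parameter values are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
gamma, alpha, beta, sigma = 1.0, 0.3, 0.6, 0.1   # "true" parameters used only to simulate data

lnK = rng.normal(3.0, 0.5, T)
lnL = rng.normal(2.0, 0.5, T)
eps = rng.normal(0.0, sigma, T)                  # error term with E[eps | K, L] = 0
lnY = gamma + alpha * lnK + beta * lnL + eps

# OLS: regress ln(Y) on a constant, ln(K), ln(L)
X = np.column_stack([np.ones(T), lnK, lnL])
gamma_hat, alpha_hat, beta_hat = np.linalg.lstsq(X, lnY, rcond=None)[0]
print(alpha_hat, beta_hat)                       # close to 0.3 and 0.6
```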
A More Complex Example of a Structural Model
This example demonstrates a more complex structural model involving procurement auctions with risk-neutral bidders and the goal of estimating the underlying common distribution of costs known to all bidders.
• Winning Bid $(w_t)$: Observed in T procurement auctions with $(N_t)$ risk-neutral bidders.
1. Economic Theory and Expected Profit Maximization
• Each firm is assumed to maximize its expected profit.
• The expected profit for firm $(i)$ with bid $(b_i)$ and cost $(c_i)$ is given by:
$E[\pi_i(b_1, ..., b_N)] = (b_i - c_i) \cdot \Pr(b_i < b_j, \ \forall j \neq i \mid c_i)$
992 Square Digits to Square Fathoms
Square Digit [digit2] Output
992 square digits in ankanam is equal to 0.053819444444444
992 square digits in aana is equal to 0.011322132943755
992 square digits in acre is equal to 0.000088957680769782
992 square digits in arpent is equal to 0.00010529691968863
992 square digits in are is equal to 0.0035999928
992 square digits in barn is equal to 3.5999928e+27
992 square digits in bigha [assam] is equal to 0.00026909722222222
992 square digits in bigha [west bengal] is equal to 0.00026909722222222
992 square digits in bigha [uttar pradesh] is equal to 0.00014351851851852
992 square digits in bigha [madhya pradesh] is equal to 0.00032291666666667
992 square digits in bigha [rajasthan] is equal to 0.00014233241505969
992 square digits in bigha [bihar] is equal to 0.00014235855988244
992 square digits in bigha [gujrat] is equal to 0.00022239439853076
992 square digits in bigha [himachal pradesh] is equal to 0.00044478879706152
992 square digits in bigha [nepal] is equal to 0.000053155006858711
992 square digits in biswa [uttar pradesh] is equal to 0.0028703703703704
992 square digits in bovate is equal to 0.000005999988
992 square digits in bunder is equal to 0.000035999928
992 square digits in caballeria is equal to 7.999984e-7
992 square digits in caballeria [cuba] is equal to 0.0000026825579731744
992 square digits in caballeria [spain] is equal to 8.999982e-7
992 square digits in carreau is equal to 0.000027906920930233
992 square digits in carucate is equal to 7.4073925925926e-7
992 square digits in cawnie is equal to 0.000066666533333333
992 square digits in cent is equal to 0.0088957680769782
992 square digits in centiare is equal to 0.35999928
992 square digits in circular foot is equal to 4.93
992 square digits in circular inch is equal to 710.47
992 square digits in cong is equal to 0.00035999928
992 square digits in cover is equal to 0.0001334319051149
992 square digits in cuerda is equal to 0.000091602870229008
992 square digits in chatak is equal to 0.086111111111111
992 square digits in decimal is equal to 0.0088957680769782
992 square digits in dekare is equal to 0.00035999951749168
992 square digits in dismil is equal to 0.0088957680769782
992 square digits in dhur [tripura] is equal to 1.08
992 square digits in dhur [nepal] is equal to 0.021262002743484
992 square digits in dunam is equal to 0.00035999928
992 square digits in drone is equal to 0.000014015480324074
992 square digits in fanega is equal to 0.000055987446345257
992 square digits in farthingdale is equal to 0.00035573051383399
992 square digits in feddan is equal to 0.000086366680359012
992 square digits in ganda is equal to 0.0044849537037037
992 square digits in gaj is equal to 0.43055555555556
992 square digits in gajam is equal to 0.43055555555556
992 square digits in guntha is equal to 0.0035583103764922
992 square digits in ghumaon is equal to 0.000088957759412305
992 square digits in ground is equal to 0.0016145833333333
992 square digits in hacienda is equal to 4.0178491071429e-9
992 square digits in hectare is equal to 0.000035999928
992 square digits in hide is equal to 7.4073925925926e-7
992 square digits in hout is equal to 0.00025329526996265
992 square digits in hundred is equal to 7.4073925925926e-9
992 square digits in jerib is equal to 0.00017807904411765
992 square digits in jutro is equal to 0.000062554175499566
992 square digits in katha [bangladesh] is equal to 0.0053819444444444
992 square digits in kanal is equal to 0.00071166207529844
992 square digits in kani is equal to 0.00022424768518519
992 square digits in kara is equal to 0.017939814814815
992 square digits in kappland is equal to 0.0023337176196033
992 square digits in killa is equal to 0.000088957759412305
992 square digits in kranta is equal to 0.053819444444444
992 square digits in kuli is equal to 0.026909722222222
992 square digits in kuncham is equal to 0.00088957759412305
992 square digits in lecha is equal to 0.026909722222222
992 square digits in labor is equal to 5.0219989595442e-7
992 square digits in legua is equal to 2.0087995838177e-8
992 square digits in manzana [argentina] is equal to 0.000035999928
992 square digits in manzana [costa rica] is equal to 0.000051509706737483
992 square digits in marla is equal to 0.014233241505969
992 square digits in morgen [germany] is equal to 0.000143999712
992 square digits in morgen [south africa] is equal to 0.00004202162717404
992 square digits in mu is equal to 0.00053999891730001
992 square digits in murabba is equal to 0.0000035583072307913
992 square digits in mutthi is equal to 0.028703703703704
992 square digits in ngarn is equal to 0.0008999982
992 square digits in nali is equal to 0.0017939814814815
992 square digits in oxgang is equal to 0.000005999988
992 square digits in paisa is equal to 0.045289855072464
992 square digits in perche is equal to 0.010529691968863
992 square digits in parappu is equal to 0.0014233228923165
992 square digits in pyong is equal to 0.10889270417423
992 square digits in rai is equal to 0.00022499955
992 square digits in rood is equal to 0.00035583103764922
992 square digits in ropani is equal to 0.00070763330898466
992 square digits in satak is equal to 0.0088957680769782
992 square digits in section is equal to 1.3899649908173e-7
992 square digits in sitio is equal to 1.999996e-8
992 square digits in square is equal to 0.03875
992 square digits in square angstrom is equal to 35999928000000000000
992 square digits in square astronomical units is equal to 1.6086101567044e-23
992 square digits in square attometer is equal to 3.5999928e+35
992 square digits in square bicron is equal to 3.5999928e+23
992 square digits in square centimeter is equal to 3599.99
992 square digits in square chain is equal to 0.00088957395005971
992 square digits in square cubit is equal to 1.72
992 square digits in square decimeter is equal to 36
992 square digits in square dekameter is equal to 0.0035999928
992 square digits in square exameter is equal to 3.5999928e-37
992 square digits in square fathom is equal to 0.10763888888889
992 square digits in square femtometer is equal to 3.5999928e+29
992 square digits in square fermi is equal to 3.5999928e+29
992 square digits in square feet is equal to 3.88
992 square digits in square furlong is equal to 0.0000088957680769782
992 square digits in square gigameter is equal to 3.5999928e-19
992 square digits in square hectometer is equal to 0.000035999928
992 square digits in square inch is equal to 558
992 square digits in square league is equal to 1.5443993831657e-8
992 square digits in square light year is equal to 4.0220915627309e-33
992 square digits in square kilometer is equal to 3.5999928e-7
992 square digits in square megameter is equal to 3.5999928e-13
992 square digits in square meter is equal to 0.35999928
992 square digits in square microinch is equal to 557999507756460
992 square digits in square micrometer is equal to 359999280000
992 square digits in square micromicron is equal to 3.5999928e+23
992 square digits in square micron is equal to 359999280000
992 square digits in square mil is equal to 558000000
992 square digits in square mile is equal to 1.3899649908173e-7
992 square digits in square millimeter is equal to 359999.28
992 square digits in square nanometer is equal to 359999280000000000
992 square digits in square nautical league is equal to 1.1662110659657e-8
992 square digits in square nautical mile is equal to 1.0495890334771e-7
992 square digits in square paris foot is equal to 3.41
992 square digits in square parsec is equal to 3.7809455653343e-34
992 square digits in perch is equal to 0.014233241505969
992 square digits in square perche is equal to 0.0070488518644452
992 square digits in square petameter is equal to 3.5999928e-31
992 square digits in square picometer is equal to 3.5999928e+23
992 square digits in square pole is equal to 0.014233241505969
992 square digits in square rod is equal to 0.014233186718038
992 square digits in square terameter is equal to 3.5999928e-25
992 square digits in square thou is equal to 558000000
992 square digits in square yard is equal to 0.43055555555556
992 square digits in square yoctometer is equal to 3.5999928e+47
992 square digits in square yottameter is equal to 3.5999928e-49
992 square digits in stang is equal to 0.00013289009966777
992 square digits in stremma is equal to 0.00035999928
992 square digits in sarsai is equal to 0.12809917355372
992 square digits in tarea is equal to 0.0005725179389313
992 square digits in tatami is equal to 0.21779858430637
992 square digits in tonde land is equal to 0.000065264554024656
992 square digits in tsubo is equal to 0.10889929215319
992 square digits in township is equal to 3.8610104500773e-9
992 square digits in tunnland is equal to 0.00007292749372012
992 square digits in vaar is equal to 0.43055555555556
992 square digits in virgate is equal to 0.000002999994
992 square digits in veli is equal to 0.000044849537037037
992 square digits in pari is equal to 0.000035583103764922
992 square digits in sangam is equal to 0.00014233241505969
992 square digits in kottah [bangladesh] is equal to 0.0053819444444444
992 square digits in gunta is equal to 0.0035583103764922
992 square digits in point is equal to 0.0088958453762036
992 square digits in lourak is equal to 0.000071166207529844
992 square digits in loukhai is equal to 0.00028466483011938
992 square digits in loushal is equal to 0.00056932966023875
992 square digits in tong is equal to 0.0011386593204775
992 square digits in kuzhi is equal to 0.026909722222222
992 square digits in chadara is equal to 0.03875
992 square digits in veesam is equal to 0.43055555555556
992 square digits in lacham is equal to 0.0014233228923165
992 square digits in katha [nepal] is equal to 0.0010631001371742
992 square digits in katha [assam] is equal to 0.0013454861111111
992 square digits in katha [bihar] is equal to 0.0028471711976488
992 square digits in dhur [bihar] is equal to 0.056943423952976
992 square digits in dhurki is equal to 1.14
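As a sketch of how the entries above relate to each other (taking one digit as 3/4 inch = 0.01905 m and one fathom as 6 feet = 1.8288 m), any of these conversions can be routed through square metres:

```python
SQ_DIGIT_M2  = 0.01905 ** 2        # one square digit in square metres
SQ_FATHOM_M2 = 1.8288 ** 2         # one square fathom in square metres

n = 992
area_m2 = n * SQ_DIGIT_M2
print(area_m2)                     # 0.35999928, matching the square metre row above
print(area_m2 / SQ_FATHOM_M2)      # about 0.10763889 square fathoms, matching the table
```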
We investigate the vertical structure and elements distribution of neutrino-dominated accretion flows around black holes in spherical coordinates with the reasonable nuclear statistical equilibrium.
According our calculations, heavy nuclei tend to be produced in a thin region near the disk surface, whose mass fractions are primarily determined by the accretion rate and the vertical distribution
of temperature and density. In this thin region, we find that $^{56}\rm Ni$ is dominant for the flow with low accretion rate (e.g., $0.05$$M_{\odot}$ $\rm s^{-1}$) but $^{56}\rm Fe$ is dominant for
the high counterpart (e.g., $1 M_{\odot}$ $\rm s^{-1}$). The dominant $^{56}\rm Ni$ in the special region may provide a clue to understand the bumps in the optical light curve of core-collapse
supernovae. Comment: 15 pages, 2 figures, accepted for publication in Ap
A classic result in extremal graph theory, known as Mantel's theorem, states that every non-bipartite graph of order $n$ with size $m>\lfloor \frac{n^{2}}{4}\rfloor$ contains a triangle. Lin, Ning
and Wu [Comb. Probab. Comput. 30 (2021) 258-270] proved a spectral version of Mantel's theorem for given order $n.$ Zhai and Shu [Discrete Math. 345 (2022) 112630] investigated a spectral version for
fixed size $m.$ In this paper, we prove $Q$-spectral versions of Mantel's theorem.Comment: 14 pages, 4 figure
We present a simplified version of the atomic dark matter scenario, in which charged dark constituents are bound into atoms analogous to hydrogen by a massless hidden sector U(1) gauge interaction.
Previous studies have assumed that interactions between the dark sector and the standard model are mediated by a second, massive Z' gauge boson, but here we consider the case where only a massless
gamma' kinetically mixes with the standard model hypercharge and thereby mediates direct detection. This is therefore the simplest atomic dark matter model that has direct interactions with the
standard model, arising from the small electric charge for the dark constituents induced by the kinetic mixing. We map out the parameter space that is consistent with cosmological constraints and
direct searches, assuming that some unspecified mechanism creates the asymmetry that gives the right abundance, since the dark matter cannot be a thermal relic in this scenario. In the special case
where the dark "electron" and "proton" are degenerate in mass, inelastic hyperfine transitions can explain the CoGeNT excess events. In the more general case, elastic transitions dominate, and can be
close to current direct detection limits over a wide range of masses. Comment: 5 pages, 2 figures; v2: added references, and formula for dark ionization fraction; published version
Geometry: Exploring Midpoints
Exploring Midpoints
Recall that the midpoint of ¯AB is a point M on ¯AB that divides ¯AB into two congruent pieces. You can use this definition to prove that each piece has length (1/2)AB. That's certainly reasonable.
If you divide a segment into two pieces of equal length, it makes sense that half of the original length will go to the first piece, the other half to the second piece. This is such a reasonable
statement, it's just got to be a theorem. Consider this your first invitation to a formal proof. I'll go through each of the five steps in the process.
• Example 1: Prove that the midpoint of a segment divides the segment into two pieces, each of which has length equal to one-half the length of the original segment.
• Solution: Follow the steps outlined in how to write a formal proof.
□ 1. Give a statement of the theorem:
• Theorem 9.1: The midpoint of a segment divides the segment into two pieces, each of which has length equal to one-half the length of the original segment.
□ 2. Draw a picture. Theorem 9.1 talks only about a line segment and its midpoint. So Figure 9.1 only shows ¯AB with midpoint M.
• 3. State what is given in terms of our drawing. Given: a line segment ¯AB and a midpoint M.
• 4. State what you want to prove in terms of your drawing. Prove: AM = (1/2)AB.
• 5. Write the proof. You need a game plan. In proving this theorem, you will want to make use of any definitions, postulates, and theorems that you have at your disposal. The definition you will
want to use is that of the midpoint.
• The postulate that will come in handy is the Segment Addition Postulate, which states that if X is a point on ¯AB, then AX + XB = AB. This theorem doesn't seem to have any special needs, so you
will prove this theorem directly. Start with your given information, and don't stop until AM = (1/2)AB.
Statements Reasons
1. M is the midpoint of ¯AB Given
2. ¯AM ~= ¯MB Definition of midpoint
3. AM = MB Definition of ~=
4. AM + MB = AB Segment Addition Postulate
5. 2AM = AB Substitution (steps 3 and 4)
6. AM = (1/2)AB Algebra
There is a little flexibility in the reasons given, especially when you are dealing with algebra. For example, the reason for step 6 was “algebra,” but it could also have been “the division property
of equality.” When it comes to the geometrical parts of a proof, however, there is not much flexibility. The reason for step 2 could only have been the definition of a midpoint, and step 3 is valid
only because of the Segment Addition Postulate. There are no other options in these cases.
Excerpted from The Complete Idiot's Guide to Geometry © 2004 by Denise Szecsei, Ph.D.. All rights reserved including the right of reproduction in whole or in part in any form. Used by arrangement
with Alpha Books, a member of Penguin Group (USA) Inc.
Why Math Is College Critical: Crunching Numbers After High School
Every year, millions of high school seniors breathe sighs of relief upon finishing their final exams before university, thinking they’ll never again have to find x, do derivatives, or convert degrees
into radians on their TI-84s.
And, every year, millions of incoming freshmen schedule their first university math classes shortly after Googling, with exasperation and dread, “Do you have to take math in college?”
In this article, let’s explore not just why math is important to a well-rounded college education, but also the different types of math – and, therefore, its related challenges – that students
encounter upon beginning their college careers.
A Rude Awakening: An Introduction to Math Classes in College
College-level math presents for many students a sharp learning curve. Application-based word problems, solvable upon simply identifying the correct formulas to use, give way to lectures about the
theoretical and abstract. Those who were comfortable with high-school “mental math” must face unfamiliar leaps in logical and conceptual complexity. Even finding help with homework online may require
more original searches and Internet sleuthing than ever before.
While some high schools try to give their students a strong “foundation” for university-level depth, most freshmen are still in for a rude awakening. Before giving any advice, let’s at least learn
what to expect.
Reviewing Different Types of College Level Math
College students come from a wide array of educational backgrounds, so it’s important to establish a baseline understanding of what college level math problems might actually look like. In this
section, we’ll elaborate upon the levels of math through which students progress all the way from high school to graduate study.
Keep in mind that not all of these levels form distinct, hermetically isolated fields. Part of what makes mathematics so fascinating, albeit sometimes frustrating, is that its different concepts are
constantly communicating with each other, allowing mathematicians always to build upon and apply their prior knowledge to new situations.
Stairway to… Heaven? Climbing the Levels of Math
When you think about it, the interplay of mathematical fields makes perfect sense. In elementary school, students progress from learning basic arithmetic to pre-algebra to their first forays into
variables and equations. There’s no reason why high school should not continue that progression, nor why the transition to college math should somehow disrupt it.
We say this not to pretend that college math is not a difficult adjustment – as we’ve established, it is – but to emphasize that its challenges are not insurmountable. By mastering these
fundamentals, there’s no reason to panic nor think, with rigorous studying, that you cannot succeed.
Mastering Geometry: The One That “Shapes” Students’ Conceptual Skills
We’re starting with geometry because it often marks students’ first opportunity to understand mathematics not just in terms of numerical operations, but as a world of consistent proofs and
One of the oldest mathematical fields, geometry forces students to conceptualize angles, dimensions, and other properties of objects. By dealing with real-world problems, geometry enables students to
expand their understanding of math’s practical applications. However, by teaching exercises in logical thinking, it also empowers them to confront future math courses dealing heavily with theories
and abstractions. Therefore, geometry constitutes an early bridge through which students begin to approach higher-level math.
Mastering Trigonometry: A True “Sine” of Maturity
While its five syllables sound intimidating, the rest shouldn’t be. As a subset of geometry, and often not even occupying its own course, trigonometry simply refers to the study of the properties of
triangles. To succeed in trigonometry, students must learn relationships between angles, sides, and their corresponding ratios known as “sine,” “cosine,” and “tangent.”
Trigonometric applications are historic and highly variable, ranging from astronomy to navigation to finding the heights of buildings. While not the most complex subject here, mastering “trig” is
still necessary for students who, when taking college math, would rather have bigger things to worry about.
Mastering Precalculus: An “Integral” Part of College Prep
For those who don’t take AP Calculus, “precalc” is often the last math class students take in high school. And for good reason; with its focus on introducing new principles, getting adequate
precalculus help before graduating allows you to prepare for the transition to college-level math.
Combining geometry, trigonometry, and algebra, precalculus gives students a chance to practice concepts that are explored in full-on calculus. These include graphing inequalities, convergent and
divergent series, and the first whispers of calculus terms like derivatives and integrals. While far-reaching in scope, precalculus is entrenched in skills that many students have spent years
Mastering College Level Algebra: Navigating the “X”-tra Work
Having taken Algebra 1 and Algebra 2 beforehand, most freshmen have some experience with formulas, equations, and using symbols to represent numbers. But how does college make such things more
complex? At what level is college algebra, exactly?
College algebra still teaches functions, quadratics, and exponents, but in more depth, with more abstract reasoning, and at a faster pace. You’ll be expected to study, keep up, and understand new
material after less practice than you’re used to. Therefore, while seemingly less necessary than other subjects, seeking college algebra homework help is never an unhelpful way to set up for success.
Mastering Statistics: “Polling” Your Own Weight
College marks the first time many students encounter statistics, or the study of drawing conclusions about large “populations” by analyzing data from representative “sample” groups.
In some ways, stats is less dependent upon “math” calculations than the concepts it requires students, likely for the first time, to learn, like correlation, tests of significance, and regression
analysis. As a means of estimating population data, its inherent imperfections can frustrate those who take satisfaction in getting exact answers. That said, statistics is among the most practical,
applicable forms of mathematics, and studying it in college is hardly regrettable in the long run.
Mastering Calculus: “Changing” With the Times
It’s often seen rightly as “the big one.”. Invented by Isaac Newton and Gottfried Leibniz, calculus is the study of change. Want to know how fast a pool drains when it’s raining? Or how to chart a
planet’s orbit while accounting for the Sun’s gravitational pull? Or, even, how quickly medicine starts to act on the human body?
Calculus is complex, but it has vital importance in physics, engineering, economics, and other major fields. Students might need calculus homework help to understand ever-intensifying limits,
derivatives, and integrals, but doing so will yield ever greater academic enrichment in the long run.
Academic Prerequisites: Math Courses Required in College
Let’s return to the question with which we started: Is math a requirement in college? At American universities, the answer is likely “yes.”
Even for humanities majors, math credits are necessary to fulfill general education curricula. At Arizona State University, for example, every student must pass general college mathematics, college
algebra, precalculus, or any other more advanced course. Similarly, Rutgers University requires that all students, alongside subjects like English Composition, attain three credits in logical and
quantitative reasoning.
Though these classes may sound like a headache, they don’t have to weigh down your GPA with the right attention and focus.
Getting Ahead of College Math: Some Useful Summer Programs
For those who want to prepare adequately, summer math programs for college students are an effective way to stay in front of any impending math prerequisites. For those who are committed to studying
higher-level mathematics, such programs create opportunities to meet like-minded students and produce original, often collaborative contributions to mathematical research.
In either case, pursuing additional studies outside of the conventional school year shows dedication, persistence, and commendable work ethic. These three programs vary in attendance and exclusivity,
but in doing so they demonstrate that students, should they only look for them, can find resources that suit their individual needs.
An Early Start: The Stanford University Mathematics Camp
Often the world of college preparation, perhaps spurred by watchful parents, begins well before students start to write the Common App. That’s why programs like Stanford University’s Mathematics Camp
(SUMaC) cater to rising juniors and seniors who seek advanced mathematical enrichment, not to mention the chance to study at one of the world’s premier research institutes.
SUMaC runs every June and July, combining online with “residential” curricula. Given that students attend before college application season, the program constitutes a strong résumé booster while
uniting future university trailblazers in their dedication to number theory, algebraic topology, and other relevant academic subjects.
“Think Deeply About Simple Things”: The Ross Mathematics Program
Located in both Columbus, Ohio and Terre Haute, Indiana, the Ross Program invites motivated pre-college students to spend six weeks attending intensive lectures and discussions on number theory.
Since 1957, the program has trained future leaders in math, business, and scientific fields.
Apart from its academic advantages, Ross also gives recent high school graduates an early chance to experience college life. Students are organized in small “family groups,” live in dorms, and must
be responsible for their own schedules. For soon-to-be freshmen worried about leaving home, the program thus offers a headstart on not just academic but social and emotional development.
The Pre-Ph.D. Track: The Bernoulli Center for Fundamental Studies
Headquartered in Lausanne, Switzerland, the Bernoulli Center invites a select group of prospective researchers from around the world to attend its Young Researchers in Mathematics summer program.
The central goal of YRM, which covers all expenses for participants, is to produce and publish a collaborative research project over the course of one week. Though selective, the program gives
aspiring mathematicians the chance to engage in professional academic study, make international colleagues, and gain some experience abroad at a prestigious Swiss institution. Though applications are
closed for this year, the Bernoulli Center keeps general program information updated and publicly available here.
Why Does This Matter, Anyway? The Overall Importance of Math
In times of budget cuts, the arts and humanities are often the first to face constriction. Therefore, it seems we’re more likely to hear impassioned defenses of why studying literature, history, or
theater matters for social and cultural enrichment.
These defenses are quite correct, but their prominence implies that it’s obvious why the “other stuff,” like math, also matters.
For our purposes, math learning is important because it teaches critical thinking, creativity, and a similar set of personal attributes transferable to almost any analytical discipline. In this
section, let’s explore the specific benefits of studying math and, for good measure, some practical learning tips.
The Soft Skills: Why Math Matters
More than almost any other subject, studying math equips students with essential “character” assets – qualitative over quantitative skills – that facilitate success in diverse fields. If that seems
counterintuitive, just remember that college courses appreciate mathematics not just in terms of concrete operations, but also for its abstract and theoretical depth.
These “soft” skills include communication, independent problem-solving, logical reasoning, collaboration, and even reading comprehension. In the long run, employers value these qualities because they
make employees dependable, adaptable, and easy to work with. For students, they prove that studying math is not just “job training,” but a means of achieving a well-rounded, enriching education.
The Hard Skills: The Practical Applications of Learning Math
Of course, learning math also teaches students specialized skills that unlock unique opportunities in STEM careers. These include training in statistical analysis; advanced calculus; linear algebra;
and courses which directly prepare students for specific fields, like “applied engineering mathematics.”
While many math majors pursue jobs in accounting or actuarial sciences after graduation, the hard skills they learn along the way thus allow for welcome flexibility. Therefore, studying math is also
important because it empowers “mathematicians” to become statisticians, computer programmers, financial analysts, data scientists, and much else. For all of its challenges, math is important in
opening worlds of possibility.
Lastly, Some Practical Advice: How Best to Study Math
Finally, here are some tips for learning math at university:
• Don’t cram. Take initiative and study a little bit every day. Math is conceptual; concepts take time to absorb. Building knowledge incrementally will strengthen your foundation for the next exam
and future ones.
• Read the textbook. Sometimes the wording is confusing, but they’re better than relying only upon disorganized lecture notes. Plus textbooks contain useful exercises that, with repetition, can
hammer home new material.
• Do every practice set. Don’t do four problems and assume you “get it.” Practice for each scenario, and encounter the most difficult ones before your exam.
College Math and the Path to Success
We’ve discussed various aspects of studying math in college: whether it’s necessary (it is), what it actually looks like, and how you can prepare before and during university for its newfound
In the end, “how much” math you learn in college depends upon what you choose to study. But whether you’re meeting a prerequisite or a full-on mathematics major, it’s incumbent upon you to remain
attentive, develop useful study habits, and utilize all resources that can help you succeed. Grades are important, but so is math more broadly – in terms of professional training, skill-building,
and simply providing enrichment in each student’s academic life. | {"url":"https://www.youngresearchersinmaths.org/","timestamp":"2024-11-07T08:48:01Z","content_type":"application/xhtml+xml","content_length":"30709","record_id":"<urn:uuid:68eb2cd1-b1d4-48b7-8ab4-ab4b64d4c4f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00497.warc.gz"} |
iirlpnormc
Constrained least Pth-norm optimal IIR filter
[num,den] = iirlpnormc(n,d,f,edges,a)
[num,den] = iirlpnormc(n,d,f,edges,a,w)
[num,den] = iirlpnormc(n,d,f,edges,a,w,radius)
[num,den] = iirlpnormc(n,d,f,edges,a,w,radius,p)
[num,den] = iirlpnormc(n,d,f,edges,a,w,radius,p,dens)
[num,den] = iirlpnormc(n,d,f,edges,a,w,radius,p,dens,initnum,initden)
[num,den,err] = iirlpnormc(...)
[num,den,err,sos,g] = iirlpnormc(...)
[num,den] = iirlpnormc(n,d,f,edges,a) returns a filter having numerator order n and denominator order d which is the best approximation to the desired frequency response described by f and a in the
least-pth sense. The vector edges specifies the band-edge frequencies for multi-band designs. A constrained Newton-type algorithm is employed. n and d should be chosen so that the zeros and poles are
used effectively. See the Hints section. Always check the resulting filter using zplane.
[num,den] = iirlpnormc(n,d,f,edges,a,w) uses the weights in w to weight the error. w has one entry per frequency point (the same length as f and a) which tells iirlpnormc how much emphasis to put on
minimizing the error in the vicinity of each frequency point relative to the other points. f and a must have the same number of elements, which can exceed the number of elements in edges. This allows
for the specification of filters having any gain contour within each band. The frequencies specified in edges must also appear in the vector f. For example,
[num,den] = iirlpnormc(5,5,[0 .15 .4 .5 1],[0 .4 .5 1],...
[1 1.6 1 0 0],[1 1 1 10 10])
designs a lowpass filter with a peak of 1.6 within the passband.
[num,den] = iirlpnormc(n,d,f,edges,a,w,radius) returns a filter having a maximum pole radius of radius where 0<radius<1. radius defaults to 0.999999. Filters that have a reduced pole radius may
retain better transfer function accuracy after you quantize them.
[num,den] = iirlpnormc(n,d,f,edges,a,w,radius,p) where p is a two-element vector [pmin pmax] allows for the specification of the minimum and maximum values of p used in the least-pth algorithm.
Default is [2 128] which essentially yields the L-infinity, or Chebyshev, norm. pmin and pmax should be even. If p is 'inspect', no optimization will occur. This can be used to inspect the initial
pole/zero placement.
[num,den] = iirlpnormc(n,d,f,edges,a,w,radius,p,dens) specifies the grid density dens used in the optimization. The number of grid points is (dens*(n+d+1)). The default is 20. dens can be specified
as a single-element cell array. The grid is not equally spaced.
[num,den] = iirlpnormc(n,d,f,edges,a,w,radius,p,dens,initnum,initden) allows for the specification of the initial estimate of the filter numerator and denominator coefficients in vectors initnum and
initden. This may be useful for difficult optimization problems. The pole-zero editor in Signal Processing Toolbox™ software can be used for generating initnum and initden.
[num,den,err] = iirlpnormc(...) returns the least-Pth approximation error err.
[num,den,err,sos,g] = iirlpnormc(...) returns the second-order section representation in the matrix SOS and gain G. For numerical reasons you may find SOS and G beneficial in some cases.
• This is a weighted least-pth optimization.
• Check the radii and location of the resulting poles and zeros.
• If the zeros are all on the unit circle and the poles are well inside of the unit circle, try increasing the order of the numerator or reducing the error weighting in the stopband.
• Similarly, if several poles have a large radius and the zeros are well inside of the unit circle, try increasing the order of the denominator or reducing the error weight in the passband.
• If you reduce the pole radius, you might need to increase the order of the denominator.
The message
Poorly conditioned matrix. See the "help" file.
indicates that iirlpnormc cannot accurately compute the optimization because either:
1. The approximation error is extremely small (try reducing the number of poles or zeros — refer to the hints above).
2. The filter specifications have huge variation, such as a=[1 1e9 0 0].
Magnitude Response of Constrained Least Pth-norm Optimal IIR Filter
This example returns a lowpass filter whose pole radius is constrained to 0.8.
[b,a,err,s,g] = iirlpnormc(6,6,[0 .4 .5 1],[0 .4 .5 1],...
[1 1 0 0],[1 1 1 1],.8);
The magnitude response shows the lowpass nature of the filter. The pole/zero plot following shows that the poles are constrained to 0.8 as specified in the command.
[1] Antoniou, A., Digital Filters: Analysis, Design, and Applications, Second Edition, McGraw-Hill, Inc. 1993.
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
Usage notes and limitations:
• All inputs must be constant. Expressions or variables are allowed if their values do not change.
• Does not support syntaxes that have cell array input.
Version History
Introduced in R2011a | {"url":"https://se.mathworks.com/help/dsp/ref/iirlpnormc.html","timestamp":"2024-11-13T05:22:00Z","content_type":"text/html","content_length":"81091","record_id":"<urn:uuid:0d562dde-ced0-45b3-9fd9-eb39343befe2>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00050.warc.gz"} |
Mathematical Study of Complex Networks: Brain, Internet, and Power Grid
The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network (ii) communication networks, (iii)
electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems
from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, which are motivated by power systems. The first one
is “flow optimization over a flow network” and the second one is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.
Brain Networks: Neuroimaging data reveals the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links
(interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies
between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about the brain connectivity. Due to the electrical
properties of the brain, this problem will be investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node
voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden
inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples are available. For this purpose, we deploy
the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is
well-conditioned. However, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and
demonstrate its performance. Finally, the technique developed in this work will be applied to the resting-state fMRI data of a number of healthy subjects.
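For readers who want to experiment with this pipeline, here is a minimal Python sketch. It is an illustration added for this summary, not the modified algorithm developed in the thesis: it uses scikit-learn's generic GraphicalLassoCV and simulated signals rather than circuit or fMRI data.

import numpy as np
from sklearn.covariance import GraphicalLassoCV

# Simulate node signals whose sparse precision (inverse covariance) matrix
# encodes a hidden chain topology: node i is connected to node i+1.
rng = np.random.default_rng(0)
n_nodes = 8
precision = np.eye(n_nodes)
for i in range(n_nodes - 1):
    precision[i, i + 1] = precision[i + 1, i] = 0.4
covariance = np.linalg.inv(precision)

# Draw a limited number of samples, as in the measurement setting above.
samples = rng.multivariate_normal(np.zeros(n_nodes), covariance, size=200)

# Graphical lasso: estimate a sparse inverse covariance matrix.
estimated = GraphicalLassoCV().fit(samples).precision_

# Nonzero off-diagonal entries of the estimate are the recovered edges.
edges = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)
         if abs(estimated[i, j]) > 1e-2]
print("recovered edges:", edges)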
Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users in the Internet in such a way that the network resources are shared efficiently. Despite
the progress in the analysis and synthesis of the Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the
original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the
effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes, unless
a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an
optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to
the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a
convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is
allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown
injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named
generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation
is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations
(i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every
edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how
the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an
optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted
graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time
solvable due to the passivity of transmission lines and transformers.
Item Type: Thesis (Dissertation (Ph.D.))
Subject Keywords: Complex Networks, fMRI ,Communication Networks, Power Grid.
Degree Grantor: California Institute of Technology
Division: Engineering and Applied Science
Major Option: Control and Dynamical Systems
Thesis Availability: Public (worldwide access)
Research Advisor(s): • Doyle, John Comstock
Thesis Committee: • Doyle, John Comstock (chair)
• Murray, Richard M.
• Low, Steven H.
• Chandy, K. Mani
Defense Date: 15 May 2013
Record Number: CaltechTHESIS:05252013-081655550
Persistent URL: https://resolver.caltech.edu/CaltechTHESIS:05252013-081655550
DOI: 10.7907/E750-2M74
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 7753
Collection: CaltechTHESIS
Deposited By: Somayeh Sojoudi
Deposited On: 31 May 2013 21:18
Last Modified: 04 Oct 2019 00:01
Thesis Files
PDF - Final Version
See Usage Policy.
Repository Staff Only: item control page | {"url":"https://thesis.library.caltech.edu/7753/","timestamp":"2024-11-06T23:29:18Z","content_type":"application/xhtml+xml","content_length":"42280","record_id":"<urn:uuid:4f6edc76-ec5a-4bd4-bd63-4ee3b9946e13>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00129.warc.gz"} |
Re: [dev] [RFC] Design of a vim like text editor
From: Roberto E. Vargas Caballero <k0ga_AT_shike2.com>
Date: Wed, 17 Sep 2014 00:16:22 +0200

> No. Because the realloc that you have to do maybe has to move in the
> worst case n-1 elements in a new memory arena. So you have n (moving
> to the position) * n-1 (memcpy to new position in memory) = n^2. You
> can decrease the possibility of reallocation by allocating more
> memory than needed, but when you have the reallocation you get the
> O(n^2). In a list you don't ever have a reallocation.

Oops, I was mistaken here. It is 2n, which means a complexity of O(n), so
on this point you were right, but it doesn't change the discussion much.
For a summary of these complexities, see this table I took from somewhere:

                             Linked list     Array   Dynamic array    Balanced tree
Indexing                     O(n)            O(1)    O(1)             O(log n)
Insert/delete at beginning   O(1)            N/A     O(n)             O(log n)
Insert/delete at end         O(1)            N/A     O(1) amortized   O(log n)
Insert/delete in middle      search + O(1)   N/A     O(n)             O(log n)
Wasted space (average)       O(n)            0       O(n)             O(n)
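A quick, informal way to see the dynamic-array column in practice, using Python's list (which is a dynamic array); this snippet is an addition for illustration, not part of the archived mail:

import timeit

# Appending at the end is amortized O(1): total time grows roughly linearly in n.
# Inserting at the front is O(n) per operation: total time grows quadratically.
for n in (10_000, 20_000, 40_000):
    t_append = timeit.timeit("lst.append(0)", setup="lst = []", number=n)
    t_front = timeit.timeit("lst.insert(0, 0)", setup="lst = []", number=n)
    print(f"n={n:>6}  append total: {t_append:.4f}s  front-insert total: {t_front:.4f}s")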
Roberto E. Vargas Caballero
Received on Wed Sep 17 2014 - 00:16:22 CEST
This archive was generated by hypermail 2.3.0 : Wed Sep 17 2014 - 00:24:07 CEST | {"url":"http://lists.suckless.org/dev/1409/23575.html","timestamp":"2024-11-13T11:58:08Z","content_type":"application/xhtml+xml","content_length":"7782","record_id":"<urn:uuid:54e97a77-8b4c-453a-b4ab-1b767d79afa8>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00816.warc.gz"} |
Nautical miles (International) to Femtometers Converter
How to use this Nautical miles (International) to Femtometers Converter
Follow these steps to convert given length from the units of Nautical miles (International) to the units of Femtometers.
1. Enter the input Nautical miles (International) value in the text field.
2. The calculator converts the given Nautical miles (International) into Femtometers in real time using the conversion formula, and displays the result under the Femtometers label. You do not need to click any button. If the input changes, the Femtometers value is re-calculated automatically.
3. You may copy the resulting Femtometers value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on button present below the input field.
What is the Formula to convert Nautical miles (International) to Femtometers?
The formula to convert given length from Nautical miles (International) to Femtometers is:
Length[(Femtometers)] = Length[(Nautical miles (International))] × 1852000000000000000
Substitute the given value of length in nautical miles (international), i.e., Length[(Nautical miles (International))] in the above formula and simplify the right-hand side value. The resulting value
is the length in femtometers, i.e., Length[(Femtometers)].
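The same conversion can be sketched in a few lines of Python (added here for illustration):

NMI_TO_FM = 1852 * 10**15  # 1 nmi = 1852 m exactly, and 1 m = 10^15 fm

def nautical_miles_to_femtometers(nmi):
    """Convert a length from international nautical miles to femtometers."""
    return nmi * NMI_TO_FM

print(nautical_miles_to_femtometers(120))  # 222240000000000000000
print(nautical_miles_to_femtometers(500))  # 926000000000000000000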
Consider that a luxury yacht cruises 120 nautical miles on a journey.
Convert this distance from nautical miles to Femtometers.
The length in nautical miles (international) is:
Length[(Nautical miles (International))] = 120
The formula to convert length from nautical miles (international) to femtometers is:
Length[(Femtometers)] = Length[(Nautical miles (International))] × 1852000000000000000
Substitute the given length Length[(Nautical miles (International))] = 120 in the above formula.
Length[(Femtometers)] = 120 × 1852000000000000000
Length[(Femtometers)] = 222240000000000000000
Final Answer:
Therefore, 120 nmi is equal to 222240000000000000000 fm.
The length is 222240000000000000000 fm, in femtometers.
Consider that an aircraft travels 500 nautical miles to reach its destination.
Convert this distance from nautical miles to Femtometers.
The length in nautical miles (international) is:
Length[(Nautical miles (International))] = 500
The formula to convert length from nautical miles (international) to femtometers is:
Length[(Femtometers)] = Length[(Nautical miles (International))] × 1852000000000000000
Substitute the given length Length[(Nautical miles (International))] = 500 in the above formula.
Length[(Femtometers)] = 500 × 1852000000000000000
Length[(Femtometers)] = 926000000000000000000
Final Answer:
Therefore, 500 nmi is equal to 926000000000000000000 fm.
The length is 926000000000000000000 fm, in femtometers.
Nautical miles (International) to Femtometers Conversion Table
The following table gives some of the most used conversions from Nautical miles (International) to Femtometers.
Nautical miles (International) (nmi) Femtometers (fm)
0 nmi 0 fm
1 nmi 1852000000000000000 fm
2 nmi 3704000000000000000 fm
3 nmi 5556000000000000000 fm
4 nmi 7408000000000000000 fm
5 nmi 9260000000000000000 fm
6 nmi 11112000000000000000 fm
7 nmi 12964000000000000000 fm
8 nmi 14816000000000000000 fm
9 nmi 16668000000000000000 fm
10 nmi 18520000000000000000 fm
20 nmi 37040000000000000000 fm
50 nmi 92600000000000000000 fm
100 nmi 185200000000000000000 fm
1000 nmi 1.852e+21 fm
10000 nmi 1.852e+22 fm
100000 nmi 1.852e+23 fm
Nautical miles (International)
A nautical mile (international) is a unit of length used in maritime and aviation contexts. One nautical mile is equivalent to 1,852 meters or approximately 1.15078 miles.
The nautical mile is defined based on the Earth's circumference and is equal to one minute of latitude.
Nautical miles are used worldwide for navigation at sea and in the air. They are particularly important for charting courses and distances in maritime and aviation industries, ensuring consistency
and accuracy in navigation.
A femtometer (fm) is a unit of length in the International System of Units (SI). One femtometer is equivalent to 0.000000000000001 meters, or 1 × 10^(-15) meters.
The femtometer is defined as one quadrillionth of a meter, making it a very small unit of measurement used for measuring atomic and subatomic distances.
Femtometers are commonly used in nuclear physics and particle physics to describe the sizes of atomic nuclei and the ranges of fundamental forces at the subatomic level.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Nautical miles (International) to Femtometers in Length?
The formula to convert Nautical miles (International) to Femtometers in Length is:
Nautical miles (International) * 1852000000000000000
2. Is this tool free or paid?
This Length conversion tool, which converts Nautical miles (International) to Femtometers, is completely free to use.
3. How do I convert Length from Nautical miles (International) to Femtometers?
To convert Length from Nautical miles (International) to Femtometers, you can use the following formula:
Nautical miles (International) * 1852000000000000000
For example, if you have a value in Nautical miles (International), you substitute that value in place of Nautical miles (International) in the above formula, and solve the mathematical expression to
get the equivalent value in Femtometers. | {"url":"https://convertonline.org/unit/?convert=nautical_miles-femtometers","timestamp":"2024-11-10T16:33:38Z","content_type":"text/html","content_length":"92523","record_id":"<urn:uuid:b51385ac-fc96-4797-b212-7e71422b6831>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00105.warc.gz"} |
template<typename Traits>
class CGAL::Nef_polyhedron_S2< Traits >
An instance of data type Nef_polyhedron_S2<Traits> is a subset of the sphere \( S_2\) that is the result of forming complements and intersections starting from a finite set H of halfspaces bounded by
a plane containing the origin.
Halfspaces correspond to hemispheres of \( S_2\) and are therefore modeled by oriented great circles of type Sphere_circle. Nef_polyhedron_S2 is closed under all binary set operations intersection,
union, difference, complement and under the topological operations boundary, closure, and interior.
template< class Nef_polyhedronTraits_S2,
class Nef_polyhedronItems_S2 = CGAL::SM_items,
class Nef_polyhedronMarks = bool >
The first parameter requires one of the following exact kernels: Homogeneous, Simple_homogeneous parametrized with Gmpz, leda_integer or any other number type modeling \(\mathbb{Z}\), or Cartesian,
Simple_cartesian parametrized with Gmpq, leda_rational, Quotient<Gmpz> or any other number type modeling \(\mathbb{Q}\).
The second parameter and the third parameter are for future considerations. Neither Nef_polyhedronItems_S2 nor Nef_polyhedronMarks is specified, yet. Do not use other than the default types for these
two template parameters.
Exploration - Point location - Ray shooting
As Nef polyhedra are the result of forming complements and intersections starting from a set H of half-spaces that are defined by oriented lines in the plane, they can be represented by an attributed
plane map \( M = (V,E,F)\). For topological queries within M the following types and operations allow exploration access to this structure.
Input and Output
A Nef polyhedron N can be visualized in an OpenGL window. The output operator is defined in the file CGAL/IO/Nef_polyhedron_2_Window-stream.h.
Nef polyhedra are implemented on top of a halfedge data structure and use linear space in the number of vertices, edges and facets. Operations like empty take constant time. The operations clear,
complement, interior, closure, boundary, regularization, input and output take linear time. All binary set operations and comparison operations take time \(O(n \log n)\) where \( n\) is the size of
the output plus the size of the input.
The point location and ray shooting operations are implemented in the naive way. The operations run in linear query time without any preprocessing.
Nef_S2/nef_s2_construction.cpp, Nef_S2/nef_s2_exploration.cpp, Nef_S2/nef_s2_point_location.cpp, and Nef_S2/nef_s2_simple.cpp.
enum Boundary { EXCLUDED , INCLUDED }
construction selection.
enum Content { EMPTY , COMPLETE }
construction selection.
typedef unspecified_type SVertex_const_handle
non-mutable handle to svertex.
typedef unspecified_type SHalfedge_const_handle
non-mutable handle to shalfedge.
typedef unspecified_type SHalfloop_const_handle
non-mutable handle to shalfloop.
typedef unspecified_type SFace_const_handle
non-mutable handle to sface.
typedef unspecified_type SVertex_const_iterator
non-mutable iterator over all svertices.
typedef unspecified_type SHalfedge_const_iterator
non-mutable iterator over all shalfedges.
typedef unspecified_type SHalfloop_const_iterator
non-mutable iterator over all shalfloops.
typedef unspecified_type SFace_const_iterator
non-mutable iterator over all sfaces.
typedef unspecified_type SHalfedge_around_svertex_const_circulator
circulating the adjacency list of an svertex v.
typedef unspecified_type SHalfedge_around_sface_const_circulator
circulating the sface cycle of an sface f.
typedef unspecified_type SFace_cycle_const_iterator
iterating all sface cycles of an sface f.
typedef unspecified_type Mark
attributes of objects (vertices, edges, faces).
typedef unspecified_type size_type
size type
typedef unspecified_type Object_handle
a generic handle to an object of the underlying plane map.
void clear (Content plane=EMPTY)
makes N the empty set if plane == EMPTY and the full plane if plane == COMPLETE.
bool is_empty ()
returns true if N is empty, false otherwise.
bool is_sphere ()
returns true if N is the whole sphere, false otherwise.
bool contains (Object_handle h)
returns true iff the object h is contained in the set represented by N.
bool contained_in_boundary (Object_handle h)
returns true iff the object h is contained in the \( 1\)-skeleton of N.
Object_handle locate (const Sphere_point &p)
returns a generic handle h to an object (face, halfedge, vertex) of the underlying plane map that contains the point p in its relative interior.
Object_handle ray_shoot (const Sphere_point &p, const Sphere_direction &d)
returns a handle h with N.contains(h) that can be converted to a Vertex_/Halfedge_/Face_const_handle as described above.
Object_handle ray_shoot_to_boundary (const Sphere_point &p, const Sphere_direction &d)
returns a handle h that can be converted to a Vertex_/Halfedge_const_handle as described above.
Additionally there are operators *,+,-,^,! which implement the binary operations intersection, union, difference, symmetric difference, and the unary operation complement respectively.
There are also the corresponding modification operations *=, +=, -=, ^=.
There are also comparison operations like <,<=,>,>=,==,!= which implement the relations subset, subset or equal, superset, superset or equal, equality, inequality, respectively.
Nef_polyhedron_S2< K > complement ()
returns the complement of N in the plane.
Nef_polyhedron_S2< K > interior ()
returns the interior of N.
Nef_polyhedron_S2< K > closure ()
returns the closure of N.
Nef_polyhedron_S2< K > boundary ()
returns the boundary of N.
Nef_polyhedron_S2< K > regularization ()
returns the regularized polyhedron (closure of interior).
Nef_polyhedron_S2< K > intersection (const Nef_polyhedron_S2< K > &N1)
returns N \( \cap\) N1.
Nef_polyhedron_S2< K > join (const Nef_polyhedron_S2< K > &N1)
returns N \( \cup\) N1.
Nef_polyhedron_S2< K > difference (const Nef_polyhedron_S2< K > &N1)
returns N \( -\) N1.
Nef_polyhedron_S2< K > symmetric_difference (const Nef_polyhedron_S2< K > &N1)
returns the symmetric difference N - N1 \( \cup\) N1 - N.
Size_type number_of_svertices ()
returns the number of svertices.
Size_type number_of_shalfedges ()
returns the number of shalfedges.
Size_type number_of_sedges ()
returns the number of sedges.
Size_type number_of_shalfloops ()
returns the number of shalfloops.
Size_type number_of_sloops ()
returns the number of sloops.
Size_type number_of_sfaces ()
returns the number of sfaces.
Size_type number_of_sface_cycles ()
returns the number of sface cycles.
Size_type number_of_connected_components ()
calculates the number of connected components of P.
void print_statistics (std::ostream &os=std::cout)
print the statistics of P: the number of vertices, edges, and faces.
void check_integrity_and_topological_planarity (bool faces=true)
checks the link structure and the genus of P. | {"url":"https://doc.cgal.org/latest/Nef_S2/classCGAL_1_1Nef__polyhedron__S2.html","timestamp":"2024-11-02T10:40:02Z","content_type":"application/xhtml+xml","content_length":"67786","record_id":"<urn:uuid:125818d1-94d2-4f38-887a-f65a07942a22>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00104.warc.gz"} |
Phone Pi
Today is the much celebrated pi-day . Ok, perhaps it’s not that big a holiday – I don’t think Hallmark is selling any pi-day cards yet – but anyone who uses google today knows that something
mathematical and geeky is being honored. I promise not to go into diatribes about calculations of the first few million digits of pi, or how many digits one needs to keep in order to calculate the
radius of the universe to atomic accuracy. Instead, I merely want to relay a simple short story a colleague of mine recounted to me years ago. Several years ago, before pi-day was famous, a student
called the phone number associated with the digits in pi that appear after the decimal point, i.e., 1-415-926-5358. Apparently this is rather common now, and in fact, appears to be promoted as a
mnemonic for the first 10 decimal places for those folks who need to have those numbers handy at all times. But this story happened in earlier times, back before the Bay Area split into several area
codes. And, as the clever reader has already guessed, that student reached the SLAC main gate. How cool to phone pi and reach the main gate of a major national scientific research laboratory! Alas,
time and phone numbers march on, and nowadays phoning pi yields a “your call cannot be completed as dialed” message. (And I’m told that I cannot publish this post without noting that 3-14-15 will be
a more accurate pi day.) | {"url":"https://www.discovermagazine.com/the-sciences/phone-pi","timestamp":"2024-11-11T07:42:23Z","content_type":"text/html","content_length":"82450","record_id":"<urn:uuid:346db5cf-ad55-4f34-9762-f2fa0f880cb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00520.warc.gz"} |
Kähler-Einstein metrics on complex surfaces with C₁ > 0
Various estimates of the lower bound of the holomorphic invariant α(M), defined in [T], are given here by using branched coverings, potential estimates and Lelong numbers of positive, d-closed (1, 1)
currents of certain type, etc. These estimates are then applied to produce Kähler-Einstein metrics on complex surfaces with C₁ > 0; in particular, we prove that there are Kähler-Einstein structures
with C₁ > 0 on any manifold of differential type {Mathematical expression}.
All Science Journal Classification (ASJC) codes
• Statistical and Nonlinear Physics
• Mathematical Physics
Dive into the research topics of 'Kähler-Einstein metrics on complex surfaces with C₁ > 0'. Together they form a unique fingerprint.
A Springy Thingy, Part 2 - David The Maths TutorA Springy Thingy, Part 2
A Springy Thingy, Part 2
So last time, I presented the equation that describes the position of a mass on a spring:
Animated portion is licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license
The equation is
\[ {x}\hspace{0.33em}{=}\hspace{0.33em}{A}\hspace{0.33em}\sin\left({\frac{180t}{\mathit{\pi}}\sqrt{\frac{k}{m}}}\right) \]
where x is the position of the mass at a given time t in seconds, k is the spring constant, and m is the mass in kg. This was developed from the equation that describes the forces on the spring
(gravity and the spring), and through calculus, out pops the equation above. This equation is the sine of stuff in brackets multiplied by a number A.
Even though the stuff in the brackets looks rather ominous, we are still just taking the sine of it and the sine only goes from -1 to 1. So the maximum extent of the mass is from –A to A. Now let’s
look at the stuff in the brackets.
The 180 and π are just there to change the rest of the expression so that you can press the sine button on your calculator in the “degrees” mode. Normally, when engineers model something like
this, they use radians and not degrees. I have not explained what radians are yet so I’ve included an adjustment (the 180 and the π) so that you can continue to use degrees. I think I’ll explain
radians in my next post.
The rest of the numbers, t, k, and m are the real meat of the model. For simplicity, I have started time at 0 seconds when the mass is at its rest position and is moving upwards in the postive
direction. So you would expect the position of the mass to change with time and that is what the t in the expression does. The k and the m determine how fast or how slowly the mass oscillates. Let’s
actually use some numbers instead of letters here for a specific mass and spring.
Now let’s assume the spring has a spring constant of 1 kg/s² (I’ll discuss these units in my next post), and the mass connected to it is 1 kg. That means the stuff in the square root sign (called a
radical) is just 1 and the square root of 1 is 1. And let’s further assume that I start the spring moving by stretching the spring 5 cm from its rest position. So now, the position equation above
simplifies to
\[ {x}\hspace{0.33em}{=}\hspace{0.33em}{5}\hspace{0.33em}\sin\left({\frac{180t}{\mathit{\pi}}}\right) \]
Starting at time t = 0, you can choose various values of t, compute the stuff in the brackets, use the “SIN” button on your calculator which is in the “degrees” mode, and then multiply by 5. So for example, at t = 1 sec, 180/π is 57.2958. Taking the sine of that gives 0.8415 and then multiplying that by 5 gives 4.2073 cm. So at 1 sec, the mass is 4.2073 cm above its resting position. You can
plot this point and many others to graph this, or you can be lazy like me and use a graphing calculator. The graph of the position of the mass versus time for this scenario is shown below.
So no surprise, a sine wave. Now remember when I first talked about sine waves, I talked about the wavelength. Here I have indicated the wavelength as 6.28 sec. When dealing with time, the wavelength
is called the period and usually represented with the symbol T. The period is the length of time it takes for one full cycle of motion. So it takes the mass 6.28 seconds to make one complete bounce.
It turns out that you do not have to graph the curve to find this:
\[ {T}\hspace{0.33em}{=}\hspace{0.33em}{2}{\mathit{\pi}}\sqrt{\frac{m}{k}} \]
Since our k and m are each 1, T in this case is just 2π. Funny how π keeps cropping up. Again, I’ll explain that in my next post on radians.
Associated with the period is something called frequency. The period is how long it takes for one complete cycle to occur, whereas the frequency is how many complete cycles occur in 1 second.
Frequency is the reciprocal of the period and vice versa. That is f = 1/T. So for our mass, the frequency is 1/6.28 or 0.16 cycles per second. The term “cycles per second” is given a special unit
called hertz which is abbreviated as hz. You may have heard this term before.
As you change the values of k and m, the values of T and f will change as well. If the spring gets stiffer (a higher k), you would expect the frequency to increase, that is it will bounce faster. You
would expect a heavier mass to slow down the frequency and it does. I will leave it as an exercise for the student to check this using a graphing calculator or Excel.
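If you prefer code to a graphing calculator or Excel, here is a small Python sketch (my addition) that evaluates the position equation and the period for any k and m:

import math

def position(t, k=1.0, m=1.0, amplitude=5.0):
    # x = A * sin((180 t / pi) * sqrt(k/m)), with the sine taken in degrees
    angle_degrees = (180 * t / math.pi) * math.sqrt(k / m)
    return amplitude * math.sin(math.radians(angle_degrees))

def period(k=1.0, m=1.0):
    # T = 2 * pi * sqrt(m / k)
    return 2 * math.pi * math.sqrt(m / k)

print(position(1))        # ~4.2074 cm, matching the hand calculation above
print(period())           # ~6.2832 s for k = 1, m = 1
print(period(k=4, m=1))   # stiffer spring: shorter period (~3.1416 s)
print(period(k=1, m=4))   # heavier mass: longer period (~12.5664 s)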
A good simulation on the web that shows the effect of changing mass an spring constant is at https://www.physicsclassroom.com/Physics-Interactives/Waves-and-Sound/Mass-on-a-Spring/
Mass-on-a-Spring-Interactive. This sets up the graph a bit differently than I do here, but the frequency changes are easy to see. Also, you can add damping to this which I did not include in this
post to keep it simple, but you can play with that as well on this site. | {"url":"https://davidthemathstutor.com.au/2019/02/03/a-springy-thingy-part-2/","timestamp":"2024-11-03T13:29:03Z","content_type":"text/html","content_length":"49078","record_id":"<urn:uuid:647abe20-56d4-4b16-a87a-a8eae77b597e>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00478.warc.gz"} |
Product - bettermarks
Created by teachers for teachers
Our goal is to facilitate the learning and teaching of mathematics. To reach this goal, bettermarks employs teachers, mathematicians, academic educationalists and software specialists to work
together. Our adaptive math books guide students through exercises step by step with constructive feedback. As a teacher, you receive comprehensive reports on all your students‘ activity. | {"url":"https://us.bettermarks.com/product/","timestamp":"2024-11-15T02:25:07Z","content_type":"text/html","content_length":"48674","record_id":"<urn:uuid:4686c303-ab15-4a00-9fa5-3ad5777c9ea1>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00578.warc.gz"} |
Disjoint of Sets using Venn Diagram | Disjoint of Sets | Non-overlapping Sets
Disjoint of Sets using Venn Diagram
Disjoint of sets using Venn diagram is shown by two non-overlapping closed regions and said inclusions are shown by showing one closed curve lying entirely within another.
Two sets A and B are said to be disjoint, if they have no element in common.
Thus, A = {1, 2, 3} and B = {5, 7, 9} are disjoint sets; but the sets C = {3, 5, 7} and D = {7, 9, 11} are not disjoint; for, 7 is the common element of C and D.
Two sets A and B are said to be disjoint, if A ∩ B = ϕ. If A ∩ B ≠ ϕ, then A and B are said to be intersecting or overlapping sets.
Examples to show disjoint of sets using Venn diagram:
1. If A = {1, 2, 3, 4, 5, 6}, B = {7, 9, 11, 13, 15} and C = {6, 8, 10, 12, 14} then A and B are disjoint sets since they have no element in common, while A and C are intersecting sets since 6 is the
common element in both.
2. (i) Let M = Set of students of class VII
And N = Set of students of class VIII
Since no student can be common to both the classes; therefore set M and set N are disjoint.
(ii) X = {p, q, r, s} and Y = {1, 2, 3, 4, 5}
Clearly, set X and set Y have no element common to both; therefore set X and set Y are disjoint sets.
A = {a, b, c, d} and B = {Sunday, Monday, Tuesday, Thursday} are disjoint because they have no element in common.
P = {1, 3, 5, 7, 11, 13} and Q = {January, February, March} are disjoint because they have no element in common.
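In code, disjointness is a one-line check. Here is a short Python illustration (added here) mirroring the examples above:

A = {1, 2, 3, 4, 5, 6}
B = {7, 9, 11, 13, 15}
C = {6, 8, 10, 12, 14}

print(A.isdisjoint(B))   # True: A and B share no element
print(A & C)             # {6}: A and C intersect, so they are overlapping sets
print(B & C == set())    # True: the intersection of disjoint sets is the empty set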
1. Intersection of two disjoint sets is always the empty set.
2. In each Venn diagram, U is the universal set and A, B and C are the sub-sets of U.
Combination tones and the nonlinearity of the ear
Some time ago I published a blog post about combination tones and the nonlinearity of the human ear.
Per request, I will show here how to play with this topic in Mathematica.
A combination tone (also called Tartini tone after the violinist Giuseppe Tartini who famously described them) is a phenomenon where playing two tones of different frequencies at the same time causes
a third tone (or several others) to be heard as well. As you will see below, this is a physical effect: the combination tone appears due to nonlinear transmission in the middle ear.
Before starting, let's set up the Play function to output at higher than default quality:
SetOptions[Play, SampleDepth -> 16, SampleRate -> 22050]
You can go to a full 44100 Hz, but then Play will be slower. Using a lower sampling rate risks audible aliasing effects.
We will play the two tones on separate stereo channels (Play[{left, right}, ...]). This is so that they only mix in our ears, and not in your audio playback electronics. I want to convince you that
the effect is due to the ear's nonlinearity, and not that of your speakers or amplifier. So please listen to the samples with loudspeakers (not headphones), so both of your ears can hear both
channels at the same time.
Since this is a nonlinear effect, it will be audible only if the volume is high enough. Turn up the volume a bit, or better: lean closer to (or away from) your speakers to control the volume level
near your ears.
I also advise you not to use the interface of Audio objects for stereo playback during these experiments. Unfortunately, Mathematica 11.0 and 11.1 have a bug (at least on OS X) where sounds are mixed
down to mono before playback. This affects only playback, not processing. The interface presented by the older Sound objects doesn't have this problem.
The experiment
Let's start by playing a 1000 Hz and a 1500 Hz tone together:
Play[{Sin[1000 2Pi t], Sin[1500 2Pi t]}, {t, 0, 3}]
If you listen carefully, you may be able to hear a lower pitch tone at $1500-1000 = 500 \;\mathrm{Hz}$. The effect is fairly subtle, and many people have trouble hearing this. It may be easier to
hear if you compare it to an actual 500 Hz tone, Play[Sin[500 2Pi t], {t, 0, 3}].
There is a less often used, but much better way to demonstrate the effect. Since the most prominent combination tone tends to be the difference of the two frequencies, $f_c = |f_1 - f_2|$, we can use
a steady tone and a lower falling tone. Their difference will be increasing. It is much easier to notice that there is a third tone present when its pitch is clearly changing in another direction
than that of the base tones we are actually playing.
How do we create a tone with a changing frequency? The phase of a wave, $\varphi(t)$, is the integral of its angular frequency, $\omega(t) = 2\pi f(t)$. To get a tone whose pitch falls by 100 Hz per
second, starting at 1300 Hz, we need to use the following phase:
Integrate[1300 - 100 t, t]
(* 1300 t - 50 t^2 *)
So let's try it:
snd = Play[{Sin[1500 2 Pi t], Sin[(1300 t - 50 t^2) 2 Pi]}, {t, 0, 3}]
We play a steady and a falling tone. But I can clearly hear a rising tone as well. If we play only one channel at a time, the rising tone is no longer audible. Try it: AudioChannelSeparate[snd].
The explanation
Where does the combination tone come from, and how is it related to nonlinear transmission? Let's think about what happens if we pass the sum of two sine waves of different frequencies through a
non-linear amplifier, described by the function $a(u)$. (This is admittedly a much simplified model of what happens a nonlinear oscillator would be more accurate. But it's simple and it explains the
phenomenon.) The Taylor expansion of a non-linear $a(u)$ will also contain higher order terms:
$$a(u) = c_1 u + c_2 u^2 + c_3 u^3 + \cdots$$
The linear term, $u$, doesn't change the signal. What about the square term, $u^2$? Mathematica makes it easy to do the calculation:
(Sin[w1 t] + Sin[w2 t])^2 // TrigReduce
(* 1/2 (2 - Cos[2 t w1] - Cos[2 t w2] + 2 Cos[t w1 - t w2] - 2 Cos[t w1 + t w2]) *)
Notice that the sum (w1+w2) and the difference (w1-w2) of the frequencies appeared too (in addition to the harmonics 2 w1 and 2 w2 ). This explains why the most prominent combination tone we are
hearing is the difference tone.
What about the third order term?
(Sin[w1] + Sin[w2])^3 // TrigReduce
(* 1/4 (9 Sin[w1] - Sin[3 w1] - 3 Sin[w1 - 2 w2] +
3 Sin[2 w1 - w2] + 9 Sin[w2] - Sin[3 w2] - 3 Sin[2 w1 + w2] -
3 Sin[w1 + 2 w2]) *)
Now we have 2 w1+w2, 2 w1-w2, w1-2 w2 and w1+2 w2.
Generally, the $k$th order term will introduce $n \omega_1 + m \omega_2$ integer linear combinations, where $n,m \in \mathbb{Z}$ and $|n|,|m| < k$.
Where does the nonlinearity come from? We play the two sounds on different stereo channels (ideally on loudspeakers which are in contained in separate housings), to rule out any effects coming from
the electronics. Transmission through the air is fairly linear, so that can be ruled out too. What remains is our ear.
The higher order a term, the less loud the corresponding combination tones are. However, human loudness perception is logarithmic, so these do not necessarily sound so much quieter as to be inaudible. Can we hear the third order tones?
If I play the last example for a bit longer than in our original experiment,
Play[{Sin[1500 2 Pi t], Sin[(1300 t - 50 t^2) 2 Pi]}, {t, 0, 5}]
towards the end of the sample I hear a much more sharply falling tone as well. This happens to be the 2 w2-w1 third-order combination tone. We notice it towards the end only because, due to our
logarithmic perception of the pitch, it will appear to fall with an accelerating rate as it approaches zero. This makes it stand out.
This becomes clear from a LogPlot of the changing pitches:
a = 1300 - 100 t; (* falling pitch *)
b = 1500; (* steady pitch *)
sqc = RGBColor[0., 0.780007, 0.550005];
cuc = RGBColor[1., 0.659993, 0.069994];
tmax = 5;
pl = Legended[
 Show[
  LogPlot[{a, b} // Abs // Evaluate, {t, 0, tmax}, PlotStyle -> Directive[Thick, Black], GridLines -> Automatic],
  LogPlot[{a + b, a - b, 2 a, 2 b} // Abs // Evaluate, {t, 0, tmax}, PlotStyle -> sqc],
  LogPlot[{2 a - b, 2 b - a, 2 a + b, 2 b + a, 3 a, 3 b} // Abs // Evaluate, {t, 0, tmax}, PlotStyle -> cuc, PlotRange -> All],
  PlotRange -> Log@{50, 6000}, Frame -> True, Axes -> False,
  FrameLabel -> (Style[#, FontSize -> 14] &) /@ {"time (s)", "frequency (Hz)"}, AspectRatio -> 1/2, ImageSize -> 500,
  BaseStyle -> {FontFamily -> "Open Sans"},
  PlotRangePadding -> {Automatic, 0}
 ],
 LineLegend[{Black, sqc, cuc}, {"base", "from \!\(\*FormBox[SuperscriptBox[\(x\), \(2\)],TraditionalForm]\)", "from \!\(\*FormBox[SuperscriptBox[\(x\), \(3\)],TraditionalForm]\)"}]
]
We can also pass the sum of the two tones through a non-linear function of our own design, and look at the spectrogram to see all combination tones:
amp[u_] := Log[u + 1]
This "amplifier function" contains all higher order terms with coefficients that are comparable in magnitude:
amp[u] + O[u]^6
(* u - u^2/2 + u^3/3 - u^4/4 + u^5/5 + O[u]^6 *)
The spectrogram can be plotted as follows:
Spectrogram[
 Play[amp[0.25 (Sin[1500 2 Pi t] + Sin[(1300 t - 50 t^2) 2 Pi])], {t, 0, 6}],
 PlotRange -> {All, {0, 5000}},
 ColorFunction -> "Rainbow",
 FrameLabel -> {"time", "frequency"}
]
I hope you enjoyed this post. You can read a bit more about the background of combination tones, and listen to an additional experiment in the original blog post.
Staff Pick! Thank you for your wonderful contributions. Please, keep them coming!
Introduction: Hypothesis Test for Difference in Two Population Proportions
What you’ll learn to do: Construct and interpret an appropriate hypothesis test to compare two population/treatment group proportions.
• Under appropriate conditions, conduct a hypothesis test for comparing two population proportions or two treatments. State a conclusion in context.
• Interpret the P-value as a conditional probability.
• Identify type I and type II errors and select an appropriate significance level based on an analysis of the consequences of each type of error.
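As a preview of the mechanics, here is a minimal Python sketch of such a test (my own illustration with made-up counts; it is not part of the course module and assumes the statsmodels library):

from statsmodels.stats.proportion import proportions_ztest

# Made-up data: 45 of 200 in group 1 and 30 of 180 in group 2 show the trait.
successes = [45, 30]
observations = [200, 180]

z_stat, p_value = proportions_ztest(successes, observations)
print(f"z = {z_stat:.3f}, P-value = {p_value:.3f}")

alpha = 0.05  # significance level, chosen by weighing type I vs. type II error consequences
if p_value < alpha:
    print("Reject H0: evidence of a difference between the two population proportions.")
else:
    print("Fail to reject H0: no significant difference detected.")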
Seminars and Colloquia Schedule
Monday, October 21, 2024 - 11:30 for 1 hour (actually 50 minutes)
Skiles 005
Julia Lindberg – UT Austin
Density estimation for Gaussian mixture models is a classical problem in statistics that has applications in a variety of disciplines. Two solution techniques are commonly used for this problem: the
method of moments and maximum likelihood estimation. This talk will discuss both methods by focusing on the underlying geometry of each problem. | {"url":"https://math.gatech.edu/seminar-and-colloquia-schedule/2024-W43","timestamp":"2024-11-11T19:33:41Z","content_type":"text/html","content_length":"50097","record_id":"<urn:uuid:92ab862c-ccf9-4474-98d2-2bf21a65ae8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00816.warc.gz"} |
Frigid Fridays: On each Friday last January, the temperature was a record low temperature for the date. On Friday, January 30, the mercury dropped to 5°F,
European Mathematical Symbols
Dark Ages
Around 500-1000 AD
As Europe came out of the Dark Ages, a new culture of intellectualism began to emerge.
Mathematical enquiry was at its heart.
By the 16th century, mathematical duels occurred in city streets.
Great minds would pose each other mathematical problems, until one would surrender.
In this period, maths was expressed using words, making equations incredibly long-winded.
A solution was desperately needed, to make mathematical problems easier to write.
Creating Symbols
The new breed of European mathematicians began introducing symbols to their written work.
17th century British mathematician James Hodder, wrote:
Note that a + sign doth signify addition, and two lines thus =, equality or equation.
But an X thus, multiplication.
It was during this period that these and many other symbols – such as those for division (÷), ratio (:) and angle (∠) – were also standardised.
François Viète
But it was Frenchman, François Viète, who played a key role in the creation of a new style of mathematics.
He was first to conceive of letters standing in the place of numbers.
Consonants would denote known quantities.
Vowels denoted unknown quantities.
Q stood for squared.
This groundbreaking solution, allowed equations to be solved using general rules.
Viète and his successors built on the rules for balancing equations, introduced by the Arab world – algebra.
Symbols became the basis of algebra – a coherent shorthand for complex mathematical equations.
Which paved the way for Europe to become a mathematical powerhouse, in the coming centuries. | {"url":"https://twig-aksorn.com/film/european-mathematical-symbols-1744/","timestamp":"2024-11-07T12:03:01Z","content_type":"text/html","content_length":"47425","record_id":"<urn:uuid:7fa87536-518e-487c-a90b-d3ea5d83c7cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00743.warc.gz"} |
The Length 2 DFT
The length 2 DFT is particularly simple, since the basis sinusoids are real:

\[ s_0 = (1, 1), \qquad s_1 = (1, -1) \]

The DFT sinusoid \(s_0\) is a sampled constant signal (frequency 0), while \(s_1\) is a sampled sinusoid at half the sampling rate.

Figure 6.4 illustrates the graphical relationships for the length 2 DFT.

Analytically, we compute the DFT to be

\[ X(\omega_0) = x(0) + x(1), \qquad X(\omega_1) = x(0) - x(1), \]

and the corresponding projections onto the DFT sinusoids are

\[ \mathbf{P}_{s_0}(x) = \frac{\left<x, s_0\right>}{\left\|s_0\right\|^2}\, s_0 = \frac{x(0) + x(1)}{2}\,(1, 1), \qquad \mathbf{P}_{s_1}(x) = \frac{x(0) - x(1)}{2}\,(1, -1). \]

Note the lines of orthogonal projection illustrated in the figure. The ``time domain'' basis consists of the vectors \(e_0 = (1, 0)\) and \(e_1 = (0, 1)\), and the orthogonal projections onto them are simply the coordinate-axis projections of \(x\). The ``frequency domain'' basis vectors are \(s_0\) and \(s_1\); computing the coefficients of projection onto them is essentially ``taking the DFT,'' and constructing \(x\) as the sum of its projections is ``taking the inverse DFT.''

In summary, the oblique coordinates in Fig. 6.4 are the coordinates of \(x\) with respect to the DFT basis \(\{s_0, s_1\}\).
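As a quick numerical check of the length 2 DFT (an added illustration in plain NumPy, not part of the original text):

import numpy as np

x = np.array([6.0, 2.0])            # an arbitrary length-2 signal
X = np.fft.fft(x)                   # DFT: [x(0) + x(1), x(0) - x(1)]
print(X)                            # [8.+0.j 4.+0.j]

s0 = np.array([1.0, 1.0])
s1 = np.array([1.0, -1.0])
proj0 = (x @ s0) / (s0 @ s0) * s0   # projection onto s0: [4., 4.]
proj1 = (x @ s1) / (s1 @ s1) * s1   # projection onto s1: [2., -2.]
print(proj0 + proj1)                # sum of projections reconstructs x: [6., 2.]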
Distributive Property: Definition, Formula, Examples – The distributive property is a well-known property related to numbers and algebra in mathematics. As the name suggests, this property focuses on
distributing or dividing a quantity under the right conditions. The distributive property, or distributive law, operates only on the multiplication of numbers and algebraic terms. This is why it is also
called the distributive law of multiplication.
Note: The distributive property never applies to addition or subtraction on their own. Applying it there produces incorrect results.
Before diving deep into multiplication’s distributive property, let us have a quick look at other important properties in mathematics. They are listed below:
• Commutative Property: This property states that the numbers or terms can commute or move their places in the expression without altering the result. This is true for addition and multiplication.
For instance, (1 + 4) = (4 + 1) and (2 * 4) = (4 * 2). Subtraction doesn’t follow this property, for example, (1 – 4) = -3 is not equal to (4 – 1) = 3.
• Associative Property: This property states that the number of terms in an expression can associate themselves or groups with each other without altering the result. This is true for addition and
multiplication. For instance (1 + 4) + 3 = 1 + (4 + 3).
Let us now discuss what the distributive property means, with examples.
Distributive Property Definition
Let us first understand a simple concept. If you have to distribute something, let's say chocolate, among your friends, you divide the chocolate bar into pieces to ease the distribution, right!
Mathematics follows the same concepts. When we have to simplify a hard problem, the distributive property helps to break down the expression into a sum or a difference of 2 numbers.
Mathematically the distributive property states that any expression provided in the form K × (L + M) can be easily resolved as K × (L + M) = KL + KM. This is known as the distributive law of
multiplication’s application in addition. Likewise, the distributive law also stands true for expression containing subtraction. This is expressed as K × (L – M) = KL – KM.
As you all can witness, K is being distributed to both the terms in addition or subtraction. Here K is known as an operand, and the terms inside the expression are known as addends.
Let us learn some important terms we have learned so far:
• Operand: The term being distributed is known as the operand.
• Addends: The terms inside the bracket which are either added or subtracted are known as addends.
• Distributive property of addition: K × (L + M) = KL + KM
• Distributive property of subtraction: K × (L – M) = KL – KM
We can now see what it states: when the operand is multiplied by the sum or difference of the addends, the result is equal to the sum or difference of the individual products of the operand with each addend.
Distributive Property Formula
The formula for a given value’s distributive property can be stated as
c * ( a + b ) = ca + cb
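A quick symbolic check of this formula (added for illustration, using the SymPy library):

from sympy import symbols, expand, factor

a, b, c = symbols("a b c")

print(expand(c * (a + b)))   # a*c + b*c: the distributive property over addition
print(expand(c * (a - b)))   # a*c - b*c: the distributive property over subtraction
print(factor(a*c + b*c))     # c*(a + b): factoring reverses the distribution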
This concludes all the theoretical aspects of the distributive property of multiplication. Next, let us look at the distributive law of multiplication over addition and subtraction in-depth with
proper instances.
Distributive Property of Addition
When multiplying a number (operand) by the summation of two integers (addend), we use the distributive property of addition. Multiplying three by the sum of 10 + 8 is a good example. 3 x (10 + 8) is
the mathematical expression for this.
Example: The distributive principle of addition may solve the formula 3 x (10 + 8).
Solution: Using the distributive property, we distribute the number 3 between the two addends and multiply each addend by 3. After that, we add the products. This
signifies that the multiplications 3 x (10) and 3 x (8) take place before the addition, and 3 x (10) + 3 x (8) = 30 + 24 = 54 is the result, matching 3 x (18) = 54.
Distributive Property of Subtraction
Similarly, when multiplying a number (operand) by the difference between two integers (addend), we use the distributive property of subtraction. Multiplying three by the difference of 10 – 8 is a
good example of subtraction’s distributive property. The mathematical expression for this equation is 3 x (10 – 8).
Example: The distributive principle of subtraction may be used to solve the formula 3 x (10 – 8).
Solution: Using the distributive property, we distribute the number 3 between the two addends and multiply each addend by 3. After that, we subtract the products. This
signifies that the multiplications 3 x (10) and 3 x (8) take place before the subtraction, and 3 x (10) – 3 x (8) = 30 – 24 = 6 is the result, matching 3 x (2) = 6.
We have talked so much about the distributive property, but how does it stand true in mathematics? Is there a way to verify this property? Indeed there is verification. Continue reading the article
to know why.
Verification of Distributive Property
Let’s look at how it works for various operations. We’ll use the distributive law to apply the two basic operations of addition and subtraction separately.
1. Distributive Property of Addition: We already know that the addition's distributive property is given as k × (l + m) = kl + km. Now it is time to verify this property by taking an example.
Example: Let us take an Expression, say, 10 x ( 3 + 6).
Solution: We will normally solve this expression by using the rules of BODMAS as standard.
In the first step, we will always solve the expressions inside the bracket. In this case (3 + 6 ) = 9. In the second step, we will multiply 10 by the number obtained, i.e. 9. This will give us the
result as 10 x 9 = 90.
Now solve this using the distributive property of addition:
10 x ( 3 + 6 ) = (10 x 3) + (10 x 6)
= 30 + 60
= 90
As we can see both the methods yield the same result.
2. Distributive Property of Subtraction: Now, let us verify the same for the distributive property of subtraction. We already know that the distributive property of subtraction is given as k × (l –
m) = kl – km. Now it is time to verify this property by taking an example.
Example: Let us take an expression, say, 10 x (6 – 3).
Solution: We will normally solve this expression by using the rules of BODMAS as standard.
In the first step, we will always solve the expressions inside the bracket. In this case ( 6 – 3 ) = 3. In the second step, we will multiply 10 by the number obtained, i.e. 3. This will give us the
result as 10 x 3 = 30.
Now solve this using the distributive property of subtraction:
10 x ( 6 – 3 ) = (10 x 6) – (10 x 3)
= 60 – 30
= 30
As we can see both the methods yield the same result again.
Hence, we have verified that the property of both addition and subtraction distribution is true.
Distributive Property of Division
The distributive property of division is the same as the distributive law of multiplication, with only the multiplication sign changing to division along with the operation. The larger term is
divided into smaller factors (addend), and the divisor acts as the operand. You will understand this better with the example given below.
Example: Using the Distributive Property of Division, solve 36 ÷ 12.
Solution: 36 can be written as 24 + 12
Therefore we can write 36 ÷ 12 = (24 + 12) ÷ 12
Now, let us distribute 12 inside the bracket
⇒ (24 ÷ 12) + (12 ÷ 12)
⇒ 2 + 1
This gives us the answer as 3.
Distributive Property Examples
Example 1: Solve the Expression 2 (11 + 7) using the Distributive Property.
Using the distributive property formula,
k × (l + m) = (k × l) + (k × m)
= (2 × 11) + (2 × 7)
= 22 + 14
= 36
Therefore, the value of 2 (11 + 7) = 36
Example 2: Prove that 5 x (3 – 12) has a Negative Result using the Distributive Property of Multiplication.
Using the distributive property formula,
k × (l – m) = (k × l) – (k × m)
= (5 × 3) – (5 × 12)
= 15 – 60
= -45
Therefore, the value of 5 x (3 – 12) = – 45, which is a negative integer.
Now you must be 100 percent confident in what distributive property means and how to solve problems concerning this property. If you are not completely sure and have missed any of the concepts in the
article, you can revisit this page again for theory and solutions. Moreover, start preparing for your upcoming exam now and outshine others by learning and practicing.
Frequently Asked Question
1. What is distributive property examples?
The distributive property is a rule that lets you distribute a factor over the terms of an expression. It is used when one term is multiplied by a sum or difference and you want to multiply that
term by each part of the sum or difference separately.
For example:
5(x+y) = 5x + 5y
In this case, x and y are each multiplied by 5, which means we can distribute the 5 over them. So we would rewrite this as 5x + 5y.
Let’s look at another example:
(6x + 2)(x – 1) = 6x² – 6x + 2x – 2 = 6x² – 4x – 2
2. What property is distributive property?
It is a property that allows you to multiply a factor over each part of a sum or difference. It is usually used in mathematics and algebra. For example, if you have a factor multiplied by the sum of
two numbers, you would use the distributive property to multiply the factor by each addend and then add the products.
3. What is the distributive property of 3?
The distributive property of 3 is a mathematical rule that allows you to distribute one number to each term in a sum.
For example, to compute 3 × (2 + 4), you do not have to add first and then multiply.
The distributive property of 3 tells us another way to do this: we can multiply each term by 3 before adding them together. So our answer is 3 × 2 + 3 × 4 = 6 + 12 = 18, the same as 3 × 6 = 18.
4. How do you do the distributive method?
The distributive method is a way to simplify an expression by multiplying out the parentheses. The distributive law states that when multiplying two bracketed sums or
differences of terms, one must multiply each term in the first expression by each term in the second.
For example, if you have:
(a + 4)(a – 2)
You would distribute each term of the first factor over each term of the second factor:
(a + 4)(a – 2) = a·a + a·(–2) + 4·a + 4·(–2) = a² – 2a + 4a – 8 = a² + 2a – 8
5. Why do we use the distributive property?
The distributive property turns a multiplication problem into an addition problem. For example, if you have x * y, where x and y are positive numbers and x is greater than 1, then you can rewrite
this as (x – 1) * y + y.
It is also useful when solving equations with exponents. For example, if you have 5(x+1) = 10(x), then you can rewrite this as 5x + 5 = 10x.
Distributive Property
Distributive Property – The distributive property is also known as the distributive law of multiplication over addition and subtraction. The name itself signifies that the operation includes dividing
or distributing something. The distributive law is applicable to addition and subtraction. Let us learn more about the distributive property of multiplication along with some distributive property
examples, and how to use the distributive property on this page.
What is the Distributive Property?
The distributive property states that an expression which is given in form of A (B + C) can be solved as A × (B + C) = AB + AC. This distributive law is also applicable to subtraction and is
expressed as, A (B – C) = AB – AC. This means operand A is distributed between the other two operands.
Distributive Property Definition
According to the distributive property definition, the distributive property allows us to take a factor and distribute it to each member (term) of the group of things that have been added or
subtracted. Instead of multiplying the factor by the group as a whole, we can distribute it to be multiplied by each member (term) of the group individually.
Distributive Property Formula
The distributive property formula of a given value is expressed as A × (B + C) = AB + AC.
Let us discuss the distributive property of multiplication over addition and subtraction in detail with examples.
Distributive Property of Multiplication Over Addition
The distributive property of multiplication over addition is applied when we need to multiply a number by the sum of two numbers. For example, let us multiply 7 by the sum of 20 + 3. Mathematically
we can represent this as 7(20 + 3).
Example: Solve the expression 7(20 + 3) using the distributive property of multiplication over addition.
Solution: When we solve the expression 7(20 + 3) using the distributive property, we first multiply every addend by 7. This is known as distributing the number 7 amongst the two addends and then we
can add the products. This means that the multiplication of 7(20) and 7(3) will be performed before the addition. This leads to 7(20) + 7(3) = 140 + 21 = 161.
Distributive Property of Multiplication Over Subtraction
The distributive property of multiplication over subtraction is similar to the distributive property of multiplication over addition except for the operation of addition and subtraction. Let us
consider an example of the distributive property of multiplication over subtraction.
Example: Solve the expression 7(20 – 3) using the distributive property of multiplication over subtraction.
Solution: Using the distributive property of multiplication, we can solve the expression as follows: 7 × (20 – 3) = (7 × 20) – (7 × 3) = 140 – 21 = 119
Verification of Distributive Property
Let us try to justify how distributive property works for different operations. We will apply the distributive law individually on the two basic operations, i.e., addition and subtraction.
Distributive Property of Addition: The distributive property of multiplication over addition is expressed as A × (B + C) = AB + AC. Let us verify this property with the help of an example.
Example: Solve the expression 2(1 + 4) using the distributive law of multiplication over addition.
Solution: 2(1 + 4) = (2 × 1) + (2 × 4)
⇒ 2 + 8 = 10
Now, if we try to solve the expression using the law of BODMAS, we will solve it as follows. First, we will add the numbers given in brackets, and then we will multiply this sum with the number given
outside the brackets. This means, 2(1 + 4) ⇒ 2 × 5 = 10. Therefore, both the methods result in the same answer.
Distributive Property of Subtraction: The distributive law of multiplication over subtraction is expressed as A × (B – C) = AB – AC. Let us verify this with the help of an example.
Example: Solve the expression 2(4 – 1) using the distributive law of multiplication over subtraction.
Solution: 2(4 – 1) = (2 × 4) – (2 × 1)
⇒ 8 – 2 = 6
Now, if we try to solve the expression using the order of operations, we will solve it as follows. First, we will subtract the numbers given in brackets, and then we will multiply this difference
with the number given outside the brackets. This means 2(4 – 1) ⇒ 2 × 3 = 6. Since both the methods result in the same answer, this distributive law of subtraction is verified.
Distributive Property of Division
We can show the division of larger numbers using the distributive property by breaking the larger number into two or more smaller factors. Let us understand this with an example.
Example: Divide 24 ÷ 6 using the distributive property of division.
Solution: We can write 24 as 18 + 6
24 ÷ 6 = (18 + 6) ÷ 6
Now, let us distribute the division operation for each factor (18 and 6) in the bracket.
⇒ (18 ÷ 6) + (6 ÷ 6)
⇒ 3 + 1
Therefore, the answer is 4.
| {"url":"https://www.mypass-a-grille.com/tag/what-is-distributive-property/","timestamp":"2024-11-03T06:37:03Z","content_type":"text/html","content_length":"62093","record_id":"<urn:uuid:5217164f-9c4d-46a7-b774-3ab9af3f655f>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00640.warc.gz"}
Regular 2D Shape Area Formulas | FineduCalcs
Regular 2D Shape Area Formulas
2D shapes are shapes drawn on a plane; they have the property of area. Basic 2D objects include the triangle, circle, rectangle, square, parallelogram, rhombus, irregular quadrilateral, n-polygon, etc.
2d shape Area formulas:
Triangle: It is a 2D shape that has 3 sides.
A = 1/2 x base * height
A = ( s x ( s – a ) x ( s – b ) x ( s – c ) )^1/2
where a, b, c are sides of triangle, s = ( a + b + c ) / 2
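For reference, Heron's formula translates directly into code. A minimal Python sketch (the function name is my own choice):

```python
import math

def heron_area(a: float, b: float, c: float) -> float:
    """Area of a triangle with side lengths a, b, c via Heron's formula."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# Example: the 3-4-5 right triangle has area (1/2) * 3 * 4 = 6
print(heron_area(3, 4, 5))  # 6.0
```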
Circle: It has only one curved edge, and every point on that edge is equidistant from the central point.
A = π x radius²
Circle Calculator – Calculate circle area and perimeter, sector area and arc length.
Square: This 2D shape has four equal-length straight edges, and all four corners are right angles.
A = side²
Rectangle: Opposite edges are parallel and equal in length. All four corners are right angles.
A = length x width
Parallelogram: A four-sided 2D shape whose opposite sides are equal in length and parallel.
A = height x base
Rhombus: A four-sided 2D shape having four equal-length edges.
A = 1/2 x diagonal1 x diagonal2
Trapezoid: It is a 2D shape having four straight edges. Only one pair of opposite edges is parallel.
A = 1/2 x height x (base1 + base2) | {"url":"https://fineducalcs.in/geometry/regular-2d-shape-area-formulas","timestamp":"2024-11-10T00:15:42Z","content_type":"text/html","content_length":"92732","record_id":"<urn:uuid:7ad716c9-6471-4f2d-9db2-c804c0d5814e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00785.warc.gz"} |
Introduction to cryptology
2WF80 -- Introduction to cryptology - Winter 2017
Tanja Lange
Coding Theory and Cryptology
Eindhoven Institute for the Protection of Information
Department of Mathematics and Computer Science
Room MF 6.104B
Technische Universiteit Eindhoven
P.O. Box 513
5600 MB Eindhoven
Phone: +31 (0) 40 247 4764
The easiest ways to reach me wherever I am:
This page belongs to course 2WF80 - Introduction to cryptology. This course is offered at TU/e as part of the bachelor's elective package 'Security'. The official page is here.
Classical systems (Caesar cipher, Vigenère, Playfair, rotor machines), shift register sequences, DES, RC4, RSA, Diffie-Hellman key exchange, cryptanalysis by using statistics, factorization, attacks
on WEP (aircrack).
Some words up front: Crypto is an exciting area of research. Learning crypto makes you more aware of the limitations of security and privacy which might make you feel less secure but that's just a
more accurate impression of reality and it a good step to improve your security.
Here is a nice link collection of software to help you stay secure https://prism-break.org/en/ and private https://www.privacytools.io/.
You should have participated in "2WF50 - Algebra" or "2WF90 - Algebra for security" before taking this course. If not you can find some material in the Literature section.
All lectures take place Mondays 10:45 - 12:30 in AUD 10 and Thursdays 13:45 - 17:30 in Flux 1.03. There is a holiday break between Christmas and New year so that there are no lectures between 22 Dec
and 07 Jan. There will also be no lectures on 08 Jan and 11 Jan but on 15 Jan and 18 Jan.
Gustavo Banegas is the teaching assistant for this course.
Literature and software
It is not necessary to purchase a book to follow the course.
For some background on algebra see
Some nice books on crypto (but going beyond what we need for this course) are
• Johannes Buchmann "Introduction to Cryptography", Springer, 2004.
• Neal Koblitz "A course in Number Theory and Cryptography", Springer, 1994.
• Rudolf Lidl and Harald Niederreiter "Introduction to Finite Fields and their Applications", Cambridge University Press, 1994.
• Christof Paar and Jan Pelzl "Understanding Cryptography", Springer, 2010
• Doug Stinson "Cryptography: Theory and Practice", CRC Press, 1995
• Henk van Tilburg "Fundamentals of Cryptology" Kluwer academic Publishers, Boston, 2000.
For easy prototyping of cryptoimplementations I like the computer algebra system Sage. It is based on python and you can use it online or install it on your computer (in a virtual box in case you're
running windows).
For encrypting your homeworks you should use GPG/PGP. If you're running linux then GnuPG is easy to install. If you're using windows I recommend using GPG4win; if you're using MAC-OS you can use GPG
Suite. We are OK with having only the attachment encrypted, but for proper encryption of your email you might want to look into Enigmail which works well with Thunderbird.
30% of the grade is determined by homeworks. There will be two sets of homework during the quarter. You may hand in your homework in groups of 2 or 3. To make sure that you get used to crypto we
require solutions to be sent encrypted with GPG/PGP. Each participant must have communicated with me at least once using GPG/PGP. You can find my public key for tanja@hyperelliptic.org on the key
servers and on my homepage here. For Gustavo use gustavo@cryptme.in as email address and check for fingerprint 83458248E2E3D43F.
There will be an exam on 22 January 2018, 13:30 - 16:30 (with a retake on April 16, 18:00 - 21:00) which accounts for the remaining 70% of the grade. You may use a simple (non-programmable)
calculator but no cell-phones or other devices containing calculator applications. You may not use books or your class notes.
Here is a test exam. Note that the CRT exercise would have somewhat smaller keys for the exam.
Class notes
This section will fill in gradually after the lectures. I'll provide short summaries and links to pictures of the blackboards. The homeworks will be posted here as well.
13 Nov 2017
Substitution cipher, Caesar cipher, Vigenère, Playfair system. Some statements about the number of possible keys for these schemes.
Pictures of black boards and videos are here.
16 Nov 2017
Here is the exercise sheet for block 5 and 6: exercise-1.pdf. See also the raw data if paste fails.
For most of the exercises the solution is obvious when you have it. There are many more pages on the web with tools for cryptanalysis of classical ciphers, e.g. https://www.guballa.de/vigenere-solver
, http://www.braingle.com/brainteasers/codes/index.php, http://www.cryptool-online.org, http://axion.physics.ubc.ca/cbw.html.
In the lecture we discussed the column transposition cipher (see pictures). You can play with it in the C1.3 exercise of the old Mystery Twister if you have Flash Player installed.
We discussed a bit about rotor machines and cryptanalysis. Visit the exhibition of rotor machines from the Cryptomuseum that is currently on show in the MetaForum and take a look at their website
(and museum if you get a chance).
We discussed how to break the Hill cipher given some plaintext-ciphertext pairs and in particular repeated the extended Euclidean algorithm.
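Since the extended Euclidean algorithm keeps coming up (e.g. for inverting Hill-cipher key entries modulo 26), here is a compact Python version for reference; the function name is my own:

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Modular inverse, e.g. of 7 modulo 26 (as needed for Hill-cipher key recovery):
g, x, _ = extended_gcd(7, 26)
print(g, x % 26)  # 1 15, since 7 * 15 = 105 = 4*26 + 1
```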
One-time pad with bits, and problems with reuse. Try the 4th challenge in the old Mystery Twister to see the reuse problems.
Finally, we discussed stream ciphers. These are much more practical than the OTP in that the key is much shorter. To encrypt a message, expand the key into a stream of pseudo-random bits and xor
those to the message, i.e., treat the stream-cipher output as the one-time pad. To encrypt multiple message it becomes necessary to remember how many bits have been used and either stay in that state
or forward by that many positions the next time one uses the cipher. This is impractical. Initialization Vectors (IVs) deal with that problem in that they move the beginning of the stream to a random
position. The IV is then sent in clear along with the ciphertext, so that the receiving end can compute the same starting position. More on this on Monday.
Pictures of black boards and videos are here.
21 Nov 2017
Feedback shift registers and how to use them for encryption; k-th order feedback sequence (=a sequence with k coefficients), period, pre-period(=tail), ultimately periodic sequences, Linear feedback
shift registers (LFSRs), want c0=1 (else there is just a delay in output); can run backwards, is periodic, max period is 2^n-1, relation to matrix multiplication. Characteristic polynomial of the matrix C.
Note that we did the last part only for characteristic 2, in general we need to pay attention to signs and P(x)=det(xI-C).
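A k-th order LFSR over GF(2) is only a few lines of code. The sketch below is my own illustration (with c_0 multiplying the oldest state bit, matching the convention above) and can be used to reproduce the periods from the exercises:

```python
def lfsr(state: list[int], taps: list[int], nbits: int) -> list[int]:
    """Generate nbits output bits from an LFSR over GF(2).

    state: initial register contents (s_0, ..., s_{k-1})
    taps:  coefficients (c_0, ..., c_{k-1}) of the feedback rule
           s_{i+k} = c_0*s_i + c_1*s_{i+1} + ... + c_{k-1}*s_{i+k-1} mod 2
    """
    state = list(state)
    out = []
    for _ in range(nbits):
        out.append(state[0])                          # output the oldest bit
        fb = sum(c & s for c, s in zip(taps, state)) & 1  # feedback bit mod 2
        state = state[1:] + [fb]                      # shift and append
    return out

# Example: k = 3 with s_{i+3} = s_i + s_{i+1} (primitive x^3 + x + 1), period 7
print(lfsr([1, 0, 0], [1, 1, 0], 10))  # [1, 0, 0, 1, 0, 1, 1, 1, 0, 0]
```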
Pictures of black boards and videos are here.
23 Nov 2017
Here is the exercise sheet for block 5 and 6: exercise-2.pdf.
We summarized the results of exercise 1 to derive some conjectures on the periods and factorization patterns (see blackboard pictures). For exercise 3 we clarified what α in the expression would mean
and did a quick round of finite fields.
Quick primer on matrices, covering eigenvalues, eigenvectors, the characteristic polynomial P(x) and that P(C)=O.
Definition of irreducible polynomials, order/period of a polynomial, some examples using the characteristic polynomials from the exercises; this order matches the order of the matrix C -- because P is
C's characteristic polynomial; Order of C matches order of its characteristic polynomial because P(C)=0 means C and x play the same role, i.e. we compute the powers of C modulo P. Rabin's
irreducibility test, order of an irreducible polynomial of degree n divides 2^n-1.
Pictures of black boards and videos are here.
The first homework is due on December 07, 2017 at 13:45. Here is the first homework sheet. At this point you can solve the first half of the exercises, you will have all material necessary by next week.
Please remember to submit your homework by encrypted, signed email. Don't forget to include your public key for me to reply.
The second homework sheet will be posted on Dec 14, again you will not have all background for the exercises, but can solve the first part of it. The homework will be due on Jan 8. I plan on skipping
lectures that week and instead doing lectures on Jan 15 and 18, so you can still ask questions about the corrections.
27 Nov 2017
An irreducible polynomial generates a sequence with period equal to its order, i.e. ord(C)=ord(P) is the period length of the longest period. An irreducible polynomial is called primitive if it
generates a sequence of period 2^k-1, i.e. the maximum possible.
Generating function S(x) satisfies S(x)=F(x)/P^*(x), where * gives the reciprocal, P is the characteristic polynomial and F(x) is a polynomial of degree less than deg(P). The characteristic
polynomial is the polynomial of smallest degree with this property. The characteristic polynomial of the sequence {si+ti} is the lcm of the characteristic polynomials of the two sequences. This
means, we can now analyze LFSRs by analyzing the irreducible factors of their characteristic polynomials. We did not study how to handle repeated factors but from the examples on Thursday this does
not look like a good idea.
One can recover the state of an LFSR from k output bits and then compute all future (and past) outputs. If the polynomial P is unknown we need to solve a linear system in the c_i, which needs 2k
consecutive outputs. If k is unknown, attack for k=1, k=2, ... For each of them compute the candidate P and check for consistency with the following outputs to get confirmation or contradictions.
LFSRs are used in practice because they are small and efficient, but they need a non-linear output filter. As an example I mentioned Grain which is one of the finalists of the eStream competition on
stream ciphers. Grain uses an LFSR together with a NFSR and an output function.
As a bad example I mentioned A5/2 which is a stream cipher used in GSM encryption. I mentioned a bit of the weird history of it but forgot to mention that the design was secret. One of the first
postings on it with some details on the history an attack idea for A5/1 by Ross Anderson is from 1994, but lots of details were missing. The full algorithm descriptions of A5/1 and its purposefully
weakened sibling A5/2 were reverse engineered and posted in 1999 by Marc Briceno, Ian Goldberg, and David Wagner. The same group also showed a devastating attack on A5/2, allowing for real-time
decryption. Sadly enough, the A5 algorithms allow downgrade attacks, so this is a problem for any phone which has code for it, which is most until recently. Also A5/1 does not offer 2^54 security
(54 bits is the effective key length) but only 2^24 (with some precomputation/space). However, A5/2 is broken even worse, in 2^16 computations, with efficient code online, e.g A5/2 Hack Tool.
A nice overview of lightweight ciphers, including more modern and less broken ones is given by Alex Biryukov and Leo Perrin.
Further reading on finite fields and LFSRs, is in Lidl/Niederreiter (see literature section), or David Kohel's lecture notes.
Pictures of black boards and videos are here.
30 Nov 2017
Here is the exercise sheet for block 5 and 6: exercise-3.pdf.
Results from the exercises:
RC4 has a strong bias towards 0 in the second byte. We saw this experimentally and also gave the explanation. The first byte has a strong bias towards the first key byte unless the first key byte is
zero, we showed this by tracing the state vector S.
RC4 does not provide for refreshing the key stream, so one needs to remember the last values for j and i. WEP needs a place to put some per connection data and uses key bits for that, so we get the
known biases plus known key bits plus known plaintext/ciphertext pairs. Aircrack uses these to break WEP encryption.
I mentioned slides on more biases of RC4 by Daniel J. Bernstein. They are available here.
Concept of public-key cryptography: each user has two keys, a private/secret key and a public key. Public-key crypto is also called asymmetric crypto.
PGP (which you need to use to submit your homework) is based on public-key crypto, you need to send your public key to Gustavo and Tanja along with your homework. Some explanations why it is OK to
publish the public key; some discussions on fingerprints.
Cryptographic hash functions need to provide preimage resistance, second preimage resistance, and collision resistance. If the output of the hash function has n bits then finding a collision takes on
average 2^(n/2) trials (use the birthday paradox to see this) and finding a preimage or second preimage takes on average 2^n trials.
Pictures of black boards and videos are here.
04 Dec 2017
Stream ciphers are susceptible to attacks flipping bits in the ciphertext, which cause the same bits to flip in the plaintext. A fingerprint protects against accidental bit flips, but a proper
Message Authentication Code (MAC) needs to resist adversarially chosen changes. Communicating parties A and B need an authentication key along with the encryption key. The easiest version of a MAC
is to use a hash function to compute cryptographic checksum over the authentication key and the ciphertext. Want to have checksum on the ciphertext for easy and quick rejection of forged packets =
Encrypt, then MAC.
We covered design of hash functions using the Merkle-Damgaard construction and how this enables length-extension attacks (and how putting a fixed padding plus length information in the final block
avoids these issues). Short summary of hash functions: MD4 is completely broken; for MD5 it's easy to find collisions, first SHA-1 collisions were computed this year (see https://shattered.io/).
SHA-256, SHA-512 and SHA-3 (and the other SHA-3 finalists) are likely to be OK.
Block ciphers. ECB (electronic code book) mode encrypts each block separately, this means that identical blocks encrypt the same way. A famous example of how weak this is is the ECB penguin. More
reasonable modes are CBC, OFB, and CTR. These modes ensure that identical plaintext blocks do not lead to identical ciphertext blocks. I commented on how easy it is to de/encrypt locally, by that I
mean how much effort is needed to de/encrypt block i. In OFB you not only need the data C_i but also i+1 iterated encryptions of the IV, which makes this not local.
Some details on DES, 56 bits for the key is not secure enough! First brute force attack was done with "DES Cracker" for 250k USD. In 2006 a team from Bochum and Kiel built COPACOBANA which can break
DES in a weak for 8980 EUR (plus some grad-student time). We'll look more into DES on Thursday.
When you submit your homework please use a subject starting with [2WF80] to make it easier for us to find your homework in our inboxes.
Pictures of black boards and videos are here.
07 Dec 2017
Here is the exercise sheet for block 5 and 6: exercise-4.pdf.
More details on DES. S-box is non-linear part and designed to avoid differential attacks. In the exercises we saw that small changes in the input lead to big changes in the output. Quick comment on
PRESENT and that it uses a single S-box and on the current standard AES.
Discussion of brute force attacks on DES. Only 2^56 trials for complete key search. 2-DES is only marginally harder to break than DES, taking 2^57 with a divide-and-conquer approach. Still common use
is 3-DES with k1=k3 and use k2 with decryption instead of encryption. This needs 2^112 steps to break.
When you use symmetric crypto, make sure to include a MAC!
Public-key signatures have public verification key and private signing key. Anybody can verify the signature (given the message and the public key) but only the owner of the private key can make
valid signatures.
RSA signatures: public key for RSA is (n,e), private key is (n,d), where n=pq for two different primes p and q, φ(n)=(p-1)(q-1) and d is the inverse of e modulo φ(n). We showed how to sign and verify
and why proper signatures pass verification. Attention, this is schoolbook RSA, do not use this in practice.
If you don't remember how to compute modulo an integer or what φ(n) is, now is a good moment to recover this.
Pictures of black boards and videos are here.
11 Dec 2017
Use cases for signatures, differences between MACs and signatures.
Public-key encryption requires 3 algorithms: Key generation, encryption, and decryption. RSA encryption. Exponentiation by square and multiply, this takes l squarings and as many multiplications as
e has bits set to 1; examples; can choose small e but d must be large/random.
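For reference, a left-to-right square-and-multiply sketch in Python (illustration only; Python's built-in pow(base, exp, mod) does the same job):

```python
def power_mod(base: int, exp: int, mod: int) -> int:
    """Left-to-right square-and-multiply: one squaring per exponent bit,
    plus one multiplication for each bit of the exponent that is set to 1."""
    result = 1
    for bit in bin(exp)[2:]:               # exponent bits, most significant first
        result = (result * result) % mod   # always square
        if bit == '1':
            result = (result * base) % mod # multiply only on 1-bits
    return result

print(power_mod(5, 117, 19), pow(5, 117, 19))  # both values agree
```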
Attack on RSA using gcd computation (problem for any way of using key, and this happened for real, see https://factorable.net/). First problem with schoolbook RSA: we can recover a message that is
sent to multiple people, if they all use the same small exponent. More issues with schoolbook RSA: can decrypt linearly related messages; RSA is homomorphic.
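A toy illustration of the gcd attack on moduli sharing a prime (the primes below are made up for illustration and nowhere near real key sizes):

```python
from math import gcd

# If two RSA moduli were generated with a shared prime (e.g. from bad
# randomness, as at factorable.net), a single gcd computation exposes it.
p, q1, q2 = 1009, 2003, 3001          # toy primes for illustration
n1, n2 = p * q1, p * q2               # two moduli sharing the prime p
g = gcd(n1, n2)
print(g, n1 // g, n2 // g)            # recovers p, q1, q2: both keys broken
```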
Pictures of black boards and videos are here.
14 Dec 2017
Here is the exercise sheet for block 5 and 6: exercise-5.pdf.
Security notions and attack definitions (CPA and CCA), semantic security, ciphertext indistinguishability. schoolbook RSA is not CCA-II secure; IND-CCA security as game between attacker and
challenger; the attacker should not have higher probability than guessing in deciding which of two messages m0 and m1 was encrypted, given the messages and one ciphertext.
To make RSA a randomized encryption one uses some padding. We discussed PKCS v1.5 as a negative example and looked at Bleichenbacher's attack. Take a look at https://robotattack.org/ for a very
recent use of Bleichenbacher's attack in practice. You should be able to understand details of the full paper Return Of Bleichenbacher's Oracle Threat. RSA-OAEP is a better padding scheme.
Factorization methods: trial division, factoring numbers of the form p*nextprime(p+1), Fermat factorization, p-1 method.
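Fermat factorization as code, for reference (a direct sketch of the algorithm; the test number uses two close primes, where the method terminates almost immediately):

```python
from math import isqrt

def fermat_factor(n: int) -> tuple[int, int]:
    """Fermat factorization: find a, b with n = a^2 - b^2 = (a-b)(a+b).
    Fast when the two factors of n are close together."""
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:               # a^2 - n is a perfect square
            return a - b, a + b
        a += 1

print(fermat_factor(10007 * 10009))   # (10007, 10009) after one step
```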
Pictures of black boards and videos are here.
The second homework is due on 11 January 2018 at 13:45. Here is the second homework sheet. At this point you can solve the first half of the exercises, you will have all material necessary by next week.
Please remember to submit your homework by encrypted, signed email. Don't forget to include your public key for me to reply.
18 Dec 2017
Fermat Factorization as algorithm, more details and success chances for Pollard's p-1 method. Namedropping of other factorization methods, see also http://facthacks.cr.yp.to/ for descriptions and
code snippets. I mentioned that Nadia Heninger put up some nice code for easy factorizations.
Diffie-Hellman key exchange in different groups, including some insecure ones. A good choice is to use the integers modulo a large prime or elliptic curve crypto (not covered in this class).
CDHP, DDHP, DLP, relations between these problems. Baby-Step Giant-Step algorithm: any system based on the DLP offers at most square-root-of-the-group-order hardness for the DLP.
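A minimal baby-step giant-step sketch in Python, assuming the group is the multiplicative group modulo a prime p (so the group order is p - 1):

```python
from math import isqrt

def bsgs(g: int, h: int, p: int):
    """Baby-step giant-step: solve g^x = h (mod p) in about sqrt(p) time and memory."""
    m = isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps: g^j for j < m
    giant = pow(g, p - 1 - m, p)                 # g^(-m), using g^(p-1) = 1
    y = h % p
    for i in range(m):                           # giant steps: h * g^(-i*m)
        if y in baby:
            return i * m + baby[y]
        y = (y * giant) % p
    return None                                  # no solution found

print(bsgs(2, 9, 11))  # 6, since 2^6 = 64 = 9 (mod 11)
```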
Pictures of black boards and videos are here.
21 Dec 2017
Here is the exercise sheet for block 5 and 6: exercise-6.pdf.
Thanks to Gustavo for covering the first part today while I was at the diploma ceremony. Gustavo repeated BSGS and did a detailed example. DH has a problem with active man-in-the-middle attacks, he
used P for the man-in-the-middle (hmm .... evil prof?).
Details of semi-static DH and that this matters for quick session resumption. ElGamal encryption (for historical purposes), this is not CCA secure: asking for (r,2c) can be used to decrypt (r,c). The
additive homomorphism might be what you want, but otherwise you should really not use this but use static-DH instead.
Key-Encapsulation Mechanisms (KEMs) and how this fits with modern use of RSA and DH.
ElGamal signatures. For functionality (and security) we want to sign h(m) and not m; this is also important for RSA signatures.
We showed how ElGamal signatures are computed and verified, and why these systems work.
Last year I wrote some slides for this lecture. You might find them interesting as a different way to explain BSGS.
Pictures of black boards and videos are here.
Enjoy your holidays. If you want to do some crypto take a look at the old exams (below). Email me if you have questions or think you have solutions to old exams (= send me scans of your solutions and
I'll send comments back).
Remember, there will be no lectures on 08 and 11 Jan.
15 Jan 2018
Needham-Schroeder authentication protocol and why it doesn't actually prove to B that he is talking to A. Triple DH or DH + signatures achieve authentication and key freshness.
Some summary of what I expect you to know about polynomial factorization and orders of polynomials.
Shamir secret sharing: allows to share a secret in a t-out-of-n fashion so that any set of t people can recover it; works by simple Lagrange interpolation.
Note that the secret never needs to be re-computed -- for applications in RSA or DH the shares can be applied individually and then only the per-message secrets be combined. Also note that there is
no need to ever have the secret -- it can be generated from t shares; these shares are then re-shared in a t-out-of-n fashion.
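A compact sketch of t-out-of-n Shamir sharing with recovery by Lagrange interpolation at 0; the field prime and function names are my own toy choices:

```python
import random

P = 2**127 - 1  # a Mersenne prime, used here as a toy field size

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Split secret into n shares; any t of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the field GF(P)."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, -1, P)) % P
    return s

shares = share(secret=1234567, t=3, n=5)
print(recover(shares[:3]), recover(shares[1:4]))  # both print 1234567
```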
Pictures of black boards and videos are here.
18 Jan 2018
Here is the exercise sheet for block 5 and 6: exercise-7.pdf.
Old Exams
This course was given for the first time in Q2 of 2014. Here are the exams so far | {"url":"http://www.hyperelliptic.org/tanja/teaching/CS17/","timestamp":"2024-11-02T18:43:00Z","content_type":"text/html","content_length":"27792","record_id":"<urn:uuid:c8e0da6c-f816-4392-a5a3-d1be8e6175d8>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00169.warc.gz"} |
How to calculate variance in Excel | Basic Excel Tutorial
How to calculate variance in Excel
Variance is a measure of the spread between numbers in a data set. It measures how far each number is from the mean; this distance is also known as the error term. It is very easy to calculate
variance in Excel if you have all the data already entered into the software. Variance is often calculated from a sample of data taken from a larger population. There are two types of formulas used,
and they correspond to different functions in Excel.
Functions used to calculate Variance
It gives the variance for the actual values you have entered.
It assumes the data you have is the whole population.
=VAR.P (A1:A7)
It gives an estimate of the variance of the whole population from which the sample data is taken.
This formula provides the estimated variance for the population.
The S indicates the dataset is a sample, but the result is an estimate for the population.
=VAR.S (A1:A7)
Steps on how to calculate variance in Excel;
1. Open the Excel worksheet on your PC that contains your data.
2. You must ensure the data you have entered is in a single range of cells in Excel.
3. If the data you have represents the entire population, you will use this formula VAR.P (A1:A20). On the other hand, if the data you have is a sample for some larger population, you will this
formula VAR.S (A1:A20).
4. Here your variance for the data will be displayed in the cell.
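For comparison, the same population-versus-sample distinction outside Excel: a short Python illustration of what VAR.P and VAR.S compute (the sample data is arbitrary):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # arbitrary sample values; mean is 5

# Population variance (what Excel's VAR.P computes): divide by n
print(statistics.pvariance(data))  # 4.0

# Sample variance (what Excel's VAR.S computes): divide by n - 1
print(statistics.variance(data))   # 4.571...
```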
Standard Deviation
Calculating a standard deviation is similar to calculating variance in Excel. They both have similar properties and characteristics.
STDEV and STDEV.S Functions in Excel
The STDEV and STDEV.S functions are tools that help one to estimate the standard deviation based on a set of data. The STDEV.S function is mostly used nowadays. STDEV is kept as a compatibility
function, thus it can be used to ensure backward compatibility. The STDEV and STDEV.S functions provide an estimate of the standard deviation of a dataset. The dataset entered represents only a small
sample of the total population and, as a result, they do not return the exact standard deviation. Both of the functions have identical behavior.
STDEV and STDEV.S Syntax and Arguments
STDEV (number1, [number2], …)
Number1 is required. It can be a named range, actual numbers, or cell references to the data in an Excel worksheet. If cell references are used, non-numeric values in the range are ignored. Number2
is optional.
STDEV.S (number1, [number2], …)
Number1 is required. Single arrays can also be used instead of individual arguments. Number2 is optional.
Steps on how to calculate standard deviation in Excel;
1. You must enter all the data you require in MS Excel.
The data you entered must be in an Excel range (a column, a row, or a matrix of columns and rows) before using the statistics functions in Excel.
2. Select all the data without selecting any other values.
This is because we need only that data, not any other values.
3. If the data represents the entire population use the following formula STDEV.P (A1:A20).
On the other hand, if the data represents a sample from some larger population, use the following formula: STDEV (A1:A20).
4. The standard deviation for the data will be displayed in the cell. | {"url":"https://basicexceltutorial.com/how-to-calculate-variance-in-excel/","timestamp":"2024-11-11T00:33:19Z","content_type":"text/html","content_length":"70645","record_id":"<urn:uuid:b1fbae51-00b9-4775-9969-134be09e7b80>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00119.warc.gz"} |
TS Inter 1st Year Maths 1A Addition of Vectors Formulas - TS Board Solutions
TS Inter 1st Year Maths 1A Addition of Vectors Formulas
Learning these TS Inter 1st Year Maths 1A Formulas Chapter 4 Addition of Vectors will help students to solve mathematical problems quickly.
→ Scalar : A physical quantity having magnitude is called a scalar.
E.g. : Length, mass, area, volume, temperature, speed etc.
→ Vector : A physical quantity having both magnitude and direction is called a vector.
Ex : Displacement, velocity, acceleration, force, angular momentum.
→ Modulus of a vector : If a vector \(\overline{\mathrm{AB}}\) is denoted by a̅, then |a̅| denotes the length of the vector a̅; |a̅| is also called the magnitude or modulus of the vector a̅.
→ Collinear or parallel vectors : Vectors along the same line or along the parallel line are called collinear vectors. In figure \(\overline{\mathrm{AB}}, \overline{\mathrm{BC}}, \overline{\mathrm
{CA}}\) are collinear vectors. Two vectors a̅ b̅ are parallel or collinear iff a̅ = tb̅ . t ∈ R.
→ Like vectors: Collinear or parallel vectors having the same direction are called like vectors.
→ Unlike vectors:
Collinear or parallel vectors having opposite direction are called unlike vectors.
→ Unit vector:
A vector whose modulus is unity is called a unit vector. The unit vector in the direction of vector a̅ is denoted by a̅̂. Thus the modulus |a̅̂| = 1.
• Unit vector in the direction of a̅ is \(\frac{\overline{\mathrm{a}}}{|\overline{\mathrm{a}}|}\).
• Unit vector in the opposite direction of a̅ is \(\frac{-a}{|\bar{a}|}\).
→ Position vector : If a point 'O' is fixed as origin in the plane and 'A' is any point, then \(\overline{\mathrm{OA}}\) is called the position vector of 'A' with respect to 'O'.
→ Triangle law of addition of vectors: In a triangle OAB, let \(\overline{\mathrm{OA}}\) = a̅, \(\overline{\mathrm{AB}}\) = b̅ then the resultant vector \(\overline{\mathrm{OB}}\) is defined as \(\
overline{\mathrm{OB}}=\overline{\mathrm{OA}}+\overline{\mathrm{AB}}\) = a̅ + b̅.
This is known as triangle law of addition of vectors.
→ Section formula:
• Let A and B be two points with position vectors a̅ and b̅ respectively. Let ‘C’ be a point dividing AB internally in the ratio m : n. The position of ‘C’ is \(\overline{O C}=\frac{m b+n a}{m+n}\)
• Let A and B be two points with position vectors a̅ and b̅. Let be a point dividing the line segment AB externally in the ratio in : n then the position vector of C is given by \(\overline{\mathrm
{OC}}=\frac{\mathrm{mb}-n \bar{a}}{m-n}\)
→ The position vector of the midpoint of the line segment joining two vectors with position vector is \(\frac{\bar{a}+\bar{b}}{2}\).
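A quick worked instance of the internal division formula (the numbers here are illustrative): if \(\bar{a} = \bar{i} + 2\bar{j}\) and \(\bar{b} = 4\bar{i} + 5\bar{j}\), then the point dividing AB internally in the ratio 1 : 2 has position vector \(\frac{1 \cdot \bar{b} + 2 \cdot \bar{a}}{1 + 2} = \frac{6\bar{i} + 9\bar{j}}{3} = 2\bar{i} + 3\bar{j}\).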
→ Coplanar vectors: Two or more vectors are said to be coplanar if they lie on the same plane.
• The vectors a̅, b̅, c̅ are said to be coplanar iff [a̅ b̅ c̅] = 0.
• Four points A, B. C. D are said to be coplanar iff \(\left[\begin{array}{lll}
\overline{\mathrm{AB}} & \overline{\mathrm{AC}} & \overline{\mathrm{AD}}
\end{array}\right]\) = 0.
• Three vectors a̅, b̅, c̅ are said to be linearly dependent iff [a̅ b̅ c̅] = 0.
• Three vectors a̅, b̅, c̅ are said to be linearly independent iff [a̅ b̅ c̅] ≠ 0.
→ Vector equations of a straight line :
• The vector equation of the straight line passing through the point A (a̅) and parallel to the vector b̅ is r̅ = a̅ + tb̅ , t ∈ R.
• The vector of the line passing through origin ‘O’ and parallel to the vector b̅ is r̅ = tb̅, t ∈ R.
Cartesian form : Cartesian equation for the line equation passing through A (x[1], y[1], z[1]) and parallel to the vector b̅ = li + mj + nk is \(\frac{x-x_1}{l}=\frac{y-y_1}{m}=\frac{z-z_1}{n}\)
• The vector equat ion of the line passing through the points A (a) and B(b) is r = (1 – t) a̅ + tb̅. t ∈ R.
Cartesian form : Cartesian equation for the line through A(x[1], y[1], z[1]) and B(x[2], y[2], z[2]) is \(\frac{\mathrm{x}-\mathrm{x}_1}{\mathrm{x}_2-\mathrm{x}_1}=\frac{\mathrm{y}-\mathrm{y}_1}{\mathrm{y}_2-\mathrm{y}_1}=\frac{\mathrm{z}-\mathrm{z}_1}{\mathrm{z}_2-\mathrm{z}_1}\)
→ Vector equations of a plane :
• The vector equation of the plane passing through the points A(a̅) and parallel to the vectors b̅ & c̅ is r̅ = a̅ + tb̅ + sc̅ : t. s ∈ R.
• The equation of the plane passing through the points A(a̅).B(b̅) and parallel to the vector c is r̅ = (1 – t) a̅ + tb + sc̅ ; t, s ∈ R.
• The equation of the plane passing through three non-collinear points A(a̅), B(b̅) and C(c̅) is r̅ = (1 – t – s)a̅ + tb̅ + sc̅ ; t, s ∈ R.
→ Linear combinations : Let \(\overline{a_1}, \overline{a_2}, \ldots \ldots, \overline{a_n}\) be n vectors and l[1], l[2], …………. l[n] be n scalars.
Then \(l_1 \overline{\mathrm{a}_1}+l_2 \overline{\mathrm{a}_2}, \ldots \ldots \ldots+l_{\mathrm{n}} \overline{\mathrm{a}_{\mathrm{n}}}\) is called a linear combination of \(\overline{\mathrm{a}_1}, \
overline{\mathrm{a}_2}, \ldots \overline{\mathrm{a}_{\mathrm{n}}}\).
| {"url":"https://tsboardsolutions.com/ts-inter-1st-year-maths-1a-addition-of-vectors-formulas/","timestamp":"2024-11-02T17:14:14Z","content_type":"text/html","content_length":"59984","record_id":"<urn:uuid:78c001ae-72fe-4212-82ba-6227c040b3a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00584.warc.gz"}
Monotonicity of dynamical degrees for Hénon-like and polynomial-like maps
F. Bianchi - T. C. Dinh - K. Rakhimov
Published Paper
Inserted: 29 mar 2024
Last Updated: 23 sep 2024
Journal: Transactions of the AMS
Volume: 377
Number: 9
Year: 2024
We prove that, for every invertible horizontal-like map (i.e., Hénon-like map) in any dimension, the sequence of the dynamical degrees is increasing until that of maximal value, which is the main
dynamical degree, and decreasing after that. Similarly, for polynomial-like maps in any dimension, the sequence of dynamical degrees is increasing until the last one, which is the topological degree.
This is the first time that such a property is proved outside of the algebraic setting. Our proof is based on the construction of a suitable deformation for positive closed currents, which relies on
tools from pluripotential theory and the solution of the $d$, $\bar \partial$, and $dd^c$ equations on convex domains. | {"url":"https://gecogedi.dimai.unifi.it/paper/579/","timestamp":"2024-11-09T14:32:19Z","content_type":"text/html","content_length":"4507","record_id":"<urn:uuid:016fcce5-02af-4ae5-95fd-978331b583eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00169.warc.gz"} |
Coherence in a network of two-level systems coupled to a bosonic field
We study the quantum dynamics of an open network of two-level systems which is coupled to a correlated common environment represented by a bosonic field in a thermal equilibrium state. Extensive
numerical simulations of the full second-order time-convolutionless quantum master equation for the density matrix of various types of networks are performed, in order to investigate dissipation and
decoherence processes and, in particular, their dependence on the spatial separation of the network sites and on the speed of the bosonic field modes. In the limit of an infinite speed the influence
of the environment disappears due to the emergence of a decoherence-free subspace, while the limit of zero speed corresponds to the case in which the network sites are coupled to independent
reservoirs, a case which is much easier to treat numerically. The main result of the paper is a general intuitive criterion which states that the simpler model of independent reservoirs can be used
as long as the network relaxation time is smaller than the time it takes for the modes to travel the minimal distance between the network sites.
ASJC Scopus subject areas
• Atomic and Molecular Physics, and Optics
| {"url":"https://cris.bgu.ac.il/en/publications/coherence-in-a-network-of-two-level-systems-coupled-to-a-bosonic-","timestamp":"2024-11-12T03:45:05Z","content_type":"text/html","content_length":"56250","record_id":"<urn:uuid:2051b668-273a-417d-b097-fa77d816fd47>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00351.warc.gz"}
How many entries of a typical orthogonal matrix can be approximated by independent normals?
We solve an open problem of Diaconis that asks what are the largest orders of $p_n$ and $q_n$ such that $Z_n,$ the $p_n\times q_n$ upper left block of a random matrix $\boldsymbol{\Gamma}_n$ which is
uniformly distributed on the orthogonal group O(n), can be approximated by independent standard normals? This problem is solved by two different approximation methods. First, we show that the
variation distance between the joint distribution of entries of $Z_n$ and that of $p_nq_n$ independent standard normals goes to zero provided $p_n=o(\sqrt{n})$ and $q_n=o(\sqrt{n})$. We also show
that the above variation distance does not go to zero if $p_n=[x\sqrt{n} ]$ and $q_n=[y\sqrt{n} ]$ for any positive numbers $x$ and $y$. This says that the largest orders of $p_n$ and $q_n$ are $o(n^
{1/2})$ in the sense of the above approximation. Second, suppose $\boldsymbol{\Gamma}_n=(\gamma_{ij})_{n\times n}$ is generated by performing the Gram--Schmidt algorithm on the columns of $\bold{Y}_n
=(y_{ij})_{n\times n}$, where $\{y_{ij};1\leq i,j\leq n\}$ are i.i.d. standard normals. We show that $\epsilon_n(m):=\max_{1\leq i\leq n,1\leq j\leq m}|\sqrt{n}\cdot\gamma_{ij}-y_{ij}|$ goes to zero
in probability as long as $m=m_n=o(n/\log n)$. We also prove that $\epsilon_n(m_n)\to 2\sqrt{\alpha}$ in probability when $m_n=[n\alpha/\log n]$ for any $\alpha>0.$ This says that $m_n=o(n/\log n)$
is the largest order such that the entries of the first $m_n$ columns of $\boldsymbol{\Gamma}_n$ can be approximated simultaneously by independent standard normals.
Comment: Published at http://dx.doi.org/10.1214/009117906000000205 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org). | {"url":"https://core.ac.uk/works/1086405/","timestamp":"2024-11-07T23:01:00Z","content_type":"text/html","content_length":"122601","record_id":"<urn:uuid:5c77f006-cee6-4371-a5a3-1ce36abb9c5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00416.warc.gz"}
Bahrain VAT Calculator
Bahrain Value Added Tax Calculator
Calculator published on: 2023-05-04 12:08:02
Calculator last reviewed: 2023-05-04 12:23:50
Reference(s): The National Bureau for Revenue (NBR)
VAT Rates last updated: 2023-05-04 12:23:46
Currency Bahraini Dinar [ BHD ] [ .د.ب ]
The Bahrain VAT Calculator provides quick VAT calculations and, optionally, a detailed VAT table which includes multiple VAT entries and calculations. VAT is calculated as a percentage of the value
of the goods at the point of sale, this amount is then added to provide the total sale price (if you wish to deduct VAT from the sale amount, please use the Reverse VAT Calculator for Bahrain). The
quick VAT calculation is for those who wish to calculate how much VAT will be added to a specific product or service. The advanced calculator option includes the same calculation overview and, in
addition, produces a table of VAT transactions with details on the VAT amount for each product/service and the total cost of the goods/services before and after VAT is applied.
On this page, discover:
VAT Formula for Bahrain
As we mentioned in the introduction, "VAT is calculated as a percentage of the value of the goods at the point of sale, this amount is then added to provide the total sale price". VAT is therefore a
transactional tax as the tax is only incurred at the point of sale, this is the same for goods and services. If we consider the calculation from the perspective of the goods or service, we can define
the formula for calculating VAT in Bahrain as having four components, there are:
1. The NET sale amount of the goods/service at the point of sale (a)
2. The VAT rate that is applicable to the goods/service (b)
3. The amount of VAT that is chargeable on the goods/service (c)
4. The GROSS sale amount of the goods/service at the point of sale (d)
You may have noticed that we defined a letter for each of the four components or VAT calculation, this letter is shown in a bracket. We will use this letter to create our VAT formula, using the
letters instead of the fully worded VAT component so that our VAT formula is in a nice user friendly format.
Our aim when calculating VAT is to calculate the "the GROSS sale amount of the goods/service at the point of sale", in order to do this, we must first calculate the "amount of VAT that is chargeable
on the goods/service", we can do this using the following formula:
c = a × b / 100
The above formula illustrates that "the amount of VAT that is chargeable on the goods/service" is equal to "the NET sale amount of the goods/service at the point of sale" times "the VAT rate that is
applicable to the goods/service"
You may have noticed that in our formula we divide "the VAT rate that is applicable to the goods/service (b)" by 100, this is to allow us to calculate the written percentage as a mathematical
percentage amount. For example, 50% is equal to 0.50 when completing mathematical computations.
Now that we have calculated "The amount of VAT that is chargeable on the goods/service (c)" we can calculate the "The GROSS sale amount of the goods/service at the point of sale (d)" using the
following formula:
d = a + c
As you can see from this very simple formula, we calculate "the GROSS sale amount of the goods/service at the point of sale (d)" by adding "the NET sale amount of the goods/service at the point of
sale (a)" to "The amount of VAT that is chargeable on the goods/service (c)" that we calculated with the previous VAT formula
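These two formulas translate directly into code. A small Python sketch (the function name is my own; it rounds to the three decimal places used for BHD):

```python
def vat_breakdown(net: float, rate_percent: float) -> tuple[float, float]:
    """Return (vat_amount, gross) given a NET price and a VAT rate in percent.
    Implements c = a * b / 100 and d = a + c from the formulas above."""
    vat = round(net * rate_percent / 100, 3)   # BHD uses 3 decimal places
    return vat, round(net + vat, 3)

# Hypothetical sale of 100.000 BHD at the 5% standard rate from the table below
print(vat_breakdown(100.0, 5))  # (5.0, 105.0)
```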
Instructions: How to use the Bahrain VAT Calculator
There are two parts to these instructions, we explain the general features of our tax calculators in these instructions, below we discuss the instructions for using the Bahrain VAT Calculator.
Bahrain VAT Calculator - Quick Calculation Instructions
The quick VAT calculation provides the results below the calculator and requires the minimum input to calculate the VAT due on a product or service.
1. Enter the NET Value / Price of the product or service.
2. Select the VAT rate applicable for the product service.
3. Click on the Calculate button.
Bahrain VAT Calculator - Detailed Calculation Instructions
The detailed VAT calculation allows you to create a table of results for multiple products and/or services that are subject to VAT. The results are displayed for each VAT calculation as we
demonstrated with the quick VAT calculation method, with the detailed VAT calculator you have the option to add the results to a VAT table which provides the total VAT costs, NET sale amount and
GROSS sale amount.
1. To use the detailed VAT Calculator and tools, click on the Detailed Calculation icon in the tax calculator ribbon bar
2. Enter the Description of the product / service subject to VAT in Bahrain.
3. Enter the NET Value / Price of the product or service.
4. Select the VAT rate applicable for the product service.
5. Enter the total amount of Units Purchased/Sold that you wish to calculate VAT on.
6. Click on the Calculate + add to table button.
The detailed VAT calculations will then be displayed under the calculator, in addition, the results will be added to the VAT Calculation Tables (which will appear below when using the detailed VAT
calculator). You have the option to remove VAT entry rows as required by clicking on the delete button.
Bahrain VAT Rates for 2023
The vat rates for Bahrain were last updated on 2023-05-04 12:23:46. You can review the full tax tables for Bahrain here where you can also report any tax rate or tax threshold changes which require
attention or alteration.
Bahrain VAT Rates for 2023
VAT Rate VAT Description
0% Exempt
0% Zero Rated
5% Standard Rate | {"url":"https://bh.taxcalculator.info/vat.html","timestamp":"2024-11-02T08:06:48Z","content_type":"text/html","content_length":"19223","record_id":"<urn:uuid:2f260c84-9b44-47a1-b564-9bf6c73ea96c>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00524.warc.gz"} |
Three quantizations of conformal field theory
May 1 (Wed) at 15:40 - 17:30, 2019 (JST)
Tsukasa Tada (Coordinator, iTHEMS / Vice Chief Scientist, Quantum Hadron Physics Laboratory, RIKEN Nishina Center for Accelerator-Based Science (RNC))
Needless to say, conformal field theory is elemental in the study of string theory, statistical quantum systems, and various quantum field theories.
Two-dimensional conformal field theory is usually quantized by the so-called radial quantization. However, this is not the only way. As a matter of fact, there are two other distinctive choices for
the time foliation, or equivalently, the Hamiltonian. One of these choices yields the continuous Virasoro algebra, while the other choice leads to the Virasoro algebra on a torus. The former case
corresponds to the recently found (and perhaps less known) phenomenon, sine-square deformation. The latter yields the well-known entanglement entropy. I will present a comprehensive treatment of
these three quantizations and discuss its physical implications. | {"url":"https://ithems.riken.jp/en/events/three-quantizations-of-conformal-field-theory","timestamp":"2024-11-14T07:06:43Z","content_type":"text/html","content_length":"43298","record_id":"<urn:uuid:7e5be29f-761d-438b-a121-0c95224167a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00704.warc.gz"} |
Smooth and peaked solitons of the Camassa-Holm equation and applications
The relations between smooth and peaked soliton solutions are reviewed for the Camassa-Holm (CH) shallow water wave equation in one spatial dimension. The canonical Hamiltonian formulation of the CH
equation in action-angle variables is expressed for solitons by using the scattering data for its associated isospectral eigenvalue problem, rephrased as a Riemann-Hilbert problem. The momentum map
from the action-angle scattering variables T*(T^N) to the flow momentum provides the Eulerian representation of the N-soliton solution of CH in terms of the scattering data and squared eigenfunctions
of its isospectral eigenvalue problem. The dispersionless limit of the CH equation and its resulting peakon solutions are examined by using an asymptotic expansion in the dispersion parameter. The
peakon solutions of the dispersionless CH equation in one dimension are shown to generalize in higher dimensions to peakon wave-front solutions of the EPDiff equation whose associated momentum is
supported on smoothly embedded subspaces. The Eulerian representations of the singular solutions of both CH and EPDiff are given by the (cotangent-lift) momentum maps arising from the left action of
the diffeomorphisms on smoothly embedded subspaces.
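For reference (an addition of ours, since the abstract itself does not display the equations), the Camassa-Holm equation with linear dispersion parameter \(\kappa\) is commonly written in momentum form as
\[ m_t + u\,m_x + 2\,u_x m = -2\kappa\,u_x, \qquad m = u - u_{xx}, \]
with the dispersionless limit obtained as \(\kappa \to 0\); the single peaked soliton (peakon) of the dispersionless equation is \( u(x,t) = c\,e^{-|x-ct|} \).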
• smooth soliton solutions
• peaked soliton solutions
• Camassa-Holm equation
• Hamiltonian formulation
• action-angle variables
• scattering data
• Riemann-Hilbert problem
• momentum map
• Eulerian representation
• N-soliton solution
• dispersionless limit
• peakon solutions
• EPDiff equation
• singular solutions
• diffeomorphisms
Dive into the research topics of 'Smooth and peaked solitons of the Camassa-Holm equation and applications'. Together they form a unique fingerprint. | {"url":"https://researchprofiles.tudublin.ie/en/publications/smooth-and-peaked-solitons-of-the-camassa-holm-equation-and-appli-3","timestamp":"2024-11-10T13:59:51Z","content_type":"text/html","content_length":"55744","record_id":"<urn:uuid:65f6ace5-7a68-4914-9528-fc7d0a1b2ee8>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00151.warc.gz"} |
ECE 515 - Control System Theory & Design
Homework 4 - Due: 02/15
Problem 1
In class last week, we discussed changes to the LTV system \(\dot x = A(t) x(t)\) under a time varying coordinate change \(x(t) = P(t) \bar{x}(t)\) and derived a form for the \(\bar{A}(t)\) matrix in
the equivalent representation: \(\dot {\bar{x}} = \bar{A}(t) \bar{x}(t)\) under the assumption that \(P(t)\) is invertible for all \(t\).
\[ \bar{A}(t) = \frac{d}{dt}\!\left[P^{-1}(t)\right] P(t) + P^{-1}(t)A(t)P(t) \]
For this problem
a. Extend the coordinate change to the full system: \[ \dot x = A(t) x(t) + B(t) u (t) \]
b. Show that if \(P(t)\) is a fundamental matrix for the system \[ \dot{P}(t) = A(t) P(t), \qquad P(t_0) = C \in \mathbb{R}^{n\times n} \] where \(C\) is invertible then \[ \dot{\bar{x}} (t) = P^
{-1}B(t) u(t) \]
c. Using this result re-derive the variation of constants formula \[ x(t) = \phi(t, t_0)x(t_0) + \int \limits _{t_0} ^{t} \phi (t, s) B(s) u(s) ds \] where \(\phi (t, s) = P(t)P^{-1}(s)\).
Problem 2
Prove that the Euclidean norm \[|x|:=\sqrt{\langle x,x\rangle} =\sqrt{x_1^2+\dots+x_n^2}\] satisfies the triangle inequality.
Hint: Use the Cauchy-Schwarz inequality \(|\langle x,y\rangle|^2\le \langle x,x\rangle\cdot \langle y,y\rangle\).
Problem 3
Let \(A\) be a symmetric real-valued square matrix.
a. Show that if \(\lambda+i\mu\) is an eigenvalue of \(A\) and \(z=x+iy\) is a corresponding eigenvector, then \(\mu=0\) and \(x\) is an eigenvector (you can assume \(x\ne 0\) for simplicity). In
other words, eigenvalues of symmetric matrices are always real and eigenvectors can always be chosen to be real.
Hint: Show that \(\overline z^T\!Az\) is real.
b. Show that eigenvectors of \(A\) corresponding to distinct eigenvalues are orthogonal.
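Not required for the proofs, but both claims are easy to observe numerically; a quick NumPy check with a random symmetric matrix of our own choosing:
import numpy as np
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                    # a random real symmetric matrix
w, V = np.linalg.eig(A)
print(np.max(np.abs(np.imag(w))))    # ~0: the computed eigenvalues are real
print(np.round(V.T @ V, 6))          # ~identity: eigenvectors for distinct eigenvalues are orthogonal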
Problem 4
Let \(X\) and \(Y\) be linear vector spaces over \(\mathbb R\) equipped with inner products \(\langle\cdot,\cdot\rangle_X\) and \(\langle\cdot,\cdot\rangle_Y\), respectively. Further, let \(L:X\to Y
\) be a linear operator.
We define the adjoint of \(L\) to be a linear operator \(L^*:Y\to X\) with the property that \[ \langle y,Lx\rangle_Y=\langle L^*y,x\rangle_X\qquad \forall\, x\in X,\ y\in Y \] Assume that the map \
(LL^*:Y\to Y\) is invertible. Then the equation \(Lx=y_0\) has a solution \[ x_0=L^*(LL^*)^{-1}y_0 \] for each \(y_0\in Y\).
Prove that if \(x_1\) is any other solution of \(Lx=y_0\), then \(\langle x_1,x_1\rangle\ge \langle x_0,x_0\rangle\).
Hint: Let \(y_1:=(LL^*)^{-1}y_0\). Using the definition of adjoint, show that \(\langle y_1,Lx_0\rangle=\langle x_0,x_0\rangle\) and also that \(\langle x_0,x_1\rangle=\langle y_1,Lx_0\rangle\)
Complete the proof by using the fact that \(\langle x_1-x_0,x_1-x_0\rangle\ge 0\).
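This is not part of the assignment, but the least-norm property is easy to sanity-check numerically for the standard Euclidean inner products (the matrix sizes and names below are our own choices):
import numpy as np
rng = np.random.default_rng(0)
L = rng.standard_normal((3, 6))              # a generic "fat" L, so L L^* is invertible
y0 = rng.standard_normal(3)
x0 = L.T @ np.linalg.solve(L @ L.T, y0)      # x0 = L^*(L L^*)^{-1} y0
assert np.allclose(L @ x0, y0)               # x0 indeed solves L x = y0
_, _, Vt = np.linalg.svd(L)
for v in Vt[3:]:                             # rows spanning the null space of L
    x1 = x0 + 2.0 * v                        # another solution of L x = y0
    assert np.linalg.norm(x1) >= np.linalg.norm(x0)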
Problem 5
Using the stability definitions given in class, determine if the systems below are stable, asymptotically stable, globally asymptotically stable, or neither. The first two systems are in \(\mathbb{R}
^2\), the last is in \(\mathbb{R}\).
a. \(\dot x_1 = 0\) and \(\dot x_2 = -x_2\)
b. \(\dot x_1 = -x_2\) and \(\dot x_2 = 0\)
c. \(\dot x = 0\) if \(|x|>1\) and \(\dot x = -x\) otherwise
Justify your answers using only the definitions of stability (not eigenvalues or Lyapunov’s method).
Problem 6
First, some definitions.
• Given a linear operator \(A:X\to X\), a subspace \(Y\subset X\) is called \(A\)-invariant if \(Ay\in Y\) \(\forall y\in Y\).
• For a linear system \(\dot x=Ax\) on \(X=\mathbb{R}^n\), this means that \(x_0\in Y\) implies \(x(t)\in Y\) \(\forall t\) (reason: \(x(t)=e^{At}x_0=(I+At+A^2t^2/2+\dots)x_0\)).
• If \(v\) is an eigenvector of \(A\) with a real eigenvalue, then \(\operatorname{span}\{v\}\) is a 1-dimensional invariant subspace. For a \(k\times k\) Jordan block, the eigenvector \(v_1\) and
the generalized eigenvectors \(v_2,\dots, v_k\) together span an invariant subspace.
• The case of a pair of complex eigenvalues was discussed in Problem Set 2, Problem 3
• The sum of all invariant subspaces corresponding to eigenvalues with \(\text{Re}(\lambda)<0\) is called the stable invariant subspace; the corresponding object for \(\text{Re}(\lambda)\ge 0\) is
the unstable invariant subspace; together these two subspaces span \(\mathbb{R}^n\).
Consider the LTI system \(\dot x =Ax\) where \[ A= \begin{pmatrix} -1 & 0 & 0 \\ 0 & 2 & 1\\ 0 & -1 & 2 \end{pmatrix} \] Identify the stable and unstable invariant subspaces by giving a real basis
for each of them. | {"url":"https://courses.grainger.illinois.edu/ece515/sp2024/homework/hw04.html","timestamp":"2024-11-13T21:55:23Z","content_type":"application/xhtml+xml","content_length":"37749","record_id":"<urn:uuid:b7fe4de1-50be-43fb-8c49-285b000604fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00465.warc.gz"} |
Odds Converter
The Odds Converter allows you to seamlessly convert between Decimal, Fractional, American odds, and Win Percentage. Simply input any one of these values, and the converter will automatically update
the other fields, ensuring all odds formats are accurate and synchronised. This tool is particularly helpful for bettors who need to understand or compare odds across different regions or platforms.
Detailed Instructions for Using the Odds Converter
1. Enter Decimal Odds: To start with decimal odds, enter your value in the "Decimal Odds" field. This will automatically convert the odds to the corresponding fractional odds, American odds, and win
percentage. Decimal odds can range from very low (1.01) to high, depending on the likelihood of the outcome.
2. Enter Fractional Odds: If you prefer working with fractional odds, input the numerator and denominator separately. The tool will convert this into decimal odds, American odds, and the win
percentage. The fractional odds format is commonly used in the UK and Ireland.
3. Enter American Odds: In the "American Odds" field, enter the odds as a positive or negative number. The calculator will update all other formats, including fractional, decimal, and win
percentage. American odds are widely used in the US for sports betting.
4. Enter Win Percentage: For those who prefer probability-based betting, input the win percentage. This field automatically converts the win probability into decimal, fractional, and American odds,
allowing for easy comparison between probability and odds formats.
Understanding Different Types of Odds
Decimal Odds
Decimal odds are the most straightforward type of odds and are commonly used in Europe, Canada, and Australia. They represent the total payout, including your stake. For example, if the decimal odds
are 2.50, you will receive £2.50 for every £1 staked, which includes your original £1.
The formula for calculating payout with decimal odds is simple:
\[ \text{Payout} = \text{Stake} \times \text{Decimal Odds} \]
For example, if you stake £100 on odds of 2.50, your potential return would be:
\[ \text{Payout} = £100 \times 2.50 = £250 \]
This includes a profit of £150, as £100 is your initial stake.
Fractional Odds
Fractional odds are widely used in the UK and Ireland and show the net profit relative to the stake. For example, fractional odds of 5/1 mean you will earn £5 for every £1 staked, in addition to
receiving your original stake back.
The formula for converting fractional odds to decimal odds is:
\[ \text{Decimal Odds} = \frac{\text{Numerator}}{\text{Denominator}} + 1 \]
For example, fractional odds of 5/2 are equivalent to:
\[ \text{Decimal Odds} = \frac{5}{2} + 1 = 3.50 \]
American Odds
American odds are mainly used in the United States and can be expressed as positive or negative values. Positive American odds (e.g., +200) represent how much profit you would make on a £100 bet.
Negative American odds (e.g., -150) show how much you need to bet to win £100.
To convert positive American odds to decimal odds:
\[ \text{Decimal Odds} = \frac{\text{American Odds}}{100} + 1 \]
For negative American odds:
\[ \text{Decimal Odds} = \frac{100}{|\text{American Odds}|} + 1 \]
Win Percentage
Win percentage represents the probability of an event happening. It can easily be converted into odds using the formula:
\[ \text{Decimal Odds} = \frac{100}{\text{Win Percentage}} \]
For example, if an event has a 25% win probability, the decimal odds would be:
\[ \text{Decimal Odds} = \frac{100}{25} = 4.00 \]
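Putting the conversions together, here is a small Python sketch of a converter (our own illustration, not the site's calculator; the function name and rounding choices are ours):
from fractions import Fraction

def convert(decimal_odds):
    # Net profit per 1 unit staked; this is what fractional odds express.
    profit = decimal_odds - 1
    fractional = Fraction(profit).limit_denominator(100)
    if decimal_odds >= 2.0:
        american = round(profit * 100)       # e.g. 3.00 -> +200
    else:
        american = -round(100 / profit)      # e.g. 1.50 -> -200
    win_percentage = 100 / decimal_odds      # implied probability
    return fractional, american, win_percentage

print(convert(3.50))   # (Fraction(5, 2), 250, 28.57...)
print(convert(4.00))   # (Fraction(3, 1), 300, 25.0)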
The History of Different Odds Formats in Gambling
The evolution of odds formats across different regions of the world is deeply connected to the history of gambling and betting culture. The diverse ways of expressing odds—Decimal, Fractional, and
American—are products of cultural preferences, local betting practices, and even the development of certain sports. Let’s take a closer look at how these formats evolved and why they became dominant
in specific regions.
Fractional Odds: The UK and Horse Racing
Fractional odds, also known as "British odds" or "traditional odds," have their roots in the early days of horse racing in the United Kingdom. As one of the oldest forms of organised betting, horse
racing played a crucial role in shaping how odds were expressed. In the 1700s and 1800s, when horse racing was at the centre of British leisure, bookmakers used fractions to represent the return on a
bet. The fractions reflected a simple way to calculate profit, which made them accessible for the bettors of the time.
For example, odds of 5/1 meant that for every £1 bet, the bettor would receive £5 in profit if their selection won. This intuitive system worked well in a time when betting was less formalised, and
it has remained a cultural staple in the UK and Ireland ever since.
However, as global betting has expanded and become more interconnected, the complexity of fractional odds—especially when dealing with odds like 15/8 or 9/4—has led some bettors to prefer the
simplicity of decimal odds. This is particularly true for betting exchange users.
Decimal Odds: Continental Europe, Canada, and Australia
Decimal odds, often referred to as "European odds," gained popularity later in the 20th century, particularly as betting markets became more standardized and betting regulation increased in regions
like continental Europe, Canada, and Australia. Decimal odds are much easier to understand, particularly for casual bettors, because they represent the total payout, including the stake, in one figure.
For example, if you place a bet at odds of 3.50, you know that for every £1 staked, you will receive £3.50 back, including your initial stake. The simplicity of this format makes it especially
popular for markets where betting is highly regulated, such as in Australia and across many European countries. Additionally, with the rise of online betting platforms and sports exchanges like
Betfair, decimal odds became the go-to format for calculating potential winnings quickly and easily.
In Australia, decimal odds are dominant, especially for betting on sports like cricket, rugby, and Australian Rules football. As Australia developed its own unique sports culture, decimal odds
provided a clear, easy-to-understand way to handle increasingly complex betting markets.
Similarly, in Canada and much of continental Europe, decimal odds took over as gambling and sports betting regulations evolved in the 20th century. This format's user-friendly nature made it a
natural choice for countries standardizing their betting laws and attracting international bettors.
American Odds: The United States and Sports Betting
American odds, also known as "moneyline odds," developed from the unique betting practices in the United States, particularly around American sports such as baseball, basketball, and American
football. While sports like horse racing and boxing were already popular in the early 20th century, the advent of betting on these American sports required a different approach to odds.
American odds can be positive or negative:
• Positive odds (e.g., +200) show how much profit you would make on a $100 bet. For example, odds of +200 mean you would win $200 for every $100 staked.
• Negative odds (e.g., -150) show how much you need to bet to win $100. For example, odds of -150 mean you need to bet $150 to win $100.
This system was born out of the way betting was handled in the U.S. and aligns with the American focus on profit margins. Unlike fractional or decimal odds, which are quoted per unit staked, American
odds emphasize either the required stake or potential profit, depending on whether the odds are positive or negative. For this reason, they are especially common in sports like American football,
basketball, and baseball, where they allow bettors to easily calculate whether they are betting on the favorite (negative odds) or the underdog (positive odds).
As Las Vegas became the hub for sports betting in the U.S. throughout the 20th century, American odds became the standard across bookmakers and sportsbooks. Even today, in the post-legalisation era
of sports betting in many U.S. states, American odds remain the dominant format, particularly for markets like the NFL and NBA.
The Globalisation of Betting: Convergence and Challenges
While each of these odds formats developed in specific regions, the globalisation of sports betting in the 21st century has led to a blending of these systems. With the rise of online betting
platforms that operate across multiple regions, bettors are increasingly exposed to different odds formats. This is where tools like the Odds Converter come in handy, helping bettors switch between
formats depending on where they’re betting or what market they’re participating in.
For example, a UK-based bettor accustomed to fractional odds might place a bet on an international tennis match where the odds are displayed in decimal. Similarly, a bettor in the U.S. might engage
with European soccer (football) markets that use decimal odds or even fractional odds for large international events like the Cheltenham Festival or the FIFA World Cup. | {"url":"https://sharpbetting.co.uk/calculator/odds-converter","timestamp":"2024-11-12T21:52:14Z","content_type":"text/html","content_length":"37327","record_id":"<urn:uuid:a6d2958f-cf3e-4076-8a83-086c39b918b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00592.warc.gz"} |
The hat guessing number of graphs
Consider the following hat guessing game: n players are placed on n vertices of a graph, each wearing a hat whose color is arbitrarily chosen from a set of q possible colors. Each player can see the
hat colors of his neighbors, but not his own hat color. All of the players are asked to guess their own hat colors simultaneously, according to a predetermined guessing strategy and the hat colors
they see, where no communication between them is allowed. Given a graph G, its hat guessing number HG(G) is the largest integer q such that there exists a guessing strategy guaranteeing at least one
correct guess for any hat assignment of q possible colors. In 2008, Butler et al. asked whether the hat guessing number of the complete bipartite graph K[n,n] is at least some fixed positive
(fractional) power of n. We answer this question affirmatively, showing that for sufficiently large n, the complete r-partite graph K[n,…,n] satisfies HG(K[n,…,n])=Ω(n^[Formula presented]−o(1)). Our
guessing strategy is based on a probabilistic construction and other combinatorial ideas, and can be extended to show that HG(C→[n,…,n])=Ω(n^[Formula presented]−o(1)), where C→[n,…,n] is the blow-up
of a directed r-cycle, and where for directed graphs each player sees only the hat colors of his outneighbors. Additionally, we consider related problems like the relation between the hat guessing
number and other graph parameters, and the linear hat guessing number, where the players are only allowed to use affine linear guessing strategies. Several nonexistence results are obtained by using
well-known combinatorial tools, including the Lovász Local Lemma and the Combinatorial Nullstellensatz. Among other results, it is shown that under certain conditions, the linear hat guessing number
of K[n,n] is at most 3, exhibiting a huge gap from the Ω(n^[Formula presented]−o(1)) (nonlinear) hat guessing number of this graph.
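As a toy illustration of the game (our own example, not taken from the paper): on the graph consisting of a single edge, two players with q = 2 colors can already guarantee a correct guess, since one player can guess that the two hats match while the other guesses that they differ, so exactly one of them is always right. An exhaustive check in Python:
import itertools
guess_player_1 = lambda other: other        # bets that the two hats match
guess_player_2 = lambda other: 1 - other    # bets that the two hats differ
for h1, h2 in itertools.product([0, 1], repeat=2):
    assert (guess_player_1(h2) == h1) or (guess_player_2(h1) == h2)
# Some player is correct for every hat assignment, so the hat guessing number of a single edge is at least 2.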
All Science Journal Classification (ASJC) codes
• Theoretical Computer Science
• Discrete Mathematics and Combinatorics
• Computational Theory and Mathematics
• Combinatorial Nullstellensatz
• Complete bipartite graph
• Hat guessing number
• Lovász Local Lemma
Dive into the research topics of 'The hat guessing number of graphs'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/the-hat-guessing-number-of-graphs-2","timestamp":"2024-11-02T20:31:52Z","content_type":"text/html","content_length":"53214","record_id":"<urn:uuid:cf9ca0ca-5ddb-4999-843d-a5d97b5d6284>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00269.warc.gz"} |
Isoperimetry in integer lattices | Published in Discrete Analysis
Extremal combinatorics
April 20, 2018 BST
Isoperimetry in integer lattices
Ben Barber
□ School of Mathematics, University of Bristol
Joshua Erde
□ Fachbereich Mathematik, Universität Hamburg
Isoperimetry in integer lattices, Discrete Analysis 2018:7, 16 pp.
The isoperimetric problem, already known to the ancient Greeks, concerns the minimisation of the size of a boundary of a set under a volume constraint. The problem has been studied in many contexts,
with a wide range of applications. The present paper focuses on the discrete setting of graphs, where the boundary of a subset of vertices can be defined with reference to either vertices or edges.
Specifically, the so-called edge-isoperimetric problem for a graph $G$ is to determine, for each $n$, the minimum number of edges leaving any set $S$ of $n$ vertices. The vertex isoperimetric problem
asks for the minimum number of vertices that can be reached from $S$ by following these edges.
For a general graph $G$ this problem is known to be NP-hard, but exact solutions are known for some special classes of graphs. One example is the $d$-dimensional hypercube, which is the graph on
vertex set $\{0,1\}^d$ with edges between those binary strings of length $d$ that differ in exactly one coordinate. The edge-isoperimetric problem for this graph was solved by Harper, Lindsey,
Bernstein, and Hart, and the extremal sets include $k$-dimensional subcubes obtained by fixing $d-k$ of the coordinates.
The edge-isoperimetric problem for the $d$-dimensional integer lattice whose edges connect pairs of vertices at $\ell_1$-distance 1, was solved by Bollobás and Leader in the 1990s, who showed that
the optimal shapes consist of $\ell_\infty$-balls. More recently, Radcliffe and Veomett solved the vertex-isoperimetric problem for the $d$-dimensional integer lattice on which edges are defined with
respect to the $\ell_\infty$-distance instead.
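As a small computational illustration of the Bollobás-Leader statement (ours, not taken from the paper), one can compare the $\ell_1$ edge boundary of a $6\times 6$ square (an $\ell_\infty$-ball) with that of a strip of the same cardinality in $\mathbb{Z}^2$:
def edge_boundary(points):
    # number of lattice edges (l1-distance 1) leaving the set in Z^2
    pts = set(points)
    count = 0
    for (x, y) in pts:
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in pts:
                count += 1
    return count

square = [(x, y) for x in range(6) for y in range(6)]    # 36 points
strip = [(x, 0) for x in range(36)]                      # 36 points
print(edge_boundary(square), edge_boundary(strip))       # 24 versus 74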
In the present paper the authors solve the edge-isoperimetric problem asymptotically for every Cayley graph on $G=\mathbb{Z}^d$, and determine the near-optimal shapes in terms of the generating set
used to construct the Cayley graph. In particular, this solves the edge-isoperimetric problem on $\mathbb{Z}^d$ with the $\ell_\infty$-distance.
We now describe the results in more detail. Given any generating set $U$ of $G$ that does not contain the identity, the Cayley graph $\Gamma(G,U)$ has vertex set $G$ and edge set $\{(g,g+u): g\in G,
u\in U\}$. This construction includes both the $\ell_1$ and the $\ell_\infty$ graph defined above, by considering the generating sets $U_1=\{(\pm 1, 0,\dots,0),\dots,(0,\dots,0,\pm 1)\}$ and $U_\
infty=\{-1,0,1\}^d\setminus\{0,0,0\}$, respectively.
The near-optimal shapes obtained by the authors are so-called zonotopes, generated by line segments corresponding to the generators of the Cayley graph. More precisely, if $U=\{u_1,u_2,\dots,u_k\}$
is a set of non-zero generators of $G$, then the near-optimal shapes are the intersections of scaled copies of the convex hull of the sum set $\{0,u_1\}+\{0,u_2\}+\dots+\{0,u_k\}$ with $\mathbb{Z}^d$
. For example, when $d=2$, then the zonotope for the $\ell_\infty$ problem is an octagon obtained by cutting the corners off a square through points one third of the way along each side.
In contrast to the aforementioned edge-isoperimetry results, which were solved exactly using compression methods, the approach in this paper is an approximate one. It follows an idea of Ruzsa, who
solved the vertex-isoperimetry problem in general Cayley graphs on the integer lattice by approximating the discrete problem with a continuous one. While the continuous analogue in the present paper
is a natural one, it is not clear that it is indeed a good approximation to the original problem, and it is here that the main combinatorial contribution of the paper lies. It concludes with several
open problems and directions for further work.
Powered by
, the modern academic journal management system | {"url":"https://discreteanalysisjournal.com/article/3555","timestamp":"2024-11-13T16:06:11Z","content_type":"text/html","content_length":"166387","record_id":"<urn:uuid:58203408-5459-4ee0-be6c-21ff1a757e23>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00493.warc.gz"} |
What are the best practices for paying for R programming homework help? | Programming Assignment Help
What are the best practices for paying for R programming homework help? A R program may be studying the way you do online and can help you explore your computer’s resources to make sure it doesn’t
scare someone. The program can help either by providing you with a strong understanding of your skills or maybe help you select which one to use for final decisions. You can’t make mistakes (duress
mistakes) or you may have an unreliable backup or your computer could be damaged. In this example, it is common to get confused and even an inaccurate backup or backup might be damaged. This is
common among software development, where there is not all the information there is to understand about the software you are using. Even though most companies ignore this issue, you can potentially
learn some skills quickly and also for the first time. The same with your curriculum. How many times can you teach your computer? Then it is as simple as asking the person you teach this if they have
a problem with it, you might need to go over what they understand. If you don’t know how the computer works and how you can teach it, you may end up with a different understanding. In this example,
the programming language R is not programming. You can use your grammar to code using the symbols that you use, and write your program using those symbols. Learning R is very easy but there is a few
things to keep in mind before you can go ahead and go back to your college or even your classroom. You may find it harder to time yourself, particularly if you have an online instructor and if you
are prepared to take on the assignment so that you can start learning R in a simple way then you may not be able to continue your education. Here I can say that I learned this for the first time in
the summer semester of 1997. Here is the short guide for choosing the best learning equipment. There are many programs around that seem to be very good for their teachers, but get beyond the basics,
where the benefit is the simple instruction. There are times where you may use go to this site one way or another and it should be done using the special computer skills that are available in most
classes, such as the techniques you use throughout your classes. There are also other very helpful teaching tools that you can utilize. For example, there are many modules available on the internet
that are used independently as long as the programs are available for use with the computers of the students, but only if you are very skilled. In addition, there are plenty of ways you can use
programs that you haven’t used before.
What Is The Easiest Degree To Get Online?
Find these tutorials on Google and post here how to get started. Before you start learning R, you should have done some reading of the book R Live using R’s Reference in which is a chapter detailing
some basic research papers and problems solved in early teaching when learning R. You will need a book or three, but other books and also references from some of the organizations and workshops can
also giveWhat are the best practices for paying for R programming homework help? The R programming community has been using several of the best practices for paying for R programming homework to help
end students with their homework time. But some strategies are not always as effective. Let’s find out how to do this homework. How did the R programming community make up R code and how did they
begin to try to find other solutions? Here are just some of the strategies that are described by the R community to build the R code for their projects. 1. Make sure you’re really creating code Use
database classes, arrays, and other features to create complex code. Then evaluate this classes that you can easily pick up as needed. Look up a code base to select items from. Then you can fill in
the data types. Think of the R code that you have before you wrote that you can easily pick up. Instead of picking up two-year credits, make sure you’re thinking of one-year credits. Use the skills
and connections (such as the in-class system) that you were able to leverage when you wrote the R programming code for your project. Maybe this is the hardest part… It’s a bit boring. Try to choose
components that you’ve built, like using the data stores. If your idea is to run these components in a computer where doing so takes a ton of time, try to make this job easier.
Do Your Homework Online
(Notably, one-year credits may not make a good end goal). Another way to look at it is to do so. Each component has attributes that should ideally know what component it wants to run — one of those
attributes has to accurately represent what’s going on. So use components knowing what they want to run. Of course, if you have a lot of components, why not use performance-based components? What’s
the difference between storing data and using store-by-store? If you’re having this type of problem then there are plenty of good reasons to pay for those components. Next, consider both reading and
writing data. Reading data is so much easier when you can easily read it (writing data while using the R programming system is pretty easy). Writing data is a combination of read-to-read, text
to-text, and performance measurements — and you need to put data between reading (including data when used in writing to cache) and writing. 2. Consider the code If you think the R programming
community think that coding should be about data, what advice is there to give to those who are spending time on R programs to demonstrate these principles? 3. Listen to code Most programming is
complicated. There’s no good way to do a project so you can write code on it. If you’re going to have small problems with the need for data being there but cannot do anything small enough to be a
problem for the R community, you should instead create a project or topic for the whole program toWhat are the best practices for paying for R programming homework help? This article (I mean this to
mean the best practices) talks about the strategies that a R script can find and modify to pay for homework help, regardless of the level of difficulty. You may have a serious curiosity and have been
working with a lot of people wanting to modify R for learning from scratch. Another possibility is that a script may recognize a task like homework help, just like that. Finally, I want to show you
the books that you’ll likely need to look at while trying to determine what the best rule is for paying for R programming homework help! Research and Programming + Basic Theories In our current R
script, we are at the beginning of our dissertation and have just started to design a simple program using basic formulas. However, there are a few features that need to be explored first. // I am
going to get a calculator because I wrote this and it wants to be fast. But before I program this script, I’ll need to figure out how to just add a function to an expression. The main thing that I
have realized is that a mathematical solution like a fraction is a small part of programming, because in most cases, you wouldn't actually measure it or know how to do something like
that and then sort out the details yourself.
What Are Some Good Math Websites?
In the most commonly used solution to this problem, we would draw a dot on this figure and sort its sizes by how far it could go with a fraction. So we’d have a function that would look like this. //
If I were working with fractional equation then I would make a x-function that changes x-value from y-value if I wanted to make a fraction between left and right. And now a function is based on an
equation: So I would be able to write functions like that and then make a fraction based on that function, with operations for calculating to represent the value. How do a fraction look? Here’s the
gist: 1. If a function does anything for you to assess and solve, your problem must be somewhere between 1 and 2x+2x+3+1x+3+1 = x + 1 2. How can you judge when a function is different from the
function from where you first understood that function and actually measured its existence? You would need to make a small sample which would be enough to determine if the function was different from
you prior from where you are now looking. Right after we can make a fraction instead of the function first, we would want to review some basic definitions about fractions as if there were an analogue
method. For a given function thus, there would be no x as-is/x is/of.x, but there would be x as-is divided by 2*x.x, but that gets a constant x minus the constant 2*x. What this means, is that the
fraction is a point function, so the above definition | {"url":"https://programmingdoc.com/what-are-the-best-practices-for-paying-for-r-programming-homework-help","timestamp":"2024-11-13T12:34:36Z","content_type":"text/html","content_length":"162286","record_id":"<urn:uuid:e07c2d78-1d1d-400a-b53b-d4e4cf1afd57>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00189.warc.gz"} |
More on Rossi / Focardi LENR / "Cold Fusion"
More on Rossi / Focardi LENR / "Cold Fusion"
Andrea Rossi is not wasting time. As noted here earlier, Rossi has invested almost all his money in developing and producing the LENR reactors for the Athens, Greece 1 MW plant. Rossi also claims to
have signed a contract "of tremendous importance" in the USA
, with a company he is not yet at liberty to name.
Rossi plans to install a one megawatt, American made E-Cat power station in a factory in Greece in October, 2011. Rossi believes that only a working commercial power station can definitively
prove to the world that his creation is real. If E-Cats turn out to be as economical as expected, they will eventually be used to power cars, trucks, trains, ships, aircraft, and spacecraft. _
OpEdNews Chris Calder
More background and several helpful links at above link.
Both Brian Wang and Brian Westenhaus have been following the progress of the Rossi / Focardi low energy nuclear reaction device.
Rossi claims that the reactor is able to obtain large amounts of heat energy from the low energy nuclear transmutation reactions that transform Nickel into Copper. Here is a more detailed look at
the energy numbers involved in such a transmutation:
MeV for each Ni transformation
Starting from Ni58 we can obtain Copper formation and its successive decay in Nickel, producing Ni59, Ni60, and Ni62. The chain stops at Cu63 stable.
For simplicity I assume all the Nickel in the reactor in the form Ni58.
For simplicity I suppose for each Ni58 the whole sequence of events from Ni58 to Cu63 and as a rough estimate I calculate the mass defect between (Ni58 plus 5 nucleons) and the final state
Ni58 mass is calculated to be 57.95380± 15 amu
The actual mass of a copper-Cu63 nucleus is 62.91367 amu
Mass of Ni58 plus 5 nucleons is 57.95380+5=62.95380 amu
Mass defect is 62.95380-62.91367=0.04013 amu
1 amu = 931 MeV is used as a standard conversion
0.04013×931 MeV=37.36 MeV
So each transformation of Ni58 into Cu63 releases 37.36MeV of nuclear energy.
Nickel consumption
One hundred grams of nickel powder can power a 10 kW unit for a minimum of six months.
How much of Ni58 should be transformed, in six months of continuous operation, in order to generate 10 kW?
10 kW is thermal or electrical power. The nuclear power must be larger. Assume a nuclear power twice:
20 kW = 20,000 J/s = 1.25 x 10**17 MeV/s.
Each transformation of Ni58 into Cu63 releases 37.36MeV of nuclear energy.
The number of Ni58 transformations should thus be equal to (1.25 x 10**17)/37.36 = 3.346 x 10**15 per second.
Multiplying by the number of seconds in six months (1.55 x 10**7) the total number of transformed Ni58 nuclei is 5.186 x 10**22.
This means 5 grams.
The order of magnitude is not exactly the same but seems to be plausible. This means also 5 grams of Nickel in Rossi’s reactor transmuted into (stable) Copper after six months of continuous
operation at the rate of 10 kW. _NextBigFuture
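To retrace the arithmetic, here is a short Python sketch reproducing the figures quoted above (it reuses the post's own numbers and assumptions as-is, without vetting them against standard nuclear data tables):
AMU_TO_MEV = 931.0                                   # conversion factor used in the post
MEV_TO_J = 1.602e-13
mass_defect_amu = (57.95380 + 5) - 62.91367          # (Ni58 + 5 nucleons) - Cu63, as quoted
energy_per_transformation_mev = mass_defect_amu * AMU_TO_MEV
nuclear_power_w = 20_000                             # assumed twice the 10 kW thermal output
rate_per_s = nuclear_power_w / MEV_TO_J / energy_per_transformation_mev
seconds_in_six_months = 1.55e7
n_transformed = rate_per_s * seconds_in_six_months
grams_of_ni58 = n_transformed * 58 / 6.022e23        # via Avogadro's number
print(energy_per_transformation_mev, n_transformed, grams_of_ni58)
# roughly 37.4 MeV per transformation, about 5e22 transformations, about 5 grams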
This may seem incredible to most persons who know how many tons of coal are required to provide the same amount of power as 5 grams of nickel. But nuclear energy is on a far different level of
scale than chemical energies, such as combustion energy.
But if you consult this table of energy densities provided at Transtronics Wiki, you can clearly see the difference in scale between the energy of nuclear reactions and the energy from chemical
reactions -- roughly 7 or 8 orders of magnitude, depending on the method of comparison.
Imagine the savings in fuel transportation costs alone!
Will this sparkling new form of energy prove to be true gold, or just a fool's flash in the pan? Time will tell. _AlFin2100
More from Next Big Future
Labels: LENR
Subscribe to Post Comments [Atom] | {"url":"https://alfin2300.blogspot.com/2011/04/more-on-rossi-focardi-lenr-cold-fusion.html","timestamp":"2024-11-10T14:19:20Z","content_type":"application/xhtml+xml","content_length":"23175","record_id":"<urn:uuid:a5f81e48-01cf-4f2c-8605-1e420e2a9d28>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00847.warc.gz"} |
Discrete Calculus etc - Quantum Calculus
Discrete Calculus etc
Discrete Calculus
I had been busy teaching the new course math 22a last fall. Related to quantum calculus are the units Unit 33: discrete I and Unit 37 Discrete II where multi-variable calculus is done on graphs. There
was also a short presentation adapted from the final review:
Cartan’s magic formula
Related to quantum calculus is the topic of Cartan's magic formula which has a surprisingly simple implementation in the discrete. This formula gives an expression of the Lie derivative L[X] in terms
of the exterior derivative d and an inner derivative i[X]. The Magic formula of Cartan shows that the Lie derivative is a type of Laplacian and a square of a Dirac type operator d+i[X]. This is kind
of neat as if one has a Dirac operator, then one can write down immediately formulas for the corresponding wave equation. It essentially is a Taylor expansion. Teaching the topic of the
multi-variable Taylor expansion actually led to this note. By the way, everything is also illustrated with actual Mathematica code. Just download the LaTeX source of the paper to get the code. It is
simple text as I don’t trust mathematica notebooks. Their format will change over time. [The technical specifications of .nb files changed during the last decades I used Mathematica. One can hardly
open a 20 year old 1.2 notebook any more, but there is no problem with 20 year old plain code as this is just language independent of technical implementation.]
Elie Cartan (1869 – 1951) is the father of differential forms. The idea was published in his “Les systemes differentiels exterieurs et leur applications geometriques“.
Given a compact Riemannian manifold or a graph, we have both an exterior derivative $d$ as well as an interior derivative $i_X$ for every vector field X. The map $d$ maps $\Lambda^p$ to $\Lambda^
{p+1}$ and the map $i_X$ like $d^*$ maps $\Lambda^{p+1}$ to $\Lambda^p$. All these derivatives satisfy $d^2=(d^*)^2=i_X^2=0$. We can now form two types of Dirac operators and Laplacians
$$ D_X = d + i_X , L_X = D_X^2 = d i_X + i_X d $$
$$ D = d + d^* , L = D^2 = d d^* + d^* d $$
The equation $L_X = d i_X + i_X d$ is called the Cartan’s magical formula. In the special case $p=0$, where $L_X$ and $L$ are operators on scalar functions, we have $L_X f(x) = d/dt f(x(t))$, where
$x'(t) = X x(t)$ is the flow line of the vector field $X$.
Applying $i_X$ is also called a contraction. For scalar functions, $f \to D_X f = i_X df = df(X)$ is a directional derivative. It follows from the Cartan formula that the Lie derivative commutes with
$d$ because $L_X d = d i_X d = d L_X$. We can also check that the Lie derivative satisfies $L_{[X,Y]} = L_X L_Y - L_Y L_X$ if $Z=[X,Y]$ is defined by its inner derivative $i_Z = [L_X,i_Y] = L_X i_Y - i_Y
L_X$. However, we need to make sure that $i_Z$ is again an inner derivative. This works if $X$ involves only differential forms which are non-adjacent. One possibility is to restrict to odd forms.
Classically, how does one prove Cartan's magical formula? One makes use of the fact that $L_X$ commutes with d and does the right thing on functions. The inner derivative $i_X w(X_1,…,X_k) = w(X,X_1,…,X_k)$ is an antiderivation like d. It also satisfies the product rule. One can now prove the general fact by establishing it for 0-forms, then refer to the Grassmann algebra.
Here are some computations:
(* Generate a random simplicial complex *)
(* Computation of exterior derivative *)
If[SubsetQ[a,b] && (k==l+1),z=Complement[a,b][[1]];
c=Prepend[b,z]; Signature[a]*Signature[c],0]];
dt=Transpose[d]; DD=d+dt; LL=DD.DD;
(* Build interior derivatives iX and iY *)
e={}; Do[If[Length[G[[k]]]==2,e=Append[e,k]],{k,n}];
Do[ee=G[[e[[l]]]]; Do[If[SubsetQ[G[[k]],ee],
iX[[m,k]]= If[MemberQ[P,Length[G[[m]]]],X[[l]],0]*
Orient[G[[k]],G[[m]]]],{k,n}],{l,Length[e]}]; iX];
(* Build Laplacians LX,LY,LZ, plot spectrum of D and DX and matrices *)
iX=BuildField[{1,3,5,7,9}]; iY=BuildField[{1,3,5,7,9}];
DX=iX+d; LX=DX.DX; DY=iY+d; LY=DY.DY;
iZ=LX.iY-iY.LX; DZ=iZ+d; LZ=DZ.DZ;
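As a small self-contained complement to the Mathematica excerpt above (our own illustration, written in Python rather than Mathematica), here is the 0-form part of this kind of construction on a single triangle: the exterior derivative d as a signed incidence matrix, and the Laplacian d^T d acting on scalar functions:
import numpy as np
edges = [(0, 1), (0, 2), (1, 2)]          # oriented edges of the triangle on vertices 0,1,2
d0 = np.zeros((3, 3))                     # rows: edges, columns: vertices
for i, (a, b) in enumerate(edges):
    d0[i, a], d0[i, b] = -1, 1            # (d0 f)(a,b) = f(b) - f(a)
L0 = d0.T @ d0                            # Laplacian on 0-forms, i.e. the graph Laplacian
print(L0)                                 # [[2,-1,-1],[-1,2,-1],[-1,-1,2]]
print(np.linalg.eigvalsh(L0))             # eigenvalues 0, 3, 3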
From a footnote in the Math 22a course lecture notes:
The derivative $d$ acts on anti-symmetric tensors (forms), where $d^2=0$. A vector field $X$ then defines a Lie derivative $L_X = d i_X + i_X d=(d+i_X)^2=D_X^2$ with interior product $i_X$. For scalar functions and the constant field $X(x)=v$, one gets the directional derivative $D_v = i_X d$. The projection $i_X$ in a specific direction can be replaced with the transpose $d^*$ of $d$. Rather than transport along $X$, the signal now radiates everywhere. The operator $d+i_X$ then becomes the Dirac operator $D=d+d^*$ and its square is the Laplacian $L=(d+d^*)^2 = d d^* + d^* d$. The wave equation $f_{tt} = -L f$ can be written as $(\partial_t^2+D^2) f = (\partial_t - i D)(\partial_t + i D) f = 0$ which has the solution $a e^{i D t} + b e^{-i D t}$. Using the Euler formula $e^{i Dt}=\cos(Dt) + i \sin(Dt)$ one gets the explicit solutions $f(t) = f(0) \cos(Dt) + D^{-1} f_t(0) \sin(Dt)$ of the wave equation.
It gets more exciting: by packing the initial position and velocity into a complex wave $\psi(0,x) = f(0,x) + i D^{-1} f_t(0,x)$, we have $\psi(t,x) = e^{i D t} \psi(0,x)$. The wave equation is
solved by a Taylor formula, which solves a Schroedinger equation for $D$ and the classical Taylor formula is the Schroedinger equation for $D_X$. This works in any framework featuring a derivative
$d$, like finite graphs, where Taylor resembles a Feynman path integral a sort of Taylor expansion used by physicists to compute complicated particle processes. The Taylor formula shows that the
directional derivative $D_v$ generates translation by $-v$. In physics, the operator $P=-i \hbar D_v$ is called the momentum operator associated to the vector $v$. The Schroedinger equation $i \hbar
f_t = P f$ has then the solution $f(x-t v)$ which means that the solution at time $t$ is the initial condition translated by $t v$. This generalizes to the Lie derivative $L_X$ given by the magical
formula as $L_X=D_X^2$ acting on forms defined by a vector field $X$. For the analog $L=D^2$, the motion is not channeled in a determined direction $X$ (this is a photon) but spreads (this is a wave)
in all direction leading to the wave equation. We have just seen both the “photon picture” $L_X$ as well as the “wave picture” $L$ of light. And whether it is wave or particle, it is all Taylor.
Coloring using topology
Over the winter break I was programming a lot in some older project, also in Nantucket. Some blogging happened early 2015 here but then came some other series of interests, as one can see on this blog.
There is still the task to implement the graph coloring algorithm which uses topology. It turned out to be harder than expected to implement this but I believe to be there soon (hopefully I can make
some progress during spring break, if not some other discovery derails this, but there is no rush. Unlike in my postdoc time, where I had deadlines for papers, there is no particular one now so that
I’m not under pressure). There were many surprises. I needed dozens of attempts and a few attempts led to rather serious problems which also needed to readjust the theory. (Actually, some of the
obstacles would have been almost impossible to spot when looking only at the theory. One definitely has to implement every detail to make sure that one does not overlook a theoretical aspect). There
are actually many exciting surprises in the discrete and things which one might think to be true are not, sometimes because of strange reasons. Just to recapitulate: the topological approach is not
computer assisted. It is a theoretical and constructive proof based on a very simple idea of Fisk: just write the graph as a boundary of a higher dimensional graph, then color the latter. We use the
computer only to implement the algorithm. As it is constructive, one should be able to implement it in such a way that it works.
The strategy has been sketched in this paper and been illustrated here in 2014:
But a lot of more work was needed to actually show that the procedure works and terminates (which I’m now confident about but being confident is not the same than having it done). In the animation
seen above from 2014, the cutting was done by hand. The graph is like a Rubik cube which needs to be solved and I solved that particular Rubik cube. But it should not just be that every graph is a
puzzle to be solved, no, there is an algorithm which does the job for any planar graph and colors it in polynomial time. [Solving a puzzle like Rubik can be hard. I actually spent more time solving
that particular graph than solving the Rubik (which had taken me many weeks as a high school student, of course, not by "looking up a strategy", but by coming up with the solution strategy yourself without any
group theory knowledge then (somehow, kids are very good in solving puzzles also without theoretical background). In college, when I was a CS course assistant, we gave the class the problem to write
a program in MAGMA (then Cayley), which FINDS a solution strategy in a finitely presented group (of course giving them the frame work of Schreier-Sims in computational group theory). So, the homework
was not to write a program which solves the rubik (that is easier) but to write a program which solves any Rubik type problem). You give in a puzzle type (the group with the generators and
relations), and the computer finds the strategy, to solve that type of puzzle for any initial condition.] This is similar with the 4-color theorem. Every graph is a puzzle to be colored, but we do
not want to color only a particular graph, no, we want a strategy which colors any of them. This strategy has to be intelligible, have no random component in it and come with a complexity bound on
how fast this can be done. Now, since the stakes in the 4-color problem are high (some of the smartest mathematicians have made mistakes or even produced wrong proofs), no theoretical argument would
be taken serious which does not actually do. One has to walk the talk.
The strategy was essentially sketched also in this recent paper where some important ground work was done to do it rigorously and some results from there are needed (even so some of the proofs can be
substantially simplified). But still, also in that paper of August 2018, some important details have not yet even been seen. One of the major serious issues is related to the Jordan-Brouwer theorem
which was tackled here in the discrete. One has to be very precise when defining and working with balls and spheres in the discrete and even with the right definitions, the Jordan-Brower theorem can
fail without some subtle adjustments. It is often (actually most of the time false in three dimensions) that the union of two balls, where one is a unit ball at the boundary of an other ball is a
ball. The reason is that balls can be embedded in a Peano surface type manner. This actually leads to puzzling situations, where the coloring code starts to fail. I assumed for example (without bothering to prove it), that if one has a 3-ball B (technically defined as a punctured 3-sphere) which is a subgraph of another larger 3-ball and x is a vertex at the boundary, then the union B with B(x) (the unit ball at x) is again a ball. This is true in dimension 2 but not in dimension 3. [This is similar to the Alexander horned sphere which is a continuum situation where things
change from dimension 2 to dimension 3]. So, while B(x) intersects by definition the boundary of B in a circle (that is part of the definition of a ball), it is not true that the unit sphere S(x)
intersects the boundary S of B in a circle! When seeing this realized the first time in actual examples, one is shocked as it defies intuition and one tries for days to find the mistake in the
computer code or think one starts to get crazy. But it is not a mistake in the computer code, it is mathematics. The lemma that the union of two balls, where one is a unit ball centered at the boundary of the other, is a ball is simply wrong in dimensions larger than 2. That looks like a serious blow to the algorithm as it builds up larger and larger balls and it is essential that we have a ball,
meaning also that the boundary is a 2-sphere. Indeed, what happens is when the ball property starts to fail, the code starts to break and there is no way to fix it afterwards. So, one has at any cost
make extensions such that the extended ball remains a ball. Of course, one has to regularize the space somehow near the place where things are extended but one has to do that in a way which does not
lead to more vertices. It is a puzzle like solving a Rubik Cube fighting back, when solving the puzzle, one makes the puzzle harder in each step. The task is then to control it and make sure that one
has some progress. This needed a few approaches but there is a way to do it.
A lot of clarifications about this were worked out when writing the above mentioned Jordan-Brouwer theorem paper. There, the obstacles were overcome with suitable regularization, meaning that we clarify what it means for a sphere to be embedded in an ambient space. The hard thing in the "coloring algorithm" is to make sure that the regularization does not lead to an explosion of the size of
the graph. It does not help when conquering one new vertex introduces 10 new ones (even one new one). Additionally, it is not enough to conquer every vertex, there are balls within balls which cover
all vertices of a graph. When doing naive refinements, these adjustments might lead to more adjustments etc, preventing the algorithm to finish. And this actually happened in the problem which colors
a graph automatically. The boundary of the cleaned out region so to speak developed turbulence and winds up in crazy ways. One would think that this happens only in complicated networks, but it
happens already in very small ones and very early in the algorithm, after just a few steps. The problems can be overcome, but implementing that took a lot of time, requiring quiet programming
(and thinking) sessions lasting several hours without any distraction and incommunicado, something which is more difficult during the semester. Also, when programming on such a project, it is
emotionally tolling as one does not know whether things will eventually work out. Every time the algorithm starts to fail, there might be an issue popping up which kills the entire idea (this happens
in any mathematical research but it becomes harder if one has invested a lot of time in it. And things still could fail now). Indeed, while most of the time, one hits an obstacle it is a programming
issue, there were a few times, where it was a fundamental theoretical issue requiring to change the details of the extension strategy. In principle it is simple: what the algorithm needs to do is to
refine the topology of the graph near the extension in such a way that the union of two balls, where one is the unit ball centered at the boundary of the other is still a ball. The difficulty is to
do that so that the extension incorporates the additionally added regularizing vertices outside the already refined region. So, while confident now, I had been confident many times already before.
What is good is now that the algorithm works where it is supposed to work.
Update February 1, 2019:
And also the theoretical part is important, especially at the very end when the algorithm finishes. But this is the easier part (in comparison). One has to exclude for example a graph (a puzzle),
which can not be solved like that. [This amounts to understanding that there are no constraints like in the 15 puzzle, where a "parity integral" must be preserved and the puzzle is unsolvable if two pieces are initially interchanged.] In the Euler refinement game this had been understood already by Stephen Thomas Fisk (who was a Harvard PhD, who wrote his thesis under the advice of Gian Carlo Rota. Stephen Fisk died in
2010. Some basic ideas trace back even earlier as there is some work in 1891 by Victor Eberhard, an other remarkable German geometer who was blind (one can read about Eberhard in this excellent
outreach feature article of Joseph Malkevitch). That part pretty much is explained in this paper about Eulerian edge refinements. An important theorem there is that every 2-sphere can be edge refined
to become Eulerian. I assumed in 2014 that this is “obvious” but of course, it has to be proven. I prefer now to see it as a consequence of the modulo 3 lemma from the same paper as one also has to
understand well what happens when refining a disk, where the boundary plays an important role. It is the game of billiards or geodesic flow which essentially tell why these things are true. I just
found a picture of Stephen Fisk in the marvelous book of Burger and Starbird (“The heart of mathematics”). It shows Steve Fisk with his wife in 1975/1976 during a trip in Asia:
Steve Fisk with Wife in Asia, in 1975-1976. Image credit: book of Burger and Starbird “the heart of mathematics’ | {"url":"https://www.quantumcalculus.org/discrete-calculus-etc/","timestamp":"2024-11-02T09:24:04Z","content_type":"text/html","content_length":"78868","record_id":"<urn:uuid:158a29fc-4005-4b4d-8764-b68331022115>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00621.warc.gz"} |
8th Grade Ohio’s State Tests Math Worksheets: FREE & Printable
Is passing the 8th-grade Ohio State Tests math your main concern these days? We're here to address that concern with 8th-grade Ohio State Tests math worksheets!
Ohio’s State Tests are used to assess the level of knowledge and performance of students in grades 3-8.
8th-grade students who are worried about not passing the math section of Ohio’s State Tests just need to use our 8th-grade Ohio State Tests math worksheets to master the test concepts.
We know the needs of 8th-grade students and we have designed these 8th-grade Ohio State Tests math worksheets according to their purpose and needs.
Our 8th-grade Ohio State Tests math worksheets are free, comprehensive, printable, and have subject categories so that 8th-grade students can easily access the exercises related to the topic they need.
IMPORTANT: COPYRIGHT TERMS: These worksheets are for personal use. Worksheets may not be uploaded to the internet, including classroom/personal websites or network drives. You can download the
worksheets and print as many as you need. You can distribute the printed copies to your students, teachers, tutors, and friends.
You Do NOT have permission to send these worksheets to anyone in any way (via email, text messages, or other ways). They MUST download the worksheets themselves. You can send the address of this page
to your students, tutors, friends, etc.
Related Topics
The Absolute Best Book to Ace the 8th Grade Ohio’s State Tests Math Test
8th Grade Ohio’s State Tests Mathematics Concepts
Whole Numbers
Fractions and Decimals
Real Numbers and Integers
Proportions, Ratios, and Percent
Algebraic Expressions
Equations and Inequalities
A PERFECT Math Workbook for Ohio’s State Tests Grade 8 Test!
Linear Functions
Exponents and Radicals
Geometry and Solid Figures
Statistics and Probability
8th Grade Ohio’s State Tests Math Exercises
Fractions and Decimals
Real Numbers and Integers
Proportions and Ratios
Algebraic Expressions
Equations and Inequalities
Linear Functions
Systems of Equations
Exponents and Radicals
Solid Figures
Function Operations
Looking for the best resource to help your student succeed on the Ohio State Tests Math test?
The Best Books to Ace the Ohio’s State Tests Math Test
Related to This Article
| {"url":"https://www.effortlessmath.com/blog/8th-grade-ohios-state-tests-math-worksheets-free-printable/","timestamp":"2024-11-06T14:32:06Z","content_type":"text/html","content_length":"125803","record_id":"<urn:uuid:189c03fc-dbad-4cd4-86a6-c098863649d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00425.warc.gz"}
Graph algorithms visualization
2 Sept 2019
A weighted, undirected, planar and connected graph
Graph algorithms
My page on graph search shows a few simple graph algorithms, mostly about search in graphs. On this page we present some further algorithms for graphs. Unlike the first page, here we are dealing with
weighted graphs. In a weighted graph, the edges between nodes have different weights, shown above as the thickness of the lines between the circles. The weights can have different meanings, depending
on the algorithm in question.
We will start off with minimum spanning trees, where the weight resembles the cost of adding an edge to a tree. Later on we will present algorithms for finding shortest paths in graphs, where the
weight represents a length between two nodes. For simplicity the weights of the edges are chosen to be between 1 and 4. These can be directly translated into thicknesses of the line representing the
The same graph with a minimum spanning tree
Minimum spanning trees
A spanning tree is a subset of the edges of a connected graph that connects all nodes, but has no cycles. A connected graph is a graph without disconnected parts that can't be reached from other
parts of the graph. All graphs used on this page are connected. A minimum spanning tree (MST) is such a spanning tree that is minimal with respect to the edge weights, as in the total sum of edge
weights. In the panel above, the green edges are part of a minimum spanning tree that was found by Kruskal’s algorithm. If you have a close look, you can see that all nodes can be reached by the MST.
The next two panels show algorithms for finding an MST.
Kruskal’s algorithm
Kruskal’s algorithm works as follows: start with every node in a separate singleton tree. All trees together make up a forest. The algorithm then considers each edge, sorted by non-decreasing order of
weight, and only adds an edge to the MST if it connects two previously unconnected trees in the forest. So every step merges two trees into one, and hence reduces the number of trees by one. Once the
entire forest consists only of one tree, an MST has been found. This strategy is of the so-called ‘greedy’ type, but it finds a globally optimal solution for this problem.
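As a rough sketch of the idea (this is not the code driving the panel above; the (weight, u, v) edge tuples and the simplified union-find are assumptions of the sketch), Kruskal's algorithm fits in a few lines of Python:

```python
def kruskal(num_nodes, edges):
    """edges: iterable of (weight, u, v) tuples for an undirected graph."""
    parent = list(range(num_nodes))      # each node starts as its own tree

    def find(x):                         # root of x's tree, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):        # non-decreasing order of weight
        ru, rv = find(u), find(v)
        if ru != rv:                     # edge connects two different trees
            parent[ru] = rv              # merge them, shrinking the forest
            mst.append((u, v, w))
    return mst
```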
Prim’s algorithm
Prim’s algorithm uses a different greedy strategy than Kruskal’s algorithm. It starts at a random node, and grows a single tree node by node until it has been turned into an MST. The key point is how
the next edge to expand on is chosen: from the list of all edges bridging the current spanning tree and the rest of the graph, the edge with the least weight is chosen. Ties can be broken randomly.
Note that Prim’s algorithm and Kruskal’s algorithm find the same minimum total weight (as reported after the run below the graphic), but don’t necessarily use the exact same edges in their MSTs (but
often do).
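A comparable Python sketch of Prim's algorithm, again with an assumed input format (`adj` maps each node to a list of (weight, neighbor) pairs):

```python
import heapq

def prim(adj, start=0):
    in_tree = {start}
    heap = [(w, start, v) for w, v in adj[start]]   # edges leaving the tree
    heapq.heapify(heap)
    mst = []
    while heap and len(in_tree) < len(adj):
        w, u, v = heapq.heappop(heap)    # least-weight bridging edge
        if v in in_tree:
            continue                     # stale entry: both ends already in
        in_tree.add(v)
        mst.append((u, v, w))
        for w2, v2 in adj[v]:
            if v2 not in in_tree:
                heapq.heappush(heap, (w2, v, v2))
    return mst
```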
Dijkstra’s algorithm
Dijkstra’s algorithm computes what is called the ‘single-source shortest paths’ problem: For a given source node we want to know how far the total shortest distance to any other node in the graph is.
Hereby the edge weights we had previously, are now considered lengths of distance between the nodes. Thicker lines indicate longer distances between the nodes. The geometric distance between the
nodes as they are displayed is not relevant here, only the distance defined by the edges. The computed shortest distances are displayed as the numbers inside the explored nodes, where the green
filled node is the starting point and has zero distance to itself.
The algorithm can be loosely characterised as a weighted breadth-first search. To choose which edge to traverse next, consider the edges between the explored and unexplored nodes, sorted by the
distance to the new node in nondescending order. If you have a look at the animation above, note how the distance of the newly added nodes never decreases from the last one, often just increasing by
one. This is a very similar strategy to Prim’s algorithm, and indeed the traversed edges in green here make up a spanning tree, just not an MST.
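For completeness, a minimal Python sketch of Dijkstra's algorithm in the same assumed adjacency format; it relies on all edge lengths being non-negative:

```python
import heapq

def dijkstra(adj, source):
    dist = {source: 0}                   # shortest known distance per node
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry, skip it
        for w, v in adj[u]:
            nd = d + w                   # candidate distance via u
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist                          # unreachable nodes are simply absent
```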
A weighted, directed graph with negative edge lengths
Negative edge lengths
One hidden assumption of Dijkstra’s algorithm is that all edge lengths have to be positive or zero for the algorithm to work. So if we are modelling a road network for example, this makes sense as
there are no negative drive times there. But we could want to have a weighted graph that includes negative edge lengths for other applications. In this case we can no longer rely on the simple
Dijkstra’s algorithm to find shortest paths. In the example above, negative-length edges are indicated in red, the thickness of the lines now indicates magnitude of length, so a thick red line is
quite negative in value. We have now chosen a directed graph, in which the edges have a direction, because paths through an undirected graph with negative edges aren't well defined.
The Bellman-Ford algorithm
The Bellman-Ford algorithm can find single-source shortest paths in a graph with negative edge lengths. But this only works if there are no negative cycles, which are cycles where the path length
adds up to a negative value. In their presence, any path that moves around the cycle can become arbitrarily negative, just by cycling around the negative cycle. If this is the case, the algorithm has
to terminate and indicate the presence of a negative cycle. On the panel above we always work with a graph that has no negative cycles. To see the opposite case, have a look at the next panel.
The starting condition for the algorithm is that all nodes except for the source node (green) have a distance of infinity (∞). It then loops over all nodes in any order and considers the currently
shortest path to the node and updates the distance (denoted as a small green number in the node circle). At the end of one iteration through the nodes (from left to right) the numbers are updated from
the new values that were just computed. In the beginning not much changes, as most new distances are still infinity. After a while the shorter, non-infinite distances start to spread from the source
node. If the distances don't change despite having processed all nodes, the algorithm has become stable and returns the resulting distances. Some nodes might remain at infinity if there is no path
from the source node to them.
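The same procedure in a compact Python sketch; this is the classic edge-relaxation form (the panel above iterates node by node, but the idea is identical), with the early-stability check and a final pass for negative-cycle detection:

```python
def bellman_ford(num_nodes, edges, source):
    """edges: list of directed (u, v, w) triples; w may be negative."""
    INF = float("inf")
    dist = [INF] * num_nodes
    dist[source] = 0
    for _ in range(num_nodes - 1):   # n-1 rounds suffice without negative cycles
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:              # stable early: distances are final
            return dist
    for u, v, w in edges:            # any further improvement is a proof
        if dist[u] + w < dist[v]:    # of a negative cycle
            raise ValueError("negative cycle detected")
    return dist
```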
Negative cycles
There are two possible indicators for a negative cycle: the first is that after repeating the loop over all nodes as often as there are nodes in the graph, and the algorithm does not become stable,
i.e. distances are still changing, then there must be at least one negative cycle in the graph. Another, sufficient but not necessary condition is that the source node takes on negative values. This
also means there is a negative cycle in the graph. These cases are demonstrated in the panel above, where we always show a graph with negative cycles present. We have sped up the algorithm display
a bit, because detecting negative cycles can take a while.
If you enjoyed this page, there are more algorithms and data structures to be found on the main page. In particular there is more about Graph search. | {"url":"https://www.chrislaux.com/graphalgo","timestamp":"2024-11-13T06:31:34Z","content_type":"text/html","content_length":"16107","record_id":"<urn:uuid:87be62ab-127f-4abc-bd3b-551b3f1a1789>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00833.warc.gz"} |
How do you solve absolute value inequalities |x| ≥ 2? | HIX Tutor
How do you solve absolute value inequalities $|x| \ge 2$?
Answer 1
$x \le -2 \text{ or } x \ge 2$

Inequalities of the type $|x| \ge a$ always have solutions of the form $x \le -a$ or $x \ge a$. Here $a = 2$, thus $x \le -2$ or $x \ge 2$, which is $(-\infty, -2] \cup [2, \infty)$ in interval notation.
Answer 2
To solve the absolute value inequality |x| ≥ 2, you need to consider two cases: when x ≥ 0 and when x < 0.
For x ≥ 0, the inequality becomes x ≥ 2.
For x < 0, the inequality becomes -x ≥ 2, which simplifies to x ≤ -2 when multiplied by -1.
So, the solution is x ≥ 2 or x ≤ -2.
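As a quick sanity check (my own addition, not part of either answer), SymPy reproduces the same solution set:

```python
from sympy import Abs, solve_univariate_inequality
from sympy.abc import x

sol = solve_univariate_inequality(Abs(x) >= 2, x, relational=False)
print(sol)  # Union(Interval(-oo, -2), Interval(2, oo))
```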
| {"url":"https://tutor.hix.ai/question/how-do-you-solve-absolute-value-inequalities-absx-2-8f9af93fe8","timestamp":"2024-11-04T11:13:02Z","content_type":"text/html","content_length":"568001","record_id":"<urn:uuid:e3f36f2c-c7c2-4537-81d5-12c36333553a>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00791.warc.gz"}
How to describe inhomogeneous quantum systems in one dimension with (conformal) field theory: lessons from non-interacting Fermi gases
If you have a question about this talk, please contact Dr G Moller.
In spite of the tremendous successes of conformal field theory (CFT) in describing large-scale, universal, effects in one-dimensional (1D) systems at quantum critical points, their applicability has
been limited to situations in which the bulk is uniform: CFT describes low-energy excitations around some energy scale, assumed to be constant throughout the system. However, in many experimental
situations, quantum systems are strongly inhomogeneous: for instance, quantum gases in trapping potentials always have a non-uniform density; this is also true in many out-of-equilibrium situations,
for instance when a gas is released from its trap. Here, we will argue that the powerful CFT approach can be adapted to deal with such 1D situations, relying on the example of lattice and continuous
Fermi gases. The system’s inhomogeneity enters the field theory action through parameters that vary with position; in particular, the metric itself varies, resulting in a CFT in curved space. As an
illustration, new exact formulas for entanglement entropies, which have recently become experimentally measurable and are usually very difficult to calculate, will be derived.
This talk is part of the Theory of Condensed Matter series.
| {"url":"http://talks.cam.ac.uk/talk/index/65515","timestamp":"2024-11-02T21:27:02Z","content_type":"application/xhtml+xml","content_length":"16111","record_id":"<urn:uuid:ea5fdbcb-00ec-41be-8671-0d1d87e98b11>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00614.warc.gz"}
ECDSA vs RSA: Everything You Need to Know - Cybers Guards
ECDSA vs RSA: Everything You Need to Know
If you're into SSL certificates or cryptocurrencies, you'll eventually come across the much-discussed "ECDSA vs RSA" subject (or RSA vs ECC). What do these terms mean, and why do they matter?
ECDSA and RSA are two of the world's most widely adopted asymmetric algorithms. However, when it comes to how they work and how their keys are created, the two algorithms are drastically different.
In this article we unpack both encryption algorithms to help you understand what they are, how they work, and what their particular benefits (and disadvantages) are. Let's start now!
RSA Algorithm: What It Is and How It Works
There is no match for the RSA (Rivest-Shamir-Adleman) asymmetric encryption algorithm when it comes to popularity. The algorithm is used widely for SSL/TLS certificates, email encryption, and a number of other applications.
Since its invention in 1977 by Ron Rivest, Adi Shamir, and Leonard Adleman, RSA has been the gold standard among asymmetric encryption algorithms. RSA uses prime factorization as its one-way function: two titanic-sized random prime numbers are taken and multiplied to generate another gigantic number.
Multiplying these two numbers is simple, but computing the original prime factors back out of the product is practically impossible, at least for modern computers; this problem is called "prime factorization". Figuring out the two prime numbers behind an RSA modulus is an awfully difficult task: factoring one RSA challenge number took a group of researchers more than 1,500 years of computational time, distributed across hundreds of computers.
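To make the idea concrete, here is a toy RSA round trip in Python using the textbook primes 61 and 53. It illustrates the principle only; real keys use primes hundreds of digits long, and numbers this small are trivially factorable:

```python
p, q = 61, 53            # the two secret primes
n = p * q                # public modulus (3233): easy to compute...
phi = (p - 1) * (q - 1)  # ...but recovering p and q from n is the hard part
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e (2753)

msg = 42
cipher = pow(msg, e, n)  # encrypt with the public key (n, e)
plain = pow(cipher, d, n)  # decrypt with the private key (n, d)
assert plain == msg
```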
ECDSA vs RSA: What Makes RSA a Good Choice
Considering that this one algorithm has been industry experts' leading choice for nearly three decades, you have to respect its durability. RSA was first standardized in 1994 and remains the most widely used algorithm to date. This durability matters because it shows RSA has stood the test of time: compared to newer algorithms such as ECDSA, it is an exceptionally well-studied and audited algorithm.
The flexibility it provides is another big element that sets RSA apart from other algorithms. It is based on a simple mathematical approach and is easy to incorporate into public key infrastructure (PKI). This has been one of the core reasons why RSA remains the most common encryption algorithm.
ECDSA Algorithm: What It Is and How It Works
ECDSA (the elliptic curve digital signature algorithm), often discussed under the broader label of ECC (elliptic curve cryptography), is the successor of the digital signature algorithm (DSA). ECDSA was born when two mathematicians, Neal Koblitz and Victor S. Miller, suggested the use of elliptic curves in cryptography. It then took almost two decades for the ECDSA algorithm to become standardized.
ECDSA is an asymmetric cryptography algorithm based on elliptic curves and an underlying "trapdoor function". An elliptic curve is the set of points (x, y) that satisfy a mathematical equation of the form y² = x³ + ax + b.
ECDSA vs RSA: What makes ECC a better choice
Like all asymmetric algorithms, ECDSA works in a way that is easy to compute in one direction but hard to reverse. In the case of ECDSA, a point on the curve is multiplied by a number (a scalar), which produces another point on the curve. Even if you know both the original point and the resulting point, it is infeasible to work out the number that was used.
Thanks to this structure, ECDSA is considered more resistant to existing cracking methods than RSA. ECDSA delivers the same level of protection as RSA while using much shorter key lengths; for the same key length, ECDSA takes considerably longer to break by brute-force attacks.
Performance and scalability are another great advantage that ECDSA offers over RSA. Because ECC ensures strong protection with shorter key lengths, it places a lower load on the network and on processing capacity. This makes it ideal for devices with limited storage and processing power. In SSL/TLS certificates, the ECC algorithm also shortens SSL/TLS handshakes, which can help your website load faster.
The catch, though, is that not all CAs support ECC in their control panels and hardware security modules (although the number of CAs that do is growing).
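For a practical feel, the snippet below sketches key generation and signing with Python's `cryptography` package. The key sizes are chosen to line up with the ~128-bit security row of the table that follows; the curve and padding choices are illustrative, not prescriptive:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

message = b"hello"

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
rsa_sig = rsa_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

ec_key = ec.generate_private_key(ec.SECP256R1())   # 256-bit curve
ec_sig = ec_key.sign(message, ec.ECDSA(hashes.SHA256()))

print(len(rsa_sig), len(ec_sig))   # the ECDSA signature is far shorter
```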
ECDSA vs RSA: The Difference of Key Lengths
As we discussed, ECC requires much shorter key lengths to give the same level of security provided by long keys of RSA. Here’s what the comparison of ECDSA vs RSA looks like:
| Security (bits) | RSA key length required (bits) | ECC key length required (bits) |
|---|---|---|
| 80 | 1024 | 160-223 |
| 112 | 2048 | 224-255 |
| 128 | 3072 | 256-383 |
| 192 | 7680 | 384-511 |
| 256 | 15360 | 512+ |
ECC vs RSA: The Quantum Computing Threat
Irreversibility is the key trait that makes an encryption algorithm secure. To break such an algorithm, you must therefore resort to brute force: trial and error, in plain terms. Given modern encryption key lengths, though, the number of combinations you would have to try is so large that we can't even properly conceptualize it.
However, much of this may change in the future with the eventual (and likely) advent of quantum computers. The National Institute of Standards and Technology (NIST) predicts that current public key cryptography will collapse once quantum computing becomes practical. Why? Because quantum computers operate on qubits rather than bits, giving them vastly more computational power than today's supercomputers. This means they can try many combinations at any given moment, so their computing time is dramatically shorter. Quantum computers are expected to make today's encryption schemes, such as RSA and ECDSA, redundant.
According to various reports, RSA and ECDSA are also potentially vulnerable to Shor's algorithm, which, when run on a quantum computer, is expected to break both schemes. Research by Microsoft has suggested that ECDSA would be easier to break than the RSA cryptosystem in this setting. However, since practical quantum computers are still in their infancy, there's no reason to worry about this right now.
RSA vs. ECDSA: Summary
By now, I hope I've been able to clear up any confusion you may have had regarding the topic of ECDSA vs RSA. Here's a summary of the differences, laid out so it's easy to understand:
| RSA | ECDSA |
|---|---|
| One of the earliest methods of public-key cryptography, standardized in 1995. | A comparatively new public-key cryptography method, standardized in 2005. |
| Today, it's the most widely used asymmetric encryption algorithm. | A less widely adopted encryption algorithm than RSA. |
| Works on the principle of the prime factorization method. | Works on the mathematics of elliptic curves. |
| A simple asymmetric encryption algorithm, thanks to the prime factorization method. | The complexity of elliptic curves makes ECDSA a more complex method than RSA. |
| Simpler to implement than ECDSA. | More complicated to implement than RSA. |
| Requires longer keys to provide a safe level of encryption protection. | Requires much shorter keys than RSA to provide the same level of security. |
| The longer keys slow down performance. | Thanks to its shorter key lengths, offers much better performance than RSA. |
Final Word: ECDSA vs RSA
Regardless of their particular advantages and drawbacks, RSA and ECDSA remain two of the most common asymmetric encryption algorithms. Both offer a level of security that today's attackers can't even dream of breaking. In certain ways, however, the two are very different. To rehash what we have just covered, these are the grounds on which they differ:
• Performance
• Required key length for secure encryption
• Working principle
• Scaleability
• Complexity
The key to these algorithms' performance and strength lies in their proper implementation: no encryption algorithm can provide optimal security if it is poorly implemented or fails to meet industry standards.
As far as current security requirements are concerned, there is not much of a contest in the "ECDSA vs RSA" debate: you can pick either one, and both are considered secure by today's standards. I would stress, though, that ECC is not as widely supported as RSA. That said, if you do have the choice, ECC is the stronger option.
Melina Richardson | {"url":"https://cybersguards.com/ecdsa-vs-rsa-everything-you-need-to-know/","timestamp":"2024-11-09T18:59:15Z","content_type":"text/html","content_length":"85034","record_id":"<urn:uuid:fc50503b-6835-45f9-b173-5ce5be22970a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00377.warc.gz"} |
Power Release
When Energy is specified, the Power Release method is used to calculate the Turbine Release required. If the Energy request cannot be met, the user is notified. There is one release method per Power method.
The Power Release category is available when any of the Power methods is selected except None, Peak Power, Peak and Base, or Peak Power Equation.
This is the default method in the Power Release category. No calculations are performed in this method.
If this method is selected for the Power Release category, a RiverWare error will be posted and the simulation run will be terminated. A viable power release method must be selected when the Power
Release category is visible.
There are no slots specific to this method.
Plant Power Coefficient Release
The Plant Power Coefficient Release method calculates Turbine Release using the entire plant characteristics when Energy is specified. The Plant Power Coefficient Release method is only available if
the Plant Power Coefficient method is selected in the Power category. Energy must be input for this method to execute. If Energy is flagged as either MAX CAPACITY or BEST EFFICIENCY, it is considered
input. If Energy is flagged as MAX CAPACITY, Turbine Release is set to meet the Energy request at the maximum flow rate. If Energy is flagged as BEST EFFICIENCY, Turbine Release is set to meet the
Energy request at the most efficient flow rate. If Energy is neither flagged as MAX CAPACITY nor flagged as BEST EFFICIENCY, the Turbine Release is calculated from the Energy request and a Power
Coefficient. The Power Coefficient may be input by the user or calculated by RiverWare from interpolation of the Best and Max Power Coefficient tables.
If Energy is flagged UNIT VALUES (U), an error is issued. This flag is only available with the Unit Power Table Release method; see Unit Power Table Release.
There are no slots specific to this method.
Method Details
The first step in the Plant Power Coefficient Release algorithm is to set the Power Plant Cap Fraction to 1.0 if it is not already known.
If the Energy slot is flagged as MAX CAPACITY, the following steps are taken.
1. Qtemp, a local variable, is calculated from interpolation of the Max Turbine Q table using the Operating Head.
2. Turbine Release is calculated with the following equation:
Turbine Release = Qtemp × Power Plant Cap Fraction
3. PCmax, a local variable, is determined from interpolation of the Max Power Coefficient table using the Operating Head. The Power Coefficient is set as PCmax if it is not input.
4. Power and Energy are then calculated using the following equations:
Power = Power Coefficient × Turbine Release
Energy = Power × Timestep
5. If the Plant Power Limit is exceeded, Power is reduced to the Plant Power Limit and the Energy is recalculated. A new Power Coefficient and Turbine Release are then calculated based on the Plant
Power Limit.
If Energy is flagged as BEST EFFICIENCY, the following steps are taken.
1. Qtemp is calculated from interpolation of the Best Turbine Q table using the Operating Head.
2. Turbine Release is computed using the following equation:
Turbine Release = Qtemp × Power Plant Cap Fraction
3. PCbest, a local variable, is determined from interpolation of the Best Power Coefficient table using the Operating Head. The Power Coefficient is set as PC best if it is not input.
4. Power and Energy are then calculated using the following equations:
Power = Power Coefficient × Turbine Release
Energy = Power × Timestep
5. If the Plant Power Limit is exceeded, Power is reduced to the Plant Power Limit, the Energy is recalculated, and the Turbine Release is recalculated as Plant Power Limit / PCbest.
If Energy is not flagged as either MAX CAPACITY or BEST EFFICIENCY and the Power Coefficient is input, the following steps are taken.
1. If the Power Coefficient is less than 0.00000001, a RiverWare error is posted and the simulation run is terminated.
2. Power is calculated using the following equation:
Power = Energy / Timestep
where the Timestep is in hours.
3. Qout, a local variable, is calculated with the following equation:
Qout = Power / Power Coefficient
4. Qtemp, a local variable, is determined by the interpolation of the Max Turbine Q table using the Operating Head.
5. Qmax, a local variable, is computed using the following equation:
Qmax = Qtemp × Power Plant Cap Fraction
6. If Qout is greater than Qmax, the largest discharge value in the Max Turbine Q table is found. If this value is greater than or equal to Qout, Turbine Release is set equal to Qmax. If the value is
less than Qout, a RiverWare error is posted and the simulation run is terminated.
7. If Qout is less than or equal to Qmax, Turbine Release is set equal to Qout.
If Energy is not flagged as either MAX CAPACITY or BEST EFFICIENCY and the Power Coefficient is not given, the following steps are taken.
1. Power is calculated using the following equation:
Power = Energy / Timestep
where the Timestep is in hours.
2. The best and max power coefficients are interpolated using the Operating Head and the Best Power Coefficient and the Max Power Coefficient tables, respectively.
3. QbestTemp and QmaxTemp (local variables) are then determined using the Operating Head to interpolate values from the Best Turbine Q and Max Turbine Q tables, respectively.
4. Qbest, a local variable, is computed using the following equation:
Qbest = QbestTemp × Power Plant Cap Fraction
5. Qmax, a local variable, is calculated using the following equation:
Qmax = QmaxTemp × Power Plant Cap Fraction
6. If Power divided by the best power coefficient is less than or equal to Qbest, Turbine Release is set equal to Power divided by the best power coefficient.
7. If Power divided by the max power coefficient is greater than Qmax, Turbine Release is set equal to the max turbine flow.
8. If neither of the conditions in steps 6 and 7 holds, an interpolated value (pcoeffINTERP) is found between the best and max power coefficients based on how close Power is to both the product of Qbest and the best power coefficient, and the product of Qmax and the max power coefficient. The following pair of equations is used to quantitatively determine the pcoeffINTERP value:
fraction = (Power − Qbest × pcoeffBest) / (Qmax × pcoeffMax − Qbest × pcoeffBest)
pcoeffINTERP = pcoeffBest + fraction × (pcoeffMax − pcoeffBest)
9. The Turbine Release is then calculated with the following equation:
Turbine Release = Power / pcoeffINTERP
Plant Efficiency Curve Release
The Plant Efficiency Curve Release method calculates Turbine Release using the entire plant characteristics when Energy is specified. The Plant Efficiency Curve Release method is only available if
the Plant Efficiency Curve method is selected in the Power category. Energy must be input or set by a rule for this method to execute. If Energy is flagged as either MAX CAPACITY or BEST EFFICIENCY,
it is considered input. If Energy is flagged as MAX CAPACITY, Turbine Release is set to meet the Energy request at the maximum flow rate. If Energy is flagged as BEST EFFICIENCY, Turbine Release is
set to meet the Energy request at the most efficient flow rate. If Energy is neither flagged as MAX CAPACITY nor flagged as BEST EFFICIENCY, the Turbine Release is calculated from the Energy request.
If Energy is flagged UNIT VALUES (U), an error is issued. This flag is only available with the Unit Power Table Release method; see Unit Power Table Release.
There are no slots specific to this method.
Method Details
The first step in the Plant Efficiency Curve Release algorithm is to set the Power Plant Cap Fraction to 1.0 if it is not already known.
If the Energy slot is flagged as MAX CAPACITY, the following steps are taken.
1. Qtemp, a local variable, is calculated as the maximum release using the Operating Head and the Plant Power Table.
2. Turbine Release is calculated with the following equation:
Turbine Release = Qtemp × Power Plant Cap Fraction
3. Power is determined directly from the Plant Power Curve.
4. Energy is calculated as follows:
Energy = Power × Timestep
5. The Power Coefficient is calculated as follows:
Power Coefficient = Power / Turbine Release
6. If the Plant Power Limit is exceeded, Power is reduced to the Plant Power Limit and the Energy is recalculated. A new Power Coefficient and Turbine Release are then calculated based on the Plant
Power Limit.
If Energy is flagged as BEST EFFICIENCY, the following steps are taken.
1. Qtemp is computed as the most efficient release given the Operating Head and the Plant Power Table.
2. Turbine Release is computed using the following equation:
Turbine Release = Qtemp × Power Plant Cap Fraction
3. Power is determined directly from the Plant Power Curve.
4. Energy is calculated as follows:
Energy = Power × Timestep
5. The Power Coefficient is calculated as follows:
Power Coefficient = Power / Turbine Release
6. If the Plant Power Limit is exceeded, Power is reduced to the Plant Power Limit and the Energy is recalculated. A new Power Coefficient and Turbine Release are then calculated based on the Plant
Power Limit.
If Energy is not flagged as either MAX CAPACITY or BEST EFFICIENCY and the Power Coefficient is input, the following steps are taken.
1. If the Power Coefficient is less than 0.00000001, a RiverWare error is posted and the simulation run is terminated.
2. Power is calculated using the following equation:
Power = Energy / Timestep
3. Turbine Release is calculated as follows:
Turbine Release = Power / Power Coefficient
If Energy is not flagged as either MAX CAPACITY or BEST EFFICIENCY and the Power Coefficient is not input, the following steps are taken.
1. Power is calculated using the following equation:
Power = Energy / Timestep
2. The max Turbine Release and Power production are found for the current operating conditions.
3. If input Power is greater than the max Power for current operating conditions, and the Reduce Input Energy method is selected in the Input Energy Adjustment category, Turbine Release is set equal to the max Turbine Release from step 2, and Power is set equal to the max Power from step 2. The Power Coefficient is then computed as Power divided by Turbine Release.
4. Otherwise, Turbine Release is found using the Plant Power Table and the Power Coefficient is set as Power divided by Turbine Release.
5. If the Plant Power Limit is exceeded, an error is posted.
Note: If the Power Plant Cap Fraction is input by the user, it is necessary for the Plant Power Table to basically be scaled back to account for the operating points when the turbines are operating
at less than 100%. To do this, when Turbine Release is known and Power is to be found using the Plant Power Curve, Turbine Release is divided by the Power Plant Cap Fraction. This point is then found
in the Plant Power Curve for the current operating head and the Power is found using 3‑D interpolation. Finally the Power is multiplied by the Power Plant Cap Fraction to get the actual Power
produced for the current timestep.
Note: If Power is known, and Turbine release is to be found in the table. Power is multiplied by the Power Plant Cap Fraction and then this point is found in the Plant Power Curve to solve for
Turbine Release. Turbine Release is then divided by the Power Plant Cap Fraction to get the actual Turbine Release for the current timestep.
Plant Power Equation Release
The Plant Power Equation Release method calculates Turbine Release using the water power equation when Energy is specified. The Plant Power Equation Release method is only available if the Plant
Power Equation method is selected in the Power category. Energy must be input for this method to execute. If Energy is flagged as either MAX CAPACITY or BEST EFFICIENCY, it is considered input. If
Energy is flagged as MAX CAPACITY, Turbine Release is set to meet the Energy request at the maximum possible turbine release. If Energy is flagged as BEST EFFICIENCY, the run terminates because BEST
EFFICIENCY is not supported in this method.
If Energy is flagged UNIT VALUES (U), an error is issued. This flag is only available with the Unit Power Table Release method; see Unit Power Table Release.
There are no slots specific to this method.
Method Details
This method first checks to see if Turbine Release is user input or set by a rule. If it is, the run terminates because both Energy and Turbine Release cannot be input.
If the Energy slot is flagged as MAX CAPACITY, the following steps are taken.
1. Set Turbine Release to be the maximum turbine release calculated by interpolating the Net Head on the Net Head Vs Max Turbine Release table.
2. Once efficiency, Plant Cap Fraction, Net Head, and Turbine Release are all known, Power is solved for using the Power Equation. The unit compatibility factor comes from balancing units and is 102.01697767 in internal RiverWare units.
Power = (Efficiency × Power Plant Cap Fraction × Turbine Release × Net Head) / 102.01697767
If the computed Power is greater than the Plant Power Limit, the Power is reset to the Plant Power Limit. In this case, Turbine Release is recomputed using the previous equation rearranged.
3. Lastly, Energy is computed as Power multiplied by the length of the timestep.
If the Energy slot is not flagged MAX CAPACITY, the following steps are taken.
When the Energy value is known (rather than flagged Max Capacity), the Plant Power Equation Release method uses Energy to solve for Power and Turbine Release. Power is simply Energy divided by the length of the timestep:
Power = Energy / Timestep
Using Power, the Net Head and Turbine Release are solved for iteratively as follows.
1. If the computed Power is greater than the Plant Power Limit, the specified energy is too large. The selected method in the Input Energy adjustment category is executed. The Reduce Input Energy
method reduces the energy to the maximum possible. If the None method is selected, an error will be issued that the specified energy leads to a power that is above the Plant Power Limit.
2. Turbine Release is initially assumed zero
3. Tailwater Elevation is determined via the selected Tailwater method (the flow variable is set to Outflow. If Turbine Release is linked it can be assumed that the Turbine Release and Spill are
separated and the flow variable should be set to Turbine Release.)
4. Operating Head is calculated as Pool Elevation minus Tailwater Elevation
5. Net Head is calculated as Operating Head minus Head Loss
6. Turbine Release is calculated again using the Water Power equation:
Turbine Release = (Power × 102.01697767) / (Efficiency × Power Plant Cap Fraction × Net Head)
7. The calculated Turbine Release is compared to the initial Turbine Release and the process iterates until the values converge.
Note: Convergence Percentage is a general slot on power reservoirs representing the convergence in all iterative solutions; the slot defaults to 0.0001 if not user input.
Once converged, the Net Head is looked up on the Net Head Vs Max Turbine Release table to get the max release. If the Turbine Release is larger than the max release times the Power Plant Cap
Fraction, the selected method in the Input Energy adjustment category is executed. The Reduce Input Energy method reduces the energy to the maximum possible. Otherwise, there is too much flow and an
error will be issued that the energy request cannot be met.
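As an illustration only (this is not RiverWare code: the explicit form of the water power equation is reconstructed from the unit compatibility factor quoted above, and the tailwater function stands in for the selected Tailwater method), the iteration in steps 2 through 7 can be sketched in Python, with tol playing the role of the Convergence Percentage slot:

```python
UNIT_FACTOR = 102.01697767  # unit compatibility factor quoted in the text

def solve_turbine_release(power, pool_elev, head_loss, efficiency,
                          cap_fraction, tailwater_elev, tol=1e-4):
    """Fixed-point iteration for Turbine Release given a known Power."""
    q = 0.0                                    # step 2: assume zero release
    while True:
        tw = tailwater_elev(q)                 # step 3: tailwater from flow
        net_head = pool_elev - tw - head_loss  # steps 4 and 5
        q_new = power * UNIT_FACTOR / (efficiency * cap_fraction * net_head)
        if q > 0.0 and abs(q_new - q) / q <= tol:
            return q_new, net_head             # step 7: values converged
        q = q_new                              # step 6: iterate with new flow
```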
Peak Power Equation with Off Peak Spill Release
The Peak Power Equation with Off Peak Spill Release method calculates the necessary Turbine Release, Peak Release and Peak Time using the water power equation when Energy is specified. The method is
only available if the Peak Power Equation with Off Peak Spill method is selected in the Power category. Energy must be input or set by a rule for this method to execute.
There are no slots specific to this method.
Method Details
This method first checks to see if Turbine Release is user input or set by a rule. If it is, the run terminates because both Energy and Turbine Release cannot be input. When the Energy value is
known, the Peak Power Equation with Off Peak Spill Release method uses Energy to solve for Turbine Release, Peak Release and Peak Time as follows.
1. Peak Release is initially set to zero.
2. Given the net head from the previous timestep (Operating Head at previous timestep minus Head Loss), the efficiency is interpolated from the Net Head vs Efficiency table. The previous Operating
Head is used as an approximation so as not to introduce an additional variable in the iteration. As a result, the Tailwater Elevation at the initial timestep must be input. The net head for the
initial timestep is the initial Pool Elevation minus the initial Tailwater Elevation minus Head Loss.
3. The current Tailwater Elevation is determined using the maximum of Peak Release or the current Outflow as the value in the selected Tailwater method.
4. The Operating Head is calculated as the average Pool Elevation minus the Tailwater Elevation.
5. The net head is calculated as the Operating Head minus the Head Loss.
6. Given the net head, the Generator Capacity is interpolated from the Net Head vs. Generator Capacity table. If the capacity is above the Plant Power Limit, the Generator Capacity is reset to the
Plant Power Limit.
7. Peak Release is calculated according to the power equation. The unit compatibility factor comes from balancing units and the specific weight of water; it is 102.01697767 in internal RiverWare units.
Peak Release = (Generator Capacity × 102.01697767) / (Efficiency × Net Head)
8. The new Peak Release value is compared with the previous value and the iteration (steps 3-7) continues until the value converges.
Note: Convergence Percentage is a general slot on power reservoirs representing the convergence in all iterative solutions; the slot defaults to 0.0001 if not input.
Power is set equal to the Generator Capacity and Peak Time is as follows:
Peak Time = Energy / Power
Turbine Release is the Peak Release averaged over the timestep:
Turbine Release = (Peak Release × Peak Time) / Timestep
Unit Generator Power Release
The Unit Generator Power Release method is only available when Unit Generator Power is selected in the Power category. It is used to calculate the Turbine Release required to produce a given amount
of Power. Energy must be input by the user for this method to execute.
There are no slots specific to this method.
Method Details
If Energy is flagged UNIT VALUES (U), an error is issued. This flag is only available with the Unit Power Table Release method; see Unit Power Table Release.
The Unit Generator Power Release method begins by computing the availability and power limits of each unit type. Availability and power limit values are computed as the sum of the values from the
availability and power limit columns, respectively, in the Generators Available and Limit slot. A value for availability and power limit is computed for each unit type.
The efficiency of each unit type is calculated by the following equation:
efficiency = PowerTemp / flowTemp
PowerTemp and flowTemp, both local variables, are computed from the Best Generator Power and Best Generator Flow tables, respectively, using the current Operating Head. Each unit type is then sorted
in descending order based on the computed efficiency.
In order to compute the Turbine Release associated with the known Power, the method begins to add entire unit types (operating according to the best power and flow tables and beginning with the most
efficient type) until the Power is exceed or all the unit types have been added. If the Power is exceeded, the last generator type is interpolated to compute the Turbine Release exactly (see equation
below). However, if all the unit types have been added and the Power cannot be met, the method assumes all unit types are operating at full capacity (according to the Full Generator Flow and Full
Generator Power tables). Then if the Power is exceeded, the last generator type added is interpolated to compute the Power exactly (see equation below). However, if the Power still cannot be met, an
error is posted and the run is terminated because the generators are unable to produce the amount of Power specified by the user.
The interpolation equation used to calculate Power is given below:
Power = oneLessTypePower + (Turbine Release − oneLessTypeFlow) × (cumulativePower − oneLessTypePower) / (cumulativeFlow − oneLessTypeFlow)
where oneLessTypePower is the power produced from all the previous types added (excluding the most recent type added); oneLessTypeFlow is the flow through all the previous unit types (excluding the
most recent type added); cumulativePower is the power produced from all the unit types added (including the most recent type); and cumulativeFlow is the flow through all the unit types added
(including the most recent type).
Note: This equation assumes the relationship between power and flow is linear regardless of the actual relationship specified in the power and flow tables. It is also interpolating over an entire
type of generators.
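A hypothetical Python sketch of this allocation logic, with assumed data shapes (each unit type reduced to a single best-operating point already scaled by its availability, and the full-capacity fallback omitted):

```python
def release_for_power(power, unit_types):
    """unit_types: list of (best_power, best_flow) pairs per unit type,
    sorted by efficiency (best_power / best_flow), descending."""
    cum_power = cum_flow = 0.0
    for type_power, type_flow in unit_types:
        prev_power, prev_flow = cum_power, cum_flow
        cum_power += type_power            # add the whole unit type
        cum_flow += type_flow
        if cum_power >= power:             # overshot: interpolate last type
            frac = (power - prev_power) / (cum_power - prev_power)
            return prev_flow + frac * (cum_flow - prev_flow)
    raise ValueError("power request exceeds best-operating capacity")
```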
LCR Power Release
The LCR Power Release method calculates the release from the Lower Colorado River hydropower products. The LCR Power Release method is available only when LCR Power is selected in the Power category.
Energy must be input or flagged as BEST EFFICIENCY (Energy cannot be flagged MAX CAPACITY for the LCR Power method) for this method to execute. It is determined if the requested Power demand can be
met. This determination is based on the maximum possible power that can be generated for a given head. If it is possible to meet the requested Power demand, the Turbine Release is set so as to
produce the requested Power.
If Energy is flagged UNIT VALUES (U), an error is issued. This flag is only available with the Unit Power Table Release method; see Unit Power Table Release.
There are no slots specific to this method.
Method Details
The first step in this method is to verify that the Lower Colo Power Coeffs are known. If either of these coefficients is not known, a RiverWare error is flagged and the simulation run is terminated. Then,
the LCR Input Efficiency slot is checked. If it is not known, it is assumed to be 100% efficient and the LCR Input Efficiency is set to 1.0.
If Energy is flagged as BEST EFFICIENCY, it is calculated as the Net Energy Request plus the value of energy in the Station Energy Table corresponding to the current day of the week.
If Energy is flagged as MAX CAPACITY, an error is given. If Energy is not flagged as either BEST EFFICIENCY or MAX CAPACITY, it must be input by the user.
Turbine Release is calculated using the following equation:
where the Timestep is in hours. The constants used in this equation are to convert the input to RiverWare standard units.
The previous equation is based on the energy calculation equation solved for Flow and corrected to standard units (see LCR Power method):
where flow is in kcfs, Timestep is in hour, and Operating Head is in feet.
The correction factors used in these equations convert the inputs to RiverWare standard units.
Once Turbine Release is calculated, it is checked against the maximum allowable turbine release. A RiverWare error is flagged and the simulation run is terminated if Turbine Release exceeds the
maximum allowable turbine release.
Unit Power Table Release
This method is only available if the Unit Power Table method is selected in the Power category; see Unit Power Table for details. The method Unit Power Table Release calculates Turbine Release when Energy is specified. If Energy is flagged as BEST EFFICIENCY (B) or MAX CAPACITY (M) or UNIT VALUES (U), it is
considered input.
If Energy is flagged B, the Unit Best Turbine Q table will be used to determine the best efficiency Turbine Release for the current average Operating Head. This assumes that all units are in use
unless specified otherwise in the Unit is Generating slot. Power is then found using the Unit Power Table. If Energy is flagged M, the Unit Max Turbine Q table is used to determine the maximum Unit
Turbine Release for the current average Operating Head. This point is then found in the Unit Power Table to determine the maximum power that can be produced for this Operating Head. If Energy is
flagged U, the method calculates Unit Turbine Release using table interpolation of Unit Energy on the Unit Power Table with the Unit Energy.
If Energy is input but not flagged as B, M, or U and Unit Energy is not input, the method will exit without calculating Unit Energy. If any of the values in Unit Energy are input, it will be used to
determine the release and power.
Method Details
This method will be called if Energy is input or set by a rule, which includes being flagged B, M, or U. This method will execute in the following manner.
if (Energy is flagged M)
If any of the Unit Energy[u] values are input or set by a rule, issue an error.
For each unit that is available (based on a non-zero value in the Unit is Generating slot), use 2D interpolation of Auto Unit Max Turbine Q table;
Set max release to a temporary local variable, Qmax[u];
Turbine Release is set to the sum of the unit maximum releases:
Turbine Release = Σu Qmax[u]
Once the value for each unit flow at the current average Operating Head is found, the Unit Power[u] produced for that flow can be determined directly from the Unit Power Table.
else if (Energy is flagged B)
If any of the Unit Energy[u] values are input or set by a rule, issue an error.
For each unit that is available (based on a non-zero value in the Unit is Generating slot), use 2D interpolation of Auto Unit Best Turbine Q table to determine release at B;
Set best release to a temporary local variable, Qbest[u];
Turbine Release is set to the sum of the unit best-efficiency releases:
Turbine Release = Σu Qbest[u]
Again, Unit Power[u] will then be able to be determined directly from the Unit Power Table.
else if (Energy is Input/Rules (including “U” flag) and Unit Energy for any unit is not input/rules)
Issue an error; there is no way to calculate Unit Energy from plant values and no way to calculate plant Power without unit information
else if (Energy is input/rules (including “U” flag) and Unit Energy for any unit is input/rules)
If Unit is Generating is set (input/rules) to 0 for a unit that has a Unit Energy, issue an error.
If Unit is Generating is set (input/rules) to 1 for a unit that does not have a Unit Energy, issue an error.
If Energy is flagged U,
Energy = Σu Unit Energy[u]
Next, Unit Power[u] = Unit Energy[u] / time (hrs)
From this power calculation, the Unit Turbine Release[u] can then be determined using a reverse table lookup of Unit Power[u] in the Unit Power Table. If the Shared Penstock Head Loss method is
selected, the solution is iterative as the net operating head is a function of Turbine Release. If Unit Energy[u] is less than zero, the Unit Turbine Release[u] is set to zero. A negative Unit Energy
can be set to represent a unit that is spinning but not producing power; that is, condensing.
Turbine Release = Σu Unit Turbine Release[u]
Finally, the method returns to the Unit Power Table method and computes Unit is Generating and Number of Units Generating. See Unit Power Table
for details. | {"url":"https://riverware.org/HelpSystem/CurrentVersion/Objects/reservoirPowerMethods.31.10.html","timestamp":"2024-11-09T22:57:37Z","content_type":"application/xhtml+xml","content_length":"59115","record_id":"<urn:uuid:76f4d4dd-c633-4a69-9ba8-6013671a7bae>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00376.warc.gz"} |
Pigeonhole Principle
from class: Theoretical Statistics
The pigeonhole principle states that if you have more items than containers to put them in, at least one container must hold more than one item. This seemingly simple idea has powerful implications
in combinatorics, probability, and number theory, often helping to prove the existence of certain configurations or outcomes even when the exact arrangements are not specified.
5 Must Know Facts For Your Next Test
1. The pigeonhole principle can be stated simply: If n items are placed into m containers and n > m, then at least one container must contain more than one item.
2. This principle is used in proofs to demonstrate that certain conditions must exist within a set or configuration, often leading to surprising results.
3. It can be generalized to state that if n items are put into m containers, then at least one container must contain at least ⌈n/m⌉ items.
4. Applications of the pigeonhole principle can be found in computer science for hashing functions, error detection in coding theory, and even in social sciences for understanding distributions.
5. The principle illustrates fundamental truths about existence rather than providing specific arrangements or counts, making it a powerful tool in theoretical arguments.
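As a quick illustration of the generalized statement (my own sketch, not part of the flashcard), a short script checks the ⌈n/m⌉ bound for one particular placement:

```python
from collections import Counter
from math import ceil

n_items, m_containers = 10, 3
placement = [i % m_containers for i in range(n_items)]  # any placement works
counts = Counter(placement)

# generalized pigeonhole: some container must hold at least ceil(n/m) items
assert max(counts.values()) >= ceil(n_items / m_containers)
print(counts)  # Counter({0: 4, 1: 3, 2: 3})
```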
Review Questions
• How does the pigeonhole principle help in demonstrating the existence of certain outcomes in combinatorial problems?
The pigeonhole principle helps to show that within a set distribution of items across containers, certain configurations must exist when the number of items exceeds the number of containers. For instance, in a scenario where you have 10 pairs of socks and only 9 drawers to store them, you are guaranteed that at least one drawer will hold more than one pair. This result can simplify complex combinatorial proofs by ensuring that certain conditions must occur without needing to specify every arrangement.
• In what ways can the pigeonhole principle be applied to real-world problems such as error detection and data distribution?
The pigeonhole principle can be applied in error detection by illustrating how redundant systems can guarantee that errors are caught. For example, if a system transmits data packets and has fewer channels than packets being sent, at least one channel must handle multiple packets. This redundancy ensures that if one packet is lost or corrupted, there's still a high probability that another copy exists somewhere else. Additionally, in data distribution tasks like load balancing, understanding how items are allocated among servers helps prevent overloads on any single server.
• Evaluate how the pigeonhole principle relates to concepts of cardinality and surjective functions in mathematics.
The pigeonhole principle directly connects to cardinality by highlighting relationships between the sizes of sets. If you apply the principle to a scenario involving sets with different cardinalities, it shows that if a function maps elements from a larger set to a smaller set (as seen in surjective functions), then some elements from the smaller set must correspond to multiple elements from the larger set. This observation reinforces the necessity of understanding mappings and distributions between different sizes in mathematical concepts, showcasing its relevance across various fields.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/theoretical-statistics/pigeonhole-principle","timestamp":"2024-11-11T08:03:38Z","content_type":"text/html","content_length":"152814","record_id":"<urn:uuid:842ea83c-c8e4-42d2-bcc3-f7dc355ce466>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00506.warc.gz"} |
Computational Complexity of Recognizing Well-Covered and Generalized Well-Covered Graphs
The core message of this article is to investigate the computational complexity of recognizing well-covered graphs and their generalizations, known as Wk graphs and Es graphs. The authors establish
several complexity results, including showing that recognizing Wk graphs and shedding vertices are coNP-complete on well-covered graphs, determining the precise complexity of recognizing 1-extendable
(Es) graphs as Θ₂ᵖ-complete, and providing a linear-time algorithm to decide if a chordal graph is 1-extendable. | {"url":"https://linnk.ai/ja/insight/graph-theory/","timestamp":"2024-11-03T16:31:28Z","content_type":"text/html","content_length":"353414","record_id":"<urn:uuid:a86fecd0-0509-476d-8d57-b5202a38d5e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00111.warc.gz"}
Probability Theory Lecture Notes by Phanuel Mariano | Download book PDF
Lecture Notes for Introductory Probability
The contents include: Combinatorics, Axioms of Probability, Conditional Probability and Independence, Discrete Random Variables, Continuous Random Variables, Joint Distributions and Independence,
More on Expectation and Limit Theorems, Convergence in probability, Moment generating functions, Computing probabilities and expectations by conditioning, Markov Chains: Introduction, Markov Chains:
Classification of States, Branching processes, Markov Chains: Limiting Probabilities, Markov Chains: Reversibility, Three Application, Poisson Process.
Author(s): Janko Gravner, Mathematics Department, University of California
218 Pages | {"url":"https://www.freebookcentre.net/maths-books-download/Probability-Theory-Lecture-Notes-by-Phanuel-Mariano.html","timestamp":"2024-11-04T00:49:02Z","content_type":"text/html","content_length":"36746","record_id":"<urn:uuid:c8c380cc-91aa-4031-8195-ea4cc890195f>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00368.warc.gz"} |
How To Use Fibonacci Retracement Levels Correctly - Pro Trading School
There are many tools available to forex traders, but the Fibonacci retracement tool is unique and one of the oldest of them all.
Unlike moving averages, which lag price, the Fibonacci retracement levels lead price, meaning they suggest where the market might react before it gets there.
Interestingly, the tool is available in every charting platform.
The tool is very useful for trading a trending market.
Traders use the Fibonacci retracement levels to find areas where there could be high-probability trade setups because those levels suggest potential places where a pullback can reverse and head back
in the trending direction.
In this post, we will discuss what the Fibonacci retracement levels really mean, how to attach the tool, how to use it in trading, and the common mistakes to avoid when using the tool.
But before then, we’ll explore the origin of the Fibonacci levels and the relevance of the golden ratio.
Origin of the Fibonacci retracement
We will discuss this under the following:
• Fibonacci sequence
• Fibonacci ratios
The Fibonacci sequence
The Fibonacci retracement levels are derived from the various Fibonacci ratios, which are, in turn, derived from the Fibonacci sequence of numbers.
Discovered by the Italian mathematician Leonardo de Pisa (nicknamed Fibonacci), the Fibonacci number sequence is a numerical series in which each number, with the exception of the first two (0 and 1), is the sum of the two numbers before it.
So, the Fibonacci number sequence looks like this: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, and continues like that till infinity.
Taking a closer look at the numbers, one thing you will notice is that from 21 and above, each number is about 161.8% of the number immediately before it.
To put it in another way, each number is 61.8% of the number immediately after it. For instance, if you divide 34 by 55, you will get 0.618, meaning that 34 is 61.8% of 55.
Going the other way round, if you divide 55 by 34, you will get 1.618, which is widely regarded as the golden ratio because of its occurrence in several aspects of nature.
Leonardo Fibonacci discovered all this during his youthful age in the Mediterranean. Born in the year 1170 in the Italian city of Pisa, the young mathematician made several eastern trips with his
father and even lived with him in Bejaia — a Mediterranean port in northern Algeria.
It was at this time that he studied mathematics and learned the Hindu-Arabic numeral system.
When he eventually returned to Italy in 1202, Leonardo documented his findings in what became a famous mathematics compendium — ‘Liber Abaci’.
This work popularized the Hindu-Arabic numeral system in Europe and the rest of the western world.
The Golden ratio and other Fibonacci ratios
As we stated earlier, aside from the first few numbers, dividing a number in the sequence by the number immediately before or after it gives a fairly consistent ratio — 1.618 or 0.618, as the case may be.
The value 1.618 is widely known as Phi or the golden ratio, since we encounter it in many areas of life, including the financial markets.
In the financial trading world, the 0.618 ratio or 61.8% gives rise to the 61.8% Fibonacci retracement level, while the 1.618 ratio or 161.8% gives rise to the 161.8% extension or expansion level.
Aside from the golden ratio and its inverse, other ratios can be derived from the numbers in the Fibonacci sequence.
For instance, dividing a number by the number two places to the right — say, 89 divided by 233 — would give 0.382 (38.2%), which is one of the Fibonacci retracement levels. The inverse of 0.382 is
2.618 or 261.8% — another expansion or extension level.
Again, if you divide a number in the sequence by a number three places after it — say, 34 divided by 144 — you will get the ratio 0.236 (23.6%), which is one of the Fibonacci retracement levels too.
While some say that the 50% and 100% levels are not technically derived from the Fibonacci ratios, 1, 1, and 2 are members of the sequence — dividing 1 by 2 gives 50%, while dividing 1 by 1 gives 100%.
There may be other ratios from the Fibonacci number sequence, but when it comes to forex trading, Fibonacci ratios like 0.236, 0.382, 0.618, 1.618, and 2.618 are the significant ones.
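To make the arithmetic concrete, here is a small Python sketch (our illustration, not part of the original article) that generates the sequence and recovers the trading ratios listed above:

# Generate Fibonacci numbers and derive the common trading ratios.
def fibonacci(n):
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq

seq = fibonacci(20)

# Ratio of a term to the term k places after it, averaged over the later
# terms, where the ratios have already stabilized.
for k, label in [(1, "61.8%"), (2, "38.2%"), (3, "23.6%")]:
    ratios = [seq[i] / seq[i + k] for i in range(10, len(seq) - k)]
    print(label, round(sum(ratios) / len(ratios), 4))

# The golden ratio itself: each term divided by the one before it.
print("161.8%", round(seq[-1] / seq[-2], 4))

Running it prints values that settle to 0.618, 0.382, 0.236, and 1.618, matching the levels traders use.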
But away from the technical analysis of the financial markets, the golden ratio, or its inverse, is quite prevalent in different aspects of our natural world and human arts.
For example, the ratio has been observed in the spiral galaxies of outer space, tree branches, sunflowers, rose petals, human faces, Leonardo da Vinci’s Mona Lisa, the Parthenon, and the ancient
Greek vases.
However, one of the famous examples of the ratio in nature is seen in the nautilus shell, which spirals at about the same level as the percentages from the golden ratio and its inverse.
The ratio may also be used to predict human behaviors and spending habits, which is why it works in the financial markets.
What do the Fibonacci retracement levels mean?
A trending market moves in waves — impulse waves and corrective waves or pullbacks. The impulse waves move in the direction of the trend, while the corrective waves move in the opposite direction.
The Fibonacci retracement levels show how much of the preceding impulse wave a pullback can retrace to before reversing to head back in the trending direction — starting a new impulse wave.
They indicate the percentage of the impulse wave at which a pullback might end, which means that a pullback is measured as a percentage of the impulse wave before it.
Thus, a 61.8% retracement level means 61.8% of the preceding impulse wave, and if a pullback reverses at that level, it means the pullback (retracement) was only 61.8% of the preceding impulse wave.
Since price reversal areas are considered support or resistance levels, the Fibonacci retracement levels, in essence, indicate potential support or resistance areas.
Interestingly, the tool highlights these levels even before the price reaches them. The common retracement levels are 23.6%, 38.2%, 50%, 61.8%, and 78.6%.
If the market is trending up, then, pullbacks move downwards, so the retracement levels will serve as possible support levels.
The opposite is the case in a market that is in a downtrend — pullbacks move upwards, so the retracement levels will function as potential resistance levels.
Thus, they provide unique areas to look for trade setups.
It is believed that since traders already know about these levels beforehand, they tend to work like self-fulfilled prophecies.
That is, traders place a lot of orders around those levels in anticipation that the pullback will reverse, and it’s those huge orders that cause the price to reverse at those levels.
But whatever the case, Fibonacci retracement levels can help you spot where to look for your trade signal.
In addition to the retracement levels, which indicate potential pullback reversal levels, the Fibonacci retracement tool can be set to show levels that lie in the opposite direction beyond the high/
low of the price swing it’s attached to.
These other levels are called the extension levels and can indicate potential impulse wave reversal levels.
That means, in an uptrend, the extension levels can act as potential resistance areas where an impulse wave may reverse and begin a new pullback.
Hence, these levels may indicate possible profit targets for someone riding up the impulse wave.
In a downtrend, on the other hand, the extension levels can act as potential support levels where traders can place their profit targets for short positions.
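As a quick illustration of the arithmetic behind the tool (the swing prices here are hypothetical, not from any chart in this article), the Python sketch below computes retracement and extension price levels for an uptrend; in a downtrend, the same arithmetic applies with the swing high and swing low playing opposite roles:

# Hypothetical swing points from an uptrend impulse wave.
swing_low, swing_high = 1.2500, 1.3000
move = swing_high - swing_low

# Retracement levels: measured back down from the swing high.
for r in (0.236, 0.382, 0.5, 0.618, 0.786):
    print(f"{r:.1%} retracement -> {swing_high - move * r:.4f}")

# Extension levels: projected beyond the swing high, measured from the low.
for e in (1.272, 1.618, 2.618):
    print(f"{e:.1%} extension -> {swing_low + move * e:.4f}")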
How to attach the Fibonacci retracement tool to your chart
It’s important to learn how to attach the Fibonacci retracement tools properly on your chart because that determines how well you can use the tool to find potential price reversal levels.
The tool may look different in different charting platforms, but you attach it to your chart the same way, irrespective of the platform you’re using.
If you are not used to your trading platform, you will first need to go through it to know where the tool is located and how it looks.
You can also check the levels preset in the tool to know if you can add more, especially if you want to see the extension levels since they are not always pre-set in the tool.
After that, you need to study the direction of the trend you want to trade and identify the impulse waves and pullbacks.
See the sample chart below; the market is in an uptrend. Observe the impulse (green arrow) and pullback waves (orange arrows) marked.
You aim to attach the tool to the latest impulse wave when a pullback has started so that you can anticipate where the pullback might reverse.
Attach the retracement tool from the beginning of the impulse wave to its end.
Hence, in an uptrend, you attach it from the swing low to the swing high since the waves move upwards.
This is what you do: Select the retracement tool, place your cursor at the lowest point of the latest impulse wave, and drag it up to the highest point in the wave. Take a look at the chart below.
The various levels will appear, from up to down, in this order: 23.6%, 38.2%, 50%, 61.8%, and 78.6% — as you can see in the chart above. Notice that the pullback is currently at the 61.8% level.
In a downtrend, attach the retracement tool from the swing high to the swing low, because the impulse waves are moving downwards.
After selecting the tool, place your cursor at the highest point in the latest impulse wave and drag it down to the lowest point in the wave. The various retracement levels
will appear from down upwards — 23.6%, 38.2%, 50%, 61.8%, and 78.6%, in that order. See the chart below. Notice that the pullback ended at the 50% level.
Using the Fibonacci retracement tool in your trading
The best time to use the Fibonacci retracement tools in your trading is when the market is strongly trending in one direction — up or down — making clear impulse waves and pullbacks.
There are many ways traders use the Fibonacci retracement tool in their trading, but these are the common ones:
Pullback reversal strategy
Gartley chart patterns
Elliott Wave methods
Pullback reversal strategy
With this strategy, a trader tries to enter the market at the end of a price pullback so as to ride the next impulse wave and get out before the next pullback begins.
To play this strategy, you must find ways of knowing when a pullback is losing momentum and identify the level where it might end for a new impulse wave to begin.
Many traders approach this strategy differently, and there are several indicators one can use to estimate when a price swing has exhausted its move.
But to predict the possible price reversal levels, one of the most popular tools to use is the Fibonacci retracement levels and their extension counterparts.
The retracement levels
The retracement levels can serve as potential resistance or support levels, depending on the direction of the trend, and can offer great levels for your trade entry or stop loss orders.
If the market is trending up, the retracement levels serve as potential support levels. Thus, when a pullback reaches one of the important Fibonacci retracement levels —38.2%, 50%, or 61.8% — you
should watch out for whatever defines your bullish reversal signal.
Your bullish reversal signal can be a bullish candlestick pattern or any technical indicator signal.
In the GBP USD chart below, the price found support at the 50% and 61.8% levels. Notice the inside bar pattern that formed at the end of the pullback, which could be a signal to go long.
Also, note the hidden divergence (blue line) and the oversold signal in the stochastic indicator — another possible signal to go long.
If you see your setup at any of the Fibonacci retracement levels, you can go long and place your stop loss below the next retracement level (a higher percentage, lower in price), below the next two levels, or below the 100% level — which represents the previous swing low. See the chart below.
In a market that is trending down, the retracement levels serve as potential resistance levels where a price rally can reverse.
When the price pulls back to 38.2%, 50%, 61.8%, or even 78.6%, look for your bearish reversal trade setups, which could be a price action pattern or an indicator signal.
The GBP CHF below shows that a pin bar pattern occurred at the 61.8% Fibonacci retracement level and the price declined further.
When you have a signal at any of the levels, you can go short and place your stop loss above the next higher retracement level, above the next two levels, or even above the 100% level — which represents the previous swing high. See the chart below.
The extension levels
As the name implies, the extension levels are an extension of the retracement levels beyond the price swing high/low to project where the next impulse wave might end.
Depending on the direction of the trend, the extension levels can serve as potential resistance or support levels and may provide great levels for your profit targets.
In an uptrend, the extension levels can serve as resistance levels, so you can place your profit target just below any relevant extension level — as you can see in the GBP USD chart below.
For a down-trending market, the extension levels can become support levels, so you can place your take profit order just above any of the levels.
It’s even possible to place more than one profit target, with each near a different extension level, if you want to exit your trade in batches. Take a look at the chart below.
Gartley chart patterns
These are harmonic chart patterns that are based on the Fibonacci ratios and percentages.
Just like other harmonic patterns, specific Fibonacci levels must be met for a formation to qualify as a valid Gartley pattern setup.
Depending on whether it's a bullish or bearish pattern, the Gartley pattern may look like an M or a W. So, it has five points — denoted as X, A, B, C, and D — and four price swings.
Here is what a bullish (M) formation looks like. It starts with a swing up, XA, which could be any price swing in the market.
This is followed by a pullback swing, AB, which must be about 61.8% Fibonacci retracement of the XA swing.
From point B, the price reverses to point C, which must be about a 38.2% to 88.6% retracement of the AB swing.
Then, the price heads downwards from point C to point D, making a 127.2% extension of the BC swing or 78.6% retracement of the XA move.
The pattern is completed at point D, and a buy signal occurs with possible profit targets at point C, point A, and a final target at a 161.8% extension from point A. Most times, the stop loss is placed below point X.
Note that these Fibonacci levels need not be exact, but the closer they are, the more reliable the Gartley pattern would be.
The bearish version of the pattern is just the inverse of the bullish pattern and is shaped like W.
If the pattern is completed at the fifth point (D), it suggests a potential bearish move with several profit targets.
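To make these ratio checks concrete, here is a small Python sketch; the five price points are hypothetical, and the tolerance band reflects the note above that the levels need not be exact:

# Hypothetical swing points for a bullish (M-shaped) Gartley pattern.
X, A, B, C, D = 1.0000, 1.1000, 1.0382, 1.0764, 1.0214

def close_to(actual, target, tol=0.05):
    # The levels need not be exact, so allow a tolerance band.
    return abs(actual - target) <= tol

XA, AB, BC = A - X, A - B, C - B

checks = {
    "AB ~ 61.8% of XA": close_to(AB / XA, 0.618),
    "BC within 38.2-88.6% of AB": 0.382 <= BC / AB <= 0.886,
    "AD ~ 78.6% of XA": close_to((A - D) / XA, 0.786),
}
for name, ok in checks.items():
    print(name, "->", ok)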
Elliott Wave method
Elliott Wave Theory states that the market moves in waves, which include impulse waves and corrective waves.
The impulse wave moves in the direction of the trend, while the corrective waves are retracements of the impulse waves.
Within each wave, there is a set of waves that adhere to the same impulse/corrective wave pattern.
The impulse wave has five waves within it — three smaller impulse waves (wave1, wave 3, and wave 5) interspaced by 2 smaller corrective waves (wave 2 and wave 4).
On the other hand, the corrective wave has three smaller waves within it — wave A, wave B, and wave C.
Traders who follow this method use the Fibonacci retracement levels to predict where the corrective waves can reverse for the next impulse wave to begin.
In the GBPAUD chart below, you can see the impulse and corrective waves, with the smaller waves within each.
Notice that the corrective wave reversed at the 50% Fibonacci level.
Mistakes to avoid when trading with Fibonacci retracement levels
There are a few serious mistakes some traders make when trading the Fibonacci retracement levels. We will discuss some of them here so that you can avoid them.
Trading Fibonacci retracement levels on short timeframes
Some traders try to trade the Fibonacci levels on very short timeframes, such as the M15 and M5, but the levels work better on higher timeframes.
In fact, it’s better if you don’t go below the H4 timeframe when attaching the Fibonacci tool.
After marking the levels on a higher timeframe, you can step down to the lower timeframe to look for your trade setups when the price reaches any of the Fib levels.
Going against the main trend
Going against the trend can be very disastrous for your trading account, so try to avoid it by all means.
It is one of the reasons you should stick to higher timeframes — preferably, D1 and H4 — because it’s almost impossible to identify the direction of the main trend in a lower timeframe.
Trading only the Fibonacci retracement levels
That the price has retraced to the 50% or 61.8% Fibonacci retracement level does not mean that it will reverse and resume moving in the trend direction.
Those levels are only a guide for where you can look for trade setups.
Additionally, it’s even better if there is a confluence of other factors, like an established support and resistance level, trend line, or long-term moving average, coinciding with an important
Fibonacci retracement level.
Using a tight stop loss
Don’t place your stop loss very close to the low/high of the pullback you are trading. It is safer to keep it beyond the 100% retracement level.
Final words
Using the Fibonacci retracement levels to trade a trending market can improve the odds of your trading outcome if you use it correctly.
However, there’s a need to combine it with other supporting factors to even increase your chances further.
Additionally, you must have clear criteria to identify a trade setup when the price reaches a significant Fibonacci level. Above all, ensure you give your trades enough room — avoid a tight stop loss. | {"url":"https://www.protradingschool.com/how-to-use-fibonacci-retracement-levels-correctly/","timestamp":"2024-11-03T02:40:02Z","content_type":"text/html","content_length":"84497","record_id":"<urn:uuid:ec59244a-d3fd-48c5-aaca-df62ab97ab86>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00392.warc.gz"}
Stable Matching and its Discrepancies
Written by: Aryan Mansingh
There are many topics in game theory, but one of the most basic yet fundamental is the idea of a stable matching. This can be used in a variety of fields from economics to 5G networks, yet is often
left hidden just below the surface of what's known about its use. The basics of this topic, as well as another fundamental game theory topic, will be covered in this article.
The marriage problem is one that has been acknowledged for centuries, albeit not directly. It asks for a method of pairing two groups of people based on their preferences such that no member of any pair is better off with another person than with their match. It hadn't even been known if such a matchup was possible, let alone how to achieve it without failure. However, before understanding how such a solution was found, a simpler concept has to be understood: Nash equilibria.
Every economist knows the concept of Nash equilibria. It's one of the fundamental ways to predict the actions of a set of characters/players by analyzing what the most desirable outcomes are for those
players. This concept is commonly displayed by the prisoners dilemma, although it’s applicable in practically any situation where the actions and consequences of those actions are known for any two
players. In the prisoners dilemma, there are two prisoners, each of whom can either remain silent or confess about a joint crime. These prisoners are both “logical,” meaning that they will prefer
less jail time to more. If both prisoners remain silent, they both are given two years of time. If one prisoner remains silent while the other confesses, then the prisoner that remains silent gets 8
years of time while the one that confesses gets a year in prison. However, if both prisoners confess, then they both get 5 years in prison. Here’s a visual of the problem (Figure 1):
Figure 1
2X2 payoff matrix for the prisoner’s dilemma
Courtesy of: Aryan Mansingh
Looking at the diagram, it seems obvious that both prisoners should remain silent to receive the least possible punishment, right? However, the concept of Nash equilibria proves this wrong. Let's consider this question from prisoner A's perspective, responding to prisoner B's choices. If prisoner B decides to remain silent, prisoner A is better off confessing, since they receive a year less time. If prisoner B decides to confess, prisoner A is better off confessing, since they get 3 years less time. So, in either scenario, it's more beneficial for prisoner A to confess. Because this scenario is symmetrical (prisoner B has the same choices and repercussions as prisoner A), prisoner B will also find it more beneficial to confess. This also leads to the highest combined jail time between the prisoners. So, even though both prisoners confessing is the least optimal choice in terms of jail time, they are forced to make it, leading to the Nash equilibrium. The prisoners simply have no better choice to make.
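A short Python sketch (added here as an illustration) makes the best-response reasoning explicit for this payoff matrix, treating fewer years of jail time as better:

import itertools

# Payoffs as (A's years, B's years) for each (A action, B action).
years = {
    ("silent", "silent"): (2, 2),
    ("silent", "confess"): (8, 1),
    ("confess", "silent"): (1, 8),
    ("confess", "confess"): (5, 5),
}
actions = ("silent", "confess")

def is_nash(a, b):
    # A profile is a Nash equilibrium if neither prisoner can cut their own
    # jail time by unilaterally switching actions.
    a_improves = any(years[(alt, b)][0] < years[(a, b)][0] for alt in actions)
    b_improves = any(years[(a, alt)][1] < years[(a, b)][1] for alt in actions)
    return not (a_improves or b_improves)

for a, b in itertools.product(actions, actions):
    print(a, b, "->", is_nash(a, b))

Running it confirms that (confess, confess) is the only profile from which neither prisoner can improve by switching unilaterally.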
A stable matching requires a similar idea. Instead of having different characters and analyzing their possible actions, a stable matching is based on the preferences of individuals, pairing them accordingly. Here is one possible arrangement of preferences (Figure 2):
Figure 2
Preference layout with w and m representing women and men accordingly
Here, there are three men and three women, with each of them having a preference profile P and their preferences of the opposite gender in order. For example, man 1 (m1) prefers woman 2 (w2) to woman
1 (w1) to woman 3 (w3). Using these preference profiles, a stable matching has to be determined such that there’s no way for any two pairs to switch their pairings to better satisfy their preference
profiles. While it may seem difficult to figure out such a matching, David Gale and Lloyd Shapley created an algorithm in 1962 to solve this particular question. While there are many ways to
understand how this algorithm works, a flowchart demonstrates it quite simply (Figure 3):
Figure 3
Flow diagram of the Gale-Shapley algorithm
This relatively simple algorithm achieves a stable matching, and applying it to the proposed scenario gives us this matching: m1–w1, m2–w3, m3–w2.
To show that this works, we can look at each individual case. m1 is matched to w1, even though w1 is his second choice. Why is this? This is because his first choice, w2, prefers m3 over m1, and since m3's more preferred choices run into conflicts of their own, he is matched with w2. This can be demonstrated for any given pair: there's always another one preventing both players from switching. This algorithm was quite useful when it was made, as it was used to connect students to medical schools in a similar fashion. However, there's something to note. The women get much better preferences than the men do, and this is no accident. The men get their 2nd, 3rd, and 2nd choices, while the women get their 1st, 1st, and 2nd choices respectively; this is far better than what the men got. This is because there can be multiple stable matchings for the same preference profiles. The man-favorable stable matching is as follows: m1–w2, m2–w3, m3–w1.
If tested, this is also a stable matching. However, the men here get their 1st, 3rd, and 1st choices while the women get their 3rd, 2nd, and 2nd choices respectively. While this may not seem like too
large of a problem, in practical application, it made a huge difference. When this algorithm was originally used to match medical students with potential schools, the different stable matchings were
never acknowledged, often being favorable to schools rather than the students. This was eventually addressed by Alvin Roth in 1996, who made developments to the Gale-Shapley algorithm and proposed a
solution while addressing the matching process as a labor market. Only then was the matching process fixed.
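For readers who want to experiment, below is a compact Python sketch of the man-proposing Gale-Shapley (deferred acceptance) algorithm. Since the article's full preference table appears only in its figures, the lists here are hypothetical stand-ins (only m1's list, and w2's preference for m3 over m1, come from the text); note that whichever side proposes ends up with its optimal stable matching, which is exactly the source of the bias discussed above:

def gale_shapley(men_prefs, women_prefs):
    # Man-proposing deferred acceptance; returns the man-optimal stable matching.
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    free = list(men_prefs)                 # men with no partner yet
    next_pick = {m: 0 for m in men_prefs}  # index of next woman to propose to
    engaged = {}                           # woman -> man

    while free:
        m = free.pop()
        w = men_prefs[m][next_pick[m]]
        next_pick[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:
            free.append(engaged[w])        # w trades up; her old partner is free
            engaged[w] = m
        else:
            free.append(m)                 # w rejects m; he proposes again later
    return {m: w for w, m in engaged.items()}

# Hypothetical 3x3 preference lists (illustrative only).
men = {"m1": ["w2", "w1", "w3"], "m2": ["w1", "w3", "w2"], "m3": ["w1", "w2", "w3"]}
women = {"w1": ["m2", "m1", "m3"], "w2": ["m3", "m1", "m2"], "w3": ["m1", "m2", "m3"]}
print(gale_shapley(men, women))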
Yet, this only covers one discrepancy in the Gale-Shapley algorithm, and there are several more that exist. Alvin Roth has done the difficult work, though, and compiled various properties and their
applications in his paper “Deferred Acceptance Algorithms: History, Theory, Practice and Open Questions.” While this article covers just the surface of stable matching and Nash equilibria, game
theory is a vast field that goes far deeper into the analysis of player interactions, a field that still remains under-appreciated to this day.
Gale, D., & Shapley, L. S. (1962). College admissions and the stability of marriage. The American Mathematical Monthly, 69(1), 9–15. https://doi.org/10.1080/00029890.1962.11989827
Osborne, M. (2004). An introduction to game theory. New York: Oxford University Press.
Rostom, M., Abd El-Malek, A., Abo-Zahhad, M., & Elsabrouty, M. (2022). A two-stage matching game and repeated auctions for users admission and channels allocation in 5G HetNets. IEEE Access, PP, 1. https://doi.org/10.1109/ACCESS.2022.3180982 | {"url":"https://sites.imsa.edu/hadron/2024/09/23/stable-matching-and-its-discrepancies/","timestamp":"2024-11-03T19:29:28Z","content_type":"text/html","content_length":"52602","record_id":"<urn:uuid:b031bc01-73c6-412a-bfe3-dda8a0423437>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00528.warc.gz"}
Is causal analysis commonly applied to census data?
Cannot say
None of the above
What is the difference between word2vec and GloVe?
They are the same thing
Word2vec is predictive, GloVe is count-based
Word2vec can only be used for English
GloVe always performs better
What is an advantage of using deep learning for image recognition?
It can automatically learn complex patterns from images
It requires manual feature extraction
It is only suitable for small datasets
It cannot be used for real-time processing
Explain important model perf. statistics
R-squared: model fit
MSE: mean squared error
MAE: mean absolute error
All of the above
What is the main advantage of using peephole connections in LSTM?
Faster training
Better handling of time dependencies
Reduced model size
Improved interpretability
What's the primary characteristic of overfitting?
Poor performance on training data
Poor performance on test data
Good performance on both training and test data
No relation to model performance
What is the difference between a Type I and Type II error?
Type I rejects true null, Type II fails to reject false null
Type II rejects true null, Type I fails to reject false null
They are the same
Neither relates to null hypothesis
What does HTTPS stand for in web security?
Hyper Text Transfer Protocol Secure
High-Level Transmission Protection Service
Hierarchical Type Translation Protection System
Hybrid Textual Transport Platform Security
Which is NOT a disadvantage of linear models?
Assumption of linearity
Can't use for binary outcomes
Overfitting problems
High computational cost
What does the term "precision" refer to in machine learning?
The proportion of true positive predictions out of all positive predictions
The proportion of true negative predictions
The speed of the model's predictions
The difference between predicted and actual values
Which technique is used to handle out-of-vocabulary words in machine translation?
Subword tokenization
Character-level models
Copy mechanism
All of the above
What is the bias-variance tradeoff?
It's not a real concept
The balance between model simplicity and flexibility
A method for feature selection
A way to increase model accuracy
What is the primary goal of cross-validation?
To increase model complexity
To reduce model accuracy
To assess model performance
To generate more data
Which algorithm is best suited for anomaly detection in streaming data?
Isolation Forest
Which algorithm is best for named entity recognition?
Naive Bayes
Conditional Random Fields
All of the above
Which algorithm is best for emotion recognition in speech?
Hidden Markov Models
All of the above
What is the main purpose of collaborative filtering?
To filter spam
To recommend items to users
To collaborate on filtering tasks
To reduce filter complexity
Which of these is NOT a key step in the K-means algorithm?
Centroid initialization
Assignment step
Update step
Gradient calculation
What is the primary goal of the Cramér–von Mises criterion?
To test Cramér's theories
To assess goodness of fit of cumulative distribution function
To classify von Mises distributions
To reduce criterion complexity
What is the purpose of the Shapley values in machine learning?
To perform clustering
To explain feature importance in model predictions
To reduce dimensionality
To generate synthetic data
What is the main difference between a generative and discriminative model?
Generative models are always more accurate
Generative models learn the joint probability distribution
Discriminative models are unsupervised
There is no difference
What type of ensemble method is Random Forest?
What distinguishes a population from a sample?
Time frame
Which of these is NOT a key feature of XGBoost?
Gradient boosting
Support for missing values
Exclusive use of deep learning
What is the purpose of k-fold cross-validation?
To increase model complexity
To assess model performance and generalization
To reduce the number of features
To speed up model training
What are the different types of joins in SQL?
Inner join, left join, right join, full outer join
Cross join, self join
Both of the above
None of the above
What is the diff. between hard and soft clustering?
Hard: data point belongs to one cluster
Soft: data point has probabilities
Soft: more flexible
All of the above
What does t-SNE stand for in dimensionality reduction?
Temporal Sequence Numerical Embedding
t-distributed Stochastic Neighbor Embedding
Time Series Network Estimation
Transformed Spatial Neighborhood Evaluation
What is the purpose of the early stopping technique in neural networks?
To speed up training
To prevent overfitting
To increase model complexity
To perform feature selection
What is the main difference between a Recurrent Neural Network (RNN) and a Feedforward Neural Network?
RNNs are only for image data
RNNs can process sequential data
RNNs don't use activation functions
There is no difference
Which measure of dispersion is most affected by outliers?
Standard deviation
Interquartile range
Which algorithm is best for aspect-based sentiment analysis?
Naive Bayes
LSTM with attention
All of the above
What does HDFS stand for in big data?
Hadoop Distributed File System
High-Density File Storage
Hierarchical Data Formatting Service
Hybrid Database Functionality Suite
What is the main purpose of data sharding?
To sharpen data points
To distribute data across multiple machines
To reduce data volume
To encrypt data fragments
What is the Bonferroni correction used for?
To increase statistical power
To decrease the significance level
To adjust for multiple comparisons
To improve model fit
How does Data Science differ from Machine Learning?
DS is a subset of ML
ML is a subset of DS
They are unrelated
They are identical
What is the purpose of the kernel trick in SVM?
To make the algorithm faster
To transform data into higher dimensions implicitly
To perform clustering
To reduce dimensionality
What is the main purpose of data integration?
To segregate data sources
To combine data from different sources
To reduce data volume
To encrypt multiple datasets
What does HMM stand for in sequence modeling?
High Markov Model
Hidden Markov Model
Hierarchical Memory Management
Hybrid Matrix Multiplication
What is the primary goal of reinforcement learning?
To classify data
To cluster data points
To maximize cumulative reward
To reduce dimensionality
| {"url":"https://coolgenerativeai.com/machine-learning-analyst-prep/","timestamp":"2024-11-11T20:37:45Z","content_type":"text/html","content_length":"188651","record_id":"<urn:uuid:ef6c72a5-6685-41a1-8d81-603047e321ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00229.warc.gz"}
Characterization of Spatially Graded Biomechanical Scaffolds
Advances in fabrication have allowed tissue engineers to better mimic complex structures and tissue interfaces by designing nanofibrous scaffolds with spatially graded material properties. However,
the nonuniform properties that grant the desired biomechanical function also make these constructs difficult to characterize. In light of this, we developed a novel procedure to create graded
nanofibrous scaffolds and determine the spatial distribution of their material properties. Multilayered nanofiber constructs were synthesized, controlling spatial gradation of the stiffness to mimic
the soft tissue gradients found in tendon or ligament tissue. Constructs were characterized using uniaxial tension testing with digital image correlation (DIC) to measure the displacements throughout
the sample, in a noncontacting fashion, as it deformed. Noise was removed from the displacement data using principal component analysis (PCA), and the final denoised field served as the input to an
inverse elasticity problem whose solution determines the spatial distribution of the Young's modulus throughout the material, up to a multiplicative factor. Our approach was able to construct,
characterize, and determine the spatially varying moduli, in four electrospun scaffolds, highlighting its great promise for analyzing tissues and engineered constructs with spatial gradations in
modulus, such as those at the interfaces between two disparate tissues (e.g., myotendinous junction, tendon- and ligament-to-bone entheses).
Biomechanics, Displacement, Fibers, Inverse problems, Manufacturing, Materials properties, Noise (Sound), Principal component analysis, Nanofibers, Tensile testing, Design, Elasticity, Algorithms,
Boundary-value problems, Young's modulus, Biological tissues, Bone, Tendons, Modeling
1 Introduction
Over the last two decades, there has been an increasing interest in using bio-engineered constructs for tissue engineering, for everything from making artificial ligaments to enhancing the body's
ability to heal bone, skin, and muscle [1–6]. Nanofibrous scaffolds can be used to transform this research; by carefully constructing these scaffolds, model systems can be made strong and
sufficiently pliable to mimic load directions placed on bone and connective tissue, can degrade naturally in the body, and can be engineered to be porous and hydrophilic to promote cellular response
and allow the introduction of growth factors through the scaffolds themselves [7–9].
Characterizing these scaffolds is not a trivial proposition; on a microstructural level, there are numerous qualities of importance that can affect the mechanical response, including fiber size,
topography, and alignment, as well as the size and orientation of pores. Characterization at a larger scale provides the mechanical properties of the scaffold as a whole, which are critical for
ensuring that it is able to bear the mechanical stresses inherent in a given application [6]. There are a variety of methods, such as scanning electron microscopy (SEM) and mercury porosimetry, for
assessing morphology and assorted structural properties; but, for characterizing mechanical properties, the field is largely dominated by tensile testing, with compression testing being employed for
special cases and some in vitro studies [6,10,11]. Recently, a number of studies have sought to characterize the mechanical properties of individual electrospun fibers, using either atomic force
microscopy (e.g., Refs. [12] and [13]) or single-fiber tensile testing [14–16]. Additionally, complex constitutive models have been used to directly link scaffold microstructure and overall
mechanical properties, but these tend to be tailored to highly specific applications and materials (for examples, see Refs. [12] and [17]).
Despite these advances in the microstructure-informed scaffold mechanical behavior, the bulk of the work reported in the literature uses scaffolds with properties that are designed to be uniform
throughout, typically consisting of a single layer of nonwoven fibers [17–21]. While these scaffolds can provide a useful environment for the growth of new cells and are able to mimic simple
structures, they are poorly suited for materials with graded properties, such as those encountered at interfaces between dissimilar tissue, including the tendon-to-bone transition, the myotendinous
junction, and ligament-bone entheses. These structures are inherently nonuniform, with significant gradations in qualities like mineralization, fiber alignment, and modulus throughout [22].
Successfully mimicking these structures requires not only more advanced fabrication techniques to replicate these material property gradients, but also appropriate characterization techniques that
are able to discern spatial variations in material properties—which can be very challenging [20,23–26]. While some methods for morphologic characterization, like SEM, still work for spatially graded
scaffolds, standard tensile characterization methods are wholly deficient for spatially varying mechanical properties like modulus, as these properties are homogenized throughout the specimen
geometry. Thus, a tensile characterization technique, able to determine material properties and how they vary throughout the specimen geometry, is highly desirable, and would be of great interest to
bio-engineering applications at disparate tissue interfaces.
In 2009, Li et al. created a scaffold with graded properties, characterized its microstructure, and measured how its mechanical properties were distributed throughout [27]. In their study, the
tendon-bone insertion site was mimicked by creating an electrospun scaffold with a continuous gradient in mineralization, and a variety of methods, including SEM and X-ray spectroscopy, were used to
examine its microstructure. They then applied a tissue-staining agent to create a speckle pattern on the sample, and performed tensile tests in conjunction with image correlation techniques to record
displacements at numerous points on the sample surface and generate images of the local strain throughout the sample as it underwent elongation. Their results showed a clear correlation between the
gradients in mineralization and modulus, as expected. This type of strain imaging gives a sense of the distribution of the modulus throughout the scaffold, greatly enhancing the usual tensile testing
procedure. However, this approach cannot appreciate the nonuniformity of stress throughout heterogeneous tissues, and is particularly sensitive to noise due to its numerical differentiation approach
to extract strain.
Herein, we aim to address these limitations and extend our characterization capabilities through elastography, by creating our own set of graded nanofibrous scaffolds and solving for the distribution
of their mechanical properties, namely, the Young's modulus, by solving an inverse elasticity problem using displacement data from tensile tests. By reconstructing the modulus directly from the
displacement, we omit conversion to strain and ensure that the results are consistent with the laws of elasticity and a given constitutive model [28]. Consequently, though we restrict ourselves to
modulus here, this general method also has the advantage of being extensible to other quantities of interest: with a sufficiently detailed constitutive model, we could use these techniques to
determine a variety of microstructural properties from displacement fields [29,30]. A number of studies have used ultrasound or magnetic resonance imaging (both in vitro and in vivo) to noninvasively
evaluate scaffold properties as they degrade [31–34]. However, to our knowledge, there are no studies that use elastography to bridge the gap between fabrication and mechanical characterization of
graded scaffolds, and allow for the assessment of spatially varying material properties.
2 Methods
2.1 Fabrication of Graded Scaffolds.
Multilayered nanofiber mats were created using previously described electrospinning methods [35]. Briefly, a polycaprolactone (PCL) solution, consisting of poly(ϵ-caprolactone) (80,000MW,
BrightChina Industrial, Shenzhen, Guangdong, China) in a 3:1 chloroform-to-methanol solvent (VWR International, LLC, Radnor, PA), was continuously expelled from a glass syringe through an 18-gage
needle, using a programmable syringe pump (BS-8000, Braintree Scientific, Braintree, MA). The polymer solution was expelled at a rate of 12 μL/min, through an applied 18 kV field, and collected on a
rotating mandrel (15×1.5in.) in a custom electrospinning apparatus. Mandrel rotation speed was controlled using a commercial overhead high-speed stirrer (BDC6015, Caframo Ontario, CA), and
maintained at 2000rpm during deposition to achieve aligned nanofibers. This alignment was characterized by a highly nonuniform distribution of fiber orientation angles across the sample, with most
fibers in the neighborhood of a particular orientation, similar to that seen in our previous electrospun aligned PCL nanofibrous mats [35]. To create scaffolds with spatially graded properties, we
utilized a seven-layer design (Fig. 1) wherein sections of select layers were masked to prevent nanofiber accumulation, using card stock strips positioned above the mandrel during deposition (Fig. 2
). Because these masking strips were not in contact with the mandrel surface, and thus, did not contact the specimens directly, their removal did not affect the material. In the present scaffold
design, continuous layers were alternated with interrupted (masked) layers such that the resulting multilayer structure was symmetric in specimen thickness, with continuous (unmasked) layers as the
initial base, and final layers. This design produced scaffolds with uniform continuous surfaces and aligned nanofiber topography, with 80% of fiber diameters between 200 and 425nm (Fig. 1).
Following fabrication, the multilayered nanofiber mats were removed from the mandrel, and cut into individual material test specimens using a 3.6×0.6cm rectangular punch (C.S. Osborne, Harrison,
NJ). Specimens were obtained such that fibers were aligned to the long axis of the sample, and the masked regions were centered within the specimen gage length (i.e., samples were most compliant in
the center, and stiffest at the ends) (Fig. 3). Each individual test specimen was then mounted in an oak-tag I-frame sample holder (Figs. 4(a) and 4(b)), and an anisotropic, high-contrast speckle
pattern was applied to enable previously described digital image correlation (DIC) measurement [35]. Due to the high degree of fiber alignment, we expected these structures to display anisotropic
behavior during tensile testing.
In addition to preparing samples for tensile testing, during one of the electrodeposition sessions, we created six additional samples for destructive thickness measurements. The thickness samples (n
=6) were analyzed using a metallurgical microscope, which allowed void spaces to be removed without deforming the nanofiber layers. Macroscopically, these samples displayed a fairly uniform
macroscopic thickness, comparable to the thickness of the seven-layer region. Metallurgical microscopy yielded thicknesses of 60.93±2.55 μm, 21.35±1.07 μm, and 14.73±0.96 μm for the ends of the
samples, the five-layer, and four-layer regions, respectively. Though these thicknesses do not correspond directly to those of the samples used for mechanical characterization, due to the small
standard deviations in their thickness measures for each region (thin, medium, thick), they provide an accurate estimation of the graded thicknesses for samples fabricated on that day.
2.2 Determination of Displacement Fields Within Samples.
Specimens (n=4) were mechanically characterized via uniaxial tensile testing with a Test Resources universal materials test machine, equipped with a 100N tensile load cell (0.5N resolution)
(Model 500LE2-1, Test Resources, Inc., Shakopee, MN). In our established method [35], once secured in the pneumatic grips (Fig. 4(c)), the I-beam holder is transected, leaving only the scaffold
sample spanning between the grips (Fig. 4(d)), and the specimen is stretched to remove any slack, to a slight positive preload of 0.1N, corresponding to less than 0.5% strain, which accounts for
less than 2% of the maximum applied load. Upon preload, the specimen is stretched to failure at a constant rate (0.2mm/s), with load, displacement, and time recorded at a rate of 50Hz. Testing was
performed such that the axis of expansion was along the specimen's long axis, referred to hereafter as the axial direction, and the perpendicular axis aligned with the sample width is termed the
lateral direction. A two-camera digital image correlation (DIC) system (Correlated Solutions, Inc., Columbia, SC) was employed to measure strains in the material, in a noncontacting manner, using a
texture-mapping algorithm. The system was calibrated in multiple planes within the testing volume using a rectangular calibration grid (9×9-25mm). Video was collected at 7 frames/second during
tensile testing, and analyzed with VIC-3D 2010 DIC software (Correlated Solutions), to produce a texture map, treating the solution as a continuum of displacement data across the sample. A virtual extensometer was created along the centerline of the sample length (axial direction) using Vic-2D software (Correlated Solutions) to determine the axial engineering strain of the sample within its
gage length [35]. In order to make the data compatible with our inverse elasticity problem solver, which uses the finite element method, we discretized the specimen geometry using a 15×40
rectangular virtual grid, conforming as closely as possible to the boundaries of the calibration region, approximately 5×25mm. The 3D positions for every node of the virtual grid were exported at
each time-step to determine the measured displacements. This resulted in a series of incremental frame-to-frame displacements at each point on this new mesh, with the corresponding load cell and
grip-to-grip displacement values providing the boundary conditions at each time-step.
2.3 Denoising Data Using Principal Component Analysis.
Principal component analysis (PCA) allows us to take a large set of measurements and decompose it into a number of orthogonal modes equal to the number of measurements [36,37]. When each measurement
is a repetition of the same experiment, e.g., ten different measurements of the length of a sample, or 20 different tests of a component's resistivity, these modes can be used to separate noise from
signal. In this case, the lower modes, which correspond to the larger singular values, are considered to be signals, and the high modes are considered noise. Here, we use each individual frame of the
displacement data as a different incremental measurement. As long as we restrict ourselves to the small-strain regime, taken to be below 5% strain, the linear behavior of the material will yield
approximately the same displacement for each incremental stretch, allowing us to treat each frame as a repetition of the same measurement in our analysis. Our 5% estimate for the cutoff of the linear
regime is based on our prior studies in similar electrospun PCL scaffolds [35]. We exclude frames of data collected in the nonlinear, strain-stiffening region, or “toe” region, of the stress–strain
curve (Fig. 5), and include only frames for which the average local strain (determined by grid size) across the sample is below 3.5%; this limit is somewhat arbitrary, but justified based on further
PCA analysis in Sec. 3.2. PCA is performed on the lateral and axial components of displacement separately, producing modes that represent characteristic motions of the sample in each direction.
Each of these modes generated by PCA accounts for a certain percentage of the total variance present in the data. Looking at the largest contributors can give an estimate of the true signal; modes
which contribute a disproportionately small amount to the variance can be considered to be noise. Typically, the variance contribution of each mode is plotted and they are ordered from greatest to
least. Of all the modes, we retain the first few dominant ones that together contribute a significant portion of the total variance, around 99%. Thereafter, we project the measured displacement onto
these modes to arrive at the denoised estimate of the displacement field.
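A minimal NumPy sketch of this denoising step (ours, not from the paper, and using a plain truncated SVD as a stand-in for PCA) might look as follows, where each column of U_frames holds one incremental displacement frame flattened into a vector:

import numpy as np

def pca_denoise(U_frames, var_keep=0.99):
    # U_frames: (n_points, n_frames) array; each column holds one frame of
    # axial (or lateral) incremental displacements on the virtual grid.
    U, s, Vt = np.linalg.svd(U_frames, full_matrices=False)
    var = s**2 / np.sum(s**2)                    # variance captured per mode
    k = int(np.searchsorted(np.cumsum(var), var_keep)) + 1
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :], k

# Synthetic check: one true mode (a linear stretch) plus small random noise.
rng = np.random.default_rng(0)
true = np.outer(np.linspace(0.0, 1.0, 600), np.ones(17))
noisy = true + 0.01 * rng.standard_normal((600, 17))
denoised, n_modes = pca_denoise(noisy)
print("modes retained:", n_modes)
print("estimated noise magnitude:", np.abs(noisy - denoised).mean())

Retaining one axial mode and two lateral modes, as done for the scaffold data, corresponds to k = 1 and k = 2 under this criterion.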
2.4 Characterization of Material Properties
2.4.1 Inverse Problem Algorithm.
For characterization, we solve the inverse problem of determining the spatial distribution of material properties given the denoised experimental displacement. In this inverse problem, we minimize an
objective function of the form

$$\pi = \frac{1}{2}\int_{\Omega} \left[ T(u - \tilde{u}) \right] \cdot \left[ T(u - \tilde{u}) \right] \, d\Omega + \alpha R(E)$$

defined on the domain Ω.
The first term in π represents the mismatch between a predicted displacement field u and the measured displacements $ũ$. The predicted displacements are constrained to obey a model of linear motion,
the details of which are given in Sec. 2.5. T is a matrix used to apply weights to the data, in case the quality of data is dependent on the direction of measurement, as is the case for imaging
modalities like ultrasound. As an example, we might account for poor lateral data by using $T = \begin{bmatrix} 0.1 & 0 \\ 0 & 1 \end{bmatrix}$ to apply a weight ten times greater to data in the axial direction than those in the lateral,
thereby forcing the minimization to place more emphasis on the axial data. The R term is a total variation diminishing (TVD) regularization term, depending on the material property distribution μ, as
detailed in Refs. [38] and [39]. The overall strength of the regularization is controlled by a regularization parameter α, and E represents the spatial distribution of the Young's modulus. The
minimization problem is solved using a quasi-Newton algorithm, which requires calculating the gradient of the objective function with respect to the optimization parameters. This is evaluated
efficiently by setting up and solving an adjoint problem [40]. Our approach to solving the inverse problem iteratively proceeds as follows:
1. For a given spatial distribution of material properties, solve a forward elasticity problem to determine the predicted displacements.
2. Compute the objective function, which measures the difference between the predicted and measured displacements. If this is below a given tolerance, stop. Else continue.
3. Solve an adjoint problem that is driven by the difference between the predicted and measured displacement fields.
4. Use the forward and the adjoint problems to determine the gradient of the objective function.
5. Provide this gradient to the quasi-Newton algorithm to determine the updated spatial distribution of mechanical parameters.
6. Go to step 1.
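In code, one evaluation of the objective and its gradient might be organized as in the sketch below; the forward and adjoint solvers are placeholder assumptions standing in for the finite element solves, the total-variation term is shown in a smoothed one-dimensional form for brevity, and SciPy's L-BFGS-B plays the role of the quasi-Newton update:

import numpy as np
from scipy.optimize import minimize

def tv_and_grad(E, eps=1e-8):
    # Smoothed total-variation penalty and its gradient (1D illustration).
    d = np.diff(E)
    s = np.sqrt(d**2 + eps)
    g = np.zeros_like(E)
    g[:-1] -= d / s
    g[1:] += d / s
    return s.sum(), g

def make_objective(u_measured, alpha, solve_forward, solve_adjoint):
    # solve_forward(E) -> predicted displacements for a modulus field E.
    # solve_adjoint(E, residual) -> d(mismatch)/dE via the adjoint problem.
    # Both are placeholders standing in for finite element solves.
    def fun(E):
        residual = solve_forward(E) - u_measured
        tv, tv_grad = tv_and_grad(E)
        pi = 0.5 * np.sum(residual**2) + alpha * tv
        grad = solve_adjoint(E, residual) + alpha * tv_grad
        return pi, grad
    return fun

# E0 = np.ones(n_elements)  # initial guess for the modulus field
# result = minimize(make_objective(u_meas, 1e-3, fwd, adj), E0,
#                   jac=True, method="L-BFGS-B")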
2.5 Material Behavior and Modeling.
In order to solve the inverse problem, we first consider the properties we expect from the graded PCL specimens. Past studies have shown that similar constructs exhibit anisotropic behavior, due in
part to fiber realignment [35,41]. We anticipate behavior to be linear at small strains, but, if realignment is substantial, we may need nonlinear modeling to capture the rotational character of that
motion. We also understand that, since the specimens are very thin sheets, we can assume plane stress conditions and work in two dimensions. Taking all this into consideration, to compute multiple
components of the Young's moduli, we require a two-dimensional model that can account for anisotropy and accommodate variable axes of anisotropy, and is dependent on Poisson's ratio. To use this
model, we need an estimate for Poisson's ratio as well as multiple two-dimensional displacement measurements from experiments stretching the sample along different axes. However, performing
multi-axial loading without irreversibly altering the scaffold microstructure from test to test is very difficult, and our previous work has shown that the complex behavior of these structures makes
Poisson's ratio challenging to assess [35]. Designing experiments to account for these difficulties and perform suitable multi-axial loading is beyond the scope of this study, so we limit ourselves
to a simpler experiment and model.
Our experiment, as described in Sec. 2.2, consists of a single tensile test, stretching samples uniaxially in the direction of fiber alignment. We note that in this case, only the lateral motion is
influenced by Poisson's ratio, and the axial motion is governed entirely by the corresponding axial component of the Young's modulus, $E_{11}$. Thus, we can reduce the two-dimensional model to a one-dimensional model, wherein isotropy holds and we recognize that the only material parameter for which we can solve is $E_{11}$. In spite of this reduced dimensionality, the inverse problem we have
described is driven by the solution of a forward problem in two dimensions; we enforce consistency with our simplified case by letting the lateral component of weighting matrix T be zero. This allows
us to choose an arbitrary value for Poisson's ratio, as it will only affect the lateral data, which is ignored by the data-matching term in our optimization.
We note that though this model requires only the axial data, we perform PCA and denoising for both components of displacement in order to interrogate our assumptions about the material behavior and
build a framework to do this analysis with more complex experiments and models in the future.
2.5.1 Determination of Regularization Parameter α.
Once the material model and weighting matrix are established, regularization must be addressed. The inverse problem algorithm uses a TVD functional to smooth our reconstructions and ensure that no
over-fitting of data occurs. This TVD regularization penalizes large spatial variations in the material parameters without regard to their slope. This makes it an ideal choice for this application,
as we expect sharp changes in material properties over small distances due to the spatial gradations engineered into the sample [42]. The strength of the regularization is controlled by a parameter,
α, whose optimal value is commonly determined using an L-curve. In general, an L-curve plots a predicted solution against the difference between the predicted and true solutions, for a range of
regularization parameter (α) values [43]. We instead use a plot directly comparing the difference between the predicted and true solutions to α itself, referred to by Paynter et al. as an S curve [44
]. We look for a point on the bend of the S curve, where the tail of the S begins; this represents a nearly optimal compromise between minimizing the mismatch between measured and predicted results,
and avoiding over-fitting the data. We find that an α value of 1×10^−3 is appropriate for all of our samples.
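The selection of the bend can also be automated; the following sketch (our heuristic, not the paper's procedure, which selects the bend by inspection) picks the point of maximum curvature of a synthetic S curve on log axes:

import numpy as np

def pick_alpha(alphas, mismatches):
    # Pick the regularization strength at the bend of the S curve,
    # estimated here as the point of maximum curvature on log axes.
    x, y = np.log10(alphas), np.log10(mismatches)
    dy = np.gradient(y, x)
    d2y = np.gradient(dy, x)
    curvature = np.abs(d2y) / (1 + dy**2) ** 1.5
    return alphas[np.argmax(curvature)]

# Synthetic S-shaped curve: flat tail at small alpha, rising near 1e-3.
alphas = np.logspace(-6, 0, 25)
mismatches = 1e-3 + 0.1 * alphas**2 / (alphas**2 + 1e-6)
print(pick_alpha(alphas, mismatches))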
2.5.2 Boundary Conditions.
In the course of solving the full inverse problem, we need to solve a forward elasticity problem. In order to produce accurate solutions and ensure that the inverse problem is well-posed, we must
prescribe appropriate boundary conditions [45]. In prescribing these, we note that our samples are fixed at both ends in the grips of the tensile test apparatus; one of the grips remains stationary
while the other moves, and nothing comes into contact with the sides of the samples. So, we assert that the sides are traction-free and then use displacement boundary conditions on the top and bottom
sample edges to match the measured displacement exactly. On the bottom edge, we constrain both the lateral and axial movement with displacement boundary conditions, but on the top edge we constrain
only the axial movement. We do this because the bottom edge of the rectangular grid, on which we perform DIC and PCA, is set close to the fixed grip, and we believe it to be strongly constrained.
However, the top edge is more free to displace (narrow) laterally, and thus should be less constrained.
3 Results
3.1 Sample Construction, Testing, and DIC.
Spatially graded samples displayed a characteristic bilinear mechanical response in their stress–strain curves (Fig. 5). This behavior, along with the trends in PCA modes described in Sec. 3.2,
allows us to confirm that we are analyzing displacements in the initial linear region for the scaffolds.
3.2 Denoising Data Using Principal Component Analysis.
For each of the four samples, we identify a series of successive frames of incremental displacement data corresponding to an average cumulative strain of 3.5%; on average, 17 frames are used for each
sample. Using these data, we compute the PCA modes for all sets of lateral and axial data. The contribution from each of these modes to the overall variance for a typical sample is shown in Fig. 6,
showing that the first two modes in the lateral case, and the first mode in the axial case, contribute significantly to the variance. The total contribution from all the other modes is less than 2.5%
in both cases. The dominance of two modes in the lateral data can be interpreted as an indicator of nonlinear behavior, suggesting more complex behavior than in the axial case.
In Fig. 7, we have plotted the first three lateral and axial modes of motion for one particular case. The first shear mode can be interpreted to reflect the realignment of fibers throughout the
structure. As discussed in Sec. 2.1, we anticipate that for each sample, the fibers have a dominant direction of fiber orientation. Further, though we have tried to ensure that this direction is
aligned with the axis of stretch in the experiment (i.e., the axial direction), the distribution of fibers will create some slight misalignment between these directions. Consequently, during the
stretch, the fibers will tend to reorient themselves along the axis of stretch, causing an overall rotation. This can be observed in the first lateral mode in Fig. 7, which corresponds to a net
counterclockwise rotation, as we surmised in our modeling considerations in Sec. 2.5.
The second lateral mode corresponds to lateral narrowing, and the third mode is comprised of random motions, as are all higher modes. We note that the first axial mode corresponds to stretch along
the axial direction. The second axial mode represents some form of rotation; however, from Fig. 7, we observe that its contribution is three orders of magnitude smaller than that of the first mode.
The third axial mode is comprised primarily of noise. These modes are consistent with the early stages of a uniaxial tension test, in which fibers are realigning and the largest displacements are axial.
We would like to emphasize the fact that the axial movement in the first mode is of much higher magnitude than any of the lateral motions. Additionally, only one mode is significant for describing
the axial motions, whereas the lateral motion requires two modes; this reinforces our notion that the lateral motion is nonlinear and more complex than the axial motion. To better explore the
behavior of the PCA modes and their significance, we plot the contribution to the variance of the first three modes in both the lateral and axial cases, covering a wide range of cumulative strains
(Fig. 8).
Turning our attention first to the lateral data in Fig. 8, we see it always has two dominant modes, even at very small strains. We ascribe this to (i) the rotation of re-orienting fibers, which is
manifested in the first lateral mode as a shear and, (ii) to the overall extension of the sample. Accounting for both of these is critical to describing the lateral motion in future studies with more
complex models. After a short initial period, the second mode, representing lateral narrowing, comes to represent an increasing proportion of the variance. The axial data, on the other hand, tells a
simpler story: we see a single dominant mode throughout, indicative of linear behavior. Taking both sets of data into account, we conclude that selecting a strain limit of 3.5% will yield linear
axial motion and nonlinear lateral motion.
Returning to the PCA data, we determine the denoised displacement data by projecting the measured displacement onto the first two lateral modes and the first axial mode. The resulting displacement
fields for the sample are shown in Fig. 9. Here, for both the lateral and axial displacements, we have plotted the original displacement field, the denoised field, and the difference, which
represents our estimate of noise. Based on this analysis, we estimate that the magnitude of the noise, which is approximately equal for both lateral and axial data, is about 1% of the magnitude of
the axial data and 10% of the magnitude of the lateral data. This is a substantial difference in signal-to-noise ratio, giving us more confidence in the axial data, both before and after denoising.
This is a potential issue that will have to be navigated in any studies that wish to reconstruct the complete two-dimensional properties of these scaffolds.
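The mode-projection step described above can be sketched in a few lines of Python (a minimal illustration under assumed array shapes, not the code used in this study): each incremental displacement field is flattened into a row of a data matrix, the PCA modes are its principal directions, and the denoised data are recovered by projecting onto the leading modes.

    import numpy as np

    # U: (n_frames, n_points) matrix of flattened incremental displacement fields
    U = np.random.randn(17, 500)                 # placeholder data for illustration
    Uc = U - U.mean(axis=0)                      # center each spatial point over frames
    _, s, Vt = np.linalg.svd(Uc, full_matrices=False)
    variance_share = s**2 / np.sum(s**2)         # per-mode contribution to the variance (cf. Fig. 6)
    k = 2                                        # e.g., two modes retained for the lateral data
    U_denoised = Uc @ Vt[:k].T @ Vt[:k] + U.mean(axis=0)
    noise_estimate = U - U_denoised              # residual, analogous to the difference in Fig. 9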
3.3 Reconstructions.
We produce reconstructions of the relative modulus values throughout the samples; the distribution is solved up to a multiplicative factor. The results for our four samples are shown in Fig. 10.
We can clearly see the spatial gradation in modulus with which the sample was constructed, with light-colored regions of low modulus in the middle and darker regions of high modulus on the ends. This
trend is observed in all samples. However, we also observe variability in the modulus distribution between samples. This can be ascribed to either the manufacturing process or the error inherent in
our approach to measure displacements and then infer the modulus from it. The average value of the contrast in the modulus between the outer regions of the sample and the central region is reported
to be 1.5 (Table 1). We also observe some variability in contrast, around 12%, between samples. The contrast is computed as the ratio of the average modulus values at the ends to the moduli in the
center of the sample. To evaluate the average value of E in the central region, a region in the middle, away from the sharp transition in the modulus field, was chosen; for the average value in the outer regions, two distinct regions, one at the top and the other at the bottom, both away from the sharp transitions in the modulus, were chosen.
To get a sense of the contrast in modulus that we should expect between these two regions, we turn to two separate estimates. First, we recall our scaffold design (Fig. 1), which shows that the
samples are comprised of seven complete layers on the ends and four in the center, with a five-layer transition zone. The figure implies that the soft region should be between 0.5cm and 1cm in
length (which is reflected accurately in our reconstructions), and gives us the simplest estimate of modulus contrast of 7/4=1.75 between the stiffest and softest regions. Next, we consider our
previous study [35] in which we used our electrospinning approach to fabricate sets of PCL scaffolds with uniform thickness, for a range of scaffold thicknesses, and evaluated their elastic moduli [
35]. This study showed that, counterintuitively, modulus increased with decreasing specimen thickness; indicating that modulus cannot simply be scaled by cross-sectional area to estimate values for
our current samples. Using the results for uniform specimens along with the thickness measurements from Sec. 2.1, we can extrapolate values of 189MPa, 242MPa, and 251MPa for the three regions of
our samples, from stiffest to softest. Because the scaffolds are relatively uniform in macroscopic thickness, with most of the difference between soft and dense regions being accounted for by
different degrees of fiber packing, we adjust the thickness values accordingly, yielding final modulus estimates of 189MPa, 85MPa, and 61MPa for each region, producing an approximate contrast of
three between the stiffest and softest sections.
Returning to Table 1, the contrast for our reconstructions is 1.5; a value below both of these estimates. This is a 15% reduction from the first estimate, computed based on the number of layers in
the sample, and a 50% reduction from the second estimate, determined by extrapolating modulus values from previous tests of uniform scaffolds. The first deviation can be explained by the lessening of
contrast inherent in the regularization process, but the second is too large to be explained by the regularization or factors in the optimization process. We postulate that the notable differences in
fiber packing densities between our graded samples and uniform specimens of similar thickness necessarily change the fiber–fiber interactions (e.g., friction, entanglement), which, in turn, is
reflected in changes in the mechanical response significant enough to exaggerate the expected contrast in moduli.
4 Discussion
By using PCA to process the displacement data in this problem, we create both a set of denoised data and an estimate of the magnitude of the noise itself. We have shown that the former helps us to
enhance the robustness of the inverse problem solution and detect nonlinear behavior, and the success of this method in other studies [46] gives us confidence that PCA is a useful part of our
workflow. The set of denoised displacements obtained after PCA can also be utilized to create clean images of strain in the material, where spatial derivatives of noisy fields are often known to
cause difficulties. The assessment of the magnitude of the noise via PCA allows us to see how strongly the axial and lateral signals are influenced by the noise, and to adjust hyperparameters, such
as the values of weighting matrix T, accordingly. However, after accounting for the noise in the data, potential sources of error remain in our boundary conditions and material model.
Error induced by the boundary conditions likely stems from our treatment of the top and bottom edges of the samples; we require that both components of displacement match the measured data exactly on
the bottom edge, and that the axial components match on the top edge. The stiffening effects of these conditions and the influence of the noise, on the lateral data in particular, make this direct
enforcement potentially problematic. In future studies, we could avoid assumptions about the boundary conditions altogether by employing special coupled formulations [47,48].
Our constitutive model assumes linear material behavior and isotropy, and does not take into account the fibrous nature of the mats. Both linearity and isotropy are reasonable assumptions for our
experiment, given that we work with data in the small-strain regime and only consider one component of the displacement data. Because PCA shows us that the motion at low strains is dominated by fiber
realignment, it is clear that we will need to update our models to represent that behavior in order to increase our modeling accuracy. Updating the constitutive model will become much more important
in future studies that work with finite strain data, or wish to characterize other material properties; we can build on the work of recent studies that have attempted to capture the effects of fiber
orientation and interaction [12,17,29].
Though we use a simple model for this case, our reconstructions have a relatively low variability in contrast, as noted in our earlier consideration of Table 1. Based on previous studies, this level
of variability is within what we might expect for this type of inverse reconstruction method [40,49,50]. Additionally, some of this variation may be a product of variability in the manufacturing
process, though this is difficult to quantify. The actual spatial variation of modulus across each sample shows rapid transitions between regions of low and high modulus for all cases, and uneven
edges for these regions in two of the four cases. Without more detailed experimental characterization of each sample we cannot say that features like the uneven edges are real with absolute
certainty, but we do know that total variation regularization tends to produce “staircasing” in results, yielding sharp edges in reconstructions [51,52]. Repeating these tests with an H^1
regularization term produces very smooth transitions, and looking at the scaffold design (Fig. 1) suggests that we might expect to see a two-step transition. Thus, we can assume that the single sharp
transition is a result of the total variation regularization combining the two transition steps into one steep divide. To consider whether we could have produced a more detailed result, we examine
the finite element mesh used; as shown in Fig. 11, the sharp transitions in modulus occur within a single element. This leads us to conclude that we are producing results that
appropriately represent the granularity of this data. To resolve finer details in these reconstructions, we would require higher resolution in the displacement data, on a finer virtual grid.
5 Conclusions
We were able to successfully create and characterize spatially graded nanofibrous scaffolds. Four samples were fabricated using electrospun PCL fibers; as material was deposited, the center of each
sample was periodically masked to create a gradation of material stiffness mimicking that of natural tendon or ligament. Each of these samples was stretched uniaxially to failure, and a two-camera
DIC system recorded displacements across the sample faces during these tests. We used PCA to remove noise from this data and generate a set of clean input displacements for an inverse elasticity
algorithm, which was used to infer the spatial distribution of the modulus across each sample.
Our reconstruction of the modulus revealed spatial distributions matching our expectations based on the construction of the spatially graded samples. In future work, we can refine our approach
through a combination of experimental and computational changes. Experimentally, we can use more complex tools for characterization of the scaffolds before tensile testing to assess more material
properties and make possible quantitative, rather than relative, modulus reconstructions. In order to address anisotropy, we can implement an anisotropic model in our solver and perform tensile tests
along multiple axes. To further enhance the utility of our method, this anisotropic model can be built to model fiber-level interactions and properties.
We would also like to acknowledge the invaluable support of Taylor Anderson and Kristen Lee in performing experiments and collecting and formatting DIC data.
Funding Data
• National Science Foundation (CAREER Award CBET-0954990 (DTC); Funder ID: 10.13039/100000001).
• Rensselaer Polytechnic Institute's Presidential Scholars Fellowship (Funder ID: 10.13039/100007092).
References
• C. E., Shekhar Jha, S. A., G. L., and D. G., "Nanotechnology in the Design of Soft Tissue Scaffolds: Innovations in Structure and Function," Wiley Interdiscip. Rev.: Nanomed. Nanobiotechnol.
• "Nanofibers and Their Biomedical Use," Acta Pharm.
• "Electrospinning: An Enabling Nanotechnology Platform for Drug Delivery and Regenerative Medicine," Adv. Drug Delivery Rev.
• "New State of Nanofibers in Regenerative Medicine," Artif. Cells, Nanomed., Biotechnol.
• T. R., P. D., and D. W., "Design, Fabrication and Characterization of PCL Electrospun Scaffolds—A Review," J. Mater. Chem.
• "Nanofiber-Based Scaffolds for Tissue Engineering," Eur. J. Plast. Surg.
• C. T., E. J., R. S., and F. K., "Electrospun Nanofibrous Structure: A Novel Scaffold for Tissue Engineering," J. Biomed. Mater. Res.
• "Putting Electrospun Nanofibers to Work for Biomedical Research," Macromol. Rapid Commun.
• D. W., D. L., D. J., and J. A., "Can Tissue Engineering Concepts Advance Tumor Biology Research?" Trends Biotechnol.
• J. H. and M. D., "Influence of Poly-(l-Lactic Acid) Nanofiber Functionalization on Maximum Load, Young's Modulus, and Strain of Nanofiber Scaffolds Before and After Cultivation of Osteoblasts: An In Vitro Study," Sci. World J.
• R. P., "Modulus of Elasticity of Randomly and Aligned Polymeric Scaffolds With Fiber Size Dependency," J. Mech. Behav. Biomed. Mater.
• S. R., "Determining the Mechanical Properties of Electrospun Poly-ε-Caprolactone (PCL) Nanofibers Using AFM and a Novel Fiber Anchoring Technique," Mater. Sci. Eng. C.
• A. R., M. T., D. T., D. L., and R. J., "Solvent Retention in Electrospun Fibers Affects Scaffold Mechanical Properties."
• A. M., K. P., San Segundo, I. M., A. R., A. K., R. J., R. A., D. T., and R. J., "Poly-l-Lactic Acid-co-Poly (Pentadecalactone) Electrospun Fibers Result in Greater Neurite Outgrowth of Chick Dorsal Root Ganglia In Vitro Compared to Poly-l-Lactic Acid Fibers," ACS Biomater. Sci. Eng.
• N. J., A. R., D. T., E. Y., M. R., and R. J., "The Effect of Engineered Nanotopography of Electrospun Microfibers on Fiber Rigidity and Macrophage Cytokine Production," J. Biomater. Sci. Polym. Ed.
• C. A., A. S., S. A., and V. H., "Computational Predictions of the Tensile Properties of Electrospun Fibre Meshes: Effect of Fibre Diameter and Fibre Orientation," J. Mech. Behav. Biomed. Mater.
• F. D. and C. T., "Controlled Biomineralization of Electrospun Poly (ε-Caprolactone) Fibers to Enhance Their Mechanical Properties," Acta Biomater.
• J. S., N. J., R. J., and L. A., "Electrospun Nanofiber Scaffolds for Investigating Cell–Matrix Adhesion," Adhesion Protein Protocols, Humana Press, Totowa, NJ.
• P. L. and F. D., "Fabrication of Nanofiber Scaffolds With Gradations in Fiber Organization and Their Potential Applications," Macromol. Biosci.
• Ghorbani, S., Bazaz, S. R., Warkiani, M. E., Soleimani, M., and Mehrizi, A. A., "Evaluation of Nanofiber PLA Scaffolds Using Dry- and Wet-Electrospinning Methods," 24th National and Second International Iranian Conference on Biomedical Engineering, IEEE, Tehran, Iran, Nov. 30–Dec. 1.
• G. M., J. D., and P. J., "Functional Grading of Mineral and Collagen in the Attachment of Tendon to Bone," Biophys. J.
• J. L., M. S., L. S., S. G., and C. T., "Composite Scaffolds: Bridging Nanofiber and Microsphere Architectures to Improve Bioactivity of Mechanically Competent Constructs," J. Biomed. Mater. Res. Part A.
• F. D., "Electrospun Nanofiber Scaffolds With Gradations in Fiber Organization," J. Visualized Exp.
• "Biomineralized Poly (l-Lactic-co-Glycolic Acid)/Graphene Oxide/Tussah Silk Fibroin Nanofiber Scaffolds With Multiple Orthogonal Layers Enhance Osteoblastic Differentiation of Mesenchymal Stem Cells," ACS Biomater. Sci. Eng.
• "Electrospun Nanofibers: New Concepts, Materials, and Applications," Acc. Chem. Res.
• "Nanofiber Scaffolds With Gradations in Mineral Content for Mimicking the Tendon-to-Bone Insertion Site," Nano Lett.
• P. E. and A. A., "A Review of the Mathematical and Computational Foundations of Biomechanical Imaging," Computational Modeling in Biomechanics, Dordrecht, The Netherlands.
• T. J., P. E., and A. A., "Inferring Spatial Variations of Microstructural Properties From Macroscopic Mechanical Response," Biomech. Model. Mechanobiol.
• S. W., P. E., A. A., and E. F., "Transversely Isotropic Elasticity Imaging of Cancellous Bone," ASME J. Biomech. Eng.
• M. S., J. M., C. G., and S. J., "Three Dimensional Elastic Modulus Reconstruction for Non-Invasive, Quantitative Monitoring of Tissue Scaffold Mechanical Property Changes," IEEE Ultrasonics Symposium, IEEE, Beijing, China, Nov. 2–5.
• C. G. and S. J., "Non-Invasive Monitoring of Tissue Scaffold Degradation Using Ultrasound Elasticity Imaging," Acta Biomater.
• S. F., "MR Elastography for Evaluating Regeneration of Tissue-Engineered Cartilage in an Ectopic Mouse Model," Magn. Reson. Med.
• Tripathy, S., Takanari, K., Hashizume, R., Hong, Y., Amoroso, N. J., Fujimoto, K. L., Sacks, M. S., Wagner, W. R., and Kim, K., "In-Vivo Mechanical Property Assessment of a Biodegradable Polyurethane Tissue Construct on Rat Abdominal Repair Model Using Ultrasound Elasticity Imaging," IEEE International Ultrasonics Symposium, IEEE, San Diego, CA, Oct. 11–14.
• R. A., K. L., J. A., and D. T., "The Influence of Specimen Thickness and Alignment on the Material and Failure Properties of Electrospun Polycaprolactone Nanofiber Mats," J. Biomed. Mater. Res. Part A.
• "Principal Component Analysis," International Encyclopedia of Statistical Science, Heidelberg, Germany.
• L. I., "Nonlinear Total Variation Based Noise Removal Algorithms," Phys. D: Nonlinear Phenom.
• "Medical Images Denoising Based on Total Variation Algorithm," Procedia Environ. Sci.
• A. A., N. H., and Feijóo, G. R., "Solution of Inverse Problems in Elasticity Imaging Using the Adjoint Method," Inverse Probl.
• J. A., Jr., R. L., and R. S., "Fabrication and Characterization of Six Electrospun Poly (α-Hydroxy Ester)-Based Fibrous Scaffolds for Tissue Engineering Applications," Acta Biomater.
• N. H., P. E., and A. A., "Solution of the Nonlinear Elasticity Imaging Inverse Problem: The Compressible Case," Inverse Probl.
• P. C., "Analysis of Discrete Ill-Posed Problems by Means of the L-Curve," SIAM Rev.
• R. W., "Regularization Methods for the Extraction of Depth Profiles From Simulated ARXPS Data Derived From Overlayer/Substrate Models," J. Electron Spectrosc. Relat. Phenom.
• P. E. and N. H., "Elastic Modulus Imaging: On the Uniqueness and Nonuniqueness of the Elastography Inverse Problem in Two Dimensions," Inverse Probl.
• T. J., P. E., and A. A., "Improving Three-Dimensional Mechanical Imaging of Breast Lesions With Principal Component Analysis," Med. Phys.
• M. I., "A Modified Error in Constitutive Equation Approach for Frequency-Domain Viscoelasticity Imaging Using Interior Data," Comput. Methods Appl. Mech. Eng.
• A. A., N. H., M. M., and J. C., "Evaluation of the Adjoint Equation Based Algorithm for Elasticity Imaging," Phys. Med. Biol.
• A. A., "Solution of the Nonlinear Elasticity Imaging Inverse Problem: The Incompressible Case," Comput. Methods Appl. Mech. Eng.
• "Structural Properties of Solutions to Total Variation Regularization Problems," Math. Modell. Numer. Anal.
• "Some Remarks on the Staircasing Phenomenon in Total Variation-Based Image Denoising," J. Math. Imaging Vision.
The diagonal hydrodynamic reductions of a hierarchy of integrable hydrodynamic chains are explicitly characterized. Their compatibility with previously introduced reductions of differential type is analyzed and their associated class of hodograph solutions is discussed. Comment: 19 pages
A general theorem on factorization of matrices with polynomial entries is proven and it is used to reduce polynomial Darboux matrices to linear ones. Some new examples of linear Darboux matrices are discussed. Comment: 10 pages
We consider a hierarchy of integrable 1+2-dimensional equations related to the Lie algebra of the vector fields on the line. The solutions in quadratures are constructed depending on $n$ arbitrary functions of one argument. The most interesting result is the simple equation for the generating function of the hierarchy, which defines the dynamics for the negative times and also has applications to second-order spectral problems. A rather general theory of integrable 1+1-dimensional equations can be developed by the study of polynomial solutions of this equation under the condition of regularity of the corresponding potentials. Comment: 17 pages
The properties of discrete nonlinear symmetries of integrable equations are investigated. These symmetries are shown to be canonical transformations. On the basis of the considered examples, it is concluded that the densities of the conservation laws are changed under these transformations by spatial divergencies. Comment: 17 pages, LaTeX, IHEP 92-14
The iterations of the Darboux transformation for the generalized Schroedinger operator are studied. The applications to the Dym and Camassa-Holm equations are considered. Comment: 16 pages, 6 eps
Boundary value problems for integrable nonlinear partial differential equations are considered from the symmetry point of view. Families of boundary conditions compatible with the Harry-Dym, KdV and MKdV equations and the Volterra chain are discussed. We also discuss the uniqueness of some of these boundary conditions. Comment: 25 pages, LaTeX, no figure
The discrete symmetry's dressing chains of the nonlinear Schrodinger equation (NLS) and the Davey-Stewartson (DS) equations are considered. The modified NLS (mNLS) equation and the modified DS (mDS) equations are obtained. The explicitly reversible Backlund auto-transformations for the mNLS and mDS equations are constructed. We demonstrate the discrete symmetry's conjugate chains of the KP and DS models. The two-dimensional generalization of the P4 equation is obtained. Comment: 20 pages
Data For Triangles
Data for triangles consist of metallic triangles, edges, symmetry, dielectric triangles as well as advanced information for corner and end points.
Metallic Triangles
For the metallic triangles the following extract is written:
DATA OF THE METALLIC TRIANGLES
no. label x1 in m y1 in m z1 in m edges
medium x2 in m y2 in m z2 in m
medium x3 in m y3 in m z3 in m
nx ny nz area in m*m
1 0 0.0000E+00 0.0000E+00 0.0000E+00 1
Free s 0.0000E+00 2.0000E-01 0.0000E+00
Free s 3.3333E-02 0.0000E+00 0.0000E+00
0.0000E+00 0.0000E+00 -1.0000E+00 3.3333E-03
2 0 3.3333E-02 2.0000E-01 0.0000E+00 -1 2 3
Free s 3.3333E-02 0.0000E+00 0.0000E+00
Free s 0.0000E+00 2.0000E-01 0.0000E+00
0.0000E+00 0.0000E+00 -1.0000E+00 3.3333E-03
The first column gives the number of the triangle. The second column gives the label followed by the medium in which the triangle is situated. A 0 means that the triangle is in free space. The next
three columns are the X coordinate, Y coordinate and Z coordinate of the three corner points of the triangles.
The first row of each triangle also lists the numbers of the edges of the adjacent triangles. A positive sign indicates that the positive current direction is away from the triangle, and a negative sign indicates that the positive current direction is towards the triangle. The area of the triangle is given below the edges in m^2.
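As an illustration (not an official Feko utility), the four-line records in the extract above can be read with a small parser; the field positions are assumptions based on the extract, and the two-token medium name "Free s" is handled by taking the last three tokens of the corner lines.

    def parse_metallic_triangles(lines):
        # Each record: corner 1 with no./label/edges, corner 2, corner 3, normal + area.
        triangles = []
        for i in range(0, len(lines) - 3, 4):
            first = lines[i].split()
            no, label = int(first[0]), int(first[1])
            p1 = tuple(float(v) for v in first[2:5])
            edges = [int(v) for v in first[5:]]        # signed adjacent-edge numbers
            p2 = tuple(float(v) for v in lines[i + 1].split()[-3:])
            p3 = tuple(float(v) for v in lines[i + 2].split()[-3:])
            tail = [float(v) for v in lines[i + 3].split()]
            triangles.append({"no": no, "label": label, "corners": (p1, p2, p3),
                              "edges": edges, "normal": tuple(tail[:3]), "area": tail[3]})
        return triangles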
Metallic Triangle Edges
The data for the metallic triangle edges is given after the metallic triangles. Such an edge is generated wherever two triangles have two common vertices. An additional line (or row) gives the
components (nx, ny, nz) of the normal vector of each triangle.
DATA OF THE METALLIC EDGES (with MoM)
triangle no. points of tr. information on symmetry
no. type length/m media KORP KORM POIP POIM yz xz xy status
1 1 2.0276E-01 Free s -1 1 2 1 1 0 0 0 unknown
2 1 2.0000E-01 Free s -1 2 3 3 3 0 0 0 unknown
3 1 3.3333E-02 Free s -1 2 7 2 2 0 0 0 unknown
Note: In the above table the spacing between columns was reduced to facilitate rendering the rows as single lines of data.
Each edge is assigned a consecutive number, which appears in the first column. The second column indicates the type of the edge. The third column gives the length of the edge and the fourth column
gives the medium in which the edge is found. On an edge there are exactly two triangles. The columns KORP and KORM give the numbers of these two triangles, and the positive current direction is from the triangle KORP to the triangle KORM. The column POIP gives the number of the corner point of the triangle KORP which is opposite to the edge. The same applies to the column POIM.
The next four columns contain information regarding the symmetry. The column yz gives the number of the edge corresponding to the X=0 plane (YZ plane) of symmetry. A positive sign indicates that the
currents are symmetric and a negative sign indicates that the currents are anti-symmetric. If there is a 0 present in this column then a symmetric edge does not exist. The same applies to the next
columns xz and xy concerning the Y=0 plane and the Z=0 plane.
If the last column with the heading status displays unknown then the edge has an unknown status. This means that the applicable coefficient of the current basis function cannot be determined from the
symmetry, but has to be determined from the solution of the matrix equation. If a 0 is displayed instead then the coefficient of the current basis function is 0 due to electric or magnetic symmetry
and does not have to be determined.
If there is any other number in the status column then this number indicates another edge for which the coefficient is equal to (positive sign in the status column) or the negative of (negative sign
in the status column) the coefficient of the current basis function. From symmetry the coefficient of the current triangle does not have to be determined.
Dielectric Triangles
The data of the dielectric triangles (… method) is very similar to that of the metallic triangles.
DATA OF THE DIELECTRIC TRIANGLES
no. label x1 in m y1 in m z1 in m edges
medium x2 in m y2 in m z2 in m
medium x3 in m y3 in m z3 in m
nx ny nz area in m*m
1 0 7.1978E-01 0.0000E+00 7.1978E-01 1 2 3
1 9.4044E-01 0.0000E+00 3.8954E-01
Free s 8.6886E-01 3.5989E-01 3.8954E-01
8.2033E-01 1.6317E-01 5.4812E-01 7.2441E-02
2 0 9.4044E-01 0.0000E+00 3.8954E-01 4 5 6
1 1.0179E+00 0.0000E+00 0.0000E+00
Free s 9.4044E-01 3.8954E-01 0.0000E+00
9.6264E-01 1.9148E-01 1.9148E-01 7.8817E-02
For the dielectric edges the extract is as follows:
DATA OF THE DIELECTRIC EDGES (with MoM)
triangle no. points of tr. electr. info on symmetry ...
no. type length/m media KORP KORM POIP POIM yz xz xy status ...
1 3 3.6694E-01 Free s 1 1 3 1 3 40 75 141 unknown ...
2 3 5.1069E-01 Free s 1 1 4 2 1 41 76 142 unknown ...
3 3 3.9718E-01 Free s 1 1 45 3 2 42 -3 143 0 ...
magnet. info on symmetry
yz xz xy status
40 75 141 unknown
41 76 142 unknown
42 -3 143 unknown
Note: In the above table the spacing between columns was reduced to facilitate convenient rendering of line breaks in the rows of data.
The symmetry information is shown for the basis functions for both the equivalent electric and magnetic current densities. | {"url":"https://help.altair.com/feko/topics/feko/user_guide/output_file/geom_data_triangles_feko_r.htm","timestamp":"2024-11-03T04:39:00Z","content_type":"application/xhtml+xml","content_length":"48484","record_id":"<urn:uuid:d34b6106-1a40-452a-a6da-6494f05059e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00412.warc.gz"} |
Classical computers being surpassed by 'primitive' two qubit quantum computer in various ways
Researchers at the Universities of Bristol and Western Australia have demonstrated a practical use of a “primitive” quantum computer, using an algorithm known as “quantum walk.” They showed that a
two-qubit photonics quantum processor can outperform classical computers for this type of algorithm, without requiring more sophisticated quantum computers, such as IBM’s five-qubits cloud-based
quantum processor (see IBM makes quantum computing available free on IBM Cloud).
Quantum walk is the quantum-mechanical analog of “random-walk” models such as Brownian motion (for example, the random motion of a dust particle in air).
Ref: Efficient quantum walk on a quantum processor. Nature Communications (5 May 2016) | DOI: 10.1038/ncomms11511 | PDF (Open Access)
The random walk formalism is used across a wide range of applications, from modelling share prices to predicting population genetics. Likewise, quantum walks have shown much potential as a framework
for developing new quantum algorithms. Here we present explicit efficient quantum circuits for implementing continuous-time quantum walks on the circulant class of graphs. These circuits allow us to
sample from the output probability distributions of quantum walks on circulant graphs efficiently. We also show that solving the same sampling problem for arbitrary circulant quantum circuits is
intractable for a classical computer, assuming conjectures from computational complexity theory. This is a new link between continuous-time quantum walks and computational complexity theory and it
indicates a family of tasks that could ultimately demonstrate quantum supremacy over classical computers. As a proof of principle, we experimentally implement the proposed quantum circuit on an
example circulant graph using a two-qubit photonics quantum processor. | {"url":"https://futuristech.info/posts/classical-computers-being-surpassed-by-primitive-two-qubit-quantum-computer-in-various-ways","timestamp":"2024-11-04T11:00:30Z","content_type":"text/html","content_length":"65579","record_id":"<urn:uuid:b52ae39a-35a2-4005-afbf-c723d03e93ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00874.warc.gz"} |
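For readers who want to see what a continuous-time quantum walk looks like numerically, here is a rough sketch (unrelated to the actual experiment) that evolves a walker on a small circulant graph, an n-cycle, via |ψ(t)⟩ = e^(−iAt)|ψ(0)⟩:

    import numpy as np
    from scipy.linalg import expm

    n = 8
    A = np.zeros((n, n))                 # adjacency matrix of an n-cycle (a circulant graph)
    for j in range(n):
        A[j, (j + 1) % n] = A[(j + 1) % n, j] = 1

    psi0 = np.zeros(n, dtype=complex)
    psi0[0] = 1.0                        # walker starts at vertex 0

    t = 1.5
    psi_t = expm(-1j * t * A) @ psi0     # continuous-time quantum walk evolution
    probs = np.abs(psi_t) ** 2           # output probability distribution over vertices
    print(probs.round(4), probs.sum())   # probabilities sum to 1 (unitary evolution)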
Marginal Distribution
The marginal distribution of a subset of a collection of random variables is the probability distribution of the variables contained in the subset. It gives the probabilities of various values of the
variables in the subset without reference to the values of the other variables.
This contrasts with a Conditional Distribution, which gives the probabilities contingent upon the values of the other variables. | {"url":"https://stevengong.co/notes/Marginal-Distribution","timestamp":"2024-11-13T18:41:16Z","content_type":"text/html","content_length":"13186","record_id":"<urn:uuid:2922bfe3-906c-4cda-9003-210d4a82e15c>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00285.warc.gz"} |
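For a discrete pair $(X, Y)$ with joint distribution $p_{X,Y}$, the marginal of $X$ is obtained by summing out $Y$:

$$p_X(x) = \sum_y p_{X,Y}(x, y)$$

For example, with $p_{X,Y}(0,0)=0.1$, $p_{X,Y}(0,1)=0.3$, $p_{X,Y}(1,0)=0.2$, and $p_{X,Y}(1,1)=0.4$, the marginal is $p_X(0)=0.4$ and $p_X(1)=0.6$.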
Could we have a leaf_count() function in base sagemath?
There is a leaf_count()-type function which is standard and built-in in other CAS systems (Mathematica and Maple). It is very useful. It is used to measure the size of a mathematical expression in a
standard way.
This will make it easier, for example, to compare the size of the anti-derivative from a call to integrate in different CAS systems.
There are a couple of ad-hoc attempts on the net now to do this in sagemath, but they do not work for all expressions (give errors in some cases) and do not produce a good measure of the size of the expression.
leaf_count() is described in https://reference.wolfram.com/language/ref/LeafCount.html
LeafCount counts the number of subexpressions in expr that correspond to "leaves" on the expression tree.
So basically, if it possible to obtain the expression tree of any sagemath mathematical expression, leaf_count() should just return the number of leaf nodes of that tree.
Currently, the way I obtain leaf_count() for a sagemath expression, is by calling Maple and passing it the expression (as a string), then use Maple's LeafCount function there (after converting the
expression back to Maple, using parse function).
I use Maple not Mathematica for this, since the syntax of the sagemath expressions are much closer to Maple's.
But I prefer to do this all in sagemath. Much simpler. But the current implementation I saw does not work well for everything, and gives different result of the measure of the expression.
see as an example https://stackoverflow.com/questions/25202346/how-to-obtain-leaf-count-expression-size-in-sage
I am sure a sagemath internals expert here could write such a function and add it to sagemath as a standard built-in function. Maybe for the 9.4 version?
The following are some examples
expr:=1/2*(log(b*x + a) - log(b*x - a))/(a*b);
expr:=-1/12*sqrt(3)*arctan(1/12*(7*sqrt(3)*cos(x)^2 - 4*sqrt(3))/(cos(x)*sin(x)));
expr:=[-sqrt(-a*b)*log((a*x - b - 2*sqrt(-a*b)*sqrt(x))/(a*x + b))/(a*b), -2*sqrt(a*b)*arctan(sqrt(a*b)/(a*sqrt(x)))/(a*b)];
The sagemath leaf_count() does not have to give the same exact values as the above, but it should be very close.
Thank you
2 Answers
To every expression one can associate a syntactic tree.
You are asking for the size of that tree.
Since all tree vertices count I would call it "tree size" rather than "leaf count".
So let us write a small tree_size function.
I don't know whether Sage already has such a function, or whether a shorter one might use Sage's "expression tree walker" and "map reduce".
The function below gives the tree size of a symbolic expression, as well as of a list, tuple, or vector of symbolic expressions, or any Sage object that can be converted to a symbolic expression.
def tree_size(expr):
    r"""
    Return the tree size of this expression.
    """
    if expr not in SR:
        # deal with lists, tuples, vectors
        return 1 + sum(tree_size(a) for a in expr)
    expr = SR(expr)
    x, aa = expr.operator(), expr.operands()
    if x is None:
        return 1
    return 1 + sum(tree_size(a) for a in aa)
Here is a companion function for the string length:
def string_size(expr):
    r"""
    Return the length of the string representation of this expression.
    """
    return len(str(expr))
Let us define a list of the test cases in the question.
a, b, x = SR.var('a, b, x')
ee = [x,
      1/2*(log(b*x + a) - log(b*x - a))/(a*b),
      arctan(b*x/a)/(a*b),
      -1/12*sqrt(3)*arctan(1/12*(7*sqrt(3)*cos(x)^2 - 4*sqrt(3))/(cos(x)*sin(x))),
      [-sqrt(-a*b)*log((a*x - b - 2*sqrt(-a*b)*sqrt(x))/(a*x + b))/(a*b),
       -2*sqrt(a*b)*arctan(sqrt(a*b)/(a*sqrt(x)))/(a*b)]]
sage: print('\n'.join(f'* {e}\n {tree_size(e)}, {string_size(e)}' for e in ee))
* x
1, 1
* 1/2*(log(b*x + a) - log(b*x - a))/(a*b)
25, 39
* arctan(b*x/a)/(a*b)
14, 19
* -1/12*sqrt(3)*arctan(1/12*(7*sqrt(3)*cos(x)^2 - 4*sqrt(3))/(cos(x)*sin(x)))
31, 75
* [-sqrt(-a*b)*log((a*x - b - 2*sqrt(-a*b)*sqrt(x))/(a*x + b))/(a*b),
68, 117
Perfect match with the desired values from the question!
Now also works for the follow-up requests in the comment:
sage: [tree_size(a) for a in [1, 1/2, 3.4, i, CC(3, 2)]]
[1, 1, 1, 1, 1]
sage: x = polygen(ZZ)
sage: p = x - x^2
sage: p
-x^2 + x
sage: tree_size(p)
sage: f = (x - x^2) / (1 - 3*x)
sage: f
(-x^2 + x)/(-3*x + 1)
sage: tree_size(f)
• The Introduction to Calcium worksheet gives a demo of Calcium; one of the examples shows Calcium expressions have a num_leaves method. By the way, if you have interesting complicated expressions
in QQbar, see Fredrik Johansson's request for benchmark problems on sage-devel.
edit flag offensive delete link more
Great, will test it more. But would it be it possible to make it handle special cases when the input is non symbolic? i.e. just numbers? This will make it more complete. For example tree_size(1) and
tree_size(3.4) and tree_size(1/2) all of these return 1 in Maple. now these calls generate errors. Either object has no attribute 'operator' or maximum recursion depth exceeded. I am sure a one extra
special check is all what is needed?. Thanks.
Nasser ( 2021-05-15 12:30:58 +0100 )edit
Fixed now to work with constants, polynomials, ...
slelievre ( 2021-05-15 12:58:59 +0100 )edit
The trick is probably to define correctly what is a leaf. First rough attempt :
def LC(x):
def is_leaf(x):
from sage.symbolic.expression import Expression
return type(x) is not Expression or x.is_symbol() or x.is_numeric()
if is_leaf(x): return 1
return sum(map(LC, x.operands()))
sage: LC(sin(a+b))
sage: LC(sin(a+b).trig_expand())
sage: LC(integrate(sin(x), x, a, b, hold=True))
Note that, by design, it won't work on anything else than a symbolic expression ; polynomials, rational fractions, series, etc..., here considered as a leaf, need their own implementation, whose
definition of "leaf" isn't a trivial question. Consider :
sage: R1.<t>=QQ[]
sage: LC(t^3-3*t^2+t-1)
sage: LC(SR(t^3-3*t^2+t-1))
sage: len((t^3-3*t^2+t-1).coefficients())
sage: len((t^3).coefficients())
sage: len((t^3).coefficients(sparse=False))
Anyway, this is probably not sufficiently "general" to deserve an implementation in "core" Sage. A package grouping some tree-related utilities could be considered...
BTW, the "complexity" of an expression isn't only a function of the number of its leaves, but also of the number of its non-terminal nodes, the distribution of the depth of its branches and the
structure of common subexpressions... AFAIK, there is no universally acknowledged metric of an expression's "complexity"...
edit flag offensive delete link more | {"url":"https://ask.sagemath.org/question/57123/could-we-have-a-leaf_count-function-in-base-sagemath/?answer=57124","timestamp":"2024-11-07T00:01:16Z","content_type":"application/xhtml+xml","content_length":"69609","record_id":"<urn:uuid:f76f22bc-3650-464f-b51c-b234e6783100>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00114.warc.gz"} |
Investigating Spatial Patterns of Pulmonary Tuberculosis and Main Related Factors in Bandar Lampung, Indonesia Using Geographically Weighted Poisson Regression
Graduate School of Environmental Science, Sriwijaya University, Palembang 30139, Indonesia
Department of Environmental Sanitation, Tanjung Karang Health Polytechnic, Bandar Lampung 35145, Indonesia
Department of Pharmacology, Faculty of Medicine, Sriwijaya University, Palembang 30114, Indonesia
Department of Physics, Faculty of Mathematics and Natural Sciences, Sriwijaya University, Indralaya 30662, Indonesia
Department of Chemistry, Faculty of Mathematics and Natural Sciences, Sriwijaya University, Indralaya 30662, Indonesia
Author to whom correspondence should be addressed.
Submission received: 26 July 2022 / Revised: 15 August 2022 / Accepted: 22 August 2022 / Published: 26 August 2022
Tuberculosis (TB) is a highly infectious disease, representing one of the major causes of death worldwide. Sustainable Development Goal 3.3 implies a serious decrease in the incidence of TB cases.
Hence, this study applied a spatial analysis approach to investigate patterns of pulmonary TB cases and its drivers in Bandar Lampung (Indonesia). Our study examined seven variables: the growth rate
of pulmonary TB, population, distance to the city center, industrial area, green open space, built area, and slum area using geographically weighted Poisson regression (GWPR). The GWPR model
demonstrated excellent results with an R² and adjusted R² of 0.96 and 0.94, respectively. In this case, the growth rate of pulmonary TB and population were statistically significant variables.
Spatial pattern analysis of sub-districts revealed that those of Panjang and Kedaton were driven by high pulmonary TB growth rate and population, whereas that of Sukabumi was driven by the
accumulation of high levels of industrial area, built area, and slums. For these reasons, we suggest that local policymakers implement a variety of infectious disease prevention and control
strategies based on the spatial variation of pulmonary TB rate and its influencing factors in each sub-district.
1. Introduction
Tuberculosis (TB) is a major cause of global health problems, representing one of the leading causes of death due to infectious diseases worldwide (Table A1) []. TB is fundamentally caused by Mycobacterium tuberculosis, which primarily affects the lungs (pulmonary TB) but can also affect other sites (extrapulmonary TB) []. An acid-fast bacillus (AFB) positive smear is an early-stage indicator in diagnosing pulmonary TB []. Moreover, the AFB bacterium can cause a host of other infections in addition to TB []. Several factors contribute to pulmonary TB, including geographical factors (e.g., altitude) [], environmental factors (e.g., soil moisture) [], and socio-economic factors (e.g., population density) [].
Auchincloss et al. [] have reviewed several methods used for epidemiological spatial analysis, such as trends over time, distance calculations, spatial aggregation, clustering, spatial smoothing and interpolation, and spatial regression. Moreover, other studies have focused on the spatial analysis of TB by using the Global Moran's I and Getis-Ord Gi* autocorrelation statistics to detect the spatiotemporal patterns of TB []. Meanwhile, Li [] used a Bayesian spatiotemporal model to analyze the correlation of socio-economic, health, demographic, and meteorological factors with the population level of TB. Other studies have utilized a weight-rating and multi-criteria decision-making score model to map TB risk areas [].
Several studies have analyzed the spatial interaction between socio-economic factors and pulmonary TB cases by comparing the ordinary least squares (OLS) model and the geographically weighted regression (GWR) model []. Their results confirmed that the GWR model could better distinguish the relationship between the mean number of smear-positive TB cases and their socio-economic determinants. Hailu et al. [] have applied Getis-Ord Gi* and the GWR method to explore the spatial cluster patterns of pulmonary TB cases and, in this way, have assessed the spatial heterogeneity of the predictor variables. Despite these advancements, in-depth analysis of the spatial distribution in urban areas is lacking.
By 2020, the 30 countries with the highest TB burden accounted for 86% of TB cases worldwide. Eight of these countries accounted for two-thirds of the global total: India (26%), China (8.5%), Indonesia (8.4%), the Philippines (6.0%), Pakistan (5.8%), Nigeria (4.6%), Bangladesh (3.6%), and South Africa (3.3%) []. As per World Health Organization data from 2020 [], 10 million people worldwide suffer from TB, and 1.2 million people die every year. Global efforts are being made to end the TB epidemic by 2030 (Sustainable Development Goal (SDG) 3.3) by detecting and treating TB cases []. The strategies and SDGs imply achieving large-scale reductions in the incidence of TB, the absolute number of TB deaths, and the costs faced by TB patients [].
Indonesia is one of the countries with the highest TB caseload globally, with an estimated 845,000 infections and 98,000 deaths per year, equivalent to 11 deaths/h []. From a regional perspective, the TB case detection rate for all TB cases in Lampung Province (Indonesia) increased by 25–54% from 2017 to 2019 []. The third highest case detection rate was identified in Bandar Lampung, with 63% of cases detected []; Bandar Lampung is the capital city of Lampung Province and serves as the center of government as well as social, political, economic, educational, and cultural activities []. Given the regional importance of Bandar Lampung and the abundant TB cases within the city, elucidating their distribution and drivers is required toward achieving SDG 3.3. However, studies of local pulmonary TB cases and their drivers in Bandar Lampung are lacking.
To fill these research gaps, our study investigates the factors and distribution of pulmonary TB cases in Bandar Lampung using land use and socio-demographic variables. Firstly, we conducted correlation and scatter plot analysis to identify potential variables. Secondly, we used OLS to develop a multivariate equation for estimating pulmonary TB cases and refined it using the geographically weighted Poisson regression (GWPR) method, assessing GWPR model performance and variable significance based on the statistical report. Thirdly, we analyzed the spatial patterns of pulmonary TB cases and their influencing factors by sub-district. As a novelty, this study provides a highly accurate GWPR model and an in-depth spatial pattern analysis of pulmonary TB cases in Bandar Lampung. Furthermore, local and national policymakers can adopt the research findings to control pulmonary TB transmission across the urban area.
2. Materials and Methods
2.1. Study Area
Bandar Lampung (Figure 1) has an area of 197.22 km², a population density of approximately 6008 people/km², and a population growth rate of 2.16% per year from 2011 to 2021 []. Its population is projected to reach 1.8 million people by 2030 []. As the capital city of Lampung Province, Bandar Lampung has the highest incidence of TB cases in the province []. In 2010, from a pool of 13,533 inhabitants, 1353 were found to be AFB smear-positive []. In 2011, Bandar Lampung had 1314 TB cases, including 1000 smear-positive cases.
2.2. Spatial Data Used in This Study
This study used the available spatial data, including land use and socio-demographic data, summarized in Table 1.
2.2.1. Socio-Demographic Data
The number of pulmonary TB cases refers to tuberculosis patient data for 2015 and 2020, sourced from the Bandar Lampung City Health Office []. The pulmonary TB growth rate is the growth in TB cases, calculated from the pulmonary TB cases in 2015 and 2020, also sourced from the Bandar Lampung City Health Office []. The population is census data on the Bandar Lampung population in 2020; the census is carried out every ten years [].
2.2.2. Land Use Data
Distance to the urban center is the distance, in km, from each sub-district to the capital, collected from the government of Bandar Lampung []. The industrial area is the area of industrial zones in Bandar Lampung in 2020, archived by the Regional Development Planning Agency of Bandar Lampung City []. The green open space area is the area of green open space in Bandar Lampung, sourced from the Regional Development Planning Agency of Bandar Lampung City based on the interpretation of SPOT 6 imagery []. The slum area is the area of slums in Bandar Lampung City in 2020, sourced from the Regional Development Planning Agency of Bandar Lampung City []. Built-up areas were obtained from the Global Artificial Impervious Areas (GAIA) data products. GAIA uses the complete Landsat archive at a spatial resolution of 30 m on the Google Earth Engine platform, with data available from 1985 to 2018; additional data sets include nighttime light data and Sentinel-1 Synthetic Aperture Radar data. A cross-product comparison shows that GAIA is the only data set spanning more than 30 years, and its temporal trends agree well with other datasets at local, regional, and global scales [].
2.3. Methodology
2.3.1. Scatter Plot and Correlation Analysis
The variables influencing pulmonary TB were selected based on scatterplots and correlation analyses. The scatterplot graph and correlation coefficient can be used for identifying required variables
based on the strength of the relationship between the two variables. The correlation coefficient is calculated by:
$r_{xy} = \dfrac{n\sum xy - (\sum x)(\sum y)}{\sqrt{\left\{ n\sum x^{2} - (\sum x)^{2} \right\}\left\{ n\sum y^{2} - (\sum y)^{2} \right\}}}$
where $r_{xy}$ is the correlation coefficient, $n$ is the number of data points, $\sum x$ and $\sum y$ are the sums of each variable, $\sum xy$ is the sum of the products of the variables $x$ and $y$, and $\sum x^{2}$ and $\sum y^{2}$ are the sums of the squares of $x$ and $y$.
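This screening step can be sketched in Python (a minimal illustration; the column names are assumptions, not the study's actual field names):

    import pandas as pd

    df = pd.read_csv("bandar_lampung_subdistricts.csv")   # hypothetical input file
    predictors = ["tb_growth_rate", "population", "dist_to_center",
                  "industrial_area", "green_open_space", "built_area", "slum_area"]
    corr = df[predictors + ["tb_cases"]].corr()["tb_cases"].drop("tb_cases")
    print(corr.sort_values(ascending=False))              # Pearson r for each candidate variable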
2.3.2. Ordinary Least Square (OLS)
In this study, OLS analysis was applied to determine spatial dependencies in regression analysis, to avoid unstable parameters, to perform significance tests, and to obtain information about the spatial relationship between the parameters involved in the model []. Equation (2) formalizes the OLS regression model:
$Y_{i} = \beta_{0} + \beta_{1}X_{1i} + \beta_{2}X_{2i} + \dots + \beta_{n}X_{ni} + \varepsilon_{i}$
where $Y_{i}$ is the dependent variable, $X_{1i}, X_{2i}, \dots, X_{ni}$ are the independent variables, $\varepsilon_{i}$ represents an error term, and $\beta_{0}, \beta_{1}, \dots, \beta_{n}$ are the respective intercept and coefficients [].
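A sketch of the global OLS fit with statsmodels follows (variable names as in the previous sketch; this is illustrative, not the authors' code). The VIF screening mirrors the 7.5 cutoff applied in Section 3.1:

    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    X = sm.add_constant(df[predictors])           # df and predictors as defined above
    ols = sm.OLS(df["tb_cases"], X).fit()
    print(ols.summary())                          # coefficients, p-values, R-squared

    vifs = {name: variance_inflation_factor(X.values, j)
            for j, name in enumerate(X.columns) if name != "const"}
    print(vifs)                                   # drop any predictor with VIF > 7.5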
2.3.3. Geographically Weighted Poisson Regression (GWPR)
This study used GWPR to improve the predictions for each sub-district based on the observations in nearby sub-districts. In a GWPR, the pulmonary TB cases are predicted by a set of explanatory variables whose parameters are allowed to vary over space []. The GWPR equation is formalized in Equation (3):
$\ln(Y) = \ln(\beta_{0}(u_{i})) + \beta_{1}(u_{i})X_{1} + \beta_{2}(u_{i})X_{2} + \dots + \beta_{n}(u_{i})X_{n} + \varepsilon_{i}$
where $\beta_{n}$ represents a function of location and $u_{i} = (u_{xi}, u_{yi})$ denotes the two-dimensional coordinates of the $i$th point in space. This means that the parameter vector $(\beta_{0}, \beta_{1}, \dots, \beta_{n})$, as estimated in Equation (3), may differ between sub-districts. Thus, in the GWPR method, the parameters can be expressed by using Equation (4):
$\beta = \begin{bmatrix} \beta_{0}(u_{x1}, u_{y1}) & \beta_{1}(u_{x1}, u_{y1}) & \cdots & \beta_{n}(u_{x1}, u_{y1}) \\ \beta_{0}(u_{x2}, u_{y2}) & \beta_{1}(u_{x2}, u_{y2}) & \cdots & \beta_{n}(u_{x2}, u_{y2}) \\ \cdots & \cdots & \cdots & \cdots \\ \beta_{0}(u_{xk}, u_{yk}) & \beta_{1}(u_{xk}, u_{yk}) & \cdots & \beta_{n}(u_{xk}, u_{yk}) \end{bmatrix}$
where $k$ is the number of sub-districts. The parameters for each sub-district, which form a row in Equation (4), are estimated as follows []:
$\hat{\beta}(i) = \left( X^{T} W(u_{xi}, u_{yi}) X \right)^{-1} X^{T} W(u_{xi}, u_{yi}) Y$
In Equation (5), $W(u_{xi}, u_{yi})$ represents a $k \times k$ spatial weight matrix that can be expressed as $W(i)$:
$W(i) = \begin{bmatrix} w_{i1} & 0 & \cdots & 0 \\ 0 & w_{i2} & \cdots & 0 \\ \cdots & \cdots & \cdots & \cdots \\ 0 & \cdots & \cdots & w_{ik} \end{bmatrix}$
where $w_{ij}$ $(j = 1, 2, \dots, k)$ is the weight given to sub-district $j$ in the model adjustment for sub-district $i$.
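In practice, a local Poisson fit of this kind can be set up with the open-source mgwr package; the sketch below assumes sub-district centroid coordinates in columns x and y and an adaptive bisquare kernel, and is not the software used in this study:

    from mgwr.gwr import GWR
    from mgwr.sel_bw import Sel_BW
    from spglm.family import Poisson

    coords = list(zip(df["x"], df["y"]))          # sub-district centroids (assumed columns)
    y = df[["tb_cases"]].values
    X = df[predictors].values

    bw = Sel_BW(coords, y, X, family=Poisson(), fixed=False, kernel="bisquare").search()
    results = GWR(coords, y, X, bw, family=Poisson(), fixed=False, kernel="bisquare").fit()
    print(bw, results.aicc)                       # optimal neighbor count and AICc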
2.3.4. Model Assessment
In this study, the evaluation of model accuracy for each sub-district was carried out based on the standard deviation of the OLS and GWPR models by using Equation (7):
$S = \sqrt{\dfrac{\sum_{i=1}^{n} (X_{i} - \bar{X})^{2}}{n}}$
where $S$ is the standard deviation, $n$ is the number of data points, $X_{i}$ is the $i$th observed value, and $\bar{X}$ is the calculated average [].
Moreover, the model fitness was evaluated based on the values of R² and adjusted R², as shown in Equations (8) and (9), respectively:
$R^{2} = \dfrac{SSR}{SST} = \dfrac{\sum_{i=1}^{n} (\hat{y}_{i} - \bar{y})^{2}}{\sum_{i=1}^{n} (y_{i} - \bar{y})^{2}}$
where $SSR = \sum_{i=1}^{n} (\hat{y}_{i} - \bar{y})^{2}$ is the sum of the squared differences between the predicted $Y$ values and the average value, and $SST = \sum_{i=1}^{n} (y_{i} - \bar{y})^{2}$ is the sum of the squared differences between the actual $Y$ values and the average value.
$R^{2}_{adj} = 1 - \dfrac{MSE}{MST} = 1 - (1 - R^{2})\left( \dfrac{n-1}{n-p-1} \right)$
where $MSE$ is the mean squared error, $MST$ is the total mean squared error, $n$ is the number of observations, and $p$ is the number of variables [].
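A small helper mirroring Equations (8) and (9) (illustrative only):

    import numpy as np

    def r2_and_adjusted(y, y_hat, p):
        # y: observed values, y_hat: model predictions, p: number of predictors
        y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
        n = y.size
        r2 = np.sum((y_hat - y.mean()) ** 2) / np.sum((y - y.mean()) ** 2)
        r2_adj = 1 - (1 - r2) * (n - 1) / (n - p - 1)
        return r2, r2_adj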
2.3.5. Investigating Spatial Patterns of Incidence Rate and Main Variables
The number of pulmonary TB cases per sub-district, generated by using GWPR, was applied to quantify the incidence rate by Equation (10).
$\text{Incidence rate} = \dfrac{\text{TB cases}}{\text{Population}} \times 100{,}000$
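With case counts and populations in a table, Equation (10) is a one-liner (column names assumed, as in the earlier sketches):

    df["incidence_rate"] = df["tb_cases"] / df["population"] * 100_000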
At the last step, the incidence rate data and main variables were analyzed in depth to determine the spatial distribution pattern and factors, supporting TB cases in each sub-district of Bandar
3. Results
3.1. Correlation and OLS of AFB Smear-Positive Pulmonary TB
Figure 2 shows the scatter plot and correlation coefficient analysis, reflecting the relationship between each variable and the cases of pulmonary TB. The pulmonary TB growth rate revealed the highest correlation coefficient (0.74) with the number of pulmonary TB cases. Several other variables revealed weaker relationships, such as population (0.59), industrial area (0.45), built area (0.35), and slum area (0.31). In contrast, several variables were found to have essentially no correlation with pulmonary TB cases, including distance to the urban center (0.19) and green open space.
Table 2 summarizes the statistical results of the overall OLS model. The variance inflation factor (VIF) was applied to detect redundancy between variables; any variable with a VIF above 7.5 had to be removed. This analysis revealed that the VIF values ranged from 1.352 to 3.678, indicating no redundancy between the explanatory variables used in the study. The coefficient values indicate that several variables positively influenced the rate of pulmonary TB cases, including the pulmonary TB growth rate, slum areas, and population. At the same time, several variables showed negative effects on the rate of pulmonary TB cases, including green open space and distance to the urban center.
Table 2 also shows that the growth rate of pulmonary TB cases and the population were identified as significant variables for the regression model, with p-values of 0.0001 and 0.006, respectively. The model performance indicators revealed an R² of 0.83 and an adjusted R² of 0.73, indicating that the OLS model had significant properties and was a good fit.
The Joint F-statistic and Joint Wald statistic were computed to reflect the overall statistical significance of the model; the null hypothesis for both tests is that the model's explanatory variables are ineffective. The probability value of the Joint F-statistic was 0.001, while that of the Joint Wald statistic was 0.0001. Both values are <0.05, indicating that the resulting model was statistically significant. The Koenker (BP) statistic was computed to determine the consistency of the relationship between the explanatory variables and the dependent variable in geographic and data space; the null hypothesis for this test is that the model is stationary. The probability value of the Koenker (BP) statistic in this model was 0.212, suggesting that heteroscedasticity and non-stationarity were not statistically significant. The Jarque–Bera statistic was computed to assess the statistical distribution of the residuals; the null hypothesis for this test is that the residuals are normally distributed. The probability value of the Jarque–Bera statistic for this model was 0.639, indicating that the residuals were normally distributed and unbiased.
Equation (11) illustrates how the seven variables were applied to estimate the spread of pulmonary TB:
$Y = 0.002X_{1} - 3.416X_{2} + 0.167X_{3} - 40.034X_{4} - 8.995X_{5} + 5.615X_{6} + 0.249X_{7} - 7.420$
The seven variables are population ($X_{1}$), distance to the urban center ($X_{2}$), industrial area ($X_{3}$), green open space ($X_{4}$), built area ($X_{5}$), the five-year average pulmonary TB growth rate ($X_{6}$), and slum area ($X_{7}$).
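Evaluating the fitted global model for a single sub-district is straightforward; the predictor values below are placeholders, not data from the study:

    coeffs = [0.002, -3.416, 0.167, -40.034, -8.995, 5.615, 0.249]   # for X1..X7
    intercept = -7.420
    x = [45_000, 6.0, 0.8, 1.2, 3.5, 1.1, 0.6]                       # hypothetical values
    y_hat = intercept + sum(c * v for c, v in zip(coeffs, x))
    print(round(y_hat, 2))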
3.2. Estimation of Pulmonary TB Cases Based on GWPR Method
Figure 3 shows that the estimated pulmonary TB cases, obtained from the GWPR processing, were divided into five classes. The very high, high, medium, low, and very low classes accounted for the ranges of
149–192 cases, 123–148 cases, 90–122 cases, 57–89 cases, and 41–56 cases, respectively. Several sub-districts, including Kedaton, Sukabumi, and Panjang, were characterized by relatively higher number
of pulmonary TB cases compared to the average cases of all sub-districts. Moreover, a lower number of cases was identified in some sub-districts, such as Labuhan Ratu, Langkapura, and Enggal.
The comparison of the estimated and real numbers of AFB smear-positive pulmonary TB cases for each sub-district in Bandar Lampung is shown in Figure 4. The average error of AFB smear-positive pulmonary TB cases across all the Bandar Lampung sub-districts was six cases. The Bumi Waras and Teluk Betung Barat sub-districts were characterized by high
error, above 15 cases, while the lowest error was identified in the Kemiling sub-district, with 0 cases.
3.3. Statistical Analysis of GWPR Model
The number of neighbors obtained from the GWPR model indicates that the optimal number of adaptive neighbors in this model was 15. The sigma-squared obtained in this model was 72,329.187, thus indicating that this model matched the observed data well. The value of deviance explained by the local vs. global model was 0.716, thereby suggesting that the local model performed better than the global model. The AICc value of the GWPR model (195.456) was lower than that of the OLS model (205.284). In general, the statistical indicators of GWPR clearly indicate that the estimated GWPR model had significant properties, with an R^2 and adjusted R^2 of 0.96 and 0.94.
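For intuition, the adaptive-kernel weighting that underlies a GWPR fit with a fixed number of nearest neighbors can be sketched in a few lines of base R. This is a generic illustration of adaptive bisquare weights in the spirit of Fotheringham et al. [35], not the authors' implementation; the coordinates are simulated.

```r
# Adaptive bisquare weights for one GWR/GWPR regression point.
# The bandwidth at each point is the distance to its k-th nearest
# neighbor (k = 15 in the fitted model above; k = 4 in this toy example).
gwr_weights <- function(coords, i, k) {
  d  <- sqrt(rowSums((coords - matrix(coords[i, ], nrow(coords), 2,
                                      byrow = TRUE))^2))
  bw <- sort(d)[k + 1]                    # k-th nearest neighbor of point i
  ifelse(d < bw, (1 - (d / bw)^2)^2, 0)   # bisquare kernel, 0 beyond bw
}

set.seed(1)
coords <- matrix(runif(40), ncol = 2)       # 20 simulated centroids
round(gwr_weights(coords, i = 1, k = 4), 3) # weights for point 1's regression
```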
Figure 5 demonstrates that, visually, the residuals of GWPR were lower than the residuals of OLS. The residuals of the GWPR model were in the range of −1.5 to 1.5 Std. Dev. We identified only two sub-districts in the range of <−2.5 Std. Dev. (the Teluk Betung Barat and Bumi Waras sub-districts). Teluk Betung Barat had a reasonably medium overprediction value due to the high number of pulmonary TB cases in the neighboring Tanjung Karang Barat and Teluk Betung Selatan. Bumi Waras also had a reasonably medium overprediction value due to the high number of pulmonary TB cases in the neighboring Panjang and Sukabumi. An extremely high or low number of cases in a sub-district could have triggered an overprediction or underprediction in its neighboring sub-districts. However, we discerned somewhat low residual values in the other sub-districts, as the GWPR yielded a higher R^2 and adjusted R^2 and a lower AICc than OLS. The relatively small residuals in most sub-districts indicate that the overall number of cases estimated by the GWPR model was close to the actual values.
3.4. Spatial Pattern of Pulmonary TB Cases
As shown in Figure 6, the highest incidence rate of AFB smear-positive pulmonary TB was observed in the Kedaton sub-district, and the lowest incidence rate was observed in the Labuhan Ratu sub-district. Notably, several sub-districts with an incidence rate above 200 (Kedaton, Teluk Betung Selatan, Panjang, and Tanjung Karang Barat) require particular attention due to their number of pulmonary TB cases.
Furthermore, we conducted an in-depth analysis of the spatial pattern and elucidated the relationship between the number of pulmonary TB cases and several main variables in each sub-district of Bandar Lampung (see Figure 7). We identified numerous cases and high levels of the main variables in the Kedaton, Panjang, and Sukabumi sub-districts. Each sub-district exhibited distinct characteristics. The number of pulmonary TB cases in Kedaton was strongly affected by the high growth rate of pulmonary TB cases and population. Moreover, Panjang was affected by a high pulmonary TB case growth rate, as well as a high total population, industrial areas, and slum areas. Furthermore, in Sukabumi, pulmonary TB cases were more affected by the population, industrial areas, built areas, and slum areas.
In general, there were no sub-districts with a high rate of pulmonary TB cases in which low levels of the main variables were identified. This finding indicates that the factors driving pulmonary TB cases in Bandar Lampung were dominated by the pulmonary TB case growth rate, total population, industrial areas, built areas, and slum areas. This pattern is corroborated by the low rate of pulmonary TB cases in several sub-districts with low levels of the main variables. Moreover, two sub-districts with a low rate of pulmonary TB cases and low levels of the main variables were also identified (Langkapura and Tanjung Karang).
4. Discussion
In general, the pulmonary TB growth rate and population were the two dominant factors for pulmonary TB cases. The pulmonary TB growth rate has a strong correlation with the number of cases, while the population has a moderate correlation. OLS statistics also confirmed that these variables were statistically significant, with p-values < 0.01. Spatial pattern analysis revealed that the high pulmonary TB cases in Kedaton and Panjang were driven by the high pulmonary TB growth rate and population. According to some previous research, the growth rate of pulmonary TB cases and population have a large influence on the number of pulmonary TB cases []. Increasing urban population density and scarce health resources may contribute to the gradual expansion of the pulmonary TB epidemic. This may be so because economic development greatly promoted public transportation, which provided convenience for population mobility and opportunities for the spatial transmission of pathogens [].
The results showed that industrial areas, built areas, and slum areas had a weak and statistically insignificant correlation with pulmonary TB cases. However, based on the findings of similar studies, a more reasonable explanation is that slum settlement is one of the variables influencing the distribution of pulmonary TB []. This is reasonable because slum environments create conditions conducive to TB spread due to high population density and a lack of basic amenities such as decent housing, access to clean water, drainage, and basic sanitation. Furthermore, some of the TB-related conditions which are likely to occur in slum areas include ineffective health services in crowded and poorer populations, poor patient compliance, a large pool of untreated cases, delayed diagnosis, and inappropriate treatment regimens []. From the spatial pattern analysis, it was also identified that in Sukabumi, the number of pulmonary TB cases was high due to the large built, industrial, and slum areas. In this case, the different categorization of regions in Sukabumi may have caused a difference in the findings. As an illustration, industrial and slum areas in Bandar Lampung are only identified in particular sub-districts, i.e., Sukabumi and Panjang.
Overall, it can be concluded that sub-districts with a high rate of pulmonary TB cases tend to have a high pulmonary TB growth rate and a large population. However, in several sub-districts with high rates of pulmonary TB cases, high levels of industrial, built, and slum areas were identified. For these reasons, it is clear that the dominant factors behind pulmonary TB cases may vary geographically and can be an accumulation of several factors. This finding indicates that the local government should put extra effort into sub-districts that are densely populated and have a high growth rate of pulmonary TB cases. Beyond pulmonary TB growth and population, other main variables also affect pulmonary TB cases. Hence, this should lead to various strategic approaches to controlling pulmonary TB transmission [].
The GWPR model shows excellent results, with an R^2 and adjusted R^2 of 0.96 and 0.94, respectively. Compared with previous research, the model demonstrated more accurate results, with a higher R^2 than those produced in several previous studies []. Our high values of R^2 and adjusted R^2 imply that the developed model can better represent the spatial variation of pulmonary TB cases in Bandar Lampung. This can be used to analyze pulmonary TB control strategies by simulating the case numbers and variables. Moreover, the variables applied in this study can also be utilized as a basis for developing further spatial models of pulmonary TB cases in other urban areas.
However, there was a noticeably high difference between the R^2 and adjusted R^2 of the OLS model (the adjusted R^2 is 0.1 point lower). This difference of 0.1 point was identified due to several less relevant variables, which caused the adjusted R^2 to decrease. To alleviate this statistical shortcoming, future studies should thoroughly consider variables that significantly affect increasing pulmonary TB cases at the city scale. To this
end, Sun et al. [
] stated that environmental factors, climatic factors, and rainy days have a complex impact on increasing the prevalence of TB. Other studies have revealed that temperature, humidity, and sunlight
also affect Mycobacterium tuberculosis growth [
]. Previous studies also suggested that pollution may increase the risk of pulmonary TB in the urban center of the industrial area [
]. Therefore, environmental, climatic, and air quality indicators can be explored to analyze their relation to pulmonary TB cases [
]. In this case, in situ data measurement can be collected in some areas to study its relation to pulmonary TB cases on a local scale [
]. Some research articles also report the number of other infectious disease cases, income per capita [
], the number of industrial workers, sanitation quality, HIV prevalence, child mortality, smoking, and diabetes rates, which are additional factors associated with the progression of pulmonary TB [
]. Therefore, future studies can explore various potential variables to understand the spatial pattern of pulmonary TB cases in urban areas, especially in high incidence rate cities.
In the case of spatial epidemiology, future studies can explore spatial clustering methods, e.g., spatial autocorrelation [
], global Moran’s I statistics, Kulldorff’s scan statistic [
], Getis-Ord Gi* [
], the generalized linear regression model, and the generalized additive model [
] to analyze the spatial distribution of pulmonary TB. In addition, combining several geospatial techniques with epidemiologically related cases can provide further insight [
]. Furthermore, a spatial risk model of pulmonary TB can be developed based on a disaster mitigation approach by involving hazard, vulnerability, and capacity aspects to assist policymakers in designing intervention targets [].
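As a concrete starting point for such follow-up work, global Moran's I is available in R through the spdep package. The sketch below uses simulated coordinates and counts; in practice the neighbor list would come from the sub-district polygons (e.g., via poly2nb()).

```r
# Global Moran's I for sub-district case counts (simulated toy data).
library(spdep)

set.seed(1)
coords <- matrix(runif(40), ncol = 2)        # 20 simulated centroids
cases  <- rpois(20, lambda = 100)            # simulated case counts
nb     <- knn2nb(knearneigh(coords, k = 4))  # 4-nearest-neighbor graph
lw     <- nb2listw(nb, style = "W")          # row-standardized spatial weights

moran.test(cases, lw)                        # H0: no spatial autocorrelation
```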
5. Conclusions
Pulmonary TB is a widespread infectious disease affecting millions of people worldwide every year. Due to the alarming rate of the spread of pulmonary TB, particularly in developing countries,
medical professionals are implementing new strategies for reducing the incidence rate and the absolute number of TB deaths. Therefore, this study employed a spatial approach to understand pulmonary
TB transmission in Bandar Lampung, Indonesia. Correlation analysis depicted that the growth rate of pulmonary TB and population have strong and moderate positive correlations, respectively, with the
number of pulmonary TB cases. Analysis by OLS also confirmed that these variables are statistically significant with the p-value < 0.01. Moreover, the GWPR model demonstrated a reliable result with
an R^2 and adjusted R^2 of 0.96 and 0.94, respectively. The GWPR model developed in this study can help to simulate the current status and future direction of pulmonary TB transmission. Through
spatial analysis, we discovered that the factors of a high pulmonary TB growth rate, a large population, and large amounts of built, industrial, and slum areas affect the high rate of pulmonary TB cases in
the Kedaton, Panjang, and Sukabumi sub-districts of Bandar Lampung. However, the drivers of each sub-district are spatially varied. The variation in pulmonary TB rate and its influencing factors can
lead to different control strategies for each sub-district at the local level. In this case, policymakers should realize that geospatial insight is a critical aspect that needs to be adopted as a
part of evidence-based policymaking in epidemiology and outbreak management to achieve community health resilience.
Author Contributions
Conceptualization, H.H.; methodology, H.H.; software, H.H.; validation, M.T.K., I.I. and S.; formal analysis, H.H.; investigation, H.H.; resources, H.H.; data curation, H.H.; writing—original draft
preparation, H.H.; writing—review and editing, H.H., M.T.K., I.I. and S.; visualization, H.H.; supervision, M.T.K., I.I. and S.; project administration, H.H. All authors have read and agreed to the
published version of the manuscript.
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
The authors gratefully acknowledge the support from Sriwijaya University. We also thank the anonymous reviewers whose critical and constructive comments greatly helped us to prepare an improved
and clearer version of this paper. All persons and institutes who kindly made their data available for this research are acknowledged.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
Table A1.
Estimated number of pulmonary TB deaths in 2020 [].
Region | Pulmonary TB Deaths
World | 1,500,000
Indonesia | 13,174
Lampung | 163
Bandar Lampung | 32
1. World Health Organization. Global Tuberculosis Report 2021 [Homepage on the Internet]; World Health Organization: Geneva, Switzerland, 2021; Available online: https://www.who.int/teams/
global-tuberculosis-programme/tb-reports (accessed on 15 August 2022).
2. Ahmed, A.; Mekonnen, D.; Shiferaw, A.M.; Belayneh, F.; Yenit, M.K. Incidence and determinants of tuberculosis infection among adult patients with HIV attending HIV care in north-east Ethiopia: A
retrospective cohort study. BMJ Open 2018, 8, e016961. [Google Scholar] [CrossRef] [PubMed]
3. Chang, C.Y.; Hong, J.Y.; Yuan, M.K.; Chang, S.J.; Lee, Y.M.; Chang, S.C.; Hsu, L.C.; Chen, S.L. Risk factors in patients with AFB smear-positive sputum who receive inappropriate antituberculous
treatment. Drug Des. Devel. Ther. 2013, 7, 53–58. [Google Scholar] [CrossRef] [PubMed]
4. Acid-Fast Bacillus (AFB) Tests: MedlinePlus Medical Test. Available online: https://medlineplus.gov/lab-tests/acid-fast-bacillus-afb-tests/ (accessed on 18 April 2022).
5. Tanrikulu, A.C.; Acemoglu, H.; Palanci, Y.; Eren Dagli, C. Tuberculosis in Turkey: High altitude and other socio-economic risk factors. Public Health 2008, 122, 613–619. [Google Scholar] [
CrossRef] [PubMed]
6. Li, X.X.; Wang, L.X.; Zhang, H.; Jiang, S.W.; Fang, Q.; Chen, J.X.; Zhou, X.N. Spatial variations of pulmonary tuberculosis prevalence co-impacted by socio-economic and geographic factors in
People’s Republic of China, 2010. BMC Public Health 2014, 14, 257. [Google Scholar] [CrossRef]
7. Li, X.X.; Ren, Z.P.; Wang, L.X.; Zhang, H.; Jiang, S.W.; Chen, J.X.; Wang, J.F.; Zhou, X.N. Co-endemicity of pulmonary tuberculosis and intestinal helminth infection in the People’s Republic of
China. PLoS Negl. Trop. Dis. 2016, 10, 1–23. [Google Scholar] [CrossRef]
8. Rosli, N.M.; Shah, S.A.; Mahmood, M.I. Geographical Information System (GIS) application in tuberculosis spatial clustering studies: A systematic review. Malays. J. Public Health Med. 2018, 18,
70–80. [Google Scholar]
9. Tadesse, S.; Enqueselassie, F.; Hagos, S. Spatial and space-time clustering of tuberculosis in Gurage Zone, Southern Ethiopia. PLoS ONE 2018, 13, e0198353. [Google Scholar] [CrossRef]
10. Masabarakiza, P.; Adel Hassaan, M. Spatial-temporal analysis of tuberculosis incidence in Burundi using GIS. Cent. Afr. J. Public Health 2019, 5, 280. [Google Scholar] [CrossRef]
11. Auchincloss, A.H.; Gebreab, S.Y.; Mair, C.; Diez Roux, A.V. A review of spatial methods in epidemiology, 2000–2010. Annu. Rev. Public Health 2012, 33, 107–122. [Google Scholar] [CrossRef]
12. Mahara, G.; Yang, K.; Chen, S.; Wang, W.; Guo, X. Socio-economic predictors and distribution of tuberculosis incidence in Beijing, China: A study using a combination of spatial statistics and GIS
technology. Med. Sci. 2018, 6, 26. [Google Scholar] [CrossRef]
13. Mollalo, A.; Mao, L.; Rashidi, P.; Glass, G.E. A gis-based artificial neural network model for spatial distribution of tuberculosis across the continental united states. Int. J. Environ. Res.
Public Health 2019, 16, 157. [Google Scholar] [CrossRef] [Green Version]
14. Alene, K.A.; Viney, K.; Moore, H.C.; Wagaw, M.; Clements, A.C.A. Spatial patterns of tuberculosis and HIV coinfection in Ethiopia. PLoS ONE 2019, 14, e0226127. [Google Scholar] [CrossRef]
15. Alves, L.S.; Dos Santos, D.T.; Arcoverde, M.A.M.; Berra, T.Z.; Arroyo, L.H.; Ramos, A.C.V.; De Assis, I.S.; De Queiroz, A.A.R.; Alonso, J.B.; Alves, J.D.; et al. Detection of risk clusters for
deaths due to tuberculosis specifically in areas of southern Brazil where the disease was supposedly a non-problem. BMC Infect. Dis. 2019, 19, 628. [Google Scholar] [CrossRef]
16. Li, Q.; Liu, M.; Zhang, Y.; Wu, S.; Yang, Y.; Liu, Y.; Amsalu, E.; Tao, L.; Liu, X.; Zhang, F.; et al. The spatio-temporal analysis of the incidence of tuberculosis and the associated factors in
mainland China, 2009–2015. Infect. Genet. Evol. 2019, 75, 103949. [Google Scholar] [CrossRef]
17. Abdul Rasam, A.R.; Mohd Shariff, N.; Dony, J.F. Geospatial-Based Model for Diagnosing Potential High-Risk Areas of Tuberculosis Disease in Malaysia. MATEC Web. Conf. 2019, 266, 02007. [Google
Scholar] [CrossRef]
18. Wei, W.; Yuan-Yuan, J.; Ci, Y.; Ahan, A.; Ming-Qin, C. Local spatial variations analysis of smear-positive tuberculosis in Xinjiang using Geographically Weighted Regression model. BMC Public
Health 2016, 16, 1058. [Google Scholar] [CrossRef]
19. Wang, Q.; Guo, L.; Wang, J.; Zhang, L.; Zhu, W.; Yuan, Y.; Li, J. Spatial distribution of tuberculosis and its socioeconomic influencing factors in mainland China 2013–2016. Trop. Med. Int.
Health 2019, 24, 1104–1113. [Google Scholar] [CrossRef]
20. Dangisso, M.H.; Datiko, D.G.; Lindtjørn, B. Identifying geographical heterogeneity of pulmonary tuberculosis in southern Ethiopia: A method to identify clustering for targeted interventions.
Glob. Health Action 2020, 13, 1785737. [Google Scholar] [CrossRef]
21. World Health Organization. Global Tuberculosis Report 2020; World Health Organization: Geneva, Switzerland, 2020; ISBN 9789240013131. [Google Scholar]
22. Rood, E.; Khan, A.H.; Modak, P.K.; Mergenthaler, C.; Van Gurp, M.; Blok, L.; Bakker, M. A spatial analysis framework to monitor and accelerate progress towards SDG 3 to end TB in Bangladesh.
ISPRS Int. J. Geo-Inf. 2019, 8, 14. [Google Scholar] [CrossRef]
23. Pemerintah Provinsi Lampung Dinkes. Riskesdas Profil Kesehatan Provinsi Lampung Tahun 2019; Pemerintah Provinsi Lampung Dinkes: Bandar Lampung, Indonesia, 2019; p. 136. [Google Scholar]
24. Badan Pusat Statistik Kota Bandar Lampung. Bandar Lampung in Figure 2021 [Homepage on the Internet]; Badan Pusat Statistik Kota Bandar Lampung: Bandar Lampung, Indonesia, 2021; Available online:
https://bandarlampungkota.bps.go.id/publication/2021/02/26/89c1b3d0038567aff884ca04/kota-bandar-lampung-dalam-angka-2021.html (accessed on 25 August 2022).
25. Badan Pusat Statistik Kota Bandar Lampung. Bandar Lampung in Figure 2022 [Homepage on the Internet]; Badan Pusat Statistik Kota Bandar Lampung: Bandar Lampung, Indonesia, 2022; Available online:
https://bandarlampungkota.bps.go.id/publication/2022/02/25/0890a0fd32082cf574db32af/kota-bandar-lampung-dalam-angka-2022.html (accessed on 25 August 2022).
26. Profil Perumahan Dan Kawasan Permukiman Kota Bandar Lampung—Perkim.Id. Available online: https://perkim.id/pofil-pkp/profil-kabupaten-kota/
profil-perumahan-dan-kawasan-permukiman-kota-bandar-lampung/ (accessed on 20 April 2022).
27. Rencana Strategis Dinkes Provinsi Lampung Tahun 2015–2019; Dinas Kesehatan Provinsi Lampung: Bandar Lampung, Indonesia, 2019; Volume 58.
28. Lestari, A. Pengaruh Terapi Psikoedukasi Keluarga Terhadap Pengetahuan dan Tingkat Ansietas Keluarga Dalam Merawat Anggota Keluarga yang Mengalami Tuberculosis Paru di Kota Bandar Lampung. J.
Ilmiah Kesehatan 2011, 1. [Google Scholar] [CrossRef]
29. Dinas Kesehatan Kota Bandar Lampung. Bandar Lampung Health Profile 2015-2020 [Homepage on the Internet]. Dinas Kesehatan Kota Bandar Lampung: Bandar Lampung, Indonesia. 2020. Available online:
https://dinkeskotabalam.com/laporan (accessed on 15 August 2022).
30. BAPPEDA|Kota Bandar Lampung. Available online: https://bappeda.bandarlampungkota.go.id/ (accessed on 19 January 2022).
31. Gong, P.; Li, X.; Wang, J.; Bai, Y.; Chen, B.; Hu, T.; Liu, X.; Xu, B.; Yang, J.; Zhang, W.; et al. Annual maps of global artificial impervious area (GAIA) between 1985 and 2018. Remote Sens.
Environ. 2020, 236, 111510. [Google Scholar] [CrossRef]
32. Kumar, C.; Singh, P.K.; Rai, R.K. Under-five mortality in high focus states in india: A district level geospatial analysis. PLoS ONE 2012, 7, e0037515. [Google Scholar] [CrossRef]
33. Li, C.; Li, F.; Wu, Z.; Cheng, J. Exploring spatially varying and scale-dependent relationships between soil contamination and landscape patterns using geographically weighted regression. Appl.
Geogr. 2017, 82, 101–114. [Google Scholar] [CrossRef]
34. Nakaya, T.; Fotheringham, A.S.; Brunsdon, C.; Charlton, M. Geographically weighted poisson regression for disease association mapping. Stat. Med. 2005, 24, 2695–2717. [Google Scholar] [CrossRef]
35. Fotheringham, A.S.; Brunsdon, C.; Charlton, M. Geographically Weighted Regression: The Analysis of Spatially Varying Relationships; Wiley: Chichester, UK, 2002. [Google Scholar]
36. Soewarno. Hidrologi Aplikasi Metode Statistik untuk Analisa Data, 1st ed.; NOVA: Bandung, Indonesia, 1995. [Google Scholar]
37. Helland, I.S. On the interpretation and use of R^2 in regression analysis. Biometrics 1987, 43, 61. [Google Scholar] [CrossRef]
38. Noorcintanami, S.; Widyaningsih, Y.; Abdullah, S. Geographically weighted models for modelling the prevalence of tuberculosis in Java. J. Phys.: Conf. Ser. 2021, 1722, 012089. [Google Scholar]
39. Bui, L.V.; Mor, Z.; Chemtob, D.; Ha, S.T.; Levine, H. Use of geographically weighted poisson regression to examine the effect of distance on tuberculosis incidence: A case study in Nam Dinh,
Vietnam. PLoS ONE 2018, 13, e0207068. [Google Scholar] [CrossRef]
40. Sun, W.; Gong, J.; Zhou, J.; Zhao, Y.; Tan, J.; Ibrahim, A.N.; Zhou, Y. A spatial, social and environmental study of tuberculosis in China using statistical and GIS technology. Int. J. Environ.
Res. Public Health 2015, 12, 1425–1448. [Google Scholar] [CrossRef]
41. Dos Santos, M.A.P.S.; Albuquerque, M.F.P.M.; Ximenes, R.A.A.; Lucena-Silva, N.L.C.L.; Braga, C.; Campelo, A.R.L.; Dantas, O.M.S.; Montarroyos, U.R.; Souza, W.V.; Kawasaki, A.M.; et al. Risk
factors for treatment delay in pulmonary tuberculosis in Recife, Brazil. BMC Public Health 2005, 5, 25. [Google Scholar] [CrossRef]
42. Edelson, P.J.; Phypers, M. TB transmission on public transportation: A review of published studies and recommendations for contact tracing. Travel Med. Infect. Dis. 2011, 9, 27–31. [Google Scholar]
43. Ogbudebe, C.L.; Chukwu, J.N.; Nwafor, C.C.; Meka, A.O.; Ekeke, N.; Madichie, N.O.; Anyim, M.C.; Osakwe, C.; Onyeonoro, U.; Ukwaja, K.N.; et al. Reaching the underserved: Active tuberculosis case
finding in urban slums in southeastern Nigeria. Int. J. Mycobacteriol. 2015, 4, 18–24. [Google Scholar] [CrossRef]
44. Bam, K.; Bhatt, L.P.; Thapa, R.; Dossajee, H.K.; Angdembe, M.R. Illness perception of tuberculosis (TB) and health seeking practice among urban slum residents of Bangladesh: A qualitative study.
BMC Res. Notes 2014, 7, 572. [Google Scholar] [CrossRef]
45. Banu, S.; Rahman, M.T.; Uddin, M.K.M.; Khatun, R.; Ahmed, T.; Rahman, M.M.; Husain, M.A.; van Leth, F. Epidemiology of tuberculosis in an urban slum of Dhaka City, Bangladesh. PLoS ONE 2013, 8,
e0077721. [Google Scholar] [CrossRef] [Green Version]
46. Kerubo, G.; Amukoye, E.; Niemann, S.; Kariuki, S. Drug susceptibility profiles of pulmonary Mycobacterium tuberculosis isolates from patients in informal urban settlements in Nairobi, Kenya. BMC
Infect. Dis. 2016, 16, 583. [Google Scholar] [CrossRef]
47. Oppong, J.R.; Mayer, J.; Oren, E. The global health threat of African urban slums: The example of urban tuberculosis. Afr. Geogr. Rev. 2015, 34, 182–195. [Google Scholar] [CrossRef]
48. Hargreaves, J.R.; Boccia, D.; Evans, C.A.; Adato, M.; Petticrew, M.; Porter, J.D.H. The social determinants of tuberculosis: From evidence to action. Am. J. Public Health 2011, 101, 654–662. [
Google Scholar] [CrossRef]
49. Duarte, R.; Lönnroth, K.; Carvalho, C.; Lima, F.; Carvalho, A.C.C.; Muñoz-Torrico, M.; Centis, R. Tuberculosis, social determinants and co-morbidities (including HIV). Pulmonology 2018, 24,
115–119. [Google Scholar]
50. Rachow, A.; Ivanova, O.; Wallis, R.; Charalambous, S.; Jani, I.; Bhatt, N.; Kampmann, B.; Sutherland, J.; Ntinginya, N.E.; Evans, D.; et al. TB sequel: Incidence, pathogenesis and risk factors of
long-term medical and social sequelae of pulmonary TB—A study protocol. BMC Pulm. Med. 2019, 19, 4. [Google Scholar] [CrossRef]
51. Goschin, Z.; Druica, E. Regional factors hindering tuberculosis spread in Romania: Evidence from a semiparimetric GWR model. J. Soc. Sci. Econ. 2017, 6, 15–29. [Google Scholar]
52. Krishnan, R.; Thiruvengadam, K.; Jayabal, L.; Selvaraju, S.; Watson, B.; Malaisamy, M.; Nagarajan, K.; Tripathy, S.P.; Chinnaiyan, P.; Chandrasekaran, P. An influence of dew point temperature on
the occurrence of Mycobacterium tuberculosis disease in Chennai, India. Sci. Rep. 2022, 12, 6147. [Google Scholar] [CrossRef]
53. Xu, M.; Li, Y.; Liu, B.; Chen, R.; Sheng, L.; Yan, S.; Chen, H.; Hou, J.; Yuan, L.; Ke, L.; et al. Temperature and humidity associated with increases in tuberculosis notifications: A time-series
study in Hong Kong. Epidemiol. Infect. 2020, 149, e8. [Google Scholar] [CrossRef]
54. Fernandes, F.M.d.C.; Martins, E.d.S.; Pedrosa, D.M.A.S.; Evangelista, M.d.S.N. Relationship between climatic factors and air quality with tuberculosis in the Federal District, Brazil, 2003–2012.
Braz. J. Infect. Dis. 2017, 21, 369–375. [Google Scholar] [CrossRef]
55. Lin, Y.J.; Lin, H.C.; Yang, Y.F.; Chen, C.Y.; Ling, M.P.; Chen, S.C.; Chen, W.Y.; You, S.H.; Lu, T.H.; Liao, C.M. Association between ambient air pollution and elevated risk of tuberculosis
development. Infect. Drug Resist. 2019, 12, 3835–3847. [Google Scholar] [CrossRef] [PubMed]
56. Lai, T.C.; Chiang, C.Y.; Wu, C.F.; Yang, S.L.; Liu, D.P.; Chan, C.C.; Lin, H.H. Ambient air pollution and risk of tuberculosis: A cohort study. Occup. Environ. Med. 2016, 73, 56–61. [Google
Scholar] [CrossRef] [PubMed]
57. Huang, S.; Xiang, H.; Yang, W.; Zhu, Z.; Tian, L.; Deng, S.; Zhang, T.; Lu, Y.; Liu, F.; Li, X.; et al. Short-term effect of air pollution on tuberculosis based on kriged data: A time-series
analysis. Int. J. Environ. Res. Public Health 2020, 17, 1522. [Google Scholar] [CrossRef] [PubMed] [Green Version]
58. Dye, C.; Lönnroth, K.; Jaramillo, E.; Williams, B.G.; Raviglione, M. Trends in tuberculosis incidence and their determinants in 134 countries. Bull. World Health Organ. 2009, 87, 683–691. [Google
Scholar] [CrossRef] [PubMed]
59. Khaliq, A.; Khan, I.H.; Akhtar, M.W.; Chaudhry, M.N. Environmental risk factors and social determinants of pulmonary tuberculosis in Pakistan. Epidemiol. Open Access 2015, 5, 201. [Google Scholar]
60. Gurung, L.M.; Bhatt, L.D.; Karmacharya, I.; Yadav, D.K. Dietary practice and nutritional status of tuberculosis patients in Pokhara: A cross sectional study. Front. Nutr. 2018, 5, 63. [Google
Scholar] [CrossRef]
61. Heunis, J.C.; Kigozi, N.G.; Chikobvu, P.; Botha, S.; Van Rensburg, H.D. Risk factors for mortality in TB patients: A 10-year electronic record review in a South African province. BMC Public
Health 2017, 17, 1–7. [Google Scholar] [CrossRef]
62. Shimeles, E.; Enquselassie, F.; Aseffa, A.; Tilahun, M.; Mekonen, A.; Wondimagegn, G.; Hailu, T. Risk factors for tuberculosis: A case–control study in Addis Ababa, Ethiopia. PLoS ONE 2019, 14,
e0212235. [Google Scholar] [CrossRef]
63. Ren, H.; Lu, W.; Li, X.; Shen, H. Specific urban units identified in tuberculosis epidemic using a geographical detector in Guangzhou, China. Infect. Dis. Poverty 2022, 11, 44. [Google Scholar] [CrossRef]
64. Asemahagn, M.A.; Alene, G.D.; Yimer, S.A. Spatial-temporal clustering of notified pulmonary tuberculosis and its predictors in East Gojjam Zone, Northwest Ethiopia. PLoS ONE 2021, 16, e0245378. [
Google Scholar] [CrossRef]
65. Liao, W.B.; Ju, K.; Gao, Y.M.; Pan, J. The association between internal migration and pulmonary tuberculosis in China, 2005-2015: A spatial analysis. Infect. Dis. Poverty 2020, 9, 1–12. [Google
Scholar] [CrossRef] [PubMed]
66. Gwitira, I.; Karumazondo, N.; Shekede, M.D.; Sandy, C.; Siziba, N.; Chirenda, J. Spatial patterns of pulmonary tuberculosis (TB) cases in Zimbabwe from 2015 to 2018. PLoS ONE 2021, 16, e0249523.
[Google Scholar] [CrossRef] [PubMed]
67. Im, C.; Kim, Y. Spatial pattern of tuberculosis (TB) and related socio-environmental factors in South Korea, 2008–2016. PLoS ONE 2021, 16, e0255727. [Google Scholar] [CrossRef]
68. Wang, W.; Guo, W.; Cai, J.; Guo, W.; Liu, R.; Liu, X.; Ma, N.; Zhang, X.; Zhang, S. Epidemiological characteristics of tuberculosis and effects of meteorological factors and air pollutants on
tuberculosis in Shijiazhuang, China: A distribution lag non-linear analysis. Environ. Res. 2021, 195, 110310. [Google Scholar] [CrossRef] [PubMed]
69. Shaweno, D.; Karmakar, M.; Alene, K.A.; Ragonnet, R.; Clements, A.C.; Trauer, J.M.; Denholm, J.T.; McBryde, E.S. Methods used in the spatial analysis of tuberculosis epidemiology: A systematic
review. BMC Med. 2018, 16, 193. [Google Scholar] [CrossRef]
70. Kementerian Kesehatan, R.I. Indonesia Health Profile 2020 [Homepage on the Internet]; Kementerian Kesehatan RI: Jakarta, Indonesia, 2020; Available online: https://pusdatin.kemkes.go.id/resources
/download/pusdatin/profil-kesehatan-indonesia/Profil-Kesehatan-Indonesia-Tahun-2020.pdf (accessed on 15 August 2022).
71. Dinas Kesehatan Provinsi Lampung. Lampung Health Profile 2020 [Homepage on the Internet]; Dinas Kesehatan Provinsi Lampung: Bandar Lampung, Indonesia, 2020; Available online: https://
dinkes.lampungprov.go.id/wpfd_file/profil-kesehatan-provinsi-lampung-tahun-2020/ (accessed on 15 August 2022).
Figure 3. Spatial distribution of AFB smear-positive pulmonary tuberculosis (TB) cases in Bandar Lampung in 2020.
Figure 4. Estimated and real AFB smear-positive pulmonary tuberculosis (TB) by sub-districts in Bandar Lampung.
Figure 7. Spatial variations of pulmonary TB cases, pulmonary TB growth rate, population, built area, industrial area, and slums.
No. | Data | Data Class | Timespan | Reference
1 | Number of Pulmonary Tuberculosis Cases | Socio-demographic | 2020 | [29]
2 | Pulmonary Tuberculosis Growth Rate | Socio-demographic | 2015–2020 | [29]
3 | Population | Socio-demographic | 2020 | [24]
4 | Distance to the Urban Center | Land Use | 2020 | [24]
5 | Industrial Area | Land Use | 2020 | [30]
6 | Green Open Space Area | Land Use | 2020 | [30]
7 | Slums Area | Land Use | 2020 | [30]
8 | Built Area (GAIA) | Land Use | 1985–2018 | [31]
Variable | Coefficient | StdError | t-Statistics | Probability | Robust_SE | Robust_t | Robust_Pr | VIF
Intercept | −7.420 | 25.487 | −0.291 | 0.078 | 19.944 | −0.372 | 0.716 | -
Population | 0.002 | 0.001 | 3.320 | 0.006 * | 0.001 | 5.773 | 0.000 * | 2.631
Distance to the Urban Center | −3.416 | 1.975 | −1.730 | 0.109 | 1.336 | −2.556 | 0.025 * | 1.828
Industrial Area | 0.167 | 3.763 | 0.044 | 0.965 | 2.568 | 0.065 | 0.949 | 3.678
Green Open Space | −40.034 | 56.106 | −0.714 | 0.489 | 37.453 | −1.069 | 0.306 | 1.931
Built Area | −8.995 | 6.864 | −1.311 | 0.215 | 5.318 | −1.691 | 0.117 | 2.591
5 Years Average Pulmonary TB Growth Rate | 5.615 | 1.157 | 4.581 | 0.000 * | 1.195 | 4.697 | 0.001 * | 1.352
Slums | 0.249 | 0.143 | 1.735 | 0.108 | 0.078 | 3.190 | 0.008 * | 2.633
Diagnostics of OLS
Number of Observations | 20 | Akaike’s Information Criterion (AICc) | 205.284
Multiple R-Squared | 0.83 | Adjusted R-Squared | 0.73
Joint F-Statistics | 8.288 | Prob (>F), (7,12) degrees of freedom | 0.001 *
Joint Wald Statistics | 177.349 | Prob (>chi-squared), (7) degrees of freedom | 0.000 *
Koenker (BP) Statistics | 9.603 | Prob (>chi-squared), (7) degrees of freedom | 0.212 *
Jarque–Bera Statistics | 0.896 | Prob (>chi-squared), (2) degrees of freedom | 0.639 *
* An asterisk next to a number indicates a statistically significant p-value (p < 0.01).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Helmy, H.; Kamaluddin, M.T.; Iskandar, I.; Suheryanto. Investigating Spatial Patterns of Pulmonary Tuberculosis and Main Related Factors in Bandar Lampung, Indonesia Using Geographically Weighted Poisson Regression. Trop. Med. Infect. Dis. 2022, 7, 212. https://doi.org/10.3390/tropicalmed7090212
APS March Meeting 2013
Bulletin of the American Physical Society
Volume 58, Number 1
Monday–Friday, March 18–22, 2013; Baltimore, Maryland
Session C26: Semiconductor Qubits - Gates and Robust Control
Sponsoring Units: GQI
Chair: Hendrik Bluhm, RWTH Aachen
Room: 328
C26.00001: Interplay of charge and spin coherence in Landau-Zener interferometry in double quantum dots
Monday, March 18, 2013, 2:30PM–3:06PM
Invited Speaker: Hugo Ribeiro
Landau-Zener-St\"{u}ckelberg-Majorana (LZSM) physics has been exploited to coherently manipulate two-electron spin states in a GaAs double quantum dot (DQD) at a singlet (S)-triplet ($\textrm{T}_+$) anti-crossing. The anti-crossing results from the hyperfine interaction with the nuclear spins of the host material [1,2]. However, the fluctuations of the nuclear spin bath result in spin dephasing within $T_2^* \sim 10-20$ ns. As a consequence, the sweep through the anti-crossing would have to be performed on a timescale comparable to $T_2^*$ to achieve LZSM oscillations with 100\% visibility. Moreover, the S-$\textrm{T}_+$ anti-crossing is located near the $(1,1)-(2,0)$ interdot charge transition, where $(n_{l}, n_{\mathrm{r}})$ denotes the number of electrons in the left and right quantum dot. As a result the singlet state involved in the dynamics is a superposition of $(1,1)$ and $(2,0)$ singlet states. Here we show that it is possible to increase the oscillation visibility while keeping sweep times less than $T_2^*$ using a tailored pulse with a detuning dependent level velocity. The pulse includes a slow level velocity portion that is chosen to coincide with the passage through the S-$\textrm{T}_+$ anti-crossing and two fast level velocity portions. The latter minimize the time spent in regions where spin and charge degrees of freedom are entangled, which renders the qubit susceptible to charge noise. The slow level velocity portion of the pulse results in a stronger effective coupling between the spin states, which increases the oscillation visibility [3,4]. In particular, we were able to obtain a visibility of $\sim 0.5$ for LZSM oscillations. This constitutes an important step towards the implementation of a Hadamard gate.
[1] J. R. Petta, H. Lu, and A. C. Gossard, Science \textbf{327}, 669 (2010).
[2] H. Ribeiro, J. R. Petta, and G. Burkard, Phys. Rev. B \textbf{82}, 115445 (2010).
[3] H. Ribeiro, G. Burkard, J. R. Petta, H. Lu, and A. C. Gossard, arXiv:1207.2972 (2012).
[4] H. Ribeiro, J. R. Petta, G. Burkard, arXiv:1210.1957 (2012).
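A standard reference point for the level-velocity discussion above (not part of the abstract): in the textbook two-level Landau-Zener problem, a linear sweep with level velocity $v$ through an anti-crossing with coupling $\Delta$ gives a diabatic transition probability $P_{\mathrm{LZ}} = \exp(-2\pi\Delta^{2}/\hbar v)$, up to convention-dependent factors in the definition of $\Delta$; slowing the sweep near the anti-crossing therefore suppresses diabatic passage and strengthens the effective coupling.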
C26.00002: Decoherence-protected nuclear spin quantum register in diamond
Monday, March 18, 2013, 3:06PM–3:18PM
Viatcheslav Dobrovitski, Wan Jung Kuo, Ronald Hanson, Tim H. Taminiau
We analyze the decoherence-protected operation of a quantum register based on the nuclear spins surrounding a nitrogen-vacancy (NV) center in diamond. Combination of the decoherence protection with the quantum gates is achieved by applying the decoupling pulses to the NV center's electronic spin in resonance with the motion of one of the nuclear spins [1,2]. In this way, many weakly coupled (tens of kHz) nuclei located far from the NV center can be combined in a quantum register. We study the limits, set by realistic experimental parameters, on the size of such a register and on the duration of the quantum gates needed for its operation. We also consider the ways of accelerating the quantum gate operation, and integration of the decoherence-protected gates with the decoupling of the nuclear spins themselves. We conclude that creation of such registers is feasible with current experimental capabilities. Work at the Ames Laboratory was supported by the Department of Energy - Basic Energy Sciences under Contract No. DE-AC02-07CH11358.
[1] T. van der Sar et al., Nature 484, 82 (2012).
[2] T. H. Taminiau et al., Phys. Rev. Lett. 109, 137602 (2012).
C26.00003: Enhancement of Inter-qubit Coupling in Singlet-Triplet Qubits by Floating Metal Gate
Monday, March 18, 2013, 3:18PM–3:30PM
Shannon Harvey, Michael Shulman, Oliver Dial, Hendrik Bluhm, Vladimir Umansky, Amir Yacoby
Spin qubits in semiconductors are promising systems for quantum computing, because they have long coherence times and are potentially scalable. However, their weak interaction with the environment, which gives their long coherence times, also makes inter-qubit interactions weak. Numerous proposals use electrostatic coupling between qubits for entangling operations, but these interactions require the qubits to be near one another. These proposals also suggest that adding a metallic gate between two qubits could increase coupling and allow the qubits to be spatially separated. We present results on two singlet-triplet (S-T$_{0})$ qubits connected by a floating metallic gate. Previous work on two-qubit operations, which use a capacitive coupling, showed that the inter-qubit coupling is weak and requires the qubits to be in close proximity. We find that the inter-qubit coupling is increased with the inclusion of a floating metal gate, which improves entangling operation fidelities and allows for these qubits to be spatially separated. Together, these improvements open the door to a scalable architecture for quantum information processing for all semiconductor spin qubit platforms.
C26.00004: Probing quantum phase transitions on a spin chain with a double quantum dot
Monday, March 18, 2013, 3:30PM–3:42PM
Yun-Pil Shim, Sangchul Oh, Jianjia Fei, Xuedong Hu, Mark Friesen
We propose a local, projective scheme for detecting quantum phase transitions (QPTs) in a quantum dot spin chain [1]. QPTs in qubit systems are known to produce singularities in the entanglement, which could in turn be used to probe the QPT. Current proposals to measure the entanglement are challenging however, because of their nonlocal nature. We present numerical and analytical evidence that entanglement in a double quantum dot (DQD) coupled locally to a spin chain exhibits singularities at the critical points of the spin chain, and that these singularities are reflected in the singlet probabilities of the DQD. This result suggests that a DQD can be used as an efficient probe of QPTs through projective singlet measurements. We propose a simple experiment to test this concept in a linear triple quantum dot.
[1] Y.-P. Shim {\it et al.}, arXiv:1209.5445
C26.00005: Coherent electron transfer between distant quantum dots in a linear array
Monday, March 18, 2013, 3:42PM–3:54PM
Floris Braakman, Pierre Barthelemy, Lieven Vandersypen
Tunnel coupled quantum dots form the basis for electronic charge and spin qubits in semiconductors. The tunnel coupling gives rise to quantum coherent phenomena such as exchange oscillations of neighboring spins. However, tunnel coupling strength between non-neighbouring sites is negligible and it is therefore desirable to develop a form of long range coupling. In a linear array of three quantum dots, we demonstrate an effective tunnel coupling between the outer dots through virtual occupation of discrete levels in the center dot. The coupling strength depends strongly on the detuning between center and outer dot levels. The observation of Landau-Zener-Stueckelberg oscillations demonstrates the coherent nature of the coupling. In principle the effective long-range tunnel coupling should also allow coherent exchange of remote spins.
C26.00006: Dynamically Corrected Pulse Sequences for the Exchange Only Qubit
Monday, March 18, 2013, 3:54PM–4:06PM
Garrett Hickman, Jason Kestner
In the exchange-only qubit, hyperfine interactions of qubit electrons with neighboring atoms introduce decoherence into the basis states and mix them with a third leaked state. We theoretically derive a scheme for performing arbitrary single-qubit rotations on the exchange-only qubit while canceling all hyperfine-induced errors to first order. We compare numerically the performance of the resulting pulse sequences with that of the simplest na\"ive implementations for a range of hyperfine interaction strengths. While for typical operations these sequences are roughly 50 times longer than a simple uncorrected pulse, error is significantly reduced. We show that for hyperfine field inhomogeneities less than one thirtieth of the maximum exchange strength, typical hyperfine-induced errors are reduced by at least an order of magnitude.
C26.00007: Composite pulses robust against charge noise and magnetic field noise for universal control of a singlet-triplet qubit
Monday, March 18, 2013, 4:06PM–4:18PM
Xin Wang, Edwin Barnes, Jason P. Kestner, Lev S. Bishop, Sankar Das Sarma
We generalize our SUPCODE pulse sequences [1] for singlet-triplet qubits to correct errors from imperfect control. This yields gates that are simultaneously corrected for both charge noise and magnetic field gradient fluctuations, addressing the two dominant $T_2^*$ processes. By using this more efficient version of SUPCODE, we are able to introduce this capability while also substantially reducing the overall pulse time compared to the previous sequence. We show that our sequence remains realistic under experimental constraints such as finite bandwidth.
[1] Wang et al., ``Composite pulses for robust universal control of singlet-triplet qubits'', Nat. Commun. 3, 997 (2012)
C26.00008: Composite multi-qubit gates dynamically corrected against charge noise and magnetic field noise for singlet-triplet qubits
Monday, March 18, 2013, 4:18PM–4:30PM
Jason Kestner, Edwin Barnes, Xin Wang, Lev Bishop, Sankar Das Sarma
We use previously described single-qubit SUPCODE pulses on both intra-qubit and inter-qubit exchange couplings, integrated with existing strategies such as BB1, to theoretically construct a CNOT gate that is robust against both charge noise and magnetic field gradient fluctuations. We show how this allows scalable, high-fidelity implementation of arbitrary multi-qubit operations using singlet-triplet spin qubits in the presence of experimentally realistic noise.
C26.00009: Dynamically corrected gates for singlet-triplet spin qubits with control-dependent errors
Monday, March 18, 2013, 4:30PM–4:42PM
N. Tobias Jacobson, Wayne M. Witzel, Erik Nielsen, Malcolm S. Carroll
Magnetic field inhomogeneity due to random polarization of quasi-static local magnetic impurities is a major source of environmentally induced error for singlet-triplet double quantum dot (DQD) spin qubits. Moreover, for singlet-triplet qubits this error may depend on the applied controls. This effect is significant when a static magnetic field gradient is applied to enable full qubit control. Through a configuration interaction analysis, we observe that the dependence of the field inhomogeneity-induced error on the DQD bias voltage can vary systematically as a function of the controls for certain experimentally relevant operating regimes. To account for this effect, we have developed a straightforward prescription for adapting dynamically corrected gate sequences that assume control-independent errors into sequences that compensate for systematic control-dependent errors. We show that accounting for such errors may lead to a substantial increase in gate fidelities.
C26.00010: High fidelity gates in quantum dot spin qubits
Monday, March 18, 2013, 4:42PM–4:54PM
Mark Friesen, Teck Seng Koh, S. N. Coppersmith
A variety of logical qubits and quantum gates have been proposed for quantum computer architectures using top-gated quantum dots. Despite their differences, we show that many combinations of qubits and gates can be evaluated on an equal footing by optimizing the gating protocols for maximum fidelity. Here, we evaluate single-qubit gate operations for two types of logical qubits: singlet-triplet qubits and quantum dot hybrid qubits. In both cases, transitions between the qubit states are controlled by the exchange interaction between the dots, which in turn depends on the tunnel coupling and the detuning. We compute the fidelities for three exchange gate protocols: a dc pulsed gate, an ac resonant gate, and stimulated Raman adiabatic passage (STIRAP). Remarkably, we find that the optimized fidelities for all three gates follow a simple scaling law; the maximum fidelity depends only on the range of parameters that can be achieved experimentally. We show that a singlet-triplet qubit can be pulse-gated with significantly higher fidelity than a hybrid qubit, and that the highest overall fidelity should be achieved in a hybrid qubit using a STIRAP gating protocol.
C26.00011: Theoretical hyperfine decay functions in triple quantum dots
Monday, March 18, 2013, 4:54PM–5:06PM
Thaddeus Ladd
Coherent oscillations in multiple quantum dots decay due to hyperfine interactions with nuclear spins. The decay functions observed in several double-dot experiments [1] agree well with simple formulae derived using the group SU(2), which is defined by exchange and hyperfine interactions in the singlet-triplet system [2]. We show that in triple dots, this theory generalizes to SU(3), with convenient representation in the basis of states of the exchange-only qubit in a decoherence-free subsystem [3]. Using some intuition from SU(3), we derive analytic formulae for the hyperfine decay functions expected in coherent oscillations in triple dots [4].
[1] B.~M.~Maune et al., \textit{Nature}~\textbf{481}, 344 (2012); E.~A.~Laird et al., \textit{Phys. Rev. B}~\textbf{82}, 075403 (2012)
[2] W. A. Coish and D. Loss, \textit{Phys. Rev.~B}~\textbf{72}, 125337 (2005)
[3] D.~P.~DiVincenzo et al., \textit{Nature}~\textbf{408}, 339 (2000); B.~H.~Fong and S.~M.~Wandzura, \textit{Quantum Inf. Comput.}~\textbf{11}, 1003 (2011)
[4] T. D. Ladd, \textit{Phys. Rev. B}~\textbf{86}, 125408 (2012).
C26.00012: High fidelity gates for exchange-only qubits in triple-quantum-dots
Monday, March 18, 2013, 5:06PM–5:18PM
Jianjia Fei, Jo-Tzu Hung, Teck Seng Koh, Yun-Pil Shim, Sangchul Oh, Susan Coppersmith, Xuedong Hu, Mark Friesen
One of the main attractions of implementing exchange-only qubits in quantum dots is their ease of control. Gate operations are performed by changing the voltages on the top-gates, to vary the tunnel coupling and/or the detuning between the dots. One of the main challenges is that when exchange interactions are turned on, charge noise will cause dephasing. Here, we explore optimal strategies for implementing logical qubit rotations in exchange-only qubits. We take into account charge noise, and challenges due to hyperfine interactions, including leakage outside the logical qubit space, and dephasing caused by fluctuations of the local nuclear fields. Our method is based on optimizing the experimentally tunable parameters to maximize the fidelity of the gate operation.
The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressly or implied, of the U.S. Government.
C26.00013: Constructing Two-Qubit Gates for Exchange-Based Quantum Computing
Monday, March 18, 2013, 5:18PM–5:30PM
Daniel Zeuch, Robert Cipri, N.E. Bonesteel
Exchange pulses are local unitary operations obtained by turning on and off the isotropic exchange interaction between pairs of spin-1/2 particles, for example electron spins in quantum dots. We present a procedure for analytically constructing sequences of exchange pulses for carrying out leakage free two-qubit gates on logical three-spin qubits. At each stage of our construction we reduce the problem to that of finding a sequence of rotations for an effective two-level system. The resulting pulse sequences are 39 pulses long, longer than the original 19-pulse sequence of DiVincenzo et al. [1] and the more recent 18-pulse sequence of Fong and Wandzura [2], both of which were obtained numerically. Like the latter sequence, our sequences work regardless of the total spin of the six spins used to encode two qubits. After introducing our method, we prove that any leakage-free sequence of exchange pulses must act on at least five of the six spins to produce an entangling two-qubit gate.
[1] D.P. DiVincenzo et al., Nature \textbf{408}, 339 (2000).
[2] B.H. Fong \& S.M. Wandzura, Quantum Info. Comput., \textbf{11}, 1003 (2011).
Change in type="cor" behavior?
Replied on Mon, 03/25/2019 - 14:38
I'm pretty sure this is unintentional and indicates a bug.
Replied on Mon, 03/25/2019 - 15:39
I'm searching back. There has been no change in behavior since v2.9.9 (2018-05-11).
Replied on Mon, 03/25/2019 - 16:12
In reply to v2.9.9 by jpritikin
more info needed
It's hard to test earlier versions of OpenMx. Can we narrow this bug down? Is the problem in summary()?
Replied on Mon, 03/25/2019 - 16:26
In reply to more info needed by jpritikin
I would expect it to be in summary() as that is where the fit indices are computed and the Model statistics table constructed.
Replied on Mon, 03/25/2019 - 17:22
In reply to I would expect it to be in by bwiernik
Did a change occur in OpenMx:::computeOptimizationStatistics()? That is where the DoF are calculated.
Replied on Wed, 03/27/2019 - 15:51
In reply to I would expect it to be in by bwiernik
what exactly is wrong?
I installed v2.6.9 (2016-07-28). Cov has observed statistics: 45, degrees of freedom: 24. Cor has observed statistics: 36, degrees of freedom: 15. These look the same. However, I do notice
differences in the chi-squared and Information Criteria fit stats,
For cov:
chi-square: X2 ( df=24 ) = 88.7134, p = 2.351412e-09
Information Criteria:
| df Penalty | Parameters Penalty | Sample-Size Adjusted
AIC: 40.71340 130.7134 NA
BIC: -48.25725 208.5627 141.9627
For cor:
chi-square: X2 ( df=24 ) = 88.7134, p = 2.351412e-09
Information Criteria:
| df Penalty | Parameters Penalty | Sample-Size Adjusted
AIC: 58.713401 130.7134 NA
BIC: 3.106747 208.5627 141.9627
There are lots of differences here compared to current output. What specifically do you think is incorrect?
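For reference, the observed-statistics counts in those outputs are just the number of unique elements in the input matrix, so with the same free-parameter count the df differ by the number of variables:

```r
p <- 9               # manifest variables in this example
p * (p + 1) / 2      # 45 unique elements of a covariance matrix
p * (p - 1) / 2      # 36 for a correlation matrix (diagonal fixed at 1)
```

With 21 free parameters in both runs, that yields df of 45 - 21 = 24 versus 36 - 21 = 15, matching the output above.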
Replied on Fri, 03/29/2019 - 10:13
See Steiger (1980)...
When I run a RAM model with type = "cor" data, I get a warning message:
In FUN(X[[i]], ...) :
OpenMx does not yet correctly handle mxData(type='cor') standard errors and fit statistics.
See Steiger (1980), "Tests for
comparing elements of a correlation matrix".
This is with the current OpenMx: 2.12.2.267 [GIT v2.12.2-267-g3cbee07]
Replied on Fri, 04/05/2019 - 10:36
update in the works
An upcoming version of OpenMx will constrain the variances of the manifest portion of the `S` matrix to 1 if `type="cor"` in RAM models. Secondly, we'll likely stop allowing cor in non-RAM models,
with a note to the user to assert `type = cov` and add constraints where they need to be based on their matrix usage, or warn people that they need to take care of this.
Replied on Tue, 04/09/2019 - 15:46
In reply to update in the works by tbates
Automating the constraint on the diagonal of the model-expected covariance/correlation matrix is going to be trickier than I thought when I volunteered to tackle this bug. A model of `type="RAM"` has
no MxMatrix or MxAlgebra for its model-expected matrix. The constraint would have to exist solely in the backend. We do have a precedent for creating new constraints in the backend: we do that to
implement the constrained representation of the confidence-limit problem. But, those constraints don't persist to be returned to the frontend, and they don't do things like adjust the
degrees-of-freedom of the model. Maybe we should just deprecate `type="cor"`?
Replied on Tue, 04/09/2019 - 18:46
Analyses with correlation matrices are really common, and returning to the previous behavior where the sole change for type="cor" is to display a warning about standard errors and adjusting the
degrees of freedom and associated fit indices would be much preferable I think to users just analyzing correlation matrices as covariance matrices without any warning or adjustment.
Rather than doing the diagonal constraint implicitly, would adding a new mxCorrelationConstraint() function that adds an explicit constraint as part of the model be easier? Then, type="cor" could
error out if that constraint were not added?
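For concreteness, here is a minimal sketch of what such an explicit constraint could look like with today's mxConstraint(). `ram` is assumed to be a type="RAM" model with the usual A, S, and F matrices; mxCorrelationConstraint() itself does not exist, and everything below other than standard OpenMx functions is hypothetical.

```r
# Hypothetical helper in the spirit of the proposed mxCorrelationConstraint().
# It rebuilds the RAM-implied covariance as an algebra and constrains its
# diagonal to 1. Only mxMatrix/mxAlgebra/mxConstraint are real OpenMx APIs.
library(OpenMx)

addUnitDiag <- function(ram) {
  nTot <- nrow(ram$A$values)   # manifest + latent variables
  nMan <- nrow(ram$F$values)   # manifest variables only
  mxModel(ram,
    mxMatrix("Iden", nrow = nTot, ncol = nTot, name = "Id"),
    mxMatrix("Unit", nrow = nMan, ncol = 1,   name = "ones"),
    # RAM-implied covariance: F (I - A)^-1 S (I - A)'^-1 F'
    mxAlgebra(F %*% solve(Id - A) %*% S %*% t(solve(Id - A)) %*% t(F),
              name = "expCov"),
    mxConstraint(diag2vec(expCov) == ones, name = "unitDiag")
  )
}
```

Whether such constraints should then feed into the df and fit-statistic accounting is exactly the question raised above.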
Replied on Tue, 04/16/2019 - 10:40
In reply to Analyses with correlation by bwiernik
and returning to the previous behavior where the sole change for type="cor" is to display a warning about standard errors and adjusting the degrees of freedom and associated fit indices would be
much preferable I think to users just analyzing correlation matrices as covariance matrices without any warning or adjustment.
It _does_ warn, though. I just tested it in version 2.12.2. I notice in your OP you said you were running v2.11.5. Have you updated since then?
Replied on Tue, 04/16/2019 - 12:20
In reply to version? by AdminRobK
Sorry I wasn't clear there.
Sorry I wasn't clear there. The current version does warn during estimation (though not during summary() which would also be useful I think), but does not adjust df or any fit statistics.
My comment about "without any warning or adjustment" is regarding the proposed option of deprecating type="cor". The consequence of doing that, I think, is that users would just analyze the
correlation matrix as a covariance, but then not even with the warning.
Replied on Wed, 04/10/2019 - 03:07
Mike Cheung Joined: 10/08/2009
The create.vechsR() function in the metaSEM package can be used to create a model-implied covariance matrix ensuring that the diagonals are always ones. It does not use the mxConstraint() function in
OpenMx. It treats the error variances of the dependent variables as computed parameters rather than as free parameters.
It was written to fit a correlation structure with weighted least squares in the context of meta-analysis. But it can be modified for maximum likelihood. Please see the attached examples.
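A minimal illustration of that computed-parameter idea for a one-factor model (not the attached metaSEM code; names and values below are invented): the residual variances are defined through mxAlgebra() as 1 minus the squared loadings, so the model-implied matrix has a unit diagonal by construction and no mxConstraint() is needed.

```r
# One-factor correlation-structure sketch with computed error variances.
# `obsCor` is a hypothetical 9 x 9 observed correlation matrix with
# dimnames x1..x9; df and standard errors still need Steiger-style care.
library(OpenMx)

oneFacCor <- mxModel("oneFacCor",
  mxMatrix("Full", nrow = 9, ncol = 1, free = TRUE, values = 0.5,
           lbound = -0.99, ubound = 0.99, name = "lambda"),
  mxMatrix("Unit", nrow = 9, ncol = 1, name = "one"),
  mxAlgebra(vec2diag(one - lambda^2), name = "theta"),       # computed, not free
  mxAlgebra(lambda %*% t(lambda) + theta, name = "expCor"),  # unit diagonal
  mxExpectationNormal(covariance = "expCor",
                      dimnames = paste0("x", 1:9)),
  mxFitFunctionML(),
  mxData(obsCor, type = "cov", numObs = 301)  # correlation treated as cov
)
fit <- mxRun(oneFacCor)
```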
Replied on Wed, 04/17/2019 - 10:16
OLD behavior, for reference
I still have an installation of R v2.15.3, running OpenMx v1.3, on my 32-bit Windows 7 laptop. When I analyze the Holzinger-Swineford correlation matrix with `type="cov"`, I get this:
observed statistics: 45
estimated parameters: 23
degrees of freedom: 22
-2 log likelihood: 2050.228
saturated -2 log likelihood: 1714.86
number of observations: 301
chi-square: 335.3673
p: 7.71944e-58
Information Criteria:
df Penalty Parameters Penalty Sample-Size Adjusted
AIC: 291.3673 381.3673 NA
BIC: 209.8109 466.6309 393.6879
CFI: 0.6698406
TLI: 0.4597392
RMSEA: 0.2175366
When I analyze the correlation matrix with `type="cor"`, I get this (along with status BLUE):
observed statistics: 36
estimated parameters: 23
degrees of freedom: 13
-2 log likelihood: 2050.228
saturated -2 log likelihood: 1714.86
number of observations: 301
chi-square: 335.3673
p: 9.242367e-64
Information Criteria:
df Penalty Parameters Penalty Sample-Size Adjusted
AIC: NA NA NA
BIC: NA NA NA
CFI: 0.6698406
TLI: 0.4597392
RMSEA: 0.2175366
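(With type="cor" there are 9 fewer observed statistics, 36 off-diagonal elements instead of 45 unique covariance elements, and correspondingly 9 fewer degrees of freedom: one for each fixed diagonal element of the 9-variable Holzinger-Swineford matrix.)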
So, if we go back THIS far, OpenMx didn't handle correlation matrices correctly, and didn't warn about it, either. | {"url":"https://openmx.ssri.psu.edu/comment/8123","timestamp":"2024-11-08T01:04:03Z","content_type":"text/html","content_length":"59125","record_id":"<urn:uuid:eb20056a-fcfa-470f-bcc1-ac9199924d5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00113.warc.gz"} |
Statistics Notes: Some examples of regression towards the mean
BMJ 1994;309:780 (Published 24 September 1994). doi: https://doi.org/10.1136/bmj.309.6957.780
We have previously shown that regression towards the mean occurs whenever we select an extreme group based on one variable and then measure another variable for that group (4 June, p 1499).1 The
second group mean will be closer to the mean for all subjects than is the first, and the weaker the correlation between the two variables the bigger the effect will be. Regression towards the mean
happens in many types of study. The study of heredity1 is just one. Once one becomes aware of the regression effect it seems to be everywhere. The following are just a few examples.
Treatment to reduce high levels of a measurement - In clinical practice there are many measurements, such as weight, serum cholesterol concentration, or blood pressure, for which particularly high or
low values are signs of underlying disease or risk factors for disease. People with extreme values of the measurement, such as high blood pressure, may be treated to bring their values closer to the
mean. If they are measured again we will observe that the mean of the extreme group is now closer to the mean of the whole population - that is, it is reduced. This should not be interpreted as
showing the effect of the treatment. Even if subjects are not treated the mean blood pressure will go down, owing to regression towards the mean. The first and second measurements will have
correlation r < 1 because of the inevitable measurement error and biological variation. The difference between the second mean for the subgroup and the population mean will be approximately r times the
difference between the first mean and the population mean. We need to separate any genuine reductions due to treatment from the effect of regression towards the mean. This is best done by using a
randomised control group, but it can be estimated directly.2
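A small simulation (not part of the original note) makes the size of this effect concrete. Two standardised "measurements" are given a correlation of r = 0.7, and subjects are selected on the first measurement only:

set.seed(1)
n      <- 100000
r      <- 0.7
first  <- rnorm(n)                               # first measurement (standardised)
second <- r * first + sqrt(1 - r^2) * rnorm(n)   # second measurement, correlation r with the first
high   <- first > quantile(first, 0.9)           # "hypertensive" group: top 10% on the first measurement
mean(first[high])                                # about 1.75 SD above the population mean
mean(second[high])                               # about 0.7 * 1.75 = 1.23 SD above the mean, untreated

Without any treatment at all, the selected group appears to improve by roughly (1 - r) times its initial elevation, purely through regression towards the mean.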
Relating change to initial value - We may be interested in the relation between the initial value of a measurement and the change in that quantity over time. In antihypertensive drug trials, for
example, it may be postulated that the drug's effectiveness would be different (usually greater) for patients with more severe hypertension. This is a reasonable question, but, unfortunately, the
regression towards the mean will be greater for the patients with the highest initial blood pressures, so that we would expect to observe the postulated effect even in untreated patients.3
Assessing the appropriateness of clinical decisions - Clinical decisions are sometimes assessed by asking a review panel to read case notes and decide whether they agree with the decision made.
Because agreement between observers is seldom perfect the panel is sure to conclude that some decisions are “wrong.” For example, Barrett et al reviewed cases of women who had had a caesarean section
because of fetal distress.4 The percentage agreement between pairs of observers in the panel varied from 60% to 82.5%. They judged a caesarean section to be “appropriate” if at least four of the five
observers thought a caesarean should have been done. Because there was poor agreement among the panel, judgments by panel members and the actual obstetricians doing the sections must also be poorly
related and not all caesareans will be deemed appropriate by the panel. The authors concluded that 30% of all caesarean sections for fetal distress were unnecessary, but what the study actually
showed was that decisions about whether women should have emergency surgery for fetal distress are difficult and that obstetricians do not always agree.5
Comparison of two methods of measurement - When comparing two methods of measuring the same quantity researchers are sometimes tempted to regress one method on the other. The fallacious argument is
that if the methods agree the slope should be 1. Because of the effect of regression towards the mean we expect the slope to be less than 1 even if the two methods agree closely. For example, in two
similar studies self reported weight was obtained from a group of subjects, and the subjects were then weighed.6,7 Regression analysis was done, with reported weight as the outcome variable and
measured weight as the predictor variable. The regression slope was less than 1 in each study. According to the regression equation, the mean reported weight of heavy subjects was less than their
mean measured weight, and the mean reported weight of light subjects was greater than their mean measured weight. We have a finding which allows a simple and attractive, but misleading,
interpretation: those who are overweight tend to underestimate their weights and those who are excessively thin tend to overestimate their weights. In fact we would expect to find a slope less than
1, as a result of regression towards the mean. If self reported and measured weight were equaly good measures of the subject's true weight then the slope of the regression of reported weight on
measured weight will be less than 1. But the slope of the regression of measured weight on reported weight will also be less than 1. Now we have the oppostive conclusion: people who are heavy have
overestimated their weights and people who are light have underestimated theirs. Elsewhere we describe a better approach to such data.8
Publication bias - Rousseeuw notes that referees for papers submitted for publication do not always agree which papers should be accepted.9 Because referees' judgments of the quality of papers are
therefore made with error, they cannot be perfectly correlated with any measure of the true quality of the paper. Thus when an editor accepts the “best” papers for publication the average quality of
these will be less than the editor thinks, and the average quality of those rejected will be higher than the editor thinks. Next time you are turned down by the BMJ do not be too despondent. It could
be just another example of regression towards the mean.
List of gear nomenclature
The addendum is the height by which a tooth of a gear projects beyond (outside for external, or inside for internal) the standard pitch circle or pitch line; also, the radial distance between the
pitch diameter and the outside diameter.^[1]
Addendum angle
Addendum angle in a bevel gear, is the angle between elements of the face cone and pitch cone.^[1]
Addendum circle
The addendum circle coincides with the tops of the teeth of a gear and is concentric with the standard (reference) pitch circle and radially distant from it by the amount of the addendum. For
external gears, the addendum circle lies on the outside cylinder while on internal gears the addendum circle lies on the internal cylinder.^[1]
Pressure Angle
Apex to back
Apex to back, in a bevel gear or hypoid gear, is the distance in the direction of the axis from the apex of the pitch cone to a locating surface at the back of the blank.^[1]
Back angle
The back angle of a bevel gear is the angle between an element of the back cone and a plane of rotation, and usually is equal to the pitch angle.^[1]
Back cone
The back cone of a bevel or hypoid gear is an imaginary cone tangent to the outer ends of the teeth, with its elements perpendicular to those of the pitch cone. The surface of the gear blank at the
outer ends of the teeth is customarily formed to such a back cone.^[1]
Back cone distance
Back cone distance in a bevel gear is the distance along an element of the back cone from its apex to the pitch cone.^[1]
Backlash
In mechanical engineering, backlash is the striking back of connected wheels in a piece of mechanism when pressure is applied. Another source defines it as the maximum distance through which one part
of something can be moved without moving a connected part. It is also called lash or play. In the context of gears, backlash is clearance between mating components, or the amount of lost motion due
to clearance or slackness when movement is reversed and contact is re-established. In a pair of gears, backlash is the amount of clearance between mated gear teeth.
Backlash is unavoidable for nearly all reversing mechanical couplings, although its effects can be negated. Depending on the application it may or may not be desirable. Reasons for requiring backlash
include allowing for lubrication and thermal expansion, and to prevent jamming. Backlash may also result from manufacturing errors and deflection under load.
Base circle
The base circle of an involute gear is the circle from which involute tooth profiles are derived.^[1]
Base cylinder
The base cylinder corresponds to the base circle, and is the cylinder from which involute tooth surfaces are developed.^[1]
Base diameter
The base diameter of an involute gear is the diameter of the base circle.^[1]
Bevel gear
Bull gear
The term bull gear is used to refer to the larger of two spur gears that are in engagement in any machine. The smaller gear is usually referred to as a pinion.^[2]
Center distance
Center distance (operating) is the shortest distance between non-intersecting axes. It is measured along the mutual perpendicular to the axes, called the line of centers. It applies to spur gears,
parallel axis or crossed axis helical gears, and worm gearing.^[1]
Central plane
The central plane of a worm gear is perpendicular to the gear axis and contains the common perpendicular of the gear and worm axes. In the usual case with axes at right angles, it contains the worm axis.
Composite action test
The composite action test (double flank) is a method of inspection in which the work gear is rolled in tight double flank contact with a master gear or a specified gear, in order to determine
(radial) composite variations (deviations). The composite action test must be made on a variable center distance composite action test device.^[1]
Cone distance
Cone distance in a bevel gear is the general term for the distance along an element of the pitch cone from the apex to any given position in the teeth.^[1]
Outer cone distance in bevel gears is the distance from the apex of the pitch cone to the outer ends of the teeth. When not otherwise specified, the short term cone distance is understood to be outer
cone distance.
Mean cone distance in bevel gears is the distance from the apex of the pitch cone to the middle of the face width.
Inner cone distance in bevel gears is the distance from the apex of the pitch cone to the inner ends of the teeth.
Conjugate gears
Conjugate gears transmit uniform rotary motion from one shaft to another by means of gear teeth. The normals to the profiles of these teeth, at all points of contact, must pass through a fixed point
in the common centerline of the two shafts.^[1] Usually, a conjugate gear tooth is made to suit the profile of the other gear when that gear does not follow a standard tooth form.
Crossed helical gear
A crossed helical gear is a gear that operate on non-intersecting, non-parallel axes.
The term crossed helical gears has superseded the term spiral gears. There is theoretically point contact between the teeth at any instant. They have teeth of the same or different helix angles, of
the same or opposite hand. A combination of spur and helical or other types can operate on crossed axes.^[1]
Crossing point
The crossing point is the point of intersection of bevel gear axes; also the apparent point of intersection of the axes in hypoid gears, crossed helical gears, worm gears, and offset face gears, when
projected to a plane parallel to both axes.^[1]
Crown circle
The crown circle in a bevel or hypoid gear is the circle of intersection of the back cone and face cone.^[1]
Crowned teeth
Crowned teeth have surfaces modified in the lengthwise direction to produce localized contact or to prevent contact at their ends.^[1]
Dedendum angle
Dedendum angle in a bevel gear, is the angle between elements of the root cone and pitch cone.^[1]
Equivalent pitch radius
Equivalent pitch radius is the radius of the pitch circle in a cross section of gear teeth in any plane other than a plane of rotation. It is properly the radius of curvature of the pitch surface in
the given cross section. Examples of such sections are the transverse section of bevel gear teeth and the normal section of helical teeth.
Face (tip) angle
Face (tip) angle in a bevel or hypoid gear, is the angle between an element of the face cone and its axis.^[1]
Face cone
The face cone, also known as the tip cone is the imaginary surface that coincides with the tops of the teeth of a bevel or hypoid gear.^[1]
Face gear
A face gear set typically consists of a disk-shaped gear, grooved on at least one face, in combination with a spur, helical, or conical pinion. A face gear has a planar pitch surface and a planar
root surface, both of which are perpendicular to the axis of rotation.^[1] It can also be referred to as a face wheel, crown gear, crown wheel, contrate gear or contrate wheel.
Face width
The face width of a gear is the length of teeth in an axial plane. For double helical, it does not include the gap.^[1]
Total face width is the actual dimension of a gear blank including the portion that exceeds the effective face width, or as in double helical gears where the total face width includes any distance or
gap separating right hand and left hand helices.
For a cylindrical gear, effective face width is the portion that contacts the mating teeth. One member of a pair of gears may engage only a portion of its mate.
For a bevel gear, different definitions for effective face width are applicable.
Form diameter
Form diameter is the diameter of a circle at which the trochoid (fillet curve) produced by the tooling intersects, or joins, the involute or specified profile. Although these terms are not preferred,
it is also known as the true involute form diameter (TIF), start of involute diameter (SOI), or when undercut exists, as the undercut diameter. This diameter cannot be less than the base circle diameter.
Front angle
The front angle, in a bevel gear, denotes the angle between an element of the front cone and a plane of rotation, and usually equals the pitch angle.^[1]
Front cone
The front cone of a hypoid or bevel gear is an imaginary cone tangent to the inner ends of the teeth, with its elements perpendicular to those of the pitch cone. The surface of the gear blank at the
inner ends of the teeth is customarily formed to such a front cone, but sometimes may be a plane on a pinion or a cylinder in a nearly flat gear.^[1]
Gear center
A gear center is the center of the pitch circle.^[1]
Gear range
The gear range is the difference between the highest and lowest gear ratios and may be expressed as a percentage (e.g., 500%) or as a ratio (e.g., 5:1).
The heel of a tooth on a bevel gear or pinion is the portion of the tooth surface near its outer end.
The toe of a tooth on a bevel gear or pinion is the portion of the tooth surface near its inner end.^[1]
Helical rack
A helical rack has a planar pitch surface and teeth that are oblique to the direction of motion.^[1]
Helix angle
Helix angle is the angle between the helical tooth face and an equivalent spur tooth face. For the same lead, the helix angle is greater for larger gear diameters. It is understood to be measured at
the standard pitch diameter unless otherwise specified.
Herringbone gear
Hobbing
Hobbing is a machining process for making gears, splines, and sprockets using a cylindrical tool with helical cutting teeth known as a hob.
Index deviation
The displacement of any tooth flank from its theoretical position, relative to a datum tooth flank.
Distinction is made as to the direction and algebraic sign of this reading. A condition wherein the actual tooth flank position was nearer to the datum tooth flank, in the specified measuring path
direction (clockwise or counterclockwise), than the theoretical position would be considered a minus (-) deviation. A condition wherein the actual tooth flank position was farther from the datum
tooth flank, in the specified measuring path direction, than the theoretical position would be considered a plus (+) deviation.
The direction of tolerancing for index deviation along the arc of the tolerance diameter circle within the transverse plane.^[1]
Inside cylinder
The inside cylinder is the surface that coincides with the tops of the teeth of an internal cylindrical gear.^[1]
Inside diameter
Inside diameter is the diameter of the addendum circle of an internal gear, this is also known as minor diameter.^[1]
Involute gear
Involute polar angle
Expressed as θ, the involute polar angle is the angle between a radius vector to a point, P, on an involute curve and a radial line to the intersection, A, of the curve with the base circle.^[1]
Involute roll angle
Expressed as ε, the involute roll angle is the angle whose arc on the base circle of radius unity equals the tangent of the pressure angle at a selected point on the involute.^[1]
Involute teeth
Involute teeth of spur gears, helical gears, and worms are those in which the profile in a transverse plane (exclusive of the fillet curve) is the involute of a circle.^[1]
Bottom land
The bottom land is the surface at the bottom of a gear tooth space adjoining the fillet.^[1]
Top land
Top land is the (sometimes flat) surface of the top of a gear tooth.^[1]
Lead
Lead is the axial advance of a helical gear tooth during one complete turn (360°); that is, the lead is the axial travel (length along the axis) for one complete helical revolution about the pitch diameter of the gear.
Lead angle
The lead angle is the complement of the helix angle (90° minus the helix angle) between the helical tooth face and an equivalent spur tooth face. For the same lead, the lead angle is larger for smaller gear diameters. It is understood to be measured at the standard pitch diameter unless otherwise specified.
A spur gear tooth has a lead angle of 90°, and a helix angle of 0°.
See: Helix angle
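These quantities are linked by a standard relation on the pitch cylinder (not spelled out in the entries above): tan(lead angle) = lead / (π × pitch diameter), with helix angle = 90° - lead angle. For example, a pitch diameter of 50 mm and a lead of 500 mm give a lead angle of arctan(500 / (π × 50)) ≈ 72.6°, and hence a helix angle of about 17.4°.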
Line of centers
The line of centers connects the centers of the pitch circles of two engaging gears; it is also the common perpendicular of the axes in crossed helical gears and wormgears. When one of the gears is a
rack, the line of centers is perpendicular to its pitch line.^[1]
Mounting distance
Mounting distance, for assembling bevel gears or hypoid gears, is the distance from the crossing point of the axes to a locating surface of a gear, which may be at either back or front.^[1]
Normal module
Normal module is the value of the module in a normal plane of a helical gear or worm.^[1]
Normal plane
A normal plane is normal to a tooth surface at a pitch point, and perpendicular to the pitch plane. In a helical rack, a normal plane is normal to all the teeth it intersects. In a helical gear,
however, a plane can be normal to only one tooth at a point lying in the plane surface. At such a point, the normal plane contains the line normal to the tooth surface.
Important positions of a normal plane in tooth measurement and tool design of helical teeth and worm threads are:
1. the plane normal to the pitch helix at side of tooth;
2. the plane normal to the pitch helix at center of tooth;
3. the plane normal to the pitch helix at center of space between two teeth
In a spiral bevel gear, one of the positions of a normal plane is at a mean point and the plane is normal to the tooth trace.^[1]
Offset is the perpendicular distance between the axes of hypoid gears or offset face gears.^[1]
In the adjacent diagram, (a) and (b) are referred to as having an offset below center, while those in (c) and (d) have an offset above center. In determining the direction of offset, it is customary
to look at the gear with the pinion at the right. For below center offset the pinion has a left hand spiral, and for above center offset the pinion has a right hand spiral.
Outside cylinder
The outside (tip or addendum) cylinder is the surface that coincides with the tops of the teeth of an external cylindrical gear.^[1]
Outside diameter
The outside diameter of a gear is the diameter of the addendum (tip) circle. In a bevel gear it is the diameter of the crown circle. In a throated wormgear it is the maximum diameter of the blank.
The term applies to external gears; this is also known as the major diameter.^[1]
Pitch angle
Pitch angle in bevel gears is the angle between an element of a pitch cone and its axis. In external and internal bevel gears, the pitch angles are respectively less than and greater than 90 degrees.
Pitch circle
A pitch circle (operating) is the curve of intersection of a pitch surface of revolution and a plane of rotation. It is the imaginary circle that rolls without slipping with a pitch circle of a
mating gear.^[1] These are the outlines of the imaginary smooth rollers or friction discs in every pair of mating gears. Many important measurements are taken on and from this circle.^[1]
Pitch cone
A pitch cone is the imaginary cone in a bevel gear that rolls without slipping on a pitch surface of another gear.^[1]
Pitch cylinder
A pitch cylinder is the imaginary cylinder in a spur or helical gear that rolls without slipping on a pitch plane or pitch cylinder of another gear.^[1]
Pitch helix
The pitch helix is the intersection of the tooth surface and the pitch cylinder of a helical gear or cylindrical worm.^[1]
Base helix
The base helix of a helical, involute gear or involute worm lies on its base cylinder.
Base helix angle
Base helix angle is the helix angle on the base cylinder of involute helical teeth or threads.
Base lead angle
Base lead angle is the lead angle on the base cylinder. It is the complement of the base helix angle.
Outside helix
The outside (tip or addendum) helix is the intersection of the tooth surface and the outside cylinder of a helical gear or cylindrical worm.
Outside helix angle
Outside helix angle is the helix angle on the outside cylinder.
Outside lead angle
Outside lead angle is the lead angle on the outside cylinder. It is the complement of the outside helix angle.
Normal helix
A normal helix is a helix on the pitch cylinder, normal to the pitch helix.
Pitch line
The pitch line corresponds, in the cross section of a rack, to the pitch circle (operating) in the cross section of a gear.^[1]
Pitch point
The pitch point is the point of tangency of two pitch circles (or of a pitch circle and pitch line) and is on the line of centers.^[1]
Pitch surfaces
Pitch surfaces are the imaginary planes, cylinders, or cones that roll together without slipping. For a constant velocity ratio, the pitch cylinders and pitch cones are circular.^[1]
Pitch plane
The pitch plane of a pair of gears is the plane perpendicular to the axial plane and tangent to the pitch surfaces. A pitch plane in an individual gear may be any plane tangent to its pitch surface.
The pitch plane of a rack or in a crown gear is the imaginary planar surface that rolls without slipping with a pitch cylinder or pitch cone of another gear. The pitch plane of a rack or crown gear
is also the pitch surface.^[1]
Transverse plane
The transverse plane is perpendicular to the axial plane and to the pitch plane. In gears with parallel axes, the transverse plane and the plane of rotation coincide.^[1]
Principal directions
Principal directions are directions in the pitch plane, and correspond to the principal cross sections of a tooth.
The axial direction is a direction parallel to an axis.
The transverse direction is a direction within a transverse plane.
The normal direction is a direction within a normal plane.^[1]
Profile angle
Profile radius of curvature
Profile radius of curvature is the radius of curvature of a tooth profile, usually at the pitch point or a point of contact. It varies continuously along the involute profile.^[1]
Rack and pinion
Radial composite deviation
Tooth-to-tooth radial composite deviation (double flank) is the greatest change in center distance while the gear being tested is rotated through any angle of 360 degree/z during double flank
composite action test.
Tooth-to-tooth radial composite tolerance (double flank) is the permissible amount of tooth-to-tooth radial composite deviation.
Total radial composite deviation (double flank) is the total change in center distance while the gear being tested is rotated one complete revolution during a double flank composite action test.
Total radial composite tolerance (double flank) is the permissible amount of total radial composite deviation.^[1]
Root angle
Root angle in a bevel or hypoid gear, is the angle between an element of the root cone and its axis.^[1]
Root circle
The root circle coincides with the bottoms of the tooth spaces.^[1]
Root cone
The root cone is the imaginary surface that coincides with the bottoms of the tooth spaces in a bevel or hypoid gear.^[1]
Root cylinder
The root cylinder is the imaginary surface that coincides with the bottoms of the tooth spaces in a cylindrical gear.^[1]
Shaft angle
A shaft angle is the angle between the axes of two non-parallel gear shafts. In a pair of crossed helical gears, the shaft angle lies between the oppositely rotating portions of two shafts. This
applies also in the case of worm gearing. In bevel gears, the shaft angle is the sum of the two pitch angles. In hypoid gears, the shaft angle is given when starting a design, and it does not have a
fixed relation to the pitch angles and spiral angles.^[1]
Spiral gear
See: Crossed helical gear.
Spiral bevel gear
Spur gear
A spur gear has a cylindrical pitch surface and teeth that are parallel to the axis.^[1]
Spur rack
A spur rack has a planar pitch surface and straight teeth that are at right angles to the direction of motion.^[1]
Standard pitch circle
The standard pitch circle is the circle which intersects the involute at the point where the pressure angle is equal to the profile angle of the basic rack.^[1]
Standard pitch diameter
The standard reference pitch diameter is the diameter of the standard pitch circle. In spur and helical gears, unless otherwise specified, the standard pitch diameter is related to the number of
teeth and the standard transverse pitch. The diameter can be roughly estimated by averaging the diameter measured across the tips of the gear teeth and the diameter measured across the bases of the gear teeth.^[1]
The pitch diameter is useful in determining the spacing between gear centers because proper spacing of gears implies tangent pitch circles. The pitch diameters of two gears may be used to calculate
the gear ratio in the same way the number of teeth is used.
For a spur gear, d = N/P = Np/π, where N is the total number of teeth, p is the circular pitch, and P is the diametral pitch; for a helical gear, d = N/(P_n cos ψ), where P_n is the normal diametral pitch and ψ is the helix angle.
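As a worked illustration (numbers chosen for this example only): a 20-tooth spur pinion cut to a diametral pitch of 10 teeth per inch has a standard pitch diameter of 20/10 = 2 in, and a mating 40-tooth gear has 40/10 = 4 in. The gear ratio is 40/20 = 2 (equivalently 4/2 = 2), and tangent pitch circles place the gear centers (2 + 4)/2 = 3 in apart.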
Test radius
The test radius (R[r]) is a number used as an arithmetic convention established to simplify the determination of the proper test distance between a master and a work gear for a composite action test.
It is used as a measure of the effective size of a gear. The test radius of the master, plus the test radius of the work gear is the set up center distance on a composite action test device. Test
radius is not the same as the operating pitch radii of two tightly meshing gears unless both are perfect and to basic or standard tooth thickness.^[1]
Throat diameter
The throat diameter is the diameter of the addendum circle at the central plane of a wormgear or of a double-enveloping wormgear.^[1]
Throat form radius
Throat form radius is the radius of the throat of an enveloping wormgear or of a double-enveloping worm, in an axial plane.^[1]
Tip radius
Tip radius is the radius of the circular arc used to join a side-cutting edge and an end-cutting edge in gear cutting tools. Edge radius is an alternate term.^[1]
Tip relief
Tip relief is a modification of a tooth profile whereby a small amount of material is removed near the tip of the gear tooth.^[1]
Tooth surface
The tooth surface (flank) forms the side of a gear tooth.^[1]
It is convenient to choose one face of the gear as the reference face and to mark it with the letter “I”. The other non-reference face might be termed face “II”.
For an observer looking at the reference face, so that the tooth is seen with its tip uppermost, the right flank is on the right and the left flank is on the left. Right and left flanks are denoted
by the letters “R” and “L” respectively.
Worm drive
CAR-Proper: The CAR-Proper Distribution in nimble: MCMC, Particle Filtering, and Programmable Hierarchical Modeling
Density function and random generation for the proper Gaussian conditional autoregressive (CAR) distribution.
dcar_proper(x, mu, C = CAR_calcC(adj, num), adj, num, M = CAR_calcM(num),
            tau, gamma, evs = CAR_calcEVs3(C, adj, num), log = FALSE)

rcar_proper(n = 1, mu, C = CAR_calcC(adj, num), adj, num, M = CAR_calcM(num),
            tau, gamma, evs = CAR_calcEVs3(C, adj, num))
x vector of values.
mu vector of the same length as x, specifying the mean for each spatial location.
C vector of the same length as adj, giving the weights associated with each pair of neighboring locations. See ‘Details’.
adj vector of indices of the adjacent locations (neighbors) of each spatial location. This is a sparse representation of the full adjacency matrix.
num vector giving the number of neighboring locations of each spatial location, with length equal to the number of locations.
M vector giving the diagonal elements of the conditional variance matrix, with length equal to the number of locations. See ‘Details’.
tau scalar precision of the Gaussian CAR prior.
gamma scalar representing the overall degree of spatial dependence. See ‘Details’.
evs vector of eigenvalues of the adjacency matrix implied by C, adj, and num. This parameter should not be provided; it will always be calculated using the adjacency information.
log logical; if TRUE, probability density is returned on the log scale.
n number of observations.
If both C and M are omitted, then all weights are taken as one, and corresponding values of C and M are generated.
The C and M parameters must jointly satisfy a symmetry constraint: that M^(-1) %*% C is symmetric, where M is a diagonal matrix and C is the full weight matrix that is sparsely represented by the
parameter vector C.
For a proper CAR model, the value of gamma must lie within the inverse minimum and maximum eigenvalues of M^(-0.5) %*% C %*% M^(0.5), where M is a diagonal matrix and C is the full weight matrix.
These bounds can be calculated using the deterministic functions carMinBound(C, adj, num, M) and carMaxBound(C, adj, num, M), or simultaneously using carBounds(C, adj, num, M). In the case where C
and M are omitted (all weights equal to one), the bounds on gamma are necessarily (-1, 1).
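As a rough illustration of how these helper functions fit together (a sketch, not taken from the package manual; it reuses the small adjacency structure from the examples further below):

library(nimble)
adj     <- c(2, 1, 3, 2, 4, 3)
num     <- c(1, 2, 2, 1)
weights <- c(2, 2, 3, 3, 4, 4)
CM      <- as.carCM(adj, weights, num)        # build C and M from unnormalized weights
bounds  <- carBounds(CM$C, adj, num, CM$M)    # both limits of the valid range for gamma
gamma   <- 0.9 * max(bounds)                  # a value safely inside the valid range
dcar_proper(x = c(1, 3, 3, 4), mu = rep(3, 4), C = CM$C, adj = adj, num = num,
            M = CM$M, tau = 1, gamma = gamma, log = TRUE)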
Banerjee, S., Carlin, B.P., and Gelfand, A.E. (2015). Hierarchical Modeling and Analysis for Spatial Data, 2nd ed. Chapman and Hall/CRC.
x <- c(1, 3, 3, 4)
mu <- rep(3, 4)
adj <- c(2, 1,3, 2,4, 3)
num <- c(1, 2, 2, 1)

## omitting C and M uses all weights = 1
dcar_proper(x, mu, adj = adj, num = num, tau = 1, gamma = 0.95)

## equivalent to above: specifying all weights = 1,
## then using as.carCM to generate C and M arguments
weights <- rep(1, 6)
CM <- as.carCM(adj, weights, num)
C <- CM$C
M <- CM$M
dcar_proper(x, mu, C, adj, num, M, tau = 1, gamma = 0.95)

## now using non-unit weights
weights <- c(2, 2, 3, 3, 4, 4)
CM2 <- as.carCM(adj, weights, num)
C2 <- CM2$C
M2 <- CM2$M
dcar_proper(x, mu, C2, adj, num, M2, tau = 1, gamma = 0.95)
Linearly-Additive Decomposed 2 × 2 Games: A Primer for Research
2 $×$ 2 games (such as the Prisoner’s Dilemma) are economic games for studying cooperation and social decision-making. Linearly-additive decomposed games are variants of 2 $×$ 2 games that can change
the framing of the game and thereby provide researchers with additional flexibility for measuring preferences and social cognition that would not be possible with standard (matrix-form) 2 $×$ 2
games. In this paper, we provide a systematic overview of linearly-additive decomposed 2 $×$ 2 games. We show which 2 $×$ 2 games can be decomposed in a linearly-additive way and how to calculate
possible decompositions for a given game. We close by suggesting for which experiments decomposed games might be more conducive than matrix games.
1. Introduction
In this article, we explain what linearly-additive decomposed 2 $×$ 2 games are, how they work, and for which types of experiments they might be most useful. In our opinion, they are a versatile but
underused tool for studying social preferences and social interactions. The purpose of this article is to help readers understand these games better, such that they can use them in their own
research. For readability, we will omit the phrase ‘linearly additive’ (such that we refer to linearly-additive decomposed 2 $×$ 2 games as ‘decomposed games’) and we put all relevant mathematics in
the Appendix.
1.1. Social interactions and 2 × 2 games
Cooperation and social decisions have been studied for decades across a wide range of disciplines (e.g., Fudenberg & Tirole, 1991; Henrich et al., 2001; King-Casas & Chiu, 2012; Nowak, 2006; Perc et
al., 2017; Rilling & Sanfey, 2011; Van Lange et al., 2013). To tease apart various aspects of cooperation, different tasks have been developed (Thielmann et al., 2021), such as the Dictator Game
(Engel, 2011), the Ultimatum Game (Güth & Kocher, 2014), and the Trust Game (Johnson & Mislin, 2011).
One of the most widely studied economics games is the Prisoner’s Dilemma, which models conflict between individual and collective. In the original framing of the Prisoner’s Dilemma (see Poundstone,
1993), a police officer arrests 2 criminals. The evidence is not sufficient to convict them of the major crime, but there is evidence to convict them of a lesser crime. The officer makes both
criminals, who are kept in separate rooms and cannot communicate with each other, the same offer: if both talk to the police and betray each other, they each go to prison for 5 years; if both remain
silent, they each go to prison for 2 years; if one remains silent and the other speaks, the first will go to prison for 10 years and the latter will be set free (see Figure 1).
The Prisoner’s Dilemma thus models a conflict between what’s best for the individual and what’s best for both collectively: the best joint outcome for the two prisoners is to remain silent and go to
prison for only 2 years each, but no matter what the other prisoner does, a prisoner’s own prison sentence is lower if they confess and betray the other person. The Prisoner’s Dilemma can be
generalised beyond the specific prison-context to any situation in which DC $>$ CC $>$ DD $>$ CD (the first letter indicates the first player’s choice and the second letter indicates the other
player’s choice: DC = Defect when other Cooperates; CC = mutual Cooperation; DD = mutual Defection; CD = Cooperating when other defects); especially in iterated experiments, a second rule is
implemented (2CC $>$ DC + CD) to ensure that taking turns defecting and cooperating isn’t the best long-term strategy. Another frequently studied 2 $×$ 2 game is Chicken (Rapoport & Chammah, 1966;
Smith & Price, 1973). Chicken (also known as Snowdrift and Hawk-Dove) is identical to the Prisoner’s Dilemma with one exception: DD and CD are swapped, such that for Chicken the payoffs are DC $>$ CC
$>$ CD $>$ DD. This payoff swap changes the dynamics of the game: defection is no longer dominant. Chicken models a situation in which mutual destruction is worse than being taken advantage of and
has been likened to nuclear warfare (Russell, 1959).
The Prisoner’s Dilemma might be the most famous 2 $×$ 2 game, but it is only one of many. In a 2 $×$ 2 game (Rapoport et al., 1976), 2 players each make a binary decision, leading to 4 possible
outcomes. The different games can be defined by the order of their payoffs. Of all ordinal 2 $×$ 2 games with strict preferences, 12 are symmetric (both players have the same payoffs; names from
Bruns (2015); see Table 1).
Table 1.

Game                  1 >    2 >    3 >    4
Chicken               DC     CC     CD     DD
Battle                DC     CD     CC     DD
Hero                  DC     CD     DD     CC
Compromise            DC     DD     CD     CC
Deadlock              DC     DD     CC     CD
Prisoner’s Dilemma    DC     CC     DD     CD
Stag Hunt             CC     DC     DD     CD
Assurance             CC     DD     DC     CD
Coordination          CC     DD     CD     DC
Peace                 CC     CD     DD     DC
Harmony               CC     CD     DC     DD
Concord               CC     DC     CD     DD
The only difference between different ordinal 2 $×$ 2 games is the rank of their payoffs and the resulting strategic decisions for the players (e.g., Nash equilibria). Consider the 4 games presented
in matrix-form in Figure 2: the Prisoner’s Dilemma models conflict between what’s best for the individual (defect) and what’s best for both players combined (cooperate); for Chicken, getting
exploited is not the worst option, mutual defection is, making defection the riskier option and cooperation safer; the reverse is true for Stag-Hunt, where cooperation can lead to the highest or the
lowest payoff; Concord contains no real conflict: mutual cooperation is the best payoff and mutual defection the worst. Thus, 2 $×$ 2 games collectively map various interdependent decisions and can
be used for studying social interactions.
1.2. Matrix games and decomposed games
In most empirical studies, 2 $×$ 2 games are displayed as the outcomes of the interdependent decision (Rapoport et al., 1976): if you choose C and the other person chooses D, you get 1 dollar and the
other gets 7 dollars (see Figure 2a). This is usually displayed in a matrix and called the ‘matrix form’. But the outcomes can be decomposed into actions with consequences for self and other that are
independent of what the other person chooses: if you choose C, you get 0 dollars and the other person gets 5 dollars, but if you choose D, you get 2 dollars and the other person gets 1 dollar (Figure
3 left). If both players decide between such decomposed options, the matrix form and the decomposed form describe the same 2 $×$ 2 game, only changing the way the game is presented. For example,
Figure 3 shows how if both players decide between the decomposed options just mentioned, the final outcomes are identical to the Prisoner’s Dilemma displayed in Figure 2.
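To spell out the arithmetic: if both players choose C, each receives 0 (from their own choice) plus 5 (from the other's choice), so CC = 5; if both choose D, each receives 2 + 1 = 3; and if one player chooses D while the other chooses C, the defector receives 2 + 5 = 7 while the cooperator receives 0 + 1 = 1. These sums reproduce exactly the payoff matrix of Figure 2a.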
The central difference between the matrix form and the decomposed form thus lies in their different emphasis on action and outcome. While the matrix form shows only the consequence of the players’
choices, the decomposed form shows only the value of the actions themselves, without explicitly stating the final outcomes. In matrix form, a decision has no intrinsic value and can only be evaluated
in relation to the other’s decision; in decomposed form, each action carries an intrinsic value for oneself and the other. Decomposed games may thus be psychologically similar to allocation tasks:
formally, they are 2 $×$ 2 games, but they appear like a forced-choice allocation task, where each player chooses 1 of 2 possible allocations options for oneself and another person. The tasks of the
social-value orientation literature are often described as ‘decomposed games’ (Kuhlman & Marshello, 1975; Murphy & Ackermann, 2013), but in these tasks usually only one person makes a decision, such
that they are not games in the strict sense. In this article, we use the term ‘decomposed games’ to refer to a 2 $×$ 2 game in which both players decide between 2 options. Decomposed games thus allow
changing the way a game is framed, manipulating the story behind the game.
2. Decomposing different 2 × 2 games
2.1. Symmetric game, symmetric decomposition
This section answers the questions: 1) ‘which games can be decomposed?’ and 2) ‘if a game can be decomposed, what are possible decompositions?’.
From previous studies, we know that the Prisoner’s Dilemma can be decomposed. For example, Pruitt (1967) empirically tested the effect of different decompositions of the same Prisoner’s Dilemma on
human cooperation rates. Pruitt decomposed the payoff matrix DC = 18, CC = 12, DD = 6, CD = 0 into several decomposed games, including the 2 displayed in Figure 4. These decompositions are
psychologically quite different (in decomposition 1, you can either split $12 in half, or take $6 away from the other to get $12 yourself, but in decomposition 2 the decision lies between either
giving $12 to the other player or taking $6 oneself), but if both players choose between the same decomposed options, the resulting game is the same. Pruitt found that participants had a 55%
cooperation rate for the matrix-form and Decomposition 1 from Figure 4, and a 70% cooperation rate for Decomposition 2.
2.1.1. Which symmetric 2 × 2 games can be decomposed symmetrically?
Pruitt found that for a Prisoner’s Dilemma to be decomposable, it has to fulfil the necessary condition CC - DC = CD - DD (which is equivalent to DD - DC = CD - CC and CC + DD = CD + DC). As we
demonstrate in the Appendix, this necessary condition emerges as an algebraic consequence of the way decomposed games are set up and is independent of the order of payoffs: any 2 $×$ 2 game can be
decomposed symmetrically if and only if CC - DC = CD - DD.
What are the consequences of this necessary condition? First, we can use this rule to find out which games are decomposable in principle by taking CC - DC = CD - DD and searching for contradictions.
As Table 2 shows, 4 games do not inherently contradict CC - DC = CD - DD: Deadlock, Prisoner’s Dilemma, Harmony, and Concord. These are the only strict ordinal symmetric 2 $×$ 2 games that can be
decomposed symmetrically. This logical explanation of why only 4 symmetric 2 $×$ 2 games can be decomposed symmetrically can be complemented in a visually more intuitive way (Figure 5; for a related
graphical representation of decomposed games, see Griesinger & Livingston, 1973): Fix one of the two decomposed options at an arbitrary point (the black dot in the centre) and let the other
decomposed option vary freely. The resulting decomposed game depends only on the relative position of both options: for example, if the second point lands in the lowest right triangle (as depicted in
blue) or highest left triangle, the resulting game is a Prisoner’s Dilemma; if it lands in the second lowest right triangle or the second highest left triangle, the game is Deadlock, and so on. If
the freely-varying point were to land on the diagonal, horizontal, or vertical lines, the resulting game would not be a ordinal 2 $×$ 2 games with strict preferences: e.g., if the freely-varying
point were to land on the diagonal between Prisoner’s Dilemma and Deadlock, the resulting payoff matrix would be DC $>$ CC $=$ DD $>$ CD. In other words, the game would be between the Prisoner’s
Dilemma and Deadlock.
Table 2.

Game                  1 >    2 >    3 >    4     Potentially decomposable?
Chicken               DC     CC     CD     DD    No: DC > CC ⇒ CC - DC < 0; CD > DD ⇒ CD - DD > 0
Battle                DC     CD     CC     DD    No: DC & CD > CC & DD ⇒ DC + CD ≠ CC + DD
Hero                  DC     CD     DD     CC    No: same as for Battle
Compromise            DC     DD     CD     CC    No: DC > DD ⇒ DD - DC < 0; CD > CC ⇒ CD - CC > 0
Deadlock              DC     DD     CC     CD    Yes
Prisoner’s Dilemma    DC     CC     DD     CD    Yes
Stag Hunt             CC     DC     DD     CD    No: CC > DC ⇒ CC - DC > 0; DD > CD ⇒ CD - DD < 0
Assurance             CC     DD     DC     CD    No: CC & DD > DC & CD ⇒ CC + DD ≠ CD + DC
Coordination          CC     DD     CD     DC    No: same as for Assurance
Peace                 CC     CD     DD     DC    No: DD > DC ⇒ DD - DC > 0; CC > CD ⇒ CD - CC < 0
Harmony               CC     CD     DC     DD    Yes
Concord               CC     DC     CD     DD    Yes
The second consequence of the necessary condition is that any decomposable game requires a certain symmetry: using the first two formulations of the necessary condition (CC - DC = CD - DD and DD - DC
= CD - CC), we can see that the difference between the first and second options has to be the same as the difference between the third and fourth options (see Figure 6). This places some limitations
on which games can be decomposed: for example, if DC is increased, the game will only be decomposable if CC is increased equally, if DD is increased equally, or if CD is decreased equally.
The third consequence of the necessary condition is that any Prisoner’s Dilemma that is decomposable also abides by the second rule of the Prisoner’s Dilemma (2CC $>$ DC + CD). Using the third
formulation of the necessary condition (CC + DD = CD + DC), it becomes clear that the first rule of the Prisoner’s Dilemma (DC $>$ CC $>$ DD $>$ CD) implies that CC $>$ 1/2(DC + CD), and therefore
2CC $>$ DC + CD. This means that the second rule of the Prisoner’s Dilemma holds for any decomposed decomposed Prisoner’s Dilemma, such that alternative cooperation and defection of both players
cannot be the most beneficial strategy, even in an iterated setting.
2.1.2. What decompositions are possible, and why are there infinitely many?
Pruitt also mentioned that if a Prisoner’s Dilemma payoff matrix is decomposable, then there are infinitely many possible decompositions. But how do the infinite decompositions relate to each other;
can we choose freely which decomposition to use, or are these decompositions related in some systematic way? As above, the full explanation is in the Appendix, and the approach isn’t defined by the
order of payoffs in the Prisoner’s Dilemma, so the other 3 decomposable games also have infinitely many decompositions once the necessary condition is fulfilled. From the algebraic formulation in the
Appendix, we can create a table with the generic formula for symmetrically decomposing any symmetric 2 $×$ 2 game (if it fulfills the necessary condition):
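(Written out, one way to state the generic formula is: for a symmetric game satisfying CC + DD = CD + DC, choosing C allocates CD - $γ$ to Self and CC - CD + $γ$ to Other, while choosing D allocates DD - $γ$ to Self and $γ$ to Other. Summing a player's own Self allocation and the other player's Other allocation then recovers CC, DD, DC, and CD for every value of $γ$; this parameterization matches the worked examples in Figure 8 below.)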
The decomposed options are defined by the payoffs of the game and $γ$, which can be chosen freely and is subtracted from payoffs for the self and added to payoffs for the other. Mathematically, the
infinity of decomposed options per 2 $×$ 2 game is trivial (x - x = 0), but from a practical perspective, this provides flexibility when designing experiments to alter the framing of the decomposed game.
As an example, Figure 8 displays different decompositions of the same payoff matrix. We use the Prisoner’s Dilemma with the payoff matrix DC = 7, CC = 5, DD = 3, CD = 1, and then use $γ$ values of
(-2, -1, 0, 1, 2). The resulting decompositions lead to the same payoff matrix specified above (see Figure 8).
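A few lines of code are enough to generate such decompositions and to verify that they all collapse back onto the intended matrix. The sketch below (illustrative only, using the parameterization stated above) generates the five decompositions for $γ$ = -2, ..., 2 and checks that each recovers DC = 7, CC = 5, DD = 3, CD = 1:

decompose <- function(DC, CC, DD, CD, gamma) {
  stopifnot(CC + DD == CD + DC)                      # necessary condition for decomposability
  list(C = c(self = CD - gamma, other = CC - CD + gamma),
       D = c(self = DD - gamma, other = gamma))
}
recompose <- function(d) {                           # payoffs when both players face options d
  c(DC = d$D[["self"]] + d$C[["other"]],
    CC = d$C[["self"]] + d$C[["other"]],
    DD = d$D[["self"]] + d$D[["other"]],
    CD = d$C[["self"]] + d$D[["other"]])
}
for (g in -2:2) print(recompose(decompose(DC = 7, CC = 5, DD = 3, CD = 1, gamma = g)))  # always 7, 5, 3, 1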
The specific value one selects for $γ$ will depend on multiple factors, such as the scientific question and the payoff matrix. To provide some guidelines, the total contributed points for C and D
remain constant (because $γ$ is always added to the same extent that it is subtracted; in Figure 8 C always provides a total of 5 points and D always provides a total of 3 points), but $γ$ can affect
other factors, such as absolute inequality for Self and Other allocations for the C and D options (in Figure 8 this is 1, 1, 3, 5, and 7 for C, and 7, 5, 3, 1, and 1 for D), as well as advantageous and
disadvantageous inequality (Fehr & Schmidt, 1999): in Figure 8, C switches from advantageous to disadvantageous inequality from example a to b, whereas D makes the same switch from example d to e; in
example c, C and D have the same absolute inequality but C has disadvantageous inequality and D has advantageous inequality. The magnitude of $γ$ will likely be of a similar order of magnitude as the
payoffs (i.e., if the payoffs are in the 100s, a $γ$ of 1 or 2 is unlikely to have much of an effect on people's decisions). Again, these questions will depend on the experimental question.
2.2. Asymmetric games and decompositions
So far, we have only dealt with symmetric games and symmetric decompositions: both players have the same options and the same potential outcomes. But real life doesn’t consist exclusively of
symmetric situations. Outcomes can differ between people, the actions available to them can differ, or both. In the context of decomposed games, there can be asymmetric decomposed games and/or
asymmetric decompositions. To account for decomposed games with asymmetries, either in payoff matrix or the decomposed options, we expand our previous section.
Conceptually, we can distinguish between 2 different kinds of asymmetries in the payoff structures of 2 $×$ 2 games (asymmetric games): first, both players’ outcomes have the same payoff structure
but with different values; second, the ordering of the payoffs differs between both players. Figure 9 displays both situations: first, in Figure 9a, the payoffs of both players are that of the
Prisoner’s Dilemma, but the row player’s payoffs are multiplied by 10. Thus, for each player, the standard game-theoretic strategic considerations are the same as in a symmetric Prisoner’s Dilemma,
but psychologically it might feel quite different. For example, if we consider different aspects of social-value orientation (Bogaert et al., 2008; Murphy & Ackermann, 2013), joint gain is now
highest if the row player defects and the column player cooperates – but if a player cares most about reducing inequality, then the opposite would yield the best result (if the row player cooperates
and the column player defects, the absolute difference between both players is only 3). Second, in Figure 9b, both players have the same values for their payoffs, but the payoffs of the column player
are no longer in the order of the Prisoner’s Dilemma, but instead in the order of Concord. This combination of strict ordinal games constitutes a new game (Bruns, 2015). Thus, there are two different
kinds of asymmetries of the payoff structure that we could incorporate into decomposed games.
Additionally, we would like to incorporate situations in which the decompositions (actions) differ between the players (asymmetric decompositions). We can use the same approach as before, but need to
specify independent variables for each player, both for the payoffs and for the decomposition. We thus expand Figure 3 to Figure 10:
Figure 11 provides the summary of how to decompose any 2 $×$ 2 game. As before, the full explanation is in the Appendix. The game can be symmetric or either type of asymmetric, and the decompositions
can be symmetric or asymmetric. 2 parameters ($α$ and $β$) can be chosen freely: as before, the self and other columns have to add up to a constant for the ultimate payoffs to be constant. The
necessary condition CC - DC = CD - DD still holds, but separately for the payoff matrices of each player. Thus, asymmetric games are only decomposable if each of the players’ individual games is decomposable.
For asymmetric decompositions, the same principles apply for selecting the parameter as for symmetric decompositions, with the sole difference that now there are 2 separate parameters that can be
chosen freely to accommodate a researcher’s methodological needs. In such asymmetric decompositions, selecting ideal parameters for $α$ and $β$ is less intuitive than for symmetric decompositions
because the Self and Other allocations for the two players no longer coincide; instead, each player’s final payoff receives its own parameter: $α$ is for Player 2 (Self allocation for Player 2 and
Other allocation for Player 1), and $β$ is for Player 1 (Self allocation for Player 1 and Other allocation for Player 2). As with symmetric decompositions, this flexibility allows researchers to vary
different kinds of inequalities, but with the added flexibility that for asymmetric decompositions there can be further inequality between the two players’ total contributions.
As examples, Figure 12 shows a symmetric Prisoner’s Dilemma decomposed into asymmetric decompositions, where player 1 contributes more than player 2; Figure 13 shows decompositions for when the
payoff structure differs between both players: player 1’s payoffs are of a Prisoner’s Dilemma, and player 2’s payoffs are of Concord.
2.3. Games with ties
So far, we only considered ordinal games with strict preferences, such that no 2 payoffs can be the same for a given player. For example, for a game to be considered a Prisoner’s Dilemma, DC has to be larger
than CC, the player cannot be indifferent about the order of the two outcomes. But just as we generalized symmetric games to asymmetric games, in real life different outcomes can be equally
appealing. Thus, our final expansion includes games with ties between payoffs (e.g., DC $>$ CC $=$ DD $>$ CD).
As before, for a game to be decomposable, the necessary condition still holds for each player’s payoffs: DC + CD = CC + DD, even if outcomes are equal. Take a game with the payoff matrix (DC = 3, CC
= 2, DD = 2, CD = 1). This game is between the Prisoner’s Dilemma (DC $>$ CC $>$ DD $>$ CD) and Deadlock (DC $>$ DD $>$ CC $>$ CD). Because DC + CD = CC + DD, the game is decomposable. Figure 14
shows two possible decomposed versions of this game, one symmetric and the other asymmetric.
Only games with ties that lie ‘between’ decomposable ordinal games can be decomposed (i.e., the dashed lines in Figure 5), including the special case where C = D and DC = CC = DD = CD (i.e., possible
actions and outcomes are identical). Thus, decomposed games can not only incorporate symmetric and asymmetric games for ordinal games with strict preferences, but also for games with ties. This
allows for even more flexible and nuanced experimental designs and thus expands the range of questions one can answer with decomposed games.
3. Conceptual differences between matrix games and decomposed games
When could one use the matrix-form and when could one use the decomposed form? Any specific response depends on the specific experimental question, but we can provide general guidelines that might
aid deciding between these two ways of presenting 2 $×$ 2 games in an experiment.
To our knowledge, not much research has systematically compared games in matrix form and decomposed form. Although some early studies compared the two (Evans & Crumbaugh, 1966; Messick & McClintock,
1968; Pruitt, 1967), these studies focused on individual aspects: Evans and Crumbaugh compared participants’ cooperative choices in the matrix-form and one decomposed form and found that for that
particular decomposition and payoffs, participants cooperated more in the decomposed than in the matrix form; Pruitt generalised symmetric decomposed games in the Prisoner’s Dilemma and found
differences in cooperation rate between the matrix-form and decomposed form, and differences in cooperation rate between different decomposed forms of the same payoff matrix; Messick and McClintock
found that decomposed games could be used to assess different motivational aspects of their participants. Those studies deal only with the symmetric Prisoner’s Dilemma (no other 2 $×$ 2 games and no
asymmetries between players) and mainly show that decomposed games can affect people’s decisions, without attempting to systematically study how or why decomposed games can affect people’s behaviour.
Similarly, although the term ‘decomposed games’ has been used extensively in social-value orientation, these studies are not games but allocation decisions because only one person decides (Murphy &
Ackermann, 2013) - and thus these studies also do not tell us anything about the differences between matrix form and decomposed form. Many of the differences we point out are thus ‘potential
differences’, rather than ‘established differences’, and could be tested empirically in future studies.
First, the main conceptual difference between matrix form and decomposed form lies in their different foci: the matrix form emphasises outcomes, the decomposed form emphasises the actions. In matrix
form, players decide between different outcomes, but in decomposed games players can base their decision on either the intrinsic value of an action itself (e.g., I prefer giving equally to both and
will choose the action with the smallest difference between both players) or on the outcome (as in the matrix form). This distinction is related to the distinction between deontological ethics and
consequentialism, where the former values the action itself higher and the latter values the outcome higher, independent of the action that went into the outcome (Alexander & Moore, 2023). This
difference between action and outcome is also reminiscent of the distinction between procedural fairness and outcome fairness (Brockner & Wiesenfeld, 1996). Decomposed games might provide a new angle
to study open questions in those fields.
Second, games might be more ecologically valid in decomposed form than in matrix form. In many situations in life, actions have a direct effect on oneself and someone else, independent of what the
other person does. In the matrix form, however, actions only exist in the interdependent context; thus, the decomposed form might be more ecologically valid because it lets people decide between
actions with inherent value, which add up to a specific 2 $×$ 2 game (Pruitt, 1967).
Third, from a practical perspective, games in decomposed form might be easier to understand for participants (and animals). Thus, for any experimental design that might benefit from a simpler task
(e.g., if the participants/animals might struggle with the instructions; if there is little time in the experiment to explain the task; when testing children; if the rest of the experiment is already
very complex), it might be useful to use the decomposed form. The simpler task structure might facilitate research that would otherwise not be possible. Studying cooperation and social
decision-making in animals often requires a relatively simple experimental design (e.g., the car-driving task with rhesus macaques (Ong et al., 2020) or the rope-pulling task with elephants (Plotnik et al., 2011)); using decomposed games may allow novel variants of these tasks. Caution is advised: any decomposition will affect the framing of the situation, which might have unintended consequences.
4. Practical Guidelines
So far we have mainly considered theoretical aspects of decomposed games, such as what decomposed games are, which games can be decomposed, which decompositions are possible for any given payoff
matrix. In this section, we summarise the main practical considerations for using decomposed games in experimental research (decomposed games can of course also be used for simulation studies, but
our focus is empirical investigations), including their limitations.
One of the main considerations for decomposed games is the question of which values to choose for $α$, $β$, and $γ$. We highlight two aspects that can be manipulated with these parameters:
inequality, and valence.
Inequality. All three parameters can be used to change various aspects of inequality of the contributions (absolute, advantageous, disadvantageous) between the allocations for self and other. Thus,
when selecting the ideal decompositions for a given payoff matrix, researchers ought to carefully consider whether changing these parameters might inadvertently affect various aspects of
inequality of the options, and whether this might affect people’s behaviour. Factors such as advantageous and disadvantageous inequality (Fehr & Schmidt, 1999) could easily be manipulated, such that
decomposed games could be used to investigate to what extent the various aspects of inequality influence people’s decisions; unlike standard 2 $×$ 2 games, decomposed games allow a disentangling of
action and outcome, such that these aspects can be considered separately from each other. Additionally, for asymmetric decompositions, $α$ and $β$ can be used to affect the total contributions for
each player. For example, Figure 12 shows a symmetric game with asymmetric decompositions, such that both players receive the same potential outcomes, but Player 1 contributes many points while Player 2’s total contributions are a net negative. Such asymmetries may affect people’s behaviour if they care not only about equality of outcomes but also about equality of inputs.
Valence. The other factor, valence, can equally be affected by each of the three parameters: for many payoff matrices, changing $α$, $β$, and $γ$ can determine whether an allocation for self or other ends up as a gain or as a loss. In an empirical study, we showed that losses and gains can have strong effects on people’s decisions to cooperate, leading to both
increases and decreases of cooperation, depending on the context (Kuper-Smith & Korn, 2023). For those interested in how losses and gains affect cooperation, future studies could further investigate
such questions in decomposed games: by changing the $α$, $β$, and $γ$, we could shift the different decompositions relative to 0 without changing the valence of the outcomes. This could help ask
further questions about how losses and gains affect social decisions, such as which types of losses matter most (loss of action or loss of outcome, for self or for other). Any researcher not
interested in such questions ought to ensure that their change of $α$, $β$, and $γ$ did not also lead to changes in behaviour due to changing the valence of the allocation options.
A further important question to consider is what information one wants to reveal to the participants (e.g., whether to show only the decomposition or also payoff matrix, whether the decompositions
are symmetric or not, etc.). The question of how the visibility of decomposed options and resulting payoff matrices affects people’s cooperative choices was examined in a study (Pincus & Bixenstine,
1977) in which participants either saw the decomposed options alone or alongside the resulting payoff matrices. Whether or not the payoff matrix was visible altered people’s cooperation rates. Future studies could clarify why these effects occurred: which aspects matter, and how and why these aspects affect people’s cooperative choices. It could also be
interesting to design studies in which players have different decomposed options but are not made aware of this until it is revealed, at which point one could see how people’s decisions and
attitudes about the other person change.
Decomposed Games could also be used to study what aspects of a game people pay attention to, how salient different options are, and what features people use to make decisions. For example, one could
present the payoff matrix alongside decomposed options of that payoff matrix and use eye-tracking (Polonio et al., 2015) to study which aspects people pay most attention to: do they attend most to
the final outcomes, or to the actions that lead to them? Does this differ for different decompositions, and what factors influence this?
So far, we have only discussed decomposed games in the context of standard economic games. But decomposed games can also be embedded in ecological contexts, which often have probabilistic outcomes
over multiple steps. For example, previous research (Korn & Bach, 2015, 2018) has studied how people behave in foraging situations. So far, these studies are non-social studies of risky choices, but
such foraging contexts could be expanded to include social decisions where two (or more) players make such decisions with outcomes for themselves and the other player. Any such game would then be a
decomposed game, even with probabilistic outcomes. This could also help further the link between evolutionary, economic, and psychological game theory.
Any of the features mentioned so far, especially inequality and valence, could also be investigated in clinical populations. For example, patients with Borderline Personality Disorder show impaired
social functioning (Jeung & Herpertz, 2014), and several studies found abnormal behaviour in the Ultimatum Game related to fairness (De Panfilis et al., 2019; Polgár et al., 2014; Thielmann et al.,
2014), which could be investigated by altering the inequality of action and outcomes with decomposed games; valence has been linked to compulsivity, with some studies suggesting abnormalities in
addiction (Mogg et al., 2003) and obsessive-compulsive behaviour (Sachdev & Malhi, 2005), which could be further investigated by manipulating the valence of the actions and outcomes. Thus, decomposed
games could be used to further understand these conditions.
Decomposed Games are of course not without limitations. We highlight three main limitations, all of which are caused by the fixed way that decomposed games are necessarily set up. First, the biggest
limitation of decomposed games is that not every game can be decomposed. Of the 12 symmetric ordinal 2 $×$ 2 games with strict preferences, only 4 can be decomposed (Prisoner’s Dilemma, Deadlock,
Harmony, and Concord); of the games with ties, only those ‘between’ the 4 ordinal games can be decomposed. This means that some of the most interesting 2 $×$ 2 games, which are commonly used to study
social dilemmas, cannot be decomposed, including Stag-Hunt and Chicken. This might also explain why decomposed games have almost exclusively focused on the Prisoner’s Dilemma so far: the other 3
decomposable games offer less of a social dilemma, in that, usually, one option is preferable to the other, which is less uniformly the case for the Prisoner’s Dilemma (or other games like Stag-Hunt
or Chicken). Thus, given that only certain games can be decomposed, this limits the variety of games (and therefore social situations) that can be studied using decomposed games. Second, although
decomposed games allow for lots of flexibility when choosing the right decomposed options for one’s experimental design, there are several factors that cannot be changed. For example, for symmetric
decompositions, the total amount of points allocated to Self and Other for the C option is always equal to CC (because the CD from Self is cancelled out by the -CD from Other, as is the positive and
negative $γ$ parameter; for D the total allocation is always equal to DC - CC + CD for the same reason). Thus, certain potentially interesting aspects one might like to vary cannot be varied independently with decomposed games (one could change CC, but this obviously also alters the final payoffs, rather than just the decomposed options). Thus, despite the general flexibility (especially for asymmetric decompositions), there are some limitations as to what can be altered. Third, decomposed games’ main strength, namely that they can change the framing of the game, is also a
potential weakness: when setting up a decomposed game to study one factor, say how losses and gains affect people’s cooperative decisions (Kuper-Smith & Korn, 2023) in decomposed games, other factors
can easily interfere, such as advantageous and disadvantageous inequality (Doppelhofer et al., 2021; Fehr & Schmidt, 1999) of the actions/decompositions: changing the parameters to alter one factor
automatically will change other, often unintended, factors that could have large effects on people’s behaviour. Thus, whenever choosing a decomposition for one’s study, one has to be careful to
consider what other factors are affected by one’s manipulation. While this is a generic problem of almost any experimental design, decomposed games are particularly likely to lead to unintended
consequences, if not considered carefully.
5. Conclusion
Linearly-additive decomposed 2 $×$ 2 games have existed for more than 50 years in the context of the Prisoner’s Dilemma but have been studied relatively rarely and, to our knowledge, have never been
explored systematically. This is surprising, given the flexibility that they provide in assessing various aspects of social preferences, particularly in relation to inequality between two players in
terms of actions and outcomes. In this article, we highlighted linearly-additive decomposed games by explaining their logic, showing how they apply beyond the Prisoner’s Dilemma, providing a way to calculate possible decompositions for a given payoff matrix (for symmetric and asymmetric games and decompositions), and offering some practical suggestions. Decomposed games allow for more flexible experimental designs, potentially enabling more ecologically varied and realistic experimental set-ups.
Author Contributions
BJKS wrote an initial short draft which both authors developed, expanded and edited.
Funding
This work was supported by the Emmy Noether Research Group grant (392443797) from the German Research Foundation (DFG).
Competing Interests
Both authors report no competing interests.
Data Accessibility Statement
This is a theoretical article and contains no data.
6. Appendix
To keep the main article short, we present the relevant mathematics here. Any introductory course on Linear Algebra, such as the one by Gilbert Strang (https://ocw.mit.edu/courses/mathematics/
18-06sc-linear-algebra-fall-2011/index.htm), should cover the following.
Why is CC - DC = CD - DD a necessary condition for a Prisoner’s Dilemma payoff matrix to be decomposable? In a decomposed game, the standard options of C and D indicate a specific amount of points
each player gets (see Figure 1, right example). We can calculate the standard payoffs DC, CC, DD and CD the following way:
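Note: the displayed equations at this point are a reconstruction, since the original figures are not reproduced here. Assuming, as the discussion of the $γ$-shift below suggests, that $x1$ and $x2$ denote the Self and Other allocations of option C, and $x3$ and $x4$ those of option D, the payoffs read
$$DC = x_2 + x_3, \qquad CC = x_1 + x_2, \qquad DD = x_3 + x_4, \qquad CD = x_1 + x_4,$$
or, written as a linear system $A\,x = b$ with the payoffs collected on the right-hand side,
$$\begin{pmatrix} 0 & 1 & 1 & 0 \\ 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} DC \\ CC \\ DD \\ CD \end{pmatrix}.$$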
The first column stands for $x1$, the second column for $x2$, and so forth. To simplify the notation, we write $A$ and $b$ as a single augmented matrix. We can now test whether this set of
equations has a solution, and if so how many and under which conditions. We use Gaussian elimination to bring this matrix into the upper triangular form:
The last row now states that 0$x1$ + 0$x2$ + 0$x3$ + 0$x4$ = DD + CC - CD - DC. Therefore, this system of equations only has a solution (i.e., the matrix is decomposable) if DD + CC - CD - DC = 0, that is, if CC - DC = CD - DD.
Therefore, this set of equations does not have a solution if CC - DC $≠$ CD - DD. If this condition were not to be fulfilled, then the set of equations would run into internal contradictions. This is
why Pruitt said that CC - DC = CD - DD is a necessary condition for a Prisoner’s Dilemma payoff matrix to be decomposable.
To find all possible decompositions for a given payoff matrix, we need to solve the set of equations. We take the upper triangular matrix and bring it to reduced row echelon form.
Given that we are only interested in games that are actually decomposable, we know that the necessary condition of CC - DC = CD - DD is fulfilled. To simplify the set of equations, we can substitute
DD + CC - CD - DC with 0:
We now bring this form into the reduced row echelon form by using Gauss-Jordan elimination:
By finding the particular and the special solution, we can now find the generic solution to this set of equations:
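Note: the displayed solution is likewise reconstructed here. One valid way of writing the general solution, with a free parameter $γ$ (the particular solution used in Table 2 of the article may be chosen differently), is
$$\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{pmatrix} = \begin{pmatrix} CC \\ 0 \\ DC \\ CD - CC \end{pmatrix} + \gamma \begin{pmatrix} -1 \\ 1 \\ -1 \\ 1 \end{pmatrix}.$$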
To make it easier to use, we reformatted this equation in Table 2. This solution also explains why if there is one decomposition, there are infinitely many decompositions: we can take a particular
decomposition, and then shift the decomposition such that $x1$ and $x3$ are decreased by $γ$ to the same degree that $x2$ and $x4$ are increased. In other words, if for both options the self-payoff
is decreased as much as the other-payoff is increased, the resulting payoff matrix remains constant.
At no point did we specify the order of the payoffs. This means that the results from this section hold true independent of which game is being used. As we show in the main text, only some games can
abide by the necessary condition, but for those games, the solutions offered here apply equally.
So far we have assumed that both players play the same game with the same decomposed options. Using the same approach as for symmetric games, we can now solve the system without those assumptions and
thus incorporate asymmetric games and asymmetric decompositions. We skipped the steps here, but the approach is the same as above, just with more variables:
As before, we now get the necessary condition that DC + CD - CC - DD = 0, but for both players separately. This means that both players can have different payoff matrices, but they still have to each
abide by the necessary condition for a 2 $×$ 2 game to be decomposable.
Again, to get a generic formula for calculating the decompositions, we solve this set of equations:
As before, we reformatted this into a more usable format in Figure 11.
Alexander, L., & Moore, M. (2023). Deontological ethics. Stanford Encyclopedia of Philosophy (Online). https://plato.stanford.edu/entries/ethics-deontological/
Bogaert, S., Boone, C., & Declerck, C. (2008). Social value orientation and cooperation in social dilemmas: A review and conceptual model. British Journal of Social Psychology, 47(3), 453–480. https:
Brockner, J., & Wiesenfeld, B. M. (1996). An integrative framework for explaining reactions to decisions: Interactive effects of outcomes and procedures. Psychological Bulletin, 120(2), 189–208.
Bruns, B. (2015). Names for games: Locating 2 × 2 games. Games, 6(4), 495–520. https://doi.org/10.3390/g6040495
De Panfilis, C., Schito, G., Generali, I., Gozzi, L. A., Ossola, P., Marchesi, C., & Grecucci, A. (2019). Emotions at the border: Increased punishment behavior during fair interpersonal exchanges in
borderline personality disorder. Journal of Abnormal Psychology, 128(2), 162–172. https://doi.org/10.1037/abn0000404
Doppelhofer, L. M., Hurlemann, R., Bach, D. R., & Korn, C. W. (2021). Social motives in a patient with bilateral selective amygdala lesions: Shift in prosocial motivation but not in social value
orientation. Neuropsychologia, 162, 108016. https://doi.org/10.1016/j.neuropsychologia.2021.108016
Engel, C. (2011). Dictator games: A meta study. Experimental Economics, 14(4), 583–610. https://doi.org/10.1007/s10683-011-9283-7
Evans, G. W., & Crumbaugh, C. M. (1966). Effects of prisoner’s dilemma format on cooperative behavior. Journal of Personality and Social Psychology, 3(4), 486–488. https://doi.org/10.1037/h0023035
Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. The Quarterly Journal of Economics, 114(3), 817–868. https://doi.org/10.1162/003355399556151
Fudenberg, D., & Tirole, J. (1991). Game theory. MIT Press.
Griesinger, D. W., & Livingston, J. W. (1973). Toward a model of interpersonal motivation in experimental games. Behavioral Science, 18(3), 173–188. https://doi.org/10.1002/bs.3830180305
Güth, W., Kocher, M. G. (2014). More than thirty years of ultimatum bargaining experiments: Motives, variations, and a survey of the recent literature. Journal of Economic Behavior Organization, 108,
396–409. https://doi.org/10.1016/j.jebo.2014.06.006
Henrich, J., Boyd, R., Bowles, S., Camerer, C., Fehr, E., Gintis, H., McElreath, R. (2001). In search of homo economicus: Behavioral experiments in 15 small-scale societies. American Economic Review,
91(2), 73–78. https://doi.org/10.1257/aer.91.2.73
Jeung, H., Herpertz, S. C. (2014). Impairments of interpersonal functioning: Empathy and intimacy in borderline personality disorder. Psychopathology, 47(4), 220–234. https://doi.org/10.1159/
Johnson, N. D., Mislin, A. A. (2011). Trust games: A meta-analysis. Journal of Economic Psychology, 32(5), 865–889. https://doi.org/10.1016/j.joep.2011.05.007
King-Casas, B., Chiu, P. H. (2012). Understanding interpersonal function in psychiatric illness through multiplayer economic games. Biological Psychiatry, 72(2), 119–125. https://doi.org/10.1016/
Korn, C. W., Bach, D. R. (2015). Maintaining homeostasis by decision-making. PLoS Computational Biology, 11(5), e1004301. https://doi.org/10.1371/journal.pcbi.1004301
Korn, C. W., Bach, D. R. (2018). Heuristic and optimal policy computations in the human brain during sequential decision-making. Nature Communications, 9(1), 325. https://doi.org/10.1038/
Kuhlman, D. M., Marshello, A. F. (1975). Individual differences in game motivation as moderators of preprogrammed strategy effects in prisoner’s dilemma. Journal of Personality and Social Psychology,
32(5), 922–931. https://doi.org/10.1037/0022-3514.32.5.922
Kuper-Smith, B. J., Korn, C. W. (2023). Loss avoidance can increase and decrease cooperation. PsyArXiv.
Messick, D. M., McClintock, C. G. (1968). Motivational bases of choice in experimental games. Journal of Experimental Social Psychology, 4(1), 1–25. https://doi.org/10.1016/0022-1031(68)90046-2
Mogg, K., Bradley, B. P., Field, M., De Houwer, J. (2003). Eye movements to smoking-related pictures in smokers: Relationship between attentional biases and implicit and explicit measures of stimulus
valence. Addiction, 98(6), 825–836. https://doi.org/10.1046/j.1360-0443.2003.00392.x
Murphy, R. O., Ackermann, K. A. (2013). Social value orientation: Theoretical and measurement issues in the study of social preferences. Personality and Social Psychology Review, 18(1), 13–41. https:
Nowak, M. A. (2006). Five rules for the evolution of cooperation. Science, 314(5805), 1560–1563. https://doi.org/10.1126/science.1133755
Ong, W. S., Madlon-Kay, S., Platt, M. L. (2020). Neuronal correlates of strategic cooperation in monkeys. Nature Neuroscience, 24(1), 116–128. https://doi.org/10.1038/s41593-020-00746-9
Perc, M., Jordan, J. J., Rand, D. G., Wang, Z., Boccaletti, S., Szolnoki, A. (2017). Statistical physics of human cooperation. Physics Reports, 687, 1–51. https://doi.org/10.1016/
Pincus, J., Bixenstine, V. E. (1977). Cooperation in the decomposed prisoner’s dilemma game. Journal of Conflict Resolution, 21(3), 519–530. https://doi.org/10.1177/002200277702100308
Plotnik, J. M., Lair, R., Suphachoksahakun, W., De Waal, F. B. M. (2011). Elephants know when they need a helping trunk in a cooperative task. Proceedings of the National Academy of Sciences, 108
(12), 5116–5121. https://doi.org/10.1073/pnas.1101765108
Polgár, P., Fogd, D., Unoka, Z., Sirály, E., Csukly, G. (2014). Altered social decision making in borderline personality disorder: An ultimatum game study. Journal of Personality Disorders, 28(6),
841–852. https://doi.org/10.1521/pedi_2014_28_142
Polonio, L., Di Guida, S., Coricelli, G. (2015). Strategic sophistication and attention in games: An eye-tracking study. Games and Economic Behavior, 94, 80–96. https://doi.org/10.1016/
Poundstone, W. (1993). Prisoner’s dilemma: John von neumann, game theory, and the puzzle of the bomb. Anchor.
Pruitt, D. G. (1967). Reward structure and cooperation: The decomposed Prisoner’s Dilemma game. Journal of Personality and Social Psychology, 7(1, Pt.1), 21–27. https://doi.org/10.1037/h0024914
Rapoport, A., Chammah, A. M. (1966). The game of chicken. American Behavioral Scientist, 10(3), 10–28. https://doi.org/10.1177/000276426601000303
Rapoport, A., Guyer, M. J., Gordon, D. G. (1976). The 2 x 2 game. The University of Michigan Press.
Rilling, J. K., Sanfey, A. G. (2011). The neuroscience of social decision-making. Annual Review of Psychology, 62(1), 23–48. https://doi.org/10.1146/annurev.psych.121208.131647
Russell, B. (1959). Common sense and nuclear warfare. George Allen Unwin.
Sachdev, P. S., Malhi, G. S. (2005). Obsessive-compulsive behaviour: a disorder of decision-making. Australian and New Zealand Journal of Psychiatry, 39(9), 757–763. https://doi.org/10.1111/
Smith, J. M., Price, G. R. (1973). The logic of animal conflict. Nature, 246(5427), 15–18. https://doi.org/10.1038/246015a0
Thielmann, I., Böhm, R., Ott, M., Hilbig, B. E. (2021). Economic games: An introduction and guide for research. Collabra: Psychology, 7(1), 19004. https://doi.org/10.1525/collabra.19004
Thielmann, I., Hilbig, B. E., Niedtfeld, I. (2014). Willing to give but not to forgive: Borderline personality features and cooperative behavior. Journal of Personality Disorders, 28(6), 778–795.
Van Lange, P. A. M., Joireman, J., Parks, C. D., Van Dijk, E. (2013). The psychology of social dilemmas: A review. Organizational Behavior and Human Decision Processes, 120(2), 125–141. https://
This is an open access article distributed under the terms of the
Creative Commons Attribution License (4.0)
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. | {"url":"https://online.ucpress.edu/collabra/article/9/1/84916/197212/Linearly-Additive-Decomposed-2-2-Games-A-Primer","timestamp":"2024-11-04T21:17:36Z","content_type":"text/html","content_length":"334617","record_id":"<urn:uuid:c26ae87e-e019-44e8-a7a4-b9b841373e28>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00510.warc.gz"} |
Activity: Estimation
This activity is about estimation and doing a survey.
How many red marbles?
For this experiment you will need a bag of marbles of various colors - red, blue, green, yellow etc. The total number of marbles is not really important, but you will find it easier to work with a
number like 50. What is important is that you do not know how many of each color there are. So you will have to ask a friend or relative to choose the marbles for you.
You will estimate how many red marbles there are in the bag.
So why don't you just count them?
Of course you could, but this experiment will teach you how to make an estimate.
Scientists and statisticians use this method all the time to make estimates in real world situations where they can't just simply count.
Example: Blood Types
There are four different human blood types, called A, B, AB and O.
To test everyone, as you can imagine, would be very hard!
So to find out how many people in the world have each blood type, scientists take a sample of (say) 1,000 people and test their blood type.
From the sample they can estimate how many of each type there are in the world.
Now you're ready to begin the experiment. It's very simple.
• Shake the marbles in the bag.
• Take one marble without looking and record its color.
• Return the marble to the bag.
• Repeat this process 100 times.
Why do you return the marble to the bag?
So that the conditions for choosing the second marble are exactly the same as the conditions for choosing the first marble.
If the first marble you choose is red and you don't replace it, then for your second choice there is one less red marble in the bag, which makes the chance of choosing another red marble less.
How do you record the color?
You can use a tally/frequency table like this:
│ Color │ Tally │ Frequency │
│ Red │ │ │
│ Blue │ │ │
│ Green │ │ │
│ Yellow │ │ │
│ Purple │ │ │
│ │ │ Total = 100 │
I have included five colors in my table, but yours could be different.
Once you've completed choosing 100 times, you can work out the relative frequency of red by dividing the number of red marbles by 100.
For example, if you record 22 red marbles in your table, then the relative frequency of red is 22/100 = 0.22
How does this help you estimate the number of red marbles in the bag?
That's easy. If there are 50 marbles in the bag and red occurred 22 times out of 100, then it should occur 11 times out of 50.
In other words, the fractions 22/100 and 11/50 are equivalent fractions.
An easier way is just to multiply 0.22 by 50: 0.22 × 50 = 11
When you've finished the calculations, you can check how many red marbles there really are in the bag.
• How good was your estimate?
• How can you get a better estimate?
To get a better estimate, don't count straight away.
Instead, repeat the experiment several times and calculate the mean number of red marbles.
Then you can compare your mean number with the actual number in the bag.
You should get a much better estimate, and may even get it exactly right.
As a variation on this experiment, you could use Smarties or M and M's. But you may be tempted to eat some and ruin the results of your experiment!
How many A's?
Another variation on the above experiment can be done using the letters from the game of Scrabble. In the game of Scrabble there are 100 tiles. 98 of the tiles are inscribed with letters of the
alphabet, the other two are blank. Remove the blank ones, so you will have 98 tiles.
If you don't have the game of Scrabble, then you could make your own tiles. You will need 98 square pieces of cardboard, all the same size: 2 cm by 2 cm will do.
Write letters on the tiles as follows:
• 9 A's • 6 N's,
• 2 B's • 8 O's,
• 2 C's, • 2 P's,
• 4 D's, • 1 Q,
• 12 E's, • 6 R's,
• 2 F's, • 4 S's,
• 3 G's, • 6 T's,
• 2 H's, • 4 U's,
• 9 I's, • 2 V's,
• 1 J, • 2 W's,
• 1 K, • 1 X,
• 4 L's, • 2 Y's
• 2 M's, • 1 Z
Pretend that you don't know how many A's there are. Now do the experiment in the same way that you did the marbles experiment.
Record your results in a table as before:
│ Letter │ Tally │ Frequency │
│ A │ │ │
│ B │ │ │
│ C │ │ │
│ D │ │ │
│ E │ │ │
│ etc ... │ │ │
│ │ │ Total = 100 │
The only difference is that you will need a much longer table going all the way to Z.
When you've finished, calculate the relative frequency of the letter A and estimate the number of A's in the bag by multiplying the relative frequency by 98.
Do this several times and calculate the mean.
What result did you get?
How many people have blood type O?
Now you've learned how to make an estimate from a sample, you are ready to do a real-life experiment.
Before you begin you should read the pages How to Do a Survey and Survey Questions.
How many people in the world have blood type O?
There are four different human blood types, called A, B, AB and O. It is obviously impractical to test everybody in the world; so, to find out how many people in the world have each blood type, you
get a sample of 100 people and find out their blood type. From the sample you can estimate how many people in the world have blood type O.
But this could be a difficult experiment as many people don't even know their own blood type!
And, since you can't test them for blood type, you will have to just count the ones who do know. So to get 100 who do know their blood type means a sample size much greater than 100.
Note: Depending on where you live in the world, you could use a different physical characteristic such as hair color or eye color. But, if you live in some countries, you may not have a good
variation of different colors. That's why I chose blood type.
Which sample?
If you are still at school, then you could use your school population to do your survey. Ask permission first, then go around different classrooms.
But you might get a better result doing your survey in a local shopping center or mall. When you stop people, tell them that you are doing a survey and politely ask them if they would mind telling
you their blood type.
Ignore the ones who refuse to answer or who don't know.
Keep going until you have 100 positive answers.
Record your results in a table, as follows:
│ Blood type │ Tally │ Frequency │
│ A │ │ │
│ B │ │ │
│ AB │ │ │
│ O │ │ │
│ │ │ Total = 100 │
When you have finished, you can work out the relative frequency of blood type O from your sample.
Divide the number with blood type O by 100.
You can expect to get a better result if you use a bigger sample, or if you take several samples and calculate the mean.
Now all you need to know is how many people there are in the world. A quick search on the internet should answer this for you.
Can you now estimate how many people in the world have blood type O? | {"url":"http://wegotthenumbers.org/estimation-2.html","timestamp":"2024-11-08T07:37:54Z","content_type":"text/html","content_length":"14287","record_id":"<urn:uuid:8bbeeb9e-1af8-4a59-8eb2-1255f3494f4e>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00002.warc.gz"} |
Stability estimates for partial data inverse problems for Schrödinger operators in the high frequency limit
We discuss the partial data inverse boundary problem for the Schrödinger operator at a fixed frequency on a bounded domain in the Euclidean space, with impedance boundary conditions. Assuming that the potential is known in a neighborhood of the boundary, the knowledge of the partial Robin-to-Dirichlet map along an arbitrarily small portion of the boundary determines the potential uniquely, in a logarithmically stable way. In this talk we show that the logarithmic stability can be improved to one of Hölder type in the high frequency regime. Our arguments are based on boundary Carleman estimates for semiclassical Schrödinger operators acting on functions satisfying impedance boundary conditions. This is joint work with Gunther Uhlmann.
Witch of Agnesi
From Encyclopedia of Mathematics
A plane curve, given in the Cartesian orthogonal coordinate system by the equation
$$y(a^2+x^2)=a^3,\quad a>0.$$
If $a$ is the diameter of a circle with centre at the point $(0,a/2)$, $OA$ is a secant, $CB$ and $AM$ are parallel to the $x$-axis, and $BM$ is parallel to the $y$-axis (see Fig.), then the witch of
Agnesi is the locus of the points $M$. If the centre of the generating circle and the tangent $CB$ are shifted along the $y$-axis, the curve thus obtained is called Newton's aguinea and is a
generalization of the witch of Agnesi. The curve is named after Maria Gaetana Agnesi (1718-1799), who studied it.
The unusual name derives from a misreading of the term la versiera (from Latin versoria) "rope that turns a sail" as l'aversiera, "witch".
• [1] A.A. Savelov, "Planar curves" , Moscow (1960) (In Russian)
• [a1] J.D. Lawrence, "A catalog of special plane curves" , Dover, reprint (1972)
• [b1] Ian Stewart, Professor Stewart's Cabinet of Mathematical Curiosities, Profile Books (2010) ISBN 1846683459
This article was adapted from an original article by A.B. Ivanov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
The effects of temperature on topological materials
Topological materials exhibit exotic properties such as dissipationless charge and spin currents that could form the basis for novel technological applications such as low-power electronics or
spintronics devices. However, many topological materials lose their topological order upon increasing temperature, thus hampering practical applications.
I will describe the interplay between topology and temperature, showing that thermal expansion and electron-phonon coupling contribute similarly to the temperature dependence of the properties of
topological materials. Using the Bi2Se3 family of topological insulators as an example, I will explain why increasing temperature tends to destroy topological order. However, I will argue that this
is not a fundamental constraint on topological materials, and I will show how it is also possible to design materials in which the opposite behaviour is observed, presenting PbO2 as the first example
of a material in which temperature promotes a topologically ordered phase. Finally, I will discuss how temperature may be exploited to identify the correct topological phase of a material, an
approach that will prove particularly useful close to the boundary between two phases with different topological order.
This talk is part of the Department of Materials Science & Metallurgy Seminar Series.
Indices and Surds
This resource from Susan Wall contains five activities designed to enable students to explore indices both numerically and algebraically. The resource features a number of activities dealing with
negative indices and fractional indices.
Starter activities: contains two starter activities. In the first activity students have to represent an integer value in as many ways as they can, each way to include an index value. The second
activity, ‘Why does?’ students have to explain mathematical statements involving indices.
Matching pairs game: Students are required to match a number in index form with its integer value.
Ordering indices: Students compare the values written in index form and place the cards in numerical order.
Odd one out: Given three cards, students find the odd one out and make up their own card to match the odd one.
True or false: Aimed at addressing common misconceptions, in this activity students discuss whether the statements given are true or false.
In this resource from the DfE Standards Unit students are introduced to fractional and negative indices to enable them to evaluate numerical expressions using negative and fractional
indices and use the rules of indices with integer and fractional powers of variables. Students should have some knowledge of the rules of indices for multiplying and dividing numbers in index form.
Following on from the resource ‘Indices’, this resource from Susan Wall contains four activities designed to explore the rules of indices as well as differentiating and integrating functions
containing indices.
The rules of indices with algebraic expressions: This activity is a matching exercise with algebraic statements involving negative indices and fractional indices.
Dominoes: is a loop card activity involving negative indices and fractional indices.
Differentiation and integration involving indices: A large, multi-tiered activity matching equivalent functions, differentials and integrals.
Marking: In this activity, students are given questions and solutions to a number of differentiation and integration questions. Students are required to mark the work. The solutions contain many
common errors that are made. Students should mark the work for accuracy, correct solutions where necessary and give advice to help the candidate therefore explain what the error was and how to
correct it.
In this resource from the DfE Standards Unit students identify equivalent surds and develop their ability to simplify expressions involving surds. Students should understand what a square root is and
be able to remove brackets correctly.
This RISP activity can be used when either consolidating or revising ideas of curve-sketching and indices. The numbers phi, e and pi are used in this investigation, where students are asked to estimate the size of the numbers generated when raising these numbers to different powers. It is suggested that a graphing package would prove useful when investigating the set problem.
Mathcentre provide these resources which cover aspects of arithmetic, often used in the field of engineering. They include fractions and their associated arithmetic, calculations involving surds,
using standard form, as well as understanding and drawing the graph of a function.
Comprehensive notes, with clear descriptions, for each resource are provided, together with relevant diagrams and examples. Students wishing to review, and consolidate, their knowledge and
understanding of arithmetic will find them useful, as each topic includes a selection of questions to be completed, for which answers are provided.
The Weights' Problem - 30/03/2013
by showmyiq » Mon Apr 01, 2013 7:58 pm
Your logic is absolutely correct but no answer is provided.
Your name will be published as the one who successfully solved the puzzle!
I will provide my way of thinking and generalization of the issue:
Let’s define a set A_s as a set of elements whose subset sums can generate all the numbers from 1 to s (inclusive). In our case we are looking for an A_120 with cardinality |A_120| = x, x -> min.
We can easily see that A_1 = {1}.
There are infinitely many solutions for A_2, for example A_2 = {1,1}, or A_2 = {1,1,1}, and so on (this applies to all the sets with s > 1).
Ok, what about A_3? Well A_3 = {1,2}.
Following this logic, we can see that A_7 = {1,2,4} is the minimum solution for s = 7 (therefore x = 3).
But since walking backwards is not a very good approach, we can directly attack the given task by manipulating 120. The extracted pattern shows that the minimum number of weights we need is exactly 7.
A_120 = {60,30,15,8,4,2,1}
You can see that x = 7 and that it’s not a perfect solution (we have duplicates: 15, for example, can be represented by {15} or by {8,4,2,1}), but it’s the best solution possible.
If you want to calculate A_s, then the minimum set will be defined as {ceil(s/2^1), ceil(s/2^2), …, ceil(s/2^x) = 1}
*by ceil I mean the round-up value: ceil(3/2) = ceil(1.5) = 2
Analyzing further, the best way to avoid duplicates is to pick s equal to 2^j - 1 for an integer j, so that the weights become the exact powers of two 1, 2, 4, …, 2^(j-1).
Another good reason to use binary math in computers …
Final Answer: 60, 30, 15, 8, 4, 2, 1 | {"url":"http://www.showmyiq.com/forum/viewtopic.php?f=6&t=564&p=965&sid=efc21f01878ed558fc06255c34ea4ae4","timestamp":"2024-11-08T22:15:49Z","content_type":"application/xhtml+xml","content_length":"27184","record_id":"<urn:uuid:b318af7b-a281-49d0-b19c-5b59c63190d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00364.warc.gz"} |
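For anyone who wants to double-check this, here is a small verification sketch (C++ chosen only for concreteness; any language works). It enumerates every subset of {60, 30, 15, 8, 4, 2, 1} and tests that each target value from 1 to 120 can be formed:

#include <array>
#include <iostream>
#include <set>

int main()
{
  const std::array<int, 7> weights = {60, 30, 15, 8, 4, 2, 1};

  // Collect every sum that some subset of the weights can produce.
  std::set<int> reachable;
  for (unsigned mask = 0; mask < (1u << weights.size()); ++mask)
  {
    int sum = 0;
    for (unsigned i = 0; i < weights.size(); ++i)
      if (mask & (1u << i))
        sum += weights[i];
    reachable.insert(sum);
  }

  // Report any value between 1 and 120 that cannot be formed.
  bool all_ok = true;
  for (int target = 1; target <= 120; ++target)
    if (reachable.count(target) == 0)
    {
      std::cout << "cannot form " << target << "\n";
      all_ok = false;
    }
  std::cout << (all_ok ? "all values 1..120 can be formed\n"
                       : "some values are missing\n");
  return 0;
}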
This tutorial depends on step-8.
This program does not introduce any new mathematical ideas; in fact, all it does is to do the same computations that step-8 already does, but it does so in a different manner. Instead of using
deal.II's linear algebra classes, we build everything on top of classes deal.II provides that wrap around the linear algebra implementation of the PETSc library. And since PETSc allows the
distribution of matrices and vectors across several computers within an MPI network, the resulting code will even be capable of solving the problem in parallel. If you don't know what PETSc is, then
this would be a good time to take a quick glimpse at their homepage.
As a prerequisite of this program, you need to have PETSc installed, and if you want to run in parallel on a cluster, you also need METIS to partition meshes. The installation of deal.II together
with these two additional libraries is described in the README file.
Now, for the details: as mentioned, the program does not compute anything new, so the use of finite element classes, etc., is exactly the same as before. The difference to previous programs is that
we have replaced almost all uses of classes Vector and SparseMatrix by their near-equivalents PETScWrappers::MPI::Vector and PETScWrappers::MPI::SparseMatrix that store data in a way so that every
processor in the MPI network only stores a part of the matrix or vector. More specifically, each processor will only store those rows of the matrix that correspond to a degree of freedom it "owns".
For vectors, they either store only elements that correspond to degrees of freedom the processor owns (this is what is necessary for the right hand side), or also some additional elements that make
sure that every processor has access the solution components that live on the cells the processor owns (so-called locally active DoFs) or also on neighboring cells (so-called locally relevant DoFs).
The interface the classes from the PETScWrapper namespace provide is very similar to that of the deal.II linear algebra classes, but instead of implementing this functionality themselves, they simply
pass on to their corresponding PETSc functions. The wrappers are therefore only used to give PETSc a more modern, object oriented interface, and to make the use of PETSc and deal.II objects as
interchangeable as possible. The main point of using PETSc is that it can run in parallel. We will make use of this by partitioning the domain into as many blocks ("subdomains") as there are
processes in the MPI network. At the same time, PETSc also provides dummy MPI stubs, so you can run this program on a single machine if PETSc was configured without MPI.
Parallelizing software with MPI
Developing software to run in parallel via MPI requires a bit of a change in mindset because one typically has to split up all data structures so that every processor only stores a piece of the
entire problem. As a consequence, you can't typically access all components of a solution vector on each processor – each processor may simply not have enough memory to hold the entire solution
vector. Because data is split up or "distributed" across processors, we call the programming model used by MPI "distributed memory computing" (as opposed to "shared memory computing", which would
mean that multiple processors can all access all data within one memory space, for example whenever multiple cores in a single machine work on a common task). Some of the fundamentals of distributed
memory computing are discussed in the Parallel computing with multiple processors using distributed memory documentation topic, which is itself a sub-topic of the Parallel computing topic.
In general, to be truly able to scale to large numbers of processors, one needs to split between the available processors every data structure whose size scales with the size of the overall problem.
(For a definition of what it means for a program to "scale", see this glossary entry.) This includes, for example, the triangulation, the matrix, and all global vectors (solution, right hand side).
If one doesn't split all of these objects, one of those will be replicated on all processors and will eventually simply become too large if the problem size (and the number of available processors)
becomes large. (On the other hand, it is completely fine to keep objects with a size that is independent of the overall problem size on every processor. For example, each copy of the executable will
create its own finite element object, or the local matrix we use in the assembly.)
In the current program (as well as in the related step-18), we will not go quite this far but present a gentler introduction to using MPI. More specifically, the only data structures we will
parallelize are matrices and vectors. We do, however, not split up the Triangulation and DoFHandler classes: each process still has a complete copy of these objects, and all processes have exact
copies of what the other processes have. We will then simply have to mark, in each copy of the triangulation on each of the processors, which processor owns which cells. This process is called
"partitioning" a mesh into subdomains.
For larger problems, having to store the entire mesh on every processor will clearly yield a bottleneck. Splitting up the mesh is slightly, though not much more, complicated (from a user perspective,
though it is much more complicated under the hood) to achieve and we will show how to do this in step-40 and some other programs. There are numerous occasions where, in the course of discussing how a
function of this program works, we will comment on the fact that it will not scale to large problems and why not. All of these issues will be addressed in step-18 and in particular step-40, which
scales to very large numbers of processes.
Philosophically, the way MPI operates is as follows. You typically run a program via
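(The command itself is not preserved here; it is essentially the standard MPI launch line, where the launcher may also be called mpiexec depending on your installation:)

mpirun -np 32 ./step-17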
which means to run it on (say) 32 processors. (If you are on a cluster system, you typically need to schedule the program to run whenever 32 processors become available; this will be described in the
documentation of your cluster. But under the hood, whenever those processors become available, the same call as above will generally be executed.) What this does is that the MPI system will start 32
copies of the step-17 executable. (The MPI term for each of these running executables is that you have 32 MPI processes.) This may happen on different machines that can't even read from each others'
memory spaces, or it may happen on the same machine, but the end result is the same: each of these 32 copies will run with some memory allocated to it by the operating system, and it will not
directly be able to read the memory of the other 31 copies. In order to collaborate in a common task, these 32 copies then have to communicate with each other. MPI, short for Message Passing
Interface, makes this possible by allowing programs to send messages. You can think of this as the mail service: you can put a letter to a specific address into the mail and it will be delivered. But
that's the extent to which you can control things. If you want the receiver to do something with the content of the letter, for example return to you data you want from over there, then two things
need to happen: (i) the receiver needs to actually go check whether there is anything in their mailbox, and (ii) if there is, react appropriately, for example by sending data back. If you wait for
this return message but the original receiver was distracted and not paying attention, then you're out of luck: you'll simply have to wait until your requested over there will be worked on. In some
cases, bugs will lead the original receiver to never check your mail, and in that case you will wait forever – this is called a deadlock. (See also video lecture 39, video lecture 41, video lecture
41.25, video lecture 41.5.)
In practice, one does not usually program at the level of sending and receiving individual messages, but uses higher level operations. For example, in the program we will use function calls that take
a number from each processor, add them all up, and return the sum to all processors. Internally, this is implemented using individual messages, but to the user this is transparent. We call such
operations collectives because all processors participate in them. Collectives allow us to write programs where not every copy of the executable is doing something completely different (this would be
incredibly difficult to program) but where all copies are doing the same thing (though on different data) for themselves, running through the same blocks of code; then they communicate data through
collectives and then go back to doing something for themselves again running through the same blocks of data. This is the key piece to being able to write programs, and it is the key component to
making sure that programs can run on any number of processors, since we do not have to write different code for each of the participating processors.
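As a concrete sketch of what such a collective looks like in deal.II (this snippet is illustrative and not part of the step-17 sources; the local value here is just a stand-in), a global sum can be computed as:

#include <deal.II/base/mpi.h>

// Each process contributes one number; the collective returns the sum of
// all contributions to every process. Here we simply sum up the ranks.
double sum_of_ranks(const MPI_Comm mpi_communicator)
{
  const double local_value =
    dealii::Utilities::MPI::this_mpi_process(mpi_communicator);
  return dealii::Utilities::MPI::sum(local_value, mpi_communicator);
}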
(This is not to say that programs are never written in ways where different processors run through different blocks of code in their copy of the executable. Programs internally also often communicate
in other ways than through collectives. But in practice, parallel finite element codes almost always follow the scheme where every copy of the program runs through the same blocks of code at the same
time, interspersed by phases where all processors communicate with each other.)
In reality, even the level of calling MPI collective functions is too low. Rather, the program below will not contain any direct calls to MPI at all, but only deal.II functions that hide this
communication from users of the deal.II. This has the advantage that you don't have to learn the details of MPI and its rather intricate function calls. That said, you do have to understand the
general philosophy behind MPI as outlined above.
What this program does
The techniques this program then demonstrates are:
• How to use the PETSc wrapper classes; this will already be visible in the declaration of the principal class of this program, ElasticProblem.
• How to partition the mesh into subdomains; this happens in the ElasticProblem::setup_system() function.
• How to parallelize operations for jobs running on an MPI network; here, this is something one has to pay attention to in a number of places, most notably in the ElasticProblem::assemble_system() function.
• How to deal with vectors that store only a subset of vector entries and for which we have to ensure that they store what we need on the current processors. See for example the
ElasticProblem::solve() and ElasticProblem::refine_grid() functions.
• How to deal with status output from programs that run on multiple processors at the same time. This is done via the pcout variable in the program, initialized in the constructor.
Since all this can only be demonstrated using actual code, let us go straight to the code without much further ado.
The commented program
Include files
First the usual assortment of header files we have already used in previous example programs:
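(The listing itself is not reproduced here. Up to version-specific details, the headers carried over from step-8 are along these lines:)

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/function.h>
#include <deal.II/lac/vector.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_refinement.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/numerics/vector_tools.h>
#include <deal.II/numerics/matrix_tools.h>
#include <deal.II/numerics/data_out.h>
#include <deal.II/numerics/error_estimator.h>
// ... plus a few more, exactly as in step-8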
And here come the things that we need particularly for this example program and that weren't in step-8. First, we replace the standard output std::cout by a new stream pcout which is used in parallel
computations for generating output only on one of the MPI processes.
We are going to query the number of processes and the number of the present process by calling the respective functions in the Utilities::MPI namespace.
Then, we are going to replace all linear algebra components that involve the (global) linear system by classes that wrap interfaces similar to our own linear algebra classes around what PETSc offers
(PETSc is a library written in C, and deal.II comes with wrapper classes that provide the PETSc functionality with an interface that is similar to the interface we already had for our own linear
algebra classes). In particular, we need vectors and matrices that are distributed across several processes in MPI programs (and simply map to sequential, local vectors and matrices if there is only
a single process, i.e., if you are running on only one machine, and without MPI support):
Then we also need interfaces for solvers and preconditioners that PETSc provides:
And in addition, we need some algorithms for partitioning our meshes so that they can be efficiently distributed across an MPI network. The partitioning algorithm is implemented in the GridTools
namespace, and we need an additional include file for a function in DoFRenumbering that allows us to sort the indices associated with degrees of freedom so that they are numbered according to the
subdomain they are associated with:
And this is simply C++ again:
The last step is as in all previous programs:
The ElasticProblem class template
The first real part of the program is the declaration of the main class. As mentioned in the introduction, almost all of this has been copied verbatim from step-8, so we only comment on the few
differences between the two tutorials. There is one (cosmetic) change in that we let solve return a value, namely the number of iterations it took to converge, so that we can output this to the
screen at the appropriate place.
The first change is that we have to declare a variable that indicates the MPI communicator over which we are supposed to distribute our computations.
Then we have two variables that tell us where in the parallel world we are. The first of the following variables, n_mpi_processes, tells us how many MPI processes there exist in total, while the
second one, this_mpi_process, indicates which is the number of the present process within this space of processes (in MPI language, this corresponds to the rank of the process). The latter will have
a unique value for each process between zero and (less than) n_mpi_processes. If this program is run on a single machine without MPI support, then their values are 1 and 0, respectively.
Next up is a stream-like variable pcout. It is, in essence, just something we use for convenience: in a parallel program, if each process outputs status information, then there quickly is a lot of
clutter. Rather, we would want to only have one process output everything once, for example the one with rank zero. At the same time, it seems silly to prefix every place where we create output with
an if (my_rank==0) condition.
To make this simpler, the ConditionalOStream class does exactly this under the hood: it acts as if it were a stream, but only forwards to a real, underlying stream if a flag is set. By setting this
condition to this_mpi_process==0 (where this_mpi_process corresponds to the rank of an MPI process), we make sure that output is only generated from the first process and that we don't get the same
lines of output over and over again, once per process. Thus, we can use pcout everywhere and in every process, but on all but one process nothing will ever happen to the information that is piped
into the object via operator<<.
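To make this concrete, here is a minimal, self-contained sketch (my own, not the tutorial's verbatim code) of how these pieces are typically initialized; the names from the Utilities::MPI namespace and the ConditionalOStream class are actual deal.II API:

#include <deal.II/base/mpi.h>
#include <deal.II/base/conditional_ostream.h>
#include <iostream>

int main(int argc, char **argv)
{
  using namespace dealii;

  // Initializes MPI (and PETSc, if configured) and finalizes both on destruction.
  Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  const MPI_Comm     mpi_communicator = MPI_COMM_WORLD;
  const unsigned int n_mpi_processes =
    Utilities::MPI::n_mpi_processes(mpi_communicator);
  const unsigned int this_mpi_process =
    Utilities::MPI::this_mpi_process(mpi_communicator);

  // Forwards to std::cout only if the condition is true, i.e., only on rank 0.
  ConditionalOStream pcout(std::cout, this_mpi_process == 0);
  pcout << "Running with " << n_mpi_processes << " MPI process(es)." << std::endl;
}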
The remainder of the list of member variables is fundamentally the same as in step-8. However, we change the declarations of matrix and vector types to use parallel PETSc objects instead. Note that
we do not use a separate sparsity pattern, since PETSc manages this internally as part of its matrix data structures.
Right hand side values
The following is taken from step-8 without change:
The ElasticProblem class implementation
The first step in the actual implementation is the constructor of the main class. Apart from initializing the same member variables that we already had in step-8, we here initialize the MPI
communicator variable we want to use with the global MPI communicator linking all processes together (in more complex applications, one could here use a communicator object that only links a subset
of all processes), and call the Utilities::MPI helper functions to determine the number of processes and where the present one fits into this picture. In addition, we make sure that output is only
generated by the (globally) first process. We do so by passing the stream we want to output to (std::cout) and a true/false flag as arguments where the latter is determined by testing whether the
process currently executing the constructor call is the first in the MPI universe.
Next, the function in which we set up the various variables for the global linear system to be solved needs to be implemented.
However, before we proceed with this, there is one thing to do for a parallel program: we need to determine which MPI process is responsible for each of the cells. Splitting cells among processes,
commonly called "partitioning the mesh", is done by assigning a subdomain id to each cell. We do so by calling into the METIS library that does this in a very efficient way, trying to minimize the
number of nodes on the interfaces between subdomains. Rather than trying to call METIS directly, we do this by calling the GridTools::partition_triangulation() function that does this at a much
higher level of programming.
As mentioned in the introduction, we could avoid this manual partitioning step if we used the parallel::shared::Triangulation class for the triangulation object instead (as we do in step-18).
That class does, in essence, everything a regular triangulation does, but it then also automatically partitions the mesh after every mesh creation or refinement operation.
Following partitioning, we need to enumerate all degrees of freedom as usual. However, we would like to enumerate the degrees of freedom in a way so that all degrees of freedom associated with cells
in subdomain zero (which resides on process zero) come before all DoFs associated with cells on subdomain one, before those on cells on process two, and so on. We need this since we have to split the
global vectors for right hand side and solution, as well as the matrix into contiguous chunks of rows that live on each of the processors, and we will want to do this in a way that requires minimal
communication. This particular enumeration can be obtained by re-ordering degrees of freedom indices using DoFRenumbering::subdomain_wise().
The final step of this initial setup is that we get ourselves an IndexSet that indicates the subset of the global number of unknowns this process is responsible for. (Note that a degree of freedom is
not necessarily owned by the process that owns a cell just because the degree of freedom lives on this cell: some degrees of freedom live on interfaces between subdomains, and are consequently only
owned by one of the processes adjacent to this interface.)
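A hedged sketch of these setup steps (my own fragment, modeled on what a setup_system() function might contain; triangulation, dof_handler, fe, n_mpi_processes and this_mpi_process are the member variables discussed above):

GridTools::partition_triangulation(n_mpi_processes, triangulation);

dof_handler.distribute_dofs(fe);
DoFRenumbering::subdomain_wise(dof_handler);

// Extract the subset of global DoF indices this process is responsible for:
const std::vector<IndexSet> locally_owned_dofs_per_proc =
  DoFTools::locally_owned_dofs_per_subdomain(dof_handler);
const IndexSet locally_owned_dofs =
  locally_owned_dofs_per_proc[this_mpi_process];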
Before we move on, let us recall a fact already discussed in the introduction: The triangulation we use here is replicated across all processes, and each process has a complete copy of the entire
triangulation, with all cells. Partitioning only provides a way to identify which cells out of all each process "owns", but it knows everything about all of them. Likewise, the DoFHandler object
knows everything about every cell, in particular the degrees of freedom that live on each cell, whether it is one that the current process owns or not. This can not scale to large problems because
eventually just storing the entire mesh, and everything that is associated with it, on every process will become infeasible if the problem is large enough. On the other hand, if we split the
triangulation into parts so that every process stores only those cells it "owns" but nothing else (or, at least a sufficiently small fraction of everything else), then we can solve large problems if
only we throw a large enough number of MPI processes at them. This is what we are going to do in step-40, for example, using the parallel::distributed::Triangulation class. On the other hand, most of
the rest of what we demonstrate in the current program will actually continue to work whether we have the entire triangulation available, or only a piece of it.
We need to initialize the objects denoting hanging node constraints for the present grid. As with the triangulation and DoFHandler objects, we will simply store all constraints on each process;
again, this will not scale, but we show in step-40 how one can work around this by only storing on each MPI process the constraints for degrees of freedom that actually matter on this particular process.
Now we create the sparsity pattern for the system matrix. Note that we again compute and store all entries and not only the ones relevant to this process (see step-18 or step-40 for a more efficient
way to handle this).
Now we determine the set of locally owned DoFs and use that to initialize parallel vectors and matrix. Since the matrix and vectors need to work in parallel, we have to pass them an MPI communication
object, as well as information about the partitioning contained in the IndexSet locally_owned_dofs. The IndexSet contains information about the global size (the total number of degrees of freedom)
and also what subset of rows is to be stored locally. Note that the system matrix needs that partitioning information for the rows and columns. For square matrices, as it is the case here, the
columns should be partitioned in the same way as the rows, but in the case of rectangular matrices one has to partition the columns in the same way as vectors are partitioned with which the matrix is
multiplied, while rows have to be partitioned in the same way as destination vectors of matrix-vector multiplications:
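A hedged sketch of the sparsity pattern creation and these initializations (my own fragment; system_matrix, solution and system_rhs are of the PETScWrappers::MPI types mentioned above):

DynamicSparsityPattern dsp(dof_handler.n_dofs());
DoFTools::make_sparsity_pattern(dof_handler, dsp,
                                hanging_node_constraints,
                                /*keep_constrained_dofs=*/false);

// Square matrix: rows and columns are partitioned identically.
system_matrix.reinit(locally_owned_dofs, locally_owned_dofs,
                     dsp, mpi_communicator);
solution.reinit(locally_owned_dofs, mpi_communicator);
system_rhs.reinit(locally_owned_dofs, mpi_communicator);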
We now assemble the matrix and right hand side of the problem. There are some things worth mentioning before we go into detail. First, we will be assembling the system in parallel, i.e., each process
will be responsible for assembling on cells that belong to this particular process. Note that the degrees of freedom are split in a way such that all DoFs in the interior of cells and between cells
belonging to the same subdomain belong to the process that owns the cell. However, even then we sometimes need to assemble on a cell with a neighbor that belongs to a different process, and in these
cases when we add up the local contributions into the global matrix or right hand side vector, we have to transfer these entries to the process that owns these elements. Fortunately, we don't have to
do this by hand: PETSc does all this for us by caching these elements locally, and sending them to the other processes as necessary when we call the compress() functions on the matrix and vector at
the end of this function.
The second point is that once we have handed over matrix and vector contributions to PETSc, it is a) hard, and b) very inefficient to get them back for modifications. This is not only the fault of
PETSc, it is also a consequence of the distributed nature of this program: if an entry resides on another processor, then it is necessarily expensive to get it. The consequence of this is that we
should not try to first assemble the matrix and right hand side as if there were no hanging node constraints and boundary values, and then eliminate these in a second step (using, for example,
AffineConstraints::condense()). Rather, we should try to eliminate hanging node constraints before handing these entries over to PETSc. This is easy: instead of copying elements by hand into the
global matrix (as we do in step-4), we use the AffineConstraints::distribute_local_to_global() functions to take care of hanging nodes at the same time. We also already did this in step-6. The second
step, elimination of boundary nodes, could also be done this way by putting the boundary values into the same AffineConstraints object as hanging nodes (see the way it is done in step-6, for
example); however, it is not strictly necessary to do this here because eliminating boundary values can be done with only the data stored on each process itself, and consequently we use the approach
used before in step-4, i.e., via MatrixTools::apply_boundary_values().
All of this said, here is the actual implementation starting with the general setup of helper variables. (Note that we still use the deal.II full matrix and vector types for the local systems as
these are small and need not be shared across processes.)
The next thing is the loop over all elements. Note that we do not have to do all the work on every process: our job here is only to assemble the system on cells that actually belong to this MPI
process, all other cells will be taken care of by other processes. This is what the if-clause immediately after the for-loop takes care of: it queries the subdomain identifier of each cell, which is
a number associated with each cell that tells us about the owner process. In more generality, the subdomain id is used to split a domain into several parts (we do this above, at the beginning of
setup_system()), and which allows us to identify which subdomain a cell is living on. In this application, we have each process handle exactly one subdomain, so we identify the terms subdomain and MPI process.
Apart from this, assembling the local system is relatively uneventful if you have understood how this is done in step-8. As mentioned above, distributing local contributions into the global matrix
and right hand sides also takes care of hanging node constraints in the same way as is done in step-6.
The next step is to "compress" the vector and the system matrix. This means that each process sends the additions that were made to those entries of the matrix and vector that the process did not own
itself to the process that owns them. After receiving these additions from other processes, each process then adds them to the values it already has. These additions are combining the integral
contributions of shape functions living on several cells just as in a serial computation, with the difference that the cells are assigned to different processes.
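A hedged sketch of the skeleton of this loop and the final communication step (my own fragment; cell_matrix, cell_rhs and local_dof_indices are the usual local objects known from step-8):

for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->subdomain_id() == this_mpi_process)
    {
      // ... integrate cell_matrix and cell_rhs exactly as in step-8 ...

      cell->get_dof_indices(local_dof_indices);
      hanging_node_constraints.distribute_local_to_global(
        cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs);
    }

// Ship contributions to off-process rows to the processes that own them:
system_matrix.compress(VectorOperation::add);
system_rhs.compress(VectorOperation::add);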
The global matrix and right hand side vectors have now been formed. We still have to apply boundary values, in the same way as we did, for example, in step-3, step-4, and a number of other programs.
The last argument to the call to MatrixTools::apply_boundary_values() below allows for some optimizations. It controls whether we should also delete entries (i.e., set them to zero) in the matrix
columns corresponding to boundary nodes, or to keep them (and passing true means: yes, do eliminate the columns). If we do eliminate columns, then the resulting matrix will be symmetric again if it
was before; if we don't, then it won't. The solution of the resulting system should be the same, though. The only reason why we may want to make the system symmetric again is that we would like to
use the CG method, which only works with symmetric matrices. The reason why we may not want to make the matrix symmetric is because this would require us to write into column entries that actually
reside on other processes, i.e., it involves communicating data.
Experience tells us that CG also works (and works almost as well) if we don't remove the columns associated with boundary nodes, which can be explained by the special structure of this particular
non-symmetry. To avoid the expense of communication, we therefore do not eliminate the entries in the affected columns.
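A hedged sketch of this step, with false as the last argument so that matrix columns are kept as just discussed (the zero boundary function with dim components is my assumption, modeled on step-8):

std::map<types::global_dof_index, double> boundary_values;
VectorTools::interpolate_boundary_values(
  dof_handler, 0, Functions::ZeroFunction<dim>(dim), boundary_values);

MatrixTools::apply_boundary_values(
  boundary_values, system_matrix, solution, system_rhs, false);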
Having assembled the linear system, we next need to solve it. PETSc offers a variety of sequential and parallel solvers, for which we have written wrappers that have almost the same interface as is
used for the deal.II solvers used in all previous example programs. The following code should therefore look rather familiar.
At the top of the function, we set up a convergence monitor, and assign it the accuracy to which we would like to solve the linear system. Next, we create an actual solver object using PETSc's CG
solver which also works with parallel (distributed) vectors and matrices. And finally a preconditioner; we choose to use a block Jacobi preconditioner which works by computing an incomplete LU
decomposition on each diagonal block of the matrix. (In other words, each MPI process computes an ILU from the rows it stores by throwing away columns that correspond to row indices not stored
locally; this yields a square matrix block from which we can compute an ILU. That means that if you run the program with only one process, then you will use an ILU(0) as a preconditioner, while if it
is run on many processes, then we will have a number of blocks on the diagonal and the preconditioner is the ILU(0) of each of these blocks. In the extreme case of one degree of freedom per
processor, this preconditioner is then simply a Jacobi preconditioner since the diagonal matrix blocks consist of only a single entry. Such a preconditioner is relatively easy to compute because it
does not require any kind of communication between processors, but it is in general not very efficient for large numbers of processors.)
Following this kind of setup, we then solve the linear system:
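A hedged sketch of the solve() body described above (constructor signatures as in recent deal.II releases; older releases also passed the MPI communicator to the solver):

SolverControl solver_control(solution.size(), 1e-8 * system_rhs.l2_norm());

PETScWrappers::SolverCG cg(solver_control);
PETScWrappers::PreconditionBlockJacobi preconditioner(system_matrix);

cg.solve(system_matrix, solution, system_rhs, preconditioner);

return solver_control.last_step(); // number of CG iterations, for the output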
The next step is to distribute hanging node constraints. This is a little tricky, since to fill in the value of a constrained node you need access to the values of the nodes to which it is
constrained (for example, for a Q1 element in 2d, we need access to the two nodes on the big side of a hanging node face, to compute the value of the constrained node in the middle).
The problem is that we have built our vectors (in setup_system()) in such a way that every process is responsible for storing only those elements of the solution vector that correspond to the degrees
of freedom this process "owns". There are, however, cases where in order to compute the value of the vector entry for a constrained degree of freedom on one process, we need to access vector entries
that are stored on other processes. PETSc (and, for that matter, the MPI model on which it is built) does not allow one to simply query vector entries stored on other processes, so what we do here is to
get a copy of the "distributed" vector where we store all elements locally. This is simple, since the deal.II wrappers have a conversion constructor for the deal.II Vector class. (This conversion of
course requires communication, but in essence every process only needs to send its data to every other process once in bulk, rather than having to respond to queries for individual elements):
Of course, as in previous discussions, it is clear that such a step cannot scale very far if you wanted to solve large problems on large numbers of processes, because every process now stores all
elements of the solution vector. (We will show how to do this better in step-40.) On the other hand, distributing hanging node constraints is simple on this local copy, using the usual function
AffineConstraints::distribute(). In particular, we can compute the values of all constrained degrees of freedom, whether the current process owns them or not:
Then transfer everything back into the global vector. The following operation copies those elements of the localized solution that we store locally in the distributed solution, and does not touch the
other ones. Since we do the same operation on all processors, we end up with a distributed vector (i.e., a vector that on every process only stores the vector entries corresponding to degrees of
freedom that are owned by this process) that has all the constrained nodes fixed.
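A hedged sketch of these three steps:

// Get a complete copy of the distributed solution onto every process
// (the conversion constructor performs the necessary communication):
Vector<double> localized_solution(solution);

// Resolve all constrained entries on the local, complete copy:
hanging_node_constraints.distribute(localized_solution);

// Copy back; each process only writes its locally owned entries:
solution = localized_solution;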
We end the function by returning the number of iterations it took to converge, to allow for some output.
Using some kind of refinement indicator, the mesh can be refined. The problem is basically the same as with distributing hanging node constraints: in order to compute the error indicator (even if we
were just interested in the indicator on the cells the current process owns), we need access to more elements of the solution vector than just those the current processor stores. To make this happen,
we do essentially what we did in solve() already, namely get a complete copy of the solution vector onto every process, and use that for the computation. This is in itself expensive as explained above, and it is particularly unnecessary since we had just created and then destroyed such a vector in solve(); but efficiency is not the point of this program, and so let us opt for a design in which every function
is as self-contained as possible.
Once we have such a "localized" vector that contains all elements of the solution vector, we can compute the indicators for the cells that belong to the present process. In fact, we could of course
compute all refinement indicators since our Triangulation and DoFHandler objects store information about all cells, and since we have a complete copy of the solution vector. But in the interest in
showing how to operate in parallel, let us demonstrate how one would operate if one were to only compute some error indicators and then exchange the remaining ones with the other processes.
(Ultimately, each process needs a complete set of refinement indicators because every process needs to refine its mesh, and needs to refine it in exactly the same way as all of the other processes.)
So, to do all of this, we need to:
• First, get a local copy of the distributed solution vector.
• Second, create a vector to store the refinement indicators.
• Third, let the KellyErrorEstimator compute refinement indicators for all cells belonging to the present subdomain/process. The last argument of the call indicates which subdomain we are
interested in. The three arguments before it are various other default arguments that one usually does not need (and does not state values for, but rather uses the defaults), but which we have to
state here explicitly since we want to modify the value of a following argument (i.e., the one indicating the subdomain); see the sketch below.
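Putting these steps together in a hedged sketch (the three explicitly spelled-out default arguments before the subdomain are the component mask, the coefficient, and the number of threads):

// localized_solution is the complete local copy obtained in the first step.
Vector<float> local_error_per_cell(triangulation.n_active_cells());

KellyErrorEstimator<dim>::estimate(
  dof_handler,
  QGauss<dim - 1>(fe.degree + 1),
  std::map<types::boundary_id, const Function<dim> *>(), // no Neumann data
  localized_solution,
  local_error_per_cell,
  ComponentMask(),              // default: consider all solution components
  nullptr,                      // default: no coefficient
  MultithreadInfo::n_threads(), // default thread count
  this_mpi_process);            // the subdomain we are interested in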
Now all processes have computed error indicators for their own cells and stored them in the respective elements of the local_error_per_cell vector. The elements of this vector for cells not owned by
the present process are zero. However, since all processes have a copy of the entire triangulation and need to keep these copies in sync, they need the values of refinement indicators for all cells
of the triangulation. Thus, we need to distribute our results. We do this by creating a distributed vector where each process has its share and sets the elements it has computed. Consequently, when
you view this vector as one that lives across all processes, then every element of this vector has been set once. We can then assign this parallel vector to a local, non-parallel vector on each
process, making all error indicators available on every process.
So in the first step, we need to set up a parallel vector. For simplicity, every process will own a chunk with as many elements as this process owns cells, so that the first chunk of elements is
stored with process zero, the next chunk with process one, and so on. It is important to remark, however, that these elements are not necessarily the ones we will write to. This is a consequence of
the order in which cells are arranged, i.e., the order in which the elements of the vector correspond to cells is not ordered according to the subdomain these cells belong to. In other words, if on
this process we compute indicators for cells of a certain subdomain, we may write the results to more or less random elements of the distributed vector; in particular, they may not necessarily lie
within the chunk of vector we own on the present process. They will subsequently have to be copied into another process' memory space, an operation that PETSc does for us when we call the compress()
function. This inefficiency could be avoided with some more code, but we refrain from it since it is not a major factor in the program's total runtime.
So here is how we do it: count how many cells belong to this process, set up a distributed vector with that many elements to be stored locally, copy over the elements we computed locally, and finally
compress the result. In fact, we really only copy the elements that are nonzero, so we may miss a few that we computed to zero, but this won't hurt since the original values of the vector are zero anyway.
So now we have this distributed vector that contains the refinement indicators for all cells. To use it, we need to obtain a local copy and then use it to mark cells for refinement or coarsening, and
actually do the refinement and coarsening. It is important to recognize that every process does this to its own copy of the triangulation, and does it in exactly the same way.
The final function of significant interest is the one that creates graphical output. This works the same way as in step-8, with two small differences. Before discussing these, let us state the
general philosophy behind how this function works: we intend for all of the data to be generated on a single process, and subsequently written to a file. This is, as many other parts of this program already
discussed, not something that will scale. Previously, we had argued that we will get into trouble with triangulations, DoFHandlers, and copies of the solution vector where every process has to store
all of the data, and that there will come to be a point where each process simply doesn't have enough memory to store that much data. Here, the situation is different: it's not only the memory, but
also the run time that's a problem. If one process is responsible for processing all of the data while all of the other processes do nothing, then this one function will eventually come to dominate
the overall run time of the program. In particular, the time this function takes is going to be proportional to the overall size of the problem (counted in the number of cells, or the number of
degrees of freedom), independent of the number of processes we throw at it.
Such situations need to be avoided, and we will show in step-18 and step-40 how to address this issue. For the current problem, the solution is to have each process generate output data only for its
own local cells, and write them to separate files, one file per process. This is how step-18 operates. Alternatively, one could simply leave everything in a set of independent files and let the
visualization software read all of them (possibly also using multiple processors) and create a single visualization out of all of them; this is the path step-40, step-32, and all other parallel
programs developed later on take.
More specifically for the current function, all processes call this function, but not all of them need to do the work associated with generating output. In fact, they shouldn't, since we would try to
write to the same file multiple times at once. So we let only the first process do this, and all the other ones idle around during this time (or start their work for the next iteration, or simply
yield their CPUs to other jobs that happen to run at the same time). The second thing is that we not only output the solution vector, but also a vector that indicates which subdomain each cell
belongs to. This will make for some nice pictures of partitioned domains.
To implement this, process zero needs a complete set of solution components in a local vector. Just as with the previous function, the efficient way to do this would be to re-use the vector already
created in the solve() function, but to keep things more self-contained, we simply re-create one here from the distributed solution vector.
An important thing to realize is that we do this localization operation on all processes, not only the one that actually needs the data. This can't be avoided, however, with the simplified
communication model of MPI we use for vectors in this tutorial program: MPI does not have a way to query data on another process, both sides have to initiate a communication at the same time. So even
though most of the processes do not need the localized solution, we have to place the statement converting the distributed into a localized vector so that all processes execute it.
(Part of this work could in fact be avoided. What we do is send the local parts of all processes to all other processes. What we would really need to do is to initiate an operation on all processes
where each process simply sends its local chunk of data to process zero, since this is the only one that actually needs it, i.e., we need something like a gather operation. PETSc can do this, but for
simplicity's sake we don't attempt to make use of this here. We don't, since what we do is not very expensive in the grand scheme of things: it is one vector communication among all processes, which
has to be compared to the number of communications we have to do when solving the linear system, setting up the block-ILU for the preconditioner, and other operations.)
This being done, process zero goes ahead with setting up the output file as in step-8, and attaching the (localized) solution vector to the output object.
The only other thing we do here is that we also output one value per cell indicating which subdomain (i.e., MPI process) it belongs to. This requires some conversion work, since the data the library
provides us with is not the one the output class expects, but this is not difficult. First, set up a vector of integers, one per cell, that is then filled by the subdomain id of each cell.
The elements of this vector are then converted to a floating point vector in a second step, and this vector is added to the DataOut object, which then goes off creating output in VTK format:
Lastly, here is the driver function. It is almost completely unchanged from step-8, with the exception that we replace std::cout by the pcout stream. Apart from this, the only other cosmetic change
is that we output how many degrees of freedom there are per process, and how many iterations it took for the linear solver to converge:
The main function
The main() works the same way as most of the main functions in the other example programs, i.e., it delegates work to the run function of a managing object, and only wraps everything into some code
to catch exceptions:
Here is the only real difference: MPI and PETSc both require that we initialize these libraries at the beginning of the program, and un-initialize them at the end. The class MPI_InitFinalize takes
care of all of that. The trailing argument 1 means that we do want to run each MPI process with a single thread, a prerequisite with the PETSc parallel linear algebra.
If the program above is compiled and run on a single processor machine, it should generate results that are very similar to those that we already got with step-8. However, it becomes more interesting
if we run it on a multicore machine or a cluster of computers. The most basic way to run MPI programs is using a command line like

  mpirun -np 32 ./step-17

to run the step-17 executable with 32 processors.
(If you work on a cluster, then there is typically a step in between where you need to set up a job script and submit the script to a scheduler. The scheduler will execute the script whenever it can
allocate 32 unused processors for your job. How to write such job scripts differs from cluster to cluster, and you should find the documentation of your cluster to see how to do this. On my system, I
have to use the command qsub with a whole host of options to run a job in parallel.)
Whether directly or through a scheduler, if you run this program on 8 processors, you should get output like the following:
(This run uses a few more refinement cycles than the code available in the examples/ directory. The run also used a version of METIS from 2004 that generated different partitionings; consequently,
the numbers you get today are slightly different.)
As can be seen, we can easily get to almost four million unknowns. In fact, the code's runtime with 8 processes was less than 7 minutes up to (and including) cycle 14, and 14 minutes including the
second to last step. (These are numbers relevant to when the code was initially written, in 2004.) I lost the timing information for the last step, though, but you get the idea. All this is after
release mode has been enabled by running make release, and with the generation of graphical output switched off for the reasons stated in the program comments above. (See also video lecture 18.) The
biggest 2d computations I did had roughly 7.1 million unknowns, and were done on 32 processes. It took about 40 minutes. Not surprisingly, the limiting factor for how far one can go is how much
memory one has, since every process has to hold the entire mesh and DoFHandler objects, although matrices and vectors are split up. For the 7.1M computation, the memory consumption was about 600
bytes per unknown, which is not bad, but one has to consider that this is for every unknown, whether we store the matrix and vector entries locally or not.
Here is some output generated in the 12th cycle of the program, i.e. with roughly 300,000 unknowns:
As one would hope for, the x- (left) and y-displacements (right) shown here closely match what we already saw in step-8. As shown there and in step-22, we could as well have produced a vector plot of
the displacement field, rather than plotting it as two separate scalar fields. What may be more interesting, though, is to look at the mesh and partition at this step:
Again, the mesh (left) shows the same refinement pattern as seen previously. The right panel shows the partitioning of the domain across the 8 processes, each indicated by a different color. The
picture shows that the subdomains are smaller where mesh cells are small, a fact that needs to be expected given that the partitioning algorithm tries to equilibrate the number of cells in each
subdomain; this equilibration is also easily identified in the output shown above, where the number of degrees of freedom per subdomain is roughly the same.
It is worth noting that if we ran the same program with a different number of processes, we would likely get slightly different output: a different mesh, different numbers of unknowns and
iterations to convergence. The reason for this is that while the matrix and right hand side are the same independent of the number of processes used, the preconditioner is not: it performs an ILU(0)
on the chunk of the matrix of each processor separately. Thus, its effectiveness as a preconditioner diminishes as the number of processes increases, which makes the number of iterations increase.
Since a different preconditioner leads to slight changes in the computed solution, this will then lead to slightly different mesh cells tagged for refinement, and larger differences in subsequent
steps. The solution will always look very similar, though.
Finally, here are some results for a 3d simulation. You can repeat these by changing the dimension template argument of the ElasticProblem object created in the main function from 2 to 3. If you then run the program in parallel, you get something similar to this (this is for a job with 16 processes):
The last step, going up to 1.5 million unknowns, takes about 55 minutes with 16 processes on 8 dual-processor machines (of the kind available in 2003). The graphical output generated by this job is
rather large (cycle 5 already prints around 82 MB of data), so we content ourselves with showing output from cycle 4:
The left picture shows the partitioning of the cube into 16 processes, whereas the right one shows the x-displacement along two cutplanes through the cube.
Possibilities for extensions
The program keeps a complete copy of the Triangulation and DoFHandler objects on every processor. It also creates complete copies of the solution vector, and it creates output on only one processor.
All of this is obviously the bottleneck as far as parallelization is concerned.
Internally, within deal.II, parallelizing the data structures used in hierarchic and unstructured triangulations is a hard problem, and it took us a few more years to make this happen. The step-40
tutorial program and the Parallel computing with multiple processors using distributed memory documentation topic talk about how to do these steps and what it takes from an application perspective.
An obvious extension of the current program would be to use this functionality to completely distribute computations to many more processors than used here.
The plain program
Quick Start Example
This vignette covers the whole process from Bayesian network structure learning to parameter estimation and data simulation, using an exact search algorithm to find the best fitting graphical structure.
Basic workflow with the abn package
The package abn is a collection of functions for modelling of additive Bayesian networks. It contains routines to score Bayesian Networks based on Bayesian (default) or information-theoretic
formulation of generalized linear models. Depending on the type of data, the package supports a possible mixture of continuous, discrete, and count data. The following table shows which distribution types are supported for each method of estimation:
Distribution   method = "bayes"   method = "mle"
Gaussian       ✅                 ✅
Binomial       ✅                 ✅
Poisson        ✅                 ✅
Multinomial    ❌                 ✅
Structure learning of additive Bayesian networks with abn is a three-step process. Based on a set of model specifications (data, maximal number of possible parent nodes, restricted or enforced arcs,
etc.), abn calculates in a first step the score of the data given the model (buildScoreCache()). This list of scores is then used in a second step to estimate the most probable Bayesian network structure ("structure learning"), before the parameters of that structure are estimated in a third step (fitAbn()). Four structure-learning algorithms have been implemented in abn: the hill-climbing algorithm, the "exact search" algorithm,
the simulated annealing algorithm and tabu search algorithm. With the network structure inferred, the package provides routines to estimate the parameters of the network and to simulate data from the
fitted additive Bayesian network model.
The following example shows how to find the best fitting graphical structure using an exact search algorithm.
Model specification
Load the example dataset ex1.dag.data
This artificial data set comes with abn and contains 10000 observations of 10 variables. The variables are a mixture of continuous (gaussian), binary (binomial), and count (poisson) data. The data
set is a simulated data set from a known network structure.
library(abn)
mydat <- ex1.dag.data
Set up distribution list for each node
abn requires a list of the type of distribution for each node in the data set.
mydists <- list(b1 = "binomial",
                p1 = "poisson",
                g1 = "gaussian",
                b2 = "binomial",
                p2 = "poisson",
                b3 = "binomial",
                g2 = "gaussian",
                b4 = "binomial",
                b5 = "binomial",
                g3 = "gaussian")
Set the parent limits node-wise
The max.par argument sets the maximum number of parent nodes for each node in the data set. It can be set to a single value for all nodes or to a list with the node names as keys and the maximum
number of parent nodes as values. This is a crucial parameter to speed up the model estimation in abn as it limits the number of possible combinations.
# max.par <- list("b1"=1,"p1"=2,"g1"=3,"b2"=4,"p2"=1,"b3"=2,"g2"=3,"b4"=4,"b5"=1,"g3"=2) # set different max parents for each node
max.par <- 4 # set the same max parents for all nodes
Build the score cache
The score cache is a list of scores for each possible parent combination for each node in the data set. It is used to learn the structure of the Bayesian network in the next step.
mycache <- buildScoreCache(data.df = mydat,
data.dists = mydists,
method = "bayes", # the default method is "bayes"
max.parents = max.par)
The minimal number of input arguments for buildScoreCache() is the data set and the distribution list. By default, the function uses the Bayesian score which is based on the posterior probability of
the model given the data. To use the Log-Likelihood score, Akaike Information Criterion (AIC) or Bayesian Information Criterion (BIC) instead, the method argument can be set to "mle".
The function buildScoreCache() also accepts a list of banned and retained arcs, which can be used to enforce or restrict the presence of certain arcs in the network structure. This can be useful if
prior knowledge about the network structure is available, e.g. from expert knowledge or from previous analyses it is known that certain arcs must be present or have to be absent.
The max.parents argument sets the maximum number of parent nodes for each node in the data set and together with the dag.banned and dag.retained arguments, it restricts the model search space and can
speed up the model estimation in abn.
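As an illustration (my own sketch, not part of the original vignette), such restrictions could be supplied as adjacency matrices; the two arcs chosen here are arbitrary, and I assume the abn convention of children in rows and parents in columns:

nodes <- names(mydists)
banned <- matrix(0L, nrow = 10, ncol = 10, dimnames = list(nodes, nodes))
banned["b2", "b1"] <- 1L     # forbid the arc b1 -> b2
retained <- matrix(0L, nrow = 10, ncol = 10, dimnames = list(nodes, nodes))
retained["g2", "g1"] <- 1L   # enforce the arc g1 -> g2

mycache_restricted <- buildScoreCache(data.df = mydat,
                                      data.dists = mydists,
                                      dag.banned = banned,
                                      dag.retained = retained,
                                      max.parents = max.par)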
Structure learning
The next step is to find the best fitting graphical structure of the Bayesian network. In this example, we use the exact search algorithm to find the most probable Bayesian network structure given
the score cache from the previous step. We supply the score cache as an abnCache object to the structure learning function.
mp.dag <- mostProbable(score.cache = mycache)
The mostProbable() function returns an object of class abnLearned which contains the most probable Bayesian network structure and the score of the model given the data.
Plot the best fitting graphical structure
The best fitting graphical structure can be plotted using the plotAbn() function.
The plot() function requires the Rgraphviz package to be installed.
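Assuming the generic plot() dispatches to the package's plotting routine for abnLearned objects (an assumption on my part), the call is simply:

plot(mp.dag)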
Estimate the parameters of the network
The parameters of the network can be estimated using the fitAbn() function.
myfit <- fitAbn(object = mp.dag)
The fitAbn() function returns an object of class abnFit which contains the estimated parameters of the network.
Simulate data from the fitted model
The simulateAbn() function can be used to simulate data from the fitted model.
simdat <- simulateAbn(object = myfit,
n.iter = 10000L)
Operators in C++
Operators are a special type of function that take one or more arguments and produce a new value. For example: addition (+), subtraction (-), multiplication (*), etc. are all operators. Operators are used to perform various operations on variables and constants.
Types of operators
1. Assignment Operator
2. Mathematical Operators
3. Relational Operators
4. Logical Operators
5. Bitwise Operators
6. Shift Operators
7. Unary Operators
8. Ternary Operator
9. Comma Operator
Assignment Operator (=)
The operator '=' is used for assignment; it takes the right-hand side (called the rvalue) and copies it into the left-hand side (called the lvalue). The assignment operator is the only operator which can be overloaded but cannot be inherited.
Mathematical Operators
These are operators used to perform basic mathematical operations. Addition (+), subtraction (-), division (/), multiplication (*) and modulus (%) are the basic mathematical operators. The modulus operator cannot be used with floating-point numbers.
C++ and C also offer a shorthand notation to perform an operation and an assignment at the same time. Example:
int x = 10;
x += 4;   // adds 4 to 10, so x is now 14
x -= 5;   // subtracts 5 from 14, so x is now 9
Relational Operators
These operators establish a relationship between operands. The relational operators are: less than (<), greater than (>), less than or equal to (<=), greater than or equal to (>=), equal to (==) and not equal to (!=).
You must notice that the assignment operator is (=), while the relational operator for equality is (==). These two are different from each other: the assignment operator assigns a value to a variable, whereas the equality operator is used to compare values, for example in if-else conditions. Example:
int x = 10;    // assignment operator
x = 5;         // again the assignment operator
if (x == 5)    // here we use the equality relational operator, for comparison
    cout << "Successfully compared";
Logical Operators
The logical operators are AND (&&) and OR (||). They are used to combine two different expressions together.
If two statements are connected using the AND operator, both of them must be true for the combined expression to be true; if they are connected using the OR operator, at least one of them must be true. These operators are mostly used in loops (especially the while loop) and in decision making, as the example below shows.
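A small example (added here for illustration):

int age = 20, marks = 75;
if (age > 18 && marks > 60)   // both conditions must be true
    cout << "AND condition is true";
if (age > 18 || marks > 90)   // at least one condition must be true
    cout << "OR condition is true";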
Bitwise Operators
These are used to operate on the individual bits of a number. They work only with integral data types like char, int and long, and not with floating-point values.
• Bitwise AND operator &
• Bitwise OR operator |
• Bitwise XOR operator ^
• Bitwise NOT operator ~
Except for bitwise NOT, they can be used in shorthand (compound assignment) notation too: &=, |= and ^=. (There is no ~= operator, since ~ is a unary operator.)
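A small example (added here for illustration):

int a = 12;                // binary 1100
int b = 10;                // binary 1010
cout << (a & b) << endl;   // 8  (binary 1000)
cout << (a | b) << endl;   // 14 (binary 1110)
cout << (a ^ b) << endl;   // 6  (binary 0110)
cout << (~a) << endl;      // -13 (bitwise NOT on a two's complement int)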
Shift Operators
Shift Operators are used to shift the bits of a variable. C++ has two of them, as the example below shows:
1. Left Shift Operator <<
2. Right Shift Operator >>
(Unlike Java, C++ has no unsigned right shift operator >>>; right-shifting an unsigned type already fills the vacated bits with zeros.)
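For example (added here for illustration):

int x = 3;                  // binary 0011
cout << (x << 2) << endl;   // 12: bits shifted left by two, binary 1100
cout << (x >> 1) << endl;   // 1:  bits shifted right by one, binary 0001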
Unary Operators
These are the operators which work on only one operand. There are many unary operators, but the increment ++ and decrement -- operators are the most commonly used.
Other unary operators: address-of &, dereference *, new and delete, bitwise NOT ~, logical NOT !, unary minus - and unary plus +.
Ternary Operator
The ternary if-else operator ? : has three operands and works like a compact if-else statement.
int a = 10;
a > 5 ? cout << "true" : cout << "false";   // prints "true", since a is greater than 5
Comma Operator
This is used to separate variable names in declarations and to separate expressions. In the case of expressions, the value of the last expression is produced and used.
Example:
int a, b = 1, c = 2;   // here the commas are separators in a declaration, not the comma operator
a = (b++, c++);        // comma operator: evaluates b++, then c++; a gets the old value of c, i.e. 2
// Note: without the parentheses, "a = b++, c++;" parses as "(a = b++), c++;"
// because the comma operator has lower precedence than assignment.
In [Lan21P] the authors challenge some commonly used density based approaches to anomaly detection. They criticize the representation dependence of density scoring methods. The paper is interesting
not only because of its content but also because of its history. It received a quite strong rejection at ICLR 2020 although the reviewers honor the attempt to challenge current practices. The
rejection was mainly because the reviewers did not agree with the main principle that is formulated in the paper: Anomaly detection techniques should generally be invariant under reparametrization
with continuous invertible functions. The ICLR review is available online on OpenReview and the discussion between the authors and the reviewers is an interesting addition to read alongside the paper.
The paper was eventually published in the journal Entropy and presented at the I Can’t Believe it’s not Better workshop at NeurIPS 2020. I actually first saw the well delivered presentation at this
workshop and it really made me think again about the fundamental setup of anomaly detection. However, I would eventually agree with the ICLR review. In this post I want to present the content of the
paper alongside some thoughts that occurred to me while reading in the hope that other people might also benefit from revisiting these fundamental questions about anomaly detection.
1 Density based definitions of anomaly
A popular way of approaching anomaly detection is to view it as a binary classification task where at training time only samples from one class, the nominal one, are available (one-class
classification). Simply put, one aims to partition the feature space $\mathcal{X}$ into two subsets $\mathcal{X}_{\operatorname{in}}$ and $\mathcal{X}_{\operatorname{out}}$ where $\mathcal{X}_{\
operatorname{in}}$ denotes the nominal region and $\mathcal{X}_{\operatorname{out}}$ the anomalous region. Within this approach, we assume that the nominal points are drawn from a probability
distribution $P_{X}$ and demand that $\mathcal{X}_{\operatorname{in}}$ covers the majority of the density mass, say $99\%$. However, this alone does not uniquely determine the partition, as there are obviously infinitely many ways to cover $99\%$ of the probability mass of a continuous distribution.
One can additionally argue that the density of the nominal distribution in the anomalous region should generally be smaller than in the nominal region by appealing to the intuition that anomalies
should produce unusual observations. We can then define $\mathcal{X}_{\operatorname{in}} = \lbrace x \in \mathcal{X} \mid p_{X} (x) > \lambda \rbrace$ and $\mathcal{X}_{\operatorname{out}} = \lbrace
x \in \mathcal{X} \mid p_{X} (x) \leq \lambda \rbrace$ to be the upper and lower level sets with respect to some density threshold $\lambda$ that is chosen such that $P_{X} (\mathcal{X}_{\operatorname{in}}) = 0.99$. This definition appears for instance in Bishop [Bis94N].
Intuitively, the idea might seem pretty solid since densities are related to probabilistic frequencies which seems to match our intuition that anomalies occur in unlikely areas of the feature space.
However, a direct attribution of high density with high frequency seems faulty in high dimensions when considering the Gaussian annulus theorem1 1 Gaussian Annulus Theorem [Blu20F]. For every
spherical $d$-dimensional Gaussian with variance $1$ in each direction and any $\beta < \sqrt{d}$ at most $3 e^{- c \beta^2}$ of the probability mass lies outside the annulus $\sqrt{d} - \beta \leq |
x | \leq \sqrt{d} + \beta$, where $c$ is a fixed constant.
Most of the probability mass of a high dimensional Gaussian is concentrated in a thin annulus around the surface of a hypersphere of radius $\sqrt{d}$. This might seem unintuitive at first glance but
becomes clearer if we recall that the length of a sample vector is distributed like $\sqrt{\sum_{i = 1}^{d} X_{i}^{2}}$, where the $X_{i}$ are independent standard normal variables. Since $\sum_{i = 1}^{d} X_{i}^{2}$ follows a chi-squared distribution by definition, $\sqrt{\sum_{i = 1}^{d} X_{i}^{2}}$ follows a chi distribution with $d$ degrees of freedom. The mean of $\sqrt{\sum_{i = 1}^{d} X_{i}^{2}}$ is therefore $\sqrt{2} \, \Gamma\left( \frac{d + 1}{2} \right) / \Gamma\left( \frac{d}{2} \right) \approx \sqrt{d}$, and the chi distribution has a relatively low variance of $d - \left( \sqrt{2} \, \Gamma\left( \frac{d + 1}{2} \right) / \Gamma\left( \frac{d}{2} \right) \right)^{2}$, which approaches $\frac{1}{2}$ for large $d$.
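A quick numerical illustration of this concentration (my own addition, not from the paper), in Python:

import numpy as np

rng = np.random.default_rng(0)
d = 1000
samples = rng.standard_normal((10000, d))  # 10000 draws from N(0, I_d)
norms = np.linalg.norm(samples, axis=1)

print(norms.mean())  # close to sqrt(d) = 31.62...
print(norms.std())   # close to 1/sqrt(2) = 0.707..., tiny relative to the mean
print(norms.min())   # far away from 0: not a single sample lies near the origin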
Hence, for large $d$ we will barely ever observe anything close to the origin. It was therefore argued that one might want to count it towards the anomalous region. The highest density of the Gaussian is, however, still obtained at the origin. In order to account for such phenomena, the notion of a typical set has been introduced.
Definition 1 ($\epsilon$-typical set [Cov06E]). For a random variable $X$ and $\epsilon > 0$, the $\epsilon$-typical set $A_{\epsilon}^{(N)} (X) \subseteq \mathcal{X}^{N}$ is the set of all sequences that satisfy
\[ \left| H (X) + \frac{1}{N} \sum^N_{i = 1} \log (p (x_{i})) \right| \leq \epsilon, \]
where $H (X) = - E [\log (p (X))]$ is the (differential) entropy.
The definition of typical sets is useful for dealing with phenomena like the Gaussian annulus theorem since $\lim_{N \rightarrow \infty} P (A_{\epsilon}^{(N)} (X)) = 1$ for any $\epsilon > 0$.
Hence, for large $N$ the $\epsilon$-typical set will contain most of the mass with respect to the joint probability measure.
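As a quick sanity check (my own computation, not from the paper): for a single sample ($N = 1$) from a standard Gaussian in $d$ dimensions we have $- \log (p (x)) = \frac{d}{2} \log (2 \pi) + \frac{1}{2} \| x \|^{2}$ and $H (X) = \frac{d}{2} \log (2 \pi e)$, so the typicality condition $| H (X) + \log (p (x)) | \leq \epsilon$ becomes

\[ \frac{1}{2} \left| \| x \|^{2} - d \right| \leq \epsilon , \]

i.e. the $\epsilon$-typical set is exactly a thin annulus around the sphere of radius $\sqrt{d}$, in line with the Gaussian annulus theorem.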
1.1 Is the center of a high dimensional Gaussian anomalous?
Let us first note that the Gaussian annulus theorem does not state that the area that is enclosed by the annulus is disproportionally rarely observed. Rather, it follows from the geometry of the high
dimensional space that the volume close to the surface of the sphere is relatively large.2 2 Recall that the volume of a $d$-dimensional sphere with radius $r$ is $V_{d} (r) = \frac{\pi^{d / 2}}{\Gamma \left( \frac{d}{2} + 1 \right)} r^{d}$. Hence for $r > \epsilon$, the ratio of the volume of an $\epsilon$-annulus of a sphere to the enclosed volume is $\frac{V_{d} (r + \epsilon) - V_{d} (r - \epsilon)}{V_{d} (r - \epsilon)} = \frac{(r + \epsilon)^{d}}{(r - \epsilon)^{d}} - 1$, which goes to $\infty$ as $d \rightarrow \infty$. For large $d$, the $\epsilon$-annulus of radius $\sqrt{d}$
contains many times the volume of the enclosed area. However, the probability of the enclosed area under an isotropic Gaussian is still larger than the probability of any subset of the annulus of the
same volume. In a sufficiently (i.e. very) large dataset the area around the origin will in fact have the highest density of data points. The relationship between the $d$-th order growth of the
volume of the sphere with the radius and the exponential decay of the density function of the Gaussian with the radius creates the annulus phenomenon.
Nevertheless, points close to the origin are in some sense very dissimilar to the vast majority of the observed points in terms of distance to the origin. I think this is a very good example where a
subjective notion of rareness is tied to a non-Euclidean notion of similarity. In fact, if we consider the variable $Y = \| X \|$ then we obtain density based anomalous regions that are in line with
the intuition that only the annulus should be counted as nominal. One should be aware that behind this mapping is an arbitrary notion of equivalence, or more generally of similarity, in terms of
length of a vector. In this space densities look substantially different. We also lose information because we equate all vectors of the same length. We already see that the statement that anomalous
regions coincide with low densities can only be true under certain representations because it depends on how well distances in the space capture our intuitive notion of similarity.
2 The role of parameterization
The main point of the paper is to stress that densities are highly dependent on the feature space, up to a point where one can arbitrarily interchange high and low densities via reparametrization.
This fact remains true even if we restrict ourselves to continuous bijective maps where the image of the map contains the same information about the represented event as the input. This leads the
authors to raise doubts about whether density scoring based approaches are reasonable for anomaly detection in general.
Let us first have a look at the above claim. It is actually a simple consequence of how densities transfer via continuous bijective maps. Let $X$ be a continuous random variable, $f : \mathcal{X} \
rightarrow \mathcal{X}’$ a continuous invertible function on $X$, and $Y = f (X)$. Then the pdf of $X$, $p_{X} (x)$, transfers via $f$ to the pdf on $Y$, $p_{Y} (y)$. However, we need to take into
account the way $f$ locally stretches or compresses the space. This is reflected in the change of variables formula.
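For reference, for a diffeomorphism $f$ the change of variables formula reads

\[ p_{Y} (y) = p_{X} (f^{- 1} (y)) \left| \det \frac{\partial f^{- 1}}{\partial y} (y) \right| , \]

where the Jacobian determinant is exactly the factor that accounts for this local stretching or compression.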
How severely even continuous invertible transformations can alter a density function is demonstrated by the Knothe-Rosenblatt rearrangement. There are only two mild assumptions that have to be made.
The densities of the two distributions should be larger $0$ everywhere and all cumulative conditional densities $P_{X_{i} \mid X_{< i}} (X_{i} \leq x_{i} \mid x_{1}, \ldots, x_{i - 1})$ should always
be differentiable in ($x_{1}, \ldots, x_{i}$). As a consequence one can transform any two such continuous distributions into each other using a continuous invertible map.3 3 Knothe-Rosenblatt
rearrangement [Kno57C, Ros52R]: Any continuous distribution with the above mentioned properties can be transformed into a uniform distribution using a continuous invertible map. Let $X_{1}, \ldots,
X_{n}$ be continuous random variables with joint distribution $P_{X_{1}, \ldots, X_{n}}$. Consider the map $f (x_{1}, \ldots, x_{n}) = (y_{1}, \ldots, y_{n})$ with $\begin{array}{lll} y_{i} & = & P_
{X_{i} \mid X_{< i}} (X_{n} \leq x_{i} \mid x_{1}, \ldots, x_{i - 1}) . \end{array}$ Note that $\frac{\partial f}{\partial x^T}$ is lower triangular since $y_{i}$ does not depend on $x_{j}$ for $i <
j$. Further, the $i$th component on the main diagonal of $\frac{\partial f}{\partial x^T} (x_{1}, \ldots, x_{n})$ is $p_{X_{i} \mid X_{< i}} (x_{i} \mid x_{1}, \ldots, x_{i - 1})$ since it is the
derivative of the corresponding conditional cumulative distribution. Hence, the determinant of $\frac{\partial f}{\partial x^T} (x_{1}, \ldots, x_{n})$ is simply the product of the conditional
densities, which equals $p_{X_{1}, \ldots, X_{n}} (x_{1}, \ldots, x_{n})$ because of the product rule. With this observation we can show that $(Y_{1,} \ldots, Y_{n}) = f (X_{1}, \ldots, X_{n})$ is
uniformly distributed over the $n$-dimensional unit hypercube because for all $y_{1}, \ldots, y_{n} \in [0, 1]$: $p_{Y_{1}, \ldots, Y_{n}} (y_{1}, \ldots, y_{n}) = p_{X_{1}, \ldots, X_{n}} (f^{- 1} (\bar{y})) \left| \frac{\partial f}{\partial x^T} (f^{- 1} (\bar{y})) \right|^{- 1} = 1$.
Example: Let $X$ and $Y$ be two continuous random variables with the above-mentioned properties. With the Knothe-Rosenblatt construction we obtain two continuous bijective maps $f_{X}, f_{Y}$ such
that $f_{X} (X)$ and $f_{Y} (Y)$ are uniformly distributed over $[0, 1]$. We claim that $f_{Y}^{- 1} f_{X} (X )$ has the same distribution as $Y$. Indeed, letting $h := f_{Y}^{- 1} \circ f_{X}$:
\begin{eqnarray*} p_{h (X)} (h (x)) & = & p_{X} (x) \, | h' (x) |^{- 1}\\ & = & \left( p_{X} (x) \left| f_{X}' (x) \right|^{- 1} \right) \left| \left( f_{Y}^{- 1} \right)' (f_{X} (x)) \right|^{- 1}\\ & = & p_{X} (x) \left| f_{X}' (x) \right|^{- 1} \left| f_{Y}' (h (x)) \right|\\ & = & p_{Y} (h (x)) . \end{eqnarray*}
Note that the inverse of a continuous function is not necessarily continuous if the domain and range have different dimensions but with the two additional assumptions mentioned above, one can show
that $f_{X}$ and $f_{Y}$ are differentiable bijections. The inverses are therefore also diffeomorphisms and in particular continuous. The examples show that a purely density based approach to anomaly
detection can lead to completely different results depending on the nature and transformations of observed features. To see this even more clearly, it is important to observe how the densities of a
distribution can change relative to each other under continuous invertible maps. In the paper the authors construct several more fine-grained examples that show how one can alter densities even
locally in an almost arbitrary fashion or explicitly interchange the densities of two given points. Even if we accept a density based definition in the original (real-world) probability space, the
data we collect is already a transformation thereof, i.e. a random variable. The authors illustrate this on the example of images where the depicted object is the true sample and the images are the
transformations we observe. This becomes even more severe if the images are given in a compressed format. Therefore, a density based approach to anomaly detection must necessarily rely on the
assumption that low density areas actually correspond to anomalous regions in the presented feature space. Note that this has nothing to do with how well our density model captures the true
distribution. In fact, the same problems arise if the true density function of the nominal data under a certain representation is available.
The authors stress a very important point which is certainly not new but often overlooked in practice: One should keep in mind that the bijectivity of the feature selection function is already a
strong assumption that will be violated in many practical scenarios. Network intrusion detection, for instance, is often performed on just a few connection statistics, which are in no way sufficient
for uniquely characterising every possible connection [Tav09D]. Further, we will usually have only limited knowledge about the transformation that led to the observed data. The direct attribution of
anomalous regions with low densities becomes therefore arbitrary and needs additional justification.
3 Reparametrization invariant approaches
We can obtain a reparametrization invariant definition of the anomalous region if we model the anomaly detection problem fully probabilistically and use Bayesian inference. This is known as Huber’s
contamination model [Hub64R]. Since we accept that even in the anomalous region the density of the nominal distribution is not $0$, a sample that lies in $\mathcal{X}_{\operatorname{out}}$ is not
necessarily an anomaly. Therefore, one might rather think of being anomalous as a binary random variable $O$. The probability that a given sample is an anomaly is given by $P_{O \mid X} (o \mid x)$. In
this case, it seems more plausible to choose the anomalous regions based on a threshold on $P_{O \mid X} (o \mid x)$. This leads to a mixture model $(1 - \epsilon) D_{\operatorname{in}} + \epsilon D_
{\operatorname{out}}$ between the nominal distribution $D_{\operatorname{in}}$ and the distribution $D_{\operatorname{out}}$ of the anomalies. Here $D_{\operatorname{in}} = P (X \mid O = 0)$, $D_{\
operatorname{out}} = P (X \mid O = 1)$ and $\epsilon = P (O = 1)$ is the prior for observing an anomaly. We can now define $\mathcal{X}_{\operatorname{in}} = \lbrace x \in \mathcal{X} \mid P_{O \mid
X} (1 \mid x) \leq \lambda \rbrace$ and $\mathcal{X}_{\operatorname{out}} = \lbrace x \in \mathcal{X} \mid P_{O \mid X} (1 \mid x) > \lambda \rbrace$ for some threshold $\lambda \in [0, 1]$. This
definition is invariant under reparametrization. Indeed, for any continuous invertible transformation of $X$ we can compute with Bayes’ rule that
\begin{eqnarray*} P_{O \mid f (X)} (o \mid f (x)) & = & \frac{p_{f (X) \mid O} (f (x) \mid o) P_{O} (o)}{p_{f (X)} (f (x))}\\ & = & \frac{p_{X \mid O} (x \mid o) \left| \frac{\partial f}{\partial x^T} \right|^{- 1} P_{O} (o)}{p_{X} (x) \left| \frac{\partial f}{\partial x^T} \right|^{- 1}}\\ & = & \frac{p_{X \mid O} (x \mid o) P_{O} (o)}{p_{X} (x)}\\ & = & P_{O \mid X} (o \mid x) . \end{eqnarray*}
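This cancellation is easy to verify numerically. The following sketch is my own illustration (not from the paper); the Gaussian and the uniform density are arbitrary stand-ins for $D_{\operatorname{in}}$ and $D_{\operatorname{out}}$, and $f(x) = x^3 + x$ is an arbitrary strictly increasing reparametrization:

```python
import numpy as np
from scipy.stats import norm, uniform

eps = 0.05                                # prior P(O = 1) of seeing an anomaly
x = np.linspace(-2.5, 2.5, 11)            # a few test points

p_in = norm.pdf(x)                        # stand-in for the nominal density D_in
p_out = uniform.pdf(x, loc=-3, scale=6)   # stand-in for the anomaly density D_out
post_x = eps * p_out / (eps * p_out + (1 - eps) * p_in)

# Reparametrize with the strictly increasing map f(x) = x**3 + x.
# Both densities pick up the same Jacobian factor 1/|f'(x)|, which cancels in the posterior.
jac = 3 * x**2 + 1
q_in, q_out = p_in / jac, p_out / jac
post_y = eps * q_out / (eps * q_out + (1 - eps) * q_in)

assert np.allclose(post_x, post_y)        # P(O = 1 | x) is unchanged by the reparametrization
```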
While this analysis is quite pleasing from a theoretical point of view, it is arguably of limited use in practice. Often times, the fraction of anomalies $\epsilon$ is simply too small to learn a
reasonable mixture model. It might even happen that we have no anomalous samples at all. Therefore, we need to make assumptions about $\epsilon$ and $p_{X \mid O} (x \mid 1)$. One usually adds
additional assumptions to justify scoring by the nominal density [Kim12R]. More precisely, we assume that anomalies are:
• Outlying: $D_{\operatorname{in}}$ and $D_{\operatorname{out}}$ do not overlap too much.
• Sparse: $\epsilon \ll \frac{1}{2}$.
• Diffuse: $D_{\operatorname{out}}$ is not too spatially concentrated.
It might seem a little disappointing that we end up with the same density based method. However, these assumptions explicitly state under which conditions we can expect good results from such a
method. They explicitly incorporate spatial assumptions about the nature of anomalies; especially the first and the last assumption make explicit reference to spatial aspects of anomalies. This is
even more present in other approaches where for instance anomalies are defined in terms of distances, e.g. nearest neighbor distance ratios [Bre00L]. A distance based approach emphasizes even more
that the properties we are looking for are not invariant under reparametrization.
The paper also mentions another way out: The comparison against a reference distribution $D_{\operatorname{ref}}$. One can take the ratio of the density with respect to the nominal distribution and
the reference distribution [Gri07M]. If both - nominal and reference distribution - are transformed under the same continuous invertible function then the effect of the transformation will be
canceled out in their ratio. The reference distribution allows us to model knowledge about the feature space. In this language we can quantify our assumptions and explicitly integrate them into our
calculations. The drawback of this approach is again that we cannot expect to have complete knowledge about the transformation that the data has undergone. It can be very hard to define a good
reference distribution under these circumstances.
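For reference, the invariance itself is immediate: under a reparametrization $y = f (x)$ both $p_{\operatorname{in}}$ and $p_{\operatorname{ref}}$ pick up the same factor $\left| \frac{\partial f}{\partial x^T} \right|^{- 1}$, so the likelihood ratio $\frac{p_{\operatorname{in}} (x)}{p_{\operatorname{ref}} (x)}$ is unchanged.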
4 Reparametrization invariance as a principle?
The previous analysis led the authors to the conclusion that any anomaly detection technique should be invariant under reparametrization. They formulate this as a principle.4 4 Formulation in the
paper: “In an infinite data and capacity setting, the result of an anomaly detection method should be invariant to any continuous invertible reparametrization $f$.”
The proposal of this principle has been heavily criticized by the reviewers and led eventually to the rejection at ICLR. I also feel that this requirement is too strong. In this section, I want to
point out a few things that become apparent if we switch from the perfect model regime to the learning from data regime.
4.1 Why the principle is too strong
We now consider the scenario where anomalies are indeed defined as lower level sets for some density threshold $\lambda$ with respect to the distribution of the nominal data in some fixed base
representation. We want to show that no algorithm can learn the anomalous region if we allow the data to be transformed by an arbitrary continuous invertible function. Intuitively, this follows
from the fact that we can arbitrarily transform any distribution. However, this is true for any algorithm that purely learns from nominal data, even in the infinite data regime. To make this a little
more precise, let us formulate some properties that we can safely assume for a learning algorithm $\mathcal{A}$:
1. The algorithm takes a dataset $X \sim D$ and outputs an anomalous region $\mathcal{A} (X)$ (represented by a model).
2. As the size of the dataset grows towards infinity, the algorithm converges to a solution $\mathcal{A} (D)$ which only depends on the distribution $D$ of the data.
3. The limit solutions are measurable and bounded with respect to some base measure $\mu$ on the feature space, e.g. the Lebesgue measure if $\mathcal{X}=\mathbb{R}^d$.
We make the assumptions 2 and 3 mostly out of convenience. Note that in this setup there is an implicit assumption about the algorithm being deterministic. If we consider a stochastic algorithm we
have to fix the “random seed” of the algorithm and changing it would lead to a different algorithm. What is important is that the result only depends on the presented data. The algorithm is free to
incorporate assumptions about the data but these assumptions need to be tied to the algorithm and must be independent of the distribution from which the data is actually drawn. These assumptions
are quite common when one wants to talk about general limitations of learning algorithms. In fact, they are inspired by the extended Bayesian formalism, which was also used by Wolpert in his no free
lunch theorem [Wol02S]. While the use of an improper uniform prior in the no free lunch theorem can certainly be debated, the formal framework in which he conducts his proof is well suited to answer
general questions like ours.
I want to argue that if we restrict ourselves to reparametrization invariant approaches then we cannot - not even in the infinite data regime - guarantee to capture the anomalous region closely in
terms of precision with respect to the base measure in the feature space (we use the base measure because we have no knowledge about the distribution of the anomalies within the anomalous region or
the frequency of anomalies at test time). This holds even in cases where substantial knowledge about the base representation is available. Let us illustrate this with a little story about an
unfortunate scientist. The example shows that different situations become indistinguishable when we transform the distributions with the Knothe-Rosenblatt rearrangement.
An unfortunate measurement
Imagine some scientists want to measure the state of an obscure particle that they have just discovered. They know that the states of the particles are uniformly distributed in $[0, 1]$ except for one
interval $\left[ \frac{i}{n}, \frac{i + 1}{n} \right)$ where the density must be 50% smaller when compared to the other intervals. They do not know where the exact spot is located, so they decide
to conduct an experiment and build a density model.
Figure 3. Visualization of the experiment. We sample from all possible worlds and fit a kernel density estimator in the original feature space (something the scientist can of course not do). After
that we apply the cdf to the sample and fit again a kde. In the original feature space the situations are clearly distinguishable but after reparametrization the situations are indistinguishable from
the data.
They can draw from an infinite supply of particles but unfortunately they cannot measure the state directly. All they can do is to compare the state of two particles and see which one has the higher
value. They can compare one particle with as many as they want, and therefore they decide to take one particle at a time and compare it with many newly drawn ones. Finally, they record the fraction
of particles that had a smaller state. Can the data help our scientists to find the anomalous region? Certainly not!
What they are actually recording is the value of the cumulative distribution function. If $X$ is the state of a particle the scientists record $f (X) =\operatorname{CDF}_{X} (X)$. As we have previously
seen, $f (X)$ will be uniformly distributed over $[0, 1]$ irrespective of where the actual anomalous region is located.
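The experiment behind Figure 3 can be simulated in a few lines. This is only a sketch of my own (numpy/scipy; the values of n, i and N are arbitrary), but it reproduces the qualitative picture: evaluating kde_x on a grid shows the dip at the anomalous interval, while kde_y is essentially flat no matter which i was used.

```python
import numpy as np
from scipy.stats import gaussian_kde, rankdata

rng = np.random.default_rng(0)
n, i, N = 10, 3, 50_000   # number of intervals, index of the low-density interval, sample size

# Rejection-sample the particle states: uniform on [0, 1], but keep points that fall
# into [i/n, (i+1)/n) only with probability 1/2, so that interval has half the density.
u = rng.random(3 * N)
in_gap = (u >= i / n) & (u < (i + 1) / n)
x = u[~in_gap | (rng.random(3 * N) < 0.5)][:N]

# What the scientists actually record: the fraction of other particles with a smaller
# state, i.e. the empirical CDF evaluated at each observed state.
y = rankdata(x) / len(x)

kde_x = gaussian_kde(x)   # density in the original space: shows the dip on [i/n, (i+1)/n)
kde_y = gaussian_kde(y)   # density of the recorded values: flat, so i cannot be recovered
```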
Let us formalize the situation a little further:
• The scientists know that $Y =\operatorname{CDF}_{X_{i}} (X_{i})$ for some $i < n$ and have some prior $P_{i}$ on $i$.
• Given the dataset of i.i.d. samples from $Y$ they build a posterior belief about $i$.
But since the likelihood of $D$ does not depend on $i$ there is nothing they can learn about $i$ from observing $D$:
\begin{eqnarray*} P (i \mid D) & = & \frac{p_{D \mid i} (D \mid i) P_{i} (i)}{p_{D} (D)}\\ & = & \frac{p_{D} (D) P_{i} (i)}{p_{D} (D)}\\ & = & P_{i} (i) . \end{eqnarray*}
Therefore, no matter what they will try to do with the data it will not help them to identify the anomalous region. But that should make us doubt whether we should demand that the result of any
anomaly detection method is invariant under reparametrization. Any algorithm that tries to learn only from $D$ must learn the same anomalous region $\mathcal{X}_{\operatorname{out}}$ in the infinite
data regime, regardless of the value of $i$.5 5 We can try to bound the average fraction of $f_{i}^{- 1} (\mathcal{X}_{\operatorname{out}})$ that intersects with the anomalous region $\left[ \frac{i}{n}, \frac{i + 1}{n} \right)$. Indeed, we can compute for the average case that: $\frac{1}{n} \sum^{n - 1}_{i = 0} \frac{\mu \left( f_{i}^{- 1} (\mathcal{X}_{\operatorname{out}}) \cap \left[ \frac{i}{n}, \frac{i + 1}{n} \right) \right)}{\mu (f_{i}^{- 1} (\mathcal{X}_{\operatorname{out}}))} \leq \frac{1}{n} \sum^{n - 1}_{i = 0} \frac{c \, \mu \left( f_{i} \left( \left[ \frac{i}{n}, \frac{i + 1}{n} \right) \right) \cap \mathcal{X}_{\operatorname{out}} \right)}{\frac{c}{2} \mu (\mathcal{X}_{\operatorname{out}})} \leq \frac{2}{n} \cdot \frac{\sum^{n - 1}_{i = 0} \mu \left( f_{i} \left( \left[ \frac{i}{n}, \frac{i + 1}{n} \right) \right) \cap \mathcal{X}_{\operatorname{out}} \right)}{\sum^{n - 1}_{i = 0} \mu \left( f_{i} \left( \left[ \frac{i}{n}, \frac{i + 1}{n} \right) \right) \cap \mathcal{X}_{\operatorname{out}} \right)} = \frac{2}{n} .$ The first inequality holds since $\left| \frac{\partial f_{i}}{\partial x} (x) \right| = \left\lbrace \begin{array}{ll} \frac{c}{2} & \text{if } x \in \left[ \frac{i}{n}, \frac{i + 1}{n} \right)\\ c & \text{else} \end{array} \right.$ for some $c > 0$ and therefore $\frac{c}{2} \mu (f (X)) \leq \mu (X) \leq c \mu (f (X))$ for all measurable subsets $X$ of $[0, 1]$. The second inequality holds since we have $f_{i} \left( \left[ \frac{i}{n}, \frac{i + 1}{n} \right) \right) \cap f_{j} \left( \left[ \frac{j}{n}, \frac{j + 1}{n} \right) \right) = \emptyset$ for $i \neq j$ (checking this is a good exercise). Therefore, any algorithm must produce a largely wrong result with $\frac{\mu \left( f_{i}^{- 1} (\mathcal{X}_{\operatorname{out}}) \cap \left[ \frac{i}{n}, \frac{i + 1}{n} \right) \right)}{\mu (f_{i}^{- 1} (\mathcal{X}_{\operatorname{out}}))} \leq \frac{2}{n}$ in at least one situation. Since the algorithm is reparametrization invariant, it produces the same bad result even if we present the real state $X_{i}$ instead of $\operatorname{CDF}_{X_{i}} (X_{i})$.
One can take this argument to the extreme and derive the following normal form for reparametrization invariant learning algorithms.
Proposition 3: For every learning algorithm that is invariant under reparametrization and every $n \in \mathbb{N}^+$ there is some set $O \subseteq [0, 1]^n$ such that for any continuous probability
distribution $D$ over $\mathbb{R}^n$ (which fulfills 2) the algorithm outputs $f_{D}^{- 1} (O)$ in the infinite data regime when presented with data independently drawn from $D$. The function $f_{D}$
denotes here the Knothe-Rosenblatt construction w.r.t. $D$.
That means the algorithm can essentially be specified by some set $O \subseteq [0, 1]^n$ which determines the outcome for almost any possible input. Importantly, this set $O$ is chosen before the data
is seen. In the case of $\mathcal{X}=\mathbb{R}$ the algorithm has a preselected set of percentiles that are anomalous and it just “estimates” the correct values for them from the data. Hence, the
result has almost nothing to do with the problem at hand!
As the previous example shows, this unavoidably leads to failures of the algorithm even for “trivial” instances. Note that the same type of argument can be applied to many other notions of anomalies
(including those that don’t solely depend on the distribution of nominal data).
5 Conclusion
Given the success of non-reparametrization invariant methods in anomaly detection, the principle seems unreasonably restrictive. However, I agree that we should have a reference framework for the
test scenario where the problem definition is reparametrization invariant. Such a framework could be the aforementioned mixture model of nominal and anomaly distribution. In practice, we mostly have
to live with the fact that only nominal data is available for training. Hence, I would rather stress that most notions of anomalies are tied to a distance metric (e.g. implicitly when estimating the density).
I think the more interesting question is whether we can learn representations that are particularly well suited for anomaly detection. It is known that deep density models such as normalizing flows
do not necessarily map out-of-distribution data into low density areas of the latent space [Kir20W, Zha21U]. I think this will continue to be a major research direction in anomaly detection for the
upcoming years.
Nevertheless, I believe the paper to be a valuable contribution. The authors remind us that some common approaches to anomaly detection should be used with more care. They rest on assumptions that
are not explicitly formulated and lack theoretical justification. Their article definitely motivated me to revisit these foundational questions with greater care.
Let us wrap up this article with a few takeaway messages:
• Anomaly detection is notoriously ill-defined and arguably subjective to a certain degree. When applying an anomaly detection method, one needs to ensure that the selected approach actually
captures the notion of anomaly in the application. The main goal of this article is to convince you that this is not intrinsic to the algorithms.
• Density based approaches are particularly fragile because the result can be almost arbitrarily changed by “simple” continuous invertible transformations (as they are routinely applied to data).
• Additionally, even in the original - usually high-dimensional - probability space low densities might not coincide with anomalous regions because rareness might be tied to a non-Euclidean notion
of similarity. This indicates that anomaly detection approaches might have the hidden assumption that Euclidean distance is a suitable measure of similarity.
• Mixture models or likelihood ratio scores against a reference distribution allow us to encode missing information about potential anomalies to interpret the densities consistently across all
possible reparametrizations.
• However, mixture models / reference distributions need to be defined in a representation specific fashion which is not always possible.
• I agree with the ICLR reviewers that even with an infinite supply of data and capacity no algorithm can guarantee to eventually learn the anomalous region from nominal data if arbitrary
reparametrizations are allowed. Therefore, one should not try to enforce reparametrization invariance as a general principle.
• We conclude that feature engineering has an even more crucial role in anomaly detection than in other areas of machine learning. It is the practitioners’ responsibility to make sure that the
representation of the data and the selected approach are suitable for each other.
• Learning good representations for anomaly detection remains a major challenge in machine learning research. | {"url":"https://transferlab.ai/blog/perfect-density-models/","timestamp":"2024-11-13T15:31:10Z","content_type":"text/html","content_length":"87539","record_id":"<urn:uuid:5f348d21-a620-4296-aa56-122105b7e66d>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00730.warc.gz"} |
Please show work/formulas and financial calculator steps, if used. Answer as much as you can.
(The following information applies to Problems 1-4)
The Collins Group, a leading producer of custom automobile accessories, has hired you to estimate the firm's weighted average cost of capital. The balance sheet and some other information are
provided below.
Current assets $ 38,000,000
Net plant, property, and equipment 101,000,000
Total assets $139,000,000
Liabilities and Equity
Accounts payable $ 10,000,000
Accruals 9,000,000
Current liabilities $ 19,000,000
Long-term debt (40,000 bonds, $1,000 par value) 40,000,000
Total liabilities $ 59,000,000
Common stock (10,000,000 shares) 30,000,000
Retained earnings 50,000,000
Total shareholders' equity 80,000,000
Total liabilities and shareholders' equity $139,000,000
1. The Collins Group’s 20-year, $1,000 par value bond with a 7.25% annual
coupon rate, paid semiannually, is selling for $875. What is the best estimate of the after-tax cost of debt if the firm’s tax rate is 40%?
a. 4.64%
b. 4.88%
c. 5.14%
d. 5.40%
e. 5.67%
2. The stock’s beta is 1.25, and the yield on a 20-year Treasury bond is 5.50%.
The required return on the stock market is 11.50%. Based on the CAPM,
what is the firm's cost of common stock?
a. 11.15%
b. 11.73%
c. 12.35%
d. 13.00%
e. 13.65%
3. Which of the following is the best estimate for the weight of debt for use in calculating the firm’s WACC? The debt is selling for $875 per bond and the stock is selling for $15.25 per share.
a. 18.67%
b. 19.60%
c. 20.58%
d. 21.61%
e. 22.69%
4. What is the best estimate of the firm's WACC?
a. 10.85%
b. 11.19%
c. 11.53%
d. 11.88%
e. 12.24%
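A rough numerical check of Problems 1–4 (my own sketch in Python/numpy; a financial calculator’s TVM keys with N = 40, PMT = 36.25, FV = 1000, PV = -875 give the same semiannual rate for Problem 1):

```python
import numpy as np

# Problem 1: semiannual YTM that prices the 20-year, 7.25% semiannual-coupon bond at $875
coupon, par, price, periods = 36.25, 1000, 875, 40
r = np.linspace(0.0001, 0.10, 200_000)                # candidate semiannual rates
pv = coupon * (1 - (1 + r) ** -periods) / r + par * (1 + r) ** -periods
ytm = 2 * r[np.argmin(np.abs(pv - price))]            # ~8.57% nominal annual rate
rd_after_tax = ytm * (1 - 0.40)                       # ~5.14%  -> answer (c)

# Problem 2: CAPM cost of common stock
rs = 0.055 + 1.25 * (0.115 - 0.055)                   # 13.00%  -> answer (d)

# Problem 3: market-value weight of debt
debt = 40_000 * 875
equity = 10_000_000 * 15.25
wd = debt / (debt + equity)                           # ~18.67% -> answer (a)

# Problem 4: WACC
wacc = wd * rd_after_tax + (1 - wd) * rs              # ~11.53% -> answer (c)
print(ytm, rd_after_tax, rs, wd, wacc)
```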
5. Quinlan Enterprises stock trades for $52.50 per share. It is expected to pay a $2.50 dividend at year end (D[1] = $2.50), and the dividend is expected to grow at a constant rate of 5.50% a year.
The before-tax cost of debt is 7.50%, and the tax rate is 40%. The target capital structure consists of 45% debt and 55% common equity. What is the company’s WACC if all the equity used is from
retained earnings?
a. 7.07%
b. 7.36%
c. 7.67%
d. 7.98%
e. 8.29%
6. A company’s perpetual preferred stock currently sells for $92.50 per share, and it pays an $8.00 annual dividend. If the company were to sell a new preferred issue, it would incur a flotation cost
of 5.00% of the issue price. What is the firm's cost of preferred stock?
a. 7.81%
b. 8.22%
c. 8.65%
d. 9.10%
e. 9.56%
7. The management of California Fluoride Industries (CFI) is planning next year’s capital budget. The company’s earnings and dividends are growing at a constant rate of 4 percent. The last dividend,
D[0], was $0.80; and the current equilibrium stock price is $8.73. CFI can raise new debt at a 12 percent before‑tax cost. CFI is at its optimal capital structure, which is 35 percent debt and 65
percent equity, and the firm’s marginal tax rate is 40 percent. CFI has the following independent, indivisible, and equally risky investment opportunities:
Project Cost Rate of Return
A $ 18,000 9%
B 16,000 11%
C 13,000 15%
D 23,000 13%
What is CFI’s optimal capital budget?
a. $70,000 b. $36,000 c. $34,000 d. $47,000 e. $0
8. Radiator Products Company (RPC) is at its optimal capital structure of 75 percent common equity and 25 percent debt. RPC’s WACC is 12.50 percent. RPC has a marginal tax rate of 40 percent. Next
year’s dividend is expected to be $2.50 per share, and RPC has a constant growth in earnings and dividends of 5 percent. The cost of common equity used in the WACC is based on retained earnings,
while the before‑tax cost of debt is 10 percent. What is RPC’s current equilibrium stock price?
a. $12.73 b. $17.23 c. $25.83 d. $20.37 e. $23.70
9. Which of the following statements is CORRECT?
a. When calculating the cost of preferred stock, companies must adjust for taxes, because dividends paid on preferred stock are deductible by the paying corporation.
b. Because of tax effects, an increase in the risk-free rate will have a greater effect on the after-tax cost of debt than on the cost of common stock as measured by the CAPM.
c. If a company’s beta increases, this will increase the cost of equity used to calculate the WACC, but only if the company does not have enough retained earnings to take care of its equity financing
and hence must issue new stock.
d. Higher flotation costs reduce investors' expected returns, and that leads to a reduction in a company’s WACC.
e. When calculating the cost of debt, a company needs to adjust for taxes, because interest payments are deductible by the paying corporation.
10. Which of the following statements is CORRECT?
a. WACC calculations should be based on the before-tax costs of all the individual capital components.
b. Flotation costs associated with issuing new common stock normally reduce the WACC.
c. If a company’s tax rate increases, then, all else equal, its weighted average cost of capital will decline.
d. An increase in the risk-free rate will normally lower the marginal costs of both debt and equity financing.
e. A change in a company’s target capital structure cannot affect its WACC.
1) C, 2) d, 3) a, 4) c, 5) c, 6) d, 7) b, 8) c, 9) e, 10) c | {"url":"https://justaaa.com/finance/175308-please-show-workformulas-and-financial-calculator","timestamp":"2024-11-03T06:17:33Z","content_type":"text/html","content_length":"70104","record_id":"<urn:uuid:a681d2b9-c3f6-4267-b571-730c463fc687>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00029.warc.gz"} |
extra questions
Given below are the important and extra questions for Chapter 1 class 10 maths real numbers
a. Long answer questions
b. True and false questions
c. Multiple choice questions(MCQ)
d. Short Answer type
e. Irrational number proof questions
f. Cross-word Puzzle
Long answer questions
Question 1
Without actually performing division, state which of these numbers will have a terminating decimal expansion and which will have a non-terminating repeating decimal expansion
1. $\frac {7}{25}$
2. $ \frac {3}{7}$
3. $ \frac {29}{343}$
4. $ \frac {6}{15}$
5. $ \frac {77}{210}$
6. $ \frac {11}{67}$
7. $ \frac {15}{27}$
8. $ \frac {11}{6}$
9. $ \frac {343445}{140}$
Those rational numbers which can be expressed in the form $\frac {x}{2^m \times 5^n}$ have a terminating decimal expansion, and those which cannot have a non-terminating repeating decimal expansion
Terminating decimal: (a), (d)
Non terminating repeating decimal: (b), (c), (e), (f), (g).(h) ,(i)
Question 2
Using Euclid’s theorem to find the HCF between the following numbers
a. 867 and 225
b. 616 and 32
Using Euclid theorem
$867=225 \times 3 +192$
$225=192 \times 1 +33$
$192=33 \times 5+ 27$
$33=27 \times 1+6$
$27=6 \times 4+3$
$6=3 \times 2+0$
So solution is 3
b. 8
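For part (b), applying Euclid’s algorithm in the same way:
$616 = 32 \times 19 + 8$
$32 = 8 \times 4 + 0$
So the HCF of 616 and 32 is 8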
Question 3
Write 10 rational numbers between
a. 4 and 5
b. 1/2 and 1/3
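One possible answer (any ten such fractions will do):
a. Writing $4 = \frac {80}{20}$ and $5 = \frac {100}{20}$, ten rational numbers between them are $\frac {81}{20}, \frac {82}{20}, \frac {83}{20}, \ldots, \frac {90}{20}$
b. Writing $\frac {1}{3} = \frac {40}{120}$ and $\frac {1}{2} = \frac {60}{120}$, ten rational numbers between them are $\frac {41}{120}, \frac {42}{120}, \frac {43}{120}, \ldots, \frac {50}{120}$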
Question 4
Represent in rational form.
a. 1.232323….
b. 1.25
c. 3.67777777
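One way to convert these:
a. Let $x = 1.2323\ldots$ Then $100x = 123.2323\ldots$, so $100x - x = 122$ and $x = \frac {122}{99}$
b. $1.25 = \frac {125}{100} = \frac {5}{4}$
c. Let $x = 3.6777\ldots$ Then $10x = 36.777\ldots$ and $100x = 367.77\ldots$, so $100x - 10x = 331$ and $x = \frac {331}{90}$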
Question 5
a. Prove that 2+√3 is an irrational number
b. Prove that 3√3 is an irrational number
a. Let us assume $2 + \sqrt {3}$ is a rational number, say $2 + \sqrt {3} = \frac {p}{q}$. Then $\sqrt {3} = \frac {p}{q} - 2 = \frac {p - 2q}{q}$, which is a rational number.
Since a rational number cannot be equal to the irrational number $\sqrt {3}$, our assumption is wrong. Hence $2 + \sqrt {3}$ is irrational.
b. Let us assume $3\sqrt {3}$ is a rational number, say $3\sqrt {3} = \frac {p}{q}$. Then $\sqrt {3} = \frac {p}{3q}$, which is a rational number.
Since a rational number cannot be equal to the irrational number $\sqrt {3}$, our assumption is wrong. Hence $3\sqrt {3}$ is irrational.
True or False statement
Question 6
Mark T/F as appropriate:
a. Numbers of the form $2n +1$, where n is any positive integer, are always odd numbers
b. Product of two prime numbers is always equal to their LCM
c. $\sqrt {3} \times \sqrt {12}$ is an irrational number
d. Every integer is a rational number
e. The HCF of two prime numbers is always 1
f. There are infinite integers between two integers
g. There are only finitely many rational numbers between 2 and 3
h. √3 can be expressed in the form √3/1, so it is a rational number
i. The number $6^n$, for n a natural number, can end in the digit zero
j. Any positive odd integer is of the form 6m+1 or 6m+3 or 6m+5, where m is some integer
1. True
2. True
3. False, as $\sqrt {3} \times \sqrt {12} = \sqrt {36} = 6$, which is a rational number
4. True, as any integer can be expressed in the form p/q
5. True
6. False, there are only finitely many integers between two integers
7. False
8. False
9. False
10. True
Multiple choice Questions
Question 7
The HCF (a, b) = 2 and LCM (a, b) = 27. What is the value of $a \times b$?
a. 25
b. 9
c. 27
d. 54
Answer is (d)
LCM × HCF = a × b
Question 8
2+√2 is a
a. Non terminating repeating
b. Terminating
c. Non terminating non repeating
d. None of these
Answer is (c)
Question 9
If a and b are co-prime, which of these is true?
a. LCM (a, b) = a × b
b. HCF (a, b) = a × b
c. a=br
d. None of these
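Answer is (a), since co-prime numbers have HCF (a, b) = 1 and therefore LCM (a, b) × 1 = a × b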
Question 10
A rational number can be expressed as a terminating decimal when the only factors of the denominator are
a. 2 or 5 only
b. 2 or 3 only
c. 3 or 5 only
d. 3 or 7 only
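Answer is (a): a rational number has a terminating decimal expansion exactly when the only prime factors of its denominator are 2 or 5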
Question 11
if $x^2 =3 \;, \; y^2=9 \; ,\; z^3=27$, which of these is true
a. x is an irrational number
b. y is a rational number
c. z is a rational number
d. All of the above
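Answer is (d): $x = \pm\sqrt {3}$ is irrational, while $y^2 = 9$ and $z^3 = 27$ give the rational values $y = \pm 3$ and $z = 3$, so all three statements hold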
Short answer question
Question 12
Find the HCF and LCM of these by factorization technique
a. 27, 81
b. 120, 144
c. 29029, 580
a. $27 = 3 \times 3 \times 3 = 3^3$ and $81 = 3 \times 3 \times 3 \times 3 = 3^4$, so HCF = $3^3 = 27$ and LCM = $3^4 = 81$
b. $120 = 2^3 \times 3 \times 5$ and $144 = 2^4 \times 3^2$, so HCF = $2^3 \times 3 = 24$ and LCM = $2^4 \times 3^2 \times 5 = 720$
c. $29029 = 29 \times 13 \times 11 \times 7$ and $580 = 2^2 \times 5 \times 29$, so HCF = $29$ and LCM = $29 \times 13 \times 11 \times 7 \times 4 \times 5 = 580580$
Question 13
Find all the positive integral values of p for which $p^2 +16$ is a perfect square?
Let $p^2 + 16 = q^2$ for some positive integer q. Then $q^2 - p^2 = 16$, i.e. $(q - p)(q + p) = 16$. So we have
Case 1
$q - p = 2$ and $q + p = 8$, which gives $q = 5$ and $p = 3$
Case 2
$q - p = 4$ and $q + p = 4$, which gives $q = 4$ and $p = 0$
The remaining factorizations of 16 give no positive integer value of p.
So the only positive integral value is $p = 3$, since $3^2 + 16 = 25 = 5^2$
Question 14
Find the nature of the product $(\sqrt {2} - \sqrt {3}) ( \sqrt {3} + \sqrt {2})$ ?
$(\sqrt {2} - \sqrt {3}) ( \sqrt {3} + \sqrt {2})$
$= (\sqrt {2} - \sqrt {3}) ( \sqrt {2} + \sqrt {3})$
$= (\sqrt {2})^2 - (\sqrt {3})^2 = 2 - 3 = -1$
So the product is $-1$, which is a rational number
Question 15
Show that $4^n$ can never end with the digit zero for any natural number n.
A number ending with zero must have both 2 and 5 as prime factors, e.g. $10 = 2 \times 5$ and $550 = 2 \times 5 \times 5 \times 11$
Here $4^n = (2 \times 2)^n = 2^{2n}$, whose only prime factor is 2
So $4^n$ does not have 5 as a factor
Therefore $4^n$ can never end with the digit zero
Question 16
Use Euclid’s division lemma to show that the cube of any positive integer is of the form 9m, 9m + 1 or 9m + 8.
For a and b any two positive integer, we can always find unique integer q and r such that
a=bq + r , 0 ≤ r < b
Now on putting b=3 ,we get
a=3q+ r , 0 ≤ r < 3
If $a = 3q$, then $a^3 = 27q^3 = 9(3q^3) = 9m$ where $m = 3q^3$
If $a = 3q + 1$, then $a^3 = 27q^3 + 27q^2 + 9q + 1 = 9(3q^3 + 3q^2 + q) + 1 = 9m + 1$ where $m = 3q^3 + 3q^2 + q$
If $a = 3q + 2$, then $a^3 = 27q^3 + 54q^2 + 36q + 8 = 9(3q^3 + 6q^2 + 4q) + 8 = 9m + 8$ where $m = 3q^3 + 6q^2 + 4q$
Irrational number proof questions
Question 17
Show that $3 + 5 \sqrt {2} $ is an irrational number. Is sum of two irrational numbers always an irrational number?
We can solve it using contradiction method.
Let $3 + 5 \sqrt {2} $ is a rational number, then
$3 + 5 \sqrt {2} = \frac {p}{q}$
$3q+ 5q\sqrt {2} = p$
$\sqrt {2} = \frac {p-3q}{5q}$
Clearly,RHS is a rational number ,but LHS is not.
So,our contradiction is wrong.
$3 + 5 \sqrt {2} $ is an irrational number
sum of two irrational numbers is not always an irrational number
$3 + 5 \sqrt {2} $ and $3 - 5 \sqrt {2} $ are irrational number, but the sum is rational number
$3 + 5 \sqrt {2} + 3 - 5 \sqrt {2}= 6$
Question 18
Prove that $\sqrt {3}$ is an irrational number and hence show that $2\sqrt {3}$ is also an irrational number.
Let us assume, to the contrary, that $\sqrt {3}$ is a rational number. Then
$\sqrt {3}=\frac {p}{q}$
where p and q are co-prime integers and $q \neq 0$.
$q\sqrt {3}=p$
Squaring both sides, $3q^2 = p^2$
So 3 divides $p^2$, and from the theorem we know that
3 will divide p also. Let $p = 3c$. Then $3q^2 = 9c^2$, i.e. $q^2 = 3c^2$
So q is divisible by 3 also
So both p and q are divisible by 3, which contradicts the assumption that they are co-prime
So $\sqrt {3}$ is an irrational number
Now suppose $2\sqrt {3}$ were rational, say $2\sqrt {3} = \frac {a}{b}$. Then $\sqrt {3} = \frac {a}{2b}$ would be rational, a contradiction. Hence $2\sqrt {3}$ is also irrational.
Question 19
Prove that $5 - \sqrt {3}$ is an irrational number.
let $5 - \sqrt {3}$ be rational number
$5 - \sqrt {3}=\frac {p}{q}$
$\sqrt {3}= \frac {5q -p}{q}$
Clearly RHS is rational number but LHS is not,
So our assumption is wrong.
$5 - \sqrt {3}$ is an irrational number
Question 20
Prove that $2 \sqrt {5}$ is an irrational number.
Question 21
Show that $(\sqrt {3} + \sqrt {5})^2$ is an irrational number.
Let $(\sqrt {3} + \sqrt {5})^2= 8+ 2 \sqrt {15}$ is a rational number
$8+ 2 \sqrt {15}=\frac {p}{q}$
$\sqrt {15}= \frac {p -8q}{2q}$
Clearly RHS is rational number but LHS is not,
So our assumption is wrong.
$(\sqrt {3} + \sqrt {5})^2$ is an irrational number.
Question 22
Prove that $4 - \sqrt {5} $ is an irrational number.
Question 23
Prove that $\sqrt {5}$ is an irrational number.
We will try to prove it contradiction method
Let $\sqrt {5}$ be rational
then it must be expressible in the form p/q [q is not equal to 0] [p and q are co-prime]
$\sqrt {5}=\frac {p}{q}$
$\sqrt {5} \times q = p$
squaring on both sides
$5q^2 = p^2$
So, p^2 is divisible by 5
Now from the theorem, p is divisible by 5
let $p = 5c$ [c is a positive integer] [squaring on both sides ]
$p^2 = 25c^2$
So we can obtain below from both the above equation
$5q^2 = 25c^2$
$q^2 = 5c^2$
So q is divisible by 5
thus q and p have a common factor 5
there is a contradiction
so $\sqrt {5}$ is an irrational number
Question 24
Prove that √2 + 1/√2 is an irrational number
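One way to see this: $\sqrt {2} + \frac {1}{\sqrt {2}} = \frac {2 + 1}{\sqrt {2}} = \frac {3}{\sqrt {2}} = \frac {3\sqrt {2}}{2}$. If this were a rational number $\frac {p}{q}$, then $\sqrt {2} = \frac {2p}{3q}$ would also be rational, which contradicts the fact that $\sqrt {2}$ is irrational. Hence $\sqrt {2} + \frac {1}{\sqrt {2}}$ is irrational.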
Question 25
Prove that for any positive integer n, n
– n is divisible by 6.
n^3 - n =n(n^2 -1) = n(n-1)(n+1)
Any Number can be represented in the form 3q,3q+1 and 3q+2
for n=3q, n(n-1)(n+1)= 3q(3q-1)(3q+1) Divisible by 3
for n=3q+1,n(n-1)(n+1)= (3q+1)(3q)(3q+2) Divisible by 3
for n=3q+2,n(n-1)(n+1)= (3q+2)(3q+1)(3q+3)=3(3q+2)(3q+1)(q+1) Divisible by 3
So product n(n-1)(n+1) is divisible by 3
Any Number can be represented in the form 2m,2m+1
for n=2m, n(n-1)(n+1)= 2m(2m-1)(2m+1) Divisible by 2
for n=2m+1, n(n-1)(n+1)= (2m+1)(2m)(2m+2) Divisible by 2
So the product n(n-1)(n+1) is divisible by both 2 and 3, and hence by 6
Question 26
If n is rational and √m is irrational, then prove that (n + √m) is irrational.
Question 27
Show that one and only one out of n, n + 4, n + 8, n + 12 and n + 16 is divisible by 5, where n is any positive integer
Let n, n+4, n+8, n+12 and n+16 be the given integers,
where n can take the form 5q, 5q+1 ,5q+2 , 5q + 3 , 5q + 4.
Case I when n=5q
Then n is divisible by 5.
but neither of 5q+4 ,5q+8 , 5q + 12 , 5q + 16 is divisible by 5.
Case II when n=5q+1
Then n is not divisible by 5.
n+4 = 5q+1+4 = 5q+5=5(q +1),
which is divisible by 5.
but neither of 5q+1 ,5q+9 , 5q + 13 , 5q + 17 is divisible by 5.
Case III when n=5q+2
Then n is not divisible by 5.
n+8 = 5q+2+8 =5q+10=5(q+2),
which is divisible by 5.
but neither of 5q+2 ,5q+6 , 5q + 14 , 5q + 18 is divisible by 5.
Case IV when n=5q+3
Then n is not divisible by 5.
n+12 = 5q+3+12 =5q+15=5(q+3),
which is divisible by 5.
but neither of 5q+3 ,5q+7 , 5q + 11 , 5q + 19 is divisible by 5.
Case V when n=5q+4
Then n is not divisible by 5.
n+16 = 5q+4+16 =5q+20=5(q+4), which is divisible by 5.
but neither of 5q+4 ,5q+8 , 5q + 12 , 5q + 16 is divisible by 5.
Hence, one of n, n+4,n+8,n +12 and n+16 is divisible by 5.
Question 28
Prove that √11 is irrational.
Question 29
Show that 3√2 is irrational.
Question 30
Prove that the sum of a rational number and an irrational number is always irrational.
We will try to prove it contradiction method
Let z be the irrational number, p/q is the rational number. We assume the sum is a rational number a/b
z+ p/q = a/b
z= a/b - p/q = (aq-bp)/bq
Now aq, bp and bq are all integers as a, b, p, q are integers. So (aq-bp)/bq is a rational number, but z is an irrational number. So our assumption is wrong
The sum is a irrational number
Question 31
The product of a non-zero rational and an irrational number is
(A) always irrational
(B) always rational
(C) rational or irrational
(D) one
The product is always irrational
We can prove it like below with contradiction method
Let y be an irrational number, a/b a non-zero rational number, and suppose the product is a rational number p/q
ya/b = p/q
y= bp/aq
bp and aq are integers (with aq ≠ 0), so the ratio is rational, but y is irrational, so our assumption is wrong. The product is always irrational
Question 32
Prove that √p + √q is irrational, where p, q are primes.
Question 33
Prove that one of any three consecutive positive integers must be divisible by 3.
Let three consecutive positive integers are n,n+1,n+2 where n is any positive integer
Any Number can be represented in the form 3q,3q+1 and 3q+2
for n=3q, n itself is divisible by 3
for n=3q+1,n+2=3q+3 Divisible by 3
for n=3q+2,n+1=3q+3 Divisible by 3
So one of any three consecutive positive integers must be divisible by 3.
Cross-word Puzzle to check your Real number knowledge
Across
2. Number which is not divisible by any other number except 1 and itself
6. Nature of the decimal expansion of a number of the form $\frac {1}{2^n}$
Down
1. Number which can be written as a product of primes
3. In Euclid's division lemma a = bq + r, it is the value r
4. HCF can be found using this division algorithm
5. In Euclid's division lemma a = bq + r, it is the value q
7. Numbers of the form p/q
(1) Composite
(2) prime
(3) remainder
(4) Euclid
(5) quotient
(6) Terminating
(7) Rational
These class 10 real numbers extra questions with answers have been prepared keeping in mind the latest CBSE syllabus and are designed to improve the academic performance of the students.
If you find mistakes, please provide feedback by mail.
Practice Question
Question 1 What is $1 - \sqrt {3}$ ?
A) Non terminating repeating
B) Non terminating non repeating
C) Terminating
D) None of the above
Question 2 The volume of the largest right circular cone that can be cut out from a cube of edge 4.2 cm is?
A) 19.4 cm^3
B) 12 cm^3
C) 78.6 cm^3
D) 58.2 cm^3
Question 3 The sum of the first three terms of an AP is 33. If the product of the first and the third term exceeds the second term by 29, the AP is ?
A) 2 ,21,11
B) 1,10,19
C) -1 ,8,17
D) 2 ,11,20 | {"url":"https://physicscatalyst.com/Class10/real-numbers-important-questions.php","timestamp":"2024-11-07T07:28:01Z","content_type":"text/html","content_length":"85482","record_id":"<urn:uuid:26555579-a3fd-4e80-92e3-9b09e653245f>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00170.warc.gz"} |
[EM] Fixing IRV
Anthony Simmons bbadonov at yahoo.com
Sat Aug 11 15:41:11 PDT 2001
>> From: Richard Moore <rmoore4 at home.com>
>> Subject: Re: [EM] Fixing IRV
>> > How do you want to check whether a given elimination method can
>> > also be defined as a method that doesn't "eliminate alternatives
>> > prior to selecting a winner"?
>> It is not reasonable to ask for a way to check this in
>> general. As Godel showed, there are true statements in
>> mathematics for which no proof can be found. So we may have
>> to live with the possibility that there may be methods out
>> there for which we cannot determine (with the certainty of
>> mathematical proof) if there is an equivalent
>> non-elimination method. If so, the proposition that the
>> equivalent exists (or not) for such a method will remain a
>> conjecture. Maybe it's true that no such problematic methods
>> exist (meta-statement). Maybe someone can even prove (or
>> disprove) this meta-statement. Then again, maybe the
>> meta-statement is true but cannot be proven.
>> In other words, "don't go there".
>> Richard
The answer falls out of its own accord if I just
reformulate the problem in sufficiently rigorous terms.
Let the number of voters be N, and the number of ways of
marking a ballot (valid ways plus one general way called
"invalid") be m. Then there are N^m distinct ways that all
of the ballots can be marked. (If you like, you could
include all registered voters in N, and include one more way
of marking the ballot, "unvoted".)
Each way of marking the ballots maps to an outcome, which is
a ranking of the candidates, allowing for whatever
peculiarities, such as ties, might be allowed.
Call this function, which maps sets of marked ballots to
outcomes, an "election method". Each election method, being
a function, is just a set of ordered pairs; each pair
contains one set of ballots and the outcome they map to.
Each election method is a finite set (with N^m elements) of
ordered pairs. Since every election method is a finite set,
there is an effective, if somewhat tedious, method of
comparing two of them to see if they are equivalent --
compare their members. One of those "at least in principle"
things. Since the number of election methods is also finite,
it is, again at least in principle, possible to compare a
given election method with all other election methods (or
with a subset of them) to see if they are equivalent.
Therefore, it is possible (yes, "at least in principle") to
prove that a given election method is or is not equivalent to
a non-elimination method. This does *not* mean there's a
feasible method.
Life is so easy with finite sets.
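For what it is worth, the "effective, if somewhat tedious, method" is easy to
sketch in code. This is only a toy illustration of the idea (two ways of
writing plurality over two candidates), not a feasible procedure for realistic
N and m:

```python
from itertools import product

def equivalent(method_a, method_b, num_voters, markings):
    # Compare the two methods member by member: check that they map every
    # possible way of marking all the ballots to the same outcome.
    for profile in product(markings, repeat=num_voters):  # len(markings)**num_voters profiles
        if method_a(profile) != method_b(profile):
            return False
    return True

plurality_1 = lambda p: max(set(p), key=p.count)
plurality_2 = lambda p: "A" if p.count("A") >= p.count("B") else "B"
print(equivalent(plurality_1, plurality_2, num_voters=3, markings=("A", "B")))  # True
```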
More information about the Election-Methods mailing list | {"url":"http://lists.electorama.com/pipermail/election-methods-electorama.com/2001-August/104600.html","timestamp":"2024-11-10T17:51:45Z","content_type":"text/html","content_length":"5710","record_id":"<urn:uuid:c445f998-80de-4ecd-80f4-35dfe95eb02a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00521.warc.gz"} |
Realism vs. Anti-Realism I: Introduction — Guest Post by G. Rodrigues
Posted inPhilosophy
Realism vs. Anti-Realism I: Introduction — Guest Post by G. Rodrigues
Mathematical realist Blaise Pascal
The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve. We should be grateful for
it and hope that it will remain valid in future research and that it will extend, for better or for worse, to our pleasure, even though perhaps also to our bafflement, to wide branches of
— Eugene Wigner, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”
These are the final words of E. Wigner’s famous essay, which you can find on-line here. Wigner uses the God-haunted word “miracle”, once in the quoted paragraph, several times in the whole essay. But
His presence is more palpable when he speaks about the wonderful, unmerited gift of mathematics; I do not know what his relation with the Deity was, but he may have found himself in the sad
predicament of being thankful and having no one to thank for it.
The sense of the miraculous, or more prosaically for those who distrust such intrusions of the numinous in our ordinary lives, the sense of wonder, is the beginning of all True Philosophy. There is a
puzzle here and an answer must be found. Why is the universe orderly? Not only that, why is the orderliness of the universe of such a nature that it can be described and explained, at least in part
(and I would add, only in part) by mathematics? And by supremely abstract mathematics, discovered and developed to solve purely mathematical problems, quite independently of its appropriateness for
describing the real world[1]?
A tentative answer is to adopt a realist stance and say that mathematical objects have an extra-mental reality just like the common ordinary objects of our experience like rocks, trees, persons and
planets, and in studying them we are, somehow, somewhere, somewhen, discovering objective features of reality. This suggestion needs some considerable spelling out, but to see how it imposes itself
forcefully, let us watch W. Quine make the so called indispensability argument[2]:
1. We should believe the theory which best accounts for our sense experience.
2. If we believe a theory, we must believe in its ontological commitments.
3. The ontological commitments of any theory are the objects over which that theory first-order quantifies.
4. The theory which best accounts for our sense experience first-order quantifies over mathematical objects.
5. We should believe that mathematical objects exist.
Lest there be any confusion, I hasten to add that I do not find this argument compelling. Premises 2 and 3 are particularly hard to swallow[3]. If I invoke it here it is for two wholly different
sorts of reasons: one, the delightful irony in borrowing a stick from your enemy (Quine was a confirmed naturalist) with which to beat your other enemies and two, to once again stress that, given how
inimical mathematical realism is to naturalism[4], it is quite telling that Quine felt compelled to adopt it (and expansively redefine naturalism along the way). For otherwise, if mathematical
objects are nothing but a product of the mind with no objective basis on reality (the mind itself being, on some accounts, the random material product of a highly contingent history), in the same way
as fictions are, then what can possibly explain their appropriateness in the description of reality?
If that is indeed the case, and the Dirac operator of a spin manifold or the curvature of a connection are just as fictitious as novels or children’s stories, then are we not forced to retreat to the
nigh-mystical position of Wigner? And if that is indeed the case, what is the principled distinction between say, the Dirac operator on a spin manifold, and quarks which because of QCD confinement
cannot be observed free? Are we not committed to say that quarks are equally fictitious? And if that, then are we not obliged to conclude that science tells us nothing objective about the nature of
the world? And if that, should not electrons or evolution by natural selection go the same way of quarks?
But enough of questions and on to the heart of the matter. Mathematical realism is just one among a cluster of related problems involving the ontological status of such seemingly abstract objects as
universals, properties, propositions, relations, etc. All these notions are intimately related to one another. For the sake of simplicity, I will lump them all under one banner, although realists of
various stripes will insist on all sorts of distinctions.
Here, I will defend not mathematical realism per se[5], but the closely related problem of realism with respect to universals. For those who have read The Last Superstition, you will know that Edward
Feser addresses this problem in pages 39-49. Although I will return to some of the arguments made in those pages, here I will take a different route and follow J. P. Moreland’s Universals[6].
The plan for this series is then as follows: first I will present an account of the realist position with respect to universals, the phenomena that it purports to explain and the challenge it
presents to anti-realists. Then I will survey some variants of anti-realism with respect to universals and point out, not only the problems they have, but how these problems typically recur in all
anti-realist stances. Then I will respond to the more common anti-realist objections. I will wrap things up with a few words on why this seemingly abstruse and irrelevant problem is actually at the
heart of many contemporary discussions by (very) briefly surveying one such example and along the way annoy to no end and give offense to those delicate, clueless, liberal souls[7]. At least so I hope.
Some caveats are in order. My background is in mathematics, and to a lesser extent in physics. From this, two immediate corollaries follow:
1. As a general rule, mathematicians are notoriously bad expositors; over-abundance of technical detail in contrast to a dearth of understanding. In mathematics, this is somewhat inevitable, as
knowing the technical details is more often than not what understanding amounts to. So while I cannot say that I have any of the qualities usually recognized in mathematicians like rigor, attention
to detail, etc., you can surely expect that I am not the exception to the general rule.
2. Of necessity, not being a philosopher, I will say nothing that is original. Or to put it in other words, the only originality I can claim is in my mistakes.
In 2, originality is used in the paltry sense of new or novel; but there is a deeper sense to it, related to the root word origin. To recognize it, it is probably best to step into the world of the
arts, literature in particular. The greatness of an author like T. S. Eliot or James Joyce (if these examples offend your tastes, replace them by your own as similar remarks apply) lies in part in
the fact that their genius has opened up a clearing in our common cultural heritage, where their voices rise and add up to the chorus (or cacophony: choose your preferred metaphor) of the voices of
the Great and Magnificent Dead. There can be no understanding of Joyce, understanding in the deeper senses of literary criticism, without locating him within the total order of literature and
clarifying his relationship with his predecessors, Shakespeare and Homer above all.
In Finnegans Wake, we feel that a limit has been reached and that words have been found to express the hitherto inexpressible, what was always there since the origin, what is definitive of our nature as
human beings qua human beings but that would not, and could not have been recognized unless it was first illumined to us. These illuminations then become the guiding lights in the inner theater of
our imaginings. Something like this sense of continuity is lost in philosophy with the advent of the Cartesian revolution (and the Hobbesian revolution, and the Baconian revolution, and etc.) where
the ties with Plato, Aristotle and their progeny, the Scholastics, were severed. The curt dismissal of a whole tradition without even dignifying to offer a semblance of criticism is not exactly the
type of cultural continuity I am thinking of. In cycle after cycle, modern philosophers will raze to the ground the hard won wisdom of the past and build upon the ashes of their forefathers their own
metaphysical edifices. But to borrow Kierkegaard’s charge against Hegel, peppered with some rhetorical flavor, no one, including their builders, wants to live in them because the darned things are so
damn ugly[8].
The same Kierkegaard, no friend of Aquinas and co., in an intense little book called Repetition, proposes this term to replace the Platonic term of anamnesis or recollection. For Plato we have always
known, but upon the shock of being dropped on the bucket of the world we have forgotten, and the travail of Wisdom is to recollect and reawaken what is origin-al within ourselves. Or as Francis Bacon
puts it at the beginning of Essay LVIII Of Vicissitude of Things, in his very distinctive diction:
Solomon saith, “There is no new thing upon the earth”. So that as Plato had an imagination, “That all knowledge was but remembrance”; so Solomon giveth his sentence, “That all novelty is but
Kierkegaard means by repetition, not the stale, desiccated reiteration of old formulas, but a re-creation in the Apocalyptic terms of “Behold, I make all things new”. So the third corollary is a
plea, and here I am following Feser again, for a renewed look upon the metaphysical tradition of the Scholastics. Not a return to some fabled Golden Age, that never existed anyway[9], but a
development and elaboration upon the sound, realist metaphysics and philosophy of nature developed by those men and concomitantly, a gentle nudge to the reader, if any there be, to go search in more
appropriate places for a more exhaustive explanation of the issues involved.
I will take as my fourth and last corollary, a warning and a dire one indeed. As the reader may have already observed, I am given, among other sins, to ramblings, digressions[10], asides, footnotes,
parenthetical remarks, heavy doses of pedantry and (salutary) exaggerations. Add on top of this the fact that the series will drag itself through four more installments, and you may want to reserve
your comments to future posts. Anyway, as the typical villain in a comic book would have it, “I am invincible!” (clenched fists, maniacal laughter), so feel free to Snipe, Snide and Snark; I may even
respond in kind. Unless that is, someone bores me to death. Literally.
[1] This is not to deny that many developments in mathematics have indeed occurred in answer to problems posed by other disciplines, most notably physics. But the fact that such developments did come
about that way, does not entail that they necessarily had to come about that way, this latter claim being patently absurd as even the most incipient knowledge of modern mathematics (starting about
the beginning of the 19th century with the efforts to give calculus a firm and rigorous foundation) shows.
[2] H. Putnam, M. Resnik, etc. have advanced slightly different versions of the indispensability argument. See The Indispensability Argument in the Philosophy of Mathematics for more information.
[3] And then again, the modern strategy to defuse indispensability arguments, appropriately called dispensability arguments, involves rewriting the physical theories to avoid the quantification over
mathematical objects. But such rewriting, even when successful, appeals to second-order logic or mereological axioms which are even more controversial and problematic. As Quine quipped, higher order
logic is set theory in sheep’s clothing, so it is legitimate to wonder how successful the strategy is given that you avoid reference to mathematical objects (e.g. sets) only by introducing them
surreptitiously and by the back door, suitably redressed. Once again I refer the reader to The Indispensability Argument in the Philosophy of Mathematics for more information and references.
[4] I will have occasion to return to this point later on.
[5] The reason why I will not be defending mathematical realism per se is in short, and to quote my lifelong intellectual hero Dr. Johnson, “Ignorance, madam, pure ignorance”. If I were a Platonist
realist, I would have available a straightforward account of all of mathematics, but since I reject Platonism for reasons I will not delve into, this move is not available to me. I could still take
the neo-Platonist route, which is indeed available, of saying that mathematical objects pre-exist in the mind of God from all eternity, but favoring an Aristotelian-Thomistic metaphysics something
more is needed than this simple, stopgap answer. Mathematics is a fascinating and bizarre realm, in more than one sense, and as far as I know, Thomists have not written much about it, and what there
is, it is hard to digest — at the level of PhD thesis 700 pages thick, couched in impenetrable jargon.
[6] Further bibliographical references will be scattered throughout the posts for those interested in pursuing these matters.
[7] This is war; the air is burning, shrapnel will be flying everywhere. Better get down on the floor.
[8] Recommended reading: Mortimer Adler, Ten philosophical mistakes.
[9] Such Golden Age proclamations usually betray a singular lack of historical sense.
[10] Here I confess I was really tempted to quote at length the endless ironies of J. Swift’s “A digression in praise of digressions”. Just go read it.
21 Comments
I do look forward to it. It seems very interesting.
This is like reading from a menu in a 3 star restaurant, you know it is going to be excellent, but the portions are expected to be small….
For me, a small portion at one time is all I can swallow.
In following up on the link to the indispensability argument I found this:
Standard readings of mathematical claims entail the existence of mathematical objects. But, our best epistemic theories seem to debar any knowledge of mathematical objects. Thus, the
philosopher of mathematics faces a dilemma: either abandon standard readings of mathematical claims or give up our best epistemic theories.
Further down we apparently find what the anthropology behind these so called best epistemic theories is:
But, the rationalist’s claims appear incompatible with an understanding of human beings as physical creatures whose capacities for learning are exhausted by our physical bodies.
Is this sort of self-evident? Am I missing something here? Is this a counterargument or a colossal prejudice?
I will deal with the epistemological objection in part IV.
To me it seems that the mathematical realists confuse the description of a thing with the thing itself.
Mathematics can be used to describe the real world but so can words. So-called mathematical objects are no more or less real than words are.
In these terms, it is probably better to think of mathematics as a language for describing reality that is more precise than traditional languages, but a language nonetheless.
But what in physical reality is meant by references to “conjoining and splitting topologies on a function space”? Radical nominalism has its own problems. A word like “dog” becomes a word
precisely because there is something real in Fido, Rover, and Spot that is not particular to only one of them. That is why the realism of the universals is precisely the same as the realism of
mathematics. The argument that language and mathematics are descriptors isn’t really an argument against realism.
Ye Olde Statistician,
Except that the description, even with the greater precision of mathematics, will always be incomplete, an approximation of reality. You are confusing the description of reality with reality itself.
And just to be clear, I am not arguing for or against either mathematical or linguistic realism. Merely pointing out that every one of the arguments presented here for mathematical realism applies equally well to all other forms of language. To the point that IF mathematical objects have reality beyond the mind, then words must as well.
The arguments presented here offer no rational basis to distinguish mathematics from other languages.
I wouldn’t mind (as a native Dutch speaker) a somewhat simpler text. Unless there’s going to be proof that neither the English, nor the Dutch, language exist. In which case the proof should be in as complicated a kind of English as possible.
To speak of words in the general sense you are doing does not make any sense to me in this context. There is a point in something corresponding to the word dog, which is immanent to all dogs, yet no such correspondence exists for all the Rovers, Spots and Fidos in the world. But even with dogs, it is hypothetical, albeit real, what this immanent being of dogs is, at least to most of us
I suppose. I experienced just how difficult it can get to come to terms with such things. Sander might well remember how I attempted to express the universal connected to the word house. I ran
into an abstraction and failed. Yet with mathematics it is much easier. You can actually bring to expression what a circle or a triangle is. The definition and the construction coincide. Try that
with dog!
Therefore I do not agree with YOS that: the realism of the universals is precisely the same as the realism of mathematics. It could be true objectively, but subjectively the dog is much harder to
apprehend than the circle.
I am not specifically arguing that the realism of the universals is precisely the same as the realism of mathematics. I am merely pointing out that, as I read it, the article above offers no basis
to distinguish them.
If you wish to convince me I am wrong you will have to show how the specific arguments in the article apply to one but not the other.
What Matt is saying is that mathematics is a language. A mathematical description/definition of a circle may very well fully describe the concept of a circle but is not itself a circle. Also, the
mathematical description of a circle fails to describe any circle that appears outside of the mind, say for instance, drawn on paper. So, it is precise only in the mind and nowhere else.
Since the subject is words, here’s a dictionary definition of concept.
con·cept [ kón sèpt ]
1. something thought or imagined: something that somebody has thought up, or that somebody might be able to imagine
2. broad principle affecting perception and behavior: a broad abstract idea or a guiding general principle, e.g. one that determines how a person or culture behaves, or how nature, reality,
or events are perceived
3. understanding or grasp: the most basic understanding of something
4. way of doing or perceiving something: a method, plan, or type of product or design
Synonyms: idea, notion, thought, perception, impression, conception, theory, model, hypothesis, view, belief.
Note that one of its synonyms is model.
Mathematics describes certain concepts but at no time are mathematical concepts any more real than any other concept.
I do not have any wish to convince you of anything yet, I am myself searching for the right approach to this issue.
What you are saying does not follow. Certainly the definition does not apply to a circle drawn on paper, but that does not justify the conclusion that the concept of a circle is just something in
my mind. The definition in Dutch is in my mind, the definition in some other language in yours (I presume English). Yet the circle is in neither. There is no your circle and my circle. It is the
interpretation of this fact that we should be talking about.
The fact that mathematical concepts can be communicated from one mind to another like any other knowledge does not remove those concepts from the realm of the mind and put them somewhere in the
real world.
Speaking of circles, we seem to be talking in one. The point is there is a distinction between descriptions and the whatever being described. You seem to have agreed with that when you said yet
the circle is in neither. Mathematics is just another language. It may not be the same class as Dutch, German or English but a language nonetheless. The expression for a circle is just how a
circle is described in math but, even there, it is not a circle — just a description.
Whether circles or any other concept actually exist outside of the mind is perhaps an interesting question. I would like to see the argument for it. Even if true, that hardly qualifies their
descriptions to also be so regardless of language choice.
What do you mean by the real world? And what do you mean by communicated from one mind to the other? I apprehend what a circle is. I turn to someone next to me and I try to make him apprehend
what a circle is too. It is not too different from pointing at something at the horizon. If the other person looks in that direction, he will simply see what I mean for himself.
I would like to know in what way mathematics is a language to you. Do you mean it has symbols and grammar other than English or German? Or do you apply the term loosely because you find that
mathematical objects are descriptive of things in the real world?
I would distinguish between a couple of things. If we speak of a description of a circle I assume we mean the sequence of words by which we bring to expression what a circle is. That indeed is
not itself a circle. Secondly we might have a mental picture of a circle. Surely you have yours and I have mine. I tend to favor very small circles, but that is just me. (Incidentally, if we speak
of dogs it is mostly on this level of mental pictures or actual sense experience). The mental picture is not the concept of the circle, it is simply the proxy of the circle we might otherwise be
inclined to draw on paper as an aid. The pure concept cannot be pictured, neither does it consist of parts, like its description. Yet it is there.
Through thinking we determine what a circle is, it is a dictate. A circle is indeed nowhere to be found, it must be created. The conundrum is that although such an object is seemingly made up by
us, it is not arbitrary, but highly relevant with regard to the one world around us. Can you agree on this?
. Do you mean it has symbols and grammar other than English or German?
Yes. Not the field of mathematics but the expressions themselves.
“What do you mean by the real world?”
The real world is the realm of the physical. Things you can see, hear and touch. If you can’t distinguish these things from the things that are purely in your mind, further discussion is pointless.
“And what do you mean by communicated from one mind to the other?”
That is what language (spoken, written, or anything else) is for. This is what we are doing at this blog. If you don’t understand this, what are you doing on this blog, throwing random symbols at
a computer screen?
“I turn to someone next to me and I try to make him apprehend what a circle is too. It is not too different from pointing at something at the horizon. If the other person looks in that direction,
he will simply see what I mean for himself.”
It is very different than what you describe. Without language and bi-directional communication you cannot assume that the other person knows which object you are pointing at. Additionally you will find no perfect circles in nature to point to, so you absolutely cannot communicate the mathematical concept of a circle in this manner.
you will find no perfect circles in nature to point to, so you absolutely can not communicate the mathematical concept of a circle in this manner.
Correct. But each and every one of us has to arrive at the concept of the circle by his own effort.
The real world, the so-called realm of the physical, is real because we think about it. Without thoughts it would not be real to us (which is the only thing we need to consider). Thoughts are therefore an indispensable part of what we call real. | {"url":"https://www.wmbriggs.com/post/6336/","timestamp":"2024-11-09T19:15:53Z","content_type":"text/html","content_length":"178979","record_id":"<urn:uuid:f8184625-c8fc-4db5-8200-3400004e4e50>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00705.warc.gz"} |
Pi, Piem and Piphilology - Maths from the Past
Pi, Piem and Piphilology
Be aware, there are audio recordings of our project team members reading out the poems in this article!
Image credit: Ilia Tafazoly via ChatGPT.
As mentioned in the Story of Pi and Zu Chongzhi article, for over 4000 years, mathematicians around the world have sought to extend our understanding of 𝜋 by calculating its value to a higher level
of accuracy.
Until the 20th century, the number of digits of 𝜋 that mathematicians had the grit to compute by hand remained in the hundreds, so memorising all the digits known at the time was possible. And
because people all around the world were interested, piphilology was born.
The word “piphilology” itself is a play on the word “pi” and “philology” (the study of language in oral and written historical sources). It comprises the creation and use of mnemonic techniques to
remember a span of digits of 𝜋. In this article, we will have a look at some of these techniques and the mnemonics created under them.
Among all the mnemonic techniques for memorising 𝜋, the most common one is probably the use of so-called piems (formed by combining the words “pi” and “poem”). Piems are poems that represent 𝜋 in a
way such that the length of each word in letters represents the corresponding digit of 𝜋 in order. Here is an early example of piem:
“Now I need a drink, alcoholic of course, after the heavy lectures involving quantum mechanics.”
— Sir James Hopwood Jeans
Note how the first word has 3 letters, the second has 1 letter, the third has 4 letters, the fourth has 1 letter, the fifth has 5 letters, and so on. Such style of constrained writing is sometimes
called the Pilish writing.
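To make the encoding concrete, here is a minimal Python sketch of a Pilish decoder (the function name `pilish_digits` and the treatment of apostrophes are my own choices, not a standard); it also applies the extended convention described later in this article, where a 10-letter word stands for the digit 0:

```python
import re

def pilish_digits(text):
    """Turn Pilish text into a digit string: each word's letter count
    gives one digit; a 10-letter word encodes 0; longer words give
    consecutive digits (e.g. a 13-letter word encodes 1, 3)."""
    digits = []
    for word in re.findall(r"[A-Za-z']+", text):
        n = len(word.replace("'", ""))  # apostrophes do not count as letters
        digits.append("0" if n == 10 else str(n))
    return "".join(digits)

piem = ("Now I need a drink, alcoholic of course, "
        "after the heavy lectures involving quantum mechanics.")
print(pilish_digits(piem))  # prints 314159265358979
```

Running it on Jeans’s piem above reproduces the first fifteen digits of 𝜋.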
Evidently, short poems like this don’t take us very far down 𝜋’s infinite road; rather, they are intended more as amusing verses composed in irregular rhythm. One of the most ironic examples that can
illustrate this is a piem which includes twenty decimal digits:
“How I wish I could enumerate pi easily, since all these bullshit mnemonics prevent recalling any of pi’s sequence more simply.”
— Peter M. Brigham
A logologist named Dmitri Borgmann composed a 30-word piem in his book, Language on Vacation: An Olio of Orthographical Oddities.
Now, a moon, a lover refulgent in flight,
Sails the black silence’s loneliest ellipse.
Computers use pi, the constant, when polite,
Or gentle data for sad tracking aid at eclipse.
— Dmitri Borgmann
A perhaps more poetic piem.
Sir, I bear a rhyme excelling
In mystic force and magic spelling
Celestial sprites elucidate
All my own striving can’t relate
Or locate they who can cogitate
And so finally terminate.
— Unknown
Image credit: Ilia Tafazoly via ChatGPT.
It is notable that many published piems use truncation instead of any rounding at the closing end. This can lead to less accurate mathematical results when the first omitted digit is greater than or equal to five. The upside is that anyone who later decides to learn more decimal places of 𝜋 is guaranteed not to have memorised a misleadingly rounded final digit from the piem. To illustrate this trend of piems, here comes an example:
“How I wish I could recollect, of circle round, the exact relation Archimede unwound.”
— Unknown
This piem gives 3.1415926535897, with the next digit of 𝜋, 9, truncated.
In some piems, there are some variations on the Pilish constraints. For example:
The following piem uses the separation of the poem’s title and main body to represent the decimal point.
Pie

I wish I could determine pi
Eureka, cried the great inventor
Christmas pudding, Christmas pie
Is the problem’s very center.
— Unknown
Below in the 35-word piem by David Saul, the word “nothing” is used to represent the digit zero.
It’s a fact
A ratio immutable
Of circle round and width,
Produces geometry’s deepest conundrum.
For as the numerals stay random,
No repeat lets out its presence,
Yet it forever stretches forth.
Nothing to eternity.
Note that the piem is laid out as a circle to provide some clue to the readers as to the purpose of the poem.
In lengthier Pilish writings, 10-letter words are used to represent the digit zero, and words of more than 10 letters are used to represent consecutive digits. For example, a 13-letter word, such as
“piphilologist”, represents the digits 1,3. These rules are broadly used in a short story, “Cadaeic Cadenza”, which records the first 3,834 digits of 𝜋, as well as in a 10,000-word novel, Not A Wake.
The former held the record for the longest Pilish text from 1996 to 2010, until the publication of the latter. Both of these pieces are written by Mike Keith.
People around the world have shown remarkable creativity in composing piems. You can find piems in many different languages here and here.
If you feel like formulating your own piem, have a go with the Pilish checker here!
Other aspects of piphilology
A song titled “I am the first 50 digits of pi” was created in 2004 by Andrew Huang to serve as a mnemonic for the first fifty digits of 𝜋. The first line goes:
Man, I can’t – I shan’t! – formulate an anthem where the words comprise mnemonics, dreaded mnemonics for pi.
In 2013, Huang extended the song to include the first 100 digits of 𝜋, and changed the title to “Pi Mnemonic Song”. This song can be accessed here.
Larger memorisations of 𝜋
As modern super computers calculate 𝜋 with significantly higher accuracy, reaching 100 trillion in March 2022, people began to memorise more and more of the outputs. The world record for the number
of digits of 𝜋 memorised had exploded since the mid-1990s: the record certified by Guinness World Records is 70,000 digits, recited in India by Rajveer Meena on 21 March 2015. In 2006, a retired Japanese
engineer, Akira Haraguchi, claimed to have recited 100,000 decimal places, but this was not verified by Guinness World Records. For such large memorisations of 𝜋, poems have proven to be rather inefficient. Methods typically used by the record-setting 𝜋 memorisers include remembering patterns in the numbers (for instance, the year 1971 appears in the first fifty digits of 𝜋) and the method of loci.
Recent decades have seen a surge in the record number of digits of 𝜋 memorised.
Image credit: Keenan Pepper, public domain via Wikimedia Commons.
Yansong Li
Arndt, Jörg and Haenel, Christoph, Pi – Unleashed (Heidelberg, 2001).
Borgmann, Dmitri Alfred, Language on Vacation: An Olio of Orthographical Oddities (New York, 1965).
Danesi, Marcel, Pi (π) in Nature, Art, and Culture (Leiden, 2020).
Fun-with-words.com, ‘Mnemonic Techniques for Numbers’, Mnemonics for Remembering Numbers, Website, <http://www.fun-with-words.com/mnem_numbers.html> [accessed 13 March 2023].
Hatzipolakis, Antreas P., ‘PiPHILOLOGY’, Faculdade de Engenharia de Universidade do Porto, Website, <https://paginas.fe.up.pt/~fsilva/port/pi2.html> [accessed 13 March 2023].
Hawkesworth, Alan S., ‘Two Mnemonics’, The American Mathematical Monthly, 38: 3 (March 1931), p. 158.
Huang, Andrew (@andrewhuang), ‘PI MNEMONIC SONG’, YouTube, 14 March 2013, <https://www.youtube.com/watch?v=EaDm9G4Ig18&ab_channel=ANDREWHUANG> [accessed 13 March 2023].
Iyer, Ramdas and Srivastava, Bharat Bhushan, ‘Unending journey of pi’, Science Reporter, 47: 4 (April 2010), pp. 40-44.
Keith, Mike, Not A Wake: A Dream Embodying π’s Digits Fully For 10000 Decimals (Vinculum Press, 2010).
Keith, Mike, ‘Cadaeic Cadenza’, cadaeic, Website, <http://cadaeic.net/cadenza.htm> [accessed 13 March 2023].
Keith, Mike, ‘Writing in Pilish’, cadaeic, Website, <http://www.cadaeic.net/pilish.htm> [accessed 13 March 2023].
Mackay, J. S., ‘Mnemonics for 𝜋, 1/𝜋, e’, Proceedings of the Edinburgh Mathematical Society, 3 (February 1884), pp. 105-107.
Nowlan, Robert A., Masters of Mathematics (Rotterdam, 2017).
O’Connor, John Joseph and Robertson, Edmund Frederick, ‘Zu Chongzhi’, MacTutor History of Mathematics Archive, Website, <https://mathshistory.st-andrews.ac.uk/HistTopics/Pi_through_the_ages/>
[accessed 13 March 2023].
Raz, Amir, Packard, Mark G., Alexander, Gerianne M., Buhle, Jason T., Zhu, Hongtu, Yu, Shan and Peterson, Bradley S., ‘A slice of π: An exploratory neuroimaging study of digit encoding and retrieval in a superior memorist’, Neurocase, 15: 5 (2009), pp. 361-372.
Saul, David, Somewhen (Somewhen Publishing, 2011).
Walder, Peter, ‘Pilish Checker’, Valhalla Consulting, Website, <http://www.valhallaconsulting.com.au/PilishChecker.html> [accessed 13 March 2023].
| {"url":"https://maths-from-the-past.org/pi-piem-and-piphilology-2/","timestamp":"2024-11-04T05:11:12Z","content_type":"text/html","content_length":"202117","record_id":"<urn:uuid:11250ebb-0cf0-4887-ba3a-a30e42d9bce7>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00722.warc.gz"} |
What is a good estimate?
Some of my students asked me how good an estimate is and how we measure that.
Let us first consider the difference between an estimate and an estimator. We are all familiar with the following formula for the sample variance:

$$s^{2} = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_{i}-\bar{x}\right)^{2}$$

The above formula is an estimator. It allows us to input our collected data and obtain a value for $s^{2}$; that value is the estimate.
So there are definitely more factors that contribute to a good estimate; unbiasedness is one. The next in line is consistency. We ask whether the estimator is consistent, that is, whether the estimate tends to the true value as n tends to infinity.
This is really intuitive, as I explained in class just yesterday. Imagine you are capable of obtaining the weight of EVERY SINGLE person in Singapore; then you will definitely be able to find the TRUE
mean weight (granted ladies do not underestimate their weight). Thus, the value found will be consistent with the true value.
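To see consistency in action, here is a small, purely illustrative simulation (the true mean and spread of weights are made-up numbers, and NumPy is assumed to be available):

```python
import numpy as np

rng = np.random.default_rng(seed=1)
true_mean = 65.0  # hypothetical true mean weight in kg
weights = rng.normal(true_mean, 12.0, size=5_000_000)  # stand-in "population"

# The sample mean tends to the true mean as n grows: consistency.
for n in (10, 1_000, 100_000, 5_000_000):
    print(n, round(weights[:n].mean(), 3))
```

As n grows toward the size of the whole population, the sample mean settles on the true mean, which is exactly the consistency property described above.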
| {"url":"http://theculture.sg/2015/08/what-is-a-good-estimate/","timestamp":"2024-11-02T18:27:18Z","content_type":"text/html","content_length":"102262","record_id":"<urn:uuid:ac2a95d6-8a25-4525-97d7-d21d0b8687ed>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00094.warc.gz"} |
Hello Vera Community,
Has anyone ever worked the new board Z-Uno for any project.
Is it compatible with the Vera Edge.
No, but it looks quite interesting. There’s already a plugin which accesses Arduino-based sensors Vera Controller | MySensors - Create your own Connected Home Experience, and two under development
to access the ZWave.me Razberry board or their UZB stick via openLuup: http://forum.micasaverde.com/index.php/topic,38297.0.html and http://forum.micasaverde.com/index.php/topic,39261.0.html
So a plugin which was sort of cross between the two would make an interesting addition to the options we have for accessing ZWave devices.
They indicate Vera compatibility on their site.
If you’re familiar with Arduino, I’d say go for it. I certainly will as soon as(if) they are available in the U.S. But, if you’re not really familiar with Arduino, this is probably not for you.
Z-Uno is a very useful device and full Vera support will be great.
All popular sensor types are supported by Vera. But exotic pressure/distance/radiation/… sensors are not visible in Vera.
Now it is available. Check https://z-uno.us
I think now once it is available in US, there would be more Vera Edge/Plus users and MiCasaVerde will make a better support of Z-Uno.
[quote]All popular sensor types are supported by Vera. But exotic pressure/distance/radiation/… sensors are not visible in Vera.
Now it is available. Check https://z-uno.us
I think now once it is available in US, there would be more Vera Edge/Plus users and MiCasaVerde will make a better support of Z-Uno.[/quote]
And Vera needs to get more interoperability!
[quote=“DylanBalloo, post:1, topic:193803”]Hello Vera Community,
Has anyone ever worked with the new board Z-Uno for any project?
Is it compatible with the Vera Edge.
Good evening All,
Has anyone started working with the Z-Uno card yet?
I have just placed an order for 1, hopefully my lack of experience and programming skills, won’t cripple me.
Does one need a specific Plug-In for this unit.
No plugin needed.
Hopefully the mods can start a separate group for this.
You can also read some discussion on Z-UNO here;
some on the Home Seer forums here:
[url=https://forums.homeseer.com/showthread.php?p=1315013#post1315013]Z-Uno - Z-Wave + Arduino with HS3 - HomeSeer Message Board
Technical and further questions can be raised here:
[url=http://forum.z-wave.me/viewforum.php?f=3427&sid=1c8cea40e5e4cceb63e174747b0e11c5]Z-Uno - forum
You will find that you are only limited by your imagination with this device, I’m just scratching the surface and see unlimited possibilities and look forward to reading how others are using ZUNO.
not to poo poo on anyone who has a legitimate use for this device… but i don’t see the appeal at $76
you can get a nodemcu for like $2-3
this appears to be zwave + arduino = $76
nodemcu is wifi+arduino = $3
what am i missing?
Market volume?
Same could be said about ZigBee; horses for courses, and time will tell.
[quote=“mvader, post:9, topic:193803”]not to poo poo on anyone who has a legitimate use for this device… but i don’t see the appeal at $76
you can get a nodemcu for like $2-3
this appears to be zwave + arduino = $76
nodemcu is wifi+arduino = $3
what am i missing?[/quote]
I fully agree with you about the price, I wasn’t aware of Nodemcu, I will keep it in mind.
I chose the Z-Uno because I am not a programmer or an engineer, and I wanted simple access to Z-Wave without having to learn all the Z-Wave stuff.
What most forget is that this is native Z-Wave not WiFi, 433 or other communication frequencies and protocols.
If you want a development board to integrate directly without serial adaptors or plugins then this cuts other devices out of the loop.
With the WiFi spectrum becoming flooded, it’s easy to forget that WiFi devices and their ilk may not function correctly, while Z-Wave on its dedicated frequency will.
I have several projects for this in mind, one of which involves communication with a MegaA. To me that is combining the best of both worlds.
[quote=“zedrally, post:13, topic:193803”]What most forget is that this is native Z-Wave not WiFi, 433 or other communication frequencies and protocols.
If you want a development board to integrate directly without serial adaptors or plugins then this cuts other devices out of the loop.
With the WiFi spectrum becoming flooded, it’s easy to forget that WiFi devices and their ilk may not function correctly, while Z-Wave on it’s dedicated frequency will.
I have several projects for this in mind, one of which involves communication with a MegaA. To me that is combining the best of both worlds.[/quote]
if this is a product that you are trying to sell or get a kick back from, then i understand and i’m not trying to be a hater.
but to me this is very expensive and you can accomplish the same task far cheaper. for example, i’m putting in some under cabinet lighting.
I considered a native zwave solution, using the Fibaro RGB device, it’s $70. i purchased a node mcu for $6 and am achieving the same results.
I personally don’t have any wifi reliability issues, but I recognize that is subjective to the environment. but i have dozens of wifi devices, as well as zwave, 433 and other RF devices. I really
don’t have any issues with them.
I would certainly prefer to have a native zwave solution. but the cost to implement vs going wifi is substantial. If you have the extra money then great.
but when you can buy 10 esp based arduino devices for the cost of 1 of these.
if the cost was much less (and i know that zwave chips are more expensive) then it would be more ideal.
[quote=“mvader, post:14, topic:193803”]snip
but when you can buy 10 esp based arduino devices for the cost of 1 of these.
if the cost was much less (and i know that zwave chips are more expensive) then it would be more ideal.[/quote]
And this is the beauty of this device, you can have up to 10 devices all rolled into one.
USD$7.60 is pretty cheap per device.
[quote=“zedrally, post:15, topic:193803”][quote=“mvader, post:14, topic:193803”]snip
but when you can buy 10 esp based arduino devices for the cost of 1 of these.
if the cost was much less (and i know that zwave chips are more expensive) then it would be more ideal.[/quote]
And this is the beauty of this device, you can have up to 10 devices all rolled into one.
USD$7.60 is pretty cheap per device.[/quote]
ahh… in certain situations, that may be an ideal solution (kitchen lighting for example). so i guess i can see a use case for this.
[quote=“mvader, post:14, topic:193803”][quote=“zedrally, post:13, topic:193803”]What most forget is that this is native Z-Wave not WiFi, 433 or other communication frequencies and protocols.
If you want a development board to integrate directly without serial adaptors or plugins then this cuts other devices out of the loop.
With the WiFi spectrum becoming flooded, it’s easy to forget that WiFi devices and their ilk may not function correctly, while Z-Wave on it’s dedicated frequency will.
I have several projects for this in mind, one of which involves communication with a MegaA. To me that is combining the best of both worlds.[/quote]
if this is a product that you are trying to sell or get a kick back from, then i understand and i’m not trying to be a hater.
but to me this is very expensive and you can accomplish the same task far cheaper. for example, i’m putting in some under cabinet lighting.
I considered a native zwave solution, using the fibario rgb device, it’s $70. i purchased a node mcu for $6 and am achieving the same results.
I personally don’t have any wifi reliability issues, but I recognize that is subjective to the environment. but i have dozens of wifi devices, as well as zwave, 433 and other RF devices. I really
don’t have any issues with them.
I would certainly prefer to have a native zwave solution. but the cost to implement vs going wifi is substantial. If you have the extra money then great.
but when you can buy 10 esp based arduino devices for the cost of 1 of these.
if the cost was much less (and i know that zwave chips are more expensive) then it would be more ideal.[/quote]
In reading your response, I think you miss the point of the Z-Uno. I would expect the learning curve of the Arduino / Z-Wave combination to be a lot less intense than having to learn all these different protocols to accomplish some of the same functions achieved by the Z-Uno.
Yes, I agree, cost is a deterrent, but I believe the Z-Uno, is made in order to simplify, and package everything into a small unit, and made to satisfy a demand by the hobbyist and not the engineers
or knowledgeable programmers.
And no, I do not get a kick back from Z-Wave.
[quote=“poordom, post:17, topic:193803”]In reading your response, I think you miss the point of the Z-uno
sure - back on page 1, my first comment was “what am i missing”.
[quote="poordom"]With the WiFi spectrum becoming flooded, it’s easy to forget that WiFi devices and their ilk may not function correctly, while Z-Wave on its dedicated frequency will.
I would expect the learning curve of the Arduino / Z-Wave combination to be a lot less intense than having to learn all these different protocols to accomplish some of the same functions achieved by the Z-Uno.
Your comments read like everything else will fail and is too involved and too complicated to be bothered with.
this z-uno is easier than other options and will never fail.
at least that is how it come across to me. thus my question about why your showing such support for a $70 device that (in my opinion) has limited use case for the money.
it was pointed out to me you could run several applications off of 1 z-uno, and that is something i didn’t think about… so in that specific use case.
to have native z-wave, i agree. a great option.
however. beyond that, if you have different applications all over your house, say automated blinds in every room.
you won’t put a $70 z-uno on each and every blind, just for the sake of having native z-wave, when you can put a $6 esp based wifi device on each one and achieve the same effect.
if you have a use case where 1 device will do everything in 1 room, then great. it seems like a fine choice. but for me, most of my dozens of situations will get low cost nrf radios or esp based
radios on arduinos. the programming is the same regardless of which radio i choose.
Good afternoon Mvader,
" Your comments read like everything else will fail "
I have had home automation of one kind or another for the last 30 years, hate to say this, but everything will fail at one time or another, doesn’t matter what it is.
I can’t be too explicit about the Z-Uno, since I haven’t received mine yet, but I would see the Z-Uno located in a central location, where analog sensors could be connected, and have the Z-Uno act as an act-and-react device.
From what I understand the Z-Uno can be integrated to my Vera Plus, quite easily, I hope I am right about all this, we shall see.
Here is a screenshot of the 10-channel Z-UNO; I have been asked what it looks like in Vera but cannot post attachments in PM’s. | {"url":"https://community.ezlo.com/t/z-uno/193803","timestamp":"2024-11-02T08:33:55Z","content_type":"text/html","content_length":"53452","record_id":"<urn:uuid:d68b6b3c-6564-4264-ab1b-1bb28e75652b>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00070.warc.gz"} |
WP 34S calculates "almost integers" (fun result)
10-14-2014, 05:09 PM
(This post was last modified: 10-14-2014 05:15 PM by Peter Van Roy.)
Peter Van Roy
The WP 34S can calculate a famous "almost integer", exp(pi * sqrt(163)), which is sometimes called Ramanujan's number. In double precision mode (34 digits), a simple calculation returns:

262537412640768743.9999999999992501

The actual answer (according to Wikipedia) is:

262537412640768743.99999999999925007…
No other handheld calculator in the world can do this, AFAIK.
PS: The Wikipedia entry gives exp(pi * sqrt(67)) as another almost integer, but I find that exp(pi * sqrt(58)) is of higher quality: it gives an error almost an order of magnitude smaller. Just a few
minutes with my WP 34S to find the best "almost integers" of the form exp(pi * sqrt(n)).
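For readers without the calculator at hand, here is a rough Python sketch using the mpmath library (assumed installed; this reproduces the search, not the WP 34S keystrokes):

```python
from mpmath import mp, exp, pi, sqrt, nint

mp.dps = 40  # a little more precision than the WP 34S's 34 digits

print(exp(pi * sqrt(163)))  # 262537412640768743.99999999999925007...

# Rank n by how close exp(pi * sqrt(n)) is to an integer.
def error(n):
    value = exp(pi * sqrt(n))
    return abs(value - nint(value))

for n in sorted(range(1, 200), key=error)[:5]:
    print(n, mp.nstr(error(n), 3))
```

The loop confirms the ordering claimed above, with n = 163 far ahead of n = 58 and n = 67.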
| {"url":"https://hpmuseum.org/forum/showthread.php?tid=2292&pid=20279&mode=threaded","timestamp":"2024-11-08T01:59:09Z","content_type":"application/xhtml+xml","content_length":"18963","record_id":"<urn:uuid:6a46eb37-0148-49c9-8b64-11acff18962c>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00815.warc.gz"} |
2: How Do Organizations Identify Cost Behavior Patterns?
Eric Mendez is the chief financial officer (CFO) of Bikes Unlimited, a company that produces mountain bikes and sells them to retail bicycle stores. Bikes Unlimited obtains the bulk of its parts from
outside suppliers and assembles them into the mountain bikes prior to shipment. Last month (June), Bikes Unlimited sold 5,000 mountain bikes for $100 each. Last month’s income statement shows total
revenue of $500,000 and operating profit of $50,000.
Susan Wesley is Bikes Unlimited’s cost accountant. Planning for July was completed during June. Senior management is now planning for next month (August) and has asked Eric, the CFO, to obtain some
vital financial information for budgeting purposes. Eric arranged a meeting with Susan to discuss the August budget.
Eric: As you know, we are in the middle of our planning for next month. The senior management group asked me to make some projections based on expected changes to our sales next month.
Susan: Where do you think sales are headed?
Eric: We expect unit sales to increase 10 percent, perhaps 20 percent if all goes well.
Susan: If sales increase 10 percent, I would expect profit to increase by more than 10 percent since some costs are fixed.
Eric: Sounds reasonable. What’s the next step to get a reasonable estimate of profit?
Susan: First, we have to identify how costs behave with changes in sales and production—whether the costs are variable, fixed, or some other type. Then we can set up the income statement in a contribution margin format and determine if the numbers are within the relevant range.
Eric: Perhaps you and your staff can discuss this and get me some accurate estimates.
Susan: I’ll meet with them tomorrow and should have some information for you within a few days.
2.1 Cost Behavior Patterns
Learning Objective

1. Identify typical cost behavior patterns.
Question: To predict what will happen to profit in the future at Bikes Unlimited, we must understand how costs behave with changes in the number of units sold (sales volume). Some costs will not
change at all with a change in sales volume (e.g., monthly rent for the production facility). Some costs will change with a change in sales volume (e.g., materials for the mountain bikes). What are
the three cost behavior patterns that help organizations identify which costs will change and which will remain the same with changes in sales volume?
Answer: The three basic cost behavior patterns are known as variable, fixed, and mixed. Each of these cost patterns is described next.
Variable Costs
Question: We know that some costs vary with changes in activity. What do we call this type of cost behavior?
Answer: This cost behavior pattern is called a variable cost. A variable cost describes a cost that varies in total with changes in volume of activity. The activity in this example is the number of
bikes produced and sold. However, the activity can take many different forms depending on the organization. The two most common variable costs are direct materials and direct labor. Other examples
include indirect materials and energy costs.
Assume the cost of direct materials (wheels, seats, frames, and so forth) for each bike at Bikes Unlimited is $40. If Bikes Unlimited produces one bike, total variable cost for direct materials
amounts to $40. If Bikes Unlimited doubles its production to two bikes, total variable cost for direct materials also doubles to $80. Variable costs typically change in proportion to changes in
volume of activity. If volume of activity doubles, total variable costs also double, while the cost per unit remains the same. It is important to note that the term variable refers to what happens to
total costs with changes in activity, not to the cost per unit.
Taking it one step further for Bikes Unlimited, let’s consider all variable costs related to production. Assume direct materials, direct labor, and all other variable production costs amount to $60
per unit. Table 2.1 “Variable Cost Behavior for Bikes Unlimited” provides the total and per unit variable costs at three different levels of production, and Figure 2.1 “Total Variable Production
Costs for Bikes Unlimited” graphs the relation of total variable costs (y-axis) to units produced (x-axis). Note that the slope of the line represents the variable cost per unit of $60 (slope =
change in variable cost ÷ change in units produced).
Table 2.1 Variable Cost Behavior for Bikes Unlimited
Units Produced Total Variable Costs Per Unit Variable Cost
1 $ 60 $60
2,000 120,000 60
4,000 240,000 60
Figure 2.1 Total Variable Production Costs for Bikes Unlimited
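The proportional behavior in Table 2.1 can be expressed in a few lines of code. The following Python sketch is illustrative only (the function name is mine, not part of the chapter):

```python
VARIABLE_COST_PER_UNIT = 60  # dollars of variable production cost per bike

def total_variable_cost(units_produced):
    return VARIABLE_COST_PER_UNIT * units_produced

for units in (1, 2_000, 4_000):
    total = total_variable_cost(units)
    print(units, total, total / units)  # per unit cost stays at $60
```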
Using Different Activities to Measure Variable Costs
Question: At Bikes Unlimited, it is reasonable to assume that the activity, number of units produced, will affect total variable costs for direct materials and direct labor. However, companies often
use a different activity to estimate total variable costs. What types of activities might be used to estimate variable costs?
Answer: The type of activity used to estimate variable costs depends on the cost. For example, a law firm might use the number of labor hours to estimate labor costs. An airline such as American
Airlines might use hours of flying time to estimate fuel costs. A mail delivery service such as UPS might use the number of packages processed to estimate labor costs associated with sorting
packages. A retail store such as Best Buy might use sales dollars to estimate cost of goods sold.
Variable costs are affected by different activities depending on the organization. The goal is to find the activity that causes the variable cost so that accurate cost estimates can be made.
Fixed Costs
Question: Costs that vary in total with changes in activity are called variable costs. What do we call costs that remain the same in total with changes in activity?
Answer: This cost behavior pattern is called a fixed cost. A fixed cost describes a cost that is fixed (does not change) in total with changes in volume of activity. Assuming the activity is the
number of bikes produced and sold, examples of fixed costs include salaried personnel, building rent, and insurance.
Assume Bikes Unlimited pays $8,000 per month in rent for its production facility. In addition, insurance for the same building is $2,000 per month and salaried production personnel are paid $6,000
per month. All other fixed production costs total $4,000. Thus Bikes Unlimited has total fixed costs of $20,000 per month related to its production facility (= $8,000 + $2,000 + $6,000 + $4,000). If
only one bike is produced, Bikes Unlimited still must pay $20,000 per month. If 5,000 bikes are produced, Bikes Unlimited still pays $20,000 per month. The fixed costs remain unchanged in total as
the level of activity changes.
Question: What happens to fixed costs on a per unit basis as production levels change?
Answer: If Bikes Unlimited only produces one bike, the fixed cost per unit would amount to $20,000 (= $20,000 total fixed costs ÷ 1 bike). If Bikes Unlimited produces two bikes, the fixed cost per
unit would be $10,000 (= $20,000 ÷ 2 bikes). As activity increases, the fixed costs are spread out over more units, which results in a lower cost per unit.
Table 2.2 “Fixed Cost Behavior for Bikes Unlimited” provides the total and per unit fixed costs at three different levels of production, and Figure 2.2 “Total Fixed Production Costs for Bikes
Unlimited” graphs the relation of total fixed costs (y-axis) to units produced (x-axis). Note that regardless of the activity level, total fixed costs remain the same.
Table 2.2 Fixed Cost Behavior for Bikes Unlimited
Units Produced Total Fixed Costs Per Unit Fixed Cost
1 $20,000 $20,000
2,000 20,000 10
4,000 20,000 5
Figure 2.2 Total Fixed Production Costs for Bikes Unlimited
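A similar sketch, again purely illustrative, shows the $20,000 of fixed costs being spread over more units as production rises:

```python
TOTAL_FIXED_COST = 20_000  # dollars per month, unaffected by volume

for units in (1, 2_000, 4_000):
    print(units, TOTAL_FIXED_COST, round(TOTAL_FIXED_COST / units, 2))
# Per unit fixed cost falls from $20,000 to $10 to $5 as volume rises.
```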
Business in Action 2.1
United Airlines Struggles to Control Costs
United Airlines is the second largest air carrier in the world. It has hubs in Chicago, Denver, Los Angeles, San Francisco, and New York and flies to 109 destinations in 23 countries. Destinations
include Tokyo, London, and Frankfurt.
Back in 2002, United filed for bankruptcy. Industry analysts reported that United had relatively high fixed costs, making it difficult for the company to cut costs quickly in line with its reduction
in revenue. A few years later, United emerged from bankruptcy, and in 2010 merged with Continental Airlines. Although financial information was presented separately for each company (United and
Continental) in 2010, both companies are now owned by United Continental Holdings, Inc. The following podcast discusses information from United Airlines’ income statement for the year ended December 31, 2019 (amounts are in millions). Review this information carefully. Which costs are likely to be fixed?
Fixed costs are clearly a large component of total operating expenses, which makes it difficult for airline companies like United Airlines to make short-term cuts in expenses when revenue declines.
Committed Versus Discretionary Fixed Costs
Question: Organizations often view fixed costs as either committed or discretionary. What is the difference between these two types of fixed costs?
Answer: A committed fixed cost is a fixed cost that cannot easily be changed in the short run without having a significant impact on the organization. For example, assume Bikes Unlimited has a
five-year lease on the company’s production facility, which costs $8,000 per month. This is a committed fixed cost because the lease cannot easily be broken, and the company is committed to using
this facility for years to come. Other examples of committed fixed costs include salaried employees with long-term contracts, depreciation on buildings, and insurance.
A discretionary fixed cost is a fixed cost that can be changed in the short run without having a significant impact on the organization. For example, assume Bikes Unlimited contributes $10,000 each
year toward charitable organizations. Management has the option of changing this amount in the short run without causing a significant impact on the organization. Other examples of discretionary
fixed costs include advertising, research and development, and training programs (although an argument can be made that reducing these expenditures could have a significant impact on the company
depending on the amount of the cuts).
In general, management looks to cut discretionary fixed costs when sales and profits are declining, since cuts in this area tend not to have as significant an impact on the organization as cutting
committed fixed costs. Difficulties arise when struggling organizations go beyond cutting discretionary fixed costs and begin looking at cutting committed fixed costs.
Mixed Costs
Question: We have now learned about two types of cost behavior patterns—variable costs and fixed costs. However, there is a third type of cost that behaves differently in that both total and per unit
costs change with changes in activity. What do we call this type of cost?
Answer: This cost behavior pattern is called a mixed cost. The term mixed cost describes a cost that has a mix of fixed and variable costs. For example, assume sales personnel at Bikes Unlimited are
paid a total of $10,000 in monthly salary plus a commission of $7 for every bike sold. This is a mixed cost because it has a fixed component of $10,000 per month and a variable component of $7 per
Table 2.3 “Mixed Cost Behavior for Bikes Unlimited” provides the total and per unit fixed costs at three different levels of production, and Figure 2.3 “Total Mixed Sales Compensation Costs for Bikes
Unlimited” graphs the relation of total mixed costs (y-axis) to units produced (x-axis). The point at which the line intersects the y-axis represents the total fixed cost ($10,000), and the slope of
the line represents the variable cost per unit ($7).
Table 2.3 Mixed Cost Behavior for Bikes Unlimited
Units Sold Total Mixed Costs Per Unit Mixed Cost
1 $10,007 $10,007.00
2,000 24,000 12.00
4,000 38,000 9.50
Figure 2.3 Total Mixed Sales Compensation Costs for Bikes Unlimited
Because this cost is depicted with a straight line, we can use the equation for a straight line to describe a mixed cost:
Key Equation
Total mixed cost = Total fixed cost + (Unit variable cost × Number of units)
Y = f + vX
Y = total mixed costs (this is the y-axis in Figure 2.3 “Total Mixed Sales Compensation Costs for Bikes Unlimited”)
f = total fixed costs
v = variable cost per unit
X = level of activity (this is the x-axis in Figure 2.3 “Total Mixed Sales Compensation Costs for Bikes Unlimited”)
For Bikes Unlimited, the mixed cost equation is Y = $10,000 + $7X. If Bikes Unlimited sells 4,000 bikes (X) in one month, the total mixed cost (Y) for sales personnel compensation would be $38,000 [=
$10,000 + ($7 × 4,000 units)].
In math class, you may have learned this same equation where Y = mX + B is the equation of a line. Further, you learned that m = slope of that line and B = the Y intercept. This is really the same
concept as above with slope equal to the variable cost per unit and y intercept equal to the total fixed costs.
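As an illustrative sketch, the mixed cost equation translates directly into code, and the $38,000 example above falls out immediately:

```python
def total_mixed_cost(fixed, variable_per_unit, units):
    """Y = f + vX"""
    return fixed + variable_per_unit * units

# Sales compensation at Bikes Unlimited for 4,000 bikes sold:
print(total_mixed_cost(10_000, 7, 4_000))  # 38000
```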
Short Term Versus Long Term and the Relevant Range
We now introduce two important concepts that must be considered when estimating costs: short term versus long term, and the relevant range.
Short Term Versus Long Term
Question: When identifying cost behavior patterns, we assume that management is using the cost information to make short-term decisions. Why is this short-term decision-making assumption so important?
Answer: Variable, fixed, and mixed cost concepts are useful for short-term decision making and therefore apply to a specific period of time. This short-term period will vary depending on the
company’s current production capacity and the time required to change capacity. In the long term, all cost behavior patterns will likely change.
For example, suppose Bikes Unlimited’s production capacity is 8,000 units per month, and management plans to expand capacity in two years by renting a new production facility and hiring additional
personnel. This is a long-term decision that will change the cost behavior patterns identified earlier. Variable production costs will no longer be $60 per unit, fixed production costs will no longer
be $20,000 per month, and mixed sales compensation costs will also change. All these costs will change because the estimates are accurate only in the short term.
The Relevant Range
Question: Another important concept we use when estimating costs is called the relevant range. What is the relevant range and why is it so important when estimating costs?
Answer: The relevant range is the range of activity for which cost behavior patterns are likely to be accurate. The variable, fixed, and mixed costs identified for Bikes Unlimited will only be
accurate within a certain range of activity. Once the firm goes outside that range, cost estimates are not necessarily accurate and often must be reevaluated and recalculated.
For example, assume Bikes Unlimited’s mixed sales compensation cost of $10,000 per month plus $7 per unit is only valid up to 4,000 units per month. If unit sales increase beyond 4,000 units,
management will hire additional salespeople and the total monthly base salary will increase beyond $10,000. Thus the relevant range for this mixed cost is from zero to 4,000 units. Once the company
exceeds sales of 4,000 units per month, it is out of the relevant range, and the mixed cost must be recalculated.
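One way to respect the relevant range in a simple model is to refuse to extrapolate outside it. The guard below is an illustrative choice of mine, not something the chapter prescribes:

```python
def sales_compensation_cost(units_sold):
    """Mixed cost estimate; only valid inside the relevant range."""
    if not 0 <= units_sold <= 4_000:
        raise ValueError("outside the relevant range; re-estimate the equation")
    return 10_000 + 7 * units_sold

print(sales_compensation_cost(3_500))  # 34500
```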
We discuss the relevant range concept in more detail later in the chapter. For now, remember that the accuracy of cost behavior patterns is limited to a certain range of activity called the relevant
How Cost Behavior Patterns Are Used
Question: How do managers use cost behavior patterns to make better decisions?
Answer: Accurately predicting what costs will be in the future can help managers answer several important questions. For example, managers at Bikes Unlimited might ask the following:
• We expect to see a 5 percent increase in unit sales next year. How will this affect revenues and costs?
• We are applying for a loan with a bank, and bank managers think our sales estimates are high. What happens to our revenues and costs if we lower estimates by 20 percent?
• What happens to revenues and costs if we add a racing bike to our product line?
• How will costs behave in the future if we increase automation in the production process?
The only way to accurately predict costs is to understand how costs behave given changes in activity. To make good decisions, managers must know how costs are structured (fixed, variable, or mixed).
The next section explains how to estimate fixed and variable costs, and how to identify the fixed and variable components of mixed costs.
Business in Action 2.2
Budget Cuts at an Elementary School District
A school district outside Sacramento, California, was faced with making budget cuts because of a reduction in state funding. To reduce costs, the school district’s administration decided to consider
closing one of the smaller elementary schools in the district. According to an initial estimate, closing this school would reduce costs by $500,000 to $1,000,000 per year. However, further analysis
identified only $100,000 to $150,000 in cost savings.
Why did the analysis yield lower savings than the initial estimate? Most of the costs were committed fixed costs (e.g., teachers’ salaries and benefits) and could not be eliminated in the short term.
In fact, teachers and students at the school being considered for closure were to be moved to other schools in the district, and so no savings on teachers’ salaries and benefits would result. The
only real short-term cost savings would be in not having to maintain the classrooms, computer lab, and library (nonunion employees would be let go) and in utilities (heat and air conditioning would
be turned off).
The school district ultimately decided not to close the school because of the large committed fixed costs involved, as well as a lack of community support, and budget cuts were made in other areas
throughout the district.
Review Problem 2.1

Sierra Company is trying to identify the behavior of the three costs shown in the following table. The following cost information is provided for six months. Calculate the cost per unit, and then
identify how each cost behaves (fixed, variable, or mixed). Explain your answers.
Cost 1 Cost 2 Cost 3
Month Units Produced Total Costs Cost per Unit Total Costs Cost per Unit Total Costs Cost per Unit
1 50 $100 $2.00 $100 $2.00 $100 $2.00
2 100 200 2.00 100 1.00 150 1.50
3 150 300 _____ 100 _____ 200 _____
4 200 400 _____ 100 _____ 250 _____
5 250 500 _____ 100 _____ 300 _____
6 300 600 _____ 100 _____ 350 _____
Solution to Review Problem 2.1

As shown in the following table, cost 1 is a variable cost because as the number of units produced changes, total costs change (in proportion to changes in activity) and per unit cost remains the
same. Cost 2 is a fixed cost because as the number of units produced changes, total costs remain the same and per unit costs change. Cost 3 is a mixed cost because as the number of units produced
changes, total cost changes (but not in proportion to changes in activity) and per unit cost changes.
Cost 1 Cost 2 Cost 3
Month Units Produced Total Costs Cost per Unit Total Costs Cost per Unit* Total Costs Cost per Unit*
1 50 $100 $2.00 $100 $2.00 $100 $2.00
2 100 200 2.00 100 1.00 150 1.50
3 150 300 2.00 100 0.67 200 1.33
4 200 400 2.00 100 0.50 250 1.25
5 250 500 2.00 100 0.40 300 1.20
6 300 600 2.00 100 0.33 350 1.17

*Rounded.
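The test applied in this solution (constant per-unit cost means variable, constant total cost means fixed, otherwise mixed) can be automated. A rough Python sketch, with the tolerance check being my own assumption:

```python
def classify(units, totals):
    """Label a cost as variable, fixed, or mixed from monthly observations."""
    per_unit = [t / u for t, u in zip(totals, units)]
    if all(abs(p - per_unit[0]) < 1e-9 for p in per_unit):
        return "variable"  # constant per unit cost
    if all(t == totals[0] for t in totals):
        return "fixed"     # constant total cost
    return "mixed"

units = [50, 100, 150, 200, 250, 300]
print(classify(units, [100, 200, 300, 400, 500, 600]))  # variable
print(classify(units, [100, 100, 100, 100, 100, 100]))  # fixed
print(classify(units, [100, 150, 200, 250, 300, 350]))  # mixed
```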
2.2 Cost Estimation Methods
Learning Objective
1. Estimate costs using account analysis, the high-low method, the scattergraph method, and regression analysis.
Question: Recall the conversation that Eric (CFO) and Susan (cost accountant) had about Bikes Unlimited’s budget for the next month, which is August. The company expects to increase sales by 10 to 20
percent, and Susan has been asked to estimate profit for August given this expected increase. Although examples of variable and fixed costs were provided in the previous sections, companies typically
do not know exactly how much of their costs are fixed and how much are variable. (Financial accounting systems do not normally sort costs as fixed or variable.) Thus organizations must estimate their
fixed and variable costs. What methods do organizations use to estimate fixed and variable costs?
Answer: Four common approaches are used to estimate fixed and variable costs:
• Account analysis
• High-low method
• Scattergraph method
• Regression analysis
All four methods are described next. The goal of each cost estimation method is to estimate fixed and variable costs and to describe this estimate in the form of Y = f + vX. That is, Total mixed cost
= Total fixed cost + (Unit variable cost × Number of units). Note that the estimates presented next for Bikes Unlimited may differ from the dollar amounts used previously, which were for illustrative
purposes only.
Account Analysis
Question: The account analysis approach is perhaps the most common starting point for estimating fixed and variable costs. How is the account analysis approach used to estimate fixed and variable costs?
Answer: This approach requires that an experienced employee or group of employees review the appropriate accounts and determine whether the costs in each account are fixed or variable. Totaling all
costs identified as fixed provides the estimate of total fixed costs. To determine the variable cost per unit, all costs identified as variable are totaled and divided by the measure of activity (
units produced is the measure of activity for Bikes Unlimited).
Let’s look at the account analysis approach using Bikes Unlimited as an example. Susan (the cost accountant) asked the financial accounting department to provide cost information for the production
department for the month of June (July information is not yet available). Because the financial accounting department tracks information by department, it is able to produce this information. The
production department information for June is as follows:
Susan reviewed this cost information with the production manager, Indira Bingham, who has worked as production manager at Bikes Unlimited for several years. After careful review, Indira and Susan
came up with the following breakdown of variable and fixed costs for June:
Total fixed cost is estimated to be $30,000, and variable cost per unit is estimated to be $52 (= $260,000 ÷ 5,000 units produced). Remember, the goal is to describe the mixed costs in the equation
form Y = f + vX. Thus the mixed cost equation used to estimate future production costs is
Y = $30,000 + $52X
Now Susan can estimate monthly production costs (Y) if she knows how many units Bikes Unlimited plans to produce (X). For example, if Bikes Unlimited plans to produce 6,000 units for a particular
month (a 20 percent increase over June) and this level of activity is within the relevant range, total production costs should be approximately $342,000 [= $30,000 + ($52 × 6,000 units)].
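As a small illustration of how the resulting cost equation can be used, here is a minimal Python sketch (the function and constant names are ours; the $30,000 and $52 figures come from the account analysis above) that turns Y = f + vX into a reusable estimate.

```python
# Minimal sketch: the account-analysis cost equation Y = f + vX as a function.
FIXED_COST = 30_000            # f, total fixed production cost per month
VARIABLE_COST_PER_UNIT = 52    # v, variable production cost per unit

def estimated_production_cost(units):
    """Estimated total production cost for a given number of units produced."""
    return FIXED_COST + VARIABLE_COST_PER_UNIT * units

print(estimated_production_cost(6_000))  # 342000, the $342,000 estimate above
```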
Question: Why should Susan be careful using historical data for one month (June) to estimate future costs?
Answer: June may not be a typical month for Bikes Unlimited. For example, utility costs may be low relative to those in the winter months, and production costs may be relatively high as the company
prepares for increased demand in July and August. This might result in a lower materials cost per unit from quantity discounts offered by suppliers. To smooth out these fluctuations, companies often
use data from the past quarter or past year to estimate costs.
Alta Production, Inc., is using the account analysis approach to identify the behavior of production costs for a month in which it produced 350 units. The production manager was asked to review these
costs and provide her best guess as to how they should be categorized. She responded with the following information:
1. Describe the production costs in the equation form Y = f + vX.
2. Assume Alta intends to produce 400 units next month. Calculate total production costs for the month.
1. Because f represents total fixed costs, and v represents variable cost per unit, the cost equation is: Y = $7,000 + $1,428.57X. (Variable cost per unit of $1,428.57 = $500,000 ÷ 350 units.)
2. Using the previous equation, simply substitute 400 units for X, as follows:
Y = $7,000 + ($1,428.57 × 400 units)
Y = $7,000 + $571,428
Y = $578,428
Thus total production costs are expected to be $578,428 for next month.
High-Low Method
Question: Another approach to identifying fixed and variable costs for cost estimation purposes is the high-low method, a method of cost analysis that uses the high and low activity data points to estimate fixed and variable costs. Accountants who use this approach are looking for a quick and easy way to estimate costs, and will follow up their analysis with other, more accurate techniques.
How is the high-low method used to estimate fixed and variable costs?
Answer: The high-low method uses historical information from several reporting periods to estimate costs. Assume Susan Wesley obtains monthly production cost information from the financial accounting
department for the last 12 months. This information appears in Table 2.4 “Monthly Production Costs for Bikes Unlimited”.
Table 2.4 Monthly Production Costs for Bikes Unlimited
Reporting Period (Month) Total Production Costs Level of Activity (Units Produced)
July $230,000 3,500
August 250,000 3,750
September 260,000 3,800
October 220,000 3,400
November 340,000 5,800
December 330,000 5,500
January 200,000 2,900
February 210,000 3,300
March 240,000 3,600
April 380,000 5,900
May 350,000 5,600
June 290,000 5,000
Step 1. Identify the high and low activity levels from the data set.
Step 2. Calculate the variable cost per unit ( v ).
Step 3. Calculate the total fixed cost ( f ).
Step 4. State the results in equation form Y = f + v X.
Question: How are the four steps of the high-low method used to estimate total fixed costs and per unit variable cost?
Answer: Each of the four steps is described next.
Step 1. Identify the high and low activity levels from the data set.
The highest level of activity (level of production) occurred in the month of April (5,900 units; $380,000 production costs), and the lowest level of activity occurred in the month of January (2,900
units; $200,000 production costs). Note that we are identifying the high and low activity levels rather than the high and low dollar levels—choosing the high and low dollar levels can result in
incorrect high and low points.
Step 2. Calculate the variable cost per unit ( v ).
The calculation of the variable cost per unit for Bikes Unlimited is shown as follows:
Unit variable cost (v) = (Cost at highest activity level − Cost at lowest activity level) ÷ (Highest activity level − Lowest activity level)
v = ($380,000 − $200,000) ÷ (5,900 units − 2,900 units) = $180,000 ÷ 3,000 units = $60
Step 3. Calculate the total fixed cost ( f ).
After completing step 2, the equation to describe the line is partially complete and stated as Y = f + $60X. The goal of step 3 is to calculate a value for total fixed cost (f). Simply select either
the high or low activity level, and fill in the data to solve for f (total fixed costs), as shown.
Using the low activity level of 2,900 units and $200,000,
Y = f + vX
$200,000 = f + ($60 × 2,900 units)
f = $200,000 − ($60 × 2,900 units)
f = $200,000 − $174,000
f = $26,000
Thus total fixed costs total $26,000. (Try this using the high activity level of 5,900 units and $380,000. You will get the same result as long as the per unit variable cost is not rounded.)
Step 4. State the results in equation form Y = f + v X.
We know from step 2 that the variable cost per unit is $60, and from step 3 that total fixed cost is $26,000. Thus we can state the equation used to estimate total costs as
Y = $26,000 + $60X
Now it is possible to estimate total production costs given a certain level of production (X). For example, if Bikes Unlimited expects to produce 6,000 units during August, total production costs are
estimated to be $386,000:
Y = $26,000 + ($60 × 6,000 units)
Y = $26,000 + $360,000
Y = $386,000
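The four steps translate directly into a few lines of code. Here is a minimal Python sketch (the variable names are ours; the data points are the twelve months from Table 2.4) that reproduces the $60 variable cost per unit, the $26,000 fixed cost, and the $386,000 estimate for 6,000 units.

```python
# Minimal sketch: the high-low method applied to the Table 2.4 data.
data = [(3_500, 230_000), (3_750, 250_000), (3_800, 260_000), (3_400, 220_000),
        (5_800, 340_000), (5_500, 330_000), (2_900, 200_000), (3_300, 210_000),
        (3_600, 240_000), (5_900, 380_000), (5_600, 350_000), (5_000, 290_000)]

# Step 1: pick the highest and lowest activity levels (not the highest and lowest costs).
high_units, high_cost = max(data, key=lambda point: point[0])
low_units, low_cost = min(data, key=lambda point: point[0])

# Step 2: variable cost per unit = change in cost divided by change in activity.
v = (high_cost - low_cost) / (high_units - low_units)   # (380,000 - 200,000) / 3,000 = 60

# Step 3: total fixed cost, solved from either point.
f = low_cost - v * low_units                             # 200,000 - 60 * 2,900 = 26,000

# Step 4: state the equation and use it for an estimate at 6,000 units.
print(f"Y = {f:,.0f} + {v:,.0f}X")     # Y = 26,000 + 60X
print(f + v * 6_000)                   # 386000.0
```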
Question: Although the high-low method is relatively simple, it does have a potentially significant weakness. What is the potential weakness in using the high-low method?
Answer: In reviewing the data above, you will notice that this approach only considers the high and low activity levels in establishing an estimate of fixed and variable costs. The high and low data
points may not represent the data set as a whole, and using these points can result in distorted estimates.
For example, the $380,000 in production costs incurred in April may be higher than normal because several production machines broke down resulting in costly repairs. Or perhaps several key employees
left the company, resulting in higher than normal labor costs for the month because the remaining employees were paid overtime. Cost accountants will often throw out the high and low points for this
reason and use the next highest and lowest points to perform this analysis. While the high-low method is most often used as a quick and easy way to estimate fixed and variable costs, other more
sophisticated methods are most often used to refine the estimates developed from the high-low method.
Scattergraph Method
Question: Many organizations prefer to use the scattergraph method, a method of cost analysis that uses a set of data points to estimate fixed and variable costs. Accountants who use this approach want a method that does not rely simply on the highest and lowest data points. How is the scattergraph method used to estimate fixed and variable costs?
Answer: The scattergraph method considers all data points, not just the highest and lowest levels of activity. Again, the goal is to develop an estimate of fixed and variable costs stated in equation
form Y = f + vX. Using the same data for Bikes Unlimited shown in Table 2.4 “Monthly Production Costs for Bikes Unlimited”, we will follow the five steps associated with the scattergraph method:
Step 1. Plot the data points for each period on a graph.
Step 2. Visually fit a line to the data points and be sure the line touches one data point.
Step 3. Estimate the total fixed costs ( f ).
Step 4. Calculate the variable cost per unit ( v ).
Step 5. State the results in equation form Y = f + v X.
Question: How are the five steps of the scattergraph method used to estimate total fixed costs and per unit variable cost?
Answer: Each of the five steps is described next.
Step 1. Plot the data points for each period on a graph.
This step requires that each data point be plotted on a graph. The x-axis (horizontal axis) reflects the level of activity (units produced in this example), and the y-axis (vertical axis) reflects
the total production cost. Figure 2.5 “Scattergraph of Total Mixed Production Costs for Bikes Unlimited” shows a scattergraph for Bikes Unlimited using the data points for 12 months, July through June.
Figure 2.5 Scattergraph of Total Mixed Production Costs for Bikes Unlimited
Step 2. Visually fit a line to the data points and be sure the line touches one data point.
Once the data points are plotted as described in step 1, draw a line through the points touching one data point and extending to the y-axis. The goal here is to minimize the distance from the data
points to the line (i.e., to make the line as close to the data points as possible). Figure 2.6 “Estimated Total Mixed Production Costs for Bikes Unlimited: Scattergraph Method” shows the line
through the data points. Notice that the line hits the data point for July (3,500 units produced and $230,000 total cost).
Figure 2.6 Estimated Total Mixed Production Costs for Bikes Unlimited: Scattergraph Method
Step 3. Estimate the total fixed costs ( f ).
The total fixed costs are simply the point at which the line drawn in step 2 meets the y-axis. This is often called the y-intercept. Remember, the line meets the y-axis when the activity level (units
produced in this example) is zero. Fixed costs remain the same in total regardless of level of production, and variable costs change in total with changes in levels of production. Since variable
costs are zero when no units are produced, the costs reflected on the graph at the y-intercept must represent total fixed costs. The graph in Figure 2.6 “Estimated Total Mixed Production Costs for
Bikes Unlimited: Scattergraph Method” indicates total fixed costs of approximately $45,000. (Note that the y-intercept will always be an approximation.)
Step 4. Calculate the variable cost per unit ( v ).
After completing step 3, the equation to describe the line is partially complete and stated as Y = $45,000 + vX. The goal of step 4 is to calculate a value for variable cost per unit (v). Simply use
the data point the line intersects (July: 3,500 units produced and $230,000 total cost), and fill in the data to solve for v (variable cost per unit) as follows:
Y = f + vX
$230,000 = $45,000 + (v × 3,500)
$230,000 − $45,000 = v × 3,500
$185,000 = v × 3,500
v = $185,000 ÷ 3,500
v = $52.86 (rounded)
Thus variable cost per unit is $52.86.
Step 5. State the results in equation form Y = f + v X.
We know from step 3 that the total fixed costs are $45,000, and from step 4 that the variable cost per unit is $52.86. Thus the equation used to estimate total costs looks like this:
Y = $45,000 + $52.86X
Now it is possible to estimate total production costs given a certain level of production (X). For example, if Bikes Unlimited expects to produce 6,000 units during August, total production costs are
estimated to be $362,160:
Y = $45,000 + ($52.86 × 6,000 units)
Y = $45,000 + $317,160
Y = $362,160
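If you would rather draw the scattergraph with software than by hand, a minimal Python sketch such as the following (assuming the matplotlib package is available; it is not part of the chapter's Excel-based approach) plots the twelve data points from Table 2.4 along with the visually fitted line Y = $45,000 + $52.86X.

```python
# Minimal sketch: scattergraph of the Table 2.4 data with the fitted line drawn on top.
import matplotlib.pyplot as plt

units = [3_500, 3_750, 3_800, 3_400, 5_800, 5_500, 2_900, 3_300, 3_600, 5_900, 5_600, 5_000]
costs = [230_000, 250_000, 260_000, 220_000, 340_000, 330_000,
         200_000, 210_000, 240_000, 380_000, 350_000, 290_000]

plt.scatter(units, costs, label="Monthly observations")
line_x = [0, 6_000]
plt.plot(line_x, [45_000 + 52.86 * x for x in line_x], color="red",
         label="Y = 45,000 + 52.86X")
plt.xlabel("Units produced")
plt.ylabel("Total production cost ($)")
plt.legend()
plt.show()
```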
Question: Remember that the key weakness of the high-low method discussed previously is that it considers only two data points in estimating fixed and variable costs. How does the scattergraph method
mitigate this weakness?
Answer: The scattergraph method mitigates this weakness by considering all data points in estimating fixed and variable costs. The scattergraph method gives us an opportunity to review all data
points in the data set when we plot these data points in a graph in step 1. If certain data points seem unusual (statistics books often call these points outliers), we can exclude them from the data
set when drawing the best-fitting line. In fact, many organizations use a scattergraph to identify outliers and then use regression analysis to estimate the cost equation Y = f + vX. We discuss
regression analysis in the next section.
Although the scattergraph method tends to yield more accurate results than the high-low method, the final cost equation is still based on estimates. The line is drawn using our best judgment and a
bit of guesswork, and the resulting y-intercept (fixed cost estimate) is based on this line. This approach is not an exact science! However, the next approach to estimating fixed and variable
costs—regression analysis—uses mathematical equations to find the best-fitting line.
Regression Analysis
Question: Regression analysis is similar to the scattergraph approach in that both fit a straight line to a set of data points to estimate fixed and variable costs. How does regression analysis
differ from the scattergraph method for estimating costs?
Answer: Regression analysis uses a series of mathematical equations to find the best possible fit of the line to the data points and thus tends to provide more accurate results than the scattergraph
approach. Rather than running these computations by hand, most companies use computer software, such as Excel, to perform regression analysis. Using the data for Bikes Unlimited shown back in Table
2.4 “Monthly Production Costs for Bikes Unlimited”, regression analysis in Excel provides the following output. (This is a small excerpt of the output; see the appendix to this chapter for an
explanation of how to use Excel to perform regression analysis.)
y-intercept 43,276
x variable 53.42
Thus the equation used to estimate total production costs for Bikes Unlimited looks like this:
Y = $43,276 + $53.42X
Now it is possible to estimate total production costs given a certain level of production (X). For example, if Bikes Unlimited expects to produce 6,000 units during August, total production costs are
estimated to be $363,796:
Y = $43,276 + ($53.42 × 6,000 units)
Y = $43,276 + $320,520
Y = $363,796
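The same coefficients can be reproduced outside Excel with an ordinary least-squares fit. Here is a minimal Python sketch (assuming the numpy package is available; variable names are ours) that fits a straight line to the Table 2.4 data and returns an intercept of approximately $43,276 and a slope of approximately $53.42, consistent with the output above.

```python
# Minimal sketch: least-squares regression on the Table 2.4 data.
import numpy as np

units = np.array([3_500, 3_750, 3_800, 3_400, 5_800, 5_500,
                  2_900, 3_300, 3_600, 5_900, 5_600, 5_000])
costs = np.array([230_000, 250_000, 260_000, 220_000, 340_000, 330_000,
                  200_000, 210_000, 240_000, 380_000, 350_000, 290_000])

slope, intercept = np.polyfit(units, costs, deg=1)   # degree-1 fit: Y = intercept + slope * X
print(f"Y = {intercept:,.0f} + {slope:.2f}X")        # approximately Y = 43,276 + 53.42X
print(intercept + slope * 6_000)                     # approximately 363,796
```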
Regression analysis tends to yield the most accurate estimate of fixed and variable costs, assuming there are no unusual data points in the data set. It is important to review the data set
first—perhaps in the form of a scattergraph—to confirm that no outliers exist.
Alta Production, Inc., reported the following production costs for the 12 months January through December. (These are the same data that appear in
Reporting Period (Month) Total Production Cost Level of Activity (Units Produced)
January $460,000 300
February 300,000 220
March 480,000 330
April 550,000 390
May 570,000 410
June 310,000 240
July 440,000 290
August 455,000 320
September 530,000 380
October 250,000 150
November 700,000 450
December 490,000 350
Regression analysis performed using Excel resulted in the following output:
y-intercept 703
x variable 1,442.97
1. Using this information, create the cost equation in the form Y = f + vX.
2. Assume Alta Production, Inc., will produce 400 units next month. Calculate total production costs for the month.
Solution to Review Problem 5.5
1. The cost equation using the data from regression analysis is Y = $703 + $1,442.97X.
2. Using the equation, simply substitute 400 units for X, as follows:
Y = $703 + ($1,442.97 × 400 units)
Y = $703 + $577,188
Y = $577,891
Thus total production costs are expected to be $577,891 for next month.
Summary of Four Cost Estimation Methods
Question: You are now able to create the cost equation Y = f + vX to estimate costs using four approaches. What does the cost equation look like for each approach at Bikes Unlimited?
Answer: The results of these four approaches for Bikes Unlimited are summarized as follows:
• Account analysis: Y = $30,000 + $52.00X
• High-low method: Y = $26,000 + $60.00X
• Scattergraph method: Y = $45,000 + $52.86X
• Regression analysis: Y = $43,276 + $53.42X
Question: We have seen that different methods yield different results, so which method should be used?
Answer: Regression analysis tends to be most accurate because it provides a cost equation that best fits the line to the data points. However, the goal of most companies is to get close—the results
do not need to be perfect. Some could reasonably argue that the account analysis approach is best because it relies on the knowledge of those who are familiar with the costs involved.
At Bikes Unlimited, Eric (CFO) and Susan (cost accountant) met several days later. After consulting with her staff, Susan agreed that regression analysis was the best approach to use in estimating
total production costs (keep in mind nothing has been done yet with selling and administrative expenses). Account analysis was ruled out because no one on the accounting staff had been with the
company long enough to review the accounts and determine which costs were variable, fixed, or mixed. The high-low method was ruled out because it only uses two data points and Eric would prefer a
more accurate estimate. Susan did request that her staff prepare a scattergraph and review it for any unusual data points before performing regression analysis. Based on the scattergraph prepared,
all agreed that the data was relatively uniform and no outlying data points were identified.
Susan: My staff has been working hard to determine what will happen to profit if sales volume increases. So far, we’ve been able to identify cost behavior patterns for production costs, and we’re currently working on the cost behavior patterns for selling and administrative expenses.
Eric: What do you have for production costs?
Susan: The portion of production costs that are fixed—that won’t change with changes in production and sales—totals $43,276. The portion of production costs that are variable—that vary with changes in production and sales—totals $53.42 per unit.
Eric: When do you expect to have further information for the selling and administrative costs?
Susan: We should have those results by the end of the day tomorrow. At that point, I’ll put together an income statement projecting profit for August.
Eric: Sounds good. Let’s meet when you have the information ready.
• Account analysis requires that a knowledgeable employee (or group of employees) determine whether costs are fixed, variable, or mixed. If employees do not have enough experience to accurately
estimate these costs, another method should be used.
• The high-low method starts with the highest and lowest activity levels and uses four steps to estimate fixed and variable costs.
• The scattergraph method has five steps and starts with plotting all points on a graph and fitting a line through the points. This line represents costs throughout a range of activity levels and
is used to estimate fixed and variable costs. The scattergraph is also used to identify any outlying or unusual data points.
• Regression analysis forms a mathematically determined line that best fits the data points. Software packages like Excel are available to perform regression analysis. As with the account analysis,
high-low, and scattergraph methods, this line is described in the equation form Y = f + vX. This equation is used to estimate future costs.
• Four methods can be used to estimate fixed and variable costs. Each method has its advantages and disadvantages, and the choice of a method will depend on the situation at hand. Experienced
employees may be able to effectively estimate fixed and variable costs by using the account analysis approach. If a quick estimate is needed, the high-low method may be appropriate. The
scattergraph method helps with identifying any unusual data points, which can be thrown out when estimating costs. Finally, regression analysis can be run using computer software such as Excel
and generally provides for more accurate cost estimates.
2.3 The Contribution Margin Income Statement
Learning Objective
1. Prepare a contribution margin income statement.
After further work with her staff, Susan was able to break down the selling and administrative costs into their variable and fixed components. (This process is the same as the one we discussed
earlier for production costs.) Susan then established the cost equations shown in Table 2.5 “Cost Equations for Bikes Unlimited”.
Table 2.5 Cost Equations for Bikes Unlimited
Production costs Y = $43,276 + $53.42X
Selling and administrative costs Y = $110,000 + $9.00X
Question: The challenge now is to organize this information in a way that is helpful to management—specifically, to Eric Mendez. The traditional income statement format used for external financial
reporting simply breaks costs down by functional area: cost of goods sold and selling and administrative costs. It does not show fixed and variable costs. Panel A of Figure 2.7 “Traditional and
Contribution Margin Income Statements for Bikes Unlimited” illustrates the traditional format. How can this information be presented in an income statement that shows fixed and variable costs?
Answer: Another income statement format, called the contribution margin income statement, shows the fixed and variable components of cost information. This type of statement appears in panel B of
Figure 2.7 “Traditional and Contribution Margin Income Statements for Bikes Unlimited”. Note that operating profit is the same in both statements, but the organization of data differs. The
contribution margin income statement organizes the data in a way that makes it easier for management to assess how changes in production and sales will affect operating profit. The contribution
margin represents sales revenue left over after deducting variable costs from sales. It is the amount remaining that will contribute to covering fixed costs and to operating profit (hence, the name
contribution margin).
Eric indicated that sales volume in August could increase by 20 percent over sales in June of 5,000 units, which would increase unit sales to 6,000 units [= 5,000 units + (5,000 × 20 percent)], and
he asked Susan to come up with projected profit for August. Eric also mentioned that the sales price would remain the same at $100 per unit. Using this information and the cost estimate equations in
Table 2.5 “Cost Equations for Bikes Unlimited”, Susan prepared the contribution margin income statement in panel B of Figure 2.7 “Traditional and Contribution Margin Income Statements for Bikes
Unlimited”. Assume for now that 6,000 units is just within the relevant range for Bikes Unlimited. (We will discuss this assumption later in the chapter.)
Figure 2.7 Traditional and Contribution Margin Income Statements for Bikes Unlimited
*From Table 2.5 “Cost Equations for Bikes Unlimited”.
The contribution margin income statement shown in panel B of Figure 2.7 “Traditional and Contribution Margin Income Statements for Bikes Unlimited” clearly indicates which costs are variable and
which are fixed. Recall that the variable cost per unit remains constant, and variable costs in total change in proportion to changes in activity. Because 6,000 units are expected to be sold in
August, total variable costs are calculated by multiplying 6,000 units by the cost per unit ($53.42 per unit for cost of goods sold, and $9.00 per unit for selling and administrative costs). Thus
total variable cost of goods sold is $320,520, and total variable selling and administrative costs are $54,000. These two amounts are combined to calculate total variable costs of $374,520, as shown
in panel B of Figure 2.7 “Traditional and Contribution Margin Income Statements for Bikes Unlimited”.
The contribution margin of $225,480 represents the sales revenue left over after deducting variable costs from sales ($225,480 = $600,000 − $374,520). It is the amount remaining that will contribute
to covering fixed costs and to operating profit.
Recall that total fixed costs remain constant regardless of the level of activity. Thus fixed cost of goods sold remains at $43,276, and fixed selling and administrative costs stay at $110,000. This
holds true at both the 5,000 unit level of activity for June, and the 6,000 unit level of activity projected for August. Total fixed costs of $153,276 (= $43,276 + $110,000) are deducted from the
contribution margin to calculate operating profit of $72,204.
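The projection in panel B of Figure 2.7 amounts to a few lines of arithmetic. Here is a minimal Python sketch (variable names are ours; the price, unit variable costs, and fixed costs come from the discussion above and Table 2.5) that rebuilds the contribution margin income statement for 6,000 units.

```python
# Minimal sketch: contribution margin income statement for Bikes Unlimited at 6,000 units.
units = 6_000
price_per_unit = 100

variable_cost_per_unit = 53.42 + 9.00    # production plus selling and administrative
fixed_costs = 43_276 + 110_000           # production plus selling and administrative

sales = units * price_per_unit                           # 600,000
variable_costs = units * variable_cost_per_unit          # 374,520
contribution_margin = sales - variable_costs             # 225,480
operating_profit = contribution_margin - fixed_costs     # 72,204

print(f"Sales                {sales:>10,.0f}")
print(f"Variable costs       {variable_costs:>10,.0f}")
print(f"Contribution margin  {contribution_margin:>10,.0f}")
print(f"Total fixed costs    {fixed_costs:>10,.0f}")
print(f"Operating profit     {operating_profit:>10,.0f}")
```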
Armed with this information, Susan meets with Eric the next day. Refer to panel B of Figure 2.7 “Traditional and Contribution Margin Income Statements for Bikes Unlimited” as you read Susan’s
comments about the contribution margin income statement.
Susan: Eric, I have some numbers for you. My projection for August is complete, and I expect profit to be approximately $72,000 if sales volume increases 20 percent.
Eric: Excellent! You were correct in figuring that profit would increase at a higher rate than sales because of our fixed costs.
Susan: Here’s a copy of our projected income for August. This income statement format provides the variable and fixed costs. As you can see, our monthly fixed costs total approximately $153,000. Now that we have this information, we can easily make projections for different scenarios.
Eric: This will be very helpful in making projections for future months. I’ll take your August projections to the management group this afternoon. Thanks for your help!
• The contribution margin income statement shows fixed and variable components of cost information. Revenue minus variable costs equals the contribution margin. The contribution margin minus fixed
costs equals operating profit. This statement provides a clearer picture of which costs change and which costs remain the same with changes in levels of activity.
Last month, Alta Production, Inc., sold its product for $2,500 per unit. Fixed production costs were $3,000, and variable production costs amounted to $1,400 per unit. Fixed selling and
administrative costs totaled $50,000, and variable selling and administrative costs amounted to $200 per unit. Alta Production produced and sold 400 units last month.
Prepare a traditional income statement and a contribution margin income statement for Alta Production. Use Figure 2.7 “Traditional and Contribution Margin Income Statements for Bikes Unlimited” as a guide.
2.4 The Relevant Range and Nonlinear Costs
Learning Objective
1. Understand the assumptions used to estimate costs.
Question: Bikes Unlimited is making an important assumption in estimating fixed and variable costs. What is this important assumption and why might it be misleading?
Answer: The assumption is that total fixed costs and per unit variable costs will always be at the levels shown in Table 2.5 “Cost Equations for Bikes Unlimited” regardless of the level of
production. This will not necessarily hold true under all circumstances.
For example, let’s say Bikes Unlimited picks up a large contract with a customer that requires producing an additional 30,000 units per month. Do you think the cost equations in Table 2.5 “Cost
Equations for Bikes Unlimited” would lead to accurate cost estimates? Probably not, because additional fixed costs would be incurred for facilities, salaried personnel, and other areas. Variable cost
per unit would likely change also since additional direct labor would be required (either through overtime, which requires overtime pay, or by hiring more employees who are less efficient as they
learn the process), and the volume of parts purchased from suppliers would increase, perhaps leading to reductions in per unit costs due to volume discounts for the parts.
As defined earlier, the relevant range is a term used to describe the range of activity (units of production in this example) for which cost behavior patterns are likely to be accurate. Because the
historical data used to create these equations for Bikes Unlimited ranges from a low of 2,900 units in January to a high of 5,900 units in April (see Table 2.4 “Monthly Production Costs for Bikes
Unlimited”), management would investigate costs further when production levels fall outside of this range. The relevant range for total production costs at Bikes Unlimited is shown in Figure 2.8
“Relevant Range for Total Production Costs at Bikes Unlimited”. It is up to the cost accountant to determine the relevant range and make clear to management that estimates being made for activity
outside of the relevant range must be analyzed carefully for accuracy.
Figure 2.8 Relevant Range for Total Production Costs at Bikes Unlimited
Recall that Bikes Unlimited estimated costs based on projected sales of 6,000 units for the month of August. Although this is slightly higher than the highest sales of 5,900 units in April, Susan
(cost accountant) determined that Bikes Unlimited had the production capacity to produce 6,000 units without significantly affecting total fixed costs or per unit variable costs. Thus she determined
that a sales level of 6,000 units was still within the relevant range. However, Susan also made Eric (CFO) aware that Bikes Unlimited was quickly approaching full capacity. If sales were expected to
increase in the future, the company would have to increase capacity, and cost estimates would have to be revised.
Question: Another important assumption being made by Bikes Unlimited is that all costs behave in a linear manner. Variable, fixed, and mixed costs are all described and shown as a straight line.
However, many costs are not linear and often take on a nonlinear pattern. Why do some costs behave in a nonlinear way?
Answer: Assume the pattern shown in Figure 2.9 “Nonlinear Variable Costs” is for total variable production costs. Consider this: Have you ever worked a job where you were very slow at first but
improved rapidly with experience? If a company produces just a few units each month, workers (direct labor) do not gain the experience needed to work efficiently and may waste time and materials.
This has the effect of driving up the per unit variable cost. Recall that the slope of the line represents the unit cost; thus, when the unit cost increases, so does the slope. If the company
produces more units each month, workers gain experience resulting in improved efficiency, and the per unit cost decreases (both in materials and labor). This causes the total cost line to flatten out
a bit as the slope decreases. This is fine until the company starts to reach its limit in how much it can produce (called capacity). Now the company must hire additional inexperienced employees or
pay its current employees overtime, which once again drives up the cost per unit. Thus the slope begins to increase.
Figure 2.9 Nonlinear Variable Costs
Although this is probably a more accurate description of how variable costs actually behave for most companies, it is much simpler to describe and estimate costs if you assume they are linear. As
long as the relevant range is clearly identified, most companies can reasonably use the linearity assumption to estimate costs.
• Two important assumptions must be considered when estimating costs using the methods described in this chapter.
1. When costs are estimated for a specific level of activity, the assumption is that the activity level is within the relevant range.
2. Costs are estimated assuming that they are linear.
Both assumptions are reasonable as long as the relevant range is clearly identified, and the linearity assumption does not significantly distort the resulting cost estimate.
2.5 Appendix: Performing Regression Analysis with Excel
Learning Objective
1. Perform regression analysis using Excel.
Question: Regression analysis is often performed to estimate fixed and variable costs. Many different software packages have the capability of performing regression analysis, including Excel. This
appendix provides a basic illustration of how to use Excel to perform regression analysis. Statistics courses cover this topic in more depth. How is regression analysis used to estimate fixed and
variable costs?
Answer: As noted in the chapter, regression analysis uses a series of mathematical equations to find the best possible fit of the line to the data points. For the purposes of this chapter, the end
goal of regression analysis is to estimate fixed and variable costs, which are described in the equation form of Y = f + vX. Recall that the following Excel output was provided earlier in the chapter
based on the data presented in Table 2.4 “Monthly Production Costs for Bikes Unlimited” for Bikes Unlimited.
y-intercept 43,276
x variable 53.42
The resulting equation to estimate production costs is Y = $43,276 + $53.42X. We now describe the steps to be performed in Excel to get this equation.
Step 1. Confirm that the Data Analysis package is installed.
Go to the Data tab on the top menu bar and look for Data Analysis. If Data Analysis appears, you are ready to perform regression analysis. If Data Analysis does not appear, go to the help button
(denoted as a question mark in the upper right-hand corner of the screen) and type Analysis ToolPak. Look for the Load the Analysis ToolPak option and follow the instructions given.
Step 2. Enter the data in the spreadsheet.
Using a new Excel spreadsheet, enter the data points in two columns. The monthly data in Table 2.4 “Monthly Production Costs for Bikes Unlimited” includes Total Production Costs and Units Produced.
Thus use one column (column B) to enter Total Production Costs data and another column (column C) to enter Units Produced data.
Step 3. Run the regression analysis.
Using the same spreadsheet set up in step 2, select Data, Data Analysis, and Regression. A box appears that requires the input of several items needed to perform regression. Input Y Range requires
that you highlight the y-axis data, including the heading (cells B1 through B13 in the example shown in step 2). Input X Range requires that you highlight the x-axis data, including the heading
(cells C1 through C13 in the example shown in step 2). Check the Labels box; this indicates that the top of each column has a heading (B1 and C1). Select New Workbook; this will put the regression
results in a new workbook. Lastly, check the Line Fit Plots box, then select OK. The result is as follows (note that we made a few minor format changes to allow for a better presentation of the output).
Step 4. Analyze the output.
Here, we discuss key items shown in the regression output provided in step 3.
• Cost Equation: The output shows that estimated fixed costs (shown as the Intercept coefficient in cell B17) total $43,276, and the estimated variable cost per unit (shown as the Units Produced
coefficient in B18) is $53.42. Thus the cost equation is:
Y = $43,276 + $53.42X
Total Production Costs = $43,276 + ($53.42 × Units Produced)
• Line Fit Plot and R-Squared: The plot shows that actual total production costs are very close to predicted total production costs calculated using the cost equation. Thus the cost equation
created from the regression analysis is likely to be useful in predicting total production costs. Another way to assess the accuracy of the regression output is to review the R-squared statistic
shown in cell B5. R-squared measures the percent of the variance in the dependent variable (
total production costs, in this example) explained by the independent variable (units produced, in this example). According to the output, 96.29 percent of the variance in total production costs
is explained by the level of units produced—further evidence that the regression results will be useful in predicting total production costs.
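For readers without access to Excel's Analysis ToolPak, the same three numbers (intercept, slope, and R-squared) can be produced with a minimal Python sketch such as the following (assuming the scipy package is available; it is offered only as an alternative to the Excel steps described above).

```python
# Minimal sketch: regression output comparable to the Excel results above.
from scipy.stats import linregress

units = [3_500, 3_750, 3_800, 3_400, 5_800, 5_500, 2_900, 3_300, 3_600, 5_900, 5_600, 5_000]
costs = [230_000, 250_000, 260_000, 220_000, 340_000, 330_000,
         200_000, 210_000, 240_000, 380_000, 350_000, 290_000]

result = linregress(units, costs)
print(f"y-intercept: {result.intercept:,.0f}")   # approximately 43,276
print(f"x variable:  {result.slope:.2f}")        # approximately 53.42
print(f"R-squared:   {result.rvalue ** 2:.4f}")  # approximately 0.9629
```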
The discussion of regression analysis in this chapter is meant to serve as an introduction to the topic. To further enhance your knowledge of regression analysis and to provide for a more thorough
analysis of the data, you should pursue the topic in an introductory statistics course.
• Software applications, such as Excel, can use regression analysis to estimate fixed and variable costs.
• Once the data analysis package is installed, historical data are entered in the spreadsheet, and the regression analysis is run.
• The resulting data are used to determine the cost equation, which includes estimated fixed and variable costs.
• The line fit plot and R-squared statistic are used to assess the usefulness of the cost equation in estimating costs.
1. What is a fixed cost? Provide two examples.
2. What is the difference between a committed fixed cost and a discretionary fixed cost? Provide examples of each.
3. What is a variable cost? Provide two examples.
4. What is a mixed cost? Provide two examples.
5. Describe the variables in the cost equation Y = f + vX.
6. How is the cost equation Y = f + vX used to estimate future costs?
7. Why is it important to identify how costs behave with changes in activity?
8. Explain how account analysis is used to estimate costs.
9. Describe the four steps of the high-low method and how these steps are used to estimate costs.
10. Why might the high-low method lead to inaccurate results?
11. Describe the five steps of the scattergraph method and how these steps are used to estimate costs.
12. How can the scattergraph method be used to identify unusual data points?
13. Describe how regression analysis is used to estimate costs.
14. How does the contribution margin income statement differ from the traditional income statement?
15. Describe the term relevant range. Why is it important to stay within the relevant range when estimating costs?
16. Explain how some costs can behave in a nonlinear way.
Brief Exercises
17. Planning at Bikes Unlimited. Refer to the dialogue at Bikes Unlimited presented at the beginning of the chapter. What is the first step to be taken by Susan and her accounting staff to help in
estimating profit for August?
18. Identifying Cost Behavior. Vasquez Incorporated is trying to identify the cost behavior of the three costs that follow. Cost information is provided for three months.
Cost A Cost B Cost C
Month Units Produced Total Costs Cost per Unit Total Costs Cost per Unit Total Costs Cost per Unit
1 1,500 $1,500 _____ $4,500 _____ $3,000 _____
2 3,000 1,500 _____ 5,250 _____ 6,000 _____
3 750 1,500 _____ 3,750 _____ 1,500 _____
1. Calculate the cost per unit, and then identify how the cost behaves for each of the three costs (fixed, variable, or mixed). Explain the reasoning for your answers.
2. How does identifying cost behavior patterns help managers?
19. Account Analysis. Cordova Company would like to estimate production costs on an annual basis. Costs incurred for direct materials and direct labor are variable costs. The accounting records
indicate that the following production costs were incurred last year for 50,000 units.
Direct materials $100,000
Direct labor $215,000
Manufacturing overhead $300,000 (20 percent fixed; 80 percent variable)
Use account analysis to estimate the fixed costs per year, and the variable cost per unit.
20. High-Low Method. The city of Rockville reported the following annual cost data for maintenance work performed on its fleet of trucks.
Reporting Period (Year) Total Costs Level of Activity (Miles Driven)
Year 1 $ 750,000 225,000
Year 2 850,000 240,000
Year 3 1,100,000 430,000
Year 4 1,150,000 454,000
Year 5 1,250,000 560,000
Year 6 1,550,000 710,000
1. Use the four steps of the high-low method to estimate total fixed costs per year and the variable cost per mile. State your results in the cost equation form Y = f + vX.
2. What would the estimated costs be if the trucks drove 500,000 miles in year 7?
21. Regression Analysis. Regression analysis was run using the data in Brief Exercise 20 for the city of Rockville. The output is shown here:
y-intercept 441,013
x variable 1.53
1. Use the regression output to develop the cost equation Y = f + vX by filling in the dollar amounts for f and v.
2. What would the city of Rockville’s estimated costs be if its trucks drove 500,000 miles in year 7?
22. Contribution Margin Income Statement. Last year Pod Products, Inc., sold its product for $250 per unit. Production costs totaled $40,000 (25 percent fixed, 75 percent variable). Selling and
administrative costs totaled $150,000 (10 percent fixed, 90 percent variable). Pod Products produced and sold 1,000 units last year.
Prepare a contribution margin income statement for Pod Products, Inc.
23. Relevant Range. Jersey Company produces jerseys for athletic teams, and typically produces between 1,000 and 5,000 jerseys annually. The accountant is asked to estimate production costs for this
coming year assuming 9,000 jerseys will be produced.
What is meant by the term relevant range, and why is the relevant range important for estimating production costs for this coming year at Jersey Company?
24. Identifying Cost Behavior. Zhang Corporation is trying to identify the cost behavior of the three costs shown. Cost information is provided for six months.
Cost 1 Cost 2 Cost 3
Month Units Produced Total Costs Cost per Unit Total Costs Cost per Unit Total Costs Cost per Unit
1 18,000 $36,000 _____ $19,800 _____ $5,000 _____
2 16,000 32,000 _____ 19,200 _____ 5,000 _____
3 14,000 28,000 _____ 18,200 _____ 5,000 _____
4 12,000 24,000 _____ 16,800 _____ 5,000 _____
5 10,000 20,000 _____ 14,500 _____ 5,000 _____
6 8,000 16,000 _____ 12,000 _____ 5,000 _____
1. Calculate the cost per unit, and then identify how the cost behaves (fixed, variable, or mixed) for each of the three costs. Explain the reasoning behind your answers.
2. Why is it important to identify how costs behave with changes in activity?
25. Account Analysis. Baker Advertising Incorporated would like to estimate costs associated with its clients on an annual basis. Assume costs for supplies and advertising staff are variable costs.
The accounting records indicate the following costs were incurred last year for 100 clients:
Supplies $ 20,000
Advertising staff wages (hourly employees) $170,000
Manager salary $ 90,000
Building rent $ 56,000
1. Use account analysis to estimate total fixed costs per year, and the variable cost per unit. State your results in the cost equation form Y = f + vX by filling in the dollar amounts for f and v.
2. Estimate the total costs for this coming year assuming 120 clients will be served.
26. Regression Analysis. Regression analysis was run for Castanza Company resulting in the following output (this is based on the same data as the previous two exercises):
y-intercept 445,639
x variable 8.54
1. Use the regression output given to develop the cost equation Y = f + vX by filling in the dollar amounts for f and v.
2. What would Castanza Company’s estimated costs be if it used 50,000 machine hours next month?
3. What would Castanza Company’s estimated costs be if it used 15,000 machine hours next month?
27. Contribution Margin Income Statement. Last month Kumar Production Company sold its product for $60 per unit. Fixed production costs were $40,000, and variable production costs amounted to $15 per
unit. Fixed selling and administrative costs totaled $26,000, and variable selling and administrative costs amounted to $5 per unit. Kumar Production produced and sold 7,000 units last month.
1. Prepare a traditional income statement for Kumar Production Company.
2. Prepare a contribution margin income statement for Kumar Production Company.
3. Why do companies use the contribution margin income statement format?
28. Regression Analysis Using Excel (Appendix). Walleye Company produces fishing reels. Management wants to estimate the cost of production equipment used to produce the reels. The company reported
the following monthly cost data related to production equipment:
Reporting Period (Month) Total Costs Machine Hours
January $1,104,000 54,000
February 720,000 30,000
March 600,000 24,000
April 1,320,000 108,000
May 1,368,000 114,000
June 744,000 36,000
July 1,056,000 45,600
August 1,092,000 57,600
September 1,272,000 93,600
October 1,152,000 61,200
November 1,680,000 115,200
December 1,176,000 64,800
1. Use Excel to perform regression analysis. Provide a printout of the results.
2. Use the regression output to develop the cost equation Y = f + vX by filling in the dollar amounts for f and v.
3. What would Walleye Company’s estimated costs be if it used 90,000 machine hours this month?
29. Cost Behavior. Assume you are a consultant performing work for two different companies. Each company has asked you to help identify the behavior of certain costs.
1. Identify each of the following costs for Hwang Company, a producer of ski boats, as variable (V), fixed (F), or mixed (M):
1. _____Salary of production manager
2. _____Materials required for production
3. _____Monthly rent on factory building
4. _____Hourly wages for assembly workers
5. _____Straight-line depreciation for factory equipment
6. _____Annual insurance on factory building
7. _____Invoices sent to customers
8. _____Salaries and commissions of salespeople
9. _____Salary of chief executive officer
10. _____Company cell phones with first 50 hours free, then 10 cents per minute
2. Identify each of the following costs for Rainier Camping Products, a maker of backpacks, as variable (V), fixed (F), or mixed (M):
1. _____Hourly wages for assembly workers
2. _____Fabric required for production
3. _____Straight-line depreciation on factory building
4. _____Salaries and commissions of salespeople
5. _____Lease payments for factory equipment
6. _____Company cell phones with first 80 hours free, then 8 cents per minute
7. _____Invoices sent to customers
8. _____Salary of production manager
9. _____Salary of controller (accounting)
10. _____Electricity for factory building
3. How might the managers of these companies use the cost behavior information requested?
30. Account Analysis and Contribution Margin Income Statement. Madden Company would like to estimate costs associated with its production of football helmets on a monthly basis. The accounting
records indicate the following production costs were incurred last month for 4,000 helmets.
Assembly workers’ labor (hourly) $70,000
Factory rent 3,000
Plant manager’s salary 5,000
Supplies 20,000
Factory insurance 12,000
Materials required for production 20,000
Maintenance of production equipment (based on usage) 18,000
1. Use account analysis to estimate total fixed costs per month and the variable cost per unit. State your results in the cost equation form Y = f + vX by filling in the dollar amounts for f
and v.
2. Estimate total production costs assuming 5,000 helmets will be produced and sold.
3. Prepare a contribution margin income statement assuming 5,000 helmets will be produced, and each helmet will be sold for $70. Fixed selling and administrative costs total $10,000.
Variable selling and administrative costs are $8 per unit.
31. High-Low, Scattergraph, and Regression Analysis; Manufacturing Company. Woodworks, Inc., produces cabinet doors. Manufacturing overhead costs tend to fluctuate from one month to the next, and
management would like to accurately estimate these costs for planning and decision-making purposes.
The accounting staff at Woodworks recommends that costs be broken down into fixed and variable components. Because the production process is highly automated, most of the manufacturing overhead costs
are related to machinery and equipment. The accounting staff believes the best starting point is to review historical data for costs and machine hours:
Reporting Period (Month) Total Costs Machine Hours
January $278,000 1,550
February 280,000 1,570
March 266,000 1,115
April 290,000 1,700
May 262,000 1,110
June 269,000 1,225
July 275,000 1,335
August 286,000 1,660
September 250,000 1,000
October 253,000 1,020
November 260,000 1,025
December 281,000 1,600
These data were entered into a computer regression program, which produced the following output:
y-intercept 210,766
x variable 45.31
1. Use the four steps of the high-low method to estimate total fixed costs per month and the variable cost per machine hour. State your results in the cost equation form Y = f + vX by
filling in the dollar amounts for f and v.
2. Use the five steps of the scattergraph method to estimate total fixed costs per month, and the variable cost per machine hour. State your results in the cost equation form Y = f + vX
by filling in the dollar amounts for f and v.
3. Use the regression output given to develop the cost equation Y = f + vX by filling in the dollar amounts for f and v.
4. Use the results of the high-low method (a), scattergraph method (b), and regression analysis (c), to estimate costs for 1,500 machine hours. (You will have three different answers—one
for each method.) Which approach do you think is most accurate, and why?
5. Management likes the regression analysis approach and asks you to estimate costs for 5,000 machine hours using this approach (the company plans to expand by opening another facility and hiring
additional employees). Calculate your estimate, and explain why your estimate might be misleading.
32. Regression Analysis Using Excel (Appendix). Metal Products, Inc., produces metal storage sheds. The company’s manufacturing overhead costs tend to fluctuate from one month to the next, and
management would like an accurate estimate of these costs for planning and decision-making purposes.
The company’s accounting staff recommends that costs be broken down into fixed and variable components. Because the production process is highly automated, most of the manufacturing
overhead costs are related to machinery and equipment. The accounting staff agrees that reviewing historical data for costs and machine hours is the best starting point. Data for the past
18 months follow.
Reporting Period (Month) Total Overhead Costs Total Machine Hours
January $695,000 3,875
February 700,000 3,925
March 665,000 2,788
April 725,000 4,250
May 655,000 2,775
June 672,500 3,063
July 687,500 3,338
August 715,000 4,150
September 625,000 2,500
October 632,500 2,550
November 650,000 2,563
December 702,500 4,000
January 730,000 4,025
February 735,000 4,088
March 697,500 2,900
April 762,500 4,425
May 687,500 2,888
June 705,000 3,188
1. Use Excel to perform regression analysis. Provide a printout of the results.
2. Use the regression output given to develop the cost equation Y = f + vX by filling in the dollar amounts for f and v.
3. Use the results of the regression analysis to estimate costs for 3,750 machine hours.
4. Management is considering plans to expand by opening several new facilities and asks you to estimate costs for 22,000 machine hours. Calculate your estimate, and explain why this estimate may be misleading.
5. What can be done to improve the estimate made in part d?
One Step Further: Skill-Building Cases
33. Internet Project: Variable and Fixed Costs. Using the Internet, find the annual report of one retail company and one manufacturing company. Print out each company’s income statement. (Hint: The
income statement is often called the statement of operations or statement of earnings.)
1. Review each income statement, and provide an analysis of which operating costs are likely to be variable and which are likely to be fixed. Include copies of both income statements when
submitting your answer.
2. How would you expect a retail company’s mix of variable and fixed operating costs to differ from that of a manufacturing company?
3. How might the managers of these companies use cost behavior information?
34. Group Activity: Identifying Variable and Fixed Costs. To complete the following requirements, form groups of two to four students.
1. Each group should select a product that is easy to manufacture.
2. Prepare a list of materials, labor, and other resources needed to make the product.
3. Using the list prepared in requirement b, identify whether the costs associated with each item are variable, fixed, or mixed.
4. As a manager for this company, why would you want to know whether costs are variable, fixed, or mixed?
35. Fixed Costs at United Airlines. Review Note 5.4 “Business in Action 5.1”.
1. What is meant by the term fixed cost?
2. Which costs at United Airlines were identified as fixed costs?
3. How might United Airlines reduce its fixed costs? Be specific.
Comprehensive Case
36. Ethics: Manipulating Data to Establish a Budget (Appendix). Healthy Bar, Inc., produces energy bars for sports enthusiasts. The company’s fiscal year ends on December 31. The production manager,
Jim Wallace, is establishing a cost budget for the production department for each month of this coming quarter (January through March). At the end of March, Jim will be evaluated based on his
ability to meet the budget for the three months ending March 31. In fact, Jim will receive a significant bonus if actual costs are below budgeted costs for the quarter.
The production budget is typically established based on data from the last 18 months. These data are as follows:
Reporting Period (Month) Total Overhead Costs Total Machine Hours
July $695,000 3,410
August 700,000 3,454
September 665,000 2,453
October 725,000 3,740
November 655,000 2,442
December 672,500 2,695
January 687,500 2,937
February 715,000 3,652
March 625,000 2,200
April 632,500 2,244
May 650,000 2,255
June 702,500 3,520
July 730,000 3,542
August 735,000 3,597
September 697,500 2,552
October 762,500 3,894
November 687,500 2,541
December 705,000 2,805
You are the accountant who assists Jim in preparing an estimate of production costs for the next three months. You intend to use regression analysis to estimate costs, as was done in the past.
Jim expects that 3,100 machine hours will be used in January, 3,650 machine hours in February, and 2,850 machine hours in March.
Jim approaches you and asks that you add $100,000 to production costs for each of the past 18 months before running the regression analysis. As he puts it, “After all, management always takes my
proposed budgets and reduces them by about 10 percent. This is my way of leveling the playing field!”
1. Use Excel to perform regression analysis using the historical data provided.
1. Submit a printout of the results.
2. Use the regression output to develop the cost equation Y = f + vX by filling in the dollar amounts for f and v.
3. Calculate estimated production costs for January, February, and March. Also provide a total for the three months.
2. Use Excel to perform regression analysis after adding $100,000 to production costs for each of the past 18 months, as Jim requested.
1. Submit a printout of the results.
2. Use the regression output to develop the cost equation Y = f + vX by filling in the dollar amounts for f and v.
3. Calculate estimated production costs for January, February, and March. Also provide a total for the three months.
3. Why did Jim ask you to add $100,000 to production costs for each of the past 18 months?
4. How should you handle Jim’s request? (If necessary, review the presentation of ethics in Chapter 1 “What Is Managerial Accounting?” for additional information.) | {"url":"https://biz.libretexts.org/Courses/HACC_Central_Pennsylvania's_Community_College/Principles_of_Managerial_Accounting_1/02%3A_How_Do_Organizations_Identify_Cost_Behavior_Patterns","timestamp":"2024-11-08T05:42:00Z","content_type":"text/html","content_length":"282233","record_id":"<urn:uuid:cb63074a-0a64-4375-9d99-333293ea5432>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00409.warc.gz"} |
How Old Is The Universe? (All Calculations So Far)
Isaac Newton, working from Biblical chronology, assumed an age of a few thousand years. Einstein believed in a steady-state, ageless Universe. Since then, data obtained from the Universe has placed the probable answer somewhere in between.
First Ingredient: Quantum mechanics
In the early 20th century, it was realized that the stability of atomic matter could not be explained using the Maxwell equations of classical electrodynamics. This triumph belonged to quantum
mechanics. The hydrogen atom was stable because the possible energy states of the electron in the atom are quantized by the rule
where n is an integer, and m is (approximately) the electron mass.
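The rule itself is presumably the usual Bohr energy spectrum for hydrogen (written here in Gaussian units):

$$E_n = -\frac{m e^4}{2\hbar^2 n^2} \approx -\frac{13.6\ \text{eV}}{n^2}.$$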
So when the electron changes energy for some reason, say by absorbing or emitting electromagnetic radiation, it can only absorb or emit light of a wavelength corresponding to the difference in the
electron’s quantized energy states. The hydrogen emission spectrum is considered the set of light wavelengths produced by hydrogen gas, and there is a corresponding absorption spectrum. The
measurement of the wavelengths in the observed hydrogen spectrum was one of quantum mechanics’ remarkable achievements.
Second ingredient: Relativity
The other great revolution at the start of the 20th century was the spacetime revolution of special and general relativity. In special relativity, as a light source of wavelength λem moves away from the observer at some velocity v, the observer sees the light at some other wavelength λobs, a consequence of the speed of light being the same for all observers. The fractional difference
between λem and λobs is called the redshift, denoted by the letter z, and is computed from the relative velocity v between the source and observer by
where c is the speed of light. If the source and observer are moving towards one another, the redshift becomes a blue shift and is given by taking v -> -v in above.
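In standard notation, the definition of the redshift and the special-relativistic Doppler relation it satisfies are presumably

$$z = \frac{\lambda_{\text{obs}} - \lambda_{\text{em}}}{\lambda_{\text{em}}}, \qquad 1 + z = \sqrt{\frac{1 + v/c}{1 - v/c}}.$$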
Conclusion: The Universe is expanding
Age of universe from Hubble constant
Stars are made mostly out of hydrogen and helium, and the emission spectrum of the hydrogen atoms in a star in a far away galaxy ought to be the same as that of hydrogen atoms in a tube of gas in a
laboratory on Earth. But that’s not what Edwin Hubble found when he compared the emission spectra of different stars and galaxies. Hubble found that the hydrogen gas’s emission wavelengths were
redshifted by an amount proportional to their distance from our solar system. Hubble’s Law relates the redshift z to the distance D through
where the empirical constant H0 is called Hubble’s constant.
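In its usual form, Hubble's Law is presumably

$$z \approx \frac{H_0 D}{c} \qquad (\text{equivalently } v = H_0 D \text{ for small } z).$$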
Hubble’s observation indicated that the stars and galaxies in the Universe hurtle away from each other with a velocity that increases with distance, as if the whole Universe were expanding, as in a major explosion. When physicists extrapolated that motion backward in time, it suggested that the Universe started very hot and dense and somehow exploded into the vast cold place that we see today.
Hubble’s Law was an empirical observation that demanded, and received, very intense attention from modern theoretical physics after it was first proposed in 1924.
The equation of motion
When physicists want to study a given system, they turn to the motion equations for that system. According to the theory of general relativity, the correct equation of motion for describing a
Universe is the Einstein equation
relating the curvature of the spacetime in a given Universe to the distribution of energy and momentum in that Universe. The energy-momentum tensor includes all of the energy from all
non-gravitational sources such as matter, electromagnetism, or even quantum vacuum energy, as we shall see later. The standard cosmological solution to the Einstein equation is written in the form of
the Friedmann-Robertson-Walker metric
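The two displayed equations referred to here are presumably the Einstein field equation and the Friedmann-Robertson-Walker line element in their standard forms,

$$G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}, \qquad ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dr^2}{1 - k r^2} + r^2\, d\Omega^2\right],$$

with a cosmological-constant term, if any, absorbed into the energy-momentum tensor.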
The function a(t) is called the scale factor because it tells us the size of the Universe. The a(t) scale factor and the k constant are both determined by the Universe’s specific type of matter and/
or radiation.
For any value of a(t) or k, the gravitational redshift of light z, due to the changing size of the Universe satisfies
where t[obs] is the time in the Universe that the light is being observed and t[em] is the time when the light was first emitted. The Hubble parameter H(t) gives the relative rate of change in the
scale factor a(t) by
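In standard form, these two relations are

$$1 + z = \frac{a(t_{\text{obs}})}{a(t_{\text{em}})}, \qquad H(t) = \frac{\dot a(t)}{a(t)}.$$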
The observed Hubble constant is just the current value of the dynamically evolving Hubble parameter. The uncertainties of the currently observed value of the Hubble constant have been lumped into the
parameter h0.
How old?
The inverse of the Hubble constant will approximate a simple calculation of the age of the Universe. The age of the universe turns out to be
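In the usual parametrisation $H_0 = 100\, h_0\ \mathrm{km\, s^{-1}\, Mpc^{-1}}$, this estimate is

$$t_0 \approx \frac{1}{H_0} \approx 9.78\, h_0^{-1}\ \text{billion years}.$$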
Given current best estimates of h0, the Universe is most likely somewhere between 12 and 16 billion years old, at least according to this calculation process.
But note that time is relative, according to relativity. We can only estimate the amount of time likely to have passed during the era in which time was a well-defined, calculable quantity; about any processes that might have occurred before the notion of time made sense, we can't say anything. Quantum gravity could be an everlasting stage of the Universe in some way, and the Big Bang could be considered to be
the end of eternity and the beginning of time itself.
Recommended Book
How Old Is the Universe?
Buy on Amazon- https://amzn.to/35mk5bp
universe examples, what is cyclic universe theory, whirling disk | {"url":"https://cosmos.theinsightanalysis.com/how-old-is-the-universe-and-time/","timestamp":"2024-11-14T04:11:04Z","content_type":"text/html","content_length":"177637","record_id":"<urn:uuid:87f69868-03f9-4c6d-a17f-c3bc67732dc9>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00039.warc.gz"} |
How to Reverse a Number in Java: Step-by-Step Guide
Have you ever found yourself in a situation where you needed to reverse a number in Java? It comes up often, whether you are preparing for an interview or working on a college project, and while it seems tricky at first, it is a common task that helps develop a deeper understanding of Java.
It is not just a formal maths exercise, either. Reversing digits turns up in a variety of computational tasks: testing whether a number is a palindrome, cryptography, and other work that involves manipulating numerical data. In this blog, we will learn how to reverse a number in Java using a few techniques, with code examples for each: loops, recursion, and the StringBuilder class.
Now let us go through the methods to reverse a number in Java and ensure the implementation is simple.
Basic Algorithm for Reversing a Number
Let us begin by analysing the procedures:
• First, create a variable for storing the reverse of the number, let it be reverse. Start with reverse = 0.
• To obtain the last digit of the number, you should use the remainder operator %.
• Multiply reverse by 10 and add the last digit to reverse.
• Divide the original number by 10 to remove the last digit.
• Continue with the steps 2 to 4 until the number is zero.
Simple, right? Now let’s illustrate these concepts with different types of methods.
Reversing a Number Using a While Loop
The while loop is a straightforward method to reverse a number. Let’s walk through the process step-by-step.
Explanation and Step-by-Step Process
• Initialise: Start with the number you want to reverse and a variable reverse set to 0.
• Loop: Use a while loop to iterate as long as the number is not 0.
• Extract Last Digit: Use number % 10 to get the last digit.
• Update Reverse: Multiply reverse by 10 and add the last digit.
• Remove Last Digit: Divide the number by 10 to remove the last digit.
• Repeat: Continue the loop until the number becomes 0.
Here’s a complete Java program that reverses a number using a while loop. It takes user input and outputs the reversed number.
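One way to write that program (class and prompt text are my own choices):

```java
import java.util.Scanner;

public class ReverseNumberWhile {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter a number: ");
        int number = scanner.nextInt();

        int reverse = 0;
        while (number != 0) {
            int lastDigit = number % 10;          // extract the last digit
            reverse = reverse * 10 + lastDigit;   // append it to the reversed value
            number /= 10;                         // drop the last digit
        }

        System.out.println("Reversed number: " + reverse);
        scanner.close();
    }
}
```

For example, entering 1234 prints 4321.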
Reversing a Number Using a For Loop
Have you ever thought about whether a while loop is necessary to reverse a number in Java? We can do it with a for loop as well. The process is much the same; only the loop structure is different.
Explanation and Step-by-Step Process
• Initialise: Start with the number to reverse and set reverse to 0.
• Loop: Use a for loop to iterate as long as the number is not 0.
• Extract Last Digit: Use number % 10 to get the last digit.
• Update Reverse: Multiply reverse by 10 and add the last digit.
• Remove Last Digit: Divide the number by 10 to remove the last digit.
• Repeat: Continue the loop until the number becomes 0.
Here’s a Java program using a for loop to reverse a number. It takes user input and prints the reversed number.
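A sketch of that program (again, naming is mine):

```java
import java.util.Scanner;

public class ReverseNumberFor {
    public static void main(String[] args) {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter a number: ");
        int number = scanner.nextInt();

        int reverse = 0;
        for (; number != 0; number /= 10) {
            reverse = reverse * 10 + number % 10; // take the last digit and append it
        }

        System.out.println("Reversed number: " + reverse);
        scanner.close();
    }
}
```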
Implementing Number Reversal with Recursion
Let’s now add some excitement. Did you know that reversing a number can also be achieved with recursion? Using recursion can even result in clearer, more understandable code.
Explanation and Step-by-Step Process
• Base Case: If the number is 0, return.
• Extract Last Digit: Use number % 10 to get the last digit.
• Update Reverse: Multiply reverse by 10 and add the last digit.
• Recursive Call: Divide the number by 10 and call the function again.
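Putting those steps together, a compact version passes the partial result along as a parameter (names are mine):

```java
public class ReverseNumberRecursive {

    // "reverse" carries the partial result through the recursive calls.
    static int reverseDigits(int number, int reverse) {
        if (number == 0) {                      // base case: no digits left
            return reverse;
        }
        // append the last digit, then recurse on the remaining digits
        return reverseDigits(number / 10, reverse * 10 + number % 10);
    }

    public static void main(String[] args) {
        System.out.println(reverseDigits(1234, 0));  // 4321
        System.out.println(reverseDigits(-456, 0));  // -654
    }
}
```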
Using the StringBuilder Class to Reverse a Number
Okay, let me bring you to a new concept. Java’s StringBuilder class can be used to reverse a number, do you know that? This method is straightforward and completes the process with the help of Java’s
built-in methods.
Explanation and Step-by-Step Process
• Convert to String: Convert the number to a string.
• Reverse String: Use StringBuilder to reverse the string.
• Convert Back to Integer: Convert the reversed string back to an integer.
• Handle Leading Zeros: Ensure any leading zeros are handled.
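A sketch of this approach (method name is mine; it also covers the sign and leading-zero handling discussed in the next section):

```java
public class ReverseNumberStringBuilder {

    static int reverse(int number) {
        boolean negative = number < 0;                        // remember the sign
        String digits = Integer.toString(Math.abs(number));   // assumes number > Integer.MIN_VALUE
        String reversed = new StringBuilder(digits).reverse().toString();
        int result = Integer.parseInt(reversed);              // parseInt drops any leading zeros
        return negative ? -result : result;                   // restore the sign
    }

    public static void main(String[] args) {
        System.out.println(reverse(1200));  // 21
        System.out.println(reverse(-123));  // -321
    }
}
```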
Handling Leading Zeros and Negative Numbers in Reversal
Handling Leading Zeros
One tricky part of reversing numbers is handling leading zeros. For example, reversing 1000 should give 1, not 0001.
Handling Negative Numbers
We also need to handle negative numbers properly. Reversing -123 should give -321.
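In Java, the plain arithmetic loop already copes with both cases, so no extra code is strictly needed:

```java
static int reverse(int number) {
    int reversed = 0;
    while (number != 0) {
        reversed = reversed * 10 + number % 10; // for negatives, -123 % 10 == -3, so the sign carries through
        number /= 10;
    }
    return reversed;  // reverse(1000) == 1, reverse(-123) == -321
}
```

Leading zeros disappear automatically because an int simply cannot store them.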
Complexity Analysis of Different Methods
Not sure which method to use for reversing a number in Java? Looking at time and space complexity, the while-loop and for-loop methods are the most efficient. Let's break down the complexity of each method to help us decide.
| Method | Time Complexity | Explanation | Space Complexity | Explanation |
|---|---|---|---|---|
| While Loop | O(log10(n)) | Every iteration reduces the number by a factor of 10. | O(1) | Uses constant space. |
| For Loop | O(log10(n)) | Similar to the while loop, it iterates until the number becomes zero. | O(1) | Also uses constant space. |
| Recursion | O(log10(n)) | Each recursive call reduces the number by a factor of 10. | O(log10(n)) | Uses stack space for each recursive call. |
| StringBuilder | O(n) | Converts the number to a string and then reverses it. | O(n) | Uses space proportional to the number of digits. |
Common Mistakes to Avoid When Reversing Numbers
Although it may appear simple to reverse a number, there are a few typical mistakes to watch out for:
Forgetting to Handle Negative Numbers
Always check if the number is negative and handle it accordingly. If we ignore this, the reversed number might lose its sign.
Ignoring Leading Zeros
Make sure to remove leading zeros in the reversed number. For instance, reversing 1000 should give us 1, not 0001.
Overflow Errors
Be cautious with very large numbers. Reversing a number might result in an overflow if it exceeds the range of the data type.
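One defensive way to handle this is to accumulate in a wider type and check the result before narrowing (sketch, names mine):

```java
static int reverseChecked(int number) {
    long reversed = 0;                 // wider type: the intermediate value cannot overflow here
    while (number != 0) {
        reversed = reversed * 10 + number % 10;
        number /= 10;
    }
    if (reversed > Integer.MAX_VALUE || reversed < Integer.MIN_VALUE) {
        throw new ArithmeticException("reversed value does not fit in an int");
    }
    return (int) reversed;
}
```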
Not Considering Time and Space Complexity
Choosing the right method is crucial. Consider both time and space complexity to ensure efficient performance, especially with large inputs.
Here in this blog, we have discussed how to reverse a number in Java through while loop, for loop, recursion and finally, using StringBuilder class. We have introduced the basic algorithm, looked at
time and space complexity and given examples of each technique. We also talked about potential issues that might occur, such as handling leading zeros and negative numbers. With these techniques, we
should be well equipped to solve number reversal problems in Java to make effective and precise solutions. For further coding interviews as well as for real-time projects, these concepts are
necessary for any Java developer.
Can these methods handle very large numbers?
For very large numbers, especially those exceeding standard data types, consider using libraries that support arbitrary precision arithmetic.
How do these methods handle negative numbers?
All methods can handle negative numbers by checking the sign and ensuring the reversed number retains the negative sign.
Are there any built-in Java functions to reverse a number?
There is no built-in function in Java to reverse a number but StringBuilder has a reverse() method to use on the string format of the number.
Why is handling leading zeros important in number reversal?
Handling leading zeros ensures the reversed number is correctly represented and doesn't include unnecessary zeros. | {"url":"https://herovired.com/learning-hub/topics/how-to-reverse-a-number-in-java/","timestamp":"2024-11-06T12:50:22Z","content_type":"text/html","content_length":"152845","record_id":"<urn:uuid:e1099ff9-2296-4b49-91e3-edffc84cf6d4>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00283.warc.gz"} |
An effective spin-orbit Hamiltonian is derived for a spin-1/2 trimerized kagome antiferromagnet in the second-order of perturbation theory in the ratio of two coupling constants. Low-energy singlet
states of the obtained model are mapped to a quantum dimer model on a triangular lattice. The quantum dimer model is dominated by dimer resonances on a few shortest loops of the triangular lattice.
Characteristic energy scale for the dimer model constitutes only a small fraction of the weaker exchange coupling constant.Comment: 7 pages, 3 figure
The power-law temperature dependences of the specific heat, the nuclear relaxation rate, and the thermal conductivity suggest the presence of line nodes in the superconducting gap of Sr2RuO4. These
recent experimental observations contradict the scenario of a nodeless (k_x+ik_y)-type superconducting order parameter. We propose that interaction of superconducting order parameters on different
sheets of the Fermi surface is a key to understanding the above discrepancy. A full gap exists in the active band, which drives the superconducting instability, while line nodes develop in passive
bands by interband proximity effect.Comment: 4 pages, 1 figur
Above the saturation field, geometrically frustrated quantum antiferromagnets have dispersionless low-energy branches of excitations corresponding to localized spin-flip modes. Transition into a
partially magnetized state occurs via condensation of an infinite number of degrees of freedom. The ground state below the phase transition is a magnon crystal, which breaks only translational
symmetry and preserves spin-rotations about the field direction. We give a detailed review of recent works on physics of such phase transitions and present further theoretical developments.
Specifically, the low-energy degrees of freedom of a spin-1/2 kagom\'e antiferromagnet are mapped to a hard hexagon gas on a triangular lattice. Such a mapping allows to obtain a quantitative
description of the magnetothermodynamics of a quantum kagom\'e antiferromagnet from the exact solution for a hard hexagon gas. In particular, we find the exact critical behavior at the transition
into a magnon crystal state, the universal value of the entropy at the saturation field, and the position of peaks in temperature- and field-dependence of the specific heat. Analogous mapping is
presented for the sawtooth chain, which is mapped onto a model of classical hard dimers on a chain. The finite macroscopic entropies of geometrically frustrated magnets at the saturation field lead
to a large magnetocaloric effect.Comment: 22 pages, proceedings of YKIS2004 worksho
Nearest-neighbor Heisenberg antiferromagnet on a face-centered cubic lattice is studied by extensive Monte Carlo simulations in zero magnetic field. The parallel tempering algorithm is utilized,
which allows to overcome a slow relaxation of the magnetic order parameter and fully equilibrate moderate size clusters with up to N ~ 7*10^3 spins. By collecting energy and order parameter
histograms on clusters with up to N ~ 2*10^4 sites we accurately locate the first-order transition point at T_c=0.4459(1)J.Comment: 5 pages, 5 figure
Competing ferro- and antiferromagnetic exchange interactions may lead to the formation of bound magnon pairs in the high-field phase of a frustrated quantum magnet. With decreasing field, magnon
pairs undergo a Bose-condensation prior to the onset of a conventional one-magnon instability. We develop an analytical approach to study the zero-temperature properties of the magnon-pair
condensate, which is a bosonic analog of the BCS superconductors. Representation of the condensate wave-function in terms of the coherent bosonic states reveals the spin-nematic symmetry of the
ground-state and allows one to calculate various static properties. Sharp quasiparticle excitations are found in the nematic state with a small finite gap. We also predict the existence of a
long-range ordered spin-nematic phase in the frustrated chain material LiCuVO4 at high fields.Comment: 5 pages, final versio
Effect of structural disorder is investigated for an $XY$ pyrochlore antiferromagnet with continuous degeneracy of classical ground states. Two types of disorder, vacancies and weakly fluctuating
exchange bonds, lift degeneracy selecting the same subset of classical ground states. Analytic and numerical results demonstrate that such an "order by structural disorder" mechanism competes with
the effect of thermal and quantum fluctuations. Our theory predicts that a small amount of nonmagnetic impurities in $\rm{Er_2Ti_2O_7}$ will stabilize the coplanar $\psi_3$ ($m_{x^2-y^2}$) magnetic
structure as opposed to the $\psi_2$ ($m_{3z^2-r^2}$) state found in pure material | {"url":"https://core.ac.uk/search/?q=author%3A(Zhitomirsky%20M.%20E.)","timestamp":"2024-11-14T07:19:27Z","content_type":"text/html","content_length":"118948","record_id":"<urn:uuid:e0dbf61d-d3b6-4cc2-8685-8a6681aa5a61>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00236.warc.gz"} |
Ask Uncle Colin: The Twelve Coin Puzzle
Dear Uncle Colin,
I have twelve coins, one of which is counterfeit. I don’t know if it’s heavier or lighter than the others, but I’m allowed three goes on a balance scale to determine which coin it is and whether
it’s light or heavy. Can you help?
Some Coins Are Light, Evidently
Hi, SCALE, and thanks for your message!
This is a classical puzzle, but one I can never remember the answer to and have to work out fresh each time. So I’m especially grateful: this time, I got a Flash of Insight that made it possible to
(If you want to solve it yourself, now would be an excellent time to do so: below the line are spoilers.)
The possibilities
The Flash of Inspiration was that:
• There are 24 possible solutions to the problem (any one of the 12 coins could be heavy, or any one of them could be light)
• If each weighing reduces the number of possibilites to about a third of what they were before, we’ll have a solution!
In particular, the first weighing needs to reduce the solution space from 24 elements to eight.
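(A quick sanity check that three weighings could ever be enough: each weighing has three possible outcomes - left pan down, right pan down, or balance - so three weighings can distinguish at most 3 × 3 × 3 = 27 cases, which just covers the 24 possibilities.)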
Weighing 1
Pick eight of the coins and weigh them four against four. There are two possibilities.
If one of the pans drops, then we have four coins that are potentially heavy, four that are potentially light, and the remaining four we know are genuine. This is situation A; our solution space now
has eight elements.
If the pan stays level, then one of the remaining four coins is fake, and we don’t know whether it’s light or heavy. This is situation B, which also has eight elements.
Weighing 2, situation A
This is the one that took me the longest to figure out: with eight possible solutions, I need a weighing that narrows things down to three (or two) possibilities, no matter what happens.
My first thought was to weigh three of the potentially heavy coins and three of the potentially light coins against six genuine coins - but unfortunately, I only have four genuine coins, so I can’t
do that.
Instead, the trick is to put two possibly-heavy coins and a possibly-light coin in each pan.
If one of the pans drops, then either one of the two heavy coins on that side or the light coin on the other is the counterfeit one - there are now three possibilities. (This is situation A1).
If they balance, then we know our counterfeit must be one of the two coins we didn't just weigh - both of which we suspect of being light - this is situation A2.
Weighing 2, situation B
If we weigh three of the suspect coins against three of the genuine coins (this time, we have plenty of those), either the suspect coins will drop (in which case one of them is heavy - B1), rise (in
which case one of them is light - B2) or not move (in which case the remaining suspect coin is counterfeit - B3).
Now we’ve got six possible situations to handle for the final weighing.
Weighing 3
• A1: We have two possibly-heavy coins against one possibly-light coin. Weigh the two possibly-heavy coins; if one drops, we know it’s heavy; if neither does, the other must be light.
• A2: We have two possibly-light coins. Weigh one of them against a genuine coin: if it rises, it's the light counterfeit; if the pans balance, the other suspect is the light one.
• B1: We have three possibly-heavy coins. Weigh two against each other: if one pan drops, it contains the heavy coin; if neither does, then the other is heavy.
• B2: We have three possibly-light coins. Do the same as for B1, with the obvious substitutions.
• B3: We have the counterfeit coin, but don't know whether it's heavy or light. Weigh it against a genuine coin: the direction the pan moves tells us which it is.
And there you have it! We’ve identified the bad coin and how it’s bad.
Hope that helps!
- Uncle Colin
subscribe via RSS | {"url":"https://www.flyingcoloursmaths.co.uk/ask-uncle-colin-the-twelve-coin-puzzle/","timestamp":"2024-11-10T01:05:35Z","content_type":"text/html","content_length":"11473","record_id":"<urn:uuid:6d2314d4-eb89-4b6a-9930-a9c39a93e0cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00452.warc.gz"} |
Graphing Cubic Equations
The applet below has 4 different cubic functions, including their roots. You can cycle through the functions, as well as change some values of the functions. Play around with the app, and then answer
the questions below in your notebook.
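From the descriptions in the questions, the four functions are presumably of the factored forms

$$f_1(x) = a(x-b)(x-c)(x-d), \quad f_2(x) = a(x-b)^2(x-c), \quad f_3(x) = a(x-b)^3, \quad f_4(x) = a(x-b)(x^2+cx+d)\ \text{with}\ c^2 - 4d < 0,$$

though the exact functions in the applet may differ.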
Question 1: Consider the function f_1(x), which has three real, distinct zeros: b, c, and d, and a leading coefficient a. How does "a" affect the graph? What is the geometrical significance of b, c, and d? Draw some examples, illustrating the above answers.
Question 2: Consider the function f_2(x), which has two real zeroes, with one repeated, and a leading coefficient a. What is the geometrical significance of the squared factor? Draw an example, illustrating the above answers.
Question 3: Consider the function f_3(x), which has one zero, repeated three times, and a leading coefficient a. What is the geometrical significance of the cubed factor? Draw an example, illustrating the above answer.
Question 4: Consider the function f_4(x), which has one real zero, and two complex zeros (so
an irreducible quadratic, with a negative discriminant). Compare this function to the previous function, f_3(x). Draw an example, illustrating the above answer. | {"url":"https://www.geogebra.org/m/eSSFU4TK","timestamp":"2024-11-05T09:23:04Z","content_type":"text/html","content_length":"90961","record_id":"<urn:uuid:366bbdd1-dddc-47a4-acec-85ff29e2e0c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00537.warc.gz"} |
New Modeling for Generation of Normal and Abnormal Heart Rate Variability Signals
1. Introduction
The heart produces a series of electrical potentials, and these electrical signals can be recorded by installing electrodes on the chest, the left hand, and the right foot. This type of signal is called the electrocardiogram, or ECG. Figure 1 shows an ECG signal.
The components of this signal are the P, Q, R, S and T waves. The R component has a greater amplitude than the other components, so if we extract the R peaks by a suitable method, each peak represents one heartbeat. If the time between successive extracted R peaks is plotted as a function of time, we obtain a diagram like the one in Figure 2, which is called the heart rate (HR) signal. From the HR signal vector we can derive a new signal called Heart Rate Variability (HRV), as in Figure 3 and Figure 4.
This research is carried out based on modeling of biological signals. Thus, HRV signals can be synthetically produced by mathematical equations that are used as input for the IPFM model. Hence, this
project is performed via modeling and analyzing mathematical relations. This method was chosen because by changing the inputs of IPFM model we can produce synthetic HRV signal which is similar to
natural signals (the one taken from human body), and we can model several types of diseases that show their effect on the signal. So it can be concluded that this model is a comprehensive model for
generating synthetic HRV signal because it is capable of producing several types of HRV signals. Hence, this type of study and its implementation is considered applied. This research requires several
sequences of chaotic signals which can be obtained via relations and initial conditions of the existing resources. In this study, Guyton’s physiology book can be used as the reference to know the
function and physiology of the heart and all the nerves of the sympathetic and parasympathetic effects on cardiac signals
2. Review of Previous Research
Jafarnia Dabanloo et al. in 2004 [1] proposed a model based on RBF neural network to produce ECG signal synthetically. The results of this study suggested a low error in producing natural signals.
This research was proposed by using a neural network with radial basis functions (RBF) in a nonlinear dynamical model, with the advantage that simulation was performed on a wide range of
physiological signals. Finally, the accuracy of the model was evaluated by the introduced error function. The mean of error during 100 seconds by using 20 neurons was less than 2.5 for the proposed
By applying the Zeeman nonlinear model, Jafarnia Dabanloo et al. (2006) produced HRV signal with a cycle of ECG signals using neural network. In the results of their survey they reported the effects
of breathing signals and production of Mayer waves in the power spectrum of the obtained HRV signal [2] .
In another study Jafarnia Dabanloo et al. (2013) presented a model based on IPFM but with random threshold. They used random sequences with normal distribution [3] .
In a research by Bailon et al. (2011), to produce HRV signal a method based on IPFM was introduced. Time threshold used in this study included non-static values. The proposed method is based on the
estimation of produced signals which are dependent on autonomic nervous system using methods from IPFM model with fixed
Figure 4. Power spectrum density of HRV signal for a healthy person with three peaks that called VLF, LF (Mayer), HF (RSA) [4] .
threshold and this case demonstrates the need for a fixed time-dependent amount to produce signals of the heart rate. Finally, it was shown that the results of this method matched with the values
obtained from physiological signals of human body [5] .
In a research, by using physiological IPFM model, Seydnejad et al. [6] obtained physiological data of human body. Most of the methods mentioned in this research focused on low-pass filters to access
the modulated signals. In this study, a new method based on the theory of vector space was presented. This new method is based on time-varying uses of IPMF model (TVTIPMF). This model analyzes
signals obtained from TVTIPMF as series based on special functions. Then, the matrices obtained from input signals can be used to solve equations in parametric form. In a particular case,
applications of this method are used to produce R-R series obtained from SA node to evaluate peripheral nerve functions and the effect of stress.
In 2011, Michele Orini et al. was presented two different methods for producing HRV signals with controlled characteristics and structure of time-frequency (TF) for use in non-stationary HRV analysis
[7] . This signal is a random process that TF structure by selecting the time period corresponding to the frequency of the moment, and can either be determined by the shape of the TF function. This
method consists of three steps: 1) Select the structure of TF in signal by selecting a set of design parameters; 2) The automatic detection parameters of the model; 3) Synthesis of random signals.
Also two measurements for the proper assessment of simulated signals are conducted. They used this framework to model a wide variety of non-stationarities in the modulated heart rate signal during
stress test and were effects of music on emotion of human. Proposed model was used for ac- cess to the smoothing distribution pseudo Wigner-Ville distribution (SPWVD) to pattern of HRV [7] .
In 2012, Ali Almasi et al. were presented a dynamic model to generate synthetic Phonocardiogram signals [8] . This model was based on the PCG morphology and consists of three differential equations
and can be produce several type of normal PCG signal. Bit to bit PCG change is important in shape of this signal, therefore parameters of model should be have some variety to generate it. This model
is inspired of dynamical model to generated electrocardiogram signal that proposed by McSharry et al., and can be used for biomedical signal processing techniques [8] .
In 2013, Diego Martin et al. were presented a stochastic model for Photoplethysmogram (PPG) signal [9] . In that paper, a model of chaotic Photoplethysmographic signal which is able to synthesize and
analyze number of other statistical signals was presented. For this purpose, the pulse signal was process to normalization of pulse in time domain. In the next step, a single-pulse-model that consist
ten parameters was designed. In third step, time variation in this vector of ten parameters can be approximated by autoregressive moving average models. Application of this model after decorrelation
stage that is allows parallel process of each element in the vector. PPG signals were used in this study [9] is 76 signals that 26 of these signals were received by the Omicron FT Surveyor device. In
all signals, 5 minutes with is sampling period Ts = 15 ms (Fs = 66.6 Hz), and the signal detected by an optical sensor attached to the right index finger of the samples, and this record were detected
in relaxation conditions. Experimental results show that the proposed model is able to preserve the main features of the reference signal. This is reach by linear spectral analysis and also by
comparison of the two measurements obtained from nonlinear analysis. The proposed model can be summarized as follows: 1) Tracking of physical activity; 2) Obtain static clinical parameters by the
model samples; 3) Recover of lost or missing signals [9] .
In this paper, without taking electrocardiogram signal from human body (and then obtaining HRV signal from it), this signal was obtained through a proposed mathematical model. Various signals derived
from the output of this model can be used for final analysis of the HRV signals such as arrhythmia detection and classification of ECG. One application of a dynamic model which is able to produce
synthetic ECG signals is the easy evaluation of diagnostic ECG signal processing devices. Such a model can also be used in signal compression and telemedicine. The overall objective of this article
is to achieve an applied mathematical model for modeling and synthesizing HR and HRV signals using IPFM model in which chaotic maps are considered as its threshold level.
3. Materials and Methods
3.1. Introduction
In this paper, a mathematical model for synthesizing heart rate variation signals (HRV) has been proposed. In the proposed model, the effect of sympathetic and parasympathetic nerves, and also, an
internal input to the SA node that all of which are effective in HRV signals are evaluated. First the input of IPFM model is considered as
3.2. Introducing Chaotic Maps That Used in This Research
This section represents the method of generating chaotic maps used in the present study.
3.2.1. Logistic Map
Equation (1) is called the function of logistic map which is widely used today in modeling especially for natural systems.
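The logistic map is conventionally written as

$$x_{n+1} = A\, x_n\, (1 - x_n),$$

which is presumably the form of Equation (1).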
Parameter A specifies the chaotic state of this map. This parameter is checked for the state of 0 ≤ x ≤ 1 and 0 ≤ A ≤ 4.
Figure 5 and Figure 6 show the initial curve and bifurcation diagram of a logistic map, respectively.
Figure 5. The initial curve of Logistic map.
Figure 6. The bifurcation diagram of a logistic map. For A < 1 the value z = 0 is an absorbing point. For 1 < A < 3, the map has a single stable non-zero fixed point.
3.2.2. Lorenz Model
It is a nonlinear system that is a simplified model of convection in the fluids. This model was proposed by meteorologist Edward Lorenz in 1963. Lorenz Model is based on simplification of
Navier-Stokes equations for fluids. The fluid motion and temperature disturbances can be mentioned with the three variables X (t), Y (t), and Z (t). These variables are not spatial variables. The
variable X is related to the time dependence of the fluid flow function. Variables Y and Z are related to time dependence of temperature deviations in areas far from linear areas of temperature that
are obtained for the non-convective mode. By using these variables, the equations of the Lorenz Model can be expressed as three interdependent differential equations as follows (Equation (2)).
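Written with the parameter names p, r and b used later in the paper, the standard Lorenz system is

$$\dot X = p\,(Y - X), \qquad \dot Y = X\,(r - Z) - Y, \qquad \dot Z = X Y - b Z,$$

which is presumably the form of Equation (2).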
In the first equation, p is the Prandtl number; r is proportional to the Rayleigh number; and b is a geometric factor.
3.2.3. Henon Map
Henon map is a two variables map which is defined by Equation (3).
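In its standard form (with the same parameters a and b quoted below), the Hénon map is

$$x_{n+1} = 1 - a\, x_n^2 + y_n, \qquad y_{n+1} = b\, x_n,$$

which is presumably the form of Equation (3).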
The diagram in Figure 7 shows the attractor of Henon map according to the parameters a = 1.4 and b = 0.3.
3.2.4. Tent Map
Tent map was defined based on Equation (4).
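The exact convention intended here is not recoverable; one common form compatible with a single parameter a in (0, 1), as used later in Section 3.4.1, is the skew tent map

$$x_{n+1} = \begin{cases} x_n / a, & 0 \le x_n < a, \\ (1 - x_n)/(1 - a), & a \le x_n \le 1, \end{cases}$$

but this should be read as an assumption rather than the paper's definition.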
Figure 7. Diagram of Henon map attractor.
The bifurcation diagram of the tent map is shown in Figure 8.
3.3. Human Normal ECG Signal
In this section we introduce a normal ECG signal and how we read it into MATLAB software. The signal as initially loaded is shown in Figure 9.
In the following figure, the first 10 seconds of the ECG signal are drawn so that it can be seen more clearly (see Figure 10).
Figure 8. Bifurcation diagram of tent map.
Figure 10. The first 10 seconds of initial ECG signal.
Then, to have a clean signal, noise deviation from the baseline, AC noise (power) and frequency interference were eliminated by applying the linear filters. Figure 11 shows the conversion of original
ECG signal to clean ECG signal. This figure indicates the removal operation of DC component from Original ECG signal.
Also, Figure 12 shows the removing operation baseline noise and the power noise of the original ECG signal. Now we have clean ECG signal (see Figure 13).
After performing the above operations, the R peak of the ECG signal can be extracted. Figure 14 shows the result of extracting the R peak of the ECG signal.
Then, according to the intervals of these extracted peaks, HR and the HRV (from the time difference between the peaks of the extracted R waves) are obtained from the HR. Figure 15 represents the two
signals of HR and resulted HRV for normal ECG signal.
Then, the density of power spectrum of the obtained HRV signal is calculated and it is represented in the output. Figure 16 shows the obtained density of power spectrum of the obtained which is
consistent with the previous normal samples [2] [3] [6] [10] .
3.4. Proposed Method
3.4.1. Chaotic Maps Used in Proposed Model
In the proposed model we first produce the desired chaotic maps with respect to relations and descriptions set forth in Section 3.2. In this scheme, the chaotic maps including Logistic, Henon,
Lorenz, and Tent were used. After coding each formula of these chaotic maps, the output of each chaotic map was saved in a separate matrix.
1) Logistic Map
Logistic map is defined with the initial value of x[0] = 0.1, and a matrix sequence and the values obtained from the model can be finally saved in a matrix. To generate chaotic sequence by logistic
mapping, the following equation was used. In this case, the initial value x (0) was set to 0.1. A is the control parameter that should be a positive, real and as 0 ≤ A ≤ 4. If the Logistic mapping
equation is generated by above relation in which A = 1.2, the system will be stable. If this quantity goes up to 3.57, the system will be chaotic.
2) Henon Map
Henon map was produced based on the Equation (4). Initial values x (0) and y (0) were set to 0.0239 and 0.0239, respectively. The chaotic parameters “a” and “b” were set 1.4 and 0.3, respectively.
Then, by using these values and these relations, the chaotic sequence was obtained. The values resulted from the chaotic sequences were saved until they could be used as a threshold in IPFM model.
3) Lorenz Map
Lorenz maps were produced by Equation (2). We’ll check and calculate the fixed points of this model. In
proposed model, the 0.3 as the initial value for x (0) and y (0) and z (0) and values of p = 10, r = 11 and
Figure 11. The tow diagrams above: Initial ECG signals with frequency spectrum; the tow diagrams below: The operation of removing the DC component of the ECG signal with its frequency spectrum.
Figure 12. Tow diagram above: Removing operation of the baseline deviation from ECG signal with its frequency spectrum; Tow diagrams below: The removing operation of the power noise from ECG signal
with its frequency spectrum.
Figure 14. ECG signal with extracted R peaks.
have been allocated to the parameters of the Lorenz model. In the results chapter, the outputs of Lorenz model are shown.
4) Tent Map
Figure 15. Above: HR signal obtained from ECG signal; Below: HRV signal obtained from normal HR signal.
Figure 16. HR signal, HRV signal and power spectrum density of the HRV signal obtained from a sample normal ECG signal.
Tent map was produced by Equation (5). In the proposed model, the initial value for x[n] of this mapping was considered 0.05 and this mapping can be defined as a sequel to 1000 entries. The parameter
“a” was also defined in the range of 0 to 1. As a result, after the values obtained from the chaotic mapping are normalized, they are used as threshold values in the IPFM model.
3.4.2. IPFM Model Used in Proposed Model
Heart Rate (HR) is a signal that controlled by the autonomic nervous system (ANS), and it contains of information about changes and signs of heart activity. ANS has two subsystems, sympathetic nerves
and parasympathetic nerves. The HR may be increased by sympathetic activity or decreased by parasympathetic activity [10] . In this study, a mathematical model was proposed to generate artificial HRV
signals. In this model, the effects of the sympathetic and parasympathetic nerves, as well as an internal input to the SA node, all of which influence the HRV signal, are examined. The sympathetic and parasympathetic contributions are denoted by S and P, respectively. The internal input to the SA node is represented with the index [0], which is treated as an assumed index in this model. In addition, the input S[1]
produces VLF component in HRV signal. As we know, the VLF is related to peripheral vascular and mechanisms of temperature regulation that have been produced by the sympathetic nervous system [10] .
Input P[1] and S[2] respectively can determine the effectiveness of parasympathetic and sympathetic mechanisms that are related to the pressure sensor (baroreceptory) that appear in the LF (or Mayer)
[2] . P[2] input is another part of the parasympathetic nervous impact on the heart, which its influence is visible in the power spectrum of HRV signal. This input is due to frequency changes of
breathing during inhalation and exhalation. These respiratory changes are effective in the HF (or RSA) related to power spectrum of HRV signal [2] .
All the above mentioned effects of increasing and decreasing heart rate by sympathetic and parasympathetic nerves have been presented in this model by Equation (5).
The input of the IPFM model was taken to be this combined modulating signal.
Then the integrator output is compared with a threshold level that was previously generated by the chaotic maps. In fact, each chaotic map defines the threshold used by one instance of the IPFM model; therefore, for different chaotic threshold levels the output of the IPFM model will be different, and these different outputs are taken as different simulated HRV signals. Where the value of the integrator output exceeds the threshold level, a pulse is generated at the IPFM model output and is considered as a heartbeat. In fact, each
pulse represents the occurrence or production of an R wave, which is calculated by Equation (7). Then, the HRV signal is achieved by calculating the difference in time of occurrence of the R wave.
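For reference, the classical IPFM model generates the beat times $t_k$ from a modulating signal $m(t)$ and a threshold $T$ through

$$\int_{t_k}^{t_{k+1}} \big(1 + m(t)\big)\, dt = T;$$

this is the standard form, not necessarily the paper's exact Equations (5)-(7), and here the threshold $T$ is drawn from the chaotic sequences rather than held fixed.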
Figure 17 and Figure 18 show the block diagram of the IPFM circuit used in this study.
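A minimal numerical sketch of this generation scheme in Java (all waveform parameters, the sampling step, and the use of the logistic map for the threshold are illustrative assumptions, not the paper's exact settings):

```java
import java.util.ArrayList;
import java.util.List;

public class IpfmHrvSketch {

    static double logistic(double x, double A) {
        return A * x * (1 - x);
    }

    // Map a chaotic value in (0,1) to a threshold that fluctuates slightly around meanRR.
    static double nextThreshold(double x, double meanRR) {
        return meanRR * (0.95 + 0.1 * x);
    }

    public static void main(String[] args) {
        double dt = 0.01;          // integration step in seconds (assumed)
        double duration = 300.0;   // seconds of simulated signal
        double meanRR = 0.8;       // nominal beat-to-beat interval in seconds (assumed)
        double A = 3.9;            // logistic-map parameter in the chaotic regime
        double x = 0.1;            // logistic-map state

        double integral = 0.0;
        double lastBeat = 0.0;
        x = logistic(x, A);
        double threshold = nextThreshold(x, meanRR);
        List<Double> rr = new ArrayList<>();   // successive R-R intervals = the HRV series

        for (double t = 0.0; t < duration; t += dt) {
            // Illustrative modulation with VLF, LF (Mayer) and HF (respiratory) components.
            double m = 0.02 * Math.sin(2 * Math.PI * 0.03 * t)
                     + 0.05 * Math.sin(2 * Math.PI * 0.10 * t)
                     + 0.05 * Math.sin(2 * Math.PI * 0.25 * t);
            integral += (1.0 + m) * dt;        // IPFM integrator
            if (integral >= threshold) {       // threshold crossing -> one heartbeat (R wave)
                rr.add(t - lastBeat);          // store the R-R interval
                lastBeat = t;
                integral = 0.0;                // reset the integrator
                x = logistic(x, A);            // advance the chaotic map
                threshold = nextThreshold(x, meanRR);
            }
        }
        System.out.println("Generated " + rr.size() + " beats; first RR interval = " + rr.get(0) + " s");
    }
}
```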
3.4.3. Feature Extraction
In this section, linear features such as the median, mean, variance, standard deviation, maximum amplitude, minimum amplitude, amplitude range and mode, and non-linear features such as the Lyapunov exponent, Shannon entropy, log entropy, threshold entropy, sure entropy and norm entropy, were extracted from all of the signals. This feature extraction was done in order to compare the extracted feature values of the artificial signals generated by this method with those of the normal ECG signal, giving a measure of the similarity and accuracy of the proposed model's output. Table
1 and Table 2 show the values of linear features extracted from the normal HR and HRV signal.
Figure 17. IPFM model used in the proposed model.
Figure 18. Overall schematic of the simulated HRV signal generation by the IPFM model.
Table 1. Linear features extracted from the natural HR signal.
Table 2. Linear features extracted from the natural HRV signal.
3.4.4. Modeling and Generation of Abnormal HRV Signal
In this paper, two patient conditions, called high sympathetic balance and Cardiovascular Autonomy Neuropathy (CAN), which are detected and evaluated through HRV signals, were simulated. These signals were simulated by changing the values of some coefficients of the normal simulated signal and by using the frequency features extracted from these signals. To model the HRV signal for these patients, some coefficients of the basic equation of the IPFM model were adjusted.
For final generation of these abnormal signals, frequency features such as energy of low frequency band (EL), energy of high frequency band (HL), ratio of energy in low frequency band to the energy
in high frequency band (EL/EH), ratio of energy in low frequency band to the energy in all frequency band (EL/ET) and ratio of energy in high frequency band to the energy in all frequency band (EH/
ET) from abnormal signals were extracted and compared with these extracted values from normal signals.
In case of diseases high sympathetic balance, feature as ratio of energy in low frequency band to the energy in high frequency band (EL/EH) in compared with the same ratio in normal signal should
increase at about twice, and also feature as ratio of energy in low frequency band to the energy in all frequency band (EL/ET) in compared with the same ratio in normal signal should increase and
finally feature as ratio of energy in high frequency band to the energy in all frequency band (EH/ET) in compared with the same ratio in normal signal should re- main unchanged. Also, for simulation
of Cardiovascular Autonomy Neuropathy (CAN) signal, feature as ratio of energy in low frequency band to the energy in high frequency band (EL/EH) in compared with the same ratio in normal signal
should has value is close to zero, and feature as ratio of energy in low frequency band to the energy in all frequency band (EL/ET) in compared with the same ratio in normal signal should increase
and has value is close to zero, and finally feature as ratio of energy in high frequency band to the energy in all fre- quency band (EH/ET) in compared with the same ratio in normal signal should be
show a slight decrease. These proportions and values of characteristics of the diseases in compared with the normal HRV signals based on the physiological changes that cause these diseases are on the
cardiovascular system (and subsequent created changes on ECG, HR and HRV signals) were included are calculated (from understanding of References [1] [2] [10] ).
4. Results
First, the output of chaotic maps used in this study is presented. Figure 19 shows the output of Logistic chaotic map.
Figure 20 shows the output of Henon chaotic map.
Figure 21 shows the output of Lorenz chaotic map.
Figure 22 shows the output of Tent chaotic map.
This part describes how the results for X (the IPFM model input) were calculated in the proposed model. In the IPFM model, the threshold level obtained from the different chaotic maps is compared with the output of the IPFM model integrator. If the integrator output exceeds the threshold level, one pulse is produced at the output. The pulse time (t[i]) is in fact the time at which an R wave occurs in the HR signal, and the differences between these times produce the HRV signal. The results of the proposed model based on the different chaotic maps are shown in Figures 23-27.
After achieving the necessary results of the proposed model, these results will be compared statistically with the results of normal signal. Features from the results of the proposed model such as
mean, median, variance, standard deviation, maximum amplitude, minimum amplitude, mode and amplitude range were compared with the normal sample. As a result, the model error and the accuracy of the
model were obtained.
Then the above mentioned features were obtained from the simulated HRV signals achieved in the previous stage. This extraction of features is done to compare the features resulted from simulated HRV
signals with HRV signal which has been obtained from normal and natural ECG signal to find the similarities and accuracy of output in proposed model with normal and natural ECG signal. Tables 3-10
show the linear features extracted from the simulated HR and HRV signal.
Also, some non-linear features of the time series were extracted: the Lyapunov exponent, Shannon entropy, log entropy, threshold entropy, sure entropy and norm entropy. Figures 28-31 show the results of this feature extraction.
The results of the proposed model and comparing to the results achieved from normal and natural HRV signal showed that the proposed model has a good performance in modeling the HRV signal.
5. Discussions
In this study, an applied mathematical model has been presented to generate artificial HRV signals, based on the IPFM model with chaotic threshold levels. The project is therefore carried out as a model built on mathematical analysis. This approach was chosen because, by changing the input of the IPFM model, we can produce a synthetic HRV signal that is similar to the natural signal (the one taken from the human body), and we can also model several types of diseases that show their effects on the signal. Considering that the model is capable of producing several types of HRV signals, it can be concluded that it is a comprehensive model for synthesizing this type of signal (in comparison with the results of references [1] [2] [10] ). This study can therefore be considered an applied study in terms of its type and method.
The results of this study indicate that the proposed model performs well and that there are strong similarities between the signals it produces and real HRV signals. In this study, the coupling between the sympathetic and parasympathetic nervous systems was not modeled; nevertheless, we achieved acceptable results. The results were closely correlated with the real data, which confirms the effectiveness of the proposed model.
Figure 23. Results of IPFM model output base on threshold level of logistic map. Above figure: HR signal; Below figure: HRV signal.
Figure 24. Results of IPFM model base on threshold level used Henon map (x sequence). Above figure: HR signal; Below figure: HRV signal.
Figure 25. Results of IPFM model output base on threshold level of Henon map (y sequence). Above figure: HR signal; Below figure: HRV signal.
Various signals derived from the output of this model can be used for final analysis of the HRV signals, such as arrhythmia detection and classification of ECG and HRV signals. One of the applications of the proposed model is the easy evaluation of diagnostic ECG signal processing devices. Such a model can also be used in signal compression and telemedicine.
Figure 26. Results of IPFM model output base on threshold level of Lorenz map. Above figure: HR signal; Below figure: HRV signal.
Figure 27. Results of IPFM model output base on threshold level of Tent map. Above figure: HR signal; Below figure: HRV signal.
Table 3. Linear features extracted from simulated HR signal by proposed IPFM model using threshold chaotic level of logistic map.
Table 4. Linear features extracted from simulated HRV signal by proposed IPFM model using threshold chaotic level of logistic map.
Table 5. Linear features extracted from simulated HR signal by proposed IPFM model using threshold chaotic level of henon map.
Table 6. Linear features extracted from simulated HRV signal by proposed IPFM model using threshold chaotic level of henon map.
Table 7. Linear features extracted from simulated HR signal by proposed IPFM model using threshold chaotic level of lorenz map.
Table 8. Linear features extracted from simulated HRV signal by proposed IPFM model using threshold chaotic level of lorenz map.
Table 9. Linear features extracted from simulated HR signal by proposed IPFM model using threshold chaotic level of tent map.
Table 10. Linear features extracted from simulated HRV signal by proposed IPFM model using threshold chaotic level of tent map.
Figure 28. Comparison of linear feature extracted from natural HR signal and simulated HR signals with proposed method using logistic map, henon map, Lorenz map and tent map, respectively.
Figure 29. Comparison of non-linear feature extracted from natural HR signal and simulated HR signals with proposed method using logistic map, henon map, Lorenz map and tent map, respectively.
Figure 30. Comparison of linear feature extracted from natural HRV signal and simulated HRV signals with proposed method using logistic map, henon map, Lorenz map and tent map, respectively.
Figure 31. Comparison of non-linear feature extracted from natural HRV signal and simulated HRV signals with proposed method using logistic map, henon map, Lorenz map and tent map, respectively.
The major advantage of this study is the simplicity and good performance of the proposed model. Figures 28-31 compare all of the results in this study: the charts show the features extracted from the natural signals alongside those extracted from the signals simulated by the proposed method.
As Figures 28-31 make clear, linear features of the normal HR signal such as the median, mean, maximum amplitude, minimum amplitude and mode are close to the corresponding features extracted from all of the simulated HR signals. The variance, standard deviation and amplitude range are the features of the simulated HR signals that are not close to the natural HR signal. Also, non-linear features of the normal HR signal such as the Lyapunov exponent, Shannon entropy, log entropy, threshold entropy, sure entropy and norm entropy are very close to the features extracted from all of the simulated HR signals.
For the HRV signal, linear features of the normal HRV signal such as the median, mean, variance, standard deviation, maximum amplitude, minimum amplitude and mode are close to the features extracted from all of the simulated signals (except for the maximum amplitude in the Lorenz-map results). Also, non-linear features of the normal HRV signal such as the Lyapunov exponent and the average of all the entropies are very close to the features extracted from all of the simulated HRV signals, although each entropy taken singly (especially the Shannon and log entropies) is not very close to the natural HRV signal. It should be noted that these extracted features were not reported in previous research; we extracted them in this paper to show that the results of the proposed method are acceptable in comparison with similar studies.
6. Conclusion
In general, the proposed method can generate artificial HR and HRV signals that closely match the normal HR and HRV signals obtained from the human heart. The various signals derived from the output of this model can be used in the final analysis of HRV signals, for example in arrhythmia detection and in the classification of ECG and HRV signals. One application of the proposed model is the easy evaluation of diagnostic ECG signal-processing devices; such a model can also be used in signal compression and telemedicine. Because the signals generated by this model contain a large number of cardiac cycles while occupying little storage, they can easily be uploaded to the internet for educational purposes, and different models of cardiac signals can be produced. In addition, by changing the coefficients of the parameters in Equation (5), different normal and pathological signals can be produced from a single prototype model, which is useful for teaching with these signals.
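For readers who want a feel for how a chaotic map can drive an IPFM-style beat generator, the sketch below uses a textbook integral pulse frequency modulation loop modulated by a logistic map. It is only a generic illustration under assumed parameter values (mean RR interval, modulation depth, logistic-map parameter r = 3.9); it is not the paper's Equation (5) and does not reproduce the authors' threshold chaotic levels.

```python
import numpy as np

def ipfm_logistic(n_beats=200, mean_rr=0.8, depth=0.1, r=3.9, x0=0.3, dt=0.001):
    """Generate beat times with a standard IPFM integrator modulated by a logistic map.

    The integrator accumulates (1 + m)/mean_rr; a beat is emitted each time the
    integral crosses 1, and the logistic map supplies a new modulation value m
    per beat. All parameter values here are illustrative assumptions.
    """
    x = x0
    m = depth * (2 * x - 1)          # map [0, 1] chaos onto [-depth, +depth]
    beat_times, integral, t = [], 0.0, 0.0
    while len(beat_times) < n_beats:
        integral += dt * (1.0 + m) / mean_rr
        t += dt
        if integral >= 1.0:
            beat_times.append(t)
            integral -= 1.0
            x = r * x * (1.0 - x)    # logistic map update, one step per beat
            m = depth * (2 * x - 1)
    return np.asarray(beat_times)

beats = ipfm_logistic()
rr_intervals = np.diff(beats)        # simulated RR-interval (HRV) series in seconds
hr = 60.0 / rr_intervals             # corresponding instantaneous HR in bpm
print(rr_intervals[:5], hr[:5])
```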
Acknowledgements
This work was supported by the Research Fund of Islamic Azad University, Dezful Branch, under the research project "Application Mathematical Model for Artificial Generation of Heart Rate Variability Using
IPFM Model with Chaotic Maps”. | {"url":"https://www.scirp.org/journal/paperinformation?paperid=52831","timestamp":"2024-11-13T08:46:19Z","content_type":"application/xhtml+xml","content_length":"140935","record_id":"<urn:uuid:234421cf-8df4-4718-876b-1b3dc4525c72>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00889.warc.gz"} |
Interest Rate Conversion
When interest on a loan is paid more than once a year, the effective interest rate of the loan will be higher than the nominal or stated annual rate. For instance, if a loan carries an interest rate of 8% p.a., payable semi-annually, the effective annualized rate is 8.16%, obtained from the conversion formula [(1+8%/2)^2-1]. We may, at times, need to compare an interest rate payable at one frequency with an interest rate payable at a different frequency. For instance, suppose there are two offers for a loan: one at an interest rate of 8% p.a. with interest payable at half-yearly intervals, and the other at 7.9% p.a. with interest payable at monthly intervals. Which one is advantageous in terms of effective annualized cost? Use the Interest Rate Converter to compare such interest rates and to convert an interest rate payable at one frequency into its equivalent payable at another frequency.
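As an illustration of the comparison posed above, the short sketch below (not part of the original page) converts both quoted rates to effective annual rates; under these inputs the 8% semi-annual offer works out to roughly 8.16% and the 7.9% monthly offer to roughly 8.19%, so the former is the slightly cheaper loan.

```python
def effective_annual_rate(nominal_rate, periods_per_year):
    """Effective annual rate of a nominal rate compounded `periods_per_year` times."""
    return (1 + nominal_rate / periods_per_year) ** periods_per_year - 1

offer_a = effective_annual_rate(0.08, 2)    # 8% p.a., paid half-yearly  -> ~0.0816
offer_b = effective_annual_rate(0.079, 12)  # 7.9% p.a., paid monthly    -> ~0.0819
print(f"Offer A: {offer_a:.4%}, Offer B: {offer_b:.4%}")
```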
Interest Rate Conversion in Excel:
If you are interested in conversion formulae that can be used in Excel to convert an interest rate from one frequency to an equivalent rate at another frequency, they are as follows, where 'Rate' is the nominal interest rate quoted at the 'From' frequency:
| From \ To | Monthly | Quarterly | Half-Yearly | Annual |
|---|---|---|---|---|
| Monthly | =((1+Rate/12)^1-1)*12 | =((1+Rate/12)^3-1)*4 | =((1+Rate/12)^6-1)*2 | =((1+Rate/12)^12-1)*1 |
| Quarterly | =((1+Rate/4)^(1/3)-1)*12 | =((1+Rate/4)^1-1)*4 | =((1+Rate/4)^2-1)*2 | =((1+Rate/4)^4-1)*1 |
| Half-Yearly | =((1+Rate/2)^(1/6)-1)*12 | =((1+Rate/2)^(1/2)-1)*4 | =((1+Rate/2)^(1/1)-1)*2 | =((1+Rate/2)^2-1)*1 |
| Annual | =((1+Rate)^(1/12)-1)*12 | =((1+Rate)^(1/4)-1)*4 | =((1+Rate)^(1/2)-1)*2 | =((1+Rate)^(1/1)-1)*1 |
You may extend the above logic further and derive the formulae for converting interest rates from other frequencies, such as weekly, daily or hourly, into annualized rates as well as into various other frequencies. The Rate Conversion Table spreadsheet demonstrates the use of the above formulas to convert an interest rate from one frequency to another.
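The whole table collapses into one general rule: convert the 'From' rate to an effective annual rate, then restate that at the 'To' frequency. The following sketch (an illustration, not code from this page) implements that rule; the frequency labels used as dictionary keys are my own naming.

```python
FREQ = {"monthly": 12, "quarterly": 4, "half-yearly": 2, "annual": 1}

def convert_rate(rate, from_freq, to_freq):
    """Restate a nominal rate quoted with `from_freq` compounding at `to_freq` compounding."""
    m, n = FREQ[from_freq], FREQ[to_freq]
    effective_annual = (1 + rate / m) ** m - 1          # grow for one full year
    return ((1 + effective_annual) ** (1 / n) - 1) * n  # restate as nominal at new frequency

# 8% p.a. payable half-yearly expressed as an equivalent monthly-payable rate.
print(convert_rate(0.08, "half-yearly", "monthly"))     # ~0.0787, i.e. about 7.87% p.a.
```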
By the way, if interest (R) is compounded at infinitely short intervals (which is called continuous compounding), what is the annualized equivalent of this rate? It can be computed in Excel with this
function: =EXP(R)-1 | {"url":"https://vindeep.com/Corporate/InterestRateConversion.aspx","timestamp":"2024-11-09T06:50:48Z","content_type":"text/html","content_length":"22771","record_id":"<urn:uuid:4759ddc5-55b6-4989-a7cb-2dc8346cce06>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00429.warc.gz"} |
Dipartimento di Ingegneria informatica, automatica e gestionale
It follows from the Marcus-Spielman-Srivastava proof of the Kadison-Singer conjecture that if G = (V, E) is a ∆-regular dense expander then there is an edge-induced subgraph H = (V, E_H) of G of
constant maximum degree which is also an expander. As with other consequences of the MSS theorem, it is not clear how one would explicitly construct such a subgraph. We show that such a subgraph
(although with quantitatively weaker expansion and near-regularity properties than those predicted by MSS) can be constructed with high probability in linear time, via a simple algorithm. Our
algorithm allows a distributed implementation that runs in O(log n) rounds and does O(n) total work with high probability. The analysis of the algorithm is complicated by the complex dependencies
that arise between edges and between choices made in different rounds. We sidestep these difficulties by following the combinatorial approach of counting the number of possible random choices of the
algorithm which lead to failure. We do so by a compression argument showing that such random choices can be encoded with a non-trivial compression. Our algorithm bears some similarity to the way
agents construct a communication graph in a peer-to-peer network, and, in the bipartite case, to the way agents select servers in blockchain protocols. | {"url":"http://www.corsodrupal.uniroma1.it/publication/19989","timestamp":"2024-11-03T07:36:06Z","content_type":"text/html","content_length":"26338","record_id":"<urn:uuid:cc089db9-6b25-439d-8fe6-b46fecc3ccc8>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00245.warc.gz"} |
Watts to Lumens Calculator: Quickly Convert Watts to Brightness! - Calculator Pack
Watts to Lumens Calculator
Are you struggling to understand the brightness of a light bulb? Do you want to know its exact brightness in terms of lumens? Look no further: our user-friendly Watts to Lumens Calculator determines the brightness of a bulb in lumens based on its wattage, so you can easily compare different bulbs and make an informed choice. Simply input the wattage of the bulb and let the calculator do the rest, with instant and accurate results. Try it out now and experience a hassle-free way of finding the right lighting for your needs.
How to Use the Watts to Lumens Calculator
The Watts to Lumens Calculator is a tool that allows users to convert watts to lumens, which is an important conversion for anyone interested in lighting, energy consumption, or designing lighting
systems. With this calculator, users can quickly and easily determine how many lumens are produced by a certain number of watts, helping them to make informed decisions about lighting solutions.
To use the Watts to Lumens Calculator, there are two required input fields: watts and lumens per watt.
The watts field refers to the amount of power consumed by the light source, while lumens per watt is a measure of the efficiency of the light source in converting electrical power into visible light.
The calculator uses these two inputs to calculate the total number of lumens produced by the light source.
Providing accurate input data is crucial for obtaining correct results from the calculator.
If the user does not know the exact wattage or lumens per watt rating of their light source, they may need to consult the manufacturer's specifications or use a measuring device to obtain accurate
data. Inaccurate data input can lead to incorrect results and potential safety hazards.
The output fields of the Watts to Lumens Calculator are watts, lumens per watt, and lumens.
The watts and lumens per watt fields display the input values provided by the user, while the lumens field displays the calculated result of the conversion. The lumens field is the most important
output value, as it provides the user with the total amount of visible light produced by their light source.
The formula used by the Watts to Lumens Calculator is simple:
lumens = watts x lumens per watt.
This means that to calculate the total lumens produced by a light source, the calculator multiplies the number of watts by the lumens per watt rating of the light source.
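A minimal code equivalent of that formula is shown below; it is just an illustration of the arithmetic the calculator performs, not the site's actual implementation, and the luminous-efficacy value in the second example is a typical figure for an LED bulb rather than a measured one.

```python
def watts_to_lumens(watts, lumens_per_watt):
    """Total light output (lumens) from power draw (watts) and efficacy (lm/W)."""
    return watts * lumens_per_watt

def lumens_to_watts(lumens, lumens_per_watt):
    """Inverse conversion: power needed to reach a target brightness."""
    return lumens / lumens_per_watt

print(watts_to_lumens(60, 100))   # 6000 lumens, matching the worked example below
print(lumens_to_watts(800, 90))   # ~8.9 W LED for an 800-lumen (60 W-equivalent) bulb
```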
Illustrative Example:
Let's say we have a light bulb that consumes 60 watts of power and has a lumens per watt rating of 100. To use the Watts to Lumens Calculator, we would input 60 for the watts field and 100 for the
lumens per watt field. The calculator would then output a result of 6000 lumens (60 watts x 100 lumens per watt).
Illustrative Table Example:

| Watts | Lumens per Watt | Lumens |
|---|---|---|
| 60 | 100 | 6000 |
In conclusion, the Watts to Lumens Calculator is a valuable tool for anyone looking to calculate the total amount of visible light produced by a light source. By inputting accurate data into the
calculator, users can obtain reliable results that will help them make informed decisions about lighting solutions. | {"url":"https://calculatorpack.com/watts-to-lumens-calculator/","timestamp":"2024-11-09T07:25:51Z","content_type":"text/html","content_length":"32994","record_id":"<urn:uuid:57cc703f-9ad5-4e65-9ae9-0a83800e025b>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00063.warc.gz"} |
Online Math Dictionary: F
Easy to understand math definitions for K-Algebra mathematics
Just scroll down or click on the word you want and I'll scroll down for you!
face of a polyhedron, factor in arithmetic, factor in algebra, factorial, Fibonacci numbers, finite, formula, fractal, fraction
Face of a Polyhedron
A face of a polyhedron is one of the sides.
Example: In the dodecahedron on the right, we can see one green face, one purple face, two red faces and two blue faces. (There are six more faces on the back side that we can't see.)
For more info on polyhedra, check out my Polyhedra Gallery.
Factor in Arithmetic
A factor in arithmetic is one of the two or three (or more) things we multiply together to get our answer. In the examples below, the factors are the blue and red numbers.
Examples: 6 = 2 x 3 120 = 4 x 5 x 6
For more info, check out my Multiplication Lessons.
Factor in Algebra
Factors in Algebra are the same thing as factors in arithmetic... Factors are the things we multiply together to get answers. For example, in x² + 5x + 6 = (x + 2)(x + 3), the factors are (x + 2) and (x + 3).
For some great lessons on factoring, check out my factoring lessons.
Factorial
Here's the official formula for a factorial:
n! = n x (n-1) x (n-2) x ... x 3 x 2 x 1
It looks creepy, but it isn't.
Example: 4! = 4 x 3 x 2 x 1 = 24
5! = 5 x 4 x 3 x 2 x 1 = 120
3! = 3 x 2 x 1 = 6
0! is specially defined as: 0! = 1
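If you'd like to check these on a computer, here's a tiny sketch (not part of the original dictionary page) that computes a factorial exactly the way the formula above describes, multiplying n down to 1 and giving 1 for 0!.

```python
def factorial(n):
    """n! = n x (n-1) x ... x 2 x 1, with 0! defined as 1."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print(factorial(4), factorial(5), factorial(0))   # 24 120 1
```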
For more info on factorials, check out my factorials lesson. | {"url":"https://www.coolmath.com/reference/math-dictionary-F","timestamp":"2024-11-14T01:56:52Z","content_type":"text/html","content_length":"52575","record_id":"<urn:uuid:3078ba8e-b390-4d10-bb40-3525de4c999a>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00088.warc.gz"} |