Inverse regression for ridge recovery: a data-driven approach for parameter reduction in computer experiments
Parameter reduction can enable otherwise infeasible design and uncertainty studies with modern computational science models that contain several input parameters. In statistical regression,
techniques for sufficient dimension reduction (SDR) use data to reduce the predictor dimension of a regression problem. A computational scientist hoping to use SDR for parameter reduction encounters
a problem: a computer prediction is best represented by a deterministic function of the inputs, so data composed of computer simulation queries fail to satisfy the SDR assumptions. To address this
problem, we interpret the SDR methods sliced inverse regression (SIR) and sliced average variance estimation (SAVE) as estimating the directions of a ridge function, which is a composition of a
low-dimensional linear transformation with a nonlinear function. Within this interpretation, SIR and SAVE estimate matrices of integrals whose column spaces are contained in the ridge directions’
span; we analyze and numerically verify convergence of these column spaces as the number of computer model queries increases. Moreover, we show example functions that are not ridge functions but
whose inverse conditional moment matrices are low-rank. Consequently, the computational scientist should beware when using SIR and SAVE for parameter reduction, since SIR and SAVE may mistakenly
suggest that truly important directions are unimportant.
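For reference, the ridge structure mentioned in the abstract is usually written as the following composition (the notation here is ours, not taken from the paper):

```latex
% A ridge function depends on x in R^m only through n < m linear combinations of the inputs,
% and the column space of U spans the "ridge directions" that SIR and SAVE aim to recover.
f(\mathbf{x}) = g\left(U^{\top}\mathbf{x}\right), \qquad U \in \mathbb{R}^{m \times n}, \quad n < m.
```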
Bibliographical note
Publisher Copyright:
© 2019, Springer Science+Business Media, LLC, part of Springer Nature.
• Ridge functions
• Ridge recovery
• Sufficient dimension reduction | {"url":"https://experts.umn.edu/en/publications/inverse-regression-for-ridge-recovery-a-data-driven-approach-for-","timestamp":"2024-11-07T13:32:27Z","content_type":"text/html","content_length":"54153","record_id":"<urn:uuid:4e902719-c66e-418a-907b-241335c1282d>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00328.warc.gz"}
22 Examples of Mathematics in Everyday Life
According to some people, maths is just the use of complicated formulas and calculations that will never be applied in real life. But maths is a universal language that is applied in almost every
aspect of life. Yes, you read that right: basic mathematical concepts are followed all the time. You would be amazed to see maths emerge from the most unexpected situations.
Let’s read further to know the real-life situations where maths is applied.
1. Making Routine Budgets
How much should I spend today? When will I be able to buy a new car? Should I save more? How will I be able to pay my EMIs? Such thoughts usually come into our minds, and the simple answer to
such questions is maths. We prepare budgets based on simple calculations with the help of simple mathematical concepts. So we can't say, "I am never going to study maths!" Everything going on
around us is somehow related to maths.
• Basic mathematical operations (addition, subtraction, multiplication, and division)
• Calculation of percentage
2. Construction Purpose
You know what? Maths is the basis of any construction work. A lot of calculations, preparation of budgets, setting of targets, estimation of costs, etc. are all done based on maths. If you don't
believe it, ask any contractor or construction worker, and they will explain how important maths is for carrying out all the construction work.
• Estimating the cost and profit
3. Exercising and Training
I should reduce some body fat! Will I ever be able to achieve my dream body? How? When? Will I be able to gain muscle? Here, too, the simple concept that is followed is maths. Yes! Based on simple
mathematical concepts, we can answer the above questions. We set our routine according to our workout schedule, count the number of repetitions while exercising, etc., all based on maths.
• Basic Mathematical Operations (additions, subtraction, multiplication, and division)
• Logical and Analogical Reasoning
4. Interior Designing
Interior designing seems to be a fun and interesting career, but do you know the reality behind it? A lot of mathematical concepts, calculations, budgets, estimations, targets, etc. have to be followed
to excel in this field. Interior designers plan interiors based on area and volume calculations, which let them estimate the proper layout of any room or building. Such concepts form an
important part of maths.
5. Fashion Designing
Just like interior design, maths is also an essential part of fashion design. From taking measurements, estimating the quantity and quality of cloth, choosing the color theme, and estimating the
cost and profit, to producing clothes according to the needs and tastes of the customers, maths is followed at every stage.
• Basic Mathematical Operations
6. Shopping at Grocery Stores and Supermarkets
The most obvious place where you would see the application of basic mathematical concepts is your neighborhood grocery store and supermarket. Schemes like 'Flat 50% off' and 'Buy one get one free'
are seen in most stores. Customers visit the stores, see such schemes, estimate the quantity to be bought, the weight, the price per unit, the discount, and finally the total
price of the product, and buy it. The calculations are done based on basic mathematical concepts. Thus, here also, maths forms an important part of our daily routine.
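As a concrete illustration of the discount arithmetic described above (the prices and quantities here are made up, not from the article):

```python
# Illustrative numbers only: 2 kg of rice at $4.50 per kg under a "Flat 50% off" scheme.
price_per_kg = 4.50
quantity_kg = 2
discount_rate = 0.50
total_price = price_per_kg * quantity_kg * (1 - discount_rate)
print(total_price)  # 4.5 dollars
```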
7. Cooking and Baking
Maths is at work in your kitchen as well. For cooking or baking anything, a series of steps is followed, telling us how much of each ingredient to use, the proportion of
different ingredients, the method of cooking, the cookware to be used, and much more. All of these are based on different mathematical concepts. Involving children in the kitchen while cooking is a
fun way to explain maths as well as basic cooking methods.
8. Sports
Maths improves a person's cognitive and decision-making skills. Such skills are very important for a sportsperson, because they help him take the right decisions for his team. If a person lacks
such abilities, he won't be able to make correct estimations. So maths also forms an important part of the sports field.
• Mathematical Operations and Algorithm
9. Management of Time
Managing time is one of the most difficult tasks faced by a lot of people: an individual wants to complete several assignments in a limited time. Beyond managing it, some people
are not even able to read the time on an analog clock. Such problems can be solved only by understanding the basic concepts of maths. Maths helps us not only to manage
time but also to value it.
• Basic Mathematical Operations
10. Driving
Speed, time, and distance are all studied in maths, and they are the basics of driving, irrespective of the mode of transportation. Maths helps us to answer
the following questions (a small worked example follows the list):
• How much should be the speed to cover any particular distance?
• How much time would be taken?
• Whether to turn left or right?
• When to increase or decrease the speed?
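For instance, the first two questions come straight from the relation distance = speed × time; the numbers below are made up purely for illustration.

```python
# Illustrative numbers only: how long does a 150 km trip take at a steady 60 km/h?
distance_km = 150
speed_kmh = 60
time_hours = distance_km / speed_kmh
print(time_hours)  # 2.5 hours
```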
11. Automobiles Industry
The different car manufacturing companies produce cars based on the demands of the customers. Every company has its category of cars ranging from microcars to luxury SUVs. In such companies, basic
mathematical operations are being applied to gain knowledge about the different demands of the customers.
12. Computer Applications
Ever wondered how a computer works? How does it complete every task in a proper series of actions? The simple reason is the application of maths. The fields of mathematics and computing
intersect in computer science, and the study of computer applications is next to impossible without maths. Concepts like computation and algorithms form the base for different computer
applications such as PowerPoint, Word, and Excel, which would be impossible to build without maths.
13. Planning a Trip
We all get bored with our monotonous lives and wish to go on long vacations. For this, we have to plan things accordingly: we need to prepare the budget for the trip, decide the number of days, the
destinations, and the hotels, adjust our other work accordingly, and much more. Here comes the role of maths: basic mathematical concepts and operations are required to plan a successful trip.
14. Hospitals
Every hospital has to schedule the timings of the doctors available, plan the systematic methods of conducting any major surgery, keep records of the patients and of the success rate
of surgeries, work out the number of ambulances required, train nurses in the use of medicines, handle prescriptions, and schedule all other tasks. All of this is done based on mathematical concepts.
15. Video Games
Playing video games is one of the most popular entertainment activities all over the world, whether you are a kid or an adult. Students usually skip their maths
classes to play video games. But do you know that they are learning maths there as well? They learn about the different steps and techniques to be followed to win a game. Not only the players, but
the engineers who design games for people also follow different mathematical concepts.
16. Weather Forecasting
Weather forecasting is done based on the probability concepts of maths. Through it, we get to know about weather conditions, such as whether it is going to be a sunny day or whether rain will
come. So, next time you plan an outing, don't forget to check the weather forecast.
17. Base of Other Subjects
Maths is itself a unique subject, but you would be surprised to know that it forms the base for every other subject. Subjects like physics, chemistry, economics, history, accountancy, and
statistics are all based upon maths. So, next time you say, "I'm never going to study this maths subject!", remember that this subject is never going to leave you.
18. Music and Dance
Listening to music and dancing is one of the most common hobbies of children. Here also, they learn maths while singing and learning different dance steps. Coordination in any dance can be gained by
simple mathematical steps.
19. Manufacturing Industry
The branch of maths called 'Operations Research' is an important concept that is followed at every manufacturing unit. It gives the manufacturer a simple way of performing
several tasks within the manufacturing unit, such as:
• What quantity is to be produced?
• What methods are to be followed?
• How to increase production?
• How the cost of production can be reduced?
• Removing unnecessary tasks.
• Following methods like target costing, ABC costing, cost-profit budgeting, and many more.
20. Planning of Cities
Urban planning includes the concepts of budgeting, planning, setting targets, and many more, which all form part of mathematics. No such activity is possible without maths.
21. Problem-solving skills
Problem-solving skills are among the most important skills that every individual should possess to be successful in life. Such skills help the individual take correct decisions in life, whether
professional or personal. This is possible when the person has the correct knowledge of basic mathematical concepts.
• Basic Mathematical Operations
22. Marketing
Marketing agencies make proper plans for how to promote any product or service. Tasks like promoting a product online, using social media platforms, following different methods of
direct and indirect marketing, door-to-door sales, sending e-mails, making calls, and providing schemes like 'Buy one get one free' and 'Flat 50% off' or discounts on special occasions
are all done based on simple mathematical concepts. Thus, maths is present everywhere. | {"url":"https://studiousguy.com/examples-of-mathematics/","timestamp":"2024-11-04T01:53:39Z","content_type":"text/html","content_length":"107332","record_id":"<urn:uuid:dcb2222d-93ac-4333-98e3-3c143d08280c>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00824.warc.gz"}
Stabilization Policy and the Phillips Curve: Practice Problems and Solutions
See Mr. Stolyarov’s complete index of Intermediate Macroeconomics Problems and Solutions here.
Problem 51: Which of these policy lags contribute to the reduced effectiveness of fiscal and monetary policy? More than one correct answer is possible.
(a) Division lags: The government needs to divide the economy into different sectors and interest groups and give each of them a custom-tailored solution to their problems. This takes time and
(b) Supervision lags: There is not enough timely oversight of all branches that implement fiscal and monetary policy, and so there are insufficient guarantees that agreed-upon policy measures will
actually be carried out.
(c) Decision lags: Governmental decisions take time to make – especially with regard to fiscal policy.
(d) Inflation lags: It takes time for an increase in the money supply to be reflected in prices.
(e) Information lags: It takes time for past economic data to be compiled and verified.
(f) Litigation lags: Any government agency wishing to implement fiscal or monetary policy changes is likely to get sued by the affected parties. This leads to prolonged legal battles before the
policy gets implemented.
(g) Implementation lags: Even after a policy decision has been made, it takes time for it to be actualized and for its effects to be manifest.
Solution 51: The following are the three policy lags that explain why the effects of fiscal and monetary policy arrive with "long and variable" lags:
(e): Information lags: It takes time for past economic data to be compiled and verified.
(c): Decision lags: Governmental decisions take time to make – especially with regard to fiscal policy.
(g): Implementation lags: Even after a policy decision has been made, it takes time for it to be actualized and for its effects to be manifest.
Problem 52. Which of these phenomena may reduce the effectiveness of fiscal and monetary policy? More than one correct answer is possible.
(a) If a tax cut is thought to be temporary, people will tend to save it rather than spending it, seeing it as transitory rather than permanent income.
(b) Government spending and investment tend to raise interest rates and thus crowd out private investment, thus exacerbating any already existing economic problems.
(c) If monetary policy is perceived to be unreliable and non-credible, individuals will tend to ignore central bank promises to reduce inflation or stabilize the economy.
(d) The principle of Ricardian equivalence states that, when governments engage in deficit spending, households will tend to save enough money to pay for anticipated future tax increases. Thus,
instead of spending money, individuals will tend to save more of it.
(e) Economic stabilization is not the same as growth. By trying to reduce the variations in output, fiscal and monetary policymakers might also slow down the overall economic growth rate and thus
render virtually everyone worse off in the long run.
Solution 52. All of the above are valid possibilities for phenomena that reduce the effectiveness of fiscal and monetary policy.
Problem 53. Which of these statements about time inconsistency are true?
(a) Time inconsistency is minimized when individual actors are in some manner bound to follow through with a decision before they actually face making that decision.
(b) Time inconsistency is minimized when individual actors have the maximum possible discretion to choose the best response to situations as they arise.
(c) According to time inconsistency, whatever policy is optimal in one period is optimal in all periods.
(d) According to time inconsistency, a policy that is optimal in the first period may no longer be optimal in the next period.
(e) An alcoholic man drives by a liquor store on his way from work every day. If he decides to simply use his force of will to restrain his desire to enter the liquor store every time he drives by
it, then he has solved his time inconsistency problem.
(f) An alcoholic man drives by a liquor store on his way from work every day. If he decides to pick a different route home from work – along which there are no liquor stores – then he has solved his
time inconsistency problem.
Solution 53. The following statements about time inconsistency are true:
(a): Time inconsistency is minimized when individual actors are in some manner bound to follow through with a decision before they actually face making that decision.
(d): According to time inconsistency, a policy that is optimal in the first period may no longer be optimal in the next period.
(f): An alcoholic man drives by a liquor store on his way from work every day. If he decides to pick a different route home from work – along which there are no liquor stores – then he has solved his
time inconsistency problem.
Problem 54. Which of these is true according to the Phillips Curve model?
(a) There exists a direct correlation between inflation and unemployment.
(b) There exists an inverse correlation between inflation and unemployment.
(c) There exists a direct correlation between inflation and trade deficits.
(d) There exists an inverse correlation between inflation and trade deficits.
(e) There exists a direct correlation between output and unemployment.
(f) There exists an inverse correlation between output and unemployment.
Solution 54: The Phillips Curve model states that
(b): There exists an inverse correlation between inflation and unemployment.
Problem 55. What is the equation for the Phillips Curve? Here, U = actual rate of unemployment, U* = natural rate of unemployment, gw = wage growth, and ε = speed of wage adjustment to the
employment gap.
(a) gw = εUU*
(b) gw = ε(U – U*)
(c) gw = -ε(U – U*)
(d) gw = ε(U + U*)
(e) gw = -ε(U + U*)
(f) gw = ε/(U – U*)
Solution 55. The equation for the Phillips Curve is
(c): gw = -ε(U – U*)
Problem 56. What is the equation for the Expectations Augmented Phillips Curve? Here, U = actual rate of unemployment, U* = natural rate of unemployment, π = actual inflation, π[e] = expected
inflation, and ε = speed of wage adjustment to the employment gap.
(a) π = π[e] + ε(U + U*)
(b) π = π[e] – ε(U + U*)
(c) π = π[e] – ε(U* – U)
(d) π = π[e] – ε(U – U*)
(e) π[e] = π – ε(U – U*)
(f) π[e] = π – ε(U* – U)
Solution 56. The equation for the Expectations Augmented Phillips Curve is
(d): π = π[e] – ε(U – U*)
Problem 57. In Inflationville, expected annual inflation is 53%. The actual rate of unemployment is 1%, and the natural rate of unemployment is 10%. The speed of wage adjustment to the employment gap
is 0.42. Use the equation for the Expectations Augmented Phillips Curve to find the actual annual inflation in Inflationville.
Solution 57. We use the equation π = π[e] – ε(U – U*), where π[e] = 0.53, ε = 0.42, U = 0.01, U* = 0.1. So π = 0.53 – 0.42(0.01 – 0.1) = 0.5678 = 56.78%.
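As a quick check of Solution 57, the same calculation can be scripted. The sketch below is ours (the function name is not from the text); it simply evaluates the expectations-augmented Phillips curve with the Inflationville numbers.

```python
# Expectations-augmented Phillips curve: pi = pi_expected - eps * (U - U_star)
def actual_inflation(pi_expected, eps, u, u_star):
    return pi_expected - eps * (u - u_star)

# Inflationville: expected inflation 53%, eps = 0.42, U = 1%, U* = 10%
pi = actual_inflation(0.53, 0.42, 0.01, 0.10)
print(f"{pi:.4f}")  # 0.5678, i.e. 56.78%
```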
Problem 58. Which of these statements about NAIRU are true? More than one correct answer is possible.
(a) NAIRU stands for National Average Intertemporal Rate of Unemployment.
(b) NAIRU stands for Non-Accelerating Inflation Rate of Unemployment.
(c) NAIRU stands for Naturally Anti-Inflationary Rate of Unemployment.
(d) Milton Friedman coined the term NAIRU.
(e) A. C. Phillips coined the term NAIRU.
(f) A. C. Pigou coined the term NAIRU.
(g) Robert Lucas coined the term NAIRU.
(h) NAIRU is the natural rate of unemployment, analyzed within the framework of the Phillips Curve.
(i) NAIRU is the actual rate of unemployment, analyzed within the framework of the Phillips Curve.
(j) NAIRU is the difference between the natural and actual rates of unemployment, analyzed within the framework of the Phillips Curve.
(k) Using expansionary monetary policy, it is possible to achieve a sustainable rate of unemployment below NAIRU.
(l) Using expansionary monetary policy, it is possible to only temporarily achieve a rate of unemployment below NAIRU, while only producing inflation in the long run.
Solution 58. The following statements about NAIRU are true:
(b): NAIRU stands for Non-Accelerating Inflation Rate of Unemployment.
(d): Milton Friedman coined the term NAIRU.
(h): NAIRU is the natural rate of unemployment, analyzed within the framework of the Phillips Curve.
(l): Using expansionary monetary policy, it is possible to only temporarily achieve a rate of unemployment below NAIRU, while only producing inflation in the long run.
Problem 59. Which of these ideas did Milton Friedman contribute to economists’ views of the Phillips Curve? More than one correct answer is possible.
(a) People suffer from persistent money illusion and can be duped into expecting inflation rates that deviate from actual inflation. Thus, Phillips curves will never shift out in response to changes
in the money supply.
(b) In the long run, policies that assume the existence of a stable tradeoff between inflation and unemployment have no effect on unemployment while systematically increasing inflation.
(c) The long-run Phillips curve is concave up and decreasing.
(d) The long-run Phillips curve is concave up and increasing.
(e) The long-run Phillips curve is vertical.
(f) The long-run Phillips curve is horizontal.
(g) The long-run Phillips curve is concave down and decreasing.
Solution 59. Milton Friedman contributed the following ideas:
(b): In the long run, policies that assume the existence of a stable tradeoff between inflation and unemployment have no effect on unemployment while systematically increasing inflation.
(e): The long-run Phillips curve is vertical.
Problem 60. Which of these statements about the liquidity trap are true?
(a) The liquidity trap occurs when the LM curve is horizontal.
(b) The liquidity trap occurs when the LM curve is vertical.
(c) In a liquidity trap, people are highly sensitive to changes in interest rates.
(d) In a liquidity trap, people are not at all sensitive to changes in interest rates.
(e) Effective monetary policy to get out of a liquidity trap includes the systematic slashing of interest rates and the lowering of reserve requirements.
(f) In a liquidity trap, monetary policy is completely ineffective.
(g) In a liquidity trap, fiscal policy is completely ineffective.
Solution 60. The following statements about the liquidity trapare true:
(a): The liquidity trap occurs when the LM curve is horizontal.
(d): In a liquidity trap, people are not at all sensitive to changes in interest rates.
(f): In a liquidity trap, monetary policy is completely ineffective.
See Mr. Stolyarov’s complete index of Intermediate Macroeconomics Problems and Solutions here. | {"url":"https://www.stepbystep.com/Stabilization-Policy-and-the-Phillips-Curve-Practice-Problems-and-Solutions-172715/","timestamp":"2024-11-08T08:25:04Z","content_type":"text/html","content_length":"56877","record_id":"<urn:uuid:87432e16-2ac4-4561-bd1c-282e942f494f>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00777.warc.gz"} |
Lease Analysis
Lease Analysis Calculator
Period Type = 1
Tenant Improvements = 0
Commission = 0
Other = 0
Payments =
Discount Rate =
Periodic Rate =
Term =
NPV =
NPV/Term =
NUS/Annuity =
IRR =
NER [US] =
NER [Can] =
Performs various return analyses on property leases, including net present value (NPV), NPV per term, Net Uniform Series (NUS) per annuity, internal rate of return (IRR), and both US and Canadian Net
Effective Rate (or Rental).
- Period Type: Whether the entries are in total dollars or dollars per square foot (PSF), with entries on either a monthly or yearly basis.
- Tenant Improvements: Improvement dollars paid.
- Commission: Commission dollars paid.
- Other: Other concessions e.g. relocation costs, lease buy-outs, etc.
- Payments: Periodic rental payments. Column 1 is the periodic rental payment. Enter each payment on its own row. If a second column is entered (in the form 1000;3) then the second column is the
number of occurrences for that payment.
- Discount Rate: Annual discount rate expressed as a percentage.
- Periodic Rate: Discount rate expressed on a periodic basis, as determined by the Period Type.
- Term: Term of the lease.
- NPV: Net Present Value.
- NPV/Term: Net Present Value per term.
- NUS/Annuity: Net Uniform Series of payments or annuity.
- IRR: Internal Rate of Return expressed as an annual rate. IRR requires initial cash outflows (e.g. If Tenant Imps plus Commission plus Other is 0, then IRR will display 0).
- NER [US]: Net Effective Rental (effective lease rate) for US leases only, also known as average rents. This is the effective rent received after deducting all costs of leasing (e.g., commissions,
improvements, etc.).
- NER [Can]: Net Effective Rate or rental for Canadian leases only. This calculation consists of the average yearly rents received divided by other concessions paid on an annual basis. Commissions
and tenant improvements are not considered.
Example 1
A potential tenant is interested in leasing space. The costs include $3,000 for tenant improvements, $2,000 in commissions and an estimated $4,000 for other expenses. In exchange, you'll receive
rents of $1,000 per month for the first year, $1,500 per month for the second, and $2,000 per month for the third. A 6% return is reasonable. What is the net present value of the deal?
- Period Type: Total$/Month
- Tenant Imps: $3,000.00
- Commission: $2,000.00
- Other: $4,000.00
- Payments: 1000;12, 1500;12, 2000;12
- Discount Rate: 6.000%
The deal's net present value is $39,651.13.
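The figure above can be reproduced with a short script. This is only a sketch of the underlying arithmetic, not the calculator's actual code; it assumes the payments are the monthly amounts described in the prose ($1,000 × 12, then $1,500 × 12, then $2,000 × 12), paid at the end of each month and discounted at 6%/12 = 0.5% per month.

```python
# Minimal lease NPV sketch for Example 1 (function and variable names are ours).
def lease_npv(upfront_costs, payments, annual_rate, periods_per_year=12):
    r = annual_rate / periods_per_year
    pv_of_rents = sum(p / (1 + r) ** k for k, p in enumerate(payments, start=1))
    return pv_of_rents - upfront_costs

payments = [1000] * 12 + [1500] * 12 + [2000] * 12
costs = 3000 + 2000 + 4000  # tenant improvements + commission + other
print(round(lease_npv(costs, payments, 0.06), 2))  # ~39651 (the tool reports $39,651.13)
```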
Example 2
If a new lessor received a deal where there was no rent for 6 months, followed by 6 months at $18/ft², $20/ft² for the next 12 months and $22/ft² for the last 12 months, what is the US effective
lease rate (NER [US]) if the discount rate is 6%?
- Period Type: PSF/Year
- Payments:
- Discount Rate: 6.000%
The deal's net effective rent is $17 PSF.
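The $17 figure can be sanity-checked as a straight average of the contracted rates over the 36-month term; the calculator may additionally apply the 6% discounting, which changes the result only marginally. The schedule layout and names below are ours.

```python
# (rate in $/ft² per year, number of months at that rate)
schedule = [(0, 6), (18, 6), (20, 12), (22, 12)]
term_months = sum(months for _, months in schedule)  # 36
average_rate = sum(rate * months for rate, months in schedule) / term_months
print(average_rate)  # 17.0 $/ft² per year
```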
{"url":"https://power.one/t/60fe4428fde75c7d27ca/lease-analysis-calculator","timestamp":"2024-11-08T07:56:21Z","content_type":"text/html","content_length":"15184","record_id":"<urn:uuid:f8ac9c67-5d75-4b9e-a7d5-1aa97e27117a>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00147.warc.gz"}
Luận văn The solution existence of equilibrium problems and generalized problems
Table of Contents
Foreword 1
Part 1. Equilibrium problems 4
Chapter 1. Existence conditions for equilibrium problems 5
Chapter 2. The solution existence of systems of quasiequilibrium problems 17
Chapter 3. Existence conditions for approximate solutions to quasiequilibrium problems 30
Part 2. Variational inclusion problems 48
Chapter 4. Sufficient conditions for the solution existence of variational inclusion problems 49
Chapter 5. Systems of quasivariational inclusion problems 83
List of the papers related to the thesis 97
13 pages
A general quasiequilibrium problem is proposed including, among others, equilibrium problems, implicit variational inequalities, and quasivariational inequalities involving multifunctions. Sufficient conditions for the existence of
solutions with and without relaxed pseudomonotonicity are established. Even semicontinuity may not be imposed. These conditions improve several recent results in the literature. Keywords
Quasiequilibrium problems · Quasivariational inequalities · 0-level-quasiconcavity · Upper semicontinuity · KKM–Fan theorem 1 Introduction Equilibrium problems, which include as special cases various
problems related to optimization theory such as fixed point problems, coincidence point problems, Nash equilibria problems, variational inequalities, complementarity problems, and maxi- mization
problems have been studied by many authors; see e.g., Refs. [1–6]. The main attention has been paid to the sufficient conditions for the existence of solutions. There has also been interest in
getting such conditions for more general problem set- tings and under weaker assumptions about continuity, monotonicity and compacity. Communicated by S. Schaible. This work was partially supported
by the National Basic Research Program in Natural Sciences of Vietnam. The authors are very grateful to Professor Schaible and the referees for their valuable remarks and suggestions which helped to
improve remarkably the paper. N.X. Hai Department of Scientific Fundamentals, Posts and Telecommunications Institute of Technology of Vietnam, Hochiminh City, Vietnam P.Q. Khanh () Department of
Mathematics, International University at Hochiminh City, Hochimin City, Vietnam e-mail: pqkhanh@hcmiu.edu.vn J Optim Theory Appl In the present paper, we propose a general vector quasiequilibrium
problem, which includes vector equilibrium problems, vector quasivariational inequalities, and qua- sicomplementarity problems, etc. We establish sufficient conditions for solution ex- istence with
and without relaxed pseudomonotonicity. In the sequel, if not otherwise specified, let X, Y and Z be real topological vector spaces, let X be Hausdorff and let A ⊆ X be a nonempty closed convex
subset. Let C : A → 2^Y, K : A → 2^X and T : A → 2^Z be multifunctions such that C(x) is a closed convex cone with intC(x) ≠ ∅ and K(x) is nonempty convex, for each x ∈ A. Let f : T(A) × A × A → Y be
a single-valued mapping. The quasiequilibrium problem under consideration is as follows: (QEP) Find x¯ ∈ A ∩ clK(x¯) such that, for each y ∈ K(x¯), there exists t¯ ∈ T (x¯) satisfying f (t¯, y, x¯) /
∈ intC(x¯). To motivate the problem setting, let us look at several special cases of (QEP). (a) If K(x) ≡ A and Z = L(X,Y ), the space of linear continuous mappings of X into Y , then (QEP) coincides
with an implicit vector variational inequality studied in Refs. [7, 8]: find x¯ ∈ A such that, for each y ∈ A, there exists t¯ ∈ T (x¯) satisfying f (t¯, y, x¯) /∈ intC(x¯). (b) If K(x) ≡ A and T is
single-valued, then setting f (T (x), y, x) := h(y, x), (QEP) becomes the vector equilibrium problem considered e.g. in Refs. [1–3, 5, 6]: (EP) Find x¯ ∈ A such that, for each y ∈ A, h(y, x¯) /∈ intC
(x¯). (c) If Z = L(X,Y ), f (t, y, x) = (t, x − y), where (t, x) denotes the value of a linear mapping t at x, then (QEP) reduces to the vector quasivariational inequality problem investigated by
many authors: (QVI) Find x¯ ∈ A ∩ clK(x¯) such that, for each y ∈ K(x¯), there exists t¯ ∈ T (x¯) satisfying (t¯ , y − x¯) /∈ − intC(x¯). (d) Let X be a Banach space, let Y = R, Z = X∗, C(x) ≡ R+,
let A be a closed convex cone, T : A → 2X∗ and S : A → 2A. The quasicomplementarity problem is as follows: (QCP) Find x¯ ∈ A such that, ∀s¯ ∈ K ∩S(x¯),∃t¯ ∈ (−A∗)∩T (x¯) satisfying 〈t¯ , s¯〉 = 0,
where 〈t, s〉 denotes the value of a linear functional t at s. Then, setting K(x) := x −A∩S(x)+A and f (t, y, x) := 〈t, x −y〉, (QEP) collapses to (QCP), see Ref. [9]. (e) Consider the following
maximization problem: (MP) Find the Pareto maximizer of a mapping J : A → Y , where Y is ordered by a convex cone C. Then setting C(x) ≡ C,K(x) ≡ A, T (x) = {x} and f (T (x), y, x) := J (y) − J (x),
we see that (QEP) is equivalent to (MP). Our aim now is to develop sufficient conditions for the existence of solutions to (QEP) under weak assumptions and to derive as consequences several improve-
ments of known results for vector equilibrium problems and vector quasivariational inequalities. J Optim Theory Appl 2 Preliminaries We recall first some definitions needed in the sequel. Let X and Y
be topological spaces. A multifunction F : X → 2Y is said to be upper semicontinuous (usc) at x0 ∈ domF := {x ∈ X : F(x) = ∅} if, for each neighborhood U of F(x0), there is a neighborhood N of x0
such that F(N) ⊆ U . F is called usc if F is usc at each point of domF . In the sequel, all properties defined at a point will be extended to domains in this way. F is called lower semicontinuous
(lsc) at x0 ∈ domF if for each open subset U satisfying U ∩ F(x0) = ∅ there exists a neighborhood N of x0 such that, for all x ∈ N,U ∩ F(x) = ∅. F is said to be continuous at x ∈ domF if F is both
usc and lsc at x. F is termed closed at x ∈ domF if ∀xα → x, ∀yα ∈ F(xα) such that yα → y, then y ∈ F(x). It known that, if F is usc and has closed values, then F is closed. A multifunction H of a
subset A of a topological vector space X into X is said to be a KKM mapping in A if, for each {x1, . . . , xn} ⊆ A, one has co{x1, . . . , xn} ⊆⋃n i=1 H(xi), where co{} stands for the convex hull.
The main machinery for proving our results is the following well-known KKM- Fan theorem (Ref. [10]). Theorem 2.1 Assume that X is a topological vector space, A ⊆ X is nonempty and H : A → 2X is a KKM
mapping with closed values. If there is a subset X0 contained in a compact convex subset of A such that ⋂x∈X0 H(x) is compact, then⋂ x∈A H(x) = ∅. The following fixed-point theorem is a slightly
weaker version (suitable for our use) of the Tarafdar theorem (Ref. [11]), which is equivalent to Theorem 2.1. Theorem 2.2 Assume that X is a Hausdorff topological vector space, A ⊆ X is non- empty
and convex and ϕ : A → 2A is a multifunction with nonempty convex values. Assume that: (i) ϕ−1(y) is open in A for each y ∈ A; (ii) there exists a nonempty subset X0 contained in a compact convex set
of A such that A \ ⋃y∈X0 ϕ−1(y) is compact or empty. Then, there exists xˆ ∈ A such that xˆ ∈ ϕ(xˆ). The next theorem on fixed points is modified (for our use) from a theorem in Ref. [12]. Theorem
2.3 Assume that V is a convex set in a Hausdorff topological vector space and that f : V → 2V is a multifunction with convex values. Assume that: (i) V = ⋃x∈V intf −1(x); (ii) there exists a nonempty
compact subset D ⊆ V such that, for all finite subsets M ⊆ V , there is a compact convex subset LM of V , containing M , such that LM \ D ⊆ ⋃x∈LM f −1(x). Then, there is a fixed point of f in V . J
Optim Theory Appl Using Theorem 2.3, we derive the following modification of Theorem 2.1. Theorem 2.4 Assume that V is a convex set in a Hausdorff topological vector space and H : V → 2V is a KKM
mapping in V with closed values. Assume further that there exists a nonempty compact subset D ⊆ V such that, for all finite subsets M ⊆ V , there is a compact convex subset LM of V , containing M ,
such that LM \ D ⊆ ⋃ x∈LM (V \ H(x)). (1) Then, ⋂ x∈V H(x) = ∅. Proof Suppose that ⋂x∈V H(x) = ∅. Define the multifunction g : V → 2V by g(y) = {x ∈ V : y /∈ H(x)}. Then g(y) = ∅, ∀y ∈ V , and g−1(x)
= V \ H(x). Hence, g−1(x) is open and V = ⋃x∈V g−1(x). Define further f : V → 2V by f (x) = cog(x), where co means the convex hull. One has V = ⋃x∈V f −1(x). More- over, LM \ D ⊆ ⋃x∈LM g−1(x) ⊆ ⋃x∈LM
f −1(x). By Theorem 2.3 there is x0 ∈ V such that x0 ∈ f (x0). Therefore, one can find xj ∈ g(x0) and λj ≥ 0, j = 1, . . . ,m, ∑mj=1 λj = 1 such that x0 = ∑mj=1 λjxj . By the definition of g, x0 /∈ H
(xj ), j = 1, . . . ,m. Thus, x0 = ∑mj=1 λjxj /∈ ⋃mj=1 H(xj ), which is impossible, since H is KKM. 3 Main Results We propose first a very relaxed quasiconcavity. Let Z, A, C, T and f be as for prob-
lem (QEP). For x ∈ A, the mapping f is said to be 0-level-quasiconcave with respect to T (x) if, for any finite subsets {y1, . . . , yn} ⊆ A and any αi ≥ 0, i = 1, . . . , n, with∑n i=1 αi = 1, there
exists t ∈ T (x) such that [f (T (x), yi, x) ⊆ intC(x), i = 1, . . . , n] ⇒ [ f ( t, n∑ i=1 αiyi, x ) ∈ intC(x) ] . In the sequel, let E := {x ∈ A : x ∈ clK(x)}. Our first sufficient condition for
the existence of solutions to (QEP) is the following. Theorem 3.1 Assume for (QEP) the existence of a (single-valued) mapping g : T (A) × A × A → Y such that: (i) for all x, y ∈ A, if g(T (x), y, x)
⊆ intC(x), then f (T (x), y, x) ⊆ intC(x); (ii) g(., ., x) is 0-level-quasiconcave with respect to T (x) and g(t, x, x) ∈ intC(x) for all x ∈ A and all t ∈ T (x); (iii) for each y ∈ A, {x ∈ A : f (T
(x), y, x) ⊆ intC(x)} is closed; (iv) A ∩ K(x) = ∅ for all x ∈ A, K−1(y) is open in A for all y ∈ A and clK(.) is usc; J Optim Theory Appl (v) there exist a nonempty compact subset D of A and a
subset X0 of a compact convex subset of A such that ∀x ∈ A \ D, ∃yx ∈ X0 ∩ K(x), f (T (x), yx, x) ⊆ intC(x). Then, (QEP) has a solution. Proof For x, y ∈ A and i = 1,2 set P1(x) := {z ∈ A : f (T (x),
z, x) ⊆ intC(x)}, P2(x) := {z ∈ A : g(T (x), z, x) ⊆ intC(x)}, Φi(x) := { K(x) ∩ Pi(x) if x ∈ E, A ∩ K(x) if x ∈ A \ E, Qi(y) := A \ Φ−1i (y). Observe that, by (ii), x /∈ P2(x) and then y ∈ Q2(y) for
each y ∈ A, by the definition of Q2(y). Furthermore, we claim that Q2(.) is a KKM mapping in A. Indeed, suppose there is a convex combination xˆ := ∑nj=1 αjyj in A such that xˆ ∈ ⋃nj=1 Q2(yj ). Then,
xˆ ∈ Q2(yj ), i.e., yj ∈ Φ2(xˆ) for j = 1, . . . , n. If xˆ ∈ E, one has yj ∈ P2(xˆ), i.e., g(T (xˆ), yj , xˆ) ⊆ intC(xˆ) for j = 1, . . . , n. In virtue of the 0-level-quasiconcavity with respect to
T (xˆ) of g(., ., xˆ), there is tˆ ∈ T (xˆ) such that g(tˆ, xˆ, xˆ) ∈ intC(xˆ), contradicting (ii). On the other hand, if xˆ ∈ A \ E (i.e., xˆ ∈ clK(xˆ)), then yj ∈ Φ2(xˆ) = A ∩ K(xˆ), j = 1, . . . ,
n. So xˆ ∈ A ∩ K(xˆ), another contradiction. Thus, Q2 must be KKM. By (i), for x ∈ A, one has P1(x) ⊆ P2(x) and then Φ1(x) ⊆ Φ2(x). Hence, Q2(y) ⊆ Q1(y) for all y ∈ A, which results in that Q1(.) is
also KKM. Next, we verify the closeness of Q1(y), ∀y ∈ A. One has Φ−11 (y) = {x ∈ E : y ∈ K(x) ∩ P1(x)} ∪ {x ∈ A \ E : y ∈ K(x)} = {x ∈ E : x ∈ K−1(y) ∩ P −11 (y)} ∪ {x ∈ A \ E : x ∈ K−1(y)} = [E ∩
K−1(y) ∩ P −11 (y)] ∪ [(A \ E) ∩ K−1(y)] = [(A \ E) ∪ P −11 (y)] ∩ K−1(y). Therefore, Q1(y) = A \ {[(A \ E) ∪ P −11 (y)] ∩ K−1(y)} = {A \ [(A \ E) ∪ P −11 (y)]} ∪ (A \ K−1(y)] = [E ∩ (A \ P −11 (y))]
∪ (A \ K−1(y)). (2) Since A ∩ K(x) = ∅,∀x ∈ A, we have ⋃y∈A K−1(y) = A. Theorem 2.2 in turn as- sures that K(.) has a fixed point in A (hence, E = ∅). Indeed, only (ii) of Theorem 2.2 is to be
checked. By assumption (v), A \ D ⊆ ⋃ x∈X0 K−1(x) ⊆ A, J Optim Theory Appl and then A \⋃x∈X0 K−1(x) ⊆ D and is compact, i.e. (ii) of Theorem 2.2 is satisfied. Furthermore, since clK(.) is usc and has
closed values, clK(.) is closed. Hence, E is closed. We have also A \ P −11 (y) = {x ∈ A : y ∈ P1(x)} = {x ∈ A : f (T (x), y, x) ⊆ intC(x)}, which is closed by (iii). It follows from (2) that Q1(y)
is closed. By assumption (V), ∀x ∈ A \ D,∃yx ∈ X0 such that yx ∈ Φ1(x). Therefore, A \ D ⊆ ⋃ x∈X0 Φ−11 (x) ⊆ A. Hence, A\⋃x∈X0 Φ−11 (x) ⊆ D, i.e., ⋂x∈X0 A\Φ−11 (x) ⊆ D and then ⋂x∈X0 Q1(x) is
compact. Applying Theorem 2.1, one obtains a point x¯ such that x¯ ∈ ⋂ y∈A Q1(y) = A \ ⋃ y∈A Φ−11 (y). So, x¯ ∈ Φ−11 (y), ∀y ∈ A, i.e., Φ1(x¯) = ∅. If x¯ ∈ A \ E, then Φ1(x¯) = A ∩ K(x¯),
contradicting (iv). In the remaining case, x¯ ∈ E, one has ∅ = Φ1(x¯) = K(x¯)∩P1(x¯). Thus, for all y ∈ K(x¯), y ∈ P1(x¯), i.e., f (T (x¯), y, x¯) ⊆ intC(x¯), which means that x¯ is a solution of
(QEP). Remark 3.1 (a) Apart from (ii) and (iv), which have clear meanings, we can explain the other assumptions as follows. (i) is a kind of relaxed monotonicity. It may be said to be a
pseudomonotonicity of f with respect to g. (iii) defines a kind of lower semicontinuity of f (T (.), y, .) with respect to moving cone C(.). (v) is a coercivity condition. (b) If K(x) ≡ A and Z = L
(X,Y ), then (QEP) reduces to the implicit vector variational inequality considered in Refs. [7, 8]. In this case, Theorem 3.1 is different from Theorem 3.1 in Refs. [7, 8]. However, we can observe
that our theorem avoids strict continuity assumptions for the mapping (f ), needed in Refs. [7, 8]. (c) Theorem 3.1 is still valid if the coercivity assumption (v) is replaced by (v′) there are a
compact subset D of A and x0 ∈ A such that, ∀x ∈ A \ D, x0 ∈ K(x) and g(T (x), x0, x) ⊆ intC(x). So, if K(x) ≡ A and T is single-valued, in nature Theorem 3.1 becomes the main result (Theorem 2.1) of
Ref. [13], but with (ii) and (v) being slightly weaker than the corresponding assumptions in Ref. [13]. (d) Theorem 3.1 is also in force if we replace (i) and (ii) respectively by (i′) and (ii′)
below: (i′) ∀x, y ∈ A, if g(T (x), y, x) ⊆ C(x), then f (T (x), y, x) ⊆ intC(x); (ii′) ∀{y1, . . . , yn} ⊆ A,n ≥ 2, ∀x¯ ∈ co{y1, . . . , yn}, x¯ = yi , i = 1, . . . , n,∃j ∈ {1, . . . , n}, ∀x ∈ A, g
(T (x¯), yj , x¯) ⊆ C(x¯) and f (T (x), x, x) ⊆ C(x). J Optim Theory Appl Indeed, in the proof, we modify P2(x) as follows: P2(x) := {y ∈ A : g(T (x), y, x) ⊆ C(x)} \ {x}. Then, all that we obtained
before from (i) and (ii), namely the fact that Q2(.) is KKM and that P1(x) ⊆ P2(x), ∀x ∈ A, can be derived from (i′) and (ii′). If Y = R, C(x) ≡ R+ and K(x) ≡ A, Theorem 3.1, with (i′) and (ii′), is
an im- provement of Theorem 3.2 of Ref. [3] in the sense that in (v) D needs not be convex and x0 needs not be fixed, but flexible in a subset X0. Assumptions (i) and (i′) of Theorem 3.1 about a kind
of relaxed pseudomonotonic- ity are commonly wanted to be avoided. The following result gets rid of this assump- tion. Theorem 3.2 For (QEP) assume that (iv) and (v) of Theorem 3.1 are satisfied. As-
sume also the following conditions: (ii′′) this is (ii) with the mapping g replaced by f ; (iii′) if x, y ∈ A, xα → x, xα ∈ A and tα ∈ T (xα), then there are t ∈ T (x), u ∈ C(x) + f (t, y, x), and
subnets xβ and tβ such that f (tβ, y, xβ) → u; (vi) Y \ intC(.) is closed. Then, (QEP) has a solution. Proof For x, y ∈ A, let P1(x), Φ1(x) and Q1(x) be as in the proof of Theorem 3.1. As for Theorem
3.1, we have (2). We have also the nonemptiness and closeness of E. To see the closeness of A \ P −11 (y) let xα ∈ A \ P −11 (y), xα → xˆ. Then, y ∈ P1(xα), i.e., there exists tα ∈ T (xα), f (tα, y,
xα) ∈ intC(xα). By (iii′) there are t ∈ T (xˆ), u ∈ C(xˆ) + f (t, y, xˆ), and subnets xβ and tβ ∈ T (xβ) such that f (tβ, y, xβ) → u. It follows from (vi) that u ∈ Y \ intC(xˆ). One has f (t, y, xˆ)
= u + (f (t, y, xˆ) − u) ∈ Y \ intC(xˆ) − C(xˆ) = Y \ intC(xˆ), i.e., y ∈ P1(xˆ). Hence, xˆ ∈ A \ P −11 (y), showing the required closeness. Thus, look- ing at (2) one sees that Q1(y) is closed, ∀y ∈
A. Similarly as for Theorem 3.1, we have also that ⋂ x∈X0 Q1(x) is compact. Next we verify that Q1(.) is KKM in A. Suppose the existence of a convex com- bination x∗ := ∑nj=1 αjyj in A such that x∗ ∈
⋃nj=1 Q1(yj ). Then, yj ∈ Φ1(x∗), j = 1, . . . , n. If x∗ ∈ E, then yj ∈ P1(x∗), i.e., f (T (x∗), yj , x∗) ⊆ intC(x∗). Con- sequently, the quasiconcavity in (ii′′) gives a t ∈ T (x∗) such that f (t,
x∗, x∗) ∈ intC(x∗), a contradiction. Now if x∗ ∈ A \ E, i.e., x∗ ∈ clK(x∗), then yj ∈ A ∩ K(x∗), and hence x∗ ∈ A ∩ K(x∗), another contradiction. Thus, Q1 is KKM. By virtue of Theorem 2.1, there
exists x¯ ∈ ⋂y∈A Q1(y) and, similarly as in the proof of Theorem 3.1, x¯ is a solution of (QEP). Remark 3.2 In Ref. [14], a quasiequilibrium problem slightly different from our (QEP) is studied and
several existence results different from Theorems 3.1 and 3.2 are obtained. For the special case of (QEP), where Z = L(X,Y ) and K(x) ≡ A, our J Optim Theory Appl Theorem 3.2 is different from
Theorem 3.2 in Ref. [8]. However, our assumption (iii′) is weaker than the corresponding continuity assumption in Ref. [8]. Moreover, if K(x) ≡ A and T is single-valued, (QEP) collapses to the
equilibrium problem con- sidered by many authors. Theorem 3.2 contains improvements when compared with several known results. The 0-level-quasiconcavity in (ii′′) is weaker than the concav- ity used
in Ref. [5]. The following example gives a case where our Theorem 3.2 can be applied even when T is neither usc nor lsc and f is discontinuous (so the theorems in Refs. [7, 8] cannot be used).
Example 3.1 Let X = Y = Z = R, A = [0,1], K(x) ≡ [0,1], C(x) ≡ R+, T (x) = {[−2,−1.5], if x = 0.5, [−1,−0.5], otherwise, f (t, y, x) = { 2t, if x = 0.5, t, otherwise. All, but assumption (iii′) are
clearly satisfied. We check (iii′). If x = 0.5, y ∈ A is arbi- trary, xn → x, xn = 0.5 and tn ∈ T (xn) = [−1,−0.5], then there are t ∈ [−1,−0.5] = T (x) and a subsequence tnk such that tnk → t .
Taking u = t ∈ C(x) + f (t, y, x) we see that f (tnk , y, xnk ) = tnk → u. Now, assume that x = 0.5, y ∈ A is arbitrary, xn → x and tn ∈ T (xn). Since for (iii′) we have to find the required
subsequence xnk , we have to consider only two possibilities. If xn ≡ 0.5, then tn ∈ [−2,−1.5] and there are t∗ ∈ [−2,−1.5] and tnk such that tnk → t∗. Taking t = −2 and u = 2t∗ we see that (iii′) is
satisfied. If xn = 0.5, ∀n, then tn ∈ [−1,−0.5] and there are t∗∗ ∈ [−1,−0.5] and tnk such that tnk → t∗∗. Choosing t = −2 and u = t∗∗ we see also that (iii′) is fulfilled. Thus, Theorem 3.2 can be
applied. The next example shows that assumption (ii′′) of Theorem 3.2 is essential. Example 3.2 Let X,Y,Z,A,K and C(x) be as in Example 3.1, let T (x) = [0,1] and f (t, y, x) = {−1, if y = 0.5, 1,
otherwise. It is obvious that, in this case, (QEP) do not have solutions and all the assump- tions of Theorem 3.2, but (ii′′), are fulfilled. To see that (ii′′) is violated let x be arbitrary, y1 =
0, y2 = 1, α1 = α2 = 0.5. Then f (T (x), yi, x) = {1} ⊆ intC(x) but f (T (x),α1y1 + α2y2, x) = {−1}, which does not meet intC(x). We now modify Theorem 3.1 to include some main results in Refs. [7,
8]. Theorem 3.3 Assume (i)–(iv) of Theorem 3.1 and replace assumption (v) there by J Optim Theory Appl (v′′) there exists a nonempty compact subset D ⊆ A such that for all finite subsets M ⊆ A, there
is a compact convex subset LM of A, containing M , such that ∀x ∈ LM \ D,∃yx ∈ LM , yx ∈ K(x) and f (T (x), yx, x) ⊆ intC(x). Then, (QEP) has a solution. Proof We define Pi , Φi and Qi , i = 1,2, and
argue as for Theorem 3.1 to see that Q1 is KKM and has closed values. To apply Theorem 2.4 instead of Theorem 2.1 we verify assumption (1) of Theorem 2.4. By (v′′), ∀x ∈ LM \ D, ∃yx ∈ Φ1(x) ∩ LM .
Hence x ∈ Φ−11 (yx), i.e. x ∈ A \ Q1(yx). Thus, x ∈ ⋃ y∈LM A \ Q1(y), i.e., (1) is satisfied. Then, by using Theorem 2.4 in the same way as employing Theorem 2.1 for Theorem 3.1, we complete the
proof. Corollary 3.1 Assume (ii′′) of Theorem 3.2, (iii) and (iv) of Theorem 3.1 and (v′′) of Theorem 3.3. Then, (QEP) has solutions. Proof Apply Theorem 3.3 with g ≡ f . Corollary 3.1 improves
Theorem 3.1 of Ref. [7] and Theorem 3.1 of Ref. [8] by getting rid of many strict assumptions on continuity, compactness, pseudomonotonic- ity and concavity. For example, our assumption (iii) can be
satisfied even when f is not continuous. To see this take X = Y = Z = R, A = [0,1], C(x) ≡ R+, T (x) ≡ [0,1] and f (t, y, x) = {−1, if t = 0, −0.5, if t = 0. Then {x ∈ A : f (T (x), y, x) R+} = [0,1]
is closed but f is not continuous. It is not hard to see that, for this example, all the assumptions of Theorem 3.1 are also fulfilled. Remark 3.3 After submitting the paper we observed Refs. [14–19]
with recent re- lated results on equilibrium problems. Reference [14] considers a similar prob- lem setting but requires some assumptions different from ours, e.g. K has com- pact values, f is
continuous and properly quasiconvex (in the second variable) and C(x) ≡ C whose polar cone has a weak* compact base (Theorem 1). Ref- erences [15–19] consider cases where f is multivalued. The
problem setting in Refs. [15, 19] is similar to ours, but K(x) ≡ A (i.e. an equilibrium problem, not quasiequilibrium). References [16] and [17] also investigate equilibrium problems, but here f has
two variables (not three and does not include the multifunction T ). In Ref. [18], a quasiequilibrium problem with f having two variables is studied. In each of Refs. [15–19], there are several
assumptions different from that of the present paper. 4 Applications to Quasivariational Inequalities As aforementioned in the introduction, in the special case, where Z = L(X,Y ) and f (t, y, x) =
(t, h(x) − y) with h : A → A being a given mapping, (QEP) collapses to the following quasivariational inequality: J Optim Theory Appl (QVI) Find x¯ ∈ A ∩ clK(x¯) such that, for each y ∈ K(x¯), there
exists t¯ ∈ T (x¯) such that (t¯ , y − h(x¯)) ∈ − intC(x¯). In this special case the 0-level-quasiconcavity with respect to T (x) of f (., ., x) is obvious. Rewriting Theorems 3.1 and 3.2 for this
case, we get the following new results. Corollary 4.1 Assume that: (ii) (T (x),h(x) − x) ⊆ Y \ − intC(x), ∀x ∈ A; (iii) for each y ∈ A, the set {x ∈ A : (T (x),h(x) − y) ⊆ intC(x)} is closed; (iv) A
∩ K(x) = ∅ for each x ∈ A, K−1(y) is open in A for each y ∈ A and clK(.) is usc; (v) there exists a nonempty closed compact subset D of A and a subset X0 of a compact convex subset of A such that ∀x
∈ A \ D,∃yx ∈ X0 ∩ K(x), (T (x), g(x) − yx) ⊆ intC(x). Then, (QVI) has a solution. Corollary 4.2 Assume (ii), (iv) and (v) as in Corollary 4.1. Assume further that: (iii′) if x, y ∈ A,xα → x, xα ∈ A
and tα ∈ T (xα), ∃t ∈ T (x), ∃u ∈ C(x) + (t, h(x) − y), ∃xβ , ∃tβ (subnets), (tβ, h(xβ) − y) → u; (vi) Y \ intC() is closed. Then, (QVI) has a solution. Observe that Corollary 4.2 is in nature an
extension of Theorem 2.1 of Ref. [9] to the case where A is noncompact. Assumption (ii) of Corollary 4.2 is slightly more strict, but assumption (iii′) is weaker than the corresponding assumption in
Ref. [9]. References 1. Bianchi, M., Hadjisavvas, N., Schaible, S.: Vector equilibrium problems with generalized monotone bifunctions. J. Optim. Theory Appl. 92, 527–542 (1997) 2. Bianchi, M.,
Schaible, S.: Generalized monotone bifunctions and equilibrium problems. J. Optim. Theory Appl. 90, 31–43 (1996) 3. Chadli, O., Chbani, Z., Riahi, H.: Equilibrium problems with generalized monotone
bifunctions and applications to variational inequalities. J. Optim. Theory Appl. 105, 299–323 (2000) 4. Chadli, O., Chbani, Z., Riahi, H.: Equilibrium problems and noncoercive variational
inequalities. Optimization 50, 17–27 (2001) 5. Chadli, O., Riahi, H.: On generalized vector equilibrium problems. J. Glob. Optim. 16, 33–41 (2000) 6. Lin, L.J., Ansari, Q.H., Wu, J.Y.: Geometric
properties and coincidence theorems with applications to generalized vector equilibrium problems. J. Optim. Theory Appl. 117, 121–137 (2003) 7. Kum, S., Lee, G.M.: Remarks on implicit vector
variational inequalities. Taiwan. J. Math. 6, 369–382 (2002) 8. Lee, G.M., Kum, S.: On implicit vector variational inequalities. J. Optim. Theory Appl. 104, 409–425 (2000) 9. Khanh, P.Q., Luu, L.M.:
On the existence of solutions to vector quasivariational inequalities and qua- sicomplementarity problems with applications to traffic network equilibria. J. Optim. Theory Appl. 123, 533–548 (2004)
10. Fan, K.: Some properties of convex sets related to fixed point theorems. Math. Ann. 266, 519–537 (1984) J Optim Theory Appl 11. Tarafdar, E.: A fixed point theorem equivalent to the
Fan–Knaster–Kuratowski–Mazurkiewicz theo- rem. J. Math. Anal. Appl. 128, 475–479 (1987) 12. Lin, L.J.: Applications of fixed point theorem in G-convex space. Nonlinear Anal. Theory Methods Appl. 46,
601–608 (2001) 13. Guerraggio, A., Tan, N.X.: On general vector quasioptimization problems. Math. Methods Oper. Res. 55, 347–358 (2002) 14. Fu, J.Y.: Generalized vector quasiequilibrium problems.
Math. Methods Oper. Res. 52, 57–64 (2000) 15. Fu, J.Y., Wan, A.H.: Generalized vector equilibrium problems with set-valued mappings. Math. Meth- ods Oper. Res. 56, 259–268 (2002) 16. Kristály, A.,
Varga, C.: Set-valued versions of Ky Fan’s inequality with application to variational inclusion theory. J. Math. Anal. Appl. 282, 8–20 (2003) 17. Lin, L.J., Yu, Z.T., Kassay, G.: Existence of
equilibria for multivalued mappings and its application to vectorial equilibria. J.
Files attached to this document: | {"url":"https://doc.edu.vn/tai-lieu/luan-van-the-solution-existence-of-equilibrium-problems-and-generalized-problems-41039/","timestamp":"2024-11-03T11:57:56Z","content_type":"application/xhtml+xml","content_length":"39007","record_id":"<urn:uuid:72fafb69-4800-4c34-b915-e4fe8434ad52>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00120.warc.gz"}
Newest 'ap.analysis-of-pdes' Questions
Questions tagged [ap.analysis-of-pdes]
Partial differential equations (PDEs): Existence and uniqueness, regularity, boundary conditions, linear and non-linear operators, stability, soliton theory, integrable PDEs, conservation laws,
qualitative dynamics.
Let $O\subset\mathbb{R}^d$ be a bounded domain of the class $C^{1,1}$ (or $C^2$ for simplicity). Let the operator $A_D$ be formally given by the differential expression $A=-\mathrm{div}g(x)\nabla$ | {"url":"http://casinogallerian.com/ap.html","timestamp":"2024-11-05T02:30:48Z","content_type":"text/html","content_length":"288827","record_id":"<urn:uuid:c67a5af7-98cd-4baa-96cd-066e8966f6fe>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00071.warc.gz"} |
Mathematics and Statistics (Applied Mathematics specialisation)
Note: This is an archived Handbook entry from 2009. Search for this in the current handbook
Overview: Major study in Mathematics and Statistics, specialising in Applied Mathematics.
Objectives: .
Mathematics and Statistics major (Applied Mathematics)
Completion of 50 points of study at third year level.
Core subject:
Study Period Commencement:
Credit Points:
Semester 1
Plus three of:
Study Period Commencement:
Credit Points:
Semester 2
Subject Options:
Semester 2
620-352 Graph Theory
Semester 1
620-353 Discrete Mathematics
Semester 2
620-381 Computational Mathematics
Semester 1
Related Course(s):
Bachelor of Arts and Bachelor of Science
Bachelor of Arts and Sciences
Bachelor of Commerce and Bachelor of Science
Bachelor of Science
Bachelor of Science and Bachelor of Information Systems | {"url":"https://archive.handbook.unimelb.edu.au/view/2009/!755-bb-maj+1020/","timestamp":"2024-11-09T23:56:51Z","content_type":"text/html","content_length":"4040","record_id":"<urn:uuid:4c938687-9751-41b5-b77c-8ef208ff9e50>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00856.warc.gz"}
Understanding T Distribution
What Is a T Distribution?
A T distribution is a type of probability distribution that is similar to the normal distribution with its bell shape, but has heavier tails (i.e., greater chance for extreme values).
Tail heaviness is determined by a parameter of the T distribution called degrees of freedom, with smaller values giving heavier tails, and with higher values making the T distribution resemble a
standard normal distribution with a mean of 0, and a standard deviation of 1. The T distribution is also known as "Student's T Distribution."
The Basics of T Distributions
When a sample of n observations is taken from a normally distributed population having mean M and standard deviation D, the sample mean, m, and the sample standard deviation, d, will differ from M
and D because of the randomness of the sample.
A z-score can be calculated with the population standard deviation as Z = (m – M)/{D/sqrt(n)}, and this value has the normal distribution with mean 0 and standard deviation 1. But when this z-score
is calculated using the estimated standard deviation, giving T = (m – M)/{d/sqrt(n)}, the difference between d and D makes the distribution a T distribution with (n-1) degrees of freedom rather than
the normal distribution with mean 0 and standard deviation 1.
Fast Facts
• The T distribution is a continuous probability distribution of the z-score when the estimated standard deviation is used in the denominator rather than the true standard deviation.
• The t-distribution, like the normal distribution, is bell-shaped and symmetric, but it has heavier tails, which means it tends to produce values that fall far from its mean.
Real World Example of a T Distribution Application
1. A confidence interval for the mean is a range of values, calculated from the data, meant to capture a "population" mean. This interval is m ± t*d/sqrt(n), where t is a critical value from the T
distribution. For example, a 95% confidence interval for the mean return of the Dow Jones Industrial Average in the 27 trading days prior to 9/11/2001 is -0.33% ± 2.055*1.07/sqrt(27), giving a
(persistent) mean return as some number between -0.75% and +0.09%. The number 2.055 is found from the T distribution. (A short script reproducing this calculation appears after this list.)
2. Because the T distribution has fatter tails, it can be used as a model for financial returns that exhibit excess kurtosis, rather than the normal distribution, allowing more realistic
calculations of Value at Risk in such cases. | {"url":"https://www.investopedia.com.cach3.com/terms/t/tdistribution.asp.html","timestamp":"2024-11-04T07:01:11Z","content_type":"text/html","content_length":"136182","record_id":"<urn:uuid:d58f59ed-6d8d-4eae-8c36-83124d83f2e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00263.warc.gz"} |
Lesson 3
Unlabeled Tick Marks
Lesson Purpose
The purpose of this lesson is for students to represent numbers within 100 on number lines that do not label each tick mark.
Lesson Narrative
In a previous lesson, students were introduced to the number line and represented the location of numbers with labeled tick marks and points up to 20.
In this lesson, students use multiples of 5 and 10 to locate numbers up to 100 on the number line. Students leverage their understanding of skip counting by 5 and 10 to locate numbers and build on
their understanding of the number line as a representation that includes all numbers. In future lessons, students will estimate numbers on a number line without any tick marks by approximating the
location of the number relative to the position of represented numbers.
Learning Goals
Teacher Facing
• Represent a whole number on a number line and describe the point in terms of its length from 0.
• Use skip-counting patterns to locate numbers on a number line.
Student Facing
• Let’s locate numbers on the number line.
Lesson Timeline
Warm-up 10 min
Activity 1 15 min
Activity 2 20 min
Lesson Synthesis 10 min
Cool-down 5 min
Teacher Reflection Questions
How effective were your questions in supporting students’ thinking about the structure of the number line today? What did students say or do that showed they were effective?
Suggested Centers
• Five in a Row: Addition and Subtraction (1–2), Stage 6: Add within 100 with Composing (Supporting)
• How Close? (1–5), Stage 3: Add to 100 (Supporting)
Print Formatted Materials
PowerPoint Slides Log In | {"url":"https://im.kendallhunt.com/k5/teachers/grade-2/unit-4/lesson-3/preparation.html","timestamp":"2024-11-02T22:12:59Z","content_type":"text/html","content_length":"77234","record_id":"<urn:uuid:14c42522-f96e-4b70-89a1-6ba30931982f>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00377.warc.gz"} |
A recipe question :) Mathematics Assignment Help -
A recipe question :) Mathematics Assignment Help
A recipe question 🙂 Mathematics Assignment Help.
1. A recipe calls for 3/4 cup of flour, 1/3 cup of sugar, 2/3 teaspoon cinnamon, 3/8 teaspoon baking powder, and teaspoon salt.
(a) How much flour do you need for 1/2 of a recipe?
(b) How much sugar do you need for 3/4 of a recipe?
(c) How much cinnamon do you need for 3/4 of a recipe?
(d) How much baking powder do you need for 5/6 of a recipe?
(e) How much salt do you need for 2/3 of a recipe?
1. How much flour do you need for 1/4 of a recipe?
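One way to check these fraction products is with Python's exact fractions; the snippet below just multiplies each listed amount by the scaling factor (the salt amount is missing from the question as given, so part (e) is left out).
from fractions import Fraction

flour, sugar, cinnamon, baking_powder = map(Fraction, ("3/4", "1/3", "2/3", "3/8"))

print(flour * Fraction(1, 2))          # (a) 3/8 cup of flour for 1/2 of a recipe
print(sugar * Fraction(3, 4))          # (b) 1/4 cup of sugar for 3/4 of a recipe
print(cinnamon * Fraction(3, 4))       # (c) 1/2 teaspoon of cinnamon for 3/4 of a recipe
print(baking_powder * Fraction(5, 6))  # (d) 5/16 teaspoon of baking powder for 5/6 of a recipe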
Discussion centered on the ethics case scenario presented on p. 836 (BYP17-8 Ethics), management homework Business Finance Assignment Help
Please review the Ethics Case scenario on p. 836 in your text and answer the following questions.
• Who are the stakeholders in this situation? Should they be the preparers’ main concern? Why? Why not?
• Was there anything unethical about the President’s actions? Was there anything unethical about the controller’s actions? Explain.
• Are the board members or anyone else likely to discover the misclassification? Explain.
Thermochemistry Reaction question Science Assignment Help
Chemistry Reaction Thermochemistry
Consider the following reaction. Answer question B.
6 H2(g) + P4(g) → 4 PH3(g)
(a) Using data from Appendix C, calculate ΔG° at 298 K.
Given: 29.2 kJ
(b) Calculate ΔG° at 298 K if the reaction mixture consists of 7.8 atm of H2, 0.049 atm of P4, and 0.24 atm of PH3.
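One way to set up part (b) is ΔG = ΔG° + RT·ln(Q), with the reaction quotient built from the given partial pressures; the sketch below uses the 29.2 kJ value quoted above and standard values for R and T.
import math

R = 8.314            # J/(mol*K)
T = 298.0            # K
dG0 = 29.2e3         # J/mol, from part (a)

P_H2, P_P4, P_PH3 = 7.8, 0.049, 0.24     # partial pressures in atm
Q = P_PH3**4 / (P_H2**6 * P_P4)          # reaction quotient for 6 H2 + P4 -> 4 PH3

dG = dG0 + R * T * math.log(Q)
print(dG / 1000)     # roughly -8 kJ with these numbers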
A line has a slope of a and has a y-intercept of (0,b). Mathematics Assignment Help
Which equation represents this line?
solve the following problem Mathematics Assignment Help
Find the time required for an investment of $4000 to grow to $7000 at an interes Mathematics Assignment Help
Find the time required for an investment of $4000 to grow to $7000 at an interest rate of 6.5% per year, compounded quarterly. (Round your answer to two decimal places.)
 yr
Write the equation of the line with the given slope and y-intercept. Mathematics Assignment Help
slope =
y-intercept = 7
Business research project Mathematics Assignment Help
Is there a relationship between the sales of a product and the longevity of the product’s display with the type of material used and time period of the display?
• Find three articles relevant to your research problem, with a minimum of one being peer-reviewed.
• Provide a brief summary of each article by selecting relevant research that addresses the variables in the research question(s) of the article.
• Use an in-text citation and reference for each article.
A VERTICAL line passes through the point (4, 5). Find an equation for this line. Mathematics Assignment Help | {"url":"https://anyessayhelp.com/a-recipe-question-mathematics-assignment-help/","timestamp":"2024-11-05T15:33:56Z","content_type":"text/html","content_length":"162852","record_id":"<urn:uuid:234a8bea-6df1-42ad-9c38-105c668d9c6b>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00791.warc.gz"} |
Future value and present value pdf
So present value is the current value of the cash flows which will happen in the future, and these cash flows are discounted at a given rate. Present and future values are the terms which are used in the financial world to calculate the future and present worth of money. Future value is the value of an asset at a specific date; it measures the nominal future sum of money. The operation of evaluating a present value into the future value is called capitalization.
I now know the future value and want to calculate the present value. If the bank pays interest at 10% compounded annually, how much do I need to put in the bank? Calculate the present value of a single cash flow. • Calculate the interest rate implied from present and future values. • Calculate future values and present values. Part 1. Introduction to the Present Value of a Single Amount (PV), Present Value Formulas, Tables and Calculators, Calculating the Present Value: a way to calculate the present value of any future amounts (single amount, varying amounts). Example 3.1 illustrates the
following classic problem: How does a present value of $PV (the $350,000) compare with a future value of N annual payments of $PMT? 7. Growing annuity: the present value of a growing annuity is PV = [C/(r - g)] × [1 - ((1 + g)/(1 + r))^n]. This is an example of a "Future Value of an Annuity" calculation where we solve for the Value in the Future. PV = Present Value (Lump Sum Value in the Present).
PV = Present Value (pv#); FV = Future Value (fv#); i = Interest Rate (rate#); n = Number of periods (nper#); CF = Variable Cash Flow; m = Compounding Period.
The four variables are present value (PV), time as stated as the number of periods (n), interest rate (r), and future value (FV). 2. What does the term compounding mean and how does compound interest work? Weitzman (1998) showed that when future interest rates are uncertain, using the expected net present value implies a term structure of discount rates. Use future value and present value tables to apply compound interest to accounting transactions.
Time Value of Money: for cash flows C_0, C_1, C_2, ... the present value is PV_0 = C_0 + C_1/(1 + r) + C_2/(1 + r)^2 + ..., and the future value at time T is FV_T = C_0(1 + r)^T + C_1(1 + r)^(T-1) + ....
Compounding Techniques/Future Value Techniques; Discounting/Present Value Techniques. The value of money at a future date with a given interest rate is called its future value.
Other things remaining equal, the value of cash flows in future time periods will decrease as the preference for current consumption increases and as expected inflation increases.
The symbol s_n⌉ will be referred to as the future value of the annuity. If the annuity is of level payments of P, the present and future values of the annuity are P·a_n⌉ and P·s_n⌉.
PRESENT VALUE TABLE. Present value of $1: PV factor = 1/(1 + r)^n, where r = interest rate and n = number of periods until payment or receipt (the table is laid out with periods (n) down the rows and interest rates (r) across the columns). FV = the future value of money; PV = the present value; i = the interest rate or other return that can be earned on the money; t = the number of years to take into consideration; n = the number of compounding periods of interest per year. The compound interest formula is FV = PV × (1 + i/n)^(n×t). Using the formula above, let's look at an example where you have $5,000.
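The parameters of the $5,000 example were cut off in the source, so the rate and horizon below are made-up placeholders; the point is only to show the compound-interest formula FV = PV·(1 + i/n)^(n·t) in use.
principal = 5000.0
i, n, t = 0.045, 12, 10                  # assumed annual rate, compounding periods per year, and years

fv = principal * (1 + i / n) ** (n * t)
print(round(fv, 2))                      # about 7835 with these assumed numbers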
This current worth can be found by discounting future cash flows at a predetermined discount rate. This value assists investors to compare cash flows. FV = the future value of a sum of money; PV = the present value of the same amount; r = the interest rate, or the growth rate per period; n = number of periods of growth. If we know any three of the quantities, we can always find the fourth one. Present value is the current value of a future cash flow, whereas future value is the value of that cash flow after a specific number of future periods or years. In present value, inflation is taken into consideration, so it is the discounted value of a future sum of money, whereas in future value inflation is not taken into account; it is an actual value of a
future sum of money.
Intuition behind present value: there are three reasons why a dollar tomorrow is worth less than a dollar today. Individuals prefer present consumption to future consumption, and to induce people to give up present consumption you have to offer them more in the future.
5. Complete the following, solving for the present value, PV:
Case  Future value  Interest rate  Number of periods  Present value
A     $10,000       5%             5                  $7,835.26
B     $563,000      4%             20                 $256,945.85
C     $5,000        5.5%           3                  $4,258.07
6. Suppose you want to have $0.5 million saved by the time you reach age 30 and suppose that you are 20 years old today.
Future value, on the other hand, can be defined as the worth of that asset or the cash but at a particular date in the future, and that amount will be equal in terms of value to a particular sum in the present. The future value of an annuity is the total value of payments at a specific point in time. The present value is how much money would be required now to produce those future payments.
The present value of an annuity of $1,000 for the next five years, assuming a discount rate of 10%, uses the notation PV(A,r,n) in the rest of these lecture notes. PV of $1,000 each year for the next 5 years = $1,000 × [1 - 1/(1.10)^5] / 0.10 = $3,791.
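As a quick check of the $3,791 figure, the level-annuity present value formula PV = C·[1 - (1 + r)^(-n)]/r can be evaluated directly:
C, r, n = 1000.0, 0.10, 5
pv = C * (1 - (1 + r) ** -n) / r
print(round(pv, 2))   # 3790.79, i.e. roughly $3,791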
1. Net Present Value (NPV) and Discounted Present Value Calculations: An Instructional Primer. It's often helpful to understand how future streams of income relate to the concept of net present value (the net present value of the sum of discounted cash flows). The common ways of doing this are Net Present Value (NPV) and Internal Rate of Return (IRR). Time Value of Money (TVM), Cash Flows, Bond, and Break-even Keys. NPV and IRR/YR: Discounting Cash Flows. Present value of future cash flows. PV is the present value. Present Value and Future Value Tables: Table A-1, Future Value Interest Factors for One Dollar Compounded at k Percent for n
Periods: FVIF. k,n = (1 + k) n. | {"url":"https://investingnsdzqg.netlify.app/bronstein52777cute/future-value-and-present-value-pdf-jul.html","timestamp":"2024-11-11T21:28:32Z","content_type":"text/html","content_length":"31079","record_id":"<urn:uuid:9088b6f9-f1d5-426f-8d2a-f9e2d65b2adb>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00652.warc.gz"} |
Ideas for a Blog LaTeX Parser
After writing a few blog posts using \(\LaTeX\), I have gotten used to writing math in Kramdown and MathJax.^1 However, since I plan to post more regular reviews/summaries of the books that I am
reading related to mathematics, I wanted to have a more efficient way of writing LaTeX.
Perhaps the most vexing of all is the clashing between markdown syntax and LaTeX, the latter of which is notorious for its frequent use of the backslash. This does not cause any problems when a character without any reserved uses in Markdown, such as an ordinary letter, is preceded by a backslash. For instance, \lambda prints $\lambda$. However, a new line, which is usually two consecutive backslashes, gets ugly in markdown. Each backslash must be escaped by another backslash, resulting in \\\\ for a newline within LaTeX math environments. This is also the case for my preferred math mode
delimiters: \( \) \[ \]. I prefer these delimiters because I first learned LaTeX this way. During a conversation with my math teacher Mr. Odden, I learned that these delimiters have yet another
advantage over their native TeX counterparts $ and $$: having distinct symbols for starting and ending the delimiters helps catch any syntax errors. Whereas if one were to accidentally miss a $ sign
at some point in a paragraph, it will send all the wrong strings into math mode and make a mess. The issue is that the brackets () and [] all serve syntactic purposes in Markdown, which would usually
be fine except it would register any backslashes preceding it as escapes. In this case, I would have to input \\( \\) and \\[ \\] instead, which is quite a hassle. There are other inconveniences of
LaTeX in Markdown too, such as the underscore _ (which is used for subscripts in LaTeX, and for italic/bold text in Markdown) sometimes causing problems within MathJax math environments and needing to
be escaped.^2
Of course I have tried Pandoc already (to convert between LaTeX files and Markdown).^3 However, even Pandoc cannot address the escaping problems and compatibility with MathJax. That is why I am
thinking about creating a simple LaTeX to Kramdown with MathJax parser in Python. The concept is simple, I do not need it to do many things. It also does not need to be expandable/customizable; I
only need it to convert my own LaTeX documents. Here is an initial list of features that I would like my parser to have.
Note: While writing this post and brainstorming the LaTeX parser project, I read up on MathJax’s documentation and was surprised to learn that it is in fact very customizable. I also made the
decision to stop supporting the $ and $$ mathmode delimiters so I can finally type the dollar sign in peace.
• Make a two-way parser between LaTeX and Kramdown & MathJax. Try to optimize formatting for LaTeX ⟶ Kramdown & MathJax. The Kramdown & MathJax ⟶ LaTeX conversion is mostly for practicality, most
notably interchanging \ and \\.
• Change \chapter, \section, \subsection, \subsubsection into relevant Markdown titles and the same way back. Make the heading tag numbers that correspond with the latex environments customizable.
• Identify and change mathmode delimiters \( \) \[ \] to \\( \\) \\[ \\]. MathJax does not really like newlines in the code, so we will have to remove all newlines for equation, gather, and align
environments. We replace align* with aligned, gather* with gathered*, and simply remove equation* because * can cause trouble with Markdown and is redundant since all math environments are
already enclosed in mathmode delimiters.
• Potentially also deal with Tables and/or Lists (itemize and enumerate).
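As a first stab at the delimiter handling described in the list above, a single regex pass avoids re-escaping text produced by an earlier replacement; this is only a sketch of the LaTeX to Kramdown direction, and the token table would grow as more cases come up.
import re

# LaTeX token -> Kramdown/MathJax-safe token
REPLACEMENTS = {
    "\\\\": "\\\\\\\\",   # newline inside math environments: \\ -> \\\\
    "\\(": "\\\\(",       # inline math open
    "\\)": "\\\\)",       # inline math close
    "\\[": "\\\\[",       # display math open
    "\\]": "\\\\]",       # display math close
}

# Match "\\" before the single-backslash delimiters so runs are not re-escaped.
TOKEN = re.compile(r"\\\\|\\\(|\\\)|\\\[|\\\]")

def latex_to_kramdown(text: str) -> str:
    return TOKEN.sub(lambda m: REPLACEMENTS[m.group(0)], text)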
Here is my current MathJax configuration as of writing this post:
extensions: ["tex2jax.js"],
jax: ["input/TeX", "output/HTML-CSS"],
tex2jax: {
inlineMath: [ ["\\(","\\)"] ],
displayMath: [ ["\\[","\\]"] ],
processEscapes: true
TeX: {
Macros: {
mbb: ["\\mathbb{#1}",1],
mbf: ["\\mathbf{#1}",1],
mcal: ["\\mathcal{#1}",1],
mfk: ["\\mathfrak{#1}",1],
eps: "\\varepsilon", // The better Epsilon
N: "\\mathbb{N}", // Natural Numbers
Z: "\\mathbb{Z}", // Integers
Q: "\\mathbb{Q}", // Rational Numbers
R: "\\mathbb{R}", // Real Numbers
C: "\\mathbb{C}", // Complex Numbers
F: "\\mathbb{F}", // Arbitrary Field
set: ["\\{ #1 \\}",1], // Normal Brackets Set
Set: ["\\left\\{ #1 \\right\\}",1], // Dynamically Scaled Brackets Set
setbar: ["\\middle\\mid"], // Bar for Dynamically Scaled Brackets Set
func: ["#1 \\colon #2 \\to #3",3], // Function/Mapping
floor: ["\\left\\lfloor #1 \\right\\rfloor",1], // Floor/Greatest Integer Function
ceil: ["\\left\\lceil #1 \\right\\rceil",1] // Ceiling Function
equationNumbers: { autoNumber: "AMS" },
extensions: ["AMSmath.js", "AMSsymbols.js"] // there is also "AMScd.js"
"HTML-CSS": { availableFonts: ["TeX"] }
And here is my latest MathJax configuration (currently in use).
I will also try to play with Pandoc some more to see if it could be incorporated into the solution. If it could, then the parser will be much easier to make.
1. See Kramdown, a flavor of Markdown used in Jekyll. Also see MathJax, a javascript based online LaTeX rendering utility tool. ↩
2. I am also afraid that the habit of typing an extra backslash to escape special characters will be ingrained into my muscle memory, which would be troublesome when I try to write in LaTeX again. ↩
3. See Pandoc, example 5. ↩ | {"url":"https://dzhu.page/programming/ideas-for-coding-blog-latex-parser/","timestamp":"2024-11-04T01:54:52Z","content_type":"text/html","content_length":"36057","record_id":"<urn:uuid:440e4887-f1b0-4daf-9bdc-01806bfd35b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00169.warc.gz"} |
Re: Unclear TLC evaluation behavior
Re: Unclear TLC evaluation behavior
Your specifications are indeed nonsensical--apparently much more
nonsensical than you suspect. You seem to be under the mistaken
impression that a TLA+ specification is some kind of program and the
_expression_ x'=x+1 is an assignment statement that changes the value of
x when it is evaluated.
A TLA+ specification is a formula, and evaluating a formula doesn't
change anything. Starting in the initial state, the possible next
states are specified by all possible values of a' and x' such that
your formula Next is true when x=1 and a=TRUE. To understand what is
going on, I suggest that for each of your definitions of Next, you
find 17 next states that it allows from the initial state. (This
isn't difficult, because there are so many next states that they don't
even form a set.)
When you understand what your specifications mean, you will appreciate
why TLC can't compute their possible behaviors. What particular error
messages TLC produces for them isn't of much interest. | {"url":"https://discuss.tlapl.us/msg02277.html","timestamp":"2024-11-11T05:28:26Z","content_type":"text/html","content_length":"4623","record_id":"<urn:uuid:9c482110-ccfd-448f-b495-e8463c3d97b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00736.warc.gz"} |
Kleisli category
added pointer to:
• Mark Kleiner, Adjoint monads and an isomorphism of the Kleisli categories, Journal of Algebra Volume 133 1 (1990) 79-82 [doi:10.1016/0021-8693(90)90069-Z]
diff, v35, current
I have touched formatting and wording of this entry, in the hope to increase readability.
In particular I have added explicit statement of the Kleisli equivalence as an explicit proposition (now here – previously there was just a proof, following no proposition statement).
Things left to do:
• There is still a switch of notation from objects being denoted $M, N, \cdots$ to $X, Y, \cdots$.
• The Idea-section states the universal property in a way hardly suitable for an Idea-section, but an essentially duplicate paragraph on the matter then does appear in the Properties section. I
suggest the text in the Idea section be merged into that in the Properties section.
diff, v31, current
have restated with reference to proof as suggested in #27
I have taken the liberty of re-ordering the Idea-section, keeping the simple description at the beginning and your universal characterization afterwards.
In fact the universal characterization deserves to be (re-)stated in the Properties-section of the entry with some indication as to its proof, or at least with a reference.
added to Ideas section about how the Kleisli category answers the converse question to the result that every adjunction gives rise to a monad (this is the context in which Kleisli introduced this
Thomason in his famous paper uses Grothendieck construction for lax functors.
Universal property of Kleisli and Eilenberg-Moore constructions in 2-categorical world can be found in Street’s 1972 paper Formal theory of monads in JPAA, see ref. under monad. Lack has written a
paper few years ago in which he studies these constructions in terms of more elementary lax limits.
It might be that Janelidze included varieties of infinitary algebras if they are also called algebras, I do not know, maybe our discussion was incomplete in this respect and I had a bit more
restricted impression. Still it is a different class of examples.
Yes, they are instances of the same construction. Finn is probably right that being less useful is why the version for lax functors isn’t discussed as much. There is even a version for functors
valued in Prof rather than Cat.
@Zhen: If you take a monad T as a lax functor $\mathbf{1} \to Cat$, then its Grothendieck construction is indeed the Kleisli category (as long as the morphisms are of the form $a \to T f (b)$, not $T
f (a) \to b$, of course), although I can’t say off the top of my head what exactly its universal property is. The Grothendieck construction for lax functors, and more generally normal lax functors
into Prof, as described at Conduche functor, isn’t really talked about much, possible because it’s not as useful or important as the sort that gives rise to fibrations. But maybe that stuff at
Conduche functor could be moved to or linked to by Grothendieck construction.
@Mike #7: Ah, so in fact they are both the same construction? I didn’t know lax colimits could be so easy to compute! (Is there a reason why Grothendieck construction only talks about pseudofunctors
instead of lax functors in general?)
Zoran, it sounds like our wires are crossed. In #17 you said, “well, the term “algebra of/over a monad” is also from a restricted class of examples of monads: finitary monads in Set” (my emphases),
and I was arguing against that restricted class as the sole source of the term ’algebra’. If Janelidze thought that the etymology referred to that restricted class, then I would say he is wrong,
since for one thing infinitary algebras were well-known to everyone in 1965.
My guess is now that he had no such restriction in mind.
Or maybe you’re just talking about where the motivation to use the word ’algebra’ came from
Yes, that is what we are talking about: the historical explanation of why one chooses one or another terminology. I discussed with him using the term module and he is very much against what I consider the
geometric terminology, because "these are algebras", because they are algebras in universal algebra, which is the principal historical class of examples in his view.
since equational varieties with infinitary operations were considered long before the categorical concepts
Were these also called varieties of algebras ? If so, an argument in his favor.
well, the term “algebra of/over a monad” is also from a restricted class of examples of monads: finitary monads in $Set$
But that’s just not true! From the very beginning (Eilenberg-Moore, 1965, at the very least), it’s meant something much broader: the operations can be infinitary (maybe even a proper class of
arities), and over many other categories besides $Set$. The way you write, it sounds like you might be thinking of Lawvere theory.
Or maybe you’re just talking about where the motivation to use the word ’algebra’ came from. Partly from universal algebra, surely – but I cannot believe Janelidze completely here since equational
varieties with infinitary operations were considered long before the categorical concepts came along. And the scope of the general idea, extending beyond the case over $Set$, was surely appreciated
well before Eilenberg-Moore. Where exactly does Janelidze say this?
This looks as if we should check that both terminologies are used and explained somewhere in the entry. (I have not checked to see if it has been.) There is the fact that operads were more often linear in
their uses in algebraic topology and that May (pun intended) be why the linearised ‘module’ was introduced. Clearly both are used.
15: well, the term “algebra of/over a monad” is also from a restricted class of examples of monads: finitary monads in $Set$ aka algebraic theories which lead to algebras in the sense of universal
algebra. That is the reason for the term, as stressed by Janelidze. The monads $A\otimes_k$ exhaust all monads if $k$ is a field, but the quasicoherent sheaves picture is true (modules in the monad
sense correspond to qcoh sheaves over the relative affine scheme) much more generally. In fact there is a slight catch: the affine morphism correspond to monads which have a right adjoint functor
(hence come from an adjoint triple). For cohomological purposes the case of monads without a right adjoint is equally good (Rosenberg calls that case “almost affine”).
Like Tim, I think I have heard ’module over an operad $C$’, particularly in the context of considering actions from the other side $- \circ C$ (where $\circ$ denotes the substitution product on
species), but in my experience “algebra over an operad” is much more usual for actions from the ’usual’ side, $C \circ -$.
Zoran, that sounds very similar in spirit to the less elaborate example given earlier: that a module over an algebra $A$ is the same as an algebra of the monad $A \otimes_k -$, but to say ’algebra’
over an algebra is inviting confusion. If one’s focus is on such restricted types of monad, I can see why one would feel strongly about saying ’module’ instead.
@Tim #9, I have always heard “algebra over an operad” too.
Yes, I think it is deliberate. Namely, Grothendieck has thought that the geometry should be concentrated not on the properties of spaces, but properties of morphisms of spaces (relative point of
view). Thus one considers affine morphisms generalizing affine schemes (the latter means over Spec Z). Now the affine $k$-scheme is a spectrum of a $k$-algebra. Its category of quasicoherent sheaves
of $\mathcal{O}$-modules is the category of modules over the monad induced by the algebra in the base category of quasicoherent sheaves over $Spec k$, what is nothing other than the category of $k$
-vector spaces. Here clearly modules are the appropriate ones. Now if one relatives over any base scheme $S$ then the relative affine $S$-schemes will have quasicoherent sheaves given by a monad in
the base category of quasicoherent modules. This point of view and terminology is most notably pronounced in Deligne’s 1988 Categories Tannakiennes in Grothendieck Festschrift. This or that way the
rings and algebras in the geometry over a field, in relative setup become monads, and the modules over the former and modules over the latter are both the quasicoherent (sheaves of $\mathcal{O}$-)
modules. The role of algebras as affine objects and the role of modules as quasicoherent modules are clearly distinguished in geometry and calling the latter ones algebras would make a mess in
geometric terminology.
I should have checked in with Urs before doing this,
I am fine with this. Did I even write the piece that you changed (maybe I did, I haven't checked, I don't remember).
The terminology issue with algebras/modules over monads is old. I thought there was a discussion at algebra over a monad, but maybe there is not.
Anyway, both terms have their perfect justification given the two different perspectives on monads: externally it's a monoid that has modules, internally it's something that has algebras. Seems to
me to also match the two different points of views exposed at the very entry Kleisli category.
I thought it would have been the link between monads and monoids myself. People never seem to say ’algebra over a monoid’ (although they do say ’algebra over an operad’).
@Mike #7: that’s what I was thinking too.
I may be wrong but I thought that the use of ’module over a monad’ crept in from the close link between operads and monads.
@Zoran #4: that’s interesting; I wasn’t aware of that. Do you know anything about the history of this? Because “algebra of a monad” (or over a monad) has been around for more than 45 years; since the
geometry community presumably knew this, it sounds as if they deliberately decided to break with that usage. (This is not to say that I think “module” is an illogical choice, although there is some
potential for confusion, as when one speaks of a module over an algebra of an operad.)
Probably because they are both lax colimits.
The definition of composition in the Grothendieck construction bears some similarity to Kleisli composition, but I haven’t been able to see exactly why.
At the definition of Kleisli composition, what does the phrase
as in the Grothendieck construction
mean?
but I believe that “algebra of a monad” is much more common and familiar than “module of a monad”
This depends on a community. In pure category/algebra community yes, but in geometry community the other way around. But then module over a monad not “of a monad”.
I did some editing at Kleisli composition. Probably I should have checked in with Urs before doing this, but I believe that “algebra of a monad” is much more common and familiar than “module of a
monad”, and so I interchanged the order of those two words throughout the article. We should probably discuss this anyway. I also fixed a few sentences (one was missing some words).
You mean Kleisli composition.
I have tried to brush-up Kleisli category; also made Kleisli composition redirect to it and cross-linked with monad (in computer science)
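(Not part of the entry, just an illustration for the computer-science cross-link: Kleisli composition for the Maybe monad, sketched in Python with None standing in for "nothing".)
from typing import Callable, Optional, TypeVar

A, B, C = TypeVar("A"), TypeVar("B"), TypeVar("C")

def kleisli_compose(g: Callable[[B], Optional[C]],
                    f: Callable[[A], Optional[B]]) -> Callable[[A], Optional[C]]:
    # Compose f : A -> T B with g : B -> T C to get A -> T C, where T is the Maybe monad.
    def composed(a):
        b = f(a)
        return None if b is None else g(b)   # this is "bind" for Maybe
    return composed

safe_sqrt = lambda x: x ** 0.5 if x >= 0 else None
safe_inv = lambda x: 1 / x if x != 0 else None
print(kleisli_compose(safe_inv, safe_sqrt)(4.0))    # 0.5
print(kleisli_compose(safe_inv, safe_sqrt)(-1.0))   # None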
I am wondering about the following somewhat vague question:
Given an adjoint pair $\Box \dashv \lozenge$ of a monad and a comonad on some category $\mathcal{C}$, I am looking at an application where one wants to “glue” (for lack of a better word) their
Kleisli categories to a new category which fully contains both Kleisli categories, but in addition has morphisms going from one to the other by compositions of the $\Box$-counit with the $\lozenge$-unit.
While I can just define this, I am wondering if this construction has some good general abstract meaning. Is it just my intended application that makes me want to look at this construction, or do
universal algebraists arrive at the same notion (or something similar), on general grounds?
Asking Google this question, the engine suggests
On p. 2 (of 26) in this article it says that a category with morphisms of the form $\Box X \longrightarrow \lozenge Y$ has been considered in
• S. Brookes, S. Geva, Computational comonads and intensional semantics, Proc. Durham Conf. Categories in Computer Science (1991)
Unfortunately, I have not found a copy of this article yet. But presumably the construction in question is that also found in
section 6 “Double Kleisli categories” of:
• Stephen Brookes, Kathryn Van Stone, Monads and Comonads in Intensional Semantics (1993) [dtic:ADA266522, pdf]
These double Kleisli categories might be what I need, using that necessity and possibility satisfy a distributive law in the ambidextrous case.
I found a link to Computational comonads and intensional semantics here. However, I don’t see the definition I would expect there (namely, that of “double Kleisli categories” in Monads and Comonads
in Intensional Semantics), though their “computation comonads” in §4 seem related (consider a pointed functor rather than a monad).
I had seen that .ps file earlier, but my network hadn't allowed me to access it, for some reason. Now I have gotten hold of it, have transformed it into a pdf and have recorded it (here) at monad (in
computer science) .
Yes, strange that they don’t say what Power and Watanabe credit them for.
Looking at their article now, for a moment I thought that their computational comonads include those obtained from ambidextrous adjunctions, with their “$\gamma$” being the additional unit map. But
this does not seem to fit their axioms.
So when I express quantum measurement/state preparation via writer/reader-monads as shown here, then the construction looks quite reminiscent of the constructions involved in the “double Kleisli
category” of Brookes & Van Stone 1993 (§6).
It feels like there should be more to this similarity. Possibly the “BvS double Kleisli category” for $\Box \dashv \lozenge$ on linear types over finite sets is the correct fully abstract incarnation
of the category of quantum gates, in some sense.
But I still don’t understand the BvS double Kleisli category well enough (I mean, I certainly understand its definition and existence, but I am not sure yet about what its morphisms really “mean”).
[edit: I had two mistakes here: On the one hand my earlier diagram did not actually commute (this is fixed now), on the other hand the BvS construction does not actually apply to the situation (not
sure what to make of that)]
31,32 the entire volume with the article in pdf is at http://library.lol/main/8D1FA6858DFA95CB60323AC67851C8C8
Mixed distributive laws are, of course, earlier, from early 1970s at least.
Many authors discuss distributive laws, but I was after the “double Kleisli category” induced by a distributive law. For this, the single reference that I am aware of, so far, remains Brookes & Van
Stone (1993) §6.
@Urs: Harmer–Hyland–Melliès’s Categorical Combinatorics for Innocent Strategies and Garner’s Polycategories via pseudo-distributive laws are also references (there the construction is called the
“two-sided Kleisli construction”). However, they are much later references and do not cite any other source for the construction.
Thanks for the further references! I have added pointer to Garner’s article here.
diff, v40, current
One of the points of the Street’s 1972 JPAA article Formal theory of monads is that the distributive laws between monads are simply monads in the bicategory of monads, and the mixed distributive laws
are simply monads in the op-cop dual of that bicategory. So, in principle, one is just taking a Kleisli construction in that bicategory. But when writing it out explicitly one just has a comonad on a
category with extra data, writing out which is straightforward, and then writes out the Kleisli (you call it co-Kleisli) category in this case.
P.S. In the case of algebras and coalgebras instead of general comonads and monads I have once written in detail the bicategory and some issues related to the bicategory of such mixed distributive
laws in an unpublished preprint Bicategory of entwinings, arxiv:0805.4611 (The referee complained (in 2008) that the paper should be done with more categorical theory and less explicit methods and
suggested to resubmit elsewhere with inclusion of such methods, but I left it as it is and did not publish.)
added pointer to:
• David Jaz Myers, §2.3 of: Categorical systems theory, book project [github, pdf]
diff, v42, current
added pointer to:
• Thomas Streicher, pp. 54 in: Introduction to Category Theory and Categorical Logic (2003) [pdf, Streicher-CategoryTheory.pdf:file]
diff, v53, current
and pointer to:
• Francis Borceux, pp. 191 in: Handbook of Categorical Algebra, Vol 2 Categories and Structures, Encyclopedia of Mathematics and its Applications 50, Cambridge University Press (1994) [
diff, v53, current
at the end of the proposition “Kleisli equivalence” (here) I have added a tikzcd-diagram showing the component maps at a glance, including the reverse map on hom-sets (by precomposition with the
diff, v54, current
Have also expanded the proof (here).
diff, v55, current
added (here) statement of the two-sided (“double”) Kleisli category in the case of a comonad distributing over the given monad
diff, v58, current
added (here) statement and proof that the compatibility of Kleisli composition under monad transformations passes to two-sided Kleisli categories if the transformation is compatible with the two
distributive laws in the evident way
diff, v59, current | {"url":"https://nforum.ncatlab.org/discussion/4193/kleisli-category/?Focus=34244","timestamp":"2024-11-05T16:06:45Z","content_type":"application/xhtml+xml","content_length":"103391","record_id":"<urn:uuid:42279305-f547-412f-b487-826bff2b53b0>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00872.warc.gz"} |
the singular implication of uploading one hour every second to @youtube …
This is an astonishing statistic: Youtube users now upload one hour of video every second:
The video (and accompanying website) is actually rather ineffective at really conveying why this number is so astounding. Here’s my take on it:
* assume that the rate of video uploads is constant from here on out. (obviously over-conservative)
* the ratio of “Youtube time” to real time is 1/3600 (there are 3600 seconds in an hour)
* so how long would it take to upload 2,012 years worth of video to Youtube?
Answer: 2012 / 3600 = 0.56 years = 6.7 months = 204 days
Let’s play with this further. Let’s assume civilization is 10,000 years old. it would take 10,000 / 3600 = 33 months to document all of recorded human history on YouTube.
Let's go further with this: Let's assume that everyone has an average lifespan of 70 years (note: not life expectancy! human lifespan has been constant for millennia). Let's also assume that people sleep for roughly one-third of their lives, and that of the remaining two-thirds, only half is "worth documenting". That's (70 × 2/3 × 1/2) / 3600 years ≈ 57 hours of data per human being uploaded to YouTube to fully document an average life in extreme detail.
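(Back-of-envelope check of the numbers above, assuming the upload rate stays at one hour of video per second:)
SECONDS_PER_HOUR = 3600                  # YouTube receives 1 hour of video per real-time second
HOURS_PER_YEAR = 24 * 365.25

def real_years_to_upload(video_years):
    return video_years / SECONDS_PER_HOUR

print(real_years_to_upload(2012) * 365.25)                      # ~204 days for 2,012 years of video
print(real_years_to_upload(10_000) * 12)                        # ~33 months for 10,000 years of video
documented_life = 70 * (2 / 3) * (1 / 2)                        # awake two-thirds of the time, half of that documented
print(real_years_to_upload(documented_life) * HOURS_PER_YEAR)   # ~57 hours per lifetime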
Obviously that number will shrink, as the rate of upload increases. Right now it takes YouTube about 57 hours to upload the equivalent of a single human lifespan; eventually it will be down to 1 hour. And from there, it will shrink to minutes and even seconds.
If YouTube ever hits, say, the 1 sec = 1 year mark, then that means that the lifespan of all of the 7 billion people alive as of Jan 1st 2012 would require only 37 years of data upload. No, I am not
using the word "only" in a sarcastic sense… I assume YT will get to the 1sec/1yr mark in less than ten years, especially if data storage continues to follow its own cost curve (we are at 10c per
gigabyte for data stored on Amazon’s cloud now).
Another way to think of this is, in 50 years, YouTube will have collected as many hours of video as have passed in human history since the Industrial Revolution. (I’m not going to run the numbers,
but that’s my gut feel of the data). These are 1:1 hours, after all – just because one hour of video is uploaded every second, doesn’t mean that the video only took one second to produce – someone,
somewhere had to actually record that hour of video in real time).
Think about how much data is in video. Imagine if you could search a video for images, for faces, for sounds, for music, for locations, for weather, the way we search books for text today. And then
consider how much of that data is just sitting there in YT’s and Google’s cloud. | {"url":"https://www.haibane.info/2012/01/25/the-singular-implication-of-uploading-one-hour-every-second-to-youtube/","timestamp":"2024-11-02T18:09:49Z","content_type":"text/html","content_length":"62821","record_id":"<urn:uuid:c20b6d03-f273-4870-bcd3-5941d127d979>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00275.warc.gz"} |
Specification of restraints
Static and dynamic restraints
Dynamic restraints are created on the fly, and currently include:
Dynamic restraints are not written into the restraints file by Restraints.write() (only static restraints are).
Static restraints can be added with the Restraints.add() command, or can be read from a restraints file (see Section B.2). Collections of static restraints useful for various purposes (e.g. for
restraining all bond lengths or angles, or for using template information) can also be automatically generated with the Restraints.make() command.
Each static restraint is formulated as a mathematical form (e.g. a Gaussian function) which acts on one or more ‘features’ of the model (e.g. a bond length). Any feature can be used with any
mathematical form, with the exception of forms.multi_binormal, which generally only works properly with features.dihedral. Both feature types and mathematical forms are described below.
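For orientation, here is a sketch of adding one such static restraint, combining a Gaussian form with a distance feature; the input file name is hypothetical, and the calls follow the forms.gaussian(group, feature, mean, stdev) signature documented below together with the atom-indexing style shown in the next subsection.
from modeller import *

env = environ()
mdl = model(env, file='example.pdb')       # hypothetical input structure
at = mdl.atoms

# Harmonically restrain the CA(1)-CA(10) distance to 6.0 angstroms with standard deviation 0.1
mdl.restraints.add(forms.gaussian(group=physical.xy_distance,
                                  feature=features.distance(at['CA:1'], at['CA:10']),
                                  mean=6.0, stdev=0.1))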
Feature types
Each feature is a Python class, which takes a defined number of atom ids as input. Each of these atom ids can be:
• An Atom object, from the current model (e.g., m.atoms['CA:1']; see model.atoms).
• A Residue object, from the current model (e.g., m.residues['3']; see Sequence.residues), in which case all atoms from the residue are used.
• A list of atoms or residues returned by model.atom_range() or model.residue_range(), in which case all atoms from the list are used.
• A model object, in which case all atoms in the model are used.
• A selection object, in which case all atoms in the selection are used.
Features can be any of the classes in the features module (see below) or you can create your own classes; see Section 7.1.
Distance in angstroms between the given two atoms.
Angle in radians between the given three atoms.
Dihedral angle in radians between the given four atoms.
Given an even number of atoms, this calculates the distance between the first two atoms, the third and fourth, and so on, and returns the shortest such pair distance, in angstroms.
Area (in Å²) exposed to solvent of the given atom. Note that this feature cannot be used in optimization, as first derivatives are always returned as zero. Note also that model.write_data() should
first be called with OUTPUT='PSA' to calculate the accessibility values.
Atomic density (number of atoms within contact_shell of the given atom). Note that this feature cannot be used in optimization, as first derivatives are always returned as zero.
Value of the x coordinate (in angstroms) of the given atom.
Value of the y coordinate (in angstroms) of the given atom.
Value of the z coordinate (in angstroms) of the given atom.
Difference in radians between two dihedral angles (defined by the first four and last four atoms).
Mathematical forms of restraints
Each mathematical form is a Python class, which takes one or features (above) as arguments to act on. group is used to group restraints into “physical feature types” for reporting purposes in
selection.energy(), etc, and should be a Python object from the physical module (see Table 6.1 and Section 6.10.1). You can also create your own mathematical forms by creating new Python classes; see
Section 7.1.
Each of the mathematical forms is depicted in Figure 5.1.
forms.lower_bound(group, feature, mean, stdev)
Harmonic lower bound (left Gaussian). The given feature is harmonically restrained to be greater than mean with standard deviation stdev. See Eq. A.82.
forms.upper_bound(group, feature, mean, stdev)
Harmonic upper bound (right Gaussian). The given feature is harmonically restrained to be less than mean with standard deviation stdev. See Eq. A.83.
forms.gaussian(group, feature, mean, stdev)
Single Gaussian (harmonic potential). The given feature is harmonically restrained to be around mean with standard deviation stdev. See Eq. A.63.
forms.multi_gaussian(group, feature, weights, means, stdevs)
Multiple Gaussian. The given feature is restrained by a linear combination of Gaussians. weights, means and stdevs should all be lists (of the same size) specifying the weights of each Gaussian in
the linear combination, their means, and their standard deviations, respectively. See Eq. A.66.
forms.factor(group, feature, factor)
Simple scaling. The given feature value is simply multiplied by factor to yield the objective function contribution.
forms.lennard_jones(group, feature, A, B)
Lennard-Jones potential. The given feature is restrained by means of a Lennard-Jones potential, with control parameters A and B. See Eq. A.90.
forms.coulomb(group, feature, q1, q2)
Coulomb point-to-point potential. The given feature is restrained by means of an inverse square Coulomb potential created by charges q1 and q2. See Eq. A.87.
forms.cosine(group, feature, phase, force, period)
Cosine potential. The given feature is restrained by a CHARMM-style cosine function, with the given phase shift, force constant and periodicity. See Eq. A.84.
forms.multi_binormal(group, features, weights, means, stdevs, correls)
The given two features (generally both features.dihedral) are simultaneously restrained by a multiple binormal restraint. weights, means, stdevs and correls should all be lists (of the same size).
weights specifies the weights of each term in the function. means and stdevs give the mean and standard deviation of each feature for each term, and each element should thus be a 2-element list.
correls gives the correlation between the two features for each term. See Eq. A.76.
forms.spline(group, feature, open, low, high, delta, lowderiv, highderiv, values)
Cubic spline potential. The given feature is restrained by an interpolating cubic spline, fitted to values, which should be a list of objective function values. The first element in this list
corresponds to feature value low, the last to feature value high, and points in the list are taken to be equally spaced by delta in feature space. The spline can either be open (open = True) in which
case the first derivatives of the function at the first and last point in values are given by lowderiv and highderiv respectively, or closed (open = False) in which case lowderiv and highderiv are
ignored. A closed spline 'wraps around' in such a way that feature values low and high are taken to refer to the same point, and is useful for periodic features such as angles. See Eq. A.97.
forms.nd_spline(group, values)
Multi-dimensional cubic spline potential. The given feature is restrained by an interpolating multi-dimensional cubic spline, fitted to values, which should be an N-dimensional list of objective
function values. (For example, for a 2D spline, it should be a list of lists. The outer list goes over the second feature, and contains one or more rows, each of which is a list which goes over the
first feature.) After creating the object, you should then call the 'add_dimension' function N times:
nd_spline.add_dimension(feature, open, low, high, delta, lowderiv, highderiv)
This initializes the next dimension of the multi-dimensional cubic spline. Parameters are as for 'forms.spline()', above. Note that lowderiv and highderiv are used for every spline, for efficiency.
(For example, in an x-by-y 2D spline, there will be 'x' splines in the second dimension, each of which could have its own lowderiv and highderiv, but one pair of values is actually used for all 'x'
of these splines.)
Figure 5.1: Each mathematical form generates a contribution to the objective function as a function of one or more features. Note that this contribution is the negative log of the probability
Restraint violations
When MODELLER optimizes the objective function, the aim is to fulfill all of the restraints as well as possible. In complex cases, this will be difficult or impossible to do, and some of the
restraints will not be optimal. In this case, MODELLER reports the deviation of each restraint from the optimum as a ‘violation’. There are four kinds of restraint violation used by MODELLER:
• The heavy violation is defined as the difference between the current value of the feature, and the global minimum of the same feature according to the restraint's mathematical form.
• The relative heavy violation is the heavy violation normalized by dividing by the standard deviation of the global minimum.
• The minimal violation is defined as the difference between the current value of the feature, and the nearest minimum of the same feature according to the mathematical form. Where this minimum
corresponds to the global minimum (or for forms which have no well-defined local minimum, such as cubic splines), the minimal violation is the same as the heavy violation.
• The relative minimal violation is the minimal violation normalized by dividing by the standard deviation of the local minimum.
Equations for relative heavy violations for most mathematical forms are given in Section A.3.2.
Automatic builds 2017-07-19 | {"url":"https://salilab.org/modeller/9.19/manual/node108.html","timestamp":"2024-11-07T19:00:02Z","content_type":"text/html","content_length":"20919","record_id":"<urn:uuid:3dbc0978-f484-46e6-b0d8-a120c8a4b3fc>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00217.warc.gz"} |
Confusion Matrix | FlowHunt
A confusion matrix is a tool used in machine learning to evaluate the performance of a classification model. It is a specific table layout that allows visualization of the performance of an
algorithm, typically a supervised learning one. In a confusion matrix, each row of the matrix represents the instances in an actual class while each column represents the instances in a predicted
class. This matrix is particularly useful in understanding the true positive, true negative, false positive, and false negative predictions made by a model.
A confusion matrix provides a class-wise distribution of the predictive performance of a classification model. This organized mapping allows for a more comprehensive mode of evaluation, offering
insights into where a model may be making errors. Unlike simple accuracy, which can be misleading in imbalanced datasets, a confusion matrix provides a nuanced view of model performance.
Components of a Confusion Matrix
1. True Positive (TP): These are cases in which the model correctly predicted the positive class. For example, in a test for detecting a disease, a true positive would be a case where the test
correctly identifies a patient with the disease.
2. True Negative (TN): These are cases where the model correctly predicted the negative class. For example, the test correctly identifies a healthy person as not having the disease.
3. False Positive (FP): These are cases where the model incorrectly predicted the positive class. In the disease test example, this would be a healthy person incorrectly identified as having the
disease (Type I Error).
4. False Negative (FN): These are cases where the model incorrectly predicted the negative class. In our example, it would be a sick person incorrectly identified as healthy (Type II Error).
Importance of Confusion Matrix
A confusion matrix provides a more comprehensive understanding of the model performance than simple accuracy. It helps to identify whether the model is confusing two classes, which is particularly
important in cases with imbalanced datasets where one class significantly outnumbers the other. It is essential for calculating other important metrics such as Precision, Recall, and the F1 Score.
The confusion matrix not only allows the calculation of the accuracy of a classifier, be it the global or the class-wise accuracy, but also helps compute other important metrics that developers often
use to evaluate their models. It can also help compare the relative strengths and weaknesses of different classifiers.
Key Metrics Derived from Confusion Matrix
• Accuracy: The ratio of correctly predicted instances (both true positives and true negatives) over the total number of instances. While accuracy gives a general idea about the model’s
performance, it can be misleading in imbalanced datasets.
• Precision (Positive Predictive Value): The ratio of true positive predictions to the total predicted positives. Precision is crucial in scenarios where the cost of a false positive is high.
\[ \text{Precision} = \frac{TP}{TP + FP} \]
• Recall (Sensitivity or True Positive Rate): The ratio of true positive predictions to the total actual positives. Recall is important in scenarios where missing a positive case is costly.
\[ \text{Recall} = \frac{TP}{TP + FN} \]
• F1 Score: The harmonic mean of Precision and Recall. It provides a balance between the two metrics and is especially useful when you need to take both false positives and false negatives into account.
\[ \text{F1 Score} = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \]
• Specificity (True Negative Rate): The ratio of true negative predictions to the total actual negatives. Specificity is useful when the focus is on correctly identifying the negative class.
\[ \text{Specificity} = \frac{TN}{TN + FP} \]
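As a quick illustration of these formulas with made-up counts:
TP, FP, FN, TN = 40, 10, 5, 45   # illustrative counts only

precision   = TP / (TP + FP)                     # 0.80
recall      = TP / (TP + FN)                     # ~0.889
f1          = 2 * precision * recall / (precision + recall)
specificity = TN / (TN + FP)                     # ~0.818
accuracy    = (TP + TN) / (TP + TN + FP + FN)    # 0.85
print(precision, recall, f1, specificity, accuracy)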
Use Cases of Confusion Matrix
1. Medical Diagnosis: In scenarios like disease prediction, where it is crucial to identify all cases of the disease (high recall) even if it means some healthy individuals are diagnosed as sick
(lower precision).
2. Spam Detection: Where it is important to minimize false positives (non-spam emails incorrectly marked as spam).
3. Fraud Detection: In financial transactions, where missing a fraudulent transaction (false negative) can be more costly than flagging a legitimate transaction as fraudulent (false positive).
4. Image Recognition: For instance, recognizing different animal species in images, where each species represents a different class.
Confusion Matrix in Multi-Class Classification
In multi-class classification, the confusion matrix extends to an N x N matrix where N is the number of classes. Each cell in the matrix indicates the number of instances where the actual class is
the row and the predicted class is the column. This extension helps in understanding the misclassification among multiple classes.
Implementing Confusion Matrix in Python
Tools like Python’s scikit-learn provide functions such as confusion_matrix() and classification_report() to easily compute and visualize confusion matrices. Here is an example of how to create a
confusion matrix for a binary classification problem:
from sklearn.metrics import confusion_matrix, classification_report
# Actual and predicted values
actual = ['Dog', 'Dog', 'Cat', 'Dog', 'Cat']
predicted = ['Dog', 'Cat', 'Cat', 'Dog', 'Cat']
# Generate confusion matrix
cm = confusion_matrix(actual, predicted, labels=['Dog', 'Cat'])
# Display the confusion matrix
print(cm)
# Generate classification report
print(classification_report(actual, predicted))
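The same confusion_matrix() call also covers the multi-class case described above; the three-class labels below are made up purely for illustration:

from sklearn.metrics import confusion_matrix

actual = ['Cat', 'Dog', 'Bird', 'Dog', 'Bird', 'Cat']
predicted = ['Cat', 'Dog', 'Dog', 'Dog', 'Bird', 'Bird']

# A 3 x 3 matrix: rows correspond to actual classes, columns to predicted classes
cm = confusion_matrix(actual, predicted, labels=['Cat', 'Dog', 'Bird'])
print(cm)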
1. In the study “Integrating Edge-AI in Structural Health Monitoring domain” by Anoop Mishra et al. (2023), the authors explore the integration of edge-AI in the structural health monitoring (SHM)
domain for real-time bridge inspections. The study proposes an edge AI framework and develops an edge-AI-compatible deep learning model to perform real-time crack classification. The
effectiveness of this model is evaluated through various metrics, including accuracy and the confusion matrix, which helps in assessing real-time inferences and decision-making at physical
sites. Read more.
2. “CodeCipher: Learning to Obfuscate Source Code Against LLMs” by Yalan Lin et al. (2024) addresses privacy concerns in AI-assisted coding tasks. The authors present CodeCipher, a method that
obfuscates source code while preserving AI model performance. The study introduces a token-to-token confusion mapping strategy, reflecting a novel application of the concept of confusion,
although not directly a confusion matrix, in protecting privacy without degrading AI task effectiveness. Read more.
3. In “Can CNNs Accurately Classify Human Emotions? A Deep-Learning Facial Expression Recognition Study” by Ashley Jisue Hong et al. (2023), the authors examine the ability of convolutional neural
networks (CNNs) to classify human emotions through facial recognition. The study uses confusion matrices to evaluate the CNN’s accuracy in classifying emotions as positive, neutral, or negative,
providing insights into model performance beyond basic accuracy measures. The confusion matrix plays a crucial role in analyzing the misclassification rates and understanding the model’s behavior
on different emotion classes. Read more.
These articles highlight the diverse applications and importance of confusion matrices in AI, from real-time decision-making in structural health monitoring to privacy preservation in coding, and
emotion classification in facial recognition.
Communicating the sum of sources in a 3-sources/3-terminals network
We consider the network communication scenario in which a number of sources Si, each holding independent information Xi, wish to communicate the sum Σ Xi to a set of terminals tj. In this work we
consider directed acyclic graphs with unit capacity edges and independent sources of unit-entropy. The case in which there are only two sources or only two terminals was considered by the work of
Ramamoorthy [ISIT 2008] where it was shown that communication is possible if and only if each source terminal pair Si/tj is connected by at least a single path. In this work we study the
communication problem in general, and show that even for the case of three sources and three terminals, a single path connecting source/terminal pairs does not suffice to communicate Σ Xi. We then present an efficient encoding scheme which enables the communication of Σ Xi for the three sources, three terminals case, given that each source terminal pair is connected by two edge disjoint paths. Our encoding scheme includes a structural decomposition of the network at hand which may be found useful for other network coding problems as well.
Publication series
Name IEEE International Symposium on Information Theory - Proceedings
ISSN (Print) 2157-8102
Conference 2009 IEEE International Symposium on Information Theory, ISIT 2009
Country/Territory Korea, Republic of
City Seoul
Period 28/06/09 → 3/07/09
Average - Quantitative Aptitude (MCQ) questions
Dear Readers, Welcome to Quantitative Aptitude Average questions and answers with explanation. These Average solved examples with shortcuts and tricks will help you learn and practice for your
Placement Test and competitive exams like Bank PO, IBPS PO, SBI PO, RRB PO, RBI Assistant, LIC,SSC, MBA - MAT, XAT, CAT, NMAT, UPSC, NET etc.
After practicing these tricky Average multiple choice questions, you will be exam ready to deal with any objective type questions.
1) In the India-Australia one day match, due to rain, India needed 324 runs in 48 overs to win. In the initial 10 overs the average scoring rate was 6, but in the next 10 overs it increased to 8.5. It then declined to 5.5 in the next 10 overs and again rose to 7 in the next 10 overs. What average scoring rate is now needed to win the match? (A worked check is given after the answer options.)
- Published on 08 May 17
a. 8.25
b. 6.75
c. 7.75
d. 7.0
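Worked check (added for clarity; not part of the original quiz): runs scored in the first 40 overs = 10(6) + 10(8.5) + 10(5.5) + 10(7) = 60 + 85 + 55 + 70 = 270. Runs still needed = 324 - 270 = 54 in the remaining 8 overs, so the required average = 54 / 8 = 6.75, which is option (b).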
2) The average age of 7 family members is 75 years, but the average age of 6 of them is 74 years 6 months. What is the age of the 7th family member?
- Published on 11 Apr 17
a. 75.5
b. 78
c. 68
d. 80
3) The average age of 5 people is 42 years. Another group has 8 people with an average age of 81 years. When both groups are mixed, what is the average age of all the people?
- Published on 11 Apr 17
a. 64 years
b. 66 years
c. 61.5 years
d. 70 years
4) Average age of 5 people in a family is 55 years. However it is seen that 3 of the 5 people also have an average age of 55 years. What will be the average age of remaining two people of the family?
- Published on 11 Apr 17
a. 82.5 years
b. 27.5 years
c. 55 years
d. 110 years
5) Which of the following exactly denotes the average price of all the goods together if, Ramesh buys ‘a’ number of goods of type ‘A’ at price of Rs. ‘E’ each, ‘b’ number of goods of type ‘B’ at
price of Rs. ‘F’ each and ‘c’ number of goods of type ‘C’ at price of Rs. ‘G’ each?
- Published on 11 Apr 17
a. (E+F+G) / (a+b+c)
b. (AE+BF+CG) / (a+b+c)
c. (aE+bF+cG) / (a+b+c)
d. (aA+bB+cC) / (a+b+c)
6) The average of fifty numbers is 28. If two numbers, namely 25 and 35 are discarded, the average of the remaining numbers is nearly,
- Published on 07 Jul 17
a. 29.27
b. 27.92
c. 27.29
d. 29.72
7) The average of three numbers is 77. The first number is twice the second and the second number is twice the third. Find the first number.
- Published on 07 Jul 17
a. 33
b. 66
c. 77
d. 132
8) Average age of A and B is 30 years, that of B and C is 32 years and the average age of C and A is 34 years. The age of C is
- Published on 07 Jul 17
a. 33 years
b. 34 years
c. 35 years
d. 36 years
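Worked check (added for clarity; not part of the original quiz): A + B = 60, B + C = 64 and C + A = 68. Adding the three equations gives 2(A + B + C) = 192, so A + B + C = 96. Therefore C = 96 - (A + B) = 96 - 60 = 36 years, which is option (d).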
9) 3 boxes have some average weight. When one box which weighs 89 kg is replaced by another box, the average weight increases by 5 kg. How much does the new box weigh?
- Published on 17 Mar 17
a. 109 kg
b. 94 kg
c. 104 kg
d. 84 kg
10) Vijay's expenditure for the first 3 days is Rs. 100, Rs. 125 and Rs. 85. What is his 4th-day expenditure if his average expenditure over the 4 days is Rs. 90?
- Published on 17 Mar 17
a. Rs. 220
b. Rs. 60
c. Rs. 50
d. Rs. 90 | {"url":"https://www.careerride.com/mcq/average-quantitative-aptitude-mcq-questions-31.aspx","timestamp":"2024-11-06T21:16:18Z","content_type":"text/html","content_length":"51023","record_id":"<urn:uuid:32735d10-f6a4-4c13-9a50-b1b6a944c7f5>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00892.warc.gz"} |
This semester, I teach the master's courses
Computational Methods I and Gauge/Gravity Duality.
Please check the courses' websites for more information. Most importantly, you should not forget to indicate your preferences for each problem sheet. How this works in general is explained below. In
the previous semesters, I prepared and gave the following courses at the University of Wrocław:
For completeness, as a PhD student, I also gave tutorials for
• Theoretical Mechanics, and
• String Theory I,
both taught by Dieter Lüst at the LMU Munich.
Here is an overview about my courses. Listed are the most recent completed semesters. If you are interested in any other installments, please follow the links above.
An Introduction to String Theory
One of the greatest puzzles in theoretical physics is to find a consistent quantum theory of gravity, the only one of the four fundamental forces we do not know how to quantise yet. Despite decades of intensive work, we have no conclusive answer to this problem. Currently, one of the main contenders in the race towards quantum gravity is String Theory. It is based on a simple but far-reaching idea: to substitute point-like particles with extended objects, strings and membranes. In this course, we will explore the implications of this idea. More precisely, we study the bosonic string and see that it describes not only gravity but also gauge theories on D-branes. Gauge theories are required to describe the remaining three fundamental forces. This renders string theory a theory of everything. The course is tailored towards master and PhD students who are familiar with
• electrodynamics
• special relativity
• quantum field theory.
Basic knowledge of general relativity is an advantage, but you should also be fine if you have not attended the GR course yet. There are many good books on the subject; four that I like are
• Barton Zwiebach: A First Course in String Theory
• Blumenhagen, Lüst, Theisen: Basic Concepts of String Theory
• Polchinski: String Theory, Volume 1
• Johnson: D-Branes
Moreover, David Tong has some excellent lecture notes. We will have 2.5 hours of lectures and 2.5 hours of tutorial each week. Exercises will be posted here a week before the tutorial they are
discussed in. Please keep in mind that active participation in the tutorials is important to pass the course. M.Sc. Luca Scala will be the assistant for the tutorials. Please feel free to contact him
or me if you have any questions.
There are many other interesting approaches to Quantum Gravity, and string theory is by far not the only one. There is another course this semester, Introduction to Quantum Gravity, which I highly recommend.
Important: Students will be assigned to exercise problems by the system described below, before the tutorial. Please familiarise yourself with this system and do not forget to indicate your preferences.
Additional material for the individual lectures, including the exercises which we discuss in the tutorials, will appear here in due time.
1. Why, what and how - from the relativistic particle to the Nambu-Goto action
Please note that there are no exercise assignments for the first tutorial. However, we will use a few minutes to explain how exercise problems are assigned. To prepare, it would be great if you
could already follow the instructions outlined in the pdf you find under the exercise link above. For the remainder of the tutorial, we continue with the lecture. This gives us time to motivate
why we need string theory, to look at the relativistic point particle and to show how it can be generalised to the string.
Note: There were some questions today in the lecture about the history of String Theory. A nice account of this topic is given by the short article arXiv:1201.0981 of John Schwarz, one of the
founders of the field.
2. Classical string and the Polyakov action
3. Symmetries of the classical string, especially, Weyl invariance
4. Mode expansion and open/closed string boundary conditions
5. Old covariant quantisation, Virasoro algebra and physical states
6. Physical spectrum, massless states, no-ghosts theorem, critical dimension $D=26$
7. Light-cone quantisation (tutorial) and modern covariant quantisation in the BRST formalism with bc-ghost system (lecture)
8. Conformal field theory (CFT) and operator product expansion (OPE) in a nutshell
On CFT, one could and should give a full semester course. We cannot do this here, but I try to give you a self-contained introduction. CFT is crucial to discuss interactions between strings.
Therefore, we have to gain a basic understanding of it.
9. Vertex operators and a first glance at string perturbation theory
Reading: [BLT] section 6.1 and 6.2
String perturbation theory is very beautiful. It crucially depends on the moduli space of Riemann surfaces. Unfortunately, we do not have enough time to look at this topic in more detail. For those of you who are interested, section 6.2 of the book by Blumenhagen, Lüst and Theisen is a very good starting point. To make you a bit more curious, here is a picture of a three-point vertex in the complex plane.
Many more details on how it arises can be found in section 4.2 of my PhD thesis [H37].
10. Low energy effective actions and compactifications
11. Closed strings on circles and T-duality
12. D-branes and gauge theories
Classical Field Theory
This is the tutorial for the course "Classical Field Theory" taught by prof. Frydryszak for the Master in Theoretical Physics at the University of Wrocław. Exercises will be posted here a week before the tutorial they are discussed in. Please keep in mind that active participation in the tutorials is important to pass the course. M.Sc. Achilles Gitsis will be the assistant for the tutorials. Please feel free to contact him or me if you have any questions. For information about credit points, please refer to the syllabus (in Polish) or contact prof. Frydryszak.
Important: Students will be assigned to exercise problems by the system described here on the Friday, 9:00 pm, before the tutorial. Please indicate your preferences by then.
1. Introduction
Lecture: 04.10.2023 12:00
2. Sine-Gordon equation
Lecture: 11.10.2023 12:00
3. Linear chain: Fourier transformation and Hamiltonian
Lecture: 18.10.2023 12:00
4. Linear chain: Initial value problem and Poisson brackets
Lecture: 25.10.2023 12:00
5. Minkowski space and the light cone
Lecture: 08.11.2023 13:00
6. Schwarz and triangle inequalities
Lecture: 22.11.2023 13:00
7. Four-velocity and four-acceleration
Lecture: 29.11.2023 13:00
8. Reference frames and relativistic motion
Lecture: 06.12.2023 13:00
9. Pseudo-Euclidean space
Lecture: 13.12.2023 13:00
10. Unitary representation of the Poincare group
Lecture: 20.12.2023 13:00
11. Pauli-Lubański vector and the relativistic particle action
Lecture: 03.01.2024 13:00
12. Electrodynamics
Lecture: 10.01.2024 13:00
13. Equations of motion
Lecture: 17.01.2024 13:00
14. Lagrangian of real scalar field
Lecture: 24.01.2024 13:00
15. Revision
Achilles prepared notes that summarize the most important insights of this semester. Perhaps they are useful in preparing for the exam.
Lie Algebras and Groups
Lie algebras describe infinitesimal symmetries of physical systems. Therefore, they and their representation theory are extensively used in physics, most notably in quantum mechanics and particle
physics. This course introduces semi-simple Lie algebras and the associated Lie groups for physicists. We discuss the essential tools, like the root and weight system, to efficiently work with them
and their representations. As an explicit application of the mathematical framework, we discuss Grand Unified Theories (GUT). Moreover, we show how modern computer algebra tools like LieART can
significantly help in all explicit computations throughout the course.
A simple example is the visualisation of the root system of $E_6$ projected on the Coxeter plane, which you can see here. If you want to understand how it is created and connected to particle
physics, you should take this course. Basic knowledge of core concepts in linear algebra, like vector spaces, eigenvalues and eigenvectors, is assumed. Some good books about the topic are:
• Fuchs and Schweigert: Symmetries, Lie Algebras and Representations
• Gilmore: Lie Groups, Lie Algebras, and Some of Their Applications
• Fulton and Harris: Representation Theory
• Georgi: Lie Algebras In Particle Physics: from Isospin To Unified Theories
The article Phys. Rep. 79 (1981) 1 by Slansky and the manual of the LieART package are good references, too.
Note: On multiple occasions we will use Mathematica, and it might be a good idea to set it up on your computer. Following the instructions on the main teaching website, you should eventually be able to run the notebook, which generates the projection of the $E_6$ root system above.
Exam: After a majority vote, we decided together that the written exam for this course will take place on Tuesday, the 6th of February 2024, at 13:00 in room 447 (the one where we also have our
classes). It will take two hours, and you have a practice exam to help you prepare for it. Please be there five minutes earlier such that we can start on time.
Retake Exam: As discussed via email, you have the chance to take part in the retake exam on Thursday, the 15th of February 2024, at 13:00 in room 447. Exactly the same conditions as described for the
exam above will apply. If you would like to take part in the retake exam, please send me a quick message via email such that I have an estimated head count and can bring the right number of exams.
1. Introduction and motivation
2. Some mathematical preliminaries
10.10.2023 10:15,
Unfortunately, there has been some problems with the WiFi in the class room and as a result the recording of the lecture is not usable. Let us hope that it will work better the next time.
3. Classical matrix groups
17.10.2023 10:15,
4. Cartan subalgebra
24.10.2023 10:15,
5. Root system
Sorry for the bad sound quality of the recording. I am working on this issue.
6. Simple root and Cartan matrix
7. Classification and Dynkin diagrams
14.11.2023 11:15,
8. Irreducible representations
21.11.2023 11:15,
Because I mentioned that you can remember the Dynkin diagram of so(8) by thinking of the flux capacitor from Back to the Future, and some of you have not seen this movie, here is a picture. For me this is a classic and I highly recommend watching it if you have not done so yet.
9. $\mathfrak{su}(N)$ representations and Young tableaux
28.11.2023 11:15,
10. Highest weight representations
05.12.2023 11:15,
There was a small ambiguity in the Freudenthal formula for the multiplicity: the wedge for "and" looked like the V for a vector space. I corrected this in the notes. You will need this equation for the practice exam, but it is also given there.
11. Characters and Weyl group
12. Decomposition of tensor products and regular subalgebras
19.12.2023 11:15,
13. Special subalgebras and branching rules
09.01.2024 11:15,
14. Particle theory and the standard model
16.01.2024 11:15,
15. The Georgi–Glashow model
23.01.2024 11:15,
Quantum Field Theory
This is the mandatory Quantum Field Theory course of the Master in Theoretical Physics at the University of Wrocław. It is tailored towards master and PhD students who are familiar with
• quantum mechanics
• electrodynamics
• special relativity
There are many good books on the subject, four that I like are
• [PeskSchr] Peskin, Schroeder: An introduction to quantum field theory
• [Ryder] Ryder: Quantum Field Theory
• Weinberg: The Quantum Theory of Fields, Volume 1 & 2
• Zee: Quantum Field Theory in a Nutshell.
We will follow mostly the first one in the lectures. There will be 2 hours of lectures and 2 hours of tutorial each week. Exercises will be posted here a week before the tutorial they are discussed
in. Please keep in mind that active participation in the tutorials is important to pass the course. M.Sc. Biplab Mahato will be the assistant for the tutorials. Please feel free to contact him or me if you have any questions. For information about credit points, please refer to the syllabus or contact me directly.
Important: Students will be assigned to exercise problems by the system described here on the Thursday, 9:00 pm, before the tutorial. Please indicate your preferences by then.
Exam: After a majority vote, we decided together that the written exam for this course will take place on Monday, the 3rd of July 2023, at 14:00 in room 447 (the one where we also have our classes).
It will take two hours, and we will discuss a practice exam in the last tutorial to help you prepare for it. Please be there five minutes earlier such that we can start on time. Also note that you
need to have more than 50% of the points from the exercises assigned to you to qualify for the exam. You can check your points here on the website, or, if you are in doubt, with me.
Retake Exam: As discussed via email, you have the chance to take part in the retake exam on Friday, the 8th of September 2023, at 17:00 in room 447. Exactly the same conditions as described for the
exam above will apply. If you would like to take part in the retake exam, please send me a quick message via email such that I have an estimated head count and can bring the right number of exams.
Additional material for the individual lectures, including the exercises which we discuss in the tutorials, is given below:
1. Reminder of spin 0 and 1/2 fields
02.03.2023 09:15,
Reading: [PeskSchr] sections one and two
Please note that there are no exercise assignments for the first seminar. Instead, we continue with the lecture.
2. Reminder of spin 1 fields and abelian gauge symmetries
07.03.2023 09:15,
Reading: [Ryder] sections 3.3 and 4.4
3. Non-abelian gauge symmetries and Lie groups
14.03.2023 09:15,
Reading: [PeskSchr] section 15 except for 15.3 or alternatively [Ryder] sections 3.5 and 3.6
4. Path integral in quantum mechanics and for the scalar field
21.03.2023 09:15,
Reading: [PeskSchr] section 9.1
5. Generating functional, interactions and Feynman rules
Reading: [PeskSchr] section 9.2
During the lecture and tutorial, there were several questions about how to derive the symmetry factors and the Feynman rules directly from the path integral. The simple answer is, of course: just expand up to the relevant order. However, this is not completely straightforward and requires a little bit of care with combinatorics and Wick's theorem. Therefore, the detailed computation is attached for the propagator in $\phi^4$ theory. This hopefully helps to resolve the problem.
6. Path integral for fermions, Grassmann numbers, chiral anomaly (in the exercise)
04.04.2023 08:15,
Reading: [PeskSchr] section 9.5
7. Path integral for spin-1 bosons, ghost fields
18.04.2023 08:15,
Reading: [PeskSchr] section 9.4
In the second step of the Faddeev-Popov procedure, which we discussed during the lecture, we average over different gauge choices $\omega(x)$. We were using a Gaussian weight factor for this purpose,
$Z_0 = \underbrace{N(\xi) \int \mathcal{D}\omega \exp\left[ -i \int \mathrm{d}^4 x \frac{\omega^2}{2\xi} \right]}_{\displaystyle = 1} Z_0$
At this point came the question: "Why do we use the Gaussian weight?" My answer, "Because we can compute it.", is correct, but one could also use any other weight function and, after much more work, would obtain the same result. There is a nice thread on Physics Stack Exchange discussing this issue.
8. One loop effects in QED: field-strength renormalisation and self-energy
Reading: [PeskSchr] section 7.1
Please take a look at the details for the derivation of the spectral density function, which we did not have time to discuss in the lecture.
9. Dimensional regularisation and superficial degree of divergence
09.05.2023 08:15,
Reading: Peskin&Schroeder sections 7.5 and 10.1
10. One-loop renormalised
16.05.2023 08:15,
Reading: [PeskSchr] section 10.2
11. Renormalisation group flow
23.05.2023 08:15,
Reading: [PeskSchr] section 12.1
12. The Callan-Symanzik equation and
30.05.2023 08:15,
Reading: [PeskSchr] section 12.2
During the lecture, we noted that the renormalisation conditions used for the massless $\phi^4$-theory are quite different from what we used in lecture 10 (see section 7.2 of the notes). The major challenge here is that we are dealing with a massless theory and there is a priori no mass which we can use to introduce a renormalisation scale. To overcome this problem, an imaginary mass $p^2 = -M^2$ is used to fix the renormalisation conditions, which hold the physical mass (the real part of $M$) fixed at zero.
13. Gravity, quantum gravity, one-loop $\beta$-functions of a two-dimensional $\sigma$-model and string theory
07.03.2023 09:15,
14. Spontaneous symmetry breaking and the Higgs mechanism
13.06.2023 08:15,
Reading: [PeskSchr] sections 11.1 and 20.1
There is no problem sheet for this lecture. In the tutorial, which takes place right after the last lecture, we discuss the practice exam.
15. BRST symmetry, physical Hilbert space
Lecture: 15.06.2023 15:00
Reading: [PeskSchr] section 16.2-16.4
For computations we use Mathematica. It is a very powerful tool with unfortunately a quite high price for a license. Students can get a discount on licenses. If your budget is not sufficient for a
student license, you can use the Wolfram Engine for Developers. After creating a Wolfram ID, it can be downloaded for free. The Wolfram Engine implements the Wolfram Language, which Mathematica is
based on. But, it lacks the graphical notebook interface. Fortunately, some excellent free software called Jupyter Notebook fills the gap. Both can be connected with the help of Wolfram Language for
Jupyter project on GitHub. Getting everything running might require a little bit of tinkering. But in the end, you get a very powerful computer algebra system for free. Finally, you should install
LieART by following the "Manual Installation" instructions.
Assignment of problems
Solving the exercise problems for a course is very important. It helps to practice the concepts and ideas introduced during the lecture, and you will also be graded for the solutions you present
during the tutorials. But at the same time, one of the most annoying questions for the students and the lecturer is: "Who would like to present his solutions to the next problem?".
Therefore, we use the following system to assign students to problems based on their preferences:
1. You need to log in with your USOS account (every student at the University of Wrocław should have one). To do so, click on the small closed door in the top left corner of this window and enter
your credentials in the window which pops up. The first time you do this, you will be asked to give this website minimal access to your USOS profile.
2. After a successful login, you can go to your course above and find the new link "manage" after each lecture with an exercise. If you do not see this link, check if you are logged in (the small
closed door you clicked in the last step should be slightly open now). Second, verify that the course you are looking at is indeed your course. If the problem still persists, please get in
touch with me.
3. If you click "manage", you will find a list of all the problems that we discuss during the exercises. If problems have not yet been assigned to students, you can indicate your preferences by
sorting this list. Entries on the top have the highest priority, while those at the bottom have the lowest. You sort them by drag&drop with either the mouse or your finger if you work with a
touchscreen. Once this is done, do not forget to click the "Save" button at the bottom.
You can always come back later and revisit your choice or change it until the assignments are fixed. Once this happens, you will get an email and see the students' names to present the various
problems. The backup candidate (the second name) should be ready to take over if required.
4. Make sure you log out at the end of your session by clicking on the now slightly opened door you already used to log in.
The assignments made by this system are binding. For every problem you present you can get up to three points. These points are added up and used at the end of the course to calculate your grade. If
you cannot present an assigned problem, you will get zero points and put your backup on the spot. Therefore: please prepare properly and, in case of any emergencies, let us know in time. Backup
candidates can earn extra points (up to 1.5) by presenting a problem. But they can also lose the same amount if they are not prepared.
Problems are assigned completely automatically according to the following criteria: everybody in the course should present the same number of problems; the indicated preferences are taken into account; and if several students have the same preferences, the one who submitted them the earliest wins. You do not have to submit any preferences at all. In this case, the system assumes that you do not care which problem you have to present.
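To make these criteria concrete, here is a minimal illustrative sketch in Python of how such a preference-based assignment could work. It is a simplified toy model, not the actual code behind this website, and all names and data in it are invented:

# Toy model of preference-based problem assignment (not the real implementation).
# Each student lists the problems in order of preference; earlier submissions win ties.
students = [  # (name, submission_time, preferences) - all invented
    ("Alice", 1, ["P3", "P1", "P2"]),
    ("Bob",   2, ["P3", "P2", "P1"]),
    ("Carol", 3, ["P2", "P3", "P1"]),
]
problems = ["P1", "P2", "P3"]

assignment = {}
taken = set()
# Earlier submission time gives higher priority when preferences collide
for name, _, prefs in sorted(students, key=lambda s: s[1]):
    # Take the highest-ranked free preference, or any free problem as a fallback
    free = [p for p in prefs if p not in taken] or [p for p in problems if p not in taken]
    assignment[name] = free[0]
    taken.add(free[0])

print(assignment)  # {'Alice': 'P3', 'Bob': 'P2', 'Carol': 'P1'}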
How Roman Numerals Work
The ultimate guide to Roman Numerals. Learn how to write Roman Numerals quickly and easily.
Roman numerals are an ancient system for writing numbers. The Roman numerals are: I, V, X, L, C, D, and M. These symbols represent 1, 5, 10, 50, 100, 500, and 1000, respectively. Romans combined
these symbols to create a system for counting from 1 to 3,999.
Roman Numerals 1 to 10
I 1
II 2
III 3
IV 4
V 5
VI 6
VII 7
VIII 8
IX 9
X 10
Roman Numerals Chart
I 1
V 5
X 10
L 50
C 100
D 500
M 1,000
Printable Roman Numeral Charts
Roman Numeral Rules
There are three rules for writing Roman numerals:
I. Roman numerals are written largest to smallest from left to right. Add up the value of each symbol.
II. Only I, X, C, and M can be repeated. Never repeat a symbol more than three consecutive times.
III. When a smaller numeral is to the left of a bigger numeral, subtract it.
Romans and subsequent users were not as strict about how to write numbers. You may see cases where these rules were not strictly followed. However, today Roman numerals will almost always follow
these rules.
Twenty-six is written with three symbols: X (10), V (5), and I (1). We use X twice to get twenty. Then add V and I once to get twenty-six. The final roman numeral for twenty-six is XXVI.
To write fifty-four we use L (50), V (5), and I (1). Because we cannot write I more than three time in a row, we must subtract 5 - 1 to get 4. Also, we must use L for 50 instead of using X (10) five
times in row. The final roman numeral for fifty-four is LIV.
One-hundred forty-two uses C (100), L (50), X (10), and I (1). Start with C for 100. Next we use XL to get forty (50 - 10). Finally add II onto the end. The final Roman numeral for one-hundred
forty-two is CXLII.
This is a tricky one. You have to do a lot of subtracting. For this Roman numeral we will use M (1000), C (100), X (10), V (5), and I (1).
Start with M for 1000. Followed by CM which is 900 (1000 - 100). For ninety we use C (100) minus X (10), which gives us XC. Finally, add seven to the end. Seven is 5 + 1 + 1 or VII. Putting it all
together we get MCMXCVII. Phew!
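The greedy procedure used in these examples (take the largest symbol value that still fits, write it down, subtract, and repeat) can also be written as a short program. The sketch below is added for illustration and is not part of the original guide:

# Convert an integer from 1 to 3,999 into a Roman numeral using the greedy method
PAIRS = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
         (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
         (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]

def to_roman(n):
    if not 1 <= n <= 3999:
        raise ValueError("Standard Roman numerals cover 1 to 3,999")
    result = ""
    for value, symbol in PAIRS:
        while n >= value:   # use the largest value that still fits
            result += symbol
            n -= value
    return result

print(to_roman(26), to_roman(54), to_roman(142), to_roman(1997))
# prints: XXVI LIV CXLII MCMXCVII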
Download Roman Numeral Reference Sheet
Knowing Roman numerals is a great skill to have. Although they are not widely used today you never know when it might come in handy.
Large Numbers in Roman Numerals
In Roman numerals you cannot repeat a symbol more than three consecutive times. As a result, there is a limit to how big of a number you can write. The largest Roman numeral is MMMCMXCIX, which is 3,999.
To write larger numbers you can add a line over the symbol. A line over a Roman numeral indicates the number is multiplied by 1,000. For example, 400,000 would be written as CD with a line over it, since 400 is written CD (500 - 100).
The system of adding a line above Roman numerals is called vinculum. The vinculum system is the most common way to write large Roman numerals today; however it is not the only way. The rules of Roman
numerals change just as grammar and language change over time.
Zero, Negative Numbers, and Fractions
Roman numerals were invented to aid in record keeping. They were used on receipts to keep track of payments and deliveries. As a result, the Romans did not invent a symbol for zero or negative
Instead of zero, the Romans used the Latin word nulla, meaning "none." Eventually nulla was abbreviated with the letter N. Because of these limitations, Roman numerals were eventually replaced by the
number system we use today.
Romans did use fractions. They would use a dot (•) to indicate 1/12th. The letter S was used as an abbreviation for Semis, meaning "half." For example, 3/12 (1/4) would be written as ∴ and 7/12th
would be written S• (half + 1/12)
The Romans preferred to use twelfths instead of tenths as we more commonly use today. This is because twelve is dividable by more numbers than ten. The Roman's use of twelfths is also why there are
12 inches are in a foot.
Roman numerals were first used around 900 B.C (3,000 years ago). They were used widely throughout the Middle Ages. By around the 1500's, Roman numerals began to be replaced by the Arabic numeral
system we use today.
The origin of Roman numerals is debated. Some scholars believe Roman numerals developed from a simpler form of tallying. Others contend Roman numerals were developed based on hand signals. I, II,
III, and IIII look like fingers and V (5) looks like the thumb and pointer finger.
By the Middle Ages, Roman numerals had evolved into the system we know today. Roman numerals were not the first known counting system. But they are certainly the most common ancient counting system
still used today.
Modern Uses
The use of Roman numerals has declined but has not completely gone away. Roman numerals are still used for:
• Names of Kings, queens, and popes (e.g. Elizabeth II or Pope Benedict XVI)
• Superbowl numbers
• Generation suffixes
• Sequels in movies or video games
• Chapters or volumes of books
• Indicates the year of construction of buildings, bridges, etc.
• And many more
Roman numerals are commonly used on clock faces. One interesting note is that clocks often use IIII for four rather than IV.
Roman numerals written on the Admiralty Arch in London indicate when the building was constructed. The Latin phrase on the building translates to In the tenth year of King Edward VII, to Queen
Victoria, from most grateful citizens, 1910. Interestingly, the year 1910 is written in Roman numerals as MDCCCCX. A keen observer will notice that C is repeated four times. The more common way to
write 1910 in Roman numerals is MCMX.
ketos.audio.spectrogram.add_specs(a, b, offset=0, scale=1, make_copy=False)[source]
Place two spectrograms on top of one another by adding their pixel values.
The spectrograms must be of the same type, and share the same time resolution.
The spectrograms must have consistent frequency axes. For linear frequency axes, this implies having the same resolution; for logarithmic axes with base 2, this implies having the same number of bins per octave and minimum values that differ by a factor of 2^{n/m}, where m is the number of bins per octave and n is any integer. No check is made for the consistency of the frequency axes.
Note that the attributes filename, offset, and label of spectrogram b, which is being added, are lost.
The sum spectrogram has the same dimensions (time x frequency) as spectrogram a.
a: Spectrogram
b: Spectrogram
Spectrogram to be added
offset: float
Shift spectrogram b by this many seconds relative to spectrogram a.
scale: float
Scaling factor applied to signal that is added
make_copy: bool
Make copies of both spectrograms, leaving the original instances unchanged by the addition operation.
ab: Spectrogram
Sum spectrogram | {"url":"https://docs.meridian.cs.dal.ca/ketos/generated/ketos.audio.spectrogram.add_specs.html","timestamp":"2024-11-02T21:51:28Z","content_type":"text/html","content_length":"11886","record_id":"<urn:uuid:ad1db3b2-8660-4a94-b348-9178fb227621>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00039.warc.gz"} |
Rotation | Brilliant Math & Science Wiki
Rotation is a transformation in which a figure is turned about a given point.
A rotation of an object around a point or an axis is a continuous transformation that does not change the distance of any of the points on the object from the point or the axis.
Set the origin \(O=(0,0)\) as the center of rotation. When a point \(P=(x,y)\) is rotated counterclockwise by angle \(\theta\), the resulting point \(P'(x',y')\) can be calculated with the formula
\[\begin{pmatrix}x' \\ y'\end{pmatrix} = \begin{pmatrix} \cos \theta & -\sin\theta \\ \sin \theta & \cos \theta\end{pmatrix} \begin{pmatrix}x \\ y \end{pmatrix}.\]
In other words,
\[\begin{aligned} x' &= x\cos\theta - y\sin\theta \\ y' &= x\sin\theta + y\cos\theta.\end{aligned}\]
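As a small numerical sketch (added here for illustration; it is not part of the original wiki page), the same formula can be applied in code:

import math

def rotate(x, y, theta):
    # Rotate the point (x, y) counterclockwise about the origin by angle theta (in radians)
    x_new = x * math.cos(theta) - y * math.sin(theta)
    y_new = x * math.sin(theta) + y * math.cos(theta)
    return x_new, y_new

# Rotating (1, 0) by 90 degrees counterclockwise gives (0, 1), up to floating-point rounding
print(rotate(1, 0, math.pi / 2))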
See also: Symmetry
Rotational symmetry occurs if an object can be rotated by less than \(360^\circ\) and remain unchanged. The center of the rotation is called the point of symmetry.
Take for example some of the letters of the alphabet, which have rotational symmetry as they can be rotated \(180^\circ\) and remain the same.
Consider a rotation \(R\) about the point \((25,64)\) by \(30^{\circ}\) clockwise. How many rotations will it take for \((81,62)\) to be rotated back to itself?
We have to rotate the point by \(360^{\circ}\) to get it back, so the number of rotations required is \(\frac{360^{\circ}}{30^{\circ}} = 12\) rotations. \(_\square\)
Sylvie Benzoni-Gavage
Sylvie Benzoni-Gavage is a professor at the Institut Camille Jordan of Lyon University. Her main fields of interest have to do with the analysis of partial differential equations and the
mathematical modelling of various physical phenomena like fluid dynamics, elastodynamics, phase transitions. She was trained at `École Normale Supérieure de Saint-Cloud/Lyon', and loves interacting
with students. She used to be the head of undergraduates studies in mathematics, and has been involved in many initiatives directed to highschool students (lectures, workshops, blog mpt2013.fr,
website Images des mathématiques). She is currently in charge of international relations for the Mathematics Department.
"The art of not solving equations:" Many natural phenomena are governed by equations in which the unknown is a function. This is the case for instance in demography, when we are to follow the number
of individuals of a given population as a function of time. As long as the population growth rate is constant, we can predict that the evolution of the number of individuals will be exponential (a
mathematical word that we can often hear in the media, seldom in its accurate meaning though). However, very few equations admit such a simple solution. For over two centuries after differential
calculus was invented by Newton and Leibniz, scientists had been struggling to find more or less beautiful formulas for the solutions of differential equations, that is, equations expressed in terms
of an unknown function and its derivatives. Then came Poincaré and a radically new point of view arose. Mathematicians realized that they could infer many properties of those solutions without
computing them. This 'art of not solving equations' has seen a tremendous development in the last century, and became crucial in the analysis of partial differential equations (these are more
complicated versions of differential equations in which the unknown is a function of several variables - for instance, these variables can be time and spatial position of the individuals of a
population). As a matter of fact, mathematical analysis relies very much on the art of manipulating inequalities, instead of deriving equalities. The aim of this course will be to give an overview of
some widely used tools that yield beautiful results in the theory of differential equations.
Keywords: Duhamel's formula, Gronwall's lemma, a priori estimates, bootstrap. | {"url":"http://www.issmys.eu/scientific-information/instructors/sylvie-benzoni","timestamp":"2024-11-03T17:08:52Z","content_type":"application/xhtml+xml","content_length":"31197","record_id":"<urn:uuid:56f1b712-94eb-4f1d-9fae-e23aa4db4893>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00467.warc.gz"} |
Linear Graph
Linear means straight, and a graph is a diagram which shows a connection or relation between two or more quantities. So, a linear graph is nothing but a straight-line graph, drawn in a plane by connecting points given by their x and y coordinates. We use linear relations in our everyday life, and by graphing those relations in a plane, we get a straight line.
Now that you have got an introduction to the linear graph let us explain it more through its definition and an example problem.
Linear Graph Equation
As discussed, a linear graph forms a straight line and is described by the equation
y = mx + c
where m is the gradient of the graph and c is the y-intercept of the graph.
Let (x1, y1) and (x2, y2) be any two points on the straight line.
The value of the gradient m is the ratio of the difference of the y-coordinates to the difference of the x-coordinates, i.e.
m = (y2 - y1) / (x2 - x1), or equivalently y - y1 = m(x - x1).
The linear equation can also be written as,
ax + by + c = 0
where a,b and c are constants
Linear Graph Examples
Let us understand the Linear graph definition with examples.
1. The equation y = 2x + 1 is a linear equation and forms a straight line on the graph. For every value of x, the corresponding value of y is twice the value of x plus 1, so y increases as x increases.
2. Suppose we have to plot the graph of the linear equation y = 2x + 1.
We know that y = 2x + 1 forms a straight line. First, we need to find pairs of x and y coordinates, choosing a few values of x, say x = -2, -1, 0, 1, 2.
Calculating the value of y for each x using the given linear equation:
y = 2(-2) + 1 = -3 for x = -2
y = 2(-1) + 1 = -1 for x = -1
y = 2(0) + 1 = 1 for x = 0
y = 2(1) + 1 = 3 for x = 1
y = 2(2) + 1 = 5 for x = 2
So the table can be written as:
x -2 -1 0 1 2
y -3 -1 1 3 5
Now based on these coordinates we can plot the graph as shown below.
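Since the plot itself cannot be reproduced here, below is a small code sketch (added for illustration; it is not part of the original article) that rebuilds the table and draws the line:

import numpy as np
import matplotlib.pyplot as plt

x = np.array([-2, -1, 0, 1, 2])
y = 2 * x + 1                      # the linear equation y = 2x + 1

for xi, yi in zip(x, y):
    print(xi, yi)                  # reproduces the table: (-2, -3), (-1, -1), (0, 1), (1, 3), (2, 5)

plt.plot(x, y, marker='o')         # all points fall on a single straight line
plt.xlabel('x')
plt.ylabel('y')
plt.title('Graph of y = 2x + 1')
plt.show()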
GPFSeminars 2023
Follow our seminars online via: GPF BigBlueButton server
Time: 22. December 2023, 11:00h
Place: Faculty of Physics, room 665
Speaker: Athanasios Chatzistavrakidis
Title: BRST-AKSZ-RUTH
I will discuss the evidence for a relation between the BRST formalism and the concept of representation up to homotopy. The main focus will be on the simplest case of the adjoint representation of a
Lie algebroid and 2D topological field theory of AKSZ type. I will briefly comment on higher dimensional theories and representations up to homotopy for Lie n-algebroids.
Time: 15. December 2023, 12:00h
Place: Institute of Physics, room 360
Speaker: Voja Radovanovic
Title: Clifford algebras and spinors in a general number of dimensions (part 2)
In the first part of this pedagogical talk, we study Clifford algebra in D-dimensional spacetime. Clifford algebra plays an important role in supersymmetric and supergravity theories. After that, we
define various types of spinors: Majorana, symplectic Majorana, Weyl and Majorana-Weyl and investigate the existence of these spinors in D-dimensional spacetime.
Time: 8. December 2023, 12:00h
Place: Institute of Physics, room 360
Speaker: Voja Radovanovic
Title: Clifford algebras and spinors in a general number of dimensions (part 1)
In the first part of this pedagogical talk, we study Clifford algebra in D-dimensional spacetime. Clifford algebra plays an important role in supersymmetric and supergravity theories. After that, we
define various types of spinors: Majorana, symplectic Majorana, Weyl and Majorana-Weyl and investigate the existence of these spinors in D-dimensional spacetime.
Time: 24. November 2023, 11:00h
Place: Institute of Physics, room 360
Speaker: Dusan Djordjevic
Title: Boundary terms and branes in the first-order gravity
The story of boundary terms in a gravity action dates back to the seventies, yet there are still important questions that need to be fully understood. In this talk, we will revisit the role of the
Gibbons-Hawking-York boundary term in the Einstein-Hilbert gravity, with a clear motivation to address the issue of bottom-up AdS/BCFT duality in a case where bulk geometry is Riemann-Cartan. We will
comment on different approaches to GHY-like boundary terms in this scenario and then use our knowledge to understand the issue of end-of-the-world branes with (traditional) Neumann boundary
conditions in the first-order gravity, which is a necessary step to address the AdS/BCFT duality adequately. Our analysis will be based on a simple set of gravity theories in different dimensions.
Apart from this, we will also touch upon the issue of black hole thermodynamics.
Time: 10. November 2023, 11:00h
Place: Institute of Physics, room 360
Speaker: Ana Knezevic
Title: Hyperbolic lattices
Hyperbolic lattices connect gravitational theories, topology, algebraic geometry, condensed matter physics, etc. They are a new form of synthetic quantum matter, where particles propagate coherently
on the sites of a regular structure that appears aperiodic from the vantage point of Euclidean geometry but is periodic in the case of 2D hyperbolic space. The lattice that we are researching is
embedded in the Poincaré disk. We will start by comparing lattices in 2D Euclidean geometry with those in 2D hyperbolic space. In order to put any {p,q} (Schläfli symbols) lattice on a Poincaré disk,
we will go back to our hyperbolic space and analyze its symmetry. Our main goal is to construct discrete symmetry of the {p,q}-lattice from the Lie algebra (continual symmetry) of the hyperbolic
space. In this manner, we can describe the hyperbolic crystal on the same footing as a crystalline lattice in flat space, which will furthermore allow us to systematically study band structures on
such curved-space crystals.
Time: 27. October 2023, 11:00h
Place: Institute of Physics, room 360
Speaker: Dejan Simic
Title: Note on asymptotic symmetry of massless scalar field at null infinity
For ten years now, we have known that the memory effect, the soft theorem and the asymptotic symmetry at null infinity are mutually equivalent. Any massless field has its soft theorem, including a massless scalar field, so we expect the asymptotic symmetry associated with the massless scalar case to be present also. We will try to understand the asymptotic symmetry of a massless scalar field at null infinity. In order to make sense of asymptotic symmetry for a theory without gauge symmetry, we slightly generalize the notion of asymptotic symmetry. Derivations of the results are done in two
different ways, using Hamiltonian analysis and using covariant phase space. Along the way, we will explain the necessary concepts, such as asymptotic symmetry, Hamiltonian analysis and covariant
phase space.
Time: 13. October 2023, 11:00h
Place: Institute of Physics, room 360
Speaker: Marko Vojinovic
Title: A short review of Henneaux-Teitelboim gauge symmetry
We will give a short introduction into HT gauge symmetry, and discuss some of its basic properties. The example of HT transformations of the Chern-Simons action will be worked out in detail, as well
as some general results regarding nBF theories. We will also discuss some symmetry breaking scenarios and their influence on HT gauge group.
Based on arXiv:2305.00117.
Time: 5. September 2023, 11:00h
Place: Institute of Physics, room 360
Speaker: Djordje Minic
Title: Quantum gravity = Gravitized quantum theory
In this talk I will discuss a new approach to the problem of quantum gravity in which the quantum mechanical structures that are traditionally fixed, such as the Fubini-Study metric in the Hilbert
space of states, become dynamical and so implement the idea of gravitizing the quantum. I will elaborate on a specific test of this new approach to quantum gravity using triple interference in a
varying gravitational field. My discussion will be driven by a profound analogy with recent triple-path interference experiments performed in the context of non-linear optics. I will emphasize that
the triple interference experiment in a varying gravitational field would deeply influence the present understanding of the kinematics of quantum gravity and quantum gravity phenomenology. I will
also discuss the non-linear Talbot effect as another striking phenomenological probe of gravitization of the geometry of quantum theory. Finally, I will discuss the bounds this new approach to
quantum gravity implies for the cosmological constant, the Higgs mass and the masses and mixing matrices of quarks and leptons.
Time: 9. June 2023, 11:00h
Place: Institute of Physics, room 360
Speaker: Dusan Djordjevic
Title: Holographic aspects of even-dimensional topological gravity
Despite being formulated over 30 years ago, the topological gauge theory of gravity in an even number of space-time dimensions has received limited attention. In this talk, we will consider this
theory (with the AdS gauge group) in the setup of holographic duality. First, we will compute the holographic one-point functions. The fact that one-point function of a spin current generally does
not vanish opens up a possibility of application in the field of spin systems in three space-time dimensions. Then, we will further investigate some bulk semiclassical geometries and discuss them in
the light of holography. Finally, we will discuss the two-point function in the dual-field theory using the Wilson line-like presentation of a probe point particle in the bulk.
Time: 19. May 2023, 12:00h
Place: Institute of Physics, room 360
Speaker: Maja Buric
Title: Scalar fields on fuzzy de Sitter space
After a short introduction to fuzzy de Sitter geometry, we present the general solution to the Klein-Gordon equation on this noncommutative space.
Time: 19. May 2023, 11:00h
Place: Institute of Physics, room 360
Speaker: Vladislav Kupriyanov
Title: Homotopy algebras, symplectic embeddings and noncommutative gauge theory
The problem of the consistent definition of noncommutative gauge theory on spaces with non-constant noncommutativity parameters has attracted the attention of theoretical physicists and
mathematicians for more than two decades. Nevertheless, this theory is still not completely understood in full generality. In recent years we have formulated two new approaches to consistent
non-commutative and non-associative deformations of gauge theory. The first one employs the framework of homotopy algebras and is a powerful tool for the construction of order by order noncommutative
deformation. The second approach makes use of the elements from the symplectic geometry and is better adapted for obtaining the explicit all-order expressions. Several interesting results have been
obtained and published in this direction. In this talk I will briefly describe the two approaches and discuss the recent progress in noncommutative gauge theories.
Time: 18. May 2023, 16:00h
Place: Faculty of Physics, room 665
Speaker: Richard Szabo
Title: Homotopy double copy of noncommutative gauge theories
This talk will summarise recent work attempting to understand how standard noncommutative gauge theories, such as those which arise naturally from string theory, fit into the paradigm of
colour-kinematics duality and the double copy of gauge theory to gravity. The treatment will focus on the elegant formulation of the double copy prescription using homotopy algebras. Along the way we
shall encounter some novel noncommutative scalar field theories with rigid colour symmetry that have no commutative counterparts, whose double copies are deformations of some known topological
theories such as the special Galileon theory in two dimensions and self-dual gravity in four dimensions.
Time: 18. May 2023, 15:00h
Place: Faculty of Physics, room 665
Speaker: Igor Prlina
Title: Amplitugicians: using (non-magic) tricks to find scattering amplitudes
This talk will introduce the audience to the goals and methods of the so-called Amplitudes project, where alternatives to the Feynman diagram approach are used to calculate scattering amplitudes in
different theories. Some common themes in the Amplitudes approach are symmetry adapted variables, recursive relations and geometric interpretation. These themes will be illustrated by introducing
spinor helicity formalism, BCFW recursion, momentum twistors, and the Amplituhedron. Finally, some of the lecturer's own results on the connection of the boundaries of the Amplituhedron with
amplitude singularities will be presented.
Time: 12. May 2023, 11:00h
Place: Faculty of Physics, room 665
Speaker: Voja Radovanovic
Title: Batalin-Vilkovisky formalism and quantization of gauge field theories (part 4)
In this series of lectures we will review the Batalin-Vilkovisky (BV) formalism and its applications. This formalism is a generalization of BRST quantization and is frequently used in field theory and quantum mechanics. One of the advantages of the BV formalism is that it provides a well-defined quantization for theories that cannot be quantized by the Faddeev-Popov path integral approach. In particular, gauge field theories with complicated gauge symmetries (reducible and/or with an open algebra) are quantized in the framework of this formalism.
In the fourth lecture we will discuss quantization of field theories in the BV formalism.
Time: 28. April 2023, 11:00h
Place: Faculty of Physics, room 665
Speaker: James Fullwood
Title: On quantum states over time
While in relativity theory space evolves over time into a single entity known as spacetime, quantum theory lacks a standard notion of how to encapsulate the dynamical evolution of a quantum state
into a single "state over time". Recently there is an emerging interest in the formulation of such dynamical quantum states, resulting in various approaches to their construction. In our work with
Arthur Parzygnat, we have developed a general approach which we have recently shown is equivalent to the pseudo-density matrix formalism of Fitzsimons, Jones and Vedral, which was initially
introduced to treat temporal and spatial correlations in quantum theory on equal footing. In this talk, we review the general theory of such states over time, and go over some recent applications,
such as a dynamical quantum Bayes' rule, time-reversal for quantum systems, and quantum mechanical world lines. We then conclude with some ideas on how quantum states over time may yield "spacetime
states" associated with a causal set.
Time: 21. April 2023, 11:00h
Place: Faculty of Physics, room 665
Speaker: Voja Radovanovic
Title: Batalin-Vilkovisky formalism and quantization of gauge field theories (part 3)
In this series of lectures we will review the Batalin-Vilkovisky (BV) formalism and its applications. This formalism is a generalization of BRST quantization and is frequently used in field theory and quantum mechanics. One of the advantages of the BV formalism is that it provides a well-defined quantization for theories that cannot be quantized by the Faddeev-Popov path integral approach. In particular, gauge field theories with complicated gauge symmetries (reducible and/or with an open algebra) are quantized in the framework of this formalism.
In the third lecture we will discuss some examples of reducible and/or open gauge symmetries. Then we will introduce the notions of antifields, the antibracket and the BV Laplacian, derive their properties and discuss the classical and the quantum master equations and their solutions. Finally, we will present a few relevant examples: Yang-Mills theory, topological Yang-Mills theory and the antisymmetric tensor field theory.
Time: 31. March 2023, 11:00h
Place: Faculty of Physics, room 665
Speaker: Voja Radovanovic
Title: Batalin-Vilkovisky formalism and quantization of gauge field theories (part 2)
In this series of lectures we will review the Batalin-Vilkovisky (BV) formalism and its applications. This formalism is a generalization of BRST quantization and is frequently used in field theory and quantum mechanics. One of the advantages of the BV formalism is that it provides a well-defined quantization for theories that cannot be quantized by the Faddeev-Popov path integral approach. In particular, gauge field theories with complicated gauge symmetries (reducible and/or with an open algebra) are quantized in the framework of this formalism.
In the second lecture we will discuss some examples of reducible and/or open gauge symmetries. Then we will introduce the notions of antifields, the antibracket and the BV Laplacian, derive their properties and discuss the classical and the quantum master equations and their solutions. Finally, we will present a few relevant examples: Yang-Mills theory, topological Yang-Mills theory and the antisymmetric tensor field theory.
Time: 24. March 2023, 11:00h
Place: Faculty of Physics, room 665
Speaker: Voja Radovanovic
Title: Batalin-Vilkovisky formalism and quantization of gauge field theories (part 1)
In this series of lectures we will review the Batalin-Vilkovisky (BV) formalism and its applications. This formalism is a generalization of BRST quantization and is frequently used in field theory and quantum mechanics. One of the advantages of the BV formalism is that it provides a well-defined quantization for theories that cannot be quantized by the Faddeev-Popov path integral approach. In particular, gauge field theories with complicated gauge symmetries (reducible and/or with an open algebra) are quantized in the framework of this formalism.
In the first lecture we will describe the standard field theory approach to the BV quantization. To start with, we will introduce notions of antifields, antibracket and BV Laplacian and derive their
properties. Then we will discuss the classical and the quantum master equations and their solutions. Finally, we will present a few relevant examples: Yang-Mills theory, topological Yang-Mills theory
and the antisymmetric tensor field theory.
Time: 10. March 2023, 11:00h
Place: Faculty of Physics, room 665
Speaker: Pavle Stipsic
Title: Symmetry breaking mechanisms for 3BF action
In the process of constructing the action for the physical theory of the Standard Model coupled to gravity, in the language of 3-groups, we begin from the topological action and impose simplicity constraints. These constraints explicitly break the initial symmetry of the topological action all the way down to the symmetry of the Standard Model. Aside from the explicit symmetry breaking, we also demonstrate the equivalent of the BEH mechanism of spontaneous symmetry breaking of the electroweak interaction.
Time: 24. February 2023, 11:00h
Place: Faculty of Physics, room 665
Speaker: Marko Vojinovic
Title: Introduction to category theory and n-groups (part 6)
In the final lecture of the series, we will present a theorem relating a 2-group, a path 2-groupoid, a 2-connection and a principal 2-bundle. This theorem generalizes some concepts of differential
geometry, specifically the notion of parallel transport and holonomy, by introducing the so-called surface parallel transport and surface holonomy. Such a generalization represents a new tool to
build gauge theories in physics, and has direct applications for the constructions of models of quantum gravity.
The lectures are based on material from papers arXiv:q-alg/9705009 and arXiv:1003.4485.
Time: 10. February 2023, 11:00h
Place: Faculty of Physics, room 665
Speaker: Marko Vojinovic
Title: Introduction to category theory and n-groups (part 5)
In the fifth lecture, we will introduce the notion of a 2-group, and provide a few examples of how to use 2-groups to describe symmetries in physics. In particular, we will study in detail the example
of the Poincaré 2-group. We will also comment on some properties of 2-groups and their equivalence to crossed modules.
The lectures are based on material from papers arXiv:q-alg/9705009 and arXiv:1003.4485.
Time: 20. January 2023, 11:00h
Place: Faculty of Physics, room 665
Speaker: Marko Vojinovic
Title: Introduction to category theory and n-groups (part 4)
In the fourth lecture, we finally perform the first step in the categorical ladder procedure. We will first introduce the notion of a 2-category (both the strict and weak versions), the notion of a
2-group, and the notion of a crossed module. Then we will discuss various properties specific to strict 2-groups, their equivalence to crossed modules, and a couple of examples relevant for physics.
The lectures are based on material from papers arXiv:q-alg/9705009 and arXiv:1003.4485.
Time: 13. January 2023, 11:00h
Place: Faculty of Physics, room 665
Speaker: Marko Vojinovic
Title: Introduction to category theory and n-groups (part 3)
In the third lecture, we will discuss three important examples of categories and functors between them. These examples illustrate an important relationship between category theory, Lie groups, and
differential geometry --- primarily in the context of the path groupoid category, which will be defined and discussed in detail. The material of this lecture is important as a preparation for the
remainder of the course, which will focus on higher categories and generalizations of structures in differential geometry using higher category theory.
The lectures are based on material from papers arXiv:q-alg/9705009 and arXiv:1003.4485.
CIPM Exam Tips & Tricks
A Principles level candidate sent me the following question:
Hi John,
The CFA institute website has 15 sample questions for the principles exam on their website for anyone considering the program at the below link. Question 14 asks about GIPS private equity valuation
I was under the impression that private equity was not a required topic in the principles level. Perhaps the questions are old? Or are there some main concepts I should understand about the above
topic before I sit for the exam?
I believe the candidate is referring to the sample questions found here.
I believe that is an oversight on the part of CFA Institute, or at least they did not plan to update the sample questions to reflect curriculum changes. Real estate and private equity were previously
covered at the Principles Level, but that is no longer the case. If memory serves, this was removed from the Principles Level curriculum in 2011. These subjects are now included at the Expert Level.
The sample questions that the candidate is referring to were developed prior to these changes, and probably have not been updated to reflect curriculum changes.
Now, I believe that the candidate is referring to the GIPS provisions dealing with real estate and private equity (sections I.6 and I.7 of the GIPS Standards, respectively), Appendix D in GIPS 2005
(Private Equity Valuation Principles) and the related guidance. My answer is with respect to that content as well. Having said that, your curriculum may make occasional reference to these asset
classes, and you are responsible for that information.
In a previous blog post, I covered the answers to two CIPM Expert Level sample exam questions, specifically, #5 and #6. One current Expert Level candidate asked me to elaborate on the answer, asking
which formula(e) in the curriculum could be used to obtain the answer.
I would say that the solution here does not relate directly to any specific single formula in the reading; rather, it relates to the relationships in the diagram on page 101 of your Virtual Bookshelf
materials, and to extending the concepts you have learned.
Recall the vignette that goes with these exercises reads as follows:
Longitudinal Asset Management is a US-based portfolio manager investing in international equities. One of the firm’s portfolios is invested entirely in Canadian and United Kingdom equities. At the
beginning of an evaluation period, the market values of the portfolio’s Canadian and UK segments are 5,000,000 Canadian dollars (CAD) and 3,000,000 pounds sterling (GBP), respectively. At the
prevailing exchange rates, one CAD equals 0.80 US dollars (USD), and one GBP equals 2.00 USD.
Excluding dividend income, at the end of the period the Canadian equities are valued at CAD 5,300,000 and the UK equities are valued at GBP 2,880,000. The CAD now equals 0.90 USD while the GBP now
equals 1.90 USD. Dividend payments of CAD 100,000 and GBP 180,000, respectively, are received at the prevailing exchange rates on the last day of the period.
Question #6 reads as follows:
6. The portfolio’s total return, expressed in base currency, is the sum of the capital gain, yield, and currency components of return. In this framework, the capital gain component of the entire
portfolio’s total return is closest to:
A. 0.00%.
B. 1.00%.
C. 2.42%.
Apologies for using different notation from your courseware, but Google blogspot does not cleanly support the use of superscripts and subscripts.
My notation here is as follows:
• Local market value at the start of the period is V(0).
• Local market value at the end of the period is V(1).
• Converted market value at the start of the period is V(0)*S(0).
• Converted market value at the end of the period is V(1)*S(1).
When investors buy foreign assets, they are exposed to two sources of return:
• Change in value of the asset in local currency; i.e, V(1) – V(0).
• Change in value of the foreign currency; i.e., S(1) – S(0).
Investors are also exposed to the compounding of these two sources of return:
[V(1) - V(0)]*[S(1) - S(0)].
If we want to examine the return due to the change in value of the asset in local currency, but in terms of the base currency, we assume the exchange rate during the period does not change (i.e.,
assume it remains fixed at S(0)), and apply that exchange rate to the change in value in local currency:
S(0)*[V(1) – V(0)] = S(0)*V(1) – S(0)*V(0).
So, back to my solution to item #6, at the start of the period and at the end of the period, there are two positions, the UK equities and the CA equities. Assuming the exchange rate during the period
stays fixed at S(0), the calculation of the base currency value at the end of the period (i.e., S(0)*V(1)) is obtained as follows:
• the value of 2,880,000 GBP converts to 5,760,000 USD at the exchange rate of 1 GBP = 2.0 USD.
• the value of 5,300,000 CAD converts to 4,240,000 USD at the exchange rate of 1 CAD = .8 USD.
Thus the ending value of the portfolio is 5,760,000 + 4,240,000 = 10,000,000 USD.
And, the calculation of the base currency value at the start of the period (i.e., S(0)*V(0)) is obtained as follows:
• the value of 3,000,000 GBP converts to 6,000,000 USD at the exchange rate of 1 GBP = 2 USD.
• the value of 5,000,000 CAD converts to 4,000,000 USD at the exchange rate of 1 CAD = .8 USD.
Thus the starting value of portfolio is 6,000,000 + 4,000,000 = 10,000,000 USD.
Given the base currency amount is the same at the start and end of the period (assuming the spot exchange rate did not change), then the amount earned due to this is a total of 10,000,000 USD –
10,000,000 USD = 0 USD. The solution asks for a return, so you could determine the denominator to divide by, but given that the numerator is 0 USD, you don’t need to bother – the return is 0.00%.
(Note, the correct denominator would be the 10,000,000 USD at the start of the period, based on the formula S(0)*V(0)). | {"url":"https://cipmexamtipsandtricks.blogspot.com/2012/04/","timestamp":"2024-11-05T10:26:08Z","content_type":"text/html","content_length":"86741","record_id":"<urn:uuid:93b2476a-8ed6-462c-8ca5-be58653e080a>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00590.warc.gz"} |
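For readers who like to check such exercises numerically, here is a quick Python sketch (my own illustration, not part of the curriculum or the original post) that reproduces the 0.00% capital gain component by revaluing both segments at the starting exchange rates:

```python
# Segment data from the vignette: local start/end values and starting spot rates to USD
segments = {
    "CAD": {"v0": 5_000_000, "v1": 5_300_000, "s0": 0.80},
    "GBP": {"v0": 3_000_000, "v1": 2_880_000, "s0": 2.00},
}

# Capital gain component: hold the exchange rate fixed at S(0), i.e. use S(0)*V(1) - S(0)*V(0)
start_base = sum(seg["v0"] * seg["s0"] for seg in segments.values())
end_base_at_s0 = sum(seg["v1"] * seg["s0"] for seg in segments.values())

capital_gain_return = (end_base_at_s0 - start_base) / start_base
print(f"Capital gain component: {capital_gain_return:.2%}")  # 0.00%, i.e. answer A
```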
Division w/ remainders worksheets
division w/ remainders worksheets Related topics: linear equation java program
pre algebra free study guide
exploring mathematics with the inequality graphing application
roots and radicals
math website that help turn decimal into a fraction
description of mathematics & statistics,2
how to find the lcm of monomials
free online algebra tutor\
where is the log key on the ti-89
texas assessment of math knowledge and skills-answer key
manis Posted: Tuesday 09th of Dec 17:49
Can anybody help me? I have an algebra test coming up next week and I am completely confused. I need some help especially with some problems in division w/ remainders worksheets that
are quite tricky. I don’t wish to go to any tutorial and I would really appreciate any help in this area. Thanks!
From: Portugal
Vofj Timidrov Posted: Thursday 11th of Dec 12:15
You seem to be more horrified than confused. First you need to control your senses. Do not panic. Sit back, relax and look at the books with an open mind. They will seem difficult if you think they are tough. Division with remainders worksheets can be easily understood and you can solve almost every equation with the help of Algebrator. So chill.
From: Bulgaria
LifiIcPoin Posted: Friday 12th of Dec 09:35
I agree, websites on the internet are no better than the course books. Algebrator is a good way to start your math career.
From: Way Way
CbonjeVeb Posted: Saturday 13th of Dec 15:35
You mean it’s that uncomplicated? Fabulous. Looks like just the one to end my search for a solution to my troubles. Where can I locate this program? Please do let me know.
From: End of
the Universe
Mov Posted: Sunday 14th of Dec 17:38
Here you go kid, https://softmath.com/links-to-algebra.html
Gog Posted: Tuesday 16th of Dec 09:08
I suggest trying out Algebrator. It not only assists you with your math problems, but also gives all the necessary steps in detail so that you can improve the understanding of the subject.
From: Austin,
Talking About Percentages
A recent discussion with a student I was tutoring face to face, about an ambiguously worded problem, led me to gather a few answers we’ve given related to the words we use associated with percentages.
Ambiguous percentages
Here is a question from 2003:
Clarifying Percentages vs. Percentage Points
What is the difference between measuring using percentages versus measuring using percentage points? What is meant by a percentage point?
I answered, starting with why we need the terms:
The term "percentage point" is used to get around an ambiguity in English when we are comparing two different percentages. The problem is that "percent" implicitly refers to a relative change (some fraction of an original amount, like a salary increase of 10%) rather than an absolute change (some specified amount, like a salary increase of $1000). What do we say when we want to treat a percentage as an absolute amount?
If, for example, the current tax rate were 10% and we increased it to 12%, we might say that we increased it by 2 percent. But that would be taken to mean that we increased it by 2% _of the original 10%_ (that is, by 2/100 of 10%, or 0.2%), to 10.2%. The question is, are we using "percent" to mean one of the units called percent, or a percentage of that percentage?
To avoid this problem, we say instead that we are increasing the tax rate by "two percentage points". This unambiguously refers to the number 2% itself as a unit, rather than to 2% of something else.
So the percent increase from 10% to 12% is the difference, 2%, divided by the original amount, 10%, which is 0.20, or 20%. But the percentage point increase is just the 2%. If we replaced the
percentages given with a unit, say dollars, we would say that the percent increase from $10 to $12 is the difference, $2, divided by the original amount, $10, which is 0.20, or 20%; the dollar
increase is $2. This is what I mean by saying we treat percentage points as a unit.
On the other hand, if we actually wanted to say that the tax increased to 10.2%, it would be a good idea to clarify that as well, perhaps by saying explicitly that it increased by 2% of its old rate, or by stating the old and new values. Technically, however, it is correct to say that it increased by 2%.
In summary, I wouldn't say that we "measure" using one or the other; rather, we use the one term to clarify our meaning where the other would be ambiguous, because we are switching perspective from thinking of a percentage as a fraction of something else, to treating it as a number that stands on its own. A percentage change is a difference divided by some base number, while a percentage _point_ change is a simple addition or subtraction.
The issue with my student was in this area. She was working on the following problem:
In 1950, Americans spent 22% of their budget on food. This has decreased at an average rate of approximately 0.25% per year since then.
Find a linear function in slope-intercept form that models this description. It should model the percentage of total spending, p(x), by Americans x years after 1950.
She knew that a decrease of 0.25% means subtracting 0.25% of one year’s amount from that amount, equivalent to multiplying by 99.75% (1 – 0.0025 = 0.9975) each year. But this didn’t fit with anything
she had learned about. (In fact, it would correspond to an exponential decrease, not a linear function, but she hadn’t learned about that.)
In answer, I pointed out that the statement was ambiguous, and we need to interpret it in a way that is consistent with the mention of a linear function. What the problem should have said, for
clarity, is
In 1950, Americans spent 22% of their budget on food. This has decreased at an average rate of approximately 0.25 percentage points per year since then.
That is, the number 22 is to be reduced by 0.25 each year; that will be the slope of the function. Both 22 and 0.25 are to be thought of as measured in the same unit, percentage points, rather than
the latter being a percentage of the former.
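Read that way, the intended model is just the slope-intercept form with intercept 22 and slope -0.25 (percentage points per year):

p(x) = 22 - 0.25x

So, for example, p(20) = 22 - 5 = 17, meaning the model says that about 17% of spending went to food in 1970.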
A fraction of a percent
A different sort of issue arises when we mix together a fraction and a percentage in one measurement. Here is a question from 2005:
How to Pronounce a Fraction of a Percentage
Is it correct to say "one tenth of one percent" as opposed to saying "one tenth percent" for 0.1%? Why or why not?
It seems wrong to refer to a percentage as a fraction of a percentage, but news people and the financial industry do it all the time. I can't seem to find out what the precedence is for this.
I answered:
Both mean the same thing; neither is wrong. The reason for the longer phrase is probably the usual reason for using a longer phrase: to avoid ambiguity or possible confusion. Many people are not quite clear on what percentages mean, and might well take "one-tenth percent" as if it were just "one-tenth" (which, of course, is really 10%). So people tend to expand it to make it clear that they are using BOTH a fraction AND a percent; that is,
0.1% = 1/10 * 1% = 1/10 * 1/100 = 1/1000
I suppose you could compare this to using "a quarter OF A dollar" or "twenty-five hundredths OF a dollar", rather than just reading "$0.25" as "a quarter dollar" or "twenty-five hundredths dollars".
The latter just feels subtly wrong (despite the fact that American coins do say “quarter dollar”). I imagine it is not much more than habit, just as many other aspects of language feel right or wrong
though we can’t point to a rule for it, or explain why it should be that way.
I was reminded, in writing this just now, of a fascinating conversation with a French reader in 2002 about a different issue that, we decided, depended heavily on what language we were using:
Use of Plural with Decimal Numbers
In part of my answer to this question about whether we write, for example, 1.5 degrees or 1.5 degree (we say the former in English, while they say the latter in French, for a very interesting
reason), I said this:
My understanding is that we consider ONLY the number 1 as singular; in particular, zero is a plural: we say "0 degrees" or "0 (no) apples," not "0 degree" or "0 apple." We do not use fractions as adjectives at all, but say "half (of) an apple" or "two thirds of a degree" with the fraction standing alone as a noun phrase, so it would not be quite accurate to say that a proper fraction is singular. With a mixed number, we tend to use a plural: "one and a half apples."
This ties in to what I said above about “a quarter (of a) dollar”; possibly the real reason we say “a quarter of a percent” is exactly the same: The fraction is treated as a noun, not as an adjective
or adverb; and “percent” is treated as a unit.
Percent vs. percentage
A distantly related issue came up in the following question from 2008:
Difference between Percent and Percentage
What is the difference between percent and percentage are there any difference? I think both are out of 100.
Again I put on my “Ask Doctor Grammar” hat and answered:
The only difference is in how they are used grammatically--and people differ even on that. I take "percentage" to refer to the concept, and "percent" to be a unit, much like "voltage" vs. "volts" in electricity, or "mileage" vs. "miles" in distance. The mileage you put on a car during a trip might be 40 miles; the percentage of people who expect gas prices to rise might be 40 percent (40%).
Here is one dictionary's take on it (www.merriam-webster.com):
Percentage: noun
1 a: a part of a whole expressed in hundredths
<a high percentage of students attended>
b: the result obtained by multiplying a number by a percent
<the percentage equals the rate times the base>
Percent: adverb
: in the hundred : of each hundred
The main difference is that they report "percentage" as a noun and "percent" as an adverb. That fits my understanding. Note that "percentage" is not used with a number, while "percent" is (and not without).
Looking at the current definitions, I see that I must have missed two of three entries:
percent (adverb)
: in the hundred : of each hundred
percent (noun)
1 plural percent
a : one part in a hundred
b : percentage
a large percent of their income
2 percents plural, British : securities bearing a specified rate of interest
percent (adjective)
1 : reckoned on the basis of a whole divided into 100 parts
2 : paying interest at a specified percent
Most uses are probably nouns, as in “20 percent of people”, though the adjective use is also common (“a 20 percent solution”). But note that they say “percent” is also used as a synonym for
“percentage”, which I would take as a concession to common but mistaken usage.
I then referred to a previous question from 2002:
Percent vs. Percentage
There, Doctor Sarah quoted a dictionary and two usage books.
Doctor Sarah had also answered a similar question earlier that year:
Percent or Percentage?
Would you please explain the exact difference between the words percent and percentage? I have used textbooks that use them as if they are the same. I have have always explained it as percent is a % and percentage is a number that is the same unit as the base number.
This questioner seems to take “percentage” in sense 1b from my dictionary reference above, as the amount itself rather than the number of hundredths, which I have seen used in some textbooks but
don’t think I really use in practice. (They might say “percentage = percent times whole”; I’d rather say, “part = percentage times whole”.) I’m not sure either Doctor Sarah or I recognized this
detail. Her response was:
You're on the right track, and even a regular dictionary can help with this.
percent - one part in a hundred
percentage - a part of a whole expressed in hundredths; the result obtained by multiplying a number by a percent
From the Guide to Grammar and Writing on the Web ("Notorious Confusables"):
"We use the word percent as part of a numerical
expression (e.g., Only two percent of the
students failed.). We use the word percentage
to suggest a portion (e.g., The percentage of
students who fail has decreased.)."
Unfortunately, you will find percent and percentage incorrectly used everywhere on the Web and in textbooks.
Although the dictionary definition quoted can be taken as representing the part of the whole itself, not the fraction, the example from the grammar site doesn’t have that meaning, as it is not the
actual number of students who fail, but the fraction, that has decreased.
The same page includes a 2003 question on the same topic:
Is there a difference between the meaning of these two words, or are they totally interchangeable?
I always thought the word percent required the correct notation using the symbol % and that percentage was referred to as AN AMOUNT BASED ON A GIVEN TOTAL, NOT NECESSARILY BASED ON 100.
For example: Given 4/16
The percentage is 4 out of a total of 16, the percent is 25%
I replied:
To answer your specific question, I would say "the fraction used is 4 out of 16; the percentage is 25% [read as 25 percent]." That is, both "percent" and "percentage" refer to an amount "out of 100" (since that is what "per cent" means), and the only difference is how they are used in a sentence. We (should) use "percent" only in phrases like "25 percent" where it can be directly replaced by the phrase "out of 100"; we use "percentage" as a name for the concept.
Although I don’t always defer to dictionaries in areas related to math, I do like to refer to them, and especially to what they say about common usage.
My American Heritage dictionary has this usage note:
_Percent_ and _percentage_ are both used to express quantity with relation to a whole. _Percent_ is employed only specifically and always with a number or numeral. _Percentage_ is never preceded by such a figure, but should be qualified by a general term to indicate size (since _percentage_ does not necessarily imply smallness). The number of the noun that follows _percent_ or _percentage_, or is understood to follow them, governs the choice of the verb: _Forty percent of his estate is in securities. A large percentage of the patients are children._
Because they are probably looking largely at non-technical material, I think they have missed some usages of “percentage”, as all their examples of it tend to be, as they say, with “a general term”.
It can also be specific: the percentage of patients who are children is 75%, or whatever.
Why Gravity Is Not a Force That Pulls?
Written by Akash Peshin | Last Updated On: 19 Oct 2023 | Published On: 3 Jan 2018
Newton’s theory of gravity is not entirely accurate. Einstein’s theory of General Relativity explains that gravity is not a force that pulls, but rather an effect of the curvature of space-time.
Gravity is one of the four fundamental forces that constitute the Universe. I’m sure that everyone is familiar with the fable about the minor accident that led Newton to the dramatic discovery of
gravity. While reposing under a tree, an apple fell on his head, and Newton, believed to be thinking about the forces of nature under that very tree, had an epiphany. What followed was a preposterous
claim; he concluded that the same force that pulled the apple down from the tree is what kept the Earth in motion around the Sun.
The fable contains all the elements of a scintillating scientific discovery — an idiosyncratic genius at work, a sleight of chance and a monumental insight precipitated by witnessing the most mundane
of events — a virtue of allegorical thinking. The fable, however, isn’t entirely true, but… neither is Newton’s theory.
Newton’s Vs Einstein’s View Of Gravity
First of all, the apple didn’t fall on Newton’s head. According to his biographer, William Stukeley, Newton witnessed the apple fall at a distance while he was in a “contemplative mood”. He pondered
why the apple fell “perpendicularly” or straight towards the ground, rather than sideways or in any other unorthodox way. He later postulated that the force of gravity between two bodies pulled or
attracted them towards each other with a magnitude that is directly proportional to their masses and inversely proportional to the square of the distance between them. The trajectory that the bodies
undergo will be the shortest to minimize the expenditure of energy, therefore, a straight line.
Even Newton himself wasn’t particularly satisfied with his theory. He was dubious because he envisaged the force to be a push, not an inexplicable pull. This pull of gravity could either be further
explained by unveiling something that Newton promptly missed or it could simply be accepted that the “magical” pull was an essential property of mass. The latter became gospel, withstanding and
obscuring the truth for 400 years.
Fortunately, this dogma was rightly repudiated by Einstein, an equally formidable genius, when he made an even more preposterous claim and put forward his General Theory of Relativity. This exhibited
his immense courage, for a patent clerk was challenging Newton, a veritable demigod of physics. He was challenging a view that had been worshipped for 400 years.
Einstein’s discovery was based on a series of thought experiments. Consider an astronaut floating in space, away from any source of gravity, and that same astronaut free falling in a planet’s
gravity. The similarity of both experiences is uncanny. The astronaut must glide or sit still until affected by an external force. If an astronaut falls or floats without any knowledge of his
location, say, in an enclosed lift, he cannot distinguish whether the lift is floating in deep space or through a building on Earth. In both cases, he is essentially weightless. However, if he does
not experience any force, why does a free-falling astronaut accelerate? In Newtonian mechanics, this is paradoxical, as it contradicts Newton’s second law of motion – the magnitude of acceleration is
proportional to the applied external force.
Einstein suggested that objects aren’t pulled by massive objects, but rather pushed down by the space above them. According to General Relativity, matter warps the fabric of not only space but time
as well, collectively known as the continuum of space-time. The fabric is like a grid of tightly strung rubber bands; when a massive object pushes and stretches them downward, the deformed rubber
bands push objects under them. The theory implied that smaller objects weren’t pulled towards massive objects but were traveling on a downward slope, as the space in the latter’s vicinity was warped
by its large mass. A free-falling body, therefore, follows the straightest possible path in space-time.
Einstein developed this theory on the assumption that the laws of physics must appear the same to every observer. This is also true for planets revolving around the Sun. Orbiting planets follow the
shortest path around the Sun to minimize energy. This path is an ellipse, the most efficient path in the gravity well of the Sun… but what about the astronaut’s acceleration?
Einstein’s geodesic equations signify that acceleration is a product of curved space-time. His equation explains how curvature accelerates a falling object. In the absence of curvature, the body
would move in a straight line with a constant velocity, unless this motion would be disrupted by an otherwise external force. However, the most interesting aspect of the equation is the absence of
mass in its expression. The magnitude of acceleration is independent of the falling body, just as the equivalence principle would demand (if you drop a hammer and a feather on the surface of the
moon, they would drop at the same time).
Is Newtonian Gravity A Fallacy?
Newtonian gravity cannot explain the peculiar orbit of Mercury, nor gravitational lensing, the bending of light as it passes in the proximity of a massive object, such as the Sun. Is Newton’s view entirely wrong? If so, then why is it still ubiquitous in our textbooks?
Newton’s view is not wrong. In fact, NASA still uses his famous laws to predict the behavior of satellites in space. His view remains extremely accurate for small bodies and low velocities. The reason why children aren’t taught the principles of General Relativity is that the concepts are exceedingly difficult to comprehend. The geometry isn’t Euclidean and isn’t really suited to high school, and the math’s sophistication is of the highest order. The important thing to remember is that gravity is neither a push nor a pull; what we interpret as a “force” or the acceleration due to gravity is actually the curvature of space and time — the path itself stoops downward.
Akash Peshin is an Electronic Engineer from the University of Mumbai, India and a science writer at ScienceABC. Enamored with science ever since discovering a picture book about Saturn at the age of
7, he believes that what fundamentally fuels this passion is his curiosity and appetite for wonder. | {"url":"https://www.scienceabc.com/eyeopeners/why-gravity-is-not-a-force-that-pulls.html","timestamp":"2024-11-07T20:15:08Z","content_type":"text/html","content_length":"195166","record_id":"<urn:uuid:ff4bdae5-6e77-464d-a4c0-39818bee2eb3>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00519.warc.gz"} |
Tensor-Based Reformulation of the General Linear Model for Enhanced Computational Efficiency
Core Concepts
Reformulating the general linear model (GLM) using tensors and Einstein notation significantly improves computational efficiency and memory usage, especially for complex models with multiple groups
and regressors.
This research paper proposes a novel approach to enhance the computational efficiency of the general linear model (GLM) by employing tensors and Einstein notation. The authors argue that the
conventional matrix formulation of the GLM, while widely used, suffers from inefficiencies, particularly when dealing with multiple groups and regressors. This is due to the creation of large, sparse
matrices that consume significant memory and processing power.
The paper introduces a tensor-based reformulation of the GLM, where data structures representing parameters and variables are expressed as tensors using Einstein notation. This approach leverages the
multidimensional nature of tensors to encode information more compactly, reducing the number of data elements and computations required.
The authors demonstrate the efficacy of their approach by translating common GLM applications, such as contrast matrix formulation and multiple t-tests, into the tensor notation. They highlight how
the tensor formulation simplifies the automation of hypothesis testing and eliminates the need for a priori knowledge of group, regressor, and hypothesis numbers.
The paper concludes that the tensor-based GLM offers significant advantages in terms of computational speed, memory efficiency, and organizational elegance. The authors suggest that this
reformulation can benefit various GLM applications and encourage further exploration of this approach in statistical modeling.
• Bibliographic Information: Kress, G. T. (Year). Tensor Formulation of the General Linear Model with Einstein Notation. Journal Name, Volume(Issue), Page numbers. DOI or URL
• Research Objective: To improve the computational efficiency of the general linear model (GLM) by reformulating it using tensors and Einstein notation.
• Methodology: The paper presents a theoretical reformulation of the GLM using tensors and Einstein notation. It demonstrates the application of this approach by translating conventional GLM
formulations, including contrast matrix formulation and multiple t-tests, into the tensor notation.
• Key Findings: The tensor-based GLM significantly reduces the number of data elements and computations required compared to the conventional matrix formulation. This leads to improved
computational speed and memory efficiency, especially for complex models with multiple groups and regressors.
• Main Conclusions: The tensor-based reformulation of the GLM offers a more efficient and elegant approach to statistical modeling. This approach can benefit various GLM applications and has the
potential to enhance computational efficiency in statistical analysis.
• Significance: This research contributes to the field of computational statistics by providing a novel and efficient method for implementing the GLM. The proposed tensor-based approach can
potentially improve the performance of statistical software and facilitate more complex analyses.
• Limitations and Future Research: The paper primarily focuses on the theoretical aspects of the tensor-based GLM. Future research could explore the practical implementation of this approach in
statistical software packages and evaluate its performance on real-world datasets. Additionally, investigating the applicability of this approach to other statistical models beyond the GLM would
be beneficial.
A model with m regressors, n groups, and k data points in each group conventionally requires a matrix X with kn²(m+1) elements. The tensor reformulation reduces the number of elements in the
corresponding data structure to kmn, a reduction factor of n(m+1)/m.
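To make the bookkeeping concrete, here is a small NumPy sketch (my own illustration with made-up dimensions, not code from the paper, and simplified to m regressors per group without the intercept column). It stores one dense k-by-m regressor block per group, fits each group's coefficients with Einstein-style contractions, and compares the result with the conventional block-diagonal design matrix, which carries roughly n times as many elements, most of them zeros.

```python
import numpy as np

n, m, k = 4, 3, 50                       # groups, regressors, observations per group (illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(n, k, m))           # tensor storage: k*m*n elements
y = rng.normal(size=(n, k))

# Normal equations per group, written as Einstein contractions:
#   G[g,a,b] = sum_i X[g,i,a] X[g,i,b]   and   c[g,a] = sum_i X[g,i,a] y[g,i]
G = np.einsum('gia,gib->gab', X, X)
c = np.einsum('gia,gi->ga', X, y)
beta = np.linalg.solve(G, c[..., None])[..., 0]   # per-group coefficients, shape (n, m)

# Conventional formulation: block-diagonal design matrix with k*n rows and m*n columns
# (k*m*n**2 elements), mostly zeros.
X_block = np.zeros((k * n, m * n))
for g in range(n):
    X_block[g * k:(g + 1) * k, g * m:(g + 1) * m] = X[g]
beta_flat, *_ = np.linalg.lstsq(X_block, y.reshape(-1), rcond=None)

print(np.allclose(beta.reshape(-1), beta_flat))   # True: same estimates, far less storage
```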
"The general linear model is a universally accepted method to conduct and test multiple linear regression models." "Presented here is an elegant reformulation of the general linear model which
involves the use of tensors and multidimensional arrays as opposed to exclusively flat structures in the conventional formulation." "The tensor formulation of the GLM drastically decreases the number
of elements in the data structures and reduces the quantity of operations required to perform computations with said data structures, especially as more groups, regressors, and hypotheses are
incorporated in the model."
How does the computational efficiency of the tensor-based GLM compare to other optimization techniques used for large-scale data analysis?
The tensor-based GLM formulation presented in the paper leverages the inherent structure of the data to potentially reduce computational complexity compared to the conventional matrix formulation. This efficiency stems from:
• Reduced Data Structure Size: By encoding information about groups and parameters within the tensor's dimensions, the tensor formulation avoids storing a large number of zeros present in the sparse matrices of the conventional approach. This reduction in data structure size directly translates to lower memory requirements and faster data access times, which are crucial for large-scale data analysis.
• Efficient Tensor Operations: Modern computing hardware and software libraries are increasingly optimized for tensor operations. Libraries like TensorFlow and PyTorch exploit parallelism and hardware acceleration to perform tensor computations, including contractions and inversions, much faster than equivalent operations on large sparse matrices.
However, comparing the tensor-based GLM's efficiency to other optimization techniques requires a more nuanced discussion:
• Gradient-Based Optimization: Many machine learning models, including those handling large-scale data, rely on gradient-based optimization algorithms like Stochastic Gradient Descent (SGD). These algorithms are iterative, and their efficiency depends on factors like the dataset size, model complexity, and choice of hyperparameters. Directly comparing their performance to the tensor-based GLM requires empirical evaluation on specific datasets and models.
• Sparsity Exploitation: While the tensor formulation addresses sparsity arising from the GLM's structure, other optimization techniques specifically target general sparse data structures. Techniques like sparse matrix factorization and coordinate descent can be highly efficient when the data exhibits a high degree of sparsity.
In conclusion, the tensor-based GLM offers potential computational advantages for large-scale data analysis, particularly for GLMs with multiple groups and parameters. However, a definitive comparison requires benchmarking against other optimization techniques on specific datasets and considering the level of sparsity and the suitability of different algorithms.
Could the inherent complexity of tensor operations and the need for specialized software libraries potentially limit the practical adoption of this approach?
While the tensor formulation of the GLM offers elegance and potential computational benefits, some challenges might hinder its widespread adoption:
• Conceptual Complexity: Tensors and Einstein notation, while powerful, introduce a higher level of abstraction compared to traditional matrix algebra. This can pose a learning curve for practitioners unfamiliar with these concepts, potentially limiting their adoption.
• Software Library Dependence: Efficient implementation of tensor operations often relies on specialized software libraries like TensorFlow or PyTorch. This dependence can introduce compatibility issues, learning curves for new tools, and potential limitations if a library doesn't support specific hardware or functionalities.
• Debugging and Interpretation: Debugging tensor operations and interpreting results can be more challenging than traditional matrix-based approaches. The multidimensional nature of tensors and the implicit summation in Einstein notation require specialized tools and techniques for effective debugging and understanding of intermediate computations.
However, several factors mitigate these challenges:
• Growing Tensor Literacy: The increasing popularity of deep learning, which heavily relies on tensors, is driving wider adoption and understanding of tensor concepts and tools. This growing "tensor literacy" in the data science community lowers the barrier to entry for the tensor-based GLM.
• Maturing Software Ecosystem: Tensor libraries are under active development, with improvements in usability, documentation, and debugging tools. Additionally, integration with other data science libraries and platforms is continuously improving, facilitating wider adoption.
• Abstraction Layers: High-level APIs and libraries are emerging that abstract away some of the complexities of tensor operations. These abstractions allow practitioners to leverage the benefits of tensors without needing deep expertise in low-level implementations.
In summary, while the complexity of tensor operations and the reliance on specialized software libraries present challenges, the growing tensor literacy, maturing software ecosystem, and development of abstraction layers are actively mitigating these limitations and paving the way for wider practical adoption of the tensor-based GLM.
If our understanding of the universe is fundamentally limited by the dimensionality of our perception, could tensor mathematics provide a framework for transcending these limitations and revealing
deeper insights?
The idea that our perception of the universe is limited by its dimensionality is a profound one, often explored in fields like theoretical physics. Tensor mathematics, with its ability to represent and manipulate multidimensional data, offers an intriguing framework for exploring these limitations:
• Higher-Dimensional Representations: Tensors naturally extend beyond the three spatial dimensions and one temporal dimension we perceive. They can represent data in arbitrarily high-dimensional spaces, potentially allowing us to model and reason about phenomena beyond our direct experience.
• Geometric Insights: Tensors are inherently linked to geometry. They provide a language to describe geometric objects and transformations in higher dimensions. This geometric perspective could be crucial in understanding the structure of the universe at scales where our intuitive notions of space and time break down.
• Unifying Framework: Tensors have already proven successful in unifying seemingly disparate concepts in physics, for example, in Einstein's General Relativity, where the curvature of spacetime, represented by a tensor, explains gravity. This unifying potential of tensors could be key to integrating our understanding of different forces and phenomena in the universe.
However, several considerations temper this optimism:
• Mathematical Abstraction vs. Physical Reality: While tensors provide a powerful mathematical framework, their application to physics requires careful interpretation. Just because we can represent something mathematically doesn't automatically imply its physical existence or relevance.
• Empirical Validation: Any theory or model, regardless of its mathematical elegance, must be grounded in empirical evidence. Translating insights from tensor mathematics into testable predictions about the universe remains a significant challenge.
• Limits of Human Cognition: Even if tensor mathematics reveals deeper truths about the universe, our ability to grasp and interpret these truths might be inherently limited by our cognitive capacities.
In conclusion, while our perception might be confined by dimensionality, tensor mathematics offers a powerful toolset for exploring beyond these limitations. Its ability to represent higher-dimensional spaces, provide geometric insights, and unify diverse concepts makes it a promising avenue for advancing our understanding of the universe. However, we must remain cautious about equating mathematical abstraction with physical reality and focus on grounding our explorations in empirical validation while acknowledging the potential limits of human cognition.
What is the Naive Bayes Algorithm? | Data Basecamp
The Naive Bayes Algorithm is a classification method based on the so-called Bayes Theorem. In essence, it assumes that the occurrence of a feature is completely uncorrelated with the occurrence of
another feature within the class.
The algorithm is naive because it treats the features as completely independent of each other, with each one contributing on its own to the probability of the class. A simple example of this: a car is characterized by having four wheels, being about 4-5 meters long, and being able to drive. All three of these features independently contribute to this object being a car.
How does the Algorithm work?
The Naive Bayes algorithm is based on the Bayes theorem. It describes a formula for calculating the conditional probability P(A|B) or in words: What is the probability that event A occurs when event
B has occurred? As an example: What is the probability that I have Corona (= event A) if my rapid test is positive (= event B)?
According to Bayes, this conditional probability can be calculated using the following formula:
\[ P(A|B) = \frac{P(B|A) \cdot P(A)}{P(B)} \]
• P(B|A) = probability that event B occurs if event A has already occurred
• P(A) = probability that event A occurs
• P(B) = probability that event B occurs
Why should we use this formula? Let us return to our example with the positive test and the Corona disease. I cannot know the conditional probability P(A|B) and can only find it out via an elaborate
experiment. The inverse probability P(B|A), on the other hand, is easier to find out. In words, it means: How likely is it that a person suffering from Corona has a positive rapid test?
This probability can be found out relatively easily by having demonstrably ill persons perform a rapid test and then calculating the ratio of how many of the tests were actually positive. The
probabilities P(A) and P(B) are similarly easy to find out. The formula then makes it easy to calculate the conditional probability P(A|B).
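As a quick illustration, here is the calculation in Python with made-up numbers (the prevalence, test sensitivity, and overall positive rate below are assumptions chosen only to show the mechanics):

```python
# Assumed, purely illustrative probabilities
p_sick = 0.01             # P(A): prevalence of the disease
p_pos_given_sick = 0.95   # P(B|A): positive rapid test given the person is sick
p_pos = 0.06              # P(B): overall rate of positive rapid tests

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_sick_given_pos = p_pos_given_sick * p_sick / p_pos
print(f"P(sick | positive test) = {p_sick_given_pos:.1%}")   # roughly 15.8%
```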
If we have only one feature, this already explains the complete Naive Bayes algorithm. With a single feature, the conditional probability P(K | x) is calculated for each of the classes, and the class with the highest probability wins. For our example, this means that the two conditional probabilities P(the person is sick | test is positive) and P(the person is healthy | test is positive) are calculated using Bayes' theorem, and the classification is done for the class with the higher probability.
Simple Representation of the Naive Bayes Classification
If our dataset consists of more than one feature, we proceed similarly and compute the conditional probability for each combination of feature x and class K. For each class, we then multiply the probabilities of all its features. The class K that ends up with the highest product of probabilities is the predicted class for the data point.
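The following short from-scratch sketch (my own toy example with invented categorical data, not taken from the article) shows exactly this mechanism: per-class priors, per-feature conditional probabilities, and a prediction made by multiplying them. It already includes the add-one (Laplace) smoothing discussed further below.

```python
import numpy as np

# Toy data: two categorical features, two classes (values are invented)
X = np.array([[0, 1], [0, 2], [1, 0], [1, 1], [0, 0], [1, 2]])
y = np.array([0, 0, 1, 1, 0, 1])

def fit_naive_bayes(X, y, alpha=1.0):
    """Estimate class priors and P(feature value | class) with add-one smoothing."""
    classes = np.unique(y)
    priors = {c: np.mean(y == c) for c in classes}
    cond = {}                                  # cond[(class, feature, value)]
    for c in classes:
        Xc = X[y == c]
        for j in range(X.shape[1]):
            values = np.unique(X[:, j])        # all values seen for this feature
            for v in values:
                num = np.sum(Xc[:, j] == v) + alpha
                den = len(Xc) + alpha * len(values)
                cond[(c, j, v)] = num / den
    return classes, priors, cond

def predict(x, classes, priors, cond):
    # Naive assumption: multiply the per-feature conditional probabilities per class
    scores = {c: priors[c] * np.prod([cond[(c, j, v)] for j, v in enumerate(x)])
              for c in classes}
    return max(scores, key=scores.get), scores

classes, priors, cond = fit_naive_bayes(X, y)
print(predict((0, 1), classes, priors, cond))
```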
What are the Advantages and Disadvantages of the Naive Bayes Algorithm?
The Naive Bayes Algorithm is a popular starting point for a classification application since it is very easy and fast to train and can deliver good results in some cases. If the assumption of
independence of the individual features is given, it even performs better than comparable classification models, such as logistic regression, and requires less data to train.
Although the Naive Bayes algorithm can achieve good results with relatively little data, we need enough data that each class, and each feature value within it, appears at least once in the training data set. Otherwise, the classifier will return a probability of 0 for that category in the test dataset. Moreover, in reality, it is very unlikely that all input variables are completely independent of each other, which is also very difficult to test.
How can you improve the Naive Bayes algorithm?
There are several ways to improve the performance of the Naive Bayes algorithm on a data set. The most common methods are presented below.
• Feature engineering: Like any machine learning model, the Naive Bayes algorithm depends heavily on the quality of the input data. A good selection of the required features can improve the
accuracy of the model and reduce the risk of overfitting. We can use feature engineering techniques for this, such as feature extraction or feature scaling.
• Smoothing: If a data set contains no data for a certain combination of features, the so-called zero-frequency problem can occur. Poor predictions are then made for these rare categories because
the model was unable to recognize sufficient structures. The Naive Bayes algorithm therefore uses smoothing to prevent situations in which a zero probability is predicted. For example, in
“add-one smoothing”, one unit of the feature is added to the frequency to ensure better generalization of the model to new data.
• Ensemble methods: In ensemble training, multiple Naive Bayes models are combined and used for joint classification. The accuracy of the joint result of the models is usually higher than the
result of a single model. Depending on how the models are trained and combined, there are different variants. One possibility, for example, is to use an AdaBoost approach in which the next model
is only trained on the data that the previous model classified incorrectly.
• Parameter tuning: Naive Bayes also offers a selection of hyperparameters that can be adapted to the data set to improve the performance of the model. For example, the smoothing parameter can be
adjusted or different sets of features can be tested.
• Dealing with unbalanced data: The Naive Bayes algorithm is sensitive to whether the number of records per class is balanced or not. If this is not the case, there may be a bias towards the majority
class. Methods such as oversampling or undersampling can be used to prevent these errors. With oversampling, for example, the number of data records in a minority class is increased to create a
balance between the classes. Individual instances can be duplicated or slightly modified to create new instances of the minority class.
• Treatment of continuous features: The normal Naive Bayes algorithm assumes that the input features are categorical. However, this is not the case in many real-world applications and many datasets
also contain continuous features. To train a Naive Bayes model, these features must first be converted into categorical data. There are different methods of how this so-called discretization can
be done. For example, the data can be divided into equal intervals or subdivided using quantiles. Although some of the information content of the data set is always lost during discretization, it
could not become part of the Naive Bayes model without this step.
These methods mean that the performance of the Naive Bayesian model can be further improved and a model that is as robust as possible can be trained.
What is the difference between Multinomial Naive Bayes and Bernoulli Naive Bayes?
Multinomial and Bernoulli Naive Bayes are two frequently used variants of the original Naive Bayes algorithm, which are mainly used in text classification. They differ mainly in how they represent
the input data numerically. While Multinomial Naive Bayes is based on the assumption that the word components can be represented by the pure number or frequency in which they occur, Bernoulli Naive
Bayes assumes that the input data is best represented by binary features. These binary features measure, for example, whether a word occurs in a document or not.
Bernoulli Distribution for p = 0.3 | Source: Author
Multinomial Naive Bayes uses the so-called bag-of-words as input data. It counts how often each word occurs in the document. The model then estimates the conditional probability of each word
depending on the class, using a multinomial distribution. Bernoulli Naive Bayes, on the other hand, uses binary input data and features that indicate whether a particular word occurs in the document
or not. Then, analogous to Multinomial Naive Bayes, the conditional probability of a feature is estimated as a function of the class variable, but using a Bernoulli distribution.
This structure results in a further difference that relates to the handling of missing features. With Multinomial Naive Bayes, a missing word is assigned the frequency number zero, which can lead to
problems with zero probabilities. The Bernoulli classifier, on the other hand, treats the missing word as a separate feature and handles it accordingly. This means that no problems arise here.
The choice of algorithm depends heavily on the task and, above all, the type of input features. Multinomial Naive Bayes is often used for text classifications that work with discrete numbers of words
and calculate the classification based on a complex interplay of the individual words. Bernoulli Naive Bayes, on the other hand, is used for binary features where the prediction is more dependent on
the presence or absence of individual words, such as spam recognition or sentiment analysis.
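The difference between the two variants is easy to see in code. In the hedged sketch below (toy data, scikit-learn), the Multinomial model consumes word counts while the Bernoulli model consumes binary presence features.

```python
# Multinomial vs. Bernoulli Naive Bayes on the same toy corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB, BernoulliNB

texts = ["cheap pills buy now", "team lunch tomorrow", "buy cheap now", "tomorrow project review"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

# Multinomial NB: how often each word occurs.
X_counts = CountVectorizer().fit_transform(texts)
print(MultinomialNB().fit(X_counts, labels).predict(X_counts))

# Bernoulli NB: whether each word occurs at all (binary features).
X_binary = CountVectorizer(binary=True).fit_transform(texts)
print(BernoulliNB().fit(X_binary, labels).predict(X_binary))
```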
What Applications use the Naive Bayes Algorithm?
In the field of machine learning, Naive Bayes is used as a classification model, i.e. to classify a data set into a certain class. There are various concrete applications for these models for which
Naive Bayes is also used:
Text Classification
In this area, the model can be used to assign a section of text to a specific class. E-mail programs, for example, are interested in classifying incoming emails as "spam" or "not spam". For this purpose, the conditional probabilities of individual words are calculated and matched with the class. The same procedure can also be used to classify social media comments as "positive" or "negative".
Although Naive Bayes provides a fast and simple approach for these applications in the text domain, there are other models, such as Transformers, that deliver much better results. This is because the Naive Bayes model does not take word order or sentence structure into account. For example, the sentence "I don't like this product." is probably not a positive product review just because the word "like" occurs in it.
Classification of Credit Risks
For banks, loan default is an immense risk, as they lose large sums of money if a customer can no longer pay the loan. That’s why a lot of work is put into models that can calculate the individual
default risk depending on the customer. In the end, this is also a classification in which the customer is assigned to either the “loan repayment” or “loan default” group. For this purpose, some
specific characteristics are used, such as loan amount, income, or the number of previous loans. With the help of Naive Bayes, a reliable classification model can be trained from this.
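A hedged sketch of such a classifier is shown below: it uses Gaussian Naive Bayes on three numeric customer features (loan amount, income, number of previous loans); all values and labels are invented for illustration.

```python
# Toy credit-risk classification with Gaussian Naive Bayes (made-up data).
import numpy as np
from sklearn.naive_bayes import GaussianNB

X = np.array([[20000, 35000, 0],   # loan amount, income, previous loans
              [ 5000, 60000, 2],
              [15000, 28000, 1],
              [ 3000, 75000, 3]])
y = np.array([1, 0, 1, 0])          # 1 = loan default, 0 = loan repayment

clf = GaussianNB().fit(X, y)
print(clf.predict_proba([[10000, 40000, 1]]))  # estimated default probability for a new customer
```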
Prediction of Medical Treatment
In medicine, a doctor has to decide which treatment and which drugs are most promising for the individual patient and his clinical picture and have the highest probability to make the patient healthy
again. To support this, a Naive Bayes classification model can be trained, which calculates the probability that the client will recover or not, depending on characteristics of the health condition,
such as blood pressure, well-being, or symptoms, as well as the possible treatment (medication). The results of the model can in turn be used by the physician in his decision.
This is what you should take with you
• The Naive Bayes Algorithm is a simple method to classify data.
• It is based on Bayes' theorem and is called naive because it assumes that all input variables and their values are independent of each other.
• The Naive Bayes Algorithm is relatively quick and easy to train, but in many cases, it does not give good results because the assumption of independence of the variables is violated.
Other Articles on the Topic of Naive Bayes
• Scikit-Learn provides some examples and programming instructions for the Naive Bayes algorithm in Python. | {"url":"https://databasecamp.de/en/ml/naive-bayes-algorithm","timestamp":"2024-11-08T21:37:45Z","content_type":"text/html","content_length":"283919","record_id":"<urn:uuid:76b90800-bbb8-4fd7-83dc-c29c75ca5ba4>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00408.warc.gz"} |
Elevation Point of Vertical Curve Calculator - Calculator6.com
Elevation Point of Vertical Curve Calculator
This calculator is used to calculate the elevation point of a vertical curve on a road or rail line. Vertical curves are designed to smooth the slope of the road surface or to soften the ups and
downs, and the height point indicates the highest point of the curve.
When using the online vertical curve height point calculator you can calculate by entering: Length of Curve, Initial Grade, Final Grade and Initial Elevation.
y = e_{pvc} + g_1 x + \frac{(g_2 - g_1)\, x^2}{2L}
The variables used in the formula are:
• y: elevation of the curve at horizontal distance x
• epvc: initial elevation (at the start of the curve)
• g1: initial grade
• g2: final grade
• x: horizontal distance from the start of the curve
• L: length of the curve
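As a quick sketch, the formula can be evaluated directly in Python. The grades and curve length below are assumed example values, and the high-point helper follows from setting the derivative of the formula to zero.

```python
# Elevation along a vertical curve and the location of its high (or low) point.
def curve_elevation(x, e_pvc, g1, g2, L):
    """Elevation y at horizontal distance x from the start of the curve (grades as decimals)."""
    return e_pvc + g1 * x + (g2 - g1) * x**2 / (2 * L)

def high_point_distance(g1, g2, L):
    """Distance where dy/dx = g1 + (g2 - g1) * x / L equals zero (valid if 0 <= x <= L)."""
    return -g1 * L / (g2 - g1)

L, g1, g2, e_pvc = 200.0, 0.03, -0.02, 100.0          # assumed example values
x_hp = high_point_distance(g1, g2, L)                  # 120.0
print(x_hp, curve_elevation(x_hp, e_pvc, g1, g2, L))   # 120.0 101.8
```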
How to calculate the elevation point of a vertical curve?
The elevation point of a vertical curve is the highest point of the vertical curve and is located at a specific point on the road or rail line. Vertical curves are used to correct the gradient of a
road or rail line or to smooth the ups and downs. The following steps are followed to calculate the height point of the vertical curve:
1. Determining the Starting and Ending Heights: As a first step, the heights of the start and end points of the vertical curve are determined. This refers to the heights at which the vertical curve
starts and ends.
2. Determining the Slope: The slope of the vertical curve is determined as the ratio of the height difference between the start and end points to the horizontal distance. The slope is usually
expressed as a percentage.
3. Calculating the Height Point: The height point of the vertical curve is located where the grade changes from ascending to descending, i.e., where the slope of the curve becomes zero. It is described by its horizontal distance along the curve and the elevation at that distance. Mathematical formulas are usually used to calculate the elevation point, since this point is generally not at the center of the curve.
4. Performing the Calculation: A rough first approximation places the height point near mid-curve (at x = L/2). More precisely, setting the derivative of the elevation formula to zero gives the distance of the height point as x = −g1 · L / (g2 − g1), provided this value lies between 0 and L (as in the Python sketch above). Other formulas can be used, especially for more complex vertical curves.
5. Checking: The calculated height point is checked for the design of the vertical curve and compliance with standards. If necessary, the calculations can be repeated or corrections made.
By following these steps, the elevation point of the vertical curve can be accurately calculated and used in the road or rail track design process.
What is the elevation point of a vertical curve?
The elevation point of a vertical curve is the highest point of a vertical curve on a road or rail line. Vertical curves are used to straighten a road or rail line or to smooth the ups and downs. The
height point is the point at which the curve is highest and then flattens out. It indicates the highest point on a given section of road or rail line and often plays an important role in engineering
design and construction projects.
The elevation point provides information about the design and suitability of the vertical curve and is determined to ensure the comfort of passengers or trains when traveling on the road or rail line.
The Role of the Height Point of a Vertical Curve in Engineering and Construction Projects
The role of the elevation point of the vertical curve in engineering and construction projects is very important. Here are some of the main points that summarize this role:
• Road Safety and Comfort: The height point is important for improving road safety and comfort. Vertical curves soften steep gradients or sudden descents, making drivers and passengers feel more
comfortable while traveling. Accurately determining the height point makes it easier for drivers to brake and control their speed.
• Water Flow and Drainage: The height point affects the slope of the road surface and the drainage of rainwater. Correctly positioned elevation points allow water to run off without damaging the
road and prevent puddles. This extends the life of the road and reduces maintenance costs.
• Soil and Foundation Stability: The elevation point affects the soil and foundation stability of the road or rail line. Accurately determining the slope prevents soil erosion and slides and
ensures that structures stand on a solid foundation.
• Visibility and Ease of Navigation: Properly determining the elevation point allows drivers and train engineers to better see obstacles in front of the road or rail line when looking ahead. This
helps to prevent accidents and improve travel safety.
• Traffic Flow and Efficiency: Properly designed height points improve traffic flow and efficiency. Smooth gradients and descents allow vehicles to travel more comfortably and smoothly and avoid
traffic jams.
As a result, the elevation point of the vertical curve plays a critical role in road or rail line design and contributes to improving factors such as road safety, comfort, drainage, stability,
visibility and traffic flow in engineering and construction projects. Therefore, it is important that this point is accurately determined and implemented.
Uses of Calculating the Height Point of a Vertical Curve
The uses of the vertical curve height point calculation are as follows:
Road Design:
In road design, the elevation points of vertical curves are determined and designed. This improves driving safety and comfort by regulating the slope and ups and downs of the road.
Rail Transportation Projects:
In railway lines, the elevation points of vertical curves are calculated and regulate the incline and decline of the rail track. This allows trains to travel comfortably and safely.
Water Management Projects:
In infrastructure projects, especially in water management projects, the elevation points of the vertical curves are calculated. This is important to ensure proper drainage and flow of water.
Land Structures and Tunnel Design:
Elevation points play an important role in the design of land structures and tunnels. This ensures the correct grading and stability of the structures.
Construction Projects:
Generally in construction projects, vertical curves and elevation points of structures such as roads and rail lines are calculated taking into account the topography of the terrain.
Traffic Engineering:
In traffic engineering projects, road slope and elevation points of vertical curves are important to improve traffic flow and driving safety.
The calculation of the elevation point of the vertical curve is used in various fields of engineering and construction projects and helps to ensure correct design and implementation. | {"url":"https://www.calculator6.com/elevation-point-of-vertical-curve-calculator/","timestamp":"2024-11-06T18:59:11Z","content_type":"text/html","content_length":"277010","record_id":"<urn:uuid:bb037863-4877-4a37-9adf-5d865405e2b2>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00274.warc.gz"} |
Question #708f5 | Socratic
Question #708f5
1 Answer
You know that you are starting with a 0.4% solution, presumably a mass by volume percent concentration, that you dilute by a factor of 10.
Before the dilution, the initial solution contained 0.4 g of solute, which is your drug, for every 100 cm³ of solution.
When you dilute this solution by a factor of 10, you ensure that the same amount of solution, let's say 100 cm³, contains 10 times less solute than the original solution. In other words, the starting solution must be 10 times more concentrated than the diluted solution.
This means that after the dilution, the solution will contain
0.4 g / 10 = 0.04 g
of drug for every 100 cm³ of solution. To make the calculations easier, convert this to milligrams of drug:
0.04 g × (10³ mg / 1 g) = 40 mg
Therefore, you can say that for the diluted solution, you have
40 mg drug → 100 cm³ solution   (*)
Now, the patient received 120 mg of this drug in 3 doses, meaning that you have
120 mg drug / 3 doses = 40 mg drug per dose
You can thus say that the patient received 40 mg of drug per dose and that each dose had a volume of 100 cm³, as shown by relation (*).
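The same arithmetic can be checked in a few lines of Python (a plain restatement of the steps above, no extra assumptions):

```python
initial_percent = 0.4                 # 0.4 g of drug per 100 cm^3 before dilution
diluted_g = initial_percent / 10      # 0.04 g per 100 cm^3 after a 1:10 dilution
diluted_mg = diluted_g * 1000         # 40 mg per 100 cm^3

mg_per_dose = 120 / 3                 # 40 mg per dose
volume_per_dose = mg_per_dose / diluted_mg * 100
print(volume_per_dose)                # 100.0 cm^3 per dose
```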
1370 views around the world | {"url":"https://api-project-1022638073839.appspot.com/questions/587dfe807c014971ccf708f5","timestamp":"2024-11-02T11:26:04Z","content_type":"text/html","content_length":"37071","record_id":"<urn:uuid:2c7608a0-0a71-41e6-8904-ff00362c3910>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00629.warc.gz"} |
What is 13/20 as a decimal? [Solved]
What is 13/20 as a decimal?
A decimal number can be defined as a number whose whole number part and the fractional part are separated by a decimal point.
Answer: 13/20 as a decimal is 0.65
Let's look into the two methods to write 13/20 as a decimal.
Method 1: Writing 13/20 as a decimal using the division method
To convert any fraction to decimal form, we just need to divide its numerator by the denominator.
Here, the fraction is 13/20 which means we need to perform 13 ÷ 20
This gives the answer as 0.65. So, 13/20 as a decimal is 0.65
Method 2: Writing 13/20 as a decimal by converting the denominator to the powers of 10
Step 1: Find a number such that we can multiply by the denominator of the fraction to make it 10 or 100 or 1000 and so on.
Step 2: Multiply both numerator and denominator by that number to convert it into an equivalent fraction.
Step 3: Then write down just the numerator, putting the decimal point in the correct place, that is, one space from the right-hand side for every zero in the denominator.
13/20 = (13 × 5) / (20 × 5) = 65/100 = 0.65
Irrespective of the methods used, the answer to 13/20 as a decimal will always remain the same.
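Both routes can be checked with a one-line computation each (plain Python, nothing assumed beyond the fraction itself):

```python
print(13 / 20)              # division method: 0.65
print((13 * 5) / (20 * 5))  # denominator scaled to 100: 65/100 = 0.65
```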
You can also verify your answer using Cuemath's Fraction to Decimal Calculator.
Thus, 13/20 as a decimal is 0.65
Math worksheets and
visual curriculum | {"url":"https://www.cuemath.com/questions/what-is-13-20-as-a-decimal/","timestamp":"2024-11-11T08:15:02Z","content_type":"text/html","content_length":"198700","record_id":"<urn:uuid:2abd062a-2a6d-4312-8516-157e0ad9edcd>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00669.warc.gz"} |
Hierarchical Coding Vectors for Scene Level Land-Use Classification
Institute of Medical Equipment, Academy of Military Medical Science, Tianjin 300161, China
The State Key Laboratory of Intelligent Technology and System, Computer Science and Technology School, Tsinghua University, Beijing 100084, China
Author to whom correspondence should be addressed.
Submission received: 26 March 2016 / Revised: 3 May 2016 / Accepted: 18 May 2016 / Published: 23 May 2016
Land-use classification from remote sensing images has become an important but challenging task. This paper proposes Hierarchical Coding Vectors (HCV), a novel representation based on hierarchically
coding structures, for scene level land-use classification. We stack multiple Bag of Visual Words (BOVW) coding layers and one Fisher coding layer to develop the hierarchical feature learning
structure. In BOVW coding layers, we extract local descriptors from a geographical image with densely sampled interest points, and encode them using soft assignment (SA). The Fisher coding layer
encodes those semi-local features with Fisher vectors (FV) and aggregates them to develop a final global representation. The graphical semantic information is refined by feeding the output of one
layer into the next computation layer. HCV describes the geographical images through a high-level representation of richer semantic information by using a hierarchical coding structure. The
experimental results on the 21-Class Land Use (LU) and RSSCN7 image databases indicate the effectiveness of the proposed HCV. Combined with the standard FV, our method (FV + HCV) achieves superior
performance compared to the state-of-the-art methods on the two databases, obtaining the average classification accuracy of 91.5% on the LU database and 86.4% on the RSSCN7 database.
1. Introduction
Scene level land-use classification aims to assign a semantic label (e.g., building and river) to a remote sensing image according to its content. As remote sensing techniques continue to develop,
overwhelming amounts of fine spatial resolution satellite images have become available. It is necessary to develop effective and efficient scene classification methods to annotate the massive remote
sensing images.
By far, the Bag of Visual Words (BOVW) [ ] framework and its variants [ ] based on spatial relations have become promising remote sensing image representations for land-use classification. The pipeline for the BOVW framework consists of five main steps: feature extraction, codebook generation, feature coding, pooling, and normalization. For BOVW, we usually extract local features from the geographical images, learn a codebook on the training set by K-means or a Gaussian mixture model (GMM), encode the local features and pool them into a vector, and normalize this vector as the final global representation. The representation is subsequently fed into a pre-trained classifier to obtain the annotation result for remote sensing images.
In a parallel development, deep learning methods have attracted continuous attention in the computer vision community in recent years. Deep neural networks (DNNs) [ ] build and train deep architectures to capture graphical semantic information, achieving a large performance boost compared to previous hand-crafted systems with mid-level features. Although these methods can describe the geographical images from low-level features with a more abstract and semantic representation of deep structures, it is computationally expensive to directly train effective DNNs for scene level land-use classification. One important property of DNNs is their hierarchical organization in layers of increasing processing complexity. We adopt a similar idea and concentrate on a shallow but hierarchical layer framework based on encoding methods [ ].
Inspired by the success of DNNs in computer vision applications and encoding methods for remote sensing applications, we proposed Hierarchical Coding Vectors (HCV), a new representation based on
hierarchically coding structures, for scene level land-use classification. We apply the traditional coding pipeline as corresponding to the layers of a standard DNN and stack multi-BOVW coding layers
and one Fisher coding layer to develop the hierarchical feature learning structure. The complex graphical semantic information is refined by feeding the output of one layer into the next computation
layer. Through hierarchical coding, the HCV contains richer semantic information and is more powerful to describe those remote sensing images. Our experimental results on the 21-Class Land Use (LU)
and RSSCN7 geographical image databases demonstrate the excellent performance of our HCV for land-use classification. Furthermore, HCV provides complementary information to the traditional Fisher
Vectors (FV). When combining traditional FV with our HCV, we obtain superior classification performance compared to the current state-of-the-art results on the LU and RSSCN7 databases.
There are two main contributions of our work:
• We devise the Hierarchical Coding Vectors (HCV) by organizing off-the-shelf coding methods into a hierarchical architecture and evaluate the parameters of HCV for land-use classification on the
LU database.
• The HCV achieves excellent performance for land-use classification. Further, combining HCV with standard FV, our method (FV + HCV) outperforms the state-of-the-art performance reported on the LU
and RSSCN7 databases.
The remainder of this paper is organized as follows. Section 2 discusses the related work on both computer vision and remote sensing applications. Section 3 describes the details of our proposed Hierarchical Coding Vectors (HCV). Section 4 presents the experimental results. Section 5 is the conclusion.
2. Related Work
In both the computer vision and remote sensing communities, the recent efforts in scene classification can be divided into three directions: (1) the development of more elaborate hand-crafted features (e.g., Scale Invariant Feature Transformation (SIFT) [ ], Histogram of Oriented Gradient (HOG) [ ], GIST [ ], Local Binary Pattern (LBP) [ ]); (2) more sophisticated encoding methods (e.g., Hard Assignment (HA) [ ], Soft Assignment (SA) [ ], Local Coordinate Coding (LCC) [ ], Locality-constrained Linear Coding (LLC) [ ], Vector of Locally Aggregated Descriptors (VLAD) [ ], FV [ ]); and (3) more complex classifiers (e.g., Support Vector Machine (SVM) [ ], Extreme Learning Machine (ELM) [ ]). Recently, the second direction (i.e., encoding methods) has attracted more attention and become an effective representation for scene level land-use classification. Typical encoding methods are based on the BOVW framework. The traditional BOVW methods, including HA, SA, LCC, and LLC, are designed from the perspective of an activation concept to obtain 0-order statistics of the distribution in the descriptor space; the core issue is to decide which visual words in the 'visual vocabulary' will be activated and to what extent. Then, the Fisher Kernel introduced by Jaakkola [ ] has been used to extend the BOVW framework. It describes the difference between the distribution of descriptors in an input image and that of the 'visual vocabulary', encoding multi-dimensional information (0th, 1st, and 2nd order) from the descriptor space. The typical Fisher Kernel methods include the Fisher Vector (FV) and the Vector of Locally Aggregated Descriptors (VLAD). The VLAD can be viewed as a simplified nonprobabilistic version of the FV.
Some researchers have also attempted to use multi-layer models to further improve the classification performance in the remote sensing community. Chen [ ] stacks two BOVW layers with the HA coding method to represent the spatial relationship among local features. A two-layer sparse coding method is used in [ ]. The authors apply two different optimization formulas to guarantee the image sparsity and category sparsity simultaneously, improving the discriminability of the output coding result. In the computer vision community, the hierarchical structure helps DNNs [ ] to achieve a large performance boost. However, it is difficult to apply them directly to scene level land-use classification due to their huge computational cost. Xiaojiang Peng et al. [ ] stacked multiple Fisher coding layers to build a hierarchical network for action recognition in video. The Fisher coding method increases the dimension of the layer output, so the dimension of the final representation grows exponentially with the number of layers, and a dimensionality reduction method has to be used between computation layers. Inspired by the success of DNNs in computer vision applications and encoding methods for remote sensing applications, we use off-the-shelf encoding methods to construct the hierarchical structure and stack multi-BOVW coding layers with only one Fisher coding layer to solve the dilemma in [ ]. The overall framework and the methods used in each layer of HCV are different from those in [ ]. Generally speaking, our HCV develops the hierarchical feature learning structure by stacking N + 2 coding layers, which produces a much higher level representation of richer semantic information and achieves superior performance for scene level land-use classification.
3. Hierarchical Coding Vector
The conventional coding methods effectively encode each local feature in an image into a high-dimensional space and aggregate these codes into a single vector by a pooling method over the entire image (followed by normalization). The representation describes the geographical image in terms of the local patch features, which cannot capture more global and complex structures. Deep neural networks [ ] can model complex graphical semantic structures by passing an output of one feature computation layer as the input to the next and by hierarchical refining of the semantic information. Along the line of a similar idea, we devised a hierarchical structure by stacking multi-BOVW coding layers and one Fisher coding layer, which we call the Hierarchical Coding Vector. The architecture of the Hierarchical Coding Vector (HCV) is depicted in Figure 1.
We devised the HCV to describe the whole geographical image with a higher level representation of richer semantic information through a hierarchical coding structure. As shown in Figure 1, the HCV framework contains N + 2 coding layers (N + 1 BOVW coding layers and one Fisher coding layer). The coding result of one coding layer is fed into the next as the input. These coding layers are then stacked into a hierarchical network. We used BOVW coding layers to describe the local patches. Stacking multiple BOVW coding layers does not cause a dimensionality explosion because the coding dimension of BOVW methods is stable. The BOVW coding layers refine the local semantic information layer-by-layer and then feed the information into the Fisher coding layer to produce the global deep representation. Multi-BOVW coding layers provide a better coding 'material' for the Fisher coding layer, giving the global representation (i.e., the HCV) stronger discriminability for scene classification.
Theoretically, a HCV with more coding layers can learn more complicated abstract features, but this may significantly increase the complexity of the model. Considering the effectiveness and efficiency, in this paper, we consider a HCV with two coding layers (i.e., one BOVW coding layer and one Fisher coding layer), because it has already provided compelling quality. The HCV can be generalized to more layers without difficulty. The BOVW coding layer uses a Soft Assignment (SA) [ ] coding method to map the low-level descriptors $X = (x_1, \ldots, x_K) \in \mathbb{R}^{E \times K}$ from the geographical image to the coding space $D = (d_1, \ldots, d_K) \in \mathbb{R}^{M \times K}$ using the K-means codebook $B^{[1]} = (b_1, \ldots, b_M) \in \mathbb{R}^{E \times M}$. After local pooling and normalization, the semi-local features $F = (f_1, \ldots, f_T) \in \mathbb{R}^{M \times T}$ are fed into the Fisher coding layer. With the Gaussian Mixture Model (GMM) codebook $B^{[2]} = (b_1, \ldots, b_N) \in \mathbb{R}^{M \times N}$, the Hierarchical Coding Vector $\mathrm{HCV} \in \mathbb{R}^{M \times 2N}$ is produced by Fisher vector (FV) coding. Finally, the HCV is input into a classifier such as a Support Vector Machine (SVM) for scene-level land use classification. The detailed description of each layer is as follows. The parameters used in this paper are summarized in Table 1.
3.1. The BOVW Coding Layer
The BOVW coding layer maps the input descriptors $X \in \mathbb{R}^{E \times K}$ to the semi-local features $F \in \mathbb{R}^{M \times T}$. The pipeline of the BOVW coding layer is shown in Figure 2. Let $X \in \mathbb{R}^{E \times K}$ be a set of $E$-dimensional local descriptors extracted from a geographical image with densely sampled interest points. Through clustering, a codebook $B^{[1]} \in \mathbb{R}^{E \times M}$ is formed. The codebook is used to express each descriptor and to develop the coding result $D \in \mathbb{R}^{M \times K}$. Then, pooling and normalization methods are used to produce the local patch coding representation (i.e., the semi-local features $F \in \mathbb{R}^{M \times T}$). Finally, the features $F$ are fed into the next Fisher coding layer as the input.
3.1.1. BOVW Coding
The BOVW coding step was based on the idea of using overcomplete basis vectors to map the local descriptors $X \in \mathbb{R}^{E \times K}$ to the coding result $D \in \mathbb{R}^{M \times K}$.
Given a geographical image, we first extracted the $E$-dimensional local descriptors $X$ with densely sampled interest points. The raw input local descriptors were usually strongly correlated, which created significant challenges in the subsequent codebook generation [ ]. A feature pre-processing approach, whitening, was used to realize the decorrelation. The overcomplete basis vectors (i.e., the codebook $B^{[1]} \in \mathbb{R}^{E \times M}$) were computed on the training set using the K-means clustering method [ ]. To retain spatial information, the dense local descriptors (e.g., Scale Invariant Feature Transformation (SIFT) [ ]) were augmented with their normalized x, y location before codebook clustering.
We chose the SA coding method rather than other BOVW coding methods such as HA [ ], LCC [ ], and LLC [ ], which lead to strong sparsity in the semi-local features $F$. The strong sparsity would cause great challenges in the next Fisher coding layer. SA activates the entire codebook and uses a kernel function of the distance as the coding representation:
$d_{km} = \frac{\exp(-\beta \, \hat{e}(x_k, b_m))}{\sum_{m=1}^{M} \exp(-\beta \, \hat{e}(x_k, b_m))}$
$\mathrm{SA}: \quad \hat{e}(x_k, b_m) = \| x_k - b_m \|^2$
where $\beta$ is the smoothing factor that controls the softness of the assignment, and the Euclidean distance $\hat{e}$ is used. The smoothing factor $\beta$, the sole parameter in SA coding, determines the sensitivity of the likelihood to the distance $\hat{e}$ and is critical to the coding and classification performance.
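For concreteness, the SA coding step can be written in a few lines of NumPy. The sketch below is our own illustrative reading of the formula above (with β = 0.01 and random toy data), not the authors' implementation.

```python
# Illustrative NumPy sketch of soft-assignment (SA) coding; toy shapes only.
import numpy as np

def soft_assignment(X, B, beta=0.01):
    """X: (K, E) local descriptors, B: (M, E) K-means codebook -> (K, M) codes."""
    # Squared Euclidean distance between every descriptor x_k and every codeword b_m.
    dist = ((X[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    logits = -beta * dist
    logits -= logits.max(axis=1, keepdims=True)      # for numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)          # each row sums to one

D = soft_assignment(np.random.randn(50, 128), np.random.randn(1000, 128))
```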
3.1.2. Spatial Local Pooling
Spatial local pooling aggregates the coding result $D \in \mathbb{R}^{M \times K}$ into the semi-local features $F \in \mathbb{R}^{M \times T}$, thus achieving greater invariance to image transformations and better robustness to noise and clutter. Compared to the regions used in traditional global pooling, the regions are much smaller and sampled much more densely in our HCV framework. The semi-local feature representation captures more complex image statistics with the spatial local pooling.
In the HCV, we performed the spatial local pooling in adjacent scales and spaces. The 2 × 2 pooling region is illustrated in Figure 2. The optimal spatial structure for local pooling will be evaluated in the following experiment. We used the Max-pooling method in this step, which avoids the semi-local features being strongly influenced by frequent yet often uninformative descriptors [ ]:
$\mathrm{Max}: \quad f_t = \max(\{ d_k \}_{k \in P})$
where $f_t$ is the $t$-th element in the semi-local features $F$ and $d_k$ is the coding result. $P$ refers to the local pooling region. The Max-pooling method has demonstrated its effectiveness in many studies [ ].
3.1.3. Normalization
Normalization is used to make the semi-local features have the same scale. Unlike the traditional BOVW coding pipeline, we injected power normalization before the $L_2$ normalization method as a pre-processing step.
$L_2: \quad f_t = f_t / \| f_t \|_2$
$\mathrm{Power}: \quad f_t = \mathrm{sign}(f_t) \, | f_t |^{\alpha}$
where $0 \le \alpha \le 1$ is a smoothing factor of the normalization (we set $\alpha = 0.5$, the same as [ ]). Power normalization is usually used in the Fisher coding method to further improve the classification performance [ ]. Meanwhile, it is generally not applied in BOVW coding methods because of its minimal effect on their performance. However, in our proposed HCV framework, the output of the BOVW coding layer is not used for classification but as the input for the Fisher coding layer. The Fisher vector captures the Gaussian mean and variance differences between the input features and the codebook, and it is very sensitive to the sparsity of the input features. Power normalization decreases the sparsity of the semi-local features $F$ and makes their distribution smoother, improving the classification performance of HCV (in our experiment on the LU database, we found that power normalization improves the classification accuracy by 3%–5%).
To retain the spatial information, the semi-local features F were also augmented with their normalized x, y location before they were fed into the next layer.
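The pooling and normalization steps of the BOVW layer are simple to sketch as well. The following snippet is a hedged illustration (our reading of the steps above, not the authors' code); the region size and codebook size are arbitrary toy values.

```python
# Sketch of local max-pooling followed by power and L2 normalization.
import numpy as np

def pool_and_normalize(D_region, alpha=0.5):
    """D_region: (P, M) SA codes of the descriptors inside one local pooling region."""
    f = D_region.max(axis=0)                  # max-pooling over the region -> (M,)
    f = np.sign(f) * np.abs(f) ** alpha       # power normalization with alpha = 0.5
    norm = np.linalg.norm(f)
    return f / norm if norm > 0 else f        # L2 normalization

f_t = pool_and_normalize(np.random.rand(4, 1000))   # e.g. codes from 4 scales at one location
```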
3.2. The Fisher Coding Layer
The Fisher coding layer maps the input semi-local features $F \in \mathbb{R}^{M \times T}$ into the final global representation $\mathrm{HCV} \in \mathbb{R}^{M \times 2N}$ using the Fisher vector (FV) coding method. The pipeline of the Fisher coding layer is shown in Figure 3. All the semi-local features were decorrelated using whitening technology before being fed into the Fisher coding layer.
The FV coding method is based on fitting a parametric generative model (e.g., a GMM) to the input semi-local features and then encoding the derivatives of the log-likelihood of the model with respect to its parameters [ ]. GMMs with diagonal covariance are used in our HCV framework, leading to a HCV representation that captures the Gaussian mean (1st order) and variance (2nd order) differences between the input semi-local features and each of the GMM centers.
$g_n^{(1)} = \frac{1}{T \sqrt{w_n}} \sum_{t=1}^{T} \alpha_t(n) \left( \frac{f_t - \mu_n}{\sigma_n} \right)$
$g_n^{(2)} = \frac{1}{T \sqrt{2 w_n}} \sum_{t=1}^{T} \alpha_t(n) \left( \frac{(f_t - \mu_n)^2}{\sigma_n^2} - 1 \right)$
where $\{ w_n, \mu_n, \sigma_n \}_n$ are the respective mixture weights, means, and diagonal covariances of the GMM codebook $B^{[2]} = (b_1, \ldots, b_N) \in \mathbb{R}^{M \times N}$, $f_t$ is one semi-local feature fed into the Fisher coding layer, and $T$ is the number of semi-local features. $\alpha_t(n)$ is the soft assignment weight of the $t$-th semi-local feature $f_t$ to the $n$-th Gaussian:
$\alpha_t(n) = \frac{w_n \, \mathcal{N}(f_t; \mu_n, \sigma_n)}{\sum_{n'=1}^{N} w_{n'} \, \mathcal{N}(f_t; \mu_{n'}, \sigma_{n'})}$
where $\mathcal{N}(f_t; \mu_n, \sigma_n)$ is an $M$-dimensional Gaussian distribution and $N$ is the size of the GMM codebook. Finally, the global representation $\mathrm{HCV} \in \mathbb{R}^{M \times 2N}$ is obtained by stacking the first- and second-order differences:
$\mathrm{HCV}: \quad G = \left[ g_1^{(1)}, g_1^{(2)}, g_2^{(1)}, g_2^{(2)}, \cdots, g_n^{(1)}, g_n^{(2)}, \cdots, g_N^{(1)}, g_N^{(2)} \right]$
The output vector is subsequently normalized using the power + $L_2$ scheme, and serves as the final scene representation of HCV.
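For readers who want to prototype this layer, the standard FV statistics can be computed with scikit-learn's GaussianMixture. The sketch below follows the generic formulas above with made-up shapes; it is a hedged illustration, not the authors' MATLAB implementation.

```python
# Hedged sketch of the Fisher coding layer (diagonal-covariance GMM).
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(F, gmm):
    """F: (T, M) semi-local features; gmm: fitted GaussianMixture with covariance_type='diag'."""
    T = F.shape[0]
    alpha = gmm.predict_proba(F)                       # (T, N) soft assignments alpha_t(n)
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    diff = (F[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]          # (T, N, M)
    g1 = (alpha[:, :, None] * diff).sum(axis=0) / (T * np.sqrt(w)[:, None])     # (N, M)
    g2 = (alpha[:, :, None] * (diff ** 2 - 1)).sum(axis=0) / (T * np.sqrt(2 * w)[:, None])
    G = np.hstack([g1.ravel(), g2.ravel()])            # length 2 * N * M
    G = np.sign(G) * np.sqrt(np.abs(G))                # power normalization (alpha = 0.5)
    return G / np.linalg.norm(G)                       # L2 normalization

gmm = GaussianMixture(n_components=8, covariance_type="diag").fit(np.random.rand(500, 32))
hcv = fisher_vector(np.random.rand(200, 32), gmm)
```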
4. Experiment
We now evaluate the effectiveness of the proposed HCV framework and the traditional FV for remote sensing land-use scene classification using two standard public databases, the 21-Class Land Use (LU) database and the RSSCN7 [ ] database. The classification performances of the proposed method are compared with several state-of-the-art methods.
4.1. Experimental Data and Setup
The 21-Class Land Use (LU) database [ ] is one of the first publicly available geographical image databases with ground truth, collected by the University of California at Merced Computer Vision Lab (UCMCVL). The database consists of 21 land-use classes, and each class contains 100 images of the same size (i.e., 256 × 256 pixels). The pixel resolution of all images is 30 cm per pixel. Sample images of each land-use class are shown in Figure 4. To be consistent with other researchers' experimental settings on the LU database [ ], the database was randomly partitioned into five equal subsets. Each subset contained 20 images from each land-use category. Four subsets were used for training, and the remaining subset was used for testing.
The RSSCN7 database [ ] is a recently released public remote sensing database from 2015. It contains 2800 remote sensing scene images from seven typical scene categories, with 400 images of size 400 × 400 pixels per class. Each scene category is covered at four different scales, with 100 images per scale. Sample images from RSSCN7 are shown in Figure 5. The same experimental setup as in [ ] is used: half of the images in each category were fixed for training and the rest for testing.
In the paper, we adopted Scale Invariant Feature Transformation (SIFT) as the local feature and the SIFT features were extracted from the interest point every six pixels in both the x and y
directions under four scales (16, 24, 32, 48). The one vs. rest linear SVM classifier was employed in our experiments. The experiments were repeated ten times by randomly selecting the training and
testing data with the experimental settings above. The average classification accuracy was set as the evaluation index.
4.2. Experimental Results
We evaluated the classification performance with the default parameters on the two databases. On the LU database, the classification accuracy of our proposed HCV was 90.5%. We also evaluated the traditional FV [ ] with the same size of GMM codebook as in HCV; the classification accuracy of the traditional FV was 88.2%. On the RSSCN7 database, the results were similar (i.e., HCV: 84.7% and FV: 82.6%). On both databases, the HCV achieved better performance than the traditional FV, which has shown great success in computer vision [ ].
Furthermore, the proposed HCV also provided complementary information to the traditional FV. We used the multiple kernel learning [ ] method with the average kernel to combine HCV with FV. When combining FV and HCV, we achieved a mean classification accuracy of 91.8% on the LU database and 86.4% on the RSSCN7 database.
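The "average kernel" combination can be prototyped with a precomputed-kernel SVM. The snippet below is only an assumption of how such a combination could look (averaging two linear kernels on toy data); it is not the authors' code, and the feature dimensions and labels are made up.

```python
# Sketch: combining FV and HCV representations through an averaged linear kernel.
import numpy as np
from sklearn.svm import SVC

def average_linear_kernel(feats_a, feats_b):
    """feats_a, feats_b: lists of per-image feature matrices; returns the averaged Gram matrix."""
    return sum(X @ Y.T for X, Y in zip(feats_a, feats_b)) / len(feats_a)

fv_train, hcv_train = np.random.rand(40, 512), np.random.rand(40, 512)   # toy features
labels = np.random.randint(0, 2, 40)

K_train = average_linear_kernel([fv_train, hcv_train], [fv_train, hcv_train])
clf = SVC(kernel="precomputed").fit(K_train, labels)
```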
To further investigate the performance of HCV, FV, and the combination of the two, we illustrate the per-class accuracies on the LU database in Figure 6. From Figure 6, we observe that the proposed HCV is effective for almost all geographical classes on the LU database. Except for the intersection, overpass, and sparse residential categories, the HCV has better or comparable performance to FV in all other categories. The performance improvement is especially profound over the Tennis Courts category, which is approximately 30%, as shown in Figure 6.
Figure 7 shows some geographical images from three categories of the LU database that were predicted correctly by HCV, but not by the traditional FV. The traditional FV misclassified the two images in Figure 7a as buildings and the two images in Figure 7b as runways. The rivers in Figure 7b do not have any curves and can easily be misclassified as runways, even by a human observer. The two images in Figure 7a are similar to buildings, and the storage tanks are not in a conspicuous position. The four images in Figure 7c were misclassified as other classes (e.g., parking lot, river, and sparse residential) by the traditional FV. Those images contain visually deceptive information, which makes the recognition challenging. The correct classification requires sufficient semantic information. HCV described those geographical images correctly through a higher level representation of richer semantic information by its hierarchical coding structure.
Moreover, the classification performance was improved by the combination for almost all geographical classes, as shown in Figure 6, due to the complementarity between FV and HCV. By using HCV to capture the deep visual semantic information and combining FV with HCV, our method (FV + HCV) achieved very good classification performance.
4.3. Evaluation of the Parameters in HCV
In the proposed Hierarchical Coding Vector (HCV) framework, the dictionary size of each coding layer, the key parameter $\beta$ in the SA coding method, and the different spatial structures in local pooling are the important parameters. We investigated these parameters on the LU database and chose the optimum HCV parameters for scene level land-use classification. The evaluation was carried out for one parameter at a time while the others were fixed to their defaults. The most important parameter (i.e., the codebook size of each coding layer) was investigated first, and then we studied the key parameter $\beta$. In the end, the different spatial structures in local pooling were evaluated. Furthermore, we also evaluated the effect of the number of coding layers.
4.3.1. The Effect of Different Codebook Size
First, we estimated the optimum codebook sizes for each coding layer. The BOVW coding layer used the K-means codebook, and the FV coding layer used the GMM codebook. We set $\beta = 0.01$ and the spatial structure as 2 × 2. The classification results of HCV with varying K-means/GMM codebook sizes on the LU database are listed in Table 2.
The sizes of the K-means and GMM codebooks are critical to the classification performance of HCV. Too small a codebook cannot capture enough graphical statistics, while too large a codebook can cause over-partitioning of the descriptor space. As shown in Table 2, the classification performance increased with larger codebooks and reached a plateau (or even decreased) when the codebook size exceeded a threshold for both the K-means and GMM codebooks. Based on the experimental results, we chose the codebook sizes of K-means/GMM as 1000/8 in terms of the classification accuracy and computational complexity.
4.3.2. The Key Parameter $β$ in the SA Coding Method
To show the effect of $\beta$ on the HCV more clearly, we selected five images from five different land-use classes and visualized their coding results under different values of $\beta$. The visualization result is illustrated in Figure 8. Each vertical column represents the coding result with a different value of $\beta$ for the same image. Each horizontal row represents the coding result with the same value of $\beta$ for the different images. The left-most column is the visualization of the semi-local feature $f_t \in \mathbb{R}^M$ output by the BOVW coding layer, and the remaining part is the visualization of HCV. The visualizations of the semi-local feature (output of the BOVW coding layer) for the five different images are quite similar, so we have only displayed one representative of the feature for each value of $\beta$.
When $\beta$ is too small (e.g., $\beta = 10^{-5}$), SA coding is not sensitive to the distance $\hat{e}$ between the descriptors $x_k$ and the codewords $b_m$. The codebook is activated with almost the same intensity everywhere, so the BOVW coding layer cannot capture enough discriminative image information and the HCV is not able to represent the complex semantic structure. We can observe that the BOVW layer output seems to be meaningless and the HCVs of the five images are very similar in this situation, as shown in Figure 8, which easily causes misclassification. With the increase of $\beta$, the SA coding method can express the distance information $\hat{e}$ appropriately and the BOVW layer output appears to be undulating. The HCVs output by the Fisher coding layer for different images show obvious differences, and increasing classification performance is expected. When $\beta$ becomes too large, the SA coding response decreases rapidly with increasing distance $\hat{e}$. Figure 8 shows that the sparsity of the BOVW layer output increases and the HCVs of the five images become similar again. The increasing sparsity is a challenge for the Fisher vector coding and weakens the discriminability of the HCV.
With the visualization result, we found that the value of the parameter $\beta$ is critical to the classification performance of HCV. We evaluated the effect of different values of $\beta$ on the classification performance of HCV and determined the optimal value. Sizes of 1000 and 8 were our choices for the K-means codebook and the GMM codebook, respectively, and the spatial structure was 2 × 2. The classification accuracy of HCV for the different values of $\beta$ on the LU database is shown in Figure 9.
The experimental results confirm our previous analysis: the parameter $\beta$ is a key factor for HCV, and too small or too large a value of $\beta$ weakens the classification performance by a large margin. Based on the results in Figure 9, we chose $\beta = 0.01$.
4.3.3. The Effect of Different Spatial Structures in Local Pooling
Local pooling aggregates the coding results of the SIFT features under four scales inside the spatial structure. We evaluated the effect of different spatial structures on the classification performance of HCV in this section. Five different spatial structures (1 × 1, 2 × 2, 3 × 3, 4 × 4, and 5 × 5) were evaluated on the LU database. The Max-pooling method was applied, and we set $\beta = 0.01$ and the size of the K-means/GMM codebook to 1000/8. The classification performance of different spatial structures for HCV is illustrated in Figure 10.
As seen from Figure 10, the classification performance of HCV gradually decreases with larger spatial structures, which can be explained by two factors: (1) the increasing spatial structure leads to the repeated expression of some mutation points, creating a new challenge for the FV coding; and (2) the number of input points of the Fisher coding layer proportionately decreases with the larger spatial structure, weakening the discriminability of the HCV.
Based on the experimental results, the spatial structure 1 × 1 was applied in our HCV framework. Inside the 1 × 1 spatial structure, the coding results $d_k \in \mathbb{R}^M$ of the SIFT features $x_k \in \mathbb{R}^D$ under four scales were aggregated into the semi-local feature $f_t \in \mathbb{R}^M$ using the Max-pooling method.
4.3.4. The Effect of the Number of Coding Layers
We also evaluated the effect of the number of coding layers. The classification accuracy over different numbers of coding layers in the HCV framework is shown in Figure 11. One coding layer represents only the Fisher coding layer used in the HCV. Two coding layers contain one BOVW coding layer and one Fisher coding layer. Similarly, three coding layers consist of two BOVW coding layers and one Fisher coding layer.
From Figure 11, we can observe that the performance improved significantly from one layer (88.2%) to two layers (90.5%) due to the hierarchical structure. However, as the layer number continued to increase, there was no further substantial improvement in the classification performance because of the difficulty of parameter tuning. With an increasing number of layers, the number of parameters to tune grows exponentially. The lack of good parameter tuning for the larger models (i.e., three and four layers) prevented the optimal performance of HCV. This is a problem that needs to be solved in the future.
For a good tradeoff between effectiveness and efficiency, we only used two coding layers (i.e., one BOVW coding layer and one Fisher coding layer) to perform scene level land-use classification in
this paper.
4.4. Comparison with the State-of-the-Art Methods
To prove the effectiveness of our proposed method, we compared its performance with the state-of-the-art performance reported in the literature on the two public databases under the same experimental setup. The comparison results for the LU database are reported in Table 3.
Although the MS-CLBP described in [ ] achieves comparable performance to HCV, the Extreme Learning Machine (ELM) and a Radial Basis Function (RBF) nonlinear kernel were used in that approach. The nonlinear classifier incurs additional complexity and poor scalability, which matters for real applications. Our proposed method relies on the one vs. rest linear SVM classifier. The linear classifier makes the framework simpler and more conducive to practical application, and the classification performance of our method could be improved further with a more sophisticated classifier.
As shown in Table 3, our method (FV + HCV) outperformed the current state-of-the-art results on the LU database, which demonstrates the effectiveness of our method (FV + HCV) for remotely sensed land use classification. Furthermore, the statistical z-test was used to test whether the performance improvement is meaningful. The z-test is a hypothesis test based on the z-statistic, which follows the standard normal distribution under the null hypothesis [ ]. It is often used to determine whether the difference between two means is significant. When z ≥ 1.96, the difference is significant (p ≤ 0.05); on the contrary, when z < 1.96, the difference is not significant (p > 0.05). A comparison of our method to the other methods is provided in Table 3, with p ≤ 0.05 for our method (FV + HCV). The minimum value of z is 1.99, obtained when comparing to MS-CLBP, and the corresponding p is still less than 0.05. The performance boost of our method is therefore statistically significant.
The comparison results for the RSSCN7 database are listed in Table 4. It was observed that our method improved the performance significantly, with a noticeable margin, on the RSSCN7 database. We also used the statistical z-test, and the result showed that the performance boost is statistically significant. It should be noted that our method here directly used the parameter tuning results obtained on the LU database, thereby showing that this parameter set has some reasonable applicability to other datasets. The classification performance on the RSSCN7 database should be further improved by full fine parameter tuning.
4.5. Computational Complexity
Many approaches with a nonlinear classifier have to pay a computational complexity of O(n^2) or O(n^3) in the training phase and O(n) in the testing phase, where n is the training size, which implies poor scalability for real applications. Our method, using a simple linear SVM, reduces the training complexity to O(n) and obtains a constant complexity in testing, while still achieving superior performance. Finally, we evaluated the computational complexity of our method (HCV + FV) and used the 21-Class Land Use (LU) database to obtain the processing time. Our code is implemented in MATLAB 2014a and was run on a computer with an Intel(R) Xeon(R) CPU E5-2620 v2 @ 2.1 GHz and 32 GB of RAM under a 64-bit Windows 7 operating system. As observed from our experiment, the training phase takes about 27 min, and the average processing time for a test remote sensing image (of size 256 × 256 pixels) is 0.55 ± 0.02 s (including dense local descriptor extraction and HCV and FV coding to obtain the final representation).
5. Conclusions
In this paper, we proposed using Hierarchical Coding Vectors (HCV), a novel representation based on hierarchically coding structures, for scene level land-use classification. We have shown that the
traditional coding pipelines are amenable to stacking in multiple layers. Building a hierarchical coding structure is sufficient to significantly boost the performance of these shallow encoding
methods. The experimental results on the LU and RSSCN7 databases demonstrate the effectiveness of our HCV representation. By combining HCV with the traditional Fisher vectors, our method (FV + HCV)
outperforms the current state-of-the-art methods on the LU and RSSCN7 databases.
This work is supported by the National Science and Technology Major Project of China (2012ZX10004801) and the National Biological Cross Pre-research Foundation of China (9140A26020314JB94409).
Author Contributions
Hang Wu and Jinggong Sun conceived and designed the experiments; Hang Wu and Baozhen Liu performed the experiments; Wenchang Zhang and Baozhen Liu analyzed the data; Hang Wu and Weihua Su wrote the paper.
Conflicts of Interest
The authors declare no conflicts of interest.
The following abbreviations are used in this manuscript:
HCV Hierarchical Coding Vector
BOVW Bag of Visual Words
HOG Histogram of Oriented Gradient
LBP Local Binary Pattern
SA Soft Assignment
FV Fisher Vectors
VLAD Vector of Locally Aggregated Descriptors
LU 21-Class Land Use
GMM Gaussian Mixture Model
DNN Deep Neural Network
SIFT Scale Invariant Feature Transformation
SPCK Spatial Pyramid Co-occurrence Kernel
CLBP Completed Local Binary Pattern
HA Hard Assignment
LCC Local Coordinate Coding
LLC Locality-constrained Linear Coding
SVC Super Vector Coding
UCMCVL University of California at Merced Computer Vision Lab
SVM Support Vector Machine
ELM Extreme Learning Machine
RBF Radial Basis Function
DBN Deep Belief Networks
1. Yang, Y.; Newsam, S. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL International Conference on Advances in Geographic Information
Systems, San Jose, CA, USA, 3–5 November 2010.
2. Zhao, L.-J.; Tang, P.; Huo, L.-Z. Land-use scene classification using a concentric circle-structured multiscale bag-of-visual-words model. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7,
4620–4631. [Google Scholar] [CrossRef]
3. Chen, S.; Tian, Y. Pyramid of spatial relatons for scene-level land use classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 1947–1957. [Google Scholar] [CrossRef]
4. Yang, Y.; Newsam, S. Spatial pyramid co-occurrence for image classification. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011.
5. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe,
NV, USA, 3–8 December 2012; pp. 1097–1105.
6. Liu, L.; Wang, L.; Liu, X. Defense of soft-assignment coding. In Proceedings of the 2011 IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain, 6–13 November 2011.
7. Sánchez, J.; Perronnin, F.; Mensink, T.; Verbeek, J. Image classification with the fisher vector: Theory and practice. Int. J. Comput. Vis. 2013, 105, 222–245. [Google Scholar] [CrossRef]
8. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
9. Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San
Diego, CA, USA, 20–25 June 2005.
10. Oliva, A.; Torralba, A. Modeling the shape of the scene: A holistic representation of the spatial envelope. Int. J. Comput. Vis. 2001, 42, 145–175. [Google Scholar] [CrossRef]
11. Ojala, T.; Pietikäinen, M.; Harwood, D. A comparative study of texture measures with classification based on featured distributions. Pattern Recognit. 1996, 29, 51–59. [Google Scholar] [CrossRef]
12. Peng, X.; Wang, L.; Wang, X.; Qiao, Y. Bag of Visual Words and Fusion Methods for Action Recognition: Comprehensive Study and Good Practice. Available online: http://arxiv.org/abs/1405.4506
(accessed on 18 May 2016).
13. Yu, K.; Zhang, T.; Gong, Y. Nonlinear learning using local coordinate coding. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 6–10 December 2009;
pp. 2223–2231.
14. Wang, J.; Yang, J.; Yu, K.; Lv, F.; Huang, T.; Gong, Y. Locality-constrained linear coding for image classification. In Proceedings of the 2010 IEEE Conference on Computer Vision and Pattern
Recognition (CVPR), San Francisco, CA, USA, 13–18 June 2010.
15. Jégou, H.; Perronnin, F.; Douze, M.; Sanchez, J.; Perez, P.; Schmid, C. Aggregating local image descriptors into compact codes. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1704–1716. [
Google Scholar] [CrossRef] [PubMed] [Green Version]
16. Lin, C.J.; Hsu, C.-W.; Chang, C.-C. A Practical Guide to Support Vector Classification. Available online: https://www.cs.sfu.ca/people/Faculty/teaching/726/spring11/svmguide.pdf (accessed on 18
May 2016).
17. Huang, G.-B.; Zhu, Q.-Y.; Siew, C.-K. Extreme learning machine: Theory and applications. Neurocomputing 2006, 70, 489–501. [Google Scholar] [CrossRef]
18. Jaakkola, T.S.; Haussler, D. Exploiting generative models in discriminative classifiers. Adv. Neural Inf. Process. Syst. 1999, 487–493. [Google Scholar]
19. Dai, D.; Yang, W. Satellite image classification via two-layer sparse coding with biased image representation. IEEE Geosci. Remote Sens. Lett. 2011, 8, 173–176. [Google Scholar] [CrossRef]
20. Peng, X.; Zou, C.; Qiao, Y.; Peng, Q. Action recognition with stacked fisher vectors. Comput. Vis. 2014, 8693, 581–595. [Google Scholar]
21. Arthur, D.; Vassilvitskii, S. K-means++: The advantages of careful seeding. In Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, USA, 7–9 January 2007.
22. Murray, N.; Perronnin, F. Generalized max pooling. In Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 January 2014.
23. Chatfield, K.; Lempitsky, V.S.; Vedaldi, A.; Zisserman, A. The devil is in the details: An evaluation of recent feature encoding methods. In Proceedings of the BMVC, Dundee, UK, 29 August–2
September 2011; p. 8.
24. Perronnin, F.; Sánchez, J.; Mensink, T. Improving the fisher kernel for large-scale image classification. Comput. Vis. 2010, 6314, 143–156. [Google Scholar]
25. Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep fisher networks for large-scale image classification. In Proceedings of the Advances in Neural Information Processing Systems, Lake Tahoe, NV, USA,
5–10 December 2013; pp. 163–171.
26. Zou, Q.; Ni, L.; Zhang, T.; Wang, Q. Deep learning based feature selection for remote sensing scene classification. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2321–2325. [Google Scholar] [CrossRef]
27. Chen, C.; Zhang, B.; Su, H.; Li, W.; Wang, L. Land-use scene classification using multi-scale completed local binary patterns. Signal Image Video Process. 2016, 4, 745–752. [Google Scholar]
28. Mekhalfi, M.L.; Melgani, F.; Bazi, Y.; Alajlan, N. Land-use classification with compressive sensing multifeature fusion. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2155–2159. [Google Scholar]
29. Zhao, L.; Tang, P.; Huo, L. A 2-D wavelet decomposition-based bag-of-visual-words model for land-use scene classification. Int. J. Remote Sens. 2014, 35, 2296–2310. [Google Scholar]
30. Simonyan, K.; Parkhi, O.M.; Vedaldi, A.; Zisserman, A. Fisher vector faces in the wild. In Proceedings of the BMVC, Bristol, UK, 9–13 September 2013.
31. Gehler, P.; Nowozin, S. On feature combination for multiclass object classification. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 27 September–4
October 2009.
32. Mandel, J. The Statistical Analysis of Experimental Data; Courier Corporation: New York, NY, USA, 2012. [Google Scholar]
33. Risojević, V.; Babić, Z. Aerial image classification using structural texture similarity. In Proceedings of the 2011 IEEE International Symposium on Signal Processing and Information Technology
(ISSPIT), Bilbao, Spain, 14–17 December 2011.
34. Cheriyadat, A.M. Unsupervised feature learning for aerial scene classification. IEEE Trans. Geosci. Remote Sens. 2014, 52, 439–451. [Google Scholar] [CrossRef]
35. Zhang, F.; Du, B.; Zhang, L. Saliency-guided unsupervised feature learning for scene classification. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2175–2184. [Google Scholar] [CrossRef]
36. Shao, W.; Yang, W.; Xia, G.-S.; Liu, G. A hierarchical scheme of multiple feature fusion for high-resolution satellite scene categorization. Comput. Vis. Syst. 2013, 7963, 324–333. [Google Scholar]
Figure 1. The architecture of the proposed Hierarchical Coding Vector (HCV). The representation of HCV is deeper with richer semantic information by constructing a hierarchical coding structure.
SVMs, Support Vector Machines; FV, Fisher Vectors; BOVW, Bag of Visual Words; SIFT, Scale Invariant Feature Transformation.
Figure 4. Sample images from each of the 21 categories in the Land Use (LU) database: (a) agricultural; (b) airplane; (c) baseball diamond; (d) beach; (e) buildings; (f) chaparral; (g) dense
residential; (h) forest; (i) freeway; (j) golf course; (k) harbor; (l) intersection; (m) medium density residential; (n) mobile home park; (o) overpass; (p) parking lot; (q) river; (r) runway; (s)
sparse residential; (t) storage tanks; (u) tennis courts.
Figure 5. Sample images from the RSSCN7 database: (a) grassland; (b) farmland; (c) industrial and commercial regions; (d) river and lake; (e) forest field; (f) residential region; (g) parking lot.
There are four scales, from top to bottom (in rows): 1:700, 1:1300, 1:2600, and 1:5200.
Figure 6. Comparison of the pre-class accuracies of Hierarchical Coding Vector (HCV) with the Fisher Vector (FV) and the combination of the two on the LU database.
Figure 7. Some images are predicted correctly by the HCV, but not by the FV on the LU database: (a) storage tanks images; (b) river images; (c) tennis courts images.
Figure 8. Visual coding result of the Hierarchical Coding Vector (HCV) for different parameters on the LU database. Each vertical column represents the coding result of a different β for the same image. Each horizontal row represents the coding result of the same β for different images.
Figure 9. Evaluation of the effect of the parameter β on the classification accuracy of HCV on the LU database.
Table 1. The definitions of parameters used in this paper.
Parameter Dim. Definition
X E × K Low-level descriptors
B^(1) E × M K-means codebook
D M × K Coding result of BOVW coding layer
F M × T Semi-local features
B^(2) M × N Gaussian mixture model (GMM) codebook
G M × 2N Hierarchical Coding Vector
x_k E The k-th low-level descriptor
d_k M The k-th coding result in D
b_m E The m-th codeword in B^(1)
b_n M The n-th codeword in B^(2)
f_t M The t-th semi-local feature
g_n^(1) M Gaussian mean difference
g_n^(2) M Gaussian variance difference
E 1 Dimension of low-level descriptors
T 1 Number of semi-local features
M 1 Size of K-means codebook
N 1 Size of GMM codebook
K 1 Number of low-level descriptors
P - Local pooling region
e(x_k, b_m) 1 Euclidean distance between x_k and b_m
β 1 Smoothing factor in SA coding
α 1 Smoothing factor in power normalization
α_t(n) 1 Soft assignment weight of f_t to b_n
w_n 1 Mixture weight of b_n
μ_n 1 Mean of b_n
σ_n 1 Diagonal covariance of b_n
Table 2. Classification accuracy (%) of HCV with varying K-means/GMM codebook size on the LU database.
K-means/GMM 2 4 8 16 32
50 71.55 76.98 81.62 84.33 87.62
100 77.05 82.02 85.79 85.98 87.93
200 83.00 84.74 87.31 88.10 88.21
600 86.86 88.69 89.50 89.45 88.81
1000 88.36 89.29 90.00 88.57 88.40
1400 88.26 89.76 89.17 88.49 88.36
Table 3. Comparison of our approach (FV + HCV) with the state-of-the-art performance reported in the literature on the LU database under the same experimental setup: 80% of images from each class are
used for training and the remaining images are used for testing. The average classification accuracy (mean ± SD) is set as the evaluation index.
Method Accuracy (%)
BOVW [1] 76.8
SPM [1] 75.3
BOVW + spatial co-occurrence kernel [1] 77.7
Color Gabor [1] 80.5
Color histogram [1] 81.2
SPCK [4] 73.1
SPCK + BOW [4] 76.1
SPCK + SPM [4] 77.4
Structural texture similarity [33] 86.0
Wavelet BOVW [29] 87.4 ± 1.3
Unsupervised feature learning [34] 81.1 ± 1.2
Saliency-guided feature learning [35] 82.7 ± 1.2
Concentric circle-structured BOVW [2] 86.6 ± 0.8
Multifeature concatenation [36] 89.5 ± 0.8
Pyramid-of-spatial-relations [3] 89.1
CLBP [27] 85.5 ± 1.9
MS-CLBP [27] 90.6 ± 1.4
HCV 90.5 ± 1.1
Our method 91.8 ± 1.3
Table 4. Comparison of our approach (FV + HCV) with the state-of-the-art performance reported in the literature on the RSSCN7 database under the same experimental setup: half of images from each
class are used for training and the rest are used for testing. The average classification accuracy (mean ± SD) is set as the evaluation index. DBN: Deep Belief Networks.
Method Accuracy (%)
GIST * 69.5 ± 0.9
Color histogram * 70.9 ± 0.8
BOVW * 73.1 ± 1.1
LBP * 75.3 ± 1.0
DBN based feature selection [26] 77.0
HCV 84.7 ± 0.7
Our method 86.4 ± 0.7
* Our own implementation.
© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http:/
Share and Cite
MDPI and ACS Style
Wu, H.; Liu, B.; Su, W.; Zhang, W.; Sun, J. Hierarchical Coding Vectors for Scene Level Land-Use Classification. Remote Sens. 2016, 8, 436. https://doi.org/10.3390/rs8050436
The snowy route to a spin-foam breakthrough
A new approach has made a quantum gravity tool called “spin foams” easier to work with. Calculations that once took weeks now take seconds, and simulations that can test the theory of loop quantum
gravity are in reach for the first time.
In February 2019, Bianca Dittrich had a breakthrough. Specifically, she broke through some ice and got stuck in a crevasse.
The Perimeter Faculty member was on the frozen shores of Lake Huron, where wind and waves pile ice up into shore-fast ridges. She’d left the cozy confines of Perimeter and travelled an hour or two
down a rural highway to seek a fresh perspective, at what she calls “a kind of camp for quantum gravity researchers.”
Fortunately, she wasn’t alone, and she didn’t stay stuck in the ice for long.
Equally fortunate, she was about to have a second – less literal – breakthrough. The camp did indeed provide fresh perspectives: a whole new way to approach the quantum gravity tool known as “spin
foams.” The new model, which Dittrich and her collaborators dub “effective spin foams,” is a transformative improvement. Calculations that once took weeks on high-performance computing clusters can
now be performed in seconds on a laptop.
Now, 18 months on, the researchers are on the edge of being able to conduct the first numerical test of the 30-year-old theory known as loop quantum gravity.
A Perimeter researcher skis while on retreat near Lake Huron.
An unsummited peak
Loop quantum gravity is one of the theories that tries to unite quantum theory with our modern theory of gravity. This unification is one of the great unsummited peaks of modern physics. Physicists
have been attempting it for generations. Many promising theories have become stuck in the ice on the way up Mount Unification. Some have died there.
Quantum theory says that the universe is – at its most fundamental layer – granular, pixelated: made up of small packets that cannot be broken down further. These are known by the Latin word for
packet: “quanta.”
On the other hand, the modern theory of gravity, Einstein’s theory of general relativity, is at its heart a description of spacetime. In particular, it describes the geometry of spacetime as it
curves in the presence of mass. Gravity is not a quantum theory: the spacetime it envisions is smooth and stretchy, like a sheet of rubber. It has no granularity, no pixelation.
One way to unite quantum theory with gravity might be to quantize spacetime – to describe its smallest possible piece. That’s the path loop quantum gravity has followed.
Once you have cut spacetime into such minimal chunks, you can start reassembling it. You need to figure out how to glue chunks together, developing an understanding of how they join and interact.
Then, you need to run a numerical simulation of how a few – and eventually a lot – of chunks behave.
“Our dream is to simulate a huge spacetime, a huge number of chunks of spacetime,” says Dittrich. “Then we can start to ask questions: How would they assemble? Would it be possible for them to
assemble into a Schwarzschild black hole? Could they form gravitational waves?”
In short: Does the re-glued spacetime have the same dynamics predicted by general relativity? Does it look like Einstein when you zoom out?
“OK, so that’s the main question,” says Dittrich. “If we could answer it, we would get 10 Nobel Prizes.”
Quantum gravity researcher and Perimeter Faculty member Bianca Dittrich, seen here not trapped in ice
An impassable pass
Many loop quantum gravity researchers are interested in a structure called a spin foam. In a spin foam, individual chunks of spacetime, described in part by a quantum number called “spin,” stick
together like soap bubbles. Spin foams have been around for about 15 years now, and they are fairly well described.
What researchers would like to do next is simulate a spin foam, to describe not just what it looks like, but how it behaves. Running a computer simulation is a first check on whether a spin foam
might actually look like the spacetime we’re familiar with if you zoom out far enough.
Modern spin foams – say, those developed in the last decade – are a powerful tool. But they also have a significant drawback: they are too complex to simulate. Each chunk of spacetime is described by
a wave function. (A wave function is a mathematical formula used to describe a quantum object.) Before even a single chunk of spacetime can be simulated, its wave function must be solved. Researchers
refer to this as “computing the amplitudes.”
“There are many reasons why amplitudes are hard to compute in traditional spin foams,” says Dittrich. But whatever the bottleneck, the bottom line is the same. Calculating the amplitude of a single
chunk of spacetime is hard, but doable. Start using multiple chunks to build a spacetime, and the amplitude problem quickly amplifies, becoming too much for even high-performance computers.
The end result, says Dittrich, is that a simulation simply isn’t doable. “An explicit check of the dynamics of these models, even for small chunks of spacetime, has not been achieved yet with these
very complicated models.”
Even when an explicit check can be made, there are reasons to worry that it will show problems. Specifically, there are hints that today’s spin foam models don’t allow for gravitational waves. Real
spacetime does have gravitational waves, so if spin foams don’t, the whole field is on the wrong track.
One way or the other, it would be good to know.
A new base camp
Which brings us back to the icy shores of Lake Huron.
Dittrich is a believer in taking a step away and examining old questions from new angles, which is why she helped organize the retreat for local researchers in the quantum gravity community. The
other organizers were Maïté Dupuis and Sylvain Carrozza of Perimeter and Florian Girelli of the University of Waterloo.
They gathered at Camp Kintail: think ski trails, rock climbing walls, dinners of mac and cheese. There were researchers from local universities and from Perimeter – PhD students, postdocs, and faculty.
“The idea was to have a good mix of people, and also to mix up a bit,” says Dittrich. “We got into working groups and worked on some stuff. I was in one of those groups, and we were working on
something weird, called higher gauge theory.”
Also in the working group that day was PhD student Seth Asante, a Ghana native who has since completed his degree and is now the Fields-AIMS-Perimeter Postdoctoral Fellow. (Jointly supported by the
Fields Institute for Research in Mathematical Sciences, the African Institute for Mathematical Sciences, and Perimeter, this fellowship supports new PhDs who are African nationals.)
Together, they realized that they could use ideas from higher gauge theory to reformulate spin foams in a way that would make them easier to work with.
Gauge theory is the mathematical language in which those amplitudes are computed. Higher gauge theory is a generalization of gauge theory, which can involve higher dimensions. Moving from gauge
theory to higher gauge theory is a little like moving from a map to a globe.
Changing frameworks gave Dittrich and Asante a bird’s-eye view of the amplitude calculation problem. They thought it might be possible to reformulate the description of the spacetime chunks such that
the physics remained the same, but the mathematical framework in which the ideas were expressed was easier. It’s akin to (though much more complicated than) reformulating Newton’s laws in terms of
momentum rather than mass.
“The idea of the retreat is that you are plucked out of your usual routine and look at something new, so indeed that happened,” says Dittrich. “So then I really had to learn this higher gauge theory.
As it turned out, our final model has not much about higher gauge theory there, but that was the starting point.”
Researcher Seth Asante climbs a rock wall at the quantum gravity camp.
The pass opens
Dittrich and Asante returned to Perimeter, joined forces with Hal Haggard, a Bard College faculty member and a Perimeter Visiting Fellow, and spent months on their bird’s-eye reformulation of spin
foams. They ended up with a new description of spacetime chunks that had two big advantages.
First, the amplitudes didn’t need to be calculated – they emerged naturally from the description. Second, the way in which the dynamics were encoded in the model was much more transparent. Think
again of reformulating Newton’s laws in terms of momentum: it means momentum doesn’t have to be calculated, and it makes momentum transfer much easier to study.
With the easier-to-simulate model in hand, it was time to actually conduct a simulation.
In terms of ease of simulation, their model was a runaway success. “Previous attempts to simulate spin foams require high-performance computers, and take months,” says Dittrich. “We could do
simulations for slightly bigger chunks of spacetime on our laptops.”
She corrects herself, acknowledging the fate of graduate students everywhere: “I mean: Seth could do it on his laptop.”
What did the simulation actually show? The team at first suspected that their spin foam model would not reproduce the known dynamics described by general relativity and observed in the real universe
around us. (To be a little more precise: no model is yet complete enough to produce such dynamics, but researchers are eager to search for those that look as if they might be heading in that
direction, a benchmark they call “semiclassical physics.”)
“We thought we were going to show that maybe spin foams can never achieve this semiclassical limit showing gravitational dynamics,” says Asante. “But we ended up rather showing that, oh, you can
actually get good dynamics. So that is nice.”
Nice is understating it. These simulations were the very first to show that loop quantum gravity can reproduce some of the dynamics predicted by general relativity. The first to show that a theory
decades in the making actually does what it’s supposed to do: reproduce Einsteinian spacetime when you zoom out.
Scaling up toward the summit
Now that they have found their way out of the literal and metaphorical crevasses, what’s next for the intrepid quantum gravity campers?
Already, the team has shown that the effective spin foam model is governed by a discretized form of Einstein’s equations of general relativity. Seeing familiar equations emerge from a new model is a
good sign for those hoping that loop quantum gravity will prove consistent with general relativity.
More recently, Dittrich has taken a slightly different approach and shown that spin foams can create gravitational waves. Since researchers had feared that spin foams could only produce flat
dynamics, this too is a surprise and a pleasure.
But what the team – and, indeed, the whole loop quantum gravity community – would really like to do is zoom farther out, moving beyond a small piece of spin foam to something approaching a real
spacetime. “The dream is to conduct a simulation for a much larger spacetime – many more building blocks,” says Dittrich. “It will be still a huge challenge to do that.”
“Yes, scaling up is still a difficult problem,” Asante agrees. “Even on a high-performance computer, you still need a large amount of memory and time to do all these computations.”
Nevertheless, this new model – which the researchers call an “effective spin foam” model – is a landmark. Calculations that once took weeks on high-performance computing clusters can now be performed
in seconds on a laptop, an improvement of several orders of magnitude.
Thanks to an icy fall, an inspiring camp, and a bird’s-eye insight, quantum gravity models may be coming into computational reach for the first time. | {"url":"https://insidetheperimeter.ca/the-snowy-route-to-a-spin-foam-breakthrough/","timestamp":"2024-11-13T11:03:42Z","content_type":"text/html","content_length":"297890","record_id":"<urn:uuid:c2994d08-9b81-41bb-87ee-f9231d358720>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00040.warc.gz"} |
A plane of information, either data or an image, say wall(nwall1, nwall2), will be divided up into an array of overlapping windows, each window of size (nwind1,nwind2). To choose the number of
windows, you specify (npatch1,npatch2). Overlap on the 2-axis is measured by the fraction (nwind2*npatch2)/nwall2. We turn to the language of F90 which allows us to discuss N-dimensional hypercubes
almost as easily as two-dimensional spaces. We define an N-dimensional volume (like the wall) with the vector nwall= (nwall1, nwall2, ...). We define subvolume size (like a 2-D window) with the
vector nwind=(nwind1, nwind2, ...). The number of subvolumes on each axis is npatch=(npatch1, npatch2, ...). The operator patch simply grabs one patch from the wall, or when used in adjoint form, it
puts the patch back on the wall. The number of patches on the wall is product(npatch). Getting and putting all the patches is shown later in module patching .
The i-th patch is denoted by the scalar counter ipatch. Typical patch extraction begins by taking ipatch, a fortran linear index, and converting it to a multidimensional subscript jj each component
of which is less than npatch. The patches cover all edges and corners of the given data plane (actually the hypervolume) even where nwall/npatch is not an integer, even for axes whose length is not
an integer number of the patch length. Where there are noninteger ratios, the spacing of patches is slightly uneven, but we'll see later that it is easy to reassemble seamlessly the full plane from
the patches, so the unevenness does not matter. You might wish to review the utilities line2cart and cart2line which convert between multidimensional array subscripts and the linear memory subscript
before looking at the patch extraction-putback code in the patch module. The cartesian vector jj points to the beginning of a patch, i.e. where on the wall the (1,1,..) coordinate of the patch lies.
Obviously this begins at the beginning edge of the wall. Then we pick jj so that the last patch on any axis has its last point exactly abutting the end of the axis. The formula for doing this would
divide by zero for a wall with only one patch on it. This case arises legitimately where an axis has length one. Thus we handle the case npatch=1 by abutting the patch to the beginning of the wall
and forgetting about its end. As in any code mixing integers with floats, to guard against having a floating-point number, say 99.9999, rounding down to 99 instead of up to 100, the rule is to always
add .5 to a floating point number the moment before converting it to an integer. Now we are ready to sweep a window to or from the wall. The number of points in a window is size(wind) or equivalently
product(nwind). Figure shows an example with five nonoverlapping patches on the 1-axis and many overlapping patches on the 2-axis.
Figure 2 A plane of identical values after patches have been cut and then added back. Results are shown for nwall=(100,30), nwind=(17,6), npatch=(5,11). For these parameters, there is gapping on
the horizontal axis and overlap on the depth axis. | {"url":"https://sep.stanford.edu/sep/prof/gee/pch/paper_html/node2.html","timestamp":"2024-11-11T07:41:49Z","content_type":"text/html","content_length":"9085","record_id":"<urn:uuid:f5ed46fb-ac39-48a3-be68-dc90efc6ade2>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00737.warc.gz"} |
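To make the index arithmetic concrete, here is a minimal Python sketch of the patch-start computation. It is not the F90 module itself; the function names and the NumPy framing are illustrative, but the rounding rule (add 0.5 before truncating to an integer) and the npatch=1 special case follow the description above.

```python
# Minimal sketch of patch-start positions on one axis, and patch extraction on a 2-D wall.
import numpy as np

def patch_starts(nwall, nwind, npatch):
    """0-based start index of each patch along one axis."""
    if npatch == 1:
        return [0]                                   # single patch abuts the beginning of the axis
    step = (nwall - nwind) / (npatch - 1)            # may be a non-integer spacing
    return [int(j * step + 0.5) for j in range(npatch)]  # last patch abuts the end of the axis

def extract_patches(wall, nwind, npatch):
    """Collect all (possibly overlapping) patches from a 2-D wall."""
    starts1 = patch_starts(wall.shape[0], nwind[0], npatch[0])
    starts2 = patch_starts(wall.shape[1], nwind[1], npatch[1])
    return [wall[i:i + nwind[0], j:j + nwind[1]]
            for i in starts1 for j in starts2]

# Example with the parameters of the figure: gapping on one axis, overlap on the other.
wall = np.ones((100, 30))
patches = extract_patches(wall, nwind=(17, 6), npatch=(5, 11))
print(len(patches))              # 55 patches
print(patch_starts(30, 6, 11))   # slightly uneven spacing where the ratio is non-integer
```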
Binding Energy
Chapter 31 Radioactivity and Nuclear Physics
• Define and discuss binding energy.
• Calculate the binding energy per nucleon of a particle.
The more tightly bound a system is, the stronger the forces that hold it together and the greater the energy required to pull it apart. We can therefore learn about nuclear forces by examining how
tightly bound the nuclei are. We define the binding energy (BE) of a nucleus to be the energy required to completely disassemble it into separate protons and neutrons. We can determine the BE of a
nucleus from its rest mass. The two are connected through Einstein's famous relationship BE = (Δm)c². A bound system has a smaller mass than its separate constituents; the more tightly the nucleons are bound together, the smaller the mass of the nucleus.
Imagine pulling a nuclide apart as illustrated in Figure 1. Work done to overcome the nuclear forces holding the nucleus together puts energy into the system. By definition, the energy input equals
the binding energy BE. The pieces are at rest when separated, and so the energy put into them increases their total rest mass compared with what it was when they were glued together as a nucleus.
That mass increase is thus Δm = BE/c², a quantity known as the mass defect. It implies that the mass of the nucleus is less than the sum of the masses of its constituent protons and neutrons. A nuclide with Z protons and N neutrons therefore has a binding energy BE = {[Zm(¹H) + Nm_n] − m(^A X)}c², where m(¹H) is the mass of a hydrogen atom, m_n is the neutron mass, and m(^A X) is the mass of the neutral atom of the nuclide (the Z electron masses cancel). The atomic masses can be found in Appendix A, most conveniently expressed in unified atomic mass units u (1 u = 931.5 MeV/c²).
Figure 1. Work done to pull a nucleus apart into its constituent protons and neutrons increases the mass of the system. The work to disassemble the nucleus equals its binding energy BE. A bound
system has less mass than the sum of its parts, especially noticeable in the nuclei, where forces and energies are very large.
Things Great and Small
Nuclear Decay Helps Explain Earth’s Hot Interior
A puzzle created by radioactive dating of rocks is resolved by radioactive heating of Earth's interior. This intriguing story is another example of how small-scale physics can explain large-scale phenomena.
Radioactive dating plays a role in determining the approximate age of the Earth. The oldest rocks on Earth solidified about 3.5 billion years ago. Assuming the Earth was molten when it formed, and using the heat transfer methods of Chapter 15 Thermodynamics, it is then possible to calculate how long it would take for the surface to cool to rock-formation temperatures. The result is much shorter than the age of the Earth, so the interior should have cooled substantially by now (Figure 2).
Figure 2. The center of the Earth cools by well-known heat transfer methods. Convection in the liquid regions and conduction move thermal energy to the surface, where it radiates into cold, dark
space. Given the age of the Earth and its initial temperature, it should have cooled to a lower temperature by now. The blowup shows that nuclear decay releases energy in the Earth’s interior. This
energy has slowed the cooling process and is responsible for the interior still being molten.
We know from seismic waves produced by earthquakes that parts of the interior of the Earth are liquid. Shear or transverse waves cannot travel through a liquid and are not transmitted through the
Earth’s core. Yet compression or longitudinal waves can pass through a liquid and do go through the core. From this information, the temperature of the interior can be estimated. As noticed, the
interior should have cooled more from its initial temperature in the 4.5 billion years since its formation (Figure 2).
Nuclides such as ²³⁸U, ²³²Th, and ⁴⁰K have half-lives comparable to the age of the Earth, and the energy released by their decay continues to heat the interior today, slowing its cooling.
A final effect of this trapped radiation merits mention. Alpha decay produces helium nuclei, which form helium atoms when they are stopped and capture electrons. Most of the helium on Earth is
obtained from wells and is produced in this manner. Any helium in the atmosphere will escape in geologically short times because of its high thermal velocity.
What patterns and insights are gained from an examination of the binding energy of various nuclides? First, we find that BE is approximately proportional to the number of nucleons A in any nucleus; roughly twice as much energy is needed to pull apart a nucleus with twice as many nucleons. To look beyond this, we divide BE by A and examine the binding energy per nucleon, BE/A. The graph of BE/A in Figure 3 reveals some very interesting aspects of nuclei. We see that the binding energy per nucleon averages about 8 MeV, but is lower for both the lightest and heaviest nuclei. This overall trend, in which nuclei with A near 60 are the most tightly bound, arises because the nuclear force is attractive and stronger than the Coulomb force, but the nuclear forces are shorter in range compared to the Coulomb force. So, for low-mass nuclei, the nuclear attraction dominates and each added nucleon forms bonds with all others, causing progressively heavier nuclei to have progressively greater values of BE/A (Figure 3).
Figure 3. A graph of average binding energy per nucleon, BE/A, for stable nuclei. The most tightly bound nuclei are those with A near 60, where the attractive nuclear force has its greatest effect.
At higher A s, the Coulomb repulsion progressively reduces the binding energy per nucleon, because the nuclear force is short ranged. The spikes on the curve are very tightly bound nuclides and
indicate shell closures.
Figure 4. The nuclear force is attractive and stronger than the Coulomb force, but it is short ranged. In low-mass nuclei, each nucleon feels the nuclear attraction of all others. In larger nuclei,
the range of the nuclear force, shown for a single nucleon, is smaller than the size of the nucleus, but the Coulomb repulsion from all protons reaches all others. If the nucleus is large enough, the
Coulomb repulsion can add to overcome the nuclear attraction.
There are some noticeable spikes on the BE/A graph; these correspond to exceptionally tightly bound nuclides and indicate nuclear shell closures.
Example 1: What Is BE/A for an Alpha Particle?
Calculate the binding energy per nucleon of ⁴He, the α particle.
To find BE/A, we first find the binding energy BE from the atomic masses listed in Appendix A and then divide by the number of nucleons, A = 4.
The binding energy for a nucleus is given by the equation BE = {[Zm(¹H) + Nm_n] − m(^A X)}c². For ⁴He, Z = 2 and N = 2.
Appendix A gives these masses as m(⁴He) = 4.002602 u, m(¹H) = 1.007825 u, and m_n = 1.008665 u, so that the mass defect is Δm = [2(1.007825) + 2(1.008665) − 4.002602] u = 0.030378 u.
Noting that 1 u = 931.5 MeV/c², we find BE = (0.030378)(931.5 MeV) = 28.3 MeV, and therefore BE/A = 28.3 MeV / 4 = 7.07 MeV per nucleon.
This is a large binding energy per nucleon compared with those for other low-mass nuclei, as can be seen in Figure 3. This is why ⁴He is exceptionally stable and why α particles are emitted as a unit in nuclear decay.
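The same calculation can be scripted directly. The short Python sketch below is illustrative only; the atomic masses are the standard tabulated values quoted above (the deuterium mass is added here as a second check against the 1.112 MeV quoted in the answer to Problem 1):

```python
# Binding energy per nucleon from atomic masses (illustrative script; masses in
# unified atomic mass units, with 1 u equivalent to 931.5 MeV/c^2).
M_H1 = 1.007825    # mass of a hydrogen atom (proton + electron), u
M_N  = 1.008665    # neutron mass, u
U_TO_MEV = 931.5   # energy equivalent of 1 u, MeV

def be_per_nucleon(Z, N, atomic_mass_u):
    """BE/A in MeV, using BE = {[Z m(1H) + N m_n] - m(AX)} c^2."""
    A = Z + N
    delta_m = Z * M_H1 + N * M_N - atomic_mass_u   # mass defect in u
    return delta_m * U_TO_MEV / A

print(round(be_per_nucleon(2, 2, 4.002602), 2))    # 4He -> about 7.07 MeV per nucleon
print(round(be_per_nucleon(1, 1, 2.014102), 3))    # 2H  -> about 1.112 MeV per nucleon
```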
There is more to be learned from nuclear binding energies. The general trend in BE/A is fundamental to energy production in stars and to fusion and fission energy sources on Earth; these applications are discussed in Chapter 32 Medical Applications of Nuclear Physics. The abundance of elements on Earth, in stars, and in the universe
as a whole is related to the binding energy of nuclei and has implications for the continued expansion of the universe.
Problem-Solving Strategies
For Reaction And Binding Energies and Activity Calculations in Nuclear Physics
1. Identify exactly what needs to be determined in the problem (identify the unknowns). This will allow you to decide whether the energy of a decay or nuclear reaction is involved, for example, or
whether the problem is primarily concerned with activity (rate of decay).
2. Make a list of what is given or can be inferred from the problem as stated (identify the knowns).
3. For reaction and binding-energy problems, we use atomic rather than nuclear masses. Since the masses of neutral atoms are used, you must count the number of electrons involved. If these do not
balance (such as in β+ decay), an adjustment must be made to account for the unmatched electron masses.
4. For problems involving activity, the relationship of activity to half-life, and the number of nuclei given in the equation can be very useful. Owing to the fact that number of nuclei is involved,
you will also need to be familiar with moles and Avogadro’s number.
5. Perform the desired calculation; keep careful track of plus and minus signs as well as powers of 10.
6. Check the answer to see if it is reasonable: Does it make sense? Compare your results with worked examples and other information in the text. (Heeding the advice in Step 5 will also help you to
be certain of your result.) You must understand the problem conceptually to be able to determine whether the numerical result is reasonable.
PhET Explorations: Nuclear Fission
Start a chain reaction, or introduce non-radioactive isotopes to prevent one. Control energy production in a nuclear reactor!
Figure 5. Nuclear Fission
Section Summary
• The binding energy (BE) of a nucleus is the energy needed to separate it into individual protons and neutrons. In terms of atomic masses, BE = {[Zm(¹H) + Nm_n] − m(^A X)}c².
Conceptual Questions
1: Why is the number of neutrons greater than the number of protons in stable nuclei having A greater than about 40, and why is this effect more pronounced for the heaviest nuclei?
Problems & Exercises
1: Calculate BE/A, the binding energy per nucleon, for ²H (deuterium), and compare your result with the graph in Figure 3.
2: Figure 3.
3: Calculate BE/A for ²⁰⁹Bi, the heaviest stable nuclide, and compare your result with the graph in Figure 3.
4: (a) Calculate
5: (a) Calculate BE/A for ¹²C. (b) Calculate BE/A for ¹⁴C, and compare it with your result for ¹²C.
6: The fact that
7: The purpose of this problem is to show in three ways that the binding energy of the electron in a hydrogen atom is negligible compared with the masses of the proton and electron. (a) Calculate the
mass equivalent in u of the 13.6-eV binding energy of an electron in a hydrogen atom, and compare this with the mass of the hydrogen atom obtained from Appendix A. (b) Subtract the mass of the proton
given in Chapter 31.3 Table 2 from the mass of the hydrogen atom given in Appendix A. You will find the difference is equal to the electron’s mass to three digits, implying the binding energy is
small in comparison. (c) Take the ratio of the binding energy of the electron (13.6 eV) to the energy equivalent of the electron’s mass (0.511 MeV). (d) Discuss how your answers confirm the stated
purpose of this problem.
8: Unreasonable Results
A particle physicist discovers a neutral particle with a mass of 2.02733 u that he assumes is two neutrons bound together. (a) Find the binding energy. (b) What is unreasonable about this result? (c)
What assumptions are unreasonable or inconsistent?
binding energy
the energy needed to separate a nucleus into individual protons and neutrons
binding energy per nucleon
the binding energy calculated per nucleon; it reveals the details of the nuclear force—the larger the BE/A, the more stable the nucleus
Problems & Exercises
1: 1.112 MeV, consistent with graph
3: 7.848 MeV, consistent with graph
5: (a) 7.680 MeV, consistent with graph
(b) 7.520 MeV, consistent with graph. Not significantly different from the value for ¹²C.
7: (a) 1.46×10⁻⁸ u
(b) 0.000549 u
8: (a) −9.315 MeV
(b) The negative binding energy implies an unbound system.
(c) This assumption that it is two bound neutrons is incorrect. | {"url":"https://pressbooks.online.ucf.edu/phy2054ehk/chapter/binding-energy/","timestamp":"2024-11-03T19:03:03Z","content_type":"text/html","content_length":"223992","record_id":"<urn:uuid:565a7e3c-f5d1-451d-9a15-83d2765bacb7>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00615.warc.gz"} |
C3_W4(p-value) : Why are we rejecting null hypothesis when type-1 error is under significance level? (https://www.coursera.org/learn/machine-learning-probability-and-statistics/lecture/znGVX/p-value)
here if you see null hypothesis is mean = 66.7 and alternative hypothesis is mean > 66.7
if probability of sample mean > 68.442 increases then we should reject null hypothesis right? If it is under significance level then why are we rejecting null hypothesis ? (I’m bit confused about
this topic)
hi @GORRELA_SRI_SATYA_VE
The image is trying to explain the type 1 error probability, where we reject the null hypothesis based on the std mean calculation and a p-value that is equal to or less than 0.05 for the designated std mean, whereas it shouldn't be rejected.
We are not supposed to reject the null hypothesis here, as the std mean at the 5% level is greater than the std mean at 50%.
Image is only explaining type 1 error probability
Hello, @GORRELA_SRI_SATYA_VE,
The larger the sample mean \bar{X}, the more right (extreme) it is to the population mean \mu, the more likely the sample does not belong to the population (H_0) and thus H_0 rejected. In other
words, very “right” (“extreme”) sample is unlikely generated by the population (or generated under H_0)!
But how right is right enough? We use \alpha for that!
\alpha is the area (at the right tail) such that any sample falling in it is considered "unlikely under the population (or unlikely under H_0)". \alpha is a criterion, and since 68.442 falls within it, we reject H_0.
Because the more the p-value is smaller than the significance level (\alpha), the more unlikely it is genereted under H_0, so we reject it!
Remember, small p-value means more extreme .
I suppose “probability of sample mean > 68.442” means the p-value, right? In that case, we reject H_0 if the p-value is less than \alpha, so it is not “increase” because if it increases to a level
that is larger than \alpha, we will not reject H_0.
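A small numeric sketch of this logic (note: the population standard deviation and the sample size below are made-up placeholder values, since the thread only quotes μ = 66.7 and the observed sample mean 68.442):

```python
# One-sided z-test sketch: reject H0 (mu = 66.7) if the p-value for the observed
# sample mean is at or below alpha. sigma and n are assumed values for illustration
# only; they are not given in the thread.
from math import erf, sqrt

mu0, alpha = 66.7, 0.05
x_bar = 68.442
sigma, n = 3.0, 10                          # placeholder assumptions

z = (x_bar - mu0) / (sigma / sqrt(n))       # standardized sample mean
p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))  # P(sample mean > x_bar | H0), via the normal CDF

print(f"z = {z:.3f}, p-value = {p_value:.4f}")
print("reject H0" if p_value <= alpha else "fail to reject H0")
```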
thanks this makes sense. It clarified my confusion
You are welcome, @GORRELA_SRI_SATYA_VE! | {"url":"https://community.deeplearning.ai/t/c3-w4-p-value-why-are-we-rejecting-null-hypothesis-when-type-1-error-is-under-significance-level-https-www-coursera-org-learn-machine-learning-probability-and-statistics-lecture-zngvx-p-value/702758","timestamp":"2024-11-11T21:00:06Z","content_type":"text/html","content_length":"47007","record_id":"<urn:uuid:6c1ef1c0-fbe6-4f5b-be1c-38f96fc63466>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00100.warc.gz"} |
The Journal of Brief Ideas
Total isolation and quarantine are being deployed in cities affected by the COVID-19 infection. These measures, while effective, have devastating socio-economic consequences. An alternative method
which could achieve a major reduction in the infection's basic reproduction rate but with less economic impact would be to impose shutdowns on Tuesdays and Thursdays. If infections were solely
through direct contact, this would simply reduce workweek potentially infectious contacts (PICs) by 40% - however, the virus is believed to survive outside the body for 36 hours or longer, and hence
PICs arise in a secondary fashion, albeit presumably at a reduced rate. Assuming an exponential decline in PICs over time, these second-day PICs will be roughly half as frequent as day-one contacts
and hence, bundling variables into an arbitrary scaling factor x and assigning a value of 2x to the number of direct contacts occurring in one day, then a typical workweek would present the following
distribution of PICs: 2x, 3x, 3x, 3x, 3x. A workweek with closures on Tuesdays and Thursdays would instead present a distribution of 2x, 0, 2x, 0, 2x; just 43% of the typical workweek. This is
potentially sufficient to lower the reproduction rate below 1, thereby containing the epidemic.
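A quick back-of-the-envelope check of that ratio, coded under the same assumption stated above (carried-over contacts occur at roughly half the direct-contact rate, and nothing is generated on a closed day):

```python
# Weekly potentially infectious contacts (PICs), in units of the scaling factor x.
# Model as stated in the idea: an open day contributes 2x direct contacts, plus 1x of
# carried-over (surface) contacts if the previous day was also open; closed days add nothing.
def week_pics(open_days):
    total, prev_open = 0.0, False            # the weekend before Monday counts as closed
    for is_open in open_days:
        if is_open:
            total += 2.0 + (1.0 if prev_open else 0.0)
        prev_open = is_open
    return total

normal  = week_pics([1, 1, 1, 1, 1])   # 2x, 3x, 3x, 3x, 3x -> 14x
tue_thu = week_pics([1, 0, 1, 0, 1])   # 2x, 0, 2x, 0, 2x   -> 6x
print(normal, tue_thu, f"{tue_thu / normal:.0%}")   # 14.0 6.0 43%
```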
How Long Will It Take To Fall: Exploring The Physics Of Free Fall
How Fast Does It Take To Fall?
Understanding the Speed of Free Fall
Have you ever wondered how quickly objects fall when subjected to Earth’s gravity? To comprehend this, it’s essential to delve into the equations governing free fall and falling speed.
Gravity is responsible for accelerating objects downward at a consistent rate of 9.8 meters per second per second. This means that for each second an object is in free fall, its speed increases by
9.8 meters per second. For instance, after just one second of falling, the object will be moving at a speed of 9.8 m/s. After two seconds, it will have accelerated to 19.6 m/s, and this pattern
continues. The relationship between time and speed during free fall (ignoring air resistance) is linear: the speed grows by 9.8 m/s for every second of falling, while the distance fallen grows with the square of the elapsed time. In other words, a falling object keeps moving faster even though its acceleration stays constant, and keeping the distinction between speed and distance in mind is crucial for comprehending the dynamics of free fall.
How Do You Calculate How Long It Will Take Something To Fall?
To calculate how long it will take for an object to fall, you first need to measure the distance it will fall in feet using a ruler or measuring tape. Next, divide this falling distance by 16. For
instance, if the object will fall 128 feet, divide 128 by 16 to obtain 8. Finally, calculate the square root of the result from Step 2. This will give you the time it takes for the object to fall in
seconds. For example, if the result from Step 2 is 8, then the square root of 8 is approximately 2.83 seconds. This method provides a straightforward way to estimate the time of descent for objects
in free fall.
How Long Do You Fall In 2 Seconds?
Have you ever wondered how far an object falls in a specific amount of time? Let’s break it down. When an object falls due to gravity, its distance of fall can be calculated using the formula:
distance = 1/2 × acceleration due to gravity × time squared. In this case, the acceleration due to gravity is approximately 9.8 meters per second squared.
So, if you want to know how far an object falls in 2 seconds, you can plug in the values: distance = 1/2 × 9.8 × (2^2) = 19.6 meters. This means that after 2 seconds of free fall, the object will
have fallen a distance of 19.6 meters. This calculation is based on the concept that the distance fallen increases with the square of the time elapsed.
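A short script tying the questions above together; it assumes no air resistance, the same simplification the formulas above make:

```python
# Free-fall helpers under constant gravity, ignoring air resistance.
from math import sqrt

G_SI = 9.8     # m/s^2
G_FT = 32.0    # ft/s^2 (this is where the "divide by 16" rule comes from: d = 16 t^2)

def fall_time_from_feet(distance_ft):
    """Seconds to fall a given distance in feet: t = sqrt(2 d / 32) = sqrt(d / 16)."""
    return sqrt(2.0 * distance_ft / G_FT)

def fall_distance_m(t_seconds):
    """Metres fallen after t seconds: d = 1/2 * g * t^2."""
    return 0.5 * G_SI * t_seconds ** 2

def speed_after_m_s(t_seconds):
    """Speed in m/s after t seconds: v = g * t (linear in time)."""
    return G_SI * t_seconds

print(fall_time_from_feet(128))   # ~2.83 s, matching the worked example above
print(fall_distance_m(2))         # 19.6 m after 2 seconds
print(speed_after_m_s(2))         # 19.6 m/s after 2 seconds
```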
Rapid spatio-temporal flood modelling via hydraulics-based graph neural networks
Articles | Volume 27, issue 23
© Author(s) 2023. This work is distributed under the Creative Commons Attribution 4.0 License.
Numerical modelling is a reliable tool for flood simulations, but accurate solutions are computationally expensive. In recent years, researchers have explored data-driven methodologies based on
neural networks to overcome this limitation. However, most models are only used for a specific case study and disregard the dynamic evolution of the flood wave. This limits their generalizability to
topographies that the model was not trained on and in time-dependent applications. In this paper, we introduce shallow water equation–graph neural network (SWE–GNN), a hydraulics-inspired surrogate
model based on GNNs that can be used for rapid spatio-temporal flood modelling. The model exploits the analogy between finite-volume methods used to solve SWEs and GNNs. For a computational mesh, we
create a graph by considering finite-volume cells as nodes and adjacent cells as being connected by edges. The inputs are determined by the topographical properties of the domain and the initial
hydraulic conditions. The GNN then determines how fluxes are exchanged between cells via a learned local function. We overcome the time-step constraints by stacking multiple GNN layers, which expand
the considered space instead of increasing the time resolution. We also propose a multi-step-ahead loss function along with a curriculum learning strategy to improve the stability and performance. We
validate this approach using a dataset of two-dimensional dike breach flood simulations in randomly generated digital elevation models generated with a high-fidelity numerical solver. The SWE–GNN
model predicts the spatio-temporal evolution of the flood for unseen topographies with mean average errors in time of 0.04m for water depths and 0.004m^2s^−1 for unit discharges. Moreover, it
generalizes well to unseen breach locations, bigger domains, and longer periods of time compared to those of the training set, outperforming other deep-learning models. On top of this, SWE–GNN has a
computational speed-up of up to 2 orders of magnitude faster than the numerical solver. Our framework opens the doors to a new approach to replace numerical solvers in time-sensitive applications
with spatially dependent uncertainties.
Received: 19 Feb 2023 – Discussion started: 22 Mar 2023 – Revised: 03 Sep 2023 – Accepted: 23 Oct 2023 – Published: 30 Nov 2023
Accurate flood models are essential for risk assessment, early warning, and preparedness for flood events. Numerical models can characterize how floods evolve in space and time, with the
two-dimensional (2D) hydrodynamic models being the most popular (Teng et al., 2017). They solve a discretized form of the depth-averaged Navier–Stokes equations, referred to as shallow water
equations (SWEs) (Vreugdenhil, 1994). Numerical models are computationally expensive, making them inapplicable for real-time emergencies and uncertainty analyses. Several methods aim to speed up the
solution of these equations either by approximating them (Bates and De Roo, 2000) or by using high-performance computing and parallelization techniques (Hu et al., 2022; Petaccia et al., 2016).
However, approximate solutions are valid only for domains with low spatial and temporal gradients (Costabile et al., 2017), while high-performance computing methods are bound by the numerical
constraints and the computational resources.
Data-driven alternatives speed up numerical solvers (Mosavi et al., 2018). In particular, deep learning outperforms other machine learning methods used for flood modelling in both speed and accuracy
(Bentivoglio et al., 2022). Berkhahn et al. (2019) developed a multi-layer perceptron model for predicting urban floods given a rainfall event, achieving promising speed-ups and accuracy. Guo et al.
(2021) and Kabir et al. (2020) developed convolutional neural networks (CNNs) for river flood inundation, while Jacquier et al. (2021) used deep learning to facilitate the reduced-order modelling of
dam break floods and to provide uncertainty estimates. Also, Zhou et al. (2022) employed a CNN-based model to determine the spatio-temporal variation of flood inundation from a set of representative
locations. These works explored the generalization of boundary conditions on a fixed domain. That is, they change the return period of the floods for a single case study, but they need retraining
when applied to a new area, requiring more resources in terms of data, model preparation, and computation times.
To overcome this issue, the community is investigating the generalizability of deep-learning models to different study areas. Löwe et al. (2021) proposed a CNN model to estimate the maximum water
depth of pluvial urban floods. They trained their model on part of their case study and then deployed it on the unseen parts, showing consistent performances. Guo et al. (2022) accurately predicted
the maximum water depth and flow velocities for river floods in different catchments in Switzerland. To incorporate the variations in catchment size and shape, they divided the domain into patches.
do Lago et al. (2023) proposed a conditional generative adversarial network that could predict the maximum water depth for unseen rain events in unseen urban catchments. However, these approaches focus
on a single maximum depth or velocity map, disregarding the dynamical behaviour. That is, no information is provided on the flood conditions over space and time, which is crucial for evacuation and
the response to the flood.
To overcome this limitation, we propose SWE–GNN, a deep-learning model merging graph neural networks (GNNs) with the finite-volume methods used to solve the SWEs. GNNs generalize convolutional neural
networks to irregular domains such as graphs and have shown promising results for fluid dynamics (e.g. Lino et al., 2021; Peng et al., 2022) and partial differential equations (e.g. Brandstetter
et al., 2022; Horie and Mitsume, 2022). Hence, developing GNNs that follow the SWE equations is not only more physically interpretable but also allows better generalization abilities to unseen flood
evolution, unseen breach locations, and unseen topographies. In particular, we exploit the geometrical structure of the finite-volume computational mesh by using its dual graph, obtained by
connecting the centres of neighbouring cells via edges. The nodes represent finite-volume cells, and the edges represent the fluxes across them. Following an explicit numerical discretization of the SWE, we formulate a
novel GNN propagation rule that learns how fluxes are exchanged between cells, based on the gradient of the hydraulic variables. We set the number of GNN layers based on the time step between
consecutive predictions, in agreement with the Courant–Friedrichs–Lewy conditions. The inputs of the model are the hydraulic variables at a given time, elevation, slope, area, length, and orientation
of the mesh's cells. The outputs are the hydraulic variables at the following time step, evaluated in an auto-regressive manner. That is, the model is repeatedly applied using its predictions as
inputs to produce extended simulations.
We tested our model on dike breach flood simulations due to their time-sensitive nature and the presence of uncertainties in topography and breach formation (Jonkman et al., 2008; Vorogushyn et al.,
2009). Moreover, given the sensibility to floods in low-lying areas, fast surrogate models that generalize over all those uncertainties are required for probabilistic analyses. By doing so, our key
contributions are threefold.
• We develop a new graph neural network model where the propagation rule and the inputs are taken from the shallow water equations. In particular, the hydraulic variables propagate based on their
gradient across neighbouring finite-volume cells.
• We improve the model's stability by training it via a multi-step-ahead loss function, which results in stable predictions up to 120h ahead using only the information of the first hour as initial
hydraulic input.
• We show that the proposed model can serve as a surrogate for numerical solvers for spatio-temporal flood modelling in unseen topographies and unseen breach locations, with speed-ups of 2 orders
of magnitude.
The rest of the paper is structured as follows. Section 2 illustrates the theoretical background; Sect. 3 describes the proposed methodology. In Sect. 4, we present the dataset used for the numerical
experiments. Section 5 shows the results obtained with the proposed model and compares it with other deep-learning models. Finally, Sect. 6 discusses the results, analyses the current limitations of
this approach, and proposes future research directions.
In this section, we describe the theory supporting our proposed model. First, we discuss numerical models for flood modelling; then, we present deep-learning models, focusing on graph neural
networks. Throughout the paper, we use the standard vector notation, with $a$ a scalar, $\boldsymbol{a}$ a vector, $\mathbf{A}$ a matrix, and $\mathcal{A}$ a tensor.
2.1 Numerical modelling
2.1.1 Shallow water equations
When assuming negligible vertical accelerations, floods can be modelled via the SWEs (Vreugdenhil, 1994). These are a system of hyperbolic partial differential equations that describe the behaviour
of shallow flows by enforcing mass and momentum conservation. The two-dimensional SWE can be written as
$\frac{\partial \boldsymbol{u}}{\partial t} + \nabla \mathbf{F} = \boldsymbol{s}, \qquad (1)$
$\boldsymbol{u} = \begin{pmatrix} h \\ q_x \\ q_y \end{pmatrix}, \quad \mathbf{F} = \begin{pmatrix} q_x & q_y \\ \frac{q_x^2}{h} + \frac{g h^2}{2} & \frac{q_x q_y}{h} \\ \frac{q_x q_y}{h} & \frac{q_y^2}{h} + \frac{g h^2}{2} \end{pmatrix}, \quad \boldsymbol{s} = \begin{pmatrix} 0 \\ g h \left(s_{0x} - s_{\mathrm{f}x}\right) \\ g h \left(s_{0y} - s_{\mathrm{f}y}\right) \end{pmatrix}, \qquad (2)$
where u represents the conserved variable vector, F the fluxes in the x and y directions, and s the source terms. Here, h (m) represents the water depth, $q_x = uh$ (m^2 s^−1) and $q_y = vh$ (m^2 s^−1) are the averaged components of the discharge vector along the x and y coordinates, respectively, and g (m s^−2) is the acceleration of gravity. The source terms in s depend on the contributions of bed slopes $s_0$ and friction losses $s_\mathrm{f}$ along the two coordinate directions.
The SWE cannot be solved analytically unless some simplifications are enforced. Thus, they are commonly solved via spatio-temporal numerical discretizations, such as the finite-volume method (e.g.
Alcrudo and Garcia-Navarro, 1993). This method discretizes the spatial domain using meshes, i.e. geometrical structures composed of nodes, edges, and faces. We consider each finite-volume cell to be
represented by its centre of mass, where the hydraulic variables h, q[x], and q[y] are defined (see Fig. 1). The governing equations are then integrated over the cells, considering piece-wise
constant variations. That is, the value of the variables at a certain time instant is spatially uniform for every cell. The SWE can be discretized in several ways in both space and time (e.g.
Petaccia et al., 2013; Xia et al., 2017), but we focus on a first-order explicit scheme with a generic spatial discretization. For an arbitrary volume Ω[i] and a discrete time step Δt, the SWE (Eq. 1
) can be re-written as
$\boldsymbol{u}_i^{t+1} = \boldsymbol{u}_i^{t} + \left( \boldsymbol{s}_i - \sum_{j=1}^{N_i} (\mathbf{F} \cdot \boldsymbol{n})_{ij} \frac{l_{ij}}{a_i} \right) \Delta t, \qquad (3)$
with $\boldsymbol{u}_i^t$ the hydraulic variables at time t and cell i, $a_i$ the area of the ith cell, $N_i$ the number of neighbouring cells, $l_{ij}$ the length of the jth side of cell i, $\boldsymbol{s}_i$ the source terms, $\boldsymbol{n}_{ij} = [n_{x,ij}, n_{y,ij}]$ the outward unit normal vector in the x and y directions for side ij, and $(\mathbf{F} \cdot \boldsymbol{n})_{ij}$ the numerical fluxes across neighbouring cells.
In numerical models with explicit discretization, stability is enforced by satisfying the Courant–Friedrichs–Lewy (CFL) condition, which imposes the numerical propagation speed to be lower than the
physical one (Courant et al., 1967). Considering v to be the propagation speed, the Courant number C can be evaluated as
$C = \frac{v \, \Delta t}{\Delta x}, \qquad (4)$
where Δt and Δx represent the time step and the mesh size. This condition forces Δt to be sufficiently small to avoid a too-fast propagation of water in space that would result in a loss of physical
consistency. Small time steps imply an increasing number of model iterations, which slow down numerical models over long time horizons. Deep learning provides an opportunity to accelerate this process.
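To make the discretization in Eqs. (3) and (4) concrete, here is a minimal Python sketch of one explicit update on the dual graph (cells as nodes, shared sides as edges). It is illustrative only: the flux function is a placeholder for the approximate Riemann solver a real code would use, and the helper names are ours, not the paper's.

```python
# One explicit finite-volume step of Eq. (3) on a cell-adjacency (dual) graph.
# `faces` lists pairs of adjacent cells with the geometry of their shared side;
# `numerical_flux` is a placeholder for a proper approximate Riemann solver.
import numpy as np

def fv_step(u, faces, area, source, dt, numerical_flux):
    """u: (n_cells, 3) array of [h, qx, qy]; returns the state at the next time step."""
    du = np.zeros_like(u)
    for i, j, length, normal in faces:               # each shared side appears once
        flux = numerical_flux(u[i], u[j], normal)    # (3,) flux through the side, outward from cell i
        du[i] -= flux * length / area[i]             # leaves cell i ...
        du[j] += flux * length / area[j]             # ... and, by conservation, enters cell j
    return u + (source + du) * dt

def cfl_dt(wave_speed, dx, courant=0.9):
    """Largest stable time step for an explicit scheme, from the Courant number of Eq. (4)."""
    return courant * dx / wave_speed
```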
2.2 Deep learning
Deep learning obtains non-linear high-dimensional representations from data via multiple levels of abstraction (LeCun et al., 2015). The key building blocks of deep-learning models are neural
networks, which comprise linear and non-linear parametric functions. They take an input $\boldsymbol{x}$ and produce an estimate $\hat{\boldsymbol{y}}$ of a target representation $\boldsymbol{y}$ as $\hat{\boldsymbol{y}} = f(\boldsymbol{x}; \boldsymbol{\theta})$, where $\boldsymbol{\theta}$ are the parameters (Zhang et al., 2021). The parameters are estimated to match predicted output with the real output by
minimizing a loss function. Then, the validity of the model is assessed by measuring its performance on a set of unseen pairs of data, called the test set.
The most general type of neural network is a multi-layer perceptron (MLP). It is formed by stacking linear models followed by a point-wise non-linearity (e.g. the rectified linear unit, ReLU, $\sigma(x) = \max\{0, x\}$). For MLPs, the number of parameters and the computational cost increase exponentially with the dimensions of the
input. This makes them unappealing to large-scale high-dimensional data typical of problems with relevant spatio-temporal features such as floods. MLPs are non-inductive: when trained for flood
prediction on a certain topography, they cannot be deployed on a different one, thus requiring a complete retraining. To overcome this curse of dimensionality and to increase generalizability, models
can include inductive biases that constrain their degrees of freedom by reusing parameters and exploiting symmetries in the data (Battaglia, 2018; Gama et al., 2020; Villar et al., 2023). For
example, convolutional neural networks exploit translational symmetries via filters that share parameters in space (e.g. LeCun et al., 2015; Bronstein et al., 2021). However, CNNs cannot process data
defined on irregular meshes, which are common for discretizing topographies with sparse details. Thus, we need a different inductive bias for data on meshes.
GNNs use graphs as an inductive bias to tackle the curse of dimensionality. This bias can be relevant for data represented via networks and meshes, as it allows these models to generalize to unseen
graphs. That is, the same model can be applied to different topographies discretized by different meshes. GNNs work by propagating features defined on the nodes, based on how they are connected. The
propagation rule is then essential in correctly modelling a physical system. However, standard GNNs do not include physics-based rules, meaning that the propagation rules may lead to unrealistic results.
3 Shallow-water-equation-inspired graph neural network (SWE–GNN)
We develop a graph neural network in which the computations are based on the shallow water equations. The proposed model takes as input both static and dynamic features that represent the topography
of the domain and the hydraulic variables at time t, respectively. The outputs are the predicted hydraulic variables at time t+1. In the following, we detail the proposed model (Sect. 3.1) and its
inputs and outputs (Sect. 3.2). Finally, we discuss the training strategy (Sect. 3.3).
SWE–GNN is an encoder–processor–decoder architecture inspired by You et al. (2020) with residual connections that predicts auto-regressively the hydraulic variables at time t+1 as
(5) $\hat{\mathbf{U}}^{t+1} = \mathbf{U}^{t} + \Phi\left(\mathbf{X}_{\mathrm{s}},\, \mathbf{U}^{t-p:t},\, \mathbf{E}\right),$
where the output $\hat{\mathbf{U}}^{t+1}$ corresponds to the predicted hydraulic variables at time $t+1$; $\mathbf{U}^{t}$ are the hydraulic variables at time $t$; $\Phi(\cdot)$ is the GNN-based encoder–processor–decoder model that determines the evolution of the hydraulic variables for a fixed time step; $\mathbf{X}_{\mathrm{s}}$ are the static node features; $\mathbf{U}^{t-p:t}$ are the dynamic node features, i.e. the hydraulic variables for time steps $t-p$ to $t$; and $\mathbf{E}$ are the edge features that describe the geometry of the mesh. The architecture detailed in the sequel is illustrated in Fig. 2.
We employ three separate encoders for processing the static node features $\mathbf{X}_{\mathrm{s}} \in \mathbb{R}^{N \times I_{\mathrm{Ns}}}$, dynamic node features $\mathbf{X}_{\mathrm{d}} \equiv \mathbf{U}^{t-p:t} \in \mathbb{R}^{N \times O(p+1)}$, and edge features $\boldsymbol{\epsilon} \in \mathbb{R}^{E \times I_{\epsilon}}$, where $I_{\mathrm{Ns}}$ is the number of static node features, $O$ the number of hydraulic variables (e.g. $O=3$ if we consider water depth and the x and y components of the unit discharges), $p$ the number of input previous time steps, and $I_{\epsilon}$ the number of input edge features. The encoded variables are
(6) $\mathbf{H}_{\mathrm{s}} = \phi_{\mathrm{s}}(\mathbf{X}_{\mathrm{s}}), \quad \mathbf{H}_{\mathrm{d}} = \phi_{\mathrm{d}}(\mathbf{X}_{\mathrm{d}}), \quad \mathbf{E}' = \phi_{\epsilon}(\mathbf{E}),$
where $\phi_{\mathrm{s}}(\cdot)$ and $\phi_{\mathrm{d}}(\cdot)$ are MLPs shared across all nodes that take an input $\mathbf{X} \in \mathbb{R}^{N \times I}$ and return a node matrix $\mathbf{H} \in \mathbb{R}^{N \times G}$, and $\phi_{\epsilon}(\cdot)$ is an MLP shared across all edges that encodes the edge features in $\mathbf{E}' \in \mathbb{R}^{E \times G}$. All MLPs have two layers, with a hidden dimension $G$ followed by a parametric ReLU (PReLU) activation. The encoders expand the dimensionality of the inputs to allow for higher expressivity, with hyperparameter $G$ being the dimension of the node embeddings. The $i$th rows of the node matrices $\mathbf{H}_{\mathrm{s}}$ and $\mathbf{H}_{\mathrm{d}}$ represent the encoded feature vectors associated with node $i$, i.e. $\mathbf{h}_{\mathrm{s}i}$ and $\mathbf{h}_{\mathrm{d}i}$, and the $k$th row of the edge matrix $\mathbf{E}'$ represents the encoded feature vector associated with edge $k$.
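A possible PyTorch sketch of the encoders in Eq. (6) is given below: each is a two-layer MLP with hidden dimension G and PReLU, applied row-wise to nodes or edges, which realizes the parameter sharing described above. The feature counts and the bias-free dynamic encoder follow Sect. 3.2 and the remark at the end of Sect. 3.1; everything else is an illustrative choice, not the authors' code.

```python
import torch
import torch.nn as nn

def make_encoder(in_dim: int, G: int, bias: bool = True) -> nn.Module:
    """Two-layer MLP with hidden size G and PReLU, applied row-wise (Eq. 6)."""
    return nn.Sequential(
        nn.Linear(in_dim, G, bias=bias),
        nn.PReLU(),
        nn.Linear(G, G, bias=bias),
        nn.PReLU(),
    )

G = 64                               # node-embedding size (hyperparameter G)
I_Ns, O, p, I_eps = 6, 2, 1, 3       # feature counts following Eqs. (11)-(13); the experiments use a subset of the static ones

enc_static  = make_encoder(I_Ns, G)                      # phi_s, shared across nodes
enc_dynamic = make_encoder(O * (p + 1), G, bias=False)   # phi_d, no bias so dry (all-zero) nodes stay zero
enc_edge    = make_encoder(I_eps, G)                     # phi_eps, shared across edges

# H_s = enc_static(X_s);  H_d = enc_dynamic(X_d);  E_prime = enc_edge(E)
```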
We employed as a processor an L-layer GNN that takes a high-dimensional representation of the static and dynamic properties of the system at time t given by the encoders and that produces a
spatio-temporally propagated high-dimensional representation of the system's evolution from times t to t+1. The propagation rule is based on the shallow water equation. In the SWE, the mass and
momentum fluxes, representative of the dynamic features, evolve in space as a function of the source terms representative of the static and dynamic features. Moreover, water can only propagate from
sources of water, and the velocity of propagation is influenced by the gradients of the hydraulic variables. Thus, the GNN layer ℓ=1,…,L−1 update reads as
(7) $\mathbf{s}_{ij}^{(\ell+1)} = \psi\!\left(\mathbf{h}_{\mathrm{s}i}, \mathbf{h}_{\mathrm{s}j}, \mathbf{h}_{\mathrm{d}i}^{(\ell)}, \mathbf{h}_{\mathrm{d}j}^{(\ell)}, \boldsymbol{\epsilon}'_{ij}\right) \odot \left(\mathbf{h}_{\mathrm{d}j}^{(\ell)} - \mathbf{h}_{\mathrm{d}i}^{(\ell)}\right),$
(8) $\mathbf{h}_{\mathrm{d}i}^{(\ell+1)} = \mathbf{h}_{\mathrm{d}i}^{(\ell)} + \sum_{j \in \mathcal{N}_i} \mathbf{s}_{ij}^{(\ell+1)}\,\mathbf{W}^{(\ell+1)},$
where $\psi(\cdot): \mathbb{R}^{5G} \to \mathbb{R}^{G}$ is an MLP with two layers, with a hidden dimension $2G$ followed by a PReLU activation function; $\odot$ is the Hadamard (element-wise) product; and $\mathbf{W}^{(\ell)} \in \mathbb{R}^{G \times G}$ are parameter matrices. The term $\mathbf{h}_{\mathrm{d}j}^{(\ell)} - \mathbf{h}_{\mathrm{d}i}^{(\ell)}$ represents the gradient of the hydraulic variables and enforces the water-related variables $\mathbf{h}_{\mathrm{d}}$ to propagate only if at least one of the interfacing node features is non-zero, i.e. has water. The function $\psi(\cdot)$, instead, incorporates both static and dynamic inputs and provides an estimate of the source terms acting on the nodes. Thus, vector $\mathbf{s}_{ij}$ represents the fluxes exchanged across neighbouring cells, and their linear combination is used as in Eq. (3) to determine the hydraulic variables' variation for a given cell. In this way, Eq. (7) resembles how fluxes are evaluated at the cell's interface in the numerical model, i.e. $\delta\mathbf{F}(\mathbf{u})_{ij} = \tilde{\mathbf{J}}_{ij}\,(\mathbf{u}_j - \mathbf{u}_i)$, which enforces conservation across interface discontinuities (Martínez-Aranda et al., 2022). Based on this formulation, $\mathbf{s}_{ij}$ can also be interpreted as an approximate Riemann solver (Toro, 2013), where the Riemann problem at the boundary between computational cells is approximated by the function $\psi(\cdot)$ in place of equations (e.g. Roe, 1981). To reduce model instabilities, the output of $\psi(\cdot)$ is normalized along its embedding dimension. That is, it is divided by its norm $\|\psi(\cdot)\|$. This procedure is
similar to other graph normalization techniques that improve training stability (Chen et al., 2022). The contribution of each layer is linearly multiplied by W^(ℓ) (Eq. 7). From a numerical
perspective, this is analogous to an L-order multi-time-step scheme with L being the number of layers, where the weights are learned instead of being assigned (e.g. Dormand and Prince, 1980).
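A sketch of one processor layer (Eqs. 7–8) using PyTorch Geometric's message-passing interface is shown below; class and variable names are ours, and the small constant added before normalizing ψ's output is an assumption made for numerical safety, not something prescribed by the paper.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import MessagePassing

class SWEGNNLayer(MessagePassing):
    """One SWE-GNN processor layer (Eqs. 7-8); a sketch under the stated assumptions."""

    def __init__(self, G: int):
        super().__init__(aggr="add")  # sum over neighbours as in Eq. (8)
        # psi: MLP from R^{5G} to R^G with hidden size 2G and a PReLU in between
        self.psi = nn.Sequential(
            nn.Linear(5 * G, 2 * G), nn.PReLU(), nn.Linear(2 * G, G)
        )
        self.W = nn.Linear(G, G, bias=False)  # parameter matrix W^(l)

    def forward(self, h_s, h_d, edge_index, e_prime):
        # Aggregate the edge "fluxes" s_ij and apply the residual update of Eq. (8)
        agg = self.propagate(edge_index, h_s=h_s, h_d=h_d, e_prime=e_prime)
        return h_d + self.W(agg)

    def message(self, h_s_i, h_s_j, h_d_i, h_d_j, e_prime):
        # Source-term estimate psi(...) modulated by the gradient term (Eq. 7)
        psi = self.psi(torch.cat([h_s_i, h_s_j, h_d_i, h_d_j, e_prime], dim=-1))
        psi = psi / (psi.norm(dim=-1, keepdim=True) + 1e-8)  # normalization for stability
        return psi * (h_d_j - h_d_i)
```

Stacking L such layers, with a Tanh applied after the last one, would reproduce the full processor of Eq. (9).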
The GNN's output represents an embedding of the predicted hydraulic variables at time t+1 for a fixed time step Δt. Instead of enforcing stability by limiting Δt, as is done in numerical models, we
can obtain the same result by considering a larger portion of space, which results in increasing Δx (see Eq. 4). This effect can be achieved by stacking multiple GNN layers, as each layer will
increase the propagation space, also called the neighbourhood size. The number of GNN layers is then correlated with the space covered by the flood for a given temporal resolution. We can then write
the full processor for the L GNN layers as
(9)
$\mathbf{h}_{\mathrm{d}i}^{(0)} = \mathbf{h}_{\mathrm{d}i}\,\mathbf{W}^{(0)},$
$\mathbf{s}_{ij}^{(\ell+1)} = \psi\!\left(\mathbf{h}_{\mathrm{s}i}, \mathbf{h}_{\mathrm{s}j}, \mathbf{h}_{\mathrm{d}i}^{(\ell)}, \mathbf{h}_{\mathrm{d}j}^{(\ell)}, \boldsymbol{\epsilon}'_{ij}\right) \odot \left(\mathbf{h}_{\mathrm{d}j}^{(\ell)} - \mathbf{h}_{\mathrm{d}i}^{(\ell)}\right),$
$\mathbf{h}_{\mathrm{d}i}^{(\ell+1)} = \mathbf{h}_{\mathrm{d}i}^{(\ell)} + \sum_{j \in \mathcal{N}_i} \mathbf{s}_{ij}^{(\ell+1)}\,\mathbf{W}^{(\ell+1)},$
$\mathbf{h}_{\mathrm{d}i}^{(L)} = \sigma\!\left(\mathbf{h}_{\mathrm{d}i}^{(L-1)} + \sum_{j \in \mathcal{N}_i} \mathbf{s}_{ij}^{(L)}\,\mathbf{W}^{(L)}\right),$
where we employ a Tanh activation function $\sigma(\cdot)$ at the output of the $L$th layer to limit numerical instabilities resulting in exploding values. The embedding of the static node features $\mathbf{h}_{\mathrm{s}i}$ and of the edge features $\boldsymbol{\epsilon}'_{ij}$ does not change across layers, as the topography and discretization of the domain do not change in time.
Symmetrically to the encoder, the decoder is composed of an MLP $\phi(\cdot)$, shared across all the nodes, that takes as input the output of the processor $\mathbf{H}_{\mathrm{d}}^{(L)} \in \mathbb{R}^{N \times G}$ and updates the hydraulic variables at the next time step, i.e. $\hat{\mathbf{U}}^{t+1} \in \mathbb{R}^{N \times O}$, via residual connections, as
(10) $\hat{\mathbf{U}}^{t+1} = \mathbf{U}^{t} + \phi\!\left(\mathbf{H}_{\mathrm{d}}^{(L)}\right).$
The MLP φ(⋅) has two layers, with a hidden dimension G, followed by a PReLU activation. Neither of the MLPs in the dynamic encoder and the decoder has the bias terms as this would result in adding
non-zero values corresponding to dry areas that would cause water to originate from any node.
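Correspondingly, a minimal sketch of the decoder of Eq. (10): a bias-free two-layer MLP whose output is added residually to U^t, so that all-zero (dry) node embeddings produce zero increments; dimensions are the illustrative ones used above.

```python
import torch.nn as nn

O, G = 2, 64
decoder = nn.Sequential(
    nn.Linear(G, G, bias=False),   # hidden layer of size G
    nn.PReLU(),                    # PReLU(0) = 0, so dry nodes remain at zero
    nn.Linear(G, O, bias=False),   # map back to the O hydraulic variables
)

# U_hat_next = U_t + decoder(H_d_L)   # residual update of Eq. (10)
```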
3.2 Inputs and outputs
We define input features on the nodes and edges based on the SWE terms (see Eq. 2). We divide node features into a static component that represents fixed spatial attributes and a dynamic component
that represents the hydraulic variables.
Static node features are defined as
(11) $\mathbf{x}_{\mathrm{s}i} = \left(a_i,\, e_i,\, \mathbf{s}_{0i},\, m_i,\, w_i^{t}\right),$
where $a_i$ is the area of the $i$th finite-volume cell, $e_i$ its elevation, $\mathbf{s}_{0i}$ its slopes in the x and y directions, and $m_i$ its Manning coefficient. We also included the water level at time $t$, $w_i^{t}$, given by the sum of the elevation and the water depth at time $t$, as a node input, since this determines the water gradient (Liang and Marche, 2009). The reason why we include $w_i^{t}$ in the static attributes instead of the dynamic ones is that this feature can also be non-zero without water, due to the elevation term, and would thus result in the same issue mentioned for the dynamic encoder and decoder.
Dynamic node features are defined as
(12) $\mathbf{x}_{\mathrm{d}i} = \mathbf{u}_i^{t-p:t} = \left(\mathbf{u}_i^{t-p},\, \dots,\, \mathbf{u}_i^{t-1},\, \mathbf{u}_i^{t}\right), \qquad \mathbf{u}_i^{t} = \left(h_i^{t},\, |q|_i^{t}\right),$
where ${\mathbit{u}}_{i}^{t}$ are the hydraulic variables at time step t and ${\mathbit{u}}_{i}^{t-p:t}$ are the hydraulic variables up to p previous time steps to leverage the information of past
data and to provide a temporal bias to the inputs. In contrast to the definition of the hydraulic variables as in Eq. (2), we selected the modulus of the unit discharge $|q|$ as a metric of flood
intensity in place of its x and y components to avoid mixing scalar and vector components and because, for practical implications, such as damage estimation, the flow direction is less relevant than
its absolute value (e.g. Kreibich et al., 2009).
Edge features are defined as
(13) $\boldsymbol{\epsilon}_{ij} = \left(\mathbf{n}_{ij},\, l_{ij}\right),$
where n[ij] is the outward unit normal vector and l[ij] is the cell sides' length. Thus, the edge features represent the geometrical properties of the mesh. We excluded the fluxes F[ij] as additional
features as they depend on the hydraulic variables u[i] and u[j], which are already included in the dynamic node features.
Outputs. The model outputs are the estimated water depth and unit discharge at time $t+1$, i.e. $\hat{\mathbf{u}}_i^{t+1} = \left(\hat{h}_i^{t+1},\, \widehat{|q|}_i^{t+1}\right)$, resulting in an output dimension $O=2$. The outputs are used to update the input dynamic node features $\mathbf{x}_{\mathrm{d}i}$ for the following time step, as exemplified in Fig. 3. The same applies to the water level in the static attributes, i.e. $w_i^{t+1} = e_i + \hat{h}_i^{t+1}$.
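The sketch below illustrates one way the feature matrices of Eqs. (11)–(13) could be assembled from quantities exported by the mesh and the numerical solver; the argument names are ours, and p = 1 (one previous time step) is assumed as in the experiments.

```python
import numpy as np

def build_features(area, elev, slope_x, slope_y, manning, h_t, q_abs_t,
                   h_prev, q_abs_prev, normal_x, normal_y, side_len):
    """Assemble node and edge feature matrices (Eqs. 11-13).

    Node-wise arrays have shape (N,), edge-wise arrays have shape (E,).
    """
    w_t = elev + h_t                                                        # water level at time t
    x_s = np.stack([area, elev, slope_x, slope_y, manning, w_t], axis=1)    # static features (Eq. 11)
    x_d = np.stack([h_prev, q_abs_prev, h_t, q_abs_t], axis=1)              # dynamic features, p = 1 (Eq. 12)
    eps = np.stack([normal_x, normal_y, side_len], axis=1)                  # edge features (Eq. 13)
    return x_s, x_d, eps
```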
3.3 Training strategy
The model learns from input–output data pairs. To stabilize the output of the SWE–GNN over time, we employ a multi-step-ahead loss function ℒ that measures the accumulated error for multiple
consecutive time steps, i.e.
(14) $\mathcal{L} = \frac{1}{HO} \sum_{\tau=1}^{H} \sum_{o=1}^{O} \gamma_o \left\| \hat{\mathbf{u}}_o^{t+\tau} - \mathbf{u}_o^{t+\tau} \right\|_2,$
where $\mathbf{u}_o^{t+\tau} \in \mathbb{R}^{N}$ are the hydraulic variables over the whole graph at time $t+\tau$; $H$ is the prediction horizon, i.e. the number of consecutive time instants; and $\gamma_o$ are coefficients used to weight the influence of each variable on the loss. For each time step $\tau$, we evaluate the model's prediction $\hat{\mathbf{u}}^{t+\tau}$ and then use the prediction recursively as part of the new dynamic node input (see Fig. 3). We repeat this process for a number of time steps $H$ and calculate the root mean squared
error (RMSE) loss as the average over all the steps. In this way, the model learns to correct its own predictions while also learning to predict a correct output, given a slightly wrong prediction,
hence improving its robustness. After p+1 prediction steps, the inputs of the model are given exclusively by its predictions. During training, we limit the prediction horizon H instead of using the
full temporal sequence due to memory constraints, since the back-propagation gradients must be stored for each time step.
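A hedged sketch of the multi-step-ahead loss of Eq. (14), with the recursive feedback of predictions shown explicitly: `model` stands for the Φ(⋅) of Eq. (5), and the tensor layout of the dynamic inputs is an assumption of ours, not the authors' implementation.

```python
import torch

def multistep_loss(model, X_s, U_hist, E_feat, U_true, gamma, H):
    """Multi-step-ahead loss of Eq. (14).

    U_hist : (N, O*(p+1)) dynamic node inputs at the start of the window
    U_true : (H, N, O) ground-truth hydraulic variables for the next H steps
    gamma  : (O,) per-variable weights
    """
    O = U_true.shape[-1]
    U_t = U_hist[:, -O:]                # most recent hydraulic variables
    loss = 0.0
    for tau in range(H):
        U_pred = U_t + model(X_s, U_hist, E_feat)       # residual prediction, Eq. (5)
        for o in range(O):
            loss = loss + gamma[o] * torch.norm(U_pred[:, o] - U_true[tau, :, o], p=2)
        # Feed the prediction back as the newest dynamic input (Fig. 3)
        U_hist = torch.cat([U_hist[:, O:], U_pred], dim=1)
        U_t = U_pred
    return loss / (H * O)
```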
To improve the training speed and stability, we also employed a curriculum learning strategy (Algorithm 1). This consists in progressively increasing the prediction horizon in Eq. (14) every fixed
number of epochs up to H. The idea is first to learn the one-step-ahead or few-steps-ahead predictions to fit the short-term predictions and then to increase the number of steps ahead to stabilize
the predictions (Wang et al., 2022).
Algorithm 1. Curriculum learning strategy (as recoverable from the extracted text; the horizon is increased every fixed number of epochs up to its maximum value).
for epoch = 1 to MaxEpochs do
    $\hat{\mathbf{U}}^{t+1} = \mathbf{U}^{t} + \Phi\left(\mathbf{X}_{\mathrm{s}}, \mathbf{U}^{t-p:t}, \mathbf{E}\right)$
    $\mathcal{L} = \frac{1}{HO} \sum_{\tau=1}^{H} \sum_{o=1}^{O} \gamma_o \left\| \hat{\mathbf{u}}_o^{t+\tau} - \mathbf{u}_o^{t+\tau} \right\|_2$
    if epoch > CurriculumSteps $\cdot$ H then
        increase the prediction horizon H (up to its maximum value)
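A possible implementation of this curriculum strategy, reusing the `multistep_loss` sketch above; the data-loader interface, the loss weights, and the exact horizon-update rule are assumptions consistent with the description in the text rather than the authors' code.

```python
import torch

def train_with_curriculum(model, loader, optimizer, H=8, curriculum_steps=15, max_epochs=150):
    """Curriculum learning (Algorithm 1): grow the prediction horizon during training."""
    gamma = torch.tensor([1.0, 3.0])   # per-variable loss weights (water depth, |q|), see Sect. 4.2
    horizon = 1
    for epoch in range(1, max_epochs + 1):
        for X_s, U_hist, E_feat, U_true in loader:
            optimizer.zero_grad()
            # multistep_loss is the sketch given after Eq. (14)
            loss = multistep_loss(model, X_s, U_hist, E_feat, U_true[:horizon], gamma, horizon)
            loss.backward()
            optimizer.step()
        # Increase the horizon every `curriculum_steps` epochs, up to H
        if epoch % curriculum_steps == 0 and horizon < H:
            horizon += 1
    return model
```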
4.1 Dataset generation
We considered 130 numerical simulations of dike breach floods run on randomly generated topographies over two squared domains of sizes 6.4×6.4 and 12.8×12.8km^2, representative of flood-prone polder areas.
We generated random digital elevation models using the Perlin noise generator (Perlin, 2002) as its ups and downs reflect plausible topographies. We opted for this methodology, instead of manually
selecting terrain patches, to automatize the generation process, thus allowing for an indefinite number of randomized and unbiased training and testing samples.
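As an illustration of this kind of procedural terrain generation (not the authors' script), a small DEM can be produced with the `noise` Python package, assuming it is installed and exposes 2D Perlin noise via `pnoise2`; the noise scale and amplitude below are arbitrary.

```python
import numpy as np
import noise  # assumed: the 'noise' package providing Perlin noise (pnoise2)

def random_dem(n=64, cell_size=100.0, scale=0.005, amplitude=5.0, seed=0):
    """Generate an n x n digital elevation model from 2D Perlin noise (illustrative)."""
    dem = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            x, y = i * cell_size * scale, j * cell_size * scale
            dem[i, j] = amplitude * noise.pnoise2(x, y, octaves=4, base=seed)
    return dem
```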
We employed a high-fidelity numerical solver, Delft3D-FM, which solves the full shallow water equations using an implicit scheme on staggered grids and adaptive time steps (Deltares, 2022). We used a
dry bed as the initial condition and a constant input discharge of 50m^3s^−1 as the boundary condition, equal to the maximum dike breach discharge. We employed a single boundary condition value for
all the simulations as our focus is on showing generalizability over different topographies and breach locations. The simulation output is a set of temporally consecutive flood maps with a temporal
resolution of 30min.
We created three datasets with different area sizes and breach locations as summarized in Table 1. We selected a rectangular domain discretized by regular meshes to allow for a fairer comparison with
other models that cannot work with meshes or cannot incorporate edge attributes. Furthermore, we considered a constant roughness coefficient m[i] for all the simulations, meaning that we use the
terrain elevation and the slopes in the x and y directions as static node inputs.
1. The first dataset consists of 100 DEMs over a squared domain of 64×64 grids of length 100m and a simulation time of 48h. This dataset is used for training, validation, and testing. We used a
fixed testing set of 20 simulations, while the remaining 80 simulations are used for training (60) and validation (20).
2. The second dataset consists of 20 DEMs over a squared domain of 64×64 grids of length 100m and a simulation time of 48h. The breach location changes randomly across the border with a constant
discharge of 50m^3s^−1 (Fig. 4a). This dataset is used to test the generalizability of the model to unseen domains and breach locations.
3. The third dataset consists of 10 DEMs over a squared domain of 128×128 grids of length 100m. The boundary conditions are the same as for the second dataset. Since the domain area is 4 times
larger, the total simulation time is 120h to allow for the flood to cover larger parts of the domain. This dataset is used to test the generalizability of the model to larger unseen domains,
unseen breach locations, and longer time horizons.
Unless otherwise mentioned, we selected a temporal resolution of Δt=1h as a trade-off between detail and speed. When the beginning of the flood is relevant (e.g. for real-time forecasts), higher
temporal resolutions are better. By contrast, if the final flood state is relevant, lower temporal resolutions may be better.
4.2 Training setup
We trained all models via the Adam optimization algorithm (Kingma and Ba, 2014). We employed a varying learning rate with 0.005 as a starting value and a fixed step decay of 90% every seven epochs.
The training was carried out for 150 epochs with early stopping. We used a maximum prediction horizon H=8 steps ahead during training as a trade-off between model stability and training time, as
later highlighted in Sect. 5.4. There is no normalization pre-processing step and, thus, the values of water depth and unit discharge differ in magnitude by a factor of 10. Since for application
purposes discharge is less relevant than water depth (Kreibich et al., 2009), we weighted the discharge term by a factor of γ[2]=3 (see Eq. 14) while leaving the weight factor for water depths as γ
[1]=1. Finally, we used one previous time step as input, i.e. ${\mathbf{X}}_{\mathrm{d}}=\left({\mathbf{U}}^{t=\mathrm{0}},{\mathbf{U}}^{t=\mathrm{1}}\right)$, where the solution at time t=0
corresponds to dry bed conditions.
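The corresponding optimizer and learning-rate configuration could look as follows; note that "step decay of 90 % every seven epochs" is read here as multiplying the learning rate by 0.9, which is our interpretation of the wording.

```python
import torch

def configure_training(model):
    """Optimizer, LR schedule, and loss weights as described in Sect. 4.2 (interpretation hedged)."""
    optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
    # Multiply the learning rate by 0.9 every seven epochs; use gamma=0.1 if the
    # intended meaning is instead a decay to 10 % of the previous value.
    scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.9)
    gamma_loss = torch.tensor([1.0, 3.0])   # loss weights: water depth (gamma_1 = 1), |q| (gamma_2 = 3)
    return optimizer, scheduler, gamma_loss
```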
We trained all the models using Pytorch (version 1.13.1) (Paszke et al., 2019) and Pytorch Geometric (version 2.2) (Fey and Lenssen, 2019). In terms of hardware, we employed an Nvidia Tesla
V100S-PCIE-32GB for training and deployment (DHPC, 2022) and an Intel(R) Core(TM) i7-8665U @1.9GHz CPU for deployment and for the execution of the numerical model. We run the models on both GPUs and
CPUs to allow for a fair comparison with the numerical models.
We evaluated the performance using the multi-step-ahead RMSE (Eq. 14) over the whole simulation. However, for testing, we calculated the RMSE for each hydraulic variable o independently as
(15) $\mathrm{RMSE}_o = \frac{1}{H} \sum_{\tau=1}^{H} \frac{\left\| \hat{\mathbf{u}}_o^{\tau} - \mathbf{u}_o^{\tau} \right\|_2}{\sqrt{N}}.$
Analogously, we evaluated the mean absolute error (MAE) for each hydraulic variable $o$ over the whole simulation as
(16) $\mathrm{MAE}_o = \frac{1}{H} \sum_{\tau=1}^{H} \frac{\left\| \hat{\mathbf{u}}_o^{\tau} - \mathbf{u}_o^{\tau} \right\|_1}{N}.$
The prediction horizon H depends on the total simulation time and temporal resolution. For example, predicting 24h with a temporal resolution of 30min results in H=48 steps ahead. We also measured
the spatio-temporal error distribution of the water depth using the critical success index (CSI) for threshold values of 0.05 and 0.3m, as in Löwe et al. (2021). The CSI measures the spatial
accuracy of detecting a certain class (e.g. flood or no-flood) and, for a given threshold, it is evaluated as
$\begin{array}{}\text{(17)}& \mathrm{CSI}=\frac{\mathrm{TP}}{\mathrm{TP}+\mathrm{FP}+\mathrm{FN}},\end{array}$
where TP is the true positives, i.e. the number of cells where both the model and simulation predict flood; FP is the false positives, i.e. the number of cells where the model wrongly predicts flood;
and FN is the false negatives, i.e. the number of cells where the model does not recognize a flooded area. We selected this measure as it discards the true negatives, i.e. when both the model and
simulation predict no flood, as this condition is over-represented, especially for the initial time steps. Thus, including true negatives may give an overconfident performance estimate. We measured
the computational speed-up as the ratio between the computational time required by the numerical model and the inference time of the deep-learning model. Both times refer to the execution of the
complete flood simulation but do not include the time required to simulate the initial time steps.
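A possible implementation of the CSI of Eq. (17) for a given water-depth threshold is sketched below; the convention adopted for the degenerate case in which no cell is flooded in either map is our own choice.

```python
import numpy as np

def csi(h_pred, h_true, threshold=0.05):
    """Critical success index (Eq. 17) for a flood/no-flood threshold on water depth."""
    pred_flood = h_pred >= threshold
    true_flood = h_true >= threshold
    tp = np.sum(pred_flood & true_flood)     # both model and simulation predict flood
    fp = np.sum(pred_flood & ~true_flood)    # model wrongly predicts flood
    fn = np.sum(~pred_flood & true_flood)    # model misses a flooded cell
    denom = tp + fp + fn
    return tp / denom if denom > 0 else 1.0  # convention: perfect score if nothing is flooded anywhere
```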
5.1 Comparison with other deep-learning models
The proposed SWE–GNN model is compared with other deep-learning methods, including the following.
• CNN: encoder–decoder convolutional neural network based on U-Net (Ronneberger et al., 2015). The CNN considers the node feature matrix X reshaped as a tensor $\mathcal{X}\in {\mathbb{R}}^{g×g×{I}
_{N}}$, where g is the number of grid cells, i.e. 64 for datasets 1 and 2 and 128 for dataset 3, and I[N] is the number of static and dynamic features. This baseline is used to highlight the
advantages of the mesh dual graph as an inductive bias in place of an image.
• GAT: graph attention network (Veličković et al., 2017). The weights in the propagation rule are learned considering an attention-based weighting. This baseline is considered to show the influence
of learning the propagation rule with an attention mechanism. For more details, see Appendix A.
• GCN: graph convolutional neural network (Defferrard et al., 2016). This baseline is considered to show the influence of not learning the edge propagation rule in place of learning it. For more
details, see Appendix A.
• SWE–GNN[ng]: SWE–GNN without the gradient term x[dj]−x[di]. This is used to show the importance of the gradient term in the graph propagation rule.
We also evaluated MLP-based models, but their performance was too poor and we do not report it. All the models consider the same node feature inputs $\mathbf{X} = (\mathbf{X}_{\mathrm{s}}, \mathbf{X}_{\mathrm{d}})$, produce the same output $\hat{\mathbf{Y}} = \mathbf{U}^{t+1}$, produce extended simulations by using the predictions as input (as in Fig. 3
), and use the same training strategy with the multi-step-ahead loss and curriculum learning. For the GNN-based models, we replaced the GNN in the processor while keeping the encoder–decoder
structure as in Fig. 2. We conducted a thorough hyperparameter search for all the models, and we selected the one with the best validation loss. For the CNN architecture, the best model has three
down- and up-scaling blocks, with 64 filters in the first encoding block. Interestingly, we achieved good results only when employing batch normalization layers, PReLU as an activation function, and
no residual connections. All other standard combinations resulted in poor performances, which we did not report as they are outside the scope of the paper. For the GNN-based architectures, all
hyperparameter searches resulted in similar best configurations, i.e. L=8 GNN layers and an embedding size of G=64.
In Table 2, we report the testing RMSE and MAE for water depth and discharges as well as the CSI scores for all the models. The proposed SWE–GNN model and the U-Net-based CNN perform consistently
better than all the other models, with no statistically significant difference in performance between the two according to the Kolmogorov–Smirnov test at the 0.05 significance level. The CNN performs similarly to the
SWE–GNN because the computations on a regular grid are similar to those of a GNN. Nonetheless, there are valuable differences between the two models. First, SWE–GNN is by definition more physically
explainable as water can only propagate from wet cells to neighbouring cells, while in the CNN there is no such physical constraint, as exemplified by Fig. 5b. Second, as emphasized in the following
section, the SWE–GNN results in improved generalization abilities. Moreover, in contrast to CNNs, GNNs can also work with irregular meshes. Regarding the other GNN-based models, we noticed that the
GAT model had the worst performance, indicating that the propagation rule cannot be learned efficiently via attention mechanisms. Moreover, the GCN and the SWE–GNN[ng] achieved comparable results,
meaning that the gradient term makes a relevant contribution to the model as its removal results in a substantial loss in performance. We expected this behaviour as, without this term, there is no
computational constraint on how water propagates.
5.2 Generalization to breach locations and larger areas
We further tested the already trained models on datasets 2 and 3, with unseen topographies, unseen breach locations, larger domain sizes, and longer simulation times, as described in Table 1. In the
following, we omit the other GNN-based models, since their performance was poorer, as highlighted in Table 2.
Table 3 shows that all the metrics remain comparable across the various datasets for the SWE–GNN, with test MAEs of approximately 0.04m for water depth and 0.004m^2s^−1 for unit discharges,
indicating that the model has learned the dynamics of the problems. The speed-up on the GPU of the SWE–GNN over dataset 3 increased further with respect to the smaller areas of datasets 1 and 2,
reaching values twice as high, i.e. ranging from 100 to 600 times faster than the numerical model on the GPU. We attribute this to the deep-learning models' scalability and better exploitation of the
hardware for larger graphs.
In Fig. 5, we see two examples of a SWE–GNN and a CNN on test datasets 2 and 3. The SWE–GNN model predicts better the flood evolution over time for unseen breach locations, even on bigger and unseen
topographies, thanks to its hydraulic-based approach. On the other hand, the CNN strongly over- or under-predicts the flood extents unless the breach location is close to that of the training
dataset, indicating that it lacks the correct inductive bias to generalize floods. For both models, the predictions remain stable even for time horizons 2.5 times longer than those in training.
5.3 SWE–GNN model analysis
Over the entire test part of dataset 1, the model achieves MAEs of 0.04m for water depth and 0.004m^2s^−1 for unit discharges with respect to maximum water depths and unit discharges of 2.88m and
0.55m^2s^−1, respectively, and average water depths and unit discharges of 0.62m and 0.037m^2s^−1.
We illustrate the spatio-temporal performance of the model on a test sample in Fig. 6. Water depth and discharges evolve accurately over time, overall matching the ground-truth numerical results. The
errors are related to small over- or under-predictions, a few incorrect flow routes, and lags in the predictions resulting in delays or anticipations that are corrected by the successive model
iterations. In particular, the model struggles to represent discharges corresponding to ponding phenomena, i.e. when an area gets filled with water and then forms a temporary lake, as exemplified in
the bottom-left part of the domain in Fig. 6b. This is because of the lower contribution of the discharges to the training loss. Nonetheless, the error does not propagate over time, thanks to the
multi-step-ahead loss employed during training. In fact, the model updates the solution for the entire domain at each time step. Consequently, it exploits information on newly flooded neighbourhoods
to recompute better values for the cells that were flooded before.
We also observe the average performance of the different metrics over time, for the whole test dataset 1, in Fig. 7. The CSI is consistently high throughout the whole simulation, indicating that the
model correctly predicts where water is located in space and time. On the other hand, both MAE and RMSE increase over time. This is partially due to the evaluation of both metrics via a spatial
average, which implies that, in the first time steps, where the domain is mostly dry, the error will naturally be lower. Nonetheless, the errors increase linearly or sub-linearly, implying that they
are not prone to exploding exponentially.
Next, we analysed the relationship between the number of GNN layers and the temporal resolution to validate the hypothesis that the number of layers is correlated with the time steps. Following the
CFL condition, we can expand the computational domain by increasing the number of GNN layers in the model instead of decreasing the time steps. We considered several models with an increasing number
of GNN layers targeting temporal resolutions of Δt=30, 60, 90, and 120min. Figure 8 shows that lower temporal resolutions (e.g. 120min) require more GNN layers to reach the same performance as that
of higher temporal resolutions (e.g. 30min). One reason why the number of layers does not increase linearly with the temporal resolution may be that the weighting matrices W[ℓ] (see Eq. 7) improve
the expressive power of each layer, leading to fewer layers than needed otherwise.
Finally, we explored different model complexity combinations, expressed by the number of GNN layers and the latent space size, to determine a Pareto front for validation loss and speed-up, which
results in a trade-off between fast and accurate models. Figure 9 shows that increasing the complexity reduces both errors and speed-ups while improving the CSI, as expected. While for the GPU the
number of hidden features does not influence the speed-up, the performance on the CPU depends much more on it, with bigger models being slower, implying different trade-off criteria for deployment.
5.4 Sensitivity analysis of the training strategy
Finally, we performed a sensitivity analysis of the role of the multi-step-ahead function (see Eq. 14) and the curriculum learning (Algorithm 1) in the training performance. Sensitivity analysis is a
technique that explores the effect of varying hyperparameters to understand their influence on the model's output. Figure 10a shows that increasing the number of steps ahead improves the performance.
Increasing the number of steps implies higher memory requirements and longer training times. Given the best observed performance and the available GPU memory, we selected eight steps ahead in all the
experiments. However, when performing bigger hyperparameter searches or when limited by hardware, choosing fewer steps ahead can result in an acceptable performance. Similar considerations can also
be made for the CNN model.
Figure 10b shows that increasing the interval of curriculum steps linearly reduces the training times while also improving the performance. The decrease in performance associated with bigger values
is probably caused by the number of total training epochs, i.e. 150, which is insufficient to cover the whole prediction horizon H. Increasing the total number of epochs should increase both the
performance and the training time, but we avoided this analysis and chose an interval of 15 epochs for the curriculum learning strategy as a trade-off between performance and training times.
Moreover, models with curriculum steps between 0 and 15 suffered from spurious instabilities during training that were compensated for with early stopping, while models with more curriculum steps
were generally more stable. This is due to sudden variations in the loss function that limit a smoother learning process.
We proposed a deep-learning model for rapid flood modelling, called SWE–GNN, inspired by shallow water equations (SWEs) and graph neural networks (GNNs). The model takes the same inputs as a
numerical model, i.e. the spatial discretization of the domain, elevation, slopes, and initial values of the hydraulic variables, and predicts their evolution in time in an auto-regressive manner.
The results show that the SWE–GNN can correctly predict the evolution of water depth and discharges with mean absolute errors in time of 0.04m and 0.004m^2s^−1, respectively. It also generalizes
well to previously unseen topographies with varying breach locations, bigger domains, and longer time horizons. SWE–GNN is up to 2 orders of magnitude faster than the underlying numerical model.
Moreover, the proposed model achieved consistently better performances with respect to other deep-learning models in terms of water depth and unit discharge errors as well as CSI.
In line with the hypothesis, GNNs proved to be a valuable tool for spatio-temporal surrogate modelling of floods. The analogy with finite-volume methods is relevant for three reasons. First, it
improves the deep-learning model's interpretability, as the weights in the graph propagation rule can be interpreted as an approximate Riemann solver and multiple GNN layers can be seen as
intermediate steps of a multi-step method such as Runge–Kutta. Second, the analogy also provides an existing framework to include conservation laws in the model and links two fields that can benefit
from each other's advances. For example, multiple spatial and temporal resolutions could be jointly used in place of a fixed one, similarly to Liu et al. (2022). Third, the methodology is applicable
to any flood modelling application where the SWE holds, such as storm surges and river floods. The same reasoning can also be applied to other types of partial differential equations where
finite-volume methods are commonly used, such as in computational fluid dynamics.
The current analysis was carried out under a constant breach inflow as a boundary condition. Further research should extend the analysis to time-varying boundary conditions to better represent
complex real-world scenarios. One solution is to employ ghost cells typical of numerical models (LeVeque, 2002) for the domain boundaries, assigning known values in time. It should be noted that our
model cannot yet completely replace numerical models as it requires the first time step of the flood evolution as input. This challenge could be addressed by directly including boundary conditions in
the model's inputs. In contrast to physically based numerical methods, the proposed model does not strictly enforce conservation laws such as mass balance. Future work could address this limitation
by adding conservation equations to the training loss function, as is commonly done with physics-informed neural networks. Finally, while we empirically showed that the proposed model along with the
multi-step-ahead loss can sufficiently overcome numerical stability conditions, we provide no theoretical guarantee that stability can be enforced for an indefinite number of time steps.
Future research should investigate the new modelling approach in flood risk assessment and emergency preparation. This implies creating ensembles of flood simulations to reflect uncertainties, flood
warning and predicting extreme events, and exploring adaptive modelling during floods by incorporating real-time observations. The model should also be validated in real case studies featuring linear
elements such as secondary dikes and roads typical of polder areas. Further work could also address breach uncertainty in terms of timing, size, growth, and number of breaches. Moreover, future works
should aim at improving the model's Pareto front. To improve the speed-up, one promising research direction would be to employ multi-scale methods that allow one to reduce the number of
message-passing operations while still maintaining the same interaction range (e.g. Fortunato et al., 2022; Lino et al., 2022). On the other hand, better enforcing physics and advances in GNNs with
spatio-temporal models (e.g. Sabbaqi and Isufi, 2022) or generalizations to higher-order interactions (e.g. Yang et al., 2022) may further benefit the accuracy of the model. Overall, the SWE–GNN
marks a valuable step towards the integration of deep learning for practical applications.
Appendix A: Architecture details
In this Appendix, we further detail the different inputs and outputs, the hyperparameters, and the models' architectures used in Sect. 5.1.
A1 Inputs, outputs, and hyperparameters
Figure A1 shows the inputs employed by all the models in Sect. 5.1. The static inputs X[s] are given by the slopes in the x and y directions as well as the elevation, while the initial dynamic inputs
${\mathbf{X}}_{d}=\left({\mathbf{U}}^{\mathrm{0}},{\mathbf{U}}^{\mathrm{1}}\right)$ are given by water depth and discharge at times t=0h, i.e. the empty domain, and t=1h.
Table A1 shows the hyperparameters employed for each model. Some hyperparameters are common to all the models, such as learning rate, number of maximum training steps ahead, and optimizer, while
others depend on the model, such as the embedding dimension and the number of layers.
A2 GNN benchmarks
We compared the proposed model against two benchmark GNNs that employ different propagation rules. Since those models cannot independently process static and dynamic attributes, in contrast to the
SWE–GNN, we stacked the node inputs into a single node feature matrix $\mathbf{X}=\left({\mathbf{X}}_{\mathrm{d}},{\mathbf{X}}_{\mathrm{s}}\right)$, which passes through an encoder MLP and then to
the GNN.
The GCN employs the normalized Laplacian connectivity matrix to define the edge weights s[ij]. The layer propagation rule reads as
(A1) $s_{ij} = \left(\mathbf{I} - \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\right)_{ij},$
(A2) $\mathbf{h}_i^{(\ell+1)} = \sum_{j \in \mathcal{N}_i} s_{ij}\,\mathbf{W}^{(\ell)}\mathbf{h}_j^{(\ell)},$
where $\mathbf{I}$ is the identity matrix; $\mathbf{A}$ is the adjacency matrix, which has non-zero entries corresponding to edges; and $\mathbf{D}$ is the diagonal degree matrix.
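For reference, the GCN edge weights of Eq. (A1) can be computed directly from the adjacency matrix; the dense NumPy sketch below is for clarity only and is not how a graph library would store these weights in practice.

```python
import numpy as np

def gcn_weights(A):
    """Edge weights s_ij = (I - D^{-1/2} A D^{-1/2})_ij of Eq. (A1), dense sketch."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.zeros_like(deg, dtype=float)
    nz = deg > 0
    d_inv_sqrt[nz] = deg[nz] ** -0.5          # avoid division by zero for isolated nodes
    D_inv_sqrt = np.diag(d_inv_sqrt)
    return np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt
```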
GAT employs an attention-based mechanism to define the edge weights s[ij] based on their importance in relation to the target node. The layer propagation rule reads as
(A3) $s_{ij} = \frac{\exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{T}\left[\mathbf{W}^{(\ell)}\mathbf{h}_i^{(\ell)} \,\|\, \mathbf{W}^{(\ell)}\mathbf{h}_j^{(\ell)}\right]\right)\right)}{\sum_{k \in \mathcal{N}_i} \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{T}\left[\mathbf{W}^{(\ell)}\mathbf{h}_i^{(\ell)} \,\|\, \mathbf{W}^{(\ell)}\mathbf{h}_k^{(\ell)}\right]\right)\right)},$
(A4) $\mathbf{h}_i^{(\ell+1)} = \sum_{j \in \mathcal{N}_i} s_{ij}\,\mathbf{W}^{(\ell)}\mathbf{h}_j^{(\ell)},$
where $\mathbf{a} \in \mathbb{R}^{2G}$ is a weight vector, $s_{ij}$ are the attention coefficients, and $\|$ denotes concatenation.
The encoder–decoder convolutional neural network is an architecture composed of two parts (Fig. A2). The encoder extracts high-level features from the input images while reducing their extent via a
series of convolutional and pooling layers, while the decoder extracts the output image from the compressed signal, again via a series of convolutional layers and pooling layers. The U-Net version of
the architecture also features residual connections between images with the same dimensions. That is, the output of an encoder block is summed to the inputs of the decoder block with the same
dimensions, as shown in Fig. A2. The equation for a single 2D convolutional layer is defined as
(A5) $\mathbf{Y}_k = \sigma\left(\mathbf{W}_k * \mathbf{X}\right),$
where Y[k] is the output feature map for the kth filter, X is the input image, W[k] is the weight matrix for the kth filter, * denotes the 2D convolution operation, and σ is an activation function.
Appendix B: Pareto front for dataset 3
We employed the models trained with different combinations of the number of GNN layers and embedding sizes (Sect. 5.3) on test dataset 3. Figure B1 shows that the models perform better in terms of
speed with respect to the smaller areas, achieving similar CPU speed-ups and GPU speed-ups around 2 times higher than those in datasets 1 and 2.
Code and data availability
RB: conceptualization, methodology, software, validation, data curation, writing – original draft preparation, visualization, writing – review and editing. EI: supervision, methodology, writing –
review and editing, funding acquisition. SNJ: supervision, writing – review and editing. RT: conceptualization, supervision, writing – review and editing, funding acquisition, project administration.
The contact author has declared that none of the authors has any competing interests.
Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims made in the text, published maps, institutional affiliations, or any other geographical representation
in this paper. While Copernicus Publications makes every effort to include appropriate place names, the final responsibility lies with the authors.
This work is supported by the TU Delft AI Initiative programme. We thank Ron Bruijns for providing the dataset to carry out the preliminary experiments. We thank Deltares for providing the license
for Delft3D to run the numerical simulations.
This paper was edited by Albrecht Weerts and reviewed by Daniel Klotz and one anonymous referee.
Alcrudo, F. and Garcia-Navarro, P.: A high-resolution Godunov-type scheme in finite volumes for the 2D shallow-water equations, Int. J. Numer. Meth. Fluids, 16, 489–505, 1993.a
Bates, P. D. and De Roo, A. P.: A simple raster-based model for flood inundation simulation, J. Hydrol., 236, 54–77, https://doi.org/10.1016/S0022-1694(00)00278-X, 2000.a
Battaglia, P. W. E. A.: Relational inductive biases, deep learning, and graph networks, arXiv [preprint], https://doi.org/10.48550/arXiv.1806.01261, 2018.a
Bentivoglio, R.: Code repository for paper “Rapid Spatio-Temporal Flood Modelling via Hydraulics-Based Graph Neural Networks”, Zenodo [code], https://doi.org/10.5281/zenodo.10214840, 2023a.a
Bentivoglio, R.: Video simulations for paper “Rapid Spatio-Temporal Flood Modelling via Hydraulics-Based Graph Neural Networks”, Zenodo [video supplement], https://doi.org/10.5281/zenodo.7652663,
Bentivoglio, R. and Bruijns, R.: Raw datasets for paper “Rapid Spatio-Temporal Flood Modelling via Hydraulics-Based Graph Neural Networks”, Zenodo [data set], https://doi.org/10.5281/zenodo.7764418,
Bentivoglio, R., Isufi, E., Jonkman, S. N., and Taormina, R.: Deep learning methods for flood mapping: a review of existing applications and future research directions, Hydrol. Earth Syst. Sci., 26,
4345–4378, https://doi.org/10.5194/hess-26-4345-2022, 2022.a
Berkhahn, S., Fuchs, L., and Neuweiler, I.: An ensemble neural network model for real-time prediction of urban floods, J. Hydrol., 575, 743–754, https://doi.org/10.1016/j.jhydrol.2019.05.066, 2019.a
Brandstetter, J., Worrall, D., and Welling, M.: Message passing neural PDE solvers, arXiv [preprint], https://doi.org/10.48550/arXiv.2202.03376, 2022.a
Bronstein, M. M., Bruna, J., Cohen, T., and Veličković, P.: Geometric deep learning: Grids, groups, graphs, geodesics, and gauges, arXiv [preprint], arXiv:2104.13478, https://doi.org/10.48550/
arXiv.2104.13478, 2021.a
Chen, Y., Tang, X., Qi, X., Li, C.-G., and Xiao, R.: Learning graph normalization for graph neural networks, Neurocomputing, 493, 613–625, 2022.a
Costabile, P., Costanzo, C., and Macchione, F.: Performances and limitations of the diffusive approximation of the 2-d shallow water equations for flood simulation in urban and rural areas, Appl.
Numer. Math., 116, 141–156, 2017.a
Courant, R., Friedrichs, K., and Lewy, H.: On the partial difference equations of mathematical physics, IBM J. Res. Dev., 11, 215–234, 1967.a
Defferrard, M., Bresson, X., and Vandergheynst, P.: Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering, in: Advances in Neural Information Processing Systems, vol. 29,
edited by: Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., and Garnett, R., Curran Associates, Inc., 1–10, https://proceedings.neurips.cc/paper/2016/file/04df4d434d481c5bb723be1b6df1ee65-Paper.pdf
(last access: 18 February 2023), 2016.a
Deltares: Delft3D-FM User Manual, https://content.oss.deltares.nl/delft3d/manuals/D-Flow_FM_User_Manual.pdf (last access: 18 February 2023), 2022.a
DHPC – Delft High Performance Computing Centre: DelftBlue Supercomputer (Phase 1), https://www.tudelft.nl/dhpc/ark:/44463/DelftBluePhase1 (last access: 18 February 2023), 2022.a
do Lago, C. A., Giacomoni, M. H., Bentivoglio, R., Taormina, R., Gomes, M. N., and Mendiondo, E. M.: Generalizing rapid flood predictions to unseen urban catchments with conditional generative
adversarial networks, J. Hydrol., 618, 129276, https://doi.org/10.1016/j.jhydrol.2023.129276, 2023.a
Dormand, J. R. and Prince, P. J.: A family of embedded Runge-Kutta formulae, J. Comput. Appl. Math., 6, 19–26, 1980.a
Fey, M. and Lenssen, J. E.: Fast graph representation learning with PyTorch Geometric, arXiv [preprint], arXiv:1903.02428, https://doi.org/10.48550/arXiv.1903.02428, 2019.a
Fortunato, M., Pfaff, T., Wirnsberger, P., Pritzel, A., and Battaglia, P.: Multiscale meshgraphnets, arXiv [preprint], arXiv:2210.00612, https://doi.org/10.48550/arXiv.2210.00612, 2022.a
Gama, F., Isufi, E., Leus, G., and Ribeiro, A.: Graphs, convolutions, and neural networks: From graph filters to graph neural networks, IEEE Sig. Process. Mag., 37, 128–138, 2020.a
Guo, Z., Leitao, J. P., Simões, N. E., and Moosavi, V.: Data-driven flood emulation: Speeding up urban flood predictions by deep convolutional neural networks, J. Flood Risk Manage., 14, e12684,
https://doi.org//10.1111/jfr3.12684, 2021.a
Guo, Z., Moosavi, V., and Leitão, J. P.: Data-driven rapid flood prediction mapping with catchment generalizability, J. Hydrol., 609, 127726, https://doi.org/10.1016/j.jhydrol.2022.127726, 2022.a
Horie, M. and Mitsume, N.: Physics-Embedded Neural Networks: E(n)-Equivariant Graph Neural PDE Solvers, arXiv [preprint], https://doi.org/10.48550/ARXIV.2205.11912, 2022.a
Hu, R. L., Pierce, D., Shafi, Y., Boral, A., Anisimov, V., Nevo, S., and Chen, Y.-F.: Accelerating physics simulations with tensor processing units: An inundation modeling example, arXiv [preprint],
https://doi.org/10.48550/arXiv.2204.10323, 2022.a
Jacquier, P., Abdedou, A., Delmas, V., and Soulaïmani, A.: Non-intrusive reduced-order modeling using uncertainty-aware Deep Neural Networks and Proper Orthogonal Decomposition: Application to flood
modeling, J. Comput. Phys., 424, 109854, https://doi.org/10.1016/j.jcp.2020.109854, 2021.a
Jonkman, S. N., Kok, M., and Vrijling, J. K.: Flood risk assessment in the Netherlands: A case study for dike ring South Holland, Risk Ana., 28, 1357–1374, 2008.a
Kabir, S., Patidar, S., Xia, X., Liang, Q., Neal, J., and Pender, G.: A deep convolutional neural network model for rapid prediction of fluvial flood inundation, J. Hydrol., 590, 125481, https://
doi.org/10.1016/j.jhydrol.2020.125481, 2020.a
Kingma, D. P. and Ba, J.: Adam: A method for stochastic optimization, arXiv [preprint], arXiv:1412.6980 https://doi.org/10.48550/arXiv.1412.6980, 2014.a
Kreibich, H., Piroth, K., Seifert, I., Maiwald, H., Kunert, U., Schwarz, J., Merz, B., and Thieken, A. H.: Is flow velocity a significant parameter in flood damage modelling?, Nat. Hazards Earth
Syst. Sci., 9, 1679–1692, https://doi.org/10.5194/nhess-9-1679-2009, 2009.a, b
LeCun, Y., Bengio, Y., and Hinton, G.: Deep learning, Nature, 521, 436–444, 2015.a, b
LeVeque, R. J.: Finite volume methods for hyperbolic problems, in: vol. 31, Cambridge University Press, ISBN 0521810876, 2002.a
Liang, Q. and Marche, F.: Numerical resolution of well-balanced shallow water equations with complex source terms, Adv. Water Resour., 32, 873–884, 2009.a
Lino, M., Cantwell, C., Bharath, A. A., and Fotiadis, S.: Simulating Continuum Mechanics with Multi-Scale Graph Neural Networks, arXiv [preprint], arXiv:2106.04900, https://doi.org/10.48550/
arXiv.2106.04900, 2021.a
Lino, M., Fotiadis, S., Bharath, A. A., and Cantwell, C. D.: Multi-scale rotation-equivariant graph neural networks for unsteady Eulerian fluid dynamics, Phys. Fluids, 34, 087110, https://doi.org/
10.1063/5.0097679, 2022.a
Liu, Y., Kutz, J. N., and Brunton, S. L.: Hierarchical deep learning of multiscale differential equation time-steppers, Philos. T. Roy. Soc. A, 38, 020210200, https://doi.org/10.1098/rsta.2021.0200,
Löwe, R., Böhm, J., Jensen, D. G., Leandro, J., and Rasmussen, S. H.: U-FLOOD – Topographic deep learning for predicting urban pluvial flood water depth, J. Hydrol., 603, 126898, https://doi.org/
10.1016/j.jhydrol.2021.126898, 2021.a, b
Martínez-Aranda, S., Fernández-Pato, J., Echeverribar, I., Navas-Montilla, A., Morales-Hernández, M., Brufau, P., Murillo, J., and García-Navarro, P.: Finite Volume Models and Efficient Simulation
Tools (EST) for Shallow Flows, in: Advances in Fluid Mechanics, Springer, 67–137, https://doi.org/10.1007/978-981-19-1438-6_3, 2022.a
Mosavi, A., Ozturk, P., and Chau, K.-W.: Flood Prediction Using Machine Learning Models: Literature Review, Water, 10, 1536, https://doi.org/10.3390/w10111536, 2018.a
Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., Desmaison, A., Kopf, A., Yang, E., DeVito, Z., Raison, M., Tejani, A.,
Chilamkurthy, S., Steiner, B., Fang, L., Bai, J., and Chintala, S.: Pytorch: An imperative style, high-performance deep learning library, arXiv [preprint], https://doi.org/10.48550/arXiv.1912.01703,
Peng, J.-Z., Wang, Y.-Z., Chen, S., Chen, Z.-H., Wu, W.-T., and Aubry, N.: Grid adaptive reduced-order model of fluid flow based on graph convolutional neural network, Phys. Fluids, 34, 087121,
https://doi.org/10.1063/5.0100236, 2022.a
Perlin, K.: Improving Noise, ACM Trans. Graph., 21, 681–682, https://doi.org/10.1145/566654.566636, 2002.a
Petaccia, G., Natale, L., Savi, F., Velickovic, M., Zech, Y., and Soares-Frazão, S.: Flood wave propagation in steep mountain rivers, J. Hydroinform., 15, 120–137, 2013.a
Petaccia, G., Leporati, F., and Torti, E.: OpenMP and CUDA simulations of Sella Zerbino Dam break on unstructured grids, Comput. Geosci., 20, 1123–1132, https://doi.org/10.1007/s10596-016-9580-5,
RBTV1: SWE-GNN-paper-repository-, GitHub [code], https://github.com/RBTV1/SWE-GNN-paper-repository- (last access: 29 November 2023), 2023.a
Roe, P. L.: Approximate Riemann solvers, parameter vectors, and difference schemes, J. Comput. Phys., 135, 250–258, https://doi.org/10.1006/jcph.1997.5705, 1981.a
Ronneberger, O., Fischer, P., and Brox, T.: U-net: Convolutional networks for biomedical image segmentation, in: International Conference on Medical image computing and computer-assisted
intervention, arXiv [preprint], https://doi.org/10.48550/arXiv.1505.04597, 2015.a
Sabbaqi, M. and Isufi, E.: Graph-Time Convolutional Neural Networks: Architecture and Theoretical Analysis, arXiv [preprint], https://doi.org/10.48550/arXiv.2206.15174, 2022.a
Teng, J., Jakeman, A. J., Vaze, J., Croke, B. F., Dutta, D., and Kim, S.: Flood inundation modelling: A review of methods, recent advances and uncertainty analysis, Environ. Model. Softw., 90,
201–216, https://doi.org/10.1016/j.envsoft.2017.01.006, 2017.a
Toro, E. F.: Riemann solvers and numerical methods for fluid dynamics: a practical introduction, Springer Science & Business Media, https://doi.org/10.1007/b79761, 2013.a
Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., and Bengio, Y.: Graph attention networks, arXiv [preprint], https://doi.org/10.48550/arXiv.1710.10903, 2017.a
Villar, S., Hogg, D. W., Yao, W., Kevrekidis, G. A., and Schölkopf, B.: The passive symmetries of machine learning, arXiv [preprint], arXiv:2301.13724, https://doi.org/10.48550/arXiv.2301.13724,
2023. a
Vorogushyn, S., Merz, B., and Apel, H.: Development of dike fragility curves for piping and micro-instability breach mechanisms, Nat. Hazards Earth Syst. Sci., 9, 1383–1401, https://doi.org/10.5194/
nhess-9-1383-2009, 2009.a
Vreugdenhil, C. B.: Numerical methods for shallow-water flow, in: vol. 13, Springer Science & Business Media, ISBN 978-0-7923-3164-3, 1994.a, b
Wang, X., Chen, Y., and Zhu, W.: A survey on curriculum learning, IEEE T. Pattern Anal. Mach. Intel., 44, 4555–4576, https://doi.org/10.1109/TPAMI.2021.3069908, 2022.a
Xia, X., Liang, Q., Ming, X., and Hou, J.: An efficient and stable hydrodynamic model with novel source term discretization schemes for overland flow and flood simulations, Water Resour. Res., 53,
3730–3759, 2017.a
Yang, M., Isufi, E., and Leus, G.: Simplicial Convolutional Neural Networks, in: ICASSP 2022–2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 23–27 May 2022,
Singapore, Singapore, 8847–8851, https://doi.org/10.1109/ICASSP43922.2022.9746017, 2022.a
You, J., Ying, Z., and Leskovec, J.: Design space for graph neural networks, Adv. Neural Inform. Process. Syst., 33, 17009–17021, 2020.a
Zhang, A., Lipton, Z. C., Li, M., and Smola, A. J.: Dive into deep learning, arXiv [preprint], arXiv:2106.11342, https://doi.org/10.48550/arXiv.2106.11342, 2021.a
Zhou, Y., Wu, W., Nathan, R., and Wang, Q.: Deep learning-based rapid flood inundation modelling for flat floodplains with complex flow paths, Water Resour. Res., 58, e2022WR033214, https://doi.org/
10.1029/2022WR033214, 2022.a | {"url":"https://hess.copernicus.org/articles/27/4227/2023/hess-27-4227-2023.html","timestamp":"2024-11-04T11:45:51Z","content_type":"text/html","content_length":"410620","record_id":"<urn:uuid:4db9dcf1-8926-4241-9f70-a5271eceec0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00343.warc.gz"} |
The spin group in dimension $n = 7$.
Subgroups and Supgroups
(Varadarajan 01, Theorem 5 on p. 6)
(Varadarajan 01, Theorem 5 on p. 13)
We have the following commuting diagram of subgroup inclusions, where each square exhibits a pullback/fiber product, hence an intersection of subgroups:
Here in the bottom row we have the Lie groups
Spin(5)$\hookrightarrow$Spin(6) $\hookrightarrow$Spin(7) $\hookrightarrow$Spin(8)
with their canonical subgroup-inclusions, while in the top row we have
SU(2)$\hookrightarrow$SU(3) $\hookrightarrow$G₂ $\hookrightarrow$Spin(7)
and the right vertical inclusion $\iota'$ is one of the two non-standard inclusions, according to Prop. .
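A plain LaTeX rendering of the diagram described in the surrounding text (this only typesets what the prose already states; the vertical maps are the indicated subgroup inclusions, the rightmost being $\iota'$):

$$
\begin{array}{ccccccc}
SU(2) & \hookrightarrow & SU(3) & \hookrightarrow & G_2 & \hookrightarrow & Spin(7) \\
\big\downarrow & & \big\downarrow & & \big\downarrow & & \big\downarrow{\scriptstyle\;\iota'} \\
Spin(5) & \hookrightarrow & Spin(6) & \hookrightarrow & Spin(7) & \hookrightarrow & Spin(8)
\end{array}
$$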
Coset spaces
(e.g. Varadarajan 01, Theorem 3)
$G$-Structure and exceptional geometry
classification of special holonomy manifolds by Berger's theorem:
| | special holonomy | G-structure | dimension | preserved differential form |
|---|---|---|---|---|
| $\mathbb{C}$ | Kähler manifold | U(n) | $2n$ | Kähler form $\omega_2$ |
| $\mathbb{C}$ | Calabi-Yau manifold | SU(n) | $2n$ | |
| $\mathbb{H}$ | quaternionic Kähler manifold | Sp(n).Sp(1) | $4n$ | $\omega_4 = \omega_1\wedge \omega_1 + \omega_2\wedge \omega_2 + \omega_3\wedge \omega_3$ |
| $\mathbb{H}$ | hyper-Kähler manifold | Sp(n) | $4n$ | $\omega = a\,\omega^{(1)}_2 + b\,\omega^{(2)}_2 + c\,\omega^{(3)}_2$ ($a^2 + b^2 + c^2 = 1$) |
| $\mathbb{O}$ | Spin(7) manifold | Spin(7) | $8$ | Cayley form |
| $\mathbb{O}$ | G₂ manifold | G₂ | $7$ | associative 3-form |
• A. L. Onishchik (ed.) Lie Groups and Lie Algebras
□ I. A. L. Onishchik, E. B. Vinberg, Foundations of Lie Theory,
□ II. V. V. Gorbatsevich, A. L. Onishchik, Lie Transformation Groups
Encyclopaedia of Mathematical Sciences, Volume 20, Springer 1993
• Veeravalli Varadarajan, Spin(7)-subgroups of SO(8) and Spin(8), Expositiones Mathematicae Volume 19, Issue 2, 2001, Pages 163-177 (doi:10.1016/S0723-0869(01)80027-X, pdf)
[QSMS Monthly Seminar] Closed orbits and Beatty sequences
Date: June 24 (Friday), 2021
Place: Building 129
Title: Closed orbits and Beatty sequences
Speaker: Prof. 강정수
A long-standing open question in Hamiltonian dynamics asks whether every strictly convex hypersurface in R^{2n} carries at least n closed orbits. This was answered affirmatively in the non-degenerate
case by Long and Zhu in 2002. The aim of this talk is to outline their proof and to highlight its connection to partitioning the set positive integers.
Title: On the $\tilde{H}$-cobordism group of $S^1 \times S^2$'s
Speaker: Dr. 이동수
Kawauchi defined a group structure on the set of homology $S^1 \times S^2$'s under an equivalence relation called $\tilde{H}$-cobordism. This group receives a homomorphism from the knot concordance group,
given by the operation of zero-surgery. In this talk, we review knot concordance invariants derived from knot Floer complex, and apply them to show that the kernel of the zero-surgery homomorphism
contains an infinite rank subgroup generated by topologically slice knots.
LKG Maths Worksheets PDF Download
LKG Maths Worksheets Free Download: Are you searching for conceptualized worksheets for LKG Maths? We deliver Maths Worksheets for LKG kids every month at your doorstep. These Kindergarten Maths
Worksheets are designed and prepared by professionals to reinforce school-based concepts at home in a fun activity manner.
Our experts provide monthly LKG Maths Worksheets with lively content, puzzles, crosswords, and even more counting concepts. These Counting Worksheets for Kindergarten and free math printables help to
develop a strong academic foundation at home for your little one. So, let your children's fun learning journey start now with our free downloadable sample LKG Maths Worksheets.
Download Fun Learning Maths Worksheets for LKG Kids
Mathematics is an important subject for everyone, so it should be taught to kids from the kindergarten level onward. Several major maths concepts should be introduced between the ages of 3 and 5 years. Topics like comparison activities, pattern recognition activities, number recognition activities, and counting activities help children grasp things quickly and in an understandable way. Counting worksheets are among the first math worksheets that preschool and kindergarten kids will practice with.
With these Maths Kindergarten Activity Sheets, children will learn important skills such as identifying tall and short, large and small, sizes and shapes, sorting by shapes, and coin value identification. Hence, get these useful LKG Maths Worksheets PDF and free printables from the modules below and enhance your little one's maths skills.
Free Printable Worksheets for LKG Maths PDF Download
Cover all the basic topics of Maths with the help of our real-life activities and fun learning Maths Worksheets for LKG and boost up your little ones’ problem-solving skills. The Kindergarten Maths
Worksheets which are presented over here are taken as per the preschool syllabus only so kids can easily learn and sharpen their math skills.
These free printable kindergarten math worksheets help your kid master recognizing and writing numbers, counting, and comparing numbers. You can also find odd/even number and ordinal number activities in these Lower Kindergarten Maths Workbooks PDF. So, download maths worksheets for LKG kids from the quick links given on this page.
LKG Maths Worksheets PDF Download
Frequently Asked Questions on Worksheets for Kindergarten Maths
1. Where can I get free math worksheets for kindergarten?
You can get the Free Maths Kindergarten Worksheets Pdf from various websites or from our page absolutely without any charges.
2. What topics are covered under Lower Kindergarten Maths Worksheets?
With the help of LKG Maths fun learning activity Workbooks, kids can learn various concepts in an innovative manner. Hence, we have listed a few covered topics under LKG Maths Worksheets below for
your reference:
• Counting numbers
• Identify sizes and shapes
• Coin Value Identification
• Identifying Tall and Short
• Classify shapes by colors
• Identification of large and small
• Sorting by shapes
3. How can I download Free Printable LKG Worksheets for Maths?
Candidates can download Free Maths Kindergarten Worksheets by using the quick links available on our page and lay a stronger foundation of maths basics easily.
4. What is the cost of LKG Maths Activity Sheets?
We don’t charge a single penny on Kindergarten Maths Worksheets and they are absolutely free of cost.
What is the third quartile of 24, 20, 35, 43, 28, 36, 29, 44, 21, 37?
2 Answers
If you have a TI-84 calculator in hand:
You can follow these steps:
First put the numbers in order.
Then you press the stat button.
Then $\text{1:Edit}$ and go ahead and enter your values in order
After this press the stat button again and go to $\text{CALC}$
and hit $\text{1:1-Var Stats}$, then press calculate.
Then scroll down until you see ${Q}_{3}$.
That value is your answer :)
${Q}_{1} = 24 , {Q}_{3} = 37$
Arrange the data set in ascending order:
$20 \quad 21 \quad \textcolor{magenta}{24} \quad 28 \quad 29 \quad \textcolor{red}{\uparrow} \quad 35 \quad 36 \quad \textcolor{magenta}{37} \quad 43 \quad 44$
The median $\textcolor{red}{Q_2}$ is in the middle of the data set; in this case that is between 29 and 35, so find the average:
$\Rightarrow \textcolor{red}{Q_2} = \frac{29 + 35}{2} = 32$
The lower and upper quartiles divide the set to the left and right of the median into 2 equal parts:
$\Rightarrow \textcolor{magenta}{Q_1} = 24 \text{ and } \textcolor{magenta}{Q_3} = 37$
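A quick way to check this in R (a sketch; note that R's default quantile() uses type-7 interpolation and would give 36.75 for the third quartile here, so fivenum(), which computes Tukey's hinges and matches the hand method above, is used instead):

x <- c(24, 20, 35, 43, 28, 36, 29, 44, 21, 37)
fivenum(x)   # minimum, lower hinge (Q1), median, upper hinge (Q3), maximum
# 20 24 32 37 44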
Topics in Signal Processing
These notes aim to provide a solid mathematical background for a deeper understanding of signal processing problems.
Algebra covers basic definitions of sets, relations, functions and sequences. It introduces general Cartesian products and axiom of choice. The concept of function is somewhat different as a function
\(f: A \to B\) may not be defined at every point in \(A\).
Elementary Real Analysis introduces the real line \(\RR\) and explains the topology of real line in terms of neighborhoods, open sets, closed sets, interior, closure, boundary, accumulation points,
covers and compact sets. Sequences and series of real numbers are introduced. The extended real line \(\ERL\) is introduced which plays an important role in optimization problems. Basic properties of
real valued functions \(f: X \to \RR\) from an arbitrary set \(X\) to real line \(\RR\) are introduced. The graph, epigraph, sublevel and superlevel sets are defined. Real functions of type \(f: \RR
\to \RR\) are discussed and concepts of limits and continuity are formally defined. Differentiation is introduced from analytical point of view. Several fundamental inequalities on real numbers are
Metric Spaces covers the topology of metric spaces, sequences in metric spaces, functions and continuity, completeness, compactness. Real valued functions on metric spaces are discussed in detail
covering concepts of closed functions, semicontinuity, limit superior and inferior.
Linear Algebra covers vector spaces, matrices, linear transformations, normed linear spaces, inner product spaces, Banach spaces, Hilbert spaces, eigen value decomposition, singular values, affine
sets and transformations, and a plethora of related topics from both algebraic and analytical perspective.
Convex Sets and Functions provides an in-depth treatment of convex sets and functions. Hyperplanes, halfspaces, general convex sets, lines and line segments, rays, balls, convex hulls, orthants,
simplices, polyhedra, polytopes, ellipsoids are discussed. Convex cones, conic hulls, pointed cones, proper cones, norm cones, barrier cones are described. Dual cones, polar cones and normal cones
are explained. Generalized inequalities are introduced. Convexity preserving operations are described. Convex function and their properties are covered. Proofs are provided for the convexity of a
large number of convex functions. Proper convex functions, indicator functions, sublevel sets, closed and convex functions, support functions, gauge functions, quasi convex functions are explained
with their properties and examples. Topological properties of convex sets including closure, interior, compactness, relative interior, line segment property are covered. Different types of separation
theorems are proved. First order and second order conditions for convexity of differentiable functions are proved. Operations which preserve the convexity of functions are described. Continuity of
convex functions at interior and boundary points is discussed. Recession cones and lineality spaces are described. Directional derivatives for convex functions and their properties are covered.
Subgradients are introduced for functions which are convex but not differentiable. Conjugate functions are developed. Smoothness and strong convexity of convex functions is discussed. Infimal
convolution is introduced.
Convex Optimization develops the theory of optimization of convex functions under convex constraints. Topics covered include: basic definitions of unconstrained and constrained convex optimization
problems, projections on convex sets, directions of recession, min common max crossing (MCMC) duality framework, minimax theorems, saddle point theorem, stationary points, first order and second
order criteria for optimization of unconstrained functions, first order criteria for constrained optimization, descent directions methods, gradient method, gradient projection method, Farkas’ and
Gordan’s lemmas, KKT conditions for minimization of smooth functions under linear equality and inequality constraints, feasible directions, tangent cones, optimality conditions based on tangent
cones, minimization of smooth functions under smooth inequality and equality constraints, Fritz John and KKT conditions for the same, Lagrange multipliers, constraint qualifications, Lagrangian
duality, conjugate duality, Fenchel duality theorem, linear programming, quadratic programming.
Renormalization theory for the Fulde-Ferrell-Larkin-Ovchinnikov states at T > 0
Jakubczyk, P. (2017). Renormalization theory for the Fulde-Ferrell-Larkin-Ovchinnikov states at T > 0. Physical Review A, 95(6): 063626.
Abstract: Within the renormalization-group framework we study the stability of superfluid density wave states, known as Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) phases, with respect to thermal order-parameter fluctuations in two- and three-dimensional ($d \in \{2,3\}$) systems. We analyze the renormalization-group flow of the relevant ordering wave vector $\vec{Q}_0$. The calculation indicates an instability of the FFLO-type states towards either a uniform superfluid or the normal state in $d \in \{2,3\}$ and $T > 0$. In $d = 2$ this is signaled by $\vec{Q}_0$ being renormalized towards zero, corresponding to the flow being attracted either to the usual Kosterlitz-Thouless fixed point or to the normal phase. We supplement a solution of the RG flow equations by a simple scaling argument, supporting the generality of the result. The tendency to reduce the magnitude of $\vec{Q}_0$ by thermal fluctuations persists in $d = 3$, where the very presence of long-range order is immune to thermal fluctuations, but the effect of attracting $\vec{Q}_0$ towards zero by the flow remains observed at $T > 0$.
Language(s): eng - English
Dates: 2017
Publication Status: Issued
Rev. Type: Peer
Source 1
Title: Physical Review A
Source Genre: Journal
Publ. Info: COLLEGE PK : AMER PHYSICAL SOC
Volume / Issue: 95 (6) Sequence Number: 063626 Identifier: ISSN: 2469-9926
One Step Inequalities Worksheet Adding And Subtracting
Substitute 12 for y into 8 y 3. And i like to switch up my notation.
Maze Game To Practice One Step Inequalities Fun And Engaging Way For Students To Practice Their Under Math Games Middle School One Step Equations Math Games
This worksheet includes only addition or subtraction on the same side of the inequality as the variable.
One step inequalities worksheet adding and subtracting. Solving one step inequalities by adding and subtracting variable left side this video explains how to solve one step linear inequalities in one
variable with the variable on the left side. 8 12 3 8 9 the inequality is true. One step equation worksheets have exclusive pages to solve the equations involving fractions integers and decimals.
Students of grade 6 grade 7 and grade 8 are required to perform the addition and subtraction operation to solve the equations in just one step. You could also do your adding or subtracting below the
line like this. Color by code math mystery pictures.
So let s solve this the same way we solved that one over there. Solve the math problem look at the c. Each worksheet includes 10 unique math problems.
Videos worksheets stories and songs to help grade 6 students learn how to solve one step inequalities by adding and subtracting. Negative numbers decimals and fractions are included. If you re seeing
this message it means we re having trouble loading external resources on our website.
Substitute a solution from the shaded part of your number line into the original inequality. Here i added the 5 kind of on the same line. Some of the worksheets for this concept are solving one step
inequalities by addingsubtracting date one step inequalities date period solving graphing inequalities solving one step inequalities chapter 6 solving linear inequalities solving one step equations
additionsubtraction teaching.
Solve inequalities by adding and subtracting displaying top 8 worksheets found for this concept. Intermediate one step inequalities are graphed on a number line. 9 mystery pictures with 27 color by
number math worksheets.
Practice solving equations in one step by adding or subtracting the same value from both sides. All pictures have a colored answer key. We can subtract 15 from both sides.
Improve your math knowledge with free questions on solving one-step linear inequalities with addition and subtraction, and thousands of other math skills.
Solving one step inequalities with addition subtraction. Notice now we have greater than or equal.
What is the domain and range of $f(x) = 3x + 2$?
1 Answer
Domain: all of the real numbers.
Range: all of the real numbers.
Since the calculations are very easy, I'll just focus on what you actually have to ask yourself to solve the exercise.
Domain: the question you have to ask yourself is "which numbers my function will accept as an input?" or, equivalently, "which numbers my function will not accept as an input?"
From the second question, we know that there are some functions with domain issues: for example, if there is a denominator, you must be sure that it isn't zero, since you can't divide by zero. So,
that function wouldn't accept as input the values which annihilate the denominator.
In general, you have domain issues with:
• Denominator (cannot be zero);
• Even roots (they can't be computed for negative numbers);
• Logarithms (they can't be computed for negative numbers, or zero).
In this case, you have none of the three above, and so you have no domain issues. Alternatively, you could just see that your function picks a number $x$, multiplies it by $3$, and then adds $2$, and
of course you can multiply any number by $3$, and you can add $2$ to any number.
Range: now you should ask: which values can I obtain from my functions? I say that you can obtain every possible value. Let's say that you want to obtain a particular number $y$. So, you need to find
a number $x$ such that $3 x + 2 = y$, and the equation easily solves for $x$, with
$x = \frac{y - 2}{3}$.
So, if you choose any number $y$, I can tell you that it is the image of a particular $x$, namely $\frac{y - 2}{3}$, and again, this algorithm is ok for any $y$, you simply need to subtract $2$ and
then divide the whole thing by $3$, which again are operations you are always allowed to do.
Research interests
My research interests lie in the field of arithmetic geometry. Currently I am thinking about arithmetic properties of (supersingular) varieties and Drinfeld modules over finite fields and their
moduli spaces, (local-global principles for) rational points on varieties, anabelian geometry, and Galois representations.
Previously I also worked on (dynamical) Belyi maps, adelic algebraic groups, automorphic representations, and Galois representations attached to abelian varieties.
Keywords: Arithmetic geometry of varieties over finite fields, local-global principles, rational points, Galois representations, algebraic groups, anabelian geometry, arithmetic curves, zeta
functions, Neukirch-Uchida theory, abelian varieties.
18. Abelian varieties over finite fields with commutative endomorphism algebra: theory and algorithms, submitted, 2024 (ArXiV), with J. Bergström and S. Marseglia.
17. Supersingular Ekedahl-Oort strata and Oort's conjecture, 2024 (ArXiV), with C.-F. Yu.
16. Uniqueness of indecomposable idempotents in algebras with involution, submitted, 2024 (ArXiV), with A. Tamagawa and C.-F. Yu.
15. A survey of local-global methods for Hilbert's tenth problem, Women in Numbers Europe IV, Association for Women in Mathematics Series vol. 32, Springer, Cham, 2024 (ArXiV), with S. Anscombe, Z.
Kisakürek, V. Mehmeti, M. Pagano, and L. Paladino.
14. Isomorphism classes of Drinfeld modules over finite fields, Journal of Algebra, 664, 381-410, 2024 (journal, ArXiV), with J. Katen and M. Papikian.
13. When is a polarised abelian variety determined by its p-divisible group?, submitted, 2022 (ArXiV), with T. Ibukiyama and C.-F. Yu.
12. Polarizations of abelian varieties over finite fields via canonical liftings, IMRN, 2021 (journal, ArXiV), with J. Bergström and S. Marseglia.
11. Cubic function fields with prescribed ramification, International Journal of Number Theory, 17(9), 2019 - 2053, 2021 (journal, ArXiV), with S. Marques and J. Sijsling.
10. Mass formula and Oort's conjecture for supersingular abelian threefolds, Advances in Mathematics 386, 107812, 2021 (journal, ArXiV), with F. Yobuko and C.-F. Yu.
9. Restrictions on Weil polynomials of Jacobians of hyperelliptic curves, Arithmetic geometry, number theory, and computation, 259-276, Simons Symposia, Springer, 2021 (volume, ArXiV), with E. Costa,
R. Donepudi, R. Fernando, C. Springer, and M. West.
8. A comparison between obstructions to local-global principles over semiglobal fields, Abelian Varieties and Number Theory, 135-146, Contemporary Mathematics vol. 767, AMS, 2021 (volume, ArXiV),
with D. Harbater, J. Hartmann, and F. Pop.
7. Dynamical Belyi maps and arboreal Galois groups, manuscripta mathematica 165(1), 1-34, 2021 (journal, ArXiV), with I. Bouw and O. Ejder.
6. Fully maximal and fully minimal abelian varieties, Journal of Pure and Applied Algebra, 223(7), 3031-3056, 2019 (journal, ArXiV), with R. Pries.
5. Dynamical Belyi maps, Women in Numbers Europe 2, Association for Women in Mathematics Series, Springer, 2018 (volume, ArXiV), with J. Anderson, I. Bouw, O. Ejder, N. Girgin, and M. Manes.
4. Hecke algebras for GLn over local fields, Archiv der Mathematik, 107(4), 341-353, 2016 (journal, ArXiV).
3. Large Galois images for Jacobian varieties of genus 3 curves, Acta Arithmetica, 174(4), 339-366, 2016 (journal, ArXiV), with S. Arias-de-Reyna, C. Armana, M. Rebolledo, L. Thomas, and N. Vila.
2. Hecke algebra isomorphisms and adelic points on algebraic groups, Documenta Mathematica 22, 851-871, 2017 (journal, ArXiV), with G. Cornelissen.
1. Galois representations and symplectic Galois groups over Q, Women in Numbers Europe, Association for Women in Mathematics Series, Springer, 2015 (volume, ArXiV), with S. Arias-de-Reyna, C. Armana,
M. Rebolledo, L. Thomas, and N. Vila.
Expository publications
4. Geometry and arithmetic of moduli spaces of abelian varieties in positive characteristic, AWS 2024 lecture notes (available here)
3. Time Management, AMS Notices article (available here).
2. Thinking Positive: Arithmetic Geometry in Characteristic p, AMS Notices article (available here), with R. Bell, J. Hartmann, P. Srinivasan, and I. Vogt.
1. Reconstruction of function fields, notes (available here) for an anabelian geometry reading seminar talk at the Courant Institute, New York in Spring 2018.
Volumes edited
2. Women in Numbers Europe IV - Research Directions in Number Theory, Association for Women in Mathematics Series vol. 32, Springer, Cham, 2024 (Springer website)
1. Arithmetic, Geometry, Cryptography, and Coding Theory 2021, Contemporary Mathematics vol. 779, AMS, 2022 (AMS website)
PhD thesis
Hecke algebras, Galois representations, and abelian varieties (available here).
If you would like an updated version with fewer typos/mistakes, please send me an email.
A Background In Practical Plans For Best Math Learning Apps
This is an superior online math software that covers a range of areas in the subject like geometry, graphing, 3D, and much more. This can be a money counting sport math u see algebra reviews the
place youngsters can practice addition and achieve familiarity with cash by counting cash and payments of assorted denominations.
Math could be a dry and troublesome subject for youngsters to learn, nevertheless it would not have to be. One method to make math partaking is to help children understand how it has actual-world
application and to use video games, colorful worksheets, and enjoyable actions to help educate it.
Mathplanet is an English online math guide with concept, video lessons and counting workout routines with highschool math programs. Hooda Math uses Escape Games math u see reviews to assist introduce
math rules. While studying mathematics, they'll learn through interactive activities and enjoyable academic resources.
Rapid Methods In Math U See Reviews – Insights
Gamers can choose to do both addition and subtraction. They even present varied online and offline apps for explicit subject areas like spreadsheet and chance. In this recreation gamers select the
numbers wanted to add to the target sum. Brighterly has been very helpful in his lesson, making him a better math pupil at school.
Trouble-Free Solutions Of Math U See Geometry Reviews – Where To Go
MathJam is optimized to work on a broad range of units including telephones, tablets, and compact laptops, and is great for distant learning – college students can join from anywhere they have an
internet connection. Your youngster can have fun studying important reading and math expertise by way of exploration making it among the finest math websites for youths.
This game introduces young children to visual math. This is a free online endless number puzzle game where players must combine blocks of the same number to merge them into one block of the next greater number. In the core game players can choose to practice whole numbers, fractions, decimals, or play a combined game.
All in all, Math for Youngsters stands out for its variety of math exercises that can help youngsters practice their skills whereas having some fun math u see algebra reviews. Fun Mind has made it
very clear on their website that they do not acquire personally identifiable info from any of their math websites for teenagers.
That is sudoku inspired logic puzzle sport the place you need to fill squares with the fitting number throughout the time limit. Play every of the minigames individually in Apply Mode or problem your
self in Take a look at Mode. This is a simple 3D math guessing sport based on a well-liked math puzzle.
Players have 2 minutes to answer as many questions as they can. It is a fun and engaging speed math multiplication practice game for students. We provide our users a collection of free games and apps that mix learning with play in ways you've never seen before.
Youngsters of all ages can follow math abilities influenced by the Widespread Core State Requirements in a means that is fun. In my expertise, I anticipated math to be advanced for teenagers math u
see algebra reviews to be taught. That is an introductory number counting sport for kids where you rely what number of gadgets are there per query.
Play by way of all forty stages to hone your arithmetic and logic expertise. Each parent needs to stay on high of their kid’s studying journey, even after hiring the best online maths tutor. Your
kid, as quickly as she’s launched to our revolutionary and interactive math games.
This is an adding practice game for kids where they help a penguin move back and forth to catch marbles which add up to a numerical goal. A favorite of parents and teachers, Math Playground provides a safe place for kids to learn and explore math ideas at their own pace.
An invitation to quantum groups
Quantum groups refer to certain noncommutative algebras, with additional structures called comultiplication, antipode and braiding. Because of these structures, representation theory of quantum
groups is highly interesting in itself and has deep connections with other branches such as topology, geometry, mathematical physics and combinatorics.
In this talk, we review a construction of the simplest quantum group. We first define the quantum plane as a noncommutative deformation of the coordinate ring of the affine plane. The symmetry 'group' of the quantum plane should then be the quantum GL(2). Here the main ingredient is an R-matrix, which naturally endows the quantum group with a braiding, its most important structure.
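For orientation (this is standard background, not part of the abstract): the quantum plane is usually presented as the algebra
$k_q[x, y] \;=\; k\langle x, y \rangle \,/\, (x y - q\, y x),$
which recovers the ordinary coordinate ring of the affine plane at $q = 1$; conventions for the placement of $q$ vary between sources.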
Date & time: September 6 (Tue), 16:40-17:10
4 Practice: Introduction to clustering
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License
4.1 Introduction
The goal of single-cell transcriptomics is to measure the transcriptional states of large numbers of cells simultaneously. The input to a single-cell RNA sequencing (scRNAseq) method is a collection
of cells. Formally, the desired output is a transcript or genes (\(M\)) x cells (\(N\)) matrix \(X^{N \times M}\) that describes, for each cell, the abundance of its constituent transcripts or genes.
More generally, single-cell genomics methods seek to measure not just transcriptional state, but other modalities in cells, e.g., protein abundances, epigenetic states, cellular morphology, etc.
We will be analyzing the a dataset of Peripheral Blood Mononuclear Cells (PBMC) freely available from 10X Genomics. There are 2,700 single cells that were sequenced on the Illumina NextSeq 500. The
raw data can be found here.
4.2 Loading the data
You loaded 3 variables
• data a single-cell RNA-Sequencing count matrix
• cell_annotation a vector containing the cell-type of the cells
• var_gene_2000 an ordered vector of the 2000 most variable genes
Check the dimension of your data
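For instance (a minimal check; `data` is the count matrix loaded above, with genes as rows and cells as columns):

dim(data)   # number of genes (rows) and number of cells (columns)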
The scRNASeq data
The number of cell types
Naive CD4 T Memory CD4 T CD14+ Mono B CD8 T FCGR3A+ Mono
NK DC Platelet
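The cell-type listing above can be reproduced with something along these lines (a sketch, not the official chunk):

table(cell_annotation)            # number of cells per annotated cell type
length(unique(cell_annotation))   # number of distinct cell types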
Why do you think that we need a list of the 2000 most variable genes?
4.3 Distances
The clustering algorithms seen this morning rely on Gram matrices. You can compute the Euclidean distance matrices of data with the dist() function (but don’t try to run it on the 2000 genes)
The following code computes the cell-to-cell Euclidean distances for the 10 most variable genes and the first 100 cells
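The chunk itself is not displayed here; a sketch that matches the description (and the pattern of the next chunk) would be the following, assuming the tidyverse pipe is loaded as in the rest of the page (the object name c2c_dist is an arbitrary choice):

c2c_dist <- data[var_gene_2000[1:10], 1:100] %>%
  t() %>%      # put cells in rows
  dist()       # Euclidean cell-to-cell distances
summary(as.vector(c2c_dist))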
Use the following code to study the impact of the number of genes on the distances
What happens when the number of dimensions increases?
c2c_dist_n <- tibble(
  n_var = c(
    seq(from = 10, to = 200, by = 50),
    seq(from = 200, to = 2000, by = 500), 2000
  )
) %>%
  mutate(
    cell_dist = purrr::map(n_var, function(n_var, data, var_gene_2000) {
      data[var_gene_2000[1:n_var], 1:100] %>%
        t() %>%
        dist() %>%
        as.vector() %>%
        enframe() # assumed final step: turn the distances into a tibble with a `value` column, used by unnest()/aes() below
    }, data = data, var_gene_2000 = var_gene_2000)
  )
c2c_dist_n %>%
  unnest(c(cell_dist)) %>%
  ggplot() +
  geom_histogram(aes(x = value)) +
  facet_wrap(~n_var, scales = "free") # assumed: one panel per number of genes, so the histograms can be compared
`stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
To circumvent this problem we are going to use the PCA as a dimension reduction technique.
Use the prcomp() function to compute data_pca from the 600 most variable genes. You can check the results with the following code; the cell_annotation variable is the cell type label of each cell in the dataset:
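The check referred to above is not shown; a sketch of what it could look like (the transposition before prcomp() and the object name data_pca follow the conventions of the rest of the page; centring/scaling options are left at their defaults):

data_pca <- data[var_gene_2000[1:600], ] %>%
  t() %>%      # cells in rows, genes in columns
  prcomp()
data_pca$x %>%
  as_tibble() %>%
  mutate(cell_type = cell_annotation) %>%
  ggplot() +
  geom_point(aes(x = PC1, y = PC2, color = cell_type))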
You can use the fviz_eig() function to choose the number of dimensions that you are going to use to compute your distance matrix. Save this matrix in the data_dist variable.
Check the variability explained by the axes of the PCA
Compute the distance matrix on the first 2 PCs.
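A possible answer sketch for the two prompts above (fviz_eig() comes from the factoextra package; the choice of 2 PCs follows the prompt):

fviz_eig(data_pca)                          # scree plot: variance explained per PC
data_dist <- data_pca$x[, 1:2] %>% dist()   # cell-to-cell distances in PC space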
4.4 Hierarchical Clustering
The hclust() function performs a hierarchical clustering analysis from a distance matrix.
You can use the plot() function to plot the resulting dendrogram.
Too much detail can drown the information; the cutree() function can help you solve this problem.
Which choice of k would you take?
Modify the following code to display your clustering.
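One possible end-to-end sketch for this subsection (the object names data_hclust and data_hclust_cluster, and the choice k = 9 — one cluster per annotated cell type — are assumptions, not the official solution):

data_hclust <- hclust(data_dist)
plot(data_hclust, labels = FALSE)   # dendrogram; labels hidden for readability
data_hclust_cluster <- cutree(data_hclust, k = 9)
data_pca$x %>%
  as_tibble() %>%
  mutate(cluster = as.factor(data_hclust_cluster)) %>%
  ggplot() +
  geom_point(aes(x = PC1, y = PC2, color = cluster))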
The adjusted Rand index can be computed to compare two classifications. This index has an expected value of zero in the case of random partitions, and it is bounded above by 1 in the case of perfect
agreement between two partitions.
Use the adjustedRandIndex() function to compare the cell_annotation to your hierarchical clustering
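For example (adjustedRandIndex() typically comes from the mclust package; data_hclust_cluster is the cutree() output from the sketch above):

adjustedRandIndex(data_hclust_cluster, cell_annotation)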
Modify the following code to study the relation between the adjusted Rand index and the number of PCs used to compute the distance
What can you conclude?
tibble(
  n_pcs = seq(from = 3, to = 100, by = 10)
) %>%
  mutate(
    ari = purrr::map(n_pcs, function(n_pcs, data_pca, cell_annotation) {
      data_pca$x[, 1:n_pcs] %>%
        dist() %>%
        hclust() %>%
        cutree(k = 9) %>%
        adjustedRandIndex(cell_annotation) # assumed final step: compare the clustering to the annotation
    }, data_pca = data_pca, cell_annotation = cell_annotation)
  ) %>%
  unnest(ari) %>%
  ggplot() +
  geom_line(aes(x = n_pcs, y = ari))
tibble(
  n_gene = seq(from = 3, to = 600, by = 20)
) %>%
  mutate(
    ari = purrr::map(n_gene, function(n_gene, data, var_gene_2000, cell_annotation) {
      data[var_gene_2000[1:n_gene], ] %>%
        t() %>%
        dist() %>%
        hclust() %>%
        cutree(k = 9) %>%
        adjustedRandIndex(cell_annotation) # assumed final step, as above
    }, data = data, var_gene_2000 = var_gene_2000, cell_annotation = cell_annotation)
  ) %>%
  unnest(ari) %>%
  ggplot() +
  geom_line(aes(x = n_gene, y = ari))
4.5 k-means clustering
The kmeans() function performs a k-means clustering analysis; unlike hclust(), it takes the data matrix itself (here, the PC coordinates) rather than a precomputed distance matrix.
Why is the centers parameter required for kmeans() and not for the hclust() function?
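A sketch (the choice of 9 centers and of the first 2 PCs is carried over from earlier sections, not prescribed here; nstart > 1 makes the result less dependent on the random initialisation):

data_kmeans <- kmeans(data_pca$x[, 1:2], centers = 9, nstart = 10)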
We want to compare the cell annotation to our clustering.
Using the str() function to explore the data_kmeans result, make the following plot from your k-means results.
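The plot itself is not reproduced here; a sketch of how it can be built from the data_kmeans object (its $cluster element holds the cluster assignment of each cell):

str(data_kmeans)
data_pca$x %>%
  as_tibble() %>%
  mutate(cluster = as.factor(data_kmeans$cluster)) %>%
  ggplot() +
  geom_point(aes(x = PC1, y = PC2, color = cluster))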
Use the adjustedRandIndex() function to compare the cell_annotation to your k-means clustering.
Maybe the real number of clusters in the PCs data is not \(k=9\). We can use different metrics to evaluate the effect of the number of clusters on the quality of the clustering.
• WSS: the Within-Cluster-Sum of Squared Errors (each point vs the centroid) for different values of \(k\), \(\sum_{i=1}^n (x_i - c_i)^2\)
• The silhouette value measures how similar a point is to its own cluster (cohesion) compared to other clusters (separation): \(s(i) = \frac{b(i) - a(i)}{\max\{a(i),\, b(i)\}}\), with \(a(i)\) the mean distance between \(i\) and the cells in the same cluster and \(b(i)\) the smallest mean distance between \(i\) and the cells of any other cluster. We plot the average \(\frac{1}{n}\sum_{i=1}^n s(i)\).
Use the fviz_nbclust() function to plot these two metrics as a function of the number of clusters.
For the total within-cluster-sum of squared errors
For the average silhouette width
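A sketch covering the two prompts above (fviz_nbclust() is from factoextra; using the first 2 PCs and kmeans as the clustering function are assumptions):

fviz_nbclust(data_pca$x[, 1:2], kmeans, method = "wss")
fviz_nbclust(data_pca$x[, 1:2], kmeans, method = "silhouette")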
With fviz_nbclust() you can make the same analysis for your hierarchical clustering. The function hcut() allows you to perform hclust() and cutree() in one step.
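For instance, with hcut() (also from factoextra) as the clustering function:

fviz_nbclust(data_pca$x[, 1:2], hcut, method = "silhouette")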
Explain the discrepancy between these results and \(k=9\)
4.6 Graph-based clustering
We are going to use the cluster_louvain() function to perform graph-based clustering. This function takes as input an undirected graph instead of a distance matrix.
The nng() function computes a k-nearest neighbor graph. With the mutual = T option, this graph is undirected.
Check the effect of mutual = T on data_knn with the following code.
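The chunk referred to above is not shown; a sketch of what it could contain (nng() is from the cccd package and returns an igraph object; the choice of k = 30 neighbours and of the first 2 PCs is arbitrary, for illustration only):

data_knn <- nng(data_pca$x[, 1:2], k = 30, mutual = TRUE)
igraph::is_directed(data_knn)                                        # expected FALSE: the mutual k-NN graph is undirected
igraph::is_directed(nng(data_pca$x[, 1:2], k = 30, mutual = FALSE))  # expected TRUE without mutual = TRUE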
The cluster_louvain() function implements the multi-level modularity optimization algorithm for finding community structure in a graph. Use this function on data_knn to create a data_louvain
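For example (cluster_louvain() is from igraph; the resolution argument, available in recent igraph versions, is the knob referred to in the question below):

data_louvain <- cluster_louvain(data_knn, resolution = 1)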
You can check the clustering results with membership(data_louvain).
For which resolution value do you get 9 clusters?
Use the adjustedRandIndex() function to compare the cell_annotation to your graph-based clustering.
4.7 Graph-based dimension reduction
Uniform Manifold Approximation and Projection (UMAP) is an algorithm for dimensionality reduction. Its details are described by McInnes, Healy, and Melville and its official implementation is available through the Python package umap-learn.
data_umap <- umap(data_pca$x[, 1:10])
data_umap$layout %>%
as_tibble(.name_repair = "universal") %>%
mutate(cell_type = cell_annotation) %>%
ggplot() +
geom_point(aes(x = ...1, y = ...2, color = cell_type))
New names:
• `` -> `...1`
• `` -> `...2`
What can you say about the axes of this plot?
The .Rmd file corresponding to this page is available here under the AGPL3 Licence
4.8 Implementing your own \(k\)-means clustering algorithm
You are going to implement your own \(k\)-means algorithm in this section. The \(k\)-means algorithm iterates the following two steps:
• Assign each point to the cluster with the nearest centroid
• Compute the new cluster centroids
Justify each of your functions.
Think about the starting state of your algorithm and the stopping condition
Start by implementing a kmeans_initiation(x, k) function, returning \(k\) centroids as a starting point.
Implement a compute_distance(x, centroid) function that computes the distance of each point (row of x) to each centroid.
Implement a cluster_assignment(distance) function returning the assignment of each point (row of x), based on the results of your compute_distance(x, centroid) function.
Implement a centroid_update(x, cluster, k) function returning the updated centroid for your clusters.
Implement a metric_example(x, k) function to compute a criterion for the goodness of your clustering.
Implement a kmeans_example(x, k) function wrapping everything together, and test it.
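Below is one possible sketch of such a solution, not the official one. The function names follow the prompts above; for simplicity, metric_example takes the fitted clustering rather than k (a small deviation from the prompt), and empty clusters are not handled.

kmeans_initiation <- function(x, k) {
  # pick k distinct rows of x at random as the starting centroids
  x[sample(nrow(x), k), , drop = FALSE]
}

compute_distance <- function(x, centroid) {
  # Euclidean distances: one row per point, one column per centroid
  apply(centroid, 1, function(mu) sqrt(rowSums(sweep(x, 2, mu)^2)))
}

cluster_assignment <- function(distance) {
  # index of the nearest centroid for each point
  apply(distance, 1, which.min)
}

centroid_update <- function(x, cluster, k) {
  # new centroid = mean of the points currently assigned to each cluster
  t(sapply(seq_len(k), function(j) colMeans(x[cluster == j, , drop = FALSE])))
}

metric_example <- function(x, cluster, centroid) {
  # within-cluster sum of squares (the smaller, the tighter the clusters)
  sum((x - centroid[cluster, , drop = FALSE])^2)
}

kmeans_example <- function(x, k, max_iter = 100) {
  centroid <- kmeans_initiation(x, k)
  cluster <- rep(0L, nrow(x))
  for (i in seq_len(max_iter)) {
    new_cluster <- cluster_assignment(compute_distance(x, centroid))
    if (all(new_cluster == cluster)) break   # stopping condition: assignments no longer change
    cluster <- new_cluster
    centroid <- centroid_update(x, cluster, k)
  }
  list(cluster = cluster, centers = centroid,
       wss = metric_example(x, cluster, centroid))
}

# quick test on the first two principal components
res <- kmeans_example(data_pca$x[, 1:2], k = 9)
table(res$cluster, cell_annotation)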
Probing the interplay between geometric and electronic-structure features via high-harmonic spectroscopy
We present molecular-frame measurements of the recombination dipole matrix element (RDME) in CO₂, N₂O, and carbonyl sulfide (OCS) molecules using high-harmonic spectroscopy. Both the amplitudes
and phases of the RDMEs exhibit clear imprints of a two-center interference minimum, which moves in energy with the molecular alignment angle relative to the laser polarization. We find that whereas
the angle dependence of this minimum is consistent with the molecular geometry in CO₂ and N₂O, it behaves very differently in OCS; in particular, the phase shift which accompanies the two-center
minimum changes sign for different alignment angles. Our results suggest that two interfering structural features contribute to the OCS RDME, namely, (i) the geometrical two-center minimum and (ii) a
Cooper-like, electronic-structure minimum associated with the sulfur end of the molecule. We compare our results to ab initio calculations using time-dependent density functional theory and present
an empirical model that captures both the two-center and the Cooper-like interferences. We also show that the yield from unaligned samples of two-center molecules is, in general, reduced at high
photon energies compared to aligned samples, due to the destructive interference between molecules with different alignments.
Recent years have seen rapid development in the use of high-harmonic spectroscopy (HHS) as a means to study molecular structure and dynamics at the space and time scales of the electron.^1–6 HHS
relies on the process of high-harmonic generation (HHG) in which a natural attosecond time scale is defined by the periodic motion of an electron in a molecule through tunnel ionization and
subsequent recombination in each half cycle of a driving infrared/optical field.^7–10 Based on the idea that both molecular structure and dynamics may render the inverse-photoionization process of
recombination time- and space-dependent, HHS attempts to characterize the complex recombination dipole matrix element (RDME) via detailed measurement of the spectral properties of the emitted extreme
ultraviolet (XUV) radiation.^3,11,12 While it is only the combined measurement of both the spectral intensity and phase, in the molecular frame, which can fully characterize the RDME, most HHS
studies have characterized only the spectral intensity, with only a few also including the spectral phase measurements.^2,13–15
A purely structural effect that can be characterized using HHS, and that manifests in both the spectral intensity and phase, is the so-called two-center interference (TCI). TCI occurs in small
molecules where the highest-occupied molecular orbital (HOMO) is predominantly composed of two centers of charge density. The spectral minimum that results from destructive interference between
recombination to both sites has been experimentally and theoretically studied in a number of systems.^1,2,4,16–18 Conceptually, thinking of the recolliding electron as a plane wave scattering on
either center, the TCI condition reads^19
$k_e R \cos(\theta) + \Delta\Phi(k_e, \theta) = (2m + 1)\,\pi, \qquad (1)$
for any integer m, as illustrated in Fig. 1. Here, $k_e = 2\pi/\lambda_e$ is the electron wave number, $\lambda_e$ is the de Broglie wavelength, R is the effective distance between the two centers of electron
density (the lobes in Fig. 1), and θ is the rescattering angle with respect to the molecular axis. The structural contribution ΔΦ is constant (π) for the symmetric CO₂ molecule where the two charge
centers have opposite phase, and its variation with energy and angle accounts for asymmetric contributions, e.g., due to an imbalance between the two centers like in OCS, or the
effect of a permanent dipole moment like in N₂O.^20 Although Eq. (1) is only approximate because additional terms as well as amplitude variations can yield additional contributions to the RDME, it
provides a useful qualitative approach to the behavior of the TCI minimum. In particular, for symmetric molecules, Eq. (1) predicts the “geometrical” expectation for the location of the TCI minimum.
Because the TCI minimum relies heavily on the destructive interference from the two centers, it is a highly sensitive probe of the RDME in the vicinity of the minimum.
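Spelling out the symmetric case for orientation (this is the standard plane-wave reduction, not a statement from the measured data): with $\Delta\Phi = \pi$, as for CO₂, the condition in Eq. (1) reduces to the familiar geometric statement that destructive interference occurs when the projected distance between the two centers spans an integer number of de Broglie wavelengths,
$R\cos(\theta) = m\,\lambda_e \quad\Longleftrightarrow\quad k_e R \cos(\theta) = 2\pi m .$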
For a molecule in which $\Delta\Phi(k_e, \theta)$ is nontrivial, however, the interplay between the geometric $k_e R \cos(\theta)$ term and the $\Delta\Phi(k_e, \theta)$ term is more complex. One such molecule is OCS. The HOMO of OCS
can be thought of, in the linear combination of atomic orbitals (LCAO) representation, as the combination of a sulfur 3p orbital and an oxygen 2p orbital.^21 It is known that molecules that retain
atomic 3p character, such as sulfur- and chlorine-containing molecules, demonstrate features analogous to a Cooper minimum,^15,22–24 i.e., a minimum in the spectral intensity accompanied by a
spectral phase jump. Indeed, it has been shown that the photoionization cross section of the OCS HOMO drops significantly above 35 eV and stays low until at least 100 eV.^25 The only previous HHS
study of OCS was unable to characterize this phenomenon due to their reduced harmonic energy cutoff at near-infrared (NIR) wavelengths and the very small absorption cross section of OCS above 35 eV.^
In this article, we present multidimensional, molecular-frame measurements of the spectral intensity and group delay (GD) of TCI in CO₂, N₂O, and OCS, over a broad range of energies, using
tunable midinfrared (MIR) drivers for HHG and impulsive molecular alignment to fix the molecular frame. We compare the experimental measurements to results of time-dependent density functional theory
(TDDFT) simulations for CO₂ and OCS and interpret our results using a conceptual model that includes both TCI and Cooper-like features. We demonstrate that the OCS spectral intensity and GD
features in the vicinity of the TCI minima differ significantly from those of CO₂ and N₂O. In particular, whereas CO₂ and N₂O exhibit primarily geometric TCIs, OCS also exhibits clear signs
of the Cooper-like feature which adds to and interferes with the geometric contribution. We also show that the harmonic yields from unaligned samples of all three molecules are reduced at high
energies compared to the aligned samples. We interpret this general feature in terms of destructive interference between the signals from molecules at different angles which have a TCI phase jump in
different energy regions.
The calculations in this paper represent the first large-scale exploration of TDDFT as a means to identify structural features in molecular HHG, in particular, through the combination of amplitude
and phase studies.^27–30 We find, in general, that the TDDFT calculations capture the geometric TCI feature well but not the Cooper-like feature in OCS, the location of which is extremely sensitive
to the details of the short-range molecular potential.^31 TDDFT is among the few ab initio methods that can handle the full subcycle dynamics of a multielectron system exposed to a strong field (for
others, see Refs. 32 and 33) and represents a different approach to HHS calculations than those based on stepwise combinations of ionization, electron dynamics, and rescattering.^3,5,12,34–37 Recent
work showing that TDDFT gives a good description of attosecond charge migration as long as the initial condition is well defined,^38,39 combined with our finding here that TDDFT recovers the
geometric TCI features, bodes well for TDDFT as a potential tool to explore charge migration through HHS.
This article is structured as follows: Section II details the experimental apparatus and procedures used to measure HHG spectral intensities and GD. Additionally, this section describes the TDDFT
methods and a physical model used to explain the experimental observations. Section III describes our experimental and theoretical results and their interpretations. Finally, we summarize our results
in Sec. IV.
A. Experimental methods
In these measurements, we utilize a commercial tunable, 1 kHz, MIR optical parametric amplifier (OPA) (HE-TOPAS by Light Conversion), pumped by a 785 nm, 55 fs pump pulse. We use this OPA for two
purposes. First, we use the MIR OPA to generate 75 fs pulses centered around 1300 nm, each with 1 mJ of pulse energy for HHG. Second, we employ the depleted pump, which has 2 mJ of available pulse
energy after the OPA, for molecular alignment. Using MIR wavelengths for HHG produces an extended cutoff when compared to the previous NIR measurements,^26 critical for these low ionization potential
(Ip) molecules where ground state depletion limits the maximum driving intensity. Additionally, longer driving wavelengths provide finer sampling in our GD measurements. Using the depleted pump for
molecular alignment allows us to keep nearly constant alignment conditions while tuning the HHG wavelength, important for our GD measurement method to be described shortly. The power and spot sizes
of each beam can be adjusted with variable apertures.
The measurement apparatus consists of a Mach-Zehnder interferometer and a 1 m magnetic bottle electron spectrometer (MBES), all of which is entirely contained in vacuum. The majority of the MIR is
split to be focused with an f = 400 mm focal mirror into the HHG gas source, leading to an intensity of approximately (1 ± 0.2) × 10^14 W/cm^2 for all molecules. A small portion of the MIR driver is
retained to be recombined with the XUV light. Due to the low vapor pressure of OCS, we seed it at 10% in a helium buffer gas in order to minimize the effects of clustering while also achieving gas
densities required for proper phase-matching. Doing so requires the use of a 200-μm-nozzle-diameter Even-Lavie pulsed gas valve. Backing pressures between 15 and 20 bars were used with opening times
between 23 and 30 μs, depending on day-to-day pulsed valve operation. We have confirmed that no significant XUV light is generated from the helium carrier gas, by testing a neat (undiluted) helium
sample under the same generating conditions. CO₂ and N₂O, however, are delivered neat and with a continuous 200-μm diameter gas nozzle, using typical backing pressures of 0.5 bar. The light
emerging from the generated gas is propagated through a 200 nm aluminum filter in order to remove the remaining IR field, and then, the XUV is refocused by an f = 750 mm toroidal mirror into the MBES
using a 2f-2f configuration.
The remaining MIR light is recombined with the XUV on a hole mirror and then spatiotemporally overlapped, with variable delay, in a neon gas jet in the MBES for photoelectron time-of-flight
spectroscopy. With this apparatus, we are able to record photoelectron spectral intensities that are proportional to the XUV spectral intensities, and using the reconstruction of attosecond beating
by interference of two-photon transitions (RABBITT) method, we are able to measure the XUV GD.^40 After retrieving the GD, the delays from the aluminum filter, neon detection gas, and HHG attochirp
are removed. This retrieval process, which is discussed in more detail by Scarborough et al.,^15 works well in the energy range of 28–60 eV. Below ≈28 eV, the neon detection gas atomic delay
calculation introduces systematic artifacts, and above 60 eV, the signal-to-noise ratio is too low to retrieve accurate GD results.
RABBITT provides a discrete sampling of the XUV GD with data points separated by 2 ℏω in energy, where ω is the frequency of the IR driving field; for wavelengths near 1300 nm, the sampling is
approximately every 2 eV. This means that features that are on the order of or smaller than this become difficult to accurately characterize. To this end, we have developed a wavelength-scanning
technique in which we tune the HHG driving wavelength (and thus the harmonic comb) and record RABBITT traces for each wavelength. We then combine the different measurements to produce a finely
sampled measurement of the XUV GD. A more in-depth explanation of the method and its applicability is provided in Appendix A.
For molecular-frame measurements, the alignment, or “kick” pulse, was propagated coparallel with the HHG driver and focused into the HHG gas source. The HHG pulse was then delayed relative to the
kick pulse so that it was temporally overlapped with the half-revival of each molecule. For fine-tuning, a half waveplate was inserted into the kick beam at a fixed temporal delay to control the
polarization of the kick relative to the HHG beam for angle-resolved amplitude and GD measurements. We are unable to comment quantitatively on the degree of molecular alignment, as the measurement
through HHG is not strictly linearly proportional to the degree of alignment. For our purposes, the shape of the distribution is narrow enough to discern significant changes in the amplitudes and
group delays with 22.5° steps.
B. Theoretical methods
We calculate the angle-dependent harmonic spectral intensity and phase for CO₂ and OCS molecules using TDDFT as implemented in the software package Octopus.^41 As described in more detail below,
the driving laser field is linearly polarized and the molecular axis is oriented at angle θ relative to the laser polarization. We integrate the Kohn-Sham equations on a spatial grid and apply the
local density approximation with average density self-interaction correction (LDA-ADSIC) for the exchange-correlation potential.^42 The calculations are converged using a time step of 0.05 a.u. and a
grid spacing of 0.4 a.u., and the grid spans a rectangular box with dimensions of 390 × 60 × 60 a.u. centered on the molecule and with the longest dimension along the laser polarization direction. In
our simulations, we find Ip = 14.55 eV for CO[2] and Ip = 11.67 eV for OCS (experimental values are 13.77 eV and 11.17 eV, respectively^43).
A computational challenge for all single-molecule calculations is the inherent interference between the contribution of multiple quantum paths in the HHG process.^44 In experiments, macroscopic
effects naturally select short trajectories.^45–47 In the calculations, this interference is particularly detrimental when trying to extract target-specific information from the spectral intensity
and phase. To this end, we use an ionization seed in combination with the intense MIR field, in the form of an attosecond pulse train (APT) synthesized from odd harmonics of the MIR laser (harmonics
9–17 for CO[2] and 7–15 for OCS). The APT dominates the ionization step in the HHG process and its subcycle timing is such that it strongly enhances the short quantum path contribution, thereby
suppressing effects of multipath interferences. The calculations shown in this paper are all performed with a 6 × 10^13 W/cm^2, 1500 nm MIR field, and APT intensities of 1.2 × 10^12 W/cm^2 for CO[2]
and 6 × 10^11 W/cm^2 for OCS. The MIR and APT intensities and wavelengths are chosen such that, although the APT dominates the ionization step, it has as small of an effect as possible on the
subsequent electron dynamics, and the calculations span a similar range of harmonic energies to the experimental results. The pulse intensity is ramped up with a sin^2 shape over 2 laser cycles. The
carrier phase is such that the instantaneous field is zero at the end of the ramp-up.
We calculated the harmonic spectral intensity and phase from the Fourier transform $F$ of the time-dependent acceleration signal a(t), after applying a window function W(t). The window function helps
us to further clean up the signal and suppress contributions from quantum paths longer than one cycle. W(t) has a cos^4(t) shape and selects the emission from the first half-cycle after the laser
reaches its maximum intensity. For the spectral intensity, we include components oscillating both parallel and perpendicular to the laser polarization and thus show

$S(\nu;\theta)=\left|\mathcal{F}\!\left[W(t)\,a_{\parallel}(t)\right]\!(\nu)\right|^{2}+\left|\mathcal{F}\!\left[W(t)\,a_{\perp}(t)\right]\!(\nu)\right|^{2},$

where ν is the harmonic frequency. To calculate HHG spectra from aligned rather than oriented OCS, we average the dipole-acceleration signal over angles θ and θ + π. This is not necessary for the
signal from the symmetric CO[2] molecule.
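As an illustration of this post-processing step, a minimal numerical sketch is given below; the acceleration arrays, time grid, and window timing are placeholders rather than output of the actual calculations, and the normalization of the Fourier transform is arbitrary.

import numpy as np

def hhg_spectrum(t, a_par, a_perp, t0, half_cycle):
    """Windowed Fourier transform of the dipole acceleration.

    t            : time grid
    a_par, a_perp: acceleration components parallel/perpendicular to the laser
    t0           : start of the selected half-cycle after the intensity maximum
    half_cycle   : its duration; a cos^4 window selects emission from this interval
    """
    window = np.where((t >= t0) & (t <= t0 + half_cycle),
                      np.cos(np.pi * (t - t0 - half_cycle / 2) / half_cycle) ** 4,
                      0.0)
    dt = t[1] - t[0]
    spec_par = np.fft.rfft(window * a_par) * dt
    spec_perp = np.fft.rfft(window * a_perp) * dt
    freq = np.fft.rfftfreq(len(t), d=dt)  # ordinary frequency; multiply by 2*pi for angular
    intensity = np.abs(spec_par) ** 2 + np.abs(spec_perp) ** 2
    return freq, intensity, spec_par, spec_perp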
To extract the target-specific spectral GD, similar to what is done in quantitative rescattering theory, we factorize the harmonic signal in the frequency domain^12,37,48

$\mathrm{HHG}(\nu;\theta)=\Gamma(\theta)\,\mathrm{HHG}_{\mathrm{ref}}(\nu)\,\sigma(\nu;\theta)\,e^{i\phi(\nu;\theta)},$

where Γ is the energy-independent ionization yield, the "ref" subscripts label a generic "reference" system, and σ and ϕ are the target-specific scattering cross section amplitude and phase,
respectively. From Eq. (3), we extract the target-specific phase and GD, respectively, as
$\phi(\nu;\theta)=\arg\!\left[\frac{\mathrm{HHG}(\nu;\theta)}{\mathrm{HHG}_{\mathrm{ref}}(\nu)}\right]\qquad\text{and}\qquad\mathrm{GD}=-\frac{\partial\phi}{\partial\nu}.$
For all TDDFT spectral analyses reported in this paper, our reference consists of a single-active-electron time-dependent Schrödinger equation calculation for a one-dimensional atom with a matching
Ip interacting with an identical laser field, including the APT seed. Similar results were obtained using a matching two-dimensional reference. The reference calculations thus include both (i)
generic features associated with the long-range tail of the Coulomb potential and (ii) systematic features associated with the seed. In the total GD, the contributions from the parallel and perpendicular components are weighted by their spectral intensities.
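A sketch of this extraction step is given below; the variable names are illustrative, and the use of a simple numerical derivative over the frequency grid is our assumption rather than a description of the actual analysis code.

import numpy as np

def target_phase_and_gd(freq, spec_par, spec_perp, spec_ref):
    """Target-specific phase and GD relative to a reference spectrum.

    Follows the factorization of Eq. (3): phi = arg(HHG/HHG_ref), GD = -dphi/dnu
    [sign convention of Eq. (4)], with the parallel and perpendicular GDs weighted
    by their spectral intensities.
    """
    weights, gds = [], []
    for spec in (spec_par, spec_perp):
        phi = np.unwrap(np.angle(spec / spec_ref))
        weights.append(np.abs(spec) ** 2)
        gds.append(-np.gradient(phi, freq))
    w_par, w_perp = weights
    total = w_par + w_perp
    gd_total = (w_par * gds[0] + w_perp * gds[1]) / np.where(total > 0, total, 1.0)
    return gd_total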
A. Molecular-frame measurements—Spectral intensity
We start by considering the angle dependence of the TCI minima in the spectral intensity, as shown for all three molecules in Fig. 2. Each spectrogram shows the results of a delay scan between the
HHG and kick pulses across the half revival, allowing us to record the HHG spectrum as the molecular angle evolves from −90° to 0° to +90°. Asymmetries between positive and negative angles are
attributed to the evolution of rotational wave packets at different probe delays. The delay scans have been smoothed with a Savitzky-Golay filter along the delay dimension to remove step-to-step
noise, and the aluminum filter transmission and neon detection gas cross section have been removed. For CO[2] and N[2]O, we observe similar TCI features, namely, that the interference minimum reaches
its lowest energies at 0° and moves to higher energies as θ is increased. This is qualitatively consistent with the geometric expectation from Eq. (1), which predicts an evolution following I[p] + α/
cos^2θ, where α is a constant of proportionality. Remember that Eq. (1) explicitly states that k[e] ∝ 1/ cos θ, whereas the results in Fig. 2 are plotted as a function of emitted photon energy, which
is proportional to k[e]^2; hence, the experimental results are expected to be proportional to 1/cos^2θ. The 0° location of the minimum is slightly higher in N[2]O than in CO[2]. This is consistent
with a smaller effective lobe separation in N[2]O for which the lobes are centered on the oxygen and the N–N bond, whereas in CO[2], they are centered on the two oxygen ends. A similar geometric TCI
behavior is also present for OCS in Fig. 2(c), with a weak minimum starting at 35 eV at 0° and moving to higher energies as θ increases [see also Fig. 3(c)]. However, this behavior is much harder to
recognize in OCS since the spectrum is dominated by a deep minimum near 43 eV for angles up to about 45°.
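To make the geometric expectation quoted above concrete, the sketch below evaluates E[min](θ) = I[p] + α/cos^2θ for CO[2], using the experimental I[p] given earlier and an α chosen so that the 0° minimum falls near 44 eV (the value later used in the conceptual model); the printed numbers are purely illustrative.

import numpy as np

IP_CO2 = 13.77          # experimental ionization potential quoted above (eV)
ALPHA = 44.0 - IP_CO2   # places the 0-degree minimum near 44 eV

def tci_minimum_planewave(theta_deg):
    """Plane-wave geometric expectation, E_min = Ip + alpha / cos^2(theta)."""
    return IP_CO2 + ALPHA / np.cos(np.radians(theta_deg)) ** 2

for angle in (0.0, 22.5, 45.0, 60.0):
    print(f"{angle:5.1f} deg -> {tci_minimum_planewave(angle):6.1f} eV")
# The measured minima move to higher energy more slowly than this 1/cos^2 scaling.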
The different angle dependence of the spectral yields for the three molecules is explored further in Figs. 3(a)–3(c). These spectra were recorded at a fixed delay corresponding to 0° while rotating
the polarization of the kick pulse to different values of θ. This is a different method of rotating the molecules compared to that used in Fig. 2, which acts as a point of comparison and allows us to
identify systematic errors in either of the spectral intensity measurement methods. All yields have been normalized by the near-featureless 90° values. Figure 3 shows, in agreement with the
observations in Fig. 2, that CO[2] (a) and N[2]O (b) experience their deepest minima at 0°, with the minimum moving to higher energies for larger angles. Conversely, the minimum in OCS is largely
constrained to the 30–50 eV region, with a weak double minimum for angle 0° at 35 eV and 43 eV, and the deepest minimum found for angles 22.5° and 45°, at 43 eV. Our results for CO[2] and N[2]O are
in agreement with those of earlier studies.^16,17 In addition, in Appendix B, we show that the 0° minimum positions are robust against changes in the laser intensity for all three molecules. All of
the above support our interpretation that the interferences measured here are structural in nature and furthermore that the measured angular-variation of the TCI minimum in CO[2] and N[2]O is
primarily due to the geometric k[e]R cos(θ) term of Eq. (1). Furthermore, we do not find significant effects of the permanent dipole moment in N[2]O, which, in principle, would give rise to
intensity-dependent shifts via the Stark effect.
We interpret the different behavior of the OCS angular dependence compared to CO[2] and N[2]O in the context of the chemical structure of the three molecular HOMOs. As shown in Fig. 1, the HOMO of CO
[2] is, by construction, symmetric, and N[2]O is nearly so, with little difference in electronic structure from swapping C–O to N–N; both HOMOs are dominated by atomic 2p character in the LCAO basis.
^49 OCS, however, is much more asymmetric, with the sulphur atom contributing a 3p orbital character to the HOMO. Following the qualitative framework of Eq. (1), the imbalance of the HOMO would be
accounted for in the ΔΦ term. The most prominent structure which imprints itself on the OCS HOMO comes from the sulfur 3p orbital. The C–S bond can be thought of as electronically filling the shell,
making it isoelectronic with argon, which has a minimum in the photoionization cross section at a specific photon energy caused by a sign change in the RDME characterizing the transition between the
ground state and a particular angular momentum channel; this manifests as a minimum in the total outgoing radial wavefunction.^50,51 Using photoelectron spectroscopy, this minimum has been seen to
extend to molecules that have 3p character in the HOMO, including OCS.^21 Because the molecular axis breaks the rotational symmetry of a Cooper minimum, the feature is referred to as a "Cooper-like" minimum in the molecular case. Carlson et al.^21 also justify the use of the LCAO basis through charge density analysis, which attributes 97% of the atomic contribution to the HOMO to the 2p shell
of oxygen and 3p shell of sulfur. As we will argue in further detail below, we believe we are seeing the combined effects of a Cooper-like minimum, which is nearly angle independent in its location,
and a geometric TCI minimum which moves upward in energy as the angle increases.
As a final note on the angle-dependent yields, we, in general, observe a slower evolution of the energy of the TCI minimum with angle than that predicted by Eq. (1), for all three molecules.
Empirically, we find a better match with a modified angular dependence [Eq. (5)] with β ≈ 1. This deviation from the prediction of Eq. (1) is generally consistent with previous findings that the plane-wave approximation of the RDME often gives poor quantitative predictions.^12,37,52
B. Molecular-frame measurements—Spectral group delay
We next focus on the angle-dependent GD measurements shown in Figs. 3(d)–3(f). The angle is varied in the same way as for the yield measurements in panels [(a)–(c)], thus allowing us to track the TCI
behavior simultaneously in both the amplitude and phase. As a first observation from this comparison, we find that the angle-dependent GD of all three molecules indeed exhibits a TCI feature which
mimics that of their angle-dependent yields. CO[2] and N[2]O, for example, both exhibit a minimum in the GD at the location of the amplitude minimum for all angles at which the two features can be
discerned. The measured minima in the GD for CO[2] and N[2]O correspond to a negative shift of the spectral phase of ∼1.5 (CO[2]) and ∼2 radians (N[2]O) at 0°. Interestingly, even though N[2]O is not
perfectly symmetric and has a permanent dipole moment, we do not observe any meaningful difference between its TCI behavior and that of CO[2]. However, the OCS behavior is again starkly different
from that of CO[2] and N[2]O. While the location of the OCS GD feature also matches that of the TCI minimum in the sense that it is strongly localized between 40 and 45 eV, its angle dependence is
very different from the other two molecules: the OCS GD exhibits a maximum at angles of 0° and 22.5°, which changes to a (shallower) minimum at 45° and 67.5°. The 0° GD maximum of OCS corresponds to
a positive phase jump of ∼3 radians.
Because OCS deviates in such a drastic manner from the other two molecules at 0°, it is important to perform a careful characterization of the GD feature with wavelength scanning for finer energy
sampling. Shown in Fig. 4 are three wavelengths scans from 1270 to 1330 nm in 10 nm steps at 0° for all three molecules. The results for OCS have been shifted by +300 as for clarity. With the
combined results of the wavelength scans, it is clear that OCS has a smoothly varying maximum in the GD at 0°, whereas CO[2] and N[2]O both have minima.
All of the results shown in Figs. 2–4 suggest that the Cooper-like minimum in OCS strongly influences the amplitude and phase of the molecular RDME through its interplay with the geometric TCI
feature. We propose that the OCS results can in fact be interpreted in terms of a coherent sum of two structural features: (i) A geometric TCI minimum in the amplitude, accompanied by a maximum in
the GD, which moves upward in energy with angle, similar to the prediction in Eq. (5), and (ii) a Cooper-like minimum in the amplitude, accompanied by a minimum in the GD, which is nearly
angle-independent. As the TCI minimum “moves through” the Cooper-like minimum, the two features can either add destructively or constructively, thereby increasing or decreasing the depth of the
minimum. Similarly, the sum of the two features with opposite phase behavior can give rise to the observed sign change in the phase shift, from positive at small angles to negative at larger angles.
We will discuss this interpretation in more detail in Sec. III E, where we present a conceptual model for the observed OCS behavior.
Finally, we note that prior to this paper, the only other molecular-frame GD measurements of TCI were performed in CO[2] by Boutu et al.^2 In that study, they measured the sign of the two-center
phase shift to be positive at 0°, in contrast to our measurements shown above. However, other measurements^4,17 have indicated that for NIR wavelengths the interference effect observed by Boutu et
al. is due to multiple orbital contributions to the HHG spectrum and as such cannot be directly compared to our measurement. A more comprehensive study comparing GD measurements in CO[2] between NIR
and MIR driving wavelengths will be the subject of a future paper.
C. Unaligned measurements
Because we ascribe the deviation of OCS from the patterns of CO[2] and N[2]O to the overlapping of geometric TCI with the Cooper-like structure, it is instructive to confirm that the Cooper-like
minimum survives in the unaligned molecular sample. In contrast, a purely geometric TCI minimum is expected to average out in both the amplitude and the phase when measuring harmonics from an
unaligned sample. Figure 5(a) shows the spectral yields from the three unaligned samples, generated at 1300 nm. Each spectrum is normalized to unity after correcting for the aluminum filter
transmission and the neon detector photoionization cross section. The spectra for CO[2] and N[2]O are relatively featureless until the cutoff of N[2]O is reached and until the aluminum filter L[2,3]
edge (≈72 eV) is reached for CO[2]. On the other hand, OCS departs from the spectra of the other two around 30 eV and descends rapidly until a “kink” at ≈43 eV, indicated by a vertical dashed line.
After this point, the spectrum exhibits a flat plateaulike structure until the cutoff is reached. We have found that the location of this feature in unaligned OCS is independent of the driving laser
intensity at 1300 nm and the wavelength when measured at 1500 nm, 1700 nm, and 2000 nm (not shown in the figure). Across the 40–50 eV region, the unaligned OCS spectral yield decreases by
approximately two orders of magnitude compared to that of the other two molecules. The suppression at high energies is in large part due to the Cooper-like minimum in the OCS RDME amplitude and is in
qualitative agreement with the results of photoionization experiments performed in all three molecules.^21,25,53,54
Figure 5(b) shows the combined results of GD wavelength scans for all three unaligned samples. Driving wavelengths in the range 1270–1330 nm with 10 nm steps were used. The OCS results demonstrate a
broad minimum of ≈−150 as in GD around 43 eV, coincident with the spectral kink in the unaligned OCS spectral intensity. Over the same spectral region, the unaligned CO[2] and N[2]O samples are comparatively featureless.
The fact that OCS retains a minimum in both the amplitude and the GD even in the unaligned sample is another indication that we are observing a relatively angle-independent Cooper-like minimum in the
RDME. It is also interesting to note that the location and size of the OCS GD minimum is in agreement with previous measurements of Ar and CH[3]Cl, which exhibit 3p valence character, where
Cooper-like minimum positions were previously found to be between 40 and 50 eV with GD minima between −100 and −200 as.^15,51
D. TDDFT simulations
We next consider the TDDFT-calculated results for the HHG spectral intensity and GD for CO[2] and OCS, displayed in Fig. 6. Panels (a) and (b) show the harmonic spectral yields for CO[2] and OCS,
respectively. These calculated yields are comparable to the experimental angle-dependent yields shown in Figs. 2(a) and 2(c). Each angle-dependent spectrum has been divided by a smoothed,
featureless, incoherent average over all calculated angles, which allows us to follow the TCI minimum at energies beyond the cutoff. For comparison, the dotted line labels the empirical prediction of
Eq. (5), with values for α and β given in the caption. Generally speaking, when there is a minimum, we observe a good qualitative agreement between TDDFT results and the empirical prediction. For CO
[2], this also means we have good agreement with the experimental results shown in Figs. 2(a) and 3(a). Comparing TDDFT results for CO[2] and OCS, we also observe a lower energy TCI feature at 0° in
the latter, again in agreement with experiments, and consistent with a larger distance between the two centers in Eq. (1).
However, in general, the OCS results exhibit qualitative and quantitative differences with the experiments. As we have discussed above, the experimental TCI minimum around 43 eV remains visible at
most angles and is most prominent around 30°. In the simulation, while we match the relative absence of a minimum below 20°, the minimum for larger angles behaves like a TCI minimum, although the
energy increases somewhat slower with angle than for CO[2]. Second, and consistent with the first observation, the calculations do not exhibit clear signs of the Cooper-like minimum seen in the
experimental results (the shallow minimum from 40 to 50 eV at small angles cannot be conclusively assigned to a Cooper-like minimum). It is not entirely surprising that the TDDFT calculations do not
reproduce the Cooper-like minimum, as especially the location of such a minimum is notoriously difficult to predict and requires a very accurate description of the continuum wave functions.^31 The
absence of this Cooper minimum in the calculated OCS response means that aligned OCS behaves somewhat generically like CO[2] in terms of the location of the TCI minimum, except with a slightly larger
effective center-to-center separation leading to the lower energy of the 0° minimum. It is worth noting that for oriented OCS, we see a clear difference in the harmonic response from consecutive
half-cycles as the continuum electron wave packet has been released from and scatters on opposite ends of the molecule. In particular, both the overall yield and the location of the TCI minimum are
different from half-cycle to half-cycle. Since the current experiment only addresses aligned, but not oriented, OCS, we will leave discussions of oriented OCS for a future study.^56
Finally, we discuss the calculated angle-dependent GDs as shown in Fig. 6(c) for CO[2] and Fig. 6(d) for OCS. The solid curves show the same empirical geometric expectation as that in panels (a) and
(b), respectively. For small alignment angles, comparison of panels [(a) and (b)] and [(c) and (d)] shows the GD also approximately follows the geometric expectation: the CO[2] (OCS) GD exhibits a
minimum near 44 eV (33 eV) for angles up to about 30°. For larger angles, though, the GD does not exhibit clear minima that can be associated with structural features. In fact, we find that it is, in
general, much more difficult to extract the GDs than the yields from the TDDFT computations and that the details of the APT and its timing have a greater influence on the extracted GD than the
yields. This will be explored in more detail in future studies.^56 We note that, again, since calculated OCS does not have a Cooper minimum, the two molecules behave quite similarly, both exhibiting
an approximate −250 as decrease in the GD across the minimum. This is in contrast to the experimental results, where the CO[2] GD decreases and the OCS GD increases across the minimum at 0°.
E. Conceptual model
To further understand the experimental results, and, in particular, to study the interplay between the geometric TCI and Cooper-like features in OCS, we build a conceptual model for the harmonic
spectral amplitude and phase, guided by the experimental measurements and parameters. The overall results, compared to experimental measurements, are shown in Fig. 7 for CO[2] and Fig. 8 for OCS. The
conceptual model is built from the factorization in Eq. (3), where the reference is taken as a featureless spectral amplitude that has been fit to the 90° experimental signal [see the insets in panel
(a) of Figs. 7 and 8]. We include an angle-dependent ionization yield Γ(θ) that gives a slight preference to 90° vs 0°, consistent with the measured relative spectral intensities for HHG energies
below the TCI minimum. To match experimental conditions, we also include and average over an alignment distribution in angle-resolved data. Further technical details about the model and choice of
parameters are given in Appendix C.
For CO[2], the key elements of TCI are angle-dependent features in the amplitude “σ” and phase “ϕ” of Eq. (3). We model them with Gaussian shapes that move geometrically as the molecule is rotated
following the empirical formula of Eq. (5) (see Appendix C for details). Figure 7 shows that this model captures all the main elements of the experimental data we have discussed so far: (i) the TCI
minimum barely moves in energy between 0° and 22.5°, and (ii) the minimum and the GD feature both start out narrow and deep and become broader and shallower at larger angles. We use a geometric
feature with constant depth and width in our model (see Appendix C) and this broadening is the result of the molecular-alignment distribution: the TCI gets more spread out at larger θ, where the
minimum moves faster with the alignment angle. The agreement between this simple model and the experimental results means that it is a good approximation to think of TCI as a generic and robust
structural interference feature. Although the angle dependence of the location of the minimum is, in general, different from the simplest plane-wave-based geometric expectation of Eq. (1), the effect
of the TCI on the amplitude and phase is generic across many alignment angles.
The simple model also allows us to approximately calculate the yield and GD from the unaligned sample, as shown in Fig. 7 [panel (a) inset and panel (c)], respectively, along with the corresponding
experimental results. On smaller scales, the apparent remnant of a structural feature above 50 eV in the unaligned spectral intensity is a result of the qualitative conceptual model we use here
and does not carry further information on the TCI influence on unaligned signals. Figure 7 [(a), inset] shows that compared to the reference, the unaligned yield is dampened above the TCI minimum,
and (c) the GD in the unaligned signal is almost completely washed out. The dampened yield at high energies is the only remaining trace of TCI in unaligned targets. It can be understood as a
macroscopic or ensemble effect, where the HHG contributions from molecules at different alignment angles are out of phase over a large range of energies because of the phase shift which moves in
energy. This means that for energies above the 0° minimum, there will be a number of molecules that have undergone the phase shift and a number that have not. We note that while, in principle, we
also could calculate an “unaligned” signal using the coherent average over the TDDFT calculations at different angles, in practice, the TDDFT results have too much variability from angle to angle to
get a meaningful average. This can be seen from Fig. 6, for which the geometric expectation is only followed on average; for individual angles, the calculated minimum does not, in general, overlap
exactly with the expectation.
We build on the success of the conceptual model for CO[2] to consider the more complex case of OCS. For OCS, we include both a similar angle-dependent TCI feature as described above, in this case
with a positive phase shift, and an angle-independent negative phase-shift feature associated with the Cooper-like contribution. The results are displayed in Fig. 8 (see Appendix C for technical
details). Here as well, the results reproduce key features of experimental measurements and help us shed additional light on them: (i) the features in the spectral intensity and GD barely move with
the alignment angle and are most prominent when the two components align in energy. We attribute this to the TCI and Cooper-like contributions compensating each other when they are separated in
energy. (ii) The GD rapidly changes sign with increasing alignment angle. This is fostered by interference between the TCI and Cooper-like contributions around energies where they have similar
amplitude. (iii) In the unaligned signals, shown in panel (a) inset and panel (c), we now retain signatures of the Cooper-like minimum in both spectral intensity and phase; the inset shows the 90°
intensities both with (solid curve) and without (dashed curve) the Cooper-like contribution. The unaligned amplitude both exhibits the Cooper-like minimum itself and, similar to the case in CO[2], is
damped at high energies relative to the 90° signal. In the GD, the angle-dependent geometric phase feature has been averaged out, leaving only the Cooper-like negative phase shift. The good agreement
between this model and the experimental results, again, supports our interpretation of the OCS behavior as resulting from the interference between a generic TCI feature that moves through a nearly
angle-independent Cooper-like feature as the alignment angle changes.
We have investigated structural quantum interferences in CO[2], N[2]O, and OCS molecular samples through amplitude- and phase-resolved HHS. In all three molecules, we see evidence of a geometric TCI
effect for which the interference minimum increases in energy with the alignment angle. In OCS we, in addition, observe a nearly angle-independent Cooper-like feature which interferes with the TCI
feature in different ways at different angles. Our results validate the qualitative picture of Eq. (1) which provides a framework for the symmetric and near-symmetric cases of CO[2] and N[2]O,
although we find that the TCI location increases more slowly with angle than predicted by Eq. (1). Our measurements in N[2]O, along with the intensity independence of the minima, confirm that the ΔΦ(
k[e], θ) term is not strongly affected by the laser field, suggesting that the difference between CO[2] and OCS is dominantly structural. Investigation of the high-harmonic spectral intensities shows
a strong suppression in OCS in both unaligned spectra and all angles of molecular-frame spectra relative to the other molecules, consistent with the Cooper-like mechanism mentioned above.
Measurements of the GD further show the uniqueness of OCS in this work. This is manifested in the sign change of the GD feature for different angles, as the TCI minimum moves through the Cooper-like
minimum and changes the interference. The interpretation of the observed results as an interference between TCI and Cooper-like features is supported by our conceptual model, which used generic
spectral behaviors for both features and led to good qualitative and semiquantitative agreement with the experimental results. Finally, we presented TDDFT calculations in good qualitative agreement
with the CO[2] measurements, including reproducing the negative GD feature. In OCS, the TDDFT calculations also predict an angle-dependent TCI feature which can be recognized in the experimental
results; however, we do not recover the Cooper-like feature which means that we found less overall agreement with the experimental results. Overall, our finding here that TDDFT calculations recover
structural features such as the TCI minimum, combined with recent work showing that ultrafast charge migration can also be well represented by TDDFT,^38,39 suggests that TDDFT calculations may be
able to explore charge migration through HHS.
These results emphasize the importance of multidimensional measurements when investigating structural, and by extension dynamical, behavior: only through measurements of both spectral intensity and
GD are the TCI and Cooper-like mechanisms fully characterized in OCS, and even then, only by comparison to other molecules are these characterizations validated. As future studies of molecular HHS
examine larger, more complex molecules, such thorough characterizations may become increasingly important to elucidate the mechanisms involved.
This work was supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, under Award No. DE-SC0012462. D.K. was supported by the U.S. Department of Energy, Office
of Science, Basic Energy Sciences, under Award No. DE-FG02-04ER15614. High-performance computational resources were provided by the Louisiana Optical Network Initiative and the High Performance
Computing center at Louisiana State University. L.F.D. acknowledges support from the Edward and Sylvia Hagenlocker Chair. The authors thank Robert R. Jones for fruitful discussions.
A critical parameter for wavelength tuning in resonance spectroscopy is the harmonic separation 2ω[0] relative to the energetic width ΔE. This complication arises due to the fact that, in RABBITT,
only changes in phase are directly measured. In order to interpret this physically, the discrete derivative approximation is used to measure the GD,

$\mathrm{GD}\approx\frac{\phi_{q+2}-\phi_{q}}{2\omega_{0}},$

where ϕ[q] and ϕ[q+2] are the spectral phases of adjacent harmonics, separated by 2ℏω[0] in energy.
If 2ℏω[0] ≫ ΔE such that only a single harmonic can be incident on the resonance in each RABBITT scan, then the discrete derivative is a poor approximation, and the phase (not the GD) is more
directly probed. In the other extreme where 2ℏω[0] ≪ ΔE[R], multiple harmonics can sample the resonance at once and the discrete derivative approximation to the GD is accurate, meaning that the shape
of the reconstructed GD will reflect the shape of the GD of the resonant structure.
In this work, we operate on the border of these regimes. In Fig. 3(a), we sample the TCI of CO[2] with only 2–3 points, making the interpretation unclear; in Fig. 3(b), the structure in N[2]O is
broader, and more representative of the GD. For the purpose of precise, quantitative GD measurements, our wavelength scans may require additional convolution analysis. However, toward the goal of
providing finer sampling to confirm a smoothly varying positive or negative feature in the GD, the scans serve the intended purpose.
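The practical consequence can be illustrated numerically. The sketch below applies the 2ℏω[0] finite difference to a model Gaussian phase feature and compares a narrow and a broad width; the feature parameters are arbitrary and serve only to contrast the two regimes discussed above.

import numpy as np

CENTER, AMP = 43.0, 2.0          # arbitrary model phase feature (eV, rad)

def phase(e, width):
    return AMP * np.exp(-((e - CENTER) ** 2) / (2 * width ** 2))

def phase_derivative(e, width):  # analytic d(phase)/dE for comparison
    return -AMP * (e - CENTER) / width ** 2 * np.exp(-((e - CENTER) ** 2) / (2 * width ** 2))

step = 1.9                       # ~2*hbar*omega0 in eV for ~1300 nm driving
samples = np.arange(30.0, 60.0, step)
for width in (1.0, 4.0):         # narrow vs broad relative to the sampling step
    discrete = (phase(samples + step / 2, width) - phase(samples - step / 2, width)) / step
    exact = phase_derivative(samples, width)
    print(f"width {width:3.1f} eV: max |discrete - exact| = {np.max(np.abs(discrete - exact)):.2f} rad/eV")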
Figure 9 shows the spectral yield at 0° at different driving laser intensities. Because the absolute spectral intensities vary considerably with changing intensity, all the spectra shown in Fig. 9
have been normalized to the unaligned intensities such that their enhancements relative to unaligned caused by the TCI is presented. These plots show that there is no obvious intensity-dependence to
the minima positions. At the lowest intensity for N[2]O, a slight shift is observed but this is more than likely due to the normalization procedure. As discussed above, the 0° spectrum experiences a
cutoff extension relative to unaligned, which means that the cutoff region at 0° does not have a high-statistics region to normalize against. So as the cutoff approaches the interference, normalizing
to the unaligned spectrum obscures the resonant feature. This problem is not seen in the other two molecules because their interferences are farther from the cutoff. All of the above results for CO
[2] and N[2]O are in agreement with those of earlier studies;^16,17 therefore, it is reasonable to conclude that the measured angular variation of the interferences in CO[2] and N[2]O is primarily
due to the k[e]R cos(θ) term of Eq. (1). In contrast, the localized nature of the OCS interference enhancement indicates a structural interference in OCS that has additional angular variation from
the phase difference ΔΦ: not inconsistent with a Cooper-like minimum.
Looking back at Eq. (3), from the reference signal HHG[ref](ν), we see that the angle-resolved HHG spectral amplitude and GD are completely determined by the ionization yield Γ(θ), and the
target-specific amplitude σ(ν; θ) and phase ϕ(ν; θ). Similar to Fig. 3, we calibrate the reference against the 90° signal. More specifically, we define HHG[ref] as a featureless—both in amplitude and
phase—fit against 90° experimental results (see the insets of Fig. 7) and normalize it by its yield Γ(90°). For the angle-dependent ionization yield, we use a trigonometric expansion^57
where the coefficients a[0] and a[1] were selected to match relative spectral intensities at 0° and 90° below the TCI. For CO[2] (OCS), we use a ratio a[0]^2:a[1]^2 = 1:0.3 (1:0.2). The angle dependence of
the ionization yields plays a minor role in our results and is kept mostly for consistency with the full factorization of Eq. (3) and for adaptability to other compounds with a strong ionization
angle dependence.
For CO[2], TCI is modeled as a generic feature in σ and ϕ that moves geometrically as the molecule is rotated following the empirical equation (5) with α = 44 eV − Ip and β = 1, and consistent with
experimental observations (see discussions in Secs. III A and III B). More specifically, for the spectral amplitude, we choose
where the depth σ[min] = 4 × 10^−2 and width σ[a] = 5 eV. For the spectral GD, we choose
and the phase is recovered by integration over harmonic energies. Here ϕ̃[tot] is set to obtain a total phase variation of −0.7 × π radians and σ[p] = 0.9 eV. The total phase variation is determined
from the damping in the unaligned signal above the TCI [see the inset of Fig. 7(a)], while all other parameters are determined from the measurements at 0°. Finally, to account for imperfect alignment
in experimental measurements, results are averaged over a cos^2 distribution with a 40° FWHM spread.
For OCS, in addition to the geometric TCI, an angle-independent electronic-structure feature is coherently added to account for the Cooper-like component. The relative weight between the two
components is determined from the HOMO asymmetry between the two ends of the molecule, which we set to a 0.35:1 (TCI:Cooper-like) ratio. Because the two components overlap, and interfere, in energy, it is hard to precisely calibrate each component independently. Instead, we choose to use the same parameters for the OCS TCI as in CO[2], with only the sign of the total phase variation (+0.7 × π rad) and the minimum energy at
0° (α = 31 eV − Ip) adjusted to reflect the experimental data. For the Cooper-like component, we use similar generic features in amplitude and GD fixed at 41 eV with σ[min] = 5 × 10^−2, σ[a] = 5 eV, and
ϕ̃[tot] = −0.35 × π rad. These parameters were set to match experimental results in unaligned samples and the sharp variations in the GD of aligned samples. Finally, like in CO[2], a cos^2 40°-FWHM
distribution is used to describe the alignment distribution.
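To make the construction described in this appendix concrete, the sketch below assembles the model numerically for both molecules at 0°. The Gaussian functional forms, the normalization of the amplitude dip, the Gaussian stand-in for the cos^2 alignment spread, and the assumed form I[p] + α/cos^βθ for the empirical minimum position of Eq. (5) are all our assumptions; the GD width of the Cooper-like part is not specified in the text and is simply reused from the TCI feature, while the remaining numerical parameters (depths, widths, total phase variations, weights, and center energies) are taken from the values quoted above.

import numpy as np

def feature(energy, center, depth, width_a, phi_tot, width_p):
    """One generic structural feature: a Gaussian amplitude dip plus a phase jump."""
    ga = np.exp(-((energy - center) ** 2) / (2 * width_a ** 2))
    gp = np.exp(-((energy - center) ** 2) / (2 * width_p ** 2))
    amplitude = 1.0 - (1.0 - depth) * ga          # assumed normalization of the dip
    norm = gp.sum()
    phase = phi_tot * np.cumsum(gp) / norm if norm > 0 else np.zeros_like(energy)
    return amplitude * np.exp(1j * phase)

def tci_center(theta_rad, ip, alpha, beta=1.0):
    # Assumed reading of the empirical Eq. (5): Ip + alpha / cos(theta)**beta.
    return ip + alpha / np.abs(np.cos(theta_rad)) ** beta

def co2_target(energy, theta_rad):
    return feature(energy, tci_center(theta_rad, 13.77, 44.0 - 13.77),
                   depth=4e-2, width_a=5.0, phi_tot=-0.7 * np.pi, width_p=0.9)

def ocs_target(energy, theta_rad):
    tci = feature(energy, tci_center(theta_rad, 11.17, 31.0 - 11.17),
                  depth=4e-2, width_a=5.0, phi_tot=+0.7 * np.pi, width_p=0.9)
    cooper = feature(energy, 41.0, depth=5e-2, width_a=5.0,
                     phi_tot=-0.35 * np.pi, width_p=0.9)
    return (0.35 * tci + 1.0 * cooper) / 1.35     # coherent sum; weights from the text

def alignment_average(target, energy, theta0_deg, fwhm_deg=40.0, n=61):
    """Coherent average over the alignment spread (Gaussian stand-in, 40 deg FWHM)."""
    s = fwhm_deg / 2.355
    offsets = np.linspace(-2.0 * fwhm_deg, 2.0 * fwhm_deg, n)
    w = np.exp(-(offsets ** 2) / (2 * s ** 2))
    resp = sum(wi * target(energy, np.radians(theta0_deg + d)) for wi, d in zip(w, offsets))
    return resp / w.sum()

energy = np.linspace(28.0, 60.0, 600)
for name, target in (("CO2", co2_target), ("OCS", ocs_target)):
    signal = alignment_average(target, energy, theta0_deg=0.0)
    gd = -np.gradient(np.unwrap(np.angle(signal)), energy)   # sign convention of Eq. (4)
    print(name, "largest |GD| at 0 deg near", round(float(energy[np.argmax(np.abs(gd))]), 1), "eV")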
Attosecond and XUV Spectroscopy: Ultrafast Dynamics and Spectroscopy, Chap. 7.8, ISBN: 978-3-527-41124-5.
"Molecular modes of charge migration" (unpublished).
"Ionization energy evaluation," in NIST Chemistry WebBook, NIST Standard Reference Database Number 69 (National Institute of Standards and Technology, Gaithersburg, MD), retrieved December 10, 2018.
For the consistent decrease in GD for N[2]O near the cut off, this feature’s energetic position was found to scale with the laser intensity, indicating that it is not a structural feature that fits
within the scope of this paper.
"High-harmonic spectroscopy of transient two-center interference calculated with time-dependent density-functional theory" (unpublished).
© 2019 Author(s). | {"url":"https://pubs.aip.org/aip/jcp/article/150/18/184308/198771/Probing-the-interplay-between-geometric-and","timestamp":"2024-11-13T06:19:17Z","content_type":"text/html","content_length":"409654","record_id":"<urn:uuid:c629cb32-dc35-4ccb-8ed7-41db10dd8021>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00236.warc.gz"} |
6 Tips To Help You Study Math More Efficiently
One of the subjects that many students dread is mathematics. It’s considered one of the most problematic subjects because it often gives high levels of stress to students during exams and homework.
However, no matter how hard it is for you to understand math, there’s no way for you to avoid it because you encounter it daily. Aside from that, almost all courses would require at least a basic
understanding of arithmetic and algebra.
Studying math can be complicated, but there are different ways to excel in it. If you always spend hours studying math and end up learning nothing, here are some effective and simple tips to make
your studying more efficient:
1. Participate In Class Discussions
During lectures, make sure to take notes of important points so you can easily understand each method and formula. If you can’t keep up with the teacher’s lessons, classroom discussions are a perfect
opportunity for you to ask questions to clarify confusing points. Since math is cumulative, you’ll get behind when you don’t understand a basic concept.
The class discussions are where new concepts or ideas are introduced; that’s why it’s important to always be active every time your teacher is lecturing. The way how your teacher explains formulas
and procedure are more comprehensible than just reading your textbook.
Aside from textbooks, you can also find online math tutorials where you can learn straightforward solutions to various math problems and formulas. Topics like finding the length of a triangle, finding
the volume of a pyramid, and the Pythagorean Theorem, among others, are also explained simply in online tutorials.
2. Practice Consistently
Math is described as a hands-on subject. You can’t learn it just by reading. There’s no secret formula to becoming good at mathematics because it requires practice and consistency. Math is all about
formulas and calculations. If you’re having problems with your math scores, it means that you need to exert effort through constant practicing.
Make sure that you have a good grasp of the subject matter. You won’t be able to practice by yourself if you can’t understand the main concept of what you’re studying. In practicing math problems,
it’s important to think through the step-by-step process of how to come up with the correct solution. It isn’t enough to say that you know how to do it, but it’s important to know how to solve
formulas in actuality and master the process.
3. Don’t Forget The Basics
It’s important to master basic math operations such as addition, subtraction, multiplication, and division. These basic operations are known as the foundation of mathematics because it would be
difficult for you to learn advanced concepts if you don’t master them. You’ll have a better understanding of higher-level mathematics if you know the fundamentals.
Memorizing math concepts isn’t productive. This subject involves procedures, formulas, and equations that’s why memorizing won’t cut it. Instead, you should understand how each concept works so it’ll
stay in your long-term memory. A good way to remember formulas and equations easily is through repetition. Therefore, practice is necessary—again.
4. Get Some Help
Aside from self-studying, you can hone your knowledge in math through the help of people who are more experienced. One of the disadvantages of classroom discussions is that teachers can only assess
through examinations, quizzes, or homework. Not all students have the confidence to admit in class that they didn’t understand the lessons. In turn, this greatly affects their performance and by the
time the teacher realizes, it’s often too late.
A way to help students cope with math lessons is through the assistance of math tutors. Tutors can tailor resources to cater to each student's unique learning style. Parents may need to pay an extra cost to avail of these services, but it's going to be worth it because tutors can focus on the student and address weak points.
Joining group studies can also help you study math more efficiently. With the help of friends and classmates, you can gain a better understanding of each topic and clear up confusion right away
compared to studying alone. Joining a study group is a great alternative if hiring a tutor is not possible.
5. Review Mistakes
If you provided an incorrect solution to the problem, don’t be discouraged in figuring out where you made a mistake in the process. The best way to excel in math is to identify the concepts or areas
of struggle then find time to apply different methods on how to come up with the right solution.
When you review your mistakes during a test or assessment, it’s less likely that you’ll make the same mistakes again. There are three kinds of mistakes in math: careless errors, computational errors,
and conceptual errors. No matter what kind of mistake you’ve made, it’s essential to go back to it and review what went wrong.
6. Create a Distraction-Free Study Environment
In order for you to study math efficiently, you should be in a distraction-free environment. This subject needs full attentiveness and requires concentration for you to be able to solve complex
equations. In the classroom setting, position yourself in a comfortable area where you can easily see what’s on the board and hear the lecture of your teacher clearly.
Each student has a unique learning style. Some can study efficiently when they hear music, while some can easily concentrate when they don’t hear any noise. Allot a convenient time for you to study.
During your study time, it would help if you’ll clear your table from possible distractions like cellphones and tablets so as not to interrupt your concentration.
Final Thoughts
You may not realize it, but everyone uses math every day. While you’re still studying, it’s important to take math seriously because it can be applied in various industries and day-to-day living.
To study math efficiently, it’s important to participate in class discussions, practice equations consistently, know the basic operations by heart, and review mistakes constantly. It’s also
beneficial if you study in a distraction-free environment and get help from someone who is more knowledgeable. | {"url":"https://geteducationskills.com/6-tips-to-help-you-study-math-more-efficiently/","timestamp":"2024-11-09T03:29:59Z","content_type":"text/html","content_length":"84644","record_id":"<urn:uuid:2bddf101-2a59-47dd-a5e1-45680c87d1ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00862.warc.gz"} |
Financial management is losing money, and you have to master the common sense of
Bond Knowledge is the Foundation of Investment
1. While people are still struggling to make sense of the high-flying world of stocks, the decline in bonds, which has dragged down the net value of wealth management products, has taken them by surprise, leaving most feeling wronged and angry: "I've already laid flat, why can't the fluctuations leave me alone?" Instead of complaining and panicking, it's better to learn some basic knowledge about bond investment!
2. Why learn the basics? In 2004, I joined Bosera Fund and received a book on my first day, "Common Sense on Mutual Funds," a classic work by the founder of Vanguard, Bogle. Upon reading it, I found
many things half-understood, yet impressive. Eighteen years have passed, and the book has been updated and re-read, from which I have benefited immensely. Therefore, when venturing into any field, it
is essential to study the masterpieces of the masters in that field; it will provide a panoramic view and serve as a guiding light on the path ahead.
3. Once in the fund industry, it is natural to study the classics of stock investment. "Security Analysis" by Graham, the teacher of Buffett, is a must-read. Graham is not only famous for his deep
value but also for his more significant achievement of founding the industry of security investment analysis. The CFA Institute was initiated by him, making him the true patriarch.
4. He has an autobiography that is extremely fascinating, but "Security Analysis" is not only a monumental work but also extremely abstruse and profound because it talks about many things that people
did not know at the time, such as how to analyze investment and speculation. Perhaps because it was read by too few, for the sake of sales, he wrote a popular book called "The Intelligent Investor,"
which became a bestseller and well-known worldwide. Buffett went to Columbia to study under Graham after reading this book.
5. The biggest doubt Dong had while studying "Security Analysis" was that he spent half of the text discussing bond investment. Although it was confusing, he had to read on with a stiff upper lip,
from investment and speculation to ordinary bonds, high-yield bonds, convertible bonds, preferred stocks, and common stocks. Later, he realized that bond investment is the foundation of all
investments. The DCF cash flow valuation model is very understandable when it comes to bonds; it's just the discounting of cash. Once you understand bonds, looking at the DCF model for stocks becomes
even more insightful.
6. In fact, Graham unfolds his discussion from the perspective of the evolution of securities. In real life, the creation of bonds predates that of stocks. A large number of bonds were created more
than 400 years ago during the Age of Exploration. Bonds are certificates of investment where people pooled money to support adventurers in exploring the oceans and engaging in global trade. When the
captains returned, everyone received the agreed-upon fixed returns.
It was because bonds were issued more and more frequently that there were times when no one was willing to buy them, and the printed bonds became inventory, known as "Stock." Later, a clever person
suggested to the boss to adjust the bond terms: first, not to repay the principal, which delighted the boss; second, not to pay interest, which made the boss almost ecstatic; third, those who bought
the "Stock" shared the same fate with the boss, with the same returns whether they lost or made money. The boss, of course, had no objections, so these "Stocks" are also called "Shares." Thus, stocks
are a special kind of bond.
7. Since then, stocks were born and have surpassed their origins to become the mainstream. Subsequent investors, however, no longer knew what bonds were. The Chinese capital market originated with
the establishment of the stock exchange, without experiencing the gradual evolution of the Western securities market over hundreds of years. Therefore, investors lack the cultural accumulation of the
basic principles of bond and stock investment, instead chasing the by-products of the capital market—fluctuations and speculation.
8. After the 2006-2007 bull market, I came to a conclusion: there are many sorrows in the Chinese capital market, one of which is that the stock market came before the bond market. Investors lacking
a culture and literacy in bond investment seem to be able to bear risks, but in reality, they are just keen on fluctuations. The retail investor structure is the quantitative reason for the
significant fluctuations in the Chinese market, and retail investors without bond investment literacy are the qualitative reason.

9. Later on, when taking the CFA exam, I found that investment analysis and valuation of bonds are fundamental, and the complexity involved is far greater than that of stocks. If stock investment is more of a coarse logical deduction based on self-suggestion, bond investment is a refined and scientific mathematical calculation.
10. This also leads to a convergence in the thinking and behavioral patterns of bond investment managers, which can easily trigger synchronized buying and selling at certain critical moments.
Especially when facing liquidity issues caused by macro and micro factors, it can easily trigger an avalanche-like stampede in trading. This issue is more terrifying than bond defaults. There have
been several liquidity crises in history, such as the stress test in 2013 and the bank's external management cleanup in 2016, both of which caused the market to bleed, leaving a lingering fear. At
that time, there was support from non-standard assets and regulatory protection, without involving ordinary retail investors.
11. The recent bank wealth management net value drawdown has many inducing factors, but ultimately, it manifests as a liquidity crisis. The risk was originally controllable, but the chain reaction
triggered by public panic is the real risk. This is the boomerang effect of the financial market. Water can carry a boat and also capsize it, but everyone should not panic. Based on previous
experiences in handling similar risks with money market funds, the regulatory authorities will not stand idly by, otherwise, it may trigger a larger crisis. Wealth management investors need not
worry, and should not rush to redeem, causing a new "stampede event."
Bond Valuation: Maturity, Interest, Discount Rate
12. Alright, let's return to the common sense of bond investment. Since 2017, Dong Ge has been teaching at the National Association of Financial Market Institutional Investors (if you are not clear
about what this association does, it means you know nothing about bonds. Most of China's bond issuance and trading are not on the exchange, but in the interbank market. The interbank market is an
over-the-counter transaction, and the trading counterparty is very important, otherwise, liquidity will be an issue). He has mainly taught three types of courses: "Credit Bond Investment,"
"Convertible Bond Investment," and "Fixed Income + Investment." Among them, the "Fixed Income +" course is also the main lecturer for the association's online courses.
13. The valuation model for bond investment is very simple: the value of a bond = the present value of future interest and principal. Since the principal is always 100 yuan, the factors that
determine the value of a bond are three: maturity, interest, and discount rate.
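As a rough illustration of this present-value idea (the 3-year maturity, 4% coupon, and 5% discount rate below are made-up numbers, not market data), the calculation looks like this:

def bond_price(face, coupon_rate, years, discount_rate):
    """Price = present value of each year's coupon plus the principal repaid at maturity."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + discount_rate) ** t for t in range(1, years + 1))
    pv_principal = face / (1 + discount_rate) ** years
    return pv_coupons + pv_principal

# A 3-year bond with a face value of 100, a 4% annual coupon, discounted at 5%:
print(round(bond_price(100, 0.04, 3, 0.05), 2))  # about 97.28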
14. The first is maturity, which is how many years after issuance the principal is repaid, ranging from less than a year to ten or twenty years, with various bonds available. Treasury bonds are
generally longer, central bank bills are very short, and corporate bonds are mostly 2-5 years. The term of a bond is fixed, with one day less for each day that passes.
15. The second is interest, which is similar to bank deposits. The longer the term of the bond, the higher the interest; different risk bonds have different interest rates. Treasury bonds have the
lowest interest, and corporate bonds have higher interest. Back in the day, trust products were called non-standard products. What is the corresponding standard product? It is bonds. So, non-standard
trust products are a type of fixed-income product that is not listed for trading. Why are people willing to issue bonds? It is because the interest (coupon) promised by bonds is mostly unsecured,
hence also called credit bonds. The issuers of credit bonds naturally have good credit, so the coupon of credit bonds is far less than that of trusts.
16. The third is the discount rate, which is determined by market pricing and can be broken down into the market risk-free rate and the risk premium rate. The discount rates for government bonds and
financial bonds only include the market risk-free rate because they are backed by national credit and have no credit risk. The discount rate for credit bonds must be priced on top of the risk-free
rate.

18. Market interest rates are determined by macroeconomic conditions and monetary policy, with the yield on ten-year Treasury bonds being a commonly used indicator. The benchmark for U.S. interest rate hikes is also the Treasury yield. This base rate not only determines the valuation changes of bonds but also affects the valuation changes of stocks. Any financial product that uses a discount rate in its valuation formula will be influenced by it. The most important function of macroeconomists is to predict the trend of interest rates.
19. The degree to which different bonds are affected by interest rates varies. For instance, the price of a five-year Treasury bond is much less affected by interest rates than that of a ten-year
bond. This can be easily seen from the formula, and since the discount rate is in the denominator, when interest rates rise, prices fall, and when interest rates fall, prices rise. Of course, this is
a simplified conclusion; the actual mechanisms are much more complex, but they are all realized through investors voting with their feet.
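A quick comparison makes this maturity effect concrete. The sketch below reuses the same present-value formula with made-up numbers: two bonds with 3% coupons are first priced at a 3% discount rate and then repriced after rates rise to 4%.

def bond_price(face, coupon_rate, years, discount_rate):
    coupon = face * coupon_rate
    return (sum(coupon / (1 + discount_rate) ** t for t in range(1, years + 1))
            + face / (1 + discount_rate) ** years)

for years in (5, 10):
    before = bond_price(100, 0.03, years, 0.03)   # priced at par when rate equals coupon
    after = bond_price(100, 0.03, years, 0.04)    # rates rise by one percentage point
    print(f"{years}-year bond: {before:.2f} -> {after:.2f} ({after - before:+.2f})")
# The ten-year bond falls roughly twice as much as the five-year bond.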
20. Credit risk can be divided into two parts: the risk of interest and principal payment, which is the ability to repay the debt, broken down into three indicators: repayment capacity (whether there
is money), repayment willingness (whether those with money want to repay), and external support (whether those without money can borrow to repay).
21. The second part is the risk premium rate, which is the part of the discount rate for credit bonds that is higher than the risk-free rate. This reflects the market's belief in a certain promise
and is a form of market faith. It is very simple: the higher the credit, the lower the interest you require, and hence the lower the discount rate. When Bill Gates asks to borrow money from you, you can just ask for the Treasury rate.
22. This also illustrates that in the business world, the most important business outcome is credit. If everyone trusts you, they will buy your products; if everyone trusts you, the cost of borrowing
is very low, and cooperation with anyone is very convenient. The development of the bond market in recent years has essentially been a process of continuously breaking the belief in the ironclad
promise, with the beliefs in municipal investments, state-owned enterprises, and real estate being continuously shattered. A rule can be summarized: any belief has the potential to be broken. Now the
belief in wealth management products has also been broken. There are no miracles in this world, and with the beliefs in underlying assets shattered, how could the products built on them still have
faith? This is common sense and there is no need for alarm.
23. Finally, there is liquidity risk. The DCF valuation model is a static method that does not address the issue of momentum. Valuation models can only determine the direction of equilibrium in the spatial dimension but cannot judge the rhythm and repetition in the temporal dimension. Unfortunately, most investment judgments are based on models, so whenever liquidity risk appears, it is very dangerous. The greater danger comes from a herd stampede. It is like wearing proper clothes when going out under normal circumstances: when there is a fire or earthquake, or when people merely think there is one, what you wear downstairs, or whether you wear anything at all, no longer matters. Panic is the root of the liquidity crisis.
24. In conclusion, bond investment may seem simple from the formula, but it is extremely complex in actual investment decision-making: once you get into interest rate curves, duration, and convexity, you will find yourself entering a mathematical world (the sketch below estimates duration and convexity numerically). Today's discussion covers only basic concepts and principles; interested friends can study further on their own.
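As a purely illustrative aside (my own sketch, not part of the original discussion; the 3% coupon, 100 face value, and 2%-4% yield moves are assumed numbers), the following Python snippet prices a fixed-coupon bond as the sum of its discounted cash flows, compares how five-year and ten-year bonds react to the same rate move (item 19), and then estimates modified duration and convexity numerically (item 24).

```python
# Illustrative sketch only; all parameters are assumed for demonstration.

def bond_price(face, coupon_rate, years, y):
    """Present value of annual coupons plus the face value, discounted at yield y."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + y) ** t for t in range(1, years + 1))
    return pv_coupons + face / (1 + y) ** years

# Item 19: the discount rate sits in the denominator, so prices move opposite
# to rates, and longer maturities move more.
for years in (5, 10):
    p_low = bond_price(100, 0.03, years, 0.02)    # rates fall to 2%
    p_high = bond_price(100, 0.03, years, 0.04)   # rates rise to 4%
    print(f"{years}-year bond: price at 2% = {p_low:.2f}, price at 4% = {p_high:.2f}")

# Item 24: modified duration and convexity, estimated by bumping the yield.
def duration_convexity(face, coupon_rate, years, y, dy=1e-4):
    p0 = bond_price(face, coupon_rate, years, y)
    up = bond_price(face, coupon_rate, years, y + dy)
    dn = bond_price(face, coupon_rate, years, y - dy)
    duration = -(up - dn) / (2 * dy * p0)             # fractional price drop per unit rise in yield
    convexity = (up + dn - 2 * p0) / (dy ** 2 * p0)   # second-order correction
    return duration, convexity

d, c = duration_convexity(100, 0.03, 10, 0.03)
print(f"10-year bond: modified duration = {d:.2f}, convexity = {c:.2f}")
```

For a small yield change, the first-order price move is approximately minus duration times the change in yield, with convexity supplying the second-order correction.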
Additionally, the valuation model for stocks is no more complex than that for bonds, and may even be simpler, but all the variables in the stock valuation model are far more unruly, so the difficulty of stock investing is a hundred times greater than that of bond investing. Many people are very confident and ambitious when investing in stocks or predicting the stock market, which I often find incomprehensible. This is clearly a
display of a lack of common sense. It is better to start with the DCF model! | {"url":"http://tempestmud.net/news/40.html","timestamp":"2024-11-13T23:09:18Z","content_type":"text/html","content_length":"21925","record_id":"<urn:uuid:176c119b-0070-48d8-9240-0b51592936ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00414.warc.gz"} |
System and Method for Quantum Cache
An entangled quantum cache includes a quantum store that receives a plurality of quantum states and is configured to store and order the plurality of quantum states and to provide select ones of the
stored and ordered plurality of quantum states to a quantum data output at a first desired time. A fidelity system is configured to determine a fidelity of at least some of the plurality of quantum
states. A classical store is coupled to the fidelity system and configured to store classical data comprising the determined fidelity information and an index that associates particular ones of
classical data with particular ones of the plurality of quantum states and to supply at least some of the classical data to a classical data output at a second desired time. A processor is connected
to the classical store and determines the first time based on the index.
The present application is a non-provisional application of U.S. Provisional Patent Application Ser. No. 63/020,221 filed May 5, 2020 and entitled “System and Method for Quantum Cache” and a
non-provisional application of U.S. Provisional Patent Application Ser. No. 63/183,023 filed May 2, 2021 and entitled “System and Method for Quantum Cache”. The entire content of U.S. Provisional
Patent Application Ser. Nos. 63/020,221 and 63/183,023 is herein incorporated by reference.
The section headings used herein are for organizational purposes only and should not be construed as limiting the subject matter described in the present application in any way.
Information systems today are highly distributed, and this trend is expected to continue especially as the next generation wireless systems keep people and machines connected anywhere and anytime.
Applications and services increasingly rely on distributed information and processing to function, yet also increasingly aim to operate, look and feel like local systems. These kinds of future
systems can benefit from improved methods of tagging, storing, and moving information, including systems that utilize so-called non-local operations and resources. For example, methods and systems
that can provide precise location and timing information that is not dependent on a communication channel, or sensitive to time-of-flight delay of those channels, are highly desirable.
In addition, with so much information, including highly personal, confidential, and sensitive information, being an integral part of the applications and services on which people and machines rely, improved methods of tagging, storing and moving that information securely are also highly desirable. For example, methods of addressing that are not dependent on sending a plain-text address on
a communication link are desirable. In many cases, traditional classical systems have reached their technical limits on providing features to solve these critical problems. Quantum solutions can
offer many important improvements. However, practical quantum systems are not currently available that fit seamlessly and effectively within classical information systems such that the underlying
quantum phenomena can be used to improve performance.
The present teaching, in accordance with preferred and exemplary embodiments, together with further advantages thereof, is more particularly described in the following detailed description, taken in
conjunction with the accompanying drawings. The skilled person in the art will understand that the drawings, described below, are for illustration purposes only. The drawings are not necessarily to
scale, emphasis instead generally being placed upon illustrating principles of the teaching. The drawings are not intended to limit the scope of the Applicant's teaching in any way.
FIG. 1 illustrates a distributed system that can utilize a quantum entangled cache according to the present teaching.
FIG. 2 illustrates an embodiment of a portion of the distributed system described in connection with FIG. 1 that includes nodes using a quantum entangled cache and a node using an entanglement server
according to the present teaching.
FIG. 3 illustrates a block diagram of an embodiment of a quantum entangled cache according to the present teaching.
FIG. 4 illustrates an embodiment of a table showing a cache structure for a quantum entangled cache according to the present teaching.
FIG. 5A illustrates a diagram of an embodiment of a multilayer quantum store according to the present teaching.
FIG. 5B illustrates a table showing a cache structure for a multi-layer quantum entangled cache according to the present teaching.
FIG. 6A illustrates an embodiment of a bus network using an entangled cache according to the present teaching.
FIG. 6B illustrates a table showing an embodiment of a structure of an entangled qubit cache for a multi-node network according to the present teaching.
FIG. 7 illustrates a block diagram of an embodiment of a quantum-enabled information system that uses an entangled quantum cache according to the present teaching.
FIG. 8 illustrates a block diagram of an embodiment of a control system that controls an entangled quantum cache interacting with an application according to the present teaching.
FIG. 9 illustrates an embodiment of a distributed system using an entangled cache to provide metadata according to the present teaching.
FIG. 10 illustrates a known quantum super dense coding scheme application operating between a transmitter and receiver.
FIG. 11 illustrates an embodiment of a super-dense coding system using an entangled cache according to the present teaching.
FIG. 12 illustrates another embodiment of a super-dense coding system using an entangled cache according to the present teaching.
The present teaching will now be described in more detail with reference to exemplary embodiments thereof as shown in the accompanying drawings. While the present teachings are described in
conjunction with various embodiments and examples, it is not intended that the present teachings be limited to such embodiments. On the contrary, the present teachings encompass various alternatives,
modifications and equivalents, as will be appreciated by those of skill in the art. Those of ordinary skill in the art having access to the teaching herein will recognize additional implementations,
modifications, and embodiments, as well as other fields of use, which are within the scope of the present disclosure as described herein.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least
one embodiment of the teaching. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
It should be understood that the individual steps of the methods of the present teachings can be performed in any order and/or simultaneously as long as the teaching remains operable. Furthermore, it
should be understood that the apparatus and methods of the present teachings can include any number or all of the described embodiments as long as the teaching remains operable.
The present teaching relates to integrating quantum systems into traditional classical information systems to form various quantum information systems. These quantum information systems rely on their
fundamental properties of quantization, superposition, entanglement and/or non-locality to provide various performance advantages and new features over similar classical versions of information
system technologies. Some known examples of quantum information systems include quantum key distribution systems, analog and digital quantum computers, quantum communication links, and quantum
A particularly useful quantum object is the quantum qubit, which is a superposition state of two basis states, which can generally be described in Dirac notation as |0⟩ and |1⟩. Different physical
manifestations of qubits, for example superconducting qubits, optical qubits, and atomic qubits, possess different physical manifestations for these basis states. However, these different
manifestations can be expressed with similar mathematical representations and they behave in a similar way in a quantum system. It should be understood that the present teaching can apply numerous
manifestations of quantum qubit systems.
The qubit coherent superposition of the basis states can be represented as |ψ⟩ = α|0⟩ + β|1⟩, where α and β are complex numbers that represent a probability amplitude for each of the basis states and are governed by the Schrödinger equation, which describes how the state evolves in time based on its energy environment. The probability distribution, which is the magnitude squared of the probability amplitudes, indicates the probability that a |0⟩ or a |1⟩ will result from a measurement of the qubit.
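As an illustrative aside (not part of the teaching; the amplitude values below are arbitrary examples), the superposition and measurement-probability statements above can be checked numerically by representing a qubit as a normalized two-component complex vector:

```python
import numpy as np

# A qubit |psi> = alpha|0> + beta|1> as a normalized 2-component complex vector.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)   # arbitrary example amplitudes
psi = alpha * ket0 + beta * ket1

# Measurement probabilities are the squared magnitudes of the amplitudes.
p0 = abs(np.vdot(ket0, psi)) ** 2
p1 = abs(np.vdot(ket1, psi)) ** 2
print(p0, p1, p0 + p1)   # 0.5 0.5 1.0 (normalization)
```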
It is now widely accepted in the art that qubits can be entangled, which allows future measurements of particular ones of their physical properties to be perfectly correlated with those of the other qubits with which they are entangled. This feature of qubit entanglement holds even if the entangled qubits are separated in time and/or space. This is due, at least in part, to the quantized nature of the entangled quantum system, and the fact that the wave function that describes the quantum probability of the various superposed states of the system collapses to a single quantized state upon measurement.
Another feature of qubit entanglement is that measurement, as defined by the quantum theory, affects the entangled system as a whole, leading to a phenomenon in which a measurement at one location
causes an outcome, in the form of a collapsed state (commonly referred to as a wave function collapse) that is perfectly correlated with the measured state, at another location. This wave function
collapse, and the associated perfectly correlated outcomes at two locations, makes entangled qubits and, more generally, entangled distributed quantum systems, particularly useful resources in
numerous types of information systems.
It should be understood that observing a quantum state is not necessarily a measurement of the quantum state. A measurement is an action that collapses the state of the wave function that describes
the entangled system. Physicists will often define a measurement as an action that probes a path, or distinguishes with certainty, one of the possible states of the entangled system. Once discovered
through measurement, a state of a quantum system is no longer in superposition, and no longer capable of maintaining a long-term correlation across its distributed system. Rather, the system
irreversibly gives up this type of connection.
Various aspects of the present teaching take advantage of the fact that quantum systems provide an ability to establish a fidelity of entanglement separately from a measurement. In general, the
fidelity of entanglement is a metric used to compare quantum states. The concept of fidelity of entanglement is straightforward in the case of pure states, but is subtler for the mixed quantum states
found in real systems. In other words, a quantum measurement is separable from other observations of the quantum state that allow one to determine if the state is still entangled. In fact, numerous methods are available to probe a quantum state for entanglement without disturbing that entanglement, that is, without causing a collapse of the quantum states that would leave the system no longer entangled.
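For illustration only, and using the standard textbook definition rather than any procedure claimed in the teaching, the fidelity of two pure states is F = |⟨φ|ψ⟩|²; mixed states require density matrices and are subtler, as noted above. A minimal sketch:

```python
import numpy as np

def pure_state_fidelity(psi, phi):
    """F = |<phi|psi>|^2 for pure states (mixed states need density matrices)."""
    psi = psi / np.linalg.norm(psi)
    phi = phi / np.linalg.norm(phi)
    return abs(np.vdot(phi, psi)) ** 2

plus = np.array([1, 1], dtype=complex) / np.sqrt(2)     # |+>
minus = np.array([1, -1], dtype=complex) / np.sqrt(2)   # |->
print(pure_state_fidelity(plus, plus))    # 1.0: identical states
print(pure_state_fidelity(plus, minus))   # 0.0: orthogonal states
```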
Many quantum systems according to the present teaching take advantage of the fact that the quantum aspect of the system can be prepared with entangled states and distributed, physically or virtually,
across various physical locations. These entangled states can be measured to provide perfectly correlated states to be determined at remote locations. One feature of these systems is that no physical
channel is necessary to provide this correlated state determination. In various embodiments, the quantum-entangled systems of the present teaching are separated in space and/or time. Thus, various
aspects of the present teaching advantageously utilize the fact that the state collapse that provides a correlated state determination at two locations occurs both instantaneously and without
regard to a distance between those locations, and/or any aspect of a precise position of those locations.
Furthermore, various aspects of the present teaching advantageously utilize the fact that any external influences, such as an eavesdropper, that make a measurement of the quantum system, will destroy
the associated entanglement, causing a state collapse. Such a state collapse can be positively determined through various known mechanisms and protocols. Any observed or otherwise tampered quantum
bit, or quantum subsystem, which relies on a quantum state, can then be discarded or disregarded as it no longer contains useful information. Thus, another aspect of the present teaching is that
information systems according to the present teaching can operate securely, and/or ensure privacy from eavesdroppers or various outside influences for any outcome or measurement.
Another feature of the present teaching is the realization that hybrid quantum/classical systems can still continue to operate over various classical channels, and/or use classical connections to
manage the information they process, store and communicate. As such, various embodiments of systems and methods of the present teaching still maintain data in a classical regime, and rely on any
outcome or measurement associated with various ones of entangled qubits as classical metadata, rather than the quantum state itself.
Another feature of the present teaching is the recognition that management mechanisms are needed, or at least highly desirable, to control and manage the distribution and storage of quantum states.
Quantum states can be transferred and stored using any of numerous mechanisms depending on the particular application and the method of generating the quantum state. The mechanisms used to control
and manage the quantum physical systems that generate the quantum states must recognize the key performance attributes of the physical qubits combined with the needs of the systems that are using the
quantum state for various applications. Said another way, one feature of the present teaching is providing various methods and apparatus for providing appropriate abstraction(s) that operate between the quantum physical systems that supply quantum states and the classical, semi-classical, and/or quantum information systems that use those quantum states for various applications. The abstraction(s) make it possible for system designers who are not skilled in the art of quantum systems, but are skilled in the art of classical systems, to include quantum systems as part of designing solutions,
thus providing a relatively simple interface bridge between quantum and classical systems.
Numerous important quantum applications utilize entangled qubits distributed in a network. The term “network” as used herein is a very broad term that relates to a collection of two or more nodes
associated with information. FIG. 1 illustrates an embodiment of a distributed system 100 that utilizes a quantum entangled cache according to the present teaching. A plurality of nodes 102 are
connected by links 104. In general, the nodes 102 include both quantum systems and classical systems. Also, in general, the links can include classical transport or connections, can also include
quantum transport or connections, and can also include links that can transport and/or connect both quantum and classical data as described herein. The distributed system 100 is a mesh topology with
nodes 102, 102′ and links 104. However, it should be understood that the present teaching can be applied to numerous different and hybrid topologies. For example, the present teaching can be applied
to bus, star, tree, ring, point-to-point, hybrid and other network topologies.
In some embodiments of the distributed system 100, entangled pairs, or more generally, larger groupings of N-entangled qubits, such as three, four, or more entangled qubits in a group, need to be available in any number of entangled dimensions, K, whenever required. In other words, the distributed quantum entanglement cache of the present teaching can include a source of entangled quantum states
that generates quantum states having a plurality of entangled quantum states. Thus, qubits can be shared with N nodes and include K basis states. The qubits are entangled and coherent to some degree
when used in a node 102, 102′. Thus, for some configurations, a mechanism is used in the nodes 102, 102′ to ensure coherent qubits, and to discard qubits that are not coherent.
In some embodiments of systems according to the present teaching, a mechanism is used in the nodes 102, 102′ or elsewhere to supply verified coherent qubits. Also, in some embodiments of systems
according to the present teaching, a mechanism is used to ensure a supply of qubits that exceeds a predetermined percentage of coherent qubits out of a pool of qubits available in the node.
In addition, it is important that the qubits be indexed so that pairings are maintained between qubits in one node 102 compared with another node 102′. Some embodiments of quantum distributed systems
102 according to the present teaching have mechanisms to ensure that qubits are accessed with a latency that is compatible with the particular application. For example, a mechanism can be used to
appropriately synchronize the availability of sets of from 2 to N entangled qubits between 2 to N nodes 102, 102′ to ensure desired access latency and support the pairing of those qubits. The indexing in
the nodes 102, 102′ allows various pairings, or N-way entangled quantum states, to be identified with each other. That is, an index can be used to indicate which other node(s)' quantum states a
particular quantum state is entangled with.
The embodiment of the distributed system 100 in the description associated with FIG. 1, as well as other embodiments described herein, describe the use of caches that include entangled quantum
states. However, it should be understood that caches of the present teaching are not limited to entangled quantum states. As described herein, embodiments of quantum stores and quantum caches can
include and utilize quantum states that are not entangled states and/or quantum states that are entangled quantum states.
FIG. 2 illustrates an embodiment of a portion of the distributed system 200 of FIG. 1 with a node 202 that performs as an entanglement server using an entanglement generator 204 and nodes 206, 206′
using a quantum entangled cache 208, 208′ according to the present teaching. The node 202 is connected by links 210, 210′. Numerous embodiments of the present teaching use a centralized and/or
distributed mechanism for distributing entangled qubits.
Entangled qubits are supplied by entanglement generators 204. The distributed system 200 shown in FIG. 2 illustrates only two nodes 206, 206′ that receive entangled qubits, and a node 202 connected
to two links 210, 210′ that provides entangled quantum resources on those links 210, 210′ for clarity. Distributed systems 200 according to the present teaching are not so limited. For example, many
nodes can receive entangled resources. The entangled resource generator 204 can generate entanglement across more than two qubits, and so can be provided across more than two links 210, 210′. Nodes
202, 206, 206′ can include either or both of server node resources 204 and cache resources 208, 208′. Also, multiple entangled qubit generators 204 can connect different and/or the same caches 208,
208′. It should be understood that a diverse array of quantum and classical network connections can be realized using the present teachings.
One feature of the present teaching is the recognition that deterministic and on-demand sources of entangled photons can easily be integrated into systems using classical indexing corresponding to
particular quantum states as well as other information associated with the quantum state generation. Ideal deterministic sources produce entangled photons at known times and with 100% fidelity. In
practice, deterministic sources approach these goals with a known and/or characterizable high probability (and/or fidelity) that a pair, or set, of entangled photons is produced at a known time.
While these terms are often used interchangeably, for purposes herein, on-demand sources produce entangled photons at arbitrary but controllable times while deterministic sources produce entangled
photons at known, predetermined, times with high probability. Importantly, both of these types of controllable emission quantum entangled photon sources are amenable to attaching associated classical
data, including indexing information and quantum integrity information. Associated classical data can be referred to as meta data.
Some embodiments of the entanglement server 204 use a deterministic source of entangled photons that is generated by multiplexing and/or switching of non-deterministic quantum photon sources. Many
known high-brightness sources of entangled photons are so-called non-deterministic sources that produce entangled photon pairs (and larger entangled sets), but at random times. For example,
spontaneous parametric down conversion (SPDC), four wave mixing, and various other nonlinear parametric processes are known to provide entangled photons with a high rate, though with
non-deterministic emission times. Multiple systems and methods have been shown to provide deterministic photon sources using multiplexing and/or switching schemes combined with non-deterministic
sources. See, for example, Evan Meyer-Scott, Christine Silberhorn, and Alan Migdall, “Single-photon sources: Approaching the ideal through multiplexing”, Review of Scientific Instruments 91, 041101
(2020), which is incorporated herein by reference. As one example, a quasi-deterministic source can generally provide photon pairs and photon clusters (>2 entangled photons) substantially more than
60% of the time in a given time slot, with 99% fidelity. See, for example, Jeffrey H. Shapiro and Franco N. Wong, “On-demand single-photon generation using a modular array of parametric
downconverters with electro-optic polarization controls,” Opt. Lett. 32, 2698-2700 (2007), which is incorporated herein by reference. With such sources, it is possible to provide a time window,
including a repetitive time window for which an entangled photon pair will be provided at a particular position in the system, and it is also possible to specify that only 1% of the time windows
would have faulty quantum states (e.g. more than one photon).
Some embodiments of the entanglement server 204 use a deterministic source of entangled photons that is generated by known, predetermined loading or setting of quantum emitting states in the source.
For example, various configurations of quantum dot sources can be used. See, for example, Hui Wang, Hai Hu, T.-H. Chung, Jian Qin, Xiaoxia Yang, J.-P. Li, R.-Z. Liu, H.-S. Zhong, Y.-M. He, Xing Ding,
Y.-H. Deng, Qing Dai, Y.-H. Huo, Sven Höfling, Chao-Yang Lu, and Jian-Wei Pan, “On-Demand Semiconductor Source of Entangled Photons Which Simultaneously Has High Fidelity, Efficiency, and
Indistinguishability,” Phys. Rev. Lett. 122, 113602, (2019), which is incorporated herein by reference. Also see, for example, Müller, M., Bounouar, S., Jöns, K. et al., “On-demand generation of
indistinguishable polarization-entangled photon pairs,” Nature Photon 8, 224-228 (2014), which is incorporated herein by reference. Advantageously, the ability to index the expected arrival slot or
position of the entangled photon events can be provided for these kinds of sources. In addition, it is possible to provide associated classical data regarding, for example, the number of
indistinguishable events (e.g. identical photon states) that will follow a prepared excitation state, the expected fidelity (dephasing, added background) and other associated classical information
about the entangled photons that allow these sources to be generally described and incorporated as part of a larger system as described herein. The classical information can be tagged to an
individual entangled photon event or a larger set of events, depending on the source. One feature of the present teaching is that the classical tagging process allows multiple types of sources to be
used in the same system.
In some embodiments, the entanglement generator 204 can transmit generated entangled qubits using links 210, 210′ to the nodes 206, 206′. These transmitted qubits may be sent in quantum channels that
are embedded in, or separate from, any classical channel(s) that is used for the links 210, 210′ using various systems and methods for transmitting entangled qubits. The entanglement generator 204
can be electronic and can generate entangled electronic qubits that are transmitted electronically. The entanglement generator 204 can also be optical. For example, the entanglement generator 204 can
be an entangled photon source that generates entangled photons. These photons are sent over links 210, 210′ that include optical fiber that transmits the entangled photons. These links could also be
free space. Nodes 206, 206′ include a quantum entangled cache 208, 208′ for storing and retrieving entangled qubits at each node 206, 206′. Practical systems will appropriately balance the speed at
which entangled bits are generated and consumed for a particular coherence half-life of the entanglement.
The quantum entangled caches 208, 208′ can include a mechanism for determining the coherence of qubits at each node. Coherence is a metric of the degree of entanglement. In some embodiments, this
mechanism includes a coherence detector with some discard mechanism. In some particular embodiments, this mechanism has knowledge of reliable statistics on coherence half-life and uses at least one
of many different types of error correcting coding. Qubits may be discarded after an age-out, for example after a known half-life, or alternatively an age-out based on a known error rate or error
condition. In some embodiments, both these mechanisms are used. Some embodiments rely on entanglement purification, which uses measurements on a number, n, of adjacent qubits to determine with high
probability that a given qubit is entangled. Thus, various mechanisms can be used to determine coherence of one or more qubits that are part of an entangled system.
The quantum entangled caches 208, 208′ also include a synchronization mechanism that ensures matched pairs or sets of qubits are in use at the various nodes 206, 206′. The synchronization may, for
example, be associated with a particular known order of qubits in the cache that is associated with, or registered to, another order of qubits in another node. In some embodiments, the
synchronization mechanism is an ordered cache. In some embodiments, the synchronization mechanism uses classical channel information exchange. For example, the order of qubits in two different nodes
can be exchanged and updated as the order changes. Also, in some embodiments, the nodes are connected via a communication channel 212 that can support one or both of quantum and classical
communications. This channel 212 may be the same or different from the links 210, 210′ that transmit photons to the caches 208, 208′.
FIG. 3 illustrates a block diagram of an embodiment of a quantum entangled cache according to the present teaching. Qubits are supplied to a qubit loader 302 from a quantum channel 304 and/or a
combined quantum-classical channel 306. The supplied qubits are entered into a qubit store 308, which is a quantum store. The qubit store 308 is a physical storage system that holds and maintains, to
a predetermined acceptable degree, the entanglement and coherence of ordered qubits. Thus, the qubit store 308 generally accepts qubit state from the loader 302 into a physical mechanism that can
maintain the coherence and entanglement of the qubit in an ordered way, such that an unloader 310 can access and supply that state to an application 312. The qubit store 308 is shown in FIG. 3 as a
first-in-first-out (FIFO) structure such that the youngest qubit (to the cache) sits at the bottom slot 314 and the oldest qubit sits at the top slot 316 of the qubit store 308 so that the oldest
qubit would be next available to supply for an application 312. It should be understood that the terms “top” and “bottom” are relative terms used to describe the present teaching, but may or may not
be representative of a practical qubit storage system. For example, one skilled in the art will appreciate that qubit store 308 could be implemented in numerous ways such as with a simple fiber delay
line, where a photonic qubit enters the delay line and would be the first to exit the delay line to be used by an application connected to the store. In various embodiments, the qubit loader 302 and/
or the qubit unloader 310 can comprise, for example, a passive coupler/splitter, a quantum switch, an optical switch, a quantum wavelength converter, a quantum repeater, and/or a quantum state
Multiple types of storage systems are contemplated by the present teaching. Various embodiments of the qubit store 308 can have various physical implementations and operations. This includes, for example, fiber loops, including hierarchical fiber loops, which can achieve a variety of input-output relationships between loaded photonic qubits and unloaded photonic qubits. For example, FIFO,
last-in-first-out (LIFO), or other interleaved access architecture can be realized. In addition, numerous types of memory devices, such as those based on ions or atoms, are suitable for use as random
access memory devices, as slots can be associated with positions on, for example, a lattice or other ordered physical arrangement that supports the particular quantum system. For example, slots might
be associated with positions of nitrogen vacancies in a diamond lattice. Thus, quantum entangled caches of the present teaching are compatible with a variety of storage structures including random
access storage structures and stack-type storage structures, such as FIFO and LIFO.
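A minimal sketch of the classical bookkeeping side of such a store is shown below; the class and method names are my own, the physical qubits themselves are not modeled, and only the FIFO/LIFO slot ordering is illustrated.

```python
from collections import deque

class SlotOrder:
    """Tracks which stored qubit (by classical index) is unloaded next."""

    def __init__(self, policy="FIFO"):
        self.policy = policy
        self.slots = deque()

    def load(self, index):
        self.slots.append(index)          # newest slot at the "bottom"

    def unload(self):
        if self.policy == "FIFO":
            return self.slots.popleft()   # oldest qubit first, like a fiber delay line
        return self.slots.pop()           # LIFO: newest qubit first

store = SlotOrder("FIFO")
for i in (1, 2, 3):
    store.load(i)
print(store.unload())   # 1: the first qubit loaded is the first one available
```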
A classical data loader 318 takes in data from a classical channel 320 and/or optionally a combined classical-quantum channel 306. The classical data loader 318 loads the data that is associated with
particular qubits into a classical store 322 that holds and maintains the classical data associated with particular qubits. The classical store 322 is conventional computer memory that can be
volatile or non-volatile memory that can take numerous forms that are well known in the computer hardware art. A data unloader 324 can provide the classical data associated with a particular qubit to
an application 325 such that the application is then able to use any subsequent information about the state of the qubit effectively in the application. Subsequent information includes, for example,
information obtained by processing the qubit in a quantum logic element, making state-collapse inducing (non-unitary) measurement operations on the qubit, and/or making measurements on the qubit that
do not collapse the qubit state, but rather provide information about the state of the qubit or other qubit properties. Thus, one aspect of systems and method of the present teaching is that the
quantum entangled cache 300 holds and maintains classical data associated with a qubit and provides that data to a higher-layer application to aid in the application processing of the qubit.
The quantum entangled cache 300 includes a fidelity system 326 that is connected to the quantum store 308 and to the classical store 322. The fidelity system 326 can identify and remove or otherwise
reject bad qubits, such as a bad qubit in slot 327. This would include, for example, qubits that have or will soon collapse and/or have lost certain predetermined fidelity, entanglement and/or
coherence properties. The fidelity system can tag a bad qubit to inform a user that it is bad. It should be understood that the fidelity system 326, as well as the associated configuration of the
quantum cache 300, can be configured to operate with populations of qubits, and not necessarily at a single qubit-by-qubit level in a deterministic way. That is, groups of qubits representing a
single qubit state are anticipated, and qubit states are represented by measurements on the ensemble. In these systems, predetermined fidelity levels would be expected to be based on ensembles.
Fidelities, entanglement and/or coherence properties can be non-deterministic and represented by probabilities and/or other statistical metrics.
Quantum purification techniques can be applied to these ensembles by the fidelity system 326. Generally, the fidelity system is responsible for maintaining qubits or qubit ensembles in the store at a
known good fidelity level for the subsequent provision of that qubit state to an application 312, and also for updating, as needed, the associated classical data of that qubit with information
regarding the fidelity. The fidelity system 326 can remove bad qubits from the store either physically or prevent bad qubits from being unloaded and/or subsequently used based on associated classical
data information.
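The classical decision logic such a fidelity system might apply can be sketched as follows; the threshold value and field names are assumptions, and the physical probing (purification, witness measurements) is not modeled.

```python
FIDELITY_THRESHOLD = 0.9   # assumed, application-dependent

def review_entries(entries):
    """Flag entries whose estimated fidelity is below threshold so they are not unloaded."""
    for entry in entries:
        entry["status"] = "good" if entry["fidelity"] >= FIDELITY_THRESHOLD else "bad"
    return entries

cache_metadata = [{"index": 1, "fidelity": 0.97}, {"index": 2, "fidelity": 0.71}]
print(review_entries(cache_metadata))   # entry 2 is flagged as bad
```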
In some embodiments, the qubit unloader 310 is connected to an application 328. The application may include a quantum measurement system (not shown). In these embodiments, the quantum measurement
system determines a state of the qubit in the qubit unloader, and this state value is used by the application. For example, the state value may be the same as a state value determined by a
measurement of another qubit in a remote cache with which the qubit in the qubit unloader is entangled.
In some embodiments, the application 328 includes a quantum processor system (not shown) that uses the stored quantum state information. The quantum processor can include various quantum logic
elements that perform unitary and/or non-unitary transformations on the quantum state. For example, CNOT, Hadamard, Pauli-X and/or Pauli-Z gates, and/or measurements can be
performed by the application 328.
In some embodiments of the system and method of the present teaching, the qubit unloader 310 and the data unloader 324 are connected to an optional communications channel 330 via the application 328.
The channel 330 can support one or both of quantum communication or classical communication via separate or combined channels. The channel 330 allows the application 328 to connect separate quantum
entangled caches 300 together to share either the quantum information or the classical information. The channel 330 can be used to exchange qubits directly from the qubit unloader in one or the other
separate quantum entangled cache and/or qubits that are processed by quantum logic elements that are connected to both the channel and to the qubit unloader 310 in one or the other quantum cache.
FIG. 4 illustrates an embodiment of a table 400 showing a cache structure for a quantum entangled cache according to the present teaching. The cache structure shown in the table 400 includes fields
for both classical information and quantum information. For example, an index field and an age field are provided. The index associates particular items of classical data (e.g. various meta data)
with particular ones of the plurality of quantum states. There is also a field describing which nodes hold the qubits with which a given qubit is entangled. There can be fields for other parameters,
for example, type of qubit, half-life of qubit, qubit error rate, qubit access time, and other parameters. Fidelity information can be included. The classical information is tagged, in other words
indexed, to a particular qubit that resides in the cache, and is also maintained and updated, as needed, as the qubit is stored in the cache.
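One hypothetical way to hold the classical side of a table-400 entry is sketched below; the field names, types, and default values are my own and simply mirror the categories named above.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CacheEntry:
    index: int                        # ties this metadata to a stored qubit
    age: float                        # seconds since the qubit was loaded
    entangled_with: List[str]         # node IDs holding the partner qubit(s)
    qubit_type: str = "photonic"      # e.g. photonic or atomic
    half_life: float = 1e-5           # expected coherence half-life, seconds
    error_rate: float = 0.01
    access_time: float = 1e-9         # expected unload latency, seconds
    fidelity: Optional[float] = None  # updated by the fidelity system

entry = CacheEntry(index=42, age=2e-6, entangled_with=["node_n"], fidelity=0.95)
print(entry)
```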
Another feature of the systems and methods of the present teaching is that it accommodates the fact that qubits generated by different physical manifestations have different properties and offer different parameters that can be part of the classical information. For example, some physical qubits can be stored for a long time, some physical qubits preserve entanglement longer than others, some
physical qubits are easy to access and use, and other physical qubits require more complex and time-consuming access schemes. These different physical qubit manifestations can have different
associated classical information that is appropriate to the physics of those particular qubit manifestations. The qualities of the different physical qubits can influence the design of a cache. As
one example, some embodiments of quantum entangled caches according to the present teaching use layered cache systems.
FIG. 5A illustrates a diagram of an embodiment of a multilayer quantum store 500 according to the present teaching. The multilayer quantum store 500 has a top layer 502 and a bottom layer 504. In this
example, the top layer 502 represents a relatively fast-access time, but relatively low storage time store system, where the bottom layer 504 represents a relatively slow access time, but relatively
longer term storage system. In some embodiments, the top layer 502 can be a fiber optic loop buffer storage system that holds photon qubits. These photon qubits can be single photon qubits, frequency
entangled photonic qubits, or they may be polarization encoded qubits. In some embodiments, the bottom layer 504 is an atomic qubit storage system. This bottom layer 504 can include any of a variety
of known atomic qubits. The top layer 502 is used first. This is because the top layer 502 provides low latency access to qubits, albeit with qubits that preserve entanglement for less time. The
bottom layer 504 is used for cases where the system can support a higher latency access. In some embodiments according to the present teaching, the bottom layer transfers qubits to the top layer when
time allows, optimizing the tradeoff between latency and entanglement half-life for a particular application. Bottom layer qubits preserve entanglement for longer periods of time. For example,
photonic qubits are generally difficult to hold for long periods of time, but simple to access, and so are accessed with lower latency. Atomic qubits can maintain an entangled state for longer time
periods, but have a higher latency for access. Photons are also generally a plentiful resource, while atomic qubits are less abundant. The multilayer quantum store 500 of the present teaching
appropriately manages and allocates the physical qubits based on their individual characteristics. Thus, one key benefit of quantum caches of the present teaching is that they provide a mechanism
that allows classical systems to effectively utilize the quantum states and/or quantum properties from different quantum physical systems with a common interface and/or representation.
Various known fiber buffers can be used to short-term store, delay, or buffer photons that carry quantum states, for example, fiber loop buffers, various optical cavities such as fiber Bragg cavities, and slow-light systems. Various nonlinear (for example, four-wave mixing) schemes can be used to produce short, long and/or controllable delays of a quantum photon(s) passing through the
fiber. Importantly, for systems and methods of the present teaching, various known attributes (active and passive) of the fiber buffer produce predetermined delay properties of the buffer, and
therefore are amenable to being part of the classical information to be tagged with one or more of the photons that are input to the buffer.
Fiber optic buffers are particularly appropriate as the top layer 502 of the quantum storage system. As one example, in some embodiments of the present teaching, the top layer 502 of the physical
storage system comprises a fiber optic buffer that has tunable delay. See, for example, Stephane Clemmen, Alessandro Farsi, Sven Ramelow, and Alexander L. Gaeta, “All-Optically Tunable Buffer for
Single Photons,” Opt. Lett. 43, 2138-2141 (2018), which is incorporated herein by reference. One feature of this kind of buffer is that an input pump laser wavelength produces a delay of a
quantum-encoded photon. For example, a range of over several nanoseconds of delay can be deterministically realized by tuning the pump wavelength across a range of wavelengths. Thus, the particular
(and variable) delay information can be included in classical information associated with the qubit in these kinds of short term fiber buffer stores. Another feature of this kind of buffer is that a
bandwidth of the optical photon that is input to the buffer determines the delay. As such, classical information associated with the known input spectrum of the quantum encoded photon provides
information about the realized delay in the short-term store.
In addition to the shorter term stores (for example, fiber optic buffers), various known atomic and ion-based systems can be used to construct quantum stores that store quantum state information
according to the present teaching. In addition, the quantum information can be transferred from photonic states to the electronic states of atomic and ion systems. That is, quantum states carried by
photons can be stored in electronic states in various ions and atoms to realize these longer-term quantum storage systems and also be read out of the systems as photons and detected. The quantum
states stored in atomic and ion-based memories can also be read (measure) directly in the electronic domain.
There are numerous known protocols for realizing quantum atomic memories including, for example, electromagnetically induced transparency (EIT), controlled reversible inhomogeneous broadening (CRIB), and atomic
frequency combs (AFC) that can be used for quantum caches of according to the present teaching. See, for example, Heshami K, England D G, Humphreys P C, et al., “Quantum memories: Emerging
Applications and Recent Advances,” J Mod Opt. 63, 2005-2028 (2016), which is incorporated herein by reference. While this is expected to change as technology evolves, it is generally accepted that
losses in optical-fiber-based buffers can limit storage times to less than a few tens of microseconds. On the other hand, atomic systems, particularly cold atomic systems can hold quantum state for
times scales on order of seconds or more. These numbers are just illustrative and not intended to limit the present teaching in any way, but they serve to illustrate the need for different layers of
cache to support a wide range of storage and access times.
Atomic memories are particularly appropriate as the bottom layer 504 of the quantum storage system as they generally exhibit longer storage times. As one example, in some embodiments of the present
teaching, the bottom layer 504 of the physical storage system comprises a cold-atom-based optical quantum memory. See, for example, Y.-W. Cho, G. T. Campbell, J. L. Everett, J. Bernu, D. B.
Higginbottom, M. T. Cao, J. Geng, N. P. Robins, P. K. Lam, and B. C. Buchler, “Highly Efficient Optical Quantum Memory with Long Coherence Time in Cold Atoms,” Optica 3, 100-107 (2016), which is
incorporated herein by reference. One feature of this kind of memory is that it efficiently absorbs photons and also has low de-coherence. In these systems, optical quantum states are loaded into a cold
atomic gas that is prepared by an applied magnetic field gradient so spectral components are encoded across the gradient. A controlled reversal of the applied magnetic field generates a photon echo
from the gas that represents the quantum state of the input optical photon. In these systems, the storage time is a function of the input control pulse duration. The ability to cool the gas affects
the de-coherence time. Thus, known and controllable parameters of the memory implementation (for example, optical control powers and optical bandwidths, applied magnetic fields, readout pulse energy
bandwidths, etc.) are directly related to the memory quantum performance metrics, such as storage time, readout time, de-coherence, etc. Regardless of the particular atomic memory protocol, it is
thus possible to tag stored quantum states with associated classical information that allows control of a quantum cache system independent of a particular physical implementation of the memory.
It should be understood that the quantum store physical systems described herein are only some possible specific examples of quantum stores that could be used in the methods and apparatus of the
present teaching. Various known quantum optical buffering and memory schemes have particular classical information associated with the properties of the stored qubit based on the particular
properties and protocols of the physical store system. For example, operating parameters, such as delay, memory depth, storage time, loading latency and/or unloading latency can be tagged. In
addition, various impairments, such as various losses, de-coherence mechanisms, dephasing effects, added background and various other nonlinear impairments that affect the quantum state can also be
tagged. In addition, as systems and methods for physical quantum storage mature, the kinds of classical information will change and grow. A feature of the methods and apparatus of the present
teaching is the use of an abstraction layer that accommodates the anticipated changes and maturation of the underlying physical systems. Thus, embodiments of the multi-layer quantum store 500 can
work not only with some of the example physical systems provided herein, but other known and future physical quantum store systems as they emerge. In other words, the present teachings are not
limited by specific types of quantum store systems.
FIG. 5B illustrates an embodiment of a cache 550 that includes a physical structure with metadata for a multi-layer quantum entangled cache according to the present teaching. The cache 550 includes
both software/information and hardware. There is a quantum element 552 and a classical element 554. A fidelity system includes a quantum coherence engine 556 that is connected to the physical qubits
in a long-term quantum store 558, and a short-term quantum store 560. The physical qubits in the long-term quantum store 558 can be, for example, atomic qubits in an atomic memory. The physical
qubits in the long-term quantum store 558 can be electromagnetically-induced-transparency atomic quantum memories. The physical qubits in the long-term quantum store 558 can be any of various other
kinds of known long-term quantum memories. In various embodiments of the methods and apparatus of the present teaching, the long-term memories have a half-life that is nominally tens of microseconds,
milliseconds, seconds, or tens of seconds. The physical qubits in the short-term quantum store 560 can be, for example, photonic qubits in a fiber loop memory. The physical qubits in the short-term
quantum store 560 can be single-nitrogen-vacancy-center quantum memories. The physical qubits in the short-term quantum store 560 can be any of various other kinds of known short-term quantum
memories. Also, in various embodiments of the methods and apparatus of the present teaching, the short-term memories have a half-life that is nominally nano-seconds to microseconds. Other important
factors for the choice of a quantum memory system include, for example, the read-out mechanism, the quantum fidelity, the storage efficiency, the time-bandwidth product, stability and noise as just
some specific examples.
The quantum coherence engine 556 uses purification, or some other non-measurement monitoring technique, to inspect qubits in the long- and short-term stores 558, 560 and interrogate their coherence level. This all takes place on the quantum side 552 of the cache. When the coherence engine 556 decides qubits are bad, it then sends a notification over a classical channel to all other caches holding entangled partner qubits so those nodes do not use those qubits. The cache 550 uses the entanglement map information 562 in the classical part 554 of the cache 550 to determine which nodes must be notified
of qubits that are timed-out.
In some embodiments, when a qubit is retired because it has exceeded its lifetime, the whole cache pops up like a stack in a processor. The qubits on top are the oldest, and most likely to go bad,
but if something in the middle of the cache times-out or is determined to be not coherent or entangled, the one below it moves (pops) up one step. When a qubit is pulled from a cache and measured as
part of an algorithm, that qubit becomes stale. All other node caches need to know this information as well. Each cache may determine this information on its own, for example, by independently measuring lifetime, or, in some embodiments, caches can be provided this information; for example, in the case of an expired or timed-out qubit, a classical channel communication protocol process is used for
this notification. In a stack model, you just need to keep synchronized with index numbers in an index column 564 because you know qubits are moving along toward the top of the stack.
In some embodiments, once qubits reach a predetermined age threshold, T, the probability they are out of coherence is relatively high. A classical age timer keeps track of this time and can
automatically remove aged-out qubits. The value of T is retained for each qubit in an age timer 566 column of the cache to be associated with each qubit or groups of the same type of qubit. This is
in the classical part 554 of the cache 550. There may be two physical age timers, for example, one for the long cache and one for the short cache. This is useful in some systems because the long atomic cache might have a longer half-life. The advantage of using the age timer is that if all nodes agree on a timing parameter, then messages are not necessary, and classical communication is not required to indicate if a qubit went bad (lost its quantum state information). In these particular examples, all nodes are synchronized and will remove aged-out qubits at the same and/or appropriate times.
In some embodiments of methods and apparatus according to the present teaching, everything stays in the long-term store 558 until it gets near the top of the stack. Then the qubits get transferred to
the short-term store 560 so that they are available for immediate use. Then, if qubits remain in short-term store 560 for too long, the age timer goes off and they are discarded and a classical
message is sent to inform the other nodes to shift up the appropriate stacks.
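A sketch of this age-out and stack-synchronization logic on the classical side follows; the threshold T, the data layout, and the function name are assumptions made for illustration.

```python
T = 5e-6   # assumed age threshold, seconds

def retire_aged(stack, now):
    """stack: list of (index, load_time) with the oldest qubit at the front (top)."""
    retired = [idx for idx, t0 in stack if now - t0 > T]
    kept = [(idx, t0) for idx, t0 in stack if now - t0 <= T]
    return kept, retired   # 'retired' indices go out on the classical channel

stack = [(1, 0.0), (2, 3e-6), (3, 6e-6)]
kept, retired = retire_aged(stack, now=7e-6)
print(retired)   # [1]: index 1 aged out; peer caches are told to pop it as well
```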
Another feature of the quantum caches of the present teaching is that they can be used in connection with numerous networked applications. Referring back to FIG. 3, numerous applications 328 that run
across multiple nodes in a network (of various kinds) can receive qubits for application 312 and/or associated classical data for application 325 from the quantum store 308 and/or the classical store
322 in each node. Some example multiple-node applications are described below.
FIG. 6A illustrates an embodiment of a bus network 600 using an entangled cache according to the present teaching. A bus network 600 is just one particular example. It should be understood that other
network architectures can be implemented. A number, n, of nodes 602, 602′, 602″ are connected to a classical channel 604. Each node 602, 602′, 602″ has an entangled qubit cache. The caches are
supplied by an entanglement server (not shown). The qubits are organized pairwise between each set of nodes. Thus, node 1 602 has a “column” of qubits that are entangled with node n 602″, and node 1
602 has another “column” of qubits entangled with node 2 602′, and so on for each node.
The use of quantum cache for addressing is described in connection with an Ethernet-like protocol. However, it should be understood that networks using entangled cache addressing according to the
present teaching are not so limited. In the particular example described in connection with FIG. 6, node 1 602 wants to send a packet to node n 602″. Node 1 602 broadcasts the packet, which contains
an address field on the classical Ethernet channel 604. To determine the contents of the address field, Node 1 samples M qubits from the node n column in its cache and generates a resulting number
that is random. Node n 602″ has entangled pairs for each of these M qubits in its cache. Node 1 602 sends the results of the sampling on the classical Ethernet. Node n 602″ samples the entangled
pairs from its cache and generates a resulting random number, which will match the random address field from Node 1. Node n 602″ matches the random number received on the classical channel to know
that the packet is intended for node n.
All other nodes, such as node 2 602′, also sample their qubit cache for M qubits in the column for node 1 602 to see if the packet is addressed to them. Node 2 602′ does not get a match on the random
number provided by node 1 602 and received via the classical channel. As such, the measured random number represents a quantum source-destination pair address. The probability of a match being a
false match is 1/E, where E is an error rate described further below.
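The matching logic of this addressing example can be sketched as follows. Real nodes obtain perfectly correlated bits by measuring entangled qubit pairs; in the sketch that correlation is emulated by handing both nodes the same pre-generated bit list, since only the protocol logic, not the physics, is being illustrated, and M = 16 is an assumed value.

```python
import secrets

M = 16   # assumed number of qubits sampled per address

# Real nodes get perfectly correlated bits from entangled pairs; here the
# correlation is emulated with a shared list so only the matching is shown.
shared_column = [secrets.randbelow(2) for _ in range(M)]   # node 1 <-> node n

def to_address(bits):
    return int("".join(str(b) for b in bits), 2)

addr_in_packet = to_address(shared_column)    # node 1 puts this in the address field
addr_at_node_n = to_address(shared_column)    # node n measures its entangled partners
addr_at_node_2 = secrets.randbelow(2 ** M)    # node 2's column is uncorrelated

print(addr_in_packet == addr_at_node_n)   # True: the packet is for node n
print(addr_in_packet == addr_at_node_2)   # False with probability 1 - 2**-M
```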
FIG. 6B illustrates a table 650 showing a structure of an entangled qubit cache for a multi-node network of the present teaching. In table 650, N is the size of the address space, which requires sqrt(N) bits; I = N + E, where 1/E is the acceptable error probability (rate), which requires sqrt(E) bits. The total number of qubits required is sqrt(I) for the total address space at a given error rate.
The operation of the entangled quantum caches in the nodes 602, 602′, 602″ described in connection with FIG. 6A-B is based on an addressing application, but numerous other applications can utilize
the shared entanglement in a network configuration according to the present teaching. Additional application examples are provided herein.
FIG. 7 illustrates a block diagram of an embodiment of a quantum-enabled information system 700 that uses an entangled quantum cache 702 according to the present teaching. The quantum cache 702 is
supplied entangled qubits from an entanglement server 704 via a quantum portion of a communication channel 706. The quantum cache 702 supplies ordered and tagged entangled qubits to an application
708. The quantum cache 702 can also supply associated classical information about the particular associated ordered tagged qubit to the application 708. The quantum cache 702 is controlled by a
processor 710. The processor 710 controls a fidelity system 712, a qubit loader 714, and a classical data loader 716 in the quantum cache 702. The processor 710 also controls a classical store 718
and a quantum store 720 in the quantum cache 702. The processor 710 is in communication with the application 708, so that it can command the quantum unloader 722 and the classical unloader 724 to
supply entangled qubits and associated classical data to the application 708 at a desired time. The desired time may be chosen to ensure that entangled qubits supplied at two different nodes share an
entangled state, allowing two remote nodes to share correlated state information. The desired time can be on-demand. The desired time can be predetermined. The desired time can be based on a lifetime
of a quantum state. The desired time can be based on a time when the quantum state was generated. The desired time can be based on an application demand. For example, an application in various
embodiments can access a particular shared entangled state. Also, for example, an application in various embodiments can access a particular type of quantum state. Also, for example, an application
in various embodiments can access a particular basis of a quantum state. Also, for example, an application in various embodiments can access a particular fidelity of a quantum state. Also, for
example, in various embodiments, an application can access a quantum state based on at least one of an entanglement property, a basis of a quantum state, a fidelity of a quantum state, a
time-of-arrival of a quantum state, a source of a quantum state, an age of a quantum state, a half-life of a quantum state, a birth time of a quantum state, a time-of-flight of a quantum state, and/
or a type of a quantum state.
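To make the role of the ordered, tagged qubits and their associated classical data more concrete, the following sketch (ours, not part of the patent text; all names are illustrative) shows one way software might represent a single cache entry and select entries to supply at a desired time:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CacheEntry:
    # The qubit itself lives in the quantum store; this record holds the
    # classical data the processor uses to decide when to supply it.
    position: int                 # order within the quantum store
    fidelity: float               # estimate produced by the fidelity system
    birth_time: float             # when the entangled state was generated
    half_life: Optional[float]    # expected useful lifetime, if known
    source: str                   # which entanglement server supplied it
    basis: str                    # basis of the quantum state
    entangled_with: List[str] = field(default_factory=list)  # partner node IDs

def entries_ready(index, now, min_fidelity):
    # Entries the processor could hand to an application at the desired time.
    return [e for e in index
            if e.fidelity >= min_fidelity
            and (e.half_life is None or now - e.birth_time < e.half_life)]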
FIG. 8 illustrates a block diagram of an embodiment of an application system 800 that utilizes an entangled quantum cache 802 interacting with an application 804 according to the present teaching. A
processor 806 sends and receives application commands to an application 804. The processor 806 sends quantum cache management commands to a quantum store 808. The processor 806 also sends classical
cache management commands to a classical store 810.
Some embodiments of the present teaching utilize an abstraction layer that supports an easy-to-use interface for application coders. This is referred to as a classical application interface (CAPI). The abstraction layer translates between a CAPI and a quantum system. The abstraction layer uses, interprets, and/or generates at least some of the classical data associated with a quantum state.
The quantum mechanical nature of quantum devices adds another level of complexity to the underlying behavior of the devices that provide useful quantum functionality for information system
engineering. Most engineers and scientists are trained in basic programming of causal Newtonian systems. For reference, there are about 1600 quantum physicists worldwide, yet there are upwards of 20
million software professionals worldwide. For quantum mechanical systems to be widely adopted, they must be easily used by classically trained software professionals. A classical application
interface translates between these worlds. The advantage of the CAPI is to allow any coder to apply a quantum system as a black box. It is not necessary for the coder to know how quantum systems
work, only how the quantum systems perform. The CAPI appears to a software developer as a familiar function structure in their chosen programming language. The following are some examples to
illustrate a CAPI and how it works in an entangled quantum cache that is interfaced to an application. These examples are illustrative and not comprehensive.
A cache_pointer identifies a particular quantum cache 808, allowing for multiple caches in a single node. Only one node is shown in FIG. 8, but it is understood that the present teaching applies to
any number of nodes. A qubit_pointer identifies a particular qubit in a cache and its associated metadata. A node identifies a particular node. A channel (not shown) allows for multiple connections
from a single node.
Cache Management Functions include:
1) Integer = Get_Qubit_Count(cache_pointer), which indicates how many qubits are in the cache;
2) Integer = Get_Long_Term_Qubit_count(cache_pointer), which indicates how many qubits are long-term qubits;
3) Integer = Get_Short_Term_Qubit_count(cache_pointer), which indicates how many qubits are short-term qubits;
4) Random_Integer = Sample_Qubit(cache_pointer, qubit_pointer), which samples/collapses a qubit to a classical value;
5) Time = Get_Qubit_Age(cache_pointer, qubit_pointer), which indicates the age of the qubit;
6) Array(n) = Get_Qubit_Entanglement_Map(cache_pointer, qubit_pointer), which indicates which qubits are entangled; and
7) Local_cache_pointer = Put_Entangled_Qubit(cache_pointer, qubit_pointer, node), which puts an entangled qubit at another node, that is, places one member of an entangled pair at the other node.
It should be understood that the specific calls associated with the cache management functions are presented for illustrative purposes and are not intended to limit the present teaching in any way.
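As a rough illustration only, the call pattern of these cache management functions can be mimicked with a purely classical mock (the class below is our own toy stand-in, not an implementation of the present teaching; a real cache would collapse quantum states rather than draw random bits):

import random
import time

class MockQuantumCache:
    # Toy classical stand-in for an entangled quantum cache.
    def __init__(self, entries):
        # each entry: {"long_term": bool, "born": float, "partners": [node ids]}
        self.entries = entries

    def Get_Qubit_Count(self):
        return len(self.entries)

    def Get_Long_Term_Qubit_count(self):
        return sum(1 for e in self.entries if e["long_term"])

    def Get_Qubit_Age(self, qubit_pointer):
        return time.time() - self.entries[qubit_pointer]["born"]

    def Get_Qubit_Entanglement_Map(self, qubit_pointer):
        return self.entries[qubit_pointer]["partners"]

    def Sample_Qubit(self, qubit_pointer):
        # A real cache would measure (collapse) the stored state; the random
        # draw here only mimics the classical result an application sees.
        return random.randint(0, 1)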
In general, an application system 800 that utilizes an entangled quantum cache 802 interacting with an application 804 will include a processor 806 that is able to send and receive application
commands to an application 804 that is easy to use for classically trained software engineers and software developers. The abstraction layer limits, for example, the amount of detailed information
needed to control the quantum store 808 and classical store 810 that is passed to the application commands generated by the application 804 as illustrated in the example provided herein.
In one particular example, the application 804 is a quantum private address application that is described in more detail in connection with the description for FIG. 9. In these embodiments, the
application commands include Secret_Address=Get_Address(node_X), that indicates what is the classical private address that looks random to other nodes.
In other specific embodiments, the application 804 is a super dense coding application that is described in more detail in connection with the description for FIGS. 10-12, below. In these embodiments, the application commands include:
1) Send(transmit_data, channel_number), which sends data;
2) Receive_Data = received(channel_number, node), which receives data;
3) Integer = Get_Entangled_Count(node_address), which checks the cache depth; and
4) Allocate_Quantum_Channel(percent, channel_number), which allocates a percentage of the channel for sending qubits.
An embodiment of a classical application interface for a super-dense coding application would include the following commands.
For the transmitter: 1) Establish_Link(5,10), which commands entangled qubits to be shared between the transmitter and receiver; and 2) Send("Hello",5,10), which commands "Hello" to be sent.
For the receiver: "Hello" = received(5,10), which indicates that "Hello" was received.
For the processor: 1) 107 = Get_Entangled_Count(10), which indicates that only 107 entangled qubits are left, so more of the channel needs to be allocated to exchanging entangled qubits; 2) Allocate_Quantum_Channel(50,10), which allocates 50% of the channel to build the entangled cache; 3) 1025 = Get_Entangled_Count(10), which indicates that there are now 1025 qubits; and 4) Allocate_Quantum_Channel(5,10), which reduces the allocation to 5%.
One feature of the entangled quantum caches of the present teaching is that they can support a variety of classical, semi-classical, and pure quantum applications. Several example applications are
provided below.
One application supported by the quantum entangled cache of the present teaching is providing shared-node metadata for distributed information systems. In this application, the qubits in the
entangled qubit caches provide metadata for one or more of a variety of different classical distributed information systems. The shared-node metadata provided by the quantum entangled caches of the
present teaching can support a variety of protocols that can provide, for example, addressing, timing, location and other information being shared between pairs and/or groups of nodes.
FIG. 9 illustrates an embodiment of a distributed system 900 using an entangled cache to provide shared-node metadata of the present teaching. A communication channel 902 connects a plurality of
nodes 904, 906, 908, 910. The communication channel 902 supports both quantum communication and classical communication through any of a variety of means. In this embodiment, pairs of M entangled
qubits are stored in caches in the various nodes. For example, as illustrated by the diagram 912, node A 904 and B 906 share M entangled qubits. As illustrated by the diagram 914, node A 904 and X
908 share M entangled qubits. As illustrated by the diagram 916, node X 908 and Z 910 share M entangled qubits. These various sets of M pairs of qubits are appropriately tagged and stored in a cache
(not shown) in each node 904, 906, 908, 910, such that they can be accessed by processors in the nodes 904, 906, 908, 910 for processing and/or measurement to implement a desired protocol. The
various M entangled qubits may be distributed by an entanglement server (not shown) over the communication channel 902, or by different means. Some protocols will exchange raw or processed qubits
from the cache over a communication channel 902, but other protocols will not require any exchange of qubits to function.
A packet 918 includes a quantum address and data. The quantum address is, in some embodiments, a random number that is generated by a sender node, and received by a potential receiver node. The
random number represents a quantum source-destination pair address. A receiver node measures qubits entangled with particular nodes to generate a random number representing source-destination address
pairs for those particular nodes. For example, qubits entangled with node A are measured by a node to determine if a received packet is from node A. If a random number in the quantum address of a
packet 918 is a match with a random number generated by a measurement of entangled qubits in a particular receiving node, then the data in the packet is for that receiving node.
An example of a metadata exchange between node A 904 and node B 906 is as follows. Node A 904 measures each of M qubits known to be entangled with node B and generates a random number that represents an
address. This random number is sent classically in the quantum address field of a packet 918 along with some data. Node B 906 measures each of M qubits known to be entangled with node A 904 to
generate a random number. Node B 906 receives the packet 918, and compares the received random number with the generated random number. If there is a match, the data is for node B 906 from node A 904.
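The matching logic of this exchange can be illustrated with a short classical simulation (ours, not the patent's). Measuring ideal entangled pairs at two nodes yields identical random bits, so the simulation simply gives each node pair one shared random bit string:

import secrets

M = 64  # qubits (modeled here as correlated bits) shared per node pair

# One shared random value per node pair stands in for the two caches'
# correlated measurement results (diagrams 912, 914, 916).
pair_bits = {("A", "B"): secrets.randbits(M),
             ("A", "X"): secrets.randbits(M),
             ("X", "Z"): secrets.randbits(M)}

def measure(node, partner):
    # What `node` obtains by measuring its M qubits entangled with `partner`.
    return pair_bits[tuple(sorted((node, partner)))]

# Node A builds a packet addressed to node B.
packet = {"quantum_address": measure("A", "B"), "data": "hello"}

# Every receiver compares the broadcast address against each of its columns.
for receiver in ("B", "X", "Z"):
    for sender in ("A", "B", "X", "Z"):
        key = tuple(sorted((sender, receiver)))
        if sender == receiver or key not in pair_bits:
            continue
        if measure(receiver, sender) == packet["quantum_address"]:
            print(receiver, "accepts the data; it came from", sender)
# Only node B prints; a false match occurs with probability about 1 in 2**M.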
Another feature of the present teaching is that the quantum metadata produced with methods and apparatus according to the present teaching can be used to prevent anyone from knowing which
source-destination pair of nodes is addressed with a quantum key distribution level of assurance. This is because the quantum entangled caches enable two or more nodes to share a random number
“secret” without any exchange of classical data.
In general, such a feature can be applied to any of a variety of addressing schemes. For example, addresses can be one or more of network addresses, memory locations, database indexes, geographical addresses, telephone numbers, and many other identifiers. Any entity that desires to place data at, or communicate with, any other entity possesses a number, n, of entangled qubits with associated
other-paired entangled qubits that are possessed by the other entity. The number n is typically chosen such that it is large enough to minimize address collisions.
One example addressing scheme according to the present teaching is the application of the quantum metadata to a quantum Ethernet (broadcast channel) where the addresses are entangled qubits as
described in connection with FIG. 9. In this addressing scheme, if node A 904 has a packet for node B 906, node 904 encodes using N-qubits in superposition that are ordered and entangled with qubits
in node B 906. The packet includes a quantum address from node A 904 and data. Everyone receives the packet. Until measurement, none of the nodes 904, 906, 908, 910 know their particular address.
When nodes make a measurement, they generate a random number. Then, node A 904 sends the random number via classical broadcast to all the nodes 906, 908, 910. If the random number agrees with a
random number generated in a node when a measurement is made, the packet is for that node. Thus, the random number represents a quantum source-destination pair address.
A feature of this method is that an eavesdropper cannot determine which node the data was directed to nor can the eavesdropper determine the node that transmitted the data. The random number
broadcast classically doesn't reveal anything except to the source and destination pair sharing the data. If the quantum entangled pairs are manipulated by a third party, that is, measured and/or spoofed, these measurements and/or spoofing actions are detectable using a quantum key distribution protocol between the entanglement server and the caches. Said another way, if someone tried to determine the source-destination pair, or spoof the source and/or the destination, that action would destroy the correlation of the random number. This feature makes the addressing scheme private and resistant to spoofing, which is highly desirable for many applications.
As another example of addressing according to the present teaching, consider a simple three-node network that includes a transmitter and two receivers. For example, this three-node network includes
nodes A 904, B 906, and node X 908 of FIG. 9. Rather than being deterministic as in prior art classical addressing schemes, addressing schemes according to the present teaching are similar to hash
collisions. However, a node can make the probability that data is sent to the wrong entity very small if the number of qubits, M, in a cache is much bigger than the address space. For the three-node
network, with only one entangled qubit per node, the qubit could end up to be measured as a zero or a one with 50% probability. With five qubits to handle two addresses, the chances that all three
nodes would get the same random number are very small, (½)^5. To get an address error rate of, say, (1/10)^7, then 20+n qubits are needed, where n represents the address space being covered. Using
twenty-one qubits results in an error rate of (2,000,000)^−1, and twenty-two qubits results in an error rate of (4,000,000)^−1, and so on. As such, the number of qubits used per address can be
notably larger than a number of classical address bits.
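As a quick numerical check of these figures (our own arithmetic, not text from the patent), the false-match probability for an M-qubit address falls as (½)^M:

for M in (5, 20, 21, 22):
    print(M, "qubits -> false match about 1 in", 2 ** M)
# 5 qubits  -> 1 in 32
# 21 qubits -> 1 in 2,097,152 (roughly the "1 in 2,000,000" quoted above)
# 22 qubits -> 1 in 4,194,304 (roughly 1 in 4,000,000)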
Generally, the quantum entangled cache system and method of the present teaching allows nodes to share entangled qubits amongst any other node to which they need to communicate. In various
embodiments, the data is broadcast classically, or may also be sent on a quantum channel. As described above, the entanglement can be N-way entanglement and feed N-caches with M qubits directly from
a single server. The entanglement of the M qubits can include K dimensions. Data may be protected using quantum key exchange and/or encrypted by known classical means. The address information is
provided by measuring selected qubits in the quantum entangled caches. A sender makes a measurement and generates a random number; this number is broadcast with the data. A receiver also makes a measurement of selected qubits to generate a random number and compares that number to those received in the addresses of packets on the network. When the two random numbers match, it can be
concluded that the data was for that node.
In a network with n nodes participating in such a scheme, every node needs to have paired ensembles of entangled qubits with every other node. Every transmitter selects one of those paired ensembles
to address a desired node for communication. In some embodiments, each node needs 2^(n−1) pairs of entangled ensembles in order to be able to address every other node in network with n nodes. The
receiver needs to measure each of these paired ensembles to do the matching with the transmitter's address that's sent classically. And each ensemble needs to have at least n qubits.
In some embodiments, receivers can do measurements one bit at a time. So, for example, the first bit broadcast is compared to the first qubit in every ensemble. Statistically, this should eliminate ½
the potential senders. Then the second qubit, eliminating the next ¼, and so on. In this way, the receiver only needs to do n+(n−1)+(n−2)+ . . . =2n(n−1) comparisons. The likelihood that a receiver
misidentifies a message not intended for the receiver is ½^n. As the address space increases, the probability of misidentified messages decreases. Error rates in addressing can be further reduced by increasing the address space, that is, making a more-sparse address space. For example, using an address space of, say, 2^n for 2^m nodes, where n>m, every extra bit of addressing reduces the error rate by a factor of two.
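The bit-at-a-time elimination just described can be sketched as follows (our illustration; the receiver's measured bits are again modeled as plain lists):

def surviving_senders(address_bits, my_columns):
    # my_columns maps a candidate sender to the list of bits this receiver
    # obtained by measuring the ensemble it shares with that sender.
    candidates = set(my_columns)
    for i, bit in enumerate(address_bits):
        candidates = {s for s in candidates if my_columns[s][i] == bit}
        if not candidates:
            break   # the message was not meant for this receiver
    # Usually empty, or exactly the one sender whose ensemble matches.
    return candidates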
One feature of the present teaching is that the receiver actually obtains two pieces of information by executing the protocol. First, the receiver knows that the message was intended for that
particular receiver. Second, the receiver knows the address of the source of the information.
Some systems according to the present teaching can be used for network initialization in the following way. The scheme is used to develop a set of classical addresses, which are the broadcast results
of the measurement. These classical addresses appear to be random numbers to everyone except the intended receiver. So the transmitter sends a classical packet with an address header that is truly a
random number determined by this scheme. The receiver learns to use classic logic to look for that number, which is actually the equivalent of a source and destination address, but looks random to
everyone else.
Another feature of the present teaching is that it is possible to trade security for addressing overhead. In some embodiments of the present teaching, nodes decide how often to refresh the address
(reinitialize) based on security needs. To be very secure, the address is refreshed for every packet. Less secure implementations only refresh at chosen intervals, much like periodically updating a password.
Also, some systems according to the present teaching use an addressing system, based on a quantum entangled cache described herein, that provides privacy by starting with a shared secret. Every node
pair, also referred to as a source-destination pair, shares a secret at initialization. That secret is an M-bit number, where M-bits is the size of the address space. It is important to note that
this is for pairs of nodes, not singular nodes. The M-bit number must exist for every pair that wants to have a private source destination address. Referring again to the distributed system 900 using
an entangled cache of FIG. 9, when node A 904 wants to talk with node B 906, node A 904 measures M qubits that are entangled with M qubits in node B's 906 cache. Node B 906 also measures M qubits
that are entangled with M qubits in node A 904.
Then node A 904 does a bit-by-bit classical exclusive (XOR) of the value of the measured qubits with the shared secret, and sends the result of the XOR-ed sample classically over the channel 902 to
node B 906. Only node B 906 has the shared pairwise secret. Node B 906 does the same XOR operation on the value of the measured qubits, therefore it's looking for the same number which still appears
to be random. The other nodes, such as node X 908, for example, do not have the pair-wise secret, so they cannot do anything with the quantum address. If node X 908 or another node somehow were able to capture the entangled bits destined for node B 906, it would not be able to fake the identity of node A 904, because it lacks the shared secret to perform the XOR.
In some methods according to the present teaching, the key is refreshed to prevent having a static key (or secret) using quantum secret tumbling in the following way. At any time, a node pair (node A
904 and node B 906, for example) can go into their respective entangled caches and sample (i.e. perform a measurement on M qubits in the cache) again. In some embodiments, this process occurs after
every message. That measurement result (sample) can be XORed with the original secret. The result can become the new, or tumbled, secret that can be used for subsequent messages. This new, or
tumbled, secret can also be referred to as a quantum signature and is one aspect of the present teaching.
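The XOR masking and quantum secret tumbling described above can be illustrated compactly (again our own classical sketch, with shared random integers standing in for the correlated measurement results):

import secrets

M = 128

# A node pair holds the same initialization secret and, after measuring its
# entangled qubits, the same fresh random sample.
secret_a = secret_b = secrets.randbits(M)
sample_a = sample_b = secrets.randbits(M)   # stand-in for correlated measurements

# Node A broadcasts the XOR of its sample with the pairwise secret.
quantum_address = sample_a ^ secret_a

# Node B recognizes the address with its own copies; other nodes lack the secret.
assert quantum_address == sample_b ^ secret_b

# Quantum secret tumbling: both sides fold the latest sample into the secret,
# so the secret never stays static between messages.
secret_a ^= sample_a
secret_b ^= sample_b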
Another application of quantum entangled cache systems according to the present teaching is the implementation of quantum super dense coding. Super dense coding is a powerful quantum communication
scheme that allows a factor of two increase in the transmission capacity compared with a classical communication channel. This is because two classical bits of information can be sent using one
qubit. Quantum caches according to the present teaching serve as a local resource for implementing the super dense coding protocol.
FIG. 10 illustrates a known quantum super dense coding scheme application 1000 operating between a transmitter 1002 and a receiver 1004. Two classical bits of information are sent by a transmitter
1002, which we refer to for simplicity as Alice, to a receiver 1004, which we refer to for simplicity as Bob. These information bits, 00, 01, 10, 11, are coded on one of a pair of entangled qubits
that is prepared in a Bell state by a qubit source 1006. One entangled qubit is provided by the source 1006 to the transmitter 1002, and is modulated with one of the four classical information bits
and then sent to the receiver 1004. The other entangled qubit, which is not modulated, is provided to the receiver 1004 by the source. By processing the modulated qubit and the other entangled qubit,
the receiver 1004 is able to determine which of the four information bits was sent by the transmitter 1002. The source 1006, transmitter 1002, and receiver 1004 use operators such as a quantum CNOT 1008, a Hadamard 1010, and/or a Pauli-Z, a Pauli-X, or a combined Pauli-X and Pauli-Z 1012. The receiver 1004 uses a measurement 1014 on both the modulated qubit from the transmitter 1002 and the other qubit of the
entangled pair provided by the source 1006 to decode the classical information.
FIG. 11 illustrates an embodiment of a super-dense coding system 1100 using an entangled cache according to the present teaching. The transmitter 503 and receiver 505 can operate as described above
in connection with FIG. 5. A cache 1104 is connected to the transmitter 503, and another cache 1106 is connected to the receiver 505. The caches 1104, 1106 are connected to an entanglement server
1102. In some embodiments, this connection is provided by a quantum link 1108, but it should be understood that numerous other connection means can also be used. The caches 1104, 1106 are supplied
entangled qubit pairs by an entanglement server 1102. The entanglement server fills the caches 1104, 1106 with entangled qubits. The caches 1104, 1106 tag the appropriate associated classical
information to each qubit as well as maintaining the qubit in an entangled state as described herein. In this way, the transmitter cache 1104 and the receiver cache 1106 are populated.
Each information bit modulated by the transmitter 503 comprises a qubit pulled from the cache. Each information bit decoded in the receiver 505 utilizes a received modulated qubit sent over a quantum
channel 1110 from the transmitter that is processed as described in connection with FIG. 5 using a qubit that is pulled from the cache 1106. The receiver uses the classical information that is tagged
to each qubit in the cache 1106 to determine which qubit to pull and process with the modulated qubit.
In some embodiments, the transmitter 503 applies operators to qubits in order and sends them to the receiver 505 over a quantum channel 1110. These operators are I: 00, X: 01, Z: 10 and XZ: 11. The
receiver 505 performs CNOT operations on the cached qubits from the cache 1106, in order, with received qubits from the transmitter 503. This operation is followed by a Hadamard transform and a measurement that decode the classical information bits modulated by the transmitter 503. Two bits of classical information are provided over the link 1110 using only one qubit resource.
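The decoding step can be checked with a small linear-algebra simulation (a standard textbook calculation written by us, not code from the patent; the basis ordering and operator conventions are our own choices):

import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# CNOT with the received (first) qubit as control and the cached qubit as
# target, in the basis |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)        # (|00> + |11>)/sqrt(2)
encoding = {"00": I, "01": X, "10": Z, "11": X @ Z}

for bits, op in encoding.items():
    # Transmitter modulates its half of the cached entangled pair and sends it.
    state = np.kron(op, I) @ bell
    # Receiver: CNOT with the cached qubit, Hadamard on the received qubit, measure.
    state = np.kron(H, I) @ (CNOT @ state)
    decoded = format(int(np.argmax(np.abs(state) ** 2)), "02b")
    print(bits, "->", decoded)   # each encoded bit pair decodes to itself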
FIG. 12 illustrates another embodiment of a super-dense coding system 1200 using an entangled cache according to the present teaching. Like the super-dense coding system 1100 described in connection
with FIG. 11, the transmitter 503 and receiver 505 operate as described above in connection with FIG. 5. In this embodiment, an entanglement server 1202 is connected to a transmitter cache 1204 and
to a receiver cache 1206 but with a different architecture than the super-dense coding system 1100 described in connection with FIG. 11. The caches 1204, 1206 are supplied entangled qubit pairs by
entanglement server 1202 and qubits are tagged to build the caches 1204, 1206. The entanglement server 1202 is co-located in one area 1208 with the transmit cache 1204 and the transmitter 503. A
quantum channel 1210 connects the transmitting area 1208 to the receiver 505. The entanglement server 1202 supplies entangled qubits to the receiver cache 1206 using the quantum channel 1210.
Each information bit modulated by the transmitter 503 includes a qubit pulled from the cache. Each information bit decoded in the receiver 505 utilizes a received modulated qubit sent over a quantum
channel 1210 from the transmitter that is processed as described in connection with FIG. 5 using a qubit that is pulled from the cache 1206. The receiver 505 uses the classical information that is
tagged to each qubit in the cache 1206 to determine which qubit to pull and process with the modulated qubit. The transmitter 503 applies operators to modulate the qubits from the cache 1204 in order
and then sends them to the receiver 505 over the quantum channel 1210. The receiver 505 performs CNOT operations on the cached qubits from cache 1206 in order with received qubits from the transmitter
503. The receiver 505 then performs a Hadamard operation and performs a measurement to decode the classical information bit modulated by the transmitter 503. The result is that two bits of classical
information are provided over the link 1210 using only one qubit resource.
In some embodiments, the entanglement server 1202 uses quiet channel intervals to fill the remote cache 1206 at the receiver 505 with entangled qubits. In some embodiments, the transmitter 503 is
sending entangled qubits in advance of knowing what data is desired to be transmitted. The transmitter 503 decides what is desired to be sent, and only sends one qubit for every 2 bits of classical data. The other classical "bit" is derived by the receiver 505 using a combination of the transmitted qubit and the entangled qubit, which may have been sent well in advance. The result is communication with
non-causal-like behavior.
It should be understood that the super dense coding systems with entangled caches described in connection with FIGS. 11-12 are just some specific examples of the systems and methods of the present
teaching. Numerous other architectures can be implemented with the teachings described herein. In various embodiments, various elements of the coding systems 1100, 1200 may be remotely located or
co-located. The distance between elements also varies with the specific implementation. For example, in numerous embodiments of systems according to the present teaching, all or some of the elements
can be located on a same backplane, card, box, rack, or room. Also, in numerous embodiments of systems according to the present teaching, all or some of the elements can be located across a variety
of geographical regions from close to far distances, including both terrestrial and space-based locations. Connection channels can be implemented in a variety of photonic and/or electronic means,
including wireless and wired channels.
While the examples of quantum entangled caches described herein are highly simplified, the caches can include qubits from numerous entanglement servers that can be used for various different purposes
in support of different services and processing applications. For example, one or more caches can store one or more types of qubits, including different types of physical qubits, with different
entanglement conditions and entanglement partners. Also, for example, the caches can provide an application access to a particular quantum state at a particular time. Also, for example, the caches
can provide an application access to a quantum state based on particular classical data associated with that quantum state and/or at a particular time. Caches can be architected in various
configurations, such as FIFO, LIFO, random access, and combinations of these and other storage architectures.
While the Applicant's teaching is described in conjunction with various embodiments, it is not intended that the Applicant's teaching be limited to such embodiments. On the contrary, the Applicant's
teaching encompasses various alternatives, modifications, and equivalents, as will be appreciated by those of skill in the art, which may be made therein without departing from the spirit and scope
of the teaching.
1. A quantum cache comprising:
a) a quantum store having an input that receives a plurality of quantum states, the quantum store configured to store and order the plurality of quantum states and to provide select ones of the
stored and ordered plurality of quantum states to a quantum data output at a first desired time;
b) a fidelity system having an input that is coupled to the quantum store, the fidelity system configured to determine fidelity information associated with at least some of the plurality of
quantum states;
c) a classical store coupled to the fidelity system, the classical store configured to store classical data comprising the determined fidelity information and an index that associates particular
ones of classical data with particular ones of the plurality of quantum states and to supply at least some of the classical data to a classical data output at a second desired time; and
d) a processor connected to the classical store, the processor determining the first desired time based on the index.
2. The quantum cache of claim 1 wherein the quantum store is further configured to perform last-in-first-out (LIFO) quantum state ordering.
3. The quantum cache of claim 1 wherein the quantum store is further configured to provide random access to at least some of the ordered plurality of quantum states to the quantum data output at the
first desired time.
4. The quantum cache of claim 1 further comprising a quantum state loader that provides the plurality of quantum states to the quantum store.
5. The quantum cache of claim 1 further comprising a quantum state unloader that provides the ordered plurality of quantum states to the quantum data output at the first desired time.
6. The quantum cache of claim 1 wherein the fidelity system comprises an age out timer.
7. The quantum cache of claim 1 wherein the fidelity system is further configured to remove quantum states based on a value of an entanglement property.
8. The quantum cache of claim 1 wherein the fidelity system is further configured to determine the fidelity of the at least some of the ordered plurality of quantum states in a probabilistic way.
9. The quantum cache of claim 1 wherein the classical store is configured to store classical data comprising at least one of a time of at least some of the plurality of quantum states, a determined
fidelity of at least some of the plurality of quantum states, and a position of at least some of the plurality of quantum states in the ordered plurality of quantum states.
10. The quantum cache of claim 1 wherein the stored classical data comprises an entanglement map of quantum states in the quantum store.
11. The quantum cache of claim 1 wherein the processor is further configured to determine the index.
12. The quantum cache of claim 1 wherein the processor is further configured to determine the first desired time based on the particular ones of classical data associated with the index.
13. The quantum cache of claim 12 wherein the particular ones of classical data associated with the index comprise at least one of an entanglement property, a basis of a quantum state, a fidelity of
a quantum state, a time-of-arrival of a quantum state, a source of a quantum state, an age of a quantum state, a half-life of a quantum state, a birth time of a quantum state, a time-of-flight of a
quantum state, or a type of a quantum state.
14. A distributed quantum entanglement cache comprising:
a) an entangled quantum state generator that provides a plurality of first and second quantum states, wherein respective ones of the plurality of the first and second quantum states are entangled;
b) a first quantum store coupled to the entangled quantum state generator that receives the plurality of first quantum states, the first quantum store configured to store and order the received
first plurality of quantum states while maintaining an entanglement correlation with respective ones of the second plurality of quantum states and further configured to provide select ones of the
ordered first plurality of quantum states to a quantum data output at a first desired time;
c) a second quantum store coupled to the entangled quantum state generator that receives the second plurality of quantum states, the second quantum store configured to store and order the
received second plurality of quantum states while maintaining an entanglement correlation with the corresponding first plurality of quantum states and further configured to provide select ones of
the ordered second plurality of entangled quantum states to a quantum data output at a second desired time; and
d) a communication channel connected between the first quantum store and the second quantum store, the communication channel being configured to transmit information comprising the first desired
time from the first quantum store to the second quantum store.
15. The distributed quantum entanglement cache of claim 14 wherein the first and second quantum stores are physically located at different locations.
16. The distributed quantum entanglement cache of claim 14 wherein the entangled quantum state generator is configured to provide at least some quantum states having a third quantum state that is
entangled with the first and second quantum state.
17. The distributed quantum entanglement cache of claim 14 wherein the first quantum state has more than one basis.
18. A method of storing quantum states, the method comprising:
a) ordering a plurality of quantum states;
b) storing the ordered plurality of quantum states;
c) providing select ones of the ordered plurality of quantum states at a first desired time;
d) determining a fidelity of at least some of the plurality of ordered quantum states;
e) storing classical data comprising the determined fidelity information and an index that associates particular ones of classical data with particular ones of the plurality of quantum states;
f) providing the classical data at a second desired time; and
g) determining the first desired time based on the index.
19. The method of claim 18 wherein the determining the first desired time based on the index comprises determining the first desired time based on the particular ones of classical data associated
with the index.
20. The method of claim 18 wherein the particular ones of classical data associated with the index comprise at least one of an entanglement property, a basis of a quantum state, a fidelity of a
quantum state, a time-of-arrival of a quantum state, a source of a quantum state, an age of a quantum state, a half-life of a quantum state, a birth time of a quantum state, a time-of-flight of a
quantum state, or a type of a quantum state.
Patent History
Publication number: 20220114471
Filed: May 3, 2021
Publication Date: Apr 14, 2022
Patent Grant number: 11367014
Applicant: Qubit Moving and Storage, LLC (Franconia, NH)
Inventors: Gary Vacon (East Falmouth, MA), Kristin A. Rauschenbach (Franconia, NH)
Application Number: 17/306,850
International Classification: G06N 10/20 (20060101); G06F 12/122 (20060101); | {"url":"https://patents.justia.com/patent/20220114471","timestamp":"2024-11-13T14:57:56Z","content_type":"text/html","content_length":"161445","record_id":"<urn:uuid:12f683dd-ebbf-44c4-830a-31b5a253733d>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00558.warc.gz"} |
Statistical Inference (2 of 3)
Learning Objectives
• Find a confidence interval to estimate a population proportion when conditions are met. Interpret the confidence interval in context.
• Interpret the confidence level associated with a confidence interval.
A Look at 95% Confidence Intervals on the Number Line
Let’s look again at the formula for a 95% confidence interval.
[latex]\begin{array}{l}\mathrm{sample}\text{}\mathrm{statistic}\text{}±\text{}\mathrm{margin}\text{}\mathrm{of}\text{}\mathrm{error}\\ \mathrm{sample}\text{}\mathrm{proportion}\text{}±\text{}2(\mathrm{standard}\text{}\mathrm{error})\end{array}[/latex]
The lower end of the confidence interval is sample proportion – 2(standard error).
The upper end of the confidence interval is sample proportion + 2(standard error).
Every confidence interval defines an interval on the number line that is centered at the sample proportion. For example, suppose a sample of 100 part-time college students is 64% female. Here is the
95% confidence interval built around this sample proportion of 0.64.
We know the margin of error in a confidence interval comes from the standard error in the sampling distribution. For a 95% confidence interval, the margin of error is equal to 2 standard errors. This
is shown in the following diagram.
The width of the interval is the same as the width of the middle 95% of the sampling distribution. The next diagram illustrates this relationship.
When Does a 95% Confidence Interval Contain the True Population Proportion?
If the sample proportion has an error that is less than 2 standard errors, then the 95% confidence interval built around this sample proportion will contain the population proportion.
The sample proportion 0.64 is within 2 standard errors of 0.60, so 0.60 is in the 95% confidence interval built around 0.64.
In the following figure, the sample proportion 0.72 is not within 2 standard errors of 0.60, so 0.60 is not in the 95% confidence interval built around 0.72.
How Confident Are We That a 95% Confidence Interval Contains the Population Proportion?
Following are three confidence intervals for estimating the proportion of part-time college students who are female. We are confident that most of these intervals will contain the population
proportion, like the green intervals shown here. But some will not contain the population proportion, like the red interval shown here.
Of course, we don’t know the population proportion (which is why we want to estimate it with a confidence interval!). In reality, we cannot determine if a specific confidence interval does or does
not contain the population proportion; that’s why we state a level of confidence. For these intervals, we are 95% confident that an interval contains the population proportion. In other words, 95% of
random samples of this size will give confidence intervals that contain the population proportion. The sad news is that we never know if a particular interval does or does not contain the unknown
population proportion.
Connections to the Theoretical Sampling Distribution and Normal Model
For inference procedures, we work from a mathematical model of the sampling distribution instead of simulations. But we always begin our discussion with a simulation to highlight the sampling
process. Simulations also remind us that the sampling distribution is a probability model because the sampling process is random and we look at long-run patterns.
Recall from “Distribution of Sample Proportions” our discussion of the mathematical model for the sampling distribution of sample proportions. For samples of size n, the model has the following
center and spread, both of which are related to a population with a proportion p of successes.
Center: Mean of the sample proportions is p, the population proportion.
Spread: Standard deviation of the sample proportions (also called standard error) is [latex]\sqrt{\frac{p(1-p)}{n}}[/latex].
Shape: A normal model is a good fit for the sampling distribution if the expected number of successes and failures is at least 10. We can translate these conditions into formulas: [latex]np\ge 10[/latex] and [latex]n(1-p)\ge 10[/latex].
If we can use a normal model for the sampling distribution, then the empirical rule applies. Recall the empirical rule from Probability and Probability Distributions, which tells us the percentage of
values that fall 1, 2, and 3 standard deviations from the mean in a normal distribution.
• 68% of the values fall within 1 standard deviation of the mean.
• 95% of the values fall within 2 standard deviations of the mean.
• 99.7% of the values fall within 3 standard deviations of the mean.
When we have a normal model for the sampling distribution, the mean of the sampling distribution is the population proportion. These ideas translate into the following statements:
• 68% of the sample proportions fall within 1 standard error of the population proportion.
• 95% of the sample proportions fall within 2 standard errors of the population proportion.
• 99.7% of the sample proportions fall within 3 standard errors of the population proportion.
Therefore, the empirical rule tells us that there is a 95% chance that sample proportions are within 2 standard errors of the population proportion. A margin of error equal to 2 standard errors,
then, will produce an interval that contains the population proportion 95% of the time. In other words, we will be right 95% of the time. Five percent of the time, the confidence interval will not
contain the population proportion, and we will be wrong. We can make similar statements for the other confidence levels, but these are less common in practice. For now, we focus on the 95% confidence
With the formula for the standard error, we can write a formula for the margin of error and for the 95% confidence interval:
[latex]\begin{array}{l}\mathrm{sample}\text{}\mathrm{statistic}\text{}±\text{}\mathrm{margin}\text{}\mathrm{of}\text{}\mathrm{error}\\ \mathrm{sample}\text{}\mathrm{proportion}\text{}±\text{}2(\mathrm{standard}\text{}\mathrm{error})\\ \stackrel{ˆ}{p}\text{}±\text{}2\sqrt{\frac{p(1-p)}{n}}\end{array}[/latex]
Remember that we can make a statement about our confidence that this interval contains the population proportion only when a normal model is a good fit for the sampling distribution of sample
You may realize that the formula for the confidence interval is a bit odd, since our goal in calculating the confidence interval is to estimate the population proportion, p. Yet the formula requires
that we know p. For now, we use an estimate for p from a previous study when calculating the confidence interval. This is not the usual way statisticians estimate the standard error, but it captures
the main idea and allows us to practice finding and interpreting confidence intervals. Later, we explore a different way to estimate standard error that is commonly used in statistical practice.
Overweight Men
Recall the use of data from the National Health Interview Survey (conducted by the CDC) to estimate the prevalence of certain behaviors such as alcohol consumption, cigarette smoking, and hours of
sleep for adults in the United States. In the 2005–2007 report, the CDC estimated that 68% of men in the United States are overweight. Suppose we select a random sample of 40 men this year and find
that 75% are overweight. Using the estimate from the survey that 68% of U.S. men are overweight, we calculate the 95% confidence interval and interpret the interval in context.
Check normality conditions:
Yes, the conditions are met. The number of expected successes and failures in a sample of 40 are at least 10. We expect 68% of the 40 men to be overweight; [latex]np=40(0.68)[/latex] is about 27. We
expect 32% of the 40 men to not be overweight; [latex]n(1-p)=40(0.32)[/latex] is about 13.
We can use a normal model to estimate that 95% of the time a confidence interval with a margin of error equal to 2 standard errors will contain the proportion of overweight men in the United States
this year.
Calculate the standard error (estimated average amount of error):
[latex]\mathrm{standard}\text{}\mathrm{error}=\sqrt{\frac{0.68(0.32)}{40}}\text{}\approx \text{}0.074[/latex]
Find the 95% confidence interval:
[latex]\begin{array}{l}\mathrm{sample}\text{}\mathrm{proportion}\text{}±\text{}\mathrm{margin}\text{}\mathrm{of}\text{}\mathrm{error}\\ 0.75\text{}±\text{}2(\mathrm{standard}\text
{}\mathrm{error})\\ 0.75\text{}±\text{}2(0.074)\\ 0.75\text{}±\text{}0.148\\ 0.602\text{}\mathrm{to}\text{}0.898\end{array}[/latex]
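For readers who want to double-check the arithmetic, a short script (any language works; this sketch happens to use Python) reproduces the interval:

from math import sqrt

p, n, p_hat = 0.68, 40, 0.75                     # prior estimate, sample size, sample proportion
standard_error = sqrt(p * (1 - p) / n)           # about 0.074
margin_of_error = 2 * standard_error             # about 0.148
print(p_hat - margin_of_error, p_hat + margin_of_error)   # about 0.602 to 0.898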
We are 95% confident that between 60.2% and 89.8% of U.S. men are overweight this year. | {"url":"https://courses.lumenlearning.com/atd-herkimer-statisticssocsci/chapter/introduction-to-statistical-inference-2-of-3/","timestamp":"2024-11-02T20:37:56Z","content_type":"text/html","content_length":"59956","record_id":"<urn:uuid:166d90fe-242a-480d-bd2d-8a46047e5df1>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00461.warc.gz"} |
70 Choose 5 (2024)
• Subtract 5 from 70.
• Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, just like a math tutor.
• C ( 70 , 5 ) = 12103014. There are 12,103,014 ways that 5 items chosen from a set of 70 can be combined. How did we do? Please leave us feedback.
• Learn how to solve c(70,5). Tiger Algebra's step-by-step solution shows you how to find combinations.
• 10 Mar 2024 · This combination calculator (n choose k calculator) is a tool that helps you not only determine the number of combinations in a set (often ...
• Use the combinations calculator to determine the number of combinations for a set.
• n choose k values (without repetition, with repetition): 5 choose 5: 1, 126; 6 choose 1: 6, 6; 6 choose 2: 15, 21; 6 choose 3: 20, 56; 6 choose 4: 15, 126; 6 choose 5: 6, 252; 6 choose 6: 1, 462; 7 choose 1: 7, 7.
• Calculate the number of possible combinations given a set of objects (types) and the number you need to draw from the set, otherwise known as problems of the type n choose k (hence n choose k
calculator), as well as n choose r (hence nCr calculator). ➤ Free online combination calculator, supports repeating and non-repeating combinatorics calculations. See all possibilities for 3 choose
2, 4 choose 3, and other common types of combinations.
• n choose k calculator. Find out how many different ways you can choose k items from n items set without repetition and without order.
• The number of combinations n=10, k=4 is 210 - calculation result using a combinatorial calculator. Online calculator to calculate combinations or combination number or n choose k or binomial
coefficient. Calculates the count of combinations without repetition or combination number.
• This free calculator can compute the number of possible permutations and combinations when selecting r elements from a set of n elements.
• 17 sep 2023 · The number of ways to choose a sample of r elements from a set of n distinct objects where order does matter and replacements are not allowed.
• Find the number of ways of getting an ordered subset of r elements from a set of n elements as nPr (or nPk). Permutations calculator and permutations formula. Free online permutations calculator.
• Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, ...
• Free math problem solver answers your algebra, geometry, trigonometry, calculus, and statistics homework questions with step-by-step explanations, just like a math tutor.
• Lets you pick 5 numbers between 1 and 70. Pick unique numbers or allow duplicates. Select odd only, even only, half odd and half even or custom number of odd/ ...
• Free number generator service with quick book-markable links
• What grade is 5 out of 70 as a percentage? Grade percentage: easily convert 5 out of 70 to a grade and percentage. Powered by Aspose.
• Here you can see how the percentage of 5 out of 70 is calculated and what your score will be on your grading scale if you answered 5 out of 70 questions correctly. Easily find the score and grade for the test percentage. Check your percentage problems and get your grade on a local scale. Easily grade any quiz, test, or assignment, for teachers and students.
2010 Boxes of Marbles
Each of $2010$ boxes in a line contains one red marble, and for $1\leq k\leq2010$ , the box in the $k^{th}$ position also contains k white marbles. A child begins at the first box and successively
draws a single marble at random from each box in order. He stops when he first draws a red marble. Let P(n) be the probability that he stops after drawing exactly n marbles. Find the possible value(s) of n for which $P(n)<\frac{1}{2010}$.
Problem A
Circuit Math
You are enrolled in the Computer Organization and Architecture course at your university. You decide to write a program to help check your work by computing the output value of a combinational
digital circuit, given its inputs.
Consider the circuit shown in Figure 1, which we use for illustration. This circuit has four inputs (letters A through D on the left), each of which is either true or false. There are four ‘gates’
each of which is one of three types: AND, OR, or NOT. Each gate produces either a true or false value, depending on its inputs. The last gate (the OR on the right) produces the output of the entire
circuit. We can write these three types of gates in text by their equivalent logical operators: * for AND, + for OR, and - for NOT. In what follows, we’ll use the operators rather than gates to
describe circuits.
Here is how these operators work. Given an assignment of true (T) or false (F) for each input, the operators produce the truth value indicated in the following tables:
A   B   A B *   A B +
T   T     T       T
F   T     F       T
T   F     F       T
F   F     F       F
Notice that AND and OR take two inputs, whereas NOT operates on only one input. Also, we use postfix notation to write expressions involving operators (like $\verb|A B *|$), where the operator comes
after its input(s) (just as how in Figure 1, each gate in the circuit diagram comes after its inputs).
When we describe a valid circuit in postfix notation, we use the following syntax.
• An uppercase letter (A through Z) is a valid circuit. In other words, an input alone (without any gates) is a valid circuit (which produces as output its own input value).
• If <C1> and <C2> are valid circuits, then ‘<C1> <C2> *’ is a valid circuit that produces the AND of the outputs of the two subcircuits.
• If <C1> and <C2> are valid circuits, then ‘<C1> <C2> +’ is a valid circuit that produces the OR of the outputs of the two subcircuits.
• If <C1> is a valid circuit, then ‘<C1> -’ is a valid circuit that produces the NOT of <C1>’s output.
No other description is a valid circuit.
Thus, one of the ways the circuit in Figure 1 could be described using postfix notation is as the string:
A B * C D + - +
Given a truth value (T or F) for each of the inputs (A, B, C, and D in this example), their values propagate through the gates of the circuit, and the truth value produced by the last gate is the
output of the circuit. For example, when the above circuit is given inputs A=T, B=F, C=T, D=F, the output of the circuit is F.
Given an assignment to variables and a circuit description, your software should print the output of the circuit.
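One natural way to compute this is with a stack: push each input's value as it is read, and pop one or two values whenever an operator appears. The sketch below is only an illustration of that idea (it is not part of the official problem statement):

def evaluate(values, circuit):
    # values: e.g. {'A': True, 'B': False}; circuit: postfix string
    stack = []
    for token in circuit.split():
        if token == '*':
            b, a = stack.pop(), stack.pop()
            stack.append(a and b)
        elif token == '+':
            b, a = stack.pop(), stack.pop()
            stack.append(a or b)
        elif token == '-':
            stack.append(not stack.pop())
        else:                          # an input label A through Z
            stack.append(values[token])
    return 'T' if stack.pop() else 'F'

# The example circuit with A=T, B=F, C=T, D=F evaluates to F:
print(evaluate({'A': True, 'B': False, 'C': True, 'D': False},
               "A B * C D + - +"))     # prints F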
The first line of the input consists of a single integer $n$, satisfying $1 \leq n \leq 26$, denoting the number of input variables. Then follows a line with $n$ space-separated characters. Each
character is either $\verb|T|$ or $\verb|F|$, with the $i$th such character indicating the truth value of the input that is labeled with the $i$th letter of the alphabet.
The last line of input contains a circuit description, which obeys the syntax described above. Each circuit is valid, uses only the first $n$ letters of the alphabet as input labels, and contains at
least $1$ and at most $250$ total non-space characters.
Note that while each variable is provided only one truth value, a variable may appear multiple times in the circuit description and serve as input to more than one gate.
Print a single character, the output of the circuit (either T or F), when evaluated using the given input values.
Sample Input 1:
4
T F T F
A B * C D + - +

Sample Output 1:
F
Getting Familiar with the Normal Distribution
Course Outline
• Getting Started (Don't Skip This Part)
• Statistics and Data Science: A Modeling Approach
• PART I: EXPLORING VARIATION
• Chapter 1 - Welcome to Statistics: A Modeling Approach
• Chapter 2 - Understanding Data
• Chapter 3 - Examining Distributions
• Chapter 4 - Explaining Variation
• PART II: MODELING VARIATION
• Chapter 5 - A Simple Model
• Chapter 6 - Quantifying Error
  □ 6.10 Getting Familiar with the Normal Distribution
• Chapter 7 - Adding an Explanatory Variable to the Model
• Chapter 8 - Digging Deeper into Group Models
• Chapter 9 - Models with a Quantitative Explanatory Variable
• PART III: EVALUATING MODELS
• Chapter 10 - The Logic of Inference
• Chapter 11 - Model Comparison with F
• Chapter 12 - Parameter Estimation and Confidence Intervals
• Finishing Up (Don't Skip This Part!)
• Resources

High School / Advanced Statistics and Data Science I (ABC)
6.10 Getting Familiar With the Normal Distribution
By now you see why normal distributions are often good models of error (aggregation of forces!) and also how you might use them to make predictions. But why is it that distributions that look very
different from one another are all called “normal”? The shape of the normal distribution is intuitively like “a bell”, but let’s consider what that really means.
To be more concrete, let’s go back to Kargle, our favorite video game.
Remember we had that friend who scored 37,000 points in Kargle (shown in red) and we were trying to evaluate how skilled a player she was? When we discovered that the bottom distribution (where the
standard deviation is about 5,000) was the actual distribution of Kargle scores, we were less impressed than when we thought it was the top distribution. As it turns out, the top distribution (with a
standard deviation of about 1,000) is from a game called Bargle.
Normal distributions are roughly “bell-shaped” in that there are way more scores in the middle than there are out in the tails. They are also symmetrical from left to right. But it turns out that
normal distributions are even more regular, and thus quantifiable, than that description.
To illustrate the regularity of this normal shape, let’s just think about the players of both Kargle and Bargle that are within plus or minus (+/-) one standard deviation from the mean. We’ll call
this area of the distribution Zone 1 for now. These are the players with the less extreme scores.
Dividing Scores into Zones Based on Standard Deviation
We constructed a new variable called zone that simply indicates whether each person’s score is within Zone 1 (coded “1”) or outside of it (coded “outside”).
To do this we first transformed each person’s raw score into a z-score (which, you may recall, indicates how many standard deviations a score is from the mean). We then coded “1” in the variable zone
for every player whose z-score was > -1 and < 1. (Don’t worry about doing this in R; you can learn later if you want.)
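(For the curious, here is a rough Python sketch of the same zone coding. It is only an illustration with hypothetical column names, not part of the course materials, which use R.)

# Illustrative sketch only (the course itself uses R); column names are hypothetical.
import pandas as pd

def add_zone(df, score_col="score"):
    z = (df[score_col] - df[score_col].mean()) / df[score_col].std()   # z-scores
    df["zone"] = ["1" if -1 < zi < 1 else "outside 1" for zi in z]     # Zone 1 coding
    return df

# Proportions inside/outside Zone 1, by game:
# add_zone(df).groupby("game")["zone"].value_counts(normalize=True)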
In the histograms below, we have shaded Zone 1 in red, and anything outside of Zone 1 in purple.
Notice that our friend who scored 37,000 falls into Zone 1 for Kargle, but if that was her score in Bargle, she would be outside Zone 1. Putting aside our friend for a moment, what’s the proportion
of players that fall inside Zone 1 in Bargle and Kargle? Let’s run a tally to find out.
tally(zone ~ game, data=VideoGame, format="proportion")
zone Bargle Kargle
1 0.6844 0.6822
outside 1 0.3156 0.3178
Wow, Zone 1—within one standard deviation from the mean—is very similar (about .68) for both Bargle and Kargle! Interestingly, more than half the distribution is within one standard deviation of the mean.
If we are one standard deviation in the positive direction, the z-score would be 1. If we are one standard deviation in the negative direction, the z-score would be -1. So Zone 1, +/- one standard
deviation, would contain all the data for which z-scores fall between -1 and 1.
Let’s loosen our idea of “close to” average and consider the players of both Kargle and Bargle who are +/- two standard deviations from the mean. We’ll call this area Zone 2 for now.
zone Bargle Kargle
2 0.9518 0.9487
outside 2 0.0482 0.0513
Basically, .95 of the scores fall within two standard deviations of the mean. In a normal distribution, scores are so clustered in the center that if you go out just two standard deviations from the
center, you have captured a whole lot of your distribution!
zone Bargle Kargle
1 1 0.6844 0.6822
2 2 0.9518 0.9487
3 3 0.9982 0.9972
4 outside 3 0.0018 0.0028
Zone 3, which is within three standard deviations from the mean, seems to cover almost all of the distribution. If you look at the tally (or look very, very carefully at the histograms), you can see
that there is a tiny proportion of scores outside Zone 3. | {"url":"https://coursekata.org/preview/book/f84ca125-b1d7-4288-9263-7995615e6ead/lesson/9/9","timestamp":"2024-11-05T13:02:06Z","content_type":"text/html","content_length":"95714","record_id":"<urn:uuid:6772dc96-2404-4619-afaa-c9b482652c2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00482.warc.gz"} |
Applied Econometric Time Series – Summary
The book starts off by stating,
Time series econometrics is concerned with the estimation of difference equations containing stochastic components.
Hence the book naturally begins with a full-fledged chapter on difference equations.
Difference Equations
A few examples of difference equations are given, such as the random walk model, structural equation, reduced form equation, and error correction model, to show the reader that difference equations are everywhere in econometrics. Any time series model is, after all, trying to explain a univariate variable or a multivariate vector in terms of lagged values, lagged differences, exogenous variables, seasonality variables, etc. The representative structure for a time series model is a difference equation. Any difference equation can be solved by repeated iteration, given an initial value. If the
initial value is not given, it can be chosen in the form that involves infinite summation and the solution thus obtained by repeated iteration is just one of the many solutions that the difference
equation can possess. However this method of repeated iteration breaks down for higher order difference equations. The chapter then talks about systematically finding the solutions to a difference
equation using the following four steps :
1. Form the homogeneous equation and find all n homogeneous solutions
2. Find a particular solution
3. Obtain a general solution as the sum of the particular solution and a linear combination of all homogeneous solutions
4. Eliminate the arbitrary constants by imposing the initial conditions on the general solution.
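To make these steps concrete, here is a small illustrative check (not code from the book): iterate the first-order linear difference equation y_t = a0 + a1*y_{t-1} and compare the path with its known general solution, the particular solution a0/(1-a1) plus the homogeneous term (y_0 - a0/(1-a1))*a1^t.

# Illustrative sketch: repeated iteration versus the closed-form solution
a0, a1, y0, T = 2.0, 0.5, 10.0, 20

y, iterated = y0, [y0]
for _ in range(T):
    y = a0 + a1 * y                  # one step of the difference equation
    iterated.append(y)

particular = a0 / (1 - a1)           # particular (long-run) solution
closed = [particular + (y0 - particular) * a1**t for t in range(T + 1)]

print(max(abs(u - v) for u, v in zip(iterated, closed)))   # ~0: the two agree
print(iterated[-1], particular)      # |a1| < 1, so the path converges to 4.0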
Thus the solutions to difference equations are usually written as a combination of homogeneous solutions and a specific solution. In all the algebraic jugglery that one needs to do for finding
solutions, the thing that becomes important is the stability of the algebraic equations. The solutions to difference equations could remain stable or explode depending on the structure of the
difference equation, i.e. the coefficients of the difference equation.
The chapter then illustrates the difference equation machinery using cobweb model and derives the solution to a first-order linear difference equation with constant coefficients. In the process it
shows the importance of stability of the solution and the relevance of impulse response analysis. The example makes a reader realize that building an econometric model might be an exercise that is
taken up for various reasons. Forecasting is the obvious one. There are other aspects, like Granger causality analysis, instantaneous causality analysis, and impulse response function analysis, that fall under the umbrella of structural analysis. These kinds of analysis help a modeler infer the relationships amongst the various time series. Solving a homogeneous difference equation
involves writing out the characteristic equation and finding the characteristic roots. These roots decide the stability of the process.
If all the roots are within the unit circle, then the process is stable. If a few roots lie on the unit circle and the rest are all within the circle, then the process is called a unit root process, or a process with order of integration d, where d is the number of roots on the circle. If there are roots outside the circle, then the process explodes. These roots are nothing but the eigenvalues of a specific matrix that results from the difference equation. One subtle point to note is that very often a similar statement is made about the roots of the reverse (inverse) characteristic equation. The inverse characteristic equation is probably the more natural one to write down, and the analogous statement for the stability or instability of the process is: if all the roots are outside the unit circle, then the process is stable; if a few roots are on the unit circle and the rest are outside, then the process is a unit root process; if there are roots inside the circle, then the process is explosive.
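A quick numerical sketch of this stability check (illustrative coefficients only): compute the characteristic roots of an AR(2) difference equation y_t = a1*y_{t-1} + a2*y_{t-2} and inspect their moduli; the roots of the inverse characteristic equation are simply their reciprocals, so the two statements of the stability condition are equivalent.

import numpy as np

a1, a2 = 1.2, -0.5                      # illustrative AR(2) coefficients
roots = np.roots([1, -a1, -a2])         # characteristic equation: x^2 - a1*x - a2 = 0
print(roots, np.abs(roots))             # moduli < 1  => stable process
print(1 / roots, np.abs(1 / roots))     # inverse characteristic roots: moduli > 1 says the same thing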
Finding a particular solution for a difference equation is often a matter of ingenuity and perseverance. The chapter cites some common difference equations and tricks to solve for the particular solution. The chapter concludes by introducing two ways to find a particular solution: one is the method of undetermined coefficients and the second is via lag operators. The lag operator method is more intuitively appealing than the undetermined coefficients method.
Stationary Time Series Models
This chapter touches upon all the aspects of time series modeling that a typical undergraduate course would cover. It starts off with a basic representation of ARMA models, discusses stability of the
various ARMA models. It then introduces the ACF and PACF as tools to get an idea of the underlying process. The Box-Pierce and Ljung-Box statistics for diagnostic testing are mentioned. In terms of model selection, the two most common measures, AIC and SBC, are highlighted, where the former is effective in small samples whereas the latter is effective in large samples; SBC penalizes overparameterization more heavily than AIC. The appendix also mentions the FPE (final prediction error) criterion, which seeks to minimize the one-step-ahead mean squared prediction error. A few examples are given wherein two different ARMA processes are fit to the same dataset and the entire toolbox of ACF, PACF, diagnostic tests, and model selection criteria is used to select the best representative process. The Box-Jenkins model selection framework is introduced and its three main stages, i.e. identification, estimation, and diagnostic checking, are shown via several examples.
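As an aside (not code from the book), the same identification-estimation-selection loop is easy to try out with statsmodels: fit two candidate specifications to one simulated series and compare their AIC and SBC/BIC values.

import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

np.random.seed(0)
y = ArmaProcess(ar=[1, -0.6], ma=[1, 0.4]).generate_sample(nsample=500)   # a true ARMA(1,1)

for order in [(1, 0, 1), (2, 0, 0)]:                                       # two candidate models
    res = ARIMA(y, order=order).fit()
    print(order, "AIC:", round(res.aic, 1), "SBC/BIC:", round(res.bic, 1))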
The section on forecasts is probably the most interesting aspect of this chapter. Not many books explicitly highlight the difference between forecasts based on known parameters and forecasts based on estimated parameters. For any estimated time series model, the forecast error variance is larger than that of a model with known parameters. However, in most of the software packages that are available, you typically get confidence intervals based on the theoretical forecast variance rather than the estimated forecast variance. In any case, one can argue that as the sample size increases, the theoretical forecast error dominates the error component arising from the uncertainty in the estimates.
The question relating to evaluating the forecast coming from competing models is answered well in this chapter. Two popular tests, Granger-Newbold test and Diebold-Mariano test are explained. The
former overcomes the problem of contemporaneously correlated forecast errors. These tests have been mentioned because they relax the harsh restrictions of a typical forecast performance technique,
i.e forecast errors have zero mean and are normally distributed, the forecast errors are serially and contemporaneously uncorrelated. The chapter ends with a discussion of addition of seasonality
component to ARIMA models, denoted ARIMA(p,d,q)(P,D,Q)s, where p and q are the numbers of nonseasonal ARMA components, d is the number of nonseasonal differences, P is the number of multiplicative autoregressive coefficients, D is the number of seasonal differences, Q is the number of multiplicative moving average coefficients, and s is the seasonal period.
Modeling Volatility
There is a certain notoriety associated with ARCH/GARCH models, which are the topic of this chapter. These models have been criticized by many people who claim that volatility modeling with Gaussian models is too naive in this highly complex and nonlinear world. Maybe that is the case. But who am I to pass judgment on models which have made people get a Nobel Prize? I will just do my job of summarizing the models without passing any value judgments on them. The first type of models presented are the ARCH models. All said and done, the story behind these models is solid, i.e. there are periods of homoscedasticity followed by violent, persistent shocks. One way to model this kind of behavior is to build a conditional variance model, and that's exactly what ARCH is. The conditional variance is modeled as an autoregressive process. GARCH is a generalization of ARCH that includes an MA component for the conditional variance process. In fact, these models are such a hit in the academic community that there is an entire family of related models such as IGARCH, TGARCH, EGARCH, etc. I think the problem with spending too much time with these models is that
you unknowingly start believing that one of the family members must be the one representing the volatility of an instrument. As long as you are aware of that meta-problem, I think it's perfectly fine to have a working knowledge of these models.
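For readers who do want a working knowledge, a minimal sketch with the Python arch package (simulated stand-in returns, not data from the book) fits a GARCH(1,1) and exposes the estimated conditional volatility path:

import numpy as np
from arch import arch_model            # pip install arch

np.random.seed(1)
returns = np.random.standard_t(df=6, size=1000)     # stand-in "returns" series

am = arch_model(returns, mean="Constant", vol="Garch", p=1, q=1)
res = am.fit(disp="off")
print(res.params)                       # omega, alpha[1], beta[1] estimates
print(res.conditional_volatility[-5:])  # fitted conditional volatility path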
Models with Trend
This chapter presents unit root tests sans all the complicated math that goes behind it. It starts off by defining a few models and explaining the difference between trend-stationary models and
difference-stationary models. In the former the deterministic trend removal makes the series a stationary series whereas in the latter, a differencing of the series makes it a stationary series. One
way to differentiate between the two is this: detrend the series and check the ACF and PACF of the residuals for anything fishy. If everything is fine, then it is likely a trend-stationary process. However, if the residuals refuse to show any sane structure, then it is likely a difference-stationary process. To be more certain about it, fit a model to the differenced data and then do residual diagnostics.
To answer the question more rigorously, the author introduces the idea of spurious regression.
The t statistic and F statistic for the coefficients are usually high. This does not mean that we can trust the coefficient estimates. In fact, the reason the test statistics take high values is that the residuals are not stationary, as is required for OLS. The derivation of the coefficient estimate in closed form is not presented in the book. However, the end result is presented, i.e. the test statistics grow in proportion to the square root of the sample size. This means that by merely taking a bigger sample, one can make the estimates appear statistically significant.
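A quick simulation (an illustrative sketch, not from the book) makes the point: regress two independent random walks on each other and the usual t-test rejects far too often.

import numpy as np
import statsmodels.api as sm

np.random.seed(2)
n, reps, rejections = 500, 200, 0
for _ in range(reps):
    y = np.cumsum(np.random.randn(n))          # two independent I(1) random walks
    x = np.cumsum(np.random.randn(n))
    res = sm.OLS(y, sm.add_constant(x)).fit()
    rejections += abs(res.tvalues[1]) > 1.96    # "significant" slope?
print(rejections / reps)                        # far above the nominal 5% rate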
So, one needs to think through four cases that typically arise when you regress two series, let’s say {y_t} and {x_t}:
• {y_t} and {x_t} are stationary series - OLS is perfect and all the relevant asymptotic theory applies
• {y_t} and {x_t} are integrated of different orders - Regression is meaningless
• {y_t} and {x_t} are integrated of the same order and the residuals are non-stationary - Spurious regression
• {y_t} and {x_t} are integrated of the same order and the residual is stationary - The variable are said to be cointegrated
The chapter deals with the univariate case, where {y_t} is tested for the presence of a unit root. The first thing that should be highlighted, but has not been in this section, is that:
There is no unit root test that works for a completely general series. You have to assume a certain data generating process, and you can only test the null hypothesis that the coefficients which make that process a unit root process take particular values. So, there is no one-size-fits-all test.
The book assumes an AR(1) DGP and introduces the Dickey-Fuller testing framework. When you regress an integrated variable on its lags, the test statistics follow nonstandard distributions, i.e. functionals of Brownian motion. Hence somebody had to run simulations to tabulate critical values, and that job was done by Dickey and Fuller. So, whenever you see a table of critical values that helps you test your hypothesis, you must thank the people who took the effort to run the simulations and publish the results for everyone to use. The plain vanilla Dickey-Fuller test considers the AR(1) in three forms: the first form has only the lagged value, the second has the lagged value and an intercept, and the third has the lagged value, an intercept, and a linear time trend. Critical values of the t statistics used to test the parameter estimates are tabulated for each of the three forms. There are also critical values for statistics that test joint hypotheses on the coefficients.
An interesting example is presented that shows the apparent dilemma that commonly occurs when analyzing time series with roots close to unity in absolute value. Unit root tests do not have much power
in discriminating between characteristic roots close to unity and actual unit roots. Hence one needs to run two kinds of tests to be more certain about the process. The first type are the Dickey-Fuller tests, where a unit root is the null hypothesis. The second type are tests where the null is stationarity (the KPSS test) and the alternative is a unit root.
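Both families of tests are readily available; as a hedged illustration (not from the book), running them side by side on a simulated random walk shows the complementary logic of the two nulls.

import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

np.random.seed(3)
y = np.cumsum(np.random.randn(400))                      # a pure random walk (unit root)

adf_stat, adf_p, *_ = adfuller(y, regression="c", autolag="AIC")
kpss_stat, kpss_p, *_ = kpss(y, regression="c", nlags="auto")

print("ADF  (H0: unit root)    p-value:", round(adf_p, 3))   # large => cannot reject a unit root
print("KPSS (H0: stationarity) p-value:", round(kpss_p, 3))  # small => reject stationarity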
One has to note that Dickey-Fuller tests assume that the errors are independent and have a constant variance. The chapter then raises 6 pertinent questions and answers them systematically.
1. The DGP may contain both autoregressive and moving average components. We need to know how to conduct the test if the order of the moving average terms is unknown
1. This problem was cracked way back in 1984, when it was shown that an unknown ARIMA(p,1,q) can be well approximated by an ARIMA(n,1,0), where n depends on the sample size
2. We need to know the correct lag length of the AR process to be included
1. Too few lags means that the regression residuals do not behave like white noise processes and too many lags reduces the power of the test to reject the null of unit root.
2. This problem is solved by invoking the result from Sims, Stock and Watson(1990) paper: Consider a regression equation containing a mixture of I(1) and I(0) variables such that the residuals
are white noise. If the model is such that the coefficients of interest can be written as a coefficient on zero-mean stationary variables, then asymptotically , the OLS estimator converges to
a normal distribution. As such,a t-test is appropriate.
3. The way to solve the lag issue is to start with a long lag length and pare it down until the last included lag is statistically significant.
3. What if there are multiple roots in the characteristic equation ?
1. The solution is to perform Dickey-Fuller tests on successive differences of {y_t}
4. There might be roots that require first differences and others that necessitate seasonal differencing.
1. There are methods to distinguish between these two types of unit roots
5. What about structural breaks in the data that can impart an apparent trend to the data ?
1. The presence of structural break might make the unit root testing biased in favor of unit root. There is a nice example that goes to illustrate the reason behind it.
2. Phillip Perron’s framework is suggested to remedy the situation. In fact the book shows an example that economic variable that showed difference stationary behavior started showing trend
stationary behavior in the presence of known structural breaks
3. You can easily simulate a stationary series that contains a structural break and convince yourself that Dickey-Fuller tests are biased.
6. It might not be known whether an intercept and/or time trend belongs in the equation?
1. Monte Carlo simulations have shown that the power of various Dickey-Fuller tests can be very low. These tests will too often indicate that a series contains a unit root.
2. It is important to use a regression equation that mimics the actual DGP. Inappropriately omitting the intercept or time trend can cause the power of the test to go to zero. On the other hand,
extra regressors increase the critical values so that you may fail to reject the null of a unit root.
3. The key problem is that the tests for unit roots are conditional on the presence of the deterministic regressors and tests for the presence of the deterministic regressors are conditional on
the presence of a unit root.
4. To crack this problem , the author shows that a result from the paper from Sims, Stock and Watson can be used. The result goes like this : If the data-generating process contains any
deterministic regressors (i.e., an intercept or a time trend) and the estimating equation contains these deterministic regressors, inference on all coefficients can be conducted using a
t-test or an F-test. This is because a test involving a single restriction across parameters with different rates of convergence is dominated asymptotically by the parameters with the slowest
rates of convergence.(Read the book on time series by Hamilton to understand this statement better).
The models introduced in this book are of two types: the first type are the ones where there is only a trend component or only a stationary component, and the second type contains both components. In the case of models that have both a trend and a stationary component, there is a need to decompose the series into its components. Beveridge and Nelson show how to recover the trend and stationary components from the data. I went through the entire procedure given in the book and found it rather tedious. In any case, state space modeling provides a much more elegant way to address this
decomposition. The chapter ends with a section on panel unit root tests.
MultiEquation Time-Series Models
This chapter deals with multivariate time series. Instead of taking all the series at once and explaining the model, the chapter progresses step by step, or should I say model by model. It starts off
with intervention analysis, which is a formal test of a change in the mean of a time series. The intervention variable is assumed to be exogenous to the system, and the whole point of the analysis is to understand the effect of the intervention variable on the time series. There is subjectivity involved in choosing the type of intervention process. It could be a pulse function, a gradually changing
function or prolonged pulse function. Interesting examples like estimating the effects of metal detectors on Skyjacking and effect of Libyan bombings are given in this section.
The chapter then moves on to transfer function model that is a generalized version of intervention model. Here the exogenous variable is not constrained to follow a deterministic path, but is a
stationary process. This kind of systematic, model-by-model explanation helps a reader understand the approaches tried out before VAR was adopted. You assemble the different exogenous processes into one model and estimate the whole thing.
As far as estimating individual component processes were concerned, one could use the standard Box-Jenkins methodology. However for estimating the coefficients of the final equation, it was more art
and a lot of subjectivity was involved. In this context, the author says
Experienced econometricians would agree that the procedure is a blend of skill, art and perseverance that is developed through practice.
There is a nice case study involving transfer functions that analyses the impact of terrorist attacks on the tourism industry in Italy. Examples like these make this book a very interesting read. These examples serve as anchor points for remembering the important aspects of the various models. One of the many problems with the transfer function approach is the assumption of no feedback. A simple example of “thermostat and room temperature” is given in the book to explain “reverse causality”. In the economic variable scenario, most of the variables are always in a feedback loop. Transfer function modeling assumes all the subprocesses are independent and hence it is limited in its usage.
This set the stage for the evolution of the next type of model, the VAR (vector autoregressive process). My exposure to VAR modeling was via the standard VAR(p) representation. This book showed me that there is another form of VAR(p), i.e. the structural VAR, which is the form the analyst is actually interested in. The standard VAR(p) is a transformed version of the structural VAR and is a computational convenience. Going from the structural VAR to the standard VAR means reducing the number of parameters in the model, and hence there is an identification problem. You can estimate the parameters of the standard VAR, but mapping them back uniquely to the structural VAR is not possible unless you impose restrictions on the error structure of the variables involved. The link between the two kinds of VARs is presented at the beginning of the chapter so that the reader knows that there is going to be some subjectivity in choosing the error structure. Again, this matters less if the researcher is only interested in forecasting.
Stability issues for VAR(p) models are discussed. Frankly, I felt this aspect is wonderfully dealt with in Helmut Lutkepohl's book on multiple time series. In fact I came across VAR in multiple places and
the book that gave me a solid understanding of VAR from a math perspective was the book by Lutkepohl. However the intuition and application of such models is what this chapter stands out for. Also
the math behind VAR is slightly daunting with the vec operators, matrix differentials used all over the place. Hence one can consider this chapter as a gentle introduction to VAR modeling. As far as
estimation is concerned, one can use OLS for each equation and estimate the parameters. If there are varying lag lengths, SUR can be explored. The more I think of these models, the more I realize
that all these models were constructed to give convenient answers to questions like “If a unit shock is applied to this variable, what happens to the system?”. Needless to say any answer provided to
such a question at least in the economic variable scenario is merely a story. It is hard to capture a nonlinear world in a linear form. Nate Silver’s book on Signal vs. Noise has a chapter on
forecasting performance on economic variables. I guess books such as Signal vs. Noise help us in not getting carried away by notions such as impulse response functions etc. Well, all these concepts
such as IR functions are good on paper, but how well they stand up to economic realities is a big question. In any case, the author has to do the job of presenting the literature and my job is summarizing
it. So, let me go ahead.
Once the estimation is done, one might be interested in doing structural analysis. The chapter presents the definition of impulse response functions, the way to compute them, and how to estimate their uncertainties. Whenever you make a forecast using a VAR, you can decompose the forecast error by chunking it and attributing it to the variables in the system. One needs the moving average representation of the standard VAR model to analyze the forecast error variance decomposition. Impulse response functions and forecast error variance decomposition fall under the category of innovation accounting.
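As a hedged illustration of this workflow (statsmodels, simulated data, not an example from the book): fit a VAR, then ask for the impulse responses and the forecast error variance decomposition.

import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

np.random.seed(4)
n = 300
e = np.random.randn(n, 2)
y1, y2 = np.zeros(n), np.zeros(n)
for t in range(1, n):                           # simple bivariate VAR(1) data
    y1[t] = 0.5 * y1[t-1] + 0.2 * y2[t-1] + e[t, 0]
    y2[t] = 0.3 * y2[t-1] + e[t, 1]

res = VAR(pd.DataFrame({"y1": y1, "y2": y2})).fit(maxlags=4, ic="aic")
print(res.k_ar)                                  # lag order chosen by AIC
irf = res.irf(10)                                # impulse response functions (irf.plot() to view)
fevd = res.fevd(10)                              # forecast error variance decomposition
fevd.summary()                                   # shares of each variable's forecast error variance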
The other kinds of structural analysis involved are hypothesis testing, Granger causality, tests with non-stationary variables, etc. Each of these topics is intuitively explained with a few examples. For hypothesis testing, the likelihood ratio (LR) test is suggested. If all VAR variables are stationary, then testing Granger causality can be done via the usual F test route. When testing with non-stationary variables, the book presents the result from the Sims, Stock and Watson paper: if the coefficient of interest can be written as a coefficient on a stationary variable, then a t-test is appropriate. You
can take this result at face value and do all the hypothesis testing. However for the curious ones, it pays to understand the statement better. My first exposure to such a statement came while
reading Hamilton’s book on time series. I realized that when you mix stationary and non stationary variables in to one regression equation, the concept of rate of convergence becomes very important.
I think this is where the whole field of time series math differs from the usual regression models. For every variable that you include in the model, you have to think about the rate of convergence
and in some cases, your standard OLS regression estimates are good enough despite having non-stationary variables in the equation. For more clarity on this, I think it's better to read chapter 8 of
Hamilton’s book. In the context of VAR, the author presents a model to capture the relationship between terrorism and tourism in Spain.
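To make the Granger causality idea concrete, a small hedged sketch (simulated data, not the book's example): lagged x helps predict y, so the test should reject the null that x does not Granger-cause y.

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

np.random.seed(5)
n = 300
x = np.random.randn(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t-1] + 0.4 * x[t-1] + np.random.randn()   # x Granger-causes y by construction

df = pd.DataFrame({"y": y, "x": x})
# H0: the second column (x) does NOT Granger-cause the first column (y)
grangercausalitytests(df[["y", "x"]], maxlag=2)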
What’s the flip side of VAR ?
The VAR approach has been criticized as being devoid of any economic content. The sole role of the economist is to suggest the appropriate variables to be included in VAR. From that point, the
procedure is almost mechanical. Since there is so little economic input in to VAR, it should not be surprising that there is little economic content in the results. Of course, innovation
accounting does require an ordering of the variables, but the selection of the ordering is generally ad hoc.
There are some examples given that impose conditions on the ordering of the variables in VAR to generate impulse response functions. Somehow after reading all those examples, given that I am highly
skeptical about medium to long range economic system analysis, I feel most of the literature on VAR was useful to publish papers and nothing else. Whatever fancy decompositions that one reads, I
think they fall flat in terms of explaining macroeconomic realities. After all the basic model runs on gaussian errors and you are trying to predict stuff in a non-linear world. Ok, if there is
somebody publishing GDP on a daily basis, then may be sheer magnitude of data, some averaging takes place and one can use gaussian models. But applying such models for quarterly data, annual data
seems a futile exercise.
In any case, my interest in going through this book was to read some general stuff on cointegration and VECM. I felt the treatment of VECM model estimation in Lutkepohl was extremely rigorous to the
point that I realized that I had to take a break and revisit the stuff at a later point in time.
Cointegration and Error Correction Models
This section is probably the most relevant to developing trading strategies. “Cointegration” is a term that is often heard in the context of pairs trading. Broadly the term captures the situation
where a linear combination of series that are integrated of the same order is itself integrated of a lower order. More precisely, the components of a vector time series are said to be cointegrated of order (d, b), written CI(d, b), if all the components of the vector are integrated of order d and there exists a linear combination of the components that is integrated of order d-b. This definition is slightly tweaked in different books. For example, Helmut Lutkepohl tweaks the definition a little so that stationary series can also be included in a cointegrated system.
There are four important points that need to be noted about the definition:
• The emphasis is on a linear combination. Theoretically one can think of nonlinear combinations, but that’s an area of active research and not dealt with in this book. Also, the cointegrating vector is not unique, though it can be made unique by normalizing the vector.
• Even though the original definition is restricted to series of order d, it is perfectly possible that only a certain set of variables are integrated. This book introduced me to the concept of
“Multicointegration”, which refers to a situation where there is an equilibrium relationship between groups of variables that are integrated of different orders. Is there ever multicointegration amongst a set of stocks in the real world? I don’t know. Someone has probably done some research on this aspect.
• In a set of n variables there can be at most n-1 cointegrating vectors
• Most of the cointegration literature focuses on the case in which each of the components has a single unit root, because there are hardly any economic variables that are integrated of an order higher than one.
In simple terms, any cointegrated system has a common stochastic trend. The job of the researcher is to tease out that relationship. The crucial insight that helps in doing this is the connection between
a cointegrated system and an error correction model. What’s an error correction model? It looks similar to a VAR but with an additional lagged level variable in the equation. So, if one goes ahead and builds a VAR model on the differenced data of a set of I(1) processes, there could be a risk of misspecification. Why? If there is a subset of variables that are cointegrated, the correct model to use is the error correction model rather than a VAR in differences. The advantage of using an error correction model is that one can tease out the speed of adjustment parameters, which help us understand the way in which each individual series responds to deviations from the common stochastic trend.
The chapter also explains a crucial connection between the VAR model and the error correction model by casting a simple bivariate VAR(1) into two univariate second-order difference equations that have the same characteristic equation. The eigenvalues of the characteristic equation cannot take arbitrary values if the system is cointegrated. It is shown that one of the eigenvalues has to be one and the other less than one in absolute value. This ensures that the bivariate VAR(1) in level variables is cointegrated. In other words, there are restrictions on the coefficients that make the system a cointegrated system. So at once the reader understands that, for any set of I(1) variables, the ECM and cointegration are equivalent, and that the rank of the coefficient matrix in the ECM can be taken as the number of cointegrating relations in the system. The chapter generalizes the findings to an n-variable system.
There are two standard methods of testing for cointegration amongst variables. One is the Engle-Granger method and the second is the Johansen test. The former is computationally easy but has some problems. In the latter, the math (canonical correlation analysis) is a little challenging, but the reward for the slog is that you get a consistent system. In the Engle-Granger case, all you do is this: first you check whether the series are I(1). In the second step, you regress one I(1) series on the other; the t statistics are meaningless, since nobody has told us that the variable on the rhs is the independent variable and the one on the lhs is the dependent variable. All one can do with such a regression is to take the estimated residuals and check them for stationarity. The important thing to realize is that you can’t use the Dickey-Fuller critical values to test the null hypothesis of a unit root here. Why? Because the residuals are estimated, unlike in the case of a unit root test on an observed level variable. Hence there are some kind souls who have tabulated the critical values for such estimated residuals, and one can just go ahead and use them. If you use R, the package author will have taken care of this and you can just invoke a function. Ok, the logical step after you realize that the system is cointegrated is to build a VECM to get a sense of the rates of adjustment of the series, Granger causality, etc. The chapter also has an example where an analysis of a multicointegration system is presented.
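As a hedged sketch of the two-step test (statsmodels already uses the appropriate residual-based critical values, so you do not have to look them up yourself; the data below are simulated for illustration):

import numpy as np
from statsmodels.tsa.stattools import coint

np.random.seed(6)
n = 500
common = np.cumsum(np.random.randn(n))           # shared stochastic trend
y = common + np.random.randn(n)                  # two I(1) series driven by it
x = 0.8 * common + np.random.randn(n)

t_stat, p_value, crit = coint(y, x)              # Engle-Granger residual-based test
print(round(t_stat, 2), round(p_value, 4))       # small p-value => cointegrated
print(crit)                                      # note: these are NOT the plain Dickey-Fuller critical values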
The Engle-Granger procedure, though easy to implement, has a few defects.
• The estimation of the long-run equilibrium regression requires that the researcher place one variable as the regressand and use the other variables as regressors. In large samples, the analysis of the residuals is independent of which variable is chosen as the regressand. However, in small samples one can face a situation where one regression indicates the variables are cointegrated whereas reversing the equation indicates they are not. This problem is compounded in a three-variable or multivariable system.
• Engle-Granger relies on two steps. This means any error in the first step will make the second step meaningless.
The chapter introduces the Johansen procedure to remedy the above defects. Intuitively, the procedure is a multivariate generalization of the Dickey-Fuller test. The math behind it requires some sweat from a reader who has never been exposed to canonical correlation analysis. The output of the procedure is basically two test statistics based on the eigenvalues of the coefficient matrix appearing in the multivariate Dickey-Fuller test. Thanks to Johansen’s work, critical values are provided to infer the number of cointegrating relations in a system. Another beauty of the Johansen procedure is that the normalized eigenvectors actually provide you with a set of equilibrium relations. The other goody that you get by understanding the Johansen procedure is the way to do hypothesis tests on the cointegrating vectors. The good thing about the chapter, and this holds for the entire book, is that each concept is immediately followed by a relevant case study from econometrics. This keeps the reader motivated to understand the material. It’s not some abstract math/stats that is being discussed; a budding econometrician can actually use these concepts in his or her work and develop his or her own theories. The last chapter of the book is on nonlinear time series, which I have skipped reading for now. Maybe I will go over it some other day, some other time.
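For completeness, a hedged statsmodels sketch of the Johansen procedure discussed above (illustrative simulated data only): the trace and maximum-eigenvalue statistics and the candidate cointegrating vectors all come out of a single call.

import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

np.random.seed(7)
n = 500
trend = np.cumsum(np.random.randn(n))                       # one common stochastic trend
data = np.column_stack([trend + np.random.randn(n),
                        0.5 * trend + np.random.randn(n),
                        -trend + np.random.randn(n)])        # three I(1) series, expected rank 2

jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print(jres.lr1)     # trace statistics, compared against the jres.cvt critical values
print(jres.lr2)     # maximum-eigenvalue statistics, compared against jres.cvm
print(jres.evec)    # normalized eigenvectors = candidate cointegrating vectors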
This book falls somewhere between a cookbook of econometric techniques and a full-fledged encyclopedia that covers all the important aspects of econometrics. Hence this book would be daunting for an absolute beginner but a light read for someone who understands the math behind the models. I think it is better to get the math straightened out in one’s head before venturing into this book. In
that way, one can appreciate the intuition behind the various econometric models and there will be many “aha” moments along the way. | {"url":"https://www.rksmusings.com/2013/11/10/applied-econometric-time-series-summary/","timestamp":"2024-11-04T01:38:42Z","content_type":"text/html","content_length":"44780","record_id":"<urn:uuid:34a71d05-4469-4b23-b6bb-96af1e3da99f>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00395.warc.gz"} |
This post is inspired by a twitter exchange with the awesome Joe Schwartz (@JSchwartz10a) and by a running exchange with one of my Geometry students. Joe tweeted out the following picture:
The picture was accompanied by the question : “Do 3rd graders know the answer to this question? Truly curious.” It just so happens that my Lil Dardy is in 3rd grade. I showed her the question and
(briefly) explained the equation written. I replied to Joe that she was not surprised to see it written that seven sixes is the same a five sixes plus two more. However, she did not know any
vocabulary word to describe this. Joe replied, succinctly, “And she doesn’t need one…” It made me smile. It also made me think when I was reviewing for a test with one of my Geometry classes. We just
finished a chapter on triangle bisectors and centers. Loads of vocab in this chapter. Very few new skills, just new words describing relationships. Thinking back to the exchange with Joe I found
myself questioning my decisions in writing the book and in teaching this chapter. During the test review a student asked if there would be any vocabulary on the test. This particular student has
asked this question before just about every test. I answered the way I do just about every time. I told him that he needed to know what these words mean to accurately interpret the questions at hand.
For example, if I ask about altitudes to a triangle, he needs to know what that means. However, there would not be a question where I simply ask him to replicate the definition of an altitude.
Thinking back on this exchange, and this way that I answer the question, I have a ton of questions that I need to ask myself and I will start by posing some of them my readers out there.
• My guess (an uncharitable one) is that the student asking about vocabulary is looking to avoid committing anything formal to his short term memory before a test. Admirable in a certain way, but
what does this question say about what he thinks his job on a test is? Why would students who have been working with words day after day express any serious concern about being asked what those
words mean?
• Real people have real vocabulary that they use in their studies, in their work environment, etc. I recoil at the suggestion that I should do something objectionable now because someone will do it
to my students later. But, I am beginning to wonder whether I am cheating my students a bit. Should I be more emphatic in urging them to be careful about vocabulary now so that they will better
understand what they read or hear later? Am I being lazy when I let them casually refer to the longest side of any triangle as the hypotenuse? [Note: I have written about this before. I DO
correct them, but in a pretty gentle, nudging way. I remind them every time that the hypotenuse is a specific name, but this habit has settled in with my students for a couple of years now.]
• What are we communicating to our math students if we mark points off or hold them accountable in some ways to formal language if they can get their mathematical ideas across through their work?
Are these skills dependent upon one another? Is it okay that my students can swing into action and write the equation of an altitude of a triangle but be uncomfortable and vague if asked to write
a definition for what an altitude of a triangle is? As someone who is so comfortable with these words, I struggle to understand how someone can write that line without being comfortable that they
can write a definition, but I’ve been teaching long enough to know that this is a real thing.
• Is this another instance where students have been trained to think that there is one right way to answer a question and their job is to make sure that they simply regurgitate (if they can decode
correctly) what that correct answer is? I, of course, hope that my grading policies and the way that I communicate in class convince my students that this is not the way life is in my classroom.
However, I know that I am battling impressions that have formed over years.
• More importantly – Does it matter that my students know things like the altitudes of a triangle intersect at the orthocenter? Is there ANY chance that they will remember this in a few months? In
the past few years I taught the course, I pretty much only mentioned the word centroid and avoided talking about incenters, circumcenters, and orthocenters. I am not at all sure that I made the
right decision then or that I made the right decision this year in explicitly defining them. In my text the words centroid and incenter are explicitly defined. Circumcenter and orthocenter do not
even appear in the text. A mistake then? A mistake now? I’d love to hear some advice/opinions.
Gotta get dressed for school now. More thoughts swirling and I hope I am disciplined enough to get them down soon.
Thanks to Joe for prompting this post!
As always, you can reach me here in the comments section or over on twitter where I am @mrdardy
3 thoughts on “Vocabulary”
1. Love how you think.
For me, vocabulary is only as useful as we need it for describing. If a student asks what a word means on an assessment, I’ll always explain. But I want to use the vocabulary precisely.
Circle centers is an interesting example. Why are there multiple centers? Why do we have these names? How are the names related to what they describe? When a learner asks for the orthocenter, do
we answer with how to find it or what it means?
1. Orthocenter is an interesting example. When I did mention that word, I also mentioned that in physics they might encounter the word orthogonal relating to right angles. This, unfortunately,
led some to misremember the orthocenter as the point of intersection for perpendicular bisectors.
2. I agree with John. The first step is to ask ourselves just what the purpose of knowing vocabulary is. As Dan might say, “What’s the headache that knowing that particular word is the aspirin for?”
I think we sometimes fetishize the vocabulary, as if the fact that our students know the definition means that they truly understand the concept the word describes. I’m not qualified to pass
judgment on the geometry examples you describe: (I have no idea what an incenter, orthocenter, or circumcenter is.) And do we sometimes make things more complicated than they need to be? I’m just
guessing, but is the altitude of a triangle its height? If so, can we just call it that? I think you can lead a perfectly normal, fulfilling life without knowing that 6 x 7 = (6 x 5) + (6 x 2) is
an example of something called the distributive property, as long as, like Lil Dardy, you’re not surprised that they’re both 42. | {"url":"https://mrdardy.mtbos.org/2018/01/26/vocabulary/","timestamp":"2024-11-02T22:06:22Z","content_type":"text/html","content_length":"49652","record_id":"<urn:uuid:6e159960-ef07-45e1-903b-113ac38e49c5>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00149.warc.gz"} |
Thermal analysis of anti-icing systems in aeronautical velocity sensors and structures
This work reviews theoretical–experimental studies undertaken at COPPE/UFRJ on conjugated heat transfer problems associated with the transient thermal behavior of heated aeronautical Pitot tubes and
wing sections with anti-icing systems. One of the main objectives is to demonstrate the importance of accounting for the conduction–convection conjugation in more complex models that attempt to
predict the thermal behavior of the anti-icing system under adverse atmospheric conditions. The experimental analysis includes flight tests validation of a Pitot tube thermal behavior with the
military aircraft A4 Skyhawk (Brazilian Navy) and wind tunnel runs (INMETRO and NIDF/COPPE/UFRJ, both in Brazil), including the measurement of spatial and temporal variations of surface temperatures
along the probe through infrared thermography. The theoretical analysis first involves the proposition of an improved lumped-differential model for heat conduction along a Pitot probe, approximating
the radial temperature gradients within the metallic and ceramic (electrical insulator) walls. The convective heat transfer problem in the external fluid is solved using the boundary layer equations
for compressible flow, applying the Illingsworth variables transformation considering a locally similar flow. The nonlinear partial differential equations are solved using the Generalized Integral
Transform Technique in the Mathematica platform. In addition, a fully local differential conjugated problem model was proposed, including both the dynamic and thermal boundary layer equations for
laminar, transitional, and turbulent flow, coupled to the heat conduction equation at the sensor or wing section walls. With the aid of a single-domain reformulation of the problem, which is
rewritten as one set of equations for the whole spatial domain, through space variable physical properties and coefficients, the GITT is again invoked to provide hybrid numerical–analytical solutions
to the velocity and temperature fields within both the fluid and solid regions. Then, a modified Messinger model is adopted to predict ice formation on either wing sections or Pitot tubes, which
allows for critical comparisons between the simulation and the actual thermal response of the sensor or structure. Finally, an inverse heat transfer problem is formulated aimed at estimating the heat
transfer coefficient at the leading edge of Pitot tubes, in order to detect ice accretion, and estimating the relative air speed in the lack of a reliable dynamic pressure reading. Due to the
intrinsic dynamical behavior of the present inverse problem, it is solved within the Bayesian framework by using particle filter.
... Within this field of research, because of the mentioned importance regarding correct velocity measurements, icing specifically in aeronautical pitot tubes has attracted attention in the recent
years, as presented in Fig. 3. Different lines of research can also be distinguished, including ice detection [8,[15][16][17][18][19][20][21][22][23][24][25], numerical simulation [26][27][28][29][30][31][32], reviews or reports covering PT icing [3,6,[33][34][35] and experimentation [36,37]. Despite increasing attempts, the experimental research dedicated to aeronautical PT icing is quite limited. ...
... As Pitot tubes are commercially traded in the aircraft industry, for public research institutions it can be rather difficult to obtain detailed information on composition and performance directly
from the manufacturer. Therefore, the discovery of dangerously underperforming designs on the market is very rare and normally taken out "postmortem", as done by Ref. [28], who predicted the icing of
the "Thales AA" model Pitot tubes installed on the unlucky AF 447 Airbus very precisely by means of numerical simulation. ...
... PIV is a nonintrusive technique to visualize flow patterns in fluids, which is its major advantage for Pitot tube characterization, as done by Refs. [27,28]. The principle is based
on the optical determination of the displacement of flow tracer particles in known time intervals. ...
The correct measurement of airspeed is a crucial task since a lot of algorithms for the autopilot are based on it and even human pilots depend completely on this reading to take corrective and
non-corrective actions to control the aircraft. This work presents a detailed review concerning an important aspect of aeronautical safety, i.e., Pitot Tube (PT) icing. The topics covered include
first the risks present in flight, relating meteorological conditions, and antecedents of air accidents related to the icing of the PT. Then, the principles of operation of the PT, its conventional
design, how such design is currently regulated, as well as unique guidance for experimentation of PT. Also discussed are the principal modelling considerations for the numerical analysis of PTs
including the proper selection of geometry, governing equations, turbulence models, and boundary conditions for the simulation of different flight circumstances. This guidance for both experimental
and numerical analysis is enriched with well-selected state-of-the-art research. Finally, a summary of the most recent advances and future trends is offered. The outcomes of this review exhibit
several interesting ideas showing promising results, such as microwave heating of the pitot surroundings, phase change materials that delay ice formation, and even mechanical internal arrangement of
bulkheads to prevent the ice to travel through the throat of the probe. The combined research efforts on novel safety measures and new designs for aeronautical PT represent a remarkable potential to
improve flight security.
... The geometry of the tube was optimised with numerical simulations using CFD, then the flow coefficients were obtained as a function of Re. Souza et al. (12) worked theoretically and
experimentally on transient thermal behaviour of heated aeronautical PP and wing sections with anti-icing systems. A model is adopted to predict ice formation on wing sections and Pitot tubes, for
both incompressible and compressible flow. ...
... There is very little information available on the composition and performance of commercial PPs. In addition, in order to validate a mathematical model, which predicts the thermal behaviour of a
PP under regular operating conditions or its behaviour in case of a heating failure during flight (12)(13)(14) , the data gathered in this work will be required. A variety of experiments and tests
were conducted in order to obtain such data. ...
... At high Re, flow heating by solid-fluid interactions is no longer negligible. Therefore, term 4 considers pressure work Q_p and viscous dissipation heating Q_vd (Eq. (12) of the cited work; the equation itself is not reproduced in this excerpt), where Q_gen represents the volumetric Joule heating of the EHE, which is located in the solid domain. The time-dependent power consumption Power(t) was experimentally obtained (Section 5) for the respective test condition. For the solid domains, where heat transfer occurs by conduction only, just terms 1, 3 and 4 need to be solved. ...
Aeronautic Pitot probes (PPs) are extremely important for airspeed and altitude measurements in aviation. Failure of the instrument due to clogging caused by ice formation can lead to dangerous
situations. In this work, a commercial aeronautic PP was characterised experimentally regarding its inner composition, material properties and its thermal performance in a climatic wind tunnel.
Performance runs were carried out in order to analyse the thermal response of the PP under various operating conditions, with a particular emphasis on the cooling process in the case of a heating
element failure. Data for the thermal conductivity, diffusivity and specific heat for each material forming the PP were obtained. A numerical model to simulate the thermal behaviour of the PP was
created using Comsol Multiphysics (CM). Experimental data were compared with their numerical counterparts for model validation purposes. After the model was validated, the operation of the PP in
flight conditions was simulated. The failure of the conventional heating system was analysed to obtain the time until the PP reaches a tip temperature where ice formation can be expected. The tip
temperature undercut the zero degrees Celsius mark 165 seconds after the heating element was switched off. The data collected in this work can be used to implement and validate mathematical models in
order to predict the thermal performance of Pitot probes in flight conditions.
... While developing and applying such general purpose algorithm, the need for a number of computational improvements and additional theoretical developments became more evident and led to some
recent advances on the GITT methodology [15][16][17][18][19][20][21][22][23][24][25][26], which are here consolidated. Among such recent advancements, one may point out the single domain
reformulation strategy for complex geometries, the integral balance approach for convergence enhancement of problems with multiscale or abruptly varying coefficients, the proposition of convective
eigenvalue problems for formulations with significant convective effects, and the direct use of nonlinear eigenvalue problems in the integral transformation process of nonlinear PDEs [15][16][17][18]
[19][20][21][22][23][24][25][26]. All the methodological variants here discussed are aimed at the enrichment of the eigenfunction expansion basis, towards increasing the amount of information from
the original formulation that is carried into the eigenvalue problem and then recovered at any spatial position by the corresponding eigenfunctions. ...
... The idea in the single domain formulation is to avoid either approach, and proceed to interpret problem (11) as one single convection-diffusion problem written for a heterogeneous media [29].
This alternative has been recently proposed, in the context of conjugated heat transfer problems [15][16][17][18][19][20][21], when fluid and solid regions were treated as a heterogeneous single
medium, after defining space variable coefficients for the whole domain, that account for the abrupt variations of thermophysical properties and other coefficients, through the solid-fluid
transitions. Figure 1 provides two possibilities for representation of the single domain, either by keeping the external borders of the original overall domain, after definition of the space variable
coefficients, as shown in Fig. 1b, or, if desired, by considering a regular overall domain contour that envelopes the original one, as shown in Fig. 1c. ...
An unifying overview of the Generalized Integral Transform Technique (GITT) as a computational-analytical approach for solving convection-diffusion problems is presented. This work is aimed at
bringing together some of the most recent developments on both accuracy and convergence improvements on this well-established hybrid numerical-analytical methodology for partial differential
equations. Special emphasis is given to novel algorithm implementations, all directly connected to enhancing the eigenfunction expansion basis, such as a single domain reformulation strategy for
handling complex geometries, an integral balance scheme in dealing with multiscale problems, the adoption of convective eigenvalue problems in formulations with significant convection effects, and
the direct integral transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Then, selected examples are presented that illustrate the improvement achieved in
each class of extension, in terms of convergence acceleration and accuracy gain, which are related to conjugated heat transfer in complex or multiscale microchannel-substrate geometries,
multidimensional Burgers equation model, and diffusive metal extraction through polymeric hollow fiber membranes. Numerical results are reported for each application and, where appropriate,
critically compared against the traditional GITT scheme without convergence enhancement schemes and commercial or dedicated purely numerical approaches.
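For orientation, the eigenfunction expansion at the core of all the variants discussed above can be sketched, in generic notation, through the usual transform-inverse pair (the specific weighting functions and eigenvalue problems differ among the extensions):

\[
\bar{T}_i(t) = \int_V w(\mathbf{x})\,\tilde{\psi}_i(\mathbf{x})\,T(\mathbf{x},t)\,dV \quad \text{(transform)}, \qquad
T(\mathbf{x},t) = \sum_{i=1}^{\infty} \tilde{\psi}_i(\mathbf{x})\,\bar{T}_i(t) \quad \text{(inverse)},
\]

where \(\tilde{\psi}_i = \psi_i/\sqrt{N_i}\) are the eigenfunctions of the chosen auxiliary eigenvalue problem, normalized by \(N_i = \int_V w\,\psi_i^2\,dV\). Integral transformation of the original formulation then yields a coupled ordinary differential system for the transformed potentials \(\bar{T}_i(t)\), which is truncated to a finite order and solved numerically, with the solution recovered at any position through the inverse formula.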
... In recent years, there has been an effort to consolidate this knowledge on the GITT into a general purpose open source algorithm, known as the UNIT (UNified Integral Transforms) algorithm [11]
[12][13][14]. Such a demand, together with fairly recent application challenges [15][16][17][18][19][20][21][22][23][24], has induced the proposition of novel computational schemes and theoretical
extensions, that have not yet been presented in a systematic form, as here attempted. Among such recent advancements, one may point out the proposition of progressive filtering for multidimensional
problems [13][14], the implementation of reordering schemes via multiple criteria [13][14], the single domain reformulation strategy for complex geometries [15][16][17][18][19], the solution of
coupled nonlinear reactive flow systems [20], the integral balance approach for convergence enhancement of multiscale problems [21][22], the proposition of convective eigenvalue problems for highly
convective formulations [23], and the direct use of nonlinear eigenvalue problems in the integral transformation process of nonlinear PDEs [24]. ...
... Before incorporating such developments into a general purpose algorithm, it is of interest to compile and link these ideas, so as to permit a continuous unification effort, as here discussed. ...
... Problem (18) allows for the definition of the following integral transform pair: ...
This lecture offers an updated review on the Generalized Integral Transform Technique (GITT), with focus on handling complex geometries, coupled problems, and nonlinear convection-diffusion, so as to
illustrate some new application paradigms. Special emphasis is given to demonstrating novel developments, such as a single domain reformulation strategy that simplifies the treatment of complex
geometries, an integral balance scheme in handling multiscale problems, the adoption of convective eigenvalue problems in dealing with strongly convective formulations, and the direct integral
transformation of nonlinear convection-diffusion problems based on nonlinear eigenvalue problems. Representative application examples are then provided that employ recent extensions on the
Generalized Integral Transform Technique (GITT), and a few numerical results are reported to illustrate the convergence characteristics of the proposed eigenfunction expansions.
... Recent developments in the GITT extended its applicability to complex configurations, previously solvable essentially only by purely numerical approaches. The so-called single domain formulation strategy
allows for a more straightforward treatment of models involving heterogeneous media and complex geometries by the introduction of abruptly spatially varying physical properties and source terms [31]
[32][33][34][35][36][37]. Then, the associated eigenvalue problem carries the information on the space variable coefficients, which account for the transition at the various interfaces between
different materials or geometric regions. ...
... (14a-g) in terms of elementary functions, the GITT itself is employed as a hybrid solution methodology. This strategy has already been successfully applied in previous works, including those that
adopt the single domain formulation [31][32][33][34][35][36][37]. To proceed with the GITT formalism in the solution of Eqs. ...
An analysis of natural convection within a rectangular cavity partially filled with a heat-generating porous medium is carried out through the Generalized Integral Transform Technique (GITT), in
which the laminar flow and energy equations are solved with automatic error control. A single domain reformulation strategy is adopted to rewrite the governing equations within the fluid and the
porous medium as a single heterogeneous medium formulation, with spatially variable physical properties and source terms that account for the abrupt transition of the two regions. This fundamental
study is motivated by the analysis of wet storage of spent nuclear fuel elements with passive cooling of the pool and physical conclusions are drawn from the hybrid numerical-analytical solution.
Increases in the Rayleigh number with constant internal heat generation are found to lower the maximum temperature within the cavity. Moreover, decreasing the aspect ratio also has positive effects
on the cooling of the cavity.
... The geometry of the tube was optimized with numerical simulations using CFD, and the flow coefficients were then obtained as a function of Re. Souza et al. [12] worked theoretically and experimentally on the transient thermal behavior of heated aeronautical PP and wing sections with anti-icing systems. A model is adopted to predict ice formation on wing sections and Pitot tubes, for both incompressible and compressible flow. ...
... From the above summary, it is evident that most work on pitot probes does not focus on aeronautical pitot probes. In addition, the data gathered in this work will be required in order to develop mathematical models that predict the thermal behavior of a PP under regular operating conditions, or its behavior in case of a heating failure during flight ([13], [12,14]). Moreover, such models will need proper validation. ...
In this work, the cool-down and ice accumulation on an aeronautic Pitot probe (PP) in case of a heating element failure was studied experimentally in a climatic wind tunnel (CWT). Through injector
nozzles located in front of the test section, atomized water was introduced into the tunnel in order to simulate the presence of humidity found in different cloud types. Using high-resolution
cameras, the subsequent ice accumulation along the PP was recorded. The experiment runs were carried out under three different conditions, controlling the air temperature, flow velocity and liquid
water content (LWC) in the test section. The flow conditions in the CWT have dynamic similarity with those encountered at cruise speeds of common single-engined propeller aircraft. Analyzing the
indicated pressure measured through the PP's pressure ports, it was possible to correlate the (cone-shaped) ice accumulation on the stagnation pressure port to the measurement error for one test
condition. For the other two test conditions, it was observed that the ice cone's lateral section grew so fast that it did not allow blockage of the stagnation port, keeping the port in contact with
the atmosphere and thus performing correctly.
... In social production and daily life, certain parts of equipment and machinery are exposed to icy weather conditions, and ice accumulation is one of the major hidden dangers affecting the safety of aviation [1], navigation [2], rail transit [3], and power systems [4]. For example, ice accreted on an aircraft degrades its maneuverability, safety, and stability and, in severe cases, leads to the loss of the aircraft and of human lives [5]. ...
Ice detection is an important issue in the field of icing prevention and de-icing. In this study, an experimental platform was built for ice detection using flash pulse infrared thermal wave detection, followed by a quantitative recognition approach for identifying the three-dimensional shape of ice. The new method combines edge recognition with a thickness calculation based on an inverse heat transfer problem, the ice edge being extracted with Principal Component Analysis (PCA) and the Canny algorithm. By processing the ice edges and assigning an initial thickness, a finite element model of the ice was established to numerically simulate the temperature distribution for ice thickness inversion based on the forward heat transfer problem. The ice thickness was then estimated with the Levenberg-Marquardt (LM) method applied to the inverse heat transfer problem. The resulting edges and thickness of the ice were found to be in good agreement with the experimental results, demonstrating the feasibility of the proposed methods. This paves the way towards an effective, accurate, and quantitative identification method for three-dimensional ice shape detection with infrared thermal waves.
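As a rough illustration of the thickness-inversion step (not the finite element model of this study), the sketch below fits an ice-layer thickness so that a simplified lumped cooling model reproduces synthetic surface-temperature data, using the Levenberg-Marquardt option of SciPy's least-squares solver; the forward model, parameter values, and data are all hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def surface_temperature(thickness, t):
    """Hypothetical lumped forward model: surface temperature response of an
    ice layer of given thickness after a flash heat pulse (arbitrary units).
    Thicker ice -> larger thermal mass -> slower decay of the surface signal."""
    tau = 5.0 * thickness            # assumed time constant proportional to thickness
    return 1.0 + 4.0 * np.exp(-t / tau)

# Synthetic "measurements" generated with a true thickness of 3 mm plus noise.
t = np.linspace(0.1, 20.0, 50)
rng = np.random.default_rng(0)
y_meas = surface_temperature(3.0, t) + 0.02 * rng.standard_normal(t.size)

def residuals(p):
    return surface_temperature(p[0], t) - y_meas

# Levenberg-Marquardt inversion of the ice thickness from the temperature history.
sol = least_squares(residuals, x0=[1.0], method="lm")
print(f"estimated thickness: {sol.x[0]:.3f} mm")
```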
... Combined research on aircraft icing and pitot tubes is not very frequent in the literature. Souza et al. [15] worked theoretically and experimentally on the transient thermal behavior of heated aeronautical PP and wing sections with anti-icing systems. A model is adopted to predict ice formation on wing sections and Pitot tubes, for both incompressible and compressible flow. ...
The aim of this work is the design of a pitot probe (PP) prototype that retards the cool-down of the tip in case of a heating element failure. The viability of operation in flight conditions is evaluated. The design consists of a redundant heating system incorporating phase change materials (PCM). The numerical evaluation is developed by combining experimental observations of ice formation with a conjugate heat transfer (CHT) model, to which the heat release due to the phase change of the PCM is added. The modelling assumptions and numerical implementation of the phase change process are presented. The selection of an appropriate PCM is then based on low flammability and volume dilation and on the quantitative effects of the material properties on the heat transfer; a commercial PCM solution based on salt hydrates was chosen as the most adequate for the design. The parametric design of the prototype, based on the design of experiments method and fractional factorial testing, is established. A multiple linear regression model was obtained in order to maximize the cooling retardation. The numerical simulations demonstrate that the prototype PP tip temperature remains above 0 °C for 194 s longer than that of the conventional model analyzed.
... Recently, Knupp et al. [14] proposed a single-domain formulation strategy in combination with the Generalized Integral Transform Technique (GITT). This methodology allows for heterogeneous
multi-region problems to be written as single-domain formulations by making use of spatially variable coefficients with abrupt transitions occurring at the interfaces and was successfully employed in
the solution of different conjugated heat transfer problems [21][22][23][24][25]. This strategy was then improved in order to deal with conjugated conduction-convection heat transfer for
incompressible laminar gas flow in microchannels, within the range of validity of the slip flow regime, in which velocity slip and temperature jump at the wall play a major role in heat transfer. ...
In this work, the direct and inverse analyses of the forced convection of an incompressible gas flow within rectangular channels in the range of the slip flow regime are proposed, taking into account the wall conjugation and the axial conduction effects. The Generalized Integral Transform Technique (GITT) combined with the single-domain reformulation strategy is employed in the direct problem solution of the three-dimensional steady forced convection formulation. A non-classical eigenvalue problem that automatically accounts for the longitudinal diffusion operator is here proposed. The Bayesian framework implemented with the maximum a posteriori objective function is used in the formulation of the inverse problem, whose main objective is to estimate the temperature jump coefficient, the velocity slip coefficient, and the Biot number, using only external temperature measurements, as obtained, for instance, with an infrared measurement system. A comprehensive numerical investigation of possible experimental setups is performed in order to verify the influence of the Biot number, wall thickness, and Knudsen number on the precision of the unknown parameters.
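For Gaussian measurement errors and a Gaussian prior, the maximum a posteriori objective mentioned above takes the familiar regularized least-squares form (written here in generic notation, as a sketch rather than the specific formulation of this work):

\[
S_{\mathrm{MAP}}(\mathbf{P}) = \left[\mathbf{Y} - \mathbf{T}(\mathbf{P})\right]^{\mathsf{T}} \mathbf{W}^{-1} \left[\mathbf{Y} - \mathbf{T}(\mathbf{P})\right] + \left[\mathbf{P} - \boldsymbol{\mu}\right]^{\mathsf{T}} \mathbf{V}^{-1} \left[\mathbf{P} - \boldsymbol{\mu}\right],
\]

where \(\mathbf{Y}\) collects the external temperature measurements, \(\mathbf{T}(\mathbf{P})\) the corresponding model predictions, \(\mathbf{W}\) the measurement error covariance, and \(\boldsymbol{\mu}\) and \(\mathbf{V}\) the prior mean and covariance of the parameters (here the temperature jump coefficient, the velocity slip coefficient, and the Biot number).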
... In addition, the last section deals with lumped-differential formulations, when full or partial lumping of the partial differential heat conduction equation can lead to significant simplification of the problem to be mathematically solved. Special emphasis is given to an improved lumped-differential formulation methodology, known as the Coupled Integral Equations Approach (CIEA) (Mennig and
Özişik 1985;Aparecido et al. 1989;Cotta et al. 1990;Scofano Neto and Cotta 1993;Traiano et al. 1997;Cheroto et al. 1997;Corrêa and Cotta 1998;Cotta 1998;Alves et al. 2000;Regis et al. 2000;Reis et
al. 2000;Su 2001;Su and Cotta 2001;Cotta et al. 2003;Su 2004;Ruperti et al. 2004;Dantas et al. 2007;Pontedeiro et al. 2008;Su et al. 2009;Tan et al. 2009;Naveira et al. 2009;Naveira-Cotta et al.
2010;da Silva and Sphaier 2010;An and Su 2011;Knupp et al. 2012;Sphaier and Jurumenha 2012;An and Su 2013;De Souza et al. 2015;An and Su 2015;Moreira et al. 2015;de Souza et al. 2016). This approach
is based on integral equations for the temperature and heat flux averaged over one or more space variables, combined with approximate Hermite formulas for integration, to offer improved accuracy, but
at the same level of complexity, with respect to the classical lumped analysis procedure. ...
In this chapter, mathematical formulations of macroscopic heat conduction are derived from the First Law of Thermodynamics. Specific forms of the heat conduction equation in isotropic media are given
in Cartesian, Cylindrical, and Spherical coordinates systems, as well as in a general orthogonal coordinate system. Heat conduction equations in anisotropic media and in heterogeneous media are then
derived. Mathematical formulations of one-dimensional transient heat conduction with phase change and in multilayered composite media are presented. Finally, classical and improved lumped parameter
formulations for transient heat conduction problems are analyzed more closely. The so-called Coupled Integral Equations Approach (CIEA) is reviewed as a problem reformulation and simplification tool
in heat and mass diffusion. The averaged temperature and heat flux, in one or more space coordinates, are approximated by Hermite formulae for integrals, yielding analytic relations between boundary
and average temperatures, to be used in place of the usual plain equality assumed in the classical lumped system analysis. The accuracy gains achieved through the improved lumped-differential
formulations are then illustrated through a few typical examples. © Springer International Publishing AG, part of Springer Nature 2018. All rights reserved.
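For reference, the two Hermite integration formulas most often employed in the CIEA are the trapezoidal rule, \(H_{0,0}\), and its corrected version, \(H_{1,1}\), written here for a generic integrand over a slab of thickness \(h\):

\[
H_{0,0}:\quad \int_0^h y(x)\,dx \approx \frac{h}{2}\left[y(0) + y(h)\right], \qquad
H_{1,1}:\quad \int_0^h y(x)\,dx \approx \frac{h}{2}\left[y(0) + y(h)\right] + \frac{h^2}{12}\left[y'(0) - y'(h)\right].
\]

Applied to the averaged temperature and to the averaged heat flux, these approximations yield the analytic relations between boundary and average temperatures mentioned above, replacing the plain equality assumed in the classical lumped analysis.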
An experimental study was conducted to characterize the dynamic ice accretion process over the surface of a typical aeronautic Pitot probe model under different icing conditions. The experimental
study was conducted in the Icing Research Tunnel available at Iowa State University. While a high-speed imaging system was used to record the dynamic ice accretion process, a three-dimensional (3D)
scanning system was also used to measure the 3D shapes of the ice layers accreted on the test model. While opaque and grainy ice structures were found to accrete mainly along the wedge-shaped lip of
the front port and over the front surface of the probe holder under a dry rime icing condition, much more complicated ice structures with transparent and glazy appearance were observed to cover
almost the entire surface of the Pitot probe under a wet glaze icing condition. While a flower-like ice structure was found to grow rapidly along the front port lip, multiple irregular-shaped ice
structures accreted over the probe holder under a mixed icing condition. The characteristics of the icing process under different icing conditions were compared in terms of 3D shapes of the ice
structures, the profiles of the accreted ice layers, the ice blockage to the front port, and the total ice mass on the Pitot probe model. The acquired ice accretion images were correlated with the 3D
ice shape measurements to elucidate the underlying icing physics.
The present work analyzes a coupled nonlinear mathematical model for heat transfer in compressible laminar flow within a parallel plates channel. The governing partial differential equations are
obtained considering the conservation of mass, momentum and energy, based on a two-dimensional steady laminar flow of an ideal gas with constant physical properties and taking into account viscous
dissipation. Two kinds of boundary conditions at the wall were considered, namely, prescribed arbitrary temperature or heat flux longitudinal distributions. The mathematical model was solved using
the hybrid numerical-analytical method known as the Generalized Integral Transform Technique (GITT). A convergence analysis of the proposed eigenfunction expansions for velocity and temperature
fields was undertaken, indicating that fairly low truncation orders are required for the accurate representation of the potentials. The proposed hybrid solution was also critically compared with the
Mathematica NDSolve finite difference-based routine and additional theoretical and experimental results in the literature, with good agreement in all cases considered.
Purpose – The purpose of this paper is to identify freezing in pitot tubes in real time, by means of the estimated heat transfer coefficient (HTC) at the tip of the probe. The prompt identification of such freezing is paramount to activate and control mechanisms for ice removal, which in turn are essential for the safety of the aircraft and its passengers. Design/methodology/approach – The proposed problem is solved by means of an inverse analysis, performed within the Bayesian approach of inverse problems, with temperature measurements assumed available along the pitot probe over time. A heat conduction model is used for describing the average temperature of the pitot tube, which is then rewritten in the form of a state estimation problem. The model is linear and time invariant, so that the inverse problem can be solved using the steady-state Kalman filter (SSKF), a computationally efficient algorithm. Findings – The results show that the SSKF is fully capable of recovering the HTC information from the temperature measurements. Any variation of the HTC – either smooth or discontinuous – is promptly detected with high accuracy. The computational effort is significantly lower than the physical time, so that the proposed methodology is fully capable of estimating the HTC in real time. Originality/value – The methodology herein solves the proposed problem not only by estimating the HTC accurately but also with a very small computational effort, so that real-time estimation and freezing control become possible. To the best of the authors' knowledge, no similar publications have been found so far.
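A minimal sketch of the idea (not the model of this paper) is given below: a scalar lumped-temperature state is augmented with a slowly varying HTC-related parameter, the steady-state Kalman gain is obtained by iterating the Riccati recursion to convergence, and the fixed gain is then applied to noisy temperature readings. The model structure, dimensions, and noise levels are all assumptions chosen only for illustration.

```python
import numpy as np

# Assumed linear time-invariant model: state = [lumped temperature, HTC-related parameter].
A = np.array([[0.95, -0.05],    # temperature relaxes at a rate set by the HTC-related state
              [0.00,  1.00]])   # HTC-related parameter modeled as a random walk
C = np.array([[1.0, 0.0]])      # only the temperature is measured
Q = np.diag([1e-4, 1e-3])       # process noise covariance (assumed)
R = np.array([[1e-2]])          # measurement noise covariance (assumed)

# Iterate the discrete Riccati recursion until the covariance converges; the
# corresponding constant gain K is the steady-state Kalman filter (SSKF) gain.
P = np.eye(2)
for _ in range(1000):
    P_pred = A @ P @ A.T + Q
    K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
    P_next = (np.eye(2) - K @ C) @ P_pred
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next

def sskf_step(x_hat, y):
    """One SSKF step: predict with the model, correct with the fixed precomputed gain."""
    x_pred = A @ x_hat
    return x_pred + K @ (y - C @ x_pred)

# Filter a synthetic measurement sequence.
rng = np.random.default_rng(1)
x_true, x_hat = np.array([1.0, 0.5]), np.zeros(2)
for _ in range(200):
    x_true = A @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = C @ x_true + rng.multivariate_normal(np.zeros(1), R)
    x_hat = sskf_step(x_hat, y)
print("estimated [temperature, HTC-related parameter]:", x_hat)
```

Because the gain is computed once, each filtering step costs only a couple of small matrix-vector products, which is what makes real-time operation inexpensive.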
The injection of CO2 into oil reservoirs is used by the oil and gas industry for enhanced oil recovery (EOR) and/or the reduction of environmental impact. The compression systems used for this task
work with CO2 in supercritical conditions, and the equipment used is energy intensive. The application of an optimization procedure designed to find the optimum operating conditions leads to reduced
energy consumption, lower exergy destruction, and reduced CO2 emissions. First, this work presents two thermodynamic models to estimate the amount of power necessary for a multi-stage CO2 compression
system in floating production storage and offloading (FPSO) using accurate polytropic relationships and equations of state. Second, a thermodynamic analysis using the first and second laws of
thermodynamics is conducted to identify possible improvements in energy consumption and the sources of the compression unit’s irreversibilities. In the final step, optimization procedures, using two
methods with different approaches, are implemented to minimize the total power consumption. As the number of stages and the pressure drop between them influence the total power required by the
compressors; these are considered as the input parameters used to obtain the inlet pressure at each stage. Three different compositions with variations in CO2 content, i.e., pure CO2, pure \(\mathrm{CH}_4\), and 70% CO2 + 30% \(\mathrm{CH}_4\), are also investigated as three different operating scenarios. The optimal configurations and pressure ratios result in a reduction in power consumption of up to 9.65%, mitigation of CO2 emissions by up to 1.95 t/h, and savings in exergy loss of up to 23.9%, when compared with conventional operating conditions.
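The qualitative trade-off that this optimization exploits can be seen with a deliberately simplified model: ideal gas with constant isentropic exponent, equal pressure ratio per stage, and intercooling back to the inlet temperature. This is only a sketch of the effect of stage count, not the real-fluid, equation-of-state-based model of the paper; the mass flow, gas properties, efficiency, and pressure levels below are assumptions.

```python
import numpy as np

def multistage_power(p_in, p_out, T_in, n_stages, m_dot=10.0,
                     k=1.28, R=188.9, eta_s=0.8):
    """Ideal-gas estimate of total compression power [W] for a CO2-like gas.
    Equal pressure ratio per stage, intercooling back to T_in between stages.
    k: isentropic exponent, R: gas constant [J/(kg K)], eta_s: isentropic efficiency."""
    r_stage = (p_out / p_in) ** (1.0 / n_stages)
    cp = k * R / (k - 1.0)
    # Isentropic temperature rise per stage, corrected by the isentropic efficiency.
    dT_stage = T_in * (r_stage ** ((k - 1.0) / k) - 1.0) / eta_s
    return n_stages * m_dot * cp * dT_stage

# Compressing from 3 bar to 250 bar (supercritical delivery), inlet at 313 K.
for n in (1, 2, 3, 4, 5):
    W = multistage_power(3e5, 250e5, 313.0, n)
    print(f"{n} stage(s): {W / 1e6:.2f} MW")
```

Under these simplifications the total power drops as stages are added (with diminishing returns), which is the basic reason the number of stages and the interstage pressures are natural optimization variables.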
Vessels with navigation routes in areas where the ambient temperature can drop below +5 °C, with a relative humidity above 65%, will have technical solutions implemented for monitoring and combating ice accumulation in the intake routes of gas turbine power plants. Because gas turbines are not designed and built to allow the admission of foreign objects (in this case, ice), ice accumulation must be prevented through anti-icing systems rather than removed afterwards through defrost systems. Naval anti-icing systems may use compressed air, supersaturated steam, exhaust gases, electricity, or a combination of these as the energy source. The monitoring and optimization of the operation of the anti-icing system gives the gas turbine power plant an operation as close as possible to the normal regimes stipulated in the ship's construction or retrofit specification.
This work presents an application of an exact analytical approach to estimate the temperature field of solid-state electronics (SSE). A partial lumped approach in the chip’s height was performed,
obtaining a two-dimensional mathematical model. The internal heat generation (HG) regions caused by Joule effect due to internal components were modeled as Gaussian functions, and the classical
integral transform technique (CITT) was used to solve the problem. The developed formulation was applied to three illustrative HG layouts for chips. In addition, the finite difference method was also
implemented for verification and comparison purposes. The CITT provided fast computing and accurate results with small truncation orders in problems with multiple non-uniform heated regions.
Furthermore, the formulation presented in this work may be applied to any SSE internal HGs layout, being a useful tool for thermal assessment of SSEs.
It is very difficult to remove ice accreting on some miniature aircraft components with irregular surfaces such as flight sensors, increasing the risk of serious flight accidents. However, few
existing anti-icing/de-icing systems can be fabricated on such complex surfaces. In this study, a novel sandwich structural electric heating coating (SEHC) is proposed to solve this problem. An
intermediate heating layer was coated between the upper and lower electrode layers in SEHC for heating on complex surfaces. The heating layer was prepared by adding multi-wall carbon nanotubes
(MWCNTs) into a polyurethane matrix. The SEHC with a 2% MWCNT weight fraction showed the best electric heating properties. The electric heating uniformity mechanism of the SEHC was explained by establishing a resistance model and through simulation analysis. Moreover, apart from its highly efficient anti-icing/de-icing ability, the SEHC exhibited excellent electric heating properties, including timely response, repeatability, and reliability towards coating damage. In addition, combining the SEHC with a super-hydrophobic coating demonstrates even more efficient anti-icing/de-icing performance. Thus, the SEHC concept provides a potential anti-icing/de-icing solution for miniature complex components, owing to its fabrication convenience, damage resistance, and anti-icing/de-icing functions.
Purpose The purpose of this work is to revisit the integral transform solution of transient natural convection in differentially heated cavities considering a novel vector eigenfunction expansion for
handling the Navier-Stokes equations on the primitive variables formulation. Design/methodology/approach The proposed expansion base automatically satisfies the continuity equation and, upon integral
transformation, eliminates the pressure field and reduces the momentum conservation equations to a single set of ordinary differential equations for the transformed time-variable potentials. The
resulting eigenvalue problem for the velocity field expansion is readily solved by the integral transform method itself, while a traditional Sturm–Liouville base is chosen for expanding the
temperature field. The coupled transformed initial value problem is numerically solved with a well-established solver based on a backward differentiation scheme. Findings A thorough convergence
analysis is undertaken, in terms of truncation orders of the expansions for the vector eigenfunction and for the velocity and temperature fields. Finally, numerical results for selected quantities
are critically compared to available benchmarks in both steady and transient states, and the overall physical behavior of the transient solution is examined for further verification. Originality/
value A novel vector eigenfunction expansion is proposed for the integral transform solution of the Navier–Stokes equations in transient regime. The new physically inspired eigenvalue problem with
the associated integral transformation fully shares the advantages of the previously obtained integral transform solutions based on the streamfunction-only formulation of the Navier–Stokes
equations, while offering a direct and formal extension to three-dimensional flows.
A novel strategy to improve the mass transport performance in membraneless redox flow batteries (MRFB) is proposed, based on a symmetrical sinusoidal wall corrugation of the solid electrodes. The
continuity, Navier–Stokes, and mass transport equations are solved with a hybrid analytical-numerical method known as the Generalized Integral Transform Technique (GITT). The effects of the Reynolds
number and the amplitude of the corrugation are analyzed. The corrugated MRFBs are shown to be more suitable for higher Reynolds number applications, still within the laminar flow regime, when
crossover is a less limiting factor. For smaller Reynolds numbers, the crossover is shown to offset the gains in reactant conversion with the introduction of the corrugation. In addition, the
benefits in terms of limiting current density with corrugated RFBs are mainly associated with the increase in reactive area with its use.
A generalized and systematic approach is presented to the analytic solution of seven different classes of linear heat and mass diffusion problems. After the basic general equations are set forth, they are solved for a wide range of applications. In many cases, solutions are presented graphically. The first four chapters provide unified solutions, and the remaining chapters are devoted to many applications from science and engineering.
A mathematical model was developed to both describe an airfoil electrothermal anti-ice system operation and enable prediction of its main parameters. A reference case was chosen to define the
mathematical model and support the results validation. The first law of thermodynamics is applied to airfoil solid surface and runback water flow. In addition, liquid water is subjected to mass and
momentum conservation principles. The overall heat transfer coefficient, between gaseous flow and airfoil, is very sensitive to solid surface temperature gradients and runback water evaporation,
requiring an adequate solution of the thermal boundary layer. Therefore, the mathematical model included dynamic and thermal boundary layer equations in integral form, which considers variable
properties, pressure, and temperature gradients on the surface, coupled heat and mass transfer effects, and laminar to turbulent transition region modeling.
Energy conservation and sustainable development demands have been driving research efforts, within the scope of thermal engineering, towards more energy-efficient equipment and processes. In this
context, the scale reduction in mechanical fabrication has been permitting the miniaturization of thermal devices, such as in the case of micro-heat exchangers [1]. More recently, heat exchangers
employing micro-channels with characteristic dimensions below 500 μm have been calling the attention of researchers and practitioners, towards applications that require high heat removal demands and/
or space and weight limitations [2]. Recent review works [2, 3] have pointed out discrepancies between experimental results and classical cor-relation predictions of heat transfer coefficients in
micro-channels. Such deviations have been stimulating theoretical research efforts towards a better agreement between experiments and simulations, through the incorporation of different effects that
are either typically present in micro-scale heat transfer or are effects that are normally disregarded at the macro-scale and might have been erroneously not accounted for in micro-channels. Our own
research effort was first related to the fundamental analysis of forced convection within micro-channels with and without slip flow, as required for the design of micro-heat exchangers in steady,
periodic and transient regimes [4, 5]. Also recently, in Refs. [6–11], the analytical contributions were directed towards more general problem formulations, including viscous dissipation, axial
diffusion in the fluid and three-dimensional flow geometries. Then, this fundamental research was extended to include the effects of axial fluid heat conduction and wall corrugation or roughness on
heat transfer enhancement [12]. The work of Maranzana et al. [13] further motivated the present analysis, dealing with longitudinal wall heat conduction effects in symmetric micro-channels.
This paper analyses the recently suggested particle approach to filtering time series. We suggest that the algorithm is not robust to outliers for two reasons: the design of the simulators and the
use of the discrete support to represent the sequentially updating prior distribution. Both problems are tackled in this paper. We believe we have largely solved the first problem and have reduced
the order of magnitude of the second. In addition we introduce the idea of stratification into the particle filter which allows us to perform on-line Bayesian calculations about the parameters which
index the models and maximum likelihood estimation. The new methods are illustrated by using a stochastic volatility model and a time series model of angles. Some key words: Filtering, Markov chain
Monte Carlo, Particle filter, Simulation, SIR, State space.
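To make the algorithm concrete, the sketch below runs a basic SIR (bootstrap) particle filter with stratified resampling on a simple stochastic volatility model; the model and its parameter values are illustrative assumptions rather than those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, T, N = 0.95, 0.3, 200, 2000    # assumed AR(1) log-volatility parameters

# Simulate data from x_t = phi*x_{t-1} + sigma*eta_t,  y_t ~ N(0, exp(x_t)).
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
y = np.exp(x / 2) * rng.standard_normal(T)

def stratified_resample(weights):
    """Stratified resampling: one uniform draw inside each of N equal-probability strata."""
    n = weights.size
    positions = (rng.random(n) + np.arange(n)) / n
    cumw = np.cumsum(weights)
    cumw[-1] = 1.0                          # guard against round-off
    return np.searchsorted(cumw, positions)

particles = sigma / np.sqrt(1 - phi**2) * rng.standard_normal(N)   # stationary prior draw
filtered_mean = np.zeros(T)
for t in range(T):
    # SIR step 1: propagate through the transition density (bootstrap proposal).
    particles = phi * particles + sigma * rng.standard_normal(N)
    # SIR step 2: weight by the measurement density N(y_t; 0, exp(x_t)).
    logw = -0.5 * (particles + y[t] ** 2 * np.exp(-particles))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtered_mean[t] = np.sum(w * particles)
    # SIR step 3: resample, here with stratification to reduce resampling noise.
    particles = particles[stratified_resample(w)]

print(f"filtered log-volatility at the final time: {filtered_mean[-1]:.3f}")
```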
Kernel density estimation techniques are used to smooth simulated samples from importance sampling function approximations to posterior distributions, resulting in revised approximations that are
mixtures of standard parametric forms, usually multivariate normal or T‐distributions. Adaptive refinement of such mixture approximations involves repeating this process to home‐in successively on
the posterior. In fairly low dimensional problems, this provides a general and automatic method of approximating posteriors by mixtures, so that marginal densities and other summaries may be easily
computed. This is discussed and illustrated, with comment on variations and extensions suited to sequential Bayesian updating of Monte Carlo approximations, an area in which existing and alternative
numerical methods are difficult to apply.
The purpose of this work is to propose a simple combined model for the thermal analysis of Pitot probes that includes conjugated heat transfer and a modified Messinger ice formation formulation, in
order to describe ice accretion in electrically heated aeronautical Pitot tubes. The proposed lumped-differential model is solved by making use of the Method of Lines as implemented in the routine
NDSolve, from the numerical-symbolic computational system Mathematica v.9.0. The importance of considering the heat transfer within the probe structure, which accounts for heat conduction along the probe and the thermal capacitance of its internal elements, is then illustrated for the adequate prediction of ice formation on the stagnation and tip regions. Solutions are provided for actual
adverse atmospheric conditions, extracted from a recently proposed certification envelope for Pitot probes.
This work advances a hybrid integral transforms methodology for the solution of conjugated heat transfer problems involving the thermal interaction between a solid wall and an external flow. A single
domain formulation strategy is employed, coupling the two regions (solid and fluid) by accounting for the transition of the materials through space variable thermophysical properties and source terms
with abrupt variations at the interface. The resulting energy equation is solved using a hybrid numerical-analytical technique known as the Generalized Integral Transform Technique (GITT).
Illustrative results of the converged eigenfunction expansions for the temperature field are presented, for the test case of a flat plate subjected to an internal uniform heat generation.
The present work advances a recently introduced approach based on combining the Generalized Integral Transform Technique (GITT) and a single domain reformulation strategy, aimed at providing hybrid
numerical–analytical solutions to convection–diffusion problems in complex physical configurations and irregular geometries. The methodology has been previously considered in the analysis of
conjugated conduction–convection heat transfer problems, simultaneously modeling the heat transfer phenomena at both the fluid streams and the channels walls, by making use of coefficients
represented as space variable functions with abrupt transitions occurring at the fluid–wall interfaces. The present work is aimed at extending this methodology to deal with both fluid flow and
conjugated heat transfer within arbitrarily shaped channels and complex multichannel configurations, so that the solution of a cumbersome system of coupled partial differential equations defined for
each individual sub-domain of the problem is avoided, with the proposition of the single-domain formulation. The reformulated problem is integral transformed through the adoption of eigenvalue
problems containing the space variable coefficients, which provide the basis of the eigenfunction expansions and are responsible for recovering the transitional behavior among the different regions
in the original formulation. For demonstration purposes, an application is first considered consisting of a microchannel with an irregular cross-section shape, representing a typical channel
micro-fabricated through laser ablation, in which heat and fluid flow are investigated, taking into account the conjugation with the polymeric substrate. Then, a complex configuration consisting of
multiple irregularly shaped channels is more closely analyzed, in order to illustrate the flexibility and robustness of the advanced hybrid approach. In both cases, the convergence behavior of the
proposed expansions is presented and critical comparisons against purely numerical approaches are provided.
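As a minimal illustration of the single-domain idea (a one-dimensional stand-in, not the formulation of this work), a single space-variable coefficient can be defined over the whole domain, with an abrupt but smooth transition between wall and fluid values across the interface; the property values and transition width below are arbitrary assumptions.

```python
import numpy as np

def space_variable_conductivity(x, x_interface=0.3, k_wall=0.2, k_fluid=0.6,
                                transition_width=0.01):
    """Single-domain coefficient: one k(x) covering both the wall (x < x_interface)
    and the fluid (x > x_interface), with a steep tanh transition that carries the
    wall/fluid information into a single heterogeneous-medium formulation."""
    s = 0.5 * (1.0 + np.tanh((x - x_interface) / transition_width))
    return k_wall + (k_fluid - k_wall) * s

x = np.linspace(0.0, 1.0, 11)
print(np.round(space_variable_conductivity(x), 3))
```

In the actual GITT solution, such coefficients enter the eigenvalue problem itself, which is what allows the expansion basis to recover the transitional behavior between the regions.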
An extension of a recently proposed single domain formulation of conjugated conduction-convection heat transfer problems is presented, taking into account the axial diffusion effects at both the
walls and fluid regions, which are often of relevance in microchannels flows. The single domain formulation simultaneously models the heat transfer phenomena at both the fluid stream and the channel
walls, by making use of coefficients represented as space variable functions, with abrupt transitions occurring at the fluid-wall interface. The generalized integral transform technique (GITT) is
then employed in the hybrid numerical-analytical solution of the resulting convection-diffusion problem with variable coefficients. With axial diffusion included in the formulation, a nonclassical
eigenvalue problem may be preferred in the solution procedure, which is itself handled with the GITT. To allow for critical comparisons against the results obtained by means of this alternative
solution path, we have also proposed a more direct solution involving a pseudotransient term, but with the aid of a classical Sturm-Liouville eigenvalue problem. The fully converged results confirm
the adequacy of this single domain approach in handling conjugated heat transfer problems in microchannels, when axial diffusion effects must be accounted for.
The theory and algorithm behind the open-source mixed symbolic-numerical computational code named UNIT (unified integral transforms) are described. The UNIT code provides a computational environment
for finding solutions of linear and nonlinear partial differential systems via integral transforms. The algorithm is based on the well-established analytical-numerical methodology known as the
generalized integral transform technique (GITT), together with the mixed symbolic-numerical computational environment provided by the Mathematica system (version 7.0 and up). This paper is aimed at
presenting a partial transformation scheme option in the solution of transient convective-diffusive problems, which allows the user to choose a space variable not to be integral transformed. This
approach is shown to be useful in situations when one chooses to perform the integral transformation on those coordinates with predominant diffusion effects only, whereas the direction with
predominant convection effects is handled numerically, together with the time variable, in the resulting transformed system of one-dimensional partial differential equations. Test cases are selected
based on the nonlinear three-dimensional Burgers' equation, with the establishment of reference results for specific numerical values of the governing parameters. Then the algorithm is illustrated in
the solution of conjugated heat transfer in microchannels.
In this rigorous and thorough analysis three concepts of heat conduction are studied: improved lumped-differential formulations, the generalized integral transform technique, and symbolic
computation. Addressing problem formulation, solution methodology and computational implementation, the authors develop an improved lumped-differential formulation for heat conduction problems,
present a unified hybrid numerical-analytical solution methodology for linear and nonlinear problems, and provide an introduction to mixed symbolic-numerical computation. Special topics and applications illustrate the theory, including extended surfaces, drying, ablation, conjugated problems and anisotropic media. Sample computer programs, using mixed symbolic-numerical computation, are
presented in notebook format, developed within the Mathematica system.
A theoretical–experimental study of the conjugated heat transfer problem associated with the transient thermal behavior of a heated aeronautical Pitot tube is undertaken, including flight tests of
experimental validation with the military aircraft A4 Skyhawk probe. The aim is to demonstrate the importance of accounting for the conduction–convection conjugation in more complex models that
attempt to predict the thermal behavior of the anti-icing system of such sensors under adverse atmospheric conditions. The theoretical analysis involves the proposition of an improved
lumped-differential model for heat conduction along the probe, approximating the transversal temperature gradients within the metallic and ceramic walls. The convective heat transfer problem in the
external fluid is solved using the boundary layer equations for compressible flow, applying the Illingsworth variables transformation considering a locally similar flow. The nonlinear partial
differential equations are solved using the Generalized Integral Transform Technique in the Mathematica v7.0 platform. The experimental analysis involves the use of thermocouples fixed to the surface
of the Pitot tube and temperature measurements acquired by a data logger installed in the frontal cone of the airplane. The transient thermal behavior has been promoted by the A4 pilot through
intermittent disconnection and reconnection of the Pitot probe heating system, which allowed for critical comparisons between the model and the actual flight thermal response of the Pitot tube.
The transient behaviour of conjugated heat transfer in laminar microchannel flow is investigated, taking into account the axial diffusion effects, which are often of relevance in microchannels, and
including pre-heating or pre-cooling of the region upstream of the heat exchange section. The solution methodology is based on the Generalized Integral Transform Technique (GITT), as applied to a
single domain formulation proposed for modelling the heat transfer phenomena at both the fluid stream and the channel wall regions. By making use of coefficients represented as space dependent
functions with abrupt transitions occurring at the fluid–wall interfaces, the mathematical model carries the information concerning the transition of the two domains, unifying the model into a single
domain formulation with variable coefficients. The proposed approach is illustrated for microchannels with polymeric walls of different thicknesses. The accuracy of approximate internal wall
temperature estimates deduced from measurements of the external wall temperatures, accounting only for the thermal resistance across the wall thickness, is also analyzed.
The objective of this paper is to introduce applications of Bayesian filters to state estimation problems in heat transfer. A brief description of state estimation problems within the Bayesian
framework is presented. The Kalman filter, as well as the following algorithms of the particle filter: sampling importance resampling and auxiliary sampling importance resampling, are discussed and
applied to practical problems in heat transfer.
The present work summarizes the theory and describes the algorithm related to the construction of an open source mixed symbolic-numerical computational code named UNIT — UNified Integral Transforms — that provides a development platform for finding solutions of linear and nonlinear partial differential equations via integral transforms. The reported research was performed by making use
of the symbolic computational system Mathematica v.7.0 and the hybrid numerical-analytical methodology Generalized Integral Transform Technique — GITT. The aim here is to illustrate the robust and
precision controlled simulation of multidimensional nonlinear transient convection-diffusion problems, while providing a brief introduction of this open source code. Test cases are selected based on
nonlinear multi-dimensional formulations of the Burgers equations, with the establishment of reference results for specific numerical values of the governing parameters. Special aspects and
computational behaviors of the algorithm are then discussed, demonstrating the implemented possibilities within the present version of the UNIT code.
The present work deals with conjugated heat transfer in heat spreaders made of a nanocomposite substrate with longitudinally molded multiple straight micro-channels. An experimental analysis is
undertaken to validate a recently proposed methodology for the solution of conjugated conduction–convection heat transfer problems, which are often of relevance in thermal micro-systems analysis,
based on a single domain formulation and solution of the resulting problem through integral transforms. The single domain formulation simultaneously models the heat transfer phenomena at both the
fluid streams and the channels walls by making use of coefficients represented as space variable functions with abrupt transitions occurring at the fluid–wall interfaces. The Generalized Integral
Transform Technique (GITT) is then employed in the hybrid numerical–analytical solution of the resulting convection–diffusion problem with variable coefficients. The experimental investigation
involves the determination of the surface temperature distribution over the heat spreader with the molded microchannels that exchange heat with the base plate by flowing hot water at a prescribed
mass flow rate. The infrared thermography technique is employed to investigate the response of the heat spreader surface temperature to a hot inlet fluid flow, aiming at the analysis of micro-systems
that provide a thermal response from either their normal operation or due to a promoted stimulus for characterization purposes.
Heat transfer in microchannels is analyzed, including the coupling between the regions upstream and downstream of the heat transfer section and taking into account the wall conjugation and axial
diffusion effects which are often of relevance in microchannels. The methodology is based on a recently proposed single-domain formulation for modeling the heat transfer phenomena simultaneously at
the fluid stream and the channel walls, and applying the generalized integral transform technique (GITT) to find a hybrid numerical–analytical solution to the unified partial differential energy
equation. The proposed mathematical model involves coefficients represented as space-dependent functions, with abrupt transitions at the fluid–wall interfaces, which carry the information concerning
the transition of the two domains, unifying the model into a single-domain formulation with variable coefficients. Convergence of the proposed eigenfunction expansions is thoroughly investigated and
the physical analysis is focused on the effects of the coupling between the downstream and the upstream flow regions.
A class of nonlinear diffusion-type problems is handled through a hybrid method. This method incorporates the ideas in the generalized integral transform technique to reduce the original partial
differential equation into a denumerable system of coupled ordinary differential equations. These equations can then be solved through standard numerical techniques, once the system is truncated to a
finite order. Sufficient conditions for the convergence of the truncated finite system are then examined. An application is considered that deals with a transient radiative fin problem, which is
quite suitable for illustrating the solution methodology and convergence behavior.
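For a feel of how the integral transformation reduces a partial differential equation to an ordinary differential system, the sketch below applies the classical sine-eigenfunction expansion to the linear test problem \(u_t = u_{xx}\) on \(0 < x < 1\) with homogeneous Dirichlet boundaries. In this linear case the transformed ODEs decouple and integrate exactly, whereas nonlinear problems such as the radiative fin treated above produce a coupled system that must be truncated and solved numerically.

```python
import numpy as np

# Linear diffusion test problem: u_t = u_xx on 0 < x < 1, u(0,t) = u(1,t) = 0,
# initial condition u(x,0) = x(1 - x).
# Eigenfunctions psi_i(x) = sqrt(2) sin(i*pi*x), eigenvalues mu_i = (i*pi)^2.
N = 20                                    # truncation order of the expansion
i = np.arange(1, N + 1)
mu = (i * np.pi) ** 2

# Transformed initial condition ubar_i(0) = int_0^1 psi_i(x) x(1-x) dx (analytic here).
ubar0 = np.sqrt(2.0) * 2.0 * (1.0 - np.cos(i * np.pi)) / (i * np.pi) ** 3

def u(x, t):
    """Inverse transform: u(x,t) = sum_i psi_i(x) * ubar_i(t), where the decoupled
    transformed ODEs give ubar_i(t) = ubar_i(0) * exp(-mu_i * t)."""
    psi = np.sqrt(2.0) * np.sin(i * np.pi * x)
    return np.sum(psi * ubar0 * np.exp(-mu * t))

print(f"u(0.5, 0.05) = {u(0.5, 0.05):.5f}")
```

Increasing the truncation order N is the convergence control referred to throughout these works: the expansion is simply extended until the reconstructed potential stops changing to the requested number of digits.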
This paper reviews the background to and the current status of analyses developed to address the problem of icing on aircraft. Methods for water droplet trajectory calculation, ice accretion prediction, aerodynamic performance degradation and an overview of ice protection system modelling are presented. The paper addresses the issues involved in the development of icing analyses, including problem formulation and assumptions, solution techniques, validation and the incorporation of empirical inputs where a purely theoretical approach is not feasible. Results are presented to illustrate the capabilities of the analyses when applied to practical design problems. Recommendations are made for further research.
The present work summarizes the theory and describes the algorithm related to an open-source mixed symbolic-numerical computational code named unified integral transforms (UNIT) that provides a
computational environment for finding hybrid numerical-analytical solutions of linear and nonlinear partial differential systems via integral transforms. The reported research was performed by
employing the well-established methodology known as the generalized integral transform technique (GITT), together with the symbolic and numerical computation tools provided by the Mathematica system.
The main purpose of this study is to illustrate the robust precision-controlled simulation of multidimensional nonlinear transient convection-diffusion problems, while providing a brief introduction
of this open source implementation. Test cases are selected based on nonlinear multidimensional formulations of Burgers’ equation, with the establishment of reference results for specific numerical
values of the governing parameters. Special aspects in the computational behavior of the algorithm are then discussed, demonstrating the implemented possibilities within the present version of the
UNIT code, including the proposition of a progressive filtering strategy and a combined criteria reordering scheme, not previously discussed in related works, both aimed at convergence acceleration
of the eigenfunction expansions.
This paper presents solution methods for convective heat transfer problems which take into account heat propagation in the solid in contact with a moving fluid. The method is referred to as the solution of conjugated problems. In particular, the paper treats heat transfer in laminar fluid flow in circular and planar tubes with allowance for the dissipation of mechanical energy. In addition, both steady- and unsteady-state heat transfer problems for flow of a compressible fluid past a plate are considered. In all cases heat transfer in the fluid is discussed in relation to that in a solid wall. On the basis of the analysis of the solution, a new criterion is introduced which characterizes the effect of the thermophysical properties of the wall on heat transfer. A few examples are considered for illustration purposes.
This text presents several generalized analytical methods of solution for a variety of classes of commonly incurred heat transfer problems. It covers problems that are time-dependent, linear
heat-oriented, or involve mass diffusion.
A theoretical model for ice growth due to droplets of supercooled fluid impacting on a subzero substrate is presented. In cold conditions rime (dry) ice forms and the problem reduces to solving a
simple mass balance. In milder conditions glaze (wet) ice forms. The problem is then governed by coupled mass and energy balances, which determine the ice height and water layer thickness. The model
is valid for “thin” water layers, such that lubrication theory may be applied and the Peclet number is small; it is applicable to ice accretion on stationary and moving structures. A number of
analytical solutions are presented. Two- and three-dimensional numerical schemes are also presented to solve the water flow equation; these employ a flux-limiting scheme to accurately model the
capillary ridge at the leading edge of the flow. The method is then extended to incorporate ice accretion. Numerical results are presented for ice growth and water flow driven by gravity, surface
tension, and a constant air shear.
A one-dimensional mathematical model is developed describing ice growth due to supercooled fluid impacting on a solid substrate. When rime ice forms, the ice thickness is determined by a simple mass
balance. The leading-order temperature profile through the ice is then obtained as a function of time, the ambient conditions, and the ice thickness. When glaze ice forms, the energy equation and
mass balance are combined to provide a single first-order nonlinear differential equation for the ice thickness, which is solved numerically. Once the ice thickness is obtained, the water height and
the temperatures in the layers may be calculated. The method for extending the one-dimensional model to two and three dimensions is described. Ice growth rates and freezing fractions predicted by the
current method are compared with the Messinger model. The Messinger model is shown to be a limiting case of the present method.
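In the rime (dry) regime described above, the ice thickness follows directly from a mass balance on the impinging supercooled water, since all of it freezes on impact; a minimal numerical sketch is given below, with the collection efficiency, liquid water content, airspeed, and ice density chosen only for illustration. In the glaze (wet) regime the mass balance must instead be coupled with the energy balance, yielding the first-order ODE for the ice thickness mentioned in the abstract.

```python
import numpy as np

# Rime ice mass balance: all impinging water freezes, so rho_ice * dB/dt = beta * LWC * V,
# where B is the ice thickness.
rho_ice = 880.0      # rime ice density [kg/m^3] (assumed)
beta = 0.5           # local collection efficiency [-] (assumed)
LWC = 0.5e-3         # liquid water content [kg/m^3] (assumed)
V = 90.0             # freestream speed [m/s] (assumed)

t = np.linspace(0.0, 600.0, 601)          # 10 minutes of exposure
B = beta * LWC * V / rho_ice * t          # thickness grows linearly in time
print(f"rime ice thickness after 10 min: {B[-1] * 1000:.2f} mm")
```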
Flight in all weather conditions has necessitated correctly detecting icing and taking reasonable measures against it. This work aims at the detection and identification of airframe icing based on
statistical properties of aircraft dynamics and reconfigurable control protecting aircraft from hazardous icing conditions. A Kalman filter is used for the data collection for the detection of icing,
which aerodynamically deteriorates flight performance. A neural network process is applied for the identification of the icing model of the aircraft, which is represented by five parameters based on past
experiments for iced wing airfoils. Icing is detected by a Kalman filtering innovation sequence approach. A neural network structure is embodied such that its inputs are the aircraft estimated
measurements and its outputs are the parameters affected by ice, which corresponds to the aircraft inverse dynamic model. The necessary training and validation set for the neural network model of the
iced aircraft are obtained from the simulations of nominal model, which are performed for various icing conditions. In order to decrease noise effects on the states and to increase training
performance of the neural network, the estimated states by the Kalman filter are used. A suitable neural network model of aircraft inverse dynamics is obtained by using system identification methods
and learning algorithms. This trained model is used as an application for the control of the aircraft that has lost its controllability due to icing. The method is applied to F16 military and A340
commercial aircraft models and the results seem to be good enough.
The simplest conjugated boundary value problems of heat transfer, in which heat conduction equations are solved in common for a body with heat sources and for a liquid flowing round the body, are studied. The method of the asymptotic solution of integral equations occurring in conjugated problems is presented.
This second edition of our book extends the modeling and calculation of boundary-layer flows to include compressible flows. The subjects cover laminar, transitional and turbulent boundary layers for
two- and three-dimensional incompressible and compressible flows. The viscous-inviscid coupling between the boundary layer and the inviscid flow is also addressed. The book has a large number of
homework problems.
Analytical solution is obtained for laminar forced convection inside tubes including wall conduction effects in the axial direction, based on a radially lumped wall temperature model, and accounting
for external convection. The ideas in the generalized integral transform technique are extended to accommodate for the resulting more involved boundary condition and accurate numerical results
obtained for quantities of practical interest such as bulk fluid temperature, lumped wall temperature, and Nusselt number. The effects of external convection and axial conduction along the wall on
these heat transfer quantities are then investigated through consideration of typical values for, respectively, Biot number and a wall-to-fluid conjugation parameter. Convergence characteristics of
the present approach are also briefly examined.
A unified approach for solving convection-diffusion problems using the Generalized Integral Transform Technique (GITT) was advanced and coined as the UNIT (UNified Integral Transforms) algorithm, as
implied by the acronym. The unified manner through which problems are tackled in the UNIT framework allows users that are less familiar with the GITT to employ the technique for solving a variety of
partial-differential problems. This paper consolidates this approach in solving general transient one-dimensional problems. Different integration alternatives for calculating coefficients arising
from integral transformation are discussed. Besides presenting the proposed algorithm, aspects related to computational implementation are also explored. Finally, benchmark results of different types
of problems are calculated with a UNIT-based implementation and compared with previously obtained results.
In the current article, the problem of in-flight ice accumulation on multi-element airfoils is studied numerically. The analysis starts with flow field computation using the Hess-Smith panel method.
The second step is the calculation of droplet trajectories and droplet collection efficiencies. In the next step, convective heat transfer coefficient distributions around the airfoil elements are
calculated using the Integral Boundary-Layer Method. The formulation accounts for the surface roughness due to ice accretion. The fourth step consists of establishing the thermodynamic balance and
computing ice accretion rates using the Extended Messinger Model. At low temperatures and low liquid water contents, rime ice occurs for which the ice shape is determined by a simple mass balance. At
warmer temperatures and high liquid water contents, glaze ice forms for which the energy and mass conservation equations are combined to yield a single first order ordinary differential equation,
solved numerically. Predicted ice shapes are compared with experimental shapes reported in the literature and good agreement is observed both for rime and glaze ice. Ice shapes and masses are also
computed for realistic flight scenarios. The results indicate that the smaller elements in multielement configurations accumulate comparable and often greater amounts of ice compared to larger
elements. The results also indicate that the multi-layer approach yields more accurate results compared to the one-layer approach, especially for glaze ice conditions.
From the analysis of a conjugate problem of convective heat transfer in a laminar incompressible flow around a flat plate of finite thickness, design formulas are suggested for the local Nusselt
number Nux: (Nux/Nux0)−1 = C·Brx (0 < Brx < 1.5), and (Nux/Nux0)−1 = C0 − C/Brx (1.5 < Brx < ∞), where Nux0 is the Nusselt number at Brx = 0 (the Nusselt number defined by the ordinary heat transfer
equations) and Brx is the local Brun number (a conjugation number).
This work presents a hybrid numerical–analytical solution for transient laminar forced convection over flat plates of non-negligible thickness, subjected to arbitrary time variations of applied wall
heat flux at the fluid–solid interface. This conjugated conduction–convection problem is first reformulated through the employment of the coupled integral equations approach (CIEA) to simplify the
heat conduction problem on the plate by averaging the related energy equation in the transversal direction. As a result, an improved lumped partial differential formulation for the transversally
averaged wall temperature is obtained, while a third kind boundary condition is achieved for the fluid from the heat balance at the solid–fluid interface. From the available steady velocity
distributions, a hybrid numerical–analytical solution based on the generalized integral transform technique (GITT), under its partial transformation mode, is then proposed, combined with the method
of lines implemented in the Mathematica 5.2 routine NDSolve. The interface heat flux partitions and heat transfer coefficients are readily determined from the wall temperature distributions, as well
as the temperature values at any desired point within the fluid. A few test cases for different materials and wall thicknesses are defined to allow for a physical interpretation of the wall
participation effect in contrast with the simplified model without conjugation.
An analysis is made of transient conjugated convective-conductive heat transfer in laminar flow of a Newtonian fluid between parallel-plates subjected to periodically varying inlet temperature. A
“thin wall” model is adopted that neglects transversal temperature gradients in the solid, but takes into account heat conduction along the duct wall. The quasi-steady solution is analytically
obtained through the generalized integral transform technique, providing accurate numerical results for the axial distributions of amplitudes and phase lags of wall temperature, wall heat flux, and
bulk temperature. The behavior of such periodic responses is then critically discussed, in terms of a conjugation parameter, fluid-to-solid capacitance ratio, and Biot number.
Monte Carlo methods are revolutionizing the on-line analysis of data in fields as diverse as financial modeling, target tracking and computer vision. These methods, appearing under the names of
bootstrap filters, condensation, optimal Monte Carlo filters, particle filters and survival of the fittest, have made it possible to solve numerically many complex, non-standard problems that were
previously intractable. This book presents the first comprehensive treatment of these techniques, including convergence results and applications to tracking, guidance, automated target recognition,
aircraft navigation, robot navigation, econometrics, financial modeling, neural networks, optimal control, optimal filtering, communications, reinforcement learning, signal enhancement, model
averaging and selection, computer vision, semiconductor design, population biology, dynamic Bayesian networks, and time series analysis. This will be of great value to students, researchers and
practitioners, who have some basic knowledge of probability. Arnaud Doucet received the Ph. D. degree from the University of Paris-XI Orsay in 1997. From 1998 to 2000, he conducted research at the
Signal Processing Group of Cambridge University, UK. He is currently an assistant professor at the Department of Electrical Engineering of Melbourne University, Australia. His research interests
include Bayesian statistics, dynamic models and Monte Carlo methods. Nando de Freitas obtained a Ph.D. degree in information engineering from Cambridge University in 1999. He is presently a research
associate with the artificial intelligence group of the University of California at Berkeley. His main research interests are in Bayesian statistics and the application of on-line and batch Monte
Carlo methods to machine learning. Neil Gordon obtained a Ph.D. in Statistics from Imperial College, University of London in 1993. He is with the Pattern and Information Processing group at the
Defence Evaluation and Research Agency in the United Kingdom. His research interests are in time series, statistical data analysis, and pattern recognition with a particular emphasis on target
tracking and missile guidance.
Bayesian methods provide a rigorous general framework for dynamic state estimation problems. We describe the nonlinear/non-Gaussian tracking problem and its optimal Bayesian solution. Since the
optimal solution is intractable, several different approximation strategies are then described. These approaches include the extended Kalman filter and particle filters. For a particular problem, if
the assumptions of the Kalman filter hold, then no other algorithm can out-perform it. However, in a variety of real scenarios, the assumptions do not hold and approximate techniques must be
employed. The extended Kalman filter approximates the models used for the dynamics and measurement process, in order to be able to approximate the probability density by a Gaussian. Particle
filtering approximates the density directly as a finite number of samples. A number of different types of particle filter exist and some have been shown to outperform others when used for particular
applications. However, when designing a particle filter for a particular application, it is the choice of importance density that is critical. These notes are of a tutorial nature and so, to
facilitate easy implementation, pseudo-code for the algorithms is included at relevant points.
Increasingly, for many application areas, it is becoming important to include elements of nonlinearity and non-Gaussianity in order to model accurately the underlying dynamics of a physical system.
Moreover, it is typically crucial to process data on-line as it arrives, both from the point of view of storage costs as well as for rapid adaptation to changing signal characteristics. In this
paper, we review both optimal and suboptimal Bayesian algorithms for nonlinear/non-Gaussian tracking problems, with a focus on particle filters. Particle filters are sequential Monte Carlo methods
based on point mass (or "particle") representations of probability densities, which can be applied to any state-space model and which generalize the traditional Kalman filtering methods. Several
variants of the particle filter such as SIR, ASIR, and RPF are introduced within a generic framework of the sequential importance sampling (SIS) algorithm. These are discussed and compared with the
standard EKF through an illustrative example.
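To make the sequential importance sampling and resampling idea concrete, here is a minimal bootstrap (SIR) particle filter for a scalar state-space model; the model, noise levels and particle count are illustrative assumptions rather than anything taken from the paper.

```python
# Minimal bootstrap (SIR) particle filter for a scalar state-space model:
#   x_k = 0.5*x_{k-1} + 25*x_{k-1}/(1 + x_{k-1}^2) + v_k,   v_k ~ N(0, q^2)
#   y_k = x_k^2 / 20 + w_k,                                 w_k ~ N(0, r^2)
# Model, noise levels and particle count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T, q, r = 500, 50, 1.0, 1.0

def step(x):
    return 0.5 * x + 25.0 * x / (1.0 + x**2) + rng.normal(0.0, q, size=np.shape(x))

# simulate a synthetic measurement sequence to filter
x_true, ys = 0.1, []
for _ in range(T):
    x_true = float(step(x_true))
    ys.append(x_true**2 / 20.0 + rng.normal(0.0, r))

particles = rng.normal(0.0, 2.0, size=N)
for y in ys:
    particles = step(particles)                                  # predict
    w = np.exp(-0.5 * ((y - particles**2 / 20.0) / r) ** 2)      # weight by the measurement
    w /= w.sum()
    particles = particles[rng.choice(N, size=N, p=w)]            # resample (SIR)

print("posterior mean at final step:", particles.mean())
print("true state at final step    :", x_true)
```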
The integral transform method in thermal and fluids sciences and engineering
Hybrid Methods and Symbolic Computations Handbook of numerical heat transfer
Conjugated heat transfer in microchannels NATO science for peace and security series A: chemistry and biology, microfluidics based microsystems: fundamentals and applications
• Js Nunes
• Rm Cotta
• Mr Avelino
• S Kakaç
Transient conjugated heat transfer in external compressible laminar flow over plates with internal heat generation. VII National congress of mechanical engineering
• K M Lisboa
• Jrb Souza
• R M Cotta
• C P Naveira-Cotta
Lisboa KM, Souza JRB, Cotta RM, Naveira-Cotta CP (2012) Transient conjugated heat transfer in external compressible laminar flow over plates with internal heat generation. VII National congress of
mechanical engineering, CONEM 2012, São Luis, pp 1-10, 31st July-3rd August, 2012
Conjugated heat transfer analysis of heated Pitot tubes: wind tunnel experiments, infrared thermography and lumped-differential modeling
• Jrb Souza
• Jlz Zotin
• Jbr Loureiro
• C P Naveira-Cotta
• Silva Freire
• A P Cotta
Souza JRB, Zotin JLZ, Loureiro JBR, Naveira-Cotta, CP, Silva Freire AP, Cotta RM (2011) Conjugated heat transfer analysis of heated Pitot tubes: wind tunnel experiments, infrared thermography and
lumped-differential modeling. 21st International congress of mechanical engineering, COBEM-2011, ABCM, Natal, October 2011 | {"url":"https://www.researchgate.net/publication/290477387_Thermal_analysis_of_anti-icing_systems_in_aeronautical_velocity_sensors_and_structures","timestamp":"2024-11-02T01:52:09Z","content_type":"text/html","content_length":"887592","record_id":"<urn:uuid:9af1e4e6-7685-4be4-aa42-1f7444822d30>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00501.warc.gz"} |
AP SSC 10th Class Maths Notes Chapter 10 Mensuration
Students can go through AP SSC 10th Class Maths Notes Chapter 10 Mensuration to understand and remember the concepts easily.
AP State Syllabus SSC 10th Class Maths Notes Chapter 10 Mensuration
→ A solid is a geometrical shape with three dimensions, namely length, breadth and height.
→ A solid has two types of area, namely:
a) Lateral Surface Area (L.S.A.)
b) Total Surface Area (T.S.A.)
→ In general, L.S.A. of a solid is the product of its base perimeter and height.
Eg : L.S.A. of a cuboid = 2h(l + b)
L.S.A. of a cylinder = 2πrh
→ The T.S.A. of a solid is the sum of L.S.A. and the areas of its top and base.
Eg : T.S.A. of a cylinder = 2πrh + 2πr^2
= 2πr(r + h)
→ In general, the volume of a solid is the product of its base area and height.
V = A. h
Eg: Volume of a cube = a^2 . a = a^3
Volume of a cylinder = πr^2 . h = πr^2h
→ The volume of solid formed by joining two basic solids is the sum of volumes of the constituents.
→ Surface area of the combination of solids: In calculating the surface area of the solid which is a combination of two or more solids, we can’t add the surface areas of all its constituents, because
some part of the surface area disappears in the process of joining them.
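For instance (a worked example that is not part of the original notes), take a solid made of a cylinder of radius r and height h surmounted by a hemisphere of the same radius: the two circular faces at the joint disappear, so only the curved surfaces and the cylinder's base remain exposed.

```python
# Worked example: a cylinder of radius r and height h topped by a hemisphere of radius r.
# The circular faces where the two solids meet are hidden, so they are left out of the T.S.A.
from math import pi

r, h = 3.5, 10.0                  # assumed dimensions (cm)

csa_cylinder   = 2 * pi * r * h   # curved surface of the cylinder
csa_hemisphere = 2 * pi * r * r   # curved surface of the hemisphere
base_circle    = pi * r * r       # bottom face of the cylinder, still exposed

tsa = csa_cylinder + csa_hemisphere + base_circle
volume = pi * r * r * h + (2 / 3) * pi * r**3     # volumes of the two parts simply add

print(f"T.S.A. = {tsa:.2f} cm^2")
print(f"Volume = {volume:.2f} cm^3")
```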
→ Surface areas and volume of different solid shapes:
→ Some solid figures and their combination shapes: | {"url":"https://apboardsolutions.in/ap-ssc-10th-class-maths-notes-chapter-10/","timestamp":"2024-11-10T18:06:57Z","content_type":"text/html","content_length":"64629","record_id":"<urn:uuid:a1d90d2b-063d-435e-b8a9-40874387f567>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00320.warc.gz"} |
Modal logic
From New World Encyclopedia
Modal logic was originally designed to describe the logical relations of modal notions. These notions include metaphysical modalities (necessity, possibility, etc.), epistemic
modalities (knowledge, belief, etc.), temporal modalities (future, past, etc.), and deontic modalities (obligation, permission, etc.). Because of the importance of these modal notions, modal logics
have attracted attention in many areas of philosophy, including metaphysics and epistemology. However, interest in modal logics is not limited to such philosophical investigations. Because of its wide
applicability, the general framework of modal logic has been used in various areas such as artificial intelligence, database theory, and game theory.
The languages of modal logics usually extend preexisting logics (e.g., propositional logic or first-order logic) with modal operators, which are often symbolized as boxes ${\displaystyle \Box }$ and
diamonds ${\displaystyle \Diamond }$. Semantic structures for the languages of modal logics are relational structures and the modal languages can be characterized as describing various properties of
the relational structures.
Basic Ideas
One major notion that has been considered in modal logics is metaphysical modality. Examples of such modal notions are necessity and possibility. The modal logic that describes the logical relations of
statements such as "It is necessary that 2+2=4," "It is possible that Bigfoot exists," etc. is called alethic modal logic. The main idea for analyzing such modal statements is based on a
metaphysical view usually credited to Leibniz. The idea is to analyze a statement of the form "It is necessary that p" as "In all possible worlds, p is the case," and "It is possible that
p" as "There is some possible world in which p is the case." In other words, necessity is analyzed as truth in all possible worlds, and possibility as truth in some possible world.
Based on this idea, alethic modal logic clarifies the logical relations of modal statements of the kind in question. For instance, one basic equivalence in alethic modal logic, the one between “It is
necessary that p” and “It is not possible that not-p,” is explicated as the equivalence between “In all possible worlds, p is the case” and “There is no possible world in which p is not the case.”
Alethic modal logic enables one to see more complex relations of the metaphysical modal statements.
This general idea is modeled in what is called Kripke semantics by relational structures (see below). Because of the wide applicability of the general framework, modal logics have been used, beyond
the formalization of metaphysical modality, to represent modal concepts and phenomena. Depending on the purposes of applications, modal logics get specific names. Epistemic logic is designed to
describe epistemic notions such as knowledge and belief; temporal logic, temporal structures; deontic logic, deontic notions such as obligation and permission; dynamic logic, actions of computer
programs, etc.
Standard Syntax and Semantics of Modal Logics
The languages of modal logics extend preexisting logical languages with modal operators, most commonly boxes ${\displaystyle \Box }$ and
boxes and diamonds, say, in alethic modal logic, are respectively “It is necessary that...” and “It is possible that....”
The language of propositional modal logic—the extension of propositional logic with modal operators—consists of propositional variables (p, q, r, …), Boolean connectives (${\displaystyle \lnot }$, $
{\displaystyle \wedge }$, ${\displaystyle \vee }$, ${\displaystyle \rightarrow }$), and modal operators (${\displaystyle \Box }$ and ${\displaystyle \Diamond }$). In a standard way, the sentences of
propositional modal logic are recursively defined as follows:
${\displaystyle \phi }$ := p (with p a propositional variable) | ${\displaystyle \phi \wedge \psi }$ | ${\displaystyle \lnot \phi }$ | ${\displaystyle \Diamond \phi }$
The other Boolean connectives are defined as usual (for instance, "${\displaystyle \phi \vee \psi }$" is defined as "${\displaystyle \lnot (\lnot \phi \wedge \lnot \psi )}$" and "${\displaystyle \phi
\rightarrow \psi }$," as "${\displaystyle \lnot \phi \vee \psi }$"), and, based on the observation about the above basic equivalence, “${\displaystyle \Box \phi }$” is defined as the abbreviation of
“${\displaystyle \lnot \Diamond \lnot \phi }$.”
Other than the language of modal propositional logic, there are various versions of extensions of preexisting languages. Extensions with modal operators are considered for other preexisting
languages. For instance, the extension of first-order logic, called modal predicate logic, has been widely considered. Also, extensions are given with modality operators with multiple arities, i.e.
modal operators that are followed by a multiple number of formulas rather than by just a single formula as is the case of the propositional modal logic presented above.
Kripke Semantics
The standard semantics of modal languages is Kripke semantics, which is given by relational models. The Kripke semantics of propositional modal logic can be presented as follows. A frame is a tuple (
W, R), where W is a non-empty set and R is a two-place relation on W. W can be thought of as a set of possible worlds, and R as the accessibility relation between worlds, which represents the possible
worlds that are considered at a given world, i.e. if we are at a world ${\displaystyle w_{0}}$, every possible world v such that ${\displaystyle Rw_{0}v}$ represents a possibility that is
considered at the world ${\displaystyle w_{0}}$. Given a frame (W, R), a model is a tuple (W, R, V) where V is a map that assigns to each world a valuation function on propositional variables, i.e. for a
given world w, V(w) is a function from the set of propositional variables to {0, 1}, where 0 and 1 represent the truth-values False and True. Truth of formulas is defined with respect to a model M
and a world w as follows:
(${\displaystyle M,w\models \phi }$ reads as “${\displaystyle \phi }$ is true at a world ${\displaystyle w}$ in a model M.”)
• ${\displaystyle M,w\models p}$ iff V(w)(p)=1 (with p a propositional variable)
• ${\displaystyle M,w\models \phi \wedge \psi }$ iff ${\displaystyle M,w\models \phi }$ and ${\displaystyle M,w\models \psi }$.
• ${\displaystyle M,w\models \lnot \phi }$ iff ${\displaystyle M,w\not \models \phi }$.
• ${\displaystyle M,w\models \Box \phi }$ iff, for every world ${\displaystyle w^{\prime }}$ such that ${\displaystyle Rww^{\prime }}$, ${\displaystyle M,w^{\prime }\models \phi }$.
The last clause captures the main idea of Leibnizian conception of necessary truth as truth in all possibilities in such a way that “It is necessary that ${\displaystyle \phi }$” is true at a world w
in a model M if and only if ${\displaystyle \phi }$ is true in all possible worlds accessible from a world w.
A sentence is valid in a model M if it is true at every possible world in M. A sentence is valid in a frame F if it is valid in every model based on F. A sentence is valid if it is valid in all
frames (or every model).
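A small sketch of these truth clauses in code; the formula encoding and the example model below are illustrative choices, not part of the article.

```python
# Evaluate propositional modal formulas in a finite Kripke model (W, R, V).
# Variables are plain strings; compound formulas are tuples such as
# ('not', f), ('and', f, g), ('box', f) and ('dia', f).
def holds(model, w, f):
    W, R, V = model
    if isinstance(f, str):                     # propositional variable
        return V[w].get(f, False)
    op = f[0]
    if op == 'not':
        return not holds(model, w, f[1])
    if op == 'and':
        return holds(model, w, f[1]) and holds(model, w, f[2])
    if op == 'box':                            # true at w iff f[1] holds at every R-successor of w
        return all(holds(model, v, f[1]) for v in W if (w, v) in R)
    if op == 'dia':                            # true at w iff f[1] holds at some R-successor of w
        return any(holds(model, v, f[1]) for v in W if (w, v) in R)
    raise ValueError(f"unknown connective: {op!r}")

def valid_in_model(model, f):
    return all(holds(model, w, f) for w in model[0])

# Example: two worlds, where p holds only at world 1.
W = {0, 1}
R = {(0, 1), (1, 1)}
V = {0: {'p': False}, 1: {'p': True}}
model = (W, R, V)
print(holds(model, 0, 'p'))                 # False
print(holds(model, 0, ('box', 'p')))        # True: every successor of 0 satisfies p
print(valid_in_model(model, ('box', 'p')))  # True: box p holds at every world of this model
```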
By extending this model-theoretic framework, the semantics for other modal languages are given. In modal predicate logic, a model is designed so that a domain of quantification is associated with
each possible world, and in modal logics with modal operators of multiple arities, accessibility relations of appropriate arities on possible worlds are taken.
Axiomatic Systems and Frame Correspondence
The Kripke semantics presented here has a sound and complete axiomatic system, i.e. the system in which, for a sentence ${\displaystyle \phi }$, ${\displaystyle \phi }$ is valid if and only if ${\
displaystyle \phi }$ is provable. The system is called K. K is the system obtained by adding the following two principles to propositional logic:
Necessitation Rule: If A is a theorem, ${\displaystyle \Box A}$ is a theorem.
K: ${\displaystyle \Box (\phi \rightarrow \psi )\rightarrow (\Box \phi \rightarrow \Box \psi )}$
Various systems are obtained by adding extra axioms to K. Some of the most famous axioms are:
T: ${\displaystyle \Box \phi \rightarrow \phi }$
S4: ${\displaystyle \Box \phi \rightarrow \Box \Box \phi }$
S5: ${\displaystyle \Diamond \phi \rightarrow \Box \Diamond \phi }$
The system T is obtained by adding the axiom scheme T to K. T is sound and complete with respect to the set of models that are based on reflexive frames (i.e. frames (W, R) such that, for all x in W,
Rxx). The addition of S4 to T yields the system S4. S4 is sound and complete with respect to reflexive and transitive frames (Transitive frames are frames (W, R) such that, for all x, y, z in W, if
Rxy and Ryz, then Rxz). Finally, the addition of S5 to S4 yields the system S5, which is sound and complete with respect to reflexive, transitive and symmetric frames (symmetric frames are frames (W, R) such that, for every x, y in W, if Rxy, then Ryx).
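A quick way to see the frame conditions behind these systems is to test a finite frame directly for reflexivity, transitivity and symmetry; the example frame below is an arbitrary illustration.

```python
# Test a finite frame (W, R) for the properties corresponding to the axioms above.
def reflexive(W, R):
    return all((x, x) in R for x in W)

def transitive(W, R):
    return all((x, z) in R for (x, y) in R for (y2, z) in R if y == y2)

def symmetric(W, R):
    return all((y, x) in R for (x, y) in R)

W = {0, 1, 2}
R = {(x, y) for x in W for y in W}   # the total relation: reflexive, transitive and symmetric
print(reflexive(W, R), transitive(W, R), symmetric(W, R))   # True True True
```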
Some Applications
Modal logics have been applied to capture various kinds of concepts and phenomena. Depending on the subject matter, modal operators are interpreted in different ways. Here are some of the major examples:
Epistemic Logic: Boxes are written as “K” or “B.” “K${\displaystyle \phi }$” is interpreted as “It is known that ${\displaystyle \phi }$,” and “B${\displaystyle \phi }$,” as “It is believed that ${\
displaystyle \phi }$.”
Deontic Logic: Boxes and diamonds are written as “O” and “P” respectively. “O${\displaystyle \phi }$” is interpreted as “It is obligatory that ${\displaystyle \phi }$,” and “P${\displaystyle \phi }$
,” as “It is permitted that ${\displaystyle \phi }$.”
Temporal Logic: Boxes are written as “G” for the future and “H” for the past. “G${\displaystyle \phi }$” means "${\displaystyle \phi }$ will be always the case” and “H${\displaystyle \phi }$,” “${\
displaystyle \phi }$ was always the case.” The corresponding diamonds are written as “F” and “P” respectively. “F${\displaystyle \phi }$” and “P${\displaystyle \phi }$” mean “It will be the case that
${\displaystyle \phi }$” and “It was the case that ${\displaystyle \phi }$.”
Depending on the interpretations of modalities, different axiomatic constraints are placed on modal operators. For instance, in epistemic logic, it is appropriate to accept the T axiom, since the
knowledge that ${\displaystyle \phi }$ implies that ${\displaystyle \phi }$ is true; however, in deontic logic, T might not be appropriate, since ${\displaystyle \phi }$ might not be the case even if
it is obligatory that ${\displaystyle \phi }$. There have been wide-ranging discussions about which axioms are appropriate for each modal notion.
A Brief Historical Note on Modal Logic
Although Aristotle's logic is almost entirely concerned with the theory of the categorical syllogism, his work also contains some extended arguments on points of modal logic (such as his famous
Sea-Battle Argument in De Interpretatione § 9) and their connection with potentialities and with time. Following on his works, the Scholastics developed the groundwork for a rigorous theory of modal
logic, mostly within the context of commentary on the logic of statements about essence and accident. Among the medieval writers, some of the most important works on modal logic can be found in the
works of William of Ockham and John Duns Scotus.
The start of formal modal logics is usually associated with the work by C. I. Lewis, who introduced a system (later called S3) in his monograph A Survey of Symbolic Logic (1918) and (with C. H.
Langford) the systems S1-S5 in the book Symbolic Logic (1932). The first formalization of deontic logic was by E. Mally in the 1920s. Epistemic logic was initiated by G. H. von Wright and further
developed by J. Hintikka in the 1950s and 1960s. Temporal logic was developed by A. N. Prior in the 1960s. The relational semantics for modal logic was developed in the works of J. Hintikka, Kanger, and Kripke
in the late 1950s and early 1960s.
• P. Blackburn, M. de Rijke, and Y. Venema. Modal Logic. Cambridge University Press, 2001.
• Hughes, G.E. and M.J. Cresswell. An Introduction to Modal Logic. Methuen, 1968.
• Hughes, G.E. and M.J. Cresswell. A Companion to Modal Logic. Medhuen, 1984.
• Hughes, G.E. and M.J. Cresswell. A New Introduction to Modal Logic. Routledge, 1996.
Note: Some restrictions may apply to use of individual images which are separately licensed. | {"url":"https://www.newworldencyclopedia.org/entry/Modal_logic","timestamp":"2024-11-05T01:00:51Z","content_type":"text/html","content_length":"113861","record_id":"<urn:uuid:18d8842c-9dc8-444c-b8d1-8c9232f418e2>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00423.warc.gz"} |
What is the Value of a Solar System to you? | Greenlight
There are many things to consider when investigating the purchase of a solar system, but one of the first to consider is the price you would be prepared to pay for the system. If, like me, you
believe a solar system should be an investment that delivers real returns, i.e. the returns beat inflation, then you can work out the potential value of a solar system based on the electricity cost
savings it generates.
This analysis does not consider the use of a solar system as a back-up system for load shedding, or for locations far from the national grid. It is purely about using a solar system to reduce your
monthly energy bill.
We can work out the potential value of a solar system by looking at the cost of the electricity you are likely to buy over the life of a solar system (let's assume 20 years). For this you will need
the amount you spend on electricity every month and the rate you are paying per kilowatt-hour (this can be found on your municipal bill).
Let’s assume that you are paying according to the Cape Town Municipal Rate; R1.9243 for the first 600 kWh and R2.3401 for the rest. We also assume that you consume 1000 kWh per month. Your monthly
bill is therefore R2090.62, made up as follows: the first 600 kWh at R1.9243/kWh comes to R1 154.58, and the remaining 400 kWh at R2.3401/kWh comes to R936.04.
At this rate, your yearly electricity bill is R25 087.44, or R501 748 over 20 years if Eskom doesn't apply any increases.
Now let’s make the fairly solid assumption that Eskom, and therefore the municipality of Cape Town, is going to increase the price of electricity every year for the next 20 years. In the table below
is the value of electricity to be paid over the next 20 years assuming an “expected” annual Municipal increase each year. The average increase in the table over the next 5 years is 8%. Given that
Eskom has been asking for 12%, the increases used are possibly conservative.
Based on the assumptions, you are likely to pay R967 196 for electricity over the next 20 years.
In today’s money, assuming 6% inflation, the electricity is worth R502 082. In other words, if you could buy a solar system that provided all of your electricity for the next 20 years, it would
benefit you if it cost less than R502 082 today.
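The arithmetic behind these figures can be sketched as follows. The article's year-by-year municipal increases appear only in a table that is not reproduced above, so the escalation schedule below is an assumption, and the totals will only approximate the quoted R967 196 and R502 082.

```python
# Sketch of the 20-year electricity spend and its value in today's money.
# The escalation schedule is an assumption; the article's year-by-year table is not shown here.
first_year_bill = 25087.44                                  # R per year at current tariffs
inflation = 0.06                                            # discount rate for "today's money"
increases = [0.10, 0.09, 0.08, 0.07, 0.06] + [0.06] * 15    # assumed annual municipal increases

nominal_total = 0.0
present_value = 0.0
bill = first_year_bill
for year, inc in enumerate(increases, start=1):
    nominal_total += bill
    present_value += bill / (1 + inflation) ** (year - 1)
    bill *= 1 + inc                                         # next year's bill after the increase

print(f"Nominal 20-year spend : R{nominal_total:,.0f}")
print(f"Value in today's money: R{present_value:,.0f}")
```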
If you could buy a system that provided all of your electricity for R300 000, you would have saved over R200 000 in today’s money. Not bad.
But what is the actual cost of a Solar system that would deliver all of your power needs, even in winter? Also who has R300 000 to R500 000 available? What if we only purchased a smaller solar system
that supplied only a portion of our electricity needs?
One such system is a grid-tied solar system without energy storage (batteries). It connects onto the power line going into your home and supplies electricity to your home only while the sun is
shining. The municipal power line will supply whatever power the solar system cannot supply (for example when it is raining) and at night when the solar system is not producing (for obvious reasons).
The advantage of this kind of system is that it is significantly cheaper, both its parts and the cost to install. The disadvantage is that it only works while the sun is shining. The energy that it
produces cannot be stored for later use.
So what is the potential value of such a system?
The first point with these systems is that they will only reduce the portion of your electricity that is consumed in the day. If we assume that 25% of the power in a large house (using 12 000 kWh per annum)
is consumed during the daylight hours, then we can work out the value of the electricity that this type of solar system would replace.
Firstly, 25% of 12 000 kWh equates to 3 000 kWh per annum, or 250 kWh per month. That's a monthly saving of R2.3401 x 250 = R585.03, or R7 020.30 in the first year.
Now we sum it up for 20 years, using the same Municipal Power price increases and we get R270 654 total. Or, in today’s money, R140 500 (taking inflation into account).
Therefore, to get a real return from a solar system you should not pay more than R140 500 for a solar system that is going to reduce your power bill by 25%.
If for some reason, more than 25% of your electricity is used in the day, then you can afford to pay more (for example, if you have a swimming pool that is only pumping during the daylight hours). If
you use less, you can afford to pay less to get a real return. The table below shows the calculated maximum you can pay for the solar system for various percentages consumed in the day.
Value of Solar System vs. Per Cent Annual Power Consumed in the day
This analysis gives you some idea of the value of a solar system, or at least one method to arrive at a value of a solar system.
Two errors I have seen in the calculations of the value of a solar system:
1. Using unlikely Eskom increases: Some calculations of the value of a proposed solar system use Eskom increases of 12% for the next 20 years. This scenario is highly unlikely and inflates the
estimated value.
2. Self-consumption over-estimation: Some calculations of the value of a system assume that a grid-tied solar system will reduce your electricity bill by half. This is based on the idea that
the system operates for half the time. However, for most homes, especially for couples who both work, it is probable that significantly more than half of the power consumption of the home occurs
at night. | {"url":"https://greenlightonline.co.za/what-is-the-value-of-a-solar-system-to-you/","timestamp":"2024-11-11T14:41:08Z","content_type":"text/html","content_length":"146896","record_id":"<urn:uuid:1dce9466-c4c4-4477-9a0b-53d58972cd0a>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00070.warc.gz"} |
Rainbow matchings in r-partite r-graphs
Given a collection of matchings μ = (M_1, M_2, ..., M_q) (repetitions allowed), a matching M contained in ∪μ is said to be s-rainbow for μ if it contains representatives from s matchings M_i (where
each edge is allowed to represent just one M_i). Formally, this means that there is a function φ : M → [q] such that e ∈ M_{φ(e)} for all e ∈ M, and |Im(φ)| ≥ s. Let f(r, s, t) be the maximal k for
which there exists a set of k matchings of size t in some r-partite hypergraph, such that there is no s-rainbow matching of size t. We prove that f(r, s, t) ≥ 2^(r−1)(s − 1), conjecture that
equality holds for all values of r, s and t, and prove the conjecture when r = 2 or s = t = 2. In the case r = 3, a stronger conjecture is that in a 3-partite 3-graph, if all vertex degrees in one side
(say V_1) are strictly larger than all vertex degrees in the other two sides, then there exists a matching of V_1. This conjecture is at the same time a strengthening of a famous conjecture,
described below, of Ryser, Brualdi and Stein. We prove a weaker version, in which the degrees in V_1 are at least twice as large as the degrees in the other sides. We also formulate a related
conjecture on edge colorings of 3-partite 3-graphs and prove a similarly weakened version.
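For small instances, these definitions can be checked by brute force; in the sketch below the vertex labels and the two example matchings are arbitrary illustrations (they happen to realize the lower bound f(2, 2, 2) ≥ 2).

```python
# Brute-force check for an s-rainbow matching of size t among matchings M_1, ..., M_q.
# Edges are frozensets of vertices; a matching is a set of pairwise disjoint edges.
from itertools import combinations, product

def has_s_rainbow_matching(matchings, s, t):
    pool = sorted({e for M in matchings for e in M}, key=sorted)
    for edges in combinations(pool, t):
        verts = [v for e in edges for v in e]
        if len(verts) != len(set(verts)):          # the chosen edges are not a matching
            continue
        # try every assignment phi sending each chosen edge to a matching containing it
        options = [[i for i, M in enumerate(matchings) if e in M] for e in edges]
        if any(len(set(phi)) >= s for phi in product(*options)):
            return True
    return False

# The two perfect matchings of the complete bipartite graph on {a1, a2} x {b1, b2}.
M1 = {frozenset({'a1', 'b1'}), frozenset({'a2', 'b2'})}
M2 = {frozenset({'a1', 'b2'}), frozenset({'a2', 'b1'})}
print(has_s_rainbow_matching([M1, M2], s=2, t=2))        # False: no 2-rainbow matching of size 2
print(has_s_rainbow_matching([M1, M2, M1], s=2, t=2))    # True once a third matching (a repeat) is added
```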
ASJC Scopus subject areas
• Theoretical Computer Science
• Geometry and Topology
• Discrete Mathematics and Combinatorics
• Computational Theory and Mathematics
• Applied Mathematics
Dive into the research topics of 'Rainbow matchings in Υ-partite Υ-graphs'. Together they form a unique fingerprint. | {"url":"https://cris.haifa.ac.il/en/publications/rainbow-matchings-in-%CF%85-partite-%CF%85-graphs","timestamp":"2024-11-02T17:31:20Z","content_type":"text/html","content_length":"50924","record_id":"<urn:uuid:de786170-6e7c-4164-aa95-1a3b7971f2ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00195.warc.gz"} |
If A is a symmetric matrix, then matrix M'AM is
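If A is symmetric, then M′AM is again symmetric, since (M′AM)′ = M′A′(M′)′ = M′AM; the quick numerical check below is my own illustration rather than anything taken from the page.

```python
# Numerical check that M'AM is symmetric whenever A is symmetric (M' denotes the transpose of M).
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
A = A + A.T                      # force A to be symmetric
M = rng.normal(size=(4, 4))      # arbitrary M

B = M.T @ A @ M
print(np.allclose(B, B.T))       # True
```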
| {"url":"https://www.doubtnut.com/qna/649503383","timestamp":"2024-11-02T09:22:41Z","content_type":"text/html","content_length":"194558","record_id":"<urn:uuid:5fc00f90-84ec-432e-a6d8-68b8f19baa36>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00322.warc.gz"} |
Calculate a Binomial Distribution Probability in Excel
A binomial distribution describes the number of successes in a fixed number of independent trials, where each trial has only two possible outcomes: success or failure. An example would be the toss of a typical coin... you either get heads or tails. If
you choose heads, that would be your success and tails would be the failure. Now if we wanted to calculate the probability of getting a given number of successes in X tosses, there is actually a formula, P(k successes in n trials) = C(n, k) · p^k · (1 − p)^(n − k), that we can use to
calculate that probability. Excel makes it really easy, but I'll show four different ways to get this answer. | {"url":"https://www.exceltraining101.com/2021/02/calculate-binomial-distribution.html","timestamp":"2024-11-04T05:29:13Z","content_type":"application/xhtml+xml","content_length":"57565","record_id":"<urn:uuid:15a01809-a18e-437c-b0f8-461777f2a875>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00561.warc.gz"} |
Maudlin's "(Information) Paradox Lost" paper
Tim Maudlin has an interesting paper in which he criticizes the importance given to the black hole information paradox, and even argues that it is not really a problem at all:
(Information) Paradox Lost
I agree that the importance of the problem is perhaps exaggerated, but at the same time many consider it to be a useful benchmark to test quantum gravity solutions. This led to decades of research by many physicists, and to many controversies. I wrote a bit about some of the proposed solutions to the problem in some older posts. Maudlin's paper has also been discussed by Sabine.
One of the central arguments in Maudlin's paper is that the well-known spacetime illustrating the information loss can be foliated into some 3D spaces (which are Cauchy hypersurfaces that are
discontinuous at the singularity). These hypersurfaces have a part outside the black hole, and another one inside it, which are not connected to one another. Cauchy hypersurfaces contain the Cauchy
data necessary to solve the partial differential equations, so the information should be preserved if we consider both their part inside and their part outside the black hole.
I illustrate this with this animated gif:
I made this gif back in 2010, when I independently had the same idea and wanted to write about it, but I don't think I made it public. Probably the idea is older. The reason I didn't write about it was that I was more attracted* to another solution I found, which led to an analytic extension of the black hole spacetime, and has Cauchy hypersurfaces but no discontinuities. I reproduce a picture of the Penrose diagram from an older post in which I say more about this:
A. The standard Penrose diagram of an evaporating black hole.
B. The diagram from the analytic solution I proposed.
* The reason I preferred to work at the second solution is that it allows the information to become available after the evaporation to an external observer. The solution which relies on completing
the Cauchy hypersurface with a part inside the black hole doesn't restore information and unitarity for an external observer. I don't know if this is a problem, but many physicists believe that
information should be restored for an external observer, because otherwise we would observe violations of unitarity even in the most mundane cases, considering that micro black holes form and
evaporate at very high energies. I don't think this argument, also given by Sabine, is very good, because there is no reason to believe that micro black holes form at high energy under normal
conditions. People arrive at high energies for normal situations because they use perturbative expansions, but this is just a method of approximation. And even so, I doubt anyone who sums over
Feynman diagrams includes black holes. But nevertheless, I wouldn't like information to be lost for an outside observer after evaporation, but this is just personal taste, I don't claim that there is
some experiment that proved this. And the solution I preferred to research allows recovery of information and unitarity for an external observer, and other things which I explained in the mentioned
posts and my PhD thesis.
2 comments:
Jochen said...
I also don't find the 'virtual black holes' argument terribly convincing. Ultimately, these things are really just terms in the perturbative expansion approximating a process that is itself by
construction unitary---so that if we were able to work without the approximation, we wouldn't ever notice any hint of the possible non-unitarity introduced by evaporating virtual black holes. I
think that hastily reifying such virtual objects is about as misleading as claiming that a particle is 'in two places at once' in a superposed state---perhaps useful as a figure of speech among
those that know what they really mean, but if taken too seriously, implying a sort of pseudo-classical picture that distorts what's actually happening.
That said, I'm also not sure Maudlin's paper proposes an answer to the information loss problem as most see it---after all, the 'whole universe' at a certain point in time after the evaporation
of the BH is given by a (non-Cauchy) hypersurface stretching from r=0 to spatial infinity, and if it makes sense to call this the whole universe, then there is in fact information missing from
it---that pertaining to all the stuff that fell into the black hole (that is, all inextensible timelike curves ending at the singularity).
But that may itself be naive, in taking such a hypersurface to describe the state of the universe at any particular time. Maybe that kind of talk should just as well be regarded as a façon de
parler as talk about particles being in two places at once.
In the end, I think there is potential for an interesting debate here; unfortunately, I think that's unlikely to manifest, mainly due to the 'physicists-vs.-philosophers'-kind of sociology that's
unfortunately still too pervasive. It's true that Maudlin's tone, which is going to seem needlessly confrontational to most physicists, doesn't help the issue, but there's really no reason to
respond in kind. But eh, that's just me tilting at windmills I suppose.
Cristi Stoica said...
Yes, such arguments that take too seriously virtual stuff, needed just because we don't know how to do better math, unfortunately plague many discussions about QFT and quantum gravity.
Regarding unitarity, on the one hand it is something I think must be preserved, so that's why I preferred to extend the solutions through the singularities. And also that's why I am interested
to save unitary evolution in the quantum measurement problem as well. And I think it is strange that the same people who want to save unitarity in black holes often adopt a wavefunction collapse
position (which is not unitary and doesn't ensure the conservation laws - https://arxiv.org/abs/1607.02076). Sometimes they say that QM is still unitary because of decoherence, but this only
works with many worlds, and a single world still has these problems.
On the other hand, given that Einstein's equation is local, like the other classical equations, I think that unitarity in QM and QFT is forced upon us mainly because quantization is made in phase
space, so usual quantization is global. But I think that the theory can be local in the sense of the PDEs involved, and at the same time nonlocal in the Bell sense and also contextual, and also
unitary. But in order to be like this, I think that conservation laws and conservation of information should be local (I think this is also required by relativity). So I find somewhat meaningless the
approaches trying to restore information lost at the singularity by looking at the horizon. So it is this belief I have in unitarity and the locality of the PDEs, quantized or not, that I think
is satisfied either if singularities don't really exist, or if they don't pose a problem except to the standard mathematical description (and not to other alternative equations, like those I
propose here: https://arxiv.org/abs/1301.2231).
So I have more reasons, merging into a sort of not-yet-formal view, which made me dissatisfied with the broken Cauchy hypersurfaces idea. And I think that Maudlin himself is not satisfied, given
that he advocates the position that time is real. I am not sure what this means, given that he discusses it in a second volume which is still work in progress, but I find it hard to see how time can
be real (hence has something absolute in it), and at the same time work well with broken Cauchy hypersurfaces, whose time coordinate is clearly assigned arbitrarily and ad hoc to get a foliation
of spacetime into Cauchy hypersurfaces.
About the dispute between physicists and philosophers, I agree with you here too. We need each other, and we need to simultaneously be physicists, mathematicians, and philosophers (at least the
critical thinking part from philosophy is something we need more as physicists). We always need philosophers to try to poke holes in our theories. If they are doing it right, we can improve the
theory. If they only show they didn't understand, we need to improve the conceptual and explanatory part. So I was disappointed when I saw a blog article about Maudlin's paper, which I won't
mention here, in which the tone was kind of elitist and condescending both towards Maudlin, and towards philosophers of physics in general. Maybe it was just an impression I had, but the blog
comments around that article, and the facebook comments, proved that the other readers understood this as a green light to become aggressive against Maudlin and philosophers in general, without
justification (not that aggressiveness and rudeness can be justified even when one is right :) ) | {"url":"http://www.unitaryflow.com/2017/05/maudlins-information-paradox-lost-paper.html?showComment=1495011260714","timestamp":"2024-11-08T08:25:29Z","content_type":"application/xhtml+xml","content_length":"83926","record_id":"<urn:uuid:09b0039e-ecc9-4c66-8197-e2d38f52b790>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00867.warc.gz"} |
CBSE Class 10 Maths Question Paper 2024 Solutions PDF Download for FREE
Class 10 Maths 2024 Question Paper with Solutions - FREE PDF Download
Vedantu provides CBSE Class 10 Maths Question Paper 2024 in two versions: Mathematics Standard and Mathematics Basics. These papers are for students with different career aspirations: Mathematics
Standard is for students aiming for a future in mathematics or related fields, and Mathematics Basics is for those not pursuing math extensively.
1. Class 10 Maths 2024 Question Paper with Solutions - FREE PDF Download
2. Class 10 Maths Board Exam - Set Wise Question Paper 2024 - FREE PDF Download
3. CBSE Class 10 Maths Question Paper 2024 Pattern
4. Features of Class 10 Maths Question Paper 2024 with Solutions
5. Class 10 Mathematics Year-wise Question Papers (2014 - 2024)
6. Subject-wise Class 10 Previous Year Question Paper 2024
7. Related Study Materials for Class 10 Maths
Maths Question Paper Class 10 2024 covers the exam pattern and content according to the CBSE Class 10 Maths Syllabus. These papers help students who are preparing for their exams by providing
different types of questions and indicating the level of their preparation. Class 10 Maths Previous Year Question Papers are very important for students to practice and prepare effectively for their board exams.
Class 10 Maths Board Exam - Set Wise Question Paper 2024 - FREE PDF Download
Here is a complete list with links to all the sets of question papers from the year 2024. This table makes it easy for students to access and use these question papers for practice and preparation.
FAQs on CBSE Class 10 Maths Question Papers 2024
1. How many sets of question papers are there in Vedantu’s Class 10 Maths Question Paper 2024 PDF Download?
There are 4 sets of 2024 question papers available on the Vedantu website, provided in two versions, namely the CBSE Class 10 Standard Maths Question Paper 2024 with Solutions and the CBSE Class 10 Basic Maths Question Paper 2024 with Solutions, for students to practice.
2. Why should I practice with Maths Question Paper Class 10 2024?
Practicing with these papers helps you understand the exam pattern, familiarizes you with different types of questions, and improves time management skills.
3. How can the solutions in Class 10 Maths Question Paper 2024 PDF Download to these question papers help me?
Solutions in Vedantu’s Class 10 Maths 2024 Question Paper provide step-by-step explanations for each question, helping you understand concepts better and learn the correct problem-solving approach.
4. Are the Maths Question Paper Class 10 2024 by Vedantu useful for revision?
Yes, they are excellent for revision. By solving them, you can revise important topics, identify weak areas, and improve your overall preparation for the exam.
5. What type of questions does the CBSE Class 10 Standard Maths Question Paper 2024 With Solutions contain?
The CBSE Class 10 Standard Maths Question Paper 2024 With Solutions includes multiple-choice, short-answer, and long-answer questions covering all topics in the syllabus.
6. What type of questions does the CBSE Class 10 Basic Maths Question Paper 2024 With Solutions contain?
The CBSE Class 10 Basic Maths Question Paper 2024 with Solutions contains questions focusing on fundamental concepts and practical applications designed for students not pursuing advanced mathematics
in their future studies.
7. How do these question papers prepare me for the CBSE exam?
Class 10 Maths 2024 Question Paper helps you prepare for the CBSE exam by showing each section's format, difficulty, and time allocation. Practising with them familiarizes you with the exam structure and enhances
your readiness.
8. Can I access the Class 10 Maths Question Paper 2024 PDF Download from the Vedantu website for FREE?
Yes, Vedantu platforms provide these papers for free or with minimal cost, making them accessible to all students preparing for the Class 10 Maths exam. You can also refer to Vedantu website for
other subject question papers.
9. What is the purpose of solving the CBSE Class 10 Maths 2024 Question Paper?
Solving these papers helps students understand the exam pattern, practice different types of questions, and assess their preparation level.
10. Does Vedantu Class 10 Maths 2024 Question Paper provide solutions with the question papers?
Yes, Vedantu provides detailed solutions for all the questions in these question papers, helping students understand the correct methods to solve problems. | {"url":"https://www.vedantu.com/previous-year-question-paper/cbse-class-10-maths-question-paper-2024","timestamp":"2024-11-14T00:34:35Z","content_type":"text/html","content_length":"339410","record_id":"<urn:uuid:77ab6faf-941c-4ff8-93b2-b27a831caf4b>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00629.warc.gz"} |
Knot theory
knot, link
Related concepts:
An ordinary tangle is a 1-dimensional manifold with boundary that is embedded into the 3-cube, such that the boundary is embedded into a chosen pair of opposing sides of the cube.
Thus, one may think of a tangle as (a knot if it is connected or generally as) a link that was cut at several points and the resulting strands pulled apart at their endpoints to opposite sides of the
cube. Conversely, a link (knot) is equivalently a (connected) tangle with empty boundary.
Similarly, a tangle that progresses monotonically from its source to its target is equivalently a braid.
The notion of tangles generalizes to that of $m$-tangles in dimension $n$, which are $m$-manifolds with corners embedded into the $n$-cube such that their corners are appropriately embedded in the
cube’s boundary. In this sense ordinary tangles are the 1-tangles in 3-space.
Category of tangles
Tangles naturally constitute the morphisms of a category:
The objects are finite subsets of $\mathbf{R}^2$. Morphisms $A\to B$ are embeddings of unions of finitely many closed intervals and circles into $[0,1]\times\mathbf{R}^2$ such that the restriction of
the embedding to the endpoints yields a bijection to $A\sqcup B$. Morphisms are composed by gluing two copies of $[0,1]$ together and rescaling.
As usual, this suffers from being associative only up to an ambient isotopy. Thus, one can either take ambient isotopy classes of such embeddings, obtaining a 1-category of tangles, or instead turn
tangles into an (∞,1)-category, in which case morphisms $A\to B$ will encode the whole homotopy type of the space of embeddings described above.
Category of framed tangles
Analogously there is a notion of framed tangles which are to ordinary tangles as framed links are to ordinary links.
Shum 1994
Yetter 2001 Thm. 9.1
Higher-dimensional variants
Higher-dimensional tangles, i.e. $m$-manifolds with corners embedded in the $n$-cube, were considered for instance in Baez and Dolan 95. A “tame” definition of tangles that admit finite
stratifications by their critical point types was given in Dorn and Douglas 22.
See also: | {"url":"https://ncatlab.org/nlab/show/tangle","timestamp":"2024-11-12T11:44:23Z","content_type":"application/xhtml+xml","content_length":"27933","record_id":"<urn:uuid:59febc07-67e0-42f3-ae45-be3e42031c63>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00681.warc.gz"} |
OpenStax College Physics for AP® Courses, Chapter 11, Problem 36 (Problems & Exercises)
What fraction of ice is submerged when it floats in freshwater, given the density of water at $0^\circ\textrm{ C}$ is very close to $1000 \textrm{ kg/m}^3$?
Question by OpenStax is licensed under CC BY 4.0.
Solution video
Video Transcript
This is College Physics Answers with Shaun Dychko. We want to know what fraction of a piece of ice is submerged in fresh water at 0 degrees Celsius. So the fraction submerged is the density of the
thing floating divided by the density of the fluid that it's in and we look up the density of ice in our table [11.1] and it is 0.917 times 10 to the 3 kilograms per cubic meter and we divide that by
the density of freshwater at 0 degrees which we are told is 1000 kilograms per cubic meter. So the 1000's cancel leaving us with 0.917 is the fraction submerged. | {"url":"https://collegephysicsanswers.com/openstax-solutions/what-fraction-ice-submerged-when-it-floats-freshwater-given-density-water-0","timestamp":"2024-11-04T01:04:35Z","content_type":"text/html","content_length":"194244","record_id":"<urn:uuid:7daa767f-9760-4bfa-aed7-ddf4ef09eaf6>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00246.warc.gz"} |
Equivalence Classes and Partitions
Equivalence Classes
Let \(R\) be an equivalence relation on a set \(A,\) and let \(a \in A.\) The equivalence class of \(a\) is the set of all elements of \(A\) that are equivalent to \(a.\)
The equivalence class of an element \(a\) is denoted by \(\left[ a \right].\) Thus, by definition,
\[\left[ a \right] = \left\{ {b \in A \mid aRb} \right\} = \left\{ {b \in A \mid a \sim b} \right\}.\]
If \(b \in \left[ a \right]\) then the element \(b\) is called a representative of the equivalence class \(\left[ a \right].\) Any element of an equivalence class may be chosen as a representative of
the class.
The set of all equivalence classes of \(A\) is called the quotient set of \(A\) by the relation \(R.\) The quotient set is denoted as \(A/R.\)
\[A/R = \left\{ {\left[ a \right] \mid a \in A} \right\}.\]
Properties of Equivalence Classes
If \(R \) (also denoted by \(\sim\)) is an equivalence relation on set \(A,\) then
• Every element \(a \in A\) is a member of the equivalence class \(\left[ a \right].\)
\[\forall\, a \in A,a \in \left[ a \right]\]
• Two elements \(a, b \in A\) are equivalent if and only if they belong to the same equivalence class.
\[\forall\, a,b \in A,a \sim b \text{ iff } \left[ a \right] = \left[ b \right]\]
• Every two equivalence classes \(\left[ a \right]\) and \(\left[ b \right]\) are either equal or disjoint.
\[\forall\, a,b \in A,\left[ a \right] = \left[ b \right] \text{ or } \left[ a \right] \cap \left[ b \right] = \varnothing\]
A well-known example of an equivalence relation is congruence modulo \(n\). Two integers \(a\) and \(b\) are equivalent if they have the same remainder after dividing by \(n.\)
Consider, for example, the relation of congruence modulo \(3\) on the set of integers \(\mathbb{Z}:\)
\[R = \left\{ {\left( {a,b} \right) \mid a \equiv b\;\left( {\bmod 3} \right)} \right\}.\]
The possible remainders for \(n = 3\) are \(0,1,\) and \(2.\) An equivalence class consists of those integers that have the same remainder. Hence, there are \(3\) equivalence classes in this example:
\[\left[ 0 \right] = \left\{ { \ldots , - 9, - 6, - 3,0,3,6,9, \ldots } \right\}\]
\[\left[ 1 \right] = \left\{ { \ldots , - 8, - 5, - 2,1,4,7,10, \ldots } \right\}\]
\[\left[ 2 \right] = \left\{ { \ldots , - 7, - 4, - 1,2,5,8,11, \ldots } \right\}\]
Similarly, one can show that the relation of congruence modulo \(n\) has \(n\) equivalence classes \(\left[ 0 \right],\left[ 1 \right],\left[ 2 \right], \ldots ,\left[ {n - 1} \right].\)
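As a quick computational illustration (added here as a sketch; it is not part of the original article, and the function name quotient_set is made up), the equivalence classes of congruence modulo \(n\) can be produced by grouping integers according to their remainder. For \(n = 3\) on a finite window of integers this reproduces the three classes listed above.

```python
# Group elements by remainder modulo n: the keys are the representatives 0, 1, ..., n-1
# and the values are the members of the corresponding equivalence classes.
def quotient_set(elements, n):
    classes = {}
    for a in elements:
        classes.setdefault(a % n, []).append(a)   # a ~ b iff a % n == b % n
    return classes

# Congruence modulo 3 on the integers -9..11 (a finite window of Z).
for representative, members in sorted(quotient_set(range(-9, 12), 3).items()):
    print(f"[{representative}] = {members}")
```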
Partitions
Let \(A\) be a set and \({A_1},{A_2}, \ldots ,{A_n}\) be its non-empty subsets. The subsets form a partition \(P\) of \(A\) if
• The union of the subsets in \(P\) is equal to \(A.\)
\[\bigcup\limits_{i = 1}^n {{A_i}} = {A_1} \cup {A_2} \cup \ldots \cup {A_n} = A\]
• The partition \(P\) does not contain the empty set \(\varnothing.\)
\[{A_i} \ne \varnothing \;\forall \,i\]
• The intersection of any distinct subsets in \(P\) is empty.
\[{A_i} \cap {A_j} = \varnothing \;\forall \,i \ne j\]
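These three conditions can be checked mechanically. The small Python sketch below is an added illustration (not from the original article; is_partition is a made-up helper name) that tests whether a family of subsets forms a partition of \(A.\)

```python
def is_partition(A, subsets):
    """Check the three partition conditions for a family of subsets of the set A."""
    blocks = [set(s) for s in subsets]
    if any(not b for b in blocks):                      # no block may be empty
        return False
    if set().union(*blocks) != set(A):                  # the union of the blocks must equal A
        return False
    return sum(len(b) for b in blocks) == len(set(A))   # distinct blocks must be pairwise disjoint

print(is_partition({1, 2, 3, 4}, [{1, 2}, {3}, {4}]))     # True
print(is_partition({1, 2, 3, 4}, [{1, 2}, {2, 3}, {4}]))  # False: the blocks overlap
```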
There is a direct link between equivalence classes and partitions. For any equivalence relation on a set \(A,\) the set of all its equivalence classes is a partition of \(A.\)
The converse is also true. Given a partition \(P\) on set \(A,\) we can define an equivalence relation induced by the partition such that \(a \sim b\) if and only if the elements \(a\) and \(b\) are
in the same block in \(P.\) | {"url":"https://mathlake.com/equivalence-classes-partitions","timestamp":"2024-11-06T23:38:07Z","content_type":"text/html","content_length":"11811","record_id":"<urn:uuid:506afee3-326c-466b-b91d-9078f222188a>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00354.warc.gz"} |
Calculating Production Rate in context of production rate
31 Aug 2024
Calculating Production Rate: A Theoretical Framework
Production rate is a critical parameter in various industries, including manufacturing, mining, and construction. Accurate calculation of production rate is essential for optimizing resource
allocation, scheduling, and cost estimation. This article presents a theoretical framework for calculating production rate, highlighting the key factors that influence it.
Production rate refers to the quantity of output produced by a system or process within a given time period. It is a fundamental concept in operations research and management science, with
applications in various fields. The calculation of production rate involves understanding the relationships between input variables, such as labor, equipment, and materials, and output variables,
including product quality and yield.
Theoretical Framework
The production rate (PR) can be calculated using the following formula:
PR = (Output Quantity / Time Period)
PR = (Q / t)
where Q is the output quantity and t is the time period.
However, this simple formula does not account for various factors that influence production rate. A more comprehensive framework considers the following variables:
• Labor productivity: The amount of output produced per unit of labor input.
• Equipment utilization: The percentage of equipment capacity utilized during production.
• Material efficiency: The ratio of material used to produce a unit of output.
• Quality control: The proportion of output that meets quality standards.
These factors can be incorporated into the production rate formula as follows:
PR = (Q / t) × (LP × EU × ME × QC)
where LP is labor productivity, EU is equipment utilization, ME is material efficiency, and QC is quality control.
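As a rough illustration of how the adjusted formula might be evaluated in practice, the short Python sketch below treats the four factors as dimensionless multipliers (one possible reading of the formula); the function name and the sample numbers are hypothetical and are not taken from the article.

```python
def production_rate(output_qty, time_period, labor_productivity=1.0,
                    equipment_utilization=1.0, material_efficiency=1.0,
                    quality_control=1.0):
    """PR = (Q / t) scaled by the four influencing factors, treated here as ratios."""
    return (output_qty / time_period) * (labor_productivity * equipment_utilization
                                         * material_efficiency * quality_control)

# Hypothetical example: 1,200 units in an 8-hour shift, 95% equipment utilization,
# 98% of output passing quality control.
print(production_rate(1200, 8, equipment_utilization=0.95, quality_control=0.98))  # ~139.7 units/hour
```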
Calculating production rate requires a thorough understanding of the relationships between input variables and output variables. The theoretical framework presented in this article provides a
comprehensive approach to calculating production rate, taking into account various factors that influence it. By applying this framework, industries can optimize resource allocation, scheduling, and
cost estimation, ultimately improving productivity and competitiveness.
Calculators for ‘production rate’ | {"url":"https://blog.truegeometry.com/tutorials/education/271d08e404371fd230e00e2f45ce8703/JSON_TO_ARTCL_Calculating_Production_Rate_in_context_of_production_rate.html","timestamp":"2024-11-12T23:57:52Z","content_type":"text/html","content_length":"17381","record_id":"<urn:uuid:20b2a908-a392-4942-ae4b-6d55878f518a>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00320.warc.gz"} |
Developmental Math Emporium
Learning Outcomes
• Multiply two decimals
• Multiply a decimal by multiples of ten
Multiplying decimals is very much like multiplying whole numbers—we just have to determine where to place the decimal point. The procedure for multiplying decimals will make sense if we first review
multiplying fractions.
Do you remember how to multiply fractions? To multiply fractions, you multiply the numerators and then multiply the denominators.
So let’s see what we would get as the product of decimals by converting them to fractions first. We will do two examples side-by-side below. Look for a pattern.
| | A | B |
| --- | --- | --- |
| | [latex]\left(0.3\right)\left(0.7\right)[/latex] | [latex]\left(0.2\right)\left(0.46\right)[/latex] |
| Convert to fractions. | [latex]\left({\Large\frac{3}{10}}\right)\left({\Large\frac{7}{10}}\right)[/latex] | [latex]\left({\Large\frac{2}{10}}\right)\left({\Large\frac{46}{100}}\right)[/latex] |
| Multiply. | [latex]{\Large\frac{21}{100}}[/latex] | [latex]{\Large\frac{92}{1000}}[/latex] |
| Convert back to decimals. | [latex]0.21[/latex] | [latex]0.092[/latex] |
There is a pattern that we can use. In A, we multiplied two numbers that each had one decimal place, and the product had two decimal places. In B, we multiplied a number with one decimal place by a
number with two decimal places, and the product had three decimal places.
How many decimal places would you expect for the product of [latex]\left(0.01\right)\left(0.004\right)?[/latex] If you said “five”, you recognized the pattern. When we multiply two numbers with
decimals, we count all the decimal places in the factors—in this case two plus three—to get the number of decimal places in the product—in this case five.
Once we know how to determine the number of digits after the decimal point, we can multiply decimal numbers without converting them to fractions first. The number of decimal places in the product is
the sum of the number of decimal places in the factors.
The rules for multiplying positive and negative numbers apply to decimals, too, of course.
Multiplying Two Numbers
When multiplying two numbers,
• if their signs are the same, the product is positive.
• if their signs are different, the product is negative.
When you multiply signed decimals, first determine the sign of the product and then multiply as if the numbers were both positive. Finally, write the product with the appropriate sign.
Multiply decimal numbers.
1. Determine the sign of the product.
2. Write the numbers in vertical format, lining up the numbers on the right.
3. Multiply the numbers as if they were whole numbers, temporarily ignoring the decimal points.
4. Place the decimal point. The number of decimal places in the product is the sum of the number of decimal places in the factors. If needed, use zeros as placeholders.
5. Write the product with the appropriate sign.
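The procedure above can also be expressed in a few lines of code. The Python sketch below is an added illustration (not part of the original lesson; the helper name multiply_decimals is made up): it multiplies the digits as whole numbers and then uses the total count of decimal places in the factors to position the decimal point, inserting placeholder zeros when needed.

```python
def multiply_decimals(a: str, b: str) -> str:
    """Multiply two decimals given as strings, following the place-counting rule above."""
    sign = '-' if a.startswith('-') != b.startswith('-') else ''
    a, b = a.lstrip('-'), b.lstrip('-')
    places = (len(a.split('.')[1]) if '.' in a else 0) + (len(b.split('.')[1]) if '.' in b else 0)
    digits = int(a.replace('.', '')) * int(b.replace('.', ''))   # multiply as whole numbers
    text = str(digits).rjust(places + 1, '0')                    # zeros as placeholders if needed
    return sign + (text[:-places] + '.' + text[-places:] if places else text)

print(multiply_decimals('3.9', '4.075'))   # 15.8925
print(multiply_decimals('0.3', '0.7'))     # 0.21
print(multiply_decimals('0.2', '0.46'))    # 0.092
print(multiply_decimals('-1.5', '2.4'))    # -3.60, i.e. -3.6 (signs differ, so the product is negative)
```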
For a review on how to multiply multiple-digit numbers using columns for place value, click here.
Multiply: [latex]\left(3.9\right)\left(4.075\right)[/latex]
Determine the sign of the product. The signs are the same. The product will be positive.
Write the numbers in vertical format, lining up the numbers on the right.
Multiply the numbers as if they were whole numbers, temporarily ignoring the decimal points.
Place the decimal point. Add the number of decimal places in the factors [latex]\left(1+3\right)[/latex]. Place the decimal point 4 places from
the right.
The product is positive. [latex]\left(3.9\right)\left(4.075\right)=15.8925[/latex]
try it
Multiply: [latex]\left(-8.2\right)\text{(}5.19\text{)}[/latex]
In the following video we show another example of how to multiply two decimals.
In the next example, we’ll need to add several placeholder zeros to properly place the decimal point.
Multiply: [latex]\left(0.03\right)\text{(}0.045\text{)}[/latex]
Multiply by Powers of [latex]10[/latex]
In many fields, especially in the sciences, it is common to multiply decimals by powers of [latex]10[/latex]. Let’s see what happens when we multiply [latex]1.9436[/latex] by some powers of [latex]10[/latex].
Look at the results without the final zeros. Do you notice a pattern?
[latex]\begin{array}{ccc}1.9436\left(10\right)\hfill & =& 19.436\hfill \\ 1.9436\left(100\right)\hfill & =& 194.36\hfill \\ 1.9436\left(1000\right)\hfill & =& 1943.6\hfill \end{array}[/latex]
The number of places that the decimal point moved is the same as the number of zeros in the power of ten. The table below summarizes the results.
| Multiply by | Number of zeros | Number of places decimal point moves |
| --- | --- | --- |
| [latex]10[/latex] | [latex]1[/latex] | [latex]1[/latex] place to the right |
| [latex]100[/latex] | [latex]2[/latex] | [latex]2[/latex] places to the right |
| [latex]1,000[/latex] | [latex]3[/latex] | [latex]3[/latex] places to the right |
| [latex]10,000[/latex] | [latex]4[/latex] | [latex]4[/latex] places to the right |
We can use this pattern as a shortcut to multiply by powers of ten instead of multiplying using the vertical format. We can count the zeros in the power of [latex]10[/latex] and then move the decimal
point that same number of places to the right.
So, for example, to multiply [latex]45.86[/latex] by [latex]100[/latex], move the decimal point [latex]2[/latex] places to the right.
Sometimes when we need to move the decimal point, there are not enough decimal places. In that case, we use zeros as placeholders. For example, let’s multiply [latex]2.4[/latex] by [latex]100[/latex]. We need to move the decimal point [latex]2[/latex] places to the right. Since there is only one digit to the right of the decimal point, we must write a [latex]0[/latex] in the hundredths place.
Multiply a decimal by a power of [latex]10[/latex]
1. Move the decimal point to the right the same number of places as the number of zeros in the power of [latex]10[/latex].
2. Write zeros at the end of the number as placeholders if needed.
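As a quick sanity check of this shortcut (an added illustration, not part of the original lesson), Python's decimal module keeps exact decimal digits, so multiplying by [latex]10[/latex], [latex]100[/latex], and [latex]1,000[/latex] visibly moves the decimal point the expected number of places:

```python
from decimal import Decimal

x = Decimal('1.9436')
for power in (10, 100, 1000):
    zeros = len(str(power)) - 1   # number of zeros = places the decimal point moves right
    print(f"{x} * {power} = {x * power}  (moved {zeros} place(s) to the right)")
# 1.9436 * 10 = 19.4360, 1.9436 * 100 = 194.3600, 1.9436 * 1000 = 1943.6000
```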
Multiply [latex]5.63[/latex] by factors of
1. [latex]10[/latex]
2. [latex]100[/latex]
3. [latex]1000[/latex]
Key Takeaways
In the following video we show more examples of how to multiply a decimal by 10, 100, and 1000. | {"url":"https://courses.lumenlearning.com/wm-developmentalemporium/chapter/multiplying-decimals/","timestamp":"2024-11-08T15:03:37Z","content_type":"text/html","content_length":"66058","record_id":"<urn:uuid:6a0875f5-17fe-4815-b643-8781940437f8>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00657.warc.gz"} |
What skills are essential for conducting kinematic analysis? | SolidWorks Assignment Help
What skills are essential for conducting kinematic analysis? By some, such as the notion of an ‘estimate of kinematic movement’ by Kojima, it is known that the most significant part is the kinematic
analysis, and how, and what the values are based on, are significant. For example, “The absolute values of an electromagnetic head velocity” (Heller) that is assumed to be the minimum value that
makes the head move at a constant velocity, is an important tool for estimating the absolute value of the velocity of the body along the length of any given object. In situations where distance
appears as such, the algorithm can take any values for each of the components of a given object in order to accurately calculate the absolute value of the velocity of the head along the length of the
body for that object. In reality, such values are not so important. At the moment of writing this paper (which I have not done myself), the mathematical mechanics of body movement have become very
important, because this is a dynamic dynamic, complex process, and there is a lot of work to be done, with kinematic analysis as the essential approach. Different tools have been recently introduced
to analyze skeletal vibration (Rieffler’s T-test) in cases involving increased velocity. This measure results when two different values are included in a particular pattern of field of view and any
information is shown by the position of the object relative to a reference plane. When the velocity of every pair of objects as a percentage of the total, this measure tells us the time how long one
(or more) frequency component of vibration (the total frequency components) is. If the values show two values, the calculation of the frequency component is in fact based on the (two-dimensional)
length of the object. Other problems arise when these methods are applied to determine the exact velocity of one object relative to another object. The most important ones are the velocity
discrepancy between two objects and the relative difference of any two objects. For example, if the object was a pinion while being manipulated link a very high speed, with a force of approximately 5
kilograms an hour. The difference of those two forces would (just like a human-sized pogo stick) be very large in magnitude. The ideal algorithm will have at least two values for each frequency
component (point velocity) that will determine the velocity of the body at any given object after filtering out four possible values. The first value is simply the position of the object relative to
the reference plane. If there is an amount of body movement that resembles an intense heat rush around a surface, one can calculate the relative velocity (i.e. the amount of body movement that is
being measured) based on the following rules: The surface center of each object will generally be facing away from the reference plane due to the force on the object’s surface (the measured position
of the surface) There are three relevant rules that each algorithm should follow. Step 1: Start In order to get an absolute or relative velocity when the force of the object on the corresponding
surface is small, it is best to start by picking the first value from an array that represents the point velocity coordinate. We begin with a small point in each of the four possible values of the
velocity range.
We will eventually want to evaluate the relative velocity of the three objects, either by calculating the velocity of the POG stick using the point velocity coordinates, or, using the standard
velocity or moving mass ratio formula (eq. 3.5b), by including any corresponding velocities near the object to be imaged (here 4.18). Step 2: Sample from We will pick a small feature that appears as
a standard deviation (delta) to determine the accuracy of the calculation. Since this feature contains a standard deviation there is more than 1 standard deviation error of the
average velocation. TheWhat skills are essential for conducting kinematic analysis? When conducting Kinematic Analysis with Visual Acoustics (KNO4) which appears to be essential in most visual arts
textbooks, you need very little imagination or insight into the analysis process to select and apply these skills to your application. But once you’ve chosen whether to use these skills for that
purpose or not, with the facts in your data it’s now time to focus on this topic. Once you hold on to personal conceptual knowledge that includes basic measurement and analytical skills, your
application is likely to become immensely more specific than it used to be. A great set of technical skills will give you a range of what’s required for the time-frame, however sometimes you may have
to improvise out of these skills during your preparation. In such circumstances it’s important that you’re familiar with specific skills, especially when they’re relevant in your work. If you have
very limited access to these skills and wouldn’t consider them, you may not be able to present your case. Now give it a shot so that you’re confident that you’re asking the right questions and
avoiding the common mistakes. If you’re not able to master the new subject matter by doing so, it can be very difficult. Your candidate cannot be successful in the application process due to the lack
of enough information in the data. However at least you should realize that you may have experienced difficulties during the time of your interviews. You don’t know where to begin, for sure, but
finding information can often help even more. In addition, there is a much subtler way to get used to your experience; however, there are a few critical points to discuss.
Additionally, you have plenty of time to think about the presentation and it gives you confidence that you will be able to answer most of the questions and provide information that will serve as
background for the research activities. This is why you should get over yourself.
In fact, it’s easy to find people whose use of the new topic on the website is spotlessly rude. However, if you’re having trouble visualizing someone’s behavior on an interview day, rather than
simply being so-so, it’s well worth keeping in mind that you’re not really seeing many examples of this common mindset. The aim of the real estate agency being made out to be a site for personal
information is to fill a space rather than talking about it. Using Visual Acoustics is not just an application for the actual subject matter, but also a way to help your chances of finding more
experience working with it as well as keep yourself in a state of suspended disbelief after all the work you’ve already put into your application. The fact is, it is extremely important that you know
in advance what and how to apply in your work environment. Especially, should you prefer visual analysis without the necessity of such skills it’s a real deal! If you are not confident that you’re
going to get across this subject before the interview, it may seem complex to be too smart with visual acuity to do it the hard way. However the idea has always been part of the minds of most western
people and a reason why they kept such belief firmly in their minds for so long. When you are in a position to achieve this in your first interview, you cannot do it the way you used to. Having the
best training will certainly improve your chances of future success with the visual analysis. And will make it seem a big deal to all of our readers who will have the best qualifications for the
interviews. It’s simple, dependable and professional. Pick the topics you love most, and it will really impact your chances because the quality of the interview is extremely high and if you have got
an initial idea of the subject matter,What skills are essential for conducting kinematic analysis? Aktron has the following question: The following question: – Which skill do you have to perform an
analysis? A question based study of the mind in the age of engineering. For the purposes of a knowledge experiment and for which the author was not consulted, it was decided, if the studies would be
based on that, however the same level of responsibility would need to be taken into consideration (including age). If using ankerman analysis means that the person being analyzed has three or more
test points to go at, then to complete the analysis one should also say anything after: 1. If the analysis is a study of the mind. 2. If the analysis is a study of the mind. 3. What test point is
used under this kind of analysis? The study subject’s own knowledge should not depend on what other measurements (example: – time, rest) one does with one’s own experience – if one can have a good
indication of what measurements one comes close to these and they should be on to what kind of analysis should they use? What are the possible effects of time (energy, pressure, sound, etc.) going
into the analysis? Now, what does the study do? In one case for example, a learning test implies that the student has more knowledge than his teacher in terms of a standard exam, in which case
another student might have to report that the individual had a degree, and if they have a standard, would have something to say.
In a second case, the student would have more knowledge but from what he saw he would have a higher level and a more complete understanding than would be needed for an algebra teacher, and so on. And
if given a first grade high school, the teacher might have done something very similar to what would be needed for a physical teacher to do his grade. If the student thinks that he has more knowledge
and understanding than the teacher, then one often deals with her/his role as it is central to the calculation of any such exercises. If a teacher does not know what to think, rather the student
should be advised to follow the teacher’s advice without knowing this. One way to think about getting the results to the teacher is to use the following technique. As the question approaches, a
teacher might give him/her a test to put the study into account, or some other technique that determines what degree should be put into the grade. Be it for personal purposes, I/the student might
consider using your answer according to your degree which of course it is. If the teacher used a great amount of time (that I am aware of of), then one would ask, on the basis of the student’s
ability, which of them do you feel more focused on? Be it for personal purposes, I/the | {"url":"https://solidworksaid.com/what-skills-are-essential-for-conducting-kinematic-analysis-2-29298","timestamp":"2024-11-02T01:42:52Z","content_type":"text/html","content_length":"159670","record_id":"<urn:uuid:a74ed021-29b5-4c42-be2c-012e2cd7f615>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00744.warc.gz"} |
Use peOptions to create option sets when using the function pe.
opt = peOptions creates the default options set for pe.
opt = peOptions(Name,Value) creates an option set with the options specified by one or more Name,Value arguments.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes. For example, opt = peOptions('InitialCondition','z','InputOffset',5) creates a peOptions object and specifies
the InitialCondition as zero.
InitialCondition — Initial condition handling
'e' (default) | 'z' | 'd' | vector | matrix | initialCondition object | x0obj
Handling of initial conditions.
Specify InitialCondition as one of the following:
• 'z' — Zero initial conditions.
• 'e' — Estimate initial conditions such that the prediction error for observed output is minimized.
For nonlinear grey-box models, only those initial states i that are designated as free in the model (sys.InitialStates(i).Fixed = false) are estimated. To estimate all the states of the model,
first specify all the Nx states of the idnlgrey model sys as free.
for i = 1:Nx
sys.InitialStates(i).Fixed = false;
end
Similarly, to fix all the initial states to values specified in sys.InitialStates, first specify all the states as fixed in the sys.InitialStates property of the nonlinear grey-box model.
• 'd' — Similar to 'e', but absorbs nonzero delays into the model coefficients. The delays are first converted to explicit model states, and the initial values of those states are also estimated
and returned.
Use this option for linear models only.
• Vector or Matrix — Initial guess for state values, specified as a numerical column vector of length equal to the number of states. For multi-experiment data, specify a matrix with Ne columns,
where Ne is the number of experiments. Otherwise, use a column vector to specify the same initial conditions for all experiments. Use this option for state-space (idss and idgrey) and nonlinear
models (idnlarx, idnlhw, and idnlgrey) only.
• initialCondition object — initialCondition object that represents a model of the free response of the system to initial conditions. For multiexperiment data, specify a 1-by-Ne array of objects,
where Ne is the number of experiments.
Use this option for linear models only.
• Structure with the following fields, which contain the historical input and output values for a time interval immediately before the start time of the data used by pe:
| Field | Description |
| --- | --- |
| Input | Input history, specified as a matrix with Nu columns, where Nu is the number of input channels. For time series models, use []. The number of rows must be greater than or equal to the model order. |
| Output | Output history, specified as a matrix with Ny columns, where Ny is the number of output channels. The number of rows must be greater than or equal to the model order. |
For multi-experiment data, configure the initial conditions separately for each experiment by specifying InitialCondition as a structure array with Ne elements. To specify the same initial
conditions for all experiments, use a single structure.
The software uses data2state to map the historical data to states. If your model is not idss, idgrey, idnlgrey, or idnlarx, the software first converts the model to its state-space representation
and then maps the data to states. If conversion of your model to idss is not possible, the estimated states are returned empty.
• x0obj — Specification object created using idpar. Use this object for discrete-time state-space (idss and idgrey) and nonlinear grey-box (idnlgrey) models only. Use x0obj to impose constraints on
the initial states by fixing their value or specifying minimum or maximum bounds.
InputOffset — Removes offset from time domain input data
[] (default) | column vector | matrix
Removes offset from time domain input data during prediction-error calculation.
Specify as a column vector of length Nu, where Nu is the number of inputs.
For multi-experiment data, specify InputOffset as an Nu-by-Ne matrix. Nu is the number of inputs, and Ne is the number of experiments.
Each entry specified by InputOffset is subtracted from the corresponding input data.
Specify input offset for only time domain data.
OutputOffset — Removes offset from time domain output data
[] (default) | column vector | matrix
Removes offset from time domain output data during prediction-error calculation.
Specify as a column vector of length Ny, where Ny is the number of outputs.
In case of multi-experiment data, specify OutputOffset as a Ny-by-Ne matrix. Ny is the number of outputs, and Ne is the number of experiments.
Each entry specified by OutputOffset is subtracted from the corresponding output data.
Specify output offset for only time domain data.
OutputWeight — Weight of output
[] (default) | 'noise' | matrix
Weight of output for initial condition estimation.
OutputWeight takes one of the following:
• [] — No weighting is used. This value is the same as using eye(Ny) for the output weight, where Ny is the number of outputs.
• 'noise' — Inverse of the noise variance stored with the model.
• matrix — A positive, semidefinite matrix of dimension Ny-by-Ny, where Ny is the number of outputs.
InputInterSample — Input interpolation method
'auto' (default) | string | character array
Input interpolation method, specified as:
• 'auto', 'foh', 'zoh', or 'bl' for continuous-time linear models
• 'auto', 'foh', or 'zoh' for continuous-time nonlinear grey-box models
• 'auto', 'foh', 'zoh', 'cubic', 'makima', 'pchip', or 'spline' for continuous-time neural state-space models
InputInterSample applies only to continuous-time models. If InputInterSample is 'auto', the software automatically picks the same input interpolation method as that used for model estimation.
For information on the interpolation methods, see nssTrainingADAM and compareOptions.
Create Default Options Set for Prediction-Error Calculation
Specify Options for Prediction-Error Calculation
Create an options set for pe using zero initial conditions, and set the input offset to 5.
opt = peOptions('InitialCondition','z','InputOffset',5);
Alternatively, use dot notation to set the values of opt.
opt = peOptions;
opt.InitialCondition = 'z';
opt.InputOffset = 5;
Version History
Introduced in R2012a | {"url":"https://ch.mathworks.com/help/ident/ref/peoptions.html","timestamp":"2024-11-06T09:16:29Z","content_type":"text/html","content_length":"98454","record_id":"<urn:uuid:4cf4768c-6344-48c5-8399-b412ffc98c3e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00362.warc.gz"} |
OT: USAMO is number one!!!
In case you missed it.
They're No. 1: U.S. Wins Math Olympiad For First Time In 21 Years
(This is about 2015 USAMO.)
If you have a deeper interest, here is a documentary below on the 2006 USAMO team and selection process. They give a bit of insight into several participants and their families. They start with
approximately 250,000 high school students. Through a rigorous testing process they are whittled down to 6 contestants and 2 alternates. No participation trophies here. In the last 20 years China has
finished 13 times in first place (2nd 5 times and 3rd 2 times). In the last 4 years the USA has finished numero uno 3 times (2015-16-18). This year 2018 the USA had 5 gold and 1 silver medalists.
Five gold is quite spectacular considering how hard these math problems are. Absolutely amazing was James Lin with 6 of 6 perfect scores which is like Nadia Comăneci scoring 6 perfect tens only in a
much harder discipline. The 2016 USAMO team was truly amazing - all 6 USA team members earned a gold medal with two scoring 6 of 6 perfect scores.
Quite interesting indeed.
Very interesting. Still for me, the Gender discrepancy in general on a global scale is pretty depressing (90-10%) look at the clip around the 1:29:00 mark.
When I joined the BY I was under the impression that there would be no math involved. Nevertheless after a minute I was compelled to watch the whole documentary.. A fine and occasionally moving docu.
Thanks for allowing this old math challenged loser a view of the other side.
Very interesting. Still for me, the Gender discrepancy in general on a global scale is pretty depressing (90-10%) look at the clip around the 1:29:00 mark.
I did. It shows Zeb Brady doing a math problem? See 40.03. Teenage female's attitude toward math? See 40.24. Their mother clearly does not push them in any specific direction. She lets them find
their way. How many friends do you know that have daughters? Did they encourage them to pursue a more male associated curriculum? Were they successful? Many moons ago I hired a young college grad
from a very traditional Lebanese family who owned a restaurant in our office building. She was not at all encouraged to go to college let alone pursue a career in information technology. Yet
something sparked this kid to do so. What was it? I just assumed like me it was what she wanted to do. Her family expected her to join her older brothers in the restaurant business. Why do you
believe there are few women in what are traditionally viewed as male occupation? What can ignite the spark in girls and young women to pursue these careers in greater numbers? I understand managers
have to be more open minded. But even then the bureaucracy can be a problem. At an aerospace company I was given an additional section to manage which had no minorities and women. So I scheduled
interviews with 8 women, six of which were black. All were about to graduate with computer science degrees or information systems degrees. I received a call from human resources, a woman mind you.
She sternly accused me of reverse discrimination. Knowing this could go on and on I agreed to interview 3 men. Then I hired a woman. She called me back to tell they had their eye on me. Make sense of
When I joined the BY I was under the impression that there would be no math involved. Nevertheless after a minute I was compelled to watch the whole documentary.. A fine and occasionally moving
docu. Thanks for allowing this old math challenged loser a view of the other side.
I just realized I had linked to an incomplete version of this USAMO video. I just replaced it with the complete version.
In case you missed it.
They're No. 1: U.S. Wins Math Olympiad For First Time In 21 Years
(This is about 2015 USAMO.)
If you have a deeper interest, here is a documentary below on the 2006 USAMO team and selection process. They give a bit of insight into several participants and their families. They start with
approximately 250,000 high school students. Through a rigorous testing process they are whittled down to 6 contestants and 2 alternates. No participation trophies here. In the last 20 years China
has finished 13 times in first place (2nd 5 times and 3rd 2 times). In the last 4 years the USA has finished numero uno 3 times (2015-16-18). This year 2018 the USA had 5 gold and 1 silver
medalists. Five gold is quite spectacular considering how hard these math problem are. Absolutely amazing was James Lin with 6 of 6 perfect scores which is like Nadia Comăneci scoring 6 perfect
tens only in a much harder discipline. The 2016 USAMO team was truly amazing - all 6 USA team members earned a gold medal with two scoring 6 of 6 perfect scores.
---Thanks, Congratulations to the USA team. Amazing feat. The trophy is in the win. Significantly more important than a Hockey, football, baseball, basketball NC, but with much less or any fan fare.
Thanks again.
Loh says it's important to teach math as more than mere memorization and formulas. He says this is one reason, perhaps, that the subject hasn't attracted as many American students as it could.
Prof Loh said:
"Ultimately, I think that as the mathematical culture starts to reach out to more people in the United States, we could quite possibly start to see more diversity. And I think that would be a
fantastic outcome," he says.
"It could be that maybe the way math is sold, in some sense, is one in which it's just a bunch of formulas to memorize. I think if we are able to communicate to the greater American public that
mathematics is not just about memorizing a bunch of formulas, but in fact is as creative as the humanities and arts, quite possibly you might be able to upend the culture difference."
LOH--I love. American kids are not taught HOW to think math. Some get lost in terms. As Geno has noted in basketball--fundamentals are important. Build from a strong foundation.
Loh says it's important to teach math as more than mere memorization and formulas. He says this is one reason, perhaps, that the subject hasn't attracted as many American students as it could.
Prof Loh said:"Ultimately, I think that as the mathematical culture starts to reach out to more people in the United States, we could quite possibly start to see more diversity. And I think that
would be a fantastic outcome," he says.
I think if we are able to communicate to the greater American public that mathematics is not just about memorizing a bunch of formulas, but in fact is as creative as the humanities and arts,
quite possibly you might be able to upend the culture difference."
LOH--I love. American kids are not taught HOW to think math. Some get lost in terms. .
"American kids are not taught HOW to think math."
Once I and several other computer programmers and systems designers were discussing how we designed computer programs in our heads. As I was describing how I see types of code as different geometric
forms, I suddenly realized from where my thinking originated. Mr. Meese! My geometry teacher in high school who would not allow us to work out solutions on paper. We had to think through the problem and
then and only then write the solution on the blackboard and explain it. I can still hear him urging me to visualize the process and the solutions. When I realized this I located his house and visited
him. He was retired at the time. I thanked him for teaching me to think through geometry. Upon reflection I think his wife was happier with my visit. She seem so proud of him. He taught a lot of us
to think (and quite a lot of geometry). He was teaching us to think period and not just think math.
Loh says it's important to teach math as more than mere memorization and formulas. I guess Mr. Meese knew this all those years ago. The first time I walked into Oracle Corp.'s cafeteria; it was like
walking into the United Nations cafeteria. Larry Ellison cared less about who or what you were as long as you could think very well. The software development staff was recruited from all of the
finest computer science programs in the world. I was in sales. I have always struggled trying to explain to Mrs. SVC and our artistic friends why a computer program like a mathematical proof can be
as artistically beautiful as any work of art such as a painting or a sculpture.
When I joined the BY I was under the impression that there would be no math involved. Nevertheless after a minute I was compelled to watch the whole documentary.. A fine and occasionally moving
docu. Thanks for allowing this old math challenged loser a view of the other side.
What is math?
Great for the kids on the team. Too bad it doesn’t translate to our school population as a whole.
What is math?
Great for the kids on the team. Too bad it doesn’t translate to our school population as a whole.
What does this mean?
How many friends do you know that have daughters? Did they encourage them to pursue a more male associated curriculum? Were they successful?
I work at a large research institution that's mostly physical sciences/engineering, some biosciences. So I know a lot of folks who have encouraged their daughters to go into male-dominated fields. I
can think of several who are doctors or are in med school, but am straining to think of any who are in math or hard sciences. I can think of one who's in finance, which is male-dominated.
I tried to nudge my own daughter that way, and also in sports, all to no avail. I was a firm believer in nurture over nature till she was born. She popped out of the womb with a personality that's
still recognizable today. When I look at her and my numerous nephews and nieces, they've all marched to their own drummers. They've had parents who pushed them to do certain things, and parents who
encouraged them to pursue whatever they wanted. But in the end, they've all wound up doing something you could have predicted when they were five or ten years old.
eembg pointed out that the kids at the event are about 90% male, and it looks like that's the case for most parts of the world. When you have 90% males pretty much regardless of culture, you have to
think that it's not a cultural thing. It's likely a complex origin that we don't understand.
I work at a large research institution that's mostly physical sciences/engineering, some biosciences. So I know a lot of folks who have encouraged their daughters to go into male-dominated
fields. I can think of several who are doctors or are in med school, but am straining to think of any who are in math or hard sciences. I can think of one who's in finance, which is
I tried to nudge my own daughter that way, and also in sports, all to no avail. I was a firm believer in nurture over nature till she was born. She popped out of the womb with a personality
that's still recognizable today. When I look at her and my numerous nephews and nieces, they've all marched to their own drummers. They've had parents who pushed them to do certain things, and
parents who encouraged them to pursue whatever they wanted. But in the end, they've all wound up doing something you could have predicted when they were five or ten years old.
eembg pointed out that the kids at the event are about 90% male, and it looks like that's the case for most parts of the world. When you have 90% males pretty much regardless of culture, you have
to think that it's not a cultural thing. It's likely a complex origin that we don't understand.
Pretty much what Jordan Peterson says.
Our kids lag in math compared to many countries. That competition is for kids who are top notch.
U.S. academic achievement lags that of many other countries
What is the competition for the lower notch kids? Life? No matter how one feels theoretically not all students are created equal. Other countries regarding education are not all about touchy feely.
See Japan. See China. Their education is not completely in the hands of the education system. Their parents get involved. Or at least someone in the family gets involved. Sometimes a kid may need Dr.
Frederick Herzberg's KITA - kick in the ass. I will admit many teachers teach like Ben Stein's Mr. Cantwell in the Wonder Years. Perhaps there isn't enough effort to reach the boys in the back of the
room or the girls. A good friend who quit a successful industry career to teach math made a concerted effort to reach all of his students whether they wanted to be in his class or not. He would ask
what they thought they wanted to do after high school. If there some who wanted to be carpenters, then by god he dedicated a number of classes to applying math to carpentry and how it made things
easier and resulted in a higher quality result. He did this for several occupations. He hooked them for the rest of the math ride he was taking them on. He did not need any bovine scatology
bureaucratic standardized tests for his kids to excel. But even he admits there are students who were not mentally equipped to excel. Anything they learned was a bonus.
What is the competition for the lower notch kids? Life? No matter how one feels theoretically not all students are created equal. Other countries regarding education are not all about touchy
feely. See Japan. See China. Their education is not completely in the hands of the education system. Their parents get involved. Or at least someone in the family gets involved. Sometimes a kid
may need Dr. Frederick Herzberg's KITA - kick in the ass. I will admit many teachers teach like Ben Stein's Mr. Cantwell in the Wonder Years. Perhaps there isn't enough effort to reach the boys
in the back of the room or the girls. A good friend who quit a successful industry career to teach math made a concerted effort to reach all of his students whether they wanted to be in his class
or not. He would ask what they thought they wanted to do after high school. If there some who wanted to be carpenters, then by god he dedicated a number of classes to applying math to carpentry
and how it made things easier and resulted in a higher quality result. He did this for several occupations. He hooked them for the rest of the math ride he was taking them on. He did not need any
bovine scatology bureaucratic standardized tests for his kids to excel. But even he admits there are students who were not mentally equipped to excel. Anything they learned was a bonus.
Not sure, but I think you missed the point. Winning that competition says nothing about the overal progress in math education in the US relative to other countries. The kids in these international
competitions are excellent mathematicians compared to the average student. Obviously some kids won’t be in a learned profession. Heck, I’m a lawyer and am pretty good at 1+1. Carpenters need decent
math skills to live life in the US and I don’t mean calculus.
Not sure, but I think you missed the point. Winning that competition says nothing about the overal progress in math education in the US relative to other countries. The kids in these
international competitions are excellent mathematicians compared to the average student. Obviously some kids won’t be in a learned profession. Heck, I’m a lawyer and am pretty good at 1+1.
Carpenters need decent math skills to live life in the US and I don’t mean calculus.
Pretty sure you missed my point.
Pretty sure you missed my point.
Was in line for a movie (Green Book) excellent flick. How many teachers have the patience and administrative support to teach that way. I could’ve used the guy.
I work at a large research institution that's mostly physical sciences/engineering, some biosciences. So I know a lot of folks who have encouraged their daughters to go into male-dominated
fields. I can think of several who are doctors or are in med school, but am straining to think of any who are in math or hard sciences. I can think of one who's in finance, which is
I tried to nudge my own daughter that way, and also in sports, all to no avail. I was a firm believer in nurture over nature till she was born. She popped out of the womb with a personality
that's still recognizable today. When I look at her and my numerous nephews and nieces, they've all marched to their own drummers. They've had parents who pushed them to do certain things, and
parents who encouraged them to pursue whatever they wanted. But in the end, they've all wound up doing something you could have predicted when they were five or ten years old.
eembg pointed out that the kids at the event are about 90% male, and it looks like that's the case for most parts of the world. When you have 90% males pretty much regardless of culture, you have
to think that it's not a cultural thing. It's likely a complex origin that we don't understand.
-I wholeheartedly agree. Like you I have seen the personality they were born with, they are as adults.
I began college as a very challenged math student.
I had a tutor that taught me how to think in mathematical terms and although it wasn't easy I began to love math.
My children were overwhelmingly girls. I began early telling them they could do anything they wanted. One in 5th grade was adding, subtracting, and multiplying in different bases. She went to MIT. Another
in the Medical field (science based), One Psychology
Law (Harvard). The point is you can nudge or push and they choose. It isn't Male vs Female as some tend to think. It isn't that schools push girls away from those courses. It is an individual
decision, usually. Some didn't want the intense education demanded by Math/Science others just had no interest in that area.
-I wholeheartedly agree. Like you I have seen the personality they were born with, they are as adults.
I began college as a very challenged math student.
I had a tutor that taught me how to think in mathematical terms and although it wasn't easy I began to love math.
My children were overwhelmingly girls. I began early telling them they could do anything they wanted. One in 5th grad was adding, subtracting multiplying in different bases. She went to MIT.
Another in the Medical field (science based), One Psychology
Law (Harvard). The point is you can nudge or push and they choose. It isn't Male vs Female as some tend to think. It isn't that schools push girls away from those courses. It is an individual
decision, usually. Some didn't want the intense education demanded by Math/Science others just had no interest in that area.
Congratulations to your daughters, you should be proud. I have 3 sons and one is a scientist/college prof. The others are less math challenged than I was, but one became an attorney and the other an
Emmy winning TV producer. Young women are now entering all the fields which is a good thing. The sciences are wide open to them.
One problem for a lot of modern kids isn’t so much math as it is writing ability. They spend so much time writing short hand in emails and more recently even worse, by texting, that writing is
suffering. It’s a challenge for parents and teachers to overcome. And with the advent of the easy availability of electronics for kids to play games on and text, reading is slipping as a pass time
which also tends to make writing skills suffer. In my previous life as an attorney I’d get an occasional letter from lawyers so poorly written that it made me wonder how they got through law school
and passed a bar exam.
"American kids are not taught HOW to think math." Once I and several other computer programmers and systems designers were discussing how we designed computer programs in our heads. As I was
describing how I see types of code as different geometric forms, I suddenly realized from where my thinking originated. Mr. Meese! My geometry teacher in high school who would not allow us to
workout solutions on paper. We had think through the problem and then and only then write the solution on the blackboard and explain it. I can still hear him urging me to visualize the process
and the solutions. When I realized this I located his house and visited him. He was retired at the time. I thanked him for teaching me to think through geometry. Upon reflection I think his wife
was happier with my visit. She seem so proud of him. He taught a lot of us to think (and quite a lot of geometry). He was teaching us to think period and not just think math. Loh says it's
important to teach math as more than mere memorization and formulas. I guess Mr. Meese knew this all those years ago.
The first time I walked into Oracle Corp.'s cafeteria; it was like walking into the United Nations cafeteria. Larry Ellison cared less about who or what you were as long as you could think very
well. The software development staff was recruited from all of the finest computer science programs in the world. I was in sales.
I have always struggled trying to explain to Mrs. SVC and our artistic friends why a computer program like a mathematical proof can be as artistically beautiful as any work of art such as a
painting or a sculpture.
Your teacher was the exception that proves the rule (my point): MOST of math is poorly taught; you were lucky in finding a teacher who truly understood how to teach math.
Math must begin at an early age and taught as a FUN subject. It must be taught as the base for most things in life, ie. tie it to related subjects, events, life. Then build on that. Do you know how
many people can't balance a checkbook?
Your HS math teacher was an innovator--I cheer loudly this teacher.
Way too many Boys and Girls don't understand enough math to the point of Math phobia. If you are ever fortunate to see that teacher again--give her a hug and kiss from me. She/he should be publicly
honored. Thank you.
I began programming at the machine level in the 60's when we required a bootstrap punched in, by finger, to allow a paper tape "program" to initiate---then with 2 dual cassettes we could evaluate the
total nuclear spectrum analyzing nuclear material on a dewar with a geli detector and a pulse height analyzer. I bought a "Commodore computer to teach my girls. My son got his hands on it and that
became his life.
I have been lucky enough to wander through many fields of science; more than just a few Women were and are exceptional in their fields, some how the media seems to miss the great women and their
numbers in math and science--role models missed.
Congratulations to your daughters, you should be proud. I have 3 sons and one is a scientist/college prof. The others are less math challenged than I was, but one became an attorney and the other
an Emmy winning TV producer. Young women are now entering all the fields which is a good thing. The sciences are wide open to them.
One problem for a lot of modern kids isn’t so much math as it is writing ability. They spend so much time writing short hand in emails and more recently even worse, by texting, that writing is
suffering. It’s a challenge for parents and teachers to overcome. And with the advent of the easy availability of electronics for kids to play games on and text, reading is slipping as a pass
time which also tends to make writing skills suffer. In my previous life as an attorney, I’d get an occasional letter from lawyers so poorly written that it made me wonder how they got through
law school and passed a bar exam.
As a youth, I was forced to read "the classics" when my interest was science. I read anything with a scientific word in the title. In the past 30 years, I found the fun of reading beyond science
(although I'm drawn that way). My spouse is a retired English Prof. and she tells me I write as though English was my 25th language
I'm not an attorney I did, however, attend a number of Contract Law classes presented at George Mason by a Federal Judge. My attorney daughter was born with a "fairness" adjudicating gene it was
evident at an early age. As you pointed out personalities appear early. Speaking about your exceptional kids, kudo's to a good dad. They didn't get there without you. Your genes, your guidance, your
What is the competition for the lower notch kids? Life? No matter how one feels theoretically not all students are created equal. Other countries regarding education are not all about touchy
feely. See Japan. See China. Their education is not completely in the hands of the education system. Their parents get involved. Or at least someone in the family gets involved. Sometimes a kid
may need Dr. Frederick Herzberg's KITA - kick in the ass. I will admit many teachers teach like Ben Stein's Mr. Cantwell in the Wonder Years. Perhaps there isn't enough effort to reach the boys
in the back of the room or the girls. A good friend who quit a successful industry career to teach math made a concerted effort to reach all of his students whether they wanted to be in his class
or not. He would ask what they thought they wanted to do after high school. If there some who wanted to be carpenters, then by god he dedicated a number of classes to applying math to carpentry
and how it made things easier and resulted in a higher quality result. He did this for several occupations. He hooked them for the rest of the math ride he was taking them on. He did not need any
bovine scatology bureaucratic standardized tests for his kids to excel. But even he admits there are students who were not mentally equipped to excel. Anything they learned was a bonus.
You need to do a DNA test, from this posting it would appear I AM your clone (no not clown). I can attest that I often appeared to be a successful teacher when I had exceptional students that
required little of me. I often arranged my classrooms putting the chairs in a circular pattern allowing me to teach in a personal way to each student.
High lighted area---my belief that at an early age students must see the practical and fun use of math, then build on a foundation of no fear of math. | {"url":"https://the-boneyard.com/threads/ot-usamo-is-number-one.137811/","timestamp":"2024-11-05T09:28:12Z","content_type":"text/html","content_length":"208265","record_id":"<urn:uuid:7335da5f-91a9-4c92-9af7-6fc27b3f7ac0>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00471.warc.gz"} |
1. Introduction
Quantum coherence effects in molecular physics are largely based on the existence of the laser [1]. Indeed, in most of our experiments and calculations we take the laser to be an ideal monochromatic
light source. If the laser linewidth is important then we usually just include a “phase diffusion” linewidth into the logic. But what if we are thinking about higher order correlation effects in an
ensemble of coherently driven molecules. For example, photon correlation and light beating spectroscopy involving Glauber second order correlation functions [2, 3]. Furthermore, third and higher
order photon correlations of the laser used to drive our molecular system can be important. The investigation of higher order quantum laser noise is the focus of the present paper.
Fifty years ago the Scully-Lamb (SL) quantum theory of the laser (QTL) was developed using a density matrix formalism [4]. In the interesting threshold region [5, 6] the steady state laser photon
statistics is given by the diagonal elements of the laser density matrix as
where α is the linear gain, β is the non-linear saturation coefficient, γ is the cavity loss rate, and 𝔑 is the normalization constant:
Equation (1) is plotted in Figure 1 where it is compared with a coherent state.
Figure 1. Steady state photon distribution function for coherent (orange dashed line) and laser radiation (blue solid line). The laser is taken to be 20 percent above threshold, 〈n〉 = 200.
The formalism developed in the QTL density matrix analysis has since been successfully applied to many other physical systems such as the single-atom maser (aka the micromaser) [7], the Bose-Einstein condensate (aka the atom laser, see Table 1) [8], pion physics [9], etc. Other applications of the formalism have been developed recently and more will likely emerge. Thus, we are motivated to deepen our understanding of the QTL by further analyzing and experimentally verifying the time dependence of the off-diagonal elements ρ_{n,n+k}(t) ≡ ρ_n^(k)(t). The diagonal elements of the laser density matrix, for which k = 0, have been well studied. Not so for the off-diagonal elements. In particular, ρ_n^(1)(t) yields the Schawlow-Townes laser linewidth. But what about the higher order correlations k = 2, 3, ⋯? That is the focus of the current paper.
Table 1. Parameters in laser and BEC systems.

| Parameter | Laser | BEC |
| --- | --- | --- |
| α | Linear stimulated emission gain | Rate of cooling due to interaction with walls times the number of atoms N |
| β | Non-linear saturation due to the reabsorption of photons generated by stimulated emission | Non-linearity parameter due to the constraint that there are N atoms in the BEC; numerically equal to α/N |
| γ | Loss rate due to photons absorbed in cavity mirrors etc. | Loss rate due to photon absorption from the thermal bath (walls), equal to α(T/T_c)^3 |
The off-diagonal elements vanish at steady state, regressing to zero as [4]
where D=γ/n̄ is the Schawlow-Townes phase diffusion linewidth and n̄=(α-γ)/β. The expectation value of the laser amplitude operator is given by
where ν is the center frequency of the laser field and the electric field per photon is given by E_0 = ℏν/ε_0 V, where ε_0 is the permittivity of free space and V is the laser cavity volume.
As is discussed in the following, the second order off-diagonal elements are given by the field operator averages
and the third order off-diagonal elements are given by
Equation (4) gives the time evolution associated with the first order off-diagonal elements ρ_n^(1), yielding the spectral profile of the laser. The heterodyne method is usually adopted to measure the
linewidth of the laser [10, 11], in which case the center frequency is shifted from optical frequency to the radio frequency range. A natural way to measure the laser linewidth is to beat two almost
identical but uncorrelated lasers [12] such that the beat frequency between the lasers is in the MHz range. The result, as seen from Equation (10), is twice the laser linewidth when the two
independent lasers are nearly identical.
Many experiments have been carried out to determine the linewidth [10] and photon statistics [13] of the laser. Other experiments have measured the intensity correlation of the laser at threshold
[14], revealing the influence of the intensity fluctuation on the laser spectrum. However, to the best of our knowledge, no measurements have been made of the higher order phase correlations (k ≥ 2).
Here we measure the second and third correlation of the heterodyne signals from two independent lasers, which yields the second and third order time evolution of a laser above threshold.
Specifically, we performed the following experiments: the first set of experiments measures the spectral profile of the laser beat note, which allows us to measure the decay rate as shown in
Equation (4). The other two sets of experiments determine the spectral profile of the second and third order correlated beat notes, which allows us to measure the decay rates as shown in Equations (5)
and (6).
Figure 2 illustrates the setup of the first set of experiments. This is a typical heterodyne detection setup, the center frequency between the two He-Ne lasers is in the MHz range. This difference
allows us to analyze the beat signal around a non-zero value hence the full shape of the linewidth is obtained unambiguously. A non-polarizing beamsplitter (BS) is used to mix the two laser beams.
The beat signal is then directed to the photodiode (D1) after the BS. A fast Fourier transform (FFT) of the signal is performed by the spectrum analyzer (SA), giving the frequency spectrum of the beat note.
Experimental setup used in measuring the spectrum of the beat note between lasers 1 and 2. The beat note signal is measured by the detector (D1) and analyzed by the spectrum analyzer(SA). BS,
non-polarizing beamsplitter.
For the first set of experiments, the first order coherence function [3, 4] is
G^(1)(t) = Tr{ρ[(Ê_1†(t) + Ê_2†(t))(Ê_1(t) + Ê_2(t))]}
= Tr{(ρ_1 ⊗ ρ_2)[|Ê_1(t)|^2 + |Ê_2(t)|^2 + Ê_1†(t)Ê_2(t) + c.c.]}
= E_1^2 Tr[ρ_1 â_1†(t)â_1(t)] + E_2^2 Tr[ρ_2 â_2†(t)â_2(t)] + E_1 E_2 {Tr …
where ρ = ρ_1 ⊗ ρ_2 is the density operator of the system, ρ_1 and ρ_2 represent the density operators of lasers 1 and 2, and ν_1 and ν_2 represent the center frequencies of lasers 1 and 2.
From the above equation, we can see that the only terms that carry the beat note frequency are
with its complex conjugate, which contributes to the −ν_0 frequency component, where ν_0 ≡ ν_2 − ν_1. Under the condition that the two lasers are independent, we can rewrite Equation (8) as
Γ^(1)(ν_0, t) = E_1 Σ_{n_1} (n_1+1) ρ_{n_1}^(1)(0) e^(−D_1 t) e^(−iν_1 t) × E_2 Σ_{n_2} n_2 ρ_{n_2}^(−1)(0) e^(−D_2 t) e^(iν_2 t).
Taking the Fourier transform, we have a Lorentzian spectrum centered at the beat frequency ν_0 with a width D′ = D_1 + D_2, which is essentially twice the width of one laser
The second and third experiments measure the spectral profile of the second and third order correlation of beat notes, the setup is shown in Figure 3. We used the same two lasers to create the beat
signal, where three detectors Di(i = 1, 2, 3) are used. The outputs from the photodiodes are used as inputs for a frequency mixer. The output from the mixer is then sent to the spectrum analyzer and
the frequency spectrum of the correlated signal is obtained after the FFT. As shown in Figure 3, this set of experiments measures the laser field correlation that is governed by the time evolution of
the second and third order off-diagonal elements ρ_n^(2)(t) and ρ_n^(3)(t), respectively. The quantity we now measure is determined by the correlation of the heterodyne signals from the detectors as in
Figure 3. The signal of interest at frequency 2ν_0 from the second order coherence function is
The correlated heterodyne signal is
Γ^(2)(2ν_0, t) = E_1^2 Σ_{n_1} ρ_{n_1}^(2)(0) (n_1+2)(n_1+1) e^(−4D_1 t) e^(−i2ν_1 t) × E_2^2 Σ_{n_2} ρ_{n_2}^(−2)(0) (n_2−1)n_2 e^(−4D_2 t) e^(i2ν_2 t).
Taking the Fourier transform, we get a Lorentzian spectral profile centered at 2ν_0 with a width of 4D′
Similarly, the signal of interest at frequency 3ν_0 from the third order coherence function is
The correlated heterodyne signal is
Γ^(3)(3ν_0, t) = E_1^3 Σ_{n_1} ρ_{n_1}^(3)(0) (n_1+3)(n_1+2)(n_1+1) e^(−9D_1 t) e^(−i3ν_1 t) × E_2^3 Σ_{n_2} ρ_{n_2}^(−3)(0) (n_2−2)(n_2−1)n_2 e^(−9D_2 t) e^(i3ν_2 t).
We therefore get a Lorentzian spectral profile centered at 3ν_0 with a width of 9D′
The main experimental results are shown in Figure 4. All measurements were taken with the laser operating at the same average output power level. The resolution bandwidth (RBW) of the SA is 10 kHz,
video bandwidth (VBW) is 30 kHz in all the measurements. For the sake of simplicity, the Full width at half maximum (FWHM) linewidth is taken at the -3 dB width of the measured spectrum by
considering only the Lorentzian fitting [12]. Figure 4A represents the data of the first set of experiments with an average of 50 measurements of beat note signal from D1. The theoretical fitting in
the red solid line is based on Equation (10), and the FWHM is 107.9 kHz. Figure 4B represents the data of the second set of experiments with 50 measurements of correlated beat note signals from D1
and D2. The theoretical fitting in the red solid line is based on Equation (13), and the FWHM is estimated to be 420.6 kHz. Figure 4C represents the data of the third order experiments with 50
measurements from all three detectors. The theoretical fitting in the red solid line is based on Equation (16), and the FWHM is estimated to be 963.3 kHz. First of all, we see that the obtained
linewidth from the second order correlation spectrum is essentially 4 times as wide as the single beat note linewidth, and the third order spectrum is 9 times as wide as the single beat note linewidth, validating our theoretical expectation. Secondly, we see that the theoretical curves fit the data well at the center peak, but not as well at the tails. This is mainly due to the influence of other noise sources that also contribute to the spectral profile. For the same reason, the single beat note signal can be fitted better than the second and third order correlation signals. There are some small peaks in the higher order measurements because the measured higher order spectral signal is close to the noise level of the detection system. Ideally, more averaging (≫50) should be able to smooth out these peaks. However, we note that there is a trade-off between time averaging and the accurate measurement of the center beat note frequency, due to the drifting of the center frequencies of the two lasers. Using a more intense local oscillator and a more sensitive detection system (detector and spectrum analyzer) should solve this issue. Nevertheless, our data confirm the Lorentzian spectral profile of the signal and the time evolution described by Equation (3) in the cases k = 1, k = 2, and k = 3.
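As a concrete check of this k^2 scaling (our own illustrative calculation, not part of the original analysis), the quoted FWHM values can be compared against the expected k^2·D′ widths in a few lines of Python:

```python
# Measured FWHM of the k-th order correlated beat note (values quoted above, in kHz).
fwhm_khz = {1: 107.9, 2: 420.6, 3: 963.3}
for k, width in fwhm_khz.items():
    ratio = width / fwhm_khz[1]
    print(f"k={k}: measured ratio {ratio:.2f}, expected k^2 = {k**2}")
```

Running this gives ratios of about 3.90 and 8.93 for the second and third order spectra, close to the predicted factors of 4 and 9.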
Schematic setup for measuring higher order spectral line distribution up to 3rd order. Laser 1 and 2 : He-Ne lasers; P, polarizer; F, filter; A, analyzer; BS, non-polarizing beamsplitter; Mixer,
frequency mixer; D1, D2, and D3, photodiode detectors.
Experimental results from the two sets of measurements. The bandwidths of the detectors are 50 MHz, the resolution bandwidth of the SA is 10 kHz. The black dots are experimental data and the red
curves are theory. (A) is the beat signal from D1, where the FWHM is 107.9 kHz with an average of 50 measurements. Theory is the Fourier transform of the laser fields time evolution (e^−D′t) associated with
frequency ν_0, as shown in Equation (10); (B) is the correlated signal from D1 and D2, where the FWHM is 420.6 kHz with an average of 50 measurements. Theory is the Fourier transform of the correlated
laser fields time evolution (e^−4D′t) associated with frequency 2ν_0, as shown in Equation (13). (C) is the correlated signal from D1, D2, and D3, where the FWHM is 963.3 kHz with an average of 50 measurements.
Theory is the Fourier transform of the correlated laser fields time evolution (e^−9D′t) associated with frequency 3ν[0], as shown in Equation (16). | {"url":"https://www.frontiersin.org/journals/physics/articles/10.3389/fphy.2021.657333/xml/nlm?isPublishedV2=false","timestamp":"2024-11-10T13:33:00Z","content_type":"application/xml","content_length":"80912","record_id":"<urn:uuid:ff458628-bead-4510-aeed-db09a0bb8669>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00399.warc.gz"} |
CAT 2018 | Slot 1 | Quantitative Aptitude | Number System - Number theory | 2IIM CAT Coaching
This is a delighful question that is based on Number Systems. Anyone who has gone through 2IIM's CAT Blueprint , would know that Number Theory is the building block of the CAT Syllabus because it is
entirely based on simple Mathematics and trains you to develop a sense of numbers. Give this question a try and have a look at the video solution to cross-check your answer.
Question 14 : How many numbers with two or more digits can be formed with the digits 1, 2, 3, 4, 5, 6, 7, 8, and 9 so that in every such number, each digit is used at most once and the digits appear
in the ascending order? [TITA]
Video Explanation
Explanatory Answer
Method of solving this CAT Question from Number Theory
Let us consider the case of 2-digit numbers
2 digits can be chosen from the given set in ⁹C₂ ways, and there is only one way of arranging them in ascending order.
Similarly, numbers with 2 or more digits can be chosen and arranged in ascending order in ⁹C₂ + ⁹C₃ + ... + ⁹C₈ + ⁹C₉ ways.
We know that ⁿC₀ + ⁿC₁ + ... + ⁿCₙ₋₁ + ⁿCₙ = 2ⁿ.
So, the total number of ways = 2⁹ − ⁹C₀ − ⁹C₁ = 512 − 10 = 502.
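As a quick sanity check (not part of the original solution), the count can also be verified by brute force in Python, since every subset of two or more digits corresponds to exactly one number with digits in ascending order:

```python
from itertools import combinations

digits = range(1, 10)
# Each r-element subset (r >= 2) gives exactly one valid ascending number.
count = sum(1 for r in range(2, 10) for _ in combinations(digits, r))
print(count)  # 502
```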
The question is "How many numbers with two or more digits can be formed with the digits 1, 2, 3, 4, 5, 6, 7, 8, and 9 so that in every such number, each digit is used at most once and the digits
appear in the ascending order?"
Hence, the answer is 502 units. | {"url":"https://online.2iim.com/CAT-question-paper/CAT-2018-Question-Paper-Slot-1-Quant/quant-question-14.shtml","timestamp":"2024-11-07T20:09:10Z","content_type":"text/html","content_length":"65900","record_id":"<urn:uuid:61e729cb-3035-437d-9d58-e5d8d0fd9ccd>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00141.warc.gz"} |
Vol. 36, No. 2, 2010
HOUSTON JOURNAL OF
Electronic Edition Vol. 36, No. 2, 2010
Editors: G. Auchmuty (Houston), D. Bao (San Francisco, SFSU), H. Brezis (Paris), B. Dacorogna (Lausanne), K. Davidson (Waterloo), M. Dugas (Baylor), M. Gehrke (Radboud), C. Hagopian (Sacramento), R.
M. Hardt (Rice), Y. Hattori (Matsue, Shimane), J. A. Johnson (Houston), W. B. Johnson (College Station), V. I. Paulsen (Houston), M. Rojas (College Station), Min Ru (Houston), S.W. Semmes (Rice)
Managing Editor: K. Kaiser (Houston)
Houston Journal of Mathematics
Reza Ameri, Department of Mathematics, Faculty of Basic Sciences, University of Mazandaran Babolsar, Iran (ameri@umz.ac.ir).
Some properties of the Zariski topology of multiplication modules, pp. 337-344.
ABSTRACT. Let R be a commutative ring with identity and M be a multiplication R-module. We investigate the properties of Zariski topology on Spec(M), the collection of all prime submodule of M. In
particular, we will prove that Spec(M) and Spec( R/ann(M)) are homeomorphic and obtain some results, which are already, known for Spec(R). Finally, we investigate the irreducible subsets of Spec(M).
Adan-Bante, Edith, Northern Illinois University,Watson Hall 320, DeKalb, IL 60115-2888, USA (EdithAdan@illinoisalumni.org).
On nilpotent groups and conjugacy classes, pp. 345-356.
ABSTRACT. Fix a prime p. Let G be a finite nilpotent group, C and D be conjugacy classes of G of size p. Then either the product CD={cd| c in C, d in D} is a conjugacy class or is the union of at
least (p+1)/2 distinct conjugacy classes of G. As an application of the previous result, given any nilpotent group G and any conjugacy class C of size p, we describe the square CC of C in terms of
conjugacy classes of G.
Rosales, J. C., Departamento de Álgebra, Universidad de Granada, E-18071 Granada, Spain (jrosales@ugr.es), and Vasco, P., Departamento de Matemática, Universidade de Trás-os-Montes e Alto Douro,
5001-801 Vila Real, Portugal (pvasco@utad.pt).
The Frobenius variety of the saturated numerical semigroups, pp. 357-365.
ABSTRACT. A Frobenius variety is a nonempty family of numerical semigroups closed under finite intersections and under the adjoin of the Frobenius number. In this work we see that the variety of the
saturated numerical semigroups, is the least Frobenius variety satisfying that for any integers m and r with m greater than or equal to 2 and r greater than m and not multiple of m, there exists an
element of the variety with multiplicity m and smallest generator greater than the multiplicity equal to r. As a consequence we obtain that every saturated numerical semigroup admits a Toms
decomposition. Finally, we give a characterization of the saturated numerical semigroups in terms of a certain type of Diophantine inequalities.
Winfield, Christopher, J., University of Wisconsin - Madison, 1150 University Av., Madison, WI 53706 (cjwinfield2005@yahoo.com).
Solvability and non-solvability of some partial differential operators with polynomial coefficients, pp. 367-392.
ABSTRACT. We examine the local and semi-global solvability of partial differential operators which in operator notation take the form L = P(∂_x, ∂_y + x^(m−1)∂_w) for certain homogeneous polynomials P
of degree two or greater and for integers m ≥ 3. Using partial Fourier transforms we find a condition that is equivalent to semi-global and, in turn, local solvability of these operators. This
condition is formulated in terms of asymptotic behavior of transition matrices for certain canonical bases arising from a Fourier representation of operators L.
Mohammed Benalili and Hichem Boughazi, Department of Mathematics, Faculty of Sciences BP119 University Abou-Bekr Belkaïd. Tlemcen Algeria (m_benalili@mail.univ-tlemcen.dz), (
On the second Paneitz-Branson invariant, pp. 393-420.
ABSTRACT. We define the second Paneitz-Branson operator on a compact Einsteinian manifold of dimension n≥5 and we give sufficient conditions that make it attained.
Chen, Bang-Yen, Department of Mathematics, Michigan State University, East Lansing, Michigan 48824-1027, U.S.A. (bychen@math.msu.edu), and Van der Veken, Joeri, Katholieke Universiteit Leuven,
Departement Wiskunde, Celestijnenlaan 200 B, B-3001 Leuven, Belgium (joeri.vanderveken@wis.kuleuven.be).
Classification of marginally trapped surfaces with parallel mean curvature vector in Lorentzian space forms, pp. 421-449.
ABSTRACT. A space-like surface in a four-dimensional Lorentzian manifold is called marginally trapped if its mean curvature vector is light-like at each point. In this article, we prove that if a
marginally trapped surface in a four- dimensional Minkowski space-time lying in a light cone, then it has parallel mean curvature vector. The main purpose of this article is to classify marginally
trapped surfaces with parallel mean curvature vector in four-dimensional Lorentzian space forms. Our main results state that there are six families of such surfaces in the Minkowski space-time, eight
families in the de Sitter space-time and eight families in the anti-de Sitter space-time. Conversely, marginally trapped surfaces with parallel mean curvature vector in four-dimensional Lorentzian
space forms are obtained from these families. Explicit examples of such surfaces are presented. In addition we give a simple relation between marginally trapped surfaces with parallel mean curvature
vector and biharmonic surfaces in four-dimensional Minkowski space-time.
Bing-Ye Wu, Minjiang University, Fuzhou 350108 China (bingyewu@yahoo.cn).
On hypersurfaces with two distinct principal curvatures in Euclidean space, pp. 451-467.
ABSTRACT. We investigate hypersurfaces in Euclidean space with two distinct principal curvatures and constant m-th mean curvature. By using Otsuki's idea, we obtain the local and global
classification results for immersed hypersurfaces in Euclidean space of constant m-th mean curvature and two distinct principal curvatures of multiplicities n-1,1 ( we assume that the m-th mean
curvature is nonzero when m is greater than 1). As the result, we prove that any local hypersurface in Euclidean space of constant mean curvature and two distinct principal curvatures is an open part
of a complete hypersurface of the same curvature properties. The corresponding result does not hold for m-th mean curvature when m is greater than 1.
Alexander Blokh and Lex Oversteegen, University of Alabama at Birmingham Birmingham, AL 35294-1170 (ablokh@math.uab.edu), (overstee@math.uab.edu)
Monotone images of Cremer Julia sets, pp. 469-476.
ABSTRACT. We show that if P is a quadratic polynomial with a fixed Cremer point and Julia set J, then for any monotone map φ: J → A from J onto a locally connected continuum A, A is a single point.
Mahmoud Filali, and Tero Vedenjuoksu, Department of Mathematical Sciences, University of Oulu, P.O.Box 3000, 90014 Oulu, Finland (mahmoud.filali@oulu.fi), (tero.vedenjuoksu@oulu.fi).
The Stone-Cech compactification of a topological group and the β-extension property, pp. 477-488.
ABSTRACT. Let G be a topological group which is not a P-group. Then the Stone-Cech compactification βG of G is a semigroup with an operation extending that of G such that G is contained in the
topological centre of βG if and only if G is pseudocompact. This generalizes a known result due to Baker and Butcher for locally compact groups.
We see that Lindelöf P-groups have this extension property. A non-discrete P-group without the extension property is also given.
Eiichi, Matsuhashi, Department of Mathematics, Faculty of Engineering, Shimane University ,Matsue, Shimane 690-8504, Japan (matsuhashi@riko.shimane-u.ac.jp).
Parametric Krasinkiewicz maps, cones and polyhedra, pp. 489-498.
ABSTRACT. Let X, Y and Z be metrizable spaces with Y being a C-space and let f : X → Y be a perfect map. If Z is a polyhedron or the cone over a compactum, then the set {g in C(X,Z) | g|f^-1(y) : f^-1(y)
→ Z is a Krasinkiewicz map for each y in Y} is a dense G_δ-subset of the mapping space C(X,Z) with the source limitation topology.
Yuan Jun, Department of Mathematics, Nanjing Xiaozhuang University, Nanjing, 211171, P.R.China,(yuanjun@graduate.shu.edu.cn, Leng Gangsong, Shanghai University, Shanghai, China, and Cheung Wing-Sum,
University of Hong Kong, China.
Convex bodies with minimal p-mean width, pp. 499-511.
ABSTRACT. In this paper, we generalize the minimal mean width to the Brunn-Minkowski-Firey theory. We characterize the minimal position of convex bodies in terms of isotropicity of a suitable measure
and obtain a stability result for L[p] projection bodies.
Spiros, A. Argyros, National Technical University of Athens, 15780 Athens, Greece (sargyros@math.ntua.gr), Irene Deliyanni, 18 Neapoleos st., 15341 Athens, Greece, (ideliyanni@yahoo.gr), Andreas G.
Tolias, Department of Mathematics, University of the Aegean, 83200 Karlovasi, Greece (atolias@math.aegean.gr).
Strictly singular non-compact diagonal operators on HI spaces, pp. 513-566.
ABSTRACT. A Banach space is Hereditarily Indecomposable (HI) provided that none of its closed subspaces is the direct sum of two infinite dimensional further subspaces. We present the construction of
an HI space X with a Schauder basis (e_n) on which there exist strictly singular non-compact diagonal operators. We also prove that the space of diagonal operators of the space X, with respect to
the basis (e_n), contains an isomorphic copy of l_∞(N).
Goehle, Geoff, Western Carolina University, Cullowhee, NC 28723 (grgoehle@email.wcu.edu).
The Mackey machine for crossed products by regular groupoids. I, pp. 567-590.
ABSTRACT. We first describe a Rieffel induction system for groupoid crossed products. We then use this induction system to show that, given a regular groupoid G and an action of G on an
upper-semicontinuous bundle A, every irreducible representation of the crossed product C*(A,G) is induced from a representation of the group crossed product C*(A(u),S(u)) where u is a unit, A(u) is a
fibre of A, and S(u) is a stabilizer subgroup of G.
Popovych, Stanislav, Kyiv Schevchenko University, Glushkova 2, Kyiv 03022, Ukraine (popovych@univ.kiev.ua).
On O*-representability and C*-representability of *-algebras, pp. 591-617.
ABSTRACT. Characterization of the *-subalgebras in the algebra of bounded operators acting on Hilbert space is presented. Sufficient conditions for the existence of a faithful representation in
pre-Hilbert space of a *-algebra in terms of its Groebner basis are given. These conditions are generalization of the unshrinkability of monomial *-algebras introduced by C. Lance and P. Tapper.
Applications to the *-doubles, the monomial *-algebras and several other classes of *-algebras are presented.
Rivera-Noriega, Jorge, Universidad Autónoma del Estado de Morelos, Cuernavaca, Mor CP62209, México (rnoriega@buzon.uaem.mx)
Two results over sets with big pieces of parabolic Lipschitz graphs, pp. 619-635.
ABSTRACT. For a set E in (n+1)-Euclidean space with uniform big pieces of parabolic Lipschitz graphs (defined in the bulk of the paper) we first observe that certain parabolic singular integrals are
bounded on E. If E is the boundary of certain type of non-cylindrical domain, and E is regular for the heat equation, we prove a weak reverse Hölder inequality for caloric measure over E.
Zhuoran Du, College of Mathematics and Econometrics, Hunan University, 410082, Changsha,China. (Zhuorandu@yahoo.com.cn).
Limit behaviour of solutions to linear hyperbolic equations with an equivalued boundary on a small "hole" in R^n, pp. 637-652.
ABSTRACT. In R^2 (or R^3) the limit behaviour of solutions to linear hyperbolic equations with an equivalued boundary on a small "hole" has been discussed by A.Damlamian and Li Ta Tsien. In this
paper , we discuss the corresponding limit behaviour in R^n(n≥4) by using the transposition method.
Yan, Qiming, Department of Mathematics, Tongji University, Shanghai 200092, P.R. China (yan_qiming@hotmail.com).
Uniqueness theorem of meromorphic mappings with few moving hyperplanes, pp. 653-664.
ABSTRACT. We prove a truncated uniqueness theorem of meromorphic mappings with few moving hyperplanes.
Jie Zhang and Liang-Wen Liao, Department of Mathematics Nanjing University Nanjing 210093 P. R. China (zhangjie@smail.nju.edu.cn), (maliao@nju.edu.cn).
On Brück's conjecture on entire functions sharing one value with their derivatives, pp. 665-674.
ABSTRACT. In this paper, we study the question that an entire function and its derivative share a small function by utilizing Nevanlinna theory and Wiman-Valiron theory. We obtain some uniqueness
theorems, which improve Chang-Zhu's result and give a partial answer to Brueck's conjecture in this direction. | {"url":"https://www.math.uh.edu/~hjm/Vol36-2.html","timestamp":"2024-11-08T23:58:14Z","content_type":"text/html","content_length":"17486","record_id":"<urn:uuid:04da5d2d-7ee5-467d-ac8e-01bf7122f064>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00163.warc.gz"} |
How do you determine whether the function y=ln x is concave up or concave down and its intervals? | Socratic
How do you determine whether the function #y=ln x# is concave up or concave down and its intervals?
1 Answer
The graph of $y = \ln x$ is concave down on $\left(0 , \infty\right)$.
The domain of $\ln x$ is $\left(0 , \infty\right)$.
The second derivative is : $- \frac{1}{x} ^ 2$ which is always negative.
So the graph of $y = \ln x$ is concave down on $\left(0 , \infty\right)$.
Impact of this question
21588 views around the world | {"url":"https://socratic.org/questions/how-do-you-determine-whether-the-function-y-ln-x-is-concave-up-or-concave-down-a#165512","timestamp":"2024-11-02T05:04:34Z","content_type":"text/html","content_length":"33233","record_id":"<urn:uuid:dafde0c2-0036-4f1b-b22a-b8475a5d6f86>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00025.warc.gz"} |
The interest rate is basically the cost of borrowing the money for your home. It is usually a percentage of the principal you are borrowing and is paid monthly. When you have PMI, lenders may charge
you a higher interest rate to compensate for the added risk. This means you'll pay more in interest over the life of your loan. US First Time Buyer Effective Interest Rate Plus PMI is at %, compared to %
last quarter and % last year. This is higher than the long term average. Pay a higher interest rate Some lenders offer loans that allow you to avoid paying PMI in exchange for a higher interest rate.
You'll need to go through a. PMI amount is determined by many different factors, similar to your interest rate—including FICO score, loan-to-value ratio, debt-to-income ratio, property.
PMI does not apply to FHA or USDA home loans In essence, you'll pay a higher interest rate instead of a higher monthly mortgage payment. K two family. 3% interest rate. PMI per month. Edited for
timeline. While the amount you pay for PMI can vary, you can expect to pay approximately between $30 and $70 per month for every $, borrowed. So, how much does PMI cost: it depends on a few different factors, but you can generally
expect to pay a monthly premium of $30 to $70 for every $, that. PMI Calculator with Amortization · Property value · Loan amount · Interest Rate. Rates for PMI can range from % to 6% of the original
loan amount each year. However, your credit score can greatly impact the PMI rate charged by insurance. You may be able to opt for lender-paid mortgage insurance, where the interest rate is adjusted
to include the PMI. This raises your interest payments for the. rate mortgage are often less than the rate for an adjustable loan. Low-money-down loans without PMI exist and although they may have
slightly higher interest. Monthly cost of Private Mortgage Insurance (PMI). For loans secured with less than 20% down, PMI is estimated at % of your loan balance each year. PMI will cost less if you
have a higher credit score. Generally, you'll see the lowest PMI rates for a credit score of or above. Ways to remove PMI. PMI can.
That cost is on top of your mortgage interest. In most cases, PMI is added to your mortgage payments. You may also be able to pay it upfront at closing. On average, PMI costs range between % to % of
your mortgage. How much you pay depends on two main factors: Your total loan amount: As a general rule. The LTV compares the loan amount to the appraised value of the property. Higher LTV ratios
generally result in higher PMI rates. The specific calculation method. Monthly cost of Private Mortgage Insurance (PMI). For loans secured with less than 20% down, PMI is estimated at % of your loan
balance each year. Mortgage Insurance ; Purchase price · Must be between $1 and $1,,, ; Term · Must be between 1 and 40 years ; Interest rate · Must be between % and. Credit scores affect things like
interest rates and eligibility for certain types of loans. Mortgage insurance companies, like lenders, look at credit scores. Private mortgage insurance (PMI) is an insurance policy required by lenders to secure a loan that's considered high risk.
You're required to pay PMI if you don'. The issue is, the PMI is extremely expensive and the interest rate could be higher. Your monthly PMI payment decreases as your loan amount reduces, although.
Borrowers should take advantage of these historically low mortgage interest rates because housing finance experts forecast that this interest rate decline is. PMI typically costs between percent and
one percent of the full loan on an annual basis. Therefore, if your loan is $,, you could be paying as much as. For conventional loans, PMI is commonly paid as part of your monthly home loan payment.
As a form of insurance, the PMI cost is referred to as a “premium,” and. Your PMI will be calculated into your loan estimate, so the cost shouldn't be a huge surprise. PMI rates usually range from to
1% of the total loan amount. PMI is calculated as a percentage of your mortgage loan amount — in it typically ranged from % to % annually. The cost of PMI depends on several.
Your PMI premium payment will last the first 5 years of your mortgage and will cost you $ every month, on top of your normal mortgage payment. That equates.
Gap Protection Worth It | Home Insurance Rates Increasing | {"url":"https://orduescortbayan.site/tools/pmi-interest-rate.php","timestamp":"2024-11-05T07:43:14Z","content_type":"text/html","content_length":"12741","record_id":"<urn:uuid:e5591ab6-6f91-49ff-a6d3-48ccccf5f571>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00667.warc.gz"} |
Electric Load Center Distribution Manager
Electric Load Center Distribution Manager[LINK]
The electric load center distribution manager (object name: ElectricLoadCenter:Distribution) is used to organize power conversion devices and provides supervisory control over generators and storage.
Electric load centers can be thought of as subpanels connected to the facility’s main electric panel. Except for two applications for transformers that are located between the main panel and the
utility grid connection, all other devices are associated with a specific load center. The load center connects the “load” to the generators and “supply” power to the main panel serving the rest of
the building. A load center can have any number of generators but only one storage, (DC to AC) inverter, (AC to DC) converter or (isolation) transformer. There are two separate operation schemes, one
for generators and one for storage. The generator operation scheme controls the on-site generators connected to a particular load center. The storage operation scheme controls charging and
discharging of electrical storage. The generator operation is managed before the storage operation. The internal meters used by EnergyPlus for reporting do all of the tracking. For more details on
the individual inputs required see the Input Output Reference document.
The electric load center manager sums all of the building and system electric loads and provides operating schemes for the generators. The electric load center objects are operated in the order they
are defined in the input data file (IDF), and generators are dispatched sequentially in the order they are specified within the list referenced by each load center object. The electricity produced
from photovoltaic arrays and wind turbines is handled somewhat separately and is always run rather than being dispatched by supervisory control. What is not provided by the on-site generation
equipment, and electric storage units, is met by (purchasing) off-site electricity. It is possible to prescribe a set of ElectricLoadCenter:Distribution objects with inconsistent or conflicting
operating schemes, so users need to be careful.
Generator Operation Schemes[LINK]
The available generator operating schemes are “Baseload”, “DemandLimit”, “TrackElectrical,” “TrackSchedule,” “TrackMeter,” “FollowThermal” and “FollowThermalLimitElectrical.” These operating schemes
affect how loads are dispatched to the generators, in effect telling the generators whether or not to run and requesting power levels.
The Baseload scheme operates the generators at their rated (requested) electric power output when the generator is scheduled ON (ref. ElectricLoadCenter:Generators in the Input Output Reference). The
Baseload scheme requests all generators scheduled ON (available) to operate, even if the amount of electric power generated exceeds the total facility electric power demand.
The DemandLimit scheme limits the amount of purchased electricity from the utility to the amount specified in the input. The Demand Limit scheme tries to have the generators meet all of the demand
above the purchased electric limit defined by the user.
The TrackElectrical scheme tries to have the generators meet all of the electrical demand for the building.
The TrackMeter scheme tries to have the generators meet all the electrical demand from a meter chosen by the user rather than the usual meter for the entire facility. The meter can be a custom meter
so that generators are tied to only certain loads in the building.
The TrackSchedule scheme tries to have the generators meet all of the electrical demand determined by a user-defined schedule.
The FollowThermal and FollowThermalLimitElectrical schemes run the generators to meet thermal demand. The thermal demand is determined from the plant modeling and depends on the flow requested by
other components on the demand side of the plant loop, the loop temperatures, and the loop temperature setpoint. The electric load center distribution manager converts the thermal load to an
electrical load using a nominal ratio of the thermal to electrical power production for each generator. For these schemes, the generator needs to be connected to the supply side of a plant loop and
serve components that use hot water on the demand side of the plant loop. The thermal load request is obtained from the plant data structure (structure location in code is
PlantLoop%LoopSide%Branch%Comp%MyLoad). The distribution manager converts the thermal load, qthermal, to an electrical load using:
ThermElectRatio is a nominal, constant, user-defined value for the ratio of thermal production to electrical production for a cogenerator. This ratio is used for supervisory control and dispatch of
the electric power request to the generator; however, the cogenerator model may determine that actual performance varies from this nominal value at different times in the simulation when operating
conditions differ from those used for the nominal ratio.
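Since the conversion equation itself is not reproduced here, the following minimal Python sketch illustrates the relationship described above; the function and variable names are illustrative, not actual EnergyPlus identifiers:

```python
def electric_request_from_thermal(q_thermal, therm_elect_ratio):
    """Convert a plant thermal load request [W] into the equivalent electric power
    request [W] using the nominal thermal-to-electrical production ratio."""
    return q_thermal / therm_elect_ratio
```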
For all generator operating schemes except Baseload, a total electric load reduction target (or thermal load converted to electrical equivalent) is established for the load center based on the
specific operating scheme. The load center then requests that its generators operate, one-by-one in the order specified in the generator list, until the target is met or exceeded. Generators that are
not scheduled as ‘available’ for the simulation time step are not called to operate. The requested power demand to be met by each generator is the smaller of the nominal ‘rated’ electric power output
(as specified in the ElectricLoadCenter:Generators object) or the remaining total electric load reduction target for the load center. After each electric generator is requested to operate, the actual
electric power delivered by the generator, which may be greater than or less than the requested amount due to inputs specified in the generator performance model (e.g., Generator:CombustionTurbine,
Generator:MicroTurbine, etc.), is used to update the remaining total electric power target for the other generators associated with this load center.
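A hedged sketch of this sequential dispatch logic is shown below; it is a simplified illustration, not the actual EnergyPlus source, and the generator interface is invented for the example:

```python
def dispatch_generators(target_watts, generators):
    """Request power from generators in list order until the load center's
    electric load reduction target is met or exceeded."""
    remaining = target_watts
    for gen in generators:                    # order as listed in ElectricLoadCenter:Generators
        if remaining <= 0.0 or not gen.available:
            continue
        request = min(gen.rated_power, remaining)
        delivered = gen.simulate(request)     # a generator may deliver more or less than requested
        remaining -= delivered
    return target_watts - remaining           # total on-site power dispatched
```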
Most of the operating schemes will sequentially load the available electric load centers and generators. EnergyPlus can accept multiple “ElectricLoadCenter:Distribution” objects with different
operating schemes. Because of this, there are two levels of reporting, one for the whole building and a second for each load center. The whole-building results are managed with the internal meters
for the entire model. The individual load-center results are summed for those generators connected to a particular load center. The total electricity purchased is reported both in power and energy
units. This value is positive when the amount of energy is purchased from the utility. This value can be negative when the total electricity produced is greater than the facility electrical needs.
The excess will either be available for storage or to sell back to the electric utility company.
The order of input objects (ElectricLoadCenter:Distribution) in the input file is significant and used to structure how generators are dispatched with the first load centers and generators managed
before the later ones. Therefore, load centers listed earlier in the file effectively have a higher priority.
Load Center Buss Types[LINK]
Electric load centers can have one of five different configurations. Load centers can get fairly complicated and include power conditioning and storage. Separate inverter models are used to condition
DC power from photovoltaics into AC power for the building and utility. The other generators may have inverters inside the devices but these are already embedded in the generator models. The load
center can also supervise electrical storage controls and depend on the buss configuration. The transformer is optional for all buss types but if included in the load center it can be used to model
an isolation or voltage matching transformer acting on the load center.
The most basic configuration is selected with the keyword “AlternatingCurrent” for the Electrical Buss Type, shown in Figure 1.
The AlternatingCurrent load centers have AC generators with no storage and behave in the following way. All electric demand not met by the sum of the electrical power produced by the available
generators will be met by purchased electricity. If a generator is needed in the simulation for a small load and the load is less than the generator’s minimum part load ratio, the generator will
operate at the minimum part load ratio and the excess will either reduce demand or the excess energy will be exported back to the electric utility company.
A configuration with AC generators with on-site electrical storage is selected with the keyword “AlternatingCurrentWithStorage” and is shown in Figure 2.
The AlternatingCurrentWithStorage load centers attempt to augment the generator electricity production so that the power requests are met. Storage control logic is discussed below under Storage
Operation Scheme.
The basic configuration for photovoltaic generators is selected using the “DirectCurrentWithInverter” keyword and is shown in Figure 3.
The DirectCurrentWithInverter load centers collect DC power from various generators, usually PV arrays, run the DC power through an inverter and produce AC power. The PV arrays produce DC power based
on the availability of sunshine and do not respond to load requests made by the electric load center.
If the PV-based load center is equipped with DC electrical storage that is connected before the inverter, then the buss type should be “DirectCurrentWithInverterDCStorage” and is shown in Figure 4.
The DirectCurrentWithInverterDCStorage load centers charge or draw DC power to meet the requested electrical load depending on the storage operation scheme.
If the PV-based load center is equipped with AC electrical storage that is connected after the inverter, then the buss type should be “DirectCurrentWithInverterACStorage” and is shown in Figure 5.
The DirectCurrentWithInverterACStorage load centers charge or draw AC power to meet the requested electrical load. They can also draw power from the main panel into the load center to charge storage
from grid-supplied electric power.
Electric Load Center Generators[LINK]
The electric load center generators (object name: ElectricLoadCenter:Generators) provide a set of scheduled electric generators for supervisory control over electric power generation. Here is where
the user lists what generators and PVs are available at any given time. For more details on the individual inputs required see the EnergyPlus Input Output Reference.
EnergyPlus includes three models for converting Direct Current (DC) electrical power into Alternating Current (AC) electrical power. The DC power into the inverter, PDC−in, is converted to AC power
out, PAC−out, of the inverter using:
The inverter efficiency is determined using one of three models. For the “Simple” inverter model, efficiency is constant and input by the user. For the “Look Up Table” model, the efficiency is
calculated using linear interpolation. For the “Function of Power” model, the efficiency is calculated using a single-variable curve object. For both the Look Up Table and Function of Power models,
the power production is normalized by PDC−in.
The conversion power losses are calculated from the difference between PDC,in and PAC,out and are metered as negative values on PowerConversion:ElectricityProduced. The thermal losses include the
conversion power losses plus any ancillary power consumed during standby. The ancillary electric power consumption occurs when the inverter is scheduled to be available but it is not conditioning any
power flows at the time.
EnergyPlus includes a model for converting Alternating Current (AC) electric power into Direct Current (DC) electric power. This device is used to charge DC electric storage devices with AC drawn
from the main panel. Although the physical device may be a two-way inverter, the modeling is separated so that the converter appears on its own in the modeling. The AC power into the converter, PAC,i
n, is converted to DC power out, PDC,out, of the converter using:
The converter efficiency is determined using one of two methods. For the “SimpleFixed” method the efficiency is constant and input by the user. For the “FunctionOfPower” method the user defined
performance curve or lookup table is evaluated using the normalized power into the converter.
The conversion power losses are calculated from the difference between PAC,in and PDC,out and are metered as negative values on PowerConversion:ElectricityProduced. The thermal losses include the
conversion power losses plus any ancillary power consumed during standby. The ancillary electric power consumption occurs when the converter is scheduled to be available but it is not conditioning
any power flows at the time.
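A rough sketch covering both the inverter and converter relations above is given below; it assumes the output power is simply the input power multiplied by the efficiency, with the difference booked as conversion loss, and uses illustrative names only:

```python
def power_conversion(p_in, efficiency, conditioning=True, standby_power=0.0):
    """Generic one-way power conversion (DC->AC inverter or AC->DC converter)."""
    if not conditioning:                 # available but idle: only ancillary standby draw
        return 0.0, standby_power
    p_out = efficiency * p_in            # e.g. P_AC-out = efficiency * P_DC-in
    thermal_loss = p_in - p_out          # conversion loss rejected as heat
    return p_out, thermal_loss
```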
Storage Operation Schemes[LINK]
The available storage operation schemes are “TrackFacilityElectricDemandStoreExcessOnSite”, “TrackMeterDemandStoreExcessOnSite”, “TrackChargeDischargeSchedules”, and “FacilityDemandLeveling”.
Figure 6 shows the control volume and electrical power flows that are used to model all of the storage operation schemes. Conservation of energy is used to formulate the calculations based on this
control volume and the five terms shown in the diagram. This control volume is inside a given load center.
Pgen: This is the sum of electric power produced by the generators on the load center entering the control volume. The generator operation is run before the storage operation so that this value is
current at the time the storage operation control is evaluated.
Pfeed: This is the flow of electric power out of the control volume feeding toward the main panel. This is power being supplied by the generators and storage and serving the building loads or perhaps
being exported. This power level might be reduced by power conversion losses before it reaches the main panel. The method used to determine Pfeed depends on the operation scheme.
Pdraw: This is the flow of electric power into the control volume drawing from the main panel. This is power being drawn in order to charge storage as desired by the storage operation scheme. This
power level might be increased by power conversion losses once it is drawn from the main panel.
Pcharge: This is the flow of electric power into the storage device from the control volume. This is power being supplied by the generators and/or drawn from the main panel and then directed into
storage to charge the device.
Pdischarge: This is the flow of electric power out of the storage device into the control volume. This power is being pulled from storage and added to the power from generators to feed toward the
main panel.
The TrackFacilityElectricDemandStoreExcessOnSite method tries to run the storage to follow the facility electric demand while storing any excess on-site power production that is above what is needed
to run the facility. This is mainly appropriate for island operation. This is the intended legacy behavior before version 8.5. This scheme does not draw from the main panel to charge, so we have Pdra
w=0.0. The value for Pfeed is determined from the whole-facility total electric power demand. (When there is more than one load center in the model, it is actually the portion of the total that
remains after previous load centers have been simulated.) This requested feed in rate is adjusted to be increased by any power conversion losses that may occur in an inverter or a transformer. This
adjusted feed in request is used for Pfeed.
If Pgen>Pfeed, then we have charging:
If Pgen<Pfeed, then we have discharging:
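The charge and discharge equations are not reproduced above; a minimal sketch of the decision, assuming the storage simply absorbs the surplus or covers the shortfall relative to the adjusted feed request, is:

```python
def track_demand_store_excess(p_gen, p_feed_request):
    """TrackFacilityElectricDemandStoreExcessOnSite-style logic: P_draw = 0,
    charge with surplus generation, discharge to cover any shortfall."""
    if p_gen > p_feed_request:
        return p_gen - p_feed_request, 0.0   # (P_charge, P_discharge)
    return 0.0, p_feed_request - p_gen
```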
The TrackMeterDemandStoreExcessOnSite method is very similar to the TrackFacilityElectricDemandStoreExcessOnSite method except that instead of using the whole-facility total electric demand, the
value for Pfeed is determined from a user-specified meter. The same charge and discharge logic is used.
The TrackChargeDischargeSchedules method tries to run the storage to follow user-defined schedules and design power levels for charging and discharging. The user inputs a design charge rate Pcharge,d
esign, a charge modification schedule fsched,charge, a design discharge rate Pdischarge,design, and a discharge modification schedule fsched,discharge. The scheduled power flows will be used to
determine charging and discharging as long as other limits on rates or state of charge are not triggered. The schedules should be arranged to only charge or only discharge at a given time.
If charging, we have:
If Pgen>Pcharge, then we have excess to feed toward main panel:
If Pgen<Pcharge, then we have a shortfall and we draw from main panel:
If discharging, we have:
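A sketch of the scheduled operation, assuming the charge and discharge rates are simply the design rates scaled by their modification schedules (illustrative names), is:

```python
def track_charge_discharge_schedules(p_gen, p_charge_design, f_charge,
                                     p_discharge_design, f_discharge):
    """Scheduled charging/discharging; the two schedules should not overlap."""
    p_charge = p_charge_design * f_charge
    p_discharge = p_discharge_design * f_discharge
    if p_charge > 0.0:                          # charging period
        p_feed = max(p_gen - p_charge, 0.0)     # excess generation fed toward main panel
        p_draw = max(p_charge - p_gen, 0.0)     # shortfall drawn from main panel
    else:                                       # discharging period
        p_feed, p_draw = p_gen + p_discharge, 0.0
    return p_charge, p_discharge, p_feed, p_draw
```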
The FacilityDemandLeveling method tries to run the storage to follow a user-defined schedule for the net purchased power, Pnet,purch. This is similar to demand limit operation but instead of just
attempting to cap the facility demand, it will also manipulate storage to increase utility grid supply electric to meet the target for net purchased power. The user inputs a design demand target for
net purchase power rate, Pnet,purch,design, and a demand level modification schedule fsched,dmd,target.
This target is compared to an adjusted feed in request, Pfeed,request, from the (remaining, adjusted) whole-facility total electric power demand.
If Pfeed>0.0, then we have a situation where the facility needs more power than the target demand. We therefore feed that power toward the main panel, but it is not necessarily the full power request
that could be served. If Pgen<Pfeed, then we still have some power supply to make up by discharging:
Or if Pgen>Pfeed, then we have excess power we can use for charging.
However, if Pfeed<0.0, then the facility needs less power than the target demand. We therefore draw power from the main panel to increase power purchased to meet target demand level.
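Again, a minimal sketch of the demand-leveling decision, assuming the target is the design purchase rate scaled by its schedule, that the grid draw is used to charge storage, and ignoring conversion losses, is:

```python
def facility_demand_leveling(p_gen, p_feed_request, p_target_design, f_target):
    """Run storage so that net purchased power follows the scheduled target."""
    p_feed = p_feed_request - p_target_design * f_target
    if p_feed > 0.0:                              # facility needs more than the purchase target
        p_charge = max(p_gen - p_feed, 0.0)
        p_discharge = max(p_feed - p_gen, 0.0)
        p_draw = 0.0
    else:                                         # facility needs less: draw from grid to charge
        p_charge, p_discharge, p_draw = -p_feed, 0.0, -p_feed
    return p_charge, p_discharge, p_draw
```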
Electrical Storage - Simple Energy Balance Model[LINK]
EnergyPlus includes two models for storing electrical energy: a simple model that is not intended to represent any specific type of storage technology and a battery model that represents the kinetic
battery model originally developed by Manwell and McGowan, which is discussed in the next section.
The simple model treats the battery as a black box, counting energy added and removed, with losses due to charge/discharge inefficiencies. The model is a reasonable starting point for simulation of
Li-ion and other battery technologies without significant rate limitations. The simple model might be called constrained bucket with energy losses. The bucket holds a quantity of Joules of electrical
energy, referred to as the state of charge. There are losses and limits to storing and drawing power but otherwise the bucket just holds electricity. The user sets constraints on the rates of
charging, Pstor−charge−max, and drawing, Pstor−draw−max. The user defines efficiency values for charging, εcharge, and drawing, εdraw.
The user defines an initial state of charge and a maximum state of charge. The storage operation scheme makes supervisory control decisions and determines a value for the charging power, Pstor−charge
, or the discharging power, Pstor−draw and passes in control limits for the minimum and maximum state of charge fraction. The maximum state of charge is the physical capacity of the storage device,
however the storage control applies a separate layer of minimum and maximum state of charge used to model controller behavior designed to protect the battery from abuse.
The control requests are constrained by the device’s physical limits for how fast it can be charged or discharged, Pstor−charge−max and Pstor−draw−max are applied.
If charging, the new state of charge, Qt+Δtstor , is determined using:
If drawing, the new state of charge is:
where Δt is the length of the system time step in seconds.
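Since the update equations themselves are not shown here, the constrained-bucket balance they describe can be sketched as follows (a simplified illustration assuming the charging efficiency reduces what is stored and the drawing efficiency increases what must be pulled from storage):

```python
def update_state_of_charge(q_joules, dt_s, p_charge=0.0, p_draw=0.0,
                           eff_charge=1.0, eff_draw=1.0,
                           p_charge_max=float("inf"), p_draw_max=float("inf")):
    """Simple 'constrained bucket' storage: clamp rates, apply efficiencies."""
    p_charge = min(p_charge, p_charge_max)
    p_draw = min(p_draw, p_draw_max)
    if p_charge > 0.0:
        q_joules += eff_charge * p_charge * dt_s   # only part of the charging power is retained
    elif p_draw > 0.0:
        q_joules -= p_draw * dt_s / eff_draw       # more leaves storage than is delivered
    return q_joules
```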
The storage device has an availability schedule. If it is not available then no power can be drawn or stored. The state of charge minimum and maximum that are passed into the storage model are used
to decide if the device can be further charged or discharged. The separate control limits allow modeling battery preservation strategies where the full capacity of the storage device is not really used.
The gross electric power drawn and stored includes losses in the form of heat. These thermal losses are calculated from the user-specified efficiencies for charging and drawing and gross electric
power stored and drawn. The thermal (heat) losses are included in a zone heat balance if the user specifies a thermal zone. A user-defined radiative split is used to divide thermal losses into
radiation and convection portions. If no zone is specified, then the thermal losses are simply disregarded (e.g., rejected to outdoors and do not impact the zone air heat balance).
Electrical Storage – Kinetic Battery Model with Cycle Life Estimation[LINK]
The Kinetic Battery Model (KiBaM) (object: ElectricLoadCenter:Storage:Battery) was originally developed by Manwell and McGowan (1993) for use in time series performance models of hybrid energy
systems. The model is called kinetic because it is based on a chemical kinetics process to simulate the battery charging and discharging behavior. The model, with different improvements and
modifications, has been incorporated into the software Hybrid2 and HOMER as the electrical storage module of hybrid and distributed power systems. In 2005, KiBaM was implemented as a stand-alone
application in support of the European Union Benchmarking research project (Bindner et al. 2005). The model is intended to represent technologies such as Pb-acid that encounter significant rate or
kinetic limitations.
The Kinetic Battery Model assumes that the battery charge is distributed over two tanks: an available-charge tank and a bound-charge tank. The tank for available charges can supply electrons directly
to the load, whereas the tank for chemically bound charges can only supply electrons to the available-charge tank. At any time, the total charge q in the battery is the sum of the available charge (q
1) and bound charge (q2). That is, q = q1 + q2.
Based on the governing equations on the change of charge in both tanks (Manwell and McGowan 1993), the battery capacity can be related to a constant charge/discharge current (I ) as the following
qmax(I) is the maximum capacity (Ah) at charge or discharge current I
qmax is the maximum capacity (Ah) at infinitesimal current
t is the charge or discharge time (hr), defined by t = qmax(I)/I
k is a constant coefficient (hr−1)
c is the parameter indicating the ratio of available charge capacity to total capacity.
Assuming that a constant current is used in any time step for charging and discharging, the available charge (q1) and bound charge (q2) at any time step are given by:
q1,0 is the available charge at the beginning of time step (Ah)
q2,0 is the bound charge at the beginning of time step (Ah)
q0 is the total charge at the beginning of time step (Ah) or q0=q1,0+q2,0
Δt is the length of time step (hr).
KiBaM views the battery as a voltage source in series with an electric resistance (Figure 7). The internal resistance is assumed to be constant and the open circuit voltage varies with current and
state of charge.
The battery’s open circuit voltage is modeled in the same form for charging and discharging, but with different coefficients. The open circuit voltage in charging (Ec) and in discharging (Ed) can be
respectively expressed as:
E0,c is the open circuit voltage for a fully charged battery
E0,d is the open circuit voltage for a fully discharged battery
Ac, Cc, Dc are the constant parameters for charging
Ad, Cd, Dd are the constant parameters for discharging
Xc, Xd is the normalized maximum capacity at a given charging or discharging current, calculated as:
X = q0/qmax(I) for charging, and X = (qmax − q0)/qmax(I) for discharging.
It needs to be noted that the performance curve (Curve:RectangularHyperbola2) used in the model input covers the 2nd and the 3rd item of the open circuit voltage equation. Due to the reformatting of
performance curve, the voltage function regression coefficients can map to the curve coefficients as follows: C1=−C; C2=−D; C3=A.
With open circuit voltage, the battery terminal voltage (V) can be calculated as:
where R is the battery internal resistance in Ohms; the current is positive for discharging and negative for charging.
Given desired power in/out of the battery, the desired charge or discharge current can be calculated from the basic power equation: P=VI. In this calculation, iteration is needed to ensure the
electric current has converged and the battery operation satisfies all specified technical constraints such as maximum discharge current and charge rate limit.
KiBaM assumes that battery life is primarily a function of charge/discharge cycles. One cycle is defined as the process in which, starting from a certain state of charge (SOC), the battery is discharged to a lower SOC and then recharged back to the starting SOC. The magnitude of the cycle is regarded as more important than the average SOC during the cycle. This means that, in terms of the impact on battery life, a cycle from 90% to 70% SOC and back to 90% is equivalent to a cycle from 50% to 30% SOC and back to 50%. Battery life in terms of the
number of cycles is predicted as a function of the cycle range measured by the fractional depth of discharge. A double exponential equation is used to capture the damage to batteries due to cycling.
The equation takes the following form where the coefficients need to be derived from battery test data via curve fitting.
CF is the cycles to failure
C1 -C5 are the regression coefficients
R is the cycle range in terms of fractional SOC.
Following Hybrid2, the rainflow counting method (Downing and Socie 1982) is used to count battery cycles within a state of charge time series. Based on the number of cycles for each fractional SOC
range, the battery damage is estimated as:
D is the fractional battery damage. For example, a value of 0.5 at the end of simulation means that half of the battery life is used up after the length of the simulation period.
CF,i is the number of cycles to failure for the i-th cycle range
Ni is the total number of cycles over the simulation with the i-th cycle range
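The damage sum itself is a Miner's-rule accumulation over the counted cycles; the sketch below assumes a double-exponential cycles-to-failure curve of the form CF = C1 + C2·e^(C3·R) + C4·e^(C5·R), which is one common reading of the description above, with entirely made-up coefficients.

#include <cmath>
#include <iostream>
#include <utility>
#include <vector>

// Assumed functional form: CF = C1 + C2*exp(C3*R) + C4*exp(C5*R), R = fractional cycle range.
double cyclesToFailure(double R, const double C[5]) {
    return C[0] + C[1] * std::exp(C[2] * R) + C[3] * std::exp(C[4] * R);
}

int main() {
    const double C[5] = {1000.0, 15000.0, -8.0, 4000.0, -2.0};   // placeholder curve-fit coefficients
    // (fractional cycle range, number of such cycles) pairs from rainflow counting
    std::vector<std::pair<double, int>> cycles = {{0.2, 300}, {0.5, 40}, {0.8, 5}};
    double D = 0.0;
    for (const auto &cyc : cycles)
        D += cyc.second / cyclesToFailure(cyc.first, C);         // D = sum_i N_i / CF_i
    std::cout << "fractional battery damage D = " << D << "\n";
}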
It needs to be noted that the temperature effects on battery performance and battery self-discharge are not supported in the current model.
Bindner H., Cronin T., Lundsager P., Manwell J.F., Abdulwahid U., and Baring-Gould I. 2005. Lifetime Modeling of Lead Acid Batteries. Riso National Laboratory, Roskilde, Denmark.
Downing S. D. and Socie D. F. 1982. Simple rainflow counting algorithms, International Journal of Fatigue, 1982.
Manwell J. F. and McGowan J. G. 1993. A lead acid battery storage model for hybrid energy systems, Solar Energy 50(5): 399- 405.
Electric Load Center Transformers
Transformers (object name: ElectricLoadCenter:Transformer) are an integral part of the electric distribution system. They have two broad applications closely related to building energy simulation.
First, transformers are used to lower the voltage of electricity from utility primary circuits to customer secondary circuits, and in this case they are called distribution transformers. Second,
transformers are used to output the surplus power from onsite generators to the electricity grid.
Distribution transformers reduce the voltage on utility distribution lines (34.5 kV or less) to a lower secondary voltage (600 V or less) suitable for customer equipment. Distribution transformers
are usually categorized according to the medium used for cooling and insulation (liquid or air), the voltage class that they serve (low or medium), and the number of phases (single phase or three phase).
Liquid-immersed transformers rely on oil or other fire resistant liquid around the coils for cooling. In contrast, dry type transformers rely only on the natural convection of air for insulation and
cooling. Medium-voltage transformers step from utility line voltage down to a lower secondary voltage, depending on the application. The secondary voltage from a medium-voltage transformer is usually
at 277 V for single phase and 480 V for three phase. This secondary voltage can be directly used as 480 V three-phase power for large motors or as 277 V single-phase power for some fluorescent
lighting. However, for most industrial and commercial facilities, low-voltage transformers are needed to reduce the above voltages further to 208/120 V. Common 120 V loads are wall plugs and
incandescent lighting.
Most liquid-immersed transformers are owned by utilities and they are of the medium-voltage type. Virtually all dry type transformers are owned by commercial and industrial customers (Barnes et al.
1996). Of the dry type transformers, those of the medium-voltage type are generally special-order items while those of the low-voltage type are commodity items. The efficiency requirement of
distribution transformers is covered by the NEMA (National Electrical Manufactures Association) Standard TP 1. ASHRAE 90.1-2010 will cite the NEMA Standard TP 1 to stipulate the efficiency
requirement for the low-voltage dry type distribution transformers.
There are two main types of energy losses in transformers: no load loss and load loss. The no load loss comes primarily from the switching of the magnetic fields in the core material. Hence, it is
also called the core loss. The no load (core) loss is roughly constant and exists continuously in the core material as long as the transformer is energized. The load loss comes from the electrical
resistance in the windings when there is a load on the transformer. Hence, the load loss is also called the winding loss. The load (winding) loss is proportional to the load squared with a small
temperature correction.
Given the no load loss (NL) and the load loss (LL) at rated load and conditions, the total energy losses in a transformer at time t is calculated as:
TL(t) is the total energy loss at time t (W)
LL(t) is the load loss at time t (W)
P(t) is the per unit load at time t
fT(t) is the temperature correction factor for the load loss at time t.
The per unit load at time t is calculated as:
Load(t) is the transformer load at time t (W)
SB is the transformer nameplate rating (VA).
The temperature correction factor at time t is calculated as (NEMA 2002):
Ldc is the per unit load loss due to electrical resistance
Leddy is the per unit load loss due to eddy currents
R(t) is the winding electrical resistance at time t
Rref is the winding electrical resistance at the full load reference conditions.
The ratio of winding electrical resistance is calculated as:
F is the thermal coefficient of resistance for the winding material ( = 225 for aluminum and 234.5 for copper)
Twinding,ref is the winding temperature rise at the full load reference conditions (∘C)
Twinding(t) is the winding temperature rise at time t (∘C)
Tamb,ref is the ambient temperature at the reference condition ( = 20∘C)
Tamb(t) is the ambient temperature at time t (∘C)
The Ambient temperature Tamb(t) is equal to the zone temperature if a thermal zone is specified in the input; otherwise, it is assumed equal to 20∘C. The winding temperature rise at time t is
calculated as (Barnes et al. 1997):
Based on the derived total energy losses in a transformer, the transformer efficiency at time t can be calculated according to the following equation:
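Putting the pieces together, a rough sketch of the loss and efficiency chain is shown below. The individual equations appear as images in the source; the resistance ratio and temperature correction here follow the usual NEMA TP 1 style formulation, and all numerical values are placeholders.

// P(t)      = Load(t) / SB
// R(t)/Rref = (F + Twinding(t) + Tamb(t)) / (F + Twinding,ref + Tamb,ref)
// fT(t)     = Ldc * R(t)/Rref + Leddy * Rref/R(t)
// TL(t)     = NL + LL * P(t)^2 * fT(t)
// eta(t)    = Load(t) / (Load(t) + TL(t))
#include <iostream>

int main() {
    double SB = 75000.0;                      // nameplate rating, VA (placeholder)
    double NL = 170.0, LL = 1350.0;           // rated no load and load losses, W (placeholders)
    double Ldc = 0.9, Leddy = 0.1;            // per unit split of the load loss (placeholders)
    double F = 234.5;                         // copper winding
    double TwindRef = 65.0, TambRef = 20.0;   // reference winding rise and ambient, degC
    double load = 45000.0;                    // transformer load, W (placeholder)
    double Twind = 40.0, Tamb = 23.0;         // current winding rise and ambient, degC

    double P = load / SB;
    double Rratio = (F + Twind + Tamb) / (F + TwindRef + TambRef);
    double fT = Ldc * Rratio + Leddy / Rratio;
    double TL = NL + LL * P * P * fT;
    double eta = load / (load + TL);
    std::cout << "total loss " << TL << " W, efficiency " << eta << "\n";
}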
The above procedure describes how to calculate the total transformer energy losses based on the no load loss and load loss at rated conditions. The transformer model also supports the case when the
nominal transformer efficiency is given. In this case, the user needs to provide the nameplate efficiency and the corresponding per unit load, the maximum efficiency and the corresponding per unit
load, and the reference conductor temperature at which the nameplate efficiency is measured. Given this information, both the no load loss and the load loss at rated conditions can be derived as shown below.
The nameplate efficiency can be expressed as:
ηnp is the nameplate efficiency
SB is the nameplate rating (VA)
Pnp is the per unit load at which the nameplate efficiency is measured
fT,np is the applied temperature correction factor for the nameplate efficiency.
Maximum efficiency generally occurs when the load loss is equal to the no-load loss. Because the no-load loss does not vary with the load on the transformer, the following relationship can be established: NL = LL · Pmax,η^2 · fT,max−η, where:
Pmax,η is the per unit load at which the maximum efficiency is obtained
fT,max−η is the applied temperature correction factor for the maximum efficiency.
Transformers typically have close per unit loads for the nameplate efficiency and the maximum efficiency. Therefore, it is reasonable to assume that the applied temperature correction factors are
equal at those two efficiencies. This implies that:
Rearranging Equation [eq:ElecLoadLLnpOverLLmaxeta865] and combining it with Equation [eq:ElecLoadNL864] leads to:
Combining Equations [eq:ElecLoadetanp863] and [eq:ElecLoadLLnp866], we can obtain the no load loss as:
Substitute NL into Equation [eq:ElecLoadNL864], we can calculate the load loss at rated conditions as:
Since both no load and load losses at rated conditions are known, the total energy losses in a transformer at time t can then be calculated according to Equation [eq:TotalEnergyLossesInTransformers].
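A compact sketch of that back-calculation, assuming the relation NL = LL·Pmax,η^2·fT at maximum efficiency and equal temperature correction factors at the two rating points as argued above (input numbers are placeholders):

#include <iostream>

int main() {
    double SB = 75000.0;    // nameplate rating, VA (placeholder)
    double etaNp = 0.985;   // nameplate efficiency (placeholder)
    double Pnp = 0.35;      // per unit load of the nameplate efficiency (placeholder)
    double Pmax = 0.30;     // per unit load of the maximum efficiency (placeholder)
    double fT = 1.0;        // temperature correction factor at the rating points (placeholder)

    // Nameplate point:      etaNp = SB*Pnp / (SB*Pnp + NL + LL*Pnp^2*fT)
    // Max-efficiency point: NL = LL * Pmax^2 * fT
    double totalLossNp = SB * Pnp * (1.0 - etaNp) / etaNp;          // NL + LL*Pnp^2*fT
    double NL = totalLossNp / (1.0 + (Pnp * Pnp) / (Pmax * Pmax));  // no load loss
    double LL = NL / (Pmax * Pmax * fT);                            // load loss at rated conditions
    std::cout << "no load loss " << NL << " W, rated load loss " << LL << " W\n";
}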
Barnes, PR., JW. Van Dyke, BW. McConnell, and S. Das. 1996. Determination Analysis of Energy Conservation Standards for Distribution Transformer, ORNL-6847. Oak Ridge National Laboratory, Oak Ridge, TN.
Barnes, PR., S. Das, BW. McConnell, and JW. Van Dyke. 1997. Supplement to the “Determination Analysis” (ORNL-6847) and Analysis of the NEMA Efficiency Standard for Distribution Transformer,
ORNL-6925. Oak Ridge National Laboratory, Oak Ridge, TN.
NEMA. 2002. NEMA Standards Publication TP 1-2002: Guide for Determining Energy Efficiency for Distribution Transformers. National Electrical Manufactures Association, Rosslyn, VA. | {"url":"https://bigladdersoftware.com/epx/docs/9-4/engineering-reference/electric-load-center-distribution-manager.html","timestamp":"2024-11-07T17:04:08Z","content_type":"text/html","content_length":"542620","record_id":"<urn:uuid:b35733e1-974b-4aa8-af19-841d43b090d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00604.warc.gz"} |
Marko Riedel
B.Sc. Computer Science UBC 1994, M.Sc. Computer Science UofT 1996.
These are Posts that have been published by Marko Riedel
Greetings to all. I am writing today to share a personal story / exploration using Maple of an algorithm from the history of combinatorics. The problem here is to count the number of strings over a
certain alphabet which consist of some number of letters and avoid a set of patterns (these patterns are strings as opposed to regular expressions.) This counting operation is carried out using
rational generating functions that encode the number of admissible strings of length n in the coefficients of their series expansions. The modern approach to this problem uses the Goulden-Jackson
method which is discussed, including a landmark Maple implementation from a paper by D. Zeilberger and J. Noonan, at the following link at math.stackexchange.com (Goulden-Jackson has its own website,
all the remaining software described in the following discussion is available at the MSE link.) The motivation for this work was a question at the MSE link about the number of strings over a
two-letter alphabet that avoid the pattern ABBA.
As far as I know before Goulden-Jackson was invented there was the DFA-Method (Deterministic Finite Automaton also known as FSM, Finite State Machine.) My goal in this contribution was to study and
implement this algorithm in order to gain insight about its features and how it influenced its powerful successor. It goes as follows for the case of a single pattern string: compute a DFA whose
states represent the longest prefix of the pattern seen at the current position in the string as it is being scanned by the DFA, with the state for the complete pattern doubling as a final absorbing
state, since the pattern has been seen. Translate the transitions of the DFA into a system of equations in the generating functions representing strings ending with a given maximal prefix of the
pattern, very much like Markov chains. Finally solve the system of equations for the generating functions and thus obtain the sequence of values of strings of length n over the given alphabet that
avoid the given pattern.
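For readers who would like to see the DFA itself in a few lines, here is a hypothetical C++ illustration (not the Maple implementation discussed in the post): it builds the prefix-tracking transition table for a single pattern and then counts admissible strings by iterating the transitions as a dynamic program rather than solving for the generating functions.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

int main() {
    const std::string pattern  = "ABBA";   // the example pattern mentioned in the post
    const std::string alphabet = "AB";     // two-letter alphabet
    const int n = 20;                      // count admissible strings of this length
    const int m = static_cast<int>(pattern.size());
    const int k = static_cast<int>(alphabet.size());

    // delta[s][c]: longest prefix of `pattern` that is a suffix of pattern[0..s-1] + alphabet[c];
    // state m is the absorbing "pattern seen" state.
    std::vector<std::vector<int>> delta(m + 1, std::vector<int>(k, m));
    for (int s = 0; s < m; ++s) {
        for (int c = 0; c < k; ++c) {
            std::string t = pattern.substr(0, s) + alphabet[c];
            int tlen = static_cast<int>(t.size()), best = 0;
            for (int len = std::min(tlen, m); len > 0; --len)
                if (t.compare(tlen - len, len, pattern, 0, len) == 0) { best = len; break; }
            delta[s][c] = best;
        }
    }

    // cnt[s]: number of admissible strings of the current length whose scan ends in state s.
    std::vector<unsigned long long> cnt(m, 0), nxt(m, 0);
    cnt[0] = 1;
    for (int i = 0; i < n; ++i) {
        std::fill(nxt.begin(), nxt.end(), 0ULL);
        for (int s = 0; s < m; ++s)
            for (int c = 0; c < k; ++c)
                if (delta[s][c] < m) nxt[delta[s][c]] += cnt[s];   // drop strings that complete the pattern
        cnt.swap(nxt);
    }
    unsigned long long total = 0;
    for (unsigned long long v : cnt) total += v;
    std::cout << "strings of length " << n << " over " << alphabet
              << " avoiding " << pattern << ": " << total << "\n";
}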
I have also implemented the DFA method for sets of patterns as opposed to just one pattern. The algorithm is the same except that the DFA does not consist of a chain with backlinks as in the case of
a single pattern but a tree of prefixes with backlinks to nodes higher up in the tree. The nodes in the tree represent all prefixes that need to be tracked where obviously a common prefix between two
or more patterns is shared i.e. only represented once. The DFA transitions emanating from nodes that are leaves represent absorbing states indicating that one of the patterns has been seen. We run
this algorithm once it has been verified that the set of patterns does not contain pairs of patterns where one pattern is contained in another, which causes the longer pattern to be eliminated at the
start. (Obviously if the shorter pattern is forbidden the so is the longer.) The number of states of the DFA here is bounded above by the sum of the lengths of the patterns with subpatterns
eliminated. The uniqueness property of shared common prefixes holds for subtrees of the main tree i.e. recursively. (The DFA method also copes easily with patterns that have to occur in a certain
I believe the Maple code that I provide here showcases many useful tricks and techniques and can help the reader advance in their Maple studies, which is why I am alerting you to the web link at MSE.
I have deliberately aimed to keep it compatible with older versions of Maple as many of these are still in use in various places. The algorithm really showcases the power of Maple in combinatorics
computing and exploits many different aspects of the software from the solution of systems of equations in rational generating functions to the implementation of data structures from computer science
like trees. Did you know that Maple permits nested procedures as known to those who have met Lisp and Scheme during their studies? The program also illustrates the use of unit testing to detect newly
introduced flaws in the code as it evolves in the software life cycle.
Enjoy and may your Maple skills profit from the experience!
Best regards,
Marko Riedel
The software is also available here: dfam-mult.txt
Dear friends,
some time ago I shared a story here on the use of Maple to compute the cycle index of the induced action on the edges of an ordinary graph of the symmetric group permuting the vertices and the use of
the Polya Enumeration Theorem to count non-isomorphic graphs by the number of edges. It can be found at the following Mapleprimes link.
I am writing today to alert you to another simple Maple program that is closely related and demonstrates Maple's capability to implement concepts from group theory and Polya enumeration. This link at
Math.Stackexchange.com shows how to use the cycle index of the induced action by the symmetric group permuting vertices on the edges of a multigraph that includes loops to count set partitions of
multisets containing two instances of n distinct types of items. The sequence that corresponds to these set partitions is OEIS A020555 where it is pointed out that we can equivalently count
multigraphs with n labeled i.e. distinct edges where the vertices of the graph represent the multisets of the multiset partition and are connected by an edge k if the two instances of the value k are
included in the sets represented by the two vertices that constitute the edge. The problem then reduces to a simple substitution into the aforementioned cycle index of a polynomial representing the
set of labels on an edge including no labels on an edge that is not included.
This computation presents a remarkable simplicity while also implementing a non-trivial application of Polya counting. It is hoped that MaplePrimes users will enjoy reading this program, possibly
profit from some of the techniques employed and be motivated to use Maple in their work on combinatorics problems.
Best regards,
Marko Riedel
Greetings to all.
I am writing to alert MaplePrimes users to a Maple package that makes an remarkable contribution to combinatorics and really ought to be part of your discrete math / symbolic combinatorics class if
you teach one. The combstruct package was developed at INRIA in Paris, France, by the algorithmics research team of P. Flajolet during the mid 1990s. This software package features a parser for
grammars involving combinatorial operators such as sequence, set or multiset and it can derive functional equations from the grammar as well as exponential and ordinary generating functions for
labeled and unlabeled enumeration. Coefficients of these generating functions can be computed. All of it easy to use and very powerful. If you are doing research on some type of combinatorial
structure definitely check with combstruct first.
My purpose in this message is to advise you of the existence of this package and encourage you to use it in your teaching and research. With this in mind I present five applications of the combstruct
package. These are very basic efforts that admit improvement that can perhaps serve as an incentive to deploy combstruct nonetheless. Here they are:
I hope you enjoy reading these and perhaps you might want to feature combstruct as well, which presented the first complete implementation in a computer algebra system of the symbolic method,
sometimes called the folklore theorem of combinatorial enumeration, when it initially appeared.
Best regards,
Marko Riedel.
Greetings to all.
As some of you may remember I have posted several announcements concerning Power Group Enumeration and the Polya Enumeration Theorem this past year, e.g. at this MaplePrimes link: Power Group
I have continued to work in this field and for those of you who have followed the earlier threads I would like to present some links to my more recent work using the Burnside lemma. Of course all of
these are programmed in Maple and include the Maple code and it is with the demonstration of Maple's group theory capabilities in mind that I present them to you (math.stackexchange links).
The third and fourth to last link in particular include advanced Maple code.
The second entry is new as of October 30 2015.
With my best wishes for happy group theory computing with Maple,
Marko Riedel
Greetings to all.
I would like to share a brief observation concerning my experiences with the Euler-Maclaurin summation routine in Maple 17 (X86 64 LINUX). The following Math StackExchange Link shows how to compute a
certain Euler-MacLaurin type asymptotic expansion using highly unorthodox divergent series summation techniques. The result that was obtained matches the output from eulermac which is definitely good
to know. What follows is the output from said routine.
> eulermac(1/(1+k/n),k=0..n,18);
O(-1/n^(..)) - 929569/(2097152 n^15) + 3202291/(1048576 n^17) - 691/(32768 n^11)
   + (1/1048576) O(1)/n - 174611/(6600 n^19) + 5461/(65536 n^13) + 31/(4096 n^9)
   + int(1/(1+k/n), k) - 17/(4096 n^7) + 1/(256 n^5) - 1/(128 n^3) + 1/(16 n) + 3/4
While I realize that this is good enough for most purposes I have two minor issues.
• One could certainly evaluate the integral without leaving it to the user to force evaluation with the AllSolutions option. One can and should make use of what is known about n and k. In
particular one can check whether there are singularities on the integration path because we know the range of k/n.
• Why are there two order terms for the order of the remainder term? There should be at most one and a coefficient times an O(1) term makes little sense as the coefficient would be absorbed.
You might want to fix these so that the output looks a bit more professional which does enter into play when potential future users decide on what CAS to commit to. Other than that it is a very
useful routine even for certain harmonic sum computations where one can use Euler-Maclaurin to verify results.
Best regards,
Marko Riedel | {"url":"https://mapleprimes.com/users/Marko%20Riedel/posts","timestamp":"2024-11-14T23:47:09Z","content_type":"text/html","content_length":"117987","record_id":"<urn:uuid:c60bd84c-23ad-41a1-bc49-165bf79d9ea2>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00468.warc.gz"} |
Mathematics for Machine Learning
Looking for an intuitive explanation of the Chi-Square distribution? Check out the blog post I wrote on the Chi-Square distribution and degrees of freedom. For a step-by-step explanation on
Chi-Square testing, check out my post on the Chi-Square test for independence and goodness of fit. This post table is part of a blog post
In this post, we introduce the Chi-Square distribution discuss the concept of degrees of freedom learn how to construct Chi-Square confidence intervals If you want to know how to perform chi-square
testing for independence or goodness of fit, check out this post. For those interested, the last section discusses the relationship between the chi-square
Hypothesis testing in statistics allows you to make informed decisions using data.In a hypothesis testing scenario, you have a null hypothesis that represents the status quo. Through the collection
and analysis of data, you try to refute the null hypothesis in favor of an alternative hypothesis. If your tests are statistically significant, you can
In this post, we learn how to construct confidence intervals. Confidence Interval Interpretation and Definition Confidence intervals are a type of statistical estimate to measure the probability that
a certain parameter or value lies within a specific range. If we have data that is normally distributed, there is a 34.1% chance that a randomly
In this post we introduce the geometric distribution with an example and discuss how to calculate the probability of geometric random variables. The geometric distribution describes the probability
of the number of failures before a successful outcome in a Bernoulli trial. Suppose you are a recruiter and you need to find a suitable candidate
In this post we introduce the concept of conjugate priors and how they enable us to infer posterior parameters from prior distributions. The real world is messy and the probability of events is
usually influenced by environmental factors. In statistics, we have methodologies for dealing with uncertainty and environmental influences such as the concept
We build an intuitive understanding of the Beta distribution and its utility as a prior distribution for many other probability distributions. The beta distribution models a distribution of
probabilities. If we don’t know the probability of the outcome of an event, we can use the beta distribution to model the distribution of probabilities given
In this post we build an intuitive understanding of the Gamma distribution by going through some practical examples. Then we dive into the mathematical background and introduce the formulas. The
gamma distribution models the wait time until a certain number of continuously occurring, independent events have happened. If you are familiar with the Poisson
We introduced the exponential distribution with a formal definition and some examples. We also learn how the exponential distribution relates to a Poisson process. The exponential distribution models
the time interval between continuously occurring, independent events. In case you are familiar with the Poisson distribution, the exponential distribution models the wait time between events
We introduce the Poisson distribution and develop an intuitive understanding of its uses by discussing examples and comparing the Poisson distribution to the binomial distribution. With the Poisson
distribution, we can express the probability that a certain number of events happen in a fixed interval. Poisson Distribution Examples The Poisson distribution is useful in | {"url":"https://programmathically.com/category/mathematics-for-machine-learning/page/2/","timestamp":"2024-11-11T11:21:27Z","content_type":"text/html","content_length":"100647","record_id":"<urn:uuid:979591e8-fbff-4df0-ac75-b2705c21b416>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00128.warc.gz"} |
Line choice
Line choice --- Introduction ---
There are 4 ways to describe a general line (not passing through the origin, neither vertical nor horizontal) in the Cartesian plane: by an implicit equation, an equation giving $y$ as a function of $x$, one giving $x$ as a function of $y$, or a system of parametric equations.
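For instance (a concrete example, not one taken from the exercise itself), the line through $(3,0)$ and $(0,2)$ can be written implicitly as $2x+3y-6=0$, explicitly as $y=2-\frac{2}{3}x$ or as $x=3-\frac{3}{2}y$, and parametrically as $(x,y)=(3-3t,\,2t)$.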
• Description: exercise : recognize a line by equation or by graph. interactive exercises, online calculators and plotters, mathematical recreation and games
• Keywords: interactive mathematics, interactive math, server side interactivity, geometry, algebra, lines, parametric_curves | {"url":"https://wims.unicaen.fr/wims/wims.cgi?lang=en&+module=H4%2Fgeometry%2Flinechoice.en","timestamp":"2024-11-07T15:16:25Z","content_type":"text/html","content_length":"7503","record_id":"<urn:uuid:0108b65c-27b8-42b2-86c2-4785cc080517>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00741.warc.gz"} |
QT C++ code samples
QT C++ code samples
21 Jul 2019 08:41 #140057 by Mike_Eitel
Hi Grotius
I'm no more in coding but remember strange results by "compiler optimization".
All sorts of.
22 Jul 2019 18:07
22 Jul 2019 19:37 #140209 by Grotius
Hi Mike,
Thanks for your reply.
I'm no more in coding but remember strange results by "compiler optimization".
I solved it by making the formula outside of OpenGL. No complex issue. It can be a compiler problem. Shortly after my post
I got a QT update message. Maybe they spotted the same thing... If the problem still exists it's not my problem anymore.
So I was a bit fast to publish this mathematical behaviour.
3d view (rotate) is implemented, was not hard, but I trapped myself... again.. Have to solve another thing.. I know what I did wrong...
My worst fault this week was to make visualisations based on the sort array instead of the base array....
The base array has all the raw incoming data. This base array is the holy grail of the program. Without trusting the base array
the program is worth nothing...
So my vision is to always view the raw dxf data (base array) first for the user with OpenGL... This is very important.
In other words... Show the base (auto)cad input on the user screen without any code manipulation.
My stupid thing... Always the stupid tiny things...
I will change the code to fully 3d (it is already, but the array view has to change) and to base input code class again. ( tiny operation )
After that the classes (plasma or 3d milling) are free to use the code....
So you see Mike, every day we are learning about our code. You are a respected member over here !!
Last edit: 22 Jul 2019 19:37 by
25 Jul 2019 20:46
25 Jul 2019 22:04 #140565 by Grotius
Little project update :
Added 3d rotate zoom; now that this works, we can connect it to the right mouse button. Mouse rotate for x and z seems good enough.
Added 3d path direction arrows and start points.
Learned more about OpenGL this day. I started with line arrows, but a cone looks nicer.
1. Draw a cone with the opengl triangle-strip
2. Move the cone to the position in the middle of the line
3. Rotate the cone within the direction of the line
4. Fabricate the next cone in the array for loop.
A raw code example for making a cone; it places the cone where you want and sets the cone direction:
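The thread's own code is not shown here, so the following is only a hypothetical legacy-OpenGL sketch of the four steps above; the header name, the 2D path assumption and all sizes are assumptions.

#include <GL/gl.h>   // <OpenGL/gl.h> on macOS
#include <cmath>

// Draw a small direction cone at the midpoint of the segment (x0,y0)-(x1,y1),
// with its axis rotated to follow the segment direction (here in the XY plane).
void drawDirectionCone(float x0, float y0, float x1, float y1,
                       float radius, float length, int segments = 16) {
    const float pi = 3.14159265358979f;
    glPushMatrix();
    glTranslatef(0.5f * (x0 + x1), 0.5f * (y0 + y1), 0.0f);           // 2. move to the line midpoint
    float angleDeg = std::atan2(y1 - y0, x1 - x0) * 180.0f / pi;
    glRotatef(angleDeg, 0.0f, 0.0f, 1.0f);                            // 3. rotate into the line direction
    glBegin(GL_TRIANGLE_STRIP);                                       // 1. cone side as a triangle strip
    for (int i = 0; i <= segments; ++i) {                             //    (apex and rim vertices alternate;
        float a = 2.0f * pi * i / segments;                           //     the degenerate triangles are invisible)
        glVertex3f(length, 0.0f, 0.0f);                               // apex, pointing along +X
        glVertex3f(0.0f, radius * std::cos(a), radius * std::sin(a)); // base rim
    }
    glEnd();
    glPopMatrix();
}
// 4. call drawDirectionCone(...) once per segment while looping over the path array.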
The axis gui :
I am thinking of adapting cad functions into the cam program. To draw lines, arcs and circles. To make some fun with cad - cam !
And finally write an exiting file output for the users.
A clean dxf output combined with a ngc output. More languages and data pressed into one exiting file...
No one does this, because their programs are not written to read in multi levels. No one thought about it.
Last edit: 25 Jul 2019 22:04 by
26 Jul 2019 21:51
26 Jul 2019 23:23 #140692 by Grotius
Hi Sivaraj,
I did not see that. But I know where to find this problem. It's in the contour recognize class.
The problem has to do with swapping the direction of the little arc at your red arrow line.
Later on I will improve this class.
The blue direction lines are telling the truth.
Today i added :
- dimensions, see left corner at bottom. It's called the statusbar. This statusbar is useless at this moment.
This statusbar must be hacked up into several useful pieces.
It works with zoom, movements etc. This was a hard piece to understand.
If you click on the screen, it will give you the actual cad coordinates. And later on it will give you the actual line length.
Something that Draftsight is not doing... When you draw a line in Draftsight, you don't see the length at your mouse point...
- click the line near the start point (can be expanded to more items like endpoint or window select) and the line is selected,
see the color change to orange. ( now we have the basics of an interactive opengl screen )
- changed line thickness in opengl, this was easy.
- added the 3d rotate to the right mouse button, this was more difficult, but works very nicely.
DxfDecoder gui :
For the rest I spotted a G41.1 / G42.1 issue. At least, that is an issue for me at this moment.
Later on I will post more about this.
The inner circle is with G41.1 D0, tool diameter 0 no offset. This goes perfect !!
The outer circle is with G41.1 D1, tool diameter 1, offset left of path.
The problem is with the outer circle. At lead in and lead out, the product is not finished. Is there an overcut G-code command?
At the fire spot.. This material will stay on the cut product (circle). That is a problem. This non-cut material is between the start radius lead in and the end radius lead out, which is in this case 0.
Last edit: 26 Jul 2019 23:23 by
27 Jul 2019 20:06
27 Jul 2019 20:08 #140764 by Grotius
Hi dear linuxcnc users..
We have a little project update today.
Added buttons to the toolbar. They are dockable toolbars, split up into more little movable toolbars just like in cad.
Added the axis origin arrows for xyz. You can disable them with a toolbar select button.
To test this release :
To start, unzip and type in terminal : ./DxfDecoder
I think this release will now work on a standard debian installation. It's no longer depending on QT open libraries.
Not tested this on standard debian, ubuntu or whatever, but I expect this release is going to work for linux.
No longer needed dependencies :
Last edit: 27 Jul 2019 20:08 by
30 Jul 2019 13:39 #140950 by Grotius
Today I started with calculating the outside contour offset of a product, to compensate kerf width for plasma
and milling. Calculating the inside offset follows the same principle. This example is for lines at this moment.
Finding circle and arc intersections is the next level to do.
Attached is a picture of a product 100x100mm that has 2 lines offset 5mm from the product.
To find the new closed contour point, see the yellow circle, we have to do a calculation for finding this point. It's called the intersection point.
The C++ code sample :
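The attachment itself is not included in the thread, so here is only a hypothetical determinant-based sketch of the line-line case; the points in main() are placeholders, not the coordinates from the drawing.

#include <cmath>
#include <iostream>

struct Point { double x, y; };

// Intersection of the infinite line p1-p2 with the infinite line p3-p4.
// Returns false when the determinant is ~0, i.e. the lines are (nearly) parallel.
bool lineLineIntersect(Point p1, Point p2, Point p3, Point p4, Point &out) {
    double a1 = p2.y - p1.y, b1 = p1.x - p2.x, c1 = a1 * p1.x + b1 * p1.y;
    double a2 = p4.y - p3.y, b2 = p3.x - p4.x, c2 = a2 * p3.x + b2 * p3.y;
    double det = a1 * b2 - a2 * b1;
    std::cout << "determinant is : " << det << "\n";
    if (std::fabs(det) < 1e-12) return false;
    out.x = (b2 * c1 - b1 * c2) / det;
    out.y = (a1 * c2 - a2 * c1) / det;
    return true;
}

int main() {
    Point p;
    if (lineLineIntersect({0, 105}, {100, 105}, {52.097, 0}, {52.097, 200}, p))
        std::cout << "intersection point is x : " << p.x << " y : " << p.y << "\n";
}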
The tested console output :
determinant is : -2495
intersection point is x : 52.097 y : 105
So nice that this works !!
Attached the QT intersection project in zip format.
30 Jul 2019 19:14
30 Jul 2019 19:48 #140971 by Grotius
Thanks to chimeno & tommylight !
Me was lucky today... Made the circle and line intersection code ready quite quickly...
Attached a QT project for searching the intersecting point of a line and a circle.
This basic code can be a template for intersecting lines with arcs too.
The cad drawing with a line and a circle. The red dimension lines are calculated by the C++ code.
The C++ code :
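Again the attachment is not included, so this is only a hypothetical sketch of the line-circle case, substituting the parametric line into the circle equation and solving the quadratic; the numbers in main() are placeholders. For an arc, each solution would additionally be checked against the arc's start and end angles.

#include <cmath>
#include <iostream>

struct Point { double x, y; };

// Intersections of the infinite line through p1-p2 with a circle of centre c and radius r.
// Returns the number of intersection points written to i1/i2 (0, 1 or 2).
int lineCircleIntersect(Point p1, Point p2, Point c, double r, Point &i1, Point &i2) {
    double dx = p2.x - p1.x, dy = p2.y - p1.y;
    double fx = p1.x - c.x,  fy = p1.y - c.y;
    double a = dx * dx + dy * dy;
    double b = 2.0 * (fx * dx + fy * dy);
    double cc = fx * fx + fy * fy - r * r;
    double disc = b * b - 4.0 * a * cc;
    if (disc < 0.0) return 0;
    double sq = std::sqrt(disc);
    double t1 = (-b - sq) / (2.0 * a), t2 = (-b + sq) / (2.0 * a);
    i1 = {p1.x + t1 * dx, p1.y + t1 * dy};
    i2 = {p1.x + t2 * dx, p1.y + t2 * dy};
    return (disc == 0.0) ? 1 : 2;
}

int main() {
    Point a, b;
    if (lineCircleIntersect({0, 0}, {10, 6}, {6, 4}, 3, a, b) == 2) {
        std::cout << "Two solutions x1 : " << a.x << " y1 : " << a.y << "\n";
        std::cout << "Two solutions x2 : " << b.x << " y2 : " << b.y << "\n";
    }
}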
The console output of this example :
Two solutions x1 : 9.55418 y1 : 5.75232
Two solutions x2 : 3.14394 y2 : 2.19108
This Circle and line intersecting QT C++ project is attached for downloading.
Another offset example :
In this example we got a 5mm outside offset for the product.
The green lines are done by the line-line intersection method in my previous post today.
If we see a line followed by an arc in the memory array, we do a line-circle intersection like below.
Offset of the arc is 5mm.
Code :
Console output :
Two solutions x1 : 90.4489 y1 : 66.7792
Two solutions x2 : 66.6837 y2 : 90.3977
We need the closest solution: the closest found intersection point related to the line. We can make a little formula for that with if, else.
The yellow circle is the calculated intersection point of a line collision with an arc.
Last edit: 30 Jul 2019 19:48 by
30 Jul 2019 20:48
30 Jul 2019 21:32 #140976 by Grotius
The last item to investigate was to find the intersecting point of a circle with a circle.
Cad drawing of product with a contour offset of 5mm. Find where the 2 circles collide..
Attached QT C++ project to find the intersection circle to circle.
console output :
intersection x1 : 74.0697 y1 : 57.5247
intersection x2 : 92.4753 y2 : 75.9303
We have 2 solutions. We calculate which one we need with an if-else statement.
What do we do with the tiny line within the red circle? Delete it automatically?
The QT C++ code for intersecting circle to circle :
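The attached project is not reproduced in the thread; a hypothetical sketch of the circle-circle case using the standard radical-line construction (placeholder circles in main()) is:

#include <algorithm>
#include <cmath>
#include <iostream>

struct Point { double x, y; };

// Intersection points of circles (c0, r0) and (c1, r1); returns 0, 1 or 2.
int circleCircleIntersect(Point c0, double r0, Point c1, double r1, Point &i1, Point &i2) {
    double dx = c1.x - c0.x, dy = c1.y - c0.y;
    double d = std::hypot(dx, dy);
    if (d > r0 + r1 || d < std::fabs(r0 - r1) || d == 0.0) return 0;  // separate, nested or identical centres
    double a = (r0 * r0 - r1 * r1 + d * d) / (2.0 * d);   // distance from c0 to the radical line
    double h = std::sqrt(std::max(0.0, r0 * r0 - a * a));
    double mx = c0.x + a * dx / d, my = c0.y + a * dy / d;
    i1 = {mx + h * dy / d, my - h * dx / d};
    i2 = {mx - h * dy / d, my + h * dx / d};
    return (h == 0.0) ? 1 : 2;
}

int main() {
    Point a, b;
    if (circleCircleIntersect({70, 70}, 15, {90, 60}, 20, a, b) == 2) {
        std::cout << "intersection x1 : " << a.x << " y1 : " << a.y << "\n";
        std::cout << "intersection x2 : " << b.x << " y2 : " << b.y << "\n";
    }
}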
Okay, we have to go on with the next level:
add contour offsets to the product.....
The swap class needs some improvements, but I will try to add some contour offsets first.
We have classes for line-line, line-circle and arc-arc intersections. Without those we were lost.
But now we have more power to make offsets.
Glad to be back to the relaxed DxfDecoder screen.
Last edit: 30 Jul 2019 21:32 by
01 Aug 2019 14:14
01 Aug 2019 18:03 #141110 by Grotius
Today I have a C++ example for finding a new xy offset point of a certain line. In this example the offset is 10mm.
We have the x0,y0 (0,0) and x1,y1 (200,100) points. These are the points that form the blue product line C.
We have to calculate the offset point of the blue line, see the yellow circle.
Visualisation of the calculation..
Attached the QT project for this example, the zip includes the cad drawing.
The C++ code spoiler down here has some updated code... Don't forget this...
Console output :
this calculation is between 0 and 180 degrees, the new x is : 204.472 new y is : 91.0557
(alpha 3)
The output in DxfDecoder. Tested in all quadrants. The calculated point is the endpoint of the short blue line wich has
the offset lenght of 10mm in this case. The blue line is perpendicular to the base line. The yellow lines are opposites of the blue line.
The yellow line was founded by only a + and - calculation related to the blue line.
This is nice !!
We can now draw a offset line with opengl.
After that, we can do the intersection calculation for line-line to close the contour... Yes yes..
These separate software step's (tutorials) combined (in total) will result in the base code for a complex contour offset algoritme.
Don't forget the power of Linux !!
C++ code: (angle1 = alpha1 and so on)
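Since the project itself is only attached as a zip, here is a hypothetical condensed version of the same calculation using the line angle directly; it reproduces the 204.472 / 91.056 endpoint from the example above.

#include <cmath>
#include <iostream>

int main() {
    double x0 = 0.0,   y0 = 0.0;     // line start, from the example above
    double x1 = 200.0, y1 = 100.0;   // line end
    double offset = 10.0;            // perpendicular offset distance

    const double pi = std::acos(-1.0);
    double angle = std::atan2(y1 - y0, x1 - x0);   // direction of the base line
    double perp  = angle - pi / 2.0;               // rotate -90 degrees (right-hand side)
    double nx = x1 + offset * std::cos(perp);
    double ny = y1 + offset * std::sin(perp);
    // The opposite (yellow) side is simply x1 - offset*cos(perp), y1 - offset*sin(perp).
    std::cout << "the new x is : " << nx << " new y is : " << ny << "\n";   // ~204.472, ~91.056
}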
For a quick test we add the offset lines to the opengl class :
Later on we will copy this test to a new class and keep the opengl as clean as possible.
Next item is to execute the intersection line-line class
to find all the intersection points... This we will do tomorrow !
If i look at the current output, we can use this for generating press brake products..
Last edit: 01 Aug 2019 18:03 by
Time to create page: 0.521 seconds | {"url":"https://www.forum.linuxcnc.org/41-guis/36768-qt-c-code-samples?start=60","timestamp":"2024-11-13T11:39:10Z","content_type":"text/html","content_length":"106005","record_id":"<urn:uuid:f7c4210c-fd3d-490c-9627-eddcf5433e86>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00326.warc.gz"} |
Data Science for Doctors – Part 2 : Descriptive Statistics | R-bloggers
Data Science for Doctors – Part 2 : Descriptive Statistics
Data science enhances people’s decision making. Doctors and researchers are making critical decisions every day. Therefore, it is absolutely necessary for those people to have some basic knowledge
of data science. This series aims to help people that are around medical field to enhance their data science skills.
We will work with a health related database the famous “Pima Indians Diabetes Database”. It was generously donated by Vincent Sigillito from Johns Hopkins University. Please find further information
regarding the dataset here.
This is the second part of the series, it will contain the main descriptive statistics measures you will use most of the time. Those measures are divided in measures of central tendency and measures
of spread. Moreover, most of the exercises can be solved with built-in functions, but I would encourage you to solve them “by hand”, because once you know the mechanics of the measures, then you are
way more confident on using those measures. On the “solutions” page, I have both methods, so even if you didn’t solve them by hand, it would be nice if you check them out.
Before proceeding, it might be helpful to look over the help pages for the mean, median, sort , unique, tabulate, sd, var, IQR, mad, abs, cov, cor, summary, str, rcorr.
You also may need to load the Hmisc library.
In case you haven’t solve the part 1, run the following script to load the prerequisites for this part.
Answers to the exercises are available here.
If you obtained a different (correct) answer than those listed on the solutions page, please feel free to post your answer as a comment on that page.
Exercise 1
Find the mean of the mass variable.
Exercise 2
Find the median of the mass variable.
Exercise 3
Find the mode of the mass.
Exercise 4
Find the standard deviation of the age variable.
Exercise 5
Find the variance of the mass variable.
Unlike the popular mean/standard deviation combination, the interquartile range and the median/mean absolute deviation are not sensitive to the presence of outliers. Of the two, the MAD is often recommended because it can be scaled to approximate the standard deviation.
Exercise 6
Find the interquartile range of the age variable.
Exercise 7
Find the median absolute deviation of age variable. Assume that the age follows a normal distribution.
Exercise 8
Find the covariance of the variables age, mass.
Exercise 9
Find the spearman and pearson correlations of the variables age, mass.
Exercise 10
Print the summary statistics, and the structure of the data set. Moreover construct the correlation matrix of the data set. | {"url":"https://www.r-bloggers.com/2017/02/data-science-for-doctors-part-2-descriptive-statistics/","timestamp":"2024-11-08T21:10:57Z","content_type":"text/html","content_length":"98706","record_id":"<urn:uuid:101dac06-e1fa-45d4-948a-a739f816ba80>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00308.warc.gz"} |
Probabilities: Studies in the Foundations of Bayesian Decision Theory
2015 Theses Doctoral
Probabilities: Studies in the Foundations of Bayesian Decision Theory
One central issue in philosophy of probability concerns the interpretation of the very notion of probability. The fruitful tradition of modern Bayesian subjectivists seeks to ground the concept of
probability in a normative theory of rational decision-making. The upshot is a representation theorem, by which the agent's preferences over actions are represented by derived subjective
probabilities and utilities. As the development of Bayesian subjectivism becomes increasingly involved, the corresponding representation theorem has gained considerable complexity and has itself
become a subject of philosophical scrutiny.
This dissertation studies systematically various aspects of Bayesian decision theory, especially its foundational role in Bayesian subjective interpretation of probability. The first two chapters
provide a detailed review of classical theories that are paradigmatic of such an approach with an emphasis on the works of Leonard J. Savage. As a technical interlude, Chapter III focuses on the
additivity condition of the probabilities derived in Savage's theory of personal probability, where it is pointed out that Savage's arguments for not requiring probability measures derived in his
system to be countable additive is inconclusive due to an oversight of set-theoretic details.
Chapter IV treats the well-known problem of constant-acts in Savage's theory, where a simplification of the system is proposed which yields the representation theorem without the constant-act
assumption. Chapter V addresses a series of issues in the epistemic foundations of game theory including the problem of asymmetry of viewpoints in multi-agent systems and that of self-prediction in a
Bayesian setup. These issues are further analyzed in the context of epistemic games where a unification of different models that are based on different belief-representation structures is also
• Liu_columbia_0054D_12953.pdf application/pdf 697 KB Download File
More About This Work
Academic Units
Thesis Advisors
Gaifman, Haim
Ph.D., Columbia University
Published Here
October 7, 2015 | {"url":"https://academiccommons.columbia.edu/doi/10.7916/D86H4GW1","timestamp":"2024-11-06T22:15:43Z","content_type":"text/html","content_length":"20602","record_id":"<urn:uuid:3dd90d80-7329-4874-8232-05b8a8e79428>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00001.warc.gz"} |
Calculate Time Calculator - Savvy Calculator
Calculate Time Calculator
Calculating time intervals is a crucial aspect in various fields, from project management to scientific experiments. To streamline this process, a reliable time calculator is indispensable. In this
article, we will introduce a comprehensive time calculator coded in HTML and JavaScript, allowing users to perform accurate time calculations effortlessly.
How to Use
Using the time calculator is straightforward. Input the time values in the provided fields and click the “Calculate” button. The result will be displayed promptly, providing you with precise
information on the time interval between the specified periods.
The time calculator utilizes the following formula for accurate calculations: Time Difference = End Time − Start Time, with 24 hours added whenever the interval crosses midnight.
This formula ensures precision by considering the exact time difference between the provided start and end times.
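The page itself is implemented in HTML and JavaScript; purely as an illustration of the arithmetic, the wrap-around case can be handled by adding 24 hours whenever the end time is earlier than the start time, as in this small sketch (times hard-coded for the example):

#include <iostream>

int main() {
    int startH = 14, startM = 30;   // 14:30
    int endH   = 5,  endM   = 45;   // 05:45 on the following day

    int diff = (endH * 60 + endM) - (startH * 60 + startM);
    if (diff < 0) diff += 24 * 60;  // the interval crosses midnight
    std::cout << diff / 60 << " hours " << diff % 60 << " minutes\n";   // 15 hours 15 minutes
}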
Suppose you want to calculate the time interval between 2:30 PM and 5:45 PM. Enter the start time as 14:30 and the end time as 17:45. Click the “Calculate” button, and the result will show the
accurate time difference between the two periods.
Q1: What time format should I use for input?
A1: Input time in 24-hour format, separating hours and minutes with a colon (e.g., 14:30).
Q2: Can I calculate time intervals spanning across different days?
A2: Yes, the calculator accounts for time differences spanning across multiple days.
Q3: Is the result displayed in a specific format?
A3: The result is shown in hours and minutes, providing a clear representation of the time interval.
In conclusion, the time calculator presented here serves as a valuable tool for anyone needing precise time calculations. Its user-friendly interface, coupled with the accurate formula, ensures
reliable results. Whether you’re managing projects or analyzing scientific data, this calculator simplifies the process, saving you time and effort.
Leave a Comment | {"url":"https://savvycalculator.com/calculate-time-calculator","timestamp":"2024-11-04T15:35:51Z","content_type":"text/html","content_length":"143319","record_id":"<urn:uuid:1ce73efd-85e7-429d-acb0-0a637273d960>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00028.warc.gz"} |
Fun With Numbers - Brad Feld
Fun With Numbers
Jul 03, 2018
My friend Katherine pointed me to the Number Gossip site. If you like numbers, you’ll quickly lose the next hour playing around. Since 49% of the US is taking today off, it seemed like a relevant
thing to spend some time on.
For example, did you know that 67 is the only number such that the common alphabetical value of its Roman representation is equal to its reversal (LXVII – 12+24+22+9+9=76)?
Or, did you know that 111 is the smallest palindromic number such that the sum of its digits is one of its prime factors? It’s also the age at which Bilbo Baggins leaves the Shire.
I had forgotten that an evil number is a number that has an even number of 1's in its binary expansion. But I didn't know what an odious number is (it has an odd number of 1's in its binary expansion). Apparently, being evil is related to, but the opposite of, being odious.
While we all know that 42 is the answer to the ultimate question of life, the universe and everything as calculated by Deep Thought, did you know it is also the number of spots on a pair of dice? It's
also the smallest abundant odious number.
Have fun. Don’t forget to come up for air once in a while. | {"url":"https://feld.com/archives/2018/07/fun-with-numbers/","timestamp":"2024-11-05T12:30:20Z","content_type":"text/html","content_length":"271009","record_id":"<urn:uuid:40272d7a-bbc6-4ed1-85ee-070af0df1065>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00807.warc.gz"} |
Gann Square Of 12 UPD | Holy Trinity Luthera
Gann Square Of 12 UPD
Gann Square Of 12 >>> https://urllio.com/2tuUMi
How to Use the Gann Square of 12 for Trading
The Gann Square of 12 is a technical analysis tool that can help traders identify support and resistance levels, as well as potential entry and exit points. The Gann Square of 12 is based on the idea
that price and time are related by a mathematical proportion, and that the market moves in cycles that repeat themselves. The Gann Square of 12 is one of the many tools created by WD Gann, a
legendary trader and analyst who used geometry, astrology and numerology to forecast market movements.
In this article, we will explain what the Gann Square of 12 is, how it works, and how you can use it for trading.
What is the Gann Square of 12
The Gann Square of 12 is a grid of numbers that starts with 1 at the center and spirals outwards in a clockwise direction. The numbers increase by 12 as they move away from the center, forming a
square with 12 rows and 12 columns. The Gann Square of 12 can be used to plot price and time on a chart, by assigning each number a value based on the scale and timeframe of the market.
For example, if we are using a daily chart and a scale of 1 point per number, then the number 1 at the center would represent the price of $1 and the date of January 1st. The number 13 on the second
row would represent the price of $13 and the date of January 13th. The number 25 on the third row would represent the price of $25 and the date of January 25th, and so on. Alternatively, we can use a
different scale and timeframe, such as 10 points per number and a weekly chart, or 100 points per number and a monthly chart.
The Gann Square of 12 can help us identify important price and time levels by looking at the patterns and relationships between the numbers. Some of these patterns are:
The numbers on the diagonal lines form arithmetic progressions with a common difference of 24. For example, the numbers on the main diagonal line are 1, 25, 49, 73, etc., which increase by 24 each
time. These numbers represent major support and resistance levels, as well as potential reversal points.
The numbers on the horizontal and vertical lines form arithmetic progressions with a common difference of 12. For example, the numbers on the first horizontal line are 1, 13, 25, 37, etc., which
increase by 12 each time. These numbers represent minor support and resistance levels, as well as potential continuation points.
The numbers on the four cardinal points (north, south, east and west) form geometric progressions with a common ratio of 2. For example, the numbers on the east point are 2, 4, 8, 16, etc., which
double each time. These numbers represent significant price and time levels that mark major changes in trend or volatility.
The numbers on the four ordinal points (northeast, southeast, northwest and southwest) form geometric progressions with a common ratio of √2. For example, the numbers on the northeast point are √2,
√8, √18, √32, etc., which multiply by √2 each time. These numbers represent intermediate price and time levels that mark minor changes in trend or volatility.
How to Use the Gann Square of 12 for Trading
To use the Gann Square of 12 for trading, we need to follow these steps:
Choose a scale and timeframe that suits our trading style and market conditions. For example, if we are trading stocks on a daily chart, we can use a scale of 1 point per number. If we are trading
forex on an hourly chart, we can use a scale of 10 pips per number.
Plot the Gann Square of 12 on our chart by aligning the center number with a significant price level or pivot point. For example, if we are trading AAPL stock on a daily chart and we want to use $100
as our center price level, we can plot the Gann Square of 12 such that the number 1 corresponds to $100.
Identify the ec8f644aee | {"url":"https://www.holytrinitymarshall.com/forum/discover-awesome-features/gann-square-of-12-upd","timestamp":"2024-11-06T04:46:33Z","content_type":"text/html","content_length":"968934","record_id":"<urn:uuid:066f91a0-987c-4e60-ad5a-84556b7cd47a>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00706.warc.gz"} |
Rotational Transformations on the Globe
In The Orientation Vector, you explored the concept of altering the aspect of a map projection in terms of pushing the North Pole to new locations. Another way to think about this is to redefine the
coordinate system, and then to compute a normal aspect projection based on the new system. For example, you might redefine a spherical coordinate system so that your home town occupies the origin. If
you calculated a map projection in a normal aspect with respect to this transformed coordinate system, the resulting display would look like an oblique aspect of the true coordinate system of
latitudes and longitudes.
This transformation of coordinate systems can be useful independent of map displays. If you transform the coordinate system so that your home town is the new North Pole, then the transformed
coordinates of all other points will provide interesting information.
The types of coordinate transformations described here are appropriate for the spherical case only. Attempts to perform them on an ellipsoid will produce incorrect answers on the order of several to
tens of meters.
When you place your home town at a pole, the spherical distance of each point from your hometown becomes 90° minus its transformed latitude (also known as a colatitude). The point antipodal to your
town would become the South Pole, at -90°. Its distance from your hometown is 90°-(-90°), or 180°, as expected. Points 90° distant from your hometown all have a transformed latitude of 0°, and thus
make up the transformed equator. Transformed longitudes correspond to their respective great circle azimuths from your home town.
Reorient Vector Data with rotatem
The rotatem function uses an orientation vector to transform latitudes and longitudes into a new coordinate system. The orientation vector can be produced by the newpole or putpole functions, or can
be specified manually.
As an example of transforming a coordinate system, suppose you live in Midland, Texas, at (32°N,102°W). You have a brother in Tulsa (36.2°N,96°W) and a sister in New Orleans (30°N,90°W).
1. Define the three locations:
midl_lat = 32; midl_lon = -102;
tuls_lat = 36.2; tuls_lon = -96;
newo_lat = 30; newo_lon = -90;
2. Use the distance function to determine great circle distances and azimuths of Tulsa and New Orleans from Midland:
[dist2tuls az2tuls] = distance(midl_lat,midl_lon,...
tuls_lat,tuls_lon)
dist2tuls =
az2tuls =
[dist2neworl az2neworl] = distance(midl_lat,midl_lon,...
newo_lat,newo_lon)
dist2neworl =
az2neworl =
Tulsa is about 6.5 degrees distant, New Orleans about 10.5 degrees distant.
3. Compute the absolute difference in azimuth, a fact you will use later.
azdif = abs(az2tuls-az2neworl)
azdif =

   49.7258
4. Today, you feel on top of the world, so make Midland, Texas, the north pole of a transformed coordinate system. To do this, first determine the origin required to put Midland at the pole using
origin = newpole(midl_lat,midl_lon)
origin =
The origin of the new coordinate system is (58°N, 78°E). Midland is now at a new latitude of 90°.
5. Determine the transformed coordinates of Tulsa and New Orleans using the rotatem command. Because its units default to radians, be sure to include the degrees keyword:
[tuls_lat1,tuls_lon1] = rotatem(tuls_lat,tuls_lon,...
origin,'degrees')
tuls_lat1 =
tuls_lon1 =
[newo_lat1,newo_lon1] = rotatem(newo_lat,newo_lon,...
origin,'degrees')
newo_lat1 =
newo_lon1 =
6. Show that the new colatitudes of Tulsa and New Orleans equal their distances from Midland computed in step 2 above:
tuls_colat1 = 90-tuls_lat1
tuls_colat1 =
newo_colat1 = 90-newo_lat1
newo_colat1 =
7. Recall from step 3 that the absolute difference in the azimuths of the two cities from Midland was 49.7258°. Verify that this equals the difference in their new longitudes:
abs(tuls_lon1-newo_lon1)

ans =

   49.7258
You might note small numerical differences in the results (on the order of 10^-6), due to round-off error and trigonometric functions.
For further information, see the reference pages for rotatem, newpole, putpole, neworig, and org2pol.
Reorient Gridded Data
This example shows how to transform a regular data grid into a new one with its data rearranged to correspond to a new coordinate system using the neworig function. You can transform coordinate
systems of data grids as well as vector data. When regular data grids are manipulated in this manner, distance and azimuth calculations with the map variable become row and column operations.
Load elevation raster data and a geographic cells reference object. Transform the data set to a new coordinate system in which a point in Sri Lanka is the north pole. Reorient the data grid by using
the neworig function. Note that the result, [Z,lat,lon], is a geolocated data grid, not a regular data grid like the original data.
load topo60c
origin = newpole(7,80);
[Z,lat,lon] = neworig(topo60c,topo60cR,origin);
Display the new map, in normal aspect, as its orientation vector shows. Note that every cell in the first row of the new grid is 0 to 1 degrees distant from the point new origin. Every cell in its
second row is 1 to 2 degrees distant, and so on. In addition, every cell in a particular column has the same great circle azimuth from the new origin.
axesm miller
lat = linspace(-90,90,90);
lon = linspace(-180,180,180);
mstruct = getm(gca); | {"url":"https://de.mathworks.com/help/map/rotational-transformations-on-the-globe-1.html","timestamp":"2024-11-08T09:27:19Z","content_type":"text/html","content_length":"76055","record_id":"<urn:uuid:94b18d01-1de8-49ab-b6d6-d71826e831a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00737.warc.gz"} |
I want to draw and animate the solid obtained when I rotate this figure around the line `CD`.
I do not know how to start. How to get it?
This question is conceptually almost the same as your previous [question on the tetrahedron](https://topanswers.xyz/tex?q=1349), so we can recycle a lot of things from there. Basically you can use routines like `draw face with corners` for an arbitrary polyhedron as long as it is sufficiently convex. In the case at hand, there is however one plane which needs to be treated separately. Please note that if you have updated your pgf installation to 3.1.6, you need to get the [newest version of `3dtools`](https://github.com/marmotghost/tikz-3dtools).
\usetikzlibrary{3dtools}% https://github.com/marmotghost/tikz-3dtools
% something like this may go into the 3dtools library
draw face with corners/.code={\begingroup\c@pgf@counta0\relax
\advance\c@pgf@counta by1\relax
\edef\pgfutil@tmpl{\pgfutil@tmpl -- (\pgfutil@tmpT)}%
\typeout{A face needs at least three vertices. However, only \the\c@pgf@counta\space vertices were specified.}%
\pgfmathtruncatemacro{\pgfutil@tmpi}{\pgfkeysvalueof{/tikz/3d/polyhedron/shading function}(\pgfutil@tmph)}%
\ifdim\pgfutil@tmpe pt<0pt\relax
\begin{pgfonlayer}{\pgfkeysvalueof{/tikz/3d/polyhedron/back layer}}
\tikzset{3d/polyhedron/before hidden}%
\draw[fill=tikz@td@face@color!\pgfutil@tmpi!black,3d/polyhedron/back] \pgfutil@tmpl -- cycle;
\tikzset{3d/polyhedron/after hidden}%
\begin{pgfonlayer}{\pgfkeysvalueof{/tikz/3d/polyhedron/fore layer}}
\tikzset{3d/polyhedron/before visible}%
\draw[fill=tikz@td@face@color!\pgfutil@tmpi!black,3d/polyhedron/fore] \pgfutil@tmpl -- cycle;
\tikzset{3d/polyhedron/after visible}%
O/.initial={(0,0,0)},% point inside the polyhedron
L/.initial={(1,1,1)},% "light source",
fore/.style={draw,solid},fore layer/.initial=foreground,
back/.style={draw,dashed,fill opacity=0},back layer/.initial=background,
shading function/.initial={tikztdpolyhedronshade},
/tikz/declare function={tikztdpolyhedronshade(\x)=70+30*\x;},
before visible/.code={},after visible/.code={},
before hidden/.code={},after hidden/.code={},
\tikzset{% https://tex.stackexchange.com/a/76216
even odd clip/.code={\pgfseteorule}}
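% Note (not from the original answer): the \foreach loop below produces 36
% frames, one every 10 degrees of rotation; each pass draws a separate
% tikzpicture, so the frames can later be assembled into an animation (see
% the note after the listing).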
\foreach \Angle in {5,15,...,355}
{\begin{tikzpicture}[tdplot_main_coords,line join=round,line cap=round,
declare function={L=2;l=1;d=1;}]
\path[tdplot_screen_coords,use as bounding box]
(-0.75*L,-1.75*L) rectangle (1.75*L,1.75*L);
\path (1,0,0) coordinate (ex)
(0,1,0) coordinate (ey)
(0,{cos(\Angle)},{sin(\Angle)}) coordinate (ey')
(0,{-sin(\Angle)},{cos(\Angle)}) coordinate (ez') ;
\path (0,0,0) coordinate (D)
(L,0,0) coordinate (C)
(L,-L,0) coordinate (B)
(0,-L,0) coordinate (A)
(0,-L,L) coordinate (A2)
(l,-L,L) coordinate (B2)
(l,0,L) coordinate (C2)
(0,0,L) coordinate (D2)
(l,-L,L-l) coordinate (A1)
(l,0,L-l) coordinate (D1)
(L,-L,L-l) coordinate (B1)
(L,0,L-l) coordinate (C1)
(l/2,-l,l) coordinate (O');
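% Note (not from the original answer): the coordinates above describe an
% L-shaped prism with footprint L x L (0<=x<=L, -L<=y<=0), of height L over
% the strip 0<=x<=l and height L-l over l<=x<=L. The rotation axis CD runs
% along the x axis from D=(0,0,0) to C=(L,0,0); O' lies inside the solid and
% is presumably the interior reference point (the O key) for the visibility
% test of the polyhedron code.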
\begin{scope}[canvas is yz plane at x=0]
\draw[thick,blue] let \p1=($(D)-(A)$),\n1={veclen(\y1,\x1)} in
(A) arc[start angle=180,end angle=180-1*\Angle,radius=\n1]
let \p2=($(D)-(A2)$),\n2={veclen(\y2,\x2)} in
(A2) arc[start angle=180-45,end angle=180-45-1*\Angle,radius=\n2];
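% Note (not from the original answer): the two blue arcs are the trails swept
% by A and A2 while the figure rotates about CD -- circles in the yz plane
% centered at D with radii |DA| and |DA2|, drawn up to the current \Angle.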
fore/.append style={fill opacity=0.6,thick,solid},
back/.append style={fill opacity=0.6,dashed,thin},
draw face with corners={{(A)},{(B)},{(B1)},{(A1)},{(B2)},{(A2)}},
draw face with corners={{(D)},{(C)},{(C1)},{(D1)},{(C2)},{(D2)}},
draw face with corners={{(A)},{(B)},{(C)},{(D)}},
draw face with corners={{(A2)},{(B2)},{(C2)},{(D2)}},
draw face with corners={{(A)},{(A2)},{(D2)},{(D)}},
draw face with corners={{(B)},{(B1)},{(C1)},{(C)}},
draw face with corners={{(A1)},{(B1)},{(C1)},{(D1)}}}
\tikzset{my path/.style={insert path={
(D) -- (A) -- (B) -- (C) -- (C1) -- (D1) -- (C2) -- (D2) -- cycle}}}
\tikzset{my path/.style={insert path={
(A) -- (D) -- (C) -- (B) -- (B1) -- (A1) -- (B2) -- (A2) -- cycle}}}
\begin{scope}% clip on the covered area and use fake O
before hidden/.code={\clip [my path];},
draw face with corners={{(A1)},{(B2)},{(C2)},{(D1)}}}
% clip on the uncovered area
\tikzset{3d/polyhedron/.cd,before visible/.code={
\clip[even odd clip]
(current bounding box.south west) -- (current bounding box.south east)
-- (current bounding box.north east) -- (current bounding box.north west)
-- cycle [my path];},
draw face with corners={{(A1)},{(B2)},{(C2)},{(D1)}}}
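To get an actual animation rather than a stack of individual frames, one option (an assumption on my part, not something the original answer states) is the animate package: the body of one frame from the \foreach loop above can be reused inside \multiframe, with \Angle replaced by the integer loop variable.
% Hypothetical sketch using the animate package; the package and its options
% are real, but wiring it into the code above is left to the reader.
\usepackage{animate}% goes in the preamble
% ... then, in the document body:
\begin{animateinline}[autoplay,loop]{12}% 12 frames per second
  \multiframe{36}{iAngle=5+10}{% the same 36 angles as the \foreach above
    \begin{tikzpicture}
      % body of one frame, with \Angle replaced by \iAngle
    \end{tikzpicture}%
  }%
\end{animateinline}
Alternatively, if every frame is compiled as a page of a standalone document, a tool such as ImageMagick can join the pages of the resulting PDF into a GIF.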