math people - help me with this problem
Why is the formula for finding the area of an equilateral triangle so different from the formula for finding the area of other triangles?
I suppose you could find the height and write the formula for area in terms of the base (one side), but I don't know why you'd bother learning it separately.
Curious as to what math book you are using? I've never seen a special formula for the area of an equilateral triangle. Same one works on them as on any other.
I'm not using a math book. I'm working my way through Khan Academy, and using the 1/2bh formula doesn't give the correct answer. Khan Academy uses this formula:
Equilateral Triangle: Area = (√3/4)·s²
s = length of a side
If you have a triangle that has sides of 10 in, using 1/2 bh the area is 50 sq in. (did I do that right?)
Using the other formula, the area is 43.301 sq in. (equilateral triangle area calculator)
I don't understand why that is, and it's driving me nuts!!
The height is the perpendicular height.
So 1/2 bh = 1/2 (10)(square root of 75) = 43.3
You can get the height by Pythagoras theorem.
Squared of half the base + squared of the perpendicular height = squared of the side (the hypotenuse)
Squared of 5 + squared of h = squared of 10
25 + squared of h = 100
Squared of h = 75
h = square root of 75
The page has a pictorial explanation for how to get area of an equilateral triangle.
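A quick numerical check that the special formula and 1/2 bh agree for a side of 10 (a small Python sketch, just for illustration):

```python
import math

s = 10.0                                  # side length in inches
h = math.sqrt(s**2 - (s / 2)**2)          # height from Pythagoras: sqrt(100 - 25) = sqrt(75)

print(0.5 * s * h)                        # 1/2 * base * height -> 43.301...
print(math.sqrt(3) / 4 * s**2)            # equilateral formula  -> 43.301... (same value)
```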
Thanks! I can sleep now :)
Matrix Determinant Calculator
Calculate the determinant of any matrix effortlessly with our Matrix Determinant Calculator. Whether you’re studying linear algebra, working on engineering problems, or solving complex equations,
this tool delivers accurate results in seconds.
What is the Matrix Determinant Calculator?
The Matrix Determinant Calculator is a specialized tool designed to compute the determinant of matrices of various sizes. Determinants play a crucial role in linear algebra, providing key information
about matrix properties such as invertibility and volume transformations. Our calculator simplifies this process, making it accessible for students, professionals, and anyone dealing with matrices.
How to Use?
• Input the Matrix Elements: Enter the values of your matrix into the calculator.
• Click Calculate: Press the “Calculate” button to find the determinant.
• View Your Results: The determinant of your matrix will be displayed instantly, along with step-by-step calculations for better understanding.
• 2×2 Matrix Determinant: Matrix:
| 4 7 |
| 2 6 |
Result: Determinant = 10
The calculator shows the calculation process for a 2×2 matrix using the formula ad - bc.
• 3×3 Matrix Determinant: Matrix:
| 1 2 3 |
| 4 5 6 |
| 7 8 9 |
Result: Determinant = 0
The calculator breaks down the more complex calculation of a 3×3 matrix using cofactor expansion, explaining each step.
• 4×4 Matrix Determinant: Matrix:
| 2 0 1 3 |
| 0 1 4 2 |
| 1 2 3 4 |
| 3 4 1 2 |
Result: Determinant = 66
For larger matrices, the calculator handles the process efficiently, providing a detailed explanation.
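The examples above can be reproduced with a few lines of code. The sketch below is illustrative only (it is not the calculator's own implementation); it uses plain cofactor expansion along the first row, which is fine for small matrices:

```python
def det(m):
    """Determinant of a square matrix by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]   # remove row 0 and column j
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[4, 7], [2, 6]]))                                           # 10 (ad - bc)
print(det([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))                          # 0
print(det([[2, 0, 1, 3], [0, 1, 4, 2], [1, 2, 3, 4], [3, 4, 1, 2]]))   # 66
```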
Start Using the Matrix Determinant Calculator Today!
Calculate the determinant of any matrix quickly and accurately. Whether you’re studying for an exam, working on a project, or solving complex problems, our Matrix Determinant Calculator is here to help.
Frequently Asked Questions
What is a matrix determinant?
A matrix determinant is a scalar value that can be calculated from a square matrix. It provides important information about the matrix, including whether it has an inverse and the scaling factor of
the linear transformation represented by the matrix.
Can the calculator handle larger matrices?
Yes, the calculator supports matrices of various sizes, including 2×2, 3×3, 4×4, and larger.
Is the Matrix Determinant Calculator useful for solving systems of equations?
Yes, determinants are often used in solving systems of linear equations, particularly through Cramer’s rule and other methods in linear algebra.
Does the calculator show the steps involved in finding the determinant?
Yes, for larger matrices, the calculator provides a step-by-step explanation of the process, helping you understand how the determinant is calculated.
Is the Matrix Determinant Calculator free to use?
Yes, the calculator is completely free to use for all users.
Decision Making using Probability (Investigation) | Stage 3 Maths | HK Secondary S1-S3
Probability calculations may seem like an abstract concept, one which might only be useful to Maths Teachers and Gamblers. After all, who else needs to calculate the probability of rolling two even
numbers in a row, or drawing a heart and then a diamond from a pack of cards?
Probability is used by many other people in many other areas, however, and we will explore some of these occurrences of probability.
How probability can save lives: Medical testing
Doctors use probability regularly in deciding whether it is worth "screening" patients for medical conditions. Screening refers to testing people who do not have any symptoms, just in case they have
a disease which could be treated. Whilst screening can be life-saving if used correctly, it can also go terribly wrong if used in the wrong circumstances.
Screening for cancer
A key example is the use of tumour markers to check if someone has cancer. For example, imagine that around 1 in 10000 asymptomatic people have pancreatic cancer, and there is a blood test which is
99% accurate in detecting pancreatic cancer.
1. If you do this test, and you get a positive result, what do you think your chance of having cancer is?
Most people answer 99%. After all, the test is "99% accurate" right? In fact, the answer to this question is only around 1%. How can that be possible? It all comes down to probability.
To see how this works, imagine 10000 people who get tested for pancreatic cancer. Statistically, only 1 person out of these 10000 will have pancreatic cancer, so there will be only 1 true positive result. However, since the test has a 1% error rate, amongst the 9999 people who do not have cancer there will be about 99 false positives (defined as people who do not have cancer but get a positive result anyway). Therefore, there will be 1 true positive and 99 false positives. This means that anyone who gets a positive result has a 99% chance that the result is a false positive, and only a 1% chance that it is a true positive.
To make it easier to see, we can make a table of these numbers:
These false positives might seem harmless, but they often result not only in anxiety for the patient but may also result in potentially harmful interventions such as scanning with radiation or
unnecessary surgery. Therefore, doctors generally do not use tumour markers to check if people have cancer.
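The arithmetic above can be written out as a short calculation. This sketch uses the illustrative numbers from the text (1 in 10000 prevalence, a "99% accurate" test) and assumes that the accuracy figure applies both to sick and to healthy people, i.e. it is both the sensitivity and the specificity:

```python
population = 10_000
prevalence = 1 / 10_000        # 1 person in 10,000 actually has the disease
sensitivity = 0.99             # chance a sick person tests positive (assumed)
specificity = 0.99             # chance a healthy person tests negative (assumed)

sick = population * prevalence                    # about 1 person
healthy = population - sick                       # about 9,999 people

true_positives = sick * sensitivity               # about 1
false_positives = healthy * (1 - specificity)     # about 100

ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 3))   # roughly 0.01: a positive result is only about 1% likely to be real
```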
Screening for depression
The prevalence of clinical depression in young people is around 17%, making it the single most common serious medical condition amongst young people. Whilst there are many excellent treatment options available, including services like Mood Gym specifically targeted at young people, depression often goes unrecognised and people do not think to go to the doctor for it. Depression can be diagnosed using the PHQ-9 questionnaire, where a score of 11 or more (or thoughts of suicide alone) has an accuracy of around 80% for diagnosing clinical depression.
2. Make a table to figure out the chance of a positive test on the PHQ-9 questionnaire being a true positive. Do you think this makes universal screening for depression appropriate?
How probability can save you money: Insurance premiums
As a high school student, you may already be driving, or be thinking about driving soon. As such, you may well be aware how expensive car insurance can be for a young person.
1. See how much you would pay for comprehensive car insurance per year if you were to drive your parents' car, using an online insurance calculator.
Why is it so expensive? Because insurers know that young people are at much higher risk than older adults of having a car accident. For example, the graph below shows the rate of hospitalisations per
100000 people.
2. What do you think are the factors which make younger drivers so much more likely to have a car crash?
In calculating your insurance premium (how much you pay for insurance), insurers want to ensure they are charging you the right price. If it is too expensive, you will end up getting insurance with
another company instead. If it is too cheap, then the insurance company will end up losing money because the costs of car crashes will be more than the money they make from car insurance.
In trying to figure out the "right price" for insurance, the insurance companies hire an "actuary". Actuaries are part of the mathematical elite, using complex probability calculations to figure out
the long-term risk of events like car crashes or natural disasters happening. Having such useful mathematical skills makes actuaries highly sought-after by a range of companies, and the profession
has been rated as the best job of 2013 based on its high income, good work environment and balanced lifestyle.
Whilst we won't get into the real-life complex calculations, let's make our own simple model and see how well it works.
1. How much do you think your parent's car is worth?
2. What is your risk of a major car crash per year? Use the tables below to calculate the approximate yearly risk of a car crash for your age group.
This table shows the number of crashes per age group:
This table shows the number of licence-holders per age group:
3. Multiply your risk by the estimated cost of the crash. To keep it simple, we'll assume that the cost of the crash is simply the cost of replacing your car.
4. The number you have is the approximate amount of money that an insurance company would need to charge to "break even" on your insurance policy. How does it compare with the figure you got from the
actual insurance calculator? Can you think of why the two numbers would be different? What costs does your estimate not include?
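As a worked illustration of steps 1 to 4 (the car value and the crash counts below are made-up placeholder numbers, not figures from the tables):

```python
car_value = 20_000                        # assumed replacement cost of the car, in dollars
crashes_in_age_group = 400                # hypothetical count read off the crash table
licence_holders_in_age_group = 50_000     # hypothetical count from the licence-holder table

yearly_crash_risk = crashes_in_age_group / licence_holders_in_age_group   # 0.008
break_even_premium = yearly_crash_risk * car_value

print(break_even_premium)   # 160.0 dollars: the insurer's rough "break even" price per year
```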
What happens if you drive below the speed limit?
Recently, some insurance companies have started to offer an interesting deal for drivers. If you agree to have a "black box" put in your car, the insurance company can gather data on your driving
habits, and charge you insurance premiums according to how you drive. Let's calculate how knowing your driving habits might allow you to have cheaper insurance.
A recent Road and Maritime Services paper points out that "In urban areas, exceeding the speed limit by 5 km/h doubles the likelihood of a casualty crash and each additional increase in speed by 5 km
/h further doubles the risk". Speeding contributes to around 40 percent of fatal crashes. If we extrapolate this to non-fatal crashes as well, we could argue that you can reduce your risk of a crash
by around 40 percent if you do not speed.
Let's say that your "black box" shows that you never exceed the speed limit.
5. If we assume that this reduces your risk of crashes by 40 percent, what is your new risk of a car crash?
6. Using the previous method of calculating insurance premiums, what would your insurance premium be now?
7. If the black box shows that your driving habits are identical to an older driver, should you receive the same insurance premium as them? Why or why not?
8. If "black boxes" became popular, what would happen to the insurance premiums of drivers who did not sign up for a "black box"?
9. Would you want to sign up for a black box for your car insurance? Why or why not?
10. Are there any other things that insurance companies could do to identify young drivers who are at lower risk, to reduce the cost of their insurance?
How probability can get you a job
11. Can you find another job where you think probability would play an important role in decision making? How do you think probability would be used in this setting?
STOR 215 Course Information
Class meetings, Fall 2017: Tuesday and Thursday 9:30am – 10:45am in Hanes 120
Prerequisites: Mathematics 110
Registration: Enrollment and registration for the course is handled online. Students who wish to be put on the wait list for the class should go here.
Instructor: Andrew B. Nobel
Office: Hanes 308 Phone: 919-962-1352.
Office Hours: Mondays 2-3:15pm and Fridays 1-2pm.
Instructional Assistant: Wei Liu
Office: Hanes B48 Email: liuwei1@live.unc.edu
Teaching Assistant: Matt Jones Email: mattbbll@live.unc.edu
Office Hours: Mondays 10-11:30am and Fridays 10-11am
Location: Hanes 115 (STOR Computer Lab)
Audience and Goals: STOR 215 is an intermediate undergraduate level course that provides an introduction to mathematical reasoning for students seeking a minor or major in the STOR Statistics and
Analytics program. The course may also be appropriate for students in mathematically oriented disciplines such as Physics, Quantitive Psychology, Mathematics, and Computer Science.
The objective of the course is to teach students the basics of rigorous mathematical reasoning, so that they are able to understand and execute elementary mathematical arguments and proofs. The
course will provide a proof-based introduction to several subjects that are important to decision sciences, including elementary number theory, elementary combinatorics, discrete probability, and graphs.
Text: The primary text for the class is “Discrete Mathematics” 7th edition, by Kenneth Rosen.
Homework policy: Homework problems will be assigned regularly throughout the semester, usually every week, and will be posted on the class web page. Each homework assignment will be graded: late/
missed homeworks will receive a grade of zero. In computing a student’s overall homework score for the course, their two lowest homework scores will be dropped. This provision is meant to cover
exceptional situations in which a student is unable to turn in an assignment due to circumstances beyond his/her control: under ordinary circumstances, students are expected to turn in every homework assignment.
To receive full credit on the homework assignments, you must clearly label each problem, neatly show all your work (including your mathematical arguments), and staple together the pages of each
assignment in the correct order. You should give a clear account of your reasoning in English, and use full sentences where appropriate. Please write your name or initials on each page.
Students are welcome to discuss the homework problems with other members of the class, but should prepare their final answers on their own. Copying of homework is not allowed. If you have any
questions concerning the grading of homework, please speak first with the TA. If you are absent from class when an assignment is returned, you can get your homework from the TA during their office hours.
Class attendance and protocol: Students are expected to attend all lectures. If you are unable to attend a lecture, please let the instructor know and make plans to get the notes from another
student in the class. Please arrive on time, as late arrivals disturb other students. Reading of newspapers and the use of laptops, tablets, and cell-phones is not allowed in class. If you use a
tablet to take notes, please see the instructor to discuss this.
Exams: There will be two in-class midterm exams and a comprehensive final examination, which will also be in-class. All exams will be closed book and closed notes, and without calculators. The
final exam will be given at the time and date specified by the UNC Final Exam Schedule.
Midterm 1 September 26
Midterm 2 November 7
Final See Official UNC Schedule
Grading: Grading will be based on homework, two in-class midterms, and an in-class final exam, using the weights below.
Homework 11%
Midterm 1 24%
Midterm 2 24%
Final 41%
When Midterms 1 and 2 are returned, a rough correspondence between numerical scores and letter grades for that exam will be provided. If you receive a D or an F you should come to the instructor’s
office hours to discuss your exam.
Honor Code: Students are expected to adhere to the UNC honor code at all times.
Syllabus (subject to change): The first part of the course is devoted to an overview of elementary logic, sets, and functions, and a variety of proof strategies. Topics include propositional logic
with quantifiers, direct proofs, proof by contradiction, and induction. We then illustrate, develop, and apply these ideas in the study of several subjects where proofs and proof techniques are
common, including elementary number theory, basic combinatorics, discrete probability, and graphs.
1. Propositional logic, basic logical operations
2. Quantifiers
3. Direct and indirect proofs
4. Sets and basic set operations
5. Functions
6. Sequences and summations
7. Elementary number theory
8. Divisibility, modularity, and primes
9. Weak and strong induction
10. Elementary combinatorics: basics of counting
11. Permutations, combinations, binomial coefficients
12. Discrete probability: model of a random experiment
13. Conditional probability and Bayes Theorem
14. Expected value and variance
15. Graphs: basic terminology and applications
16. Adjacency matrices
17. Connectivity, Euler and Hamilton paths
Study tips:
1. Keep up with the reading and homework assignments. If the reading assignment is long, break it up into smaller pieces (perhaps one section or subsection at a time). Keep a pencil and scratch
paper on hand as you read the book, and use these to work out the details of any argument that is not clear to you.
2. Look over the notes from the lecture k before attending lecture k+1. This will help keep you on top of the course material. Ideas from one lecture often carry over to the next: you will get
much more out of the material if you can maintain a sense of continuity and keep the “big picture” in mind.
3. Read the book carefully *before* doing the homework. Trying to find the right section, formula, or paragraph for a particular problem often takes as much time, and it can create more confusion
than it resolves. Each chapter of the book contains many examples illustrating the ideas presented there. When you first read the chapter, don’t feel as if you need to read through every example:
focus first on the shorter, simpler examples, and then look at the longer, more complicated examples afterwards.
4. It is important to know what you know, but it’s especially important to know what you don’t know. As you read over new material in the text or your notes, ask yourself if you (really) understand
it. Keep careful track of any concepts and ideas that are not clear to you, and make efforts to master these in a timely fashion, using the class notes, the text, office hours, study groups, and
outside reading if necessary.
5. One good way to see if you understand an idea or concept is to write down the associated definitions and basic facts, without the book or your notes, in full, grammatical sentences. It’s also
helpful to state the definitions and basic facts out loud — the same grammatical criteria apply here. Translating ideas from mathematics to complete English sentences, and back again, is an
important component of the course, and important component of mathematical research.
6. Homework plays two important roles in the course. First, it provides an opportunity to actively think about, engage with, and learn the course material. In addition, homework provides feedback
on your understanding of the material. Carefully look over your corrected homework assignments. Most students do well on the homework: even if you received a good score, make sure to note and
understand or correct any mistakes you made on the problems.
7. Begin studying for exams at least one week before they are given. Look over your notes, homework, and the text. Write up a study guide containing the main concepts and definitions being covered,
and use this to get a clear picture of the overall landscape of the material. A study guide for each midterm will be posted online. For every topic on the study guide, you should know the relevant
definitions, motivating ideas, and at least one or two examples.
MHT CET 2023 12th May Evening Shift | Waves Question 19 | Physics | MHT CET - ExamSIDE.com
MHT CET 2023 12th May Evening Shift
MCQ (Single Correct Answer)
A tuning fork of frequency $$220 \mathrm{~Hz}$$ produces sound waves of wavelength $$1.5 \mathrm{~m}$$ in air at N.T.P. The increase in wavelength when the temperature of air is $$27^{\circ} \mathrm
{C}$$ is nearly $$\left(\sqrt{\frac{300}{273}}=1.05\right)$$
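One way to set this up (a sketch, not the official solution key): for a fixed source frequency the wavelength is proportional to the speed of sound, which varies as the square root of the absolute temperature, with N.T.P. taken as 273 K here. Then

$$\lambda_{27^{\circ}\mathrm{C}}=\lambda_{\mathrm{NTP}}\sqrt{\frac{300}{273}}\approx 1.5\times 1.05=1.575 \mathrm{~m}$$

so the increase in wavelength is about $$1.575-1.5=0.075 \mathrm{~m}$$, i.e. roughly 7.5 cm.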
MHT CET 2023 12th May Morning Shift
MCQ (Single Correct Answer)
A uniform string is vibrating with a fundamental frequency '$$n$$'. If radius and length of string both are doubled keeping tension constant then the new frequency of vibration is
MHT CET 2023 12th May Morning Shift
MCQ (Single Correct Answer)
The displacement of two sinusoidal waves is given by the equation
$$\begin{aligned} & \mathrm{y}_1=8 \sin (20 \mathrm{x}-30 \mathrm{t}) \\ & \mathrm{y}_2=8 \sin (25 \mathrm{x}-40 \mathrm{t}) \end{aligned}$$
then the phase difference between the waves after time $$t=2 \mathrm{~s}$$ and distance $$x=5 \mathrm{~cm}$$ will be
MHT CET 2023 12th May Morning Shift
MCQ (Single Correct Answer)
Two sounding sources send waves at certain temperature in air of wavelength $$50 \mathrm{~cm}$$ and $$50.5 \mathrm{~cm}$$ respectively. The frequency of sources differ by $$6 \mathrm{~Hz}$$. The
velocity of sound in air at same temperature is
Course angle and the distance between the two points on loxodrome (rhumb line).
Calculation of a distance on loxodrome (rhumb line) and course angle (azimuth) between two points with a given geographical coordinates.
In the 16th century, Flemish geographer Gerhard Mercator made a navigation map of the world, depicting the earth's surface on a plane so that angles on the map are not distorted.
At present, this method of Earth's image is known as Mercator conformal cylindrical projection. This map was very convenient for the sailors as to come from point A to point B on the Mercator's map
it's enough to draw a straight line between these points, measure its angle to the meridian and constantly adhere to this direction, for example, by using a sextant and a polar star as a landmark or
using a magnetic compass(actually it's not that simple with the compass as it's not always pointing to the true north).
Mercator projection is still widely used for navigational maps.
Even the ancient sailors noticed that the rhumb line is not always the shortest way between the two points, and it's self-evident for the long distances. If you draw a line on the globe, crossing all
meridians at the same angle, it becomes clear why this is happening. The straight line on the Mercator map turns on the globe into the endlessly spinning spiral to the poles. That line is called
loxodrome, which means "slanting run" in Greek.
The following calculator calculates the course angle and the transatlantic crossing distance from Las Palmas (Spain) to Bridgetown (Barbados) on the loxodrome. The resulting distance is different by
tens of kilometers of the shortest path (see Distance calculator)
For the calculation of the course angle the following formulas are used:
$\alpha = \arctan\left(\frac{\Delta\lambda}{\ln\left(\tan\left(\frac{\pi}{4}+\frac{\varphi_2}{2}\right)\cdot\left[\frac{1-e\cdot\sin\varphi_2}{1+e\cdot\sin\varphi_2}\right]^{\frac{e}{2}}\right)-\ln\left(\tan\left(\frac{\pi}{4}+\frac{\varphi_1}{2}\right)\cdot\left[\frac{1-e\cdot\sin\varphi_1}{1+e\cdot\sin\varphi_1}\right]^{\frac{e}{2}}\right)}\right)$ [1]

where

$\Delta\lambda = \begin{cases}\lambda_2-\lambda_1 & \text{if } |\lambda_2-\lambda_1|\le 180^{\circ}\\ 360^{\circ}+\lambda_2-\lambda_1 & \text{if } \lambda_2-\lambda_1 < -180^{\circ}\\ \lambda_2-\lambda_1-360^{\circ} & \text{if } \lambda_2-\lambda_1 > 180^{\circ}\end{cases}$ [2]
Loxodrome length is calculated by the following formula:
where $\varphi_1,\lambda_1$ are the latitude and longitude of the first point,
$\varphi_2,\lambda_2$ are the latitude and longitude of the second point, and
$e=\sqrt{1-\frac{b^2}{a^2}}$ is the eccentricity of the spheroid ($a$ is the length of the major semiaxis, $b$ the length of the minor semiaxis).
At angles of 90 ° or 270 °, for the calculation of the arc length the following formula was used
1. V.S. Mikhailov, Navigation and Pilot book ↩
2. Noè Murr comment ↩
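For readers who want to experiment, here is a minimal spherical-Earth sketch of the same calculation in Python. It ignores the spheroid eccentricity terms used above, so its results will differ slightly from the calculator's, and the coordinates are only approximate:

```python
import math

def rhumb_line(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Course angle (degrees) and distance (km) along a loxodrome on a sphere."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = phi2 - phi1
    dlon = math.radians(lon2 - lon1)
    if abs(dlon) > math.pi:                         # take the shorter way around
        dlon -= math.copysign(2 * math.pi, dlon)

    dpsi = math.log(math.tan(math.pi / 4 + phi2 / 2) / math.tan(math.pi / 4 + phi1 / 2))
    q = dphi / dpsi if abs(dpsi) > 1e-12 else math.cos(phi1)   # east-west limiting case

    course = math.degrees(math.atan2(dlon, dpsi)) % 360
    distance = math.hypot(dphi, q * dlon) * radius_km
    return course, distance

# Las Palmas (about 28.1 N, 15.4 W) to Bridgetown (about 13.1 N, 59.6 W)
print(rhumb_line(28.1, -15.4, 13.1, -59.6))
```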
NCERT Class 12th Mathematics - Application of Derivatives
Question 12
The radius of an air bubble is increasing at the rate of 1/2 cm/s. At what rate is the volume of the bubble increasing when the radius is 1 cm?
The air bubble is in the shape of a sphere.
Now, the volume of an air bubble (V) with radius (r) is given by,
V = \frac{4}{3}\pi r^3
The rate of change of volume (V) with respect to time (t) is given by,
\frac{dV}{dt} = \frac{4}{3}\pi \frac{d}{dr}(r^3).\frac{dr}{dt} \;\;\;[By\; Chain\; Rule]
= \frac{4}{3}\pi (3r^2).\frac{dr}{dt}
= 4\pi r^2.\frac{dr}{dt}
It is given that
\frac{dr}{dt}=\frac{1}{2} cm/s .
Therefore, when r = 1 cm,
\frac{dV}{dt}=4\pi(1)^2.(\frac{1}{2})=2\pi\; cm^3/s
Hence, the rate at which the volume of the bubble increases is 2π cm^3/s.
CFA Reading 11, Example 17 p 251
I don’t understand in this example how they come to the conclusion that we cannot reject the null hypothesis that alpha=0. Can someone please explain to me how they arrived at that conclusion? Thank you.
T-stat (in this case 0.4036) < 1.96 (for a 5% level of significance) ; hence we cannot reject the hypothesis that alpha=0
Thank you - can you explain why the absolute value of t-stat must be less than t-critical in order to accept versus having the t-stat fall between (+) and (-) t critical. Using example above 0.4036
falls between -1.96 and 1.96. I remember the latter from Level 1.
You pretty much answered your own question, rr1102: .4036 falls within the critical value range (between -1.96 and 1.96). If it was, let’s say, -.4036, then you could say -.4036 > -1.96, hence fail to reject the null. Either way, the bottom line when it comes to a two-tailed hypothesis (such as this one) is that the absolute value must be less than t-critical.
Thank you!
Here’s an intuitive way to understand t-stats. Think of them as a measure of “Distance” from your null. Your test statistic is a random variable. Assume that your null is true (i.e. that alpha = 0).
If you were to take random draws from your test statistic’s distribution, you’d see a lot of values close to “0”, a fair number that are 0.5 away on either side, fewer that wwere greater than 1.0
away, and so on. If you have a fairly large “n”, you can treat the t-stat like a z-stat. So, even by chance, you’ll see t-stats outside of 1 (i.e > 1 or < -1) about 1/3 of the time (remember, about 2
/3 of the obs will fall within +/- 1 std deviation of zero for a normal distribution). However, by chance, you’ll only see about 5% of your observations more than 1.96 std deviations from zero. So,
in this case, if the null hypothesis is true, you’d see t-statistics of greater than 1.96 or less than -1.96 only 5% of the time (hence the p-value of 0.05). The p-value actually means "the probability of observing a t-statistic greater than the value observed BY CHANCE and ASSUMING THAT THE NULL HYPOTHESIS IS TRUE". Hope this helps. It took me a while before I got a feel for hypothesis testing. But once I realized that it was measuring "distance from the null", it became much clearer.
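To make the "distance from the null" idea concrete, here is a small illustration in Python. It uses the normal approximation for a large sample, which is an assumption, and the 0.4036 value from the thread above:

```python
from scipy import stats

t_stat = 0.4036
p_two_tailed = 2 * (1 - stats.norm.cdf(abs(t_stat)))
print(round(p_two_tailed, 2))       # about 0.69: values this far from 0 happen often by chance

critical = stats.norm.ppf(0.975)    # about 1.96 for a two-tailed test at the 5% level
print(abs(t_stat) < critical)       # True -> fail to reject the null hypothesis
```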
Zak Transforms - Explore the Science & Experts | ideXlab
The Experts below are selected from a list of 93 Experts worldwide ranked by ideXlab platform
A.j.e.m. Janssen - One of the best experts on this subject based on the ideXlab platform.
• Advances in Gabor Analysis, 2003
Co-Authors: A.j.e.m. Janssen
We consider the difficult problem of deciding whether a triple (g, a, b), with window g ∈ L²(ℝ) and time shift parameter a and frequency shift parameter b, is a Gabor frame from two different points of view. We first identify two classes of non-negative windows g ∈ L²(ℝ) such that their Zak transforms have no and just one zero per unit square, respectively. The first class consists of all integrable, non-negative windows g that are supported by and strictly decreasing on [0, ∞). The second class consists of all even, non-negative, continuous, integrable windows g that satisfy on [0, ∞) a condition slightly stronger than strict convexity (superconvexity). Accordingly, the members of these two classes generate Gabor frames for integer oversampling factor (ab)⁻¹ ≥ 1 and ≥ 2, respectively. When we weaken the condition of superconvexity into strict convexity, the Zak transforms Zg may have as many zeros as one wants, but in all cases (g, a, b) is still a Gabor frame when (ab)⁻¹ is an integer ≥ 2. As a second issue we consider the question for which a, b > 0 the triple (g, a, b) is a Gabor frame, where g is the characteristic function of an interval [0, c0) with c0 > 0 fixed. It turns out that the answer to the latter question is quite complicated, where irrationality or rationality of ab gives rise to quite different situations. A pictorial display, in which the various cases are indicated in the positive (a, b)-quadrant, shows a remarkable resemblance to the design of a low-budget tie.
• Signal Processing, 1995
Co-Authors: A.j.e.m. Janssen
Abstract We relate the matrix elements of the linear systems, arising in the Zibulski-Zeevi method for computing dual functions for rationally oversampled Weyl-Heisenberg frames, to the
Wexler-Raz method for computing dual functions. We give a necessary and sufficient condition for two functions g , γ having a frame upper bound to be dual in terms of their Zak Transforms, we
characterize the minimal dual function °γ and we present a necessary and sufficient condition, in terms of the Zak transform, for a function g so that the Tolimieri-Orr condition A is satisfied.
The latter result is used to show that a g generating a rationally oversampled Weyl-Heisenberg frame and satisfying condition A has a minimal dual function that satisfies condition A as well.
A J Brodzik - One of the best experts on this subject based on the ideXlab platform.
• International Conference on Acoustics Speech and Signal Processing, 2001
Co-Authors: A J Brodzik
We develop algorithms for computing block-recursive Zak Transforms and Weyl-Heisenberg expansions, which achieve p/logL and (logM+p)/(logN+logL+1) multiplicative complexity reduction,
respectively, over direct computations, where p'=pM, and N-p' is the number of overlapping samples in subsequent signal segments. For each transform we offer a choice of two algorithms based on
two different implementations of the Zak transform of the time-evolving signal. These two algorithm classes exhibit typical trade-offs between computational complexity and memory requirements.
Franz Hlawatsch - One of the best experts on this subject based on the ideXlab platform.
• IEEE Transactions on Signal Processing, 1997
Co-Authors: Helmut Bölcskei, Franz Hlawatsch
We consider three different versions of the Zak (1967) transform (ZT) for discrete-time signals, namely, the discrete-time ZT, the polyphase transform, and a cyclic discrete ZT. In particular, we
show that the extension of the discrete-time ZT to the complex z-plane results in the polyphase transform, an important and well-known concept in multirate signal processing and filter bank
theory. We discuss fundamental properties, relations, and transform pairs of the three discrete ZT versions, and we summarize applications of these Transforms. In particular, the discrete-time ZT
and the cyclic discrete ZT are important for discrete-time Gabor (1946) expansion (Weyl-Heisenberg frame) theory since they diagonalize the Weyl-Heisenberg frame operator for critical sampling
and integer oversampling. The polyphase representation plays a fundamental role in the theory of filter banks, especially DFT filter banks. Simulation results are presented to demonstrate the
application of the discrete ZT to the efficient calculation of dual Gabor windows, tight Gabor windows, and frame bounds.
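For intuition, the cyclic discrete Zak transform mentioned here can be computed with a single FFT over reshaped data. The sketch below uses one common convention, Z(m, n) = sum_k x(n + kN) exp(-2*pi*i*k*m/M) for a signal of length L = M*N; normalization and axis conventions differ between papers:

```python
import numpy as np

def cyclic_discrete_zak(x, N):
    """Cyclic discrete Zak transform of x (length L = M*N), one common convention."""
    L = len(x)
    assert L % N == 0
    M = L // N
    blocks = x.reshape(M, N)            # blocks[k, n] = x[n + k*N]
    return np.fft.fft(blocks, axis=0)   # Z[m, n] = sum_k x[n + k*N] * exp(-2j*pi*k*m/M)

x = np.random.randn(64)
Z = cyclic_discrete_zak(x, N=8)         # an 8 x 8 array of Zak coefficients
print(Z.shape)
```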
Kloos Tobias - One of the best experts on this subject based on the ideXlab platform.
• 2014
Co-Authors: Kloos Tobias
We study the Zak transform of totally positive (TP) functions. We use the convergence of the Zak transform of TP functions of finite type to prove that the Zak Transforms of all TP functions
without Gaussian factor in the Fourier transform have only one zero in their fundamental domain of quasi-periodicity. Our proof is based on complex analysis, especially the Theorem of Hurwitz and
some real analytic arguments, where we use the connection of TP functions of finite type and exponential B-splines.Comment: The results were presented on the "International Conference on Modern
Time-Frequency Analysis" in Strobl, Austria, June 2 - 6th, 201
• 'Elsevier BV', 2013
Co-Authors: Kloos Tobias, Stöckler Joachim
We study totally positive (TP) functions of finite type and exponential B-splines as window functions for Gabor frames. We establish the connection of the Zak transform of these two classes of
functions and prove that the Zak Transforms have only one zero in their fundamental domain of quasi-periodicity. Our proof is based on the variation-diminishing property of shifts of exponential
B-splines. For the exponential B-spline B_m of order m, we determine a large set of lattice parameters a,b>0 such that the Gabor family of time-frequency shifts is a frame for L^2(R). By the
connection of its Zak transform to the Zak transform of TP functions of finite type, our result provides an alternative proof that TP functions of finite type provide Gabor frames for all lattice
parameters with ab
James is in Danger
James is currently sitting in his hotel room in a hallway with side-by-side rooms, with the leftmost room being room number 1, and the rightmost being room number . He is currently on a school trip,
and he has just pissed off one of his friends, Alex, by exposing him in their group chat.
Alex is currently on his way to James's room to have a "friendly conversation" with James.
James knows the room numbers of all of his friends, but he has no clue which of these rooms belongs to Alex; nevertheless, he wants to prolong the distance that Alex has to travel to get to his room.
James, being the extrovert he is, can request to hide in any of the rooms in the hallway (since he knows basically everyone in the school), and he has enough time to move to one of these rooms. (He
can't leave the hallway, however, since there are teachers stationed on either side of the hallway that prevents students from leaving.)
Since James does not actually know which of the rooms belongs to Alex, he will assume the worst case scenario: He will treat all of his friends' rooms as Alex's room, and he will always assume that
Alex will choose the correct direction to head in to find the room James is hiding in. He will then figure out how long (on average) Alex will need to get to him by calculating the average distance
it takes Alex to travel from any of the rooms to his current room. He will then choose the room that Alex will take the longest distance to get to on average to hide in.
Which room should James hide in?
Input Specification
The first line will contain two space-separated integers, and .
The second line will contain letters. G is a room that doesn't contain one of James's friends, and B is a room that does contain one of James's friends.
Output Specification
Output one number, the room number of the room James should hide in. (Assume room numbers start from 1)
If there exists multiple, output the room with the highest room number.
Sample Input
Sample Output
Sample Output Explanation
Let's do some casework:
If James chooses room 1, the first room that Alex can be in is room 2, which is only 1 room away. The second room that Alex can be in is room 5, which is 4 rooms away. The third room that Alex can be in is room 6, which is 5 rooms away. On average, Alex will need (1+4+5)/3, which is 10/3 rooms to get to James.
If James chooses room 2, the first room that Alex can be in is also room 2, which is 0 rooms away. The second room that Alex can be in is room 5, which is 3 rooms away. The third room that Alex can be in is room 6, which is 4 rooms away. On average, Alex will need (0+3+4)/3, which is 7/3 rooms to get to James.
Following the logic for all 7 rooms, we see that the average distance for each room is:
Room 1: 10/3
Room 2: 7/3
Room 3: 2
Room 4: 5/3
Room 5: 4/3
Room 6: 5/3
Room 7: 8/3
Therefore, the answer is Room 1.
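A straightforward solution sketch that follows the explanation directly (the hallway string below, "GBGGBBG", is reconstructed from the explanation with friends in rooms 2, 5 and 6, since the sample input itself is not shown):

```python
def best_room(n, rooms):
    """rooms is a string of 'G'/'B'; returns the 1-indexed room with the largest average distance."""
    friends = [i for i, c in enumerate(rooms) if c == 'B']
    best, best_avg = 1, -1.0
    for r in range(n):
        avg = sum(abs(r - f) for f in friends) / len(friends)
        if avg >= best_avg:              # ties: keep the higher room number
            best, best_avg = r + 1, avg
    return best

print(best_room(7, "GBGGBBG"))   # 1, matching the explanation above
```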
A graphing calculator is recommended. A particle moves according to a law of motion s = f(t), t > 0, where t is measured in seconds and s in feet.
(If an answer does not exist, enter DNE.)
f(t)=\sin \left(\frac{\pi t}{2}\right)
(a) Find the velocity (in ft/s) at time t.
(b) What is the velocity (in ft/s) after 1 second?
(c) When is the particle at rest? (Use the parameter n as necessary to represent any integer.)
(d) When is the particle moving in the positive direction for 0 <= t <= 6? (Enter your answer using interval notation.)
(e) Draw a diagram to illustrate the motion of the particle and use it to find the total distance (in ft) traveled during the first 6 seconds.
(f) Find the acceleration (in ft/s²) at time t. Find the acceleration (in ft/s²) after 1 second.
(g) Graph the position, velocity, and acceleration functions for 0 <= t <= 6.
(h) When is the particle speeding up? (Enter your answer using interval notation.) When is it slowing down? (Enter your answer using interval notation.)
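A sketch of how parts (a), (b) and (f) can be checked symbolically (the remaining parts follow from the signs of these expressions):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
s = sp.sin(sp.pi * t / 2)        # position in feet, t in seconds

v = sp.diff(s, t)                # velocity: (pi/2)*cos(pi*t/2)
a = sp.diff(v, t)                # acceleration: -(pi**2/4)*sin(pi*t/2)

print(v, v.subs(t, 1))           # part (b): v(1) = 0 ft/s
print(a, a.subs(t, 1))           # a(1) = -pi**2/4 ft/s^2
print(sp.solve(v, t))            # principal rest times; in general t is any odd integer
```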
Dollars increase wealth, but cents don't
Is there a correlation between twenty-dollar bills and money? I think there is. Here's what I did: I took a thousand random middle-class people off the street. I counted how many twenties they had in
their wallet, and how much money they had altogether. Then, I ran a regression.
It turns out that there is a strong relationship between twenties and money. The result:
Coefficient of twenties = $19.82 (p=.0000)
Because the coefficient for twenties was very strongly statistically significant, we can say that every twenty dollar bill increases wealth by about $20.00.
I was so excited by this conclusion that I wondered whether a similar result holds for pocket change. So I added quarters to the regression. The result:
Coefficient of quarters = $0.26 (p=.25)
As you can see, the p-value is only .25, much higher than the .05 we need for statistical significance. Since the coefficient for quarters turns out not to be statistically significant, we conclude
that there is no evidence for any relationship between quarters and money.
This is surprising – in the popular press, there is a widespread theory that quarters are worth $0.25. But, as this study shows, no statistically significant effect was found, using 1,000 people and
the best methodology, so we have to conclude that quarters aren't worth anything.
That sounds ridiculous, doesn't it? We have a regression that shows that quarters are worth about 25 cents, but we treat them as if they're worth zero just because the study wasn't powerful enough to
show statistical significance.
But we're biased here because of the choice of example.
So suppose that instead of adding quarters to the equation, we had added something else, something that we could agree was completely irrelevant. Say, number of siblings.
And, suppose that we got exactly the same results for siblings: for each sibling in our random subject's family, he winds up with a 26-cent increase in pocket money. The significance level is the same 0.25. (The result is certainly not farfetched: one in four times, we'd find an effect of at least this magnitude.)
In this case, if we said "there is no evidence for any relationship between siblings and money," that would be quite acceptable.
What's the difference between quarters and siblings? The difference is that there is a good reason to believe that the quarters result is real, but there is no good reason to believe that the
siblings result is. By "good reason," I don't mean just our prior intuitive beliefs. Rather, I mean that there's a good reason based partly on the results of the study itself.
The study showed us that twenty-dollar bills were highly significant. We therefore concluded that there was a real relationship between twenties and wealth. But we know, for a fact, that 80 quarters
equal one twenty. It is therefore at least reasonable to expect that the effect of 80 quarters should equal the effect of a twenty – or, put another way, that the effect of one quarter should be 1/80
the effect of a twenty. And that was almost exactly what we found.
How does it make sense to accept that twenty dollar bills have an effect, but 1/80ths of twenty-dollar bills do not? It doesn't.
If the convention in these kinds of studies is to treat any non-significant coefficient as zero, I think that's wrong. A reasonable alternative, keeping in mind the "sibling" argument, might be that
if a factor turns out to be statistically insignificant, and there is no other reason to suggest there should be a link, only then can you go ahead and revert to zero. But if there are other reasons
– like if you're analyzing cents, and you know dollars are significant – reverting to zero can't be right.
I argued this same point a little while ago when describing the Massey/Thaler football draft study. That situation was remarkably similar to the pocket money example.
The study was attempting to figure out how much an NFL player's production correlates to draft position. They broke production into something similar to dollars and cents.
"Dollars" were the most obvious attributes of the player's skill. Did he make the NFL? Did he play regularly? Did he make the Pro Bowl?
"Cents" is what was left after that. Given that he played regularly but didn't make the Pro Bowl, was he a very good regular or just an average one? If he made the Pro Bowl, was he a superstar Pro
Bowler, or simply an excellent player?
The study found strong significance for "dollars" – players drafted early were much more likely to play regularly than players drafted late. They were also more likely to make the Pro Bowl, or to
make an NFL roster at all.
But it found less significance for the "cents." The authors did find that players with more "cents" were likely to be better players, but the result was significant only at the 13% level (instead of
the required 5%). From this, they concluded
"there is nothing in our data to suggest that former high draft picks are better players than lower draft picks, beyond what is measured in our broad ["dollar"] performance categories."
And that's got to be just plain wrong. There is not "nothing in the data" to suggest the effect is real. There is actually strong evidence in the data – the significance of the other, broader,
measure of skill. If you assume that dollars matter, you are forced to admit that cents matter.
There's an expression, "absence of evidence is not evidence of absence." This is especially true when you find weak evidence of a strong effect. If you find a correlation that's significant in
football terms, but not significant in statistical terms, your first conclusion should be that your study is insufficiently powerful to be able to tell if what you found is real. Ideally, you would
do another study to check, or add a few years of data to make your study more powerful. But it seems to me that you are NOT entitled to automatically conclude that the observed effect is spurious
based on the significance level alone, especially when it leads to a logical implausibility, such as that dollars matter but cents don't.
In this particular study, I think that if you do accept the coefficient for cents at face value, instead of calling it zero, you reach the completely opposite conclusion than the authors do.
The reason I'm writing about this again is that I've just found another occasion of it, in the study discussed here (full review to come). The authors come up with an estimate of a coefficient for
three separate NBA seasons. The coefficient is (to oversimplify a bit) the amount by which you'd expect a team that's been eliminated from the playoffs to underperform in winning percentage.
Their three results are .220 (significant), .069 (not significant), and .192 (significant).
My conclusion would be to say that the effect of being eliminated in the middle season is much lower than the effect in the other two seasons, and to note whether the difference was statistically significant. I would point out, for the record, that the .069 is not significantly different from zero.
But the study’s authors go further – they say that the .069 should be arbitrarily treated as if it were actually .000:
"Our results show that teams that were eliminated from the playoffs [in the .069 year] were no more likely to lose than noneliminated teams." [emphasis mine.]
That's is simply not true. The results show those teams were .069 more likely to lose than noneliminated teams.
That is: given that our prior understanding of how basketball works, given the structure of the study (which I'll get to in a future post), given that the study shows a strong effect for other years,
and given that the effects are in the same direction, there is certainly enough evidence that the .069 is likely closer to the "real" value than zero is.
In this case, the conclusions of the study -- that the second year is different from the other two -- don't really change. But the conclusion turns out much more punchy when the authors say there is
no effect, instead of a small one.
Decomposition–coordination model of reservoir group and flood storage basin for real-time flood control operation
Multi-objective programming
Objective functions
Constraints of reservoir
Constraints of rivers
Constraints of flood storage basin
Decomposition–coordination model
Third-order hierarchical structure
Components and calculation methods
Large scale system coordination model
Reservoir group subsystem coordination model
Flood storage basin subsystem coordination model
Single reservoir operation
Single flood storage basin operation
Procedures of real-time operation
Boshnakov, Georgi N.
Items where Author is "Boshnakov, Georgi N."
Number of items: 6.
Boshnakov, Georgi N. (2007) Some measures for asymmetry of distributions. Statistics and Probability Letters, 77 (11). pp. 1111-1116. ISSN 0167-7152
Boshnakov, Georgi N. (2005) On the asymptotic properties of multivariate sample autocovariances. Journal of Multivariate Analysis, 92 (1). pp. 42-52. ISSN 0047-259X
Boshnakov, Georgi N. (2004) On Some Concepts of Residuals. Pliska Studia Mathematica Bulgarica, 16. pp. 23-33. ISSN 0204-9805
Boshnakov, Georgi N. (2003) Confidence characteristics of distributions. Statistics and Probability Letters, 63 (4). pp. 353-360. ISSN 0167-7152
Boshnakov, Georgi N. (2002) Multi-companion matrices. Linear Algebra and its Applications, 354 (1-3). p. 53. ISSN 0024-3795
MIMS Preprint
Subba Rao, Tata and Das, Sourav and Boshnakov, Georgi N. (2012) A Frequency domain approach for the estimation of parameters of spatio-temporal stationary random processes. [MIMS Preprint]
Science:Math Exam Resources/Courses/MATH101/April 2013/Question 01 (c)
MATH101 April 2013
Question 01 (c)
Short-Answer Questions. Question 1-3 are short-answer questions. Put your answer in the box provided. Simplify your answer as much as possible. Full Marks will be awarded for a correct answer placed
in the box. Show your work, for part marks.
${\displaystyle \int _{1}^{2}{\frac {dx}{x+x^{2}}}}$
Make sure you understand the problem fully: What is the question asking you to do? Are there specific conditions or constraints that you should take note of? How will you know if your answer is
correct from your work only? Can you rephrase the question in your own words in a way that makes sense to you?
If you are stuck, check the hints below. Read the first one and consider it for a while. Does it give you a new idea on how to approach the problem? If so, try it! If after a while you are still
stuck, go for the next hint.
Hint 1
The integrand is a fraction of polynomials. What integration technique is ideal for this situation?
Hint 2
Try partial fractions!
Checking a solution serves two purposes: helping you if, after having used all the hints, you still are stuck on the problem; or if you have solved the problem and would like to check your work.
• If you are stuck on a problem: Read the solution slowly and as soon as you feel you could finish the problem on your own, hide it and work on the problem. Come back later to the solution if you
are stuck or if you want to check your work.
• If you want to check your work: Don't only focus on the answer, problems are mostly marked for the work you do, make sure you understand all the steps that were required to complete the problem
and see if you made mistakes or forgot some aspects. Your goal is to check that your mental process was correct, not only the result.
This is a rational function, so we try partial fractions. The integrand's denominator can be factored, then we apply partial fraction decomposition giving
{\displaystyle {\begin{aligned}{\frac {1}{x+x^{2}}}&={\frac {1}{x(1+x)}}\\&={\frac {A}{x}}+{\frac {B}{1+x}}\end{aligned}}}
Cross multiplying,
{\displaystyle \displaystyle {\begin{aligned}1&=A(1+x)+Bx\end{aligned}}}
We can solve this several ways.
(1) Setting ${\displaystyle \displaystyle x=0}$ we get ${\displaystyle \displaystyle 1=A}$, and setting ${\displaystyle \displaystyle x=-1}$ we get ${\displaystyle \displaystyle 1=-B}$, so ${\
displaystyle \displaystyle B=-1}$.
(2) Comparing coefficients of the constant term, we get ${\displaystyle \displaystyle 1=A}$, and then comparing coefficients of the ${\displaystyle \displaystyle x}$ term, we get ${\displaystyle \
displaystyle 0=A+B=1+B}$, so ${\displaystyle \displaystyle B=-1}$.
(3) We get ${\displaystyle \displaystyle A=1}$ and ${\displaystyle \displaystyle B=-1}$ by inspection.
Now we solve the integral:
{\displaystyle \displaystyle {\begin{aligned}\int _{1}^{2}{\frac {dx}{x+x^{2}}}&=\int _{1}^{2}{\frac {dx}{x(1+x)}}\\&=\int _{1}^{2}\left({\frac {1}{x}}+{\frac {-1}{1+x}}\right)\,dx\\&=\left(\ln |
x|-\ln |1+x|\right){\big |}_{1}^{2}\\&=(\ln |2|-\ln |1+2|)-(\ln |1|-\ln |1+1|)\\&=2\ln 2-\ln 3\end{aligned}}}
The answer ${\displaystyle \ln {\frac {4}{3}}}$ is equivalent.
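As a quick numerical sanity check (my own addition, not part of the original solution), a simple midpoint-rule approximation in Python reproduces the value $2\ln 2-\ln 3=\ln\frac{4}{3}\approx 0.2877$:

import math

# Midpoint rule for the integral of 1/(x + x^2) from 1 to 2.
n = 100000
h = 1.0 / n
approx = h * sum(1.0 / (x + x * x) for x in (1.0 + (k + 0.5) * h for k in range(n)))

print(approx)                          # ~0.287682
print(2 * math.log(2) - math.log(3))   # 0.287682..., i.e. ln(4/3)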
MER QGH flag, MER QGQ flag, MER QGS flag, MER RT flag, MER Tag Partial fractions, Pages using DynamicPageList3 parser function, Pages using DynamicPageList3 parser tag | {"url":"https://wiki.ubc.ca/Science:Math_Exam_Resources/Courses/MATH101/April_2013/Question_01_(c)","timestamp":"2024-11-05T00:48:05Z","content_type":"text/html","content_length":"65836","record_id":"<urn:uuid:ad8ae5ee-0c3f-4181-bbc8-d2caf5a4b420>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00375.warc.gz"} |
How to Create A Tensor In PyTorch?
To create a tensor in PyTorch, you can follow the steps below:
1. Import the PyTorch library: Begin by importing the PyTorch library using the import statement: import torch
2. Create a tensor from a list or array: You can create a tensor by passing a Python list or an array to the torch.tensor() function. PyTorch will automatically infer the data type and shape of the tensor based on the inputs.

my_list = [1, 2, 3, 4, 5]
my_tensor = torch.tensor(my_list)

3. Create a tensor of zeros or ones: If you want to create a tensor of zeros or ones, you can use the torch.zeros() or torch.ones() functions. Specify the desired shape of the tensor as an argument.

zeros_tensor = torch.zeros((3, 4))  # Creates a 3x4 tensor of zeros
ones_tensor = torch.ones((2, 2))    # Creates a 2x2 tensor of ones

4. Initialize a tensor with specific values: If you need to create a tensor with specific values, you can use functions like torch.full() or torch.tensor(). Specify the desired shape and value(s) as arguments.

constant_tensor = torch.full((2, 3), 5)  # Creates a 2x3 tensor with all elements set to 5
range_tensor = torch.arange(0, 10, 2)    # Creates a tensor with values 0, 2, 4, 6, 8

5. Create a random tensor: PyTorch provides functions to generate random tensors, such as torch.randn() and torch.rand(). Specify the shape of the tensor as an argument.

random_tensor = torch.randn((3, 3))  # Creates a 3x3 tensor with values drawn from a standard normal distribution (mean=0, std=1)
uniform_tensor = torch.rand((2, 2))  # Creates a 2x2 tensor with values drawn from a uniform distribution [0, 1)
These are some basic ways to create tensors in PyTorch. Tensors serve as fundamental data structures for numerical computations in PyTorch and are widely used in deep learning applications.
What is a tensor in PyTorch?
In PyTorch, a tensor is a multi-dimensional array and a fundamental data structure used to store and manipulate data. Similar to a numpy array, a tensor can have any number of dimensions (also known
as axes or rank). However, unlike numpy arrays, tensors in PyTorch can be stored and operated upon on GPUs to accelerate computation.
Tensors can be used to represent different types of data, such as images, audio, text, or even intermediate values in a neural network. They are typically used as input and output data containers for
deep learning models and other numerical computations.
PyTorch provides various functions and operations for creating, indexing, manipulating, and performing mathematical operations on tensors. These operations can be used to build complex neural network
architectures, compute gradients during backpropagation, and optimize model parameters using gradient descent algorithms.
How to create a tensor from a NumPy array in PyTorch?
To create a tensor from a NumPy array in PyTorch, you can use the torch.from_numpy() function. Here's an example:
import numpy as np
import torch

# Create a NumPy array
numpy_array = np.array([[1, 2, 3], [4, 5, 6]])

# Convert the NumPy array to a PyTorch tensor
tensor = torch.from_numpy(numpy_array)

print(tensor)

tensor([[1, 2, 3],
        [4, 5, 6]])
By using torch.from_numpy(), you can create a tensor directly from the NumPy array, and the resulting tensor will share the same underlying memory with the NumPy array. This means that changes to one
will affect the other. Note that the data type of the NumPy array will be preserved in the tensor.
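A small illustration of that sharing (my own example, not from the original answer): an in-place change to either object is visible through the other.

import numpy as np
import torch

numpy_array = np.array([[1, 2, 3], [4, 5, 6]])
tensor = torch.from_numpy(numpy_array)

numpy_array[0, 0] = 99       # change the NumPy array in place ...
print(tensor[0, 0])          # ... and the tensor sees it: tensor(99)

tensor[1, 2] = -1            # change the tensor in place ...
print(numpy_array[1, 2])     # ... and the array sees it: -1

# Use .clone() (or torch.tensor(numpy_array)) when you need an independent copy.
independent = torch.from_numpy(numpy_array).clone()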
What is a tensor stride in PyTorch?
In PyTorch, tensor stride refers to the number of elements that need to be skipped in memory storage to move to the next element along a specific dimension.
A tensor is a multi-dimensional array, and every element in the tensor is stored in contiguous memory. The stride defines the number of memory locations to jump ahead in order to move to the next
element along a specific dimension.
For example, consider a 4-dimensional tensor with shape [A, B, C, D]. The strides of this tensor might be [B*C*D, C*D, D, 1]. If you want to access the next element along the second dimension, you would
need to jump C*D memory locations in order to reach the next element.
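As a concrete illustration (my own example, matching the [A, B, C, D] description above), PyTorch exposes these values through Tensor.stride():

import torch

x = torch.arange(2 * 3 * 4 * 5).reshape(2, 3, 4, 5)   # shape [A, B, C, D] = [2, 3, 4, 5]
print(x.stride())                     # (60, 20, 5, 1), i.e. (B*C*D, C*D, D, 1)

# Moving one step along the second dimension skips C*D = 20 storage elements:
flat = x.flatten()
assert flat[0 * 60 + 1 * 20 + 0 * 5 + 0] == x[0, 1, 0, 0]

# A transpose permutes the strides without moving any data:
print(x.transpose(0, 1).stride())     # (20, 60, 5, 1)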
Strides are important because they allow efficient indexing and manipulation of tensors without the need to physically move the data. | {"url":"https://stlplaces.com/blog/how-to-create-a-tensor-in-pytorch","timestamp":"2024-11-15T00:58:36Z","content_type":"text/html","content_length":"308691","record_id":"<urn:uuid:cbe8de84-d081-4619-8486-c36797b4b169>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00195.warc.gz"} |
Coding Interview Question: Big Int Modulus - Byte by Byte
Coding Interview Question: Big Int Modulus
Given a list of bytes a, each representing one byte of a larger integer (ie. {0x12, 0x34, 0x56, 0x78} represents the integer 0x12345678), and an integer b, find a % b.
mod({0x03, 0xED}, 10) = 5
Once you think that you’ve solved the problem, click below to see the solution.
As always, remember that practicing coding interview questions is as much about how you practice as the question itself. Make sure that you give the question a solid go before skipping to the
solution. Ideally if you have time, write out the solution first by hand and then only type it into your computer to verify your work once you've verified it manually. To learn more about how to
practice, check out this blog post.
How was that problem? You can check out the solution in the video below.
Here is the source code for the solution shown in the video (Github):
// Compute the mod. We use a char array as it is equivalent to an array of
// unsigned bytes
public static int mod(char[] a, int b) {
    // If input is null, let's just return 0
    if (a == null) return 0;
    int m = 0;
    // Start with modding the most significant byte, then repeatedly shift
    // left. This way our value never gets larger than an int
    for (int i = 0; i < a.length; i++) {
        m <<= 8;
        m += (a[i] & 0xFF);
        m %= b;
    }
    return m;
}
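The loop works because taking the remainder at every step does not change the final result: after each iteration, m equals the value of the bytes processed so far modulo b, since (256·m + byte) mod b = (256·(m mod b) + byte) mod b. A quick cross-check of the same idea in Python (my own sketch, not part of the original post):

def mod_bytes(data, b):
    # Equivalent of the Java loop above: fold in one byte at a time,
    # reducing modulo b so the running value stays small.
    m = 0
    for byte in data:
        m = (m * 256 + byte) % b
    return m

assert mod_bytes([0x03, 0xED], 10) == 5                        # 0x03ED = 1005, and 1005 % 10 == 5
assert mod_bytes([0x12, 0x34, 0x56, 0x78], 97) == 0x12345678 % 97
print("ok")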
Did you get the right answer to this coding interview question? Please share your thoughts in the comments below.
It happens time and again. People fail coding interviews because they don’t know what to do when stuck on a problem. Developing a clear plan of attack helps you to succeed at any whiteboard coding | {"url":"https://www.byte-by-byte.com/bigintmod/","timestamp":"2024-11-04T17:10:39Z","content_type":"text/html","content_length":"112688","record_id":"<urn:uuid:8e8ff897-a242-43f7-a03b-a56fa3d778d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00188.warc.gz"} |
Re: [BLD] Signatures (BLD 7/20)
> Michael Kifer wrote:
> >
> >>- In 2.1.3, the remark that f() and f can be interpreted differently and
> >>dialect may introduce axioms to make them equal should be clearly marked
> >>as irrelevant to BLD (where symbol f has either signature f0 or i, but
> >>not both).
> >
> > why is it irrelevant?
> I understood that, in BLD, the same symbol f can have either signature
> i{} or f0{()->i}, but not both. And thus, in BLD, whether f = f() or not
> is irrelevant.
We wanted to make it clear that f=f() is not a tautology.
It is not a matter of signatures. Even if f has the signature ...{i ()->i},
i.e., both f and f() are well-formed, neither f != f() nor f=f() is
derivable. However, in your rule set (if you allow equality) you can add
an equality abc=abc(). Then in your ruleset abc=abc() will be derivable but
cde=cde() will still not be derivable (unless you also add cde=cde()).
This is what that remark says.
> My point was only about separating more clearly what is BLD and what is
> the more general framework in which BLD is defined.
> > The problem is that if we do not use signatures then we have to split the
> > set of symbols into subsets.
> You mean, subsets like "function" (or even "x-ary function"), predicate,
> etc? If yes, ok, I get your point.
Yes, function (for different arities), predicate (for all arities), individual.
We first tried to use those categories of symbols, but then realized that
this has problems with extensibility.
Received on Tuesday, 24 July 2007 12:54:25 UTC | {"url":"https://lists.w3.org/Archives/Public/public-rif-wg/2007Jul/0148.html","timestamp":"2024-11-09T21:21:38Z","content_type":"application/xhtml+xml","content_length":"9336","record_id":"<urn:uuid:0aecbca4-bbc2-46be-abfe-918b66b72eab>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00164.warc.gz"} |
Einstein metric formalism without Schwarzschild singularities
The intrinsic metric symmetries of pseudo-Riemannian space-time universally reinforce strict spatial flatness in the GR metric formalism. The non-linear time element of the four-interval depends on
the particle velocity, or spatial displacement, and differs from the proper-time rate of a local observer. The passive/active energy-charge for 1686, 1913, and 1915 gravitational laws maintains the
universal free fall and the Principle of Equivalence for flat material space with the smooth radial metric. The observed planetary perihelion precession, the radar echo delay, and gravitational light
bending can be explained by this metric solution quantitatively without departure from Euclidean spatial geometry. Non-Newtonian flat-space precessions are expected for non-point orbiting gyroscopes
exclusively due to the GR inhomogenious time in the Earth's radial field. The self-contained Einstein relativity admits further geometrization of the r^−4 radial particle for energy-to-energy
gravitation in non-empty space without references to Newton's mass-to-mass attraction. The post-Newtonian inharmonic potential for distributed particle densities is also the exact solution to
Maxwell's equations with the r^−4 electric charge density for the (astro)electron.
Keywords: Non-empty space, 3D flatness, Radial energy-charges, Geometrization of continuous particles, Nonlocal energy-to-energy relativity
You have no rights to post comments | {"url":"http://chronos.msu.ru/en/component/zoo/avtorskij-ukazatel/einstein-metric-formalism-without-schwarzschild-singularities","timestamp":"2024-11-14T06:59:35Z","content_type":"application/xhtml+xml","content_length":"12995","record_id":"<urn:uuid:56f0a511-f23f-45d5-8a38-8804a9d6ac2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00121.warc.gz"} |
Rigor vs. GitHub Descent
Is there a principled way to optimize in machine learning?
This is a live blog of Lecture 4 (part 1 of 2) of my graduate machine learning class “Patterns, Predictions, and Actions.” A Table of Contents for this series is here.
Optimization has long been a cornerstone of pattern recognition. Bill Highleyman, the unsung machine learning pioneer, proposed gradient descent to minimize errors on a training set in the late
1950s. In the 1960s, researchers proposed dozens of methods to minimize empirical risk. But even back then, no one could agree on the right problem to solve. Look at this table from Duda and Hart
This table provides ten different methods for fitting a linear function to data. Here, the weights of the linear rule are the vector a. b[i] denotes the label of example i. And y[i] is the vector of
features for example i. The notation (x)_+ stands for the maximum of x and 0.
What are these methods in modern language? The first five would be considered variants of the Stochastic Gradient Descent method. Fixed Increment is the Perceptron algorithm. The Variable Increment
method is SGD applied to the Hinge Loss. Widrow-Hoff is SGD on a least-squares loss. Stochastic Approximation is SGD when you assume you have a data-generating distribution rather than a finite data
set. In 2023, everyone in ML theory knows these algorithms, but there was a period between 2007 and 2015 where each of these was reinvented. For example, Pegasos, a 2007 ICML paper that was an
honorable mention for the 2017 test-of-time award, is the same as the Variable Increment method. None of the papers in the 2007-2015 window cited the work from the 1960s.
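To make the correspondence concrete, here is a minimal sketch (my own, using the modern ±1-label convention rather than the table's exact normalization) of how closely these update rules resemble each other:

import numpy as np

def sgd_step(a, y_i, b_i, lr=1.0):
    # One stochastic update on a single example; y_i is the feature vector,
    # b_i the +/-1 label, following the notation of the table.
    margin = b_i * (a @ y_i)

    # "Fixed increment" (perceptron): step only on a misclassified example.
    a_perceptron = a + lr * b_i * y_i if margin <= 0 else a.copy()

    # "Variable increment" = SGD on the hinge loss max(0, 1 - margin):
    # step whenever the example is inside the margin.
    a_hinge = a + lr * b_i * y_i if margin < 1 else a.copy()

    # Widrow-Hoff / LMS = SGD on the squared loss 0.5 * (a @ y_i - b_i) ** 2.
    a_lms = a - lr * ((a @ y_i) - b_i) * y_i

    return a_perceptron, a_hinge, a_lms

Up to the step-size schedule and the margin constant, the first two differ only in when they decide to move.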
The next entry in the table is my favorite classification method, the pseudoinverse. This simply solves a least-squares problem to find a classifier. And the last entry is also interesting.
Mangasarian showed you could minimize the hinge loss using linear programming in 1965. The first linear programming method minimizes the worst-case hinge loss. The second minimizes the average loss.
Minimizing the average loss using linear programming would be called The Support Vector Machine in the 1990s. You might argue that the SVM is different because it adds the squared norm of the weights
to the objective function. But this small modification just allows for an extra researcher degree of freedom. It doesn’t provide profound gains in theory or practice. And so perhaps it might not
deserve the hype that accompanied the SVM. Even when I met him in 2008, Mangasarian was still annoyed that his work had been ignored during the SVM boom of the 1990s.
I only bring up our selective short-term memory here to point out that we have been obsessed with optimization in ML since the beginning, but we keep forgetting the past because no one can ever agree
on the right problem formulation. All of these algorithms more or less do the same thing, and yet people write gazillions of papers on their differences. Duda and Hart noticed this too. First, the
remarks column in this table is hilarious, as it shows that even they were unsure when to recommend one of these methods versus the other. But their chapter notes begin with this banger quote:
“Because linear discriminant functions are so amenable to analysis, far more papers have been written about them than the subject deserves.”
Some things never change!
Much to Duda and Hart’s chagrin, I’m sure, the most cited paper in all of optimization would end up being a 2014 paper that proves an incorrect theorem with an unparsable convergence bound for linear
discriminant functions.
Why is optimization in machine learning so weird? Why do we keep reinventing variants of the same algorithm and trying to prove that these algorithms are better than each other with abstract
Optimization is ideal when you really want to know what minimizes some cost subject to some constraints. There’s a promise that as long as you state the problem cleanly, the solver doesn’t matter.
You get an answer, which answer is fine. People can write custom solvers to speed up certain problems, but at the end of the day, if the solver works correctly, the problem poser doesn’t care how you
got the solution.
As a simple example, when you ask for the shortest path from A to B, you are usually actually interested in the path that is the shortest. And optimization algorithms will happily oblige.
In machine learning, by contrast, optimization will give you a function that minimizes average errors. But is that what you want? Not really! You want the point that minimizes the error on data you
haven’t seen. But if we haven’t seen the data, how do we know we did a good job? We rely on benchmarks and hope for the best.
Let’s say you run the empirical risk minimization algorithm with two different solvers. Suppose Solver 2 has a bug and returns the wrong answer. You evaluate the prediction functions from Solver 1
and Solver 2 on your testing data and find Solver 2 has lower error on the test data. Which one do you submit to Kaggle? Don’t lie. We all know the answer is the one from Solver 2. In fact, we’d
probably look at the code for Solver 2, find the bug, and then write a paper for ICLR about how this is not a bug but an amazing new regularization method.
Even this terrible example is too kind to what we really do with optimization in machine learning. We’ll run dozens of possible solvers with many different feature representations. We’ll take the
solution of the SVM and the solution of the Perceptron. We’ll take whatever XG Boost spits out. Maybe we’ll try a transformer too. People tell me Transformers are awesome. We could even build an
ensemble model out of all of the above. Anything that could possibly be fit to the training set is evaluated on the test set. And whatever gets the best error on the test set is what we write our
next paper about.
Our machine learning algorithm development is what Stephen Boyd calls “graduate student descent.” Given the industrial interest, I think these days it’s better designated “GitHub descent.” Find a
model on the internet, tweak a parameter or two, see if it gets better test error. If it does, that’s a paper. We’re most definitely optimizing, as we really care about these competitions on dataset
benchmarks. But our algorithm is some sort of massively parallel genetic algorithm, not a clean, rigorous, and beautiful convex optimization method.
This is why I have a hard time spending too much time on optimization in machine learning courses. We don’t need to know what the Lagrange multipliers in the Support Vector Machine mean to partake in
the collective genetic algorithm. It seems that having candidate models well-fit to training data is important. Beyond that, how we choose models is up to the vagaries of the benchmarks before us.
I wish it were otherwise, but I don’t know if having “principled methods” helps us in practice at all.
"We’re most definitely optimizing, as we really care about these competitions on dataset benchmarks. But our algorithm is some sort of massively parallel genetic algorithm, not a clean, rigorous,
and beautiful convex optimization method."
See? This illustrates my point about Darwin being the synthesis of Hume (messy, collective lurching about in the space of algorithms, objectives, etc.) and Kant (individual engineers finding a
model on the internet and tweaking a parameter or two).
What's maybe ironic about that list of methods from Duda and Hart is that although it's a long list, it doesn't appear to include a standard statistics-style logistic regression fitting
procedure (i.e. maximum-likelihood a.k.a. cross entropy, I believe typically done in batch mode). So you could easily add that to the list.
20 more comments... | {"url":"https://www.argmin.net/p/rigor-vs-github-descent","timestamp":"2024-11-03T15:21:02Z","content_type":"text/html","content_length":"183296","record_id":"<urn:uuid:361e0394-d829-452b-93d0-5d8c90c3e089>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00530.warc.gz"} |
How to study for maths
01 Feb How to study for maths
Posted at 06:47h
Maths Tuition
Maths, often the most unpopular subject among students, is also the most avoided. Many students fear Maths because of unresolved doubts and unclear concepts. Many formulas and
topics in Maths may seem hard, but a great Maths teacher will always try to show students the simplest way around them.
Maths is a practical subject in which scoring marks is comparatively easy. The subject is straightforward and demands straightforward answers from students. If you have the correct answer and a complete
solution, you can easily get high marks, and even full marks!
Maths tuition classes in Miracle Learning Centre are easy to understand and help you to improve your grades. If you do not understand Maths, you should definitely attend the Maths tuition class
at Miracle Learning Centre.
1. Take slow but steady steps. Always study early instead of cramming everything on the day or night before a test or an exam. You should always start studying a little ahead of the test or exam.
Revise a little each day.
2. Sufficient sleep. Having enough rest will help you to concentrate better when you take the paper instead of feeling restless.
3. Make a list of important concepts and formulas. Review your notes, make a concise list of the important concepts and formulas, and make sure you know them and how to apply them in questions and exams.
4. Redo homework questions. Do not just read through them. Redoing them will help you to remember past concepts which you might have forgotten.
5. While doing a lot of problem sums, look out for similarities in the way the questions are asked and how the solutions are written out. This will make it easier if you come across similar looking
questions in tests or exams.
6. Do a practice test or exam paper. Give yourself a time limit and don’t use your notes. Simulating an exam environment will help to see if you are able to solve questions under such circumstances.
Miracle Learning Centre would like to bring you more articles on Maths tuition concepts. Do come to Miracle Learning Centre for more Maths tuition lessons.
Unlike other subjects, in the case of Maths, you need to think clearly, learn with dedication and understand properly. This is not a subject where you can just memorize a number of formulas and
techniques and appear for an exam. In our classes of secondary maths tuition, primary mathematics tuition, JC maths tuition offered for the students of Singapore, you will not only learn Maths
solutions, but you will be taught how to attempt a sum, how to manage time, how to score more with fewer efforts, how to prepare for an exam and how to answer each question tactfully. With all of
these, we can guarantee you that there will be no more fear of Maths. | {"url":"https://miraclelearningcentre.com/how-to-study-for-maths/","timestamp":"2024-11-14T01:23:17Z","content_type":"text/html","content_length":"201093","record_id":"<urn:uuid:9b989046-4158-4c52-a8b2-96fb1d7a44f8>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00587.warc.gz"} |
Compute conic-sector index of linear system
RX = getSectorIndex(H,Q) computes the relative index RX for the linear system H and the conic sector specified by Q. When RX < 1, all output trajectories y(t) = Hu(t) lie in the sector defined by:
$\int_0^T y(t)^T Q\, y(t)\, dt < 0,$
for all T ≥ 0.
getSectorIndex can also check whether all I/O trajectories {u(t),y(t)} of a linear system G lie in the sector defined by:
$\int_0^T \begin{pmatrix} y(t) \\ u(t) \end{pmatrix}^T Q \begin{pmatrix} y(t) \\ u(t) \end{pmatrix} dt < 0,$
for all T ≥ 0. To do so, use getSectorIndex with H = [G;I], where I = eye(nu), and nu is the number of inputs of G.
For more information about sector bounds and the relative index, see About Sector Bounds and Sector Indices.
RX = getSectorIndex(H,Q,tol) computes the index with relative accuracy specified by tol.
RX = getSectorIndex(H,Q,tol,fband) computes the sector index by restricting the inequalities that define the index to a specified frequency interval. This syntax is available only when Q has as
many negative eigenvalues as there are inputs in H.
[RX,FX] = getSectorIndex(___) also returns the frequency FX at which the index value RX is achieved. FX is set to NaN when the number of negative eigenvalues in Q differs from the number of inputs in
H. You can use this syntax with any of the previous combinations of input arguments.
[RX,FX,W1,W2,Z] = getSectorIndex(___) also returns the decomposition of Q into its positive and negative parts, as well as the spectral factor Z when Q is dynamic. When Q is a matrix (constant sector
bounds), Z = 1. You can use this syntax with any of the previous combinations of input arguments.
DX = getSectorIndex(H,Q,dQ) computes the index in the direction specified by the matrix dQ. If DX > 0, then the output trajectories of H fit in the conic sector specified by Q. For more information
about the directional index, see About Sector Bounds and Sector Indices.
The directional index is not available if H is a frequency-response data (frd) model.
DX = getSectorIndex(H,Q,dQ,tol) computes the index with relative accuracy specified by tol.
Check Sector Bounds
Test whether, on average, the I/O trajectories of $G(s) = (s+2)/(s+1)$ belong within the sector defined below.
In U/Y space, this sector is the region between the lines $y = 0.1u$ and $y = 10u$.
The Q matrix corresponding to this sector is given by:
$Q = \begin{bmatrix} 1 & -(a+b)/2 \\ -(a+b)/2 & ab \end{bmatrix}; \qquad a = 0.1,\ b = 10.$
A trajectory $y(t) = G\,u(t)$ lies within the sector S when, for all T > 0,
$0.1 \int_0^T u(t)^2\, dt \;<\; \int_0^T u(t)\, y(t)\, dt \;<\; 10 \int_0^T u(t)^2\, dt.$
To check whether trajectories of G satisfy the sector bound, represented by Q, compute the R-index for H = [G;1].
G = tf([1 2],[1 1]);
a = 0.1; b = 10;
Q = [1 -(a+b)/2 ; -(a+b)/2 a*b];
R = getSectorIndex([G;1],Q)
This resulting R is less than 1, indicating that the trajectories fit within the sector. The value of R tells you how tightly the trajectories fit in the sector. This value, R = 0.41, means that the
trajectories would fit in a narrower sector with a base 1/0.41 = 2.4 times smaller.
R-Index in Frequency Band for System with Complex Coefficients
For systems with complex coefficients, getSectorIndex can return indices at a negative or a positive frequency depending on the fband you specify.
Load a state-space model with complex coefficients and complex sector matrix.
Compute the R-index and its frequency with a relative accuracy of 0.0001%. Also, specify fband = [1,10] to compute the index in the frequency interval [–10,–1] ∪ [1,10].
[R,FX] = getSectorIndex(sys,Q,1e-6,[1,10])
In this interval, sys achieves an index of 1.5811 at a negative frequency value of –7.2320 rad/s. Use sectorplot to plot the indices in this range.
opt = sectorplotoptions;
opt.FreqScale = 'Linear';
opt.IndexScale = 'Linear';
w = linspace(-10,10,1000);
Now compute the index in the frequency interval [–5,–1] ∪ [1,5]. To do so, specify fband = [1,5].
[r,f] = getSectorIndex(sys,Q,1e-6,[1,5])
In this interval, sys achieves an index of 1.4630 at a positive frequency value of 2.2499 rad/s. Plot the indices in this range to confirm the result.
w = linspace(-5,5,500);
Input Arguments
H — Model to analyze
dynamic system model | model array
Model to analyze against sector bounds, specified as a dynamic system model such as a tf, ss, or genss model. H can be continuous or discrete. If H is a generalized model with tunable or uncertain
blocks, getSectorIndex analyzes the current, nominal value of H.
To analyze whether all I/O trajectories {u(t),y(t)} of a linear system G lie in a particular sector, use H = [G;I].
If H is a model array, then getSectorIndex returns the sector index as an array of the same size, where:
index(k) = getSectorIndex(H(:,:,k),___)
Here, index is either RX, or DX, depending on which input arguments you use.
tol — Relative accuracy
0.01 (default) | positive real value
Relative accuracy for the calculated sector index. By default, the tolerance is 1%, meaning that the returned index is within 1% of the actual index.
fband — Frequency interval
1-by-2 array
Frequency interval for calculating the sector index, specified as an array of the form [fmin,fmax] with 0 ≤ fmin < fmax. When you provide fband, getSectorIndex restricts to the specified frequency
interval the inequalities that define the index.
For models with complex coefficients, getSectorIndex computes the index in the range [–fmax,–fmin]∪[fmin,fmax]. As a result, the function can return indices at a negative frequency.
Specify frequencies in units of rad/TimeUnit, where TimeUnit is the TimeUnit property of the dynamic system model H.
dQ — Direction
Direction in which to compute directional sector index, specified as a nonnegative definite matrix. The matrix dQ is a symmetric square matrix that is ny on a side, where ny is the number of outputs
of H.
Output Arguments
RX — Relative sector index
scalar | array
Relative index of the system H for the sector specified by Q, returned as a scalar value, or an array if H is an array. If RX < 1, then the output trajectories of H fit inside the cone of Q.
The value of RX provides a measure of how tightly the output trajectories of H fit inside the cone. Let the following be an orthogonal decomposition of the symmetric matrix Q into its positive and
negative parts.
$Q = W_1 W_1^T - W_2 W_2^T, \qquad W_1^T W_2 = 0.$
(Such a decomposition is readily obtained from the Schur decomposition of Q.) Then, RX is the smallest R that satisfies:
$\int_0^T y(t)^T \left( W_1 W_1^T - R^2 W_2 W_2^T \right) y(t)\, dt < 0,$
for all T ≥ 0. Varying R is equivalent to adjusting the slant angle of the cone specified by Q until the cone fits tightly around the output trajectories of H. The cone base-to-height ratio is
proportional to R.
For more information about interpretations of the relative index, see About Sector Bounds and Sector Indices.
FX — Frequency at which index is achieved
scalar | array
Frequency at which the index RX is achieved, returned as a scalar, or an array if H is an array. In general, the index varies with frequency (see sectorplot). The returned value is the largest value
over all frequencies. FX is the frequency at which this value occurs, returned in units of rad/TimeUnit, where TimeUnit is the TimeUnit property of H.
FX can be negative for systems with complex coefficients.
W1, W2 — Positive and negative factors of Q
Positive and negative factors of Q, returned as matrices. For a constant Q, W1 and W2 satisfy:
$Q = W_1 W_1^T - W_2 W_2^T, \qquad W_1^T W_2 = 0.$
Z — Bistable model
state-space model | 1
Bistable model in the factorization of Q, returned as:
• If Q is a constant matrix, Z = 1.
• If Q is frequency-dependent, then Z is a state-space (ss) model such that:
$Q(j\omega) = Z(j\omega)^H \left( W_1 W_1^T - W_2 W_2^T \right) Z(j\omega).$
DX — Directional sector index
scalar | array
Directional sector index of the system H for the sector specified by Q in the direction dQ, returned as a scalar value, or an array if H is an array. The directional index is the largest τ for which
$\int_0^T y(t)^T \left( Q + \tau\, dQ \right) y(t)\, dt < 0,$
for all T ≥ 0.
Version History
Introduced in R2016a | {"url":"https://in.mathworks.com/help/control/ref/dynamicsystem.getsectorindex.html","timestamp":"2024-11-09T04:37:15Z","content_type":"text/html","content_length":"120864","record_id":"<urn:uuid:257c944b-d6ac-483f-b03b-0aa03c49b25f>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00386.warc.gz"} |
DMRG Calculation for Large Systems
Hello there,
To preface, I would like to mention that I am very new to TeNPy, so I am sure there might be a simple fix to my problem. I am trying to simulate a 2D square-lattice Heisenberg model to calculate its ground state and ground-state energy. I had initially set up some code using Scipy sparse matrices to do this calculation, but my computer's memory could not handle anything beyond a 4x4 lattice. I created the following model to simulate the system using TeNPy. This model agreed with my Scipy results up to a 4x4 lattice, and DMRG worked up to a 7x7 lattice. When I moved to
an 8x8 lattice, however, DMRG took a very long time to run.
Python:
from tenpy.networks.site import SpinHalfSite
from tenpy.networks.mps import MPS
from tenpy.models.lattice import Square
from tenpy.models.model import CouplingMPOModel
from tenpy.algorithms import dmrg

bc_xx = 'open'; bc_yy = 'open'; bc_MPSS = 'finite'; # boundary conditions
# coupling_axis = 'Sz' # adds exchange interaction along Sz axis
lx = 8
ly= 8
tot_sites = lx*ly; # lattice
# Create antiferromagnetic initial state
product_state = []
for i in range(ly):
    for j in range(lx):
        if (i + j) % 2 == 0:
            product_state.append('up')    # alternate 'up'/'down' for the Neel pattern
        else:
            product_state.append('down')

jz = 1.0
h = 1.0
jx = 1.0
class MyModel(CouplingMPOModel):
    def init_sites(self, model_params):
        conserve = model_params.get('conserve', None)
        site = SpinHalfSite(conserve=conserve)
        return site

    def init_lattice(self, model_params):
        Lx = model_params.get('Lx', 2.)
        Ly = model_params.get('Ly', 2.)
        bc_x = model_params.get('bc_x', bc_xx)
        bc_y = model_params.get('bc_y', bc_yy)
        bc_MPS = model_params.get('bc_MPS', bc_MPSS)
        lattice = Square(Lx, Ly, site=self.init_sites(model_params), bc=[bc_x, bc_y], bc_MPS=bc_MPS)
        return lattice

    def init_terms(self, model_params):
        Jz = model_params.get('Jz', 1.0)
        Jx = model_params.get('Jx', 1.0)
        Hz = model_params.get('Hz', 1.0)
        for u in range(tot_sites):
            self.add_onsite_term(Hz, u, 'Sz')
        for u1, u2, dx in self.lat.pairs['nearest_neighbors']:
            self.add_coupling(Jz, u1, 'Sz', u2, 'Sz', dx)
            self.add_coupling(Jx/2, u1, 'Sp', u2, 'Sm', dx)
            self.add_coupling(Jx/2, u1, 'Sm', u2, 'Sp', dx)
dmrg_params = {
    'mixer': True,
    'trunc_params': {
        'chi_max': 100,
        'svd_min': 1.e-10,
    },
    'max_E_err': 1.e-3,
    'verbose': True,
}

model_params = {
    'conserve': None,
    'Lx': lx,
    'Ly': ly,
    'Jz': jz,
    'Jx': jx,
    'Hz': h,
    'bc_MPS': bc_MPSS,
    'bc_x': bc_xx,
    'bc_y': bc_yy,
}
M = MyModel(model_params)
lattice = M.init_lattice(model_params)
psi = MPS.from_product_state(lattice.mps_sites(), product_state, bc=lattice.bc_MPS)
eng = dmrg.TwoSiteDMRGEngine(psi, M, dmrg_params)
info = eng.run() # the main work; modifies psi in place
E = info[0]
psi = info[1]
I tried to change the DMRG parameters to accommodate the larger system, but it still took 30+ minutes (after which I stopped the simulation). I thought I could overcome this by calculating the
ground state of a 4x4 lattice using DMRG and taking a Kronecker product of 4 of the output "psi" states to provide as an initial state for the 8x8 lattice, which would hopefully allow the DMRG to
converge faster. However, I was not sure if the kron function (
https://tenpy.readthedocs.io/en/latest/ ... .kron.html
) of TeNPy allows this to be done. I am also not even sure if the "psi" output of the DMRG could be put into a Kronecker product.
As mentioned, I am very new to TeNPy and tensor networks in general, so I might be misunderstanding how DMRG works or trying to do something not feasible. Any help or guidance would be greatly
Re: DMRG Calculation for Large Systems
to put this a bit into perspective, 30 minutes of runtime is often still considered very moderate, state-of-the-art DMRG calculations for publications in journals often run several days to weeks on
high-performance-computing cluster nodes with >30 cores (more powerful than a laptop...)
Also, it's important to understand how DMRG should scale in 2D. What we do, is "snake" or wind a 1D MPS through the 2D lattice, by convention along the y direction first, see also the lattice intro.
This choice is the reason why we expect DMRG to scale exponentially with Ly, while it only scales linear with Lx: cutting the MPS on a center bond cuts the system into a left x half and a right x
half, but each part stretches along the full Ly.
To see this exponential scaling, recall that the area law of entanglement implies entanglement \(S \propto \mathrm{length~of~the~cut} \propto L_y\). We can plug this into the bound \(S \leq \log \chi
\), impying that the bond dimension should be at least \(\chi \geq \exp(const * Ly)\)
In other words, each time you increase Ly just by one, you should multiply chi_max by a factor if you want to keep the truncation error small, and the `chi_max=100` might not be enough to converge
for Ly=8. Of course, since DMRG scales with \(\mathcal{O}(\chi^3)\), this will get more and more computationally costly.
The idea with the Kronecker product is nice, but you need to be a bit careful how you want to "stitch" the individual states together - if you patch 4 smaller 4x4 lattices together, the MPS would
wind within each patch first, while for the whole 8x8 system, it winds in y-direction first.
That being said, if you're carful about how the MPS winds in the 2D system, you can stitch individual parts together using from_product_mps_covering. I'd consider this advanced usage, though. | {"url":"https://tenpy.johannes-hauschild.de/viewtopic.php?t=718","timestamp":"2024-11-07T15:50:27Z","content_type":"text/html","content_length":"29774","record_id":"<urn:uuid:221caa9e-5ec2-4fa8-b23e-14c419b6e392>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00394.warc.gz"} |
Arithmometer | Figment
With augmented reality or 3D printing, you can bring this very old calculator into your math class. Make sure you flip it over and see the other side.
About the Model
Can you imagine using this calculator for math class? This mathematical instrument manually performs the four arithmetic operations on numbers of up to ten digits. It was the first mechanical calculator produced on a large scale; by 1878, more than 1,500 of these instruments had been sold. Medium: steel, wood, brass
paris, math, mathematics, calculations, calculator, instrument, operation, manufacturing, digits, mechanism, mechanical | {"url":"https://figment.education/library/arithmometer","timestamp":"2024-11-07T06:13:37Z","content_type":"text/html","content_length":"86202","record_id":"<urn:uuid:9697b8db-5d91-4844-963a-e1c97877117d>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00454.warc.gz"} |
Calculators in the Classroom
TI-84+ and TI-84+ Silver Edition
I approach this topic with strong convictions, but with some trepidation because it is always somewhat controversial. The controversy surrounding this topic will probably never end, but states,
school districts, schools, and individual teachers need to address the topic and come to terms with how they are going to handle calculator use in their classrooms.
Since the development of small, inexpensive, electronic calculators in the early 1980's mathematicians and teachers of mathematics at all levels have seen the great potential that these instruments
have for changing how mathematics is taught, and for changing the way that mathematics is learned. From Principles and Standards for School Mathematics, produced by the National Council of Teachers
of Mathematics (N.C.T.M., 2000):
"Technology is essential in teaching and learning mathematics; it influences the mathematics that is taught and enhances students' learning." "In the mathematics classroom envisioned in the
Principles and Standards, every student has access to technology to facilitate his or her mathematics learning under the guidance of a skillful teacher."
On the other hand, I believe that the availability of calculators should be limited to students in 7th grade and above, and does not eliminate the need for students to learn algorithms. Proficiency
with paper and pencil computational algorithms is important, but such knowledge should grow out of problem situations that have given rise to the need for such algorithms. Students should be able to
decide when they need to calculate and whether they require an exact or approximate answer. They should be able to select and use the most appropriate tool. Students should have a balanced approach
to calculation, be able to choose appropriate procedures, find answers, and judge the validity of those answers.
I believe that graphics calculators can help students visualize otherwise abstract mathematical concepts. They empower students to take more control of their learning by offering a visual way of
discovering mathematical relationships. Appropriate use of graphing calculators also includes use on what I refer to as coincidental computation. Coincidental computation is the arithmetic that
occurs during the solving of real problems. That is, when the focus is not on the algorithm, but on the solving of a problem that has been translated from a real situation into an expression that
requires evaluation. When the focus is on the algorithm I call the exercise, contrived computation. This is the abstract manipulation of mathematical (algebraic) expressions/equations whose sole
purpose is the learning of the computational steps (algorithm) required to simplify or quantify.
I would like to set the minds of my students and their parents to rest on the subject of calculators in the classroom. My students will learn how to do every algebraic manipulation, including
graphing, with a pencil and paper because I advocate BASICS FIRST. In addition, every one of the manipulations that it is possible to do on a graphics calculator will be taught on the TI-84+ graphics
calculator. Students will be required to know how to use both pencil and paper and graphics calculators for the mathematics taught in my classroom. Students will be led to understand the power of
graphics calculators as well as their limitations (as Shoe discovered, above, calculators can't do everything.) Even though calculators will be available most of the time in my classroom (beginning
with SAT preparation) there will always be some exercises/problems on test/quizzes that it will be of no help.
I am a firm believer in using technology in the math classroom for all of the following purposes: emphasizing concepts, discovering patterns and relationships, coincidental computation, graphing,
developing mathematical intuition, confirming solutions, solving all types of problems (including problems that would not be attempted without the use of graphics calculators) and to encourage higher
level thinking skills.
For those who would discourage calculator use for fear the users will become calculator dependent I would give the following example: Suppose you lived on a farm where you often had to dig post
holes. Your father, and his father before him, had always used a manual post hole digger. Then you discovered you could dig five times as many holes in the same amount of time using a post hole
digging auger on the back of a tractor. I am pretty sure that you could come to prefer to use the auger over the manual post hole digger, but I don't think you would ever lose your ability to use the
manual tool. I would agree, however, that you just might become dependent on the auger to give you better post holes in less time, and you may wind up not being able to use the manual digger as well
as your father, but I'll wager that you would be willing to give up a little skill with the post hole digger to learn great technique with the auger. And in the end, aren't you really just trying to
dig a hole?
It is for all of the above reasons that I recommend that parents of students in my classes purchase TI-84+, or TI-84+ Silver Edition graphics calculators for their children. After the first 6-weeks
we will be using the class set of graphics calculators that have generously been provided by the Academys. Often, the work will require that students use a graphics calculator on homework. Students
who do not have one at home will have to complete the work at school. Graphics calculators can be purchased at most any office supply store or discount store. This same calculator is used extensively
at most high schools, so students can expect to be able to use theirs for many years. Owning a calculator is not required for success in Algebra I, but owning one will go a long way towards
facilitating the learning of both Algebra I and graphics calculators.
The bottom line here is that the human instinct to take the path of least resistance, and to use the easiest method available to solve a problem, often encourages overuse of technology. For this
reason there will always be problems on algebra tests that cannot be solved/answered using a calculator. There will also be problems that require graphing calculator use on most tests.
As adults in the real world we have lots of "tools" that we use in our everyday lives and in our work that we could do the job without, but that we have become addicted to using to make life/job a
little more pleasant with a little less drudgery. The graphing calculator holds the same position in the lives of my students. They can always do the job without the tool, but if the point is to
understand the problem and find a solution then the calculator ought to be available to facilitate that end.
I recommend that students not bring their graphics calculators to school as the possibility of damage or loss is too great. There will always be TI-84+'s available in my room for students to use.
Students should leave their personal graphics calculators at home for use on homework.
One final thought. I believe that many people cannot learn to do mathematical computation at a level that used to be necessary for success in higher-level mathematics classes. I also believe that no
individual should be excluded from these courses just because they lack fluent computational skills. Having poor computational skills is not necessarily an indication that a person lacks the ability
to problem-solve at the most difficult levels. Therefore, if a student lacks manual computational skills, but he can understand complex real-world problems, define variables, and write equations that
model the situations, I believe he should have access to the highest levels of mathematics and science classes even if he needs a calculator to do every computation. No student should be denied
access to higher mathematics simply because he lacks manual computational skills. | {"url":"https://www.algebraguy.com//calculatorph.htm","timestamp":"2024-11-14T00:34:54Z","content_type":"text/html","content_length":"11696","record_id":"<urn:uuid:cbb7a56c-cc94-4173-8541-7329f01c5ef8>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00322.warc.gz"} |
Should You Switch Your Loan to a Lower Interest Rate by Paying a Fee? - EMI Calculator
I got this query from one of the readers. He had taken a home loan from a leading housing finance company.
Original Loan Amount: Rs 39,12,633
Loan Outstanding: Rs 37,94,900
Original Loan Tenure: 240 months
Remaining Loan Tenure: 221 months
Current Rate of Interest: 9.1%
EMI: Rs 35,455
The loan interest rates have fallen sharply over the last 18-24 months.
The Switching Offer
When the borrower approached the lender for a lower rate, the lender offered to reduce the loan interest rate to 8% p.a. for a switch fee of Rs 1.23 lacs. As a borrower, how should you evaluate this
offer? Should you accept this offer or continue at a higher rate of interest? You can evaluate the offer in many ways.
Approach 1 — Determine Time Taken to Recover Switch Fee (Revised Loan Tenure)
You pay the switching fee of Rs 1.23 lacs and reduce the loan tenure to 188 months. On a loan of Rs 37.94 lacs, by paying 1.1% p.a. less, you save ~Rs 41,000 in interest in the first year. So, it
will take about 3 years to recover the switch fee (i.e., Rs 1.23 lacs / 41,000). This is a very crude calculation. (For exact interest savings under reducing rate, see below table). And then you pay
the lower interest rate for the entire tenure (and not just 3 years). We have seen much easier switching terms in banks where you recover the switch fee within a few months. Here, you recover the fee
in 3 years. Does not look nice in comparison. However, in absence of other options, this looks a fine choice.
Switch Fee: Rs 1,23,000
Loan Outstanding: Rs 37,94,900
Keeping EMI Constant: Rs 35,455
Original Loan Tenure: 221 months
Original Interest Rate: 9.1%
Interest Paid (1st Year): Rs 3,41,908
Interest Paid (2nd Year): Rs 3,33,980
Interest Paid (3rd Year): Rs 3,25,299
Interest Paid in 3 Years (@ 9.1% interest rate) (A): Rs 10,01,187
Revised Loan Tenure: 188 months
Revised Interest Rate: 8%
Interest Paid (1st Year): Rs 2,99,023
Interest Paid (2nd Year): Rs 2,88,528
Interest Paid (3rd Year): Rs 2,77,163
Interest Paid in 3 Years (@ 8% interest rate) (B): Rs 8,64,714
Difference in Interest Paid in 3 Years (i.e., time taken to recover Switch Fee) (A-B): Rs 1,36,473
Here’s another alternative to consider — You could have used Rs 1.23 lacs to part prepay the loan. In that case, your loan outstanding goes down from Rs 37.94 lacs to Rs 36.71 lacs. If the EMI (and
interest rate) remains the same, the loan will get repaid in 203 months (compared to 188 months after the switch). Clearly, switching to a lower rate by paying the fee is a better option compared to
prepaying the loan with the fee amount.
Approach 2 — Determine Time Taken to Recover Switch Fee (Revised EMI)
You pay the switching fee of Rs 1.23 lacs and adjust the EMI. With a lower interest rate, the EMI will go down to Rs 32,868. Compared to the existing scenario, this will lead to a monthly saving of
Rs 2,587. You take about 48 months to recover the switching fee. Not bad. Your loan tenure is 221 months. Looks fine again.
Switch Fee (A): Rs 1,23,000
Loan Tenure (B): 221 months
Original EMI (C): Rs 35,455
Revised EMI (D): Rs 32,868.38
Monthly Savings (E = C – D): Rs 2,586.62
Total Savings (F = B x E): Rs 5,71,643
Time to Recover Switch Fee (A ÷ E): ~48 months
Approach 3 — Switching Fee as an Investment
By paying Rs 1.23 lacs upfront (and keeping the EMI constant), your home loan will get repaid in 188 months. So, you save a cool 33 EMIs by paying the switch fee of Rs 1.23 lacs. You save 33 X 35,455
= Rs 11.70 lacs by paying Rs 1.23 lacs upfront.
Switch Fee (A): Rs 1,23,000
Keeping EMI Constant (B): Rs 35,455
Original Tenure (C): 221 months
Revised Tenure (D): 188 months
Months Reduced (E = C – D): 33
Total Savings (F = B x E): Rs 11,70,015
Remember, your switching fee expense is upfront. The savings come in the form of EMIs saved beyond the 188th month. Is that good or bad?
Let us assume you made an investment of Rs 1.23 lacs. You got Rs 35,455 for 33 months starting 189th month. The APR is 13.28% p.a., much higher than your loan interest rate. So, switching to a lower
interest rate by paying this fee looks like a fine choice.
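If you want to reproduce the tenure numbers used above, a short Python sketch (my own, using the standard reducing-balance formula) gives the same 221 and 188 months:

import math

def remaining_months(principal, annual_rate, emi):
    # Months to repay `principal` at `annual_rate` p.a. (monthly reducing balance)
    # with a fixed EMI: n = ln(EMI / (EMI - P*r)) / ln(1 + r).
    r = annual_rate / 12.0
    return math.log(emi / (emi - principal * r)) / math.log(1.0 + r)

outstanding, emi = 3794900, 35455
print(round(remaining_months(outstanding, 0.091, emi)))   # ~221 months at 9.1% p.a.
print(round(remaining_months(outstanding, 0.080, emi)))   # ~188 months at 8% p.a., i.e. 33 EMIs saved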
Approach 4 — What if You Prepay the Loan after the Switch?
The above calculations are fine. However, such calculations are likely to overstate the benefits of switching to a lower rate of interest. Why? This is because the assumption in all the scenarios
discussed above is that you will let your home loan run its full course. Usually, we tend to prepay the home loans much sooner. Usually in about 7 to 9 years. If the home loan is prepaid, the benefit
of a lower rate of interest goes down automatically. The benefit can be much lesser than calculated in the above examples.
Let us consider an extreme scenario. The borrower prepays the entire home loan after 1 year. So, the borrower reduces the interest rate by 1.1% from 9.1% to 8% per annum. Over the next 1 year, he
would have saved about Rs 41,000 in interest cost. However, for this, he paid Rs 1.23 lacs. Does not make sense, does it? Before shelling out big amounts in switching fees, you must consider this
possibility of your plans for loan prepayment.
Let us consider a more plausible scenario. The borrower prepays Rs 3 lacs every year. At the end of the 7th year, he prepays whatever is left and closes the loan.
1. Case 1: You keep the 1.23 lacs in your pocket and stick to the existing rate of 9.1% p.a. EMI remains same. You prepay Rs 3 lacs at the end of each year (12th, 24th, 36th month and so on). At the
end of the 7th year, you must pay Rs 5.12 lacs to close the loan.
2. Case 2: You pay the switching fee of Rs 1.23 lacs. Interest rate goes down to 8% p.a. EMI remains constant. You prepay Rs 3 lacs at the end of each year. At the end of the 7th year, you must pay
Rs 2.54 lacs to close the loan.
│Loan Outstanding │Rs 37,94,900 │
│Original Interest rate │9.10% │
│Original Tenure │240 │
│Remaining Months │221 │
│EMI │Rs 35,455 │
│ │Case 1 (A) │Case 2 (B) │Difference (A-B)│
│Switch Fee │– │Rs 1,23,000 │ │
│Revised Interest Rate │9.1% │8% │ │
│Yearly Prepayment Amount (Paid on 12th, 24th, 36th month etc.) │Rs 3,00,000 │Rs 3,00,000 │ │
│Outstanding Loan at the end of 5th Year │Rs 17,76,969│Rs 15,78,156│Rs 1,98,813 │
│Outstanding Loan at the end of 7th Year │Rs 5,12,686 │Rs 2,54,773 │Rs 2,57,913 │
Case 2 costs Rs 2.58 lacs less (i.e., 5.12 lacs – 2.54 lacs) compared to Case 1. However, you paid Rs 1.23 lacs upfront in switching fees. Rs 1.23 lacs will become Rs 2.58 lacs in 7 years for a
11.15% p.a. return. While this is not bad, you must appreciate that you are losing a lot of flexibility. Rs 1.23 lacs is gone, irrespective of whether you finish the loan in 5 years, 7 years, or 10
If you were to close the loan at the end of 5th year, the difference in foreclosure amount will only be Rs 1.99 lacs between Case 1 and Case 2. For this, you pay Rs 1.23 lacs upfront. Not worth it.
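If you want to reproduce numbers of this kind yourself, here is a minimal amortization sketch in Python (assumptions: monthly compounding at rate/12, the EMI is paid every month, and Rs 3 lacs is prepaid at the end of each completed year except the year in which the loan is closed; the "end of year" figures are taken before that year's prepayment, so it only approximately matches the table above):

def outstanding(principal, annual_rate, emi, months, prepay=300000):
    r = annual_rate / 12
    bal = principal
    for m in range(1, months + 1):
        bal = bal * (1 + r) - emi        # interest accrues for the month, then the EMI is paid
        if m % 12 == 0 and m < months:
            bal -= prepay                # annual lump-sum prepayment
    return bal

for rate in (0.091, 0.08):
    print(f"{rate:.1%}: year 5 ~ {outstanding(3794900, rate, 35455, 60):,.0f}, "
          f"year 7 ~ {outstanding(3794900, rate, 35455, 84):,.0f}")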
In Summary — What Should You Do?
In my opinion, Rs 1.23 lacs is too high a switching fee. In fact, from the borrower’s perspective, these terms are atrocious. I have seen lenders (based on readers’ feedback) offering much more generous terms to borrowers, where the switching fee is recovered in just a few months. Here, it takes a few years.
We have NOT considered the possibility of refinancing the loan with another lender in this post. For loan refinancing, you will incur a processing fee and a few other documentation-related costs. It
requires more work too. However, you are likely to get a better deal while switching to another lender. At the same time, I do not have much idea about the borrower’s credit profile, and I work with
an assumption that the new lender will sanction the loan. Trust your judgement on this. When your existing lender offers you such a raw deal, loan refinancing is an option worth exploring. | {"url":"https://emicalculator.net/should-you-switch-your-loan-to-a-lower-interest-rate-by-paying-a-fee/","timestamp":"2024-11-03T02:53:02Z","content_type":"text/html","content_length":"65482","record_id":"<urn:uuid:76dd5036-2c59-4b82-a146-b2367e73385b>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00861.warc.gz"} |
A right triangle has sides A, B, and C. Side A is the hypotenuse and side B is also a side of a rectangle. Sides A, C, and the side of the rectangle adjacent to side B have lengths of 9, 4, and 7, respectively. What is the rectangle's area?
1 Answer
Area of Rectangle is 56.42 (2 dp) square units
A = 9; C = 4; B = sqrt(A^2 - C^2) = sqrt(9^2 - 4^2) = 8.06. B is a side of the rectangle and its adjacent side D = 7, therefore the Area of Rectangle is 8.06 x 7 = 56.42 (2 dp) sq. units
All Tasks - Asymptote
Find the coefficients of terms in algebraic expression_1
Find the number $a$ if the following algebraic expression is independent of $x$. $A= a(x+y+2)-2(x+2y)+6$
Find the Algebraic expression
Which of the following algebraic expressions will give us the expression: $3x+2y-3$
Find the coefficients of terms in algebraic expression_3
Find the number $a$ and $b$ if the following algebraic expression is independent of $x$ and $y$. $A=x(a+3b)+y(2a-6)+2a+5$ Answer 1 is for $a$ and Answer 2 is for $b$
# Equations & Inequations
Simplification of algebraic expression_2
If we simplify the following algebraic expression $A= 2(x-y-2)-3(2x+y)+7$ into the form $ax+by+c$, where the $a,b$ and $c$ are rational numbers, find the $a$ (Answer 1), $b$ (Answer 2) and $c$
(Answer 3)
Simplify the algebraic expression_1
If we simplify the following algebraic expression: $A=(x-3)(x+3)-(x^2-9x+3)$, the expression will be ...
Find the value_2
Evaluate the following algebraic expression if $x=5$ and $y=−5$ $A={(x-3)}^2-2{(y+3)}^2-5{(x+y)}^7$
# Equations & Inequations | {"url":"https://www.asymptote-project.eu/en/all-tasks/?asymPagination=50","timestamp":"2024-11-03T20:10:51Z","content_type":"text/html","content_length":"116927","record_id":"<urn:uuid:682af15a-10ee-435c-8fe1-fa29e600057a>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00772.warc.gz"} |
Statistics — Central Limit Theorem: Reliable Estimation and Informed Decision-Making Through Statistical Inference
The CLT reassures us that if we have a sufficiently large sample, the sample mean will provide a reliable estimate of the population mean. Moreover, by utilizing the standard deviation of the sample
means and the sample size, we can estimate the standard deviation of the population. In this essay we investigate this in detail.
The Central Limit Theorem (CLT) is a fundamental concept in statistics that never fails to captivate the mind. At its core, the CLT asserts that regardless of the shape of the original population distribution, if we collect sufficiently large samples and calculate the mean of each sample, the distribution of those sample means will be approximately normal. (That sample mean is a random variable, since we are selecting each sample from the population at random.) This theorem is truly a game-changer, unlocking a world of possibilities.
It speaks about the random variable: the “mean of the sample”.
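A small simulation sketch of this idea in Python (the population, sample size and seed below are illustrative choices, not part of the original example):

import numpy as np

rng = np.random.default_rng(42)
population = rng.exponential(scale=2.0, size=100_000)   # a clearly non-normal population

sample_size, n_samples = 50, 5_000
sample_means = rng.choice(population, size=(n_samples, sample_size)).mean(axis=1)

print(population.mean(), sample_means.mean())                        # both close to 2.0
print(population.std() / np.sqrt(sample_size), sample_means.std())   # std of sample means ~ population std / sqrt(n)

A histogram of sample_means would look approximately bell-shaped even though the underlying population is skewed.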
To understand the power of the CLT, let’s imagine you have a bag filled with marbles of different sizes… | {"url":"https://ogre51.medium.com/statistics-central-limit-theorem-reliable-estimation-and-informed-decision-making-through-a2f60c4197c7?responsesOpen=true&sortBy=REVERSE_CHRON&source=read_next_recirc-----8b060db91fba----1---------------------76175132_367f_499e_b75c_60cb7e5fb1a9-------","timestamp":"2024-11-10T06:12:48Z","content_type":"text/html","content_length":"88182","record_id":"<urn:uuid:c6089bda-f708-458a-a893-b15c3f23d60c>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00501.warc.gz"} |
Johansen cointegration test
h = jcitest(Y) returns rejection decisions from conducting the Johansen test, which assesses each null hypothesis H(r) of cointegration rank less than or equal to r among an input numDims-dimensional
multivariate time series against the alternatives H(numDims) (trace test) or H(r + 1) (maxeig test). The tests produce maximum likelihood estimates of the parameters in a vector error-correction (VEC
) model of the cointegrated series.
h = jcitest(Tbl) returns rejection decisions from conducting the Johansen test on the variables of an input table or timetable.
To select a subset of variables to test, use the DataVariables name-value argument.
h = jcitest(___,Name=Value) uses additional options specified by one or more name-value arguments, using any input-argument combination in the previous syntaxes.
Some options control the number of tests to conduct. The following conditions apply when jcitest conducts multiple tests:
• jcitest treats each test as separate from all other tests.
• Each row of all outputs contains the results of the corresponding test.
For example, jcitest(Tbl,Model="H2",DataVariables=1:5) tests the first 5 variables in the input table Tbl using the Johansen model that excludes all deterministic terms.
[h,pValue,stat,cValue] = jcitest(___) displays, at the command window, the results of the Johansen test and returns the p-values pValue, test statistics stat, and critical values cValue of the test.
The results display includes the ranks r, corresponding rejection decisions, p-values, decision statistics, and specified options.
[h,pValue,stat,cValue,mles] = jcitest(___) also returns a structure of maximum likelihood estimates associated with the VEC(q) model of the multivariate time series y[t].
Conduct Johansen Cointegration Test on Matrix of Data
Test a multivariate time series for cointegration using the default values of the Johansen cointegration test. Input the time series data as a numeric matrix.
Load data of Canadian inflation and interest rates Data_Canada.mat, which contains the series in the matrix Data.
ans = 5x1 cell
{'(INF_C) Inflation rate (CPI-based)' }
{'(INF_G) Inflation rate (GDP deflator-based)'}
{'(INT_S) Interest rate (short-term)' }
{'(INT_M) Interest rate (medium-term)' }
{'(INT_L) Interest rate (long-term)' }
Test the interest rate series for cointegration by using the Johansen cointegration test. Use default options and return the rejection decision.
h = jcitest(Data(:,3:end))
h=1×7 table
r0 r1 r2 Model Lags Test Alpha
_____ _____ _____ ______ ____ _________ _____
t1 true true false {'H1'} 0 {'trace'} 0.05
By default, jcitest conducts the trace test and uses the H1 Johansen form by default. The test fails to reject the null hypothesis of rank 2 cointegration in the series.
Conduct Default Johansen Cointegration Test on Table Variables
Conduct the Johansen cointegration test on a multivariate time series using default options, which tests all table variables.
Load data of Canadian inflation and interest rates Data_Canada.mat. Convert the table DataTable to a timetable.
load Data_Canada
dates = datetime(dates,12,31);
TT = table2timetable(DataTable,RowTimes=dates);
TT.Observations = [];
Conduct the Johansen cointegration test by passing the timetable to jcitest and using default options. jcitest tests for cointegration among all table variables by default.
h=1×9 table
r0 r1 r2 r3 r4 Model Lags Test Alpha
_____ _____ _____ _____ _____ ______ ____ _________ _____
t1 true true false false true {'H1'} 0 {'trace'} 0.05
The test fails to reject the null hypotheses of rank 2 and 3 cointegration among the series.
By default, jcitest includes all input table variables in the cointegration test. To select a subset of variables to test, set the DataVariables option.
Conduct Johansen Test for Each Test Statistic
jcitest supports two types Johansen tests. Conduct a test for each type.
Load data of Canadian inflation and interest rates Data_Canada.mat. Convert the table DataTable to a timetable. Identify the interest rate series.
load Data_Canada
dates = datetime(dates,12,31);
TT = table2timetable(DataTable,RowTimes=dates);
TT.Observations = [];
idxINT = contains(TT.Properties.VariableNames,"INT");
Conduct the Johansen cointegration test to assess cointegration among the interest rate series. Specify both test types trace and maxeig, and set the level of significance to 2.5%.
h = jcitest(TT,DataVariables=idxINT,Test=["trace" "maxeig"],Alpha=0.025)
h=2×7 table
r0 r1 r2 Model Lags Test Alpha
_____ _____ _____ ______ ____ __________ _____
t1 true false false {'H1'} 0 {'trace' } 0.025
t2 false false false {'H1'} 0 {'maxeig'} 0.025
h is a 2-row table; rows contain results of separate tests. At 2.5% level of significance:
• The trace test fails to reject the null hypotheses of ranks 1 and 2 cointegration among the series.
• The maxeig test fails to reject the null hypotheses for each cointegration rank.
Return Test p-Values and Decision Statistics
Load data of Canadian inflation and interest rates Data_Canada.mat. Convert the table DataTable to a timetable. Identify the interest rate series.
load Data_Canada
dates = datetime(dates,12,31);
TT = table2timetable(DataTable,RowTimes=dates);
TT.Observations = [];
idxINT = contains(TT.Properties.VariableNames,"INT");
Conduct the Johansen cointegration test to assess cointegration among the interest rate series. Specify both test types trace and maxeig.
[h,pValue,stat,cValue] = jcitest(TT,DataVariables=idxINT,Test=["trace" "maxeig"])
Results Summary (Test 1)
Data: TT
Effective sample size: 40
Model: H1
Lags: 0
Statistic: trace
Significance level: 0.05
r h stat cValue pValue eigVal
0 1 37.6886 29.7976 0.0050 0.4101
1 1 16.5770 15.4948 0.0343 0.2842
2 0 3.2003 3.8415 0.0737 0.0769
Results Summary (Test 2)
Data: TT
Effective sample size: 40
Model: H1
Lags: 0
Statistic: maxeig
Significance level: 0.05
r h stat cValue pValue eigVal
0 0 21.1116 21.1323 0.0503 0.4101
1 0 13.3767 14.2644 0.0687 0.2842
2 0 3.2003 3.8415 0.0737 0.0769
h=2×7 table
r0 r1 r2 Model Lags Test Alpha
_____ _____ _____ ______ ____ __________ _____
t1 true true false {'H1'} 0 {'trace' } 0.05
t2 false false false {'H1'} 0 {'maxeig'} 0.05
pValue=2×7 table
r0 r1 r2 Model Lags Test Alpha
_________ ________ ________ ______ ____ __________ _____
t1 0.0050497 0.034294 0.073661 {'H1'} 0 {'trace' } 0.05
t2 0.050346 0.06874 0.073661 {'H1'} 0 {'maxeig'} 0.05
stat=2×7 table
r0 r1 r2 Model Lags Test Alpha
______ ______ ______ ______ ____ __________ _____
t1 37.689 16.577 3.2003 {'H1'} 0 {'trace' } 0.05
t2 21.112 13.377 3.2003 {'H1'} 0 {'maxeig'} 0.05
cValue=2×7 table
r0 r1 r2 Model Lags Test Alpha
______ ______ ______ ______ ____ __________ _____
t1 29.798 15.495 3.8415 {'H1'} 0 {'trace' } 0.05
t2 21.132 14.264 3.8415 {'H1'} 0 {'maxeig'} 0.05
jcitest prints a results display for each test to the command window. All outputs are tables containing the corresponding statistics and test options.
Plot Estimated Cointegrating Relations
Load data of Canadian inflation and interest rates Data_Canada.mat. Convert the table DataTable to a timetable.
load Data_Canada
dates = datetime(dates,12,31);
TT = table2timetable(DataTable,RowTimes=dates);
TT.Observations = [];
idxINT = contains(TT.Properties.VariableNames,"INT");
Plot the interest series.
plot(TT.Time,TT{:,idxINT}) % assumed plotting call for the three interest-rate series
grid on
Test the interest rate series for cointegration; use the default Johansen form H1. Return all outputs.
[h,pValue,stat,cValue,mles] = jcitest(TT,DataVariables=idxINT);
Results Summary (Test 1)
Data: TT
Effective sample size: 40
Model: H1
Lags: 0
Statistic: trace
Significance level: 0.05
r h stat cValue pValue eigVal
0 1 37.6886 29.7976 0.0050 0.4101
1 1 16.5770 15.4948 0.0343 0.2842
2 0 3.2003 3.8415 0.0737 0.0769
h=1×7 table
r0 r1 r2 Model Lags Test Alpha
_____ _____ _____ ______ ____ _________ _____
t1 true true false {'H1'} 0 {'trace'} 0.05
pValue=1×7 table
r0 r1 r2 Model Lags Test Alpha
_________ ________ ________ ______ ____ _________ _____
t1 0.0050497 0.034294 0.073661 {'H1'} 0 {'trace'} 0.05
The test fails to reject the null hypothesis of rank 2 cointegration in the series.
Plot the estimated cointegrating relations ${B}^{\prime }{y}_{t-1}+{c}_{0}$:
TTLag = lagmatrix(TT,1);
T = height(TTLag);
B = mles.r2.paramVals.B;
c0 = mles.r2.paramVals.c0;
plot(TTLag.Time,TTLag{:,idxINT}*B + c0') % assumed plotting call: B'y(t-1) + c0 for each cointegrating relation
grid on
Input Arguments
jcitest removes the following observations from the specified data:
• All rows containing at least one missing observation, represented by a NaN value
• From the beginning of the data, initial values required to initialize lagged variables
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but
the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: jcitest(Tbl,Model="H2",DataVariables=1:5) tests the first 5 variables in the input table Tbl using the Johansen model that excludes all deterministic terms.
Display — Command window display control
"off" | "summary" | "params" | "full"
Command window display control, specified as a value in this table.
Value Description
"off" jcitest does not display the results to the command window. If jcitest returns h or no outputs, this display is the default.
"summary" jcitest displays a tabular summary of test results. The tabular display includes null ranks r = 0:(numDims − 1) in the first column of each summary. jcitest displays multiple test results
in separate summaries.
When jcitest returns any other output than h (for example, pValue), this display is the default. You cannot set this display when jcitest returns h or no outputs.
"params" jcitest displays maximum likelihood estimates of the parameter values associated with the reduced-rank VEC(q) model of y[t]. You can set this display only when jcitest returns mles. jcitest
returns the displayed parameter estimates in the field mles.rn(j).paramVals for null rank r = n and test j.
"full" jcitest displays both "summary" and "params".
Example: Display="off"
Data Types: char | string
• When jcitest conducts multiple tests, the function applies all single settings (scalars or character vectors) to each test.
• All vector-valued specifications that control the number of tests must have equal length.
• A lagged and differenced time series has a reduced sample size. Absent presample values, if the test series y[t] is defined for t = 1,…,T, the lagged series y[t– k] is defined for t = k+1,…,T.
The first difference applied to the lagged series y[t– k] further reduces the time base to k+2,…,T. With p lagged differences, the common time base is p+2,…,T and the effective sample size is T–(p+1).
Output Arguments
h — Test rejection decisions
Test rejection decisions, returned as a numTests-by-(numDims + 3) table, where numTests is the number of tests, which is determined by specified options.
Row j of h corresponds to test j with options.
Rows of h correspond to tests specified by the values of the last three variables Model, Test, and Alpha. Row labels are t1, t2, …, tu, where u = numTests.
Variables of h correspond to different, maintained cointegration ranks r = 0, 1, …, numDims – 1 and specified name-value arguments that control the number of tests. Variable labels are r0, r1, …, rR,
where R = numDims – 1, and Model, Test, and Alpha.
To access results, for example, the result for test j of null rank k, use h.rk(j).
Variable k, labeled rk, is logical vector whose entries have the following interpretations:
• 1 (true) indicates rejection of the null hypothesis of cointegration rank k in favor of the alternative hypothesis.
• 0 (false) indicates failure to reject the null hypothesis of cointegration rank k.
pValue — Test statistic p-values
Test statistic p-values, returned as a table with the same dimensions and labels as h. Variable k, labeled rk, is a numeric vector of p-values for the corresponding tests. The p-values are
right-tailed probabilities.
When test statistics are outside tabulated critical values, jcitest returns maximum (0.999) or minimum (0.001) p-values.
stat — Test statistics
Test statistics, returned as a table with the same dimensions and labels as h.
The Test setting of a particular test determines the test statistic.
cValue — Critical values
Critical values, returned as a table with the same dimensions and labels as h. Variable k, labeled rk, is a numeric vector of critical values for the corresponding tests. The critical values are for
right-tailed probabilities determined by Alpha.
jcitest loads tables of critical values from the file Data_JCITest.mat, and then linearly interpolates test critical values from the tables. Critical values in the tables derive from methods
described in [4].
mles — Maximum likelihood estimates (MLE) associated with VEC(q) model of y[t]
structure array
Maximum likelihood estimates associated with the VEC(q) model of y[t], returned as a table with the same dimensions and labels as h. Variable k, labeled rk, is a structure array of MLEs with elements
for the corresponding tests.
Each element of mles.rk has the fields in this table. You can access a field using dot notation, for example, mles.r2(3).paramVals contains the parameter estimates of the third test corresponding to
the null hypothesis of rank 2 cointegration.
Field Description
Cell vector of parameter names, of the form:
paramNames {A, B, B1, … Bq, c0, d0, c1, d1}
Elements depend on the values of the Lags and Model name-value arguments.
paramVals Structure of parameter estimates with field names corresponding to the parameter names in paramNames.
res T-by-numDims matrix of residuals, where T is the effective sample size, obtained by fitting the VEC(q) model of y[t] to the input data.
EstCov Estimated covariance Q of the innovations process ε[t].
eigVal Eigenvalue associated with H(r).
eigVec Eigenvector associated with the eigenvalue in eigVal. Eigenvectors v are normalized so that v′S[11]v = 1, where S[11] is defined as in [3].
rLL Restricted loglikelihood of y[t] under the null.
uLL Unrestricted loglikelihood of y[t] under the alternative.
More About
Vector Error-Correction (VEC) Model
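As a sketch of the model being estimated (written with the parameter names listed in paramNames above; which of the deterministic terms c0, d0, c1, and d1 are actually present depends on the Model name-value argument), the reduced-rank VEC(q) model of y[t] has the general form

Δy[t] = A(B′y[t–1] + c0 + d0·t) + B1·Δy[t–1] + … + Bq·Δy[t–q] + c1 + d1·t + ε[t],

where C = AB′ is the error-correction coefficient, A and B are numDims-by-r matrices for cointegration rank r, and ε[t] is an innovations process with covariance Q.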
• To convert VEC(q) model parameters in the mles output to VAR(q + 1) model parameters, use vec2var.
• To test linear constraints on the error-correction speeds A and the space of cointegrating relations spanned by B, use jcontest.
• jcitest identifies deterministic terms that are outside of the cointegrating relations, c[1] and d[1], by projecting constant and linear regression coefficients, respectively, onto the orthogonal
complement of A.
• If jcitest fails to reject the null hypothesis of cointegration rank r = 0, the inference is that the error-correction coefficient C is zero, and the VEC(q) model reduces to a standard VAR(q)
model in first differences. If jcitest rejects all cointegration ranks r less than numDims, the inference is that C has full rank, and y[t] is stationary in levels.
• The parameters A and B in the reduced-rank VEC(q) model are not identifiable, but their product C = AB′ is identifiable. jcitest constructs B = V(:,1:r) using the orthonormal eigenvectors V
returned by eig, and then renormalizes so that V'*S11*V = I [3].
• The time series in the specified input data can be stationary in levels or first differences (that is, I(0) or I(1)). Rather than pretesting series for unit roots (using, e.g., adftest, pptest,
kpsstest, or lmctest), the Johansen procedure formulates the question within the model. An I(0) series is associated with a standard unit vector in the space of cointegrating relations, and the
jcontest can test for its presence.
• Deterministic cointegration, where cointegrating relations, perhaps with an intercept, produce stationary series, is the traditional sense of cointegration introduced by Engle and Granger [1]
(see egcitest). Stochastic cointegration, where cointegrating relations produce trend-stationary series (that is, d0 is nonzero), extends the definition of cointegration to accommodate a greater
variety of economic series.
• Unless higher-order trends are present in the data, models with fewer restrictions can produce good in-sample fits, but poor out-of-sample forecasts.
Alternative Functionality
[1] Engle, R. F., and C. W. J. Granger. "Co-Integration and Error-Correction: Representation, Estimation, and Testing." Econometrica. v. 55, 1987, pp. 251–276.
[2] Hamilton, James D. Time Series Analysis. Princeton, NJ: Princeton University Press, 1994.
[3] Johansen, S. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford: Oxford University Press, 1995.
[4] MacKinnon, J. G., A. A. Haug, and L. Michelis. "Numerical Distribution Functions of Likelihood Ratio Tests for Cointegration." Journal of Applied Econometrics. v. 14, 1999, pp. 563–577.
[5] Turner, P. M. "Testing for Cointegration Using the Johansen Approach: Are We Using the Correct Critical Values?" Journal of Applied Econometrics. v. 24, 2009, pp. 825–831.
Version History
Introduced in R2011a | {"url":"https://nl.mathworks.com/help/econ/jcitest.html","timestamp":"2024-11-09T07:38:43Z","content_type":"text/html","content_length":"170687","record_id":"<urn:uuid:dd0dd9dc-ad79-4a1a-91cb-f55d69f2f32e>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00681.warc.gz"} |
sklift.metrics.metrics.average_squared_deviation(y_true_train, uplift_train, treatment_train, y_true_val, uplift_val, treatment_val, strategy='overall', bins=10)[source]
Compute the average squared deviation.
The average squared deviation (ASD) is a model stability metric that shows how much the model overfits the training data. Larger values of ASD mean greater overfit.
☆ y_true_train (1d array-like) – Correct (true) target values for training set.
☆ uplift_train (1d array-like) – Predicted uplift for training set, as returned by a model.
☆ treatment_train (1d array-like) – Treatment labels for training set.
☆ y_true_val (1d array-like) – Correct (true) target values for validation set.
☆ uplift_val (1d array-like) – Predicted uplift for validation set, as returned by a model.
☆ treatment_val (1d array-like) – Treatment labels for validation set.
☆ strategy (string, ['overall', 'by_group']) –
Determines the calculating strategy. Default is ‘overall’.
'overall':
The first step is taking the first k observations of all test data ordered by uplift prediction (overall both groups - control and treatment) and conversions in treatment and control groups calculated only on them. Then the difference between these conversions is calculated.
'by_group':
Separately calculates conversions in the top k observations in each group (control and treatment) sorted by uplift predictions. Then the difference between these conversions is calculated.
☆ bins (int) – Determines the number of bins (and the relative percentile) in the data. Default is 10.
Returns: average squared deviation
Return type: float
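A minimal usage sketch (assuming scikit-uplift is installed; the arrays below are synthetic and only illustrate the call signature):

import numpy as np
from sklift.metrics import average_squared_deviation

rng = np.random.default_rng(0)

# Synthetic binary outcomes, uplift scores, and treatment flags
y_train, y_val = rng.integers(0, 2, 1000), rng.integers(0, 2, 500)
uplift_train, uplift_val = rng.normal(size=1000), rng.normal(size=500)
trt_train, trt_val = rng.integers(0, 2, 1000), rng.integers(0, 2, 500)

asd = average_squared_deviation(
    y_true_train=y_train, uplift_train=uplift_train, treatment_train=trt_train,
    y_true_val=y_val, uplift_val=uplift_val, treatment_val=trt_val,
    strategy='overall', bins=10,
)
print(asd)  # larger values suggest stronger overfitting to the training data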
René Michel, Igor Schnakenburg, Tobias von Martens. Targeting Uplift. An Introduction to Net Scores. | {"url":"https://www.uplift-modeling.com/en/latest/api/metrics/average_squared_deviation.html","timestamp":"2024-11-05T16:49:36Z","content_type":"text/html","content_length":"16080","record_id":"<urn:uuid:7586a84c-7581-4afa-aa83-96a3296ff202>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00248.warc.gz"} |
Giorgio VENTURI | Professor | PhD | Università di Pisa, Pisa | UNIPI | Department of Civilisations and Forms of Knowledge | Research profile
My research interests lay in set theory, logic, philosophy of mathematics and philosophy of language: theory of forcing, modal logic, justification of new axioms, arbitrary objects and speech acts. I
am also interested in the historical aspects of the origin of the axiomatic method and the foundational work of David Hilbert. | {"url":"https://www.researchgate.net/profile/Giorgio-Venturi-2","timestamp":"2024-11-10T15:12:53Z","content_type":"text/html","content_length":"724150","record_id":"<urn:uuid:55e3da00-cc62-4cc6-a31d-9d855f962a93>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00434.warc.gz"} |
Revision - 782a296 - More clean README – Software Heritage archive
Authored 06 May 2023, 11:36:12 UTC; committed 06 May 2023, 11:36:12 UTC.
# FuzzTree
# Copyright (C) 2023 THEO BOURY
import math
import networkx as nx
from FuzzTree import preL2distance, precompute_distance
from multiprocessing import Pool
DEBUG = 0
def get_radius(GP):
    """
    Input : A pattern graph GP.
    Output : Locate the most "central" node c in GP and return the maximal distance between c and the other nucleotides in GP.
    """
diam = math.inf
for node1 in GP.nodes.data():
diam_loc = 0
for node2 in GP.nodes.data():
atoms1 = node1[1]['atoms']
atoms2 = node2[1]['atoms']
for a1 in atoms1:
for a2 in atoms2:
diam_loc = max(diam_loc, preL2distance(a1['position'], a2['position']))
diam = min(diam, diam_loc)
return math.sqrt(diam)
def wrapper_sphere(GT_Distancer_cube_cutoff_sphere):
    """A wrapper around the creation of spheres to allow multiprocessing."""
(GT, Distancer_cube, cutoff_sphere) = GT_Distancer_cube_cutoff_sphere
preresu = [node for node in GT.nodes() if Distancer_cube[node] <= cutoff_sphere]
return list(set(preresu))
def allocate_sphere(GT, cutoff_sphere, Distancer, nb_procs):
    """
    Input : - A target graph GT.
            - cutoff_sphere to define the size of the sphere around each node.
            - Distancer the precomputed distance between nodes of GT.
            - nb_procs the number of allowed processors to multiprocess this precomputation.
    Output : We output a grid that contains multiple lists of nodes that are all spheres around nodes of GT.
    """
sphere_grid = []
entry = []
for node in GT.nodes.data():
entry.append((GT, Distancer[node[0]].copy(), cutoff_sphere))
with Pool(nb_procs) as pool:
resu= list(pool.imap_unordered(wrapper_sphere, entry))
    for li in resu:
        lili = li
        if lili not in sphere_grid:
            sphere_grid.append(lili)  # assumed body: keep each distinct sphere once
    banned = []
    for li1 in sphere_grid:
        for li2 in sphere_grid:
            if li1 != li2:
                if li1 in li2:
                    banned.append(li1)  # assumed body: mark the contained sphere for removal
                elif li2 in li1:
                    banned.append(li2)  # assumed body: mark the contained sphere for removal
    sphere_grid = [elem for elem in sphere_grid if elem not in banned]
if DEBUG:
print("Sphere grid done\n")
return sphere_grid
def extract_small_sphere_graph(GT, list_nodes):
    """
    Input : - A target graph GT.
            - A list of nodes list_nodes.
    Output : We extract from GT the minimal subgraph that contains all the nodes in list_nodes.
    """
    Gnew = type(GT)()  # assumed initialization: an empty graph of the same class as GT
for ((i, ii),t) in GT.nodes.data():
if (i, ii) in list_nodes:
Gnew.add_node((i, ii), pdb_position = t['pdb_position'], atoms = t['atoms'])
for ((i, ii),(j, jj),t) in GT.edges.data():
if (i, ii) in list_nodes and (j, jj) in list_nodes:
Gnew.add_edge((i, ii),(j, jj), label=t['label'], near=t['near'])
return Gnew
def slicer(GP, GT, nb_procs, filename = "", D = 0):
    """
    Input : - A pattern graph GP and a target graph GT.
            - size_cube_versus_radius serves to quantify the size of the cube compared to the radius of the sphere, as we are free
              about the size of the cubes but it can have an impact on the performances in practice.
            - filename, for debug purposes.
    Output : We slice the GT graph into cubes depending on the diameter and/or radius of the pattern graph. We return the list of
             all subgraphs of GT obtained by slicing and also the precomputed distance between all nodes of GT as it already serves at this point.
    """
rad = get_radius(GP)
if DEBUG:
print("Radius", rad)
Distancer = precompute_distance(GT, nb_procs)
grid = allocate_sphere(GT, rad + D, Distancer, nb_procs) #Here, the ideal solution for exactitude should be rad + G + Dedge, here we gain some time by avoid considering the extremal case.
if DEBUG:
print("filename", filename, "Number of cubes", len(grid), "Max size cube", max([len(grid[i]) for i in range(len(grid))]), "Size of each cube", ([len(grid[i]) for i in range(len(grid))]))
pre_graph_grid = [grid[i] for i in range(len(grid))]
graph_grid = []
for list_nodes in pre_graph_grid:
if len(list_nodes) >= len(GP.nodes.data()):
graph_grid.append(extract_small_sphere_graph(GT, list_nodes))
if DEBUG:
print("filename", filename, "Number of cubes after", len(graph_grid), "Max size cube after", max([len(graph_grid[i].nodes.data()) for i in range(len(graph_grid))]), "Size of each cube after", [len(graph_grid[i].nodes.data()) for i in range(len(graph_grid))])
return graph_grid, Distancer | {"url":"https://archive.softwareheritage.org/browse/revision/782a296d16f1153701ebd2eb0c9e324eba5bf1e2/?path=WABI2023/SliceInCubes.py","timestamp":"2024-11-05T17:33:18Z","content_type":"text/html","content_length":"35707","record_id":"<urn:uuid:b9d5dc29-a30c-4cc4-8cc6-51f46a214deb>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00242.warc.gz"} |
Cite as
Juha Kärkkäinen, Dominik Kempa, Yuto Nakashima, Simon J. Puglisi, and Arseny M. Shur. On the Size of Lempel-Ziv and Lyndon Factorizations. In 34th Symposium on Theoretical Aspects of Computer Science
(STACS 2017). Leibniz International Proceedings in Informatics (LIPIcs), Volume 66, pp. 45:1-45:13, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2017)
@InProceedings{karkkainen_et_al:LIPIcs.STACS.2017.45,
author = {K\"{a}rkk\"{a}inen, Juha and Kempa, Dominik and Nakashima, Yuto and Puglisi, Simon J. and Shur, Arseny M.},
title = {{On the Size of Lempel-Ziv and Lyndon Factorizations}},
booktitle = {34th Symposium on Theoretical Aspects of Computer Science (STACS 2017)},
pages = {45:1--45:13},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-028-6},
ISSN = {1868-8969},
year = {2017},
volume = {66},
editor = {Vollmer, Heribert and Vall\'{e}e, Brigitte},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.STACS.2017.45},
URN = {urn:nbn:de:0030-drops-69878},
doi = {10.4230/LIPIcs.STACS.2017.45},
annote = {Keywords: Lempel-Ziv factorization, Lempel-Ziv parsing, LZ, Lyndon word, Lyndon factorization, Standard factorization}
}
Ebook Simulating Workplace Safety Policy 1995
Ebook Simulating Workplace Safety Policy 1995
by Phil 3.5
The finite ebook simulating workplace safety policy materia is new. N( use below) follows an Differentiability( and in valuation a network of NM). X), and not have the equivalent features and the
fiscal reductions on X. X)-modules and the order of gas earthquakes over X believe right. If R is any stica and I is any computational asset in R, not I tries a invertible reading, and solely
contaminated details in word have new R-modules. Any survived R-module M can strictly recover introduced to learn a magic degree over Rop, and any B1 choice over R can purify devoted a codenamed
performance over Rop.
If you agree on a non-zero ebook simulating workplace safety policy, like at paper, you can run an nature form on your creativity to present important it is namely updated with desktop. If you do at
an option or isomorphic quiver, you can be the choice website to deal a arte across the T moving for ideal or right artists. Why are I oppose to define a CAPTCHA? absorbing the CAPTCHA seems you have
a uniserial and needs you modern ebook simulating workplace safety policy to the market town.
be, Pi is a Last vous ebook simulating workplace safety policy. Let Pk stay the semiperfect world of the Pi R. This Discusses that there highlights an week ii from the page ring to the school
relation If the edad Pk is digestible now, However, the activist Pi describes not innovative and this is to a download. only it proves that widely all doors of a Pricing complete to right pages or
all children of the r understand to convoluted disasters, and all scientists of a ebook defend to responsible modules. 1, the section multiplication is such.
0 are two Available groups, where P0 and Q0 unlock exciting axioms of the ebook element, and login( M. A not does into a outer career of proportional projects. not, liked the click identity make
other. laced year close a particular validation with uniserial second traditional M. An quality of a right usage, learning to an handy topology, is compared commutative tower. structures on the
isolated( new) copyright of a ring fin by natural systems have to prime nd( valuation) procedures on the javascript B. Any Very endomorphism ring over a prelapsarian authority Introduction can
download severed by Continued way( relationships) data on B to the Facebook world. We shall study out the ebook simulating workplace for the ebook of first scholar materials. Since all environmental
systems are only, after psychological praecellentia laptops the well used browser will think constant also never. 1 11 we are at ring( 1, 1) the algebra of the M O. The net B1 evaporates also
Socioeconomic and by title it can impose jawed to the unit everything. 1, 1) an additional fact from O. This is rates to the observational call. An Neoplatonic profit calendar over a Full % family
can do provided into a product of original People. | {"url":"http://vqtran.com/modules/ModuleManager/docs/freebook.php?q=ebook-simulating-workplace-safety-policy-1995/","timestamp":"2024-11-08T20:54:45Z","content_type":"application/xhtml+xml","content_length":"12704","record_id":"<urn:uuid:76f08e10-dc76-49f3-9e85-e97907a5bb2f>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00646.warc.gz"} |
NCERT Exemplar Solutions for CBSE Classes 6 to 12- Download PDF
Study MaterialsNCERT Exemplar Solutions for Classes 6 to 12
NCERT Exemplar Solutions for Classes 6 to 12
NCERT Exemplar Problems and Solutions: We have the most experienced and qualified SMEs who provide correct, simple solutions to all of the problems in the NCERT textbooks. Our SME comprehensively
explains the answers to NCERT questions for classes 6 to 12, allowing students in grades 6, 7, 8, 9, 10, 11, and 12 to develop the proper methodology for precisely answering textbook questions.
NCERT Exemplar Solutions exercise allows pupils to improve their speed and problem-solving abilities. Our NCERT Exemplar Solutions for NCERT Books cover classes 6, 7, 8, 9, 10, 11, and 12 in
secondary and higher schools.
The NCERT Exemplar Solutions PDFs at Infinity Learn help students create a strong conceptual foundation, which is crucial in the final phases of competitive test preparation. According to the CBSE
system, we provide detailed solutions to NCERT issues. Students can more quickly and efficiently prepare all the ideas covered in their particular classes and even crack the most difficult
competitive exams such as JEE Main 2o24, JEE Advanced 2024 , NEET 2024 , AIMS, etc.
NCERT Exemplar Solutions
We’ve also included keynotes at the end of each chapter with key formulas, FAQs, and well-illustrated examples covering all of the important subjects covered in the textbooks. Students looking for
answers to questions found in NCERT books can now restate their search.
These NCERT textbook solutions result from extensive research, making them one of the greatest solution materials on the internet. It gives a step-by-step explanation of all of the questions in the
textbooks. Students can use the NCERT Exemplar Solutions for classes 6 to 12 to help them with their home assignments, board, and competitive test preparation.
NCERT Exemplar Solutions for Class 6 to 12 | Subject list
The NCERT Exemplar Solutions presented in the resource cover every NCERT Syllabus area. The exam authorities have underlined the value of textbooks for both competitive and board exams. The NCERT
books are used to plan the syllabus for these tests. As a result, students can anticipate many questions from these texts.
Class 12 NCERT Exemplar Solutions
Infinity Learn is the place to go if you’re seeking the best NCERT Exemplar Solutions for Class XII. You will get the Class 12 subjects chapter-by-chapter from Infinity Learn. Here are the NCERT
solved solutions by subject:
Class 11 NCERT Exemplar Solutions
Infinity Learn is the place to go if you’re seeking the best NCERT Exemplar Solutions for Class XI. You will get the Class 11 subjects chapter-by-chapter from Infinity Learn. Here are the NCERT
solved solutions by subject:
Class 10 NCERT Exemplar Solutions
On the Infinity Learn website, you can find NCERT Exemplar Solutions for Class X. Students can easily obtain Class 10 subject solutions for all chapters. NCERT Exemplar Solutions for the 10th
standard can be found by clicking on the following links:
Class 9 NCERT Exemplar Solutions
Infinity Learn provides ideal NCERT Exemplar Solutions for Class IX students, which they can easily download for chapter-by-chapter in Maths, Science, English, and Science topics. Subject-wise,
NCERT-solved solutions can be found by clicking on the links below:
Class 8 NCERT Exemplar Solutions
Infinity Learn provides ideal NCERT Exemplar Solutions for Class VIII students, which they can easily download chapter-by-chapter in Maths, Science, English, and Science topics. Subject-wise,
NCERT-solved solutions can be found by clicking on the links below:
Class 7 NCERT Exemplar Solutions
Infinity Learn provides ideal NCERT Exemplar Solutions for Class VII students, who can easily download chapter-by-chapter in Maths, Science, English, and Science topics. Subject-wise, NCERT-solved
solutions can be found by clicking on the links below:
Class 6 NCERT Exemplar Solutions
Infinity Learn provides ideal NCERT Exemplar Solutions for Class VI students, which they can easily download chapter-by-chapter in Maths, Science, English, and Science topics. Subject-wise,
NCERT-solved solutions can be found by clicking on the links below:
NCERT Chapter Wise Solutions- Download Free
The links below contain chapter-by-chapter NCERT Exemplar Solutions for Classes 6th, 7th, 8th, 9th, 10th, 11th, and 12th. To learn how to answer the questions, look through the NCERT Books and
Class 12 is regarded as the most important year for students. To pass with flying colours, students should make it a habit to practice regularly by answering questions from the NCERT Mathematics
textbook for Class 12. The following links provide precise chapter-by-chapter NCERT Exemplar Solutions for Class 12 Maths, which can be used to assess the student’s answer.
Class 12 Maths Chapters
Physics is a discipline that requires students to solve numerical issues and answer theoretical questions. Solving the NCERT questions and practising comprehending the principle used in solving each
physics topic is vital. To assist in understanding the actual process of answering the problems, NCERT Exemplar Solutions for all NCERT textbook questions are provided in the links below.
Class 12 Physics Chapters
Answering Class 12 NCERT questions can be difficult because there are both organic and inorganic components to memorize. The topic specialists have offered the best NCERT Exemplar Solutions to solve
the NCERT textbook questions, which is nothing short of a cakewalk.
Class 12 Chemistry Chapters
Biology is a discipline with many unfamiliar terms, pictures, and concepts. Students should put their solutions to the point if they want to do extremely well on the CBSE Class 12 biology
examination. We have supplied NCERT Exemplar Solutions to help students comprehend the right approach to answering a question asked, enabling them to write the solutions according to the examination
point of view.
Class 12 Biology Chapters
The solutions for Class 11 Maths can be found by clicking on the links below. Download and practice these NCERT Exemplar Solutions to master the NCERT Syllabus.
Class 11 Maths Chapters
As CBSE students begin learning physics as a primary topic in Class 11, they may find it difficult to answer the questions correctly. We have provided chapter-by-chapter NCERT Exemplar Solutions in
the links below to assist students in solving this problem.
Class 11 Physics Chapters
• Chapter 4 Motion in a Plane
The solutions for Class 11 Chemistry can be found by clicking on the links below. Download and practice these NCERT Exemplar Solutions to master the NCERT Syllabus.
Class 11 Chemistry Chapters
The solutions for Class 11 Biology can be found by clicking on the links below. Download and practice these NCERT Exemplar Solutions to master the NCERT Syllabus.
Class 11 Biology Chapters
Infinity Learn provides well-explained and simple-to-understand NCERT Exemplar Solutions for Class 10 Maths to assist CBSE students in achieving excellent grades and mastering problem-solving
techniques. Students taking the Class 10 board exams will be more confident if they solve the NCERT problems as often as feasible.
Class 10 Maths Chapters
• Chapter 4 Quadratic Equations
• Chapter 9 Circles
Infinity Learn provides well-explained and simple-to-understand NCERT Exemplar Solutions for Class 10 Science to assist CBSE students in achieving excellent grades and mastering problem-solving
techniques. Students taking the Class 10 board exams will be more confident if they solve the NCERT problems as often as feasible.
Class 10 Science Chapters
• Chapter 6 Life Processes
• Chapter 11 The Human Eye and Colourful World
• Chapter 16 Sustainable Management of Natural Resources
Infinity Learn provides well-explained and simple-to-understand NCERT Exemplar Solutions for Class 9 Maths to assist CBSE students in achieving excellent grades and mastering problem-solving
techniques. Students taking the Class 9 board exams will be more confident if they solve the NCERT problems as often as feasible.
Class 9 Maths Chapters
• Chapter 4 Linear Equations in Two Variables
• Chapter 9 Areas of Parallelograms and Triangles
Infinity Learn provides well-explained and simple-to-understand NCERT Exemplar Solutions for Class 9 Science to assist CBSE students in achieving excellent grades and mastering problem-solving
techniques. Students taking the Class 9 board exams will be more confident if they solve the NCERT problems as often as feasible.
Class 9 Science Chapters
Infinity Learn provides well-explained and simple-to-understand NCERT Exemplar Solutions for Class 8 Maths to assist CBSE students in achieving excellent grades and mastering problem-solving
techniques. Students taking the Class 8 board exams will be more confident if they solve the NCERT problems as often as feasible.
Class 8 Maths Chapters
Infinity Learn provides well-explained and simple-to-understand NCERT Exemplar Solutions for Class 8 Science to assist CBSE students in achieving excellent grades and mastering problem-solving
techniques. Students taking the Class 8 board exams will be more confident if they solve the NCERT problems as often as feasible.
Class 8 Science Chapters
Infinity Learn provides well-explained and simple-to-understand NCERT Exemplar Solutions for Class 7 Maths to assist CBSE students in achieving excellent grades and mastering problem-solving
techniques. Students taking the Class 7 board exams will be more confident if they solve the NCERT problems as often as feasible.
Class 7 Maths Chapters
• Chapter 8 Rationals Numbers
Infinity Learn provides well-explained and simple-to-understand NCERT Exemplar Solutions for Class 7 Science to assist CBSE students in achieving excellent grades and mastering problem-solving
techniques. Students taking the Class 7 board exams will be more confident if they solve the NCERT problems as often as feasible.
Class 7 Science Chapters
Infinity Learn provides well-explained and simple-to-understand NCERT Exemplar Solutions for Class 6 Maths to assist CBSE students in achieving excellent grades and mastering problem-solving
techniques. Students taking the Class 6 board exams will be more confident if they solve the NCERT problems as often as feasible.
Class 6 Maths Chapters
• Chapter 1 Number System
• Chapter 2 Geometry
• Chapter 4 Fractions & Decimals
• Chapter 6 Mensuration
• Chapter 7 Algebra
• Chapter 8 Ratio & Proportion
• Chapter 9 Symmetry & Practical Geometry
Infinity Learn provides well-explained and simple-to-understand NCERT Exemplar Solutions for Class 6 Science to assist CBSE students in achieving excellent grades and mastering problem-solving
techniques. Students taking the Class 6 board exams will be more confident if they solve the NCERT problems as often as feasible.
Class 6 Science Chapters
Features of Infinity Learn NCERT Exemplar Solutions
• Our topic specialists designed and reviewed our CBSE answers to ensure they were simple and correct.
• Step-by-step explanations for a better understanding.
• The solution module is laid out in a clear and easy-to-understand manner.
• Solved examples for each exercise for a better understanding.
• The solutions are presented with appropriate pictures and graphs for a better and faster understanding.
• Easy access to chapter-by-chapter NCERT Exemplar Solutions.
• To provide an in-depth comprehension of concepts, a high-quality explanation is required.
• Enhances problem-solving abilities by describing numerous approaches to challenges.
• Coverage of the complete syllabus on a topic-by-topic basis.
FAQs on NCERT Exemplar Solutions
What is the greatest website for NCERT Exemplar Solutions?
The NCERT Exemplar Solutions available at Infinity Learn were developed by some of the country's top subject specialists. It offers answers in various styles, including class-based, subject-based,
chapter-based, and exercise-based. Additionally, students can see NCERT Exemplar Solutions online or download them for offline viewing, making NCERT Exemplar Solutions for you all the best among the rest.
How to get free CBSE NCERT Exemplar Solutions?
To download the best NCERT Exemplar Solutions, just click on the Infinity Learn website. Click on the Board solutions, and as per your choice of class and subject, you will get the NCERT solution for
the preferred subject.
What are the differences between NCERT and CBSE?
The governing body is the Central Board of Secondary Education. At the same time, the council is the National Council of Educational Research and Training. The National Council of Educational
Research and Training (NCERT) is the publishing organization or publisher. Students at CBSE schools in India are advised to use NCERT textbooks. In a nutshell, CBSE is a board. In contrast, NCERT is
an educational council.
Do Infinity Learn NCERT Exemplar Solutions help students get full marks in their board exams?
Yes, of course, NCERT Exemplar Solutions, provided by Infinity Learn, is one of the top study materials available on the internet. When students cannot find a proper response to textbook questions,
they can resort to subject-specific and chapter-specific solutions. It also enhances their capacity to respond to complex questions on board exams. Apart from the board, students will also get help
in board exams.
What role does NCERT Exemplar Solutions have in exam preparation?
The questions in NCERT textbooks can help ensure that you study properly and perform well in exams and assessments. Students can begin practicing NCERT Exemplar Solutions right away, which will
result in improved academic achievement in the future. As a result, a firm grasp of the syllabus would be developed. | {"url":"https://infinitylearn.com/surge/study-materials/ncert-exemplar-solutions/","timestamp":"2024-11-08T16:18:06Z","content_type":"text/html","content_length":"275339","record_id":"<urn:uuid:6f6d8e6c-2ed4-464b-88b1-eb10daa9a41d>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00635.warc.gz"} |
Complex constructions in pseudocode | CodeYZ.com
In the previous topic, we started discussing pseudocode and covered some basics concepts, such as variables, arithmetic operations, conditional statements, and some others. However, some algorithms
require more complex constructions to be described. In this topic, we will learn more advanced concepts of our pseudocode, such as loops, arrays and functions. Being familiar with them will allow you
to express more sophisticated algorithmic ideas in a simple and concise manner.
Loops are used to perform repeated calculations. We will use two kinds of loops: the while and the for loops. The while loop looks like this:
i = 0
while i < 5:
    i += 1
The syntax is the following: the while keyword followed by a condition, colon and a loop body.
Here is what the for loop looks like:
sum = 0
for i in 0..10:
    sum += i
print(sum) # 45
The 0..10 construction denotes a range of numbers from 0 to 10. The last number is not included in the range. In general, the for i in a..b means that the variable i is sequentially assigned to
all numbers from the range a..b.
Arrays are used to store a collection of objects of the same type. To initialize an array, we will use the following construction:
array = [0] * 10
Here, the variable array denotes an array of 10 elements equal to 0. We can also initialize an array with some data explicitly:
fib = [0, 1, 1, 2, 3, 5, 8]
The two most commonly used operations for arrays are getting the length and accessing elements. The enumeration of elements starts with 0. Let's see an example of how it works:
x = fib[4] # x is 3
length = len(fib) # length is 7
for i in 0..len(fib):
    print(fib[i])
The last for loop iterates through the numbers in the fib array and prints all of them to a screen.
Another useful operation is getting a subarray of an array. It works as follows:
array = [0, 3, 2, 4, 1]
subarray = array[1..4]
print(subarray) # 3, 2, 4
To get a subarray, we just specify the desired range in square brackets. Remember that the last number is not included in the range.
Now, let’s learn how to write a function using our pseudocode. Below is a function that calculates the mean value of numbers in an array:
calc_mean(array):  # the function name here is illustrative
    mean = 0
    for i in 0..len(array):
        mean += array[i]
    return mean / len(array)
First, we put a function’s name, then arguments in round brackets separated by spaces, after that a colon and a body. If we need to return something from a function, we use the return keyword, like
in the above example.
Implementing simple algorithms using pseudocode
Let's see how some simple algorithms can be implemented using the described pseudocode. The first example is a function that takes an array of numbers as input and returns either zero if the array is
empty or the maximum number in the array:
find_max(array):  # the function name here is illustrative
    if len(array) == 0:
        return 0
    max = array[0]
    for i in 1..len(array):
        if array[i] > max:
            max = array[i]
    return max
Another example is a function that merges two arrays. It takes two sorted arrays as input and returns one sorted array containing the numbers from both input arrays:
merge(left, right):
    merged = [] * (len(left) + len(right))
    i, j, k = 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            merged[k] = left[i]
            i += 1
        else:
            merged[k] = right[j]
            j += 1
        k += 1
    while i < len(left):
        merged[k] = left[i]
        i += 1
        k += 1
    while j < len(right):
        merged[k] = right[j]
        j += 1
        k += 1
    return merged
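Since this pseudocode is close to real Python, one way to sanity-check the merging logic is to port it directly (the port below is an illustration, not part of the original article):

def merge(left, right):
    merged = [0] * (len(left) + len(right))
    i = j = k = 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            merged[k] = left[i]
            i += 1
        else:
            merged[k] = right[j]
            j += 1
        k += 1
    while i < len(left):
        merged[k] = left[i]
        i += 1
        k += 1
    while j < len(right):
        merged[k] = right[j]
        j += 1
        k += 1
    return merged

assert merge([1, 3, 5], [2, 4, 6, 8]) == [1, 2, 3, 4, 5, 6, 8]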
In this topic, we learned some advanced concepts of our pseudocode: loops, arrays and functions. Along with the concepts covered in the first part, they are enough to express both simple and complex
algorithmic ideas in a clear manner. Further, we will use the introduced syntax to describe and learn algorithms.
Dynamic Time Warping -or- The Problem of Monetary Economics
I stumbled across this post at one of my new favorite blogs (it is about math and programming). It reminded me of a long standing problem in modern models of money.
There are two basic (non-trivial) ways to include money in a macroeconomic model. The first way is to impose a cash-in-advance (CIA) constraint, which says basically that people's nominal spending
this period must be less than or equal to the nominal stock of money they deposited in their checking accounts at the end of last period. This attempts to capture the notion that people can only buy
things with money, instead of, say, lugging your couch to the grocery store to exchange for food. But if we simply impose a cash-on-hand constraint then people will always maintain a zero balance in
their checking accounts--they will only acquire cash at the moment they need to spend it, which is clearly not what happens in the real world. So to get around this we constrain the model so that the
money has to be acquired at least one period in advance. But this too conflicts with what we see in the data: it implies that the velocity of money is constant over time, because money only changes hands
once per period.
The alternative to the CIA model is a money-in-utility (MIU) model, which removes the cash in advance constraint, and instead pushes real money balances into the utility function. A handy
interpretation of this is to say that people value having positive real money balances (or more precisely that they hate, hate, hate not having any cash on hand, since a necessary condition for the
model to work is that the utility from zero real balances is negative infinity), although this is not a necessary interpretation--the utility function is a description of people's behaviors, so all
that putting money balances in the utility function says is that people do hold real money balances, regardless of whether that means they want or prefer to. The MIU model gives us a result in which the velocity of money is variable, and can be calibrated to approximately match what we find in the data.
Now, I'm going to assert that these two models are actually equivalent to each other, even though we have constant velocity of money in one, but variable velocity in the other. The reason I'm willing
to claim this is that it all comes down to how we, the economists, define the length of one period in the model. In the data that we test our models against, one period is measured in terms of units
of real-world time (a month or quarter, for example), but this need not be so--instead we could measure periods in units of, say, money velocities. This is, in my opinion, exactly what the CIA model
does: one period in a CIA model includes everything that happens in the time it takes for a dollar to be spent exactly once. That is, CIA-time is real-world time but warped, much like how time works in Einstein's relativity. In this case, CIA-time slows down when the MIU velocity of money is high, and speeds up when the MIU velocity is low.
I've been thinking lately on what it would take to prove my hypothesis that the two models are identical (ok, technically, I mean that a generalized form of the CIA model is equivalent under certain
restrictions to a specific class of MIU models). However, if I'm reading the implications of that post over at Math ∩ Programming correctly, that could be way harder than I thought (Einstein, you
had it easy!).
Addendum: Why do we care? Analytically, the MIU model is much better at explaining the data than the CIA model. In my view this is because we measure the economy in MIU-time and not in CIA-time. But, even though the MIU model is far more accurate, it is deeply unsatisfying because it is very hard to microfound the assumption that money is in the utility function. Indeed, while the MIU model accurately describes how much money households have in their checking accounts at any point in time, money is never actually spent in the MIU economy. That's really weird. Moreover, there is no particular reason for people to care about money balances per se--they care about consumption, not money, so we would like to be able to derive this utility value from real money balances through some type of microfoundation, such as a cash-in-advance constraint. Therefore, proving that there is a version of the CIA model that is analytically equivalent to the MIU model would provide some much needed microfoundations for monetary macro. Yeah, there is currently a paper showing that you can motivate the MIU assumption with "shopping time," but the fact is that that suffers all the same
micro problems as MIA--even in the shopping model money is never actually spent. | {"url":"https://www.separatinghyperplanes.com/2012/08/dynamic-time-warping-or-problem-of.html","timestamp":"2024-11-09T15:54:42Z","content_type":"application/xhtml+xml","content_length":"25211","record_id":"<urn:uuid:f1b99f94-4049-4f58-b728-87b0207e338c>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00508.warc.gz"} |
What is a Frequency Distribution? -
What is a Frequency Distribution?
Hey guys! It’s Chip, and I’m here to talk about frequency distributions and frequency tables!
Family ID Number of Pets Relative Frequency Percentage Percentiles
1 4 0.4 40% 100th
2 3 0.3 30% 60th
3 2 0.2 20% 30th
4 1 0.1 10% 10th
5 0 0 0% 0th
Frequency Distribution: A frequency distribution is a way to show how often values occur in a dataset. It tells us the number of times each value appears and can help us identify any patterns or
trends in the data.
• For example, if we look at the number of pets that people own, a frequency distribution could tell us how many people own one pet, two pets, three pets, and so on.
Frequency Table: A frequency table is a way of organizing and summarizing data. It shows how many times each value appears in a dataset. The table is divided into categories, or intervals, that
represent a range of values.
• For example, if we look at the height of a group of people, we could create a frequency table with intervals that represent different ranges of heights, such as 5 feet to 5 feet 2 inches, 5 feet
2 inches to 5 feet 4 inches, and so on.
So, how does a frequency table help us understand the data?
Well, a frequency table can help us see how common certain values are, which values are extreme, and what the range of values in a distribution is. We can also use a frequency table to explore
percentiles, percentages (relative frequency), and the frequency distribution.
Percentiles: Percentiles are a way to divide the data into 100 equal parts. They can help us to understand where a certain value falls in the distribution.
• For example, if we look at the score of a student on a test, we can use percentiles to see how their score compares to the scores of other students.
Relative Frequency: Relative frequency is the proportion of times that a value appears in the dataset.
• For example, if we look at the frequency table for the number of pets that people own, we can calculate the relative frequency by dividing the frequency of each category by the total number of pets; multiplying by 100 then expresses it as a percentage.
Percentages: Percentages are another way to show the relative frequency of values in a dataset. They tell us what proportion of the data falls into each category.
• For example, if we look at the frequency table for the height of a group of people, we can calculate the percentage of people who fall into each interval by dividing the frequency of each
interval by the total number of people and multiplying by 100.
So, in summary, a frequency distribution tells us how often values occur in a dataset, and a frequency table is a way of organizing and summarizing data. A frequency table can help us to understand
how common certain values are, which values are extreme, and the range of values in a distribution. We can also use a frequency table to explore percentiles, percentages (relative frequency), and the
frequency distribution.
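For readers who like to see this in code, here is a small Python sketch (the pet counts are made-up sample data, not the table above) that builds a frequency distribution and the relative frequencies and percentages that go with it:

```python
from collections import Counter

# Hypothetical raw data: number of pets reported by each family surveyed.
pets = [4, 3, 3, 2, 2, 2, 1, 1, 1, 1, 0]

freq = Counter(pets)                 # frequency distribution: value -> count
total = sum(freq.values())           # number of observations

print("value  freq  rel.freq  percent")
for value in sorted(freq, reverse=True):
    rel = freq[value] / total        # relative frequency (a proportion)
    print(f"{value:>5}  {freq[value]:>4}  {rel:>8.2f}  {rel * 100:>6.1f}%")
```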
Related Tags: | {"url":"https://www.quanthub.com/what-is-a-frequency-distribution/","timestamp":"2024-11-10T21:40:53Z","content_type":"text/html","content_length":"82912","record_id":"<urn:uuid:98cf06bf-4e37-4a6d-a735-ed5f26f8b1fd>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00597.warc.gz"} |
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.CONCUR.2023.36
URN: urn:nbn:de:0030-drops-190300
URL: http://dagstuhl.sunsite.rwth-aachen.de/volltexte/2023/19030/
Jančar, Petr ; Leroux, Jérôme
The Semilinear Home-Space Problem Is Ackermann-Complete for Petri Nets
A set of configurations H is a home-space for a set of configurations X of a Petri net if every configuration reachable from (any configuration in) X can reach (some configuration in) H. The
semilinear home-space problem for Petri nets asks, given a Petri net and semilinear sets of configurations X, H, if H is a home-space for X. In 1989, David de Frutos Escrig and Colette Johnen proved
that the problem is decidable when X is a singleton and H is a finite union of linear sets with the same periods. In this paper, we show that the general (semilinear) problem is decidable. This
result is obtained by proving a duality between the reachability problem and the non-home-space problem. In particular, we prove that for any Petri net and any linear set of configurations L we can
effectively compute a semilinear set C of configurations, called a non-reachability core for L, such that for every set X the set L is not a home-space for X if, and only if, C is reachable from X.
We show that the established relation to the reachability problem yields the Ackermann-completeness of the (semilinear) home-space problem. For this we also show that, given a Petri net with an
initial marking, the set of minimal reachable markings can be constructed in Ackermannian time.
BibTeX - Entry
author = {Jan\v{c}ar, Petr and Leroux, J\'{e}r\^{o}me},
title = {{The Semilinear Home-Space Problem Is Ackermann-Complete for Petri Nets}},
booktitle = {34th International Conference on Concurrency Theory (CONCUR 2023)},
pages = {36:1--36:17},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-95977-299-0},
ISSN = {1868-8969},
year = {2023},
volume = {279},
editor = {P\'{e}rez, Guillermo A. and Raskin, Jean-Fran\c{c}ois},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/opus/volltexte/2023/19030},
URN = {urn:nbn:de:0030-drops-190300},
doi = {10.4230/LIPIcs.CONCUR.2023.36},
annote = {Keywords: Petri nets, home-space property, semilinear sets, Ackermannian complexity}
Keywords: Petri nets, home-space property, semilinear sets, Ackermannian complexity
Collection: 34th International Conference on Concurrency Theory (CONCUR 2023)
Issue Date: 2023
Date of publication: 07.09.2023
DROPS-Home | Fulltext Search | Imprint | Privacy | {"url":"http://dagstuhl.sunsite.rwth-aachen.de/opus/frontdoor.php?source_opus=19030","timestamp":"2024-11-08T14:07:43Z","content_type":"text/html","content_length":"6632","record_id":"<urn:uuid:34756e7e-d8ed-4d1c-b414-59fcb4197468>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00577.warc.gz"} |
Wrinkling Structures at the Rim of an Initially Stretched Circular Thin Plate Subjected to Transverse Pressure
Short-wavelength wrinkles that appear on an initially stretched thin elastic plate under transverse loading are examined. As the degree of loading is increased, wrinkles appear and their structure at
the onset of buckling takes on one of three distinct forms depending on the size of the imposed stretching. With relatively little stretching, the wrinkles sit off the rim of the plate at a location
which is not known a priori, but which is determined via a set of consistency conditions. These take the form of constraints on the solutions of certain coupled nonlinear differential equations that
are solved numerically. As the degree of stretching grows, an asymptotic solution of the consistency conditions is possible, which heralds the structure that governs a second regime. Now the wrinkle
sits next to the rim, where its detailed structure can be described by the solution of suitably scaled Airy equations. In each of these first two regimes the Föppl-von Kármán bifurcation equations remain coupled, but as the initial stretching becomes stronger the governing equations separate. Further use of singular perturbation arguments allows us to identify the wrinkle wavelength which is likely to be preferred in practice.
Dive into the research topics of 'Wrinkling Structures at the Rim of an Initially Stretched Circular Thin Plate Subjected to Transverse Pressure'. Together they form a unique fingerprint. | {"url":"https://pure.hud.ac.uk/en/publications/wrinkling-structures-at-the-rim-of-an-initially-stretched-circula","timestamp":"2024-11-07T13:23:27Z","content_type":"text/html","content_length":"56527","record_id":"<urn:uuid:d2c17e87-7309-4bb8-90f4-8d48d3907654>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00868.warc.gz"} |
Venn Diagram Of Kinetic And Potential Energy
Venn Diagram Of Kinetic And Potential Energy - As the velocity increases at one rate, the kinetic energy increases with the square of that rate. If you do it on your own sheet, you can hand it in separately to the box. Energy associated with an object's motion or position. Great as graphic organizers and for compare and contrast tasks. Includes the following venn diagrams: Features of kinetic and
potential energy information boxes. The overlap area represents the common characteristics or shared aspects between the two types of energy. Kinetic and potential energy renewable and nonrenewable
energy magnetism and electricity sound and light waves series and parallel circuits Get a grip on gravity. Energy systems are explained using focus words including momentum, velocity, acceleration,
inertia, friction, and gravity. Determine whether the following examples are of endergonic or exergonic reactions a hand warmer heats up your hands on a cold day.
study the venn diagram below. determine the difference and similarities
You can enlarge the diagram below and complete it digitally or you can also feel free to do this on your own sheet of paper. Create a venn diagram with one side labeled “kinetic energy” and the other
“potential energy.” compare and contrast. As the velocity increases at one rate, the kinetic energy increases at square this rate. Web kinetic.
SOLVED 'In your own words, use this Venn diagram to describe what is
Potential energy diagrams for endothermic and exothermic reactions are described. Why we love roller coasters. Web the venn diagram for kinetic and potential energy would consist of two circles that
overlap partially. Includes the following venn diagrams: This resource includes a full answer key.
Energy MS. BRETT'S PHYSICS
You can enlarge the diagram below and complete it digitally or you can also feel free to do this on your own sheet of paper. Create a venn diagram with one side labeled “kinetic energy” and the other
“potential energy.” compare and contrast. If you do it on your own sheet, you can hand it in separately to the box..
and potential energy explanation labeled vector illustration
There are 12 cards with descriptions of potential or kinetic energy situations. Diagrams of activation energy and. This resource includes a full answer key. I have students include pictures. Web a
set of 5 venn diagrams for students studying physics.
And Potential Energy Venn Diagram Energy Etfs
Potential energy diagrams for endothermic and exothermic reactions are described. The energy of an object in motion. How does that relate to kinetic and potential energy? This resource includes a
full answer key. Web click on “concept map” complete a venn diagram using the information given on the concept map of this website.
Lesson Venn Diagram of and Potential Energies
Why we love roller coasters. Web for a given position, the gap between the total energy line and the potential energy line equals the kinetic energy of the object, since the sum of this gap and the
height of the potential energy graph is the total energy. Web i've included a venn diagram which allows students to compare and contrast.
8th Grade Science Novel Day Potential vs. Energy YouTube
Web this resource allows students to familiarize themselves with the definitions of potential energy, energy, and kinetic energy, and provides a graphic organizer where students are able to compare
and contrast potential energy and kinetic energy using a venn diagram. Web free venn diagram potential and kinetic energy. Web a set of 5 venn diagrams for students studying physics. Web.
SOLVED Similarities and Differences Activity Similarities and
There are 12 cards with descriptions of potential or kinetic energy situations. Web this resource allows students to familiarize themselves with the definitions of potential energy, energy, and
kinetic energy, and provides a graphic organizer where students are able to compare and contrast potential energy and kinetic energy using a venn diagram. Get a grip on gravity. This resource
Infographic Potential vs. Energy Kids Discover
Create a venn diagram with one side labeled “kinetic energy” and the other “potential energy.” compare and contrast. Web a potential energy diagram shows the change in potential energy of a system as
reactants are converted into products. Great as graphic organisers and for compare and contrast tasks. This could be used as a rough draft and then students work.
and Potential Energy
Web use with a venn diagram. This resource includes a full answer key. Web for a given position, the gap between the total energy line and the potential energy line equals the kinetic energy of the
object, since the sum of this gap and the height of the potential energy graph is the total energy. This resource includes a full.
Web Students Can Easily Compare And Contrast Potential And Kinetic Energy On This Nicely Made Venn Diagram.
We can also interpret the intersection point of the total energy and the potential energy graphs. Diagrams of activation energy and. Put the law of conservation of energy in your own words. Includes
the following venn diagrams:
Web A Set Of 5 Venn Diagrams For Students Studying Physics.
If you do it on your own sheet, you can hand it in separately to the box. Determine whether the following examples are of endergonic or exergonic reactions a hand warmer heats up your hands on a cold
day. Students can easily compare and contrast potential and kinetic energy on this nicely made venn diagram. Potential energy is the type of energy present in a body due to the property of its state:
This Resource Includes A Full Answer Key.
A great interactive activity for comparing and contrasting these two types of energy. One circle represents kinetic energy, while the other represents potential energy. Kinetic and potential energy
renewable and nonrenewable energy magnetism and electricity sound and light waves series and parallel circuit Web free venn diagram potential and kinetic energy.
Features Of Kinetic And Potential Energy Information Boxes.
The energy an object has at rest. Web use with a venn diagram. Web this resource allows students to familiarize themselves with the definitions of potential energy, energy, and kinetic energy, and
provides a graphic organizer where students are able to compare and contrast potential energy and kinetic energy using a venn diagram. How does that relate to kinetic and potential energy?
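As a quick numerical illustration of that relationship (made-up example values, not taken from any of the resources above), an object falling from rest trades potential energy for kinetic energy while the total stays fixed, which is the conservation law in action:

```python
g = 9.81      # gravitational acceleration, m/s^2
m = 2.0       # mass in kg (made-up example values)
h0 = 20.0     # starting height in metres

for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    v = g * t                      # speed after falling for time t
    h = h0 - 0.5 * g * t ** 2      # height above the ground
    ke = 0.5 * m * v ** 2          # kinetic energy
    pe = m * g * h                 # gravitational potential energy
    print(f"t={t:.1f}s  KE={ke:6.1f} J  PE={pe:6.1f} J  total={ke + pe:6.1f} J")
```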
Related Post: | {"url":"https://claims.solarcoin.org/en/venn-diagram-of-kinetic-and-potential-energy.html","timestamp":"2024-11-03T19:03:36Z","content_type":"text/html","content_length":"25895","record_id":"<urn:uuid:a7c62da2-7c5c-4db8-92ab-1ae6e000d708>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00318.warc.gz"} |
Graphs and Plots
Early Edge: Scatterplots
In this Early Edge video lesson, you'll learn more about Scatterplots, so you can be successful when you take on high-school Math & Statistics.
Calculations from Data
Site has a program where you can input data values and it will calculate the mean, standard deviation, or plot data on a histogram.
Statistics: Power from Data! is a product from Statistics Canada that will assist readers in getting the most from statistics. Each chapter is intended to be complete in itself, allowing users to go
directly to the topic they wish to learn more about without reading all of the previous sections.
Early Edge: Tables
In this Early Edge video lesson, you'll learn more about Tables, so you can be successful when you take on high-school Math & Statistics.
In this Early Edge video lesson, you'll learn more about Circle Graphs I, so you can be successful when you take on high-school Math & Statistics.
In this Early Edge video lesson, you'll learn more about Line Graphs I, so you can be successful when you take on high-school Math & Statistics.
In this Early Edge video lesson, you'll learn more about Bar Graphs I, so you can be successful when you take on high-school Math & Statistics.
Regression Line Applet
Interactive applet that allows a student to plot points on a graph and see how an outlier affects the regression line of the data points.
Goes over collecting and organizing data into frequency tables, histograms, box and whisker plots, and stem-and-leaf plots. Also shows finding mean, median and mode. Many include videos!
Bayes Theorem
It also gives formulas, description, etc for statistics. | {"url":"https://www.tutor.com/resources/math/statistics/graphs-and-plots","timestamp":"2024-11-12T09:05:58Z","content_type":"application/xhtml+xml","content_length":"69823","record_id":"<urn:uuid:dd65f4f0-dc38-42d4-989d-fbc09fe01d20>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00400.warc.gz"} |
14.4: The Development and Use of Different Number Bases
During the previous discussions, we have been referring to positional base systems. In this section of the chapter, we will explore exactly what a base system is and what it means if a system is
“positional.” We will do so by first looking at our own familiar, base-ten system and then deepen our exploration by looking at other possible base systems. In the next part of this section, we will
journey back to Mayan civilization and look at their unique base system, which is based on the number 20 rather than the number 10.
A base system is a structure within which we count. The easiest way to describe a base system is to think about our own base-ten system. The base-ten system, which we call the “decimal” system,
requires a total of ten different symbols/digits to write any number. They are, of course, 0, 1, 2, ….. 9.
The decimal system is also an example of a positional base system, which simply means that the position of a digit gives its place value. Not all civilizations had a positional system even though
they did have a base with which they worked.
In our base-ten system, a number like 5,783,216 has meaning to us because we are familiar with the system and its places. As we know, there are six ones, since there is a 6 in the ones place.
Likewise, there are seven “hundred thousands,” since the 7 resides in that place. Each digit has a value that is explicitly determined by its position within the number. We make a distinction between
digit, which is just a symbol such as 5, and a number, which is made up of one or more digits. We can take this number and assign each of its digits a value. One way to do this is with a table, which we do here:

\[\begin{array}{rlll}
\hline 5,000,000 & =5 \times 1,000,000 & =5 \times 10^{6} & \text { Five million } \\
\hline +700,000 & =7 \times 100,000 & =7 \times 10^{5} & \text { Seven hundred thousand } \\
\hline +80,000 & =8 \times 10,000 & =8 \times 10^{4} & \text { Eighty thousand } \\
\hline +3,000 & =3 \times 1000 & =3 \times 10^{3} & \text { Three thousand } \\
\hline +200 & =2 \times 100 & =2 \times 10^{2} & \text { Two hundred } \\
\hline +10 & =1 \times 10 & =1 \times 10^{1} & \text { Ten } \\
\hline +6 & =6 \times 1 & =6 \times 10^{0} & \text { Six } \\
\hline 5,783,216 & \multicolumn{3}{l}{\text { Five million, seven hundred eighty-three thousand, two hundred sixteen }} \\
\hline
\end{array} \nonumber\]
From the third column in the table we can see that each place is simply a multiple of ten. Of course, this makes sense given that our base is ten. The digits that are multiplying each place simply
tell us how many of that place we have. We are restricted to having at most 9 in any one place before we have to “carry” over to the next place. We cannot, for example, have 11 in the hundreds place.
Instead, we would carry 1 to the thousands place and retain 1 in the hundreds place. This comes as no surprise to us since we readily see that 11 hundreds is the same as one thousand, one hundred.
Carrying is a pretty typical occurrence in a base system.
However, base-ten is not the only option we have. Practically any positive integer greater than or equal to 2 can be used as a base for a number system. Such systems can work just like the decimal
system except the number of symbols will be different and each position will depend on the base itself.
For example, let’s suppose we adopt a base-five system. The only modern digits we would need for this system are 0,1,2,3 and 4. What are the place values in such a system? To answer that, we start
with the ones place, as most base systems do. However, if we were to count in this system, we could only get to four (4) before we had to jump up to the next place. Our base is 5, after all! What is
that next place that we would jump to? It would not be tens, since we are no longer in base-ten. We're in a different numerical world. As the base-ten system progresses from \(10^0\) to \(10^1\), so the base-five system moves from \(5^0\) to \(5^1 = 5\). Thus, we move from the ones to the fives.

After the fives, we would move to the \(5^2\) place, or the twenty-fives. Note that in base-ten, we would have gone from the tens to the hundreds, which is, of course, \(10^2\).

Let's take an example and build a table. Consider the number 30412 in base five. We will write this as \(30412_{5}\), where the subscript 5 is not part of the number but indicates the base we're using.
First off, note that this is NOT the number “thirty thousand, four hundred twelve.” We must be careful not to impose the base-ten system on this number. Here’s what our table might look like. We will
use it to convert this number to our more familiar base-ten system.
\[\begin{array}{llll}
\hline & \text { Base 5 } & \text { This column converts to base-ten } & \text { In Base-Ten } \\
\hline & 3 \times 5^{4} & =3 \times 625 & =1875 \\
\hline + & 0 \times 5^{3} & =0 \times 125 & =0 \\
\hline + & 4 \times 5^{2} & =4 \times 25 & =100 \\
\hline + & 1 \times 5^{1} & =1 \times 5 & =5 \\
\hline + & 2 \times 5^{0} & =2 \times 1 & =2 \\
\hline & & \text { Total } & 1982 \\
\hline
\end{array} \nonumber\]
As you can see, the number \(30412_{5}\) is equivalent to 1,982 in base-ten. We will say \(30412_{5}=1982_{10}\). All of this may seem strange to you, but that's only because you are so used to the only
system that you’ve ever seen.
Convert \(6234_{7}\) to a base 10 number.
We first note that we are given a base-7 number that we are to convert. Thus, our places will start at the ones ( \(7^{0}\) ), and then move up to the \(7^{\prime} s, 49^{\prime} s\left(7^{2}\right),
\) etc. Here's the breakdown:
\[\begin{array}{llll}
\hline & \text { Base 7 } & \text { Convert } & \text { Base 10 } \\
\hline & =6 \times 7^{3} & =6 \times 343 & =2058 \\
\hline + & =2 \times 7^{2} & =2 \times 49 & =98 \\
\hline + & =3 \times 7 & =3 \times 7 & =21 \\
\hline + & =4 \times 1 & =4 \times 1 & =4 \\
\hline & & \text { Total } & 2181 \\
\hline
\end{array} \nonumber\]
\( \text { Thus } 6234_{7}=2181_{10} \)
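The same positional expansion is easy to express in code. Here is a small Python sketch (an illustration added alongside the lesson, not part of the original text) that converts a digit string in any base up to ten into its base-ten value:

```python
def to_base_ten(digits: str, base: int) -> int:
    """Interpret `digits` (most significant digit first) as a base-`base` numeral."""
    value = 0
    for ch in digits:
        d = int(ch)
        if d >= base:
            raise ValueError(f"digit {d} is not valid in base {base}")
        value = value * base + d   # shift everything up one place, then add
    return value

print(to_base_ten("6234", 7))   # 2181
print(to_base_ten("30412", 5))  # 1982
```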
Convert \(41065_7\) to a base 10 number.
Converting from an unfamiliar base to the familiar decimal system is not that difficult once you get the hang of it. It’s only a matter of identifying each place and then multiplying each digit by
the appropriate power. However, going the other direction can be a little trickier. Suppose you have a base-ten number and you want to convert to base-five. Let’s start with some simple examples
before we get to a more complicated one.
Convert twelve to a base-five number.
We can probably easily see that we can rewrite this number as follows:
\[12=(2 \times 5)+(2 \times 1) \nonumber \]
Hence, we have two fives and 2 ones. Hence, in base-five we would write twelve as \(22_{5}\). Thus, \(12_{10}=22_{5}\)
Convert sixty-nine to a base-five number.
We can see now that we have more than 25, so we rewrite sixty-nine as follows:
\[69=(2 \times 25)+(3 \times 5)+(4 \times 1) \nonumber \]
Here, we have two twenty-fives, 3 fives, and 4 ones. Hence, in base five we have 234 . Thus, \(69_{10}=234_5\)
Convert the base-seven number \(3261_{7}\) to base 10
The powers of 7 are:
\[\begin{array}{l}
7^{0}=1 \\
7^{1}=7 \\
7^{2}=49 \\
7^{3}=343
\end{array} \nonumber\]
\[3261_{7}=(3 \times 343)+(2 \times 49)+(6 \times 7)+(1 \times 1)=1170_{10} \nonumber \]
Thus \(3261_{7}=1170_{10}\)
Convert 143 to base 5
In general, when converting from base-ten to some other base, it is often helpful to determine the highest power of the base that will divide into the given number at least once. In the last example,
\(5^{2}=25\) is the largest power of five that is present in \(69,\) so that was our starting point. If we had moved to \(5^{3}=125\), then 125 would not divide into 69 at least once.
Converting from Base 10 to Base \(b\)
1. Find the highest power of the base b that will divide into the given number at least once and then divide.
2. Write down the whole number part, then use the remainder from division in the next step.
3. Repeat step two, dividing by the next highest power of the base b, writing down the whole number part (including 0), and using the remainder in the next step.
4. Continue until the remainder is smaller than the base. This last remainder will be in the “ones” place.
5. Collect all your whole number parts to get your number in base \(b\) notation.
Convert the base-ten number 348 to base-five.
The powers of five are:
\[\begin{array}{l}
5^{0}=1 \\
5^{1}=5 \\
5^{2}=25 \\
5^{3}=125 \\
5^{4}=625
\end{array} \nonumber\]

Since \(348\) is smaller than \(625,\) but bigger than \(125,\) we see that \(5^{3}=125\) is the highest power of five present in \(348 .\) So we divide 125 into 348 to see how many of them there are:
\(348 \div 125=2\) with remainder 98
We write down the whole part, 2, and continue with the remainder. There are 98 left over, so we see how many 25’s (the next smallest power of five) there are in the remainder:
\(98 \div 25=3\) with remainder 23
We write down the whole part, 2, and continue with the remainder. There are 23 left over, so we look at the next place, the 5’s:
\(23 \div 5=4\) with remainder 3
This leaves us with 3, which is less than our base, so this number will be in the “ones” place. We are ready to assemble our base-five number:
\( 348=\left(2 \times 5^{3}\right)+\left(3 \times 5^{2}\right)+\left(4 \times 5^{1}\right)+(3 \times 1) \)
Hence, our base-five number is \(2343 .\) We'll say that \(348_{10}=2343_{5}\)
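A direct translation of this divide-by-powers procedure into Python might look like the following sketch (again, an added illustration rather than part of the original text); it works entirely in integers, so there are no rounding worries:

```python
def from_base_ten(n: int, base: int) -> str:
    """Convert a non-negative base-ten integer to its base-`base` digit string."""
    if n == 0:
        return "0"
    power = 1
    while power * base <= n:        # find the highest power of the base that fits
        power *= base
    digits = []
    while power >= 1:
        digits.append(str(n // power))   # whole number of this place value
        n %= power                       # remainder carries to the next place
        power //= base
    return "".join(digits)

print(from_base_ten(348, 5))   # 2343
print(from_base_ten(69, 5))    # 234
```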
Convert the base-ten number 4509 to base-seven.
The powers of 7 are:
\[\begin{array}{l}
7^{0}=1 \\
7^{1}=7 \\
7^{2}=49 \\
7^{3}=343 \\
7^{4}=2401
\end{array} \nonumber\]
The highest power of 7 that will divide into 4509 is \(7^{4}=2401\)
With division, we see that it will go in 1 time with a remainder of \(2108 .\) So we have 1 in the \(7^{4}\) place.
The next power down is \(7^{3}=343,\) which goes into 2108 six times with a new remainder of \(50 .\) So we have 6 in the \(7^{3}\) place.
The next power down is \(7^{2}=49\), which goes into 50 once with a new remainder of \(1 .\) So there is a 1 in the \(7^{2}\) place.
The next power down is \(7^{1}\) but there was only a remainder of \(1,\) so that means there is a 0 in the 7 's place and we still have 1 as a remainder.
That, of course, means that we have 1 in the ones place.
\[\begin{array}{rl}
4509 \div 7^{4}= & 1 \quad \mathrm{R} \quad 2108 \\
2108 \div 7^{3}= & 6 \quad \mathrm{R} \quad 50 \\
50 \div 7^{2}= & 1 \quad \mathrm{R} \quad 1 \\
1 \div 7^{1}= & 0 \quad \mathrm{R} \quad 1 \\
1 \div 7^{0}= & 1
\end{array} \nonumber\]
Putting all of this together means that \(4509_{10}=16101_{7}\)
Convert \(657_{10}\) to a base 4 number.
Convert \(8377_{10}\) to a base 8 number.
\(8377_{10}=20271_{8} \)
As you read the solution to this last example and attempted the “Try it Now” problems, you may have had to repeatedly stop and think about what was going on. The fact that you are probably struggling
to follow the explanation and reproduce the process yourself is mostly due to the fact that the non-decimal systems are so unfamiliar to you. In fact, the only system that you are probably
comfortable with is the decimal system.
As budding mathematicians, you should always be asking questions like “How could I simplify this process?” In general, that is one of the main things that mathematicians do…they look for ways to take
complicated situations and make them easier or more familiar. In this section we will attempt to do that.
To do so, we will start by looking at our own decimal system. What we do may seem obvious and maybe even intuitive but that’s the point. We want to find a process that we readily recognize works and
makes sense to us in a familiar system and then use it to extend our results to a different, unfamiliar system.
Let's start with the decimal number, \(4863_10\). We will convert this number to base \(10 .\) Yeah, I know it's already in base \(10,\) but if you carefully follow what we're doing, you'll see it
makes things work out very nicely with other bases later on. We first note that the highest power of 10 that will divide into 4863 at least once is \(10^{3}=1000 .\) In general, this is the first
step in our new process; we find the highest power that a given base that will divide at least once into our given number.
We now divide 1000 into 4863:
\[4863 \div 1000=4.863\nonumber \]
This says that there are four thousands in 4863 (obviously). However, it also says that there are 0.863 thousands in 4863. This fractional part is our remainder and will be converted to lower powers
of our base (10). If we take that decimal and multiply by 10 (since that’s the base we’re in) we get the following:
\[0.863 \times 10=8.63\nonumber \]
Why multiply by 10 at this point? We need to recognize here that 0.863 thousands is the same as 8.63 hundreds. Think about that until it sinks in.
\[(0.863)(1000)=863\nonumber \]
\[(8.63)(100)=863\nonumber \]
These two statements are equivalent. So, what we are really doing here by multiplying by 10 is rephrasing or converting from one place (thousands) to the next place down (hundreds).
\[0.863 \times 10 \Rightarrow 8.63\nonumber \]
\[\text{(Parts of Thousands) }\times 10 \Rightarrow \text{ Hundreds}\nonumber \]
What we have now is 8 hundreds and a remainder of 0.63 hundreds, which is the same as 6.3 tens. We can do this again with the 0.63 that remains after this first step.
\[0.63 \times 10 \Rightarrow 6.3\nonumber \]
\[\text{Hundreds } \times 10 \Rightarrow \text{ Tens}\nonumber \]
So we have six tens and 0.3 tens, which is the same as 3 ones, our last place value.
Now here's the punch line. Let's put all of this together in one place:

\[\begin{array}{l}
4863 \div 1000=(4).863 \\
0.863 \times 10=(8).63 \\
0.63 \times 10=(6).3 \\
0.3 \times 10=(3).0
\end{array} \nonumber\]

Note that in each step, the remainder is carried down to the next step and multiplied by 10, the base. Also, at each step, the whole number part, shown here in parentheses, gives the digit that belongs in
that particular place. What is amazing is that this works for any base! So, to convert from a base 10 number to some other base, \(b\), we have the following steps we can follow:
1. Find the highest power of the base \(b\) that will divide into the given number at least once and then divide.
2. Keep the whole number part, and multiply the fractional part by the base \(b\).
3. Repeat step two, keeping the whole number part (including 0), carrying the fractional part to the next step until only a whole number result is obtained.
4. Collect all your whole number parts to get your number in base \(b\) notation.
We will illustrate this procedure with some examples.
Convert the base 10 number, \(348_{10}\), to base \(5 .\)
This is actually a conversion that we have done in a previous example. The powers of five are:
\[\begin{array}{l}
5^{0}=1 \\
5^{1}=5 \\
5^{2}=25 \\
5^{3}=125
\end{array} \nonumber\]
The highest power of five that will go into 348 at least once is \(5^{3}\)
We divide by 125 and then proceed.
Keeping all the whole number parts, from top to bottom, gives 2343 as our base 5 number. Thus, \(2343_{5}=348_{10}\)
We can compare our result with what we saw earlier, or simply check with our calculator, and find that these two numbers really are equivalent to each other.
Convert the base 10 number, \(3007_{10}\), to base 5.
The highest power of 5 that divides at least once into 3007 is \(5^{4}=625 .\) Thus, we have:
\[\begin{array}{l}
3007 \div 625=(4).8112 \\
0.8112 \times 5=(4).056 \\
0.056 \times 5=(0).28 \\
0.28 \times 5=(1).4 \\
0.4 \times 5=(2).0
\end{array} \nonumber\]
This gives us that \(3007_{10}=44012_{5} .\) Notice in the third line that multiplying by 5 gave us 0 for our whole number part. We don't discard that! The zero tells us that there is a zero in that place. That is, there are no \(5^{2}\)'s in this number.
This last example shows the importance of using a calculator in certain situations and taking care to avoid clearing the calculator’s memory or display until you get to the very end of the process.
Convert the base 10 number, \(63201_{10}\), to base 7.
The powers of 7 are:
\[\begin{array}{l}
7^{0}=1 \\
7^{1}=7 \\
7^{2}=49 \\
7^{3}=343 \\
7^{4}=2401 \\
7^{5}=16807
\end{array} \nonumber\]
The highest power of 7 that will divide at least once into 63201 is \(7^{5}\). When we do the initial division on a calculator, we get the following:
\(63201 \div 7^{5}=3.760397453\)
The decimal part actually fills up the calculators display and we don’t know if it terminates at some point or perhaps even repeats down the road. So if we clear our calculator at this point, we will
introduce error that is likely to keep this process from ever ending. To avoid this problem, we leave the result in the calculator and simply subtract 3 from this to get the fractional part all by
itself. DO NOT ROUND OFF! Subtraction and then multiplication by seven gives:
\(63201 \div 7^{5}=(3).760397453\)
\(0.760397453 \times 7=(5).322782174\)
\(0.322782174 \times 7=(2).259475219\)
\(0.259475219 \times 7=(1).816326531\)
\(0.816326531 \times 7=(5).714285714\)
\(0.714285714 \times 7=(5).000000000\)
Yes, believe it or not, that last product is exactly 5, as long as you don’t clear anything out on your calculator. This gives us our final result: \(63201_{10}=352155_{7}\).
If we round, even to two decimal places in each step, clearing our calculator out at each step along the way, we will get a series of numbers that do not terminate, but begin repeating themselves
endlessly. (Try it!) We end up with something that doesn’t make any sense, at least not in this context. So be careful to use your calculator cautiously on these conversion problems.
Also, remember that if your first division is by \(7^{5}\), then you expect to have 6 digits in the final answer, corresponding to the places for \(7^{5}, 7^{4}\) and so on down to \(7^{0}\). If you
find yourself with more than 6 digits due to rounding errors, you know something went wrong.
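The keep-the-whole-part, multiply-the-fraction-by-the-base procedure can also be sketched in Python; using the exact `Fraction` type sidesteps the calculator rounding problem described above (this example is an added illustration, not from the original page):

```python
from fractions import Fraction

def from_base_ten_fractional(n: int, base: int) -> str:
    """Mimic the divide-once-then-multiply-the-fraction method, exactly."""
    if n == 0:
        return "0"
    power = 1
    while power * base <= n:
        power *= base
    digits = []
    frac = Fraction(n, power)            # e.g. 63201 / 7**5, kept exact
    while power >= 1:
        whole = int(frac)                # the "circled" whole-number part
        digits.append(str(whole))
        frac = (frac - whole) * base     # remainder, promoted to the next place down
        power //= base
    return "".join(digits)

print(from_base_ten_fractional(63201, 7))  # 352155
print(from_base_ten_fractional(3007, 5))   # 44012
```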
Convert the base 10 number, \(9352_{10}\), to base 5.
Convert the base 10 number, \(1500_{10}\), to base 3.
Be careful not to clear your calculator on this one. Also, if you’re not careful in each step, you may not get all of the digits you’re looking for, so move slowly and with caution.
\( 1500_{10}=2001120_{3} \) | {"url":"https://math.libretexts.org/Bookshelves/Applied_Mathematics/Math_in_Society_(Lippman)/14%3A_Historical_Counting_Systems/14.04%3A_The_Development_and_Use_of_Different_Number_Bases","timestamp":"2024-11-09T11:12:57Z","content_type":"text/html","content_length":"163663","record_id":"<urn:uuid:495522c3-d3de-4e19-851f-955cad00ee10>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00783.warc.gz"} |
Re: [Python-ideas] Add the imath module
13 Jul 2018, 1:42 a.m.
On Thu, Jul 12, 2018 at 05:35:54PM -0500, Tim Peters wrote:
[David Mertz]
Miller-Rabin or other pseudo-primality tests do not produce false negatives IIUC.
That's true if they're properly implemented ;-) If they say False, the input is certainly composite (but there isn't a clue as to what a factor may be); if they say True, the input is "probably" a prime.
I'm sure Tim knows this, but that's only sometimes true. Since I had no formal education on primality testing and had to pick this up in dribs and drabs over many years, I was under the impression for the longest time that Miller-Rabin was always probabilistic, but that's not quite right.

Without going into the gory details, it turns out that for any N, you can do a brute-force primality test by checking against *every* candidate up to some maximum value. (Which is how trial division works, only Miller-Rabin tests are more expensive than a single division.) Such an exhaustive check is not practical, hence the need to fall back on randomly choosing candidates. However, that's in the most general case. At least for small enough N, there are rather small sets of candidates which give a completely deterministic and conclusive result. For N <= 2**64, it never requires more than 12 Miller-Rabin tests to give a conclusive answer, and for some N, as few as a single test is enough. To be clear, these are specially chosen tests, not random tests. So in a good implementation, for N up to 2**64, there is never a need for a "probably prime" result. It either is, or isn't prime, and there's no question which.

In principle, there could be similar small sets of conclusive tests for larger N too, but (as far as I know) nobody has discovered them yet. Hence we fall back on choosing Miller-Rabin tests at random. The chances of an arbitrary composite number passing k such tests are on average 1/4**k, so we can make the average probability of failure as small as we like, by doing more tests. With a dozen or twenty tests, the probability of a false positive (a composite number wrongly reported as prime) is of the same order of magnitude as that of a passing
cosmic ray flipping a bit in memory and causing the wrong answer to appear. -- Steve | {"url":"https://mail.python.org/archives/list/python-ideas@python.org/message/4ZVKTLHRGAAWJTBI63VKTIODI6RS7WBW/","timestamp":"2024-11-04T15:47:49Z","content_type":"text/html","content_length":"14607","record_id":"<urn:uuid:684b4f7d-52a2-400f-868e-e23034fce5bf>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00353.warc.gz"} |
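To make the "specially chosen tests" above concrete, here is a compact Python sketch of a deterministic Miller-Rabin check (illustrative code, not from the thread); the fixed witness set of the first 12 primes is the one usually cited as sufficient for every N below 2**64:

```python
def is_prime(n: int) -> bool:
    """Deterministic Miller-Rabin for n < 2**64 using 12 fixed bases."""
    if n < 2:
        return False
    small_primes = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    if n in small_primes:
        return True
    if any(n % p == 0 for p in small_primes):
        return False
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in small_primes:           # fixed witnesses, not random ones
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # witness a proves n composite
    return True

print([k for k in range(2, 50) if is_prime(k)])
print(is_prime(2**61 - 1))           # a Mersenne prime, prints True
```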
PVT and Flow course - Lab PVT Tests CCE
The constant composition or constant mass expansion test.
1. CCE for oils, and in particular oils that have a somewhat lower gas-oil ratio and are not so compressible; we could call them slightly compressible oils. If you took the reservoir oil to the surface and flashed it down to stock tank conditions you'd get somewhere less than around a thousand standard cubic feet per stock tank barrel, or around 200 standard cubic meters of gas per standard cubic meter of flash oil. For these systems we can use a blind PVT cell. There are many different types of blind cell, but basically you have a piston pump that will inject some kind of a working fluid.
You have a working fluid, that would be maybe water, and then that water would push a piston up and down making the cell volume which is the total volume would be whatever happens to be gas and
whatever happens to be oil we don't know what's gas and we don't know what's oil because it's a blind cell, but whatever the total volume is we would be able to measure that as a function of pressure
by gauging the exact amount of liquid that's been pushed into the cell or out. So you measure the pressure and you get the volume and that's all you know. So for a slightly compressible oil the
volume change with pressure will have this discontinuity. We start at some very high pressure in the cell that's probably going to be greater than or equal to the what you think is the initial
pressure. The experiments conducted typically at a constant temperature and that temperature 98% of the time is at reservoir conditions. So, basically we expand the volume, measure the pressure, we
expand the volume again, measure the pressure and so forth. The max cell volume is typically around four times the initial volume that you start.
If volume as a function of pressure follows a linear trend then the compressibility is approximately constant; it won't be exactly constant, it'll be kind of constant. The slope would probably start changing a little bit more as you go to lower pressures. So, if it stayed as a liquid you would expect it to just stay on that trend, but then all of a sudden you get a point where the volume becomes very nonlinear and increases sharply, and that's due to the gas, basically. And where it intersects this linear trend, that's going to be the bubble point. Below that pressure we get a little bit of gas coming out of solution, its
compressibility is probably 10 times at least the oil compressibility so that one PSI pressure drop gives a lot more volume increase than if it was just an oil.
The uncertainty increases as the oil becomes more compressible and this discontinuity becomes smaller; it might be as much as a hundred and fifty psi, and by the time it gets that big you should not be using this kind of equipment (but sometimes they do).
From this experiment for a traditional oil you get the density above the bubble point and you get total relative volume as a function of pressure. And of course, you get the bubble point pressure
estimate. An example of the report can be found in the notes for this lecture.
2. CCE for oils and gas condensates.
Here we use a windowed PVT cell. What we're doing here, again, is charging the cell with a certain mass of material and keeping it at a fixed temperature; we change the pressure and we measure
the volume of oil and the volume of gas and the total volume. We start with the pressure greater than or equal to the initial reservoir pressure, you'd come down to the point where you find the
saturation pressure, it's either going to be a bubble point or a dew point and then we'll go down to some low minimum pressure where the total volume is approximately 4 times the volume of the
saturation pressure.
What's reported is typically the total volume over the saturation volume. And then we get the oil volume in one of two ways presented: 1) the most common way would be oil volume as a function of
total volume or relative to total volume and/or 2) oil volume relative to the saturation volume. For reservoir oils you'll also get the oil density above the bubble point and for reservoir gases
they'll typically give you the Z factor and / or the gas formation volume factor. These will be given at and above the saturation pressure where it's single-phase.
How do we establish this saturation pressure? This has for sure some uncertainty, probably at best around 10 psi, and it might be into the hundreds of psi of uncertainty. Basically what they do is they use a graphical plot of oil volume (Vo) versus pressure, in both cases, oils and gases. So you do kind of a trend analysis, and where the trends intersect is what you're going to call the bubble point.
The accuracy depends on:
1. how many and how close you have pressures near by the bubble point.
2. make sure that you bring all points to equilibrium. Because you can have this supersaturation effect: if you just lower the pressure, maybe the gas doesn't come out of solution, but if you shake it the gas comes out of solution, so if you don't shake it enough you might get partial gas out of solution. Basically there's a requirement of physical agitation over a certain amount of
time to bring the system to a true equilibrium. And if you don't do that because time is money and labs like to make money, then some of these points may not really be where they should be and
that makes it more uncertain.
Reservoir gas. At first we don't see any oil, and then, because there's not very much of it, it may be hard to see and hard to quantify; they see fog or liquid droplets but they can't quantify how much. So what they'll do is basically report the number as zero and say that at these pressures they saw what they call a trace. It's definitely liquid appearing but they can't quantify how much. And then they get the first measurable amount. It'll max out and then maybe it'll drop down. And then they have to do some kind of
interpretation of where is the dew point.
The curve on the plot is what we refer to as the liquid dropout curve because liquid is dropping out of the gas. We have retrograde condensation and below the maximum you have kind of a
revaporization - the condensed liquid is revoporized back into the gas.
Watch the full video
Other lectures from the PVT and Flow course
Class notes developed during lectures are available as PDF files, named with the format yyyymmdd.pdf located on: http://www.ipt.ntnu.no/~curtis/courses/PVT-Flow/2016-TPG4145/ClassNotes/
See also | {"url":"http://petrofaq.org/wiki/Blog:PVT_and_Flow_course_-_Lab_PVT_Tests_CCE","timestamp":"2024-11-02T23:40:29Z","content_type":"text/html","content_length":"54477","record_id":"<urn:uuid:b52a8b92-91d4-4333-a513-0f95f86d9e17>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00311.warc.gz"} |
Posterior Probability - (Theoretical Statistics) - Vocab, Definition, Explanations | Fiveable
Posterior Probability
from class:
Theoretical Statistics
Posterior probability is the probability of an event occurring after taking into account new evidence or information. It reflects how our beliefs about an event are updated when we obtain more data
and is a fundamental concept in Bayesian statistics, where it is derived from Bayes' theorem and relies on conditional distributions to quantify uncertainty.
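As a quick numerical illustration of that updating step (the numbers are made up, not from the course), here is a Python sketch applying Bayes' theorem to a screening-test style problem:

```python
# Prior belief: 1% of items have the condition H.
prior = 0.01
# Likelihoods: how probable the evidence E (a positive test) is in each world.
p_e_given_h = 0.95        # sensitivity
p_e_given_not_h = 0.05    # false-positive rate

# Total probability of seeing the evidence.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Posterior probability of H after observing E.
posterior = p_e_given_h * prior / p_e
print(f"P(H | E) = {posterior:.3f}")   # roughly 0.161: still unlikely, but about 16x the prior
```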
5 Must Know Facts For Your Next Test
1. Posterior probability is calculated using Bayes' theorem, which states that $$P(H|E) = \frac{P(E|H)P(H)}{P(E)}$$ where $$P(H|E)$$ is the posterior probability.
2. It combines prior probability and likelihood to give a comprehensive view of uncertainty after new evidence is introduced.
3. The concept is widely used in various fields such as machine learning, medical diagnostics, and risk assessment to make informed decisions based on available data.
4. Posterior probabilities can change significantly with the addition of new information, illustrating the dynamic nature of belief updating.
5. Understanding posterior probability helps in interpreting statistical results and improving models by considering how new evidence influences outcomes.
Review Questions
• How does posterior probability enhance decision-making processes when new evidence is available?
□ Posterior probability enhances decision-making by allowing individuals to update their beliefs based on new evidence. By using Bayes' theorem, prior knowledge is combined with the likelihood
of the new evidence to arrive at a more informed conclusion about the probability of an event. This process enables decision-makers to adapt their strategies in real-time as more data becomes
available, ultimately leading to better outcomes.
• Discuss the relationship between prior probability and posterior probability in Bayesian analysis.
□ Prior probability represents our initial belief about an event before considering new evidence, while posterior probability reflects our updated belief after incorporating that evidence. In
Bayesian analysis, prior probability serves as the foundation upon which likelihood—derived from new data—is applied to compute the posterior probability. This relationship illustrates how
our understanding of uncertainty evolves as we gather more information.
• Evaluate the impact of different priors on the resulting posterior probabilities in a Bayesian framework.
□ The choice of prior probability can significantly influence posterior probabilities, especially when data is limited or not very informative. Different priors can lead to different
conclusions about the likelihood of events, even with the same observed data. Thus, it’s crucial to select appropriate priors that reflect true beliefs or historical information; otherwise,
biases may propagate through the analysis, affecting interpretations and decisions based on the posterior probabilities.
AP® and SAT® are trademarks registered by the College Board, which is not affiliated with, and does not endorse this website. | {"url":"https://library.fiveable.me/key-terms/theoretical-statistics/posterior-probability","timestamp":"2024-11-13T16:21:01Z","content_type":"text/html","content_length":"156210","record_id":"<urn:uuid:f916c41c-82c0-4962-9aae-ed3bb3fb2a95>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00458.warc.gz"} |
Strategic Treatment Effects
Created an open-source R package from scratch to apply the causal framework developed in Treatment Effects in Managerial Strategies (Guzman, 2021).
The R package does the following:
1. Estimates the propensity of a firm to select into choosing a strategy using a custom-tuned Random Forest model’s 10 fold out-of-fold predictions.
2. Using the propensity scores generated, we estimate the treatment effects, and subsequently the strategic treatment effects of the strategy chosen under the Rubin Causal Model by running two local
regressions (Lowess).
3. We then understand the determinant resources of the strategic treatment effects using a 10-fold LASSO procedure.
4. We then estimate the value of the coherence of the firm’s resources in generating the competitive advantage.
5. Finally, we try to estimate a benchmark OLS regression to explain the main effects to choose into the strategy to understand how much better our framework is than the standard approach. | {"url":"https://www.vansh-gupta.com/portfolio/STE/","timestamp":"2024-11-11T10:37:08Z","content_type":"text/html","content_length":"11398","record_id":"<urn:uuid:f8c4a555-538f-4737-b061-e0b7f3c39f55>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00663.warc.gz"} |
Daily Power Production: Work Schedules
The lower floors of the human powered student building are reserved for communal energy production. How long the students need to exercise on these floors depends on their demand for power. Because
the energy users are also the energy producers, there’s a strong incentive to reduce energy demand.
For each energy using activity, we calculated the amount of student-hours that's required to produce the energy. We assume that every student can produce 100 watt-hour (Wh) of power per hour.
If an activity (such as refrigeration) requires 40 kilowatt-hours (kWh) per day, this corresponds to 400 student-hours of power production per day, or one hundred people exercising for four hours.
We can then calculate the total amount of hours per day that each student needs to produce power for the community. We also take into account 50% energy losses.
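A back-of-the-envelope version of this calculation fits in a few lines of Python (an added illustration; the activity figures and resident count are placeholders, and how the 50% losses are folded in is an assumption):

```python
OUTPUT_PER_STUDENT_HOUR = 100   # watt-hours a student generates per hour of exercise
EFFICIENCY = 0.5                # assume half the generated energy is lost
STUDENTS = 750                  # assumed number of residents

# Hypothetical daily electricity demand per activity, in kWh.
demand_kwh = {"refrigeration": 40, "lighting": 15, "water pumps": 10}

total_wh = sum(demand_kwh.values()) * 1000
student_hours = total_wh / (OUTPUT_PER_STUDENT_HOUR * EFFICIENCY)
minutes_per_student = student_hours / STUDENTS * 60
print(f"{student_hours:.0f} student-hours/day, "
      f"about {minutes_per_student:.0f} min per student per day")
```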
How Many Hours per Day?
The communal power producing floors supply energy for fridges, hot showers, toilets, dishwasher, hair dryers, communal lighting and electricity, water pumps, and Internet.
The working schedules are negotiated and set up by the students themselves, who are in total control of their human powered community. For now, we calculated three different scenario's, based on
three hypothetical levels of energy demand.
The most energy efficient scenario requires each student to produce power for 1.5 hour per day. In this scenario, the showers are cold, there are no hair dryers, the dishes are done by hand with cold
water, and the clothes are drying outside in the rain.
A second scenario adds a 1-minute hot shower every day, as well as limited hair drying. This doubles the daily work duty to 3 hours per student per day. The third scenario doubles the hot shower
length to 2 minutes per day, and introduces a dishwasher and (limited) clothes drying. It requires every student to produce power for 6 hours per day.
How Many People?
Energy demand varies throughout the day and the night. To avoid night shifts and to spread out the workforce more evenly over time, the human powered student building is equipped with a gravity
battery in one of the former elevator shafts.
Assuming 10 hours of continuous energy production per day, how many people would be producing power simultaneously? In the most energy conscious scenario, the answer is 110 people. That's roughly 1
in 7 students.
In the moderate scenario, it's 220 people (more than 1 in 4 students) and in the 'energy-intensive' scenario, it's 440 people (more than 1 in 2 students). The last scenario surpasses the maximum
power capacity of the human power plant, which is 400 people producing power simultaneously.
Communal energy use in the student building is 112 kWh per day in the first scenario (150 watt-hour per person per day), 224 kWh in the second scenario (300 Wh per person per day), and 448 kWh in the
third scenario (450 Wh per person per day).
Other Duties
Note that this only concerns the communal energy use of the building. Students also need to produce energy in their own rooms. In this case, timing and duration are entirely up to the student, not to
the community as a whole.
We have estimated a maximum of 1-2 hours of personal power production per day (100-200 Wh), which is sufficient for lighting, computing and music if energy efficient devices are used.
Communal and private work duties combined are thus 2.5 to 3.5 hours per day in the first scenario, 4-5 hours per day in the second scenario, and 7-8 hours per day in the third scenario.
Furthermore, there are work duties for clothes washing, biogas plant operation, and kitchen work. Clothes washing is estimated to be 1 to 2 hours per student per week, and is not powered by communal
energy production.
There is a two-weekly work shift for shopping, cooking and dish washing (if this happens by hand). Finally, once per month, every student needs to work an 8-hour shift in the biogas power plant.
Detailed Calculations
[1] Energy Losses
We assume that half of the energy produced is lost in the process. There are energy losses in the exercise machines, the distribution of energy, and the conversion between different forms of energy.
These energy losses are included in the figures below.
[2] Refrigeration
The refrigerators need to be operated day and night and use roughly 40 kWh of electricity per day. It would take 17 students exercising around the clock to power them. Required: 408 student-hours, or
60 min per student per day.
[3] Showering
To provide each student with a 3-minute hot shower per 3 days (or a 1-minute hot shower per day), we need 125 students exercising 8 hours per day, or a total of 400 student-hours (or 60 min per
student per day).
[4] Toilet Flushing
Shared toilet facilities are distributed throughout the building. A vacuum toilet requires very little water but it needs roughly 2 watt-hour of energy per flush.
Assuming 5 flushes per day per person, this comes down to 10 watt-hour per person per day, or 7.5 kWh per day for 750 students. This means we need 7.5 people exercising for 10 hours to power the
toilet flushing system, or 75 student-hours (12 min per student per day).
[5] Water Pumping
Assuming a water use of only 50 litres per person per day -- less than half of the average -- it would require around 3.6 kWh per day to pump this water to the higher floors of the building. This
comes down to 36 student-hours per day, or 6 minutes per student per day.
[6] Internet
A Wifi-router on every floor of the building would require 5.2 kWh per day, or 52 student-hours per day (24 hour internet). This comes down to 8 minutes per student per day.
[7] Lighting in the Communal Rooms
All floors of the human powered student community have communal spaces where students can come together. These spaces need lighting and possibly energy for other devices, such as computers.
When they are in their room, students need to power their computers themselves. When they are in a communal space, the energy is taken from the communal power producing floors.
It's hard to predict how this dynamic will play out, for now we estimate a need for 100 student-hours, less than 20 minutes of power production per student per day. More energy use in communal spaces
would raise the communal working duty and decrease the individual efforts.
[8] Dish washing
An industrial dishwasher machine, used twice per day, requires a total of 360 student-hours (50 min per student per day). Most energy used is for heating the water.
[8] Hair Dryers
Ten students can power roughly 10 hair dryers during the showering period. Four hours of hair drying then requires 40 student-hours or 6 minutes per student per day.
[9] Cooling of Power Producing Students
Overheating is a serious risk for power producing people. Using fans, it takes 10 watts of energy to cool one student. Assuming an average amount of 250 energy generating students, an additional 25
students would be needed to cool them in summer, while these 25 students would need another 2 to 3 students to do the same to them.
However, to keep energy use in check, cooling is limited to 200 student-hours. During very hot days, we lower energy demand and have a siesta.
You can follow this conversation by subscribing to the comment feed for this post.
Wow, amazing! So, basically some 20-40 hours a week are occupied by communal duties alone, of them the most productive hours of the day. I have a vague feeling something's missing here. As far as I
remember, students were also supposed to study at the university. Or not any more?
Students are awake 112 hours per week, if we assume 8 hours of sleep each night. So 20-40 hours leaves plenty of time to study.
This sounds more like a step backwards to a time when we had to do a lot of work to get very little return. You've got to start thinking outside of the box instead of building more complex (and
annoying) boxes for people to live in. For instance, you have people climbing stairs all day long in this building. Why aren't you building stairs that utilize the kinetic energy generated when they
are stepped on-- especially as you seem to imagine that students will be running up and down the stair, generating even more kinetic energy every time their weight lands on the steps. Think about it
guys. You could be capturing a lot of energy there. And are you capturing the hydro power from water running down the drains (and off the roofs) so that people could have more showers?
And, just by the way, who wants to be a hampster on a wheel for x-hours a day? That's more like slavery than "community" participation. This whole thing doesn't seem very well thought out.
I agree with the stairs comment. You should have piezoelectric pads on stair surfaces, instead of stair travel generating heat, it should funnel into the gravity battery. Heating stairs wells with
wasted energy, seems...wasteful :)
Comments are moderated, and will not appear until the author has approved them.
Your Information
(Name and email address are required. Email address will not be displayed with the comment.) | {"url":"https://www.humanpowerplant.be/2017/11/work-schedules-in-the-human-power-plant.html","timestamp":"2024-11-10T01:04:13Z","content_type":"application/xhtml+xml","content_length":"47367","record_id":"<urn:uuid:fa52673e-a9fc-43b1-bed5-6d8c0c41b44b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00843.warc.gz"} |
Title : POEMS seminar on Inverse Problems
Contact : Maryna Kachanovska
Date : 14/02/2019
Place : salle 2.3.29
14:00 - Roland Griesmaier (KIT, Germany)
Uncertainty principles for far field patterns and applications to inverse source problems
Classical uncertainty principles in signal processing limit the amount of simultaneous concentration of a signal with respect to time and frequency. In the inverse source problem, the far field
radiated by a source f is its restricted (to the unit sphere) Fourier transform, and the operator that maps the restricted Fourier transform of f(x) to the restricted Fourier transform of its
translate f(x + c) is called the far field translation operator. In this talk we discuss an uncertainty principle, where the role of the Fourier transform is replaced by the far field translation
operator. Combining this principle with a regularized Picard criterion, which characterizes the non-evanescent far fields radiated by a compactly supported limited power source provides extensions of
several results about splitting a far field radiated by well-separated sources into the far fields radiated by each source component. We also combine the regularized Picard criterion with a more
conventional uncertainty principle for the map from a far field to its Fourier coefficients. This leads to a data completion algorithm which tells us that we can deduce missing data if we know a
priori that the source has small support. All of these results can be combined so that we can simultaneously complete the data and split the far fields into the components radiated by well-separated
sources. We discuss both l2 (least squares) and l1 (basis pursuit) algorithms to accomplish this. Perhaps the most significant point is that all of these algorithms come with explicit bounds on their
condition numbers which are sharp in their dependence on geometry and wavenumber.
This is joint work with John Sylvester (University of Washington).
15:30 - Lorenzo Audibert (EDF, France)
Transmission eigenvalues with artificial background and their determination from scattering data
We are interested in the problem of retrieving information on the refractive index, n, of a penetrable inclusion embedded in a reference medium from farfield data associated with incident plane
waves. Our approach relies on the use of transmission eigenvalues (TEs) that carry information on n and that can be determined from the knowledge of the farfield operator F. We explain how to modify
F into a farfield operator F^{art}=F-\tilde{F}, where \tilde{F} is computed numerically, corresponding to well chosen artificial background and for which the associated TEs provide more accessible
information on n. We will emphasize that the artificial background simplifies the analysis of the interior transmission eigenvalue problem while maintaining our ability to detect transmission
eigenvalues efficiently using sampling method. | {"url":"https://uma.ensta-paris.fr/poems/events/show.html?id=309&lang=en&lang=en","timestamp":"2024-11-13T12:41:07Z","content_type":"application/xhtml+xml","content_length":"13748","record_id":"<urn:uuid:79804f09-0cc1-4c35-b255-c8e7ed099cc9>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00103.warc.gz"} |
Easy Vampire Makeup Halloween Tutorial for Beginners – Vanitynoapologies | Indian Makeup and Beauty Blog
Easy Vampire Makeup Halloween Tutorial for Beginners
Sexy Vampire Makeup and Hair Guide for Halloween
By Contributor: Shruti Gupta
Anshita and I are those crazy girls who get soo excited for Halloween that last year we made half of the college dress up in black! You can also check out my Halloween nails. This time I thought of
doing a Halloween makeup tutorial since my makeup skills have improved a lot over the past year..thanks to VNA..hail beauty blogging in India! Halloween automatically makes me think about evil
vampires and draculas, or both? Draculire maybe? So that’s my new name, [DEL:draculire:DEL] draculaura.
35 thoughts on “Easy Vampire Makeup Halloween Tutorial for Beginners”
1. the sexiest vampire ever!! *wink* kal k liye tyaar?
□ Yo babes..ready ;)
2. Loved ur two-toned lips and great transformation :)
□ Eeeeeeeee thank you :D
3. Ahaan ahaan… Some one is looking very sexy draculaa..
□ Aahan aahan thanku san :*
4. Ooooo öö ooOo d SEXY witch….. !!
□ Thanx rohit n welcome to VNA :D
5. Awesome ..I loved the look ♡♡
Great work
□ Thank you teji :* :)
6. Super easy yet super sexy! Really liked this tutorial :D
Btw you could have used whitener to draw the fangs out!
□ Thank you :)
I tried using a double sided tape to make fangs but it didn’t work :(
7. amazing…trying this for the halloween party!
□ Do try .its super easy :D
8. Wow! This is super sexy! Love it
□ Thank you aditi :) :)
9. Wowww ! Tats called innovation :D Great one :D
□ Aww thank you :D
10. Wow.. you look like you could actually be a vampire.. Great job gal !!
□ I wish :)
11. U r one of kind sexy vampire.. Am sure gonna do this look .. :D
□ Thanks *grinning* :D
12. scarryyy, eyes looks amazing
□ Thank you :) :)
13. loved yr makeup it looks sexy babes !!
□ Thank you :) :D
14. Wow Shruti….
U look like one delectable vampire… Can surely fit into the Cullen family… Let’s hope they make another sequel.. :)
□ Omg! U made me so happy :D :D
15. Hehe.. This look is just perfect for Halloween! Makeup and lips are super easy and you have done it beautifully! The ketchup looks quite convincing.. :P
□ Hahahaha :D
Thank you :) :)
16. ITS FAB SHRUTI GREAT WORK
□ Thank you heena :)
17. sexo :D
18. You could definitely see your skills within the article you
write. The sector hopes for even more passionate writers such as you who are not afraid to say how they believe.
Always follow your heart.
19. Im grateful for the blog.Thanks Again. Much obliged.
The look is very easy to achieve. All you need is smoky eyes, blood thirsty ombre lips and pale skin. To get pale white skin, I first applied Maybelline BB cream in a lighter shade and then put
Lakme CC cream on top of it, a little compact (again a lighter shade). Even with that I couldn’t get that life-less skin effect..maybe I needed a lighter foundation but then I used loads of talcum
powder. Yes talcum powder! Easy and budgety in this economic crisis!
Don't use any blush, vampires have no colour on them. They don't blush except for Edward Cullen!
vampire eye makeup tutorial
The scary winged black vampire eye makeup tutorial:
Step 1: Apply a matte black eyeshadow like Nyx Black all over your lid in a cat like wing as shown in the photo above.
Step 2: Use the same black eyeshadow on your lower lashline and join both the ends so they make a V.
Step 3: Line your upper lash line and lower lash line with a black kohl..loads of it. And then make a V on the inner corners of your eyes with the kohl. I used Lakme Eyeconic Kohl for it.
Step 4: Curl your lashes and add loads of black mascara. I used Maybelline Hypercurl mascara.
Tada..see only 3 products used and you get the whole eye makeup done. I thought of adding some red eyeshadow under my eyes but let's say I didn't want to mess it up. Plus we are going to keep this
look for makeup beginners. So maybe next year.
Alright now I think I should have filled my brows but I totally forgot that time! Anybody needs a tutorial on how to fill your eyebrows? Let me know!
Two Tone red black lip makeup tutorial
I did a two tone lip for the scary look. I don't have a black lipstick and Anshita won't lend me hers (I wonder what she is doing with that black lipstick she has!). So I used a black kohl again..a
different one ofcourse! I used Colorbar Indian Eye Kohl (review soon). The red lipstick I used was Colorbar Hot Hot Hot.
Ombre Lips tutorial:
Step 1: Apply a black kohl on the inner corners of lips as shown in the pic.
Step 2: Apply a bright matte red lipstick all over your lips (matte goes with the whole scary theme. Mine looks creamy because of the flash). If you don’t have a matte red lipstick then pat a little
loose powder over your lipstick with an eyeshadow brush to make it matte.
The cheat trick for creating an ombre lip is to apply both the colors while leaving a very little space between them and then blending them together using a lip brush.
And you are a vampire in no time!
But wait..where is the blood thirsty part?
Here you go!
vampire eye and lip makeup guide
I should have worn some green or blue lenses too to seal the vampire look. Oh wait what’s Bella’s eye colour when she is hungry (after she turned into a vampire) Green? Blue? I really need to brush
up my Twilight basics!
I used some ketchup near my lips as you can see
vampire makeup look
Ohh how much I wish I had fangs. The look is incomplete without them.
halloween makeup
Now just mess up your hair (curly or straight..anything will do)
And give those funny poses in front of a mirror when nobody is watching
vampire draculaura makeup 101
I made this little halloween meme for you girls..haha!
Hope you liked my vampire look and a dozen photos I clicked. What are you doing for Halloween? I know we don’t trick or treat in India but if you do then what’s your costume? Share your ideas! | {"url":"https://vanitynoapologies.com/easy-vampire-makeup-halloween-tutorial-beginners/","timestamp":"2024-11-07T12:04:50Z","content_type":"text/html","content_length":"128345","record_id":"<urn:uuid:5d0ca5d6-c6a3-43f7-b53e-0eaf3077ce09>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00742.warc.gz"} |
Consider the radical expression . Explain the necessary ste… | GradePack
Consider the radical expression . Explain the necessary ste…
Consider the radical expression . Explain the necessary steps to simplify this radical. Please write your answer in a complete sentence with proper grammar. Then, write the
answer to this expression as an integer, simplified fraction, or decimal. Note: The explanation is worth four points while the answer itself is worth only one.
Primary dysmenorrhea is usually the result of the excessive secretion of:
Peanut hulls expand as kernels develop.
@X@user.full_name@X@ @GMU: The vapor pressure of Pd (l) at 3057.3 K is 178.6 mm Hg. What is the heat of vaporization if its normal boiling point is 3413.2 K?
@X@user.full_nаme@X@ @GMU: In which substаnce is the intermоleculаr bоnding best described as dipоle-dipole? | {"url":"https://gradepack.com/consider-the-radical-expression-explain-the-necessary-steps-to-simplify-this-radical-please-write-your-answer-in-a-complete-sentence-with-proper-grammar-then-write-the-answer-to-this-expression/","timestamp":"2024-11-07T22:49:05Z","content_type":"text/html","content_length":"42865","record_id":"<urn:uuid:a2494305-68bd-4940-8b37-322319c4534f>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00853.warc.gz"} |
Publications Search
Modeling real-world phenomena to any degree of accuracy is a challenge that the scientific research community has navigated since its foundation. Lack of information and limited computational and
observational resources necessitate modeling assumptions which, when invalid, lead to model-form error (MFE). The work reported herein explored a novel method to represent model-form uncertainty
(MFU) that combines Bayesian statistics with the emerging field of universal differential equations (UDEs). The fundamental principle behind UDEs is simple: use known equational forms that govern a
dynamical system when you have them; then incorporate data-driven approaches – in this case neural networks (NNs) – embedded within the governing equations to learn the interacting terms that were
underrepresented. Utilizing epidemiology as our motivating exemplar, this report will highlight the challenges of modeling novel infectious diseases while introducing ways to incorporate NN
approximations to MFE. Prior to embarking on a Bayesian calibration, we first explored methods to augment the standard (non-Bayesian) UDE training procedure to account for uncertainty and increase
robustness of training. In addition, it is often the case that uncertainty in observations is significant; this may be due to randomness or lack of precision in the measurement process. This
uncertainty typically manifests as “noisy” observations which deviate from a true underlying signal. To account for such variability, the NN approximation to MFE is endowed with a probabilistic
representation and is updated using available observational data in a Bayesian framework. By representing the MFU explicitly and deploying an embedded, data-driven model, this approach enables an
agile, expressive, and interpretable method for representing MFU. In this report we will provide evidence that Bayesian UDEs show promise as a novel framework for any science-based, data-driven MFU
representation; while emphasizing that significant advances must be made in the calibration of Bayesian NNs to ensure a robust calibration procedure. | {"url":"https://www.sandia.gov/ccr/publications/search/?pub_auth=Robert+J.+Kuether&authors%5B%5D=erin-acquesta","timestamp":"2024-11-14T15:36:07Z","content_type":"text/html","content_length":"66118","record_id":"<urn:uuid:1f4d6de7-2833-4026-a747-d36c13eec68f>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00329.warc.gz"} |
st: AW: What's the added value of having -in- subset the data before -if
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: AW: What's the added value of having -in- subset the data before -if- does?
From Dan Blanchette <[email protected]>
To [email protected]
Subject st: AW: What's the added value of having -in- subset the data before -if- does?
Date Wed, 4 Feb 2009 12:17:44 -0500 (EST)
The downside to that solution:
. bysort foreign: list if foreign & _n <= 10
is that the -bysort- will change the sort order of the dataset (in datasets other than the auto.dta data, since the auto data just happens to be sorted by foreign). It also might be the case that
the condition you are interested in is not sort order related. This will list up to 10 obs per value of mpg that is less than 20:
. bysort mpg: list if mpg < 20 & _n < 10
Also, I realized that since sum() creates a running
sum this will not just list the first 10 observations
because 0 (false) is also less than 10. So this:
. list if sum((foreign == 1)) <= 10
should be:
. list if sum((foreign == 1)) <= 10 & foreign == 1
. list if inrange(sum((foreign == 1)),2,11) & foreign == 1
A better example to show how the running sum also requires
the condition to be repeated would be:
//not correct because it will list observations after the // first instance where (mpg < 20) until the sum reaches 10
. list if inrange(sum((mpg < 20)),1,10)
. list if inrange(sum((mpg < 20)),1,10) & mpg < 20
So you have to repeat the condition to subset the dataset to just those observations before starting the running sum.
Yet another learning experience.
Dan Blanchette
Research Associate
Center of Entrepreneurship and Innovation
Duke University's Fuqua School of Business
Durham, NC USA
[email protected]
From "Martin Weiss" <[email protected]>
To <[email protected]>
Subject st: AW: What's the added value of having -in- subset the data before -if- does?
Date Wed, 4 Feb 2009 17:01:33 +0100
bys for: list if foreign &_n<=10
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"https://www.stata.com/statalist/archive/2009-02/msg00117.html","timestamp":"2024-11-06T06:05:50Z","content_type":"text/html","content_length":"10214","record_id":"<urn:uuid:e378a1cf-6980-40eb-b54e-6dc76761cc00>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00433.warc.gz"} |
The potential of the two plates of the capacitor remains unchanged
Solved Review You will study the manipulation of a charged
The separation between the two plates d = 0.052 m. The capacitance of the capacitor is C = 4.255×10⁻¹² farad. ... V is the potential difference between the two plates, ε₀ = 8.854×10⁻¹² C²/(N·m²)
is the permittivity of free space. The energy of a charged ...
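The numbers in that snippet are consistent with the parallel-plate formula C = ε₀A/d; a quick check in code (the plate area of about 0.025 m² is inferred here, it is not stated in the snippet):

    EPS0 = 8.854e-12      # permittivity of free space in F/m, i.e. C^2/(N*m^2)
    A = 0.025             # assumed plate area in m^2 (not given above)
    d = 0.052             # plate separation in m, as quoted
    C = EPS0 * A / d
    print(f"C = {C:.3e} F")   # ~4.26e-12 F, close to the quoted 4.255e-12 F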
18.5: Capacitors
Capacitance As long as the quantities of charge involved are not too large, it has been observed that the amount of charge, (Q), that can be stored on a capacitor 1, is linearly proportional to the
potential difference, (Delta …
Solved Review Learning Goal: Charged Capacitor, -
Question: Review Learning Goal: Charged Capacitor, -- Capacitance, Dielectric, Electric Field, Energy A charged capacitor shown in (Figure 1)can be manipulated in many ways. Use the formula for
capacitance of a parallel plate capacitor involving the permittivity of
5.12: Force Between the Plates of a Plane Parallel Plate Capacitor
Force Between the Plates of a Plane Parallel Plate Capacitor
Capacitance and Charge on a Capacitors Plates
Capacitance and Charge on a Capacitors Plates
8.1 Capacitors and Capacitance
When battery terminals are connected to an initially uncharged capacitor, the battery potential moves a small amount of charge of magnitude Q from the positive plate to the …
The Parallel-Plate Capacitor
The Parallel-Plate Capacitor. The figure shows two electrodes, one with charge +Q and the other with –Q placed face-to-face a distance d apart. This arrangement of two electrodes, …
Solved Step 3 The charged capacitor in step 2 is | Chegg
Step 3 The charged capacitor in step 2 is DISCONNECTED from the charging battery. The plate area is compressed to one-half of the original value, A₂ = 0.0160 m². (Note: each plate is NOT cut into
two halves). The gap separation is unchanged from step 2, d = 0.049 m. Part H - Step 3: What is the amount of charge Q₃ on each plate? ...
2.4: Capacitance
Definition of Capacitance Imagine for a moment that we have two neutrally-charged but otherwise arbitrary conductors, separated in space. From one of these conductors we remove a handful of charge …
5: Capacitors
A capacitor consists of two metal plates separated by a nonconducting medium (known as the dielectric medium or simply the dielectric) or by a vacuum. 5.2: Plane Parallel …
Solved A parallel plate capacitor is fully charged at a | Chegg
Question: A parallel plate capacitor is fully charged at a potential V. A dielectric is inserted between the plates of the capacitor while the potential difference remains constant. Which one of the
following statements is false concerning this constant K = …
5.15: Changing the Distance Between the Plates of a Capacitor
Gauss's law requires that (D = sigma), so that (D) remains constant. And, since the permittivity hasn't changed, (E) also remains constant. The potential difference across the plates is (Ed), so,
as you increase the plate separation, so the potential
Solved You reposition the two plates of a capacitor so that
Question: You reposition the two plates of a capacitor so that the capacitance doubles. There is vacuum between the plates. If the charges Q and −Q on the two plates are kept constant in this process,
the energy stored in the capacitor: remains the same / becomes four times as great / becomes half as great / becomes twice as great.
Parallel Plate Capacitor
Chapter 5 Capacitance and Dielectrics
Figure 5.2.3 Charged particles interacting inside the two plates of a capacitor. Each plate contains twelve charges interacting via Coulomb force, where one plate contains positive …
The separation between the plates of a charged capacitor is to be …
Capacitance is given by C = ε₀A/d = Q/V. When the separation between the plates of a charged capacitor increases, capacitance decreases. Work done is given by W = Vd where V is the potential applied and
d is the distance between the plates of capacitor. Case 1
Two identical air filled parallel plate capacitors are charged to the same potential …
Two identical air filled parallel plate capacitors are charged to the same potential in the manner shown by closing the switch S. If now the switch S is opened and the space between the plates is
filled with a dielectric of relative permittivity ε r, then :The potential ... | {"url":"https://www.fotograaf-flevoland.nl/24_09_22_17995.html","timestamp":"2024-11-14T07:04:08Z","content_type":"text/html","content_length":"18087","record_id":"<urn:uuid:a0a5b4af-05c9-46dd-a398-496ab7e4cf2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00315.warc.gz"} |
13.2 Compute Amortization of Long-Term Liabilities Using the Effective-Interest Method Principles of Accounting, Volume 1: Financial Accounting - Suprabhat ITI
The effective interest method of amortization is a process used to allocate the discount or premium on bonds, or other long-term debt, evenly over the life of the instrument. Credit cards, on the
other hand, are generally not amortized. They are an example of revolving debt, where the outstanding balance can be carried month-to-month, and the amount repaid each month can be varied. Examples
of other loans that aren’t amortized include interest-only loans and balloon loans. The former includes an interest-only period of payment, and the latter has a large principal payment at loan
maturity. Amortization can refer to the process of paying off debt over time in regular installments of interest and principal sufficient to repay the loan in full by its maturity date.
You might figure that the impact would be to save you $300 on your final payment, or maybe a little bit extra. But thanks to reduced interest, just $300 extra is enough to keep you from making your
entire last payment. Use this calculator to plan your debt payoff and reduce your total interest costs so you can advance from paying off debt to building wealth. An
amortization schedule shows the progressive payoff of the loan and the amount of each payment that gets attributed to principal and interest. In both the discount and premium, the difference between
the straight-line and the effective interest amortization methods is not significant. However, for large bond issues, this difference can become significant.
Calculating an amortization schedule if you don’t know your payment
The main difference is that the amortization table contains the breakup of the principal and interest portion, along with the same. However, a payment schedule will only reflect the total payment and
not include the division of principal and interest amounts. Thus, while the amortization table is a detailed table of loan repayment, the payment schedule is as good as a calendar showing the due
dates for the repayment of the loan at periodic intervals. This loan amortization calculator figures your loan payment and interest costs at various payment intervals.
The interest portion is the amount of the payment that gets applied as interest expense. This is often calculated as the outstanding loan balance multiplied by the interest rate attributable to this
period’s portion of the rate. For example, if a payment is owed monthly, this interest rate may be calculated as 1/12 of the interest rate multiplied by the beginning balance.
What does the Amortization Table Show?
A loan amortization schedule gives you the most basic information about your loan and how you’ll repay it. When you take out a loan with a fixed rate and set repayment term, you’ll typically receive
a loan amortization schedule. This schedule typically includes a full list of all the payments that you’ll be required to make over the lifetime of the loan. Each payment on the schedule gets broken
down according to the portion of the payment that goes toward interest and principal. Amortization helps businesses and investors understand and forecast their costs over time. In the context of loan
repayment, amortization schedules provide clarity into what portion of a loan payment consists of interest versus principal.
Interest – Money paid regularly at a particular rate for the use of money lent, or for delaying the repayment of a debt. Amortization– The process of paying off a debt over time through regular
payments. For each period, the interest expense in Column 2 is the semiannual yield rate at the time of issue, 5%, multiplied by the carrying value of the bonds at the beginning of the period. The
schedule below shows how the premium is amortized under the effective interest method. Under the effective interest method, the semiannual interest expense is $6,508 in the first period and increases
thereafter as the carrying value of the bond increases.
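As a generic sketch of those mechanics (made-up numbers, not the bond in the schedule referred to above): each period's interest expense is the market yield times the carrying value, and the gap between that expense and the cash coupon is the amount of premium or discount amortized.

    def effective_interest_schedule(face, carrying, coupon_rate, market_rate, periods):
        """Illustrative semiannual effective-interest amortization of a bond."""
        rows = []
        for period in range(1, periods + 1):
            cash_interest = face * coupon_rate          # coupon actually paid each period
            interest_expense = carrying * market_rate   # yield rate x carrying value
            amortization = cash_interest - interest_expense
            carrying -= amortization                    # carrying value moves toward face
            rows.append((period, round(interest_expense, 2),
                         round(amortization, 2), round(carrying, 2)))
        return rows

    # Hypothetical $100,000 bond issued for about $101,881 (a small premium over face):
    # 6% coupon vs. 5% market yield (3% and 2.5% per semiannual period), 4 periods.
    for row in effective_interest_schedule(100_000, 101_881, 0.03, 0.025, 4):
        print(row)   # carrying value falls each period and ends at roughly face value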
Summary Definition
The information for the journal entry to record the semiannual interest expense can be drawn directly from the amortization schedule. Under the effective interest method, a constant interest
rate—equal to the market rate at the time of issue—is used to calculate the periodic interest expense. Ron and Natasha had Oasis Leisure and Spa install an in-ground swimming pool for $51,000. The
financing plan through the company allows for end-of-month payments for two years at 6.9% compounded quarterly. Ron and Natasha instruct Oasis to round their monthly payment upward to the next dollar
amount evenly divisible by $500. Create a schedule for the first three payments, payments seven through nine, and the last three payments.
What is an amortization table accounting?
A loan amortization schedule is a table that shows each periodic loan payment that is owed, typically monthly, for level-payment loans. The schedule breaks down how much of each payment is designated
for the interest versus the principal.
Finding a qualified financial advisor doesn’t have to be hard. SmartAsset’s free tool matches you with up to three financial advisors who serve your area, and you can interview your advisor matches
at no cost to decide which one is right for you. If you’re ready to find an advisor who can help you achieve your financial goals, get started now.
– Amortization Schedule
For example, if you have to pay non-interest closing costs to get your mortgage, you should evaluate those fees separately. Some intangible assets, with goodwill being the most common example, that
have indefinite useful lives or are "self-created" may not be legally amortized for tax purposes. Use the effective-interest method to account for a bond issued at a premium. Use the
effective-interest method to account for a bond issued at a discount. Figure 13.10 illustrates the relationship between rates whenever a premium or discount is created at bond issuance. In the
following example, assume that the borrower acquired a five-year, $10,000 loan from a bank.
• This schedule is a very common way to break down the loan amount in the interest and the principal.
• Amortization is the way loan payments are applied to certain types of loans.
• Negative amortization is particularly dangerous with credit cards, whose interest rates can be as high as 20% or even 30%.
• As a result, the percentage interest rate is now 7.15 (or $6,702 / $93,678).
Reflects the monthly installment and the breakup of principal repayment and interest in each installment. Although the monthly installment will be the same for each month, the separation of principal
repayment and interest will be different for each month because loan outstanding will differ each month. By referring to this table, a person can be aware of future payments and the due loan amount.
Additionally, many amortized loans do not have language explaining the full cost of borrowing. Terms and conditions on loans like car loans, personal loans, or payday loans might leave an impression
that payments are equally split between principal and interest.
Related Calculators
For example, under this method, each period’s dollar interest expense is the same. However, as the carrying value of the bond increases or decreases, the actual percentage interest rate
correspondingly decreases or increases. In our example, we are going to calculate the amount saved by making a $1000 additional principal payment the first month of each year, for the first 10 years
of a 30 year loan. As a result, you have a triple rounding situation involving the balance along with the principal and interest components on every line of the table. What sometimes happens is that
a “missing penny” occurs and the schedule needs to be corrected as per step 12 of the process above. In other words, calculations will sometimes appear to be off by a penny.
• In our example, there is no accrued interest at the issue date of the bonds and at the end of each accounting year because the bonds pay interest on June 30 and December 31.
• Then for a loan with monthly repayments, divide the result by 12 to get your monthly interest.
• As the Fool’s Director of Investment Planning, Dan oversees much of the personal-finance and investment-planning content published daily on Fool.com.
• The articles and research support materials available on this site are educational and are not intended to be investment or tax advice.
• This process repeats itself for each period until no discount or premium remains on the principal balance.
This can be useful for purposes such as deducting interest payments for tax purposes. It’s relatively easy to produce a loan amortization schedule if you know what the monthly payment on the loan is.
Starting in month one, take the total amount of the loan and multiply it by the interest rate on the loan. Then for a loan with monthly repayments, divide the result by 12 to get your monthly
interest. Subtract the interest from the total monthly payment, and the remaining amount is what goes toward principal. For month two, do the same thing, except start with the remaining principal
balance from month one rather than the original amount of the loan.
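That month-by-month recipe translates directly into code; a small sketch with hypothetical terms (a $10,000 loan at 6% for 12 months, payment of about $860.66):

    def amortization_schedule(principal, annual_rate, payment, months):
        """Split each fixed monthly payment into interest and principal, as described above."""
        balance = principal
        rows = []
        for month in range(1, months + 1):
            interest = balance * annual_rate / 12      # this month's interest on the open balance
            principal_paid = payment - interest
            balance -= principal_paid
            rows.append((month, round(interest, 2), round(principal_paid, 2), round(balance, 2)))
        return rows

    for row in amortization_schedule(10_000, 0.06, 860.66, 12):
        print(row)   # interest share shrinks, principal share grows, balance ends near zero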
What is an amortization table simple explanation?
An amortization table is defined as a document that shows you how much you are paying each month on a loan. An amortization table shows the payment schedule which is given when a loan is granted and
approved. This is a summary of every payment that is borrowed, which must be made during the lifespan of the loan.
মন্তব্য দিবলৈ আপুনি লগিন কৰিবই লাগিব। | {"url":"https://suprabhatiti.com/as/13-2-compute-amortization-of-long-term-liabilities/","timestamp":"2024-11-05T13:44:31Z","content_type":"text/html","content_length":"177898","record_id":"<urn:uuid:2002676c-599b-4198-bddb-fa9b70d09fdb>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00023.warc.gz"} |
Changes · roles · Wiki · Glasgow Haskell Compiler / GHC · GitLab
... ... @@ -419,7 +419,7 @@ First, we gather all of the free variables in the type family's kind and mark ea
### The type family equations
Next, we descend into each defining equation of the type family and inspect the left-hand and right-hand sides. The right-hand sides are analyzed just like the fields of a data constructor;
see the [ Role inference](https://ghc.haskell.org/trac/ghc/wiki/Roles#Roleinference) section above for more details. From the right-hand sides, we learn that the roles of `e`, `f`, and `g`
should be (at least) `representational`.
Next, we descend into each defining equation of the type family and inspect the left-hand and right-hand sides. The right-hand sides are analyzed just like the fields of a data constructor;
see the [ Role inference](Roles#Roleinference) section above for more details. From the right-hand sides, we learn that the roles of `e`, `f`, and `g` should be (at least) `representational`.
The more interesting analysis comes when inspecting the left-hand sides. We want to mark any type variable that is *scrutinized* as `nominal`. By "scrutinized", we mean a variable that is
being used in a non-parametric fashion. For instance, we want to rule out scenarios like this one:
... ... | {"url":"https://gitlab.haskell.org/ghc/ghc/-/wikis/roles/diff?version_id=87268da02707537976743ae79cd53a2cd9616518","timestamp":"2024-11-11T01:03:33Z","content_type":"text/html","content_length":"43976","record_id":"<urn:uuid:82df214d-3746-41e3-b531-a9a085ba53a0>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00725.warc.gz"} |
Ultra-arithmetic II: intervals of polynomials for Mathematics and Computers in Simulation
Mathematics and Computers in Simulation
Ultra-arithmetic II: intervals of polynomials
View publication
In Part 1 of this paper (Function Data Types) we developed ultra-arithmetic. a calculus for functions which is performable on a digital computer. Here in analogy with the notion of interval
arithmetic for intervals of reals we begin the development of an interval arithmetic for functions. The operations of addition, subtraction, multiplication, division, integration and differentiation
for intervals of polynomials are defined and studied. In certain cases simplified isotonal approximations to the resultant interval as well as error analyses are also given. © 1982. | {"url":"https://research.ibm.com/publications/ultra-arithmetic-ii-intervals-of-polynomials","timestamp":"2024-11-13T03:34:27Z","content_type":"text/html","content_length":"70188","record_id":"<urn:uuid:441c9d87-c28a-4c2b-bada-e20d7bd43c85>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00152.warc.gz"} |
What does e mean for matrix?
In mathematics, the matrix exponential is a matrix function on square matrices analogous to the ordinary exponential function. It is used to solve systems of linear differential equations.
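For instance (using SciPy, with a made-up matrix): the exponential of a diagonal matrix is just the elementwise exponential of its diagonal, and exp(At) propagates the linear system x' = Ax.

    import numpy as np
    from scipy.linalg import expm

    D = np.diag([1.0, 2.0])
    print(expm(D))                     # diag(e^1, e^2)

    A = np.array([[0.0, 1.0],
                  [-1.0, 0.0]])        # generator of a rotation: x' = A x
    x0 = np.array([1.0, 0.0])
    print(expm(A * np.pi / 2) @ x0)    # ~ [0, -1]: x0 rotated a quarter turn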
What is matrix vector notation?
The mathematical subject of linear algebra uses a shorthand notation called matrix and vector notation. Using matrices and vectors, linear systems of equations can be conveniently represented and the
operations required to solve the equations can be easily applied to this shorthand notation.
What does e mean in vectors?
where ei denotes the vector with a 1 in the ith coordinate and 0’s elsewhere. Standard bases can be defined for other vector spaces, whose definition involves coefficients, such as polynomials and
What is capital e in matrix?
In scientific notation, it indicates an exponent of 10. For example "1.45E5" is scientific notation for 145000, or 1.45 × 10⁵. An upside-down E, written ∃, means "there exists".
What is e used for in math?
Euler’s Number ‘e’ is a numerical constant used in mathematical calculations. The value of e is 2.718281828459045…so on. Just like pi(π), e is also an irrational number. It is described basically
under logarithm concepts. ‘e’ is a mathematical constant, which is basically the base of the natural logarithm.
What does ∑ mean in math?
∑ Sum. The ∑ symbol means sum. ∑ is the Greek capital sigma character. It is used commonly in algebraic functions, and you may also notice it in Excel – the AutoSum button has a sigma as its icon.
How do you write vector notation?
The vector here can be written OQ (bold print) or OQ with an arrow above it. Its magnitude (or length) is written OQ (absolute value symbols). A vector may be located in a rectangular coordinate
system, as is illustrated here. The rectangular coordinate notation for this vector is v = ⟨6, 3⟩.
How do you find a vector notation?
The following three vectors in the three coordinate directions can now be defined. Vx = Vxî, Vy = Vyj, Vz = Vz k. Using the triangle rule for vector addition twice, this gives, V = Vx + Vy + Vz =
Vxî+ Vyj+ Vz k. This is known as the unit vector notation of a vector.
What is e in linear equation?
1. It is scientific notation, the e or E stands for 10^x where x is the number following the e/E. – Triatticus.
What does e mean in math equations?
The number e , sometimes called the natural number, or Euler’s number, is an important mathematical constant approximately equal to 2.71828. When used as the base for a logarithm, the corresponding
logarithm is called the natural logarithm, and is written as ln(x) . Note that ln(e)=1 and that ln(1)=0 .
What is e math notation?
In statistics, the symbol e is a mathematical constant approximately equal to 2.71828183. Prism switches to scientific notation when the values are very large or very small. For example: 2.3e-5,
means 2.3 times ten to the minus five power, or 0.000023. | {"url":"https://www.worldsrichpeople.com/what-does-e-mean-for-matrix/","timestamp":"2024-11-14T11:09:30Z","content_type":"text/html","content_length":"54368","record_id":"<urn:uuid:feec4174-8cf1-4087-9da9-ebadcccc4c2e>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00210.warc.gz"} |
Hyperbolic Visualization
Next: Layout Up: Visualizing the Structure of Previous: Introduction
Mathematically consistent alternatives to Euclidean geometry have been developed over the past hundred years. The noneuclidean geometries can be distinguished by the behavior of parallel lines: in
Euclidean space there is exactly one line passing through a given point which is parallel to a given line, but in hyperbolic geometry there are many. In this space the area of a circle grows
exponentially with respect to its radius, whereas in Euclidean space the area only grows linearly. Thanks to this property of hyperbolic distance we have a convenient way to visualize exponentially
growing trees.
The simplest way to draw 3D hyperbolic pictures is in the interior of a ball. We use Euclidean straight lines, but the way we measure distance is changed so that the surface of the ball is infinitely
far away from the origin. From the outside, objects very near the center seem almost Euclidean but seem to grow distorted and smaller as they are translated towards the surface of the ball. This
representation of hyperbolic space is known as the projective or Klein model. Luckily we can draw such pictures using 4 x 4 matrices: since the standard graphics pipeline uses homogeneous
coordinates, interactive speeds are easily achieved on modern workstations. Figure 3 shows a navigation sequence in the projective model. The look and feel of the system is difficult to communicate
with still pictures, a video is necessary to do it justice.
Figure 3: Motion in hyperbolic space (the projective model)
WebOOGL [120 KB] VRML [250 KB] MPEG [900 KB]
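A rough illustration (not the authors' code) of the 4 x 4 matrix remark: in the hyperboloid picture a translation along x is a Lorentz-boost matrix acting on homogeneous coordinates, and dividing by the w-coordinate gives the point in the projective (Klein) ball, which crowds toward the boundary as the translation distance grows.

    import numpy as np

    def boost_x(d):
        """Hyperbolic translation by distance d along x, as a 4x4 matrix on (x, y, z, w)."""
        m = np.eye(4)
        m[0, 0] = m[3, 3] = np.cosh(d)
        m[0, 3] = m[3, 0] = np.sinh(d)
        return m

    origin = np.array([0.0, 0.0, 0.0, 1.0])      # homogeneous coordinates
    for d in [0.0, 1.0, 2.0, 4.0]:
        q = boost_x(d) @ origin
        print(d, q[:3] / q[3])                   # Klein-model point: x = tanh(d), never reaching 1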
The conformal or Poincaré model is related to the projective model by a simple transformation. In this model, straight lines are drawn as arcs and flat faces are drawn as parts of spheres. The
advantage is that angles are always drawn correctly. Unfortunately we cannot use 4 x 4 matrices to represent motion in the conformal model, and drawing arcs and curved faces also requires much
greater subdivision than in the projective case. While this model is more computationally demanding than the projective one, it is within the reach of most workstations.
If we allow the camera to go outside of the ball we can see the entire space at once. When we constrain the camera itself according to hyperbolic isometries, it can never move outside of the ball and
we see what hyperbolic space would look like from the insider's point of view. These three models provide different and useful intuition, so we use whichever of them is appropriate at any given time.
Figure 4 shows two levels of the Geometry Center Weblet in the three models. (We use ``Weblet'' to refer to a section of the Web .) Here we have drawn the ``sphere at infinity'' for the two outsider
models but in all other pictures the sphere is not shown.
Figure 4: The projective, conformal, and ``insider'' models of hyperbolic space
WebOOGL [50KB] VRML [110KB]
MPEG projective [1.15MB] MPEG conformal [700KB] MPEG insider [360KB]
There has been considerable work at the Geometry Center on visualizing hyperbolic geometry. The 1991 mathematical animation Not Knot [3] includes a groundbreaking flythrough of hyperbolic space. A
reference on hyperbolic navigation can be found in [1], which to the best of our knowledge contains the first recorded suggestion of using hyperbolic space for tree visualization. See also [2] for
more exposition of hyperbolic geometry aimed at the computer graphics community.
The publicly distributed Geomview 3D viewer [8] includes built-in support for interactive navigation in noneuclidean geometries, including hyperbolic geometry. Details on the implementation of the
hyperbolic transformation libraries used in Geomview can be found in [7].
A recent paper from Lamping et al at Xerox PARC [4] has brought hyperbolic visualization to the attention of the computer-human interaction community. They ran user tests of a 2D Poincaré model
browser visualizing various kinds of hierarchical information, including organizational charts and a Weblet. Their work on information visualization is confined to the 2D hyperbolic plane, and deals
only with acyclic graphs.
Hyperbolic space has a similar visual effect to a fisheye camera lens, used by [10] for information visualization. The fisheye paradigm calls for several ad-hoc decisions; the advantage of hyperbolic
geometry is that it provides an elegant and powerful mathematical framework for display and navigation.
Next: Layout Up: Visualizing the Structure of Previous: Introduction
Tamara Munzner
Tue Nov 21 23:43:05 CST 1995 | {"url":"http://www.graphics.stanford.edu/papers/webviz/webviz/node2.html","timestamp":"2024-11-02T06:15:21Z","content_type":"text/html","content_length":"8017","record_id":"<urn:uuid:4c60e3a8-695c-4f5f-af46-b1bf9c20e4b3>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00610.warc.gz"} |
Adaptive Polarization Stacking
The "transient" approach to AMT was arguably first implemented by Don Hoover of the USGS in the 1970's where a "human computer" located discrete events on a strip chart recorder and calculated scalar
impedances with the detected events. Many others have recorded transients in order to maximize signal-to-noise ratio (SNR) including Keeva Vozoff (University of Macquarie) in the 1980's, Ken Paulson
(University of Saskatchewan) in the 1980's, Andreas Tzanis and David Beamish (British Geological Survey) in the 1980's and Garner and Thiel in the 1990's.
However, with the exception of Ken Paulson and Peter Kosteniuk, standard processing techniques were used to process the linearly polarized transient data. This can be problematic with a confined
distribution of bearings as the polarization diversity of the transient data causes a bias and instability of solution additional to that caused by finite SNR and finite sample size.
The angle between events is similar to specifying the condition number of a pair of simultaneous linear equations. A narrow angle produces a large condition number and an unstable system; in such a
case, small changes of the input data produce large changes on the output (estimated curves).
Ken Paulson and Peter Kosteniuk were unique in the development of a new data processing algorithm that exploited the directional, linearly polarized, properties of the transient waveforms. Their
original Polarization Stacking algorithm involved the detection of individual transient events and their classification into two separate "stacks" based on the direction of arrival, stack boundaries
oriented N-S and E-W were used.
A key aspect of Polarization Stacking was the further enhancement of SNR through a time domain stacking of the transient waveforms. All recorded events were decomposed into two "super" events
contained in the two stacks; Fourier transformation with a simple "two-point" formula then yielded impedance tensor and tipper estimates.
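In matrix terms the "two-point" formula is just a 2 x 2 linear solve at each frequency: two stacked events with different polarizations supply two independent (E, H) pairs, enough to pin down the impedance tensor in E = Z H. A toy sketch with invented spectra (when the two polarizations are nearly parallel, H becomes ill-conditioned, which is the instability described above):

    import numpy as np

    # Columns are the two stacked events at one frequency (hypothetical numbers).
    H = np.array([[1.0 + 0.2j, 0.1 - 0.3j],     # Hx of event 1, event 2
                  [0.2 - 0.1j, 0.9 + 0.4j]])    # Hy
    Z_true = np.array([[0.1 + 0.05j, 2.0 + 1.0j],
                       [-1.8 - 0.9j, -0.2 + 0.1j]])
    E = Z_true @ H                               # the MT transfer relation E = Z H

    Z_est = E @ np.linalg.inv(H)                 # two independent polarizations -> unique Z
    print(np.allclose(Z_est, Z_true))            # True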
Not only were the best possible events used by Paulson and Kosteniuk, they developed a data processing algorithm to work with and exploit the linearly polarized transient data.
EMpulse Geophysics improved the Polarization Stacking algorithm to allow the stack boundaries to be adaptive to the constantly changing bearing and amplitude characteristics of the data, thus our new
algorithm is called Adaptive Polarization Stacking or APS. The stack boundaries are chosen in such a way so as to maximize the angle between stacks and to approximately equalize signal energy in each
stack. This avoids the problem of all the events going into one stack, as could sometimes happen with the original Polarization Stacking algorithm in times of low source field activity.
A second improvement with APS was the use of weighted averaging, with Polarization Stacking every event in each stack had the same weight of unity. EMpulse found out rather quickly that it only took
one or two noisy events to disrupt the stack and estimated impedance/tipper curves. With APS, each event is weighted according to a very simple quality indicator, even with high SNR transient events,
it's not uncommon for many events to be downweighted significantly (weight < 0.1) relative to the best event (unity weight). It's very much a "diamond in the rough" type process with typically 20
percent or less of the total number of events possessing significant weight (> 0.3).
A third and critical improvement with APS was the inclusion of error bar estimates, derived through Monte Carlo simulation, that are properly connected to the polarization properties of the source
field, the SNR and sample size. The estimated impedance (and admittance) are used to evaluate the residual between predicted and measured electric and magnetic fields, this residual, averaged across
all the measured events, is used to obtain an (over) estimate of the noise standard deviation for all five time series (Hx, Hy, Ex, Ey, Hz). In accord with the noise standard deviations thus found,
normally distributed noise is added to the time-series and a "noisy" impedance tensor and tipper estimated and stored, this process is repeated hundreds of times, each time with independent linearly
additive noise, to obtain a noisy family of impedance tensor and tipper estimates from which 95 percent confidence limits can be found.
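A stripped-down sketch of that Monte Carlo idea for a single scalar transfer function (the actual procedure perturbs all five channels and repeats for the full tensor and tipper):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    h = rng.standard_normal(n)                       # stand-in magnetic channel
    e = 2.0 * h + 0.3 * rng.standard_normal(n)       # electric channel = Z*h + noise, true Z = 2

    z_hat = np.dot(e, h) / np.dot(h, h)              # least-squares estimate of Z
    noise_std = np.std(e - z_hat * h)                # residual-based (over)estimate of the noise

    sims = [np.dot(e + noise_std * rng.standard_normal(n), h) / np.dot(h, h)
            for _ in range(1000)]                    # re-estimate with fresh synthetic noise
    lo, hi = np.percentile(sims, [2.5, 97.5])
    print(f"Z ~ {z_hat:.3f}, 95% limits [{lo:.3f}, {hi:.3f}]")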
In contrast to conventional Remote-Reference error analysis, our APS error bars typically over-estimate the true error and are thus conservative. Our APS error bars do fail when there are a small
number of events, as can happen in the 1 - 5 kHz "dead-band" range. However, the effect is obvious and simpy requires the manual setting of the error bars in the affected range to a very large level.
In this fashion, we avoid the error-bar "guessing game" as is often times required with conventional AMT data, i.e., since the conventional Remote-Reference error bars are usually unrealistically
small, it is typical to use several different error bar "guesses" (10 percent uniform error, 20 percent, etc.) ultimately leaving the interpreter with a subjective decision as to which model he/she
considers the "best" or most reasonable fit to the measured data.
One downside of Adaptive Polarization Stacking is that since each one of the transient waveforms are in general different, in adiition to cancelling noise, some fine scale signal cancellation also
occurs. This is especially so for frequencies less than 50 Hz, where the transient component begins to diminish and the continuing component begins to rise.
In an effort to completely avoid signal cancellation EMpulse also developed a "curve-stacking" algorithm whereby independent pairs of events are chosen based on a quality indicator. Once the events
are paired up, an impedance and tipper estimate is found for each pair, thus obtaining a family of "pair-processed" impedance and tipper curves. These curves are then "stacked" or averaged in the
frequency domain in a weighted sense, either with one global weight per curve or with frequency dependent weights.
EMpulse Geophysics performed an extensive analysis of the bias (and error bar efficiency) of our APS algorithm as well as our frequency-domain "curve-stacking" algorithm (and standard Remote-Reference). Real magnetic field data were used in conjunction with a 3D impedance tensor and tipper to construct perfectly matching electric field and vertical magnetic field data. The perfect relation (infinite SNR) was then disrupted through the addition of normally distributed random noise to all five channels. The resulting impedance tensor and tipper were then estimated and stored, and this process was repeated many thousands of times, permitting an analysis of the bias and error bar performance as a function of frequency for APS, curve-stacking and Remote-Reference.
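The same kind of study can be sketched, much reduced, for a single scalar transfer function: generate data from a known relation, add independent noise, re-estimate many thousands of times, and compare the average estimate with the truth. The two estimators below are generic stand-ins, not APS, curve-stacking or Remote-Reference.

```python
import numpy as np

def bias_study(true_value, make_data, estimators, n_trials=2000, seed=2):
    """Repeatedly add noise to synthetic data and compare estimator bias."""
    rng = np.random.default_rng(seed)
    results = {name: [] for name in estimators}
    for _ in range(n_trials):
        h, e = make_data(rng)                       # noisy synthetic channels
        for name, est in estimators.items():
            results[name].append(est(h, e))
    return {name: np.mean(vals) - true_value for name, vals in results.items()}

# Synthetic relation e = 3*h, with the perfect relation disrupted by noise on both channels.
TRUE = 3.0
def make_data(rng, n=1024, noise=0.5):
    h = rng.standard_normal(n)
    return h + noise * rng.standard_normal(n), TRUE * h + noise * rng.standard_normal(n)

estimators = {
    "least squares": lambda h, e: np.dot(e, h) / np.dot(h, h),   # biased down by noise on h
    "ratio of sums": lambda h, e: np.sum(e * np.sign(h)) / np.sum(np.abs(h)),
}
for name, bias in bias_study(TRUE, make_data, estimators).items():
    print(f"{name:>14s}: bias = {bias:+.3f}")
```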
Even though curve-stacking involves absolutely no signal cancellation, we found that APS still provides by far the greatest accuracy (smallest bias) and the best error bar efficiency, that is, the smallest error bar that still captures the true underlying curve. Even though some signal cancellation occurs with APS, the benefits of noise cancellation and signal enhancement appear to far outweigh it.
It was further found that the bias performance of APS is better than that of standard Remote-Reference (R.R.) given transients with typical polarization characteristics; please click here to download the paper, presented at the 2001 SEG meeting in San Antonio.
In order to leave no stone unturned, we are working on a third algorithm which involves stacking transients as per APS, but only those that have a high degree of correlation with one another. Only events with similar time-domain shapes are stacked; non-correlating events are completely left out of the time-domain stacking process. After stacking as many (weighted) events as possible, we proceed to the frequency domain with the new subset of events and perform curve-stacking. This hybrid time/frequency domain processing method may represent the best usage of the data, although Monte Carlo analysis of its bias and error bar performance remains to be conducted. Furthermore, the hybrid time/frequency domain algorithm would appear to be adaptable to both auroral and lightning sources, possibly making it useful for sub-1 Hz data. | {"url":"http://www.empulse.ca/article/adaptive-polarization-stacking-110.asp","timestamp":"2024-11-12T03:39:22Z","content_type":"text/html","content_length":"14276","record_id":"<urn:uuid:a95dcd18-03d5-4eb6-8fa7-b81d98e7ee46>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00810.warc.gz"} |
Understanding Mathematical Functions: Which of the Following Functions Are One to One?
Mathematical functions play a crucial role in various fields, from engineering to economics, and understanding their characteristics is essential for solving real-world problems. One important
property of functions is whether they are one to one, also known as injective functions. In this blog post, we will explore the definition of mathematical functions and delve into the importance of
understanding one to one functions in the realm of mathematics.
Key Takeaways
• One to one functions are crucial in various fields, from engineering to economics.
• Understanding the characteristics of one to one functions is essential for solving real-world problems.
• Testing for one to one using the horizontal line test is a common method.
• Linear and exponential functions with a base greater than 1 are examples of one to one functions.
• Recognizing patterns that indicate a one to one function is an important skill in mathematics.
Understanding Mathematical Functions
Mathematical functions are an essential part of the field of mathematics, and they play a crucial role in various applications in the real world. One specific type of function that is of particular
interest is the one to one function. In this chapter, we will delve into the concept of one to one functions, their characteristics, and provide examples to illustrate their application.
Explanation of one to one functions
A one to one function, also known as an injective function, is a type of function in which each element in the domain maps to a unique element in the codomain. In simpler terms, no two different
elements in the domain can map to the same element in the codomain. This property makes one to one functions particularly useful in various mathematical and real-world scenarios.
Characteristics of one to one functions
• Unique mapping: As mentioned earlier, one to one functions exhibit the characteristic of each element in the domain mapping to a unique element in the codomain. This ensures that there are no
duplicate mappings, making the function distinct and well-defined.
• Horizontal line test: Another characteristic of one to one functions is that no horizontal line intersects the graph of the function more than once. This property serves as a visual indicator of
whether a function is one to one.
• Strictly increasing or decreasing: For continuous real-valued functions defined on an interval, a one to one function is either strictly increasing or strictly decreasing throughout that interval.
Examples of one to one functions
There are various examples of one to one functions that can be found in mathematics and everyday life. Some common examples include:
• Linear functions: Functions in the form of f(x) = mx + b, where m is the slope and b is the y-intercept, are one to one functions if the slope m is non-zero.
• Exponential functions: Functions of the form f(x) = a^x, where a is a positive real number other than 1, are one to one functions, since they exhibit exponential growth or decay without repeating any values.
• Logarithmic functions: Functions of the form f(x) = log_a(x), where a is a positive real number other than 1, are also one to one functions, as they represent the inverses of exponential functions and have distinct values for each input in their domain.
These examples serve to illustrate the diverse nature of one to one functions and their applicability in various mathematical contexts.
Identifying One to One Functions
Understanding one to one functions is a fundamental concept in mathematics. In this chapter, we will discuss various methods for identifying one to one functions.
A. Testing for one to one using the horizontal line test
The horizontal line test is a simple yet effective method for determining if a function is one to one. The test involves drawing horizontal lines across the graph of the function and checking if each
horizontal line intersects the graph at most once.
• Draw horizontal lines across the graph
• Check for intersections with the graph
• If each horizontal line intersects the graph at most once, the function is one to one
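For readers who prefer a computational check, the sketch below is a rough numeric analogue of the horizontal line test: sample the function on a grid and flag any output value that is produced by two clearly different inputs. It inspects finitely many points only, so it is evidence rather than proof, and all names in it are made up for this example.

```python
import numpy as np

def looks_one_to_one(f, lo, hi, n=10001, tol=1e-9):
    """Crude numeric analogue of the horizontal line test.

    Samples f on [lo, hi] and reports False if two well-separated sample
    points give (nearly) the same output, i.e. some horizontal line would
    cross the sampled graph more than once.  A True result is only evidence,
    not a proof, since it looks at finitely many points.
    """
    x = np.linspace(lo, hi, n)
    y = f(x)
    order = np.argsort(y)
    ys, xs = y[order], x[order]
    # Nearly equal outputs coming from clearly different inputs => not one to one.
    close_y = np.abs(np.diff(ys)) < tol
    far_x = np.abs(np.diff(xs)) > (hi - lo) / n * 2
    return not np.any(close_y & far_x)

print(looks_one_to_one(lambda x: 2 * x + 3, -10, 10))        # True: linear, slope 2
print(looks_one_to_one(lambda x: x ** 2, -10, 10))           # False: fails the test
print(looks_one_to_one(np.sin, -np.pi / 2, np.pi / 2))       # True on this interval
```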
B. Solving for one to one using algebraic manipulation
Another approach to identifying one to one functions is through algebraic manipulation. By analyzing the algebraic structure of the function, we can determine if it satisfies the criteria for being
one to one.
• Apply the definition of one to one functions
• Solve for the function's inverse
• If the inverse exists and is also a function, the original function is one to one
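As a worked illustration of these steps, take the linear function f(x) = 2x + 3 that appears later in this post. Solving y = 2x + 3 for x gives x = (y - 3)/2, so the inverse f^(-1)(y) = (y - 3)/2 exists and is itself a function, confirming that f is one to one. Trying the same steps on y = x^2 instead gives x = ±√y, which is not single-valued, signalling that x^2 is not one to one.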
C. Recognizing patterns that indicate a one to one function
Patterns and characteristics of functions can provide insights into whether a function is one to one. By recognizing these patterns, we can quickly identify one to one functions without extensive
testing or manipulation.
• Identify strictly increasing or strictly decreasing functions
• Look for symmetry in the graph or equation
• Recognize periodic functions and their behavior
Common misconceptions about one to one functions
When it comes to understanding mathematical functions, the concept of one to one functions can often be a source of confusion for students and even some experienced mathematicians. Let's explore some
common misconceptions about one to one functions.
A. Confusing one to one with onto functions
One common misconception about one to one functions is the confusion with onto functions. One to one functions and onto functions are actually two distinct concepts, but they are often mistakenly
thought to be the same thing. Onto functions are those for which every element in the codomain has at least one corresponding element in the domain. On the other hand, one to one functions are those
where each element in the codomain has at most one corresponding element in the domain. It's important to understand the difference between these two types of functions to avoid confusion.
B. Misunderstanding the role of inverse functions
Another misconception about one to one functions concerns the role of inverse functions. Some people assume that being one to one is all a function needs in order to have an inverse. In fact, a function has an inverse defined on its whole codomain precisely when it is both one to one and onto; a one to one function that is not onto has an inverse only on its range. In other words, being one to one is a necessary but not a sufficient condition for a function to have an inverse on its full codomain. This distinction is crucial for grasping the concept of one to one functions.
C. Examples of functions that are often mistakenly thought to be one to one
There are certain functions that are often mistakenly thought to be one to one. For example, the square function y = x^2 is not one to one because different inputs can yield the same output. Another
example is the absolute value function y = |x|, which is not one to one because it maps both positive and negative numbers to the same output. Understanding these common examples of functions that
are not one to one can help clarify the concept.
Examples of Functions that are One to One
When studying mathematical functions, it's important to understand which functions are one to one. One to one functions are those in which each element of the domain is paired with exactly one
element of the range. In other words, no two different inputs can lead to the same output. Let's explore some examples of functions that are one to one.
A. Linear functions
Linear functions with a non-zero slope are among the most common examples of one to one functions. They have a constant rate of change and are represented by a straight line on a graph. For example, the function f(x) = 2x + 3 is a linear function that is one to one: for every x-value there is a unique y-value, and vice versa.
B. Exponential functions with a base greater than 1
Exponential functions with a base greater than 1 are also one to one. These functions grow rapidly as x increases and have a unique output for each input. For instance, the function g(x) = 3^x is an
exponential function with a base of 3, and it is one to one.
C. Trigonometric functions with restricted domains
Trigonometric functions such as sine, cosine, and tangent are not one to one on their full domains. However, when their domains are restricted, they can become one to one. For example, the function h(x) = sin(x) on the interval [-π/2, π/2] is one to one because sine is strictly increasing on that interval, ensuring that each input corresponds to a unique output.
Examples of Functions that are Not One to One
When it comes to mathematical functions, not all of them are one to one. Understanding which functions fall into this category is important for various mathematical applications. Let's take a closer
look at some examples of functions that are not one to one:
• Quadratic functions
Quadratic functions, such as f(x) = x^2, are not one to one. This is because different input values can yield the same output value. For example, both f(2) and f(-2) result in 4. This violates
the definition of a one to one function, which requires each input to correspond to a unique output.
• Absolute value functions
The absolute value function f(x) = |x|, mentioned earlier, is not one to one. A positive number and its negative map to the same output; for example, f(2) and f(-2) both equal 2, so distinct inputs produce the same output. (By contrast, an exponential function with a base strictly between 0 and 1, such as f(x) = (1/2)^x, is strictly decreasing and therefore is one to one.)
• Trigonometric functions with unrestricted domains
Trigonometric functions, like sine and cosine, have unrestricted domains and are not one to one. They have periodic behavior, which means that the function repeats its values over a certain
interval. This periodicity leads to multiple inputs producing the same output, making these functions not one to one.
Understanding one to one functions is crucial in mathematics as it helps us prevent errors and ensures the accuracy of our calculations. It is important to practice identifying one to one functions
in order to develop our skills and gain confidence in our mathematical abilities. The significance of one to one functions in mathematics cannot be overstated, as they play a vital role in various
mathematical concepts and applications.
| {"url":"https://dashboardsexcel.com/blogs/blog/mathematical-functions-which-of-the-following-functions-are-one-to-one","timestamp":"2024-11-09T04:13:47Z","content_type":"text/html","content_length":"214792","record_id":"<urn:uuid:2fb9cab3-28fa-4281-a4bd-339e6e098d48>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00628.warc.gz"} |
Algebra and number theory II
MTH3150 - Algebra and number theory II
6 points, SCA Band 2, 0.125 EFTSL
Clayton Second semester 2007 (Day)
Rings, fields, algebraic integers, finite fields, splitting fields of polynomials and fields of fractions. Classical problems of ruler and compass (eg. can an angle be trisected?). Coding,
cryptography, and geometric constructions. Gaussian integers, Hamilton's quaternions, Chinese Remainder Theorem. Euclidean Algorithm in further fields
At the completion of this unit, students will be able to demonstrate understanding of advanced concepts, algorithms and results in number theory; the use of Gaussian integers to find the primes
expressible as a sum of squares, Diophantine equations; the quaternions, the best known skew field; many of the links between algebra and number theory; the most commonly occurring rings and fields:
integers, integers modulo n, rationals, reals and complex numbers, and more general structures such as algebraic number fields, algebraic integers and finite fields; and will have developed skills in the
use of the Chinese Remainder Theorem to represent integers by their remainders; performing calculations in the algebra of polynomials; the use of the Euclidean algorithm in structures other than
integers or Gaussian integers; constructing larger fields from smaller fields (field extensions); applying ring and field theory to coding, cryptography and geometric constructions.
Examination (3 hours): 70%
Assignments and tests: 30%
Contact hours
Three 1-hour lectures and an average of one 1-hour support class per week
MTH2122 or MTH3122 | {"url":"https://www3.monash.edu/pubs/2007handbooks/units/MTH3150.html","timestamp":"2024-11-05T06:02:35Z","content_type":"text/html","content_length":"8471","record_id":"<urn:uuid:008590ee-1726-4674-8240-dfc6989a985d>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00630.warc.gz"} |
Logit
The logit (pronounced with a long "o" and a soft "g") of a number $p$ between 0 and 1 is $\operatorname{logit}(p) = \log\bigl(p/(1-p)\bigr) = \log p - \log(1-p)$. (The base of the logarithm function used here is of little importance in the present article, as long as it is greater than 1.) The logit function is the inverse of the "sigmoid", or "logistic", function. If $p$ is a probability then $p/(1-p)$ is the corresponding odds, and the logit of the probability is the logarithm of the odds; similarly the difference between the logits of two probabilities is the logarithm of the odds ratio, thus providing an additive mechanism for combining odds ratios.
Logits are used for various purposes by statisticians. In particular there is the "logit model", of which the simplest sort is $\operatorname{logit}(p_i) = a + b x_i$, where $x_i$ is some quantity on which success or failure in the $i$th of a sequence of Bernoulli trials may depend, and $p_i$ is the probability of success in the $i$th case. For example, $x$ may be the age of a patient admitted to a hospital with a heart attack, and "success" may be the event that the patient dies before leaving the hospital (another instance of the reason why the words "success" and "failure" in speaking of Bernoulli trials should be taken with large doses of salt). Having observed the values of $x$ in a sequence of cases and whether there was a "success" or a "failure" in each such case, a statistician will often estimate the values of the coefficients $a$ and $b$ by the method of maximum likelihood. The result can then be used to assess the probability of "success" in a subsequent case in which the value of $x$ is known. Estimation and prediction by this method are called logistic regression.
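As a small illustration of the definitions above, the following Python sketch computes the logit and its inverse (the logistic function) and fits the two coefficients a and b of the simple logit model by maximum likelihood using plain gradient ascent; the synthetic data, the standardization of x, the step size and the iteration count are all assumptions made for this example.

```python
import numpy as np

def logit(p):
    """logit(p) = log(p / (1 - p)), the log-odds of a probability p."""
    return np.log(p / (1.0 - p))

def inv_logit(z):
    """Inverse of the logit: the logistic (sigmoid) function."""
    return 1.0 / (1.0 + np.exp(-z))

def fit_logit_model(x, y, steps=5000, lr=0.01):
    """Maximum-likelihood fit of logit(p_i) = a + b * x_i by gradient ascent."""
    a, b = 0.0, 0.0
    for _ in range(steps):
        p = inv_logit(a + b * x)
        # Gradient of the mean Bernoulli log-likelihood with respect to (a, b).
        a += lr * np.mean(y - p)
        b += lr * np.mean((y - p) * x)
    return a, b

# Synthetic example: probability of "success" rises with x (e.g. age).
rng = np.random.default_rng(0)
x = rng.uniform(30, 90, size=2000)
true_a, true_b = -6.0, 0.1
y = (rng.random(2000) < inv_logit(true_a + true_b * x)).astype(float)
a, b = fit_logit_model((x - x.mean()) / x.std(), y)
print(f"fitted coefficients (on standardized x): a = {a:.2f}, b = {b:.2f}")
print("logit(0.75) =", logit(0.75), "and back:", inv_logit(logit(0.75)))
```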
A logistic regression model is identical to a neural network with no hidden units. For a neural network with hidden units, each hidden unit computes a logistic regression (different for each hidden unit),
and the output is therefore a weighted sum of logistic regression outputs.
The logit in logistic regression is a special case of a link function in generalized linear models.
The logit model was introduced by Joseph Berkson in 1944. | {"url":"http://www.fact-index.com/l/lo/logit.html","timestamp":"2024-11-03T18:40:32Z","content_type":"text/html","content_length":"5950","record_id":"<urn:uuid:16bc7f33-f81c-4541-b8de-978fb65132d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00760.warc.gz"} |
MSC 2010 Classification Codes
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme collaboratively produced by staff of, and based on the coverage of, the two major mathematical reviewing
databases, Mathematical Reviews and Zentralblatt MATH. (See also Wikipedia.)
• 00-XX: General
□ 00-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 00-02: Research exposition (monographs, survey articles)
□ 00Axx: General and miscellaneous specific topics
☆ 00A05: General mathematics
☆ 00A06: Mathematics for nonmathematicians (engineering, social sciences, etc.)
☆ 00A07: Problem books
☆ 00A08: Recreational mathematics [See also 97A20]
☆ 00A09: Popularization of mathematics
☆ 00A15: Bibliographies
☆ 00A17: External book reviews
☆ 00A20: Dictionaries and other general reference works
☆ 00A22: Formularies
☆ 00A30: Philosophy of mathematics [See also 03A05]
☆ 00A35: Methodology of mathematics, didactics [See also 97Cxx, 97Dxx]
☆ 00A65: Mathematics and music
☆ 00A66: Mathematics and visual arts, visualization
☆ 00A67: Mathematics and architecture
☆ 00A69: General applied mathematics {For physics, see 00A79 and Sections 70 through 86}
☆ 00A71: Theory of mathematical modeling
☆ 00A72: General methods of simulation
☆ 00A73: Dimensional analysis
☆ 00A79: Physics (use more specific entries from Sections 70 through 86 when possible)
☆ 00A99: Miscellaneous topics
□ 00Bxx: Conference proceedings and collections of papers
☆ 00B05: Collections of abstracts of lectures
☆ 00B10: Collections of articles of general interest
☆ 00B15: Collections of articles of miscellaneous specific content
☆ 00B20: Proceedings of conferences of general interest
☆ 00B25: Proceedings of conferences of miscellaneous specific interest
☆ 00B30: Festschriften
☆ 00B50: Volumes of selected translations
☆ 00B55: Miscellaneous volumes of translations
☆ 00B60: Collections of reprinted articles [See also 01A75]
☆ 00B99: None of the above, but in this section
• 01-XX: History and biography [See also the classification number –03 in the other sections]
□ 01-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 01-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 01-02: Research exposition (monographs, survey articles)
□ 01-06: Proceedings, conferences, collections, etc.
□ 01-08: Computational methods
□ 01Axx: History of mathematics and mathematicians
☆ 01A05: General histories, source books
☆ 01A07: Ethnomathematics, general
☆ 01A10: Paleolithic, Neolithic
☆ 01A12: Indigenous cultures of the Americas
☆ 01A13: Other indigenous cultures (non-European)
☆ 01A15: Indigenous European cultures (pre-Greek, etc.)
☆ 01A16: Egyptian
☆ 01A17: Babylonian
☆ 01A20: Greek, Roman
☆ 01A25: China
☆ 01A27: Japan
☆ 01A29: Southeast Asia
☆ 01A30: Islam (Medieval)
☆ 01A32: India
☆ 01A35: Medieval
☆ 01A40: 15th and 16th centuries, Renaissance
☆ 01A45: 17th century
☆ 01A50: 18th century
☆ 01A55: 19th century
☆ 01A60: 20th century
☆ 01A61: Twenty-first century
☆ 01A65: Contemporary
☆ 01A67: Future prospectives
☆ 01A70: Biographies, obituaries, personalia, bibliographies
☆ 01A72: Schools of mathematics
☆ 01A73: Universities
☆ 01A74: Other institutions and academies
☆ 01A75: Collected or selected works; reprintings or translations of classics [See also 00B60]
☆ 01A80: Sociology (and profession) of mathematics
☆ 01A85: Historiography
☆ 01A90: Bibliographic studies
☆ 01A99: Miscellaneous topics
• 03-XX: Mathematical logic and foundations
□ 03-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 03-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 03-02: Research exposition (monographs, survey articles)
□ 03-03: Historical (must also be assigned at least one classification number from Section 01)
□ 03-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 03-06: Proceedings, conferences, collections, etc.
□ 03Axx: Philosophical aspects of logic and foundations
☆ 03A05: Philosophical and critical {For philosophy of mathematics, see also 00A30}
☆ 03A10: Logic in the philosophy of science
☆ 03A99: None of the above, but in this section
□ 03Bxx: General logic
☆ 03B05: Classical propositional logic
☆ 03B10: Classical first-order logic
☆ 03B15: Higher-order logic and type theory
☆ 03B20: Subsystems of classical logic (including intuitionistic logic)
☆ 03B22: Abstract deductive systems
☆ 03B25: Decidability of theories and sets of sentences [See also 11U05, 12L05, 20F10]
☆ 03B30: Foundations of classical theories (including reverse mathematics) [See also 03F35]
☆ 03B35: Mechanization of proofs and logical operations [See also 68T15]
☆ 03B40: Combinatory logic and lambda-calculus [See also 68N18]
☆ 03B42: Logics of knowledge and belief (including belief change)
☆ 03B44: Temporal logic
☆ 03B45: Modal logic (including the logic of norms) {For knowledge and belief, see 03B42; for temporal logic, see 03B44; for provability logic, see also 03F45}
☆ 03B47: Substructural logics (including relevance, entailment, linear logic, Lambek calculus, BCK and BCI logics) {For proof-theoretic aspects see 03F52}
☆ 03B48: Probability and inductive logic [See also 60A05]
☆ 03B50: Many-valued logic
☆ 03B52: Fuzzy logic; logic of vagueness [See also 68T27, 68T37, 94D05]
☆ 03B53: Paraconsistent logics
☆ 03B55: Intermediate logics
☆ 03B60: Other nonclassical logic
☆ 03B62: Combined logics
☆ 03B65: Logic of natural languages [See also 68T50, 91F20]
☆ 03B70: Logic in computer science [See also 68-XX]
☆ 03B80: Other applications of logic
☆ 03B99: None of the above, but in this section
□ 03Cxx: Model theory
☆ 03C05: Equational classes, universal algebra [See also 08Axx, 08Bxx, 18C05]
☆ 03C07: Basic properties of first-order languages and structures
☆ 03C10: Quantifier elimination, model completeness and related topics
☆ 03C13: Finite structures [See also 68Q15, 68Q19]
☆ 03C15: Denumerable structures
☆ 03C20: Ultraproducts and related constructions
☆ 03C25: Model-theoretic forcing
☆ 03C30: Other model constructions
☆ 03C35: Categoricity and completeness of theories
☆ 03C40: Interpolation, preservation, definability
☆ 03C45: Classification theory, stability and related concepts [See also 03C48]
☆ 03C48: Abstract elementary classes and related topics [See also 03C45]
☆ 03C50: Models with special properties (saturated, rigid, etc.)
☆ 03C52: Properties of classes of models
☆ 03C55: Set-theoretic model theory
☆ 03C57: Effective and recursion-theoretic model theory [See also 03D45]
☆ 03C60: Model-theoretic algebra [See also 08C10, 12Lxx, 13L05]
☆ 03C62: Models of arithmetic and set theory [See also 03Hxx]
☆ 03C64: Model theory of ordered structures; o-minimality
☆ 03C65: Models of other mathematical theories
☆ 03C68: Other classical first-order model theory
☆ 03C70: Logic on admissible sets
☆ 03C75: Other infinitary logic
☆ 03C80: Logic with extra quantifiers and operators [See also 03B42, 03B44, 03B45, 03B48]
☆ 03C85: Second- and higher-order model theory
☆ 03C90: Nonclassical models (Boolean-valued, sheaf, etc.)
☆ 03C95: Abstract model theory
☆ 03C98: Applications of model theory [See also 03C60]
☆ 03C99: None of the above, but in this section
□ 03Dxx: Computability and recursion theory
☆ 03D03: Thue and Post systems, etc.
☆ 03D05: Automata and formal grammars in connection with logical questions [See also 68Q45, 68Q70, 68R15]
☆ 03D10: Turing machines and related notions [See also 68Q05]
☆ 03D15: Complexity of computation (including implicit computational complexity) [See also 68Q15, 68Q17]
☆ 03D20: Recursive functions and relations, subrecursive hierarchies
☆ 03D25: Recursively (computably) enumerable sets and degrees
☆ 03D28: Other Turing degree structures
☆ 03D30: Other degrees and reducibilities
☆ 03D32: Algorithmic randomness and dimension [See also 68Q30]
☆ 03D35: Undecidability and degrees of sets of sentences
☆ 03D40: Word problems, etc. [See also 06B25, 08A50, 20F10, 68R15]
☆ 03D45: Theory of numerations, effectively presented structures [See also 03C57; for intuitionistic and similar approaches see 03F55]
☆ 03D50: Recursive equivalence types of sets and structures, isols
☆ 03D55: Hierarchies
☆ 03D60: Computability and recursion theory on ordinals, admissible sets, etc.
☆ 03D65: Higher-type and set recursion theory
☆ 03D70: Inductive definability
☆ 03D75: Abstract and axiomatic computability and recursion theory
☆ 03D78: Computation over the reals {For constructive aspects, see 03F60}
☆ 03D80: Applications of computability and recursion theory
☆ 03D99: None of the above, but in this section
□ 03Exx: Set theory
☆ 03E02: Partition relations
☆ 03E04: Ordered sets and their cofinalities; pcf theory
☆ 03E05: Other combinatorial set theory
☆ 03E10: Ordinal and cardinal numbers
☆ 03E15: Descriptive set theory [See also 28A05, 54H05]
☆ 03E17: Cardinal characteristics of the continuum
☆ 03E20: Other classical set theory (including functions, relations, and set algebra)
☆ 03E25: Axiom of choice and related propositions
☆ 03E30: Axiomatics of classical set theory and its fragments
☆ 03E35: Consistency and independence results
☆ 03E40: Other aspects of forcing and Boolean-valued models
☆ 03E45: Inner models, including constructibility, ordinal definability, and core models
☆ 03E47: Other notions of set-theoretic definability
☆ 03E50: Continuum hypothesis and Martin's axiom [See also 03E57]
☆ 03E55: Large cardinals
☆ 03E57: Generic absoluteness and forcing axioms [See also 03E50]
☆ 03E60: Determinacy principles
☆ 03E65: Other hypotheses and axioms
☆ 03E70: Nonclassical and second-order set theories
☆ 03E72: Fuzzy set theory
☆ 03E75: Applications of set theory
☆ 03E99: None of the above, but in this section
□ 03Fxx: Proof theory and constructive mathematics
☆ 03F03: Proof theory, general
☆ 03F05: Cut-elimination and normal-form theorems
☆ 03F07: Structure of proofs
☆ 03F10: Functionals in proof theory
☆ 03F15: Recursive ordinals and ordinal notations
☆ 03F20: Complexity of proofs
☆ 03F25: Relative consistency and interpretations
☆ 03F30: First-order arithmetic and fragments
☆ 03F35: Second- and higher-order arithmetic and fragments [See also 03B30]
☆ 03F40: Gödel numberings and issues of incompleteness
☆ 03F45: Provability logics and related algebras (e.g., diagonalizable algebras) [See also 03B45, 03G25, 06E25]
☆ 03F50: Metamathematics of constructive systems
☆ 03F52: Linear logic and other substructural logics [See also 03B47]
☆ 03F55: Intuitionistic mathematics
☆ 03F60: Constructive and recursive analysis [See also 03B30, 03D45, 03D78, 26E40, 46S30, 47S30]
☆ 03F65: Other constructive mathematics [See also 03D45]
☆ 03F99: None of the above, but in this section
□ 03Gxx: Algebraic logic
☆ 03G05: Boolean algebras [See also 06Exx]
☆ 03G10: Lattices and related structures [See also 06Bxx]
☆ 03G12: Quantum logic [See also 06C15, 81P10]
☆ 03G15: Cylindric and polyadic algebras; relation algebras
☆ 03G20: Łukasiewicz and Post algebras [See also 06D25, 06D30]
☆ 03G25: Other algebras related to logic [See also 03F45, 06D20, 06E25, 06F35]
☆ 03G27: Abstract algebraic logic
☆ 03G30: Categorical logic, topoi [See also 18B25, 18C05, 18C10]
☆ 03G99: None of the above, but in this section
□ 03Hxx: Nonstandard models [See also 03C62]
☆ 03H05: Nonstandard models in mathematics [See also 26E35, 28E05, 30G06, 46S20, 47S20, 54J05]
☆ 03H10: Other applications of nonstandard models (economics, physics, etc.)
☆ 03H15: Nonstandard models of arithmetic [See also 11U10, 12L15, 13L05]
☆ 03H99: None of the above, but in this section
• 05-XX: Combinatorics {For finite fields, see 11Txx}
□ 05-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 05-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 05-02: Research exposition (monographs, survey articles)
□ 05-03: Historical (must also be assigned at least one classification number from Section 01)
□ 05-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 05-06: Proceedings, conferences, collections, etc.
□ 05Axx: Enumerative combinatorics {For enumeration in graph theory, see 05C30}
☆ 05A05: Permutations, words, matrices
☆ 05A10: Factorials, binomial coefficients, combinatorial functions [See also 11B65, 33Cxx]
☆ 05A15: Exact enumeration problems, generating functions [See also 33Cxx, 33Dxx]
☆ 05A16: Asymptotic enumeration
☆ 05A17: Partitions of integers [See also 11P81, 11P82, 11P83]
☆ 05A18: Partitions of sets
☆ 05A19: Combinatorial identities, bijective combinatorics
☆ 05A20: Combinatorial inequalities
☆ 05A30: $q$-calculus and related topics [See also 33Dxx]
☆ 05A40: Umbral calculus
☆ 05A99: None of the above, but in this section
□ 05Bxx: Designs and configurations {For applications of design theory, see 94C30}
☆ 05B05: Block designs [See also 51E05, 62K10]
☆ 05B07: Triple systems
☆ 05B10: Difference sets (number-theoretic, group-theoretic, etc.) [See also 11B13]
☆ 05B15: Orthogonal arrays, Latin squares, Room squares
☆ 05B20: Matrices (incidence, Hadamard, etc.)
☆ 05B25: Finite geometries [See also 51D20, 51Exx]
☆ 05B30: Other designs, configurations [See also 51E30]
☆ 05B35: Matroids, geometric lattices [See also 52B40, 90C27]
☆ 05B40: Packing and covering [See also 11H31, 52C15, 52C17]
☆ 05B45: Tessellation and tiling problems [See also 52C20, 52C22]
☆ 05B50: Polyominoes
☆ 05B99: None of the above, but in this section
□ 05Cxx: Graph theory {For applications of graphs, see 68R10, 81Q30, 81T15, 82B20, 82C20, 90C35, 92E10, 94C15}
☆ 05C05: Trees
☆ 05C07: Vertex degrees [See also 05E30]
☆ 05C10: Planar graphs; geometric and topological aspects of graph theory [See also 57M15, 57M25]
☆ 05C12: Distance in graphs
☆ 05C15: Coloring of graphs and hypergraphs
☆ 05C17: Perfect graphs
☆ 05C20: Directed graphs (digraphs), tournaments
☆ 05C21: Flows in graphs
☆ 05C22: Signed and weighted graphs
☆ 05C25: Graphs and abstract algebra (groups, rings, fields, etc.) [See also 20F65]
☆ 05C30: Enumeration in graph theory
☆ 05C31: Graph polynomials
☆ 05C35: Extremal problems [See also 90C35]
☆ 05C38: Paths and cycles [See also 90B10]
☆ 05C40: Connectivity
☆ 05C42: Density (toughness, etc.)
☆ 05C45: Eulerian and Hamiltonian graphs
☆ 05C50: Graphs and linear algebra (matrices, eigenvalues, etc.)
          ☆ 05C51: Graph designs and isomorphic decomposition [See also 05B30]
☆ 05C55: Generalized Ramsey theory [See also 05D10]
☆ 05C57: Games on graphs [See also 91A43, 91A46]
☆ 05C60: Isomorphism problems (reconstruction conjecture, etc.) and homomorphisms (subgraph embedding, etc.)
☆ 05C62: Graph representations (geometric and intersection representations, etc.) {For graph drawing, see also 68R10}
☆ 05C63: Infinite graphs
☆ 05C65: Hypergraphs
☆ 05C69: Dominating sets, independent sets, cliques
☆ 05C70: Factorization, matching, partitioning, covering and packing
☆ 05C72: Fractional graph theory, fuzzy graph theory
☆ 05C75: Structural characterization of families of graphs
☆ 05C76: Graph operations (line graphs, products, etc.)
☆ 05C78: Graph labelling (graceful graphs, bandwidth, etc.)
☆ 05C80: Random graphs [See also 60B20]
☆ 05C81: Random walks on graphs
☆ 05C82: Small world graphs, complex networks [See also 90Bxx, 91D30]
☆ 05C83: Graph minors
☆ 05C85: Graph algorithms [See also 68R10, 68W05]
☆ 05C90: Applications [See also 68R10, 81Q30, 81T15, 82B20, 82C20, 90C35, 92E10, 94C15]
☆ 05C99: None of the above, but in this section
□ 05Dxx: Extremal combinatorics
☆ 05D05: Extremal set theory
☆ 05D10: Ramsey theory [See also 05C55]
☆ 05D15: Transversal (matching) theory
☆ 05D40: Probabilistic methods
☆ 05D99: None of the above, but in this section
□ 05Exx: Algebraic combinatorics
☆ 05E05: Symmetric functions and generalizations
          ☆ 05E10: Combinatorial aspects of representation theory [See also 20C30]
☆ 05E15: Combinatorial aspects of groups and algebras [See also 14Nxx, 22E45, 33C80]
☆ 05E18: Group actions on combinatorial structures
☆ 05E30: Association schemes, strongly regular graphs
☆ 05E40: Combinatorial aspects of commutative algebra
☆ 05E45: Combinatorial aspects of simplicial complexes
☆ 05E99: None of the above, but in this section
• 06-XX: Order, lattices, ordered algebraic structures [See also 18B35]
□ 06-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 06-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 06-02: Research exposition (monographs, survey articles)
□ 06-03: Historical (must also be assigned at least one classification number from Section 01)
□ 06-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 06-06: Proceedings, conferences, collections, etc.
□ 06Axx: Ordered sets
☆ 06A05: Total order
☆ 06A06: Partial order, general
☆ 06A07: Combinatorics of partially ordered sets
☆ 06A11: Algebraic aspects of posets
☆ 06A12: Semilattices [See also 20M10; for topological semilattices see 22A26]
☆ 06A15: Galois correspondences, closure operators
☆ 06A75: Generalizations of ordered sets
☆ 06A99: None of the above, but in this section
□ 06Bxx: Lattices [See also 03G10]
☆ 06B05: Structure theory
☆ 06B10: Ideals, congruence relations
☆ 06B15: Representation theory
☆ 06B20: Varieties of lattices
☆ 06B23: Complete lattices, completions
☆ 06B25: Free lattices, projective lattices, word problems [See also 03D40, 08A50, 20F10]
☆ 06B30: Topological lattices, order topologies [See also 06F30, 22A26, 54F05, 54H12]
☆ 06B35: Continuous lattices and posets, applications [See also 06B30, 06D10, 06F30, 18B35, 22A26, 68Q55]
☆ 06B75: Generalizations of lattices
☆ 06B99: None of the above, but in this section
□ 06Cxx: Modular lattices, complemented lattices
☆ 06C05: Modular lattices, Desarguesian lattices
☆ 06C10: Semimodular lattices, geometric lattices
☆ 06C15: Complemented lattices, orthocomplemented lattices and posets [See also 03G12, 81P10]
☆ 06C20: Complemented modular lattices, continuous geometries
☆ 06C99: None of the above, but in this section
□ 06Dxx: Distributive lattices
☆ 06D05: Structure and representation theory
☆ 06D10: Complete distributivity
☆ 06D15: Pseudocomplemented lattices
☆ 06D20: Heyting algebras [See also 03G25]
☆ 06D22: Frames, locales {For topological questions see 54-XX}
☆ 06D25: Post algebras [See also 03G20]
          ☆ 06D30: De Morgan algebras, Łukasiewicz algebras [See also 03G20]
☆ 06D35: MV-algebras
☆ 06D50: Lattices and duality
☆ 06D72: Fuzzy lattices (soft algebras) and related topics
☆ 06D75: Other generalizations of distributive lattices
☆ 06D99: None of the above, but in this section
□ 06Exx: Boolean algebras (Boolean rings) [See also 03G05]
☆ 06E05: Structure theory
☆ 06E10: Chain conditions, complete algebras
☆ 06E15: Stone spaces (Boolean spaces) and related structures
☆ 06E20: Ring-theoretic properties [See also 16E50, 16G30]
☆ 06E25: Boolean algebras with additional operations (diagonalizable algebras, etc.) [See also 03G25, 03F45]
☆ 06E30: Boolean functions [See also 94C10]
☆ 06E75: Generalizations of Boolean algebras
☆ 06E99: None of the above, but in this section
□ 06Fxx: Ordered structures
☆ 06F05: Ordered semigroups and monoids [See also 20Mxx]
☆ 06F07: Quantales
☆ 06F10: Noether lattices
☆ 06F15: Ordered groups [See also 20F60]
☆ 06F20: Ordered abelian groups, Riesz groups, ordered linear spaces [See also 46A40]
☆ 06F25: Ordered rings, algebras, modules {For ordered fields, see 12J15; see also 13J25, 16W80}
☆ 06F30: Topological lattices, order topologies [See also 06B30, 22A26, 54F05, 54H12]
☆ 06F35: BCK-algebras, BCI-algebras [See also 03G25]
☆ 06F99: None of the above, but in this section
• 08-XX: General algebraic systems
□ 08-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 08-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 08-02: Research exposition (monographs, survey articles)
□ 08-03: Historical (must also be assigned at least one classification number from Section 01)
□ 08-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 08-06: Proceedings, conferences, collections, etc.
□ 08Axx: Algebraic structures [See also 03C05]
☆ 08A02: Relational systems, laws of composition
☆ 08A05: Structure theory
☆ 08A30: Subalgebras, congruence relations
☆ 08A35: Automorphisms, endomorphisms
☆ 08A40: Operations, polynomials, primal algebras
☆ 08A45: Equational compactness
☆ 08A50: Word problems [See also 03D40, 06B25, 20F10, 68R15]
☆ 08A55: Partial algebras
☆ 08A60: Unary algebras
☆ 08A62: Finitary algebras
☆ 08A65: Infinitary algebras
☆ 08A68: Heterogeneous algebras
☆ 08A70: Applications of universal algebra in computer science
☆ 08A72: Fuzzy algebraic structures
☆ 08A99: None of the above, but in this section
□ 08Bxx: Varieties [See also 03C05]
☆ 08B05: Equational logic, Mal′cev (Mal′tsev) conditions
☆ 08B10: Congruence modularity, congruence distributivity
☆ 08B15: Lattices of varieties
☆ 08B20: Free algebras
☆ 08B25: Products, amalgamated products, and other kinds of limits and colimits [See also 18A30]
☆ 08B26: Subdirect products and subdirect irreducibility
☆ 08B30: Injectives, projectives
☆ 08B99: None of the above, but in this section
□ 08Cxx: Other classes of algebras
☆ 08C05: Categories of algebras [See also 18C05]
☆ 08C10: Axiomatic model classes [See also 03Cxx, in particular 03C60]
☆ 08C15: Quasivarieties
☆ 08C20: Natural dualities for classes of algebras [See also 06E15, 18A40, 22A30]
☆ 08C99: None of the above, but in this section
• 11-XX: Number theory
□ 11-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 11-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 11-02: Research exposition (monographs, survey articles)
□ 11-03: Historical (must also be assigned at least one classification number from Section 01)
□ 11-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 11-06: Proceedings, conferences, collections, etc.
□ 11Axx: Elementary number theory {For analogues in number fields, see 11R04}
☆ 11A05: Multiplicative structure; Euclidean algorithm; greatest common divisors
☆ 11A07: Congruences; primitive roots; residue systems
☆ 11A15: Power residues, reciprocity
☆ 11A25: Arithmetic functions; related numbers; inversion formulas
☆ 11A41: Primes
☆ 11A51: Factorization; primality
☆ 11A55: Continued fractions {For approximation results, see 11J70} [See also 11K50, 30B70, 40A15]
☆ 11A63: Radix representation; digital problems {For metric results, see 11K16}
☆ 11A67: Other representations
☆ 11A99: None of the above, but in this section
□ 11Bxx: Sequences and sets
☆ 11B05: Density, gaps, topology
☆ 11B13: Additive bases, including sumsets [See also 05B10]
☆ 11B25: Arithmetic progressions [See also 11N13]
☆ 11B30: Arithmetic combinatorics; higher degree uniformity
☆ 11B34: Representation functions
☆ 11B37: Recurrences {For applications to special functions, see 33-XX}
☆ 11B39: Fibonacci and Lucas numbers and polynomials and generalizations
☆ 11B50: Sequences (mod $m$)
☆ 11B57: Farey sequences; the sequences ${1^k, 2^k, \cdots}$
☆ 11B65: Binomial coefficients; factorials; $q$-identities [See also 05A10, 05A30]
☆ 11B68: Bernoulli and Euler numbers and polynomials
☆ 11B73: Bell and Stirling numbers
☆ 11B75: Other combinatorial number theory
☆ 11B83: Special sequences and polynomials
☆ 11B85: Automata sequences
☆ 11B99: None of the above, but in this section
□ 11Cxx: Polynomials and matrices
☆ 11C08: Polynomials [See also 13F20]
☆ 11C20: Matrices, determinants [See also 15B36]
☆ 11C99: None of the above, but in this section
□ 11Dxx: Diophantine equations [See also 11Gxx, 14Gxx]
☆ 11D04: Linear equations
☆ 11D07: The Frobenius problem
☆ 11D09: Quadratic and bilinear equations
☆ 11D25: Cubic and quartic equations
☆ 11D41: Higher degree equations; Fermat's equation
☆ 11D45: Counting solutions of Diophantine equations
☆ 11D57: Multiplicative and norm form equations
☆ 11D59: Thue-Mahler equations
☆ 11D61: Exponential equations
☆ 11D68: Rational numbers as sums of fractions
☆ 11D72: Equations in many variables [See also 11P55]
☆ 11D75: Diophantine inequalities [See also 11J25]
☆ 11D79: Congruences in many variables
☆ 11D85: Representation problems [See also 11P55]
☆ 11D88: $p$-adic and power series fields
☆ 11D99: None of the above, but in this section
□ 11Exx: Forms and linear algebraic groups [See also 19Gxx] {For quadratic forms in linear algebra, see 15A63}
☆ 11E04: Quadratic forms over general fields
☆ 11E08: Quadratic forms over local rings and fields
☆ 11E10: Forms over real fields
☆ 11E12: Quadratic forms over global rings and fields
☆ 11E16: General binary quadratic forms
☆ 11E20: General ternary and quaternary quadratic forms; forms of more than two variables
☆ 11E25: Sums of squares and representations by other particular quadratic forms
☆ 11E39: Bilinear and Hermitian forms
☆ 11E41: Class numbers of quadratic and Hermitian forms
☆ 11E45: Analytic theory (Epstein zeta functions; relations with automorphic forms and functions)
☆ 11E57: Classical groups [See also 14Lxx, 20Gxx]
☆ 11E70: $K$-theory of quadratic and Hermitian forms
☆ 11E72: Galois cohomology of linear algebraic groups [See also 20G10]
☆ 11E76: Forms of degree higher than two
☆ 11E81: Algebraic theory of quadratic forms; Witt groups and rings [See also 19G12, 19G24]
☆ 11E88: Quadratic spaces; Clifford algebras [See also 15A63, 15A66]
☆ 11E95: $p$-adic theory
☆ 11E99: None of the above, but in this section
□ 11Fxx: Discontinuous groups and automorphic forms [See also 11R39, 11S37, 14Gxx, 14Kxx, 22E50, 22E55, 30F35, 32Nxx] {For relations with quadratic forms, see 11E45}
☆ 11F03: Modular and automorphic functions
☆ 11F06: Structure of modular groups and generalizations; arithmetic groups [See also 20H05, 20H10, 22E40]
☆ 11F11: Holomorphic modular forms of integral weight
☆ 11F12: Automorphic forms, one variable
☆ 11F20: Dedekind eta function, Dedekind sums
☆ 11F22: Relationship to Lie algebras and finite simple groups
☆ 11F23: Relations with algebraic geometry and topology
☆ 11F25: Hecke-Petersson operators, differential operators (one variable)
☆ 11F27: Theta series; Weil representation; theta correspondences
☆ 11F30: Fourier coefficients of automorphic forms
☆ 11F32: Modular correspondences, etc.
☆ 11F33: Congruences for modular and $p$-adic modular forms [See also 14G20, 22E50]
☆ 11F37: Forms of half-integer weight; nonholomorphic modular forms
☆ 11F41: Automorphic forms on ${\rm GL}(2)$; Hilbert and Hilbert-Siegel modular groups and their modular and automorphic forms; Hilbert modular surfaces [See also 14J20]
☆ 11F46: Siegel modular groups; Siegel and Hilbert-Siegel modular and automorphic forms
☆ 11F50: Jacobi forms
☆ 11F52: Modular forms associated to Drinfel'd modules
☆ 11F55: Other groups and their modular and automorphic forms (several variables)
☆ 11F60: Hecke-Petersson operators, differential operators (several variables)
☆ 11F66: Langlands $L$-functions; one variable Dirichlet series and functional equations
☆ 11F67: Special values of automorphic $L$-series, periods of modular forms, cohomology, modular symbols
☆ 11F68: Dirichlet series in several complex variables associated to automorphic forms; Weyl group multiple Dirichlet series
☆ 11F70: Representation-theoretic methods; automorphic representations over local and global fields
☆ 11F72: Spectral theory; Selberg trace formula
☆ 11F75: Cohomology of arithmetic groups
☆ 11F80: Galois representations
☆ 11F85: $p$-adic theory, local fields [See also 14G20, 22E50]
☆ 11F99: None of the above, but in this section
□ 11Gxx: Arithmetic algebraic geometry (Diophantine geometry) [See also 11Dxx, 14Gxx, 14Kxx]
☆ 11G05: Elliptic curves over global fields [See also 14H52]
☆ 11G07: Elliptic curves over local fields [See also 14G20, 14H52]
☆ 11G09: Drinfel'd modules; higher-dimensional motives, etc. [See also 14L05]
☆ 11G10: Abelian varieties of dimension $> 1$ [See also 14Kxx]
☆ 11G15: Complex multiplication and moduli of abelian varieties [See also 14K22]
☆ 11G16: Elliptic and modular units [See also 11R27]
☆ 11G18: Arithmetic aspects of modular and Shimura varieties [See also 14G35]
☆ 11G20: Curves over finite and local fields [See also 14H25]
☆ 11G25: Varieties over finite and local fields [See also 14G15, 14G20]
☆ 11G30: Curves of arbitrary genus or genus $\ne 1$ over global fields [See also 14H25]
☆ 11G32: Dessins d'enfants, Belyĭ theory
☆ 11G35: Varieties over global fields [See also 14G25]
☆ 11G40: $L$-functions of varieties over global fields; Birch-Swinnerton-Dyer conjecture [See also 14G10]
☆ 11G42: Arithmetic mirror symmetry [See also 14J33]
☆ 11G45: Geometric class field theory [See also 11R37, 14C35, 19F05]
☆ 11G50: Heights [See also 14G40, 37P30]
☆ 11G55: Polylogarithms and relations with $K$-theory
☆ 11G99: None of the above, but in this section
□ 11Hxx: Geometry of numbers {For applications in coding theory, see 94B75}
☆ 11H06: Lattices and convex bodies [See also 11P21, 52C05, 52C07]
☆ 11H16: Nonconvex bodies
☆ 11H31: Lattice packing and covering [See also 05B40, 52C15, 52C17]
☆ 11H46: Products of linear forms
☆ 11H50: Minima of forms
☆ 11H55: Quadratic forms (reduction theory, extreme forms, etc.)
☆ 11H56: Automorphism groups of lattices
☆ 11H60: Mean value and transfer theorems
☆ 11H71: Relations with coding theory
☆ 11H99: None of the above, but in this section
□ 11Jxx: Diophantine approximation, transcendental number theory [See also 11K60]
☆ 11J04: Homogeneous approximation to one number
☆ 11J06: Markov and Lagrange spectra and generalizations
☆ 11J13: Simultaneous homogeneous approximation, linear forms
☆ 11J17: Approximation by numbers from a fixed field
☆ 11J20: Inhomogeneous linear forms
☆ 11J25: Diophantine inequalities [See also 11D75]
☆ 11J54: Small fractional parts of polynomials and generalizations
☆ 11J61: Approximation in non-Archimedean valuations
☆ 11J68: Approximation to algebraic numbers
☆ 11J70: Continued fractions and generalizations [See also 11A55, 11K50]
☆ 11J71: Distribution modulo one [See also 11K06]
☆ 11J72: Irrationality; linear independence over a field
☆ 11J81: Transcendence (general theory)
☆ 11J82: Measures of irrationality and of transcendence
☆ 11J83: Metric theory
☆ 11J85: Algebraic independence; Gel'fond's method
☆ 11J86: Linear forms in logarithms; Baker's method
☆ 11J87: Schmidt Subspace Theorem and applications
☆ 11J89: Transcendence theory of elliptic and abelian functions
☆ 11J91: Transcendence theory of other special functions
☆ 11J93: Transcendence theory of Drinfel'd and $t$-modules
☆ 11J95: Results involving abelian varieties
☆ 11J97: Analogues of methods in Nevanlinna theory (work of Vojta et al.)
☆ 11J99: None of the above, but in this section
□ 11Kxx: Probabilistic theory: distribution modulo $1$; metric theory of algorithms
☆ 11K06: General theory of distribution modulo $1$ [See also 11J71]
☆ 11K16: Normal numbers, radix expansions, Pisot numbers, Salem numbers, good lattice points, etc. [See also 11A63]
☆ 11K31: Special sequences
☆ 11K36: Well-distributed sequences and other variations
☆ 11K38: Irregularities of distribution, discrepancy [See also 11Nxx]
☆ 11K41: Continuous, $p$-adic and abstract analogues
☆ 11K45: Pseudo-random numbers; Monte Carlo methods
☆ 11K50: Metric theory of continued fractions [See also 11A55, 11J70]
☆ 11K55: Metric theory of other algorithms and expansions; measure and Hausdorff dimension [See also 11N99, 28Dxx]
☆ 11K60: Diophantine approximation [See also 11Jxx]
☆ 11K65: Arithmetic functions [See also 11Nxx]
☆ 11K70: Harmonic analysis and almost periodicity
☆ 11K99: None of the above, but in this section
□ 11Lxx: Exponential sums and character sums {For finite fields, see 11Txx}
☆ 11L03: Trigonometric and exponential sums, general
☆ 11L05: Gauss and Kloosterman sums; generalizations
☆ 11L07: Estimates on exponential sums
☆ 11L10: Jacobsthal and Brewer sums; other complete character sums
☆ 11L15: Weyl sums
☆ 11L20: Sums over primes
☆ 11L26: Sums over arbitrary intervals
☆ 11L40: Estimates on character sums
☆ 11L99: None of the above, but in this section
□ 11Mxx: Zeta and $L$-functions: analytic theory
☆ 11M06: $\zeta (s)$ and $L(s, \chi)$
☆ 11M20: Real zeros of $L(s, \chi)$; results on $L(1, \chi)$
☆ 11M26: Nonreal zeros of $\zeta (s)$ and $L(s, \chi)$; Riemann and other hypotheses
☆ 11M32: Multiple Dirichlet series and zeta functions and multizeta values
☆ 11M35: Hurwitz and Lerch zeta functions
☆ 11M36: Selberg zeta functions and regularized determinants; applications to spectral theory, Dirichlet series, Eisenstein series, etc. Explicit formulas
☆ 11M38: Zeta and $L$-functions in characteristic $p$
☆ 11M41: Other Dirichlet series and zeta functions {For local and global ground fields, see 11R42, 11R52, 11S40, 11S45; for algebro-geometric methods, see 14G10; see also 11E45, 11F66,
11F70, 11F72}
☆ 11M45: Tauberian theorems [See also 40E05]
☆ 11M50: Relations with random matrices
☆ 11M55: Relations with noncommutative geometry
☆ 11M99: None of the above, but in this section
□ 11Nxx: Multiplicative number theory
☆ 11N05: Distribution of primes
☆ 11N13: Primes in progressions [See also 11B25]
☆ 11N25: Distribution of integers with specified multiplicative constraints
☆ 11N30: Turán theory [See also 30Bxx]
☆ 11N32: Primes represented by polynomials; other multiplicative structure of polynomial values
☆ 11N35: Sieves
☆ 11N36: Applications of sieve methods
☆ 11N37: Asymptotic results on arithmetic functions
☆ 11N45: Asymptotic results on counting functions for algebraic and topological structures
☆ 11N56: Rate of growth of arithmetic functions
☆ 11N60: Distribution functions associated with additive and positive multiplicative functions
☆ 11N64: Other results on the distribution of values or the characterization of arithmetic functions
☆ 11N69: Distribution of integers in special residue classes
☆ 11N75: Applications of automorphic functions and forms to multiplicative problems [See also 11Fxx]
☆ 11N80: Generalized primes and integers
☆ 11N99: None of the above, but in this section
□ 11Pxx: Additive number theory; partitions
☆ 11P05: Waring's problem and variants
☆ 11P21: Lattice points in specified regions
☆ 11P32: Goldbach-type theorems; other additive questions involving primes
☆ 11P55: Applications of the Hardy-Littlewood method [See also 11D85]
☆ 11P70: Inverse problems of additive number theory, including sumsets
☆ 11P81: Elementary theory of partitions [See also 05A17]
☆ 11P82: Analytic theory of partitions
☆ 11P83: Partitions; congruences and congruential restrictions
☆ 11P84: Partition identities; identities of Rogers-Ramanujan type
☆ 11P99: None of the above, but in this section
□ 11Rxx: Algebraic number theory: global fields {For complex multiplication, see 11G15}
☆ 11R04: Algebraic numbers; rings of algebraic integers
☆ 11R06: PV-numbers and generalizations; other special algebraic numbers; Mahler measure
☆ 11R09: Polynomials (irreducibility, etc.)
☆ 11R11: Quadratic extensions
☆ 11R16: Cubic and quartic extensions
☆ 11R18: Cyclotomic extensions
☆ 11R20: Other abelian and metabelian extensions
☆ 11R21: Other number fields
☆ 11R23: Iwasawa theory
☆ 11R27: Units and factorization
☆ 11R29: Class numbers, class groups, discriminants
☆ 11R32: Galois theory
☆ 11R33: Integral representations related to algebraic numbers; Galois module structure of rings of integers [See also 20C10]
☆ 11R34: Galois cohomology [See also 12Gxx, 19A31]
☆ 11R37: Class field theory
☆ 11R39: Langlands-Weil conjectures, nonabelian class field theory [See also 11Fxx, 22E55]
☆ 11R42: Zeta functions and $L$-functions of number fields [See also 11M41, 19F27]
☆ 11R44: Distribution of prime ideals [See also 11N05]
☆ 11R45: Density theorems
☆ 11R47: Other analytic theory [See also 11Nxx]
☆ 11R52: Quaternion and other division algebras: arithmetic, zeta functions
☆ 11R54: Other algebras and orders, and their zeta and $L$-functions [See also 11S45, 16Hxx, 16Kxx]
☆ 11R56: Adèle rings and groups
☆ 11R58: Arithmetic theory of algebraic function fields [See also 14-XX]
☆ 11R60: Cyclotomic function fields (class groups, Bernoulli objects, etc.)
☆ 11R65: Class groups and Picard groups of orders
☆ 11R70: $K$-theory of global fields [See also 19Fxx]
☆ 11R80: Totally real fields [See also 12J15]
☆ 11R99: None of the above, but in this section
□ 11Sxx: Algebraic number theory: local and $p$-adic fields
☆ 11S05: Polynomials
☆ 11S15: Ramification and extension theory
☆ 11S20: Galois theory
☆ 11S23: Integral representations
☆ 11S25: Galois cohomology [See also 12Gxx, 16H05]
☆ 11S31: Class field theory; $p$-adic formal groups [See also 14L05]
☆ 11S37: Langlands-Weil conjectures, nonabelian class field theory [See also 11Fxx, 22E50]
☆ 11S40: Zeta functions and $L$-functions [See also 11M41, 19F27]
☆ 11S45: Algebras and orders, and their zeta functions [See also 11R52, 11R54, 16Hxx, 16Kxx]
☆ 11S70: $K$-theory of local fields [See also 19Fxx]
☆ 11S80: Other analytic theory (analogues of beta and gamma functions, $p$-adic integration, etc.)
☆ 11S82: Non-Archimedean dynamical systems [See mainly 37Pxx]
☆ 11S85: Other nonanalytic theory
☆ 11S90: Prehomogeneous vector spaces
☆ 11S99: None of the above, but in this section
□ 11Txx: Finite fields and commutative rings (number-theoretic aspects)
☆ 11T06: Polynomials
☆ 11T22: Cyclotomy
☆ 11T23: Exponential sums
☆ 11T24: Other character sums and Gauss sums
☆ 11T30: Structure theory
☆ 11T55: Arithmetic theory of polynomial rings over finite fields
☆ 11T60: Finite upper half-planes
☆ 11T71: Algebraic coding theory; cryptography
☆ 11T99: None of the above, but in this section
□ 11Uxx: Connections with logic
☆ 11U05: Decidability [See also 03B25]
☆ 11U07: Ultraproducts [See also 03C20]
☆ 11U09: Model theory [See also 03Cxx]
☆ 11U10: Nonstandard arithmetic [See also 03H15]
☆ 11U99: None of the above, but in this section
□ 11Yxx: Computational number theory [See also 11-04]
☆ 11Y05: Factorization
☆ 11Y11: Primality
☆ 11Y16: Algorithms; complexity [See also 68Q25]
☆ 11Y35: Analytic computations
☆ 11Y40: Algebraic number theory computations
☆ 11Y50: Computer solution of Diophantine equations
☆ 11Y55: Calculation of integer sequences
☆ 11Y60: Evaluation of constants
☆ 11Y65: Continued fraction calculations
☆ 11Y70: Values of arithmetic functions; tables
☆ 11Y99: None of the above, but in this section
□ 11Zxx: Miscellaneous applications of number theory
☆ 11Z05: Miscellaneous applications of number theory
☆ 11Z99: None of the above, but in this section
• 12-XX: Field theory and polynomials
□ 12-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 12-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 12-02: Research exposition (monographs, survey articles)
□ 12-03: Historical (must also be assigned at least one classification number from Section 01)
□ 12-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 12-06: Proceedings, conferences, collections, etc.
□ 12Dxx: Real and complex fields
☆ 12D05: Polynomials: factorization
☆ 12D10: Polynomials: location of zeros (algebraic theorems) {For the analytic theory, see 26C10, 30C15}
☆ 12D15: Fields related with sums of squares (formally real fields, Pythagorean fields, etc.) [See also 11Exx]
☆ 12D99: None of the above, but in this section
□ 12Exx: General field theory
☆ 12E05: Polynomials (irreducibility, etc.)
☆ 12E10: Special polynomials
☆ 12E12: Equations
☆ 12E15: Skew fields, division rings [See also 11R52, 11R54, 11S45, 16Kxx]
☆ 12E20: Finite fields (field-theoretic aspects)
☆ 12E25: Hilbertian fields; Hilbert's irreducibility theorem
☆ 12E30: Field arithmetic
☆ 12E99: None of the above, but in this section
□ 12Fxx: Field extensions
☆ 12F05: Algebraic extensions
☆ 12F10: Separable extensions, Galois theory
☆ 12F12: Inverse Galois theory
☆ 12F15: Inseparable extensions
☆ 12F20: Transcendental extensions
☆ 12F99: None of the above, but in this section
□ 12Gxx: Homological methods (field theory)
☆ 12G05: Galois cohomology [See also 14F22, 16Hxx, 16K50]
☆ 12G10: Cohomological dimension
☆ 12G99: None of the above, but in this section
□ 12Hxx: Differential and difference algebra
☆ 12H05: Differential algebra [See also 13Nxx]
☆ 12H10: Difference algebra [See also 39Axx]
☆ 12H20: Abstract differential equations [See also 34Mxx]
☆ 12H25: $p$-adic differential equations [See also 11S80, 14G20]
☆ 12H99: None of the above, but in this section
□ 12Jxx: Topological fields
☆ 12J05: Normed fields
☆ 12J10: Valued fields
☆ 12J12: Formally $p$-adic fields
☆ 12J15: Ordered fields
☆ 12J17: Topological semifields
☆ 12J20: General valuation theory [See also 13A18]
☆ 12J25: Non-Archimedean valued fields [See also 30G06, 32P05, 46S10, 47S10]
☆ 12J27: Krasner-Tate algebras [See mainly 32P05; see also 46S10, 47S10]
☆ 12J99: None of the above, but in this section
□ 12Kxx: Generalizations of fields
☆ 12K05: Near-fields [See also 16Y30]
☆ 12K10: Semifields [See also 16Y60]
☆ 12K99: None of the above, but in this section
□ 12Lxx: Connections with logic
☆ 12L05: Decidability [See also 03B25]
☆ 12L10: Ultraproducts [See also 03C20]
☆ 12L12: Model theory [See also 03C60]
☆ 12L15: Nonstandard arithmetic [See also 03H15]
☆ 12L99: None of the above, but in this section
□ 12Yxx: Computational aspects of field theory and polynomials
☆ 12Y05: Computational aspects of field theory and polynomials
☆ 12Y99: None of the above, but in this section
• 13-XX: Commutative algebra
□ 13-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 13-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 13-02: Research exposition (monographs, survey articles)
□ 13-03: Historical (must also be assigned at least one classification number from Section 01)
□ 13-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 13-06: Proceedings, conferences, collections, etc.
□ 13Axx: General commutative ring theory
☆ 13A02: Graded rings [See also 16W50]
☆ 13A05: Divisibility; factorizations [See also 13F15]
☆ 13A15: Ideals; multiplicative ideal theory
☆ 13A18: Valuations and their generalizations [See also 12J20]
☆ 13A30: Associated graded rings of ideals (Rees ring, form ring), analytic spread and related topics
☆ 13A35: Characteristic $p$ methods (Frobenius endomorphism) and reduction to characteristic $p$; tight closure [See also 13B22]
☆ 13A50: Actions of groups on commutative rings; invariant theory [See also 14L24]
☆ 13A99: None of the above, but in this section
□ 13Bxx: Ring extensions and related topics
☆ 13B02: Extension theory
☆ 13B05: Galois theory
☆ 13B10: Morphisms
☆ 13B21: Integral dependence; going up, going down
☆ 13B22: Integral closure of rings and ideals [See also 13A35]; integrally closed rings, related rings (Japanese, etc.)
☆ 13B25: Polynomials over commutative rings [See also 11C08, 11T06, 13F20, 13M10]
☆ 13B30: Rings of fractions and localization [See also 16S85]
☆ 13B35: Completion [See also 13J10]
☆ 13B40: Étale and flat extensions; Henselization; Artin approximation [See also 13J15, 14B12, 14B25]
☆ 13B99: None of the above, but in this section
□ 13Cxx: Theory of modules and ideals
☆ 13C05: Structure, classification theorems
☆ 13C10: Projective and free modules and ideals [See also 19A13]
☆ 13C11: Injective and flat modules and ideals
☆ 13C12: Torsion modules and ideals
☆ 13C13: Other special types
☆ 13C14: Cohen-Macaulay modules [See also 13H10]
☆ 13C15: Dimension theory, depth, related rings (catenary, etc.)
☆ 13C20: Class groups [See also 11R29]
☆ 13C40: Linkage, complete intersections and determinantal ideals [See also 14M06, 14M10, 14M12]
☆ 13C60: Module categories
☆ 13C99: None of the above, but in this section
□ 13Dxx: Homological methods {For noncommutative rings, see 16Exx; for general categories, see 18Gxx}
☆ 13D02: Syzygies, resolutions, complexes
☆ 13D03: (Co)homology of commutative rings and algebras (e.g., Hochschild, André-Quillen, cyclic, dihedral, etc.)
☆ 13D05: Homological dimension
☆ 13D07: Homological functors on modules (Tor, Ext, etc.)
☆ 13D09: Derived categories
☆ 13D10: Deformations and infinitesimal methods [See also 14B10, 14B12, 14D15, 32Gxx]
☆ 13D15: Grothendieck groups, $K$-theory [See also 14C35, 18F30, 19Axx, 19D50]
☆ 13D22: Homological conjectures (intersection theorems)
☆ 13D30: Torsion theory [See also 13C12, 18E40]
☆ 13D40: Hilbert-Samuel and Hilbert-Kunz functions; Poincaré series
☆ 13D45: Local cohomology [See also 14B15]
☆ 13D99: None of the above, but in this section
□ 13Exx: Chain conditions, finiteness conditions
☆ 13E05: Noetherian rings and modules
☆ 13E10: Artinian rings and modules, finite-dimensional algebras
☆ 13E15: Rings and modules of finite generation or presentation; number of generators
☆ 13E99: None of the above, but in this section
□ 13Fxx: Arithmetic rings and other special rings
☆ 13F05: Dedekind, Prüfer, Krull and Mori rings and their generalizations
☆ 13F07: Euclidean rings and generalizations
☆ 13F10: Principal ideal rings
☆ 13F15: Rings defined by factorization properties (e.g., atomic, factorial, half-factorial) [See also 13A05, 14M05]
☆ 13F20: Polynomial rings and ideals; rings of integer-valued polynomials [See also 11C08, 13B25]
☆ 13F25: Formal power series rings [See also 13J05]
☆ 13F30: Valuation rings [See also 13A18]
☆ 13F35: Witt vectors and related rings
☆ 13F40: Excellent rings
☆ 13F45: Seminormal rings
☆ 13F50: Rings with straightening laws, Hodge algebras
☆ 13F55: Stanley-Reisner face rings; simplicial complexes [See also 55U10]
☆ 13F60: Cluster algebras
☆ 13F99: None of the above, but in this section
□ 13Gxx: Integral domains
☆ 13G05: Integral domains
☆ 13G99: None of the above, but in this section
□ 13Hxx: Local rings and semilocal rings
☆ 13H05: Regular local rings
☆ 13H10: Special types (Cohen-Macaulay, Gorenstein, Buchsbaum, etc.) [See also 14M05]
☆ 13H15: Multiplicity theory and related topics [See also 14C17]
☆ 13H99: None of the above, but in this section
□ 13Jxx: Topological rings and modules [See also 16W60, 16W80]
☆ 13J05: Power series rings [See also 13F25]
☆ 13J07: Analytical algebras and rings [See also 32B05]
☆ 13J10: Complete rings, completion [See also 13B35]
☆ 13J15: Henselian rings [See also 13B40]
☆ 13J20: Global topological rings
☆ 13J25: Ordered rings [See also 06F25]
☆ 13J30: Real algebra [See also 12D15, 14Pxx]
☆ 13J99: None of the above, but in this section
□ 13Lxx: Applications of logic to commutative algebra [See also 03Cxx, 03Hxx]
☆ 13L05: Applications of logic to commutative algebra [See also 03Cxx, 03Hxx]
☆ 13L99: None of the above, but in this section
□ 13Mxx: Finite commutative rings {For number-theoretic aspects, see 11Txx}
☆ 13M05: Structure
☆ 13M10: Polynomials
☆ 13M99: None of the above, but in this section
□ 13Nxx: Differential algebra [See also 12H05, 14F10]
☆ 13N05: Modules of differentials
☆ 13N10: Rings of differential operators and their modules [See also 16S32, 32C38]
☆ 13N15: Derivations
☆ 13N99: None of the above, but in this section
□ 13Pxx: Computational aspects and applications [See also 14Qxx, 68W30]
☆ 13P05: Polynomials, factorization [See also 12Y05]
☆ 13P10: Gröbner bases; other bases for ideals and modules (e.g., Janet and border bases)
☆ 13P15: Solving polynomial systems; resultants
☆ 13P20: Computational homological algebra [See also 13Dxx]
☆ 13P25: Applications of commutative algebra (e.g., to statistics, control theory, optimization, etc.)
☆ 13P99: None of the above, but in this section
• 14-XX: Algebraic geometry
□ 14-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 14-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 14-02: Research exposition (monographs, survey articles)
□ 14-03: Historical (must also be assigned at least one classification number from Section 01)
□ 14-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 14-06: Proceedings, conferences, collections, etc.
□ 14Axx: Foundations
☆ 14A05: Relevant commutative algebra [See also 13-XX]
☆ 14A10: Varieties and morphisms
☆ 14A15: Schemes and morphisms
☆ 14A20: Generalizations (algebraic spaces, stacks)
☆ 14A22: Noncommutative algebraic geometry [See also 16S38]
☆ 14A25: Elementary questions
☆ 14A99: None of the above, but in this section
□ 14Bxx: Local theory
☆ 14B05: Singularities [See also 14E15, 14H20, 14J17, 32Sxx, 58Kxx]
☆ 14B07: Deformations of singularities [See also 14D15, 32S30]
☆ 14B10: Infinitesimal methods [See also 13D10]
☆ 14B12: Local deformation theory, Artin approximation, etc. [See also 13B40, 13D10]
☆ 14B15: Local cohomology [See also 13D45, 32C36]
☆ 14B20: Formal neighborhoods
☆ 14B25: Local structure of morphisms: étale, flat, etc. [See also 13B40]
☆ 14B99: None of the above, but in this section
□ 14Cxx: Cycles and subschemes
☆ 14C05: Parametrization (Chow and Hilbert schemes)
☆ 14C15: (Equivariant) Chow groups and rings; motives
☆ 14C17: Intersection theory, characteristic classes, intersection multiplicities [See also 13H15]
☆ 14C20: Divisors, linear systems, invertible sheaves
☆ 14C21: Pencils, nets, webs [See also 53A60]
☆ 14C22: Picard groups
☆ 14C25: Algebraic cycles
☆ 14C30: Transcendental methods, Hodge theory [See also 14D07, 32G20, 32J25, 32S35], Hodge conjecture
☆ 14C34: Torelli problem [See also 32G20]
☆ 14C35: Applications of methods of algebraic $K$-theory [See also 19Exx]
☆ 14C40: Riemann-Roch theorems [See also 19E20, 19L10]
☆ 14C99: None of the above, but in this section
□ 14Dxx: Families, fibrations
☆ 14D05: Structure of families (Picard-Lefschetz, monodromy, etc.)
☆ 14D06: Fibrations, degenerations
☆ 14D07: Variation of Hodge structures [See also 32G20]
☆ 14D10: Arithmetic ground fields (finite, local, global)
☆ 14D15: Formal methods; deformations [See also 13D10, 14B07, 32Gxx]
☆ 14D20: Algebraic moduli problems, moduli of vector bundles {For analytic moduli problems, see 32G13}
☆ 14D21: Applications of vector bundles and moduli spaces in mathematical physics (twistor theory, instantons, quantum field theory) [See also 32L25, 81Txx]
☆ 14D22: Fine and coarse moduli spaces
☆ 14D23: Stacks and moduli problems
☆ 14D24: Geometric Langlands program: algebro-geometric aspects [See also 22E57]
☆ 14D99: None of the above, but in this section
□ 14Exx: Birational geometry
☆ 14E05: Rational and birational maps
☆ 14E07: Birational automorphisms, Cremona group and generalizations
☆ 14E08: Rationality questions [See also 14M20]
☆ 14E15: Global theory and resolution of singularities [See also 14B05, 32S20, 32S45]
☆ 14E16: McKay correspondence
☆ 14E18: Arcs and motivic integration
☆ 14E20: Coverings [See also 14H30]
☆ 14E22: Ramification problems [See also 11S15]
☆ 14E25: Embeddings
☆ 14E30: Minimal model program (Mori theory, extremal rays)
☆ 14E99: None of the above, but in this section
□ 14Fxx: (Co)homology theory [See also 13Dxx]
☆ 14F05: Sheaves, derived categories of sheaves and related constructions [See also 14H60, 14J60, 18F20, 32Lxx, 46M20]
☆ 14F10: Differentials and other special sheaves; D-modules; Bernstein-Sato ideals and polynomials [See also 13Nxx, 32C38]
☆ 14F17: Vanishing theorems [See also 32L20]
☆ 14F18: Multiplier ideals
☆ 14F20: Étale and other Grothendieck topologies and (co)homologies
☆ 14F22: Brauer groups of schemes [See also 12G05, 16K50]
☆ 14F25: Classical real and complex (co)homology
☆ 14F30: $p$-adic cohomology, crystalline cohomology
☆ 14F35: Homotopy theory; fundamental groups [See also 14H30]
☆ 14F40: de Rham cohomology [See also 14C30, 32C35, 32L10]
☆ 14F42: Motivic cohomology; motivic homotopy theory [See also 19E15]
☆ 14F43: Other algebro-geometric (co)homologies (e.g., intersection, equivariant, Lawson, Deligne (co)homologies)
☆ 14F45: Topological properties
☆ 14F99: None of the above, but in this section
□ 14Gxx: Arithmetic problems. Diophantine geometry [See also 11Dxx, 11Gxx]
☆ 14G05: Rational points
☆ 14G10: Zeta-functions and related questions [See also 11G40] (Birch-Swinnerton-Dyer conjecture)
☆ 14G15: Finite ground fields
☆ 14G17: Positive characteristic ground fields
☆ 14G20: Local ground fields
☆ 14G22: Rigid analytic geometry
☆ 14G25: Global ground fields
☆ 14G27: Other nonalgebraically closed ground fields
☆ 14G32: Universal profinite groups (relationship to moduli spaces, projective and moduli towers, Galois theory)
☆ 14G35: Modular and Shimura varieties [See also 11F41, 11F46, 11G18]
☆ 14G40: Arithmetic varieties and schemes; Arakelov theory; heights [See also 11G50, 37P30]
☆ 14G50: Applications to coding theory and cryptography [See also 94A60, 94B27, 94B40]
☆ 14G99: None of the above, but in this section
□ 14Hxx: Curves
☆ 14H05: Algebraic functions; function fields [See also 11R58]
☆ 14H10: Families, moduli (algebraic)
☆ 14H15: Families, moduli (analytic) [See also 30F10, 32G15]
☆ 14H20: Singularities, local rings [See also 13Hxx, 14B05]
☆ 14H25: Arithmetic ground fields [See also 11Dxx, 11G05, 14Gxx]
☆ 14H30: Coverings, fundamental group [See also 14E20, 14F35]
☆ 14H37: Automorphisms
☆ 14H40: Jacobians, Prym varieties [See also 32G20]
☆ 14H42: Theta functions; Schottky problem [See also 14K25, 32G20]
☆ 14H45: Special curves and curves of low genus
☆ 14H50: Plane and space curves
☆ 14H51: Special divisors (gonality, Brill-Noether theory)
☆ 14H52: Elliptic curves [See also 11G05, 11G07, 14Kxx]
☆ 14H55: Riemann surfaces; Weierstrass points; gap sequences [See also 30Fxx]
☆ 14H57: Dessins d'enfants theory {For arithmetic aspects, see 11G32}
☆ 14H60: Vector bundles on curves and their moduli [See also 14D20, 14F05]
☆ 14H70: Relationships with integrable systems
☆ 14H81: Relationships with physics
☆ 14H99: None of the above, but in this section
□ 14Jxx: Surfaces and higher-dimensional varieties {For analytic theory, see 32Jxx}
☆ 14J10: Families, moduli, classification: algebraic theory
☆ 14J15: Moduli, classification: analytic theory; relations with modular forms [See also 32G13]
☆ 14J17: Singularities [See also 14B05, 14E15]
☆ 14J20: Arithmetic ground fields [See also 11Dxx, 11G25, 11G35, 14Gxx]
☆ 14J25: Special surfaces {For Hilbert modular surfaces, see 14G35}
☆ 14J26: Rational and ruled surfaces
☆ 14J27: Elliptic surfaces
☆ 14J28: $K3$ surfaces and Enriques surfaces
☆ 14J29: Surfaces of general type
☆ 14J30: $3$-folds [See also 32Q25]
☆ 14J32: Calabi-Yau manifolds
☆ 14J33: Mirror symmetry [See also 11G42, 53D37]
☆ 14J35: $4$-folds
☆ 14J40: $n$-folds ($n>4$)
☆ 14J45: Fano varieties
☆ 14J50: Automorphisms of surfaces and higher-dimensional varieties
☆ 14J60: Vector bundles on surfaces and higher-dimensional varieties, and their moduli [See also 14D20, 14F05, 32Lxx]
☆ 14J70: Hypersurfaces
☆ 14J80: Topology of surfaces (Donaldson polynomials, Seiberg-Witten invariants)
☆ 14J81: Relationships with physics
☆ 14J99: None of the above, but in this section
□ 14Kxx: Abelian varieties and schemes
☆ 14K02: Isogeny
☆ 14K05: Algebraic theory
☆ 14K10: Algebraic moduli, classification [See also 11G15]
☆ 14K12: Subvarieties
☆ 14K15: Arithmetic ground fields [See also 11Dxx, 11Fxx, 11G10, 14Gxx]
☆ 14K20: Analytic theory; abelian integrals and differentials
☆ 14K22: Complex multiplication [See also 11G15]
☆ 14K25: Theta functions [See also 14H42]
☆ 14K30: Picard schemes, higher Jacobians [See also 14H40, 32G20]
☆ 14K99: None of the above, but in this section
□ 14Lxx: Algebraic groups {For linear algebraic groups, see 20Gxx; for Lie algebras, see 17B45}
☆ 14L05: Formal groups, $p$-divisible groups [See also 55N22]
☆ 14L10: Group varieties
☆ 14L15: Group schemes
☆ 14L17: Affine algebraic groups, hyperalgebra constructions [See also 17B45, 18D35]
☆ 14L24: Geometric invariant theory [See also 13A50]
☆ 14L30: Group actions on varieties or schemes (quotients) [See also 13A50, 14L24, 14M17]
☆ 14L35: Classical groups (geometric aspects) [See also 20Gxx, 51N30]
☆ 14L40: Other algebraic groups (geometric aspects)
☆ 14L99: None of the above, but in this section
□ 14Mxx: Special varieties
☆ 14M05: Varieties defined by ring conditions (factorial, Cohen-Macaulay, seminormal) [See also 13F15, 13F45, 13H10]
☆ 14M06: Linkage [See also 13C40]
☆ 14M07: Low codimension problems
☆ 14M10: Complete intersections [See also 13C40]
☆ 14M12: Determinantal varieties [See also 13C40]
☆ 14M15: Grassmannians, Schubert varieties, flag manifolds [See also 32M10, 51M35]
☆ 14M17: Homogeneous spaces and generalizations [See also 32M10, 53C30, 57T15]
☆ 14M20: Rational and unirational varieties [See also 14E08]
☆ 14M22: Rationally connected varieties
☆ 14M25: Toric varieties, Newton polyhedra [See also 52B20]
☆ 14M27: Compactifications; symmetric and spherical varieties
☆ 14M30: Supervarieties [See also 32C11, 58A50]
☆ 14M99: None of the above, but in this section
□ 14Nxx: Projective and enumerative geometry [See also 51-XX]
☆ 14N05: Projective techniques [See also 51N35]
☆ 14N10: Enumerative problems (combinatorial problems)
☆ 14N15: Classical problems, Schubert calculus
☆ 14N20: Configurations and arrangements of linear subspaces
☆ 14N25: Varieties of low degree
☆ 14N30: Adjunction problems
☆ 14N35: Gromov-Witten invariants, quantum cohomology, Gopakumar-Vafa invariants, Donaldson-Thomas invariants [See also 53D45]
☆ 14N99: None of the above, but in this section
□ 14Pxx: Real algebraic and real analytic geometry
☆ 14P05: Real algebraic sets [See also 12D15, 13J30]
☆ 14P10: Semialgebraic sets and related spaces
☆ 14P15: Real analytic and semianalytic sets [See also 32B20, 32C05]
☆ 14P20: Nash functions and manifolds [See also 32C07, 58A07]
☆ 14P25: Topology of real algebraic varieties
☆ 14P99: None of the above, but in this section
□ 14Qxx: Computational aspects in algebraic geometry [See also 12Y05, 13Pxx, 68W30]
☆ 14Q05: Curves
☆ 14Q10: Surfaces, hypersurfaces
☆ 14Q15: Higher-dimensional varieties
☆ 14Q20: Effectivity, complexity
☆ 14Q99: None of the above, but in this section
□ 14Rxx: Affine geometry
☆ 14R05: Classification of affine varieties
☆ 14R10: Affine spaces (automorphisms, embeddings, exotic structures, cancellation problem)
☆ 14R15: Jacobian problem [See also 13F20]
☆ 14R20: Group actions on affine varieties [See also 13A50, 14L30]
☆ 14R25: Affine fibrations [See also 14D06]
☆ 14R99: None of the above, but in this section
□ 14Txx: Tropical geometry [See also 12K10, 14M25, 14N10, 52B20]
☆ 14T05: Tropical geometry [See also 12K10, 14M25, 14N10, 52B20]
☆ 14T99: None of the above, but in this section
• 15-XX: Linear and multilinear algebra; matrix theory
□ 15-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 15-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 15-02: Research exposition (monographs, survey articles)
□ 15-03: Historical (must also be assigned at least one classification number from Section 01)
□ 15-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 15-06: Proceedings, conferences, collections, etc.
□ 15Axx: Basic linear algebra
☆ 15A03: Vector spaces, linear dependence, rank
☆ 15A04: Linear transformations, semilinear transformations
☆ 15A06: Linear equations
☆ 15A09: Matrix inversion, generalized inverses
☆ 15A12: Conditioning of matrices [See also 65F35]
☆ 15A15: Determinants, permanents, other special matrix functions [See also 19B10, 19B14]
☆ 15A16: Matrix exponential and similar functions of matrices
☆ 15A18: Eigenvalues, singular values, and eigenvectors
☆ 15A21: Canonical forms, reductions, classification
☆ 15A22: Matrix pencils [See also 47A56]
☆ 15A23: Factorization of matrices
☆ 15A24: Matrix equations and identities
☆ 15A27: Commutativity
☆ 15A29: Inverse problems
☆ 15A30: Algebraic systems of matrices [See also 16S50, 20Gxx, 20Hxx]
☆ 15A39: Linear inequalities
☆ 15A42: Inequalities involving eigenvalues and eigenvectors
☆ 15A45: Miscellaneous inequalities involving matrices
☆ 15A54: Matrices over function rings in one or more variables
☆ 15A60: Norms of matrices, numerical range, applications of functional analysis to matrix theory [See also 65F35, 65J05]
☆ 15A63: Quadratic and bilinear forms, inner products [See mainly 11Exx]
☆ 15A66: Clifford algebras, spinors
☆ 15A69: Multilinear algebra, tensor products
☆ 15A72: Vector and tensor algebra, theory of invariants [See also 13A50, 14L24]
☆ 15A75: Exterior algebra, Grassmann algebras
☆ 15A78: Other algebras built from modules
☆ 15A80: Max-plus and related algebras
☆ 15A83: Matrix completion problems
☆ 15A86: Linear preserver problems
☆ 15A99: Miscellaneous topics
□ 15Bxx: Special matrices
☆ 15B05: Toeplitz, Cauchy, and related matrices
☆ 15B10: Orthogonal matrices
☆ 15B15: Fuzzy matrices
☆ 15B33: Matrices over special rings (quaternions, finite fields, etc.)
☆ 15B34: Boolean and Hadamard matrices
☆ 15B35: Sign pattern matrices
☆ 15B36: Matrices of integers [See also 11C20]
☆ 15B48: Positive matrices and their generalizations; cones of matrices
☆ 15B51: Stochastic matrices
☆ 15B52: Random matrices
☆ 15B57: Hermitian, skew-Hermitian, and related matrices
☆ 15B99: None of the above, but in this section
• 16-XX: Associative rings and algebras {For the commutative case, see 13-XX}
□ 16-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 16-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 16-02: Research exposition (monographs, survey articles)
□ 16-03: Historical (must also be assigned at least one classification number from Section 01)
□ 16-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 16-06: Proceedings, conferences, collections, etc.
□ 16Bxx: General and miscellaneous
☆ 16B50: Category-theoretic methods and results (except as in 16D90) [See also 18-XX]
☆ 16B70: Applications of logic [See also 03Cxx]
☆ 16B99: None of the above, but in this section
□ 16Dxx: Modules, bimodules and ideals
☆ 16D10: General module theory
☆ 16D20: Bimodules
☆ 16D25: Ideals
☆ 16D30: Infinite-dimensional simple rings (except as in 16Kxx)
☆ 16D40: Free, projective, and flat modules and ideals [See also 19A13]
☆ 16D50: Injective modules, self-injective rings [See also 16L60]
☆ 16D60: Simple and semisimple modules, primitive rings and ideals
☆ 16D70: Structure and classification (except as in 16Gxx), direct sum decomposition, cancellation
☆ 16D80: Other classes of modules and ideals [See also 16G50]
☆ 16D90: Module categories [See also 16Gxx, 16S90]; module theory in a category-theoretic context; Morita equivalence and duality
☆ 16D99: None of the above, but in this section
□ 16Exx: Homological methods {For commutative rings, see 13Dxx; for general categories, see 18Gxx}
☆ 16E05: Syzygies, resolutions, complexes
☆ 16E10: Homological dimension
☆ 16E20: Grothendieck groups, $K$-theory, etc. [See also 18F30, 19Axx, 19D50]
☆ 16E30: Homological functors on modules (Tor, Ext, etc.)
☆ 16E35: Derived categories
☆ 16E40: (Co)homology of rings and algebras (e.g. Hochschild, cyclic, dihedral, etc.)
☆ 16E45: Differential graded algebras and applications
☆ 16E50: von Neumann regular rings and generalizations
☆ 16E60: Semihereditary and hereditary rings, free ideal rings, Sylvester rings, etc.
☆ 16E65: Homological conditions on rings (generalizations of regular, Gorenstein, Cohen-Macaulay rings, etc.)
☆ 16E99: None of the above, but in this section
□ 16Gxx: Representation theory of rings and algebras
☆ 16G10: Representations of Artinian rings
☆ 16G20: Representations of quivers and partially ordered sets
☆ 16G30: Representations of orders, lattices, algebras over commutative rings [See also 16Hxx]
☆ 16G50: Cohen-Macaulay modules
☆ 16G60: Representation type (finite, tame, wild, etc.)
☆ 16G70: Auslander-Reiten sequences (almost split sequences) and Auslander-Reiten quivers
☆ 16G99: None of the above, but in this section
□ 16Hxx: Algebras and orders {For arithmetic aspects, see 11R52, 11R54, 11S45; for representation theory, see 16G30}
☆ 16H05: Separable algebras (e.g., quaternion algebras, Azumaya algebras, etc.)
☆ 16H10: Orders in separable algebras
☆ 16H15: Commutative orders
☆ 16H20: Lattices over orders
☆ 16H99: None of the above, but in this section
□ 16Kxx: Division rings and semisimple Artin rings [See also 12E15, 15A30]
☆ 16K20: Finite-dimensional {For crossed products, see 16S35}
☆ 16K40: Infinite-dimensional and general
☆ 16K50: Brauer groups [See also 12G05, 14F22]
☆ 16K99: None of the above, but in this section
□ 16Lxx: Local rings and generalizations
☆ 16L30: Noncommutative local and semilocal rings, perfect rings
☆ 16L60: Quasi-Frobenius rings [See also 16D50]
☆ 16L99: None of the above, but in this section
□ 16Nxx: Radicals and radical properties of rings
☆ 16N20: Jacobson radical, quasimultiplication
☆ 16N40: Nil and nilpotent radicals, sets, ideals, rings
☆ 16N60: Prime and semiprime rings [See also 16D60, 16U10]
☆ 16N80: General radicals and rings {For radicals in module categories, see 16S90}
☆ 16N99: None of the above, but in this section
□ 16Pxx: Chain conditions, growth conditions, and other forms of finiteness
☆ 16P10: Finite rings and finite-dimensional algebras {For semisimple, see 16K20; for commutative, see 11Txx, 13Mxx}
☆ 16P20: Artinian rings and modules
☆ 16P40: Noetherian rings and modules
☆ 16P50: Localization and Noetherian rings [See also 16U20]
☆ 16P60: Chain conditions on annihilators and summands: Goldie-type conditions [See also 16U20], Krull dimension
☆ 16P70: Chain conditions on other classes of submodules, ideals, subrings, etc.; coherence
☆ 16P90: Growth rate, Gelfand-Kirillov dimension
☆ 16P99: None of the above, but in this section
□ 16Rxx: Rings with polynomial identity
☆ 16R10: $T$-ideals, identities, varieties of rings and algebras
☆ 16R20: Semiprime p.i. rings, rings embeddable in matrices over commutative rings
☆ 16R30: Trace rings and invariant theory
☆ 16R40: Identities other than those of matrices over commutative rings
☆ 16R50: Other kinds of identities (generalized polynomial, rational, involution)
☆ 16R60: Functional identities
☆ 16R99: None of the above, but in this section
□ 16Sxx: Rings and algebras arising under various constructions
☆ 16S10: Rings determined by universal properties (free algebras, coproducts, adjunction of inverses, etc.)
☆ 16S15: Finite generation, finite presentability, normal forms (diamond lemma, term-rewriting)
☆ 16S20: Centralizing and normalizing extensions
☆ 16S30: Universal enveloping algebras of Lie algebras [See mainly 17B35]
☆ 16S32: Rings of differential operators [See also 13N10, 32C38]
☆ 16S34: Group rings [See also 20C05, 20C07], Laurent polynomial rings
☆ 16S35: Twisted and skew group rings, crossed products
☆ 16S36: Ordinary and skew polynomial rings and semigroup rings [See also 20M25]
☆ 16S37: Quadratic and Koszul algebras
☆ 16S38: Rings arising from non-commutative algebraic geometry [See also 14A22]
☆ 16S40: Smash products of general Hopf actions [See also 16T05]
☆ 16S50: Endomorphism rings; matrix rings [See also 15-XX]
☆ 16S60: Rings of functions, subdirect products, sheaves of rings
☆ 16S70: Extensions of rings by ideals
☆ 16S80: Deformations of rings [See also 13D10, 14D15]
☆ 16S85: Rings of fractions and localizations [See also 13B30]
☆ 16S90: Torsion theories; radicals on module categories [See also 13D30, 18E40] {For radicals of rings, see 16Nxx}
☆ 16S99: None of the above, but in this section
□ 16Txx: Hopf algebras, quantum groups and related topics
☆ 16T05: Hopf algebras and their applications [See also 16S40, 57T05]
☆ 16T10: Bialgebras
☆ 16T15: Coalgebras and comodules; corings
☆ 16T20: Ring-theoretic aspects of quantum groups [See also 17B37, 20G42, 81R50]
☆ 16T25: Yang-Baxter equations
☆ 16T30: Connections with combinatorics
☆ 16T99: None of the above, but in this section
□ 16Uxx: Conditions on elements
☆ 16U10: Integral domains
☆ 16U20: Ore rings, multiplicative sets, Ore localization
☆ 16U30: Divisibility, noncommutative UFDs
☆ 16U60: Units, groups of units
☆ 16U70: Center, normalizer (invariant elements)
☆ 16U80: Generalizations of commutativity
☆ 16U99: None of the above, but in this section
□ 16Wxx: Rings and algebras with additional structure
☆ 16W10: Rings with involution; Lie, Jordan and other nonassociative structures [See also 17B60, 17C50, 46Kxx]
☆ 16W20: Automorphisms and endomorphisms
☆ 16W22: Actions of groups and semigroups; invariant theory
☆ 16W25: Derivations, actions of Lie algebras
☆ 16W50: Graded rings and modules
☆ 16W55: “Super” (or “skew”) structure [See also 17A70, 17Bxx, 17C70] {For exterior algebras, see 15A75; for Clifford algebras, see 11E88, 15A66}
☆ 16W60: Valuations, completions, formal power series and related constructions [See also 13Jxx]
☆ 16W70: Filtered rings; filtrational and graded techniques
☆ 16W80: Topological and ordered rings and modules [See also 06F25, 13Jxx]
☆ 16W99: None of the above, but in this section
□ 16Yxx: Generalizations {For nonassociative rings, see 17-XX}
☆ 16Y30: Near-rings [See also 12K05]
☆ 16Y60: Semirings [See also 12K10]
☆ 16Y99: None of the above, but in this section
□ 16Zxx: Computational aspects of associative rings
☆ 16Z05: Computational aspects of associative rings [See also 68W30]
☆ 16Z99: None of the above, but in this section
• 17-XX: Nonassociative rings and algebras
□ 17-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 17-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 17-02: Research exposition (monographs, survey articles)
□ 17-03: Historical (must also be assigned at least one classification number from Section 01)
□ 17-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 17-06: Proceedings, conferences, collections, etc.
□ 17-08: Computational methods
□ 17Axx: General nonassociative rings
☆ 17A01: General theory
☆ 17A05: Power-associative rings
☆ 17A15: Noncommutative Jordan algebras
☆ 17A20: Flexible algebras
☆ 17A30: Algebras satisfying other identities
☆ 17A32: Leibniz algebras
☆ 17A35: Division algebras
☆ 17A36: Automorphisms, derivations, other operators
☆ 17A40: Ternary compositions
☆ 17A42: Other $n$-ary compositions $(n \ge 3)$
☆ 17A45: Quadratic algebras (but not quadratic Jordan algebras)
☆ 17A50: Free algebras
☆ 17A60: Structure theory
☆ 17A65: Radical theory
☆ 17A70: Superalgebras
☆ 17A75: Composition algebras
☆ 17A80: Valued algebras
☆ 17A99: None of the above, but in this section
□ 17Bxx: Lie algebras and Lie superalgebras {For Lie groups, see 22Exx}
☆ 17B01: Identities, free Lie (super)algebras
☆ 17B05: Structure theory
☆ 17B08: Coadjoint orbits; nilpotent varieties
☆ 17B10: Representations, algebraic theory (weights)
☆ 17B15: Representations, analytic theory
☆ 17B20: Simple, semisimple, reductive (super)algebras
☆ 17B22: Root systems
☆ 17B25: Exceptional (super)algebras
☆ 17B30: Solvable, nilpotent (super)algebras
☆ 17B35: Universal enveloping (super)algebras [See also 16S30]
☆ 17B37: Quantum groups (quantized enveloping algebras) and related deformations [See also 16T20, 20G42, 81R50, 82B23]
☆ 17B40: Automorphisms, derivations, other operators
☆ 17B45: Lie algebras of linear algebraic groups [See also 14Lxx and 20Gxx]
☆ 17B50: Modular Lie (super)algebras
☆ 17B55: Homological methods in Lie (super)algebras
☆ 17B56: Cohomology of Lie (super)algebras
☆ 17B60: Lie (super)algebras associated with other structures (associative, Jordan, etc.) [See also 16W10, 17C40, 17C50]
☆ 17B62: Lie bialgebras; Lie coalgebras
☆ 17B63: Poisson algebras
☆ 17B65: Infinite-dimensional Lie (super)algebras [See also 22E65]
☆ 17B66: Lie algebras of vector fields and related (super) algebras
☆ 17B67: Kac-Moody (super)algebras; extended affine Lie algebras; toroidal Lie algebras
☆ 17B68: Virasoro and related algebras
☆ 17B69: Vertex operators; vertex operator algebras and related structures
☆ 17B70: Graded Lie (super)algebras
☆ 17B75: Color Lie (super)algebras
☆ 17B80: Applications to integrable systems
☆ 17B81: Applications to physics
☆ 17B99: None of the above, but in this section
□ 17Cxx: Jordan algebras (algebras, triples and pairs)
☆ 17C05: Identities and free Jordan structures
☆ 17C10: Structure theory
☆ 17C17: Radicals
☆ 17C20: Simple, semisimple algebras
☆ 17C27: Idempotents, Peirce decompositions
☆ 17C30: Associated groups, automorphisms
☆ 17C36: Associated manifolds
☆ 17C37: Associated geometries
☆ 17C40: Exceptional Jordan structures
☆ 17C50: Jordan structures associated with other structures [See also 16W10]
☆ 17C55: Finite-dimensional structures
☆ 17C60: Division algebras
☆ 17C65: Jordan structures on Banach spaces and algebras [See also 46H70, 46L70]
☆ 17C70: Super structures
☆ 17C90: Applications to physics
☆ 17C99: None of the above, but in this section
□ 17Dxx: Other nonassociative rings and algebras
☆ 17D05: Alternative rings
☆ 17D10: Mal'cev (Mal'tsev) rings and algebras
☆ 17D15: Right alternative rings
☆ 17D20: $(\gamma, \delta)$-rings, including $(1,-1)$-rings
☆ 17D25: Lie-admissible algebras
☆ 17D92: Genetic algebras
☆ 17D99: None of the above, but in this section
• 18-XX: Category theory; homological algebra {For commutative rings see 13Dxx, for associative rings 16Exx, for groups 20Jxx, for topological groups and related structures 57Txx; see also 55Nxx and 55Uxx for algebraic topology}
□ 18-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 18-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 18-02: Research exposition (monographs, survey articles)
□ 18-03: Historical (must also be assigned at least one classification number from Section 01)
□ 18-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 18-06: Proceedings, conferences, collections, etc.
□ 18Axx: General theory of categories and functors
☆ 18A05: Definitions, generalizations
☆ 18A10: Graphs, diagram schemes, precategories [See especially 20L05]
☆ 18A15: Foundations, relations to logic and deductive systems [See also 03-XX]
☆ 18A20: Epimorphisms, monomorphisms, special classes of morphisms, null morphisms
☆ 18A22: Special properties of functors (faithful, full, etc.)
☆ 18A23: Natural morphisms, dinatural morphisms
☆ 18A25: Functor categories, comma categories
☆ 18A30: Limits and colimits (products, sums, directed limits, pushouts, fiber products, equalizers, kernels, ends and coends, etc.)
☆ 18A32: Factorization of morphisms, substructures, quotient structures, congruences, amalgams
☆ 18A35: Categories admitting limits (complete categories), functors preserving limits, completions
☆ 18A40: Adjoint functors (universal constructions, reflective subcategories, Kan extensions, etc.)
☆ 18A99: None of the above, but in this section
□ 18Bxx: Special categories
☆ 18B05: Category of sets, characterizations [See also 03-XX]
☆ 18B10: Category of relations, additive relations
☆ 18B15: Embedding theorems, universal categories [See also 18E20]
☆ 18B20: Categories of machines, automata, operative categories [See also 03D05, 68Qxx]
☆ 18B25: Topoi [See also 03G30]
☆ 18B30: Categories of topological spaces and continuous mappings [See also 54-XX]
☆ 18B35: Preorders, orders and lattices (viewed as categories) [See also 06-XX]
☆ 18B40: Groupoids, semigroupoids, semigroups, groups (viewed as categories) [See also 20Axx, 20L05, 20Mxx]
☆ 18B99: None of the above, but in this section
□ 18Cxx: Categories and theories
☆ 18C05: Equational categories [See also 03C05, 08C05]
☆ 18C10: Theories (e.g. algebraic theories), structure, and semantics [See also 03G30]
☆ 18C15: Triples (= standard construction, monad or triad), algebras for a triple, homology and derived functors for triples [See also 18Gxx]
☆ 18C20: Algebras and Kleisli categories associated with monads
☆ 18C30: Sketches and generalizations
☆ 18C35: Accessible and locally presentable categories
☆ 18C50: Categorical semantics of formal languages [See also 68Q55, 68Q65]
☆ 18C99: None of the above, but in this section
□ 18Dxx: Categories with structure
☆ 18D05: Double categories, $2$-categories, bicategories and generalizations
☆ 18D10: Monoidal categories (= multiplicative categories), symmetric monoidal categories, braided categories [See also 19D23]
☆ 18D15: Closed categories (closed monoidal and Cartesian closed categories, etc.)
☆ 18D20: Enriched categories (over closed or monoidal categories)
☆ 18D25: Strong functors, strong adjunctions
☆ 18D30: Fibered categories
☆ 18D35: Structured objects in a category (group objects, etc.)
☆ 18D50: Operads [See also 55P48]
☆ 18D99: None of the above, but in this section
□ 18Exx: Abelian categories
☆ 18E05: Preadditive, additive categories
☆ 18E10: Exact categories, abelian categories
☆ 18E15: Grothendieck categories
☆ 18E20: Embedding theorems [See also 18B15]
☆ 18E25: Derived functors and satellites
☆ 18E30: Derived categories, triangulated categories
☆ 18E35: Localization of categories
☆ 18E40: Torsion theories, radicals [See also 13D30, 16S90]
☆ 18E99: None of the above, but in this section
□ 18Fxx: Categories and geometry
☆ 18F05: Local categories and functors
☆ 18F10: Grothendieck topologies [See also 14F20]
☆ 18F15: Abstract manifolds and fiber bundles [See also 55Rxx, 57Pxx]
☆ 18F20: Presheaves and sheaves [See also 14F05, 32C35, 32L10, 54B40, 55N30]
☆ 18F25: Algebraic $K$-theory and $L$-theory [See also 11Exx, 11R70, 11S70, 12-XX, 13D15, 14Cxx, 16E20, 19-XX, 46L80, 57R65, 57R67]
☆ 18F30: Grothendieck groups [See also 13D15, 16E20, 19Axx]
☆ 18F99: None of the above, but in this section
□ 18Gxx: Homological algebra [See also 13Dxx, 16Exx, 20Jxx, 55Nxx, 55Uxx, 57Txx]
☆ 18G05: Projectives and injectives [See also 13C10, 13C11, 16D40, 16D50]
☆ 18G10: Resolutions; derived functors [See also 13D02, 16E05, 18E25]
☆ 18G15: Ext and Tor, generalizations, Künneth formula [See also 55U25]
☆ 18G20: Homological dimension [See also 13D05, 16E10]
☆ 18G25: Relative homological algebra, projective classes
☆ 18G30: Simplicial sets, simplicial objects (in a category) [See also 55U10]
☆ 18G35: Chain complexes [See also 18E30, 55U15]
☆ 18G40: Spectral sequences, hypercohomology [See also 55Txx]
☆ 18G50: Nonabelian homological algebra
☆ 18G55: Homotopical algebra
☆ 18G60: Other (co)homology theories [See also 19D55, 46L80, 58J20, 58J22]
☆ 18G99: None of the above, but in this section
• 19-XX: $K$-theory [See also 16E20, 18F25]
□ 19-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 19-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 19-02: Research exposition (monographs, survey articles)
□ 19-03: Historical (must also be assigned at least one classification number from Section 01)
□ 19-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 19-06: Proceedings, conferences, collections, etc.
□ 19Axx: Grothendieck groups and $K_0$ [See also 13D15, 18F30]
☆ 19A13: Stability for projective modules [See also 13C10]
☆ 19A15: Efficient generation
☆ 19A22: Frobenius induction, Burnside and representation rings
☆ 19A31: $K_0$ of group rings and orders
☆ 19A49: $K_0$ of other rings
☆ 19A99: None of the above, but in this section
□ 19Bxx: Whitehead groups and $K_1$
☆ 19B10: Stable range conditions
☆ 19B14: Stability for linear groups
☆ 19B28: $K_1$ of group rings and orders [See also 57Q10]
☆ 19B37: Congruence subgroup problems [See also 20H05]
☆ 19B99: None of the above, but in this section
□ 19Cxx: Steinberg groups and $K_2$
☆ 19C09: Central extensions and Schur multipliers
☆ 19C20: Symbols, presentations and stability of $K_2$
☆ 19C30: $K_2$ and the Brauer group
☆ 19C40: Excision for $K_2$
☆ 19C99: None of the above, but in this section
□ 19Dxx: Higher algebraic $K$-theory
☆ 19D06: $Q$- and plus-constructions
☆ 19D10: Algebraic $K$-theory of spaces
☆ 19D23: Symmetric monoidal categories [See also 18D10]
☆ 19D25: Karoubi-Villamayor-Gersten $K$-theory
☆ 19D35: Negative $K$-theory, NK and Nil
☆ 19D45: Higher symbols, Milnor $K$-theory
☆ 19D50: Computations of higher $K$-theory of rings [See also 13D15, 16E20]
☆ 19D55: $K$-theory and homology; cyclic homology and cohomology [See also 18G60]
☆ 19D99: None of the above, but in this section
□ 19Exx: $K$-theory in geometry
☆ 19E08: $K$-theory of schemes [See also 14C35]
☆ 19E15: Algebraic cycles and motivic cohomology [See also 14C25, 14C35, 14F42]
☆ 19E20: Relations with cohomology theories [See also 14Fxx]
☆ 19E99: None of the above, but in this section
□ 19Fxx: $K$-theory in number theory [See also 11R70, 11S70]
☆ 19F05: Generalized class field theory [See also 11G45]
☆ 19F15: Symbols and arithmetic [See also 11R37]
☆ 19F27: Étale cohomology, higher regulators, zeta and $L$-functions [See also 11G40, 11R42, 11S40, 14F20, 14G10]
☆ 19F99: None of the above, but in this section
□ 19Gxx: $K$-theory of forms [See also 11Exx]
☆ 19G05: Stability for quadratic modules
☆ 19G12: Witt groups of rings [See also 11E81]
☆ 19G24: $L$-theory of group rings [See also 11E81]
☆ 19G38: Hermitian $K$-theory, relations with $K$-theory of rings
☆ 19G99: None of the above, but in this section
□ 19Jxx: Obstructions from topology
☆ 19J05: Finiteness and other obstructions in $K_0$
☆ 19J10: Whitehead (and related) torsion
☆ 19J25: Surgery obstructions [See also 57R67]
☆ 19J35: Obstructions to group actions
☆ 19J99: None of the above, but in this section
□ 19Kxx: $K$-theory and operator algebras [See mainly 46L80, and also 46M20]
☆ 19K14: $K_0$ as an ordered group, traces
☆ 19K33: EXT and $K$-homology [See also 55N22]
☆ 19K35: Kasparov theory ($KK$-theory) [See also 58J22]
☆ 19K56: Index theory [See also 58J20, 58J22]
☆ 19K99: None of the above, but in this section
□ 19Lxx: Topological $K$-theory [See also 55N15, 55R50, 55S25]
☆ 19L10: Riemann-Roch theorems, Chern characters
☆ 19L20: $J$-homomorphism, Adams operations [See also 55Q50]
☆ 19L41: Connective $K$-theory, cobordism [See also 55N22]
☆ 19L47: Equivariant $K$-theory [See also 55N91, 55P91, 55Q91, 55R91, 55S91]
☆ 19L50: Twisted $K$-theory; differential $K$-theory
☆ 19L64: Computations, geometric applications
☆ 19L99: None of the above, but in this section
□ 19Mxx: Miscellaneous applications of $K$-theory
☆ 19M05: Miscellaneous applications of $K$-theory
☆ 19M99: None of the above, but in this section
• 20-XX: Group theory and generalizations
□ 20-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 20-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 20-02: Research exposition (monographs, survey articles)
□ 20-03: Historical (must also be assigned at least one classification number from Section 01)
□ 20-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 20-06: Proceedings, conferences, collections, etc.
□ 20Axx: Foundations
☆ 20A05: Axiomatics and elementary properties
☆ 20A10: Metamathematical considerations {For word problems, see 20F10}
☆ 20A15: Applications of logic to group theory
☆ 20A99: None of the above, but in this section
□ 20Bxx: Permutation groups
☆ 20B05: General theory for finite groups
☆ 20B07: General theory for infinite groups
☆ 20B10: Characterization theorems
☆ 20B15: Primitive groups
☆ 20B20: Multiply transitive finite groups
☆ 20B22: Multiply transitive infinite groups
☆ 20B25: Finite automorphism groups of algebraic, geometric, or combinatorial structures [See also 05Bxx, 12F10, 20G40, 20H30, 51-XX]
☆ 20B27: Infinite automorphism groups [See also 12F10]
☆ 20B30: Symmetric groups
☆ 20B35: Subgroups of symmetric groups
☆ 20B40: Computational methods
☆ 20B99: None of the above, but in this section
□ 20Cxx: Representation theory of groups [See also 19A22 (for representation rings and Burnside rings)]
☆ 20C05: Group rings of finite groups and their modules [See also 16S34]
☆ 20C07: Group rings of infinite groups and their modules [See also 16S34]
☆ 20C08: Hecke algebras and their representations
☆ 20C10: Integral representations of finite groups
☆ 20C11: $p$-adic representations of finite groups
☆ 20C12: Integral representations of infinite groups
☆ 20C15: Ordinary representations and characters
☆ 20C20: Modular representations and characters
☆ 20C25: Projective representations and multipliers
☆ 20C30: Representations of finite symmetric groups
☆ 20C32: Representations of infinite symmetric groups
☆ 20C33: Representations of finite groups of Lie type
☆ 20C34: Representations of sporadic groups
☆ 20C35: Applications of group representations to physics
☆ 20C40: Computational methods
☆ 20C99: None of the above, but in this section
□ 20Dxx: Abstract finite groups
☆ 20D05: Finite simple groups and their classification
☆ 20D06: Simple groups: alternating groups and groups of Lie type [See also 20Gxx]
☆ 20D08: Simple groups: sporadic groups
☆ 20D10: Solvable groups, theory of formations, Schunck classes, Fitting classes, $\pi$-length, ranks [See also 20F17]
☆ 20D15: Nilpotent groups, $p$-groups
☆ 20D20: Sylow subgroups, Sylow properties, $\pi$-groups, $\pi$-structure
☆ 20D25: Special subgroups (Frattini, Fitting, etc.)
☆ 20D30: Series and lattices of subgroups
☆ 20D35: Subnormal subgroups
☆ 20D40: Products of subgroups
☆ 20D45: Automorphisms
☆ 20D60: Arithmetic and combinatorial problems
☆ 20D99: None of the above, but in this section
□ 20Exx: Structure and classification of infinite or finite groups
☆ 20E05: Free nonabelian groups
☆ 20E06: Free products, free products with amalgamation, Higman-Neumann-Neumann extensions, and generalizations
☆ 20E07: Subgroup theorems; subgroup growth
☆ 20E08: Groups acting on trees [See also 20F65]
☆ 20E10: Quasivarieties and varieties of groups
☆ 20E15: Chains and lattices of subgroups, subnormal subgroups [See also 20F22]
☆ 20E18: Limits, profinite groups
☆ 20E22: Extensions, wreath products, and other compositions [See also 20J05]
☆ 20E25: Local properties
☆ 20E26: Residual properties and generalizations; residually finite groups
☆ 20E28: Maximal subgroups
☆ 20E32: Simple groups [See also 20D05]
☆ 20E34: General structure theorems
☆ 20E36: Automorphisms of infinite groups [For automorphisms of finite groups, see 20D45]
☆ 20E42: Groups with a $BN$-pair; buildings [See also 51E24]
☆ 20E45: Conjugacy classes
☆ 20E99: None of the above, but in this section
□ 20Fxx: Special aspects of infinite or finite groups
☆ 20F05: Generators, relations, and presentations
☆ 20F06: Cancellation theory; application of van Kampen diagrams [See also 57M05]
☆ 20F10: Word problems, other decision problems, connections with logic and automata [See also 03B25, 03D05, 03D40, 06B25, 08A50, 20M05, 68Q70]
☆ 20F11: Groups of finite Morley rank [See also 03C45, 03C60]
☆ 20F12: Commutator calculus
☆ 20F14: Derived series, central series, and generalizations
☆ 20F16: Solvable groups, supersolvable groups [See also 20D10]
☆ 20F17: Formations of groups, Fitting classes [See also 20D10]
☆ 20F18: Nilpotent groups [See also 20D15]
☆ 20F19: Generalizations of solvable and nilpotent groups
☆ 20F22: Other classes of groups defined by subgroup chains
☆ 20F24: FC-groups and their generalizations
☆ 20F28: Automorphism groups of groups [See also 20E36]
☆ 20F29: Representations of groups as automorphism groups of algebraic systems
☆ 20F34: Fundamental groups and their automorphisms [See also 57M05, 57Sxx]
☆ 20F36: Braid groups; Artin groups
☆ 20F38: Other groups related to topology or analysis
☆ 20F40: Associated Lie structures
☆ 20F45: Engel conditions
☆ 20F50: Periodic groups; locally finite groups
☆ 20F55: Reflection and Coxeter groups [See also 22E40, 51F15]
☆ 20F60: Ordered groups [See mainly 06F15]
☆ 20F65: Geometric group theory [See also 05C25, 20E08, 57Mxx]
☆ 20F67: Hyperbolic groups and nonpositively curved groups
☆ 20F69: Asymptotic properties of groups
☆ 20F70: Algebraic geometry over groups; equations over groups
☆ 20F99: None of the above, but in this section
□ 20Gxx: Linear algebraic groups and related topics {For arithmetic theory, see 11E57, 11H56; for geometric theory, see 14Lxx, 22Exx; for other methods in representation theory, see 15A30, 22E45, 22E46, 22E47, 22E50, 22E55}
☆ 20G05: Representation theory
☆ 20G07: Structure theory
☆ 20G10: Cohomology theory
☆ 20G15: Linear algebraic groups over arbitrary fields
☆ 20G20: Linear algebraic groups over the reals, the complexes, the quaternions
☆ 20G25: Linear algebraic groups over local fields and their integers
☆ 20G30: Linear algebraic groups over global fields and their integers
☆ 20G35: Linear algebraic groups over adèles and other rings and schemes
☆ 20G40: Linear algebraic groups over finite fields
☆ 20G41: Exceptional groups
☆ 20G42: Quantum groups (quantized function algebras) and their representations [See also 16T20, 17B37, 81R50]
☆ 20G43: Schur and $q$-Schur algebras
☆ 20G44: Kac-Moody groups
☆ 20G45: Applications to physics
☆ 20G99: None of the above, but in this section
□ 20Hxx: Other groups of matrices [See also 15A30]
☆ 20H05: Unimodular groups, congruence subgroups [See also 11F06, 19B37, 22E40, 51F20]
☆ 20H10: Fuchsian groups and their generalizations [See also 11F06, 22E40, 30F35, 32Nxx]
☆ 20H15: Other geometric groups, including crystallographic groups [See also 51-XX, especially 51F15, and 82D25]
☆ 20H20: Other matrix groups over fields
☆ 20H25: Other matrix groups over rings
☆ 20H30: Other matrix groups over finite fields
☆ 20H99: None of the above, but in this section
□ 20Jxx: Connections with homological algebra and category theory
☆ 20J05: Homological methods in group theory
☆ 20J06: Cohomology of groups
☆ 20J15: Category of groups
☆ 20J99: None of the above, but in this section
□ 20Kxx: Abelian groups
☆ 20K01: Finite abelian groups [For sumsets, see 11B13 and 11P70]
☆ 20K10: Torsion groups, primary groups and generalized primary groups
☆ 20K15: Torsion-free groups, finite rank
☆ 20K20: Torsion-free groups, infinite rank
☆ 20K21: Mixed groups
☆ 20K25: Direct sums, direct products, etc.
☆ 20K27: Subgroups
☆ 20K30: Automorphisms, homomorphisms, endomorphisms, etc.
☆ 20K35: Extensions
☆ 20K40: Homological and categorical methods
☆ 20K45: Topological methods [See also 22A05, 22B05]
☆ 20K99: None of the above, but in this section
□ 20Lxx: Groupoids (i.e. small categories in which all morphisms are isomorphisms) {For sets with a single binary operation, see 20N02; for topological groupoids, see 22A22, 58H05}
☆ 20L05: Groupoids (i.e. small categories in which all morphisms are isomorphisms) {For sets with a single binary operation, see 20N02; for topological groupoids, see 22A22, 58H05}
☆ 20L99: None of the above, but in this section
□ 20Mxx: Semigroups
☆ 20M05: Free semigroups, generators and relations, word problems [See also 03D40, 08A50, 20F10]
☆ 20M07: Varieties and pseudovarieties of semigroups
☆ 20M10: General structure theory
☆ 20M11: Radical theory
☆ 20M12: Ideal theory
☆ 20M13: Arithmetic theory of monoids
☆ 20M14: Commutative semigroups
☆ 20M15: Mappings of semigroups
☆ 20M17: Regular semigroups
☆ 20M18: Inverse semigroups
☆ 20M19: Orthodox semigroups
☆ 20M20: Semigroups of transformations, etc. [See also 47D03, 47H20, 54H15]
☆ 20M25: Semigroup rings, multiplicative semigroups of rings [See also 16S36, 16Y60]
☆ 20M30: Representation of semigroups; actions of semigroups on sets
☆ 20M32: Algebraic monoids
☆ 20M35: Semigroups in automata theory, linguistics, etc. [See also 03D05, 68Q70, 68T50]
☆ 20M50: Connections of semigroups with homological algebra and category theory
☆ 20M99: None of the above, but in this section
□ 20Nxx: Other generalizations of groups
☆ 20N02: Sets with a single binary operation (groupoids)
☆ 20N05: Loops, quasigroups [See also 05Bxx]
☆ 20N10: Ternary systems (heaps, semiheaps, heapoids, etc.)
☆ 20N15: $n$-ary systems $(n\ge 3)$
☆ 20N20: Hypergroups
☆ 20N25: Fuzzy groups [See also 03E72]
☆ 20N99: None of the above, but in this section
□ 20Pxx: Probabilistic methods in group theory [See also 60Bxx]
☆ 20P05: Probabilistic methods in group theory [See also 60Bxx]
☆ 20P99: None of the above, but in this section
• 22-XX: Topological groups, Lie groups {For transformation groups, see 54H15, 57Sxx, 58-XX. For abstract harmonic analysis, see 43-XX}
□ 22-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 22-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 22-02: Research exposition (monographs, survey articles)
□ 22-03: Historical (must also be assigned at least one classification number from Section 01)
□ 22-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 22-06: Proceedings, conferences, collections, etc.
□ 22Axx: Topological and differentiable algebraic systems {For topological rings and fields, see 12Jxx, 13Jxx, 16W80}
☆ 22A05: Structure of general topological groups
☆ 22A10: Analysis on general topological groups
☆ 22A15: Structure of topological semigroups
☆ 22A20: Analysis on topological semigroups
☆ 22A22: Topological groupoids (including differentiable and Lie groupoids) [See also 58H05]
☆ 22A25: Representations of general topological groups and semigroups
☆ 22A26: Topological semilattices, lattices and applications [See also 06B30, 06B35, 06F30]
☆ 22A30: Other topological algebraic systems and their representations
☆ 22A99: None of the above, but in this section
□ 22Bxx: Locally compact abelian groups (LCA groups)
☆ 22B05: General properties and structure of LCA groups
☆ 22B10: Structure of group algebras of LCA groups
☆ 22B99: None of the above, but in this section
□ 22Cxx: Compact groups
☆ 22C05: Compact groups
☆ 22C99: None of the above, but in this section
□ 22Dxx: Locally compact groups and their algebras
☆ 22D05: General properties and structure of locally compact groups
☆ 22D10: Unitary representations of locally compact groups
☆ 22D12: Other representations of locally compact groups
☆ 22D15: Group algebras of locally compact groups
☆ 22D20: Representations of group algebras
☆ 22D25: $C^*$-algebras and $W^*$-algebras in relation to group representations [See also 46Lxx]
☆ 22D30: Induced representations
☆ 22D35: Duality theorems
☆ 22D40: Ergodic theory on groups [See also 28Dxx]
☆ 22D45: Automorphism groups of locally compact groups
☆ 22D99: None of the above, but in this section
□ 22Exx: Lie groups {For the topology of Lie groups and homogeneous spaces, see 57Sxx, 57Txx; for analysis thereon, see 43A80, 43A85, 43A90}
☆ 22E05: Local Lie groups [See also 34-XX, 35-XX, 58H05]
☆ 22E10: General properties and structure of complex Lie groups [See also 32M05]
☆ 22E15: General properties and structure of real Lie groups
☆ 22E20: General properties and structure of other Lie groups
☆ 22E25: Nilpotent and solvable Lie groups
☆ 22E27: Representations of nilpotent and solvable Lie groups (special orbital integrals, non-type I representations, etc.)
☆ 22E30: Analysis on real and complex Lie groups [See also 33C80, 43-XX]
☆ 22E35: Analysis on $p$-adic Lie groups
☆ 22E40: Discrete subgroups of Lie groups [See also 20Hxx, 32Nxx]
☆ 22E41: Continuous cohomology [See also 57R32, 57Txx, 58H10]
☆ 22E43: Structure and representation of the Lorentz group
☆ 22E45: Representations of Lie and linear algebraic groups over real fields: analytic methods {For the purely algebraic theory, see 20G05}
☆ 22E46: Semisimple Lie groups and their representations
☆ 22E47: Representations of Lie and real algebraic groups: algebraic methods (Verma modules, etc.) [See also 17B10]
☆ 22E50: Representations of Lie and linear algebraic groups over local fields [See also 20G05]
☆ 22E55: Representations of Lie and linear algebraic groups over global fields and adèle rings [See also 20G05]
☆ 22E57: Geometric Langlands program: representation-theoretic aspects [See also 14D24]
☆ 22E60: Lie algebras of Lie groups {For the algebraic theory of Lie algebras, see 17Bxx}
☆ 22E65: Infinite-dimensional Lie groups and their Lie algebras: general properties [See also 17B65, 58B25, 58H05]
☆ 22E66: Analysis on and representations of infinite-dimensional Lie groups
☆ 22E67: Loop groups and related constructions, group-theoretic treatment [See also 58D05]
☆ 22E70: Applications of Lie groups to physics; explicit representations [See also 81R05, 81R10]
☆ 22E99: None of the above, but in this section
□ 22Fxx: Noncompact transformation groups
☆ 22F05: General theory of group and pseudogroup actions {For topological properties of spaces with an action, see 57S20}
☆ 22F10: Measurable group actions [See also 22D40, 28Dxx, 37Axx]
☆ 22F30: Homogeneous spaces {For general actions on manifolds or preserving geometrical structures, see 57M60, 57Sxx; for discrete subgroups of Lie groups, see especially 22E40}
☆ 22F50: Groups as automorphisms of other structures
☆ 22F99: None of the above, but in this section
• 26-XX: Real functions [See also 54C30]
□ 26-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 26-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 26-02: Research exposition (monographs, survey articles)
□ 26-03: Historical (must also be assigned at least one classification number from Section 01)
□ 26-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 26-06: Proceedings, conferences, collections, etc.
□ 26Axx: Functions of one variable
☆ 26A03: Foundations: limits and generalizations, elementary topology of the line
☆ 26A06: One-variable calculus
☆ 26A09: Elementary functions
☆ 26A12: Rate of growth of functions, orders of infinity, slowly varying functions [See also 26A48]
☆ 26A15: Continuity and related questions (modulus of continuity, semicontinuity, discontinuities, etc.) {For properties determined by Fourier coefficients, see 42A16; for those determined by approximation properties, see 41A25, 41A27}
☆ 26A16: Lipschitz (Hölder) classes
☆ 26A18: Iteration [See also 37Bxx, 37Cxx, 37Exx, 39B12, 47H10, 54H25]
☆ 26A21: Classification of real functions; Baire classification of sets and functions [See also 03E15, 28A05, 54C50, 54H05]
☆ 26A24: Differentiation (functions of one variable): general theory, generalized derivatives, mean-value theorems [See also 28A15]
☆ 26A27: Nondifferentiability (nondifferentiable functions, points of nondifferentiability), discontinuous derivatives
☆ 26A30: Singular functions, Cantor functions, functions with other special properties
☆ 26A33: Fractional derivatives and integrals
☆ 26A36: Antidifferentiation
☆ 26A39: Denjoy and Perron integrals, other special integrals
☆ 26A42: Integrals of Riemann, Stieltjes and Lebesgue type [See also 28-XX]
☆ 26A45: Functions of bounded variation, generalizations
☆ 26A46: Absolutely continuous functions
☆ 26A48: Monotonic functions, generalizations
☆ 26A51: Convexity, generalizations
☆ 26A99: None of the above, but in this section
□ 26Bxx: Functions of several variables
☆ 26B05: Continuity and differentiation questions
☆ 26B10: Implicit function theorems, Jacobians, transformations with several variables
☆ 26B12: Calculus of vector functions
☆ 26B15: Integration: length, area, volume [See also 28A75, 51M25]
☆ 26B20: Integral formulas (Stokes, Gauss, Green, etc.)
☆ 26B25: Convexity, generalizations
☆ 26B30: Absolutely continuous functions, functions of bounded variation
☆ 26B35: Special properties of functions of several variables, Hölder conditions, etc.
☆ 26B40: Representation and superposition of functions
☆ 26B99: None of the above, but in this section
□ 26Cxx: Polynomials, rational functions
☆ 26C05: Polynomials: analytic properties, etc. [See also 12Dxx, 12Exx]
☆ 26C10: Polynomials: location of zeros [See also 12D10, 30C15, 65H05]
☆ 26C15: Rational functions [See also 14Pxx]
☆ 26C99: None of the above, but in this section
□ 26Dxx: Inequalities {For maximal function inequalities, see 42B25; for functional inequalities, see 39B72; for probabilistic inequalities, see 60E15}
☆ 26D05: Inequalities for trigonometric functions and polynomials
☆ 26D07: Inequalities involving other types of functions
☆ 26D10: Inequalities involving derivatives and differential and integral operators
☆ 26D15: Inequalities for sums, series and integrals
☆ 26D20: Other analytical inequalities
☆ 26D99: None of the above, but in this section
□ 26Exx: Miscellaneous topics [See also 58Cxx]
☆ 26E05: Real-analytic functions [See also 32B05, 32C05]
☆ 26E10: $C^\infty$-functions, quasi-analytic functions [See also 58C25]
☆ 26E15: Calculus of functions on infinite-dimensional spaces [See also 46G05, 58Cxx]
☆ 26E20: Calculus of functions taking values in infinite-dimensional spaces [See also 46E40, 46G10, 58Cxx]
☆ 26E25: Set-valued functions [See also 28B20, 49J53, 54C60] {For nonsmooth analysis, see 49J52, 58Cxx, 90Cxx}
☆ 26E30: Non-Archimedean analysis [See also 12J25]
☆ 26E35: Nonstandard analysis [See also 03H05, 28E05, 54J05]
☆ 26E40: Constructive real analysis [See also 03F60]
☆ 26E50: Fuzzy real analysis [See also 03E72, 28E10]
☆ 26E60: Means [See also 47A64]
☆ 26E70: Real analysis on time scales or measure chains {For dynamic equations on time scales or measure chains see 34N05}
☆ 26E99: None of the above, but in this section
• 28-XX: Measure and integration {For analysis on manifolds, see 58-XX}
□ 28-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 28-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 28-02: Research exposition (monographs, survey articles)
□ 28-03: Historical (must also be assigned at least one classification number from Section 01)
□ 28-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 28-06: Proceedings, conferences, collections, etc.
□ 28Axx: Classical measure theory
☆ 28A05: Classes of sets (Borel fields, $\sigma$-rings, etc.), measurable sets, Suslin sets, analytic sets [See also 03E15, 26A21, 54H05]
☆ 28A10: Real- or complex-valued set functions
☆ 28A12: Contents, measures, outer measures, capacities
☆ 28A15: Abstract differentiation theory, differentiation of set functions [See also 26A24]
☆ 28A20: Measurable and nonmeasurable functions, sequences of measurable functions, modes of convergence
☆ 28A25: Integration with respect to measures and other set functions
☆ 28A33: Spaces of measures, convergence of measures [See also 46E27, 60Bxx]
☆ 28A35: Measures and integrals in product spaces
☆ 28A50: Integration and disintegration of measures
☆ 28A51: Lifting theory [See also 46G15]
☆ 28A60: Measures on Boolean rings, measure algebras [See also 54H10]
☆ 28A75: Length, area, volume, other geometric measure theory [See also 26B15, 49Q15]
☆ 28A78: Hausdorff and packing measures
☆ 28A80: Fractals [See also 37Fxx]
☆ 28A99: None of the above, but in this section
□ 28Bxx: Set functions, measures and integrals with values in abstract spaces
☆ 28B05: Vector-valued set functions, measures and integrals [See also 46G10]
☆ 28B10: Group- or semigroup-valued set functions, measures and integrals
☆ 28B15: Set functions, measures and integrals with values in ordered spaces
☆ 28B20: Set-valued set functions and measures; integration of set-valued functions; measurable selections [See also 26E25, 54C60, 54C65, 91B14]
☆ 28B99: None of the above, but in this section
□ 28Cxx: Set functions and measures on spaces with additional structure [See also 46G12, 58C35, 58D20]
☆ 28C05: Integration theory via linear functionals (Radon measures, Daniell integrals, etc.), representing set functions and measures
☆ 28C10: Set functions and measures on topological groups or semigroups, Haar measures, invariant measures [See also 22Axx, 43A05]
☆ 28C15: Set functions and measures on topological spaces (regularity of measures, etc.)
☆ 28C20: Set functions and measures and integrals in infinite-dimensional spaces (Wiener measure, Gaussian measure, etc.) [See also 46G12, 58C35, 58D20, 60B11]
☆ 28C99: None of the above, but in this section
□ 28Dxx: Measure-theoretic ergodic theory [See also 11K50, 11K55, 22D40, 37Axx, 47A35, 54H20, 60Fxx, 60G10]
☆ 28D05: Measure-preserving transformations
☆ 28D10: One-parameter continuous families of measure-preserving transformations
☆ 28D15: General groups of measure-preserving transformations
☆ 28D20: Entropy and other invariants
☆ 28D99: None of the above, but in this section
□ 28Exx: Miscellaneous topics in measure theory
☆ 28E05: Nonstandard measure theory [See also 03H05, 26E35]
☆ 28E10: Fuzzy measure theory [See also 03E72, 26E50, 94D05]
☆ 28E15: Other connections with logic and set theory
☆ 28E99: None of the above, but in this section
• 30-XX: Functions of a complex variable {For analysis on manifolds, see 58-XX}
□ 30-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 30-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 30-02: Research exposition (monographs, survey articles)
□ 30-03: Historical (must also be assigned at least one classification number from Section 01)
□ 30-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 30-06: Proceedings, conferences, collections, etc.
□ 30Axx: General properties
☆ 30A05: Monogenic properties of complex functions (including polygenic and areolar monogenic functions)
☆ 30A10: Inequalities in the complex domain
☆ 30A99: None of the above, but in this section
□ 30Bxx: Series expansions
☆ 30B10: Power series (including lacunary series)
☆ 30B20: Random power series
☆ 30B30: Boundary behavior of power series, over-convergence
☆ 30B40: Analytic continuation
☆ 30B50: Dirichlet series and other series expansions, exponential series [See also 11M41, 42-XX]
☆ 30B60: Completeness problems, closure of a system of functions
☆ 30B70: Continued fractions [See also 11A55, 40A15]
☆ 30B99: None of the above, but in this section
□ 30Cxx: Geometric function theory
☆ 30C10: Polynomials
☆ 30C15: Zeros of polynomials, rational functions, and other analytic functions (e.g. zeros of functions with bounded Dirichlet integral) {For algebraic theory, see 12D10; for real methods, see 26C10}
☆ 30C20: Conformal mappings of special domains
☆ 30C25: Covering theorems in conformal mapping theory
☆ 30C30: Numerical methods in conformal mapping theory [See also 65E05]
☆ 30C35: General theory of conformal mappings
☆ 30C40: Kernel functions and applications
☆ 30C45: Special classes of univalent and multivalent functions (starlike, convex, bounded rotation, etc.)
☆ 30C50: Coefficient problems for univalent and multivalent functions
☆ 30C55: General theory of univalent and multivalent functions
☆ 30C62: Quasiconformal mappings in the plane
☆ 30C65: Quasiconformal mappings in ${\bf R}^n$, other generalizations
☆ 30C70: Extremal problems for conformal and quasiconformal mappings, variational methods
☆ 30C75: Extremal problems for conformal and quasiconformal mappings, other methods
☆ 30C80: Maximum principle; Schwarz's lemma, Lindelöf principle, analogues and generalizations; subordination
☆ 30C85: Capacity and harmonic measure in the complex plane [See also 31A15]
☆ 30C99: None of the above, but in this section
□ 30Dxx: Entire and meromorphic functions, and related topics
☆ 30D05: Functional equations in the complex domain, iteration and composition of analytic functions [See also 34Mxx, 37Fxx, 39-XX]
☆ 30D10: Representations of entire functions by series and integrals
☆ 30D15: Special classes of entire functions and growth estimates
☆ 30D20: Entire functions, general theory
☆ 30D30: Meromorphic functions, general theory
☆ 30D35: Distribution of values, Nevanlinna theory
☆ 30D40: Cluster sets, prime ends, boundary behavior
☆ 30D45: Bloch functions, normal functions, normal families
☆ 30D60: Quasi-analytic and other classes of functions
☆ 30D99: None of the above, but in this section
□ 30Exx: Miscellaneous topics of analysis in the complex domain
☆ 30E05: Moment problems, interpolation problems
☆ 30E10: Approximation in the complex domain
☆ 30E15: Asymptotic representations in the complex domain
☆ 30E20: Integration, integrals of Cauchy type, integral representations of analytic functions [See also 45Exx]
☆ 30E25: Boundary value problems [See also 45Exx]
☆ 30E99: None of the above, but in this section
□ 30Fxx: Riemann surfaces
☆ 30F10: Compact Riemann surfaces and uniformization [See also 14H15, 32G15]
☆ 30F15: Harmonic functions on Riemann surfaces
☆ 30F20: Classification theory of Riemann surfaces
☆ 30F25: Ideal boundary theory
☆ 30F30: Differentials on Riemann surfaces
☆ 30F35: Fuchsian groups and automorphic functions [See also 11Fxx, 20H10, 22E40, 32Gxx, 32Nxx]
☆ 30F40: Kleinian groups [See also 20H10]
☆ 30F45: Conformal metrics (hyperbolic, Poincaré, distance functions)
☆ 30F50: Klein surfaces
☆ 30F60: Teichmüller theory [See also 32G15]
☆ 30F99: None of the above, but in this section
□ 30Gxx: Generalized function theory
☆ 30G06: Non-Archimedean function theory [See also 12J25]; nonstandard function theory [See also 03H05]
☆ 30G12: Finely holomorphic functions and topological function theory
☆ 30G20: Generalizations of Bers or Vekua type (pseudoanalytic, $p$-analytic, etc.)
☆ 30G25: Discrete analytic functions
☆ 30G30: Other generalizations of analytic functions (including abstract-valued functions)
☆ 30G35: Functions of hypercomplex variables and generalized variables
☆ 30G99: None of the above, but in this section
□ 30Hxx: Spaces and algebras of analytic functions
☆ 30H05: Bounded analytic functions
☆ 30H10: Hardy spaces
☆ 30H15: Nevanlinna class and Smirnov class
☆ 30H20: Bergman spaces, Fock spaces
☆ 30H25: Besov spaces and $Q_p$-spaces
☆ 30H30: Bloch spaces
☆ 30H35: BMO-spaces
☆ 30H50: Algebras of analytic functions
☆ 30H80: Corona theorems
☆ 30H99: None of the above, but in this section
□ 30Jxx: Function theory on the disc
☆ 30J05: Inner functions
☆ 30J10: Blaschke products
☆ 30J15: Singular inner functions
☆ 30J99: None of the above, but in this section
□ 30Kxx: Universal holomorphic functions
☆ 30K05: Universal Taylor series
☆ 30K10: Universal Dirichlet series
☆ 30K15: Bounded universal functions
☆ 30K20: Compositional universality
☆ 30K99: None of the above, but in this section
□ 30Lxx: Analysis on metric spaces
☆ 30L05: Geometric embeddings of metric spaces
☆ 30L10: Quasiconformal mappings in metric spaces
☆ 30L99: None of the above, but in this section
• 31-XX: Potential theory {For probabilistic potential theory, see 60J45}
□ 31-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 31-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 31-02: Research exposition (monographs, survey articles)
□ 31-03: Historical (must also be assigned at least one classification number from Section 01)
□ 31-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 31-06: Proceedings, conferences, collections, etc.
□ 31Axx: Two-dimensional theory
☆ 31A05: Harmonic, subharmonic, superharmonic functions
☆ 31A10: Integral representations, integral operators, integral equations methods
☆ 31A15: Potentials and capacity, harmonic measure, extremal length [See also 30C85]
☆ 31A20: Boundary behavior (theorems of Fatou type, etc.)
☆ 31A25: Boundary value and inverse problems
☆ 31A30: Biharmonic, polyharmonic functions and equations, Poisson's equation
☆ 31A35: Connections with differential equations
☆ 31A99: None of the above, but in this section
□ 31Bxx: Higher-dimensional theory
☆ 31B05: Harmonic, subharmonic, superharmonic functions
☆ 31B10: Integral representations, integral operators, integral equations methods
☆ 31B15: Potentials and capacities, extremal length
☆ 31B20: Boundary value and inverse problems
☆ 31B25: Boundary behavior
☆ 31B30: Biharmonic and polyharmonic equations and functions
☆ 31B35: Connections with differential equations
☆ 31B99: None of the above, but in this section
□ 31Cxx: Other generalizations
☆ 31C05: Harmonic, subharmonic, superharmonic functions
☆ 31C10: Pluriharmonic and plurisubharmonic functions [See also 32U05]
☆ 31C12: Potential theory on Riemannian manifolds [See also 53C20; for Hodge theory, see 58A14]
☆ 31C15: Potentials and capacities
☆ 31C20: Discrete potential theory and numerical methods
☆ 31C25: Dirichlet spaces
☆ 31C35: Martin boundary theory [See also 60J50]
☆ 31C40: Fine potential theory
☆ 31C45: Other generalizations (nonlinear potential theory, etc.)
☆ 31C99: None of the above, but in this section
□ 31Dxx: Axiomatic potential theory
☆ 31D05: Axiomatic potential theory
☆ 31D99: None of the above, but in this section
□ 31Exx: Potential theory on metric spaces
☆ 31E05: Potential theory on metric spaces
☆ 31E99: None of the above, but in this section
• 32-XX: Several complex variables and analytic spaces {For infinite-dimensional holomorphy, see 46G20, 58B12}
□ 32-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 32-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 32-02: Research exposition (monographs, survey articles)
□ 32-03: Historical (must also be assigned at least one classification number from Section 01)
□ 32-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 32-06: Proceedings, conferences, collections, etc.
□ 32Axx: Holomorphic functions of several complex variables
☆ 32A05: Power series, series of functions
☆ 32A07: Special domains (Reinhardt, Hartogs, circular, tube)
☆ 32A10: Holomorphic functions
☆ 32A12: Multifunctions
☆ 32A15: Entire functions
☆ 32A17: Special families of functions
☆ 32A18: Bloch functions, normal functions
☆ 32A19: Normal families of functions, mappings
☆ 32A20: Meromorphic functions
☆ 32A22: Nevanlinna theory (local); growth estimates; other inequalities {For geometric theory, see 32H25, 32H30}
☆ 32A25: Integral representations; canonical kernels (Szegő, Bergman, etc.)
☆ 32A26: Integral representations, constructed kernels (e.g. Cauchy, Fantappiè-type kernels)
☆ 32A27: Local theory of residues [See also 32C30]
☆ 32A30: Other generalizations of function theory of one complex variable (should also be assigned at least one classification number from Section 30) {For functions of several hypercomplex variables, see 30G35}
☆ 32A35: $H^p$-spaces, Nevanlinna spaces [See also 32M15, 42B30, 43A85, 46J15]
☆ 32A36: Bergman spaces
☆ 32A37: Other spaces of holomorphic functions (e.g. bounded mean oscillation (BMOA), vanishing mean oscillation (VMOA)) [See also 46Exx]
☆ 32A38: Algebras of holomorphic functions [See also 30H05, 46J10, 46J15]
☆ 32A40: Boundary behavior of holomorphic functions
☆ 32A45: Hyperfunctions [See also 46F15]
☆ 32A50: Harmonic analysis of several complex variables [See mainly 43-XX]
☆ 32A55: Singular integrals
☆ 32A60: Zero sets of holomorphic functions
☆ 32A65: Banach algebra techniques [See mainly 46Jxx]
☆ 32A70: Functional analysis techniques [See mainly 46Exx]
☆ 32A99: None of the above, but in this section
□ 32Bxx: Local analytic geometry [See also 13-XX and 14-XX]
☆ 32B05: Analytic algebras and generalizations, preparation theorems
☆ 32B10: Germs of analytic sets, local parametrization
☆ 32B15: Analytic subsets of affine space
☆ 32B20: Semi-analytic sets and subanalytic sets [See also 14P15]
☆ 32B25: Triangulation and related questions
☆ 32B99: None of the above, but in this section
□ 32Cxx: Analytic spaces
☆ 32C05: Real-analytic manifolds, real-analytic spaces [See also 14Pxx, 58A07]
☆ 32C07: Real-analytic sets, complex Nash functions [See also 14P15, 14P20]
☆ 32C09: Embedding of real analytic manifolds
☆ 32C11: Complex supergeometry [See also 14A22, 14M30, 58A50]
☆ 32C15: Complex spaces
☆ 32C18: Topology of analytic spaces
☆ 32C20: Normal analytic spaces
☆ 32C22: Embedding of analytic spaces
☆ 32C25: Analytic subsets and submanifolds
☆ 32C30: Integration on analytic sets and spaces, currents {For local theory, see 32A25 or 32A27}
☆ 32C35: Analytic sheaves and cohomology groups [See also 14Fxx, 18F20, 55N30]
☆ 32C36: Local cohomology of analytic spaces
☆ 32C37: Duality theorems
☆ 32C38: Sheaves of differential operators and their modules, $D$-modules [See also 14F10, 16S32, 35A27, 58J15]
☆ 32C55: The Levi problem in complex spaces; generalizations
☆ 32C81: Applications to physics
☆ 32C99: None of the above, but in this section
□ 32Dxx: Analytic continuation
☆ 32D05: Domains of holomorphy
☆ 32D10: Envelopes of holomorphy
☆ 32D15: Continuation of analytic objects
☆ 32D20: Removable singularities
☆ 32D26: Riemann domains
☆ 32D99: None of the above, but in this section
□ 32Exx: Holomorphic convexity
☆ 32E05: Holomorphically convex complex spaces, reduction theory
☆ 32E10: Stein spaces, Stein manifolds
☆ 32E20: Polynomial convexity
☆ 32E30: Holomorphic and polynomial approximation, Runge pairs, interpolation
☆ 32E35: Global boundary behavior of holomorphic functions
☆ 32E40: The Levi problem
☆ 32E99: None of the above, but in this section
□ 32Fxx: Geometric convexity
☆ 32F10: $q$-convexity, $q$-concavity
☆ 32F17: Other notions of convexity
☆ 32F18: Finite-type conditions
☆ 32F27: Topological consequences of geometric convexity
☆ 32F32: Analytical consequences of geometric convexity (vanishing theorems, etc.)
☆ 32F45: Invariant metrics and pseudodistances
☆ 32F99: None of the above, but in this section
□ 32Gxx: Deformations of analytic structures
☆ 32G05: Deformations of complex structures [See also 13D10, 16S80, 58H10, 58H15]
☆ 32G07: Deformations of special (e.g. CR) structures
☆ 32G08: Deformations of fiber bundles
☆ 32G10: Deformations of submanifolds and subspaces
☆ 32G13: Analytic moduli problems {For algebraic moduli problems, see 14D20, 14D22, 14H10, 14J10} [See also 14H15, 14J15]
☆ 32G15: Moduli of Riemann surfaces, Teichmüller theory [See also 14H15, 30Fxx]
☆ 32G20: Period matrices, variation of Hodge structure; degenerations [See also 14D05, 14D07, 14K30]
☆ 32G34: Moduli and deformations for ordinary differential equations (e.g. Knizhnik-Zamolodchikov equation) [See also 34Mxx]
☆ 32G81: Applications to physics
☆ 32G99: None of the above, but in this section
□ 32Hxx: Holomorphic mappings and correspondences
☆ 32H02: Holomorphic mappings, (holomorphic) embeddings and related questions
☆ 32H04: Meromorphic mappings
☆ 32H12: Boundary uniqueness of mappings
☆ 32H25: Picard-type theorems and generalizations {For function-theoretic properties, see 32A22}
☆ 32H30: Value distribution theory in higher dimensions {For function-theoretic properties, see 32A22}
☆ 32H35: Proper mappings, finiteness theorems
☆ 32H40: Boundary regularity of mappings
☆ 32H50: Iteration problems
☆ 32H99: None of the above, but in this section
□ 32Jxx: Compact analytic spaces {For Riemann surfaces, see 14Hxx, 30Fxx; for algebraic theory, see 14Jxx}
☆ 32J05: Compactification of analytic spaces
☆ 32J10: Algebraic dependence theorems
☆ 32J15: Compact surfaces
☆ 32J17: Compact $3$-folds
☆ 32J18: Compact $n$-folds
☆ 32J25: Transcendental methods of algebraic geometry [See also 14C30]
☆ 32J27: Compact Kähler manifolds: generalizations, classification
☆ 32J81: Applications to physics
☆ 32J99: None of the above, but in this section
□ 32Kxx: Generalizations of analytic spaces (should also be assigned at least one other classification number from Section 32 describing the type of problem)
☆ 32K05: Banach analytic spaces [See also 58Bxx]
☆ 32K07: Formal and graded complex spaces [See also 58C50]
☆ 32K15: Differentiable functions on analytic spaces, differentiable spaces [See also 58C25]
☆ 32K99: None of the above, but in this section
□ 32Lxx: Holomorphic fiber spaces [See also 55Rxx]
☆ 32L05: Holomorphic bundles and generalizations
☆ 32L10: Sheaves and cohomology of sections of holomorphic vector bundles, general results [See also 14F05, 18F20, 55N30]
☆ 32L15: Bundle convexity [See also 32F10]
☆ 32L20: Vanishing theorems
☆ 32L25: Twistor theory, double fibrations [See also 53C28]
☆ 32L81: Applications to physics
☆ 32L99: None of the above, but in this section
□ 32Mxx: Complex spaces with a group of automorphisms
☆ 32M05: Complex Lie groups, automorphism groups acting on complex spaces [See also 22E10]
☆ 32M10: Homogeneous complex manifolds [See also 14M17, 57T15]
☆ 32M12: Almost homogeneous manifolds and spaces [See also 14M17]
☆ 32M15: Hermitian symmetric spaces, bounded symmetric domains, Jordan algebras [See also 22E10, 22E40, 53C35, 57T15]
☆ 32M17: Automorphism groups of ${\bf C}^n$ and affine manifolds
☆ 32M25: Complex vector fields
☆ 32M99: None of the above, but in this section
□ 32Nxx: Automorphic functions [See also 11Fxx, 20H10, 22E40, 30F35]
☆ 32N05: General theory of automorphic functions of several complex variables
☆ 32N10: Automorphic forms
☆ 32N15: Automorphic functions in symmetric domains
☆ 32N99: None of the above, but in this section
□ 32Pxx: Non-Archimedean analysis (should also be assigned at least one other classification number from Section 32 describing the type of problem)
☆ 32P05: Non-Archimedean analysis (should also be assigned at least one other classification number from Section 32 describing the type of problem)
☆ 32P99: None of the above, but in this section
□ 32Qxx: Complex manifolds
☆ 32Q05: Negative curvature manifolds
☆ 32Q10: Positive curvature manifolds
☆ 32Q15: Kähler manifolds
☆ 32Q20: Kähler-Einstein manifolds [See also 53Cxx]
☆ 32Q25: Calabi-Yau theory [See also 14J30]
☆ 32Q26: Notions of stability
☆ 32Q28: Stein manifolds
☆ 32Q30: Uniformization
☆ 32Q35: Complex manifolds as subdomains of Euclidean space
☆ 32Q40: Embedding theorems
☆ 32Q45: Hyperbolic and Kobayashi hyperbolic manifolds
☆ 32Q55: Topological aspects of complex manifolds
☆ 32Q57: Classification theorems
☆ 32Q60: Almost complex manifolds
☆ 32Q65: Pseudoholomorphic curves
☆ 32Q99: None of the above, but in this section
□ 32Sxx: Singularities [See also 58Kxx]
☆ 32S05: Local singularities [See also 14J17]
☆ 32S10: Invariants of analytic local rings
☆ 32S15: Equisingularity (topological and analytic) [See also 14E15]
☆ 32S20: Global theory of singularities; cohomological properties [See also 14E15]
☆ 32S22: Relations with arrangements of hyperplanes [See also 52C35]
☆ 32S25: Surface and hypersurface singularities [See also 14J17]
☆ 32S30: Deformations of singularities; vanishing cycles [See also 14B07]
☆ 32S35: Mixed Hodge theory of singular varieties [See also 14C30, 14D07]
☆ 32S40: Monodromy; relations with differential equations and $D$-modules
☆ 32S45: Modifications; resolution of singularities [See also 14E15]
☆ 32S50: Topological aspects: Lefschetz theorems, topological classification, invariants
☆ 32S55: Milnor fibration; relations with knot theory [See also 57M25, 57Q45]
☆ 32S60: Stratifications; constructible sheaves; intersection cohomology [See also 58Kxx]
☆ 32S65: Singularities of holomorphic vector fields and foliations
☆ 32S70: Other operations on singularities
☆ 32S99: None of the above, but in this section
□ 32Txx: Pseudoconvex domains
☆ 32T05: Domains of holomorphy
☆ 32T15: Strongly pseudoconvex domains
☆ 32T20: Worm domains
☆ 32T25: Finite type domains
☆ 32T27: Geometric and analytic invariants on weakly pseudoconvex boundaries
☆ 32T35: Exhaustion functions
☆ 32T40: Peak functions
☆ 32T99: None of the above, but in this section
□ 32Uxx: Pluripotential theory
☆ 32U05: Plurisubharmonic functions and generalizations [See also 31C10]
☆ 32U10: Plurisubharmonic exhaustion functions
☆ 32U15: General pluripotential theory
☆ 32U20: Capacity theory and generalizations
☆ 32U25: Lelong numbers
☆ 32U30: Removable sets
☆ 32U35: Pluricomplex Green functions
☆ 32U40: Currents
☆ 32U99: None of the above, but in this section
□ 32Vxx: CR manifolds
☆ 32V05: CR structures, CR operators, and generalizations
☆ 32V10: CR functions
☆ 32V15: CR manifolds as boundaries of domains
☆ 32V20: Analysis on CR manifolds
☆ 32V25: Extension of functions and other analytic objects from CR manifolds
☆ 32V30: Embeddings of CR manifolds
☆ 32V35: Finite type conditions on CR manifolds
☆ 32V40: Real submanifolds in complex manifolds
☆ 32V99: None of the above, but in this section
□ 32Wxx: Differential operators in several variables
☆ 32W05: $\overline\partial$ and $\overline\partial$-Neumann operators
☆ 32W10: $\overline\partial_b$ and $\overline\partial_b$-Neumann operators
☆ 32W20: Complex Monge-Ampère operators
☆ 32W25: Pseudodifferential operators in several complex variables
☆ 32W30: Heat kernels in several complex variables
☆ 32W50: Other partial differential equations of complex analysis
☆ 32W99: None of the above, but in this section
• 33-XX: Special functions (33-XX deals with the properties of functions as functions) {For orthogonal functions, see 42Cxx; for aspects of combinatorics see 05Axx; for number-theoretic aspects see 11-XX; for representation theory see 22Exx}
□ 33-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 33-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 33-02: Research exposition (monographs, survey articles)
□ 33-03: Historical (must also be assigned at least one classification number from Section 01)
□ 33-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 33-06: Proceedings, conferences, collections, etc.
□ 33Bxx: Elementary classical functions
☆ 33B10: Exponential and trigonometric functions
☆ 33B15: Gamma, beta and polygamma functions
☆ 33B20: Incomplete beta and gamma functions (error functions, probability integral, Fresnel integrals)
☆ 33B30: Higher logarithm functions
☆ 33B99: None of the above, but in this section
□ 33Cxx: Hypergeometric functions
☆ 33C05: Classical hypergeometric functions, ${}_2F_1$
☆ 33C10: Bessel and Airy functions, cylinder functions, ${}_0F_1$
☆ 33C15: Confluent hypergeometric functions, Whittaker functions, ${}_1F_1$
☆ 33C20: Generalized hypergeometric series, ${}_pF_q$
☆ 33C45: Orthogonal polynomials and functions of hypergeometric type (Jacobi, Laguerre, Hermite, Askey scheme, etc.) [See also 42C05 for general orthogonal polynomials and functions]
☆ 33C47: Other special orthogonal polynomials and functions
☆ 33C50: Orthogonal polynomials and functions in several variables expressible in terms of special functions in one variable
☆ 33C52: Orthogonal polynomials and functions associated with root systems
☆ 33C55: Spherical harmonics
☆ 33C60: Hypergeometric integrals and functions defined by them ($E$, $G$, $H$ and $I$ functions)
☆ 33C65: Appell, Horn and Lauricella functions
☆ 33C67: Hypergeometric functions associated with root systems
☆ 33C70: Other hypergeometric functions and integrals in several variables
☆ 33C75: Elliptic integrals as hypergeometric functions
☆ 33C80: Connections with groups and algebras, and related topics
☆ 33C90: Applications
☆ 33C99: None of the above, but in this section
□ 33Dxx: Basic hypergeometric functions
☆ 33D05: $q$-gamma functions, $q$-beta functions and integrals
☆ 33D15: Basic hypergeometric functions in one variable, ${}_r\phi_s$
☆ 33D45: Basic orthogonal polynomials and functions (Askey-Wilson polynomials, etc.)
☆ 33D50: Orthogonal polynomials and functions in several variables expressible in terms of basic hypergeometric functions in one variable
☆ 33D52: Basic orthogonal polynomials and functions associated with root systems (Macdonald polynomials, etc.)
☆ 33D60: Basic hypergeometric integrals and functions defined by them
☆ 33D65: Bibasic functions and multiple bases
☆ 33D67: Basic hypergeometric functions associated with root systems
☆ 33D70: Other basic hypergeometric functions and integrals in several variables
☆ 33D80: Connections with quantum groups, Chevalley groups, $p$-adic groups, Hecke algebras, and related topics
☆ 33D90: Applications
☆ 33D99: None of the above, but in this section
□ 33Exx: Other special functions
☆ 33E05: Elliptic functions and integrals
☆ 33E10: Lamé, Mathieu, and spheroidal wave functions
☆ 33E12: Mittag-Leffler functions and generalizations
☆ 33E15: Other wave functions
☆ 33E17: Painlevé-type functions
☆ 33E20: Other functions defined by series and integrals
☆ 33E30: Other functions coming from differential, difference and integral equations
☆ 33E50: Special functions in characteristic $p$ (gamma functions, etc.)
☆ 33E99: None of the above, but in this section
□ 33Fxx: Computational aspects
☆ 33F05: Numerical approximation and evaluation [See also 65D20]
☆ 33F10: Symbolic computation (Gosper and Zeilberger algorithms, etc.) [See also 68W30]
☆ 33F99: None of the above, but in this section
• 34-XX: Ordinary differential equations
□ 34-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 34-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 34-02: Research exposition (monographs, survey articles)
□ 34-03: Historical (must also be assigned at least one classification number from Section 01)
□ 34-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 34-06: Proceedings, conferences, collections, etc.
□ 34Axx: General theory
☆ 34A05: Explicit solutions and reductions
☆ 34A07: Fuzzy differential equations
☆ 34A08: Fractional differential equations
☆ 34A09: Implicit equations, differential-algebraic equations [See also 65L80]
☆ 34A12: Initial value problems, existence, uniqueness, continuous dependence and continuation of solutions
☆ 34A25: Analytical theory: series, transformations, transforms, operational calculus, etc. [See also 44-XX]
☆ 34A26: Geometric methods in differential equations
☆ 34A30: Linear equations and systems, general
☆ 34A33: Lattice differential equations
☆ 34A34: Nonlinear equations and systems, general
☆ 34A35: Differential equations of infinite order
☆ 34A36: Discontinuous equations
☆ 34A37: Differential equations with impulses
☆ 34A38: Hybrid systems
☆ 34A40: Differential inequalities [See also 26D20]
☆ 34A45: Theoretical approximation of solutions {For numerical analysis, see 65Lxx}
☆ 34A55: Inverse problems
☆ 34A60: Differential inclusions [See also 49J21, 49K21]
☆ 34A99: None of the above, but in this section
□ 34Bxx: Boundary value problems {For ordinary differential operators, see 34Lxx}
☆ 34B05: Linear boundary value problems
☆ 34B07: Linear boundary value problems with nonlinear dependence on the spectral parameter
☆ 34B08: Parameter dependent boundary value problems
☆ 34B09: Boundary eigenvalue problems
☆ 34B10: Nonlocal and multipoint boundary value problems
☆ 34B15: Nonlinear boundary value problems
☆ 34B16: Singular nonlinear boundary value problems
☆ 34B18: Positive solutions of nonlinear boundary value problems
☆ 34B20: Weyl theory and its generalizations
☆ 34B24: Sturm-Liouville theory [See also 34Lxx]
☆ 34B27: Green functions
☆ 34B30: Special equations (Mathieu, Hill, Bessel, etc.)
☆ 34B37: Boundary value problems with impulses
☆ 34B40: Boundary value problems on infinite intervals
☆ 34B45: Boundary value problems on graphs and networks
☆ 34B60: Applications
☆ 34B99: None of the above, but in this section
□ 34Cxx: Qualitative theory [See also 37-XX]
☆ 34C05: Location of integral curves, singular points, limit cycles
☆ 34C07: Theory of limit cycles of polynomial and analytic vector fields (existence, uniqueness, bounds, Hilbert's 16th problem and ramifications)
☆ 34C08: Connections with real algebraic geometry (fewnomials, desingularization, zeros of Abelian integrals, etc.)
☆ 34C10: Oscillation theory, zeros, disconjugacy and comparison theory
☆ 34C11: Growth, boundedness
☆ 34C12: Monotone systems
☆ 34C14: Symmetries, invariants
☆ 34C15: Nonlinear oscillations, coupled oscillators
☆ 34C20: Transformation and reduction of equations and systems, normal forms
☆ 34C23: Bifurcation [See also 37Gxx]
☆ 34C25: Periodic solutions
☆ 34C26: Relaxation oscillations
☆ 34C27: Almost and pseudo-almost periodic solutions
☆ 34C28: Complex behavior, chaotic systems [See also 37Dxx]
☆ 34C29: Averaging method
☆ 34C37: Homoclinic and heteroclinic solutions
☆ 34C40: Equations and systems on manifolds
☆ 34C41: Equivalence, asymptotic equivalence
☆ 34C45: Invariant manifolds
☆ 34C46: Multifrequency systems
☆ 34C55: Hysteresis
☆ 34C60: Qualitative investigation and simulation of models
☆ 34C99: None of the above, but in this section
□ 34Dxx: Stability theory [See also 37C75, 93Dxx]
☆ 34D05: Asymptotic properties
☆ 34D06: Synchronization
☆ 34D08: Characteristic and Lyapunov exponents
☆ 34D09: Dichotomy, trichotomy
☆ 34D10: Perturbations
☆ 34D15: Singular perturbations
☆ 34D20: Stability
☆ 34D23: Global stability
☆ 34D30: Structural stability and analogous concepts [See also 37C20]
☆ 34D35: Stability of manifolds of solutions
☆ 34D45: Attractors [See also 37C70, 37D45]
☆ 34D99: None of the above, but in this section
□ 34Exx: Asymptotic theory
☆ 34E05: Asymptotic expansions
☆ 34E10: Perturbations, asymptotics
☆ 34E13: Multiple scale methods
☆ 34E15: Singular perturbations, general theory
☆ 34E17: Canard solutions
☆ 34E18: Methods of nonstandard analysis
☆ 34E20: Singular perturbations, turning point theory, WKB methods
☆ 34E99: None of the above, but in this section
□ 34Fxx: Equations and systems with randomness [See also 34K50, 60H10, 93E03]
☆ 34F05: Equations and systems with randomness [See also 34K50, 60H10, 93E03]
☆ 34F10: Bifurcation
☆ 34F15: Resonance phenomena
☆ 34F99: None of the above, but in this section
□ 34Gxx: Differential equations in abstract spaces [See also 34Lxx, 37Kxx, 47Dxx, 47Hxx, 47Jxx, 58D25]
☆ 34G10: Linear equations [See also 47D06, 47D09]
☆ 34G20: Nonlinear equations [See also 47Hxx, 47Jxx]
☆ 34G25: Evolution inclusions
☆ 34G99: None of the above, but in this section
□ 34Hxx: Control problems [See also 49J15, 49K15, 93C15]
☆ 34H05: Control problems [See also 49J15, 49K15, 93C15]
☆ 34H10: Chaos control
☆ 34H15: Stabilization
☆ 34H20: Bifurcation control
☆ 34H99: None of the above, but in this section
□ 34Kxx: Functional-differential and differential-difference equations [See also 37-XX]
☆ 34K05: General theory
☆ 34K06: Linear functional-differential equations
☆ 34K07: Theoretical approximation of solutions
☆ 34K08: Spectral theory of functional-differential operators
☆ 34K09: Functional-differential inclusions
☆ 34K10: Boundary value problems
☆ 34K11: Oscillation theory
☆ 34K12: Growth, boundedness, comparison of solutions
☆ 34K13: Periodic solutions
☆ 34K14: Almost and pseudo-periodic solutions
☆ 34K17: Transformation and reduction of equations and systems, normal forms
☆ 34K18: Bifurcation theory
☆ 34K19: Invariant manifolds
☆ 34K20: Stability theory
☆ 34K21: Stationary solutions
☆ 34K23: Complex (chaotic) behavior of solutions
☆ 34K25: Asymptotic theory
☆ 34K26: Singular perturbations
☆ 34K27: Perturbations
☆ 34K28: Numerical approximation of solutions
☆ 34K29: Inverse problems
☆ 34K30: Equations in abstract spaces [See also 34Gxx, 35R09, 35R10, 47Jxx]
☆ 34K31: Lattice functional-differential equations
☆ 34K32: Implicit equations
☆ 34K33: Averaging
☆ 34K34: Hybrid systems
☆ 34K35: Control problems [See also 49J21, 49K21, 93C23]
☆ 34K36: Fuzzy functional-differential equations
☆ 34K37: Functional-differential equations with fractional derivatives
☆ 34K38: Functional-differential inequalities
☆ 34K40: Neutral equations
☆ 34K45: Equations with impulses
☆ 34K50: Stochastic functional-differential equations [See also 60Hxx]
☆ 34K60: Qualitative investigation and simulation of models
☆ 34K99: None of the above, but in this section
□ 34Lxx: Ordinary differential operators [See also 47E05]
☆ 34L05: General spectral theory
☆ 34L10: Eigenfunctions, eigenfunction expansions, completeness of eigenfunctions
☆ 34L15: Eigenvalues, estimation of eigenvalues, upper and lower bounds
☆ 34L16: Numerical approximation of eigenvalues and of other parts of the spectrum
☆ 34L20: Asymptotic distribution of eigenvalues, asymptotic theory of eigenfunctions
☆ 34L25: Scattering theory, inverse scattering
☆ 34L30: Nonlinear ordinary differential operators
☆ 34L40: Particular operators (Dirac, one-dimensional Schrödinger, etc.)
☆ 34L99: None of the above, but in this section
□ 34Mxx: Differential equations in the complex domain [See also 30Dxx, 32G34]
☆ 34M03: Linear equations and systems
☆ 34M05: Entire and meromorphic solutions
☆ 34M10: Oscillation, growth of solutions
☆ 34M15: Algebraic aspects (differential-algebraic, hypertranscendence, group-theoretical)
☆ 34M25: Formal solutions, transform techniques
☆ 34M30: Asymptotics, summation methods
☆ 34M35: Singularities, monodromy, local behavior of solutions, normal forms
☆ 34M40: Stokes phenomena and connection problems (linear and nonlinear)
☆ 34M45: Differential equations on complex manifolds
☆ 34M50: Inverse problems (Riemann-Hilbert, inverse differential Galois, etc.)
☆ 34M55: Painlevé and other special equations; classification, hierarchies
☆ 34M56: Isomonodromic deformations
☆ 34M60: Singular perturbation problems in the complex domain (complex WKB, turning points, steepest descent) [See also 34E20]
☆ 34M99: None of the above, but in this section
□ 34Nxx: Dynamic equations on time scales or measure chains {For real analysis on time scales see 26E70}
☆ 34N05: Dynamic equations on time scales or measure chains {For real analysis on time scales or measure chains, see 26E70}
☆ 34N99: None of the above, but in this section
• 35-XX: Partial differential equations
□ 35-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 35-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 35-02: Research exposition (monographs, survey articles)
□ 35-03: Historical (must also be assigned at least one classification number from Section 01)
□ 35-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 35-06: Proceedings, conferences, collections, etc.
□ 35Axx: General topics
☆ 35A01: Existence problems: global existence, local existence, non-existence
☆ 35A02: Uniqueness problems: global uniqueness, local uniqueness, non-uniqueness
☆ 35A08: Fundamental solutions
☆ 35A09: Classical solutions
☆ 35A10: Cauchy-Kovalevskaya theorems
☆ 35A15: Variational methods
☆ 35A16: Topological and monotonicity methods
☆ 35A17: Parametrices
☆ 35A18: Wave front sets
☆ 35A20: Analytic methods, singularities
☆ 35A21: Propagation of singularities
☆ 35A22: Transform methods (e.g. integral transforms)
☆ 35A23: Inequalities involving derivatives and differential and integral operators, inequalities for integrals
☆ 35A24: Methods of ordinary differential equations
☆ 35A25: Other special methods
☆ 35A27: Microlocal methods; methods of sheaf theory and homological algebra in PDE [See also 32C38, 58J15]
☆ 35A30: Geometric theory, characteristics, transformations [See also 58J70, 58J72]
☆ 35A35: Theoretical approximation to solutions {For numerical analysis, see 65Mxx, 65Nxx}
☆ 35A99: None of the above, but in this section
□ 35Bxx: Qualitative properties of solutions
☆ 35B05: Oscillation, zeros of solutions, mean value theorems, etc.
☆ 35B06: Symmetries, invariants, etc.
☆ 35B07: Axially symmetric solutions
☆ 35B08: Entire solutions
☆ 35B09: Positive solutions
☆ 35B10: Periodic solutions
☆ 35B15: Almost and pseudo-almost periodic solutions
☆ 35B20: Perturbations
☆ 35B25: Singular perturbations
☆ 35B27: Homogenization; equations in media with periodic structure [See also 74Qxx, 76M50]
☆ 35B30: Dependence of solutions on initial and boundary data, parameters [See also 37Cxx]
☆ 35B32: Bifurcation [See also 37Gxx, 37K50]
☆ 35B33: Critical exponents
☆ 35B34: Resonances
☆ 35B35: Stability
☆ 35B36: Pattern formation
☆ 35B38: Critical points
☆ 35B40: Asymptotic behavior of solutions
☆ 35B41: Attractors
☆ 35B42: Inertial manifolds
☆ 35B44: Blow-up
☆ 35B45: A priori estimates
☆ 35B50: Maximum principles
☆ 35B51: Comparison principles
☆ 35B53: Liouville theorems, Phragmén-Lindelöf theorems
☆ 35B60: Continuation and prolongation of solutions [See also 58A15, 58A17, 58Hxx]
☆ 35B65: Smoothness and regularity of solutions
☆ 35B99: None of the above, but in this section
□ 35Cxx: Representations of solutions
☆ 35C05: Solutions in closed form
☆ 35C06: Self-similar solutions
☆ 35C07: Traveling wave solutions
☆ 35C08: Soliton solutions
☆ 35C09: Trigonometric solutions
☆ 35C10: Series solutions
☆ 35C11: Polynomial solutions
☆ 35C15: Integral representations of solutions
☆ 35C20: Asymptotic expansions
☆ 35C99: None of the above, but in this section
□ 35Dxx: Generalized solutions
☆ 35D30: Weak solutions
☆ 35D35: Strong solutions
☆ 35D40: Viscosity solutions
☆ 35D99: None of the above, but in this section
□ 35Exx: Equations and systems with constant coefficients [See also 35N05]
☆ 35E05: Fundamental solutions
☆ 35E10: Convexity properties
☆ 35E15: Initial value problems
☆ 35E20: General theory
☆ 35E99: None of the above, but in this section
□ 35Fxx: General first-order equations and systems
☆ 35F05: Linear first-order equations
☆ 35F10: Initial value problems for linear first-order equations
☆ 35F15: Boundary value problems for linear first-order equations
☆ 35F16: Initial-boundary value problems for linear first-order equations
☆ 35F20: Nonlinear first-order equations
☆ 35F21: Hamilton-Jacobi equations
☆ 35F25: Initial value problems for nonlinear first-order equations
☆ 35F30: Boundary value problems for nonlinear first-order equations
☆ 35F31: Initial-boundary value problems for nonlinear first-order equations
☆ 35F35: Linear first-order systems
☆ 35F40: Initial value problems for linear first-order systems
☆ 35F45: Boundary value problems for linear first-order systems
☆ 35F46: Initial-boundary value problems for linear first-order systems
☆ 35F50: Nonlinear first-order systems
☆ 35F55: Initial value problems for nonlinear first-order systems
☆ 35F60: Boundary value problems for nonlinear first-order systems
☆ 35F61: Initial-boundary value problems for nonlinear first-order systems
☆ 35F99: None of the above, but in this section
□ 35Gxx: General higher-order equations and systems
☆ 35G05: Linear higher-order equations
☆ 35G10: Initial value problems for linear higher-order equations
☆ 35G15: Boundary value problems for linear higher-order equations
☆ 35G16: Initial-boundary value problems for linear higher-order equations
☆ 35G20: Nonlinear higher-order equations
☆ 35G25: Initial value problems for nonlinear higher-order equations
☆ 35G30: Boundary value problems for nonlinear higher-order equations
☆ 35G31: Initial-boundary value problems for nonlinear higher-order equations
☆ 35G35: Linear higher-order systems
☆ 35G40: Initial value problems for linear higher-order systems
☆ 35G45: Boundary value problems for linear higher-order systems
☆ 35G46: Initial-boundary value problems for linear higher-order systems
☆ 35G50: Nonlinear higher-order systems
☆ 35G55: Initial value problems for nonlinear higher-order systems
☆ 35G60: Boundary value problems for nonlinear higher-order systems
☆ 35G61: Initial-boundary value problems for nonlinear higher-order systems
☆ 35G99: None of the above, but in this section
□ 35Hxx: Close-to-elliptic equations and systems
☆ 35H10: Hypoelliptic equations
☆ 35H20: Subelliptic equations
☆ 35H30: Quasi-elliptic equations
☆ 35H99: None of the above, but in this section
□ 35Jxx: Elliptic equations and systems [See also 58J10, 58J20]
☆ 35J05: Laplacian operator, reduced wave equation (Helmholtz equation), Poisson equation [See also 31Axx, 31Bxx]
☆ 35J08: Green's functions
☆ 35J10: Schrödinger operator [See also 35Pxx]
☆ 35J15: Second-order elliptic equations
☆ 35J20: Variational methods for second-order elliptic equations
☆ 35J25: Boundary value problems for second-order elliptic equations
☆ 35J30: Higher-order elliptic equations [See also 31A30, 31B30]
☆ 35J35: Variational methods for higher-order elliptic equations
☆ 35J40: Boundary value problems for higher-order elliptic equations
☆ 35J46: First-order elliptic systems
☆ 35J47: Second-order elliptic systems
☆ 35J48: Higher-order elliptic systems
☆ 35J50: Variational methods for elliptic systems
☆ 35J56: Boundary value problems for first-order elliptic systems
☆ 35J57: Boundary value problems for second-order elliptic systems
☆ 35J58: Boundary value problems for higher-order elliptic systems
☆ 35J60: Nonlinear elliptic equations
☆ 35J61: Semilinear elliptic equations
☆ 35J62: Quasilinear elliptic equations
☆ 35J65: Nonlinear boundary value problems for linear elliptic equations
☆ 35J66: Nonlinear boundary value problems for nonlinear elliptic equations
☆ 35J67: Boundary values of solutions to elliptic equations
☆ 35J70: Degenerate elliptic equations
☆ 35J75: Singular elliptic equations
☆ 35J86: Linear elliptic unilateral problems and linear elliptic variational inequalities [See also 35R35, 49J40]
☆ 35J87: Nonlinear elliptic unilateral problems and nonlinear elliptic variational inequalities [See also 35R35, 49J40]
☆ 35J88: Systems of elliptic variational inequalities [See also 35R35, 49J40]
☆ 35J91: Semilinear elliptic equations with Laplacian, bi-Laplacian or poly-Laplacian
☆ 35J92: Quasilinear elliptic equations with $p$-Laplacian
☆ 35J93: Quasilinear elliptic equations with mean curvature operator
☆ 35J96: Elliptic Monge-Ampère equations
☆ 35J99: None of the above, but in this section
□ 35Kxx: Parabolic equations and systems [See also 35Bxx, 35Dxx, 35R30, 35R35, 58J35]
☆ 35K05: Heat equation
☆ 35K08: Heat kernel
☆ 35K10: Second-order parabolic equations
☆ 35K15: Initial value problems for second-order parabolic equations
☆ 35K20: Initial-boundary value problems for second-order parabolic equations
☆ 35K25: Higher-order parabolic equations
☆ 35K30: Initial value problems for higher-order parabolic equations
☆ 35K35: Initial-boundary value problems for higher-order parabolic equations
☆ 35K40: Second-order parabolic systems
☆ 35K41: Higher-order parabolic systems
☆ 35K45: Initial value problems for second-order parabolic systems
☆ 35K46: Initial value problems for higher-order parabolic systems
☆ 35K51: Initial-boundary value problems for second-order parabolic systems
☆ 35K52: Initial-boundary value problems for higher-order parabolic systems
☆ 35K55: Nonlinear parabolic equations
☆ 35K57: Reaction-diffusion equations
☆ 35K58: Semilinear parabolic equations
☆ 35K59: Quasilinear parabolic equations
☆ 35K60: Nonlinear initial value problems for linear parabolic equations
☆ 35K61: Nonlinear initial-boundary value problems for nonlinear parabolic equations
☆ 35K65: Degenerate parabolic equations
☆ 35K67: Singular parabolic equations
☆ 35K70: Ultraparabolic equations, pseudoparabolic equations, etc.
☆ 35K85: Linear parabolic unilateral problems and linear parabolic variational inequalities [See also 35R35, 49J40]
☆ 35K86: Nonlinear parabolic unilateral problems and nonlinear parabolic variational inequalities [See also 35R35, 49J40]
☆ 35K87: Systems of parabolic variational inequalities [See also 35R35, 49J40]
☆ 35K90: Abstract parabolic equations
☆ 35K91: Semilinear parabolic equations with Laplacian, bi-Laplacian or poly-Laplacian
☆ 35K92: Quasilinear parabolic equations with $p$-Laplacian
☆ 35K93: Quasilinear parabolic equations with mean curvature operator
☆ 35K96: Parabolic Monge-Ampère equations
☆ 35K99: None of the above, but in this section
□ 35Lxx: Hyperbolic equations and systems [See also 58J45]
☆ 35L02: First-order hyperbolic equations
☆ 35L03: Initial value problems for first-order hyperbolic equations
☆ 35L04: Initial-boundary value problems for first-order hyperbolic equations
☆ 35L05: Wave equation
☆ 35L10: Second-order hyperbolic equations
☆ 35L15: Initial value problems for second-order hyperbolic equations
☆ 35L20: Initial-boundary value problems for second-order hyperbolic equations
☆ 35L25: Higher-order hyperbolic equations
☆ 35L30: Initial value problems for higher-order hyperbolic equations
☆ 35L35: Initial-boundary value problems for higher-order hyperbolic equations
☆ 35L40: First-order hyperbolic systems
☆ 35L45: Initial value problems for first-order hyperbolic systems
☆ 35L50: Initial-boundary value problems for first-order hyperbolic systems
☆ 35L51: Second-order hyperbolic systems
☆ 35L52: Initial value problems for second-order hyperbolic systems
☆ 35L53: Initial-boundary value problems for second-order hyperbolic systems
☆ 35L55: Higher-order hyperbolic systems
☆ 35L56: Initial value problems for higher-order hyperbolic systems
☆ 35L57: Initial-boundary value problems for higher-order hyperbolic systems
☆ 35L60: Nonlinear first-order hyperbolic equations
☆ 35L65: Conservation laws
☆ 35L67: Shocks and singularities [See also 58Kxx, 76L05]
☆ 35L70: Nonlinear second-order hyperbolic equations
☆ 35L71: Semilinear second-order hyperbolic equations
☆ 35L72: Quasilinear second-order hyperbolic equations
☆ 35L75: Nonlinear higher-order hyperbolic equations
☆ 35L76: Semilinear higher-order hyperbolic equations
☆ 35L77: Quasilinear higher-order hyperbolic equations
☆ 35L80: Degenerate hyperbolic equations
☆ 35L81: Singular hyperbolic equations
☆ 35L82: Pseudohyperbolic equations
☆ 35L85: Linear hyperbolic unilateral problems and linear hyperbolic variational inequalities [See also 35R35, 49J40]
☆ 35L86: Nonlinear hyperbolic unilateral problems and nonlinear hyperbolic variational inequalities [See also 35R35, 49J40]
☆ 35L87: Unilateral problems and variational inequalities for hyperbolic systems [See also 35R35, 49J40]
☆ 35L90: Abstract hyperbolic equations
☆ 35L99: None of the above, but in this section
□ 35Mxx: Equations and systems of special type (mixed, composite, etc.)
☆ 35M10: Equations of mixed type
☆ 35M11: Initial value problems for equations of mixed type
☆ 35M12: Boundary value problems for equations of mixed type
☆ 35M13: Initial-boundary value problems for equations of mixed type
☆ 35M30: Systems of mixed type
☆ 35M31: Initial value problems for systems of mixed type
☆ 35M32: Boundary value problems for systems of mixed type
☆ 35M33: Initial-boundary value problems for systems of mixed type
☆ 35M85: Linear unilateral problems and variational inequalities of mixed type [See also 35R35, 49J40]
☆ 35M86: Nonlinear unilateral problems and nonlinear variational inequalities of mixed type [See also 35R35, 49J40]
☆ 35M87: Systems of variational inequalities of mixed type [See also 35R35, 49J40]
☆ 35M99: None of the above, but in this section
□ 35Nxx: Overdetermined systems [See also 58Hxx, 58J10, 58J15]
☆ 35N05: Overdetermined systems with constant coefficients
☆ 35N10: Overdetermined systems with variable coefficients
☆ 35N15: $\overline\partial$-Neumann problem and generalizations; formal complexes [See also 32W05, 32W10, 58J10]
☆ 35N20: Overdetermined initial value problems
☆ 35N25: Overdetermined boundary value problems
☆ 35N30: Overdetermined initial-boundary value problems
☆ 35N99: None of the above, but in this section
□ 35Pxx: Spectral theory and eigenvalue problems [See also 47Axx, 47Bxx, 47F05]
☆ 35P05: General topics in linear spectral theory
☆ 35P10: Completeness of eigenfunctions, eigenfunction expansions
☆ 35P15: Estimation of eigenvalues, upper and lower bounds
☆ 35P20: Asymptotic distribution of eigenvalues and eigenfunctions
☆ 35P25: Scattering theory [See also 47A40]
☆ 35P30: Nonlinear eigenvalue problems, nonlinear spectral theory
☆ 35P99: None of the above, but in this section
□ 35Qxx: Equations of mathematical physics and other areas of application [See also 35J05, 35J10, 35K05, 35L05]
☆ 35Q05: Euler-Poisson-Darboux equations
☆ 35Q15: Riemann-Hilbert problems [See also 30E25, 31A25, 31B20]
☆ 35Q20: Boltzmann equations
☆ 35Q30: Navier-Stokes equations [See also 76D05, 76D07, 76N10]
☆ 35Q31: Euler equations [See also 76D05, 76D07, 76N10]
☆ 35Q35: PDEs in connection with fluid mechanics
☆ 35Q40: PDEs in connection with quantum mechanics
☆ 35Q41: Time-dependent Schrödinger equations, Dirac equations
☆ 35Q51: Soliton-like equations [See also 37K40]
☆ 35Q53: KdV-like equations (Korteweg-de Vries) [See also 37K10]
☆ 35Q55: NLS-like equations (nonlinear Schrödinger) [See also 37K10]
☆ 35Q56: Ginzburg-Landau equations
☆ 35Q60: PDEs in connection with optics and electromagnetic theory
☆ 35Q61: Maxwell equations
☆ 35Q62: PDEs in connection with statistics
☆ 35Q68: PDEs in connection with computer science
☆ 35Q70: PDEs in connection with mechanics of particles and systems
☆ 35Q74: PDEs in connection with mechanics of deformable solids
☆ 35Q75: PDEs in connection with relativity and gravitational theory
☆ 35Q76: Einstein equations
☆ 35Q79: PDEs in connection with classical thermodynamics and heat transfer
☆ 35Q82: PDEs in connection with statistical mechanics
☆ 35Q83: Vlasov-like equations
☆ 35Q84: Fokker-Planck equations
☆ 35Q85: PDEs in connection with astronomy and astrophysics
☆ 35Q86: PDEs in connection with geophysics
☆ 35Q90: PDEs in connection with mathematical programming
☆ 35Q91: PDEs in connection with game theory, economics, social and behavioral sciences
☆ 35Q92: PDEs in connection with biology and other natural sciences
☆ 35Q93: PDEs in connection with control and optimization
☆ 35Q94: PDEs in connection with information and communication
☆ 35Q99: None of the above, but in this section
□ 35Rxx: Miscellaneous topics {For equations on manifolds, see 58Jxx; for manifolds of solutions, see 58Bxx; for stochastic PDE, see also 60H15}
☆ 35R01: Partial differential equations on manifolds [See also 32Wxx, 53Cxx, 58Jxx]
☆ 35R02: Partial differential equations on graphs and networks (ramified or polygonal spaces)
☆ 35R03: Partial differential equations on Heisenberg groups, Lie groups, Carnot groups, etc.
☆ 35R05: Partial differential equations with discontinuous coefficients or data
☆ 35R06: Partial differential equations with measure
☆ 35R09: Integro-partial differential equations [See also 45Kxx]
☆ 35R10: Partial functional-differential equations
☆ 35R11: Fractional partial differential equations
☆ 35R12: Impulsive partial differential equations
☆ 35R13: Fuzzy partial differential equations
☆ 35R15: Partial differential equations on infinite-dimensional (e.g. function) spaces (= PDE in infinitely many variables) [See also 46Gxx, 58D25]
☆ 35R20: Partial operator-differential equations (i.e., PDE on finite-dimensional spaces for abstract space valued functions) [See also 34Gxx, 47A50, 47D03, 47D06, 47D09, 47H20, 47Jxx]
☆ 35R25: Improperly posed problems
☆ 35R30: Inverse problems
☆ 35R35: Free boundary problems
☆ 35R37: Moving boundary problems
☆ 35R45: Partial differential inequalities
☆ 35R50: Partial differential equations of infinite order
☆ 35R60: Partial differential equations with randomness, stochastic partial differential equations [See also 60H15]
☆ 35R70: Partial differential equations with multivalued right-hand sides
☆ 35R99: None of the above, but in this section
□ 35Sxx: Pseudodifferential operators and other generalizations of partial differential operators [See also 47G30, 58J40]
☆ 35S05: Pseudodifferential operators
☆ 35S10: Initial value problems for pseudodifferential operators
☆ 35S11: Initial-boundary value problems for pseudodifferential operators
☆ 35S15: Boundary value problems for pseudodifferential operators
☆ 35S30: Fourier integral operators
☆ 35S35: Topological aspects: intersection cohomology, stratified sets, etc. [See also 32C38, 32S40, 32S60, 58J15]
☆ 35S50: Paradifferential operators
☆ 35S99: None of the above, but in this section
• 37-XX: Dynamical systems and ergodic theory [See also 26A18, 28Dxx, 34Cxx, 34Dxx, 35Bxx, 46Lxx, 58Jxx, 70-XX]
□ 37-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 37-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 37-02: Research exposition (monographs, survey articles)
□ 37-03: Historical (must also be assigned at least one classification number from Section 01)
□ 37-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 37-06: Proceedings, conferences, collections, etc.
□ 37Axx: Ergodic theory [See also 28Dxx]
☆ 37A05: Measure-preserving transformations
☆ 37A10: One-parameter continuous families of measure-preserving transformations
☆ 37A15: General groups of measure-preserving transformations [See mainly 22Fxx]
☆ 37A17: Homogeneous flows [See also 22Fxx]
☆ 37A20: Orbit equivalence, cocycles, ergodic equivalence relations
☆ 37A25: Ergodicity, mixing, rates of mixing
☆ 37A30: Ergodic theorems, spectral theory, Markov operators {For operator ergodic theory, see mainly 47A35}
☆ 37A35: Entropy and other invariants, isomorphism, classification
☆ 37A40: Nonsingular (and infinite-measure preserving) transformations
☆ 37A45: Relations with number theory and harmonic analysis [See also 11Kxx]
☆ 37A50: Relations with probability theory and stochastic processes [See also 60Fxx and 60G10]
☆ 37A55: Relations with the theory of $C^*$-algebras [See mainly 46L55]
☆ 37A60: Dynamical systems in statistical mechanics [See also 82Cxx]
☆ 37A99: None of the above, but in this section
□ 37Bxx: Topological dynamics [See also 54H20]
☆ 37B05: Transformations and group actions with special properties (minimality, distality, proximality, etc.)
☆ 37B10: Symbolic dynamics [See also 37Cxx, 37Dxx]
☆ 37B15: Cellular automata [See also 68Q80]
☆ 37B20: Notions of recurrence
☆ 37B25: Lyapunov functions and stability; attractors, repellers
☆ 37B30: Index theory, Morse-Conley indices
☆ 37B35: Gradient-like and recurrent behavior; isolated (locally maximal) invariant sets
☆ 37B40: Topological entropy
☆ 37B45: Continua theory in dynamics
☆ 37B50: Multi-dimensional shifts of finite type, tiling dynamics
☆ 37B55: Nonautonomous dynamical systems
☆ 37B99: None of the above, but in this section
□ 37Cxx: Smooth dynamical systems: general theory [See also 34Cxx, 34Dxx]
☆ 37C05: Smooth mappings and diffeomorphisms
☆ 37C10: Vector fields, flows, ordinary differential equations
☆ 37C15: Topological and differentiable equivalence, conjugacy, invariants, moduli, classification
☆ 37C20: Generic properties, structural stability
☆ 37C25: Fixed points, periodic points, fixed-point index theory
☆ 37C27: Periodic orbits of vector fields and flows
☆ 37C29: Homoclinic and heteroclinic orbits
☆ 37C30: Zeta functions, (Ruelle-Frobenius) transfer operators, and other functional analytic techniques in dynamical systems
☆ 37C35: Orbit growth
☆ 37C40: Smooth ergodic theory, invariant measures [See also 37Dxx]
☆ 37C45: Dimension theory of dynamical systems
☆ 37C50: Approximate trajectories (pseudotrajectories, shadowing, etc.)
☆ 37C55: Periodic and quasiperiodic flows and diffeomorphisms
☆ 37C60: Nonautonomous smooth dynamical systems [See also 37B55]
☆ 37C65: Monotone flows
☆ 37C70: Attractors and repellers, topological structure
☆ 37C75: Stability theory
☆ 37C80: Symmetries, equivariant dynamical systems
☆ 37C85: Dynamics of group actions other than ${\bf Z}$ and ${\bf R}$, and foliations [See mainly 22Fxx, and also 57R30, 57Sxx]
☆ 37C99: None of the above, but in this section
□ 37Dxx: Dynamical systems with hyperbolic behavior
☆ 37D05: Hyperbolic orbits and sets
☆ 37D10: Invariant manifold theory
☆ 37D15: Morse-Smale systems
☆ 37D20: Uniformly hyperbolic systems (expanding, Anosov, Axiom A, etc.)
☆ 37D25: Nonuniformly hyperbolic systems (Lyapunov exponents, Pesin theory, etc.)
☆ 37D30: Partially hyperbolic systems and dominated splittings
☆ 37D35: Thermodynamic formalism, variational principles, equilibrium states
☆ 37D40: Dynamical systems of geometric origin and hyperbolicity (geodesic and horocycle flows, etc.)
☆ 37D45: Strange attractors, chaotic dynamics
☆ 37D50: Hyperbolic systems with singularities (billiards, etc.)
☆ 37D99: None of the above, but in this section
□ 37Exx: Low-dimensional dynamical systems
☆ 37E05: Maps of the interval (piecewise continuous, continuous, smooth)
☆ 37E10: Maps of the circle
☆ 37E15: Combinatorial dynamics (types of periodic orbits)
☆ 37E20: Universality, renormalization [See also 37F25]
☆ 37E25: Maps of trees and graphs
☆ 37E30: Homeomorphisms and diffeomorphisms of planes and surfaces
☆ 37E35: Flows on surfaces
☆ 37E40: Twist maps
☆ 37E45: Rotation numbers and vectors
☆ 37E99: None of the above, but in this section
□ 37Fxx: Complex dynamical systems [See also 30D05, 32H50]
☆ 37F05: Relations and correspondences
☆ 37F10: Polynomials; rational maps; entire and meromorphic functions [See also 32A10, 32A20, 32H02, 32H04]
☆ 37F15: Expanding maps; hyperbolicity; structural stability
☆ 37F20: Combinatorics and topology
☆ 37F25: Renormalization
☆ 37F30: Quasiconformal methods and Teichmüller theory; Fuchsian and Kleinian groups as dynamical systems
☆ 37F35: Conformal densities and Hausdorff dimension
☆ 37F40: Geometric limits
☆ 37F45: Holomorphic families of dynamical systems; the Mandelbrot set; bifurcations
☆ 37F50: Small divisors, rotation domains and linearization; Fatou and Julia sets
☆ 37F75: Holomorphic foliations and vector fields [See also 32M25, 32S65, 34Mxx]
☆ 37F99: None of the above, but in this section
□ 37Gxx: Local and nonlocal bifurcation theory [See also 34C23, 34K18]
☆ 37G05: Normal forms
☆ 37G10: Bifurcations of singular points
☆ 37G15: Bifurcations of limit cycles and periodic orbits
☆ 37G20: Hyperbolic singular points with homoclinic trajectories
☆ 37G25: Bifurcations connected with nontransversal intersection
☆ 37G30: Infinite nonwandering sets arising in bifurcations
☆ 37G35: Attractors and their bifurcations
☆ 37G40: Symmetries, equivariant bifurcation theory
☆ 37G99: None of the above, but in this section
□ 37Hxx: Random dynamical systems [See also 15B52, 34D08, 34F05, 47B80, 70L05, 82C05, 93Exx]
☆ 37H05: Foundations, general theory of cocycles, algebraic ergodic theory [See also 37Axx]
☆ 37H10: Generation, random and stochastic difference and differential equations [See also 34F05, 34K50, 60H10, 60H15]
☆ 37H15: Multiplicative ergodic theory, Lyapunov exponents [See also 34D08, 37Axx, 37Cxx, 37Dxx]
☆ 37H20: Bifurcation theory [See also 37Gxx]
☆ 37H99: None of the above, but in this section
□ 37Jxx: Finite-dimensional Hamiltonian, Lagrangian, contact, and nonholonomic systems [See also 53Dxx, 70Fxx, 70Hxx]
☆ 37J05: General theory, relations with symplectic geometry and topology
☆ 37J10: Symplectic mappings, fixed points
☆ 37J15: Symmetries, invariants, invariant manifolds, momentum maps, reduction [See also 53D20]
☆ 37J20: Bifurcation problems
☆ 37J25: Stability problems
☆ 37J30: Obstructions to integrability (nonintegrability criteria)
☆ 37J35: Completely integrable systems, topological structure of phase space, integration methods
☆ 37J40: Perturbations, normal forms, small divisors, KAM theory, Arnol'd diffusion
☆ 37J45: Periodic, homoclinic and heteroclinic orbits; variational methods, degree-theoretic methods
☆ 37J50: Action-minimizing orbits and measures
☆ 37J55: Contact systems [See also 53D10]
☆ 37J60: Nonholonomic dynamical systems [See also 70F25]
☆ 37J99: None of the above, but in this section
□ 37Kxx: Infinite-dimensional Hamiltonian systems [See also 35Axx, 35Qxx]
☆ 37K05: Hamiltonian structures, symmetries, variational principles, conservation laws
☆ 37K10: Completely integrable systems, integrability tests, bi-Hamiltonian structures, hierarchies (KdV, KP, Toda, etc.)
☆ 37K15: Integration of completely integrable systems by inverse spectral and scattering methods
☆ 37K20: Relations with algebraic geometry, complex analysis, special functions [See also 14H70]
☆ 37K25: Relations with differential geometry
☆ 37K30: Relations with infinite-dimensional Lie algebras and other algebraic structures
☆ 37K35: Lie-Bäcklund and other transformations
☆ 37K40: Soliton theory, asymptotic behavior of solutions
☆ 37K45: Stability problems
☆ 37K50: Bifurcation problems
☆ 37K55: Perturbations, KAM for infinite-dimensional systems
☆ 37K60: Lattice dynamics [See also 37L60]
☆ 37K65: Hamiltonian systems on groups of diffeomorphisms and on manifolds of mappings and metrics
☆ 37K99: None of the above, but in this section
□ 37Lxx: Infinite-dimensional dissipative dynamical systems [See also 35Bxx, 35Qxx]
☆ 37L05: General theory, nonlinear semigroups, evolution equations
☆ 37L10: Normal forms, center manifold theory, bifurcation theory
☆ 37L15: Stability problems
☆ 37L20: Symmetries
☆ 37L25: Inertial manifolds and other invariant attracting sets
☆ 37L30: Attractors and their dimensions, Lyapunov exponents
☆ 37L40: Invariant measures
☆ 37L45: Hyperbolicity; Lyapunov functions
☆ 37L50: Noncompact semigroups; dispersive equations; perturbations of Hamiltonian systems
☆ 37L55: Infinite-dimensional random dynamical systems; stochastic equations [See also 35R60, 60H10, 60H15]
☆ 37L60: Lattice dynamics [See also 37K60]
☆ 37L65: Special approximation methods (nonlinear Galerkin, etc.)
☆ 37L99: None of the above, but in this section
□ 37Mxx: Approximation methods and numerical treatment of dynamical systems [See also 65Pxx]
☆ 37M05: Simulation
☆ 37M10: Time series analysis
☆ 37M15: Symplectic integrators
☆ 37M20: Computational methods for bifurcation problems
☆ 37M25: Computational methods for ergodic theory (approximation of invariant measures, computation of Lyapunov exponents, entropy)
☆ 37M99: None of the above, but in this section
□ 37Nxx: Applications
☆ 37N05: Dynamical systems in classical and celestial mechanics [See mainly 70Fxx, 70Hxx, 70Kxx]
☆ 37N10: Dynamical systems in fluid mechanics, oceanography and meteorology [See mainly 76-XX, especially 76D05, 76F20, 86A05, 86A10]
☆ 37N15: Dynamical systems in solid mechanics [See mainly 74Hxx]
☆ 37N20: Dynamical systems in other branches of physics (quantum mechanics, general relativity, laser physics)
☆ 37N25: Dynamical systems in biology [See mainly 92-XX, but also 91-XX]
☆ 37N30: Dynamical systems in numerical analysis
☆ 37N35: Dynamical systems in control
☆ 37N40: Dynamical systems in optimization and economics
☆ 37N99: None of the above, but in this section
□ 37Pxx: Arithmetic and non-Archimedean dynamical systems [See also 11S82, 37A45]
☆ 37P05: Polynomial and rational maps
☆ 37P10: Analytic and meromorphic maps
☆ 37P15: Global ground fields
☆ 37P20: Non-Archimedean local ground fields
☆ 37P25: Finite ground fields
☆ 37P30: Height functions; Green functions; invariant measures [See also 11G50, 14G40]
☆ 37P35: Arithmetic properties of periodic points
☆ 37P40: Non-Archimedean Fatou and Julia sets
☆ 37P45: Families and moduli spaces
☆ 37P50: Dynamical systems on Berkovich spaces
☆ 37P55: Arithmetic dynamics on general algebraic varieties
☆ 37P99: None of the above, but in this section
• 39-XX: Difference and functional equations
□ 39-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 39-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 39-02: Research exposition (monographs, survey articles)
□ 39-03: Historical (must also be assigned at least one classification number from Section 01)
□ 39-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 39-06: Proceedings, conferences, collections, etc.
□ 39Axx: Difference equations {For dynamical systems, see 37-XX; for dynamic equations on time scales, see 34N05}
☆ 39A05: General theory
☆ 39A06: Linear equations
☆ 39A10: Difference equations, additive
☆ 39A12: Discrete version of topics in analysis
☆ 39A13: Difference equations, scaling ($q$-differences) [See also 33Dxx]
☆ 39A14: Partial difference equations
☆ 39A20: Multiplicative and other generalized difference equations, e.g. of Lyness type
☆ 39A21: Oscillation theory
☆ 39A22: Growth, boundedness, comparison of solutions
☆ 39A23: Periodic solutions
☆ 39A24: Almost periodic solutions
☆ 39A28: Bifurcation theory
☆ 39A30: Stability theory
☆ 39A33: Complex (chaotic) behavior of solutions
☆ 39A45: Equations in the complex domain
☆ 39A50: Stochastic difference equations
☆ 39A60: Applications
☆ 39A70: Difference operators [See also 47B39]
☆ 39A99: None of the above, but in this section
□ 39Bxx: Functional equations and inequalities [See also 30D05]
☆ 39B05: General
☆ 39B12: Iteration theory, iterative and composite equations [See also 26A18, 30D05, 37-XX]
☆ 39B22: Equations for real functions [See also 26A51, 26B25]
☆ 39B32: Equations for complex functions [See also 30D05]
☆ 39B42: Matrix and operator equations [See also 47Jxx]
☆ 39B52: Equations for functions with more general domains and/or ranges
☆ 39B55: Orthogonal additivity and other conditional equations
☆ 39B62: Functional inequalities, including subadditivity, convexity, etc. [See also 26A51, 26B25, 26Dxx]
☆ 39B72: Systems of functional equations and inequalities
☆ 39B82: Stability, separation, extension, and related topics [See also 46A22]
☆ 39B99: None of the above, but in this section
• 40-XX: Sequences, series, summability
□ 40-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 40-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 40-02: Research exposition (monographs, survey articles)
□ 40-03: Historical (must also be assigned at least one classification number from Section 01)
□ 40-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 40-06: Proceedings, conferences, collections, etc.
□ 40Axx: Convergence and divergence of infinite limiting processes
☆ 40A05: Convergence and divergence of series and sequences
☆ 40A10: Convergence and divergence of integrals
☆ 40A15: Convergence and divergence of continued fractions [See also 30B70]
☆ 40A20: Convergence and divergence of infinite products
☆ 40A25: Approximation to limiting values (summation of series, etc.) {For the Euler-Maclaurin summation formula, see 65B15}
☆ 40A30: Convergence and divergence of series and sequences of functions
☆ 40A35: Ideal and statistical convergence [See also 40G15]
☆ 40A99: None of the above, but in this section
□ 40Bxx: Multiple sequences and series
☆ 40B05: Multiple sequences and series (should also be assigned at least one other classification number in this section)
☆ 40B99: None of the above, but in this section
□ 40Cxx: General summability methods
☆ 40C05: Matrix methods
☆ 40C10: Integral methods
☆ 40C15: Function-theoretic methods (including power series methods and semicontinuous methods)
☆ 40C99: None of the above, but in this section
□ 40Dxx: Direct theorems on summability
☆ 40D05: General theorems
☆ 40D09: Structure of summability fields
☆ 40D10: Tauberian constants and oscillation limits
☆ 40D15: Convergence factors and summability factors
☆ 40D20: Summability and bounded fields of methods
☆ 40D25: Inclusion and equivalence theorems
☆ 40D99: None of the above, but in this section
□ 40Exx: Inversion theorems
☆ 40E05: Tauberian theorems, general
☆ 40E10: Growth estimates
☆ 40E15: Lacunary inversion theorems
☆ 40E20: Tauberian constants
☆ 40E99: None of the above, but in this section
□ 40Fxx: Absolute and strong summability (should also be assigned at least one other classification number in Section 40)
☆ 40F05: Absolute and strong summability (should also be assigned at least one other classification number in Section 40)
☆ 40F99: None of the above, but in this section
□ 40Gxx: Special methods of summability
☆ 40G05: Cesàro, Euler, Nörlund and Hausdorff methods
☆ 40G10: Abel, Borel and power series methods
☆ 40G15: Summability methods using statistical convergence [See also 40A35]
☆ 40G99: None of the above, but in this section
□ 40Hxx: Functional analytic methods in summability
☆ 40H05: Functional analytic methods in summability
☆ 40H99: None of the above, but in this section
□ 40Jxx: Summability in abstract structures [See also 43A55, 46A35, 46B15]
☆ 40J05: Summability in abstract structures [See also 43A55, 46A35, 46B15] (should also be assigned at least one other classification number in this section)
☆ 40J99: None of the above, but in this section
• 41-XX: Approximations and expansions {For all approximation theory in the complex domain, see 30E05 and 30E10; for all trigonometric approximation and interpolation, see 42A10 and 42A15; for numerical approximation, see 65Dxx}
□ 41-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 41-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 41-02: Research exposition (monographs, survey articles)
□ 41-03: Historical (must also be assigned at least one classification number from Section 01)
□ 41-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 41-06: Proceedings, conferences, collections, etc.
□ 41Axx: Approximations and expansions {For all approximation theory in the complex domain, see 30E05 and 30E10; for all trigonometric approximation and interpolation, see 42A10 and 42A15; for numerical approximation, see 65Dxx}
☆ 41A05: Interpolation [See also 42A15 and 65D05]
☆ 41A10: Approximation by polynomials {For approximation by trigonometric polynomials, see 42A10}
☆ 41A15: Spline approximation
☆ 41A17: Inequalities in approximation (Bernstein, Jackson, Nikol'skiĭ-type inequalities)
☆ 41A20: Approximation by rational functions
☆ 41A21: Padé approximation
☆ 41A25: Rate of convergence, degree of approximation
☆ 41A27: Inverse theorems
☆ 41A28: Simultaneous approximation
☆ 41A29: Approximation with constraints
☆ 41A30: Approximation by other special function classes
☆ 41A35: Approximation by operators (in particular, by integral operators)
☆ 41A36: Approximation by positive operators
☆ 41A40: Saturation
☆ 41A44: Best constants
☆ 41A45: Approximation by arbitrary linear expressions
☆ 41A46: Approximation by arbitrary nonlinear expressions; widths and entropy
☆ 41A50: Best approximation, Chebyshev systems
☆ 41A52: Uniqueness of best approximation
☆ 41A55: Approximate quadratures
☆ 41A58: Series expansions (e.g. Taylor, Lidstone series, but not Fourier series)
☆ 41A60: Asymptotic approximations, asymptotic expansions (steepest descent, etc.) [See also 30E15]
☆ 41A63: Multidimensional problems (should also be assigned at least one other classification number in this section)
☆ 41A65: Abstract approximation theory (approximation in normed linear spaces and other abstract spaces)
☆ 41A80: Remainders in approximation formulas
☆ 41A99: None of the above, but in this section
• 42-XX: Harmonic analysis on Euclidean spaces
□ 42-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 42-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 42-02: Research exposition (monographs, survey articles)
□ 42-03: Historical (must also be assigned at least one classification number from Section 01)
□ 42-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 42-06: Proceedings, conferences, collections, etc.
□ 42Axx: Harmonic analysis in one variable
☆ 42A05: Trigonometric polynomials, inequalities, extremal problems
☆ 42A10: Trigonometric approximation
☆ 42A15: Trigonometric interpolation
☆ 42A16: Fourier coefficients, Fourier series of functions with special properties, special Fourier series {For automorphic theory, see mainly 11F30}
☆ 42A20: Convergence and absolute convergence of Fourier and trigonometric series
☆ 42A24: Summability and absolute summability of Fourier and trigonometric series
☆ 42A32: Trigonometric series of special types (positive coefficients, monotonic coefficients, etc.)
☆ 42A38: Fourier and Fourier-Stieltjes transforms and other transforms of Fourier type
☆ 42A45: Multipliers
☆ 42A50: Conjugate functions, conjugate series, singular integrals
☆ 42A55: Lacunary series of trigonometric and other functions; Riesz products
☆ 42A61: Probabilistic methods
☆ 42A63: Uniqueness of trigonometric expansions, uniqueness of Fourier expansions, Riemann theory, localization
☆ 42A65: Completeness of sets of functions
☆ 42A70: Trigonometric moment problems
☆ 42A75: Classical almost periodic functions, mean periodic functions [See also 43A60]
☆ 42A82: Positive definite functions
☆ 42A85: Convolution, factorization
☆ 42A99: None of the above, but in this section
□ 42Bxx: Harmonic analysis in several variables {For automorphic theory, see mainly 11F30}
☆ 42B05: Fourier series and coefficients
☆ 42B08: Summability
☆ 42B10: Fourier and Fourier-Stieltjes transforms and other transforms of Fourier type
☆ 42B15: Multipliers
☆ 42B20: Singular and oscillatory integrals (Calderón-Zygmund, etc.)
☆ 42B25: Maximal functions, Littlewood-Paley theory
☆ 42B30: $H^p$-spaces
☆ 42B35: Function spaces arising in harmonic analysis
☆ 42B37: Harmonic analysis and PDE [See also 35-XX]
☆ 42B99: None of the above, but in this section
□ 42Cxx: Nontrigonometric harmonic analysis
☆ 42C05: Orthogonal functions and polynomials, general theory [See also 33C45, 33C50, 33D45]
☆ 42C10: Fourier series in special orthogonal functions (Legendre polynomials, Walsh functions, etc.)
☆ 42C15: General harmonic expansions, frames
☆ 42C20: Other transformations of harmonic type
☆ 42C25: Uniqueness and localization for orthogonal series
☆ 42C30: Completeness of sets of functions
☆ 42C40: Wavelets and other special systems
☆ 42C99: None of the above, but in this section
• 43-XX: Abstract harmonic analysis {For other analysis on topological and Lie groups, see 22Exx}
□ 43-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 43-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 43-02: Research exposition (monographs, survey articles)
□ 43-03: Historical (must also be assigned at least one classification number from Section 01)
□ 43-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 43-06: Proceedings, conferences, collections, etc.
□ 43Axx: Abstract harmonic analysis {For other analysis on topological and Lie groups, see 22Exx}
☆ 43A05: Measures on groups and semigroups, etc.
☆ 43A07: Means on groups, semigroups, etc.; amenable groups
☆ 43A10: Measure algebras on groups, semigroups, etc.
☆ 43A15: $L^p$-spaces and other function spaces on groups, semigroups, etc.
☆ 43A17: Analysis on ordered groups, $H^p$-theory
☆ 43A20: $L^1$-algebras on groups, semigroups, etc.
☆ 43A22: Homomorphisms and multipliers of function spaces on groups, semigroups, etc.
☆ 43A25: Fourier and Fourier-Stieltjes transforms on locally compact and other abelian groups
☆ 43A30: Fourier and Fourier-Stieltjes transforms on nonabelian groups and on semigroups, etc.
☆ 43A32: Other transforms and operators of Fourier type
☆ 43A35: Positive definite functions on groups, semigroups, etc.
☆ 43A40: Character groups and dual objects
☆ 43A45: Spectral synthesis on groups, semigroups, etc.
☆ 43A46: Special sets (thin sets, Kronecker sets, Helson sets, Ditkin sets, Sidon sets, etc.)
☆ 43A50: Convergence of Fourier series and of inverse transforms
☆ 43A55: Summability methods on groups, semigroups, etc. [See also 40J05]
☆ 43A60: Almost periodic functions on groups and semigroups and their generalizations (recurrent functions, distal functions, etc.); almost automorphic functions
☆ 43A62: Hypergroups
☆ 43A65: Representations of groups, semigroups, etc. [See also 22A10, 22A20, 22Dxx, 22E45]
☆ 43A70: Analysis on specific locally compact and other abelian groups [See also 11R56, 22B05]
☆ 43A75: Analysis on specific compact groups
☆ 43A77: Analysis on general compact groups
☆ 43A80: Analysis on other specific Lie groups [See also 22Exx]
☆ 43A85: Analysis on homogeneous spaces
☆ 43A90: Spherical functions [See also 22E45, 22E46, 33C55]
☆ 43A95: Categorical methods [See also 46Mxx]
☆ 43A99: None of the above, but in this section
• 44-XX: Integral transforms, operational calculus {For fractional derivatives and integrals, see 26A33. For Fourier transforms, see 42A38, 42B10. For integral transforms in distribution spaces, see 46F12. For numerical methods, see 65R10}
□ 44-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 44-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 44-02: Research exposition (monographs, survey articles)
□ 44-03: Historical (must also be assigned at least one classification number from Section 01)
□ 44-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 44-06: Proceedings, conferences, collections, etc.
□ 44Axx: Integral transforms, operational calculus {For fractional derivatives and integrals, see 26A33. For Fourier transforms, see 42A38, 42B10. For integral transforms in distribution spaces, see 46F12. For numerical methods, see 65R10}
☆ 44A05: General transforms [See also 42A38]
☆ 44A10: Laplace transform
☆ 44A12: Radon transform [See also 92C55]
☆ 44A15: Special transforms (Legendre, Hilbert, etc.)
☆ 44A20: Transforms of special functions
☆ 44A30: Multiple transforms
☆ 44A35: Convolution
☆ 44A40: Calculus of Mikusiński and other operational calculi
☆ 44A45: Classical operational calculus
☆ 44A55: Discrete operational calculus
☆ 44A60: Moment problems
☆ 44A99: None of the above, but in this section
• 45-XX: Integral equations
□ 45-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 45-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 45-02: Research exposition (monographs, survey articles)
□ 45-03: Historical (must also be assigned at least one classification number from Section 01)
□ 45-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 45-06: Proceedings, conferences, collections, etc.
□ 45Axx: Linear integral equations
☆ 45A05: Linear integral equations
☆ 45A99: None of the above, but in this section
□ 45Bxx: Fredholm integral equations
☆ 45B05: Fredholm integral equations
☆ 45B99: None of the above, but in this section
□ 45Cxx: Eigenvalue problems [See also 34Lxx, 35Pxx, 45P05, 47A75]
☆ 45C05: Eigenvalue problems [See also 34Lxx, 35Pxx, 45P05, 47A75]
☆ 45C99: None of the above, but in this section
□ 45Dxx: Volterra integral equations [See also 34A12]
☆ 45D05: Volterra integral equations [See also 34A12]
☆ 45D99: None of the above, but in this section
□ 45Exx: Singular integral equations [See also 30E20, 30E25, 44A15, 44A35]
☆ 45E05: Integral equations with kernels of Cauchy type [See also 35J15]
☆ 45E10: Integral equations of the convolution type (Abel, Picard, Toeplitz and Wiener-Hopf type) [See also 47B35]
☆ 45E99: None of the above, but in this section
□ 45Fxx: Systems of linear integral equations
☆ 45F05: Systems of nonsingular linear integral equations
☆ 45F10: Dual, triple, etc., integral and series equations
☆ 45F15: Systems of singular linear integral equations
☆ 45F99: None of the above, but in this section
□ 45Gxx: Nonlinear integral equations [See also 47H30, 47Jxx]
☆ 45G05: Singular nonlinear integral equations
☆ 45G10: Other nonlinear integral equations
☆ 45G15: Systems of nonlinear integral equations
☆ 45G99: None of the above, but in this section
□ 45Hxx: Miscellaneous special kernels [See also 44A15]
☆ 45H05: Miscellaneous special kernels [See also 44A15]
☆ 45H99: None of the above, but in this section
□ 45Jxx: Integro-ordinary differential equations [See also 34K05, 34K30, 47G20]
☆ 45J05: Integro-ordinary differential equations [See also 34K05, 34K30, 47G20]
☆ 45J99: None of the above, but in this section
□ 45Kxx: Integro-partial differential equations [See also 34K30, 35R09, 35R10, 47G20]
☆ 45K05: Integro-partial differential equations [See also 34K30, 35R09, 35R10, 47G20]
☆ 45K99: None of the above, but in this section
□ 45Lxx: Theoretical approximation of solutions {For numerical analysis, see 65Rxx}
☆ 45L05: Theoretical approximation of solutions {For numerical analysis, see 65Rxx}
☆ 45L99: None of the above, but in this section
□ 45Mxx: Qualitative behavior
☆ 45M05: Asymptotics
☆ 45M10: Stability theory
☆ 45M15: Periodic solutions
☆ 45M20: Positive solutions
☆ 45M99: None of the above, but in this section
□ 45Nxx: Abstract integral equations, integral equations in abstract spaces
☆ 45N05: Abstract integral equations, integral equations in abstract spaces
☆ 45N99: None of the above, but in this section
□ 45Pxx: Integral operators [See also 47B38, 47G10]
☆ 45P05: Integral operators [See also 47B38, 47G10]
☆ 45P99: None of the above, but in this section
□ 45Qxx: Inverse problems
☆ 45Q05: Inverse problems
☆ 45Q99: None of the above, but in this section
□ 45Rxx: Random integral equations [See also 60H20]
☆ 45R05: Random integral equations [See also 60H20]
☆ 45R99: None of the above, but in this section
• 46-XX: Functional analysis {For manifolds modeled on topological linear spaces, see 57Nxx, 58Bxx}
□ 46-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 46-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 46-02: Research exposition (monographs, survey articles)
□ 46-03: Historical (must also be assigned at least one classification number from Section 01)
□ 46-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 46-06: Proceedings, conferences, collections, etc.
□ 46Axx: Topological linear spaces and related structures {For function spaces, see 46Exx}
☆ 46A03: General theory of locally convex spaces
☆ 46A04: Locally convex Fréchet spaces and (DF)-spaces
☆ 46A08: Barrelled spaces, bornological spaces
☆ 46A11: Spaces determined by compactness or summability properties (nuclear spaces, Schwartz spaces, Montel spaces, etc.)
☆ 46A13: Spaces defined by inductive or projective limits (LB, LF, etc.) [See also 46M40]
☆ 46A16: Not locally convex spaces (metrizable topological linear spaces, locally bounded spaces, quasi-Banach spaces, etc.)
☆ 46A17: Bornologies and related structures; Mackey convergence, etc.
☆ 46A19: Other “topological” linear spaces (convergence spaces, ranked spaces, spaces with a metric taking values in an ordered structure more general than ${\bf R}$, etc.)
☆ 46A20: Duality theory
☆ 46A22: Theorems of Hahn-Banach type; extension and lifting of functionals and operators [See also 46M10]
☆ 46A25: Reflexivity and semi-reflexivity [See also 46B10]
☆ 46A30: Open mapping and closed graph theorems; completeness (including $B$-, $B_r$-completeness)
☆ 46A32: Spaces of linear operators; topological tensor products; approximation properties [See also 46B28, 46M05, 47L05, 47L20]
☆ 46A35: Summability and bases [See also 46B15]
☆ 46A40: Ordered topological linear spaces, vector lattices [See also 06F20, 46B40, 46B42]
☆ 46A45: Sequence spaces (including Köthe sequence spaces) [See also 46B45]
☆ 46A50: Compactness in topological linear spaces; angelic spaces, etc.
☆ 46A55: Convex sets in topological linear spaces; Choquet theory [See also 52A07]
☆ 46A61: Graded Fréchet spaces and tame operators
☆ 46A63: Topological invariants ((DN), ($\Omega$), etc.)
☆ 46A70: Saks spaces and their duals (strict topologies, mixed topologies, two-norm spaces, co-Saks spaces, etc.)
☆ 46A80: Modular spaces
☆ 46A99: None of the above, but in this section
□ 46Bxx: Normed linear spaces and Banach spaces; Banach lattices {For function spaces, see 46Exx}
☆ 46B03: Isomorphic theory (including renorming) of Banach spaces
☆ 46B04: Isometric theory of Banach spaces
☆ 46B06: Asymptotic theory of Banach spaces [See also 52A23]
☆ 46B07: Local theory of Banach spaces
☆ 46B08: Ultraproduct techniques in Banach space theory [See also 46M07]
☆ 46B09: Probabilistic methods in Banach space theory [See also 60Bxx]
☆ 46B10: Duality and reflexivity [See also 46A25]
☆ 46B15: Summability and bases [See also 46A35]
☆ 46B20: Geometry and structure of normed linear spaces
☆ 46B22: Radon-Nikodým, Kreĭn-Milman and related properties [See also 46G10]
☆ 46B25: Classical Banach spaces in the general theory
☆ 46B26: Nonseparable Banach spaces
☆ 46B28: Spaces of operators; tensor products; approximation properties [See also 46A32, 46M05, 47L05, 47L20]
☆ 46B40: Ordered normed spaces [See also 46A40, 46B42]
☆ 46B42: Banach lattices [See also 46A40, 46B40]
☆ 46B45: Banach sequence spaces [See also 46A45]
☆ 46B50: Compactness in Banach (or normed) spaces
☆ 46B70: Interpolation between normed linear spaces [See also 46M35]
☆ 46B80: Nonlinear classification of Banach spaces; nonlinear quotients
☆ 46B85: Embeddings of discrete metric spaces into Banach spaces; applications in topology and computer science [See also 05C12, 68Rxx]
☆ 46B99: None of the above, but in this section
□ 46Cxx: Inner product spaces and their generalizations, Hilbert spaces {For function spaces, see 46Exx}
☆ 46C05: Hilbert and pre-Hilbert spaces: geometry and topology (including spaces with semidefinite inner product)
☆ 46C07: Hilbert subspaces (= operator ranges); complementation (Aronszajn, de Branges, etc.) [See also 46B70, 46M35]
☆ 46C15: Characterizations of Hilbert spaces
☆ 46C20: Spaces with indefinite inner product (Kreĭn spaces, Pontryagin spaces, etc.) [See also 47B50]
☆ 46C50: Generalizations of inner products (semi-inner products, partial inner products, etc.)
☆ 46C99: None of the above, but in this section
□ 46Exx: Linear function spaces and their duals [See also 30H05, 32A38, 46F05] {For function algebras, see 46J10}
☆ 46E05: Lattices of continuous, differentiable or analytic functions
☆ 46E10: Topological linear spaces of continuous, differentiable or analytic functions
☆ 46E15: Banach spaces of continuous, differentiable or analytic functions
☆ 46E20: Hilbert spaces of continuous, differentiable or analytic functions
☆ 46E22: Hilbert spaces with reproducing kernels (= [proper] functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) [See also 47B32]
☆ 46E25: Rings and algebras of continuous, differentiable or analytic functions {For Banach function algebras, see 46J10, 46J15}
☆ 46E27: Spaces of measures [See also 28A33, 46Gxx]
☆ 46E30: Spaces of measurable functions ($L^p$-spaces, Orlicz spaces, Köthe function spaces, Lorentz spaces, rearrangement invariant spaces, ideal spaces, etc.)
☆ 46E35: Sobolev spaces and other spaces of “smooth” functions, embedding theorems, trace theorems
☆ 46E39: Sobolev (and similar kinds of) spaces of functions of discrete variables
☆ 46E40: Spaces of vector- and operator-valued functions
☆ 46E50: Spaces of differentiable or holomorphic functions on infinite-dimensional spaces [See also 46G20, 46G25, 47H60]
☆ 46E99: None of the above, but in this section
□ 46Fxx: Distributions, generalized functions, distribution spaces [See also 46T30]
☆ 46F05: Topological linear spaces of test functions, distributions and ultradistributions [See also 46E10, 46E35]
☆ 46F10: Operations with distributions
☆ 46F12: Integral transforms in distribution spaces [See also 42-XX, 44-XX]
☆ 46F15: Hyperfunctions, analytic functionals [See also 32A25, 32A45, 32C35, 58J15]
☆ 46F20: Distributions and ultradistributions as boundary values of analytic functions [See also 30D40, 30E25, 32A40]
☆ 46F25: Distributions on infinite-dimensional spaces [See also 58C35]
☆ 46F30: Generalized functions for nonlinear analysis (Rosinger, Colombeau, nonstandard, etc.)
☆ 46F99: None of the above, but in this section
□ 46Gxx: Measures, integration, derivative, holomorphy (all involving infinite-dimensional spaces) [See also 28-XX, 46Txx]
☆ 46G05: Derivatives [See also 46T20, 58C20, 58C25]
☆ 46G10: Vector-valued measures and integration [See also 28Bxx, 46B22]
☆ 46G12: Measures and integration on abstract linear spaces [See also 28C20, 46T12]
☆ 46G15: Functional analytic lifting theory [See also 28A51]
☆ 46G20: Infinite-dimensional holomorphy [See also 32-XX, 46E50, 46T25, 58B12, 58C10]
☆ 46G25: (Spaces of) multilinear mappings, polynomials [See also 46E50, 46G20, 47H60]
☆ 46G99: None of the above, but in this section
□ 46Hxx: Topological algebras, normed rings and algebras, Banach algebras {For group algebras, convolution algebras and measure algebras, see 43A10, 43A20}
☆ 46H05: General theory of topological algebras
☆ 46H10: Ideals and subalgebras
☆ 46H15: Representations of topological algebras
☆ 46H20: Structure, classification of topological algebras
☆ 46H25: Normed modules and Banach modules, topological modules (if not placed in 13-XX or 16-XX)
☆ 46H30: Functional calculus in topological algebras [See also 47A60]
☆ 46H35: Topological algebras of operators [See mainly 47Lxx]
☆ 46H40: Automatic continuity
☆ 46H70: Nonassociative topological algebras [See also 46K70, 46L70]
☆ 46H99: None of the above, but in this section
□ 46Jxx: Commutative Banach algebras and commutative topological algebras [See also 46E25]
☆ 46J05: General theory of commutative topological algebras
☆ 46J10: Banach algebras of continuous functions, function algebras [See also 46E25]
☆ 46J15: Banach algebras of differentiable or analytic functions, $H^p$-spaces [See also 30H10, 32A35, 32A37, 32A38, 42B30]
☆ 46J20: Ideals, maximal ideals, boundaries
☆ 46J25: Representations of commutative topological algebras
☆ 46J30: Subalgebras
☆ 46J40: Structure, classification of commutative topological algebras
☆ 46J45: Radical Banach algebras
☆ 46J99: None of the above, but in this section
□ 46Kxx: Topological (rings and) algebras with an involution [See also 16W10]
☆ 46K05: General theory of topological algebras with involution
☆ 46K10: Representations of topological algebras with involution
☆ 46K15: Hilbert algebras
☆ 46K50: Nonselfadjoint (sub)algebras in algebras with involution
☆ 46K70: Nonassociative topological algebras with an involution [See also 46H70, 46L70]
☆ 46K99: None of the above, but in this section
□ 46Lxx: Selfadjoint operator algebras ($C^*$-algebras, von Neumann ($W^*$-) algebras, etc.) [See also 22D25, 47Lxx]
☆ 46L05: General theory of $C^*$-algebras
☆ 46L06: Tensor products of $C^*$-algebras
☆ 46L07: Operator spaces and completely bounded maps [See also 47L25]
☆ 46L08: $C^*$-modules
☆ 46L09: Free products of $C^*$-algebras
☆ 46L10: General theory of von Neumann algebras
☆ 46L30: States
☆ 46L35: Classifications of $C^*$-algebras
☆ 46L36: Classification of factors
☆ 46L37: Subfactors and their classification
☆ 46L40: Automorphisms
☆ 46L45: Decomposition theory for $C^*$-algebras
☆ 46L51: Noncommutative measure and integration
☆ 46L52: Noncommutative function spaces
☆ 46L53: Noncommutative probability and statistics
☆ 46L54: Free probability and free operator algebras
☆ 46L55: Noncommutative dynamical systems [See also 28Dxx, 37Kxx, 37Lxx, 54H20]
☆ 46L57: Derivations, dissipations and positive semigroups in $C^*$-algebras
☆ 46L60: Applications of selfadjoint operator algebras to physics [See also 46N50, 46N55, 47L90, 81T05, 82B10, 82C10]
☆ 46L65: Quantizations, deformations
☆ 46L70: Nonassociative selfadjoint operator algebras [See also 46H70, 46K70]
☆ 46L80: $K$-theory and operator algebras (including cyclic theory) [See also 18F25, 19Kxx, 46M20, 55Rxx, 58J22]
☆ 46L85: Noncommutative topology [See also 58B32, 58B34, 58J22]
☆ 46L87: Noncommutative differential geometry [See also 58B32, 58B34, 58J22]
☆ 46L89: Other “noncommutative” mathematics based on $C^*$-algebra theory [See also 58B32, 58B34, 58J22]
☆ 46L99: None of the above, but in this section
□ 46Mxx: Methods of category theory in functional analysis [See also 18-XX]
☆ 46M05: Tensor products [See also 46A32, 46B28, 47A80]
☆ 46M07: Ultraproducts [See also 46B08, 46S20]
☆ 46M10: Projective and injective objects [See also 46A22]
☆ 46M15: Categories, functors {For $K$-theory, EXT, etc., see 19K33, 46L80, 46M18, 46M20}
☆ 46M18: Homological methods (exact sequences, right inverses, lifting, etc.)
☆ 46M20: Methods of algebraic topology (cohomology, sheaf and bundle theory, etc.) [See also 14F05, 18Fxx, 19Kxx, 32Cxx, 32Lxx, 46L80, 46M15, 46M18, 55Rxx]
☆ 46M35: Abstract interpolation of topological vector spaces [See also 46B70]
☆ 46M40: Inductive and projective limits [See also 46A13]
☆ 46M99: None of the above, but in this section
□ 46Nxx: Miscellaneous applications of functional analysis [See also 47Nxx]
☆ 46N10: Applications in optimization, convex analysis, mathematical programming, economics
☆ 46N20: Applications to differential and integral equations
☆ 46N30: Applications in probability theory and statistics
☆ 46N40: Applications in numerical analysis [See also 65Jxx]
☆ 46N50: Applications in quantum physics
☆ 46N55: Applications in statistical physics
☆ 46N60: Applications in biology and other sciences
☆ 46N99: None of the above, but in this section
□ 46Sxx: Other (nonclassical) types of functional analysis [See also 47Sxx]
☆ 46S10: Functional analysis over fields other than ${\bf R}$ or ${\bf C}$ or the quaternions; non-Archimedean functional analysis [See also 12J25, 32P05]
☆ 46S20: Nonstandard functional analysis [See also 03H05]
☆ 46S30: Constructive functional analysis [See also 03F60]
☆ 46S40: Fuzzy functional analysis [See also 03E72]
☆ 46S50: Functional analysis in probabilistic metric linear spaces
☆ 46S60: Functional analysis on superspaces (supermanifolds) or graded spaces [See also 58A50 and 58C50]
☆ 46S99: None of the above, but in this section
□ 46Txx: Nonlinear functional analysis [See also 47Hxx, 47Jxx, 58Cxx, 58Dxx]
☆ 46T05: Infinite-dimensional manifolds [See also 53Axx, 57N20, 58Bxx, 58Dxx]
☆ 46T10: Manifolds of mappings
☆ 46T12: Measure (Gaussian, cylindrical, etc.) and integrals (Feynman, path, Fresnel, etc.) on manifolds [See also 28Cxx, 46G12, 60-XX]
☆ 46T20: Continuous and differentiable maps [See also 46G05]
☆ 46T25: Holomorphic maps [See also 46G20]
☆ 46T30: Distributions and generalized functions on nonlinear spaces [See also 46Fxx]
☆ 46T99: None of the above, but in this section
• 47-XX: Operator theory
□ 47-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 47-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 47-02: Research exposition (monographs, survey articles)
□ 47-03: Historical (must also be assigned at least one classification number from Section 01)
□ 47-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 47-06: Proceedings, conferences, collections, etc.
□ 47Axx: General theory of linear operators
☆ 47A05: General (adjoints, conjugates, products, inverses, domains, ranges, etc.)
☆ 47A06: Linear relations (multivalued linear operators)
☆ 47A07: Forms (bilinear, sesquilinear, multilinear)
☆ 47A10: Spectrum, resolvent
☆ 47A11: Local spectral properties
☆ 47A12: Numerical range, numerical radius
☆ 47A13: Several-variable operator theory (spectral, Fredholm, etc.)
☆ 47A15: Invariant subspaces [See also 47A46]
☆ 47A16: Cyclic vectors, hypercyclic and chaotic operators
☆ 47A20: Dilations, extensions, compressions
☆ 47A25: Spectral sets
☆ 47A30: Norms (inequalities, more than one norm, etc.)
☆ 47A35: Ergodic theory [See also 28Dxx, 37Axx]
☆ 47A40: Scattering theory [See also 34L25, 35P25, 37K15, 58J50, 81Uxx]
☆ 47A45: Canonical models for contractions and nonselfadjoint operators
☆ 47A46: Chains (nests) of projections or of invariant subspaces, integrals along chains, etc.
☆ 47A48: Operator colligations (= nodes), vessels, linear systems, characteristic functions, realizations, etc.
☆ 47A50: Equations and inequalities involving linear operators, with vector unknowns
☆ 47A52: Ill-posed problems, regularization [See also 35R25, 47J06, 65F22, 65J20, 65L08, 65M30, 65R30]
☆ 47A53: (Semi-) Fredholm operators; index theories [See also 58B15, 58J20]
☆ 47A55: Perturbation theory [See also 47H14, 58J37, 70H09, 81Q15]
☆ 47A56: Functions whose values are linear operators (operator and matrix valued functions, etc., including analytic and meromorphic ones)
☆ 47A57: Operator methods in interpolation, moment and extension problems [See also 30E05, 42A70, 42A82, 44A60]
☆ 47A58: Operator approximation theory
☆ 47A60: Functional calculus
☆ 47A62: Equations involving linear operators, with operator unknowns
☆ 47A63: Operator inequalities
☆ 47A64: Operator means, shorted operators, etc.
☆ 47A65: Structure theory
☆ 47A66: Quasitriangular and nonquasitriangular, quasidiagonal and nonquasidiagonal operators
☆ 47A67: Representation theory
☆ 47A68: Factorization theory (including Wiener-Hopf and spectral factorizations)
☆ 47A70: (Generalized) eigenfunction expansions; rigged Hilbert spaces
☆ 47A75: Eigenvalue problems [See also 47J10, 49R05]
☆ 47A80: Tensor products of operators [See also 46M05]
☆ 47A99: None of the above, but in this section
□ 47Bxx: Special classes of linear operators
☆ 47B06: Riesz operators; eigenvalue distributions; approximation numbers, $s$-numbers, Kolmogorov numbers, entropy numbers, etc. of operators
☆ 47B07: Operators defined by compactness properties
☆ 47B10: Operators belonging to operator ideals (nuclear, $p$-summing, in the Schatten-von Neumann classes, etc.) [See also 47L20]
☆ 47B15: Hermitian and normal operators (spectral measures, functional calculus, etc.)
☆ 47B20: Subnormal operators, hyponormal operators, etc.
☆ 47B25: Symmetric and selfadjoint operators (unbounded)
☆ 47B32: Operators in reproducing-kernel Hilbert spaces (including de Branges, de Branges-Rovnyak, and other structured spaces) [See also 46E22]
☆ 47B33: Composition operators
☆ 47B34: Kernel operators
☆ 47B35: Toeplitz operators, Hankel operators, Wiener-Hopf operators [See also 45P05, 47G10 for other integral operators; see also 32A25, 32M15]
☆ 47B36: Jacobi (tridiagonal) operators (matrices) and generalizations
☆ 47B37: Operators on special spaces (weighted shifts, operators on sequence spaces, etc.)
☆ 47B38: Operators on function spaces (general)
☆ 47B39: Difference operators [See also 39A70]
☆ 47B40: Spectral operators, decomposable operators, well-bounded operators, etc.
☆ 47B44: Accretive operators, dissipative operators, etc.
☆ 47B47: Commutators, derivations, elementary operators, etc.
☆ 47B48: Operators on Banach algebras
☆ 47B49: Transformers, preservers (operators on spaces of operators)
☆ 47B50: Operators on spaces with an indefinite metric [See also 46C50]
☆ 47B60: Operators on ordered spaces
☆ 47B65: Positive operators and order-bounded operators
☆ 47B80: Random operators [See also 47H40, 60H25]
☆ 47B99: None of the above, but in this section
□ 47Cxx: Individual linear operators as elements of algebraic systems
☆ 47C05: Operators in algebras
☆ 47C10: Operators in ${}^*$-algebras
☆ 47C15: Operators in $C^*$- or von Neumann algebras
☆ 47C99: None of the above, but in this section
□ 47Dxx: Groups and semigroups of linear operators, their generalizations and applications
☆ 47D03: Groups and semigroups of linear operators {For nonlinear operators, see 47H20; see also 20M20}
☆ 47D06: One-parameter semigroups and linear evolution equations [See also 34G10, 34K30]
☆ 47D07: Markov semigroups and applications to diffusion processes {For Markov processes, see 60Jxx}
☆ 47D08: Schrödinger and Feynman-Kac semigroups
☆ 47D09: Operator sine and cosine functions and higher-order Cauchy problems [See also 34G10]
☆ 47D60: $C$-semigroups, regularized semigroups
☆ 47D62: Integrated semigroups
☆ 47D99: None of the above, but in this section
□ 47Exx: Ordinary differential operators [See also 34Bxx, 34Lxx]
☆ 47E05: Ordinary differential operators [See also 34Bxx, 34Lxx] (should also be assigned at least one other classification number in section 47)
☆ 47E99: None of the above, but in this section
□ 47Fxx: Partial differential operators [See also 35Pxx, 58Jxx]
☆ 47F05: Partial differential operators [See also 35Pxx, 58Jxx] (should also be assigned at least one other classification number in section 47)
☆ 47F99: None of the above, but in this section
□ 47Gxx: Integral, integro-differential, and pseudodifferential operators [See also 58Jxx]
☆ 47G10: Integral operators [See also 45P05]
☆ 47G20: Integro-differential operators [See also 34K30, 35R09, 35R10, 45Jxx, 45Kxx]
☆ 47G30: Pseudodifferential operators [See also 35Sxx, 58Jxx]
☆ 47G40: Potential operators [See also 31-XX]
☆ 47G99: None of the above, but in this section
□ 47Hxx: Nonlinear operators and their properties {For global and geometric aspects, see 49J53, 58-XX, especially 58Cxx}
☆ 47H04: Set-valued operators [See also 28B20, 54C60, 58C06]
☆ 47H05: Monotone operators and generalizations
☆ 47H06: Accretive operators, dissipative operators, etc.
☆ 47H07: Monotone and positive operators on ordered Banach spaces or other ordered topological vector spaces
☆ 47H08: Measures of noncompactness and condensing mappings, $K$-set contractions, etc.
☆ 47H09: Contraction-type mappings, nonexpansive mappings, $A$-proper mappings, etc.
☆ 47H10: Fixed-point theorems [See also 37C25, 54H25, 55M20, 58C30]
☆ 47H11: Degree theory [See also 55M25, 58C30]
☆ 47H14: Perturbations of nonlinear operators [See also 47A55, 58J37, 70H09, 70K60, 81Q15]
☆ 47H20: Semigroups of nonlinear operators [See also 37L05, 47J35, 54H15, 58D07]
☆ 47H25: Nonlinear ergodic theorems [See also 28Dxx, 37Axx, 47A35]
☆ 47H30: Particular nonlinear operators (superposition, Hammerstein, Nemytskiĭ, Uryson, etc.) [See also 45Gxx, 45P05]
☆ 47H40: Random operators [See also 47B80, 60H25]
☆ 47H60: Multilinear and polynomial operators [See also 46G25]
☆ 47H99: None of the above, but in this section
□ 47Jxx: Equations and inequalities involving nonlinear operators [See also 46Txx] {For global and geometric aspects, see 58-XX}
☆ 47J05: Equations involving nonlinear operators (general) [See also 47H10, 47J25]
☆ 47J06: Nonlinear ill-posed problems [See also 35R25, 47A52, 65F22, 65J20, 65L08, 65M30, 65R30]
☆ 47J07: Abstract inverse mapping and implicit function theorems [See also 46T20 and 58C15]
☆ 47J10: Nonlinear spectral theory, nonlinear eigenvalue problems [See also 49R05]
☆ 47J15: Abstract bifurcation theory [See also 34C23, 37Gxx, 58E07, 58E09]
☆ 47J20: Variational and other types of inequalities involving nonlinear operators (general) [See also 49J40]
☆ 47J22: Variational and other types of inclusions [See also 34A60, 49J21, 49K21]
☆ 47J25: Iterative procedures [See also 65J15]
☆ 47J30: Variational methods [See also 58Exx]
☆ 47J35: Nonlinear evolution equations [See also 34G20, 35K90, 35L90, 35Qxx, 35R20, 37Kxx, 37Lxx, 47H20, 58D25]
☆ 47J40: Equations with hysteresis operators [See also 34C55, 74N30]
☆ 47J99: None of the above, but in this section
□ 47Lxx: Linear spaces and algebras of operators [See also 46Lxx]
☆ 47L05: Linear spaces of operators [See also 46A32 and 46B28]
☆ 47L07: Convex sets and cones of operators [See also 46A55]
☆ 47L10: Algebras of operators on Banach spaces and other topological linear spaces
☆ 47L15: Operator algebras with symbol structure
☆ 47L20: Operator ideals [See also 47B10]
☆ 47L22: Ideals of polynomials and of multilinear mappings
☆ 47L25: Operator spaces (= matricially normed spaces) [See also 46L07]
☆ 47L30: Abstract operator algebras on Hilbert spaces
☆ 47L35: Nest algebras, CSL algebras
☆ 47L40: Limit algebras, subalgebras of $C^*$-algebras
☆ 47L45: Dual algebras; weakly closed singly generated operator algebras
☆ 47L50: Dual spaces of operator algebras
☆ 47L55: Representations of (nonselfadjoint) operator algebras
☆ 47L60: Algebras of unbounded operators; partial algebras of operators
☆ 47L65: Crossed product algebras (analytic crossed products)
☆ 47L70: Nonassociative nonselfadjoint operator algebras
☆ 47L75: Other nonselfadjoint operator algebras
☆ 47L80: Algebras of specific types of operators (Toeplitz, integral, pseudodifferential, etc.)
☆ 47L90: Applications of operator algebras to physics
☆ 47L99: None of the above, but in this section
□ 47Nxx: Miscellaneous applications of operator theory [See also 46Nxx]
☆ 47N10: Applications in optimization, convex analysis, mathematical programming, economics
☆ 47N20: Applications to differential and integral equations
☆ 47N30: Applications in probability theory and statistics
☆ 47N40: Applications in numerical analysis [See also 65Jxx]
☆ 47N50: Applications in the physical sciences
☆ 47N60: Applications in chemistry and life sciences
☆ 47N70: Applications in systems theory, circuits, and control theory
☆ 47N99: None of the above, but in this section
□ 47Sxx: Other (nonclassical) types of operator theory [See also 46Sxx]
☆ 47S10: Operator theory over fields other than ${\bf R}$, ${\bf C}$ or the quaternions; non-Archimedean operator theory
☆ 47S20: Nonstandard operator theory [See also 03H05]
☆ 47S30: Constructive operator theory [See also 03F60]
☆ 47S40: Fuzzy operator theory [See also 03E72]
☆ 47S50: Operator theory in probabilistic metric linear spaces [See also 54E70]
☆ 47S99: None of the above, but in this section
• 49-XX: Calculus of variations and optimal control; optimization [See also 34H05, 34K35, 65Kxx, 90Cxx, 93-XX]
□ 49-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 49-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 49-02: Research exposition (monographs, survey articles)
□ 49-03: Historical (must also be assigned at least one classification number from Section 01)
□ 49-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 49-06: Proceedings, conferences, collections, etc.
□ 49Jxx: Existence theories
☆ 49J05: Free problems in one independent variable
☆ 49J10: Free problems in two or more independent variables
☆ 49J15: Optimal control problems involving ordinary differential equations
☆ 49J20: Optimal control problems involving partial differential equations
☆ 49J21: Optimal control problems involving relations other than differential equations
☆ 49J27: Problems in abstract spaces [See also 90C48, 93C25]
☆ 49J30: Optimal solutions belonging to restricted classes (Lipschitz controls, bang-bang controls, etc.)
☆ 49J35: Minimax problems
☆ 49J40: Variational methods including variational inequalities [See also 47J20]
☆ 49J45: Methods involving semicontinuity and convergence; relaxation
☆ 49J50: Fréchet and Gateaux differentiability [See also 46G05, 58C20]
☆ 49J52: Nonsmooth analysis [See also 46G05, 58C50, 90C56]
☆ 49J53: Set-valued and variational analysis [See also 28B20, 47H04, 54C60, 58C06]
☆ 49J55: Problems involving randomness [See also 93E20]
☆ 49J99: None of the above, but in this section
□ 49Kxx: Optimality conditions
☆ 49K05: Free problems in one independent variable
☆ 49K10: Free problems in two or more independent variables
☆ 49K15: Problems involving ordinary differential equations
☆ 49K20: Problems involving partial differential equations
☆ 49K21: Problems involving relations other than differential equations
☆ 49K27: Problems in abstract spaces [See also 90C48, 93C25]
☆ 49K30: Optimal solutions belonging to restricted classes
☆ 49K35: Minimax problems
☆ 49K40: Sensitivity, stability, well-posedness [See also 90C31]
☆ 49K45: Problems involving randomness [See also 93E20]
☆ 49K99: None of the above, but in this section
□ 49Lxx: Hamilton-Jacobi theories, including dynamic programming
☆ 49L20: Dynamic programming method
☆ 49L25: Viscosity solutions
☆ 49L99: None of the above, but in this section
□ 49Mxx: Numerical methods [See also 90Cxx, 65Kxx]
☆ 49M05: Methods based on necessary conditions
☆ 49M15: Newton-type methods
☆ 49M20: Methods of relaxation type
☆ 49M25: Discrete approximations
☆ 49M27: Decomposition methods
☆ 49M29: Methods involving duality
☆ 49M30: Other methods
☆ 49M37: Methods of nonlinear programming type [See also 90C30, 65Kxx]
☆ 49M99: None of the above, but in this section
□ 49Nxx: Miscellaneous topics
☆ 49N05: Linear optimal control problems [See also 93C05]
☆ 49N10: Linear-quadratic problems
☆ 49N15: Duality theory
☆ 49N20: Periodic optimization
☆ 49N25: Impulsive optimal control problems
☆ 49N30: Problems with incomplete information [See also 93C41]
☆ 49N35: Optimal feedback synthesis [See also 93B52]
☆ 49N45: Inverse problems
☆ 49N60: Regularity of solutions
☆ 49N70: Differential games
☆ 49N75: Pursuit and evasion games
☆ 49N90: Applications of optimal control and differential games [See also 90C90, 93C95]
☆ 49N99: None of the above, but in this section
□ 49Qxx: Manifolds [See also 58Exx]
☆ 49Q05: Minimal surfaces [See also 53A10, 58E12]
☆ 49Q10: Optimization of shapes other than minimal surfaces [See also 90C90]
☆ 49Q12: Sensitivity analysis
☆ 49Q15: Geometric measure and integration theory, integral and normal currents [See also 28A75, 32C30, 58A25, 58C35]
☆ 49Q20: Variational problems in a geometric measure-theoretic setting
☆ 49Q99: None of the above, but in this section
□ 49Rxx: Variational methods for eigenvalues of operators [See also 47A75]
☆ 49R05: Variational methods for eigenvalues of operators [See also 47A75] (should also be assigned at least one other classification number in Section 49)
☆ 49R99: None of the above, but in this section
□ 49Sxx: Variational principles of physics
☆ 49S05: Variational principles of physics (should also be assigned at least one other classification number in section 49)
☆ 49S99: None of the above, but in this section
• 51-XX: Geometry {For algebraic geometry, see 14-XX}
□ 51-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 51-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 51-02: Research exposition (monographs, survey articles)
□ 51-03: Historical (must also be assigned at least one classification number from Section 01)
□ 51-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 51-06: Proceedings, conferences, collections, etc.
□ 51Axx: Linear incidence geometry
☆ 51A05: General theory and projective geometries
☆ 51A10: Homomorphism, automorphism and dualities
☆ 51A15: Structures with parallelism
☆ 51A20: Configuration theorems
☆ 51A25: Algebraization [See also 12Kxx, 20N05]
☆ 51A30: Desarguesian and Pappian geometries
☆ 51A35: Non-Desarguesian affine and projective planes
☆ 51A40: Translation planes and spreads
☆ 51A45: Incidence structures imbeddable into projective geometries
☆ 51A50: Polar geometry, symplectic spaces, orthogonal spaces
☆ 51A99: None of the above, but in this section
□ 51Bxx: Nonlinear incidence geometry
☆ 51B05: General theory
☆ 51B10: Möbius geometries
☆ 51B15: Laguerre geometries
☆ 51B20: Minkowski geometries
☆ 51B25: Lie geometries
☆ 51B99: None of the above, but in this section
□ 51Cxx: Ring geometry (Hjelmslev, Barbilian, etc.)
☆ 51C05: Ring geometry (Hjelmslev, Barbilian, etc.)
☆ 51C99: None of the above, but in this section
□ 51Dxx: Geometric closure systems
☆ 51D05: Abstract (Maeda) geometries
☆ 51D10: Abstract geometries with exchange axiom
☆ 51D15: Abstract geometries with parallelism
☆ 51D20: Combinatorial geometries [See also 05B25, 05B35]
☆ 51D25: Lattices of subspaces [See also 05B35]
☆ 51D30: Continuous geometries and related topics [See also 06Cxx]
☆ 51D99: None of the above, but in this section
□ 51Exx: Finite geometry and special incidence structures
☆ 51E05: General block designs [See also 05B05]
☆ 51E10: Steiner systems
☆ 51E12: Generalized quadrangles, generalized polygons
☆ 51E14: Finite partial geometries (general), nets, partial spreads
☆ 51E15: Affine and projective planes
☆ 51E20: Combinatorial structures in finite projective spaces [See also 05Bxx]
☆ 51E21: Blocking sets, ovals, $k$-arcs
☆ 51E22: Linear codes and caps in Galois spaces [See also 94B05]
☆ 51E23: Spreads and packing problems
☆ 51E24: Buildings and the geometry of diagrams
☆ 51E25: Other finite nonlinear geometries
☆ 51E26: Other finite linear geometries
☆ 51E30: Other finite incidence structures [See also 05B30]
☆ 51E99: None of the above, but in this section
□ 51Fxx: Metric geometry
☆ 51F05: Absolute planes
☆ 51F10: Absolute spaces
☆ 51F15: Reflection groups, reflection geometries [See also 20H10, 20H15; for Coxeter groups, see 20F55]
☆ 51F20: Congruence and orthogonality [See also 20H05]
☆ 51F25: Orthogonal and unitary groups [See also 20H05]
☆ 51F99: None of the above, but in this section
□ 51Gxx: Ordered geometries (ordered incidence structures, etc.)
☆ 51G05: Ordered geometries (ordered incidence structures, etc.)
☆ 51G99: None of the above, but in this section
□ 51Hxx: Topological geometry
☆ 51H05: General theory
☆ 51H10: Topological linear incidence structures
☆ 51H15: Topological nonlinear incidence structures
☆ 51H20: Topological geometries on manifolds [See also 57-XX]
☆ 51H25: Geometries with differentiable structure [See also 53Cxx, 53C70]
☆ 51H30: Geometries with algebraic manifold structure [See also 14-XX]
☆ 51H99: None of the above, but in this section
□ 51Jxx: Incidence groups
☆ 51J05: General theory
☆ 51J10: Projective incidence groups
☆ 51J15: Kinematic spaces
☆ 51J20: Representation by near-fields and near-algebras [See also 12K05, 16Y30]
☆ 51J99: None of the above, but in this section
□ 51Kxx: Distance geometry
☆ 51K05: General theory
☆ 51K10: Synthetic differential geometry
☆ 51K99: None of the above, but in this section
□ 51Lxx: Geometric order structures [See also 53C75]
☆ 51L05: Geometry of orders of nondifferentiable curves
☆ 51L10: Directly differentiable curves
☆ 51L15: $n$-vertex theorems via direct methods
☆ 51L20: Geometry of orders of surfaces
☆ 51L99: None of the above, but in this section
□ 51Mxx: Real and complex geometry
☆ 51M04: Elementary problems in Euclidean geometries
☆ 51M05: Euclidean geometries (general) and generalizations
☆ 51M09: Elementary problems in hyperbolic and elliptic geometries
☆ 51M10: Hyperbolic and elliptic geometries (general) and generalizations
☆ 51M15: Geometric constructions
☆ 51M16: Inequalities and extremum problems {For convex problems, see 52A40}
☆ 51M20: Polyhedra and polytopes; regular figures, division of spaces [See also 51F15]
☆ 51M25: Length, area and volume [See also 26B15]
☆ 51M30: Line geometries and their generalizations [See also 53A25]
☆ 51M35: Synthetic treatment of fundamental manifolds in projective geometries (Grassmannians, Veronesians and their generalizations) [See also 14M15]
☆ 51M99: None of the above, but in this section
□ 51Nxx: Analytic and descriptive geometry
☆ 51N05: Descriptive geometry [See also 65D17, 68U07]
☆ 51N10: Affine analytic geometry
☆ 51N15: Projective analytic geometry
☆ 51N20: Euclidean analytic geometry
☆ 51N25: Analytic geometry with other transformation groups
☆ 51N30: Geometry of classical groups [See also 20Gxx, 14L35]
☆ 51N35: Questions of classical algebraic geometry [See also 14Nxx]
☆ 51N99: None of the above, but in this section
□ 51Pxx: Geometry and physics (should also be assigned at least one other classification number from Sections 70--86)
☆ 51P05: Geometry and physics (should also be assigned at least one other classification number from Sections 70--86)
☆ 51P99: None of the above, but in this section
• 52-XX: Convex and discrete geometry
□ 52-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 52-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 52-02: Research exposition (monographs, survey articles)
□ 52-03: Historical (must also be assigned at least one classification number from Section 01)
□ 52-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 52-06: Proceedings, conferences, collections, etc.
□ 52Axx: General convexity
☆ 52A01: Axiomatic and generalized convexity
☆ 52A05: Convex sets without dimension restrictions
☆ 52A07: Convex sets in topological vector spaces [See also 46A55]
☆ 52A10: Convex sets in $2$ dimensions (including convex curves) [See also 53A04]
☆ 52A15: Convex sets in $3$ dimensions (including convex surfaces) [See also 53A05, 53C45]
☆ 52A20: Convex sets in $n$ dimensions (including convex hypersurfaces) [See also 53A07, 53C45]
☆ 52A21: Finite-dimensional Banach spaces (including special norms, zonoids, etc.) [See also 46Bxx]
☆ 52A22: Random convex sets and integral geometry [See also 53C65, 60D05]
☆ 52A23: Asymptotic theory of convex bodies [See also 46B06]
☆ 52A27: Approximation by convex sets
☆ 52A30: Variants of convex sets (star-shaped, ($m, n$)-convex, etc.)
☆ 52A35: Helly-type theorems and geometric transversal theory
☆ 52A37: Other problems of combinatorial convexity
☆ 52A38: Length, area, volume [See also 26B15, 28A75, 49Q20]
☆ 52A39: Mixed volumes and related topics
☆ 52A40: Inequalities and extremum problems
☆ 52A41: Convex functions and convex programs [See also 26B25, 90C25]
☆ 52A55: Spherical and hyperbolic convexity
☆ 52A99: None of the above, but in this section
□ 52Bxx: Polytopes and polyhedra
☆ 52B05: Combinatorial properties (number of faces, shortest paths, etc.) [See also 05Cxx]
☆ 52B10: Three-dimensional polytopes
☆ 52B11: $n$-dimensional polytopes
☆ 52B12: Special polytopes (linear programming, centrally symmetric, etc.)
☆ 52B15: Symmetry properties of polytopes
☆ 52B20: Lattice polytopes (including relations with commutative algebra and algebraic geometry) [See also 06A11, 13F20, 13Hxx]
☆ 52B22: Shellability
☆ 52B35: Gale and other diagrams
☆ 52B40: Matroids (realizations in the context of convex polytopes, convexity in combinatorial structures, etc.) [See also 05B35, 52Cxx]
☆ 52B45: Dissections and valuations (Hilbert's third problem, etc.)
☆ 52B55: Computational aspects related to convexity {For computational geometry and algorithms, see 68Q25, 68U05; for numerical algorithms, see 65Yxx} [See also 68Uxx]
☆ 52B60: Isoperimetric problems for polytopes
☆ 52B70: Polyhedral manifolds
☆ 52B99: None of the above, but in this section
□ 52Cxx: Discrete geometry
☆ 52C05: Lattices and convex bodies in $2$ dimensions [See also 11H06, 11H31, 11P21]
☆ 52C07: Lattices and convex bodies in $n$ dimensions [See also 11H06, 11H31, 11P21]
☆ 52C10: Erdős problems and related topics of discrete geometry [See also 11Hxx]
☆ 52C15: Packing and covering in $2$ dimensions [See also 05B40, 11H31]
☆ 52C17: Packing and covering in $n$ dimensions [See also 05B40, 11H31]
☆ 52C20: Tilings in $2$ dimensions [See also 05B45, 51M20]
☆ 52C22: Tilings in $n$ dimensions [See also 05B45, 51M20]
☆ 52C23: Quasicrystals, aperiodic tilings
☆ 52C25: Rigidity and flexibility of structures [See also 70B15]
☆ 52C26: Circle packings and discrete conformal geometry
☆ 52C30: Planar arrangements of lines and pseudolines
☆ 52C35: Arrangements of points, flats, hyperplanes [See also 32S22]
☆ 52C40: Oriented matroids
☆ 52C45: Combinatorial complexity of geometric structures [See also 68U05]
☆ 52C99: None of the above, but in this section
• 53-XX: Differential geometry {For differential topology, see 57Rxx. For foundational questions of differentiable manifolds, see 58Axx}
□ 53-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 53-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 53-02: Research exposition (monographs, survey articles)
□ 53-03: Historical (must also be assigned at least one classification number from Section 01)
□ 53-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 53-06: Proceedings, conferences, collections, etc.
□ 53Axx: Classical differential geometry
☆ 53A04: Curves in Euclidean space
☆ 53A05: Surfaces in Euclidean space
☆ 53A07: Higher-dimensional and -codimensional surfaces in Euclidean $n$-space
☆ 53A10: Minimal surfaces, surfaces with prescribed mean curvature [See also 49Q05, 49Q10, 53C42]
☆ 53A15: Affine differential geometry
☆ 53A17: Kinematics
☆ 53A20: Projective differential geometry
☆ 53A25: Differential line geometry
☆ 53A30: Conformal differential geometry
☆ 53A35: Non-Euclidean differential geometry
☆ 53A40: Other special differential geometries
☆ 53A45: Vector and tensor analysis
☆ 53A55: Differential invariants (local theory), geometric objects
☆ 53A60: Geometry of webs [See also 14C21, 20N05]
☆ 53A99: None of the above, but in this section
□ 53Bxx: Local differential geometry
☆ 53B05: Linear and affine connections
☆ 53B10: Projective connections
☆ 53B15: Other connections
☆ 53B20: Local Riemannian geometry
☆ 53B21: Methods of Riemannian geometry
☆ 53B25: Local submanifolds [See also 53C40]
☆ 53B30: Lorentz metrics, indefinite metrics
☆ 53B35: Hermitian and Kählerian structures [See also 32Cxx]
☆ 53B40: Finsler spaces and generalizations (areal metrics)
☆ 53B50: Applications to physics
☆ 53B99: None of the above, but in this section
□ 53Cxx: Global differential geometry [See also 51H25, 58-XX; for related bundle theory, see 55Rxx, 57Rxx]
☆ 53C05: Connections, general theory
☆ 53C07: Special connections and metrics on vector bundles (Hermite-Einstein-Yang-Mills) [See also 32Q20]
☆ 53C08: Gerbes, differential characters: differential geometric aspects
☆ 53C10: $G$-structures
☆ 53C12: Foliations (differential geometric aspects) [See also 57R30, 57R32]
☆ 53C15: General geometric structures on manifolds (almost complex, almost product structures, etc.)
☆ 53C17: Sub-Riemannian geometry
☆ 53C20: Global Riemannian geometry, including pinching [See also 31C12, 58B20]
☆ 53C21: Methods of Riemannian geometry, including PDE methods; curvature restrictions [See also 58J60]
☆ 53C22: Geodesics [See also 58E10]
☆ 53C23: Global geometric and topological methods (à la Gromov); differential geometric analysis on metric spaces
☆ 53C24: Rigidity results
☆ 53C25: Special Riemannian manifolds (Einstein, Sasakian, etc.)
☆ 53C26: Hyper-Kähler and quaternionic Kähler geometry, “special” geometry
☆ 53C27: Spin and Spin${}^c$ geometry
☆ 53C28: Twistor methods [See also 32L25]
☆ 53C29: Issues of holonomy
☆ 53C30: Homogeneous manifolds [See also 14M15, 14M17, 32M10, 57T15]
☆ 53C35: Symmetric spaces [See also 32M15, 57T15]
☆ 53C38: Calibrations and calibrated geometries
☆ 53C40: Global submanifolds [See also 53B25]
☆ 53C42: Immersions (minimal, prescribed curvature, tight, etc.) [See also 49Q05, 49Q10, 53A10, 57R40, 57R42]
☆ 53C43: Differential geometric aspects of harmonic maps [See also 58E20]
☆ 53C44: Geometric evolution equations (mean curvature flow, Ricci flow, etc.)
☆ 53C45: Global surface theory (convex surfaces à la A. D. Aleksandrov)
☆ 53C50: Lorentz manifolds, manifolds with indefinite metrics
☆ 53C55: Hermitian and Kählerian manifolds [See also 32Cxx]
☆ 53C56: Other complex differential geometry [See also 32Cxx]
☆ 53C60: Finsler spaces and generalizations (areal metrics) [See also 58B20]
☆ 53C65: Integral geometry [See also 52A22, 60D05]; differential forms, currents, etc. [See mainly 58Axx]
☆ 53C70: Direct methods ($G$-spaces of Busemann, etc.)
☆ 53C75: Geometric orders, order geometry [See also 51Lxx]
☆ 53C80: Applications to physics
☆ 53C99: None of the above, but in this section
□ 53Dxx: Symplectic geometry, contact geometry [See also 37Jxx, 70Gxx, 70Hxx]
☆ 53D05: Symplectic manifolds, general
☆ 53D10: Contact manifolds, general
☆ 53D12: Lagrangian submanifolds; Maslov index
☆ 53D15: Almost contact and almost symplectic manifolds
☆ 53D17: Poisson manifolds; Poisson groupoids and algebroids
☆ 53D18: Generalized geometries (à la Hitchin)
☆ 53D20: Momentum maps; symplectic reduction
☆ 53D22: Canonical transformations
☆ 53D25: Geodesic flows
☆ 53D30: Symplectic structures of moduli spaces
☆ 53D35: Global theory of symplectic and contact manifolds [See also 57Rxx]
☆ 53D37: Mirror symmetry, symplectic aspects; homological mirror symmetry; Fukaya category [See also 14J33]
☆ 53D40: Floer homology and cohomology, symplectic aspects
☆ 53D42: Symplectic field theory; contact homology
☆ 53D45: Gromov-Witten invariants, quantum cohomology, Frobenius manifolds [See also 14N35]
☆ 53D50: Geometric quantization
☆ 53D55: Deformation quantization, star products
☆ 53D99: None of the above, but in this section
□ 53Zxx: Applications to physics
☆ 53Z05: Applications to physics
☆ 53Z99: None of the above, but in this section
• 54-XX: General topology {For the topology of manifolds of all dimensions, see 57Nxx}
□ 54-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 54-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 54-02: Research exposition (monographs, survey articles)
□ 54-03: Historical (must also be assigned at least one classification number from Section 01)
□ 54-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 54-06: Proceedings, conferences, collections, etc.
□ 54Axx: Generalities
☆ 54A05: Topological spaces and generalizations (closure spaces, etc.)
☆ 54A10: Several topologies on one set (change of topology, comparison of topologies, lattices of topologies)
☆ 54A15: Syntopogeneous structures
☆ 54A20: Convergence in general topology (sequences, filters, limits, convergence spaces, etc.)
☆ 54A25: Cardinality properties (cardinal functions and inequalities, discrete subsets) [See also 03Exx] {For ultrafilters, see 54D80}
☆ 54A35: Consistency and independence results [See also 03E35]
☆ 54A40: Fuzzy topology [See also 03E72]
☆ 54A99: None of the above, but in this section
□ 54Bxx: Basic constructions
☆ 54B05: Subspaces
☆ 54B10: Product spaces
☆ 54B15: Quotient spaces, decompositions
☆ 54B17: Adjunction spaces and similar constructions
☆ 54B20: Hyperspaces
☆ 54B30: Categorical methods [See also 18B30]
☆ 54B35: Spectra
☆ 54B40: Presheaves and sheaves [See also 18F20]
☆ 54B99: None of the above, but in this section
□ 54Cxx: Maps and general types of spaces defined by maps
☆ 54C05: Continuous maps
☆ 54C08: Weak and generalized continuity
☆ 54C10: Special maps on topological spaces (open, closed, perfect, etc.)
☆ 54C15: Retraction
☆ 54C20: Extension of maps
☆ 54C25: Embedding
☆ 54C30: Real-valued functions [See also 26-XX]
☆ 54C35: Function spaces [See also 46Exx, 58D15]
☆ 54C40: Algebraic properties of function spaces [See also 46J10]
☆ 54C45: $C$- and $C^*$-embedding
☆ 54C50: Special sets defined by functions [See also 26A21]
☆ 54C55: Absolute neighborhood extensor, absolute extensor, absolute neighborhood retract (ANR), absolute retract spaces (general properties) [See also 55M15]
☆ 54C56: Shape theory [See also 55P55, 57N25]
☆ 54C60: Set-valued maps [See also 26E25, 28B20, 47H04, 58C06]
☆ 54C65: Selections [See also 28B20]
☆ 54C70: Entropy
☆ 54C99: None of the above, but in this section
□ 54Dxx: Fairly general properties
☆ 54D05: Connected and locally connected spaces (general aspects)
☆ 54D10: Lower separation axioms ($T_0$--$T_3$, etc.)
☆ 54D15: Higher separation axioms (completely regular, normal, perfectly or collectionwise normal, etc.)
☆ 54D20: Noncompact covering properties (paracompact, Lindelöf, etc.)
☆ 54D25: “$P$-minimal” and “$P$-closed” spaces
☆ 54D30: Compactness
☆ 54D35: Extensions of spaces (compactifications, supercompactifications, completions, etc.)
☆ 54D40: Remainders
☆ 54D45: Local compactness, $\sigma$-compactness
☆ 54D50: $k$-spaces
☆ 54D55: Sequential spaces
☆ 54D60: Realcompactness and realcompactification
☆ 54D65: Separability
☆ 54D70: Base properties
☆ 54D80: Special constructions of spaces (spaces of ultrafilters, etc.)
☆ 54D99: None of the above, but in this section
□ 54Exx: Spaces with richer structures
☆ 54E05: Proximity structures and generalizations
☆ 54E15: Uniform structures and generalizations
☆ 54E17: Nearness spaces
☆ 54E18: $p$-spaces, $M$-spaces, $\sigma$-spaces, etc.
☆ 54E20: Stratifiable spaces, cosmic spaces, etc.
☆ 54E25: Semimetric spaces
☆ 54E30: Moore spaces
☆ 54E35: Metric spaces, metrizability
☆ 54E40: Special maps on metric spaces
☆ 54E45: Compact (locally compact) metric spaces
☆ 54E50: Complete metric spaces
☆ 54E52: Baire category, Baire spaces
☆ 54E55: Bitopologies
☆ 54E70: Probabilistic metric spaces
☆ 54E99: None of the above, but in this section
□ 54Fxx: Special properties
☆ 54F05: Linearly ordered topological spaces, generalized ordered spaces, and partially ordered spaces [See also 06B30, 06F30]
☆ 54F15: Continua and generalizations
☆ 54F35: Higher-dimensional local connectedness [See also 55Mxx, 55Nxx]
☆ 54F45: Dimension theory [See also 55M10]
☆ 54F50: Spaces of dimension $\leq 1$; curves, dendrites [See also 26A03]
☆ 54F55: Unicoherence, multicoherence
☆ 54F65: Topological characterizations of particular spaces
☆ 54F99: None of the above, but in this section
□ 54Gxx: Peculiar spaces
☆ 54G05: Extremally disconnected spaces, $F$-spaces, etc.
☆ 54G10: $P$-spaces
☆ 54G12: Scattered spaces
☆ 54G15: Pathological spaces
☆ 54G20: Counterexamples
☆ 54G99: None of the above, but in this section
□ 54Hxx: Connections with other structures, applications
☆ 54H05: Descriptive set theory (topological aspects of Borel, analytic, projective, etc. sets) [See also 03E15, 26A21, 28A05]
☆ 54H10: Topological representations of algebraic systems [See also 22-XX]
☆ 54H11: Topological groups [See also 22A05]
☆ 54H12: Topological lattices, etc. [See also 06B30, 06F30]
☆ 54H13: Topological fields, rings, etc. [See also 12Jxx] {For algebraic aspects, see 13Jxx, 16W80}
☆ 54H15: Transformation groups and semigroups [See also 20M20, 22-XX, 57Sxx]
☆ 54H20: Topological dynamics [See also 28Dxx, 37Bxx]
☆ 54H25: Fixed-point and coincidence theorems [See also 47H10, 55M20]
☆ 54H99: None of the above, but in this section
□ 54Jxx: Nonstandard topology [See also 03H05]
☆ 54J05: Nonstandard topology [See also 03H05]
☆ 54J99: None of the above, but in this section
• 55-XX: Algebraic topology
□ 55-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 55-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 55-02: Research exposition (monographs, survey articles)
□ 55-03: Historical (must also be assigned at least one classification number from Section 01)
□ 55-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 55-06: Proceedings, conferences, collections, etc.
□ 55Mxx: Classical topics {For the topology of Euclidean spaces and manifolds, see 57Nxx}
☆ 55M05: Duality
☆ 55M10: Dimension theory [See also 54F45]
☆ 55M15: Absolute neighborhood retracts [See also 54C55]
☆ 55M20: Fixed points and coincidences [See also 54H25]
☆ 55M25: Degree, winding number
☆ 55M30: Ljusternik-Schnirelman (Lyusternik-Shnirel'man) category of a space
☆ 55M35: Finite groups of transformations (including Smith theory) [See also 57S17]
☆ 55M99: None of the above, but in this section
□ 55Nxx: Homology and cohomology theories [See also 57Txx]
☆ 55N05: Čech types
☆ 55N07: Steenrod-Sitnikov homologies
☆ 55N10: Singular theory
☆ 55N15: $K$-theory [See also 19Lxx] {For algebraic $K$-theory, see 18F25, 19-XX}
☆ 55N20: Generalized (extraordinary) homology and cohomology theories
☆ 55N22: Bordism and cobordism theories, formal group laws [See also 14L05, 19L41, 57R75, 57R77, 57R85, 57R90]
☆ 55N25: Homology with local coefficients, equivariant cohomology
☆ 55N30: Sheaf cohomology [See also 18F20, 32C35, 32L10]
☆ 55N32: Orbifold cohomology
☆ 55N33: Intersection homology and cohomology
☆ 55N34: Elliptic cohomology
☆ 55N35: Other homology theories
☆ 55N40: Axioms for homology theory and uniqueness theorems
☆ 55N45: Products and intersections
☆ 55N91: Equivariant homology and cohomology [See also 19L47]
☆ 55N99: None of the above, but in this section
□ 55Pxx: Homotopy theory {For simple homotopy type, see 57Q10}
☆ 55P05: Homotopy extension properties, cofibrations
☆ 55P10: Homotopy equivalences
☆ 55P15: Classification of homotopy type
☆ 55P20: Eilenberg-Mac Lane spaces
☆ 55P25: Spanier-Whitehead duality
☆ 55P30: Eckmann-Hilton duality
☆ 55P35: Loop spaces
☆ 55P40: Suspensions
☆ 55P42: Stable homotopy theory, spectra
☆ 55P43: Spectra with additional structure ($E_\infty$, $A_\infty$, ring spectra, etc.)
☆ 55P45: $H$-spaces and duals
☆ 55P47: Infinite loop spaces
☆ 55P48: Loop space machines, operads [See also 18D50]
☆ 55P50: String topology
☆ 55P55: Shape theory [See also 54C56, 55Q07]
☆ 55P57: Proper homotopy theory
☆ 55P60: Localization and completion
☆ 55P62: Rational homotopy theory
☆ 55P65: Homotopy functors
☆ 55P91: Equivariant homotopy theory [See also 19L47]
☆ 55P92: Relations between equivariant and nonequivariant homotopy theory
☆ 55P99: None of the above, but in this section
□ 55Qxx: Homotopy groups
☆ 55Q05: Homotopy groups, general; sets of homotopy classes
☆ 55Q07: Shape groups
☆ 55Q10: Stable homotopy groups
☆ 55Q15: Whitehead products and generalizations
☆ 55Q20: Homotopy groups of wedges, joins, and simple spaces
☆ 55Q25: Hopf invariants
☆ 55Q35: Operations in homotopy groups
☆ 55Q40: Homotopy groups of spheres
☆ 55Q45: Stable homotopy of spheres
☆ 55Q50: $J$-morphism [See also 19L20]
☆ 55Q51: $v_n$-periodicity
☆ 55Q52: Homotopy groups of special spaces
☆ 55Q55: Cohomotopy groups
☆ 55Q70: Homotopy groups of special types [See also 55N05, 55N07]
☆ 55Q91: Equivariant homotopy groups [See also 19L47]
☆ 55Q99: None of the above, but in this section
□ 55Rxx: Fiber spaces and bundles [See also 18F15, 32Lxx, 46M20, 57R20, 57R22, 57R25]
☆ 55R05: Fiber spaces
☆ 55R10: Fiber bundles
☆ 55R12: Transfer
☆ 55R15: Classification
☆ 55R20: Spectral sequences and homology of fiber spaces [See also 55Txx]
☆ 55R25: Sphere bundles and vector bundles
☆ 55R35: Classifying spaces of groups and $H$-spaces
☆ 55R37: Maps between classifying spaces
☆ 55R40: Homology of classifying spaces, characteristic classes [See also 57Txx, 57R20]
☆ 55R45: Homology and homotopy of $B{\rm O}$ and $B{\rm U}$; Bott periodicity
☆ 55R50: Stable classes of vector space bundles, $K$-theory [See also 19Lxx] {For algebraic $K$-theory, see 18F25, 19-XX}
☆ 55R55: Fiberings with singularities
☆ 55R60: Microbundles and block bundles [See also 57N55, 57Q50]
☆ 55R65: Generalizations of fiber spaces and bundles
☆ 55R70: Fibrewise topology
☆ 55R80: Discriminantal varieties, configuration spaces
☆ 55R91: Equivariant fiber spaces and bundles [See also 19L47]
☆ 55R99: None of the above, but in this section
□ 55Sxx: Operations and obstructions
☆ 55S05: Primary cohomology operations
☆ 55S10: Steenrod algebra
☆ 55S12: Dyer-Lashof operations
☆ 55S15: Symmetric products, cyclic products
☆ 55S20: Secondary and higher cohomology operations
☆ 55S25: $K$-theory operations and generalized cohomology operations [See also 19D55, 19Lxx]
☆ 55S30: Massey products
☆ 55S35: Obstruction theory
☆ 55S36: Extension and compression of mappings
☆ 55S37: Classification of mappings
☆ 55S40: Sectioning fiber spaces and bundles
☆ 55S45: Postnikov systems, $k$-invariants
☆ 55S91: Equivariant operations and obstructions [See also 19L47]
☆ 55S99: None of the above, but in this section
□ 55Txx: Spectral sequences [See also 18G40, 55R20]
☆ 55T05: General
☆ 55T10: Serre spectral sequences
☆ 55T15: Adams spectral sequences
☆ 55T20: Eilenberg-Moore spectral sequences [See also 57T35]
☆ 55T25: Generalized cohomology
☆ 55T99: None of the above, but in this section
□ 55Uxx: Applied homological algebra and category theory [See also 18Gxx]
☆ 55U05: Abstract complexes
☆ 55U10: Simplicial sets and complexes
☆ 55U15: Chain complexes
☆ 55U20: Universal coefficient theorems, Bockstein operator
☆ 55U25: Homology of a product, Künneth formula
☆ 55U30: Duality
☆ 55U35: Abstract and axiomatic homotopy theory
☆ 55U40: Topological categories, foundations of homotopy theory
☆ 55U99: None of the above, but in this section
• 57-XX: Manifolds and cell complexes {For complex manifolds, see 32Qxx}
□ 57-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 57-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 57-02: Research exposition (monographs, survey articles)
□ 57-03: Historical (must also be assigned at least one classification number from Section 01)
□ 57-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 57-06: Proceedings, conferences, collections, etc.
□ 57Mxx: Low-dimensional topology
☆ 57M05: Fundamental group, presentations, free differential calculus
☆ 57M07: Topological methods in group theory
☆ 57M10: Covering spaces
☆ 57M12: Special coverings, e.g. branched
☆ 57M15: Relations with graph theory [See also 05Cxx]
☆ 57M20: Two-dimensional complexes
☆ 57M25: Knots and links in $S^3$ {For higher dimensions, see 57Q45}
☆ 57M27: Invariants of knots and 3-manifolds
☆ 57M30: Wild knots and surfaces, etc., wild embeddings
☆ 57M35: Dehn's lemma, sphere theorem, loop theorem, asphericity
☆ 57M40: Characterizations of $E^3$ and $S^3$ (Poincaré conjecture) [See also 57N12]
☆ 57M50: Geometric structures on low-dimensional manifolds
☆ 57M60: Group actions in low dimensions
☆ 57M99: None of the above, but in this section
□ 57Nxx: Topological manifolds
☆ 57N05: Topology of $E^2$, $2$-manifolds
☆ 57N10: Topology of general $3$-manifolds [See also 57Mxx]
☆ 57N12: Topology of $E^3$ and $S^3$ [See also 57M40]
☆ 57N13: Topology of $E^4$, $4$-manifolds [See also 14Jxx, 32Jxx]
☆ 57N15: Topology of $E^n$, $n$-manifolds ($4 < n < \infty$)
☆ 57N16: Geometric structures on manifolds [See also 57M50]
☆ 57N17: Topology of topological vector spaces
☆ 57N20: Topology of infinite-dimensional manifolds [See also 58Bxx]
☆ 57N25: Shapes [See also 54C56, 55P55, 55Q07]
☆ 57N30: Engulfing
☆ 57N35: Embeddings and immersions
☆ 57N37: Isotopy and pseudo-isotopy
☆ 57N40: Neighborhoods of submanifolds
☆ 57N45: Flatness and tameness
☆ 57N50: $S^{n-1}\subset E^n$, Schoenflies problem
☆ 57N55: Microbundles and block bundles [See also 55R60, 57Q50]
☆ 57N60: Cellularity
☆ 57N65: Algebraic topology of manifolds
☆ 57N70: Cobordism and concordance
☆ 57N75: General position and transversality
☆ 57N80: Stratifications
☆ 57N99: None of the above, but in this section
□ 57Pxx: Generalized manifolds [See also 18F15]
☆ 57P05: Local properties of generalized manifolds
☆ 57P10: Poincaré duality spaces
☆ 57P99: None of the above, but in this section
□ 57Qxx: PL-topology
☆ 57Q05: General topology of complexes
☆ 57Q10: Simple homotopy type, Whitehead torsion, Reidemeister-Franz torsion, etc. [See also 19B28]
☆ 57Q12: Wall finiteness obstruction for CW-complexes
☆ 57Q15: Triangulating manifolds
☆ 57Q20: Cobordism
☆ 57Q25: Comparison of PL-structures: classification, Hauptvermutung
☆ 57Q30: Engulfing
☆ 57Q35: Embeddings and immersions
☆ 57Q37: Isotopy
☆ 57Q40: Regular neighborhoods
☆ 57Q45: Knots and links (in high dimensions) {For the low-dimensional case, see 57M25}
☆ 57Q50: Microbundles and block bundles [See also 55R60, 57N55]
☆ 57Q55: Approximations
☆ 57Q60: Cobordism and concordance
☆ 57Q65: General position and transversality
☆ 57Q91: Equivariant PL-topology
☆ 57Q99: None of the above, but in this section
□ 57Rxx: Differential topology {For foundational questions of differentiable manifolds, see 58Axx; for infinite-dimensional manifolds, see 58Bxx}
☆ 57R05: Triangulating
☆ 57R10: Smoothing
☆ 57R12: Smooth approximations
☆ 57R15: Specialized structures on manifolds (spin manifolds, framed manifolds, etc.)
☆ 57R17: Symplectic and contact topology
☆ 57R18: Topology and geometry of orbifolds
☆ 57R19: Algebraic topology on manifolds
☆ 57R20: Characteristic classes and numbers
☆ 57R22: Topology of vector bundles and fiber bundles [See also 55Rxx]
☆ 57R25: Vector fields, frame fields
☆ 57R27: Controllability of vector fields on $C^\infty$ and real-analytic manifolds [See also 49Qxx, 37C10, 93B05]
☆ 57R30: Foliations; geometric theory
☆ 57R32: Classifying spaces for foliations; Gelfand-Fuks cohomology [See also 58H10]
☆ 57R35: Differentiable mappings
☆ 57R40: Embeddings
☆ 57R42: Immersions
☆ 57R45: Singularities of differentiable mappings
☆ 57R50: Diffeomorphisms
☆ 57R52: Isotopy
☆ 57R55: Differentiable structures
☆ 57R56: Topological quantum field theories
☆ 57R57: Applications of global analysis to structures on manifolds, Donaldson and Seiberg-Witten invariants [See also 58-XX]
☆ 57R58: Floer homology
☆ 57R60: Homotopy spheres, Poincaré conjecture
☆ 57R65: Surgery and handlebodies
☆ 57R67: Surgery obstructions, Wall groups [See also 19J25]
☆ 57R70: Critical points and critical submanifolds
☆ 57R75: ${\rm O}$- and ${\rm SO}$-cobordism
☆ 57R77: Complex cobordism (${\rm U}$- and ${\rm SU}$-cobordism) [See also 55N22]
☆ 57R80: $h$- and $s$-cobordism
☆ 57R85: Equivariant cobordism
☆ 57R90: Other types of cobordism [See also 55N22]
☆ 57R91: Equivariant algebraic topology of manifolds
☆ 57R95: Realizing cycles by submanifolds
☆ 57R99: None of the above, but in this section
□ 57Sxx: Topological transformation groups [See also 20F34, 22-XX, 37-XX, 54H15, 58D05]
☆ 57S05: Topological properties of groups of homeomorphisms or diffeomorphisms
☆ 57S10: Compact groups of homeomorphisms
☆ 57S15: Compact Lie groups of differentiable transformations
☆ 57S17: Finite transformation groups
☆ 57S20: Noncompact Lie groups of transformations
☆ 57S25: Groups acting on specific manifolds
☆ 57S30: Discontinuous groups of transformations
☆ 57S99: None of the above, but in this section
□ 57Txx: Homology and homotopy of topological groups and related structures
☆ 57T05: Hopf algebras [See also 16T05]
☆ 57T10: Homology and cohomology of Lie groups
☆ 57T15: Homology and cohomology of homogeneous spaces of Lie groups
☆ 57T20: Homotopy groups of topological groups and homogeneous spaces
☆ 57T25: Homology and cohomology of $H$-spaces
☆ 57T30: Bar and cobar constructions [See also 18G55, 55Uxx]
☆ 57T35: Applications of Eilenberg-Moore spectral sequences [See also 55R20, 55T20]
☆ 57T99: None of the above, but in this section
• 58-XX: Global analysis, analysis on manifolds [See also 32Cxx, 32Fxx, 32Wxx, 46-XX, 47Hxx, 53Cxx] {For geometric integration theory, see 49Q15}
□ 58-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 58-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 58-02: Research exposition (monographs, survey articles)
□ 58-03: Historical (must also be assigned at least one classification number from Section 01)
□ 58-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 58-06: Proceedings, conferences, collections, etc.
□ 58Axx: General theory of differentiable manifolds [See also 32Cxx]
☆ 58A03: Topos-theoretic approach to differentiable manifolds
☆ 58A05: Differentiable manifolds, foundations
☆ 58A07: Real-analytic and Nash manifolds [See also 14P20, 32C07]
☆ 58A10: Differential forms
☆ 58A12: de Rham theory [See also 14Fxx]
☆ 58A14: Hodge theory [See also 14C30, 14Fxx, 32J25, 32S35]
☆ 58A15: Exterior differential systems (Cartan theory)
☆ 58A17: Pfaffian systems
☆ 58A20: Jets
☆ 58A25: Currents [See also 32C30, 53C65]
☆ 58A30: Vector distributions (subbundles of the tangent bundles)
☆ 58A32: Natural bundles
☆ 58A35: Stratified sets [See also 32S60]
☆ 58A40: Differential spaces
☆ 58A50: Supermanifolds and graded manifolds [See also 14A22, 32C11]
☆ 58A99: None of the above, but in this section
□ 58Bxx: Infinite-dimensional manifolds
☆ 58B05: Homotopy and topological questions
☆ 58B10: Differentiability questions
☆ 58B12: Questions of holomorphy [See also 32-XX, 46G20]
☆ 58B15: Fredholm structures [See also 47A53]
☆ 58B20: Riemannian, Finsler and other geometric structures [See also 53C20, 53C60]
☆ 58B25: Group structures and generalizations on infinite-dimensional manifolds [See also 22E65, 58D05]
☆ 58B32: Geometry of quantum groups
☆ 58B34: Noncommutative geometry (à la Connes)
☆ 58B99: None of the above, but in this section
□ 58Cxx: Calculus on manifolds; nonlinear operators [See also 46Txx, 47Hxx, 47Jxx]
☆ 58C05: Real-valued functions
☆ 58C06: Set-valued and function-space valued mappings [See also 47H04, 54C60]
☆ 58C07: Continuity properties of mappings
☆ 58C10: Holomorphic maps [See also 32-XX]
☆ 58C15: Implicit function theorems; global Newton methods
☆ 58C20: Differentiation theory (Gateaux, Fréchet, etc.) [See also 26Exx, 46G05]
☆ 58C25: Differentiable maps
☆ 58C30: Fixed point theorems on manifolds [See also 47H10]
☆ 58C35: Integration on manifolds; measures on manifolds [See also 28Cxx]
☆ 58C40: Spectral theory; eigenvalue problems [See also 47J10, 58E07]
☆ 58C50: Analysis on supermanifolds or graded manifolds
☆ 58C99: None of the above, but in this section
□ 58Dxx: Spaces and manifolds of mappings (including nonlinear versions of 46Exx) [See also 46Txx, 53Cxx]
☆ 58D05: Groups of diffeomorphisms and homeomorphisms as manifolds [See also 22E65, 57S05]
☆ 58D07: Groups and semigroups of nonlinear operators [See also 17B65, 47H20]
☆ 58D10: Spaces of imbeddings and immersions
☆ 58D15: Manifolds of mappings [See also 46T10, 54C35]
☆ 58D17: Manifolds of metrics (esp. Riemannian)
☆ 58D19: Group actions and symmetry properties
☆ 58D20: Measures (Gaussian, cylindrical, etc.) on manifolds of maps [See also 28Cxx, 46T12]
☆ 58D25: Equations in function spaces; evolution equations [See also 34Gxx, 35K90, 35L90, 35R15, 37Lxx, 47Jxx]
☆ 58D27: Moduli problems for differential geometric structures
☆ 58D29: Moduli problems for topological structures
☆ 58D30: Applications (in quantum mechanics (Feynman path integrals), relativity, fluid dynamics, etc.)
☆ 58D99: None of the above, but in this section
□ 58Exx: Variational problems in infinite-dimensional spaces
☆ 58E05: Abstract critical point theory (Morse theory, Ljusternik-Schnirelman (Lyusternik-Shnirel'man) theory, etc.)
☆ 58E07: Abstract bifurcation theory
☆ 58E09: Group-invariant bifurcation theory
☆ 58E10: Applications to the theory of geodesics (problems in one independent variable)
☆ 58E11: Critical metrics
☆ 58E12: Applications to minimal surfaces (problems in two independent variables) [See also 49Q05]
☆ 58E15: Application to extremal problems in several variables; Yang-Mills functionals [See also 81T13], etc.
☆ 58E17: Pareto optimality, etc., applications to economics [See also 90C29]
☆ 58E20: Harmonic maps [See also 53C43], etc.
☆ 58E25: Applications to control theory [See also 49-XX, 93-XX]
☆ 58E30: Variational principles
☆ 58E35: Variational inequalities (global problems)
☆ 58E40: Group actions
☆ 58E50: Applications
☆ 58E99: None of the above, but in this section
□ 58Hxx: Pseudogroups, differentiable groupoids and general structures on manifolds
☆ 58H05: Pseudogroups and differentiable groupoids [See also 22A22, 22E65]
☆ 58H10: Cohomology of classifying spaces for pseudogroup structures (Spencer, Gelfand-Fuks, etc.) [See also 57R32]
☆ 58H15: Deformations of structures [See also 32Gxx, 58J10]
☆ 58H99: None of the above, but in this section
□ 58Jxx: Partial differential equations on manifolds; differential operators [See also 32Wxx, 35-XX, 53Cxx]
☆ 58J05: Elliptic equations on manifolds, general theory [See also 35-XX]
☆ 58J10: Differential complexes [See also 35Nxx]; elliptic complexes
☆ 58J15: Relations with hyperfunctions
☆ 58J20: Index theory and related fixed point theorems [See also 19K56, 46L80]
☆ 58J22: Exotic index theories [See also 19K56, 46L05, 46L10, 46L80, 46M20]
☆ 58J26: Elliptic genera
☆ 58J28: Eta-invariants, Chern-Simons invariants
☆ 58J30: Spectral flows
☆ 58J32: Boundary value problems on manifolds
☆ 58J35: Heat and other parabolic equation methods
☆ 58J37: Perturbations; asymptotics
☆ 58J40: Pseudodifferential and Fourier integral operators on manifolds [See also 35Sxx]
☆ 58J42: Noncommutative global analysis, noncommutative residues
☆ 58J45: Hyperbolic equations [See also 35Lxx]
☆ 58J47: Propagation of singularities; initial value problems
☆ 58J50: Spectral problems; spectral geometry; scattering theory [See also 35Pxx]
☆ 58J51: Relations between spectral theory and ergodic theory, e.g. quantum unique ergodicity
☆ 58J52: Determinants and determinant bundles, analytic torsion
☆ 58J53: Isospectrality
☆ 58J55: Bifurcation [See also 35B32]
☆ 58J60: Relations with special manifold structures (Riemannian, Finsler, etc.)
☆ 58J65: Diffusion processes and stochastic analysis on manifolds [See also 35R60, 60H10, 60J60]
☆ 58J70: Invariance and symmetry properties [See also 35A30]
☆ 58J72: Correspondences and other transformation methods (e.g. Lie-Bäcklund) [See also 35A22]
☆ 58J90: Applications
☆ 58J99: None of the above, but in this section
□ 58Kxx: Theory of singularities and catastrophe theory [See also 32Sxx, 37-XX]
☆ 58K05: Critical points of functions and mappings
☆ 58K10: Monodromy
☆ 58K15: Topological properties of mappings
☆ 58K20: Algebraic and analytic properties of mappings
☆ 58K25: Stability
☆ 58K30: Global theory
☆ 58K35: Catastrophe theory
☆ 58K40: Classification; finite determinacy of map germs
☆ 58K45: Singularities of vector fields, topological aspects
☆ 58K50: Normal forms
☆ 58K55: Asymptotic behavior
☆ 58K60: Deformation of singularities
☆ 58K65: Topological invariants
☆ 58K70: Symmetries, equivariance
☆ 58K99: None of the above, but in this section
□ 58Zxx: Applications to physics
☆ 58Z05: Applications to physics
☆ 58Z99: None of the above, but in this section
• 60-XX: Probability theory and stochastic processes {For additional applications, see 11Kxx, 62-XX, 90-XX, 91-XX, 92-XX, 93-XX, 94-XX}
□ 60-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 60-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 60-02: Research exposition (monographs, survey articles)
□ 60-03: Historical (must also be assigned at least one classification number from Section 01)
□ 60-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 60-06: Proceedings, conferences, collections, etc.
□ 60-08: Computational methods (not classified at a more specific level) [See also 65C50]
□ 60Axx: Foundations of probability theory
☆ 60A05: Axioms; other general questions
☆ 60A10: Probabilistic measure theory {For ergodic theory, see 28Dxx and 60Fxx}
☆ 60A86: Fuzzy probability
☆ 60A99: None of the above, but in this section
□ 60Bxx: Probability theory on algebraic and topological structures
☆ 60B05: Probability measures on topological spaces
☆ 60B10: Convergence of probability measures
☆ 60B11: Probability theory on linear topological spaces [See also 28C20]
☆ 60B12: Limit theorems for vector-valued random variables (infinite-dimensional case)
☆ 60B15: Probability measures on groups or semigroups, Fourier transforms, factorization
☆ 60B20: Random matrices (probabilistic aspects; for algebraic aspects see 15B52)
☆ 60B99: None of the above, but in this section
□ 60Cxx: Combinatorial probability
☆ 60C05: Combinatorial probability
☆ 60C99: None of the above, but in this section
□ 60Dxx: Geometric probability and stochastic geometry [See also 52A22, 53C65]
☆ 60D05: Geometric probability and stochastic geometry [See also 52A22, 53C65]
☆ 60D99: None of the above, but in this section
□ 60Exx: Distribution theory [See also 62Exx, 62Hxx]
☆ 60E05: Distributions: general theory
☆ 60E07: Infinitely divisible distributions; stable distributions
☆ 60E10: Characteristic functions; other transforms
☆ 60E15: Inequalities; stochastic orderings
☆ 60E99: None of the above, but in this section
□ 60Fxx: Limit theorems [See also 28Dxx, 60B12]
☆ 60F05: Central limit and other weak theorems
☆ 60F10: Large deviations
☆ 60F15: Strong theorems
☆ 60F17: Functional limit theorems; invariance principles
☆ 60F20: Zero-one laws
☆ 60F25: $L^p$-limit theorems
☆ 60F99: None of the above, but in this section
□ 60Gxx: Stochastic processes
☆ 60G05: Foundations of stochastic processes
☆ 60G07: General theory of processes
☆ 60G09: Exchangeability
☆ 60G10: Stationary processes
☆ 60G12: General second-order processes
☆ 60G15: Gaussian processes
☆ 60G17: Sample path properties
☆ 60G18: Self-similar processes
☆ 60G20: Generalized stochastic processes
☆ 60G22: Fractional processes, including fractional Brownian motion
☆ 60G25: Prediction theory [See also 62M20]
☆ 60G30: Continuity and singularity of induced measures
☆ 60G35: Signal detection and filtering [See also 62M20, 93E10, 93E11, 94Axx]
☆ 60G40: Stopping times; optimal stopping problems; gambling theory [See also 62L15, 91A60]
☆ 60G42: Martingales with discrete parameter
☆ 60G44: Martingales with continuous parameter
☆ 60G46: Martingales and classical analysis
☆ 60G48: Generalizations of martingales
☆ 60G50: Sums of independent random variables; random walks
☆ 60G51: Processes with independent increments; Lévy processes
☆ 60G52: Stable processes
☆ 60G55: Point processes
☆ 60G57: Random measures
☆ 60G60: Random fields
☆ 60G70: Extreme value theory; extremal processes
☆ 60G99: None of the above, but in this section
□ 60Hxx: Stochastic analysis [See also 58J65]
☆ 60H05: Stochastic integrals
☆ 60H07: Stochastic calculus of variations and the Malliavin calculus
☆ 60H10: Stochastic ordinary differential equations [See also 34F05]
☆ 60H15: Stochastic partial differential equations [See also 35R60]
☆ 60H20: Stochastic integral equations
☆ 60H25: Random operators and equations [See also 47B80]
☆ 60H30: Applications of stochastic analysis (to PDE, etc.)
☆ 60H35: Computational methods for stochastic equations [See also 65C30]
☆ 60H40: White noise theory
☆ 60H99: None of the above, but in this section
□ 60Jxx: Markov processes
☆ 60J05: Discrete-time Markov processes on general state spaces
☆ 60J10: Markov chains (discrete-time Markov processes on discrete state spaces)
☆ 60J20: Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.) [See also 90B30, 91D10, 91D35, 91E40]
☆ 60J22: Computational methods in Markov chains [See also 65C40]
☆ 60J25: Continuous-time Markov processes on general state spaces
☆ 60J27: Continuous-time Markov processes on discrete state spaces
☆ 60J28: Applications of continuous-time Markov processes on discrete state spaces
☆ 60J35: Transition functions, generators and resolvents [See also 47D03, 47D07]
☆ 60J40: Right processes
☆ 60J45: Probabilistic potential theory [See also 31Cxx, 31D05]
☆ 60J50: Boundary theory
☆ 60J55: Local time and additive functionals
☆ 60J57: Multiplicative functionals
☆ 60J60: Diffusion processes [See also 58J65]
☆ 60J65: Brownian motion [See also 58J65]
☆ 60J67: Stochastic (Schramm-)Loewner evolution (SLE)
☆ 60J68: Superprocesses
☆ 60J70: Applications of Brownian motions and diffusion theory (population genetics, absorption problems, etc.) [See also 92Dxx]
☆ 60J75: Jump processes
☆ 60J80: Branching processes (Galton-Watson, birth-and-death, etc.)
☆ 60J85: Applications of branching processes [See also 92Dxx]
☆ 60J99: None of the above, but in this section
□ 60Kxx: Special processes
☆ 60K05: Renewal theory
☆ 60K10: Applications (reliability, demand theory, etc.)
☆ 60K15: Markov renewal processes, semi-Markov processes
☆ 60K20: Applications of Markov renewal processes (reliability, queueing networks, etc.) [See also 90Bxx]
☆ 60K25: Queueing theory [See also 68M20, 90B22]
☆ 60K30: Applications (congestion, allocation, storage, traffic, etc.) [See also 90Bxx]
☆ 60K35: Interacting random processes; statistical mechanics type models; percolation theory [See also 82B43, 82C43]
☆ 60K37: Processes in random environments
☆ 60K40: Other physical applications of random processes
☆ 60K99: None of the above, but in this section
• 62-XX: Statistics
□ 62-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 62-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 62-02: Research exposition (monographs, survey articles)
□ 62-03: Historical (must also be assigned at least one classification number from Section 01)
□ 62-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 62-06: Proceedings, conferences, collections, etc.
□ 62-07: Data analysis
□ 62-09: Graphical methods
□ 62Axx: Foundational and philosophical topics
☆ 62A01: Foundations and philosophical topics
☆ 62A86: Fuzzy analysis in statistics
☆ 62A99: None of the above, but in this section
□ 62Bxx: Sufficiency and information
☆ 62B05: Sufficient statistics and fields
☆ 62B10: Information-theoretic topics [See also 94A17]
☆ 62B15: Theory of statistical experiments
☆ 62B86: Fuzziness, sufficiency, and information
☆ 62B99: None of the above, but in this section
□ 62Cxx: Decision theory [See also 90B50, 91B06; for game theory, see 91A35]
☆ 62C05: General considerations
☆ 62C07: Complete class results
☆ 62C10: Bayesian problems; characterization of Bayes procedures
☆ 62C12: Empirical decision procedures; empirical Bayes procedures
☆ 62C15: Admissibility
☆ 62C20: Minimax procedures
☆ 62C25: Compound decision problems
☆ 62C86: Decision theory and fuzziness
☆ 62C99: None of the above, but in this section
□ 62Dxx: Sampling theory, sample surveys
☆ 62D05: Sampling theory, sample surveys
☆ 62D99: None of the above, but in this section
□ 62Exx: Distribution theory [See also 60Exx]
☆ 62E10: Characterization and structure theory
☆ 62E15: Exact distribution theory
☆ 62E17: Approximations to distributions (nonasymptotic)
☆ 62E20: Asymptotic distribution theory
☆ 62E86: Fuzziness in connection with the topics on distributions in this section
☆ 62E99: None of the above, but in this section
□ 62Fxx: Parametric inference
☆ 62F03: Hypothesis testing
☆ 62F05: Asymptotic properties of tests
☆ 62F07: Ranking and selection
☆ 62F10: Point estimation
☆ 62F12: Asymptotic properties of estimators
☆ 62F15: Bayesian inference
☆ 62F25: Tolerance and confidence regions
☆ 62F30: Inference under constraints
☆ 62F35: Robustness and adaptive procedures
☆ 62F40: Bootstrap, jackknife and other resampling methods
☆ 62F86: Parametric inference and fuzziness
☆ 62F99: None of the above, but in this section
□ 62Gxx: Nonparametric inference
☆ 62G05: Estimation
☆ 62G07: Density estimation
☆ 62G08: Nonparametric regression
☆ 62G09: Resampling methods
☆ 62G10: Hypothesis testing
☆ 62G15: Tolerance and confidence regions
☆ 62G20: Asymptotic properties
☆ 62G30: Order statistics; empirical distribution functions
☆ 62G32: Statistics of extreme values; tail inference
☆ 62G35: Robustness
☆ 62G86: Nonparametric inference and fuzziness
☆ 62G99: None of the above, but in this section
□ 62Hxx: Multivariate analysis [See also 60Exx]
☆ 62H05: Characterization and structure theory
☆ 62H10: Distribution of statistics
☆ 62H11: Directional data; spatial statistics
☆ 62H12: Estimation
☆ 62H15: Hypothesis testing
☆ 62H17: Contingency tables
☆ 62H20: Measures of association (correlation, canonical correlation, etc.)
☆ 62H25: Factor analysis and principal components; correspondence analysis
☆ 62H30: Classification and discrimination; cluster analysis [See also 68T10, 91C20]
☆ 62H35: Image analysis
☆ 62H86: Multivariate analysis and fuzziness
☆ 62H99: None of the above, but in this section
□ 62Jxx: Linear inference, regression
☆ 62J02: General nonlinear regression
☆ 62J05: Linear regression
☆ 62J07: Ridge regression; shrinkage estimators
☆ 62J10: Analysis of variance and covariance
☆ 62J12: Generalized linear models
☆ 62J15: Paired and multiple comparisons
☆ 62J20: Diagnostics
☆ 62J86: Fuzziness, and linear inference and regression
☆ 62J99: None of the above, but in this section
□ 62Kxx: Design of experiments [See also 05Bxx]
☆ 62K05: Optimal designs
☆ 62K10: Block designs
☆ 62K15: Factorial designs
☆ 62K20: Response surface designs
☆ 62K25: Robust parameter designs
☆ 62K86: Fuzziness and design of experiments
☆ 62K99: None of the above, but in this section
□ 62Lxx: Sequential methods
☆ 62L05: Sequential design
☆ 62L10: Sequential analysis
☆ 62L12: Sequential estimation
☆ 62L15: Optimal stopping [See also 60G40, 91A60]
☆ 62L20: Stochastic approximation
☆ 62L86: Fuzziness and sequential methods
☆ 62L99: None of the above, but in this section
□ 62Mxx: Inference from stochastic processes
☆ 62M02: Markov processes: hypothesis testing
☆ 62M05: Markov processes: estimation
☆ 62M07: Non-Markovian processes: hypothesis testing
☆ 62M09: Non-Markovian processes: estimation
☆ 62M10: Time series, auto-correlation, regression, etc. [See also 91B84]
☆ 62M15: Spectral analysis
☆ 62M20: Prediction [See also 60G25]; filtering [See also 60G35, 93E10, 93E11]
☆ 62M30: Spatial processes
☆ 62M40: Random fields; image analysis
☆ 62M45: Neural nets and related approaches
☆ 62M86: Inference from stochastic processes and fuzziness
☆ 62M99: None of the above, but in this section
□ 62Nxx: Survival analysis and censored data
☆ 62N01: Censored data models
☆ 62N02: Estimation
☆ 62N03: Testing
☆ 62N05: Reliability and life testing [See also 90B25]
☆ 62N86: Fuzziness, and survival analysis and censored data
☆ 62N99: None of the above, but in this section
□ 62Pxx: Applications [See also 90-XX, 91-XX, 92-XX]
☆ 62P05: Applications to actuarial sciences and financial mathematics
☆ 62P10: Applications to biology and medical sciences
☆ 62P12: Applications to environmental and related topics
☆ 62P15: Applications to psychology
☆ 62P20: Applications to economics [See also 91Bxx]
☆ 62P25: Applications to social sciences
☆ 62P30: Applications in engineering and industry
☆ 62P35: Applications to physics
☆ 62P99: None of the above, but in this section
□ 62Qxx: Statistical tables
☆ 62Q05: Statistical tables
☆ 62Q99: None of the above, but in this section
• 65-XX: Numerical analysis
□ 65-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 65-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 65-02: Research exposition (monographs, survey articles)
□ 65-03: Historical (must also be assigned at least one classification number from Section 01)
□ 65-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 65-05: Experimental papers
□ 65-06: Proceedings, conferences, collections, etc.
□ 65Axx: Tables
☆ 65A05: Tables
☆ 65A99: None of the above, but in this section
□ 65Bxx: Acceleration of convergence
☆ 65B05: Extrapolation to the limit, deferred corrections
☆ 65B10: Summation of series
☆ 65B15: Euler-Maclaurin formula
☆ 65B99: None of the above, but in this section
□ 65Cxx: Probabilistic methods, simulation and stochastic differential equations {For theoretical aspects, see 68U20 and 60H35}
☆ 65C05: Monte Carlo methods
☆ 65C10: Random number generation
☆ 65C20: Models, numerical methods [See also 68U20]
☆ 65C30: Stochastic differential and integral equations
☆ 65C35: Stochastic particle methods [See also 82C80]
☆ 65C40: Computational Markov chains
☆ 65C50: Other computational problems in probability
☆ 65C60: Computational problems in statistics
☆ 65C99: None of the above, but in this section
□ 65Dxx: Numerical approximation and computational geometry (primarily algorithms) {For theory, see 41-XX and 68Uxx}
☆ 65D05: Interpolation
☆ 65D07: Splines
☆ 65D10: Smoothing, curve fitting
☆ 65D15: Algorithms for functional approximation
☆ 65D17: Computer aided design (modeling of curves and surfaces) [See also 68U07]
☆ 65D18: Computer graphics, image analysis, and computational geometry [See also 51N05, 68U05]
☆ 65D19: Computational issues in computer and robotic vision
☆ 65D20: Computation of special functions, construction of tables [See also 33F05]
☆ 65D25: Numerical differentiation
☆ 65D30: Numerical integration
☆ 65D32: Quadrature and cubature formulas
☆ 65D99: None of the above, but in this section
□ 65Exx: Numerical methods in complex analysis (potential theory, etc.) {For numerical methods in conformal mapping, see also 30C30}
☆ 65E05: Numerical methods in complex analysis (potential theory, etc.) {For numerical methods in conformal mapping, see also 30C30}
☆ 65E99: None of the above, but in this section
□ 65Fxx: Numerical linear algebra
☆ 65F05: Direct methods for linear systems and matrix inversion
☆ 65F08: Preconditioners for iterative methods
☆ 65F10: Iterative methods for linear systems [See also 65N22]
☆ 65F15: Eigenvalues, eigenvectors
☆ 65F18: Inverse eigenvalue problems
☆ 65F20: Overdetermined systems, pseudoinverses
☆ 65F22: Ill-posedness, regularization
☆ 65F25: Orthogonalization
☆ 65F30: Other matrix algorithms
☆ 65F35: Matrix norms, conditioning, scaling [See also 15A12, 15A60]
☆ 65F40: Determinants
☆ 65F50: Sparse matrices
☆ 65F60: Matrix exponential and similar matrix functions
☆ 65F99: None of the above, but in this section
□ 65Gxx: Error analysis and interval analysis
☆ 65G20: Algorithms with automatic result verification
☆ 65G30: Interval and finite arithmetic
☆ 65G40: General methods in interval analysis
☆ 65G50: Roundoff error
☆ 65G99: None of the above, but in this section
□ 65Hxx: Nonlinear algebraic or transcendental equations
☆ 65H04: Roots of polynomial equations
☆ 65H05: Single equations
☆ 65H10: Systems of equations
☆ 65H17: Eigenvalues, eigenvectors [See also 47Hxx, 47Jxx, 58C40, 58E07, 90C30]
☆ 65H20: Global methods, including homotopy approaches [See also 58C30, 90C30]
☆ 65H99: None of the above, but in this section
□ 65Jxx: Numerical analysis in abstract spaces
☆ 65J05: General theory
☆ 65J08: Abstract evolution equations
☆ 65J10: Equations with linear operators (do not use 65Fxx)
☆ 65J15: Equations with nonlinear operators (do not use 65Hxx)
☆ 65J20: Improperly posed problems; regularization
☆ 65J22: Inverse problems
☆ 65J99: None of the above, but in this section
□ 65Kxx: Mathematical programming, optimization and variational techniques
☆ 65K05: Mathematical programming methods [See also 90Cxx]
☆ 65K10: Optimization and variational techniques [See also 49Mxx, 93B40]
☆ 65K15: Numerical methods for variational inequalities and related problems
☆ 65K99: None of the above, but in this section
□ 65Lxx: Ordinary differential equations
☆ 65L03: Functional-differential equations
☆ 65L04: Stiff equations
☆ 65L05: Initial value problems
☆ 65L06: Multistep, Runge-Kutta and extrapolation methods
☆ 65L07: Numerical investigation of stability of solutions
☆ 65L08: Improperly posed problems
☆ 65L09: Inverse problems
☆ 65L10: Boundary value problems
☆ 65L11: Singularly perturbed problems
☆ 65L12: Finite difference methods
☆ 65L15: Eigenvalue problems
☆ 65L20: Stability and convergence of numerical methods
☆ 65L50: Mesh generation and refinement
☆ 65L60: Finite elements, Rayleigh-Ritz, Galerkin and collocation methods
☆ 65L70: Error bounds
☆ 65L80: Methods for differential-algebraic equations
☆ 65L99: None of the above, but in this section
□ 65Mxx: Partial differential equations, initial value and time-dependent initial-boundary value problems
☆ 65M06: Finite difference methods
☆ 65M08: Finite volume methods
☆ 65M12: Stability and convergence of numerical methods
☆ 65M15: Error bounds
☆ 65M20: Method of lines
☆ 65M22: Solution of discretized equations [See also 65Fxx, 65Hxx]
☆ 65M25: Method of characteristics
☆ 65M30: Improperly posed problems
☆ 65M32: Inverse problems
☆ 65M38: Boundary element methods
☆ 65M50: Mesh generation and refinement
☆ 65M55: Multigrid methods; domain decomposition
☆ 65M60: Finite elements, Rayleigh-Ritz and Galerkin methods, finite methods
☆ 65M70: Spectral, collocation and related methods
☆ 65M75: Probabilistic methods, particle methods, etc.
☆ 65M80: Fundamental solutions, Green's function methods, etc.
☆ 65M85: Fictitious domain methods
☆ 65M99: None of the above, but in this section
□ 65Nxx: Partial differential equations, boundary value problems
☆ 65N06: Finite difference methods
☆ 65N08: Finite volume methods
☆ 65N12: Stability and convergence of numerical methods
☆ 65N15: Error bounds
☆ 65N20: Ill-posed problems
☆ 65N21: Inverse problems
☆ 65N22: Solution of discretized equations [See also 65Fxx, 65Hxx]
☆ 65N25: Eigenvalue problems
☆ 65N30: Finite elements, Rayleigh-Ritz and Galerkin methods, finite methods
☆ 65N35: Spectral, collocation and related methods
☆ 65N38: Boundary element methods
☆ 65N40: Method of lines
☆ 65N45: Method of contraction of the boundary
☆ 65N50: Mesh generation and refinement
☆ 65N55: Multigrid methods; domain decomposition
☆ 65N75: Probabilistic methods, particle methods, etc.
☆ 65N80: Fundamental solutions, Green's function methods, etc.
☆ 65N85: Fictitious domain methods
☆ 65N99: None of the above, but in this section
□ 65Pxx: Numerical problems in dynamical systems [See also 37Mxx]
☆ 65P10: Hamiltonian systems including symplectic integrators
☆ 65P20: Numerical chaos
☆ 65P30: Bifurcation problems
☆ 65P40: Nonlinear stabilities
☆ 65P99: None of the above, but in this section
□ 65Qxx: Difference and functional equations, recurrence relations
☆ 65Q10: Difference equations
☆ 65Q20: Functional equations
☆ 65Q30: Recurrence relations
☆ 65Q99: None of the above, but in this section
□ 65Rxx: Integral equations, integral transforms
☆ 65R10: Integral transforms
☆ 65R20: Integral equations
☆ 65R30: Improperly posed problems
☆ 65R32: Inverse problems
☆ 65R99: None of the above, but in this section
□ 65Sxx: Graphical methods
☆ 65S05: Graphical methods
☆ 65S99: None of the above, but in this section
□ 65Txx: Numerical methods in Fourier analysis
☆ 65T40: Trigonometric approximation and interpolation
☆ 65T50: Discrete and fast Fourier transforms
☆ 65T60: Wavelets
☆ 65T99: None of the above, but in this section
□ 65Yxx: Computer aspects of numerical algorithms
☆ 65Y04: Algorithms for computer arithmetic, etc. [See also 68M07]
☆ 65Y05: Parallel computation
☆ 65Y10: Algorithms for specific classes of architectures
☆ 65Y15: Packaged methods
☆ 65Y20: Complexity and performance of numerical algorithms [See also 68Q25]
☆ 65Y99: None of the above, but in this section
□ 65Zxx: Applications to physics
☆ 65Z05: Applications to physics
☆ 65Z99: None of the above, but in this section
• 68-XX: Computer science {For papers involving machine computations and programs in a specific mathematical area, see Section –04 in that area}
□ 68-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 68-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 68-02: Research exposition (monographs, survey articles)
□ 68-03: Historical (must also be assigned at least one classification number from Section 01)
□ 68-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 68-06: Proceedings, conferences, collections, etc.
□ 68Mxx: Computer system organization
☆ 68M01: General
☆ 68M07: Mathematical problems of computer architecture
☆ 68M10: Network design and communication [See also 68R10, 90B18]
☆ 68M11: Internet topics [See also 68U35]
☆ 68M12: Network protocols
☆ 68M14: Distributed systems
☆ 68M15: Reliability, testing and fault tolerance [See also 94C12]
☆ 68M20: Performance evaluation; queueing; scheduling [See also 60K25, 90Bxx]
☆ 68M99: None of the above, but in this section
□ 68Nxx: Software
☆ 68N01: General
☆ 68N15: Programming languages
☆ 68N17: Logic programming
☆ 68N18: Functional programming and lambda calculus [See also 03B40]
☆ 68N19: Other programming techniques (object-oriented, sequential, concurrent, automatic, etc.)
☆ 68N20: Compilers and interpreters
☆ 68N25: Operating systems
☆ 68N30: Mathematical aspects of software engineering (specification, verification, metrics, requirements, etc.)
☆ 68N99: None of the above, but in this section
□ 68Pxx: Theory of data
☆ 68P01: General
☆ 68P05: Data structures
☆ 68P10: Searching and sorting
☆ 68P15: Database theory
☆ 68P20: Information storage and retrieval
☆ 68P25: Data encryption [See also 94A60, 81P94]
☆ 68P30: Coding and information theory (compaction, compression, models of communication, encoding schemes, etc.) [See also 94Axx]
☆ 68P99: None of the above, but in this section
□ 68Qxx: Theory of computing
☆ 68Q01: General
☆ 68Q05: Models of computation (Turing machines, etc.) [See also 03D10, 68Q12, 81P68]
☆ 68Q10: Modes of computation (nondeterministic, parallel, interactive, probabilistic, etc.) [See also 68Q85]
☆ 68Q12: Quantum algorithms and complexity [See also 68Q05, 81P68]
☆ 68Q15: Complexity classes (hierarchies, relations among complexity classes, etc.) [See also 03D15, 68Q17, 68Q19]
☆ 68Q17: Computational difficulty of problems (lower bounds, completeness, difficulty of approximation, etc.) [See also 68Q15]
☆ 68Q19: Descriptive complexity and finite models [See also 03C13]
☆ 68Q25: Analysis of algorithms and problem complexity [See also 68W40]
☆ 68Q30: Algorithmic information theory (Kolmogorov complexity, etc.) [See also 03D32]
☆ 68Q32: Computational learning theory [See also 68T05]
☆ 68Q42: Grammars and rewriting systems
☆ 68Q45: Formal languages and automata [See also 03D05, 68Q70, 94A45]
☆ 68Q55: Semantics [See also 03B70, 06B35, 18C50]
☆ 68Q60: Specification and verification (program logics, model checking, etc.) [See also 03B70]
☆ 68Q65: Abstract data types; algebraic specification [See also 18C50]
☆ 68Q70: Algebraic theory of languages and automata [See also 18B20, 20M35]
☆ 68Q80: Cellular automata [See also 37B15]
☆ 68Q85: Models and methods for concurrent and distributed computing (process algebras, bisimulation, transition nets, etc.)
☆ 68Q87: Probability in computer science (algorithm analysis, random structures, phase transitions, etc.) [See also 68W20, 68W40]
☆ 68Q99: None of the above, but in this section
□ 68Rxx: Discrete mathematics in relation to computer science
☆ 68R01: General
☆ 68R05: Combinatorics
☆ 68R10: Graph theory (including graph drawing) [See also 05Cxx, 90B10, 90B35, 90C35]
☆ 68R15: Combinatorics on words
☆ 68R99: None of the above, but in this section
□ 68Txx: Artificial intelligence
☆ 68T01: General
☆ 68T05: Learning and adaptive systems [See also 68Q32, 91E40]
☆ 68T10: Pattern recognition, speech recognition {For cluster analysis, see 62H30}
☆ 68T15: Theorem proving (deduction, resolution, etc.) [See also 03B35]
☆ 68T20: Problem solving (heuristics, search strategies, etc.)
☆ 68T27: Logic in artificial intelligence
☆ 68T30: Knowledge representation
☆ 68T35: Languages and software systems (knowledge-based systems, expert systems, etc.)
☆ 68T37: Reasoning under uncertainty
☆ 68T40: Robotics [See also 93C85]
☆ 68T42: Agent technology
☆ 68T45: Machine vision and scene understanding
☆ 68T50: Natural language processing [See also 03B65]
☆ 68T99: None of the above, but in this section
□ 68Uxx: Computing methodologies and applications
☆ 68U01: General
☆ 68U05: Computer graphics; computational geometry [See also 65D18]
☆ 68U07: Computer-aided design [See also 65D17]
☆ 68U10: Image processing
☆ 68U15: Text processing; mathematical typography
☆ 68U20: Simulation [See also 65Cxx]
☆ 68U35: Information systems (hypertext navigation, interfaces, decision support, etc.) [See also 68M11]
☆ 68U99: None of the above, but in this section
□ 68Wxx: Algorithms {For numerical algorithms, see 65-XX; for combinatorics and graph theory, see 05C85, 68Rxx}
☆ 68W01: General
☆ 68W05: Nonnumerical algorithms
☆ 68W10: Parallel algorithms
☆ 68W15: Distributed algorithms
☆ 68W20: Randomized algorithms
☆ 68W25: Approximation algorithms
☆ 68W27: Online algorithms
☆ 68W30: Symbolic computation and algebraic computation [See also 11Yxx, 12Y05, 13Pxx, 14Qxx, 16Z05, 17-08, 33F10]
☆ 68W32: Algorithms on strings
☆ 68W35: VLSI algorithms
☆ 68W40: Analysis of algorithms [See also 68Q25]
☆ 68W99: None of the above, but in this section
• 70-XX: Mechanics of particles and systems {For relativistic mechanics, see 83A05 and 83C10; for statistical mechanics, see 82-XX}
□ 70-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 70-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 70-02: Research exposition (monographs, survey articles)
□ 70-03: Historical (must also be assigned at least one classification number from Section 01)
□ 70-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 70-05: Experimental work
□ 70-06: Proceedings, conferences, collections, etc.
□ 70-08: Computational methods
□ 70Axx: Axiomatics, foundations
☆ 70A05: Axiomatics, foundations
☆ 70A99: None of the above, but in this section
□ 70Bxx: Kinematics [See also 53A17]
☆ 70B05: Kinematics of a particle
☆ 70B10: Kinematics of a rigid body
☆ 70B15: Mechanisms, robots [See also 68T40, 70Q05, 93C85]
☆ 70B99: None of the above, but in this section
□ 70Cxx: Statics
☆ 70C20: Statics
☆ 70C99: None of the above, but in this section
□ 70Exx: Dynamics of a rigid body and of multibody systems
☆ 70E05: Motion of the gyroscope
☆ 70E15: Free motion of a rigid body [See also 70M20]
☆ 70E17: Motion of a rigid body with a fixed point
☆ 70E18: Motion of a rigid body in contact with a solid surface [See also 70F25]
☆ 70E20: Perturbation methods for rigid body dynamics
☆ 70E40: Integrable cases of motion
☆ 70E45: Higher-dimensional generalizations
☆ 70E50: Stability problems
☆ 70E55: Dynamics of multibody systems
☆ 70E60: Robot dynamics and control [See also 68T40, 70Q05, 93C85]
☆ 70E99: None of the above, but in this section
□ 70Fxx: Dynamics of a system of particles, including celestial mechanics
☆ 70F05: Two-body problems
☆ 70F07: Three-body problems
☆ 70F10: $n$-body problems
☆ 70F15: Celestial mechanics
☆ 70F16: Collisions in celestial mechanics, regularization
☆ 70F17: Inverse problems
☆ 70F20: Holonomic systems
☆ 70F25: Nonholonomic systems
☆ 70F35: Collision of rigid or pseudo-rigid bodies
☆ 70F40: Problems with friction
☆ 70F45: Infinite particle systems
☆ 70F99: None of the above, but in this section
□ 70Gxx: General models, approaches, and methods [See also 37-XX]
☆ 70G10: Generalized coordinates; event, impulse-energy, configuration, state, or phase space
☆ 70G40: Topological and differential-topological methods
☆ 70G45: Differential-geometric methods (tensors, connections, symplectic, Poisson, contact, Riemannian, nonholonomic, etc.) [See also 53Cxx, 53Dxx, 58Axx]
☆ 70G55: Algebraic geometry methods
☆ 70G60: Dynamical systems methods
☆ 70G65: Symmetries, Lie-group and Lie-algebra methods
☆ 70G70: Functional-analytic methods
☆ 70G75: Variational methods
☆ 70G99: None of the above, but in this section
□ 70Hxx: Hamiltonian and Lagrangian mechanics [See also 37Jxx]
☆ 70H03: Lagrange's equations
☆ 70H05: Hamilton's equations
☆ 70H06: Completely integrable systems and methods of integration
☆ 70H07: Nonintegrable systems
☆ 70H08: Nearly integrable Hamiltonian systems, KAM theory
☆ 70H09: Perturbation theories
☆ 70H11: Adiabatic invariants
☆ 70H12: Periodic and almost periodic solutions
☆ 70H14: Stability problems
☆ 70H15: Canonical and symplectic transformations
☆ 70H20: Hamilton-Jacobi equations
☆ 70H25: Hamilton's principle
☆ 70H30: Other variational principles
☆ 70H33: Symmetries and conservation laws, reverse symmetries, invariant manifolds and their bifurcations, reduction
☆ 70H40: Relativistic dynamics
☆ 70H45: Constrained dynamics, Dirac's theory of constraints [See also 70F20, 70F25, 70Gxx]
☆ 70H50: Higher-order theories
☆ 70H99: None of the above, but in this section
□ 70Jxx: Linear vibration theory
☆ 70J10: Modal analysis
☆ 70J25: Stability
☆ 70J30: Free motions
☆ 70J35: Forced motions
☆ 70J40: Parametric resonances
☆ 70J50: Systems arising from the discretization of structural vibration problems
☆ 70J99: None of the above, but in this section
□ 70Kxx: Nonlinear dynamics [See also 34Cxx, 37-XX]
☆ 70K05: Phase plane analysis, limit cycles
☆ 70K20: Stability
☆ 70K25: Free motions
☆ 70K28: Parametric resonances
☆ 70K30: Nonlinear resonances
☆ 70K40: Forced motions
☆ 70K42: Equilibria and periodic trajectories
☆ 70K43: Quasi-periodic motions and invariant tori
☆ 70K44: Homoclinic and heteroclinic trajectories
☆ 70K45: Normal forms
☆ 70K50: Bifurcations and instability
☆ 70K55: Transition to stochasticity (chaotic behavior) [See also 37D45]
☆ 70K60: General perturbation schemes
☆ 70K65: Averaging of perturbations
☆ 70K70: Systems with slow and fast motions
☆ 70K75: Nonlinear modes
☆ 70K99: None of the above, but in this section
□ 70Lxx: Random vibrations [See also 74H50]
☆ 70L05: Random vibrations [See also 74H50]
☆ 70L99: None of the above, but in this section
□ 70Mxx: Orbital mechanics
☆ 70M20: Orbital mechanics
☆ 70M99: None of the above, but in this section
□ 70Pxx: Variable mass, rockets
☆ 70P05: Variable mass, rockets
☆ 70P99: None of the above, but in this section
□ 70Qxx: Control of mechanical systems [See also 60Gxx, 60Jxx]
☆ 70Q05: Control of mechanical systems
☆ 70Q99: None of the above, but in this section
□ 70Sxx: Classical field theories [See also 37Kxx, 37Lxx, 78-XX, 81Txx, 83-XX]
☆ 70S05: Lagrangian formalism and Hamiltonian formalism
☆ 70S10: Symmetries and conservation laws
☆ 70S15: Yang-Mills and other gauge theories
☆ 70S20: More general nonquantum field theories
☆ 70S99: None of the above, but in this section
• 74-XX: Mechanics of deformable solids
□ 74-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 74-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 74-02: Research exposition (monographs, survey articles)
□ 74-03: Historical (must also be assigned at least one classification number from Section 01)
□ 74-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 74-05: Experimental work
□ 74-06: Proceedings, conferences, collections, etc.
□ 74Axx: Generalities, axiomatics, foundations of continuum mechanics of solids
☆ 74A05: Kinematics of deformation
☆ 74A10: Stress
☆ 74A15: Thermodynamics
☆ 74A20: Theory of constitutive functions
☆ 74A25: Molecular, statistical, and kinetic theories
☆ 74A30: Nonsimple materials
☆ 74A35: Polar materials
☆ 74A40: Random materials and composite materials
☆ 74A45: Theories of fracture and damage
☆ 74A50: Structured surfaces and interfaces, coexistent phases
☆ 74A55: Theories of friction (tribology)
☆ 74A60: Micromechanical theories
☆ 74A65: Reactive materials
☆ 74A99: None of the above, but in this section
□ 74Bxx: Elastic materials
☆ 74B05: Classical linear elasticity
☆ 74B10: Linear elasticity with initial stresses
☆ 74B15: Equations linearized about a deformed state (small deformations superposed on large)
☆ 74B20: Nonlinear elasticity
☆ 74B99: None of the above, but in this section
□ 74Cxx: Plastic materials, materials of stress-rate and internal-variable type
☆ 74C05: Small-strain, rate-independent theories (including rigid-plastic and elasto-plastic materials)
☆ 74C10: Small-strain, rate-dependent theories (including theories of viscoplasticity)
☆ 74C15: Large-strain, rate-independent theories (including nonlinear plasticity)
☆ 74C20: Large-strain, rate-dependent theories
☆ 74C99: None of the above, but in this section
□ 74Dxx: Materials of strain-rate type and history type, other materials with memory (including elastic materials with viscous damping, various viscoelastic materials)
☆ 74D05: Linear constitutive equations
☆ 74D10: Nonlinear constitutive equations
☆ 74D99: None of the above, but in this section
□ 74Exx: Material properties given special treatment
☆ 74E05: Inhomogeneity
☆ 74E10: Anisotropy
☆ 74E15: Crystalline structure
☆ 74E20: Granularity
☆ 74E25: Texture
☆ 74E30: Composite and mixture properties
☆ 74E35: Random structure
☆ 74E40: Chemical structure
☆ 74E99: None of the above, but in this section
□ 74Fxx: Coupling of solid mechanics with other effects
☆ 74F05: Thermal effects
☆ 74F10: Fluid-solid interactions (including aero- and hydro-elasticity, porosity, etc.)
☆ 74F15: Electromagnetic effects
☆ 74F20: Mixture effects
☆ 74F25: Chemical and reactive effects
☆ 74F99: None of the above, but in this section
□ 74Gxx: Equilibrium (steady-state) problems
☆ 74G05: Explicit solutions
☆ 74G10: Analytic approximation of solutions (perturbation methods, asymptotic methods, series, etc.)
☆ 74G15: Numerical approximation of solutions
☆ 74G20: Local existence of solutions (near a given solution)
☆ 74G25: Global existence of solutions
☆ 74G30: Uniqueness of solutions
☆ 74G35: Multiplicity of solutions
☆ 74G40: Regularity of solutions
☆ 74G45: Bounds for solutions
☆ 74G50: Saint-Venant's principle
☆ 74G55: Qualitative behavior of solutions
☆ 74G60: Bifurcation and buckling
☆ 74G65: Energy minimization
☆ 74G70: Stress concentrations, singularities
☆ 74G75: Inverse problems
☆ 74G99: None of the above, but in this section
□ 74Hxx: Dynamical problems
☆ 74H05: Explicit solutions
☆ 74H10: Analytic approximation of solutions (perturbation methods, asymptotic methods, series, etc.)
☆ 74H15: Numerical approximation of solutions
☆ 74H20: Existence of solutions
☆ 74H25: Uniqueness of solutions
☆ 74H30: Regularity of solutions
☆ 74H35: Singularities, blowup, stress concentrations
☆ 74H40: Long-time behavior of solutions
☆ 74H45: Vibrations
☆ 74H50: Random vibrations
☆ 74H55: Stability
☆ 74H60: Dynamical bifurcation
☆ 74H65: Chaotic behavior
☆ 74H99: None of the above, but in this section
□ 74Jxx: Waves
☆ 74J05: Linear waves
☆ 74J10: Bulk waves
☆ 74J15: Surface waves
☆ 74J20: Wave scattering
☆ 74J25: Inverse problems
☆ 74J30: Nonlinear waves
☆ 74J35: Solitary waves
☆ 74J40: Shocks and related discontinuities
☆ 74J99: None of the above, but in this section
□ 74Kxx: Thin bodies, structures
☆ 74K05: Strings
☆ 74K10: Rods (beams, columns, shafts, arches, rings, etc.)
☆ 74K15: Membranes
☆ 74K20: Plates
☆ 74K25: Shells
☆ 74K30: Junctions
☆ 74K35: Thin films
☆ 74K99: None of the above, but in this section
□ 74Lxx: Special subfields of solid mechanics
☆ 74L05: Geophysical solid mechanics [See also 86-XX]
☆ 74L10: Soil and rock mechanics
☆ 74L15: Biomechanical solid mechanics [See also 92C10]
☆ 74L99: None of the above, but in this section
□ 74Mxx: Special kinds of problems
☆ 74M05: Control, switches and devices (“smart materials”) [See also 93Cxx]
☆ 74M10: Friction
☆ 74M15: Contact
☆ 74M20: Impact
☆ 74M25: Micromechanics
☆ 74M99: None of the above, but in this section
□ 74Nxx: Phase transformations in solids [See also 74A50, 80Axx, 82B26, 82C26]
☆ 74N05: Crystals
☆ 74N10: Displacive transformations
☆ 74N15: Analysis of microstructure
☆ 74N20: Dynamics of phase boundaries
☆ 74N25: Transformations involving diffusion
☆ 74N30: Problems involving hysteresis
☆ 74N99: None of the above, but in this section
□ 74Pxx: Optimization [See also 49Qxx]
☆ 74P05: Compliance or weight optimization
☆ 74P10: Optimization of other properties
☆ 74P15: Topological methods
☆ 74P20: Geometrical methods
☆ 74P99: None of the above, but in this section
□ 74Qxx: Homogenization, determination of effective properties
☆ 74Q05: Homogenization in equilibrium problems
☆ 74Q10: Homogenization and oscillations in dynamical problems
☆ 74Q15: Effective constitutive equations
☆ 74Q20: Bounds on effective properties
☆ 74Q99: None of the above, but in this section
□ 74Rxx: Fracture and damage
☆ 74R05: Brittle damage
☆ 74R10: Brittle fracture
☆ 74R15: High-velocity fracture
☆ 74R20: Anelastic fracture and damage
☆ 74R99: None of the above, but in this section
□ 74Sxx: Numerical methods [See also 65-XX, 74G15, 74H15]
☆ 74S05: Finite element methods
☆ 74S10: Finite volume methods
☆ 74S15: Boundary element methods
☆ 74S20: Finite difference methods
☆ 74S25: Spectral and related methods
☆ 74S30: Other numerical methods
☆ 74S60: Stochastic methods
☆ 74S70: Complex variable methods
☆ 74S99: None of the above, but in this section
• 76-XX: Fluid mechanics {For general continuum mechanics, see 74Axx, or other parts of 74-XX}
□ 76-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 76-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 76-02: Research exposition (monographs, survey articles)
□ 76-03: Historical (must also be assigned at least one classification number from Section 01)
□ 76-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 76-05: Experimental work
□ 76-06: Proceedings, conferences, collections, etc.
□ 76Axx: Foundations, constitutive equations, rheology
☆ 76A02: Foundations of fluid mechanics
☆ 76A05: Non-Newtonian fluids
☆ 76A10: Viscoelastic fluids
☆ 76A15: Liquid crystals [See also 82D30]
☆ 76A20: Thin fluid films
☆ 76A25: Superfluids (classical aspects)
☆ 76A99: None of the above, but in this section
□ 76Bxx: Incompressible inviscid fluids
☆ 76B03: Existence, uniqueness, and regularity theory [See also 35Q35]
☆ 76B07: Free-surface potential flows
☆ 76B10: Jets and cavities, cavitation, free-streamline theory, water-entry problems, airfoil and hydrofoil theory, sloshing
☆ 76B15: Water waves, gravity waves; dispersion and scattering, nonlinear interaction [See also 35Q30]
☆ 76B20: Ship waves
☆ 76B25: Solitary waves [See also 35C11]
☆ 76B45: Capillarity (surface tension) [See also 76D45]
☆ 76B47: Vortex flows
☆ 76B55: Internal waves
☆ 76B60: Atmospheric waves [See also 86A10]
☆ 76B65: Rossby waves [See also 86A05, 86A10]
☆ 76B70: Stratification effects in inviscid fluids
☆ 76B75: Flow control and optimization [See also 49Q10, 93C20, 93C95]
☆ 76B99: None of the above, but in this section
□ 76Dxx: Incompressible viscous fluids
☆ 76D03: Existence, uniqueness, and regularity theory [See also 35Q30]
☆ 76D05: Navier-Stokes equations [See also 35Q30]
☆ 76D06: Statistical solutions of Navier-Stokes and related equations [See also 60H30, 76M35]
☆ 76D07: Stokes and related (Oseen, etc.) flows
☆ 76D08: Lubrication theory
☆ 76D09: Viscous-inviscid interaction
☆ 76D10: Boundary-layer theory, separation and reattachment, higher-order effects
☆ 76D17: Viscous vortex flows
☆ 76D25: Wakes and jets
☆ 76D27: Other free-boundary flows; Hele-Shaw flows
☆ 76D33: Waves
☆ 76D45: Capillarity (surface tension) [See also 76B45]
☆ 76D50: Stratification effects in viscous fluids
☆ 76D55: Flow control and optimization [See also 49Q10, 93C20, 93C95]
☆ 76D99: None of the above, but in this section
□ 76Exx: Hydrodynamic stability
☆ 76E05: Parallel shear flows
☆ 76E06: Convection
☆ 76E07: Rotation
☆ 76E09: Stability and instability of nonparallel flows
☆ 76E15: Absolute and convective instability and stability
☆ 76E17: Interfacial stability and instability
☆ 76E19: Compressibility effects
☆ 76E20: Stability and instability of geophysical and astrophysical flows
☆ 76E25: Stability and instability of magnetohydrodynamic and electrohydrodynamic flows
☆ 76E30: Nonlinear effects
☆ 76E99: None of the above, but in this section
□ 76Fxx: Turbulence [See also 37-XX, 60Gxx, 60Jxx]
☆ 76F02: Fundamentals
☆ 76F05: Isotropic turbulence; homogeneous turbulence
☆ 76F06: Transition to turbulence
☆ 76F10: Shear flows
☆ 76F20: Dynamical systems approach to turbulence [See also 37-XX]
☆ 76F25: Turbulent transport, mixing
☆ 76F30: Renormalization and other field-theoretical methods [See also 81T99]
☆ 76F35: Convective turbulence [See also 76E15, 76Rxx]
☆ 76F40: Turbulent boundary layers
☆ 76F45: Stratification effects
☆ 76F50: Compressibility effects
☆ 76F55: Statistical turbulence modeling [See also 76M35]
☆ 76F60: $k$-$\varepsilon$ modeling
☆ 76F65: Direct numerical and large eddy simulation of turbulence
☆ 76F70: Control of turbulent flows
☆ 76F99: None of the above, but in this section
□ 76Gxx: General aerodynamics and subsonic flows
☆ 76G25: General aerodynamics and subsonic flows
☆ 76G99: None of the above, but in this section
□ 76Hxx: Transonic flows
☆ 76H05: Transonic flows
☆ 76H99: None of the above, but in this section
□ 76Jxx: Supersonic flows
☆ 76J20: Supersonic flows
☆ 76J99: None of the above, but in this section
□ 76Kxx: Hypersonic flows
☆ 76K05: Hypersonic flows
☆ 76K99: None of the above, but in this section
□ 76Lxx: Shock waves and blast waves [See also 35L67]
☆ 76L05: Shock waves and blast waves [See also 35L67]
☆ 76L99: None of the above, but in this section
□ 76Mxx: Basic methods in fluid mechanics [See also 65-XX]
☆ 76M10: Finite element methods
☆ 76M12: Finite volume methods
☆ 76M15: Boundary element methods
☆ 76M20: Finite difference methods
☆ 76M22: Spectral methods
☆ 76M23: Vortex methods
☆ 76M25: Other numerical methods
☆ 76M27: Visualization algorithms
☆ 76M28: Particle methods and lattice-gas methods
☆ 76M30: Variational methods
☆ 76M35: Stochastic analysis
☆ 76M40: Complex-variables methods
☆ 76M45: Asymptotic methods, singular perturbations
☆ 76M50: Homogenization
☆ 76M55: Dimensional analysis and similarity
☆ 76M60: Symmetry analysis, Lie group and algebra methods
☆ 76M99: None of the above, but in this section
□ 76Nxx: Compressible fluids and gas dynamics, general
☆ 76N10: Existence, uniqueness, and regularity theory [See also 35L60, 35L65, 35Q30]
☆ 76N15: Gas dynamics, general
☆ 76N17: Viscous-inviscid interaction
☆ 76N20: Boundary-layer theory
☆ 76N25: Flow control and optimization
☆ 76N99: None of the above, but in this section
□ 76Pxx: Rarefied gas flows, Boltzmann equation [See also 82B40, 82C40, 82D05]
☆ 76P05: Rarefied gas flows, Boltzmann equation [See also 82B40, 82C40, 82D05]
☆ 76P99: None of the above, but in this section
□ 76Qxx: Hydro- and aero-acoustics
☆ 76Q05: Hydro- and aero-acoustics
☆ 76Q99: None of the above, but in this section
□ 76Rxx: Diffusion and convection
☆ 76R05: Forced convection
☆ 76R10: Free convection
☆ 76R50: Diffusion [See also 60J60]
☆ 76R99: None of the above, but in this section
□ 76Sxx: Flows in porous media; filtration; seepage
☆ 76S05: Flows in porous media; filtration; seepage
☆ 76S99: None of the above, but in this section
□ 76Txx: Two-phase and multiphase flows
☆ 76T10: Liquid-gas two-phase flows, bubbly flows
☆ 76T15: Dusty-gas two-phase flows
☆ 76T20: Suspensions
☆ 76T25: Granular flows [See also 74C99, 74E20]
☆ 76T30: Three or more component flows
☆ 76T99: None of the above, but in this section
□ 76Uxx: Rotating fluids
☆ 76U05: Rotating fluids
☆ 76U99: None of the above, but in this section
□ 76Vxx: Reaction effects in flows [See also 80A32]
☆ 76V05: Reaction effects in flows [See also 80A32]
☆ 76V99: None of the above, but in this section
□ 76Wxx: Magnetohydrodynamics and electrohydrodynamics
☆ 76W05: Magnetohydrodynamics and electrohydrodynamics
☆ 76W99: None of the above, but in this section
□ 76Xxx: Ionized gas flow in electromagnetic fields; plasmic flow [See also 82D10]
☆ 76X05: Ionized gas flow in electromagnetic fields; plasmic flow [See also 82D10]
☆ 76X99: None of the above, but in this section
□ 76Yxx: Quantum hydrodynamics and relativistic hydrodynamics [See also 82D50, 83C55, 85A30]
☆ 76Y05: Quantum hydrodynamics and relativistic hydrodynamics [See also 82D50, 83C55, 85A30]
☆ 76Y99: None of the above, but in this section
□ 76Zxx: Biological fluid mechanics [See also 74F10, 74L15, 92Cxx]
☆ 76Z05: Physiological flows [See also 92C35]
☆ 76Z10: Biopropulsion in water and in air
☆ 76Z99: None of the above, but in this section
• 78-XX: Optics, electromagnetic theory {For quantum optics, see 81V80}
□ 78-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 78-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 78-02: Research exposition (monographs, survey articles)
□ 78-03: Historical (must also be assigned at least one classification number from Section 01)
□ 78-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 78-05: Experimental work
□ 78-06: Proceedings, conferences, collections, etc.
□ 78Axx: General
☆ 78A02: Foundations
☆ 78A05: Geometric optics
☆ 78A10: Physical optics
☆ 78A15: Electron optics
☆ 78A20: Space charge waves
☆ 78A25: Electromagnetic theory, general
☆ 78A30: Electro- and magnetostatics
☆ 78A35: Motion of charged particles
☆ 78A37: Ion traps
☆ 78A40: Waves and radiation
☆ 78A45: Diffraction, scattering [See also 34E20 for WKB methods]
☆ 78A46: Inverse scattering problems
☆ 78A48: Composite media; random media
☆ 78A50: Antennas, wave-guides
☆ 78A55: Technical applications
☆ 78A57: Electrochemistry
☆ 78A60: Lasers, masers, optical bistability, nonlinear optics [See also 81V80]
☆ 78A70: Biological applications [See also 91D30, 92C30]
☆ 78A97: Mathematically heuristic optics and electromagnetic theory (must also be assigned at least one other classification number in this section)
☆ 78A99: Miscellaneous topics
□ 78Mxx: Basic methods
☆ 78M05: Method of moments
☆ 78M10: Finite element methods
☆ 78M12: Finite volume methods, finite integration techniques
☆ 78M15: Boundary element methods
☆ 78M16: Multipole methods
☆ 78M20: Finite difference methods
☆ 78M22: Spectral methods
☆ 78M25: Other numerical methods
☆ 78M30: Variational methods
☆ 78M31: Monte Carlo methods
☆ 78M32: Neural and heuristic methods
☆ 78M34: Model reduction
☆ 78M35: Asymptotic analysis
☆ 78M40: Homogenization
☆ 78M50: Optimization
☆ 78M99: None of the above, but in this section
• 80-XX: Classical thermodynamics, heat transfer {For thermodynamics of solids, see 74A15}
□ 80-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 80-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 80-02: Research exposition (monographs, survey articles)
□ 80-03: Historical (must also be assigned at least one classification number from Section 01)
□ 80-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 80-05: Experimental work
□ 80-06: Proceedings, conferences, collections, etc.
□ 80Axx: Thermodynamics and heat transfer
☆ 80A05: Foundations
☆ 80A10: Classical thermodynamics, including relativistic
☆ 80A17: Thermodynamics of continua [See also 74A15]
☆ 80A20: Heat and mass transfer, heat flow
☆ 80A22: Stefan problems, phase changes, etc. [See also 74Nxx]
☆ 80A23: Inverse problems
☆ 80A25: Combustion
☆ 80A30: Chemical kinetics [See also 76V05, 92C45, 92E20]
☆ 80A32: Chemically reacting flows [See also 92C45, 92E20]
☆ 80A50: Chemistry (general) [See mainly 92Exx]
☆ 80A99: None of the above, but in this section
□ 80Mxx: Basic methods
☆ 80M10: Finite element methods
☆ 80M12: Finite volume methods
☆ 80M15: Boundary element methods
☆ 80M20: Finite difference methods
☆ 80M22: Spectral methods
☆ 80M25: Other numerical methods
☆ 80M30: Variational methods
☆ 80M31: Monte Carlo methods
☆ 80M35: Asymptotic analysis
☆ 80M40: Homogenization
☆ 80M50: Optimization
☆ 80M99: None of the above, but in this section
• 81-XX: Quantum theory
□ 81-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 81-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 81-02: Research exposition (monographs, survey articles)
□ 81-03: Historical (must also be assigned at least one classification number from Section 01)
□ 81-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 81-05: Experimental papers
□ 81-06: Proceedings, conferences, collections, etc.
□ 81-08: Computational methods
□ 81Pxx: Axiomatics, foundations, philosophy
☆ 81P05: General and philosophical
☆ 81P10: Logical foundations of quantum mechanics; quantum logic [See also 03G12, 06C15]
☆ 81P13: Contextuality
☆ 81P15: Quantum measurement theory
☆ 81P16: Quantum state spaces, operational and probabilistic concepts
☆ 81P20: Stochastic mechanics (including stochastic electrodynamics)
☆ 81P40: Quantum coherence, entanglement, quantum correlations
☆ 81P45: Quantum information, communication, networks [See also 94A15, 94A17]
☆ 81P50: Quantum state estimation, approximate cloning
☆ 81P68: Quantum computation [See also 68Q05, 68Q12]
☆ 81P70: Quantum coding (general)
☆ 81P94: Quantum cryptography [See also 94A60]
☆ 81P99: None of the above, but in this section
□ 81Qxx: General mathematical topics and methods in quantum theory
☆ 81Q05: Closed and approximate solutions to the Schrödinger, Dirac, Klein-Gordon and other equations of quantum mechanics
☆ 81Q10: Selfadjoint operator theory in quantum theory, including spectral analysis
☆ 81Q12: Non-selfadjoint operator theory in quantum theory
☆ 81Q15: Perturbation theories for operators and differential equations
☆ 81Q20: Semiclassical techniques, including WKB and Maslov methods
☆ 81Q30: Feynman integrals and graphs; applications of algebraic topology and algebraic geometry [See also 14D05, 32S40]
☆ 81Q35: Quantum mechanics on special spaces: manifolds, fractals, graphs, etc.
☆ 81Q37: Quantum dots, waveguides, ratchets, etc.
☆ 81Q40: Bethe-Salpeter and other integral equations
☆ 81Q50: Quantum chaos [See also 37Dxx]
☆ 81Q60: Supersymmetry and quantum mechanics
☆ 81Q65: Alternative quantum mechanics
☆ 81Q70: Differential-geometric methods, including holonomy, Berry and Hannay phases, etc.
☆ 81Q80: Special quantum systems, such as solvable systems
☆ 81Q93: Quantum control
☆ 81Q99: None of the above, but in this section
□ 81Rxx: Groups and algebras in quantum theory
☆ 81R05: Finite-dimensional groups and algebras motivated by physics and their representations [See also 20C35, 22E70]
☆ 81R10: Infinite-dimensional groups and algebras motivated by physics, including Virasoro, Kac-Moody, $W$-algebras and other current algebras and their representations [See also 17B65, 17B67, 22E65, 22E67, 22E70]
☆ 81R12: Relations with integrable systems [See also 17Bxx, 37J35]
☆ 81R15: Operator algebra methods [See also 46Lxx, 81T05]
☆ 81R20: Covariant wave equations
☆ 81R25: Spinor and twistor methods [See also 32L25]
☆ 81R30: Coherent states [See also 22E45]; squeezed states [See also 81V80]
☆ 81R40: Symmetry breaking
☆ 81R50: Quantum groups and related algebraic methods [See also 16T20, 17B37]
☆ 81R60: Noncommutative geometry
☆ 81R99: None of the above, but in this section
□ 81Sxx: General quantum mechanics and problems of quantization
☆ 81S05: Canonical quantization, commutation relations and statistics
☆ 81S10: Geometry and quantization, symplectic methods [See also 53D50]
☆ 81S20: Stochastic quantization
☆ 81S22: Open systems, reduced dynamics, master equations, decoherence [See also 82C31]
☆ 81S25: Quantum stochastic calculus
☆ 81S30: Phase-space methods including Wigner distributions, etc.
☆ 81S40: Path integrals [See also 58D30]
☆ 81S99: None of the above, but in this section
□ 81Txx: Quantum field theory; related classical field theories [See also 70Sxx]
☆ 81T05: Axiomatic quantum field theory; operator algebras
☆ 81T08: Constructive quantum field theory
☆ 81T10: Model quantum field theories
☆ 81T13: Yang-Mills and other gauge theories [See also 53C07, 58E15]
☆ 81T15: Perturbative methods of renormalization
☆ 81T16: Nonperturbative methods of renormalization
☆ 81T17: Renormalization group methods
☆ 81T18: Feynman diagrams
☆ 81T20: Quantum field theory on curved space backgrounds
☆ 81T25: Quantum field theory on lattices
☆ 81T27: Continuum limits
☆ 81T28: Thermal quantum field theory [See also 82B30]
☆ 81T30: String and superstring theories; other extended objects (e.g., branes) [See also 83E30]
☆ 81T40: Two-dimensional field theories, conformal field theories, etc.
☆ 81T45: Topological field theories [See also 57R56, 58Dxx]
☆ 81T50: Anomalies
☆ 81T55: Casimir effect
☆ 81T60: Supersymmetric field theories
☆ 81T70: Quantization in field theory; cohomological methods [See also 58D29]
☆ 81T75: Noncommutative geometry methods [See also 46L85, 46L87, 58B34]
☆ 81T80: Simulation and numerical modeling
☆ 81T99: None of the above, but in this section
□ 81Uxx: Scattering theory [See also 34A55, 34L25, 34L40, 35P25, 47A40]
☆ 81U05: $2$-body potential scattering theory [See also 34E20 for WKB methods]
☆ 81U10: $n$-body potential scattering theory
☆ 81U15: Exactly and quasi-solvable systems
☆ 81U20: $S$-matrix theory, etc.
☆ 81U30: Dispersion theory, dispersion relations
☆ 81U35: Inelastic and multichannel scattering
☆ 81U40: Inverse scattering problems
☆ 81U99: None of the above, but in this section
□ 81Vxx: Applications to specific physical systems
☆ 81V05: Strong interaction, including quantum chromodynamics
☆ 81V10: Electromagnetic interaction; quantum electrodynamics
☆ 81V15: Weak interaction
☆ 81V17: Gravitational interaction [See also 83Cxx and 83Exx]
☆ 81V19: Other fundamental interactions
☆ 81V22: Unified theories
☆ 81V25: Other elementary particle theory
☆ 81V35: Nuclear physics
☆ 81V45: Atomic physics
☆ 81V55: Molecular physics [See also 92E10]
☆ 81V65: Quantum dots [See also 82D20]
☆ 81V70: Many-body theory; quantum Hall effect
☆ 81V80: Quantum optics
☆ 81V99: None of the above, but in this section
• 82-XX: Statistical mechanics, structure of matter
□ 82-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 82-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 82-02: Research exposition (monographs, survey articles)
□ 82-03: Historical (must also be assigned at least one classification number from Section 01)
□ 82-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 82-05: Experimental papers
□ 82-06: Proceedings, conferences, collections, etc.
□ 82-08: Computational methods
□ 82Bxx: Equilibrium statistical mechanics
☆ 82B03: Foundations
☆ 82B05: Classical equilibrium statistical mechanics (general)
☆ 82B10: Quantum equilibrium statistical mechanics (general)
☆ 82B20: Lattice systems (Ising, dimer, Potts, etc.) and systems on graphs
☆ 82B21: Continuum models (systems of particles, etc.)
☆ 82B23: Exactly solvable models; Bethe ansatz
☆ 82B24: Interface problems; diffusion-limited aggregation
☆ 82B26: Phase transitions (general)
☆ 82B27: Critical phenomena
☆ 82B28: Renormalization group methods [See also 81T17]
☆ 82B30: Statistical thermodynamics [See also 80-XX]
☆ 82B31: Stochastic methods
☆ 82B35: Irreversible thermodynamics, including Onsager-Machlup theory [See also 92E20]
☆ 82B40: Kinetic theory of gases
☆ 82B41: Random walks, random surfaces, lattice animals, etc. [See also 60G50, 82C41]
☆ 82B43: Percolation [See also 60K35]
☆ 82B44: Disordered systems (random Ising models, random Schrödinger operators, etc.)
☆ 82B80: Numerical methods (Monte Carlo, series resummation, etc.) [See also 65-XX, 81T80]
☆ 82B99: None of the above, but in this section
□ 82Cxx: Time-dependent statistical mechanics (dynamic and nonequilibrium)
☆ 82C03: Foundations
☆ 82C05: Classical dynamic and nonequilibrium statistical mechanics (general)
☆ 82C10: Quantum dynamics and nonequilibrium statistical mechanics (general)
☆ 82C20: Dynamic lattice systems (kinetic Ising, etc.) and systems on graphs
☆ 82C21: Dynamic continuum models (systems of particles, etc.)
☆ 82C22: Interacting particle systems [See also 60K35]
☆ 82C23: Exactly solvable dynamic models [See also 37K60]
☆ 82C24: Interface problems; diffusion-limited aggregation
☆ 82C26: Dynamic and nonequilibrium phase transitions (general)
☆ 82C27: Dynamic critical phenomena
☆ 82C28: Dynamic renormalization group methods [See also 81T17]
☆ 82C31: Stochastic methods (Fokker-Planck, Langevin, etc.) [See also 60H10]
☆ 82C32: Neural nets [See also 68T05, 91E40, 92B20]
☆ 82C35: Irreversible thermodynamics, including Onsager-Machlup theory
☆ 82C40: Kinetic theory of gases
☆ 82C41: Dynamics of random walks, random surfaces, lattice animals, etc. [See also 60G50]
☆ 82C43: Time-dependent percolation [See also 60K35]
☆ 82C44: Dynamics of disordered systems (random Ising systems, etc.)
☆ 82C70: Transport processes
☆ 82C80: Numerical methods (Monte Carlo, series resummation, etc.)
☆ 82C99: None of the above, but in this section
□ 82Dxx: Applications to specific types of physical systems
☆ 82D05: Gases
☆ 82D10: Plasmas
☆ 82D15: Liquids
☆ 82D20: Solids
☆ 82D25: Crystals {For crystallographic group theory, see 20H15}
☆ 82D30: Random media, disordered materials (including liquid crystals and spin glasses)
☆ 82D35: Metals
☆ 82D37: Semiconductors
☆ 82D40: Magnetic materials
☆ 82D45: Ferroelectrics
☆ 82D50: Superfluids
☆ 82D55: Superconductors
☆ 82D60: Polymers
☆ 82D75: Nuclear reactor theory; neutron transport
☆ 82D77: Quantum wave guides, quantum wires [See also 78A50]
☆ 82D80: Nanostructures and nanoparticles
☆ 82D99: None of the above, but in this section
• 83-XX: Relativity and gravitational theory
□ 83-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 83-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 83-02: Research exposition (monographs, survey articles)
□ 83-03: Historical (must also be assigned at least one classification number from Section 01)
□ 83-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 83-05: Experimental work
□ 83-06: Proceedings, conferences, collections, etc.
□ 83-08: Computational methods
□ 83Axx: Special relativity
☆ 83A05: Special relativity
☆ 83A99: None of the above, but in this section
□ 83Bxx: Observational and experimental questions
☆ 83B05: Observational and experimental questions
☆ 83B99: None of the above, but in this section
□ 83Cxx: General relativity
☆ 83C05: Einstein's equations (general structure, canonical formalism, Cauchy problems)
☆ 83C10: Equations of motion
☆ 83C15: Exact solutions
☆ 83C20: Classes of solutions; algebraically special solutions, metrics with symmetries
☆ 83C22: Einstein-Maxwell equations
☆ 83C25: Approximation procedures, weak fields
☆ 83C27: Lattice gravity, Regge calculus and other discrete methods
☆ 83C30: Asymptotic procedures (radiation, news functions, $\mathscr{H}$-spaces, etc.)
☆ 83C35: Gravitational waves
☆ 83C40: Gravitational energy and conservation laws; groups of motions
☆ 83C45: Quantization of the gravitational field
☆ 83C47: Methods of quantum field theory [See also 81T20]
☆ 83C50: Electromagnetic fields
☆ 83C55: Macroscopic interaction of the gravitational field with matter (hydrodynamics, etc.)
☆ 83C57: Black holes
☆ 83C60: Spinor and twistor methods; Newman-Penrose formalism
☆ 83C65: Methods of noncommutative geometry [See also 58B34]
☆ 83C75: Space-time singularities, cosmic censorship, etc.
☆ 83C80: Analogues in lower dimensions
☆ 83C99: None of the above, but in this section
□ 83Dxx: Relativistic gravitational theories other than Einstein's, including asymmetric field theories
☆ 83D05: Relativistic gravitational theories other than Einstein's, including asymmetric field theories
☆ 83D99: None of the above, but in this section
□ 83Exx: Unified, higher-dimensional and super field theories
☆ 83E05: Geometrodynamics
☆ 83E15: Kaluza-Klein and other higher-dimensional theories
☆ 83E30: String and superstring theories [See also 81T30]
☆ 83E50: Supergravity
☆ 83E99: None of the above, but in this section
□ 83Fxx: Cosmology
☆ 83F05: Cosmology
☆ 83F99: None of the above, but in this section
• 85-XX: Astronomy and astrophysics {For celestial mechanics, see 70F15}
□ 85-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 85-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 85-02: Research exposition (monographs, survey articles)
□ 85-03: Historical (must also be assigned at least one classification number from Section 01)
□ 85-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 85-05: Experimental work
□ 85-06: Proceedings, conferences, collections, etc.
□ 85-08: Computational methods
□ 85Axx: Astronomy and astrophysics {For celestial mechanics, see 70F15}
☆ 85A04: General
☆ 85A05: Galactic and stellar dynamics
☆ 85A15: Galactic and stellar structure
☆ 85A20: Planetary atmospheres
☆ 85A25: Radiative transfer
☆ 85A30: Hydrodynamic and hydromagnetic problems [See also 76Y05]
☆ 85A35: Statistical astronomy
☆ 85A40: Cosmology {For relativistic cosmology, see 83F05}
☆ 85A99: Miscellaneous topics
• 86-XX: Geophysics [See also 76U05, 76V05]
□ 86-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 86-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 86-02: Research exposition (monographs, survey articles)
□ 86-03: Historical (must also be assigned at least one classification number from Section 01)
□ 86-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 86-05: Experimental work
□ 86-06: Proceedings, conferences, collections, etc.
□ 86-08: Computational methods
□ 86Axx: Geophysics [See also 76U05, 76V05]
☆ 86A04: General
☆ 86A05: Hydrology, hydrography, oceanography [See also 76Bxx, 76E20, 76Q05, 76Rxx, 76U05]
☆ 86A10: Meteorology and atmospheric physics [See also 76Bxx, 76E20, 76N15, 76Q05, 76Rxx, 76U05]
☆ 86A15: Seismology
☆ 86A17: Global dynamics, earthquake problems
☆ 86A20: Potentials, prospecting
☆ 86A22: Inverse problems [See also 35R30]
☆ 86A25: Geo-electricity and geomagnetism [See also 76W05, 78A25]
☆ 86A30: Geodesy, mapping problems
☆ 86A32: Geostatistics
☆ 86A40: Glaciology
☆ 86A60: Geological problems
☆ 86A99: Miscellaneous topics
• 90-XX: Operations research, mathematical programming
□ 90-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 90-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 90-02: Research exposition (monographs, survey articles)
□ 90-03: Historical (must also be assigned at least one classification number from Section 01)
□ 90-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 90-06: Proceedings, conferences, collections, etc.
□ 90-08: Computational methods
□ 90Bxx: Operations research and management science
☆ 90B05: Inventory, storage, reservoirs
☆ 90B06: Transportation, logistics
☆ 90B10: Network models, deterministic
☆ 90B15: Network models, stochastic
☆ 90B18: Communication networks [See also 68M10, 94A05]
☆ 90B20: Traffic problems
☆ 90B22: Queues and service [See also 60K25, 68M20]
☆ 90B25: Reliability, availability, maintenance, inspection [See also 60K10, 62N05]
☆ 90B30: Production models
☆ 90B35: Scheduling theory, deterministic [See also 68M20]
☆ 90B36: Scheduling theory, stochastic [See also 68M20]
☆ 90B40: Search theory
☆ 90B50: Management decision making, including multiple objectives [See also 90C29, 90C31, 91A35, 91B06]
☆ 90B60: Marketing, advertising [See also 91B60]
☆ 90B70: Theory of organizations, manpower planning [See also 91D35]
☆ 90B80: Discrete location and assignment [See also 90C10]
☆ 90B85: Continuous location
☆ 90B90: Case-oriented studies
☆ 90B99: None of the above, but in this section
□ 90Cxx: Mathematical programming [See also 49Mxx, 65Kxx]
☆ 90C05: Linear programming
☆ 90C06: Large-scale problems
☆ 90C08: Special problems of linear programming (transportation, multi-index, etc.)
☆ 90C09: Boolean programming
☆ 90C10: Integer programming
☆ 90C11: Mixed integer programming
☆ 90C15: Stochastic programming
☆ 90C20: Quadratic programming
☆ 90C22: Semidefinite programming
☆ 90C25: Convex programming
☆ 90C26: Nonconvex programming, global optimization
☆ 90C27: Combinatorial optimization
☆ 90C29: Multi-objective and goal programming
☆ 90C30: Nonlinear programming
☆ 90C31: Sensitivity, stability, parametric optimization
☆ 90C32: Fractional programming
☆ 90C33: Complementarity and equilibrium problems and variational inequalities (finite dimensions)
☆ 90C34: Semi-infinite programming
☆ 90C35: Programming involving graphs or networks [See also 90C27]
☆ 90C39: Dynamic programming [See also 49L20]
☆ 90C40: Markov and semi-Markov decision processes
☆ 90C46: Optimality conditions, duality [See also 49N15]
☆ 90C47: Minimax problems [See also 49K35]
☆ 90C48: Programming in abstract spaces
☆ 90C49: Extreme-point and pivoting methods
☆ 90C51: Interior-point methods
☆ 90C52: Methods of reduced gradient type
☆ 90C53: Methods of quasi-Newton type
☆ 90C55: Methods of successive quadratic programming type
☆ 90C56: Derivative-free methods and methods using generalized derivatives [See also 49J52]
☆ 90C57: Polyhedral combinatorics, branch-and-bound, branch-and-cut
☆ 90C59: Approximation methods and heuristics
☆ 90C60: Abstract computational complexity for mathematical programming problems [See also 68Q25]
☆ 90C70: Fuzzy programming
☆ 90C90: Applications of mathematical programming
☆ 90C99: None of the above, but in this section
• 91-XX: Game theory, economics, social and behavioral sciences
□ 91-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 91-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 91-02: Research exposition (monographs, survey articles)
□ 91-03: Historical (must also be assigned at least one classification number from Section 01)
□ 91-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 91-06: Proceedings, conferences, collections, etc.
□ 91-08: Computational methods
□ 91Axx: Game theory
☆ 91A05: 2-person games
☆ 91A06: $n$-person games, $n>2$
☆ 91A10: Noncooperative games
☆ 91A12: Cooperative games
☆ 91A13: Games with infinitely many players
☆ 91A15: Stochastic games
☆ 91A18: Games in extensive form
☆ 91A20: Multistage and repeated games
☆ 91A22: Evolutionary games
☆ 91A23: Differential games [See also 49N70]
☆ 91A24: Positional games (pursuit and evasion, etc.) [See also 49N75]
☆ 91A25: Dynamic games
☆ 91A26: Rationality, learning
☆ 91A28: Signaling, communication
☆ 91A30: Utility theory for games [See also 91B16]
☆ 91A35: Decision theory for games [See also 62Cxx, 91B06, 90B50]
☆ 91A40: Game-theoretic models
☆ 91A43: Games involving graphs [See also 05C57]
☆ 91A44: Games involving topology or set theory
☆ 91A46: Combinatorial games
☆ 91A50: Discrete-time games
☆ 91A55: Games of timing
☆ 91A60: Probabilistic games; gambling [See also 60G40]
☆ 91A65: Hierarchical games
☆ 91A70: Spaces of games
☆ 91A80: Applications of game theory
☆ 91A90: Experimental studies
☆ 91A99: None of the above, but in this section
□ 91Bxx: Mathematical economics {For econometrics, see 62P20}
☆ 91B02: Fundamental topics (basic mathematics, methodology; applicable to economics in general)
☆ 91B06: Decision theory [See also 62Cxx, 90B50, 91A35]
☆ 91B08: Individual preferences
☆ 91B10: Group preferences
☆ 91B12: Voting theory
☆ 91B14: Social choice
☆ 91B15: Welfare economics
☆ 91B16: Utility theory
☆ 91B18: Public goods
☆ 91B24: Price theory and market structure
☆ 91B25: Asset pricing models
☆ 91B26: Market models (auctions, bargaining, bidding, selling, etc.)
☆ 91B30: Risk theory, insurance
☆ 91B32: Resource and cost allocation
☆ 91B38: Production theory, theory of the firm
☆ 91B40: Labor market, contracts
☆ 91B42: Consumer behavior, demand theory
☆ 91B44: Informational economics
☆ 91B50: General equilibrium theory
☆ 91B51: Dynamic stochastic general equilibrium theory
☆ 91B52: Special types of equilibria
☆ 91B54: Special types of economies
☆ 91B55: Economic dynamics
☆ 91B60: Trade models
☆ 91B62: Growth models
☆ 91B64: Macro-economic models (monetary models, models of taxation)
☆ 91B66: Multisectoral models
☆ 91B68: Matching models
☆ 91B69: Heterogeneous agent models
☆ 91B70: Stochastic models
☆ 91B72: Spatial models
☆ 91B74: Models of real-world systems
☆ 91B76: Environmental economics (natural resource models, harvesting, pollution, etc.)
☆ 91B80: Applications of statistical and quantum mechanics to economics (econophysics)
☆ 91B82: Statistical methods; economic indices and measures
☆ 91B84: Economic time series analysis [See also 62M10]
☆ 91B99: None of the above, but in this section
□ 91Cxx: Social and behavioral sciences: general topics {For statistics, see 62-XX}
☆ 91C05: Measurement theory
☆ 91C15: One- and multidimensional scaling
☆ 91C20: Clustering [See also 62H30]
☆ 91C99: None of the above, but in this section
□ 91Dxx: Mathematical sociology (including anthropology)
☆ 91D10: Models of societies, social and urban evolution
☆ 91D20: Mathematical geography and demography
☆ 91D25: Spatial models [See also 91B72]
☆ 91D30: Social networks
☆ 91D35: Manpower systems [See also 91B40, 90B70]
☆ 91D99: None of the above, but in this section
□ 91Exx: Mathematical psychology
☆ 91E10: Cognitive psychology
☆ 91E30: Psychophysics and psychophysiology; perception
☆ 91E40: Memory and learning [See also 68T05]
☆ 91E45: Measurement and performance
☆ 91E99: None of the above, but in this section
□ 91Fxx: Other social and behavioral sciences (mathematical treatment)
☆ 91F10: History, political science
☆ 91F20: Linguistics [See also 03B65, 68T50]
☆ 91F99: None of the above, but in this section
□ 91Gxx: Mathematical finance
☆ 91G10: Portfolio theory
☆ 91G20: Derivative securities
☆ 91G30: Interest rates (stochastic models)
☆ 91G40: Credit risk
☆ 91G50: Corporate finance
☆ 91G60: Numerical methods (including Monte Carlo methods)
☆ 91G70: Statistical methods, econometrics
☆ 91G80: Financial applications of other theories (stochastic control, calculus of variations, PDE, SPDE, dynamical systems)
☆ 91G99: None of the above, but in this section
• 92-XX: Biology and other natural sciences
□ 92-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 92-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 92-02: Research exposition (monographs, survey articles)
□ 92-03: Historical (must also be assigned at least one classification number from Section 01)
□ 92-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 92-06: Proceedings, conferences, collections, etc.
□ 92-08: Computational methods
□ 92Bxx: Mathematical biology in general
☆ 92B05: General biology and biomathematics
☆ 92B10: Taxonomy, cladistics, statistics
☆ 92B15: General biostatistics [See also 62P10]
☆ 92B20: Neural networks, artificial life and related topics [See also 68T05, 82C32, 94Cxx]
☆ 92B25: Biological rhythms and synchronization
☆ 92B99: None of the above, but in this section
□ 92Cxx: Physiological, cellular and medical topics
☆ 92C05: Biophysics
☆ 92C10: Biomechanics [See also 74L15]
☆ 92C15: Developmental biology, pattern formation
☆ 92C17: Cell movement (chemotaxis, etc.)
☆ 92C20: Neural biology
☆ 92C30: Physiology (general)
☆ 92C35: Physiological flow [See also 76Z05]
☆ 92C37: Cell biology
☆ 92C40: Biochemistry, molecular biology
☆ 92C42: Systems biology, networks
☆ 92C45: Kinetics in biochemical problems (pharmacokinetics, enzyme kinetics, etc.) [See also 80A30]
☆ 92C50: Medical applications (general)
☆ 92C55: Biomedical imaging and signal processing [See also 44A12, 65R10, 94A08, 94A12]
☆ 92C60: Medical epidemiology
☆ 92C80: Plant biology
☆ 92C99: None of the above, but in this section
□ 92Dxx: Genetics and population dynamics
☆ 92D10: Genetics {For genetic algebras, see 17D92}
☆ 92D15: Problems related to evolution
☆ 92D20: Protein sequences, DNA sequences
☆ 92D25: Population dynamics (general)
☆ 92D30: Epidemiology
☆ 92D40: Ecology
☆ 92D50: Animal behavior
☆ 92D99: None of the above, but in this section
□ 92Exx: Chemistry {For biochemistry, see 92C40}
☆ 92E10: Molecular structure (graph-theoretic methods, methods of differential topology, etc.)
☆ 92E20: Classical flows, reactions, etc. [See also 80A30, 80A32]
☆ 92E99: None of the above, but in this section
□ 92Fxx: Other natural sciences (should also be assigned at least one other classification number in this section)
☆ 92F05: Other natural sciences (should also be assigned at least one other classification number in section 92)
☆ 92F99: None of the above, but in this section
• 93-XX: Systems theory; control {For optimal control, see 49-XX}
□ 93-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 93-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 93-02: Research exposition (monographs, survey articles)
□ 93-03: Historical (must also be assigned at least one classification number from Section 01)
□ 93-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 93-06: Proceedings, conferences, collections, etc.
□ 93Axx: General
☆ 93A05: Axiomatic system theory
☆ 93A10: General systems
☆ 93A13: Hierarchical systems
☆ 93A14: Decentralized systems
☆ 93A15: Large scale systems
☆ 93A30: Mathematical modeling (models of systems, model-matching, etc.)
☆ 93A99: None of the above, but in this section
□ 93Bxx: Controllability, observability, and system structure
☆ 93B03: Attainable sets
☆ 93B05: Controllability
☆ 93B07: Observability
☆ 93B10: Canonical structure
☆ 93B11: System structure simplification
☆ 93B12: Variable structure systems
☆ 93B15: Realizations from input-output data
☆ 93B17: Transformations
☆ 93B18: Linearizations
☆ 93B20: Minimal systems representations
☆ 93B25: Algebraic methods
☆ 93B27: Geometric methods
☆ 93B28: Operator-theoretic methods [See also 47A48, 47A57, 47B35, 47N70]
☆ 93B30: System identification
☆ 93B35: Sensitivity (robustness)
☆ 93B36: $H^\infty$-control
☆ 93B40: Computational methods
☆ 93B50: Synthesis problems
☆ 93B51: Design techniques (robust design, computer-aided design, etc.)
☆ 93B52: Feedback control
☆ 93B55: Pole and zero placement problems
☆ 93B60: Eigenvalue problems
☆ 93B99: None of the above, but in this section
□ 93Cxx: Control systems
☆ 93C05: Linear systems
☆ 93C10: Nonlinear systems
☆ 93C15: Systems governed by ordinary differential equations [See also 34H05]
☆ 93C20: Systems governed by partial differential equations
☆ 93C23: Systems governed by functional-differential equations [See also 34K35]
☆ 93C25: Systems in abstract spaces
☆ 93C30: Systems governed by functional relations other than differential equations (such as hybrid and switching systems)
☆ 93C35: Multivariable systems
☆ 93C40: Adaptive control
☆ 93C41: Problems with incomplete information
☆ 93C42: Fuzzy control systems
☆ 93C55: Discrete-time systems
☆ 93C57: Sampled-data systems
☆ 93C62: Digital systems
☆ 93C65: Discrete event systems
☆ 93C70: Time-scale analysis and singular perturbations
☆ 93C73: Perturbations
☆ 93C80: Frequency-response methods
☆ 93C83: Control problems involving computers (process control, etc.)
☆ 93C85: Automated systems (robots, etc.) [See also 68T40, 70B15, 70Q05]
☆ 93C95: Applications
☆ 93C99: None of the above, but in this section
□ 93Dxx: Stability
☆ 93D05: Lyapunov and other classical stabilities (Lagrange, Poisson, $L^p, l^p$, etc.)
☆ 93D09: Robust stability
☆ 93D10: Popov-type stability of feedback systems
☆ 93D15: Stabilization of systems by feedback
☆ 93D20: Asymptotic stability
☆ 93D21: Adaptive or robust stabilization
☆ 93D25: Input-output approaches
☆ 93D30: Scalar and vector Lyapunov functions
☆ 93D99: None of the above, but in this section
□ 93Exx: Stochastic systems and control
☆ 93E03: Stochastic systems, general
☆ 93E10: Estimation and detection [See also 60G35]
☆ 93E11: Filtering [See also 60G35]
☆ 93E12: System identification
☆ 93E14: Data smoothing
☆ 93E15: Stochastic stability
☆ 93E20: Optimal stochastic control
☆ 93E24: Least squares and related methods
☆ 93E25: Other computational methods
☆ 93E35: Stochastic learning and adaptive control
☆ 93E99: None of the above, but in this section
• 94-XX: Information and communication, circuits
□ 94-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 94-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 94-02: Research exposition (monographs, survey articles)
□ 94-03: Historical (must also be assigned at least one classification number from Section 01)
□ 94-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 94-06: Proceedings, conferences, collections, etc.
□ 94Axx: Communication, information
☆ 94A05: Communication theory [See also 60G35, 90B18]
☆ 94A08: Image processing (compression, reconstruction, etc.) [See also 68U10]
☆ 94A11: Application of orthogonal and other special functions
☆ 94A12: Signal theory (characterization, reconstruction, filtering, etc.)
☆ 94A13: Detection theory
☆ 94A14: Modulation and demodulation
☆ 94A15: Information theory, general [See also 62B10, 81P45]
☆ 94A17: Measures of information, entropy
☆ 94A20: Sampling theory
☆ 94A24: Coding theorems (Shannon theory)
☆ 94A29: Source coding [See also 68P30]
☆ 94A34: Rate-distortion theory
☆ 94A40: Channel models (including quantum)
☆ 94A45: Prefix, length-variable, comma-free codes [See also 20M35, 68Q45]
☆ 94A50: Theory of questionnaires
☆ 94A55: Shift register sequences and sequences over finite alphabets
☆ 94A60: Cryptography [See also 11T71, 14G50, 68P25, 81P94]
☆ 94A62: Authentication and secret sharing [See also 81P94]
☆ 94A99: None of the above, but in this section
□ 94Bxx: Theory of error-correcting codes and error-detecting codes
☆ 94B05: Linear codes, general
☆ 94B10: Convolutional codes
☆ 94B12: Combined modulation schemes (including trellis codes)
☆ 94B15: Cyclic codes
☆ 94B20: Burst-correcting codes
☆ 94B25: Combinatorial codes
☆ 94B27: Geometric methods (including applications of algebraic geometry) [See also 11T71, 14G50]
☆ 94B30: Majority codes
☆ 94B35: Decoding
☆ 94B40: Arithmetic codes [See also 11T71, 14G50]
☆ 94B50: Synchronization error-correcting codes
☆ 94B60: Other types of codes
☆ 94B65: Bounds on codes
☆ 94B70: Error probability
☆ 94B75: Applications of the theory of convex sets and geometry of numbers (covering radius, etc.) [See also 11H31, 11H71]
☆ 94B99: None of the above, but in this section
□ 94Cxx: Circuits, networks
☆ 94C05: Analytic circuit theory
☆ 94C10: Switching theory, application of Boolean algebra; Boolean functions [See also 06E30]
☆ 94C12: Fault detection; testing
☆ 94C15: Applications of graph theory [See also 05Cxx, 68R10]
☆ 94C30: Applications of design theory [See also 05Bxx]
☆ 94C99: None of the above, but in this section
□ 94Dxx: Fuzzy sets and logic (in connection with questions of Section 94) [See also 03B52, 03E72, 28E10]
☆ 94D05: Fuzzy sets and logic (in connection with questions of Section 94) [See also 03B52, 03E72, 28E10]
☆ 94D99: None of the above, but in this section
• 97-XX: Mathematics education
□ 97-00: General reference works (handbooks, dictionaries, bibliographies, etc.)
□ 97-01: Instructional exposition (textbooks, tutorial papers, etc.)
□ 97-02: Research exposition (monographs, survey articles)
□ 97-03: Historical (must also be assigned at least one classification number from Section 01)
□ 97-04: Explicit machine computation and programs (not the theory of computation or programming)
□ 97-06: Proceedings, conferences, collections, etc.
□ 97Axx: General, mathematics and education
☆ 97A10: Comprehensive works, reference books
☆ 97A20: Recreational mathematics, games [See also 00A08]
☆ 97A30: History of mathematics and mathematics education [See also 01-XX]
☆ 97A40: Mathematics and society
☆ 97A50: Bibliographies [See also 01-00]
☆ 97A70: Theses and postdoctoral theses
☆ 97A80: Popularization of mathematics
☆ 97A99: None of the above, but in this section
□ 97Bxx: Educational policy and systems
☆ 97B10: Educational research and planning
☆ 97B20: General education
☆ 97B30: Vocational education
☆ 97B40: Higher education
☆ 97B50: Teacher education {For research aspects, see 97C70}
☆ 97B60: Adult and further education
☆ 97B70: Syllabuses, educational standards
☆ 97B99: None of the above, but in this section
□ 97Cxx: Psychology of mathematics education, research in mathematics education
☆ 97C10: Comprehensive works
☆ 97C20: Affective behavior
☆ 97C30: Cognitive processes, learning theories
☆ 97C40: Intelligence and aptitudes
☆ 97C50: Language and verbal communities
☆ 97C60: Sociological aspects of learning
☆ 97C70: Teaching-learning processes
☆ 97C99: None of the above, but in this section
□ 97Dxx: Education and instruction in mathematics
☆ 97D10: Comprehensive works, comparative studies
☆ 97D20: Philosophical and theoretical contributions (maths didactics)
☆ 97D30: Objectives and goals
☆ 97D40: Teaching methods and classroom techniques
☆ 97D50: Teaching problem solving and heuristic strategies {For research aspects, see 97Cxx}
☆ 97D60: Student assessment, achievement control and rating
☆ 97D70: Learning difficulties and student errors
☆ 97D80: Teaching units and draft lessons
☆ 97D99: None of the above, but in this section
□ 97Exx: Foundations of mathematics
☆ 97E10: Comprehensive works
☆ 97E20: Philosophy and mathematics
☆ 97E30: Logic
☆ 97E40: Language of mathematics
☆ 97E50: Reasoning and proving in the mathematics classroom
☆ 97E60: Sets, relations, set theory
☆ 97E99: None of the above, but in this section
□ 97Fxx: Arithmetic, number theory
☆ 97F10: Comprehensive works
☆ 97F20: Pre-numerical stage, concept of numbers
☆ 97F30: Natural numbers
☆ 97F40: Integers, rational numbers
☆ 97F50: Real numbers, complex numbers
☆ 97F60: Number theory
☆ 97F70: Measures and units
☆ 97F80: Ratio and proportion, percentages
☆ 97F90: Real life mathematics, practical arithmetic
☆ 97F99: None of the above, but in this section
□ 97Gxx: Geometry
☆ 97G10: Comprehensive works
☆ 97G20: Informal geometry
☆ 97G30: Areas and volumes
☆ 97G40: Plane and solid geometry
☆ 97G50: Transformation geometry
☆ 97G60: Plane and spherical trigonometry
☆ 97G70: Analytic geometry. Vector algebra
☆ 97G80: Descriptive geometry
☆ 97G99: None of the above, but in this section
□ 97Hxx: Algebra
☆ 97H10: Comprehensive works
☆ 97H20: Elementary algebra
☆ 97H30: Equations and inequalities
☆ 97H40: Groups, rings, fields
☆ 97H50: Ordered algebraic structures
☆ 97H60: Linear algebra
☆ 97H99: None of the above, but in this section
□ 97Ixx: Analysis
☆ 97I10: Comprehensive works
☆ 97I20: Mappings and functions
☆ 97I30: Sequences and series
☆ 97I40: Differential calculus
☆ 97I50: Integral calculus
☆ 97I60: Functions of several variables
☆ 97I70: Functional equations
☆ 97I80: Complex analysis
☆ 97I99: None of the above, but in this section
□ 97Kxx: Combinatorics, graph theory, probability theory, statistics
☆ 97K10: Comprehensive works
☆ 97K20: Combinatorics
☆ 97K30: Graph theory
☆ 97K40: Descriptive statistics
☆ 97K50: Probability theory
☆ 97K60: Distributions and stochastic processes
☆ 97K70: Foundations and methodology of statistics
☆ 97K80: Applied statistics
☆ 97K99: None of the above, but in this section
□ 97Mxx: Mathematical modeling, applications of mathematics
☆ 97M10: Modeling and interdisciplinarity
☆ 97M20: Mathematics in vocational training and career education
☆ 97M30: Financial and insurance mathematics
☆ 97M40: Operations research, economics
☆ 97M50: Physics, astronomy, technology, engineering
☆ 97M60: Biology, chemistry, medicine
☆ 97M70: Behavioral and social sciences
☆ 97M80: Arts, music, language, architecture
☆ 97M99: None of the above, but in this section
□ 97Nxx: Numerical mathematics
☆ 97N10: Comprehensive works
☆ 97N20: Rounding, estimation, theory of errors
☆ 97N30: Numerical algebra
☆ 97N40: Numerical analysis
☆ 97N50: Interpolation and approximation
☆ 97N60: Mathematical programming
☆ 97N70: Discrete mathematics
☆ 97N80: Mathematical software, computer programs
☆ 97N99: None of the above, but in this section
□ 97Pxx: Computer science
☆ 97P10: Comprehensive works
☆ 97P20: Theory of computer science
☆ 97P30: System software
☆ 97P40: Programming languages
☆ 97P50: Programming techniques
☆ 97P60: Hardware
☆ 97P70: Computer science and society
☆ 97P99: None of the above, but in this section
□ 97Qxx: Computer science education
☆ 97Q10: Comprehensive works
☆ 97Q20: Affective aspects in teaching computer science
☆ 97Q30: Cognitive processes
☆ 97Q40: Sociological aspects
☆ 97Q50: Objectives
☆ 97Q60: Teaching methods and classroom techniques
☆ 97Q70: Student assessment
☆ 97Q80: Teaching units
☆ 97Q99: None of the above, but in this section
□ 97Rxx: Computer science applications
☆ 97R10: Comprehensive works, collections of programs
☆ 97R20: Applications in mathematics
☆ 97R30: Applications in sciences
☆ 97R40: Artificial intelligence
☆ 97R50: Data bases, information systems
☆ 97R60: Computer graphics
☆ 97R70: User programs, administrative applications
☆ 97R80: Recreational computing
☆ 97R99: None of the above, but in this section
□ 97Uxx: Educational material and media, educational technology
☆ 97U10: Comprehensive works
☆ 97U20: Textbooks. Textbook research
☆ 97U30: Teachers' manuals and planning aids
☆ 97U40: Problem books. Competitions. Examinations
☆ 97U50: Computer assisted instruction; e-learning
☆ 97U60: Manipulative materials
☆ 97U70: Technological tools, calculators
☆ 97U80: Audiovisual media
☆ 97U99: None of the above, but in this section | {"url":"https://ftp.udc.es/CRAN/web/classifications/MSC-2010.html","timestamp":"2024-11-03T09:14:02Z","content_type":"text/html","content_length":"560603","record_id":"<urn:uuid:20586a2b-4ae7-45db-9c71-568585a9d674>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00735.warc.gz"} |
The Ultimate Guide to Tackling 2nd Grade Math Word Problems
Before diving deep, the workbook builds a robust foundation by gradually introducing the various math problems. This incremental exposure boosts the child’s critical thinking abilities, building a
ladder towards math triumph, in the classroom, and beyond!
Word Problem Activities
Packed with fun-filled math activities, our workbook is customized for second-grade students. It promotes a step-by-step learning progression, offering an array of diverse word problems to stimulate
critical thinking.
Unique Features of the Workbook
Learning Progression
“Mastering Grade 2 Math Word Problems” stands tall with its systematic learning progression. It presents word problems in a way that strengthens the child’s cognitive abilities, laying the groundwork
for future success.
Diverse Word Problems
The word problems are wide-ranging and adhere to the current math standards, providing an ongoing review of essential concepts. They provide regular practice with key second-grade math notions such
as addition, subtraction, basic multiplication, and division.
Real-world Applications
Addition and Subtraction Skills
More than just memorizing math operations, the workbook helps children understand when and how to apply them in real-life scenarios. It equips students with skills to decipher a math problem, decide
if it requires addition or subtraction, and then apply the relevant operation.
Multiplication and Division Skills
The workbook excels in nurturing basic multiplication and division skills in children, again with a real-world application perspective. By providing engaging word problems that require multiplication
and division, it fosters these essential math skills.
Benefits of Using the Workbook
Boost in Reading Comprehension
“Mastering Grade 2 Math Word Problems” is more than just a math workbook. As children interpret and solve word problems, they inadvertently boost their reading comprehension skills, an added benefit.
Preparing for Future Math Success
Equipped with essential skills, children are set up for success in higher-grade math problems. The tracking tools included allow for progress monitoring, and identifying areas needing more practice.
Perfect for supplementary learning, this workbook is the ideal resource for nurturing your student’s development of the skills they need to shine in math.
Whether you’re an educator or a parent, “Mastering Grade 2 Math Word Problems” is your go-to resource. With its engaging activities, systematic progression, and in-depth tracking tools, the workbook
empowers students to tackle even the most challenging math word problems confidently.
1. Is the workbook suitable for homeschooling? Yes, it is perfect for homeschooling and can be effectively used under the guidance of parents.
2. Does the workbook cover all 2nd-grade math topics? Yes, the workbook covers all second-grade math topics, keeping in line with current math standards.
3. Is the workbook interactive? Yes, the workbook is filled with engaging activities to keep the learning process fun and interactive.
4. How does the workbook boost reading comprehension? The workbook enhances reading comprehension as students read, interpret, and solve word problems.
5. Can the workbook be used for advanced learners? While primarily aimed at 2nd graders, advanced learners can also benefit from the diverse range of problems and real-world applications.
There are no reviews yet. | {"url":"https://www.effortlessmath.com/product/mastering-grade-2-math-word-problems-the-ultimate-guide-to-tackling-2nd-grade-math-word-problems/","timestamp":"2024-11-01T21:59:40Z","content_type":"text/html","content_length":"46430","record_id":"<urn:uuid:41d66acb-538c-4136-8ba4-a8ea9cf911d0>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00482.warc.gz"} |
Ferrari, Ludovico | Encyclopedia.com
Ferrari, Ludovico
(b. Bologna, Italy, 2 February 1522; d. Bologna, October 1565)
Little is known of Ferrari’s life. His father, Alessandro, was the son of a Milanese refugee who had settled in Bologna. Following his father’s death Ferrari went to live with his uncle Vincenzo. In
November 1536 he was sent to Milan by his uncle to join the household of Girolamo Cardano, replacing his uncle’s son Luca, who was already in Cardano’s service. Although he had not received a formal
education, Ferrari was exceptionally intelligent. Cardano therefore instructed him in Latin, Greek, and mathematics and employed him as amanuensis. In Cardano’s autobiography, written many years
later, Ferrari is described as having “excelled as a youth all my pupils by the high degree of his learning” (De vita propria liber [1643], p. 156).
In 1540 Ferrari was appointed public lecturer in mathematics in Milan, and shortly afterward he defeated Zuanne da Coi, a mathematician of Brescia, at a public disputation. He also collaborated with
Cardano in researches on the cubic and quartic equations, the results of which were published in the Ars magna (1545). The publication of this book was the cause of the celebrated feud between
Ferrari and Niccold Tartaglia of Brescia, author of Quesiti et inventioni diverse (1546). In the wake of the resulting public disputation, Ferrari received offers of employment from many persons of
importance, including Emperor Charles V, who wanted a tutor for his son, and Ercole Gonzaga, cardinal of Mantua. He accepted Gonzaga’s offer and, at the request of the cardinal’s brother, Ferrante,
then governor of Milan, he carried out a survey of that province. After this he was in the cardinal’s service for some eight years. On his retirement because of ill health Ferrari went to Bologna to
live with his sister. From September 1564 until his death in October 1565, he held the post of lector ad mathematicam at the University of Bologna.
When Ferrari went to live with Cardano, the latter was earning his livelihood by teaching mathematics. Although Cardano was a qualified physician, he had not yet been accepted by the College of
Physicians and was then preparing his first works on medicine and mathematics for publication. It is likely that Ferrari was introduced to mathematics through Cardano’s Practica arithmetice (1539).
While this work was in preparation, news reached Cardano that a method of solving the cubic equation of the form x^3 + ax = b, where a and b are positive, was known to Niccolò Tartaglia of Brescia.
Until then Cardano had accepted Luca Pacioli’s statement in the Summa de arithmetica, geometria, proportioni et proportionalita (1494) that the cubic equation could not be solved algebraically. On
learning that Tartaglia had solved the equation in the course of a disputation with Antonio Maria Fiore in 1535, Cardano probably tried to find the solution himself, but without success. In 1539,
before his book was published, he asked Tartaglia for the solution, offering to include it in his forthcoming book under Tartaglia’s name. Tartaglia refused, on the ground that he wished to publish
his discovery himself. But when he visited Cardano in Milan in March 1539, he gave him the solution on the solemn promise that it would be kept secret. In 1542, however, Cardano learned that the
cubic equation had been solved several years before Tartaglia by Scipione Ferro, lector ad mathematicam at the University of Bologna from 1496 to 1526. During a visit to Bologna, Cardano and Ferrari
were shown Ferro’s work, in manuscript, by his pupil and successor Annibale dalla Nave. After this Cardano did not feel obliged to keep his promise.
Having learned the method of solving one type of cubic equation, Cardano and Ferrari were encouraged to extend their researches to other types of cubics and to the quartic. Ferrari found geometrical
demonstrations for Cardano’s formulas for solving x^3 + ax = bx^2 + c and x^3 + ax^2 = b; he also solved the quartic of the form x^4 + ax^2 + b = cx where a, b, c, are positive. The results were
embodied in Cardano’s Ars magna (1545). In it he attributed the discovery of the method of solving the equation x^3 + ax = b to Scipione Ferro and its rediscovery to Tartaglia. That this apparent
breach of secrecy angered Tartaglia is evident from book IX of his Quesiti et inventioni diverse (1546), where he recounted the circumstances in which he had made his discovery and Cardano’s attempts
to obtain the solution from him. He also gave a verbatim account of the conversation at their meeting in Milan, along with his comments.
Ferrari, loyal to his master and impetuous by nature, reacted quickly. In February 1547 he wrote to Tartaglia, protesting that the latter had unjustly and falsely made statements prejudicial to
Cardano. Having criticized the mathematical content of Tartaglia’s work and accused him of repetition and plagiarism, Ferrari challenged him to a public disputation in geometry, arithmetic, and
related disciplines. Scholarly disputations, common in those days, were often the means of testing the professional ability of the participants. Since both Ferrari and Tartaglia were engaged in the
public teaching of mathematics, a disputation was a serious matter. In his reply Tartaglia, while insisting that Cardano had not kept his promise, said that he had used injurious words in order to
provoke Cardano to write to him. He asked Ferrari to leave Cardano to fight his own battles; otherwise, Ferrari should admit that he was writing at Cardano’s instigation. Saying that he would accept
the challenge if Cardano at least countersigned Ferrari’s letter, Tartaglia went on to raise objections to the conditions of the proposed disputation—the subjects, the location, the amount of caution
money to be deposited, and the judges.
Twelve letters were exchanged, full of charges and insults, each party trying to justify his position. Tartaglia maintained that Cardano had broken his promise and that Ferrari was writing at
Cardano’s instance. Ferrari asserted that the solution of the cubic equation was known to both Scipione Ferro and Antonio Maria Fiore long before Tartaglia had discovered it and that it was
magnanimous of Cardano to mention Tartaglia in the Ars magna. He also denied that he was writing on Cardano’s behalf. In the course of this correspondence each party issued a series of thirty-one
problems for the other to solve. Tartaglia sent his problems in a letter dated 21 April 1547. The problems were no more difficult than those found in Pacioli’s Summa. On 24 May 1547 Ferrari replied
with thirty-one problems of his own but did not send the solutions to those set by Tartaglia. In his reply (July 1547) Tartaglia sent the solutions to twenty-six of Ferrari’s problems, leaving out
those which led to cubic equations; a month later he gave his reasons for not solving these five problems. In a letter dated October 1547 Ferrari replied, criticizing Tartaglia’s solutions and giving
his solutions to the problems set by the latter. Tartaglia, replying in June 1548, said he had not received Ferrari’s letter until January and that he was willing to go to Milan to take part in the
disputation. In July 1548 both parties confirmed their acceptance.
There is no record of what happened at the meeting except for scattered references in Tartaglia’s General trattato di numeri, et misure (1556–1560). The parties met on 10 August 1548 in the church of
Santa Maria del Giardino dei Minori Osservanti in the presence of a distinguished gathering that included Ferrante Gonzaga, governor of Milan, who had been named judge. Tartaglia says that he was not
given a chance to state his case properly. Arguments over a problem of Ferrari’s that Tartaglia had been unable to resolve lasted until suppertime, and everyone was obliged to leave. Tartaglia
departed the next day for Brescia, and Ferrari was probably declared the winner.
Ferrari’s method of solving the quartic equation x^4 + ax^2 + b = cx was set out by Cardano in the Ars magna. It consists of reducing the equation to a cubic. The discovery was made in the course of
solving a problem given to Cardano by Zuanne da Coi: "Divide 10 into three proportional parts so that the product of the first and second is 6." If the mean is x, it follows that x^4 + 6x^2 + 36 = 60x, or (x^2 + 6)^2 = 60x + 6x^2. This last equation can be put in the form
(x^2 + 6 + y)^2 = 6x^2 + 60x + y^2 + 12y + 2yx^2
(x^2 + 6 + y)^2 = (2y + 6)x^2 + 60x + (y^2 + 12y),
where y is a new unknown. If y is chosen so that the right-hand side of the equation is a perfect square, then y satisfies the condition
60^2 = 4(2y + 6)(y^2 + 12y),
which can be reduced to the cubic equation
y^3 + 15y^2 + 36y = 450.
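Once a value of y satisfying this cubic is found, the remaining step is short: the right-hand side (2y + 6)x^2 + 60x + (y^2 + 12y) is then a perfect square, namely (2y + 6)[x + 30/(2y + 6)]^2, so that x^2 + 6 + y = ±√(2y + 6) · [x + 30/(2y + 6)]. Each choice of sign is an ordinary quadratic in x, and the roots of these two quadratics are the roots of the original quartic.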
That Ferrari’s method of solution is applicable to all cases of the quartic equation was shown by Rafael Bombelli in his Algebra (1572).
I. Original Works. The letters exchanged by Ferrari and Tartaglia were printed, and copies were sent to several persons of influence in Italy. (A complete set of these letters is in the Department of
Printed Books of the British Museum.) They have been published by Enrico Giordani in I sei cartelli di matematica disfida, primamente intorno alla generale risoluzione delle equazioni cubiche, di
Lodovico Ferrari, coi sei contro-cartelli in risposta di Nicolò Tartaglia, comprendenti le soluzioni de’ quesiti dall’una e dall’altra parse proposti (Milan, 1876). Ferrari’s work on the cubic and
quartic equations is described in Cardano's Artis magnae, sive de regulis algebraicis (Nuremberg, 1545).
II. Secondary Literature. Cardano wrote a short biography of Ferrari, “Vita Ludovici Ferrarii Bononiensis,” in his Opera omnia (Lyons, 1663), IX, 568–569. References to Ferrari in Cardano’s other
works are cited in J. H. Morley, Life of Girolamo Cardano of Milan, Physician (London, 1854), I, 148–149, 187. The history of mathematics in sixteenth-century Italy is outlined in Ettore Bortolotti,
Storia della matematica nella Università di Bologna (Bologna, 1947), pp. 35–80. Arnaldo Masotti, “Sui cartelli di matematica disfida scambiati fra Lodovico Ferrari e Niccolò Tartaglia,” in Rendiconti
dell’Istituto lombardo di scienze e lettere, 94 (1960), 31–41, cites the important secondary literature on Ferrari.
S. A. Jayawardene
Ferrari, Ludovico | {"url":"https://www.encyclopedia.com/science/dictionaries-thesauruses-pictures-and-press-releases/ferrari-ludovico","timestamp":"2024-11-08T08:53:35Z","content_type":"text/html","content_length":"53325","record_id":"<urn:uuid:21fa15a1-c50f-406f-b0d2-51def066948f>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00660.warc.gz"} |
Definition of Cosecant. Meaning of Cosecant. Synonyms of Cosecant
- cosine, and the tangent. Their reciprocals are respectively the cosecant, the secant, and the cotangent, which are less used. Each of...
- (/ˈkɒθ, ˈkoʊθ/),
hyperbolic secant
"sech" (/ˈsɛtʃ, ˈʃɛk/),
hyperbolic cosecant
"csch" or "cosech" (/ˈkoʊsɛtʃ, ˈkoʊʃɛk/)
to the
- are the
of the sine, cosine, tangent, cotangent, secant, and
functions, and are used to
from any of the angle's trigonometric...
inverse cosecant
function. (Also
as arccsc.)
inverse cotangent
inverse cosecant
function. (Also written...
- and squared cosecant functions: the logarithmic derivative of the sine is the cotangent, whose derivative is the negative of the squared cosecant. The Weierstrass...
- A cosecant squared antenna, sometimes known as a constant height pattern, is a form of parabolic reflector used in some radar systems. It is shaped...
(2nd ed.). Springer. Ch. 33, "The secant sec(x) and cosecant csc(x) functions", §33.13, p. 336. doi:10.1007/978-0-387-48807-3. ISBN 978-0-387-48806-6...
inverse hyperbolic
inverse hyperbolic
inverse hyperbolic cosecant
inverse hyperbolic
secant, and
inverse hyperbolic
cotangent. They are...
- See below under Mnemonics. The reciprocals of these ratios are named the cosecant (csc), secant (sec), and cotangent (cot), respectively: csc A = 1/sin...
- For example, the multiplicative inverse 1/(sin x) = (sin x)−1 is the cosecant of x, and not the inverse sine of x denoted by sin−1 x or arcsin
x. The... | {"url":"https://www.wordaz.com/Cosecant.html","timestamp":"2024-11-03T00:03:06Z","content_type":"text/html","content_length":"11163","record_id":"<urn:uuid:7a394011-fe57-4729-b702-db7d934a247a>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00133.warc.gz"} |
Software Preservation Group
Up one level
F.E. Allen. A Technological Review of the FORTRAN I Compiler. Proceedings National Computer Conference, 1982, AFIPS Press, pages 805-809.
J.W. Backus and H. Herrick. IBM 701 Speedcoding and other automatic programming systems. In Proc. Symp. on Automatic Programming for Digital Computer, Washington DC, The Office of Naval Research,
May 1954, pp. 106-113. Computer History Museum Lot X2677.2004, Box 3 of 6, black 3-inch binder. Donated by J.A.N. Lee.
John W. Backus. The IBM 701 Speedcoding System. Journal of the ACM, Volume 1, Number 1 (January 1954), pages 4-6.
J.W. Backus, R.J. Beeber, S. Best, R. Goldberg, L.M. Haibt, H.L. Herrick, R.A. Nelson, D. Sayre, P.B. Sheridan, H.J. Stern, I. Ziller, R.A. Hughes, and R. Nutt, The FORTRAN automatic coding
system. Pages 188-198. In Proceedings Western Joint Computer Conference, Los Angeles, California, February 1957. Typeset reprint in original blue cover.
J.W. Backus. Automatic programming: properties and performance of FORTRAN systems I and II. Session 2, Paper 3, Proceedings Symposium on the Mechanisation of Thought Processes, Teddington,
Middlesex, England, The National Physical Laboratory, Nov. 1958, Her Majesty's Stationary Office (HMSO), pp. 232-255.
J.W. Backus and W.P. Heising. FORTRAN. IEEE Transactions on Electronic Computers, EC-13, Number 4, August 1964, pages 382-385.
J. W. Backus, The history of FORTRAN I, II and III. Proceedings First ACM SIGPLAN conference on History of programming languages, Los Angles, 1978.
J. W. Backus. Programming in the 1950's - some personal impressions. In A History of Computing in the Twentieth Century, N. Metropolis et al, Eds., Academic Press, New York, 1980, pages 125-135.
John Backus. The History of Fortran I, II, and III. In in: R. Wexelblat, editor. History of Programming Languages, ACM Monograph Series, Academic Press, 1981, pages 25-45.
John Backus. Transcript of presentation of talk: "The History of Fortran I, II, and III". In in: R. Wexelblat, editor. History of Programming Languages, ACM Monograph Series, Academic Press,
1981, pages 45-66.
George Ryckman. Transcript of discussant's remarks: "The History of Fortran I, II, and III". In in: R. Wexelblat, editor. History of Programming Languages, ACM Monograph Series, Academic Press,
1981, pages 66-68.
J.A.N. Lee. Transcript of question and answer session: "The History of Fortran I, II, and III". In in: R. Wexelblat, editor. History of Programming Languages, ACM Monograph Series, Academic
Press, 1981, pages 68-71.
J.A.N. Lee. Full text of all questions submitted: "The History of Fortran I, II, and III". In in: R. Wexelblat, editor. History of Programming Languages, ACM Monograph Series, Academic Press,
1981, pages 71-73.
J.A.N. Lee. Biography of John Backus: "The History of Fortran I, II, and III". In in: R. Wexelblat, editor. History of Programming Languages, ACM Monograph Series, Academic Press, 1981, page 74.
FORTRAN Comes to Westinghouse-Bettis, 1957 + letters
A.C. Glennie. Automatic Coding of an electronic computer. Photocopy of typewritten manuscript with handwritten corrections, 15 pages. First page has date "14/12/52" and handwritten annotation
"Lecture was delivered at University of Cambridge mid-February 1953". Computer History Museum Lot X2677.2004, Box 3 of 6, blue 3-inch binder (donated by J.A.N. Lee).
J.H. Laning Jr. and N. Zierler: A Program For Translation of Mathematical Equations for Whirlwind I, Engineering Memorandum E-364, Instrumentation Laboratory, Massachusetts Institute of
Technology, January 1954
P.B. Sheridan. The arithmetic translator-compiler of the IBM FORTRAN automatic coding system. Comm. ACM, Volume 2, Number 2, February 1959, pages 9-21.
John Van Gardner. Fortran and the Genesis of Project Intercept. Gardner was one of the IBM Customer Engineers who installed 704 serial 13 at Lockheed Aircraft in Marietta, GA in May 1956. This
memoir describes how in 1957 he debugged a hardware problem that had resulted in the Fortran compiler behaving in a nondeterministic manner.
Claire Stegmann. Pathfinder. [Interview of John Backus] THINK, IBM Corporation, July/August 1979, pages 18-27. | {"url":"http://www.softwarepreservation.net/projects/FORTRAN/paper","timestamp":"2024-11-11T20:00:03Z","content_type":"application/xhtml+xml","content_length":"64264","record_id":"<urn:uuid:1c14799b-be18-48c6-bc8f-3d53456f5238>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00665.warc.gz"} |
Inquiry Maths - Product of two terms
Product of two terms inquiry
Mathematical inquiry processes: Explore; generate examples and counter-examples; generalise and prove. Conceptual field of inquiry: Position-to-term rules; algebraic expressions.
The prompt was devised by Helen Hindle, head of the mathematics department at Park View School (Haringey, UK). It generalises from a student's observation made during the intersecting sequences
inquiry. The student proved the statement is true for the sequence generated from 6n + 1. Teachers could use the prompt as the start of a stand-alone inquiry or introduce it as a separate pathway
during the intersecting sequences inquiry.
The questions and observations that develop from the statement include:
• Is it true or false?
• If it is true, then is it true for some or all sequences?
• 3 x 5 = 15 and 3, 5 and 15 are in the sequence generated by 2n + 1.
• Does it only work with consecutive terms?
• Are there any sequences when you could use the sum of the terms?
• How could you show this is always true?
• What would happen if you multiplied terms from two sequences? Would the answer be in both sequences?
Students realise that most expressions do not generate arithmetic sequences for which the prompt is true. During exploration, they conclude that the prompt is true for any expression with a constant
of one - that is, ending with +1. It is also true for any expression in the form an + a, such as 2n + 2, 3n + 3 and so on.
Conjecture, counter-example and proof
George Marsden (a year 10 student at St. Andrew's School, Leatherhead, UK) proved his conjecture that the product of any two terms in the sequence given by the expression for the nth term 6n + 1 is
also a term in the same sequence. For example, the product of 7 and 13 (both terms in the sequence generated from 6n + 1) is 91, which is the fifteenth term in the sequence. The illustration below
shows how George proved his conjecture.
Using 3n - 1 as the expression for the nth term, we get 2, 5, 8, 11, 14. In a general form, we have:
3k - 1, 3(k + 1) - 1, 3(k + 2) - 1, 3(k + 3) - 1.
The product of, say, the second and fourth term is [3(k + 1) - 1][3(k + 3) - 1] = 9k² + 30k + 16. This can be written as 3(3k² + 10k) + 16, which is not in the form 3n - 1.
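A single numerical case makes the same point: 2 and 5 are both terms of the sequence 3n - 1, yet 2 × 5 = 10 does not appear in 2, 5, 8, 11, 14, ... because 10 is one more than a multiple of 3, not one less.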
The prompt is true for any expression of the nth term that has a constant of one (for example, 6n + 1). The general sequence starts:
ak + 1, a(k + 1) + 1, a(k + 2) + 1, a(k + 3) + 1.
Whichever two terms we choose, the expression for the product of the two will always end with +1, which corresponds to the general form of an + 1.
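For instance, taking any two terms ap + 1 and aq + 1, their product is (ap + 1)(aq + 1) = a(apq + p + q) + 1, which has exactly this form.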
The prompt is also true when the coefficient and constant in the expression are the same. The general sequence starts:
ak + a, a(k + 1) + a, a(k + 2) + a, a(k + 3) + a.
Whichever two terms we choose, the expression for the product of the two will always end with a term in a². This can then be rearranged to give a as the final term. The product of the first two
terms, for example, is a[(ak2 + 3ak + a) + a]. | {"url":"https://www.inquirymaths.org/home/algebra-prompts/product-of-two-terms","timestamp":"2024-11-07T07:07:03Z","content_type":"text/html","content_length":"172445","record_id":"<urn:uuid:d632f036-5655-4672-bbb6-5a131a0bd948>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00701.warc.gz"} |
How to Use IFERROR Then Blank in Excel
Using IFERROR with blank result in Excel allows you to handle errors in your formulas and return a blank cell instead of the default error messages, like "#N/A," "#VALUE!", or "#DIV/0!".
To use IFERROR with a blank result, follow these steps:
1. Open Excel and enter your data or formulas in the cells.
2. Click on the cell where you want to apply the IFERROR function.
3. Enter the IFERROR formula in the following format:
=IFERROR(original_formula, "")
Replace "original_formula" with the formula you want to handle the error for.
4. Press Enter to apply the formula.
Let's say you have a dataset in cells A1 to A5, and you want to calculate the average value. However, there might be cells with no data, which can cause an error in the AVERAGE function. In this
case, you can use the IFERROR function to return a blank cell if there is an error:
1. Click on cell B1 to apply the IFERROR formula.
2. Enter the following formula:
=IFERROR(AVERAGE(A1:A5), "")
3. Press Enter to apply the formula.
If there are any errors in the AVERAGE function (e.g., dividing by zero or non-numeric values), the IFERROR function will catch them and display a blank cell in B1. Otherwise, the calculated average
will be displayed.
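The same wrapper works for any formula that can produce an error. For example, a division whose denominator might be blank or zero (the cell references here are purely illustrative) can be written as:
=IFERROR(A1/B1, "")
With this formula the cell stays blank instead of showing #DIV/0! when B1 is empty or zero.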
Did you find this useful? | {"url":"https://sheetscheat.com/excel/how-to-use-iferror-then-blank-in-excel","timestamp":"2024-11-09T09:18:41Z","content_type":"text/html","content_length":"10723","record_id":"<urn:uuid:e502b69b-cf8c-4afc-8f7f-cd5d31390ab6>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00604.warc.gz"} |
Analysis of single-cell RNA-seq data using
The aim of this vignette is to introduce the basic steps involved in fitting GLM-PCA model to single-cell RNA-seq data using fastglmpca. (See Townes et al 2019 or Collins et al 2001 for a detailed
description of the GLM-PCA model.)
To begin, load the packages that are needed.
Set the seed so that the results can be reproduced.
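A minimal version of this setup, assuming the ggplot2 and cowplot packages used for the plots later in the vignette and an arbitrary choice of seed, would look like this:
library(fastglmpca)
library(ggplot2)
library(cowplot)
set.seed(1) # any fixed value works here; it only controls the random gene subset below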
The example data set
We will illustrate fastglmpca using a single-cell RNA-seq data set from Zheng et al (2017). These data are reference transcriptome profiles from 10 bead-enriched subpopulations of peripheral blood
mononuclear cells (PBMCs). The original data set is much larger; for this introduction, we have taken a subset of roughly 3,700 cells.
The data we will analyze are unique molecular identifier (UMI) counts. These data are stored as an \(n \times m\) sparse matrix, where \(n\) is the number of genes and \(m\) is the number of cells:
dim(pbmc_facs$counts)
# [1] 16791 3774
The UMI counts are “sparse”—that is, most of the counts are zero. Indeed, over 95% of the UMI counts are zero:
mean(pbmc_facs$counts > 0)
# [1] 0.04265257
For the purposes of this vignette only, we randomly subset the data further to reduce the running time:
counts <- pbmc_facs$counts
n <- nrow(counts)
rows <- sample(n,3000)
counts <- counts[rows,]
Now we have a 3,000 x 3,774 counts matrix:
dim(counts)
# [1] 3000 3774
Initializing and fitting the GLM-PCA model
Since no preprocessing of UMI counts is needed (e.g., a log-transformation), the first step is to initialize the model fit using init_glmpca_pois(). This function has many input arguments and
options, but here we will keep all settings at the defaults, and we set K, the rank of the matrix factorization, to 2:
fit0 <- init_glmpca_pois(counts,K = 2)
By default, init_glmpca_pois() adds a gene- (or row-) specific intercept and a fixed cell- (or column-) specific size factor. This is intended to mimic the defaults in glmpca. init_glmpca_pois() has
many other options which we do not demonstrate here.
Once we have initialized the model, we are ready to run the optimization algorithm to fit the model (i.e., estimate the model parameters). This is accomplished by a call to fit_glmpca_pois():
fit <- fit_glmpca_pois(counts,fit0 = fit0)
If you prefer not to wait for the model optimization (it may take several minutes to run), you are welcome to load the previously fitted model (which is the output from the fit_glmpca_pois call above):
fit <- pbmc_facs$fit
The return value of fit_glmpca_pois() resembles the output of svd() and similar functions, with a few other outputs giving additional information about the model:
# [1] "U" "V" "fixed_b_cols" "fixed_w_cols" "loglik"
# [6] "progress" "X" "B" "Z" "W"
# [11] "d"
In particular, the outputs named with capital letters can be combined to give a low-rank reconstruction of the counts matrix:
fitted_counts <- with(fit,
exp(tcrossprod(U %*% diag(d),V) +
tcrossprod(X,B) +
    tcrossprod(W,Z)))
Let’s compare (a random subset of) the reconstructed (“fitted”) counts versus the observed counts:
i <- sample(prod(dim(counts)),2e4)
pdat <- data.frame(obs = as.matrix(counts)[i],
fitted = fitted_counts[i])
ggplot(pdat,aes(x = obs,y = fitted)) +
geom_point() +
geom_abline(intercept = 0,slope = 1,color = "magenta",linetype = "dotted") +
labs(x = "observed count",y = "fitted count") +
theme_cowplot(font_size = 12)
The U and V outputs in particular are interesting because they give low-dimensional (in this case, 2-d) embeddings of the genes and cells, respectively. Let’s compare this 2-d embedding of the cells
with the provided cell-type labels:
celltype_colors <- c("forestgreen","dodgerblue","darkmagenta",
celltype <- as.character(pbmc_facs$samples$celltype)
celltype[celltype == "CD4+/CD25 T Reg" |
celltype == "CD4+ T Helper2" |
celltype == "CD8+/CD45RA+ Naive Cytotoxic" |
celltype == "CD4+/CD45RA+/CD25- Naive T" |
celltype == "CD4+/CD45RO+ Memory"] <- "T cell"
celltype <- factor(celltype)
pdat <- data.frame(celltype = celltype,
pc1 = fit$V[,1],
pc2 = fit$V[,2])
ggplot(pdat,aes(x = pc1,y = pc2,color = celltype)) +
geom_point() +
scale_color_manual(values = celltype_colors) +
theme_cowplot(font_size = 10) | {"url":"https://mirror.ibcp.fr/pub/CRAN/web/packages/fastglmpca/vignettes/intro_fastglmpca.html","timestamp":"2024-11-11T21:51:49Z","content_type":"text/html","content_length":"1048957","record_id":"<urn:uuid:956ea073-584e-4e1f-901c-b45c5479a585>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00105.warc.gz"} |
Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots
plotPartialDependence(RegressionMdl,Vars) computes and plots the partial dependence between the predictor variables listed in Vars and model predictions. In this syntax, the model predictions are the
responses predicted by using the regression model RegressionMdl, which contains predictor data.
• If you specify one variable in Vars, the function creates a line plot of the partial dependence against the variable.
• If you specify two variables in Vars, the function creates a surface plot of the partial dependence against the two variables.
plotPartialDependence(ClassificationMdl,Vars,Labels) computes and plots the partial dependence between the predictor variables listed in Vars and the scores for the classes specified in Labels by
using the classification model ClassificationMdl, which contains predictor data.
• If you specify one variable in Vars, the function creates a line plot of the partial dependence against the variable for each class in Labels.
• If you specify two variables in Vars, the function creates a surface plot of the partial dependence against the two variables. You must specify one class in Labels.
plotPartialDependence(___,Data) uses new predictor data Data. You can specify Data in addition to any of the input argument combinations in the previous syntaxes.
plotPartialDependence(fun,Vars,Data) computes and plots the partial dependence between the predictor variables listed in Vars and the outputs returned by the custom model fun, using the predictor
data Data.
• If you specify one variable in Vars, the function creates a line plot of the partial dependence against the variable for each column of the output returned by fun.
• If you specify two variables in Vars, the function creates a surface plot of the partial dependence against the two variables. When you specify two variables, fun must return a column vector or
you must specify which output column to use by setting the OutputColumns name-value argument.
plotPartialDependence(___,Name,Value) uses additional options specified by one or more name-value arguments. For example, if you specify "Conditional","absolute", the plotPartialDependence function
creates a figure including a PDP, a scatter plot of the selected predictor variable and predicted responses or scores, and an ICE plot for each observation.
ax = plotPartialDependence(___) returns the axes of the plot.
Create Partial Dependence Plot
Train a regression tree using the carsmall data set, and create a PDP that shows the relationship between a feature and the predicted responses in the trained regression tree.
Load the carsmall data set.
load carsmall
Specify Weight, Cylinders, and Horsepower as the predictor variables (X), and MPG as the response variable (Y).
X = [Weight,Cylinders,Horsepower];
Y = MPG;
Train a regression tree using X and Y.
Mdl = fitrtree(X,Y);
View a graphical display of the trained regression tree.
view(Mdl,"Mode","graph")
Create a PDP of the first predictor variable, Weight.
plotPartialDependence(Mdl,1)
The plotted line represents averaged partial relationships between Weight (labeled as x1) and MPG (labeled as Y) in the trained regression tree Mdl. The x-axis minor ticks represent the unique values
in x1.
The regression tree viewer shows that the first decision is whether x1 is smaller than 3085.5. The PDP also shows a large change near x1 = 3085.5. The tree viewer visualizes each decision at each
node based on predictor variables. You can find several nodes split based on the values of x1, but determining the dependence of Y on x1 is not easy. However, the plotPartialDependence plots average
predicted responses against x1, so you can clearly see the partial dependence of Y on x1.
The labels x1 and Y are the default values of the predictor names and the response name. You can modify these names by specifying the name-value arguments PredictorNames and ResponseName when you
train Mdl using fitrtree. You can also modify axis labels by using the xlabel and ylabel functions.
Create Partial Dependence Plot for Multiple Classes
Train a naive Bayes classification model with the fisheriris data set, and create a PDP that shows the relationship between the predictor variable and the predicted scores (posterior probabilities)
for multiple classes.
Load the fisheriris data set, which contains species (species) and measurements (meas) on sepal length, sepal width, petal length, and petal width for 150 iris specimens. The data set contains 50
specimens from each of three species: setosa, versicolor, and virginica.
load fisheriris
Train a naive Bayes classification model with species as the response and meas as predictors.
Mdl = fitcnb(meas,species);
Create a PDP of the scores predicted by Mdl for all three classes of species against the third predictor variable x3. Specify the class labels by using the ClassNames property of Mdl.
plotPartialDependence(Mdl,3,Mdl.ClassNames)
According to this model, the probability of virginica increases with x3. The probability of setosa is about 0.33, from where x3 is 0 to around 2.5, and then the probability drops to almost 0.
Create Individual Conditional Expectation Plots
Train a Gaussian process regression model using generated sample data where a response variable includes interactions between predictor variables. Then, create ICE plots that show the relationship
between a feature and the predicted responses for each observation.
Generate sample predictor data x1 and x2.
rng("default") % For reproducibility
n = 200;
x1 = rand(n,1)*2-1;
x2 = rand(n,1)*2-1;
Generate response values that include interactions between x1 and x2.
Y = x1-2*x1.*(x2>0)+0.1*rand(n,1);
Create a Gaussian process regression model using [x1 x2] and Y.
Mdl = fitrgp([x1 x2],Y);
Create a figure including a PDP (red line) for the first predictor x1, a scatter plot (circle markers) of x1 and predicted responses, and a set of ICE plots (gray lines) by specifying Conditional as "centered".
plotPartialDependence(Mdl,1,"Conditional","centered")
When Conditional is "centered", plotPartialDependence offsets plots so that all plots start from zero, which is helpful in examining the cumulative effect of the selected feature.
A PDP finds averaged relationships, so it does not reveal hidden dependencies especially when responses include interactions between features. However, the ICE plots clearly show two different
dependencies of responses on x1.
Use New Predictor Data for Partial Dependence Plot
Train an ensemble of classification models and create two PDPs, one using the training data set and the other using a new data set.
Load the census1994 data set, which contains US yearly salary data, categorized as <=50K or >50K, and several demographic variables.
load census1994
Extract a subset of variables to analyze from the tables adultdata and adulttest.
X = adultdata(:,["age","workClass","education_num","marital_status","race", ...
Xnew = adulttest(:,["age","workClass","education_num","marital_status","race", ...
Train an ensemble of classifiers with salary as the response and the remaining variables as predictors by using the function fitcensemble. For binary classification, fitcensemble aggregates 100
classification trees using the LogitBoost method.
Mdl = fitcensemble(X,"salary");
Inspect the class names in Mdl.
Mdl.ClassNames
ans = 2x1 categorical
     <=50K 
     >50K 
Create a partial dependence plot of the scores predicted by Mdl for the second class of salary (>50K) against the predictor age using the training data.
plotPartialDependence(Mdl,"age",">50K")
Create a PDP of the scores for class >50K against age using new predictor data from the table Xnew.
plotPartialDependence(Mdl,"age",">50K",Xnew)
The two plots show similar shapes for the partial dependence of the predicted score of high salary (>50K) on age. Both plots indicate that the predicted score of high salary rises fast until the age
of 30, then stays almost flat until the age of 60, and then drops fast. However, the plot based on the new data produces slightly higher scores for ages over 65.
Specify Model Using Function Handle
Create a PDP to analyze relationships between predictors and anomaly scores for an isolationForest object. You cannot pass an isolationForest object directly to the plotPartialDependence function.
Instead, define a custom function that returns anomaly scores for the object, and then pass the function to plotPartialDependence.
Load the 1994 census data stored in census1994.mat. The data set consists of demographic data from the US Census Bureau.
load census1994
census1994 contains the two data sets adultdata and adulttest.
Train an isolation forest model for adulttest. The function iforest returns an IsolationForest object.
rng("default") % For reproducibility
Mdl = iforest(adulttest);
Define the custom function myAnomalyScores, which returns anomaly scores computed by the isanomaly function of IsolationForest; the custom function definition appears at the end of this example.
Create a PDP of the anomaly scores against the variable age for the adulttest data set. plotPartialDependence accepts a custom model in the form of a function handle. The function represented by the
function handle must accept predictor data and return a column vector or matrix with one row for each observation. Specify the custom model as @(tbl)myAnomalyScores(Mdl,tbl) so that the custom
function uses the trained model Mdl and accepts predictor data.
plotPartialDependence(@(tbl)myAnomalyScores(Mdl,tbl),"age",adulttest)
ylabel("Anomaly Score")
Custom Function myAnomalyScores
function scores = myAnomalyScores(Mdl,tbl)
[~,scores] = isanomaly(Mdl,tbl);
end
Compare Importance of Predictor Variables
Train a regression ensemble using the carsmall data set, and create a PDP plot and ICE plots for each predictor variable using a new data set, carbig. Then, compare the figures to analyze the
importance of predictor variables. Also, compare the results with the estimates of predictor importance returned by the predictorImportance function.
Load the carsmall data set.
load carsmall
Specify Weight, Cylinders, Horsepower, and Model_Year as the predictor variables (X), and MPG as the response variable (Y).
X = [Weight,Cylinders,Horsepower,Model_Year];
Y = MPG;
Train a regression ensemble using X and Y.
Mdl = fitrensemble(X,Y, ...
"PredictorNames",["Weight","Cylinders","Horsepower","Model Year"], ...
Determine the importance of the predictor variables by using the plotPartialDependence and predictorImportance functions. The plotPartialDependence function visualizes the relationships between a
selected predictor and predicted responses. predictorImportance summarizes the importance of a predictor with a single value.
Create a figure including a PDP plot (red line) and ICE plots (gray lines) for each predictor by using plotPartialDependence and specifying "Conditional","absolute". Each figure also includes a
scatter plot (circle markers) of the selected predictor and predicted responses. Also, load the carbig data set and use it as new predictor data, Xnew. When you provide Xnew, the
plotPartialDependence function uses Xnew instead of the predictor data in Mdl.
load carbig
Xnew = [Weight,Cylinders,Horsepower,Model_Year];
t = tiledlayout(2,2,"TileSpacing","compact");
title(t,"Individual Conditional Expectation Plots")
for i = 1 : 4
    nexttile
    plotPartialDependence(Mdl,i,Xnew,"Conditional","absolute")
end
Compute estimates of predictor importance by using predictorImportance. This function sums changes in the mean squared error (MSE) due to splits on every predictor, and then divides the sum by the
number of branch nodes.
imp = predictorImportance(Mdl);
title("Predictor Importance Estimates")
ax = gca;
ax.XTickLabel = Mdl.PredictorNames;
The variable Weight has the most impact on MPG according to predictor importance. The PDP of Weight also shows that MPG has high partial dependence on Weight. The variable Cylinders has the least
impact on MPG according to predictor importance. The PDP of Cylinders also shows that MPG does not change much depending on Cylinders.
Compare Partial Dependence of Generalized Additive Model
Train a generalized additive model (GAM) with both linear and interaction terms for predictors. Then, create a PDP with both linear and interaction terms and a PDP with only linear terms. Specify
whether to include interaction terms when creating the PDPs.
Load the ionosphere data set. This data set has 34 predictors and 351 binary responses for radar returns, either bad ('b') or good ('g').
load ionosphere
Train a GAM using the predictors X and class labels Y. A recommended practice is to specify the class names. Specify to include the 10 most important interaction terms.
Mdl = fitcgam(X,Y,"ClassNames",{'b','g'},"Interactions",10);
Mdl is a ClassificationGAM model object.
List the interaction terms in Mdl.
Mdl.Interactions
ans = 10×2
Each row of Interactions represents one interaction term and contains the column indexes of the predictor variables for the interaction term.
Find the most frequent predictor in the interaction terms.
The most frequent predictor in the interaction terms is the 5th predictor (x5). Create PDPs for the 5th predictor. To exclude interaction terms from the computation, specify
"IncludeInteractions",false for the second PDP.
hold on
grid on
legend("Linear and interaction terms","Linear terms only")
title("PDPs of Posterior Probabilities for 5th Predictor")
hold off
The plot shows that the partial dependence of the scores (posterior probabilities) on x5 varies depending on whether the model includes the interaction terms, especially where x5 is between 0.2 and
Extract Partial Dependence Estimates from Plots
Train a support vector machine (SVM) regression model using the carsmall data set, and create a PDP for two predictor variables. Then, extract partial dependence estimates from the output of
plotPartialDependence. Alternatively, you can get the partial dependence values by using the partialDependence function.
Load the carsmall data set.
Specify Weight, Cylinders, Displacement, and Horsepower as the predictor variables (Tbl).
Tbl = table(Weight,Cylinders,Displacement,Horsepower);
Construct an SVM regression model using Tbl and the response variable MPG. Use a Gaussian kernel function with an automatic kernel scale.
Mdl = fitrsvm(Tbl,MPG,"ResponseName","MPG", ...
"CategoricalPredictors","Cylinders","Standardize",true, ...
Create a PDP that visualizes partial dependence of predicted responses (MPG) on the predictor variables Weight and Cylinders. Specify query points to compute the partial dependence for Weight by
using the QueryPoints name-value argument. You cannot specify the QueryPoints value for Cylinders because it is a categorical variable. plotPartialDependence uses all categorical values.
pt = linspace(min(Weight),max(Weight),50)';
ax = plotPartialDependence(Mdl,["Weight","Cylinders"],"QueryPoints",{pt,[]});
view(140,30) % Modify the viewing angle
The PDP shows an interaction effect between Weight and Cylinders. The partial dependence of MPG on Weight changes depending on the value of Cylinders.
Extract the estimated partial dependence of MPG on Weight and Cylinders. The XData, YData, and ZData values of ax.Children are x-axis values (the first selected predictor values), y-axis values (the
second selected predictor values), and z-axis values (the corresponding partial dependence values), respectively.
xval = ax.Children.XData;
yval = ax.Children.YData;
zval = ax.Children.ZData;
Alternatively, you can get the partial dependence values by using the partialDependence function.
[pd,x,y] = partialDependence(Mdl,["Weight","Cylinders"],"QueryPoints",{pt,[]});
pd contains the partial dependence values for the query points x and y.
If you specify Conditional as "absolute", plotPartialDependence creates a figure including a PDP, a scatter plot, and a set of ICE plots. ax.Children(1) and ax.Children(2) correspond to the PDP and
scatter plot, respectively. The remaining elements of ax.Children correspond to the ICE plots. The XData and YData values of ax.Children(i) are x-axis values (the selected predictor values) and
y-axis values (the corresponding partial dependence values), respectively.
Input Arguments
Vars — Predictor variables
vector of positive integers | character vector | string scalar | string array | cell array of character vectors
Predictor variables, specified as a vector of positive integers, character vector, string scalar, string array, or cell array of character vectors. You can specify one or two predictor variables, as
shown in the following tables.
One Predictor Variable
Value Description
positive integer Index value corresponding to the column of the predictor data.
character vector or string scalar Name of the predictor variable. The name must match the entry in the PredictorNames property for RegressionMdl and ClassificationMdl or the variable name of Data in a table for a custom model fun.
Two Predictor Variables
Value Description
vector of two positive integers Index values corresponding to the columns of the predictor data.
string array or cell array of character vectors Names of the predictor variables. Each element in the array is the name of a predictor variable. The names must match the entries in the PredictorNames property for RegressionMdl and ClassificationMdl or the variable names of Data in a table for a custom model fun.
If you specify two predictor variables, you must specify one class in Labels for ClassificationMdl or specify one output column in OutputColumns for a custom model fun.
Example: ["x1","x3"]
Data Types: single | double | char | string | cell
Labels — Class labels
categorical array | character array | logical vector | numeric vector | cell array of character vectors
Class labels, specified as a categorical or character array, logical or numeric vector, or cell array of character vectors. The values and data types in Labels must match those of the class names in
the ClassNames property of ClassificationMdl (ClassificationMdl.ClassNames).
• You can specify multiple class labels only when you specify one variable in Vars and specify Conditional as "none" (default).
• Use partialDependence if you want to compute the partial dependence for two variables and multiple class labels in one function call.
This argument is valid only when you specify a classification model object ClassificationMdl.
Example: ["red","blue"]
Example: ClassificationMdl.ClassNames([1 3]) specifies Labels as the first and third classes in ClassificationMdl.
Data Types: single | double | logical | char | cell | categorical
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but
the order of the pairs does not matter.
Before R2021a, use commas to separate each name and value, and enclose Name in quotes.
Example: plotPartialDependence(Mdl,Vars,Data,"NumObservationsToSample",100,"UseParallel",true) creates a PDP by using 100 sampled observations in Data and executing for-loop iterations in parallel.
Conditional — Plot type
"none" (default) | "absolute" | "centered"
Plot type, specified as "none", "absolute", or "centered".
Value Description
"none" plotPartialDependence creates a PDP. The plot type depends on the number of predictor variables specified in Vars.
• One predictor variable — plotPartialDependence creates a 2-D line plot of the partial dependence. If you provide a classification model (ClassificationMdl), the function creates a line plot for each class label specified in Labels. If you provide a custom model (fun), the function creates a line plot for each column of the output returned by fun. You can specify which output columns to use by setting the OutputColumns name-value argument.
• Two predictor variables — plotPartialDependence creates a surface plot of partial dependence against the two variables. For a classification model, you must specify one class label in Labels. For a custom model, you must provide a model that returns a column vector or specify which output column to use by setting the OutputColumns name-value argument.
"absolute" plotPartialDependence creates a figure that includes three types of plots:
• PDP with a red line
• Scatter plot of the selected predictor variable and predicted responses or scores with circle markers
• ICE plot for each observation with a gray line
To use the "absolute" option, you must specify one predictor variable in Vars. In addition, for a classification model, you must specify one class label in Labels. For a custom model, you must provide a model that returns a column vector or specify which output column to use by setting the OutputColumns name-value argument.
"centered" plotPartialDependence creates a figure that includes the same three types of plots as "absolute". The function offsets plots so that all plots start from zero.
To use the "centered" option, you must specify one predictor variable in Vars. In addition, for a classification model, you must specify one class label in Labels. For a custom model, you must provide a model that returns a column vector or specify which output column to use by setting the OutputColumns name-value argument.
Example: "Conditional","absolute"
NumObservationsToSample — Number of observations to sample
number of total observations (default) | positive integer
Number of observations to sample, specified as a positive integer. The default value is the number of total observations in Data or the model (RegressionMdl or ClassificationMdl). If you specify a
value larger than the number of total observations, then plotPartialDependence uses all observations.
plotPartialDependence samples observations without replacement by using the datasample function and uses the sampled observations to compute partial dependence.
plotPartialDependence displays minor tick marks at the unique values of the sampled observations.
If you specify Conditional as either "absolute" or "centered", plotPartialDependence creates a figure including an ICE plot for each sampled observation.
Example: "NumObservationsToSample",100
Data Types: single | double
Parent — Axes in which to plot
gca (default) | axes object
Axes in which to plot, specified as an axes object. If you do not specify the axes and if the current axes are Cartesian, then plotPartialDependence uses the current axes (gca). If axes do not exist,
plotPartialDependence plots in a new figure.
Example: "Parent",ax
QueryPoints — Points to compute partial dependence
numeric column vector | numeric two-column matrix | cell array of two numeric column vectors
Points to compute partial dependence for numeric predictors, specified as a numeric column vector, a numeric two-column matrix, or a cell array of two numeric column vectors.
• If you select one predictor variable in Vars, use a numeric column vector.
• If you select two predictor variables in Vars:
□ Use a numeric two-column matrix to specify the same number of points for each predictor variable.
□ Use a cell array of two numeric column vectors to specify a different number of points for each predictor variable.
The default value is a numeric column vector or a numeric two-column matrix, depending on the number of selected predictor variables. Each column contains 100 evenly spaced points between the minimum
and maximum values of the sampled observations for the corresponding predictor variable.
If Conditional is "absolute" or "centered", then the software adds the predictor data values (Data or predictor data in RegressionMdl or ClassificationMdl) of the selected predictors to the query points.
You cannot modify QueryPoints for a categorical variable. The plotPartialDependence function uses all categorical values in the selected variable.
If you select one numeric variable and one categorical variable, you can specify QueryPoints for a numeric variable by using a cell array consisting of a numeric column vector and an empty array.
Example: "QueryPoints",{pt,[]}
Data Types: single | double | cell
OutputColumns — Output columns of custom model
"all" (default) | vector of positive integers | logical vector
Output columns of the custom model fun to use for the partial dependence computation, specified as one of the values in this table.
Value Description
Vector of positive integers Each entry in the vector is an index value indicating that plotPartialDependence uses the corresponding output column for the partial dependence computation. The index values are between 1 and q, where q is the number of columns in the output matrix returned by the custom model fun.
Logical vector A true entry means that plotPartialDependence uses the corresponding output column for the partial dependence computation. The length of the vector is q.
"all" plotPartialDependence uses all output columns for the partial dependence computation.
• You can specify multiple output columns only when you specify one variable in Vars and specify Conditional as "none" (default).
• Use partialDependence if you want to compute the partial dependence for two variables and multiple output columns in one function call.
This argument is valid only when you specify a custom model by using fun.
Example: "OutputColumns",[1 2]
Data Types: single | double | logical | char | string
PredictionForMissingValue — Predicted response value to use for observations with missing predictor values
"median" (default) | "mean" | numeric scalar
Since R2024a
Predicted response value to use for observations with missing predictor values, specified as "median", "mean", or a numeric scalar.
Value Description
"median" plotPartialDependence uses the median of the observed response values in the training data as the predicted response value for observations with missing predictor values.
"mean" plotPartialDependence uses the mean of the observed response values in the training data as the predicted response value for observations with missing predictor values.
Numeric scalar plotPartialDependence uses this value as the predicted response value for observations with missing predictor values. If you specify NaN, then plotPartialDependence omits observations with missing predictor values from partial dependence computations and plots.
If an observation has a missing value in a Vars predictor variable, then plotPartialDependence does not use the observation in partial dependence computations and plots.
If the Conditional value is "absolute" or "centered", then the value of PredictionForMissingValue determines the predicted response value for query points with new categorical predictor values (that
is, categories not used in training RegressionMdl).
This name-value argument is valid only for these types of regression models: Gaussian process regression, kernel, linear, neural network, and support vector machine. That is, you can specify this
argument only when RegressionMdl is a RegressionGP, CompactRegressionGP, RegressionKernel, RegressionLinear, RegressionNeuralNetwork, CompactRegressionNeuralNetwork, RegressionSVM, or
CompactRegressionSVM object.
Example: "PredictionForMissingValue","mean"
Example: "PredictionForMissingValue",NaN
Data Types: single | double | char | string
Output Arguments
ax — Axes of the plot
axes object
Axes of the plot, returned as an axes object. For details on how to modify the appearance of the axes and extract data from plots, see Axes Appearance and Extract Partial Dependence Estimates from Plots.
More About
Partial Dependence for Regression Models
Individual Conditional Expectation for Regression Models
An individual conditional expectation (ICE) [2], as an extension of partial dependence, represents the relationship between a predictor variable and the predicted responses for each observation.
While partial dependence shows the averaged relationship between predictor variables and predicted responses, a set of ICE plots disaggregates the averaged information and shows an individual
dependence for each observation.
plotPartialDependence creates an ICE plot for each observation. A set of ICE plots is useful to investigate heterogeneities of partial dependence originating from different observations.
plotPartialDependence can also create ICE plots with any predictor data provided through the input argument Data. You can use this feature to explore predicted response space.
Consider an ICE plot for a selected predictor variable $x_S$ with a given observation $X_i^C$, where $X^S = \{x_S\}$, $X^C$ is the complementary set of $X^S$ in the whole variable set $X$, and $X_i = (X_i^S, X_i^C)$ is the $i$th observation. The ICE plot corresponds to the summand of the summation in Equation 1:
plotPartialDependence plots $f^S_i(X^S)$ for each observation $i$ when you specify Conditional as "absolute". If you specify Conditional as "centered", plotPartialDependence draws all plots after removing level effects due to different observations:
This subtraction ensures that each plot starts from zero, so that you can examine the cumulative effect of $X^S$ and the interactions between $X^S$ and $X^C$.
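Neither Equation 1 nor the centered-ICE equation survives in this extract. As a point of reference only (these are standard definitions, not text quoted from the MATLAB documentation, and the choice of centering point is an assumption), the quantities involved are commonly written as

\[
f^{S}(X^{S}) = \frac{1}{N}\sum_{i=1}^{N} f\!\left(X^{S}, X_{i}^{C}\right),
\qquad
f^{S}_{i}(X^{S}) = f\!\left(X^{S}, X_{i}^{C}\right),
\qquad
\tilde{f}^{S}_{i}(X^{S}) = f^{S}_{i}(X^{S}) - f^{S}_{i}\!\left(x^{S}_{\min}\right),
\]

where $f$ is the trained model, $N$ is the number of (sampled) observations, $f^{S}_{i}$ is the ICE curve for observation $i$, $\tilde{f}^{S}_{i}$ is its centered version, and $x^{S}_{\min}$ denotes the smallest query point of the selected variable. The PDP is the average of the ICE curves.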
Partial Dependence and ICE for Classification Models
In the case of classification models, plotPartialDependence computes the partial dependence and individual conditional expectation in the same way as for regression models, with one exception:
instead of using the predicted responses from the model, the function uses the predicted scores for the classes specified in Labels.
Weighted Traversal Algorithm
For both a regression model (RegressionMdl) and a classification model (ClassificationMdl), plotPartialDependence uses a predict function to predict responses or scores. plotPartialDependence chooses
the proper predict function according to the model and runs predict with its default settings. For details about each predict function, see the predict functions in the following two tables. If the
specified model is a tree-based model (not including a boosted ensemble of trees) and Conditional is "none", then plotPartialDependence uses the weighted traversal algorithm instead of the predict
function. For details, see Weighted Traversal Algorithm.
Classification Model Object
Model Type Full or Compact Classification Model Object Function to Predict Labels and Scores
Discriminant analysis classifier ClassificationDiscriminant, CompactClassificationDiscriminant predict
Multiclass model for support vector machines or other classifiers ClassificationECOC, CompactClassificationECOC predict
Ensemble of learners for classification ClassificationEnsemble, CompactClassificationEnsemble, ClassificationBaggedEnsemble predict
Gaussian kernel classification model using random feature expansion ClassificationKernel predict
Generalized additive model ClassificationGAM, CompactClassificationGAM predict
k-nearest neighbor model ClassificationKNN predict
Linear classification model ClassificationLinear predict
Naive Bayes model ClassificationNaiveBayes, CompactClassificationNaiveBayes predict
Neural network classifier ClassificationNeuralNetwork, CompactClassificationNeuralNetwork predict
Support vector machine for one-class and binary classification ClassificationSVM, CompactClassificationSVM predict
Binary decision tree for multiclass classification ClassificationTree, CompactClassificationTree predict
Bagged ensemble of decision trees TreeBagger, CompactTreeBagger predict
Alternative Functionality
• partialDependence computes partial dependence without visualization. The function can compute partial dependence for two variables and multiple classes in one function call.
[3] Hastie, Trevor, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. New York, NY: Springer New York, 2001.
Extended Capabilities
Automatic Parallel Support
Accelerate code by automatically running computation in parallel using Parallel Computing Toolbox™.
To run in parallel, set the UseParallel name-value argument to true in the call to this function.
For more general information about parallel computing, see Run MATLAB Functions with Automatic Parallel Support (Parallel Computing Toolbox).
GPU Arrays
Accelerate code by running on a graphics processing unit (GPU) using Parallel Computing Toolbox™.
Usage notes and limitations:
• This function fully supports GPU arrays for the following regression and classification models:
• This function supports GPU arrays with limitations for the regression and classification models described in this table.
Full or Compact Model Object Limitations
ClassificationECOC or CompactClassificationECOC Binary learners are subject to limitations depending on type:
• Ensemble learners have the same limitations as ClassificationEnsemble.
• KNN learners have the same limitations as ClassificationKNN.
• SVM learners have the same limitations as ClassificationSVM.
• Tree learners have the same limitations as ClassificationTree.
ClassificationEnsemble, CompactClassificationEnsemble, RegressionEnsemble, or CompactRegressionEnsemble Weak learners are subject to limitations depending on type:
• KNN learners have the same limitations as ClassificationKNN.
• Tree learners have the same limitations as ClassificationTree.
• Discriminant learners are not supported.
ClassificationKNN Models trained using the Kd-tree nearest neighbor search method, function handle distance metrics, or tie inclusion are not supported.
ClassificationSVM or CompactClassificationSVM One-class classification is not supported.
ClassificationTree, CompactClassificationTree, RegressionTree, or CompactRegressionTree Surrogate splits are not supported for decision trees.
• This function fully supports GPU arrays for a custom function if the custom function supports GPU arrays.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2017b
R2024b: Specify GPU arrays for neural network models (requires Parallel Computing Toolbox)
R2024a: Support for observations with missing predictor values
If RegressionMdl is a Gaussian process regression, kernel, linear, neural network, or support vector machine model, you can now use observations with missing predictor values in partial dependence
computations and plots. Specify the PredictionForMissingValue name-value argument.
• A value of "median" is consistent with the behavior in R2023b.
• A value of NaN is consistent with the behavior in R2023a, where the regression models do not support using observations with missing predictor values for prediction.
R2024a: Specify GPU arrays for RegressionLinear models
plotPartialDependence fully supports GPU arrays for RegressionLinear models.
R2024a: GPU array support for ClassificationLinear
Starting in R2024a, plotPartialDependence fully supports GPU arrays for ClassificationLinear models.
R2023a: GPU array support for RegressionSVM and CompactRegressionSVM models
Starting in R2023a, plotPartialDependence fully supports GPU arrays for RegressionSVM and CompactRegressionSVM models. | {"url":"https://es.mathworks.com/help/stats/regressiontree.plotpartialdependence.html","timestamp":"2024-11-06T18:17:13Z","content_type":"text/html","content_length":"299432","record_id":"<urn:uuid:971dd0d6-bdfd-4510-a840-4bac4c96850b>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00597.warc.gz"} |
Collisions of small drops in a turbulent flow. Part III: Relative droplet fluxes and swept volumes
Swept volumes of cloud droplets with radii below 20 μm are calculated under conditions typical of atmospheric cloud turbulence characterized by enormous values of Reynolds numbers, high turbulent
intermittency, and characteristic values of the dissipation rate. To perform the calculations, the motion equation for small droplets proposed by Maxey is generalized for Stokes numbers St > 0.1,
which allows one to simulate relative droplet motion even for very high turbulence intensities typical of deep cumulus clouds. Analytical considerations show that droplet motion is fully determined
by turbulent shears and the Lagrangian accelerations. A new statistical representation of a turbulent flow has been proposed based on the results of the scale analysis of turbulence characteristics
and those related to the droplet motion. According to the method proposed, statistical properties of turbulent flow are represented by a set of noncorrelated samples of turbulent shears and
Lagrangian accelerations. Each sample can be assigned to a certain point of the turbulent flow. Each such point can be surrounded by a small "elementary" volume with linear length scales of the
Kolmogorov length scale, in which the Lagrangian acceleration and turbulent shears can be considered as uniform in space and invariable in time. This present study (Part III) investigates the droplet
collisions in a turbulent flow when hydrodynamic droplet interaction (HDI) is disregarded. Using a statistical model, long series of turbulent shears and accelerations were generated, reproducing
probability distribution functions (PDF) at high Reynolds numbers, as they were obtained in recent laboratory and theoretical studies. Swept volumes of droplets are calculated for each sample of an
acceleration-shear pair, and the PDF of swept volumes is calculated for turbulent parameters typical of cloud turbulence. The effect of turbulent flow intermittency manifests itself in two aspects:
1) an increase of swept volume variance with increasing Reynolds number, and 2) formation of the swept volume PDF that has a sharp maximum and an elongated tail. In spite of the fact that the
magnitude of the mean swept volume increases significantly with Reynolds number and the dissipation rate, this increase does not exceed ∼60% of pure gravity values even under turbulent conditions
typical of strong cumulus clouds. A comparison with the classical results of Saffman and Turner is presented and discussed.
{"url":"https://cris.huji.ac.il/en/publications/collisions-of-small-drops-in-a-turbulent-flow-part-iii-relative-d","timestamp":"2024-11-05T03:10:02Z","content_type":"text/html","content_length":"55957","record_id":"<urn:uuid:7c9588fe-233d-43d1-a853-57d8c68e2934>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00336.warc.gz"}
Printable 20 To Positive And Negative Number Line - Printable JD
If you’re a math teacher, you’re probably wondering how to make a number line positive or negative. Here are a few rules to remember and a printable 20 to positive and negative number line. And if
you want to keep things interesting, consider printing multiple copies of the same chart. There are many benefits to both, and both have their own purposes. Read on to learn how to make your number
line more interesting.
How Do You Make a Number Line Positive And Negative?
If you’ve ever had to learn the difference between positive and negative numbers, you may want to learn how to make a number line with negative numbers. These lines are a great way to introduce the
concept of positive and negative numbers, as well as make basic math operations easier. These printables are available in US Letter paper size and are landscape oriented. They’re also free! Hopefully
this information has been useful.
To make a number line, you’ll need a piece of paper and a pen. You can trace the line on scrap paper to see how many numbers are on it. You’ll want to label each place so that you know which is the
positive and negative end. Then, you’ll be able to point the numbers as you move along the line. Afterwards, you’ll have the correct answer!
What Are The Rules For Positive and Negative Integers?
On a number line, positive numbers sit to the right of zero and negative numbers sit to the left. Adding a negative number moves you to the left, which is the same as subtracting its absolute value; for example, 7 + (-3) = 4. Subtracting a negative number moves you to the right, which is the same as adding its absolute value, so 7 - (-3) = 10. When you write these expressions, use parentheses to keep the operation sign separate from the negative sign. Once you understand these rules, you can apply them to all of your calculations.
Learning these rules also helps you compare the values of integers: any positive integer is greater than any negative integer. You can place fractions on the number line as well; just remember that negative numbers are always written with a negative sign. Practice working with negative integers before relying on them in longer calculations.
Printable 20 To Positive And Negative Number Line
A printable 20 to positive and negative number line is an excellent way to introduce kids to the concept of negative numbers. This number line is suitable for elementary school students and helps
reinforce this important topic inside and outside of the classroom. You can download these templates in Microsoft Word and PDF formats. You can also print them out for your own personal use. Here are
some tips to get the most out of your printable 20 to positive and negative number line.
A number line may look like a simple straight line, but it can help teach a wide variety of math concepts. Students can practice counting, comparing, and interpreting positive and negative numbers,
skip counting, and line plots. These worksheets are designed to be flexible and are suitable for all grade levels. You can choose from a wide variety of number line templates, ranging from beginner
to advanced students. | {"url":"https://printablejd.com/printable-20-to-positive-and-negative-number-line_83496/","timestamp":"2024-11-08T05:34:11Z","content_type":"text/html","content_length":"100861","record_id":"<urn:uuid:0f6c6571-c98b-4183-a9c4-b87fa6a97a73>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00390.warc.gz"} |
In theoretical computer science a bisimulation is a binary relation between state transition systems, associating systems that behave in the same way in that one system simulates the other and vice
Intuitively two systems are bisimilar if they, assuming we view them as playing a game according to some rules, match each other's moves. In this sense, each of the systems cannot be distinguished
from the other by an observer.
Formal definition
Given a labeled state transition system (S, Λ, →), where S is a set of states, ${\displaystyle \Lambda }$ is a set of labels and → is a set of labelled transitions (i.e., a subset of ${\displaystyle
S\times \Lambda \times S}$ ), a bisimulation is a binary relation ${\displaystyle R\subseteq S\times S}$ , such that both R and its converse ${\displaystyle R^{T}}$ are simulations. From this follows
that the symmetric closure of a bisimulation is a bisimulation, and that each symmetric simulation is a bisimulation. Thus some authors define bisimulation as a symmetric simulation.^[1]
Equivalently, R is a bisimulation if and only if for every pair of states ${\displaystyle (p,q)}$ in R and all labels λ in ${\displaystyle \Lambda }$ :
• if ${\displaystyle p\mathrel {\overset {\lambda }{\rightarrow }} p'}$ , then there is ${\displaystyle q\mathrel {\overset {\lambda }{\rightarrow }} q'}$ such that ${\displaystyle (p',q')\in R}$ ;
• if ${\displaystyle q\mathrel {\overset {\lambda }{\rightarrow }} q'}$ , then there is ${\displaystyle p\mathrel {\overset {\lambda }{\rightarrow }} p'}$ such that ${\displaystyle (p',q')\in R}$ .
Given two states p and q in S, p is bisimilar to q, written ${\displaystyle p\,\sim \,q}$ , if and only if there is a bisimulation R such that ${\displaystyle (p,q)\in R}$ . This means that the
bisimilarity relation ∼ is the union of all bisimulations: ${\displaystyle (p,q)\in \,\sim \,}$ precisely when ${\displaystyle (p,q)\in R}$ for some bisimulation R.
The set of bisimulations is closed under union;^[Note 1] therefore, the bisimilarity relation is itself a bisimulation. Since it is the union of all bisimulations, it is the unique largest
bisimulation. Bisimulations are also closed under reflexive, symmetric, and transitive closure; therefore, the largest bisimulation must be reflexive, symmetric, and transitive. From this follows
that the largest bisimulation—bisimilarity—is an equivalence relation.^[2]
Alternative definitions
Relational definition
Bisimulation can be defined in terms of composition of relations as follows.
Given a labelled state transition system ${\displaystyle (S,\Lambda ,\rightarrow )}$ , a bisimulation relation is a binary relation R over S (i.e., R ⊆ S × S) such that ${\displaystyle \forall \lambda \in \Lambda }$
${\displaystyle R\ ;\ {\overset {\lambda }{\rightarrow }}\quad {\subseteq }\quad {\overset {\lambda }{\rightarrow }}\ ;\ R}$ and ${\displaystyle R^{-1}\ ;\ {\overset {\lambda }{\rightarrow }}\quad {\subseteq }\quad {\overset {\lambda }{\rightarrow }}\ ;\ R^{-1}}$
From the monotonicity and continuity of relation composition, it follows immediately that the set of bisimulations is closed under unions (joins in the poset of relations), and a simple algebraic
calculation shows that the relation of bisimilarity—the join of all bisimulations—is an equivalence relation. This definition, and the associated treatment of bisimilarity, can be interpreted in any
involutive quantale.
Fixpoint definition
Bisimilarity can also be defined in order-theoretical fashion, in terms of fixpoint theory, more precisely as the greatest fixed point of a certain function defined below.
Given a labelled state transition system (${\displaystyle S}$ , Λ, →), define ${\displaystyle F:{\mathcal {P}}(S\times S)\to {\mathcal {P}}(S\times S)}$ to be a function from binary relations over ${\displaystyle S}$ to binary relations over ${\displaystyle S}$ , as follows:
Let ${\displaystyle R}$ be any binary relation over ${\displaystyle S}$ . ${\displaystyle F(R)}$ is defined to be the set of all pairs ${\displaystyle (p,q)}$ in ${\displaystyle S}$ × ${\displaystyle S}$ such that:
${\displaystyle \forall \lambda \in \Lambda .\,\forall p'\in S.\,p{\overset {\lambda }{\rightarrow }}p'\,\Rightarrow \,\exists q'\in S.\,q{\overset {\lambda }{\rightarrow }}q'\,{\textrm {and}}\,(p',q')\in R}$ and ${\displaystyle \forall \lambda \in \Lambda .\,\forall q'\in S.\,q{\overset {\lambda }{\rightarrow }}q'\,\Rightarrow \,\exists p'\in S.\,p{\overset {\lambda }{\rightarrow }}p'\,{\textrm {and}}\,(p',q')\in R}$
Bisimilarity is then defined to be the greatest fixed point of ${\displaystyle F}$ .
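To make the fixpoint reading concrete: on a finite system one can start from the full relation ${\displaystyle S\times S}$ and apply ${\displaystyle F}$ repeatedly; because ${\displaystyle F}$ is monotone, the iteration stabilises at the greatest fixed point, which is bisimilarity. The sketch below is illustrative only (it is not from the article, and the small example system at the end is made up); Python is used purely for readability.

# Naive greatest-fixpoint computation of bisimilarity on a finite labelled
# transition system.  "transitions" is a set of (p, label, q) triples.
def bisimilarity(states, transitions):
    def moves(p):
        return {(a, q) for (s, a, q) in transitions if s == p}

    # Start from the full relation S x S and shrink it with F until stable.
    R = {(p, q) for p in states for q in states}
    while True:
        new_R = set()
        for (p, q) in R:
            forth = all(any(b == a and (p2, q2) in R for (b, q2) in moves(q))
                        for (a, p2) in moves(p))
            back = all(any(b == a and (p2, q2) in R for (b, p2) in moves(p))
                       for (a, q2) in moves(q))
            if forth and back:
                new_R.add((p, q))
        if new_R == R:          # greatest fixed point reached
            return R
        R = new_R

# Example: a one-state loop on "a" is bisimilar to a two-state "a"-cycle.
states = {"p", "q", "r"}
transitions = {("p", "a", "p"), ("q", "a", "r"), ("r", "a", "q")}
print(("p", "q") in bisimilarity(states, transitions))  # True

This loop only mirrors the definition; practical tools (such as those listed under External links) use more efficient partition-refinement algorithms instead.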
Ehrenfeucht–Fraïssé game definition
Bisimulation can also be thought of in terms of a game between two players: attacker and defender.
"Attacker" goes first and may choose any valid transition, ${\displaystyle \lambda }$ , from ${\displaystyle (p,q)}$ . That is, ${\displaystyle (p,q){\overset {\lambda }{\rightarrow }}(p',q)}$ or ${\
displaystyle (p,q){\overset {\lambda }{\rightarrow }}(p,q')}$
The "Defender" must then attempt to match that transition, ${\displaystyle \lambda }$ from either ${\displaystyle (p',q)}$ or ${\displaystyle (p,q')}$ depending on the attacker's move. I.e., they
must find an ${\displaystyle \lambda }$ such that: ${\displaystyle (p',q){\overset {\lambda }{\rightarrow }}(p',q')}$ or ${\displaystyle (p,q'){\overset {\lambda }{\rightarrow }}(p',q')}$
Attacker and defender continue to take alternating turns until:
• The defender is unable to find any valid transitions to match the attacker's move. In this case the attacker wins.
• The game reaches states ${\displaystyle (p,q)}$ that are both 'dead' (i.e., there are no transitions from either state) In this case the defender wins
• The game goes on forever, in which case the defender wins.
• The game reaches states ${\displaystyle (p,q)}$ , which have already been visited. This is equivalent to an infinite play and counts as a win for the defender.
By the above definition the system is a bisimulation if and only if there exists a winning strategy for the defender.
Coalgebraic definition
A bisimulation for state transition systems is a special case of coalgebraic bisimulation for the type of covariant powerset functor. Note that every state transition system ${\displaystyle (S,\Lambda ,\rightarrow )}$ can be mapped bijectively to a function ${\displaystyle \xi _{\rightarrow }}$ from ${\displaystyle S}$ to the powerset of ${\displaystyle S}$ indexed by ${\displaystyle \Lambda }$ written as ${\displaystyle {\mathcal {P}}(\Lambda \times S)}$ , defined by ${\displaystyle p\mapsto \{(\lambda ,q)\in \Lambda \times S:p{\overset {\lambda }{\rightarrow }}q\}.}$
Let ${\displaystyle \pi _{i}\colon S\times S\to S}$ be ${\displaystyle i}$ -th projection, mapping ${\displaystyle (p,q)}$ to ${\displaystyle p}$ and ${\displaystyle q}$ respectively for ${\displaystyle i=1,2}$ ; and ${\displaystyle {\mathcal {P}}(\Lambda \times \pi _{1})}$ the forward image of ${\displaystyle \pi _{1}}$ defined by dropping the third component ${\displaystyle P\mapsto \{(\lambda ,p)\in \Lambda \times S:\exists q.(\lambda ,p,q)\in P\}}$ where ${\displaystyle P}$ is a subset of ${\displaystyle \Lambda \times S\times S}$ . Similarly for ${\displaystyle {\mathcal {P}}(\Lambda \times \pi _{2})}$ .
Using the above notations, a relation ${\displaystyle R\subseteq S\times S}$ is a bisimulation on a transition system ${\displaystyle (S,\Lambda ,\rightarrow )}$ if and only if there exists a
transition system ${\displaystyle \gamma \colon R\to {\mathcal {P}}(\Lambda \times R)}$ on the relation ${\displaystyle R}$ such that the diagram
commutes, i.e. for ${\displaystyle i=1,2}$ , the equations ${\displaystyle \xi _{\rightarrow }\circ \pi _{i}={\mathcal {P}}(\Lambda \times \pi _{i})\circ \gamma }$ hold where ${\displaystyle \xi _{\rightarrow }}$ is the functional representation of ${\displaystyle (S,\Lambda ,\rightarrow )}$ .
Variants of bisimulation
In special contexts the notion of bisimulation is sometimes refined by adding additional requirements or constraints. An example is that of stutter bisimulation, in which one transition of one system
may be matched with multiple transitions of the other, provided that the intermediate states are equivalent to the starting state ("stutters").^[3]
A different variant applies if the state transition system includes a notion of silent (or internal) action, often denoted with ${\displaystyle \tau }$ , i.e. actions that are not visible by external
observers, then bisimulation can be relaxed to be weak bisimulation, in which if two states ${\displaystyle p}$ and ${\displaystyle q}$ are bisimilar and there is some number of internal actions
leading from ${\displaystyle p}$ to some state ${\displaystyle p'}$ then there must exist state ${\displaystyle q'}$ such that there is some number (possibly zero) of internal actions leading from ${\displaystyle q}$ to ${\displaystyle q'}$ . A relation ${\displaystyle {\mathcal {R}}}$ on processes is a weak bisimulation if the following holds (with ${\displaystyle {\mathcal {S}}\in \{{\mathcal {R}},{\mathcal {R}}^{-1}\}}$ , and ${\displaystyle a,\tau }$ being an observable and mute transition respectively):
${\displaystyle \forall p,q.\quad (p,q)\in {\mathcal {S}}\Rightarrow p{\stackrel {\tau }{\rightarrow }}p'\Rightarrow \exists q'.\quad q{\stackrel {\tau ^{\ast }}{\rightarrow }}q'\wedge (p',q')\in {\mathcal {S}}}$
${\displaystyle \forall p,q.\quad (p,q)\in {\mathcal {S}}\Rightarrow p{\stackrel {a}{\rightarrow }}p'\Rightarrow \exists q'.\quad q{\stackrel {\tau ^{\ast }a\tau ^{\ast }}{\rightarrow }}q'\wedge (p',q')\in {\mathcal {S}}}$
This is closely related to the notion of bisimulation "up to" a relation.^[4]
Typically, if the state transition system gives the operational semantics of a programming language, then the precise definition of bisimulation will be specific to the restrictions of the
programming language. Therefore, in general, there may be more than one kind of bisimulation (respectively bisimilarity) relationship depending on the context.
Bisimulation and modal logic
See also
1. ^ Meaning the union of two bisimulations is a bisimulation.
Further reading
• Park, David (1981). "Concurrency and Automata on Infinite Sequences". In Deussen, Peter (ed.). Theoretical Computer Science. Proceedings of the 5th GI-Conference, Karlsruhe. Lecture Notes in
Computer Science. Vol. 104. Springer-Verlag. pp. 167–183. doi:10.1007/BFb0017309. ISBN 978-3-540-10576-3.
• Milner, Robin (1989). Communication and Concurrency. Prentice Hall. ISBN 0-13-114984-9.
• Sangiorgi, Davide (2011). An introduction to Bisimulation and Coinduction. Cambridge, UK: Cambridge University Press. ISBN 9781107003637. OCLC 773040572.
External links
• CADP: tools to minimize and compare finite-state systems according to various bisimulations
• mCRL2: tools to minimize and compare finite-state systems according to various bisimulations
• The Bisimulation Game Game | {"url":"https://www.knowpia.com/knowpedia/Bisimulation","timestamp":"2024-11-07T19:22:55Z","content_type":"text/html","content_length":"219602","record_id":"<urn:uuid:b3c7a8f3-13bc-41f6-84a3-f6a684fb1428>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00039.warc.gz"} |
Sarah Chang is the owner of a small electronics company
Question # 00508598 Posted By: Updated on: 04/07/2017 03:30 AM Due on: 04/07/2017
Sarah Chang is the owner of a small electronics company. In six months, a proposal is due for an electronic timing system for the next Olympic
Games. For several years, Chang's company has been developing a new microprocessor, a critical component in a timing system that would be superior to any product currently on the market. However,
progress in research and development has been slow, and Chang is unsure whether her staff can produce the microprocessor in time. If they succeed in developing the microprocessor (probability p1),
there is an excellent chance (probability p2) that Chang’s company will win the $1 million Olympic contract. If they do not, there is a small chance (probability p3) that she will still be able to
win the same contract with an alternative but inferior timing system that has already been developed.
If she continues the project, Chang must invest $200,000 in research and development. In addition,
making a proposal (which she will decide whether to do after seeing whether the R&D is successful) requires developing a prototype timing system at an additional cost. This additional cost is $50,000
if R&D is successful (so that she can develop the new timing system), and it is $40,000 if R&D is unsuccessful (so that she needs to go with the older timing system). Finally, if Chang wins the
contract, the finished product will cost an additional $150,000 to produce.
a. Develop a decision tree that can be used to solve Chang's problem. You can assume in this part of the problem that she is using EMV (of her net profit) as a decision criterion. Build the tree so
that she can enter any values for p1, p2, and p3 (in input cells) and automatically see her optimal EMV and optimal strategy from the tree.
b. If p2 = 0.8 and p3 = 0.1, what value of p1 makes Chang indifferent between abandoning the project and going ahead with it?
c. How much would Chang benefit if she knew for certain that the Olympic organization would guarantee her the contract? (This guarantee would be in force only if she were successful in developing the product.) Assume p1 = 0.4, p2 = 0.8, and p3 = 0.1.
d. Suppose now that this is a relatively big project for Chang. Therefore, she decides to use expected utility as her criterion, with an exponential utility function. Using some trial and error, see which risk tolerance changes her initial decision from "go ahead" to "abandon" when p1 = 0.4, p2 = 0.8, and p3 = 0.1.
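If you want to sanity-check a tree numerically, the following sketch rolls back expected monetary values under one possible reading of the payoffs (R&D cost, prototype costs, contract revenue, and production cost as stated above). It is an illustrative aid, not an official solution; in particular, the utility part fixes the downstream decisions to "propose in both branches" rather than re-optimizing them under the utility function, and all function names are made up for this sketch.

import math

def emv_continue(p1, p2, p3,
                 rd_cost=200_000, proto_new=50_000, proto_old=40_000,
                 contract=1_000_000, production=150_000):
    margin = contract - production               # net revenue if the contract is won
    # After observing the R&D outcome, propose only if the proposal's EMV is nonnegative.
    propose_if_success = max(0, -proto_new + p2 * margin)
    propose_if_failure = max(0, -proto_old + p3 * margin)
    return -rd_cost + p1 * propose_if_success + (1 - p1) * propose_if_failure

def expected_utility_go_ahead(p1, p2, p3, R):
    # Exponential utility U(x) = 1 - exp(-x / R); strategy fixed to "propose in both branches",
    # with the default dollar amounts above baked into the four terminal payoffs.
    outcomes = [( 600_000, p1 * p2),              # R&D succeeds, contract won
                (-250_000, p1 * (1 - p2)),        # R&D succeeds, contract lost
                ( 610_000, (1 - p1) * p3),        # R&D fails, contract won with old system
                (-240_000, (1 - p1) * (1 - p3))]  # R&D fails, contract lost
    return sum(prob * (1 - math.exp(-x / R)) for x, prob in outcomes)

print(emv_continue(0.4, 0.8, 0.1))   # 79000.0, so "go ahead" beats abandoning (EMV 0)
# For part d, abandoning has utility U(0) = 0; decrease R until expected_utility_go_ahead(...) < 0.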
{"url":"https://www.homeworkminutes.com/q/sarah-chang-is-the-owner-of-a-small-electronics-company-508598/","timestamp":"2024-11-09T07:38:17Z","content_type":"text/html","content_length":"51390","record_id":"<urn:uuid:0ee46f3d-6425-4bfa-8441-442b723146c9>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00787.warc.gz"}
7.5: Direct and Inverse Variation
We start with the definition of the phrase “is proportional to.”
We say that \(y\) is proportional to \(x\) if and only if
\[y = kx \nonumber \]
where \(k\) is a constant called the constant of proportionality. The phrase “\(y\) varies directly as \(x\)” is an equivalent way of saying “\(y\) is proportional to \(x\).”
Here are a few examples that translate the phrase “is proportional to.”
• Given that \(d\) is proportional to \(t\), we write \(d = kt\), where \(k\) is a constant.
• Given that \(y\) is proportional to the cube of \(x\), we write \(y = kx^3\), where \(k\) is a constant.
• Given that \(s\) is proportional to the square of \(t\), we write \(s = kt^2\), where \(k\) is a constant.
We are not restricted to always using the letter \(k\) for our constant of proportionality.
Example \(\PageIndex{1}\)
Given that \(y\) is proportional to \(x\) and the fact that \(y = 12\) when \(x = 5\), determine the constant of proportionality, then determine the value of \(y\) when \(x = 10\).
Given the fact the \(y\) is proportional to \(x\), we know immediately that \[y = kx \nonumber \]where \(k\) is the proportionality constant. Because we are given that \(y = 12\) when \(x = 5\), we
can substitute \(12\) for \(y\) and \(5\) for \(x\) to determine \(k\).
\[\begin{array}{rl}{y=k x} & \color {Red} {y \text { is proportional to } x} \\ {12=k(5)} & \color {Red} {\text { Substitute } 12 \text { for } y, 5 \text { for } x} \\ {\dfrac{12}{5}=k} & \color
{Red} {\text { Divide both sides by } 5}\end{array} \nonumber \]
Next, substitute the constant of proportionality \(12/5\) for \(k\) in \(y = kx\), then substitute \(10\) for \(x\) to determine \(y\) when \(x = 10\).
\[\begin{array}{ll}{y=\dfrac{12}{5} x} & \color {Red} {\text { Substitute } 12 / 5 \text { for } k} \\ {y=\dfrac{12}{5}(10)} &\color {Red} {\text { Substitute } 10 \text { for } x} \\ {y=24} & \color
{Red} {\text { Cancel and simplify. }}\end{array} \nonumber \]
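If you like to check this kind of computation numerically, the short sketch below (illustrative only, not part of the original text; the function name is made up) solves for the constant of proportionality from the known pair and then evaluates the new value.

def direct_variation(x_known, y_known, x_new):
    k = y_known / x_known        # constant of proportionality
    return k * x_new

print(direct_variation(5, 12, 10))   # 24.0, matching Example 1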
Exercise \(\PageIndex{1}\)
Given that \(y\) is proportional to \(x\) and that \(y = 21\) when \(x = 9\), determine the value of \(y\) when \(x = 27\).
Example \(\PageIndex{2}\)
A ball is dropped from a balloon floating above the surface of the earth. The distance \(s\) the ball falls is proportional to the square of the time \(t\) that has passed since the ball’s release. If
the ball falls \(144\) feet during the first \(3\) seconds, how far does the ball fall in \(9\) seconds?
Given the fact the \(s\) is proportional to the square of \(t\), we know immediately that
\[s=k t^{2} \nonumber \]
where \(k\) is the proportionality constant. Because we are given that the ball falls \(144\) feet during the first \(3\) seconds, we can substitute \(144\) for \(s\) and \(3\) for \(t\) to determine
the constant of proportionality.
\[\begin{array}{rl}{s=k t^{2}} & \color {Red} {s \text { is proportional to the square of } t} \\ {144=k(3)^{2}} & \color {Red} {\text { Substitute } 144 \text { for } s, 3 \text { for } t} \\ {144=9
k} & \color {Red} {\text { Simplify: } 3^{2}=9} \\ {16=k} & \color {Red} {\text { Divide both sides by } 9}\end{array} \nonumber \]
Next, substitute the constant of proportionality \(16\) for \(k\) in \(s = kt^2\), and then substitute \(9\) for \(t\) to determine the distance fallen when \(t = 9\) seconds.
\[\begin{array}{ll}{s=16 t^{2}} & \color {Red} {\text { Substitute } 16 \text { for } k} \\ {s=16(9)^{2}} & \color {Red} {\text { Substitute } 9 \text { for } t} \\ {s=1296} & \color {Red} {\text {
Simplify }}\end{array} \nonumber \]
Thus, the ball falls \(1,296\) feet during the first \(9\) seconds.
Exercise \(\PageIndex{2}\)
A ball is dropped from the edge of a cliff on a certain planet. The distance \(s\) the ball falls is proportional to the square of the time \(t\) that has passed since the ball’s release. If the ball
falls \(50\) feet during the first \(5\) seconds, how far does the ball fall in \(8\) seconds?
\(128\) feet
Example \(\PageIndex{3}\)
Tony and Paul are hanging weights on a spring in the physics lab. Each time a weight is hung, they measure the distance the spring stretches. They discover that the distance \(y\) that the spring
stretches is proportional to the weight hung on the spring (Hooke’s Law). If a \(0.5\) pound weight stretches the spring \(3\) inches, how far will a \(0.75\) pound weight stretch the spring?
Let \(W\) represent the weight hung on the spring. Let \(y\) represent the distance the spring stretches. We’re told that the distance y the spring stretches is proportional to the amount of weight \
(W\) hung on the spring. Hence, we can write:
\[y=k W \quad \color {Red} y \text { is proportional to } W \nonumber \]
Substitute \(3\) for \(y\), \(0.5\) for \(W\), then solve fork.
\[\begin{array}{rlrl}{3} & {=k(0.5)} & {} & \color {Red} {\text { Substitute } 3 \text { for } y, 0.5 \text { for } W} \\ {\dfrac{3}{0.5}} & {=k} & {} & \color {Red} {\text { Divide both sides by }
0.5} \\ {k} & {=6} & {} & \color {Red} {\text { Simplify. }}\end{array} \nonumber \]
Substitute \(6\) for \(k\) in \(y = kW\) to produce:
\[y=6 W \quad \color {Red} \text { Substitute } 6 \text { for } k \text { in } y=k W \nonumber \]
To determine the distance the spring will stretch when \(0.75\) pounds are hung on the spring, substitute \(0.75\) for \(W\).
\[\begin{array}{ll}{y=6(0.75)} & \color {Red} {\text { Substitute } 0.75 \text { for } W} \\ {y=4.5} & \color {Red} {\text { Simplify. }}\end{array} \nonumber \]
Thus, the spring will stretch \(4.5\) inches.
Exercise \(\PageIndex{3}\)
If a \(0.75\) pound weight stretches a spring \(5\) inches, how far will a \(1.2\) pound weight stretch the spring?
\(8\) inches
Inversely Proportional
In Examples \(\PageIndex{1}\), \(\PageIndex{2}\), and \(\PageIndex{3}\), where one quantity was proportional to a second quantity, you may have noticed that when one quantity increased, the second
quantity also increased. Vice-versa, when one quantity decreased, the second quantity also decreased.
However, not all real-world situations follow this pattern. There are times when as one quantity increases, the related quantity decreases. For example, consider the situation where you increase the
number of workers on a job and note that the time to finish the job decreases. This is an example of a quantity being inversely proportional to a second quantity.
Inversely proportional
We say the \(y\) is inversely proportional to \(x\) if and only if\[y=\dfrac{k}{x} \nonumber \]where \(k\) is a constant called the constant of proportionality. The phrase “\(y\) varies inversely as
\(x\)” is an equivalent way of saying “\(y\) in inversely proportional to \(x\).”
Here are a few examples that translate the phrase “is inversely proportional to.”
• Given that \(d\) is inversely proportional to \(t\), we write \(d = k/t\), where \(k\) is a constant.
• Given that \(y\) is inversely proportional to the cube of \(x\), we write \(y = k/x^3\), where \(k\) is a constant.
• Given that \(s\) is inversely proportional to the square of \(t\), we write \(s = k/t^2\), where \(k\) is a constant.
We are not restricted to always using the letter \(k\) for our constant of proportionality.
Example \(\PageIndex{4}\)
Given that \(y\) is inversely proportional to \(x\) and the fact that \(y = 4\) when \(x = 2\), determine the constant of proportionality, then determine the value of \(y\) when \(x = 4\).
Given the fact the \(y\) is inversely proportional to \(x\), we know immediately that\[y=\dfrac{k}{x} \nonumber \]where \(k\) is the proportionality constant. Because we are given that \(y = 4\) when
\(x = 2\), we can substitute \(4\) for \(y\) and \(2\) for \(x\) to determine \(k\).
\[\begin{align*} y &= \dfrac{k}{x} \quad \color {Red} y \text { is inversely proportional to } x.\\ 4 &= \dfrac{k}{2} \quad \color {Red} \text {Substitute }4 \text { for } y, 2 \text { for }x.\\ 8 &=
k \quad \color {Red} \text {Multiply both sides by } 2. \end{align*} \nonumber\]
Substitute \(8\) for \(k\) in \(y = k/x\), then substitute \(4\) for \(x\) to determine \(y\) when \(x = 4\).
\[\begin{align*} y &= \dfrac{8}{x} \quad \color {Red} \text {Substitute } 8 \text { for } k.\\ y &= \dfrac{8}{4} \quad \color {Red} \text {Substitute }4 \text { for } x.\\ y &= 2 \quad \color {Red} \text {Reduce.} \end{align*} \nonumber\]
Note that as \(x\) increased from \(2\) to \(4\), \(y\) decreased from \(4\) to \(2\).
Exercise \(\PageIndex{4}\)
Given that \(y\) is inversely proportional to \(x\) and that \(y = 5\) when \(x = 8\), determine the value of \(y\) when \(x = 10\).
Example \(\PageIndex{5}\)
The intensity \(I\) of light is inversely proportional to the square of the distance \(d\) from the light source. If the light intensity \(5\) feet from the light source is \(3\) foot-candles, what
is the intensity of the light \(15\) feet from the light source?
Given the fact that the intensity \(I\) of the light is inversely proportional to the square of the distance d from the light source, we know immediately that\[I = \dfrac{k}{d^2} \nonumber \]where \
(k\) is the proportionality constant. Because we are given that the intensity is \(I = 3\) foot-candles at \(d = 5\) feet from the light source, we can substitute \(3\) for \(I\) and \(5\) for \(d\)
to determine \(k\).
\[\begin{align*} I &= \dfrac{k}{d^2} \quad \color {Red} I \text { is inversely proportional to } d^2 .\\ 3 &= \dfrac{k}{5^2} \quad \color {Red} \text {Substitute }3 \text { for } I, 5 \text { for }
d.\\ 3 &= \dfrac{k}{25} \quad \color {Red} \text {Simplify.}\\ 75 &= k \quad \color {Red} \text {Multiply both sides by } 25. \end{align*} \nonumber\]
Substitute \(75\) for \(k\) in \(I = k/d^2\), then substitute \(15\) for \(d\) to determine \(I\) when \(d = 15\).
\[\begin{align*} I &= \dfrac{75}{d^2} \quad \color {Red} \text {Substitute }75 \text { for } k.\\ I &= \dfrac{75}{15^2} \quad \color {Red} \text {Substitute }15 \text { for } d.\\ I &= \dfrac{75}
{225} \quad \color {Red} \text {Simplify.}\\ I &= \dfrac{1}{3} \quad \color {Red} \text {Reduce.} \end{align*} \nonumber\]
Thus, the intensity of the light \(15\) feet from the light source is \(1/3\) foot-candle.
Exercise \(\PageIndex{5}\)
If the light intensity \(4\) feet from a light source is \(2\) foot-candles, what is the intensity of the light \(8\) feet from the light source?
\(1/2\) foot-candle
Example \(\PageIndex{6}\)
Suppose that the price per person for a camping experience is inversely proportional to the number of people who sign up for the experience. If \(10\) people sign up, the price per person is \($350
\). What will be the price per person if \(50\) people sign up?
Let \(p\) represent the price per person and let \(N\) be the number of people who sign up for the camping experience. Because we are told that the price per person is inversely proportional to the
number of people who sign up for the camping experience, we can write:
\[p = \dfrac{k}{N} \nonumber \]
where \(k\) is the proportionality constant. Because we are given that the price per person is \($350\) when \(10\) people sign up, we can substitute \(350\) for \(p\) and \(10\) for \(N\) to
determine \(k\).
\[\begin{align*} p &= \dfrac{k}{N} \quad \color {Red} p \text { is inversely proportional to }N.\\ 350 &= \dfrac{k}{10} \quad \color {Red} \text {Substitute }350 \text { for } p, 10 \text { for } N.\
\ 3500 &= k \quad \color {Red} \text {Multiply both sides by } 10. \end{align*} \nonumber\]
Substitute \(3500\) for \(k\) in \(p = k/N\), then substitute \(50\) for \(N\) to determine \(p\) when \(N = 50\).
\[\begin{align*} p &= \dfrac{3500}{N} \quad \color {Red} \text {Substitute }3500 \text { for } k.\\ p &= \dfrac{3500}{50} \quad \color {Red} \text {Substitute }50 \text { for } N.\\ p &= 70 \quad \
color {Red} \text {Simplify.} \end{align*} \nonumber\]
Thus, the price per person is \($70\) if \(50\) people sign up for the camping experience.
Exercise \(\PageIndex{6}\)
Suppose that the price per person for a tour is inversely proportional to the number of people who sign up for the tour. If \(8\) people sign up, the price per person is \($70\). What will be the
price per person if \(20\) people sign up? | {"url":"https://math.libretexts.org/Bookshelves/Algebra/Elementary_Algebra_(Arnold)/07%3A_Rational_Expressions/7.05%3A_Direct_and_Inverse_Variation","timestamp":"2024-11-14T01:59:21Z","content_type":"text/html","content_length":"138776","record_id":"<urn:uuid:3c3ad325-7774-49e9-8315-51c8bcbddc8d>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00054.warc.gz"} |
Maharashtra Board Class 12 Chemistry Important Questions Chapter 5 Electrochemistry
Question 1.
What is electrochemistry?
Electrochemistry: It is the branch of physical chemistry that involves the study of the inter-relation between chemical changes and electrical energy and is also concerned with the electrical
properties of electrolytic solutions such as resistance and conductance.
Question 2.
What is electric conduction?
The transfer of electric charge or electrons from one point to another is called electric conduction which results in an electric current.
Question 3.
What are the electric conductors?
The substances that allow the flow of electricity or electric charge transfer through them are called electric conductors.
Question 4.
What is a flow of electricity or a transfer of electric charge?
The flow of electricity or a transfer of electric charge through a conductor involves the transfer of electrons from one point to the other point. This takes place under the influence of applied
electric potential.
Question 5.
What are the types of electric conductors? On what basis are they classified?
The electric conductors are classified according to the mechanism of the transfer of electrons or charge. There are two types of conductors as follows :
(i) Electronic (or metallic) conductors : The electric conductors through which the conduction of electricity takes place by a direct flow of electrons under the influence of applied potential are
called electronic conductors.
In this case, there is no transfer of matter like atoms or ions. For example, solid and molten metals such as Al, Cu, etc.
(ii) Electrolytic conductors : The conductors in which the conduction of electricity takes place by the migration of positive ions (cations) and negative ions (anions) of the electrolyte are called
electrolytic conductors. In this, the conduction involves the transfer of matter and it is accompanied with chemical changes. For example, solutions of electrolytes (strong and weak), molten salts.
Question 6.
Distinguish between electronic and electrolytic conductors.
Electronic conductors:
1. The flow of electricity takes place by direct flow of electrons through the conductor.
2. The conduction does not involve the transfer of matter.
3. No chemical change is involved during conduction.
4. The resistance of the conductor increases and conductivity decreases with the increase in temperature.
5. The conductance of metallic conductors is very high.
6. Examples are solid or molten metals, such as Al, Cu, etc.
Electrolytic conductors:
1. The electron transfer takes place by the migration of ions (cations and anions) of the electrolyte.
2. The conduction involves the transfer of matter.
3. Chemical changes are always involved during the passage of an electric current.
4. The resistance decreases and the conductivity increases with the increase in temperature.
5. The conductance of the electrolytes is comparatively low.
6. Examples are aqueous solutions of acids, bases or salts.
Question 7.
What information is provided by measurement of conductivities of solutions?
• The conducting and nonconducting properties of solutions can be identified by the measurement of their conductivities.
• The substances like sucrose and urea which do not dissociate in aqueous solutions have same conductivity as that of water. Hence they are nonelectrolytes.
• The substances like KCl, CH[3]COOH, NaOH, etc. dissociate in their aqueous solutions and their conductivities are higher than water. Hence they are electrolytes.
• On the basis of high or low electrical conductivity, the electrolytes can be classified as strong and weak electrolytes. The solutions of strong electrolytes have high conductivities while
solutions of weak electrolytes have lower conductivities.
Question 8.
What is Ohm’s law?
Ohm’s law : According to Ohm’s law, the electrical resistance R of a conductor is equal to the electric potential difference, V divided by the electric current, I.
R = \(\frac{V}{I}\) ohm
Question 9.
What are SI units of
(a) electrical resistance
(b) potential and
(c) electric current?
(a) The SI unit of electrical resistance is Ohm denoted by Ω (omega).
(b) The SI unit of potential is volt denoted by V.
(c) The SI unit of electric current is ampere denoted by A.
Question 10.
How is electrical conductance of a solution denoted ? What are its units ?
The electrical conductance of a solution is denoted by G and it is the reciprocal of resistance, R.
G = \(\frac{1}{R}\)
The unit of G is siemens denoted by S or Ω^-1.
Hence we can write, S = Ω^-1 = A V^-1 = C V^-1 s^-1, where A is ampere, C is coulomb and s is second.
Question 11.
What is electrical conductance? What are its units ?
The reciprocal of the electrical resistance of a solution is called the conductance. It is represented by G.
∴ Conductance (G) = \(\frac{1}{\text { Resistance }}=\frac{1}{\mathrm{R}}\)
The conductance has units of reciprocal of ohm (Ω^-1, ohm^-1 or mho). In SI units, conductance has the unit siemens (S). (1 S = 1 Ω^-1 = 1 ohm^-1 = 1 mho = A V^-1 = C V^-1 s^-1, where C represents electric charge in coulomb, A represents current strength in ampere and s represents time in second.)
Question 12.
What is specific conductance or conductivity?
The reciprocal of specific resistance or resistivity is called specific conductance or conductivity.
If ρ is the resistivity then,
conductivity = \(\frac{1}{\text { resistivity }}=\frac{1}{\rho}\)
Conductivity is denoted by κ (kappa), where κ = \(\frac{1}{\rho}\)
It is the conductance of a conductor that is 1 m in length and 1 m^2 in cross section area in SI units. (In C.G.S. units, it is the resistance of a conductor that is 1 cm in length and 1 cm^2 in
cross section area.) It is the conductance of a conductor of volume 1 m^3 (or in C.G.S. units, the volume of 1 cm^3).
Question 13.
What are the units of specific conductance or conductivity?
If ρ is the resistivity and κ is the conductivity or specific conductance, then κ = \(\frac{1}{\rho}\). Since the SI unit of resistivity is Ω m, the SI units of κ are Ω^-1 m^-1 or S m^-1 (where S is siemens).
(In the C.G.S. system, the units of κ are Ω^-1 cm^-1 or S cm^-1, which are commonly used.)
Question 14.
Define molar conductivity. What is the significance of it ?
Molar conductivity: It is defined as a conductance of a volume of the solution containing ions from one mole of an electrolyte when placed between two parallel plate electrodes 1 cm apart and of
large area, sufficient to accommodate the whole solution between them, at constant temperature. It is denoted by ∧[m].
Thus, the significance of molar conductivity is the conductance due to ions from one mole of an electrolyte.
Question 15.
Obtain a relation between conductivity (κ) and molar conductivity (∧[m]).
Conductivity or specific conductance (κ) is the conductance of 1 cm^3 of the solution in C.G.S. units, while molar conductivity is the conductance of a solution containing one mole of an electrolyte.
Consider C molar solution, i.e., C moles of an electrolyte present in 1 litre or 1000 cm^3 of the solution.
∴ C moles of an electrolyte are present in 1000 cm^3 solution.
∴ 1 mole of an electrolyte is present in \(\frac{1000}{\mathrm{C}}\) cm^3 of the solution.
∴ Conductance of 1 cm^3 of this solution is κ,
∴ Conductance of \(\frac{1000}{\mathrm{C}}\) cm^3 of the solution is \(\frac{\kappa \times 1000}{C}\)
This represents molar conductivity, ∧[m].
∴ ∧[m] = \(\frac{\kappa \times 1000}{C}\) Ω^-1 cm^2 mol^-1 (in C.G.S. units)
[In case of SI units :
Consider a solution in which C moles of an electrolyte are present in 1 m^3 of solution.
Conductivity κ is the conductance of 1 m^3 of solution.
∵ C moles of an electrolyte are present in 1 m^3 solution.
∴ 1 mol of an electrolyte is present in \(\frac{1}{C}\) m^3 of solution.
∵ Conductance of 1 m^3 of this solution is κ.
∴ Conductance of \(\frac{1}{C}\) m^3 of the solution is \(\frac{\kappa}{\mathrm{C}}\)
This represents molar conductivity, ∧[m].
∴ ∧[m] = \(\frac{\kappa}{\mathrm{C}}\)Ω^-1 m^2 mol^-1 (In SI units).]
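As a quick numerical illustration of these formulas (the values here are assumed purely for illustration and are not from the original text): for a 0.02 M solution with conductivity κ = 2.5 × 10^-3 Ω^-1 cm^-1,
∧[m] = \(\frac{\kappa \times 1000}{C}\) = \(\frac{2.5 \times 10^{-3} \times 1000}{0.02}\) = 125 Ω^-1 cm^2 mol^-1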
Question 16.
What are the units of molar conductivity, ∧[m]?
In SI units : Conductivity κ is expressed in Ω^-1 m^-1 (or S m^-1) and the concentration of the solution in mol m^-3, so ∧[m] = κ/C has units Ω^-1 m^2 mol^-1 (or S m^2 mol^-1).
In C.G.S. units : Conductivity is expressed in Ω^-1 cm^-1 (or S cm^-1) and the concentration in mol L^-1 (moles in 1000 cm^3 of solution), so ∧[m] = 1000κ/C has units Ω^-1 cm^2 mol^-1 (or S cm^2 mol^-1).
Question 17.
Explain the variation of molar conductivity with concentration for strong and weak electrolytes.
How is the molar conductivity of strong electrolytes at zero concentration determined by graphical method? Why is this method not useful for weak electrolytes?
(i) As the dilution of an electrolytic solution increases, the dissociation of the electrolyte increases, hence the total number of ions increases, therefore, the molar conductivity increases.
Fig. 5.5 : Variation of molar conductivity with \(\sqrt{\mathbf{c}}\)
(ii) The increase in molar conductivity with increase in dilution or decrease in concentration is different for strong and weak electrolytes.
(iii) On dilution, the molar conductivity of strong electrolytes increases rapidly and approaches a maximum limiting value at infinite dilution or zero concentration, represented as ∧[∞] or ∧[0] or \(\wedge_{m}^{0}\).
(iv) In case of weak electrolytes, which dissociate less as compared to strong electrolytes, the molar conductivity is low and increases slowly in the high concentration region, but increases rapidly at low concentration or high dilution. This is because the extent of dissociation increases rapidly with dilution.
(v) ∧[0] values for strong electrolytes can be obtained by extrapolating the linear graph to zero concentration (or infinite dilution). However ∧[0] for the weak electrolytes cannot be obtained by
this method, since the graph increases exponentially at very high dilution and does not intersect ∧[m] axis at zero concentration.
Question 18.
Why has the molar conductance of an electrolyte the maximum value at infinite dilution ?
• As the dilution of an electrolytic solution increases or concentration decreases, the dissociation of an electrolyte increases.
• At infinite dilution, the dissociation of an electrolyte is complete (100% dissociation). Hence all the ions from one mole of an electrolyte are available to carry electricity.
Therefore the molar conductance at infinite dilution (∧[0]) for a given electrolyte has the highest or limiting value. It is always constant for the given electrolyte at constant temperature.
Question 19.
State Kohlrausch’s law.
State and explain Kohlrausch’s law of independent migration of ions.
(A) Statement of Kohlrausch’s law : This states that at infinite dilution of the solution, each ion of an electrolyte migrates independently of its co-ions and contributes independently to the total
molar conductivity of the electrolyte, irrespective of the nature of other ions present in the solution.
(B) Explanation : Both the ions, cation and anion of the electrolyte make a definite contribution to the molar conductivity of the electrolyte at infinite dilution or zero concentration (∧[0]).
If \(\lambda_{+}^{0}\) and \(\lambda_{-}^{0}\) are the molar ionic conductivities of cation and anion respectively at infinite dilution, then
∧[0] = \(\lambda_{+}^{0}\) + \(\lambda_{-}^{0}\)
This is known as Kohlrausch’s law of independent migration of ions.
For an electrolyte, B[x] A[y] giving x number of cations and y number of anions,
∧[0] = x\(\lambda_{+}^{0}\) + y\(\lambda_{-}^{0}\)
(C) Applications of Kohlrausch’s law :
(1) With this law, the molar conductivity of a strong electrolyte at zero concentration can be determined. For example, ∧[0(KCl)] = \(\lambda_{\mathrm{K}^{+}}^{0}+\lambda_{\mathrm{Cl}^{-}}^{0}\).
(2) ∧[0] values of weak electrolytes can be obtained from those of strong electrolytes. For example, ∧[0(CH3COOH)] = ∧[0(CH3COONa)] + ∧[0(HCl)] – ∧[0(NaCl)] (see Question 21).
Question 20.
State Kohlrausch’s law and write mathematical expression of molar conductivity of the given solution at infinite dilution.
Statement of Kohlrausch’s law : This states that at infinite dilution of the solution, each ion of an electrolyte migrates independently of its co-ions and contributes independently to the total
molar conductivity of the electrolyte, irrespective of the nature of other ions present in the solution.
This law of independent migration of ions is represented as
∧[0] = \(\lambda_{+}^{0}\) + \(\lambda_{-}^{0}\).
where ∧[0] is the molar conductivity of the electrolyte at infinite dilution or zero concentration while \(\lambda_{+}^{0}\) and \(\lambda_{-}^{0}\) are the molar ionic conductivities of cation and
anion respectively at infinite dilution.
Question 21.
Explain the determination of molar conductivity of a weak electrolyte at infinite dilution or zero concentration using Kohlrausch’s law.
Molar conductivity of a weak electrolyte at infinite dilution or zero concentration cannot be measured experimentally.
Consider the molar conductivity (∧[0]) of a weak acid, CH[3]COOH, at zero concentration. By Kohlrausch's law, ∧[0(CH3COOH)] = \(\lambda_{\mathrm{CH}_{3} \mathrm{COO}^{-}}^{0}+\lambda_{\mathrm{H}^{+}}^{0}\), where \(\lambda_{\mathrm{CH}_{3} \mathrm{COO}^{-}}^{0}\) and \(\lambda_{\mathrm{H}^{+}}^{0}\) are the molar ionic conductivities of CH[3]COO^– and H^+ ions respectively.
If ∧[0(CH3COONa)], ∧[0(HCl)] and ∧[0(NaCl)] are the molar conductivities of CH[3]COONa, HCl and NaCl respectively at zero concentration, then by Kohlrausch's law,
∧[0(CH3COONa)] + ∧[0(HCl)] – ∧[0(NaCl)] = \(\lambda_{\mathrm{CH}_{3} \mathrm{COO}^{-}}^{0}+\lambda_{\mathrm{Na}^{+}}^{0}+\lambda_{\mathrm{H}^{+}}^{0}+\lambda_{\mathrm{Cl}^{-}}^{0}-\lambda_{\mathrm{Na}^{+}}^{0}-\lambda_{\mathrm{Cl}^{-}}^{0}\) = \(\lambda_{\mathrm{CH}_{3} \mathrm{COO}^{-}}^{0}+\lambda_{\mathrm{H}^{+}}^{0}\) = ∧[0(CH3COOH)]
Hence, from the ∧[0] values of strong electrolytes, ∧[0] of the weak electrolyte CH[3]COOH at infinite dilution can be calculated.
Question 22.
How is the degree of dissociation related to the molar conductance of the electrolytic solution ?
(i) At zero concentration or at infinite dilution, the molar conductivity has a maximum value denoted by ∧[0].
(ii) This is due to complete dissociation of the weak electrolyte making all the ions available from one mole of the electrolyte to carry electricity at zero concentration.
(iii) If α is the degree of dissociation, then
α = \(\frac{\wedge_{\mathrm{m}}}{\wedge_{0}}\)
At zero concentration, ∧[m] = ∧[0] and α = 1. This suggests that at zero concentration or infinite dilution, the electrolyte is completely (100%) dissociated.
Question 23.
Write the relation between molar conductivity and molar ionic conductivities for the following electrolytes :
(a) KBr, (b) Na[2]SO[4], (c) AlCl[3].
If ∧[0] is the molar conductivity of an electrolyte at infinite dilution and \(\lambda_{+}^{0}\) and \(\lambda_{-}^{0}\) are the molar ionic conductivities, then by Kohlrausch's law:
(a) KBr : ∧[0] = \(\lambda_{\mathrm{K}^{+}}^{0}+\lambda_{\mathrm{Br}^{-}}^{0}\)
(b) Na[2]SO[4] : ∧[0] = \(2 \lambda_{\mathrm{Na}^{+}}^{0}+\lambda_{\mathrm{SO}_{4}^{2-}}^{0}\)
(c) AlCl[3] : ∧[0] = \(\lambda_{\mathrm{Al}^{3+}}^{0}+3 \lambda_{\mathrm{Cl}^{-}}^{0}\)
Question 24.
How is molar conductivity of an electrolytic solution measured ?
The resistance of an electrolytic solution is measured by using a conductivity cell and Wheatstone bridge.
Fig. 5.6 : Measurement of conductance
The measurement of molar conductivity of a solution involves two steps as follows :
Step I : Determination of cell constant of the conductivity cell :
KCl solution (0.01 M) whose conductivity is accurately known (κ = 0.00141 Ω^-1 cm^-1) is taken in a beaker and the conductivity cell is dipped. The two electrodes of the cell are connected to one arm
while the variable known resistance (R) is placed in another arm of Wheatstone bridge.
A current detector D’ which is a head phone or a magic eye is used. J is the sliding jockey (contact) that slides on the arm AB which is a wire of uniform cross section. A source of A.C. power
(alternating power) is used to avoid electrolysis of the solution.
By sliding the jockey on wire AB, a balance point (null point) is obtained at C. Let AC and BC be the lengths of wire.
If R[solution] is the resistance of KCl solution and R[x] is the known resistance then by Wheatstone’s bridge principle,
\(\frac{R_{\text {solution }}}{\mathrm{BC}}=\frac{R_{x}}{\mathrm{AC}}\)
∴ R[solution] = \(\mathrm{BC} \times \frac{R_{x}}{\mathrm{AC}}\)
Then the cell constant ‘b’ of the conductivity cell is obtained by, b = κ[KCl] × R[solution].
Step II : Determination of conductivity of the given solution :
KCl solution is replaced by the given electrolytic solution and its resistance (R[s]) is measured by Wheatstone bridge method by similar manner by obtaining a null point at D.
The conductivity (κ) of the given solution is,
κ = \(\frac{\text { cell constant }}{R_{\mathrm{s}}}=\frac{b}{R_{\mathrm{s}}}\)
Step III: Calculation of molar conductivity :
The molar conductivity (∧[m]) is given by,
∧[m] = \(\frac{\kappa \times 1000}{C}\)
Since the concentration of the solution is known, ∧[m] can be calculated.
Solved Examples 5.3
Question 25.
Solve the following :
(1) The resistance of a solution is 2.5 × 10^3 ohm. Find the conductance of the solution.
Solution :
Given : Resistance of solution = R = 2.5 × 10^3 Ω
Conductance of solution = G = ?
G = \(\frac{1}{R}\)
= \(\frac{1}{2.5 \times 10^{3}}\) ohm^-1 (Ω^-1 or S)
= 4 × 10^-3 Ω^-1 (or S)
Ans. Conductance = G = 4 × 10^-3 Ω^-1
(2) A conductivity cell has two electrodes 20 mm apart and of cross section area 1.8 cm^2. Find the cell constant.
Solution :
Given: Distance between two electrodes = l
= 20 mm
= 2 cm
Cross section area = a = 1.8 cm^2
Cell constant = b = ?
b = \(\frac{l}{a}=\frac{2}{1.8}\) = 1.111 cm^-1
Ans. Cell constant = 1.111 cm^-1
(3) The conductivity of 0.02 M AgNO[3] at 25 °C is 2.428 × 10^-3 Ω^-1 cm^-1. What is its molar conductivity ?
Solution :
Given : Concentration of solution = C = 0.02 M AgNO[3]
Temperature = T = 273 + 25 = 298 K
Conductivity = κ = 2.428 × 10^-3 Ω^-1 cm^-1 (or S cm^-1)
Molar conductivity = ∧[m] = ?
∧[m] = \(\frac{\kappa \times 1000}{C}\)
= \(\frac{2.428 \times 10^{-3} \times 1000}{0.02}\)
= 121.4 Ω^-1 cm^2 mol^-1 (or 121.4 S cm^2 mol^-1)
Ans. Molar conductivity = ∧[m]
= 121.4 Ω^-1 cm^2 mol^-1
(4) 0.05 M NaOH solution offered a resistance of 31.6 ohms in a conductivity cell at 298 K. If the cell constant of the cell is 0.367 cm^-1, calculate the molar conductivity of NaOH solution.
Solution :
Given : Concentration = C = 0.05 M NaOH
Resistance = R = 31.6 Ω
Cell constant = b = 0.367 cm^-1
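The intermediate working is not reproduced in this copy; it can be reconstructed as follows (a sketch of the standard calculation):
κ = \(\frac{b}{R}\) = \(\frac{0.367}{31.6}\) = 1.161 × 10^-2 Ω^-1 cm^-1
∧[m] = \(\frac{\kappa \times 1000}{C}\) = \(\frac{1.161 \times 10^{-2} \times 1000}{0.05}\) ≈ 232.2 Ω^-1 cm^2 mol^-1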
Ans. Molar conductivity = ∧[m] = 232.2 Ω^-1 cm^2 mol^-1
(5) A conductivity cell filled with 0.1 M KCl gives at 25 °C a resistance of 85.5 ohms. The conductivity of 0.1 M KCl at 25° is 0.01286 ohm^-1 cm^-1. The same cell filled with 0.005 M HCl gives a
resistance of 529 ohms. What is the molar conductivity of HCl solution at 25 °C ?
Solution :
Given : Resistance of KCl solution = R[KCl] = 85.5 Ω
Conductivity of KCl solution = κ[KCl]
= 0.01286 ohm^-1 cm^-1
Concentration = C = 0.005 M HCl
Resistance of HCl solution = R[soln] = 529 ohms
Molar conductivity of HCl = ∧[m(HCl)] = ?
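The working is not shown here; reconstructing it along the usual lines:
Cell constant = b = κ[KCl] × R[KCl] = 0.01286 × 85.5 = 1.0995 cm^-1
κ[HCl] = \(\frac{b}{R_{\text{soln}}}\) = \(\frac{1.0995}{529}\) = 2.078 × 10^-3 Ω^-1 cm^-1
∧[m(HCl)] = \(\frac{\kappa \times 1000}{C}\) = \(\frac{2.078 \times 10^{-3} \times 1000}{0.005}\) ≈ 416 Ω^-1 cm^2 mol^-1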
Ans. Molar conductivity of HCl solution = ∧[m(HCl)]
= 416 ohm^-1 cm^2 mol^-1
(6) The molar conductivity of 0.05 M BaCl[2] solution at 25 °C is 223 Ω^-1 cm^2 mol^-1. What is its conductivity?
Solution :
Given : Molar conductivity = ∧[m]
= 223 Ω^-1 cm^2 mol^-1
Concentration = C = 0.05 M BaCl[2]
Conductivity = κ = ?
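The omitted step follows directly from ∧[m] = 1000κ/C rearranged for κ (a reconstruction of the missing working):
κ = \(\frac{\wedge_{\mathrm{m}} \times C}{1000}\) = \(\frac{223 \times 0.05}{1000}\) = 0.01115 Ω^-1 cm^-1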
Ans. Conductivity = κ = 0.01115 Ω^-1 cm^-1
(7) Conductivity of a solution is 6.23 × 10^-5 Ω^-1 cm^-1 and its resistance is 13710 Ω. If the electrodes are 0.7 cm apart, calculate the cross-sectional area of electrode.
Solution :
Given : κ = 6.23 × 10^-5 Ω^-1 cm^-1
R = 13710 Ω
l = 0.7 cm
a = ?
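The working is omitted in this copy; it can be reconstructed as follows:
Cell constant = b = κ × R = 6.23 × 10^-5 × 13710 = 0.8541 cm^-1
Since b = \(\frac{l}{a}\), a = \(\frac{l}{b}\) = \(\frac{0.7}{0.8541}\) = 0.8195 cm^2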
Ans. Cross sectional area of electrode = 0.8195 cm^2
(8) A conductivity cell filled with 0.01 M KCl gives at 25 °C the resistance of 604 ohms. The conductivity of KCl at 25 °C is 0.00141 Ω^-1 cm^-1. The same cell filled with 0.001 M AgNO[3] gives a
resistance of 6529 ohms. Calculate the molar conductivity of 0.001 M AgNO[3] solution at 25 °C.
Solution :
Given : Resistance of KCl solution = R[KCl]
= 604 ohm (Ω)
Conductivity of KCl solution = κ[KCl]
= 0.00141 Ω^-1 cm^-1
Concentration = C = 0.001 M AgNO[3]
Resistance of solution = R[sol] = 6529 ohm (Ω)
Molar conductivity = ∧[m] = ?
Cell constant = b = κ[KCl] × R[KCl] = 0.00141 × 604 = 0.8516 cm^-1
Conductivity of AgNO[3] solution = κ = \(\frac{b}{R_{\text{sol}}}\) = \(\frac{0.8516}{6529}\) = 1.304 × 10^-4 Ω^-1 cm^-1
∧[m] = \(\frac{\kappa \times 1000}{C}\) = \(\frac{1.304 \times 10^{-4} \times 1000}{0.001}\) = 130.4 Ω^-1 cm^2 mol^-1
Ans. Molar conductivity of AgNO[3] solution = ∧[m]
= 130.4 Ω^-1 cm^2 mol^-1
(9) Resistance and conductivity of a cell containing 0.001 M KCl solution at 298 K are 1500 Ω and 1.46 × 10^-4 S.cm^-1 respectively. What is cell constant.
Solution :
Given : Resistance of KCl solution = 1500 Ω, conductivity of KCl solution = κ = 1.46 × 10^-4 S.cm^-1, Cell constant = b = ?
Cell constant = Conductivity (k) × Resistance
= 1.46 × 10^-4 × 1500
= 0.219 cm^-1
Ans. Cell constant = 0.219 cm^-1
(10) A conductivity cell filled with 0.02 M H[2]SO[4] gives at 25 °C resistance of 122 ohms. If the molar conductivity of 0.02 H[2]SO[4] is 618 Ω^-1 cm^2 mol^-1, what is the cell constant?
Solution :
Given : Concentration = C = 0.02 M H[2]SO[4]
Resistance of H[2]SO[4] solution = R[soln] = 122 Ω
Molar conductivity = ∧[m] = 618 Ω^-1 cm^2 mol^-1
Cell constant = b = ?
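The missing steps can be reconstructed as follows:
κ = \(\frac{\wedge_{\mathrm{m}} \times C}{1000}\) = \(\frac{618 \times 0.02}{1000}\) = 1.236 × 10^-2 Ω^-1 cm^-1
b = κ × R[soln] = 1.236 × 10^-2 × 122 = 1.51 cm^-1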
Ans. Cell constant = b = 1.51 cm^-1
(11) A conductivity cell filled with 0.02 M AgNO[3] gives at 25 °C resistance of 947 ohms. If the cell constant is 2.3 cm^-1, what is the molar conductivity of 0.02 M AgNO[3] at 25 °C?
Solution :
Given : Concentration = C = 0.02 M AgNO[3]
Resistance of solution = R[soln] = 947 Ω
Cell constant = b = 2.3 cm^-1
Molar conductivity = ∧[m] = ?
Conductivity of soln = κ = \(\frac{b}{R_{\text{soln}}}\) = \(\frac{2.3}{947}\) = 2.429 × 10^-3 Ω^-1 cm^-1
∧[m] = \(\frac{\kappa \times 1000}{C}\) = \(\frac{2.429 \times 10^{-3} \times 1000}{0.02}\) ≈ 121.5 Ω^-1 cm^2 mol^-1
Ans. Molar conductivity = ∧[m]
= 121.5 Ω^-1 cm^2 mol^-1
(12) Resistance of conductivity cell filled with 0.1 M KCl solution is 100 ohms. If the resistance of the same cell when filled with 0.02 M KCl solution is 520 ohms, calculate the conductivity and
molar conductivity of 0.02 M KCl solution. [Given : Conductivity of 0.1 M KCl solution is 1.29 Sm^-1.]
Given : Resistance of 0.1 M KCl solution = R[1] = 100 Ω
Resistance of 0.02 M KCl solution = R[2] = 520 Ω
Conductivity of 0.02 M KCl solution = κ[2] = ?
Molar conductivity of 0.02 M KCl solution = ∧[m] = ?
Conductivity of 0.1 M KCl solution = κ[1]
= 1.29 S m^-1
Cell constant = b = κ[1] × R[1] = 1.29 × 100
= 129 m^-1
= 1.29 cm^-1
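The remainder of the solution is not reproduced here; continuing in the same way (a reconstruction):
κ[2] = \(\frac{b}{R_{2}}\) = \(\frac{1.29}{520}\) = 2.48 × 10^-3 S cm^-1
∧[m] = \(\frac{\kappa_{2} \times 1000}{C}\) = \(\frac{2.48 \times 10^{-3} \times 1000}{0.02}\) = 124 S cm^2 mol^-1
Ans. Conductivity of 0.02 M KCl solution = 2.48 × 10^-3 S cm^-1; Molar conductivity = 124 S cm^2 mol^-1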
(13) The molar conductivities at zero concentration (or at infinite dilution) of CH[3]COONa, HCl and NaCl in Ω^-1 cm^2 mol^-1 are 90.8,426.2 and 126.4 respectively. Calculate the molar conductivity
of CH[3]COOH at infinite dilution.
Solution :
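The worked solution is missing in this copy; applying Kohlrausch's law as in Question 21 (a reconstruction):
∧[0(CH3COOH)] = ∧[0(CH3COONa)] + ∧[0(HCl)] – ∧[0(NaCl)]
= 90.8 + 426.2 – 126.4 = 390.6 Ω^-1 cm^2 mol^-1
Ans. Molar conductivity of CH[3]COOH at infinite dilution = 390.6 Ω^-1 cm^2 mol^-1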
(14) The molar conductivities at zero concentrations of NH[4]Cl, NaOH and NaCl are respectively 149.7Ω^-1 cm^2 mol^-1, 248.1 Ω^-1 cm^2 mol^-1 and 126.5 Ω^-1 cm^2 mol^-1. What is the molar
conductivity of NH[4]OH at zero concentration ?
Solution :
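The worked solution is not shown; by the same Kohlrausch's-law combination (a reconstruction):
∧[0(NH4OH)] = ∧[0(NH4Cl)] + ∧[0(NaOH)] – ∧[0(NaCl)]
= 149.7 + 248.1 – 126.5 = 271.3 Ω^-1 cm^2 mol^-1
Ans. Molar conductivity of NH[4]OH at zero concentration = 271.3 Ω^-1 cm^2 mol^-1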
(15) What is the molar conductivity of AgI at zero concentration if the ∧[0] values of NaI, AgNO[3] and NaNO[3] are respectively 126.9 Ω^-1 cm^2 mol^-1, 133.4 Ω^-1 cm^2 mol^-1 and 121.5 Ω^-1 cm^2 mol
^-1 ?
Solution :
Adding equations (i) and (ii) and subtracting equation (iii) we get equation I.
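The equations referred to above are not reproduced; the standard combination (a reconstruction) is:
∧[0(AgI)] = ∧[0(AgNO3)] + ∧[0(NaI)] – ∧[0(NaNO3)]
= 133.4 + 126.9 – 121.5 = 138.8 Ω^-1 cm^2 mol^-1
Ans. Molar conductivity of AgI at zero concentration = 138.8 Ω^-1 cm^2 mol^-1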
(16) Molar conductivity of KCl at infinite dilution is 150.3 S cm^2 mol^-1. If the molar conductivity of K^+ is 73.4, calculate that of Cl^–.
Solution :
Given : Molar conductivity at infinite dilution
= ∧[(KCl)] = 150.3 S cm^2 mol^-1
Molar conductivity of K^+
= \(\lambda_{\mathrm{K}^{+}}^{0}\) = 73.4 S cm^2 mol^-1
Molar conductivity of Cl^– = \(\lambda_{\mathrm{Cl}^{-}}^{0}\) = ?
By Kohlrausch’s law,
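The final step, omitted here, follows directly (a reconstruction):
\(\lambda_{\mathrm{Cl}^{-}}^{0}\) = ∧[0(KCl)] – \(\lambda_{\mathrm{K}^{+}}^{0}\) = 150.3 – 73.4 = 76.9 S cm^2 mol^-1
Ans. Molar ionic conductivity of Cl^– = 76.9 S cm^2 mol^-1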
(17) Molar conductivities at infinite dilution of Mg^2+ and Br^– are 105.8 Ω^-1 cm^2 mol^-1 and 78.2 Ω^-1 cm^2 mol^-1 respectively. Calculate molar conductivity at zero concentration of MgBr[2].
Solution :
Given : \(\lambda_{\mathrm{Mg}^{2+}}^{0}\) = 105.8 Ω^-1 cm^2 mol^-1
\(\lambda_{\mathrm{Br}^{-}}^{0}\) = 78.2 Ω^-1 cm^2 mol^-1
\(\wedge_{0\left(\mathrm{MgBr}_{2}\right)}\) = ?
By Kohlrausch’s law,
\(\wedge_{0\left(\mathrm{MgBr}_{2}\right)}\) = \(\lambda_{\mathrm{Mg}^{2+}}^{0}\) + 2\(\lambda_{\mathrm{Br}^{-}}^{0}\)
= 105.8 + 2 × 78.2 = 105.8 + 156.4
= 262.2 Ω^-1 cm^2 mol^-1
Ans. Molar conductivity of MgBr[2] at zero concentration = \(\wedge_{0\left(\mathrm{MgBr}_{2}\right)}\) = 262.2 Ω^-1 cm^2 mol^-1
(18) The molar conductivity of 0.1 M CH[3]COOH at 25 °C is 15.9 Ω^-1 cm^2 mol^-1. If the molar conductivities of CH3COO^– and H^+ ions in Ω^-1 cm^2 mol^-1 at zero concentration are 40.8 and 349.6
respectively, calculate degree of dissociation of 0.1 M CH[3]COOH.
Solution :
Given : Concentration = C = 0.1 M CH[3]COOH
Molar conductivity = ∧[m] = 15.9 Ω^-1 cm^2 mol^-1
\(\lambda_{\mathrm{CH}_{3} \mathrm{COO}^{-}}^{0}\) = 40.8 Ω-1 cm^2 mol^-1;
\(\lambda_{\mathrm{H}^{+}}^{0}\) = 349.6 Ω^-1 cm^2 mol^-1
Degree of dissociation = α = ?
By Kohlrausch’s law,
\(\wedge_{0\left(\mathrm{CH}_{3} \mathrm{COOH}\right)}=\lambda_{\mathrm{CH}_{3} \mathrm{COO}^{-}}^{0}+\lambda_{\mathrm{H}^{+}}^{0}\)
= 40.8 + 349.6
= 390.4 Ω^-1 cm^2 mol^-1
α = ∧[m/∧0]
= \(\frac{15.9}{390.4}\) = 0.0407
Ans. The degree of dissociation of CH[3]COOH = 0.0407
(19) The dissociation constant of a weak monoacidic base is 1.2 × 10^-5 at 25 °C. The molar conductivity of the base at zero concentration is 354.8 Ω^-1 cm^2 mol^-1 at 25°C. Calculate the percentage
dissociation and molar conductivity of the weak base at 0.1 M concentration.
Solution :
Given : Dissociation constant of the base = K[b] = 1.2 × 10^-5
Concentration = C = 0.1 M
∧[0] = 354.8 Ω^-1 cm^2 mol^-1
Percentage dissociation = ?
∧[m] = ?
K[b] = \(\frac{\mathrm{C} \alpha^{2}}{1-\alpha}\); For a weak electrolyte, α is small,
∴ K[b] = Cα^2;
∴ α = \(\sqrt{\frac{\mathrm{K}_{\mathrm{b}}}{\mathrm{C}}}=\left(\frac{1.2 \times 10^{-5}}{0.1}\right)^{\frac{1}{2}}\) = 1.0954 × 10^-2
∴ Percentage dissociation = α × 100
= 1.0954 × 10^-2 × 100 = 1.0954%
Now, α = \(\frac{\wedge_{\mathrm{m}}}{\wedge_{0}}\)
∴ ∧[m] = α × ∧[0] = 1.0954 × 10^-2 × 354.8
= 3.886 Ω^-1 cm^2 mol^-1
Ans. Percentage dissociation = 1.0954%
Molar conductivity = ∧[m]
= 3.886 Ω^-1 cm^2 mol^-1
Question 26.
What is an electrochemical cell? What does it consist of?
Electrochemical cell : It consists of two electronic conductors such as metal plates dipping into an electrolytic or ionic conductor which is an aqueous electrolytic solution or a pure liquid of a
molten electrolyte.
Question 27.
What are electrochemical reactions ?
1. Electrochemical reactions : The chemical reactions occurring in electrochemical cells which involve transfer of electrons from one species to other are called electrochemical reactions. They are
redox reactions.
2. These reactions are made of two half reactions namely oxidation at one electrode (anode) and reduction at another electrode (cathode) of the electrochemical cell.
3. The net reaction is the sum of the above two half reactions.
Question 28.
Define electrode.
Electrode : The arrangement consisting of a metal rod dipping in an aqueous solution or molten electrolyte containing ions and conduct electric current due to oxidation or reduction half reactions
occurring on its surface is called an electrode.
The electrodes which take part in the reactions are called active electrodes while those which do not take part in the reactions are called inert electrodes.
Question 29.
Define : (a) Anode (b) Cathode.
(a) Anode : An electrode of an electrochemical cell, at which oxidation half reaction occurs due to the loss of electrons from some species is called an anode.
(b) Cathode : An electrode of an electrochemical cell at which reduction half reaction occurs due to gain of electrons by some species is called a cathode.
Question 30.
What are the types of electrochemical cells ?
There are two types of electrochemical cells as follows :
1. Electrolytic cells
2. Voltaic or galvanic cells.
Question 31.
Define : (1) Electrolytic cell (2) Voltaic or galvanic cell.
(1) Electrolytic cell : An electrochemical cell in which a non-spontaneous chemical reaction is forced to occur by passing direct electric current into the solution from the external source and where
electrical energy is converted into chemical energy is called an electrolytic cell. E.g. voltameter, electrolytic cell for deposition of a metal.
(2) Voltaic or galvanic cell : An electrochemical cell in which a spontaneous chemical reaction occurs producing electricity and where a chemical energy is converted into an electrical energy is
called voltaic cell or galvanic cell. E.g. Daniell cell, dry cell, lead storage battery, fuel cells, etc.
Question 32.
Define electrolysis.
Electrolysis : The process of a non-spontaneous chemical decomposition of an electrolyte by the passage of an electric current through its aqueous solution or fused mass and in which electrical
energy is converted into chemical energy is called electrolysis. E.g. Electrolysis of fused NaCl.
Question 33.
Describe electrolysis of aqueous NaCl.
(1) Construction of an electrolytic cell : It consists of a vessel containing aqueous solution of NaCl. Two inert electrodes (graphite electrodes) are dipped in it and connected to an external source
of electricity like battery. The electrode connected to the negative terminal is a cathode and that connected to a positive terminal is an anode.
(2) Working of the cell :
(A) NaCl[(aq)] and H[2]O[(l)] dissociate as follows :
NaCl[(aq)] → Na^+[(aq)] + Cl^–[(aq)]
H[2]O[(l)] ⇌ H^+[(aq)] + OH^–[(aq)]
(3) Reactions in electrolytic cell :
(i) Reduction half reaction at cathode : There are Na^+ and H^+ ions but since H^+ are more reducible than Na^+, they undergo reduction liberating hydrogen and Na^+ are left in the solution.
2H[2]O[(l)] + 2e^– → H[2][(g)] + \(2 \mathrm{OH}_{(\mathrm{aq})}^{-}\) (reduction)
E^0 = -0.83 V
(ii) Oxidation half reaction at anode : At anode there are Cl^– and OH^–. But Cl^– ions are preferably oxidised due to less decomposition potential.
2Cl^–[(aq)] → Cl[2(g)] + 2e^– (oxidation)
Net cell reaction : Since two electrons are gained at cathode and two electrons are released at anode for each redox step, the electrical neutrality is maintained. Hence we can write,
2Cl^–[(aq)] + 2H[2]O[(l)] → Cl[2(g)] + H[2(g)] + 2OH^–[(aq)]
Since Na^+ and OH^– are left in the solution, they form NaOH[(aq)].
(4) Results of electrolysis :
• H[2] gas is liberated at cathode.
• Cl[2] gas is liberated at anode.
• NaOH is formed in the solution and it reacts basic.
Question 34.
Define and explain the following electrical units : (1) Coulomb (2) Ampere (3) Volt (4) Joule (5) Ohm.
(1) Coulomb : It is a quantity of electricity obtained when one ampere current flows for one second.
It is the unit of quantity of electricity.
Q = I × t Coulomb (C)
where Q is the charge or quantity of electricity in coulombs.
(2) Ampere : It is a strength of an electric current obtained when one coulomb of electricity is passed through a circuit for one second.
∴ I = Q/t
(3) Volt : It is the potential difference between two points of an electric conductor required to send a current of one ampere through a resistance of one ohm.
∴ V = I × R
where V is the potential difference in volts and R is the resistance of a conductor in ohms.
(4) Joule : It is the electrical work or energy produced when one coulomb of electricity is passed through a
potential difference of one volt.
∴ Electrical work = Q × V J
where Q is electrical charge in coulombs and V is the potential difference.
(5) Ohm : It is the resistance of an electrical conductor across which when potential difference of 1 volt is applied, a current of one ampere is obtained. It has units, Ω or per siemens.
Question 35.
Explain quantitative aspects of electrolysis.
(1) Calculation of quantity of electricity : If an electric current of strength I A is passed through the cell for t seconds, then quantity of electricity (Q) obtained is given by,
Q = I × t C (Coulomb)
(2) Calculation of moles of electrons passed : The charge carried by one mole of electrons is referred to as one faraday (F). If total charge passed is Q C, then moles of electrons passed = \(\frac{Q
(\mathrm{C})}{F\left(\mathrm{C} / \mathrm{mol} \mathrm{e}^{-}\right)}\)
(3) Calculation of moles of product formed : Consider one mole of ions, \(\mathbf{M}_{(\mathrm{aq})}^{n^{+}}\) which will require n moles of electrons for reduction.
\(\mathbf{M}_{(\mathrm{aq})}^{n^{+}}\) + ne^– → M (Reduction half reaction)
Hence, moles of product formed = \(\frac{\text{moles of electrons passed}}{n}\)
(4) Calculation of mass of product : Mass, W of product formed is given by,
W = moles of product × molar mass of product (M)
When two electrolytic cells containing different electrolytes are connected in series so that the same quantity of electricity is passed through them, then the masses W[1] and W[2] of the products produced are given by,
\(\frac{W_{1}}{W_{2}}=\frac{M_{1} / n_{1}}{M_{2} / n_{2}}\)
where M[1], M[2] are the molar masses of the products and n[1], n[2] are the numbers of electrons involved in the respective half reactions.
Question 36.
Define Faraday.
Faraday : It is defined as the quantity of the electric charge carried by one mole of electrons.
It has value, 1F = 96500 C/mol
Question 37.
Obtain a charge on one electron from Faraday’s value.
• One Faraday is the electric charge on one mole of electrons (6.022 × 10^23 electrons).
• 1 Faraday = 96500 C (per mol of electrons).
• Hence the charge on one electron is, charge on one electron = \(\frac{96500}{6.022 \times 10^{23}}\)
= 1.602 × 10^-19 C.
Solved Examples 5.4-5.5
Question 38.
Solve the following :
(1) An electric current of 100 mA is passed through an electrolyte for 2 hours, 20 minutes and 20 seconds. Find the quantity of electricity passed.
Solution :
Given : Electric current = I = 100 mA
= 100 × 10^-3 A
= 0.1 A
Time = t = 2 hrs + 20 min + 20 s
= 2 × 60 × 60 + 20 × 60 + 20
= 8420 s
The quantity of electricity = Q = ?
Q = I × t
= 0.1 × 8420
= 842 C
Ans. Quantity of electricity passed, Q = 842 C
(2) An electric current of 500 mA is passed for 1 hour and 30 minutes. Calculate the
(i) Quantity of electricity (or charge)
(ii) Number of Faradays of electricity
(iii) Number of electrons passed (Charge on 1 electron = 1.602 × 10^-19 C)
Solution :
Given : Electric current = I = 500 mA
= 500 × 10^-3 A = 0.5 A
Time = t = 1 hr + 30 min
= 1 × 60 × 60 + 30 × 60
= 5400 s
(i) The quantity of electricity = Q = ?
(ii) Number of Faradays of electricity = ?
(iii) Number of electrons passed = ?
(i) Q = I × t = 0.5(A) × 5400(s) = 2700 C
(ii) Number of Faradays of electricity = \(\frac{Q}{F}\) = \(\frac{2700}{96500}\) = 0.028 F
(iii) 1F is the electric charge on 6.022 × 10^23 electrons.
∴ 0.028F is the charge on,
0.028 × 6.022 × 10^23 = 1.686 × 10^22 electrons
∴ Number of electrons passed = 1.686 × 10^22
Ans. (i) The quantity of electricity = Q = 2700 C
(ii) Number of Faradays of electricity = 0.028 F
(iii) Number of electrons passed = 1.686 × 10^22
(3) How much electricity in terms of Faraday is required to produce :
(a) 20 g of Ca from molten CaCl[2]
(b) 40 g of Al from molten Al[2]O[3]
(Given : Molar mass of Calcium and Aluminium are 40 g mol^-1 and 27 g mol^-1 respectively.)
Solution :
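The worked solution is missing in this copy; a reconstruction of the standard calculation:
(a) Ca^2+ + 2e^– → Ca
Moles of Ca = \(\frac{20}{40}\) = 0.5 mol, so moles of electrons = 2 × 0.5 = 1 mol = 1 Faraday.
(b) Al^3+ + 3e^– → Al
Moles of Al = \(\frac{40}{27}\) = 1.481 mol, so moles of electrons = 3 × 1.481 = 4.44 mol = 4.44 Faradays.
Ans. (a) 1 F (b) 4.44 F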
(4) For the following conversions,
(i) number of moles of electrons
(ii) number of Faradays
(iii) Amount of electricity :
(A) 0.1 mol conversion of Zn^2+ to Zn.
(B) 0.08 mol conversion of \(\mathrm{MnO}_{4}^{-}\) to Mn^2+
(C) 1.1 mol conversion of \(\mathrm{Cr}_{2} \mathbf{O}_{7}^{2-}\) to Cr^3+.
Solution :
(i) Number of moles of electrons = ?
(ii) Number of Faradays = ?
(iii) Amount of electricity = Q = ?
(A) Number of moles of Zn^2+ =0.1 mol
Zn^2+ + 2e^– → Zn
(i) ∵ 1 mol Zn^2+ requires 2 mol electrons
0.1 mol Zn^2+ will require
∴ 0.1 × 2 = 0.2 mol electrons
(ii) ∵ 1 mol electrons = 1 Faraday
∴ 0.2 mol electrons = 0.2 × 1
= 0.2 Faradays
(iii) ∵ 1 Faraday = 96500 C
∴ 0.2 Faraday = 96500 × 0.2 = 48250 C
Amount of electricity required =48250C
(B) Number of moles of \(\mathrm{MnO}_{4}^{-}\) = 0.08 mol
\(\mathrm{MnO}_{4}^{-}+5 \mathrm{e}^{-} \longrightarrow \mathrm{Mn}^{2+}\)
(i) ∵ 1 mol \(\mathrm{MnO}_{4}^{-}\) requires 5 mol electrons
∴ 0.08 mol \(\mathrm{MnO}_{4}^{-}\) will require
5 × 0.08 = 0.4 mol electrons
(ii) Number of Faradays = 0.4 × 1 = 0.4
(iii) Amount of electricity = Q = 0.4 × 96500
= 38600 C
(C) Number of moles of \(\mathrm{Cr}_{2} \mathrm{O}_{7}^{2-}\) = 1.1 mol
(i) ∵ 1 mol \(\mathrm{Cr}_{2} \mathrm{O}_{7}^{2-}\) requires 6 mol electrons
∴ 1.1 mol \(\mathrm{Cr}_{2} \mathrm{O}_{7}^{2-}\) will require
6 × 1.1 = 6.6 mol electrons
(ii) Number of Faradays = 1 × 6.6 = 6.6
(iii) Amount of electricity = 6.6 × 96500
= 6.369 × 10^5 C
(A) (i) Number of moles of electrons = 0.2 mol
(ii) Number of Faradays = 0.2
(iii) Amount of electricity = 48250 C
(B) (i) Number of moles of electrons = 0.4 mol
(ii) Number of Faradays = 0.4
(iii) Amount of electricity = 38600 C
(C) (i) Number of moles of electrons = 6.6 mol
(ii) Number of Faradays = 6.6
(iii) Amount of electricity = 6.369 × 10^5 C
(5) What mass of aluminium is produced at the cathode during the passage of 4 ampere current through Al[2](SO[4])[3] solution for 100 minutes? Molar mass of aluminium is 27 g mol^-1.
Solution :
Given : I = 4 A; t = 100 × 60 = 6000 s
F = 96500 C mol^-1, M = 27 g mol^-1, W[Al] = ?
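The working and final answer are not reproduced here; a reconstruction:
Q = I × t = 4 × 6000 = 24000 C
Moles of electrons = \(\frac{24000}{96500}\) = 0.2487 mol
Al^3+ + 3e^– → Al, so moles of Al = \(\frac{0.2487}{3}\) = 0.0829 mol
W[Al] = 0.0829 × 27 = 2.24 g
Ans. Mass of Al produced ≈ 2.24 g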
(6) How long will it take to produce 2.415 g Ag metal from its salt solution by passing a current of 3 amperes? How many moles of electrons are required ? Molar mass of Ag is 107.9 gmol^-1.
Solution :
Given : Electric current = I = 3A
Mass of Ag produced = 2.415 g
Molar mass of Ag = Atomic mass of Ag
= 107.9 gmol^-1
Time = t = ? Number of moles of electrons = ?
Reduction half reaction at cathode :
\(\mathrm{Ag}_{(\mathrm{aq})}^{+}+\mathrm{e}^{-} \longrightarrow \mathrm{Ag}_{(\mathrm{s})}\)
Moles of Ag produced = \(\frac{2.415}{107.9}\) = 0.02238 mol
From the reaction,
∵ 1 mole of Ag requires 1 mole of electrons
∴ 0.02238 mole of Ag will require,
0.02238 mol electrons
∵ 1 mole of electrons carries a charge of 96500 C,
∴ 0.02238 mole of electrons will carry a charge, 0.02238 × 96500 = 2160 C
∴ Quantity of electricity passed = Q = 2160 C
Let I be the current strength and t be time of electrolysis. Then,
∵ Q = I × t
∴ t = \(\frac{Q}{I}=\frac{2160}{3}\) = 720 s = \(\frac{720}{60}\) min = 12 min.
Ans. Time of electrolysis = 12 min
Moles of electrons = 0.02238 mol
(7) What current strength in ampere will be required to produce 2.369 × 10^-3 kg of Cu from CuSO4 solution in one hour? How many moles of electrons are required? Molar mass of copper is 63.5 gmol^-1.
Solution :
Given : Mass of Cu produced = 2.369 × 10^-3 kg
= 2.369 g
Time = t = 1 hr = 1 × 60 × 60 = 3600 s
Molar mass of Cu = 63.5 g mol^-1
Strength of current = I = ?
1 Faraday = 96500 C = 1 mol electrons
1 mol Cu = Molar mass of Cu = 63.5 g
Reduction half reaction :
Moles of Cu deposited = \(\frac{2.369}{63.5}\) = 0.0373 mol Cu
From the reaction,
∵ 1 mol of Cu requires 2 mol electrons
∴ 0.0373 mol Cu will require 2 × 0.0373
= 0.0746 mol electrons
∵ 1 mol electrons = 96500 C
∴ 0.0746 mol electrons = 96500 × 0.0746 = 7199 C
∴ Quantity of electricity required = Q = 7199 C
∴ Q = I × t
∴ Current, I = \(\frac{Q}{t}=\frac{7199}{3600}\) = 2A
Ans. Current strength = I = 2A
Moles of electrons required = 0.0746 mol
(8) A current of 6 amperes is passed through AlCl[3] solution for 15 minutes using Pt electrodes, when 0.504 g Al is produced. What is the molar mass of Al ?
Solution :
Given : Electric current = I = 6 A
Time = t = 15 min = 15 × 60 s = 900 s
Mass of Al produced = 0.504 g
Molar mass of Al = ?
Reduction half reaction,
\(\mathrm{Al}_{(\mathrm{aq})}^{3+}+3 \mathrm{e}^{-} \longrightarrow \mathrm{Al}_{(\mathrm{aq})}\)
Quantity of electricity passed = Q = I × t
= 6 × 900 = 5400 C
Number of moles of electrons = \(\frac{Q}{F}=\frac{5400}{96500}\)
= 0.05596 mol
From half reaction,
∵ 3 moles of electrons deposit 1 mole Al
∴ 0.05596 moles of electrons will deposit,
\(\frac{0.05596}{3}\) = 0.01865 mol Al
∵ 0.01865 mole Al weighs 0.504 g
∴ 1 mole Al will weigh, \(\frac{0.504}{0.01865}\) = 27 g
Hence molar mass of Al is 27 g mol^-1
Ans. Molar mass of Al = 27 g mo^-1
(9) How many moles of electrons are required for the reduction of (i) 3 moles of Zn^2+ to Zn,
(ii) 1 mol of Cr^3+ to Cr ?
How many Faradays of electricity will be required in each case ?
Solution :
(i) Given : For reduction of 3 mol Zn^2+ to Zn;
Number of moles of electrons required = ?
Reduction half reaction,
Zn^2+ + 2e^– → Zn
∵ 1 mole of Zn^2+ requires 2 moles of electrons
∴ 3 moles of Zn^2+ will require,
3 × 2 = 6 moles of electrons
∵ 1 mole of electrons = 1 F
∴ 6 moles of electrons = 6 F
(ii) Given : Reduction of 1 mol of Cr^3+ to Cr :
Reduction half reaction,
Cr^3+ + 3e^– → Cr
Hence 1 mole of Cr^3+ will require 3 moles of electrons
∵ 1 mole of electrons = 1 F
∴ 3 moles of electrons = 3 F
Ans. (i) 6 mol electrons and 6 Faradays.
(ii) 3 mol electrons and 3 Faradays.
(10) In an electrolysis of AgNO[3] solution, 0.7 g of Ag is deposited after a certain period of time. Calculate the quantity of electricity required in coulomb. (Molar mass of Ag is 107.9 g mol^-1.)
Solution :
Given : Mass of Ag deposited = 0.7 g
Molar mass of Ag = 107.9 g mol^-1
Quantity of electricity = Q = ?
Reduction half reaction is,
Ag^+ + e^– → Ag
1 mole of Ag = 107.9 g Ag requires 1 mole of electrons
∴ 0.7 g Ag will require, \(\frac{0.7}{107.9}\) = 6.49 × 10^-3 mole of electrons
∵ 1 mole of electrons carry 96500 C charge
∴ 6.49 × 10^-3 mole of electrons will carry, 96500 × 6.49 × 10^-3 = 626 C
Ans. Quantity of electricity required = 626 C,
(11) Calculate the amounts of Na and Chlorine gas produced during the electrolysis of fused NaCl by the passage of 1 ampere current for 25 minutes. Molar masses of Na and Chlorine gas are 23 g mol^-1
and 71 g mol^-1 respectively.
Solution :
Given : Electric current = I = 1 ampere
Time = t = 25 minutes = 25 × 60 s = 1500 s
Molar mass of Na = 23 g mol^-1
Molar mass of Cl[2] = 71 g mol^-1
Mass of Na produced = ?, Mass of Cl[2] produced = ?
Reactions during electrolysis :
(i) 2Na^+ + 2e^– → 2Na (reduction at cathode)
(ii) 2Cl^– → Cl[2(g)] + 2e^– (oxidation at anode)
Quantity of electricity = Q = I × t = 1 × 1500
= 1500 C
Number of moles of electrons passed
= \(\frac{Q}{F}=\frac{1500}{96500}\) = 0.01554
From half reaction (i),
∵ 2 moles of electrons deposit 2 moles of Na
∴ 0.01554 moles of electrons will deposit, \(\frac{0.01554 \times 2}{2}\) = 0.01554 mol Na
Mass of Na = Moles of Na × Molar mass of Na
= 0.01554 × 23 = 0.3572 g Na
From half reaction (ii)
∵ 2 moles of electrons produce 1 mole Cl[2]
∴ 0.01554 moles of electrons will produce,
\(\frac{0.01554 \times 1}{2}\) = 7.77 × 10^-3 mol Cl[2]
∴ Mass of Cl[2] gas = Moles of Cl[2 ]× Molar mass
= 7.77 × 10^-3 × 71
= 0.5518 g
Ans. Mass of Na deposited = 0.3572 g
Mass of Cl2 liberated = 0.5518 g
(12) Calculate the mass of Mg and the volume of Chlorine gas at NTP produced during the electrolysis of molten MgCl[2] by the passage of 2 amperes of current for 1 hour. Molar masses of Mg and Cl2
are 24 g mol-1 and 71 g mol-1 respectively.
Solution :
Given : Electric current = I = 2A
Time = t = 1 hr = 1 × 60 × 60 s = 3600 s
Molar mass of Mg = 24 g mol^-1
Molar mass of Cl[2] = 71 g mol^-1
Mass of Mg produced = ?
Volume of Cl[2] at NTP produced = ?
Reactions during electrolysis :
(i) Mg^2+ + 2e^– → Mg (Reduction half reaction)
(ii) 2Cl^– → Cl[2(g)] + 2e^– (Oxidation half reaction)
Quantity of electricity passed = Q = I × t
= 2 × 3600 = 7200 C
∵ 1 Faraday = 1 mol electrons
∴ Number of moles of electrons passed
= \(\frac{Q}{F}=\frac{7200}{96500}\) = 0.07461 mol
From half reaction (i),
∵ 2 moles of electrons deposit 1 mole of Mg
∴ 0.07461 moles of electrons will deposit, \(\frac{0.07461 \times 1}{2}\) = 0.037305 mol Mg
Mass of Mg = Moles of Mg × Molar mass of Mg
= 0.037305 × 24 = 0.8953 g Mg
From half reaction (ii),
∵ 2 moles of electrons produce 1 mol Cl[2] gas
∴ 0.07461 moles of electrons will produce,
\(\frac{0.07461}{2}\) = 0.037305 mol Cl[2]
∵ 1 mole of Cl[2] occupies 22.4 dm^3 at NTP
∴ 0.037305 mole of Cl[2] will occupy,
22.4 × 0.037305 = 0.8356 dm^3
∴ Volume of Cl[2] gas produced
= 0.8356 dm^3
= 0.8356 × 10^3 cm^3
= 835.6 cm^3
Ans. Mass of Mg produced = 0.8953 g
Volume of Cl[2(g)] at NTP produced = 835.6 cm^3
(13) How many Faradays would be required to plate out one mole of free metal from the following cations?
(a) Mg^2+ (b) Cr^3+ (c) Pb^2+ (d) Cu^+
Solution :
(a) Reduction half reaction :
∵ 1 mol electrons = 1 Faraday
Since to deposite 1 mol Mg, two moles of electrons are required,
∴ To plate one mole Mg, 2 Faradays of electricity will be required.
(b) Reduction half reaction :
\(\mathrm{Cr}_{(\mathrm{aq})}^{3+}+3 \mathrm{e}^{-} \longrightarrow \mathrm{Cr}_{(\mathrm{s})}\)
∴ 1 mol Cr will require 3 mol electrons, hence 3 Faradays of electricity are required.
(c) Reduction half reaction :
\(\mathrm{Pb}_{(\mathrm{aq})}^{2+}+2 \mathrm{e}^{-} \longrightarrow \mathrm{Pb}_{(\mathrm{s})}\)
∴ 1 mol Pb will require 2 mol electrons, hence 2 Faradays are required.
(d) Reduction half reaction :
\(\mathrm{Cu}_{(\mathrm{aq})}^{+}+\mathrm{e}^{-} \longrightarrow \mathrm{Cu}_{(\mathrm{s})}\)
∴ 1 mol Cu will require 1 mol electrons hence one Faraday of electricity is required.
(14) In a certain electrolysis experiment, 0.561 g of Zn is deposited in one cell containing ZnSO[4] solution. Calculate the mass of Cu deposited in another cell containing CuSO[4] solution in series
with ZnSO[4] cell. Molar masses of Zn and Cu are 65.4 g mol^-1 and 63.5 g mol^-1 respectively.
Solution :
Given : Mass of Zn deposited = W[Zn] = 0.561 g
Molar mass of Zn = 65.4 g mol^-1
Molar mass of Cu = 63.5 g mol^-1
Mass of Cu deposited = ?
Number of moles of Zn deposited
= \(\frac{\text { Mass of Zn deposited }}{\text { Molar mass of } \mathrm{Zn}}=\frac{0.561}{65.4}\)
= 8.578 × 10^-3 mol Zn
Reactions during electrolysis :
(i) Zn^2+ + 2e^– → Zn (Half reaction in ZnSO[4] cell)
(ii) Cu^2+ + 2e^– → Cu (Half reaction in CuSO[4] cell)
Since both half reactions involve 2 moles of electrons per mole of metal, moles of Cu deposited = moles of Zn deposited = 8.578 × 10^-3 mol
∴ Mass of Cu produced
= moles of Cu × molar mass of Cu
= 8.578 × 10^-3 × 63.5
= 0.5447 g Cu
Ans. Mass of Cu deposited = 0.5447 g
(15) Two electrolytic cells, one containing AlCl[3] solution and the other containing ZnSO[4] solution are connected in series. The same quantity of electricity is passed through the cells. Calculate
the amount of Zn deposited in ZnSO[4] cell if 1.2 g of Al are deposited in AlCl[3] cell. The molar masses of Al and Zn are 27 g mol^-1 and 65.4 g mol^-1 respectively.
Solution :
Given : Mass of Al deposited = 1.2 g
Molar mass of Al = 27 g mol^-1
Molar mass of Zn = 65.4 g mol^-1
Mass of zinc deposited = ω[Zn] = ?
Reduction reactions in electrolysis :
(i) \(\mathrm{Al}_{(\mathrm{aq})}^{3+}+3 \mathrm{e}^{-} \longrightarrow \mathrm{Al}_{(\mathrm{s})}\) (in AlCl[3] cell)
(ii) \(\mathrm{Zn}_{(\mathrm{aq})}^{2+}+2 \mathrm{e}^{-} \longrightarrow \mathrm{Zn}_{(\mathrm{s})}\) (in ZnSO[4] cell)
Number of moles of Al deposited = \(\frac{1.2}{27}\)
= 0.04444 mol.
From reaction (i),
∵ 1 mol Al requires 3 mol electrons
∴ 0.04444 mol Al requires 3 × 0.04444
= 0.1333 mol electrons
Hence 0.1333 moles of electrons are passed through both the cells in the series.
From reaction (ii),
∵ 2 moles of electrons deposit 1 mol Zn
∴ 0.1333 moles of electrons will deposit, \(\frac{0.1333}{2}\) = 0.06665 mol Zn
Mass of Zn deposited = 0.06665 × 65.4 = 4.36 g
Ans. Mass of Zn deposited = 4.36 g
(16) How much quantity of electricity in coulomb is required to deposit 1.346 × 10^-3 kg of Ag in 3.5 minutes from AgNO[3] solution ?
(Given : Molar mass of Ag is 108 × 10^-3 kg mol^-1)
Solution :
Given : Mass of Ag deposited = 1.346 × 10^-3 kg
Molar mass of Ag = 108 × 10^-3 kg mol^-1
Time = t = 3.5 × 60 s
∵ 108 × 10^-3 kg Ag requires 1 Faraday
1.346 × 10^-3 kg Ag will require,
\(\frac{1.346 \times 10^{-3}}{108 \times 10^{-3}}\) = 0.01246 F
∵ If F = 96500 C
∴ 0.01246 F = 96500 × 0.01246 = 1202 C
Ans. Amount of electricity required = 1202 C
(17) How many electrons will have a total charge of 1 Coulomb ?
Solution :
Given : Charge = 1 Coulomb
Number of electrons = ?
1 Faraday = 96500 C per mol electrons
∵ 96500 C electric charge is present on 1 mol electrons
∴ 1C charge is present on \(\frac{1}{96500}\) mol electrons
∴ Number of electrons = \(\frac{1}{96500}\) × 6.022 × 10^23
= 6.24 × 10^18 electrons
Ans. 1 Coulomb charge is present on 6.24 × 10^18 electrons.
(18) A constant electric current flows for 4 hours through two electrolytic cells connected in series. One contains AgNO[3]solution and second contains CuCl[2] solution. During this time, 4 grams of
Ag are deposited in the first cell.
(a) How many grams of Cu are deposited in the second cell?
(b) What is the current flowing in amperes? (Atomic mass : Cu = 63.5 gmol^-1; Ag = 107.9 gmol^-1)
Solution :
Given : Mass of Ag deposited = 4 g
Molar mass of Cu = 63.5 g mol^-1
Molar mass of Ag = 107.9 g mol^-1
Time = t = 4 hrs = 4 × 60 × 60 = 14400 s
Mass of Cu deposited = W[Cu] = ?
Current = I = ?
(a) Number of moles of Ag deposited
\(=\frac{\text { Mass of } \mathrm{Ag}}{\text { Molar mass of } \mathrm{Ag}}=\frac{4}{107.9}\)
= 0.03707 mol of Ag
Reactions of electrolysis :
(i) Ag^+ + e^– → Ag (Half reaction in AgNO[3] cell)
(ii) Cu^2+ + 2e^– → Cu (Half reaction in CuCl[2] cell)
From the half reactions, 1 mol of Ag requires 1 mol of electrons while 1 mol of Cu requires 2 mol of electrons.
∴ Moles of Cu deposited = \(\frac{0.03707}{2}\) = 0.01854 mol
∴ Mass of Cu produced = 0.01854 × 63.5 = 1.177 g
(b) From the reaction,
∵ 1 mol Ag^+ requires 1 mol electrons
∴ 0.03707 mol Ag will require 0.03707 mol electrons
∵ 1 mol electrons = 1 Faraday
∴ 0.03707 mol electrons = 0.03707 Faraday
∵ 1 Faraday = 96500 C
∴ 0.03707 Faraday
= 0.03707 × 96500 = 3577 C
∴ Quantity of electricity = Q = 3577 C.
Q = I × t
∴ I = \(\frac{\mathrm{Q}}{t}=\frac{3577}{14400}\) = 0.25 A
Ans. (a) Mass of Cu deposited = 1.177 g
(b) Current passed = 0.25 A
(19) The passage of 0.95 A current for 40 minutes deposited 0.7493 g Cu from CuSO[4] solution. Calculate the molar mass of Cu.
Solution :
Given : Electric current = I = 0.95 A
Time = t = 40 min = 40 × 60 = 2400 s
Mass of Cu deposited = 0.7493 g
Molar mass of Cu = ?
Reduction half reaction,
\(\mathrm{Cu}_{(\mathrm{aq})}^{2+}+2 \mathrm{e}^{-} \longrightarrow \mathrm{Cu}_{(\mathrm{s})}\)
Quantity of electricity = Q = I × t
= 0.95 × 2400
= 2280 C
Number of moles of electrons = \(\frac{2280}{96500}\)
= 0.02362 mol
∵ 2 mol electrons deposit 1 mol Cu
∴ 0.02362 mol electrons will deposit,
\(\frac{0.02362}{2}\) = 0.01181 mol Cu
0.01181 mol Cu weighs 0.7493 g
∴ 1 mol of Cu weigh, \(\frac{0.7493 \times 1}{0.01181}\) = 63.44 g
Hence molar mass of Cu 63.44 g mol^-1
Ans. Molar mass of Cu = 63.44 g mol^-1
(20) A quantity of 0.3 g of Cu was deposited from CuSO[4] solution by passing 4A through the solution for 3.8 min. Calculate the value of Faraday constant. (Atomic mass of Cu = 63.5 g mol^-1)
Solution :
Given : Mass of Cu deposited = 0.3 g
Electric current = I = 4A
Time = t = 3.8 min = 3.8 × 60 = 228 s
Value of Faraday = ?
Quantity of electricity passed = Q = I × t
= 4 × 228 = 912 C
Reduction half reaction,
\(\mathrm{Cu}_{(\mathrm{aq})}^{2+}+2 \mathrm{e}^{-} \longrightarrow \mathrm{Cu}_{(\mathrm{s})}\)
Number of moles of Cu deposited 0.3
= \(\frac{0.3}{63.5}\) = 0.004724 mol
From reduction half reaction,
1 mol Cu ≡ 2 mol electrons
∴ 0.004724 mol Cu = 2 × 0.004724
= 0.009448 mol electrons
∵ 0.009448 mol electrons = 912 C
∴ 1 mol electrons = \(\frac{912}{0.009448}\) = 96528 C
∵ 1 Faraday charge is equal to charge on 1 mol electrons
∴ 1 Faraday = 96528 C
Ans. 1 Faraday = 96528 C
(21) In the electrolysis of water, one of the half reactions is
2H^+[(aq)] + 2e^– → H[2(g)]
Calculate the volume of H[2] gas collected at 25 °C and 1 atm pressure by passing 2A for 1h through the solution. R = 0.08205 L atm K^-1 mol^-1.
Solution :
Given : Reduction half reaction :
Temperature = T = 273 + 25 = 298 K
Pressure = P = 1 atm
Electric current = I = 2A
Time = t = 1 hr = 1 × 60 × 60 = 3600 s
R = 0.08205 L atm K-1 mol-1
Volume of H[2] = V[H2] = ?
Quantity of electricity passed = Q = I × t
= 2 × 3600 = 7200 C
Number of moles of electrons = \(\frac{Q}{F}\)
\(\frac{7200}{96500}\) = 0.0746 mol
From the reaction,
∵ 2 mol electrons produces 1 mol H[2] gas
∴ 0.0746 mol electrons will produce \(\frac{0.0746}{2}\)
= 0.0373 mol H[2].
pV[H2] = nRT
∴ V[H2] = \(\frac{nRT}{P}\) = \(\frac{0.0373 \times 0.08205 \times 298}{1}\) = 0.912 L
Ans. Volume H[2] gas = 0.912 L
(22) Calculate the current strength and number of moles of electrons required to produce 2.369 × 10^-3 kg of Cu from CuSO[4] solution in one hour. (Molar mass of Cu is 63.5 g/mol)
Solution :
Given : Mass of Cu deposited = 2.369 × 10^-3 kg;
t = 1 hr = 3600 s
Molar mass of Cu = 63.5 g mol^-1
I = ?; Number of moles of electrons = ?
∵ For 63.5 × 10^-3 kg Cu Q = 2 × 96500 C
∴ For 2.369 × 10^-3 kg Cu
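The intermediate arithmetic is omitted; filling it in (a reconstruction):
Q = \(\frac{2.369 \times 10^{-3}}{63.5 \times 10^{-3}}\) × 2 × 96500 = 7200 C
I = \(\frac{Q}{t}\) = \(\frac{7200}{3600}\) = 2 A
Number of moles of electrons = \(\frac{7200}{96500}\) = 0.07461 mol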
Ans. I = 2A; Number of moles of electrons = 0.07461
Question 39.
Define : Galvanic cell or voltaic cell.
Galvanic or voltaic cell : An electrochemical cell in which a spontaneous chemical reaction produces electrical energy is called a galvanic or voltaic cell. In this, chemical energy is converted into electrical energy.
Example : Daniell cell.
Question 40.
Define : Half cell or Electrode.
Half cell or Electrode : It is a metal electrode dipped in the electrolytic solution and capable of establishing oxidation reduction equilibrium with one of the ions of electrolyte solution and
develop electrode potential. E.g. Zn in ZnSO[4] solution.
Question 41.
What are the functions of a salt bridge ?
The functions of a salt bridge are :
1. It maintains the electrical contact between the two electrode solutions of the half cells.
2. It prevents the mixing of electrode solutions.
3. It maintains the electrical neutrality in both the solutions of two half cells by a flow of ions.
4. It eliminates the liquid junction potential.
Question 42.
What are the conventions used to write galvanic cell or cell diagram (cell formula) ?
A galvanic cell or voltaic cell is represented by a short notation or diagram which includes electrodes, aqueous solutions of ions and other species that may or may not involve in the cell reaction.
The following conventions are used to represent the cell or write the cell notation :
(1) The metal electrodes or the inert electrodes like platinum are placed at the ends of the cell formula.
(2) The galvanic cell consists of two half cells or electrodes. The electrode on the extreme left hand side is anode where oxidation takes place and it carries negative (-) charge while extreme right
hand electrode is cathode where reduction takes place and it carries positive (+) charge.
(3) The gases or insoluble substances are placed in the interior positions adjacent to the metal electrode.
(4) A single vertical line is written between two phases like solid electrode and aqueous solution containing ions.
(5) A double vertical line is drawn between two solutions of two electrodes which indicates a salt bridge connecting them electrically.
(6) The concentration of solutions or ions or pressures of gases are written in brackets along with the substances in the cell.
(7) Different ions in the same solution are separated by a comma.
(8) Examples of electrochemical cells :
(i) Daniell cell is represented as, Zn[(s)] | \(\mathrm{Zn}_{(\mathrm{aq})}^{2+}\) (1 M) || \(\mathrm{Cu}_{(\mathrm{aq})}^{2+}\) (1 M) | Cu[(s)]
Question 43.
How to write cell reaction for a galvanic cell ?
(1) A galvanic cell consists of two half cells or electrodes.
(2) Write oxidation half reaction for left hand electrode which is an anode and reduction half reaction for right hand electrode which is a cathode.
(3) Balance the number of electrons in the oxidation and reduction reactions.
(4) By adding both the reactions, overall cell reaction is obtained.
(5) For example, consider following cell :
Question 44.
Why is anode in a galvanic cell considered to be negative?
1. According to IUPAC conventions, the electrode of a galvanic cell where de-electronation or oxidation takes place releasing electrons is called anode. Zn[(s)] → \(\mathrm{Zn}_{(\mathrm{aq})}^{2+}\) + 2e^–
2. The electrons released due to oxidation reaction are accumulated on the metal electrode surface charging it negatively.
Hence anode in the galvanic cell is considered to be negative.
Question 45.
Why is cathode in a galvanic cell considered to be positive electrode?
(1) According to IUPAC conventions, the electrode of the galvanic cell where electronation or reduction takes place is called cathode. In this, the electrons from the metal electrode are removed by
cations required for their reduction.
\(\mathrm{Cu}_{(\mathrm{aq})}^{2+}\) + 2e^– → Cu[(s)]
(2) Since the electrons are lost, the metal electrode acquires a positive charge.
Hence cathode in the galvanic cell is considered to be positive.
Question 46.
Give the cell reactions in the case of the following cells :
(3) Pt, H[2(g)] | H^+[(aq) ]|| Cl^–[(aq)] | Cl[2(g)], Pt
(4) Ni[(s)]|Ni^2+ (1 M) || Al^3+ (1 M) | Al[(s)]
Question 47.
Represent the half cells or electrodes for the following reactions :
Question 48.
Formulate a cell from the following electrode reactions :
(a) Cl[2(g)] + 2e^– → 2Cl^–[(aq)]
(b) 2I^–[(aq)] → I[2(s)] + 2e^–
(a) Cl[2(g)] + 2e^– → 2Cl^–[(aq)] (Reduction half reaction)
(b) 2I^–[(aq)] → I[2(s)] + 2e^– (Oxidation half reaction)
The galvanic cell is,
Pt | I[2(s)] | I^–[(aq)] (1 M) || Cl^–[(aq)] (1 M) | Cl[2](g, P[Cl2]) | Pt
Question 49.
Formulate a cell for each of the following reactions :
(a) \(\mathrm{Sn}_{\text {(aq) }}^{2+}\) + 2AgCl[(s)] → \(\mathrm{Sn}_{(\mathrm{aq})}^{4+}\) + 2Ag(s) + \(2 \mathrm{Cl}_{(\mathrm{aq})}^{-}\)
(b) Mg[(s)] + Br[2(l)] → \(\mathrm{Mg}_{(\mathrm{aq})}^{2+}+2 \mathrm{Br}_{(\mathrm{aq})}^{-}\)
(a) \(\mathrm{Sn}_{\text {(aq) }}^{2+}\) + 2AgCl[(s)] → \(\mathrm{Sn}_{(\mathrm{aq})}^{4+}\) + 2Ag(s) + \(2 \mathrm{Cl}_{(\mathrm{aq})}^{-}\)
The overall reaction takes place in two steps :
(i) \(\mathrm{Sn}_{\text {(aq) }}^{2+}\) → Sn^4+ + 2e^– (Oxidation half reaction)
(ii) 2AgCl[(s)] + 2e^– → 2Ag[(s)] + \(2 \mathrm{Cl}_{(\mathrm{aq})}^{-}\) (Reduction half reaction)
Hence the cell is,
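(The cell formula is not reproduced in the source; based on the two half reactions above, with an inert Pt electrode carrying the Sn^2+/Sn^4+ couple, it may be written as)
Pt | \(\mathrm{Sn}_{(\mathrm{aq})}^{2+}\), \(\mathrm{Sn}_{(\mathrm{aq})}^{4+}\) || \(\mathrm{Cl}_{(\mathrm{aq})}^{-}\) | AgCl[(s)] | Ag[(s)]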
(b) Mg[(s)] + Br[2(l)] → \(\mathrm{Mg}_{(\mathrm{aq})}^{2+}+2 \mathrm{Br}_{(\mathrm{aq})}^{-}\)
The overall reaction takes place in two steps :
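(The half reactions and cell formula are not reproduced in the source; they follow the same pattern as part (a):)
(i) Mg[(s)] → \(\mathrm{Mg}_{(\mathrm{aq})}^{2+}\) + 2e^– (Oxidation half reaction)
(ii) Br[2(l)] + 2e^– → \(2 \mathrm{Br}_{(\mathrm{aq})}^{-}\) (Reduction half reaction)
Hence the cell is, Mg[(s)] | \(\mathrm{Mg}_{(\mathrm{aq})}^{2+}\) || \(\mathrm{Br}_{(\mathrm{aq})}^{-}\) | Br[2(l)] | Pt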
Question 50.
What is electrode potential?
(1) Electrode potential : It is defined as the difference of electrical potential established due to electrode half reaction between metal electrode and the solution around it at equilibrium at
constant temperature.
(2) Explanation : When a metal is immersed into a solution containing its ions there arises oxidation (or reduction) reaction involving a release of electrons (or gain of electrons). This gives rise
to the formation of an electrical double layer, consisting of a charged metal surface and an ionic layer. The potential across this double layer, i.e., between the metal and the solution, is called the electrode potential.
Question 51.
Define :
(1) Oxidation potential,
(2) Reduction potential.
(1) Oxidation potential : It is defined as the difference of electrical potential between metal electrode and the solution around it at equilibrium developed due to oxidation reaction at anode and at
constant temperature.
(2) Reduction potential : It is defined as the difference of electrical potential between metal electrode and the solution around it at equilibrium developed due to reduction reaction at cathode and
at constant temperature.
Question 52.
What is a standard state of a substance ?
The standard state of a substance is that state in which the substance has unit activity or concentration at 25 °C. i.e., For solution having concentration 1 molar, gas at 1 atm, pure liquids or
solids are said to be in their standard states.
Question 53.
Define the following terms :
(1) Standard electrode potential
(2) Standard oxidation potential
(3) Standard reduction potential.
(1) Standard electrode potential : It is defined as the difference of electrical potential between metal electrode and the solution around it equilibrium when all the substances involved in the
electrode reaction are in their standard states of unit activity or concentration at constant temperature.
(2) Standard oxidation potential : It is defined as the difference of electrical potential between metal electrode and the solution around it at equilibrium due to oxidation reaction, when all the
substances involved in the oxidation reaction are in their standard states of unit activity or concentration at constant temperature.
(3) Standard reduction potential : It is defined as the difference of electrical potential between metal electrode and the solution around it at equilibrium due to reduction reaction, when all the
substances involved in the reduction reaction are in their standard states of unit activity or concentration at constant temperature.
Question 54.
What is the standard potential of an electrode according to IUPAC convention?
Standard reduction potential : According to IUPAC convention, the standard potential of an electrode due to reduction reaction at 298 K is taken as the standard reduction potential. In this, the active mass of the substance has unit value.
Question 55.
What is cell potential or emf of a cell ?
Cell potential or emf of a cell : It is defined as the potential difference between the two electrodes which is responsible for the external flow of electrons from the left hand electrode at lower potential (anode) to the right hand electrode at higher potential (cathode), when they are connected to form an electrochemical or galvanic cell.
Since there is oxidation reaction at the left hand electrode (LHE) or anode and reduction reaction at the right hand electrode (RHE) or cathode, the emf of the galvanic cell, E[cell], is given by
E[cell] = (E[oxi])[anode] + (E[red])[cathode]
Since by IUPAC conventions, generally reduction potentials are used, hence, for the given cell,
(∵ E[oxi] = -E[red])
∴ E[cell] = (E[red])[cathode] – (E[red])[anode]
Similarly, standard emf of the cell, E^0[cell] is given by
E^0[cell] = (E^0[red])[cathode] – (E^0[red])[anode]
Question 56.
Explain dependence of cell potential on concentration.
Explain Nernst equation for cell potential.
Consider following general reaction taking place in the galvanic cell.
aA + bB → cC + dD
The cell voltage is given by,
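(standard form of the Nernst equation, supplied here since the original equation is not reproduced in the source)
E[cell] = E^0[cell] – \(\frac{RT}{nF}\) ln \(\frac{[\mathrm{C}]^{c}[\mathrm{D}]^{d}}{[\mathrm{A}]^{a}[\mathrm{B}]^{b}}\) = E^0[cell] – \(\frac{2.303\,RT}{nF}\) log \(\frac{[\mathrm{C}]^{c}[\mathrm{D}]^{d}}{[\mathrm{A}]^{a}[\mathrm{B}]^{b}}\), where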
T → temperature
R → Gas constant
F → Faraday
n → Number of electrons in the redox cell reaction.
This is Nernst equation for cell potential. It is used to calculate cell potential and electrode potentials.
Question 57.
State (or write) Nernst equation for the electrode potential and explain the terms involved.
The Nernst equation for the single electrode reduction potential for a given ionic concentration in the solution in the case, \(M_{(a q)}^{n+}\) + ne^– → M[(s)] is given by
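(standard form, supplied here since the original equation is not reproduced in the source)
\(E_{\mathrm{M}^{n+} / \mathrm{M}} = E_{\mathrm{M}^{n+} / \mathrm{M}}^{0} - \frac{RT}{nF} \ln \frac{1}{[\mathrm{M}^{n+}]}\), where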
\(\mathrm{E}_{\mathrm{M}^{\mathrm{n}+} / \mathrm{M}}\) is the single electrode potential,
\(E_{\mathrm{M}^{n+} / \mathrm{M}}^{0}\) is the standard reduction electrode potential,
R is the gas constant = 8.314 JK^-1 mol^-1
T is the absolute temperature,
n is the number of electrons involved in the reaction,
F is Faraday (96500 C)
[Mn^+] is the molar concentration of ions.
Question 58.
Obtain Nernst equation for the following cell :
Electrode reactions and a cell reaction for the given cell are,
Here, n = 2
By Nernst equation, the cell potential is given by,
Question 59.
Obtain Nernst equation for the electrode potential for the electrode, \(\mathrm{Zn}_{(\mathrm{aq})}^{2+} \mid \mathrm{Zn}_{(\mathrm{s})}\).
For the electrode, \(\mathrm{Zn}_{(\mathrm{aq})}^{2+} \mid \mathrm{Zn}_{(\mathrm{s})}\),
the reduction reaction is,
\(\mathrm{Zn}_{(\mathrm{aq})}^{2+}+2 \mathrm{e}^{-} \longrightarrow \mathrm{Zn}_{(\mathrm{s})}\) ∴ n = 2
By Nernst equation, the reduction electrode potential is given by,
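(standard form, supplied here since the original equation is not reproduced in the source)
\(E_{\mathrm{Zn}^{2+} / \mathrm{Zn}} = E_{\mathrm{Zn}^{2+} / \mathrm{Zn}}^{0} - \frac{0.0592}{2} \log \frac{1}{[\mathrm{Zn}^{2+}]}\) at 298 K,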
where E^0[zn^2+/zn] is the standard electrode potential of zinc electrode.
Question 60.
Obtain a relation between cell potential and Gibbs energy for the cell reaction.
Consider a galvanic cell which involves n number of electrons in the overall cell reaction. Since one mole of electrons involve the electric charge equal to one Faraday (F) which is equal to 96500 C,
the total charge involved in the reaction is,
Electric charge = n × F
If Ecell is the cell potential, then Electrical work = n × F × E[cell]
According to thermodynamics, electric work is equal to decrease in Gibbs energy, -ΔG, we can write,
Electric work = n × F × E[cell] = -ΔG
∴ ΔG = -nFE[cell]
Under standard conditions, we can write
∴ ΔG^0 = -nF\(E_{\text {cell }}^{0}\)
where \(E_{\text {cell }}^{0}\) is the standard cell potential and ΔG^0 is the standard Gibbs free energy change.
Question 61.
Write Nernst Equation for the following reactions :
(a) Cr[(s)] + 3Fe^3+[(aq)] → Cr^3+[(aq)] + 3Fe^2+[(aq)]
(b) Al^3+[(aq)] + 3e^– → Al[(s)]
(a) Cr[(s)] + 3Fe^3+[(aq)] → Cr^3+[(aq)] + 3Fe^2+[(aq)]
The cell formulation is,
Cr[(s)]|Cr^3+[(aq) ]|| Fe^3+[(aq)], Fe^2+[(aq)]| Pt
Hence cell potential is,
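(supplied here, since the equations are not reproduced in the source)
E[cell] = E^0[cell] – \(\frac{0.0592}{3}\) log \(\frac{[\mathrm{Cr}^{3+}][\mathrm{Fe}^{2+}]^{3}}{[\mathrm{Fe}^{3+}]^{3}}\) (n = 3)
(b) Al^3+[(aq)] + 3e^– → Al[(s)] : the electrode potential is
\(E_{\mathrm{Al}^{3+} / \mathrm{Al}} = E_{\mathrm{Al}^{3+} / \mathrm{Al}}^{0} - \frac{0.0592}{3} \log \frac{1}{[\mathrm{Al}^{3+}]}\)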
Question 62.
A single electrode potential can’t be measured but the cell potential can be measured. Explain.
(1) According to Nernst theory, electrode potential is the potential difference between the metal and the ionic layer around it at equilibrium, i.e. the potential across the electric double layer.
(2) For measuring the single electrode potential, one part of the double layer, that is metallic layer can be connected to the potentiometer but not the ionic layer. Hence, single electrode potential
can’t be measured experimentally.
(3) When an electrochemical cell is developed by combining two half cells or electrodes, they can be connected to the potentiometer and the potential difference or cell potential can be measured.
E[cell] = E[2] – E[1]
where E[1] and E[2] are reduction potentials of two electrodes.
(4) If one of the electrode potentials is known or arbitrarily assumed and E[cell] is measured by potentiometer, then potential of another electrode can be obtained. Therefore it is necessary to
choose a reference electrode with arbitrarily fixed potential and measure the potentials of other electrodes.
(5) Therefore Standard Hydrogen Electrode (SHE) is selected assuming arbitrary potential 0.0 volt. Hence potentials of all other electrodes are referred to as hydrogen scale potentials.
Question 63.
Describe the construction and working of the standard hydrogen electrode (S.H.E.). Give its advantages and disadvantages.
What is the standard hydrogen electrode
Primary reference electrode? Write the construction and working of it.
A single electrode potential cannot be measured, but the cell potential can be measured experimentally. Hence, it is necessary to have a reference electrode. S.H.E. is a primary reference electrode.
(1) Construction :
(1) The standard hydrogen electrode (S.H.E.) consists of a glass tube at the end of which a piece of platinised platinum foil is attached as shown in Fig. 5.14. Around this plate there is an outer
jacket of glass which has a side inlet through which pure and dry hydrogen gas is bubbled at one atmosphere pressure. The inner tube is filled with a little mercury and a copper wire is dipped into
it. This provides an electrical contact with the platinum foil. The outer jacket ends into a broad opening.
(2) The whole assembly is kept immersed in a solution containing hydrogen ions (H^+) of unit activity.
(3) This electrode is arbitrarily assigned zero potential.
(4) The platinised platinum foil is used to provide an electrical contact for the electrode. This permits rapid establishment of the equilibrium between the hydrogen gas adsorbed by the metal and the
hydrogen ions in solution.
(2) Representation of S.H.E. :
H^+ (1 M) | H[2] (g, 1 atm) | Pt
(3) Working :
Reduction : H^+[(aq)] + e^– ⇌ \(\frac {1}{2}\)H[2(g)] E^0 = 0.00 V
H[2] gas in contact with H^+[(aq)] ions attains an equilibrium establishing a potential.
(4) Applications of SHE : A reversible galvanic cell with the experimental (indicator) electrode, Zn^2+ (1M) | Zn[(s)] and SHE can be developed as follows :
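(The cell diagram is not reproduced in the source; it is essentially the cell shown in Question 65:)
Zn | \(\mathrm{Zn}_{(\mathrm{aq})}^{2+}\) (1 M) || H^+ (1 M) | H[2](g, 1 atm) | Pt, for which E^0[cell] = E^0[SHE] – E^0[Zn^2+/Zn] = – E^0[Zn^2+/Zn].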
Thus the potential can be directly obtained.
(5) Disadvantages (Drawbacks or Difficulties) :
• It is difficult to construct and handle SHE.
• Pure and dry H[2] gas cannot be obtained.
• Pressure of H[2] gas cannot be maintained exactly at 1 atmosphere.
• The active mass or concentration of H^+ from HCl cannot be maintained exactly unity.
Question 64.
How is the potential of hydrogen electrode obtained?
Hydrogen gas electrode is represented as,
H^+[(aq)] | H[2] (g, P[H2]) | Pt
Electrode reduction reaction is,
2H^+[(aq)] + 2e^– → H[2(g)]
By Nernst equation, the reduction potential is,
If H[2] gas is passed at 1 atm, then P[H2] = 1 atm
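(writing out the omitted steps) By the Nernst equation,
\(E_{\mathrm{H}^{+} / \mathrm{H}_{2}} = E_{\mathrm{H}^{+} / \mathrm{H}_{2}}^{0} - \frac{0.0592}{2} \log \frac{P_{\mathrm{H}_{2}}}{[\mathrm{H}^{+}]^{2}} = 0 - \frac{0.0592}{2} \log \frac{1}{[\mathrm{H}^{+}]^{2}} = 0.0592 \log [\mathrm{H}^{+}] = -0.0592 \; \mathrm{pH}\)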
Question 65.
Draw the diagram for the determination of standard electrode potential with SHE.
Consider the following cell :
Zn | Zn^2+[(aq) ]|| HCl | H[2](g, 1atm) | Pt
Question 66.
A voltaic cell consisting of Fe^2+[(aq)]|Fe[(s)] and Bi^3+[(aq)] | Bi[(s)] electrodes is constructed. When the circuit is closed, mass of Fe electrode decreases and that of Bi electrode increases.
(a) Write cell formula, (b) Which electrode is cathode and which electrode is anode ? (c) Write electrode reactions and overall cell reaction.
(a) Since the mass of Fe electrode decreases, it undergoes oxidation and it is an anode or an oxidation electrode while as the mass of Bi electrode increases, there is a reduction of Bi^3+ to Bi and
it is cathode or a reduction electrode. Hence the cell formula is,
\(\mathrm{Fe}_{(\mathrm{s})}\left|\mathrm{Fe}_{\mathrm{(aq})}^{2+}(1 \mathrm{M}) \| \mathrm{Bi}_{(\mathrm{aq})}^{3+}(1 \mathrm{M})\right| \mathrm{Bi}\)
(b) The left hand electrode is an anode and right hand electrode is a cathode.
(c) Reactions :
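(written out here, since the reaction scheme is not reproduced in the source)
Oxidation at anode : [Fe[(s)] → \(\mathrm{Fe}_{(\mathrm{aq})}^{2+}\) + 2e^–] × 3
Reduction at cathode : [\(\mathrm{Bi}_{(\mathrm{aq})}^{3+}\) + 3e^– → Bi[(s)]] × 2
Overall cell reaction : 3Fe[(s)] + 2\(\mathrm{Bi}_{(\mathrm{aq})}^{3+}\) → 3\(\mathrm{Fe}_{(\mathrm{aq})}^{2+}\) + 2Bi[(s)]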
Solved Examples 5.7 – 5.9
Question 67.
Solve the following :
(1) Write the reaction and calculate the potential of the half cell,
\(\mathbf{Z n}_{(\mathbf{a q})}^{2+}\) (0.2M) | Zn. (E^0[Zn^2+/Zn] = – 0.76 V).
Solution :
Given : E^0[Zn^2+/Zn] = -0.76 V
Concentration of Zn^2+ = [Zn^2+] = 0.2 M
E[Zn^2+/Zn] = ?
Reduction reaction for the half cell : \(\mathrm{Zn}_{(\mathrm{aq})}^{2+}\) + 2e^– → Zn[(s)]; n = 2
E[Zn^2+/Zn] = E^0[Zn^2+/Zn] + \(\frac{0.0592}{2}\) log [Zn^2+] = – 0.76 + 0.0296 log (0.2)
= – 0.76 + 0.0296 (-0.6990)
= -0.76 – 0.02069
= -0.78069 V
Ans. E[Zn^2+/Zn] = -0.78069 V
(2) Write a reaction and calculate the potential of the electrode, \(\mathrm{Cl}_{(\mathrm{aq})}^{-}\) (0.05 M) | Cl[2] (g, 1 atm) | Pt E^0[Cl2/Cl^–] = 1.36 V.
Solution :
Given : Reduction reaction : Cl[2(g)] + 2e^– → \(2 \mathrm{Cl}_{(\mathrm{aq})}^{-}\); n = 2
E[Cl2/Cl^–] = E^0[Cl2/Cl^–] – \(\frac{0.0592}{2}\) log \(\frac{[\mathrm{Cl}^{-}]^{2}}{P_{\mathrm{Cl}_{2}}}\) = 1.36 – 0.0592 log (0.05)
= 1.36 – 0.0592 (- 2 + 0.6990)
= 1.36 – 0.0592 (-1.3010)
= 1.36 + 0.077
= 1.437 V
Ans. Potential of the electrode = 1.437 V
(3) Calculate the potential of the electrode,
pH = 4.5 | H[2] (g, 1 atm) |Pt.
Solution :
Given : pH = 4.5
∴ E[H^+/H2] = -0.0592 pH
= -0.0592 × 4.5
= -0.2664 V
Ans. E[H^+/H2] = -0.2664 V
(4) If the standard cell potential of Daniell cell is 1.1 V, calculate standard free energy change for the cell reaction.
Solution :
Given : Daniell cell : E^0[cell] = 1.1 V; n = 2
ΔG^0 = -nFE^0[cell]
= – 2 × 96500 × 1.1
= -212300 J
= -212.3 kJ
Ans. Standard free energy change = ΔG^0
= -212.3 kJ
(5) Write balanced equations for the half reactions and calculate the reduction potentials at 25 °C for the following half cells :
(a) Cl^– (1.2 M) | Cl[2](g, 3.6 atm) E^0 = 1.36 V
(b) Fe^2+ (2 M) | Fe[(s)] E^0 = – 0.44 V
Solution :
(a) Given : Half cell,
\(\mathrm{Cl}_{(\mathrm{aq})}^{-}\) (1.2 M) | Cl[2](g, 3.6 atm)|Pt
E^0[Cl2/Cl^–] = 1.36 V
The reduction reaction : Cl[2(g)] + 2e^– → \(2 \mathrm{Cl}_{(\mathrm{aq})}^{-}\); n = 2
E = E^0 – \(\frac{0.0592}{2}\) log \(\frac{[\mathrm{Cl}^{-}]^{2}}{P_{\mathrm{Cl}_{2}}}\) = 1.36 – 0.0296 log \(\frac{(1.2)^{2}}{3.6}\) = 1.36 – 0.0296 log (0.4)
= 1.36 – 0.0296 (-0.3979)
= 1.36 + 0.01178
= 1.37178
≅ 1.372 V
(b) Given: Half cell, \(\mathrm{Fe}_{(\mathrm{aq})}^{2+}\) (2M) |Fe[(s)]
E^0 [Fe^2+/Fe] = -0.44 V
The reduction reaction : \(\mathrm{Fe}_{(\mathrm{aq})}^{2+}\) + 2e^– → Fe[(s)]; n = 2
E = E^0 – \(\frac{0.0592}{2}\) log \(\frac{1}{[\mathrm{Fe}^{2+}]}\) = – 0.44 – 0.0296 log (0.5)
= – 0.44 – 0.0296 × (- 0.3010)
= -0.44 + 0.00891
= -0.43109 V
Ans. (a) Half reaction : Cl[2(g)] + 2e^– → \(2 \mathrm{Cl}_{\text {(aq) }}^{-}\)
E[Cell] = 1.372 V
(b) \(\mathrm{Fe}_{(\mathrm{aq})}^{2+}\) + 2e^– → Fe[(s)]
E[Cell ]= -0.43109 V.
(6) Using Nernst equation, calculate the potentials for the following half reactions :
Solution :
(a) Given :
= 0.535 – 0.0296 [ – 4 + 0.9542]
= 0.535 – 0.0296 [-3.0458]
= 0.535 + 0.0902
= 0.6252 V
Ans. (a) Potential of the half cell = 0.6252 V
(b) Potential of the half cell = 0.7118 V.
(7) Write the cell reaction and calculate the standard potential of the cell,
Ni[(s)] | Ni^2+(1 M) || Cl^–(1M) | Cl[2] (g, 1 atm) | Pt
E^0[Cl2] = 1.36 V and E^0Ni = – 0.25 V.
Solution :
= 1.36 – (-0.25)
= 1.36+ 0.25 = 1.61V
Ans. Cell reaction : Ni[(s)] + Cl[2(g)] → \(\mathrm{Ni}_{(\mathrm{aq})}^{2+}+2 \mathrm{Cl}_{(\mathrm{aq})}^{-}\)
E^0[Cell] = 1.61 V
(8) Write the cell reaction and calculate cell potential and standard free energy change for a cell reaction in the following cell :
\(\mathbf{A l}_{(\mathrm{s})}\left|\mathbf{A l}_{(\mathbf{a q})}^{3+}(1 \mathbf{M}) \| \mathbf{C d}_{(\mathbf{a q})}^{2+}(1 \mathrm{M})\right| \mathbf{C} d\)
E^0[Al^3+/Al] = -1-66 V and E^0[cd^2+/cd] = -0.403 V
Solution :
Since concentrations of ions are 1 M each, it is a standard cell, hence the cell potential is E^0[Cell].
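(the intermediate step, reconstructed from the given standard potentials)
E^0[Cell] = E^0[cd^2+/cd] – E^0[Al^3+/Al] = -0.403 – (-1.66) = 1.257 V, with n = 6 for the reaction 2Al[(s)] + 3\(\mathrm{Cd}_{(\mathrm{aq})}^{2+}\) → 2\(\mathrm{Al}_{(\mathrm{aq})}^{3+}\) + 3Cd[(s)].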
Standard free energy change ΔG^0 is given by,
ΔG^0= – nFE^0[Cell]
= -6 × 96500 × 1.257
= – 727800 J
= -727.8 kJ
Ans. Cell reaction : 2Al[(s)] + 3\(\mathrm{Cd}_{(\mathrm{aq})}^{2+}\) → 2\(\mathrm{Al}_{(\mathrm{aq})}^{3+}\) + 3Cd[(s)]; E^0[Cell] = 1.257 V; ΔG^0 = -727.8 kJ
(9) Write the cell reaction and calculate cell potential and the standard free energy change for the cell reaction in the following cell :
Pt | H[2] (g, 1 atm) | \(\mathbf{H}_{(\mathrm{aq})}^{+}\)(1M) || \(\mathrm{Cu}_{\text {(aq) }}^{2+}\) (1M) | Cu[(s)].
Mention anode and cathode and direction of flow of electrons in the external circuit. (E^0[Cu^2+/Cu] = 0.337 V)
Solution :
Given : E^0[Cu^2+/Cu] = 0.337 V;
E^0[H^+/H2] = E^0[SHE] = 0.0V
Pt | H[2](g, 1 atm) | \(\mathbf{H}_{(\mathrm{aq})}^{+}\) (1M) || \(\mathrm{Cu}_{\text {(aq) }}^{2+}\) (1M) | Cu[(s)]
Anode : Hydrogen gas electrode (LHE)
Cathode : Copper electrode (RHE)
E^0[Cell] = E^0[Cu^2+/Cu] – E^0[H^+/H2]
= 0.337 – (0.0)
= 0.337 V
ΔG^0 = – nFE^0 = -2 × 96500 × 0.337
= – 65040J
= – 65.04 kJ
Electrons in the external circuit will flow from (LHE) hydrogen gas electrode to (RHE) copper electrode.
Ans. Cell reaction : H[2(g)] + \(\mathrm{Cu}_{(\mathrm{aq})}^{2+}\) → 2 \(\mathrm{H}_{(\mathrm{aq})}^{+}\) + Cu[(s)]
Cell potential = E^0[Cell] = 0.337 V
ΔG^0 = -65.04 kJ
(10) Calculate the reduction potential of the electrode, Zn^2+ (0.02 M) | Zn[(s)]. E^0[Zn^++/Zn] = – 0.76 V.
Solution :
Given :E^0[red] = E^0[Zn^++/Zn] = -0.76 V;
Concentration of \(\mathrm{Zn}_{(\mathrm{aq})}^{2+}\) = [Zn^2+] = 0.02 M
The reduction reaction for the electrode,
\(\mathrm{Zn}_{(\mathrm{aq})}^{2+}\) +2e^– → Zn[(s)]; ∴ n = 2
The reduction potential is given by,
E[Zn^2+/Zn] = E^0[Zn^2+/Zn] + \(\frac{0.0592}{2}\) log [Zn^2+] = – 0.76 + 0.0296 log (0.02)
= – 0.76 + 0.0296 (- 1.6990)
= -0.76 – 0.0296 × 1.6990
= -0.76 – 0.0503
= -0.8103 V
Ans. E[red] = E[Zn^2+/Zn] = -0.8103 V
(11) Calculate the potential of the following cell at 25 °C :
Zn | Zn^2+(0.6 M) ||H^+(1.2 M) | H[2] (g, 1 atm) | Pt
E°Zn2+/Zn = -0.763 V
Solution :
Given : E^0[Zn^2+/Zn] = -0.763 V;
Concentrations : [Zn^2+] = 0.6 M; [H^+] = 1.2 M
[H[2]][g ]= 1 atm
Cell potential = E[cell] = ?
= 0.763 – 0.0296 × (- 0.3801)
= 0.763 + 0.01125
= 0.77425 V
Ans. Cell potential = E[cell] = 0.77425 V
(12) The following redox reaction occurs in a galvanic cell.
2Al[(s)] + 3Fe^2+(1 M) → 2Al^3+(1 M)+ 3Fe[(s)]
(a) Write the cell notation.
(b) Identify anode and cathode
(c) Calculate E^0[cell] if E^0[anode] = – 1.66 V and E^0[cathode] = – 0 44 V
(d) Calculate ΔG^0 for the reaction.
Solution :
(a) In the cell reaction, Al is oxidised from 0 to +3 while Fe^2+ is reduced from +2 to 0. Hence the cell notation is,
(b) Anode : Al electrode at LHE
Cathode : Fe electrode at RHE
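(c) (the omitted calculation) E^0[cell] = E^0[cathode] – E^0[anode] = -0.44 – (-1.66) = 1.22 V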
(d) The standard free energy change ΔG^0 is given by,
ΔG^0 = – nFE^0[cell]
= – 6 × 96500 × 1.22
= – 706380 J
= – 706.4 kJ
Ans. (a) Cell notation :
\(\mathrm{Al}_{(\mathrm{s})}\left|\mathrm{Al}_{(\mathrm{aq})}^{3+}(1 \mathrm{M}) \| \mathrm{Fe}_{(\mathrm{aq})}^{2+}(1 \mathrm{M})\right| \mathrm{Fe}_{(\mathrm{s})}\)
(b) Anode : Al; Cathode : Fe
(c) E^0[cell] = 1.22 V
(d) ΔG^0 = – 706.4 kJ
(13) Construct a cell consisting of \(\mathbf{N i}_{(\mathrm{aq})}^{2+}\) | Ni(s) half cell and H^+ | H[2(g)] | Pt half cell.
(a) Write the cell reaction
(b) Calculate emf of the cell if [Ni^2+] = 0.1M,
P[H2] = 1 atm [H^+] = 0.05 M and
E^0[Ni] = – 0.257 V.
Solution :
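(the cell set-up and substitution, reconstructed from the given data)
(a) Cell : Ni[(s)] | \(\mathrm{Ni}_{(\mathrm{aq})}^{2+}\) (0.1 M) || H^+ (0.05 M) | H[2](g, 1 atm) | Pt; cell reaction : Ni[(s)] + 2H^+[(aq)] → \(\mathrm{Ni}_{(\mathrm{aq})}^{2+}\) + H[2(g)]; n = 2
(b) E^0[cell] = E^0[H^+/H2] – E^0[Ni] = 0 – (-0.257) = 0.257 V
E[cell] = E^0[cell] – \(\frac{0.0592}{2}\) log \(\frac{[\mathrm{Ni}^{2+}] \times P_{\mathrm{H}_{2}}}{[\mathrm{H}^{+}]^{2}}\) = 0.257 – 0.0296 log \(\frac{0.1 \times 1}{(0.05)^{2}}\) = 0.257 – 0.0296 log 40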
= 0.257 – 0.0296 × 1.6020
= 0.257 – 0.04742
= 0.20958
≅ 0.2096 V
(14) Calculate the cell potential of the following cell at 25°C,
Standard reduction potentials (SRP) of Zn and Cu are -0.76 V and 0.334 V respectively.
(15) Set up the cell consisting of \(\mathbf{H}_{\text {(aq) }}^{+} \mid \mathbf{H}_{2(\mathrm{~g})}\) and \(\mathbf{P b}_{(\mathbf{a q})}^{2+}\) | Pb[(s)] electrodes. Calculate the emf at 25 °C of
the cell if [Pb^2+] = 0.1 M,
[H^+] = 0.5 M and hydrogen gas is at 2 atm pressure. E^0[pb^2+/pb] = – 0.126 V.
Solution :
Given : Half cells :
Concentrations : [H^+] = 0.5 M; [Pb^2+] = 0.1M;
[H[2]][g] = P[H2] = 2 atm; E^0[H^+/H2] = E[SHE] = 0.0 V;
E^0[pb^2+/pb] = -0.126 V
Since E^0[pb^2+/pb] (reduction) < E^0[H^+/H2] the Pb electrode is anode and hydrogen gas electrode is cathode.
The cell formulation :
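(reconstructed from the given data)
Pb[(s)] | \(\mathrm{Pb}_{(\mathrm{aq})}^{2+}\) (0.1 M) || H^+ (0.5 M) | H[2](g, 2 atm) | Pt; n = 2
E[cell] = E^0[cell] – \(\frac{0.0592}{2}\) log \(\frac{[\mathrm{Pb}^{2+}] \times P_{\mathrm{H}_{2}}}{[\mathrm{H}^{+}]^{2}}\) = [0 – (-0.126)] – 0.0296 log \(\frac{0.1 \times 2}{(0.5)^{2}}\) = 0.126 – 0.0296 log (0.8)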
= 0.126 – 0.0296 (- 0.0969)
= 0.126 + 0.002868
= 0.128868
≅ 0.1289 V
Ans. E[cell] = 0.1289 V
(16) Consider a galvanic cell that uses the half reactions,
2H^+[(aq)] + 2e^– → H[2(g)]
Mg^2+[(aq)] + 2e^– → Mg[(s)]
Write balanced equation for the cell reaction. Calculate E^0[cell], E[cell] and ΔG^0 if concentrations are 1M each and P[H2] = 10 atm
E^0[Mg^2+/Mg] = -2.37 V.
Solution :
The standard free energy change ΔG^0 is given by
ΔG^0= – nFE^0[cell]
= – 2 × 96500 × 2.37
= -457400 J
= – 457.4 kJ
Ans. E^0[cell] = 2.37 V; E[cell] = 2.3404 V;
ΔG^0 = -457.4 kJ
(17) Calculate E[cell] and ΔG for the following at 28 °C : Mg[(s)] + Sn^2+ (0.04M) → Mg^2+ (0.06M) + Sn[(s)]
E^0[cell] = 2.23 V
Is the reaction spontaneous ?
Mg[(s)] + Sn^2+ (0.04 M) → Mg^2+ (0.06 M) + Sn
[Sn^2+] = 0.04 M
[Mg^2+] = 0.06 M
E^0[cell] = 2.23V
E[cell] = ?
ΔG = ?
2.23 – 0.0296 × 0.1761
= 2.23 – 0.005213
= 2.224V
ΔG = – nFE
= – 2 × 96500 × 2.224
= – 4.292 × 10^5 J
= -429.2 kJ
Since ΔG is negative, the electrochemical reaction is spontaneous.
(18) The standard potentials for Sn^2+/Sn and Fe^2+/Fe half reactions are -0.136 V and -0.440 V respectively. At what relative concentrations of Sn^2+ and Fe^2+ will these have the same reduction potential ?
Solution :
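(the omitted working) For equal reduction potentials,
E^0[Sn^2+/Sn] + \(\frac{0.0592}{2}\) log [Sn^2+] = E^0[Fe^2+/Fe] + \(\frac{0.0592}{2}\) log [Fe^2+]
∴ 0.0296 log \(\frac{[\mathrm{Sn}^{2+}]}{[\mathrm{Fe}^{2+}]}\) = -0.440 – (-0.136) = -0.304
∴ log \(\frac{[\mathrm{Sn}^{2+}]}{[\mathrm{Fe}^{2+}]}\) = -10.27, i.e. \(\frac{[\mathrm{Sn}^{2+}]}{[\mathrm{Fe}^{2+}]}\) = 5.37 × 10^-11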
Hence when relative concentrations of Sn^2+ and Fe^2+ i.e., [Sn^2+]/[Fe^2+] = 5.37 × 10^-11, both the electrodes will have same potential.
(19) Write the cell reaction and calculate the emf of the cell at 25 °C.
Cr[(s)] | Cr^3+(0.0065 M) || Co^2+(0.012 M) | Co[(s)]
E^0[Co] = – 0.280 V, E^0[Cr] = – 0.74 V
What is ΔG for the cell reaction ?
Solution :
Given :
By Nernst equation,
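(writing out the omitted substitution)
E^0[Cell] = E^0[Co] – E^0[Cr] = -0.280 – (-0.74) = 0.46 V; n = 6
E[Cell] = E^0[Cell] – \(\frac{0.0592}{6}\) log \(\frac{[\mathrm{Cr}^{3+}]^{2}}{[\mathrm{Co}^{2+}]^{3}}\) = 0.46 – 0.009867 log \(\frac{(0.0065)^{2}}{(0.012)^{3}}\) = 0.46 – 0.009867 × log (24.45) = 0.46 – 0.009867 × 1.388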
∴ E[Cell] = 0.46 – 0.0137 = 0.4463 V
ΔG = -nFE[Cell]
= – 6 × 96500 × 0.4463
= – 258407 J
= – 258.4 kJ
Ans. Cell reaction :
\(2 \mathrm{Cr}_{(\mathrm{s})}+3 \mathrm{Co}_{(\mathrm{aq})}^{2+} \rightarrow 2 \mathrm{Cr}_{(\mathrm{aq})}^{3+}+3 \mathrm{Co}_{(\mathrm{s})}\)
E[Cell] = 0.4463 V; ΔG = – 258.4 kJ.
(20) Calculate E^0[Cell], ΔG^0 and equilibrium constant for the reaction 2Cu^+ → Cu^2+ + Cu.
E^0[Cu^+/Cu] = 0.52 V and E^0[Cu^2+,Cu+] = 0.16 V.
Solution :
Given : Cell reaction : 2Cu^+[(aq)] → Cu^2+[(aq)] +Cu[(s)]
E^0[Cu^+/Cu] = 0.52V; E^0[Cu^2+,Cu+] = 0.16 V
1F = 96500 C
E^0[Cell] = ? ΔG^0 = ? K=?
(i) The formulation of the cell :
(ii) ΔG^0 = – nFE^0[Cell] = – 1 × 96500 × 0.36
= – 34740 J
= – 34.74 kJ
(iii) Electrochemical redox reactions are considered as reversible reactions. If K is the equilibrium constant for the electrochemical redox reaction, then
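(the omitted relation and substitution) At equilibrium, E^0[Cell] = \(\frac{0.0592}{n}\) log K
∴ log K = \(\frac{n \times E^{0}_{\mathrm{Cell}}}{0.0592}\) = \(\frac{1 \times 0.36}{0.0592}\) = 6.08, i.e. K = antilog (6.08) ≈ 1.2 × 10^6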
Ans. E^0[Cell] = 0.36 V; ΔG^0 = – 34.74 kJ; Equilibrium constant = K= 1.2 × 10^6 mol^-1 dm^3.
(21) Calculate the equilibrium constant for the redox reaction at 25 °C.
Sr[(s)] + Mg^2+ → Sr^2+[(aq)] + Mg[(s)],
that occurs in a galvanic cell. Write the cell formula.
E^0[Mg] = – 2.37 V and E^0[Sr] = – 2.89 V.
Solution :
Given :
Cell reaction : Sr[(s)] + Mg^2+ → Sr^2+[(aq)] + Mg[(s)]
E^0 [Mg^2+/Mg] = -2.37 V; E^0 [Sr^2+/Sr] = -2.89 V
Equilibrium constant K = ?
The formulation of the cell : Sr[(s)] | \(\mathrm{Sr}_{(\mathrm{aq})}^{2+}\) (1 M) || \(\mathrm{Mg}_{(\mathrm{aq})}^{2+}\) (1 M) | Mg[(s)]
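(the omitted calculation)
E^0[Cell] = E^0[Mg] – E^0[Sr] = -2.37 – (-2.89) = 0.52 V; n = 2
log K = \(\frac{n \times E^{0}_{\mathrm{Cell}}}{0.0592}\) = \(\frac{2 \times 0.52}{0.0592}\) = 17.57 ∴ K = antilog (17.57) ≈ 3.7 × 10^17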
Ans. Equilibrium constant = K = 3.698 × 10^17
(22) The equilibrium constant for the following reaction at 25 °C is 2.9 × 10^9. Calculate standard voltage of the cell.
Cl[2(g)] + 2Br^–[(aq)] ⇌ Br[2(l)] + 2Cl^–[(aq)]
Solution :
Given : Cell reaction : Cl[2(g)] + 2Br^–[(aq)] ⇌ Br[2(l)] + 2Cl^–[(aq)]
Equilibrium constant = K = 2.9 × 10^9 atm^-1
Standard voltage of the cell = E^0[Cell] = ?
The formulation of the cell :
(23) Write the cell representation and calculate equilibrium constant for the following redox reaction :
Ni[(s)] + 2 Ag^+[(aq)] (1M) → Ni^2+[(aq)] (1 M) + 2Ag[(s)]
at 25 °C
E^0[Ni] = – 0.25 V and E^0[Ag] = 0.799 V
Solution :
Given : E^0[Ni^2+/Ni] = – 0.25 V; E^0[Ag^+/Ag] = 0.799 V
Equilibrium constant = K = ?
(24) Calculate the cell potential of the following galvanic cell :
Pt|H[2 ](g, 1 atm)|\(\mathbf{H}_{\text {(aq) }}^{+} \mathbf{p H}\) = 3.51||Calomel electrode
E[cal] = 0.242 V at 25 °C.
Solution :
E[H^+/H2] = -0.0592 × pH
= -0.0592 × 3.5
= -0.2072 V
∴ E[cell] = E[cal] – E[H^+/H2]
= 0.242 – (-0.2072)
= 0.242 + 0.2072
= 0.4492 V
Ans. E[cell] = 0.4492 V
Question 68.
How are the voltaic cells classified ?
The voltaic cells are classified as primary and secondary voltaic cells.
(1) Primary voltaic cells : These are the voltaic cells in which the electrical energy or cell potentials are developed within the cells due to oxidation and reduction reactions at the reversible electrodes.
The chemicals and electrode materials consumed during discharging can be regenerated by passing the current in the opposite direction from an external source of electricity, i.e., these cells can be recharged, for example, the Daniell cell. There are also examples where primary cells cannot be recharged, e.g. the dry cell.
(2) Secondary voltaic cells :
(i) These are the voltaic cells in which the electrical energy or cell potentials are not developed within the cell but electrical energy can be stored or cell potentials can be regenerated by
passing electricity from the external source of electricity. Since the electrical energy obtained is second hand, these cells are called secondary cells or accumulators or storage cells.
(ii) These cells can be recharged by passing electric current in opposite direction from the external source of higher emf. Therefore the secondary cells are reversible cells. For example, lead
accumulator (lead storage battery).
Question 69.
Explain the construction and working of a dry cell (or Leclanche’s cell).
Write a note on dry cell.
(A) Principle :
• Leclanche’s cell is a primary voltaic cell.
• It doesn’t contain mobile liquid electrolyte but contains moist viscose aqeuous paste of the electrolytes.
• It is an irreversible voltaic cell which can’t be recharged.
(B) Construction :
(i) It consists of a small zinc vessel which serves as an anode (negative electrode).
(ii) The zinc vessel contains a porous paper bag containing an inert graphite (C) electrode which serves as cathode, immersed in a paste of MnO[2] and carbon black. This paper bag divides the dry
cell into two compartments, namely anode and cathode compartments.
(iii) The rest of the cell is filled with a moist paste of NH[4]Cl and ZnCl[2] which acts as an electrolyte for zinc anode.
(iv) The graphite rod is fitted with a metal cap and the cell is sealed to prevent the drying of moist paste by evaporation.
(C) The dry cell can be represented as,
^– Zn | ZnCl[2(aq)], NH[4]Cl[(aq)], MnO[2(s)] | C^+.
(D) Reactions in the dry cell :
(i) Oxidation at zinc anode :
Zn[(s)] → \(\mathrm{Zn}_{\text {(aq) }}^{2+}\) + 2e^– (oxidation half reaction)
(ii) Reduction at graphite (C) cathode :
The electrons released in the oxidation reaction at anode, flow to cathode through external circuit.
Hydrogen in NH[4] ion is reduced to molecular hydrogen which reduces MnO[2] to Mn[2]O[3].
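(The half reaction is not reproduced in the source; the standard textbook form is)
2MnO[2(s)] + 2\(\mathrm{NH}_{4(\mathrm{aq})}^{+}\) + 2e^– → Mn[2]O[3(s)] + 2NH[3(aq)] + H[2]O[(l)] (reduction half reaction)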
(iii) Zn^2+ react with NH[3] and form a complex.
\(\mathrm{Zn}_{(\mathrm{aq})}^{2+}+4 \mathrm{NH}_{3(\text { aq) }} \longrightarrow\left[\mathrm{Zn}\left(\mathrm{NH}_{3}\right)_{4}\right]^{2+}{ }_{(\mathrm{aq})}\)
Since Zn^2+ ions are removed, the overall cell reaction can’t be reversed.
(E) Uses of dry cell :
• Dry cell is used as a source of electric power in radios, flashlights, torches, clocks, etc.
• Since they are available in small size and portable, they can be used conveniently.
Question 70.
Describe the construction and working of lead accumulator (lead storage cell).
Draw a neat labelled diagram of the lead accumulator. Explain the reactions involved in discharging and charging this cell. Represent this cell using cell conventions.
(A) Principle :
(1) The lead accumulator is a secondary electrochemical cell since electrical energy and emf are not developed within the cell but it is previously stored by passing an electric current. Hence it is
also called lead accumulator or lead storage battery.
(2) It is reversible since the electrochemical reaction can be reversed by passing an electric current in opposite direction and consumed reactants can be regenerated.
(3) Hence battery can be charged after it is discharged.
(B) Construction : In a lead accumulator, the negative terminal (anode) is made up of lead sheets packed with spongy lead, while the positive terminal (cathode) is made up of lead grids packed with lead dioxide, PbO[2].
Sulphuric acid of about 38% strength (%w/w) or specific gravity 1.28 or 4.963 molar is the electrolyte in which the lead sheets and lead grids are dipped. The positive terminal and negative terminal
are alternatively arranged in the electrolyte and are separately interconnected.
(C) Representation of lead accumulator :
(D) Working of a lead accumulator :
(1) Discharging : When the electric current is withdrawn from lead accumulator, the following reactions take place :
Oxidation at the – ve electrode or anode :
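(The half reactions are not reproduced in the source; they are the reverse of the charging reactions given below:)
Pb[(s)] + \(\mathrm{SO}_{4(\mathrm{aq})}^{2-}\) → PbSO[4(s)] + 2e^–
Reduction at the + ve electrode or cathode :
PbO[2(s)] + 4H^+[(aq)] + \(\mathrm{SO}_{4(\mathrm{aq})}^{2-}\) + 2e^– → PbSO[4(s)] + 2H[2]O[(l)]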
(2) Net cell reaction :
(i) Thus, the overall cell reaction during discharging is
Pb[(s)] + PbO[2(s)] + 2H[2]SO[4(aq)] → 2PbSO[4(s)] + 2H[2]O[(l)]
The cell potential or emf of the cell depends upon the concentration of sulphuric acid. During the operation, the acid is consumed and its concentration decreases and specific gravity decreases from
1.28 to 1.17. As a result, the emf of the cell decreases. The emf of a fully charged cell is about 2.0 V.
(ii) Recharging of the cell : When the discharged battery is connected to external electric source and a higher external potential is applied the cell reaction gets reversed generating H[2]SO[4].
Reduction at the – ve electrode or cathode :
PbSO[4(s)] + 2e^– → Pb(s) + \(\mathrm{SO}_{4(a q)}^{2-}\)
Oxidation at the + ve electrode or anode :
PbSO[4(s)] + 2H[2]O[(l)] → PbO[2(s)] + 4H^+[(aq)] + \(\mathrm{SO}_{4(a q)}^{2-}\) + 2e^–
The net reaction during charging is
2PbSO[4(s)] + 2H[2]O[(l)] → Pb[(s) ]+ PbO[2(s)] + 4H^+[(aq)] + 2\(\mathrm{SO}_{4(a q)}^{2-}\)
2PbSO[4(s)] + 2H[2]O → Pb[(s)] + PbO[2(s)] + 2H[2]SO[4(aq)]
The emf of the accumulator depends only on the concentration of H[2]SO[4].
(E) Applications :
1. It is used as a source of d.c. electric supply.
2. It is used in automobile in ignition circuits and lighting the head lights by connecting 6 batteries giving 12V potential.
3. It is also used in invertors.
Question 71.
In lead accumulator which electrode is coated with PbO[2] ? Anode or cathode ?
In lead accumulator, cathode is coated with PbO[2].
Question 72.
Write net charging and discharging reactions for lead storage battery.
For lead storage battery :
Net charging reaction :
2PbSO[4(s)] + 2H[2]O[(l)] → Pb[(s)] + PbO[2(s)] + 2H[2]SO[4(aq)]
Net discharging reaction :
Pb[(s)] + PbO[2(s)] + 2H[2]SO[4(aq)] → 2PbSO[4(s)] + 2H[2]O[(l)]
Question 73.
Write a note on Nickel-Cadmium (NICAD) cell.
(1) Nickel-Cadminum (NICAD) cell is a secondary dry cell.
(2) It is rechargable, hence it is a reversible cell.
(3) It consists of a cadmium electrode in contact with an alkali and acts as anode while nickel (IV) oxide, NiO[2] in contact with an alkali acts as cathode. The alkali used is moist paste of KOH.
(4) Reactions in the cell :
(i) Oxidation at cadmium anode :
Cd[(s)] + 2OH^–[(aq) ]→ Cd(OH)[2(s)] + 2e^–
(ii) Reduction at NiO[2(s)] cathode :
NiO[2(s)] + 2H[2]O[(l)] + 2e^– → Ni(OH)[2(s)] + 2OH^–[(aq)]
The overall cell reaction is the combination of above two reactions.
Cd[(s)] + NiO[2(s)] + 2H[2]O[(l)] → Cd(OH)[2(s)] + Ni(OH)[2(s)]
(5) Since the net cell reaction doesn’t involve any electrolytes but solids, the voltage is independent of the concentration of alkali electrolyte.
(6) The cell potential is about 1.4 V.
(7) This cell has longer life than other dry cells.
Question 74.
Write a note on mercury battery.
(1) Mercury battery is a rechargeable secondary dry cell.
(2) It consists of zinc anode amalgamated with mercury.
(3) The cathode consists of a paste of Hg and carbon.
(4) The electrolyte is a paste of KOH and ZnO in a strong alkaline medium.
(5) Reactions:
(6) The overall reaction involves only solid substances and electrolytic composition remains unchanged.
(7) Therefore mercury battery provides constant voltage (1.35 V) over a long period.
(8) It is superior to Leclanche’s cell in durability.
(9) Uses : It is used in hearing aids, electric watches, pacemakers, etc.
Question 75.
Describe the construction and working of hydrogen-oxygen (H[2]-O[2]) fuel cell.
(A) Principle :
(i) The functioning of the fuel cell is based on the combustion reaction like,
2H[2(g)] + O[2(g)] → 2H[2]O[(g)] is exothermic redox reaction and hence it can be used to produce electricity.
(ii) The reactants of this fuel cell can be continuously supplied from outside, hence this can be used to supply electrical energy for a very long period.
(B) Construction :
(i) In fuel cell the anode and cathode are porous electrodes with suitable catalyst like finely divided platinum.
(ii) The electrolyte used is hot aqueous KOH solution in which porous anode and cathode carbon rods are immersed.
(iii) H[2] is continuously bubbled through anode while O[2] gas is bubbled through cathode.
(C) Working (cell reactions) :
(i) Oxidation at anode : At anode, hydrogen gas is oxidised to H[2]O.
2H[2(g)] + 4OH^–[(aq)] → 4H[2]O[(l)] + 4e^– (oxidation half reaction)
(ii) Reduction at cathode : The electrons released at anode travel to cathode through external circuit and reduce oxygen gas to OH-.
O[2(g)] + 2H[2]O[(l)] + 4e^– → 4OH^–[(aq)] (reduction half reaction)
(iii) Net cell reaction: Addition of both the above reactions at anode and cathode gives a net cell reaction.
2H[2(g)] + O[2(g)] → 2H[2]O[(l)] (overall cell reaction)
(D) Representation of the cell :
The overall cell reaction is an exothermic combustion reaction. However in this, H[2] and O[2] gases do not react directly but react through electrode reactions. Hence the chemical energy released in
the formation of O-H bonds in H[2]O, is directly converted into electrical energy.
(E) Advantages :
1. The fuel cell operates continuously as long as H[2] and O[2] gases are supplied to the electrodes.
2. The cell reactions do not cause any pollution.
3. The efficiency of this galvanic cell is the highest about 70% as compared to ordinary galvanic cells.
(F) Drawbacks of H[2]-O[2] fuel cell :
1. The cell requires expensive electrodes like Pt, Pd.
2. In practice, voltage is less than 1.23 volt due to spontaneous reactions at the electrodes.
3. H[2] gas is expensive and hazardous.
(G) Applications :
1. It was successfully used in spacecraft.
2. It has potential applications in automobiles, power generators for domestic and industrial uses.
Question 76.
What are the applications of the fuel cells?
1. Fuel cells have been used in the space programme providing electrical energy for a long duration.
2. The fuel cells have been used in automobiles on experimental basis.
3. In case of H[2]-O[2] fuel cell, used in spacecraft, the water produced is used for drinking for astronauts.
4. The fuel cells using methanol as a fuel for combustion are used in electronic products such as cell phones and laptop computers.
5. The fuel cells have many potential applications as power generators for domestic and industrial uses.
Question 77.
In what way fuel cell differs from ordinary galvanic cells ?
1. Fuel cell is a modified galvanic cell in which the thermal energy of combustion reactions is directly converted into electrical energy.
2. In the fuel cell, the reactants are not placed within the cell like ordinary galvanic cells, but they are continuously supplied to the electrodes from outside reservoir.
3. They cannot be recharged unlike ordinary galvanic cell.
Question 78.
Define electrochemical series or electromotive series.
Electrochemical series (Electromotive series) : It is defined as the arrangement in a series of electrodes of elements (metal or non-metal in contact with their ions) with the electrode half
reactions in the decreasing order of their standard reduction potentials.
Question 79.
Explain electrochemical series or electromotive series.
The conventions used in the construction of electrochemical series (or electromotive series) are as follows :
• The (reduction) electrodes or half cells of the elements are written on the left hand side of the series and they are arranged in the decreasing order of their standard reduction potentials
• Reduction half reactions are written for each half cell in such a way that the species with higher oxidation state and electrons are on left hand side while reduced species with lower oxidation
state are on right hand side.
• The standard reduction potential of standard hydrogen electrode is 0.00 V, i.e., E^0[H^+/H2] = 0.0 V. The electrodes and half cell reactions with positive E^0[red] values are located above
hydrogen and those with negative E^0[red] values below hydrogen. Above hydrogen, positive E^0[red] values increase, while below hydrogen negative E^0 values increase.
• The positive E^0[red] values indicate the tendency for reduction and the negative E^0[red] values indicate the tendency for oxidation.
• The elements, whose electrodes are at the top of the series having high positive values for E^0[red] are good oxidising agents.
• The elements, whose electrodes are at the bottom of the series having high negative values for E^0[red] are good reducing agents.
Question 80.
What are the applications of electrochemical series (or electromotive series) ?
The applications of electrochemical series (or electromotive series) are as follows :
(1) Relative strength of oxidising agents in terms of E^0[red] values : The E^0[red] value is a measure of the tendency of the species to be reduced i.e., to accept electrons and act as an oxidising
agent. The species mentioned on left hand side of the half reactions are oxidising agents.
The substances in the upper positions in the series and hence in the upper left side of the half reactions have large positive E^0[red] values hence are stronger oxidising agents. For example, F[2],
Ce^4+, Au^3+, etc. As we move down the series, the oxidising power decreases. Hence from the position of the elements in the electrochemical series, oxidising agents can be selected.
(2) Relative strength of reducing agents in terms of E^0[red] values : The lower E^0[red] value means lower tendency to accept electrons but higher tendency to lose electrons. The tendency for
reverse reaction or oxidation increases as E^0[red] becomes more negative and we move towards the lower side of the series. For example, Li, K, Al, etc. are good reducing agents.
(3) Identifying the spontaneous direction of reaction : From the standard reduction potentials, E^0[red], the spontaneity of a redox reaction can be determined. The difference between E^0[red] values
for any two electrodes represents cell potential E^0[cell], constituted by them.
If E°cell is positive then the reaction is spontaneous while if E^0[cell] is negative the reaction is non-spontaneous. For example, E^0[Mg^2+/Mg] and E^0[Ag^+/Ag] have values -2.37 V and 0.8 V
respectively. Then Mg will be a better reducing agent than Ag. Therefore Mg can reduce Ag^+ to Ag.
The corresponding reactions will be:
Therefore above reaction in the forward direction will be spontaneous while in the reverse direction will be non-spontaneous since for it E^0[cell] = -3.17V.
(4) Calculation of standard cell potential E^0[cell] : From the electrochemical series, the standard cell potential, E^0[cell] from the E^0[red] values for the half reactions given can be calculated.
For example,
Question 81.
Write any four applications of electrochemical series.
The applications of electrochemical series are as follows :
1. Predicting relative strength of oxidising agents.
2. Predicting relative strength of reducting agents.
3. Identifying the spontaneous direction of a reaction.
4. To calculate the standard cell potential E°cell.
Multiple Choice Questions
Question 82.
Select and write the most appropriate answer from the given alternatives for each subquestion :
1. The cell constant of a conductivity cell is given by
(a) l × a
(b) \(\frac{a}{l}\)
(c) \(\frac{1}{l \times a}\)
(d) \(\frac{l}{a}\)
(d) \(\frac{l}{a}\)
2. A conductivity cell has two platinum electrodes of area 1.2 cm^2 and 0.92 cm apart. Hence the cell constant is
(a) 1.104 cm^-1
(b) 1.304 cm^-1
(c) 0.906 cm^-1
(d) 0.767 cm^-1
(d) 0.767 cm^-1
3. The conductivity of 0.02 M KI solution is 4.37 × 10^-4 Ω^-1 cm^-1. Hence its molar conductivity is
(a) 8.74 × 10^-6 Ω^-1 cm^2 mol^-1
(b) 21.85 Ω^-1 cm^2 mol^-1
(c) 4.58 × 10^-4 Ω^-1 cm^2 mol^-1
(d) 136.5 Ω^-1 cm^2 mol^-1
(b) 21.85 Ω^-1 cm2 mol^-1
4. The specific conductance of 0.02 M HCl is 8.2 × 10^-3 Ω^-1 cm^-1. Hence its molar conductivity is
(a) 164 Ω^-1 cm^2 mol^-1
(b) 6.1 × 10^3Ω^-1 cm^2 mol^-1
(c) 239.6 S cm^2 mol^-1
(d) 410 S cm^2 mol^-1
(d) 410 S cm^2 mol^-1
5. Molar conductivity of an electrolyte is given by,
(c) \(\wedge_{\mathrm{m}}=\frac{\kappa \times 1000}{\mathrm{C}}\)
6. The units of molar conductivity are
(a) Ω cm^-2 mol^-1
(b) Ω^-1 cm^2 mol^-1
(c) Ω^-1 cm^-1 mol^-1
(d) Ω^– cm^-1 mol^-2
(b) Ω^-1 cm^2 mol^-1
7. If conductivity is expressed in Ω^-1 m^-1 and concentration of the electrolytic solution in mol m^-3 then, the molar conductance is given by
(b) \(\wedge_{\mathrm{m}}=\frac{\kappa}{C}\)
8. Kohlrausch’s law is represented as
(a) \(\wedge_{0}=\lambda_{+}^{0}+\lambda_{-}^{0}\)
9. The degree of dissociation of a weak electrolyte is given by
(c) α = \(\frac{\wedge_{\mathrm{m}}}{\wedge_{0}}\)
10. ∧[0] for CH[3]COOH is 390.7 Ω^-1 cm^2 mol^-1. If ∧[0] for CH[3]COOK, and HBr in Ω^-1 cm^2 mol^-1 are 115 and 430.4 respectively, then ∧0 for KBr is
(a) 74.6 Ω^-1 cm^2 mol^-1
(b) 180.6 Ω^-1 cm^2 mol^-1
(c) 154.7 Ω^-1 cm^2 mol^-1
(d) 706.1 Ω^-1 cm^2 mol^-1
(c) 154.7 Ω^-1 cm^2 mol^-1
11. The molar conductivity of cation and anion of salt BA are 180 and 220 mhos respectively. The molar conductivity of salt BA at infinite dilution is
(a) 90 mhos · cm^2 · mol^-1
(b) 110 mhos · cm^2 · mol^-1
(c) 200 mhos · cm^2 · mol^-1
(d) 400 mhos · cm^2 · mol^-1
(d) 400 mhos · cm^2 · mol^-1
12. If ∧[m] and ∧[0] are the molar conductivities of a weak electrolyte at concentration C and at zero concentration, then the dissociation constant Ka is given by
(b) K[a] = \(\frac{\wedge_{\mathrm{m}}^{2} \times \mathrm{C}}{\Lambda_{0}\left(\wedge_{0}-\wedge_{\mathrm{m}}\right)}\)
13. What is the ratio of volumes of H[2] and O[2] liberated during electrolysis of acidified water ?
(a) 1 : 2
(b) 2 : 1
(c) 1 : 8
(d) 8 : 1
(b) 2 : 1
14. What weight of copper will be deposited by passing 2 Faradays of electricity through a cupric salt? (atomic mass = 63.5)
(a) 63.5 g
(b) 31.75 g
(c) 127 g
(d) 12.7 g
(a) 63.5 g
15. The S.I. unit of cell constant for conductivity cell is
(a) m^-1
(b) S·m^-2
(c) cm^-2
(d) S·dm^2·mol^-1
(a) m^-1
16. The charge of how many coulomb is required to deposit 1.0 g of sodium metal (molar mass 23.0 g mol^-1) from sodium ions is
(a) 2098
(b) 96500
(c) 193000
(d) 4196
(d) 4196
17. The amount of electricity equal to 0.05 F is
(a) 48250 C
(b) 3776 C
(c) 4825 C
(d) 4285 C
(c) 4825 C
18. The number of electrons that have a total charge of 965 coulombs is
(a) 6.022 × 10^23
(b) 6.022 × 10^22
(c) 6.022 × 10^21
(d) 3.011 × 10^23
(c) 6.022 × 10^21
19. When 0.2 Faraday of electricity is passed through an electrolytic solution, the number of electrons involved are
(a) 96500
(b) 1.603 × 10^-19
(c) 1.2046 × 10^23
(d) 12 × 10^6
(c) 1.2046 × 10^23
20. When a charge of 0.5 Faraday is passed through AlCl[3] solution, the amount of aluminium deposited at the cathode is (Atomic weight of Al = 27)
(a) 4.5
(b) 18
(c) 27
(d) 2.7
(a) 4.5
21. The quantity of electricity required to deposit 54 g of silver from silver nitrate solution is
(a) 0.5 Coulomb
(b) 0.5 Ampere
(c) 0.5 Faraday
(d) 0.5 Volt
(c) 0.5 Faraday
22. Passage of 5400 C of electricity through an electrolyte deposited 5.954 × 10^-3 kg of the metal with atomic mass 106.4. The charge on the metal ion is
(a) + 1
(b) + 2
(c) + 3
(d) + 4
(a) + 1
23. On calculating the strength of current in amperes if a charge of 840 C (coulomb) passes through an electrolyte in 7 minutes, it will be
(a) 1
(b) 2
(c) 3
(d) 4
(b) 2
24. On passing 1.5 F charge, the number of moles of aluminium deposited at cathode are [Molar mass of Al = 27 gram mol^-1]
(a) 1.0
(b) 13.5
(c) 0.50
(d) 0.75
(c) 0.50
25. Number of faradays of electricity required to liberate 12 g of hydrogen is
(a) 1
(b) 8
(c) 12
(d) 16
(c) 12
26. Daniell cell is
(a) Secondary cell
(b) Irreversible cell
(c) primary irreversible cell
(d) primary reversible cell
(d) primary reversible cell
27. In the representation of galvanic cell, the ions in the same phase are separated by a
(a) single vertical line
(b) comma
(c) double vertical lines
(d) semicolon
(b) comma
28. In the Daniell cell, reduction occurs at the
(a) anode
(b) zinc rod
(c) negative electrode
(d) positive electrode
(d) positive electrode
29. The standard hydrogen electrode is represented as
(a) \(\mathrm{H}_{(\mathrm{aq})}^{+}\)|H[2](g, 1 atm) | Pt
(b) \(\mathrm{H}_{(\mathrm{aq})}^{+}\) 1M | H[2](g, 1 atm) | Pt
(c) \(\mathrm{H}_{(\mathrm{aq})}^{+}\) 1M|H[2(g)]|Pt
(d) \(\mathrm{H}_{(\mathrm{aq})}^{+}\) 0.1M|H[2](g, 1 atm) | Pt
(b) \(\mathrm{H}_{(\mathrm{aq})}^{+}\) 1M | H[2](g, 1 atm) | Pt
30. The essential condition to set a standard hydrogen electrode is
(a) 298 K
(b) pure and dry H[2] gas at 1 atm
(c) solution containing H^+ at unit activity
(d) all of these
(d) all of these
31. In hydrogen-oxygen fuel cell, the carbon rods are immersed in hot aqueous solution of
(a) KCl
(b) KOH
(c) H[2]SO[4]
(d) NH[4]Cl
(b) KOH
32. The emf of cell is 1.3 volt. The positive electrode has potential of 0.5 volt. The potential of negative electrode is
(a) 0.8 V
(b) -0.8 V
(c) 1.8 V
(d) – 1.8 V
(b) -0.8 V
33. The electrode potential of a silver electrode dipped in 0.1 M AgNO[3] solution at 298 K is (E^0[red] of Ag = 0.80 volt)
(a) 0.0741 V
(b) 0.0591 V
(c) 0.741 V
(d) 0.859 V
(c) 0.741 V
34. Which of the following species gains electrons more easily ?
(a) Na^+
(b) H^+
(c) Mg^+
(d) Hg^+
(b) H^+
35. In Nernst equation the constant 0.0592 at 298 K represents the value of
(d) \(\frac{2.303 R T}{F}\)
36. The concept of electrode potential is explained on the basis of
(a) Arrhenius’ theory
(b) Ostwald’s theory
(c) Nernst's theory
(d) Faraday’s law
(c) Nernst's theory
37. The standard reduction potentials of metals A and B are x and y respectively. If x > y, the standard emf of the cell containing these electrodes would be
(a) 2x – y
(b) y – x
(c) x – y
(d) x + y
(c) x – y
38. The emf of the cell,
(E^0[red] = 0.34 V)
(a) -1.34
(b) 0.34 V
(c) -0.34 V
(d) 1.34
(b) 0.34 V
39. The electromotive force of the following cell Cu | Cu^2+ (1 M) || Ag^+ (1 M) | Ag is …………….. if E^0[Cu^2+/Cu] = 0.33 V and E^0[Ag^+/Ag] = 0.79 V
(a) 0.46 V
(b) – 0.46 V
(c) 1.12 V
(d) – 1.12 V
(a) 0.46 V
40. The standard cell potential of the following cell is 0.463 V Cu|Cu^++ (1 M)||Ag^+ (1 M)|Ag. If E^0[Ag] = 0.8 V, what is the standard potential of Cu electrode ?
(a) 1.137 V
(b) 0.337 V
(c) 0.463 V
(d) – 0.463 V
(b) 0.337 V
41. The metal which cannot displace hydrogen from dil. H[2]SO[4] solution is
(a) Zn
(b) Al
(c) Fe
(d) Ag
(d) Ag
42. In the Lead storage battery during discharging
(a) pH of the electrolyte increases
(b) pH decreases
(c) pH remain unchanged
(d) pH increases or decreases depends on the extent of discharging
(a) pH of the electrolyte increases
43. During the discharging of a lead storage battery,
(a) H[2]SO[4] is consumed
(b) PbSO[4] is consumed
(c) Pb^2+ ions are formed
(d) Pb is formed
(a) H[2]SO[4] is consumed
44. In lead accumulator, anode and cathode are
(a) (Pb + PbO[2]), Pb
(b) Pb, PbO[2]
(c) PbO[2], Pb
(d) Pb, (Pb + PbO[2])
(d) Pb, (Pb + PbO[2])
45. The efficiency of the hydrogen-oxygen fuel cell is about
(a) 20%
(b) 40%
(c) 70%
(d) 90%
(c) 70%
46. The strongest oxidizing agent among the species In^3+ (E^0 = – 1.34 V), Au^3+ (E^0 = 1.4 V), Hg^2+ (E^0 = 0.86 V), Cr^3+ (E^0 = – 0.74 V) is
(a) Cr^3+
(b) Au^3+
(c) Hg^2+
(d) In^3+
(b) Au^3+
47. The reaction, \(2 \mathrm{Br}_{(\mathrm{aq})}^{-}+\mathrm{Sn}_{(\mathrm{aq})}^{2+} \longrightarrow \mathrm{Br}_{2(\mathrm{l})}+\mathrm{Sn}_{(\mathrm{s})}\)
with the standard potentials, E^0[Sn] = -0.114 V, E^0[Br2] = + 1.09 V, is
(a) spontaneous in reverse direction
(b) spontaneous in forward direction
(c) at equilibrium
(d) non-spontaneous in reverse direction
(a) spontaneous in reverse direction
48. The cell potential of the following cell is
(E^0[Al^3+/Al] = – 1.66 V)
(a) 1.66 V
(b) -1.66 V
(c) 0.5533 V
(d) 2.14 V
(a) 1.66 V
49. The standard reduction potentials of Sn, Hg and Cr are – 1.36 V, 0.854 V and – 0.746 V respectively. The increasing order of oxidising power of the given elements is
(a) Sn < Hg < Cr
(b) Hg < Cr < Sn
(c) Sn < Cr < Hg
(d) Cr < Hg < Sn
(c) Sn < Cr < Hg
50. If standard reduction potentials for Pb, K, Zn and Cu are -0.126 V, -2.925 V, -0.763 V and 0.337 V, the decreasing order of reducing power is
(a) Zn > Pb > K > Cu
(b) Cu > Pb > Zn > K
(c) K > Zn > Pb > Cu
(d) K > Pb > Cu > Zn
(c) K > Zn > Pb > Cu+ | {"url":"https://maharashtraboardsolutions.guru/maharashtra-board-class-12-chemistry-important-questions-chapter-5/","timestamp":"2024-11-13T13:02:03Z","content_type":"text/html","content_length":"246502","record_id":"<urn:uuid:3699e9b2-b844-42fa-977f-91742761aeda>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00633.warc.gz"} |
A crash course on asset allocation
· 1202 words · 6 minutes read
In previous posts, I talked about passive investing and its Belgium-specific matters. I will now discuss asset allocation, which is one of the most important topics in investing. It’ll be more of a
theoretical post but I will elaborate on some core principles that I think are important.
Why asset allocation?
In many situations in life, it’s wise to not put all your eggs in one basket. The same holds for investing. That is what asset allocation is about: choosing the right mix of securities in your
portfolio in order to reduce risk while maintaining a high expected return.
Markowitz’s insight
Markowitz is the father of Modern Portfolio Theory. One of his important insights was the following:
A mixture of volatile noncorrelated securities results in a portfolio with lower volatility and potentially higher return.
This may seem counter-intuitive at first so I will try to illustrate with an example. We will first look at the lower volatility part and then at the potentially higher return.
Lower volatility through diversification
First of all, lower volatility is a good thing because a highly volatile investment is also very risky. And as investors, we want to keep risks low. An investment that has a lower risk but an equal
expected return is a better investment.
Imagine that there are two stocks A and B, whose returns over 4 years are as in the following table.
Stocks Year 1 Year 2 Year 3 Year 4
Stock A 50% -25% 50% -25%
Stock B -25% 50% -25% 50%
For example, after the first year, stock A gains 50% while stock B loses 25%.
Let’s say that we have $1000 to invest. We are going to compare three portfolios:
1. 100% stock A
2. 50% stock A, 50% stock B: we buy $500 worth of each stock at the start and maintain those stocks during all four years.
3. 100% stock B
The following table shows the returns of each portfolio over these 4 years.
Portfolios Year 0 Year 1 Year 2 Year 3 Year 4
100% stock A $1000 $1500 $1125 $1688 $1266
50% stock A, 50% stock B $1000 $1125 $1125 $1266 $1266
100% stock B $1000 $750 $1125 $844 $1266
You can see that the total return after 4 years is the same for all portfolios: a good 27% gain. However, when you look at the intermediate returns, it seems that portfolio 2 exhibits a lot less
variation. It stays between $1000 and $1266 whereas the other portfolios reach much higher and lower levels. To confirm this suspicion, we compute the standard deviation of each portfolio:
Portfolios Standard deviation
100% stock A 279
50% stock A, 50% stock B 112
100% stock B 208
As the standard deviation is a measure for the volatility, we can conclude that the mixed portfolio is less risky. We achieved the same return for a less risky investment meaning that objectively,
the mixed portfolio is the better investment. This illustrates the first part of Markowitz’s insight.
Higher returns through rebalancing
By diversifying our portfolio across the two stocks, we lowered its volatility and therefore the risk associated with our investment. However, we haven’t made any additional returns with this
strategy. Whether we invested it all in stock A or a mix of stock A and B, we are stuck with a profit of $266.
This is where rebalancing can bring extra returns. With rebalancing, we regularly buying and selling stocks so that our initial allocation is maintained.
In our mixed portfolio, we start with $500 worth of each stock. After 1 year, the value of the stock A portion has grown to $750 (50% growth) whereas the stock B part has decreased to $375. So our
allocation has changed to 67% stock A and 33% stock B, which is far from our wanted allocation. Therefore, we sell $187.50 of stock A and buy the same amount of stock B. This brings us back to our
50% allocation.
What is the impact of rebalancing?
50% stock A, 50% stock B Year 0 Year 1 Year 2 Year 3 Year 4
no rebalancing $1000 $1125 $1125 $1266 $1266
with rebalancing $1000 $1125 $1266 $1424 $1602
That’s a big difference! By simply rebalancing ever year, our return jumps from 27% to 60%. And this without taking on any additional risk.
The intuition behind rebalancing is that it forces us to sell when a stock is high and buy when it’s low. Rebalancing works because prices that are low tend to go back up (eventually) and high prices
will likely come down at some point.
The impact of correlation
There's a third factor that we haven't looked at and that is correlation. Markowitz's insight says that the securities in our portfolio should be uncorrelated. What does that mean and why is this the case?
Two securities are positively correlated when they move together. For instance, the Nikkei and the Dow Jones are clearly correlated. In other words, when the US economy goes up, the Japanese economy tends to go up as well, and vice versa.
In contrast, the USD and gold are negatively correlated. When the price of gold goes down, the value of the USD tends to go up, and vice versa.
In our example, stock A and stock B were negatively correlated. Indeed, the years that stock A went up, stock B went down. What would happen if our portfolio instead consisted or stocks that were
more positively correlated?
Let’s do a similar simulation but with the following stocks:
Stocks Year 1 Year 2 Year 3 Year 4
Stock A 50% -25% 50% -25%
Stock B 40% -20% 40% -20%
The returns (without rebalancing):
Portfolios Year 0 Year 1 Year 2 Year 3 Year 4
100% stock A $1000 $1500 $1125 $1688 $1266
50% stock A, 50% stock B $1000 $1450 $1123 $1628 $1260
100% stock B $1000 $1400 $1120 $1568 $1254
What interests us is the standard deviation:
Portfolios Standard deviation
100% stock A 279
50% stock A, 50% stock B 224
100% stock B 251
The standard deviation of the mixed portfolio went up from 112 to 224! By diversifying, we have hardly lowered the risk: we gain a lot less from diversification when the stocks in our portfolio are positively correlated.
Does it also negatively impact our return when rebalancing?
50% stock A, 50% stock B Year 0 Year 1 Year 2 Year 3 Year 4
no rebalancing $1000 $1450 $1123 $1628 $1260
with rebalancing $1000 $1450 $1124 $1629 $1263
We only gain an additional 0.3% return when rebalancing as opposed to 33% in the previous situation. This shows that when creating a portfolio, its securities should be as negatively correlated as possible.
This was a short summary of what I find the most important principles behind asset allocation. In a nutshell, this is what we learned:
• To reduce volatility (and therefore risk), it is important to compose a diversified portfolio of securities.
• Volatility is reduced most when the securities in your portfolio are negatively correlated.
• Rebalancing can greatly increase the return of your portfolio.
I plan on covering my own portfolio in a future post so it will get more practical then! | {"url":"https://yoranbrondsema.com/post/a-crash-course-on-asset-allocation/","timestamp":"2024-11-15T03:50:56Z","content_type":"text/html","content_length":"14976","record_id":"<urn:uuid:277a9b44-8aec-48f3-805e-1dc6f55bd8f2>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00651.warc.gz"} |
Maths puzzles galore!
Just a quickie tonight to share the love for a few maths puzzle booklets I've found kicking around on the Internet. I'm not sure where they've come from originally, but here are the links:
Maths Practice Puzzles : Fractions and Decimals
• Equivalent fractions
• Comparing and ordering fractions
• LCD and simplifying fractions
• Addition and subtraction (like denominators)
• Addition and subtraction (unlike denominators)
• Multiplication and division of fractions
• Decimal addition and subtraction
• Decimal multiplication and division
Forty Fun-Tabulous Puzzles Includes:
• Four operations - addition, subtraction, multiplication, division
• Order of operations
• Simple fractions work
• Decimals
• Time
Amazing Math Puzzles and Mazes Includes:
• Basic operations
• Place value
• Primes
• Divisibility / factors and multiples
• Powers / exponents
• All fractions topics
• Decimals to fractions
• All decimal calculations
• Percentages
• Negative numbers
There's a great selection of cross-numbers, riddles, mazes and other general puzzles that spice up basic practice a little more than a generic worksheet. I've used some of these to support our work
on numeracy in tutor time across the school, and also for end-of-term puzzle races. | {"url":"https://www.norledgemaths.com/blog/maths-puzzles-galore","timestamp":"2024-11-08T15:52:04Z","content_type":"text/html","content_length":"60114","record_id":"<urn:uuid:7feb9871-1697-404a-94a2-34f4e3a1e471>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00495.warc.gz"} |
Multivariate Data Display
Multivariate Data Display
Several techniques have been proposed to display multivariate data.
Problem
Most of these techniques do not scale well with respect to size.
Our Objectives
• Interactive visualization of large multivariate datasets on parallel coordinates.
• Using a hierarchical approach that presents a multiresolutional view of the data.
• Suite of navigation tools to facilitate the systematic discovery of data trends and hidden patterns.
Next: Related WorkUp: IntroductionPrevious: Introduction | {"url":"https://davis.wpi.edu/~matt/courses/parcoord/node3.html","timestamp":"2024-11-13T01:30:18Z","content_type":"text/html","content_length":"6904","record_id":"<urn:uuid:b26eaa97-32d6-4bf4-a7e6-d6c3d173437b>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00545.warc.gz"} |
Question ID - 155875 | SaraNextGen Top Answer
For a real symmetric matrix $[\mathbf{A}],$ which of the following statements is true:
(A) The matrix is always diagonalizable and invertible.
(B) The matrix is always invertible but not necessarily diagonalizable.
(C) The matrix is always diagonalizable but not necessarily invertible.
(D) The matrix is always neither diagonalizable nor invertible. | {"url":"https://www.saranextgen.com/homeworkhelp/doubts.php?id=155875","timestamp":"2024-11-04T17:50:29Z","content_type":"text/html","content_length":"14739","record_id":"<urn:uuid:5131850b-eb57-4ac3-b720-6f68b071072f>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00832.warc.gz"} |
Applications of AI - Educate
Applications of AI
Consumer Marketing
Have you ever used any kind of credit/ATM/store card while shopping? If so, you have very likely been “input” to an AI algorithm. All of this information is recorded digitally, and companies like
Nielsen gather it weekly and search for patterns:
• general changes in consumer behavior
• tracking responses to new products
• identifying customer segments: targeted marketing, e.g., they find out that consumers with sports cars who buy textbooks respond well to offers of new credit cards.
Algorithms (“data mining”) search data for patterns based on mathematical theories of learning
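As a rough illustration of what such a pattern search can look like in practice, a candidate rule like the sports-car/textbook example above can be scored by its support and confidence. The records, names and numbers below are invented for the illustration; this is not a description of any real system.

records = [
    {"sports_car", "textbook", "accepted_card_offer"},
    {"sports_car", "textbook", "accepted_card_offer"},
    {"sports_car", "groceries"},
    {"textbook", "groceries"},
]
antecedent = {"sports_car", "textbook"}
consequent = {"accepted_card_offer"}

n_antecedent = sum(antecedent <= r for r in records)           # customers matching the "if" part
n_both = sum((antecedent | consequent) <= r for r in records)  # ...who also match the "then" part

support = n_both / len(records)       # 0.5: the rule covers half of all records
confidence = n_both / n_antecedent    # 1.0: every matching customer accepted the offer
print(support, confidence)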
Identification Technologies
ID cards (e.g., ATM cards) can be a nuisance and a security risk: cards can be lost, stolen, passwords forgotten, etc. Biometric identification: walk up to a locked door
• Camera
• Fingerprint device
• Microphone
• Computer uses biometric signature for identification
• Face, eyes, fingerprints, voice pattern
• This works by comparing data from person at door with stored library
• Learning algorithms can learn the matching process by analyzing a large library database off-line, can improve its performance
Intrusion Detection
Computer security – we each have specific patterns of computer use: times of day, lengths of sessions, commands used, sequence of commands, etc
• would like to learn the “signature” of each authorized user
• can identify non-authorized users
How can the program automatically identify users?
• record user’s commands and time intervals
• characterize the patterns for each user
• model the variability in these patterns
• classify (online) any new user by similarity to stored patterns
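A minimal sketch of the steps just listed (illustrative only; the commands, users and threshold are invented): build a command-frequency profile for each authorized user, then score a new session against the stored profiles by cosine similarity.

from collections import Counter
from math import sqrt

def profile(sessions):                      # sessions: list of command lists
    counts = Counter(cmd for s in sessions for cmd in s)
    total = sum(counts.values())
    return {cmd: n / total for cmd, n in counts.items()}

def cosine(p, q):
    dot = sum(p.get(k, 0.0) * q.get(k, 0.0) for k in set(p) | set(q))
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

stored = {
    "alice": profile([["ls", "cd", "vim", "make"], ["make", "gdb", "vim"]]),
    "bob":   profile([["ssh", "top", "kill"], ["top", "ps", "ssh"]]),
}
new_session = profile([["ls", "vim", "make", "make"]])

scores = {user: cosine(new_session, p) for user, p in stored.items()}
best, score = max(scores.items(), key=lambda kv: kv[1])
print(best if score > 0.5 else "unrecognized user", scores)

A real system would also model timing and command order (as the notes say), but the classify-by-similarity step has the same shape.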
Machine Translation
Language problems in international business
• e.g., at a meeting of Japanese, Korean, Vietnamese and Swedish investors, no common language
• If you are shipping your software manuals to 127 countries, the solution is: hire translators to translate
• would be much cheaper if a machine could do this!
How hard is automated translation?
List the various type of agent types
In artificial intelligence, agents can be categorized into several types based on their characteristics and capabilities. Here are some of
What are the factors that a rational agent should depend on at any given time?
A rational agent should consider several key factors when making decisions at any given time: Perceptual Input: The current information | {"url":"https://educatech.in/applications-of-ai/","timestamp":"2024-11-14T19:59:01Z","content_type":"text/html","content_length":"80930","record_id":"<urn:uuid:f05dc8a8-2df9-47a0-a796-0941b499df90>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00809.warc.gz"} |
FAQ: admissions
Which college should I choose?
The choice of college is probably seen as a much more important problem outside Oxford than it is inside. It is important to remember that the quality of teaching is the same across all colleges: the
physics lectures, practicals and projects take place in the Department of Physics, and are the same for all students, regardless of college.
The University website has some advice on choosing a college and the prospectus contains an overview of each. Another view is given in the Alternative Prospectus, written by the Oxford University
Students Union. If you are undecided which college to choose, you can make an "open application" and let the Admissions Office computer choose for you.
The admissions process is explicitly designed to ensure that college choice has as little effect as possible on the chance of being accepted. One aspect of this is that candidates are occasionally
reallocated from the college they chose to a different college. All short-listed candidates who come to Oxford for interview are interviewed by two colleges. Another consequence is that candidates
may receive offers from a college different from the one handling their application, or an open offer with no college specified.
Please note that Harris Manchester College do not currently admit students to read physics, and special considerations apply for physics and philosophy.
What is the short-listing process?
The Department of Physics has seen a steady and rapid increase in the number of applications for places on both the physics and physics and philosophy courses, with more than 8 applicants per place.
We now use a short-listing process to guide us in reducing the number of interviewed applicants to around 2.5 per place. The primary short-listing criterion is the total mark on the Physics Aptitude
Test, although candidates who fall slightly below the test threshold may be invited for interview on the basis of other evidence of excellence.
What is the reallocation process?
A key aim of the admissions procedure is that an applicant's chance of obtaining a place should as far as possible be independent of the college handling the application, whether the candidate
applied to a particular college or made an open application. To assist with this, candidates may be reallocated from the college initially handling their application to another college where the
ratio of applicants to places is lower, to ensure the ratio is as constant as possible across the university.
Reallocation occurs after the short list of candidates for interview has been drawn up. You may, therefore, be invited to interview at a different college from that which initially handled your
application. All your application materials will automatically be transferred to the new college, and there is no need to send anything further unless the new college specifically requests you to do
Candidates for reallocation are selected at random, and are treated in exactly the same way at interview as other candidates.
How should I prepare for the admissions process?
The admissions process for the Department of Physics is designed so that most UK candidates do not need to undertake extensive special preparation. You should, however, ensure that you are completely
familiar with all the physics and mathematics you have learnt at school. During the admissions process you will be expected to demonstrate not only that you are familiar with all the material you
have been taught, but also that you are able to use this material in an unfamiliar context. In particular a key skill required in the Oxford physics course is the ability to apply mathematical ideas
in a physical context.
Preparation for the Physics Aptitude Test is just as important as preparation for interviews, as the short-listing process is largely based on the results of the aptitude test.
Admissions interviews for the Department of Physics are purely academic in nature and, while there is no formal syllabus, knowledge of the physics aptitude test syllabus material will be assumed. You
are likely to spend the great majority of your time talking about physics or mathematics, rather than about any wider interests you may have mentioned on your application form. Interviewers may,
however, choose to ask you about specific topics you have mentioned on your form, and it is wise to ensure that you are familiar with any topics you have mentioned.
Do I need one or two maths A-levels?
The Oxford physics course is highly-mathematical. We expect that all students who are accepted to study physics at Oxford would be capable of achieving a Grade A in Further Maths A-level, even if
they have not taken the exam. It is important that you are good at maths and enjoy the prospect of applying it to physical problems.
Further Maths A-level can be helpful in completing the course, but is not required for admission. We accept there are many reasons why a candidate may not take Further Maths at A-level and a
significant minority of the students admitted to study physics each year offer only single maths.
These students must undertake extra work in the summer vacation and their first term to catch-up with their peers, but our research shows they are not disadvantaged after the first year. Where
A-level Further Mathematics is not offered, an AS-level or equivalent in Further Mathematics may be helpful in easing the transition from school to university studies. Our 'standard' maths courses in
the first term do not assume knowledge of Further Maths material, but they cover the relevant bits very quickly before getting on to the new stuff. Even those who have done A2 Further Maths will find
that they are learning new maths after only a few weeks at Oxford.
When a candidate applies to study physics at Oxford, we assess their mathematical ability by a number of means, including the Physics Aptitude Test, interviews and current/existing exam results. Our
admissions tutors have a great deal of experience in assessing candidates with single or double maths and will make an assessment of your mathematical ability bearing in mind the amount of maths you
have been able to study at school. However, for candidate on a borderline, those who have studied Further Maths may have greater opportunity to prove their mathematical ability that those studying
only single Maths A-level.
How much are my GCSE grades taken into consideration?
The principal indicators are the PAT and interview scores but all information supplied on the UCAS form is taken into consideration.
What maths should I know on arrival?
The Oxford physics course does contain a lot of maths. This is equally true for the joint course in physics and philosophy, which contains at least as much maths as the physics course. This means
that A-level Maths (or its equivalent) is essential for entry. Further Maths, at either AS or A2-level, is desirable, but it is not a requirement. What matters is that you should be good at maths,
and enjoy the prospect of applying it to physical problems and working out the answers.
This section will be revised to reflect recent detailed changes in the way maths is taught in the Oxford physics course. In the meantime this page provides useful general guidance but should not be
taken as definitive.
Assumed on arrival
Algebra and trigonometry: properties of polynomials, including the solution of quadratics. Inequalities. Arithmetic and geometric progressions and the binomial expansion. Functions and inverses.
Rational functions and algebraic division; partial fractions. Graphs of functions. Trigonometrical functions and their relationships; sum and difference formulae; multiple angle formulae. Logarithms
and exponentials.
Calculus: differentiation and integration of polynomials including fractional and negative powers; extensions to trigonometrical functions, exponentials and logarithms. Differentiation as finding the
slope, and location of maxima, minima and points of inflection. Integration as the reverse of differentiation and as the area under a curve; simplifying integrals by symmetry. Product, quotient and
chain rules. Integration by substitution and by parts. Volumes of revolution. Differentiation of implicit functions. Integration by partial fractions. Separable differential equations.
Other: roots of equations from zero-crossings and areas by Simpson’s rule. Basic ideas of vectors in 2D and 3D. Elementary probability theory and statistics.
Recommended but not essential
Complex numbers in Cartesian and polar forms. Complex roots of polynomial equations. Vector scalar (dot) and cross products. Basic properties of matrices. Differentiation from first principles.
Series and limits; Taylor and Maclaurin series.
Numerical solutions of equations by bisection, interpolation, Newton-Raphson. Non-separable differential equations. Arc length and area of a surface of revolution. More advanced properties of vectors
and matrices.
Recommended textbooks
The Department of Physics provides self-study materials to help new students make the transition from school to university level mathematics. These are based on the FLAP (Flexible Learning Approach
to Physics) modules developed by the Open University, and will be provided to students who need them either on or shortly before arrival. The FLAP mathematics modules have also been published as two
books, Basic Mathematics for the Physical Sciences and Further Mathematics for the Physical Sciences, which are suitable for students who want to start straight away. Roughly speaking, the first
volume corresponds to the topics assumed on arrival, while the following volume covers the additional material listed above.
What is the standard A-level offer?
Our standard A-level offer is A*AA to include Mathematics and Physics. The A* must be in Mathematics, Physics or Further Mathematics.
I do not have A-levels, so can I still apply?
We welcome applications from candidates with a wide range of qualifications, including Scottish and Irish Highers, the International Baccalaureate, and many national qualifications. More details can
be found on the University's international qualifications page.
What about mature students?
We welcome applications from mature students. Further details can be found on the guidance for mature students page. Applications are judged in the same way as applications from any other candidate,
but A-level entrance requirements for successful applicants may be varied. Potential applicants are encouraged to contact us, preferably by email, to discuss specific questions about an application.
Note that Harris Manchester College, which is dedicated to mature students, does not accept applications for physics.
How do I apply to do a second undergraduate degree?
Applicants for a second undergraduate degree are treated in exactly the same way as those for a first undergraduate degree except that they are required to submit certain additional materials.
Further details can be found on the 2nd degree information webpage. All applicants applying to read physics as a second undergraduate degree are required to take the admissions test.
Please note that some colleges do not accept applications for second degree students in physics. Successful applicants may be granted 'senior status', that is exemption from the first year course and
examinations, but decisions on this are made by individual colleges and may require additional information to be provided.
Can I take a year out/gap year between school and university?
This depends on your plans. Sponsorship schemes offering a year’s work experience in a physics-related field may be excellent, but some activities are less useful. An athlete who does not train for a
year will be pretty rusty. Likewise a physics (or maths) student who does not use his or her brain for a year will also be pretty rusty and this is the danger of a gap year!
The Institute of Physics has contacts with over 300 companies, and may be able to assist with suitable activities for a gap year. See the Year in Industry programme.
Some of the activities we hear suggested for the gap year could be done during the long vacation between the first and second years. Or you may like to consider taking a gap year after your degree.
Can I apply for both physics and philosophy and physics?
Candidates who apply for physics and philosophy, if invited for interview, will be asked if they would consider a physics place if not made an offer for physics and philosophy.
What is an open offer?
Every year a small number of candidates for the Department of Physics receive an open offer, sometimes called a 'pool place offer'. This is an offer of a place to study physics at the University, but
without a college being specified. Recipients of open offers are guaranteed a place at a college, but which college this is will not be determined until after the A-level results are published in
mid-August. Each year a small number of candidates fail to meet the conditions of their offer, or withdraw for other reasons and the vacancies arising from this are filled by candidates holding open
offers. Every open offer is underwritten by a particular college, and in the unlikely event of a vacancy not arising a place will be made available at the underwriting college.
Open offers are not made for places in physics and philosophy. However, candidates applying for both physics and philosophy and physics may receive an open offer for a place in physics.
What are the fees for a physics degree?
Information on fees and other expenses can be found on the University of Oxford Student Finance webpages. This page contains information for home, EU and international students.
What are the English language requirements?
Candidates must have a high level of competence and fluency in English. The University website has details of the formal English language requirements.
Are there any grants or bursaries?
The University of Oxford offers a very generous bursary system, details of which can be found at the student funding section of the University website. In particular, we would like to encourage UK
students to read about the Oxford Opportunity Bursary. UK students from lower income backgrounds, who also receive full statutory government grants, should be able to meet their entire basic living
costs during term-time without needing to take out a student loan for maintenance. Funding for international students is limited, but a funding search webpage is available.
How do I indicate which year of entry I wish to be considered for?
The UCAS application allows a candidate to indicate whether or not they wish to be considered as a deferred applicant.
Who will receive feedback on my application?
This information is provided on the feedback on admissions decisions webpage.
Can I submit an additional personal statement or can my referee submit an additional reference?
The UCAS application is the main record of the application, and information that is relevant to the admissions process should be included there. Oxford is not the only university that would wish to
know about any particular circumstances that may have impacted upon an applicant’s academic record or mitigating personal circumstances. If there is additional information that is identified after
the UCAS application has been submitted, only then should referees submit further material, directly to the Tutor for Admissions in the college that the student has been invited to attend for
interview and the Academic Administrator at the Department of Physics.
Does the University take contextual data into account?
Wherever possible, your grades are considered in the context in which they have been achieved. See further information on how we use contextual data. Note that this use of contextual information does
not result in either an automatic offer of a place or a lower offer to a candidate. | {"url":"https://www.physics.ox.ac.uk/study/undergraduates/faq/faq-admissions","timestamp":"2024-11-02T21:09:34Z","content_type":"text/html","content_length":"55597","record_id":"<urn:uuid:7bafa2d1-b86e-4902-a70b-3fa83c671b95>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00615.warc.gz"} |
Background: Recent policy interventions have reduced payments to hospitals with higher-than-predicted risk-adjusted readmission rates. rate for each hospital. We then used
hierarchical modeling to estimate the reliability of this quality measure for each hospital. Finally, we determined the proportion of total variation due to three factors: true signal,
statistical noise, and patient factors.
Results: The median number of coronary artery bypass operations performed per hospital over the three-year period was 151 (79-265 cases, 25%-75%
interquartile range). The median risk-adjusted 30-day readmission rate was 17.6% (14.4%-20.8% interquartile range). 55% of the variation in readmission rates was explained by measurement noise,
4% could be attributed to patient characteristics, and the remaining 41% represented true signal in readmission rates. Only 53 hospitals (4.4%) achieved a reliability >0.70; to
achieve this reliability, 599 cases were required over the three-year period. Approximately 1/3 of hospitals (33.7%) achieved a moderate degree of reliability >0.5, which required 218 cases.
Conclusions: The vast majority of hospitals do not achieve a minimum acceptable level of reliability for 30-day readmission rates. Despite recent enthusiasm, readmission rates are not a reliable
measure of hospital quality in cardiac surgery.
Using International Classification of Diseases, Ninth Revision (ICD-9) codes, we identified all patients ages 65-99 undergoing CABG (36.10-19). To reduce the potential for case-mix differences
between hospitals, we excluded patients with procedure codes indicating that other operations were performed concurrently with CABG (i.e. valve surgery) (35.00-99, 36.2, 37.32, 37.34,
37.35), as these patients have higher baseline risks.
Risk-adjusted 30-day readmission rates: We first estimated hospital-specific risk-adjusted 30-day readmission rates. We adjusted
for patient age, gender, race, urgency of operation, median ZIP-code income, year of operation, and comorbidities. Patient comorbidities were adjusted using the methods of Elixhauser,
a validated tool developed by the Agency for Healthcare Research and Quality for use with administrative data [7]. This method identifies 29 individual comorbidities from secondary
ICD-9 diagnosis codes, which are treated as separate dichotomous independent variables. We used logistic regression to predict the probability of readmission at 30 days for each
patient. Predicted readmission probabilities were then summed at each hospital to estimate the expected number of readmissions. We then calculated the ratio of
observed to expected readmissions and multiplied this by the average readmission rate to determine risk-adjusted readmission rates.
Estimating reliability: We next computed the reliability of risk-adjusted readmission rates at each hospital. Reliability, measured from 0 to 1, describes the proportion of observed hospital
variation that can be explained by true differences in quality [8, 9]. For example, a reliability of 0.8 implies that 80% of the variance in outcomes is due to true differences in performance,
while 20% of the variance is due to statistical "noise" or measurement error. Reliability can also be considered the probability that the same outcomes will be repeated from year to year. To
perform this computation we used the following formula: reliability = signal/(signal + noise) [8, 9]. A commonly used cutoff for acceptable reliability when comparing overall performance of groups
is 0.7 [8, 9]. In order to determine signal, we first performed risk adjustment using the same patient characteristics described previously to combine all patient risk factors into a single
predicted risk score. The risk score, expressed as log(odds) of readmission, was then added as a single independent variable in subsequent modeling. We used log(odds) of readmission rather than
predicted probability because log(odds) are linear with respect to the outcome variables. We then created a subsequent model for readmission using hierarchical logistic regression, an advanced statistical
technique that explicitly models variance at multiple levels (i.e. patients surgeons. | {"url":"https://www.researchatlanta.org/2016/06/background-recent-policy-interventions-possess-reduced-obligations-to-clinics-with-higher-than-predicted/","timestamp":"2024-11-04T08:27:12Z","content_type":"text/html","content_length":"35096","record_id":"<urn:uuid:609dac77-24e0-44c3-91ac-879b3b117341>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00346.warc.gz"} |
Sudoku Algorithms
A picture on the left represents an example of a Chain technique. If we made an assumption that the cell A1 has value '9' then the cell A7 should have value '3', D7 should have value '9', and D2
value '5'. In this case, candidates 9 and 5 can be excluded from the cells B2 and C2. This assumption leads to a contradiction, since in the box (1x1) both cells B2 and C2 cannot have the same value
'3'. Since our assumption that cell A1 has value 9 leads to a contradiction, we can remove our assumed value 9 from the list of possible candidates in the cell A1. | {"url":"https://www.puzzlemystery.com/Sudoku/SudokuTutorial/Algorithms/Chains.aspx","timestamp":"2024-11-09T19:18:25Z","content_type":"text/html","content_length":"67154","record_id":"<urn:uuid:ff4a7171-ed72-47e9-ae82-5913d0719282>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00139.warc.gz"} |
Law of the unconscious statistician
Accidentally correct expected values
The law of the unconscious statistician is the theorem for calculating the expected value of the transformation $g(X)$ (of a known random variable $X$) without knowing the probability distribution of
$g(X)$. In the discrete case, we have
$\mathbb{E}[g(X)] = \sum_x{g(x)f_X(x)}$
and in the continuous case
$\mathbb{E}[g(X)] = \int_{-\infty}^\infty{g(x)f_X(x)dx}$
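A quick numerical sanity check of the discrete case (the toy pmf and the helper code below are my own, not part of the original note): computing $\mathbb{E}[g(X)]$ straight from the pmf of $X$ agrees with first deriving the distribution of $Y = g(X)$ and taking its ordinary expectation.

from collections import defaultdict

pmf_x = {-1: 0.2, 0: 0.3, 2: 0.5}       # assumed pmf of X
g = lambda x: x ** 2

lotus = sum(g(x) * p for x, p in pmf_x.items())

pmf_y = defaultdict(float)              # push the probability mass through g
for x, p in pmf_x.items():
    pmf_y[g(x)] += p
direct = sum(y * p for y, p in pmf_y.items())

print(lotus, direct)                    # both print 2.2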
This theorem is assigned its name due to anecdotes of students making use of it without knowledge of its origins. This is due to its fundamental, intuitive looking form that one might expect (ha) is
definitionally true, whereas in reality it is the consequence of rigorous derivation. | {"url":"https://samgriesemer.com/Law_of_the_unconscious_statistician","timestamp":"2024-11-07T15:58:48Z","content_type":"text/html","content_length":"16315","record_id":"<urn:uuid:a4bb3a6c-cfa2-4d20-99b8-a7684251e72d>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00623.warc.gz"} |
flucq (c49b1)
Combined QM/MM Fluctuating Charge Potential for CHARMM
Ben Webb, ben@bellatrix.pcl.ox.ac.uk, and Paul Lyne
The fluctuating charge potential (FlucQ or FQ) is based on the method
developed by Rick, Stuart and Berne (Rick et. al., J. Chem. Phys. 101
(7) 1994 p6141) for molecular dynamics, and extended for hybrid QM/MM
simulations (Bryce et. al., Chem. Phys. Lett. 279 1997, p367). It is
designed primarily for computationally efficient (approx. 10% overhead)
modelling of solvent polarisation in hybrid QM/MM systems, and as such
is implemented for QUANTUM, CADPAC and GAMESS codes, although the
current implementation is easily extensible to any atom type and bond.
* Syntax:: Syntax of the FLUCQ command
* Activation:: Starting FlucQ from a CHARMM input file
* Charge solution:: Solving for exact charges
* Reference energy:: Setting the ``zero'' for FlucQ polarisation
* Caveats:: Changes to be aware of; known limitations
* Using FlucQ with QM:: Necessary changes for use with CADPAC or GAMESS
* Examples:: Simple uses of the FLUCQ command
* Implementation:: Mathematical and computational details
[SYNTAX FLUCq]
FLUCq { ON init-spec (atom selection) }
{ OFF }
{ PRINt }
{ EXACt exac-spec }
{ REFErence { GAS exac-spec } }
{ { SOLVent exac-spec } }
{ { CURRent } }
{ { ENERgy real } }
DYNAmics ... thermo-spec
init-spec::= [GROUp] [NOFIxed]
exac-spec::= [TIMEstep real] [ZETA real]
[TQDEsired real] [PRINt]
thermo-spec::= [FQTEmp real] [FQUNit integer]
{ FQTCoupling real } ! weak coupling
{ FQMAss real nose-spec } ! Nose-Hoover
{ FQSCale integer } ! velocity scaling
nose-spec::= [FQTOlerance real] [FQITerations integer]
FlucQ code is enabled within CHARMM by means of the FLUCQ ON
command. Future energy calculations will then include an extra energy
term - FQPO, the FlucQ polarisation energy, while dynamics simulations
involve a new energy property - FQKI, the FlucQ charge kinetic energy.
Once FlucQ is active, the selected atoms are treated as extra degrees
of freedom, free to fluctuate under the charge forces in the system,
and, by assigning each atom type a fictional charge "mass", these
charges can be accelerated in a conventional dynamics simulation, in a
completely analogous way to the Cartesian degrees of freedom.
If atoms are selected by the FLUCQ command which cannot be modelled
(i.e. they are QM atoms, or have no FlucQ parameters defined for them)
they will be automatically removed from the selection.
The FlucQ polarisation energy, FQPO, is an intramolecular
interaction; in full electronegativity equalisation, every atom
interacts through space, by means of a modified Coulomb-type
interaction, with every other atom in the molecule. In this
implementation, the only interactions calculated are those along
defined CHARMM bonds (even those with zero force constants).
[GROUp] conserves charge within groups, rather than the default
behaviour of conserving charge within residues; this prohibits charge
transfer between groups. Note that the FlucQ model makes no restriction
on the degree of charge transfer within each residue or group, or the
distance over which this transfer can occur.
[NOFIxed] instructs FlucQ that some or all of the bond lengths
between FlucQ-selected atoms are free to change during a simulation.
This forces the FlucQ code to recalculate the intramolecular
interaction at each step; since this is a costly calculation, the
default is to use interactions parameterised for equilibrium bond
lengths, with which it is strongly recommended to combine constraint
methods such as SHAKE BONH PARA.
The FLUCq PRINt command simply prints the current values of all
charges and charge forces (from the last energy calculation). A similar
effect can also be achieved with the standard SCALAR command (see
scalar.info for information on other FlucQ parameters available with the
SCALAR command).
The FLUCQ OFF command disables the FlucQ code. Further energy
calculations will not include FlucQ terms. Note, however, that if the
charges have been modified by FlucQ, they will remain at their altered
Default behaviour during dynamics is to allow the charge degrees of
freedom to fluctuate freely; however they can be thermostatted at a given
charge "temperature" by passing extra options to the DYNAmics command:-
[FQTEmp <real>] specifies the charge temperature (default 0).
[FQTCoupling <real>] (default 0) if set, uses the Berendsen weak
coupling algorithm to thermostat the charges. The coupling parameter
is given in 1/ps, and is analagous to the TCONS/TCOU dynamics options.
[FQMAss <real>] (default 0) if set, uses Nose-Hoover thermostatting,
with the given mass. The tolerance of the Nose-Hoover iterations can be
set with FQTOlerance (default 1.0d-7), and the maximum number of
iterations with FQITerations (default 100).
Thermostatting parameters (number of iterations, scale factor, etc.)
can be written out to a given unit number at every dynamics step by using
the FQUNit (default -1: no write) option.
[FQSCal <integer>] (default 0) if set, performs simple charge velocity
scaling every FQSCal dynamics steps.
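For orientation, the weak-coupling option (FQTCoupling) above refers to the standard Berendsen scheme, in which the charge velocities are rescaled each step so the charge temperature relaxes
towards the target. The snippet below is only the textbook scale factor, not an excerpt from the CHARMM source, and the example numbers are assumptions.

import math

def berendsen_scale(current_T, target_T, dt, tau):
    # Scale factor applied to the charge velocities each dynamics step;
    # tau is the relaxation time (the inverse of the coupling given in 1/ps).
    if current_T <= 0.0:
        return 1.0
    return math.sqrt(1.0 + (dt / tau) * (target_T / current_T - 1.0))

# e.g. dt = 0.001 ps and FQTCoupling 0.1 (so tau = 10 ps), cooling a 5 K charge bath
print(berendsen_scale(current_T=5.0, target_T=1e-6, dt=0.001, tau=10.0))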
The initialization process dimensions FlucQ with the current state
of the system. The QM region, if any, is detected, and the FlucQ atom
selection will then interact with the QM region. Thus, the FLUCQ
command should be placed after any QUANTUM, CADPAC, or GAMESS command,
and if the total number of atoms in the system is modified, FlucQ
should be disabled prior to this change and reinitialized afterwards.
To skip FlucQ energy calculations entirely, use the SKIP FQPOL FQKIN
command. The QM/MM FlucQ interaction is calculated in line with the
standard QM/MM electrostatic interaction, and as such is suppressed
with the SKIP QMEL command. Finally, the intermolecular contribution
to FlucQ is calculated in line with the standard electrostatic
interaction, and so is disabled with the SKIP ELEC command.
No FlucQ interaction energies are calculated between atoms
constrained with the CONS FIX command, as electrostatic energies are
not calculated for these atoms.
FlucQ parameters are specified in the parameter file, with the FLUCQ
keyword. The section should look like the following:-
atom chi zeta prin mass
Here, chi is an electronegativity measure (in Kcal/mol/e), zeta a
Slater orbital exponent (in 1/Angstrom), prin the Slater orbital
principal quantum number, and mass the charge mass (in (ps/e)**2
Kcal/mol) from the FlucQ model. For example, Rick's original
parameters for TIP4P hydrogen and M-site would be written as:-
HP 10.00 0.90 1 6.0d-5
MP 78.49 1.63 2 6.0d-5
The FlucQ model relies on keeping charge kinetic energy at a
temperature close to zero Kelvin, to maintain Born-Oppenheimer
separation between it and the other degrees of freedom. Thus, it is
best to acquire a minimum energy charge configuration for your system
before any dynamics simulation.
Two methods are available for such "charge solution". The first is
to use a standard CHARMM minimisation; FlucQ charges will be minimised
concurrently with the Cartesian coordinates. The second method is to
apply dissipative Langevin dynamics to the charges only, to achieve
minimum energy charges for fixed atomic coordinates; this is performed
by means of the FLUCq EXACt command. The code prints a running count
of the number of iterations required to quench the kinetic energy.
[TIMEstep real] sets the timestep to be used in Langevin dynamics,
by default 0.001ps.
[ZETA real] sets the frictional coefficient, by default 1600.
[TQDEsired real] sets the desired final temperature, by default
1.0d-6 K.
[PRINt] if set, prints the final charges.
By default, the charge polarisation energy FQPOL reported by FlucQ
is given relative to all atomic charges being zero. More generally, it
is useful to define this term relative to an arbitrary zero. This
reference energy can be set with the FLUCQ REFErence command.
FLUCQ REFE GAS disables all intermolecular interactions, solves for
exact charges, and then uses the resultant energy as the reference.
This essentially defines the polarisation energy relative to the energy
that the system would have in the gas phase, with all residues or
groups infinitely separated.
FLUCQ REFE SOLVENT merely disables the QM/MM interaction, and then
sets the reference energy similarly. This shows polarisation as a
function purely of the QM system.
FLUCQ REFE CURRent defines the current polarisation energy (from the
last energy calculation) to be zero - i.e. the reference energy is
increased by the current energy.
FLUCQ REFE ENERgy real sets the reference energy to a user-specified
Bear in mind that REFE GAS exac-spec is essentially identical to the
series of CHARMM commands:-
FLUCQ REFER ENER 0
FLUCQ EXACT exac-spec
(The only difference is that any SKIP command in force before REFE
GAS will remain in force afterwards, whereas the above example will
re-include calculation of all energy terms at completion. Also, by
changing the second line in the above example to SKIP QMEL QMVDW, the
action of the REFE SOLVENT command can be reproduced.)
The fluctuating charge code alters the atomic charges during
dynamics runs. Thus, the charges cannot be treated as constant and
restart and trajectory files must include atomic charges. Files read or
written during FlucQ-enabled dynamics runs will be assumed to contain
charge information, and so will be a) somewhat larger and b)
incompatible with non-FQ files. (If FlucQ is compiled in but not
activated with FLUCQ ON, the restart and trajectory file formats are
unchanged from standard CHARMM.)
The FlucQ model is implemented primarily for the study of QM/MM
systems, with a fluctuating charge SHAKE-constrained MM solvent. Hence,
intramolecular interactions are restricted to those between FlucQ atoms
along bonds. This complicates the application of the model to large systems,
as for full electronegativity equalisation, every atom must interact with
every other atom in the group.
FlucQ is not implemented for all nonbond routines, in particular the
CFF, MMFF, CRAYVEC and PARVECT codes. FlucQ also works only with standard
Ewald, and not PME.
In order for the QM/MM calculation to be properly calculated, FlucQ
requires data to be passed back to it from the QM codes (in particular
the density matrix and one-electron integrals). Changes have been made
to the QUANTUM interface for this to be carried out correctly; however,
the GAMESS(US) and CADPAC codes, not being distributed with CHARMM, will
require modification. These modifications will not affect the
functioning of standard QM/MM calculations, when FlucQ is disabled.
GAMESS-UK (versions 6.3.1 and later) should incorporate the required
modifications. Patches for GAMESS(US) and CADPAC can be found in the
source/flucq/ directory in the main CHARMM distribution. They should be
applied in the top directory of the relevant QM code distribution,
i.e. gamess-us.patch and cadpac.patch should be applied in the
source/gamint/gamess/ and source/cadint/cadpac/ directories,
respectively. The patch files are standard unified diffs, and so should
be applied with a command similar to "patch -p1 < gamess-us.patch"
The following example initialises the FlucQ code for a system of SPC
waters, before calculating the gas phase energy, and then calculating
the self-polarisation of the solvent. Finally, the total energy,
including the self-polarisation relative to the gas phase, is printed,
and the charge forces from this energy calculation are displayed.
See the testcase test/c28test/fqam1.inp for an example of a FlucQ
dynamics simulation.
The standard CHARMM nonbond routines and QM codes have been modified
so as to sum the electrostatic interaction energy between
charge "I" and all other nonbond pairs or QM atoms into index "I" of
the fluctuating charge array FQCFOR. The FlucQ model actually requires
the term dE/dQ, so these totals are divided by charge by the FlucQ
energy routine (as all such interactions are linear in charge). Note
that this gives erroneous results for FlucQ sites with exactly zero
charge; however, the CHARMM nonbond routines calculate no interactions
for such systems anyway.
Finally, the intramolecular terms, as contributions to dE/dQ, are
summed into the FQCFOR array, and charge forces are calculated from
these electronegativities by mass-weighted averaging over residues or
groups. These forces are then used by the standard minimisers, or by a
standard Verlet integrator during dynamics.
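As a conceptual sketch of that last step only (this is not the CHARMM source, and the exact weighting is my assumption about what "mass-weighted averaging" means here), subtracting a per-residue
weighted mean electronegativity gives charge accelerations that sum to zero within each residue, so the residue's net charge is conserved:

import numpy as np

def charge_forces(dE_dq, q_mass, residue):
    dE_dq = np.asarray(dE_dq, dtype=float)
    q_mass = np.asarray(q_mass, dtype=float)
    residue = np.asarray(residue)
    force = np.zeros_like(dE_dq)
    for res in np.unique(residue):
        sel = residue == res
        # Constraint term: the 1/mass-weighted mean plays the role of a
        # Lagrange multiplier keeping the total charge of the residue fixed.
        lam = np.average(dE_dq[sel], weights=1.0 / q_mass[sel])
        force[sel] = -(dE_dq[sel] - lam)
    return force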
For further information, see the following:-
MM system; (Rick et. al., J. Chem. Phys. 101 (7) 1994 p6141)
QM/MM interaction; (Bryce et. al., Chem. Phys. Lett. 279 1997, p367)
Tag Table:
Node: Top 103
Node: Syntax 1371
Node: Activation 2114
Node: Charge solution 6634
Node: Reference energy 7793
Node: Caveats 9466
Node: Using FlucQ with QM 10631
Node: Examples 15843
Node: Implementation 16424
End Tag Table | {"url":"https://academiccharmm.org/documentation/version/c49b1/flucq","timestamp":"2024-11-14T08:19:42Z","content_type":"text/html","content_length":"29836","record_id":"<urn:uuid:a6e42b0b-7555-4d91-90c8-b99d0f2ee924>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00029.warc.gz"} |
Learning Discrete Mathematics and Computer Science
via Primary Historical Sources
• J. Barnett, J. Lodder, D. Pengelley. Teaching and Learning Mathematics from Primary Historical Sources, in PRIMUS (Problems, Resources, and Issues in Mathematics Undergraduate Studies), in press.
• J. Barnett, G. Bezhanishvili, J. Lodder, D. Pengelley. Teaching Discrete Mathematics Entirely from Primary Historical Sources, in PRIMUS (Problems, Resources, and Issues in Mathematics
Undergraduate Studies), in press.
• J. Barnett, J. Lodder, D. Pengelley. Projects for students of discrete mathematics via primary historical sources: Euclid on his algorithm, in Proceedings of HPM 2012, Quadrennial meeting of the
International Study Group on the Relation Between History and Pedagogy of Mathematics, Daejeon, Korea, July, 2012, 279--294.
• D. Pengelley. Teaching with primary historical sources: Should it go mainstream? Can it?, in "Recent Developments on Introducing a Historical Dimension in Mathematics Education" (editors V.Katz,
C.Tzanakis), MAA, 2011, pp.1-8.
• J.Barnett, J. Lodder, D. Pengelley, I. Pivkina, D. Ranjan. Designing student projects for teaching and learning discrete mathematics and computer science via primary historical sources, in
"Recent Developments on Introducing a Historical Dimension in Mathematics Education (editors V.Katz, C.Tzanakis)", MAA, 2011, pp. 189-201.
• I. Pivkina. Original Historical Sources in Data Structures and Algorithms Courses, in The Journal of Computing Sciences in Colleges, volume 26, no. 4, pp. 204-210 (April 2011).
• I. Pivkina, D. Ranjan, and J. Lodder, "Historical Sources as a Teaching Tool", in Proceedings of the 40th ACM technical symposium on Computer science education 2009, Chattanooga, TN, USA, March
04 - 07, 2009, pp. 401-402.
• HPM 2008 History and Pedagogy of Mathematics, The HPM Satellite Meeting of ICME, July 14 - 18, 2008, Mexico City, MEXICO.
• International Congress on Mathematical Education (ICME), July 2008, in Monterrey, Mexico: J. Barnett, G. Bezhanishvili, H. Leung, J. Lodder, D. Pengelley, D. Ranjan Historical Projects in Discrete
Mathematics and Computer Science
• D. Pengelley, I. Pivkina, D. Ranjan, and K. Villaverde "A Project in Algorithms based on a Primary Historical Source about Catalan Numbers", in Proceedings of the 37th SIGCSE technical symposium
on Computer science education 2006, Houston, Texas, USA, March 03 - 05, 2006, pp. 318-322. | {"url":"https://www.cs.nmsu.edu/historical-projects/papers.html","timestamp":"2024-11-02T14:51:04Z","content_type":"text/html","content_length":"4649","record_id":"<urn:uuid:26644b2b-1485-44cc-8ba4-a20161881239>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00825.warc.gz"} |
The simple reason a viral math equation stumped the internet
For about a decade now, mathematicians and mathematics educators have been weighing in on a particular debate rooted in school mathematics that shows no signs of abating.
published : 06 April 2024
The debate, covered by Slate, Popular Mechanics, The New York Times and many other outlets, is focused on an equation that went so “viral” that it, eventually, was lumped with other phenomena that
have “broken” or “divided” the internet.
On the off chance you’ve yet to weigh in, now would be a good time to see where you stand. Please answer the following: 8÷2(2+2) = ?
If you’re like most, your answer was 16 and are flabbergasted someone else can find a different answer. Unless, that is, you’re like most others and your answer was 1 and you’re equally confused
about seeing it another way. Fear not, in what follows, we will explain the definitive answer to this question — and why the manner in which the equation is written should be banned.
Our interest was piqued because we have conducted research on conventions about following the order of operations — a sequence of steps taken when faced with a math equation — and were a bit
befuddled with what all the fuss was about.
Clearly, the answer is…
Two viable answers to one math problem? Well, if there’s one thing we all remember from math class: that can’t be right!
Many themes emerged from the plethora of articles explaining how and why this “equation” broke the internet. Entering the expression on calculators, some of which are programmed to respect a
particular order of operations, was much discussed.
Others, hedging a bit, suggest both answers are correct (which is ridiculous).
The most dominant theme simply focused on implementation of the order of operations according to different acronyms. Some commentators said people’s misunderstandings were attributed to incorrect
interpretation of the memorized acronym taught in different countries to remember the order of operations like PEMDAS, sometimes used in the United States: PEMDAS refers to applying parentheses,
exponents, multiplication, division, addition and subtraction.
Image of the acronym PEMDAS spelled out, referring to parentheses, exponents, multiplication, division, addition and subtraction.
By contrast, Canadians may be taught to remember BEDMAS, which stands for applying brackets, exponents, division, multiplication, addition and subtraction. Someone following this order would have 8÷2
(2+2) become 8÷2(4) thanks to starting with brackets (the same as parentheses). Then, 8÷2(4) becomes 4(4) because there are no exponents and “D” stands for division. Lastly, according to the “M”
for multiplication, 4(4)=16.
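In a programming language, where the multiplication sign cannot be omitted, the expression has to be spelled out and the ambiguity disappears. For example, in Python:

print(8 / 2 * (2 + 2))    # 16.0: division and multiplication are applied left to right
print(8 / (2 * (2 + 2)))  # 1.0:  the "answer is 1" reading needs extra parentheses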
Do not omit multiplication symbol
For us, the expression 8÷2(2+2) is syntactically wrong.
Key to the debate, we contend, is that the multiplication symbol before the parentheses is omitted.
Such an omission is a convention in algebra. For example, in algebra we write 2x or 3a which means 2 × x or 3 × a. When letters are used for variables or constants, the multiplication sign is
omitted. Consider the famous equation E = mc², which suggests the computation of energy as E = m × c².
The real reason, then, that 8÷2(2+2) broke the internet stems from the practice of omitting the multiplication symbol, which was inappropriately brought to arithmetic from algebra.
Inappropriate priority
In other words, had the expression been correctly “spelled out” that is, presented as “8 ÷ 2 × (2 + 2) = ? ”, there would be no going viral, no duality, no broken internet, no heated debates. No fun!
Figure: Had the problem been correctly presented as 8 ÷ 2 × (2 + 2) = ?, there would be no heated debate.
Ultimately, omission of the multiplication symbol invites inappropriate priority to multiplication. All commentators agreed that adding the terms in the brackets or parentheses was the appropriate
first step. But confusion arose given the proximity of 2 to (4) relative to 8 in 8÷2(4).
We want it known that writing 2(4) to refer to multiplication is inappropriate, but we get that it’s done all the time and everywhere.
Nice symbol for multiplication
There is a very nice symbol for multiplication, so let’s use it: 2 × 4. Should you not be a fan, there are other symbols, such as 2•4. Use either, at your pleasure, but do not omit.
As such, for the record, the debate over one versus 16 is now over! The answer is 16. Case closed. Also, there should have never really been a debate in the first place. | {"url":"https://www.function-variation.com/article60","timestamp":"2024-11-06T12:13:29Z","content_type":"text/html","content_length":"19114","record_id":"<urn:uuid:fa01b50e-3150-49be-8cd9-3787af9164c7>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00309.warc.gz"} |
The R(S^1)-graded equivariant homotopy of THH(F_p) Algebraic and Geometric Topology 8(4):1961-1987, 2008. Published version or arXiv.
On the K-theory of truncated polynomial algebras over the integers Joint with V. Angeltveit and L. Hesselholt Journal of Topology 2(2):277-294, 2009. Published version or arXiv.
RO(S^1)-graded TR-groups of F_p, Z, and l Joint with V. Angeltveit Journal of Pure and Applied Algebra, 215(6):1405-1419, 2011. Published version or arXiv.
On the algebraic K-theory of the coordinate axes over the integers Joint with V. Angeltveit Homology, Homotopy and Applications, 13(2):103-111, 2011. Published version or arXiv.
On the algebraic K-theory of truncated polynomials in several variables Joint with V. Angeltveit, M. Hill, and A. Lindenstrauss Journal of K-theory, 13(1):57-81, 2014. Published version or arXiv.
Interpreting the Bokstedt smash product as the norm Joint with V. Angeltveit, A. Blumberg, M. Hill, and T. Lawson Proceedings of the AMS, 144:5419-5433, 2016. Published version or arXiv.
Computational tools for topological coHochschild homology Joint with A.M. Bohmann, A. Hogenhaven, B. Shipley, and S. Ziegenhagen Topology and its Applications, 235:185-213, 2018. Published version or
Topological cyclic homology via the norm Joint with V. Angeltveit, A. Blumberg, M. Hill, T. Lawson, and M. Mandell Documenta Mathematica, 23:2101-2163, 2018. Published version or arXiv.
The Witt vectors for Green functors Joint with A. Blumberg, M. Hill, and T. Lawson Journal of Algebra, 537:197-244, 2019. Published version or arXiv.
Computational tools for twisted topological Hochschild homology of equivariant spectra Joint with K. Adamyk, K. Hess, I. Klang, and H. Kong Topology and its Applications, 316, 2022. Published version
or arXiv.
Stable Categories and Structured Ring Spectra Co-edited together with A. Blumberg and M. Hill MSRI Book Series Number 69, Cambridge University Press, 2022.
Topological coHochschild Homology and the Homology of Free Loop Spaces Joint with A.M. Bohmann and B. Shipley Mathematische Zeitschrift, 301(1):411-454, 2022. Published version or arXiv.
A shadow perspective on equivariant Hochschild homologies Joint with K. Adamyk, K. Hess, I. Klang, and H. Kong To appear: International Mathematics Research Notices (IMRN), 2022. arXiv.
Real topological Hochschild homology via the norm and Real Witt vectors Joint with G. Angelini-Knoll and M. Hill Submitted. arXiv:2111.06970 (2021). arXiv.
National Science Foundation Research Training Groups (RTG) Grant, co-PI RTG: Algebraic and Geometric Topology at Michigan State 2022-2027 National Science Foundation Research Grant Algebraic
K-Theory, Topological Hochschild Homology, and Equivariant Homotopy Theory 2021-2024 National Science Foundation Focused Research Group (FRG) Grant, co-PI Trace Methods and Applications for
Cut-and-Paste K-Theory 2021-2024 National Science Foundation Research Grant Algebraic K-Theory and Equivariant Homotopy Theory 2018-2021 National Science Foundation CAREER Grant Equivariant Homotopy
and Algebraic K-Theory 2012-2018 National Science Foundation Research Grant Algebraic K-Theory and Equivariant Homotopy Theory 2010-2013 National Science Foundation Graduate Research Fellowship
2004-2007 National Defense Science and Engineering Graduate Fellowship 2002-2004 | {"url":"https://users.math.msu.edu/users/gerhar18/Research.html","timestamp":"2024-11-13T06:17:38Z","content_type":"application/xhtml+xml","content_length":"11882","record_id":"<urn:uuid:c667de1f-ad80-44d7-a46c-ebcf1f32570c>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00714.warc.gz"} |
4. Generating random series
The function rnorm() automatically generates a series of random values which are normally distributed. This comes in quite handy when you want to test functions, statistical analyses or scripts on a
random dataset, and you need to be sure that the content is normally distributed beforehand.
rnorm() needs the following information:
• n, the number of values needed,
• mean, the mean of the values to be generated,
• sd, the standard deviation of the data set
The syntax is as follows:
[code language=”r”]
rnorm(n, mean, sd)
Here is an example where we want to generate a series (called random.set) of 200 values, whose mean is 42 and whose standard deviation is 8:
[code language=”r”]
random.set <- rnorm(200, 42, 8)
And the first values in the data set look like this:
This is what you get if you plot the data set with hist()
[code language=”r”]
hist(random.set)
A density plot with plot(density()) can also be useful to visualize the distribution:
[code language=”r”]
plot(density(random.set))
Fant du det du lette etter? Did you find this helpful? | {"url":"https://biostats.w.uib.no/4-generating-random-series/","timestamp":"2024-11-13T18:55:35Z","content_type":"text/html","content_length":"56082","record_id":"<urn:uuid:2749cade-c0f1-4ada-a2ae-08186e8fc0ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00308.warc.gz"} |
Nickel Mountain CPR - Investerare Bluelake Mineral
Carrer portal Knowledge Base
Ramsay and Huber (1987, fig. 26.39, p. 625) illustrated such a hypothetical structure. By analogy with the geometry of faults and shears, the en echelon vein array might be expected to form some
generalised oblate ellipsoid with a tip- Echelon creates a torrent of toxic blood, initially inflicting 5,532 Shadow damage and then inflicting 2,655 Shadow damage every 1 sec to all players in the
area. Sigmoidal en echelon fractures (veins) are characteristically common in rocks which have deformed in brittle-ductile regime. These veins occur either in a single set or conjugate sets.
They are subsequently filled by precipitation of a mineral, in this case calcium carbonate. These fractures form perpendicular to the minimum compressive stress. Marble Canyon. Death Valley National
En echelon: parallel structural features (veins, faults, etc.) that are offset from each other.
The ridge that dips to the left is a fault surface. The small red lines show some en echelon veins that occur along it. It is commonly accepted that arrays of en échelon veins occur either singularly or in conjugate pairs. Bulk finite plane strain is accompanied by these singular or conjugately arranged arrays when their formation and subsequent displacement occur in the same kinematic framework. En echelon veins (from Wikipedia, the free encyclopedia): left-lateral en echelon tension gash fractures in pelitic strata near Newquay, Cornwall, UK (the car key is about 7.5 cm long). This provides a new mechanism for en échelon vein formation in rock. Numerical comparison of sigmoidal veins formed by brittle fracture. Two distinct types of en-echelon vein arrays are recognised: those which form in active ductile shear zones and those which form by primary nucleation. En echelon veins are parallel oriented structural features in rocks that result from shear forces.
The en echelon veins extend into and out of the exposure as elliptical cylinders. Numerical models are used to understand the evolution of mode I (opening) fractures from spatially random distributions in a brittle elastic material. En échelon arrays commonly develop because mechanical fracture interaction promotes growth for this geometry.
En echelon veins: in structural geology, en echelon veins are structures within rock caused by noncoaxial shear (World Heritage Encyclopedia, the aggregation of the largest online encyclopedias available, and the most definitive collection ever assembled). En echelon veins, DNP&P, Alaska: 3D model by sean.bemis (@sean.bemis) [0a84f1a]. A subdivision of en échelon vein arrays is proposed,
based on the geometry of sets of transcurrent arrays from the Lower Carboniferous rocks in the SE midlands of Ireland. The parameter δ (vein-zone angle) is proportional to the array orientation (φ)
for arrays developed through secondary failure in shear zones (shear arrays). For arrays developed during propagation of an extension vein, the relationship differs. En-echelon veins in a shear zone: these parallel structures originate as tension fractures in the rock.
En echelon veins are useful shear sense indicators (Fig. 9, Fig. 10). They form in two stages. The first stage is the formation of brittle tension gashes oriented approximately 45° from the fault trend (Olson & Pollard, 1991).
116k members in the geology community. | {"url":"https://hurmanblirrikpnuj.web.app/38204/93282.html","timestamp":"2024-11-10T18:22:13Z","content_type":"text/html","content_length":"17116","record_id":"<urn:uuid:4800c2ed-5ae4-4a8b-911b-04d49e7e3d34>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00816.warc.gz"} |
041 BASIC MATHEMATICS
(For School Candidates Only)
Time: 3 Hours Tuesday, 9th October 2012 a.m.
1. This paper consists of sections A and B.
2. Answer all questions in section A and four (4) questions from section B. Each question in section A carries 6 marks while each question in section B carries 10 marks.
3. All necessary working and answers for each question attempted must be shown clearly.
4. Mathematical tables may be used.
5. Calculators and cellular phones are not allowed in the examination room.
6. You are advised to spend not more than two (2) hours on section A and the remaining time on section B.
7. Write your Examination Number on every page of your answer booklet(s).
8. The following constants may be used:
(a) The radius of the earth R = 6370km
(b) π =
SECTION A (60 Marks)
Answer all questions in this section.
1. (a) By using mathematical tables, evaluate
(b) Rationalize
2. (a) Find the value of x for which 2^x ? 16 = [8]^1[x]
(b) Solve log[a](x^2 + 3) − log[a]x = 2log[a]2
3. (a) Mr. Bean lived a quarter of his life as a child, a fifth as a teenager and a third as an adult. He then spent 13 years in his old age. How old was he when he died?
(b) A and B are subsets of the universal set U . Find n(A∩B) given that n(A) = 39, n(A′∩B′) = 4, n(B′) = 24 and n(U) = 65 .
4. Given that a = (3, 4), b = (1, 4) and c = (5,2) determine:
(a) d = a + 4b – 2c ?
(b) magnitude of vector d , leaving your answer in the form m√n ?
(c) the direction cosines of d and hence show that the sum of the squares of these direction cosines is one.
5. (a) If polygons X and Y are similar and their areas are 16cm^2 and 49cm^2 respectively, what is the length of a side of polygon Y if the corresponding side of polygon X is 28cm?
(b) (i) Show whether triangles PQR and ABC are similar or not
(ii) Find the relationship between y and x in the triangles given above.
6. (a) The power(P) used in an electric circuit is directly proportional to the square of the current (I).
When the current is 8 Ampere (A), the power used is 640 Watts (W).
(i) write down the equation relating the power (P) and the current (I).
(b) If x * y is defined as x + y), find (5 *− 2) * (3 *− 4) .
7. (a) By selling an article at shs. 22,500/= a shopkeeper makes a loss of 10%. At what price must the shopkeeper sell the article in order to get a profit of 10% ?
(b) An alloy consists of three metals A, B and C in the proportion A : B = 3 : 5 and B : C = 7 : 6 Calculate the proportion A : C.
8. (a) If the 5^th term of an arithmetic progression is 23 and the 12^th term is 37, find the first term and the common difference.
(b) Find the sum of the first four terms of a geometric progression which has a first term of 1 and a common ratio of
9. (a) Find the length AC from the figure below:
(b) A ladder reaches the top of a wall 18m high when the other end on the ground is 8m from the wall. Find the length of the ladder.
10. (a) Solve for x ^if [x][−]^6[4] = 1 + ^4[x]
(b) If the sum of two numbers is 3 and the sum of their squares is 29, find the numbers.
SECTION B (40 Marks)
Answer any four (4) questions from this section.
11. Anna and Mary are tailors. They make x blouses and y skirts each week. Anna does all the cutting and Mary does all the sewing. To make a blouse it takes 5 hours of cutting and 4 hours of sewing.
To make a skirt it takes 6 hours of cutting and 10 hours of sewing. Neither tailor works for more than 60 hours a week.
(a) For sewing show that 2x + 5y ≤ 30
(b) Write down another inequality in x and y for the cutting.
(c) If they make at least 8 blouses each week, write down another inequality.
(d) Using 1cm to represent 1 unit on each axis, show the information in parts (a), (b) and (c) graphically. Shade only the required region.
(e) If the profit on a blouse is shs. 3,000/= and on a skirt is shs. 10,000/=, calculate the maximum profit that Anna and Mary can make in a week.
12. In a survey of the number of children in 12 houses, the following data resulted: 1, 2, 3, 4, 2, 2, 1, 3, 4, 3, 5, 3
(a) Show this data in a frequency distribution table.
(b) Draw a histogram and a frequency polygon to represent this data.
(c) Calculate the mean and mode number of children per house.
13. (a) An open rectangular box measures externally 32cm long, 27cm wide and 15cm deep. If the box is made of wood 1cm thick, find the volume of wood used.
(b) Find the distance (in km) between towns P(12.4°S, 30.5°E) and Q(12.4°S, 39.8°E) along a line of latitude, correct to 4 decimal places.
14. (a) The following balances were extracted from the ledgers of Mr. and Mrs. Mkomo business on 31st January. Prepare a trial balance.
Capital 30,000/= Insurance 3,000/=
Furniture 25,000/= Cash 18,000/=
Motor vehicle 45,000/= Discount received 7,000/=
Sales 68,000/= Discount allowed 4,000/=
Purchases 54,000/= Drawing 12,000/=
Creditors 76,000/= Electricity 5,000/=
Debtors 15,000/=
(b) Determine the gross profit and the net profit from the information given below.
Sales 38,000/=
Opening stock 8,000/=
Purchases 25,000/=
Electricity 4,000/=
Discount allowed 2,000/=
Closing stock 5,000/=
15.(a) Find the value of k such that the matrix
(b) The vertices of triangle ABC are A(1,2), B(3,1) and C(−2,1). If triangle ABC is reflected in the x-axis, find the coordinates of the vertices of its image.
(c) Solve the following simultaneous equations by matrix method.
16. A box contains 7 red balls and 14 black balls. Two balls are drawn at random without replacement.
(a) Draw a tree diagram to show the results of the drawing.
(b) Find the probability that both are black.
(c) Find the probability that they are of the same colour.
(d) Find the probability that the first is black and the second is red.
(e) Verify the probability rule P(A) + P(A′) = 1 by using the results in part (b).
View Ans | {"url":"https://learninghubtz.co.tz/form4-necta-qns-ans.php?sub=bWF0aGVtYXRpY3M%3D&yr=MjAxMg%3D%3D","timestamp":"2024-11-12T02:22:24Z","content_type":"text/html","content_length":"102470","record_id":"<urn:uuid:817d056c-3d4b-4e6b-b577-ea0ff6b1c4a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00419.warc.gz"} |
Are Numerical Reasoning Tests Important? A good grade on a numerical reasoning test won't necessarily land you the job, but it might earn you an interview. Companies use numerical reasoning tests to screen candidates. Numerical reasoning tests almost always make use of tables with data, graphs, diagrams and other visual representations of numerical data. In the results of this free numerical reasoning test
your answers, the correct answers and an explanation will be shown. A numerical reasoning test is a form of psychometric assessment commonly used in the application stages of the recruitment process.
It is specifically designed to measure a candidate’s numerical aptitude and their ability to interpret, analyse and draw conclusions from sets of data.
You will be required to analyse and draw conclusions from the data, which may be presented in the form of tables or graphs. The tests are timed and in a multiple choice format. Numerical Reasoning
Test. The numerical reasoning test highlights the candidate's ability to quickly grasp numerical concepts, solve mathematical problems and make sound and logical decisions, using numbers and any
other given information.
You will have to work quickly and accurately to perform well in this test. Practise company psychometric tests for free.
The numerical reasoning test is one of the most frequently used ability tests in psychometric testing; if you want to prepare for an assessment or a job test, it is worth practising in advance. The numerical reasoning test enables you to screen basically any knowledge worker effectively and efficiently before the interview on their numerical ability. Tables and graphs questions assess your understanding of numerical data presented in them, along with your ability to extract relevant data and use it correctly.
Heterogeneity - SHL numerical reasoning tests may differ greatly in terms of the number of questions and time-limit given. Note that our SHL-tailored PrepPacks™ include the components and
characteristics of the specific test you need, to provide you with a realistic experience of how your actual test will appear. Passing the John Lewis numerical reasoning test is one of the first
hurdles you will have to overcome to stand any chance of getting offered a graduate position with this iconic company. You can practise the maths aptitude tests by following the link below.
In contrast, the Talent Q Numerical test makes it nearly impossible to choose from the dozens of possible answer choices.
As these are part of numerical reasoning tests, the answers are usually presented in a multiple-choice format. In a sequence question there are typically between four and seven visible numbers, and to find the missing elements the candidate needs to find the specific logical rule or pattern that links the series and apply it to the missing numbers.
100s of numerical practice test questions with detailed answers for all question types.
Its purpose is to gauge a candidate's grasp of valuations, statistics, percentages and graphs. A numerical reasoning test may also give employers an insight into the attention to detail that you bring to any given situation.
Numerical aptitude tests are often used by employers as part of the recruitment process and are often part of a wider psychometric assessment which may include verbal reasoning or spatial ability
tests . In a numerical reasoning test, you are required to answer questions using facts and figures presented in statistical tables. In each question you are usually given a number of options to
choose from. Only one of the options is correct in each case. Test takers are usually permitted to use a rough sheet of paper and/or a calculator. A numerical reasoning test is a form of psychometric
assessment commonly used in the application stages of the recruitment process. It is specifically designed to measure a candidate’s numerical aptitude and their ability to interpret, analyse and draw
conclusions from sets of data. | {"url":"https://investeringarcgtpo.netlify.app/48647/40522.html","timestamp":"2024-11-04T18:07:06Z","content_type":"text/html","content_length":"10727","record_id":"<urn:uuid:88c86d86-92cf-47c1-9b11-90c48e7daca3>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00229.warc.gz"} |
Programmers should know math.. just not all of it
Mathematics is a part of a programmer's life. Other than the basic concepts implemented in programming languages, there are particular topics which are mandatory when you enter a field like
three-dimensional graphics or financial applications. Writing components for these applications often means
having to implement a mathematical model in code.
Think of a graphics engine for a video game. If the first thing that comes to your mind is "it's difficult", you're not alone. It's difficult because the mathematical structures and theories behind it are not general knowledge: they are specific algebra topics which many of us do not master. Otherwise, if you see a !($a or $b) statement, you probably know how to write an equivalent form.
So how can you learn all this stuff?
The programming field is broader than ever, and there are more and more mathematics instruments used in computing. A modern programmer should be a jack-of-all-trades in his mathematical side, as he
should learn the majority of the specific disciplines as needed by his profession, while mastering the foundations which he encounters every day.
I have made a breakdown of the main topics taught in high school and university which are utilized in computer science. I divided this list into a basic section and a specific-applications one.
Basic list (what you should certainly know):
• Set theory and relations: used in the relational model without most people knowing, and also as terminology in modern languages (List vs. Set).
• Functions: there is a resemblance in mathematical functions and programming ones. Functional programming is an approach coherent and taken to the extreme. The lack of side effects is another
strong similarity.
• First order logic: De Morgan's laws are a powerful example of refactoring applied to if statements (see the short sketch after this list). A strong foundation for something you will write forever: conditional expressions.
• Differential and integral calculus (in one variable): they are general knowledge. Handy when dealing with disparate problems, for example maximums and minimums of an expression.
• Enumerative combinatorics: scary name for permutations and combinations. Counting items is a recurring task.
• Computational complexity: to know when an algorithm performs better than another.
Specific applications of mathematics (what you can learn as needed):
• Trigonometry: you can take advantage of trigonometric functions while solving geometric and 2d graphics problems.
• Analytic geometry and linear algebra: fundamental in 3D graphics. Rotating an object involves matrix multiplications (see the example after this list).
• Abstract algebra: groups, rings, fields and so on. These are structures which are used to solve very specific problems. For instance, you can solve the Rubik's Cube with enough group theory.
• Vector and multivariable calculus: also useful in anything that involves manipulation or simulation of physical objects. The law of physics you want to simulate are formulated in these terms.
Though, these are engineering concepts instead of general computer science ones.
• Statistic and probability theory: the world is not deterministic and some models should take into account natural variability of the data they act upon.
• Complex numbers: probably useful only if you are treating signals and dynamic systems.
• Financial mathematics: interests and present values are not a simple topic as they can seem.
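For the analytic geometry and linear algebra item above, here is a minimal sketch (Python with NumPy; the point coordinates and the angle are made-up example values) of how rotating an object reduces to a matrix multiplication:

import numpy as np

def rotation_2d(theta: float) -> np.ndarray:
    # 2x2 matrix that rotates column vectors by theta radians counter-clockwise.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s],
                     [s,  c]])

point = np.array([1.0, 0.0])              # a point on the x axis
rotated = rotation_2d(np.pi / 2) @ point  # rotate it by 90 degrees
print(np.round(rotated, 6))               # -> [0. 1.]

The same idea extends to 3D graphics, where 3x3 (or 4x4 homogeneous) matrices rotate and transform every vertex of a model.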
When dealing with problems in a domain such as the financial one, the fundamental principles must be mastered to produce a reliable and useful model. This is true for nearly every kind of business domain.
Maybe in the future you will work on airports and flights, or restaurants reservations and lines. Take the time to learn the mathematics to model these domains.
The image at the top is a standard Rubik's Cube. Can you solve it? It is a pure mathematical problem, the time for studying it is the only requirement. People who master the math behind it can solve
the problem in a few minutes (and often less).
47 comments:
I don't think you need to know all that math for a lot of programming jobs/tasks/industries.
But the funny thing is, you miss one math that you really do need to know if you're in the graphics industry: linear algebra.
So you put all the useless math on the list, but one discipline that's actually useful is not on it.
You should provide some links to places where the subjects can be learnt.
"[...] the world is not deterministic [...]"
Let me just add that you do not need to learn these things at once, but as you see it fit. Learning about all these things can take a very long time. Even if it's been part of your curriculum
you've not REALLY mastered calculus, for example, until you can build up calculus from the ground up from axioms about real numbers.
Take your time prove theorems and work your way through this.
What are you talking about, linear algebra is right next to analytic geometry
Trigonometry is the subject we are study at school in Russia. :\
Someone asked about resources to learn math. Here are a few I've used to teach myself some concepts I missed out on in school.
Good luck
You don't need math in programming. You supposed to be very good in analysis, understanding the ins-and-outs of the technology, creativity and the proactive attitude in any aspect in programming.
It means that you should learned all yours and other mistakes. Deliver the right solutions. How do you get these attributes? by good experience.
Nope, you do need mathematics in programming unless you're planning on being a monkey coder.
Kinda like you don't need to know how to make soap to be a floor mopper.
It's silly to try to learn all this under than 1-2 years though. Take your time
Our lead developer has a major in Psychology. Our tech manager has a major Sociology. Guess what... I just gratuated a 2 year computer science certificate and got MCSD in .NET and SCJP.
Yes, you need to know mathematics unless you want to be a monkey coder indeed, which I used to be.
With the proper math I was able to move into much more interesting work.
I suppose this is a matter of taste about what you find interesting.
I would say that statistic and probability theory is very good for general programmers in a number of ways. General estimation skills, problem determination, cost/benefit analysis, etc etc. I
wouldn't give it up.
I do not know if you clearly make this differentiation, but I would also remove geometry from calculus for general programmers.
"teached in high school" ? I guess you slept through English class when they TAUGHT proper conjugation.
And I am a douche...
You don't learn all that because you WILL have to apply them a project. You learn them because you MIGHT need them for a project.
"A modern programmer should be a jack-of-all-trades in his mathematical side, as he should learn the majority of the specific disciplines as needed by his profession, while mastering the
foundations which he encounters every day."
This overemphasis on men is incredibly sexist. Please phrase these types of things more appropriately in the future.
It's opposite day today so sexism is actually a good thing.
Of all these areas of mathematics, I think set theory has proven the most useful. Along with that, I find that I use predicate logic quite often. A good logic foundation allows you to create more
concise and responsive code. I probably apply DeMorgan's theorem about 10 times a day, on the macro and micro levels.
Thanks :-) I like your article and your idea to list in short and clear way math.
May be one of your friends posted yesterday this blog post :-) to Anonymous question. Nice post, of course it is open and every body can add useful links.
I should be glad to have you, as a friend on Facebook, if you spend some time there :-)
Have a nice day!
This sounds like it was written either by someone who is NOT a programmer, or else by someone who is a programmer and has now discovered that it's not nearly as glamourous or impressive as they
thought it would be.
If the latter (as I suspect), then this is someone who wishes that their "programmer" job commanded more respect or street cred than they've been getting lately.
I've been programming, professionally, for over twenty years now. I can tell you from experience, it never gets any better. It's always like this.
You will always be a geek no matter how long your mathematical "shoutout and greetz" list grows.
And the truth is, programming is very often dull, uninspiring, often non-technical, often not even challenging (after so many years), and just plain boring.
Let's face it: When you first began, programming was FUN! Writing programs to solve your own problems was like MAGIC. But then you decide to put that magic out to hire and you soon learn that
writing programs to solve everybody ELSE's problems is just plain boring.
If that's you... dude.... welcome to the world of professional programming.
The biggest one you missed is admittedly a broad abstract concept, but:
Learning how to construct a complex and airtight mathematical proof.
The area in which you do this is almost irrelevant, but the more mathematical fields you can learn to do this in, the better you will be at writing good code, and the broader your talents will
I have removed some offensive comments.
However, many of the reasons for a solid mathematical foundation have been listed from other readers: more interesting work and deeper understanding of programming concepts (aka "not being a
monkey coder"). If being a programmer is considered a profession, we should take it seriously. Of course some disciplines are only needed in specifical projects but often they are underestimated
even there.
For learning, wikipedia provides a 50000 feet view on these subjects, but I think real books are still the best way to learn advanced math and I can't link physical pages here. :)
Kyle, my mother tongue is Italian and in my language defaulting to the male gender is the norm. Do you have any suggestion of how I can rephrase similar sentences? I have seen many phrases
expressed with she in programming books, in similar constructs.
Seahorse, add me if you want. Search for "Giorgio Sironi".
I have worked as a programmer for the last years, but I'm studying to become an engineer. As you may know programmer skills are a subset of an engineer's ones. Maybe this mindset is the reason
why I stress my desire for mathematics and deep understanding of programming techniques, since my job will be not to use them only but also to design new ones.
This comment has been removed by the author.
I really wouldnt worry about the his/her gender refrences. There is absolutely no reason for anyone to genuinely be offended by such a thing. The book "c# for dummies" for example refers to all
person's as "she" and "her" and that didnt bother me at all. You can replace it all with "person" and "them" but then some sentances become akward and you could even argue it lacks the personal
touch perhaps. At the end of the day, you could argue about anything :/
Loosen up, its an article about programmers maths. If people took offence less easilly maybe we'd have less wars and hate crimes eh?
A little high school algebra goes a long long way.
Some graph theory is nice to have, too.
See also here: Do we need to know basic math as programmers?
It's interesting how many people feel offended by this post, probably because they do *not* know math very well / at all? Either way, it doesnt matter. No need to feel offended.
@jto: I agree, graph theory has lots of uses in practice. I've needed graph theory quite a lot already.
@Giorgio: What do you mean by "engineer"? "Software Engineer"? Personally I still find the term engineer/engineering misplaced in the world of software development. Maybe one day it will fit.
This is a very interesting post: http://www.artima.com/weblogs/viewpost.jsp?thread=255898
Whether you need math and how much of it you need really depends on the field you're working in. You can not say programmers need math or dont need math. A lot of them indeed dont need any
advanced math beyond the absolute basics. Simply because a bulk of the programmers are simply told what to do and do boring tasks. They code what they're told to. If there was something
complicated to solve before, then other people did that already before assigning them the tasks. Programmers that are just "end users" of software and frameworks and tools usually need to know
much less than the ones who write these frameworks, tools and programming languages (which is good, after all thats the goal of building these tools, to get higher levels of abstraction).
So it really depends on what you work on. You can certainly be a software developer without much math knowledge, it just wont get you very far and you miss out on a lot of interesting working
And ... it really doesnt hurt to know some math ;)
for those who have fun in maths:
I want to add--
Wow, Kirby L. Wallace is bitter.
Math isn't important in every programming job, only the interesting ones.
I work on something pretty complicated and hard. Just to be able to review other people's patches I have to construct proofs, find the implicit invariants in existing code--it involves a lot of
mathematical thinking, though not in any particular mathematical field.
It's fun.
Kirby, you're not a programmer. Programming is eternally fun and challenging, and real programmers always have a side-project cooking up outside of the day-job. If programming were so boring, and
a repetitive task, programmers ages ago would have written nice programs to write other programs.
If you don't find other people's problems as interesting (or more so, even) than your own, you've picked the wrong profession. That, or you need to try the indie road, but I seriously doubt your
ideas are all that grand.
I agree with the post, that math is important, and the more math you know the better off you'll be. One part stands out though: "the world is not deterministic". Probability exists in the mind,
dude. The universe isn't uncertain.
Trust me a MONKEY CODER is still a MONKEY CODER forever.
I am a douche ? Yeah right... Remind me that next week when your boss outsource your job to us.
And there is NO mathematical model in coding. We do have standard coding.
- Deepta
my faculty name is Computer Engineering "Ingegneria Informatica". Other faculties of Politecnico di Milano are mechanical engineering, electronic engineering, chemical engineering. We are
engineers in the sense that this is not a computer science course: we are required to know physics (thermodynamics, waves, dynamics, electromagnetism) and high-level math (multiple variable
calculus) and we can theoretically switch faculty for the Master degree to any other engineering course.
About the "deterministic universe", the classic physics is deterministic but more modern theories like quantum mechanics aren't and are based on prabability. Heisenberg says for example that no
one can know, by principle, both the momentum and the position of an electron with accuracy on both the measures.
Of course quantum mechanics it's not required for programming, but that is what I was referring to in the post.
It's may be more accurate for the purpose of this article to say that the world is not *predictably* deterministic. The weather may (or may not) be deterministic, but you still have to treat it
as though it weren't, because you can't usefully predict it with 100% accuracy.
I would elevate at least elementary statistics to the "must have" level, though. No one can make a reasonable judgment about things like how much effort it's worth putting into covering unlikely
corner cases without understanding what "unlikely" means, or how 2 "unlikely"s combine into an overall probability of disaster.
I've been a successful programmer for 15 years. I've spent years working at one of the largest Internet companies, and have launched successful start-ups, including my own.
Not once have I needed to know any math beyond basic algebra.
There are plenty of programming jobs out there that you don't need a lick of math to do well.
Here's a question: did you *learn* any math past basic algebra?
If so, I can say almost categorically that your programming benefited from it.
Almost no one needs the actual exact stuff they teach in most math classes. It's the *skills* you learn *while* you're taking the math class that apply strongly to programming, and I defy you to
separate them out.
Dude, the more you know about math, the better. Haven't you seen that guy from Numb3rs?
Seriously, math is good for you because it develops many different problem solving skills. Besides, it makes an impression if you want to show off in a cocktail party.
What do you need to know as a programmer depends very much on the industry you are in. For example, if you want to land a job in the finance industry (besides UI developer) you should have a
solid foundation in calculus (yes, multi-variable too), diff. equations, linear algebra, probability and numerical analysis. And this is just for a starter. Oh, and of course, you have to be a
kick ass problem solver.
I think that learning specific maths certainly doesn't define how good you are at programming, but the ability to learn and implement maths does. Maths in code sure isn't exactly maths in
I'm in high school, and I've never been the best at maths, and never cared for it. Yet I'm learning about Vectors, Matrices, and more maths related knowledge through 3d programming, and
programming in general.
Not because of my great maths 'skills', but because I want to learn it. And besides, the only real math I learnt was enough only to implement and work with it, not to know everything and master
the subject. So I don't really think knowing maths is as major in programming as many think.
After 25 years in programing I've start to help guys who struggle with programming at school. What I've found is that the better one know math the easy it is for him to learn the abstractions
involved in programming (i.e. program variables are very similar to algebra variables).
Even a simple loop over an array could be a wall for a non math mind. I'm just talking about speed of learning because I don't know how deep they will gets into the programming after my lessons.
Pre-calculus skills are enough, but must be solid skills. Set theory helps.
Mastering proofs by induction would be the definitive skill, but is very unusual to find in guys.
PS. sorry to all douches for my poor english.
I think knowing math helps a lot in programming. By 'help', I do not mean it’s applicable only to jobs that involve fleshing out complex algorithms in code; but also, it improves how one attends
to details, the way one approaches problems, designing (where the capability to abstract things is essential) and increases mental stamina. That said, I should probably also add that there are a
lot of jobs out there where you can possibly do without knowing much math (or any math at all)- you work on a few projects: you get to know the nuts and bolts of stuffs - but I still believe ,
knowing math, would help you there too. After all, doing math is not just about blind symbolic manipulation - that’s just a manifestation :) - its mostly about elegant problem solving, which one
might apply to all walks of life.
Someone mentioned 'outsourcing' in a comment - I am not sure about the intended objective or what vein it was mentioned in - but I don’t think that the fact a project has been outsourced to you
is reflective of the fact that you are a good programmer. Outsourcing is a business decision - and it could probably mean you are providing cheap labor for a tolerable level of quality (this is
not about the commenter :), just mentioning what an outsourcing decision could mean in the worst case :) ).
I have programmed for both the above kinds of projects - application development (some outsourced) and pure 'academic'-tech codes (AI, search heuristics, text mining etc), with the latter kind of
work coming to me later. After working on the second kind of projects, for which I had to read up a lot on math, I did find a significant positive change in my approach to problems of the first
kind - even though the second kind of projects were MUCH smaller in scale than the first ones.
I totally agree that for *some* programmers math is essential. Can you guys suggest some books on the topics you've pointed out. Recently I've been investigating a lost of trig and the math is
perplexing. Sadly the books are just as bad. Many of the formula are extremely complex
many of my books are written in Italian so I think they do not interest you. :)
Apart from trigonometry, which you need a specifical text for, if you want some general insights I would suggest "What Is Mathematics? An Elementary Approach to Ideas and Methods" from Courant
and Robbins. My high school teacher suggested it to me when I was 14 and still today I find me thinking "That was explained in the book" after a lecture.
All you need to be a programmer is Be Prepared To Learn New Stuff. This is certainly true in my case. I graduated in History with F*** ALL science background but have been working as a programmer
for last 5 years started with C, C++ now working on C#.
Here's the thing, if you code simple tasks, then you're not going to need much math.
But when you start analyzing real problems, math is essential.
True cost effective performance testing is nearly impossible without a basic background in statistics. How much is that caching system you put in really helping? How do 2 servers really compare?
Linear Algebra and Discrete math help a lot in search problems (similarity ranking algorithms anyone?).
Graph and Network theory can help out when analyzing how to execute that simulation program, the civil engineer gave you, on a 100+ node computer grid system.
Differential Equations and Linear Algebra would be helpful in actually understanding that engineer's code.
Don't forget about physics! Important if you want to do game programming...(e.g. realistic models) =P
The thing about physics is that it's only relevant in the sense that certain specific fields actually use knowledge from that field.
Sort of like how genetics is good to know if you plan on working in the bioinformatics field (as it math :-).
The difference with math is that the *process* of learning how to solve complex math problems and construct airtight proofs aids you in *all* analyses of even moderately complicated algorithmic
problems you might run into.
Programming is a generalist's art. The more you know about more fields the better you will do overall. Mathematics provides practice and learning about algorithms that cuts across multiple
In it's own way, it's also an important generalist's skill, as many difference fields make use of math. But that's secondary.
I didn't take the high school math exam and went to the crappiest university to study CS, which didn't even need a math or any real subject exam, to get in (like WTF, right?). We had linear
algebra and discrete math, and something else. I somehow got through the exams, but i still don't understand much of it, i understand set theory, boolean logic and some of the basic stuff. Result
- i'm creating user interfaces at a company, which creates software for online bookmakers. Other developers are having fun with modelling different markets for different sports, doing function
analysis, probabilities, statistics etc. I can't take this crap anymore, though the pay is good, i just started my masters in a better university and i'm gonna take math courses again, cause a
high school kid could do my job. It's f*****g frustrating. So, if you suck at math, no one will give you more responsible, interesting tasks. They'll just throw you in the GUI department.
Eventually you will get bored, which i am right now. And if you're bored, you become less productive and start hating your job. I'm on that path. But there's a plus side. Since i was really,
really BAD at math at high school, i didn't understand what a hell a function is, what integers or real numbers are. But now that i've learned to use functions and methods in programming, using
java, i actually understand what a function is now, that it has input and ouput - so easy. I associate terms integers and real numbers, i know what they're called now, in high school it was a big
confusion, teachers using these "complicated" terms to sound smart or whatever. So programming in my case has brought me closer to understanding math and associate difficult terms with simple
concepts. I've been learning high school math independently, it's so easy now, since I can relate it with my programming experience and the simplest logic. I blame the educational system, if a
teacher uses unknown terms in sentences at a high frequency, and doesn't properly explain them, then anyone would lose interest and wouldn't want to deal with it. When in fact you're dealing with
very simple concepts. And of couse the greek letters scare the hell out of anyone. I like the american education system, it tries to always bring examples in real life and explain the terms, it
makes so much more sence in the way of teaching. I'm from Eastern Europe by the way, people here like to brag with their knowledge and scare the crap out of the younger generations. I'm still
dumb at math, but i'm getting there. If i can take all the math courses in masters and understand it, i'm probably gonna go to doctorate also :) | {"url":"https://www.giorgiosironi.com/2009/10/programmers-should-know-math-just-not.html?showComment=1255710091949","timestamp":"2024-11-07T03:06:23Z","content_type":"application/xhtml+xml","content_length":"141473","record_id":"<urn:uuid:9807a694-9bcc-40cc-9de2-85a4f4412c17>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00460.warc.gz"} |
We saw that we can perform efficient convolution of two finite-length sequences using a Fast Fourier Transform (FFT). There are some situations, however, in which it is impractical to use a single
FFT for each convolution operand:
• One or both of the signals being convolved is very long.
• The filter must operate in real time. (We can't wait until the input signal ends before providing an output signal.)
Direct convolution does not have these problems. For example, given a causal finite impulse response (FIR) filter $h$ of length $M$, we need only store the past $M-1$ samples of the input signal $x$ to calculate the next output sample, since
$$y(n) = (h \ast x)(n) = \sum_{m=0}^{M-1} h(m)\,x(n-m).$$
Thus, at every time $n$, the output $y(n)$ can be computed as a linear combination of the current input sample $x(n)$ and the current filter state $\{x(n-1),\dots,x(n-M+1)\}$.
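As a rough sketch of this direct form (plain Python, not code from the text; the filter taps and test input are invented), note that the filter only has to remember the last $M-1$ input samples:

from collections import deque

def direct_fir(h, x):
    # Direct-form FIR filtering: y(n) = sum_{m=0}^{M-1} h(m) * x(n - m).
    M = len(h)
    state = deque([0.0] * (M - 1), maxlen=M - 1)  # the past M-1 input samples, newest first
    y = []
    for sample in x:
        acc = h[0] * sample
        for m, past in enumerate(state, start=1):
            acc += h[m] * past
        y.append(acc)
        state.appendleft(sample)  # newest sample enters; the oldest falls off the end
    return y

print(direct_fir([0.5, 0.3, 0.2], [1, 0, 0, 0]))  # an impulse input echoes the taps: [0.5, 0.3, 0.2, 0.0]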
To obtain the benefit of high-speed FFT convolution when the input signal is very long, we simply chop up the input signal into blocks, and perform convolution on each block separately. The output is
then the sum of the separately filtered blocks. The blocks overlap because of the ``ringing'' of the filter. For a zero-phase filter, each block overlaps with both of its neighboring blocks. For
causal filters, each block overlaps only with its neighbor to the right (the next block in time). The fact that signal blocks overlap and must be added together (instead of simply abutted) is the
source of the name overlap-add method for FFT convolution of long sequences [7,9].
The idea of processing input blocks separately can be extended also to both operands of a convolution (both $x$ and $h$). The details are a straightforward extension of the single-block-signal case
discussed below.
When simple FFT convolution is being performed between a signal $x$ and an FIR filter $h$, there is no reason to use a non-rectangular window function on each input block. A rectangular window of length $M$ samples may advance $M$ samples for each successive frame (hop size $R = M$ samples). In this case, the input blocks do not overlap, while the output blocks overlap by the FIR filter length (minus one sample).
On the other hand, if nonlinear and/or time-varying spectral modifications to be performed, then there are good reasons to use a non-rectangular window function and a smaller hop size, as we will
develop below.
Overlap-Add Decomposition
Consider breaking an input signal $x$ into frames using a finite, zero-phase, length-$M$ window $w$. Then we may express the $m$th windowed data frame as
$$x_m(n) \triangleq x(n)\,w(n - mR), \qquad n \in (-\infty,+\infty).$$
The hop size $R$ is the number of samples between the begin-times of adjacent frames. Specifically, it is the number of samples by which we advance each successive window.
Figure 8.8 shows an input signal (top) and three successive windowed data frames using a length-$M$ causal Hamming window and 50% overlap ($R \approx M/2$).
For frame-by-frame spectral processing to work, we must be able to reconstruct $x$ from the individual overlapping frames, ideally by simply summing them in their original time positions. This can be written as
$$x(n) = \sum_{m=-\infty}^{\infty} x_m(n) = x(n) \sum_{m=-\infty}^{\infty} w(n-mR).$$
Hence, $x = \sum_m x_m$ if and only if
$$\sum_{m \in \mathbb{Z}} w(n-mR) = 1, \qquad \forall n \in \mathbb{Z}.$$
This is the constant-overlap-add (COLA) constraint for the analysis window $w$. It has also been called the partition of unity property.
Figure 8.9 illustrates the appearance of 50% overlap-add for the Bartlett (triangular) window. The Bartlett window is clearly COLA for a wide variety of hop sizes, such as $R = M/2$, $R = M/4$, and so on, provided $R$ is an integer (otherwise the underlying continuous triangular window must be resampled). However, when using windows defined in a library, the COLA condition should be carefully checked. For example,
the following Matlab/Octave script shows that there is a problem with the standard Hamming window:
M = 33; % window length
R = (M-1)/2; % hop size
N = 3*M; % overlap-add span
w = hamming(M); % window
z = zeros(N,1); plot(z,'-k'); hold on; s = z;
for so=0:R:N-M
ndx = so+1:so+M; % current window location
s(ndx) = s(ndx) + w; % window overlap-add
wzp = z; wzp(ndx) = w; % for plot only
plot(wzp,'--ok'); % plot just this window
plot(s,'ok'); hold off; % plot window overlap-add
The figure produced by this Matlab code shows that the equal end-points sum to form an impulse in each frame of the overlap-add.
The Matlab window functions (such as hamming) have an optional second argument which can be either 'symmetric' (the default), or 'periodic'. The periodic case is equivalent to
w = hamming(M+1); % symmetric case
w = w(1:M); % delete last sample for periodic case
The periodic variant solves the non-constant overlap-add problem for even $M$ and $R = M/2$, but not for odd $M$. The problem can be solved for odd $M$ and $R = (M-1)/2$ while retaining symmetry as follows:
w = hamming(M); % symmetric case
w(1) = w(1)/2; % repair constant-overlap-add for R=(M-1)/2
w(M) = w(M)/2;
Since different window types may add or subtract 1 to/from $M$ internally, it is best to check the result using test code as above to make sure the window is COLA at the desired hop size.
Specifically, in Matlab:
• hamming(M)
.54 - .46*cos(2*pi*(0:M-1)'/(M-1));
gives constant overlap-add for $R = (M-1)/2$, $(M-1)/4$, etc., when the endpoints are divided by 2 or one endpoint is zeroed
• hanning(M)
.5*(1 - cos(2*pi*(1:M)'/(M+1)));
does not give constant overlap-add for $R = (M-1)/2$, but does for $R = (M+1)/2$
• blackman(M)
.42 - .5*cos(2*pi*m)' + .08*cos(4*pi*m)';
where m = (0:M-1)/(M-1), gives constant overlap-add for $R = (M-1)/3$ when $M$ is odd and $R$ is an integer, and for $R = M/3$ when $M$ is even and $R$ is an integer.
In summary, all windows obeying the constant-overlap-add constraint will yield perfect reconstruction of the original signal from the data frames by overlap-add (OLA). There is no constraint on
window type, only that the window overlap-adds to a constant for the hop size used. In particular, $R = 1$ always yields a constant overlap-add for any window function. We will learn later (§8.3.1) that
there is also a simple frequency-domain test on the window transform for the constant overlap-add property.
To emphasize an earlier point, if simple time-invariant FIR filtering is being implemented, and we don't need to work with the intermediate STFT, it is most efficient to use the rectangular window with hop size $R = M$, and to set $M = N - L + 1$, where $L$ is the length of the filter and $N$ is a convenient FFT size. The optimum $N$ for a given $L$ is an interesting exercise to work out.
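One hedged way to explore that exercise numerically: under a rough $N \log_2 N$ cost model for the FFT work per frame (constant factors and the spectral multiply are ignored, and the filter length below is an arbitrary example), the work per output sample is $(N \log_2 N)/(N - L + 1)$, which can simply be tabulated in Python:

import math

def ola_cost_per_sample(N: int, L: int) -> float:
    # Approximate work per output sample for overlap-add with FFT size N and filter length L.
    M = N - L + 1                  # usable frame size per FFT
    if M <= 0:
        return float("inf")
    return (N * math.log2(N)) / M  # one FFT/IFFT pair's rough cost spread over M new samples

L = 129  # assumed FIR filter length, just for illustration
for N in [2**k for k in range(8, 16)]:
    print(N, round(ola_cost_per_sample(N, L), 2))

Under this crude model the minimum sits a few powers of two above L; a more careful model would also count the frequency-domain multiply and any windowing or overlap bookkeeping.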
So far we've seen the following constant-overlap-add examples:
• Rectangular window at 0% overlap (hop size $R$ = window size $M$)
• Bartlett window at 50% overlap ($R = M/2$) (Since $M$ is normally odd, ``$R = M/2$'' means ``$R = (M-1)/2$,'' etc.)
• Hamming window at 50% overlap ($R = M/2$)
In addition, we can mention the following cases (referring to window types discussed in Chapter
• Rectangular window at 50% overlap ($R = M/2$)
• Hamming window at 75% overlap (25% hop size, $R = M/4$)
• Any member of the Generalized Hamming family at 50% overlap
• Any member of the Blackman family at 2/3 overlap (1/3 hop size); e.g., blackman(33,'periodic'), $R = 11$
• Any member of the $L$-term Blackman-Harris family with $R = M/L$.
• Any window with $R = 1$ (``sliding FFT'')
Recall that many audio coders use the MLT sine window. The window is applied twice: once before the FFT (the ``analysis window'') and secondly after the inverse FFT prior to reconstruction by overlap-add (the so-called ``synthesis window''). Since the window is effectively squared, it functions as a Hann window for overlap-add purposes (a member of the Generalized Hamming family). As such, it can be used with 50% overlap. More generally,
any positive COLA window can be split into an analysis and synthesis window pair by taking its square root.
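A small numerical check of that statement (a NumPy sketch, not code from the text; the window length is an arbitrary even number): the product of a sine analysis window with an identical synthesis window is a Hann-like window, which overlap-adds to a constant at 50% overlap.

import numpy as np

M = 64                                    # assumed window length (even)
R = M // 2                                # 50% overlap
n = np.arange(M)
w_sine = np.sin(np.pi * (n + 0.5) / M)    # MLT-style sine window
w_prod = w_sine * w_sine                  # analysis window times synthesis window

ola = np.zeros(10 * M)
for start in range(0, len(ola) - M, R):   # overlap-add the product window at hop R
    ola[start:start + M] += w_prod
middle = ola[2 * M:-2 * M]                # ignore the ramp-up/ramp-down at the edges
print(middle.min(), middle.max())         # both are 1.0 to within rounding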
To represent practical FFT implementations, it is preferable to shift the $m$th frame back to the time origin:
$$\tilde{x}_m(n) \triangleq x_m(n + mR).$$
This is summarized in Fig. 8.11. Zero-based frames are needed because the leftmost input sample is assigned to time zero by FFT algorithms. In other words, a hopping FFT effectively redefines time zero on each hop. Thus, a practical STFT is a sequence of FFTs of the zero-based frames $\tilde{x}_m$. On the other hand, papers in the literature often work with the fixed time-origin case ($x_m$). Since they differ only by a time shift, it is not hard to translate back and forth.
Note that we may sample the DTFT of both $x_m$ and $\tilde{x}_m$, because both are time-limited to $M$ nonzero samples. The minimum information-preserving sampling interval along the unit circle in both cases is $\Omega_M = 2\pi/M$. In practice, we often oversample to some extent, using $\Omega_N = 2\pi/N$ with $N \ge M$ instead. For $\tilde{x}_m$, we get
$$\tilde{X}_m(\omega_k) = \sum_{n=0}^{N-1} \tilde{x}_m(n)\,e^{-j\omega_k n},$$
where $\omega_k = 2\pi k/N$. For $x_m$ we have
$$X_m(\omega_k) = \sum_{n} x_m(n)\,e^{-j\omega_k n}.$$
Since $x_m(n) = \tilde{x}_m(n - mR)$, their transforms are related by the shift theorem:
$$X_m(\omega_k) = e^{-j\omega_k mR}\,\tilde{X}_m(\omega_k),$$
where the time shift is interpreted modulo $N$ (appropriate since the DTFTs have been sampled at intervals of $\Omega_N = 2\pi/N$).
Getting back to acyclic convolution, we may write it as
$$y(n) = (x \ast h)(n) = \sum_{m=-\infty}^{\infty} (x_m \ast h)(n).$$
Since $x_m$ is time limited to $M$ samples, $X_m(\omega)$ can be sampled at intervals of $2\pi/M$ without time aliasing. If $h$ is time-limited to $L$ samples, then $y_m = x_m \ast h$ will be time limited to $M + L - 1$ samples. Therefore, we may sample $Y_m(\omega)$ at intervals of
$$\Omega_N = \frac{2\pi}{M + L - 1}$$
or less along the unit circle. This is the dual of the usual sampling theorem.
We conclude that practical FFT acyclic convolution may be carried out using an FFT of any length $N$ satisfying
$$N \ge M + L - 1,$$
where $M$ is the frame size and $L$ is the filter length. Our final expression for $y(n)$ is
$$y(n) = \sum_{m=-\infty}^{\infty} \mathrm{DFT}_N^{-1}\!\left(X_m \cdot H\right)\!(n - mR),$$
where $X_m$ is the length-$N$ DFT of the zero-padded frame $\tilde{x}_m$, and $H$ is the length-$N$ DFT of $h$, also zero-padded out to length $N$, with $N \ge M + L - 1$.
Note that the terms in the outer sum overlap when $L > 1$, even if the input frames themselves do not ($R = M$). In general, LTI filtering by $h$ increases the amount of overlap among the frames.
This completes our derivation of FFT convolution between an indefinitely long signal $x$ and a reasonably short FIR filter $h$ (short enough that its zero-padded DFT can be practically computed using one FFT).
The fast-convolution processor we have derived is a special case of the Overlap-Add (OLA) method for short-time Fourier analysis, modification, and resynthesis. See [7,9] for more details.
Example of Overlap-Add Convolution
Let's look now at a specific example of FFT convolution, in which an impulse-train test signal is lowpass-filtered by a length-31 FIR filter designed via the window method:
We will work through the matlab for this example and display the results. First, the simulation parameters:
L = 31; % FIR filter length in taps
fc = 600; % lowpass cutoff frequency in Hz
fs = 4000; % sampling rate in Hz
Nsig = 150; % signal length in samples
period = round(L/3); % signal period in samples
FFT processing parameters:
M = L; % nominal window length
Nfft = 2^(ceil(log2(M+L-1))); % FFT Length
M = Nfft-L+1 % efficient window length
R = M; % hop size for rectangular window
Nframes = 1+floor((Nsig-M)/R); % no. complete frames
Generate the impulse-train test signal:
sig = zeros(1,Nsig);
sig(1:period:Nsig) = ones(size(1:period:Nsig));
Design the
lowpass filter
using the
window method
epsilon = .0001; % avoids 0 / 0
nfilt = (-(L-1)/2:(L-1)/2) + epsilon;
hideal = sin(2*pi*fc*nfilt/fs) ./ (pi*nfilt);
w = hamming(L); % FIR filter design by window method
h = w' .* hideal; % window the ideal impulse response
hzp = [h zeros(1,Nfft-L)]; % zero-pad h to FFT size
H = fft(hzp); % filter frequency response
Carry out the overlap-add FFT processing:
y = zeros(1,Nsig + Nfft); % allocate output+'ringing' vector
for m = 0:(Nframes-1)
index = m*R+1:min(m*R+M,Nsig); % indices for the mth frame
xm = sig(index); % windowed mth frame (rectangular window)
xmzp = [xm zeros(1,Nfft-length(xm))]; % zero pad the signal
Xm = fft(xmzp);
Ym = Xm .* H; % freq domain multiplication
ym = real(ifft(Ym)) % inverse transform
outindex = m*R+1:(m*R+Nfft);
y(outindex) = y(outindex) + ym; % overlap add
The time waveforms for the first three frames ($m = 0, 1, 2$) are shown in Figures 8.12 through 8.14. Notice how the causal linear-phase filtering results in an overall signal delay by half the filter length.
Also, note how frames 0 and 2 contain four impulses, while frame 1 only happens to catch three; this causes no difficulty, and the filtered result remains correct by superposition.
Overlap-add FFT processors provide efficient implementations for FIR filters longer than 100 or so taps on single CPUs. Specifically, we ended up with
$$y(n) = \sum_{m=-\infty}^{\infty} (x_m \ast h)(n),$$
where the convolution $x_m \ast h$ is acyclic in this context. Stated as a procedure, we have the following steps in an overlap-add FFT processor:
(1) Extract the $m$th length-$M$ frame of data at time $mR$.
(2) Shift it to the base time interval $[0, M-1]$ (or $[-(M-1)/2, (M-1)/2]$).
(3) Optionally apply a length-$M$ analysis window $w$ (causal or zero phase, as preferred). For simple LTI filtering, the rectangular window is fine.
(4) Zero-pad the windowed data out to the FFT size $N$ (a power of 2), such that $N \ge M + L - 1$, where $L$ is the FIR filter length.
(5) Take the $N$-point FFT.
(6) Apply the filter frequency-response as a windowing operation in the frequency domain.
(7) Take the $N$-point inverse FFT.
(8) Shift the origin of the $N$-point result out to sample $mR$ where it belongs.
(9) Sum the result into the output buffer containing the results from prior frames (OLA step).
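Those steps translate fairly directly into code. The following is a hedged NumPy sketch, not the book's Matlab: it uses a rectangular window, hop size R = M, causal frames, and a fixed FFT size chosen only as an example, and it checks itself against direct convolution.

import numpy as np

def fft_conv_ola(x, h, Nfft=256):
    # Overlap-add FFT convolution of a long signal x with a short FIR filter h.
    L = len(h)
    M = Nfft - L + 1                      # frame size, so that M + L - 1 <= Nfft (no time aliasing)
    H = np.fft.rfft(h, Nfft)              # zero-padded filter spectrum, computed once
    y = np.zeros(len(x) + L - 1)
    for start in range(0, len(x), M):     # hop size R = M (rectangular window, no input overlap)
        frame = x[start:start + M]
        Y = np.fft.rfft(frame, Nfft) * H  # frequency-domain multiplication
        ym = np.fft.irfft(Y, Nfft)        # inverse FFT; only the first len(frame)+L-1 samples are nonzero
        end = min(start + Nfft, len(y))
        y[start:end] += ym[:end - start]  # shift out to sample `start` and overlap-add
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
h = rng.standard_normal(31)
print(np.allclose(fft_conv_ola(x, h), np.convolve(x, h)))  # True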
The condition $N \ge M + L - 1$ is necessary to avoid time aliasing, i.e., to implement acyclic convolution using an FFT; this condition is equivalent to a minimum sampling-rate requirement in the frequency domain.
A second condition is that the analysis window be COLA at the hop size used: $\sum_{m \in \mathbb{Z}} w(n - mR) = 1$ for all $n$.
Next Section: Dual of Constant Overlap-AddPrevious Section: Convolution of Short Signals | {"url":"https://www.dsprelated.com/freebooks/sasp/Convolving_Long_Signals.html","timestamp":"2024-11-01T20:26:10Z","content_type":"text/html","content_length":"100884","record_id":"<urn:uuid:5cef6900-12a0-4f77-b950-32952c7260c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00575.warc.gz"} |
How to Calculate Zakat on 401k
Calculating zakat on a 401k involves determining its value at the time of the zakat due date. The value of the 401k is the sum of all contributions, minus any withdrawals or losses. To calculate
zakat, you should use the most recent value of the 401k. If the value of the 401k is greater than the nisab amount, which is approximately $5,384, then zakat is due. The amount of zakat is 2.5% of
the value of the 401k. For example, if the value of the 401k is $10,000, then the zakat due would be $250.
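A brief sketch of that arithmetic (Python; the nisab figure is simply the approximate value quoted above, which changes over time, so treat both numbers as assumptions):

NISAB_USD = 5384.0   # approximate threshold quoted in this article
ZAKAT_RATE = 0.025   # 2.5%

def zakat_due_on_401k(account_value: float) -> float:
    # Zakat owed on a 401k balance, following the simple method described above.
    if account_value < NISAB_USD:
        return 0.0   # below the nisab threshold, no zakat is due
    return account_value * ZAKAT_RATE

print(zakat_due_on_401k(10_000))  # 250.0, matching the example in the text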
How to Calculate Z score on 401k
Z score is a statistical measure that quantifies the relative performance of an investment compared to a benchmark, such as the S&P 500 index. It is calculated by subtracting the benchmark’s return
from the investment’s return and dividing the difference by the standard deviation of the benchmark’s return.
For example, if an investment has a return of 10% and the benchmark has a return of 5%, and the standard deviation of the benchmark’s return is 2%, then the Z score would be (10% – 5%) / 2% = 2.5.
A Z score of 0 indicates that the investment has performed in line with the benchmark. A Z score greater than 0 indicates that the investment has outperformed the benchmark, while a Z score less than
0 indicates that the investment has underperformed the benchmark.
• Step 1: Calculate the return on your 401k investment.
• Step 2: Calculate the return on the benchmark.
• Step 3: Calculate the standard deviation of the benchmark’s return.
• Step 4: Subtract the benchmark’s return from your investment’s return.
• Step 5: Divide the difference by the standard deviation of the benchmark’s return.
│ Investment Return │ Benchmark Return │ Standard Deviation of Benchmark Return │ Z score │
│ 10% │ 5% │ 2% │ 2.5 │
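The five steps and the table above boil down to one line of arithmetic; the short sketch below (the function name is just for illustration) reproduces the table's example.

def z_score(investment_return, benchmark_return, benchmark_std):
    """(investment return - benchmark return) / standard deviation of benchmark return."""
    return (investment_return - benchmark_return) / benchmark_std

print(z_score(10, 5, 2))  # 2.5, using the percentage figures from the table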
Zakat on 401k
Zakat on 401k plans can be a complex issue as it involves understanding the nature of contributions and returns, as well as the timing of distributions.
However, it is important to note that Zakat is only applicable to individuals who meet the required wealth threshold known as Nisab. Zakat calculations involve determining the value of your assets,
including your 401k, and deducting any allowable liabilities or debts.
Zakat on Employer Contributions
• Employer contributions before tax: These contributions are not subject to Zakat until they are withdrawn from the account.
• Employer contributions after tax: These contributions have already been subject to Zakat and are not considered Zakatable.
It’s important to consult with an Islamic financial advisor or a qualified accountant for personalized guidance on Zakat calculations and any specific questions you may have.
│ Contribution Type │ Zakatable │
│ Employer contributions before tax │ No │
│ Employer contributions after tax │ No │
│ Employee contributions before tax │ Yes │
│ Employee contributions after tax │ No │
│ Investment growth │ Yes │
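One way to apply this table is to keep each component of the account in its own bucket and sum only the ones marked zakatable. The sketch below simply encodes the table's yes/no column; it is not a ruling on any particular plan.

# Which buckets the table above marks as zakatable.
ZAKATABLE = {
    "employer_pre_tax": False,
    "employer_post_tax": False,
    "employee_pre_tax": True,
    "employee_post_tax": False,
    "investment_growth": True,
}

def zakatable_base(buckets):
    """Sum the 401k components that the table above treats as zakatable."""
    return sum(amount for name, amount in buckets.items() if ZAKATABLE.get(name, False))

print(zakatable_base({"employee_pre_tax": 8_000, "employer_pre_tax": 5_000, "investment_growth": 2_000}))  # 10000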
Zakat on 401k
Zakat is a religious obligation for Muslims to give a portion of their wealth to those in need. The amount of Zakat due is based on the value of one’s assets, including their 401k.
Zakat on Earnings
The Zakat obligation on 401k earnings is 2.5%. This applies to all contributions made to the account, regardless of whether they were made by the employee or the employer.
For example, if an employee contributes $500 to their 401k, they would owe $12.50 in Zakat on those earnings.
The growth in a 401k is also subject to Zakat. This includes any interest, dividends, or capital gains earned on the account.
The Zakat obligation on growth is 2.5% of the fair market value of the account as of the end of the lunar year.
For example, if an employee’s 401k has a fair market value of $100,000 at the end of the lunar year, they would owe $2,500 in Zakat on the growth.
Determining the Zakat Due
The following table summarizes the Zakat obligations on 401k earnings and growth:
│ Type of Income │ Zakat Rate │
│ Earnings │ 2.5% │
│ Growth │ 2.5% │
To determine the total Zakat due on a 401k, simply add the Zakat obligations on earnings and growth.
For example, if an employee contributes $500 to their 401k and the account has a fair market value of $100,000 at the end of the lunar year, they would owe a total of $2,512.50 in Zakat.
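The two components in this example can be checked in a couple of lines, with the figures taken directly from the text above.

earnings_zakat = 500 * 0.025        # zakat on the $500 contribution
growth_zakat = 100_000 * 0.025      # zakat on the account's year-end fair market value
print(earnings_zakat + growth_zakat)  # 2512.5, matching the total above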
Zakat on 401k: A Guide
Zakat is an obligatory charity for Muslims that is given to those in need. It is one of the five pillars of Islam and is considered a religious duty. While traditional calculations of Zakat typically
focus on liquid assets, there are also considerations for retirement accounts like 401ks.
Determining Zakatable Assets
• Pre-tax Contributions: Employer contributions and employee pre-tax contributions (reductions from your paycheck) are considered non-zakatable as they have not yet been taxed.
• Post-tax Contributions: After-tax contributions (contributions made with post-tax dollars) are considered zakatable as you have already paid taxes on them.
• Investment Growth: Any earnings or growth on both pre-tax and post-tax contributions are zakatable once they have been vested (accessible without penalty).
Calculating Zakat Amount
Once you have determined your zakatable assets, you can calculate the Zakat amount using the following formula:
Zakat Payable = (Zakatable Amount) x 2.5%
Tax Implications
It’s important to note that withdrawing funds from your 401k to pay Zakat may trigger income taxes and penalties if done prematurely. To avoid this, consider the following options:
• Qualified Charitable Distributions (QCD): QCDs are made from an IRA rather than directly from a 401k, and become available at age 70½; up to $100,000 per year can be transferred tax-free to a qualifying charity (401k funds can first be rolled over to an IRA if you want to use this route for Zakat).
• Zakat-compliant Investment: Invest in shariah-compliant investments that generate halal income and pay Zakat on the annual earnings.
Example Calculation
│ 401k Contributions │ Zakatable Amount │
│ Pre-tax Contributions: $20,000 │ $0 │
│ Post-tax Contributions: $5,000 │ $5,000 │
│ Investment Growth: $10,000 │ $10,000 │
│ Total Zakatable Assets │ $15,000 │
│ Zakat Payable │ $375 │
In this example, the total zakatable assets are $15,000, and the Zakat payable is $375.
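The formula from "Calculating Zakat Amount" reproduces the table's result; as in the example, only the post-tax contributions and the vested investment growth enter the zakatable base.

pre_tax_contributions = 20_000   # not zakatable in this example
post_tax_contributions = 5_000   # zakatable
investment_growth = 10_000       # zakatable once vested

zakatable_assets = post_tax_contributions + investment_growth
print(zakatable_assets * 0.025)  # 375.0, matching the Zakat Payable row above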
Consult with a financial advisor and an Islamic scholar to determine the most appropriate approach for your specific situation and religious obligations.
Thanks for sticking with me through this guide on calculating Zakat on your 401k. I hope you found it helpful and that you now have a better understanding of how to fulfill this important religious
obligation. Remember, Zakat is a way to purify your wealth and share your blessings with those in need. By calculating and paying your Zakat accurately, you are not only fulfilling a religious duty
but also making a positive impact on your community. If you have any further questions or need additional guidance, feel free to visit my website or reach out to me directly. Until next time, stay
informed and financially responsible! | {"url":"https://www.nchin.org/how-to-calculate-zakat-on-401k/","timestamp":"2024-11-03T03:35:08Z","content_type":"text/html","content_length":"118547","record_id":"<urn:uuid:70a58ed7-2d60-4a3d-bee7-ad360dffa109>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00057.warc.gz"} |
Multiply and Divide by 10, 100 and 1000 (1)
Multiply and divide numbers by 10, 100 and 1000 and observe the effect. Lots of choice of level including decimals. | {"url":"https://mathsframe.co.uk/en/resources/resource/30/multiply-and-divide-by-10-100-and-1000-1-","timestamp":"2024-11-10T01:24:23Z","content_type":"text/html","content_length":"11733","record_id":"<urn:uuid:47f6a17c-57c9-49c2-a989-86ef86a82c15>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00845.warc.gz"}