id stringlengths 1 6 | url stringlengths 16 1.82k | content stringlengths 37 9.64M |
|---|---|---|
189500 | https://www.reddit.com/r/worldbuilding/comments/140742c/nation_vs_country_vs_state/ | Nation vs country vs state : r/worldbuilding
For artists, writers, gamemasters, musicians, programmers, philosophers and scientists alike! The creation of new worlds and new universes has long been a key element of speculative fiction, from the fantasy works of Tolkien and Le Guin, to the science-fiction universes of Delany and Asimov, to the tabletop realm of Gygax and Barker, and beyond. This subreddit is about sharing your worlds, discovering the creations of others, and discussing the many aspects of creating new universes.
[deleted] · 2 yr. ago
Nation vs country vs state
Discussion
To be honest, I feel like "state" really fits better: it doesn't imply that the territory is a full-on country. But I like "nation" better than "country", I guess because I feel the term "country" is specific to the modern concept of countries, which again is why I like "state" better. An area where kingdoms are not tied to each other as a greater state would be an area of several small states; they could ally with bigger nations while remaining independent, no more than a kingdom and perhaps its surrounding territory.
But I'm nervous because in the USA, "states" are smaller territories within the nation. So is "nation" really best? I just like the adaptability of the word "state".
Maybe "sovereignty"?
What are your thoughts?
|
189501 | https://weibeld.net/misc/necessary-vs-sufficient.html | Necessary vs. Sufficient
Necessary vs. Sufficient
Daniel Weibel
Created 29 Oct 2013
Last updated 31 Oct 2017
These are some reminders to build intuition for the terms necessary and sufficient conditions.
Sets
Consider a Venn diagram in which the set P lies entirely inside the set Q:
Sufficient
P is a sufficient condition for Q
If an element is in P, it must also be in Q
P is not a necessary condition for Q
An element may be in Q without being in P
Necessary
Q is a necessary condition for P
An element cannot be in P without also being in Q
Q is not a sufficient condition for P
If an element is in Q, it may or may not be in P
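These set relationships can be checked directly with ordinary Python sets; the concrete elements below are arbitrary placeholders standing in for the subset picture P ⊆ Q.

```python
P = {1, 2}
Q = {1, 2, 3, 4}  # P is a proper subset of Q

# Sufficient: being in P guarantees being in Q
assert P <= Q
assert all(x in Q for x in P)

# Not necessary: some elements of Q are not in P
assert any(x not in P for x in Q)
```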
Logic
Consider the following propositional logic formula:
P → Q
Sufficient
P is a sufficient condition for Q
If P is true, then Q must be true
P is not a necessary condition for Q
Q may be true even if P is false
Necessary
Q is a necessary condition for P
P can only be true if Q is also true
Q is not a sufficient condition for P
If Q is true, P may be true or false
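The same reading can be verified over the truth table of P → Q; a small Python sketch:

```python
from itertools import product

def implies(p, q):
    # P -> Q is false only when P is true and Q is false
    return (not p) or q

# All truth assignments consistent with P -> Q
rows = [(p, q) for p, q in product([False, True], repeat=2) if implies(p, q)]

# Sufficient: in every row where P is true, Q is also true
assert all(q for p, q in rows if p)

# Not necessary: there is a row where Q is true but P is false
assert any(q and not p for p, q in rows)
```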
Licensing
This website and all its content is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
© 2017–2019 Daniel Weibel |
189502 | https://www.quora.com/What-are-the-differences-between-stereoisomers-and-constitutional-isomers | What are the differences between stereoisomers and constitutional isomers? - Quora
What are the differences between stereoisomers and constitutional isomers?
Assistant Bot · 1y
Stereoisomers and constitutional isomers are two types of isomers that differ in the arrangement of atoms in a molecule. Here are the key differences between them:
Constitutional Isomers
Definition: Constitutional isomers (also called structural isomers) are compounds that have the same molecular formula but differ in the connectivity of their atoms. This means that the way atoms are bonded to each other is different.
Examples:
Butane (C₄H₁₀): Has two constitutional isomers: n-butane (a straight-chain) and isobutane (a branched-chain).
Alcohols: Butanol has different forms like 1-butanol and 2-butanol, which have different connectivity.
Stereoisomers
Definition: Stereoisomers are compounds that have the same molecular formula and the same connectivity of atoms but differ in the spatial arrangement of those atoms. This means that the atoms are connected in the same way but are oriented differently in three-dimensional space.
Types:
Geometric Isomers: These arise due to restricted rotation around a double bond (cis/trans isomerism).
Optical Isomers: These are chiral molecules that are non-superimposable mirror images of each other, often referred to as enantiomers.
Examples:
Cis-2-butene and trans-2-butene: Both have the formula C₄H₈ but differ in the spatial arrangement around the double bond.
Lactic acid: Exists as two enantiomers, D-lactic acid and L-lactic acid, which are mirror images of each other.
Summary
Constitutional Isomers: Different connectivity (different structures).
Stereoisomers: Same connectivity but different spatial arrangements.
Understanding these differences is crucial in fields such as organic chemistry, where the properties and reactivities of compounds can vary significantly based on their isomeric forms.
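The "same molecular formula, different connectivity" point for butane can be checked mechanically. A toy Python sketch (it parses only single-letter elements and simple (...)n groups, enough for these examples; it is not a general chemical-formula parser):

```python
import re
from collections import Counter

def formula(condensed):
    """Atom counts for a simple condensed formula such as (CH3)2CHCH3."""
    # Expand (...)n groups, e.g. (CH3)2 -> CH3CH3
    while "(" in condensed:
        condensed = re.sub(
            r"\(([^()]*)\)(\d*)",
            lambda m: m.group(1) * int(m.group(2) or 1),
            condensed,
        )
    counts = Counter()
    for elem, num in re.findall(r"([A-Z])(\d*)", condensed):
        counts[elem] += int(num or 1)
    return counts

# n-butane and isobutane: same molecular formula C4H10, different connectivity
assert formula("CH3CH2CH2CH3") == formula("(CH3)2CHCH3") == Counter({"C": 4, "H": 10})
```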
Sophie Bodrog · Biochemistry undergraduate · 9y
Constitutional isomers have the same chemical formula, but the way in which the different atoms are connected together differs. In the case of stereoisomers, the connections between all the atoms are the same, however the orientation of the atoms in three dimensional space is different and brings about the idea of chirality.
Fred Pearce · University Teacher of Chemistry & Pharmacology · 6y
Originally Answered: How does a chemical property distinguish constitutional isomers and stereoisomers?
Constitutional isomers are compounds that have the same molecular formula and different connectivity. They may have quite different chemical properties. Thus, ethanol (CH3CH2OH) and dimethyl ether (methoxymethane, CH3OCH3) both have the molecular formula C2H6O but ethanol is common alcohol while dimethyl ether is a colorless volatile poisonous liquid compound used as a solvent, fuel, aerosol, propellant and refrigerant.
Stereoisomers have the same connectivity but differ in the arrangement of the atoms in space. They may be enantiomers (related as object and mirror image) or diastereomers (not related as object and mirror image). Enantiomers have identical chemical properties (apart from their reactions with other chiral molecules; they also differ in their effect on polarised light), while diastereomers may have quite different chemical properties. Thus, fumaric acid and maleic acid are diastereomers but are so different that they are even given distinct names.
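The ethanol versus dimethyl ether contrast can be made concrete with minimal heavy-atom bond graphs (hydrogens implied; the atom labels C1, C2, O are illustrative placeholders, not any cheminformatics convention):

```python
from collections import Counter

# Heavy-atom skeletons as sets of bonds
ethanol        = {("C1", "C2"), ("C2", "O")}   # C-C-O
dimethyl_ether = {("C1", "O"), ("O", "C2")}    # C-O-C

def composition(bonds):
    atoms = {a for bond in bonds for a in bond}
    return Counter(name[0] for name in atoms)  # element = first character of label

# Same heavy-atom composition (both are C2H6O once hydrogens are added)...
assert composition(ethanol) == composition(dimethyl_ether) == Counter({"C": 2, "O": 1})

# ...but different connectivity: the two carbons are bonded in ethanol only
def bonded(bonds, x, y):
    return any({x, y} == set(b) for b in bonds)

assert bonded(ethanol, "C1", "C2")
assert not bonded(dimethyl_ether, "C1", "C2")
```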
David Min · Studied Organic Chemistry (Graduated 2010) · 4y
Related: What is the difference between structural and constitutional isomers?
”Structural isomer“ is obsolete. It is not recommended or recognized by IUPAC except as archaic.
If you think about the three types of isomer it is too vague.
There is something structurally different about every kind of isomer so that term means little in appropriateness and way too much in the larger scheme of things. It’s too imprecise.
Structural isomers are constitutional isomers; one term is obsolete. Other obsolete terms still exist (geometrical isomers, homomers…).
To even be classified by major type (there are only three), all isomers must have the same molecular formula; otherwise, they would not be isomers at all. That is the similarity between them; the differences, however, are what matter.
1) Constitutional isomers differ in the connectivity of atoms: C2H6O is ethanol (CH3CH2OH) or dimethyl ether (CH3OCH3). They have completely different physical properties. These were formerly called "structural" isomers.
2) configurational isomers come in two forms:
Enantiomers (nonsuperimposable mirror images).
and diastereomers (nonsuperimposable non-mirror images).
These differ by the arrangement of the atoms in 3-D space.
3) Conformational isomers are the same compound in a different shape. Cyclohexane is the most commonly studied, based on its shape: the flat hexagon is horribly strained, while the chair conformer is much more stable.
Constitutional Isomers can easily be recognized. How?
If you can name compounds, anything that has a different name from another molecule yet the same chemical formula can only be a constitutional isomer, regardless of how it is drawn.
The other two types have the same name, since the structures differ only in spatial arrangement or in strain (but not all the physical properties are the same for configurational and conformational isomers).
Donald Montecalvo · Former Organic Chemistry Professor for 35 Years (1975–2010) · 4y
Related: What is the difference between structural and constitutional isomers?
Many texts say that these words apply to the same type of isomerism. I do not like to use the term "structural" isomers, since it can apply to stereoisomers also. The best way is to use the term "constitutional" isomers for molecules with the same molecular formula but a different bonding sequence, meaning which atoms are attached to which other atoms. For example, CH3OCH3 and CH3CH2OH are constitutional isomers, since the sequence of bonding has been altered.
The term “stereoisomers” refers to molecules with the same constitution (bonding sequence) but with different orientation of certain atoms in 3-D space. For example, the molecules cis- and trans- 2-butene are stereoisomers. The structures are different; if you had models, you would have to break bonds and reassemble them to interconvert these stereoisomers. This is, technically, a structural change.
Bottom line is, do not use the term structural isomers, since it brings in a certain amount of vagueness.
Daniel James Berger · PhD in chemistry · Updated 8y
Related: Are all isomers stereoisomers? How do I identify them?
There are three kinds of isomers (compounds which have the same formula but are not identical in structure):
In constitutional isomers the same atoms are arranged in a different order or connectivity. [math]\mathrm{CH_3CH_2CH_2CH_3}[/math] and [math]\mathrm{(CH_3)_2CHCH_3}[/math] are constitutional isomers. They both have the formula [math]\mathrm{C_4H_{10}}[/math] but are arranged in a different order.
Note that constitutional isomers have different "condensed formulas," as I just demonstrated.
In stereoisomers, atoms are arranged in the same order but point different directions in space. There are two types of stereoisomer: diastereomers and enantiomers.
It is important to recognize that stereoisomers have the same condensed formulas; you need to use representations that have directionality to them in order to distinguish stereoisomers.
Enantiomers are stereoisomers (not superimposable in 3-D) that are also mirror images of each other. A simple example is (R)-2-butanol and (S)-2-butanol. Both have the condensed formula [math]\mathrm{CH_3CH(OH)CH_2CH_3}[/math].
Diastereomers are stereoisomers that are not mirror images of each other. I give two examples below, one in which there are no chiral carbon atoms and one in which there are two.
2-butene has the condensed formula [math]\mathrm{CH_3CHCHCH_3}[/math]. But it comes in two stereoisomers: one in which the methyl groups are on the same side of the double bond, and one in which they are on opposite sides.
Tartaric acid has the condensed formula HOOCCH(OH)CH(OH)COOH. It has three stereoisomers. Two of them are enantiomers of each other, and the third is a diastereomer of the other two.
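The tartaric acid count (three stereoisomers from two stereocenters) can be reproduced by brute force over R/S labels. A sketch under two stated assumptions: the two stereocenters sit on a symmetric backbone, so reversing the label order gives the same molecule; and the mirror image inverts every center:

```python
from itertools import product

INVERT = {"R": "S", "S": "R"}

def canonical(cfg):
    # Symmetric backbone: (a, b) and (b, a) describe the same molecule
    return min(cfg, cfg[::-1])

def mirror(cfg):
    # The mirror image inverts every stereocenter
    return canonical(tuple(INVERT[c] for c in cfg))

stereoisomers = {canonical(cfg) for cfg in product("RS", repeat=2)}
assert len(stereoisomers) == 3  # (R,R), (S,S), and the meso form (R,S)

# (R,R) and (S,S) are mirror images of each other: an enantiomer pair
assert mirror(("R", "R")) == canonical(("S", "S"))

# (R,S) equals its own mirror image: the meso diastereomer
assert mirror(("R", "S")) == canonical(("R", "S"))
```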
Barbara Murray · PhD in Organic Chemistry, University of Illinois at Urbana-Champaign (Graduated 1984) · 6y · Upvoted by Bob Mouk, PhD Organic Chemistry, Michigan State University (1969)
Related: What's the difference between cis-trans isomers and stereoisomers?
Stereoisomers are compounds that have the same molecular formula and the same atomic linkages, but a different arrangement in space. There are two categories of stereoisomers: enantiomers (non-superimposable mirror images of each other) and diastereomers (all other stereoisomers that are not enantiomers). Cis/trans isomers (sometimes called geometric isomers) are diastereomers, so we can say that cis/trans isomers are a subset of stereoisomers. Cis/trans isomers happen when there is no free rotation around a bond, for example a double bond. Then side groups can be on one side of the double bond (cis) or on opposite sides of the double bond (trans).
Shiraz Kaderuppan · 5y
Related: What is the difference between optical isomers and stereoisomers?
Stereoisomers refer to any kind of isomer differing in the manner by which their atoms are arranged in space. This includes cis-/trans- and E-/Z- isomerism, optical isomerism, etc.
Optical isomers (enantiomers) are a special class of stereoisomers which are able to rotate plane polarized light passing through the isomeric compound, if one type of isomer is more prevalent than the other. In biochemistry, there exist mainly 2 types of optical isomers - D & L type isomers. The former rotates plane polarized light clockwise; the latter rotates it counter-clockwise. A racemic mixture exists when there is equal proportion (50%) of each isomer type, so no apparent rotation of the plane polarized light occurs. In most optical isomeric organic compounds, the carbon atom which is responsible for causing the isomers to be formed is said to be chiral.
Jack Leonard · Ph.D. in Chemistry & Biology, California Institute of Technology (Caltech) (Graduated 1971) · 3y
Related: How do the types of stereoisomers differ from each other?
Basically, there are two types of stereoisomerism.
What such isomers have in common is that the bonds in the backbone are identical in number, type, and sequence. Thus, in the cis- and trans- forms of dichloroethylene, each of the end carbons is doubly bonded to the other carbon and has single bonds to a hydrogen and a chlorine, that is, in linear notation ClHC=CHCl. In 3-dimensional space, though, the two Cl are locked (by the nature of the pi bond) so that the structure does not readily rotate. Therefore, there are two different isomeric forms: cis- (or Z-), where both chlorines are on a single side of the line between the carbons, or trans- (or E-), where they are not.
The other type of stereoisomerism is a little harder to visualize. It is called chirality or mirror isomerism or optical isomerism, since the two forms differ as the right hand differs from the left [chiros is the Greek word for “hand”], which is to say, they are mirror images of one another. It can only happen with three-dimensional structures, and the simplest form is with the tetrahedral structure and four different attached substituents.
Notice that, in a drawing of the two mirror-image tetrahedra with substituents W, X, Y, and Z, we could rotate the one on the right 180 degrees to make the W and X overlap, but now the Y and Z will not be in the same position relative to the rest of the molecule.
Douglas Klein · PhD in Chemical Physics, The University of Texas at Austin (Graduated 1971) · 4y
Related: What is the difference between structural and constitutional isomers?
Isomers (typically) refer to molecules with precisely the same numbers of each type of atom per molecule. Structural & constitutional isomers are the same thing — with the same numbers of each atom and further the same numbers of each type of bond (including the atom types on the two ends of a bond). The phrase “structural isomer” was at one time commonly used, but now the officially preferred name is “constitutional isomer”.
There are other distinguished sorts of isomers. Two molecules are "enantiomers" if the effective structures are chiral, with the two being mirror images of one another. Two molecules are "stereoisomers" if they are the same constitutional isomer but their (effective) geometric arrangements of atoms in space differentiate them from one another. Ambiguities about isomerism can occur if there is a possibility of interconversion between two molecular arrangements which, if it did not happen, would lead to distinguishable isomers. Sometimes a molecular structure leads to distinguishable isomers at lower temperatures, but not at higher temperatures.
Fred Pearce · University Teacher of Chemistry & Pharmacology · 6y
Related: What is the difference between stereoisomers and configurational isomers?
Configurational isomers are stereoisomers that cannot be converted into one another by rotation around a single bond. Conformational isomers are stereoisomers which can be interconverted just by rotations about formally single bonds.
Examples of configurational isomers include cis/trans isomers and enantiomers. An example of conformational isomers would be the eclipsed and staggered forms of ethane. So, configurational isomers are one type of stereoisomer; the general term stereoisomerism includes both configurational and conformational isomerism.
Suparna Pati · M.Sc. in Chemistry, Kolhan University (Graduated 2018) · 7y
Related: What is the difference between isomers and enantiomers?
An isomer is a molecule that has the same molecular formula as another but a different chemical structure; propanol is an example of an isomer.
An enantiomer is a special type of isomer, i.e. a stereoisomer. These isomers have the same bond structure but different geometrical positions of their atoms and functional groups in space.
An enantiomer is an isomer whose molecules are mirror images of each other and which has a chiral center.
Lactic acid is an example: its two forms are mirror images of each other.
Alexander Mathey · Former Chemical Engineer, retired, lives in Athens, GR · 6y · Upvoted by Bob Mouk, PhD Organic Chemistry, Michigan State University (1969)
Related: What's the difference between cis-trans isomers and stereoisomers?
Cis-trans isomerism is one special case of stereoisomerism, so what you are asking is probably the difference between cis-trans isomers and enantiomers.
Cis - Trans isomers are different compounds with different physical and chemical properties.
They come into being when a double bond, or some other plane defining structural element, forces two different structural groups to lie either on the same side (Cis) or across (Trans) of the line dividing the plane in two half planes.
Example: Maleic acid and fumaric acid are different compounds.
Enantiomers (or optical isomers) are two different configurations of the same compound, if its molecule can exist in two forms which are non-superimposable mirror images of each other.
The situation is analogous to the relation of a right and a left hand to each other, therefore this property is called ‘chirality’, from the Greek word χείρ (chir) = hand.
One, but not the only one, situation which leads to the formation of enantiomers is when one carbon atom is connected to 4 different atoms or groups: Since those are arranged around this ‘chiral’ carbon as the corners of a tetrahedron around its center, they can exist in two mirror configurations which can never coincide.
They are designated as R/S (for Rectus/Sinister), or D/L (for Dexter/Laevus) or simply (+)/(-) optical isomers.
The two conditions can coexist: if a cis-trans isomer contains one or more chiral carbon atoms, then you can have R/S isomerism for both the cis- as well as the trans- configuration.
© Quora, Inc. 2025 |
189503 | https://www.geektonight.com/time-value-of-money-tvm/ | Time Value Of Money (TVM): Rationale, Concepts
Time Value of Money (TVM): Rationale, Concepts
Post last modified: 10 August 2023
Post category: Financial Management
Time Value of Money (TVM)
The determination of an asset’s monetary value is one of the most important functions of financial analysis. This value is measured in part by the income produced over the asset’s lifetime. Since funds can be received at various times, it may be difficult to compare the values of different assets. Let us begin with a simple scenario. Would you rather have an asset that pays ₹1,000 today, or one that pays ₹1,000 a year from now?
Table of Content
1 Time Value of Money (TVM)
1.1 Rationale for TVM
1.2 Why Future Amount of Money is Worth Less than Same Amount of Present Money
2 Important Concepts in TVM
2.1 Types of Cash Flows and Timelines
2.1.1 Annuity
2.1.2 Graduated Annuity (Growing Annuity)
2.2 Simple Interest and Compound Interest
2.3 Compounding and Discount
2.4 Nominal Interest Rate, Real Interest Rate and Effective Interest Rate (EIR)
2.5 Compounding – Finding Future Value
2.6 Discounting – Finding Present Value
2.7 Discounting a Single Cash Flow
2.8 Discounting a Series of Cash Flows
3 Calculating Values of Lump-Sum Amounts
3.1 Future Value of Single Amount
3.2 Present Value of Single Amount
4 Calculating Values of Annuities
4.1 Future Value of Annuity
4.2 Present Value of Annuity
5 Practical Applications of Time Value of Money
5.1 Sinking Fund Problems
5.2 Capital Recovery Problems
5.3 Compound Growth Rate Problems
It turns out that money paid now is preferable to money paid later (we will see why in a moment). This preference for earlier payment is called the time value of money. The time value of money is at the heart of many financial calculations, particularly those that involve valuation. What if you could choose between receiving a payment of ₹1,000 today and ₹1,100 in a year? The second choice gives you more money, but at a later date. In this section, we will see how companies and investors make that comparison.
Rationale for TVM
The fact that the value of one rupee today is not equal to the value of one rupee at the end of one year or at the end of the second year underpins the entire spectrum of financial decisions (whether funding or investment). In other words, we cannot presume that the rupee’s value would remain constant. This is referred to as the ‘Time Value of Money.’ In financial decision-making, understanding the time value of money is important.
A financial decision made today would have long-term consequences. Any financial decision includes weighing capital outflows (outlays or investment costs) against cash inflows (benefits or earnings after tax but before depreciation). For a meaningful comparison, the two sets of flows must be strictly comparable. The use of time elements in calculations is a basic necessity of comparability.
To put it another way, to make a fair and accurate comparison between cash flows that occur over time, the amounts of money must be converted to common points of time. If the timing of cash flows is not taken into account, the firm will make decisions that are not objective.
This may be due to:
Risk or the uncertainty of future receipts
Inflation, which reduces money’s buying power
Opportunities to reinvest funds earned early
It could be argued that the risk factor associated with future money receipts could be removed, or minimised to a great degree, by appropriate commitments, default insurance, and so on, so that the possibility of default (money not being received in future) becomes quite remote. The time value of money would then, naturally, become irrelevant. Similarly, if the economy is believed to be free of inflation, the value of money today and tomorrow can be assumed to be the same, and the time value of money again becomes meaningless.
Despite these two drastic assumptions, a rupee received today will still be preferred over a rupee received tomorrow (i.e. in future), since the rupee received today could be invested and its value tomorrow would be higher (because the rupee invested will earn interest). Future cash flows are considered less valuable than current cash flows precisely because funds received early can be reinvested.
If the funds were earned now, they would gain a rate of return that would be impossible to achieve if they were received later. Understanding the time value of money entails a basic understanding of the mathematical principles of compounding and discounting. These ideas are present in all types of financial decisions.
Why Future Amount of Money is Worth Less than Same Amount of Present Money
The principle of present value states that a sum of money today is worth more than the same sum in the future. To put it another way, money received in the future is not worth as much as money received today. A ₹1,000 gift today is worth more than ₹1,000 received in five years. What is the reason for this? An investor can invest the ₹1,000 today and earn a return on it for the next five years.
Present value accounts for whatever rate of return an investment can earn. For instance, if an investor receives ₹1,000 today and can earn a 5% annual return, the ₹1,000 today is unquestionably worth more than ₹1,000 five years from now. If an investor waited five years for the ₹1,000, he or she would incur an opportunity cost by missing out on five years of returns.
Important Concepts in TVM
To thoroughly understand the concept of TVM, it is essential that you are well-acquainted with the terminology that is used in it. There are two important concepts related to TVM as follows:
Types of cash flows
Timeline of cash flows
Types of Cash Flows and Timelines
The major types of cash flows are as follows:
Annuity
An annuity refers to a series of equal cash flows (payments or receipts) that are paid or received at equal time intervals for a particular number of periods. For example, Tarun leased a factory from Varun for 10 years and promised to make a payment of ₹12,00,000 at the beginning of each period. This can be called a 10-year, ₹12,00,000 annuity. A series of cash flows is called an annuity only when the amounts of all the cash flows are identical and the time interval between any two consecutive cash flows is the same.
There are two types of annuities that are differentiated from each other because of the timing of the first cash flow as follows:
Regular annuity: In this annuity, the payment is made at the end of each period.
Annuity due: In this annuity, the payment becomes due immediately at the start of each period.
Graduated Annuity (Growing Annuity)
A graduated annuity is a type of annuity that involves the payment or receipt of gradually increasing cash flows for a particular number of periods. It must be remembered that the rate of growth remains constant for the life of the annuity.
Lumpsum: When a single payment is made, it is called a lump sum payment.
Perpetuity: When annuity cash flows continue for an infinite period, it is called a perpetuity or perpetual annuity.
Uneven cash flows: Any series of cash flows that does not fit the definition of an annuity is considered a case of uneven cash flows. For example, receiving payments of ₹10,000, ₹20,000, ₹15,000 and ₹40,000 is an uneven cash flow.
The timeline of cash flows may differ from one case to another. Most frequently used timelines in TVM are: annual, semi-annual, quarterly, and monthly cash flows. When cash is paid once in a year, it is called annual cash flow. When cash is paid twice a year, it is called semi-annual cash flow. When cash is paid once every three months, it is called quarterly cash flow. Lastly, when cash is paid every month, it is called monthly cash flow.
Simple Interest and Compound Interest
The time value of money is a term used in accounting to describe the relationship between time and money. In other words, a dollar earned today is worth more than a dollar due at a later date. A dollar is worth more today because of the opportunity to invest the dollar and receive interest on the investment. What exactly is interest? Interest is a type of compensation for the use of capital. It is the amount of money collected or returned in excess of the amount lent or borrowed.
Suppose you lend a friend $100 at a 10% annual interest rate. Your friend will owe you $110 after a year ($100 borrowed plus $10 interest). Simple interest and compound interest are the two methods for calculating interest on money. Simple interest is computed on the original principal only, as the preceding case demonstrates: interest is calculated each year on the initial sum lent or borrowed.
Compound interest is the amount earned in excess of the amount lent or borrowed over two or more time periods, where each period's interest is calculated on the accumulated balance (original principal plus interest to date) at the end of the previous period. Change the interest in the previous example from simple to compound. You will receive $10 in interest in the first year, for a total of $110.
You will then earn $11 in interest in the second year ($110 × 10%), for a total balance of $121. The additional dollar is interest earned on the first year's interest as well as on the initial balance.
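The simple-versus-compound comparison above can be checked with a minimal sketch; the figures follow the $100-at-10% example, and the variable names are illustrative:

```python
# Simple interest accrues on the original principal only; compound interest
# accrues on the running balance, so year 2's interest is computed on $110.
principal = 100.0
rate = 0.10
years = 2

simple_total = principal * (1 + rate * years)      # 100 + 10 + 10
compound_total = principal * (1 + rate) ** years   # 110 * 1.10

print(round(simple_total, 2), round(compound_total, 2))  # 120.0 121.0
```

The extra rupee (or dollar) under compounding is exactly the interest earned on the first year's interest.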
Compounding and Discount
The time value of money theory states that the value of a unit of money can change in the future. Simply put, the current value of one rupee will decrease in the future. The whole definition revolves around the current and future value of capital. Compounding and discounting are the two mechanisms for determining the value of money at various points in time. Compounding is a method for calculating the future value of present money.
Discounting, on the other hand, is a method of calculating the present value of future money. Compounding is useful for determining the potential values of a cash flow at the end of a certain time at a certain rate. Discounting, on the other hand, is used to calculate the present value of potential cash flows at a given interest rate. To comprehend the idea of compounding, you must first comprehend the term future value.
Over a certain period of time, the money you invest today will rise and gain interest, changing its value automatically in the future. As a result, the future value of an investment is what it is worth in the future. Compounding is the method of collecting returns on both the principal and earned interest by reinvesting the whole sum to gain even more interest. Compounding is a technique for calculating the potential value of a current investment.
The compound interest formula that can be used to calculate the future value (compounding) is as follows.

Future Value of a Single Cash Flow is calculated as:

A = P (1 + R)^t

Where,
A = Future value (amount)
P = Present value (principal invested)
R = Rate of return on investment
t = Number of years
Future Value of an Annuity is calculated as:

FV = A × [(1 + R)^t − 1] / R

Where,
FV = Future value of the annuity
A = Annuity amount paid or received each period
R = Interest rate per period
t = Number of periods
Present value refers to today's worth of a future amount. By applying a discount rate, the discounting method helps in determining the current value of future cash flows. To calculate the present value of a future cash flow, use the following formula:

PV = FV / (1 + R)^t

If n = number of compounding periods per year, then the FV and PV formulae are modified as:

FV = PV (1 + R/n)^(nt) and PV = FV / (1 + R/n)^(nt)
Nominal Interest Rate, Real Interest Rate and Effective Interest Rate (EIR)
The nominal interest rate is the rate of interest until inflation is taken into account. It also refers to the rate stated in the loan contract before compounding is taken into account. The nominal interest rate varies from the real interest rate in terms of inflation adjustment, and the effective interest rate differs from the nominal interest rate in terms of compounding adjustment.
Different factors can influence nominal interest rates, including money demand and supply, federal government intervention, central bank monetary policy, and many others. The short-term nominal interest rate is used by central banks to enforce monetary policy. To stimulate economic activity during a recession, the nominal rate is reduced. The nominal rate is boosted during inflationary times. The rate of time preference for current goods over future goods is reflected in the real interest rate.
The real interest rate of an investment is calculated as the difference between the nominal interest rate and the inflation rate:
Real Interest Rate = Nominal Interest Rate – Inflation (Expected or Actual)
The nominal interest rate is the rate charged on a loan or investment, while the real interest rate reflects the shift in buying power gained from an investment or giving up by the borrower. The nominal interest rate is usually the one advertised by the lending or investing institution. Adjusting the nominal interest rate to account for inflation’s impact aids in identifying the change in the buying power of a given amount of capital over time.
There are two types of interest rates: the nominal interest rate and the effective interest rate. The nominal interest rate does not take the compounding cycle into account. When the compounding frequency is taken into account, the effective interest rate is a more precise indicator of interest charges. The phrase “interest rate of 10%” means that interest is compounded annually at a rate of 10% each year.
In that case, the nominal annual interest rate is 10%, and the effective annual interest rate is also 10%. The effective interest rate would be greater than 10% if compounding occurred more than once a year. The more often compounding occurs, the higher the effective interest rate.
The relationship between nominal annual and effective annual interest rates is:
i = [1 + (r / m)]^m − 1
where “i” is the effective annual interest rate, “r” is the nominal annual interest rate, and “m” is the number of compounding periods per year.
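The relationship above can be verified with a short sketch; the 10% nominal rate and the quarterly frequency are assumed illustrations:

```python
# Effective annual rate from a nominal rate r compounded m times a year:
# i = (1 + r/m)**m - 1.
def effective_rate(r, m):
    return (1 + r / m) ** m - 1

print(round(effective_rate(0.10, 1), 6))  # annual compounding: EIR equals the nominal rate
print(round(effective_rate(0.10, 4), 6))  # 0.103813 -> quarterly compounding pushes EIR above 10%
```

As the text notes, the more often compounding occurs, the higher the effective rate relative to the same nominal rate.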
Compounding – Finding Future Value
The compound value of an amount of investment can be computed by the following formula:
FV = PV (1 + R)^t

Where,
FV = Future value
PV = Amount invested
R = Interest rate
t = Number of years for which investment is made

If n = compounding frequency per year, then FV is calculated as:

FV = PV (1 + R/n)^(nt)
When the interest rate is compounded semi-annually or quarterly, the rate of interest must be divided by 2 and 4 respectively, and the number of years must be multiplied by 2 and 4 respectively. For instance, if ₹1,000 is invested for three years at a rate of 10% with quarterly compounding, the FV of ₹1,000 can be calculated as:

FV = 1,000 × (1 + 0.10/4)^(4 × 3) = 1,000 × (1.025)^12 = ₹1,344.89
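The quarterly-compounding case can be sketched as follows; `future_value` is an illustrative helper, not a library function:

```python
# FV = PV * (1 + R/n)**(n*t): ₹1,000 for 3 years at 10%, compounded quarterly.
def future_value(pv, rate, years, freq=1):
    return pv * (1 + rate / freq) ** (freq * years)

print(round(future_value(1000, 0.10, 3), 2))          # annual compounding for comparison
print(round(future_value(1000, 0.10, 3, freq=4), 2))  # 1344.89 with quarterly compounding
```

Dividing the rate by the frequency and multiplying the periods by it is all the formula change amounts to.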
Discounting – Finding Present Value
The time value of money is a basic financial concept that holds that money in the present is worth more than the same sum of money to be received in the future. This is true because money that you have right now can be invested and earn a return, thus creating a larger amount of money in the future.
Discounting a Single Cash Flow
There are two ways to look at an investment amount: as a present value or as a future value. We have shown that if we know the present value of an amount (such as our ₹100 deposit), computing the sum's future value after n years using the compounding equation is a reasonably easy process. If, however, we know only the future value of an amount and not its present value, we can rearrange that equation to calculate the present value of any future amount:

PV = FV / (1 + i)^n
Discounting a Series of Cash Flows
Usually, a capital expenditure project involves cash inflows for years to come. For example, assume that a company is acquiring a machine which involves cash inflows of ₹5,000 each year for five years. What is the present value of the stream of receipts from the project? As shown in the table below, the present value of this stream is ₹21,060 if we assume a discount rate of 6 percent compounded annually; the discount factors used were taken from Appendix 2.1.
Two points are important in this connection. First, notice that the farther we go forward in time, the smaller the present value of the ₹5,000 receipt. The present value of ₹5,000 received a year from now is ₹4,715.00, compared to only ₹3,735.00 for the ₹5,000 to be received 5 years from now. This point simply underscores the fact that money has a time value. The table below shows the value of ₹100 invested at 10 percent simple and compound interest.
| Year | Simple interest: initial amount | + Interest | = Year-end amount | Compound interest: initial amount | + Interest | = Year-end amount |
|---|---|---|---|---|---|---|
| 1 | 100 | 10 | 110 | 100 | 10 | 110 |
| 5 | 140 | 10 | 150 | 146 | 15 | 161 |
| 10 | 190 | 10 | 200 | 236 | 24 | 260 |
| 20 | 290 | 10 | 300 | 612 | 61 | 673 |

Value of ₹100 Invested at 10 Percent Simple and Compound Interest
Table showing factors at 6% with present value:

| Year | Factor @ 6% (Appendix 2.1) | Amount received | Present value |
|---|---|---|---|
| 1 | 0.943 | 5,000 | 4,715.00 |
| 2 | 0.890 | 5,000 | 4,450.00 |
| 3 | 0.840 | 5,000 | 4,200.00 |
| 4 | 0.792 | 5,000 | 3,960.00 |
| 5 | 0.747 | 5,000 | 3,735.00 |
| | | Total | 21,060.00 |
Present Value of a Series of Cash Receipts
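The discounting of this series can be reproduced directly; the variable names are illustrative, and the small gap from the table's total comes from its three-digit factors:

```python
# Present value of ₹5,000 received at the end of each of 5 years,
# discounted at 6% compounded annually. The table's ₹21,060 uses
# three-digit discount factors; full precision gives ₹21,061.82.
cash_flow, rate, years = 5000, 0.06, 5
pv = sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))
print(round(pv, 2))  # 21061.82
```

Each term of the sum is one row of the table: the receipt times that year's discount factor.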
The second point is that even though the computations involved in the table are accurate, they involve unnecessary work. The same present value of ₹21,060 could have been obtained more easily by referring to Appendix 2.2. Appendix 2.2 is an annuity table which contains the present value of one rupee to be received each year over a series of years, at various rates of interest. Appendix 2.2 has been derived by simply adding the factors from Appendix 2.1 together.
To illustrate, we use the following factors from Appendix 2.1 in the computations:
| Year | Factor @ 6% |
|---|---|
| 1 | 0.943 |
| 2 | 0.890 |
| 3 | 0.840 |
| 4 | 0.792 |
| 5 | 0.747 |

Computations
Calculating Values of Lump-Sum Amounts
For a lump sum, the present value is the value of a given amount today. For example, if you deposited ₹5,000 into a savings account today at a given rate of interest, say 6%, with the goal of taking it out in exactly three years, the ₹5,000 today would be a present-value lump sum.
Future Value of Single Amount
Money that is available now is more important than money that will be available in the future. Suppose you have ₹100 and you deposited this amount in a bank at 10 % rate of interest for 1 year. How much future sum or value would you receive after 1 year? You would receive ₹110:
Future value = Principal + Interest = 100 + (0.10 × 100) = 100 × (1.10) = ₹110
What would be the future value if you deposited ₹100 for 2 years?
You would now receive interest on the interest earned after 1 year:
Future value = [100 + (0.10 × 100)] + 0.10 × [100 + (0.10 × 100)]
= 100 × 1.10 × 1.10
= ₹121
You could similarly calculate the future value for any number of years. We can express this procedure of calculating compound, or future, value in formal terms. Let i represent the interest rate per period, n the number of periods, P the present value, and Fn the future value after n periods. If P is invested at rate i, the future value F1 (principal plus interest) at the end of one year will be:

F1 = P + P × i = P (1 + i)

The outstanding amount at the beginning of the second year is F1 = P (1 + i). The compound sum at the end of the second year will be:

F2 = F1 + F1 × i = F1 (1 + i)
F2 = P (1 + i)(1 + i) = P (1 + i)^2

Similarly, F3 = F2 (1 + i) = P (1 + i)^3, and so on. The general form of the equation for calculating the future value of a lump sum after n periods may therefore be written as:

Fn = P (1 + i)^n
The term (1 + i)^n is the compound value factor (CVF) of a lump sum of 1, and it always has a value greater than 1 for positive i, indicating that the CVF increases as i and n increase. Suppose ₹1,000 is placed in the savings account of a bank at an interest rate of 5%. How much shall it grow to at the end of 3 years?
It will grow as follows:
F1 = ₹1,000 + (1,000 × 5%)
F1 = ₹1,000 + 50 = ₹1,050
F2 = ₹1,050 + (1,050 × 5%)
F2 = ₹1,050 + 52.50 = ₹1,102.50
F3 = ₹1,102.50 + (1,102.50 × 5%)
F3 = ₹1,102.50 + 55.10 = ₹1,157.60
Amount of ₹1,000 will earn interest of 50 and will grow to ₹1,050 at the end of the first year. The outstanding balance of ₹1,050 in the beginning of the second year will earn interest of ₹52.50, thus making the outstanding amount equal to ₹1,102.50 at the beginning of the third year. Future or compound value at the end of third year will grow to ₹1,157.60 after earning interest of ₹55.10 on ₹1,102.50. In compounding, interest on interest is earned.
Thus, the compound value of ₹1,000 in the example can also be calculated as follows:
F1 = ₹1,000 × 1.05 = ₹1,050
F2 = ₹1,000 × (1.05 × 1.05) = ₹1,000 × (1.05)^2 = ₹1,000 × 1.1025 = ₹1,102.50
F3 = ₹1,000 × (1.05 × 1.05 × 1.05) = ₹1,000 × (1.05)^3 = ₹1,000 × 1.1576 = ₹1,157.60
We can see that the compound value factor (CVF) for a lump sum of one rupee at 5 per cent, for one year is 1.05, for two years 1.1025 and for three years 1.1576. In Figure we show the future values of ₹1 for different interest rates. You can see from the figure that as the interest rate increases, the compound value of ₹1 increases appreciably.
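The year-by-year walk-through can be sketched in a few lines; note the text's ₹1,157.60 comes from the rounded four-digit factor 1.1576, while the exact three-year figure is ₹1,157.625:

```python
# Year-by-year compounding of ₹1,000 at 5%, matching the F1..F3 steps:
# interest on interest is earned each year.
balance = 1000.0
history = []
for _ in range(3):
    balance = balance * 1.05
    history.append(balance)
print(history)  # roughly [1050.0, 1102.5, 1157.625]
```

Multiplying the running balance by (1 + i) each year is exactly what the CVF (1 + i)^n collapses into a single factor.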
Present Value of Single Amount
₹1 today is the present value of ₹1.08 to be received a year from now, if it can be invested at 8% today to become ₹1.08 in the future. In business decision-making, the current value of future cash receipts is important. To assess whether an investment should be made, or how much should be spent on it, you must first determine how much the future receipts are worth today. Discounting is the inverse of compounding: it entails calculating the present value of a future sum of money that includes interest.
Calculating Values of Annuities
An annuity is a set of equivalent payments made at regular intervals, with each payment compounded or discounted at the time of payment. Rent is the term used to describe each annuity payment. There are various forms of annuities, with an ordinary annuity paying or receiving rent at the end of each year.
Future Value of Annuity
If you open a savings account that earns compound interest per month and deposit ₹100 at the end of each month, the deposits are the rents of an annuity. After a year, you will have made 12 deposits of ₹100 each, for a total of ₹1,200, but the account will hold more than ₹1,200 due to the interest earned on each deposit. If the interest rate is 6% a year, compounded monthly, your balance is ₹1,233.56.
The sum accumulated in the future from all the rents paid plus the interest earned on them is known as the future value of an annuity, or the amount of the annuity. To generate a table of future annuity values, we assume that payments of ₹1 are made per year into a fund that earns 8% interest compounded annually.
An annuity of four instalments of ₹1, each paid at the end of a year, with interest compounded at 8% per year, is depicted in Figure:
Notice that there are four rents and four periods, each rent being paid at the end of its period. At the end of the first period, ₹1 is deposited and earns interest for three periods. The next rent earns interest for two periods, and so on. The amount at the end of the fourth period can be determined by calculating the future value of each individual ₹1 deposit as follows:
Future value of 1 at 8% for 3 periods = 1.25971
Future value of 1 at 8% for 2 periods = 1.16640
Future value of 1 at 8% for 1 period = 1.08000
The fourth rent of 1 earns no interest = 1.0000
Total for 4 rents = 4.50611
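The 4.50611 total can be checked rent by rent, and against the closed-form annuity factor given a little further on; the variable names are illustrative:

```python
# Future value of an ordinary annuity of ₹1 for 4 rents at 8%: rent k
# compounds for (n - k) periods, and the closed form is ((1+i)**n - 1) / i.
i, n = 0.08, 4
by_rent = sum((1 + i) ** (n - k) for k in range(1, n + 1))
closed_form = ((1 + i) ** n - 1) / i
print(round(by_rent, 5), round(closed_form, 5))  # 4.50611 4.50611
```

The two agree because the rent-by-rent sum is a geometric series whose closed form is the annuity factor.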
The general formula for the future value of ₹1, with n representing the number of compounding periods, is:

FV = (1 + i)^n

You can use this factor for any interest rate and any number of periods. To find the future value of a principal amount P, multiply it by the factor:

FV = P (1 + i)^n

Where,
i = interest rate
n = number of periods

The formula for the future value of an ordinary annuity of ₹1, which can be used to produce tables for a variety of periods and interest rates, is:

F = [(1 + i)^n − 1] / i
Present Value of Annuity
An annuity’s present value is the sum that must be invested today at compound interest in order to receive the periodic rents in the future. We can get a table for the present value of an ordinary annuity of ₹1 by using the present value of ₹1 factors. The present value of an ordinary annuity of ₹1 is shown in Figure:
In an ordinary annuity, the number of rents equals the number of discounting periods, since each rent falls at the end of a period. We find the present value of the whole annuity by discounting each future rent back to the present.
Present value of 1 discounted for 1 period at 8% = 0.92593
Present value of 1 discounted for 2 periods at 8% = 0.85734
Present value of 1 discounted for 3 periods at 8% = 0.79383
Present value of 1 discounted for 4 periods at 8% = 0.73503
Present value of annuity of 4 rents at 8% = 3.31213
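The 3.31213 total can likewise be reproduced rent by rent and compared with the closed-form present value factor; the variable names are illustrative:

```python
# Present value of an ordinary annuity of ₹1 for 4 rents at 8%: rent k is
# discounted back k periods, and the closed form is (1 - (1+i)**-n) / i.
i, n = 0.08, 4
by_rent = sum((1 + i) ** -k for k in range(1, n + 1))
closed_form = (1 - (1 + i) ** -n) / i
print(round(by_rent, 5), round(closed_form, 5))  # 3.31213 3.31213
```

Each term of the sum is one line of the list above; the closed form is simply that geometric series summed.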
Because it is received soonest, the first rent has the largest present value. To solve problems in this field, a table of the present value of annuities can be used. Such a table is constructed from the following formula:

P = [1 − (1 + i)^−n] / i
Practical Applications of Time Value of Money
The time value of money has applications in many areas of finance, including capital budgeting, bond valuation and stock valuation. Future value describes the process of finding what an investment made today will grow to in the future; this process is called compounding.
Some of the major applications of TVM are as follows:
Sinking Fund Problems
At times, a financial manager may need to calculate the amount of annual payments needed to accumulate a particular amount of money at a later date to redeem an existing debt or substitute an existing asset. For example, a financial manager may have a target to accumulate a sum of ₹50,000 after five years.
When the compound interest rate is given (say 8%), what amount should be allocated or provisioned every year so that at the end of the 5th year the manager will have a sum of ₹50,000? This is common in the case of redemption of debentures. In such cases, the manager needs to determine the share of profits that should be retained by the company every year and invested at the given rate so as to obtain the sum of ₹50,000 after 5 years.
The formula used in this case is:

Future Value = Annuity Amount × FVIFA (r, n)
FV = A × FVIFA (r, n)

The annuity amount is therefore calculated as:

A = FV / FVIFA (r, n)

Let us calculate the annuity amount in our example above. FVIFA (8%, 5) = [(1.08)^5 − 1] / 0.08 = 5.8666, so:

A = 50,000 / 5.8666 ≈ ₹8,522.82 per year
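The sinking-fund computation for the ₹50,000 / 8% / 5-year case can be sketched as follows; `fvifa` is an illustrative helper, not a library function:

```python
# Sinking fund: find the annual provision A such that A * FVIFA(r, n) = target.
def fvifa(r, n):
    """Future value interest factor of an ordinary annuity of 1."""
    return ((1 + r) ** n - 1) / r

target, r, n = 50_000, 0.08, 5
annuity = target / fvifa(r, n)
print(round(annuity, 2))  # 8522.82 to be set aside each year
```

Setting aside about ₹8,523 a year at 8% thus accumulates to the ₹50,000 target after five years.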
Capital Recovery Problems
A financial manager may also want to know the annual equivalent instalment required to repay a certain amount of loan borrowed from a financial institution at a fixed rate of interest over a set period of time. The amount of each instalment to be paid by the company can be determined by using the following formula:

A = Loan Amount / PVIFA (r, n)

For example, a company has taken a loan of ₹10 lacs which is to be repaid in 20 equal instalments. If the compounding rate is 15%, then PVIFA (15%, 20) = [1 − (1.15)^−20] / 0.15 = 6.2593, and the amount of each instalment is:

A = 10,00,000 / 6.2593 ≈ ₹1,59,761
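The capital-recovery calculation for the ₹10 lac / 15% / 20-instalment example can be sketched as follows; `pvifa` is an illustrative helper, not a library function:

```python
# Capital recovery: equal instalment A repaying loan L over n periods at rate r,
# i.e. A = L / PVIFA(r, n).
def pvifa(r, n):
    """Present value interest factor of an ordinary annuity of 1."""
    return (1 - (1 + r) ** -n) / r

loan, r, n = 1_000_000, 0.15, 20
instalment = loan / pvifa(r, n)
print(round(instalment, 2))  # roughly ₹1.6 lacs per instalment
```

The reciprocal of PVIFA is sometimes tabulated directly as the capital recovery factor.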
Compound Growth Rate Problems
A finance manager may be required to measure the compound rate of growth over a period of time, for example of revenue or profit. He can easily measure such compound rates of growth using compound factor tables. Let us see how, using an example. Assume that you need to measure the compound rate of growth in profit for ABC Co. Ltd. using the following details:
| Year | Profit (in ₹ Lacs) |
|---|---|
| 2003 | 60 |
| 2004 | 65 |
| 2005 | 75 |
| 2006 | 87 |
| 2007 | 98 |
| 2008 | 100 |
Here, the rate of growth can be determined as follows:

FV = PV × FVIF (r%, n)

Where, FVIF (r, n) = Future Value Interest Factor at r% interest for n years

Therefore,

100 = 60 × FVIF (r%, n)
FVIF (r%, n) = 1.667

Now, we look up the future value table for 6 years and find that the nearest value to 1.667 is 1.677, which occurs at a 9% rate of interest.
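The table lookup can be checked by solving for r directly. Note the text reads the table at n = 6 years; if instead one counts 2008 − 2003 = 5 compounding intervals, the implied rate is higher:

```python
# Compound growth rate for the profit series: 60 grows to 100,
# so FVIF(r, n) = 100/60 = 1.667 and r = (100/60)**(1/n) - 1.
fvif = 100 / 60
rate_6 = fvif ** (1 / 6) - 1   # text's 6-year reading -> about the 9% from the table
rate_5 = fvif ** (1 / 5) - 1   # 5 intervals (2003 to 2008) -> a higher rate
print(round(rate_6 * 100, 1), round(rate_5 * 100, 1))  # 8.9 10.8
```

Which n is appropriate depends on whether the first year's profit is treated as the period-0 base; the tables the text uses assume six periods.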
|
189504 | https://www.youtube.com/watch?v=-CsEPYeSBsg | Closest Point to the Origin | MIT 18.01SC Single Variable Calculus, Fall 2010
MIT OpenCourseWare
Posted: 8 Jan 2011
Instructor: Christine Breiner
License: Creative Commons BY-NC-SA
Transcript:
Welcome back to recitation. Today we're going to work on an optimization problem. The question I want us to answer is: what point on the curve y = √(x + 4) comes closest to the origin? I've drawn a sketch of this curve; each hash mark is one unit in the x direction and one unit in the y direction. I just want to point out two easy places to figure out the distance to the origin: over here, where the curve starts at (-4, 0), the distance to the origin is 4 units, and here at (0, 2) the distance to the origin is 2 units, so we could safely say points out here are further away. So we're anticipating that somewhere along the curve in this region is where we should find the place that's closest to the origin. The only reason I point that out is that when you're doing these problems on your own, you should always try to anticipate roughly where things should happen, in what kind of region, so that if you do something wrong and get x = 100, and then come back and look at the curve, you realize right away that doesn't make any sense. We should always be thinking, as we're solving the problem: does my answer make sense? So I'm actually going to give you a little bit of time to work on this yourself, and then I'll come back and work on it as well.
Well, welcome back. Hopefully you were able to get pretty far into this problem, so I will start working on it now. Again, we want to optimize, in this case minimize, the distance to the origin from this curve. We have a constraint, which is that we have to be on the curve, and we have something we're trying to minimize, which is the distance. So we have to make sure that we understand the two equations we need: the constraint equation and the optimizing equation. To optimize, we need to know how to measure distance in two-dimensional space, and one point I want to make is that if you want to optimize distance, you might as well optimize the square of the distance, because it's much easier. Let me justify that briefly and then we'll go on. Remember first that, in general, the distance between two points (x, y) and (a, b) satisfies d² = (x − a)² + (y − b)²; this should remind you of the Pythagorean theorem, ultimately. In our case, the distance to the origin satisfies d² = x² + y². Why can we optimize distance squared instead of distance? Remember that when you optimize, what you're looking for is a place where the derivative of the function of interest is equal to zero. Notice that (d²)′ = 2d·d′; that's implicit differentiation with respect to x, using the chain rule. So if I want d′ = 0, I can instead find where (d²)′ = 0. Notice the curve never passes through the origin, so the distance d is never zero and I don't have to worry about that factor. That's a small sidebar to justify why we can do this. Now let's come back to the problem at hand. We want to minimize d² = x² + y² subject to the constraint, and the constraint is that y depends on x: y = √(x + 4). When I solve these problems, I have to substitute in my constraint, so y² is
(√(x + 4))², which is just x + 4. So now I have my optimization equation: d² = x² + x + 4. How do I find a minimum or a maximum? I take the derivative and set it equal to zero, so let me give myself a little more room and do that over here. For (d²)′, the derivative of x² is 2x, the derivative of x is 1, and the derivative of 4 is 0, so (d²)′ = 2x + 1. This will be optimized when 2x + 1 = 0, so x = −1/2. Does this pass, as we might say, the smell test? The answer will be yes, because remember we said somewhere in this x region is where we expect the point closest to the origin, and that's right where this x value sits. Now we have to find the y value to finish the problem, but so far this is not very surprising; it seems like the right thing. We know y = √(x + 4), so y = √(−1/2 + 4), which simplifies to the square root of three and a half, that is, √(7/2). So the point is (−1/2, √(7/2)). Then you just want to double-check: did I ask for the distance, or did I ask for the point? We were asked for the point on the curve that comes closest to the origin, and that's what we have, so we've answered the correct question. So again, it was an optimization problem where we wanted to minimize distance: we had a constraint equation and the thing we wanted to minimize, we took the derivative of the optimizing equation, set it equal to zero, solved for x, and then found the answer to the specific question by finding the y value. And I think I'll stop there. |
189505 | https://www.ldoceonline.com/Fish-topic/pike | pike | Definition from the Fish topic | Fish
pike in Fish topic
From Longman Dictionary of Contemporary English pike /paɪk/ noun [countable] 1 (plural pike) a large fish that eats other fish and lives in rivers and lakes 2 a long-handled weapon with a sharp blade, used in the past 3 → come down the pike 4 American English spoken a turnpike. Examples from the Corpus: • The personal best pike measured a massive 45 inches with a girth of 24 inches and was returned alive. • Imagine for a moment what easy pickings a huge shoal of small bream are to a pack of marauding pike. • A 21-pound pike was caught there recently. • Our image as a bunch of bumpkins who roll over for anything that comes down the pike? • Authorities said they were sending biologists to Delliker Pond to determine whether pike were present there.
|
189506 | https://www.khanacademy.org/math/cc-sixth-grade-math/x0267d782:cc-6th-rates-and-percentages/x0267d782:visualize-percents/v/finding-the-whole-with-a-tape-diagram | Use of cookies
|
189507 | https://www.britannica.com/science/population-ecology/Calculating-population-growth | SUBSCRIBE
population ecology
Calculating population growth
Written by John N. Thompson and Eric Post. Fact-checked by The Editors of Encyclopaedia Britannica.
Life tables also are used to study population growth. The average number of offspring left by a female at each age together with the proportion of individuals surviving to each age can be used to evaluate the rate at which the size of the population changes over time. These rates are used by demographers and population ecologists to estimate population growth and to evaluate the effects of conservation efforts on endangered species.
The average number of offspring that a female produces during her lifetime is called the net reproductive rate (R0). If all females survived to the oldest possible age for that population, the net reproductive rate would simply be the sum of the average number of offspring produced by females at each age. In real populations, however, some females die at every age. The net reproductive rate for a set cohort is obtained by multiplying the proportion of females surviving to each age (lx) by the average number of offspring produced at each age (mx) and then adding the products from all the age groups: R0 = Σlxmx. A net reproductive rate of 1.0 indicates that a population is neither increasing nor decreasing but replacing its numbers exactly. This rate indicates population stability. Any number below 1.0 indicates a decrease in population, while any number above indicates an increase. For example, the net reproductive rate for the Galapagos cactus finch (Geospiza scandens) is 2.101, which means that the population can more than double its size each generation.
Life table for one Darwin finch, the Galapagos cactus finch (Geospiza scandens)
| age class (x) | probability of surviving to age x (lx) | average number of fledgling daughters (mx) | product of survival and reproduction (Σlxmx) |
| The values are for the cohort of females born in 1975. |
| Designated in years. |
| Source: Adapted from Peter R. Grant and B. Rosemary Grant, "Demography and the Genetically Effective Sizes of Two Populations of Darwin's Finches," Ecology, 73(3), 1992, copyright © 1992 The Ecological Society of America, used by permission. |
| 0 | 1.0 | 0.0 | 0.0 |
| 1 | 0.512 | 0.364 | 0.186 |
| 2 | 0.279 | 0.187 | 0.052 |
| 3 | 0.279 | 1.438 | 0.401 |
| 4 | 0.209 | 0.833 | 0.174 |
| 5 | 0.209 | 0.500 | 0.104 |
| 6 | 0.209 | 0.833 | 0.174 |
| 7 | 0.209 | 0.250 | 0.052 |
| 8 | 0.209 | 3.333 | 0.696 |
| 9 | 0.139 | 0.125 | 0.017 |
| 10 | 0.070 | 0.0 | 0.0 |
| 11 | 0.070 | 0.0 | 0.0 |
| 12 | 0.070 | 3.500 | 0.245 |
| 13 | 0 | — | — |
| | R0 = 2.101 |
| Net reproductive rate = R0 = Σlxmx = 2.101 |
| Mean generation time = T = (Σxlxmx)/(R0) = 6.08 years |
| Intrinsic rate of natural increase of the population = r = approximately ln R0 / T = ln 2.101 / 6.08 = approximately 0.12 |
The other value needed to calculate the rate at which the population can grow is the mean generation time (T). Generation time is the average interval between the birth of an individual and the birth of its offspring. To determine the mean generation time of a population, the age of the individuals (x) is multiplied by the proportion of females surviving to that age (lx) and the average number of offspring left by females at that age (mx). This calculation is performed for each age group, and the values are added together and divided by the net reproductive rate (R0) to yield the result: T = (Σxlxmx)/R0.
For example, the mean generation time of the Galapagos cactus finch is 6.08 years.
Another value is used by population biologists to calculate the rate of increase in populations that reproduce within discrete time intervals and possess generations that do not overlap. This is known as the intrinsic rate of natural increase (r), or the Malthusian parameter. Very simply, this rate can be understood as the number of births minus the number of deaths per generation time—in other words, the reproduction rate less the death rate. To derive this value using a life table, the natural logarithm of the net reproductive rate is divided by the mean generation time: r ≈ (ln R0)/T.
Values above zero indicate that the population is increasing; the higher the value, the faster the growth rate. The intrinsic rate of natural increase can be used to compare growth rates of populations of a species that have different generation times. Some human populations have higher intrinsic rates of natural increase partially because individuals in those groups begin reproducing earlier than those in other groups. Mice have higher intrinsic rates of natural increase than elephants because they reproduce at a much earlier age and have a much shorter mean generation time.
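The three life-table quantities can be computed directly from the lx and mx columns of the finch table above. Here is a sketch in Python; the variable names are my own, the values are transcribed from the table, and small rounding differences from the published figures are expected since the table rounds its products.

```python
from math import log

# Life table for the Galapagos cactus finch, transcribed from the table above:
# (age x, probability of surviving to age x: lx, mean fledgling daughters: mx)
life_table = [
    (0, 1.0, 0.0), (1, 0.512, 0.364), (2, 0.279, 0.187), (3, 0.279, 1.438),
    (4, 0.209, 0.833), (5, 0.209, 0.500), (6, 0.209, 0.833), (7, 0.209, 0.250),
    (8, 0.209, 3.333), (9, 0.139, 0.125), (10, 0.070, 0.0), (11, 0.070, 0.0),
    (12, 0.070, 3.500),
]

# Net reproductive rate: R0 = sum of lx * mx over all age classes
R0 = sum(lx * mx for x, lx, mx in life_table)

# Mean generation time: T = sum of x * lx * mx, divided by R0
T = sum(x * lx * mx for x, lx, mx in life_table) / R0

# Intrinsic rate of natural increase: r is approximately ln(R0) / T
# (note: R0 / T without the logarithm would give 0.346, which is not r)
r = log(R0) / T

print(f"R0 = {R0:.3f}")        # close to 2.101
print(f"T  = {T:.2f} years")   # close to 6.08
print(f"r  = {r:.3f} per year")
```

Because r > 0, the computation confirms that this finch population was increasing over the study period.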
If a population has an intrinsic rate of natural increase of zero, then it is said to have a stable age distribution and neither grows nor declines in numbers. A growing population has more individuals in the lower age classes than does a stable population, and a declining population has more individuals in the older age classes than does a stable population (see population: Population composition). Many human populations are currently undergoing population increase, far exceeding a stable age distribution. Although the global human population has increased almost continuously throughout history, it has skyrocketed since the Industrial Revolution, primarily because of a drop in death rates. No other species has shown such sustained growth.
Intrinsic rate of increase (r) calculated for populations of species that differ greatly in their potential for the rate of population growth
| species | intrinsic rate of increase (r) |
| Values above zero indicate that the population is increasing. The higher the value of r, the faster the intrinsic growth rate of the population. |
| Source: Adapted from Robert E. Ricklefs, The Economy of Nature, 3rd edition, copyright © 1993 by W.H. Freeman & Company, used with permission. |
| elephant seal | 0.091 |
| ring-necked pheasant | 1.02 |
| field vole | 3.18 |
| flour beetle | 23 |
| water flea | 69 |
Regulation of populations
Limits to population growth
Exponential and geometric population growth
In an ideal environment, one that has no limiting factors, populations grow at a geometric rate or an exponential rate. Human populations, in which individuals live and reproduce for many years and in which reproduction is distributed throughout the year, grow exponentially. Exponential population growth can be determined by dividing the change in population size (ΔN) by the time interval (Δt) for a certain population size (N): ΔN/Δt = rN.
The growth curve of these populations is smooth and becomes increasingly steeper over time. The steepness of the curve depends on the intrinsic rate of natural increase for the population. Human population growth has been exponential since the beginning of the 20th century. Much concern exists about the impact this growth will have, not only on the environment but on humans as well. The World Bank projection for human population growth predicts that the human population will grow from 6.8 billion in 2010 to nearly 10 billion in 2050. That estimate could be offset by four population-control measures: (1) lower the rate of unwanted births, (2) lower the desired family size, (3) raise the average age at which women begin to bear children, and (4) reduce the number of births below the level that would replace current human populations (e.g., one child per woman).
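The smooth, increasingly steep curve described above can be generated from the continuous solution of ΔN/Δt = rN, namely N(t) = N(0)·e^(rt). A minimal sketch, with an illustrative population size and rate of my own choosing:

```python
from math import exp, log

def exponential_growth(n0, r, t):
    """Population size at time t under dN/dt = rN: N(t) = n0 * e**(r*t)."""
    return n0 * exp(r * t)

# An illustrative population of 1,000 with intrinsic rate r = 0.1 per year;
# it doubles roughly every ln(2)/r, which is about 6.9 years here.
for t in (0, 10, 20, 30):
    print(t, round(exponential_growth(1000, 0.1, t)))
```

The steepness of the printed sequence grows with t, which is exactly the "increasingly steeper over time" behavior the passage describes; a larger r steepens the curve further.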
Insects and plants that live for a single year and reproduce once before dying are examples of organisms whose growth is geometric. In these species a population grows as a series of increasingly steep steps rather than as a smooth curve. |
189508 | https://math.stackexchange.com/questions/2400459/how-to-remember-the-difference-between-stars-and-bars-and-multinomial-coeffici | Stack Exchange Network
How to remember the difference between "stars and bars" and multinomial coefficients?
Explanation: It is easy for me to remember the difference between permutation coefficients and combination coefficients verbally: one says that the former
"is the number of ways of choosing $k$ things from $n$ options without replacement where order matters"
and that the latter
"is the number of ways of choosing $k$ things from $n$ options without replacement where order doesn't matter".
Question: How can one quickly describe (verbally) the difference between multinomial coefficients and "stars and bars" (which will here henceforth be called multiset coefficients)? For which does order matter? For which does order not matter? For which is the choosing with/without replacement?
Attempt: Both involve a counting process with two considerations: (i) dividing objects of $n$ types into $i$ containers, (ii) each container with $k_i$ spots, so assigning $k = \sum_i k_i$ objects from the $n$ different types. So it seems like there might be some ambiguity, when one says "with/without replacement" or "order does/doesn't matter", is one referring to (i) or (ii)?
So it seems like there are at least four distinct possibilities:
Assign $n$ types of object into $i$ containers, where the order of the containers doesn't matter, with $k = \sum_i k_i$ spots total, where the order within each container doesn't matter.
Assign $n$ types of object into $i$ containers, where the order of the containers doesn't matter, with $k = \sum_i k_i$ spots total, where the order within each container does matter.
Assign $n$ types of object into $i$ containers, where the order of the containers does matter, with $k = \sum_i k_i$ spots total, where the order within each container doesn't matter.
Assign $n$ types of object into $i$ containers, where the order of the containers does matter, with $k = \sum_i k_i$ spots total, where the order within each container does matter.
IF this is true, then which of these options corresponds to multinomial coefficients, and which of these options corresponds to multiset coefficients?
Update: I think the difference is this: "multiset coefficients are the number of ways of distributing balls into boxes, where (i) the total number of balls is fixed, (ii) the number of balls in each container is not fixed, (iii) the balls are indistinguishable (their order doesn't matter, just the number in each container), (iv) the boxes are distinguishable (i.e. their order does matter)". (For (iv), e.g. $x^2 y \not= x y^2$.) While "multinomial coefficients are the number of ways of distributing balls into boxes, where (i) the total number of balls is fixed because (ii) the number of balls in each box is fixed, (iii) the balls are indistinguishable (hence the relationship to binomial coefficients), and (iv) the boxes are distinguishable (but it doesn't matter in the case of binomials because the solution to the Diophantine equation $k_1 + k_2 = k$ is already determined once $k_1$ is chosen)".
So TL;DR: in both cases, the balls are indistinguishable (their order doesn't matter), the boxes are distinguishable (their order does matter), the total number of balls is fixed, but (a) for multiset coefficients, the number of balls in each box is not fixed in advance, while (b) for multinomial coefficients the number of balls in each box is fixed in advance.
Thus the difference is whether one fixes in advance how many balls are to be placed into each box.
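That TL;DR can be checked by brute force. The sketch below (Python, with illustrative helper names of my own choosing) counts both quantities directly: the multiset coefficient sums over all ways the per-box counts can come out, while the multinomial coefficient takes the per-box counts as given.

```python
from itertools import product
from math import comb, factorial

def multiset_count(n, k):
    # k indistinguishable balls into n distinguishable boxes, with the
    # number of balls per box NOT fixed in advance: count the solutions
    # of k_1 + ... + k_n = k with each k_i >= 0.
    return sum(1 for ks in product(range(k + 1), repeat=n) if sum(ks) == k)

def multinomial(ks):
    # Distinguishable arrangements when the per-box counts ARE fixed:
    # n! / (k_1! ... k_m!) with n = k_1 + ... + k_m.
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

assert multiset_count(3, 4) == comb(3 + 4 - 1, 4) == 15  # counts free to vary
assert multinomial([2, 1, 1]) == 12                      # counts fixed
```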
combinatorics
soft-question
definition
multinomial-coefficients
multisets
asked Aug 20, 2017 at 17:57 by Chill2Macht (edited Aug 21, 2017 at 4:11)
Multinomial coefficients are (almost, but not quite) option 3. Multiset coefficients are none of the above. – David K, Commented Aug 21, 2017 at 3:20
3 Answers
A multinomial coefficient is often written in the form $$ \binom{n}{k_1,k_2,\ldots,k_m} $$ where $k_1+k_2+\cdots+k_m = n.$ Because of that last equation, the $n$ is redundant. Binomial coefficients are technically multinomial coefficients as well, but instead of writing $\binom{n}{k_1,k_2}$ where $k_1+k_2 = n,$ we usually write either $\binom{n}{k_1}$ or $\binom{n}{k_2}.$ These all mean exactly the same thing: $$ \binom{k_1+k_2}{k_1,k_2}=\binom{k_1+k_2}{k_1} =\binom{k_1+k_2}{k_2} = \frac{(k_1+k_2)!}{k_1!\,k_2!}. $$
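A quick computational restatement of those identities (a sketch; `multinomial` is a hypothetical helper name, not a standard library function):

```python
from math import comb, factorial

def multinomial(*ks):
    """n! / (k_1! k_2! ... k_m!), where n = k_1 + ... + k_m (so n is redundant)."""
    result = factorial(sum(ks))
    for k in ks:
        result //= factorial(k)
    return result

# Binomial coefficients are two-part multinomials:
k1, k2 = 3, 5
assert multinomial(k1, k2) == comb(k1 + k2, k1) == comb(k1 + k2, k2) == 56
```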
A multinomial coefficient can arise in a counting problem as follows:
You have $m$ containers, with sizes $k_1, k_2, \ldots, k_m$ respectively.
You have $n = k_1+k_2+\cdots+k_m$ distinguishable objects, which are just enough to fill every container when drawing from the $n$ objects without replacement.
The order of the containers matters.
The order of the objects within each container does not matter.
A multiset coefficient can arise as follows:
You have $n$ distinguishable objects.
You have a single container able to hold $k$ objects.
You draw objects from the $n$ objects with replacement, and put the copies of those objects into the container.
You continue putting copies of objects in the container until it is full.
The order of objects in the container does not matter.
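This first model can be enumerated directly with `itertools.combinations_with_replacement` (a minimal check, using $n = 4$ objects and a container of size $k = 3$ as arbitrary example values):

```python
from itertools import combinations_with_replacement
from math import comb

n, k = 4, 3  # 4 distinguishable objects, one container with 3 spots
fills = list(combinations_with_replacement(range(n), k))
# Each fill is a sorted k-tuple: order within the container does not matter.
assert len(fills) == comb(n + k - 1, k) == 20
```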
A multiset coefficient can alternatively arise this way:
You have $k$ objects that are indistinguishable or that you choose not to distinguish.
You have $n$ containers, each with effectively infinite capacity (they never get full).
You draw the objects without replacement and put them in the containers.
The order of the containers matters.
The order of objects in a container cannot matter, because by assumption you cannot tell which object is which, only how many are in each container.
These are two quite different ways of modeling a multiset coefficient: in one model, the number of objects is the first number of the coefficient, while in the other model the number of objects is the second number. But there is a simple counting argument that shows both ways model the same total count: each time you choose one of the $n$ objects in the first model to occupy one of the $k$ unordered (hence indistinguishable) places in the container, you choose one of the $n$ containers in the second model to hold one of the $k$ indistinguishable objects. Just as the first model allows you to choose the same object as often as you want (until all spaces in the container are full), the second model allows you to put objects in the same container as many times as you like (until all objects have been placed).
While the term "multiset" is inspired by the first model, the second model is a frequent application of the "stars and bars" method. It's useful to know how to apply the multiset coefficient to either model as needed.
answered Aug 21, 2017 at 3:52 by David K
Very thorough and very clear -- I greatly appreciate your help and this contribution. – Chill2Macht, Commented Aug 21, 2017 at 4:17
Here is a simple way to state it. Below that, I give a simple real-world scenario that can be turned into either type of problem. Most of the bad explanations online give examples that aren't even remotely similar (e.g. the ubiquitous "anagram" example for multinomials), which probably causes this sort of confusion.
Multisets
The multiset coefficient is what is commonly called, at lower levels, a combination with replacement. Multisets solve allocation questions. If we are making $k$ choices from a set of $n$ possible things to choose, the multiset counts how many ways there are to do that.
A concrete example is this: if I am at a barbecue joint and order a protein-packed dinner with $k = 4$ servings of meat, which I can choose from $n=3$ options (say, beef, chicken, and pork), the number of ways I can allocate my choices is given by $\binom{n+k-1}{k} = \binom{n+k-1}{n-1} = \binom{6}{2} = 15$.
Here is why: imagine that I order by standing in front of each type of meat in the case and saying "a scoop, please" and then moving to the right, to the next meat, when I'm done. So, I'm taking two actions: saying "yes" and moving right. If I start at meat $1$, I only need to move right $n-1$ times to get to the end, and this means that there are $n + k - 1$ actions to take, of which I can choose $n-1$ to be rightward steps or $k$ to be "yes"es. This type of explanation is sometimes called "stars and bars" after an older explanation in Fuller (1966), where the two classes of object were $\ast$s and $|$s.
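The barbecue order can be enumerated outright, as a check on the $\binom{6}{2} = 15$ count above:

```python
from itertools import combinations_with_replacement
from math import comb

meats = ["beef", "chicken", "pork"]   # n = 3 options
k = 4                                 # 4 servings of meat
orders = list(combinations_with_replacement(meats, k))
assert len(orders) == comb(3 + 4 - 1, 4) == comb(6, 2) == 15
```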
Multinomials
Multinomials are similar only in a very general way, that they involve the distribution of a set of choices into different classes. However, in the multinomial scenario, the distribution is fixed, or assumed, whereas the whole purpose of the multiset is to let it vary and count all such variations. Both problems are often described with "ball and urn" scenarios, whereas simple permutations and combinations without replacement often use more palatable examples involving people, maybe causing the two to be confused.
The multinomial coefficient instead counts the number of ways to take $n$ distinguishable objects and distribute them in fixed proportions into classes $k_1, k_2, ... k_m$. In somewhat sloppy fashion, these $k$s are also typically used to denote the size of each class which is fixed here.
To use a slightly stilted modification of our previous example (since these coefficients count different things, it is hard to find a scenario where both answer equally natural questions), suppose that I settle on two scoops of beef, one of chicken, and one of pork. The distribution is fixed. However, now the clerk has a choice of where to put the food in the takeout tray, and perhaps we want to count all the unique ways the scoops could be arranged. The number of ways they can do that is $\frac{n!}{k_1!k_2!k_3!} = \frac{4!}{2!} = 12$ in this case (ignoring the two $1!$s). The clerk can put the scoops in any order they like, but we divide by $k_i!$ for each $i$ because it doesn't matter to the naked eye if the two scoops of beef are swapped.
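The clerk's $4!/2! = 12$ arrangements can likewise be enumerated: generate every ordering of the scoops and let a `set` quotient out the swaps of the two identical beef scoops.

```python
from itertools import permutations
from math import factorial

scoops = ["beef", "beef", "chicken", "pork"]  # fixed counts: 2, 1, 1
distinct_trays = set(permutations(scoops))    # identical beef scoops collapse
assert len(distinct_trays) == factorial(4) // factorial(2) == 12
```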
You can consider the multinomial to give you the number of unique permutations of a set of items distributed into types, where the number of each type is fixed and we can't distinguish items that are the same type. The binomial can also be considered this way; if I am deciding how many ways I can pick two guys on my football team of $11$ to be co-captains, this is the same as lining everyone up and then giving people slips of paper that say "captain" or "non-captain", except I don't care about the ordering within the slips.
answered Feb 24, 2024 at 4:32 by gjmb (edited Feb 24, 2024 at 4:43)
This is helpful, thank you! The cafeteria example helps with understanding it. I still find it difficult to summarize both succinctly and clearly at the same time (but maybe a tradeoff between the two is necessary); still, this explanation makes it easier to remember the difference. – Chill2Macht, Commented Apr 13, 2024 at 13:13
Thanks, @Chill2Macht. The examples are designed to be very exact. If you had to say it concisely, I might suggest the following. The multinomial coefficient counts the number of ways to permute $n$ items that have been placed into groups of size $k_1, k_2, ... k_m$, where each group's size is fixed (so, unlike a true permutation, some of the rearrangements of individuals shouldn't be counted). The multiset coefficient counts the number of ways to split $n$ items up into $k$ groups, but the sizes of each group are left to vary. – gjmb, Commented Apr 15, 2024 at 16:32
Yes, I like that explanation a lot; it highlights a key difference, namely that one allows the sizes of the groups to vary, but the other doesn't. It's also interesting because (as I think you said in the answer) that interpretation of multinomials even works in the special case of binomials, even though those are usually described as being "permutation-free" -- a better explanation is that you're just "quotienting out" within-group permutations, i.e. some rearrangements shouldn't be counted. That's also true for multisets, so the fixed group size seems to be the more key difference. – Chill2Macht, Commented Apr 21, 2024 at 0:03
Yes, great point. It is IMHO bad that we teach combinations as "order doesn't matter". Combinations also count permutations of classes of things (of fixed size). IMHO, "order mattering" at all is a somewhat useful device, but it is good to see that even a pure "permutation with repetition" can be cast as a multinomial: you just find all the ways to choose $0, 1, \ldots, n$ ordered trials to be successes (another view on the famous identity between the sum of a row of Pascal's triangle and $2^n$). – gjmb, Commented Jun 5, 2024 at 2:49
I am putting up a blogpost about this shortly, but I think that this is one advantage that the "in/distinct balls into in/distinct bins" metaphor has. It also avoids the (misleading, too specific to one common example about the ice-cream parlor... it only works if the objects and bins are changed, usually without telling the reader!) language of with/without repetition, since you can repeat bins generally in that metaphor. It's about whether balls have labels and whether the size of bins is predetermined. And as a bonus, if bins are unlabeled, we can introduce partitions. – gjmb, Commented Jun 5, 2024 at 2:53
Motivation: Describing the number of monomials of degree $d$ in the variables $x_0, \dots, x_n$ requires the use of a multiset coefficient, not a multinomial coefficient. When I forget why, I have difficulty explaining to myself why the multiset coefficient works and why the multinomial one doesn't.
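As a sanity check on that motivation, one can enumerate the monomials directly (a brute-force Python sketch, assuming for concreteness degree $d = 3$ in the variables $x_0, x_1, x_2$): a monomial of degree $d$ is just a degree-$d$ multiset of the $n+1$ variables, so the count is the multiset coefficient $\binom{n+d}{d}$.

```python
from itertools import combinations_with_replacement
from math import comb

n, d = 2, 3  # variables x_0, ..., x_n; monomials of degree d
monomials = list(combinations_with_replacement(range(n + 1), d))
# e.g. the tuple (0, 0, 1) stands for the monomial x_0^2 * x_1
assert len(monomials) == comb(n + d, d) == 10
```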
Background: According to Wikipedia, they are distinct, but I am having difficulty focusing in on the key aspects which differentiate them, as well as the key aspects which make them similar.
Unlike for binomial coefficients, there is no "multiset theorem" in which multiset coefficients would occur, and they should not be confused with the unrelated multinomial coefficients that occur in the multinomial theorem. (Source)
Multiset and multinomial coefficients are distinct, but even if they are not related to each other, they do appear to both be related to counting problems involving multisets.
$n$ multichoose $k$ is given by the simple formula
$$\left(\!\!\binom{n}{k}\!\!\right)=\binom{n+k-1}{k}=(n-1,k)!,$$ where $(n-1,k)!$ is a multinomial coefficient. (Source)
The multinomial coefficients $$(n_1,n_2,\ldots,n_k)!=\frac{(n_1+n_2+\cdots+n_k)!}{n_1!\,n_2!\cdots n_k!}$$ are the terms in the multinomial series expansion. In other words, the number of distinct permutations in a multiset of $k$ distinct elements of multiplicity $n_i$ $(1 \le i \le k)$ is $(n_1,\ldots,n_k)!$ (Skiena 1990, p. 12). (Source)
A multinomial coefficient is associated with each (finite) multiset taken from the set of natural numbers. Such a multiset is given by a list $k_1, \ldots, k_n,$ where numbers may be repeated, and where order does not matter... The total number of such multi-indices is $\left(\!\binom{n}{k}\!\right)$. The multi-index $N$ determines a multiset $k_1, \ldots, k_n$ taken from the natural numbers. This multiset consists of the values of the function $N$. It has less information in it than the multi-index $N$, since it has lost the information about which elements of $B$ index the numbers. (Source)
The number of multiset combinations equals the multinomial coefficient (n multichoose k). (Source)
My understanding currently is that the multinomial coefficient is the number of ways of assigning $n$ objects to $i$ containers with $k_i$ spots each and $k = \sum_i k_i$ spots total, where order matters.
This explanation makes sense to me inasmuch as it explains why the sum of all multinomial coefficients is $i^k = i^n$ when $k=n$. However, it doesn't make sense to me inasmuch as multinomial coefficients are supposed to be related to binomial coefficients, which are supposed to give the number of ways of choosing elements when order doesn't matter.
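The sum property mentioned here can be checked numerically: summing the multinomial coefficients over all ways of splitting $n$ objects among $i$ containers gives $i^n$ (the multinomial theorem with every variable set to 1). A sketch with $i = 3$, $n = 4$:

```python
from itertools import product
from math import factorial

def multinomial(ks):
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

i, n = 3, 4  # i containers, n objects
total = sum(multinomial(ks)
            for ks in product(range(n + 1), repeat=i) if sum(ks) == n)
assert total == i ** n == 81
```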
But if order doesn't matter for multinomial coefficients, it doesn't matter for multiset coefficients either, which doesn't leave a clear way to distinguish them:
A k-combination with repetitions, or k-multicombination, or multisubset of size k from a set S is given by a sequence of k not necessarily distinct elements of S, where order is not taken into account. (Source)
answered Aug 20, 2017 at 22:47 by Chill2Macht (community wiki)
While it is interesting that "multichoose" comes out to a binomial coefficient (which is also therefore a multinomial coefficient), it's not exactly a direct correspondence: $\binom{n+k-1}{k},$ not $\binom nk.$ It's just one of those interesting mathematical marvels where something designed to do one thing can, with a little tweaking, be made to give the answer to a very different problem. – David K, Commented Aug 21, 2017 at 3:10
By the way, it's kind of weird to write an "answer" that's really just a continuation of the question. – David K, Commented Aug 21, 2017 at 3:12
@DavidK I agree, but (1) it's CW, so it doesn't matter and (2) the question was too long with this in the body, so no one was reading it. My priority is to get the question answered, not to worry about protocol. – Chill2Macht, Commented Aug 21, 2017 at 4:03
189509 | https://www.khanacademy.org/math/calculus-1/cs1-applications-of-integrals/cs1-area-horizontal-area-between-curves/v/area-between-two-functions-of-y | Use of cookies
189510 | https://pmc.ncbi.nlm.nih.gov/articles/PMC10543094/ | Severe Untreated Scoliosis and Early Onset Breast Cancer in a Patient with Neurofibromatosis Associated with a Nonsense Variant of NF1 Gene - PMC
Orthop Res Rev. 2023 Sep 26;15:183–189. doi: 10.2147/ORR.S415978
Severe Untreated Scoliosis and Early Onset Breast Cancer in a Patient with Neurofibromatosis Associated with a Nonsense Variant of NF1 Gene
Vivian Reinhold,1,✉ Antti Saarinen,2 Eetu Suominen,2 Stina Syrjänen,3,4 Minna Kankuri-Tammilehto1,5
1 Institute of Biomedicine, University of Turku, Turku, Finland
2 Department of Paediatric Orthopaedic Surgery, University of Turku and Turku University, Turku, Finland
3 Department of Oral Pathology and Radiology, Institute of Dentistry, Faculty of Medicine, University of Turku, Turku, Finland
4 Department of Pathology, University of Turku, Turku University Hospital, Turku, Finland
5 Department of Clinical Genetics, Turku University Hospital and University of Turku, Turku, Finland
✉ Correspondence: Vivian Reinhold, Institute of Biomedicine, Department of Clinical Genetics, Turku University Hospital and University of Turku, Kiinamyllynkatu 10, Turku, 20520, Finland, Tel +358503453948, Email vivvis@utu.fi
Received 2023 Apr 5; Accepted 2023 Jun 30; Collection date 2023.
© 2023 Reinhold et al.
PMCID: PMC10543094 PMID: 37791039
Abstract
Background
Neurofibromatosis 1 (NF1) is a relatively common genetic disorder linked to skeletal abnormalities and elevated risk of cancer. Early onset scoliosis is common in patients with NF1 although severe scoliosis is rare. Scoliosis complicates the normal development and growth and may lead to thoracic insufficiency syndrome. The increased risk for breast cancer in young NF1 female patients has been recently identified.
Case Presentation
We describe an NF1 patient whose dystrophic scoliosis symptoms emerged in childhood. At 37 years of age, the major scoliosis curve in the thoracolumbar region was 80 degrees. The patient was diagnosed with breast cancer at the age of 37 years; histologically, the breast cancer was ductal, hormone receptor positive, and HER2-positive.
Results
A novel pathogenic variant in NF1, p.(Trp2348*), was identified by a next-generation sequencing method. The patient did not have pathogenic variants in BRCA genes or in other currently known hereditary breast cancer genes.
Conclusion
Here, we describe a novel pathogenic variant in NF1, p.(Trp2348*), which may cause severe dystrophic scoliosis as well as HER2-positive breast cancer. Untreated dystrophic scoliosis in patients with NF1 may result in significant spinal deformity and deteriorate quality of life and physical function. Genetic counseling is recommended in all patients with NF1, who need routine follow-up throughout life; multidisciplinary consultation is warranted in patients with neurofibromatosis 1.
Keywords: spinal deformity, neurofibromatosis, breast cancer
Introduction
Type 1 neurofibromatosis (NF1) is an autosomal dominant disorder caused by a pathogenic variant in the NF1 gene on chromosome 17q11.2.1 The NF1 gene works as a tumor suppressor gene by producing the neurofibromin protein, which controls cellular growth and differentiation. Abnormal function caused by a pathogenic variant of the NF1 gene leads to abnormal cell proliferation and thus to multisystemic manifestations, including abnormalities of the nervous tissue, soft tissue, skin, and bone. Café au lait spots and plexiform neurofibromas are common clinical manifestations (Box 1). NF1 is one of the most common genetic disorders, with a prevalence of approximately 1:2500–3000.1
Box 1.
Revised NF1 diagnostic criteria proposed by the International Consensus Group on Neurofibromatosis Diagnostic Criteria, published in August 2021 (Legius, 2021). The proposed criteria aim to incorporate new clinical features and genetic testing, and to better separate NF1 from other diseases with spot symptoms.
A clinical diagnosis of NF1 is based on agreed criteria. The diagnosis can be considered certain if the patient has at least two of the following:
At least 6 café-au-lait spots (flat coffee-colored skin lesions) with a size exceeding 5 mm before puberty or exceeding 15 mm after puberty
Freckles in axillary or inguinal regions (at least one of the two pigmentary findings, café-au-lait macules or freckling, should be bilateral)
Neurofibromas: at least 2 regular or 1 plexiform
Optic glioma
Lisch nodules of the iris (at least two), or at least two choroidal abnormalities (CAs), defined as bright, patchy nodules imaged by optical coherence tomography (OCT)/near-infrared reflectance (NIR) imaging
A skeletal development disorder such as sphenoid wing dysplasia or thinning of long bone cortex, with or without pseudarthrosis
A parent has been diagnosed with NF1 (revised from the earlier criterion of any first-degree relative: parent, child, or sibling)
Genetic testing has shown a pathogenic NF1 variant
NF1 patients may have skeletal manifestations such as short stature, osteopenia or osteoporosis, tibial bowing, congenital pseudoarthrosis (false joint), and scoliosis.1 Spinal deformities are common in patients with NF1.2,3 Often NF1 patients have mild scoliosis. However, some NF1 patients have dystrophic scoliosis, a more severe type of scoliosis. Dystrophic scoliosis is a rapidly progressing early onset scoliosis with characteristic spinal changes.2 Dystrophic scoliosis often presents a striking curvature (short and angular) in the thoracolumbar spine and is often associated with extensive kyphosis.
Spinal tumors are common in dystrophic scoliosis.3,4 Severe dystrophic scoliosis may lead to neurologic injury, and it does not typically respond to conservative treatment. Surgical treatment is warranted, with growth-friendly instrumentation in infantile patients and spinal fusion in adolescent patients. Untreated scoliosis may lead to decreased quality of life and pulmonary compromise. Surgical treatment in patients with NF1 is complicated by poor bone quality, sharp angular curvatures, vertebral rotation, and dural ectasia.4 A male-to-female ratio of 4:1 has been reported for surgically treated dystrophic scoliosis in pediatric NF1 patients.4
Case Presentation
A 38-year-old woman originally from the Middle East was referred to the Department of Clinical Genetics after moving to Finland at the age of 37 years. She had a clinical diagnosis of neurofibromatosis 1. Skin freckles were observed on the upper body along with six café au lait spots, and a large plexiform neurofibroma was observed in the right-sided chest area, which was surgically removed at the age of 37 years. Neurological status was normal. The patient is 142 cm tall.
The patient was evaluated with plain radiographs, computed tomography (CT), and magnetic resonance imaging (MRI). Severe dystrophic kyphoscoliosis was present on the plain radiographs. Imaging revealed severe kyphoscoliosis with a major scoliosis curve of 80 degrees in the thoracolumbar region and a marked kyphosis. Thoracic height (T1–T12) was 185 mm. The MRI scan revealed three masses in the abdominal cavity, suspected to be neurofibromas. An anomaly of the T12 vertebra and severe lumbar vertebral rotation were observed in the CT and MRI scans. The patient had no spinal tumors and no compression of the spinal cord.
The patient had worn an orthopedic cast at the age of 3 to 4 years for early onset scoliosis. Operative treatment was suggested, but the family had refused it. Currently, the patient is able to walk for 10 minutes before muscle cramps force her to rest. She has pain in the lower extremities and experiences shortness of breath during minor physical exertion. She does not experience back pain at rest or when moving. She manages light physical activities and work, but lifting heavy objects is not possible and rotational movements are not recommended. The patient was recently evaluated by an orthopedic surgeon in our university hospital. Despite the marked deformity, surgical treatment was not recommended, as it would be unlikely to improve the symptoms; the patient also did not want surgical treatment. She is being monitored for progression of the deformity, and physiotherapy was organized for her. The spinal deformity has not progressed during the 6-year follow-up in Finland (Figures 1–3).
Figure 1.
3-D reconstruction of the patient. Sternum was removed from the reconstruction for illustrative purposes. Severe dystrophic scoliosis is seen.
Figure 2.
3-D reconstruction of the patient. Severe kyphosis is seen.
Figure 3.
MRI of the patient’s thorax. Paraspinal neurofibromas are seen.
At the age of 37, the patient was diagnosed with ductal breast carcinoma. The carcinoma was estrogen receptor, progesterone receptor, and HER2 positive by immunohistochemistry. The patient was treated with surgery (mastectomy and reconstruction) combined with radiotherapy and adjuvant therapy, and has remained disease free.
Family
The patient's mother and aunt reportedly have similar clinical signs and symptoms suggesting NF1. None of the family members had severe scoliosis, and there were no other breast cancer cases in the near family. Family members did not participate in the NF1 genetic investigations.
Result of Genetic Studies
The patient's DNA from peripheral blood was analyzed using a neurofibromatosis next-generation sequencing gene panel, which revealed a heterozygous nonsense variant in the NF1 gene, c.7044G>A in exon 47 (GenBank reference sequence NM_000267.3(NF1)). The variant is also named c.7107G>A, p.Trp2369Ter according to GenBank reference sequence NM_001042492.3(NF1). The variant was confirmed by bi-directional Sanger sequencing. It has not been observed in the large reference population cohort of the Genome Aggregation Database (gnomAD) or in the Sequencing Initiative Suomi (SISu) database, which supports the pathogenicity of the variant. This variant changes a tryptophan codon to a premature stop codon [p.(Trp2348Ter)] and is expected to result in an absent or disrupted protein product; it is predicted to cause loss of normal protein function either through protein truncation or nonsense-mediated mRNA decay. Loss-of-function variants in NF1 are known to be pathogenic. The variant described here has been previously reported twice in NF1 patients, by the laboratory of Invitae (classified as pathogenic, ClinVar database, ID: 404475) and by the laboratory of Ambry Genetics (classified as pathogenic, ClinVar database, ID: 404475). In silico MutationTaster analysis evaluated the variant as disease causing. The same variant has also recently been reported as a somatic variant in brain tumors.5 So far, there is no functional evidence in ClinVar for this NF1 variant. The American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG/AMP) classification system is an important standard for variant interpretation, and according to this system the variant is classified as likely pathogenic.6 We suggest here that the NF1 variant identified should be classified as pathogenic.
This is because the variant is a nonsense mutation, it is rare in control populations, and it has been found earlier in two individuals with an NF1 phenotype. No other pathogenic variants were found in the BRCA genes or in other currently known hereditary breast cancer genes.
Discussion
Skeletal deformities are common in NF1. However, there are few published reports of severe scoliosis in NF1.2 Here, we describe an adult NF1 patient with dystrophic early onset scoliosis, her performance status, and the pathogenic NF1 variant identified. The presently described patient had limited physical performance and shortness of breath during normal ambulation, which is likely to be caused in part by the spinal deformity. Nevertheless, despite the severe thoracolumbar deformity, she manages light physical activities and work. Surgical treatment for early onset scoliosis during adulthood is unlikely to significantly improve the pulmonary symptoms, and the patient also did not want surgical treatment.
Approximately 10% of NF1 children have early onset scoliosis.4,7 Dystrophic scoliosis often progresses, even after skeletal maturity, to a severe deformity requiring surgical treatment. Growth-friendly treatment is used to prevent the progression of the deformity while allowing further spinal growth in young patients with early onset scoliosis.4,8,9 A thoracic height of 18 cm is considered satisfactory to prevent thoracic insufficiency syndrome.10 Spinal fusion is used in adolescent and adult patients. Severe early onset scoliosis may complicate normal cardiopulmonary development and lead to decreased pulmonary function.11 Most cases of NF1-associated dystrophic scoliosis are progressive. Due to the risk of severe progression, surgical treatment in childhood is often recommended, although there is no clear consensus on the indications for surgical treatment.4 Operative treatment of dystrophic scoliosis remains a significant challenge; complications and the need for revision surgery are common in patients with dystrophic scoliosis. NF1 patients have an increased risk for osteoporosis,12 and the risk increases with age. Bisphosphonate and asfotase alfa in combination have been used to enhance bone healing after spinal surgery in a middle-aged NF1 patient.13
The patient described here had neurofibromas typical of NF1 and ductal breast carcinoma at a relatively young age, similar to what has recently been observed in NF1 patients without familial breast cancer risk.14 The NF1 gene functions as a tumor suppressor, and patients with NF1 have an increased prevalence of both benign and malignant tumors. It has recently been reported that ductal breast carcinoma in patients under 50 years is significantly more common in female NF1 patients than in the general population.15,16 The elevated risk of breast cancer should be assessed in patients with NF1.16 Our patient has been scheduled for annual mammography and breast MRI for evaluation of the contralateral breast until the age of 50 years.
Our study emphasizes that the identified NF1 variant is pathogenic because 1) the variant was found in our patient, who has a typical NF1 phenotype; 2) the same variant has previously been reported in two other patients with an NF1 phenotype; 3) the variant is exceptionally rare and not found in control populations; 4) the variant is a nonsense variant; and 5) in silico MutationTaster analysis evaluates it as disease causing. Previously, nearly 700 nonsense mutations have been reported in NF1. The genotype-phenotype correlation in NF1 is not well understood,3,17 and only a few clear genotype-phenotype correlations have been observed.18 In a recent study, skeletal abnormalities were associated with frameshift variants.19 The variant in our patient changes a tryptophan codon to a premature stop codon [p.(Trp2348Ter)] and is expected to result in an absent or disrupted protein product, either through protein truncation or nonsense-mediated mRNA decay. Our study provides a new potential finding on the genotype-phenotype correlation in NF1; further studies on this topic are warranted.
A recent study showed in mice that NF1 expression in bone marrow osteoprogenitors is required for the maintenance of the adult skeleton.20 Recent studies also suggest that a somatic NF1 second-hit mutation contributes to dystrophic scoliosis.21,22
Based on the ClinVar database, one of the two previously reported NF1 patients had a cardiovascular phenotype, and hypertrophic cardiomyopathy has been seen in some NF1 patients.23 Cardiac ultrasound was also performed on our patient because of the trastuzumab (Herceptin) treatment for her HER2-positive ductal breast carcinoma; no abnormalities were found. Currently, cardiac ultrasound screening is not routine in NF1 patients.
Conclusion
We described a pathogenic variant in NF1, p.(Trp2348Ter), in a patient with severe dystrophic early onset scoliosis and HER2-positive ductal breast carcinoma. Untreated dystrophic scoliosis in patients with NF1 may result in significant spinal deformity and deteriorate quality of life and physical function. Genetic counseling is recommended for all patients with NF1. Patients need routine follow-up throughout life to detect possible associated diseases at an early stage, and multidisciplinary consultation is warranted in the management of patients with neurofibromatosis 1.
Funding Statement
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Consent for Publication
This study is a case report; informed consent was obtained from the patient regarding the use of information obtained during clinical treatment, including consent for case details and accompanying images to be published. The patient had been treated at the hospital, and as no new samples were required, a separate ethics board permit was not needed.
Author Contributions
All authors made a significant contribution to the work reported, including the conception, study design, execution, acquisition of data, analysis, and interpretation, and participated in drafting, revising, or critically reviewing the manuscript. All gave final approval of the final version of the manuscript to be submitted, agreed on the journal for submission, and agreed to be accountable for all aspects of the work.
Disclosure
The authors report no conflicts of interest in this work.
References
1.Hirbe AC, Gutmann DH. Neurofibromatosis type 1: a multidisciplinary approach to care. Lancet Neurol. 2014;13(8):834–843. doi: 10.1016/S1474-4422(14)70063-8 [DOI] [PubMed] [Google Scholar]
2.Crawford AH, Herrera-Soto J. Scoliosis associated with neurofibromatosis. Orthop Clin North Am. 2007;38(4):553–562. doi: 10.1016/j.ocl.2007.03.008 [DOI] [PubMed] [Google Scholar]
3.Well L, Careddu A, Stark M, et al. Phenotyping spinal abnormalities in patients with Neurofibromatosis type 1 using whole-body MRI. Sci Rep. 2021;11(1):1–13. doi: 10.1038/s41598-021-96310-x [DOI] [PMC free article] [PubMed] [Google Scholar]
4.Neifert SN, Khan HA, Kurland DB, et al. Management and surgical outcomes of dystrophic scoliosis in neurofibromatosis type 1: a systematic review. Neurosurg Focus. 2022;52(5):E7. doi: 10.3171/2022.2.FOCUS21790 [DOI] [PubMed] [Google Scholar]
5.Kim H, Lim KY, Park JW, et al. Sporadic and Lynch syndrome associated mismatch repair-deficient brain tumors. Lab Investigat. 2022;102(16171):160–171. doi: 10.1038/s41374-021-00694-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
6.Richards S, Aziz N, Bale S, et al., ACMG Laboratory Quality Assurance Committee. Standards and guidelines for the interpretation of sequence variants: a joint consensus recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology. Genet Med. 2015;17(5):405–424. doi: 10.1038/gim.2015.30 [DOI] [PMC free article] [PubMed] [Google Scholar]
7.Toro G, Santoro C, Ambrosio D, et al. Natural History of Scoliosis in Children with NF1: an Observation Study. Healthcare. 2021;9(7):881. doi: 10.3390/healthcare9070881 [DOI] [PMC free article] [PubMed] [Google Scholar]
8.Haapala H, Saarinen AJ, Salonen A, Helenius I. Shilla growth guidance compared with magnetically controlled growing rods in the treatment of neuromuscular and syndromic early onset scoliosis. Spine. 2020;45(23):E1604–E1614. doi: 10.1097/BRS.0000000000003654 [DOI] [PubMed] [Google Scholar]
9.Jain V, Berry CA, Crawford AH, Emans JB, Sponseller PD. Growing rods are an effective fusionless method of controlling early-onset scoliosis associated with neurofibromatosis type 1 (NF1): a multicenter retrospective case series. J Pediatr Orthop. 2017;37(8):e612–e618. doi: 10.1097/BPO.0000000000000963 [DOI] [PubMed] [Google Scholar]
10.Karol LA, Johnston C, Mladenov K, Schochet P, Walters P, Browne RH. Pulmonary function following early thoracic fusion in non-neuromuscular scoliosis. JBJS. 2008;90(6):1272–1281. doi: 10.2106/JBJS.G.00184 [DOI] [PubMed] [Google Scholar]
11.Pehrsson K, Larsson S, Oden A, Nachemson A. Long-term follow-up of patients with untreated scoliosis. A study of mortality causes of death, and symptoms. Spine. 1992;17(9):1091–1096. doi: 10.1097/00007632-199209000-00014 [DOI] [PubMed] [Google Scholar]
12.Poyrazoğlu HG, Baş VN, Arslan A, et al. Bone mineral density and bone metabolic markers’ status in children with neurofibromatosis type 1. J Pediatr Endocrinol Metabol. 2017;30(2):175–180. doi: 10.1515/jpem-2016-0092 [DOI] [PubMed] [Google Scholar]
13.Harindhanavudhi T, Takahashi T, Petryk A, Polly DW. An Adjunctive use of Asfotase alfa and Zoledronic acid after spinal surgery in neurofibromatosis type I related dystrophic scoliosis. AACE Clin Case Rep. 2020;6(6):305–310. doi: 10.4158/ACCR-2020-0222 [DOI] [PMC free article] [PubMed] [Google Scholar]
14.Evans DGR, Kallionpää RA, Clementi M, et al. Breast cancer in neurofibromatosis 1: survival and risk of contralateral breast cancer in a five country cohort study. Genet Med. 2020;22(2):398–406. doi: 10.1038/s41436-019-0651-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
15.Walker L, Thompson D, Easton D, et al. A prospective study of neurofibromatosis type 1 cancer incidence in the UK. Br J Cancer. 2006;95(2):233–238. doi: 10.1038/sj.bjc.6603227 [DOI] [PMC free article] [PubMed] [Google Scholar]
16.Uusitalo E, Kallionpää RA, Kurki S, et al. Breast cancer in neurofibromatosis type 1: overrepresentation of unfavourable prognostic factors. Br J Cancer. 2017;116(2):211–217. doi: 10.1038/bjc.2016.403 [DOI] [PMC free article] [PubMed] [Google Scholar]
17.Li H, Zhang W, Yao Z, Guo R, Hao C, Zhang X. Genotypes and clinical intervention of patients with neurofibromatosis type 1 associated dystrophic scoliosis. Front Pediatr. 2022;10. doi: 10.3389/fped.2022.918136 [DOI] [PMC free article] [PubMed] [Google Scholar]
18.Shofty B, Mauda-Havakuk M, Weizman L, et al. The effect of chemotherapy on optic pathway gliomas and their sub-components: a volumetric MR analysis study. Pediatr Blood Cancer. 2015;62(8):1353–1359. doi: 10.1002/pbc.25480 [DOI] [PubMed] [Google Scholar]
19.Scala M, Schiavetti I, Madia F, et al. Genotype-phenotype correlations in neurofibromatosis type 1: a single-center cohort study. Cancers. 2021;13(8):1879. doi: 10.3390/cancers13081879 [DOI] [PMC free article] [PubMed] [Google Scholar]
20.Paria N, Khalid A, Shen B, et al. Molecular dissection of somatic skeletal disease in neurofibromatosis type 1. J Bone Miner Res. 2023;38(2):288–299. doi: 10.1002/jbmr.4755 [DOI] [PMC free article] [PubMed] [Google Scholar]
21.Chelleri C, Scala M, De Marco P, et al. Somatic double inactivation of NF1 associated with NF1-related pectus excavatum deformity. Hum Mutat. 2023;2023–2030. doi: 10.1155/2023/3160653 [DOI] [Google Scholar]
22.Margraf RL, VanSant-Webb C, Mao R, et al. NF1 somatic mutation in dystrophic scoliosis. J Mol Neurosci. 2019;68(1):11–18. doi: 10.1007/s12031-019-01277-0 [DOI] [PubMed] [Google Scholar]
23.Incecik F, Hergüner ÖM, Alınç Erdem S, Altunbaşak Ş. Neurofibromatosis type 1 and cardiac manifestations. Turk Kardiyol Dern Ars. 2015;43(8):714–716. doi: 10.5543/tkda.2015.27557 [DOI] [PubMed] [Google Scholar]
Articles from Orthopedic Research and Reviews are provided here courtesy of Dove Press
189511 | https://www.slideserve.com/niveditha/ap-physics | PPT - AP Physics PowerPoint Presentation, free download - ID:1442875
AP Physics
Feb 14, 2014
Presentation Transcript
AP Physics Torque and Equilibrium Practice
A uniform beam weighs 200 N and holds a 450 N object as shown in the figure below. Find the magnitudes of the forces exerted on the beam by the two supports at its ends.
Where must an 800 N object be hung on a uniform, 100 N pole so that a girl (P) at one end supports 1/3 as much as a woman (3P) at the other end?
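The pole problem above is fully determined by force and torque balance. As an illustrative sketch (my own working, not part of the slides), taking the pole's length as 1 so that positions are fractions of L:

```python
from fractions import Fraction

# Vertical force balance: P + 3P must carry the 800 N object and the
# 100 N pole, so 4P = 900 N.
P = Fraction(800 + 100, 4)

# Torque balance about the girl's end (pole length = 1, so x is the
# object's distance from the girl as a fraction of L):
#   3P * 1 = 800 * x + 100 * (1/2)
x = (3 * P - Fraction(100, 2)) / 800

print(P)   # 225
print(x)   # 25/32, i.e. the object hangs 7/32 L from the woman's end
```

So the girl supports 225 N, the woman 675 N, and the object must hang 25/32 of the pole's length from the girl (about 0.22 L from the woman).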
A uniform, 200 N board of length L has two objects hanging from it: 300 N at L/3 from one end, and 400 N at 3L/4 from the same end. What additional single force (P) acting on the board will cause the board to be in equilibrium?
Consider the situation shown below. The uniform, 600 N beam is hinged at P. Find the tension in the tie rope and the components of the force exerted by the hinge on the beam.
A uniform, 400 N boom is supported as shown in the figure below. Find the tension in the tie rope and the force exerted on the boom by the pin at P.
As shown in the figure, hinges A and B hold a uniform, 400 N door in place. The upper hinge supports the entire weight of the door. Find the forces exerted on the door at the hinges. The width of the door is h/2, where h is the distance between the hinges.
A ladder leans against a smooth wall. (By a “smooth” wall, we mean that the wall exerts only a force perpendicular to itself on the ladder; there is no friction force.) The ladder weighs 200 N and its center of gravity is 0.4 L from the base, where L is the ladder’s length. (a) How large a friction force must exist at the base of the ladder if it is not to slip? (b) What is the necessary coefficient of friction?
189512 | https://ise.ncsu.edu/wp-content/uploads/sites/9/2015/07/Pages-used-in-last-problem-1.pdf | 11.8 Inequality Constraints 341 Because by assumption x∗is a regular point and Lx∗ is positive definite on M, it follows that this matrix is nonsingular (see Exercise 11). Thus, by the Implicit Function Theorem, there is a solution xcc to the system which is in fact continuously differentiable.
By the chain rule we have cfxc c=0 = xfx∗cx0 and chxc c=0 = xhx∗cx0 In view of (31), the second of these is equal to the identity I on Em, while this, in view of (30), implies that the first can be written cfxc c=0 = −T 11.8 INEQUALITY CONSTRAINTS We consider now problems of the form minimize fx subject to hx = 0 (32) gx ⩽0 We assume that f and h are as before and that g is a p-dimensional function.
Initially, we assume $f, h, g \in C^1$.
There are a number of distinct theories concerning this problem, based on various regularity conditions or constraint qualifications, which are directed toward obtaining definitive general statements of necessary and sufficient conditions. One can by no means pretend that all such results can be obtained as minor extensions of the theory for problems having equality constraints only. To date, however, these alternative results concerning necessary conditions have been of isolated theoretical interest only—for they have not had an influence on the development of algorithms, and have not contributed to the theory of algorithms. Their use has been limited to small-scale programming problems of two or three variables. We therefore choose to emphasize the simplicity of incorporating inequalities rather than the possible complexities, not only for ease of presentation and insight, but also because it is this viewpoint that forms the basis for work beyond that of obtaining necessary conditions.
First-Order Necessary Conditions

With the following generalization of our previous definition it is possible to parallel the development of necessary conditions for equality constraints.

Definition. Let $x^*$ be a point satisfying the constraints
$$h(x^*) = 0, \qquad g(x^*) \leqslant 0,\tag{33}$$
and let $J$ be the set of indices $j$ for which $g_j(x^*) = 0$. Then $x^*$ is said to be a regular point of the constraints (33) if the gradient vectors $\nabla h_i(x^*)$, $\nabla g_j(x^*)$, $1 \leqslant i \leqslant m$, $j \in J$, are linearly independent.

We note that, following the definition of active constraints given in Section 11.1, a point $x^*$ is a regular point if the gradients of the active constraints are linearly independent. Or, equivalently, $x^*$ is regular for the constraints if it is regular in the sense of the earlier definition for equality constraints applied to the active constraints.
Karush–Kuhn–Tucker Conditions. Let $x^*$ be a relative minimum point for the problem
$$\text{minimize } f(x) \quad \text{subject to } h(x) = 0, \; g(x) \leqslant 0,\tag{34}$$
and suppose $x^*$ is a regular point for the constraints. Then there is a vector $\lambda \in E^m$ and a vector $\mu \in E^p$ with $\mu \geqslant 0$ such that
$$\nabla f(x^*) + \lambda^T \nabla h(x^*) + \mu^T \nabla g(x^*) = 0\tag{35}$$
$$\mu^T g(x^*) = 0.\tag{36}$$

Proof. We note first that, since $\mu \geqslant 0$ and $g(x^*) \leqslant 0$, (36) is equivalent to the statement that a component of $\mu$ may be nonzero only if the corresponding constraint is active. This is a complementary slackness condition, stating that $g_i(x^*) < 0$ implies $\mu_i = 0$ and $\mu_i > 0$ implies $g_i(x^*) = 0$.

Since $x^*$ is a relative minimum point over the constraint set, it is also a relative minimum over the subset of that set defined by setting the active constraints to zero. Thus, for the resulting equality constrained problem defined in a neighborhood of $x^*$, there are Lagrange multipliers. Therefore, we conclude that (35) holds with $\mu_j = 0$ if $g_j(x^*) \neq 0$ (and hence (36) also holds).

It remains to be shown that $\mu \geqslant 0$. Suppose $\mu_k < 0$ for some $k \in J$. Let $S$ and $M$ be the surface and tangent plane, respectively, defined by all other active constraints at $x^*$. By the regularity assumption, there is a $y$ such that $y \in M$ and $\nabla g_k(x^*)\, y < 0$. Let $x(t)$ be a curve on $S$ passing through $x^*$ (at $t = 0$) with $\dot{x}(0) = y$. Then for small $t \geqslant 0$, $x(t)$ is feasible, and
$$\frac{d}{dt} f(x(t))\Big|_{t=0} = \nabla f(x^*)\, y < 0$$
by (35), which contradicts the minimality of $x^*$.
Example. Consider the problem
$$\begin{aligned}\text{minimize}\quad & 2x_1^2 + 2x_1x_2 + x_2^2 - 10x_1 - 10x_2\\ \text{subject to}\quad & x_1^2 + x_2^2 \leqslant 5\\ & 3x_1 + x_2 \leqslant 6.\end{aligned}$$
The first-order necessary conditions, in addition to the constraints, are
$$4x_1 + 2x_2 - 10 + 2\mu_1 x_1 + 3\mu_2 = 0$$
$$2x_1 + 2x_2 - 10 + 2\mu_1 x_2 + \mu_2 = 0$$
$$\mu_1 \geqslant 0, \quad \mu_2 \geqslant 0$$
$$\mu_1 (x_1^2 + x_2^2 - 5) = 0$$
$$\mu_2 (3x_1 + x_2 - 6) = 0.$$
To find a solution we define various combinations of active constraints and check the signs of the resulting Lagrange multipliers. In this problem we can try setting none, one, or two constraints active. Assuming the first constraint is active and the second is inactive yields the equations
$$4x_1 + 2x_2 - 10 + 2\mu_1 x_1 = 0$$
$$2x_1 + 2x_2 - 10 + 2\mu_1 x_2 = 0$$
$$x_1^2 + x_2^2 = 5,$$
which have the solution
$$x_1 = 1, \quad x_2 = 2, \quad \mu_1 = 1.$$
This yields $3x_1 + x_2 = 5$ and hence the second constraint is satisfied. Thus, since $\mu_1 > 0$, we conclude that this solution satisfies the first-order necessary conditions.
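The candidate point can also be checked numerically. The short Python sketch below (my own, not from the text) evaluates the two stationarity residuals and the two complementary-slackness products of the example at $x = (1, 2)$ with $\mu = (1, 0)$; all four should vanish.

```python
def kkt_residuals(x1, x2, mu1, mu2):
    """First-order (KKT) conditions for the example:
    minimize 2x1^2 + 2x1x2 + x2^2 - 10x1 - 10x2
    subject to x1^2 + x2^2 <= 5 and 3x1 + x2 <= 6.
    Returns the two stationarity residuals and the two
    complementary-slackness products; all four should be zero."""
    r1 = 4*x1 + 2*x2 - 10 + 2*mu1*x1 + 3*mu2   # d/dx1 of the Lagrangian
    r2 = 2*x1 + 2*x2 - 10 + 2*mu1*x2 + mu2     # d/dx2 of the Lagrangian
    cs1 = mu1 * (x1**2 + x2**2 - 5)
    cs2 = mu2 * (3*x1 + x2 - 6)
    return r1, r2, cs1, cs2

print(kkt_residuals(1, 2, 1, 0))   # (0, 0, 0, 0)
print(3*1 + 2 - 6)                 # -1, so the second constraint is inactive
```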
Second-Order Conditions

The second-order conditions, both necessary and sufficient, for problems with inequality constraints are derived essentially by consideration only of the equality constrained problem that is implied by the active constraints. The appropriate tangent plane for these problems is the plane tangent to the active constraints.

Second-Order Necessary Conditions. Suppose the functions $f, g, h \in C^2$ and that $x^*$ is a regular point of the constraints (33). If $x^*$ is a relative minimum point for problem (32), then there is a $\lambda \in E^m$, $\mu \in E^p$, $\mu \geqslant 0$ such that (35) and (36) hold and such that
$$L(x^*) = F(x^*) + \lambda^T H(x^*) + \mu^T G(x^*)\tag{37}$$
is positive semidefinite on the tangent subspace of the active constraints at $x^*$.

Proof. If $x^*$ is a relative minimum point over the constraints (33), it is also a relative minimum point for the problem with the active constraints taken as equality constraints.

Just as in the theory of unconstrained minimization, it is possible to formulate a converse to the Second-Order Necessary Condition Theorem and thereby obtain a Second-Order Sufficiency Condition Theorem. By analogy with the unconstrained situation, one can guess that the required hypothesis is that $L(x^*)$ be positive definite on the tangent plane $M$. This is indeed sufficient in most situations. However, if there are degenerate inequality constraints (that is, active inequality constraints having zero as associated Lagrange multiplier), we must require $L(x^*)$ to be positive definite on a subspace that is larger than $M$.
Second-Order Sufficiency Conditions. Let fgh ∈C2. Sufficient conditions that a point x∗satisfying (33) be a strict relative minimum point of problem (32) is that there exist ∈Em, ∈Ep, such that ⩾0 (38) Tgx∗ = 0 (39) fx∗+Thx∗+T1gx∗ = 0 (40) and the Hessian matrix Lx∗ = Fx∗+THx∗+TGx∗ (41) is positive definite on the subspace M′ = y hx∗y = 0gjx∗y = 0 for all j ∈J where J = j gjx∗ = 0j > 0 11.8 Inequality Constraints 345 Proof.
As in the proof of the corresponding theorem for equality constraints in Section 11.5, assume that $x^*$ is not a strict relative minimum point; let $\{y_k\}$ be a sequence of feasible points converging to $x^*$ such that $f(y_k) \leqslant f(x^*)$, and write each $y_k$ in the form $y_k = x^* + \delta_k s_k$ with $|s_k| = 1$, $\delta_k > 0$. We may assume that $\delta_k \to 0$ and $s_k \to s^*$. We have $0 \geqslant \nabla f(x^*) s^*$, and for each $i = 1, \ldots, m$ we have $\nabla h_i(x^*) s^* = 0$. Also, for each active constraint $g_j$ we have $g_j(y_k) - g_j(x^*) \leqslant 0$, and hence $\nabla g_j(x^*) s^* \leqslant 0$. If $\nabla g_j(x^*) s^* = 0$ for all $j \in J$, then the proof goes through just as in Section 11.5.
If $\nabla g_j(x^*) s^* < 0$ for at least one $j \in J$, then

$0 \geqslant \nabla f(x^*) s^* = -\lambda^T \nabla h(x^*) s^* - \mu^T \nabla g(x^*) s^* > 0,$

which is a contradiction.
We note in particular that if all active inequality constraints have strictly positive corresponding Lagrange multipliers (no degenerate inequalities), then the set J includes all of the active inequalities. In this case the sufficient condition is that the Lagrangian be positive definite on M, the tangent plane of active constraints.
Sensitivity The sensitivity result for problems with inequalities is a simple restatement of the result for equalities. In this case, a nondegeneracy assumption is introduced so that the small variations produced in Lagrange multipliers when the constraints are varied will not violate the positivity requirement.
Sensitivity Theorem. Let $f, g, h \in C^2$ and consider the family of problems

minimize $f(x)$
subject to $h(x) = c$, $g(x) \leqslant d$.  (42)

Suppose that for $c = 0$, $d = 0$, there is a local solution $x^*$ that is a regular point and that, together with the associated Lagrange multipliers, $\lambda$, $\mu \geqslant 0$, satisfies the second-order sufficiency conditions for a strict local minimum.
Assume further that no active inequality constraint is degenerate. Then for every $(c, d) \in E^{m+p}$ in a region containing $(0, 0)$ there is a solution $x(c, d)$, depending continuously on $(c, d)$, such that $x(0, 0) = x^*$, and such that $x(c, d)$ is a relative minimum point of (42). Furthermore,

$\nabla_c f(x(c, d))\big|_{0,0} = -\lambda^T$  (43)
$\nabla_d f(x(c, d))\big|_{0,0} = -\mu^T$  (44)
189513 | http://203.72.57.15/blog_jmath/wp-content/uploads/2020/03/02-108%E6%95%B8%E5%AD%B89%E4%B8%8B%E5%82%99%E8%AA%B2-L01-pdf.io-3.pdf |

Exam Watch, extra practice: Rewrite y = x² + 8x + 5 in the form y = a(x − h)² + k and find a + h − k. (Answer: 8.) Suggested teaching time: 8 hours.

Activity 1: Use completing the square to rewrite a quadratic function of the form y = ax² + bx + c, a ≠ 0, as y = a(x − h)² + k.

[Teaching notes] Completing the square for a quadratic function differs slightly from completing the square used to solve a quadratic equation; when presenting Example 1, compare it with solving the equation x² − 6x + 8 = 0 so students can review and contrast. Problems like y = x² − 5x + 6, where fractions appear during the process, are harder for students; this book introduces odd x-coefficients (which produce fractions) only from Example 3 on, and teachers may add practice as student level allows. [Goes with Example 1; remedial materials: Computation Basic 1-2; exam-prep workbook 1-2]

1-2 Completing the Square and Quadratic Functions

How can we sketch the graph of y = ax² + bx + c? We first rewrite it as y = a(x − h)² + k, after which the graph can be drawn with the methods learned earlier. This rewriting process is called completing the square. For example:

y = x² − 6x + 8 = x² − 2·x·3 + 3² − 3² + 8 = (x − 3)² − 1
(add 3² to complete the square on x² − 6x, then subtract 3² so the function is unchanged)

Example 1 (completing the square, leading coefficient 1): Rewrite y = x² − 6x + 8 in the form y = a(x − h)² + k. (Solution as above.)

Class practice: Rewrite y = x² + 4x + 5 in the form y = a(x − h)² + k.
y = x² + 2·x·2 + 2² − 2² + 5 = (x + 2)² + 1
[Competency indicator 9-a-02; workbook p. 9, basic problem 1]

93 Basic Competence Test II, item 23 (answer: D): Which quadratic function has a graph with the same vertex as that of y = 4x² − 8x?
A y = 2x² − 4x  B y = −2(x + 1)²  C y = 2(x + 1)² + 4  D y = −2(x − 1)² − 4
[Related past items: 90 BCT II #13; 93 BCT I #18; 93 BCT II #23, #24; 100 joint test #6]

[Teaching notes] Example 1 has leading coefficient 1, while Example 2 does not. Problems like y = 2x² + 5x + 4 are harder than Example 3, so do not rush students into them here; supplement after Example 3 as appropriate. For class practice 1, students may also be asked to write out the full completing-the-square steps. Key question: in class practice 1, what are h and k? (Answer: h = −1, k = −5.) [Goes with Example 2]

Example 2 (completing the square, leading coefficient not 1): Rewrite y = 2x² + 4x − 1 in the form y = a(x − h)² + k.
y = 2(x² + 2x) − 1  (group the x² and x terms and factor out the x² coefficient 2)
 = 2(x² + 2·x·1 + 1² − 1²) − 1  (complete the square on x² + 2x)
 = 2[(x + 1)² − 1] − 1
 = 2(x + 1)² − 2 − 1 = 2(x + 1)² − 3

Class practice 1: If y = a(x − h)² + k is obtained from y = 4x² + 8x − 1 by completing the square, then a = ?
A 1  B 2  C 3  D 4
(Answer: D. y = 4x² + 8x − 1 = 4(x² + 2x) − 1 = 4[(x + 1)² − 1] − 1 = 4(x + 1)² − 5, so a = 4. Alternatively, a is simply the x² coefficient, so a = 4.)

Class practice 2: Rewrite y = 3x² + 12x + 2 in the form y = a(x − h)² + k.
y = 3(x² + 4x) + 2 = 3(x² + 2·x·2 + 2² − 2²) + 2 = 3[(x + 2)² − 4] + 2 = 3(x + 2)² − 10

[Teaching notes] Have students practice completing the square with positive integer leading coefficients first, and move to negative integers and fractions once they are fluent. When presenting Example 3, compare it with completing the square for the equation −x² + 5x − 1 = 0 so students see the differences. Related past items: 91 BCT I #8; 98 BCT II #18; 100 BCT II #16; 101 BCT #18; 102 BCT #8.

101 Basic Competence Test, item 18 (answer: D): Which set of values a, b, c makes the graph of y = ax² + bx + c − 5x² − 3x + 7 have a lowest point in the coordinate plane?
A a = 0, b = 4, c = 8  B a = 2, b = 4, c = −8  C a = 4, b = −4, c = 8  D a = 6, b = −4, c = −8

Example 3 (vertex of the graph of y = ax² + bx + c): Rewrite y = −x² + 5x − 1 in the form y = a(x − h)² + k and find the vertex of its graph.
y = −(x² − 5x) − 1  (group the x² and x terms and factor out the x² coefficient −1)
 = −[x² − 2·x·(5/2) + (5/2)² − (5/2)²] − 1  (complete the square on x² − 5x)
 = −[(x − 5/2)² − 25/4] − 1
 = −(x − 5/2)² + 25/4 − 1 = −(x − 5/2)² + 21/4
The vertex of the graph is (5/2, 21/4).

Class practice: Rewrite y = −x² − 3x + 2 in the form y = a(x − h)² + k and find the vertex of its graph.
y = −(x² + 3x) + 2 = −[x² + 2·x·(3/2) + (3/2)² − (3/2)²] + 2 = −[(x + 3/2)² − 9/4] + 2 = −(x + 3/2)² + 17/4
The vertex of the graph is (−3/2, 17/4).
[Workbook p. 9, basic problem 1]

Exam Watch, basic practice: Rewrite y = (2/3)x² + (4/3)x + (8/3) in the form y = a(x − h)² + k; does the graph have a highest or a lowest point, and where? (It has a lowest point, (−1, 2).)

[Teaching notes] When presenting Example 4, compare it with completing the square for the equation (1/2)x² − 3x + 3/2 = 0 so students see the differences. When the x² coefficient is not 1, an equation is usually normalized by dividing both sides so the coefficient becomes 1, but a quadratic function must instead factor the coefficient out; because of this old habit, students often make the error y = 2x² + 4x = x² + 2x = ..., so present the two procedures side by side to clarify the concept. [Goes with Example 4]

Example 4 (highest or lowest point of the graph of y = ax² + bx + c): Rewrite y = (1/2)x² − 3x + 3/2 in the form y = a(x − h)² + k, find the vertex of its graph, and state whether the vertex is a highest or a lowest point.
y = (1/2)(x² − 6x) + 3/2   [(1/2)x² − 3x = (1/2)·x² − (1/2)·6x = (1/2)(x² − 6x)]
 = (1/2)(x² − 2·x·3 + 3² − 3²) + 3/2  (complete the square on x² − 6x)
 = (1/2)[(x − 3)² − 9] + 3/2
 = (1/2)(x − 3)² − 9/2 + 3/2 = (1/2)(x − 3)² − 3
The vertex of the graph is (3, −3). Since a > 0 the graph opens upward, so the vertex is a lowest point.

Class practice: Find the vertex of each of the following graphs, and state whether the vertex is a highest or a lowest point.
① y = −2x² + 8x − 3   ② y = (1/2)x² + 2x + 3

(If the graph of a quadratic function opens upward, the vertex is a lowest point; if it opens downward, the vertex is a highest point.)

Solutions:
① y = −2x² + 8x − 3 = −2(x² − 4x + 4 − 4) − 3 = −2(x − 2)² + 5. The vertex is (2, 5); the graph opens downward, so the vertex is a highest point.
② y = (1/2)x² + 2x + 3 = (1/2)(x² + 4x + 4 − 4) + 3 = (1/2)(x + 2)² + 1. The vertex is (−2, 1); the graph opens upward, so the vertex is a lowest point.
[Workbook p. 9, basic problem 2]

[Brain teaser] Which three of the 1-to-10-gram weights suffice to weigh every object from 1 to 13 grams (weights and loads both integers)? Answer: 1, 3, and 9 grams.

[Teaching note] An alternative solution to Example 5 uses the formula for the x-coordinate of the vertex: with a = −2, the vertex has x = −b/(2a) = −b/(2·(−2)) = −1, so b = −4; substituting (−1, 1) into y = −2x² − 4x + c gives c = −1. Key question: if the graph of a quadratic function has a lowest point, which way does it open? If it has a highest point? (Answer: upward; downward.)

Exam Watch, basic practice: The highest point of the graph of y = −2x² + x + a is (b, 1); find a and b. (a = 7/8, b = 1/4.) [Goes with Example 5]

Example 5 (applying y = ax² + bx + c): If the highest point of the graph of y = −2x² + bx + c is (−1, 1), find b and c.
Analysis: ① from the highest point (h, k) and the x² coefficient a, the function is y = a(x − h)² + k; ② expand y = a(x − h)² + k and compare with y = ax² + bx + c to read off b and c.
Solution: Since the highest point of y = −2x² + bx + c is (−1, 1) and the x² coefficient is −2, we may write
y = −2(x + 1)² + 1 = −2(x² + 2x + 1) + 1 = −2x² − 4x − 2 + 1 = −2x² − 4x − 1.
Comparing coefficients with the original function gives b = −4 and c = −1.

Class practice 1: If the lowest point of the graph of y = 2x² + bx + c is (−2, 1), find b and c.
Since the lowest point is (−2, 1) and the x² coefficient is 2, we may write
y = 2(x + 2)² + 1 = 2(x² + 4x + 4) + 1 = 2x² + 8x + 9,
so b = 8 and c = 9.
[Workbook p. 10, basic problem 3]

[Teaching notes] The completing-the-square derivation for the general form y = ax² + bx + c can be presented alongside the steps of Example 4. Although the vertex formula for the general form is derived in the supplement, students should still be encouraged to solve problems by completing the square; as for the formula, since the y-coordinate can always be obtained by substituting the x-coordinate into the function, it suffices for students to memorize the x-coordinate. Key question: when the graph of a quadratic function is translated left, right, up, or down, how do the size and direction of its opening change?
(Answer: neither the size nor the direction of the opening changes.)

Exam Watch, basic practice: The graph of y = ax² + bx + c is translated 3 units right and then 5 units down, giving the graph of y = 2x² − 1; find a, b, c. (a = 2, b = 12, c = 22.) [Goes with the class practice; quick quiz 6]

Supplement: vertex and axis of symmetry of y = ax² + bx + c

Rewriting y = ax² + bx + c, a ≠ 0, in the form y = a(x − h)² + k locates its vertex and axis of symmetry. The derivation by completing the square:
y = ax² + bx + c
 = a(x² + (b/a)x) + c
 = a[x² + (b/a)x + (b/(2a))² − (b/(2a))²] + c
 = a[(x + b/(2a))² − (b/(2a))²] + c
 = a(x + b/(2a))² − a·(b/(2a))² + c
 = a[x − (−b/(2a))]² + (4ac − b²)/(4a)
So the graph of y = ax² + bx + c has vertex (−b/(2a), (4ac − b²)/(4a)) and axis of symmetry the line x = −b/(2a).

Class practice 2: The vertex of the graph of a quadratic function is (2, 3); after a translation, the graph coincides exactly with the graph of y = 3x² − x − 2. Find the quadratic function.
Let the function be y = a(x − h)² + k. Since its graph can coincide with the graph of y = 3x² − x − 2 after a translation, a = 3; the vertex is (2, 3), so the function is y = 3(x − 2)² + 3.
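The vertex formula derived in the supplement can be spot-checked numerically against a worked example (a sketch; Example 4 above gives y = (1/2)x² − 3x + 3/2 = (1/2)(x − 3)² − 3 with vertex (3, −3)):

```javascript
// Vertex formula: for y = ax^2 + bx + c the vertex is
// (-b/(2a), (4ac - b^2)/(4a)). Checked against Example 4.
const [a, b, c] = [0.5, -3, 1.5];
const h = -b / (2 * a);                  // 3
const k = (4 * a * c - b * b) / (4 * a); // -3
const f = x => a * x * x + b * x + c;
console.log(h, k);   // 3 -3
console.log(f(h));   // -3: the function value at the vertex
console.log(f(h - 1) > f(h) && f(h + 1) > f(h)); // true: a > 0, so a minimum
```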
A y 的最大值小於 0 B 當 x=0 時,y 的值大於 1 C 當 x=1 時,y 的值大於 1 D 當 x=3 時,y 的值小於 0 ■ 直接對二次函數 y = a (x - h) 2+k, 以不等式作形式化 的說明。 ■ 以 y=a (x-h) 2+ k, a > 0 時為例: ∵ a (x-h) 2 0 a (x-h) 2+k 0+k ∴ y=a (x-h) 2+ k k 又 x=h 時, y=k, 故在 x=h 時,函數 有最小值 y=k。 ■ 92 基測 I 第 6 題 ■ 100 基測 I 第 28 題 關鍵提問 ■ 隨堂 2,若圖形在 x=-1 時,函數 y 有最大值 3,則 其圖形可能為哪一 個? 答:D。 y x (-1 , 1) (2 ,-1) O 搭配隨堂 解 解二∵ (x+3) 2 0 - (x+3) 2 0 -5 (x+3) 2 0 -5 (x+3) 2+1 1 ∴y 1 故在 x=-3 時,函數 y 有最大值 1。 2 若 y=a (x-h) 2+k 的圖形,在 x=3 時,函數 y 有最小值 -1, 則其圖形可能為下列何者? A B 1判別下列二次函數是否有最大值或最小值,並求出其值。 1y=-3 (x-1) 2+2 2y=2 (x+2) 2-4 x y O (3 ,-1) x y O (-1 , 3) C D x y O (3 ,-1) x y O (-1 , 3) 隨堂練習 ∵圖形的開口向下, 又頂點 (1 , 2) 為最高點, ∴在 x=1 時, 函數 y 有最大值 2。 C ∵圖形的開口向上, 又頂點 (-2 ,-4) 為最低點, ∴在 x=-2 時, 函數 y 有最小值 -4。 ∵函數圖形有最小值,∴圖形的開口向上, 又頂點為 (3 ,-1) ,故選 C。 41 1 –2 配方法與二次函數 65 ■ 例題 7 是先經過配 方,再利用不等式 作形式化的說明。 ■ 教師可跟學生強 調,只要是二次函 數就一定會有最大 值或最小值。 ■ 若學生先經過配 方,再利用圖形的 最高點或最低點找 最大值或最小值, 教師不宜算學生 錯。 會考觀測站– 基礎演練題 ■ 求下列各二次函數的最大值或最小值: 1 y=x 2-x 2 y=-x 2-3x+2 3 y=4x 2+6x+1 4 有最小值-1 4 有最大值17 4 有最小值-2 ■ 隨堂輕鬆考第 7 回 搭配例 7 1y =-3x 2+6x-5 =-3 (x 2-2x) -5 =-3 (x 2-2x+1-1) -5 =-3 (x-1) 2-2 ∵-3 (x-1) 2-2 -2, ∴在 x=1 時, 函數 y 有最大值-2。 2y =1 2 x 2+x-2 =1 2(x 2+2x) -2 =1 2 (x 2+2x+1-1) -2 =1 2 (x+1) 2-5 2 ∵1 2 (x+1) 2-5 2 -5 2 , ∴在 x=-1 時, 函數 y 有最小值-5 2 。 判別下列二次函數是否有最大值或最小值,並求出其值。 1y=3x 2-12x-1 2 y=-1 2 x 2+4x 解 判別下列二次函數是否有最大值或最小值,並求出其值。 1y=-3x 2+6x-5 2y=1 2 x 2+x-2 利用配方法求二次函數的最大值或最小值 例7 隨堂練習 最大值或最小值 形如 y=a (x-h) 2+k 的二次函數,有最大值或最小值: 1當 a>0 時,圖形開口向上,在 x=h 時,函數 y 有最小值 k。 2當 a<0 時,圖形開口向下,在 x=h 時,函數 y 有最大值 k。 y=3x 2-12x-1 =3 (x-2) 2-13 ∵3 (x-2) 2-13 -13, ∴在 x=2 時, 函數 y 有最小值-13。 y=-1 2 x 2+4x =-1 2(x-4) 2+8 ∵-1 2(x-4) 2+8 8, ∴在 x=4 時, 函數 y 有最大值 8。 搭配習作 P10 基礎題 5 第一章 二次函數 42 66 活動 4 了解二次函 數的圖形與兩軸的 相交關係,並知道 其圖形與 x 軸的交 點坐標,即為其對 應的一元二次方程 式的解。 ■ 例題 8 與隨堂練習 目的在讓學生練習 找出兩軸的交點坐 標,所以只呈現了 圖形與 x 軸交於兩 點、交於一點的情 形,而沒有交點的 情形後面會說明。 ■ 解方程式 x 2-6x+ 7=0 時,可利用 配方法或公式解來 解題。 ■ 98 基測 II 第 30 題 ■ 101 基測第 30 題 ■ 104 會考第 21 題 ■ 105 會考 (新店高中 考場重考) 第 22 題 ■ 107 會考第 21 題 ■ 108 會考第 26 題 108 會考第 26 題 ■ (B) 如圖,坐標平面上有一頂點為 A 的拋物線,此拋 
物線與方程式 y = 2 的圖形交於 B、C 兩點,且 △ABC 為正三角形。若 A 點坐標為 (-3 , 0) ,則 此拋物線與 y 軸的交點坐標為何? A (0 , 9 2 ) B (0 , 27 2 ) C (0 , 9) D (0 , 18) 搭配例 8 y y=2 x O C B A 1求圖形與 y 軸的交點坐標: ∵在 y 軸上的點,其 x 坐標為 0, ∴將 x=0 代入函數得 y=-7, 故與 y 軸的交點坐標為 (0 ,-7) 。 2求圖形與 x 軸的交點坐標: ∵在 x 軸上的點,其 y 坐標為 0, ∴將 y=0 代入函數得 0=-x 2+6x-7, 解方程式 x 2-6x+7=0 可得兩個解為 x=3± 2 故與 x 軸的交點坐標為 (3+2 , 0) 與 (3-2 , 0) 。 求下列二次函數圖形與兩軸的交點坐標: 1y=-x 2-x+6 2y=4x 2-12x+9 前面我們已經學過二次函數的畫圖,並知道二次函數圖形的頂點、對稱 軸,接下來,我們探討二次函數的圖形與兩軸 (x 軸、y 軸) 的關係。 x y O (3 , 2) (3+2 , 0) (3-2 , 0) (0 ,-7) 圖形與兩軸的交點 3 解 求二次函數 y=-x 2+6x-7 圖形與兩軸的交點坐標。 圖形與兩軸的交點坐標 例8 隨堂練習 將 x=0 代入函數得 y=6, 故圖形與 y 軸交於 (0 , 6) 。 將 y=0 代入函數 得-x 2-x+6=0 (x+3) (-x+2) =0 x=-3 或 x=2 故圖形與 x 軸交於 (-3 , 0) 、 (2 , 0) 。 將 x=0 代入函數得 y=9, 故圖形與 y 軸交於 (0 , 9) 。 將 y=0 代入函數 得 4x 2-12x+9=0 (2x-3) 2=0 x=3 2 (重根) 故圖形與 x 軸交於 (3 2 , 0) 。 對應能力指標 9-a-04 搭配習作 P11 基礎題 6 43 1 –2 配方法與二次函數 67 會考觀測站– 加強演練題 1 二次函數 y=x 2+11x+18 的圖形與 x 軸的交點為 , 與 y 軸交點為 。 2 二次函數 y=x 2+4x-12 的圖形與 x 軸交於 A、B 兩點,則 ‾ AB =? 8 (-2 , 0) 、 (-9 , 0) (0 , 18) ■ 將判別式的三種情 況與三種圖形對 照,讓觀念與圖形 整合,不僅學生較 容易明白,也可以 加深印象。 活化體驗站 ⬶ػᄲણ ■ 弟弟考試得了 95 分,哥哥得分比弟 弟多一點,為什麼 還是被媽媽責罵? 因為哥哥考了 9.5 分。 搭配課文 二次函數圖形與兩軸的關係如下: 二次函數圖形與 y 軸的交點 將 x=0 代入二次函數 y=ax 2+bx+c 中,可得函數圖形與 y 軸交於 (0 , c) 。 二次函數圖形與 x 軸的交點 將 y=0 代入二次函數 y=ax 2+bx+c 中,可得 ax 2+bx+c=0,也就是說二 次函數 y=ax 2+bx+c 的圖形與 x 軸的交點坐標,可由方程式 ax 2+bx+c=0 的 解求得。 1 當判別式 b 2-4ac>0: 方程式有兩個相異解,即二次函數 y=ax 2+bx+c 的圖形與 x 軸交於 (-b+ b 2-4ac 2a , 0) 與 (-b- b 2-4ac 2a , 0) 兩點,如圖 1-8。 圖 1-8 與 x 軸交於兩點的拋物線 x y O x y O 2 當判別式 b 2-4ac=0: 方程式恰有一解,即二次函數 y=ax 2+bx+c 的圖形與 x 軸只交於一點, 也就是頂點 (-b 2a , 0) ,如圖 1-9。 圖 1-9 與 x 軸只交於一點的拋物線 x y O x y O 搭配習作 P11 基礎題 7 第一章 二次函數 44 68 100 基測 I 第 19 題 ■ (D) 坐標平面上,二次函數 y=x 2-6x+3 的圖形與下列哪一個方程式的圖形沒有交點? 
A x=50 B x=-50 C y=50 D y=-50 ■ 教師亦可先配方, 再利用圖形的特性 作判別,藉此讓學 生明白,此步驟的 繁瑣,來說明這樣 的方法在某些題型 上比較不適合。 ■ 99 基測 II 第 17 題 ■ 100 基測 I 第 19 題 搭配例 9 3 當判別式 b 2-4ac<0: 方程式沒有解,即二次函數 y=ax 2+bx+c 的圖形與 x 軸沒有交點, 如圖 1-10。 1∵x 2+x+1=0 的判別式為 12-4×1×1=-3<0, ∴其圖形與 x 軸沒有交點。 2∵2x 2-x=0 的判別式為 (-1) 2-4×2×0=1>0, ∴其圖形與 x 軸有兩個交點。 3∵-1 2 x 2+x-1 2 =0 的判別式為 12-4× (-1 2 ) × (-1 2 ) =0, ∴其圖形與 x 軸恰有一個交點。 下列哪些二次函數的圖形與 x 軸恰有一個交點?一一檢查後並勾選。 □ 1y=x 2-2x+4 □ 2y=x 2+5x □ 3y=9x 2+6x+1 □ 4y=-2x 2-3x-9 8 解 判別下列二次函數的圖形與 x 軸的交點個數: 1y=x 2+x+1 2y=2x 2-x 3y=-1 2 x 2+x-1 2 圖形與 x 軸的交點個數 (未配方) 例9 圖 1-10 與 x 軸沒有交點的拋物線 x y O x y O 隨堂練習 搭配習作 P11 基礎題 8 L L 1∵x 2-2x+4=0 的判別式為 (-2) 2-4×1×4=-12<0, ∴圖形與 x 軸沒有交點。 3∵9x 2+6x+1=0 的判別式為 62-4×9×1=0, ∴圖形與 x 軸恰有 1 個交點。 2∵x 2+5x=0 的判別式為 52-4×1×0=25>0, ∴圖形與 x 軸有 2 個交點。 4∵-2x 2-3x-9 8 =0 的判別式為 (-3) 2-4× (-2) × (-9 8 ) =0, ∴圖形與 x 軸恰有 1 個交點。 45 1 –2 配方法與二次函數 69 98 基測 I 第 31 題 ■ (D) 下列哪一個函數,其圖形與 x 軸有兩個交點? A y=17 (x+83) 2+2274 B y=17 (x-83) 2+2274 C y=-17 (x-83) 2-2274 D y=-17 (x+83) 2+2274 ■ 例題 10 雖然可展 開平方式後用判別 式來判別,但還是 利用圖形的特性作 判別較適宜, 尤其 像第2 、3 小題 有分數時更不適合 用判別式。 ■ 本教材於此強調利 用圖形特性來解題 的概念,是希望強 化學生對於二次函 數與其圖形間的關 聯。 ■ 隨堂練習第 1 題 中,若要用判別式 作判別,可先求出 b、c 的值,再算 判別式,但因題目 不求 b、c 的值, 還是以圖形的特性 作判別較適合。 ■ 98 基測 I 第 31 題 ■ 100 聯測第 32 題 關鍵提問 ■ 隨堂 1,b、c 的值 為何? 
答: b=8、 c=-5。 搭配例 10 二次函數圖形與 x 軸的相交情形,除了可用判別式來判別外,當可以確定 圖形頂點與開口方向時,也可利用其圖形的特性來判別。 1 y=2 (x-5) 2-4 圖形的頂點 (5 ,-4) 在 x 軸下方,開口向上,因此其圖形 與 x 軸會有兩個交點。 2 y=-3 2 (x+3) 2 圖形的頂點 (-3 , 0) 恰在 x 軸上,因此其圖形與 x 軸恰有 一個交點。 3 y=-3 4 (x+3 2 ) 2-1 2 圖形的頂點 (-3 2 ,-1 2 ) 在 x 軸下方,開口向下, 因此其圖形與 x 軸沒有交點。 y x O (5 ,-4) x y O (-3 , 0) x y O (-3 2 ,-1 2 ) 1 已知二次函數 y=-2 (x+a) 2 +b 的頂點為 (2 , 3) ,求其圖形與 x 軸的 交點個數。 2 求二次函數 y=3 (x-h) 2 的圖形與 x 軸的交點個數。 解 判別下列二次函數的圖形與 x 軸的交點個數: 1y=2 (x-5) 2-4 2y=-3 2 (x+3) 2 3y=-3 4 (x+3 2 ) 2-1 2 圖形與 x 軸的交點個數 (已配方) 例10 隨堂練習 ∵圖形的開口向下,且頂點 (2 , 3) 在 x 軸上方, 因此圖形與 x 軸有 2 個交點。 ∵頂點 (h , 0) 恰在 x 軸上, 因此圖形與 x 軸恰有 1 個交點。 搭配習作 P11 基礎題 8 第一章 二次函數 46 70 ■ 輸入 f (x) =sqrt (x) 按 ENTER 得到 y=f (x) =x 的圖 形。 ■ sqrt 代表 square root (平方根) 的縮 寫。 會考觀測站– 基礎演練題 1 二次函數 y=1 3(x-3 2 ) 2+7 4 的圖形向下平移 2 個單位後,其圖形與 x 軸有 幾個交點?若再向上平移 k 個單位後,其圖形與 x 軸恰有一個交點,則 k=? 2 個交點,k=1 4 2 已知 ak>0,則二次函數 y=a (x-1) 2+k 的圖形與 x 軸是否有交點?沒有交點 搭配例 10 ■ 免試基礎講堂 1-2 ■ 隨堂輕鬆考第 8 回 ■ 免試精熟本 1-2 函數繪圖軟體 補給站 GeoGebra 數學軟體是一個結合動態幾何、代數與微積分的數學軟體, 是由美國 佛羅里達州 大西洋大學 (Florida Atlantic University) 的數學教授 Markus Hohenwarter 所設計。這套軟體可以畫出點、直線、圓與多邊形等 圖形,也可以直接在直角坐標系中輸入點坐標、方程式與函數得到圖形。 GeoGebra 數學軟體繪製二次函數圖形的方法如下: 一、進入 GeoGebra; 二、在下方 輸入 框內輸入二次函數。例如: 1輸入 f (x) =-1/2 x^2+1,按 ENTER,得到 y=f (x) =-1 2 x 2+1 的二次函數圖形。 2輸入 g (x) =x^2-6x+8,按 ENTER,得到 y=g (x) =x 2-6x+8 的二次函數圖形。 (電腦上常用 「^」 表示次方的運算) 同學亦可試著輸入其他函數,認識一些未學過函數的圖形,例如: f (x) =x3、f (x) =x 等。 以上資料改編自:GeoGebraWiki 中文版 取得安裝軟體與說明,可參考上面網站。 47 1 –2 配方法與二次函數 71 活化體驗站 ⬶ػᄲણ ■ 如下圖,將 1∼9 的數字填入圓圈 中,使每邊的數字 和皆相等,且每邊 數字的平方和也相 等。 2 4 9 5 1 6 8 3 7 99 基測 I 第 27 題 ■ (D) 坐標平面上,若移動二次函數 y=2 (x-175) (x-176) +6 的圖形,使其與 x 軸 交於兩點,且此兩點的距離為 1 單位,則移動方式可為下列哪一種? 
A 向上移動 3 單位 B 向下移動 3 單位 C 向上移動 6 單位 D 向下移動 6 單位 搭配自評第 1 題 顧 回 點 重 1 最大值或最小值: 形如 y=a (x-h) 2+k 的二次函數,有最大值或最小值: 1當 a>0 時,圖形開口向上,頂點 (h , k) 為最低點;在 x=h 時, 函數 y 有最小值 k。 2當 a<0 時,圖形開口向下,頂點 (h , k) 為最高點;在 x=h 時, 函數 y 有最大值 k。 2 二次函數與兩軸的交點: 1二次函數 y=ax 2+bx+c 的圖形與 y 軸恰交於一點 (0 , c) 。 2二次函數 y=ax 2+bx+c 的圖形與 x 軸相交的情形: 判別式 b 2-4ac>0 b 2-4ac=0 b 2-4ac<0 與 x 軸的 交點坐標 交於兩點: (-b+ b 2-4ac 2a , 0 ) (-b- b 2-4ac 2a , 0 ) 交於一點: (-b 2a , 0 ) 沒有交點 圖形 x y O x y O x y O x y O x y O x y O 第一章 二次函數 48 72 ■ 99 基測 I 第 27 題 ■ 100 基測 II 第 29 題 ■ 學生進行繪圖時, 宜提醒學生要配 合所選的點來畫兩 軸的位置,才不會 讓所描的點跑出方 格外,且有的圖形 可酌量增描一、兩 組對稱點,繪圖時 才不會有太大的誤 差。 會考觀測站– 基礎演練題 ■ (C) 在坐標平面上,二次函數 y=-3x 2+6x 圖形的頂點與原點的距離為 d, 則 d 的範圍為何? A 2< d <2.5 B 2.5< d <3 C 3< d <3.5 D 3.5< d <4 搭配自評第 1 題 ■ 會考 100 分 1-2 ■ 會考基礎卷 1-2 ■ 會考精熟卷 1-2 ■ 段考精選試題 1-2 1-2自我評量 1 描繪下列二次函數的圖形,並求此圖形的頂點坐標、對稱軸及開口方向: 1y=x 2-2x-1 x y O x y O 2y=-1 2 x 2+2x 頂點坐標: 對稱軸: 開口方向: 頂點坐標: 對稱軸: 開口方向: x ⋯ ⋯ y ⋯ ⋯ x ⋯ ⋯ y ⋯ ⋯ 課 P34 例 1 課 P36、37 例 3、4 (1 ,-2) (2 , 2) 開口向上 開口向下 x=1 x=2 -1 0 1 2 3 0 1 2 3 4 2 -1 -2 -1 2 0 3 2 2 3 2 0 y=x 2-2x-1 = (x 2-2x+1-1) -1 = (x-1) 2-1-1 = (x-1) 2-2 y=-1 2 x 2+2x =-1 2(x 2-4x) =-1 2(x 2-4x+4-4) =-1 2(x-2) 2+2 (1 ,-2) (2 ,-1) (0 ,-1) (-1 , 2) (3 , 2) (2 , 2) (3 , 3 2 ) (1 , 3 2 ) (0 , 0) (4 , 0) 49 49 49 1 –2 配方法與二次函數 73 會考觀測站– 精熟演練題 1 已知二次函數 y=x 2+ax+3 有最小值為-1,則 a=?±4 2 當 a 為何數時,二次函數 y=ax 2-4x+a 有最大值 3 呢?-1 關鍵提問 ■ 自評 2,二次函數 的圖形有最高點 (-1 2 , 2) ,表示 在 x 的值為多少 時,函數 y 會得到 最大值或最小值為 何? 
答:在 x =-1 2 時,函數 y 有最 大值 2。 ■ 提醒學生,若知 道二次函數 y = a (x-h) 2+k 圖形 的對稱軸時,即表 示知道頂點 (h , k) 的 x 坐標。 ■ 提醒學生求二次函 數 y=ax 2+bx+c 的最大值或最小值 時,一定要先化成 y=a (x-h) 2+k 的 形式。 搭配自評第 2、4 題 50 2 若二次函數 y=ax 2-x+c 的最高點為 (-1 2 , 2) ,求 a 及 c 的值。 3 將二次函數 y=-2x 2 的圖形平移後,可得 y=ax 2+bx+c 的圖形, 其對稱軸為 x=3,且通過坐標平面上的點 (-1 , 6) ,求 c 的值。 4 求下列二次函數的最大值或最小值,並寫出 x 的值為多少時, y 會得到最大值或最小值。 1y=3 4 (x-7) 2 2y=4 (x+2 3 ) 2-3 3y=-3x 2+24x-46 4y=x 2-5x+4 課 P38 例 5 課 P39 隨堂 課 P40、42 例 6、7 ∵函數有最高點 (-1 2 , 2) ,且 x 2 的係數為 a, ∴可令 y=a (x+1 2 ) 2+2=ax 2+ax+a 4 +2, 故對照原二次函數的係數可得 a=-1、c=-1 4 +2=7 4 。 ∵y=ax 2+bx+c 圖形的對稱軸為 x=3, 且由 y=-2x 2 的圖形平移而得, ∴可令 y=-2 (x-3) 2+k, 將 (-1 , 6) 代入 y=-2 (x-3) 2+k,可得 k=38, 因此函數為 y=-2 (x-3) 2+38=-2x 2+12x+20, 故對照原二次函數的係數可得 c=20。 ∵圖形的開口向上, 又頂點 (7 , 0) 為最低點, ∴在 x=7 時, 函數 y 有最小值 0。 y=-3x 2+24x-46 =-3 (x-4) 2+2 ∵-3 (x-4) 2+2 2, ∴在 x=4 時, 函數 y 有最大值 2。 ∵圖形的開口向上, 又頂點 (-2 3 ,-3) 為最低點, ∴在 x=-2 3 時, 函數 y 有最小值 -3。 y=x 2-5x+4 = (x-5 2 ) 2-9 4 ∵ (x-5 2 ) 2-9 4 -9 4 , ∴在 x=5 2 時, 函數 y 有最小值-9 4 。 c:a=-1、c=7 4 。 c:20。 第一章 二次函數 50 74 會考觀測站– 基礎演練題 ■ 如圖,二次函數 y=x 2-2x-3 的圖形交 x 軸於 A、B 兩點, 交 y 軸於 C 點,求: 1 A 點坐標。 2 C 點坐標。 3 頂點 D 坐標。 1 (-1 , 0) 2 (0 ,-3) 3 (1 ,-4) x y O A C D B ■ 若二次函數圖形 和 x 軸交於 (α , 0) 、 (β , 0) ,則 二次函數方程式為 y=a (x-α) (x- β) 。 ■ 第 6 題1也可利用 觀察函數圖形的最 低點來求解。 〈另解〉 ∵ 圖形的開口向 上,且頂點 (0 , 3) 在 x 軸上方, ∴ 圖形與 x 軸沒有 交點。 搭配自評第 5 題 5 已知二次函數 y=3x 2+2x-1 的圖形與 y 軸交於 A 點,與 x 軸交於 B、C 兩點,求: 1A 點坐標。 2 BC 的長度。 6 判別下列二次函數的圖形與 x 軸的交點個數: 1y=2x 2+3 2y=x 2-3x+1 3y=-2 (x+3) 2-5 4y=9x 2-12x+4 課 P43 例 8 課 P45、46 例 9、10 1將 x=0 代入函數得 y=-1, 故圖形與 y 軸交於 A (0 ,-1) 。 2將 y=0 代入函數得 3x 2+2x-1=0 (3x-1) (x+1) =0, x=1 3 或 x=-1, ∴圖形與 x 軸交於 B (1 3 , 0) 、C (-1 , 0) , 故 BC=|1 3 - (-1) |=4 3 。 c:1 (0 ,-1) ,2 4 3 。 ∵圖形的開口向上 且頂點 (0 , 3) 在 x 軸上方, ∴圖形與 x 軸沒有交點。 ∵圖形的開口向下, 且頂點 (-3 ,-5) 在 x 軸下方, ∴圖形與 x 軸沒有交點。 ∵x 2-3x+1=0 的判別式為 (-3) 2-4×1×1=5>0, ∴圖形與 x 軸有 2 個交點。 ∵9x 2-12x+4=0 的判別式為 (-12) 2-4×9×4=0, ∴圖形與 x 軸恰有 1 個交點。 51 51 51 1 –2 配方法與二次函數 75 |
189514 | https://math.stackexchange.com/questions/4049266/what-is-the-result-of-a-180-degree-rotation-of-a-cube-around-the-line-joining-th | combinatorics - What is the result of a 180 degree rotation of a cube around the line joining the midpoints of opposite edges? - Mathematics Stack Exchange
What is the result of a 180 degree rotation of a cube around the line joining the midpoints of opposite edges?
Asked 4 years, 6 months ago
Modified4 years, 6 months ago
Viewed 323 times
This is supposed to be an isomorphism, but I can't visualize it. Suppose that there is a 180 degree rotation around the line joining the midpoint of the left edge of the top face to the midpoint of the right edge of the bottom face. Where do the top and bottom faces end up?
combinatorics
geometry
asked Mar 4, 2021 at 21:05
user1153980
1 Answer
Top face ends up left, bottom face ends up right. As I convinced myself by picking up the nearest cube lying around (a Rubik's cube), holding it at those midpoints, and doing the rotation. The top and left faces swap places, as do back and front, and bottom and right.
answered Mar 4, 2021 at 21:20
Hans Lundmark
Comment from user1153980 (Mar 5, 2021): Thanks, I was able to use a Rubik's cube to verify. What really helped was to hold the cube so that the rotation axis is perpendicular to the floor. That reinforces the idea that the axis does not move and the line between the front and left faces swings around and ends up on top of itself in the reverse direction. The same thing happens between the bottom and right faces. The back and front faces swap places. I can see it but it still seems kind of amazing.
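The face swaps can also be checked algebraically (a sketch; it assumes a cube centered at the origin with x pointing right, y pointing front, and z pointing up, so the axis runs through the midpoint of the top face's left edge, (-1, 0, 1), and the midpoint of the bottom face's right edge, (1, 0, -1)). A half-turn about a unit axis u is the matrix 2uuᵀ − I, which for this axis works out to the constant matrix below:

```javascript
// Half-turn (180 degrees) about the axis through (-1, 0, 1) and (1, 0, -1):
// R = 2*u*u^T - I with u = (-1, 0, 1)/sqrt(2).
const R = [[0, 0, -1], [0, -1, 0], [-1, 0, 0]];
const apply = v => R.map(row => row[0] * v[0] + row[1] * v[1] + row[2] * v[2]);

// Face centers of the cube.
const faces = {
  top: [0, 0, 1], bottom: [0, 0, -1], left: [-1, 0, 0],
  right: [1, 0, 0], front: [0, 1, 0], back: [0, -1, 0],
};
for (const [name, center] of Object.entries(faces)) {
  const image = apply(center);
  const dest = Object.keys(faces).find(
    f => faces[f].every((c, i) => c === image[i]));
  console.log(`${name} -> ${dest}`);
}
// top -> left, bottom -> right, left -> top,
// right -> bottom, front -> back, back -> front
```

This matches the answer: top and left swap, bottom and right swap, and front and back swap.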
189515 | https://tc39.es/proposal-temporal/docs/timezone.html | Time Zones and Resolving Ambiguity
Table of Contents
Understanding Clock Time vs. Exact Time
Understanding Time Zones, Offset Changes, and DST
Wall-Clock Time, Exact Time, and Time Zones in Temporal
Ambiguity Due to DST or Other Time Zone Offset Changes
Resolving Time Ambiguity in Temporal
Examples: DST Disambiguation
Ambiguity Caused by Permanent Changes to a Time Zone Definition
Examples: offset option
Understanding Clock Time vs. Exact Time
The core concept in Temporal is the distinction between wall-clock time (also called "local time" or "clock time") which depends on the time zone of the clock and exact time (also called "UTC time") which is the same everywhere.
Wall-clock time is controlled by local governmental authorities, so it can abruptly change.
When Daylight Saving Time (DST) starts or if a country moves to another time zone, then local clocks will instantly change.
Exact time however has a consistent global definition and is represented by a special time zone called UTC (from Wikipedia):
Coordinated Universal Time (or UTC) is the primary time standard by which the world regulates clocks and time.
It is within about 1 second of mean solar time at 0° longitude, and is not adjusted for daylight saving time.
It is effectively a successor to Greenwich Mean Time (GMT).
Every wall-clock time is defined using a UTC Offset: the amount of exact time that a particular clock is set ahead or behind UTC.
For example, on January 19, 2020 in California, the UTC Offset (or "offset" for short) was -08:00, which means that wall-clock time in San Francisco was 8 hours behind UTC, so 10:00 AM locally on that day was 18:00 UTC.
However, the same calendar date and wall-clock time in India would have an offset of +05:30: 5½ hours later than UTC.
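These two offsets can be spot-checked by rendering one exact time in both zones with the standard Intl API (a sketch; it assumes a JavaScript runtime with full ICU time zone data, such as recent Node.js):

```javascript
// One exact time, formatted in two different time zones.
const instant = new Date('2020-01-19T18:00:00Z'); // 18:00 UTC
const fmt = tz => new Intl.DateTimeFormat('en-US', {
  timeZone: tz, hour: 'numeric', minute: '2-digit', hour12: false,
}).format(instant);
console.log(fmt('America/Los_Angeles')); // "10:00" (offset -08:00)
console.log(fmt('Asia/Kolkata'));        // "23:30" (offset +05:30)
```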
RFC 9557, ISO 8601, and RFC 3339 define standard representations for exact times as a date and time value, e.g. 2020-09-06T17:35:24.485Z and 2020-09-06T17:35:24.485-07:00.
The presence of a Z suffix or UTC offset (-07:00) indicates that this is an exact time.
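As an illustration of the `Z` and offset suffixes, JavaScript's built-in `Date.parse` accepts this same syntax: two different strings that denote the same exact time parse to the same number of milliseconds since the Unix epoch.

```javascript
// "...Z" pins the string to UTC; "...-07:00" means 7 hours behind UTC.
// 10:35 at UTC-7 is the same exact time as 17:35 UTC.
const a = Date.parse('2020-09-06T17:35:24.485Z');
const b = Date.parse('2020-09-06T10:35:24.485-07:00');
console.log(a === b); // true
console.log(a);       // 1599413724485 (ms since the Unix epoch)
```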
Temporal has two types that store exact time: Temporal.Instant (which only stores exact time and no other information) and Temporal.ZonedDateTime (which stores exact time, a time zone, and a calendar system).
Another way to represent exact time is using a single number representing temporal distance from the Unix epoch of January 1, 1970 at 00:00 UTC.
For example, Temporal.Instant (an exact-time type) can be constructed using only a BigInt value of nanoseconds since epoch, ignoring leap seconds.
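The epoch-nanoseconds representation can be derived from a millisecond timestamp with BigInt arithmetic (a sketch; in the proposal, `Temporal.Instant.fromEpochNanoseconds` accepts exactly this kind of value):

```javascript
// Exact time as a single BigInt count of nanoseconds since the Unix epoch.
const epochMs = Date.UTC(2020, 8, 6, 17, 35, 24, 485); // 2020-09-06T17:35:24.485Z
const epochNs = BigInt(epochMs) * 1_000_000n;          // ms -> ns
console.log(epochNs); // 1599413724485000000n
```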
Another term developers often encounter is "timestamp".
This most often refers to an exact time represented by the number of seconds since Unix epoch.
Temporal avoids using this terminology, however, because of historical ambiguity surrounding the term "timestamp".
For example, many databases have a type called TIMESTAMP, but its meaning varies: in MySQL, it is an exact time; in Oracle Database, it is the number of seconds since the wall-clock time 00:00 on January 1, 1970 (a quantity one might call a "local timestamp"); and in Microsoft SQL Server, it is a monotonically increasing value unrelated to date and time.
Understanding Time Zones, Offset Changes, and DST
A Time Zone defines the rules that control how local wall-clock time relates to UTC. You can think of a time zone as a function that accepts an exact time and returns a UTC offset, and a corresponding function for conversions in the opposite direction. (See below for why exact → local conversions are 1:1, but local → exact conversions can be ambiguous.)
Temporal uses the IANA Time Zone Database (or "TZ database"), which you can think of as a global repository of time zone functions. Each IANA time zone has:
A time zone ID that usually refers to a geographic area anchored by a city (e.g. Europe/Paris or Africa/Kampala) but can also denote single-offset time zones like UTC (a consistent +00:00 offset) or Etc/GMT+5 (which for historical reasons is a negative offset -05:00).
A time zone definition defines the offset for any UTC value since January 1, 1970. You can think of these definitions as a table that maps UTC date/time ranges (including future ranges) to specific offsets.
In some time zones, temporary offset changes happen twice each year, due to Daylight Saving Time (DST) starting in the Spring and ending in the Fall.
Offsets can also change permanently due to political changes, e.g. a country switching time zones.
The IANA Time Zone Database is updated several times per year in response to political changes around the world.
Each update contains changes to time zone definitions.
These changes usually affect only future date/time values, but occasionally fixes are made to past ranges too, for example when new historical sources are discovered about early-20th century timekeeping.
Wall-Clock Time, Exact Time, and Time Zones in Temporal
In Temporal:
The Temporal.Instant type represents exact time only.
The Temporal.PlainDateTime type represents calendar date and wall-clock time, as do other narrower types: Temporal.PlainDate, Temporal.PlainTime, Temporal.PlainYearMonth, and Temporal.PlainMonthDay.
These types all carry a calendar system, which by default is 'iso8601' (the ISO 8601 calendar) but can be overridden for other calendars like 'islamic' or 'japanese'.
The time zone identifier represents a time zone function that converts exact time to wall-clock time and vice versa.
The Temporal.ZonedDateTime type encapsulates all of the types above: an exact time (like a Temporal.Instant), its wall-clock equivalent (like a Temporal.PlainDateTime), and the time zone that links the two.
There are two ways to get a human-readable calendar date and clock time from a Temporal type that stores exact time.
If the exact time is already represented by a Temporal.ZonedDateTime instance then the wall-clock time values are trivially available using the properties and methods of that type, e.g. .year, .hour, or .toLocaleString().
However, if the exact time is represented by a Temporal.Instant, use a time zone and optional calendar to create a Temporal.ZonedDateTime. Example:
```
instant = Temporal.Instant.from('2019-09-03T08:34:05Z');
formatOptions = {
era: 'short',
year: 'numeric',
month: 'short',
day: 'numeric',
hour: 'numeric',
minute: 'numeric',
second: 'numeric'
};
zdt = instant.toZonedDateTimeISO('Asia/Tokyo');
// => 2019-09-03T17:34:05+09:00[Asia/Tokyo]
zdt.toLocaleString('en-us', { ...formatOptions, calendar: zdt.calendar });
// => 'Sep 3, 2019 AD, 5:34:05 PM'
zdt.year;
// => 2019
zdt.toLocaleString('ja-jp', formatOptions);
// => '西暦2019年9月3日 17:34:05'
zdt = zdt.withCalendar('japanese');
// => 2019-09-03T17:34:05+09:00[Asia/Tokyo][u-ca=japanese]
zdt.toLocaleString('en-us', { ...formatOptions, calendar: zdt.calendar });
// => 'Sep 3, 1 Reiwa, 5:34:05 PM'
zdt.eraYear;
// => 1
```
Conversions from a calendar date and/or wall-clock time to exact time are also supported:
```
// Convert various local time types to an exact time type by providing a time zone
date = Temporal.PlainDate.from('2019-12-17');
// If time is omitted, local time defaults to start of day
zdt = date.toZonedDateTime('Asia/Tokyo');
// => 2019-12-17T00:00:00+09:00[Asia/Tokyo]
zdt = date.toZonedDateTime({ timeZone: 'Asia/Tokyo', plainTime: '10:00' });
// => 2019-12-17T10:00:00+09:00[Asia/Tokyo]
time = Temporal.PlainTime.from('14:35');
zdt = time.toZonedDateTime({ timeZone: 'Asia/Tokyo', plainDate: Temporal.PlainDate.from('2020-08-27') });
// => 2020-08-27T14:35:00+09:00[Asia/Tokyo]
dateTime = Temporal.PlainDateTime.from('2019-12-17T07:48');
zdt = dateTime.toZonedDateTime('Asia/Tokyo');
// => 2019-12-17T07:48:00+09:00[Asia/Tokyo]
// Get the exact time in seconds, milliseconds or nanoseconds since the UNIX epoch.
inst = zdt.toInstant();
epochNano = inst.epochNanoseconds; // => 1576536480000000000n
epochMilli = inst.epochMilliseconds; // => 1576536480000
epochSecs = Math.floor(inst.epochMilliseconds / 1000); // => 1576536480
```
Ambiguity Due to DST or Other Time Zone Offset Changes
Usually, a time zone definition provides a bidirectional 1:1 mapping between any particular local date and clock time and its corresponding UTC date and time. However, near a time zone offset transition there can be time ambiguity where it's not clear what offset should be used to convert a wall-clock time into an exact time. This ambiguity leads to two possible UTC times for one clock time.
When offsets change in a backward direction, the same clock time will be repeated.
For example, 1:30AM happened twice on Sunday, 4 November 2018 in California.
The "first" 1:30AM on that date was in Pacific Daylight Time (offset -07:00).
30 exact minutes later, DST ended and Pacific Standard Time (offset -08:00) became active.
After 30 more exact minutes, the "second" 1:30AM happened.
This means that "1:30AM on Sunday, 4 November 2018" is not sufficient to know which 1:30AM it is.
The clock time is ambiguous.
When offsets change in a forward direction, local clock times are skipped.
For example, DST started on Sunday, 11 March 2018 in California.
When the clock advanced from 1:59AM to 2:00AM, local time immediately skipped to 3:00AM.
2:30AM didn't happen!
To avoid errors in this one-hour-per-year case, most computing environments (including ECMAScript) will convert skipped clock times to exact times using either the offset before the transition or the offset after the transition.
In both cases, resolving the ambiguity when converting the local time into exact time requires choosing which of two possible offsets to use, or deciding to throw an exception.
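The repeated hour can be demonstrated without Temporal; a sketch using legacy Date and Intl showing two exact times, one hour apart, that print the same Los Angeles wall-clock time:

```javascript
// Two different exact times, one hour apart, that both read "1:30" in
// Los Angeles on 2018-11-04, because clocks were set back at 2:00 AM PDT.
const first = new Date(Date.UTC(2018, 10, 4, 8, 30));  // 08:30Z = 1:30 PDT (-07:00)
const second = new Date(Date.UTC(2018, 10, 4, 9, 30)); // 09:30Z = 1:30 PST (-08:00)

const la = new Intl.DateTimeFormat('en-US', {
  timeZone: 'America/Los_Angeles',
  hour: 'numeric',
  minute: '2-digit',
  timeZoneName: 'short',
});
console.log(la.format(first));  // 1:30 AM PDT
console.log(la.format(second)); // 1:30 AM PST
```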
Resolving Time Ambiguity in Temporal
In Temporal, if the exact time or time zone offset is known, then there is no ambiguity possible. For example:
```
// No ambiguity possible because the source is an exact time in UTC
inst = Temporal.Instant.from('2020-09-06T17:35:24.485Z');
// => 2020-09-06T17:35:24.485Z

// An offset can make a local time "exact", with no ambiguity possible.
inst = Temporal.Instant.from('2020-09-06T10:35:24.485-07:00');
// => 2020-09-06T17:35:24.485Z
zdt = Temporal.ZonedDateTime.from('2020-09-06T10:35:24.485-07:00[America/Los_Angeles]');
// => 2020-09-06T10:35:24.485-07:00[America/Los_Angeles]

// If the source is an exact Temporal object, then no ambiguity is possible.
zdt = inst.toZonedDateTimeISO('America/Los_Angeles');
// => 2020-09-06T10:35:24.485-07:00[America/Los_Angeles]
inst2 = zdt.toInstant();
// => 2020-09-06T17:35:24.485Z
```
However, opportunities for ambiguity are present when creating an exact-time type (Temporal.ZonedDateTime or Temporal.Instant) from a non-exact source. For example:
```
// Offset is not known. Ambiguity is possible!
zdt = Temporal.PlainDate.from('2019-02-19').toZonedDateTime('America/Sao_Paulo'); // can be ambiguous
zdt = Temporal.PlainDateTime.from('2019-02-19T00:00').toZonedDateTime('America/Sao_Paulo'); // can be ambiguous
// Even if the offset is present in the source string, if the type (like PlainDateTime)
// isn't an exact type then the offset is ignored when parsing so ambiguity is possible.
dt = Temporal.PlainDateTime.from('2019-02-19T00:00-03:00');
zdt = dt.toZonedDateTime('America/Sao_Paulo'); // can be ambiguous
// the offset is lost when converting from an exact type to a non-exact type
zdt = Temporal.ZonedDateTime.from('2020-11-01T01:30-08:00[America/Los_Angeles]');
// => 2020-11-01T01:30:00-08:00[America/Los_Angeles]
dt = zdt.toPlainDateTime(); // offset is lost!
// => 2020-11-01T01:30:00
zdtAmbiguous = dt.toZonedDateTime('America/Los_Angeles'); // can be ambiguous
// => 2020-11-01T01:30:00-07:00[America/Los_Angeles]
// note that the offset is now -07:00 (Pacific Daylight Time) which is the "first" 1:30AM
// not -08:00 (Pacific Standard Time) like the original time which was the "second" 1:30AM
```
To resolve this possible ambiguity, Temporal methods that create exact types from inexact sources accept a disambiguation option, which controls what exact time to return in the case of ambiguity:
'compatible' (the default): Acts like 'earlier' for backward transitions and 'later' for forward transitions.
'earlier': The earlier of two possible exact times will be returned.
'later': The later of two possible exact times will be returned.
'reject': A RangeError will be thrown.
When interoperating with existing code or services, 'compatible' mode matches the behavior of legacy Date as well as libraries like moment.js, Luxon, and date-fns.
This mode also matches the behavior of cross-platform standards like RFC 5545 (iCalendar).
Methods where this option is present include:
Temporal.ZonedDateTime.from with object argument
Temporal.ZonedDateTime.prototype.with
Temporal.PlainDateTime.prototype.toZonedDateTime
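The candidate-offset idea behind disambiguation can be sketched with plain Date and Intl. The `wallClock` and `resolve` helpers below are illustrative names, not Temporal API, and this sketch only handles repeated (backward-transition) times:

```javascript
// Illustrative sketch (NOT Temporal's real algorithm): resolve a possibly
// ambiguous wall-clock time by scanning candidate UTC offsets and keeping
// the exact times that format back to the requested wall-clock time.
function wallClock(date, timeZone) {
  // Hypothetical helper: 'YYYY-MM-DDTHH:mm' for `date` in `timeZone`.
  const parts = new Intl.DateTimeFormat('en-CA', {
    timeZone, year: 'numeric', month: '2-digit', day: '2-digit',
    hour: '2-digit', minute: '2-digit', hour12: false,
  }).formatToParts(date);
  const get = (type) => parts.find((p) => p.type === type).value;
  return `${get('year')}-${get('month')}-${get('day')}T${get('hour')}:${get('minute')}`;
}

function resolve({ year, month, day, hour, minute }, timeZone, disambiguation) {
  const asIfUTC = Date.UTC(year, month - 1, day, hour, minute);
  const wanted = wallClock(new Date(asIfUTC), 'UTC');
  const matches = [];
  // Real-world offsets span -12:00 to +14:00, always multiples of 15 minutes.
  for (let off = -14 * 60; off <= 14 * 60; off += 15) {
    const candidate = new Date(asIfUTC - off * 60000);
    if (wallClock(candidate, timeZone) === wanted) matches.push(candidate);
  }
  matches.sort((a, b) => a - b);
  if (matches.length === 0) throw new RangeError('skipped time (not handled in this sketch)');
  if (disambiguation === 'reject' && matches.length > 1) throw new RangeError('ambiguous wall-clock time');
  // 'earlier' and 'compatible' take the first match for repeated times.
  return disambiguation === 'later' ? matches[matches.length - 1] : matches[0];
}

// The repeated 1:30AM when DST ended in Los Angeles on 2018-11-04:
const spec = { year: 2018, month: 11, day: 4, hour: 1, minute: 30 };
console.log(resolve(spec, 'America/Los_Angeles', 'earlier').toISOString());
// 2018-11-04T08:30:00.000Z  (the "first" 1:30AM, offset -07:00)
console.log(resolve(spec, 'America/Los_Angeles', 'later').toISOString());
// 2018-11-04T09:30:00.000Z  (the "second" 1:30AM, offset -08:00)
```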
Examples: DST Disambiguation
This explanation was adapted from the moment-timezone documentation.
When entering DST, clocks move forward an hour.
In reality, it is not time that is moving, it is the offset moving.
Moving the offset forward gives the illusion that an hour has disappeared.
If you watch your computer's digital clock, you can see it move from 1:58 to 1:59 to 3:00.
It is easier to see what is actually happening when you include the offset.
```
1:58 -08:00
1:59 -08:00
3:00 -07:00
3:01 -07:00
```
The result is that any time between 1:59:59 and 3:00:00 never actually happened.
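The skipped hour can be verified with legacy Date and Intl (no Temporal needed): two exact times one minute apart land on opposite sides of the gap:

```javascript
// Exact times one minute apart straddle the DST gap in Los Angeles on
// 2018-03-11: local clocks jump from 1:59 straight to 3:00, so 2:30 never occurs.
const la = new Intl.DateTimeFormat('en-US', {
  timeZone: 'America/Los_Angeles',
  hour: 'numeric',
  minute: '2-digit',
  timeZoneName: 'short',
});
console.log(la.format(new Date(Date.UTC(2018, 2, 11, 9, 59)))); // 1:59 AM PST
console.log(la.format(new Date(Date.UTC(2018, 2, 11, 10, 0)))); // 3:00 AM PDT
```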
In 'earlier' mode, the exact time that is returned will be as if the post-change UTC offset had continued before the change, effectively skipping backwards by the amount of the DST gap (usually 1 hour).
In 'later' mode, the exact time that is returned will be as if the pre-change UTC offset had continued after the change, effectively skipping forwards by the amount of the DST gap.
In 'compatible' mode, the same time is returned as 'later' mode, which matches the behavior of existing JavaScript code that uses legacy Date.
```
// Different disambiguation modes for times in the skipped clock hour after DST starts in the Spring
// Offset of -07:00 is Daylight Saving Time, while offset of -08:00 indicates Standard Time.
props = { timeZone: 'America/Los_Angeles', year: 2020, month: 3, day: 8, hour: 2, minute: 30 };
zdt = Temporal.ZonedDateTime.from(props, { disambiguation: 'compatible' });
// => 2020-03-08T03:30:00-07:00[America/Los_Angeles]
zdt = Temporal.ZonedDateTime.from(props);
// => 2020-03-08T03:30:00-07:00[America/Los_Angeles]
// ('compatible' is the default)
earlier = Temporal.ZonedDateTime.from(props, { disambiguation: 'earlier' });
// => 2020-03-08T01:30:00-08:00[America/Los_Angeles]
// (1:30 clock time; still in Standard Time)
later = Temporal.ZonedDateTime.from(props, { disambiguation: 'later' });
// => 2020-03-08T03:30:00-07:00[America/Los_Angeles]
// ('later' is the same as 'compatible' for forward transitions)
later.toPlainDateTime().since(earlier.toPlainDateTime());
// => PT2H
// (2 hour difference in clock time...
later.since(earlier);
// => PT1H
// ... but 1 hour later in real-world time)
```
Likewise, at the end of DST, clocks move backward an hour.
In this case, the illusion is that an hour repeats itself.
In 'earlier' mode, the exact time will be the earlier instance of the duplicated wall-clock time.
In 'later' mode, the exact time will be the later instance of the duplicated time.
In 'compatible' mode, the same time is returned as 'earlier' mode, which matches the behavior of existing JavaScript code that uses legacy Date.
```
// Different disambiguation modes for times in the repeated clock hour after DST ends in the Fall
// Offset of -07:00 is Daylight Saving Time, while offset of -08:00 indicates Standard Time.
props = { timeZone: 'America/Los_Angeles', year: 2020, month: 11, day: 1, hour: 1, minute: 30 };
zdt = Temporal.ZonedDateTime.from(props, { disambiguation: 'compatible' });
// => 2020-11-01T01:30:00-07:00[America/Los_Angeles]
zdt = Temporal.ZonedDateTime.from(props);
// => 2020-11-01T01:30:00-07:00[America/Los_Angeles]
// ('compatible' is the default)
earlier = Temporal.ZonedDateTime.from(props, { disambiguation: 'earlier' });
// => 2020-11-01T01:30:00-07:00[America/Los_Angeles]
// ('earlier' is the same as 'compatible' for backward transitions)
later = Temporal.ZonedDateTime.from(props, { disambiguation: 'later' });
// => 2020-11-01T01:30:00-08:00[America/Los_Angeles]
// Same clock time, but one hour later.
// -08:00 offset indicates Standard Time.
later.toPlainDateTime().since(earlier.toPlainDateTime());
// => PT0S
// (same clock time...
later.since(earlier);
// => PT1H
// ... but 1 hour later in real-world time)
```
Ambiguity Caused by Permanent Changes to a Time Zone Definition
Time zone definitions can change.
Almost always, these changes are forward-looking, so they don't affect historical data.
But computers sometimes store data about the future!
For example, a calendar app might record that a user wants to be reminded of a friend's birthday next year.
When date/time data for future times is stored with both the offset and the time zone, and if the time zone definition changes, then it's possible that the new time zone definition may conflict with previously-stored data.
In this case, the offset option to Temporal.ZonedDateTime.from is used to resolve the conflict:
'use': Evaluate date/time values using the time zone offset if it's provided in the input.
This will keep the exact time unchanged even if local time will be different than what was originally stored.
'ignore': Never use the time zone offset provided in the input. Instead, calculate the offset from the time zone.
This will keep local time unchanged but may result in a different exact time than was originally stored.
'prefer': Evaluate date/time values using the offset if it's valid for this time zone.
If the offset is invalid, then calculate the offset from the time zone.
This option is rarely used when calling from().
See the documentation of with() for more details about why this option is used.
'reject': Throw a RangeError if the offset is not valid for the provided date and time in the provided time zone.
The default is 'reject' for Temporal.ZonedDateTime.from because there is no obvious default solution.
Instead, the developer needs to decide how to fix the now-invalid data.
For Temporal.ZonedDateTime.with the default is 'prefer'.
This default is helpful to prevent DST disambiguation from causing unexpected one-hour changes in exact time after making small changes to clock time fields.
For example, if a Temporal.ZonedDateTime is set to the "second" 1:30AM on a day where the 1-2AM clock hour is repeated after a backwards DST transition, then calling .with({minute: 45}) will result in an ambiguity which is resolved using the default offset: 'prefer' option.
Because the existing offset is valid for the new time, it will be retained so the result will be the "second" 1:45AM.
However, if the existing offset is not valid for the new result (e.g. .with({hour: 0})), then the default behavior will change the offset to match the new local time in that time zone.
Note that offset vs. timezone conflicts only matter for Temporal.ZonedDateTime because no other Temporal type accepts both an IANA time zone and a time zone offset as an input to any method.
For example, Temporal.Instant.from will never run into conflicts because the Temporal.Instant type ignores the time zone in the input and only uses the offset.
Examples: offset option
The primary reason to use the offset option is for parsing values which were saved before a change to that time zone's time zone definition.
For example, Brazil stopped observing Daylight Saving Time in 2019, with the final transition out of DST on February 16, 2019.
The change to stop DST permanently was announced in April 2019.
Now imagine that an app running in 2018 (before these changes were announced) had saved a far-future time in a string format that contained both offset and IANA time zone.
Such a format is used by Temporal.ZonedDateTime.prototype.toString as well as by other platforms and libraries that use the same format, like java.time.ZonedDateTime.
Let's assume the stored future time was noon on January 15, 2020 in São Paulo:
```
zdt = Temporal.ZonedDateTime.from({ year: 2020, month: 1, day: 15, hour: 12, timeZone: 'America/Sao_Paulo' });
zdt.toString();
// => '2020-01-15T12:00:00-02:00[America/Sao_Paulo]'
// Assume this string is saved in an external database.
// Note that the offset is -02:00, which is Daylight Saving Time.
// Also note that if you run the code above today, it will return an offset
// of -03:00 because that reflects the current time zone definition after
// DST was abolished. But this code running in 2018 would have returned -02:00,
// which corresponds to the then-current Daylight Saving Time in Brazil.
```
This string was valid at the time it was created and saved in 2018.
But after the time zone rules were changed in April 2019, 2020-01-15T12:00-02:00[America/Sao_Paulo] is no longer valid because the correct offset for this time is now -03:00.
When parsing this string using current time zone rules, Temporal needs to know how to interpret it.
The offset option helps deal with this case.
```
savedUsingOldTzDefinition = '2020-01-01T12:00-02:00[America/Sao_Paulo]'; // string that was saved earlier

/* WRONG */ zdt = Temporal.ZonedDateTime.from(savedUsingOldTzDefinition);
// => RangeError: Offset is invalid for '2020-01-01T12:00' in 'America/Sao_Paulo'. Provided: -02:00, expected: -03:00.
// Default is to throw when the offset and time zone conflict.

/* WRONG */ zdt = Temporal.ZonedDateTime.from(savedUsingOldTzDefinition, { offset: 'reject' });
// => RangeError: Offset is invalid for '2020-01-01T12:00' in 'America/Sao_Paulo'. Provided: -02:00, expected: -03:00.

zdt = Temporal.ZonedDateTime.from(savedUsingOldTzDefinition, { offset: 'use' });
// => 2020-01-01T11:00:00-03:00[America/Sao_Paulo]
// Evaluate the date/time string using the old offset, which keeps the exact time constant while local time changes to 11:00.

zdt = Temporal.ZonedDateTime.from(savedUsingOldTzDefinition, { offset: 'ignore' });
// => 2020-01-01T12:00:00-03:00[America/Sao_Paulo]
// Use current time zone rules to calculate the offset, ignoring any saved offset.

zdt = Temporal.ZonedDateTime.from(savedUsingOldTzDefinition, { offset: 'prefer' });
// => 2020-01-01T12:00:00-03:00[America/Sao_Paulo]
// Saved offset is invalid under current time zone rules, so use the time zone to calculate the offset.
```
Inductance - Wikipedia
https://en.wikipedia.org/wiki/Inductance (published 19 Sep 2025)
From Wikipedia, the free encyclopedia
Property of electrical conductors
| Inductance | |
| --- | --- |
| Common symbols | L |
| SI unit | henry (H) |
| In SI base units | kg⋅m²⋅s⁻²⋅A⁻² |
| Derivations from other quantities | L = V / (dI/dt); L = Φ / I |
| Dimension | M¹·L²·T⁻²·I⁻² |
Inductance is the tendency of an electrical conductor to oppose a change in the electric current flowing through it. The electric current produces a magnetic field around the conductor. The magnetic field strength depends on the magnitude of the electric current, and therefore follows any changes in the magnitude of the current. From Faraday's law of induction, any change in magnetic field through a circuit induces an electromotive force (EMF) (voltage) in the conductors, a process known as electromagnetic induction. This induced voltage created by the changing current has the effect of opposing the change in current. This is stated by Lenz's law, and the voltage is called back EMF.
Inductance is defined as the ratio of the induced voltage to the rate of change of current causing it. It is a proportionality constant that depends on the geometry of circuit conductors (e.g., cross-section area and length) and the magnetic permeability of the conductor and nearby materials. An electronic component designed to add inductance to a circuit is called an inductor. It typically consists of a coil or helix of wire.
The term inductance was coined by Oliver Heaviside in May 1884, as a convenient way to refer to "coefficient of self-induction". It is customary to use the symbol $L$ for inductance, in honour of the physicist Heinrich Lenz. In the SI system, the unit of inductance is the henry (H), which is the amount of inductance that causes a voltage of one volt when the current is changing at a rate of one ampere per second. The unit is named for Joseph Henry, who discovered inductance independently of Faraday.
History
Main article: History of electromagnetic theory
The history of electromagnetic induction, a facet of electromagnetism, began with observations of the ancients: electric charge or static electricity (rubbing silk on amber), electric current (lightning), and magnetic attraction (lodestone). Understanding the unity of these forces of nature, and the scientific theory of electromagnetism was initiated and achieved during the 19th century.
Electromagnetic induction was first described by Michael Faraday in 1831. In Faraday's experiment, he wrapped two wires around opposite sides of an iron ring. He expected that, when current started to flow in one wire, a sort of wave would travel through the ring and cause some electrical effect on the opposite side. Using a galvanometer, he observed a transient current flow in the second coil of wire each time that a battery was connected or disconnected from the first coil. This current was induced by the change in magnetic flux that occurred when the battery was connected and disconnected. Faraday found several other manifestations of electromagnetic induction. For example, he saw transient currents when he quickly slid a bar magnet in and out of a coil of wires, and he generated a steady (DC) current by rotating a copper disk near the bar magnet with a sliding electrical lead ("Faraday's disk").
Source of inductance
A current $i$ flowing through a conductor generates a magnetic field around the conductor, which is described by Ampère's circuital law. The total magnetic flux $\Phi$ through a circuit is equal to the product of the perpendicular component of the magnetic flux density and the area of the surface spanning the current path. If the current varies, the magnetic flux $\Phi$ through the circuit changes. By Faraday's law of induction, any change in flux through a circuit induces an electromotive force (EMF, $\mathcal{E}$) in the circuit, proportional to the rate of change of flux:

$$\mathcal{E}(t) = -\frac{\mathrm{d}\Phi(t)}{\mathrm{d}t}$$
The negative sign in the equation indicates that the induced voltage is in a direction which opposes the change in current that created it; this is called Lenz's law. The potential is therefore called a back EMF. If the current is increasing, the voltage is positive at the end of the conductor through which the current enters and negative at the end through which it leaves, tending to reduce the current. If the current is decreasing, the voltage is positive at the end through which the current leaves the conductor, tending to maintain the current. Self-inductance, usually just called inductance, $L$, is the ratio between the induced voltage and the rate of change of the current:

$$v(t) = L\,\frac{\mathrm{d}i}{\mathrm{d}t} \qquad (1)$$
Thus, inductance is a property of a conductor or circuit, due to its magnetic field, which tends to oppose changes in current through the circuit. The unit of inductance in the SI system is the henry (H), named after Joseph Henry, which is the amount of inductance that generates a voltage of one volt when the current is changing at a rate of one ampere per second.
All conductors have some inductance, which may have either desirable or detrimental effects in practical electrical devices. The inductance of a circuit depends on the geometry of the current path, and on the magnetic permeability of nearby materials; ferromagnetic materials with a higher permeability, like iron, near a conductor tend to increase the magnetic field and inductance. Any alteration to a circuit which increases the flux (total magnetic field) through the circuit produced by a given current increases the inductance, because inductance is also equal to the ratio of magnetic flux to current:

$$L = \frac{\Phi(i)}{i}$$
An inductor is an electrical component consisting of a conductor shaped to increase the magnetic flux, to add inductance to a circuit. Typically it consists of a wire wound into a coil or helix. A coiled wire has a higher inductance than a straight wire of the same length because the magnetic field lines pass through the circuit multiple times; it has multiple flux linkages. The inductance is proportional to the square of the number of turns in the coil, assuming full flux linkage.
The inductance of a coil can be increased by placing a magnetic core of ferromagnetic material in the hole in the center. The magnetic field of the coil magnetizes the material of the core, aligning its magnetic domains, and the magnetic field of the core adds to that of the coil, increasing the flux through the coil. This is called a ferromagnetic core inductor. A magnetic core can increase the inductance of a coil by thousands of times.
If multiple electric circuits are located close to each other, the magnetic field of one can pass through the other; in this case the circuits are said to be inductively coupled. Due to Faraday's law of induction, a change in current in one circuit can cause a change in magnetic flux in another circuit and thus induce a voltage in that circuit. The concept of inductance can be generalized in this case by defining the mutual inductance $M_{k,\ell}$ of circuit $k$ and circuit $\ell$ as the ratio of the voltage induced in circuit $\ell$ to the rate of change of current in circuit $k$. This is the principle behind a transformer. The property describing the effect of one conductor on itself is more precisely called self-inductance, and the property describing the effect of one conductor with changing current on nearby conductors is called mutual inductance.
Self-inductance and magnetic energy
If the current through a conductor with inductance is increasing, a voltage $v(t)$ is induced across the conductor with a polarity that opposes the current, in addition to any voltage drop caused by the conductor's resistance. The charges flowing through the circuit lose potential energy. The energy from the external circuit required to overcome this "potential hill" is stored in the increased magnetic field around the conductor; an inductor therefore stores energy in its magnetic field. At any given time $t$, the power $p(t)$ flowing into the magnetic field is equal to the rate of change of the stored energy $U$, and to the product of the current $i(t)$ and the voltage $v(t)$ across the conductor:

$$p(t) = \frac{\mathrm{d}U}{\mathrm{d}t} = v(t)\,i(t)$$
From (1) above
d U d t=L(i)i d i d t d U=L(i)i d i{\displaystyle {\begin{aligned}{\frac {{\text{d}}U}{{\text{d}}t}}&=L(i)\,i\,{\frac {{\text{d}}i}{{\text{d}}t}}\[3pt]{\text{d}}U&=L(i)\,i\,{\text{d}}i\end{aligned}}}
When there is no current, there is no magnetic field and the stored energy is zero. Neglecting resistive losses, the energyU{\displaystyle U} (measured in joules, in SI) stored by an inductance with a current I{\displaystyle I} through it is equal to the amount of work required to establish the current through the inductance from zero, and therefore the magnetic field. This is given by:
U=∫0 I L(i)i d i{\displaystyle U=\int _{0}^{I}L(i)\,i\,{\text{d}}i\,}
If the inductance L(i){\displaystyle L(i)} is constant over the current range, the stored energy is
U=L∫0 I i d i=1 2 L I 2{\displaystyle {\begin{aligned}U&=L\int _{0}^{I}\,i\,{\text{d}}i\[3pt]&={\tfrac {1}{2}}L\,I^{2}\end{aligned}}}
Inductance is therefore also proportional to the energy stored in the magnetic field for a given current. This energy is stored as long as the current remains constant. If the current decreases, the magnetic field decreases, inducing a voltage in the conductor in the opposite direction, negative at the end through which current enters and positive at the end through which it leaves. This returns stored magnetic energy to the external circuit.
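The quadratic dependence of stored energy on current can be checked numerically. The component values below are illustrative assumptions, not from the text:

```python
# Energy stored in a constant (linear) inductance: U = (1/2) * L * I**2.
# Component values are illustrative: a 10 mH inductor carrying 2 A.
L = 10e-3   # inductance, henries
I = 2.0     # current, amperes

U = 0.5 * L * I**2          # closed form, joules

# Same result from the general integral U = ∫ L(i) i di over 0..I,
# evaluated with a midpoint Riemann sum (exact here since L is constant).
N = 100_000
di = I / N
U_int = sum(L * (n + 0.5) * di * di for n in range(N))

print(U, U_int)  # both ≈ 0.02 J
```

Doubling the current quadruples the stored energy, which is why inductor current ratings matter so much in power electronics.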
If ferromagnetic materials are located near the conductor, such as in an inductor with a magnetic core, the constant inductance equation above is only valid for linear regions of the magnetic flux, at currents below the level at which the ferromagnetic material saturates, where the inductance is approximately constant. If the magnetic field in the inductor approaches the level at which the core saturates, the inductance begins to change with current, and the integral equation must be used.
Inductive reactance
The voltage (v, blue) and current (i, red) waveforms in an ideal inductor to which an alternating current has been applied. The current lags the voltage by 90°.

When a sinusoidal alternating current (AC) passes through a linear inductance, the induced back-EMF is also sinusoidal. If the current through the inductance is i(t) = I_peak sin(ωt), then from (1) above the voltage across it is

v(t) = L\,\frac{di}{dt} = L\,\frac{d}{dt}\bigl[I_\text{peak}\sin(\omega t)\bigr] = \omega L\,I_\text{peak}\cos(\omega t) = \omega L\,I_\text{peak}\sin\!\left(\omega t + \tfrac{\pi}{2}\right)

where I_peak is the amplitude (peak value) of the sinusoidal current in amperes, ω = 2πf is the angular frequency of the alternating current, with f being its frequency in hertz, and L is the inductance.

Thus the amplitude (peak value) of the voltage across the inductance is

V_p = \omega L\,I_p = 2\pi f\,L\,I_p

Inductive reactance is the opposition of an inductor to an alternating current. It is defined analogously to electrical resistance in a resistor, as the ratio of the amplitude (peak value) of the alternating voltage to that of the current in the component:

X_L = \frac{V_p}{I_p} = 2\pi f\,L

Reactance has units of ohms. The inductive reactance of an inductor increases proportionally with frequency f, so an inductor conducts less current for a given applied AC voltage as the frequency increases. Because the induced voltage is greatest when the current is increasing, the voltage and current waveforms are out of phase; the voltage peaks occur earlier in each cycle than the current peaks. The phase difference between the current and the induced voltage is φ = π/2 radians or 90 degrees, showing that in an ideal inductor the current lags the voltage by 90°.
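The linear growth of reactance with frequency can be sketched in a few lines; the inductor value and test frequencies below are illustrative assumptions:

```python
import math

# Inductive reactance X_L = 2*pi*f*L rises linearly with frequency.
# Illustrative part: a 100 µH inductor, evaluated at mains and RF frequency.
L = 100e-6  # henries

def reactance(f, L):
    """Reactance of an ideal inductor, in ohms."""
    return 2 * math.pi * f * L

X_mains = reactance(50, L)   # ~0.031 ohm: nearly a short at 50 Hz
X_rf = reactance(1e6, L)     # ~628 ohm: substantial opposition at 1 MHz
print(X_mains, X_rf)
```

The 20 000:1 frequency ratio produces exactly a 20 000:1 reactance ratio, which is why the same component can pass mains current freely while blocking RF.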
Calculating self-inductance
In the most general case, inductance can be calculated from Maxwell's equations. Many important cases can be solved using simplifications. Where high frequency currents are considered, with skin effect, the surface current densities and magnetic field may be obtained by solving the Laplace equation. Where the conductors are thin wires, self-inductance still depends on the wire radius and the distribution of the current in the wire. This current distribution is approximately constant (on the surface or in the volume of the wire) for a wire radius much smaller than other length scales.
Straight single wire
As a practical matter, longer wires have more inductance, and thicker wires have less, analogous to their electrical resistance (although the relationships are not linear, and are different in kind from the relationships that length and diameter bear to resistance).
Separating the wire from the other parts of the circuit introduces some unavoidable error in any formulas' results. These inductances are often referred to as “partial inductances”, in part to encourage consideration of the other contributions to whole-circuit inductance which are omitted.
Practical formulas
For derivation of the formulas below, see Rosa (1908). The total low frequency inductance (interior plus exterior) of a straight wire is:
L_\text{DC} = 200\,\tfrac{\text{nH}}{\text{m}}\;\ell\left[\ln\!\left(\frac{2\ell}{r}\right) - 0.75\right]

where

L_DC is the "low-frequency" or DC inductance in nanohenries (nH or 10⁻⁹ H),
ℓ is the length of the wire in meters,
r is the radius of the wire in meters (hence a very small decimal number),
the constant 200 nH/m is the permeability of free space, commonly called μ0, divided by 2π; in the absence of magnetically reactive insulation the value 200 is exact when using the classical definition μ0 = 4π×10⁻⁷ H/m, and correct to 7 decimal places when using the 2019-redefined SI value μ0 = 1.25663706212(19)×10⁻⁶ H/m.
The constant 0.75 is just one parameter value among several; different frequency ranges, different shapes, or extremely long wire lengths require a slightly different constant (see below). This result is based on the assumption that the radius r is much less than the length ℓ, which is the common case for wires and rods. Disks or thick cylinders have slightly different formulas.

For sufficiently high frequencies skin effects cause the interior currents to vanish, leaving only the currents on the surface of the conductor; the inductance for alternating current, L_AC, is then given by a very similar formula:

L_\text{AC} = 200\,\tfrac{\text{nH}}{\text{m}}\;\ell\left[\ln\!\left(\frac{2\ell}{r}\right) - 1\right]

where the variables ℓ and r are the same as above; note the constant term has changed from 0.75 to 1.
For example, a single conductor of a lamp cord 10 m long, made of 18AWG (1.024 mm) wire, would have a low frequency inductance of about 19.67 μH, at k=0.75, if stretched out straight.
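The lamp-cord example can be reproduced directly from the DC formula (the small difference from the quoted 19.67 μH depends on the exact radius used):

```python
import math

# Low-frequency inductance of a straight round wire:
#   L_DC = 200 nH/m * l * [ln(2*l/r) - 0.75]
# Applied to the text's example: 10 m of 18 AWG wire (1.024 mm diameter).
def straight_wire_L_dc(length_m, radius_m):
    """DC self-inductance in henries; assumes length >> radius."""
    return 200e-9 * length_m * (math.log(2 * length_m / radius_m) - 0.75)

L = straight_wire_L_dc(10.0, 1.024e-3 / 2)
print(L * 1e6, "uH")  # ≈ 19.6–19.7 µH, consistent with the text
```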
Wire loop
Formally, the self-inductance of a wire loop would be given by the above equation with m = n. However, 1/|x − x′| now becomes infinite as the two points approach each other, leading to a logarithmically divergent integral.[a] This necessitates taking the finite wire radius a and the distribution of the current in the wire into account. There remains the contribution from the integral over all point pairs and a correction term:

L = \frac{\mu_0}{4\pi}\left[\,\ell\,Y + \oint_C \oint_{C'} \frac{d\mathbf{x}\cdot d\mathbf{x}'}{|\mathbf{x}-\mathbf{x}'|}\,\right] + \mathcal{O}_\text{bend} \qquad \text{for } |\mathbf{s}-\mathbf{s}'| > \tfrac{1}{2}a

where

s and s′ are distances along the curves C and C′ respectively,
a is the radius of the wire,
ℓ is the length of the wire,
Y is a constant that depends on the distribution of the current in the wire: Y = 0 when the current flows on the surface of the wire (total skin effect), Y = 1/2 when the current is distributed evenly over the cross-section of the wire,
O_bend is an error term whose size depends on the curvature of the loop: O_bend = O(μ0 a) when the loop has sharp corners, and O_bend = O(μ0 a²/ℓ) when it is a smooth curve. Both are small when the wire is long compared to its radius.
Solenoid
A solenoid is a long, thin coil; i.e., a coil whose length is much greater than its diameter. Under these conditions, and without any magnetic material used, the magnetic flux density B within the coil is practically constant and is given by

B = \frac{\mu_0\,N\,i}{\ell}

where μ0 is the magnetic constant, N the number of turns, i the current and ℓ the length of the coil. Ignoring end effects, the total magnetic flux through the coil is obtained by multiplying the flux density B by the cross-section area A:

\Phi = \frac{\mu_0\,N\,i\,A}{\ell}

When this is combined with the definition of inductance L = NΦ/i, it follows that the inductance of a solenoid is given by:

L = \frac{\mu_0\,N^2\,A}{\ell}
Therefore, for air-core coils, inductance is a function of coil geometry and number of turns, and is independent of current.
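A quick numeric sketch of the solenoid formula; the coil dimensions are illustrative assumptions, chosen so the length is several times the diameter as the formula requires:

```python
import math

# Inductance of a long air-core solenoid: L = mu0 * N^2 * A / l.
# Illustrative coil: 100 turns, 2 cm diameter, 10 cm long.
MU0 = 4 * math.pi * 1e-7    # permeability of free space, H/m

N = 100                     # number of turns
d = 0.02                    # coil diameter, m
length = 0.10               # coil length, m (should be >> diameter)
A = math.pi * (d / 2) ** 2  # cross-section area, m^2

L = MU0 * N**2 * A / length
print(L)  # ≈ 3.95e-05 H, i.e. about 39.5 µH
```

Note the N² dependence: doubling the turns quadruples the inductance, consistent with the remark below about turns ratios.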
Coaxial cable
Let the inner conductor have radius r_i and permeability μ_i, let the dielectric between the inner and outer conductor have permeability μ_d, and let the outer conductor have inner radius r_o1, outer radius r_o2, and permeability μ_0. However, for a typical coaxial line application, we are interested in passing (non-DC) signals at frequencies for which the resistive skin effect cannot be neglected. In most cases, the inner and outer conductor terms are negligible, in which case one may approximate

L' = \frac{dL}{d\ell} \approx \frac{\mu_d}{2\pi}\,\ln\frac{r_{o1}}{r_i}
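The approximation is easy to evaluate for a real cable geometry. The dimensions below are rough RG-58 values assumed for illustration; consult a datasheet for exact figures:

```python
import math

# High-frequency inductance per unit length of coaxial cable:
#   L' ≈ (mu_d / (2*pi)) * ln(r_o1 / r_i)
# Dimensions are roughly those of RG-58 (an assumption for illustration).
MU0 = 4 * math.pi * 1e-7   # H/m; mu_d ≈ mu0 for common dielectrics

r_i = 0.45e-3    # inner conductor radius, m
r_o1 = 1.47e-3   # shield inner radius, m

L_per_m = MU0 / (2 * math.pi) * math.log(r_o1 / r_i)
print(L_per_m * 1e9, "nH/m")  # ≈ 237 nH/m, near typical quoted values
```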
Multilayer coils
Most practical air-core inductors are multilayer cylindrical coils with square cross-sections, which minimize the average distance between turns (circular cross-sections would be better but are harder to form).
Magnetic cores
Many inductors include a magnetic core at the center of or partly surrounding the winding. Over a large enough range these exhibit a nonlinear permeability with effects such as magnetic saturation. Saturation makes the resulting inductance a function of the applied current.
The secant or large-signal inductance is used in flux calculations. It is defined as:
L_s(i) \;\overset{\text{def}}{=}\; \frac{N\,\Phi}{i} = \frac{\Lambda}{i}

The differential or small-signal inductance, on the other hand, is used in calculating voltage. It is defined as:

L_d(i) \;\overset{\text{def}}{=}\; \frac{d(N\Phi)}{di} = \frac{d\Lambda}{di}

The circuit voltage for a nonlinear inductor is obtained via the differential inductance, as shown by Faraday's law and the chain rule of calculus:

v(t) = \frac{d\Lambda}{dt} = \frac{d\Lambda}{di}\,\frac{di}{dt} = L_d(i)\,\frac{di}{dt}
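The distinction between the two inductances can be sketched with an assumed toy saturation curve; the tanh flux-linkage model and its parameters below are illustrative assumptions, not from the text:

```python
import math

# Secant vs. differential inductance for a saturating core, using an
# ASSUMED toy flux-linkage curve  Λ(i) = Λ_sat * tanh(i / i_sat).
# Any saturating monotone curve would illustrate the same distinction.
LAM_SAT = 0.05   # saturation flux linkage, weber-turns (illustrative)
I_SAT = 2.0      # characteristic saturation current, A (illustrative)

def flux_linkage(i):
    return LAM_SAT * math.tanh(i / I_SAT)

def L_secant(i):
    """Large-signal inductance Λ/i, used in flux calculations."""
    return flux_linkage(i) / i

def L_differential(i, di=1e-6):
    """Small-signal inductance dΛ/di, used in voltage calculations."""
    return (flux_linkage(i + di) - flux_linkage(i - di)) / (2 * di)

# Well below saturation the two nearly agree; deep in saturation the
# differential inductance collapses while the secant value decays slowly.
print(L_secant(0.1), L_differential(0.1))    # both ≈ Λ_sat/i_sat = 25 mH
print(L_secant(10.0), L_differential(10.0))  # secant ≈ 5 mH, differential ≈ 0
```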
Similar definitions may be derived for nonlinear mutual inductance.
Mutual inductance
Further information: Inductive coupling
Definition of Mutual induction or Coefficient of mutual induction
The mutual inductance, or coefficient of mutual induction, of two magnetically linked coils is equal to the flux linkage of one coil per unit current in the neighboring coil. Equivalently, it is numerically equal to the EMF induced in one coil (the secondary) per unit rate of change of current in the neighboring coil (the primary).
Mutual inductance of two parallel straight wires
There are two cases to consider:
Current travels in the same direction in each wire, and
current travels in opposing directions in the wires.
Currents in the wires need not be equal, though they often are, as in the case of a complete circuit, where one wire is the source and the other the return.
Mutual inductance of two wire loops
This is the generalized case of the paradigmatic two-loop cylindrical coil carrying a uniform low frequency current; the loops are independent closed circuits that can have different lengths, any orientation in space, and carry different currents. Nonetheless, the error terms, which are not included in the integral are only small if the geometries of the loops are mostly smooth and convex: They must not have too many kinks, sharp corners, coils, crossovers, parallel segments, concave cavities, or other topologically "close" deformations. A necessary predicate for the reduction of the 3-dimensional manifold integration formula to a double curve integral is that the current paths be filamentary circuits, i.e. thin wires where the radius of the wire is negligible compared to its length.
The mutual inductance by a filamentary circuit m on a filamentary circuit n is given by the double integral Neumann formula

L_{m,n} = \frac{\mu_0}{4\pi} \oint_{C_m}\oint_{C_n} \frac{d\mathbf{x}_m \cdot d\mathbf{x}_n}{|\mathbf{x}_m - \mathbf{x}_n|}

where

C_m and C_n are the curves followed by the wires,
μ0 is the permeability of free space (4π×10⁻⁷ H/m),
dx_m is a small increment of the wire in circuit C_m,
x_m is the position of dx_m in space,
dx_n is a small increment of the wire in circuit C_n,
x_n is the position of dx_n in space.
Derivation
The mutual inductance M_ij is defined as

M_{ij} \;\overset{\text{def}}{=}\; \frac{\Phi_{ij}}{I_j}

where

I_j is the current through the jth wire; this current creates the magnetic flux Φ_ij through the ith surface,
Φ_ij is the magnetic flux through the ith surface due to the electrical circuit outlined by C_j:

\Phi_{ij} = \int_{S_i} \mathbf{B}_j \cdot d\mathbf{a} = \int_{S_i} (\nabla\times\mathbf{A}_j)\cdot d\mathbf{a} = \oint_{C_i} \mathbf{A}_j \cdot d\mathbf{s}_i = \oint_{C_i} \left(\frac{\mu_0 I_j}{4\pi} \oint_{C_j} \frac{d\mathbf{s}_j}{|\mathbf{s}_i - \mathbf{s}_j|}\right) \cdot d\mathbf{s}_i

where

C_i is the curve enclosing surface S_i, and S_i is any arbitrary orientable surface with edge C_i,
B_j is the magnetic field vector due to the jth current (of circuit C_j),
A_j is the vector potential due to the jth current.

Stokes' theorem has been used for the third equality step. For the last equality step, we used the retarded potential expression for A_j and ignored the effect of the retarded time (assuming the geometry of the circuits is small enough compared to the wavelength of the current they carry). It is actually an approximation step, and is valid only for local circuits made of thin wires.
Mutual inductance is defined as the ratio of the EMF induced in one loop or coil to the rate of change of current in another loop or coil. Mutual inductance is given the symbol M.
Derivation of mutual inductance
The inductance equations above are a consequence of Maxwell's equations. For the important case of electrical circuits consisting of thin wires, the derivation is straightforward.
In a system of K wire loops, each with one or several wire turns, the flux linkage of loop m, λ_m, is given by

\lambda_m = N_m\,\Phi_m = \sum_{n=1}^{K} L_{m,n}\;i_n\,.

Here N_m denotes the number of turns in loop m; Φ_m is the magnetic flux through loop m; and L_{m,n} are some constants described below. This equation follows from Ampère's law: magnetic fields and fluxes are linear functions of the currents. By Faraday's law of induction, we have

v_m = \frac{d\lambda_m}{dt} = N_m\,\frac{d\Phi_m}{dt} = \sum_{n=1}^{K} L_{m,n}\,\frac{di_n}{dt}\,,

where v_m denotes the voltage induced in circuit m. This agrees with the definition of inductance above if the coefficients L_{m,n} are identified with the coefficients of inductance. Because the total currents N_n i_n contribute to Φ_m, it also follows that L_{m,n} is proportional to the product of turns N_m N_n.
Mutual inductance and magnetic field energy
Multiplying the equation for v_m above by i_m dt and summing over m gives the energy transferred to the system in the time interval dt:

\sum_{m=1}^{K} i_m v_m\,dt = \sum_{m,n=1}^{K} i_m L_{m,n}\,di_n \;\overset{!}{=}\; \sum_{n=1}^{K} \frac{\partial W(i)}{\partial i_n}\,di_n\,.

This must agree with the change of the magnetic field energy, W, caused by the currents. The integrability condition

\frac{\partial^2 W}{\partial i_m\,\partial i_n} = \frac{\partial^2 W}{\partial i_n\,\partial i_m}

requires L_{m,n} = L_{n,m}. The inductance matrix L_{m,n} is thus symmetric. The integral of the energy transfer is the magnetic field energy as a function of the currents:

W(i) = \frac{1}{2} \sum_{m,n=1}^{K} i_m L_{m,n} i_n\,.
This equation also is a direct consequence of the linearity of Maxwell's equations. It is helpful to associate changing electric currents with a build-up or decrease of magnetic field energy. The corresponding energy transfer requires or generates a voltage. A mechanical analogy in the K=1 case with magnetic field energy (1/2)Li 2 is a body with mass M, velocity u and kinetic energy (1/2)Mu 2. The rate of change of velocity (current) multiplied with mass (inductance) requires or generates a force (an electrical voltage).
Circuit diagram of two mutually coupled inductors. The two vertical lines between the windings indicate that the transformer has a ferromagnetic core. "n:m" shows the ratio of the number of windings of the left inductor to the number of windings of the right inductor. This picture also shows the dot convention.
Mutual inductance occurs when the change in current in one inductor induces a voltage in another nearby inductor. It is important as the mechanism by which transformers work, but it can also cause unwanted coupling between conductors in a circuit.
The mutual inductance M_ij is also a measure of the coupling between two inductors. The mutual inductance by circuit i on circuit j is given by the double integral Neumann formula above.

The mutual inductance also has the relationship

M_{21} = N_1\,N_2\,P_{21}

where

M_21 is the mutual inductance, and the subscript specifies the relationship of the voltage induced in coil 2 due to the current in coil 1,
N_1 is the number of turns in coil 1,
N_2 is the number of turns in coil 2,
P_21 is the permeance of the space occupied by the flux.
Once the mutual inductance M is determined, it can be used to predict the behavior of a circuit:

v_1 = L_1\,\frac{di_1}{dt} - M\,\frac{di_2}{dt}

where

v_1 is the voltage across the inductor of interest;
L_1 is the inductance of the inductor of interest;
di_1/dt is the derivative, with respect to time, of the current through the inductor of interest, labeled 1;
di_2/dt is the derivative, with respect to time, of the current through the inductor, labeled 2, that is coupled to the first inductor; and
M is the mutual inductance.

The minus sign arises because of the sense in which the current i_2 has been defined in the diagram. With both currents defined going into the dots, the sign of M will be positive (the equation would then read with a plus sign).
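A minimal numeric sketch of this coupled-circuit equation, with illustrative component values and current slopes:

```python
# Voltage across coil 1 of a coupled pair: v1 = L1*di1/dt - M*di2/dt
# (minus sign per the current sense in the diagram). Values are illustrative.
L1 = 1e-3         # self-inductance of coil 1, H
M = 0.5e-3        # mutual inductance, H
di1_dt = 1000.0   # rate of change of current in coil 1, A/s
di2_dt = 400.0    # rate of change of current in coil 2, A/s

v1 = L1 * di1_dt - M * di2_dt
print(v1)  # 0.8 V: the self term contributes 1.0 V, the mutual term -0.2 V
```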
Coupling coefficient
The coupling coefficient is the ratio of the actual open-circuit voltage ratio to the ratio that would be obtained if all the flux coupled from one magnetic circuit to the other. The coupling coefficient is related to the mutual inductance and the self-inductances as follows. From the two simultaneous equations expressed in the two-port matrix, the open-circuit voltage ratio is found to be

\left.\frac{V_2}{V_1}\right|_\text{open circuit} = \frac{M}{L_1}

where

M^2 = M_1 M_2

while the ratio obtained if all the flux were coupled is the ratio of the turns, hence the ratio of the square roots of the inductances:

\left.\frac{V_2}{V_1}\right|_\text{max coupling} = \sqrt{\frac{L_2}{L_1}}

Thus,

M = k\sqrt{L_1 L_2}

where

k is the coupling coefficient,
L_1 is the inductance of the first coil, and
L_2 is the inductance of the second coil.

The coupling coefficient is a convenient way to specify the relationship between inductors of arbitrary inductance in a given orientation. Most authors define the range as 0 ≤ k < 1, but some define it as −1 < k < 1. Allowing negative values of k captures phase inversions of the coil connections and the direction of the windings.
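Solving M = k√(L₁L₂) for k gives a one-line computation; the inductance values below are illustrative assumptions:

```python
import math

# Coupling coefficient from M = k * sqrt(L1 * L2)  =>  k = M / sqrt(L1 * L2).
# Illustrative values for a loosely coupled pair of coils.
L1 = 4e-3   # H
L2 = 1e-3   # H
M = 0.6e-3  # H

k = M / math.sqrt(L1 * L2)
print(k)  # 0.3: well below 1, i.e. loose coupling
```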
Matrix representation
Mutually coupled inductors can be described by any of the two-port network parameter matrix representations. The most direct are the z parameters, which are given by
[\mathbf{z}] = s\begin{bmatrix} L_1 & M \\ M & L_2 \end{bmatrix}.

The y parameters are given by

[\mathbf{y}] = \frac{1}{s}\begin{bmatrix} L_1 & M \\ M & L_2 \end{bmatrix}^{-1}

where s is the complex frequency variable, L_1 and L_2 are the inductances of the primary and secondary coil, respectively, and M is the mutual inductance between the coils.
Multiple coupled inductors
Mutual inductance may be applied to multiple inductors simultaneously. The matrix representation for multiple mutually coupled inductors is given by

[\mathbf{z}] = s\begin{bmatrix}
L_1 & M_{12} & M_{13} & \dots & M_{1N} \\
M_{12} & L_2 & M_{23} & \dots & M_{2N} \\
M_{13} & M_{23} & L_3 & \dots & M_{3N} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
M_{1N} & M_{2N} & M_{3N} & \dots & L_N
\end{bmatrix}
Equivalent circuits
T-circuit
T equivalent circuit of mutually coupled inductors
Mutually coupled inductors can equivalently be represented by a T-circuit of inductors as shown. If the coupling is strong and the inductors are of unequal values then the series inductor on the step-down side may take on a negative value.
This can be analyzed as a two-port network. With the output terminated with some arbitrary impedance Z, the voltage gain A_v is given by

A_v = \frac{sMZ}{s^2 L_1 L_2 - s^2 M^2 + s L_1 Z} = \frac{k}{\,s\left(1-k^2\right)\dfrac{\sqrt{L_1 L_2}}{Z} + \sqrt{\dfrac{L_1}{L_2}}\,}
where k is the coupling constant and s is the complex frequency variable, as above. For tightly coupled inductors where k = 1 this reduces to

A_v = \sqrt{\frac{L_2}{L_1}}

which is independent of the load impedance. If the inductors are wound on the same core and with the same geometry, then this expression is equal to the turns ratio of the two inductors, because inductance is proportional to the square of the number of turns.
The input impedance of the network is given by:
Z_\text{in} = \frac{s^2 L_1 L_2 - s^2 M^2 + s L_1 Z}{s L_2 + Z} = \frac{L_1}{L_2}\,Z\left(\frac{1}{1+\frac{Z}{s L_2}}\right)\left(1 + \frac{1-k^2}{\frac{Z}{s L_2}}\right)

For k = 1 this reduces to

Z_\text{in} = \frac{s L_1 Z}{s L_2 + Z} = \frac{L_1}{L_2}\,Z\left(\frac{1}{1+\frac{Z}{s L_2}}\right)

Thus, the current gain A_i is not independent of load unless the further condition

|sL_2| \gg |Z|

is met, in which case

Z_\text{in} \approx \frac{L_1}{L_2}\,Z

and

A_i \approx \sqrt{\frac{L_1}{L_2}} = \frac{1}{A_v}
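The k = 1 reduction of the voltage gain can be verified numerically at a single frequency; the inductor values, load, and test frequency below are illustrative assumptions:

```python
import math

# Voltage gain of the coupled-inductor two-port terminated in Z:
#   Av = s*M*Z / (s^2*L1*L2 - s^2*M^2 + s*L1*Z)
# For k = 1 this should reduce to sqrt(L2/L1), independent of Z.
L1, L2 = 1e-3, 4e-3          # H (illustrative)
Z = 50.0                     # ohm load (illustrative)
s = 1j * 2 * math.pi * 1e5   # evaluate at 100 kHz

def Av(k):
    M = k * math.sqrt(L1 * L2)
    return (s * M * Z) / (s**2 * L1 * L2 - s**2 * M**2 + s * L1 * Z)

print(abs(Av(1.0)))  # = sqrt(L2/L1) = 2.0; the load drops out
print(abs(Av(0.9)))  # < 2.0: leakage inductance reduces the gain
```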
π-circuit
π equivalent circuit of coupled inductors
Alternatively, two coupled inductors can be modelled using a π equivalent circuit with optional ideal transformers at each port. While the circuit is more complicated than a T-circuit, it can be generalized to circuits consisting of more than two coupled inductors. The equivalent circuit elements L_s and L_p have physical meaning, modelling respectively the magnetic reluctances of coupling paths and the magnetic reluctances of leakage paths. For example, the electric currents flowing through these elements correspond to coupling and leakage magnetic fluxes. Ideal transformers normalize all self-inductances to 1 henry to simplify mathematical formulas.

Equivalent circuit element values can be calculated from the coupling coefficients with

L_{S_{ij}} = \frac{\det(\mathbf{K})}{-\mathbf{C}_{ij}}
\qquad
L_{P_i} = \frac{\det(\mathbf{K})}{\sum_{j=1}^{N} \mathbf{C}_{ij}}

where the coupling coefficient matrix and its cofactors are defined as

\mathbf{K} = \begin{bmatrix} 1 & k_{12} & \cdots & k_{1N} \\ k_{12} & 1 & \cdots & k_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ k_{1N} & k_{2N} & \cdots & 1 \end{bmatrix}
\qquad \text{and} \qquad
\mathbf{C}_{ij} = (-1)^{i+j}\,\mathbf{M}_{ij}.

For two coupled inductors, these formulas simplify to

L_{S_{12}} = \frac{-k_{12}^2 + 1}{k_{12}} \qquad \text{and} \qquad L_{P_1} = L_{P_2} = k_{12} + 1,

and for three coupled inductors (for brevity, shown only for L_S12 and L_P1)

L_{S_{12}} = \frac{2 k_{12} k_{13} k_{23} - k_{12}^2 - k_{13}^2 - k_{23}^2 + 1}{k_{13} k_{23} - k_{12}}
\qquad \text{and} \qquad
L_{P_1} = \frac{2 k_{12} k_{13} k_{23} - k_{12}^2 - k_{13}^2 - k_{23}^2 + 1}{k_{12} k_{23} + k_{13} k_{23} - k_{23}^2 - k_{12} - k_{13} + 1}.
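A quick numeric check of the two-inductor simplification (self-inductances normalized to 1 H by the ideal transformers, as the text describes); the coupling value is an illustrative assumption:

```python
# π-equivalent element values for two coupled inductors:
#   L_S12 = (1 - k12^2) / k12    (series/coupling branch)
#   L_P1 = L_P2 = k12 + 1        (parallel/leakage branches)
def pi_elements(k12):
    L_s12 = (1 - k12**2) / k12
    L_p = k12 + 1
    return L_s12, L_p

L_s12, L_p = pi_elements(0.5)  # illustrative coupling coefficient
print(L_s12, L_p)  # 1.5 and 1.5 for k12 = 0.5
```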
Resonant transformer
Main article: Resonant inductive coupling
When a capacitor is connected across one winding of a transformer, making the winding a tuned circuit (resonant circuit) it is called a single-tuned transformer. When a capacitor is connected across each winding, it is called a double tuned transformer. These resonant transformers can store oscillating electrical energy similar to a resonant circuit and thus function as a bandpass filter, allowing frequencies near their resonant frequency to pass from the primary to secondary winding, but blocking other frequencies. The amount of mutual inductance between the two windings, together with the Q factor of the circuit, determine the shape of the frequency response curve. The advantage of the double tuned transformer is that it can have a wider bandwidth than a simple tuned circuit. The coupling of double-tuned circuits is described as loose-, critical-, or over-coupled depending on the value of the coupling coefficientk{\displaystyle k}. When two tuned circuits are loosely coupled through mutual inductance, the bandwidth is narrow. As the amount of mutual inductance increases, the bandwidth continues to grow. When the mutual inductance is increased beyond the critical coupling, the peak in the frequency response curve splits into two peaks, and as the coupling is increased the two peaks move further apart. This is known as overcoupling.
Strongly coupled self-resonant coils can be used for wireless power transfer between devices at mid-range distances (up to two metres). Strong coupling is required for a high percentage of power transferred, which results in peak splitting of the frequency response.
Ideal transformers
[edit]
When $k = 1$, the inductor is referred to as being closely coupled. If, in addition, the self-inductances go to infinity, the inductor becomes an ideal transformer. In this case the voltages, currents, and numbers of turns can be related in the following way:
$$V_\text{s} = \frac{N_\text{s}}{N_\text{p}} V_\text{p}$$

where

$V_\text{s}$ is the voltage across the secondary inductor,
$V_\text{p}$ is the voltage across the primary inductor (the one connected to a power source),
$N_\text{s}$ is the number of turns in the secondary inductor, and
$N_\text{p}$ is the number of turns in the primary inductor.

Conversely the current:

$$I_\text{s} = \frac{N_\text{p}}{N_\text{s}} I_\text{p}$$

where

$I_\text{s}$ is the current through the secondary inductor,
$I_\text{p}$ is the current through the primary inductor (the one connected to a power source),
$N_\text{s}$ is the number of turns in the secondary inductor, and
$N_\text{p}$ is the number of turns in the primary inductor.
The power through one inductor is the same as the power through the other. These equations neglect any forcing by current sources or voltage sources.
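The turns-ratio relations above are easy to check numerically. The values below are hypothetical, chosen only to illustrate that the ideal-transformer equations pass the same power through both windings:

```python
# Ideal transformer: Vs = (Ns/Np) * Vp and Is = (Np/Ns) * Ip.
# All numbers here are made-up illustrative values.
Np, Ns = 100, 25        # primary and secondary turns
Vp, Ip = 120.0, 0.5     # primary voltage (V) and current (A)

Vs = (Ns / Np) * Vp     # secondary voltage: stepped down 4:1
Is = (Np / Ns) * Ip     # secondary current: stepped up 4:1

print(Vs, Is)             # 30.0 2.0
assert Vp * Ip == Vs * Is  # equal power on both sides (ideal case)
```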
Self-inductance of thin wire shapes
[edit]
See also: Inductor §Inductance formulas
The table below lists formulas for the self-inductance of various simple shapes made of thin cylindrical conductors (wires). In general these are only accurate if the wire radius $a$ is much smaller than the dimensions of the shape, and if no ferromagnetic materials are nearby (no magnetic core).
Self-inductance of thin wire shapes

| Type | Inductance | Explanation of symbols |
| --- | --- | --- |
| Single-layer solenoid | Wheeler's approximation formula for a current-sheet-model air-core coil: $\mathcal{L} = \frac{N^2 D^2}{18D + 40\ell}$ (inches), $\mathcal{L} = \frac{N^2 D^2}{45.72D + 101.6\ell}$ (cm). This formula gives an error of no more than 1% when $\ell > 0.4\,D$. | $\mathcal{L}$: inductance in μH (10⁻⁶ henries); $N$: number of turns; $D$: diameter in inches (cm); $\ell$: length in inches (cm) |
| Coaxial cable (HF) | $\mathcal{L} = \frac{\mu_0}{2\pi}\,\ell \ln\left(\frac{b}{a}\right)$ | $b$: outer conductor's inside radius; $a$: inner conductor's radius; $\ell$: length; $\mu_0$: see table footnote |
| Circular loop | $\mathcal{L} = \mu_0\, r \left[\ln\left(\frac{8r}{a}\right) - 2 + \tfrac{1}{4}Y + \mathcal{O}\left(\frac{a^2}{r^2}\right)\right]$ | $r$: loop radius; $a$: wire radius; $\mu_0, Y$: see table footnotes |
| Rectangle from round wire | $\mathcal{L} = \frac{\mu_0}{\pi}\left[\ell_1 \ln\left(\frac{2\ell_1}{a}\right) + \ell_2 \ln\left(\frac{2\ell_2}{a}\right) + 2\sqrt{\ell_1^2 + \ell_2^2} - \ell_1 \sinh^{-1}\left(\frac{\ell_1}{\ell_2}\right) - \ell_2 \sinh^{-1}\left(\frac{\ell_2}{\ell_1}\right) - \left(2 - \tfrac{1}{4}Y\right)(\ell_1 + \ell_2)\right]$ | $\ell_1, \ell_2$: side lengths, with $\ell_1 \gg a$ and $\ell_2 \gg a$; $a$: wire radius; $\mu_0, Y$: see table footnotes |
| Pair of parallel wires | $\mathcal{L} = \frac{\mu_0}{\pi}\,\ell \left[\ln\left(\frac{s}{a}\right) + \tfrac{1}{4}Y\right]$ | $a$: wire radius; $s$: separation distance, $s \geq 2a$; $\ell$: length of pair; $\mu_0, Y$: see table footnotes |
| Pair of parallel wires (HF) | $\mathcal{L} = \frac{\mu_0}{\pi}\,\ell \cosh^{-1}\left(\frac{s}{2a}\right) = \frac{\mu_0}{\pi}\,\ell \ln\left(\frac{s}{2a} + \sqrt{\frac{s^2}{4a^2} - 1}\right) \approx \frac{\mu_0}{\pi}\,\ell \ln\left(\frac{s}{a}\right)$ | $a$: wire radius; $s$: separation distance, $s \geq 2a$; $\ell$: length (each) of pair; $\mu_0$: see table footnote |
$Y$ is an approximately constant value between 0 and 1 that depends on the distribution of the current in the wire: $Y = 0$ when the current flows only on the surface of the wire (complete skin effect), and $Y = 1$ when the current is evenly spread over the cross-section of the wire (direct current). For round wires, Rosa (1908) gives a formula equivalent to:

$$Y \approx \frac{1}{1 + a\,\sqrt{\tfrac{1}{8}\mu\sigma\omega}}$$

where

$\omega = 2\pi f$ is the angular frequency, in radians per second;
$\mu = \mu_0\,\mu_\text{r}$ is the net magnetic permeability of the wire;
$\sigma$ is the wire's specific conductivity; and
$a$ is the wire radius.

$\mathcal{O}(x)$ represents small term(s) that have been dropped from the formula to make it simpler. Read the term $+\mathcal{O}(x)$ as "plus small corrections that vary on the order of $x$" (see big O notation).
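As a rough numerical sanity check, two of the formulas above (Wheeler's solenoid approximation and the thin circular loop, taking $Y = 0$ for complete skin effect) can be evaluated directly; the dimensions below are illustrative values, not from the article:

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, in H/m

def wheeler_uH(N, D, length):
    """Wheeler's approximation for a single-layer air-core coil.
    D and length in inches; returns inductance in microhenries."""
    return (N ** 2 * D ** 2) / (18 * D + 40 * length)

def loop_inductance(r, a, Y=0.0):
    """Thin circular loop: loop radius r and wire radius a in metres,
    dropping the O(a^2/r^2) terms. Y = 0 assumes complete skin effect."""
    return MU0 * r * (math.log(8 * r / a) - 2 + Y / 4)

print(wheeler_uH(N=100, D=1.0, length=2.0))  # about 102 uH
print(loop_inductance(r=0.05, a=0.001))      # about 0.25 uH
```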
See also
[edit]
Electromagnetic induction
Gyrator
Hydraulic analogy
Leakage inductance
LC circuit, RLC circuit, RL circuit
Kinetic inductance
Footnotes
[edit]
^ The integral is called "logarithmically divergent" because $\int \frac{1}{x}\,\mathrm{d}x = \ln(x)$ for $x > 0$, hence it approaches infinity like a logarithm whose argument approaches infinity.
References
[edit]
^ abSerway, A. Raymond; Jewett, John W.; Wilson, Jane; Wilson, Anna; Rowlands, Wayne (2017). "Inductance". Physics for global scientists and engineers (2 ed.). Cengage AU. p.901. ISBN9780170355520.
^Baker, Edward Cecil (1976). Sir William Preece, F.R.S.: Victorian Engineer Extraordinary. Hutchinson. p.204. ISBN9780091266103..
^Heaviside, Oliver (1894). "The induction of currents in cores". Electrical Papers, Vol. 1. London: Macmillan. p.354.
^Elert, Glenn. "The Physics Hypertextbook: Inductance". Retrieved 30 July 2016.
^Davidson, Michael W. (1995–2008). "Molecular Expressions: Electricity and Magnetism Introduction: Inductance".
^The International System of Units(PDF), V3.01 (9th ed.), International Bureau of Weights and Measures, Aug 2024, ISBN978-92-822-2272-0, p. 160
^"A Brief History of Electromagnetism"(PDF).
^Note also that if the voltage across a one-henry inductor is changed in a step from zero to one volt, then, by this same definition, the current will increase by one ampere per second; at least in theory, since there is always some natural limit to the current increasing.
^Ulaby, Fawwaz (2007). Fundamentals of applied electromagnetics (5th ed.). Pearson / Prentice Hall. p.255. ISBN978-0-13-241326-8.
^"Joseph Henry". Distinguished Members Gallery, National Academy of Sciences. Archived from the original on 2013-12-13. Retrieved 2006-11-30.
^Pearce Williams, L. (1971). Michael Faraday: A Biography. Simon and Schuster. pp.182–183. ISBN9780671209292.
^Giancoli, Douglas C. (1998). Physics: Principles with Applications (Fifth ed.). pp.623–624.
^Pearce Williams, L. (1971). Michael Faraday: A Biography. Simon and Schuster. pp.191–195. ISBN9780671209292.
^Singh, Yaduvir (2011). Electro Magnetic Field Theory. Pearson Education India. p.65. ISBN978-8131760611.
^Wadhwa, C.L. (2005). Electrical Power Systems. New Age International. p.18. ISBN8122417221.
^Pelcovits, Robert A.; Farkas, Josh (2007). Barron's AP Physics C. Barron's Educational Series. p.646. ISBN978-0764137105.
^Purcell, Edward M.; Morin, David J. (2013). Electricity and Magnetism. Cambridge Univ. Press. p.364. ISBN978-1107014022.
^Sears and Zemansky 1964:743
^ abSerway, Raymond A.; Jewett, John W. (2012). Principles of Physics: A Calculus-Based Text, 5th Ed. Cengage Learning. pp.801–802. ISBN978-1133104261.
^ abIda, Nathan (2007). Engineering Electromagnetics, 2nd Ed. Springer Science and Business Media. p.572. ISBN978-0387201566.
^ abPurcell, Edward (2011). Electricity and Magnetism, 2nd Ed. Cambridge University Press. p.285. ISBN978-1139503556.
^Gates, Earl D. (2001). Introduction to Electronics. Cengage Learning. p.153. ISBN0766816982.
^ abRosa, E.B. (1908). "The self and mutual inductances of linear conductors". Bulletin of the Bureau of Standards. 4 (2). U.S. Bureau of Standards: 301 ff. doi:10.6028/bulletin.088.
^Dengler, R. (2016). "Self inductance of a wire loop as a curve integral". Advanced Electromagnetics. 5 (1): 1–8. arXiv:1204.1486. Bibcode:2016AdEl....5....1D. doi:10.7716/aem.v5i1.331. S2CID53583557.
^Neumann, F.E. (1846). "Allgemeine Gesetze der inducirten elektrischen Ströme" [General rules for induced electric currents]. Annalen der Physik und Chemie (in German). 143 (1). Wiley: 31–44. Bibcode:1846AnP...143...31N. doi:10.1002/andp.18461430103. ISSN0003-3804.
^Jackson, J. D. (1975). Classical Electrodynamics. Wiley. pp.176, 263. ISBN9780471431329.
^The kinetic energy of the drifting electrons is many orders of magnitude smaller than W, except for nanowires.
^Nahvi, Mahmood; Edminister, Joseph (2002). Schaum's outline of theory and problems of electric circuits. McGraw-Hill Professional. p.338. ISBN0-07-139307-2.
^Thierauf, Stephen C. (2004). High-speed Circuit Board Signal Integrity. Artech House. p.56. ISBN1580538460.
^Kim, Seok; Kim, Shin-Ae; Jung, Goeun; Kwon, Kee-Won; Chun, Jung-Hoon (2009). "Design of a Reliable Broadband I/O Employing T-coil"(PDF). Journal of Semiconductor Technology and Science. 9 (4): 198–204. doi:10.5573/JSTS.2009.9.4.198. S2CID56413251. Archived(PDF) from the original on Jul 24, 2018 – via ocean.kisti.re.kr.
^Aatre, Vasudev K. (1981). Network Theory and Filter Design. US, Canada, Latin America, and Middle East: John Wiley & Sons. pp.71, 72. ISBN0-470-26934-0.
^Chua, Leon O.; Desoer, Charles A.; Kuh, Ernest S. (1987). Linear and Nonlinear Circuits. McGraw-Hill, Inc. p.459. ISBN0-07-100685-0.
^Eslami, Mansour (May 24, 2005). Circuit Analysis Fundamentals. Chicago, IL, US: Agile Press. p.194. ISBN0-9718239-5-2.
^Radecki, Andrzej; Yuan, Yuxiang; Miura, Noriyuki; Aikawa, Iori; Take, Yasuhiro; Ishikuro, Hiroki; Kuroda, Tadahiro (2012). "Simultaneous 6-Gb/s Data and 10-mW Power Transmission Using Nested Clover Coils for Noncontact Memory Card". IEEE Journal of Solid-State Circuits. 47 (10): 2484–2495. Bibcode:2012IJSSC..47.2484R. doi:10.1109/JSSC.2012.2204545. S2CID29266328.
^Kurs, A.; Karalis, A.; Moffatt, R.; Joannopoulos, J. D.; Fisher, P.; Soljacic, M. (6 July 2007). "Wireless Power Transfer via Strongly Coupled Magnetic Resonances". Science. 317 (5834): 83–86. Bibcode:2007Sci...317...83K. CiteSeerX10.1.1.418.9645. doi:10.1126/science.1143254. PMID17556549. S2CID17105396.
^Sample, Alanson P.; Meyer, D. A.; Smith, J. R. (2011). "Analysis, Experimental Results, and Range Adaptation of Magnetically Coupled Resonators for Wireless Power Transfer". IEEE Transactions on Industrial Electronics. 58 (2): 544–554. doi:10.1109/TIE.2010.2046002. S2CID14721.
^Rendon-Hernandez, Adrian A.; Halim, Miah A.; Smith, Spencer E.; Arnold, David P. (2022). "Magnetically Coupled Microelectromechanical Resonators for Low-Frequency Wireless Power Transfer". 2022 IEEE 35th International Conference on Micro Electro Mechanical Systems Conference (MEMS). pp.648–651. doi:10.1109/MEMS51670.2022.9699458. ISBN978-1-6654-0911-7. S2CID246753151.
^Wheeler, H.A. (1942). "Formulas for the Skin Effect". Proceedings of the IRE. 30 (9): 412–424. doi:10.1109/JRPROC.1942.232015. S2CID51630416.
^Wheeler, H.A. (1928). "Simple Inductance Formulas for Radio Coils". Proceedings of the IRE. 16 (10): 1398–1400. doi:10.1109/JRPROC.1928.221309. S2CID51638679.
^Elliott, R.S. (1993). Electromagnetics. New York: IEEE Press. Note: The published constant −3⁄2 in the result for a uniform current distribution is wrong.
^Grover, Frederick W. (1946). Inductance Calculations: Working formulas and tables. New York: Dover Publications, Inc.
General references
[edit]
Frederick W. Grover (1952). Inductance Calculations. Dover Publications, New York.
Griffiths, David J. (1998). Introduction to Electrodynamics (3rd ed.). Prentice Hall. ISBN0-13-805326-X.
Wangsness, Roald K. (1986). Electromagnetic Fields (2nd ed.). Wiley. ISBN0-471-81186-6.
Hughes, Edward. (2002). Electrical & Electronic Technology (8th ed.). Prentice Hall. ISBN0-582-40519-X.
Küpfmüller K., Einführung in die theoretische Elektrotechnik, Springer-Verlag, 1959.
Heaviside O., Electrical Papers. Vol.1. – L.; N.Y.: Macmillan, 1892, p.429-560.
Fritz Langford-Smith, editor (1953). Radiotron Designer's Handbook, 4th Edition, Amalgamated Wireless Valve Company Pty., Ltd. Chapter 10, "Calculation of Inductance" (pp.429–448), includes a wealth of formulas and nomographs for coils, solenoids, and mutual inductance.
F. W. Sears and M. W. Zemansky 1964 University Physics: Third Edition (Complete Volume), Addison-Wesley Publishing Company, Inc. Reading MA, LCCC 63-15265 (no ISBN).
External links
[edit]
Clemson Vehicular Electronics Laboratory: Inductance Calculator
This page was last edited on 8 August 2025, at 00:20(UTC).
189517 | https://www.sadlier.com/school/sadlier-math-blog/mean-absolute-deviation-and-the-concept-of-variability-in-middle-school | Sadlier's Math Blog
A K–8 resource to support deep comprehension of math skills and concepts
March 23, 2021 · 6–8 Statistics and Probability
Mean Absolute Deviation & the Concept of Variability in Middle School
By: Jeff Todd
Recently, I was working with some of my students on topics they had failed on last year's state testing. The topic of the day was mean absolute deviation, and while they were able to perform the calculations, they were struggling with the concept of statistical variability.
Strengthen Understanding of Mean Absolute Deviation
To help them, I developed a Variability in Basketball Activity and used it with my students. This activity helped to start conversations about a real life situation (basketball) they could relate to. We were able to talk about the variability in the basketball scores quite naturally. That gave more meaning to the mean absolute deviation that they were calculating.
I was very surprised to discover that developing an understanding of the concept of statistical variability is now part of the Grade 6 Common Core State Standards in math. At best, this was previously a topic in high school statistics courses and not part of middle school mathematics.
The purpose of this post is to assist you in helping your students overcome their confusion with mean absolute deviation and understand what variability is using my basketball scores concept.
I've created a Variability in Basketball Activity you can download and use in the classroom. It would also be a great exercise to do during professional development days with some of your colleagues in order to strengthen your collective skills in this area.
Also please note that the names used for this activity are gender neutral. Chris and Terry could be male or female. If you are interested in addressing students’ assumptions about gender, listen carefully for their use of pronouns (he/she) as they discuss this activity.
Help Middle School Students Understand Variability
What the Current Standards Ask of Students
The two main topics in studying basic statistics are center and spread.
Middle school math has always included the study of measures of center (mean, median, and mode), but the idea of studying spread (variability) is new to the curriculum, particularly for teachers who have not necessarily studied statistics in detail.
What is Variability?
Variability describes how much a data set changes from element to element. Measures of variability describe the degree and amount of change within the data set. There are two ways the standards ask sixth-grade students to calculate variability: the interquartile range and the mean absolute deviation.
When I was teaching eighth-, ninth-, and tenth-grade math, we often made box-and-whisker plots when studying statistics, and the interquartile range was a topic we touched on at those grade levels. The current standards ask us to go deeper than simply performing calculations such as these, all the way to helping students understand what variability is.
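The interquartile range itself takes only a couple of lines with Python's standard library. Note that different textbooks use slightly different quartile conventions; this sketch follows the `statistics` module's default ("exclusive") method, and the data set is made up for illustration:

```python
import statistics

# Illustrative data set (not from the article).
data = [2, 4, 4, 5, 7, 8, 9, 11]

q1, q2, q3 = statistics.quantiles(data, n=4)  # quartile cut points
iqr = q3 - q1                                 # interquartile range
print(q1, q2, q3, iqr)  # 4.0 6.0 8.75 4.75
```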
What is Mean Absolute Deviation?
The new topic for middle school teachers and students is mean absolute deviation.
Mean absolute deviation describes variation in a data set: it is the average distance between each data value and the mean. It can be calculated by finding the absolute difference between each data value and the mean, then averaging those differences. Mean absolute deviation reveals how "spread out" the values are and how much the data vary.
Here is an example to help promote understanding of that topic and variability (spread).
Calculating Variability: The Basketball Example
About the line plot
Chris and Terry are basketball players. After the first eight games of the season, the coach analyzes their performance and creates a line plot of the number of baskets each player scored during these games.
After the first eight games of the seasons, they have made the following number of baskets:
Analyzing the line plot
If you calculate the measures of center using the mean and the median, you will find that Chris and Terry each have a mean of 9.5 baskets per game and a median of 9.5 baskets per game. By only looking at these measures of center without looking at the line plot, we might assume that these players perform equally well on the basketball court.
However, this is not the case. In fact, the players are quite different, and having students (or other teachers, if you are using this for professional development) explain how they are different can help promote an understanding of how to describe variability. Terry is a very consistent player, scoring either nine or 10 baskets every game. Chris’s results on the court have been much more variable. Sometimes Chris has a great game, and sometimes not. Chris’s scores are spread out, while Terry’s scores are all close to the mean and the median.
Having discussed the differences between the two players, it is now more meaningful for students to calculate the mean absolute deviation for each player. Terry’s mean absolute deviation is 0.5; in fact, all of Terry’s scores are exactly 0.5 away from the mean and median. The mean absolute deviation of Chris’s scores is 3.75. These scores are further away from the mean and median. Lower mean absolute deviations (closer to zero) indicate less variation, while higher mean absolute deviations show more variation.
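The calculations above can be reproduced in a few lines of Python. The original line plot is not reproduced in this post, so the scores below are illustrative values chosen only to match the statistics stated in the text (each player: mean 9.5 and median 9.5; Terry's MAD 0.5; Chris's MAD 3.75):

```python
def mean_absolute_deviation(scores):
    """Average distance between each data value and the mean."""
    mean = sum(scores) / len(scores)
    return sum(abs(x - mean) for x in scores) / len(scores)

# Illustrative scores for eight games (hypothetical, not the line-plot data).
terry = [9, 10, 9, 10, 9, 10, 9, 10]   # consistent: always 9 or 10
chris = [3, 6, 6, 8, 11, 13, 14, 15]   # much more spread out

print(mean_absolute_deviation(terry))  # 0.5
print(mean_absolute_deviation(chris))  # 3.75
```

Lower values (closer to zero) indicate less variation; higher values indicate more, exactly as described above.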
In Summary
I hope this post has helped you know how to strengthen your students' understanding of mean absolute deviation—or even your own understanding. Download the Variability in Basketball Activity to use with your middle school students or for a professional development session. It includes the above example, guiding questions, sample responses, and the calculations for the mean absolute deviation for both players. |
189518 | https://math.stackexchange.com/questions/2386710/why-is-the-word-complement-used-in-set-theory | Stack Exchange Network
Why is the word "complement" used in set theory?
Asked
Modified 8 years, 1 month ago
Viewed 3k times
10
$\begingroup$
Maybe this should have been on the English Exchange, but why do we use the word "complement" in set theory? If I have:
$$(A \cup B)'$$
Why does "complement" mean everything but the union? Is it because it is "all the things" that the original operation is not, thus it "completes" it?
I looked at the dictionary and wasn't sure.
Edit: $(A \cup B)'$ was only an example so I could use the complement mark. It was picked at random and was only meant to ask the question of what the word meant. It was not specifically about a union. I could have probably picked anything that allowed me to use the "tick mark" to indicate complement. My MathJax is not good and cumbersome for me, so I only used the single example.
elementary-set-theory
terminology
edited Aug 8, 2017 at 20:47
jwodder
asked Aug 8, 2017 at 13:48
johnny
$\endgroup$
3
4
$\begingroup$ $A'$ would allow you to use the tick mark to indicate a complement, and it takes a lot less MathJax to write $A'$ than to write $(A \cup B)'.$ So it's still puzzling why you went to all the trouble to write something like $(A \cup B)'.$ But I suppose that's a moot point, since you already got the answer you needed. $\endgroup$
– David K
Commented Aug 8, 2017 at 21:13
$\begingroup$ "$(A \cup B)'$ was only an example so I could use the complement mark." A FAR simpler and more pertinent example would be $A'$. Complement has nothing to do with unions, so that is very misleading. Complement is simply about sets. So just use a set. $\endgroup$
– fleablood
Commented Aug 9, 2017 at 3:10
$\begingroup$ "My MathJax is not good and cumbersome for me, so I only used the single example." So why did you use a very hard example, instead of a very simple example. And why did you use MathJax at all. You don't need ANY MathJax to type A'. $\endgroup$
– fleablood
Commented Aug 9, 2017 at 3:18
4 Answers
21
$\begingroup$
The first non-mathy definition that Merriam-Webster gives for "complement" is
something that fills up, completes, or makes perfect
You may also note that the other definitions have a similar connotation. Things complement each other if, when put together, they make up something that is complete. For another mathematical example, consider complementary angles, which add up to a "perfect" right angle.
In set theory, a set and its complement form a universal set (i.e. if $X$ is the universe, then $A \cup A^c = X$). Hence the two sets complement each other, in the sense that together, they make a whole.
Addendum: the Oxford English Dictionary (this may require academic access) indicates that "complement" was first used in the mathematical sense in Billingsley's 1570 translation of Euclid's Elements. At that time, it had been in English usage for at least 150 years (the earliest reference in the OED goes back to 1419, as far as I can tell, meaning something to the effect of "finishing or completing something"). The word originally comes from Latin: "complēmentum, that which fills up or completes."
edited Aug 8, 2017 at 16:36
answered Aug 8, 2017 at 13:51
Xander Henderson♦
$\endgroup$
2
1
$\begingroup$ From Latin complemens, present participle of complere, where com- = "together" and plere ="to make full", so "that which together (with the given) makes full" $\endgroup$
– Hagen von Eitzen
Commented Aug 9, 2017 at 11:06
$\begingroup$ It is worth pointing out that a complement is not well defined without the extra knowledge of what "universe" it refers to. Set-theoretical complements are actually fairly rare, since the actual complement of a set (i.e. $A' = \{x \mid x \notin A\}$) is never a set in ZF(C). $\endgroup$
– DRF
Commented Aug 9, 2017 at 11:46
9
$\begingroup$
Given how you talk about this, I suspect that part of your confusion is that you seem to think that the complement operator applies to some other operator (in your specific example, a union, but whatever other operator would end up there). But, complement is an operator that is applied to sets, not to operators. So, I can just have $A'$ as the complement of set $A$. And in the case of $(A \cup B)'$, the set to which you apply the complement operator happens to be the union of two sets.
So the complement is not, as you write, 'all the things that the original operator is not' (my emphasis) but rather 'all the things that the original set is not' (better put: 'all the things that are not in the original set'). Likewise, in your case, the complement does not 'complete' the operation of union, but simply 'completes' a set ... that again just happens to be a union of two sets.
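A quick illustration of this point (my own sketch, not part of the original answer), using Python's built-in sets: the complement operator applies to a set relative to some universe, whether that set is named directly or built as a union:

```python
U = set(range(10))   # the universe of discourse
A = {1, 2, 3}
B = {3, 4, 5}

def complement(S):
    """Complement relative to the universe U."""
    return U - S

print(sorted(complement(A)))      # [0, 4, 5, 6, 7, 8, 9]
print(sorted(complement(A | B)))  # complement of the *set* A ∪ B

# A set and its complement together make up the whole universe:
assert complement(A) | A == U
```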
edited Aug 8, 2017 at 16:49
answered Aug 8, 2017 at 13:58
Bram28
$\endgroup$
7
$\begingroup$ Union was just an example. Probably a poor one, but that was the intent (to be an example, not a poor one.) $\endgroup$
– johnny
Commented Aug 8, 2017 at 16:42
$\begingroup$ @Johnny So why did you say "all the things that the original operation is not"? And why not just use $A'$ as an example? $\endgroup$
– Bram28
Commented Aug 8, 2017 at 16:43
$\begingroup$ "operation" could mean anything when I wrote it. I had to put something there as an example, so I picked that. It took me a while to format even that with MathJax, and I thought it was sufficient to get my question across. $\endgroup$
– johnny
Commented Aug 8, 2017 at 16:44
7
$\begingroup$ @Johnny But why not put a set there? Why not just $A'$? You say you had to put something there ... I am still concerned that you think that the complement applies to whatever operator you would put there ... $\endgroup$
– Bram28
Commented Aug 8, 2017 at 16:46
1
$\begingroup$ @johnny OK, that's all good then ... sorry to pry! :) $\endgroup$
Bram28
– Bram28
2017-08-08 16:48:11 +00:00
Commented Aug 8, 2017 at 16:48
8
$\begingroup$
"X complements Y" in colloquial English means basically "X has what Y lacks". This is exactly what the complement is in set theory, except that the complement of $A$ also has none of what $A$ has.
That said, my suspicion is that this term actually originates in mathematical French and was borrowed directly from mathematical French into mathematical English.
answered Aug 8, 2017 at 13:52
Ian
$\endgroup$
1
$\begingroup$
The complement of set $A$ in set $X$, written $X\setminus A$, is "the other half" of $A$. The set $X\setminus A$ is what the set $A$ "needs" in order to "be whole" again. In this context, "being whole" means being all of $X$, so $A$ together with $X\setminus A$ are whole.
More is true. There may be many sets that would make $A$ "whole" again, but $X\setminus A$ is the smallest of them all.
I pronounce the symbol "$X\setminus A$" as "$X$ without $A$", or "$X$ not $A$", or "$X$ punctured at $A$", or "the $X$-complement of $A$", or even "not $A$" when $X$ is clear from context.
The idea of complement is also very important in trigonometry (where $\pi/2-\theta$ is the $\pi/2$-complement of angle $\theta$), probability (where $1-p$ is the complement of probability $p$), and logic (where $\neg a$ is the complement of $a$ when both are elements of a complemented distributive lattice).
answered Aug 9, 2017 at 2:46
étale-cohomology
$\endgroup$
189519 | https://www.youtube.com/watch?v=5Mo463dGSNU | Finding the Common Ratio of a Geometric Sequence - Sheaff Math
Sheaff Math
1040 subscribers
10 likes
Description
972 views
Posted: 22 Dec 2020
Today's lesson teaches how to quickly find the common ratio of a geometric sequence. This is a great skill to have in finding missing terms, next terms, and exponential rules of geometric sequences.
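The rule taught in the video, divide any term by the term before it and check that the quotient is the same everywhere, can be sketched in Python (the helper name is my own):

```python
from fractions import Fraction

def common_ratio(seq):
    """Divide each term by the one before it; the sequence is geometric
    exactly when all of these quotients agree."""
    ratios = {Fraction(b) / Fraction(a) for a, b in zip(seq, seq[1:])}
    if len(ratios) != 1:
        return None  # not a geometric sequence
    return ratios.pop()

print(common_ratio([2, 4, 8, 16, 32]))   # 2
print(common_ratio([270, 90, 30, 10]))   # 1/3
print(common_ratio([-3, 6, -12, 24]))    # -2
print(common_ratio([2, 4, 6, 8]))        # None (arithmetic, not geometric)
```

Using exact fractions avoids the floating-point surprises the video warns about with decimal sequences.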
Transcript:
Intro hello and welcome back to sheaf math in today's lesson we're going to learn how to find the common ratio of a geometric sequence Geometric Sequence so first off let's just talk about what a geometric sequence is and so this is an example of what a geometric sequence is and you might be able to see a pattern to this this is different than an arithmetic sequence where it goes like 2 4 6 8 10 where it goes up by 2 each time or up by 8 or down by 3 each time this one you might notice has a pattern as well but this one we are multiplying by 2 each time to get the to get the next number and so this number is called the common ratio and why do we call it the common ratio well because what we're actually doing is we're taking a number and we're dividing it by the number that precedes it the number right before it so 4 divided by 2 right and if i did that to the next one 8 divided by 4 and 16 divided by 8 and 32 divided by 16 i would get fractions that are all equal ratios and so these all have the same uh answer and this ratio happens to be two and that's the common ratio so when you find the common ratio that's the number that's multiplied to get the next number so here's an example of another one and you might be able to spot what the pattern is here and so these are some easy ones and you can see that this is multiplying by three right each time but we really want to focus on on finding that ratio and doing it the proper way because they're not always going to be super easy and able to be found out real quick so to find that common ratio we take our a number and we divide it by the one before it so three divided by one nine over three 27 over 9 and 81 over 27 and those all come out to three they have that common ratio and that's the number again that you multiply to get to the next number Common Ratio so let's show you how to quickly find the common ratio so here's an example of one and if you're asked to find the common ratio it usually means that this is a 
geometric sequence so we're going to assume that and so even if you're able to spot what what it is you always want to get into the habit of just doing that division each time okay and so i'm gonna pick just two random numbers 16 and four now i should have probably picked four and one because they're the easier ones but i just want you to know that it doesn't matter which one pair you pick when you take the number and divide it by the one before it you're going to get that common ratio okay and you really only need to do one in order to to find it um if you had to prove that it was a geometric sequence then you would have to do all of them to see if they all equaled four but just to find it a common ratio quickly all you do is take pick any number divide it by the one before it and you always want to maybe pick the easiest ones so i'm going to give you a bunch of different scenarios different geometric sequences so that you uh have seen all of these and so here we go to find the common ratio of this one we're going to take our two easiest numbers negative 10 and negative 2 and we're going to divide them and that comes out to 5. and if you notice that each number if you multiply by positive 5 you'll get the next number now this one right here this one is going down and so some of you might notice right off the bat that oh i see it's dividing by 3 but divide by 3 is not the common ratio you want to put it in terms of what is multiplied to get the next number so what we do is we pick any two numbers i'll do 10 and 30 and i take 10 and divide it by 30 which simplifies to one-third and so this is the true common ratio if i took 270 and multiplied it by one-third i get 90. 
and so multiplying by one third is the same as dividing by three but you always want to put the common ratio in terms of what you're multiplying to get that next [Music] number all right now look at this one this one uh is going back and forth from negative to positive negative positive negative and anytime you see this you can be sure that the common ratio is going to be negative so let's try it out 6 divided by negative 3 is negative 2 and if you did it to negative 12 divided by 6 you would get negative 2. 24 divided by negative 12 that's negative 2 as well and so anytime you see them going back and forth from negative to positive your common ratio is going to be a negative number all right well we couldn't have it without fractions they're always fun and so hopefully you uh remember how to divide fractions because that's what you're going to have to do here so we will pick our two numbers we'll take two-ninths and divide it by one-third and if you forgot how to divide fractions you simply multiply by the reciprocal of the second number in other words you flip the second guy and multiply and so we did that we flipped one third to three over one we multiply straight across we get 6 over 9 and that simplifies to 2/3. you'll always want to put it into the simplified form and so it doesn't matter which fraction pair you picked if you divided by the one before it you would get two thirds and lastly let's do a decimal one so this is a geometric sequence and so i'm going to take my two smallest numbers and divide them so negative 2 i'm sorry 0.26 divided by 0.2 equals 1.3 and if you did that to any of them you would find it's 1.3 and this one you can use a calculator [Music] doing long division with decimals sometimes is a long process so there you have it you just learned how to find the common ratio quickly in a geometric sequence well thank you for watching and we'll see you next time |
189520 | https://www.geeksforgeeks.org/maths/types-of-functions/ | Types of Functions
Last Updated :
23 Jul, 2025
Functions are defined as relations which give a particular output for a particular input value. A function has a domain and a codomain; its range is a subset of the codomain. f(x) usually denotes a function where x is the input of the function. In general, a function is written as y = f(x).
A function is a relation between two sets A and B such that every element of set A has an image in set B, and no element of set A has more than one image in set B.
Let A and B be two nonempty sets. A function or mapping f from A to B is written as f: A → B is a rule by which each element a ∈ A is associated with a unique element b ∈ B.
Table of Content
Domain, Codomain, and Range of a Function
Representation of Function
Types of Functions in Maths
One to One (Injective) function
Many to One Function
Onto (Surjective) Function
Into Function
Summary: Types of Functions
Solved Examples on Types of Function
Domain, Codomain, and Range of a Function
The elements of set X are called the domain of f and the elements of set Y are called the codomain of f. The images of the elements of set X are called the range of function, which is always a subset of Y. The image given below demonstrates the domain, codomain, and range of the function.
The image demonstrates the domain, co-domain, and range of the function. Remember the element which is mapped only will be counted in the range as shown in the image. The domain, codomain, and range of the above function are:
Domain = {a, b, c}
Codomain = {1, 2, 3, 4, 5}
Range = {1, 2, 3}
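The figure's mapping can be modeled as a small dictionary to make the three sets concrete (a sketch; variable names are mine):

```python
f = {'a': 1, 'b': 2, 'c': 3}      # the mapping from the figure above
domain = set(f)                    # {'a', 'b', 'c'}
codomain = {1, 2, 3, 4, 5}         # where images are allowed to land
rng = set(f.values())              # images actually hit: {1, 2, 3}

assert rng <= codomain             # the range is always a subset of the codomain
print(sorted(rng))  # [1, 2, 3]
```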
Learn more about Domain and Range.
Representation of Function
There are three different forms of representation of functions. The function needs to be represented to showcase the domain values and the relationship between them. A function can be represented in algebraic form, graphical form, and roster form.
Algebraic Form: A function is usually denoted by the equation y = f(x) which connects the values on the y-axis.
Graphical Form: Functions are easy to understand if they are represented in a graphical form with the help of coordinate axes, which helps us to understand the changing behavior of the function if the function is increasing or decreasing.
Roster Form: Roster notation is a simple mathematical way of writing a set. In this notation, a function is represented as the set of points on its graph, where the first and second elements of each ordered pair come from the domain and range respectively.
Types of Functions in Maths
An example of a simple function is f(x) = x^3. In this function, f(x) takes the value of "x" and then cubes it to find the value of the function.
For example, if the value of x is taken to be 2, then the function gives 8 as output i.e. f(2) = 8.
Some other examples of functions are:
f(x) = cos x,
f(x) = 5x^2 + 9,
f(x) = 1/x^3, etc.
There are several types of functions in maths. Some of the important types are:
One to One (Injective) function
Many to One function
Onto (Surjective) Function
Into Function
One to One (Injective) function
A function f: X → Y is said to be a one-to-one function if the images of distinct elements of X under f are distinct. Thus, f is one-to-one if f(x1) = f(x2) implies x1 = x2.
Property: A function f: A → B is one-to-one if f(x1) = f(x2) implies x1 = x2, i.e., an image of a distinct element of A under f mapping (function) is distinct.
Condition to be a one-to-one function: distinct elements of the domain must have distinct images in the codomain.
Learn more about One-to-One Functions.
Examples of One-to-One Functions
Some examples of one-one functions are:
f(x) = x (Identity function)
k(x) = 2x + 3 (Linear Polynomial)
g(x) = e^x (Exponential function)
h(x) = √x (Square root function, defined for x ≥ 0)
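On a finite domain the one-to-one condition can be tested directly by comparing images (a sketch; the helper name is mine):

```python
def is_one_to_one(f, domain):
    # f is one-to-one iff no two domain elements share an image.
    images = [f(x) for x in domain]
    return len(images) == len(set(images))

sample = range(-5, 6)
print(is_one_to_one(lambda x: 2 * x + 3, sample))  # True  (linear map, injective)
print(is_one_to_one(lambda x: x * x, sample))      # False (x and -x collide)
```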
Many to One Function
If a function is not one-to-one, then it is a many-to-one function: at least two distinct elements of the domain share the same image in the codomain.
Property: Two or more elements of the domain have the same image in the codomain.
Condition to be a many-to-one function: more than one element of the domain maps to a single image in the codomain.
Learn more about Many-One Functions.
Examples of Many to One Function
Some of the most common examples of many-to-one functions are:
f(x) = x^2 (Square function)
g(x) = sin(x) (Sine function)
h(x) = cos(x) (Cosine function)
k(x) = tan(x) (Tangent function)
m(x) = |x| (Absolute value function)
Onto (Surjective) Function
A function f: X → Y is said to be an onto function if every element of Y is an image of some element of set X under f, i.e. for every y ∈ Y there exists an element x in X such that f(x) = y.
The range of functions should be equal to the codomain.
Every element of B is the image of some element of A.
Condition to be onto function: The range of function should be equal to the codomain.
As we see in the above two images, the range is equal to the codomain: every element of the codomain is mapped to by some element of the domain, and the mapped elements of the codomain are exactly the range. So these are examples of the onto function.
Learn more about Onto Functions.
Examples of Onto Functions
Some of the most common examples of onto functions are:
f(x) = x (Identity function)
g(x) = e^x (Exponential function, onto when the codomain is (0, ∞))
h(x) = sin(x) (Sine function within a limited domain, e.g., h: R→[−1,1])
k(x) = cos(x) (Cosine function within a limited domain, e.g., k : [0,π]→[−1,1])
m(x) = x^3 (Cubic function)
Into Function
A function f: X → Y is said to be an into function if there exists at least one element in Y which does not have a pre-image in X, which simply means that not every element of the codomain is mapped by an element of the domain.
Properties:
The range of the function is a proper subset of B.
The range of the function is not equal to B, where B is the codomain.
From the above image, we can see that not every element of the codomain is mapped by an element of the domain; the element 10 of the codomain is left unmapped. So this type of function is known as an into function.
Examples of Into Functions
Some examples of functions you can consider are:
f(x) = sin(x) where f: R→R is into, because its range [−1, 1] does not cover all of R.
g(x) = x^2 where g: R→R is into, because it does not map to any negative real number.
h(x) = e^x where h: R→R is into, because it maps only to positive numbers, so zero and the negatives are never reached.
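Whether a map is onto or into depends on the chosen codomain, which a small finite check makes visible (a sketch; names and the finite stand-in sets are mine):

```python
def is_onto(f, domain, codomain):
    # Onto iff the set of images equals the whole codomain.
    return {f(x) for x in domain} == set(codomain)

domain = [-2, -1, 0, 1, 2]   # squares hit exactly {0, 1, 4}

print(is_onto(lambda x: x * x, domain, {0, 1, 4}))          # True: range == codomain
print(is_onto(lambda x: x * x, domain, {-4, -1, 0, 1, 4}))  # False: negatives unmapped (into)
```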
Summary: Types of Functions
All types can be summarized in the following table:
| Function Type | Definition | Example |
| --- | --- | --- |
| One-to-One (Injective) | Each element of the domain maps to a distinct element of the codomain. | f(x) = 2x + 3 |
| Many-to-One | Two or more elements of the domain map to the same element of the codomain. | f(x) = x^2 |
| Onto (Surjective) | Every element of the codomain is the image of at least one element of the domain. | f(x) = e^x, f: R→(0, ∞) |
| Into (Non-surjective) | The range is a proper subset of the codomain; some element of the codomain has no pre-image. | f(x) = sin(x), f: R→R |
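For a function on finite sets, the table's four labels can be decided together (a sketch; the classifier and its output strings are my own):

```python
def classify(f, domain, codomain):
    images = [f(x) for x in domain]
    one_to_one = len(images) == len(set(images))   # no shared images
    onto = set(images) == set(codomain)            # range covers the codomain
    first = "one-to-one" if one_to_one else "many-to-one"
    second = "onto" if onto else "into"
    return f"{first} and {second}"

dom = [1, 2, 3, 4]
print(classify(lambda x: x + 1, dom, {2, 3, 4, 5}))   # one-to-one and onto
print(classify(lambda x: x % 2, dom, {0, 1, 2}))      # many-to-one and into
```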
Related Reads,
Domain and Range of Trigonometric Functions
Range of a Function
Relations and Functions
Composition of Functions
Hyperbolic Function
Solved Examples of Types of Function
Example 1: Check whether the function f(x) = 2x + 3, is one-to-one or not if Domain = {1, 2, 1/2} and Co-domain = {5, 7, 4}.
Solution:
Putting 1, 2, 1/2 in place of x in f(x) = 2x + 3, we get
f(1) = 5,
f(2) = 7,
f(1/2) = 4
As distinct values of x give distinct values of f(x), we can conclude that our function f(x) is one-to-one.
Example 2: Check whether the function is one-to-one or not: f(x) = 3x - 2.
Solution:
To check whether a function is one-to-one, we verify that equal outputs force equal inputs, i.e. that f(x1) = f(x2) implies x1 = x2. For checking, we can write the function as,
f(x1) = f(x2)
3x1 - 2 = 3x2 - 2
3x1 = 3x2
x1 = x2
Since f(x1) = f(x2) implies x1 = x2, distinct elements of the domain have distinct images in the codomain. Hence the function f(x) = 3x - 2 is a one-to-one function.
Example 3: Check whether the function is one-to-one or not: f(x) = x^2 + 3.
Solution:
To check whether the function is one-to-one, we follow the same procedure and write:
f(x1) = f(x2)
(x1)^2 + 3 = (x2)^2 + 3
(x1)^2 = (x2)^2
Since (x1)^2 = (x2)^2 does not imply x1 = x2 (for example, x1 = 1 and x2 = -1 give equal squares),
the function f(x) = x^2 + 3 is not one-to-one.
Example 4: If f: N → N, f(x) = 2x + 1, then check whether the function is injective or not.
Solution:
Here f: N → N, where N is the set of natural numbers, which means that both the domain and codomain of the function are the natural numbers. For checking whether the function is injective or not, we can write the function as,
Let, f(x1) = f(x2)
2x1 + 1= 2x2 + 1
2x1 = 2x2
x1 = x2
Since x1 = x2, distinct elements of the domain map to distinct elements of the codomain. Hence the function f(x) = 2x + 1 is injective (one-to-one).
Example 5: f(x) = x^2, check whether the function is Many to One or not.
Solution:
Domain = {1, -1, 2, -2}, let's put the elements of the domain in the function
f(1) = 1^2 = 1
f(-1) = (-1)^2 = 1
f(2) = 2^2 = 4
f(-2) = (-2)^2 = 4
Thus, we can see that more than one element of the domain has the same image after mapping. So this is a many-to-one function.
Example 6: If f(x) = 2x + 1 is defined as f: R → R, then check whether the function is Onto or not.
Solution:
For checking the function is Onto or not, Let's first put the function f(x) equal to y
f(x) = y
y = 2x + 1
y - 1 = 2x
x = (y - 1) / 2
Now put the value of x in the function f(x), we get,
f((y - 1) / 2) = 2 × [(y - 1) / 2] +1
Taking LCM 2, we get
= [2(y - 1) + 2] / 2
= (2y - 2 + 2) / 2
= y
Since we get back y after putting the value of x in the function. Hence the given function f(x) = 2x + 1 is Onto function.
Example 7: If f: N → N is defined by f(x) = 3x + 1. Then prove that function f(x) is Surjective.
Solution:
To prove that the function is Surjective or not, firstly we put the function equal to y. Then find out the value of x and then put that value in the function. So let's start solving it.
Let f(x) = y
3x + 1 = y
3x = y - 1
x = (y - 1) / 3
Now put the value of x in the function f(x), we get
f((y - 1) / 3) = {3 (y - 1) / 3} + 1
= y - 1 + 1
= y
Since we get back y after putting the value of x in the function. Hence the given function f(x) = (3x + 1) is Onto function.
Example 8: If A = R - {3} and B = R - {1}. Consider the function f: A → B defined by f(x) = (x - 2)/(x - 3), for all x ∈ A. Then show that the function f is bijective.
Solution:
To show the function is bijective we have to prove the given function both One to One and Onto.
Let's first check for One to One:
Let x1, x2 ∈ A such that f(x1) = f(x2)
Then, (x1 - 2) / (x1 - 3) = (x2 - 2) / (x2 - 3)
(x1 - 2) ( x2 - 3) = (x2 - 2) (x1 - 3)
x1.x2 - 3x1 - 2x2 + 6 = x1.x2 - 3x2 - 2x1 + 6
-3x1 - 2x2 = -3x2 - 2x1
-3( x1 - x2) + 2( x1 - x2) = 0
-( x1 - x2) = 0
x1 - x2 = 0
⇒ x1 = x2
Thus, f(x1) = f(x2) ⇒ x1 = x2, ∀ x1, x2 ∈ A
So, the function is a One to One
Now let us check for Onto:
Let y ∈ B = R - {1} be any arbitrary element.
Then, f(x) = y
⇒ (x - 2) / (x - 3) = y
⇒ x - 2 = xy - 3y
⇒ x - xy = 2 - 3y
⇒ x(1 - y) = 2 - 3y
⇒ x = (2 - 3y) / (1 - y) or x = (3y - 2) / (y - 1)
Now put the value of x in the function f(x)
f((3y - 2) / (y - 1)) = [{(3y - 2) / (y - 1)} - 2] / [{(3y - 2) / (y - 1)} - 3]
= (3y - 2 - 2y + 2) / (3y - 2 - 3y + 3)
= y
Hence f(x) is Onto function. Since we proved both One to One and Onto this implies that the function is Bijective.
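Example 8 can be spot-checked with exact rational arithmetic: composing f with the candidate inverse x = (3y - 2)/(y - 1) obtained in the onto step should return the original input (a sketch):

```python
from fractions import Fraction

def f(x):
    # f: A -> B with A = R \ {3}, B = R \ {1}, as in Example 8
    return (x - 2) / (x - 3)

def f_inv(y):
    # candidate inverse read off from the onto computation
    return (3 * y - 2) / (y - 1)

# Sample points of A (all different from 3); exact fractions avoid rounding.
for x in [Fraction(0), Fraction(1), Fraction(2), Fraction(4), Fraction(10)]:
    assert f_inv(f(x)) == x   # one-to-one + onto: f is invertible on A

print("inverse check passed for all samples")
```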
Example 9: A = {1, 2, 3, 4}, B = {a, b, c, d} then the function is defined as f= {(1, a), (2, b), (3, c), (4, d)}. Check whether the function is One to One Onto or not.
Solution:
To check whether the function is One to One Onto or not. We have to check for both one by one.
Let's check for One to One:
As we know the condition for One to One that all the elements of the domain are having a single image in the codomain. As we see in the mapping that all the elements of set A are mapped with set B and each having a single image after mapping.
So the function is One to One.
Now let's check for Onto:
As we know the condition for the function to be Onto is that, Range = Codomain means all the elements of codomain are mapped with domain elements, in this case, codomain will equal to the domain. As we see in the mapping that the condition of the function to be Onto is satisfied.
So the function is Onto.
Since we had proved that the function is both One to One and Onto.
Hence function is One to One Onto (Bijective).
Example 10: A = {1, 2, 3, 4}, B = {a, b, c, d}. The function is defined as f= {(1, a), (2, b), (3, c), (4, c)}. Check whether the function is Many to One Into or not.
Solution:
To check the function is Many to One Into or not. We have to check for both one by one.
Let's first check for Many to One function:
As we know, the condition for a many-to-one function is that more than one element of the domain has the same image in the codomain. From the above mapping we can see that the elements {3, 4} of A have the same image {c} in B, so the function is many-to-one.
Now let's check for Into function:
As we know, the condition for an into function is that the range of the function should be a proper subset of the codomain, i.e. not equal to the codomain. Let's check whether both conditions are satisfied.
Range of function = {a, b, c}
Codomain of function = {a, b, c, d}
Range of function ≠ Codomain of function
As we checked, the range of the function is not equal to the codomain of the function, hence we can say that the function is an into function. So the function is both many-to-one and into.
Hence the function is Many to One Into.
Suggested Quiz
10 Questions
Which of the following is not a type of function?
A
One-to-one function
B
Onto function
C
Objective function
D
Bijective function
Explanation:
One-to-one function (Injective): Each element in the domain maps to a unique element in the codomain.
Onto function (Surjective): Every element in the codomain is mapped by at least one element from the domain.
Bijective function: A function that is both one-to-one and onto.
Objective function: This is not a type of function in mathematics; it refers to a function used in optimization problems.
Which of the following statements about functions is not correct?
A
A one-to-one function maps each element of the domain to a unique element in the range.
B
An onto function has every element of the codomain as the image of at least one element in the domain.
C
A bijective function is both one-to-one and onto.
D
An injective function can have multiple elements in the domain corresponding to the same element in the range.
Explanation:
An injective function (also known as a one-to-one function) means that each element in the domain maps to a unique element in the range, with no repetitions in the range. So, no two different elements of the domain can map to the same element in the range.
If f(x) = x^2 − 2x + 3, then f(x) is:
A
Odd
B
Even
C
Neither Odd or Even
D
Periodic
Explanation:
f(−x) = (−x)^2 − 2(−x) + 3 = x^2 + 2x + 3, which is not equal to f(x) or −f(x). Hence, f(x) is neither odd nor even.
If f: R → R, then f(x) = |x| is
A
One-one onto
B
Many-one onto
C
One-to-one but not onto
D
None of These
Explanation:
for |x|, f(-1) = f(1) = 1
Thus, the function is Many-one.
Also, as f: R → R, but for negative values in codomain there is no pre-image in the domain. Thus, function is not onto.
Which of the four statements given below is different from others?
A
f: A → B
B
f: x → f(x)
C
f is a mapping of A into B
D
f is a function of A into B
Explanation:
f: A → B maps from A to a codomain B, which may contain more elements than the actual outputs.
f: x → f(x) maps from x to the range f(x), which is the set of all possible outputs of f, and is a subset of B (if B is larger than f(X)).
Also, "f is a mapping of A into B" and "f is a function of A into B" is the same as f: A → B.
The function f:R→R defined by f(x)=(x−1)|(x−2)(x−3)| is
A
One-one but not onto
B
Onto but not one-one
C
Both one-one and onto
D
Neither one-one nor onto
Explanation:
Given: f(x) = (x − 1)|(x − 2)(x − 3)|
for x = 1, 2 and 3, f(x) = 0, thus function is many-one.
For x > 3, f(x) = (x − 1)(x − 2)(x − 3) = x^3 − 6x^2 + 11x − 6 (which is an increasing function)
For x < 1, f(x) = (x − 1)(x − 2)(x − 3) [as (x - 2) and (x - 3) both are negative, thus overall value is positive]
As polynomial function have range and domain in all real numbers.
For given function for any y there is always a pre-image in real numbers, thus function is onto.
Function f: R → R, f(x) = x^2 + x is
A
One-one onto
B
One-one into
C
Many-one onto
D
Many-one into
Explanation:
For function, f: R → R defined as f(x) = x^2 + x,
As, f(0) = f(-1), thus the function is many one.
Consider, f(x) = -1
x^2 + x = -1
⇒ x^2 + x + 1 = 0
D = b^2 - 4ac = 1^2 - 4 × 1 × 1 = 1 - 4 = -3 < 0
So, there is no real solution to this equation, which means for f(x) = -1 there is no value of x in the domain.
Thus, function is into.
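The discriminant computation above is easy to reproduce (a sketch):

```python
# Quadratic x^2 + x + 1 = 0 arises from setting f(x) = x^2 + x equal to -1.
a, b, c = 1, 1, 1
D = b * b - 4 * a * c
print(D)       # -3
assert D < 0   # negative discriminant: no real x satisfies x^2 + x = -1
```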
Let f: N → N defined by f(x) = x^2 + x + 1, x ∈ N, then f is
A
One-one onto
B
Many-one onto
C
One-one but not onto
D
None of These
Explanation:
Given: f(x) = x^2 + x + 1, x ∈ N
Let x, y ∈ N such that f(x) = f(y)
⇒ x^2 + x + 1 = y^2 + y + 1
⇒ x^2 - y^2 + x - y = 0
⇒ (x - y)(x + y + 1) = 0
⇒ x = y or x = -y - 1 (which is not possible because x ∈ N)
Thus, f(x) is one-one.
For onto,
f(1) = 3, f(2) = 7, f(3) = 13, f(4) = 21, . .
Here, we can clearly see this function is increasing in value. So, there are gaps in the codomain for which there is no natural number in domain.
Thus, function is not onto.
f(x) = x + √(x^2) is a function from R → R, then f(x) is
A
Injective
B
Surjective
C
Bijective
D
None of These
Explanation:
Given: f(x) = x + √(x^2) = x + |x|
f(-1) = f(-2) = f(-3) = . . . = 0
Thus, the function is not injective.
Also, for x < 0, f(x) = x + (-x) = 0, so the function never takes a negative value.
Therefore, there is no pre-image in the domain for f(x) = -1 or any other negative number.
Thus, for R → R the function is also not surjective.
As the function is neither injective nor surjective, it is also not bijective.
If f: [0, ∞) → [0, ∞) and f(x) = x/(1 + x), then f is
A
One-one and onto
B
One-one but not onto
C
Onto but not one-one
D
Neither one-one nor onto
Explanation:
Given: f(x) = x/(1 + x)
Let x, y ∈ [0, ∞) such that f(x) = f(y)
⇒ x/(1 + x) = y/(1 + y)
⇒ x + xy = y + xy
⇒ x = y
Therefore, function is one-one.
For f(x) = 1,
⇒ x/(x + 1) = 1,
⇒ x = x + 1
⇒ 0 = 1 (which is not possible)
Thus, for f(x) = 1, there is no pre image in the domain.
srishivansh5404
|
189521 | https://www.youtube.com/watch?v=A5lNCM_7M1w | Top 10 DESMOS Features for Understanding Math | jensenmath.ca
JensenMath
250000 subscribers
1422 likes
Description
37007 views
Posted: 26 Feb 2025
Desmos isn’t just a calculator—it’s a powerful tool that helps you visualize math like never before! In this video, we’ll explore the top 10 Desmos features that make learning math easier and more intuitive. From graphing functions and restricting domains to shading inequalities and exploring statistics, these tools will help you see the connections between algebraic equations and their graphical representations. Whether you're a student trying to grasp new concepts or a teacher looking for better ways to explain them, Desmos has something for you. Make sure to go to jensenmath.ca for tons of FREE math resources.
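The "solving equations graphically" idea from the video (read the x-values where two graphs intersect) can be mirrored numerically; assuming the functions from the demo are f(x) = x^2 - 4x + 3 and g(x) = x/2 + 1:

```python
import math

# f(x) = x^2 - 4x + 3 and g(x) = x/2 + 1 intersect where f(x) - g(x) = 0,
# i.e. x^2 - 4.5x + 2 = 0; solve with the quadratic formula.
a, b, c = 1.0, -4.5, 2.0
disc = b * b - 4 * a * c                      # 12.25
roots = sorted([(-b - math.sqrt(disc)) / (2 * a),
                (-b + math.sqrt(disc)) / (2 * a)])
print(roots)  # [0.5, 4.0]

# Both curves agree at each root, matching the intersection points Desmos shows.
for x in roots:
    assert math.isclose(x**2 - 4 * x + 3, x / 2 + 1)
```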
0:00 intro
0:23 Graphing Calculator Basics
7:11 Domain Restrictions
9:55 Sliders
13:36 Shading and Inequalities
16:20 Statistics
20:55 Probability Tools
27:47 Other Calculators
29:46 Geometry Tool
34:52 3D Graphing Calculator
37:12 Teacher Tools
38 comments
Transcript:
intro here are the top 10 Desmos features that can deepen your understanding of math whether you're graphing functions exploring limits analyzing data or calculating probabilities Desmos lets you visualize Concepts in a way that equations alone can't let's dive in and see how these tools help me Teach math and will also help you truly understand math must know number one is the basics of using the graphing calculator in Graphing Calculator Basics Desmos so let's start by by going to desmos.com and then in the list of math tools let's choose the graphing calculator and in this graphing calculator I want to show you in this section how we can graph functions label them evaluate points and even solve equations by finding intersections so let me start actually by going into the graph settings this wrench that's in the top right and I'm going to make everything a little bit bigger so it's easier for you to see by clicking this bigger a and I'd also like to get rid of these minor grid lines as well and now what I'm going to do is start by graphing a quadratic function I'm going to graph the quadratic y equal x^2 - 4x + 3 and now to get this 2 as the exponent on the X I can use the button on my keyboard that looks like this or I can pop open the keyboard in the bottom left of the screen here and use this a squ button to write an exponent of two and in fact anytime you're trying to write a math symbol if you can't figure out how to use your keyboard you can pop open this keyboard and find the symbol and you can even Explore the different functions here that you can make use of in Desmos and now what I want to do instead of writing y equals this quadratic I'm going to change it to function notation I'm going to write f ofx equals this quadratic and now Desmos will recognize this as a named function so I can easily reference it for example if I wanted to know the yv value of this function when X is 4 I could just type F at 4 and notice Desmos tells me that when X is 4 Y is 3 
and if I click on this quadratic function anywhere I click it'll tell me the X and Y coordinates of the point that I'm clicking on the function so when X is 4 I should see that there's a y-coordinate of three and there it is right there and also because this function is now named if I wanted to graph its derivative that wouldn't be a problem either I could just write f prime of X and there's the graph of its derivative and I could also find the value of this derivative let's say I typed f prime at 2 it tells me that the value of the derivative when X is 2 is zero and now let me show you how we can have Desmos plot any point we want by typing an ordered XY pair for example I could type the ordered pair (5, 1) and you'll see that Desmos plots that point down here at (5, 1) or I could have Desmos plot a point on my quadratic function by typing the ordered pair (x, f(x)) for any x value that I want for example I could type (2, f(2)) as an ordered pair and Desmos will plot that point on my function at an x value of two and I see it's down here at the vertex (2, -1) and also notice if I click this label option underneath the point it defaults to labeling it with its X and Y coordinates and the colors and sizes of these points and our function can be adjusted by clicking this settings button this gear up here in the top of the equation editor then by clicking any of the circles beside my function or either of the two points I'll choose that vertex point (2, f(2)) I could change the size of the point right now it's set at a thickness of eight I could change it to 15 to make it bigger and I can change the size of its label right now the font is at size one I could double it making it size two if I wanted to and that other point (5, 1) something I could do with that point I could actually click this drag option and that allows me to take this point and move it anywhere I want which is a useful tool if I want to use it as a label for another point that point (2, -1) is the vertex so for this red point
So for this red point, I'll drag it right underneath that (2, 1) label, and then I'll add a custom label to it by clicking label and writing on this line; I'll write "vertex". Then I can hide the point itself by clicking this red circle beside the point, so I now only see that label, "vertex", which I might want to make bigger by clicking this gear, clicking the circle beside the point, and doubling the font size. Let me make it the same color as the point, so now I can see this can be used as a label for that point (2, 1). So that's one way we can manually add points to our function, but there's another option: clicking this settings gear again, notice that beside our quadratic function there's this create table button. If I click that, it creates a table of values for this function, for x-values between -2 and 2 by default, but I could add more or fewer just by adding x-values, and it'll calculate the y-values using my f(x) equation and plot all the points on my function that are in this table. Let me change their color so you can see them more clearly; I'll make them purple and a bit bigger, so all those purple points are from this table of values. Now let me graph another function on this grid. I'll give it a different name, g(x), and I'll make it a linear function, (1/2)x + 1. Graphing two functions is a great way to see when they're equal to each other, so if you had an equation that set the quadratic equal to the linear function, you could look on the graph to see where they intersect, and I see they intersect at x-values of one half and four. Those would be the x-values that make the equation true, which hopefully demonstrates the power of Desmos for solving equations graphically. Now let me clear these functions and show you one more thing: I'm going to graph a trig function this time, and whenever graphing trig functions you want to open the graph settings and choose whether you want to be graphing in
radians or degrees. I'm going to use degrees this time, and I'm going to show you the graph of f(x) = sin(x/2). Notice I'm not able to see all of the properties of this function well with the default scale for the x- and y-axes. You can try adjusting the scale by clicking these plus and minus zoom-in and zoom-out buttons, or by moving around the screen by clicking and dragging, but a better way to adjust the scale is to click the graph settings, the wrench in the top right, where you can manually choose the limits for the x- and y-axes. Since the period of this function is 720°, I want to be able to visualize at least two cycles, so I'm going to make the limits for the x-axis -1000 to 1000, and I'll also change the step, the increment each unit goes up by, to 180 so the segments of the cycles are easier to identify. Now, looking at this graph, I can more clearly see the properties of this periodic function. Must-know number two: restrictions on the domain of a function. Continuing to look at this exact same sine function: by default this function extends infinitely in both directions; I could keep scrolling forever to the right or forever to the left and this function just continues. But what if I only want to see one cycle of this sine wave? To do that I can restrict the domain: beside the function in my equation editor, in curly brackets, I can state the interval of x-values that I want displayed. So if I only want one cycle of this function, I can say I only want the x-values between 0 and 720 to be displayed. Or maybe instead of having just that one cycle displayed, you wanted one cycle highlighted; in that case, instead of restricting the domain of that function, we could have written another sine function, y = sin(x/2) again, and then restricted the domain of that one.
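In degree mode, sin(x/2) really does have period 720°, and a domain restriction just means refusing to evaluate outside the interval. A minimal sketch (the names sin_deg and f_restricted are mine, not Desmos syntax):

```python
import math

def sin_deg(x):
    # Sine evaluated with x in degrees, matching the calculator's degree mode.
    return math.sin(math.radians(x))

def f(x):
    return sin_deg(x / 2)  # period is 720 degrees

def f_restricted(x):
    # Mimics the Desmos restriction {0 <= x <= 720}: undefined elsewhere.
    return f(x) if 0 <= x <= 720 else math.nan

print(abs(f(90) - f(90 + 720)))  # ~0: values one full period apart agree
print(f_restricted(800))         # nan: outside the restricted domain
```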
In curly brackets I'll write that x has to be between 0 and 720, and now notice that one cycle of this function is highlighted in purple. And if you ever want to hide a graph, just click the button beside its equation and the graph disappears; click it again and it comes back. Now let me get rid of both of these trig functions, because I want to show you how restricting domains can also be useful for graphing piecewise functions. Let me reset my scale by clicking this house button, which brings me back to the defaults, though I'll have to go into the graph settings to change the step back to its default of 2; I'll actually make the step go by 1 for the x- and y-axes. So if I want to make a piecewise function, which is just a function made up of different functions over different parts of its domain, what I can do inside curly brackets is write condition one, colon, my first expression, comma, condition two, colon, my second expression, comma, and so on. So for a piecewise function, let's call it f(x), I can state inside curly brackets over which parts of its domain it equals which function: I'll say when x is less than or equal to zero, colon, I want it to look like the function x + 2, but when x is greater than zero, I want it to look like the square root function, √x, and I can find the square root symbol by popping open the keyboard; there it is right there. Let's make this function a different color by clicking the settings gear at the top and changing the black to red, and maybe make it a bit thicker, changing its thickness from 2.5 to 4. Must-know number three: sliders. Now let's talk about one of the most powerful features in Desmos: sliders. Sliders allow us to dynamically change values in our functions, making it easy to explore transformations and different patterns. We can add a slider for a variable just by typing any variable other than x and y and setting it equal to any constant.
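The piecewise rule just described, x + 2 for x ≤ 0 and √x for x > 0, translates directly into code; a quick sketch of the same function:

```python
import math

def f(x):
    # Piecewise definition matching {x <= 0: x + 2, x > 0: sqrt(x)}.
    if x <= 0:
        return x + 2
    return math.sqrt(x)

print(f(-1))  # left branch: -1 + 2 = 1
print(f(4))   # right branch: sqrt(4) = 2.0
print(f(0))   # the boundary belongs to the left branch: 2
```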
So, for example, I could type a = 1, and that sets up a slider for the variable a. By default, a can take on any value between -10 and 10, and notice it goes up by an increment of 0.1. I could change the range a can take on; if I want it to go from -20 to 20, I can just type that in here, and if I wanted the step to be a whole number, 1, I could type that as well, but I'll put it back to its defaults for now. Now what I can do is add a point that will trace along this piecewise function by making the coordinates of the point depend on the variable that has the slider. So I'll add the point (a, f(a)), have it labeled with its coordinates, and change the settings to make the point a little bigger, with a thickness of 12 and a bigger label as well. Now, as I change the value of a, you'll see that point dynamically move along the graph of this piecewise function, showing the value of the piecewise function at each point I scroll through. This tool with a piecewise function would be great for teaching limits: if I added a vertical line through x = a and a horizontal line through y = f(a), and then restricted their domains and ranges so they just connect to the x- and y-axes, I can easily demonstrate the limit of this function as x approaches zero from both the left and the right. As x approaches zero from the left, I see the y-value approaching 2, and as x approaches zero from the right, I see the y-value approaching 0. Now let me clear all of this and show you how we can use sliders to demonstrate transformations of functions. Let me write the vertex form equation of a quadratic, f(x) = a(x - h)² + k. Notice Desmos offers me the option to make sliders for those parameters a, h, and k, so I'll click "all" and it adds sliders for all of them. Let me set h and k back to zero, and there we just have the normal quadratic, y = x².
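Vertex form makes the three transformations explicit: a stretches (or reflects) vertically, h shifts horizontally, and k shifts vertically, with the vertex always landing at (h, k). A small sketch with sample parameter values (a = 2, h = 3, k = 1 are my own picks, not from the video):

```python
def vertex_form(x, a, h, k):
    # f(x) = a(x - h)^2 + k: vertex at (h, k), vertical stretch by a.
    return a * (x - h) ** 2 + k

# With a=2, h=3, k=1 the vertex sits at (3, 1) ...
print(vertex_form(3, a=2, h=3, k=1))   # 1
# ... and one unit away from the vertex the output rises by a:
print(vertex_form(4, a=2, h=3, k=1))   # 1 + 2 = 3
# A negative a reflects the parabola vertically:
print(vertex_form(4, a=-2, h=3, k=1))  # 1 - 2 = -1
```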
Let me change its color, and now I can play around with these sliders for a, h, and k to see what kinds of transformations they do to this x² function. I can see that the a-value does vertical stretches and compressions and can also do a vertical reflection, the h-value shifts the graph right and left, and the k-value shifts it up and down. If you wanted to, you could also have a slider move automatically just by clicking the play button beside any of the variables. Being able to dynamically change the values of variables using sliders has also been fundamental to how I explain key concepts in calculus: for Newton's quotient, showing that the slope of the tangent line at a point equals the limit of the secant slopes as the change in x between the points approaches zero; or that a definite integral, which gives the area under a curve, is just the limit of a Riemann sum as the number of rectangles filling the area under the curve approaches infinity; or even just as a tool to show the slope of the tangent line at any point along a function. Must-know number four: shading and inequalities. To graph an inequality, simply use an inequality symbol instead of an equals sign. For example, notice that when graphing y > x + 2, Desmos shades the region above the line y = x + 2, because all of the points in that region satisfy the inequality. Let me plot a point in that region and make it movable by clicking this settings gear, clicking on that point, and toggling on drag; I'll also make the label a bit bigger. Notice that any point in this shaded region satisfies the original inequality: a y-value of 2 is greater than the x-value of -4 plus 2, that is, 2 > -2. Another thing to notice is that Desmos makes the boundary line dashed.
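Checking whether a point lies in the shaded region is just evaluating the inequality at its coordinates; the dashed boundary corresponds to the strict comparison. A quick sketch:

```python
def in_region_strict(x, y):
    # Strict inequality y > x + 2: the dashed boundary line is excluded.
    return y > x + 2

def in_region_inclusive(x, y):
    # y >= x + 2: the solid boundary line is included.
    return y >= x + 2

print(in_region_strict(-4, 2))    # True: 2 > -4 + 2 = -2
print(in_region_strict(0, 2))     # False: (0, 2) sits on the dashed line
print(in_region_inclusive(0, 2))  # True: the solid line includes it
```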
But if I change the inequality from y > x + 2 to y ≥ x + 2 by typing an equals sign here, it changes to a solid line, showing that all of the points on that line are now included in our solution set. And let's say I was trying to solve a compound inequality: I can actually graph multiple inequalities on this grid to find overlapping solution regions. For example, I could add the inequality y < -x - 2, and I can see that all of the points in the overlapping region satisfy both inequalities. Not only does Desmos make it easy to visualize solutions to inequalities, but you can also use this shading technique, along with sliders and restricted domains, to design some incredible artwork. If we go back to the Desmos homepage, notice there's an option at the top to see a gallery of artwork that has been submitted to Desmos. If I click Gallery, I can scroll through some of these amazing submissions to see the power of Desmos: for example, these flags show that Desmos can be used to graph flags of different countries from all over the world; here's a submission with a picture of Earth and a slider that lets you rotate it; here's an amazing one that makes use of the 3D graphing calculator and lets you use a slider to watch the season change in the scene; and here's one more fun one showing some line art that makes use of lots of different sliders (I'll speed it up so you can see what happens). Must-know number five: how to use Desmos for statistics. In Desmos we can store data as a list using square brackets. For example, let's create a list of numbers representing the test scores on my first unit test in my calculus class: I'll say a equals, and then in square brackets I'll list all the test scores for the students in my class, with each test score separated by a comma. Now that the list of test scores is typed, there are different calculations I can do to analyze that list; for example, I could find the average of those test scores by
typing mean and then, in brackets, making its argument my list name, a. That tells me the average test score was about 83%. I could also find the median test score by typing median(a), or the standard deviation of the test scores by typing stdev(a), and there are many other calculations we could do. To see a full list of them, so you don't have to remember all of these functions, just pop open the keyboard, click functions, and scroll to the section on statistics, where you can see all the different types of calculations available; for example, I could use the count function on my list a, and it tells me that 24 students took that test. Now let's visualize this data. If I go back into the functions and scroll to the part about visualizations, I see there are three different graphs I could use to visualize my data. Let's start by making a box plot: I could click this boxplot button, or I could have typed it manually, and then I need to make a box plot for the list of data stored in a, so boxplot(a). Now, on the graph I don't see the box plot, and that's because there were no test scores this low; the box plot is way over here somewhere. Instead of having to scroll all the way over, if I go to the equation editor, in the section where I typed boxplot, I see in the bottom left there's this zoom fit button; if I click it, it adjusts the scale of my x- and y-axes so I can see the box plot very clearly. And if I'd rather have a histogram than a box plot, I'll x out the box plot and get a histogram for the list stored in a. I can see in the equation editor that its arguments should be the data set, a, comma, the bin width, which defaults to 1; I think I'm going to make the bin width bigger, say 10, and then I'll want to zoom in so I can see all of these bars, so I'll hit the zoom fit button in the bottom left, and there's the histogram of my test scores. So these were the statistical analysis tools you can use when you have univariate data.
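Python's standard library mirrors these list calculations. The actual class scores aren't shown on screen, so the list below is a made-up stand-in:

```python
import statistics

# Hypothetical test scores; the real class list isn't shown in the video.
scores = [72, 85, 91, 64, 88, 95, 78]

print(statistics.mean(scores))    # average, like mean(a) in Desmos
print(statistics.median(scores))  # middle score, like median(a)
print(statistics.stdev(scores))   # sample standard deviation, like stdev(a)
print(len(scores))                # number of scores, like count(a)
```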
But what if we have bivariate data? Let me clear all of this and re-center my grid. If we have bivariate data, we can visualize it with a scatter plot simply by creating a table of values for the data: I can click the plus sign in the top left above the equation editor, choose table, and enter some data for whatever my variables x and y are. Notice that as soon as I start entering values, Desmos starts plotting the points on the grid; I'll click the zoom fit button so I can see all of the points clearly, and let me make the points a bit bigger by clicking the settings gear, clicking this red circle, and changing the thickness from 8 to 15. If we suspect a pattern in this data, we can perform a regression. For example, if I think this data follows a linear trend, I could regress it against a linear function: I'd write y1, then use a tilde to say "approximately follows", and then write a linear function, m x1 + b. When I do that, notice it actually gives me the r and R² values, and it gives me a plot option here to plot the residual values for each point. The green dots are the residuals; their values, which Desmos added to the table automatically, just tell me each point's distance above or below the line of best fit. But there may be a better regression model than this linear one. Let me get rid of it; instead of manually typing different functions to regress this data against, we can click this add regression button beside the table, and if I toggle this list there are a bunch of different regression models I can scroll through. I could change it to a quadratic regression, which has a better R² value, or even a cubic regression, whose R² value is very high, so that's a great regression model for this data; notice that if I plot the residuals, their values are all very small.
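Behind Desmos's y1 ~ m·x1 + b is ordinary least squares, which has a closed form for the line of best fit. A sketch on made-up data (the on-screen table isn't reproduced here):

```python
# Hypothetical bivariate data; not the video's actual table.
xs = [1, 2, 3, 4]
ys = [2.1, 3.9, 6.2, 7.8]

n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n

# Closed-form least-squares slope and intercept.
m = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
    / sum((x - x_mean) ** 2 for x in xs)
b = y_mean - m * x_mean

# Residuals measure each point's distance above or below the fitted line.
residuals = [y - (m * x + b) for x, y in zip(xs, ys)]
ss_res = sum(r ** 2 for r in residuals)
ss_tot = sum((y - y_mean) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot

print(round(m, 2), round(b, 2))  # slope and intercept
print(round(r_squared, 3))       # close to 1 for a strong linear fit
```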
Must-know number six: probability tools. Desmos has built-in probability functions that allow us to simulate randomness, visualize probability distributions, and calculate probabilities; whether you're working with discrete or continuous probability distributions, Desmos provides an intuitive way to explore these concepts. Let's start by looking at a binomial distribution. In Desmos, you can visualize the probability mass function for any binomial distribution by inputting binomialdist and giving it two arguments, the number of trials and the probability of success. So let's create a binomial distribution for an experiment with 10 trials and a probability of success of 0.3: I'll input binomialdist(10, 0.3), hit the zoom fit button, and there's my probability mass function. Let me give this distribution a name, a, so that I can reference it for calculations. To find the probability of having any number of successes in 10 trials, I can just look at this probability mass function; for example, for the probability of two successes in 10 trials, if I click on the point at an x-value of 2, I can see the probability is about 23%, or I could type a.pdf(2) in the equation editor and it gives me that same value of 23%. And if I wanted the probability of a range of successes, I could use the cumulative distribution function, or CDF. For example, if I wanted the probability of having less than or equal to six successes in 10 trials, I could just type a.cdf(6), and there's almost a 99% chance of at most six successes in 10 trials.
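The same pmf and cdf values can be reproduced from the binomial formula P(X = k) = C(n, k)·p^k·(1 − p)^(n − k); a sketch for n = 10, p = 0.3:

```python
import math

def binom_pdf(k, n=10, p=0.3):
    # P(X = k) for a binomial distribution, like a.pdf(k) in Desmos.
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def binom_cdf(k, n=10, p=0.3):
    # P(X <= k), like a.cdf(k): just a sum of pmf terms.
    return sum(binom_pdf(i, n, p) for i in range(k + 1))

print(round(binom_pdf(2), 4))  # ~0.2335, the "about 23%" from the graph
print(round(binom_cdf(6), 4))  # ~0.9894, almost a 99% chance
```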
Or let's say I wanted the probability of between four and six successes, including four and six: I could type a.cdf with two arguments, the lower and upper boundaries, 4 and 6, and that'd be a lot faster than adding three pdf calculations together; it's the same as doing a.pdf of 4, 5, and 6 added together. Now let me get rid of all this and look at a continuous probability distribution. If I had a set of normally distributed data, say IQ scores with a mean of 100 and a standard deviation of 15, we can visualize its probability density function by typing normaldist, for normal distribution, and then inputting two arguments, the mean and standard deviation: normaldist(100, 15). I'll hit the zoom fit button and name this function a so that I can reference it for calculations. The total area under this curve is 1, which means the area between two points represents the probability that a random person's IQ falls within that interval. So if I wanted to calculate the probability of a random person's IQ score being less than 90, I could calculate it by typing a.cdf(90), which gives about 25.2%. But I think a better way to calculate this is by checking the box underneath the normal distribution function that says "find cumulative probability": if I do that, I can set the minimum and maximum values, so I'll set it from negative infinity up to 90, and then on the graph it actually shades that area for me, so I can see the area under the curve to the left of 90 is 0.2525.
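For a normal distribution, the cdf can be written with the error function: Φ(x) = ½(1 + erf((x − μ)/(σ√2))). A sketch reproducing the IQ calculation:

```python
import math

def normal_cdf(x, mu=100, sigma=15):
    # P(X <= x) for a normal distribution, like a.cdf(x) in Desmos.
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

print(round(normal_cdf(90), 4))                     # ~0.2525: P(IQ < 90)
print(round(normal_cdf(140) - normal_cdf(120), 4))  # P(120 < IQ < 140)
```

Subtracting two cdf values is exactly what a two-argument a.cdf(lower, upper) computes.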
And let's say I wanted to calculate the probability of a random person's IQ being between 120 and 140: I could type a.cdf with two arguments, 120 and 140, the lower and upper boundaries, or I could just do that up here by changing the min value to 120 and the max value to 140, and that way I can actually see the shaded area on my graph. Now let me show you how we could set these upper and lower boundaries to variables to make this a more dynamic visualization. I can change my min to a and my max to b, add sliders for them, change their min and max values, and then adjust these sliders to find the area between any two a and b values. It might be useful to actually plot the points a and b and give them labels based on just their a and b values: if I click the label icons, it's going to label them as coordinates, but I just want point a labeled as 79, not (79, 0), and I want it to change dynamically with a. So what I can do is type a dollar sign and then a inside curly brackets, ${a}, which gives it a label equal to the value of a, and I'll do the same thing for b. Now, as I change a and b, those labels change and the area gets shaded between the a and b values. And then maybe I actually want this area to be labeled somewhere.
Well, let me create a function that calculates the area: I'll say B = a.cdf(a, b), which finds the area between a and b. Then I want a point that shows that value, so I'll make a point somewhere on my graph, say (50, 0.02), and for its label I want it to say "area =" followed by the value of that B function I created, so a dollar sign and then B in curly brackets, ${B}. If I want to, I can make this label bigger, and I'll uncheck the point so I don't see the dot, just the area label; then, as I change a and b, the area calculation is recorded on the screen. Now, one more cool thing I want to show you about probabilities: let's say we want to generate a random IQ score from this set of data. I could type a.random() and it gives me a random IQ score based on this normally distributed data; if I wanted three random IQ scores, I could just change the argument to 3, and it gives me a list of three random IQ scores. Another useful random function (let me get rid of this one): let's say I wanted to generate any random number within a given interval. If I type random(), Desmos defaults to giving a random value between 0 and 1; if I wanted three random values, once again just change the argument to 3. Most of the time, though, you'd want a random integer between a lower and an upper boundary. To do that we could type floor(100·random()) + 1, and we'll just get one value for now; this expression gives me one random integer between 1 and 100. If I wanted it between 1 and 200, I'd just change that first value to 200, but I'll put it back to 100. And let's say I wanted 10 random numbers between 1 and 100: I'll just change the argument on random to 10, and now I have a list of 10 random integers between 1 and 100, and we can make this list as big as we want. Must-know number seven: the other calculators available in Desmos.
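The floor(100·random()) + 1 trick maps a uniform value in [0, 1) onto the integers 1 through 100; in Python the same idea looks like this (randint does it directly):

```python
import math
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Desmos-style: scale a uniform [0, 1) value, floor it, shift by 1.
values = [math.floor(100 * random.random()) + 1 for _ in range(10)]
print(values)  # ten random integers, each between 1 and 100

# Python's shortcut for the same thing:
print(random.randint(1, 100))
```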
Beyond its powerful graphing calculator, Desmos offers several other specialized calculators that can be incredibly useful for different areas of math, so let's take a quick look at them. There's the four-function calculator, a simple and user-friendly tool designed for basic arithmetic operations such as addition, subtraction, multiplication, and division; it's great for elementary math students or for quick calculations without extra complexity. If I click on math tools up top, I can see there's also a scientific calculator, which is perfect for more advanced calculations including exponents, square roots, trig functions, and logarithms. Back in the math tools, the next one I want to show you is the matrix calculator, which is designed for working with matrices, making it useful for linear algebra and higher-level math courses. I can create a matrix just by clicking this new matrix button and choosing how many rows and columns the matrix has. If I had a linear system of three equations in three variables, I could create an augmented matrix for it by inputting its values into matrix A here in this calculator, press enter to store the matrix in memory, and then put the matrix into reduced row echelon form by inputting rref with an argument of A; the fourth column then gives me the solution to this system of equations, (2, 3, 1). But also, if I clear all of this, we can do simpler calculations with matrices as well: I'll create two matrices, and then I can do lots of calculations with them, such as adding the matrices, A + B, multiplying them together, or subtracting them; or, with one of the matrices, I could transpose it, find its inverse, or find the determinant of matrix A. So these additional calculators ensure that Desmos isn't just for graphing; it's a tool for all levels of math, from basic arithmetic to advanced algebra and calculus. Must-know number eight: the geometry tool.
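Reduced row echelon form is just systematic Gauss-Jordan elimination. The system from the video isn't shown on screen, so the augmented matrix below is a hypothetical one built to have the same solution, (2, 3, 1):

```python
def rref(m):
    # In-place Gauss-Jordan elimination to reduced row echelon form.
    rows, cols = len(m), len(m[0])
    pivot = 0
    for col in range(cols):
        if pivot >= rows:
            break
        # Find a row at or below the pivot with a nonzero entry in this column.
        row = next((r for r in range(pivot, rows) if abs(m[r][col]) > 1e-12), None)
        if row is None:
            continue
        m[pivot], m[row] = m[row], m[pivot]
        # Scale the pivot row so the pivot entry is 1.
        m[pivot] = [v / m[pivot][col] for v in m[pivot]]
        # Zero out this column in every other row.
        for r in range(rows):
            if r != pivot and abs(m[r][col]) > 1e-12:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
    return m

# Hypothetical augmented matrix whose unique solution is x=2, y=3, z=1:
#   x + y + z = 6,  x - y + z = 0,  x + 2y + 3z = 11
a = [[1, 1, 1, 6], [1, -1, 1, 0], [1, 2, 3, 11]]
print([round(row[-1], 6) for row in rref(a)])  # the fourth column: [2.0, 3.0, 1.0]
```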
I'll open it here, and I'm going to start by going into the graph settings, adding a grid and axes, and enlarging it. I'm going to show you a bunch of the different functions available here for geometric shapes, and then I think a good way to show you what this tool is useful for is an example I do with one of my grade 10 math classes: let's find the circumcenter of a triangle with vertices at A(2, 0), B(2, 6), and C(10, -2). Instead of typing it like I typed A and B, another option is to use the toolbar at the top: click the point tool and then plot the point anywhere on the graph we want. So to plot at (10, -2) I might have to move over a little with my graph, click the point option, and plot a point at (10, -2). Now, it didn't show up in the equation editor, but if I click this little geometry button here, I'll notice that it did make this point; if I drag it down to the equation editor, I see it was named P1. Then I can give each of these points labels: I could label them based on their coordinates, or give them names based on the names of the points, A, B, and C. Now, if I open up the keyboard in the bottom left, click on functions, and scroll down to the geometry section, I see there are a bunch of different geometry tools I can use and different measurements I can make in this geometry calculator; but instead of having to remember these functions and what arguments they take, it's often easier to just use the toolbar at the top when constructing shapes or doing calculations with them. So, for example, to create the triangle by making the three line segments that make it up: to create the segment from vertex A to vertex B, I could type segment(A, B), or, instead of typing it, I could just click this line segment option in the toolbar and then click the start and end points, A and B, and I can do that for all three sides of this triangle.
Then I'll bring these three lines down into the equation editor. I said we want the circumcenter of this triangle, the point where the right bisectors (perpendicular bisectors) of the three sides intersect, so I'm going to need the midpoints of all three sides. I could type midpoint(line1) and it gives me the midpoint of line 1, or, instead of typing it, I could go up to the toolbar, click the point tool, click it again to drop down the menu, click midpoint, and then just click the line I want the midpoint of. I'll do that for all three of them, and notice the midpoints have been plotted for all three lines; I can drag those three points down to see their labels. Now I want the line perpendicular to each of the three sides through those midpoints, so I can click the line segment tool, click it again to drop down the menu, click perpendicular, then click the line I want it to be perpendicular to and drag down to the point I want it to go through. I'll do that for all three sides so I have all of their right bisectors, and the circumcenter is at the point where those right bisectors intersect. I can label that point just by clicking the point option and clicking where the lines intersect; I'll drag that point down, and maybe change its settings by clicking the settings gear, clicking on that intersection point, making the point bigger, maybe changing its color, and labeling it with its coordinates by checking the label box. So the circumcenter is at (7, 3), and it's called the circumcenter because if I created a circle with the circle tool centered at that (7, 3), I'd notice that if I dragged it out, the circle would pass through all three vertices of the triangle at the same time; that's why it's called the circumcenter of the triangle. There are a couple of other useful tools; let me delete all of this except for the original three points.
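The circumcenter can also be computed directly: it's the point equidistant from all three vertices, and solving the perpendicular-bisector equations gives a closed form. A sketch for A(2, 0), B(2, 6), C(10, -2):

```python
def circumcenter(a, b, c):
    # Closed-form intersection of the perpendicular (right) bisectors.
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy

center = circumcenter((2, 0), (2, 6), (10, -2))
print(center)  # matches the point found in Desmos: (7.0, 3.0)

# Sanity check: the circumcenter is equidistant from all three vertices,
# so one circle through it passes through A, B, and C at once.
dist2 = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
print([dist2(center, v) for v in [(2, 0), (2, 6), (10, -2)]])
```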
Instead of constructing the triangle by making all three line segments separately, I could choose this polygon option and click on the three points that form the polygon; then I should drag this polygon down so I can see its name, because I'll reference it for what I'm doing next: making this triangle bigger or smaller using the dilate function. The dilated version needs a name, so I'll say D = dilate, and it takes three arguments: the polygon that is going to be scaled up or down, polygon1; a point where I want the center of the dilation; and the scale factor, which I'll call a for now and add a slider for, so you can see the polygon get bigger or smaller as I move the slider. Another option, instead of dilating the polygon, is to rotate it: I'll say D = rotate, with polygon1, then click the point about which I want to rotate it, say the center of this triangle, and then give the number of degrees to rotate. I could rotate it 180° by typing 180, or once again use a slider: I'll put in a variable a, add a slider for it, and notice that as I move the slider, it rotates the shape. Must-know number nine: the 3D graphing calculator. Let me click the 3D option. This is the most exciting new addition to Desmos, and it allows us to visualize functions, surfaces, and vectors in three-dimensional space. In this 3D calculator we can plot equations in terms of x, y, and z, just like in 2D but with an added depth dimension. For example, if I enter the equation z = x² + y², it generates a paraboloid, a curved surface extending upward from the origin, and I can rotate around it and zoom in to better understand the properties of this function. I'll clear this, and another thing we can do is graph systems of planes and explore their intersections.
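Both dilate and rotate are simple coordinate transformations about a center point; a sketch of what they do to a single vertex (apply them to every vertex to transform a whole polygon):

```python
import math

def dilate(p, center, k):
    # Scale point p away from (or toward) center by factor k.
    return (center[0] + k * (p[0] - center[0]),
            center[1] + k * (p[1] - center[1]))

def rotate(p, center, degrees):
    # Rotate point p about center by the given angle in degrees.
    t = math.radians(degrees)
    dx, dy = p[0] - center[0], p[1] - center[1]
    return (center[0] + dx * math.cos(t) - dy * math.sin(t),
            center[1] + dx * math.sin(t) + dy * math.cos(t))

print(dilate((4, 0), (0, 0), 2))    # (8, 0): twice as far from the center
print(rotate((4, 0), (0, 0), 180))  # approximately (-4, 0)
```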
For example, if I graph a standard form equation with three variables, it graphs the plane defined by that equation, and if I write the equations of a couple more planes, I can rotate this graph around and see from different perspectives how the planes intersect; I can see these three planes intersect at a single point. Clearing these, I'll show you one more thing with the 3D graphing calculator: how to graph vectors. We'd have to start by plotting different points in 3-space: I can plot point A by saying A equals and writing its x-, y-, and z-coordinates, say 1, 2, and 3. Notice it plots that point with x- and y-coordinates of 1 and 2, but also with that z-coordinate, the added depth of 3. Then I'll plot another point, B, and I can create a vector connecting those two points by writing vector and then the start and end points, A, B, and it draws the vector connecting them. If I had one more point, C, I could create another vector that goes from A to C instead of from A to B. If I give those two vectors names, U and V, I can do calculations with them: U + V graphs the sum of the vectors, or U - V their difference, and if I choose the multiplication operation between the two vectors, it does the dot product. If I wanted the cross product, I'd actually have to type the word cross, and it gives me the vector perpendicular to both U and V. Must-know number ten: the teacher tools available in Desmos. If I scroll down on the homepage, I can go to the teacher home, where there are lots of amazing pre-made resources you can use, or you can even create your own activities. What I like to do is scroll through, go to the featured collections, and look, by grade level or by topic, at what pre-made activities I could use with my students.
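Dot and cross products follow directly from the component formulas. Point A(1, 2, 3) is from the video; B and C aren't given on screen, so the ones below are hypothetical:

```python
A = (1, 2, 3)
B = (4, 0, 5)  # hypothetical endpoints; not the video's actual points
C = (2, 5, 1)

def vector(p, q):
    # Component form of the vector from point p to point q.
    return tuple(qi - pi for pi, qi in zip(p, q))

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

U, V = vector(A, B), vector(A, C)
print(dot(U, V))             # scalar: the dot product
W = cross(U, V)
print(W)                     # a vector perpendicular to both U and V
print(dot(W, U), dot(W, V))  # both 0, confirming perpendicularity
```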
go to let's say Algebra 2 I can see a bunch of activities suitable for Algebra 2 and one of my favorite activities are these marble slides and if I click on an activity and what I can do here is I can actually preview this activity I notice there are 24 cards in this activity in each of these cards they're going to be launching Marbles and trying to collect Stars I can go through each of the different cards in this activity and see what the students are going to have to do like for example here if I launch the marbles it's not going to collect all the stars but it says to change the equation of that exponential function to collect them all so I think if I just horizontally reflect this exponential function it should be able to collect those stars and then if I like this activity there's a couple things I can do I can click this plus sign and add it to one of my own personal collections or I can assign it to my class I can assign it to a class that I've already pre-made or I could just create a single session code that way students don't have to create accounts now I won't be able to track their data as in detail as I would be able to if they were in my class but this makes it a lot faster to get students actually engaged in doing the activity and going back to Desmos classroom and back to featured collections if I scroll way down to the bottom I want to show you one of my other favorite activities used to do with students this one's called Escape rooms there's two different Escape rooms that students can try and if you preview them you can see that they're actually really fun to do you can go through in these rooms and try and figure how you can escape the house like finding the Ice Cube collecting it seeing that there might be a key inside these ice cubes so taking it over to the water melting the Ice Cube and having a key and then eventually you'll discover that there is this hidden room under the couch you can use the key to get downstairs and then there are so 
many different rooms in this escape room and students will have to play around and use their problem solving skills to actually get out of this house so I would highly recommend looking through all of these different featured pre-made activities there's lots of different great ones that'll be great for your classes and going back to Desmos classroom there's one more thing I want to show you there's this new thing called polypad manipulatives that also might be used to you I'll launch the polypad and then I see there's a ton of Cool Tools in here lots of things we can do with numbers fractions algebra probability lots of cool things let me just show you a really simple example so you can see how this works let's say you're just working on adding numbers if I was just adding 17 and 16 I could type that equation and I could represent each of those numbers with number tiles if I wanted to if I have 17 that's a 10 and 7 1es and 16 as well or I could do it with number cards if I wanted to and then I could show that when adding them together I can take my 17 and then add the 16 I could take three of the ones from the 16 add them to the 17 and then I have another group of 10 and then combining these together I see that I have 33 as my answer or with the number cards I can split each of those number cards and then combining these together I see that the seven and the six make 13 which I could split into a 10 and a three so I have 3 t and a three making if I merge them together 33 so lots of different ways to show why this is 33 and if you want to test out if you're right you could even in this algebra section choose this balance scale and then put the question on one side and what we think the answer is on the other side and show that it's perfectly balanced so I hope this video has inspired both students learning math and teachers teaching math to make the most of Desmos whether you're solving equations analyzing data or exploring probability Desmos brings mathematics to 
life in a way that makes learning more engaging and meaningful J |
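The vector arithmetic the 3D calculator performs above (sum, difference, dot product, and cross product) can be mirrored in plain Python. This is a sketch with made-up points, not code from Desmos; the function names are mine.

```python
# Vector arithmetic for 3D vectors built from points, as in the Desmos demo.

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def sub(u, v):
    # Vector from the first point to the second, e.g. u = B - A.
    return [b - a for a, b in zip(u, v)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    # Returns a vector perpendicular to both u and v,
    # like the calculator's "cross" command.
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

# Example points (chosen arbitrarily for illustration).
A, B, C = [1, 2, 3], [4, 0, 1], [2, 5, 7]
u = sub(A, B)   # vector from A to B: [3, -2, -2]
v = sub(A, C)   # vector from A to C: [1, 3, 4]

print(add(u, v))    # [4, 1, 2]
print(dot(u, v))    # -11
print(cross(u, v))  # [-2, -14, 11], perpendicular to both u and v
```

Checking `dot(cross(u, v), u)` and `dot(cross(u, v), v)` both give 0, confirming the perpendicularity the video points out.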
189522 | https://www.forbes.com/sites/matthewroberts/2025/06/24/5-things-you-should-know-about-the-irs-bba-partnership-audit-rules/ | 5 Things You Should Know About The IRS BBA Partnership Audit Rules
By Matthew L. Roberts, J.D., LL.M., Contributor.
Partnerships are an enigma under federal tax law. Although the partnership files an annual income tax return (i.e., Form 1065), the partners report their allocable share of the partnership’s tax items on their income tax returns (e.g., Form 1040). Due to the complexity inherent in partnership income tax reporting, Congress has historically struggled in attempting to find an appropriate examination tool to provide to the IRS to audit partnerships.
After more than three decades under the Tax Equity and Fiscal Responsibility Act of 1982, Congress changed the partnership audit and collection rules through passage of the Bipartisan Budget Act of 2015. Under the BBA, the IRS must generally audit the partnership unless the partnership qualifies for and makes a timely election out of the BBA centralized partnership audit regime. Significantly, the BBA audit provisions also allow the IRS to collect taxes directly from the partnership unless the partnership makes a timely election to “push out” the adjustments to its partners.
The new BBA partnership audit rules are complex and provide ample opportunities to mess up, including missing an election. This article discusses five components of the BBA audit provisions that every tax professional should recognize and understand.
BBA Partnership Audit Notices
The IRS generally issues four notices during a BBA partnership audit. These notices include: (i) notices of selection for examination; (ii) notices of administrative proceeding (NAP); (iii) notices of proposed partnership adjustment (NOPPA); and (iv) notices of final partnership adjustments (FPA).
To commence a BBA examination, the IRS issues the partnership a notice of selection for examination. Roughly thirty days after this notice, the IRS issues the NAP. After the NAP is issued, neither the partnership nor its partners may file an administrative adjustment request or notice of inconsistent statement, either of which often seeks to change the partnership’s income tax reporting.
If the IRS examiner concludes that adjustments to the partnership return are necessary, the agency will issue a NOPPA detailing the proposed partnership adjustments. As discussed more below, the IRS will first allow the partnership an opportunity for an administrative appeal prior to issuance of the NOPPA. After issuance of the NOPPA, the partnership has a 270-day window to request modifications to the proposed partnership-level tax, which is known as an “imputed underpayment.” Generally, the partnership representative makes the modification requests by electronically filing an IRS Form 8980, Partnership Request for Modification of Imputed Underpayment Under IRC Section 6225(c).
If the partnership and the IRS continue to disagree on the proposed adjustments, the IRS issues an FPA. The FPA triggers two important deadlines. First, the partnership representative may elect to “push out” the FPA’s adjustments to the partners if an election is made within 45 days of the FPA. Second, the FPA starts a 90-day deadline for the partnership representative to contest the FPA’s determinations in federal court.
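The two clocks that start when the FPA is issued can be sketched with simple date arithmetic. This is illustrative only: the FPA date below is hypothetical, and actual statutory deadlines can be affected by rules this sketch ignores (for example, deadlines falling on weekends or holidays).

```python
from datetime import date, timedelta

def bba_fpa_deadlines(fpa_date: date) -> dict:
    """Illustrative sketch of the two deadlines triggered by an FPA:
    45 days to elect to "push out" the adjustments to the partners, and
    90 days to file a petition for readjustment in federal court."""
    return {
        "push_out_election": fpa_date + timedelta(days=45),
        "court_petition": fpa_date + timedelta(days=90),
    }

# Hypothetical FPA date, for demonstration only.
deadlines = bba_fpa_deadlines(date(2025, 3, 3))
print(deadlines["push_out_election"])  # 2025-04-17
print(deadlines["court_petition"])     # 2025-06-01
```

Because the 45-day window is so short, partnerships would typically want advisers engaged before it opens.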
BBA Partnership Push-Out Election
A timely push-out election can significantly reduce overall income tax. If the partnership representative makes the election, any proposed adjustments resulting in an imputed underpayment are pushed out to the reviewed-year partners, i.e., the persons who were partners for the year under IRS scrutiny. Because a push-out election results in a higher applicable interest rate, however, partnerships should consult with their tax advisors to determine the impact of the push-out election prior to making it. Given the 45-day deadline, there is not much time here to make the analysis—so tax advisers should be engaged early on after the IRS issues the FPA.
A partnership representative makes a push-out election by completing and electronically filing an IRS Form 8988, Election for Alternative to Payment of the Imputed Underpayment – IRC Section 6226. In addition to filing this form, the partnership representative must provide the partners with certain information concerning the push-out adjustments. These push-out statements must be provided to the partners generally within 150 days of the FPA if the partnership representative accepts the proposed adjustments and does not seek judicial review. If the partnership representative files a timely petition for readjustment in federal court, the push-out statements must generally be provided to the partners within 60 days from the date the court enters its final decision.
In either instance, the partnership representative provides its partners with IRS Forms 8985, Pass-Through – Statement Transmittal / Partnership Adjustment Tracking Report (Required Under Sections 6226 and 6227), and 8986, Partner’s Share of Adjustment(s) to Partnership-Related Items(s) (Required Under Sections 6226 and 6227). If these statements are not provided timely, the IRS may attempt to revoke the push-out election.
BBA Partnership Audits And Deposits
A BBA partnership dispute can last a long time—even more so if the partnership representative contests the proposed adjustments in federal court. If the partnership representative makes a push-out election and ultimately loses on the merits at federal court, the partners may be responsible for significant interest on the resulting income taxes.
Section 6603 of the Code, which governs deposits, may be helpful here. When a taxpayer makes a deposit, it stops interest from accruing on potential taxes owed. BBA partners can make deposits of tax to stop interest, but they must follow special rules.
Under IRS guidance, a BBA partner can make a section 6603 deposit by submitting a payment of the estimated tax and submitting a statement to the IRS designating the payment as a deposit. In the statement, the partner should include: (i) the name and TIN of the partnership under examination; (ii) the reviewed year of the partnership under examination; (iii) the audit control number of the partnership under examination; (iv) a statement of the amount and basis of the disputable tax; and (v) the partner’s estimated allocable share of the adjustments and the tax, interest, and penalty computations.
IRS Appeals Rights In BBA Partnership Audits
The IRS Independent Office of Appeals (IRS Appeals) provides taxpayers with an impartial administrative forum to resolve their tax disputes with the IRS. IRS Appeals hears non-docketed cases and docketed cases. Non-docketed cases are those, as applicable to BBA partnership audits, where no petition for readjustment has been filed. Docketed cases are those in which a petition has been filed and is pending in federal court.
Generally, the IRS will issue a “30-Day Letter” to the partnership representative after the conclusion of the examination. The 30-Day Letter notifies the partnership of the proposed partnership adjustments and offers the partnership a right to appeal the adjustments with IRS Appeals. To request an appeals conference, the partnership representative must submit a timely protest. In addition, IRS Appeals will only accept cases where there is sufficient time remaining on the statute of limitations for the IRS to make an assessment. Accordingly, the IRS often asks for a statute of limitations extension waiver from the partnership representative in these circumstances.
If the partnership representative submits a timely protest, the partnership has an opportunity to discuss disputes associated with the proposed adjustments with IRS Appeals. These disputes can relate to issues of fact or law. IRS Appeals reviews the parties’ contentions to determine whether a settlement may be reached without judicial intervention.
Regardless of settlement, IRS Appeals issues the NOPPA at the conclusion of the appeals conference, which as mentioned above triggers the 270-day modification period. If the partnership representative requests modifications and the IRS refuses to grant them, the case may be forwarded again to IRS Appeals solely to review the modification requests. Thereafter, IRS Appeals issues the FPA.
Similar to non-docketed cases, IRS Appeals seeks to resolve disputes between the partnership representative and the IRS in docketed cases.
BBA Partnership Audits And Judicial Review
When the agency issues an FPA, the partnership representative has 90 days to file a petition for readjustment with the proper federal court, which is either the U.S. Tax Court, the district court in which the partnership’s principal place of business is located, or the Court of Federal Claims. Partnerships do not have to pay the imputed underpayment prior to filing a petition in the U.S. Tax Court. But for a federal district court or the Court of Federal Claims to have jurisdiction, the partnership must make a deposit of the proposed imputed underpayment with the IRS on or before the petition filing date. By statute, the partnership must also pay any proposed penalties and “additional amounts.” |
189523 | https://baike.baidu.com/item/%E7%9B%B8%E9%81%87%E9%97%AE%E9%A2%98/659180 | Encounter Problem (相遇问题) – Baidu Baike
Encounter Problem (相遇问题)
A mathematics problem
The encounter (meeting) problem is a type of motion problem in mathematics that studies two objects setting out simultaneously from two places, traveling toward each other, and meeting en route. Its core is analyzing the relationship between the two objects' speeds and the time elapsed, under the requirement that their motions be simultaneous and synchronized.

The basic formula for this class of problems is "speed sum × meeting time = distance," which extends to typical scenarios such as meeting while running in opposite directions on a circular track, or meeting at a specified distance from the midpoint. Solving such problems involves handling the distance difference produced by a speed difference; line-segment diagrams help clarify the quantitative relationships, and the simultaneity and the actual meeting conditions must be checked carefully.

This mathematical model has also been carried over into urban planning. For example, in the 2019 riverside renovation of Yangpu District, Shanghai, the "encounter problem" was reinterpreted as the spatial encounter of historic buildings with modern design, the temporal convergence of urban renewal with residents' needs, and the social interaction of multiple collaborating stakeholders, a cross-domain mapping from a textbook concept to social governance practice.
Chinese name: 相遇问题
English name: Encounter Problem
Subject: Physics, mathematics
Contents
1 Overview
2 Basic formulas
3 Examples
▪ Example 1
▪ Example 2
▪ Example 3
4 Points to note
Overview

It differs from the general motion problem in that it involves the motion of two objects rather than one; the speed it studies therefore combines both objects' speeds, i.e., the speed sum.

The time from departure to meeting is the meeting time; the distance the two cover together from departure to meeting is the meeting distance; and the distance they cover together per unit time is their combined speed. Note that the motions must be simultaneous and synchronized.
Basic formulas

The relationships for encounter problems are:

speed sum × meeting time = distance;
distance ÷ speed sum = meeting time;
distance ÷ meeting time = speed sum.

[Approach] Simple problems can use these formulas directly; more complex problems should be transformed first and then solved with the formulas.
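The three relationships above can be written as tiny Python helpers (the function names and the sample numbers are mine, not from the article):

```python
# The three encounter-problem relationships as one-line functions.

def meeting_time(distance, v1, v2):
    # distance ÷ speed sum = meeting time
    return distance / (v1 + v2)

def total_distance(v1, v2, t):
    # speed sum × meeting time = distance
    return (v1 + v2) * t

def speed_sum(distance, t):
    # distance ÷ meeting time = speed sum
    return distance / t

# Two travelers 100 km apart approach each other at 30 km/h and 20 km/h:
t = meeting_time(100, 30, 20)
print(t)                          # 2.0 hours
print(total_distance(30, 20, t))  # 100.0 km
print(speed_sum(100, t))          # 50.0 km/h
```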
Examples

Example 1

The waterway from Nanjing to Shanghai is 392 kilometers long. Two ships set out from the two ports at the same time and travel toward each other. The ship from Nanjing travels 28 kilometers per hour and the ship from Shanghai travels 21 kilometers per hour. After how many hours do the two ships meet?

Solution: 392 ÷ (28 + 21) = 8 (hours)

Answer: the two ships meet after 8 hours.
Example 2

Xiao Li and Xiao Liu jog on a circular track 400 meters around. Xiao Li runs 5 meters per second and Xiao Liu runs 3 meters per second. They start from the same point at the same time and run in opposite directions. How long after the start do they meet for the second time?

Solution: "second meeting" means the two have together covered two full laps, so the total distance is 400 × 2.

Meeting time = (400 × 2) ÷ (5 + 3) = 100 (seconds)

Answer: they meet for the second time 100 seconds after the start.
Example 3

Cyclists A and B set out at the same time from two places and ride toward each other. A rides 15 kilometers per hour and B rides 13 kilometers per hour. They meet 3 kilometers from the midpoint. Find the distance between the two places.

Solution: "they meet 3 kilometers from the midpoint" is the key to understanding this problem. A rides faster and B slower, so A has passed the midpoint by 3 kilometers while B is still 3 kilometers short of it; in other words, A has ridden (3 × 2) kilometers farther than B. Therefore:

Meeting time = (3 × 2) ÷ (15 − 13) = 3 (hours)
Distance between the two places = (15 + 13) × 3 = 84 (kilometers)

Answer: the distance between the two places is 84 kilometers.
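Example 3's speed-difference reasoning can be checked in a few lines of Python (the function name is mine; a sketch, not part of the original article):

```python
# If two travelers meet `offset` km from the midpoint, the faster one has
# covered 2*offset km more than the slower one, which pins down the
# meeting time via the speed difference.

def distance_from_midpoint_meeting(v_fast, v_slow, offset):
    t = (2 * offset) / (v_fast - v_slow)  # hours until they meet
    return (v_fast + v_slow) * t          # total distance between the places

print(distance_from_midpoint_meeting(15, 13, 3))  # 84.0 km, as in Example 3
```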
Points to note

To solve these problems, understand the statement, draw a line-segment diagram according to it, analyze the relationships among the quantities, and choose a solution method. Besides identifying the distance, the speeds, and the meeting time, pay attention to several points when reading the problem. Did the two objects set out at the same time? If one set out earlier, subtract its head-start distance so that only the simultaneous travel remains. In which directions do they move: toward each other, in the same direction, or away from each other? Different directions call for different methods. Did they actually meet? In some problems the two objects never meet, so the remaining gap must be subtracted; in others they pass each other, so the extra distance traveled must be added to obtain the distance covered while traveling simultaneously.
References
1. A brief analysis of methods for solving "encounter problems" in primary-school mathematics teaching. Shanwei Daily official website [cited 2024-10-24]
2. [Special report] How Yangpu answered the "encounter problem" that once stumped everyone in math class. Shanghai Yangpu District People's Government. 2019-12-16
|
189524 | https://chattube.io/summary/comedy/7YyhmAURCgU | Gravimetric Determination of Sulfate in Analytical Chemistry Lab - ChatTube - Chat with any YouTube video
Gravimetric Determination of Sulfate in Analytical Chemistry Lab
This video demonstrates the gravimetric determination of sulfate in an analytical chemistry lab.
00:00:01 This video is about the gravimetric determination of sulfate in an analytical chemistry lab.
🔬The video is about the gravimetric determination of sulfate in an Analytical Chemistry lab.
🧪The experiment involves precipitating sulfate ions with barium chloride and calculating the amount of sulfate based on the weight of the precipitate.
💡The gravimetric method is a reliable and accurate technique for determining the sulfate content in a sample.
00:01:25 This video demonstrates the gravimetric determination of sulfate in an analytical chemistry lab.
🔍The video is about the gravimetric determination of sulfate in an Analytical Chemistry Lab.
📏The experiment involves precipitating sulfate ions using barium chloride and measuring the weight of the resulting barium sulfate precipitate.
⚖️Gravimetric analysis is a precise and accurate method for determining the sulfate concentration in a given sample.
00:03:09 This video showcases the process of gravimetric determination of sulfate in an analytical chemistry lab.
🔬This video is about the gravimetric determination of sulfate in an Analytical Chemistry Lab.
⚖️The process involves precipitating sulfate ions with barium chloride and then weighing the resulting barium sulfate precipitate.
🧪The experiment requires careful handling of chemicals and adherence to lab safety protocols.
00:04:33 This video covers the process of determining sulfate using gravimetric analysis in an analytical chemistry lab.
🔬This video is a tutorial on how to perform the gravimetric determination of sulfate in the lab.
⚖️The gravimetric method involves precipitating sulfate ions with barium chloride and measuring the mass of the resulting barium sulfate precipitate.
🧪The tutorial explains the steps involved in the gravimetric determination, including the preparation of solutions, precipitation, filtration, washing, drying, and weighing.
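The final step of the determination, converting the weighed BaSO4 precipitate back to sulfate, can be sketched in Python. The masses below are hypothetical example values, not figures from the video; the molar masses are standard.

```python
# Gravimetric sulfate calculation: the weighed BaSO4 precipitate is
# converted to sulfate via the gravimetric factor (M_SO4 / M_BaSO4).

M_BASO4 = 233.39  # g/mol (Ba 137.33 + S 32.06 + 4 x O 16.00)
M_SO4 = 96.06     # g/mol

def sulfate_percent(precipitate_g: float, sample_g: float) -> float:
    """Mass percent of sulfate in the original sample."""
    sulfate_g = precipitate_g * (M_SO4 / M_BASO4)
    return 100.0 * sulfate_g / sample_g

# Hypothetical run: 0.5000 g of BaSO4 recovered from a 0.3000 g sample.
print(round(sulfate_percent(0.5000, 0.3000), 2))  # 68.6
```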
00:06:07 This video demonstrates the process of gravimetric determination of sulfate in an analytical chemistry lab.
🔬The video is about the gravimetric determination of sulfate in an Analytical Chemistry Lab.
⚖️The process involves precipitating sulfate ions and measuring the mass to determine the concentration.
🧪Gravimetric analysis is a precise and accurate method used in quantitative analysis.
00:07:38 This video demonstrates the gravimetric determination of sulfate in an Analytical Chemistry lab.
00:09:05 This video demonstrates the gravimetric determination of sulfate in an analytical chemistry lab.
🔬Analytical Chemistry Lab
⚖️Gravimetric Determination of Sulfate
🧪Quantitative Analysis
Summary of a video "Gravimetric Determination of Sulfate/ Analytical Chemistry Lab" by chemistry lab on YouTube.
|
189525 | https://mathequalslove.net/always-sometimes-never-for-absolute/ | Always Sometimes Never for Absolute Value, Opposite, Reciprocal, and Opposite Reciprocal | Math = Love
Always Sometimes Never for Absolute Value, Opposite, Reciprocal, and Opposite Reciprocal
August 26, 2017 (updated July 29, 2025)
Monday, we are starting our first unit of Algebra 1 by looking at the definitions of absolute value, opposite, reciprocal, and opposite reciprocal. Last year, I made the mistaken assumption that my 9th graders would be proficient at finding the absolute value of a number/expression when they came in. In addition to absolute value, I’ve decided to address opposites, reciprocals, and opposite reciprocals at the same time. This should save us some time when we get to discussing perpendicular lines!
I decided to create an Always/Sometimes/Never activity to get my students thinking about these terms at a deeper level.
Here’s what I came up with:
I’ve currently got a set of tents in the works for a clothesline activity to go with this lesson.
Files for Always, Sometimes, or Never Questions for Absolute Value, Opposite, and Reciprocal

- Always Sometimes Never – Absolute Value, Opposite, Reciprocal (PDF, 32.51 KB)
- Always Sometimes Never – Absolute Value, Opposite, Reciprocal (Editable Publisher File ZIP, 35.21 KB)

Files for Clothesline Cards

- Clothesline Cards for Absolute Value, Opposite, Reciprocal (PDF, 45.80 KB)
- Clothesline Cards for Absolute Value, Opposite, Reciprocal (Editable Publisher File ZIP, 48.89 KB)
Post Tags: #high #middle
Sarah Carter
Sarah Carter teaches high school math in her hometown of Coweta, Oklahoma. She currently teaches AP Precalculus, AP Calculus AB, and Statistics. She is passionate about sharing creative and hands-on teaching ideas with math teachers around the world through her blog, Math = Love.
4 Comments
Unknown says: August 27, 2017 at 12:17 pm I'm curious as how you plan to use the tent cards/clothesline activity?
Sarah Carter (@mathequalslove) says: September 5, 2017 at 1:03 am Blog post coming soon! 🙂
Unknown says: August 28, 2017 at 1:46 am I am also curious about this… I have seen lots of ideas for clothesline math, but haven't found any directions as to how it is organized. ??? Thanks for sharing your resources!
Sarah Carter (@mathequalslove) says: September 5, 2017 at 1:04 am Blog post coming soon 🙂 You can find more about clothesline math here:
Comments are closed.
|
189526 | https://data.worldbank.org/indicator/AG.LND.TOTL.RU.K2 | DATA & RESOURCES
# Rural land area (sq. km)
Source: Low Elevation Coastal Zone (LECZ) Urban-Rural Population and Land Area Estimates, Version 2. Center for International Earth Science Information Network (CIESIN), Columbia University. Publisher: NASA Socioeconomic Data and Applications Center (SEDAC), 2013. earthdata.nasa.gov/data/catalog/sedac-ciesin-sedac-lecz-urplaev2-2.00
All Countries and Economies

| Country or economy | Most recent year | Rural land area (sq. km) |
| --- | --- | --- |
| American Samoa | 2015 | |
| Antigua and Barbuda | 2015 | |
| Bahamas, The | 2015 | |
| Bolivia | 2015 | 1,059,511 |
| Bosnia and Herzegovina | 2015 | 49,135 |
| Botswana | 2015 | 572,887 |
| Brazil | 2015 | 8,385,496 |
| British Virgin Islands | 2015 | 148 |
| Brunei Darussalam | 2015 | 5,572 |
| Bulgaria | 2015 | 108,914 |
| Burkina Faso | 2015 | 273,918 |
| Burundi | 2015 | 24,151 |
| Cabo Verde | 2015 | 4,044 |
| Cambodia | 2015 | 176,181 |
| Cameroon | 2015 | 463,837 |
| Canada | 2015 | 9,197,138 |
| Cayman Islands | 2015 | 229 |
| Central African Republic | 2015 | 622,165 |
| Chad | 2015 | 1,272,924 |
| Channel Islands | | |
| Chile | 2015 | 723,595 |
| China | 2015 | 8,723,723 |
| Colombia | 2015 | 1,127,198 |
| Comoros | 2015 | 1,535 |
| Congo, Dem. Rep. | 2015 | 2,295,899 |
| Congo, Rep. | 2015 | 340,341 |
| Costa Rica | 2015 | 49,805 |
| Cote d'Ivoire | 2015 | 318,024 |
| Croatia | 2015 | 54,777 |
| Cuba | 2015 | 106,665 |
| Curacao | 2015 | 316 |
| Cyprus | 2015 | 8,748 |
| Czechia | 2015 | 73,926 |
| Denmark | 2015 | 40,366 |
| Djibouti | 2015 | 21,572 |
| Dominica | 2015 | 736 |
| Dominican Republic | 2015 | 45,390 |
| Ecuador | 2015 | 253,214 |
| Egypt, Arab Rep. | 2015 | 971,206 |
| El Salvador | 2015 | 18,676 |
| Equatorial Guinea | 2015 | 26,898 |
| Eritrea | 2015 | 120,001 |
| Estonia | 2015 | 42,657 |
| Eswatini | 2015 | 17,073 |
| Ethiopia | 2015 | 1,124,616 |
| Faroe Islands | 2015 | 1,386 |
| Fiji | 2015 | 18,884 |
| Finland | 2015 | 302,765 |
| France | 2015 | 522,936 |
| French Polynesia | 2015 | 3,984 |
| Gabon | 2015 | 263,338 |
| Gambia, The | 2015 | 10,198 |
| Georgia | 2015 | 68,069 |
| Germany | 2015 | 316,383 |
| Ghana | 2015 | 227,830 |
| Gibraltar | 2015 | 0 |
| Greece | 2015 | 128,796 |
| Greenland | 2015 | 315,961 |
| Grenada | 2015 | 291 |
| Guam | 2015 | 467 |
| Guatemala | 2015 | 105,527 |
| Guinea | 2015 | 244,206 |
| Guinea-Bissau | 2015 | 33,293 |
| Guyana | 2015 | 209,904 |
| Haiti | 2015 | 25,688 |
| Honduras | 2015 | 110,411 |
| Hong Kong SAR, China | 2015 | 591 |
| Hungary | 2015 | 87,565 |
| Iceland | 2015 | 88,792 |
| India | 2015 | 2,956,471 |
| Indonesia | 2015 | 1,820,838 |
| Iran, Islamic Rep. | 2015 | 1,601,053 |
| Iraq | 2015 | 434,270 |
| Ireland | 2015 | 67,698 |
| Isle of Man | 2015 | 538 |
| Israel | 2015 | 19,195 |
| Italy | 2015 | 277,071 |
| Jamaica | 2015 | 10,006 |
| Japan | 2015 | 316,736 |
| Jordan | 2015 | 86,673 |
| Kazakhstan | 2015 | 2,636,600 |
| Kenya | 2015 | 579,896 |
| Kiribati | 2015 | 908 |
| Korea, Dem. People's Rep. | 2015 | 119,025 |
| Korea, Rep. | 2015 | 86,793 |
| Kosovo | | |
| Kuwait | 2015 | 16,648 |
| Kyrgyz Republic | 2015 | 186,901 |
| Lao PDR | 2015 | 228,354 |
| Latvia | 2015 | 62,762 |
| Lebanon | 2015 | 8,546 |
| Lesotho | 2015 | 30,172 |
| Liberia | 2015 | 95,558 |
| Libya | 2015 | 1,619,018 |
| Liechtenstein | 2015 | 117 |
| Lithuania | 2015 | 62,585 |
| Luxembourg | 2015 | 2,247 |
| Macao SAR, China | 2015 | 8 |
| Madagascar | 2015 | 588,613 |
| Malawi | 2015 | 93,703 |
| Malaysia | 2015 | 318,198 |
| Maldives | 2015 | 260 |
| Mali | 2015 | 1,251,575 |
| Malta | 2015 | 151 |
| Marshall Islands | 2015 | 272 |
| Mauritania | 2015 | 1,045,548 |
| Mauritius | 2015 | 1,524 |
| Mexico | 2015 | 1,920,994 |
| Micronesia, Fed. Sts. | 2015 | 758 |
| Moldova | 2015 | 31,804 |
| Monaco | 2015 | 0 |
| Mongolia | 2015 | 1,549,425 |
| Montenegro | 2015 | 13,147 |
| Morocco | 2015 | 409,516 |
| Mozambique | 2015 | 775,002 |
| Myanmar | 2015 | 660,980 |
| Namibia | 2015 | 827,174 |
| Nauru | 2015 | 15 |
| Nepal | 2015 | 140,133 |
| Netherlands | 2015 | 26,212 |
| New Caledonia | 2015 | 18,754 |
| New Zealand | 2015 | 270,549 |
| Nicaragua | 2015 | 118,298 |
| Niger | 2015 | 1,187,981 |
| Nigeria | 2015 | 882,025 |
| North Macedonia | 2015 | 23,681 |
| Northern Mariana Islands | 2015 | 477 |
| Norway | 2015 | 306,188 |
| Oman | 2015 | 309,152 |
| Pakistan | 2015 | 821,721 |
| Palau | 2015 | 481 |
| Panama | 2015 | 73,614 |
| Papua New Guinea | 2015 | 462,122 |
| Paraguay | 2015 | 396,431 |
| Peru | 2015 | 1,281,204 |
| Philippines | 2015 | 282,120 |
| Poland | 2015 | 293,194 |
| Portugal | 2015 | 87,686 |
| Puerto Rico (US) | 2015 | 6,517 |
| Qatar | 2015 | 10,591 |
| Romania | 2015 | 229,906 |
| Russian Federation | 2015 | 16,224,183 |
| Rwanda | 2015 | 22,337 |
| Samoa | 2015 | 2,817 |
| San Marino | 2015 | 36 |
| Sao Tome and Principe | 2015 | 994 |
| Saudi Arabia | 2015 | 1,908,957 |
| Senegal | 2015 | 194,449 |
| Serbia | 2015 | 74,453 |
| Seychelles | 2015 | 475 |
| Sierra Leone | 2015 | 71,911 |
| Singapore | 2015 | 210 |
| Sint Maarten (Dutch part) | 2015 | 9 |
| Slovak Republic | 2015 | 46,876 |
| Slovenia | 2015 | 19,401 |
| Solomon Islands | 2015 | 28,414 |
| Somalia | 2015 | 635,102 |
| South Africa | 2015 | 1,204,427 |
| South Sudan | 2015 | 623,625 |
| Spain | 2015 | 488,926 |
| Sri Lanka | 2015 | 61,386 |
| St. Kitts and Nevis | 2015 | 254 |
| St. Lucia | 2015 | 545 |
| St. Martin (French part) | 2015 | 28 |
| St. Vincent and the Grenadines | 2015 | 329 |
| Sudan | 2015 | 1,859,900 |
| Suriname | 2015 | 145,003 |
| Sweden | 2015 | 411,134 |
| Switzerland | 2015 | 34,764 |
| Syrian Arab Republic | 2015 | 180,423 |
| Tajikistan | 2015 | 134,045 |
| Tanzania | 2015 | 880,726 |
| Thailand | 2015 | 491,892 |
| Timor-Leste | 2015 | 14,910 |
| Togo | 2015 | 56,198 |
| Tonga | 2015 | 702 |
| Trinidad and Tobago | 2015 | 4,364 |
| Tunisia | 2015 | 151,133 |
| Turkiye | 2015 | 754,925 |
| Turkmenistan | 2015 | 463,258 |
| Turks and Caicos Islands | 2015 | 911 |
| Tuvalu | 2015 | 39 |
| Uganda | 2015 | 201,995 |
| Ukraine | 2015 | 569,710 |
| United Arab Emirates | 2015 | 77,233 |
| United Kingdom | 2015 | 218,903 |
| United States | 2015 | 8,903,098 |
| Uruguay | 2015 | 173,707 |
| Uzbekistan | 2015 | 412,602 |
| Vanuatu | 2015 | 12,338 |
| Venezuela, RB | 2015 | 899,029 |
| Viet Nam | 2015 | |
| Virgin Islands (U.S.) | 2015 | |
| West Bank and Gaza | 2015 | |
| Yemen, Rep. | 2015 | |
| Arab World | 2015 | |
| Caribbean small states | 2015 | |
| Central Europe and the Baltics | 2015 | |
| East Asia & Pacific | 2015 | |
| East Asia & Pacific (excluding high income) | 2015 | |
| Euro area | 2015 | |
| Europe & Central Asia | 2015 | |
| Europe & Central Asia (excluding high income) | 2015 | |
| European Union | 2015 | |
| Fragile and conflict affected situations | 2015 | |
| Heavily indebted poor countries (HIPC) | 2015 | |
| Latin America & Caribbean | 2015 | |
| Latin America & Caribbean (excluding high income) | 2015 | |
| Least developed countries: UN classification | 2015 | |
| Middle East, North Africa, Afghanistan & Pakistan | 2015 | |
| Middle East, North Africa, Afghanistan & Pakistan (excluding high income) | 2015 | |
| North America | 2015 | |
| OECD members | 2015 | |
| Other small states | 2015 | |
| Pacific island small states | 2015 | |
| Small states | 2015 | |
| South Asia | 2015 | |
| Sub-Saharan Africa | 2015 | |
| Sub-Saharan Africa (excluding high income) | 2015 | |
| High income | 2015 | |
| Low & middle income | 2015 | |
| Low income | 2015 | |
| Lower middle income | 2015 | |
| Middle income | 2015 | |
| Upper middle income | 2015 | |
Diagnosing Nasopharyngeal Cancer
NYU Langone doctors are experienced at diagnosing nasopharyngeal cancer, which develops in the nasopharynx, the top part of the throat.
The nasopharynx is a tube-like structure behind the nasal cavity, which is the empty space above and behind the nose that moistens and filters air. The nasopharynx carries air from the nasal cavity to the throat, which is called the pharynx, helping you to breathe.
Most nasopharyngeal cancers begin in the epithelial cells, which line the nasopharynx. People typically do not experience symptoms when a nasopharyngeal tumor is small.
As the tumor grows, it can spread to nearby lymph nodes, which are small, bean-shaped structures that are part of the lymphatic system. The lymphatic system is a network of vessels, tissues, and organs that circulate lymph, a fluid that contains infection-fighting white blood cells called lymphocytes. Swollen nodes in the neck may be the first noticeable sign of the condition.
Nasopharyngeal cancers can grow and press on one of the two Eustachian tubes. These tubes connect the nasopharynx to the middle ear and help regulate pressure in and drain fluid from the middle ear. Nasopharyngeal cancer affecting the Eustachian tube can cause pain, fluid, or hearing loss in that ear.
As the cancer grows, it may block a nasal passage, causing a stuffy nose. Some people experience nosebleeds.
Nasopharyngeal cancer can also enter the skull base, an area filled with complex nerves and blood vessels at the base of the brain. The skull base sits behind the eyes and above the nasal cavity and separates the brain from other structures of the head. As a tumor spreads to the skull base, it can press on critical nerves, causing problems with vision, headaches, and facial pain.
Risk Factors for Nasopharyngeal Cancer
Infection with the Epstein–Barr virus (EBV) can increase the risk of developing nasopharyngeal cancer. EBV is spread through bodily fluids, including saliva. Most people become infected with EBV at some point in their lives. Usually, the body fights off the virus, and the infection goes unnoticed. In the United States, EBV is known for causing mononucleosis.
For reasons that are not completely understood, people living in northern Africa, southeast Asia, and southern China are at increased risk of developing nasopharyngeal cancer if they become infected with EBV. Because New York City is home to recent immigrants from these areas, doctors at NYU Langone diagnose and treat a large number of people with nasopharyngeal cancer compared to other medical centers in the United States.
More recently, human papillomavirus (HPV) has been associated with nasopharyngeal cancer. HPV is a group of related viruses that is easily transmitted through skin-to-skin contact, usually during vaginal, oral, and anal sex. Most people’s immune systems clear HPV after it’s contracted, but in some people it may cause cell changes that lead to cancer.
Vaccines are available to protect against HPV, but whether they prevent nasopharyngeal cancer is not yet known. Avoiding sex with multiple partners and using condoms or other barrier methods when having vaginal, anal, or oral sex may help to prevent HPV infection.
Genetic factors may also play a role in a person’s risk of nasopharyngeal cancer. People who have a first-degree relative, meaning a parent or sibling, with nasopharyngeal cancer have higher odds of developing the cancer. This form of cancer is more common in men than in women.
To diagnose nasopharyngeal cancer, the doctor performs a physical exam, in which he or she looks for any abnormal growths in the head and neck area. He or she also asks about your medical history, including whether you’ve had an HPV infection, an EBV infection, or have lived in a country in which there is a higher risk of developing nasopharyngeal cancer.
Your doctor may also conduct several tests.
Nasal Endoscopy
To examine the nasopharynx, the doctor may use an endoscope—a thin, lighted tube with a lens at the tip that transmits images to a monitor. After numbing the nasal cavity with anesthetic spray, he or she inserts the scope through the nose and into the nasopharynx to look for tumors.
Nasal endoscopy can be performed in the doctor’s office.
Biopsy
If a doctor identifies a suspicious growth in the nasopharynx during an endoscopy, he or she performs a biopsy either in the office or in the hospital. The doctor passes surgical tools through or next to the scope and removes a small amount of tissue. You are given local anesthesia before the test.
A pathologist then examines the tissue under a microscope to determine if it contains cancer cells. The pathologist can also test biopsy samples to determine whether nasopharyngeal cancer is associated with an EBV or HPV infection.
You can go home the same day of a biopsy. You may experience a sore throat or mild bleeding for several days afterward. Over-the-counter pain relievers, throat lozenges, and drinking plenty of fluids can help manage discomfort.
Fine Needle Aspiration
Sometimes nasopharyngeal cancer spreads to nearby lymph nodes, causing swelling and one or more neck masses. NYU Langone doctors often use fine needle aspiration, in which they insert a small needle into the mass to withdraw a sample of cells for examination under a microscope.
During the procedure, the doctor may use ultrasound, in which sound waves are used to create images of the structures in the neck, to guide the needle with precision.
Doctors can use fine needle aspiration of the lymph nodes to diagnose nasopharyngeal cancer or to determine whether the cancer has spread.
PET/CT Scan
Nasopharyngeal cancer can sometimes be hidden in the tissue underneath the lining of the nasopharynx, making it difficult to detect with an endoscopy. For this reason, doctors may use a PET/CT scan—a combination of two imaging techniques—to help diagnose the condition. This scan can also help determine if the cancer has spread to nearby lymph nodes, bones, or other parts of the body.
The CT portion of the scan uses X-rays and a computer to create two- or three-dimensional, cross-sectional images of the body. Your doctor may inject a special dye into a vein to enhance the CT image. The PET portion of the scan, which creates images of the entire body, requires an intravenous (IV) infusion of radioactive glucose, or sugar, into a vein. This substance collects in cancer cells, making them easier to detect during the scan.
The information from both the PET and CT scans is combined to determine whether cancer is present and where it is located.
PET/MRI Scan
A PET/MRI scan combines PET and MRI technology in one machine. An MRI scan uses a magnetic field to create images. It is especially useful for examining the brain, nerves, and soft tissue. Because nasopharyngeal cancer can grow into the skull base, along nerves, and into nearby lymph nodes, doctors may use this test to map exactly how far the tumor has spread.
Doctors combine MRI images with information gathered from a PET scan, increasing their ability to detect cancer.
PET/MRI scans to detect nasopharyngeal cancer are available at NYU Langone in a clinical trial setting.
Mathematicians Discover New Way for Spheres to ‘Kiss’
By Gregory Barber
January 15, 2025
A new proof marks the first progress in decades on important cases of the so-called kissing problem. Getting there meant doing away with traditional approaches.
Introduction
In May of 1694, in a lecture hall at the University of Cambridge, Isaac Newton and the astronomer David Gregory started to contemplate the nature of the stars, only to end up with a math puzzle that would persist for centuries.
The details of their conversation were poorly recorded and are possibly apocryphal — it had something to do with how stars of varying sizes would orbit a central sun. What’s remembered today is the more general question it inspired: Given a central sphere, how many identical spheres can be arranged so that they touch it without overlapping?
In three dimensions, it’s trivial to position 12 spheres around a central one so that each touches it at a single point. But this arrangement leaves gaps between the spheres. Could you squeeze a 13th into that leftover space? Gregory thought you could. Newton didn’t.
The “kissing” problem, as it came to be called — a reference to when billiard balls strike, or kiss — has proved relevant for analyzing atomic structures, constructing error-correcting codes, and more.
It’s also shaped up to be quite the mathematical challenge. It took mathematicians until 1952 to prove Newton right: In our familiar three dimensions, the maximum “kissing number” is 12.
But the kissing problem can be asked about spheres of any dimension. In two dimensions, the answer is clearly six: Put a penny on a table, and you’ll find that when you arrange another six pennies around it, they fit snugly into a daisylike pattern.
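The penny picture can be checked directly with a little arithmetic. Below is a short sketch (my own illustration, not from the article): six unit circles centered at 60-degree intervals on a circle of radius 2 each touch the central unit circle, and no two of them overlap.

```python
import math

# Six unit-circle centers at 60-degree intervals around a central unit circle.
# Touching circles of radius 1 have centers exactly 2 apart.
centers = [(2 * math.cos(k * math.pi / 3), 2 * math.sin(k * math.pi / 3))
           for k in range(6)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Every outer circle kisses the central one: center distance exactly 2.
assert all(abs(dist(c, (0, 0)) - 2) < 1e-9 for c in centers)

# No two outer circles overlap: all pairwise center distances are at least 2.
# (Adjacent pennies are exactly 2 apart, so they kiss each other too.)
assert all(dist(centers[i], centers[j]) >= 2 - 1e-9
           for i in range(6) for j in range(i + 1, 6))
print("six pennies fit snugly")
```

Adjacent centers sit exactly 2 apart (a chord of angle 60 degrees on a radius-2 circle), which is why the daisy pattern is snug with no room for a seventh penny.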
In higher dimensions, the problem gets harder.
It has been solved in dimension four, as well as in dimensions 8 and 24, where mathematicians have been able to optimally pack spheres into gorgeously symmetrical lattice structures. But in all other dimensions, where more space appears between the spheres, the problem remains open. Mathematicians have instead come up with estimates of the kissing number, calculating upper and lower bounds that can often be quite far apart. In these cases, the question is no longer about whether you can add a single extra sphere, but about whether you can add hundreds, thousands or even millions.
To improve these estimates, mathematicians usually follow the same intuition that gave them solutions in dimensions like 8 and 24: They look for ways to arrange spheres as symmetrically as possible. But there’s still a possibility that the best arrangements might look a lot weirder. “There may be structures without any symmetry at all,” said Gabriele Nebe of RWTH Aachen University in Germany. “And no good way to find them.”
Then, in the spring of 2022, an undergraduate math major at the Massachusetts Institute of Technology named Anqi Li decided to go looking for those weirder structures. While working on a class project, she came up with a deceptively simple idea that has now allowed her and her professor, Henry Cohn, to improve estimates of the kissing number in a particularly challenging cluster of dimensions: 17 through 21. The work marks the first progress on the problem in those dimensions since the 1960s — and showcases the benefits of injecting more disorder into potential solutions.
“Usually, you work with a very strong symmetric lattice,” said Oleg Musin of the University of Texas, Rio Grande Valley, who proved the optimal kissing number in dimension four in 2003. “What they propose is something different.”
In fact, their proof is the latest in a spate of recent sphere-packing results that were only possible because mathematicians deviated from conventional approaches. “Things stagnated with the kissing problem, but it wasn’t because we were converging on the truth,” Cohn said. “We were just stuck.” To get unstuck, it turned out, they had to break a few unspoken rules.
From Codes to Kisses
Since the mid-20th century, mathematicians have relied on the mathematics of information theory and error correction to make headway on problems related to arranging spheres.
An error-correcting code allows you to send a message that is understandable to the recipient even if parts of it get distorted or corrupted during transmission. The code essentially consists of a set of “code words” — a dictionary of possible messages — that the recipient can use as a key to recover the original message. These code words need to be chosen carefully: They have to be sufficiently distinct for the recipient to know which code word to use when correcting errors.
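As a toy illustration of that idea (a three-bit repetition code, chosen for simplicity rather than taken from the constructions discussed here), decoding just snaps a received message to the code word closest in Hamming distance:

```python
# Toy error-correcting code: two code words that differ in all three bits.
# Their Hamming distance is 3, so any single-bit error still lands strictly
# closer to the intended word: the "spheres" of radius 1 around the code
# words do not overlap.
CODE_WORDS = ["000", "111"]

def hamming(a, b):
    # Number of positions where the two strings disagree.
    return sum(x != y for x, y in zip(a, b))

def decode(received):
    # Recover the message by picking the nearest code word.
    return min(CODE_WORDS, key=lambda w: hamming(w, received))

print(decode("101"))  # one bit was flipped in transit; decodes to "111"
```

Packing the code-word "spheres" more tightly, without letting them overlap, is exactly the trade-off that links good codes to good sphere packings.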
Anqi Li started working on the kissing problem while she was an undergraduate at MIT. Her research led to exciting advances on several cases of it.
Thea Rossman
Mathematicians often visualize this problem in terms of spheres. You can think of each code word as a high-dimensional point at the center of a sphere. If an error-filled message (when represented as a high-dimensional point) lives inside a given sphere, you know that the code word at the sphere’s center was the intended message. You don’t want these spheres to overlap — otherwise, a received message might be interpreted in more than one way. But the spheres shouldn’t be too far apart, either. Packing the spheres tightly means you can communicate more efficiently.
Better codes have led to better sphere packings, and vice versa. In 1967, for instance, the mathematician John Leech used an incredibly efficient code — famous for its later use by NASA to communicate with its Voyager probes — to construct a lattice of points that now bears his name. Fifty years later, Cohn and several other mathematicians proved that you can use this lattice to pack spheres as densely as possible into 24-dimensional space. The lattice also gives the best kissing arrangement: Each sphere touches 196,560 neighbors. “The Leech lattice is a miracle of mathematics, the way things fit together,” Cohn said.
The lattice also gave mathematicians their best estimates of kissing numbers in dimensions 17 through 23. They simply took slices of the lattice to get lower-dimensional ones, much as you can slice a 3D sphere to get a 2D circle.
But this also meant that the Leech lattice “cast a huge shadow” on the kissing problem in those dimensions, Cohn said. No matter how mathematicians tried, they couldn’t find a structure that gave them better estimates — even though they suspected that taking slices of the Leech lattice wasn’t the right path to a solution.
Going Rogue
Li didn’t initially go looking for a new path when she started working on her project in 2022. At first, Cohn suggested that she focus on the kissing problem in dimensions higher than 24. In those dimensions, the current best estimates of kissing numbers are much rougher. Improving them often comes down to making computational advances rather than finding a creative new approach. Cohn knew of other students who had already made progress in such higher-dimensional cases using computer-based methods. He figured Li could, too.
But she found the work frustrating. “I had this awful feeling that my hands were tied,” she said. “It was impossible to picture.” So instead, she went a little rogue.
She set her sights on dimensions 17 through 23. “I told her she could still get an A if she explored possible improvements and nothing worked out,” Cohn recalled. Had she been one of his graduate students, he would have tried harder to convince her to work on something else. “If they work on something hopeless, it’ll be bad for their career,” he said.
But the result of her efforts, he added, “turned out to be far more exciting.”
She started by looking at dimension 16. There, the best kissing arrangement came from the “Barnes-Wall lattice,” which had been discovered in the 1950s using an elegant error-correcting code. (It also turned out to be a slice of the Leech lattice, which wouldn’t be discovered for another decade.)
The code consists of just two different types of points, which each satisfy a particular pattern of coordinates.
The way these points are defined leads to a quirk: In the Barnes-Wall lattice (and all higher-dimensional slices of the Leech lattice), the most common type of point, or sphere center, always has an even number of minus signs in its coordinates. This helps ensure that there’s enough distance between the points, and results in a symmetric structure that is particularly easy to work with.
But, Li thought, what if she used an odd number of signs in those points instead? If she was careful, that wouldn’t necessarily lead to overlapping spheres. To her knowledge, no one had bothered trying it before. “I don’t think either of us really thought that it mattered,” Cohn said. But Li suspected that there was a chance that, by changing some of the points in the lattice in this way, she might be able to distort it just enough to accommodate more spheres.
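The way sign parity controls spacing can be seen in a much simpler setting than the Barnes-Wall lattice (this toy example is mine, not the construction in the paper): among the vectors with entries ±1, any two distinct vectors with an even number of minus signs differ in at least two coordinates, but a vector of even parity and one of odd parity can differ in just one, so mixing parities carelessly brings points closer together.

```python
import itertools
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# All 16 vectors in {+1, -1}^4, split by the parity of their minus signs.
vectors = list(itertools.product([1, -1], repeat=4))
even = [v for v in vectors if v.count(-1) % 2 == 0]
odd = [v for v in vectors if v.count(-1) % 2 == 1]

# Two distinct even-parity vectors differ in at least two coordinates,
# so their distance is at least sqrt(2^2 + 2^2) = sqrt(8).
min_even = min(dist(p, q) for p, q in itertools.combinations(even, 2))
print(min_even)  # 2.828..., i.e. sqrt(8)

# An even- and an odd-parity vector can differ in a single coordinate,
# bringing them as close as distance 2.
min_mixed = min(dist(p, q) for p in even for q in odd)
print(min_mixed)  # 2.0
```

Keeping to one parity class is what guarantees the extra breathing room between points; Li's insight was that the odd class works just as well on its own, and gluing such pieces together can open up new gaps.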
When she built her “odd” version of the Barnes-Wall lattice in dimension 16, it yielded no space for extra spheres, though it didn’t make anything worse either. But when she glued copies of it together into layers to make a 17-dimensional structure, there were clearly gaps where new points could be added — holes where, when she calculated the distance to existing spheres in the structure, it was clear that new spheres could fit. At first, she couldn’t believe it. She felt uneasy, not elated. “I remember telling my friends, I’m sure I did some basic arithmetic wrong,” she said.
Cohn indulged her skepticism at first — it’s easy to make a little mistake in these kinds of calculations, especially when wishful thinking might be involved. So they checked her new arrangement of points on a computer. It worked: All the spheres fit correctly.
That summer, Li went to work with Cohn as an intern at Microsoft Research, where the pair carefully refined the error-correcting codes they were using so that they could continue adding compatible spheres to Li’s “odd” 17-dimensional structure. In the end, they were able to add 384 new spheres to the Leech-based estimate from 1967, bringing the lower bound of the kissing number to 5,730.
They then applied similar techniques to improve the kissing number in dimensions 18 through 21. But in dimensions 22 and 23, their strategy failed. It seemed they’d exhausted their sign-flipping approach.
The pair’s new configurations likely aren’t optimal. In dimension 17, for instance, the estimated upper bound is 10,978; while that’s considered a gross overestimate of the real solution, it suggests that there is still significant room to improve the lower bound.
But mathematicians are more interested in how Cohn and Li achieved their gains. Their new structures look very different from the highly symmetric ones inspired by the Leech lattice. The code-based methods they used to add spheres gave them more irregular configurations — something entirely new.
A New Way Forward
It’s not clear why changing the signs creates enough space for more spheres. It just does. “I’m still unnerved by it now,” Cohn said. But the work demonstrates how a “seemingly insignificant change opens up or closes off possibility,” he added. It reveals, in that sense, how little mathematicians actually know about the kissing problem.
When building new error-correcting codes and sphere packings, mathematicians generally rely on symmetry. That’s what Leech did. This makes the construction process easier, more intuitive. But it can also close off possibilities, making it hard to see beyond a beautiful solution to other structures — ones that might have more disorder or involve less intuitive forms of symmetry. “Maybe we’re not coming near the truth because it just doesn’t have a humanly accessible description,” Cohn said.
Several recent results support the promise of these less accessible possibilities. In the past couple of years, mathematicians have come up with clever new constructions in dimensions 5, 10 and 11 by bending or breaking the usual symmetry rules.
Cohn was particularly astonished by the work of Ferenc Szöllősi, a Hungarian mathematician who intentionally started with a suboptimal arrangement of spheres in dimension four and built on it to match the best existing estimate in dimension five. For decades, there were two structures that generated that estimate; most mathematicians thought there couldn’t be any others. Suddenly here was Szöllősi with a third. (He later found out that another pair of researchers had also discovered the configuration, but they had not recognized its significance.) “It proved you could be taken by surprise,” said Cohn, who was then inspired to work with another one of his students to find a fourth.
Every unusual structure they discover gives them “little hints and clues to what the truth might be,” he added. “The kissing problem is still full of mysteries.”
Correction (January 16, 2025): Ferenc Szöllősi’s result was also independently discovered by another pair of mathematicians. The text has been updated to include mention of that other work.
A symbol of renewal and fleeting beauty
The delicate dance of sakura (cherry blossoms) painting the Japanese landscape in shades of pink isn't just a visual spectacle. It's a cultural phenomenon deeply woven into the fabric of Japanese life, carrying layers of meaning and tradition. Let's delve into the significance of these blooms.
Cherry blossoms mark the arrival of spring, a season of new beginnings and rebirth. Their short lifespan, blossoming for just a couple of weeks, serves as a powerful reminder of the transience of life and the importance of cherishing each moment. The philosophy of "mono no aware," meaning "the pathos of things," resonates deeply in Japanese culture.
Beyond aesthetics: hanami and Samurai spirit
The tradition of "hanami," or flower viewing, goes back centuries. Underneath the canopy of pink, families, friends, and colleagues gather for picnics and appreciate the fleeting beauty. It's a reminder to slow down, connect with nature, and savour the present.
For the samurai, cherry blossoms embodied a different facet. Their short, yet glorious bloom mirrored the warrior's code of Bushidō, emphasising honour, courage, and living life to the fullest. Fallen cherry blossoms symbolised the end of the samurai's short lives.
Cherry blossoms permeate Japanese art forms, appearing in paintings, woodblock prints, and even tattoos. They symbolise hope, new beginnings, and even love. In folk belief, the blossom trees were considered sacred and believed to be dwelling places for mountain deities who transformed into the gods of rice paddies.
An enduring legacy
Even today, cherry blossoms hold immense cultural significance. Schools and businesses hold welcome parties under their delicate blooms, and weather forecasts religiously track their blooming progress. This annual spectacle continues to unite and inspire the nation, serving as a powerful reminder of their heritage and cultural values.
### Sakura Saku (cherry blossoms are blooming)
Discover a new site-specific installation by Hiroko Imada, inspired by Edo Pop and the cherry blossom season.
### Edo Pop: Japanese Prints 1825-1895
Travel to the bustling metropolis of Edo through 19th-century Japanese woodblock prints in our new exhibition.
c# - Is there any algorithm for calculating area of a shape given co-ordinates that define the shape? - Stack Overflow
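For a simple (non-self-intersecting) polygon given as an ordered list of vertex coordinates, the standard answer is the shoelace formula: sum the cross products of consecutive vertices and halve the absolute value. A minimal sketch in Python (the question is tagged C#, but the arithmetic translates directly):

```python
# Shoelace formula: area of a simple polygon from its ordered vertices.
def polygon_area(points):
    """points: list of (x, y) vertices in order, clockwise or counter-clockwise."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]  # wrap around to close the polygon
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # unit square: 1.0
print(polygon_area([(0, 0), (3, 0), (0, 4)]))          # 3-4-5 right triangle: 6.0
```

The signed sum (before taking the absolute value) also tells you the winding direction: positive for counter-clockwise vertex order, negative for clockwise. For self-intersecting shapes the formula gives a signed net area rather than the enclosed area.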
mfc
firebase-cloud-messaging
attributes
nosql
format
nuxt.js
odoo
db2
jquery-plugins
event-handling
jenkins-pipeline
nestjs
leaflet
julia
annotations
flutter-layout
keyboard
postman
textbox
arm
visual-studio-2017
gulp
stripe-payments
libgdx
synchronization
timezone
uikit
azure-web-app-service
dom-events
xampp
wso2
crystal-reports
namespaces
swagger
android-emulator
aggregation-framework
uiscrollview
jvm
google-sheets-formula
sequelize.js
com
chart.js
snowflake-cloud-data-platform
subprocess
geolocation
webdriver
html5-canvas
centos
garbage-collection
dialog
sql-update
widget
numbers
concatenation
qml
tuples
set
java-stream
smtp
mapreduce
ionic2
windows-10
rotation
android-edittext
modal-dialog
spring-data
nuget
doctrine
radio-button
http-headers
grid
sonarqube
lucene
xmlhttprequest
listbox
switch-statement
initialization
internationalization
components
apache-camel
boolean
google-play
serial-port
gdb
ios5
ldap
youtube-api
return
eclipse-plugin
pivot
latex
frameworks
tags
containers
github-actions
c++17
subquery
dataset
asp-classic
foreign-keys
label
embedded
uinavigationcontroller
copy
delegates
struts2
google-cloud-storage
migration
protractor
base64
queue
find
uibutton
sql-server-2008-r2
arguments
composer-php
append
jaxb
zip
stack
tailwind-css
cucumber
autolayout
ide
entity-framework-6
iteration
popup
r-markdown
windows-7
airflow
vb6
g++
ssl-certificate
hover
clang
jqgrid
range
gmail
Next You’ll be prompted to create an account to view your personalized homepage.
Home
Questions
AI Assist Labs
Tags
Challenges
Chat
Articles
Users
Jobs
Companies
Collectives
Communities for your favorite technologies. Explore all Collectives
Teams
Ask questions, find answers and collaborate at work with Stack Overflow for Teams.
Try Teams for freeExplore Teams
3. Teams
4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams
Collectives™ on Stack Overflow
Find centralized, trusted content and collaborate around the technologies you use most.
Learn more about Collectives
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
Hang on, you can't upvote just yet.
You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it?
Instead, you can save this post to reference later.
Save this post for later Not now
Thanks for your vote!
You now have 5 free votes weekly.
Free votes
count toward the total vote score
does not give reputation to the author
Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation.
Got it!Go to help center to learn more
Is there any algorithm for calculating area of a shape given co-ordinates that define the shape?
Asked 15 years, 6 months ago
Modified 5 years, 10 months ago
Viewed 13k times
So I have some function that receives N random 2D points.
Is there any algorithm to calculate area of the shape defined by the input points?
c#
algorithm
geometry
edited Mar 12, 2010 at 12:21 by Pratik Deoghare
asked Mar 12, 2010 at 11:46 by Rella
What is your question exactly and what kind of square are you talking about? – Tuomas Pelkonen, Mar 12, 2010 at 11:49
It's not really clear what you are asking for; the minimum bounding square/rectangle maybe? – jk., Mar 12, 2010 at 11:50
What do you mean by "counting square of shape"? Do you mean the area? If so, what method do you use to determine what the "shape" is? – Jens, Mar 12, 2010 at 11:51
What size is N likely to be? Are there any performance constraints? – Mark Byers, Mar 12, 2010 at 11:51
:) @Mark Byers: we are far away from performance tweaks because we are far away from the question? – Pratik Deoghare, Mar 12, 2010 at 11:55
5 Answers
You want to calculate the area of a polygon?
(Taken from link, converted to C#)
```csharp
class Point { public double x, y; }

double PolygonArea(Point[] polygon)
{
    int i, j;
    double area = 0;
    // Shoelace formula: sum the cross products of consecutive vertices
    for (i = 0; i < polygon.Length; i++) {
        j = (i + 1) % polygon.Length;
        area += polygon[i].x * polygon[j].y;
        area -= polygon[i].y * polygon[j].x;
    }
    area /= 2;
    return (area < 0 ? -area : area); // signed area -> absolute area
}
```
edited Mar 12, 2010 at 19:54 by Jason Orendorff
answered Mar 12, 2010 at 11:58 by Dead account
I dream about heavenly bodies ;) – Pratik Deoghare, Mar 12, 2010
Hmm, nice, but does this take care of concavity (twisting polygons)? – Manitra Andriamitondra, Mar 12, 2010
@manitra Yes it does: topcoder.com/tc?d1=tutorials&d2=geometry1&module=Static – Pratik Deoghare, Mar 12, 2010
As I've mentioned in my answer, there is no ready-made polygon: you will need to create one. – Jacob, Mar 12, 2010
I know this may not be the place to ask, but how would I apply this concept to a polygon made with geodetic vertices (lat/lon)? Do I need to do a projection to the Cartesian coordinate system, or is there a simpler way? – DoubleDunk, Jul 11, 2014
Defining the "area" of your collection of points may be hard, e.g. if you want to get the smallest region with straight line boundaries which enclose your set then I'm not sure how to proceed. Probably what you want to do is calculate the area of the convex hull of your set of points; this is a standard problem, a description of the problem with links to implementations of solutions is given by Steven Skiena at the Stony Brook Algorithms repository. From there one way to calculate the area (it seems to me to be the obvious way) would be to triangulate the region and calculate the area of each individual triangle.
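As a sketch of that last step (in Python rather than the question's C#): once the hull vertices are available in order, a fan triangulation from the first vertex reduces the area to a sum of triangle cross products:

```python
def hull_area(hull):
    """Area of a convex polygon given its vertices in order,
    via fan triangulation from the first vertex."""
    x0, y0 = hull[0]
    area = 0.0
    for (x1, y1), (x2, y2) in zip(hull[1:], hull[2:]):
        # Twice the signed area of triangle (v0, v1, v2)
        area += (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)
    return abs(area) / 2.0

print(hull_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # unit square -> 1.0
```

For a convex polygon this gives the same result as the shoelace formula; the fan decomposition is only valid because convexity guarantees every triangle lies inside the polygon.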
answered Mar 12, 2010 at 12:35 by Nathan
The "smallest region with straight line boundaries" has area 0 and encloses a minimum spanning tree. – Svante, Mar 12, 2010
Yep, fair enough comment. I suppose there may be reasonable constraints that can be put on this problem to give solutions other than the convex hull (e.g. allowing for "large" holes by allowing for arbitrary edges with the restriction that the number of edges is O(n^{1/2})), but any such problem is likely to be extremely hard to solve. – Nathan, Mar 12, 2010
Picking nits here, but the region with the shortest straight line boundaries would actually be a Steiner tree, which is in general shorter than the MST. – user287792, Mar 12, 2010
The only guarantee you can make with the area of the convex hull is that it will be an upper bound. – Jacob, Mar 12, 2010
You can use Timothy Chan's algorithm for finding the convex hull in O(n log h), where n is the number of points and h is the number of convex hull vertices. If you want an easy algorithm, go for the Graham scan.
Also, if you know that your data is ordered like a simple chain, where the points don't cross each other, you can use Melkman's algorithm to compute the convex hull in O(n).
Also, one more interesting property of the convex hull is that it has the minimum perimeter.
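For illustration, here is Andrew's monotone chain scan, a close cousin of the Graham scan mentioned above, sketched in Python rather than the question's C#. It runs in O(n log n) and returns the hull vertices in counter-clockwise order:

```python
def cross(o, a, b):
    """Cross product of vectors OA and OB; positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain: hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:  # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # endpoints are shared, drop duplicates

# Interior point (1, 1) is discarded; the hull is the square's corners.
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
```

Feeding the result into the shoelace formula from the accepted answer then gives the area of the hull.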
answered Mar 12, 2010 at 13:50 by Boolean
Your problem does not directly imply that there's a ready-made polygon (which is assumed by this answer). I would recommend a triangulation such as a Delaunay Triangulation and then trivially compute the area of each triangle. OpenCV (I've used it with a large number of 2D points and it's very effective) and CGAL provide excellent implementations for determining the triangulation.
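Assuming a library such as OpenCV or CGAL has already produced the triangulation as index triples into the point list (the `triangles` argument below is that hypothetical output, e.g. what `scipy.spatial.Delaunay` exposes as `simplices`), summing the triangle areas is a few lines (Python sketch):

```python
def triangulation_area(points, triangles):
    """Total area of a triangulated region.

    points: list of (x, y) tuples; triangles: (i, j, k) index triples
    into `points`, as produced by a triangulation library."""
    total = 0.0
    for i, j, k in triangles:
        (ax, ay), (bx, by), (cx, cy) = points[i], points[j], points[k]
        # Half the magnitude of the cross product of edges AB and AC
        total += abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0
    return total

# Unit square split into two triangles: total area 1.0
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(triangulation_area(pts, [(0, 1, 2), (0, 2, 3)]))
```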
edited May 23, 2017 at 11:54 by CommunityBot
answered Mar 12, 2010 at 14:57 by Jacob
I found another function written in Java, so I translated it to C#:
```csharp
public static double Area(List<double> lats, List<double> lons)
{
    double sum = 0;
    double prevcolat = 0;
    double prevaz = 0;
    double colat0 = 0;
    double az0 = 0;
    for (int i = 0; i < lats.Count; i++)
    {
        // Colatitude of the point, measured from the north pole
        double colat = 2 * Math.Atan2(
            Math.Sqrt(Math.Pow(Math.Sin(lats[i] * Math.PI / 180 / 2), 2)
                + Math.Cos(lats[i] * Math.PI / 180) * Math.Pow(Math.Sin(lons[i] * Math.PI / 180 / 2), 2)),
            Math.Sqrt(1 - Math.Pow(Math.Sin(lats[i] * Math.PI / 180 / 2), 2)
                - Math.Cos(lats[i] * Math.PI / 180) * Math.Pow(Math.Sin(lons[i] * Math.PI / 180 / 2), 2)));
        double az = 0;
        if (lats[i] >= 90)
        {
            az = 0;
        }
        else if (lats[i] <= -90)
        {
            az = Math.PI;
        }
        else
        {
            az = Math.Atan2(Math.Cos(lats[i] * Math.PI / 180) * Math.Sin(lons[i] * Math.PI / 180),
                            Math.Sin(lats[i] * Math.PI / 180)) % (2 * Math.PI);
        }
        if (i == 0)
        {
            colat0 = colat;
            az0 = az;
        }
        if (i > 0 && i < lats.Count)
        {
            sum = sum + (1 - Math.Cos(prevcolat + (colat - prevcolat) / 2))
                * Math.PI * ((Math.Abs(az - prevaz) / Math.PI) - 2 * Math.Ceiling(((Math.Abs(az - prevaz) / Math.PI) - 1) / 2))
                * Math.Sign(az - prevaz);
        }
        prevcolat = colat;
        prevaz = az;
    }
    sum = sum + (1 - Math.Cos(prevcolat + (colat0 - prevcolat) / 2)) * (az0 - prevaz);
    return 5.10072E14 * Math.Min(Math.Abs(sum) / 4 / Math.PI, 1 - Math.Abs(sum) / 4 / Math.PI);
}
```
edited May 23, 2017 at 12:00 by CommunityBot
answered Jun 24, 2016 at 23:00 by jcuriel
189531 | https://en.wikipedia.org/wiki/Hormonal_contraception |
Hormonal contraception
From Wikipedia, the free encyclopedia
Birth control methods that act on the endocrine system
| Hormonal contraception | |
| --- | --- |
| Background | |
| Type | Hormonal |
| First use | 1960 |
| Pregnancy rates (first year) | |
| Perfect use | Varies by method: 0.05–2% |
| Typical use | Varies by method: 0.05–9% |
| Usage | |
| Duration effect | Various |
| Reversibility | Upon discontinuation |
| User reminders | Must follow usage schedule |
| Clinic review | Every 3–12 months, depending on method |
| Advantages and disadvantages | |
| STI protection | No |
| Periods | Withdrawal bleeds are frequently lighter than menstrual periods, and some methods can suppress bleeding altogether |
| Weight | No proven effect |
Hormonal contraception refers to birth control methods that act on the endocrine system. Almost all methods are composed of steroid hormones, although in India one selective estrogen receptor modulator is marketed as a contraceptive. The original hormonal method—the combined oral contraceptive pill—was first marketed as a contraceptive in 1960. In the ensuing decades, many other delivery methods have been developed, although the oral and injectable methods are by far the most popular. Hormonal contraception is highly effective: when taken on the prescribed schedule, users of steroid hormone methods experience pregnancy rates of less than 1% per year. Perfect-use pregnancy rates for most hormonal contraceptives are usually around the 0.3% rate or less. Currently available methods can only be used by women; the development of a male hormonal contraceptive is an active research area.
There are two main types of hormonal contraceptive formulations: combined methods which contain both an estrogen and a progestin, and progestogen-only methods which contain only progesterone or one of its synthetic analogues (progestins). Combined methods work by suppressing ovulation and thickening cervical mucus; while progestogen-only methods reduce the frequency of ovulation, most of them rely more heavily on changes in cervical mucus. The incidence of certain side effects is different for the different formulations: for example, breakthrough bleeding is much more common with progestogen-only methods. Certain serious complications occasionally caused by estrogen-containing contraceptives are not believed to be caused by progestogen-only formulations: deep vein thrombosis is one example of this.
Medical uses
Hormonal contraception is primarily used for the prevention of pregnancy, but is also prescribed for the treatment of polycystic ovary syndrome, menstrual disorders such as dysmenorrhea and menorrhagia, and hirsutism.
Polycystic ovary syndrome
Hormonal treatments, such as hormonal contraceptives, are frequently successful at alleviating symptoms associated with polycystic ovary syndrome. Birth control pills are often prescribed to reverse the effects of excessive androgen levels, and decrease ovarian hormone production.
Dysmenorrhea
Hormonal birth control methods such as birth control pills, the contraceptive patch, vaginal ring, contraceptive implant, and hormonal IUD are used to treat cramping and pain associated with primary dysmenorrhea.
Menorrhagia
Oral contraceptives are prescribed in the treatment of menorrhagia to help regulate menstrual cycles and prevent prolonged menstrual bleeding. The hormonal IUD (Mirena) releases levonorgestrel which thins the uterine lining, preventing excessive bleeding and loss of iron.
Hirsutism
Birth control pills are the most commonly prescribed hormonal treatment for hirsutism, as they prevent ovulation and decrease androgen production by the ovaries. Additionally, estrogen in the pills stimulates the liver to produce more of a protein that binds to androgens and reduces their activity.
Effectiveness
Modern contraceptives using steroid hormones have perfect-use or method failure rates of less than 1% per year. The lowest failure rates are seen with the implants Jadelle and Implanon, at 0.05% per year. According to Contraceptive Technology, none of these methods has a failure rate greater than 0.3% per year. The SERM ormeloxifene is less effective than the steroid hormone methods; studies have found a perfect-use failure rate near 2% per year.
Long-acting methods such as the implant and the IUS are user-independent methods. For user-independent methods, the typical or actual-use failure rates are the same as the method failure rates. Methods that require regular action by the user—such as taking a pill every day—have typical failure rates higher than perfect-use failure rates. Contraceptive Technology reports a typical failure rate of 3% per year for the injection Depo-Provera, and 8% per year for most other user-dependent hormonal methods. While no large studies have been done, it is hoped that newer methods which require less frequent action (such as the patch) will result in higher user compliance and therefore lower typical failure rates.
Currently there is little evidence of an association between being overweight and reduced effectiveness of hormonal contraceptives.
Combined vs. progestogen-only
While unpredictable breakthrough bleeding is a possible side effect for all hormonal contraceptives, it is more common with progestogen-only formulations. Most regimens of COCPs, NuvaRing, and the contraceptive patch incorporate a placebo or break week that causes regular withdrawal bleeding. While women using combined injectable contraceptives may experience amenorrhea (lack of periods), they typically have predictable bleeding comparable to that of women using COCPs.
Although high-quality studies are lacking, it is believed that estrogen-containing contraceptives significantly decrease the quantity of milk in breastfeeding women. Progestogen-only contraceptives are not believed to have this effect. In addition, while in general the progestogen-only pill is less effective than other hormonal contraceptives, the added contraceptive effect of breastfeeding makes it highly effective in breastfeeding women.
While combined contraceptives increase the risk for deep vein thrombosis (DVT – blood clots), progestogen-only contraceptives are not believed to affect DVT formation.
Side effects
Cancers
There is a mixed effect of combined hormonal contraceptives on the rates of various cancers, with the International Agency for Research on Cancer (IARC) stating: "It was concluded that, if the reported association was causal, the excess risk for breast cancer associated with typical patterns of current use of combined oral contraceptives was very small." and also saying that "there is also conclusive evidence that these agents have a protective effect against cancers of the ovary and endometrium":
The (IARC) notes that "the weight of the evidence suggests a small increase in the relative risk for breast cancer among current and recent users" which following discontinuation then lessens over a period of 10 years to similar rates as women who never used them, as well as "The increase in risk for breast cancer associated with the use of combined oral contraceptives in younger women could be due to more frequent contacts with doctors"
Small increases are also seen in the rates of cervical cancer and hepatocellular (liver) tumours.[citation needed]
Endometrial and ovarian cancer risks are approximately halved, and the protection persists for at least 10 years after cessation of use; however, sequential oral contraceptives, which were removed from the consumer market in the 1970s, were associated with an increased risk for endometrial cancer.
Studies have overall not shown effects on the relative risks for colorectal cancer, malignant melanoma, or thyroid cancer.
Information on progesterone-only pills is less extensive, due to smaller sampling sizes, but they do not appear to significantly increase the risk of breast cancer.
Most other forms of hormonal contraception are too new for meaningful data to be available, although risks and benefits are believed to be similar for methods which use the same hormones; e.g., risks for combined-hormone patches are thought to be roughly equivalent to those for combined-hormone pills.
Cardiovascular disease
Combined oral contraceptives can increase the risk of certain types of cardiovascular disease in women with a pre-existing condition or already-heightened risk of cardiovascular disease. Smoking (for women over 35), metabolic conditions like diabetes, obesity and family history of heart disease are all risk factors which may be exacerbated by the use of certain hormonal contraceptives. Oral contraceptives have also been linked to an inflated risk of myocardial infarction, arterial thrombosis, and ischemic stroke.
Blood clots
Hormonal contraception methods are consistently linked with the risk of developing blood clots. However, the risk does vary depending on the hormone type or birth control method being used.
Types
There are two main classes of hormonal contraceptives: combined contraceptives, which contain both an estrogen and a progestin, and progestogen-only contraceptives, which contain only progesterone or a synthetic analogue (progestin). There is also a non-hormonal contraceptive called ormeloxifene, which acts on the hormonal system to prevent pregnancy.
Combined
The most popular form of hormonal contraception is the combined oral contraceptive pill (COCP), known colloquially as "the pill". It is taken once a day, most commonly for 21 days followed by a seven-day break, although other regimens are also used. For women not using ongoing hormonal contraception, COCPs may be taken after intercourse as emergency contraception: this is known as the Yuzpe regimen. COCPs are available in a variety of formulations.[citation needed]
The contraceptive patch is applied to the skin and worn continuously. A series of three patches are worn for one week each, and then the user takes a one-week break. NuvaRing is worn inside the vagina. A ring is worn for three weeks. After removal, the user takes a one-week break before inserting a new ring. As with COCPs, other regimens may be used with the contraceptive patch or NuvaRing to provide extended cycle combined hormonal contraception.
Some combined injectable contraceptives can be administered as one injection per month.
Progestogen-only
The progestogen-only pill (POP) is taken once per day within the same three-hour window. Several different formulations of POP are marketed. A low-dose formulation is known as the minipill. Unlike COCPs, progestogen-only pills are taken every day with no breaks or placebos. For women not using ongoing hormonal contraception, progestogen-only pills may be taken after intercourse as emergency contraception. There are a number of dedicated products sold for this purpose.[citation needed]
Hormonal intrauterine contraceptives are known as intrauterine systems (IUS) or Intrauterine Devices (IUD). An IUS/IUD must be inserted by a health professional. The copper IUD does not contain hormones. While a copper-containing IUD may be used as emergency contraception, the IUS has not been studied for this purpose.
Depo-Provera is an injection that provides three months of contraceptive protection. Noristerat is another injection; it is given every two months.
Contraceptive implants are inserted under the skin of the upper arm and contain only progestogen. Jadelle (Norplant 2) consists of two rods that release a low dose of hormones; it is effective for five years. Nexplanon has replaced the earlier Implanon and is also a single rod, releasing etonogestrel (similar to the body's natural progesterone). The only difference between Implanon and Nexplanon is that Nexplanon is radio-opaque and can be detected by x-ray, which is needed in cases of implant migration. It is effective for three years, insertion is usually an office procedure, and it is over 99% effective. It works in three ways:
1. Preventing ovulation: usually an egg does not mature.
2. Thickening cervical mucus to prevent sperm from reaching the egg.
3. If both of those fail, the progestogen causes the lining of the uterus to become too thin for implantation.
Ormeloxifene
Ormeloxifene is a selective estrogen receptor modulator (SERM). Marketed as Centchroman, Centron, or Saheli, it is a pill that is taken once per week. Ormeloxifene is legally available only in India.
Mechanism of action
The effect of hormonal agents on the reproductive system is complex. It is believed that combined hormonal contraceptives work primarily by preventing ovulation and thickening cervical mucus. Progestogen-only contraceptives can also prevent ovulation, but rely more significantly on the thickening of cervical mucus. Ormeloxifene does not affect ovulation, and its mechanism of action is not well understood.[citation needed]
Combined
Combined hormonal contraceptives were developed to prevent ovulation by suppressing the release of gonadotropins. They inhibit follicular development and prevent ovulation as a primary mechanism of action.
Progestogen negative feedback decreases the pulse frequency of gonadotropin-releasing hormone (GnRH) release by the hypothalamus, which decreases the release of follicle-stimulating hormone (FSH) and greatly decreases the release of luteinizing hormone (LH) by the anterior pituitary. Decreased levels of FSH inhibit follicular development, preventing an increase in estradiol levels. Progestogen negative feedback and the lack of estrogen positive feedback on LH release prevent a mid-cycle LH surge. Inhibition of follicular development and the absence of a LH surge prevent ovulation.
Estrogen was originally included in oral contraceptives for better cycle control (to stabilize the endometrium and thereby reduce the incidence of breakthrough bleeding), but was also found to inhibit follicular development and help prevent ovulation. Estrogen negative feedback on the anterior pituitary greatly decreases the release of FSH, which inhibits follicular development and helps prevent ovulation.
Another primary mechanism of action of all progestogen-containing contraceptives is inhibition of sperm penetration through the cervix into the upper genital tract (uterus and fallopian tubes) by decreasing the amount of and increasing the viscosity of the cervical mucus.
The estrogen and progestogen in combined hormonal contraceptives have other effects on the reproductive system, but these have not been shown to contribute to their contraceptive efficacy:
Slowing tubal motility and ova transport, which may interfere with fertilization.
Endometrial atrophy and alteration of metalloproteinase content, which may impede sperm motility and viability, or theoretically inhibit implantation.
Endometrial edema, which may affect implantation.
Insufficient evidence exists on whether changes in the endometrium could actually prevent implantation. The primary mechanisms of action are so effective that the possibility of fertilization during combined hormonal contraceptive use is very small. Since pregnancy occurs despite endometrial changes when the primary mechanisms of action fail, endometrial changes are unlikely to play a significant role, if any, in the observed effectiveness of combined hormonal contraceptives.
Progestogen-only
The mechanism of action of progestogen-only contraceptives depends on the progestogen activity and dose.
Low dose progestogen-only contraceptives include traditional progestogen-only pills, the subdermal implant Jadelle and the intrauterine system Mirena. These contraceptives inconsistently inhibit ovulation in ~50% of cycles and rely mainly on their progestogenic effect of thickening the cervical mucus and thereby reducing sperm viability and penetration.
Intermediate dose progestogen-only contraceptives, such as the progestogen-only pill Cerazette (or the subdermal implant Implanon), allow some follicular development but much more consistently inhibit ovulation in 97–99% of cycles. The same cervical mucus changes occur as with low dose progestogens.
High dose progestogen-only contraceptives, such as the injectables Depo-Provera and Noristerat, completely inhibit follicular development and ovulation. The same cervical mucus changes occur as with very low dose and intermediate dose progestogens.
In anovulatory cycles using progestogen-only contraceptives, the endometrium is thin and atrophic. If the endometrium was also thin and atrophic during an ovulatory cycle, this could theoretically interfere with implantation of a blastocyst (embryo).[citation needed]
Ormeloxifene
Ormeloxifene does not affect ovulation. It has been shown to increase the rate of blastocyst development and to increase the speed at which the blastocyst is moved from the fallopian tubes into the uterus. Ormeloxifene also suppresses proliferation and decidualization of the endometrium (the transformation of the endometrium in preparation for possible implantation of an embryo). While ormeloxifene is believed to prevent implantation rather than fertilization, exactly how these effects combine to prevent pregnancy is not understood.
Emergency contraception
The use of emergency contraceptives (ECs) allows for the prevention of a pregnancy after unprotected sex or contraception failure. In the United States, there are currently four different methods available, including ulipristal acetate (UPA), an oral progesterone receptor agonist-antagonist; levonorgestrel (LNG), an oral progestin; off-label use of combined oral contraceptives (Yuzpe regimen); and the copper intrauterine device (Cu-IUD).
Types
UPA, a progesterone agonist-antagonist, was approved by the FDA in 2010 for use as an EC. UPA acts as a partial agonist and antagonist of the progesterone receptor and works by preventing both ovulation and fertilization. Users of UPA are likely to experience delayed menses after the expected date. In the United States, UPA is sold under the brand name Ella, a 30 mg single pill to be taken up to 120 hours after unprotected sex. UPA has emerged as the most effective EC pill; however, access to it is very limited in US cities. UPA is a prescription-only emergency contraceptive, and a recent study found that fewer than 10% of pharmacies indicated that a UPA prescription could be filled immediately, while 72% reported being able to order UPA and fill the prescription within a median wait time of 24 hours.
Plan B One-Step was the first levonorgestrel progestin-only EC approved by the FDA, in 1999. There are now many brands of levonorgestrel EC pills, including Take Action, Next Choice One Dose, and My Way; the regimen consists of a single 1.5 mg levonorgestrel pill. Levonorgestrel EC pills should be taken within 72 hours of unprotected sex, as the drug becomes less effective over time. Levonorgestrel acts as an agonist of the progesterone receptor, preventing ovulation. Users of levonorgestrel often experience menses before the expected date. No prescription is needed; levonorgestrel can be bought over the counter at local pharmacies. Because levonorgestrel has no life-threatening side effects, the FDA has approved it for use by all age groups.
The Yuzpe regimen, which uses combined oral contraceptives for EC, has been in use since 1974. It is no longer common because of side effects such as nausea and vomiting, and because more effective methods have since been developed. The regimen consists of two pills, each containing at least 100 μg of ethinyl estradiol and at least 500 μg of levonorgestrel. The first pill is taken within 72 hours of unprotected sex and the second pill 12 hours after the first. The Yuzpe regimen is still used in areas where dedicated EC methods are unavailable or where EC is not accepted.
The most effective form of EC is insertion of a Cu-IUD within 5 days of unprotected sex. Because the Cu-IUD is inserted into the uterus, it has the added advantage of providing continued contraception for up to 10 years. Cu-IUDs are the only IUDs approved as ECs, because the mechanisms of hormonal and copper IUDs differ. Hormonal IUDs may instead be placed in the uterus after an oral EC has been taken, to prevent future unplanned pregnancies.
Frequency of use
Pills—combined and progestogen-only—are the most common form of hormonal contraception. Worldwide, they account for 12% of contraceptive use. 21% of users of reversible contraceptives choose COCPs or POPs. Pills are especially popular in more developed countries, where they account for 25% of contraceptive use.
Injectable hormonal contraceptives are also used by a significant portion—about 6%—of the world's contraceptive users. Other hormonal contraceptives are less common, accounting for less than 1% of contraceptive use.
History
See also: Contraceptive trials in Puerto Rico
Leo Loeb showed in 1911 that removing the corpora lutea hastened the next ovulation. In 1921, Ludwig Haberlandt demonstrated temporary hormonal contraception in a female rabbit by transplanting ovaries from a second, pregnant animal. Makepeace et al. (1937) showed specifically that injecting progesterone inhibited ovulation in rabbits. By the 1930s, scientists had isolated and determined the structure of the steroid hormones and found that high doses of androgens, estrogens, or progesterone inhibited ovulation. A number of economic, technological, and social obstacles had to be overcome before the development of the first hormonal contraceptive, the combined oral contraceptive pill (COCP). In 1957, Enovid, the first COCP, was approved in the United States for the treatment of menstrual disorders. In 1960, the U.S. Food and Drug Administration approved an application that allowed Enovid to be marketed as a contraceptive.
The first progestogen-only contraceptive was introduced in 1969: Depo-Provera, a high-dose progestin injection. Over the next decade and a half, other types of progestogen-only contraceptive were developed: a low-dose progestogen only pill (1973); Progestasert, the first hormonal intrauterine device (1976); and Norplant, the first contraceptive implant (1983).
Combined contraceptives have also been made available in a variety of forms. In the 1960s a few combined injectable contraceptives were introduced, notably Injectable Number 1 in China and Deladroxate in Latin America. A third combined injection, Cyclo-Provera, was reformulated in the 1980s by lowering the dose and renamed Cyclofem (also called Lunelle). Cyclofem and Mesigyna, another formulation developed in the 1980s, were approved by the World Health Organization in 1993. NuvaRing, a contraceptive vaginal ring, was first marketed in 2002. 2002 also saw the launch of Ortho Evra, the first contraceptive patch.
In 1991, ormeloxifene was introduced as a contraceptive in India. While it acts on the estrogen hormonal system, it is atypical in that it is a selective estrogen receptor modulator rather than an estrogen, and has the capacity for both estrogenic and antiestrogenic effects.
See also
Reproductive Health Supplies Coalition
Male hormonal contraception
Progestogen-only injectable contraceptive
Estradiol-containing oral contraceptive
List of progestogens available in the United States
List of estrogens available in the United States
References
^ Susan Scott Ricci; Terri Kyle (2009). "Common Reproductive Issues". Contraception. Lippincott Williams & Wilkins. p. 119.
^ Jump up to: a b National Prescribing Service (11 December 2009). "NPS News 54: Hormonal contraceptives – tailoring for the individual". Retrieved 19 March 2009.
^ "Hormonal Contraceptives Offer Benefits Beyond Pregnancy Prevention". The American College of Obstetricians and Gynecologists. Retrieved 5 May 2013.
^ "Hirsutism and Polycystic Ovary Syndrome" (PDF). American Society for Reproductive Medicine. Retrieved 5 May 2013.
^ "Noncontraceptive Benefits of Birth Control Pills". Reproductivefacts.org. The American Society for Reproductive Medicine. Retrieved 5 May 2013.
^ "Dysmenorrhea: Painful Periods". www.acog.org. Retrieved 14 November 2022.
^ "Menorrhagia (Heavy Menstrual Bleeding)". The Mayo Clinic. Retrieved 5 May 2013.
^ "Hirsutism and Polycystic Ovary Syndrome (PCOS)" (PDF). American Society for Reproductive Medicine. Retrieved 5 May 2013.
^ Sivin I, Campodonico I, Kiriwat O, Holma P, Diaz S, Wan L, et al. (December 1998). "The performance of levonorgestrel rod and Norplant contraceptive implants: a 5 year randomized study". Human Reproduction. 13 (12): 3371–3378. doi:10.1093/humrep/13.12.3371. PMID 9886517.
^ Jump up to: a b c d Trussell, James (2007). "Contraceptive Efficacy". In Hatcher, Robert A.; et al. (eds.). Contraceptive Technology (19th rev. ed.). New York: Ardent Media. ISBN 978-0-9664902-0-6.
^ Puri V, et al. (1988). "Results of multicentric trial of Centchroman". In Dhwan B. N., et al. (eds.). Pharmacology for Health in Asia : Proceedings of Asian Congress of Pharmacology, 15–19 January 1985, New Delhi, India. Ahmedabad: Allied Publishers.
^ Nityanand S, et al. (1990). "Clinical evaluation of Centchroman: a new oral contraceptive". In Puri, Chander P., Van Look, Paul F. A. (eds.). Hormone Antagonists for Fertility Regulation. Bombay: Indian Society for the Study of Reproduction and Fertility.
^ O'Connor ML (March–April 2001). "Contraceptive Use Elevates The Odds of Barrier Method Use for Disease Prevention". Family Planning Perspectives. 33 (2). Guttmacher Institute: 93–94. doi:10.2307/2673760. JSTOR 2673760. Retrieved 2009-08-30.
^ Paladine H (2003). "What's New In Contraception". The Female Patient. Retrieved 2009-08-30.
^ Lopez LM, Bernholc A, Chen M, Grey TW, Otterness C, Westhoff C, et al. (August 2016). "Hormonal contraceptives for contraception in overweight or obese women". The Cochrane Database of Systematic Reviews. 2016 (8): CD008452. doi:10.1002/14651858.CD008452.pub4. PMC 9063995. PMID 27537097.
^ POP:Kovacs G (October 1996). "Progestogen-only pills and bleeding disturbances". Human Reproduction. 11 Suppl 2 (Supplement 2): 20–23. doi:10.1093/humrep/11.suppl_2.20. PMID 8982741.
IUS:McCarthy L (2006). "Levonorgestrel-Releasing Intrauterine System (Mirena) for Contraception". Am Fam Physician. 73 (10): 1799–. Archived from the original on 2007-09-26. Retrieved 2009-09-01.
Depo-Provera and Jadelle:Hubacher D, Lopez L, Steiner MJ, Dorflinger L (August 2009). "Menstrual pattern changes from levonorgestrel subdermal implants and DMPA: systematic review and evidence-based comparisons". Contraception. 80 (2): 113–118. doi:10.1016/j.contraception.2009.02.008. PMID 19631785.
Implanon:Riney S, O'Shea B, Forde A (January 2009). "Etonogestrel implant as a contraceptive choice; patient acceptability and adverse effect profile in a general practice setting". Irish Medical Journal. 102 (1): 24–25. PMID 19284015.
^ Garceau RJ, Wajszczuk CJ, Kaunitz AM (December 2000). "Bleeding patterns of women using Lunelle monthly contraceptive injections (medroxyprogesterone acetate and estradiol cypionate injectable suspension) compared with those of women using Ortho-Novum 7/7/7 (norethindrone/ethinyl estradiol triphasic) or other oral contraceptives". Contraception. 62 (6): 289–295. doi:10.1016/S0010-7824(00)00183-9. PMID 11239615.
^ Truitt ST, Fraser AB, Grimes DA, Gallo MF, Schulz KF (October 2003). "Hormonal contraception during lactation. systematic review of randomized controlled trials". Contraception. 68 (4): 233–238. doi:10.1016/S0010-7824(03)00133-1. PMID 14572885.
^ Jump up to: a b Kelsey JJ (December 1996). "Hormonal contraception and lactation". Journal of Human Lactation. 12 (4): 315–318. doi:10.1177/089033449601200419. PMID 9025449. S2CID 6149248.
^ "Maximizing the use of the progestin minipill". Contraceptive Technology Update. 20 (2): 19–21. February 1999. PMID 12294591.
^ "Progestin-only Contraceptives". Hall Health Primary Care Center. University of Wisconsin-Seattle. 2007-02-11. Archived from the original on 2011-06-04. Retrieved 2009-09-01.
^ Jump up to: a b International Agency for Research on Cancer (IARC) (1999). "5. Summary of Data Reported and Evaluation". Oral Contraceptives, Combined (Vol. 72 ed.). p. 49.
^ Jump up to: a b "Combined Estrogen-Progestogen Contraceptives" (PDF). IARC Monographs on the Evaluation of Carcinogenic Risks to Humans. 91. International Agency for Research on Cancer. 2007.
^ "Endometrial cancer and oral contraceptives: an individual participant meta-analysis of 27 276 women with endometrial cancer from 36 epidemiological studies". The Lancet. Oncology. 16 (9): 1061–1070. September 2015. doi:10.1016/S1470-2045(15)00212-0. PMID 26254030.
^ IARC Working Group on the Evaluation of Carcinogenic Risks to Humans (1999). "Hormonal contraceptives, progestogens only". Hormonal contraception and post-menopausal hormonal therapy; IARC monographs on the evaluation of carcinogenic risks to humans, Volume 72. Lyon: IARC Press. pp. 339–397. ISBN 92-832-1272-X.
^ "McKinley Health Center, University of Illinois: OrthoEvra Contraceptive Patch". Retrieved 2007-07-13.
^ Roach RE, Helmerhorst FM, Lijfering WM, Stijnen T, Algra A, Dekkers OM (August 2015). "Combined oral contraceptives: the risk of myocardial infarction and ischemic stroke". The Cochrane Database of Systematic Reviews. 2018 (8): CD011054. doi:10.1002/14651858.CD011054.pub2. PMC 6494192. PMID 26310586.
^ "Blood Clots with Hormonal Contraception". Hormones Matter. 27 January 2016.
^ "Is It True That Birth Control Pills Cause Blood Clots?". National Blood Clot Alliance. Archived from the original on 15 April 2019. Retrieved 15 April 2019.
^ "Birth Control Pill (for Teens) - Nemours KidsHealth". kidshealth.org. Retrieved 2022-09-21.
^ Stacey D (2009-07-28). "Noristerat Injection". About.com. Archived from the original on 2009-02-14. Retrieved 2009-08-29.
^ Jump up to: a b "Centchroman". Reproductive Health Online. 2003. Archived from the original on 2012-02-05. Retrieved 2009-08-29.
^ Jump up to: a b c d e f Nelson AL, Cwiak C (2011). "Combined oral contraceptives (COCs)". In Hatcher RA, Trussell JN, Nelson AL, Cates Jr W, Kowal D, Policar MS (eds.). Contraceptive technology (20th revised ed.). New York: Ardent Media. pp. 249–341. ISBN 978-1-59708-004-0. ISSN 0091-9721. OCLC 781956734. pp. 257–258:
> Mechanism of action
> COCs prevent fertilization and, therefore, qualify as contraceptives. There is no significant evidence that they work after fertilization. The progestins in all COCs provide most of the contraceptive effect by suppressing ovulation and thickening cervical mucus, although the estrogens also make a small contribution to ovulation suppression. Cycle control is enhanced by the estrogen.
> Because COCs so effectively suppress ovulation and block ascent of sperm into the upper genital tract, the potential impact on endometrial receptivity to implantation is almost academic. When the two primary mechanisms fail, the fact that pregnancy occurs despite the endometrial changes demonstrates that those endometrial changes do not significantly contribute to the pill's mechanism of action.
^ Jump up to: a b c Speroff L, Darney PD (2011). "Oral contraception". A clinical guide for contraception (5th ed.). Philadelphia: Lippincott Williams & Wilkins. pp. 19–152. ISBN 978-1-60831-610-6.
^ Jump up to: a b c Levin ER, Hammes SR (2011). "Estrogens and progestins". In Brunton LL, Chabner BA, Knollmann BC (eds.). Goodman & Gilman's pharmacological basis of therapeutics (12th ed.). New York: McGraw-Hill Medical. pp. 1163–1194. ISBN 978-0-07-162442-8.
^ Jump up to: a b Glasier A (2010). "Contraception". In Jameson JL, De Groot LJ (eds.). Endocrinology (6th ed.). Philadelphia: Saunders Elsevier. pp. 2417–2427. ISBN 978-1-4160-5583-9.
^ "Emergency Contraception". Journal of Obstetric, Gynecologic, and Neonatal Nursing. 46 (6): 886–888. 2017-11-01. doi:10.1016/j.jogn.2017.09.004. PMID 29128088.
^ Jump up to: a b Upadhya KK (December 2019). "Emergency Contraception". Pediatrics. 144 (6): e20193149. doi:10.1542/peds.2019-3149. PMID 31740497.
^ Jump up to: a b "Emergency Contraceptive Agents", LiverTox: Clinical and Research Information on Drug-Induced Liver Injury, National Institute of Diabetes and Digestive and Kidney Diseases, 2012, PMID 31643382, retrieved 2019-11-25
^ Jump up to: a b Shen J, Che Y, Showell E, Chen K, Cheng L (January 2019). "Interventions for emergency contraception". The Cochrane Database of Systematic Reviews. 1: CD001324. doi:10.1002/14651858.CD001324.pub6. PMC 7055045. PMID 30661244.
^ Jump up to: a b c d e f g h i j k Zettermark S, Perez Vicente R, Merlo J (2018-03-22). "Hormonal contraception increases the risk of psychotropic drug use in adolescent girls but not in adults: A pharmacoepidemiological study on 800 000 Swedish women". PLOS ONE. 13 (3): e0194773. Bibcode:2018PLoSO..1394773Z. doi:10.1371/journal.pone.0194773. PMC 5864056. PMID 29566064.
^ Munuce MJ, Gómez-Elías MD, Caille AM, Bahamondes L, Cuasnicú PS, Cohen DJ (March 2020). "Mechanisms involved in the contraceptive effects of ulipristal acetate". Reproduction. 159 (3): R139 – R149. doi:10.1530/REP-19-0355. PMID 31689233.
^ Shigesato M, Elia J, Tschann M, Bullock H, Hurwitz E, Wu YY, Salcedo J (March 2018). "Pharmacy access to Ulipristal acetate in major cities throughout the United States". Contraception. 97 (3): 264–269. doi:10.1016/j.contraception.2017.10.009. PMC 6467254. PMID 29097224.
^ Jump up to: a b Skovlund CW, Mørch LS, Kessing LV, Lidegaard Ø (November 2016). "Association of Hormonal Contraception With Depression". JAMA Psychiatry. 73 (11): 1154–1162. doi:10.1001/jamapsychiatry.2016.2387. PMID 27680324.
^ Jump up to: a b Vrettakos C, Bajaj T (2019). "Levonorgestrel". StatPearls. StatPearls Publishing. PMID 30969559. Retrieved 2019-11-25.
^ Jump up to: a b Mittal S (December 2016). "Emergency contraception: which is the best?". Minerva Ginecologica. 68 (6): 687–699. PMID 27082029.
^ Black KI, Hussainy SY (October 2017). "Emergency contraception: Oral and intrauterine options". Australian Family Physician. 46 (10): 722–726. PMID 29036770.
^ Jump up to: a b c "Family Planning Worldwide: 2008 Data Sheet" (PDF). Population Reference Bureau. 2008. Archived from the original (PDF) on 2008-09-11. Retrieved 2008-06-27. Data from surveys 1997–2007.
^ Jump up to: a b Chandra A, Martinez GM, Mosher WD, Abma JC, Jones J (December 2005). "Fertility, family planning, and reproductive health of U.S. women: data from the 2002 National Survey of Family Growth" (PDF). Vital and Health Statistics. Series 23, Data from the National Survey of Family Growth. 23 (25). National Center for Health Statistics: 1–160. PMID 16532609. Retrieved 2007-05-20. See Table 60.
^ Loeb, L. (1911). "The cyclic changes in the ovary of the guinea pig". Journal of Morphology. 22: 37–70.
^ Zuckerman, Solly; Weir, Barbara J. (1977). The Ovary. Vol. 1. General Aspects (2nd ed.). New York: Academic Press. ISBN 978-0-12-782601-1.
^ Müller-Jahncke WD (August 1988). "[Ludwig Haberlandt (1885-1932) and the development of hormonal contraception]". Zeitschrift für die Gesamte Innere Medizin und Ihre Grenzgebiete (in German). 43 (15): 420–422. PMID 3051743.
^ Makepeace, A. W.; Weinstein, George Louis; Friedman, Maurice H. (1937-06-30). "The effect of progestin and progesterone on ovulation in the rabbit". American Journal of Physiology. Legacy Content. 119 (3): 512–516. doi:10.1152/ajplegacy.1937.119.3.512. ISSN 0002-9513.
^ Goldzieher JW (May 1982). "Estrogens in oral contraceptives: historical perspectives". The Johns Hopkins Medical Journal. 150 (5): 165–169. PMID 7043034.
^ Perone N (1993). "The history of steroidal contraceptive development: the progestins". Perspectives in Biology and Medicine. 36 (3): 347–362. doi:10.1353/pbm.1993.0054. PMID 8506121. S2CID 46312750.
^ Junod SW, Marks L (April 2002). "Women's trials: the approval of the first oral contraceptive pill in the United States and Great Britain". Journal of the History of Medicine and Allied Sciences. 57 (2): 117–160. doi:10.1093/jhmas/57.2.117. PMID 11995593. S2CID 36533080.
^ Leary WE (1992-10-30). "U.S. Approves Injectable Drug As Birth Control". The New York Times. p. A.1. PMID 11646958.
^ McFadden S (2009-06-15). "Golden anniversary of a revolution". The New Zealand Herald. Retrieved 2009-08-29.
^ "IUDs—An Update. Chapter 2: Types of IUDs". Population Information Program, the Johns Hopkins School of Public Health. XXIII (5). December 1995. Archived from the original on 2007-06-29.
^ Chin M (1992). "The Chronological Important Events in the Development of Norplant". Norplant. Archived from the original on 2009-04-29. Retrieved 2009-08-29., possibly taken from Dorflinger LJ (1991-08-02). "Explanation of assumptions made for Norplant projections". Unpublished. Retrieved 2009-08-29.[permanent dead link]
^ "Research and regulatory approval (of injectable contraceptives)". Population Reports. AccessMyLibrary. 1995-08-01. Retrieved 2009-08-29.
^ "World Health Agency Endorsing 2 New Injectable Contraceptives". The New York Times. 1993-06-06. p. Section 1, p.20. Retrieved 2009-08-29.
^ Organon (July 16, 2002). "NuvaRing world's first vaginal birth control ring, first launch now in the US" (PDF). Archived from the original (PDF) on 2005-10-27. Retrieved 2009-08-29.
^ "FDA Approves Contraceptive Patch". Wired.com. Associated Press. 2001-11-11. Retrieved 2009-08-29.
^ Jump up to: a b Singh MM (July 2001). "Centchroman, a selective estrogen receptor modulator, as a contraceptive and for the management of hormone-related clinical disorders". Medicinal Research Reviews. 21 (4): 302–347. doi:10.1002/med.1011. PMID 11410933. S2CID 37474826.
External links
Media related to Hormonal contraception at Wikimedia Commons
Why Cholesterol Is Found Only in the Cell Membranes of Higher Animals
admin
April 4, 2025
Cholesterol occurs in membranes of
(1) only prokaryotes
(2) All eukaryotes
(3) All higher animals
(4) Only plants
Cholesterol is one of those biological molecules that everyone’s heard of—mostly in the context of health and heart disease. But did you know that cholesterol also plays a critical structural role in the cell membranes of certain organisms?
Let’s dive into where cholesterol is found, what it does, and why it’s unique to higher animals.
What Is Cholesterol?
Cholesterol is a steroid-based lipid molecule with a rigid ring structure. It’s a major component of animal cell membranes, contributing to membrane fluidity, stability, and permeability.
Where Is Cholesterol Found?
Contrary to what some may think, cholesterol is not found in all life forms.
Correct Answer: Cholesterol occurs in the membranes of all higher animals (Option 3).
1. Not in Prokaryotes
Prokaryotic cells (like bacteria and archaea) lack cholesterol. Instead, they rely on hopanoids—structurally similar molecules—to stabilize their membranes.
2. Not in Plants
Plant cells do not have cholesterol in their membranes. Instead, they contain phytosterols such as sitosterol and stigmasterol, which serve similar structural functions.
3. Not in All Eukaryotes
While all animal cells are eukaryotic, not all eukaryotes (like fungi or protists) contain cholesterol in their membranes. Some may have ergosterol (as in fungi), but true cholesterol is primarily found in higher animals—especially vertebrates.
Role of Cholesterol in Higher Animal Membranes
1. Membrane Fluidity Buffer
Cholesterol acts as a fluidity buffer, making membranes less fluid at high temperatures and more fluid at low temperatures. This balance is critical for cellular function.
2. Structural Stability
Cholesterol adds rigidity to the otherwise flexible lipid bilayer, ensuring the membrane maintains its integrity.
3. Lipid Rafts Formation
It helps form lipid rafts—microdomains in the membrane that cluster specific proteins for cell signaling and transport.
Comparison Table: Cholesterol in Different Organisms
| Organism Type | Cholesterol Present? | Membrane Stabilizer Used |
| --- | --- | --- |
| Higher Animals | Yes | Cholesterol |
| Plants | No | Phytosterols |
| Fungi | No | Ergosterol |
| Prokaryotes | No | Hopanoids |
Why This Matters in Biology and Medicine
Understanding the presence of cholesterol in higher animal cell membranes is crucial because:
It explains why cholesterol is a key biomarker in animal physiology.
It’s central to the study of cardiovascular health.
It plays a role in pharmaceutical targets, such as cholesterol-lowering drugs.
Conclusion
While cholesterol might get a bad rap in diet discussions, it’s an essential molecule in the biology of higher animals. Its unique ability to regulate membrane fluidity and stability is unmatched by any other lipid—and it’s one of the reasons complex multicellular life can function as it does.
So, the next time someone mentions cholesterol, remember—it’s more than just a number on your blood test. It’s a vital part of what makes animal cells work.
CSIR NET Life Science Previous Year Questions and Solution on EVOLUTION AND BEHAVIOUR349
CSIR NET Life Science Previous Year Questions and Solution on Metabolism64
CSIR NET Life Science Previous Year Questions and Solution on Molecular Biology395
CSIR NET Life Science Previous Year Questions and Solution on Molecular Evolution43
CSIR NET Life Science Previous Year Questions and Solution on MOLECULAR TECHNIQUES11
CSIR NET Life Science Previous Year Questions and Solution on Nervous system51
CSIR NET Life Science Previous Year Questions and Solution on Operon38
CSIR NET Life Science Previous Year Questions and Solution on Origin of cell and Unicellular Evolution26
CSIR NET Life Science Previous Year Questions and Solution on OXIDATIVE PHOSPHORYLATION26
CSIR NET Life Science Previous Year Questions and Solution on Paleontology and Evolutionary History36
CSIR NET Life Science Previous Year Questions and Solution on Population Ecology101
CSIR NET Life Science Previous Year Questions and Solution on Post Transcriptional Processing39
CSIR NET Life Science Previous Year Questions and Solution on POST-TRANSLATIONAL MODIFICATION25
CSIR NET Life Science Previous Year Questions and Solution on Prokaryotic Transcription17
CSIR NET Life Science Previous Year Questions and Solution on Respiratory system38
CSIR NET Life Science Previous Year Questions and Solution on Sensory System27
CSIR NET Life Science Previous Year Questions and Solution on The Mechanisms of Evolution123
CSIR NET Life Science Previous Year Questions and Solution on Thermoregulation15
CSIR NET Life Science Previous Year Questions and Solution on Translation29
CSIR NET Life Science Previous Year Questions and Solution on TRANSLATION Inhibitors6
CSIR NET Life Science Previous Year Questions and Solution on VIRAL GENE REGULATION19
CSIR NET Life Science Previous Year Questions and Solution Species Interactions39
CSIR NET Life Science Previous Year Questions and Solutions Spectroscopy6
CSIR NET Life Science Previous Year Questions on METHODS IN BIOLOGY – Video and Text Solutions11
CSIR NET Life Science Previous Year Questions on Microscopy5
CSIR NET Life Science PYQs and Solutions Eukaryotic Transcription4
CSIR NET LIFE SCIENCE TOPIC WISE FREE TEST31
CSIR NET Previous Year Questions and Solutions on MEMBRANE STRUCTURE8
CSIR NET Previous Year Questions and Solutions on Stability of Proteins and Nucleic Acids18
CSIR NET Previous Year Questions Gel Electrophoresis44
CSIR NET Previous Year Questions on Biochemistry Atoms & Molecules12
CSIR NET Previous Year Questions on Biochemistry Stabilizing Interactions23
CSIR NET Previous Year Questions on Blotting62
CSIR NET Previous Year Questions on DNA SEQUENCING29
CSIR NET Previous Year Questions on PCR58
CSIR NET Previous Year Questions on Protein Sequencing3
CSIR PYQ Biochemistry-Conformation of Proteins41
DBT BET JRF 2018 Previous Year Questions and Video solution161
DBT BET JRF 2019 Previous Year Questions and Video solution163
DBT BET JRF 2020 Previous Year Questions and Video solutions147
DBT BET JRF 2021 Previous Year Questions and Video solution169
DBT BET JRF 2022 Previous Year Questions and Video solution180
DBT BET JRF 2023 Previous Year Questions and Video solutions192
Education69
Institute-PhD program6
Summer Internship Program7
VIRAL GENE REGULATION1
Courses
IIT JAM / CUET- PG
0
280h 30m
44
(4.17)
Are you aspiring to crack the IIT JAM Life Sciences exam with flying colors? Choosing the right coaching institute is…
AAdmin
Start Learning
ICMR JRF Life Science
1
300h 20m
10
(4.33)
If you are serious about cracking the ICMR Life Science JRF exam, Let’s Talk Academy is your ultimate destination. Join…
AAdmin
Start Learning
ICMR JRF
0
300h 20m
0
(0)
Are you aspiring to qualify for the ICMR Life Science JRF exam? Choosing the right coaching institute can significantly impact your success.…
AAdmin
Start Learning
GATE Life Sciences Biotechnology
0
280h 20m
30
(4.33)
Let’s Talk Academy (LTA) is a leading coaching institute dedicated to training students for CSIR NET, GATE, DBT-JRF, ICMR-JRF, and…
AAdmin
Start Learning
DBT-BET JRF
0
300h 30m
31
(5.00)
Let’s Talk Academy is the number one BET-DBT JRF Biotechnology Exam coaching in Jaipur for Biotechnology and Life Science. The…
AAdmin
Start Learning
CSIR UGC NET Life Science Course
1
250h 20m
8
(5.00)
Let's Talk Academy is the number one CSIR NET Life Science coaching in Jaipur for Life Science. At Let's Talk Academy,…
AAdmin
Start Learning
Search
Search
Recent Posts
Understanding Drug Metabolism and Therapeutic Window: Clarifying Common Misconceptions
Understanding the Impact of Diet and Age on Weight Gain and Reproductive Outcomes in Rats
Enzymatic Defects Responsible for Night Blindness and Failure of Vitamin A Treatment
Conditions Affecting Basal Metabolic Rate (BMR): Identifying When It Is the Lowest
Essential Protein for Vitamin B12 Absorption in the Small Intestine
Recent Comments
Neelam Sharma on Beta-Turns in Proteins: Structure
Neelam Sharma on Collagen: Triple Helix, Amino Acid Composition, and Stability
Heena Mahlawat on Understanding Dihedral Angles and Secondary Structures
Heena Mahlawat on Understanding Protein Conformation: Key Facts
Heena Mahlawat on Secondary Structure Regions in the Ramachandran Plot
Recent Posts
Understanding Drug Metabolism and T..
Sep 12, 2025
Understanding the Impact of Diet an..
Sep 12, 2025
Enzymatic Defects Responsible for N..
Sep 12, 2025
Conditions Affecting Basal Metaboli..
Sep 12, 2025
Essential Protein for Vitamin B12 A..
Sep 12, 2025
Popular Tags
Behavior and EvolutionblottingCSIRcsir enzymes kinetics PYQcsir net life scienceCSIR NET Life Science Previous Year Questions and Solution on BrainCSIR NET Life Science Previous Year Questions and Solution on Cardiovascular SystemCSIR NET Life Science Previous Year Questions and Solution on Emergence of Evolutionary ThoughtsCSIR NET Life Science Previous Year Questions and Solution on Excretory SystemCSIR NET Life Science Previous Year Questions and Solution on Molecular EvolutionCSIR NET Life Science Previous Year Questions and Solution on Nervous systemCSIR NET Life Science Previous Year Questions and Solution on Paleontology and Evolutionary HistoryCSIR NET Life Science Previous Year Questions and Solution on Respiratory SystemCSIR NET Life Science Previous Year Questions and Solution on The Mechanisms of EvolutionCSIR NET Life Science Previous Year Questions and Solution Population EcologyCSIR NET Life Science Previous Year Questions Community EcologyCSIR NET Life Science Previous Year Questions on applied ecologyCSIR NET Life Science Previous Year Questions on Ecosystem EcologyCSIR NET Life Science PYQ and Solution Species InteractionsCSIR NET Life Science PYQ on metabolismcsir previous year questionCSIR protein PYQcsir pyqcsir pyq biochemistrycsir pyq molecular biologydbt jrf 2018dbt jrf 2019dbt jrf pyqDBT JRF PYQ-2022DBT JRF PYQ-2023DBT JRF PYQ 2020dbt jrf pyq 2021DNA REPLICATION CSIR NET PYQDNA REPLICATION PYQEastern BlottingFar-Western Blottinggel electorphoresisNorthern blottingoperon pyqPCRPolymerase Chain ReactionSlot Blottingsouthern blottingTranslation PYQWestern blotting
Latest Courses
### CSIR UGC NET Life Science Course Let's Talk Academy is the number one CSIR NET Life Science coaching in Jaipur for Life Science. At Let's Talk Academy, you can be sure to get the right coaching strictly in adherence to the CSIR NET Life Science course. CSIR NET Life Science is an exam that requires in-depth knowledge, consistent practice, and the right guidance to crack. If you are looking for a place where you can get the best CSIR NET Life Science coaching, you are at the right place.
### ICMR JRF Life Science If you are serious about cracking the ICMR Life Science JRF exam, Let’s Talk Academy is your ultimate destination. Join now and take the first step toward a successful research career!
### IIT JAM / CUET- PG Are you aspiring to crack the IIT JAM Life Sciences exam with flying colors? Choosing the right coaching institute is crucial for your success. Let's Talk Academy (LTA) has established itself as the best coaching institute for IIT JAM Life Sciences, helping students achieve their dreams through expert guidance, structured study material, and personalized mentorship.
Download Android App
Download IOS App
Let's Talk Academy
An Institute for CSIR NET Life Science- DBT JRF/ M Sc Entrance
9785333312/9785333319
Resources
About Us
Publication
Contact Us
Free Tests
My account
Courses
CSIR UGC NET Life Science Course
GATE Life Sciences Biotechnology
ICMR JRF Life Science
DBT-BET JRF
IIT JAM / CUET- PG
Working Hours
Mon - Sat 7:00 AM - 7:00 PM
1st Floor, Krishna Tower, Manav Ashram Colony, Vasundhara Colony, Gopal Pura Mode, Jaipur, Rajasthan 302018
Copyright © 2025 Let's Talk Academy. All rights reserved. |
189533 | https://mathworld.wolfram.com/QuadraticFunction.html | Quadratic Function -- from Wolfram MathWorld
Quadratic Function
See: Quadratic Polynomial
©1999–2025 Wolfram Research, Inc.
Created, developed and nurtured by Eric Weisstein at Wolfram Research
189534 | https://www.investopedia.com/terms/n/nominalgdp.asp | Nominal Gross Domestic Product: Definition and Formula
Nominal Gross Domestic Product: Definition and Formula
By The Investopedia Team
Updated June 19, 2025
Reviewed by Thomas Brock
Fact checked by Yarilet Perez
Definition
Nominal GDP is the total value of goods and services produced in an economy without accounting for inflation.
What Is Nominal Gross Domestic Product (GDP)?
Nominal gross domestic product (GDP) is everything that a country produces during a certain period at current prices. This means it's not adjusted for inflation or deflation. Nominal GDP is usually measured at quarterly or annual intervals in the currency of the local economy. Nominal GDP has a direct relationship with prices, so when prices go up, nominal GDP follows. This economic metric is different from real GDP, which is adjusted for inflation.
Key Takeaways
Nominal GDP is the monetary value of a country's goods and services at current market prices.
It doesn't remove the effect of rising prices when comparing one period to another, so rising prices can inflate the growth figure.
Growing nominal GDP from year to year may reflect a rise in prices as opposed to growth in the number of goods and services produced.
Real GDP starts with nominal GDP but factors in price change between periods.
Understanding Nominal GDP
The economy is a series of interrelated processes that determine how resources are allocated. These processes include the production, distribution, and consumption of goods and services, along with other activities. These goods and services are required by those living within the economy.
There are many ways to determine and measure how well the economy is doing. Economists watch various economic indicators, such as unemployment, inflation, retail sales, industrial production, and gross domestic product. GDP is a metric that measures the health and well-being of a nation's economy. It's the total value of all goods and services that are produced during a certain period less the value of those that are employed during the production process.
There are different types of GDP, including real, actual, potential, and nominal. Nominal GDP is an assessment of economic production in an economy that includes current prices in its calculation. In other words, it doesn't strip out inflation or the pace of rising prices, which can inflate the growth figure. All goods and services counted in nominal GDP are valued at the prices that are actually sold that year.
Fast Fact
The Bureau of Economic Analysis (BEA) measures and reports GDP figures in the United States.
Nominal GDP Formula
Nominal GDP is the total value of goods and services produced within a specific economy. There are a couple of ways to calculate it.
The first is the expenditure approach. To use this method, you'll have to know a few values, including:
Consumer Spending (C)
Business Investment (I)
Government Spending (G)
Total Net Exports (X-M): This figure is derived by subtracting import expenditures from the total value gained from a country's exports.
Once you have these figures, you can plug them into the following formula:
Nominal GDP = C + I + G + (X-M)
You can also use the GDP price deflator method to calculate nominal GDP. This approach involves the use of the following formula:
Nominal GDP = Real GDP x GDP Price Deflator
Economists use the prices of goods from a base year as a reference point when comparing GDP from one year to another. This price difference is called the GDP price deflator.
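As a rough sketch, both formulas can be evaluated in a few lines of Python. Every figure below is hypothetical, chosen only so the two approaches agree on the same economy:

```python
# Hypothetical, illustrative figures (billions of dollars), not official data.
consumption = 14_000       # C: consumer spending
investment = 3_500         # I: business investment
government = 3_800         # G: government spending
exports, imports = 2_500, 3_100

# Expenditure approach: Nominal GDP = C + I + G + (X - M)
nominal_gdp = consumption + investment + government + (exports - imports)
print(nominal_gdp)  # 20700

# Price-deflator approach: Nominal GDP = Real GDP x GDP price deflator
real_gdp = 20_000
deflator = 1.035           # prices assumed up 3.5% since the base year
print(round(real_gdp * deflator))  # 20700
```

Both routes land on the same nominal figure because the deflator here was chosen to match the assumed real GDP.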
Components of Nominal GDP
Let's dive into the four main components of nominal GDP: consumption, investment, government spending, and net exports.
Consumption
Consumption represents the total expenditure by households on goods and services. This includes everything from groceries and clothing to healthcare services and entertainment. It's the demand side of the economy, and consumer spending patterns can be influenced by things like disposable income and cultural trends.
Investment
Investment encompasses spending on capital goods, such as machinery, equipment, and infrastructure. When a country spends on investment, it has the aim of increasing future production capacity. Investment also includes expenditures on research and development (R&D).
Government Spending
This refers to the expenditure by governments on goods and services. This can range from public education and healthcare to defense and infrastructure projects. While government spending contributes directly to nominal GDP, its impact on the economy can vary depending on the nature of the expenditure.
For instance, investments in education and infrastructure can boost productivity, while excessive spending on subsidies or bureaucracy may lead to economic inefficiencies (if innovators no longer have incentives to innovate).
Net Exports
The final component is the difference between a country's exports and imports, known as net exports. When exports exceed imports, a country has a trade surplus. Conversely, when imports exceed exports, it has a trade deficit. Net exports reflect a country's competitiveness in international markets and how well a country may leverage global markets for expansion.
Effects of Inflation on Nominal GDP
Because it is measured in current prices, growing nominal GDP from year to year might reflect a rise in prices as opposed to growth in the number of goods and services produced. If all prices rise more or less together, then this will make nominal GDP appear greater. Inflation is a negative force for economic participants because it diminishes the purchasing power of income and savings, both for consumers and investors.
Inflation is most commonly measured using the Consumer Price Index (CPI) or the Producer Price Index (PPI). The CPI measures price changes from the buyer's perspective or how they impact the consumer. The PPI, on the other hand, measures the average change in selling prices that are paid to producers in the economy.
When the overall price level of the economy rises, consumers have to spend more to purchase the same amount of goods. If an individual’s income rises by 10% in a given period but inflation rises by 10% as well, then the individual’s real income (or purchasing power) is unchanged. The term real in real income merely reflects the income after inflation has been subtracted from the figure.
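That unchanged-purchasing-power example can be checked numerically; the income figures below are hypothetical:

```python
# Hypothetical figures: income rises 10% while prices also rise 10%.
income_old, income_new = 50_000, 55_000
inflation = 0.10

# Deflate the new income back to base-period purchasing power.
real_income_new = income_new / (1 + inflation)
print(round(real_income_new))  # 50000 -- purchasing power unchanged
```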
Important
The U.S. is the world's largest economy, followed by China and Germany.
How Nominal GDP Is Used
Nominal GDP is incredibly useful in economic analysis. It serves as a fundamental measure of economic growth, allowing policymakers, businesses, and investors to track changes in the size and direction of the economy over time.
Governments and central banks use nominal GDP data to formulate and evaluate economic policies. Policymakers theoretically make more informed decisions regarding fiscal and monetary measures by analyzing nominal GDP activity.
Nominal GDP per capita, which divides total GDP by the population, offers insights into the average income and purchasing power of individuals within a country. While it does not capture income distribution or disparities, it provides a broad measure of overall prosperity and living standards. A government can compare nominal GDP per capita over time to better understand the average economic well-being of its citizens.
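The per-capita calculation is a single division; the figures below are made up for illustration only:

```python
# Hypothetical figures for illustration only.
nominal_gdp = 25_000_000_000_000   # $25 trillion
population = 335_000_000

# Nominal GDP per capita: total nominal GDP divided by population.
per_capita = nominal_gdp / population
print(round(per_capita))  # 74627
```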
It can be used as a comparison tool for economic performance across countries. Analysts can assess relative economic size, productivity levels, and competitiveness across countries. They can then leverage other metrics driven by nominal GDP (like nominal GDP per capita) to compare trends and changes across countries over time.
Businesses can use nominal GDP data to inform strategic planning, market analysis, and investment decisions. Understanding broader economic trends enables companies to anticipate changes in consumer demand, adjust pricing strategies, and identify growth opportunities. For instance, companies like Omnicom Group have referenced nominal GDP as part of their Form 10-K annual filing.
Limitations of Nominal GDP
There are several limitations to using nominal GDP as an economic indicator. There are several factors that aren't included in nominal GDP, such as:
The total cost of production. While certain costs can be measured, nominal GDP doesn't include external costs that are important to the production process, such as waste and environmental factors.
The production and sale of goods. When it comes to production, nominal GDP only takes final production into account rather than the steps and parts used during the manufacturing process. Similarly, this indicator also tracks inventory—not the actual sale of goods.
Certain services. Nominal GDP doesn't include valuable services that contribute to society and the economy as a whole because they can't be quantified. These include unpaid internships and volunteer work.
Another limitation arises when an economy is mired in a recession or a period of negative GDP growth. Negative nominal GDP growth could be due to a decrease in prices, called deflation. If prices declined at a greater rate than production growth, nominal GDP might reflect an overall negative growth rate in the economy. A negative nominal GDP would be signaling a recession when, in reality, production growth was positive.
Nominal GDP vs. Real GDP
When we compare GDP growth between two periods, a nation's nominal GDP growth might overstate its real growth if inflation is present; the GDP price deflator is used to correct for this. If prices rose by 1% since the base year, the GDP deflator would be 1.01. Overall, real GDP is a better measure any time the comparison is over multiple years. That's why economists prefer to use real GDP over nominal GDP.
Real GDP starts with nominal GDP but it also factors in price changes from one period to another. Real GDP takes the total output for GDP and divides it by the GDP deflator. Let's say the current year's nominal GDP output was $2,000,000 while the GDP deflator showed a 1% increase in prices since the base year. Real GDP would be calculated as $2 million ÷ 1.01 or $1.98 million for the year.
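The deflator division in that example can be reproduced directly:

```python
# Figures from the example above: $2,000,000 nominal GDP and a 1%
# price rise since the base year (GDP deflator = 1.01).
nominal_gdp = 2_000_000
deflator = 1.01

real_gdp = nominal_gdp / deflator
print(round(real_gdp))  # 1980198 -- roughly $1.98 million
```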
What Are the Key Features of Nominal GDP?
Nominal GDP represents the value of all the goods and services produced within a country at current market prices. This means that it is unadjusted for inflation, so it follows any changes within the economy over time. This allows economists and analysts to track short-term changes or compare the economies of different nations or see how changes in nominal GDP can be influenced by inflation or population growth.
What Is the Formula for Nominal GDP?
The most common formula for nominal GDP is C + I + G + (X-M), which factors in consumer spending (C), business investment (I), government spending (G), and net exports (X-M). Nominal GDP can also be calculated by multiplying real GDP by a GDP price deflator.
How Do I Calculate Nominal GDP?
Nominal GDP measures the economic production in an economy and includes the current prices of goods and services in its calculation. There are different ways to calculate nominal GDP:
The expenditure approach accounts for the changes in quantity and the current market prices and it's a suitable way to measure nominal GDP.
The GDP deflator approach uses the real GDP level and the change in price in its calculation. When multiplying both elements, the result is the nominal GDP.
Why Is Nominal GDP Higher Than Real GDP?
Nominal GDP is typically higher than real GDP because it is measured at current market prices. When prices have risen since the base year, real GDP, which removes those price changes, comes in lower than nominal GDP.
What Is the Difference Between Nominal and Real GDP?
In short, nominal GDP measures the economic production at current market prices, whereas real GDP measures the economic production factoring in any prices changes in the market (deflation or inflation).
The Bottom Line
Nominal gross domestic product is a useful measure when GDP needs to be compared to any other factor that, like nominal GDP, is not inflation-adjusted. For example, a comparison of a nation's debt to its GDP will use nominal GDP. Keep in mind that debt is always measured in current dollars. Economists, however, often favor real GDP over nominal GDP, as it accounts for the effects of inflation.
Article Sources
Investopedia requires writers to use primary sources to support their work. These include white papers, government data, original reporting, and interviews with industry experts. We also reference original research from other reputable publishers where appropriate. You can learn more about the standards we follow in producing accurate, unbiased content in our editorial policy.
U.S. Bureau of Labor Statistics. "Consumer Price Index."
U.S. Bureau of Labor Statistics. "Producer Price Indexes."
International Monetary Fund. "GDP, Current Prices."
Securities and Exchange Commission. "Form 10-K (Omnicom Group Inc.)," Page 12.
189535 | https://stats.libretexts.org/Courses/Fort_Hays_State_University/Elements_of_Statistics/02%3A_Descriptive_Statistics/2.08%3A_Measures_of_Median_and_Mean_on_Grouped_Data | fj
xj⋅fj
xj⋅fj
xj⋅fj
fj.
2.8.4
xj
fj
xj⋅fj
5
2
10
6
6
36
7
5
35
8
4
32
9
3
27
μ=∑xj⋅fj∑fj=14020=7
2.8: Measures of Median and Mean on Grouped Data
Last updated: Jul 7, 2025
Learning Objectives
Determine the median and mean in grouped discrete data
Determine the median and mean in grouped relative frequency data
Determine the median and mean in grouped continuous data
Extend to weighted mean
Section 2.8 Excel File (contains all of the data sets for this section)
Introduction to Grouped Data
In our investigation of descriptive statistics, we worked on a collection of individual data values and then formed appropriate summary measures of that "raw" data. However, we may sometimes be given the data in a summarized frequency distribution format instead of as a "raw" data collection. Can we find our various descriptive statistic measures if we only have the frequency table, which represents the data in grouped form? Although the answer is only "sometimes," the underlying concept of how we can do so is essential for later ideas in the course. We begin with mean and median measures found from frequency tables on grouped quantitative data, where the grouping is by distinct single values rather than by interval values.
Mean and Median of Non-Interval but Grouped Data in a Frequency Table
Look at the frequency distribution in Table 2.8.1 shown below from Section 2.1 about thirty student scores (discrete 10-point scale) for an assignment; we will assume these are only a sample of a larger population of scores. Notice that our table shows no loss of crucial information on the data as each distinct data value is explicitly shown in the table, and no intervals are used to represent the grouped student scores.
Table 2.8.1: Grouped Frequency Distribution
| Student Score | Frequency |
| --- | --- |
| 3 | 1 |
| 4 | 1 |
| 5 | 3 |
| 6 | 5 |
| 7 | 5 |
| 8 | 7 |
| 9 | 5 |
| 10 | 3 |
With such a table, we could formally recreate the entire data set {3, 4, 5, 5, 5, 6, ..., 9, 10, 10, 10} by recognizing the meaning of the frequency values for each of the various score values in the table. If we had more data values, recreating the data set would be tedious, and we could lose information on the data. Even if using technology to produce our descriptive measures, we must "type in" all individual data values. This process is likely to lead to many data entry errors. Recreating the "raw" data set is unnecessary; we can determine the median and the arithmetic mean by working with data in this frequency distribution format.
First, we examine the median measure by using quantitative reasoning. We note by the sum of the frequency column that there are 30 pieces of data and, by our earlier discussion in Section 2.4, the median is the average of the two data values in positions 15 and 16 in the ordered list of all data values. Using our frequency column, we accumulate across our frequency counts to see that the 15th data position is within the group of "7" scores and the 16th data position is within the group of "8" scores (the total accumulation of data scores from "3" to "7" includes 1 + 1 + 3 + 5 + 5 = 15 scores). So the median value of the data is (7 + 8)/2 = 7.5. In general, we can find the median by focusing on our frequency counts to help us determine the center position location, then using that location to determine the median within the grouped variable values.
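The accumulation argument above can be sketched in a few lines of Python, working straight from the frequency table rather than the recreated raw list:

```python
# Median from a frequency table (Table 2.8.1), without rebuilding the raw data.
freq = {3: 1, 4: 1, 5: 3, 6: 5, 7: 5, 8: 7, 9: 5, 10: 3}

n = sum(freq.values())                  # 30 data values
targets = {(n + 1) // 2, (n + 2) // 2}  # middle positions: 15 and 16

# Walk the sorted score values, accumulating frequencies until each
# middle position is reached; the median averages the values found there.
located = []
cumulative = 0
for value, f in sorted(freq.items()):
    cumulative += f
    while targets and min(targets) <= cumulative:
        located.append(value)
        targets.remove(min(targets))

median = sum(located) / len(located)
print(median)  # 7.5
```

For an odd number of data values the two target positions coincide, so the same code returns the single middle value.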
Next, we examine the mean. Recall that the mean is found in our original discussion by summing our quantitative data values, then dividing by the number of values in the data set: x̄ = (∑xi)/n. Notice in our grouped data, we can find the portion of the entire sum generated by each group by multiplying the group data value by the frequency. For example, the grouped data value 5 will contribute a total of 3⋅5 = 15 to the total sum since there are three 5 values in the data set; similarly, the grouped data value 8 will contribute a total of 8⋅7 = 56 toward the total sum since there are seven 8 values in the data set. This leads us to the following adjustment of our table to compute the mean of the grouped data (also note the change to general headings on each column).
Table 2.8.2: Computation of arithmetic mean for data from Table 2.8.1
| xj | fj | xj⋅fj |
---
| 3 | 1 | 3 |
| 4 | 1 | 4 |
| 5 | 3 | 15 |
| 6 | 5 | 30 |
| 7 | 5 | 35 |
| 8 | 7 | 56 |
| 9 | 5 | 45 |
| 10 | 3 | 30 |
| Totals: | ∑fj = 30 | ∑(xj⋅fj) = 218 |
| | Arithmetic Mean: x̄ = ∑(xj⋅fj)/∑fj = 218/30 ≈ 7.2667 | |
In conclusion, by summing our fj frequency column values, we know the sample size n. By summing our xj⋅fj column of values, we have accomplished the exact computation of adding our thirty individual data values together. The arithmetic mean of the data is then found by our last computation, x̄ = ∑(xj⋅fj)/∑fj: we divided the total sum of all data values by the number of data values. In grouped data of this form, we can find the mean by the above process, described symbolically by the given formula:
Sample Mean from a Frequency Distribution
x̄ = (∑xi)/n = ∑(xj⋅fj)/∑fj
If the data in our table had been population data, we would still perform the same calculation using the same reasoning and have:
Population Mean from a Frequency Distribution
μ = (∑xi)/N = ∑(xj⋅fj)/∑fj
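The grouped-data procedures above translate directly into code. The following Python sketch (the function names are our own) computes the mean and median from a {value: frequency} table, using the Table 2.8.1 frequencies:

```python
def mean_from_freq(freq):
    """Arithmetic mean from a {value: frequency} table: sum(x*f) / sum(f)."""
    n = sum(freq.values())
    return sum(x * f for x, f in freq.items()) / n

def median_from_freq(freq):
    """Median by accumulating frequencies across the sorted values."""
    n = sum(freq.values())
    # 1-indexed positions of the middle value(s) in the ordered data;
    # for odd n these coincide, for even n they are n/2 and n/2 + 1
    lo_pos, hi_pos = (n + 1) // 2, n // 2 + 1
    lo = hi = None
    cum = 0
    for x in sorted(freq):
        cum += freq[x]
        if lo is None and cum >= lo_pos:
            lo = x
        if hi is None and cum >= hi_pos:
            hi = x
    return (lo + hi) / 2

scores = {3: 1, 4: 1, 5: 3, 6: 5, 7: 5, 8: 7, 9: 5, 10: 3}
print(mean_from_freq(scores))    # ≈ 7.2667
print(median_from_freq(scores))  # 7.5
```

The running total in `median_from_freq` mirrors the accumulation across the frequency column described above: position 15 lands in the "7" group and position 16 in the "8" group.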
Text Exercise 2.8.1
Consider the Quiz 1 data from section 2.6 below in the frequency table format. Determine the mean from the grouped format and compare it with the results obtained in section 2.6 from the "raw" data. We assume the data is population data in this example.
Table 2.8.3: Grouped Frequency Distribution of Quiz 1 Data
| Quiz Scores | Frequency |
| 5 | 2 |
| 6 | 6 |
| 7 | 5 |
| 8 | 4 |
| 9 | 3 |
Answer
: To find the mean from this frequency table, we follow these steps with thoughts of what we are doing with the data:
1. Sum the frequency column fj to determine the size of the data set.
2. Compute the column of values xj⋅fj to weight each quiz score with their occurrence frequency.
3. Sum the column of xj⋅fj values to produce the total sum as if summing the individual data values.
4. Produce the mean by dividing the sum of the xj⋅fj column by the sum of the frequency column fj.
Table 2.8.4: Computation of arithmetic mean
| xj | fj | xj⋅fj |
---
| 5 | 2 | 10 |
| 6 | 6 | 36 |
| 7 | 5 | 35 |
| 8 | 4 | 32 |
| 9 | 3 | 27 |
| | μ = ∑(xj⋅fj)/∑fj = 140/20 = 7 | |
We notice this is the exact arithmetic mean value computed when working with the twenty individual quiz scores.
Mean and Median of Grouped Data in a Relative Frequency Table
What would happen if we had a relative frequency distribution of this data instead of a frequency distribution table? Recall that relative frequency in this situation measures the proportion of the data set that has a specific data value. We will use P(xj) to represent the relative frequency or proportion measure tied to the specific data value xj.
Table 2.8.5: Relative frequency table of student scores from Table 2.8.1
| Student Score xj | Relative Frequency P(xj) |
---
| 3 | 1/30 ≈ 0.0333 = 3.33% |
| 4 | 1/30 ≈ 0.0333 = 3.33% |
| 5 | 3/30 = 0.1000 = 10.00% |
| 6 | 0.1667 = 16.67% |
| 7 | 0.1667 = 16.67% |
| 8 | 0.2333 = 23.33% |
| 9 | 0.1667 = 16.67% |
| 10 | 0.1000 = 10.00% |
| Totals: | ∑P(xj) = 1.0000 = 100% |
With a relative frequency table, we could not formally recreate the entire data set unless we first knew the number of data values in the data set (i.e., the sample or population size). However, we do not need such information to determine the distribution's median or mean. We proceed as above, working with relative frequency measures instead of counted frequency measures.
We first examine the median measure. The sum of the relative frequency column accounts for all of the data at 100%; we should always sum our relative frequency measures to check that we have a total of 1.0000 = 100%. As discussed previously, the median is at the 50th percentile position in the ordered list of our data set. Using our relative frequency column, we can accumulate our relative percentages to see that the 50% data position is right on the border between the group of "7" scores and the group of "8" scores; the total relative frequency accumulation of data scores from "3" to "7" is 3.33% + 3.33% + 10.00% + 16.67% + 16.67% = 50%. The median value of the data is (7 + 8)/2 = 7.5, just as above. In general, we can find the median by using our relative frequency measures to locate the 50% accumulation point from the smallest value and then determine the median within the grouped variable values.
Next, we examine the mean measure. In forming the relative frequency measures, we divided each frequency count by the sample size to form the relative frequency measures. This earlier division and some algebraic reasoning show how we can adjust our standard arithmetic mean formula to fit this situation.
x̄ = (∑xi)/n = (x1 + x2 + … + xn)/n = x1/n + x2/n + … + xn/n = ∑(xj⋅fj)/n = ∑(xj⋅(fj/n)) = ∑[xj⋅P(xj)]
P(xj) stands for the relative frequency of data value xj; it is the proportion of the data set with that specific data value, xj. In a sense, each distinct data value is being "weighted" by the relative frequency of occurrence. For example, the fact that the data value 8 occurs with 23.33% relative frequency should make this data value "weigh-in" more heavily to the average than does the data value 5 that only occurs with 10% relative frequency. Our relative frequency gives us this "weighting" of the data in a relative sense instead of the above, in which the actual frequency measures give us a weighted "count" sense. This leads us to the following adjustment of our table to compute the mean of the grouped data (note the change to general headings on each column).
Table 2.8.6: Computation of arithmetic mean from relative frequency found in Table 2.8.5
| xj | P(xj) | xj⋅P(xj) |
---
| 3 | 1/30 ≈ 0.0333 = 3.33% | 3⋅(1/30) = 0.1000 |
| 4 | 1/30 ≈ 0.0333 = 3.33% | 4⋅(1/30) ≈ 0.1333 |
| 5 | 3/30 = 0.1000 = 10.00% | 5⋅(3/30) = 0.5000 |
| 6 | 0.1667=16.67% | 1.0000 |
| 7 | 0.1667=16.67% | 1.1667 |
| 8 | 0.2333=23.33% | 1.8667 |
| 9 | 0.1667=16.67% | 1.5000 |
| 10 | 0.1000=10.00% | 1.0000 |
| Totals: | ∑P(xj)=1.0000=100% | ∑[xj⋅P(xj)]≈7.2667 |
We notice that the results from this relative frequency distribution are the same as those obtained earlier from the plain frequency distribution.
In conclusion, by multiplying each unique data value xj by its relative frequency measure P(xj), we have used a relative weighting of each value to produce the arithmetic mean; so, computationally, we need only sum these products xj⋅P(xj) to produce our arithmetic mean. In grouped data of this relative frequency form, we can find the mean by the above process, as described symbolically by the given formula:
Sample Mean from a Relative Frequency Distribution
x̄ = ∑[xj⋅P(xj)]
Once again, if the data in our table had been population data, then we would still perform the same calculation work using the same reasoning:
Population Mean from a Relative Frequency Distribution
μ=∑[xj⋅P(xj)]
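In the same spirit, the probability-weighted sum can be computed directly from a relative frequency table. This minimal Python sketch (the function name is our own) reproduces the ≈ 7.2667 mean from Table 2.8.6:

```python
def mean_from_rel_freq(rel):
    """Mean from a {value: relative frequency} table: sum of x * P(x)."""
    total = sum(rel.values())
    # the relative frequencies should account for 100% of the data
    assert abs(total - 1.0) < 1e-9, "relative frequencies must total 1.0"
    return sum(x * p for x, p in rel.items())

rel = {3: 1/30, 4: 1/30, 5: 3/30, 6: 5/30, 7: 5/30, 8: 7/30, 9: 5/30, 10: 3/30}
print(mean_from_rel_freq(rel))  # ≈ 7.2667
```

Note that no sample size appears anywhere in the computation, matching the observation above that the raw count n is not needed once the proportions are known.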
Text Exercise 2.8.2
Consider the Quiz 1 data from section 2.6, this time given in the relative frequency table format below. Determine the mean from the grouped format and compare it with the earlier results.
Table 2.8.7: Grouped Relative Frequency Distribution of Quiz 1 Data
| Quiz Scores | Relative Frequency |
| 5 | 10% |
| 6 | 30% |
| 7 | 25% |
| 8 | 20% |
| 9 | 15% |
Answer
: To find the mean from this relative frequency table, we follow these steps as established in the discussion above:
1. Sum the relative frequency column P(xj) to check that 100% of the data is accounted for in the table.
2. Compute the column of values xj⋅P(xj) to weight each of the various quiz scores with their relative frequency of occurrence.
3. Sum the column of xj⋅P(xj) values to produce the mean of the data values.
Table 2.8.8: Computation of arithmetic mean using relative frequencies from Table 2.8.7
| xj | P(xj) | xj⋅P(xj) |
---
| 5 | 10% | 0.5 |
| 6 | 30% | 1.8 |
| 7 | 25% | 1.75 |
| 8 | 20% | 1.60 |
| 9 | 15% | 1.35 |
| Totals: | 100% | μ=7.00 |
Again, this is the exact arithmetic mean value computed previously when working with the raw data set or the grouped frequency table set.
We extend these ideas one more step with the concept of "weighted" averages.
Weighted Mean Measures
Sometimes, data values are assigned different weights; for example, course averages are often determined through a "weighting" of the various assessment values. This weighting is usually given as a percentage but can be shown in any chosen relative form (such as a "2" weight for those values that carry twice the weight of any values assigned a "1" weight). As such, we can see how the weights play the same role as the frequency or relative frequency values in the above discussion.
As an example, suppose a school, as is commonly done, uses a four-point scale (A = 4 points, B = 3 points, C = 2 points, D = 1 point, and U = 0 points) to determine grade point average (GPA) weighted by the number of credit hours for the class. A randomly chosen student's recent letter grades and numbers of credits in eight courses were as follows: A with 3 credits, U with 2 credits, C with 4 credits, A with 5 credits, B with 3 credits, B with 3 credits, C with 5 credits, and D with 3 credits. We organize this information in Table 2.8.9 to determine this student's GPA.
Table 2.8.9: Grouped Frequency Distribution
| Letter Grade | Point Value | Credit Hours (Weight) |
| A | 4 | 8 |
| B | 3 | 6 |
| C | 2 | 9 |
| D | 1 | 3 |
| U | 0 | 2 |
Again, we use the above ideas to compute the GPA, a weighted mean.
Table 2.8.10: Computation of GPA as a weighted mean
| Letter Grade | Point Value (xj) | Credit Hours (wj) | xj⋅wj |
| A | 4 | 8 | 32 |
| B | 3 | 6 | 18 |
| C | 2 | 9 | 18 |
| D | 1 | 3 | 3 |
| U | 0 | 2 | 0 |
| | Totals: | ∑wj=28 | ∑(xj⋅wj)=71 |
| | | Weighted Mean: | ∑(xj⋅wj)/∑wj = 71/28 ≈ 2.5357 |
This student had a GPA of 2.5357 for those courses. In data that carries varied weights, we can determine the mean as described symbolically.
Sample Mean from Weighted Data
x̄ = ∑(xj⋅wj)/∑wj
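The weighted-mean formula is a one-liner in code. This Python sketch (the helper name is our own) reproduces the GPA computation from Table 2.8.10:

```python
def weighted_mean(values, weights):
    """Weighted mean: sum(x * w) / sum(w)."""
    return sum(x * w for x, w in zip(values, weights)) / sum(weights)

# Point values A=4, B=3, C=2, D=1, U=0, weighted by credit hours (Table 2.8.10)
points = [4, 3, 2, 1, 0]
credits = [8, 6, 9, 3, 2]
print(round(weighted_mean(points, credits), 4))  # 2.5357
```

With weights of all 1 (or any equal weights) this reduces to the ordinary arithmetic mean, which is why the frequency and relative-frequency computations above are special cases of the same idea.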
As we have seen, we do not always need "raw" data, especially with huge data sets, to formulate many of our descriptive statistics for the data. Grouped data conserves space required to represent data and can often be used to produce many summary statistic measures with minor adjustments to our computational thinking. However, we must know if we have any "loss" in the data representation due to the grouping. All the above data sets were discrete, and each grouping was done on single values, not over interval values. When we group data over interval values, we lose some information in the data. The following optional subsection examines this issue of continuous data.
Note: Level of measurement and careful consideration of results
An astute reader will notice that the four-point grading scale, common in many academic institutions, takes values on an ordinal scale; the arithmetic differences in values do not provide any information other than the underlying ordering of letter grades. If one student earns a 99% while a second student earns 90.1%, both students would be awarded the same letter grade of an A, despite having achieved different levels of performance in the course.
When we look at a semester's average GPA, as we did above, how are we to interpret two students in the same courses having the same average? Just like with the racing example at the end of section 1.6, we cannot say that they performed (earned points), on average, the same. One student could have outperformed the other student on all assessments in each class, yet still be awarded the same letter grades in each class thus earning an equivalent GPA. All we can say is that the students earned, on average, the same letter grades.
Text Exercise 2.8.3
Consider two physics majors, Aaron and Elise, who took Engineering Physics I (five credit hours), Calculus I (five credit hours), and Elements of Statistics (three credit hours) last semester. Aaron earned 85%, 96%, and 98%, respectively, and Elise earned 90%, 91%, and 98%, respectively.
Convert each student's semester grades to the four-point grading scale and then compute the weighted average using the number of credit hours as the weight. This is the standard way four-point scale averages are computed.
Answer
: Aaron would receive a 3 for his physics course and a 4 for each of his other two courses. Since physics and calculus were five-credit-hour courses, those two grades are weighted by 5, and statistics is weighted by 3. We thus have the following computation.

GPA_Aaron = (3⋅5 + 4⋅5 + 4⋅3)/(5 + 5 + 3) = (15 + 20 + 12)/13 = 47/13 ≈ 3.6154

Elise earned an A in each course, thus earning a 4 in each, with the same weights. We thus have the following computation.

GPA_Elise = (4⋅5 + 4⋅5 + 4⋅3)/(5 + 5 + 3) = (20 + 20 + 12)/13 = 52/13 = 4

We thus have that Aaron earned a 3.6154 and Elise earned a 4.0 last semester.
Compute each student's weighted average percentage using the number of credit hours as the weight and then convert the averages to the four-point scale. This is a nonstandard way to compute four-point scale averages.
Answer
: We compute the weighted averages similarly.

GPA_Aaron = (85⋅5 + 96⋅5 + 98⋅3)/(5 + 5 + 3) = (425 + 480 + 294)/13 = 1199/13 ≈ 92.2308%

GPA_Elise = (90⋅5 + 91⋅5 + 98⋅3)/(5 + 5 + 3) = (450 + 455 + 294)/13 = 1199/13 ≈ 92.2308%

In converting the two weighted averages to the four-point scale, both Aaron and Elise would receive a 4.0 for the semester. Despite the students having the same average percentage, the standard way of computation distinguishes between Elise, a 4.0 student, and Aaron, a student who did not earn straight A's. There is only one way to get a 4.0, but there are many ways to get a lower GPA; the four-point scale emphasizes the distinction between straight-A students and everyone else.
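The contrast between the two orders of operation can be checked in a few lines of Python. The grade-cutoff function below assumes the usual 90/80/70/60 letter-grade boundaries; this is consistent with the grades awarded in the exercise but is our assumption, not stated in the text:

```python
def to_points(pct):
    # assumed 90/80/70/60 letter-grade cutoffs (A=4, B=3, C=2, D=1, U=0)
    return 4 if pct >= 90 else 3 if pct >= 80 else 2 if pct >= 70 else 1 if pct >= 60 else 0

def weighted_mean(xs, ws):
    return sum(x * w for x, w in zip(xs, ws)) / sum(ws)

credits = [5, 5, 3]                       # physics, calculus, statistics
aaron, elise = [85, 96, 98], [90, 91, 98]

# Standard order: convert each course percentage to points first, then average.
std = [weighted_mean([to_points(p) for p in s], credits) for s in (aaron, elise)]
# Nonstandard order: average the percentages first, then convert to points.
alt = [to_points(weighted_mean(s, credits)) for s in (aaron, elise)]

print(std)  # Aaron ≈ 3.6154, Elise = 4.0 -- the standard order separates them
print(alt)  # both 4 -- the nonstandard order does not
```

Because rounding to letter grades happens per course in the standard order, Aaron's single B survives into his GPA; averaging first lets his stronger percentages wash it out.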
189537 | https://www.khanacademy.org/math/statistics-probability/probability-library/basic-set-ops/v/relative-complement-or-difference-between-sets | Use of cookies
189538 | https://en.wikipedia.org/wiki/Inverse_demand_function | Published Time: 2006-02-06T07:20:43Z
Inverse demand function - Wikipedia
Mathematical function in economics
In economics, an inverse demand function is the mathematical relationship that expresses price as a function of quantity demanded (it is therefore also known as a price function).
Historically, economists first expressed the price of a good as a function of demand (holding other economic variables, like income, constant), and plotted the price-demand relationship with demand on the x (horizontal) axis (the demand curve). Later, additional variables, like the prices of other goods, came into the analysis, and it became more convenient to express demand as a multivariate function (the demand function): demand = f(price, income, …), so the original demand curve now depicts the inverse demand function price = f⁻¹(demand) with the extra variables held fixed.
Definition
In mathematical terms, if the demand function is demand = f(price), then the inverse demand function is price = f⁻¹(demand). The value of the inverse demand function is the highest price that could be charged and still generate the quantity demanded. This is useful because economists typically place price (P) on the vertical axis and quantity (demand, Q) on the horizontal axis in supply-and-demand diagrams, so it is the inverse demand function that depicts the graphed demand curve in the way the reader expects to see.
The inverse demand function is the same as the average revenue function, since P = AR.
To compute the inverse demand function, simply solve for P from the demand function. For example, if the demand function has the form Q = 240 − 2P, then the inverse demand function would be P = 120 − ½Q. Note that although price is the dependent variable in the inverse demand function, it is still the case that the equation represents how the price determines the quantity demanded, not the reverse.
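As a quick numeric check of the inversion above (a sketch, not part of the article):

```python
# Demand Q = 240 - 2P and its inverse P = 120 - Q/2, from the example above.
def demand(P):
    return 240 - 2 * P

def inverse_demand(Q):
    # Solve Q = 240 - 2P for P:  P = (240 - Q) / 2 = 120 - Q/2
    return 120 - Q / 2

# Round trip: inverting the demanded quantity recovers the original price.
for P in (0, 30, 60, 120):
    assert inverse_demand(demand(P)) == P

print(inverse_demand(100))  # highest price at which 100 units are demanded → 70.0
```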
Relation to marginal revenue
There is a close relationship between any inverse demand function for a linear demand equation and the marginal revenue function. For any linear demand function with an inverse demand equation of the form P = a - bQ, the marginal revenue function has the form MR = a - 2bQ. The inverse linear demand function and the marginal revenue function derived from it have the following characteristics:
Both functions are linear.
The marginal revenue function and inverse demand function have the same y intercept.
The x intercept of the marginal revenue function is one-half the x intercept of the inverse demand function.
The marginal revenue function has twice the slope of the inverse demand function.
The marginal revenue function is below the inverse demand function at every positive quantity.
The inverse demand function can be used to derive the total and marginal revenue functions. Total revenue equals price, P, times quantity, Q, or TR = P×Q. Multiply the inverse demand function by Q to derive the total revenue function: TR = (120 − ½Q)·Q = 120Q − ½Q². The marginal revenue function is the first derivative of the total revenue function, or MR = 120 − Q. Note that in this linear example the MR function has the same y-intercept as the inverse demand function, the x-intercept of the MR function is one-half the value of the demand function's, and the slope of the MR function is twice that of the inverse demand function. This relationship holds true for all linear demand equations. The importance of being able to quickly calculate MR is that the profit-maximizing condition for firms, regardless of market structure, is to produce where marginal revenue equals marginal cost (MC). To derive MC, take the first derivative of the total cost function.
For example, assume cost, C, equals 420 + 60Q + Q². Then MC = 60 + 2Q. Equating MR to MC and solving for Q gives Q = 20. So 20 is the profit-maximizing quantity; to find the profit-maximizing price, simply plug the value of Q into the inverse demand equation and solve for P.
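The whole chain from the running example (inverse demand, MR, MC, and the profit-maximizing quantity) can be sketched numerically:

```python
# Running example: inverse demand P = 120 - Q/2, cost C = 420 + 60Q + Q^2.
def price(Q):
    return 120 - Q / 2          # inverse demand

def marginal_revenue(Q):
    return 120 - Q              # d(TR)/dQ with TR = 120Q - Q^2/2

def marginal_cost(Q):
    return 60 + 2 * Q           # dC/dQ with C = 420 + 60Q + Q^2

# Profit-maximizing output: MR = MC  ->  120 - Q = 60 + 2Q  ->  Q = 20
Q_star = next(Q for Q in range(121) if marginal_revenue(Q) == marginal_cost(Q))
print(Q_star, price(Q_star))    # → 20 110.0
```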
See also
Hicksian demand function
Marshallian demand function
Excess demand function
Supply and demand
Demand
Law of demand
Profit (economics)
Further reading
Ryan, W. J. L.; Pearce, D. W. (1977). "Demand Functions". Price Theory. London: Macmillan Education UK. pp. 31–69. doi:10.1007/978-1-349-17334-1_2. ISBN 978-0-333-17913-0.
This page was last edited on 26 February 2025, at 15:29 (UTC).
189539 | https://vpracingfuels.com/blogs/tech-articles-1/oil-viscosity-explained?srsltid=AfmBOorf0mXapaRtRVbVUpi9pBH2Z8tHZru4OZpGFe4HQaSHjN9KChJ1 | Oil Viscosity Explained: How It Impacts Engine Performance
Oil Viscosity Explained: What Is Oil Viscosity
Explaining Why Oil Viscosity Is Important & What It Means
Estimated reading time 6:00
In this article, we’ll explain oil viscosity in simple terms, covering how it affects your engine and why those numbers on your oil bottle matter. Motor oil is essential for engine performance and longevity, but what exactly is oil viscosity?
The simplest explanation is that oil viscosity measures the oil's resistance to flow, or how easily it pours at different temperatures. It describes how thick or thin an oil is across a given temperature range. Viscosity is actually the most important physical property of motor oil.
To get the full picture, let's look at it in more detail.
Viscosity is what forms the lubricating film between metal parts within the engine. The oil has to be thick enough to form a film that keeps engine parts separated, but thin enough that it doesn't cause excessive energy loss.
The viscosity of oil varies with temperature changes. A thin oil has a lower viscosity and pours easier in cold weather, while a thicker oil has a higher viscosity and pours slower.
For instance, a low-viscosity oil such as an SAE 0W or 5W reduces engine friction and allows engines to start quicker when the weather is cold. A higher viscosity oil like an SAE 40 or 50 does a good job of maintaining film strength at warmer temperatures.
To explain oil viscosity another way, pour a cup of coffee into the sink. What happens? It flows out of the cup, right? Try doing the same thing with a bottle of shampoo. Shampoo pours more slowly out of the bottle because it's more viscous than coffee; it has a higher viscosity.
Olive oil is another example. At room temperature, it pours fast. It thickens and pours slower if you stick it in the refrigerator for a few hours. That brings us to the viscosity index (VI).
Photo courtesy of Project Farm/YouTube
What Is Viscosity Index?
The viscosity index (VI) measures how viscosity changes relative to changes in temperature. Temperature affects viscosity. Remember our olive oil example? In other words, VI measures an oil's ability to resist becoming thinner at higher temperatures and thicker at lower temperatures.
How do they determine an engine oil's viscosity index? They measure its viscosity across a wide range of temperatures. The higher the VI number, the less the oil's viscosity changes with temperature.
Motor oil with a high VI protects the engine better. A full synthetic oil generally has a higher viscosity index than conventional motor oil.
What would the ideal motor oil be? One whose viscosity never changes at any temperature. If you're really into the science of it, the folks at Machinery Lubrication go a little deeper on viscosity index.
What Is the W In Oil?
Exactly what is the W in oil and what does it mean? Have you ever looked at a bottle of SAE 10W-40 or any other multi-grade engine oil and wondered about that? Contrary to what many people believe, the “W” does not stand for “weight” but “winter," and it's important when explaining oil viscosity.
Several decades ago, they made engine oils for cold weather and warm weather. They called these winter grades and summer grades. They were straight-grade oils. An SAE 10W grade was pretty common for the winter months (low-viscosity oil), while a straight SAE 30 grade was most commonly used during the summer months (higher oil viscosity).
Later, they started formulating multi-grade oils like SAE 10W-30, 5W-30, 20W-50, and so on. When you read the viscosity grade on a quart of oil, the number to the left (before the “W”) is its winter rating. It represents oil viscosity measured at lower temperatures. The lower that number, the less the oil will thicken in cold winter weather.
The higher number to the right of the “W” represents oil viscosity at operating temperatures. Higher numbers reflect thicker oil viscosity. This gives the engine better protection for high-heat and high-load applications.
They test oil at a lower temperature to determine the first number (SAE 0W, 5W, 10W, etc.). They test it at a higher temperature to determine the second number (SAE 30, 40, 50, etc.).
For instance, SAE 5W-30 oil flows better at colder temperatures than 10W-40 oil.
But, SAE 10W-40 oil is thicker. It provides better protection in warmer weather compared to SAE 5W-30.
Multi-viscosity engine oils don't behave the same at all temperatures; their behavior depends on the operating temperature. A multi-viscosity oil provides good flow in cold weather and dependable protection at operating temperatures in warm weather.
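The two ratings just described can be read straight off the label. A small sketch (the `parse_sae_grade` helper is our own illustration, not an industry API):

```python
# Hypothetical helper (not an industry API): split an SAE multi-grade label
# like "10W-40" into its winter rating and operating-temperature rating.
def parse_sae_grade(label):
    winter, hot = label.upper().split("-")
    return int(winter.rstrip("W")), int(hot)

assert parse_sae_grade("5W-30") == (5, 30)
assert parse_sae_grade("10W-40") == (10, 40)
# Lower winter number = better cold flow: 5W-30 flows better cold than 10W-40.
assert parse_sae_grade("5W-30")[0] < parse_sae_grade("10W-40")[0]
```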
Is Higher Viscosity Oil Better?
Is a higher (thicker) viscosity oil better for your engine if you live in a warm climate? You might think a thicker oil would offer better protection. Well, yes, and no.
An oil’s viscosity is critical. It keeps metal parts separated within the engine. The proper viscosity grade does that. But, you don’t want an oil that’s too thick because it creates too much frictional drag within the oil itself. This creates extra heat. The added heat causes the oil to thicken (oxidation). It can also rob engine performance by reducing horsepower.
Types of Motor Oil: What Motor Oil Should I Use?
As we’ve noted, there are many types of motor oil with various oil viscosities, like VP’s line of engine oils. So, what kind of motor oil should you use?
The answer depends on a few things. For our subject explaining oil viscosity, we’ll stick with grades of oil.
The grade of oil you should use depends on engine clearances, operating conditions, and the climate. Where do you live? If you live in a cold environment, use something like an SAE 0W-20, 5W-20, or 5W-30. These are examples of lower oil viscosity.
What if you live in a place like Phoenix, Arizona? In a warm climate, you want an oil that offers more high-temperature protection. In that case, an SAE 10W-30, 10W-40, 15W-50, or 20W-50 would be more suitable. These four are examples of higher oil viscosity.
Photo courtesy of USLube.com
Because of government fuel economy requirements, newer cars call for lower-viscosity oil. Also, they now build car and truck engines with tighter clearances. These newer engines don’t need a higher oil viscosity like an SAE 10W-40 or 20W-50.
If you have an engine with a tighter clearance and use an oil with a higher viscosity range, the oil will cause frictional drag. That’s why it’s essential to follow the OEM recommendations.
Can You Mix Motor Oil Brands & Oil Viscosities?
In most circumstances, you can mix different viscosity oils. For example, let’s say you’re using VP Street Legal SAE 5W-20, but you only have Street Legal SAE 10W-30 sitting on your shelf. It wouldn’t hurt your engine if you mixed these different viscosity oils. It would have a slight effect on the viscosity, but in this case those oils have the same additive systems.
If you’re thinking of mixing different motor oil brands or viscosities, check the label. Make sure the oils are miscible (able to mix) with each other.
Refer to your owner’s manual when in doubt. It will have a viscosity chart that’s based on ambient temperatures. The chart specifies the ranges that you should follow.
Now that we've explained oil viscosity, be sure to check out part two of our motor oil series that discusses the differences between synthetic oil vs. conventional oil vs. synthetic blend.
Additionally, our friends at EngineLabs have a great Q&A segment covering a wide array of topics, including oil viscosity.
189540 | https://pubmed.ncbi.nlm.nih.gov/9690220/ | Plasma alpha 2 macroglobulin is increased in nephrotic patients as a result of increased synthesis alone - PubMed
Kidney Int. 1998 Aug;54(2):530-5. doi: 10.1046/j.1523-1755.1998.00018.x.
Plasma alpha 2 macroglobulin is increased in nephrotic patients as a result of increased synthesis alone
M G de Sain-van der Velden 1, T J Rabelink, D J Reijngoud, M M Gadellaa, H A Voorbij, F Stellaard, G A Kaysen
Affiliation
1 Department of Nephrology, University Hospital Utrecht, The Netherlands. M.G.deSain@lab.azu.nl
PMID: 9690220
DOI: 10.1046/j.1523-1755.1998.00018.x
Free article
Abstract
Background: alpha 2 Macroglobulin (alpha 2M), a protease inhibitor, is often increased in plasma of patients with the nephrotic syndrome. Although it has been speculated that synthesis is increased, no direct measurements have been performed.
Methods: alpha 2M synthesis in both normal subjects (N = 4) and nephrotic patients (N = 7) was measured using endogenous labeling with 13C valine in order to establish the mechanism of the increased plasma level in the nephrotic syndrome and the relationship between alpha 2M synthesis rate and plasma concentration over a wide range of plasma concentration values. A primed (15 µmol/kg)/continuous (15 µmol/kg/hr) infusion was administered for six hours. Blood samples were collected at different intervals, and at each time point alpha 2M was isolated from EDTA plasma using immunoprecipitation and SDS-polyacrylamide gel electrophoresis (PAGE). Care was taken to ensure that the alpha 2M used for combustion had not been subjected to proteolysis. The rate of appearance of 13C valine derived from the isolated alpha 2M was measured by gas chromatography combustion isotope ratio mass spectrometry.
Results: Plasma alpha 2M was significantly elevated in nephrotic subjects (3.13 +/- 0.33 g/liter) versus controls (1.64 +/- 0.15 g/liter; P = 0.012). The alpha 2M fractional synthesis rate [(FSR), which is equal to fractional catabolic rate (FCR) in steady state] was the same in the two groups: 2.70 +/- 0.18%/day for the nephrotic patients versus controls 2.74 +/- 0.21%/day. However, the alpha 2M absolute synthesis rate (ASR) was significantly (P = 0.012) increased in the patients (3.69 +/- 0.33 mg/kg/day) versus controls (2.06 +/- 0.35 mg/kg/day). Plasma alpha 2M concentration correlated directly to its ASR (r2 = 0.821; P = 0.0001; N = 11).
Conclusions: Increased plasma alpha 2M concentration in nephrotic patients is therefore a result of increased synthesis alone.
Publication types: Research Support, Non-U.S. Gov't; Research Support, U.S. Gov't, Non-P.H.S.
MeSH terms: Adult; Female; Humans; Male; Nephrotic Syndrome / blood; Nephrotic Syndrome / metabolism; alpha-Macroglobulins / analysis; alpha-Macroglobulins / biosynthesis
Substances: alpha-Macroglobulins
|
189541 | https://www.healio.com/news/optometry/20250418/working-distance-on-paper-vs-screen-not-a-factor-in-eyestrain-in-study | Working distance on paper vs. screen not a factor in eyestrain in study
By Justin Cooper
Fact checked by Christine Klimanskis, ELS
April 18, 2025
2 min read
Key takeaways:
Digital eyestrain symptoms were greatest with a cognitively demanding task on a screen and least with a less demanding task on paper.
Working distance decreased over time across all tasks.
Working distance decreased significantly over the course of 30-minute reading tasks on both paper and a screen, but digital eyestrain symptoms were instead linked to cognitive demand and the mode of presentation, according to a study.
“[Digital eyestrain], also known as computer vision syndrome, has been associated with a wide range of symptoms, with no definitive cause identified to date,” Elianna Sharvit, OD, MS, and Mark Rosenfield, MCOptom, PhD, FAAO, Dipl AAO, both of SUNY College of Optometry, wrote in Optometry and Vision Science.
Image: Adobe Stock
“Several investigations have reported that symptoms of eyestrain are worse when comparing the same reading or near task performed on paper vs. a digital screen,” they wrote. “Cognitive demand or load, otherwise known as the mental difficulty of a task, has also been associated with symptoms of digital eyestrain, although the mechanism linking the two has not been elucidated.”
Sharvit and Rosenfield conducted a study to learn more about how digital eyestrain symptoms correlated with working distance, cognitive demand and use of paper vs. a tablet screen. They recruited 30 students from SUNY College of Optometry (mean age, 24 years; 87% women), each of whom completed four 30-minute reading tasks:
Reading random words from an Apple iPad and identifying those beginning with a specific letter (considered to be “cognitively demanding”).
Reading a children’s story (Alice’s Adventures in Wonderland) on the same iPad (considered to be “less cognitively demanding”).
The same task as the first but using printed paper instead of an iPad.
The same task as the second but using printed paper instead of an iPad.
In each task, the iPad or paper was placed on a clipboard “so that the overall size and weight of the material would be as similar as possible,” the researchers wrote. The study participants could hold the reading material at any comfortable distance, but they were instructed to hold the clipboard upright and keep their chin pointed at the reading material.
Working distance was measured with a Clouclip device mounted to participants’ spectacles or spectacles with no lenses for those who did not wear glasses. The participants completed a symptom questionnaire immediately before and after each task.
All four tasks led to statistically significant increases in patient-reported digital eyestrain symptoms. However, the increase in symptoms was greatest with the cognitively demanding task on an iPad (median score change: 11) and lowest with the less cognitively demanding task on paper (median score change: 3.5), according to an analysis of variance.
When averaged across all four tasks, working distance decreased significantly within the first 10 minutes and then remained stable. However, working distance led to no significant change in symptom score in either univariate or multivariate mixed-effect linear regression models.
The findings indicate “that the increase in symptoms seen with more cognitively demanding tasks on the tablet computer was not related to a change in working distance,” Sharvit and Rosenfield wrote.
“Future investigations should seek to evaluate why cognitively demanding tasks performed on a tablet computer induced more subjective symptoms of digital eyestrain,” they said. “To explore this, objective testing of visual function (such as accommodative lag and ocular alignment), as well as assessment of the anterior surface of the eye and tear film, both during and after the task, could be used to determine whether the symptoms are truly visual in origin or rather if the perceived difficulty of the task is related to the development of symptoms on a psychological basis.”
Sources/Disclosures
Source:
Sharvit E, et al. Optom Vis Sci . 2025;doi:10.1097/OPX.0000000000002238.
Disclosures: The authors report no relevant financial disclosures.
|
189542 | https://brainly.in/question/25009172 | the value of (a+b)2-(a-b)2 is - Brainly.in
ayushimaurya881
07.10.2020
Math
Primary School
answered • expert verified
The value of (a+b)² - (a-b)² is
Expert-Verified Answer
15 people found it helpful
pulakmath007
(a + b)² - (a - b)² = 4ab
Given :
The expression (a + b)² - (a - b)²
To find :
The value of the expression
Formula:
(a + b)² = a² + 2ab + b²
(a - b)² = a² - 2ab + b²
Solution :
Step 1 of 2 :
Write down the given expression
The given expression is
(a + b)² - (a - b)²
Step 2 of 2 :
Simplify the given expression
We use the formulas
(a + b)² = a² + 2ab + b²
(a - b)² = a² - 2ab + b²
Thus the given expression
= (a + b)² - (a - b)²
= ( a² + 2ab + b² ) - ( a² - 2ab + b² )
= a² + 2ab + b² - a² + 2ab - b²
= 4ab
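As a quick sanity check, the identity can be verified numerically for a few sample values (an illustrative sketch, not part of the original answer):

```python
# Spot-check the identity (a+b)^2 - (a-b)^2 = 4ab on sample integers,
# including negative values and zero.
for a in (-3, 0, 2, 7):
    for b in (-5, 1, 4):
        assert (a + b) ** 2 - (a - b) ** 2 == 4 * a * b
print("identity holds for all sampled values")
```

The assertions pass for every sampled pair, as the algebraic expansion above guarantees.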
Loved by our community
32 people found it helpful
KrishNarsaria
Step-by-step explanation:
(a+b)²-(a-b)²
a²+b²+2ab-a²-b²+2ab
4ab Ans.
|
189543 | https://arxiv.org/pdf/1903.03590 | arXiv:1903.03590v1 [math.OC] 8 Mar 2019
The Minkowski Difference for Convex Polyhedra and Some its Applications ∗
Z.R. Gabidullina
Abstract
The aim of the paper is to develop a unified algebraical approach to representing the Minkowski difference for convex polyhedra. Namely, exact analytical formulas of the Minkowski difference for convex polyhedra with different representations are proposed. We study the cases when both operands under the Minkowski difference operation simultaneously have a vertex or a half-space representation. We also focus on the description of the Minkowski difference for such a mixed case where the first operand has a linear constraint structure and the second one is expressible as the convex hull of a finite collection of some given points. Unlike the widespread geometric approach, which mostly considers two-dimensional or three-dimensional spaces, we investigate the objects in finite-dimensional spaces of arbitrary dimensionality.
keywords: Minkowski difference, convex polyhedron, vertex repre-sentation, half-space representation, polyhedra, distance, projection, linear separability criterion, variational inequality
MSC classes: 90C30, 65K05
1 Introduction
The Minkowski difference of sets having different configurations is basic to the treatment of a large class of problems occurring in many interesting applications from a variety of areas, especially in problems of engineering design , ; in data classification , ,
∗
Kazan Federal University, e-mail: zgabid@mail.ru, zulfiya.gabidullina@kpfu.ru
; image analysis and processing , ; motion planning for robots , ; real-time collision detection ; computer graphics , ; and many other front-line fields. The Minkowski difference operation is useful as an investigative as well as a conceptual tool. Unfortunately, it is a widely known fact that there exist serious difficulties related to the implementation of the Minkowski difference for particular formulations of sets. They represent the basic impediment to making use of the Minkowski difference operation in various practical applications. For finite-dimensional spaces of arbitrary dimensionality, an exact analytical representation of the Minkowski difference for convex polyhedra with different as well as identical configurations is stated here for the first time as a whole.

To be adequate for a number of mathematical purposes, different approaches (with cross-fertilization of ideas) to such a basic concept of analysis as the Minkowski difference of sets are required. The geometric viewpoint has the leading role among them. Motivation for the development of this geometric approach has basically come from engineering design , computational geometry , , , collision detection , , robotics , and many other subjects. The nature of these subjects dictates the sufficiency of considering only two-dimensional or three-dimensional spaces (see, for instance, , , ).

Unlike the geometric approach, we study the objects in spaces of arbitrary dimensionality and develop an algebraic approach. Namely, we present the exact analytical representation of the Minkowski difference for convex polyhedra given in different ways. More precisely, we investigate the following cases, where:
• both operands under the Minkowski difference operation are determined in the same way, as the convex hull of a finite collection of some given points,
• the operands have different representations (namely, the first operand is given by a linear constraint system, and the second one is expressible in terms of the convex hull),
• both operands have the same representation as the intersection of closed half-spaces.

Let us note that, thanks to the obtained results, it becomes possible to treat relevant problems such as:
• the problem of linear separation of convex polyhedra in the Euclidean space,
• variational inequality problems,
• problem of finding the distance between the convex polyhedra by projecting the origin of the Euclidean space onto a convex polyhedron,
• problem of finding the nearest points of convex polyhedra.
2 Definitions and Preliminaries
This section includes a brief description of notation and definitions of the main concepts used in the present paper. We use standard notation that is certainly well known to all readers. Nevertheless, let us briefly describe some of it. As usual, ‖ · ‖ stands for the Euclidean norm of a vector in Rn, 〈· , ·〉 denotes the standard scalar product of two vectors, and conv {·} corresponds to the convex hull of some collection of given vectors. Let Bd (·) and int (·) denote the boundary and interior of a set Φ̃, respectively.

In the context of applications of the concept of the Minkowski difference, we further need to recall some key definitions and theorems which are closely related to the linear separability property of sets. In the theory of convex sets and nonlinear programming, the study of the linear separation problem is a highly important topic with a large literature (see, for instance, , , , , , , , etc.).
Definition 2.1 (separating hyperplane) (see p. 198) The hyperplane

π(c, γ) := {x ∈ Rn : 〈c, x〉 = γ}

with normal vector c ≠ 0 separates the sets A and B from the Euclidean space Rn iff 〈c, a〉 ≥ γ for all a ∈ A and 〈c, b〉 ≤ γ for all b ∈ B, i.e., iff it holds that:

sup_{b∈B} 〈c, b〉 ≤ γ ≤ inf_{a∈A} 〈c, a〉.
Definition 2.2 (strong separability) (see p. 198) Two sets A and B are said to be strongly separable iff there exists some vector c ∈ Rn such that:

sup_{b∈B} 〈c, b〉 < inf_{a∈A} 〈c, a〉.

If it holds that 〈c, b〉 < 〈c, a〉 for all a ∈ A, b ∈ B, then the sets A and B are said to be strictly separable.
The next theorem gives a rigorous justification of the fact that the problem of strong separation of two arbitrary sets A, B ⊂ Rn can be reformulated as the problem of strongly separating the origin of Rn from the Minkowski difference A − B = {a − b : a ∈ A, b ∈ B}, and vice versa.
Theorem 2.1 (strong separation) (p. 150) For the sets A and B to be strongly separable, it is necessary and sufficient that the origin of Rn be strongly separable from the set A − B.

The previous theorem immediately implies that two sets A and B are not strongly separable if and only if the set A − B is not strongly separable from the origin of Rn. Analogous results are certainly well known for the problems of non-strong linear separation of the considered sets.
Theorem 2.2 (linear separation) (p. 151) For two sets A and B to be linearly separable, it is necessary and sufficient that the origin of Rn be linearly separable from the set A − B.

The preceding theorem obviously implies that the two sets A and B are linearly inseparable if and only if the origin of Rn is linearly inseparable from the set A − B.
We note that if the sets A and B are convex, then the Minkowski difference A − B is convex as well (see, for example, p. 162). It is not hard to prove that if A and B are both bounded sets, then the set A − B is also bounded. Lastly, under the condition that at least one of the sets A and B is bounded, closedness of both sets under the Minkowski difference operation implies closedness of A − B. The proof of this assertion may be found, for instance, on p. 201. Another question can now be addressed. What conditions on sets A, B ensure that they are linearly (strongly or not) separable from each other? As already noted above, we reduce the problem of linear separation of A and B to the problem of separating the origin of Rn from A − B. Questions about separability tests or criteria for sets can actually be answered from the platform of the Minkowski difference operation. It is widely known that the Minkowski difference is often interpreted as the translational configuration space obstacle. One says that A − B represents the set of translations of B that bring it into interference with A. An additional nice property of the Minkowski difference consists in the fact that for any objects A and B it holds that

dist(A, B) = dist(0, A − B),

where dist(A, B) = inf{‖a − b‖ : a ∈ A, b ∈ B} denotes the distance between A and B. Clearly, dist(0, A − B) = ‖PA−B(0)‖, where PA−B(0) denotes the projection of the origin of Rn onto A − B. As is known, two convex objects collide if and only if their Minkowski difference contains the origin. For the origin of Rn and A − B, the next results of this subsection give a sort of linear separation principle. First we briefly recall some other relevant notation and definitions about the cones of generalized strong and strict support vectors. The following sets have been defined:
W˜Φ := {w ∈ Rn : inf_{x∈˜Φ} 〈w, x〉 ≥ 0},
V˜Φ := {v ∈ Rn : inf_{x∈˜Φ} 〈v, x〉 > 0},
Q˜Φ := {q ∈ Rn : 〈q, x〉 > 0, ∀x ∈ ˜Φ},
Ω˜Φ := {y ∈ Rn : 〈y, x〉 ≥ ‖y‖², ∀x ∈ ˜Φ},
E(Ω˜Φ) := {x ∈ Rn : x = λy, λ ≥ 0, y ∈ Ω˜Φ},

where ˜Φ is a nonempty subset of Rn. The set W˜Φ \ {0} is called the cone of generalized support vectors (or, briefly, GSVs) of the set ˜Φ. The notations V˜Φ and Q˜Φ are used for the cones of generalized strong and strict support vectors of the set ˜Φ, respectively. The main properties of the mentioned cones and their relationships with other well-known ones have been investigated in earlier work; nevertheless, recalling some main properties of GSVs seems vital for understanding the results of the present paper:

W˜Φ = Wcl(˜Φ) = Wco(cl(˜Φ)), V˜Φ = Vco(˜Φ) = Vcl(co(˜Φ)),
V˜Φ ⊆ Q˜Φ ⊆ W˜Φ, V˜Φ = E(Ω˜Φ) \ {0},
V˜Φ = Vcl(˜Φ) = Vco(cl(˜Φ)), W˜Φ = Wco(˜Φ) = Wcl(co(˜Φ)), Q˜Φ = Qco(˜Φ).
The conditions that are necessary and sufficient for the emptiness of the cones V˜Φ, Q˜Φ, and W˜Φ \ {0} were established in earlier work. Let

t˜Φ(c) := inf_{x∈˜Φ} 〈c, x〉, ˜Φ ⊂ Rn.

For convex polyhedra possessing the compactness property, due to the continuity of the linear function 〈c, x〉, we can state that 〈c, x〉 attains its minimum on ˜Φ. So, for these cases, we can rewrite t˜Φ(c) := min_{x∈˜Φ} 〈c, x〉. Further, we note that the following problem

max_{‖c‖=1} t˜Φ(c) (1)

is solvable. For the proof of this fact, the interested reader is directed to p. 149. Let the vector c∗ ∈ Rn denote an optimizer of problem (1). The following three theorems describe a linear separability criterion for a pair of objects such as the origin and some nonempty set of Rn. Based on the optimal value of the objective function of (1), the above-mentioned criterion allows us to recognize these objects as strongly separable, non-strongly linearly separable, or inseparable.
Theorem 2.3 (strong separability criterion) (p. 149) For the origin of Rn to be strongly separable from the nonempty set ˜Φ ⊂ Rn, it is necessary and sufficient to have t˜Φ(c∗) > 0.

Theorem 2.4 (non-strong linear separability criterion) (p. 150) For the origin of Rn to be non-strongly linearly separable from the nonempty set ˜Φ ⊂ Rn, it is necessary and sufficient to have t˜Φ(c∗) = 0.

Theorem 2.5 (linear inseparability criterion) (p. 150) For the origin of Rn to be linearly inseparable from the nonempty set ˜Φ ⊂ Rn, it is necessary and sufficient to have t˜Φ(c∗) < 0.
The following theorems allow us to detect which of the cones of generalized support vectors (V˜Φ, W˜Φ \ {0}, and Q˜Φ) are empty and which are not.
Theorem 2.6 (emptiness of the cone of generalized strong support vectors) If ˜Φ is a nonempty convex and closed subset of Rn, then
V˜Φ = ∅ ⇔ 0 ∈ ˜Φ.
The previous theorem represents a particular case of Theorem 3.3 from the cited work (for the proof, see p. 703). For a convex and closed set ˜Φ, due to V˜Φ = E(Ω˜Φ) \ {0}, Theorem 2.6 obviously yields the implication Ω˜Φ = {0} ⇔ 0 ∈ ˜Φ.
Theorem 2.7 (emptiness of the cone of generalized support vectors) If ˜Φ ⊂ Rn is a nonempty convex set, then

W˜Φ = {0} ⇔ 0 ∈ int(˜Φ).

The preceding theorem is a particular case of Theorem 3.2 from the cited work (for the proof, see p. 703).
Theorem 2.8 (simultaneous degeneracy of the cone of generalized strict support vectors and non-degeneracy of the cone of GSVs) If ˜Φ ⊂ Rn is a nonempty convex and closed set, then:

W˜Φ ≠ {0} & Q˜Φ = ∅ ⇔ 0 ∈ Bd(˜Φ).

The assertion of the preceding theorem follows from Lemmas 3.13–3.14 of the cited work (see p. 702). Let us note that for a set ˜Φ having the compactness property, V˜Φ = Q˜Φ holds. According to Theorems 2.3 and 2.6, we obviously have
0 ∉ ˜Φ ⇔ t˜Φ(c∗) > 0.
From Theorems 2.5, 2.7, it immediately follows that
0 ∈ int ( ˜Φ) ⇔ t˜Φ(c∗) < 0.
Due to Theorems 2.4, 2.8, there holds the following implication:
0 ∈ Bd ( ˜Φ) ⇔ t˜Φ(c∗) = 0 .
Thus, for the origin of Rn, the linear separability criterion provides a certificate of being an exterior, interior, or boundary point of ˜Φ.
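As a small numerical illustration of this certificate, the following Python sketch approximates problem (1) for a planar polytope given by its vertices (so that t˜Φ(c) reduces to the minimum of 〈c, zi〉 over the vertices) by scanning a grid of unit directions, and then classifies the origin by the sign of the optimal value as in Theorems 2.3–2.5. The sample triangles, the grid size, and the tolerance are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

def t_value(c, V):
    # For a polytope conv{v_i}, the linear function <c, x> attains its
    # minimum at a vertex, so t_Phi(c) = min_i <c, v_i>.
    return np.min(V @ c)

def classify_origin(V, n_dirs=10000, tol=1e-6):
    # Crude 2-D stand-in for problem (1): approximate max_{||c||=1} t_Phi(c)
    # by scanning unit directions, then read off the sign (Theorems 2.3-2.5).
    angles = np.linspace(0.0, 2.0 * np.pi, n_dirs, endpoint=False)
    t_star = max(t_value(np.array([np.cos(a), np.sin(a)]), V) for a in angles)
    if t_star > tol:
        return "exterior"    # origin strongly separable from the polytope
    if t_star < -tol:
        return "interior"    # origin linearly inseparable from the polytope
    return "boundary"        # only non-strong separation is possible

# Hypothetical sample triangles (rows are vertices):
print(classify_origin(np.array([[1.0, 1.0], [2.0, 1.0], [1.0, 2.0]])))     # exterior
print(classify_origin(np.array([[-1.0, -1.0], [1.0, -1.0], [0.0, 1.0]])))  # interior
print(classify_origin(np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])))    # boundary
```

The direction scan is only practical in very low dimension; it merely makes the sign-based certificate tangible.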
3 Binary Operation of Minkowski Difference

3.1 Both Operands with a Vertex Representation
For many applications nowadays, the sets expressible as the convex hull of finitely many points from the Euclidean space Rn are especially important. They are truly ubiquitous structures playing a fundamental role not only in variational analysis, computational geometry, and optimization, but also in data classification, image analysis and processing, motion planning for robots, collision detection, and many other front-line areas. This subsection is devoted to representing the Minkowski difference when both convex polyhedra have the above-mentioned configuration. Further, let us be given two polyhedra of Rn:
L := conv{zi}_{i∈I}, M := conv{pj}_{j∈J},

where I = {1, 2, ..., l}, J = {1, 2, ..., m}, i.e.

L = {z ∈ Rn : z = ∑_{i∈I} αi zi, ∑_{i∈I} αi = 1, αi ≥ 0, i ∈ I},
M = {p ∈ Rn : p = ∑_{j∈J} βj pj, ∑_{j∈J} βj = 1, βj ≥ 0, j ∈ J}.
Obviously, L and M are nonempty, convex, and compact sets. According to set algebra, the Minkowski difference of these polyhedra is defined as the set of pairwise differences of points from L and M, namely

L − M = {z − p : z ∈ L, p ∈ M}.

The following theorem rigorously justifies that the point set L − M coincides with the convex hull of the vectors zi − pj, i ∈ I, j ∈ J.
Theorem 3.1 (Minkowski difference for both polyhedra given as convex hulls) (p. 552) L − M = conv{zi − pj}_{i∈I, j∈J}.
Proof. Part I. At first, we establish the inclusion

conv{zi − pj}_{i∈I, j∈J} ⊆ L − M.

By the definition of the convex hull, for any l ∈ conv{zi − pj}_{i∈I, j∈J}, there can be found real numbers γij ≥ 0 with ∑_{i∈I} ∑_{j∈J} γij = 1 such that:

l = ∑_{i∈I} ∑_{j∈J} γij (zi − pj) = ∑_{i∈I} (∑_{j∈J} γij) zi − ∑_{j∈J} (∑_{i∈I} γij) pj.

Setting αi := ∑_{j∈J} γij and βj := ∑_{i∈I} γij, one can easily see that the coefficients αi and βj satisfy the conditions αi ≥ 0, ∀i ∈ I, ∑_{i∈I} αi = 1, βj ≥ 0, ∀j ∈ J, ∑_{j∈J} βj = 1. Consequently, any vector l ∈ conv{zi − pj}_{i∈I, j∈J} satisfies the following equation:

l = ∑_{i∈I} αi zi − ∑_{j∈J} βj pj = z − p, z ∈ L, p ∈ M, i.e. l ∈ L − M.
Part II. Now let us justify the backward inclusion:

L − M ⊆ conv{zi − pj}_{i∈I, j∈J}.

By the definition of the set L − M, taking some vector l ∈ L − M, we have that l = z − p, z ∈ L, p ∈ M. According to the construction of the sets L and M, there can be found αi ≥ 0 ∀i ∈ I, βj ≥ 0 ∀j ∈ J, ∑_{i∈I} αi = 1, ∑_{j∈J} βj = 1 such that z = ∑_{i∈I} αi zi and p = ∑_{j∈J} βj pj. Then

l = ∑_{i∈I} αi zi − ∑_{j∈J} βj pj = ∑_{j∈J} βj (∑_{i∈I} αi zi) − ∑_{i∈I} αi (∑_{j∈J} βj pj) = ∑_{i∈I} ∑_{j∈J} αi βj (zi − pj) = ∑_{i∈I} ∑_{j∈J} γij (zi − pj),

where γij := αi βj ≥ 0 and ∑_{i∈I} ∑_{j∈J} γij = 1. Thus, the following inclusion is true: L − M ⊆ conv{zi − pj}_{i∈I, j∈J}. Comparing this with the earlier proved inclusion conv{zi − pj}_{i∈I, j∈J} ⊆ L − M completes the proof.
Since, due to Theorem 3.1, L − M has a representation as the convex hull of a finite number (l · m) of points zi − pj, i ∈ I, j ∈ J, it is characterized as a nonempty, compact, and convex point set. Consequently, in this case, the Minkowski difference operation preserves compactness and convexity. For practical applications, the question of how to decrease the number of points forming L − M is extremely important. For a low enough dimension of Rn, we can make this number of points as small as possible by means of some software package. In the case of two-dimensional or three-dimensional space, the function convhull in MATLAB, for instance, not only computes and returns the convex hull of a given collection of points, but also provides the option of removing vertices that do not contribute to the area or volume of the convex hull. Moreover, this package allows one to visualize the output of convhull with the help of the function plot in 2-D; the function trisurf or trimesh provides the possibility of plotting the output of convhull in 3-D. Let us note that in four or more dimensions, one can efficiently use, for instance, the Quickhull algorithm for computing convex hulls. This method is realized in MATLAB by means of the function convhulln, which returns the indices of the input points that form the facets of the convex hull. Consequently, to compute, for instance, the distance between L and M, or to linearly separate these sets, one should first select those of the (l · m) points of the type zi − pj, i ∈ I, j ∈ J that actually form the facets of L − M. Luckily, the inner points of the Minkowski difference L − M may be ignored. As a result of taking into account only the points that are vertices of the convex hull, the collection of points zi − pj, i ∈ I, j ∈ J may be considerably reduced, and it is easier to deal with such a reduced family of input points. The expected time complexity of the Quickhull algorithm depends on different parameters such as the dimension of the space, the number of input and processed points, and the maximum number of facets.
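The same vertex reduction can be sketched outside MATLAB as well; the snippet below uses SciPy's Quickhull binding (scipy.spatial.ConvexHull) on two small hypothetical triangles, so the data are illustrative assumptions rather than anything from the paper. Of the l · m = 9 pairwise differences, only 6 turn out to be true vertices of L − M.

```python
import numpy as np
from scipy.spatial import ConvexHull  # Quickhull, analogous to MATLAB's convhull/convhulln

# Hypothetical planar sample data: L = conv{z_i}, M = conv{p_j}.
Z = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])   # vertices z_i of L
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # vertices p_j of M

# All l*m pairwise differences z_i - p_j; by Theorem 3.1 their convex hull is L - M.
D = (Z[:, None, :] - P[None, :, :]).reshape(-1, 2)   # 9 candidate points

hull = ConvexHull(D)
vertices = D[hull.vertices]          # keep only the points that really form the hull
print(len(D), "->", len(vertices))   # 9 candidate points reduce to 6 true vertices
```

Inner points of the difference are discarded automatically, which is exactly the reduction discussed above.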
3.2 Binary Mixture of Sets with Different Constraint Structure
In this subsection, we focus on a precise representation of the Minkowski difference for a common case where the first point set of the pair under the operation has a general constraint structure (i.e., not necessarily linear constraints), and the second operand is identified by an abstract constraint. Such an identification of the sets is crucial for the applications considered below. Furthermore, the abstract constraint specification is a really usable form, since it does not restrict the variations on how the corresponding point set might be defined; the presence of the abstract constraint makes our approach very flexible, since constraints might not even be present. The purpose of considering such a more general setting in this subsection is twofold: to recall some fundamental results from previous research and to explain how they can be applied to the topic of our interest. In a wide range of practical applications, the set of feasible solutions Φ is representable by a system of inequality constraints in the general form:

Φ = {x ∈ X : fk(x) ≤ bk, k ∈ K}, K = {1, 2, ..., r}, (2)

where fk(x), k ∈ K are arbitrary real-valued quasi-convex functions defined on a convex set X ⊆ Rn. We recall that a function f(x) is said to be quasi-convex on a convex set X if and only if the lower level set

[Sd, f]Lo X := {x ∈ X : f(x) ≤ d}

is a convex set for all d ∈ R1. Therefore, as an intersection of the convex sets [Sbk, fk]Lo X, k ∈ K, the set Φ is convex, too.
Theorem 3.2 (closedness of the lower level set) Let X be a closed set of Rn. Then f(x) is a lower semicontinuous function over X if and only if the lower level set [Sd, f]Lo X is closed for all d ∈ R1.

The interested reader can find the proof of the theorem, for instance, on p. 81. The previous theorem implies that if fk(x), k ∈ K are lower semicontinuous functions over a closed set X, then the lower level sets [Sbk, fk]Lo X are closed for all bk, k ∈ K. Consequently, being an intersection of the closed sets [Sbk, fk]Lo X, k ∈ K, the set Φ is closed as well. Next, we present the analytical description of the Minkowski difference of two sets Φ and Ψ, where Φ is given by (2) and Ψ is an arbitrarily defined set.
Theorem 3.3 (Minkowski difference when the two sets under the operation are given by a constraint system and an abstract constraint, respectively) (p. 716) Let there be given an arbitrary nonempty set Ψ ⊆ Rn, and let the set Φ ≠ ∅ be defined by (2) with X = Rn. Then Φ − Ψ = Φ1, where

Φ1 = {x ∈ Rn : fk(x + y) ≤ bk, k ∈ K, y ∈ Ψ}, K = {1, 2, ..., r},
Φ − Ψ = {z ∈ Rn : z = x − y, x ∈ Φ, y ∈ Ψ}.
Proof. Part I. First we select arbitrarily some fixed point x̄ from Φ. We further consider x̃(y) = x̄ − y for all y ∈ Ψ. It is clear that fk(x̃(y) + y) = fk(x̄) ≤ bk, ∀k ∈ K, ∀y ∈ Ψ, i.e. x̃(y) ∈ Φ1. Therefore, we have

x̄ ∈ Φ ⇒ x̃(y) = x̄ − y ∈ Φ1, ∀y ∈ Ψ.

Since x̄ ∈ Φ was chosen arbitrarily, this proves the inclusion Φ − Ψ ⊆ Φ1.

Part II. Conversely, we now take an arbitrary fixed point t̄ ∈ Φ1 and, for all y ∈ Ψ, check whether the points t̂(y) = t̄ + y belong to Φ. It can be observed that fk(t̂(y)) = fk(t̄ + y) ≤ bk, ∀k ∈ K, ∀y ∈ Ψ, i.e. t̂(y) ∈ Φ. In other words, it holds that t̄ ∈ Φ − y, ∀y ∈ Ψ. Thanks to the arbitrary choice of t̄ ∈ Φ1, we get that Φ1 ⊆ Φ − Ψ.

We can now finalize the proof of the theorem: taking into account the forward and backward inclusions, we have the claimed equality Φ1 = Φ − Ψ.
An abstract constraint y ∈ Ψ is also very convenient for representing the Minkowski difference in the case of a more complicated nature of the second point set of the pair under the operation. The previous theorem was presented for the first time in earlier work, but it is brought out here along with the prime role it plays in our further research. In actual practice, strict inequalities are rarely seen in constraints; however, if the nonempty set Φ happens to be described by strict inequality constraints, then Φ1 should also be expressed by a system of strict inequalities. For the proof of the previous theorem, it does not matter whether the set Ψ is expressed analytically or in some other way. For instance, Ψ may be specified in a similar way as the set Φ:

Ψ := {x ∈ Rn : gs(x) ≤ ds, s ∈ S}, S = {1, 2, ..., t}.

In particular, Ψ can consist of a single point, as in the conditions of the next lemma.
Lemma 3.1 (Minkowski difference for a set with constraint structure and a singleton) Let there be given an arbitrary vector p ∈ Rn, let Φ ≠ ∅ be defined by (2), and let

Φ1 = {x ∈ Rn : fk(x + p) ≤ bk, k ∈ K}, K = {1, 2, ..., r}.

Then Φ − p = Φ1, where Φ − p = {z ∈ Rn : z = x − p, x ∈ Φ}.
The result of the previous lemma evidently follows from Theorem 3.3 under the assumption that Ψ is a singleton, i.e. Ψ = {p}. Lemma 3.1 in turn has the following quite obvious corollaries.
Corollary 3.1 (Minkowski difference for the nonnegative orthant and a singleton) Let there be given an arbitrary vector p = (p1, ..., pn), and let Φ be the nonnegative orthant

Φ = Rn+ = {x = (x1, ..., xn) : xj ≥ 0, j = 1, ..., n}.

Then Φ − p = {x = (x1, ..., xn) : xj ≥ −pj, j = 1, ..., n}.
Corollary 3.2 (Minkowski difference for a set with linear constraint structure and a singleton) Let there be given an arbitrary vector p ∈ Rn, Φ ≠ ∅,

Φ = {x ∈ Rn : 〈ak, x〉 ≤ bk, ak ∈ Rn, bk ∈ R1, k ∈ K}, K = {1, 2, ..., r}. (3)

Then Φ − p = {x ∈ Rn : 〈ak, x〉 ≤ b̃k, b̃k = bk − 〈ak, p〉, k ∈ K}.
Corollary 3.3 (Minkowski difference for a set with box constraint structure and a singleton) Let there be given an arbitrary vector p ∈ Rn, and let Φ ≠ ∅ be specified by box constraints on x of the form

Φ = {x ∈ Rn : l ≤ x ≤ u, l, u ∈ Rn}.

Then Φ − p = {x ∈ Rn : l − p ≤ x ≤ u − p}.
Corollary 3.4 (Minkowski difference for a closed ball and a singleton) Let there be given an arbitrary vector p ∈ Rn, and let Φ be a closed ball of radius q around some point o, i.e.

Φ = {x ∈ Rn : ‖x − o‖² ≤ q², o ∈ Rn, q ∈ R1+}.

Then Φ − p = {x ∈ Rn : ‖x − ō‖² ≤ q², ō = o − p}.
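Corollary 3.2 is straightforward to check numerically: translating Φ by −p only shifts each right-hand side by 〈ak, p〉. In the sketch below, the triangle Φ, the shift vector p, and the rejection-sampling scheme are hypothetical illustrative choices.

```python
import numpy as np

# Phi = {x : <a_k, x> <= b_k} is the triangle x1 + x2 <= 2, x1 >= 0, x2 >= 0.
A = np.array([[1.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 0.0, 0.0])
p = np.array([1.0, -1.0])                 # arbitrary shift vector

b_shifted = b - A @ p                     # Corollary 3.2: Phi - p = {x : Ax <= b - Ap}

# Rejection-sample points of Phi and check that x - p satisfies the shifted system.
rng = np.random.default_rng(1)
samples = rng.uniform(-0.5, 2.5, size=(2000, 2))
in_phi = samples[np.all(samples @ A.T <= b, axis=1)]
assert np.all((in_phi - p) @ A.T <= b_shifted + 1e-12)
print(len(in_phi), "sampled points of Phi all map into Phi - p")
```

The same one-line shift of the right-hand side underlies the refined construction of Subsection 3.3.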
3.3 Operands: Convex Polyhedra with Different Representations
Based on the foregoing results, we introduce in this subsection the representation of the Minkowski difference for a binary mixture of convex polyhedra having differently defined shapes. In more detail, we now focus on the mixed case where the first convex polyhedron has a linear constraint structure and the second one is expressible as the convex hull of a finite collection of points from Rn. In this case, for a space of arbitrary dimensionality, an exact representation of the Minkowski difference is successfully reached without the necessity of any transition to higher dimensions. Furthermore, it will be shown below that the number of linear constraints describing the Minkowski difference of the sets is exactly the same as for the first operand. A set Φ ⊂ Rn is said to be a polyhedral set if it can be specified as the intersection of a finite family of closed half-spaces or, equivalently, can be expressed by finitely many linear constraints of the form:

Φ = {x ∈ Rn : Ax ≤ b}, (4)

where A is a given nonvacuous matrix in Rr×n with components aik, and b is a vector in Rr with components bi. It is well known that a polyhedral set is a convex polyhedron. Let M be the set of all convex combinations of some vectors pj, j ∈ J, i.e. let M be specified in the same manner as it was defined in Subsection 3.1. To construct a refined representation of the Minkowski difference of Φ and M, we first utilize Theorem 3.3 as follows:
Φ − M = {x ∈ Rn : A(x + y) ≤ b, y ∈ M }.
For the further analysis, we will need the following supplementary set:

Ψ1 = {x ∈ Rn : Ax ≤ b − Apj, pj ∈ Rn, j ∈ J}. (5)
Theorem 3.4 (representation of the Minkowski difference for the mixed case before refinement) Let there be given an arbitrary collection of vectors pj ∈ Rn, j ∈ J, let Φ ≠ ∅ be defined by (4), and let M = conv{pj}_{j∈J}. Then Φ − M = Ψ1.
Proof. Part I. Without loss of generality, we fix some arbitrary point x̄ from Ψ1. By our construction of Ψ1, it then holds that Ax̄ ≤ b − Apj for all j ∈ J. Multiplying this system through by nonnegative coefficients βj, j ∈ J (with ∑_{j∈J} βj = 1), summing, and rearranging, we obtain

Ax̄ ≤ b − A ∑_{j∈J} βj pj.

In other words, A(x̄ + ∑_{j∈J} βj pj) ≤ b subject to βj ≥ 0, j ∈ J, ∑_{j∈J} βj = 1. By definition, the set M, as the convex hull of the vectors pj, j ∈ J, consists of all their convex combinations, so it is convex. More precisely, all the points y ∈ M are given by the possible convex combinations ∑_{j∈J} βj pj. This allows us to conclude that the following implication holds:

A(x̄ + y) ≤ b, ∀y ∈ M ⇒ x̄ ∈ Φ − M.
Taking into account the arbitrary selection of x̄ ∈ Ψ1, we thereby get the desired inclusion Ψ1 ⊆ Φ − M.

Part II. Now, there is no loss of generality in taking some point x̄ arbitrarily from Φ − M. We aim at showing that x̄ ∈ Ψ1; in this case, the proof is trivial. Indeed, by our assumption, A(x̄ + y) ≤ b, ∀y ∈ M, is fulfilled. Since pj ∈ M, ∀j ∈ J, we obviously have A(x̄ + pj) ≤ b for all j ∈ J and the given points pj ∈ Rn, i.e. x̄ ∈ Ψ1. Due to our arbitrary choice of x̄ ∈ Φ − M, this therefore implies that Φ − M ⊆ Ψ1.
The right-hand side of the system of inequalities in (5), taken with all possible choices of j ∈ J, characterizes this system as overdetermined. In what follows, having reduced the number of inequality constraints in (5), we will later come to the refined representation of the Minkowski difference Φ − M. For all j ∈ J, let us use the notation Apj = b̃j, and let b̂i := max_{j∈J} b̃ij, where b̃ij denotes the i-th component of b̃j. Then we have

bi − b̂i = bi − max_{j∈J} b̃ij ≤ bi − b̃ij, ∀i ∈ K, ∀j ∈ J.

From the system in (5), a subsystem of the form Ax ≤ s can then be formulated, where

s ∈ Rr, sT = bT − b̂T = bT − (max_{j∈J} b̃1j, max_{j∈J} b̃2j, ..., max_{j∈J} b̃rj).

Define the set with the following constraint representation:

Ψ2 = {x ∈ Rn : Ax ≤ s}.
Theorem 3.5 (refined Minkowski difference for convex polyhedra having different representations) Let there be given an arbitrary collection of vectors pj ∈ Rn, j ∈ J, let Φ ≠ ∅ be defined by (4), and let M = conv{pj}_{j∈J}. Then Φ − M = Ψ2.
Proof. The core of the proof consists in showing that Ψ1 = Ψ2.

Part I. There is no loss of generality in selecting some fixed point x̄ from Ψ2. Writing

Ax̄ ≤ s = b − b̂ ≤ b − b̃j = b − Apj, ∀j ∈ J,

we see that x̄ lies in Ψ1. The arbitrariness of the choice of x̄ ∈ Ψ2 confirms the inclusion Ψ2 ⊆ Ψ1.

Part II. To prove the converse inclusion, take some point x̄ arbitrarily from Ψ1. Due to the special construction of the right-hand side of the constraint system describing Ψ1, for each i ∈ K, it is not hard to see that the following subsystem of linear constraints

∑_{k=1}^{n} aik x̄k ≤ bi − ∑_{k=1}^{n} aik pk1,
. . .
∑_{k=1}^{n} aik x̄k ≤ bi − ∑_{k=1}^{n} aik pkm

can be replaced by the unique inequality

∑_{k=1}^{n} aik x̄k ≤ bi − max_{j∈J} ∑_{k=1}^{n} aik pkj.

The truth of the previous assertion is quite obvious, since it holds that

min_{j∈J} (bi − ∑_{k=1}^{n} aik pkj) = bi − max_{j∈J} ∑_{k=1}^{n} aik pkj.

In the above chain of formulas, x̄k and pkj are the k-th components of x̄ and pj, respectively. Besides, aik denotes the k-th element of the row of the matrix A indexed by i, and bi corresponds to the i-th component of the vector b. In consequence, the system Ax̄ ≤ b − Apj, j ∈ J, can be reduced to the system Ax̄ ≤ s with a much smaller number of constraints but the same dimension, i.e. x̄ ∈ Ψ2. Due to the arbitrary manner of selecting x̄ from Ψ1, this justifies the inclusion Ψ1 ⊆ Ψ2. Through the forward and backward inclusions, we have Ψ1 = Ψ2. Consequently, from Theorem 3.4, it follows that

Φ − M = Ψ2.
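The refined right-hand side s = b − b̂ of Theorem 3.5 amounts to one componentwise maximum. A minimal sketch, assuming a hypothetical box Φ = [0, 2]² and three points pj (data of our own choosing, not from the paper):

```python
import numpy as np

# Phi = {x : Ax <= b} is the box [0, 2]^2 (hypothetical sample data).
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 2.0, 0.0, 0.0])
P = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # rows are the points p_j of M

B_tilde = P @ A.T            # row j holds the vector (A p_j)^T, i.e. b_tilde_j
b_hat = B_tilde.max(axis=0)  # componentwise maximum over j in J
s = b - b_hat                # refined right-hand side: Psi_2 = {x : Ax <= s}
print(s)                     # -> [1. 1. 0. 0.], i.e. Psi_2 is the box [0, 1]^2
```

Here the r · m = 12 inequalities of Ψ1 collapse to the r = 4 inequalities of Ψ2, with the same matrix A as for the first operand.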
In light of these assertions, we can now, inter alia, apply the formula for the Minkowski difference Φ − M to solving the separation problem for Φ and M. To linearly separate the sets Φ and M, we first need to solve problem (1); of course, in the setting of (1), we take the set Φ − M instead of ˜Φ. After that, we are in a position to analyze the values of tΦ−M(c∗) and c∗. This analysis allows us to characterize Φ and M as linearly (strongly or not) separable or inseparable. In the separable case, c∗ represents the normal vector of the best linear separator for Φ and M, with the maximal thickness. In the case of inseparability of the sets, c∗ corresponds to the best pseudo-separator, with the minimal thickness. In other words, in every case the special setting of (1) provides an opportunity to obtain the optimal thickness of the margin between the supporting hyperplanes to the sets. For details concerning the notions of separator and pseudo-separator, their thickness, and more, the interested reader is again directed to p. 161. Notice that if we only need to inspect whether the Minkowski difference Φ − M fails to be strongly separable from the origin of Rn, then we can simply test whether or not the origin satisfies the system of constraints describing Φ − M. Another question may now arise: how can we reveal that the sets Φ and M are non-strongly linearly separable from each other? Due to the constraint structure of Φ − M, it is also not hard to check whether the origin of Rn is a boundary point of Φ − M. Indeed, if some constraints are fulfilled at the origin as equations, whereas all the others hold as strict inequalities, this obviously means that the origin belongs to the boundary of Φ − M.

Let us note that the problem of computing the distance between Φ and M can be solved by means of projecting the origin of Rn onto Φ − M. It will be especially expedient to apply such a reduction in the case when all of the right-hand side components sk of the system identifying the refined version of the Minkowski difference Φ − M are negative, since in this event, as was proved earlier, the projection problem for a convex polyhedron given by a system of linear constraints can be solved by reduction to the problem of projecting the origin onto a polyhedron described by a finite collection of points from Rn. In its turn, this projection problem may be solved utilizing one of the various problem settings presented in the literature. The proposed reduction widens the range of suitable optimization tools which can effectively be employed for solving the projection problem under consideration.
3.4 Both Operands with a Half-space Representation
The purpose of this subsection is to obtain a description of the Minkowski difference for the case where both operands have the same setting in the form of an intersection of half-spaces (or, briefly, a so-called half-space representation). Let us be given the two following polyhedra described as intersections of closed half-spaces of Rn:

Φ = {x ∈ Rn : A1x ≤ b1},
Ψ = {x ∈ Rn : A2x ≤ b2},

where A1 ∈ Rr1×n, A2 ∈ Rr2×n are some nonvacuous matrices, b1 ∈ Rr1, b2 ∈ Rr2. By the same arguments already applied in Subsection 3.2, it can be shown that Φ and Ψ are closed and convex sets. According to Theorem 3.3, the Minkowski difference of the sets Φ and Ψ can be expressed as follows:

Φ − Ψ = {x ∈ Rn : A1(x + y) ≤ b1, y ∈ Ψ}.
If we further replace the abstract constraint by the system defining Ψ, then we immediately obtain the following system of linear constraints:

A1x + A1y ≤ b1,
A2y ≤ b2.

This system can be equivalently rewritten in matrix form as follows:

Φ − Ψ = {z ∈ R2n : Dz ≤ b},

where D ∈ R(r1+r2)×2n is the block-structured matrix

D = ( A1 | A1 )
    ( Θ  | A2 ).

Besides, the right-hand side of the system and the vector of variables also have block structure, i.e. b ∈ Rr1+r2 with b = (b1 | b2)T, and z ∈ R2n with z = (x | y). Here, Θ ∈ Rr2×n denotes the null matrix. Being an intersection of closed half-spaces in R2n, the Minkowski difference Φ − Ψ is a closed and convex point set. For computing the Euclidean distance between the sets Φ and Ψ, we can formulate and solve the following problem of minimizing a strongly convex quadratic function subject to linear inequalities:
min ‖z‖², (6)
Dz ≤ b, (7)

where the number of constraints equals r1 + r2 and the number of variables equals 2n. For solving the same problem of measuring the distance between Φ and Ψ, the following optimization problem has been dealt with in the literature:

min ‖x − y‖², (8)
A1x ≤ b1, (9)
A2y ≤ b2. (10)

Observe that this problem has exactly the same number of constraints and variables as the problem (6)–(7). Nevertheless, the objective function of (8)–(10) does not have the strong convexity property. For this reason, the program (6)–(7) compares favorably with (8)–(10).
3.5 Applications to Variational Inequality Problems Related to the Concept of Linear Separability
Naturally, the analytical representation of the Minkowski difference of sets already has its own utility and independent significant applications in various fields of mathematical sciences. Likewise, the Minkowski difference operation can take its place now as a mathematical tool ready for new applications. In regard the topic of interest, it is quite clear that the Minkowski difference is a tool suitable for dealing with solving the variational inequalities that are closely relevant to a concept of the linear separability of sets. Let us consider the following variational inequalities, which consist in determining a nonzero vector c ∈ Rn such that
〈c, x − y〉 ≥ ∆, x ∈ A, y ∈ B, ∆ > 0, (11)
〈c, x − y − c〉 ≥ 0, x ∈ A, y ∈ B, (12)
〈c, x − y〉 ≥ 0, x ∈ A, y ∈ B. (13) Note that the first two of these variational inequalities correspond to the strong linear separability term in a sense that the inequalities are solvable in the case when the sets A and B are strongly linearly sep-arable. The third inequality is closely connected with the potentially non-strong linear separability of the considered sets. Clearly, using the Minkowski difference A − B, for the above-mentioned inequali-ties (11)–(13), a reduction can be made to the following variational
19 inequalities, respectively:
〈c, z 〉 ≥ ∆, x ∈ A − B, ∆ > 0, (14)
〈c, z − c〉 ≥ 0, z ∈ A − B, (15)
〈c, z 〉 ≥ 0, z ∈ A − B. (16) Of course, the goal is to find the nonzero vector c ∈ Rn satisfying (14)– (16). A set of possible solutions for (16) coincides with WA−B { 0}.For the more general than convex polyhedral setting, the proof of this fact was represented in 29. Moreover, it was proved that the inequality (16) has nonzero solutions if and only if WA−B 6 = {0}
or, equivalently, 0 /∈ int (A − B). For the convex sets A and B with nonempty interiors, due to Lemma 3.16 from , it holds
int (A − B) = int (A) − int (B).
This means that (16) (in tandem with (13)) is solvable if and only if the convex polyhedra A and B with nonempty interiors have no common interior points. For convex and closed sets A and B, the solution set of (14) coincides with VA−B. In , it was justified that the variational inequality (14), together with (11), has solutions if and only if VA−B ≠ ∅ or, equivalently, 0 ∉ A − B. Thus, the absence of any common points of A and B is the condition for the solvability of (14) and (11). Formally, the fulfillment of this condition can be verified by projecting the origin of Rn onto A − B. If, as the result of projecting, we obtain PA−B(0) ≠ 0, then c = PA−B(0) is a solution of (14). Otherwise, the inequalities (14) and (11) have no solutions. For the variational inequality which consists in determining a vector c ∈ Rn \ {0} satisfying (15), the solution set coincides with ΩA−B \ {0}. Let us recall that the considered pair of convex polyhedra A and B are closed. Then the boundedness of one of these sets implies the closedness of A − B. By the closedness and convexity of A − B, due to Theorem 3.3 from , we immediately obtain PA−B(0) ∈ Bd(ΩA−B). Having obtained PA−B(0) = 0, we can conclude that ΩA−B = {0}, since Theorem 3.2 from yields
min_{x ∈ A−B} ‖x‖2 = max_{y ∈ ΩA−B} ‖y‖2.
The fulfillment of ΩA−B = {0} is equivalent to having 0 ∈ A − B,
since E(ΩA−B) \ {0} = VA−B.
For the Minkowski difference operation, we highlight one more application, which consists in finding the nearest points of the sets A and B by solving the system

〈c, x − x̄〉 ≥ 0, x ∈ A,
〈c, ȳ − y〉 ≥ 0, y ∈ B,

where c = PA−B(0) is the projection of the origin of Rn onto A − B. Let us note that the projection problem can be reduced to a maximin problem of type (1). To solve the maximin problem, we can apply some software package after a simple transition to the minimax problem as follows:
max_{w∈χ} inf_{x∈A−B} 〈x, w〉 = −min_{w∈χ} (−inf_{x∈A−B} 〈x, w〉) = −min_{w∈−χ} sup_{x∈A−B} 〈x, w〉 = −min_{w∈χ} sup_{x∈A−B} 〈x, w〉,

where

χ := {c ∈ Rn : ‖c‖ = 1}, −χ := {−x, x ∈ χ}.
We notice that, for instance, the Optimization Toolbox package in MATLAB contains the function fminimax, which is usable for solving constrained minimax problems.
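As an illustration of the projection route, here is a minimal numerical sketch (not from the paper; it uses a plain Frank–Wolfe/Gilbert-style iteration rather than the minimax reformulation above) that projects the origin onto the Minkowski difference of two convex polygons given by their vertex sets. Since the projection c = PA−B(0) is nonzero, it supplies a separating direction in the sense of (14):

```python
import numpy as np

def min_norm_point(D, iters=1000):
    # Frank-Wolfe / Gilbert-style iteration for the minimum-norm point
    # of conv(D); D is an (m, n) array of points.
    x = D[0].astype(float)
    for _ in range(iters):
        s = D[np.argmin(D @ x)]                        # support point: argmin <d, x>
        d = s - x
        if d @ d == 0.0:
            break
        gamma = np.clip(-(x @ d) / (d @ d), 0.0, 1.0)  # exact line search on [x, s]
        if gamma == 0.0:
            break
        x = x + gamma * d
    return x

# Two disjoint unit squares: B is A translated by (5, 0).
A = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
B = A + np.array([5.0, 0.0])

# Vertex differences of A and B; their convex hull is the Minkowski difference.
D = (A[:, None, :] - B[None, :, :]).reshape(-1, 2)

c = min_norm_point(D)   # nonzero here, so c separates A and B as in (14)
```

When A and B intersect, the same iteration returns the zero vector, matching the solvability criterion 0 ∉ A − B discussed above.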
4 Conclusions
We have presented a novel analytical representation of the Minkowski difference for convex polyhedra given in different ways. In particular, we have considered the following cases:

• both operands of the Minkowski difference operation are determined in the same way, as convex hulls of finite collections of points,

• the operands have different representations (more precisely, the first operand is expressible by a linear constraint system, and the second one is given in terms of a convex hull),

• both operands have the same representation, as intersections of closed half-spaces.

It can be concluded that, thanks to the obtained results, it is now possible to investigate relevant problems such as: the linear separation of convex polyhedra in Euclidean space, variational inequality problems, finding the distance between convex polyhedra by projecting the origin of the Euclidean space onto a convex polyhedron, and determining the closest points of convex polyhedra.
References
[1] Zagajac, J. "A fast method for estimating discrete field values in early engineering design." IEEE Transactions on Visualization and Computer Graphics 2(1) (1996): 35–43.
[2] Ilies, H. T., and V. Shapiro. "The dual of sweep." Computer-Aided Design 31(3) (1999): 185–201.
[3] Mampaey, M., et al. "Efficient algorithms for finding richer subgroup descriptions in numeric and nominal data." 2012 IEEE 12th International Conference on Data Mining. IEEE, 2012.
[4] Takeda, A., H. Mitsugi, and T. Kanamori. "A unified classification model based on robust optimization." Neural Computation 25(3) (2013): 759–804.
[5] Mavroforakis, M. E., M. Sdralis, and S. Theodoridis. "A geometric nearest point algorithm for the efficient solution of the SVM classification task." IEEE Transactions on Neural Networks 18(5) (2007): 1545–1549.
[6] Serra, J. Image Analysis and Mathematical Morphology, Vol. 1. Academic Press, London, 1982.
[7] Barki, H., F. Dupont, F. Denis, et al. "Contributing vertices-based Minkowski difference (CVMD) of polyhedra and applications." 3D Research 4 (2013): 1.
[8] Lozano-Perez, T., and M. A. Wesley. "An algorithm for planning collision-free paths among polyhedral obstacles." Communications of the ACM 22(10) (1979): 560–570.
[9] Cameron, S. "Enhancing GJK: Computing minimum and penetration distances between convex polyhedra." In Proceedings of the International Conference on Robotics and Automation, pages 3112–3117, 1997.
[10] Ericson, C. Real-Time Collision Detection. Morgan Kaufmann, 2004.
[11] Ghosh, P. "A unified computational framework for Minkowski operations." Computers & Graphics 17(4) (1993): 357–378.
[12] Ghosh, P. "A solution of polygon containment, spatial planning, and other related problems using Minkowski operations." Computer Vision, Graphics, and Image Processing 49(1) (1990): 1–35.
[13] Nelaturi, S., and V. Shapiro. "Configuration products and quotients in geometric modeling." Computer-Aided Design 43(7) (2011): 781–794.
[14] Cappelli, Federico, et al. "Design for disassembly: a methodology for identifying the optimal disassembly sequence." Journal of Engineering Design 18(6) (2007): 563–575.
[15] O'Rourke, J. Computational Geometry in C (2nd ed.). Cambridge University Press, New York, NY, USA, 1998.
[16] Klette, R., and A. Rosenfeld. Digital Geometry: Geometric Methods for Digital Picture Analysis. Elsevier, 2004.
[17] Karasik, Y. B., and M. Sharir. "The power of geometric duality and Minkowski sums in optical computational geometry." Proceedings of the Ninth Annual Symposium on Computational Geometry, 1993.
[18] van den Bergen, G. Collision Detection in Interactive 3D Environments. Morgan Kaufmann, 2003.
[19] van den Bergen, G. "A fast and robust GJK implementation for collision detection of convex objects." Technical report, Department of Mathematics and Computing Science, Eindhoven University of Technology, 1999.
[20] Gilbert, E. G., D. W. Johnson, and S. S. Keerthi. "A fast procedure for computing the distance between complex objects in three-dimensional space." IEEE Journal of Robotics and Automation 4 (1988): 193–203.
[21] Gilbert, E. G., and C.-P. Foo. "Computing the distance between general convex objects in three-dimensional space." IEEE Transactions on Robotics and Automation 6 (1990): 53–61.
[22] Hachenberger, P. "Exact Minkowski Sums of Polyhedra and Exact and Efficient Decomposition of Polyhedra into Convex Pieces." Algorithmica 55 (2009): 329.
[23] Giannessi, F. Constrained Optimization and Image Space Analysis. Vol. 1, Separation of Sets and Optimality Conditions. Springer, New York, 2005.
[24] Vasil'ev, F. P. Numerical Methods for Solving Extremum Problems. Nauka, Moscow, 1980.
[25] Eremin, I. I. "Fejér's Methods of Strict Separability of Convex Polyhedral Sets." Izvestiya VUZ. Matematika, 12 (2006): 33–43.
[26] Gabidullina, Z. R. "A Theorem on Strict Separability of Convex Polyhedra and its Applications in Optimization." Journal of Optimization Theory and Applications 148(3) (2011): 550–570.
[27] Gabidullina, Z. R. "A Linear Separability Criterion for Sets of Euclidean Space." Journal of Optimization Theory and Applications 158(1) (2013): 145–171.
[28] Gabidullina, Z. R. "A Theorem on Separability of a Convex Polyhedron from Zero Point of the Space and Its Applications in Optimization." Izvestiya VUZ. Matematika, 12 (2006): 21–26. (Engl. transl. Russian Mathematics (Iz. VUZ) 50(12) (2006): 18–23.)
[29] Gabidullina, Z. R. "Necessary and Sufficient Conditions for Emptiness of the Cones of Generalized Support Vectors." Optimization Letters 9(4) (2015): 693–729.
[30] Barber, C., D. Dobkin, and H. Huhdanpaa. "The quickhull algorithm for convex hulls." ACM Transactions on Mathematical Software 22(4) (1996): 469–483.
[31] Avis, David, David Bremner, and Raimund Seidel. "How good are convex hull algorithms?" Computational Geometry: Theory and Applications 7(5–6) (1997): 265–301. doi:10.1016/S0925-7721(96)00023-5.
[32] Gabidullina, Z. R. "The Minkowski difference of sets with the constraint structure." VIII Moscow International Conference on Operations Research (ORM2016), Moscow, October 17–22, Proceedings Volume I, 30–33 (2016).
[33] Gabidullina, Z. R. "Solving of a projection problem for convex polyhedra given by a system of linear constraints." Constructive Nonsmooth Analysis and Related Topics (Dedicated to the Memory of V. F. Demyanov), CNSA Proceedings, IEEE, Art. no. 7973958 (2017).
[34] Gabidullina, Z. R. "The problem of projecting the origin of Euclidean space onto the convex polyhedron." Lobachevskii Journal of Mathematics 39(1) (2018): 35–45.
|
189544 | https://algs4.cs.princeton.edu/25applications/KendallTau.java.html | KendallTau.java
KendallTau.java
Below is the syntax highlighted version of KendallTau.java from §2.5 Sorting Applications.
/******************************************************************************
 *  Compilation:  javac KendallTau.java
 *  Execution:    java KendallTau n
 *  Dependencies: StdOut.java Inversions.java
 *
 *  Generate two random permutations of size n and compute their
 *  Kendall tau distance (number of inversions).
 ******************************************************************************/

public class KendallTau {

    // return Kendall tau distance between two permutations
    public static long distance(int[] a, int[] b) {
        if (a.length != b.length) {
            throw new IllegalArgumentException("Array dimensions disagree");
        }
        int n = a.length;

        int[] ainv = new int[n];
        for (int i = 0; i < n; i++)
            ainv[a[i]] = i;

        Integer[] bnew = new Integer[n];
        for (int i = 0; i < n; i++)
            bnew[i] = ainv[b[i]];

        return Inversions.count(bnew);
    }

    // return a random permutation of size n
    public static int[] permutation(int n) {
        int[] a = new int[n];
        for (int i = 0; i < n; i++)
            a[i] = i;
        StdRandom.shuffle(a);
        return a;
    }

    public static void main(String[] args) {

        // two random permutations of size n
        int n = Integer.parseInt(args[0]);
        int[] a = KendallTau.permutation(n);
        int[] b = KendallTau.permutation(n);

        // print initial permutations
        for (int i = 0; i < n; i++)
            StdOut.println(a[i] + " " + b[i]);
        StdOut.println();

        StdOut.println("inversions = " + KendallTau.distance(a, b));
    }
}

Copyright © 2000–2019, Robert Sedgewick and Kevin Wayne.
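The relabel-then-count-inversions idea in the listing above can be sketched in Python; a simple O(n²) inversion count stands in for `Inversions.count` (which the algs4 library implements with merge sort):

```python
def kendall_tau(a, b):
    # Relabel: ainv maps each value to its position in permutation a,
    # so bnew is b expressed in a's coordinate system.
    ainv = [0] * len(a)
    for i, v in enumerate(a):
        ainv[v] = i
    bnew = [ainv[v] for v in b]
    # Kendall tau distance = number of inversions in the relabeled sequence.
    return sum(1 for i in range(len(bnew))
                 for j in range(i + 1, len(bnew))
                 if bnew[i] > bnew[j])

kendall_tau([0, 3, 1, 6, 2, 5, 4], [1, 0, 3, 6, 4, 2, 5])  # → 4
```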
Last updated: Thu Aug 11 09:08:22 EDT 2022. |
189545 | https://askfilo.com/user-question-answers-mathematics/theorem-3-1-the-general-solution-of-is-where-proof-as-is-a-3132313138353637 | Question asked by Filo student
Theorem 3.1 : The general solution of sin θ = sin α is θ = nπ + (−1)^n α, where n ∈ Z.

Proof : As sin θ = sin α, α is a solution. As sin(π − α) = sin α, π − α is also a solution. Using periodicity, we get
sin θ = sin α = sin(2π + α) = sin(4π + α) = …
and
sin θ = sin(π − α) = sin(3π − α) = sin(5π − α) = …
∴ sin θ = sin α if and only if θ = α, 2π + α, 4π + α, … or θ = π − α, 3π − α, …
∴ θ = …, α, π − α, 2π + α, 3π − α, 4π + α, 5π − α, …
∴ The general solution of sin θ = sin α is θ = nπ + (−1)^n α, where n ∈ Z.
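A quick numerical sanity check of the theorem (α = 0.7 is an arbitrary choice, not from the original):

```python
import math

# Theorem 3.1: sin(nπ + (−1)^n α) = sin α for every integer n.
alpha = 0.7
for n in range(-6, 7):
    theta = n * math.pi + (-1) ** n * alpha
    assert abs(math.sin(theta) - math.sin(alpha)) < 1e-9
```

For even n this is periodicity, sin(2kπ + α) = sin α; for odd n it is the supplementary-angle identity, sin((2k + 1)π − α) = sin(π − α) = sin α.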
Updated on: Jul 18, 2024
| Topic | Trigonometry |
| Subject | Mathematics |
| Class | Class 12 |
© Copyright Filo EdTech INC. 2025 |
189546 | https://math.stackexchange.com/questions/319262/if-the-first-10-positive-integer-is-placed-in-a-circleany-order-3-integer-in | arithmetic - If the first 10 positive integer is placed in a circle(any order), 3 integer in consecutive locations around the circle that have a sum > 17? - Mathematics Stack Exchange
If the first 10 positive integer is placed in a circle(any order), 3 integer in consecutive locations around the circle that have a sum > 17?
Asked 12 years, 7 months ago
Modified 10 years, 8 months ago
Viewed 12k times
If the first 10 positive integers are placed around a circle, in any order, must there exist 3 integers in consecutive locations around the circle that have a sum greater than or equal to 17?

This was from a textbook called "Discrete Mathematics and Its Applications"; however, it does not provide a solution for this question. May I know how to tackle it?

Edit: I looked at the actual question again and realized it asks for a sum greater than or equal to 17. My apologies.
arithmetic

edited Mar 3, 2013 at 14:22; asked Mar 3, 2013 at 7:36 by Jun Hao
1 You accepted an answer that doesn't answer the question as posed. Did you mean ≥17≥17? Since >17>17 is actually true (I checked by enumeration), the question as posed is valid and remains unanswered.joriki –joriki 2013-03-03 09:59:18 +00:00 Commented Mar 3, 2013 at 9:59
Even if OP changes to ≥17≥17 it still is interesting to find a nonenumerative proof of >17>17.coffeemath –coffeemath 2013-03-03 10:30:51 +00:00 Commented Mar 3, 2013 at 10:30
Yes. I relize my question posted was wrong. My apologies. Yet at the same time, it will be very interesting to see a proof for > 17.Jun Hao –Jun Hao 2013-03-03 14:24:40 +00:00 Commented Mar 3, 2013 at 14:24
2 The "wrong" question led to better answers than the "right" question would have. No need to apologize.Gerry Myerson –Gerry Myerson 2013-03-03 22:44:06 +00:00 Commented Mar 3, 2013 at 22:44
18 18 is possible, for example with 1,7,3,8,4,5,9,2,6,10 1,7,3,8,4,5,9,2,6,10 and many others Henry –Henry 2018-09-14 11:21:23 +00:00 Commented Sep 14, 2018 at 11:21
Add a comment|
5 Answers
Gerry's answer shows that the average sum of the triples is 16.5. If there's no sum above 17, then at least five sums have to be 17 for the average to be 16.5. Since two successive sums can't be equal, at most five sums are 17, and thus exactly five sums are 17, and thus the other five sums are 16 and they alternate. But that's impossible, because it implies that moving by three goes up or down by 1 and moving another three goes down or up by 1, respectively, leading to the same number again.
answered Mar 3, 2013 at 12:30 by joriki
Remove the number 1 and unwrap the circle of numbers into a row a, b, c, d, e, f, g, h, i, where {a, b, c, d, e, f, g, h, i} = {2, 3, 4, 5, 6, 7, 8, 9, 10}. Then (a + b + c) + (d + e + f) + (g + h + i) = Σ_{j=2}^{10} j = 54, therefore at least one of (a + b + c), (d + e + f), or (g + h + i) must be ≥ 54/3 = 18.
answered Mar 3, 2013 at 15:09 by Steve Kass
EDIT: it has been pointed out that this answer only gives ≥ 17, while the question asks for > 17. More work is needed.

Let A₁ = a₁ + a₂ + a₃, A₂ = a₂ + a₃ + a₄, and so on, up to A₁₀ = a₁₀ + a₁ + a₂. Then A₁ + A₂ + ⋯ + A₁₀ = 3(a₁ + a₂ + ⋯ + a₁₀) = (3)(55) = 165, so some Aᵢ ≥ 165/10 = 16.5, so some Aᵢ ≥ 17.
edited Mar 3, 2013 at 11:20; answered Mar 3, 2013 at 8:35 by Gerry Myerson
- I think the question is > 17, not ≥ 17. (My solution was that a random triple has average sum 16.5, so there must be one with sum ≥ 17, but it does not work for > 17.) – dtldarek, Mar 3, 2013 at 8:37
- Perhaps the inequality (> 17) is true for more complicated reasons. Can anybody find an example with every triple ≤ 17? – TonyK, Mar 3, 2013 at 9:29
- @Tony: I checked by enumeration; the minimal maximal sum is indeed 18, for 1 4 9 5 3 7 8 2 6 10. – joriki, Mar 3, 2013 at 9:57
- So now all we need is a proof... – TonyK, Mar 3, 2013 at 10:00
- @Gerry: I found a relatively simple proof for > 17 that builds on your proof for ≥ 17. – joriki, Mar 3, 2013 at 12:33
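The enumeration mentioned in the comments above can be reproduced with a short brute-force script; fixing 10 in the first slot exploits rotational symmetry to cut the search to 9! arrangements (the exhaustive loop takes a few seconds):

```python
from itertools import permutations

def max_triple(circle):
    # largest sum of 3 consecutive entries around the circle
    n = len(circle)
    return max(circle[i] + circle[(i + 1) % n] + circle[(i + 2) % n]
               for i in range(n))

# joriki's arrangement attains a maximal triple sum of exactly 18
assert max_triple([1, 4, 9, 5, 3, 7, 8, 2, 6, 10]) == 18

# Exhaustive search over all circular arrangements (10 held fixed):
# the minimum possible maximal triple sum is 18, so every arrangement
# has some triple summing to at least 18, matching Steve Kass's bound.
best = min(max_triple((10,) + p) for p in permutations(range(1, 10)))
assert best == 18
```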
Original answer:

To have all sums ≤ 17, all four numbers from 7 to 10 would have to be separated by at least two numbers; but it would take at least 12 slots to space them like that.

As has been pointed out in the comments, this is wrong, but Gerry showed how to complete it.

The numbers from 8 to 10 must be separated by at least two numbers. That leaves two numbers to separate the 7 from them. If it's not adjacent to any of them, it must be separated from both 8 and 9 by just one number, and those have to be 2 and 1, respectively; that leaves only 3 to 6 in the four slots around 10, which leads to at least one sum of at least 18.

So the 7 must be next to one of the numbers from 8 to 10. It can't be next to the 9 or 10 because that would lead to a sum of at least 18 even with 1 and 2 adjacent. So it must be next to the 8, and Gerry's argument completes the proof.

That's rather inelegant case work; a more systematic proof would be nice.
edited Mar 3, 2013 at 11:47; answered Mar 3, 2013 at 10:48 by joriki
- Couldn't 7 be adjacent to 8 without causing problems? – EuYu, Mar 3, 2013 at 11:10
- 7 + 8 is only 15, leaving a possibility of inserting either a single 1 or a single 2 between them. Maybe one needs to consider the possible cyclic patterns among 7, 8, 9, 10. [Even between 7 and 9 there could be a single 1.] – coffeemath, Mar 3, 2013 at 11:14
- If 7 and 8 are adjacent, we must have 1-7-8-2 (or 2-7-8-1). Then a-b-10-c-d must have a triple too big, e.g., a-b-10-c-1-7-8-2 and even if a, b, c is some arrangement of 3, 4, 5 there will be a sum of 18 or more. – Gerry Myerson, Mar 3, 2013 at 11:19
Solution: For any arrangement of the first 10 positive integers around a circle, in any order, there are exactly 10 choices of 3 consecutive numbers around the circle, and each number appears in exactly 3 of those 10 choices. Hence, the sum over all 10 triples of adjacent numbers is (1 + 2 + ⋯ + 10) × 3 = 165. This implies that at least one triple of adjacent numbers has a sum greater than or equal to 17. Indeed, if there were no such triple, each triple would sum to at most 16, so the sum over all 10 triples would be at most 160, contradicting the fact that it is 165.
answered Jan 31, 2015 at 18:47 by Brandon
|
189547 | https://mathematica.stackexchange.com/questions/10135/handing-a-list-of-constraint-expressions-to-a-c-function-with-mathlink | mathematical optimization - Handing a list of constraint expressions to a C++ function with MathLink - Mathematica Stack Exchange
Handing a list of constraint expressions to a C++ function with MathLink
Asked 13 years ago
Modified 13 years ago
Viewed 646 times
I need to solve an optimization problem, which is defined in a Mathematica notebook.
Using Mathematica's FindMinimum is not an option, because it is too slow. So the idea is to use an external solver for quadratically constrained quadratic problems and use MathLink to get the constraints of the problem from Mathematica, calculate the solution, and return the solution to Mathematica.
And here is my problem: The constraints (quadratic and linear) are given to me as a list of expressions, e.g.
mathematica
{ -1+(-0.363263+x)^2+(-0.329466+y)^2<=0,
-1+(-0.248721+x)^2+(0.451803 +y)^2<=0,
-1+(0.33444 +x)^2+(-0.41341+y)^2<=0,
-1+(0.414249 +x)^2+(0.384528 +y)^2<=0,
-1+(-0.65488+x)^2+(0.242478 +y)^2<=0,
-1+(-0.176244+x)^2+(0.30843 +y)^2<=0,
-1.4+x<=t, -0.5+y<=t, 0.4 -x<=t, -0.5-y<=t }
How do I deal with such a list in my C++ function? Do I need to change its template signature so that it takes a SymbolList instead of a RealList (as in my current version)?
Or is there a way to extract all the numbers in my constraint list and put them in a list in Mathematica?
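(For illustration only, not part of the original question: if the constraints are serialized to plain text, e.g. with Mathematica's ToString, the numeric coefficients can be pulled out on the external side with a simple regular expression. A hypothetical Python sketch of that idea — `extract_reals` is an invented name:)

```python
import re

def extract_reals(constraint: str) -> list[float]:
    """Pull every signed decimal number out of one textual constraint.

    Note: this also captures the ^2 exponents and the 0 right-hand bound,
    so a real implementation would filter by position or parse properly.
    """
    return [float(tok) for tok in re.findall(r"-?\d+(?:\.\d+)?", constraint)]

print(extract_reals("-1+(-0.363263+x)^2+(-0.329466+y)^2<=0"))
# [-1.0, -0.363263, 2.0, -0.329466, 2.0, 0.0]
print(extract_reals("-1.4+x<=t"))
# [-1.4]
```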
mathlink-or-wstp
mathematical-optimization
edited Sep 2, 2012 at 21:44 by Dr. belisarius
asked Sep 2, 2012 at 20:53 by Daniel Eberts
Welcome to Mathematica.SE! Are you sure that you don't want to investigate speeding up your optimization problem in Mathematica first? What function of x and y do you want to optimize? What's the value of t? I tried a few rather complex functions and they all finish below 1/10 s. — Sjoerd C. de Vries, Sep 2, 2012 at 21:17
Downvoted because the assertion "FindMinimum is not an option, because it is too slow" is not justified. I'll remove the vote if you share some insight about the problem that justifies it. — Dr. belisarius, Sep 2, 2012 at 22:06
Let's replace "prove" by "illustrate" in my above comment. — halirutan, Sep 3, 2012 at 3:29
The others have a point. By compiling your objective function you can easily get a factor of 10. Also take a look at this: mathematica.stackexchange.com/questions/4700/…. Mathematica is quite fast for numerics nowadays. — Ajasja, Sep 3, 2012 at 7:37
You mention that your objective is simple; you could give the Jacobian to FindMinimum, or choose a better algorithm. It would be good if you actually showed the issue at hand. Even if FindMinimum is the actual bottleneck, most likely MathLink is not the way to go. — user21, Sep 3, 2012 at 9:18
1 Answer
I don't know if this answer helps you, but what I wanted to convey would be too lengthy for a comment. First of all, quoting from your comment:
"...the objective function (it is t, so fairly trivial)"
this problem turns out to be so simple that any mechanism involving Compile, or binding to an external optimization routine (e.g. CPLEX) via MathLink, is completely unnecessary. If you agree with that reading of your comment, you can see that the MMA built-in optimization functions FindMinimum and NMinimize are already pretty efficient.
```mathematica
linearcons =
  -1.4 + x <= t &&
  -0.5 + y <= t &&
   0.4 - x <= t &&
  -0.5 - y <= t;

quadraticcons =
  -1+(-0.363263+x)^2+(-0.329466+y)^2<=0&&
  -1+(-0.248721+x)^2+(0.451803+y)^2<=0&&
  -1+(0.33444+x)^2+(-0.41341+y)^2<=0&&
  -1+(0.414249+x)^2+(0.384528+y)^2<=0&&
  -1+(-0.65488+x)^2+(0.242478+y)^2<=0&&
  -1+(-0.176244+x)^2+(0.30843+y)^2<=0
```
Now call the optimization functions and see the Timing
```mathematica
res  = NMinimize[{t, quadraticcons && linearcons}, {x, y, t}];   // AbsoluteTiming
res1 = FindMinimum[{t, quadraticcons && linearcons}, {x, y, t}]; // AbsoluteTiming

(* {0.2100003, Null} *)
(* {0.0300001, Null} *)
```
Isn't that practically fast enough for your "fairly trivial" objective function? An added advantage is that MMA finds the unique global minimum here. You can also visualize the minimum: with this very simple objective, the intersection of the region spanned by the quadratic constraints and the region spanned by the linear constraints is a unique point. Let's look at the two regions separately using RegionPlot3D and RegionPlot.
Now, using the visualization code from a past question, we can see that the linear and the quadratic constraints agree at a single point, and MMA finds it really fast, as we saw above. The sought-after point {x, y, t} is
```mathematica
FindArgMin[{t, quadraticcons && linearcons}, {x, y, t}]

(* {0.537203, -0.076731, -0.137203} *)
```
However, I am using a high-end desktop with a Core i7 Extreme processor, so on an average machine the timings may turn out to be a bit worse.
BR
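(A cross-check outside Mathematica — purely illustrative, not part of the original answer or the MathLink setup: the feasible region is the intersection of six unit disks, and t only has to dominate four linear expressions, so even a brute-force grid search in Python lands near FindArgMin's optimum.)

```python
# Centers of the six unit disks from the question's quadratic constraints.
centers = [(0.363263, 0.329466), (0.248721, -0.451803), (-0.33444, 0.41341),
           (-0.414249, -0.384528), (0.65488, -0.242478), (0.176244, -0.30843)]

def feasible(x, y):
    # (x, y) must lie inside all six unit disks.
    return all((x - cx) ** 2 + (y - cy) ** 2 <= 1.0 for cx, cy in centers)

def t_of(x, y):
    # Smallest t satisfying the four linear constraints at this (x, y).
    return max(x - 1.4, y - 0.5, 0.4 - x, -0.5 - y)

step = 0.005
best_t, best_x, best_y = min((t_of(i * step, j * step), i * step, j * step)
                             for i in range(-200, 201)
                             for j in range(-200, 201)
                             if feasible(i * step, j * step))
print(best_t, best_x)  # ≈ -0.135 and 0.535 on this grid; the exact optimum is t = -0.137203
```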
edited Apr 13, 2017 at 12:56 by CommunityBot
answered Sep 3, 2012 at 12:54 by PlatoManiac
+1 I was going to post the same. Just for reference, on a Core2Duo @2GHz your timing for FindMinimum[] gets a x3 factor. And what the OP is looking for is the min t (i.e. the square's size) where these two figures intersect: i.sstatic.net/EnhO6.png. Perhaps the only justification for trying to optimize this is having to solve thousands of these problems per second. — Dr. belisarius, Sep 3, 2012 at 13:46
This is a nice answer, but given that he is being told to do this with CPLEX I suspect that he does need to solve thousands of such problems per second in some line-of-business application. FindMinimum uses the nonlinear interior point method for this problem, which unfortunately is implemented as top-level code. A modest speedup can be had by setting Method -> {"InteriorPoint", "Theta" -> 1/3}, but I think that's the best that can be done, unless somebody (somebody else this time!) wants to reimplement this method in compiled code. — Oleksandr R., Sep 3, 2012 at 14:09
@OleksandrR. Is the method option you specified documented? I did not see any speed improvement; on the contrary, it was a bit slower. Is that a correct observation? — PlatoManiac, Sep 3, 2012 at 14:24
@PlatoManiac I averaged over 100 runs of FindMinimum. The run-to-run variability is quite large; I got anywhere between your stated value of about 30ms and a much greater figure of 265ms. The option is not documented, but you can see what options exist with Options[FindMinimumInteriorPoint]. Theta was the only one I found to make a significant difference. — Oleksandr R., Sep 3, 2012 at 14:41
Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA.
Mathematica is a registered trademark of Wolfram Research, Inc. While the mark is used herein with the limited permission of Wolfram Research, Stack Exchange and this site disclaim all affiliation therewith. |
189548 | http://www.cwladis.com/math100/Lecture4Sets.htm | Untitled Document
In regular algebra, we can combine simple operations into more complex statements; for example, we can write an expression which includes both addition and multiplication: 4x+7, because we have rules which tell us that when we encounter this expression, we always do the multiplication first, and then the addition.
Now that we have learned about four operations on sets: union, intersection, difference, and complement, we want to be able to write more complex expressions, such as (A∪B′)∩A, for example. We want to be able to draw venn diagrams of these compound expressions, and we want to be able to calculate the result for these compound expressions if we are given the exact contents of each of the sets involved.
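(An aside, not part of the original lecture: these four operations map directly onto Python's built-in set type — | for union, & for intersection, - for difference, and U - A for the complement — so compound expressions such as (A∪B′)∩A can be checked mechanically. A small illustrative sketch with made-up sets:)

```python
U = {1, 2, 3, 4, 5, 6}     # an illustrative universal set
A = {1, 2, 3}
B = {2, 4}

B_comp = U - B             # B′ = U − B = {1, 3, 5, 6}
result = (A | B_comp) & A  # (A ∪ B′) ∩ A
print(result)              # {1, 2, 3} — by absorption, (A ∪ X) ∩ A is always A
```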
More information on Drawing Venn Diagrams:
Before we go any further in our exploration of compound operations on sets, we need to be sure we understand some important points about how to draw a Venn diagram.
How to draw a Venn Diagram Containing Two Sets:
When drawing Venn diagrams, we must be careful not to make assumptions about relationships between sets when we don’t know anything about the sets themselves. If we have two sets there are 3 possible ways they could be related:
Let’s call our two sets A and B.
One set is a subset of the other: (Here B is a subset of A.)
The two sets are disjoint:
The two sets intersect:
When we don’t know anything about the sets we are working with, we always draw them as though they overlap:
Why does this work? We'll shade some areas different colors for a moment to help us explain why. In this next diagram, we have shaded the set A in red and the set B in blue; purple indicates where both red and blue have been shaded.
If the two sets are DISJOINT, this picture still works if we assume that A∩B is EMPTY. Remember that A∩B is the purple region in the middle where A and B overlap.
If A⊆B, then this picture still works if we assume that A-B is EMPTY. Remember that A-B is just the red area of A to the left (it does not include the purple area of A where it intersects with B).
If B⊆A, then this picture still works if we assume that B-A is EMPTY. Remember that B-A is just the blue area of B to the right (it does not include the purple area of B where it intersects with A).
If you are drawing a Venn diagram in order to solve a problem and you have no idea what the two sets are, be sure to draw your diagram like this so that the two sets overlap.
It is important to understand that we can never assume that an arbitrary set is not empty.
Just because a set takes up space on a Venn diagram doesn’t mean it necessarily has any elements. For example, if A were actually an empty set in the Venn diagram above, then the red shaded area and the purple shaded area (where there is also red shading) would both be empty. This means that even though these shaded areas on the diagram take up space, there may be nothing in them.
Venn Diagrams with 3 Sets:
In all of the examples we will be looking at in this section, we will often be working with 3 or more sets. We can draw a Venn diagram with as many sets in it as we like. A general Venn diagram for three sets would look something like this:
It is important to always label your venn diagrams. Whenever you use a venn diagram to represent a set, you should put the label which indicates what the shading depicts above or below the venn diagram. See some of the examples below:
Order of operations on sets:
Sometimes we want to combine more than two sets and more than one operation to create a more compound expression. But in order to do this we have to establish some set of rules so that we know in what order to do each operation. Just like with numbers, we use parentheses if we want an operation to be done first.
Just like with numbers, we always do anything in parentheses first. If there is more than one set of parentheses, we work from the inside out.
We do complements first.
Union , intersection, and difference operations are all equal in the order. So if we have more than one of these at a time, we have to use parentheses to indicate which of these operations should be done first. For example, the expression A∪B-C doesn’t make any sense because we don’t know which operation we should do first: should we take the union first, and then the difference, or should we take the difference first and then the union? In order to make this clear, we need to either write (A∪B)-C or A∪(B-C).
Let’s look at some examples. For each of these examples, let U={1,2,3,4,5,6,7,8,9}, A={1,2,3,4,5}, B={2,4,6,8}, C={4,5,6}:
So our Venn diagram will look like this: (I have put each of the elements in its place on the Venn diagram.)
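(A quick computational aside, using Python's set operators | and - to stand in for ∪ and −: with these particular sets you can see concretely why the unparenthesized A∪B-C is ambiguous — the two bracketings give different answers.)

```python
A, B, C = {1, 2, 3, 4, 5}, {2, 4, 6, 8}, {4, 5, 6}

print(sorted((A | B) - C))  # [1, 2, 3, 8]       — union first, then difference
print(sorted(A | (B - C)))  # [1, 2, 3, 4, 5, 8] — difference first, then union
```

(Python itself happens to resolve A | B - C as A | (B - C), because - binds tighter than |; as the lecture says, it is much clearer to write the parentheses explicitly.)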
Example 1:
(A∪B)-C
First we find A∪B. In this case, A∪B={1,2,3,4,5,6,8}. On a Venn diagram for A∪B, we would get:
A∪B
Remember that when we take A∪B we ignore the set C. Don’t let the fact that the set C is there distract you. It doesn’t have anything to do with the operation A∪B, so we ignore it for the moment; we’ll use it in the next step.
Next we find (A∪B)-C. Because A∪B={1,2,3,4,5,6,8} and C={4,5,6}, we just substitute these sets in for A∪B and C respectively to get:
(A∪B)-C={1,2,3,4,5,6,8}-{4,5,6}={1,2,3,8}
To create a Venn diagram for (A∪B)-C, we must first have a venn diagram for A∪B and for C, so here is a venn diagram of C, where the set C is shaded in blue:
C
Now we put both these shadings for A∪B and C together on the same venn diagram, and we get the following picture; the set C is in blue, the set A∪B is in red, and the area where they overlap becomes purple:
A∪B in red, C in blue
Now to take (A∪B)-C we simply take away anything that has been shaded blue (this includes the purple part) because we want to keep everything that is in the set A∪B but get rid of anything that is in C. What’s left over is this, which is the Venn diagram of (A∪B)-C:
A∪B
Notice that exactly what is left over in the red shaded region is the set {1,2,3,8}, which is exactly what we got above when we calculated this using only the sets without the diagrams!
Example 2:
A∪(B∩C′)
First we find C′, because we first do what is inside the parentheses; within the parentheses, we have two operations: intersection and complement, and the order of operations tell us to do complement first.
In this case, C={4,5,6}. So C′={4,5,6}′={1,2,3,7,8,9}.
To find a Venn diagram for C′, we would first draw a Venn Diagram for C: (Here C is shaded in blue.)
C
Now to find C′, we shade in any part that was white on the diagram for C, to get the following diagram of C′, where C′ is shaded in red:
C′
Now that we have C′, next we want to find B∩C′. Because B={2,4,6,8} and C′={1,2,3,7,8,9}, by substituting these sets into B∩C′ we get:
B∩C′={2,4,6,8}∩{1,2,3,7,8,9}={2,8}
To make a Venn diagram of B∩C′, we first need a venn diagram of B and a venn diagram of C′. In our venn diagram of B, we shade B in blue:
B
Now we put our venn diagrams for B and for C′ together; we shade in C′ in red and B in blue so that the area where B and C′ overlap is purple. Notice that this purple area is where C′ and B intersect, and so the numbers in the shaded purple are exactly the elements of the set {2,8}!
C′ in red, B in blue
So we take only the purple region where B and C′ intersect and draw our Venn diagram of B∩C′ like this:
B∩C′
Now we can take A∪(B∩C′). Because we now know that B∩C′={2,8}, and A={1,2,3,4,5}, we can substitute these sets into A∪(B∩C′) to get:
A∪(B∩C′)={1,2,3,4,5}∪{2,8}={1,2,3,4,5,8}.
Be careful! Remember that when we take the UNION, we put ALL the elements of both sets together. This is the OPPOSITE of taking the INTERSECTION, where we only take the elements that are in BOTH sets at the same time! Be careful not to confuse the union with the intersection.
To get a Venn diagram for A∪(B∩C′), we need to have venn diagrams for A and for B∩C′. We draw a venn diagram of A by shading A in yellow:
A
So now we put our shading for A together with our shading for B∩C′ so that A is shaded yellow and B∩C′ is shaded purple on the same venn diagram. Where the yellow set A and the purple set (B∩C′) overlap, we end up with a kind of blue-green, so that the picture looks like this:
A in yellow, B∩C′ in purple
Because we want to take the UNION of the yellow set A and the purple set B∩C′, we take the whole shaded area, where there is yellow and where there is purple and where the yellow and purple overlap to make blue-green. To make our final Venn diagram clear, we shade the whole area in a single color (here we pick orange):
A∪(B∩C′)
Notice that our final shaded area is the set {1,2,3,4,5,8}, which is exactly the answer we got for A∪(B∩C′) above!
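(Another optional aside: the whole of Example 2 can be replayed step by step in Python, modeling the complement of a set as U minus that set.)

```python
U = {1, 2, 3, 4, 5, 6, 7, 8, 9}
A, B, C = {1, 2, 3, 4, 5}, {2, 4, 6, 8}, {4, 5, 6}

C_comp = U - C         # C′ = {1, 2, 3, 7, 8, 9}
inner = B & C_comp     # B ∩ C′ = {2, 8}
result = A | inner     # A ∪ (B ∩ C′)
print(sorted(result))  # [1, 2, 3, 4, 5, 8] — matching the Venn-diagram answer
```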
When should I use venn diagrams?
For any given sets and operations, we can do the operations either by drawing several Venn diagrams step-by-step, or we can simply work with the elements in the sets themselves. Pay attention to instructions to decide which of these methods a question is asking you to use.
Typically, if we know what the sets are, we will just calculate the answer, but if we don't know what the sets are, we will draw venn diagrams, because venn diagrams are the only way that we can represent abstract sets that we don't know anything about!
Using Venn Diagrams to Prove that Two Expressions are Equal:
We can also use Venn diagrams to prove that two expressions are equal. If we want to show that two expressions involving sets and set operations are actually equal no matter what the sets are, all we need to do is to draw a venn diagram of each expression and show that the two venn diagrams are identical.
For example, when we learned the definitions of complement and difference, we realized that A′ and U-A are the same. So we can write A′=U-A. The best way to prove that two expressions are the same is to draw their Venn diagrams and show that they have exactly the same area shaded.
We know that A′ has the following Venn Diagram:
A′
Now let's draw a venn diagram of U-A. First we must have a venn diagram of U and a venn diagram of A.
U
Here the set U is shaded in red. Notice that this is the whole venn diagram, because the universal set always includes everything.
A
Here the set A is shaded in yellow.
Now we put both U and A together on the same venn diagram, so that A is shaded yellow and U is shaded red, so that the places where the red and the yellow overlap are orange:
A is yellow, U is red
To take the difference U-A, we must begin with U, the red shaded part, and remove any parts which were in A, or which were shaded yellow. Since the orange portion of this diagram was shaded yellow, we must remove that from the diagram to get the diagram for U-A:
U-A
Notice that the Venn diagram for A′ and the Venn diagram for U-A are identical! This proves that A′=U-A, no matter what the sets U and A contain!
Be careful - You CANNOT prove that two expressions are always equal by giving EXAMPLES of specific sets that would make the expressions equal; you can only do this by showing that their Venn diagrams are equal. This is a very important concept in mathematics: AN EXAMPLE IS NOT A PROOF! Make sure you know the difference between an example and a proof!
To illustrate this idea, let’s come up with both an example and a proof for a few different problems:
Example:
A-B=(A′∪B)′
First let’s come up with an example. If we make A={15,20,25} and B={20,30}, and U={5,10,15,20,25,30}, then:
A-B={15,20,25}-{20,30}={15,25}
To find (A′∪B)′, we need to find A′ first, because it is inside the parentheses, and complements come first. Here A′={5,10,30}.
Next we can find A′∪B={5,10,30}∪{20,30}={5,10,20,30}.
Then we can find (A′∪B)′={5,10,20,30}′={15,25}.
Because A-B={15,25} and (A′∪B)′={15,25} we can say that A-B=(A′∪B)′.
But this is just an EXAMPLE! This only shows that A-B=(A′∪B)′ when A={15,20,25} and B={20,30}, and U={5,10,15,20,25,30}.
We cannot tell if A-B=(A′∪B)′ is true for ALL POSSIBLE SETS A and B! So this is NOT a proof!
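(Optional computational aside: we can replay the example above, and even test the identity on a thousand randomly generated sets — every check passes, but exactly as the text warns, no number of passing examples amounts to a proof.)

```python
import random

def identity_holds(U, A, B):
    """Check A − B == (A′ ∪ B)′ for one concrete choice of sets."""
    return A - B == U - ((U - A) | B)

# The example from the text:
assert identity_holds({5, 10, 15, 20, 25, 30}, {15, 20, 25}, {20, 30})

# Many random examples — still just examples, not a proof!
universe = set(range(20))
for _ in range(1000):
    A = {x for x in universe if random.random() < 0.5}
    B = {x for x in universe if random.random() < 0.5}
    assert identity_holds(universe, A, B)
print("all example checks passed")
```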
Proof:
Now let’s try a proof. The only way to prove that A-B=(A′∪B)′ for all possible sets A and B is to show that the Venn diagram of A-B is the same as the Venn diagram for (A′∪B)′.
First we recall that the Venn diagram of A-B looks like this:
A-B
Now we need to construct the Venn diagram for (A′∪B)′ step by step to show that it is exactly like the one above.
To do this we must first draw A′, (which we already did in a previous example, so we'll just use that diagram here) because we must do what is in the parentheses first, and the complement comes before every other operation.
The Venn diagram of A′ is:
A′
Now we take the venn diagram of B:
B
Be careful - Notice here that we have an oval indicating the set B in the Venn diagram for A′ and we have an oval indicating the set A in the Venn diagram of B, which is not necessary in general. HOWEVER, since in this case, we need these Venn diagrams only as a step to create the Venn diagram of (A′∪B)′, and THIS FINAL EXPRESSION contains BOTH sets A and B, then BOTH an oval indicating the set A AND an oval indicating the set B must be in EVERY Venn diagram which we make leading up to our Venn diagram for (A′∪B)′. If the Venn diagram for A′ were missing an oval indicating the set B or the Venn diagram for B were missing an oval indicating the set A, then we wouldn't be able to put them together on the same Venn diagram to take the union.
Now we shade in B in yellow and A′ in blue on the same venn diagram so that we can find A′∪B. The areas on the Venn diagram where the yellow B and the blue A’ overlap become green:
A′ in blue, B in yellow
We want to take the union of the yellow set B and the blue set A′, so we take everything that has been shaded yellow or blue (or green where both yellow and blue overlap). To make our diagram clearer, we recopy it so that it is all in one color, so the Venn diagram we get for A′∪B is:
A′∪B
Now we need to find (A′∪B)′ . So we want to take the complement of the red shaded area above which represents the set A′∪B. We recall that a complement is everything that is in the universal set but not in A′∪B. The area that is in U but not in A′∪B is exactly the white area on the Venn diagram. So to shade in (A′∪B)′ , all we need to do is shade in the area that was white in the above venn diagram like this:
(A′∪B)′
This is the same as the Venn diagram for A-B! So we have PROVED that A-B=(A′∪B)′!
This is a proof because the Venn diagrams are all general and don’t require us to know what A and B are.
We have shown, by using the Venn diagrams, that A-B=(A′∪B)′ for ALL sets A, B and U. A, B, and U can be ANYTHING, and A-B=(A′∪B)′ will still be true!
Using Venn Diagrams to Prove that Two Sets are Not Equal:
If we want to prove that two sets are NOT equal, all we have to do is show that their Venn diagrams are not the same:
Let’s consider A∪B=A∩B. Is this always true? Sometimes true? Or never true?
To see if it is true in the general case, let’s draw the Venn diagrams:
We recall from previous examples that A∪B has the Venn diagram:
A∪B
We also recall from previous examples that A∩B has the Venn diagram:
A∩B
These two diagrams are NOT the same! This means that it is NOT true that A∪B=A∩B for ALL POSSIBLE SETS A and B.
However, there might be some sets for which A∪B=A∩B.
Clearly if A={1,2,3} and B={2,4,6}, A∪B={1,2,3,4,6} and A∩B={2}. Obviously {1,2,3,4,6}≠{2}.
But if A={1,2,3} and B={1,2,3}, then A∪B={1,2,3} and A∩B={1,2,3} so in this case A∪B=A∩B.
So A∪B=A∩B is true SOMETIMES but NOT ALWAYS.
If you look carefully, you’ll notice that A∪B=A∩B will be true only when A=B. To see this, look at the Venn diagrams above. The only way to make the red shaded area and the yellow shaded area the same would be to squish the sets A and B together into one set so that they are exactly the same set.
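(Optional computational aside: a brute-force check over every pair of subsets of {1,2,3} confirms the observation that A∪B=A∩B happens exactly when A=B. An exhaustive check like this settles the question for this particular universe, though not for sets in general.)

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as a list of sets."""
    items = sorted(s)
    return [set(c) for r in range(len(items) + 1) for c in combinations(items, r)]

S = subsets({1, 2, 3})
assert all((A | B == A & B) == (A == B) for A in S for B in S)
print("A ∪ B = A ∩ B exactly when A = B, for all subset pairs of {1,2,3}")
```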
DeMorgan's Laws:
When can you replace one compound expression containing sets with another?
Right now we will do another example showing how we can prove that two compound expressions containing sets are equivalent. This example is actually one of two well-known rules in set theory, called DeMorgan's Laws (after the mathematician who discovered them).
Once set theory was invented, and we have compound expressions involving sets and multiple operations, it becomes natural to ask if two different compound expressions are actually equal. For example, with regular algebra using numbers and operations on numbers such as addition, subtraction, multiplication, and division, there are rules that say that you can rewrite 3(4x+2) as 12x+6 by distributing the multiplication over the addition. Is the same process true for sets?
For example, can I write (A∪B)′ as A′∪B′? In other words, can I distribute set complement over set union? The only way to tell is to draw a Venn diagram of (A∪B)′ and a Venn diagram of A′∪B′ and see if the two Venn diagrams are identical:
We begin by drawing the Venn diagram of (A∪B)′:
We recall from a previous problem that the Venn diagram for A∪B looks like this: (remember that because the union is inside the parentheses, we must do it before we take the complement, which is outside the parentheses)
A∪B
Now to take the complement of A∪B, we take only the white area in the previous Venn diagram to get:
(A∪B)′
Now we draw the Venn diagram of A′∪B′. We begin with the Venn diagrams for A′ and B′, which we recall from previous examples:
A′
B′
Now we need to put A′ and B′ on the same Venn diagram. Because A′ is in blue and B′ is in red, the area where they overlap will be purple:
A′ in blue, B′ in red
Because we are taking a union, we keep everything that has been shaded in blue or red (or purple, because the purple comes from both red and blue shading). To make our diagram clearer, we choose to shade in all one color, so that this is the diagram of A′∪B′:
A′∪B′
This is clearly not the same as the Venn diagram for (A∪B)′! So we can conclude that in general, (A∪B)′ cannot be replaced with A′∪B′ without changing the resulting set!
Be careful! You can NEVER replace (A∪B)′ with A′∪B′, because these two sets are NOT (in general) equal! If you are asked to draw a Venn diagram for (A∪B)′, for example, be sure to follow the order of operations - NEVER begin by taking the complements of A and B, because that is OUTSIDE the parentheses! Likewise, you CANNOT replace (A∩B)′ with A′∩B′, because these two sets are NOT (in general) equal! (To see this, try drawing a Venn diagram of each side and compare them.)
DeMorgan's Laws:
It turns out that there actually is a kind of twisted distribution rule that allows you to "distribute" set complement over set union (or intersection).
It turns out that the following two rules are ALWAYS true for ANY two sets A and B:
1) A′∪B′=(A∩B)′
2) A′∩B′=(A∪B)′
These rules are called DEMORGAN’S LAWS.
How do we know that these two rules will always be true?
Let's start with the first rule - how do we know that A′∪B′ will always be equal to (A∩B)′, no matter what the sets A and B are? We draw Venn diagrams of each side of this equation, and show that they turn out to be the same.
First we draw a Venn diagram of A′∪B′; since we just did this in the previous example, we can simply use that same diagram here:
A′∪B′
Now let’s construct the Venn diagram for (A∩B)′:
Because we always do what is in the parentheses first, we first draw a Venn diagram of A∩B, which we have from a previous example:
A∩B
Now we take the complement of A∩B by shading in the white area in the diagram above (remember that the complement is everything that is NOT in A∩B!):
So we get the following Venn diagram for (A∩B)′:
(A∩B)′
This is the same Venn diagram we got for A′∪B′.
So we have proved DeMorgan’s First Law: A′∪B′=(A∩B)′, because the Venn diagrams of each side of the equation are identical.
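(Optional computational aside: both of DeMorgan's Laws are easy to spot-check on the concrete sets used earlier in this lecture — remembering that a passing example only illustrates the laws, while the Venn-diagram argument above is what proves them.)

```python
U = {1, 2, 3, 4, 5, 6, 7, 8, 9}
A, B = {1, 2, 3, 4, 5}, {2, 4, 6, 8}

def comp(S):
    # Complement relative to the universal set U.
    return U - S

assert comp(A) | comp(B) == comp(A & B)  # DeMorgan's First Law:  A′∪B′ = (A∩B)′
assert comp(A) & comp(B) == comp(A | B)  # DeMorgan's Second Law: A′∩B′ = (A∪B)′
print("both laws hold for this example")
```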
Drawing Venn Diagrams of the Null and Universal Sets
Sometimes you may need to draw Venn diagrams of the Null Set Ø or the Universal set U. To draw a Venn diagram of the null set, we need only draw any Venn diagram without any shading. Below are several examples of different Venn diagrams of the null set:
Ø
Ø
Ø
Ø
To draw a Venn diagram of the Universal set, we shade in everything. Below are several different examples of Venn diagrams of the universal set:
U
U
U
U |
189549 | https://www.brighamandwomens.org/radiation-oncology/brachytherapy | skip to Cookie NoticeSkip to contents
Header Skipped.
Home
Departments & Services
Radiation Oncology
Treatment Approaches
Brachytherapy
Back to Treatment Approaches
Brachytherapy
Brachytherapy is derived from the Greek word “brachy,” which means short or small. Brachytherapy uses small radioactive isotopes or seeds placed close to the tumor.
What is brachytherapy, and how does it work?
Brachytherapy is a treatment option that uses small radioactive seeds to deliver radiation to tumors. It can deliver radiation either deep within the body (for example, to gynecological and prostate cancers) or to a superficial surface such as the skin.
What are some advantages of brachytherapy?
The treatment is highly localized, so radiation exposure to nearby normal tissues is limited.
Brachytherapy uses isotopes with different energies that are able to penetrate tissues to varying depths.
Treatment planning and delivery has become more sophisticated over the past decade with advances in image guidance, surgery, and medicine.
What type of imaging may be used with brachytherapy?
Our gynecologic, prostate cancer and general brachytherapy programs routinely use real-time guidance via one or more of the following imaging methods, in our state-of-the-art suite:
Ultrasound
CT scan
Magnetic Resonance Imaging (MRI)
What does brachytherapy help treat?
Cancer (such as prostate, gynecologic, and other cancers)
Noncancerous proliferative diseases (keloids, coronary artery in-stent restenosis, peripheral vascular disease)
What is prostate brachytherapy?
Prostate brachytherapy places radioactive sources directly within the prostate gland. It can be beneficial whether you have early, intermediate or high-risk disease. It can also be beneficial if you have prostate cancer that recurs after radiation therapy. It can be used with or without external beam radiotherapy.
Implants can be permanent or temporary:
Permanent seed implant: uses a very low dose rate (vLDR) iodine or palladium source. Seeds are implanted under ultrasound guidance. Afterwards, seed placement is confirmed by CT and/or MRI.
Temporary implant: uses a high dose rate (HDR) technique with an iridium source. The source travels through catheters inserted under ultrasound guidance.
MRI can allow the radiation oncologist to precisely localize tumors within the prostate, so that ablative doses can be given directly to the tumor while sparing surrounding tissues.
What is gynecologic brachytherapy?
Usually an outpatient procedure
Commonly used to treat endometrial and cervical cancer
Uses an iridium seed that travels through an applicator for a treatment time that ranges from 5 to 15 minutes
Is treatment different for gynecologic patients who have been treated with a hysterectomy?
For patients treated with hysterectomy, vaginal cylinder brachytherapy with or without external beam radiotherapy may be recommended to reduce the risk of recurrence at the vaginal cuff. This outpatient treatment is delivered 2 days per week for a total of 2-6 treatments.
For patients who have not had a hysterectomy, tandem-based brachytherapy application is performed under general anesthesia. The total number of treatments may vary from 3-5, either as an outpatient (2 days per week), or inpatient (3-4 day hospitalization).
Select patients with gynecologic cancer may require interstitial brachytherapy, where the radiation oncologist places catheters directly within and surrounding a tumor. Interstitial brachytherapy is performed under image-guidance using a combination of ultrasound, CT and/or MRI. These procedures are performed under anesthesia and require an inpatient hospitalization. At the time of treatment, an iridium source is inserted through each individual catheter to deliver a highly conformal brachytherapy dose. Interstitial brachytherapy is given twice daily for a treatment time of 10-15 minutes.
What is surface applicator brachytherapy?
Surface applicator brachytherapy can be an excellent treatment option for patients with certain cutaneous malignancies or medically refractory benign conditions. This form of treatment involves creating a customized applicator that is specifically designed to maximize dosimetric coverage to potentially geometrically complex lesions. After CT radiation treatment planning, brachytherapy is then administered through the applicator on an outpatient basis.
What are some other brachytherapy treatments?
Other treatments covered on the general brachytherapy service include coronary artery radiation therapy for in-stent restenosis, brachytherapy for peripheral vascular disease, radioembolization for select liver malignancies, esophageal brachytherapy, and endobronchial brachytherapy.
Innovative Applications
What innovative research is happening at Brigham and Women’s Hospital?
Our department is constantly exploring innovative uses of advanced technologies to improve patient outcomes with brachytherapy.
Optimizing Brachytherapy Delivery with MRI-Guidance for Gynecologic Cancer
Lead physicist: Robert Cormack, PhD
Image-guided brachytherapy for gynecologic cancer offers significant potential to improve tumor control and increase cure rates. CT and MR-based brachytherapy planning also reduces the risk of bladder or bowel toxicity. With MRI guidance at the time of the procedure, we are developing novel methods to track catheter placement and positioning for real-time evaluation of the HDR brachytherapy dose. We are also developing novel MRI sequences to identify residual tumor in collaboration with the Surgical Planning Laboratory within the Advanced Multimodality Image Guided Operating (AMIGO) suite.
Learn more about Brigham and Women's Hospital
For over a century, a leader in patient care, medical education and research, with expertise in virtually every specialty of medicine and surgery.
189550 | https://www.studysmarter.co.uk/explanations/english/rhetoric/illustration/ |
Illustration
School buses come in a lot of colors, not just yellow. This is an interesting and very possibly true statement, but some illustration would help solidify the idea in the mind of the reader. Why? Because an illustration is another way of saying an example, and examples give details and help explain a claim. Illustration can help to prove that a statement is true or merely enrich and clarify someone's position.
StudySmarter Editorial Team
Last Updated: 19.08.2022
Published at: 02.07.2022
12 min reading time
Definition and Purpose of Illustration
Illustration is a basic rhetorical mode that adds detail to a claim or thought. The reason it is so common in rhetoric is that a statement rarely provides all the context needed to interpret it. Illustrations provide that context.
A rhetorical mode is a way to organize speech or writing. A basic rhetorical mode is one that doesn't require taking an intimate look at the subject. You can illustrate something without having to know it through and through, and so it's a basic rhetorical mode. Complex rhetorical modes, on the other hand, do require an in-depth knowledge of the subject. Some examples of complex rhetorical modes include cause and effect, process analysis, and description.
Illustration is another way of saying "example" because you can use an object or idea to illustrate—or provide an example of—another object or idea. An illustration can be used to either clarify a thought or to support a stance.
Illustration Used to Clarify a Thought
The following statement would be enhanced by an illustration.
Walking around empty malls is an almost otherworldly experience, like living in a memory.
In this example, an illustration would be helpful to enrich the idea. If the writer provided a first-hand account of their experiences in a mall, for example, it would provide helpful emotional context that might further engage the reader. The writer could also provide images of empty malls, which would both help to illustrate “empty malls” and also to engage the reader’s own sense of mystery or nostalgia.
Fig. 1 - Empty Mercury Mall. This is a literal illustration that exemplifies the statement above.
Statements that would benefit from illustration beg for a story to be told. They have a way of making a reader ask for more.
Illustration Used to Support a Stance
Here's something that could also use an illustration:
Butterflies aren’t as common around here as they used to be.
In this example, an illustration would be helpful to prove a point. Examples would be great, including first-hand or even second-hand accounts. Obviously, verifiable evidence would also add helpful detail.
Verifiable evidence means documented proof by which something can be validated. Verifiable evidence is the gold standard for details in essays because these details illustrate an empirical picture of a situation. In our example, our situation is a lack of butterflies in the area.
Synonyms for Illustration
As previously mentioned, a synonym for illustration in the rhetorical sense is example. Some other ideas related to illustration are description, evidence, and anecdotes. A writer may use all three of these devices to explain an idea to their audience.
Description
A description is similar to illustration, but it is more about detailing things in terms of a story, such as how things look and smell.
A description narrows the mental distance between you and the subject described.
Evidence
Evidence is a kind of illustration used in argumentation to provide support for an argument.
Evidence supports an argument with facts.
Anecdote
An anecdote is a method to illustrate what happened. Because it is a story, it is a kind of description, rich with sensory details.
An anecdote is a short, informal, and descriptive personal story.
Types of Illustration in Writing
Nearly everything that an illustration accomplishes can be considered an example. This is why the terms are sometimes used interchangeably. Here are some of the ways a writer might illustrate their ideas.
Illustration Using Common Examples
Common examples exist in the “public consciousness.” Things like famous events, historical figures, or movies fall under the public consciousness category.
It was the weirdest thing. I was walking, and I swear I saw the snowman in Sandy’s yard wave to me. It was like Frosty the Snowman or something.
This common example helps to illustrate the surreal, even magical experience that the writer experienced. The thought (which is their experience of seeing the snowman) is enriched by a common and recognizable illustration (Frosty the Snowman) that likens the writer’s experience to a magical experience the reader probably knows.
This type of illustration is good for telling a story, or quickly providing amusement or context to an argument, but it is not a strong form of illustration in formal arguments.
Illustration Using First-Hand Evidence
First-hand evidence is something that someone witnessed themselves.
This mall used to be the most happening place in town. When I was a young man, Gerald Ford came through here and gave a speech after cutting the red tape at its grand opening.
This example of a firsthand account is close to becoming an anecdote. An anecdote is a short, usually personal story that describes an event. It will often capture the flavor of a place in time, and sometimes will be used to make a point about then versus now. An anecdote is a more complex mode of rhetoric in which the descriptive capabilities of a storyteller are put to the test.
Biographies, histories, and courts of law champion first-hand accounts.
Illustration Using Second-Hand Evidence
Second-hand evidence is something that someone heard from someone else. It can also be something that someone read about or heard about, although it lacks any citation.
Olympic National Park is like nowhere else in the United States. My dad went there, and he saw all these cool jungle trees and giant yellow slugs.
This second-hand account provides detail about the thought “Olympic Park is like nowhere else in the United States.”
Second-hand evidence is not as strong as first-hand evidence because the information is further from the source. The writer of this example could be misremembering what their dad said, for instance. Or, the writer might be using their dad’s illustration out of context.
This mall used to be the most happening place in town. I read in an article once that JFK gave a speech here.
This is second-hand evidence from an article that the writer once read. It is not as strong a piece of evidence as the article itself, because the article is 1. closer to the event it illustrates, and 2. contains greater and more accurate details than a memory of it.
If you remember the last section, for instance, Gerald Ford gave the speech, not JFK.
Illustration Using Verifiable Evidence
Verifiable evidence includes things such as photos and research, which directly support a point. A first-hand account can become verifiable evidence if it is supported by enough corroborating evidence, including other first-hand accounts.
Ultimately, though, what qualifies as verified, valid evidence will be up to the reader. This is why, when writing an essay or paper, citations are critically important. In a timed essay, citing the focal text or passage is a strong way to prove your point. This is also why an analyst of a piece of literature will spend the majority of their time examining the text itself: it contains much of the analyst’s verifiable evidence.
Alucard drinks blood. On page 123, it says, “He drank her blood like sipping soda out of a can.”
This passage illustrates the point—that Alucard drinks blood—directly from the source text.
Fresno gets very hot. Its median yearly high temperature is among the top 1% of cities in America, according to a 2021 report in Oh My Gosh That’s Crazy Magazine.
Statistics like this one (although it's made up) are a great source of verifiable evidence. Most statistics come from studies that publish their findings publicly, so anyone can double-check the numbers.
How Illustration Functions in Sentences and Essays
Essays are built from building blocks: sentences. Every time you complete another sentence, you add another brick to the wall. Your goal is a firm construction so that your thesis doesn’t come crashing down around you.
As we’ve discussed, illustrations are a key way for a writer to provide details about their thoughts and claims. That’s all well and good, but how should a writer connect their “thoughts and claims” to their illustrations?
With transitions.
Transitions are a basic way to provide context to thoughts by linking them to other related thoughts. Transitions, which use dozens of words such as “therefore” and “for example,” can stitch together sentences, paragraphs, and chapters to create vast illustrative anecdotes or logical arguments.
If sentences are the building blocks for your essay, then transitions are the mortar. Use transitions to bind together your various points and examples.
How to Write Good Illustrations
Effective illustrations are consistent with your audience and your tone. Use illustrations when they provide necessary context or logical support for your points. Anecdotes, for example, are far more helpful in casual essays and histories. They can be used in longer papers and books where many forms of context might be helpful.
In your essay, use stronger forms of illustration. Use evidence or first-hand accounts from reputable sources who know more about a topic than you do. In timed essays, you will want to draw on the passage as much as you can to support your ideas.
How to Analyze Illustrations in a Casual Account
First, identify the kind of work you are analyzing. If it is a casual account, don’t worry about evidence. Analyze the emotional effectiveness of the illustrations, such as how well they communicate the feelings of the scene or idea. Analyze how vibrant a picture the illustrations paint. The more details, the better.
Casual accounts can include short passages, short stories, memoirs, and other media that don’t present a thesis.
Here is a passage from Mark Twain's A Connecticut Yankee in King Arthur's Court (1889) and an example analysis.
When I came to again, I was sitting under an oak tree, on the grass, with a whole beautiful and broad country landscape all to myself—nearly. Not entirely; for there was a fellow on a horse, looking down at me—a fellow fresh out of a picture-book. He was in old-time iron armor from head to heel, with a helmet on his head the shape of a nail-keg with slits in it; and he had a shield, and a sword, and a prodigious spear; and his horse had armor on, too, and a steel horn projecting from his forehead, and gorgeous red and green silk trappings that hung down all around him like a bedquilt, nearly to the ground." (A Word of Explanation)
Fig. 2 - Oak Tree in Peaceful Landscape
This is the moment that Hank Morgan wakes up in Camelot. Here, Hank illustrates the idyllic and peaceful surroundings, which are in sharp contrast to modern-day Connecticut. This illustration paints a place not crowded with industry. Twain is also very purposeful in how he writes Hank’s description of the knight. Rather than describing the knight’s armor in terms of its actual pieces, Hank describes it in terms of things he knows: picture-books, nail-kegs, and bedquilts. Mark Twain’s illustration creates a contrast between Hank’s world and the knight's world. These illustrations set up the humor and satire to follow.
How to Analyze Illustrations in an Essay
If you are analyzing an essay or report, consider the objectivity of its sources. In other words, does the essay or report use verified evidence to support its points? If not, you should analyze any flaws of the illustrations. Compare the illustrations to the thesis. Do they directly support it? If not, search for logical fallacies and other errors in the essay or report.
Illustration - Key Takeaways
Illustration is a basic rhetorical method that adds detail to a claim or thought.
Methods of illustration might include:
Common examples
First-hand evidence
Second-hand evidence
Verifiable evidence
Use transitions to effectively bind your illustrations to their claim or thought.
Effective illustrations are consistent with your audience and your tone.
In your essay use stronger forms of illustration, such as first-hand accounts.
References
Fig. 1 - Empty Mercury Mall. Image by Mx. Granger, licensed under the Creative Commons CC0 1.0 Universal Public Domain Dedication.
Fig. 2 - Oak Tree. Image by Ymblanter, licensed under the Creative Commons Attribution-Share Alike 4.0 International license.
Flashcards in Illustration
Q: "50% of all income on the Mars Colony is not taxed, reports Earth Weekly." Would this statement benefit from illustration?
A: Trick question. This piece of evidence is the illustration.
Q: "My boss is a penny-pincher. He's a real Mr. Potter type, like from It's A Wonderful Life." What kind of illustration is the underlined part?
A: A common example.
Q: If an illustration would be helpful to a thought or claim, it would likely come in the form of a(n) _____.
A: An example.
Q: Does an illustration require evidence?
A: No, it can be purely anecdotal.
Q: Illustration is a basic rhetorical mode that adds _____ to a claim or thought.
A: Details.
Q: Can illustrations clarify a thought?
A: Yes.
Frequently Asked Questions about Illustration
What is illustration in writing?
Illustration is a basic rhetorical method that adds detail to a claim or thought.
What is an example of illustration in a sentence?
"This mall used to be the most happening place in town. When I was a young man, Gerald Ford came through here and gave a speech after cutting the red tape at its grand opening."
This is an example of first-hand evidence being used to illustrate a thought.
Can illustrations include words?
Yes, illustrations can include words in a rhetorical sense. "Illustration" is used in many fields and contexts. Illustrations can also be drawn pictures.
How do you write a good illustration in a paragraph?
Illustrations should provide necessary context or logical support for your points. Effective illustrations are consistent with your audience and your tone.
What kind of rhetorical mode is "illustration"?
Illustration is a basic rhetorical mode. The reason that illustration is so common in rhetoric is that a statement rarely provides all the context needed to interpret it. Illustrations provide that context.
189551 | https://math.stackexchange.com/questions/2896396/how-to-find-lines-of-invariant-points | matrices - How to find lines of invariant points? - Mathematics Stack Exchange
How to find lines of invariant points?
Asked 7 years, 1 month ago
Modified 7 years, 1 month ago
Viewed 8k times
Every time I try a question on this topic I get it wrong.
My textbook says:
Invariant points satisfy $B\begin{pmatrix}u\\v\end{pmatrix}=\begin{pmatrix}u\\v\end{pmatrix}$
Re-write this as a system of equations.
Check whether both equations are in fact the same.
If so, they give a line of invariant points.
So I tried this question using that method (and the method needed for finding invariant lines):
For [this] matrix, find any lines of invariant points and any other invariant lines through the origin.
$$\begin{pmatrix}3 & -2\\ 4 & -3\end{pmatrix}$$
What I did was: $\begin{pmatrix}3 & -2\\ 4 & -3\end{pmatrix}\begin{pmatrix}u\\ mu\end{pmatrix}=\begin{pmatrix}u\\ v\end{pmatrix}=\begin{pmatrix}3u-2mu\\ 4u-3mu\end{pmatrix}$
These equations are not the same, so no lines of invariant points.
Then, to find invariant lines:
$\begin{pmatrix}3 & -2\\ 4 & -3\end{pmatrix}\begin{pmatrix}u\\ mu\end{pmatrix}=\begin{pmatrix}u'\\ v'\end{pmatrix}=\begin{pmatrix}3u-2mu\\ 4u-3mu\end{pmatrix}$
$4u-3mu=m(3u-2mu)$
$2m^2-6m+4=0\Rightarrow m=1$ or $m=2$
So the invariant lines should be $y=2x$ or $y=x$. As far as I know this is sort of right, except $y=x$ is apparently also a line of invariant points. What did I do wrong?
matrices
geometry
linear-transformations
edited Jun 12, 2020 at 10:38 by CommunityBot
asked Aug 27, 2018 at 16:51 by Chx
If $m=1$ then $3u-2mu=4u-3mu$. – Sort of Damocles, Aug 27, 2018 at 16:54
Oh. So I can't rule out the equations despite the fact that they don't appear the same? – Chx, Aug 27, 2018 at 16:56
Right. Instead, you should set them equal to each other and see what solutions you get. In this case, you have $u(3-2m)=u(4-3m)$, which evidently has solutions if (a) $u=0$ or (b) $3-2m=4-3m$. You can discard the first, and from the second you can solve to get $m=1$. – Sort of Damocles, Aug 27, 2018 at 17:47
That was an accident, sorry. Will fix. – Chx, Aug 27, 2018 at 18:48
"The equations are the same" doesn't mean that they're identical, but that they're equivalent. – amd, Aug 27, 2018 at 18:50
1 Answer
Let
$$B=\begin{pmatrix}b_{11} & b_{12}\\ b_{21} & b_{22}\end{pmatrix},\qquad p=\begin{pmatrix}u\\ v\end{pmatrix}$$
If $p$ is an invariant point with respect to $B$, then
$$Bp=p\tag{1}$$
which is equivalent to
$$\begin{cases}b_{11}u+b_{12}v=u\\ b_{21}u+b_{22}v=v\end{cases}$$
and
$$\begin{cases}(b_{11}-1)u+b_{12}v=0\\ b_{21}u+(b_{22}-1)v=0\end{cases}\tag{2}$$
To find invariant points, you solve $(2)$ for $u$ and $v$. In some cases, the solution is not a single point, but a line. If the coefficient matrix of $(2)$ is all zeros (that is, if $B$ is the identity matrix), then all points are invariant.
Let's examine $B=\begin{pmatrix}1 & 4\\ 2 & -1\end{pmatrix}$. Substituting into $(2)$ we get
$$\begin{cases}(1-1)u+4v=0\\ 2u+(-1-1)v=0\end{cases}\iff\begin{cases}4v=0\\ 2u-2v=0\end{cases}$$
The only solution to this is $u=v=0$. So, this matrix has an invariant point at the origin.
Let's examine $B=\begin{pmatrix}3 & -2\\ 4 & -3\end{pmatrix}$. Substituting into $(2)$ we get
$$\begin{cases}2u-2v=0\\ 4u-4v=0\end{cases}$$
If we divide the upper equation by 2, or the lower one by 4, we get the same equation: $u-v=0$. Thus, this matrix has a line of invariant points, $u=v$.
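The distinction between an invariant line and a line of invariant points can also be checked numerically. The following is my own sketch (not part of the original answer), assuming the matrix in question is $B=\begin{pmatrix}3 & -2\\ 4 & -3\end{pmatrix}$; it confirms that $y=x$ is a line of invariant points while $y=2x$ is merely an invariant line:

```python
# Sanity check (author's sketch, not from the original answer).
# Assumes the matrix in question is B = [[3, -2], [4, -3]].
# A line y = m*x through the origin is an invariant line when B maps the
# direction vector (1, m) to another point on the same line; it is a line
# of invariant *points* when B maps (1, m) to itself.

B = [[3, -2],
     [4, -3]]

def image_of(m):
    """Apply B to the direction vector (1, m) of the line y = m*x."""
    x = B[0][0] * 1 + B[0][1] * m
    y = B[1][0] * 1 + B[1][1] * m
    return x, y

for m in [1, 2]:
    x, y = image_of(m)
    on_line = (y == m * x)      # image still lies on y = m*x
    fixed = (x, y) == (1, m)    # image equals the original point
    print(f"y = {m}x: invariant line: {on_line}, invariant points: {fixed}")
```

For $m=1$ the image of $(1,1)$ is $(1,1)$ itself, matching the line of invariant points $u=v$; for $m=2$ the image of $(1,2)$ is $(-1,-2)$, which still lies on $y=2x$, so that line is invariant without being fixed pointwise.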
edited Aug 27, 2018 at 20:42
answered Aug 27, 2018 at 17:51 by Nominal Animal
189552 | https://www.climate.gov/news-features/blogs/enso/impacts-el-nino-and-la-nina-hurricane-season | Published Time: Fri, 30 May 2014 08:00:00 -0400
Impacts of El Niño and La Niña on the hurricane season | NOAA Climate.gov
By Gerry Bell
Published May 30, 2014
With the approach of the 2014 hurricane season and the strong potential for El Niño to develop during the next few months, the effect that El Niño has on both the Atlantic and Pacific hurricane seasons is worth exploring. The hurricane impacts of El Niño and its counterpart La Niña are like a see-saw between the Pacific and Atlantic oceans, strengthening hurricane activity in one region while weakening it in the other.
Typical influence of El Niño on Pacific and Atlantic seasonal hurricane activity. Map by NOAA Climate.gov, based on originals by Gerry Bell.
Simply put, El Niño favors stronger hurricane activity in the central and eastern Pacific basins, and suppresses it in the Atlantic basin (Figure 1). Conversely, La Niña suppresses hurricane activity in the central and eastern Pacific basins, and enhances it in the Atlantic basin (Figure 2).
Typical influence of La Niña on Pacific and Atlantic seasonal hurricane activity. Map by NOAA Climate.gov, based on originals by Gerry Bell.
These impacts are primarily caused by changes in the vertical wind shear, which refers to the change in wind speed and direction between roughly 5,000-35,000 ft. above the ground. Strong vertical wind shear can rip a developing hurricane apart, or even prevent it from forming.
ENSO perturbs tropical and subtropical atmospheric circulation
During El Niño, the area of tropical Pacific convection and its associated Hadley circulation expand eastward from the western Pacific, sometimes extending to the west coast of South America. (A tutorial on El Niño and La Niña can be found at the NOAA Climate Prediction Center website.) At the same time, the equatorial Walker circulation is weaker than average.
These conditions produce an anomalous upper-level, ridge-trough pattern in the subtropics, with an amplified ridge over the subtropical Pacific in the area north of the enhanced convection, and a downstream trough over the Caribbean Sea and western tropical Atlantic. Over the central and eastern Pacific, the enhanced subtropical ridge is associated with weaker upper-level winds and reduced vertical wind shear, which favors more hurricane activity.
Over the Atlantic basin, the amplified trough is associated with stronger upper-level westerly winds and stronger lower-level easterly trade winds, both of which increase the vertical wind shear and suppress hurricane activity. In addition to enhanced vertical wind shear, El Niño suppresses Atlantic hurricane activity by increasing the amount of sinking motion and increasing the atmospheric stability.
La Niña has opposite impacts across the Pacific and Atlantic basins. During La Niña, the area of tropical convection and its Hadley circulation is retracted westward to the western Pacific and Indonesia, and the equatorial Walker circulation is enhanced. Convection is typically absent across the eastern half of the equatorial Pacific.
In the upper atmosphere, these conditions produce an amplified trough over the subtropical Pacific in the area north of the suppressed convection, and a downstream ridge over the Caribbean Sea and western tropical Atlantic. Over the central and eastern subtropical Pacific, the enhanced trough is associated with stronger upper-level winds and stronger vertical wind shear, which suppress hurricane activity. Over the Atlantic basin, the anomalous upper-level ridge is associated with weaker upper- and lower- level winds, both of which reduce the vertical wind shear and increased hurricane activity. La Niña also favors increased Atlantic hurricane activity by decreasing the amount of sinking motion and decreasing the atmospheric stability.
ENSO phases interact with other climate patterns that influence hurricanes
Another prominent climate factor to influence Atlantic hurricane activity is the Atlantic Multi-Decadal Oscillation (AMO) (Goldenberg et al. 2001, Bell and Chelliah 2006). The warm phase of the AMO is associated with high-activity eras for Atlantic hurricanes, such as has been in place since 1995. Conversely, the cold phase of the AMO is associated with low-activity eras (such as the period 1971-1994).
The warm phase of the AMO reflects warmer SSTs across the Atlantic hurricane Main Development Region (MDR, Figure 3). A key atmospheric feature of this pattern is a stronger West African monsoon, which produces a westward extension of the upper-level easterly winds (near 35,000 ft), along with weaker easterly trade winds in the lower atmosphere (near 5,000 ft).
Seasonal climate patterns associated with active hurricane seasons. La Niña contributes to reduced vertical wind shear in the Main Development Region for hurricanes in this basin. Map by NOAA Climate.gov, based on originals by Gerry Bell.
This wind pattern is very conducive to increased Atlantic hurricane activity, partly because it results in weaker vertical wind shear. The weaker trade winds also contribute to a more conducive structure (i.e. increased cyclonic shear) of the mid-level (near 10,000 ft) African Easterly Jet (AEJ), favoring hurricane development from tropical cloud systems (i.e. easterly waves) moving westward from Africa. At the same time, these wind patterns are associated with a more northward push into the MDR of deep tropical moisture and unstable air, each of which also favors stronger hurricanes.
The hurricane activity in a given season often reflects a combination of the multi-decadal signals and ENSO. During an Atlantic high-activity era, El Niño typically results in a near-normal season, and La Niña produces an above-normal season. During an Atlantic low-activity era, El Niño typically results in a below-normal season and La Niña results in a near-normal season (Bell and Chelliah 2006).
Similarly for the central and eastern Pacific basins, the combination of a low-activity era and El Niño often produces a near-normal season, while La Niña produces a below-normal season. For a Pacific high-activity era, El Niño often produces an above-normal season, while La Niña produces a near-normal season.
This year the expectation that El Niño will develop, combined with the multi-decadal climate signals results in a forecast for a near- or below-normal season in the Atlantic, and a near- or above-normal season in both the central and eastern Pacific. For more detail, check out NOAA’s 2014 Hurricane Season Outlook.
References
Bell, G. D., and M. Chelliah, 2006: Leading tropical modes associated with interannual and multi-decadal fluctuations in North Atlantic hurricane activity. J. Climate, 19, 590-612.
Goldenberg, S. B., C. W. Landsea, A. M. Mestas-Nuñez, and W. M. Gray, 2001: The recent increase in Atlantic hurricane activity: Causes and implications. Science, 293, 474-479.
Gray, W. M., 1984: Atlantic seasonal hurricane frequency: Part I: El Niño and 30-mb quasi-bienniel oscillation influences. Mon. Wea. Rev., 112, 1649-1668.
--Emily Becker, lead reviewer
Comments
Definitions
A valuable change would be to hot link Hadley and Walker circulation descriptions to their names. These and other weather-specific terms are of interest to those who want more insight on their effects. Thanks, Dan
Submitted by Daniel Sutton on Tue, 06/03/2014 - 10:42

RE: Definitions
Thanks for the suggestion. We plan to build a glossary in coming months. Until then, we'll look for a good link for those concepts.
Submitted by rebecca.lindsey on Thu, 06/05/2014 - 14:55, in reply to Daniel Sutton

RE: Definitions
Thank you!
Submitted by Malika Khodjieva on Mon, 03/30/2020 - 16:22, in reply to Daniel Sutton

ENSO and tornadoes
Dear Gerry, I've just read your 'Impacts of El Niño and La Niña on the hurricane season', which I found really interesting. Now the question came up: will there be a similar blog on 'Impacts of El Niño and La Niña on the tornado season'? Best regards, Kurt
Submitted by Kurt Baldenhofer on Wed, 06/04/2014 - 09:11

Hurricanes
Thank you very much for this page and information. It is very important for all of us that live in an area of hurricanes. I live in Quintana Roo, Mexico.
Submitted by Roberto Rivas on Wed, 06/04/2014 - 13:21

El Nino-La Nina
How will the increased Pacific hurricane activity affect the temperatures and precipitation for the eastern Rocky Mountain region this summer? Higher temperatures and stronger thunder/rain storm activity with likelihood of hail?
Submitted by Kris Brooks on Fri, 06/06/2014 - 15:29

Wonderful Site
Keep up the good work. Was wondering how you feel about the Fukushima radiation leak and the heating up of the Pacific on the El Nino and La Nina predictions!
Submitted by Bill on Fri, 06/06/2014 - 16:09

climate.gov
NOAA is my number one source for weather and climate particulars. I don't believe there is another or better source. For the past 3 to 4 years I have followed the El Nino page that has informed me of the third part of the El Nino phenomenon, called the El Nino Neutral phase of the El Nino/SO.
Submitted by j.long on Sun, 06/08/2014 - 15:42

El Nino's
I wonder about the history of the El Nino and La Nina weather patterns. Is this just a recent phenomenon, or has it been going on for 10,000 years? Would it be possible to build a model that takes us back in weather time to see how they affected the landscape and animal kingdoms? Personally I like El Nino! It means rain wherever I've been during their process ~ Los Angeles in wetter days and now Billings, MT ~ experienced two since being here and does it ever make for great birding wetlands! And this year is no different ~ snow in the Beartooths 200% of normal and now lots of rain as well as hail and thunderstorms. Did the El Nino effect have anything to do with our winter? And come migration time, do birds make allowances for these weather patterns?
Submitted by Darwin26 on Tue, 06/10/2014 - 02:33

hurricanes
You are doing an extraordinary job by giving us all this info and by doing incredible amounts of additional research, year to year. This will give the population in hurricane-prone areas more insight into the weather patterns and thus make these areas safer for their inhabitants. Keep up this extremely valuable work!
Submitted by doris neckelmann on Sun, 06/15/2014 - 10:49

ENSO impact on North West/South West Pacific Typhoons
Excellent blog - loving it - especially the high-res graphics and insightful commentary. Are you guys going to do a follow-up to cover ENSO's impacts on genesis frequency, intensity, track direction and season length of the North/South West Pacific typhoons/cyclones? Many thanks!
Submitted by golfy on Wed, 09/02/2015 - 03:16

el ninas
How will the cooler waters affect the Philippines/typhoons? Thanks.
Submitted by Dennis Gramm on Fri, 05/20/2016 - 20:58

Hurricanes/Tornados
Would the east coast of South America be susceptible to hurricanes or tornados during El Nino or La Nina?
Submitted by Sierra on Sun, 10/02/2016 - 19:42

Got it!
Information gathered on El Niño and La Niña was successful.
Submitted by Ken lee on Thu, 06/02/2022 - 15:34

Dear climate.gov…
Dear climate.gov administrator, your posts are always well-referenced and credible.
Submitted by Kaylee Bruner on Sat, 06/10/2023 - 17:20

What impact could the ENSO have had on the historic 2017 Hurrica
El Niño would have stronger hurricane activity in the central and eastern Pacific basins, and suppresses it in the Atlantic basin, wind speed and direction between roughly 5,000-35,000 ft, and it will destroy everything in its path.
Submitted by Roberto Jacobo on Wed, 09/13/2023 - 11:13

2014 Typo
First sentence, 2014? "With the approach of the 2014 hurricane season…."
Submitted by Ying on Thu, 05/30/2024 - 12:06

2024
Stay tuned for an updated post on ENSO and hurricanes in the last week of June.
Submitted by Nathaniel.Johnson on Mon, 06/03/2024 - 21:00, in reply to Ying

This post was written in…
This post was written in spring 2014, ahead of the 2014 hurricane season.
Submitted by emily.becker on Mon, 06/03/2024 - 11:37

Thank you so much. The time…
Thank you so much. The time is NOW to prepare. Food, water, cash, batteries, flashlights, pet preparedness, having a family plan, radios etc. Go to FEMA.GOV for a list of necessities. Taking photos of ALL property prior to a severe storm is very important. Have an evacuation plan handy as well. Stay safe. Be smart. Prepare now.
Submitted by Anonymous on Wed, 06/05/2024 - 22:59
Related Content
NEWS & FEATURES
How does NOAA see the 2024 Atlantic hurricane season shaping up?
06/27/2024
How have changing ENSO forecasts impacted Atlantic seasonal hurricane forecasts for 2017?
08/24/2017
2011 Atlantic Hurricane Outlook
05/20/2011
MAPS & DATA
El Niño-Southern Oscillation - Indicators and technical discussions
03/04/2015
SST - ENSO Region, Monthly Difference from Average
11/18/2016
Historical Hurricane Tracks - GIS Map Viewer
01/14/2015
TEACHING CLIMATE
Toolbox for Teaching Climate & Energy
02/26/2018
CLIMATE RESILIENCE TOOLKIT
Southwest Climate Outlook
09/01/2016
Preparing for La Niña
10/18/2016
Risks of Seasonal Climate Extremes Related to ENSO
02/25/2016
Return to top
Stay Connected
Follow Climate.gov
Facebook
BlueSky
Twitter
Instagram
YouTube
Main Menu
Home
News & Features
Teaching Climate
Maps & Data
FAQs
Feeds
Sitemap
News & Features
News & Features Home
Beyond the Data Blog
Climate Case Studies
Climate Q&A
Climate Tech
Climate and …
Decision Makers Take 5
Decision Makers Toolbox
ENSO Blog
Event Tracker
Featured Images
Featured Videos
Features
News and Research Highlights
Understanding Climate
Maps & Data
Maps and Data Home
Climate Dashboard
Climate Data Primer
Data Snapshots
Dataset Gallery
Teaching Climate
Teaching Climate Home
Department of Commerce
NOAA
Privacy
Accessibility
FOIA
Information Quality
No-FEAR Act
Ready.gov
USA.gov |
189553 | https://www.themathdoctors.org/polygons-and-handshakes/ | Polygons and Handshakes – The Math Doctors
Polygons and Handshakes
September 7, 2020 March 2, 2024 / Probability / Counting / By Dave Peterson
We’ll spend the next couple weeks looking at various counting problems. This topic, called combinatorics, is often studied along with probability, but many of the topics we’ll see here feel more like geometry problems! Here, we’ll be counting the diagonals of a polygon, and handshakes between people at a party.
Counting diagonals
We’ll start with a question from 1998 about counting the diagonals of a polygon:
Counting Diagonals
How many diagonals can be drawn for a polygon with 15 sides? How many diagonals can be drawn for a polygon with n sides?
I tried to find a pattern in how many diagonals there are for a polygon with 4 sides, 5 sides, etc., but I couldn't extract a pattern. Please show me a method for this question.
Here are examples of polygons with smaller numbers of sides, showing sides in blue and diagonals in red:
Molly probably didn’t go as far as 9 sides; but as I count here, for n = 3, 4, 5, 6, 7, 8, 9, the number of diagonals is 0, 2, 5, 9, 14, 20, 27. If you made a table of these numbers and applied some of the methods we’ve discussed previously for finding patterns in sequences, you could probably find one – but you wouldn’t be sure it’s the right pattern, and might not find a formula. (By the way, my pictures are all of regular polygons, but nothing will change if we make them irregular, except that if the polygon were not convex, some diagonals might go outside. The counts wouldn’t change.)
Doctor Wilkinson answered, starting with Molly’s very appropriate plan, counting and then looking for patterns:
Looking for a pattern with polygons of 4 sides, 5 sides, etc. was an excellent idea. Sometimes, though, it's pretty hard to see a pattern in a sequence of numbers. It may be better to look for the pattern in the whole process of counting the diagonals. That is, look for a systematic way of counting diagonals as you look at specific cases like 4 and 5.
So let's take another look at polygons with 4 and 5 sides.
For 4 sides, we have a quadrilateral ABCD. The diagonals are:
AC
BD
There's one diagonal containing A, one containing B, one containing C, and one containing D.
We’re looking not just for the numbers, but for a way to count that will lead to a formula. So we haven’t even counted the two diagonals yet, but just noticed that there is one from each vertex …
Let's leave it at that for a minute, and look at case 5.
Here we have a pentagon ABCDE. The diagonals are:
AC
AD
BD
BE
CE
Here we have two diagonals containing A, two containing B, two containing C, two containing D, and two containing E.
This way of describing it suggests the beginning of a pattern!
So for 4 sides there is one diagonal for each vertex and for 5 there are two. Why? Well, there is a diagonal from each vertex to each other vertex except the two adjacent vertices. For 4 vertices, there are three other vertices for each vertex, and two of them are adjacent, which leaves 1; for 5 vertices, there are four other vertices for each vertex, and two of them are adjacent, which leaves 2.
So for n vertices, there will be n – 3 diagonals through each.
Okay now, we're starting to get somewhere. For four vertices there is one diagonal containing each vertex, but we don't end up with four diagonals but just two. For five vertices there are two diagonals containing each vertex, but we don't end up with ten diagonals but just five. What's going on?
Aha! If you count one diagonal for every vertex of the quadrilateral, you're counting every diagonal twice, because every diagonal contains two vertices. And similarly for the pentagon. So you have to divide by two.
I hope this helps you with the general problem.
Without stating the actual answer, he has effectively given it, so that Molly should be able to write the formula now:
Diagonals = n(n − 3)/2
Applying this to small values of n, you should find the numbers I got above. But to be honest, when I wrote those numbers down, I wasn’t using this formula; I used the recursive formula we’ll see below, with which I could write each number based on the previous one. And then I checked that the actual count matched by actually seeing, for instance, that with n = 9, there were 6 lines coming from each of 9 vertices, for a total of (6)(9)/2 = 27.
Now, what do you get when n = 15? I’m not about to draw and count for that case!
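No drawing needed; the formula does it directly. Here's a quick Python sketch (mine, not from the original answer):

```python
def diagonals(n):
    # Closed form for an n-gon: n(n-3)/2
    return n * (n - 3) // 2

# Matches the hand counts for n = 3..9 listed above
print([diagonals(n) for n in range(3, 10)])  # [0, 2, 5, 9, 14, 20, 27]
print(diagonals(15))  # 90
```

So a 15-gon has 90 diagonals.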
Explicit and recursive formulas
A student in 1997 had been shown the formula that results from this thinking, and wanted more:
Diagonals of Polygons
My teacher showed us how to find diagonals using two different equations. I lost the sheet which I wrote them on. Do you know them? One of them might have been n(n-3)/2 and I am not sure of the other one, but it was called recursive.
Doctor Wilkinson answered this one too, first correcting the implied question (since finding diagonals would mean actually drawing or naming them), and quickly deriving the formula:
The question apparently is "How many diagonals does a polygon with n sides have?"
You have remembered the first formula correctly: it is n(n-3)/2. One way to see this is to notice that you can draw (n-3) diagonals from every vertex of the polygon. This is because there are (n-1) other vertices, but two of them are adjacent vertices and so don't count towards making diagonals. This seems to give n(n-3) diagonals, but this way of counting counts every diagonal twice since each diagonal connects to two vertices, so you have to divide by 2.
This, of course, is what we just did, stated fully.
By a recursive formula, we mean a way of expressing the answer for n vertices in terms of the answer for n-1 vertices. Suppose, for example, that you already know the answer for a polygon with n-1 vertices. Now if you add another vertex between two of the vertices of the original polygon, then all the diagonals of the original polygon will still be diagonals of the new polygon, and so will the side joining the two vertices that you added a new vertex between, and so will the line segments joining the new vertex to all the other vertices of the original polygon. (Got all that?) So if we let diag(n) be the number of diagonals for a polygon with n sides, we get the formula:
diag(n) = diag(n-1) + n - 3 + 1 or
diag(n-1) + n - 2
Here (for n = 6) we insert a new vertex into a pentagon, which adds 3 new diagonals and changes one side to a diagonal (all in purple):
In applying this method to make my list above, I first knew that for n = 5, Diag = 5; then for n = 6, I had to add 4 to that (6 – 2 = 4); each time, I added a number greater by 1 than the last one.
The first formula is better, since it actually gives you the answer. But sometimes it's easier to get a recursive formula first and use that to get an explicit formula (your first formula is an explicit one since you only need the number of vertices in the polygon to get the number of diagonals in that polygon). This is called "solving the recursion." Sometimes a recursive formula is the best you can do because there simply is no explicit formula.
We’ll be seeing below another way to think of it, summing a series; this is closely related to the recursive formula, as for example my count for n = 6 can be written as 2 + 3 + 4. But doing this all the way to n = 15 (or 42, or 100, as in other questions) would not be pleasant.
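Unrolling the recursion by hand to n = 15 would be tedious, but it's easy to check mechanically that the recursive and explicit formulas agree. A small Python sketch (mine, not from the original answer):

```python
def diag_recursive(n):
    # diag(3) = 0; each added vertex contributes (n - 2) new diagonals,
    # i.e. diag(n) = diag(n-1) + n - 2
    d = 0
    for k in range(4, n + 1):
        d += k - 2
    return d

def diag_explicit(n):
    return n * (n - 3) // 2

# The two formulas agree for every polygon size we try
assert all(diag_recursive(n) == diag_explicit(n) for n in range(3, 50))
print(diag_recursive(15))  # 90
```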
Handshakes all around: Three ways to get the formula
Our next question, from 2001, will help us move over to a different but related problem:
Handshakes and Polygon Diagonals
Dear Dr. Math,
I can't figure this one out!
If a polygon has 42 sides, how many diagonals does it have?
That’s a lot of sides; to answer, you’ll need some sort of formula.
Doctor Pete answered, starting with some leading questions like those we’ve seen:
Let's look at a hexagon, with 6 sides. Pick a vertex. How many diagonals come from that vertex? (Three.) How many vertices in a hexagon? (Six.)
So there should be 3 · 6 = 18 diagonals, but in fact if you count them there are only half as many. Why? Well, since a single diagonal joins exactly two vertices, for each vertex we counted the same diagonal twice.
Each vertex is the starting point for 3 “directed diagonals”:
Two of these correspond to one actual diagonal.
So look at a 42-gon.
How many diagonals come from a single vertex?
How many vertices are there in a 42-gon?
How many diagonals are in a 42-gon?
A diagonal can go to any vertex except the one it starts at and the two neighbors, so there are 39 destinations for each of the 42 starting points. This gives a total of 42⋅39=1638 “directed diagonals”; dividing by 2, we have 819 diagonals. We didn’t need an actual formula after all, did we?
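That count can be sanity-checked by brute force, listing every pair of vertices that isn't an edge (a quick sketch, not part of the original answer):

```python
from itertools import combinations

n = 42
# A pair of vertices forms a diagonal unless they are neighbors on the polygon
diags = [(a, b) for a, b in combinations(range(n), 2)
         if (b - a) % n != 1 and (a - b) % n != 1]
print(len(diags))  # 819
```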
Now Doctor Pete switched to a non-geometrical problem, which we’ll be focusing on for most of the rest of this post.
Handshakes per person
Let's look at something similar to this. If you have 100 people at a party, and everyone shakes hands with everyone else, how many handshakes take place? Well, pick a person. He shakes hands with 99 others. In fact, each person shakes hands with 99 others, so there should be 100 · 99 = 9900 handshakes; but again, it takes two people to shake hands, so this is double the correct amount, which is 4950 handshakes.
By now the pattern should be clear. If there are n people at a party, the number of handshakes is n(n-1)/2.
This is like counting, not the diagonals only, but the diagonals together with the edges of a polygon – all the lines joining any two vertices. This time, there are n – 1 handshakes/segments for each person/vertex. Here are the handshakes originating from one person in a six-person party:
Summing a series
Now, there is a different way to count these handshakes. The first person shakes hands with 99 others. The second person shakes hands with 98 others, not including the first person, whom we have already counted. The third person shakes hands with 97 others, not including the first two. And so on, until the 99th person, who shakes hands with the 100th person. So there are
99 + 98 + 97 + ... + 2 + 1
handshakes. But we have seen that this is equal to 100 · 99 / 2, and this suggests a formula. In general, if there are n people at a party, then counting the number of handshakes in two different ways gives the formula
1 + 2 + ... + (n-2) + (n-1) = n(n-1)/2.
Such a number is called a triangular number, and the formula is known. We have just proved it, merely by seeing our handshake number from a different perspective. In a moment we’ll see how to find it if we hadn’t already known the formula.
Here is a representation of the 10 edges of a pentagon as the sum 4 + 3 + 2 + 1, starting at A, B, C, and D respectively:
The diagonals can similarly be seen as a sum, but it’s harder to see without thinking in terms of the recursive formula I showed above.
Finally, we can prove this summation in yet another way. Write
S = 1 + 2 + ... + (n-2) + (n-1)
S = (n-1) + (n-2) + ... + 2 + 1
2S = n + n + ... + n + n,
and since there are n-1 "n"'s, then we have 2S = n(n-1), or
S = n(n-1)/2.
This is how we prove the formula for triangular numbers, or equivalently for the sum of an arithmetic series.
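The pairing argument is easy to check numerically; here's a small sketch (mine, not from the original discussion):

```python
def triangular(n):
    # Sum 1 + 2 + ... + (n-1) via the pairing trick: n(n-1)/2
    return n * (n - 1) // 2

# Direct summation agrees with the closed form
assert triangular(100) == sum(range(1, 100))
print(triangular(100))  # 4950
```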
Another way: counting pairs
I couldn’t find an archived page where we explicitly mentioned this, but the total number of segments can also be thought of as the number of pairs of vertices, which is the number of combinations of n vertices taken two at a time, written as

C(n, 2) = nC2 = n! / (2!(n − 2)!) = n(n − 1)/2
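In Python this pair count is `math.comb`; subtracting the n edges from all C(n, 2) segments recovers the diagonal formula (a sketch using only the standard library, not part of the original article):

```python
from math import comb

# comb(n, 2) counts all segments joining two of n vertices;
# removing the n edges leaves only the diagonals
for n in range(3, 20):
    assert comb(n, 2) - n == n * (n - 3) // 2
print(comb(42, 2) - 42)  # 819
```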
Back to the diagonals
However, the handshake analogy isn't exact. At a party, each person gets to shake hands with everyone but himself. But in a polygon, each vertex doesn't get to form a diagonal with himself... or with the two vertices nearest him. (He's already connected to them by edges.) So for a polygon, we have to subtract the number of edges from the total for the handshake problem:
S = n(n-1)/2 - n
= [n(n-1) - 2n]/2
= [n(n - 1 - 2)]/2
= n(n-3)/2
Diagonals = Segments − Edges = n(n − 1)/2 − n = (n² − n − 2n)/2 = (n² − 3n)/2 = n(n − 3)/2
The Handshake Problem on its own
The handshake question is one we have often been asked on its own, so let’s look at a couple answers to that, with or without reference to polygons. First, one from 1997:
Handshake Problem
Our 5th grade math class was learning to solve story problems by looking for a pattern and setting up a chart. Most of the problems were of the nature: If you had 8 people in a group and each one had to shake everyone else's hand once, how many handshakes would take place?
After working through several problems, one boy in the room said "I can get it faster without making a chart. If you multiply whatever the number is by one less, then divide by 2, you get the right answer each time."
Will this always work? Why?
Doctor Anthony answered:
Yes, it will always work. The boy concerned was very observant. A good way to model the situation is to think of an n-sided polygon, which has n vertices.
This time the question was about handshakes, but the answer is the polygon model. It works both ways! He uses the lines-per-vertex approach:
Now consider how many diagonals you can draw between the vertices, and also include two lines connecting a particular vertex to the two adjacent ones. It is clear that there are (n-1) lines joining any one vertex to the other vertices in the polygon. If we consider each of the n vertices, each requires (n-1) lines to join to the other vertices. There are thus n(n-1) links. But each of these links has been produced twice, once from each end, and so the number n(n-1) is too large by a factor of 2.
Hence total number of connections joining every vertex to every other one is:
n(n-1)/2
This is the result that the boy discovered by observation.
Handshakes again, two ways
Finally, here is a question from 2001:
How Many Handshakes?
I'm trying to help my daughter figure out how to set up this problem:
There are 40 people in a room. They shake each other's hands once and only once. How many handshakes are there altogether?
We know that that 40 people shake hands, so each person shakes 39 hands. The first person shakes 39 hands, the second person shakes 38 hands, the third person shakes 37 hands, and so on.
How do you put this into a formula to solve the problem?
One sentence there is the start to the handshakes-per-person approach, and the next is the start to the series approach. Doctor Greenie answered by completing both, starting with the series:
Hi, Kathy -
This is a great problem for young math students to work on using two completely different approaches, so they can see that they get the same answer.
You have a good start on the first approach. The total number of handshakes is
39 + 38 + 37 + 36 + ... + 3 + 2 + 1
To find this total without having to add all these numbers together, notice the following:
The average of the first and last numbers is 20:
(39+1)/2 = 20.
The average of the second and next-to-last numbers is 20:
(38+2)/2 = 20.
If you think about it, you should be able to see that all the numbers can be paired in groups of two whose average is 20. You will have 39 numbers (1 to 39) whose average is 20 - so what is the sum of all those numbers? It is just the number of numbers multiplied by the average of the numbers: 39 · 20 = 780.
Next, the multiplication method, explained in a way that matches a common way to explain combinations:
Here is the other (more mathematically sophisticated) approach:
Each handshake consists of two people shaking hands. How many ways can you choose those two people if there are 40 people in the group? You can choose any one of 40 people for the first person involved in the handshake; then for each of those 40 choices, you have 39 choices for the second person. So you have 40 · 39 = 1560 ways of choosing the two people for the handshake.
But wait a minute. Person A shaking hands with person B is the same as person B shaking hands with person A; this means that in counting the 1560 handshakes you have actually counted each one twice, so the actual number of handshakes is half of 1560, or 780.
Two seemingly completely different ways to look at the problem, and they give the same answer!
There’s a lot of dividing by two going on around here; both methods have a lot in common.
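For the 40-person party, both counts can be reproduced in a couple of lines (my sketch, not part of the original answer):

```python
n = 40
series = sum(range(1, n))   # 39 + 38 + ... + 1, the first approach
pairs = n * (n - 1) // 2    # ordered pairs halved, the second approach
print(series, pairs)  # 780 780
```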
In gathering these (and several other) discussions of these two related problems, I ran across some other “handshake” problems that are too good to skip, even though they are not about counting. They’ll be the subject of the next post, before I get back to counting.
189554 | https://www.geeksforgeeks.org/program-to-find-whether-a-given-number-is-power-of-2/?ref=shm | Program to find whether a given number is power of 2
Given a positive integer n, check whether it is a power of 2 or not.
Examples:
Input : n = 16
Output : true
Explanation: 2^4 = 16
Input : n = 42
Output : false
Explanation: 42 is not a power of 2
Input : n = 1
Output : true
Explanation: 2^0 = 1
Approach 1: Using Log - O(1) time and O(1) space
The idea is to use the mathematical property that if a number n is a power of 2, then log2(n) will be an integer. We can compute log2(n) and check if it's an integer by comparing if 2 raised to the power of log2(n) equals n.
````
#include <iostream>
#include <cmath>
using namespace std;
bool isPowerofTwo(int n) {
if (n <= 0)
return false;
// Calculate log base 2 of n
int logValue = log2(n);
// Check if log2(n) is an integer
// and 2^(logn) = n
return pow(2, logValue) == n;
}
int main() {
int n = 16;
if (isPowerofTwo(n)) {
cout << "true" << endl;
}
else {
cout << "false" << endl;
}
}
import java.lang.Math;
class GfG {
static boolean isPowerofTwo(int n) {
if (n <= 0)
return false;
// Calculate log base 2 of n
int logValue = (int)(Math.log(n) / Math.log(2));
// Check if log2(n) is an integer
// and 2^(logn) = n
return Math.pow(2, logValue) == n;
}
public static void main(String[] args) {
int n = 16;
if (isPowerofTwo(n)) {
System.out.println("true");
} else {
System.out.println("false");
}
}
}
import math

def isPowerofTwo(n):
    if n <= 0:
        return False
    # Calculate log base 2 of n
    logValue = int(math.log2(n))
    # Check if log2(n) is an integer
    # and 2^(logn) = n
    return pow(2, logValue) == n

if __name__ == "__main__":
    n = 16
    if isPowerofTwo(n):
        print("true")
    else:
        print("false")
using System;
class GfG {
static bool isPowerofTwo(int n) {
if (n <= 0)
return false;
// Calculate log base 2 of n
int logValue = (int)(Math.Log(n, 2));
// Check if log2(n) is an integer
// and 2^(logn) = n
return Math.Pow(2, logValue) == n;
}
static void Main() {
int n = 16;
if (isPowerofTwo(n)) {
Console.WriteLine("true");
} else {
Console.WriteLine("false");
}
}
}
function isPowerofTwo(n) {
if (n <= 0)
return false;
// Calculate log base 2 of n
let logValue = Math.floor(Math.log2(n));
// Check if log2(n) is an integer
// and 2^(logn) = n
return Math.pow(2, logValue) === n;
}
let n = 16;
if (isPowerofTwo(n)) {
console.log("true");
} else {
console.log("false");
}
````
Approach 2: Using Division and Modulo Operator - O(log n) time and O(1) space
The idea is to repeatedly divide the number by 2 and check if there's any remainder during the process. If n is a power of 2, then dividing it by 2 repeatedly will eventually reduce it to 1 without encountering any odd number in between. If at any point n becomes odd (except for the final value 1), then n is not a power of 2.
````
#include <iostream>
using namespace std;
bool isPowerofTwo(int n) {
if (n <= 0)
return false;
while (n > 1) {
if (n % 2 != 0)
return false;
n = n / 2;
}
return true;
}
int main() {
int n = 16;
if (isPowerofTwo(n)) {
cout << "true" << endl;
}
else {
cout << "false" << endl;
}
}
class GfG {
static boolean isPowerofTwo(int n) {
if (n <= 0)
return false;
while (n > 1) {
if (n % 2 != 0)
return false;
n = n / 2;
}
return true;
}
public static void main(String[] args) {
int n = 16;
if (isPowerofTwo(n)) {
System.out.println("true");
} else {
System.out.println("false");
}
}
}
# Python program to find whether
# a given number is power of 2

def isPowerofTwo(n):
    if n <= 0:
        return False
    while n > 1:
        if n % 2 != 0:
            return False
        n = n // 2
    return True

if __name__ == "__main__":
    n = 16
    if isPowerofTwo(n):
        print("true")
    else:
        print("false")
// C# program to find whether
// a given number is power of 2
using System;
class GfG {
static bool isPowerofTwo(int n) {
if (n <= 0)
return false;
while (n > 1) {
if (n % 2 != 0)
return false;
n = n / 2;
}
return true;
}
static void Main() {
int n = 16;
if (isPowerofTwo(n)) {
Console.WriteLine("true");
} else {
Console.WriteLine("false");
}
}
}
// JavaScript program to find whether
// a given number is power of 2
function isPowerofTwo(n) {
if (n <= 0)
return false;
while (n > 1) {
if (n % 2 !== 0)
return false;
n = Math.floor(n / 2);
}
return true;
}
let n = 16;
if (isPowerofTwo(n)) {
console.log("true");
} else {
console.log("false");
}
````
Approach 3: Using Count of Set Bits - O(log n) time and O(1) space
The idea is to leverage the binary representation of powers of 2, which always have exactly one bit set to 1 (the rest are 0). We can count the number of set bits in the binary representation of n, and if there's exactly one set bit, then n is a power of 2. This works because powers of 2 have the form 2^k, which in binary is a 1 followed by k zeros.
````
#include <iostream>
using namespace std;
bool isPowerofTwo(int n) {
if (n <= 0)
return false;
// Count set bits
int count = 0;
while (n > 0) {
if (n & 1)
count++;
n = n >> 1;
}
// If count of set bits is 1,
// then n is a power of 2
return (count == 1);
}
int main() {
int n = 16;
if (isPowerofTwo(n)) {
cout << "true" << endl;
}
else {
cout << "false" << endl;
}
}
class GfG {
static boolean isPowerofTwo(int n) {
if (n <= 0)
return false;
// Count set bits
int count = 0;
while (n > 0) {
if ((n & 1) != 0)
count++;
n = n >> 1;
}
// If count of set bits is 1,
// then n is a power of 2
return (count == 1);
}
public static void main(String[] args) {
int n = 16;
if (isPowerofTwo(n)) {
System.out.println("true");
} else {
System.out.println("false");
}
}
}
def isPowerofTwo(n):
    if n <= 0:
        return False
    # Count set bits
    count = 0
    while n > 0:
        if n & 1:
            count += 1
        n = n >> 1
    # If count of set bits is 1,
    # then n is a power of 2
    return count == 1

if __name__ == "__main__":
    n = 16
    if isPowerofTwo(n):
        print("true")
    else:
        print("false")
using System;
class GfG {
static bool isPowerofTwo(int n) {
if (n <= 0)
return false;
// Count set bits
int count = 0;
while (n > 0) {
if ((n & 1) != 0)
count++;
n = n >> 1;
}
// If count of set bits is 1,
// then n is a power of 2
return (count == 1);
}
static void Main() {
int n = 16;
if (isPowerofTwo(n)) {
Console.WriteLine("true");
} else {
Console.WriteLine("false");
}
}
}
function isPowerofTwo(n) {
if (n <= 0)
return false;
// Count set bits
let count = 0;
while (n > 0) {
if (n & 1)
count++;
n = n >> 1;
}
// If count of set bits is 1,
// then n is a power of 2
return count === 1;
}
let n = 16;
if (isPowerofTwo(n)) {
console.log("true");
} else {
console.log("false");
}
````
Approach 4: Using AND Operator - O(1) time and O(1) space
The idea is to use bit manipulation based on the observation that if n is a power of 2, then its binary representation has exactly one bit set to 1, and n-1 will have all bits to the right of that bit set to 1. Therefore, n & (n-1) will always be 0 for powers of 2. This provides an elegant one-line solution to check if a number is a power of 2 or not.
````
// C++ program to find whether
// a given number is power of 2
#include <iostream>
using namespace std;
bool isPowerofTwo(int n) {
// Check if n is positive and n & (n-1) is 0
return (n > 0) && ((n & (n-1)) == 0);
}
int main() {
int n = 16;
if (isPowerofTwo(n)) {
cout << "true" << endl;
}
else {
cout << "false" << endl;
}
}
// Java program to find whether
// a given number is power of 2
class GfG {
static boolean isPowerofTwo(int n) {
// Check if n is positive and n & (n-1) is 0
return (n > 0) && ((n & (n - 1)) == 0);
}
public static void main(String[] args) {
int n = 16;
if (isPowerofTwo(n)) {
System.out.println("true");
} else {
System.out.println("false");
}
}
}
# Python program to find whether
# a given number is power of 2

def isPowerofTwo(n):
    # Check if n is positive and n & (n-1) is 0
    return (n > 0) and ((n & (n - 1)) == 0)

if __name__ == "__main__":
    n = 16
    if isPowerofTwo(n):
        print("true")
    else:
        print("false")
using System;
class GfG {
static bool isPowerofTwo(int n) {
// Check if n is positive and n & (n-1) is 0
return (n > 0) && ((n & (n - 1)) == 0);
}
static void Main() {
int n = 16;
if (isPowerofTwo(n)) {
Console.WriteLine("true");
} else {
Console.WriteLine("false");
}
}
}
// JavaScript program to find whether
// a given number is power of 2
function isPowerofTwo(n) {
// Check if n is positive and n & (n-1) is 0
return (n > 0) && ((n & (n - 1)) === 0);
}
let n = 16;
if (isPowerofTwo(n)) {
console.log("true");
} else {
console.log("false");
}
````
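To see the bit patterns behind the `n & (n-1)` trick, here is a short illustrative trace in Python (my addition, not part of the original article):

```python
for n in [16, 18]:
    # 16 = 10000: n-1 = 01111, so the AND clears every bit.
    # 18 = 10010: n-1 = 10001, so the AND keeps the high bit.
    print(f"{n:2d}: {n:06b} & {n-1:06b} = {n & (n-1):06b}")
```

The AND comes out zero only for the power of 2.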
Approach 5: Using AND & NOT Operator - O(1) time and O(1) space
The idea is to use a bit manipulation technique similar to approach 4, but with a different pattern. When n is a power of 2, it has exactly one bit set to 1. For n-1, all bits to the right of that single set bit will be 1, and the set bit itself will be 0. Taking the NOT of (n-1) flips all these bits, resulting in a number where all bits to the right of the original set bit position will be 0, and all bits to the left (including the bit position itself) will be 1. When we perform n & (~(n-1)), for a power of 2, this operation preserves the original number n because the only set bit in n aligns with a set bit in ~(n-1). For non-powers of 2, this equality doesn't hold.
````
// C++ program to find whether
// a given number is power of 2
#include <iostream>
using namespace std;
bool isPowerofTwo(int n) {
// Check if n is positive and n & ~(n-1) equals n
return (n > 0) && ((n & (~(n-1))) == n);
}
int main() {
int n = 16;
if (isPowerofTwo(n)) {
cout << "true" << endl;
}
else {
cout << "false" << endl;
}
}
class GfG {
// Check if n is positive and n & ~(n-1) equals n
static boolean isPowerofTwo(int n) {
return (n > 0) && ((n & (~(n - 1))) == n);
}
public static void main(String[] args) {
int n = 16;
if (isPowerofTwo(n)) {
System.out.println("true");
} else {
System.out.println("false");
}
}
}
def isPowerofTwo(n):
    # Check if n is positive and n & ~(n-1) equals n
    return (n > 0) and ((n & (~(n - 1))) == n)

if __name__ == "__main__":
    n = 16
    if isPowerofTwo(n):
        print("true")
    else:
        print("false")
// C# program to find whether
// a given number is power of 2
using System;

class GfG {
    static bool isPowerofTwo(int n) {
        // Check if n is positive and n & ~(n-1) equals n
        return (n > 0) && ((n & (~(n - 1))) == n);
    }

    static void Main() {
        int n = 16;
        if (isPowerofTwo(n)) {
            Console.WriteLine("true");
        } else {
            Console.WriteLine("false");
        }
    }
}
// JavaScript program to find whether
// a given number is power of 2
function isPowerofTwo(n) {
    // Check if n is positive and n & ~(n-1) equals n
    return (n > 0) && ((n & (~(n - 1))) === n);
}

let n = 16;
if (isPowerofTwo(n)) {
    console.log("true");
} else {
    console.log("false");
}
````
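As a quick cross-check (not part of the original article; function names are ours), the bitwise test can be compared against a naive repeated-division test in Python:

```python
def is_power_of_two_bitwise(n):
    # A power of two has exactly one set bit; for such n, ~(n-1) keeps
    # that bit set, so n & ~(n-1) reproduces n exactly.
    return n > 0 and (n & ~(n - 1)) == n

def is_power_of_two_naive(n):
    # Repeatedly halve n; a power of two reduces cleanly to 1.
    if n <= 0:
        return False
    while n % 2 == 0:
        n //= 2
    return n == 1

# Both tests agree on every value in a range, including non-positive inputs.
for n in range(-4, 1025):
    assert is_power_of_two_bitwise(n) == is_power_of_two_naive(n)
print([n for n in range(1, 40) if is_power_of_two_bitwise(n)])  # [1, 2, 4, 8, 16, 32]
```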
|
189555 | https://online.stat.psu.edu/stat414/ | STAT 414 Introduction to Probability Theory
STAT 414
Welcome to STAT 414!
About this Course
Welcome to the course notes for STAT 414: Introduction to Probability Theory. These notes are designed and developed by Penn State's Department of Statistics and offered as open educational resources. These notes are free to use under Creative Commons license CC BY-NC 4.0.
This course is part of the Online Master of Applied Statistics program offered by Penn State's World Campus.
Currently enrolled?
If you are a current student in this course, please see Canvas for your syllabus, assignments, lesson videos and communication from your instructor.
How to enroll?
If you would like to enroll and experience the entire course for credit please see 'How to enroll in a course' on the World Campus website.
Course Overview Section
As the title of Stat 414 suggests, we will be studying the theory of probability, probability, and more probability throughout the course. Here's a (brief!) overview of what we'll do in the course:
Section 1
In Section 1, one of our primary focuses will be to develop an understanding of the various ways in which we can assign a probability to some chance event. We'll also learn the fundamental properties of probability, investigate how probability behaves, and learn how to calculate the probability of a new chance event.
Section 2
In Section 2, we'll explore discrete random variables and discrete probability distributions. The basic idea is that when certain conditions are met, we can derive formulas for calculating the probability of an event. Then, instead of returning to the basic probability rules we learned in Section 1 to calculate the probability of an event, we can use the new formulas we derived, provided that those conditions are met.
Section 3
In Section 3, as the title suggests, we will investigate probability distributions of continuous random variables, that is, random variables whose possible outcomes fall on an infinite interval. It's in this section that you'll want to make sure your calculus skills of integration and differentiation are sufficient before tackling many of the problems you'll encounter.
Section 4
In Section 4, we'll extend many of the definitions and concepts that we learned in Sections 2 and 3 to the case in which we have two random variables. More specifically, we will:
extend the definition of a probability distribution of one random variable to the joint probability distribution of two random variables,
learn how to use the correlation coefficient as a way of quantifying the extent to which two random variables are linearly related,
extend the definition of the conditional probability of events in order to find the conditional probability distribution of one random variable given another, and
investigate a particular joint probability distribution, namely the bivariate normal distribution.
Section 5
Finally, in Section 5, as the name of this section suggests, we will spend some time learning how to find the probability distribution of functions of random variables. For example, we might know the probability density function of (X), but want to know instead the probability density function of (X^2). We'll learn several different techniques for finding the distribution of functions of random variables, including the distribution function technique, the change-of-variable technique and the moment-generating function technique. The more important functions of random variables that we'll explore will be those involving random variables that are independent and identically distributed. |
189556 | https://www.randomservices.org/random/expect/Matrices.html | (\newcommand{\var}{\text{var}}) (\newcommand{\cov}{\text{cov}}) (\newcommand{\cor}{\text{cor}}) (\newcommand{\vc}{\text{vc}}) (\renewcommand{\P}{\mathbb{P}}) (\newcommand{\E}{\mathbb{E}}) (\newcommand{\R}{\mathbb{R}}) (\newcommand{\N}{\mathbb{N}}) (\newcommand{\bs}{\boldsymbol})
8. Expected Value and Covariance Matrices
The main purpose of this section is a discussion of expected value and covariance for random matrices and vectors. These topics are somewhat specialized, but are particularly important in multivariate statistical models and for the multivariate normal distribution. This section requires some prerequisite knowledge of linear algebra.
We assume that the various indices ( m, \, n, p, k ) that occur in this section are positive integers. Also we assume that expected values of real-valued random variables that we reference exist as real numbers, although extensions to cases where expected values are (\infty) or (-\infty) are straightforward, as long as we avoid the dreaded indeterminate form (\infty - \infty).
Basic Theory
Linear Algebra
We will follow our usual convention of denoting random variables by upper case letters and nonrandom variables and constants by lower case letters. In this section, that convention leads to notation that is a bit nonstandard, since the objects that we will be dealing with are vectors and matrices. On the other hand, the notation we will use works well for illustrating the similarities between results for random matrices and the corresponding results in the one-dimensional case. Also, we will try to be careful to explicitly point out the underlying spaces where various objects live.
Let (\R^{m \times n}) denote the space of all (m \times n) matrices of real numbers. The ( (i, j) ) entry of ( \bs{a} \in \R^{m \times n} ) is denoted ( a_{i j} ) for ( i \in {1, 2, \ldots, m} ) and ( j \in {1, 2, \ldots, n} ). We will identify (\R^n) with (\R^{n \times 1}), so that an ordered (n)-tuple can also be thought of as an (n \times 1) column vector. The transpose of a matrix (\bs{a} \in \R^{m \times n}) is denoted (\bs{a}^T)—the ( n \times m ) matrix whose ( (i, j) ) entry is the ( (j, i) ) entry of ( \bs{a} ). Recall the definitions of matrix addition, scalar multiplication, and matrix multiplication. Recall also the standard inner product (or dot product) of ( \bs{x}, \, \bs{y} \in \R^n ): [ \langle \bs{x}, \bs{y} \rangle = \bs{x} \cdot \bs{y} = \bs{x}^T \bs{y} = \sum_{i=1}^n x_i y_i ] The outer product of ( \bs{x} ) and (\bs{y}) is ( \bs{x} \bs{y}^T ), the ( n \times n ) matrix whose ( (i, j) ) entry is ( x_i y_j ). Note that the inner product is the trace (sum of the diagonal entries) of the outer product. Finally recall the standard norm on ( \R^n ), given by [ \|\bs{x}\| = \sqrt{\langle \bs{x}, \bs{x}\rangle} = \sqrt{x_1^2 + x_2^2 + \cdots + x_n^2}] Recall that inner product is bilinear, that is, linear (preserving addition and scalar multiplication) in each argument separately. As a consequence, for ( \bs{x}, \, \bs{y} \in \R^n ), [ \|\bs{x} + \bs{y}\|^2 = \|\bs{x}\|^2 + \|\bs{y}\|^2 + 2 \langle \bs{x}, \bs{y} \rangle ]
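The identities in this recap are easy to spot-check numerically. The following NumPy sketch (sample vectors are illustrative) verifies that the inner product is the trace of the outer product, and the norm-expansion identity above:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)

inner = x @ y                              # <x, y> = x^T y
outer = np.outer(x, y)                     # x y^T, entries x_i y_j
assert np.isclose(inner, np.trace(outer))  # <x, y> = tr(x y^T)

# ||x + y||^2 = ||x||^2 + ||y||^2 + 2 <x, y>
lhs = np.linalg.norm(x + y) ** 2
rhs = np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2 + 2 * inner
assert np.isclose(lhs, rhs)
```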
Expected Value of a Random Matrix
As usual, our starting point is a random experiment, modeled by a probability space ((\Omega, \mathscr F, \P)). So to review, ( \Omega ) is the set of outcomes, ( \mathscr F ) the collection of events, and ( \P ) the probability measure on the sample space ( (\Omega, \mathscr F) ). It's natural to define the expected value of a random matrix in a component-wise manner.
Suppose that (\bs{X}) is an (m \times n) matrix of real-valued random variables, whose ((i, j)) entry is denoted (X_{i j}). Equivalently, (\bs{X}) is a random (m \times n) matrix, that is, a random variable with values in ( \R^{m \times n} ). The expected value (\E(\bs{X})) is defined to be the (m \times n) matrix whose ((i, j)) entry is (\E\left(X_{i j}\right)), the expected value of (X_{i j}).
Many of the basic properties of expected value for random variables have analogous results for expected value of random matrices, with matrix operation replacing the ordinary ones. Our first two properties are the critically important linearity properties. The first part is the additive property—the expected value of a sum is the sum of the expected values.
(\E(\bs{X} + \bs{Y}) = \E(\bs{X}) + \E(\bs{Y})) if (\bs{X}) and (\bs{Y}) are random (m \times n) matrices.
Details:
This is true by definition of the matrix expected value and the ordinary additive property. Note that ( \E\left(X_{i j} + Y_{i j}\right) = \E\left(X_{i j}\right) + \E\left(Y_{i j}\right) ). The left side is the ( (i, j) ) entry of ( \E(\bs{X} + \bs{Y}) ) and the right side is the ( (i, j) ) entry of ( \E(\bs{X}) + \E(\bs{Y}) ).
The next part of the linearity properties is the scaling property—a nonrandom matrix factor can be pulled out of the expected value.
Suppose that (\bs{X}) is a random (n \times p) matrix.
(\E(\bs{a} \bs{X}) = \bs{a} \E(\bs{X})) if (\bs{a} \in \R^{m \times n}).
( \E(\bs{X} \bs{a}) = \E(\bs{X}) \bs{a}) if ( \bs{a} \in \R^{p \times n} ).
Details:
By the ordinary linearity and scaling properties, ( \E\left(\sum_{j=1}^n a_{i j} X_{j k}\right) = \sum_{j=1}^n a_{i j} \E\left(X_{j k}\right) ). The left side is the ( (i, k) ) entry of ( \E(\bs{a} \bs{X}) ) and the right side is the ( (i, k) ) entry of ( \bs{a} \E(\bs{X}) ).
The proof is similar to (a).
Recall that for independent, real-valued random variables, the expected value of the product is the product of the expected values. Here is the analogous result for random matrices.
(\E(\bs{X} \bs{Y}) = \E(\bs{X}) \E(\bs{Y})) if (\bs{X}) is a random (m \times n) matrix, (\bs{Y}) is a random (n \times p) matrix, and (\bs{X}) and (\bs{Y}) are independent.
Details:
By the ordinary linearity properties and by the independence assumption, [ \E\left(\sum_{j=1}^n X_{i j} Y_{j k}\right) = \sum_{j=1}^n \E\left(X_{i j} Y_{j k}\right) = \sum_{j=1}^n \E\left(X_{i j}\right) \E\left(Y_{j k}\right)] The left side is the ( (i, k) ) entry of ( \E(\bs{X} \bs{Y}) ) and the right side is the ( (i, k) ) entry of ( \E(\bs{X}) \E(\bs{Y}) ).
Actually the previous result holds if ( \bs{X} ) and ( \bs{Y} ) are simply uncorrelated in the sense that ( X_{i j} ) and ( Y_{j k} ) are uncorrelated for each ( i \in {1, \ldots, m} ), ( j \in {1, 2, \ldots, n} ) and ( k \in {1, 2, \ldots p} ). We will study covariance of random vectors in the next subsection.
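The product rule ( \E(\bs{X} \bs{Y}) = \E(\bs{X}) \E(\bs{Y}) ) can be verified by exact enumeration over a small finite sample space. In this NumPy sketch the matrix values and probabilities are made up for illustration:

```python
import numpy as np
from itertools import product

# A 2x2 random matrix X and a 2x1 random matrix Y, each with a small
# finite set of possible values (illustrative numbers).
X_vals = [np.array([[0., 1.], [2., 3.]]), np.array([[1., 0.], [1., 1.]])]
Y_vals = [np.array([[2.], [0.]]), np.array([[1.], [3.]]), np.array([[0.], [1.]])]
pX, pY = [0.4, 0.6], [0.2, 0.5, 0.3]

# Under independence, P(X = x, Y = y) = P(X = x) P(Y = y).
E_XY = sum(px * py * (x @ y) for (px, x), (py, y) in
           product(zip(pX, X_vals), zip(pY, Y_vals)))
E_X = sum(px * x for px, x in zip(pX, X_vals))
E_Y = sum(py * y for py, y in zip(pY, Y_vals))
assert np.allclose(E_XY, E_X @ E_Y)   # E(XY) = E(X) E(Y)
```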
Covariance Matrices
Our next goal is to define and study the covariance of two random vectors.
Suppose that (\bs{X}) is a random vector in (\R^m) and (\bs{Y}) is a random vector in (\R^n).
The covariance matrix of (\bs{X}) and (\bs{Y}) is the (m \times n) matrix (\cov(\bs{X}, \bs{Y})) whose ((i,j)) entry is (\cov\left(X_i, Y_j\right)) the ordinary covariance of (X_i) and (Y_j).
Assuming that the coordinates of ( \bs{X} ) and (\bs{Y}) have positive variance, the correlation matrix of ( \bs{X} ) and ( \bs{Y} ) is the ( m \times n ) matrix ( \cor(\bs{X}, \bs{Y}) ) whose ( (i, j) ) entry is ( \cor\left(X_i, Y_j\right)), the ordinary correlation of ( X_i ) and ( Y_j )
Many of the standard properties of covariance and correlation for real-valued random variables have extensions to random vectors. For the following three results, ( \bs X ) is a random vector in ( \R^m ) and ( \bs Y ) is a random vector in ( \R^n ).
(\cov(\bs{X}, \bs{Y}) = \E\left(\left[\bs{X} - \E(\bs{X})\right]\left[\bs{Y} - \E(\bs{Y})\right]^T\right))
Details:
By the definition of the expected value of a random vector and by the definition of matrix multiplication, the ( (i, j) ) entry of ( \left[\bs{X} - \E(\bs{X})\right]\left[\bs{Y} - \E(\bs{Y})\right]^T ) is simply ( \left[X_i - \E\left(X_i\right)\right] \left[Y_j - \E\left(Y_j\right)\right] ). The expected value of this entry is ( \cov\left(X_i, Y_j\right) ), which in turn, is the ( (i, j) ) entry of ( \cov(\bs{X}, \bs{Y}) )
Thus, the covariance of ( \bs{X} ) and ( \bs{Y} ) is the expected value of the outer product of ( \bs{X} - \E(\bs{X}) ) and ( \bs{Y} - \E(\bs{Y}) ). Our next result is the computational formula for covariance: the expected value of the outer product of ( \bs{X} ) and ( \bs{Y} ) minus the outer product of the expected values.
(\cov(\bs{X},\bs{Y}) = \E\left(\bs{X} \bs{Y}^T\right) - \E(\bs{X}) \left[\E(\bs{Y})\right]^T).
Details:
The ( (i, j) ) entry of ( \E\left(\bs{X} \bs{Y}^T\right) - \E(\bs{X}) \left[\E(\bs{Y})\right]^T) is ( \E\left(X_i Y_j\right) - \E\left(X_i\right) \E\left(Y_j\right) ), which by the standard computational formula, is ( \cov\left(X_i, Y_j\right) ), which in turn is the ( (i, j) ) entry of ( \cov(\bs{X}, \bs{Y}) ).
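The computational formula can be checked against the definition on simulated data; for empirical moments the two expressions are algebraically identical, so they agree to floating-point precision. A NumPy sketch (the simulated distribution is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
X = rng.normal(size=(n, 3))
Y = X[:, :2] + rng.normal(size=(n, 2))   # correlated with X by construction

EX, EY = X.mean(axis=0), Y.mean(axis=0)
# E[X Y^T]: average of the outer products, a 3x2 matrix.
EXYt = (X[:, :, None] * Y[:, None, :]).mean(axis=0)
cov_formula = EXYt - np.outer(EX, EY)    # E[X Y^T] - E(X) E(Y)^T

# Definitional covariance: E[(X - EX)(Y - EY)^T], entry by entry.
cov_def = ((X - EX)[:, :, None] * (Y - EY)[:, None, :]).mean(axis=0)
assert np.allclose(cov_formula, cov_def, atol=1e-8)
```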
The next result is the matrix version of the symmetry property.
(\cov(\bs{Y}, \bs{X}) = \left[\cov(\bs{X}, \bs{Y})\right]^T).
Details:
The ( (i, j) ) entry of ( \cov(\bs{X}, \bs{Y}) ) is ( \cov\left(X_i, Y_j\right) ), which is the ((j, i) ) entry of ( \cov(\bs{Y}, \bs{X}) ).
In the following result, ( \bs{0} ) denotes the ( m \times n ) zero matrix.
(\cov(\bs{X}, \bs{Y}) = \bs{0}) if and only if (\cov\left(X_i, Y_j\right) = 0) for each (i) and (j), so that each coordinate of (\bs{X}) is uncorrelated with each coordinate of (\bs{Y}).
Details:
This follows immediately from the definition of ( \cov(\bs{X}, \bs{Y}) ).
Naturally, when ( \cov(\bs{X}, \bs{Y}) = \bs{0} ), we say that the random vectors ( \bs{X} ) and (\bs{Y}) are uncorrelated. In particular, if the random vectors are independent, then they are uncorrelated. The following results establish the bi-linear properties of covariance.
The additive properties.
(\cov(\bs{X} + \bs{Y}, \bs{Z}) = \cov(\bs{X}, \bs{Z}) + \cov(\bs{Y}, \bs{Z})) if (\bs{X}) and (\bs{Y}) are random vectors in (\R^m) and (\bs{Z}) is a random vector in (\R^n).
(\cov(\bs{X}, \bs{Y} + \bs{Z}) = \cov(\bs{X}, \bs{Y}) + \cov(\bs{X}, \bs{Z})) if (\bs{X}) is a random vector in (\R^m), and (\bs{Y}) and (\bs{Z}) are random vectors in (\R^n).
Details:
From the ordinary additive property of covariance, ( \cov\left(X_i + Y_i, Z_j\right) = \cov\left(X_i, Z_j\right) + \cov\left(Y_i, Z_j\right) ). The left side is the ( (i, j) ) entry of ( \cov(\bs{X} + \bs{Y}, \bs{Z}) ) and the right side is the ( (i, j) ) entry of ( \cov(\bs{X}, \bs{Z}) + \cov(\bs{Y}, \bs{Z}) ).
The proof is similar to (a), using the additivity of covariance in the second argument.
The scaling properties
(\cov(\bs{a} \bs{X}, \bs{Y}) = \bs{a} \cov(\bs{X}, \bs{Y})) if (\bs{X}) is a random vector in (\R^n), (\bs{Y}) is a random vector in (\R^p), and (\bs{a} \in \R^{m \times n}).
(\cov(\bs{X}, \bs{a} \bs{Y}) = \cov(\bs{X}, \bs{Y}) \bs{a}^T) if (\bs{X}) is a random vector in (\R^m), (\bs{Y}) is a random vector in (\R^n), and (\bs{a} \in \R^{k \times n}).
Details:
Using the ordinary linearity properties of covariance in the first argument, we have [ \cov\left(\sum_{j=1}^n a_{i j} X_j, Y_k\right) = \sum_{j=1}^n a_{i j} \cov\left(X_j, Y_k\right) ] The left side is the ( (i, k) ) entry of ( \cov(\bs{a} \bs{X}, \bs{Y}) ) and the right side is the ( (i, k) ) entry of ( \bs{a} \cov(\bs{X}, \bs{Y}) ).
The proof is similar to (a), using the linearity of covariance in the second argument.
Variance-Covariance Matrices
Suppose that (\bs{X}) is a random vector in (\R^n). The covariance matrix of (\bs{X}) with itself is called the variance-covariance matrix of (\bs{X}): [ \vc(\bs{X}) = \cov(\bs{X}, \bs{X}) = \E\left(\left[\bs{X} - \E(\bs{X})\right]\left[\bs{X} - \E(\bs{X})\right]^T\right)]
Recall that the variance of an ordinary real-valued random variable ( X ) can be computed in terms of the covariance: ( \var(X) = \cov(X, X) ). Thus the variance-covariance matrix of a random vector in some sense plays the same role that variance does for a random variable.
(\vc(\bs{X})) is a symmetric (n \times n) matrix with (\left(\var(X_1), \var(X_2), \ldots, \var(X_n)\right)) on the diagonal.
Details:
Recall that ( \cov\left(X_i, X_j\right) = \cov\left(X_j, X_i\right) ). Also, the ( (i, i) ) entry of ( \vc(\bs{X}) ) is ( \cov\left(X_i, X_i\right) = \var\left(X_i\right) ).
The following result is the formula for the variance-covariance matrix of a sum, analogous to the formula for the variance of a sum of real-valued variables.
(\vc(\bs{X} + \bs{Y}) = \vc(\bs{X}) + \cov(\bs{X}, \bs{Y}) + \cov(\bs{Y}, \bs{X}) + \vc(\bs{Y})) if (\bs{X}) and (\bs{Y}) are random vectors in (\R^n).
Details:
This follows from the additive property of covariance: [ \vc(\bs{X} + \bs{Y}) = \cov(\bs{X} + \bs{Y}, \bs{X} + \bs{Y}) = \cov(\bs{X}, \bs{X}) + \cov(\bs{X}, \bs{Y}) + \cov(\bs{Y}, \bs{X}) + \cov(\bs{Y}, \bs{Y}) ]
Recall that ( \var(a X) = a^2 \var(X) ) if ( X ) is a real-valued random variable and ( a \in \R ). Here is the analogous result for the variance-covariance matrix of a random vector.
(\vc(\bs{a} \bs{X}) = \bs{a} \vc(\bs{X}) \bs{a}^T) if (\bs{X}) is a random vector in (\R^n) and (\bs{a} \in \R^{m \times n}).
Details:
This follows from the scaling property of covariance: [ \vc(\bs{a} \bs{X}) = \cov(\bs{a} \bs{X}, \bs{a} \bs{X}) = \bs{a} \cov(\bs{X}, \bs{X}) \bs{a}^T ]
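The scaling rule ( \vc(\bs{a} \bs{X}) = \bs{a} \vc(\bs{X}) \bs{a}^T ) holds exactly for sample covariances as well, so it can be checked on simulated data. A NumPy sketch (the matrix ( \bs{a} ) and the distribution are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100_000, 3))
X[:, 1] += X[:, 0]                     # introduce some correlation
a = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, -1.0]])       # a in R^{2x3}

vc_X = np.cov(X, rowvar=False, bias=True)     # sample vc of X
vc_aX = np.cov(X @ a.T, rowvar=False, bias=True)  # sample vc of aX
assert np.allclose(vc_aX, a @ vc_X @ a.T, atol=1e-8)
```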
Recall that if ( X ) is a random variable, then ( \var(X) \ge 0 ), and ( \var(X) = 0 ) if and only if ( X ) is a constant (with probability 1). Here is the analogous result for a random vector:
Suppose that ( \bs{X} ) is a random vector in ( \R^n ).
( \vc(\bs{X}) ) is either positive semi-definite or positive definite.
(\vc(\bs{X})) is positive semi-definite but not positive definite if and only if there exists a nonzero (\bs{a} \in \R^n) and (c \in \R) such that, with probability 1, (\bs{a}^T \bs{X} = \sum_{i=1}^n a_i X_i = c)
Details:
From the previous result, (0 \le \var\left(\bs{a}^T \bs{X}\right) = \vc\left(\bs{a}^T \bs{X}\right) = \bs{a}^T \vc(\bs{X}) \bs{a} ) for every ( \bs{a} \in \R^n ). Thus, by definition, ( \vc(\bs{X}) ) is either positive semi-definite or positive definite.
In light of (a), ( \vc(\bs{X}) ) is positive semi-definite but not positive definite if and only if there exists ( \bs{a} \in \R^n ) such that ( \bs{a}^T \vc(\bs{X}) \bs{a} = \var\left(\bs{a}^T \bs{X}\right) = 0 ). But in turn, this is true if and only if ( \bs{a}^T \bs{X} ) is constant with probability 1.
Recall that since (\vc(\bs{X})) is either positive semi-definite or positive definite, the eigenvalues and the determinant of (\vc(\bs{X})) are nonnegative. Moreover, if (\vc(\bs{X})) is positive semi-definite but not positive definite, then one of the coordinates of (\bs{X}) can be written as a linear transformation of the other coordinates (and hence can usually be eliminated in the underlying model). By contrast, if (\vc(\bs{X})) is positive definite, then this cannot happen; (\vc(\bs{X})) has positive eigenvalues and determinant and is invertible.
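Both facts in the last paragraph show up numerically: the sample variance-covariance matrix has nonnegative eigenvalues, and becomes singular when one coordinate is an affine function of the others. A NumPy sketch with an artificially dependent third coordinate:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50_000, 2))
# Third coordinate is an affine function of the first two: X3 = 2X1 - X2 + 5.
X3 = np.column_stack([X, 2 * X[:, 0] - X[:, 1] + 5])

vc = np.cov(X3, rowvar=False)
eigvals = np.linalg.eigvalsh(vc)
assert eigvals.min() > -1e-8           # eigenvalues nonnegative (up to rounding)
assert abs(np.linalg.det(vc)) < 1e-8   # singular: psd but not positive definite
```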
Best Linear Predictor
Suppose that (\bs{X}) is a random vector in (\R^m) and that (\bs{Y}) is a random vector in (\R^n). We are interested in finding the function of (\bs{X}) of the form (\bs{a} + \bs{b} \bs{X}), where (\bs{a} \in \R^n) and (\bs{b} \in \R^{n \times m}), that is closest to (\bs{Y}) in the mean square sense. Functions of this form are analogous to linear functions in the single variable case. However, unless ( \bs{a} = \bs{0} ), such functions are not linear transformations in the sense of linear algebra, so the correct term is affine function of ( \bs{X} ). This problem is of fundamental importance in statistics when the random vector (\bs{X}) (the predictor vector) is observable but the random vector (\bs{Y}) (the response vector) is not. Our discussion here generalizes the one-dimensional case, when (X) and (Y) are random variables. That problem was solved in the section on covariance. We will assume that (\vc(\bs{X})) is positive definite, so that ( \vc(\bs{X}) ) is invertible, and none of the coordinates of (\bs{X}) can be written as an affine function of the other coordinates. We write ( \vc^{-1}(\bs{X}) ) for the inverse instead of the clunkier ( \left[\vc(\bs{X})\right]^{-1} ).
As with the single variable case, the solution turns out to be the affine function that has the same expected value as ( \bs{Y} ), and whose covariance with ( \bs{X} ) is the same as that of ( \bs{Y} ).
Define ( L(\bs{Y} \mid \bs{X}) = \E(\bs{Y}) + \cov(\bs{Y},\bs{X}) \vc^{-1}(\bs{X}) \left[\bs{X} - \E(\bs{X})\right] ). Then ( L(\bs{Y} \mid \bs{X}) ) is the only affine function of ( \bs{X} ) in ( \R^n ) satisfying
( \E\left[L(\bs{Y} \mid \bs{X})\right] = \E(\bs{Y}) )
( \cov\left[L(\bs{Y} \mid \bs{X}), \bs{X}\right] = \cov(\bs{Y}, \bs{X}) )
Details:
From linearity, [ \E\left[L(\bs{Y} \mid \bs{X})\right] = \E(\bs{Y}) + \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X})\left[\E(\bs{X}) - \E(\bs{X})\right] = \E(\bs{Y})] From linearity and the fact that a constant vector is independent of (and hence uncorrelated with) any random vector, [ \cov\left[L(\bs{Y} \mid \bs{X}), \bs{X}\right] = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{X}) = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \vc(\bs{X}) = \cov(\bs{Y}, \bs{X}) ] Conversely, suppose that ( \bs{U} = \bs{a} + \bs{b} \bs{X} ) for some ( \bs{a} \in \R^n ) and ( \bs{b} \in \R^{n \times m} ), and that ( \E(\bs{U}) = \E(\bs{Y}) ) and ( \cov(\bs{U}, \bs{X}) = \cov(\bs{Y}, \bs{X}) ). From the second equation, again using linearity and the uncorrelated property of constant vectors, we get ( \bs{b} \cov(\bs{X}, \bs{X}) = \cov(\bs{Y}, \bs{X}) ) and therefore ( \bs{b} = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) ). Then from the first equation, ( \bs{a} + \bs{b} \E(\bs{X}) = \E(\bs{Y}) ) so ( \bs{a} = \E(\bs{Y}) - \bs{b} \E(\bs{X}) ).
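The predictor ( L(\bs{Y} \mid \bs{X}) ) is straightforward to compute from sample moments. The NumPy sketch below (the simulated model for ( \bs{Y} ) is arbitrary) builds it and confirms that the residual ( \bs{Y} - L(\bs{Y} \mid \bs{X}) ) is uncorrelated with ( \bs{X} ):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
X = rng.normal(size=(n, 2))
Y = np.column_stack([X[:, 0] + 0.5 * X[:, 1], X[:, 1] ** 2]) \
    + rng.normal(size=(n, 2))

EX, EY = X.mean(axis=0), Y.mean(axis=0)
vc_X = np.cov(X, rowvar=False, bias=True)
cov_YX = ((Y - EY).T @ (X - EX)) / n          # cov(Y, X), a 2x2 matrix
b = cov_YX @ np.linalg.inv(vc_X)
# L(Y | X) = E(Y) + cov(Y, X) vc^{-1}(X) (X - E(X))
L = EY + (X - EX) @ b.T

resid = Y - L
cov_resid_X = ((resid - resid.mean(axis=0)).T @ (X - EX)) / n
assert np.allclose(cov_resid_X, 0, atol=1e-8)  # residual uncorrelated with X
```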
A simple corollary is that ( \bs{Y} - L(\bs{Y} \mid \bs{X}) ) is uncorrelated with any affine function of ( \bs{X} ):
If ( \bs{U} ) is an affine function of ( \bs{X} ) then
( \cov\left[\bs{Y} - L(\bs{Y} \mid \bs{X}), \bs{U}\right] = \bs{0} )
( \E\left(\langle \bs{Y} - L(\bs{Y} \mid \bs{X}), \bs{U}\rangle\right) = 0)
Details:
Suppose that ( \bs{U} = \bs{a} + \bs{b} \bs{X} ) where ( \bs{a} \in \R^n ) and ( \bs{b} \in \R^{n \times m} ). For simplicity, let ( \bs{L} = L(\bs{Y} \mid \bs{X}) )
From the previous result, ( \cov(\bs{Y}, \bs{X}) = \cov(\bs{L}, \bs{X}) ). Hence using linearity, [ \cov\left(\bs{Y} - \bs{L}, \bs{U}\right) = \cov(\bs{Y} - \bs{L}, \bs{a}) + \cov(\bs{Y} - \bs{L}, \bs{X}) \bs{b}^T = \bs{0} + \left[\cov(\bs{Y}, \bs{X}) - \cov(\bs{L}, \bs{X})\right] \bs{b}^T = \bs{0} ]
Recall that (\langle \bs{Y} - \bs{L}, \bs{U}\rangle) is the trace of ( \cov(\bs{Y} - \bs{L}, \bs{U}) ) and hence has expected value 0 by part (a).
The variance-covariance matrix of ( L(\bs{Y} \mid \bs{X}) ), and its covariance matrix with ( \bs{Y} ) turn out to be the same, again analogous to the single variable case.
Additional properties of ( L(\bs{Y} \mid \bs{X}) ):
( \cov\left[\bs{Y}, L(\bs{Y} \mid \bs{X})\right] = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y}) )
( \vc\left[L(\bs{Y} \mid \bs{X})\right] = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y}) )
Details:
Recall that ( L(\bs{Y} \mid \bs{X}) = \E(\bs{Y}) + \cov(\bs{Y},\bs{X}) \vc^{-1}(\bs{X}) \left[\bs{X} - \E(\bs{X})\right] )
Using basic properties of covariance, [ \cov\left[\bs{Y}, L(\bs{Y} \mid \bs{X})\right] = \cov\left[\bs{Y}, \bs{X} - \E(\bs{X})\right] \left[\cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X})\right]^T = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y}) ]
Using basic properties of variance-covariance, [ \vc\left[L(\bs{Y} \mid \bs{X})\right] = \vc\left[\cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \bs{X} \right] = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \vc(\bs{X}) \left[\cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X})\right]^T = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y})]
Next is the fundamental result that ( L(\bs{Y} \mid \bs{X}) ) is the affine function of ( \bs{X} ) that is closest to ( \bs{Y} ) in the mean square sense.
Suppose that ( \bs{U} \in \R^n ) is an affine function of ( \bs{X} ). Then
( \E\left(\|\bs{Y} - L(\bs{Y} \mid \bs{X})\|^2\right) \le \E\left(\|\bs{Y} - \bs{U}\|^2\right) )
Equality holds in (a) if and only if ( \bs{U} = L(\bs{Y} \mid \bs{X}) ) with probability 1.
Details:
Again, let ( \bs{L} = L(\bs{Y} \mid \bs{X}) ) for simplicity and let ( \bs{U} \in \R^n ) be an affine function of ( \bs{X}).
Using the linearity of expected value, note that [ \E\left(\|\bs{Y} - \bs{U}\|^2\right) = \E\left[\|(\bs{Y} - \bs{L}) + (\bs{L} - \bs{U})\|^2\right] = \E\left(\|\bs{Y} - \bs{L}\|^2\right) + 2 \E(\langle \bs{Y} - \bs{L}, \bs{L} - \bs{U}\rangle) + \E\left(\|\bs{L} - \bs{U}\|^2\right) ] But ( \bs{L} - \bs{U} ) is an affine function of ( \bs{X} ) and hence the middle term is 0 by our previous corollary. Hence ( \E\left(\|\bs{Y} - \bs{U}\|^2\right) = \E\left(\|\bs{L} - \bs{Y}\|^2\right) + \E\left(\|\bs{L} - \bs{U}\|^2\right) \ge \E\left(\|\bs{L} - \bs{Y}\|^2\right) )
From (a), equality holds in the inequality if and only if ( \E\left(\|\bs{L} - \bs{U}\|^2\right) = 0 ) if and only if ( \P(\bs{L} = \bs{U}) = 1 ).
The variance-covariance matrix of the difference between ( \bs{Y} ) and the best affine approximation is given in the next theorem.
( \vc\left[\bs{Y} - L(\bs{Y} \mid \bs{X})\right] = \vc(\bs{Y}) - \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y}) )
Details:
Again, we abbreviate ( L(\bs{Y} \mid \bs{X}) ) by (\bs{L}). Using basic properties of variance-covariance matrices, [ \vc(\bs{Y} - \bs{L}) = \vc(\bs{Y}) - \cov(\bs{Y}, \bs{L}) - \cov(\bs{L}, \bs{Y}) + \vc(\bs{L}) ] But ( \cov(\bs{Y}, \bs{L}) = \cov(\bs{L}, \bs{Y}) = \vc(\bs{L}) = \cov(\bs{Y}, \bs{X}) \vc^{-1}(\bs{X}) \cov(\bs{X}, \bs{Y}) ). Substituting gives the result.
The actual mean square error when we use ( L(\bs{Y} \mid \bs{X}) ) to approximate ( \bs{Y} ), namely ( \E\left(\left\|\bs{Y} - L(\bs{Y} \mid \bs{X})\right\|^2\right) ), is the trace (sum of the diagonal entries) of the variance-covariance matrix above. The function of (\bs{x}) given by [ L(\bs{Y} \mid \bs{X} = \bs{x}) = \E(\bs{Y}) + \cov(\bs{Y},\bs{X}) \vc^{-1}(\bs{X}) \left[\bs{x} - \E(\bs{X})\right] ] is known as the (distribution) linear regression function. If we observe (\bs{x}) then (L(\bs{Y} \mid \bs{X} = \bs{x})) is our best affine prediction of (\bs{Y}).
Multiple linear regression is more powerful than it may at first appear, because it can be applied to non-linear transformations of the random vectors. That is, if ( g: \R^m \to \R^j ) and ( h: \R^n \to \R^k ) then ( L\left[h(\bs{Y}) \mid g(\bs{X})\right] ) is the affine function of ( g(\bs{X}) ) that is closest to ( h(\bs{Y}) ) in the mean square sense. Of course, we must be able to compute the appropriate means, variances, and covariances.
Moreover, non-linear regression with a single, real-valued predictor variable can be thought of as a special case of multiple linear regression. Thus, suppose that (X) is the predictor variable, (Y) is the response variable, and that ((g_1, g_2, \ldots, g_n)) is a sequence of real-valued functions. We can apply the results of this section to find the linear function of (\left(g_1(X), g_2(X), \ldots, g_n(X)\right)) that is closest to (Y) in the mean square sense. We just replace (X_i) with (g_i(X)) for each (i). Again, we must be able to compute the appropriate means, variances, and covariances to do this.
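This reduction is mechanical in practice: build the feature vector ( \left(g_1(X), \ldots, g_n(X)\right) ) and apply the affine-predictor formula to it. A NumPy sketch with the illustrative choice ( g_1(X) = X ), ( g_2(X) = X^2 ) and a made-up response:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
X = rng.uniform(size=n)
Y = np.sin(3 * X) + 0.1 * rng.normal(size=n)   # a non-linear response in X

G = np.column_stack([X, X ** 2])               # features (g1(X), g2(X)) = (X, X^2)
EG, EY = G.mean(axis=0), Y.mean()
vc_G = np.cov(G, rowvar=False, bias=True)
cov_YG = ((Y - EY) @ (G - EG)) / n
b = cov_YG @ np.linalg.inv(vc_G)
L = EY + (G - EG) @ b                          # best predictor affine in (X, X^2)

# The residual is uncorrelated with both features, as the theory predicts.
assert np.allclose((Y - L) @ (G - EG) / n, 0, atol=1e-8)
```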
Examples and Applications
Suppose that ((X, Y)) has probability density function (f) defined by (f(x, y) = x + y) for (0 \le x \le 1), (0 \le y \le 1). Find each of the following:
(\E(X, Y))
(\vc(X, Y))
Details:
(\left(\frac{7}{12}, \frac{7}{12}\right))
(\left[\begin{matrix} \frac{11}{144} & -\frac{1}{144} \\ -\frac{1}{144} & \frac{11}{144}\end{matrix}\right])
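These answers can be double-checked by numerical integration. A NumPy midpoint-rule sketch over the unit square (the grid size is arbitrary):

```python
import numpy as np

# Midpoint-rule quadrature on (0,1)^2 for f(x, y) = x + y.
m = 1000
u = (np.arange(m) + 0.5) / m
x, y = np.meshgrid(u, u, indexing="ij")
f = x + y
w = 1.0 / m ** 2                 # area of each grid cell

EX = np.sum(x * f) * w           # E(X)
EX2 = np.sum(x ** 2 * f) * w     # E(X^2)
EXY = np.sum(x * y * f) * w      # E(XY)
assert abs(EX - 7 / 12) < 1e-5               # E(X) = 7/12
assert abs(EX2 - EX ** 2 - 11 / 144) < 1e-5  # var(X) = 11/144
assert abs(EXY - EX * EX + 1 / 144) < 1e-5   # cov(X, Y) = -1/144
```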
Suppose that ((X, Y)) has probability density function (f) defined by (f(x, y) = 2 (x + y)) for (0 \le x \le y \le 1). Find each of the following:
(\E(X, Y))
(\vc(X, Y))
Details:
(\left(\frac{5}{12}, \frac{3}{4}\right))
(\left[\begin{matrix} \frac{43}{720} & \frac{1}{48} \\ \frac{1}{48} & \frac{3}{80} \end{matrix} \right])
Suppose that ((X, Y)) has probability density function (f) defined by (f(x, y) = 6 x^2 y) for (0 \le x \le 1), (0 \le y \le 1). Find each of the following:
(\E(X, Y))
(\vc(X, Y))
Details:
Note that (X) and (Y) are independent.
(\left(\frac{3}{4}, \frac{2}{3}\right))
(\left[\begin{matrix} \frac{3}{80} & 0 \\ 0 & \frac{1}{18} \end{matrix} \right])
Suppose that ((X, Y)) has probability density function (f) defined by (f(x, y) = 15 x^2 y) for (0 \le x \le y \le 1). Find each of the following:
(\E(X, Y))
(\vc(X, Y))
(L(Y \mid X))
(L\left[Y \mid \left(X, X^2\right)\right])
Sketch the regression curves on the same set of axes.
Details:
(\left( \frac{5}{8}, \frac{5}{6} \right))
(\left[ \begin{matrix} \frac{17}{448} & \frac{5}{336} \\ \frac{5}{336} & \frac{5}{252} \end{matrix} \right])
(\frac{10}{17} + \frac{20}{51} X)
(\frac{49}{76} + \frac{10}{57} X + \frac{7}{38} X^2)
Suppose that ((X, Y, Z)) is uniformly distributed on the region (\left\{(x, y, z) \in \R^3: 0 \le x \le y \le z \le 1\right\}). Find each of the following:
(\E(X, Y, Z))
(\vc(X, Y, Z))
(L\left[Z \mid (X, Y)\right])
(L\left[Y \mid (X, Z)\right])
(L\left[X \mid (Y, Z)\right])
( L\left[(Y, Z) \mid X\right] )
Details:
(\left(\frac{1}{4}, \frac{1}{2}, \frac{3}{4}\right))
(\left[\begin{matrix} \frac{3}{80} & \frac{1}{40} & \frac{1}{80} \\ \frac{1}{40} & \frac{1}{20} & \frac{1}{40} \\ \frac{1}{80} & \frac{1}{40} & \frac{3}{80} \end{matrix}\right])
(\frac{1}{2} + \frac{1}{2} Y). Note that there is no (X) term.
(\frac{1}{2} X + \frac{1}{2} Z). Note that this is the midpoint of the interval ([X, Z]).
(\frac{1}{2} Y). Note that there is no (Z) term.
( \left[\begin{matrix} \frac{1}{3} + \frac{2}{3} X \\ \frac{2}{3} + \frac{1}{3} X \end{matrix}\right] )
Suppose that (X) is uniformly distributed on ((0, 1)), and that given (X), random variable (Y) is uniformly distributed on ((0, X)). Find each of the following:
(\E(X, Y))
(\vc(X, Y))
Details:
(\left(\frac{1}{2}, \frac{1}{4}\right))
(\left[\begin{matrix} \frac{1}{12} & \frac{1}{24} \\ \frac{1}{24} & \frac{7}{144} \end{matrix} \right]) |
189557 | https://www.facebook.com/100044385285417/videos/hip%C3%B3tesis-teor%C3%ADa-y-ley/1065880750964742/ | Facebook
Hipótesis, Teoría y Ley (Hypothesis, Theory, and Law)
Javier Santaolalla "Date un Voltio"
|
189558 | https://artofproblemsolving.com/wiki/index.php/Absolute_value?srsltid=AfmBOooEqynprwNuc-U3ADcO51GbOBwjnQ2CWCaNlkiwCEW43jgE_A4W | Art of Problem Solving
Absolute value - AoPS Wiki
Absolute value
The absolute value of a real number \(x\), denoted \(|x|\), is the unsigned portion of \(x\). Geometrically, \(|x|\) is the distance between \(x\) and zero on the real number line.
The absolute value function exists among other contexts as well, including complex numbers.
Real numbers
When \(x\) is real, \(|x|\) is defined as \[ |x| = \begin{cases} x & \text{if } x \ge 0 \\ -x & \text{if } x < 0. \end{cases} \] For all real numbers \(a\) and \(b\), we have the following properties:
\(|a| = \sqrt{a^2}\) (Alternative definition)
\(|a| \ge 0\) (Non-negativity)
\(|a| = 0 \iff a = 0\) (Positive-definiteness)
\(|ab| = |a| \, |b|\) (Multiplicativeness)
\(|a + b| \le |a| + |b|\) (Triangle Inequality)
\(|a - b| = |b - a|\) (Symmetry)
Note that \(|a| = \max(a, -a)\) and \(-|a| \le a \le |a|\).
Complex numbers
For complex numbers, the absolute value is defined as \(|z| = \sqrt{a^2 + b^2}\), where \(a\) and \(b\) are the real and imaginary parts of \(z = a + bi\), respectively. It is equivalent to the distance between \(z\) and the origin, and is usually called the complex modulus.
Note that \(|z|^2 = z \bar{z}\), where \(\bar{z}\) is the complex conjugate of \(z\).
Examples
If \(|x| = k\) for some nonnegative real number \(k\), then \(x = k\) or \(x = -k\).
If \(|x - a| = k\) for some real numbers \(a\), \(k\) with \(k \ge 0\), then \(x - a = k\) or \(x - a = -k\), and therefore \(x = a + k\) or \(x = a - k\).
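A short illustration of these facts (added here; `a` and `k` below are illustrative values, not from the article). Python's built-in `abs` implements both the real absolute value and the complex modulus.

```python
import math

# Real case: |x| is the unsigned portion of x.
assert abs(-7) == 7 and abs(7) == 7

# Complex case: |a + bi| = sqrt(a^2 + b^2), the distance to the origin.
z = 3 - 4j
assert abs(z) == 5.0
# |z|^2 = z * conj(z) (the product is real, up to rounding).
assert math.isclose(abs(z) ** 2, (z * z.conjugate()).real)

# Solving |x - a| = k: x = a + k or x = a - k.
a, k = 2.0, 5.0
roots = [a + k, a - k]
assert all(abs(x - a) == k for x in roots)
print(roots)  # [7.0, -3.0]
```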
Problems
Find all real values of if .
Find all real values of if .
(AMC 12 2000) If \(|x - 2| = p\), where \(x < 2\), then find \(x - p\).
See Also
Magnitude
Norm
Valuation
Retrieved from the Art of Problem Solving Wiki. |
189559 | https://www.youtube.com/watch?v=2_21erD-nBg | Calculus 3 - Intro To Vectors
The Organic Chemistry Tutor
9690000 subscribers
19160 likes
Description
1312260 views
Posted: 23 Aug 2018
This calculus 3 video tutorial provides a basic introduction into vectors. It contains plenty of examples and practice problems.
Vectors - Free Formula Sheet:
3D Coordinate System:
3D Distance Formula:
Equation of a 3D Sphere:
Calculus 3 - Intro to Vectors:
Calculus 3 - The Dot Product:
Angle Between Two Vectors:
Parallel & Orthogonal Vectors:
Direction Cosines and Vectors:
Calculus 3 - Vector Projections:
Cross Product of 2 Vectors:
Area - Vector Cross Product:
Triple Scalar Product:
Vector Equations of Lines:
The Equation of a Plane:
Planar Equation - 3 Points:
Final Exams and Video Playlists:
Full-Length Videos and Worksheets:
590 comments
Transcript:
Intro in this video we're going to talk about vectors but what is a vector really and let's distinguish it from something called a scalar quantity so what's the difference between a vector quantity and a scalar quantity well a scalar quantity only has a magnitude a vector quantity has both a magnitude but it also has direction so for instance speed is considered to be a scalar quantity however velocity is a vector quantity so let's say if i'm driving a car going 40 meters per second what i told you is my speed i only gave you a magnitude just a number but if i say i'm going 40 meters per second north now i've given you my velocity i gave you both speed and direction so the 40 meters per second represents the magnitude it tells me i mean it tells you how fast i'm going and north tells you the direction of where i'm going and so that's basically a vector it gives you it combines a magnitude with a direction another example would be force let's say if i have a box i can apply a force of 300 newtons directed east or i can apply a force of 200 newtons directed north so let's say the first example 300 newtons that's the magnitude of the force and the direction is that's the second component which makes it a vector so make sure you understand that a vector has both magnitude and direction another scalar quantity would be temperature you can't apply direction to temperature if i ask someone hey what is the temperature today well let's see it is 80 degrees east what that would make no sense you can't say you can't apply a temperature with a direction it doesn't make sense so temperature would be considered a scalar quantity you can't say 80 degrees fahrenheit north or something like that it just doesn't it doesn't apply and so that's a scalar quantity another one would be mass so let's say you want to find out Mass how much matter is in the object so this box has a mass of let's say 50 kilograms you can't apply a direction to that you can't say it's 50 kilograms north or 80 
kilograms south it just doesn't work and so mass is a scalar quantity so if you want to determine if something is scalar or a vector ask yourself can i apply direction to this whatever it is that i'm considering so like volume is volume a scalar quantity or is it a vector quantity what would you say so let's say this container can hold 50 milliliters of fluid can i apply direction to that can i say it has 50 milliliters of fluid north that wouldn't make sense so therefore volume cannot be a vector it's a scalar quantity and so hopefully that highlights the difference between what is scalar and what can be a vector Directed Line Segment another way to think of a vector is to think of a directed line segment so let's say if we have two points point a and point b a is the initial point b is the terminal point so we can draw a vector from a to b that looks like this and so you can write it this way with the arrow pointed at b so that's vector a b the left of the arrow indicates the magnitude now where the arrow is pointed tells you the direction so let's try another one so let's say this is point c and this is point d so based on this arrow which one is the initial point and which one is the terminal point so the initial point is where the arrow starts and where it it ends so to speak that is the terminal point so we can call this vector vector cd with the arrow pointed at the letter d or towards d Magnitude and Angle now there's two common ways in which we could describe a vector and that is we can give its a magnitude and its angle so let's say this is the x-axis this is the y-axis and let's say this vector has a length of 5 and it's directed at a 40 degree angle let's call it vector v so the magnitude of vector v is 5. 
the magnitude represents the length of the vector how long it is so this vector would have a magnitude of 10 because it's longer and this vector will probably have a magnitude of 2 or 3 because it's shorter and so the length of the directed line segment tells you the magnitude so that's one way in which you can describe a vector using its magnitude and an angle Components another way in which you could describe a vector is using components let's say we have the vector a and we have two comma three so this tells us that the x component of the vector you can say a sub x that's two and the y component is three so let's represent it graphically so here's the y-axis here's the x-axis so starting from the origin we're going to travel two units to the right and three units up so here is the x component it's two units along the x axis and the y component is three units parallel to the y axis and that will give us this vector vector a so you can describe a vector using the x and y components of that vector Point vs Vector now there's something that you need to be able to do and that is you need to be able to distinguish a point from a vector a point has parentheses for example this is the point 3 comma 4. a vector let's say vector b has these inequality symbols like this one let's say this is it has the components 4 and 5. 
so this would be a vector you wouldn't use parentheses to represent a vector so if you want to represent a vector in component form you need to use those arrows instead of a set of parenthesis keep that in mind so if we were to plot it a point would be just a point so point p is right here it's at three comma four and that's it it's not a directed line segment it's simply a dot in space now a vector on the other hand is different let's graph vector b on this graph so we're going to start from the origin and then we're going to travel 4 units to the right because that's the x component and then we're going to go up 5 units so let me use a different color so this is 4 this is 5 and vector b starts from the initial point which we just based it from the origin and it ends at the terminal point which in this case is the point four comma five but the initial point is zero comma zero and so that's a vector b it has an x component of four and a y component of five Practice Problem now let's work on a practice problem vector v has the initial point one comma negative two and terminal point five comma one part a find the component form of vector v so what we want to be able to do is we want to find the component form of vector v we need to determine its x component and its y component how can we do so given the initial point and the terminal point what we need to do in order to find it is take the difference between the x values of the terminal point and the initial point so v x is going to be five minus one to find v y it's simply the difference between the y coordinates of the terminal point and the initial point so think of the final minus initial we need to subtract the terminal point minus the initial point so 1 minus negative two five minus one is four and one minus negative two that's basically one plus two which is three so the component form of vector v is 4 negative 3. 
let's illustrate this with a graph so the initial point is that one negative two so here it is and the terminal point is five one i was going to plot six one here's five one so this is the initial point and this is the terminal point now to go from the initial point to the terminal point how many units do we need to travel along the x direction or parallel to the x-axis the initial point has an x-value of one and the terminal point has an x-value of five so we need to travel from one to five parallel to the x-axis so we have to travel four units to the right and that gives us the x-component as you can see it's four so that's a v-x now the initial point has a y value of let's see if i can fit it here negative two and a terminal point has a y value of one so to go from negative two to 1 we need to travel up 3 units and so this is the y component this is the x component now how can we determine the magnitude of vector v remember the vector is right here i'm going to highlight it in red how can we determine the length of that vector since we have a right triangle we could use the pythagorean theorem to determine the length of the hypotenuse of the right triangle now if you recall the hypotenuse is c and the legs are a and b and so we have the formula c squared is equal to a squared plus b squared so c is the square root of a squared plus b squared therefore if we wish to find the magnitude v which is the length of the vector which is the same as c we could use this formula it's the square root of v x squared plus v y squared in this example v x corresponds to a v y corresponds to b so to finish part b the magnitude of vector v is going to be the square root of four squared plus three squared and four squared is sixteen three squared is nine sixteen plus 9 is 25 and the square root of 25 is 5. 
so basically we have the 3 4 5 right triangle and so that's how you could find the magnitude and the component form of the vector v if you're given the initial point and the terminal point let's try this problem vector v has initial point a negative four comma one and terminal point b eight comma six and vector u has initial point c negative seven comma negative fifteen and terminal point d three comma nine determine if the two vectors are equivalent so how can we determine if two vectors are equivalent well they need to have the same magnitude Component Forms and the same direction so let's begin by finding the component forms of vector v and u so the component form for vector v let's start with v x is going to be the difference between the x values of the terminal point and the initial point so it's going to be 8 minus negative 4. and v y the y component of v is going to be the y component or the y coordinate of the terminal point minus the y coordinate of the initial point and so that's going to be 6 minus 1. and so we have 8 plus 4 which is 12 and 6 minus 1 which is 5. so this is the component form of vector v now let's do the same for vector u the x component of the terminal point is three and the x component of the initial point is negative seven so we're going to subtract three and a negative 7 together now the y component of the terminal point is 9 and the y component of the initial point is negative 15. 
so it's nine minus negative fifteen so we're gonna have three plus seven and nine plus fifteen three plus seven is ten nine plus fifteen is twenty four so this is the component form of vector u here is a question for you what is another way in which we can describe vectors v and u based on the information that we were given another way in which we could describe it is using the initial and terminal points vector u has the initial point c and the terminal point d therefore vector u can be called vector c d and the vector v has the initial point a and the terminal point b so vector v is also vector a b which is equal to 12 comma 5. so now that we have this information let me get rid of a few things now that we have the component form of vector v and a vector u can we say that these two vectors are equivalent so looking at them the numbers are different which means that the vectors will be different but if we want to determine if two vectors are equivalent we need to determine the magnitude and the direction let's start with the magnitude first let's find the magnitudes of vector v and vector u the magnitude of vector v is going to be the square root of 12 squared plus 5 squared 12 squared is 144 5 squared is 29 and 144 plus i think i said 29 5 squared is 25. the 144 plus 25 is 169 and the square root of 169 is 13. 
so that's the magnitude of vector v let's do the same for vector u it's going to be the square root of 10 squared plus 24 squared now 10 squared is 124 squared that's 576 i believe so 100 plus 576 is 676 which i'm going to use a calculator at this point the square root of 676 is 26 so the magnitudes of these two vectors are different because the components are different but what about the direction do they have the same direction a quick way to determine if the two vectors have the same direction is to determine the slope of each vector so if you recall the slope is the change in y values divided by the change in the x values now for vector v the change in the y values looking at the initial point and the terminal point it's six minus one and that basically gave us v y which was five now the change in the x values for vector v it's eight minus negative four so you could say x two is eight x one is negative four and so that gave us twelve which is the x component of v so basically the slope of vector v it's simply v y over v x so i'm going to write m sub v the slope of vector v so it's v y over v x which is 5 over 12. the slope for vector u is going to be u y over u x so the y component of vector u is 24 and the x component of vector u is 10. now we can reduce 24 and 10. 
they're both even numbers so if we divide 24 by 2 that's twelve if we divide ten by two it's five and so these two vectors they don't have the same slope this is five over twelve this is twelve over five so they're completely different in order for two vectors to be equivalent they need to have number one the same magnitude which these two vectors do not have and at the same time they need to have the same slope which these two vectors also do not have so right from the beginning you can tell if they're equivalent just by looking at the component forms of the two vectors but if you don't have that if you have the magnitude and the direction then you can look at that so if two vectors have the same magnitude and the same slope they will be equivalent or if they have the same components they will also be equivalent to each other let's talk about adding vectors Adding Vectors so let's say this is vector a and this is a vector b and let's say c is the sum of vectors a and b how can we add the two vectors together so what you need to do to graphically add the vectors is connect them from head to tail so if we draw vector a and then we draw vector b right after a so basically we want to take the initial point of b and connect it to the terminal point of a and to draw vector c start from the initial point of a and let me use a different color draw the vector towards the terminal point of b and this is vector c that is the sum of vectors a and b now you could start with a or you could start with b so let's say if we started with b if we drew b first and then afterward we drew a we would still get the same vector c which is going in the same direction and has the same left so the order in which you do it doesn't matter now let's say if we want to draw a vector d and we're going to define it as being b minus a how can we draw this vector well let's start with b we have positive b which is the original vector and when you see b minus a it's best to view it as b plus a 
negative a so we're going to add negative a to vector b so if this is a what's negative a when you multiply a vector by negative 1 the direction of the vector will change it's going to change by 180 degrees so all you need to do is reverse the vector so vector a is going to be going this way so now to draw a vector d we're going to start from the initial point of vector b to the terminal point of vector a and so that's going to be vector d so that's how you can subtract vectors b and a from each other now let's say that vector e is 2a plus b go ahead and draw this vector what's going to happen if we multiply vector a by a scalar quantity which is just a number in this case 2. so what is 2a all it is it's basically doubling the length of vector a so if this is a 2a is going to be twice the length and then we need to add vector b to it now to draw vector e we are going to draw it from the initial point of the first vector to the terminal point of the second vector and so this is vector e it's the sum of two a and b so now you know how to graphically add and subtract two vectors together and you could do it with three vectors if you want to so let's say that vector d is the sum of vectors a b and c and we're going to say this is a this is b and let's say this is c so let's start with a and then connect it to b and then connect b to c so starting with the initial point of the first vector we're going to draw a vector to the terminal point of the last one and so this is vector d that's how you can add three vectors together now let's work on some vector operations so let's say if we're given vector a and it's going to be 4 negative and let's say that vector b in its component form is negative three comma five so perform the following operations let's calculate the value of 2a plus 3b and also let's say 5a minus 4b go ahead and try those two problems feel free to pause the video so let's start with the first one 2a plus 3b so the first thing i'm going to do is replace a 
with what it's equal to so a is equal to 4 comma negative 2 and then i'm going to replace a b with the stuff that it's equal to which is negative three comma five and then i'm going to multiply a by two so i'm going to multiply the x component by two two times four is eight and i'm gonna multiply the y component by two two times negative two is negative four now i'm gonna do the same thing with b so three times negative three that's negative nine and three times five is fifteen now when adding vectors together you need to add the corresponding x components together and the corresponding y components together so eight plus negative nine is negative one and negative four plus fifteen is eleven so the answer for the first part that is part a is this it's negative of one comma 11. now let's do the same for part b so we have 5 a minus 4b so let's begin by replacing a with four comma negative two and let's replace b with negative three comma five so let's begin by distributing the five five times four is twenty and five times negative two is negative 10. now i'm going to distribute the negative sign so i'm going to put a plus here negative 4 times negative 3 is 12 and negative 4 times 5 is negative twenty if you don't use the negative sign it will look like this four times negative three which is negative twelve and four times five is twenty so this is what you should have if you don't change the negative sign to a positive sign but i like to put a positive sign here so i'm gonna distribute the negative to the stuff that's on the inside that's just the way i like to do it to avoid making mistakes so now let's add the corresponding x components together 20 plus 12 is 32 and negative 10 plus negative 20 is negative 30. 
and so this is the answer this is equal to 5 a minus 4 b that's it Position Vector now let's talk about position vectors what is a position vector how would you describe it a position vector is any vector that has its initial point placed at the origin so the initial point has the coordinates 0 0. let's call this vector v and let's say the terminal point of vector v is 3 comma 4. so vector v is a position vector but notice that vector v its components is the same as the coordinates of the terminal point because if you subtract the x coordinates three minus zero will still give you three and if you subtract the y coordinates four minus zero will still give you four so when dealing with position vectors the coordinates of the terminal point will be the same as the components of the position vector due to the fact that the initial point is placed at the origin and that's basically what you need to know about position vectors so to review remember a position vector is any vector that has its initial point placed at the origin Unit Vector now what about unit vectors what is a unit vector think of the keyword units what is the basic unit of any number most numbers are based on a unit of one and think of the unit circle when you hear the word unit circle what do you think of the unit circle is basically a circle with a radius of 1. likewise a unit vector is a vector with a magnitude of one so some textbooks will describe a unit vector as just being u sometimes you'll see this symbol associated with a unit vector but what you need to know is that the magnitude of any unit vector is always one that's what you want to take from this now how do we go about finding unit vectors from other vectors so for instance let's say if we have some generic vector vector v and let's say it has a magnitude of 4. 
what do we need to do to find the unit vector of vector v now recall that every unit vector has a magnitude of one so to find the unit vector of vector v we just need to divide vector v by four because four divided by four is one and so that will give us a shorter vector that's in the same direction as v but with a magnitude of one and this is going to be called the unit vector of v so therefore to find any unit vector you need to take the vector let's say if we want to find the unit vector v take that vector and divide it by its magnitude and that's how you could find the unit vector of any generic vector now it's important to be familiar with this equation because if we multiply both sides by the magnitude of vector v we are going to get this vector v can be described as the product of the magnitude of vector v times its unit vector so what does this mean recall that in the beginning of the video we said that vectors can be described using two things a vector has both magnitude and direction so this represents the magnitude of the vector therefore the unit vector describes the direction of vector v it tells us where the vector is going and we could extend the length of that vector to any other number we can increase the left decrease the left we just need to multiply by some magnitude so all vectors can be described as the product of the magnitude of the vector times its unit vector so it's a combination of magnitude and direction now let's work on this problem Find Unit Vector find the unit vector in the same direction of v so how can we do that what's the first thing that we should do what i recommend doing is write in the formula so the unit vector u is going to be the vector v divided by the magnitude of v and v in its component form is 4 comma negative 3. 
and now the magnitude of v is going to be the square root of four squared plus negative three squared and we know four squared is sixteen negative three squared is positive nine 16 plus 9 is 25 and the square root of 25 is 5. so this becomes 4 negative 3 all divided by 5. and now what you could do is divide each number by five so four divided by five is just four-fifths and then we have negative three divided by five and so this is the unit vector of vector v it's four over five comma negative three over five and if you want to check it you need to show that the magnitude of this vector is one and let's do that let me make some space first so if i take the square root of 4 over 5 squared plus negative 3 over 5 squared i need to get 1 in order to prove that this is indeed a unit vector four squared is sixteen five squared is twenty-five negative three squared is nine and sixteen plus nine is twenty-five so we have twenty-five over twenty-five which is 1 and the square root of 1 is 1. so we can confirm that this is indeed is a unit vector so now that you know how to find a unit vector i think it's a good time to talk about standard unit vectors so what exactly is a standard unit vector there's three of them that you need to be familiar with perhaps you've seen them before i j and k each of these are unit vectors with a magnitude of one so i has an x value of 1 but a y value and a z value of 0 when dealing with 3 dimensional systems j is a unit vector with a y value of one and k is a unit vector with a length of one along the z axis so make sure you associate i with the x coordinate system j with y and k with z and remember that each of these vectors have a length of one so for a two-dimensional system where this is the x-axis and this is the y-axis i is going to have a length of 1 along the x-axis so that's i and j is a unit vector with a length of 1 along the y-axis and z is the same thing but along the z axis so how do we apply this Vector V well let's say that we 
have vector v and in component form it's four comma negative seven how can we represent this vector using the standard unit vectors so we can also say that vector v it's four times the unit vector i and then plus negative seven times the unit vector j or we could simply say it's four i minus seven j so this is the x component of vector v and this is the y component so the x component v x is basically 4i and the y component of vector v is negative 7 j and so you could represent any vector in its component form or using standard unit vectors so let's try another one let's say we have vector w Vector W and it's 3 comma negative 5 comma 4. go ahead and represent this vector using the standard unit vectors so vector w can be written as 3 i minus 5 j plus 4 k so wx is 3i wy is negative 5j and wz the z component of vector w is 4k but this is the answer though Vector Operations now let's say that vector a is 3i minus 5j and vector b is 2i plus 6j and let's define vector w as being 4a minus 3b go ahead and find vector w using the standard unit vectors so go ahead and perform the vector operations so what we're going to do is we're going to replace a with what it's equal to 3i minus 5j and then we're going to replace b with 2i plus 6j and so we have 4 times 3i which is 12i and 4 times negative 5j which is negative 20j and negative 3 times 2i that's negative 6i negative 3 times 6j that's negative 18j and now all we need to do is add like terms 12 minus 6 is 6 and then negative 20 minus 18 is negative 38 and as you can see when using the standard unit vectors it's very easy to perform vector operations it reduces to simple algebra now you do have to be careful not to make mistakes so i recommend doing everything one step at a time because it's easy to make a mistake or miss the negative sign but that's how you can perform vector operations with the standard unit vectors Unit Circle now let's talk about the unit circle and how it relates to unit vectors but let's focus on the 
first quadrant of the circle now the unit circle is a circle with a radius of one so the distance between the center and any point on a circle will always be one which is the same as the length of a unit vector so basically the radius of the unit circle we could say is the unit vector now what is the x component of the unit vector and what is the y component of it now this unit vector will have an angle with the x axis and if you recall from trigonometry the x component of the unit circle is cosine and the y component is sine so we can say that u x is equal to cosine theta and u y is equal to sine theta so as we represent the unit vector u as being ux comma u y in its component form we can also say that it's basically cosine theta and sine theta now earlier we said that we can represent any vector as being the product of its magnitude times the unit vector and so the unit vector is basically just cosine and sine so the vector v is going to be the magnitude of v times cosine and since that's the x component we're going to attach one of the standard unit vectors associated with the x component that's i and then plus we multiply v by sine and that's the y component so we're going to attach a j to it so we could find any vector if we know the magnitude and the angle so we can express it as the sum of the standard unit vectors i j and if we want to k as well now let's work on number five Unit Vector V write vector v as a linear combination of the unit vectors i and j given that vector v has a magnitude of 16 and an angle of 30 degrees with the positive x-axis so let's say this is the y-axis here is the x-axis and here's the vector v with a magnitude of 16. 
and it makes a 30 degree angle with the positive x-axis let me make this longer so what we're looking for are the x and y components we need to determine v x and v y but let's use the formula that we wrote earlier so vector v is going to be the magnitude times cosine that's going to give us the x component and to get the y component it's going to be the magnitude times sine so make sure you understand this v x it's going to be v times cosine theta and v y is v times sine theta so we have v that's 16 we're going to multiply it by cosine 30. and to get the y component we're going to multiply sine 30 by 16. and this should be a j what is cosine 30 cosine 30 is the square root of 3 over 2. let's not forget to put the unit vector i and sine 30 is one half so now 16 divided by 2 is 8 so i'm going to have 8 square root 3 times i and half of 16 is 8 as well so that's going to be plus 8j this is the answer so that's how you can express vector v in its component form using the unit vectors i and j if you're given the magnitude and the angle now let's work on another problem so let's say that vector v is 3i minus 8j find the magnitude of the vector shown below and determine the angle that it makes with the positive x-axis so this problem is the reverse of number five we're given a vector and we need to determine the magnitude and the angle well we know how to find the magnitude it's simply going to be the square root of 3 squared plus negative 8 squared 3 squared is 9 negative eight squared is sixty-four nine plus sixty-four that's going to be seventy-three so the magnitude is the square root of seventy-three now how do we find the angle well if we were to plot this vector as a position vector starting from the origin we would have to travel three units to the right and then we would travel down 8 units now this graph is not drawn to scale but vector v would be going in quadrant 4.
so how do we find the angle that the vector makes with the positive x-axis and also let's find this angle going counterclockwise from the x-axis but before we find that angle let's find what is known as the reference angle that is the angle inside the triangle the reference angle will always be a positive angle between 0 and 90. and what we can use is the tangent function if you recall from sohcahtoa in trig tangent theta is equal to the opposite side divided by the adjacent side in this case opposite to theta is v y and adjacent to it is v x so tan theta is v y over v x now to find a reference angle it's going to be the arc tangent of the y component divided by the x component now when looking for the reference angle even though v y is negative i recommend using the positive value or the absolute value of v y it's going to make your life a lot easier so we're going to take the arc tangent of 8 and divide it by three and make sure your calculator is in degree mode not radian mode mine was in radian mode just now so you should get this angle now this is the reference angle it's 69.4 degrees now sometimes your teacher simply wants this angle between the vector and the x-axis and that would be the answer however if you want the angle measured from the x-axis but in a counter-clockwise direction as opposed to a clockwise direction if you want to go counterclockwise we need to do some extra work here but all we need to do if the angle is in quadrant four is subtract it from 360 or more specifically subtract the angle from 360.
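The reference-angle bookkeeping described above is exactly what the two-argument arctangent does automatically; a minimal Python sketch (the function name is my own):

```python
import math

def standard_angle(vx, vy):
    # Counterclockwise angle from the positive x-axis, in [0, 360).
    # atan2 handles the quadrant adjustments, so the reference-angle
    # corrections do not have to be applied by hand.
    return math.degrees(math.atan2(vy, vx)) % 360

print(round(standard_angle(3, -8), 1))  # 290.6  (quadrant four: 360 - 69.4)
print(round(standard_angle(-3, 8), 1))  # 110.6  (quadrant two: 180 - 69.4)
```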
so it's going to be 360 minus 69.4 and so the angle measured from the positive x-axis going in a counterclockwise direction it's 290.6 this is the angle that we want that's the standard angle and so we have a magnitude of the square root of 73 and an angle of 290.6 so if your vector is in quadrant one the angle that you want is the same as the reference angle in quadrant two the angle that you're looking for it's going to be 180 minus the reference angle in quadrant three it's going to be the reference angle minus 180 and in quadrant four the angle measured from the positive x-axis going in a counter-clockwise direction will be 360 minus the reference angle now this is going to be the last problem of the video go ahead and work on it find the resultant force of two vectors take a minute pause the video and see if you can get the answer now for those of you who wish to subscribe to this channel that is of course if you like this video don't forget to click the notification bell if you wish to receive any updates on new videos that i'm going to be posting in the future now let's begin let's graph the vectors or at least a rough sketch of it so the first force vector f1 it's going to be going in this general direction it has an x component of five and a y component of two now f2 will be going towards quadrant three we need to travel two units to the left and down eight units so we could say that for the most part it's going in this general direction now the resultant force is the sum of these two force vectors in which quadrant do you think the resultant force which we'll call fr is located in so if we take f1 and graphically add it to f2 where is fr located so fr is going to be going in this direction so if we move that vector to the origin of this graph we can say that approximately it's going in quadrant four that means that the x component should be positive and the y component should be negative so what we're going to do is we're going to find the magnitude
of the resultant force vector and also its angle so let's go ahead and begin the resultant force vector is simply the sum of f1 and f2 and f1 is 5i plus 2j f2 is negative 2i minus 8j so what we need to do is add the x components together so 5i plus negative 2i that's 3i and then add the y components together 2j plus negative 8j is negative 6j so that's the resultant force vector in terms of the standard unit vectors i and j now once we have that our next step is to find the magnitude of the resultant force vector and so it's going to be the square root of 3 squared plus negative 6 squared 3 squared is 9 6 squared is 36 9 plus 36 is 45. now if you want to you can simplify the square root of 45. 45 is 5 times 9 and the square root of 9 is 3 so you get 3 square root 5 which i'm going to write here so that's the magnitude of the resultant force vector in this problem now let's go ahead and find the angle that it makes with the positive x-axis going in the counterclockwise direction but let's graph it first so to graph this vector we need to travel three units to the right and then down six units so as we can see this force vector is indeed in quadrant four so let's find a reference angle first so the reference angle is going to be the arc tangent of the absolute value of v y over v x so v y its absolute value is positive six we're going to get rid of the negative sign v x is three so we need to take the arc tangent of two and so it will give us this angle which is 63.4 degrees so that's the reference angle inside here let me erase this so this is 63.4 degrees now we need to find this angle the angle that is counterclockwise measured from the positive x-axis now because our resultant force vector is in quadrant four the angle is going to be 360 minus the reference angle and the reference angle is always an acute angle between 0 and 90.
so our reference angle in this example is 63.4 so if we take 360 and subtract 63.4 we get this angle which is 296.6 degrees and that is the answer so now you know how to add two vectors to get the resultant force vector and you could describe it using unit vectors or using the magnitude and the angle so that's it for this video thanks for watching if you like it feel free to comment like and definitely subscribe to this channel
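The whole resultant-force calculation from the last problem can be sketched in a few lines of Python:

```python
import math

f1 = (5, 2)    # f1 = 5i + 2j
f2 = (-2, -8)  # f2 = -2i - 8j

# Component-wise sum gives the resultant: 3i - 6j
fr = (f1[0] + f2[0], f1[1] + f2[1])

magnitude = math.hypot(fr[0], fr[1])                  # sqrt(45) = 3*sqrt(5)
angle = math.degrees(math.atan2(fr[1], fr[0])) % 360  # counterclockwise from +x

print(fr)                   # (3, -6)
print(round(magnitude, 3))  # 6.708
print(round(angle, 1))      # 296.6
```

The printed magnitude and angle match the worked answer 3√5 at 296.6 degrees.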
189560 | https://math.stackexchange.com/questions/320779/existence-of-gergonne-point-without-ceva-theorem | geometry - Existence of Gergonne point, without Ceva theorem - Mathematics Stack Exchange
Existence of Gergonne point, without Ceva theorem
Asked 12 years, 7 months ago
Modified 12 years, 7 months ago
Viewed 963 times
The intersection at one point (called Gergonne point) of the lines from vertices of a triangle to contact points of the inscribed circle can be proved immediately using Ceva's theorem.
Is there a direct proof that does not pass through Ceva's formula?
Edit: I am hoping for a metric Euclidean proof using lengths and angles but this should not limit the answers. If you can prove it using algebraic K theory, go ahead.
geometry
euclidean-geometry
contest-math
triangles
edited Mar 5, 2013 at 18:52
zyx
asked Mar 4, 2013 at 21:20
zyx
In case it saves time for search engine aficionados, I did try that in several variations. There is a large number of web pages with the usual Ceva proof that the question seeks to avoid. – zyx, Mar 4, 2013 at 21:24
What's wrong with Ceva? (Just being curious.) – dtldarek, Mar 4, 2013 at 21:29
Nothing. This is a methodological question. – zyx, Mar 4, 2013 at 21:49
1 Answer
It is a consequence of a degenerate case of Brianchon's theorem:
For a hexagon ABCDEF with an inscribed conic section the lines AD, BE, CF coincide in a common point.
Just set A, C and E to be vertices of the triangle, and B, D and F the points where the incircle touches the triangle. Some immediate conclusion: the Gergonne point could be generalized to ellipse and other conics. Quick search reveals this idea is not new and there is even some literature on it, e.g. this nice pdf and also this page.
I hope this is what you were looking for ;-)
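The concurrency can also be checked numerically. Below is a small Python sketch (not from the answer; the triangle and the helper names are my own choices). It uses the standard fact that the tangent length from each vertex is the semiperimeter minus the opposite side, builds the incircle contact points from that, and verifies that the three cevians meet in one point:

```python
import math

# A concrete scalene triangle (arbitrary choice for illustration).
A, B, C = (0.0, 0.0), (7.0, 0.0), (2.0, 5.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

a, b, c = dist(B, C), dist(C, A), dist(A, B)
s = (a + b + c) / 2  # semiperimeter

def lerp(P, Q, t):
    # Point at fraction t of the way from P to Q
    return (P[0] + t * (Q[0] - P[0]), P[1] + t * (Q[1] - P[1]))

# Tangent length from B is s - b, so the incircle touches BC at
# distance s - b from B; similarly for the other two sides.
Ta = lerp(B, C, (s - b) / a)   # contact point on BC
Tb = lerp(C, A, (s - c) / b)   # contact point on CA
Tc = lerp(A, B, (s - a) / c)   # contact point on AB

def cross(O, P, Q):
    # 2D cross product of OP and OQ; zero iff O, P, Q are collinear
    return (P[0]-O[0])*(Q[1]-O[1]) - (P[1]-O[1])*(Q[0]-O[0])

def intersect(P1, P2, Q1, Q2):
    # Intersection of lines P1P2 and Q1Q2 (assumed non-parallel)
    d = (P2[0]-P1[0])*(Q1[1]-Q2[1]) - (P2[1]-P1[1])*(Q1[0]-Q2[0])
    t = ((Q1[0]-P1[0])*(Q1[1]-Q2[1]) - (Q1[1]-P1[1])*(Q1[0]-Q2[0])) / d
    return lerp(P1, P2, t)

G = intersect(A, Ta, B, Tb)         # meet of cevians A-Ta and B-Tb
print(abs(cross(C, Tc, G)) < 1e-9)  # True: cevian C-Tc also passes through G
```

This is only a single-instance check, of course, not a proof, but it is a quick way to experiment with the generalization to other inconics mentioned above.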
answered Mar 4, 2013 at 22:49
dtldarek
Thank you (+1). It is a bit late to clarify, but I am looking for a metric equality-chasing proof in classical Euclidean style. – zyx, Mar 4, 2013 at 22:52
Also, Ceva theorem is itself a degeneration of projective theorems on hexagons, so in a sense this is just a deformation of the usual proof. – zyx, Mar 4, 2013 at 22:54
Well, Ceva does involve the formula-checking, and I thought you wanted to avoid this particular thing. Anyway, if I were you, I would do a quick search through Pascal's theorem. – dtldarek, Mar 4, 2013 at 23:01
For example, if there is some characterization of each line as a locus of points p where f(p, A_i) = f(p, A_j), and the triangle is A_1 A_2 A_3, this would prove concurrency. I don't mind checking formulas but I wonder if some insight is available as in the other non-Ceva proofs. – zyx, Mar 4, 2013 at 23:06
mathworld.wolfram.com/BrianchonPoint.html . List of "Brianchon points" of "inconics"! – zyx, Mar 19, 2013 at 5:57
189561 | https://pubmed.ncbi.nlm.nih.gov/38728382/ | How I treat challenging transfusion cases in sickle cell disease - PubMed
Case Reports
Blood
2025 May 15;145(20):2257-2265.
doi: 10.1182/blood.2023023648.
How I treat challenging transfusion cases in sickle cell disease
Stella T Chou 1 2, Jeanne E Hendrickson 3 4
Affiliations
1 Division of Hematology, Department of Pediatrics, The Children's Hospital of Philadelphia, Philadelphia, PA.
2 Division of Transfusion Medicine, Department of Pathology and Laboratory Medicine, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA.
3 Department of Pathology and Laboratory Medicine, Center for Transfusion and Cellular Therapies, Emory University School of Medicine, Atlanta, GA.
4 Department of Laboratory Medicine, Yale University School of Medicine, New Haven, CT.
PMID: 38728382
DOI: 10.1182/blood.2023023648
Abstract
Transfusion of red blood cells (RBCs) can be lifesaving for individuals living with sickle cell disease (SCD). However, alloimmunization after transfusion is more common with patients with SCD than in other patient populations, resulting in morbidity and mortality. Management of complications related to RBC alloantibodies, including delayed hemolytic transfusion reactions (DHTRs) and identifying compatible RBCs for future transfusions, remains a challenge for hematologists and transfusion medicine providers. Although transfusion guidelines from organizations, including the American Society for Hematology provide general recommendations, individual cases remain challenging. Antibody evanescence and the lack of widespread RBC alloantibody data sharing across hospitals pose unique challenges, as do RH variants in both transfusion recipients and blood donors. Further, as potentially curative therapies require RBC transfusions to lower the hemoglobin S before cellular therapy collections and infusions, patients who are highly alloimmunized may be deemed ineligible. The cases described are representative of clinical dilemmas the authors have encountered, and the approaches are as evidence-based as the literature and the authors' experiences allow. A future desired state is one in which RBC alloantibody data are efficiently shared across institutions, Rh alloimmunization can be mitigated, better treatments exist for DHTRs, and a label of difficult to transfuse does not prevent desired therapies.
© 2025 American Society of Hematology. Published by Elsevier Inc. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
Publication types: Case Reports
MeSH terms: Adult; Anemia, Sickle Cell / blood; Anemia, Sickle Cell / therapy; Erythrocyte Transfusion / adverse effects; Erythrocyte Transfusion / methods; Female; Humans; Isoantibodies / blood; Isoantibodies / immunology; Male; Transfusion Reaction / etiology; Transfusion Reaction / therapy
Substances: Isoantibodies
Grants and funding: R01 HL147879/HL/NHLBI NIH HHS/United States; U01 HL134696/HL/NHLBI NIH HHS/United States
189562 | https://skdragonmathprecal.files.wordpress.com/2013/03/precalculus-condensing-logarithms-practice.pdf | ©N e2j0e163O iKIuStTaD ISOoSfithwyaSrheS ELWLhCb.r t qAhlelI drriPgWhgtpsC krwe3sCeprdvfeQdq.u J 8M6aEdWeE zwqiFtIhW HIqnOfYiXnYiGtpeC jAhlvg0e0bWrKad 42P.7 Worksheet by Kuta Software LLC Precalculus Name_____ Period_ Date___ ©g 02H0r1o3B KKoubtqa7 2S7ovfut6wwafrIeh 8LfLiCV.B 3 5A1l2l7 BrziqgghTt5sE xr1ecsXeWrpv8eJdx.S Condensing Logarithms Practice Condense each expression to a single logarithm.
1) log 8 x 2 2) 3log 3 a 3) log 4 u 2 4) log 9 x + log 9 y 5) log 5 a − log 5 b 6) log 2 u + log 2 v 7) ln a − ln b 8) log a 2 9) 2log 4 a + 6log 4 b 10) 2log 8 10 − 3log 8 3 11) log 5 7 2 + log 5 10 2 + log 5 3 2 12) 2log 6 x − 3log 6 y 13) 4log 4 x − 4log 4 y 14) log 7 w + log 7 u 2 + log 7 v 2 15) 2log 8 x + 4log 8 y 16) 2log 6 10 + 6log 6 3 ©F x2e0o1I3k JKbuutKa7 jSooQfxtpwbaNrCex KLRL6Cv.x 9 8AXlalO erRivg4hctks1 yruersVeOr6v8ezd4.7 R JMJaDd3ef HwSiAtPhG lIMnafViznfigtfeu LA2lkgrepbjrKal 82S.T Worksheet by Kuta Software LLC Precalculus Name_____ Period Date__ ©W s2N071P3v YKBu8t6aM YSToCfdtVw9aMrXea NL7LHC0.I l 2A0lClz GriiIgDhitlsE DroeisneHrrvveyde.u Condensing Logarithms Practice Condense each expression to a single logarithm.
1) log 8 x 2 log 8 x 2) 3log 3 a log 3 a3 3) log 4 u 2 log 4 u 4) log 9 x + log 9 y log 9 yx 5) log 5 a − log 5 b log 5 a b 6) log 2 u + log 2 v log 2 vu 7) ln a − ln b ln a b 8) log a 2 log a 9) 2log 4 a + 6log 4 b log 4 ( b6 a2) 10) 2log 8 10 − 3log 8 3 log 8 102 33 11) log 5 7 2 + log 5 10 2 + log 5 3 2 log 5 210 12) 2log 6 x − 3log 6 y log 6 x2 y3 13) 4log 4 x − 4log 4 y log 4 x4 y4 14) log 7 w + log 7 u 2 + log 7 v 2 log 7 ( w vu) 15) 2log 8 x + 4log 8 y log 8 ( y4 x2) 16) 2log 6 10 + 6log 6 3 log 6 ( 36 ⋅ 102) |
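A couple of the condensed identities can be sanity-checked numerically; a minimal Python sketch (the helper name and test values are arbitrary choices of mine):

```python
import math

def log_base(b, x):
    # Logarithm of x in base b via the change-of-base formula
    return math.log(x) / math.log(b)

a, b = 3.7, 1.9  # arbitrary positive test values

# Problem 9: 2 log_4 a + 6 log_4 b = log_4 (a^2 b^6)
lhs = 2 * log_base(4, a) + 6 * log_base(4, b)
rhs = log_base(4, a**2 * b**6)
print(abs(lhs - rhs) < 1e-9)  # True

# Problem 10: 2 log_8 10 - 3 log_8 3 = log_8 (100/27)
lhs = 2 * log_base(8, 10) - 3 * log_base(8, 3)
rhs = log_base(8, 100 / 27)
print(abs(lhs - rhs) < 1e-9)  # True
```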
189563 | https://www.groundzerobooksltd.com/pages/books/75712/james-d-foley-andries-van-dam/fundamentals-of-interactive-computer-graphics | Fundamentals of Interactive Computer Graphics | James D. Foley, Andries Van Dam | Presumed First Edition, First printing
Foley, James D., and Van Dam, Andries
Fundamentals of Interactive Computer Graphics
Reading, MA: Addison-Wesley Publishing Company, 1982. Presumed First Edition, First printing. Hardcover. xx, 664 pages. Illustrated endpapers. Illustrations. Exercises. Bibliography. Index. This is one of the Systems Programming Series. Mark on bottom edge. James David Foley (born July 20, 1942) is an American computer scientist and computer graphics researcher. He is a Professor Emeritus and held the Stephen Fleming Chair in Telecommunications in the School of Interactive Computing at Georgia Institute of Technology (Georgia Tech). He was Interim Dean of Georgia Tech's College of Computing from 2008–2010. He is perhaps best known as the co-author of several widely used textbooks in the field of computer graphics. In 1997, Foley was recognized by ACM SIGGRAPH with the prestigious Steven A. Coons Award. The receipt of this biannual award places Foley among the company of computer graphics pioneers such as Andy van Dam, Jim Blinn, Edwin Catmull and Ivan Sutherland. In 2007 he was recognized by ACM SIGCHI with their Lifetime Achievement Award. Andries "Andy" van Dam (born December 8, 1938) is a Dutch-born American professor of computer science and former vice-president for research at Brown University in Providence, Rhode Island. Together with Ted Nelson he contributed to the first hypertext system, Hypertext Editing System (HES) in the late 1960s. He co-founded the precursor of today's ACM SIGGRAPH conference. This is a relatively early work on this important and rapidly evolving field. Condition: Very good / Good.
Keywords: Interactive Graphics, Computer Graphics, Geometric Models, Object Hierarchy, Planar Geometric, 3D Viewing, Display Architecture, Raster Algorithms, Visual Realism, Three Dimensional, Transformations, Data Structures, Color Theory, Segmentation
ISBN: 0201144689
[Book #75712]
Price: $75.00
Ground Zero Books, Ltd.
9033 Georgia Ave.
Silver Spring, MD 20910
Phone 301-585-1471
info@groundzerobooksltd.com
|
189564 | https://www.youtube.com/watch?v=AjKsGBAh4FQ | Parallel Vectors
Mathispower4u
330000 subscribers
598 likes
Description
169834 views
Posted: 28 Dec 2010
This video explains how to determine if vectors are parallel.
23 comments
Transcript:
Welcome to a lesson on parallel vectors. The goals of this video are to define parallel vectors and also to determine if given vectors are parallel. Two vectors are parallel if they are scalar multiples of one another. So we can say that if u and v are two nonzero vectors, then vector u must equal c times vector v. And if this is true, u and v are parallel. This is true in R2 and R3. Here's an illustration of several parallel vectors in R2. Notice how they're all slanted in the same direction. But they don't necessarily have to point in the same direction; they just have to be scalar multiples of one another. Another important thing to notice is if we take a look at this longer blue vector, and this gray unit vector, they are multiples of one another, and therefore they are parallel even though they are on top of each other, or they intersect. And of course this is different from the definition of parallel lines. So if we have 2 parallel vectors that are also position vectors, meaning their initial point is at the origin, then they would be on top of each other as we see here, unless they point in opposite directions. Another way to express parallel vectors in component form in R2 and R3 would be as follows. If vector u, expressed in component form as u sub 1, u sub 2, is equal to c times vector v, expressed as v sub 1 comma v sub 2, then u sub 1 would equal c times v sub 1, and u sub 2 would equal c times v sub 2. The reason it's nice to express it this way: if we want to determine if 2 vectors are parallel to each other, we can set u sub 1 equal to c times v sub 1, find the value of c that makes that true, and then check to make sure that u sub 2 is equal to c times v sub 2. And then of course in R3 we have a similar situation, except now we not only have the x-component and the y-component, but we also have the z-component. But again we can pick any pair, find the value of c, and then check to make sure it holds true for the other 2 components.
Let's go ahead and give it a try. Let's say we want to determine which of these vectors in red are parallel to vector v in blue. So if these vectors are parallel they must be multiples of one another, or we should be able to express the given vector as c times vector v. So if the blue vector and the red vector are parallel, then the x-component of this vector must be equal to negative 3 times c. The y-component must be equal to negative 2 times c. And the z-component must be equal to 5 times some c. So if we know that 6 must equal negative 3c, that would make c equal to negative 2. Now what we have to do is, using this value of c, make sure that it also satisfies the y and z-components. Well, negative 2 times negative 2 would be positive 4; that checks. Positive 5 times negative 2 would be negative 10; therefore this vector here is parallel to vector v. Let's go ahead and apply the same procedure. If these are parallel, then it must be equal to vector v times some constant c. So we'd have negative 3 times c for the x-component, negative 2 times c for the y-component, and 5 times c for the z-component. Again, we can select any component to identify a possible value of c, and then check it for the remaining components. Now the reason I mention that is, it's gonna be easier to solve for c if we use the y-component this time. If negative 2c must equal negative 1, c would be equal to 1 half. Now we'll go ahead and check to make sure it satisfies the x and z-components as well. Well, negative 3 times 1 half would be negative 3 halves. That checks. And then for the z-component, 5 times c, or 5 times 1 half, would be 5 halves. And notice how it doesn't work now, because the z-component is supposed to be negative 5 halves. So we have the wrong sign; therefore the value of c does not work on all 3 components, and therefore the given vector and vector v are not parallel. Let's go ahead and try one more.
So one more time: if this vector is parallel to vector v, it must equal a constant times vector v, so we'll have negative 3 times c for the x-component, negative 2 times c for the y, and 5c for the z-component. Again we can select any component to identify a possible value of c. Looks like using the x-component would be easiest here. So here we have negative 3c must equal 2. So c would be equal to negative 2 thirds if these vectors are parallel. Let's go ahead and check it. For the y-component, we'd have negative 2 times negative 2 thirds. That would be positive 4 thirds. That checks. And then for the z-component we'd have 5 times negative 2 thirds. Well, that'd be negative 10 thirds, which also checks. And since c makes every component of these vectors equal to each other, these 2 vectors are multiples of one another and therefore parallel. Now for this example, we used vectors in R3, but of course we could use the same process for vectors in R2 as well. I hope you found this example helpful, thank you for watching. |
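The procedure in the video — solve one component for c, then check that the same c works for the rest — is easy to sketch in code. This is not part of the original lesson; the function name and tolerance are my own choices.

```python
def parallel(u, v, tol=1e-9):
    """Return True if u = c*v for some scalar c, i.e. the vectors are parallel."""
    # Find a nonzero component of v and solve for c there.
    for ui, vi in zip(u, v):
        if abs(vi) > tol:
            c = ui / vi
            break
    else:
        return False  # v is the zero vector; the test does not apply
    # Check that the same c works for every component.
    return all(abs(ui - c * vi) <= tol for ui, vi in zip(u, v))

v = (-3, -2, 5)
print(parallel((6, 4, -10), v))        # first example: c = -2, parallel
print(parallel((-3/2, -1, -5/2), v))   # second example: fails on the z-component
print(parallel((2, 4/3, -10/3), v))    # third example: c = -2/3, parallel
```

Picking the easiest component first (as the video suggests) only changes which division you do; the final all-components check is what decides parallelism.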
189565 | https://sheffield.ac.uk/media/32065/download?attachment | Changing Coordinates
27.4
Introduction
We have seen how changing the variable of integration of a single integral or changing the coordinate system for multiple integrals can make integrals easier to evaluate. In this Section we introduce the Jacobian. The Jacobian gives a general method for transforming the coordinates of any multiple integral.
Prerequisites
Before starting this Section you should . . .
• have a thorough understanding of the various techniques of integration
• be familiar with the concept of a function of several variables
• be able to evaluate the determinant of a matrix
Learning Outcomes
On completion you should be able to . . .
• decide which coordinate transformation simplifies an integral
• determine the Jacobian for a coordinate transformation
• evaluate multiple integrals using a transformation of coordinates
HELM (2008): Workbook 27: Multiple Integration

1. Changing variables in multiple integrals
When the method of substitution is used to solve an integral of the form

$$\int_a^b f(x)\,dx$$

three parts of the integral are changed: the limits, the function and the infinitesimal dx. So if the substitution is of the form x = x(u), the u limits, c and d, are found by solving a = x(c) and b = x(d), and the function is expressed in terms of u as f(x(u)).

Figure 28 (diagram: equal steps δu in u mapping to varying steps δx in x under x = x(u))

Figure 28 shows why the dx needs to be changed. While the δu is the same length for all u, the δx change as u changes. The rate at which they change is precisely d/du x(u). This gives the relation

$$\delta x = \frac{dx}{du}\,\delta u$$

Hence the transformed integral can be written as

$$\int_a^b f(x)\,dx = \int_c^d f(x(u))\,\frac{dx}{du}\,du$$

Here the dx/du is playing the part of the Jacobian that we will define.

Another change of coordinates that you have seen is the transformation from cartesian coordinates (x, y) to polar coordinates (r, θ). Recall that a double integral in polar coordinates is expressed as

$$\iint f(x,y)\,dx\,dy = \iint g(r,\theta)\,r\,dr\,d\theta$$

Figure 29 (diagram: polar area elements between radii r₁, r₁ + δr and r₂, r₂ + δr, each spanning an angle δθ)

We can see from Figure 29 that the area elements change in size as r increases. The circumference of a circle of radius r is 2πr, so the length of an arc spanned by an angle θ is 2πrθ/2π = rθ. Hence the area elements in polar coordinates are approximated by rectangles of width δr and length rδθ. Thus under the transformation from cartesian to polar coordinates we have the relation

$$\delta x\,\delta y \to r\,\delta r\,\delta\theta$$

that is, r δr δθ plays the same role as δx δy. This is why the r term appears in the integrand. Here r is playing the part of the Jacobian.
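The one-variable relation above can be spot-checked numerically: take f(x) = 2x and the substitution x = u² on [0, 1]; both sides of the transformed integral should give 1. The Riemann-sum helper below is my own illustration, not part of the HELM text.

```python
def midpoint_integral(g, a, b, n=2000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: 2 * x
# Left side: integral of f(x) dx over x in [0, 1]
lhs = midpoint_integral(f, 0, 1)
# Right side: substitute x = u^2, so dx/du = 2u, with u running over [0, 1]
rhs = midpoint_integral(lambda u: f(u * u) * 2 * u, 0, 1)
print(lhs, rhs)  # both close to 1
```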
2. The Jacobian

Given an integral of the form

$$\iint_A f(x,y)\,dx\,dy$$

assume we have a change of variables of the form x = x(u, v) and y = y(u, v); then the Jacobian of the transformation is defined as

$$J(u,v) = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[6pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix}$$

Key Point 10

Jacobian in Two Variables

For given transformations x = x(u, v) and y = y(u, v) the Jacobian is

$$J(u,v) = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[6pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix}$$

Notice the pattern occurring in the x, y, u and v. Across a row of the determinant the numerators are the same and down a column the denominators are the same.

Notation

Different textbooks use different notation for the Jacobian. The following are equivalent.

$$J(u,v) = J(x,y;\,u,v) = J\begin{pmatrix} x,\,y \\ u,\,v \end{pmatrix} = \left|\frac{\partial(x,y)}{\partial(u,v)}\right|$$

The Jacobian correctly describes how area elements change under such a transformation. The required relationship is

$$dx\,dy \to |J(u,v)|\,du\,dv$$

that is, |J(u, v)| du dv plays the role of dx dy.
Key Point 11

Jacobian for Transforming Areas

When transforming area elements employing the Jacobian it is the modulus of the Jacobian that must be used.
Example 24

Find the area of the circle of radius R.

Figure 30 (diagram: circle of radius R centred at the origin, with r running from 0 to R and θ from 0 to 2π)

Solution

Let A be the region bounded by a circle of radius R centred at the origin. Then the area of this region is ∫_A dA. We will calculate this area by changing to polar coordinates, so consider the usual transformation x = r cos θ, y = r sin θ from cartesian to polar coordinates. First we require all the partial derivatives:

$$\frac{\partial x}{\partial r} = \cos\theta \qquad \frac{\partial y}{\partial r} = \sin\theta \qquad \frac{\partial x}{\partial\theta} = -r\sin\theta \qquad \frac{\partial y}{\partial\theta} = r\cos\theta$$

Thus

$$J(r,\theta) = \begin{vmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial\theta} \\[6pt] \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial\theta} \end{vmatrix} = \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = \cos\theta \times r\cos\theta - (-r\sin\theta)\times\sin\theta = r(\cos^2\theta + \sin^2\theta) = r$$

This confirms the previous result for polar coordinates, dx dy → r dr dθ. The limits on r are r = 0 (centre) to r = R (edge). The limits on θ are θ = 0 to θ = 2π, i.e. starting to the right and going once round anticlockwise. The required area is

$$\int_A dA = \int_0^{2\pi}\!\int_0^R |J(r,\theta)|\,dr\,d\theta = \int_0^{2\pi}\!\int_0^R r\,dr\,d\theta = 2\pi\,\frac{R^2}{2} = \pi R^2$$

Note that here r > 0 so |J(r, θ)| = J(r, θ) = r.
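The claim J(r, θ) = r can also be checked numerically: approximate each partial derivative of x = r cos θ, y = r sin θ by a central difference and form the 2×2 determinant. The helper below is my own illustration (the step size h is arbitrary), not part of the HELM text.

```python
import math

def polar_jacobian(r, theta, h=1e-6):
    """Numerically approximate J(r, theta) for x = r cos(theta), y = r sin(theta)."""
    x = lambda r, t: r * math.cos(t)
    y = lambda r, t: r * math.sin(t)
    # Central differences for the four partial derivatives.
    dx_dr = (x(r + h, theta) - x(r - h, theta)) / (2 * h)
    dx_dt = (x(r, theta + h) - x(r, theta - h)) / (2 * h)
    dy_dr = (y(r + h, theta) - y(r - h, theta)) / (2 * h)
    dy_dt = (y(r, theta + h) - y(r, theta - h)) / (2 * h)
    # 2x2 determinant, as in Key Point 10.
    return dx_dr * dy_dt - dx_dt * dy_dr

print(polar_jacobian(2.0, 0.7))  # should be close to r = 2
```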
Example 25

The diamond shaped region A in Figure 31(a) is bounded by the lines x + 2y = 2, x − 2y = 2, x + 2y = −2 and x − 2y = −2. We wish to evaluate the integral

$$I = \iint_A (3x + 6y)^2\,dA$$

over this region. Since the region A is neither vertically nor horizontally simple, evaluating I without changing coordinates would require separating the region into two simple triangular regions. So we use a change of coordinates to transform A to a square region in Figure 31(b) and evaluate I.

Figure 31 (diagrams: (a) the diamond region A in the x-y plane bounded by x + 2y = ±2 and x − 2y = ±2; (b) the transformed square region A′ in the u-v plane bounded by u = ±2 and v = ±2)

Solution

By considering the equations of the boundary lines of region A it is easy to see that the change of coordinates

u = x + 2y   (1)
v = x − 2y   (2)

will transform the boundary lines to u = 2, u = −2, v = 2 and v = −2. These values of u and v are the new limits of integration. The region A will be transformed to the square region A′ shown above.

We require the inverse transformations so that we can substitute for x and y in terms of u and v. By adding (1) and (2) we obtain u + v = 2x, and by subtracting (2) from (1) we obtain u − v = 4y; thus the required change of coordinates is

$$x = \tfrac12(u+v) \qquad y = \tfrac14(u-v)$$

Substituting for x and y in the integrand (3x + 6y)² of I gives

$$\left(\tfrac32(u+v) + \tfrac64(u-v)\right)^2 = 9u^2$$

We have the new limits of integration and the new form of the integrand; we now require the Jacobian. The required partial derivatives are

$$\frac{\partial x}{\partial u} = \tfrac12 \qquad \frac{\partial x}{\partial v} = \tfrac12 \qquad \frac{\partial y}{\partial u} = \tfrac14 \qquad \frac{\partial y}{\partial v} = -\tfrac14$$

Then the Jacobian is

$$J(u,v) = \begin{vmatrix} \tfrac12 & \tfrac12 \\[3pt] \tfrac14 & -\tfrac14 \end{vmatrix} = -\tfrac14$$

Then dx dy = |J(u, v)| du dv = ¼ du dv. Using the new limits, integrand and the Jacobian, the integral can be written

$$I = \int_{-2}^{2}\!\int_{-2}^{2} \frac94 u^2\,du\,dv.$$

You should evaluate this integral and check that I = 48.
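The transformed integral is easy to check numerically: a midpoint-rule sum of (9/4)u² over the square [−2, 2] × [−2, 2] should approach 48. The helper and grid size below are my own choices, not part of the HELM text.

```python
def midpoint_double_integral(f, ax, bx, ay, by, n=200):
    """Midpoint-rule approximation of the double integral of f over [ax,bx] x [ay,by]."""
    hx, hy = (bx - ax) / n, (by - ay) / n
    total = 0.0
    for i in range(n):
        u = ax + (i + 0.5) * hx
        for j in range(n):
            v = ay + (j + 0.5) * hy
            total += f(u, v)
    return total * hx * hy

I = midpoint_double_integral(lambda u, v: 9 * u * u / 4, -2, 2, -2, 2)
print(I)  # close to 48
```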
Task

This Task concerns using a transformation to evaluate ∫∫ (x² + y²) dx dy.

(a) Given the transformations u = x + y, v = x − y, express x and y in terms of u and v to find the inverse transformations:

Your solution

Answer

u = x + y   (1)
v = x − y   (2)

Add equations (1) and (2): u + v = 2x. Subtract equation (2) from equation (1): u − v = 2y. So

$$x = \tfrac12(u+v) \qquad y = \tfrac12(u-v)$$

(b) Find the Jacobian J(u, v) for the transformation in part (a):

Your solution

Answer

Evaluating the partial derivatives, ∂x/∂u = ½, ∂x/∂v = ½, ∂y/∂u = ½ and ∂y/∂v = −½, so the Jacobian is

$$\begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[6pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix} = \begin{vmatrix} \tfrac12 & \tfrac12 \\[3pt] \tfrac12 & -\tfrac12 \end{vmatrix} = -\tfrac14 - \tfrac14 = -\tfrac12$$

(c) Express the integral I = ∫∫ (x² + y²) dx dy in terms of u and v, using the transformations introduced in (a) and the Jacobian found in (b):

Your solution

Answer

On letting x = ½(u + v), y = ½(u − v) and dx dy = |J| du dv = ½ du dv, the integral ∫∫ (x² + y²) dx dy becomes

$$I = \iint \left(\tfrac14(u+v)^2 + \tfrac14(u-v)^2\right)\times\tfrac12\,du\,dv = \iint \tfrac12(u^2+v^2)\times\tfrac12\,du\,dv = \iint \tfrac14(u^2+v^2)\,du\,dv$$

(d) Find the limits on u and v for the rectangle with vertices (x, y) = (0, 0), (2, 2), (−1, 5), (−3, 3):

Your solution

Answer

For (0, 0), u = 0 and v = 0
For (2, 2), u = 4 and v = 0
For (−1, 5), u = 4 and v = −6
For (−3, 3), u = 0 and v = −6

Thus, the limits on u are u = 0 to u = 4 while the limits on v are v = −6 to v = 0.

(e) Finally evaluate I:

Your solution

Answer

The integral is

$$I = \int_{v=-6}^{0}\int_{u=0}^{4} \tfrac14(u^2+v^2)\,du\,dv = \tfrac14\int_{v=-6}^{0}\left[\tfrac13 u^3 + uv^2\right]_{u=0}^{4} dv = \int_{v=-6}^{0}\left[\tfrac{16}{3} + v^2\right] dv$$

$$= \left[\tfrac{16}{3}v + \tfrac13 v^3\right]_{-6}^{0} = 0 - \left[\tfrac{16}{3}\times(-6) + \tfrac13\times(-216)\right] = 104$$
3. The Jacobian in 3 dimensions

When changing the coordinate system of a triple integral

$$I = \iiint_V f(x,y,z)\,dV$$

we need to extend the above definition of the Jacobian to 3 dimensions.

Key Point 12

Jacobian in Three Variables

For given transformations x = x(u, v, w), y = y(u, v, w) and z = z(u, v, w) the Jacobian is

$$J(u,v,w) = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} & \dfrac{\partial x}{\partial w} \\[6pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} & \dfrac{\partial y}{\partial w} \\[6pt] \dfrac{\partial z}{\partial u} & \dfrac{\partial z}{\partial v} & \dfrac{\partial z}{\partial w} \end{vmatrix}$$

The same pattern persists as in the 2-dimensional case (see Key Point 10). Across a row of the determinant the numerators are the same and down a column the denominators are the same.

The volume element dV = dx dy dz becomes dV = |J(u, v, w)| du dv dw. As before the limits and integrand must also be transformed.
Example 26

Use spherical coordinates to find the volume of a sphere of radius R.

Figure 32 (diagram: the point (x, y, z) in spherical coordinates, with radius r, polar angle φ and azimuthal angle θ)

Solution

The change of coordinates from Cartesian to spherical polar coordinates is given by the transformation equations

$$x = r\cos\theta\sin\phi \qquad y = r\sin\theta\sin\phi \qquad z = r\cos\phi$$

We now need the nine partial derivatives

$$\frac{\partial x}{\partial r} = \cos\theta\sin\phi \qquad \frac{\partial x}{\partial\theta} = -r\sin\theta\sin\phi \qquad \frac{\partial x}{\partial\phi} = r\cos\theta\cos\phi$$

$$\frac{\partial y}{\partial r} = \sin\theta\sin\phi \qquad \frac{\partial y}{\partial\theta} = r\cos\theta\sin\phi \qquad \frac{\partial y}{\partial\phi} = r\sin\theta\cos\phi$$

$$\frac{\partial z}{\partial r} = \cos\phi \qquad \frac{\partial z}{\partial\theta} = 0 \qquad \frac{\partial z}{\partial\phi} = -r\sin\phi$$

Hence we have

$$J(r,\theta,\phi) = \begin{vmatrix} \cos\theta\sin\phi & -r\sin\theta\sin\phi & r\cos\theta\cos\phi \\ \sin\theta\sin\phi & r\cos\theta\sin\phi & r\sin\theta\cos\phi \\ \cos\phi & 0 & -r\sin\phi \end{vmatrix}$$

Expanding along the bottom row,

$$J(r,\theta,\phi) = \cos\phi \begin{vmatrix} -r\sin\theta\sin\phi & r\cos\theta\cos\phi \\ r\cos\theta\sin\phi & r\sin\theta\cos\phi \end{vmatrix} + 0 - r\sin\phi \begin{vmatrix} \cos\theta\sin\phi & -r\sin\theta\sin\phi \\ \sin\theta\sin\phi & r\cos\theta\sin\phi \end{vmatrix}$$

Check that this gives J(r, θ, φ) = −r² sin φ. Notice that J(r, θ, φ) ≤ 0 for 0 ≤ φ ≤ π, so |J(r, θ, φ)| = r² sin φ.

The limits are found as follows. The variable φ is related to 'latitude', with φ = 0 representing the 'North Pole', φ = π/2 representing the equator and φ = π representing the 'South Pole'. The variable θ is related to 'longitude', with values of 0 to 2π covering every point for each value of φ. Thus limits on φ are 0 to π and limits on θ are 0 to 2π. The limits on r are r = 0 (centre) to r = R (surface).

To find the volume of the sphere we then integrate the volume element dV = r² sin φ dr dθ dφ between these limits.

$$\text{Volume} = \int_0^\pi\!\int_0^{2\pi}\!\int_0^R r^2\sin\phi\,dr\,d\theta\,d\phi = \int_0^\pi\!\int_0^{2\pi} \tfrac13 R^3\sin\phi\,d\theta\,d\phi = \int_0^\pi \frac{2\pi}{3}R^3\sin\phi\,d\phi = \frac43\pi R^3$$
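The result can be sanity-checked by summing the volume element r² sin φ dr dθ dφ over a grid (a midpoint rule in r and φ; the θ integral contributes a flat factor of 2π). The helper and grid resolution below are my own, not part of the HELM text.

```python
import math

def sphere_volume(R, n=60):
    """Approximate the sphere volume by integrating r^2 sin(phi) over the spherical box."""
    dr, dp = R / n, math.pi / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        for k in range(n):
            phi = (k + 0.5) * dp
            total += r * r * math.sin(phi) * dr * dp
    # The integrand does not depend on theta, so that integral is just 2*pi.
    return total * 2 * math.pi

R = 1.5
print(sphere_volume(R), 4 / 3 * math.pi * R**3)  # the two values should be close
```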
Example 27

Find the volume integral of the function f(x, y, z) = x − y over the parallelepiped with the vertices of the base at

(x, y, z) = (0, 0, 0), (2, 0, 0), (3, 1, 0) and (1, 1, 0)

and the vertices of the upper face at

(x, y, z) = (0, 1, 2), (2, 1, 2), (3, 2, 2) and (1, 2, 2).

Figure 33 (diagram: the parallelepiped in x-y-z space)

Solution

This will be a difficult integral to derive limits for in terms of x, y and z. However, it can be noted that the base is described by z = 0 while the upper face is described by z = 2. Similarly, the front face is described by 2y − z = 0 with the back face being described by 2y − z = 2. Finally the left face satisfies 2x − 2y + z = 0 while the right face satisfies 2x − 2y + z = 4.

The above suggests a change of variables with the new variables satisfying u = 2x − 2y + z, v = 2y − z and w = z, with the limits on u being 0 to 4, the limits on v being 0 to 2 and the limits on w being 0 to 2.

Inverting the relationship between u, v, w and x, y and z gives

$$x = \tfrac12(u+v) \qquad y = \tfrac12(v+w) \qquad z = w$$

The Jacobian is given by

$$J(u,v,w) = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} & \dfrac{\partial x}{\partial w} \\[6pt] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} & \dfrac{\partial y}{\partial w} \\[6pt] \dfrac{\partial z}{\partial u} & \dfrac{\partial z}{\partial v} & \dfrac{\partial z}{\partial w} \end{vmatrix} = \begin{vmatrix} \tfrac12 & \tfrac12 & 0 \\[3pt] 0 & \tfrac12 & \tfrac12 \\[3pt] 0 & 0 & 1 \end{vmatrix} = \tfrac14$$

Note that the function f(x, y, z) = x − y equals ½(u + v) − ½(v + w) = ½(u − w). Thus the integral is

$$\int_{w=0}^{2}\int_{v=0}^{2}\int_{u=0}^{4} \tfrac12(u-w)\,\tfrac14\,du\,dv\,dw = \int_{w=0}^{2}\int_{v=0}^{2}\int_{u=0}^{4} \tfrac18(u-w)\,du\,dv\,dw$$

$$= \int_{w=0}^{2}\int_{v=0}^{2} \left[\tfrac{1}{16}u^2 - \tfrac18 uw\right]_0^4 dv\,dw = \int_{w=0}^{2}\int_{v=0}^{2} \left(1 - \tfrac12 w\right) dv\,dw$$

$$= \int_{w=0}^{2} \left[v - \frac{vw}{2}\right]_0^2 dw = \int_{w=0}^{2} (2-w)\,dw = \left[2w - \tfrac12 w^2\right]_0^2 = 4 - \tfrac42 - 0 = 2$$
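Because the transformed integrand (1/8)(u − w) is linear, even a coarse midpoint-rule sum over the (u, v, w) box reproduces the exact value 2. The helper below is my own quick check, not part of the HELM text.

```python
def box_integral(f, n=8):
    """Midpoint-rule integral of f(u, v, w) over u in [0,4], v in [0,2], w in [0,2]."""
    du, dv, dw = 4 / n, 2 / n, 2 / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du
        for j in range(n):
            v = (j + 0.5) * dv
            for k in range(n):
                w = (k + 0.5) * dw
                total += f(u, v, w)
    return total * du * dv * dw

I = box_integral(lambda u, v, w: (u - w) / 8)
print(I)  # 2.0 (the midpoint rule is exact for linear integrands)
```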
Task

Find the Jacobian for the following transformation:

x = 2u + 3v − w,  y = v − 5w,  z = u + 4w

Your solution

Answer

Evaluating the partial derivatives,

∂x/∂u = 2, ∂x/∂v = 3, ∂x/∂w = −1,
∂y/∂u = 0, ∂y/∂v = 1, ∂y/∂w = −5,
∂z/∂u = 1, ∂z/∂v = 0, ∂z/∂w = 4

so the Jacobian is

$$\begin{vmatrix} 2 & 3 & -1 \\ 0 & 1 & -5 \\ 1 & 0 & 4 \end{vmatrix} = 2\begin{vmatrix} 1 & -5 \\ 0 & 4 \end{vmatrix} + 1\begin{vmatrix} 3 & -1 \\ 1 & -5 \end{vmatrix} = 2\times4 + 1\times(-14) = -6$$
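The cofactor expansion used in this Task can be written out directly in code; the small helper below (my own, not HELM's) confirms the value −6.

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

J = det3([[2, 3, -1],
          [0, 1, -5],
          [1, 0, 4]])
print(J)  # -6
```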
Engineering Example 3

Volume of liquid in an ellipsoidal tank

Introduction

An ellipsoidal tank (elliptical when viewed from along the x-, y- or z-axes) has a volume of liquid poured into it. It is useful to know in advance how deep the liquid will be. In order to make this calculation, it is necessary to perform a multiple integration and calculate a Jacobian.

Figure 34 (diagram: the ellipsoidal tank with semi-axes a, b and c, filled to depth h)

Problem in words

The metal tank is in the form of an ellipsoid, with semi-axes a, b and c. A volume V of liquid is poured into the tank (V < (4/3)πabc, the volume of the ellipsoid) and the problem is to calculate the depth, h, of the liquid.

Mathematical statement of problem

The shaded volume is expressed as the triple integral

$$V = \int_{z=0}^{h}\int_{y=y_1}^{y_2}\int_{x=x_1}^{x_2} dx\,dy\,dz$$

where the limits of integration are

$$x_1 = -a\sqrt{1 - \frac{y^2}{b^2} - \frac{(z-c)^2}{c^2}} \qquad\text{and}\qquad x_2 = +a\sqrt{1 - \frac{y^2}{b^2} - \frac{(z-c)^2}{c^2}}$$

which come from rearranging the equation of the ellipsoid

$$\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{(z-c)^2}{c^2} = 1$$

and limits

$$y_1 = -\frac{b}{c}\sqrt{c^2 - (z-c)^2} \qquad\text{and}\qquad y_2 = +\frac{b}{c}\sqrt{c^2 - (z-c)^2}$$

from the equation of an ellipse in the y-z plane

$$\frac{y^2}{b^2} + \frac{(z-c)^2}{c^2} = 1.$$

Mathematical analysis

To calculate V, use the substitutions

$$x = a\tau\cos\phi\left(1 - \frac{(z-c)^2}{c^2}\right)^{\frac12} \qquad y = b\tau\sin\phi\left(1 - \frac{(z-c)^2}{c^2}\right)^{\frac12} \qquad z = z$$

now expressing the triple integral as

$$V = \int_{z=0}^{h}\int_{\phi=\phi_1}^{\phi_2}\int_{\tau=\tau_1}^{\tau_2} J\,d\tau\,d\phi\,dz$$

where J is the Jacobian of the transformation calculated from

$$J = \begin{vmatrix} \dfrac{\partial x}{\partial\tau} & \dfrac{\partial x}{\partial\phi} & \dfrac{\partial x}{\partial z} \\[6pt] \dfrac{\partial y}{\partial\tau} & \dfrac{\partial y}{\partial\phi} & \dfrac{\partial y}{\partial z} \\[6pt] \dfrac{\partial z}{\partial\tau} & \dfrac{\partial z}{\partial\phi} & \dfrac{\partial z}{\partial z} \end{vmatrix}$$

and reduces to

$$J = \frac{\partial x}{\partial\tau}\frac{\partial y}{\partial\phi} - \frac{\partial x}{\partial\phi}\frac{\partial y}{\partial\tau} \qquad\text{since } \frac{\partial z}{\partial\tau} = \frac{\partial z}{\partial\phi} = 0$$

$$= \left\{a\cos\phi\left(1 - \frac{(z-c)^2}{c^2}\right)^{\frac12} b\tau\cos\phi\left(1 - \frac{(z-c)^2}{c^2}\right)^{\frac12}\right\} - \left\{-a\tau\sin\phi\left(1 - \frac{(z-c)^2}{c^2}\right)^{\frac12} b\sin\phi\left(1 - \frac{(z-c)^2}{c^2}\right)^{\frac12}\right\}$$

$$= ab\tau(\cos^2\phi + \sin^2\phi)\left(1 - \frac{(z-c)^2}{c^2}\right) = ab\tau\left(1 - \frac{(z-c)^2}{c^2}\right)$$

To determine limits of integration for φ, note that the substitutions above are similar to a cylindrical polar co-ordinate system, and so φ goes from 0 to 2π. For τ, setting τ = 0 ⇒ x = 0 and y = 0, i.e. the z-axis. Setting τ = 1 gives

$$\frac{x^2}{a^2} = \cos^2\phi\left(1 - \frac{(z-c)^2}{c^2}\right) \quad(1) \qquad\text{and}\qquad \frac{y^2}{b^2} = \sin^2\phi\left(1 - \frac{(z-c)^2}{c^2}\right) \quad(2)$$

Summing both sides of Equations (1) and (2) gives

$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = (\cos^2\phi + \sin^2\phi)\left(1 - \frac{(z-c)^2}{c^2}\right)$$

or

$$\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{(z-c)^2}{c^2} = 1$$

which is the equation of the ellipsoid, i.e. the outer edge of the volume. Therefore the range of τ should be 0 to 1. Now

$$V = ab\int_{z=0}^{h}\left(1 - \frac{(z-c)^2}{c^2}\right)\int_{\phi=0}^{2\pi}\int_{\tau=0}^{1}\tau\,d\tau\,d\phi\,dz = \frac{ab}{c^2}\int_{z=0}^{h}(2zc - z^2)\int_{\phi=0}^{2\pi}\left[\frac{\tau^2}{2}\right]_{\tau=0}^{1} d\phi\,dz$$

$$= \frac{ab}{2c^2}\int_{z=0}^{h}(2zc - z^2)\Big[\phi\Big]_{\phi=0}^{2\pi} dz = \frac{\pi ab}{c^2}\left[cz^2 - \frac{z^3}{3}\right]_{z=0}^{h} = \frac{\pi ab}{c^2}\left(ch^2 - \frac{h^3}{3}\right)$$

Interpretation

Suppose the tank has actual dimensions of a = 2 m, b = 0.5 m and c = 3 m, and a volume of 7 m³ is to be poured into it. (The total volume of the tank is 4π m³ ≈ 12.57 m³.) Then, from above,

$$V = \frac{\pi ab}{c^2}\left(ch^2 - \frac{h^3}{3}\right)$$

which becomes

$$7 = \frac{\pi}{9}\left(3h^2 - \frac{h^3}{3}\right)$$

with solution h = 3.23 m (2 d.p.), compared to the maximum height of the ellipsoid of 6 m.
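The final equation has no neat closed form for h; since V(h) is increasing on [0, 2c], a simple bisection recovers h ≈ 3.23 m. The solver below is my own quick sketch, not part of the HELM text.

```python
import math

def depth(volume, a=2.0, b=0.5, c=3.0):
    """Solve volume = pi*a*b/c^2 * (c*h^2 - h^3/3) for h in [0, 2c] by bisection."""
    f = lambda h: math.pi * a * b / c**2 * (c * h**2 - h**3 / 3) - volume
    lo, hi = 0.0, 2 * c  # V(0) = 0 and V(2c) is the full tank volume
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

print(round(depth(7.0), 2))  # 3.23
```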
Exercises

1. The function f = x² + y² is to be integrated over an elliptical cone with base the ellipse x²/4 + y² = 1, z = 0 and apex (point) at (0, 0, 5). The integral can be made simpler by means of the change of variables

x = 2(1 − w/5)τ cos θ,  y = (1 − w/5)τ sin θ,  z = w.

Figure (diagram: the elliptical cone with base points (−2, 0, 0), (0, −1, 0), (2, 0, 0) and apex (0, 0, 5))

(a) Find the limits on the variables τ, θ and w.
(b) Find the Jacobian J(τ, θ, w) for this transformation.
(c) Express the integral ∫∫∫ (x² + y²) dx dy dz in terms of τ, θ and w.
(d) Evaluate this integral. [Hint: it may be worth noting that cos²θ ≡ ½(1 + cos 2θ).]

Note: This integral has relevance in topics such as moments of inertia.

2. Using cylindrical polar coordinates, integrate the function f = z√(x² + y²) over the volume between the surfaces z = 0 and z = 1 + x² + y² for 0 ≤ x² + y² ≤ 1.

3. A torus (doughnut) has major radius R and minor radius r. Using the transformation x = (R + τ cos α) cos θ, y = (R + τ cos α) sin θ, z = τ sin α, find the volume of the torus. [Hints: limits on α and θ are 0 to 2π, limits on τ are 0 to r. Show that the Jacobian is τ(R + τ cos α).]

Figure (diagram: the torus, showing major radius R, minor radius r and the angles α and θ)

4. Find the Jacobian for the following transformations.
(a) x = u² + vw, y = 2v + u²w, z = uvw
(b) Cylindrical polar coordinates: x = ρ cos θ, y = ρ sin θ, z = z

Answers

1. (a) τ: 0 to 1, θ: 0 to 2π, w: 0 to 5
   (b) 2(1 − w/5)²τ
   (c) 2 ∫_{τ=0}^{1} ∫_{θ=0}^{2π} ∫_{w=0}^{5} (1 − w/5)⁴ τ³ (4cos²θ + sin²θ) dw dθ dτ
   (d) 5π/2

2. 92π/105

3. 2π²Rr²

4. (a) 4u²v − 2u⁴w + u²vw² − 2v²w  (b) ρ |
189566 | https://www.youmath.it/lezioni/analisi-matematica/le-funzioni-elementari-e-le-loro-proprieta/377-arcotangente.html | Arcotangente arctan(x)
The arctangent is an inverse trigonometric function, written arctan(x), arctg(x) or sometimes atan(x), defined as the inverse of the tangent function; its values are the angles between −π/2 and +π/2, expressed in radians.

Looking for a quick reference sheet listing the properties of the arctangent function y = arctan(x), together with its graph? You've come to the right place! Here you'll find everything you need to know about the arctangent, but let's take it in order...

Definition of arctangent: for fixed x ∈ ℝ, arctan(x) is defined as the angle in (−π/2, π/2) whose tangent is x.

From this it is immediately clear that the arctangent is the inverse of the tangent function on the interval (−π/2, π/2).

To compute the arctangent of a value we must necessarily appeal to the definition. Specifically, we must identify the angle whose tangent equals the given value.

As an example,

and consequently

If needed, you can help yourself with the table of values of the trigonometric functions.

Graph of the arctangent function

Identities of the arctangent

Before starting, it is worth pointing out some identities of the arctangent, useful (if rarely) when solving exercises. In any case, don't worry: they don't need to be memorized. ;)

(Proof of sin(arctan(x)))

For more detail you can consult the lesson with all the main trigonometric formulas.

Properties of the arctangent function

1) Domain: ℝ.

2) It is an odd function.

3) Bounded function, with image (−π/2, π/2).

4) Strictly monotonically increasing function on the whole domain.

5) Concave on the interval [0, +∞), convex on (−∞, 0].

6) Continuous on all of ℝ, differentiable on all of ℝ.

7) Limits at the endpoints of the domain: arctan(x) → π/2 as x → +∞, and arctan(x) → −π/2 as x → −∞.

8) Associated notable limit: arctan(x)/x → 1 as x → 0.

9) Derivative of the arctangent: d/dx arctan(x) = 1/(1 + x²).

10) Integral of the arctangent: ∫ arctan(x) dx = x arctan(x) − ½ ln(1 + x²) + c.

11) For university students: Taylor expansion centred at x₀ = 0: arctan(x) = Σ_{n=0}^{∞} (−1)ⁿ x^(2n+1)/(2n+1), valid for |x| ≤ 1.

If you're looking for solved (and unsolved) exercises, or in case of doubts, don't hesitate: use the internal search bar. The YM staff has answered thousands of questions and solved just as many exercises. ;)

Tchau, see you soon guys!

Fulvio Sbranchella (Agente Ω)

Tags: summary lesson with the definition, graph and all the properties of the arctangent function arctan(x), including: domain, monotonicity, convexity, limits at the endpoints, the associated notable limit, and the derivative and integral of the arctangent.

Last modified: 19/04/2023
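Several of the listed properties are easy to verify numerically with Python's math module (a spot-check of my own, not part of the lesson): oddness, the limits at ±∞, the derivative 1/(1 + x²) via a central difference, and the notable limit.

```python
import math

# Oddness: arctan(-x) = -arctan(x)
for x in (0.3, 1.0, 7.5):
    assert math.isclose(math.atan(-x), -math.atan(x))

# Limits at the endpoints of the domain approach +/- pi/2
print(math.atan(1e12))   # close to pi/2
print(math.atan(-1e12))  # close to -pi/2

# Derivative: d/dx arctan(x) = 1/(1 + x^2), checked by central difference
h = 1e-6
for x in (-2.0, 0.0, 0.5, 3.0):
    numeric = (math.atan(x + h) - math.atan(x - h)) / (2 * h)
    assert abs(numeric - 1 / (1 + x * x)) < 1e-8

# Notable limit: arctan(x)/x -> 1 as x -> 0
print(math.atan(1e-8) / 1e-8)  # close to 1
```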
|
189567 | https://brainly.in/question/41940229 | Prove that sum of n terms in an ap is n/2 (2a+(n-1)d) - Brainly.in
jennyyyyy
15.06.2021
Math
Secondary School
answered
Prove that sum of n terms in an ap is n/2 (2a+(n-1)d)
Answered by akash20114c
Answer:

We will learn how to find the sum of the first n terms of an Arithmetic Progression.

Prove that the sum Sₙ of n terms of an Arithmetic Progression (A.P.) whose first term is 'a' and common difference is 'd' is

S = n/2 [2a + (n − 1)d]

Or, S = n/2 [a + l], where l = last term = a + (n − 1)d

Proof:

Suppose a₁, a₂, a₃, ... is an Arithmetic Progression whose first term is a and common difference is d.

Then,

a₁ = a
a₂ = a + d
a₃ = a + 2d
a₄ = a + 3d
...
aₙ = a + (n − 1)d

Now,

S = a₁ + a₂ + a₃ + ... + aₙ₋₁ + aₙ

S = a + (a + d) + (a + 2d) + (a + 3d) + ... + {a + (n − 2)d} + {a + (n − 1)d} .......... (i)

By writing the terms of S in the reverse order, we get

S = {a + (n − 1)d} + {a + (n − 2)d} + {a + (n − 3)d} + ... + (a + 3d) + (a + 2d) + (a + d) + a .......... (ii)

Adding the corresponding terms of (i) and (ii), we get

2S = {2a + (n − 1)d} + {2a + (n − 1)d} + {2a + (n − 1)d} + ... + {2a + (n − 1)d}  (n terms)

2S = n[2a + (n − 1)d]

⇒ S = n/2 [2a + (n − 1)d]

Now, l = last term = nth term = a + (n − 1)d

Therefore, S = n/2 [2a + (n − 1)d] = n/2 [a + {a + (n − 1)d}] = n/2 [a + l].

We can also find the sum of the first n terms of an Arithmetic Progression by the process below.

Suppose S denotes the sum of the first n terms of the Arithmetic Progression {a, a + d, a + 2d, a + 3d, a + 4d, a + 5d, ...}.

Now the nth term of the given Arithmetic Progression is a + (n − 1)d.

Let the nth term of the given Arithmetic Progression = l.

Therefore, a + (n − 1)d = l.

Hence, the term preceding the last term is l − d. The term preceding (l − d) is l − 2d, and so on.

Therefore, S = a + (a + d) + (a + 2d) + (a + 3d) + ... to n terms

Or, S = a + (a + d) + (a + 2d) + (a + 3d) + ... + (l − 2d) + (l − d) + l .......... (i)

Writing the above series in reverse order, we get

S = l + (l − d) + (l − 2d) + ... + (a + 2d) + (a + d) + a .......... (ii)

Adding the corresponding terms of (i) and (ii), we get

2S = (a + l) + (a + l) + (a + l) + ... to n terms

⇒ 2S = n(a + l)

⇒ S = n/2 (a + l)

⇒ S = (Number of terms)/2 × (First term + Last term) .......... (iii)

⇒ S = n/2 [a + a + (n − 1)d], since last term l = a + (n − 1)d

⇒ S = n/2 [2a + (n − 1)d]
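The closed form just proved can be checked against a brute-force sum over many (a, d, n) triples; the helper names below are mine, not part of the original answer.

```python
def ap_sum_formula(a, d, n):
    """S = n/2 * (2a + (n - 1)d), the closed form proved above."""
    return n * (2 * a + (n - 1) * d) / 2

def ap_sum_brute(a, d, n):
    """Add the first n terms a, a + d, a + 2d, ... directly."""
    return sum(a + k * d for k in range(n))

for a in (-3, 0, 5):
    for d in (-2, 0, 4):
        for n in (1, 2, 10, 37):
            assert ap_sum_formula(a, d, n) == ap_sum_brute(a, d, n)

print(ap_sum_formula(1, 1, 100))  # 5050.0, the classic 1 + 2 + ... + 100
```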
|
189568 | https://artofproblemsolving.com/wiki/index.php/Geometric_inequality?srsltid=AfmBOooGgUI3DKbwipD3L6HcQWMnYHj3ppLhD95YJ6oDd4Ge_P84X4HT | Art of Problem Solving
Geometric inequality - AoPS Wiki
Geometric inequality
A geometric inequality is an inequality involving various measures (angles, lengths, areas, etc.) in geometry.
Contents
1 Triangle Inequality
2 Pythagorean Inequality
3 Isoperimetric Inequality
4 Trigonometric Inequalities
5 Euler's inequality
6 Ptolemy's inequality
7 Erdos-Mordell inequality
Triangle Inequality
The Triangle Inequality says that the sum of the lengths of any two sides of a nondegenerate triangle is greater than the length of the third side. This inequality is particularly useful and shows up frequently on Intermediate level geometry problems. It also provides the basis for the definition of a metric space in analysis.
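As a small illustration (not part of the wiki article), the nondegeneracy condition can be coded directly; the helper name is our own:

```python
# Test whether three lengths can form a nondegenerate triangle,
# directly encoding the Triangle Inequality stated above.
def is_triangle(a: float, b: float, c: float) -> bool:
    return a + b > c and b + c > a and c + a > b

print(is_triangle(3, 4, 5))   # True
print(is_triangle(1, 2, 3))   # False (degenerate: 1 + 2 = 3)
```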
Pythagorean Inequality
The Pythagorean Inequality is a generalization of the Pythagorean Theorem. The Theorem states that in a right triangle with legs of length $a$ and $b$ and hypotenuse of length $c$, we have $a^2 + b^2 = c^2$. The Inequality extends this to obtuse and acute triangles. The inequality says:
For an acute triangle with sides of length $a \le b \le c$, $a^2 + b^2 > c^2$. For an obtuse triangle with sides $a \le b \le c$, $a^2 + b^2 < c^2$.
This inequality is a direct result of the Law of Cosines, although it is also possible to prove without using trigonometry.
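A quick sketch of the classification the inequality gives; the function name and sample triangles are illustrative, and the sides are assumed to form a valid triangle:

```python
import math

# Classify a triangle by comparing c^2 with a^2 + b^2 (c the longest side),
# per the Pythagorean Inequality above.
def classify(a: float, b: float, c: float) -> str:
    a, b, c = sorted((a, b, c))
    if math.isclose(a * a + b * b, c * c):
        return "right"
    return "acute" if a * a + b * b > c * c else "obtuse"

print(classify(3, 4, 5))   # right
print(classify(2, 3, 4))   # obtuse (4 + 9 < 16)
print(classify(4, 5, 6))   # acute (16 + 25 > 36)
```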
Isoperimetric Inequality
The Isoperimetric Inequality states that if a figure in the plane has area $A$ and perimeter $P$, then $4\pi A \le P^2$. This means that given a perimeter $P$ for a plane figure, the circle has the largest area. Conversely, of all plane figures with area $A$, the circle has the least perimeter.
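A numeric sanity check of the bound $4\pi A \le P^2$ on a few familiar figures (the choice of shapes is our own); the circle attains equality:

```python
import math

# Check 4*pi*A <= P^2 for sample figures: ratio = 4*pi*A/P^2 is at most 1,
# with equality only for the circle.
shapes = {
    "circle r=1": (math.pi, 2 * math.pi),   # (area, perimeter)
    "unit square": (1.0, 4.0),
    "3-4-5 triangle": (6.0, 12.0),
}
for name, (area, perim) in shapes.items():
    ratio = 4 * math.pi * area / perim**2
    print(f"{name}: 4*pi*A/P^2 = {ratio:.4f}")
    assert ratio <= 1.0 + 1e-12
```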
Trigonometric Inequalities
In $\triangle ABC$, $\sin A + \sin B + \sin C \le \frac{3\sqrt{3}}{2}$.
Proof: $\sin x$ is a concave function on $(0, \pi)$. Therefore we may use Jensen's inequality: $\frac{\sin A + \sin B + \sin C}{3} \le \sin\frac{A + B + C}{3} = \sin 60^{\circ} = \frac{\sqrt{3}}{2}$.
Alternatively, we may use a method that can be called "perturbation". If we let all the angles be equal, we prove that if we make one angle greater and the other one smaller, we will decrease the total value of the expression. To prove this, all we need to show is that if $x > y$, then $\sin x + \sin y < 2\sin\frac{x + y}{2}$. By the sum-to-product identity, this reduces to $2\sin\frac{x + y}{2}\cos\frac{x - y}{2} < 2\sin\frac{x + y}{2}$, which is equivalent to $\cos\frac{x - y}{2} < 1$. Since this is always true for $0 < \frac{x - y}{2} < \pi$, the inequality holds. Therefore, the maximum value of the expression is attained when $A = B = C = 60^{\circ}$, which gives the value $\frac{3\sqrt{3}}{2}$.
Similarly, in $\triangle ABC$, $\cos A + \cos B + \cos C \le \frac{3}{2}$.
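A Monte-Carlo sanity check of both bounds (a sketch with an arbitrary random seed, not from the article): sample random angle triples summing to $\pi$ and verify neither bound is exceeded.

```python
import math
import random

# For random triangles (angles A, B, C >= 0 summing to pi), check
# sin A + sin B + sin C <= 3*sqrt(3)/2 and cos A + cos B + cos C <= 3/2.
random.seed(0)
for _ in range(10_000):
    x, y = sorted(random.uniform(0, math.pi) for _ in range(2))
    A, B, C = x, y - x, math.pi - y          # nonnegative, sum to pi
    assert math.sin(A) + math.sin(B) + math.sin(C) <= 3 * math.sqrt(3) / 2 + 1e-12
    assert math.cos(A) + math.cos(B) + math.cos(C) <= 1.5 + 1e-12
print("both bounds hold on 10,000 random triangles")
```

Both maxima are attained at the equilateral triangle, matching the proofs above.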
Euler's inequality
Euler's inequality states that $R \ge 2r$, with equality when $\triangle ABC$ is equilateral, where $R$ and $r$ denote the circumradius and inradius of triangle $ABC$, respectively.
Proof: The distance between the circumcenter $O$ and incenter $I$ of a triangle can be expressed as $OI^2 = R(R - 2r)$, meaning $R(R - 2r) \ge 0$, or equivalently $R \ge 2r$, with equality if and only if the incenter coincides with the circumcenter, namely when the triangle is equilateral.
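The inequality can be checked from side lengths alone using the standard formulas $R = \frac{abc}{4K}$ and $r = \frac{K}{s}$, where $K$ is the area (Heron's formula) and $s$ the semiperimeter; the helper below is an illustrative sketch:

```python
import math

# Circumradius R and inradius r from side lengths; Euler: R >= 2r.
def R_and_r(a: float, b: float, c: float) -> tuple[float, float]:
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))   # Heron's formula
    return a * b * c / (4 * area), area / s             # R = abc/4K, r = K/s

R, r = R_and_r(3, 4, 5)
print(R, 2 * r, R >= 2 * r)        # 2.5 2.0 True

R, r = R_and_r(1, 1, 1)            # equilateral: equality R = 2r
print(math.isclose(R, 2 * r))      # True
```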
Ptolemy's inequality
Ptolemy's inequality states that for any quadrilateral $ABCD$, $AB \cdot CD + BC \cdot AD \ge AC \cdot BD$, with equality when quadrilateral $ABCD$ is cyclic.
First Proof: Let $P$ be the point such that $\angle BAP = \angle CAD$ and $\angle ABP = \angle ACD$, so that $\triangle ABP \sim \triangle ACD$. By SAS we also have that $\triangle APD \sim \triangle ABC$. By the triangle inequality, $BP + PD \ge BD$. Calculating the lengths from the two similarities, we obtain an equivalent statement: $\frac{AB \cdot CD}{AC} + \frac{AD \cdot BC}{AC} \ge BD$. Multiplying by $AC$ we get the desired result, with equality when $P$ lies on $BD$. This happens when $\angle ABD = \angle ACD$, so $B$ and $C$ subtend equal angles over segment $AD$, i.e. quadrilateral $ABCD$ is cyclic.
Second Proof (using inversion): Consider the inversion centered at $A$ with radius $k$, and let it map $B$, $C$ and $D$ to $B'$, $C'$ and $D'$ respectively. We then have $B'C' = \frac{k^2 \cdot BC}{AB \cdot AC}$, and analogously for $C'D'$ and $B'D'$. By the triangle inequality, $B'C' + C'D' \ge B'D'$, that is, $\frac{k^2 \cdot BC}{AB \cdot AC} + \frac{k^2 \cdot CD}{AC \cdot AD} \ge \frac{k^2 \cdot BD}{AB \cdot AD}$. Multiplying both sides by $\frac{AB \cdot AC \cdot AD}{k^2}$ we get the desired result, with equality when $B'$, $C'$, $D'$ are collinear, implying that $ABCD$ is either cyclic or collinear.
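A numeric illustration (the sample coordinates are our own): four concyclic points in order give equality, and moving one vertex off the circle makes the inequality strict.

```python
import math

# Compute both sides of Ptolemy: AB*CD + BC*AD  vs  AC*BD.
def sides_and_diagonals(pts):
    A, B, C, D = pts
    d = math.dist
    return d(A, B) * d(C, D) + d(B, C) * d(A, D), d(A, C) * d(B, D)

# Cyclic case: four points on the unit circle, in order around it.
cyclic = [(math.cos(t), math.sin(t)) for t in (0.0, 1.0, 2.5, 4.0)]
lhs, rhs = sides_and_diagonals(cyclic)
print(math.isclose(lhs, rhs))   # True: equality in the cyclic case

# Generic case: perturb the last vertex off the circle -> strict inequality.
generic = cyclic[:3] + [(0.3, -1.2)]
lhs, rhs = sides_and_diagonals(generic)
print(lhs > rhs)                # True
```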
Erdos-Mordell inequality
The Erdős–Mordell inequality states that if $P$ lies in $\triangle ABC$, then $PA + PB + PC \ge 2(PX + PY + PZ)$, where $X$, $Y$ and $Z$ are the feet of the perpendiculars from $P$ to $BC$, $CA$ and $AB$, respectively.
Proof: First, we prove a lemma.
Mordell's Lemma: $PA \sin A \ge PY \sin C + PZ \sin B$.
Proof of Lemma: Let $W$ and $V$ be the projections of $Y$ and $Z$ onto line $BC$. Note that $AZPY$ is cyclic with diameter $PA$. By the Law of Sines in this circle, $YZ = PA \sin A$. Angle chasing in the cyclic quadrilaterals $PXCY$ and $PXBZ$ (which have diameters $PC$ and $PB$) gives $WX = PY \sin C$ and $XV = PZ \sin B$, so $WV = PY \sin C + PZ \sin B$. So the problem is reduced to proving that $YZ \ge WV$, but this is obvious by the Pythagorean Theorem, since $WV$ is the projection of segment $YZ$ onto line $BC$.
Now the rest of the problem is straightforward. The lemma gives $PA \ge \frac{c}{a} PY + \frac{b}{a} PZ$, and cyclically $PB \ge \frac{a}{b} PZ + \frac{c}{b} PX$ and $PC \ge \frac{b}{c} PX + \frac{a}{c} PY$. Adding these implies $PA + PB + PC \ge \left(\frac{b}{c} + \frac{c}{b}\right) PX + \left(\frac{c}{a} + \frac{a}{c}\right) PY + \left(\frac{a}{b} + \frac{b}{a}\right) PZ$. By AM-GM, each coefficient is at least $2$, so $PA + PB + PC \ge 2(PX + PY + PZ)$, with equality when $\triangle ABC$ is equilateral and $P$ is its center.
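A numeric spot check of the inequality for one interior point of one triangle (coordinates chosen arbitrarily for illustration):

```python
import math

# Check PA + PB + PC >= 2*(d(P,BC) + d(P,CA) + d(P,AB)) for one example.
def dist_point_line(p, q, r):
    # Distance from p to the line through q and r, via the cross product.
    (px, py), (qx, qy), (rx, ry) = p, q, r
    return abs((rx - qx) * (qy - py) - (qx - px) * (ry - qy)) / math.dist(q, r)

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
P = (1.5, 1.0)                      # an interior point of triangle ABC

vertex_sum = math.dist(P, A) + math.dist(P, B) + math.dist(P, C)
foot_sum = (dist_point_line(P, B, C) + dist_point_line(P, C, A)
            + dist_point_line(P, A, B))
print(vertex_sum >= 2 * foot_sum)   # True
```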
189569 | https://www.youtube.com/watch?v=_uFT16QsXD0 | Solve a/3=9: Linear Equation Video Solution | Tiger Algebra
Tiger Algebra
Posted: 6 Jun 2024
Struggling with a/3=9? Watch our step-by-step video on Tiger Algebra to master this linear equation effortlessly.
Transcript:
You asked Tiger to solve a/3 = 9. This deals with linear equations with one unknown; the final result is a = 27. Let's solve it step by step: isolate the a, multiply both sides by three, group like terms, multiply the coefficients, simplify the fraction, simplify the arithmetic, and so the final result is a = 27.
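The transcript's steps amount to a single multiplication; here is a trivial sketch (using Fraction only to keep the arithmetic exact):

```python
from fractions import Fraction

# Solve a/3 = 9 as in the transcript: multiply both sides by 3,
# so the left side simplifies to a alone.
rhs = Fraction(9)
a = rhs * 3          # a/3 = 9  ->  a = 9 * 3
print(a)             # 27
```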
189570 | https://www.gutnliver.org/journal/view.html?volume=3&number=4&spage=329 | Published Time: Fri, 19, Sep 2025 20:59:12 GMT
Achalasia Combined with Esophageal Cancer Treated by Concurrent Chemoradiation Therapy
Case Report
Jun Chul Park, Yong Chan Lee, Sang Kyum Kim†, Yu Jin Kim, Sung Kwan Shin, Sang Kil Lee, Hoguen Kim†, and Choong Bai Kim‡
Department of Internal Medicine, Institute of Gastroenterology, Yonsei University College of Medicine, Seoul, Korea.
†Department of Pathology, Yonsei University College of Medicine, Seoul, Korea.
‡Department of Surgery, Yonsei University College of Medicine, Seoul, Korea.
Correspondence to: Yong Chan Lee. Department of Internal Medicine, Yonsei University College of Medicine, 134, Shinchon-dong, Seodaemun-gu, Seoul 120-752, Korea. Tel: +82-2-2228-1960, Fax: +82-2-393-6884, leeyc@yuhs.ac
Received: May 18, 2009; Accepted: August 3, 2009
Gut Liver 2009;3(4):329-333.
Published online December 31, 2009, Published date December 31, 2009
Copyright © Gut and Liver.
Abstract
Achalasia is a rare neurological deficit of the esophagus that produces impaired relaxation of the lower esophageal sphincter and decreased motility of the esophageal body. Achalasia is generally accepted to be a pre-malignant disorder, since, particularly in the mega-esophagus, chronic irritation by food and bacterial overgrowth may contribute to the development of dysplasia and carcinoma. We present the case of a 51-year-old man with achalasia combined with esophageal cancer who had experienced dysphagia symptoms for more than 20 years. Since there was a clinically high possibility of supraclavicular lymph node metastasis, concurrent chemoradiation therapy was scheduled. After the third cycle of chemoradiation therapy, transthoracic esophageolymphadenectomy was performed. Histopathological examination of the main esophagus specimen revealed no residual carcinoma, and the entire regional lymph node area was free of carcinoma except for one metastatic azygos lymph node. In summary, achalasia is a predisposing factor for esophageal squamous cell carcinoma. Although surveillance endoscopy in achalasia patients is still controversial, periodic screening for cancer development in long-standing achalasia patients might be advisable.
Keywords: Esophageal achalasia, Esophageal neoplasms
INTRODUCTION
Achalasia is a disease of unknown cause in which aperistalsis of the distal esophagus and a failure of relaxation of the lower esophageal sphincter (LES) occur. Since the etiology is unclear, treatment focuses on relieving symptoms. Nonetheless, despite treatment, food stasis often persists, causing the development of chronic inflammation, dysplasia and possibly esophageal cancer.
The correlation between achalasia and esophageal cancer was first noted by Fagge in 1872.1 Since this initial report, achalasia has frequently been described as a predisposing factor for esophageal cancer.2,3 When esophageal cancer develops in patients with underlying achalasia, diagnosis tends to be in the more advanced stages of cancer, compared to cases with no achalasia, because both physicians and patients often regard symptoms such as dysphagia and chest discomfort as attributable to the achalasia, rather than to other causes. Therefore, additional approaches that would lead to earlier diagnosis might be pursued less aggressively.
Here we report a case of achalasia combined with esophageal cancer treated by concurrent chemoradiation therapy.
CASE REPORT
A 51-year-old man was admitted to our hospital because of mild dysphagia. He had been diagnosed with an esophageal motor disorder at a local clinic as a teenager. The symptoms, which occurred with both solids and liquids simultaneously, had begun more than 20 years earlier. Whenever he swallowed solid or liquid food, he felt that it was briefly retained in the esophagus and passed through after a few seconds. However, there was no epigastric pain or chest pain. Vomiting and regurgitation sometimes developed after meals. These symptoms had continued for several decades without significant change in their pattern or severity. After the initial diagnosis at the local clinic, he did not seek medical attention again until this visit to our institute.
Physical examination revealed a small, round, palpable mass at the right supraclavicular fossa and routine blood tests were unremarkable. The patient underwent upper gastroendoscopy, showing a fungating, exophytic mass lesion with an irregular and friable surface involving 90% of the luminal circumference at 42 cm from the upper incisors, just above the gastroesophageal (GE) junction (Fig. 1A, B). The scope could be passed through the GE junction into the stomach body with mild resistance. Multiple biopsies were taken from the fungating mass, which allowed identification of squamous cell carcinoma upon histopathological examination. An esophagogram also showed a fungating mass of approximately 9 cm at the distal esophagus, and a dilated thoracic esophagus with bird-beak sign (Fig. 1C). Based on these results, esophageal squamous cell carcinoma with achalasia was considered and esophageal manometric study and positron emission tomography- computed tomography (PET-CT) were undertaken. The result of manometry was compatible with achalasia, specifically simultaneous contraction (mean pressure 14.48 mm Hg) and incomplete lower esophageal sphincter relaxation. PET-CT showed a large, hypermetabolic mass lesion at distal esophagus which was compatible with esophageal cancer (Fig. 2A). Hypermetabolism at the right hilar lymph node area and the right supraclavicular lymph node were observed. Although the aspiration biopsy at the right supraclavicular lymph node was negative for malignancy, there was a clinically high possibility of lymph node metastasis. For this reason, concurrent chemoradiation therapy was suggested instead of esophagectomy. By this time, the patient's dysphagia symptoms had worsened, so dilation of the lower gastroesophageal junction was performed using a 30 mm achalasia balloon dilator (6 psi for 5 seconds, Boston Scientific Microvasive®, Natick, MA, USA). Dysphagia symptoms greatly improved after dilation. 
Thereafter, concurrent chemoradiation therapy was administered: fluorouracil 800 mg/m2/day by continuous infusion on days 1 to 3, and cisplatin 80 mg/m2 intravenously on day 2. Radiotherapy was delivered in five daily fractions per week of 1.8 Gy over 5.5 weeks, for a total of 50.4 Gy. The radiation fields also included the right supraclavicular lymph node area. After the third cycle of chemotherapy with radiation, follow-up PET-CT was undertaken. The right hilar lymph node showed no definite 18F-fluorodeoxyglucose (FDG) uptake. The main esophageal mass and the right supraclavicular lymph node demonstrated decreased size and mild FDG uptake compared to the previous analysis (Fig. 2B). The follow-up gastroendoscopy also showed disappearance of the distal esophageal mass, with mild erythematous mucosal change at the previous cancer site (Fig. 1D). The patient had no serious complications during the third cycle of concurrent chemoradiation therapy. He was informed of his disease status, and transthoracic esophageolymphadenectomy was performed (Fig. 3A). Histopathological examination of the main esophagus specimen revealed no residual carcinoma and a negative resection margin. Chronic inflammatory cell infiltration with stromal edema, telangiectasia and fibroblast proliferation was observed as secondary changes of radiation therapy (Fig. 3B). The entire paraesophageal, subcarinal and regional lymph node areas were free of carcinoma except for one metastatic azygos lymph node. Myenteric inflammation with lymphocytic infiltration within the myenteric plexus was observed; in particular, ganglion cells were absent (Fig. 3C).
The patient was discharged on the 29th postoperative day with no serious complications. Further postoperative chemotherapy was scheduled.
DISCUSSION
Although achalasia is a well-recognized functional disorder of the esophageal body and lower esophageal sphincter (LES), it is still a relatively rare disease. Epidemiological data have demonstrated an annual incidence of about 0.5 cases per 100,000 people, with a prevalence of about 8 cases per 100,000 people.4
The risk of esophageal malignancy for patients with long-standing achalasia is between 14- and 140-fold over the rest of the population.3,5 A recent large cohort study showed male achalasia patients have substantially greater risk for both squamous cell carcinoma and adenocarcinoma of the esophagus.6 Several autopsy studies have reported an esophageal carcinoma prevalence of 20-29% in achalasia patients,7 and evaluation of esophagectomy specimens from patients with end-stage achalasia has demonstrated squamous hyperplasia.8 Most likely, in the late phase of achalasia, chronic irritation by food and bacterial overgrowth causes epithelial proliferation of the mucosa that may progress from squamous hyperplasia to dysplasia and carcinoma, similar to that seen in sporadic esophageal squamous cell carcinoma.9
Apart from esophageal food stasis, iatrogenic reflux after myotomy or pneumatic dilatation has also been suggested as a cause of adenocarcinoma.10 In a Dutch study of 331 achalasia patients treated with pneumatic dilation, 28 (8.5%) developed endoscopic evidence of Barrett's metaplasia, with intestinal metaplasia seen in histological samples.11 Another study also reported adenocarcinoma arising in Barrett's metaplasia in achalasia.2 Barrett's esophagus in this setting is probably the result of LES tone-lowering therapy, which may induce significant sphincter insufficiency and, in theory, lead to worsened gastroesophageal reflux.
In patients with underlying achalasia, esophageal cancers are generally diagnosed in advanced stages because the two diseases have similar symptoms, so rigorous diagnostic evaluation of symptoms might be undertaken less frequently. Patients also have a large amount of retained food in the mega-esophagus, making visualization difficult. Therefore, reports of operability have been infrequent and 80% of patients are inoperable at initial diagnosis.3,5 Nonetheless, whether surveillance endoscopy should be generally recommended for all patients with esophageal achalasia is still controversial because of the long interval between initial symptoms of achalasia and the development of carcinoma. A previous study on 1,062 achalasia patients reported that the mean age at entry was 57.2 years and the mean age at cancer diagnosis was 71 years.12 Other studies also showed an interval between the first symptoms of achalasia and the diagnosis of esophageal cancer of at least 15 years.3,5 Based on these observations, a recommendation of regular surveillance endoscopy might be beneficial in long-standing achalasia patients. The benefits of surveillance endoscopy in these patients are seen in a higher prevalence of early esophageal cancer stages.3
The treatment of choice for early esophageal cancer is curative resection. Surgery for advanced esophageal carcinoma has been disappointing, however, because of low resectability and a high risk of distant metastasis, which is seen in about half of the patients at the time of initial diagnosis.13 Surgery has limited therapeutic effect, and therefore effective multimodality treatment is required to obtain better survival in advanced stage esophageal carcinoma cases.
Radiation and chemotherapy may synergistically enhance anti-tumor effects against esophageal cancer, so concurrent chemoradiotherapy (CRT) is an attractive strategy for radiosensitization and control of micrometastatic disease.14 Recent studies have shown that radiotherapy with chemotherapy is superior to radiotherapy alone, based on the results of a number of randomized trials.15-17 A landmark study by the Radiation Therapy Oncology Group (RTOG) compared radiation alone versus chemoradiation treatment (RTOG 85-01).16 At 5 years of follow-up, the overall survival rates were 26% for patients who received combined modality treatment and 0% for those who received radiotherapy alone.
Cytotoxic agents, for example 5-fluorouracil (5-FU) and cisplatin, have been used as radiosensitizers in many tumors, and a synergistic effect has been shown between the two drugs.15,17 Based on these results, continuous 5-FU and cisplatin can be used in concurrent chemoradiotherapy. In phase II studies of advanced or recurrent esophageal squamous cell carcinoma, this combination induced a response rate of about 33-35%, leading to a median overall survival of 6-8 months, although complete response was rare.18,19 The case presented here also demonstrates an excellent response to preoperative concurrent chemoradiation therapy: a large main esophageal mass disappeared with no remaining malignant cells. We also observed loss of ganglion cells in the myenteric plexuses, surrounded by lymphocytes, representing achalasia (Fig. 3C). This inflammatory degeneration preferentially involves the nitric oxide-producing, inhibitory neurons that induce the relaxation of esophageal smooth muscle. Thus, the cholinergic neurons that contribute to LES tone by causing smooth muscle contraction are spared.20
Our case represents achalasia with squamous esophageal carcinoma and mild dysphagia, which the patient experienced for more than 20 years. The long duration of dysphagia symptoms interfered with early diagnosis of esophageal cancer, and long-term food stasis might have been a factor in the precancerous state. Although surveillance endoscopy in achalasia patients is still controversial, periodic screening for cancer development might be advisable in long-standing achalasia patients.
Figures
Fig. 1.(A, B) Endoscopic view of fungating mass lesion with dilated lower esophagus. (C) Fungating mass at distal esophagus and dilated distal esophagus with bird-beak sign. (D) Follow-up endoscopy showing disappearance of the previous cancer mass after the third cycle of concurrent chemoradiation therapy.
Fig. 2.(A) Intense 18 F-fluorodeoxyglucose (FDG) uptake is seen in the primary distal esophagus lesion with right hilar lymph node area and in the right supraclavicular lymph node. (B) The right hilar lymph node shows no definite FDG uptake. The main esophageal mass with right supraclavicular lymph node shows a decrease in size and mild FDG uptake compared to previous imaging.
Fig. 3.(A) Gross specimen of the surgically removed esophagus after concurrent chemoradiation therapy. (B) Microscopic findings of achalasia-associated squamous cell carcinoma after concurrent chemoradiation therapy. Chronic inflammatory cell infiltration with stromal edema, telangiectasia and fibroblast proliferation is shown. (C) Myenteric inflammation with lymphocytes infiltration observed in the myenteric plexus (arrow). Ganglion cells are absent.
References
Fagge, CH. A case of simple stenosis of the oesophagus, followed by epithelioma. Guy's Hosp Rep, 1872;17;413.
Ellis, FH, Gibb, SP, Balogh, K, Schwaber, JR. Esophageal achalasia and adenocarcinoma in Barrett's esophagus: a report of two cases and a review of the literature. Dis Esophagus, 1997;10;55-60.
Brücher, BL, Stein, HJ, Bartels, H, Feussner, H, Siewert, JR. Achalasia and esophageal cancer: incidence, prevalence, and prognosis. World J Surg, 2001;25;745-749.
Mayberry, JF. Epidemiology and demographics of achalasia. Gastrointest Endosc Clin N Am, 2001;11;235-248.
Meijssen, MA, Tilanus, HW, van Blankenstein, M, Hop, WC, Ong, GL. Achalasia complicated by oesophageal squamous cell carcinoma: a prospective study in 195 patients. Gut, 1992;33;155-158.
Zendehdel, K, Nyrén, O, Edberg, A, Ye, W. Risk of esophageal adenocarcinoma in achalasia patients, a retrospective cohort study in Sweden. Am J Gastroenterol. Forthcoming 2007.
Carter, R, Brewer, LA. Achalasia and esophageal carcinoma. Studies in early diagnosis for improved surgical management. Am J Surg, 1975;130;114-120.
Goldblum, JR, Whyte, RI, Orringer, MB, Appelman, HD. Achalasia. A morphologic study of 42 resected specimens. Am J Surg Pathol, 1994;18;327-337.
Pajecki, D, Zilberstein, B, dos Santos, MA, et al. Megaesophagus microbiota: a qualitative and quantitative analysis. J Gastrointest Surg, 2002;6;723-729.
Guo, JP, Gilman, PB, Thomas, RM, Fisher, RS, Parkman, HP. Barrett's esophagus and achalasia. J Clin Gastroenterol, 2002;34;439-443.
Scholten, P, Leeuwenburgh, I, Vaessen, R, et al. Barrett's esophagus after pneumo-dilatation for achalasia. Gastroenterology, 2004;126;A635.
Sandler, RS, Nyrén, O, Ekbom, A, Eisen, GM, Yuen, J, Josefsson, S. The risk of esophageal cancer in patients with achalasia. A population-based study. JAMA, 1995;274;1359-1362.
Kelsen, DP, Ginsberg, R, Pajak, TF, et al. Chemotherapy followed by surgery compared with surgery alone for localized esophageal cancer. N Engl J Med, 1998;339;1979-1984.
Leichman, L, Steiger, Z, Seydel, HG, Vaitkevicius, VK. Combined preoperative chemotherapy and radiation therapy for cancer of the esophagus: the Wayne State University, Southwest Oncology group and Radiation Therapy Oncology Group experience. Semin Oncol, 1984;11;178-185.
al-Sarraf, M, Martz, K, Herskovic, A, et al. Progress report of combined chemoradiotherapy versus radiotherapy alone in patients with esophageal cancer: an intergroup study. J Clin Oncol, 1997;15;277-284.
Cooper, JS, Guo, MD, Herskovic, A, et al, Radiation Therapy Oncology Group. Chemoradiotherapy of locally advanced esophageal cancer: long-term follow-up of a prospective randomized trial (RTOG 85-01). JAMA, 1999;281;1623-1627.
Herskovic, A, Martz, K, al-Sarraf, M, et al. Combined chemotherapy and radiotherapy compared with radiotherapy alone in patients with cancer of the esophagus. N Engl J Med, 1992;326;1593-1598.
Hayashi, K, Ando, N, Watanabe, H, et al. Phase II evaluation of protracted infusion of cisplatin and 5-fluorouracil in advanced squamous cell carcinoma of the esophagus: a Japan Esophageal Oncology Group (JEOG) Trial (JCOG9407). Jpn J Clin Oncol, 2001;31;419-423.
Bleiberg, H, Conroy, T, Paillot, B, et al. Randomised phase II study of cisplatin and 5-fluorouracil (5-FU) versus cisplatin alone in advanced squamous cell oesophageal cancer. Eur J Cancer, 1997;33;1216-1220.
Holloway, RH, Dodds, WJ, Helm, JF, Hogan, WJ, Dent, J, Arndorfer, RC. Integrity of cholinergic innervation to the lower esophageal sphincter in achalasia. Gastroenterology, 1986;90;924-929.
PDF
Standard view
Export citation
Share
Twitter
Linkedin
Line
Contents
Figure
Table
References
Article
Case Report
Gut Liver 2009; 3(4): 329-333
Published online December 31, 2009
Copyright © Gut and Liver.
Achalasia Combined with Esophageal Cancer Treated by Concurrent Chemoradiation Therapy
Jun Chul Park, Yong Chan Lee, Sang Kyum Kim†, Yu Jin Kim, Sung Kwan Shin, Sang Kil Lee, Hoguen Kim†, and Choong Bai Kim‡
Department of Internal Medicine, Institute of Gastroenterology, Yonsei University College of Medicine, Seoul, Korea.
†Department of Pathology, Yonsei University College of Medicine, Seoul, Korea.
‡Department of Surgery, Yonsei University College of Medicine, Seoul, Korea.
Correspondence to: Yong Chan Lee. Department of Internal Medicine, Yonsei University College of Medicine, 134, Shinchon-dong, Seodaemun-gu, Seoul 120-752, Korea. Tel: +82-2-2228-1960, Fax: +82-2-393-6884, leeyc@yuhs.ac
Received: May 18, 2009; Accepted: August 3, 2009
Abstract
Achalasia is a rare neurological deficit of the esophagus that produces an impaired relaxation of the lower esophageal sphincter and decreased motility of the esophageal body. Achalasia is generally accepted to be a pre-malignant disorder, since, particularly in the mega-esophagus, chronic irritation by foods and bacterial overgrowth may contribute to the development of dysplasia and carcinoma. We present a case of a 51-year-old man with achalasia combined with esophageal cancer who has had dysphagia symptoms for more than 20 years. Since there was a clinically high possibility of supraclavicular lymph node metastasis, concurrent chemoradiation therapy was scheduled. After the third cycle of chemoradiation therapy, transthoracic esophageolymphadenectomy was performed. Histopathological examination of the main esophagus specimen revealed no residual carcinoma. And the entire regional lymph node areas were free of carcinoma except for one azygos metastatic lymph node. In summary, achalasia is a predisposing factor for esophageal squamous cell carcinoma. Although surveillance endoscopy in achalasia patients is still controversial, periodic screening for cancer development in long-standing achalasia patients might be advisable.
Keywords: Esophageal achalasia, Esophageal neoplasms
INTRODUCTION
Achalasia is a disease of unknown cause in which aperistalsis in the distal esophagus and a failure of relaxation of the lower esophageal sphincter (LES) occurs. Since the etiology is unclear, treatment focuses on reliving symptoms. Nonetheless, despite treatment, food stasis often persists, causing development of chronic inflammation, dysplasia and possibly esophageal cancer.
The correlation between achalasia and esophageal cancer was first noted by Fagge in 1872.1 Since this initial report, achalasia has frequently been described as a predisposing factor for esophageal cancer.2,3 When esophageal cancer develops in patients with underlying achalasia, diagnosis tends to be in the more advanced stages of cancer, compared to cases with no achalasia, because both physicians and patients often regard symptoms such as dysphagia and chest discomfort as attributable to the achalasia, rather than to other causes. Therefore, additional approaches that would lead to earlier diagnosis might be pursued less aggressively.
Here we report a case of achalasia combined with esophageal cancer treated by concurrent chemoradiation therapy.
CASE REPORT
A 51-year-old man was admitted to our hospital because of mild dysphagia. He had been diagnosed with an esophageal motor disorder at a local clinic as a teenager. His symptoms, which occurred with both solids and liquids, had begun more than 20 years earlier. Whenever he swallowed solid or liquid food, he felt that the food was briefly retained in the esophagus before passing through after a few seconds. However, there was no epigastric pain or chest pain. Vomiting and regurgitation sometimes developed after meals. These symptoms had continued for several decades without significant change in pattern or severity. After his initial diagnosis at the local clinic, he had not sought medical care until this visit to our institution.
Physical examination revealed a small, round, palpable mass at the right supraclavicular fossa and routine blood tests were unremarkable. The patient underwent upper gastroendoscopy, showing a fungating, exophytic mass lesion with an irregular and friable surface involving 90% of the luminal circumference at 42 cm from the upper incisors, just above the gastroesophageal (GE) junction (Fig. 1A, B). The scope could be passed through the GE junction into the stomach body with mild resistance. Multiple biopsies were taken from the fungating mass, which allowed identification of squamous cell carcinoma upon histopathological examination. An esophagogram also showed a fungating mass of approximately 9 cm at the distal esophagus, and a dilated thoracic esophagus with bird-beak sign (Fig. 1C). Based on these results, esophageal squamous cell carcinoma with achalasia was considered, and esophageal manometric study and positron emission tomography-computed tomography (PET-CT) were undertaken. The result of manometry was compatible with achalasia, specifically simultaneous contraction (mean pressure 14.48 mm Hg) and incomplete lower esophageal sphincter relaxation. PET-CT showed a large, hypermetabolic mass lesion at the distal esophagus, which was compatible with esophageal cancer (Fig. 2A). Hypermetabolism at the right hilar lymph node area and the right supraclavicular lymph node was observed. Although the aspiration biopsy at the right supraclavicular lymph node was negative for malignancy, there was a clinically high possibility of lymph node metastasis. For this reason, concurrent chemoradiation therapy was suggested instead of esophagectomy. By this time, the patient's dysphagia symptoms had worsened, so dilation of the lower gastroesophageal junction was performed using a 30 mm achalasia balloon dilator (6 psi for 5 seconds, Boston Scientific Microvasive®, Natick, MA, USA). Dysphagia symptoms greatly improved after dilation.
Thereafter, concurrent chemoradiation therapy of 800 mg/m 2/d fluorouracil was administered by continuous infusion on days 1 to 3, and cisplatin 80 mg/m 2 intravenously on day 2. Radiotherapy was delivered in five daily fractions per week of 1.8 Gy over 5.5 weeks, for a total of 50.4 Gy. Radiation fields also included the right supraclavicular lymph node area. After the third cycle of chemotherapy with radiation, follow-up PET-CT was undertaken. The right hilar lymph node showed no definite 18 F-fluorodeoxyglucose (FDG) uptake. The main esophageal mass with the right supraclavicular lymph node demonstrated decreased size and mild FDG uptake, compared to the previous analysis (Fig. 2B). The follow-up gastroendoscopy also showed disappearance of the distal esophageal mass, with mild erythematous mucosal change at the previous cancer site (Fig. 1D). The patient had no serious complications during the third cycle of concurrent chemoradiation therapy. He was informed about his disease status and transthoracic esophageolymphadenectomy was performed (Fig. 3A). Histopathological examination of the main esophagus specimen revealed no residual carcinoma and a negative resection margin. Chronic inflammatory cell infiltration with stromal edema, telangiectasia and fibroblast proliferation was observed, consistent with secondary changes from radiation therapy (Fig. 3B). The entire paraesophageal, subcarinal and regional lymph node areas were free of carcinoma except for one azygos metastatic lymph node. Myenteric inflammation with lymphocytic infiltration within the myenteric plexus was observed, and in particular, ganglion cells were absent (Fig. 3C).
The patient was discharged on the 29th postoperative day with no serious complications. Further postoperative chemotherapy was scheduled.
DISCUSSION
Although achalasia is the best-known functional disorder of the esophageal body and lower esophageal sphincter (LES), it is still a relatively rare disease. Epidemiological data have demonstrated an annual incidence in the range of 0.5 cases per 100,000 people, with a prevalence of about 8 cases per 100,000 people.4
The risk of esophageal malignancy for patients with long-standing achalasia is 14- to 140-fold higher than in the general population.3,5 A recent large cohort study showed male achalasia patients have substantially greater risk for both squamous cell carcinoma and adenocarcinoma of the esophagus.6 Several autopsy studies have reported an esophageal carcinoma prevalence of 20-29% in achalasia patients,7 and evaluation of esophagectomy specimens from patients with end-stage achalasia has demonstrated squamous hyperplasia.8 Most likely, in the late phase of achalasia, chronic irritation by food and bacterial overgrowth causes epithelial proliferation of the mucosa that may progress from squamous hyperplasia to dysplasia and carcinoma, similar to that seen in sporadic esophageal squamous cell carcinoma.9
Apart from esophageal food stasis, the possibility of iatrogenic reflux after myotomy or pneumatic dilatation has also been suggested as a cause of adenocarcinoma.10 In a Dutch study of 331 achalasia patients treated with pneumatic dilation, 28 (8.5%) developed endoscopic evidence of Barrett's metaplasia with intestinal metaplasia, seen in histological samples.11 Another study also reported adenocarcinoma in Barrett's metaplasia in achalasia.2 Barrett's esophagitis is probably the result of LES tone-lowering therapy, which may induce significant sphincter insufficiency and, in theory, lead to worsened gastroesophageal reflux.
In patients with underlying achalasia, esophageal cancers are generally diagnosed in advanced stages because the two diseases have similar symptoms, so rigorous diagnostic evaluation of symptoms might be undertaken less frequently. Patients also have a large amount of retained food in the mega-esophagus, making visualization difficult. Therefore, reports of operability have been infrequent and 80% of patients are inoperable at initial diagnosis.3,5 Nonetheless, whether surveillance endoscopy should be generally recommended for all patients with esophageal achalasia is still controversial because of the long interval between initial symptoms of achalasia and the development of carcinoma. A previous study on 1,062 achalasia patients reported that the mean age at entry was 57.2 years and the mean age at cancer diagnosis was 71 years.12 Other studies also showed an interval between the first symptoms of achalasia and the diagnosis of esophageal cancer of at least 15 years.3,5 Based on these observations, a recommendation of regular surveillance endoscopy might be beneficial in long-standing achalasia patients. The benefits of surveillance endoscopy in these patients are seen in a higher prevalence of early esophageal cancer stages.3
The treatment of choice for early esophageal cancer is curative resection. Surgery for advanced esophageal carcinoma has been disappointing, however, because of low resectability and a high risk of distant metastasis, which is seen in about half of the patients at the time of initial diagnosis.13 Surgery has limited therapeutic effect, and therefore effective multimodality treatment is required to obtain better survival in advanced stage esophageal carcinoma cases.
Radiation and chemotherapy may synergistically enhance anti-tumor effects against esophageal cancer, so concurrent chemoradiotherapy (CRT) is an attractive strategy for radiosensitization and control of micrometastatic disease.14 Recent studies have shown that radiotherapy with chemotherapy is superior to radiotherapy alone, based on the results of a number of randomized trials.15-17 A landmark study by the Radiation Therapy Oncology Group (RTOG) compared radiation alone versus chemoradiation treatment (RTOG 85-01).16 At 5 years of follow-up, the overall survival rates were 26% for patients who received combined modality treatment and 0% for those who received radiotherapy alone.
Cytotoxic agents, for example 5-Fluorouracil (5-FU) and cisplatin, have been used as radiosensitizers in many tumors, and a synergistic effect has been shown between the two drugs.15,17 Based on these results, continuous 5-FU and cisplatin can be used in concurrent chemoradiotherapy. In phase II studies on advanced or recurrent esophageal squamous cell carcinoma, this combination induced a response rate of about 33-35%, leading to a median overall survival of 6-8 months, although complete response was rare.18,19 The case presented here also demonstrates an excellent response to preoperative, concurrent chemoradiation therapy. A large main esophageal mass disappeared with no remaining malignant cells. We also observed loss of ganglion cells in the myenteric plexuses, surrounded by lymphocytes, consistent with achalasia (Fig. 3C). This inflammatory degeneration preferentially involves the nitric oxide-producing, inhibitory neurons that induce the relaxation of esophageal smooth muscle. Thus, the cholinergic neurons that contribute to LES tone by causing smooth muscle contraction are spared.20
Our case represents achalasia with squamous esophageal carcinoma and mild dysphagia, which the patient experienced for more than 20 years. The long duration of dysphagia symptoms interfered with early diagnosis of esophageal cancer, and long-term food stasis might have been a factor in the precancerous state. Although surveillance endoscopy in achalasia patients is still controversial, periodic screening for cancer development might be advisable in long-standing achalasia patients.
Figure 1.(A, B) Endoscopic view of fungating mass lesion with dilated lower esophagus. (C) Fungating mass at distal esophagus and dilated distal esophagus with bird-beak sign. (D) Follow-up endoscopy showing disappearance of the previous cancer mass after the third cycle of concurrent chemoradiation therapy.
Gut and Liver 2009; 3: 329-333
Figure 2.(A) Intense 18 F-fluorodeoxyglucose (FDG) uptake is seen in the primary distal esophagus lesion with right hilar lymph node area and in the right supraclavicular lymph node. (B) The right hilar lymph node shows no definite FDG uptake. The main esophageal mass with right supraclavicular lymph node shows a decrease in size and mild FDG uptake compared to previous imaging.
Figure 3.(A) Gross specimen of the surgically removed esophagus after concurrent chemoradiation therapy. (B) Microscopic findings of achalasia-associated squamous cell carcinoma after concurrent chemoradiation therapy. Chronic inflammatory cell infiltration with stromal edema, telangiectasia and fibroblast proliferation is shown. (C) Myenteric inflammation with lymphocytes infiltration observed in the myenteric plexus (arrow). Ganglion cells are absent.
References
Fagge, CH. A case of simple stenosis of the oesophagus, followed by epithelioma. Guy's Hosp Rep, 1872;17;413.
Ellis, FH, Gibb, SP, Balogh, K, Schwaber, JR. Esophageal achalasia and adenocarcinoma in Barrett's esophagus: a report of two cases and a review of the literature. Dis Esophagus, 1997;10;55-60.
Brücher, BL, Stein, HJ, Bartels, H, Feussner, H, Siewert, JR. Achalasia and esophageal cancer: incidence, prevalence, and prognosis. World J Surg, 2001;25;745-749.
Mayberry, JF. Epidemiology and demographics of achalasia. Gastrointest Endosc Clin N Am, 2001;11;235-248.
Meijssen, MA, Tilanus, HW, van Blankenstein, M, Hop, WC, Ong, GL. Achalasia complicated by oesophageal squamous cell carcinoma: a prospective study in 195 patients. Gut, 1992;33;155-158.
Zendehdel, K, Nyrén, O, Edberg, A, Ye, W. Risk of esophageal adenocarcinoma in achalasia patients, a retrospective cohort study in Sweden. Am J Gastroenterol. Forthcoming 2007.
Carter, R, Brewer, LA. Achalasia and esophageal carcinoma. Studies in early diagnosis for improved surgical management. Am J Surg, 1975;130;114-120.
Goldblum, JR, Whyte, RI, Orringer, MB, Appelman, HD. Achalasia. A morphologic study of 42 resected specimens. Am J Surg Pathol, 1994;18;327-337.
Pajecki, D, Zilberstein, B, dos Santos, MA, et al. Megaesophagus microbiota: a qualitative and quantitative analysis. J Gastrointest Surg, 2002;6;723-729.
Guo, JP, Gilman, PB, Thomas, RM, Fisher, RS, Parkman, HP. Barrett's esophagus and achalasia. J Clin Gastroenterol, 2002;34;439-443.
Scholten, P, Leeuwenburgh, I, Vaessen, R, et al. Barrett's esophagus after pneumo-dilatation for achalasia. Gastroenterology, 2004;126;A635.
Sandler, RS, Nyrén, O, Ekbom, A, Eisen, GM, Yuen, J, Josefsson, S. The risk of esophageal cancer in patients with achalasia. A population-based study. JAMA, 1995;274;1359-1362.
Kelsen, DP, Ginsberg, R, Pajak, TF, et al. Chemotherapy followed by surgery compared with surgery alone for localized esophageal cancer. N Engl J Med, 1998;339;1979-1984.
Leichman, L, Steiger, Z, Seydel, HG, Vaitkevicius, VK. Combined preoperative chemotherapy and radiation therapy for cancer of the esophagus: the Wayne State University, Southwest Oncology group and Radiation Therapy Oncology Group experience. Semin Oncol, 1984;11;178-185.
al-Sarraf, M, Martz, K, Herskovic, A, et al. Progress report of combined chemoradiotherapy versus radiotherapy alone in patients with esophageal cancer: an intergroup study. J Clin Oncol, 1997;15;277-284.
Cooper, JS, Guo, MD, Herskovic, A, et al, Radiation Therapy Oncology Group. Chemoradiotherapy of locally advanced esophageal cancer: long-term follow-up of a prospective randomized trial (RTOG 85-01). JAMA, 1999;281;1623-1627.
Herskovic, A, Martz, K, al-Sarraf, M, et al. Combined chemotherapy and radiotherapy compared with radiotherapy alone in patients with cancer of the esophagus. N Engl J Med, 1992;326;1593-1598.
Hayashi, K, Ando, N, Watanabe, H, et al. Phase II evaluation of protracted infusion of cisplatin and 5-fluorouracil in advanced squamous cell carcinoma of the esophagus: a Japan Esophageal Oncology Group (JEOG) Trial (JCOG9407). Jpn J Clin Oncol, 2001;31;419-423.
Bleiberg, H, Conroy, T, Paillot, B, et al. Randomised phase II study of cisplatin and 5-fluorouracil (5-FU) versus cisplatin alone in advanced squamous cell oesophageal cancer. Eur J Cancer, 1997;33;1216-1220.
Holloway, RH, Dodds, WJ, Helm, JF, Hogan, WJ, Dent, J, Arndorfer, RC. Integrity of cholinergic innervation to the lower esophageal sphincter in achalasia. Gastroenterology, 1986;90;924-929.
Park JC, Lee YC, Kim SK, Kim YJ, Shin SK, Lee SK, Kim H, Kim CB. Achalasia Combined with Esophageal Cancer Treated by Concurrent Chemoradiation Therapy. Gut and Liver 2009;3:329-333.
189571 | https://inflateyourmind.com/macroeconomics/unit-5/section-4-the-tax-multiplier-and-the-balanced-budget-multiplier/ | How a Change in Taxes Affects GDP
If an increase in government spending leads to an increase in total spending and GDP, then an increase in taxes must lead to a decrease in total spending and GDP, and vice versa. When the government raises taxes, private spending decreases. Keynes noted, however, that the decrease in overall spending from a tax increase is not as large as the increase in overall spending from a government spending increase of the same amount. The reason is that people save a portion of any additional after-tax income (such as from a tax cut), whereas the government spends all of its money. The example in the next paragraph illustrates this.
The Tax Multiplier
Let’s say that taxes increase by $1,000. Therefore, people’s after-tax income (income available for consumption or savings) decreases by $1,000. If the MPC is 80%, then people would have consumed only $800 of this $1,000; the other $200 would have been saved. Therefore, total spending throughout the economy decreases by 5 (the multiplier) times $800 = $4,000. This $4,000 is 4 times the change in taxes.
Mathematically, we can show that the tax multiplier is the negative of (the spending multiplier minus 1). In the above example, the regular spending multiplier from the previous section is 5 and, therefore, the tax multiplier is -4. Thus,
| |
| The tax multiplier = – (the regular multiplier – 1) In the above example: The tax multiplier = – (5 – 1) = -4 |
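The relationship above can be sketched in a few lines of Python (the function names here are illustrative, not from the text):

```python
def spending_multiplier(mpc):
    """Regular (spending) multiplier: 1 / (1 - MPC)."""
    return 1 / (1 - mpc)

def tax_multiplier(mpc):
    """Tax multiplier: the negative of (the spending multiplier minus 1)."""
    return -(spending_multiplier(mpc) - 1)

# With an MPC of 80%, the spending multiplier is 5 and the tax multiplier is -4.
print(round(spending_multiplier(0.8), 6), round(tax_multiplier(0.8), 6))
```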
The following applications provide further explanations of this concept.
Examples of How a Change in Taxes Affects GDP
Example 1
Problem: Let’s say that we are experiencing a recession and the government decreases taxes by $25 billion. Let’s also assume that the MPC equals .75. By how much will GDP increase?
Solution: Because the MPC equals .75, the regular (spending) multiplier equals 4, and the tax multiplier equals -3.
The spending multiplier = 1 / (1 minus .75) = 1 / .25 = 4. The tax multiplier equals 4 minus 1 with a negative sign: -(4 – 1) = -3.
To get the increase in GDP, we multiply the multiplier by the decrease in taxes:
Change in GDP = -3 × -$25 billion = +$75 billion.
This means that if GDP was $800 billion before the change, it will be $875 billion after the change.
Recessionary Gap
Example 2
Problem: Let’s say that we are experiencing a recessionary gap of $360 billion. A recessionary gap is how much GDP needs to increase from the current GDP in order to achieve full employment. Also assume that the MPC equals .90. How much will the government have to decrease taxes in order to close the recessionary gap?
Solution: We know that the decrease in taxes times the tax multiplier equals the increase in GDP. The MPC is .9, so the regular multiplier is 1 / (1 – .9) = 10.
That means that the tax multiplier is -(10 – 1) = -9.
So: (the change in taxes) × (the tax multiplier) = the change in GDP.
So: (the change in taxes) × (-9) = $360 billion.
So: (the change in taxes) = $360 billion / (-9) = -$40 billion.
In other words, if the government decreases taxes by $40 billion, and the tax multiplier is -9, then GDP will increase by $360 billion. Since we need to add $360 billion to GDP to achieve full employment, we will have closed the recessionary gap.
Inflationary Gap
Example 3
Problem: Let’s say that we are experiencing an inflationary gap of $200 billion. An inflationary gap is how much GDP needs to decrease from the current GDP in order to achieve full employment without causing inflation. Also assume that the MPC equals .80.
Solution: (The change in taxes) × (the tax multiplier) = the change in GDP.
The regular multiplier is 5 (calculation: 1 / (1 – .8)), so the tax multiplier is -4.
So: (the change in taxes) × (-4) = -$200 billion.
So: (the change in taxes) = (-$200 billion) / (-4) = +$50 billion.
In other words, if the government increases taxes by $50 billion, and the tax multiplier is -4, then GDP will decrease by $200 billion. Since we need to lower GDP by $200 billion to achieve full employment without inflation, we will have closed the inflationary gap.
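Both gap examples follow the same arithmetic, which can be checked with a short Python sketch (the helper name is illustrative, not from the text):

```python
def tax_change_to_close_gap(gap, mpc):
    """Tax change required so that (change in taxes) x (tax multiplier) = gap.

    A positive gap is a recessionary gap (GDP must rise), so a negative
    result means a tax cut; a negative gap is an inflationary gap.
    """
    multiplier = 1 / (1 - mpc)       # regular spending multiplier
    tax_mult = -(multiplier - 1)     # tax multiplier
    return gap / tax_mult

# Recessionary gap of $360b with MPC = .9: a $40b tax cut is needed.
print(round(tax_change_to_close_gap(360, 0.9)))
# Inflationary gap of -$200b with MPC = .8: a $50b tax increase is needed.
print(round(tax_change_to_close_gap(-200, 0.8)))
```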
The Balanced Budget Multiplier
When the government increases spending by a certain amount and it increases taxes by the same amount, then GDP will increase by that amount. The following example illustrates this.
Example 4
Problem: Let’s say the government increases spending by $1,000 and also increases taxes by $1,000, and the MPC equals .8. By how much will GDP change?
Solution: The multiplier equals 5 and so the tax multiplier equals -4. Therefore, GDP will increase by $5,000 from the $1,000 additional government spending (5 times $1,000). And GDP will decrease by $4,000 from the additional $1,000 in taxes (-4 times $1,000). Thus, on balance, equilibrium income (GDP) will increase by $1,000 ($5,000 minus $4,000).
Therefore, when the government spends $1,000 and imposes taxes of $1,000, it balances its budget, while increasing equilibrium GDP by $1,000.
Thus, when the government changes spending and taxes by the same amount, then equilibrium income (GDP) changes by 1 times this amount. We say that
| |
| The balanced budget multiplier = 1. |
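A quick numeric check of this result (a sketch; `gdp_change` is an illustrative name, not from the text):

```python
def gdp_change(delta_g, delta_t, mpc):
    """Combined GDP change from a spending change and a tax change."""
    multiplier = 1 / (1 - mpc)       # regular spending multiplier
    tax_mult = -(multiplier - 1)     # tax multiplier
    return delta_g * multiplier + delta_t * tax_mult

# Raising both spending and taxes by $1,000 (MPC = .8):
# +$5,000 from spending, -$4,000 from taxes, net +$1,000.
print(round(gdp_change(1000, 1000, 0.8)))
```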
The balanced budget multiplier implies that if the government increases spending and taxation by the same amount, then equilibrium national income (GDP) rises by this amount.
This balanced budget stimulation is possible, according to Keynes, because when the government receives $1,000, it spends it all. On the other hand, when private citizens receive $1,000, they spend only a fraction of it (in the above example, they spend 80%). They save the other fraction. Because savings, according to Keynes, is a “leakage” from the economy, the economy “loses” 20% in stimulation if private citizens spend it, compared to no loss (no savings) if the government spends it.
Do you agree with Keynes that it is possible to stimulate the economy by, for example, $1 trillion, simply by raising government spending and taxes by $1 trillion?
For a video explanation of additional examples involving the Keynesian multiplier, please watch:
9 Comments
Peter
on March 6, 2018 at 3:03 am
Hi
Could you help with the following question. The answer is B, but why?
Reply
John Bouman
on December 30, 2018 at 5:17 am
Nikhil, the balanced budget multiplier of 1 refers to dollar amounts of taxes and government spending that are the same (which is why the budget is balanced). So if the government increases spending by a certain amount (for example, $100) and taxes go up by that same amount ($100), then according to Keynes, total spending in the economy goes up by $100. If you have tax in the form of a function, the multiplier can only be 1 if the spending function is the same (G = Ga + GY).
Peter, which question are you referring to? If you can copy it in your message, I will respond.
Reply
2. mannie efule
on October 22, 2019 at 6:53 pm
Dear Dr. Bouman, with a recessionary gap of $60b and an MPC of .8,
what combination of government spending and taxes will eliminate this gap?
I worked on this problem and can’t seem to get your suggested answers of a spending increase of $10b and a tax decrease of $2.5b.
Respectfully,
Mannie
Reply
John Bouman
on December 24, 2019 at 4:17 pm
Mannie, thanks for your question.
We have a recessionary gap of $60 billion, so we have to increase total spending in the economy (GDP) by $60b. If you increase government spending by $10b and decrease taxes by $2.5b, then you will achieve this. Explanation: you have to use the two Keynesian equations that involve the multipliers:
(the change in government spending) × (the multiplier) = the change in the economy’s spending, and
(the change in taxes) × (the tax multiplier) = the change in the economy’s spending.
The multiplier is 1 / (1 - .8) = 5. The tax multiplier is -(5 - 1) = -4. So:
$10b × 5 = $50b, and
-$2.5b × -4 = $10b.
$50b + $10b = $60b, which is the desired increase in the economy’s total spending.
Hope this helps.
Reply
Hassan Abdullahi Omobolaji
on September 8, 2021 at 9:17 pm
I must say a very big thank you to all who contributed to this project and Sharing to the world all for free… Thanks I gained a lot…..
Reply
3. alexa
on January 25, 2021 at 2:58 am
this is awesome. thanks so much!
Reply
4. Hassan Abdullahi Omobolaji
on September 8, 2021 at 9:18 pm
I must say a very big thank you to all who contributed to this project and having access to it all for free… Thanks, I gained a lot…..
Reply
Iqra
on March 9, 2022 at 3:13 am
It’s very good . You explain exactly in the
same manner that I wanted.
Reply
5. John Bouman
on September 9, 2023 at 10:20 pm
Gracias, Robert. Even though Keynes supported it, the balanced budget multiplier does not really work in the long run. The balanced budget multiplier is based on the idea that people’s savings are a loss to economic stimulation. However, savings go into banks and other financial markets and eventually get borrowed and spent. So savings eventually (in the long run) lead to spending, and this means that in the long run $1,000 in government spending has the same effect as $1,000 in the hands of people who spend $800 of it and save $200 of it. So a government that spends $1,000 and taxes the people $1,000 has a zero effect on economic stimulation. It then becomes an issue of: do you want your government to spend money, or do you want private citizens to spend money? Who spends the money more wisely and more efficiently? What is better for the economy?
Reply
Trackbacks/Pingbacks
Imaginemos un resurgimiento keynesiano, por Robert Skidelsky - i360 Noticias - […] paquete contiene el germen de una buena idea conocida como el multiplicador del presupuesto equilibrado: un incremento del gasto…
Leave a reply Cancel reply |
189572 | https://justinwillmert.com/articles/2020/numerically-computing-the-exponential-function-with-polynomial-approximations/ | Numerically computing the exponential function with polynomial approximations
Written 2020 Aug 10 — Tags: Algorithms, Julia, Linear Algebra, Minimax, Numerical Accuracy. Using Julia v1.5.0
Numerically computing any non-trivial function (spanning the trigonometric functions through to special functions like the complex gamma function) is a large field of numerical computing, and one I am interested in expanding my knowledge within. In this article, I describe my recent exploration of how the exponential function can be implemented numerically.
Introduction¶
A recent pull request in the Julia repository (PR 36761) was proposed that aims to speed up the calculation of the exponential function ($e^x$) for double floating point values. While reading through the code and comments, I found myself wondering how some of the “magic constants” were being derived, especially because they are similar — but not identical — to the Taylor expansion coefficients any mathematician or physicist is likely to recognize on sight. Curiosity led me to devote the past few days to investigating some of the standard techniques used in implementing numerical approximations of special functions, so this article serves as a useful reference for myself (and hopefully others) for some of the techniques I’ve learned.
Taylor Series¶
To any physicist, the first obvious choice for approximating any (well behaved) function is its Taylor series expansion. For an infinitely differentiable function $f(x)$ at $x = a$, the Taylor series is defined as the infinite sum
\begin{align} f(x) = \sum_{n = 0}^{\infty} \frac{1}{n!} \frac{d^n f(a)}{dx^n} (x - a)^n \end{align}
where $!$ denotes the factorial function and $d^nf(a)/dx^n$ is the $n$-th derivative of $f(x)$ evaluated at $x = a$.1 The approximation is made by truncating the infinite series to a finite series of some arbitrary number of terms, giving a finite-degree polynomial approximation to the function $f(x)$.
There is an immediately obvious problem with just using the Taylor series, though — what value of $x = a$ do you choose to expand about? The exponential function is defined on the entire real line, but you only have one point about which the function is well approximated, with the approximation growing poorer the further from $a$ you go. The span of the well-approximated region can be increased by including more terms in the series, but this is a strategy of diminishing returns given that increasing the accuracy to arbitrary precision everywhere requires an infinite series.
In general, there are also issues of ensuring convergence of the series (i.e. the function does not have to converge everywhere for a given choice of $a$), and the rapidly growing size of $n!$ and high powers of $(x-a)^n$ will quickly lead to issues of numerically computing their values without under/overflow.
Despite these problems, though, let us ignore them for now and proceed to investigate the family of Taylor series approximations $P_n(x)$ of degree $n$. The Taylor series for $e^x$ about $x = 0$ is
\begin{align} e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \ldots \end{align}
which can be easily coded up as a generated function (so that the factorial coefficients need only be calculated once):
```
@generated function exp_taylor(x::Float64, ::Val{N}) where {N}
    coeff = ntuple(i -> inv(factorial(i - 1)), N + 1)  # (1/0!, 1/1!, ..., 1/N!)
    return :(evalpoly(x, $coeff))
end
```
The following figure plots the first through third degree Taylor series approximations versus $e^x$ (top panel), along with the absolute error $\Delta = e^x - P_n(x)$ (middle) and relative error $\epsilon = \Delta / e^x$ (bottom).
With just this simple plot, we can already make some useful observations, both good and bad. First, a good feature is that the Taylor series expansion is almost trivial to implement — we defined it using only 4 lines of code, and this works up to a 20th-degree series. (After that factorial(n) overflows, so continuing to add degrees would require a bit more effort to work around.) Second, we also see the expected behavior that adding higher degree terms to the series improves the approximation, with the third-order approximation having uniformly smaller error than the first-degree approximation.
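The degree-20 limit is easy to check directly: Julia's `factorial` on machine integers throws an `OverflowError` for `21!`, which is why the generated function above cannot naively go any higher.

```julia
# 20! is the largest factorial representable in Int64; 21! overflows.
largest = factorial(20)
overflows = try
    factorial(21)
    false
catch err
    err isa OverflowError
end
```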
On the other hand, though, there are some problems evident as well. For one, the odd degree polynomial terms lead to values which are generally biased low. This is clearly visible in the first-degree term where the orange line in the upper panel is everywhere below the black line except at $x = 0$ where they are equal. Ideally we’d like to have an approximation which gives an unbiased answer over some domain.
Next, the error is distributed unevenly in both the absolute and relative sense, and we do not have a way to tune or control that other than to increase the number of terms in the series and hope for the best. In particular, the relative error rapidly increases for $x < 0$ where the exponential takes on very small values, and a calculation based on a value which deviates from its true value by more than $100\%$ is likely to give problematic results.
Thankfully, we have plenty of tools to use yet which can address each of these problems.
Range reduction¶
Let us first address the problem of choosing where to center the Taylor expansion. In the previous section I chose to expand the exponential about $x = 0$, and that caused the approximations to be best near only that point. The general technique of “range reduction” aims to reformulate a calculation in terms of multiple (hopefully easy) calculations that shift one domain (here infinite) to a smaller one of our choice.
The simplest case of such a range reduction is for a periodic function where only a single period of the domain has to be considered. For instance, $\cos(x + 2n\pi) = \cos(x)$ for all integers $n$, so the Taylor approximation only has to converge to the required accuracy for a domain like $x \in [0, 2\pi)$ and all other inputs are remapped into the range.
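As a minimal sketch of the periodic case (illustrative only; production libraries use a more careful argument reduction), Julia's `rem2pi` performs exactly this remapping into a single period:

```julia
# Remap any x into one period [0, 2π) before evaluating a core approximation;
# cosine is unchanged by shifting the argument by whole periods.
reduce_cos(x) = rem2pi(x, RoundDown)
```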
We can do something analogous for the exponential function, but instead of periodicity we will exploit the identity $e^{a + b} = e^a e^b$ for some carefully chosen values of $a$ and $b$.
Let $x = k \ln 2 + r$ for some integer $k$, which implies that the residual $r$ is limited to the finite domain $|r| \le \frac{1}{2}\ln 2$. Then substituting and simplifying with some exponential-logarithmic identities, we find that $e^x$ can be re-expressed as
\begin{align} e^x = e^{k\ln 2 + r} = e^{k\ln 2} e^r = 2^k e^r \end{align}
This is a manifestly simpler expression because the factor of $2^k$ is easy to compute. Standard floating point values2 are stored in base-2 “scientific notation” in terms of its fractional part $f$ and a power-of-two $p$ such that $z = f \cdot 2^p$. Because of this representation, the particular power $2^k$ is constructed by a few fast bit manipulations, and then the answer is a simple product of two floating point values (where we assume we can calculate $e^r$, such as with a Taylor series approximation).
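To illustrate why $2^k$ is so cheap to construct, here is a sketch under the assumption of the standard IEEE 754 binary64 layout (not the actual construction used in Base): place the biased exponent $k + 1023$ directly into the 11-bit exponent field above the 52-bit significand.

```julia
# Construct 2^k for Float64 by bit manipulation alone: zero significand and
# an exponent field holding the biased value k + 1023 (normal range only).
function pow2_bits(k::Integer)
    @assert -1022 ≤ k ≤ 1023  # subnormal and overflow cases are not handled here
    return reinterpret(Float64, UInt64(k + 1023) << 52)
end
```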
Rearranging the expression to solve for $k$, we invoke the requirement that $k$ be an integer to eliminate the residual $r$ so that
\begin{align} k = \operatorname{round}\left(\frac{x}{\ln 2}\right) = \left\lfloor \frac{x}{\ln 2} + \frac{1}{2} \right\rfloor \end{align}
where $\operatorname{round}(z)$ rounds to the nearest integer (so that $|r| \le \frac 1 2 \ln 2$ as required), and the rightmost expression rewrites the rounding in terms of the floor function. The residual is then obtained by solving the expression once more, this time for $r$ now that $k$ is known:
\begin{align} r &= x - k\ln 2 \end{align}
We can then put this range reduction and the Taylor series approximation from the previous section together3 to make a range-reducing Taylor series approximation to $e^x$.
```
function reduce_exp(x::Float64)
    ln2 = 0.6931471805599453     # log(2)
    invln2 = 1.4426950408889634  # 1 / log(2)
    k = floor(muladd(x, invln2, 0.5))  # ⌊ x/log(2) + 1/2 ⌋
    r = muladd(k, -ln2, x)             # x - k ln2
    return (trunc(Int, k), r)
end

function exp_reduce_taylor(x::Float64, n::Val{N}) where {N}
    k, r = reduce_exp(x)
    return exp2(k) * exp_taylor(r, n)  # 2^k exp(r)
end
```
Figure 3.1 is analogous to Fig. 2.1 from the previous section but instead calls our new exp_reduce_taylor() function which does range-reduction. The first obvious feature is the periodicity of the relative error across the domain caused by the range reduction and scaling. This technique has allowed us to constrain the relative error to a bounded range everywhere (and equal to the error near $x = 0$), and the bound is progressively tightened by adding higher-order terms to the Taylor series. Note that unlike the relative error, though, the absolute error grows as $x \rightarrow \infty$ because any approximation error in $e^r$ is then scaled by the factor of $2^k$.
While we have successfully improved upon the problem of choosing a suitable point to expand about and the problem of an infinite domain, we still see the biased average result in the odd orders. Furthermore, we’ve added a new problem with the fact that the numerical function is no longer continuous across the $\frac{k}{2}\ln 2$ boundaries.
To solve these problems, we have to move beyond the Taylor series approximation.
Minimax polynomial¶
To take a new approach, let us consider some conditions we’d like our approximation to satisfy. First, the approximation should be a “good” approximation, which we can quantify by requiring that the absolute (relative) error be minimized over all possible $n$-th degree polynomials $P_n(x)$:
\begin{align} \text{absolute error}\quad &\Rightarrow \quad \min_{P_n} \big\lVert f(x) - P_n(x) \big\rVert_\infty \\ \text{relative error}\quad &\Rightarrow \quad \min_{P_n} \left\lVert \frac{f(x)-P_n(x)}{f(x)} \right\rVert_\infty \end{align}
where $\lVert\cdot\rVert_\infty$ is notation for the infinity norm which reduces to the maximum of the expression. The solution $P_n(x)$ is then called the minimax polynomial. To generically describe both the absolute and relative error minimizations simultaneously, we’ll add a weight function $w(x)$ so that we are instead minimizing
\begin{align} \min_{P_n} \Big\lVert w(x) \big[ f(x) - P_n(x) \big] \Big\rVert_\infty \end{align}
where $w(x) = 1$ to minimize the absolute error and $w(x) = 1 / f(x)$ to minimize the relative error.
The solution to this minimization problem is given by the Remez algorithm, which is a method for iteratively improving the polynomial approximation, and it can be easily understood if you keep a couple of facts about polynomials in mind.
First, $n$-th degree polynomials have $n + 1$ degrees of freedom (coefficients) and $n + 1$ roots. For any arbitrary polynomial, the roots may be repeated or complex, but we can ensure the polynomial has all distinct and real roots by adding an additional constraint — if the polynomial is constructed such that it has $n$ local extrema that successively change signs, then all $n + 1$ roots will be real and distinct.
Second, recall that all non-constant ($n > 0$) polynomials will always diverge as $|x| \rightarrow \infty$, so to constrain the error over a finite domain $[a, b]$, we are motivated to augment the local extrema with the endpoints. This gives us an ordered list of $n + 2$ points $(a = x_0) < x_1 < x_2 < \ldots < x_n < (x_{n+1} = b)$ at which the error function is to be minimized.
The minimax condition is then achieved when the error function evaluated at each of these extrema are equal in magnitude. (This is the equioscillation theorem.) That is to say, we are seeking the polynomial that satisfies the system of equations over all of the extrema nodes given by
\begin{align} e(x_i) = (-1)^i E = w(x_i) \big[ f(x_i) - P_n(x_i) \big] \end{align}
for $i = {0, 1, \ldots, n+1}$. Note that this constraint adds another degree of freedom (the error $E$) to the system (on top of the polynomial coefficients) so that there are $n+2$ undetermined coefficients and $n+2$ equations and therefore forms a solvable system.
Rewritten into the form of a matrix equation, we can formally solve for the polynomial coefficients $a_i$ — where $P_n(x) = \sum_{k=0}^n a_k x^k$ — and the maximum deviation $E$ with the weights and oscillation constraint as:
\begin{align} \begin{bmatrix} 1 & x_0 & x_0^2 & \ldots & x_0^n & 1/w(x_0) \\ 1 & x_1 & x_1^2 & \ldots & x_1^n & -1/w(x_1) \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & x_n & x_n^2 & \ldots & x_n^n & (-1)^n/w(x_n) \\ 1 & x_{n+1} & x_{n+1}^2 & \ldots & x_{n+1}^n & (-1)^{n+1}/w(x_{n+1}) \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \\ E \end{bmatrix} &= \begin{bmatrix} f(x_0) \\ f(x_1) \\ \vdots \\ f(x_n) \\ f(x_{n+1}) \end{bmatrix} \label{eqn:sys} \end{align}
You may have noticed by now, though, that there are actually twice as many unknowns as equations, since we haven’t actually stated how to a priori choose the nodes $x_i$ other than to say they should be the local extrema of the minimax error. This is where the iterative part of the Remez algorithm comes in — instead of solving the exact solution to the system of equations, we make an initial guess at some nodal points and solve for the least-squares solution. This will almost surely fail to be the minimax polynomial with equal magnitude oscillations of the error function, but we can use this best-fit estimate to update our choice of nodes and repeat.
Finally, we are ready to state the full iterative Remez algorithm:
Choose an initial vector of trial nodes $\mathbf{x}$.
Calculate the regressor matrix $\mathbf{R}$ (square matrix on left-hand side of Eqn. $\ref{eqn:sys}$) and the vector of function values $\mathbf{f} = f(\mathbf{x})$ (vector on the right-hand side).
Solve (in a least-squares sense) the system of equations $\mathbf{R} \mathbf{a} = \mathbf{f}$ for the vector of polynomial coefficients and deviation magnitude.
For the best-fit polynomial, find the local extrema $\mathbf{x’}$ and compare their deviation magnitudes. If they are equal (or sufficiently close within some tolerance), the current solution is the minimax polynomial. Quit iterating; the polynomial is defined by all but the last entry in the $\mathbf{a}$ vector.
Otherwise, replace the nodes $\mathbf{x}$ with the locations of the extrema (i.e. let $\mathbf{x} \leftarrow \mathbf{x’}$) and return to step 2 to iterate the process.
For additional context on solving a system of equations like this built from evaluating basis functions over a set of points, see my previous article on building linear regression matrix operators. For our purposes, though, it is sufficient to just know that the least-squares regression can be obtained by using Julia’s built-in backslash (\) operator and/or LinearAlgebra.ldiv! methods.
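As a tiny, self-contained illustration of the linear solve at the heart of each iteration (the matrix and right-hand side here are made-up numbers, not the actual Remez regressor):

```julia
using LinearAlgebra

# Solve R a = f with the backslash operator; for a square, well-conditioned R
# this is the same operation that `ldiv!(factorize(R), f)` performs in place.
R = [1.0 0.0;
     1.0 1.0]
f = [1.0, 3.0]
a = R \ f   # fits a line through the points (0, 1) and (1, 3)
```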
We’ll also skip working through the details of finding the extrema except to say that it can be easily accomplished through a couple more iterative algorithms that first find the zeros of the error function (rootbisect() in the function below) and then search for extrema between each pair of roots (extremumsearch()). The code for these functions is included after the following code block.
Finally, the only remaining definition we require is a method for choosing the initial node distribution. We’ve already argued that the endpoints are two of the nodes, but we have $n$ more to choose. In theory almost any distribution of non-overlapping nodes should work, but the convergence could be very slow if they are very poor initial estimates. A common suggestion is to use the zeros of the Chebyshev polynomial of degree $n$ remapped from $[-1,1]$ to $[a,b]$. In the definition below, I have chosen a modified, Chebyshev-like distribution4 that differs by naturally including the endpoints.
The following remez() function is a translation of this process. It takes in as arguments the exact function to approximate f, the endpoints a and b, the degree of the polynomial to produce N, and an optional weight function w (defaulting to the unit function which as you’ll recall corresponds to finding the absolute error minimax approximation).
```
using LinearAlgebra

function remez(f, a, b, N, w = one; atol = sqrt(eps(b - a)), maxiter = 20)
    # Initialize node locations
    nodes = @. (a + b)/2 - (b - a)/2 * cospi((0:N+1) / (oftype(a, N) + 1))
    # Working space for exact function values, the regressor matrix, and the
    # locations of the roots of the error function
    fvals = similar(nodes)
    regr = similar(nodes, N + 2, N + 2)
    roots = similar(nodes, N + 1)
    # Weighted error function
    coeff = @view fvals[1:N+1]
    err(x) = w(x) * (f(x) - evalpoly(x, coeff))
    # Difference in magnitude between the error function and best-fit deviation constant
    dev(x) = abs(abs(err(x)) - abs(fvals[N+2]))

    iter = 0
    while true
        # Update the exact solution and the regressor
        regr[:, 1:N+1] .= nodes .^ transpose(0:N)
        regr[:, N+2] .= ((-1)^i / w(nodes[i]) for i in 1:N+2)
        # Perform the regression, overwriting the exact solution
        fvals .= f.(nodes)
        ldiv!(factorize(regr), fvals)  # coeff .= regr \ fvals

        iter += 1
        if iter > maxiter
            @warn "Maximum number of iterations reached. Quitting."
            break
        end

        # Search for each root between successive pairs of nodes.
        roots .= (rootbisect(err, nodes[i], nodes[i+1]) for i in 1:N+1)
        # And then update all the internal nodes with the locations of the actual
        # local extrema
        nodes[2:N+1] .= (extremumsearch(err, roots[i], roots[i+1]) for i in 1:N)

        # Stopping condition is when the actual deviations differ from the best-fit
        # deviation coefficient in magnitude by less than `atol`.
        maximum(dev, nodes) ≤ atol && break
    end
    return (nodes, copy(coeff))
end
```
The following helper functions to find the zeros and extrema of the error functions are very simple implementations of the bisection method for finding roots and the golden-section search for finding the extrema. In particular, these are not robust against all possible inputs and are not guaranteed to work in all cases, but they were simple to code and are sufficient for the few cases shown here.
```
"""
Simple bisecting search for the zero of function ``f(x)`` in the interval ``a ≤ x ≤ b``.
"""
function rootbisect(f, a, b; atol = sqrt(eps(abs(b - a))))
    fa, fb = f(a), f(b)
    abs(fa) < atol && return a  # Left bound is a zero
    abs(fb) < atol && return b  # Right bound is a zero
    signbit(fa) == !signbit(fb) ||  # no zero crossing is an error
        error("Invalid interval — does not cross zero")
    while true
        m = (a + b) / 2  # bisect halfway between intervals
        fm = f(m)        # and recalculate f(m)
        abs(fm) < atol && return m  # found the zero
        # If f(a) and f(m) are the same sign, then drop lower half of remaining interval
        if signbit(fa) == signbit(fm)
            a, fa = m, fm
        # otherwise the upper half has been excluded.
        else
            b, fb = m, fm
        end
    end
end

"""
Search for an extremum of ``f(x)`` in the interval ``a ≤ x ≤ d``.
"""
function extremumsearch(f, a, d; atol = sqrt(eps(abs(d - a))))
    T = typeof(atol)
    ϕinv = T(2) / (one(T) + sqrt(T(5)))
    b = d - ϕinv * (d - a)
    c = a + ϕinv * (d - a)
    fa, fb, fc, fd = abs.(f.((a, b, c, d)))
    while true
        # Stop when the search bounds have converged to sufficient accuracy
        d - a < atol && return (a + d) / 2
        # The extremum must be in [a, c]
        if fb > fc
            c, d = b, c
            fc, fd = fb, fc
            b = d - ϕinv * (d - a)
            fb = abs(f(b))
        # otherwise the extremum must be in [b, d]
        else
            a, b = b, c
            fa, fb = fb, fc
            c = a + ϕinv * (d - a)
            fc = abs(f(c))
        end
    end
end
```
We are now finally ready to improve our approximation of the exponential function over that given by a Taylor series. As mentioned earlier, we can choose to minimize the error in an absolute or relative sense, so we’ll do both by defining variations of exp optimized under both weighting functions. These kernel functions replace the Taylor-series approximation on the interval $|x| \le \frac 1 2 \ln 2$, and we continue to use range reduction over the entire real axis.
```
# Minimizes absolute error in |x| ≤ log(2)/2
@generated function exp_absminimax_kernel(x::Float64, ::Val{N}) where {N}
    ln2by2 = log(2.0) / 2
    _, coeff = remez(exp, -ln2by2, ln2by2, N)
    return :(evalpoly(x, $(tuple(coeff...))))
end

# Minimizes relative error in |x| ≤ log(2)/2
@generated function exp_relminimax_kernel(x::Float64, ::Val{N}) where {N}
    ln2by2 = log(2.0) / 2
    _, coeff = remez(exp, -ln2by2, ln2by2, N, x -> exp(-x))
    return :(evalpoly(x, $(tuple(coeff...))))
end

function exp_reduce_kernel(x::Float64, n::Val{N}, kernel) where {N}
    k, r = reduce_exp(x)
    return exp2(k) * kernel(r, n)  # 2^k exp(r)
end

exp_absminimax(x::Float64, n::Val{N}) where {N} = exp_reduce_kernel(x, n, exp_absminimax_kernel)
exp_relminimax(x::Float64, n::Val{N}) where {N} = exp_reduce_kernel(x, n, exp_relminimax_kernel)
```
First, both variations of the minimax polynomial eliminate the systematic bias in the odd-order polynomials that was present while using the Taylor series approximation.
Second, we can see that both variations do minimize the absolute and relative error, respectively. (This is probably easiest to see by comparing the last one and a half of the range-reducing periods from about $x = 1$ to the right side.) In the left column which shows the absolute minimax approximation, the left and right endpoints of each range-reducing period have the same deviation. (Compare this against Fig. 3.1 where lower and upper endpoints of the period have different absolute errors.) Then in the right column which shows the relative minimax approximation, we see a similar per-period relationship but now in the relative error measurement.
For the relative error case, we also gain continuity of the approximation across range-reducing periods in the odd-degree approximations; this is a direct consequence of the error polynomial being a leading-order-even function together with constraining the relative errors to be equal on the lower and upper edges of the range-reducing period.
Finally, let us directly compare the known Taylor series coefficients against the minimax polynomial coefficients (since, as mentioned in the introduction, the observation that the Julia pull request contained not-quite Taylor series coefficients is what initially motivated my exploration into this topic). The following table shows the Taylor series and relative minimax polynomial coefficients up to degree 3, along with the difference (minimax minus Taylor) and the relative percent change with respect to the Taylor series coefficients.
| Degree | Taylor series | Minimax poly | Abs. (Rel.) change |
| --- | --- | --- | --- |
| 0 | 1.0000000000000000 | 0.9999280752600668 | -7.192e-05 (-0.00719 %) |
| 1 | 1.0000000000000000 | 1.0001641903948264 | 0.0001642 (0.0164 %) |
| 2 | 0.5000000000000000 | 0.5049632650961922 | 0.004963 (0.993 %) |
| 3 | 0.1666666666666667 | 0.1656683995499798 | -0.0009983 (-0.599 %) |
The bottom-line conclusion is that only very small (less than 1%) tweaks to the Taylor series coefficients are necessary to transform the Taylor polynomial into the minimax polynomial, and the coefficients observed in the Julia pull request could reasonably be due to a minimax-like optimization.
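The change column can be reproduced directly from the table's first two columns (the minimax values below are copied from the table above, not recomputed):

```julia
# Taylor coefficients 1/n! versus the tabulated relative-minimax coefficients.
taylor  = (1.0, 1.0, 0.5, 1/6)
minimax = (0.9999280752600668, 1.0001641903948264, 0.5049632650961922, 0.1656683995499798)
abschange = minimax .- taylor
relchange = abschange ./ taylor .* 100   # percent change relative to Taylor
```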
A more complex minimax example — expm1¶
Before we conclude, let’s take a quick look at a slightly more complex example of defining minimax polynomials.
A well known issue with numerically calculating the exponential function is that for $|x| \ll 1$, the answer is very nearly $1$, and accuracy in the result is lost due to the limits of finite floating point representation. Namely, calculating exp(x) for a value very nearly $0$ and then subtracting 1.0 is subject to so-called “catastrophic” cancellation. Therefore, getting a high-accuracy result requires direct calculation of a special form of the function — expm1(x), which is defined to be $e^x - 1$ — without intermediately using exp(x) and subtracting. The expm1 function is commonly provided in math libraries (including the C99 standard and, of course, by Julia’s base library).
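The cancellation is easy to demonstrate numerically:

```julia
# For tiny x, exp(x) - 1 loses most of its significant digits to cancellation
# near 1.0, while expm1(x) retains nearly full Float64 accuracy.
x = 1.0e-12
naive = exp(x) - 1.0   # only a few digits correct
good  = expm1(x)       # accurate to roughly machine precision
```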
Given the simple mathematical definition, we can consider how to form a numerical approximation by expressing the function in terms of the exponential’s Taylor series and simplifying:
\begin{align} e^x - 1 &= \left(\sum_{k=0}^{\infty} \frac{x^k}{k!}\right) - 1 \\ {} &= \left(1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \ldots\right) - 1 \\ {} &= x \left(1 + \frac{x}{2} + \frac{x^2}{6} + \ldots \right) \\ e^x - 1 &= x \sum_{k=0}^{\infty} \frac{x^k}{(k+1)!} \approx x P_n(x) \end{align}
Thus we have identified $e^x - 1$ as the product of $x$ and an infinite power series which can be truncated and approximated by some $n$-th order polynomial.
To have the remez() function above solve for the minimax polynomial, we must manipulate the error between $e^x - 1$ and $x P_n(x)$ for either absolute or relative error minimization back into the standard form $w(x)\left[ f(x) - P_n(x) \right]$. Doing so for the relative case, we simply divide through by the extra factor of $x$ and make the appropriate identifications:
\begin{align} \frac{\left[e^x - 1\right] - \left[x P_n(x)\right]}{e^x - 1} &= \frac{x}{e^x - 1}\left[ \frac{e^x - 1}{x} - P_n(x) \right] \\ {} &= w(x) \left[ f(x) - P_n(x) \right] \\ {} &\qquad\text{where}\quad f(x) \equiv \frac{1}{w(x)} \equiv \frac{e^x - 1}{x} \end{align}
Note that both $f(x)$ and $w(x)$ limit to unity as $x \rightarrow 0$, so the functions are well-defined.
For a generic solution, we also need to replace the range-reduction step with one which accounts for the subtraction by one. With a bit of algebra, it’s easy to show that for $x = k\ln 2 + r$ as previously, we have
\begin{align} e^x - 1 = 2^k(e^r - 1) + 2^k - 1 \end{align}
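This identity is easy to verify numerically. The following Python sketch (function name hypothetical, using the library expm1 as a reference) applies it with $k$ chosen as the nearest integer to $x/\ln 2$:

```python
import math

def expm1_via_reduction(x):
    """e^x - 1 via the reduction x = k*ln(2) + r with |r| <= ln(2)/2,
    using the identity e^x - 1 = 2^k*(e^r - 1) + (2^k - 1)."""
    k = round(x / math.log(2.0))
    r = x - k * math.log(2.0)           # reduced argument
    s = 2.0 ** k
    return s * math.expm1(r) + (s - 1.0)
```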
Translating these definitions into code:
```
# Minimizes relative error in |x| ≤ log(2)/2
@generated function expm1_minimax_kernel(x::Float64, ::Val{N}) where {N}
    ln2by2 = log(2.0) / 2
    expm1_kernel(x) = iszero(x) ? one(x) : expm1(x) / x
    expm1_weight(x) = iszero(x) ? one(x) : x / expm1(x)
    # Solve for one fewer degrees than requested to make comparisons with
    # exp(x)-1 fair — i.e. the highest degree in (x P_n(x)) == N
    _, coeff = remez(expm1_kernel, -ln2by2, ln2by2, N - 1, expm1_weight)
    return :(x * evalpoly(x, $(tuple(coeff...))))  # Note the leading mult. by x
end

function expm1_minimax(x::Float64, n::Val{N}) where {N}
    k, r = reduce_exp(x)
    s = exp2(k)
    return muladd(s, expm1_minimax_kernel(r, n), s - 1.0)
end
```
and then directly comparing a third-order relative minimax approximation of exp(x)-1 versus expm1(x):
The catastrophic cancellation for small $x$ is visible in the bottom panel of the left column, where the fractional error diverges as $x \rightarrow 0$. This is because the absolute error (middle row of the left column) is non-zero near and at $x = 0$; since $\lim_{x\rightarrow 0} e^x - 1 = 0$, the fractional error is unbounded. In contrast, the right column shows that the direct minimax optimization of expm1(x) actually constructs a minimax polynomial that approaches $0$ as $x \rightarrow 0$, and therefore the relative error is tightly bounded.
Conclusions¶
For myself (and likely many physicists) the Taylor series is a well known and familiar method of constructing polynomial approximations of arbitrary functions. Numerically, though, the minimax polynomial is an alternative construction that provides more desirable properties, and it is a concept that any numerically-oriented person should probably have in their toolbox.
In this article, I ignored the problem of bootstrapping the implementation — i.e. we optimized minimax polynomials against Julia’s built-in exp(x::Float64) and expm1(x::Float64) functions — but this isn’t a deal-breaking problem. In some cases you may want to build a custom, fast approximation of an intricate function which can be accurately (but too slowly) calculated in terms of known functions. Then this is just a performance optimization, and there’s no issue of how to bootstrap the process.
If the initial bootstrapping of a special function is the goal, though, then we already have a perfectly serviceable definition — just calculate the exact answers to machine precision using a large number of terms in the Taylor series, and optimize for a minimax polynomial that requires fewer terms to achieve the same accuracy. In practice the use of extended-precision arithmetic will almost certainly be required to avoid loss of precision in standard floating point implementations, but that is not an insurmountable problem.
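For instance, exact rational arithmetic is one simple stand-in for extended precision when generating reference values (a Python sketch of the idea, not the method used by the Julia implementation; the term count of 60 is an arbitrary, comfortably large choice):

```python
from fractions import Fraction

def exp_reference(x, terms=60):
    """Exact rational partial sum of the exp Taylor series.
    Pass x as a Fraction so no rounding occurs until a final float()."""
    coeffs, fact = [], 1
    for k in range(terms + 1):
        coeffs.append(Fraction(1, fact))    # 1/k!
        fact *= k + 1
    acc = Fraction(0)
    for c in reversed(coeffs):              # Horner evaluation
        acc = acc * x + c
    return acc
```

Rounding the exact rational result to a double then yields a reference value trustworthy to the last ulp, against which a lower-degree minimax fit can be optimized.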
Finally, this is far from being a comprehensive explanation of how to write accurate numerical calculations of special functions. For instance, our range reduction for exp only contracted the domain down to the interval $|x| \le \frac 1 2 \ln 2$, but a commonly-referenced extension of this idea is Tang’s method which makes use of a pre-computed table of non-integer powers of two to further reduce the working range. This reduces the polynomial degree required to calculate all answers to machine precision. (The Julia pull request linked to in the introduction uses a similar scheme, and the range is reduced to $|x| \le \ln(2)/512$.) There’s also an entire domain of implementation details to be considered to ensure the calculation is actually performant on real hardware — e.g. optimizing for branch (mis-)prediction, instruction pipelining, auto-vectorization, etc — that are outside the scope of this article to discuss.
Note that for $n = 0$, the $0$-th derivative of $f$ is defined to be itself (e.g. $d^0 f(x)/dx^0 = f(x)$), and $0! = 1$ so that the first term always simplifies to just $f(a)$. ↩︎
This is technically not true for all possible ways of storing floating point values, but the IEEE 754 standard is used almost everywhere most users are likely to encounter floating point values. ↩︎
Using the exp2() function is not a subtle cheat and/or circular definition that itself depends on a properly defined generic exp() function when the argument is an integer type. You can verify with the Julia source code that it does exactly as I claimed and uses bit-level manipulations to directly construct the floating point value equal to $2^k$. ↩︎
The Chebyshev polynomial zeros are all contained within the interval $-1$ to $1$ (excluding the endpoints) and are distributed such that they correspond to evenly distributed points on a unit sphere mapped onto the $x$-axis. The left-hand panel below shows the $n$ Chebyshev zeros for a 12th degree polynomial as the blue points, with the two endpoints added in as the red points. Notice that the spacing between the first and second points and between the penultimate and last points is not equal to the spacing between the rest of the successive pairs. The initial distribution of points used in remez() is very similar but modified to directly produce $n + 2$ points distributed uniformly across the unit sphere with the endpoints included. This case is shown in the right-hand panel. ↩︎
36710-36752 Fall 2020 Advanced Probability Overview
Lecture Notes Set 1: Course Overview, σ-Fields, and Measures
Instructor: Alessandro Rinaldo
Associated reading: Sec 1.1-1.4 of Ash and Doléans-Dade; Sec 1.1 and A.1 of Durrett.
1 Introduction

How is this course different from your earlier probability courses? There are some problems that simply can’t be handled with finite-dimensional sample spaces and random variables that are either discrete or have densities.
Example 1 Try to express the strong law of large numbers without using an infinite-dimensional space.
Oddly enough, the weak law of large numbers requires only a sequence of finite-dimensional spaces, but the strong law concerns entire infinite sequences.
Example 2 Consider a distribution whose cumulative distribution function (cdf) increases continuously part of the time but has some jumps. Such a distribution is neither discrete nor continuous. How do you define the mean of such a random variable? Is there a way to treat such distributions together with discrete and continuous ones in a unified manner?
General Measures

Both of the above examples are accommodated by a generalization of the theories of summation and integration. Indeed, summation becomes a special case of the more general theory of integration. It all begins with a generalization of the concept of “size” of a set.
Example 3 One way to measure the size of a set is to count its elements. All infinite sets would have the same size (unless you distinguish different infinite cardinals).
Example 4 Special subsets of Euclidean spaces can be measured by length, area, volume, etc. But what about sets with lots of holes in them? For example, how large is the set of irrational numbers between 0 and 1?
We will use measures to say how large sets are. First, we have to decide which sets we will measure.
2 Set-Theoretic Preliminaries

Universe set. Let Ω be a universe set. Every set will be implicitly assumed to be a subset of Ω, and set-theoretic operations (union, intersection and complement) are well defined only with respect to Ω.
Power set. The power set of A, denoted 2^A, is the set of all subsets of A (including the empty set and A itself).
Monotone Sequences of Sets. A sequence (finite or infinite) of sets A_1, A_2, ... such that A_1 ⊂ A_2 ⊂ ··· is said to be increasing. The sequence has limit A = ∪_n A_n, in which case we say that A_n increases to A, written A_n ↑ A. Similarly, if A_1 ⊃ A_2 ⊃ ···, the sequence is decreasing; it is said to decrease to its limit set A = ∩_n A_n, written A_n ↓ A. A sequence of sets is monotone if it is either increasing or decreasing.
Exercise 1 Let a < b be real numbers and set A_n = [a − 1/n, b − 1/n]. Find ∪_n A_n. Similarly, let B_n = (a + 1/n, b + 1/n). Find ∩_n B_n. This shows that an infinite union of closed sets need not be closed and an infinite (non-empty) intersection of open sets need not be open. What about arbitrary unions of open sets and intersections of closed sets?
DeMorgan Laws: If A_1, A_2, ... is an arbitrary sequence, (∪_n A_n)^c = ∩_n A_n^c and (∩_n A_n)^c = ∪_n A_n^c. Thus A_n ↓ A if and only if A_n^c ↑ A^c.
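These identities can be spot-checked mechanically on finite examples (a Python sketch outside the notes' formal development; the universe and subsets chosen below are arbitrary):

```python
def de_morgan_holds(omega, sets):
    """Check (union of A_n)^c == intersection of A_n^c, and the dual,
    for finitely many subsets `sets` of a finite universe `omega`."""
    comp = lambda s: omega - s
    union = set().union(*sets)
    inter = set(omega).intersection(*sets)
    return (comp(union) == set(omega).intersection(*(comp(s) for s in sets))
            and comp(inter) == set().union(*(comp(s) for s in sets)))
```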
From union of sets to union of disjoint sets. Let A_1, A_2, ... be an arbitrary sequence of sets in Ω. Then ∪_n A_n = ∪_n B_n, where B_n = A_n ∩ A_{n−1}^c ∩ ... ∩ A_1^c, with A_0 = ∅, and the B_n's are disjoint.
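The disjointification construction translates directly into code (a Python sketch, with a finite list of sets standing in for the sequence):

```python
def disjointify(sets):
    """Return B_n = A_n \ (A_1 ∪ ... ∪ A_{n-1}): pairwise disjoint sets
    with the same union as the original sequence."""
    covered, result = set(), []
    for a in sets:
        result.append(set(a) - covered)   # remove everything already covered
        covered |= set(a)
    return result
```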
Upper and lower limit of sequence of sets. For any arbitrary sequence A_1, A_2, ..., its limit superior is lim sup_n A_n = ∩_{n=1}^∞ ∪_{k=n}^∞ A_k and its limit inferior is lim inf_n A_n = ∪_{n=1}^∞ ∩_{k=n}^∞ A_k.
Thus, ω ∈ lim sup_n A_n iff for every n, ω ∈ A_k for some (in fact, infinitely many) k ≥ n.
Equivalently, ω ∈ lim sup_n A_n if and only if ω ∈ A_n for infinitely many n's, or infinitely often.
Conversely, ω ∈ lim inf_n A_n iff, for some n, ω ∈ A_k for all k ≥ n. That is, ω ∈ lim inf_n A_n iff ω ∈ A_n for all but finitely many n's, or eventually.
The sequence A_1, A_2, ... has a limit A iff lim sup_n A_n = lim inf_n A_n = A. In the case of monotone (i.e. either increasing or decreasing) sequences, we recover the notion of limit introduced above.
Exercise 2 It is instructive to compare the notion of limit superior and inferior to the analogous notion for sequences: for a sequence of real numbers x_1, x_2, ..., recall that lim sup_n x_n = inf_n sup_{k≥n} x_k and lim inf_n x_n = sup_n inf_{k≥n} x_k. For a set A, let I_A : Ω → {0, 1} be its indicator function: I_A(ω) = 1 if ω ∈ A, 0 if ω ∉ A.
Then I_{lim sup_n A_n} = lim sup_n I_{A_n} and I_{lim inf_n A_n} = lim inf_n I_{A_n}.
Countable vs uncountable sets. A set A is finite if |A| < ∞, where |A| (or card(A) or #A) is the number of elements or cardinality of A, and infinite otherwise.
A better distinction, which is very important in measure theory, is between sets that are countable versus sets that are uncountable. A set A is countable if there exists a function φ : A → N mapping the elements of A into the naturals that is injective. It is just a mathematical way of expressing the fact that the set of natural numbers is “large enough” compared to A so that all elements of A can be labeled using the naturals (or a subset thereof). A countable set may be finite or infinite. An uncountable set is always infinite. A finite set is always countable.
Exercise 3 Show that the cartesian product of two countable sets is countable. Conclude that the cartesian product of finitely many countable sets is countable. Use this to show that a countable union of countable sets is countable. On the other hand, the countable cartesian product of countable sets is not countable. To see this, show that even the set of infinite binary sequences is not countable.
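For the first claim, one standard device is the Cantor pairing function, an explicit bijection N×N → N obtained by enumerating pairs along anti-diagonals (a Python sketch, offered as a hint rather than a full solution):

```python
def cantor_pair(i, j):
    """Position of (i, j) when N x N is enumerated along the anti-diagonals
    i + j = 0, 1, 2, ...; this map is a bijection N x N -> N."""
    s = i + j
    return s * (s + 1) // 2 + j
```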
Exercise 4 Use the result in the previous exercise to show that the power set of an infinite countable set is not countable.
Example 5 (The unit interval is uncountable.) We have seen that the set of infinite binary sequences is uncountable. The claim therefore will follow if we can show that each number in (0, 1] can be expressed as an infinite binary sequence. Let T be the mapping of the interval Ω = (0, 1] into itself given by Tω = 2ω if 0 < ω ≤ 1/2, and Tω = 2ω − 1 if 1/2 < ω ≤ 1.
Now define d_1 on Ω by d_1(ω) = 0 if 0 < ω ≤ 1/2, and d_1(ω) = 1 if 1/2 < ω ≤ 1, and for any integer i > 1 and ω ∈ Ω, set d_i(ω) = d_1(T^{i−1}ω). Then, it can be shown that, for all n ≥ 1,

∑_{i=1}^{n} d_i(ω)/2^i < ω ≤ ∑_{i=1}^{n} d_i(ω)/2^i + 1/2^n,  ∀ω ∈ Ω.  (5)

As a result,

ω = ∑_{i=1}^{∞} d_i(ω)/2^i,  ∀ω ∈ Ω.
This gives the dyadic representation of each ω in (0, 1] as a binary sequence (d_i(ω), i = 1, 2, ...). Notice that if d_i(ω) = 0 for all i > n, then ω = ∑_{i=1}^{n} d_i(ω)/2^i, contradicting the strict inequality in (5). Thus, the binary representation of each ω ∈ Ω does not terminate in 0's (equivalently, it contains an infinite number of 1's). Thus, we have shown that Ω can be represented as the set of infinite binary sequences that do not terminate in 0's. Since (0, 1] is in one-to-one correspondence with any interval on the real line of the form (a, b] with a < b, we conclude that any interval on the real line has uncountably many points.
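The doubling-map construction of the digits d_i(ω) can be carried out exactly with rational arithmetic (a Python sketch outside the notes; for ω = 3/8 the digits come out 0, 1, 0, 1, 1, 1, ..., never terminating in 0's, consistent with the remark above):

```python
from fractions import Fraction

def dyadic_digits(omega, n):
    """First n digits d_i(omega), omega in (0, 1], from the doubling map T.
    Use Fraction inputs so the iteration is exact."""
    assert 0 < omega <= 1
    digits = []
    for _ in range(n):
        d = 0 if omega <= Fraction(1, 2) else 1
        digits.append(d)
        omega = 2 * omega - d    # T(omega)
    return digits

def partial_sum(digits):
    """Sum of d_i / 2^i over the digits produced so far."""
    return sum(Fraction(d, 2 ** i) for i, d in enumerate(digits, start=1))
```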
3 σ-fields

Definition 1 (fields and σ-fields) Let Ω be a set. A collection F of subsets of Ω is called a field if it satisfies
• Ω ∈ F,
• for each A ∈ F, A^c ∈ F,
• for all A_1, A_2 ∈ F, A_1 ∪ A_2 ∈ F.
A field F is a σ-field if, in addition, it satisfies
• for every sequence {A_k}_{k=1}^∞ in F, ∪_{k=1}^∞ A_k ∈ F.
We will define measures on fields and σ-fields.
Definition 2 (Measurable Space) A set Ω together with a σ-field F is called a measurable space (Ω, F), and the elements of F are called measurable sets.
Example 6 (Intervals on ℝ¹) Let Ω = ℝ and define U to be the collection of all unions of finitely many disjoint intervals of the form (a, b] or (−∞, b] or (a, ∞) or (−∞, ∞), together with ∅. Then U is a field.
Example 7 (Power set) Let Ω be an arbitrary set. The collection of all subsets of Ω is a σ-field, in fact the largest σ-field of subsets of Ω. It is denoted 2^Ω and is called the power set of Ω.
Example 8 (Trivial σ-field) Let Ω be an arbitrary set. Let F = {Ω, ∅}. This is the trivial σ-field.
Exercise 6 Let F_1, F_2, ... be classes of sets in a common space Ω such that F_n ⊂ F_{n+1} for each n. Show that if each F_n is a field, then ∪_{n=1}^∞ F_n is also a field.
If each F_n is a σ-field, is ∪_{n=1}^∞ F_n also necessarily a σ-field? Think about the following case: Ω is the set of nonnegative integers and F_n is the σ-field of all subsets of {0, 1, ..., n} and their complements.
Generated σ-fields

A field is closed under finite set-theoretic operations whereas a σ-field is closed under countable set-theoretic operations. In a problem dealing with probabilities, one usually deals with a small class of subsets A, for example the class of subintervals of (0, 1]. It is possible that if we perform countable operations on such a class A of sets, we might end up operating on sets outside the class A. Hence, we would like to define a class denoted by σ(A) in which we can safely perform countable set-theoretic operations. This class σ(A) is called the σ-field generated by A, and it is defined as the intersection of all the σ-fields containing A (exercise: show that this is a σ-field). σ(A) is the smallest σ-field containing A.
Example 9 Let C = {A} for some nonempty A that is not itself Ω. Then σ(C) = {∅, A, A^c, Ω}.
Example 10 Let Ω = ℝ and let C be the collection of all intervals of the form (a, b]. Then the field generated by C is U from Example 6 while σ(C) is larger.
Example 11 (Borel σ-field) Let Ω be a topological space and let C be the collection of open sets. Then σ(C) is called the Borel σ-field. If Ω = ℝ, the Borel σ-field is the same as σ(C) in Example 10. The Borel σ-field of subsets of ℝ^k is denoted B_k.
Exercise 7 Give some examples of classes of sets C such that σ(C) = B_1.

Exercise 8 Are there subsets of ℝ which are not in B_1?
4 Measures

Notation 12 (Extended Reals) The extended reals is the set of all real numbers together with ∞ and −∞. We shall denote this set ℝ̄. The positive extended reals, denoted ℝ̄^+, is (0, ∞], and the nonnegative extended reals, denoted ℝ̄^+_0, is [0, ∞].
Definition 3 Let (Ω, F) be a measurable space. Let µ : F → ℝ̄^+_0 satisfy
• µ(∅) = 0,
• for every sequence {A_k}_{k=1}^∞ of mutually disjoint elements of F, µ(∪_{k=1}^∞ A_k) = ∑_{k=1}^∞ µ(A_k).
Then µ is called a measure on (Ω, F) and (Ω, F, µ) is a measure space. If F is merely a field, then a µ that satisfies the above two conditions whenever ∪_{k=1}^∞ A_k ∈ F is called a measure on the field F.
Example 13 Let Ω be arbitrary with F the trivial σ-field. Define µ(∅) = 0 and µ(Ω) = c for arbitrary c > 0 (with c = ∞ possible).
Example 14 (Counting measure) Let Ω be arbitrary and F = 2^Ω. For each finite subset A of Ω, define µ(A) to be the number of elements of A. Let µ(A) = ∞ for all infinite subsets. This is called counting measure on Ω.
Definition 4 (Probability measure) Let (Ω, F, P) be a measure space. If P(Ω) = 1, then P is called a probability, (Ω, F, P) is a probability space, and elements of F are called events.
Sometimes, if the name of the probability P is understood or is not even mentioned, we will denote P(E) by Pr(E) for events E.
Infinite measures pose a few unique problems. Some infinite measures are just like finite ones.
Definition 5 (σ-finite measure) Let (Ω, F, µ) be a measure space, and let C ⊆ F. Suppose that there exists a sequence {A_n}_{n=1}^∞ of elements of C such that µ(A_n) < ∞ for all n and Ω = ∪_{n=1}^∞ A_n. Then we say that µ is σ-finite on C. If µ is σ-finite on F, we merely say that µ is σ-finite.
Example 15 Let Ω = ℤ with F = 2^Ω and µ being counting measure. This measure is σ-finite. Counting measure on an uncountable space is not σ-finite.
Exercise 9 Prove the claims in Example 15.
4.1 Basic properties of measures

There are several useful properties of measures that are worth knowing.
First, measures are countably subadditive in the sense that

µ(∪_{n=1}^∞ A_n) ≤ ∑_{n=1}^∞ µ(A_n),  (10)

for arbitrary sequences {A_n}_{n=1}^∞. The proof of this uses a standard trick for dealing with countable sequences of sets. Let B_1 = A_1 and let B_n = A_n \ ∪_{i=1}^{n−1} B_i for n > 1. The B_n's are disjoint and have the same finite and countable unions as the A_n's. The proof of Equation 10 relies on the additional fact that µ(B_n) ≤ µ(A_n) for all n.
Next, if µ(A_n) = 0 for all n, it follows that µ(∪_{n=1}^∞ A_n) = 0. This gets used a lot in proofs. Similarly, if µ is a probability and µ(A_n) = 1 for all n, then µ(∩_{n=1}^∞ A_n) = 1.
Definition 6 (Almost sure/almost everywhere) Suppose that some statement about elements of Ω holds for all ω ∈ A^c where µ(A) = 0. Then we say that the statement holds almost everywhere, denoted a.e. [µ]. If P is a probability, then almost everywhere is often replaced by almost surely, denoted a.s. [P].
Example 16 Let (Ω, F, P) be a probability space. Let {X_n}_{n=1}^∞ be a sequence of functions from Ω to ℝ. To say that X_n converges to X a.s. [P] (denoted X_n →a.s. X) means that there is a set A with P(A) = 0 and lim_{n→∞} X_n(ω) = X(ω) for all ω ∈ A^c.
Proposition 11 (Linearity) If µ_1, µ_2, ... are all measures on (Ω, F) and if {a_n}_{n=1}^∞ is a sequence of positive numbers, then ∑_{n=1}^∞ a_n µ_n is a measure on (Ω, F).
Exercise 12 Prove Proposition 11.
4.2 Monotone sequences of sets and limits of measure

Definition 7 (Monotone sequences of sets) Let (Ω, F, µ) be a measure space. A sequence {A_n}_{n=1}^∞ of elements of F is called monotone increasing if A_n ⊆ A_{n+1} for each n. It is monotone decreasing if A_n ⊇ A_{n+1} for each n.
Lemma 17 Let (Ω, F, µ) be a measure space. Let {A_n}_{n=1}^∞ be a monotone sequence of elements of F. Then lim_{n→∞} µ(A_n) = µ(lim_{n→∞} A_n) if either of the following hold:
• the sequence is increasing,
• the sequence is decreasing and µ(A_k) < ∞ for some k.
If {A_n}_{n=1}^∞ is any sequence of measurable sets and µ is finite, then

µ(lim inf_n A_n) ≤ lim inf_n µ(A_n) ≤ lim sup_n µ(A_n) ≤ µ(lim sup_n A_n).
In particular, if lim_n A_n = A exists, then lim_n µ(A_n) = µ(A).
Proof: Define A_∞ = lim_{n→∞} A_n. In the first case, write B_1 = A_1 and B_n = A_n \ A_{n−1} for n > 1. Then A_n = ∪_{k=1}^n B_k for all n (including n = ∞). Then µ(A_n) = ∑_{k=1}^n µ(B_k), and

µ(lim_{n→∞} A_n) = µ(A_∞) = ∑_{k=1}^∞ µ(B_k) = lim_{n→∞} ∑_{k=1}^n µ(B_k) = lim_{n→∞} µ(A_n).

In the second case, write B_n = A_n \ A_{n+1} for all n ≥ k. Then, for all n > k,

A_k \ A_n = ∪_{i=k}^{n−1} B_i,  A_k \ A_∞ = ∪_{i=k}^∞ B_i.

By the first case,

lim_{n→∞} µ(A_k \ A_n) = µ(∪_{i=k}^∞ B_i) = µ(A_k \ A_∞).

Because A_n ⊆ A_k for all n > k and A_∞ ⊆ A_k, it follows that

µ(A_k \ A_n) = µ(A_k) − µ(A_n),  µ(A_k \ A_∞) = µ(A_k) − µ(A_∞).

It now follows that lim_{n→∞} µ(A_n) = µ(A_∞).
As for the second claim, for each n ≥ 1 let B_n = ∩_{k=n}^∞ A_k and C_n = ∪_{k=n}^∞ A_k. Then B_n ↑ lim inf_n A_n and C_n ↓ lim sup_n A_n. Thus, for each n, µ(A_n) ≥ µ(B_n), which implies that

lim inf_n µ(A_n) ≥ lim inf_n µ(B_n) = lim_n µ(B_n) = µ(lim inf_n A_n).

Similarly, the fact that µ(A_n) ≤ µ(C_n) for all n implies that

lim sup_n µ(A_n) ≤ lim sup_n µ(C_n) = lim_n µ(C_n) = µ(lim sup_n A_n).
The claims easily follow.
Exercise 13 Construct a simple counterexample to show that the condition µ(A_k) < ∞ is required in the second claim of Lemma 17.
4.3 Uniqueness of Measures

There is a popular method for proving uniqueness theorems about measures. The idea is to define a function µ on a convenient class C of sets and then prove that there can be at most one extension of µ to σ(C).
Example 18 Suppose it is given that for any a ∈ ℝ,

P((−∞, a]) = ∫_{−∞}^{a} (1/√(2π)) exp(−u²/2) du.

Does that uniquely define a probability measure on the class of Borel subsets of the line, B_1?
Definition 8 (π-system and λ-system) A collection A of subsets of Ω is a π-system if, for all A_1, A_2 ∈ A, A_1 ∩ A_2 ∈ A. A class C is a λ-system if
• Ω ∈ C,
• for each A ∈ C, A^c ∈ C,
• for each sequence {A_n}_{n=1}^∞ of disjoint elements of C, ∪_{n=1}^∞ A_n ∈ C.
Example 19 The collection of all intervals of the form (−∞, a] is a π-system of subsets of ℝ. So too is the collection of all intervals of the form (a, b] (together with ∅). The collection of all sets of the form {(x, y) : x ≤ a, y ≤ b} is a π-system of subsets of ℝ². So too is the collection of all rectangles with sides parallel to the coordinate axes.
Some simple results about π-systems and λ-systems are the following.
Proposition 14 If Ω is a set and C is both a π-system and a λ-system, then C is a σ-field.

Proposition 15 Let Ω be a set and let Λ be a λ-system of subsets. If A ∈ Λ and A ∩ B ∈ Λ, then A ∩ B^c ∈ Λ.
Exercise 16 Prove Propositions 14 and 15.
Lemma 20 (π-λ theorem) Let Ω be a set, let Π be a π-system, and let Λ be a λ-system that contains Π. Then σ(Π) ⊆ Λ.
Proof: Define λ(Π) to be the smallest λ-system containing Π. For each A ⊆ Ω, define G_A to be the collection of all sets B ⊆ Ω such that A ∩ B ∈ λ(Π).
First, we show that G_A is a λ-system for each A ∈ λ(Π). To see this, note that A ∩ Ω ∈ λ(Π), so Ω ∈ G_A. If B ∈ G_A, then A ∩ B ∈ λ(Π), and Proposition 15 says that A ∩ B^c ∈ λ(Π), so B^c ∈ G_A. Finally, {B_n}_{n=1}^∞ ∈ G_A with the B_n disjoint implies that A ∩ B_n ∈ λ(Π) with the A ∩ B_n disjoint, so their union is in λ(Π). But their union is A ∩ (∪_{n=1}^∞ B_n). So ∪_{n=1}^∞ B_n ∈ G_A.
Next, we show that λ(Π) ⊆ G_C for every C ∈ λ(Π). Let A, B ∈ Π, and notice that A ∩ B ∈ Π, so B ∈ G_A. Since G_A is a λ-system containing Π, it must contain λ(Π). It follows that A ∩ C ∈ λ(Π) for all C ∈ λ(Π). If C ∈ λ(Π), it then follows that A ∈ G_C. So Π ⊆ G_C for all C ∈ λ(Π). Since G_C is a λ-system containing Π, it must contain λ(Π).
Finally, if A, B ∈ λ(Π), we just proved that B ∈ G_A, so A ∩ B ∈ λ(Π) and hence λ(Π) is also a π-system. By Proposition 14, λ(Π) is a σ-field containing Π and hence must contain σ(Π). Since λ(Π) ⊆ Λ, the proof is complete.
The uniqueness theorem is the following.
Theorem 21 (Uniqueness theorem) Suppose that µ_1 and µ_2 are measures on (Ω, F) and F = σ(Π) for a π-system Π. If µ_1 and µ_2 are both σ-finite on Π and they agree on Π, then they agree on F.
Proof: First, let C ∈ Π be such that µ_1(C) = µ_2(C) < ∞, and define G_C to be the collection of all B ∈ F such that µ_1(B ∩ C) = µ_2(B ∩ C). It is easy to see that G_C is a λ-system that contains Π, hence it equals F by Lemma 20. (For example, if B ∈ G_C, µ_1(B^c ∩ C) = µ_1(C) − µ_1(B ∩ C) = µ_2(C) − µ_2(B ∩ C) = µ_2(B^c ∩ C), so B^c ∈ G_C.) Since µ_1 and µ_2 are σ-finite, there exists a sequence {C_n}_{n=1}^∞ ∈ Π such that µ_1(C_n) = µ_2(C_n) < ∞, and Ω = ∪_{n=1}^∞ C_n. (Since Π is only a π-system, we cannot assume that the C_n are disjoint.) For each A ∈ F,

µ_j(A) = lim_{n→∞} µ_j(∪_{i=1}^n [C_i ∩ A]) for j = 1, 2.

Since µ_j(∪_{i=1}^n [C_i ∩ A]) can be written as a linear combination of values of µ_j at sets of the form A ∩ C, where C ∈ Π is the intersection of finitely many of C_1, ..., C_n, it follows from A ∈ G_C that µ_1(∪_{i=1}^n [C_i ∩ A]) = µ_2(∪_{i=1}^n [C_i ∩ A]) for all n, hence µ_1(A) = µ_2(A).
Exercise 17 Return to Example 18. You should now be able to answer the question posed there.
Exercise 18 Suppose that Ω = {a, b, c, d, e} and I tell you the value of P({a, b}) and P({b, c}). For which subsets of Ω do I need to define P(·) in order to have a unique extension of P to a σ-field of subsets of Ω?
5 Lebesgue Measure and Caratheodory's Extension Theorem

Let F be a cdf (nondecreasing, right-continuous, limits equal 0 and 1 at −∞ and ∞ respectively). Let U be the field in Example 6 (unions of finitely many disjoint intervals). Define µ : U → [0, 1] by µ(A) = ∑_{k=1}^n [F(b_k) − F(a_k)] when A = ∪_{k=1}^n (a_k, b_k] and the {(a_k, b_k]} are disjoint. This set-function is well-defined and finitely additive.
Is µ countably additive as probabilities are supposed to be? That is, if A = ∪_{i=1}^∞ A_i where the A_i's are disjoint, each A_i is a union of finitely many disjoint intervals, and A itself is the union of finitely many disjoint intervals (a_k, b_k] for k = 1, ..., n, does µ(A) = ∑_{i=1}^∞ µ(A_i)?
First, take the collection of intervals that go into all of the A_i's and split them, if necessary, so that each is a subset of at most one of the (a_k, b_k] intervals. Then apply the following result to each (a_k, b_k].
Lemma 22 Let (a, b] = ∪_{k=1}^∞ (c_k, d_k] with the (c_k, d_k]'s disjoint. Then F(b) − F(a) = ∑_{k=1}^∞ [F(d_k) − F(c_k)].
Proof: Since (a, b] ⊇ ∪_{k=1}^n (c_k, d_k] for all n, it follows that F(b) − F(a) ≥ ∑_{k=1}^n [F(d_k) − F(c_k)] (because the (c_k, d_k]'s are disjoint), hence F(b) − F(a) ≥ ∑_{k=1}^∞ [F(d_k) − F(c_k)]. We need to prove the opposite inequality.
Suppose first that both a and b are finite. Let ϵ > 0. For each k, there is e_k > d_k such that F(d_k) ≤ F(e_k) ≤ F(d_k) + ϵ/2^k.
Also, there is f > a such that F(a) ≥ F(f) − ϵ. Now, the interval [f, b] is compact and [f, b] ⊆ ∪_{k=1}^∞ (c_k, e_k). So there are finitely many (c_k, e_k)'s (suppose they are the first n) such that [f, b] ⊆ ∪_{k=1}^n (c_k, e_k). Now,

F(b) − F(a) ≤ F(b) − F(f) + ϵ ≤ ϵ + ∑_{k=1}^n [F(e_k) − F(c_k)] ≤ 2ϵ + ∑_{k=1}^n [F(d_k) − F(c_k)].
Here we have to work with finitely many (c_k, e_k)'s because we do not yet have countable subadditivity. It follows that F(b) − F(a) ≤ 2ϵ + ∑_{k=1}^∞ [F(d_k) − F(c_k)]. Since this is true for all ϵ > 0, it is true for ϵ = 0.
If −∞ = a < b < ∞, let g > −∞ be such that F(g) < ϵ. The above argument shows that

F(b) − F(g) ≤ ∑_{k=1}^∞ [F(d_k ∨ g) − F(c_k ∨ g)] ≤ ∑_{k=1}^∞ [F(d_k) − F(c_k)].
Since lim_{g→−∞} F(g) = 0, it follows that F(b) ≤ ∑_{k=1}^∞ [F(d_k) − F(c_k)]. Similar arguments work when a < b = ∞ and −∞ = a < b = ∞.
In Lemma 22 you can replace F by an arbitrary nondecreasing right-continuous function with only a bit more effort. (See the supplement at the end of this lecture.) The function µ defined in terms of a nondecreasing right-continuous function is a measure on the field U. There is an extension theorem that gives conditions under which a measure on a field can be extended to a measure on the generated σ-field. Furthermore, the extension is unique.
Example 23 (Lebesgue measure) Start with the function F(x) = x, form the measure µ on the field U and extend it to the Borel σ-field. The result is called Lebesgue measure, and it extends the concept of “length” from intervals to more general sets.
Example 24 Every distribution function for a random variable has a corresponding probability measure on the real line.
Theorem 25 (Caratheodory Extension) Let µ be a σ-finite measure on the field C of subsets of Ω. Then µ has a unique extension to a measure on σ(C).
Exercise 19 In this exercise, we prove Theorem 25. Note that the uniqueness of the extension is a direct consequence of Theorem 21. We only need to prove the existence.
First, for each B ∈ 2^Ω, define

µ*(B) = inf ∑_{i=1}^∞ µ(A_i),  (20)

where the inf is taken over all {A_i}_{i=1}^∞ such that B ⊆ ∪_{i=1}^∞ A_i and A_i ∈ C for all i. Since C is a field, we can assume that the A_i's are mutually disjoint without changing the value of µ*(B). Let

A = {B ∈ 2^Ω : µ*(C) = µ*(C ∩ B) + µ*(C ∩ B^c) for all C ∈ 2^Ω}.
Now take the following steps:
1. Show that µ* extends µ, i.e. that µ*(A) = µ(A) for each A ∈ C.
2. Show that µ* is monotone and subadditive.
3. Show that C ⊆ A.
4. Show that A is a field.
5. Show that µ* is finitely additive on A.
6. Show that A is a σ-field.
7. Show that µ* is countably additive on A.
5.1 Extension to ℝ^k

The Borel σ-field on ℝ^k, denoted B_k, is generated by the class of hyper-rectangles of the form

A = {x ∈ ℝ^k : a_i < x_i ≤ b_i, i = 1, ..., k} = (a_1, b_1] × ... × (a_k, b_k].

Above, −∞ ≤ a_i < b_i for all i = 1, ..., k.
Let F : ℝ^k → ℝ be a function. Typically F takes values in [0, 1] or a bounded interval, but this is not necessary. For each hyper-rectangle A let

∆_A F = ∑_{v ∈ V_A} sgn(v) F(v),

where V_A = {a_1, b_1} × ... × {a_k, b_k} is the set of vertices of A and, for any v ∈ V_A, sgn(v) is −1 or 1 depending on whether v has an odd or even number of a's.
Assume that F is a distribution function, i.e. that it satisfies the properties:
1. F is right continuous: if y ↓ x in ℝ^k (meaning that y_i ↓ x_i for all i) then F(y) ↓ F(x);
2. F is non-decreasing: ∆_A F ≥ 0 for all hyper-rectangles A.
Caratheodory's extension theorem allows for the following general construction of measures on (ℝ^k, B_k) from distribution functions.
Theorem 26 Let F be a distribution function on ℝ^k and for a hyper-rectangle A, set µ(A) = ∆_A F. Then µ has a unique extension to a measure on B_k.
Example 27 The Lebesgue measure on (ℝ^k, B_k) is the measure corresponding to the distribution function

F(x) = ∏_{i=1}^k x_i,  x ∈ ℝ^k.

It can be seen that F is right-continuous (as it is continuous) and non-decreasing, since, for any hyper-rectangle A = (a_1, b_1] × ... × (a_k, b_k],

∆_A F = ∏_{i=1}^k (b_i − a_i).
In particular, the Lebesgue measure of any Borel set A (not just a hyper-rectangle) coincides with its volume.
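The signed vertex sum ∆_A F is straightforward to compute by brute force over the 2^k vertices. The Python sketch below (function names illustrative, outside the notes' formal development) checks that for F(x) = ∏ x_i it reproduces the volume:

```python
from itertools import product
from math import prod

def delta_F(F, a, b):
    """Delta_A F = sum over vertices v of sgn(v)*F(v) for A = (a_1,b_1] x ... x (a_k,b_k],
    where sgn(v) is +1 (resp. -1) if v has an even (resp. odd) number of a-coordinates."""
    k = len(a)
    total = 0.0
    for choice in product((0, 1), repeat=k):            # 0 picks a_i, 1 picks b_i
        v = [b[i] if c else a[i] for i, c in enumerate(choice)]
        n_a = k - sum(choice)                           # number of a-coordinates
        total += (-1) ** n_a * F(v)
    return total
```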
Supplement: Measures from Increasing Functions

Lemma 22 deals only with functions F that are cdf's. Suppose that F is an unbounded nondecreasing function that is continuous from the right. If −∞ < a < b < ∞, then the proof of Lemma 22 still applies. Suppose that (−∞, b] = ∪_{k=1}^∞ (c_k, d_k] with b < ∞ and all (c_k, d_k] disjoint. Suppose that lim_{x→−∞} F(x) = −∞. We want to show that ∑_{k=1}^∞ [F(d_k) − F(c_k)] = ∞. If one c_k = −∞, the proof is immediate, so assume that all c_k > −∞. Then there must be a subsequence {k_j}_{j=1}^∞ such that lim_{j→∞} c_{k_j} = −∞. For each j, let {(c′_{j,n}, d′_{j,n}]}_{n=1}^∞ be the subsequence of intervals that cover (c_{k_j}, b]. For each j, the proof of Lemma 22 applies to show that

F(b) − F(c_{k_j}) = ∑_{n=1}^∞ [F(d′_{j,n}) − F(c′_{j,n})].  (21)
(21) As j →∞, the left side of Equation 21 goes to ∞while the right side eventually includes every interval in the original collection.
A similar proof works for an interval of the form $(a, \infty)$ when $\lim_{x \to \infty} F(x) = \infty$. A combination of the two works for $(-\infty, \infty)$.
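The divergence in the supplement can be seen concretely for $F(x) = x$, covering $(-\infty, 0]$ by the disjoint intervals $(-k, -k+1]$ (this specific cover is our own example, not from the notes):

```python
# F(x) = x is nondecreasing, right-continuous, and unbounded below.
F = lambda x: x

# Disjoint cover of (-infty, 0]: (c_k, d_k] = (-k, -k+1] for k = 1, 2, ...
def partial_sum(K):
    """Sum of F(d_k) - F(c_k) over the first K intervals of the cover."""
    return sum(F(-k + 1) - F(-k) for k in range(1, K + 1))

# Each interval contributes F(-k+1) - F(-k) = 1, so the partial sums grow
# without bound, matching sum_k [F(d_k) - F(c_k)] = infinity above.
print([partial_sum(K) for K in (10, 100, 1000)])  # [10, 100, 1000]
```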
14 |
189574 | https://h2tools.org/file/4845/download?token=OQs3riCD | ÐÏࡱá����������������>��þÿ ���������������2����������þÿÿÿ����þÿÿÿ����1���ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ ���ÓÍÉÀ����á��°Á�������\�p���. B��°a���À��=������������������¯���¼���=��àx�Ä;à8����y@��������"�������·���Ú����1��È���ÿ�����cA�r�i�a�l�1��È���ÿ�����cA�r�i�a�l�1��È���ÿ�����cA�r�i�a�l�1��È���ÿ�����cA�r�i�a�l�1�� ���ÿ�����cA�r�i�a�l�1��È���ÿ���cA�r�i�a�l�1��È��ÿ¼����cA�r�i�a�l�1��È���ÿ����cA�r�i�a�l�1��ð��ÿ¼����cA�r�i�a�l�1��È�� �¼����cA�r�i�a�l�1��È��� �����cA�r�i�a�l�1��È�� �����cA�r�i�a�l�1��È��$�����cA�r�i�a�l�1��È���ÿ���cA�r�i�a�l�1��ð��ÿ����cA�r�i�a�l�1��È�� ����cA�r�i�a�l�����"$"#,##0_);("$"#,##0)!����"$"#,##0_);Red"����"$"#,##0.00_);("$"#,##0.00)'��"��"$"#,##0.00_);Red7��2��_("$" #,##0_);_("$" (#,##0);_("$" "-"_);_(@_).�)�)��_( #,##0_);_( (#,##0);_( "-"_);_(@_)?�,�:��_("$" #,##0.00_);_("$" (#,##0.00);_("$" "-"??_);_(@_)6�+�1��_( #,##0.00_);_( (#,##0.00);_( "-"??_);_(@_)$�¤���[$-409]dddd\,\ mmmm\ dd\,\ yyyy�¥���[$-409]mmmmm-yy;@�¦���[$-409]h:mm:ss\ AM/PM �§���#,##0.0�¨���0.0 �©���0.000�ª���"Yes";"Yes";"No"�«���"True";"True";"False"�¬���"On";"On";"Off"]��,�[�$�¬ -�2�]�\� �#�,�#�#�0�.�0�0�_�)�;�[�R�e�d�]�\�(�[�$�¬ -�2�]�\� �#�,�#�#�0�.�0�0�\�)� �®���0.0000 �¯���0.00000à������õÿ �����������À à�����õÿ ��ô��������À à�����õÿ ��ô��������À à�����õÿ ��ô��������À à�����õÿ ��ô��������À à������õÿ ��ô��������À à������õÿ ��ô��������À à������õÿ ��ô��������À à������õÿ ��ô��������À à������õÿ ��ô��������À à������õÿ ��ô��������À à������õÿ ��ô��������À à������õÿ ��ô��������À à������õÿ ��ô��������À à������õÿ ��ô��������À 
à������� �����������À à���+�õÿ ��ø��������À à���)�õÿ ��ø��������À à���,�õÿ ��ø��������À à����õÿ ��ø��������À à�� ���ôÿ���ô��������À à�� ���ôÿ���ô��������À à��� �õÿ ��ø��������À à�������!����������À à������ ����������À à������!����������À à������!����������À à������ ����������À à������ ��h��������À à������!��8@�@���À à������!��8���@���À à������!��x�������, à������ ��h�@�����À à������!��x��������À à������ ��h�������, à�� ����!��x�������, à������!��x�������, à������!��x �� ���, à������!��x �� ����À à������!��8 � @���À à������!����������À à������!��x�@����, à������ ��h�@����, à������!��X�������4 à������!��8�@�����À à������!��8��������À à������!��8 �� ����À à������!��x�@�����À à������!��x��������À à������!��x �� ����À à������)��8�@�����À à������)��8��������À à������)��8 �� ����À à�����Q!��8�@�����À à�����Q!��8��������À à�����Q!��8 �� ����À à�����Q)��8�@�����À à�����Q)��8��������À à�����Q)��8 �� ����À à�����Q)��8 @�� ��À à�����Q)��8� ��� ��À à�����Q)��8 � � ��À à���®��!��|�������, à���©��!��|�������, à�����!��|�������, à�����!��|�������, à�� ���� ��h�������, à���¯��!��|�������, à���¨��!��|�������, à���¨��!��| �� ���, à������ ��h�������, à���� �!��|��������À à�����!��| �� ����À à���1��!��|��������À à�� ���� ����������À à������!��8� ��� ��À �ÿ�ÿ�ÿ�ÿ� ÿ�ÿ���ÿ�ÿ���
�����Properties of Hydrogen
��|�����Comparative Properties����Á�Á��"¾�ü�îb���X�����Property��Units��Hydrogen��g/cm-sec��air = 1��vol%��Color ��Colorless��Odor��None��Critical temperature��Minimum ignition energy��Heat of combustion by mass��Boiling point (1 atm)��oC �����Melting/freezing point ��millijoules��kcal/kg ��lb/ft-sec ��1.4 - 7.6 ��100 - 105 ��450 - 900 ��230 - 480��< -423��< -253��27 - 225 ��Properties ��2.1 - 10.1 ��5.0 - 15.0 ��6.7 - 36.0��N/A ��4.0 - 75.0��kg/m3����lb/ft3����oC����oF����[a, b]��Molecular Weight��Normal Boiling Point ��Flash Point��Flammability Range in Air ��Auto Ignition Temperature in Air ��Hydrogen ��Methane ��Propane ��Methanol ��Ethanol ��Gasoline .�� Properties of a range of commercial grades��N/A - Not applicable$�� Properties of the pure substance��[b, d] ��[c, b, d]��4.3 - 19��0.0037 - 0.0044��80 - 437��H2����CH4����C3H8��������CH3OH������C2H5OH����������Chemical Formula��Sources: ��[3, a, c] ��[3, a, b] ��[3, a, d]>��[a] NIST Chemistry WebBook. "Alternatives to Traditional Transportation Fuels: An Overview." DOE/EIA-0585/O. Energy Information Administration. U.S. Department of Energy. Washington, DC. June 1994.º��[d] "Hydrogen Fuel Cell Engines and Related Technologies. Module 1: Hydrogen Properties." U.S. DOE. 
2001, (x = 4 - 12)�������� ��Density (NTP)��Viscosity (NTP)��Vapor Specific Gravity (NTP)(�� NTP = 20 oC (68 oF) and 1 atmosphere ��������J��[c] Perry's Chemical Engineers' Handbook (7th Edition), 1997, McGraw-Hill.P��Hydrogen Analysis Resource Center: Comparative Properties of Hydrogen and Fuels$� � ��Notes/Sources ��8.81 x 10-5 �� ��5.92 x 10-6 �� ��1.10 x 10-4 �� ��7.41 x 10-6 �� ��8.012 x 10-5 �� ��5.384 x 10-6 �� ��9.18 x 10-3 �� ��6.17 x 10-4 �� ��7.99 x 10-4 ����2.486 x 10-4 - 2.957 x 10-4 �� ����ÿ�Z�� �� ���à ��Z���z��ô���×��Q��2��¬��¦�� ��=��·��é��c��l��æ��q��ë��½��7��c�c����������������Ò ��� ���ÓÍÉÀ���� ������������U���� ��� ��d����������ü©ñÒMbP?\_���\����+�����������������%���ÿ���Á��������������M�¢��H�P� �L�a�s�e�r�J�e�t� �6�P�/�6�M�P� �P�o�s�t�S�c�r�i�p�t�������Ü�ÄSï����ê od���X��X��L�e�t�t�e�r�������������������������������������������������������������������������������������������������������PRIV0���������������������������������������������������������������������������������������'''��'����������Ä����������������������������������\K�\K����������������������������=&}���������ÿ������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������¡�"��d������XX������à?������à?�U���}� ����������}� ���¶����}� ���I������������������������ÿ�����������ÿ������������ÿ���������������������������������������������ÿ������������ÿ������������ÿ�������� ����ÿ�������� ����ÿ�������� ����ÿ�������� ����ÿ�������� ����ÿ������������ÿ������������ÿ�����������ÿ�����������ÿ�����������ÿ�����������ÿ�����������ÿ�������ý� ����������ý� ��������ý� 
��������ý� ��������ý� �������ý� ��������ý� ���� ���ý� ����� ���ý� �������~ ����ØÀý� ����� ���ý� �������~ ������nÀý� ��������ý� �������~ ����PÙÀý� ����� ���ý� �������~ ������@ý� ����� ���ý� �������~ ����ÿÛ@¾� ������¾� � �����¾� � �����¾� � �����¾� � �����¾� � �����¾� ������������������������������×�.�v��\���\�\�\�\�\�������� � � � � �>�¶�����@��������������������ï���7��� ��� ���ÓÍÉÀ���� ���������!���B/��¦:�� ;�� ��� ��d����������ü©ñÒMbP?\_���\����+�����������������%���ÿ���Á��������������M���\�\�P�N�L�P�S�3�\�P�R�T�1�1�8�7���������������������������������Ü�$Sÿ���ê o[���X��X��L�e�t�t�e�r�������������������������������������������������������������������������������������������������������PRIV0���������������������������������������������������������������������������������������'''��'����������Ä������������������������������� �\K�hC���������������������������b¥÷l��������������������������ÿ�ÿ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������T��RJPHAA������������A�3�J�4�4�8���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������
���������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������� ��BAPHAA�������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������
�����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������¡�"��[������XX������à?������à?�U���}� �����¶����}� ���Û ����}� ���� ����}� ���¶ ����}� ���m ����}� ���¶ ����}� ���Û����}� � ��$ �����������!����� �������� �J����@����� ������ ���� �ÿ���������� �����@����� �ÿ����������� ����������� ����������� ����������� �������� ��� �������� ��� �������� ��� �ÿ�������� ��� �������� ��� ����������� �ÿ����������� ����������� �,����@����� �ÿ�����@����� �ÿ�����@����� �ÿ�����@����� �ÿ�����@����� �,����@����� �����@����� �ÿ�����@����� �ÿ�����@����� �ÿ�����@����� �����@����� �ÿ�����@����� �����@ ���� �ÿ����������� �ÿ�������ý� �����+�L���¾�����+�+�+�+�+�+�+�+��¾�����K�K�K�K�K�K�K�K�K��ý� ��������ý� ����M���ý� �������ý� �������ý� ����+���ý� ����,���ý� ����-���ý� ����.���ý� ���'�/���ý� ����)�=���¾� ���#�$��ý� ����8���ý� ����9���ý� ����:���ý� ����;���ý� ����<���ý� ���%�F����� �!�ý� ���� �%���ý� ����$�������½�$���!�@i@!�@!�:±@!�©@!�ÿ±@�ý� ���&����ý� �����G���ý� ���"�?���ý� ���"� ���~ ���>�À @���?�ýe÷äaå?½����@�Pg@A��¸@��¨@%��x@�¾� �����B��ý� ���"�!������C�ô4ôiu?���>�ìÕ[[¥?���?�\ Añc̽?½����D�J³@D�>³@E�P²@�ý� ���� �H���ý� ����@���ý� �������ý� ���!�N���ý� ���!�P���ý� ���!�R���ý� ���!�T������!�tFö\_?ý� ���&�6���¾� ���� ���ý� �������ý� ���!�O���ý� ���!�Q���ý� ���!�S���ý� ���!�U���ý� ���!�V���ý� ���&�W���ý� � ���\�&���ý� � ��"�$���ý� � ��F�"���½�$� ��A�°ØÀA��0dÀ�r°À�� P@�� S@�ý� � ��%����¾� � ���\�B��ý� � ��F�#���½�$� ��A��pzÀA�CÙÀ�±ÀA�íÌ@D�ìÐ@�ý� � ��%�7���ý� � ��� �I���ý� � ���A���ý� � ������� ��!�¨5Í;Nѱ?½�� 
��!�ÀK@G�c@�ý� � ��!����ý� � ��!����� ��H�²ï§ÆK @ý� � ����'���ý� � ��"�3���ý� � ��F�"���ý� � ������½�$� ����gÀ���ZÀ���&@���@%��EÀ�¾� � ����"��ý� � ��F�#���ý� � ������½�$� ���� sÀ��cÀ���J@��K@%��FÀ�ý� ���� �(���ý� ����4���ý� �������ý� ���I����ý� ���!����ý� ���!����ý� ���!����ý� ���!�5���ý� ���&����ý� ����\�)���ý� ���"�3���ý� ���F�"���½�$�����H@��à@�� ~@��x@��pz@�ý� ���%����¾� ����\�B��ý� ���F�#���½�$�����ô@��X@��@��@��È@�ý� ���%����ý� ����,�E���¾����-�-�-�-�-�-�-�.��¾�����,�-�-�-�-�-�-�-�.��ý� ����,�2���¾����-�-�-�-�-�-�-�.��ý� ����/�0���¾����0�0�0�0�0�0�0�1��ý� ����,�J���¾����-�-�-�-�-�-�-�.��ý� ����,�1���¾����-�-�-�-�-�-�-�.��ý� ����,�>���¾����-�-�-�-�-�-�-�.��¾�����,�-�-�-�-�-�-�-�.��ý� ����5�B���¾����6�6�6�6�6�6�6�7��ý� ����8�C���¾����9�9�9�9�9�9�9�:��ý� ����2�K���¾����3�3�3�3�3�3�3�4��ý� ����;�D���¾����<�<�<�<�<�<�<�=��¾����(�(�(�(��¾� �������×�B�Ì ��X(��~�z�\�l�n��p��R���R�~��R�(��(�(�(�(�(��(�(�(�(��� ���ÿ�������� ��J�×�������>�¶����@�������������� ������ å�b� ���������������������������������������������������������������ï���7���¸ü�������ÐÉêyùºÎ�ª�K© ������?���[�a�]� �N�I�S�T� �C�h�e�m�i�s�t�r�y� �W�e�b�B�o�o�k�.� �h�t�t�p�:�/�/�w�e�b�b�o�o�k�.�n�i�s�t�.�g�o�v�/�c�h�e�m�i�s�t�r�y�/���àÉêyùºÎ�ª�K© F���h�t�t�p�:�/�/�w�e�b�b�o�o�k�.�n�i�s�t�.�g�o�v�/�c�h�e�m�i�s�t�r�y�/���¸P�����ÐÉêyùºÎ�ª�K© ������»���[�d�]� �"�H�y�d�r�o�g�e�n� �F�u�e�l� �C�e�l�l� �E�n�g�i�n�e�s� �a�n�d� �R�e�l�a�t�e�d� �T�e�c�h�n�o�l�o�g�i�e�s�.� �M�o�d�u�l�e� �1�:� �H�y�d�r�o�g�e�n� �P�r�o�p�e�r�t�i�e�s�.�"� �U�.�S�.� �D�O�E�.� �2�0�0�1�,� �h�t�t�p�:�/�/�w�w�w�.�e�e�r�e�.�e�n�e�r�g�y�.�g�o�v�/�h�y�d�r�o�g�e�n�a�n�d�f�u�e�l�c�e�l�l�s�/�t�e�c�h�_�v�a�l�i�d�a�t�i�o�n�/�p�d�f�s�/�f�c�m�0�1�r�0�.�p�d�f���àÉêyùºÎ�ª�K© ¢���h�t�t�p�:�/�/�w�w�w�.�e�e�r�e�.�e�n�e�r�g�y�.�g�o�v�/�h�y�d�r�o�g�e�n�a�n�d�f�u�e�l�c�e�l�l�s�/�t�e�c�h�_�v�a�l�i�d�a�t�i�o�n�/�p�d�f�s�/�f�c�m�0�1�r�0�.�p�d�f���¸ �����ÐÉêyùºÎ�ª�K© ������®���[�b�]� �"�A�l�t�e�r�n�a�t�i�v�e�s� �t�o� �T�r�a�d�i�t�i�o�n�a�l� 
�T�r�a�n�s�p�o�r�t�a�t�i�o�n� �F�u�e�l�s�:� �A�n� �O�v�e�r�v�i�e�w�.�"� �D�O�E�/�E�I�A�-�0�5�8�5�/�O�.� �E�n�e�r�g�y� �I�n�f�o�r�m�a�t�i�o�n� �A�d�m�i�n�i�s�t�r�a�t�i�o�n�.� �U�.�S�.� �D�e�p�a�r�t�m�e�n�t� �o�f� �E�n�e�r�g�y�.� �W�a�s�h�i�n�g�t�o�n�,� �D�C�.� �J�u�n�e� �1�9�9�4�.���àÉêyùºÎ�ª�K© x���h�t�t�p�:�/�/�t�o�n�t�o�.�e�i�a�.�d�o�e�.�g�o�v�/�F�T�P�R�O�O�T�/�a�l�t�e�r�n�a�t�i�v�e�f�u�e�l�s�/�0�5�8�5�o�.�p�d�f��� ����������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������þÿ����������������������à
òùOh«�+'³Ù0���´���������H������P������d������p��� ������ ������ ��� ������¬������ä����� ���Yunhua Zhu��������.���������Microsoft Excel�@��� GÁQÆ@����ȸ³Å@����ü:ÐYÆ��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������
��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������
�������������������������������������þÿ����������������������ÕÍÕ.�+,ù®D���ÕÍÕ.�+,ù®X���� ������P������X��������� ��������������� ������¨��� ���°��� ���î������ä�����(���Pacific Northwest National Laboratory������ � ������� ������� ������� ���������������Properties of Hydrogen����Comparative Properties� �������� ���Worksheets����������L��������� ������8������@��������� ���_PID_HLINKS����ä��A�����������>�1����������������������<���h�t�t�p�:�/�/�t�o�n�t�o�.�e�i�a�.�d�o�e�.�g�o�v�/�F�T�P�R�O�O�T�/�a�l�t�e�r�n�a�t�i�v�e�f�u�e�l�s�/�0�5�8�5�o�.�p�d�f���������������>�����������������������Q���h�t�t�p�:�/�/�w�w�w�.�e�e�r�e�.�e�n�e�r�g�y�.�g�o�v�/�h�y�d�r�o�g�e�n�a�n�d�f�u�e�l�c�e�l�l�s�/�t�e�c�h�_�v�a�l�i�d�a�t�i�o�n�/�p�d�f�s�/�f�c�m�0�1�r�0�.�p�d�f�����������������!�b�����������������������#���h�t�t�p�:�/�/�w�e�b�b�o�o�k�.�n�i�s�t�.�g�o�v�/�c�h�e�m�i�s�t�r�y�/�������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������
��������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������������
������������������������������������������������������������������� ��� ��� ��� ��� ��������������������������������������������������������� ���þÿÿÿ"���#���$���%���&���'���(���þÿÿÿ���+���,���-���.���/���0���þÿÿÿýÿÿÿþÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿR�o�o�t� �E�n�t�r�y����������������������������������������������ÿÿÿÿÿÿÿÿ��� �����À������F��������������������þÿÿÿ��������W�o�r�k�b�o�o�k��������������������������������������������������ÿÿÿÿÿÿÿÿÿÿÿÿ����������������������������������������A�������S�u�m�m�a�r�y�I�n�f�o�r�m�a�t�i�o�n���������������������������(�������ÿÿÿÿ������������������������������������!�����������D�o�c�u�m�e�n�t�S�u�m�m�a�r�y�I�n�f�o�r�m�a�t�i�o�n�����������8�ÿÿÿÿÿÿÿÿÿÿÿÿ������������������������������������)���������� |
189575 | https://pubmed.ncbi.nlm.nih.gov/33129378/ | Leprosy post-exposure prophylaxis with single-dose rifampicin (LPEP): an international feasibility programme - PubMed
Review
Lancet Glob Health
. 2021 Jan;9(1):e81-e90.
doi: 10.1016/S2214-109X(20)30396-X. Epub 2020 Oct 29.
Leprosy post-exposure prophylaxis with single-dose rifampicin (LPEP): an international feasibility programme
Jan Hendrik Richardus1,Anuj Tiwari2,Tanja Barth-Jaeggi3,Mohammad A Arif4,Nand Lal Banstola5,Rabindra Baskota6,David Blaney7,David J Blok2,Marc Bonenberger8,Teky Budiawan9,Arielle Cavaliero10,Zaahira Gani10,Helena Greter3,Eliane Ignotti11,Deusdedit V Kamara12,Christa Kasang13,Pratap R Manglani4,Liesbeth Mieras14,Blasdus F Njako15,Tiara Pakasi16,Basu Dev Pandey6,Paul Saunderson17,Rajbir Singh18,W Cairns S Smith19,René Stäheli8,Nayani D Suriyarachchi20,Aye Tin Maung21,Tin Shwe21,Jan van Berkel14,Wim H van Brakel14,Bart Vander Plaetse8,Marcos Virmond22,Millawage S D Wijesinghe23,Ann Aerts10,Peter Steinmann3
Affiliations
1 Erasmus MC, University Medical Center Rotterdam, Rotterdam, Netherlands. Electronic address: j.richardus@erasmusmc.nl.
2 Erasmus MC, University Medical Center Rotterdam, Rotterdam, Netherlands.
3 Swiss Tropical and Public Health Institute, Basel, Switzerland; University of Basel, Basel, Switzerland.
4 NLR, New Delhi, India.
5 NLR, Kathmandu, Nepal.
6 Ministry of Health and Population of Nepal, Kathmandu, Nepal.
7 Centers for Disease Control and Prevention, Atlanta, GA, USA.
8 FAIRMED, Bern, Switzerland.
9 NLR, Jakarta, Indonesia.
10 Novartis Foundation, Basel, Switzerland.
11 Universidade do Estado de Mato Grosso, Cáceres, Brazil.
12 National Tuberculosis and Leprosy Programme, Dodoma, Tanzania.
13 German Leprosy and Tuberculosis Relief Association, Würzburg, Germany.
14 NLR, Amsterdam, Netherlands.
15 German Leprosy and Tuberculosis Relief Association, Dar es Salaam, Tanzania.
16 Ministry of Health of the Republic of Indonesia, Jakarta, Indonesia.
17 American Leprosy Missions, Greenville, SC, USA.
18 German Leprosy and Tuberculosis Relief Association, New Delhi, India.
19 University of Aberdeen, Aberdeen, UK.
20 FAIRMED, Colombo, Sri Lanka.
21 American Leprosy Missions, Yangon, Myanmar.
22 Instituto Lauro de Souza Lima & UNINOVE, Bauru, Brazil.
23 Anti-Leprosy Campaign, Colombo, Sri Lanka.
PMID: 33129378
DOI: 10.1016/S2214-109X(20)30396-X
Abstract
Background: Innovative approaches are required for leprosy control to reduce cases and curb transmission of Mycobacterium leprae. Early case detection, contact screening, and chemoprophylaxis are the most promising tools. We aimed to generate evidence on the feasibility of integrating contact tracing and administration of single-dose rifampicin (SDR) into routine leprosy control activities.
Methods: The leprosy post-exposure prophylaxis (LPEP) programme was an international, multicentre feasibility study implemented within the leprosy control programmes of Brazil, India, Indonesia, Myanmar, Nepal, Sri Lanka, and Tanzania. LPEP explored the feasibility of combining three key interventions: systematically tracing contacts of individuals newly diagnosed with leprosy; screening the traced contacts for leprosy; and administering SDR to eligible contacts. Outcomes were assessed in terms of number of contacts traced, screened, and SDR administration rates.
Findings: Between Jan 1, 2015, and Aug 1, 2019, LPEP enrolled 9170 index patients and listed 179 769 contacts, of whom 174 782 (97·2%) were successfully traced and screened. Of those screened, 22 854 (13·1%) were excluded from SDR mainly because of health reasons and age. Among those excluded, 810 were confirmed as new patients (46 per 10 000 contacts screened). Among the eligible screened contacts, 1182 (0·7%) refused prophylactic treatment with SDR. Overall, SDR was administered to 151 928 (86·9%) screened contacts. No serious adverse events were reported.
Interpretation: Post-exposure prophylaxis with SDR is safe; can be integrated into different leprosy control programmes with minimal additional efforts once contact tracing has been established; and is generally well accepted by index patients, their contacts, and health-care workers. The programme has also invigorated local leprosy control through the availability of a prophylactic intervention; therefore, we recommend rolling out SDR in all settings where contact tracing and screening have been established.
Funding: Novartis Foundation.
Copyright © 2021 The Author(s). Published by Elsevier Ltd. This is an Open Access article under the CC BY 4.0 license. All rights reserved.
Comment in
Leprosy post-exposure prophylaxis: innovation and precision public health. Moraes MO, Düppre NC. Lancet Glob Health. 2021 Jan;9(1):e8-e9. doi: 10.1016/S2214-109X(20)30512-X. PMID: 33338461. No abstract available.
Leprosy post-exposure prophylaxis risks not adequately assessed. Lockwood DNJ, de Barros B, Negera E, Gonçalves H, Hay RJ, Kahawita IP, Singh RK, Kumar B, Lambert SM, Pai V, Penna GO, Prescott G, de Arquer GR, Talhari S, Srikantam A, Walker SL. Lancet Glob Health. 2021 Apr;9(4):e400-e401. doi: 10.1016/S2214-109X(21)00046-2. PMID: 33740405. No abstract available.
Leprosy post-exposure prophylaxis risks not adequately assessed - Author's reply. Richardus JH, Mieras L, Saunderson P, Ignotti E, Virmond M, Arif MA, Pandey BD, Cavaliero A, Steinmann P. Lancet Glob Health. 2021 Apr;9(4):e402-e403. doi: 10.1016/S2214-109X(21)00047-4. PMID: 33740406. No abstract available.
Similar articles
The Leprosy Post-Exposure Prophylaxis (LPEP) programme: update and interim analysis. Steinmann P, Cavaliero A, Aerts A, Anand S, Arif M, Ay SS, Aye TM, Barth-Jaeggi T, Banstola NL, Bhandari CM, Blaney D, Bonenberger M, VAN Brakel W, Cross H, DAS VK, Fahrudda A, Fernando N, Gani Z, Greter H, Ignotti E, Kamara D, Kasang C, Kömm B, Kumar A, Lay S, Mieras L, Mirza F, Mutayoba B, Njako B, Pakasi T, Saunderson P, Shengelia B, Smith CS, Stäheli R, Suriyarachchi N, Shwe T, Tiwari A, Wijesinghe MSD, VAN Berkel J, Vander Plaetse B, Virmond M, Richardus JH. Lepr Rev. 2018 2nd Quarter;89(2):102-116. doi: 10.47276/lr.89.2.102. PMID: 37180343. Free PMC article.
The long-term impact of the Leprosy Post-Exposure Prophylaxis (LPEP) program on leprosy incidence: A modelling study. Blok DJ, Steinmann P, Tiwari A, Barth-Jaeggi T, Arif MA, Banstola NL, Baskota R, Blaney D, Bonenberger M, Budiawan T, Cavaliero A, Gani Z, Greter H, Ignotti E, Kamara DV, Kasang C, Manglani PR, Mieras L, Njako BF, Pakasi T, Saha UR, Saunderson P, Smith WCS, Stäheli R, Suriyarachchi ND, Tin Maung A, Shwe T, van Berkel J, van Brakel WH, Vander Plaetse B, Virmond M, Wijesinghe MSD, Aerts A, Richardus JH. PLoS Negl Trop Dis. 2021 Mar 31;15(3):e0009279. doi: 10.1371/journal.pntd.0009279. eCollection 2021 Mar. PMID: 33788863. Free PMC article.
Leprosy Post-Exposure Prophylaxis (LPEP) programme: study protocol for evaluating the feasibility and impact on case detection rates of contact tracing and single dose rifampicin. Barth-Jaeggi T, Steinmann P, Mieras L, van Brakel W, Richardus JH, Tiwari A, Bratschi M, Cavaliero A, Vander Plaetse B, Mirza F, Aerts A; LPEP study group. BMJ Open. 2016 Nov 17;6(11):e013633. doi: 10.1136/bmjopen-2016-013633. PMID: 27856484. Free PMC article.
Role of contact tracing and prevention strategies in the interruption of leprosy transmission. Smith WC, Aerts A. Lepr Rev. 2014 Mar;85(1):2-17. PMID: 24974438.
Negligible risk of inducing resistance in Mycobacterium tuberculosis with single-dose rifampicin as post-exposure prophylaxis for leprosy. Mieras L, Anthony R, van Brakel W, Bratschi MW, van den Broek J, Cambau E, Cavaliero A, Kasang C, Perera G, Reichman L, Richardus JH, Saunderson P, Steinmann P, Yew WW. Infect Dis Poverty. 2016 Jun 8;5(1):46. doi: 10.1186/s40249-016-0140-y. PMID: 27268059. Free PMC article. Review.
Cited by
RLEP LAMP for the laboratory confirmation of leprosy: towards a point-of-care test. Saar M, Beissner M, Gültekin F, Maman I, Herbinger KH, Bretzel G. BMC Infect Dis. 2021 Nov 25;21(1):1186. doi: 10.1186/s12879-021-06882-2. PMID: 34823479. Free PMC article.
The role of CXCL10 as a biomarker for immunological response among patients with leprosy: a systematic literature review. Prakoeswa FRS, Haningtyas N, Dewi LM, Handoko EJ, Azenta MT, Ilyas MF. PeerJ. 2024 Apr 5;12:e17170. doi: 10.7717/peerj.17170. eCollection 2024. PMID: 38590701. Free PMC article.
Leprosy: treatment, prevention, immune response and gene function. Li X, Ma Y, Li G, Jin G, Xu L, Li Y, Wei P, Zhang L. Front Immunol. 2024 Feb 19;15:1298749. doi: 10.3389/fimmu.2024.1298749. eCollection 2024. PMID: 38440733. Free PMC article. Review.
Feasibility and accuracy of mobile QT interval monitoring strategies in bedaquiline-enhanced prophylactic leprosy treatment. Bergeman AT, Nourdine S, Piubello A, Salim Z, Braet SM, Baco A, Grillone SH, Snijders R, Hoof C, Tsoumanis A, van Loen H, Assoumani Y, Mzembaba A, Ortuño-Gutiérrez N, Hasker E, van der Werf C, de Jong BC. Clin Transl Sci. 2024 Aug;17(8):e13861. doi: 10.1111/cts.13861. PMID: 39075882. Free PMC article. Clinical Trial.
Less is more: Developing an approach for assessing clustering at the lower administrative boundaries that increases the yield of active screening for leprosy in Bihar, India. Ortuño-Gutiérrez N, Shih PW, Wagh A, Mugudalabetta S, Pandey B, de Jong BC, Richardus JH, Hasker E. PLoS Negl Trop Dis. 2022 Sep 12;16(9):e0010764. doi: 10.1371/journal.pntd.0010764. eCollection 2022 Sep. PMID: 36095018. Free PMC article.
Publication types
Research Support, Non-U.S. Gov't
Review
MeSH terms
Feasibility Studies
Humans
Leprostatic Agents / therapeutic use
Leprosy / prevention & control
Post-Exposure Prophylaxis / methods
Precision Medicine / methods
Public Health / methods
Rifampin / therapeutic use
Substances
Leprostatic Agents
Rifampin
Related information
MedGen
PubChem Compound (MeSH Keyword)
189576 | https://medium.com/@nawafalbalushi/the-rocket-equation-and-its-derivation-8f978b916daa
The Rocket Equation and its Derivation
The rocket equation is a popular scenario for learning about the conservation of linear momentum in introductory classical mechanics with interesting results.
Nawaf Al Balushi
6 min read · Oct 10, 2022
Prerequisites:
Knowledge of vectors and scalars
Basic kinematics, Newton’s laws, conservation of linear momentum
Fundamental knowledge of limits, derivatives, exponentials/logarithms
Integrals as inverses of derivatives and the difference between definite and indefinite integration
Basic techniques in integration and differential equations
Newton’s Law of Universal Gravitation (Only for problem 3)
The quadratic drag equation (Only for problem 3)
The goal of the derivation of the rocket equation is to be able to give the velocity of a rocket at any moment in its trajectory, purely as a function of the mass ejected from take-off or the time elapsed (for the force-less case, other forces may cause velocity to gain dependence on other factors). In addition, we can reflect on what could’ve been changed in the derivation in order to make the solution more precise or to apply it in a different setting.
To use the conservation of momentum, we must compare the initial and final total momentum; however, there is a minor complication in this. Mass is not ejected one-by-one in a rocket, instead it is fired continuously. Therefore, we approximate the continuous process by assuming that the exhaust gases are released with a mass loss of -Δm (Δm is negative) at a constant speed of u (relative to the rocket) for every passage of Δt. The discrete process can be turned continuous by taking the limit as Δt goes to zero of the equation while keeping the ratio Δm/Δt the same.
Slight but important nuances must be realized in the last term in equation 2. Firstly, since the constant exhaust velocity is measured relative to the rocket, a velocity addition must be applied as the frame of reference being considered is the Earth (or wherever you are measuring the rocket’s velocity from). Secondly, u is the constant speed of the exhaust gases which travel downwards, which is represented by the minus sign in front of the u.
By Newton’s 2nd Law, we claim that dP/dt = Fₑₓₜ . We can find the derivative of momentum from our existing equations by using the limit definition of the derivative.
The first fraction in the limit becomes mass times acceleration since Δv goes to zero as Δt does. The same logic applies with the third fraction to give the derivative of the rocket’s mass with respect to time. However, in the second fraction, there are two terms in the numerator which go to zero, allowing us to write the term as a product of a term that goes to zero times a finite derivative, making the whole fraction go to zero. After all this, we obtain the following differential equation.
In order to solve this equation, we must first find the net force on the rocket or determine whether it is a constant or not, as we will have to integrate with respect to time to solve for v or m. For instance, if the rocket is in an empty vacuum and far away from any planets, the net force would be zero. If the rocket is stationed on the surface of the Earth, then air resistance and gravity will be acting against the rocket’s motion.
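The equation of motion described above can be stated compactly. Since the article's original equation images are not reproduced here, the block below gives the standard textbook form (with the convention that u > 0 is the exhaust speed relative to the rocket and dm/dt < 0):

```latex
% Variable-mass form of Newton's second law for a rocket.
% Convention: u > 0 is the exhaust speed relative to the rocket, dm/dt < 0.
m \frac{dv}{dt} = F_{\mathrm{ext}} - u \frac{dm}{dt}
```

With Fₑₓₜ = 0 the right-hand side reduces to the thrust term −u(dm/dt), which is positive because the mass is decreasing.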
We will first solve the equation for the case in which the net force is zero:
Using the chain rule, we can see that the equation loses its time-dependence.
Then, using the method of separation of variables and definite integration, using the initial conditions for the limits of integration, we can find the velocity as a function of the rocket’s mass.
The prime symbol is to make the differences between the dummy integration variables and the final velocity v and final mass m clear.
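Stated explicitly (again in the standard textbook form, since the original equation images are not shown here), the result of this integration is the classical Tsiolkovsky rocket equation:

```latex
v(m) = v_0 + u \ln\frac{m_0}{m}
```

Notably, the velocity gain depends only on the exhaust speed and the mass ratio m₀/m, not on how quickly the fuel is burned.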
Now for the case with gravity acting on the rocket.
For simplicity, we will assume that the rocket experiences Earth’s gravity as a multiple of its mass mg. However, this is only an approximation which is valid when the distance traveled by the rocket, in the duration in which the thrust is engaged, is too small to have a significant change on the gravitational force experienced by the rocket. This derivation will be omitted as the process is very similar to the Fₑₓₜ = 0 case.
If we assume that dm/dt is constant, we can write the velocity as a function of time only by introducing an additional parameter: γ =-dm/dt.
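In the standard form (the original equation images are not reproduced here), the constant mass-loss rate makes both the mass and the velocity explicit functions of time:

```latex
m(t) = m_0 - \gamma t, \qquad
v(t) = v_0 + u \ln\frac{m_0}{m_0 - \gamma t} - g t
```

Differentiating recovers m dv/dt = −u(dm/dt) − mg, i.e. the governing equation with Fₑₓₜ = −mg.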
Now with the differential equation solved, we can explore some interesting properties of the rocket equation. For instance, we can figure out a rocket’s maximum speed based on the total mass of the rocket and its fuel, the mass of the rocket without the fuel, and the exhaust speed.
Using the differential equation in terms of Fₑₓₜ, we could also find the velocity for a number of different forces acting on the rocket. However, the complexity of the differential equation can become overwhelming very quickly, which may necessitate the use of approximative methods.
An interesting application of this equation is predicting the height that a rocket could reach. This may be used as a very crude estimation for fuel needed to reach an orbit around Earth. It is crude because it neglects air resistance and the variation of the gravitational field strength with distance from the Earth. Another application of the differential equation is to calculate the acceleration experienced by the rocket, which could be useful in determining whether the rocket — and its potential inhabitants — will be able to withstand it.
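To make the fuel-estimation idea concrete, here is a small Python sketch (my own illustration — the function name and all numerical parameters are made up, not from the article) that evaluates the constant-gravity burnout velocity v = u·ln(m₀/m_dry) − g·t_burn:

```python
import math

def burnout_velocity(u, m0, m_dry, gamma, g=9.81):
    """Velocity at burnout for constant gravity: v = u*ln(m0/m_dry) - g*t_burn.

    u      -- exhaust speed relative to the rocket (m/s)
    m0     -- initial mass, rocket plus fuel (kg)
    m_dry  -- mass of the rocket without fuel (kg)
    gamma  -- constant mass-loss rate, -dm/dt (kg/s)
    """
    t_burn = (m0 - m_dry) / gamma          # time needed to exhaust the fuel
    return u * math.log(m0 / m_dry) - g * t_burn

# Example (made-up numbers): u = 2500 m/s, a 10 t rocket carrying 8 t of fuel,
# burned at 100 kg/s.
v = burnout_velocity(u=2500.0, m0=10_000.0, m_dry=2_000.0, gamma=100.0)
```

For these example numbers the burnout velocity comes out around 3.2 km/s; setting g = 0 reduces the same function to the force-free (Tsiolkovsky) result.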
If you’d like to, try these problems below that apply the equations in different ways:
Derive equation 8 or 9 using F = ma, where m is the rocket’s mass and a is the rocket’s acceleration, and thus determine the thrust of the rocket (the reaction force that accelerates the rocket upwards) . (Difficulty: Easy)
Using equation 16, determine the displacement of the rocket at time t above the surface of the Earth given that the rocket was initially positioned on the surface of the Earth. (Difficulty: Easy)
Using equation 6, assuming constant γ =-dm/dt , derive the differential equation that describes the motion of a rocket, including drag (F = -kv²). Can this differential equation be solved by separation of variables, integrating factor method, or by a substitution? (Difficulty: Moderately difficult)
Using equation 16, determine the set of conditions on u and γ such that the maximum acceleration experienced by the rocket does not exceed ng, where n is a positive integer and g = 9.81 m s⁻², in terms of the total time where the thrust is engaged and n. (Difficulty: Moderate)
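For problem 3, the drag term generally rules out a neat closed-form solution, so a numerical method is the natural tool. Below is a minimal forward-Euler sketch in Python (my own illustration; the function name, time step, and parameter values are all assumptions, not from the article) of the upward burn phase governed by m dv/dt = uγ − mg − kv|v|:

```python
import math

def simulate_burn(u, m0, m_dry, gamma, k, g=9.81, dt=1e-3):
    """Forward-Euler integration of m dv/dt = u*gamma - m*g - k*v*|v|
    (upward positive) from launch until the fuel is exhausted.
    Returns the burnout velocity."""
    m, v, t = m0, 0.0, 0.0
    t_burn = (m0 - m_dry) / gamma          # duration of the burn
    while t < t_burn:
        a = (u * gamma - m * g - k * v * abs(v)) / m
        v += a * dt
        m -= gamma * dt
        t += dt
    return v

v_drag = simulate_burn(u=2500.0, m0=10_000.0, m_dry=2_000.0,
                       gamma=100.0, k=0.5)
# Drag-free analytic burnout velocity for the same parameters, for comparison:
v_free = 2500.0 * math.log(10_000.0 / 2_000.0) - 9.81 * 80.0
```

As expected, the simulated velocity with drag comes out well below the drag-free analytic value, and setting k = 0 recovers that value to within the discretization error.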
Thank you very much for reading my first article! Please let me know what you think and follow me if you enjoyed it or found it informative. There may be a follow up that explores solving the equation with air resistance, which will be analyzed using numerical and approximative methods.
Physics
Applied Mathematics
Space Travel
Engineering
Calculus
## Written by Nawaf Al Balushi
Physics student at Imperial College London.
189577 | https://brainly.com/question/39473441 | [FREE] Explain the formation of the bond between ammonia and boron trichloride. - brainly.com
Chemistry
Textbook & Expert-Verified
Explain the formation of the bond between ammonia and boron trichloride.
Asked by Nanamoney5773 • 10/06/2023
Community Answer
A bond forms between ammonia and boron trichloride through a Lewis acid-base interaction. The lone pair of electrons on the nitrogen atom of ammonia is shared with the boron atom of boron trichloride, forming a dative bond and resulting in a stable adduct with the formula BCl₃·NH₃.
The formation of a bond between ammonia and boron trichloride involves a Lewis acid-base interaction. In ammonia, the nitrogen atom is at the apex of a trigonal pyramid, with three hydrogen atoms forming the base. It also has a lone pair of electrons. On the other hand, the boron atom in boron trichloride is very reactive as it lacks a full octet of electrons and is in a trigonal planar arrangement.
When the ammonia molecule (a Lewis base) comes into contact with boron trichloride (a Lewis acid), the lone pair of electrons on the nitrogen atom of ammonia is shared with the boron atom, resulting in the formation of a dative bond. This Lewis acid-base interaction leads to a stable adduct with the formula BCl₃·NH₃.
Learn more about Lewis Acid-Base Interaction here:
brainly.com/question/37767200
Answered by DonnieLarson

Textbook & Expert-Verified Answer
The bond between ammonia and boron trichloride forms through a Lewis acid-base interaction, where ammonia donates a lone pair of electrons to boron, resulting in a dative bond. This leads to the formation of a stable adduct, BCl₃·NH₃. Essentially, nitrogen shares its electrons with the electron-deficient boron, stabilizing both molecules.
Explanation
The bond between ammonia (NH₃) and boron trichloride (BCl₃) forms through a Lewis acid-base interaction. Here’s a step-by-step explanation of how this bond is formed:
Lewis Acids and Bases: Ammonia acts as a Lewis base because it has a lone pair of electrons on the nitrogen atom. Boron trichloride is a Lewis acid because boron has an incomplete octet, making it electron-deficient and reactive.
Molecular Shapes: The nitrogen atom in ammonia has a trigonal pyramidal geometry due to the three hydrogen atoms and the lone pair on nitrogen. Conversely, boron trichloride has a trigonal planar structure, with boron at the center and three chlorine atoms around it.
Electron Pair Donation: When ammonia approaches boron trichloride, the lone pair of electrons on nitrogen can be shared with boron. This donation of the lone pair creates a new bond known as a dative bond or coordinate covalent bond because both of the shared electrons come from the nitrogen atom alone.
Formation of the Adduct: As a result of this interaction, a stable complex forms, which is referred to as an adduct, with the formula BCl₃·NH₃. This compound stabilizes both the Lewis acid and the Lewis base through the coordinate bond.
Bond Representation: In chemical diagrams, this coordinate bond can be represented with an arrow pointing from the nitrogen atom to the boron atom, indicating the direction of electron donation.
In summary, the bond forms because of the sharing of the lone pair of electrons from nitrogen to boron, resulting in a stable Lewis acid-base adduct.
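The steps above can be condensed into a single reaction scheme (a standard textbook representation, not taken verbatim from the answer), where the arrow between N and B marks the coordinate bond:

```latex
% Lewis acid-base adduct formation; N donates its lone pair to B.
\mathrm{:NH_3} \; + \; \mathrm{BCl_3} \;\longrightarrow\; \mathrm{H_3N{\rightarrow}BCl_3}
```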
Examples & Evidence
An example is the bond formation between ammonia and boron trifluoride (BF₃), which also illustrates the concept of coordinate covalent bonds, as BF₃ accepts a lone pair from NH₃.
This explanation is based on well-established principles of Lewis acid-base theory and molecular geometry, found in standard chemistry textbooks and resources on chemical bonding.
189578 | https://math.stackexchange.com/questions/1706475/extremum-of-the-cyclic-sum-of-polynomial-ratios-same-degree
Extremum of the cyclic sum of polynomial ratios (same degree)

Asked Mar 20, 2016 · Modified 9 years, 6 months ago · Viewed 144 times
I've noticed a few times (probably nothing new) that cyclic sums (assuming $x, y, z > 0$) like:
$\frac{x^2+y^2}{yz} + \frac{y^2+z^2}{zx} + \frac{z^2+x^2}{xy}$,
where in each of the 3 ratios, all terms in both the numerator and denominator polynomials have the same degree, reach an extremum when $x=y=z$.
It's interesting and I want to work on proving as generalized a version of this result as possible ($n$ variables, allowing negative signs in polynomials, etc.). The current generalized version I have in mind is as follows:
Let $x_i > 0$, with $i \in {1, 2, \ldots, n}$. Suppose $f_1(x_i)$ and $f_2(x_i)$ are polynomials of the same degree $m$ and with a finite number of terms, such that each term in each polynomial is of the form $c_k\prod_{i=1}^n x_i^{y_i}$, where $y_i \in \mathbb{N}$ for all $i \in {1, 2, \ldots, n}$ and $\sum_{i=1}^n y_i = m$. Then
$\sum_{p}\frac{f_1(x_i)}{f_2(x_i)}$ attains an extremum when $x_1 = x_2 = \ldots = x_n$, where the summation is over all the cyclic permutations of ${1, 2, \ldots, n}$.
Before that, I'd appreciate clarification on what exactly would be the prerequisites and what all I would need to learn in order to tackle a problem like this. Also, I'd be grateful if someone can tell me what work has been done or what results already exist on this sort of a problem.
Thanks in advance!
inequality · reference-request · polynomials · optimization

asked Mar 20, 2016 at 23:23 by user9343456; edited Mar 21, 2016 at 1:00
1 Answer
Using the AM-GM inequality \begin{align} \frac{x^2 + y^2}{yz} + \frac{y^2 + z^2}{zx} + \frac{z^2 + x^2}{xy} &= \frac{x^3 + xy^2 + y^3 + yz^2 + z^3 + zx^2}{xyz}\\ &\geq \frac{6\sqrt[6]{x^6 y^6 z^6}}{xyz} = \frac{6xyz}{xyz} = 6 \end{align} This means the sum has a minimum value of 6. Equality occurs in AM-GM when all terms of the arithmetic mean are equal, that is, when \begin{equation} x^3 = xy^2 = y^3 = yz^2 = z^3 = zx^2 \end{equation} Since $x^3 = zx^2$ it follows that $x = z$, and since $y^3 = xy^2$ it follows that $y = x$. Therefore the minimum occurs when $x = y = z$. Note that this argument relies on the condition $x, y, z > 0$; without it, other extrema are possible.
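As a quick numerical sanity check of this bound (my own sketch, not part of the original answer), one can evaluate the cyclic sum at the equality point and on random positive triples:

```python
import random

def cyclic_sum(x, y, z):
    """The cyclic sum (x^2+y^2)/(yz) + (y^2+z^2)/(zx) + (z^2+x^2)/(xy)."""
    return ((x**2 + y**2) / (y * z)
            + (y**2 + z**2) / (z * x)
            + (z**2 + x**2) / (x * y))

# Equality at x = y = z (any common value, since the sum is scale-invariant).
print(cyclic_sum(1.0, 1.0, 1.0))   # prints 6.0

# The bound holds on random positive triples.
random.seed(0)
samples = [cyclic_sum(random.uniform(0.1, 10),
                      random.uniform(0.1, 10),
                      random.uniform(0.1, 10)) for _ in range(1000)]
```

Every sampled value stays at or above 6, consistent with the AM-GM argument.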
answered Mar 20, 2016 at 23:38 by Jeevan Devaranjan
3
$\begingroup$ Thanks! But I'm not looking for a proof of the example I included in the question. The example was only to illustrate a result whose generalization I want to work on. The question is basically a reference request and to ask others what existing research, if any, exists regarding such a generalized conjecture. $\endgroup$
user9343456
– user9343456
2016-03-20 23:44:36 +00:00
Commented Mar 20, 2016 at 23:44
@ShirishKulhari In order to answer that question, you need to define the general form of this symmetric sum, as many symmetric sums may take this form in a specific case. – Jeevan Devaranjan, Mar 20, 2016 at 23:54
I edited the question to include a tentative idea of what the generalized form of the problem would look like. – user9343456, Mar 21, 2016 at 0:34
|
189579 | https://www.youtube.com/playlist?list=PLSQl0a2vh4HBsB7fGzaMzqT9Gr0SXThXi |
Interest and debt | Finance and Capital Markets | Khan Academy
16 videos • 44,135 views • Last updated on Jul 2, 2018
A playlist by Khan Academy (16 videos):

1. Present Value 3 | Interest and debt | Finance & Capital Markets | Khan Academy (7:42, 372K views)
2. Present Value 2 | Interest and debt | Finance & Capital Markets | Khan Academy (10:12, 513K views)
3. Present Value 4 (and discounted cash flow) | Finance & Capital Markets | Khan Academy (10:03, 395K views)
4. Time value of money | Interest and debt | Finance & Capital Markets | Khan Academy (8:17, 895K views)
5. Payday Loans | Interest and debt | Finance & Capital Markets | Khan Academy (10:26, 223K views)
6. e and compound interest | Interest and debt | Finance & Capital Markets | Khan Academy (11:38, 383K views)
7. Introduction to interest | Interest and debt | Finance & Capital Markets | Khan Academy (9:56, 1.7M views)
8. Personal bankruptcy - Chapters 7 and 13 | Finance & Capital Markets | Khan Academy (13:04, 118K views)
9. Institutional roles in issuing and processing credit cards | Khan Academy (11:41, 190K views)
10. Compound interest introduction | Interest and debt | Finance & Capital Markets | Khan Academy (6:38, 1.2M views)
11. Annual Percentage Rate (APR) and effective APR | Finance & Capital Markets | Khan Academy (7:12, 542K views)
12. Introduction to present value | Interest and debt | Finance & Capital Markets | Khan Academy (10:20, 1.1M views)
13. The rule of 72 for compound interest | Interest and debt | Finance & Capital Markets | Khan Academy (9:11)
14. (8:59)
15. (5:39)
16. (8:02)
|
189580 | https://www3.nd.edu/~apilking/Calculus2ResourcesOriginal/Calculus2ResourcesOriginal/Lecture%202/Slides/3.%20Limits_Derivatives_and_Integrals.pdf | Limits involving ln(x)

We can use the rules of logarithms given above to derive the following information about limits:
$$\lim_{x\to\infty} \ln x = \infty, \qquad \lim_{x\to 0^+} \ln x = -\infty.$$
▶ We saw last time that $\ln 2 > 1/2$.
▶ Using the rules of logarithms, we see that $\ln 2^m = m \ln 2 > m/2$ for any positive integer $m$.
▶ Because $\ln x$ is an increasing function, we can make $\ln x$ as large as we choose by choosing $x$ large enough, and thus we have $\lim_{x\to\infty} \ln x = \infty$.
▶ Similarly, $\ln \frac{1}{2^n} = -n \ln 2 < -n/2$, so as $x$ approaches $0$ from the right, the values of $\ln x$ approach $-\infty$.

Example: Find the limit $\lim_{x\to\infty} \ln\!\left(\frac{1}{x^2+1}\right)$.
▶ As $x \to \infty$, we have $\frac{1}{x^2+1} \to 0^+$.
▶ Letting $u = \frac{1}{x^2+1}$, we have $\lim_{x\to\infty} \ln\!\left(\frac{1}{x^2+1}\right) = \lim_{u\to 0^+} \ln u = -\infty.$
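These limit claims can be illustrated numerically; the short Python sketch below is an addition for illustration, not part of the original slides. It checks the inequality $\ln 2^m > m/2$ and the behavior of $\ln\frac{1}{x^2+1}$ for growing $x$:

```python
import math

# ln(2^m) = m ln 2 > m/2, so ln x eventually exceeds any fixed bound
for m in range(1, 50):
    assert abs(math.log(2**m) - m * math.log(2)) < 1e-9
    assert m * math.log(2) > m / 2

# As x -> infinity, u = 1/(x^2 + 1) -> 0+ and ln u -> -infinity:
# ln(1/(x^2 + 1)) = -ln(x^2 + 1) < -ln(x^2) = -2 ln x
for x in (10.0, 100.0, 1000.0):
    u = 1.0 / (x * x + 1.0)
    assert math.log(u) < -2.0 * math.log(x)
```

The second loop shows $\ln u$ dropping below $-2\ln x$, consistent with the substitution $u = \frac{1}{x^2+1}$ used in the example.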
Extending the antiderivative of 1/x

We can extend our antiderivative of $1/x$ (the natural logarithm function) to a function with a larger domain by composing $\ln x$ with the absolute value function $|x|$:
$$\ln |x| = \begin{cases} \ln x & x > 0 \\ \ln(-x) & x < 0 \end{cases}$$
This is an even function, and $\ln|x|$ is also an antiderivative of $1/x$, with a larger domain than $\ln x$:
$$\frac{d}{dx}\bigl(\ln |x|\bigr) = \frac{1}{x} \quad\text{and}\quad \int \frac{1}{x}\, dx = \ln |x| + C.$$

Using the Chain Rule for Differentiation
$$\frac{d}{dx}\bigl(\ln |x|\bigr) = \frac{1}{x} \quad\text{and}\quad \frac{d}{dx}\bigl(\ln |g(x)|\bigr) = \frac{g'(x)}{g(x)}$$
▶ Example 1: Differentiate $\ln |\sin x|$.
▶ Using the chain rule, we have $\frac{d}{dx} \ln |\sin x| = \frac{1}{\sin x}\cdot\frac{d}{dx} \sin x = \frac{\cos x}{\sin x}.$
▶ Example 2: Differentiate $\ln \bigl|\sqrt[3]{x-1}\bigr|$.
▶ Since $\ln \bigl|\sqrt[3]{x-1}\bigr| = \ln |x-1|^{1/3} = \frac{1}{3}\ln|x-1|$, we can reduce this to finding $\frac{d}{dx}\left(\frac{1}{3}\ln|x-1|\right) = \frac{1}{3}\cdot\frac{1}{x-1}\cdot\frac{d}{dx}(x-1) = \frac{1}{3(x-1)}.$

Using Substitution

Reversing our rules of differentiation above, we get:
$$\int \frac{1}{x}\, dx = \ln |x| + C \quad\text{and}\quad \int \frac{g'(x)}{g(x)}\, dx = \ln |g(x)| + C.$$
▶ Example: Find the integral $\int \frac{x}{3-x^2}\, dx$.
▶ Using substitution, we let $u = 3 - x^2$, so that $du = -2x\, dx$ and $x\, dx = -\frac{du}{2}$.
▶ $\int \frac{x}{3-x^2}\, dx = \int \frac{1}{u}\left(-\frac{du}{2}\right) = -\frac{1}{2}\ln|u| + C = -\frac{1}{2}\ln\bigl|3-x^2\bigr| + C.$
|
189581 | https://www.khanacademy.org/math/8th-grade-illustrative-math/unit-7-exponents-and-scientific-notation/lesson-8-combining-bases/v/exponent-properties-with-products-indian-accent | Exponent properties with products (video) | Khan Academy
Course: 8th grade Math (Illustrative Math-aligned) > Unit 7: Exponents and scientific notation > Lesson 4: Combining bases
Exponent properties with products

About this video
To simplify expressions with exponents, there are a few properties that may help. One is that when two numbers with the same base are multiplied, the exponents can be added. Another is that when a number with an exponent is raised to another exponent, the exponents can be multiplied. Created by Skyloom (Dubbing).
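The two properties described in the video can be spot-checked numerically; this small Python sketch is an illustrative addition, not part of the lesson:

```python
# Product of powers with the same base: add the exponents
assert 2**3 * 2**4 == 2**(3 + 4)      # 8 * 16 = 128

# Power of a power: multiply the exponents
assert (5**2)**3 == 5**(2 * 3)        # 25^3 = 5^6 = 15625

# The rules hold for any base, not just the ones above
base = 7
assert base**2 * base**5 == base**(2 + 5)
```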
Creative Commons Attribution/Non-Commercial/Share-Alike. Video on YouTube.
|
189582 | https://www.ncbi.nlm.nih.gov/books/NBK578201/ | Oral Lichen Planus - StatPearls - NCBI Bookshelf
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-.
Oral Lichen Planus
Grace Raj (National University of Singapore); Mary Raj (NUS).
Last Update: February 6, 2023.
Continuing Education Activity
Oral lichen planus (OLP) is a chronic T-cell mediated inflammatory disease that affects the oral mucosa. It is characterized by periods of symptomatic exacerbation and remission, and treatment targets reducing inflammation and providing symptomatic relief. This activity reviews the evaluation and management of oral lichen planus and explains the role of the interprofessional team in evaluating and treating patients with this condition.
Objectives:
Summarize the pathophysiology of oral lichen planus.
Describe the clinical and histologic diagnostic criteria for oral lichen planus.
Review the management strategies and key patient education points for oral lichen planus.
Explain the importance and clinical relevance of long-term clinical follow-up.
Introduction
Lichen planus is a chronic inflammatory disease that affects the skin, hair follicles, nails, and mucosa. Mucosal surfaces affected include the oral, genital, ocular, otic, and esophageal surfaces and, in rarer instances, the bladder, nasal, laryngeal, and anal surfaces. The skin and oral mucosa are the major sites affected.
The oral variant, termed oral lichen planus (OLP), is a chronic condition with periods of relapses and remissions, requiring long-term symptomatic treatment and surveillance monitoring. About 15% of patients with oral lichen planus develop cutaneous lesions, and 20% develop genital lesions.
Cutaneous involvement is usually self-limiting and characterized by violaceous, pruritic papules with overlying reticular white striae, known as Wickham striae. Lesions most commonly appear on the trunk or extremities, such as the wrists and ankles.
Genital involvement in females demonstrates erythema, erosion, white reticulated plaques, agglutination, resorption of the labia, or scarring. Males may demonstrate annular, papulosquamous lesions on the glans penis. Associated symptoms may include dysuria and dyspareunia.
More than a quarter of OLP patients also display esophageal involvement, with symptomatic patients complaining of dysphagia and odynophagia. An endoscopic examination may reveal friable mucosa, white plaques, erythema, ulceration, erosions, and stricture formation.
Etiology
Oral lichen planus is a known T-cell mediated chronic inflammatory response affecting the oral mucosa (see Image. Oral Lichen Planus). Evidence suggests that other factors, such as trauma, dental plaque, and stress, may play a role in exacerbating OLP symptoms.
Studies have also demonstrated that in certain geographic locations such as Japan, the Mediterranean region, and the United States metropolitan population, there is a strong association between Hepatitis C virus infection and OLP.
Epidemiology
The estimated global prevalence of oral lichen planus is about 2%. It is twice as common in women and is often diagnosed between the fifth and sixth decades of life, although it may also occur in children and young adults.
Pathophysiology
While the pathophysiology of oral lichen planus is not entirely understood, the two main proposed mechanisms are antigen-specific and non-specific mechanisms.
The antigen-specific mechanism suggests that antigen presentation by Langerhans cells or basal keratinocytes leads to the activation of CD4+ helper T cells, stimulating the release of pro-inflammatory T-helper 1 (Th1) cytokines such as tumor necrosis factor-alpha (TNF-α) and interferon-gamma (IFN-γ). This induces a CD8+ T cell-mediated cytotoxic reaction against the epidermal basal cell layer, resulting in keratinocyte apoptosis via TNF-α, granzyme, or Fas-Fas ligand-mediated mechanisms.
The non-specific mechanism suggests that the activation of mast cells releases pro-inflammatory mediators such as proteases and the upregulation of matrix metalloproteinases. This results in T cell infiltration of the superficial lamina propria, disruption of the basement membrane, and eventual keratinocyte apoptosis.
The chronic nature of OLP has been postulated to be due to the activation of nuclear factor kappa B (NF-κB) and the inhibition of the transforming growth factor control pathway (TGF-β/Smad), leading to hyperkeratosis and the appearance of distinct white lesions. Genetic polymorphisms of the first intron of the promoter gene of IFN-γ have also been postulated to be risk factors for developing OLP.
Histopathology
The first diagnostic histopathologic criteria for OLP were proposed by the WHO Collaborating Centre for Oral Precancerous Lesions in 1978, which noted that OLP lesions might display the following microscopic features: orthokeratosis or parakeratosis, saw-tooth rete ridges, Civatte bodies, a narrow band of eosinophilic material in the basement membrane, a band-like zone of predominately lymphocytic infiltrate in the superficial lamina propria, and liquefaction degeneration in the basal cell layer.
It was subsequently highlighted that many of these features are not specific to OLP. As such, a recent position paper by the American Academy of Oral and Maxillofacial Pathology (AAOMP) proposed a new set of diagnostic histopathologic criteria, attempting to exclude several of the lichenoid mimics and to improve the accuracy of diagnosis. The proposed criteria now include the following: a band-like zone of predominately lymphocytic infiltrate at the epithelium-lamina propria interface, liquefactive degeneration in the basal cell layer, lymphocytic exocytosis, and the absence of epithelial dysplasia and verrucous epithelial architectural changes.
When examined under direct immunofluorescence, samples from patients with OLP show fibrinogen deposition in a shaggy pattern along the basement membrane zone, without immunoglobulins and complements.
Since histologic findings are not specific to OLP, a detailed history taking with clinical correlation is necessary to arrive at a conclusive diagnosis.
History and Physical
Approximately two-thirds of patients with oral lichen planus are symptomatic, presenting with alternating periods of exacerbation and quiescence. The intensity of symptoms varies, but patients typically report sensitivity to spicy or acidic foods, painful oral mucosa, mucosal surface roughness, and tightness of the mucosa.
There are six clinical subtypes of OLP: reticular, papular, plaque, atrophic, erosive, and bullous. These may be present either individually or in combination with each other, and the classification is usually based on the predominant subtype manifested.
The most common presentations are the reticular, erosive, and plaque subtypes. Reticular OLP has the most characteristic manifestation and displays a white lacy network (Wickham striae) with hyperkeratotic plaques (see Image. Lichen Planus Present With Wickham's Striae on Buccal Mucosa). Erosive or atrophic OLP typically presents with erythema and ulcerations, often associated with pain and sensitivity; the periphery of the lesions may also display reticular keratotic striae. The papular or plaque subtypes appear as white keratotic papules or plaques that may resemble leukoplakia. Patients with a darker skin tone may also exhibit post-inflammatory pigmentation, with diffuse brown or black pigmentation observed together with the lesions.
OLP typically has a bilateral distribution and most commonly appears on the buccal mucosa, tongue, and gingiva, followed by the labial mucosa and the lower lip. The Koebner phenomenon, where lesions develop at sites of mechanical trauma (e.g., friction from rough dental restorations or teeth, or lip/cheek biting), may explain why the lesions appear more commonly on the buccal mucosa and tongue, which are more prone to trauma. Approximately 10% of OLP patients present with lesions confined to the gingiva, appearing as desquamative gingivitis.
Evaluation
An accurate diagnosis of OLP must be based on thorough history taking, clinical examination, and histopathologic findings. In patients presenting with the characteristic reticular form of OLP, the clinical presentation alone can be sufficiently diagnostic. However, an oral biopsy is still advisable to confirm the clinical diagnosis and to rule out evidence of dysplasia and malignancy.
For patients who present with desquamative gingivitis, direct immunofluorescence (DIF) may be necessary to rule out autoimmune vesiculobullous diseases that share similar clinical features.
Treatment / Management
There is no cure for oral lichen planus. The primary aim of treatment is to reduce inflammation and provide symptomatic relief. Topical corticosteroids are the first-line treatment for OLP and can be applied as an adhesive gel or used as a mouth rinse. The use of topical agents is preferred, as they are effective and are associated with fewer side effects compared to systemic agents. While triamcinolone acetonide gel is commonly used in treating patients with OLP, higher-potency corticosteroids such as clobetasol propionate are more effective in providing symptomatic relief.
Patients applying the topical gel are instructed to dry the oral mucosa before application and to avoid eating and drinking for 30 minutes afterward to provide sufficient contact time with the oral mucosa. Mouth rinses such as dexamethasone are particularly beneficial for patients who demonstrate widespread oral lesions or when the lesions are not readily accessible for topical gel application.
Intralesional corticosteroid injections may also be administered to treat persistent erosive OLP. One of the most common side effects of topical corticosteroids is oropharyngeal candidiasis; clinicians may consider adjunctive therapy with topical or systemic antimycotics if necessary.
Systemic corticosteroids are primarily used to treat recalcitrant OLP that is resistant to topical therapies, severe OLP with widespread ulceration and erythema, and lichen planus with extra-oral involvement affecting multiple sites. The dosage and frequency of use can be reduced as the lesions heal or symptoms improve.
Second-line therapies can be considered to manage recalcitrant OLP that does not respond to topical corticosteroids. These include calcineurin inhibitors (e.g., cyclosporine, tacrolimus), retinoids, steroid-sparing agents (e.g., azathioprine, hydroxychloroquine), and mycophenolate mofetil. The most commonly reported side effect of retinoids and cyclosporine is a transient burning sensation, which may limit their use in patients with erosive OLP.
Newer calcineurin inhibitors such as tacrolimus should be used with caution, given the Food and Drug Administration (FDA) black box warning regarding a reported increased risk of squamous cell carcinoma and lymphoma. The use of retinoids is also cautioned in pregnant women, as they are potentially teratogenic. Steroid-sparing agents such as azathioprine and hydroxychloroquine should also be prescribed carefully due to the risks of bone marrow aplasia and retinal damage, respectively.
Differential Diagnosis
Several conditions resemble oral lichen planus both clinically and histologically. These include oral lichenoid drug reactions, oral lichenoid contact hypersensitivity reactions, mucous membrane pemphigoid, chronic graft-versus-host disease, lupus erythematosus, lichen planus pemphigoides, chronic ulcerative stomatitis, proliferative verrucous leukoplakia, and oral epithelial dysplasia.
The most common medications implicated in oral lichenoid drug reactions include nonsteroidal anti-inflammatory drugs (NSAIDs), antihypertensives, anticonvulsants, antimalarials, and antiretrovirals. Others include oral hypoglycemics, dapsone, gold salts, penicillamine, and phenothiazines.
Lichenoid drug reactions may not necessarily occur immediately, with some lesions developing even years after the introduction of the medication. Diagnosing a lichenoid drug reaction is difficult, and thorough clinical history taking may aid in identifying the offending drug. While usually not warranted, discussion with the medication prescriber can be considered to check the feasibility of substituting the medication and monitoring for resolution of the lesions, which may take months.
Oral lichenoid contact hypersensitivity reactions are typically seen on mucosal surfaces in direct contact with the offending agent. These agents may include dental restorative materials (e.g., metals, composites, and glass ionomer cement) or flavoring agents (e.g., cinnamon, menthol, eugenol). Amalgam restorations are most commonly implicated in these reactions. The lesions usually resolve within several months following replacement of the dental restoration or discontinuation of the flavoring agent.
Prognosis
Lifestyle modifications and medication can provide symptomatic relief and improve patients' quality of life. Long-term clinical surveillance will aid in the early detection of malignancy to ensure a favorable long-term prognosis.
Complications
The malignant potential of oral lichen planus remains a highly debated topic in the literature. The reported rate of malignant transformation to squamous cell carcinoma ranges from 0 to 12.5%. This wide disparity is due to the marked variations in the inclusion and exclusion criteria of the studies included. The majority of malignancies were detected in patients with atrophic or erosive OLP.
A recent meta-analysis examining a total of 12,838 OLP patients found that, after applying strict inclusion and exclusion criteria, the risk of malignant transformation was 0.44%, significantly lower than in most reported studies. Patients who smoke, consume alcohol, are seropositive for hepatitis C, or have a red OLP subtype have a higher risk of malignancy.
While the reported malignant transformation rates may vary in the literature, all studies unanimously concur that routine long-term follow-up is necessary for managing a patient’s symptoms and ruling out evidence of malignancy.
Deterrence and Patient Education
Patients diagnosed with OLP should always be informed that treatment is not curative but aimed at providing symptomatic relief. Lifestyle modification to avoid triggers such as acidic or spicy food helps alleviate symptoms, as do removing potential exacerbating factors (such as smoothing sharp teeth or dental restorations) and educating patients to maintain optimal oral hygiene and manage stress levels.
Highlighting the possibility of malignant transformation is necessary to emphasize the importance of long-term clinical follow-up. Regular patient self-monitoring may also be beneficial to observe for any suspicious changes such as persistent oral ulcerations or growths.
Enhancing Healthcare Team Outcomes
Oral lichen planus can pose a diagnostic dilemma, as patients may exhibit non-specific clinical and histologic features that overlap with several other conditions. Thorough history taking and clinical and histopathological examination are necessary to arrive at an accurate diagnosis. A dentist or oral surgeon is usually involved in the care of patients with OLP. For patients who present with extra-oral involvement, an interprofessional team including dermatologists, gynecologists, gastroenterologists, otorhinolaryngologists, or ophthalmologists may be involved, depending on the sites affected. Pharmacists can also help with drug recommendations, check for drug-drug interactions, verify dosing, and provide patients with medication counseling.
The use of pharmacotherapy, patient education, and lifestyle modification forms the cornerstone of managing OLP and ensuring favorable patient outcomes. Future studies may provide more clarity on the pathogenesis and clinical progression of this condition. (Level I)
Figure. Lichen Planus Present With Wickham's Striae on Buccal Mucosa. Contributed by H Olmo, DDS, MS.
Figure. Oral Lichen Planus. Contributed by S Munakomi, MD, and S Shah, MD.
Disclosure: Grace Raj declares no relevant financial relationships with ineligible companies.
Disclosure: Mary Raj declares no relevant financial relationships with ineligible companies.
Copyright © 2025, StatPearls Publishing LLC.
This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International license (CC BY-NC-ND 4.0), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.
Bookshelf ID: NBK578201 PMID: 35201729
Cite this page: Raj G, Raj M. Oral Lichen Planus. [Updated 2023 Feb 6]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-.
William & Mary Law Review
Volume 49 (2007-2008), Issue 4: Constitution Drafting in Post-conflict States Symposium, Article 13
3-1-2008

Post-conflict Rule of Law Building: The Need for a Multi-Layered, Synergistic Approach
Jane Stromseth

Repository Citation: Jane Stromseth, Post-conflict Rule of Law Building: The Need for a Multi-Layered, Synergistic Approach, 49 Wm. & Mary L. Rev. 1443 (2008).

Copyright © 2008 by the authors. This article is brought to you by the William & Mary Law School Scholarship Repository.
POST-CONFLICT RULE OF LAW BUILDING: THE NEED FOR A MULTI-LAYERED, SYNERGISTIC APPROACH

JANE STROMSETH*

INTRODUCTION

In recent years, considerable blood, sweat, and treasure have been devoted to building the rule of law in the wake of armed conflicts and military interventions in many parts of the world. From Afghanistan to Iraq, Kosovo to East Timor, and Sierra Leone to Haiti and elsewhere, international interveners and local leaders have struggled to address both security and humanitarian challenges in societies seeking to overcome a legacy of violent conflict. Increasingly, international and domestic reformers have come to appreciate that long-term solutions to security and humanitarian problems depend crucially on building and strengthening the rule of law: fostering effective, inclusive, and transparent indigenous governance structures; creating fair and independent judicial systems and responsible security forces; reforming and updating legal codes; and creating a widely shared public commitment to human rights and to using the new or reformed civic structures rather than relying on violence or self-help to resolve problems.[1]

Building the rule of law is no simple matter, although triumphal interventionist rhetoric occasionally implies that it is. The idea of the rule of law is often used as a handy shorthand way to describe the extremely complex bundle of cultural commitments and institutional structures that support peace, human rights, democracy, and prosperity. On the institutional level, the rule of law involves courts, legislatures, statutes, executive agencies, elections, a strong educational system, a free press, and independent nongovernmental organizations (NGOs) such as bar associations, civic associations, political parties, and the like. On the cultural level, the rule of law requires human beings who are willing to give their labor and their loyalty to these institutions, eschewing self-help solutions and violence in favor of democratic and civil participation.[2] Especially in societies in which state institutions and the law itself have been deeply discredited by repressive or ineffectual governments, persuading people to buy into rule of law ideals is difficult.[3] Building public trust in newly developing legal institutions can take years, and mutually reinforcing reforms may be needed in multiple areas, from constitutional and justice system reform to initiatives designed to strengthen access to justice and increase public confidence in the very idea of the rule of law.

Both institutionally and culturally, building the rule of law also requires extensive human and financial resources, careful policy coordination between numerous international actors and national players, and at the same time an ability to respond quickly, creatively, and sensitively to unpredictable developments on the ground.[4]

For the last few years, my colleagues David Wippman, Rosa Brooks, and I conducted research in many societies, including Sierra Leone, Iraq, Kosovo, Bosnia, East Timor, and elsewhere, with the aim of understanding the many challenges of strengthening the rule of law in the aftermath of conflict. We interviewed national officials and civil society leaders, UN officials and staff, rule of law experts from different governments, and practitioners from many different NGOs and organizations, and we looked carefully at the unique circumstances and challenges in particular diverse, post-conflict societies.

Our resulting book, Can Might Make Rights? Building the Rule of Law After Military Interventions, is designed to pull together in one volume the disparate bits of knowledge gained in the past decade with the goal of understanding why past international efforts to strengthen the rule of law after conflict have so often

* Professor of Law and Director, Human Rights Institute, Georgetown University Law Center. Portions of this Article were previously published in JANE STROMSETH, DAVID WIPPMAN & ROSA BROOKS, CAN MIGHT MAKE RIGHTS? BUILDING THE RULE OF LAW AFTER MILITARY INTERVENTIONS (2006).
1. JANE STROMSETH, DAVID WIPPMAN & ROSA BROOKS, CAN MIGHT MAKE RIGHTS? BUILDING THE RULE OF LAW AFTER MILITARY INTERVENTIONS 3 (2006).
2. Id. at 4.
3. Id. at 5.
4. Id.
fallen short, and to offer concrete suggestions for what might be done better in the future.[5] We deliberately aimed to write a practical book that would assist policymakers and field workers, offer enough theoretical, legal, and historical background to enable readers to contextualize and understand the basic dilemmas inherent in interventions designed to build the rule of law, and help to evaluate unique as well as recurring challenges in different post-conflict societies.[6] Our aim throughout was to examine the rule of law holistically, to make the whole elephant visible (not just the trunk or the tail) and to explore how the pieces fit together: from blueprints and constitutional frameworks for post-conflict governance, to security and justice system reform, to accountability for human rights atrocities, to initiatives aimed at building public and cultural support for the rule of law.

Recent experience holds both good news and bad. The bad news is that the track record of interveners in building the rule of law after conflict has not been very impressive.[7] This is in part because post-conflict societies, with legacies of insecurity, discredited political institutions, damaged infrastructure, limited resources, and public distrust, are usually not ideal environments for promoting the rule of law.[8] But to some degree, the poor track record of rule of law promotion efforts is due to the failure of interveners to appreciate the complexities of the project of creating the rule of law. The good news is that we are finally beginning to have a sense of "best practices," an increasingly nuanced understanding of what works and what does not in post-conflict settings.[9] For instance:

• We now have a clear sense of the critical importance of immediately reestablishing basic security in the wake of military interventions, which in turn requires that the international community plan in advance for the rapid deployment of civilian police in the post-conflict period.[10]

5. Id. at 10.
6. Id.
7. See id. at 9.
8. See id.
9. Id.
10. Id.
• Similarly, we now understand that effectively reestablishing security means far more than simply ensuring that looting and violent crime are kept in check: it also involves ensuring that basic daily needs are met and that people have adequate food, water, shelter, medical care, and so on.[11]

• After more than a decade of well-intentioned but flawed interventions, we now understand the importance of addressing the various aspects of post-conflict reconstruction in a coordinated way, rather than allowing security, economic issues, civil society, and governmental issues all to be dealt with by separate offices operating on more or less separate tracks.[12]

• Perhaps most importantly, we have learned from failures in the past that there is no "one size fits all" template for rebuilding the rule of law in post-conflict settings: to be successful, programs to rebuild the rule of law must respect and respond to the unique cultural characteristics and needs of each post-intervention society.[13] Moreover, the rule of law cannot be imposed from on high; to enjoy legitimacy, it must be built with the support and leadership of the local population.[14]

This Article will highlight some of the major themes and conclusions of our book. First, I will discuss our diagnosis of why past efforts to strengthen the rule of law in the wake of conflict and intervention have not been as effective as any of us would like.[15] Second, I will sketch out the positive framework we offer: an integrated, synergistic approach to building the rule of law that keeps a clear focus on ultimate goals, is adaptive and seeks to build upon solid cultural foundations, and is systemic in stressing the need for mutually reinforcing reforms in multiple areas.[16] Third, I will discuss some of the key obstacles that need to be tackled in

11. Id.
12. Id.
13. Id. at 9-10.
14. See id. at 195.
15. See infra Part I.
16. See infra Part II.
order to achieve more effective results on the ground.[17] Above all, I will stress the importance of a multi-layered approach to building the rule of law, an approach that focuses not only on strengthening institutions but also on building cultural and political support for the rule of law. Indeed, without a widely shared commitment to the idea of the rule of law, courts are just buildings, judges are just bureaucrats, and constitutions are just pieces of paper.[18]

I. WHY HAVE PAST POST-CONFLICT RULE OF LAW EFFORTS TOO OFTEN FALLEN SHORT?

Despite millions of dollars and considerable human effort and sacrifice over recent years, initiatives to strengthen the rule of law in many post-conflict societies have often been disappointing in terms of their results and impact.[19] There are many reasons for this, including reasons specific to particular post-conflict countries, such as the extent to which domestic leaders are committed to rule of law reforms and are viewed by the public as effective and legitimate. But let me highlight three overarching reasons that have tended to recur across different societies.

First is simply the inherent difficulty of the endeavor. In societies that have been wracked by violent conflict, building the rule of law, understood broadly, is incredibly hard. Particularly in post-conflict societies where political institutions are discredited, legal institutions are distrusted to the extent they exist at all, and infrastructures are devastated, positive change generally will require enormous commitments of time, energy, and resources. Spoilers may contend for power and resources and local leaders may oppose reforms. Overcoming public distrust and building institutions worthy of confidence can take many years.[20]

To complicate matters further, efforts of external interveners to help build the rule of law are fraught with paradoxes and difficulties. Indeed, a paradox is inherent in the very project of trying to build the rule of law in the aftermath of a military intervention: a core idea of the rule of law is that reason is better than force as a means of resolving disputes, yet by definition, interventions resolve problems through force.[21] There is no way around this problem, but interveners should at least acknowledge the paradox inherent in trying to pull the rule of law from the barrel of a gun and recognize the ways in which the very fact of the intervention itself may undermine their claims about the value of the rule of law.[22] The intervention's perceived legality and legitimacy, for example, can have significant effects on subsequent efforts to build the rule of law. Interventions increasingly involve a long-term process of transformation; their legitimacy will constantly be reassessed by relevant actors as circumstances evolve on the ground, but a strong consensus about the intervention's legality and legitimacy at the outset can increase the prospects for ongoing cooperation from both local and international actors.[23] Recent experience in Iraq and elsewhere, for example, has shown that governments are far more likely to participate in an intervention, and contribute to post-conflict reconstruction, if they view the underlying intervention itself as legitimate.[24] An intervention's perceived legitimacy will also be influenced by the concrete objectives interveners pursue once they are deployed on the ground.[25] Here again international legal norms are relevant. Whatever factors trigger states to intervene in the first place, they increasingly face international pressure to help build institutions that advance self-determination and protect basic international human rights, even as they also seek to respect the unique culture of the people whose future is directly at stake.[26] In short, if interveners want to be successful at a practical level in building the rule of law domestically, they will need to take seriously international legal norms that can have a significant impact on whether other states, as well as the local population, view their intervention as legitimate and worth supporting.

17. See infra Part III.
18. STROMSETH ET AL., supra note 1, at 76.
19. See id. at 65-68 (discussing challenges in societies as diverse as Kosovo, Haiti, East Timor, Afghanistan, and Iraq). However, despite problems, rule of law programs have helped people in many societies. See id. at 388.
20. See id. at 311.
21. See id. at 312-14.
22. Id. at 325.
23. Id. at 19.
24. See id. at 18-19.
25. See id. at 19.
26. Id.
Another set of challenges includes the difficult trade-offs that accompany efforts to build the rule of law. Building the rule of law is a holistic process, and it is almost inevitably marked by internal contradictions.[27] For example:

• Short-term interests may genuinely conflict with long-term interests (for instance, collaboration with local warlords or militias may be useful in establishing security in the short term but may dangerously empower "spoilers" in the long term).[28]

• Fostering "local ownership" and respecting local cultural norms may conflict with efficiency interests or international standards.[29]

• Satisfying minority political participation interests may conflict with satisfying majorities,[30] and constitutional blueprints designed to bring an end to armed conflict may defer rather than resolve difficult tradeoffs.[31]

• Promoting the rule of law is not politically neutral, although interveners often like to imagine that it is. In practice, the decisions interveners make necessarily empower some local actors at the expense of others. This incites opposition (sometimes violent), which can in turn force interveners to respond with coercion, which then generates more opposition.[32]

Building the rule of law requires a constant balancing act. As a result, movement toward the rule of law often is not linear, but back and forth. Interveners must constantly make choices among

27. Id. at 12.
28. Id.
29. Id.
30. Id.
31. See id. at 88-89 (citing Donald Horowitz, Constitutional Design: Proposal Versus Processes, in THE ARCHITECTURE OF DEMOCRACY: CONSTITUTIONAL DESIGN, CONFLICT MANAGEMENT AND DEMOCRACY 1, 15-16 (A. Reynolds ed., 2002)). Blueprints emerge from a process of political bargaining and compromise; what emerges is not an ideal, coherent design, but a cobbled-together mix of sometimes conflicting and ambiguous provisions focused principally on meeting the short-term interests of key international and local actors. The better that interveners understand the risks of different blueprint options, the easier it will be to avoid the pitfalls associated with each, and to resist seemingly attractive short-term options that have disastrous long-term consequences. STROMSETH ET AL., supra note 1, at 132.
32. See id. at 12.
problematic alternatives. But interveners, precisely because they are interveners (and so do not fully understand local culture, interests, or institutions), are often not well positioned to make such choices and may not fully understand the likely consequences.[33] Although progress will rarely be linear, reformers nonetheless need to strive to keep the momentum going in the direction of the rule of law.[34] Once the balance tips too much in the other direction, it can be exceptionally difficult to recover.[35] This does not mean that building the rule of law is a fool's errand. It does mean that it is far more difficult than is generally understood. The evidence suggests, however, that reformers can achieve moderate success if they take these complexities into account and plan accordingly.[36]

A second reason why past rule of law efforts have fallen short is the tendency of interveners to focus on formal institutions, the means, with insufficient appreciation for the underlying complexities of building the rule of law. Especially important is the need to focus on the perceptions of ordinary citizens and to strengthen cultural commitments to the very idea of the rule of law.[37] Theorists disagree about whether the rule of law is primarily formal in nature (a matter of institutions and procedures), or fundamentally substantive as well (a matter of rights and justice).[38] In practice, when policymakers talk about the rule of law as a goal, they usually have in mind a mix of formalistic and substantive outcomes. That is, they are looking for an outcome in which new or reformed legal and judicial institutions are created, respected, and actually used by members of the society in a way that furthers core international human rights norms.[39]

But rule of law programs usually focus almost exclusively on institutions and legal codes as a means of achieving such an outcome, assuming that a cultural commitment to the rule of law and the values that underlie it will follow naturally from reformed

33. Id.
34. Id. at 391.
35. Id.
36. Id. at 13.
37. See id. at 179.
38. See id. at 13.
39. See id. at 69.
[Vol. 49:1443 1450 POST-CONFLICT RULE OF LAW BUILDING institutions and codes. This assumption is flawed. Although insti-tutions and codes are an important part of the picture, for the rule of law to exist, people must also believe in the value and efficacy of legal institutions as a means of resolving disputes. A belief in the value of the rule of law does not follow inevitably from the creation of formal legal structures. 40Getting people to believe in the rule of law is very complex. 41 It not only requires creating appropriate background conditions (fostering perceptions that the intervention is legitimate, establish-ing security, creating viable governance blueprints, moving forward on justice system reform, and so forth), but it also involves grap-pling with complex issues of how cultures change and creating rule of law programs that foster cultural change. In order to build the rule of law, interveners must understand how complex an edifice it really is, as it includes both formal and substantive aspects. We found that although many organizations and individuals have an intuitive sense of what the rule of law entails, too often interven-ers pay insufficient attention to its complexities-especially to the need for cultural commitments to the very idea of the rule of law-and to whether reforms are accessible and responsive to acountry's people and their needs. 42 The importance of cultural context is illustrated by a story, perhaps apocryphal, that we tell early in the book and is worth recounting here. During the nineteenth and early twentieth centuries, some Middle Eastern governments were anxious to improve the lot of nomadic tribespeople, who roamed from place to place, living in tents, and rarely having reliable access to clean water, health care, or schools. 
The governments built new houses for the nomads in various towns and gave them to the tribes for free, confidently expecting that the nomads would immediately transform themselves into ordinary townspeople. The nomads promptly quartered their camels in the fine shelters and then lived themselves in their old tents outside the houses, to the government's great consternation. The houses soon deteriorated, as they were not designed with the needs of camels in mind, and before long, most of the nomads
40. See id. at 310-11.
41. See id. at 311. 42. See id. at 56-57.
2008] 1451 WILLIAM AND MARY LAW REVIEW
returned to their wanderings. The nomads, it turned out, did not particularly want to live in one place.43
Well-intentioned efforts by interveners to build the rule of law in post-conflict societies by creating formal structures and rewriting constitutions can run into similar problems.44 Rule of law reformers may assume that "if you build it, they will come" applies to courts as much as to baseball fields. But courts and constitutions do not occupy the same place in every culture, and external efforts focusing on formal institutional reform can often appear irrelevant to the concerns and practices of ordinary people, social groups, tribes, and other segments of the population in different post-conflict societies.45 Furthermore, law cannot-nor should it-be imposed from on high by external interveners; we emphasize the need to understand the unique culture, history, and political terrain in each country and to work with local leaders to build upon existing cultural foundations. The intervener's understanding (or lack of understanding) of the unique culture and aspirations of the local population will have a profound impact.46
We offer a pragmatic definition of the rule of law that seeks to capture what most policymakers are aiming for in promoting the rule of law: The "rule of law" describes a state of affairs in which the state successfully monopolizes the means of violence, and in which most people, most of the time, choose to resolve disputes in a manner consistent with procedurally fair, neutral, and universally applicable rules, and in a manner that respects fundamental human rights norms (such as prohibitions on racial, ethnic, religious and gender discrimination, torture, slavery, prolonged arbitrary detentions, and extrajudicial killings).
In the context of today's globally interconnected world, this requires modern and effective legal institutions and codes, and it also requires a widely shared cultural and political commitment to the values underlying these institutions and codes.47
43. Id. at 76-77. 44. See id. at 77.
45. Id.
46. See id. at 195.
47. Id. at 78.
Our pragmatic definition highlights that the rule of law is a matter of cultural acceptance as well as institutions and legal codes; it focuses on both substantive aspects (protection of basic human rights) as well as processes and institutions; and it recognizes that much conflict resolution takes place in the "shadow of the law" and depends on the public's confidence and belief in the very idea of law.48
A third and related problem with many rule of law programs is their segmentation. Different agencies and organizations understandably focus on their areas of specialization and expertise, such as improving police, or courts, or prisons, or reforming legal codes. But too often these efforts fail to pay adequate attention to how different institutions and reforms relate to one another or to the larger political system in which they are embedded.49 Also, rule of law projects are often primarily determined by the practical constraints of external interveners: their need to show measurable outputs (number of courthouses built, computers installed, cops put out on the beat, judges trained), with a hope that these steps will add up to something we would recognize as the rule of law.50
It is, of course, understandable that interveners (including government agencies, NGOs, and so forth) focus on building needed institutions-including courts, police, and prisons. The problem is not an institutional focus per se, but rather an overly narrow and insular concentration on institutions with insufficient attention to the interrelations between them or to the larger cultural and political context in which they function. A narrow and segmented approach to institutions can lead to piecemeal reforms of little enduring impact,51 and to a continuing deficit in public support.52 It also can reinforce a tendency for predatory politics-that is, if institutions are built without careful attention to the accountability of newly empowered actors and to their role in the larger political system, interveners may simply
48. See id. at 78-80.
49. See id. at 179. 50. See id.
51. See id.
52. See id.
provide new tools for self-interested power holders to pursue their own agendas rather than effectively building the rule of law.53
The intervention in Haiti during the mid-1990s, for example, ran up against many of these problems. Although enormous initial progress was made in vetting and training the Haitian National Police during the United Nations mission following Jean-Bertrand Aristide's return to power, other parts of the justice system did not receive the same degree of assistance or pressure for reform.54 As a result, corrupt judges could be bribed to release suspects, and bad governance generally undermined the larger political system in which police operated.55 Large numbers of pre-trial detainees languished for months or years in squalid prisons without legal process.56 Little was done, moreover, to address the widespread suspicion among ordinary people that law is a vehicle of control and repression rather than of justice.57 And the initial police reforms were ultimately undercut most profoundly by the failure to build a more accountable political system in Haiti more generally.58 Similar kinds of problems have occurred in other post-conflict societies as well.
II. A POSITIVE, SYNERGISTIC APPROACH TO BUILDING THE RULE OF LAW
Recognizing the enormous challenges of building the rule of law in the wake of conflict and intervention, we offer a positive framework that acknowledges these complexities but seeks to move in a positive direction.59 We begin by stressing that interveners need to be much more self-conscious about the inherent paradoxes and complexities of building the rule of law after intervention. Interveners, as mentioned earlier, need to acknowledge the paradox of trying to build the rule of law from the barrel of a gun, and they need to recognize that the
53. See id.
54. Id. at 179-80. 55. Id. at 180. 56. See id. at 215. 57. Id. at 180. 58. See generally id. at 180, 214-18. 59. See id. at 388.
perceived legitimacy of the intervention-and of their own conduct on the ground-will profoundly influence the credibility of their efforts to argue that law matters. Poorly thought-through rule of law programs can be self-undermining and actually do damage to the project of building the rule of law.60
We argue that interveners, like physicians, should therefore do their best to first "do no harm."61 To avoid inadvertently undermining their own efforts, interveners should:
• Be transparent. When interveners seek to create consultative processes, but then ignore advice they don't like, it undermines their credibility. Acknowledging that certain policies and principles are nonnegotiable, at least in the short term, may make interveners unpopular-but hardly more unpopular than they are when they feign a willingness to allow local societies to make their own choices but then veto those choices.62
• Be accountable. When interveners insist that legal codes in a post-conflict society must reflect international human rights standards, but interveners themselves seem unable or unwilling to comply with those same standards, their credibility inevitably suffers. Similarly, when anyone affiliated with an intervening power behaves inappropriately or commits crimes, interveners need to ensure that investigation, trial, and punishment are prompt, fair, and public.63
• Be better educated about the language and culture of the local society.64 Lack of cultural understanding can severely undermine efforts to build the rule of law, whereas cultural sensitivity can go a long way to building trust and confidence.65
• Plan carefully. It is easier to prevent damage than to undo it, and poor planning often leads only to a need for embarrassing about-faces further down the road.66
60. See id. at 390.
61. Id. at 315. 62. Id. at 326.
63. Id.
64. Id.
65. Id.
66. Id. at 327.
• Act multilaterally and collaboratively. Given the paradoxes inherent in trying to create the rule of law in the wake of military interventions, interveners need to gain as much legitimacy as possible. Ensuring that no one state or religion or ethnicity is seen as behind all rule of law programs can help diminish resentment and skepticism about the motives of interveners.67
Next, we advocate a multi-layered approach to building the rule of law that explicitly recognizes the need for multiple, mutually reinforcing reforms that are carefully attuned to strengthening cultural and political as well as institutional foundations for the rule of law. We call this the synergistic approach.68
The synergistic approach to building the rule of law is ends-based and strategic.69 In other words, reformers need to be clear about the ultimate rule of law goals they are striving to achieve rather than focusing simply on building up institutions and formal structures.70 Working toward fundamental goals such as establishing basic law and order, a government bound by law, equality before the law, and protection of basic human rights, among others, will require a variety of cross-cutting programs and initiatives.71 Reformers will need to recognize that tensions may sometimes arise between some of these goals and that it may not be possible to advance each goal equally at the same pace, particularly in post-conflict settings with limited resources and fragile stability.72 But keeping such overarching goals firmly in mind-rather than assuming that they will naturally emerge by building institutional structures-will help interveners and local leaders focus on developing a range of interrelated capabilities and the programs and initiatives that are most likely to promote and sustain those capabilities.73
67. Id.
68. See id. at 80-84.
69. See id. at 81. 70. See id.
71. Id.
72. See id. at 81, 181-83. For a thoughtful defense of an ends-based approach to rule of law building, see Rachel Kleinfeld, Competing Definitions of the Rule of Law, in PROMOTING THE RULE OF LAW ABROAD: IN SEARCH OF KNOWLEDGE 31, 34-37 (Thomas Carothers ed., 2006). 73. STROMSETH ET AL., supra note 1, at 81.
The synergistic approach to strengthening the rule of law is also adaptive and dynamic. In this respect, it aims to build upon existing cultural and institutional resources for the rule of law and to move them in a constructive direction, but it also recognizes that the rule of law is always a work in progress, requiring continual maintenance and reevaluation.74 This approach understands that the rule of law cannot be imported wholesale; instead, it needs to be built upon preexisting cultural commitments, enjoy popular legitimacy, and address the needs of ordinary people in particular societies in order to be sustainable.75
We thus devote a major chapter76 to the difficult issue of building rule of law cultures and stress that the rule of law is as much a matter of creating new cultural commitments as a matter of creating new legal institutions and codes.77 Yet we still do not know much about how cultures change. As a result, fostering rule of law cultures is exceptionally difficult, even assuming that all the background conditions, such as security and institutions, are in place. These background conditions in themselves are exceptionally difficult to establish. It is particularly tough to foster a rule of law culture in post-conflict settings where former leaders have discredited the law through past inefficacy or by prior use of law as an instrument of repression.78
Successfully building a rule of law culture requires interveners and local leaders to think broadly and creatively, to understand the needs and aspirations of the local population, and to work innovatively on a wide range of programs.79 For example, when considering ways to foster rule of law cultures, interveners should examine the degree to which formal legal institutions such as courts are utilized in a given society.
In some societies, educated urbanites may have made extensive use of formal legal institutions prior to an intervention and be ready to use them again once stability is restored, but these same institutions may have been virtually irrelevant in rural areas or among less educated and less affluent people. In such contexts, many people may be more accustomed to turning to traditional or informal dispute settlement mechanisms. These may range in complexity and legitimacy, but in some societies, such as East Timor, traditional dispute resolution mechanisms enjoy considerable local support.80 Reaching rural and less affluent people may require finding creative ways to engage with traditional and customary dispute settlement regimes while working to alleviate discriminatory aspects of such systems.81 We urge reformers who seek to build rule of law cultures to:
74. Id. at 80. 75. See id. at 183. 76. Id. ch. 8. 77. Id. at 310. 78. See id. at 311. 79. See id. at 345.
• Get to the grassroots. Reformers need to find ways to reach beyond cities, state institutions, and political elites and to build citizens' education, access to justice, and community organizing and advocacy programs that will reach a larger segment of the population.82
• Strengthen civil society. Building and sustaining a rule of law culture also requires a strong civil society.83 An independent and effective media, strong NGOs, and other civil institutions can be essential to effectively promoting the rule of law.84 Independent NGOs can play an important role in helping to build a more transparent, effective, and fair justice system and in keeping officials accountable.85 In East Timor, for example, the Judicial System Monitoring Programme (JSMP) has helped to strengthen the country's developing justice institutions by monitoring and evaluating problems and recommending reforms.86
• Focus on the next generation. In post-conflict societies with little or no prior rule of law tradition, reformers may have to overcome years of skepticism about law before the rule of law can flourish.87 Educational programs and cultural exchange
80. Id. at 334-35.
81. See id. at 334-39.
82. See id. at 341. 83. Id. at 341. 84. See id. at 341-42. 85. See id.
86. See id. at 330-31. For information about the Judicial System Monitoring Programme, see Judicial System Monitoring Programme, (last visited Feb. 16, 2008).
87. See STROMSETH ET AL., supra note 1, at 342.
initiatives can help educate and inspire young people about the importance of human rights and the rule of law.88
• Give people a stake in the law. To sustain support for the rule of law, reformers need to find ways to give large numbers of people a stake in laws, legal institutions, and the institutions of governance more broadly. This can be done both through programs that seek to give ordinary people a sense of "ownership" over the law,89 for instance by involving them in contributing ideas for new institutions and codes, and by linking rule of law programs to development and anti-poverty initiatives.90
• Include marginalized groups. In addition to traditional rule of law programs that focus mainly on the "supply side" of the law-judges, lawyers, legislators, and so on-reformers also need to focus on the "demand side" and on marginalized groups,91 such as women, youth, and minorities, who will otherwise remain vulnerable to abuse and possibly, in the case of underemployed or unemployed youth, become disaffected enough to pose a significant threat to stability.92
• Be creative. Since rule of law is a culture as much as a set of institutions and legal codes, interveners need to be willing to use the tools of "culture" as well as the tools of law and development.93 To capture the imagination and loyalty of citizens in any society, reformers need to be willing to use media, pop culture, and traditional narratives and methods creatively to build support for rule of law reforms.94 In South Africa, for example, reformers used such means successfully to solicit ideas from the general public on the new constitution.95
88. See id. at 342-43. 89. Id. at 343. 90. See id.
91. See id. at 345. 92. See id. at 343-44. 93. Id. at 345. 94. See id.
95. See id. at 343.
Finally, in addition to being adaptive and dynamic, the synergistic approach to building the rule of law is systemic.96 Appreciating how institutions intersect and operate as a system is vital to designing effective and balanced programs for reform.97 Although the priorities in a particular society will depend on the areas of greatest need, a systemic perspective can help to achieve more enduring results not possible by focusing on single institutions in isolation.98
A systemic approach is especially crucial in working to strengthen justice systems in post-conflict societies.99 Here it is important to keep a clear eye both on the linkages between the different parts of the justice system and on how they function within the larger political system and culture.100 If one component of the justice system, such as police forces, is strengthened without adequate attention to the others, such as courts and prisons, reforms may end up being unsustainable. Likewise, if particular institutions are built up without careful attention to the accountability of newly empowered actors or to their role in the political system, reforms may end up being unsustainable or simply empowering unaccountable local actors rather than genuinely building the rule of law.101
In addition to addressing the now familiar "justice triad" of police, courts, and prisons, we stress the importance of developing critical capacities in the broad areas of law making, law enforcement, and dispute adjudication, as well as a capacity for legal education.102 Thinking in these terms helps to underscore the principle that building and sustaining an effective justice system requires an interrelated web of supporting institutions, personnel, and capabilities.103 To be sure, each post-conflict society presents unique circumstances, obstacles, and opportunities for justice system reform.
Thus a crucial starting point is a strategic assessment that takes account of factors such as the distinctive "conflict legacy" in that society; the
96. See id. at 82.
97. Id.
98. See id.
99. See id. at 183. 100. See id.
101. See id. at 183-84. 102. See id. at 184. 103. See id.
particular needs; the available human, cultural, and material resources; and the main obstacles and threats to reform.104 At the same time, countries recovering from conflict frequently face a number of common challenges. Law reform, for example, is usually a critical task in post-conflict societies.105
Existing law--or parts of it--may lack public legitimacy, fail to address complex criminal activity, and fall short of international human rights standards.106 Criminal law and procedure often require particularly urgent reforms.107 All too often, however, new laws are drafted by outsiders with limited local involvement.108 We emphasize the importance of strengthening local capacity for lawmaking and compromise. Laws, after all, are a reflection of the values and tradeoffs in a particular society, and strengthening the processes of local lawmaking is just as important as substantively reforming the law.109
Regarding law enforcement capacity, we recognize the particularly daunting challenges of transforming police-society relations110 and changing long-standing expectations and patterns of behavior, especially in societies in which public distrust runs deep.111 Changing the organizational culture of police is often critical to meaningful reform.112 A successful post-conflict police force will need fair and transparent selection and promotion criteria, adequate pay, good training, incentives for good performance, and mechanisms in place for improving police-society relations.113 Police reform often moves
104. See id. at 189. 105. See id. at 193-95. 106. See id. at 194-95. 107. See id. at 192. 108. See id. at 199 (discussing the lack of local involvement in post-conflict legal reform in Kosovo and Afghanistan); see also David Marshall & Shelley Inglis, The Disempowerment of Human Rights-Based Justice in the United Nations Mission in Kosovo, 16 HARV. HUM. RTS.
J. 95, 117-19, 145 (2003). 109. See STROMSETH ET AL., supra note 1, at 199. 110. See id. at 204-05.
111. See id. at 203.
112. See id. at 205; see also DAVID H. BAYLEY, U.S. DEP'T OF JUSTICE, DEMOCRATIZING THE POLICE ABROAD: WHAT TO DO AND HOW TO DO IT 13-15 (2001), available at gov/pdffilesl/nij/188742.pdf.
113. STROMSETH ET AL., supra note 1, at 211-13; William G. O'Neill,
Police Reform in Post-Conflict Societies: What We Know and What We Still Need To Know 9 (Int'l Peace Acad. Policy Paper, Apr. 2005), available at Police-society relations can be improved through public outreach and complaint mechanisms. See id. at 7.
more quickly than reforms in other parts of the justice system, as was the case in Haiti.114 But without corresponding reforms in prisons and the judiciary, problems such as extended pretrial detention, lack of due process, and unfairness in treatment of suspects will persist.115
In fact, prisons are usually shortchanged in post-conflict justice reform.116 Yet, as experience in Iraq and elsewhere has shown, neglecting prisons can result in severe abuse and can have devastating long-term costs.117 The Abu Ghraib prison abuse scandal in Iraq, for instance, has greatly undermined U.S. efforts to promote the rule of law in Iraq and elsewhere.118 Effective prison reform requires clear rules that protect fundamental rights, good training, competent personnel, credible monitoring and accountability, adequate resources, and often sustained international support.119
In many post-conflict societies, building more impartial and competent judiciaries has proven to be the most complex and difficult aspect of justice system reform.120 The specific challenges and obstacles vary in different countries, but critical reforms generally include: transparent and merit-based appointment procedures; good training; building structural protections for impartial decision making by increasing the transparency and accountability of judicial operations; providing adequate resources and budgets; supporting independent court monitoring organizations; investing in legal education; and, above all, addressing larger systemic problems of external influence, political control, and corruption that prevent impartial adjudication.121
Changing the attitudes and expectations of officials, police, judges, and the public regarding how the justice system should operate may be the hardest challenge of all, particularly in societies in which police and courts previously served as tools of self-interested leaders and other powerful actors rather than as instruments of justice.122 Even as each post-conflict society presents distinctive obstacles and opportunities, some common problems and pitfalls have plagued efforts to strengthen justice systems in a number of post-conflict countries.123 These include:
114. See STROMSETH ET AL., supra note 1, at 217. 115. See id. at 218. 116. See id.
117. See id. at 218-19. 118. See id. at 219. 119. See id. at 221-22, 226. 120. See id. at 247. 121. Id. at 247-48.
• A failure to provide for applicable law that enjoys local legitimacy124 or to involve local decision makers sufficiently in law reform efforts;125
• Unbalanced reform in the justice sector and premature institution building without corresponding political reforms;126
• Premature empowerment of judges or other justice system officials without adequate training and before credible disciplinary mechanisms are established.127 In East Timor, for instance, newly appointed national judges assumed the bench without sufficient training.128 They later failed retention exams and were replaced by international judges while they underwent extensive training;129
• Failure to address sufficiently the needs of vulnerable segments of the population, including women and girls, who often face increased violence after conflicts;130
• Neglecting rural areas and problems of access to justice more generally;131 and
• Focusing on institutional building blocks and surface indicators, such as numbers of courthouses established and computers installed, without paying sufficient attention to building the solid political foundations of a fair justice system.132
At the same time, recent experience has also shown a number of positive practices in post-conflict justice system reform.133 These include:
122. See id. at 246. 123. See id. at 248. 124. This was initially a problem in Kosovo. See id. at 195-96, 316-18. 125. See id. at 248. 126. Id.
127. See id.
128. See id. at 234; see also Hansjörg Strohmeyer, Collapse and Reconstruction of a Judicial System: The United Nations Missions in Kosovo and East Timor, 95 AM. J. INT'L L. 46, 55-56 (2001). 129. See STROMSETH ET AL., supra note 1, at 234-35; see also JUDICIAL SYS. MONITORING PROGRAMME, OVERVIEW OF THE JUSTICE SECTOR: MARCH 2005, at 7, 12, 27-28 (2005), available at 130. STROMSETH ET AL., supra note 1, at 248. 131. Id.
• Putting together an effective mix of international and local jurists to help strengthen domestic justice systems after conflict;134
• Pursuing systemic reforms that address connections and build synergies between key justice institutions, such as police, prisons, and courts;135
• Promoting greater transparency in the justice system and instituting merit-based selection and promotion procedures;136
• Working to promote sustainable reforms by investing in civil society organizations that can monitor legal institutions and advocate for reform, and by investing in legal education;137 and
• Working to develop inclusive and representative composition in justice institutions and paying greater attention to problems of access to justice.138
None of this is easy. For example, developing an optimal combination of local and international jurists who can work together to strengthen a domestic justice system after conflict is a difficult, complex, highly context-specific endeavor.139 On the one hand, building local capacity, ownership, responsibility, and leadership is fundamental. On the other hand, experienced international judges, prosecutors, and defense counsel from different countries can provide an infusion of skills that can assist new domestic legal personnel. International judges, for instance, can provide valuable
132. Id.
133. See id.
134. See id.
135. See id.
136. Id.
137. Id.
138. Id.
139. See id. at 236-37.
balance in highly charged, ethnically divided post-conflict settings.140 International jurists can also help reinforce impartiality and independence in the domestic judicial system.141 In Bosnia's Brcko district, for example, local judges credit international jurists for contributing to significant improvements and reforms in the justice system.142 International jurists have made an impact in Kosovo, East Timor, Sierra Leone, and other countries as well.143 But finding experienced judges who can deploy rapidly to post-conflict settings, or who have relevant cultural knowledge and language ability, has often not been easy, and ensuring a beneficial relationship between national and international jurists requires considerable effort.144 The experience to date highlights the need for more systematic thinking up front about designing effective arrangements and partnerships.145
The need for a systemic and synergistic approach to post-conflict rule of law building is not limited to the justice system alone. The requirement for more self-conscious connections and synergies among different areas of reform exists more broadly. In many post-conflict societies, for instance, groups and individuals who work on justice system reform often have little contact with those who focus on legal accountability for war crimes and human rights atrocities through criminal tribunals and truth and reconciliation commissions.146 Yet, with thoughtful planning, these accountability proceedings can be designed more effectively to help build and strengthen domestic rule of law after conflict. Sierra Leone's Special Court is one concrete example.
A hybrid war crimes tribunal designed to bring to justice those who bear the greatest responsibility for the brutal atrocities that marked Sierra Leone's violent civil war, the Special Court includes both international and national judges, prosecutors, defenders, investigators, and administrative personnel.147 Though not without its challenges,
140. Id. at 236. 141. See id.
142. See id. at 239. 143. See id. at 237-38. 144. See id. at 237. 145. See id. at 239. 146. See id. at 256-57. 147. See id. at 289-90.
the Special Court has made positive contributions to strengthening the rule of law in Sierra Leone through its demonstration effects regarding the value and importance of fair justice, and through concrete capacity-building effects.148 These include the direct training and experience gained by the Sierra Leonean investigators, lawyers, and other personnel participating in the Special Court, and social capacity building more broadly by creative outreach to Sierra Leone's population.149 The Special Court has reached out to the public through community-based dialogues about the meaning and significance of the Special Court's proceedings, through the establishment of "Accountability Now" clubs at local universities and schools, and through networking with local NGOs that focus on the work of the court and on strengthening governmental accountability more generally.150 The experience in Sierra Leone highlights that much more can be accomplished if reformers think systematically and creatively from the beginning about how accountability proceedings can contribute to strengthening domestic rule of law.
III. CHALLENGES AHEAD IN PURSUING A MORE EFFECTIVE APPROACH TO RULE OF LAW BUILDING
Even with a more nuanced understanding of the multi-layered challenges of strengthening the rule of law in post-conflict societies, many practical obstacles to actually implementing a more effective approach on the ground still remain. Rule of law assistance is part and parcel of the larger rebuilding effort in post-intervention societies.151 Just as the larger effort will not succeed unless rule of law takes hold, so too will rule of law efforts fail if the larger effort to restore peace and foster democratic governance does not succeed.152 In addition to specific and often daunting obstacles in particular societies,153 a number of practical challenges have undermined rule of law building in many post-conflict situations. These include:
148. See id. at 295-98 (discussing demonstration and capacity-building effects in Sierra Leone); Jane E. Stromseth, Pursuing Accountability for Atrocities After Conflict: What Impact on Building the Rule of Law?, 38 GEO. J. INT'L L. 251, 304-08 (2006). See generally STROMSETH ET AL., supra note 1, at 258-62 (discussing demonstration and capacity-building effects in general). 149. STROMSETH ET AL., supra note 1, at 297-99. 150. See id. at 295-98. 151. See id. at 61. 152. See id. at 386.
" Insufficient resources, commitment, and patience; 5 4
" A lack of unified planning domestically or inter-nationally; 155 and
" Inability or failure to engage local populations effectively in designing and sustaining reforms. 5 'Because military interventions of the sort we consider in our book take place principally in conflict-ridden states whose governmental institutions are often discredited, devastated, and in enormous need of assistance, post-conflict reconstruction necessarily takes substan-tial commitments of time, money, and personnel.' 5 7 Half-hearted efforts to build peace after a conflict will likely fail and may even render a bad situation worse.' Hence, interveners must strive to match resources and commitment to the problems at hand.' 5 9 In difficult environments, a major or regional power's support based on perceived national interest may be critical. Even then, however, states frequently underestimate the commitment and resources required and have a hard time sustaining domestic support in their own societies for the extended effort needed. 6 ° Contributions from many states and organizations are often required, but the present system of cobbling together resources is inadequate.' 6 ' Donors have their own particular agendas and priorities, and their incentives to provide sustained funding are often limited, resulting in sometimes inadequate and poorly coordinated assistance programs.' 6 2 Also, the explosion in the 153. For example, in Iraq, sectarian violence and insurgency impede efforts to create security and build the rule of law in much of the country. See id. at 368. 154. See id.
155. See id. at 350-51. 156. See id. at 376-77. 157. See id. at 367. 158. See id.
159. See id. at 368. 160. See id.
161. See id. at 371; see also United Nations Dev. Programme, Governance in Post-conflict Situations 5-6 (UNDP Background Paper for Working Group Discussions, 2004), available at
162. See STROMSETH ET AL., supra note 1, at 372-73.
number of actors involved in post-conflict governance and rule of law promotion has led to duplication of effort, confusion, competition for resources, gaps in assistance, mixed messages, and lost time, and has opened the door for spoilers intent on playing different international actors off against each other.163 Efforts to strengthen coordination mechanisms, monitor pledges and disbursements, and increase transparency and accountability are crucial.164 In recent years, progress has been made in improving mission planning, and various planning models now exist.165 Less important than the particular structure chosen is the need to foster a genuine partnership among the multiplicity of organizations and government agencies involved and to allocate clear responsibility for achieving agreed objectives in post-conflict reform.166

In this process, promoting local ownership of post-conflict reforms has justifiably become a touchstone.167 Several key points must be kept in mind. First, durable social change needs to come from within; it can be guided but not imposed.168 Local leadership and support is essential to building sustainable institutions and a culture of respect for democratic governance and rule of law that will outlast the presence of the interveners.169 Second, the notion of "local ownership" is complicated.170 Building the rule of law is an inherently political exercise, with local winners and losers in terms of opportunities, resources, power, and status.171 For better or worse, an intervener's choice of local interlocutors necessarily shifts the balance of power among competing domestic actors with different visions of their society and different claims to represent it.172 Reformers need to appreciate the political complexities and challenges involved, and must be savvy about who they are
163. Id. at 350-51.
164. See id. at 376. 165. Id. at 355-67, 376. The newly created United Nations Peacebuilding Commission, for
example, despite its limitations, may help to better coordinate funding and planning of post-conflict peace building, including rule of law assistance. See id. at 362-64, 376. 166. See id. at 386.
167. See id. at 376; see also Michèle Flournoy & Michael Pan, Dealing with Demons: Justice and Reconciliation, WASH. Q., Autumn 2002, at 112. 168. STROMSETH ET AL., supra note 1, at 377. 169. See id. at 378-79. 170. Id. at 377.
171. See id. at 379.
172. Id. at 380.
supporting and who is likely to have an interest in sabotaging reform.173 Third, international and local priorities, standards, and values will at times conflict, forcing interveners to strike a delicate balance between respect for local preferences and promotion of international norms.174 On top of all these dilemmas, deep political, sectarian, and other conflicts within post-conflict societies can pose enormous challenges to rule of law building efforts. This does not mean that building the rule of law is impossible. But it does mean that it is far more difficult than generally understood. Interveners need to have realistic goals and more humility about their ability to achieve cultural change.175 They need to be more aware of the many paradoxes of building the rule of law after conflict and of the need to move ahead on multiple fronts simultaneously, despite frequent setbacks. They need to understand the enormous amount of time, resources, and patience that building the rule of law requires and the political trade-offs and challenges in particular societies. If reformers understand these complexities, take them into account up front, and plan accordingly, their efforts to build the rule of law may have a greater chance of some success. Although we are not overly optimistic, we are not overly pessimistic, either, and we stress that post-conflict efforts to strengthen the rule of law will continue to be as important as they are difficult.176 To maximize chances of success, interveners must:
" Recognize that perceptions of the intervention's legality and legitimacy will have an impact on the ground;' 7 7
" Acknowledge the complexity of the rule of law and be clear about what it is that they are trying to achieve; 7 '
" Develop basic governance blueprints to determine how to create appropriate institutions, while recognizing that the choice of blueprints will inevitably constrain and possibly undermine some rule of law goals;' 79
" Seize early opportunities to ensure basic security; 80173. See id. at 379-84 (discussing the importance and challenges of local ownership). 174. Id. at 377. 175. See id. at 81. 176. Id. at 392. 177. See id. at 52. 178. Id. at 391-92. 179. Id. at 392. 180. Id.
" Reform police, prisons, courts, law schools, and so on, all in tandem; 18 1
" Ensure that accountability efforts send the right messages and enhance local capacity; 182
• Avoid undermining rule of law efforts through cultural insensitivity, poor planning, lack of transparency, or appearance of hypocrisy;183
" Think creatively about building rule of law cultures, which requires going beyond the traditionally 'legal" to consider informal dispute resolution, community organizing and advocacy, civil society, education, media, antipoverty and development initiatives, and ensuring inclusion of nonelites and marginalized groups;"&
" Plan and coordinate effectively and ensure resources are commensurate to the task. 8 5Although it may be impossible to do all of this, strengthening the rule of law will remain an urgent, compelling task in many post-conflict societies. To move ahead constructively, reformers will need to be both more humble and more ambitious at the same time. 8 6 They need to be more humble in acknowledging the magnitude of the task and in recognizing that the role of outsiders is limited. Reformers must also be more humble in recognizing that durable cultural change is exceptionally difficult, and that we do not know as much about how to foster such change as any of us would like. Yet reformers must also be more ambitious in their willingness to pursue the rule of law in more creative and holistic ways. By providing an integrated, thematic study of the many chal-lenges of building the rule of law after intervention, my co-authors and I hope to encourage those working on these efforts to think more clearly about the broader endeavor and about how their particular activities can contribute to a larger whole. For although each of us may only paint one small piece of the picture, it will
181. Id.
182. Id.
183. Id.
184. Id.
185. Id.
186. See id. at 81, 392.
surely be a better work of art if we all understand just what the picture is supposed to represent in the end, and if we have all given careful thought to how our own sketches or brushstrokes may fit into the complex and evolving whole.187
187. Id. at 392.
189584 | https://pmc.ncbi.nlm.nih.gov/articles/PMC7452649/ | Structure-dependent effects of sweet and sweet taste affecting compounds on their sensorial properties - PMC
Food Chem X. 2020 Jul 8;7:100100. doi: 10.1016/j.fochx.2020.100100
Structure-dependent effects of sweet and sweet taste affecting compounds on their sensorial properties
Corinna M Karl a, Martin Wendelin b, Dariah Lutsch c, Gerhard Schleining d, Klaus Dürrschmid d, Jakob P Ley c, Gerhard E Krammer c, Barbara Lieder a,e,⁎
a Christian Doppler Laboratory for Taste Research, Faculty of Chemistry, University of Vienna, Vienna, Austria
b Symrise Distribution GmbH, Vienna, Austria
c Symrise AG, Holzminden, Germany
d Institute of Food Science, University of Natural Resources and Life Sciences, Vienna, Austria
e Department of Physiological Chemistry, Faculty of Chemistry, University of Vienna, Vienna, Austria
⁎ Corresponding author at: Christian Doppler Laboratory for Taste Research, Faculty of Chemistry, University of Vienna, Vienna, Austria. barbara.lieder@univie.ac.at
Received 2020 Mar 11; Revised 2020 Jul 2; Accepted 2020 Jul 2; Collection date 2020 Sep 30.
© 2020 The Author(s). This is an open access article under the CC BY-NC-ND license.
PMCID: PMC7452649 PMID: 32904296
Abstract
A reduction in sugar consumption is desirable from a health point of view. However, the sensory profiles of alternative sweet tasting compounds differ from sucrose regarding their temporal profile and undesired side tastes, reducing consumers’ acceptance. The present study describes a sensory characterization of a variety of sweet and sweet taste affecting compounds followed by a comparison of similarity to sucrose and a multivariate regression analysis to investigate structural determinants and possible interactions for the temporal profile of the sweetness and side-tastes. The results of the present study suggest a pivotal role for the number of ketones, aromatic rings, double bonds and the M LogP in the temporal profile of sweet and sweet taste affecting compounds. Furthermore, interactions between aggregated physicochemical descriptors demonstrate the complexity of the sensory response, which should be considered in future models to predict a comprehensive sensory profile of sweet and sweet taste affecting compounds.
Keywords: Sweet taste, Temporal sensory profile, Onset, Lingering, Side-tastes, Physicochemical descriptors, Molecular structure of tastants
1. Introduction
During the past decades, the consumption of sugary drinks has increased globally (Gakidou et al., 2017). However, an excessive consumption of sugar, especially in soft drinks, contributes to overweight, obesity and associated diseases like type 2 diabetes, hypertension and hyperlipidemia (Lustig, Schmidt, & Brindis, 2012). In order to reduce sugar consumption, but still sustain the pleasant sweet taste of food, there is a worldwide rising trend for sugar-reduced products using alternative sweeteners with no or a reduced caloric load (Sylvetsky & Rother, 2016). A major challenge when applying alternative sweeteners is the striking difference between their sensory profiles and that of sucrose, which is the sweet standard for most consumers. Especially differences in the time-intensity response, the potency, and undesired side-tastes, for example bitterness, metallic, astringency or licorice-like taste, limit the application and acceptance of alternative sweeteners (DuBois, 2016, Reyes et al., 2017).
The terms onset and lingering are commonly used to describe differences in the sensory time-intensity profile. Onset is used to express the time it takes to reach the maximum of a taste sensation, while lingering is the more or less long-lasting time of a sensation in the mouth (DuBois, Crosby, Stephenson, & Wingard Jr., 1977). A large variety of sweet tasting compounds is known, but none of them has exactly the same sensory profile as sucrose. Moreover, the molecular basis for these differences has not been fully elucidated so far. The perception of sweet taste is mediated by activation of the sweet taste receptor, the G protein-coupled heterodimeric receptor T1R2/R3, which has multiple agonist binding sites (Chéron et al., 2016, Morini et al., 2005). An alternative pathway for the perception of mono- and disaccharides via glucose transporters has been discussed as well (Sukumaran et al., 2016, Yee et al., 2011). However, it is still not fully understood how the sweet taste receptor recognizes the sensory variety of structures of ligands (Chéron et al., 2016, Masuda et al., 2012). Early, but outdated, attempts to provide structure-sweetness relationships without the knowledge of the sweet taste receptor described the hydrophobicity and the logP value, which is the partition coefficient in octanol/water and represents the solubility of a compound, as important characteristics for sweet compounds (Deutsch & Hansch, 1966). This approach was followed by the so-called AH/B theory, describing the occurrence of a hydrogen bond donor group and a Lewis base (Shallenberger & Acree, 1969). The human sweet taste receptor was discovered in the early 2000s (Li et al., 2002, Montmayeur et al., 2001, Nelson et al., 2001), providing a base for advanced prediction models.
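As a numerical illustration (not part of the study): the logP value referenced above is, by its textbook definition, the base-10 logarithm of a compound's equilibrium concentration ratio between the octanol and water phases; the M logP values reported in Table 1 are computed estimates of this quantity rather than measured ratios.

```python
import math

def log_p(conc_octanol, conc_water):
    """Octanol/water partition coefficient on a log10 scale.

    Textbook definition: log10 of the equilibrium concentration ratio
    between the octanol and water phases. Positive values indicate
    lipophilic compounds, negative values hydrophilic ones.
    """
    return math.log10(conc_octanol / conc_water)

# A hypothetical compound partitioning 100:1 into octanol:
log_p(100.0, 1.0)  # -> 2.0
```

For comparison, most sugars and polyols in Table 1 have strongly negative M logP values, consistent with their high water solubility.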
Lately, there have been several studies describing the prediction of sweetness for example by quantitative structure activity relationship (QSAR) models (Chéron et al., 2017, Goel et al., 2018, Yang et al., 2011) or by machine learning methods (Zhong, Chong, Nie, Yan, & Yuan, 2013). In addition, in silico methods based on one of the binding sites were applied, for example molecular docking and homology models. Those models have shown to be useful tools to provide insights into the mechanism of G-protein coupled taste receptors, including sweet taste, by analyzing selected ligand binding sites (Spaggiari, Di Pizio, & Cozzini, 2020). For example, Ben Shoshan-Galeczki and Niv (2020) recently published homology models for virtual screening to provide novel predictions of sweet tasting molecules. Nevertheless, the crystal structure of the sweet taste receptor remains unknown and the main limitation for structure-based modelling is the availability of closely related proteins, although there have been some important improvements over the last few years (Spaggiari et al., 2020). Furthermore, the available models only describe the sweetness of a compound but are lacking the complete sensory profile including the temporal profile and potential side-tastes which are very important for the consumers’ acceptance and preference of a sweet tasting compound. For unpleasant aftertaste, an interaction with the umami receptor has been proposed (Acevedo & Temussi, 2019) and it is known that some sweeteners can also activate one or more bitter taste receptors (Kuhn et al., 2004). In addition, an extended lingering, as well as a delay in the onset of sweet taste is common amongst several non-nutritive sweeteners (DuBois, 2016, DuBois and Prakash, 2012). However, the structural basis for these differences has not been clarified so far.
In order to improve the current understanding of the structural determinants and their interactions for the sensory perception of sweet taste, a ligand-based approach was chosen. In more detail, we performed a comparative sensory characterization of a variety of test compounds at equally sweet levels in one test setup in order to investigate the structural driving forces for onset and lingering, as main parts of the temporal profile of sweet sensation, in addition to selected side-tastes. We hypothesize here that not only single structural characteristics, but also interactions between several characteristics are driving forces for undesired side-tastes and in particular for the onset and lingering of the sweet sensation.
2. Materials and methods
2.1. Chemicals
Acesulfame K, advantame, aspartame, hesperetin sodium salt, iron lactate-II hydrate, maltitol, maltose, neotame, phyllodulcin, phloretin, rebaudioside (reb) A (nat., 99%), reb C, reb D, reb E (contains 20% reb D) and reb M, rubusoside, saccharin sodium salt, sodium cyclamate, sorbitol (d-), stevioside, sucralose, tannic acid (nat.), thaumatin B (pur) and trehalose were kindly provided in food grade (FG) quality by Symrise AG (Holzminden, Germany). Caffeine (anhydrous, 99%, FG), hesperetin (>95%), neohesperidin dihydrochalcone (>96%, FG), rhamnose (l-, 99%, FG) and sorbitol (d-; 98%, FCC, FG) were obtained from Sigma-Aldrich (Steinheim, Germany). d-Tryptophan (99%) was obtained from Carbolution Chemicals GmbH (St. Ingbert, Germany), glucose (>99%, FG) from Dr. Lohmann Diaclean GmbH (Dortmund, Germany), fructose and sucrose were purchased from Wiener Zucker (Vienna, Austria). Citric acid, erythritol, ethanol, isomalt, lactose, monosodium glutamate, palatinose, sodium chloride, sucrose and xylitol were purchased from local supermarkets and pharmacies in the Viennese region in Austria.
2.2. Sensory panel
A total of 23 panelists (19 F, 4 M; 23–34 years) were recruited via notices on billboards at the University of Vienna and the surrounding areas. They confirmed they were in good general health condition, not pregnant and not taking medication. The panelists were asked not to consume intense tasting food or drinks (e.g., chewing gum, garlic, chilli, coffee) or to smoke for at least one hour before testing and to avoid strong odors or perfume, as well as strong abdominal fullness or hunger. All panelists gave their written informed consent.
The panelists were screened in three sessions within three weeks. The first session for basal tastes was performed according to DIN-EN-ISO 8586:2014-05 (2014) with 0.3 g/L citric acid for sour taste, 2.0 g/L sodium chloride for salt taste, 10.0 g/L sucrose for sweet taste, 0.6 g/L monosodium glutamate for umami taste and 0.3 g/L caffeine for bitter taste. To continue with the panel work, at least 80% had to be rated correctly. Additionally, 1.0 g/L iron lactate-II hydrate and 0.5 g/L tannic acid were given to train the panelists for metallic and astringent taste. In a second training session, the stimulus threshold level for sweet taste with sucrose and for bitter taste with caffeine was obtained according to DIN-NORM (1998). Only panelists with a threshold level for sweet below or at 4.0 g/L sucrose and for bitter below or at 0.125 g/L caffeine were allowed to continue. Furthermore, a ranking test for sweet and bitter was conducted according to Busch-Stockfisch (2015), to assess the ability to differentiate between concentrations. After the screening sessions, 20 panelists (17 F, 3 M; 23–34 years) were qualified and willing to continue with the sensory evaluations. At the third session, the panelists were introduced to the test method and the corresponding questionnaire on paper (see description for sensory evaluation) and were provided with the opportunity to train the evaluation-sheet. The attribute onset was separately trained by a guided tasting of sucrose compared to reb A and aspartame, which are known to have a delayed onset (DuBois & Prakash, 2012). This training for onset was repeated several times during overall panel work. The general performance of the panel was assessed by panel check using EyeOpenR with the complete data of sensory evaluation of the test compounds (see Section 2.3). Discrimination performance of the whole panel was good (p < 0.05), as was the reproducibility (p > 0.15). 
In particular, the discrimination of onset was excellent with p < 0.001, and the reproducibility was good with p = 0.234. Because of the overall good performance of the panel, no evaluation or any of the panelists had to be excluded.
2.3. Sensory evaluation
Every compound was tested at least in two sessions on separate test days with a minimum of eight panelists. The panelists were free to choose whether to participate in each of the sessions and the compounds were randomly assigned to the sessions, leading to a randomized order of the compounds to each individual panelist. On average, 30 single evaluations were made per compound on 2–3 separate test days, and 12 panelists evaluated one compound per test day (see Table S1). The reproducibility of the evaluations was tested by repeated rating of several compounds (e.g. sucrose and aspartame). To receive the taste characteristics of the 35 test compounds, an evaluation sheet was created based on a descriptive profile at two time-points, namely taste and aftertaste, in addition to rating of onset and lingering (see Fig. S1). The evaluation-sheet was customized for this study to rate the attributes using unstructured scales (0–10) for taste and aftertaste (“not at all” to “very intensive”), namely the intensity of sweetness, bitterness, astringency and metallic. For onset, panelists were supposed to rate the perceived time until the maximal sweet intensity was reached (“immediately” to “substantially delayed”) on an unstructured scale (0–10). Panelists were asked to rinse the mouth with tap water before and in between the tastings, and white bread was provided for optional additional neutralization. The panelists were instructed to start a new sample after complete neutralization only. A maximum of five test solutions was evaluated at one session and one test day. A total volume of 20 mL of each sample was provided in cups labelled with 3-digit random numbers and presented to the panelists in a randomized order. The panelists rinsed their mouth for 30 s with 20 mL of the test solution, evaluating the onset of sweetness in the first seconds and afterwards the intensity of bitterness, metallic, astringent, and sweet taste on unstructured scales. 
After spitting out the sample, lingering time was measured using a standard timer while rating the sweet, bitter, metallic and astringent aftertastes on unstructured scales. According to the measured time of sweetness staying in the mouth, the lingering time was recalculated to the range 0–10. The concentrations of the compounds were chosen so that the sweetness was equivalent to 5% (w/v) sucrose. The selected concentrations were determined in preliminary tastings by comparison tests with five selected panelists on structured scales (weak, marked and strong difference) with 5% (w/v) sucrose as reference solution, according to a just-about-right scale.
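The measured lingering times were mapped onto the same 0–10 scale as the other attributes. The exact mapping is not stated in the text, so the sketch below assumes a simple linear rescaling against a hypothetical panel maximum lingering time, clipped to the ends of the scale; both the function name and the 120 s maximum are our assumptions.

```python
def rescale_lingering(seconds, max_seconds=120.0):
    """Map a measured lingering time in seconds onto the 0-10 scale.

    Assumption (not stated in the paper): linear min-max scaling
    against a hypothetical panel maximum of 120 s, clipped to [0, 10].
    """
    score = 10.0 * seconds / max_seconds
    return max(0.0, min(10.0, score))

rescale_lingering(60.0)  # half of the assumed maximum -> 5.0
```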
2.4. Stimuli
The test compounds used in the present study are shown in Table 1, including concentration for a sweetness equivalence of 5% sucrose where applicable, M logP and viscosity (Pa s). All compounds were carefully dissolved in water in a 500 mL graduated flask (±0.5 mL, DURAN®). Due to the limited solubility in water, phyllodulcin (14), hesperetin (31) and phloretin (33) were dissolved in ethanol (EtOH) to 200× concentrated stock solutions, reaching a final concentration of 0.5% EtOH in the test solution. It must be noted that these concentrations (see # in Table 1) exceed the common use levels and were only reached using ethanol as solvent. Sucrose with 0.5% EtOH (25) was evaluated as a control for the impact of ethanol on the rated attributes. All solutions were prepared freshly in the morning of the tastings and served at room temperature in 25-mL plastic cups. The test compounds hesperetin + 0.5% EtOH (31), hesperetin sodium salt (32), phloretin + 0.5% EtOH (33), rebaudioside C (reb C) (34) and rubusoside (35) did not reach a sweetness equivalence of 5% sucrose at water-soluble concentrations or with tolerable bitterness (see in Table 1) and therefore have been excluded from the statistical analyses.
Table 1.
Test compounds and the concentrations (conc.) used [g/L] in the present study, M logP, viscosity and molecular structure. Compounds tested in concentrations exceeding the common or realistic use levels are labelled with #; compounds that did not reach the required sweetness level of a 5% sucrose solution are labelled with .
| | Substance | Conc. [g/L] | M logP | Viscosity [Pa s] | Structure |
| --- | --- | --- | --- | --- | --- |
| 1 | Acesulfame K | 0.3 | −0.908 | 0.041 | |
| 2 | Advantame | 0.003 | 2.293 | 0.042 | |
| 3 | Aspartame | 0.25 | −0.231 | 0.046 | |
| 4 | Erythritol | 100 | −1.724 | 0.048 | |
| 5 | Fructose | 42 | −2.483 | 0.042 | |
| 6 | Glucose | 100 | −2.483 | 0.050 | |
| 7 | Isomalt | 167 | −4.304 | 0.051 | |
| 8 | Lactose | 175 | −3.898 | 0.053 | |
| 9 | Maltitol (Maltit) | 100 | −4.304 | 0.053 | |
| 10 | Maltose | 180 | −3.898 | 0.049 | |
| 11 | Neohesperidin dihydrochalcone | 0.07 | −2.176 | 0.053 | |
| 12 | Neotame | 0.01 | 1.215 | 0.047 | |
| 13 | Palatinose | 130 | −3.898 | 0.053 | |
| 14 | Phyllodulcin + 0.5% EtOH | 0.075# | 2.105 | 0.058 | |
| 15 | Reb A | 0.3 | −4.703 | 0.052 | |
| 16 | Reb D | 0.25 | −6.648 | 0.059 | |
| 17 | Reb E | 0.3 | −4.703 | 0.036 | |
| 18 | Reb M | 0.25 | −8.579 | 0.039 | |
| 19 | Rhamnose (l-) | 100 | −1.711 | 0.051 | |
| 20 | Saccharin sodium salt | 0.2 | 0.17 | 0.048 | |
| 21 | Sodium cyclamate | 2.1 | 0.448 | 0.046 | |
| 22 | Sorbitol (d-) | 100 | −2.497 | 0.049 | |
| 23 | Stevioside | 0.4 | −2.739 | 0.051 | |
| 24 | Sucralose | 0.09 | −0.913 | 0.046 | |
| 25 | Sucrose + 0.5% EtOH | 50 | −3.898 | 0.043 | |
| 26 | Sucrose | 50 | −3.898 | 0.038 | |
| 27 | Thaumatin B | 0.03 | 1.891 | 0.048 | |
| 28 | Trehalose | 150 | −3.898 | 0.045 | |
| 29 | Tryptophan (d-) | 1.0 | −2.147 | 0.053 | |
| 30 | Xylitol | 65 | −2.103 | 0.047 | |
| 31 | Hesperetin + 0.5% EtOH | 0.07# | 0.927 | 0.048 | |
| 32 | Hesperetin sodium salt | 0.07# | 0.927 | 0.058 | |
| 33 | Phloretin + 0.5% EtOH | 0.25# | 1.842 | 0.047 | |
| 34 | Reb C | 1.2# | −4.033 | 0.040 | |
| 35 | Rubusoside | 0.7# | −0.745 | 0.039 | |
2.5. Viscosity measurement
As a part of the physicochemical descriptors, the viscosity η [Pa s] was measured using the rotary viscometer Physica SM (Anton Paar, Graz, Austria) with D/1/s = 20 at 25 °C ± 0.5 °C for 30 s with seven measurement time-points and 2–3 repetitions per sample. Outliers were determined using the Nalimov outlier test. The mean viscosities [Pa s] of the 35 compounds are listed in Table 1. Each compound was dissolved as described in Section 2.3 and about 100 mL were filled into the test cylinder.
2.6. Computational and statistical analysis
The means and standard errors of the sensory characteristics of all test compounds were calculated with MS-Excel. The heatmap for visualization of the mean ratings of the sensory results with associated dendrogram was created with R studio (R version 3.6.1) using the library “gplots” and the application “heatmap.2” (as.matrix(data_sweet), col = colorRampPalette (c(“white”,“grey”,“black”)) (256), scale = “none”, key = T, keysize = 1.5, density.info = “none”, trace = “none”, cexCol = 0.9). The physicochemical descriptors for each test compound (molecular weight [g/mol], structure, area polar surface [A 2], rotatable bonds, complexity, length glycone, length alkyl chain, as well the numbers of heavy atoms, C-atoms, double bonds, OH-groups, ketones, bonded glucose, aromatic rings, defined atom stereocenters, donors and acceptors) were taken from the open chemistry database PubChem (Aug. 2018). Additionally, the M logP-value was calculated with MedChemDesigner 3.1.0.30 (see Table 1), which estimates the solubility of a compound as octanol/water distribution coefficient (Lipinski, Lombardo, Dominy, & Feeney, 1997). The relative sweetness for each compound was calculated based on the concentrations used in this study to receive a sweetness equivalent to 5% sucrose (relative sweetness = 1).
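Because every compound was dosed to match the sweetness of a 5% (w/v) sucrose solution (50 g/L), the relative sweetness defined above reduces to a concentration ratio. A minimal sketch of this calculation (the function name is ours; the aspartame concentration is taken from Table 1):

```python
SUCROSE_EQUIV_G_PER_L = 50.0  # 5% (w/v) sucrose reference solution

def relative_sweetness(conc_g_per_l):
    """Relative sweetness on a weight basis (sucrose = 1): the ratio of
    the equi-sweet sucrose concentration to the compound's concentration."""
    return SUCROSE_EQUIV_G_PER_L / conc_g_per_l

relative_sweetness(0.25)  # aspartame at 0.25 g/L (Table 1) -> 200.0
relative_sweetness(50.0)  # sucrose itself -> 1.0
```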
The calculation of the molecular fingerprints according to Morgan of each test compound, which translates the molecular structure into a binary code, was done with KNIME analytical platform 3.7 using the RDKit node. Structural similarities to sucrose were then computed by the Tanimoto similarity index. To investigate relationships between the sensory attributes and the similarity index or physicochemical descriptors, the Pearson's product moment correlations were calculated and illustrated with SigmaPlot 13.0. Additionally, a multivariate linear regression analysis with interactions, which includes a factor analysis (FA) with varimax rotation for aggregation of the dependent and independent variables, was carried out using JMP 14.0.0 to consider possible interactions of the physicochemical descriptors when explaining the sensory attributes. The explanatory power for the independent factors (IF) of the multivariate linear regression analysis with interactions is explained by the FDR-LogWorth for each IF and their interactions and by the t ratio for each dependent factor separately. The higher the value of FDR-LogWorth or t ratio, the more impact the factor or interaction has for the model. The FA with varimax rotation was performed in order to reduce the number of factors for the multiple regression analysis. A sensory attribute (dependent factor, see Table S2) or physicochemical descriptor (IF, see S3) is represented by the reduced factor with the highest absolute value.
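The Tanimoto index used for the fingerprint comparison is the Jaccard coefficient computed on the sets of "on" bits of two fingerprints. The study generated Morgan fingerprints with the RDKit node in KNIME; the sketch below illustrates only the similarity index itself, on hypothetical bit positions:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprints given as sets of 'on'
    bit positions: |A & B| / |A | B| (defined as 1.0 when both are empty)."""
    a, b = set(fp_a), set(fp_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# Hypothetical bit positions; real Morgan fingerprints hash circular
# atom environments into such bit sets:
tanimoto({1, 5, 9, 12}, {1, 5, 9, 30})  # 3 shared of 5 total -> 0.6
```

Identical fingerprints give 1.0 and disjoint fingerprints give 0.0, so the index ranges over [0, 1] regardless of fingerprint length.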
3. Results & discussion
In the present study, a comparative quantitative sensory description of known sweet and sweet taste affecting compounds was performed in order to analyze structural characteristics leading to differences in the sensory temporal profile and undesired side-tastes. A total of 35 compounds previously associated with sweet taste or sweet taste affecting properties were selected based on their availability in food grade from commercial sources. A sensory characterization of the test compounds at a sweetness level equivalent to 5% sucrose was carried out, evaluating the time-intensity response as well as bitter, metallic and astringent side-tastes.
The mean ratings of the attributes (see also Table S1) are displayed as a heat map, showing the sensory mean values for each of the 30 test compounds that reached a sweetness equivalent to 5% sucrose (Fig. 1, with related numbers of the compounds in Table 1). The color of each field represents the mean value of an attribute for each compound, with light to dark indicating values from 0 to 10 (see color key in Fig. 1). Furthermore, the compounds are sorted vertically and the attributes horizontally by similarity. A dendrogram demonstrates the clustering of the sensory attributes, as well as the clustering of the test compounds by similarity. The clustering of the attributes shows that taste and aftertaste are associated for each attribute. The attribute onset pertained to the cluster of the attributes “metallic” and “astringent”. In contrast, lingering pertained to the cluster of sweet sensation. Although the concentration of each compound was adjusted to be as close as possible to 5% sucrose, the perception of sweetness may vary based on the individual rating of each panelist. In addition, the intensity of sweet aftertaste after spitting out, but not the taste ratings within the first 30 s, correlated positively (r = 0.56, p < 0.01 by Pearson correlation) with the lingering time. Hence, the more intense the sweet aftertaste, the longer the lingering of the tested compounds. Moreover, we found a significantly enhanced onset (p < 0.05 by ANOVA on ranks with Dunn’s test as post-hoc, compared to sucrose) for advantame (2), aspartame (3), neotame (12), phyllodulcin + 0.5% EtOH (14) and thaumatin (27) when tested at a sweetness level corresponding to 5% sucrose.
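The Pearson product-moment correlation reported above (e.g. r = 0.56 between sweet-aftertaste intensity and lingering time) is the covariance of two samples scaled by their standard deviations. A self-contained sketch with made-up ratings (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative panel ratings (invented for demonstration):
aftertaste = [2.0, 4.0, 5.0, 7.0, 9.0]
lingering = [1.5, 3.0, 6.0, 6.5, 8.0]
print(round(pearson_r(aftertaste, lingering), 3))
```

Values near +1 indicate that compounds rated with a more intense sweet aftertaste also tend to show longer lingering, which is the pattern the study reports.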
Fig. 1.
Heatmap of sensory attributes of sweet and sweet taste affecting compounds with 3 clusters of compounds, with rating values 0–10. T = taste; A = aftertaste. Numbers refer to compounds given in Table 1.
The grouping of the test compounds resulted in three main clusters according to their tastes, side-tastes and aftertastes. To the first cluster, mainly caloric sweeteners and polyols were assigned, namely erythritol (4), fructose (5), glucose (6), isomalt (7), lactose (8), maltitol (9), maltose (10), palatinose (13), sodium cyclamate (21), d-sorbitol (22), sucralose (24), sucrose + 0.5% EtOH (25), sucrose (26), trehalose (28) and xylitol (30). Thus, in the first cluster the sweet taste and aftertaste are almost exclusively present. To the second cluster belonged almost all steviol glycosides (15–18), the amino acid d-tryptophan (29), the 6-deoxy-monosaccharide l-rhamnose (19), and the sweeteners saccharin sodium salt (20), acesulfame K (1), advantame (2), and aspartame (3). This cluster comprises compounds that had, in addition to the sweet taste and aftertaste, some negative side-tastes as well as a slightly enhanced lingering effect. To the third cluster, five compounds were assigned, notably the isocoumarin phyllodulcin (14) and the non-nutritive sweeteners stevioside (23), neohesperidin dihydrochalcone (NHDC, 11), thaumatin (27) and neotame (12). Thus, in the third cluster, the negative side-tastes are the most intense, supplemented by a strongly enhanced lingering of sweetness. These clustering results are consistent with the results of Tan, Wee, Tomic, and Forde (2019), who showed by using the Temporal Check-All-That-Apply (TCATA) method that the side-taste profiles, sweetness onset and lingering of compounds like fructose and maltitol are most similar to 10% sucrose. Furthermore, Tan et al. (2019) showed that aspartame, acesulfame K, reb A, sucralose, as well as allulose and sorbitol, had higher bitterness than sucrose, which was mostly accompanied by higher metallic and chemical tastes compared to sucrose. Similarly, Reyes et al. (2017) described that non-nutritive sweeteners showed more side-tastes compared to carbohydrate-based sweet compounds when evaluating sucrose, aspartame, acesulfame K, sucralose, reb A, fructose, NHDC, thaumatin, glucose and saccharin at weak and moderate sweetening concentrations by TCATA.
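The similarity-based grouping into clusters rests on agglomerative clustering of the rating profiles. A minimal single-linkage sketch over toy profiles (the profiles and the distance threshold are invented for illustration; the study's dendrogram was produced by heatmap.2 in R):

```python
def euclid(a, b):
    """Euclidean distance between two rating profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage(profiles, threshold):
    """Greedy agglomerative clustering: repeatedly merge the two clusters
    with the smallest minimum pairwise distance until no pair of clusters
    is closer than the threshold."""
    clusters = [[name] for name in profiles]
    while True:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(euclid(profiles[a], profiles[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best is None or best[0] > threshold:
            return clusters
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]

# Toy rating profiles [sweet, bitter, metallic] (illustrative only):
profiles = {
    "sucrose": [8.0, 0.5, 0.2],
    "fructose": [7.8, 0.6, 0.3],
    "reb A": [7.5, 3.5, 2.0],
}
print(single_linkage(profiles, threshold=2.0))
```

With these toy values, the two near-sucrose profiles merge into one cluster while the side-taste-heavy profile stays separate, mirroring the qualitative split between cluster 1 and clusters 2/3.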
As a next step, structural characteristics associated with sweetness, onset, lingering and undesired side-tastes were analyzed. Firstly, the overall structure of the compounds was characterized using Morgan fingerprints, followed by calculation of the Tanimoto similarity index to sucrose, which was correlated with the taste attributes. The results are displayed in Fig. 2, demonstrating a negative correlation of bitter and astringent taste (p < 0.05) with the similarity index, meaning that the higher the similarity to sugar, the lower the rating of bitterness and astringency. In addition, there was a trend (p < 0.1) toward a negative correlation of onset and lingering with the Tanimoto similarity index. Thus, also here, the higher the similarity to sugar, the lower the rating for both attributes of the temporal profile. Since the relative sweetness did not correlate with the similarity index to sucrose, it can be assumed that compounds can taste sweet independently of their structural similarity to sucrose. However, less similar compounds are more likely to have undesired bitter and astringent side-tastes, as well as an increased onset and lingering. Moreover, the results suggest that structural characteristics are important for the taste attributes. Thus, in a second step, a variety of physicochemical descriptors was evaluated, which are commonly used to differentiate the overall shape, size, degree of branching and flexibility of molecules as numerical values (Zhong et al., 2013). The calculation of the physicochemical descriptors is based on the 2D structure of a compound, and the descriptors were supplemented with values for relative sweetness and viscosity. In the present study, we focused on the structural driving forces for onset, lingering and the relative sweetness compared to a 5% sucrose solution.
For this purpose, the chemical information of each test compound was transformed into various numerical quantities derived from a symbolic representation of the molecule, serving as the IF. Such conformation-independent methods have been validated as an efficient alternative strategy to develop models based on constitutional and topological molecular characteristics of chemical compounds (Chéron et al., 2017, Ojha and Roy, 2018, Rojas et al., 2016, Rojas et al., 2017), avoiding that geometrical optimization of 3D conformers distorts the descriptor values. However, this is at the same time a limitation of this study, since physicochemical descriptors based on the 2D structure ignore the conformation of the test compounds, which might affect a compound’s binding to the receptor. To gain more insight into the role of the 3D structure of a compound, e.g. homology modelling based on the structure of the receptor is needed in further studies.
Fig. 2.
Pearson correlation of the RDKit-fingerprint Tanimoto similarity index to sucrose with the sensory rating of A: relative sweetness compared to 5% sucrose, B: onset, C: bitter, D: metallic, E: astringent and F: lingering.
To understand the driving forces of onset and lingering by a multivariate linear regression analysis with interactions, the multiple dependent and various independent variables were first aggregated into fewer factors by factor analysis (FA) with varimax rotation. The relative sweetness and ten sensory attributes were aggregated into five factors serving as dependent variables (Table S2), according to the strongest loading of an attribute on one factor. Each of the five dependent factors had an eigenvalue above 1.0, and together they explained 67.05% of the cumulative variance. The taste and aftertaste of each side-taste, metallic, bitter and astringent, were aggregated into the first three factors. Sweet taste, aftertaste and lingering are allocated to the fourth factor. The fifth factor combines the relative sweetness and onset. This reduction confirms the results of the clustering of the sensory attributes in Fig. 1, in which the taste and aftertaste of each attribute appear to be highly correlated. The reduction of the independent variables, the 18 physicochemical descriptors, resulted in three independent factors (IF) (Table S3), according to the strongest loading of a descriptor on one factor. Each of the three IF had an eigenvalue above 1.0, and together they explained 90.15% of the cumulative variance. Here, IF-1 consolidated most of the physicochemical descriptors, namely heavy atom count, molecular weight [g/mol], complexity, C-atoms, acceptors, bonded glucose, polar surface area [Å²], defined atom stereocenter count, donors, length of the glycone, rotatable bonds and OH-groups. IF-2 aggregates double bonds, ketones, aromatic rings and M logP, and IF-3 combines the length of the alkyl chain and viscosity.
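The aggregation rule described above, each variable represented by the factor on which it loads most strongly, amounts to a row-wise argmax over the absolute values of the rotated loading matrix. A sketch with invented loadings (not the values from Table S2/S3):

```python
# Assign each variable to the factor with the highest |loading|.
# Loading values are invented for illustration, not the study's FA output.
loadings = {
    "sweet taste":      [0.10, 0.05, 0.85],
    "sweet aftertaste": [0.20, 0.10, 0.78],
    "bitter taste":     [0.81, 0.15, 0.05],
    "metallic taste":   [0.30, 0.72, 0.10],
}

assignment = {
    var: max(range(len(l)), key=lambda i: abs(l[i])) + 1  # 1-based factor index
    for var, l in loadings.items()
}
print(assignment)
```

Variables that end up on the same factor (here, sweet taste and sweet aftertaste) are then carried into the regression as one aggregated dependent variable.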
The explanatory power of the multivariate linear regression analysis with interactions is shown in Table 2 and is defined by the FDR-LogWorth for each IF and their interactions. The explanatory power of the IF and interactions for each of the dependent factors is shown with the t ratio. FDR-LogWorth and t ratio were calculated within the multivariate linear regression analysis using JMP 14.0.0 and represent the strength of the influence on the model. The darker the background color of a value, the stronger the effect, whereas red colors indicate positive and blue colors indicate negative associations. The multivariate linear regression analysis with interactions revealed that IF-2, with its descriptors double bonds, ketones, aromatic rings and M logP, had the strongest explanatory power on the whole regression model with an FDR-LogWorth of 91.7, followed by the interaction of IF-1 and IF-2 with 16.4, IF-1 with 12.6, the interaction of IF-1, IF-2 and IF-3 with 11.1, and the interaction of IF-2 and IF-3 with 6.7 (see Table 2). Except for the interaction of IF-1 and IF-3, all interactions were significant (FDR p-value < 0.001), and with a value of 3.5, IF-3 had the lowest FDR-LogWorth. This clearly shows that interactions among physicochemical descriptors may influence the sensory attributes and thus the perception of the tested compounds, particularly the temporal profile of the sweet sensation. After having profiled the 30 test compounds with a sweetness equivalent to 5% sucrose for the selected sensory taste attributes, the influence of the aggregated IF-1, IF-2 and IF-3 on the aggregated taste attributes as dependent variables was explored. Therefore, a multivariate regression analysis with preceding aggregation of the dependent and independent variables to five dependent and three independent factors (IF) was carried out. The influence of the independent factors on the dependent sensory factors is depicted for each dependent factor separately in Fig. 3, and the related t ratios are shown in Table 2. The t ratio reflects the strength of a factor or of an interaction of several factors. Each interaction plot in a matrix shows the interaction of the row effect with the column effect for a dependent factor, demonstrating whether the impact of one factor depends on the value of another one. The analysis demonstrated that the impact of IF-1 on relative sweetness & onset changes as IF-2 increases, but is independent of the value of IF-3. Besides, the effect of IF-2 on sweetness & onset depends on the value of IF-3, and vice versa (see Fig. 3A). The analysis for relative sweetness & onset showed that IF-1 and IF-2 alone, as well as the interaction of IF-1 and IF-2, are positively associated with sweetness & onset, whereas IF-3 alone and the interaction of IF-2 and IF-3, as well as the interaction of all three IF, had a negative association (see Table 2). In addition, the analysis showed that IF-2 and IF-3 alone, as well as the interaction of IF-2 and IF-3, enhanced, whereas the interaction of IF-1 and IF-2 suppressed, the intensity of bitterness (see Table 2). In contrast, the rating of metallic was only influenced by IF-1, IF-2 and IF-3 alone, but not by interactions (see Fig. 3C and Table 2). An increased astringency was associated with IF-2, in addition to the interaction of IF-2 and IF-3 and the interaction of IF-1, IF-2 and IF-3 (see Table 2 and Fig. 3D). Sweet taste and sweet lingering were enhanced with increasing values of IF-1 and IF-3, but not by any interactions (see Fig. 3E and Table 2). This seems reasonable, because the sweet taste was adjusted to 5% sucrose, and lingering was correlated with the relative sweetness. Overall, the analysis demonstrated a pivotal role for the number of double bonds, ketones, aromatic rings and the M logP. This is also demonstrated by the t ratio, reflecting the strength of a factor or interaction, which is largest for IF-2 for relative sweetness & onset (see Table 2).
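The statement that "the impact of IF-1 changes as IF-2 increases" corresponds to an interaction term in the regression: with y = b0 + b1·x1 + b2·x2 + b12·x1·x2, the marginal effect of x1 is b1 + b12·x2, i.e. it depends on x2 whenever b12 ≠ 0. A sketch with illustrative coefficients (not the fitted JMP values):

```python
# Linear model with an interaction term:
#   y = b0 + b1*x1 + b2*x2 + b12*x1*x2
# Marginal effect of x1: dy/dx1 = b1 + b12*x2, so it varies with x2.
# Coefficients are illustrative placeholders, not the study's estimates.
b0, b1, b2, b12 = 0.5, 1.2, 0.8, -0.6

def predict(x1, x2):
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2

def marginal_effect_x1(x2):
    return b1 + b12 * x2

print(marginal_effect_x1(0.0))  # full effect of x1 when x2 = 0
print(marginal_effect_x1(2.0))  # effect cancels out when x2 = 2
```

This is exactly what the interaction plots in Fig. 3 visualize: non-parallel lines across panels indicate that one factor's slope shifts with the level of another.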
Table 2.
LogWorth of independent factors (IF) on the whole model and t ratios of main and interaction effects of IF-1, IF-2 and IF-3 on the aggregated sensory attributes bitter, metallic, astringent, sweet & lingering and SF & onset in sweet tasting compounds. Depending on a positive (red) or negative (blue) t ratio, the interaction effect on a dependent factor is positive or negative. Calculated by a multiple regression analysis after aggregation of the dependent variables to 5 factors and the independent variables to 3 factors. Significant p values are labelled with * for p < 0.05, ** for p < 0.01 and *** for p < 0.001.
Fig. 3.
Regression plots visualizing the interactions between the independent factors (IF-1, IF-2, and IF-3) for each dependent factor (relative sweetness & onset, bitter, metallic, astringent and sweet & lingering). Each interaction plot in a matrix shows the interaction of the row effect with the column effect for the dependent factor. This was calculated by a multiple regression analysis after aggregation of the dependent and independent variables to 5 and 3 factors with JMP. Significant interactions between the IF are labelled with (*) for p < 0.1, * for p < 0.05, ** for p < 0.01 and *** for p < 0.001.
Zhong et al. (2013) found a correlation between the aqueous solubility, which is related to the M logP value, and the sweetness, which is supported by the finding of the present analysis that a higher relative sweetness correlates with M logP. Sweet taste chemoreceptors in the oral cavity are covered mainly by water-based saliva; hence, solubility is thought to play an important role in sweet taste perception (Behrens et al., 2011, Meyerhof, 2015). The logP value was recognized quite early as an important descriptor for structure–sweetness relationships (Deutsch & Hansch, 1966), but so far it has not been associated with a delay in the onset of the sweet sensation. Clemens et al. (2016) summarized that the relative sweetness of sugars was associated with attached groups, especially hydroxyl groups as part of the stereochemical configuration. In our analysis, no correlations between the relative sweetness and attached glucose, the length of alkyl chains or hydroxyl groups were detected. When focusing on the relationship of structure and sweetness of steviol glycosides, the C16–C17 part was identified to be essential for the sweetness (Hellfritsch et al., 2012, Upreti et al., 2012). Furthermore, Hellfritsch et al. (2012) also discovered that the glycone chain length and the pyranose substitution are responsible for the differences in the taste profile of steviol glycosides. However, when comparing several structurally strikingly different sweeteners, and not only steviol glycosides, the similarities rather than the structural differences of the steviol glycosides predominate. It can be assumed that, due to their structural similarity, all steviol glycosides interact with the same binding site. This is supported by the fact that most of the steviol glycosides tested in the present study (substances 15–18 in Table 1) were joined together in one sensory-based cluster (see Fig. 1). This gives rise to the hypothesis that, for the steviol glycosides tested here, the undesired side-tastes, as well as onset and lingering of the sweet sensation, are based on the core structure of steviol glycosides rather than on the variable side chains.
Based on the results of the present study, we hypothesize that a longer-lasting lingering is associated with more complex and heavier molecules, which might be based on the receptor binding. Tan et al. (2019) likewise concluded that an enhanced lingering is the result of higher affinities of the non-nutritive sweeteners to the binding sites of the taste receptor. Similarly, a delay in onset could be due to an inferior fit of a compound to the respective binding site. This would explain the finding that more rigid double bonds, ketones and aromatic rings were associated with a high onset of the sweet sensation. Acevedo and Temussi (2019) suggested that one of the main reasons for unpleasant aftertaste is an interaction of sweeteners with the umami receptor. Furthermore, they reviewed that some sweeteners can also be recognized by other receptors, e.g. bitter and umami receptors, which can contribute to an unpleasant side-taste (Acevedo & Temussi, 2019). Hence, we hypothesize that there are some similarities among compounds binding to the same receptor, as shown by the correlation and interaction analysis of the present study, including the umami receptor. Moreover, Acevedo, Ramirez-Sarmiento, and Agosin (2018) showed that the electrostatic potential is important for the interaction of sweet proteins with the sweet taste receptor, as well as the stabilization of the receptor by formation of hydrogen bonds, for example through the occurrence of sugars in the structures (Acevedo et al., 2018), which is represented by IF-1 in this work. However, the actual sweetening potency cannot necessarily be inferred from the binding affinity. By analyzing the sweetness of isovanillyl derivatives, Bassoli, Merlini, and Morini (2002) associated a 6-membered ring with two oxygen atoms in positions 1,3 with a more intense sweetness. In the analysis of the present study, IF-1, to which the OH-groups belong, is positively correlated with the relative sweetness and the onset of a compound. Additionally, IF-2, to which the aromatic rings belong, is also positively correlated with relative sweetness and onset, but here no interaction was found, in contrast to what Bassoli et al. (2002) showed for the group of isovanillyl sweeteners. Thus, a group-specific structure–activity relationship, depending also on the different binding sites of the receptor, is supported by the results of the present study.
4. Conclusions
In the present study, a variety of sweet taste or sweet taste affecting compounds was used in a comparative sensory evaluation in order to analyze structural characteristics leading to differences in the time-intensity profile and undesired side-tastes. Our results show that the taste is highly correlated with the aftertaste, and that a lower structural similarity to sucrose results in enhanced bitterness and astringency, as well as a trend toward increased onset and lingering. In addition, we demonstrate here for the first time that interactions between several physicochemical descriptors explain the relative sweetness and onset, providing an enhanced understanding of the molecular basis of temporal sensory perception. The prediction of time-intensity profiles and of undesired side-tastes of sweet and sweet taste affecting compounds has not been considered in previous models; the present study provides a starting point for improving those models in future studies, in particular by taking interactions into account.
CRediT authorship contribution statement
Corinna M. Karl: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing - original draft, Writing - review & editing, Visualization. Martin Wendelin: Conceptualization, Methodology, Software, Validation, Resources. Dariah Lutsch: Software, Validation, Formal analysis, Visualization. Gerhard Schleining: Methodology, Resources. Klaus Dürrschmid: Conceptualization, Methodology. Jakob P. Ley: Conceptualization, Methodology, Resources, Supervision, Funding acquisition. Gerhard E. Krammer: Resources, Supervision, Funding acquisition. Barbara Lieder: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Resources, Data curation, Writing - original draft, Writing - review & editing, Supervision, Project administration, Funding acquisition.
Declaration of Competing Interest
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: The authors M. Wendelin, D. Lutsch, J. P. Ley and G. E. Krammer are employees of the company Symrise AG.
Acknowledgments
The financial support by the Austrian Federal Ministry for Digital and Economic Affairs, the Austrian National Foundation for Research, Technology, and Development, the Christian Doppler Research Association, Austria, and the Symrise AG is gratefully acknowledged. The authors thank Anja Eiing (Symrise AG, Holzminden, Germany) for her assistance with the panel check.
Appendix A. Supplementary data
The following are the Supplementary data to this article:
Supplementary data 1
mmc1.pdf (873.6KB, pdf)
References
Acevedo W., Ramirez-Sarmiento C.A., Agosin E. Identifying the interactions between natural, non-caloric sweeteners and the human sweet receptor by molecular docking. Food Chemistry. 2018;264:164–171. doi: 10.1016/j.foodchem.2018.04.113.
Acevedo W., Temussi P.A. The origin of unpleasant aftertastes in synthetic sweetener: A hypothesis. Frontiers in Molecular BioScience. 2019;5(119). doi: 10.3389/fmolb.2018.00119.
Bassoli A., Merlini L., Morini G. Isovanillyl sweeteners. From molecules to receptors. Pure and Applied Chemistry. 2002;74(7):1181–1187. doi: 10.1351/pac200274071181.
Behrens M., Meyerhof W., Hellfritsch C., Hofmann T. Sweet and umami taste: Natural products, their chemosensory targets, and beyond. Angewandte Chemie International Edition. 2011;50(10):2220–2242. doi: 10.1002/anie.201002094.
Ben Shoshan-Galeczki Y., Niv M.Y. Structure-based screening for discovery of sweet compounds. Food Chemistry. 2020;315. doi: 10.1016/j.foodchem.2020.126286.
Busch-Stockfisch M. Prüferauswahl und Prüferschulung. In: Busch-Stockfisch M., editor. Sensorik kompakt in der Produktentwicklung und Qualitätssicherung. B. Behr's Verlag GmbH & Co. KG; Hamburg, Germany: 2015.
Chéron J.B., Casciuc I., Golebiowski J., Antonczak S., Fiorucci S. Sweetness prediction of natural compounds. Food Chemistry. 2017;221:1421–1425. doi: 10.1016/j.foodchem.2016.10.145.
Chéron J.B., Golebiowski J., Antonczak S., Fiorucci S. The anatomy of mammalian sweet taste receptors. Proteins. 2016;85(2):332–341. doi: 10.1002/prot.25228.
Clemens R.A., Jones J.M., Kern M., Lee S.-Y., Mayhew E.J., Slavin J.L., Zivanovic S. Functionality of sugars in foods and health. Comprehensive Reviews in Food Science and Food Safety. 2016;15(3):433–470. doi: 10.1111/1541-4337.12194.
Deutsch E.W., Hansch C. Dependence of relative sweetness on hydrophobic bonding. Nature. 1966;211:75. doi: 10.1038/211075a0.
DIN-EN-ISO. Sensory analysis - General guidelines for the selection, training and monitoring of selected assessors and expert sensory assessors. German version, DIN EN ISO 8586:2014-05. Geneva, Switzerland: 2014.
DIN-NORM. Bestimmung der Geschmacksempfindlichkeit. Beuth; Berlin, Germany: 1998.
DuBois G.E. Molecular mechanism of sweetness sensation. Physiology & Behavior. 2016;164:453–463. doi: 10.1016/j.physbeh.2016.03.015.
DuBois G.E., Crosby G.A., Stephenson R.A., Wingard R.E., Jr. Dihydrochalcone sweeteners. Synthesis and sensory evaluation of sulfonate derivatives. Journal of Agricultural and Food Chemistry. 1977;25(4):763–772. doi: 10.1021/jf60212a056.
DuBois G.E., Prakash I. Non-caloric sweeteners, sweetness modulators, and sweetener enhancers. Annual Review of Food Science and Technology. 2012;3:353–380. doi: 10.1146/annurev-food-022811-101236.
Gakidou E., Afshin A., Abajobir A.A., Abate K.H., Abbafati C., Abbas K.M.…Murray C.J.L. Global, regional, and national comparative risk assessment of 84 behavioural, environmental and occupational, and metabolic risks or clusters of risks, 1990–2016: A systematic analysis for the Global Burden of Disease Study 2016. The Lancet. 2017;390(10100):1345–1422. doi: 10.1016/S0140-6736(17)32366-8.
Goel A., Gajula K., Gupta R., Rai B. In-silico prediction of sweetness using structure-activity relationship models. Food Chemistry. 2018;253:127–131. doi: 10.1016/j.foodchem.2018.01.111.
Hellfritsch C., Brockhoff A., Stahler F., Meyerhof W., Hofmann T. Human psychometric and taste receptor responses to steviol glycosides. Journal of Agricultural and Food Chemistry. 2012;60(27):6782–6793. doi: 10.1021/jf301297n.
Kuhn C., Bufe B., Winnig M., Hofmann T., Frank O., Behrens M.…Meyerhof W. Bitter taste receptors for saccharin and acesulfame K. Journal of Neuroscience. 2004;24(45):10260–10265. doi: 10.1523/JNEUROSCI.1225-04.2004.
Li X., Staszewski L., Xu H., Durick K., Zoller M., Adler E. Human receptors for sweet and umami taste. PNAS. 2002;99(7):4692–4696. doi: 10.1073/pnas.072090199.
Lipinski C.A., Lombardo F., Dominy B.W., Feeney P.J. Experimental and computational approaches to estimate solubility and permeability in drug discovery and development settings. Advanced Drug Delivery. 1997;23:3–25. doi: 10.1016/S0169-409X(96)00423-1.
Lustig R.H., Schmidt L.A., Brindis C.D. Public health: The toxic truth about sugar. Nature. 2012;482(7383):27–29. doi: 10.1038/482027a.
Masuda K., Koizumi A., Nakajima K., Tanaka T., Abe K., Misaka T., Ishiguro M. Characterization of the modes of binding between human sweet taste receptor and low-molecular-weight sweet compounds. PLoS ONE. 2012;7(4). doi: 10.1371/journal.pone.0035380.
Meyerhof W. Sensorische Grundlagen - Geschmack. In: Busch-Stockfisch M., editor. Sensorik kompakt in der Produktentwicklung und Qualitätssicherung. B. Behr's Verlag GmbH & Co. KG; Hamburg, Germany: 2015.
Montmayeur J.P., Liberles S.D., Matsunami H., Buck L.B. A candidate taste receptor gene near a sweet taste locus. Nature Neuroscience. 2001;4(5):492–498. doi: 10.1038/87440.
Morini G., Bassoli A., Temussi P.A. From small sweeteners to sweet proteins: anatomy of the binding sites of the human T1R2_T1R3 receptor. Journal of Medicinal Chemistry. 2005;48:5520–5529. doi: 10.1021/jm0503345.
Nelson G., Hoon M.A., Chandrashekar J., Zhang Y., Ryba N.J.P., Zuker C.S. Mammalian sweet taste receptors. Cell. 2001;106(3):381–390. doi: 10.1016/S0092-8674(01)00451-2.
Ojha P.K., Roy K. Development of a robust and validated 2D-QSPR model for sweetness potency of diverse functional organic molecules. Food and Chemical Toxicology. 2018;112:551–562. doi: 10.1016/j.fct.2017.03.043.
Reyes M.M., Castura J.C., Hayes J.E. Characterizing dynamic sensory properties of nutritive and nonnutritive sweeteners with temporal check-all-that-apply. Journal of Sensory Studies. 2017;32(3). doi: 10.1111/joss.12270.
Rojas C., Ballabio D., Consonni V., Tripaldi P., Mauri A., Todeschini R. Quantitative structure–activity relationships to predict sweet and non-sweet tastes. Theoretical Chemistry Accounts. 2016;135(66):13. doi: 10.1007/s00214-016-1812-1.
Rojas C., Todeschini R., Ballabio D., Mauri A., Consonni V., Tripaldi P., Grisoni F. A QSTR-based expert system to predict sweetness of molecules. Frontiers in Chemistry. 2017;5:53. doi: 10.3389/fchem.2017.00053.
Shallenberger R.S., Acree T.E. Molecular structure and sweet taste. Journal of Agricultural and Food Chemistry. 1969;17(4):701–703. doi: 10.1021/jf60164a032.
Spaggiari G., Di Pizio A., Cozzini P. Sweet, umami and bitter taste receptors: State of the art of in silico molecular modeling approaches. Trends in Food Science & Technology. 2020;96:21–29. doi: 10.1016/j.tifs.2019.12.002.
Sukumaran S.K., Yee K.K., Iwata S., Kotha R., Quezada-Calvillo R., Nichols B.L.…Margolskee R.F. Taste cell-expressed alpha-glucosidase enzymes contribute to gustatory responses to disaccharides. PNAS. 2016;113(21):6035–6040. doi: 10.1073/pnas.1520843113.
Sylvetsky A.C., Rother K.I. Trends in the consumption of low-calorie sweeteners. Physiology & Behavior. 2016;164:446–450. doi: 10.1016/j.physbeh.2016.03.030.
Tan V.W.K., Wee M.S.M., Tomic O., Forde C.G. Temporal sweetness and side tastes profiles of 16 sweeteners using temporal check-all-that-apply (TCATA). Food Research International. 2019;121:39–47. doi: 10.1016/j.foodres.2019.03.019.
Upreti M., Dubois G., Prakash I. Synthetic study on the relationship between structure and sweet taste properties of steviol glycosides. Molecules. 2012;17(4):4186–4196. doi: 10.3390/molecules17044186.
Yang X., Chong Y., Yan A., Chen J. In-silico prediction of sweetness of sugars and sweeteners. Food Chemistry. 2011;128(3):653–658. doi: 10.1016/j.foodchem.2011.03.081.
Yee K.K., Sukumaran S.K., Kotha R., Gilbertson T.A., Margolskee R.F. Glucose transporters and ATP-gated K+ (KATP) metabolic sensors are present in type 1 taste receptor 3 (T1r3)-expressing taste cells. PNAS. 2011;108(13):5431–5436. doi: 10.1073/pnas.1100495108.
Zhong M., Chong Y., Nie X., Yan A., Yuan Q. Prediction of sweetness by multilinear regression analysis and support vector machine. Journal of Food Science. 2013;78(9):S1445–1450. doi: 10.1111/1750-3841.12199.
Articles from Food Chemistry: X are provided here courtesy of Elsevier
189585 | https://ocw.mit.edu/courses/10-37-chemical-and-biological-reaction-engineering-spring-2007/pages/lecture-notes/ |
Course Info
Instructors
Prof. K. Dane Wittrup
Prof. William Green, Jr.
Departments
Chemical Engineering
As Taught In
Spring 2007
Level
Undergraduate
Topics
Engineering
Biological Engineering
Chemical Engineering
Science
Chemistry
Physical Chemistry
Learning Resource Types
Problem Sets with Solutions
Exams with Solutions
Lecture Notes
Activity Assignments with Examples
10.37 | Spring 2007 | Undergraduate
Chemical and Biological Reaction Engineering
Lecture Notes
These lecture notes were prepared by Tiffany Iaconis, Frederick Jao, and Vicky Loewer for MIT OpenCourseWare. They are preliminary and may contain errors.
Instructors:
WHG = William H. Green; KDW = K. Dane Wittrup
| LEC # | TOPICS |
| --- | --- |
| 1 | Preliminaries and remembrance of things past. Reaction stoichiometry, lumped stoichiometries in complex systems such as bioconversions and cell growth (yields); extent of reaction, independence of reactions, measures of concentration. Single reactions and reaction networks, bioreaction pathways. (WHG) (PDF) |
| 2 | The reaction rate and reaction mechanisms: Definition in terms of reacting compounds and reaction extent; rate laws, Arrhenius equation, elementary, reversible, non-elementary, catalytic reactions. (WHG) (PDF) |
| 3 | Kinetics of cell growth and enzymes. Cell growth kinetics; substrate uptake and product formation in microbial growth; enzyme kinetics, Michaelis-Menten rate form. (KDW) (PDF) |
| 4 | Reaction mechanisms and rate laws: Reactive intermediates and steady state approximation in reaction mechanisms. Rate-limiting step. Chain reactions. Pyrolysis reactions. (WHG) (PDF) |
| 5 | Continuous stirred tank reactor (CSTR). Reactions in a perfectly stirred tank. Steady-state CSTR. (KDW) (PDF) |
| 6 | Concentration that optimizes desired rate. Selectivity vs. Conversion. Combining reactors with separations. (WHG) (PDF) Lecture 6 correction (PDF) |
| 7 | Batch reactor: Equations, reactor sizing for constant volume and variable volume processes. (KDW) (PDF) |
| 8 | The plug flow reactor. (WHG) (PDF) |
| 9 | Reactor size comparisons for PFR and CSTR. Reactors in series and in parallel. How choice of reactor affects selectivity vs. conversion. (KDW) (PDF) |
| 10 | Non-ideal reactor mixing patterns. Residence time distribution. Tanks in series model. Combinations of ideal reactors. (KDW) (PDF) |
| 11 | Non isothermal reactors. Equilibrium limitations, stability. Derivation of energy balances for ideal reactors; equilibrium conversion, adiabatic and nonadiabatic reactor operation. (WHG) (PDF) |
| 12 | Data collection and analysis. Experimental methods for the determination of kinetic parameters of chemical and enzymatic reactions; determination of cell growth parameters; statistical analysis and model discrimination. (WHG) (PDF) |
| 13 | Biological reactors - chemostats. Theory of the chemostat. Fed batch or semi-continuous fermentor operation. (KDW) (PDF) |
| 14 | Kinetics of non-covalent bimolecular interactions. Significance; typical values and diffusion limit; approach to equilibrium; multivalency. (KDW) (PDF) |
| 15 | Gene expression and trafficking dynamics. Approach to steady state; receptor trafficking. (KDW) (PDF) |
| 16 | Catalysis. Inorganic and enzyme catalysts and their properties; kinetics of heterogeneous catalytic reactions; adsorption isotherms, derivation of rate laws; Langmuir-Hinshelwood kinetics. (WHG) (PDF) |
| 17 | Mass transfer resistances. External diffusion effects. Non-porous packed beds and monoliths, immobilized cells. (WHG) (PDF) |
| 18 | External mass-transfer resistance: Gas-liquid reactions in multiphase systems. (KDW) (PDF) |
| 19 | Oxygen transfer in fermentors. Applications of gas-liquid transport with reaction. (KDW) (PDF) |
| 20 | Reaction and diffusion in porous catalysts. Effective diffusivity, internal and overall effectiveness factor, Thiele modulus, apparent reaction rates. (KDW) (PDF) |
| 21 | Reaction and diffusion in porous catalysts (cont.). Packed bed reactors. (WHG) (PDF) |
| 22 | Combined internal and external transport resistances. (WHG) (PDF) Biot numbers review. (PDF) (Courtesy of David Adrian. Used with permission.) |
| 23 | Pulling it all together; applications to energy/chemicals industry. Presentation of current research. (WHG) |
| 24 | Pulling it all together; applications to bioengineering and medicine. Presentation of current research. (KDW) |
| 25 | Course review. (WHG) (PDF) |
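As a quick illustration of one recurring topic in the table above (the Michaelis-Menten rate form from Lectures 3 and 12), here is a minimal sketch. It is not part of the course materials, and the parameter values used below are arbitrary:

```python
def michaelis_menten(S, Vmax, Km):
    # Michaelis-Menten rate law: v = Vmax * S / (Km + S)
    return Vmax * S / (Km + S)

# At substrate concentration S = Km the rate is exactly Vmax / 2,
# which is the defining property of the Michaelis constant Km.
half_rate = michaelis_menten(2.0, 10.0, 2.0)
```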
|
189586 | https://www.chegg.com/homework-help/questions-and-answers/consider-following-int-1-6-frac-ln-x-x-d-x-let-u-ln-x--find-d-u--d-v-pi-indicate-limits-in-q124241823 | Solved: Consider the following: ∫₁⁶ (ln(x)/x) dx. Let u = ln(x). Find du. | Chegg.com
Consider the following: ∫₁⁶ (ln(x)/x) dx. Let u = ln(x). Find du. Indicate how the limits of integration should be adjusted in order to perform the integration with respect to u. (Enter your answer using interval notation.) Evaluate the definite integral.
There are 2 steps to solve this one.
Step 1: Given ∫₁⁶ (ln x / x) dx, let u = ln x; then du = (1/x) dx. The limits of integration change accordingly: u(1) = ln 1 = 0 and u(6) = ln 6, so the u-integral runs over [0, ln 6].
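As an aside (not part of the original page), the substitution can be verified numerically: after u = ln x, du = dx/x, the integral becomes ∫₀^{ln 6} u du = (ln 6)²/2. A simple midpoint-rule quadrature of the original integrand agrees:

```python
import math

def integral_ln_over_x(a, b, n=100000):
    # Composite midpoint rule for the integral of ln(x)/x on [a, b].
    h = (b - a) / n
    return sum(math.log(a + (i + 0.5) * h) / (a + (i + 0.5) * h)
               for i in range(n)) * h

# Closed form obtained via the substitution u = ln x.
exact = math.log(6) ** 2 / 2
```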
|
189587 | https://math.libretexts.org/Bookshelves/Linear_Algebra/Linear_Algebra_with_Applications_(Nicholson)/06%3A_Vector_Spaces/6.01%3A_Examples_and_Basic_Properties |
6.1: Examples and Basic Properties
Last updated: Jul 25, 2023
Page ID: 58862
W. Keith Nicholson
University of Calgary via Lyryx Learning
Many mathematical entities have the property that they can be added and multiplied by a number. Numbers themselves have this property, as do m×n matrices: The sum of two such matrices is again m×n as is any scalar multiple of such a matrix. Polynomials are another familiar example, as are the geometric vectors in Chapter [chap:4]. It turns out that there are many other types of mathematical objects that can be added and multiplied by a scalar, and the general study of such systems is introduced in this chapter. Remarkably, much of what we could say in Chapter [chap:5] about the dimension of subspaces in Rⁿ can be formulated in this generality.
Vector Spaces 017629 A vector space consists of a nonempty set V of objects (called vectors) that can be added, that can be multiplied by a real number (called a scalar in this context), and for which certain axioms hold. If v and w are two vectors in V, their sum is expressed as v+w, and the scalar product of v by a real number a is denoted as av. These operations are called vector addition and scalar multiplication, respectively, and the following axioms are assumed to hold.
Axioms for vector addition
If u and v are in V, then u+v is in V.
u+v = v+u for all u and v in V.
u+(v+w) = (u+v)+w for all u, v, and w in V.
An element 0 in V exists such that v+0 = v = 0+v for every v in V.
For each v in V, an element −v in V exists such that −v+v = 0 and v+(−v) = 0.
Axioms for scalar multiplication
If v is in V, then av is in V for all a in R.
a(v+w) = av+aw for all v and w in V and all a in R.
(a+b)v = av+bv for all v in V and all a and b in R.
a(bv) = (ab)v for all v in V and all a and b in R.
1v = v for all v in V.
The content of axioms A1 and S1 is described by saying that V is closed under vector addition and scalar multiplication. The element 0 in axiom A4 is called the zero vector, and the vector −v in axiom A5 is called the negative of v.
The rules of matrix arithmetic, when applied to Rⁿ, give
017663 Rⁿ is a vector space using matrix addition and scalar multiplication.
It is important to realize that, in a general vector space, the vectors need not be n-tuples as in Rⁿ. They can be any kind of objects at all as long as the addition and scalar multiplication are defined and the axioms are satisfied. The following examples illustrate the diversity of the concept.
The space Rⁿ consists of special types of matrices. More generally, let Mₘₙ denote the set of all m×n matrices with real entries. Then Theorem [thm:002170] gives:
017672 The set Mₘₙ of all m×n matrices is a vector space using matrix addition and scalar multiplication. The zero element in this vector space is the zero matrix of size m×n, and the vector space negative of a matrix (required by axiom A5) is the usual matrix negative discussed in Section [sec:2_1]. Note that Mₘₙ is just Rᵐⁿ in different notation.
In Chapter [chap:5] we identified many important subspaces of Rⁿ such as im A and null A for a matrix A. These are all vector spaces.
017680 Show that every subspace of Rⁿ is a vector space in its own right using the addition and scalar multiplication of Rⁿ.
Axioms A1 and S1 are two of the defining conditions for a subspace U of Rⁿ (see Section [sec:5_1]). The other eight axioms for a vector space are inherited from Rⁿ. For example, if x and y are in U and a is a scalar, then a(x+y) = ax+ay because x and y are in Rⁿ. This shows that axiom S2 holds for U; similarly, the other axioms also hold for U.
017691 Let V denote the set of all ordered pairs (x,y) and define addition in V as in R². However, define a new scalar multiplication in V by
a(x,y) = (ay, ax)
Determine if V is a vector space with these operations.
Axioms A1 to A5 are valid for V because they hold for matrices. Also a(x,y) = (ay,ax) is again in V, so axiom S1 holds. To verify axiom S2, let v = (x,y) and w = (x₁,y₁) be typical elements in V and compute
a(v+w) = a(x+x₁, y+y₁) = (a(y+y₁), a(x+x₁))
av+aw = (ay, ax) + (ay₁, ax₁) = (ay+ay₁, ax+ax₁)
Because these are equal, axiom S2 holds. Similarly, the reader can verify that axiom S3 holds. However, axiom S4 fails because
a(b(x,y)) = a(by, bx) = (abx, aby)
need not equal ab(x,y) = (aby, abx). Hence, V is not a vector space. (In fact, axiom S5 also fails.)
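This failure is easy to confirm numerically. The following sketch (an illustration added here, not part of the text) implements the modified scalar multiplication and exhibits the axiom-S4 counterexample:

```python
def smul(a, v):
    # The modified scalar multiplication: a(x, y) = (ay, ax)
    x, y = v
    return (a * y, a * x)

v = (1.0, 2.0)
a, b = 2.0, 3.0

lhs = smul(a, smul(b, v))   # a(bv): the swap happens twice, giving (ab*x, ab*y)
rhs = smul(a * b, v)        # (ab)v: the swap happens once, giving (ab*y, ab*x)
# lhs != rhs whenever x != y, so axiom S4 fails.
```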
Sets of polynomials provide another important source of examples of vector spaces, so we review some basic facts. A polynomial in an indeterminate x is an expression
p(x) = a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ
where a₀, a₁, a₂, …, aₙ are real numbers called the coefficients of the polynomial. If all the coefficients are zero, the polynomial is called the zero polynomial and is denoted simply as 0. If p(x) ≠ 0, the highest power of x with a nonzero coefficient is called the degree of p(x), denoted deg p(x). The coefficient itself is called the leading coefficient of p(x). Hence deg(3+5x) = 1, deg(1+x+x²) = 2, and deg(4) = 0. (The degree of the zero polynomial is not defined.)
Let P denote the set of all polynomials and suppose that
p(x) = a₀ + a₁x + a₂x² + ⋯
q(x) = b₀ + b₁x + b₂x² + ⋯
are two polynomials in P (possibly of different degrees). Then p(x) and q(x) are called equal [written p(x) = q(x)] if and only if all the corresponding coefficients are equal—that is, a₀ = b₀, a₁ = b₁, a₂ = b₂, and so on. In particular, a₀ + a₁x + a₂x² + ⋯ = 0 means that a₀ = 0, a₁ = 0, a₂ = 0, …, and this is the reason for calling x an indeterminate. The set P has an addition and scalar multiplication defined on it as follows: if p(x) and q(x) are as before and a is a real number,
p(x) + q(x) = (a₀+b₀) + (a₁+b₁)x + (a₂+b₂)x² + ⋯
ap(x) = aa₀ + (aa₁)x + (aa₂)x² + ⋯
Evidently, these are again polynomials, so P is closed under these operations, called pointwise addition and scalar multiplication. The other vector space axioms are easily verified, and we have
017729 The set P of all polynomials is a vector space with the foregoing addition and scalar multiplication. The zero vector is the zero polynomial, and the negative of a polynomial p(x) = a₀ + a₁x + a₂x² + ⋯ is the polynomial −p(x) = −a₀ − a₁x − a₂x² − ⋯ obtained by negating all the coefficients.
There is another vector space of polynomials that will be referred to later.
017741 Given n ≥ 1, let Pₙ denote the set of all polynomials of degree at most n, together with the zero polynomial. That is
Pₙ = {a₀ + a₁x + a₂x² + ⋯ + aₙxⁿ ∣ a₀, a₁, a₂, …, aₙ in R}.
Then Pₙ is a vector space. Indeed, sums and scalar multiples of polynomials in Pₙ are again in Pₙ, and the other vector space axioms are inherited from P. In particular, the zero vector and the negative of a polynomial in Pₙ are the same as those in P.
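As an illustrative aside (not part of the text), the operations on P and Pₙ can be modeled concretely with coefficient lists, stored lowest degree first; the list representation is an assumption made for this sketch:

```python
def poly_add(p, q):
    # Pointwise addition of coefficient lists (lowest degree first),
    # padding the shorter list with zero coefficients.
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_smul(a, p):
    # Scalar multiplication: multiply every coefficient by a.
    return [a * c for c in p]
```

For example, (1 + 2x) + 3x² has coefficient list [1, 2, 3], and distributivity (axiom S2) can be spot-checked by computing a(p + q) and ap + aq on sample inputs.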
If a and b are real numbers and a < b, the interval [a,b] is defined to be the set of all real numbers x such that a ≤ x ≤ b. A (real-valued) function f on [a,b] is a rule that associates to every number x in [a,b] a real number denoted f(x). The rule is frequently specified by giving a formula for f(x) in terms of x. For example, f(x) = 2x, f(x) = sin x, and f(x) = x² + 1 are familiar functions. In fact, every polynomial p(x) can be regarded as the formula for a function p.
The set of all functions on [a,b] is denoted F[a,b]. Two functions f and g in F[a,b] are equal if f(x)=g(x) for every x in [a,b], and we describe this by saying that f and g have the same action. Note that two polynomials are equal in P (defined prior to Example [exa:017729]) if and only if they are equal as functions.
If f and g are two functions in F[a,b], and if r is a real number, define the sum f+g and the scalar product rf by
(f+g)(x) = f(x) + g(x) for each x in [a,b]
(rf)(x) = rf(x) for each x in [a,b]
In other words, the action of f+g upon x is to associate x with the number f(x)+g(x), and rf associates x with rf(x). The sum of f(x) = x² and g(x) = −x is shown in the diagram. These operations on F[a,b] are called pointwise addition and scalar multiplication of functions and they are the usual operations familiar from elementary algebra and calculus.
017760 The set F[a,b] of all functions on the interval [a,b] is a vector space using pointwise addition and scalar multiplication. The zero function (in axiom A4), denoted 0, is the constant function defined by
0(x)=0 for each x in [a,b]
The negative of a function f is denoted −f and has action defined by
(−f)(x)=−f(x) for each x in [a,b]
Axioms A1 and S1 are clearly satisfied because, if f and g are functions on [a,b], then f+g and rf are again such functions. The verification of the remaining axioms is left as Exercise [ex:6_1_14].
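The pointwise operations translate directly into code. In this sketch (an illustration, not part of the text) functions are plain Python callables, and the returned closures realize f+g and rf:

```python
def fadd(f, g):
    # Pointwise sum: (f + g)(x) = f(x) + g(x)
    return lambda x: f(x) + g(x)

def fsmul(r, f):
    # Pointwise scalar multiple: (r f)(x) = r * f(x)
    return lambda x: r * f(x)

f = lambda x: x ** 2   # the f of the diagram
g = lambda x: -x       # the g of the diagram
h = fadd(f, g)         # h(x) = x^2 - x

# f + (-1)f is the zero function of axiom A4: it sends every x to 0.
zero = fadd(f, fsmul(-1, f))
```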
Other examples of vector spaces will appear later, but these are sufficiently varied to indicate the scope of the concept and to illustrate the properties of vector spaces to be discussed. With such a variety of examples, it may come as a surprise that a well-developed theory of vector spaces exists. That is, many properties can be shown to hold for all vector spaces and hence hold in every example. Such properties are called theorems and can be deduced from the axioms. Here is an important example.
Cancellation 017768 Let u, v, and w be vectors in a vector space V. If v+u = v+w, then u = w.
We are given v+u=v+w. If these were numbers instead of vectors, we would simply subtract v from both sides of the equation to obtain u=w. This can be accomplished with vectors by adding −v to both sides of the equation. The steps (using only the axioms) are as follows:
v + u = v + w
−v + (v + u) = −v + (v + w)   (axiom A5)
(−v + v) + u = (−v + v) + w   (axiom A3)
0 + u = 0 + w   (axiom A5)
u = w   (axiom A4)
This is the desired conclusion.¹
As with many good mathematical theorems, the technique of the proof of Theorem [thm:017768] is at least as important as the theorem itself. The idea was to mimic the well-known process of numerical subtraction in a vector space V as follows: To subtract a vector v from both sides of a vector equation, we added −v to both sides. With this in mind, we define difference u−v of two vectors in V as
u−v=u+(−v)
We shall say that this vector is the result of having subtracted v from u and, as in arithmetic, this operation has the property given in Theorem [thm:017781].
017781 If u and v are vectors in a vector space V, the equation
x+v=u
has one and only one solution x in V given by
x=u−v
The difference x=u−v is indeed a solution to the equation because (using several axioms)
x+v=(u−v)+v=[u+(−v)]+v=u+(−v+v)=u+0=u
To see that this is the only solution, suppose x₁ is another solution so that x₁+v = u. Then x+v = x₁+v (they both equal u), so x = x₁ by cancellation.
Similarly, cancellation shows that there is only one zero vector in any vector space and only one negative of each vector (Exercises [ex:6_1_10] and [ex:6_1_11]). Hence we speak of the zero vector and the negative of a vector.
The next theorem derives some basic properties of scalar multiplication that hold in every vector space, and will be used extensively.
017797 Let v denote a vector in a vector space V and let a denote a real number.
0v=0.
a0=0.
If av=0, then either a=0 or v=0.
(−1)v=−v.
(−a)v=−(av)=a(−v).
Observe that 0v+0v=(0+0)v=0v=0v+0 where the first equality is by axiom S3. It follows that 0v=0 by cancellation.
The proof is similar to that of (1), and is left as Exercise ex:6_1_12.
Assume that av = 0. If a = 0, there is nothing to prove; if a ≠ 0, we must show that v = 0. But a ≠ 0 means we can scalar-multiply the equation av = 0 by the scalar 1/a. The result (using (2) and Axioms S5 and S4) is
v = 1v = ((1/a)a)v = (1/a)(av) = (1/a)0 = 0
4. We have −v+v=0 by axiom A5. On the other hand,
(−1)v+v=(−1)v+1v=(−1+1)v=0v=0
5. The proof is left as Exercise [ex:6_1_12].
The properties in Theorem [thm:017797] are familiar for matrices; the point here is that they hold in every vector space. It is hard to exaggerate the importance of this observation.
Axiom A3 ensures that the sum u+(v+w)=(u+v)+w is the same however it is formed, and we write it simply as u+v+w. Similarly, there are different ways to form any sum v1+v2+⋯+vn, and Axiom A3 guarantees that they are all equal. Moreover, Axiom A2 shows that the order in which the vectors are written does not matter (for example: u+v+w+z=z+u+w+v).
Similarly, Axioms S2 and S3 extend. For example
a(u+v+w)=a[u+(v+w)]=au+a(v+w)=au+av+aw
for all a, u, v, and w. Similarly (a+b+c)v = av+bv+cv holds for all values of a, b, c, and v (verify). More generally,
a(v₁ + v₂ + ⋯ + vₙ) = av₁ + av₂ + ⋯ + avₙ
(a₁ + a₂ + ⋯ + aₙ)v = a₁v + a₂v + ⋯ + aₙv
hold for all n ≥ 1, all numbers a, a₁, …, aₙ, and all vectors v, v₁, …, vₙ. The verifications are by induction and are left to the reader (Exercise [ex:6_1_13]). These facts—together with the axioms, Theorem [thm:017797], and the definition of subtraction—enable us to simplify expressions involving sums of scalar multiples of vectors by collecting like terms, expanding, and taking out common factors. This has been discussed for the vector space of matrices in Section [sec:2_1] (and for geometric vectors in Section [sec:4_1]); the manipulations in an arbitrary vector space are carried out in the same way. Here is an illustration.
017838 If u, v, and w are vectors in a vector space V, simplify the expression
2(u+3w)−3(2w−v)−3[2(2u+v−4w)−4(u−2w)]
The reduction proceeds as though u, v, and w were matrices or variables.
2(u+3w) − 3(2w−v) − 3[2(2u+v−4w) − 4(u−2w)]
= 2u + 6w − 6w + 3v − 3[4u + 2v − 8w − 4u + 8w]
= 2u + 3v − 3[2v]
= 2u + 3v − 6v
= 2u − 3v
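Because both sides of the reduction are linear in u, v, and w, the identity holds for all vectors as soon as it holds for the standard basis of R³. The following check (an illustrative aside, not part of the text) evaluates the expression that way and confirms the result 2u − 3v:

```python
def vadd(*vs):
    # Componentwise sum of any number of tuples.
    return tuple(sum(cs) for cs in zip(*vs))

def vsmul(a, v):
    # Componentwise scalar multiple.
    return tuple(a * c for c in v)

# Treat u, v, w as the standard basis of R^3; a linear identity that
# holds for these holds for arbitrary vectors u, v, w.
u, v, w = (1, 0, 0), (0, 1, 0), (0, 0, 1)

inner = vadd(vsmul(2, vadd(vsmul(2, u), v, vsmul(-4, w))),
             vsmul(-4, vadd(u, vsmul(-2, w))))          # 2(2u+v-4w) - 4(u-2w)
expr = vadd(vsmul(2, vadd(u, vsmul(3, w))),             # 2(u+3w)
            vsmul(-3, vadd(vsmul(2, w), vsmul(-1, v))), # -3(2w-v)
            vsmul(-3, inner))                            # -3[inner]
# expr comes out as (2, -3, 0), i.e. 2u - 3v.
```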
Condition (2) in Theorem [thm:017797] points to another example of a vector space.
017847 A set {0} with one element becomes a vector space if we define
0+0=0 and a0=0 for all scalars a.
The resulting space is called the zero vector space and is denoted {0}.
The vector space axioms are easily verified for {0}. In any vector space V, Theorem [thm:017797] shows that the zero subspace (consisting of the zero vector of V alone) is a copy of the zero vector space.
¹ Observe that none of the scalar multiplication axioms are needed here.
|
189588 | https://books.google.com/books/about/Lectures_on_Functional_Equations_and_The.html?id=n7vckU_1tY4C | Lectures on Functional Equations and Their Applications - Google Books
Lectures on Functional Equations and Their Applications
J. Aczél
Academic Press, Jan 1, 1966 - Computers - 509 pages
Numerous detailed proofs highlight this treatment of functional equations. Starting with equations that can be solved by simple substitutions, the book then moves to equations with several unknown functions and methods of reduction to differential and integral equations. Also includes composite equations, equations with several unknown functions of several variables, vector and matrix equations, more. 1966 edition.
Contents
Introduction, p. 1
Equations for Functions of a Single Variable, p. 13
Equations for Functions of Several Variables, p. 211
Concluding Remarks; Some Unsolved Problems, p. 379
Bibliography, p. 383
Author Index, p. 497
Subject Index, p. 509
Copyright
Other editions - View all
Lectures on Functional Equations and Their Applications
J. Aczel,Hansjorg Oser
Limited preview - 2006
Lectures on Functional Equations and Their Applications
J. Aczél
Snippet view - 1966
Bibliographic information
Title: Lectures on Functional Equations and Their Applications (Volume 19 of Mathematics in Science and Engineering)
Editor: J. Aczél
Publisher: Academic Press, 1966
ISBN: 0080955258, 9780080955254
Length: 509 pages
Subjects: Computers › Information Theory
|
189589 | https://codegolf.stackexchange.com/questions/1487/determine-if-4-points-form-a-square | Determine if 4 points form a square - Code Golf Stack Exchange
Python ~~176~~ ~~90~~ 76 bytes
```
def S(A):c=sum(A)/4.0;return set(A)==set((A[0]-c)*1j**i+c for i in range(4))
```
Function S takes a list of complex numbers as its input (A). If we know both the centre and one corner of a square, we can reconstruct the square by rotating the corner 90, 180 and 270 degrees around the centre point (c). On the complex plane, rotation by 90 degrees about the origin is done by multiplying the point by i. If our original shape and the reconstructed square have the same points then it must have been a square.
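For reference, here is an ungolfed version of the same idea, with the multiplication operators (which Markdown appears to have stripped from the one-liner) restored, together with a few quick checks:

```python
def S(A):
    # Ungolfed equivalent of the golfed answer.
    c = sum(A) / 4.0                                   # centroid of the four points
    corner = A[0] - c                                  # one corner, relative to the centre
    rebuilt = {corner * 1j**i + c for i in range(4)}   # rotate 0/90/180/270 degrees
    return set(A) == rebuilt

# A square passes; a rhombus and a collinear set do not.
```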
CC BY-SA 4.0
edited Aug 15 at 1:55
Lucenaposition
2,267 · 1 gold badge · 3 silver badges · 28 bronze badges
answered Mar 12, 2011 at 7:00
paperhorse
264 · 1 silver badge · 4 bronze badges
6
A few optimizations: 1) use "S" instead of "is_square" 2) put it all on one line using ; 3) iterate over the 4 directions directly "for i in(1,1j,-1,-1j)" 4) don't need [] in set argument.
– Keith Randall
Commented
Mar 13, 2011 at 5:29
2
@Keith Randall - Why was this accepted when J B has a much shorter solution?
– aaaaaaaaaaaa
Commented
Mar 18, 2011 at 13:21
1
Two reasons. One, J would always win. So I like to normalize a bit by language. Also, I like this answer better because it doesn't suffer from the same problem as the distance-only answers where other figures (admittedly, only irrational ones) give false positives.
– Keith Randall
Commented
Mar 18, 2011 at 15:04
5
@Keith Randall - Quotes from the question: "The points will have integral coordinates" "Shortest code wins.". It's perfectly fine if you choose different criteria for selecting an answer, even subjective criteria, but then you should state that in the question.
– aaaaaaaaaaaa
Commented
Mar 18, 2011 at 19:23
1
This can be shortened to def U(A):c=sum(A)/4;d=A[0]-c;return{d+c,c-d,d*1j+c,c-d*1j}==set(A) (66 chars).
– Reinstate Monica
Commented
May 13, 2013 at 21:23
13
J, ~~17~~ ~~25~~ ~~27~~ 28
J doesn't really have functions, but here's a monadic verb that takes a vector of points from the complex plane:
```
4 8 4-:#/.~&(/:~&:|&,&(-/~))
```
Method is a mix of Michael Spencer's (work solely on inter-vertex lengths; though his currently fails my rhombus 2) and Eelvex's (check the sets' sizes) approaches. Reading right to left:
-/~ compute all point differences
, flatten
| extract magnitude
/:~ sort up
#/.~ nub and count
4 8 4 -: must have exactly 4 equidistant (at 0), 8 a bit bigger (length 1, sides), 4 bigger yet (length sqrt 2, diagonals)
Demonstration:
```
NB. give the verb a name for easier use
f =: 4 8 4-:#/.~&(/:~&:|&,&(-/~))
NB. standard square
f 0 0j1 1j1 1
1
NB. non-axis-aligned square
f 0 2j1 3j_1 1j_2
1
NB. different order
f 0 1j1 0j1 1
1
NB. rectangle
f 0 0j2 3j2 3
0
NB. rhombus 1
f 0 3j4 8j4 5
0
NB. rhombus 2
f 0 1ad_60 1ad0 1ad60
0
```
For memory's sake, my previous method (required ordered vertices, but could detect regular polygons of any order):
```
*./&(={.)&(%1&|.)&(-1&|.)
```
See history for explanation and demo. The current method could probably be expanded to other polygons, that 4 8 4 does look a lot like a binomial distribution.
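For readers who don't speak J, the 4-8-4 count test sketches straightforwardly in Python. Using squared distances keeps everything in exact integer arithmetic, and (per the thread's comments) the only figures that could fool the count test need irrational coordinates, which integral inputs exclude:

```python
from collections import Counter

def is_square_484(points):
    # squared distances over every ordered pair: a square yields
    # 4 zeros (self-pairs), 8 sides and 4 diagonals
    d2 = Counter((px - qx) ** 2 + (py - qy) ** 2
                 for px, py in points for qx, qy in points)
    return [d2[k] for k in sorted(d2)] == [4, 8, 4]

print(is_square_484([(0, 0), (2, 1), (3, -1), (1, -2)]))  # True
print(is_square_484([(0, 0), (3, 4), (8, 4), (5, 0)]))    # False: rhombus
```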
CC BY-SA 2.5
edited Apr 13, 2017 at 12:39
CommunityBot
1
answered Mar 10, 2011 at 23:30
J B
10.1k · 28 silver badges · 60 bronze badges
12
Can you link to this language?
– Sargun Dhillon
Commented
Mar 11, 2011 at 5:17
1
@gnibbler: Why not? I'm pretty sure it does.
– Eelvex
Commented
Mar 11, 2011 at 13:10
1
Actually, a non-square figure that satisfy the conditions that you check for does exist, a regular triangle plus one point a side-length from a tip of the triangle placed on the extended median. But the question called for integer input, so I guess the solution is ok.
– aaaaaaaaaaaa
Commented
Mar 11, 2011 at 15:34
1
Ah, ok. I was thinking of equilateral triangles with the 4th point being the centre, but that is ruled out by the integer co-ordinates
– gnibbler
Commented
Mar 11, 2011 at 18:17
1
You could cut 3 characters by changing it to an explicit definition: 3 :'4 8 4-:#/.~/:~|,-/~y'
– isawdrones
Commented
Apr 8, 2011 at 17:57
6
Python, 71 ~~42~~
```
lambda A: len(set(A))==4 and len(set(abs(i-j)for i in A for j in A))==3
```
Update 1) to require 4 different points (would previously give false positives for repeated points - are there others?) 2) to define a function per spec
For a square, the vector between any two points must be 0 (the same point), a side, or a diagonal. So, the set of the magnitude of these vectors must have length 3.
```
Accepts co-ordinates as sequences of complex numbers
SQUARES=[
(0+0j,0+1j,1+1j,1+0j), # standard square
(0+0j,2+1j,3-1j,1-2j), # non-axis-aligned square
(0+0j,1+1j,0+1j,1+0j) # different order
]
NONSQUARES=[
(0+0j,0+2j,3+2j,3+0j), # rectangle
(0+0j,3+4j,8+4j,5+0j), # rhombus
(0+0j,0+1j,1+1j,0+0j), # duplicated point
(0+0j,1+60j,1+0j,1-60j) # rhombus 2 (J B)
]
test = "lambda A: len(set(A))==4 and len(set(abs(i-j)for i in A for j in A))==3"
assert len(test)==71
is_square=lambda A: len(set(A))==4 and len(set(abs(i-j)for i in A for j in A))==3
for A in SQUARES:
assert is_square(A)
for A in NONSQUARES:
assert not is_square(A)
```
CC BY-SA 4.0
edited Jun 17, 2020 at 9:04
CommunityBot
1
answered Mar 11, 2011 at 6:35
user932
5
I think the question explicitly stated a list of points, and not a vector.
– Sargun Dhillon
Commented
Mar 11, 2011 at 10:17
False positives.
– aaaaaaaaaaaa
Commented
Mar 11, 2011 at 13:13
1
So (0+0j,0+0j,1+0j,0+1j) is a square?
– mhagger
Commented
Mar 11, 2011 at 15:39
My rhombus 2 is not 1+/-60j, it's more like exp(i*j*pi/3) for j from -1, 0, 1. Note that as eBusiness pointed out they can't all be integral, so not really in the scope of the question.
– J B
Commented
Mar 11, 2011 at 17:00
59 bytes lambda A:5>len({*A})>len({abs(i-j)for i in A for j in A})>2 Attempt This Online!
– movatica
Commented
Aug 15 at 23:35
4
Haskell, 100 characters
Here's how I'd write the JB's J solution in Haskell. With no attempt made to damage readability by removing nonessential characters, it's about 132 characters:
```
import Data.List
d (x,y) (x',y') = (x-x')^2 + (y-y')^2
square xs = (== [4,8,4]) . map length . group . sort $ [d x y | x<-xs, y<-xs]
```
You can scrape it down a bit to 100 by removing excess spaces and renaming some things
```
import Data.List
d(x,y)(a,b)=(x-a)^2+(y-b)^2
s l=(==[4,8,4]).map length.group.sort$[d x y|x<-l,y<-l]
```
Let's use QuickCheck to ensure that it accepts arbitrary squares, with one vertex at (x,y) and edge vector (a,b):
```
prop_square (x,y) (a,b) = square [(x,y),(x+a,y+b),(x-b,y+a),(x+a-b,y+b+a)]
```
Trying it in ghci:
```
ghci> quickCheck prop_square
Failed! Falsifiable (after 1 test):
(0,0)
(0,0)
```
Oh right, the empty square isn't considered a square here, so we'll revise our test:
```
prop_square (x,y) (a,b) =
(a,b) /= (0,0) ==> square [(x,y),(x+a,y+b),(x-b,y+a),(x+a-b,y+b+a)]
```
And trying it again:
```
ghci> quickCheck prop_square
+++ OK, passed 100 tests.
```
CC BY-SA 2.5
edited Mar 15, 2011 at 4:58
answered Mar 15, 2011 at 4:38
user1027
1
1
Save 11 chars by unrolling the function d. s l=[4,8,4]==(map length.group.sort)[(x-a)^2+(y-b)^2|(x,y)<-l,(a,b)<-l]
– Ray
Commented
Jun 19, 2013 at 20:06
3
Factor
An implementation in the Factor programming language:
```
USING: kernel math math.combinatorics math.vectors sequences sets ;
: square? ( seq -- ? )
members [ length 4 = ] [
2 [ first2 distance ] map-combinations
{ 0 } diff length 2 =
] bi and ;
```
And some unit tests:
```
[ t ] [
{
{ { 0 0 } { 0 1 } { 1 1 } { 1 0 } } ! standard square
{ { 0 0 } { 2 1 } { 3 -1 } { 1 -2 } } ! non-axis-aligned square
{ { 0 0 } { 1 1 } { 0 1 } { 1 0 } } ! different order
{ { 0 0 } { 0 4 } { 2 2 } { -2 2 } } ! rotated square
} [ square? ] all?
] unit-test
[ f ] [
{
{ { 0 0 } { 0 2 } { 3 2 } { 3 0 } } ! rectangle
{ { 0 0 } { 3 4 } { 8 4 } { 5 0 } } ! rhombus
{ { 0 0 } { 0 0 } { 1 1 } { 0 0 } } ! only 2 distinct points
{ { 0 0 } { 0 0 } { 1 0 } { 0 1 } } ! only 3 distinct points
} [ square? ] any?
] unit-test
```
CC BY-SA 2.5
edited Mar 19, 2011 at 1:43
answered Mar 12, 2011 at 3:32
mrjbq7
41 · 2 bronze badges
3
OCaml, ~~164~~ 145
```
let(%)(a,b)(c,d)=(c-a)*(c-a)+(d-b)*(d-b)
let t a b c d=a%b+a%c=b%c&&d%c+d%b=b%c&&a%b=a%c&&d%c=d%b
let q(a,b,c,d)=t a b c d||t a c d b||t a b d c
```
Run like this:
```
q ((0,0),(2,1),(3,-1),(1,-2))
```
Let's deobfuscate and explain a bit.
First we define a norm:
```
let norm (ax,ay) (bx,by) = (bx-ax)*(bx-ax)+(by-ay)*(by-ay)
```
You'll notice that there is no call to sqrt, it's not needed here.
```
let is_square_with_fixed_layout a b c d =
(norm a b) + (norm a c) = norm b c
&& (norm d c) + (norm d b) = norm b c
&& norm a b = norm a c
&& norm d c = norm d b
```
Here a, b, c and d are points.
We assume that these points are layed out like this:
```
a - b
| / |
c - d
```
If we have a square then all these conditions must hold:
a b c is a right triangle
b c d is a right triangle
the smaller sides of each right triangle have the same norms
Observe that the following always holds:
```
is_square_with_fixed_layout r s t u = is_square_with_fixed_layout r t s u
```
We will use that to simplify our test function below.
Since our input is not ordered, we also have to check all permutations. Without loss of generality we can avoid permuting the first point:
```
let is_square (a,b,c,d) =
is_square_with_fixed_layout a b c d
|| is_square_with_fixed_layout a c b d
|| is_square_with_fixed_layout a c d b
|| is_square_with_fixed_layout a b d c
|| is_square_with_fixed_layout a d b c
|| is_square_with_fixed_layout a d c b
```
After simplification:
```
let is_square (a,b,c,d) =
is_square_with_fixed_layout a b c d
|| is_square_with_fixed_layout a c d b
|| is_square_with_fixed_layout a b d c
```
Edit: followed M.Giovannini's advice.
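A rough Python transcription of the same layout check (function names are mine; it mirrors the OCaml's behaviour, degenerate inputs included):

```python
def norm2(a, b):
    # squared distance; no sqrt is needed for these comparisons
    return (b[0] - a[0]) ** 2 + (b[1] - a[1]) ** 2

def fixed_layout(a, b, c, d):
    # layout  a-b / c-d :  (a,b,c) and (d,c,b) are congruent right triangles
    return (norm2(a, b) + norm2(a, c) == norm2(b, c)
            and norm2(d, c) + norm2(d, b) == norm2(b, c)
            and norm2(a, b) == norm2(a, c)
            and norm2(d, c) == norm2(d, b))

def is_square(a, b, c, d):
    # as in the OCaml, only three layouts remain after simplification
    return (fixed_layout(a, b, c, d)
            or fixed_layout(a, c, d, b)
            or fixed_layout(a, b, d, c))

print(is_square((0, 0), (2, 1), (3, -1), (1, -2)))  # True: tilted square
print(is_square((0, 0), (0, 2), (3, 2), (3, 0)))    # False: rectangle
```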
CC BY-SA 3.0
edited Apr 18, 2011 at 14:13
answered Mar 11, 2011 at 16:50
bltxd
191 · 3 bronze badges
2
Nice. We haven't seen much OCaml here :)
– Eelvex
Commented
Mar 11, 2011 at 17:46
Use an operator instead of n for a reduction in 20 characters: let t a b c d=a%b+a%c=b%c&&d%c+d%b=b%c&&a%b=a%c&&d%c=d%b.
– Matías Giovannini
Commented
Apr 17, 2011 at 14:26
3
Jelly, 8 bytes
```
_Æm×ıḟƊṆ
```
Try it online!
Takes a list of complex numbers as a command line argument. Prints 1 or 0.
```
_Æm Subtract mean of points from each point (i.e. center on 0)
×ıḟƊ Rotate 90°, then compute set difference with original.
Ṇ Logical negation: if empty (i.e. sets are equal) then 1 else 0.
```
This seems like an enjoyable challenge to revive!
CC BY-SA 4.0
answered Apr 20, 2019 at 3:16
lynn
69.6k · 11 gold badges · 136 silver badges · 285 bronze badges
2
Python (105)
Points are represented by (x,y) tuples. Points can be in any order and only accepts squares. Creates a list, s, of pairwise (non-zero) distances between the points. There should be 12 distances in total, in two unique groups.
```
def f(p):s=filter(None,[(x-z)**2+(y-w)**2for x,y in p for z,w in p]);return len(s)==12and len(set(s))==2
```
CC BY-SA 2.5
answered Mar 11, 2011 at 4:08
Hoa Long Tam
2,102 · 17 silver badges · 18 bronze badges
3
You could leave out the filter and check if the len of the set is 3. This suffers from the same false positive problem as my answer though.
– gnibbler
Commented
Mar 11, 2011 at 5:29
>>> f([(0,0),(0,4),(2,2),(-2,2)]) = True
– Sargun Dhillon
Commented
Mar 11, 2011 at 6:17
2
f([(0,0),(0,4),(2,2),(-2,2)]) is a square
– gnibbler
Commented
Mar 11, 2011 at 18:19
2
Python - 42 chars
Looks like its an improvement to use complex numbers for the points
```
len(set(abs(x-y)for x in A for y in A))==3
```
where
A = [(11+13j), (14+12j), (13+9j), (10+10j)]
old answer:
```
from itertools import*
len(set((a-c)**2+(b-d)**2 for(a,b),(c,d)in combinations(A,2)))==2
```
Points are specified in any order as a list, eg
```
A = [(11, 13), (14, 12), (13, 9), (10, 10)]
```
CC BY-SA 2.5
edited Mar 11, 2011 at 18:32
answered Mar 11, 2011 at 0:29
gnibbler
15.3k · 4 gold badges · 51 silver badges · 76 bronze badges
5
>>> A=[(0,0),(0,0),(1,1),(0,0)] >>> len(set((a-c)**2+(b-d)**2 for(a,b),(c,d)in combinations(A,2)))==2 True
– Sargun Dhillon
Commented
Mar 11, 2011 at 3:30
@Sargun, that's a special case of a whole class of inputs that don't work. I'm trying to think of a fix that doens't blow out the size of the answer. Meanwhile, Can work out the general class of failing cases?
– gnibbler
Commented
Mar 11, 2011 at 5:26
A=[(0,0),(0,4),(2,2),(-2,2)]; len(set((a-c)**2+(b-d)**2 for(a,b),(c,d)in combinations(A,2)))==2
– Sargun Dhillon
Commented
Mar 11, 2011 at 6:15
@Sargun: that example is a square.
– Keith Randall
Commented
Mar 11, 2011 at 15:59
to get rid of duplicated points you can add -set()
– Keith Randall
Commented
Mar 11, 2011 at 16:00
2
C# -- not exactly short. Abusing LINQ. Selects distinct two-combinations of points in the input, calculates their distances, then verifies that exactly four of them are equal and that there is only one other distinct distance value. Point is a class with two double members, X and Y. Could easily be a Tuple, but meh.
```
var points = new List<Point>
{
new Point( 0, 0 ),
new Point( 3, 4 ),
new Point( 8, 4 ),
new Point( 5, 0 )
};
var distances = points.SelectMany(
(value, index) => points.Skip(index + 1),
(first, second) => new Tuple<Point, Point>(first, second)).Select(
pointPair =>
Math.Sqrt(Math.Pow(pointPair.Item2.X - pointPair.Item1.X, 2) +
Math.Pow(pointPair.Item2.Y - pointPair.Item1.Y, 2)));
return
distances.Any(
d => distances.Where( p => p == d ).Count() == 4 &&
distances.Where( p => p != d ).Distinct().Count() == 1 );
```
CC BY-SA 2.5
answered Mar 14, 2011 at 5:57
Daniel Coffman
121 · 1 bronze badge
2
PHP, 82 characters
```
//$x=array of x coordinates
//$y=array of respective y coordinates
/* bounding box of a square is also a square - check if Xmax-Xmin equals Ymax-Ymin */
function S($x,$y){sort($x);sort($y);return ($x[3]-$x[0]==$y[3]-$y[0])?true:false;}
//Or even better (81 chars):
//$a=array of points - ((x1,y1), (x2,y2), (x3,y3), (x4,y4))
function S($a){sort($a);return (bool)($a-$a-abs($a-$a))};
```
CC BY-SA 2.5
edited Mar 14, 2011 at 17:40
answered Mar 14, 2011 at 12:28
arek
21 · 2 bronze badges
1
But just because the bounding box is square doesn't mean the points lie in a square. Necessary but not sufficient condition. Consider (0,0), (5,5), (10,0), (0,-5). Bounding box is square (0:10, -5:5); figure is not.
– Floris
Commented
Apr 26, 2014 at 1:13
2
C#, 107 characters
```
return p.Distinct().Count()==4&&
(from a in p from b in p select (a-b).LengthSquared).Distinct().Count()==3;
```
Where p is a List<Vector3D> containing the points.
Computes all distances squared between all points, and if there are exactly three distinct types (must be 0, some value a, and 2a) and 4 distinct points then the points form a square.
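A sketch of the same counting argument in Python, with the 0 / a / 2a ratio the answer mentions checked explicitly (names are mine, not from the C#):

```python
def is_square_3dist(points):
    # distinct squared distances over all ordered pairs
    d2 = sorted({(px - qx) ** 2 + (py - qy) ** 2
                 for px, py in points for qx, qy in points})
    # a square gives exactly {0, a, 2a} plus 4 distinct points
    return (len(set(points)) == 4 and len(d2) == 3
            and d2[0] == 0 and d2[2] == 2 * d2[1])

print(is_square_3dist([(0, 0), (2, 1), (3, -1), (1, -2)]))  # True
print(is_square_3dist([(0, 0), (0, 2), (3, 2), (3, 0)]))    # False: rectangle
```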
CC BY-SA 2.5
edited Mar 19, 2011 at 11:04
YOU
5,081 · 2 gold badges · 20 silver badges · 23 bronze badges
answered Mar 11, 2011 at 18:17
user965
2
K - 33
Translation of the J solution by J B:
```
{4 8 4~#:'=_sqrt+/'_sqr,/x-/:\:x}
```
K suffers here from its reserved words(_sqr and _sqrt).
Testing:
```
f:{4 8 4~#:'=_sqrt+/'_sqr,/x-/:\:x}
f (0 0;0 1;1 1;1 0)
1
f 4 2#0 0 1 1 0 1 1 0
1
f 4 2#0 0 3 4 8 4 5 0
0
```
CC BY-SA 3.0
edited Apr 13, 2017 at 12:39
CommunityBot
1
answered Apr 8, 2011 at 18:22
isawdrones
256 · 2 silver badges · 3 bronze badges
2
OCaml + Batteries, 132 characters
```
let q l=match List.group(-)[?List:(x-z)*(x-z)+(y-t)*(y-t)|x,y<-List:l;z,t<-List:l;(x,y)<(z,t)?]with[[s;_;_;_];[d;_]]->2*s=d|_->false
```
(look, Ma, no spaces!) The list comprehension in q forms the list of squared norms for each distinct unordered pair of points. A square has four equal sides and two equal diagonals, the squared lengths of the latter being twice the squared lengths of the former. Since there aren't equilateral triangles in the integer lattice the test isn't really necessary, but I include it for completeness.
Tests:
```
q [(0,0);(0,1);(1,1);(1,0)] ;;
- : bool = true
q [(0,0);(2,1);(3,-1);(1,-2)] ;;
- : bool = true
q [(0,0);(1,1);(0,1);(1,0)] ;;
- : bool = true
q [(0,0);(0,2);(3,2);(3,0)] ;;
- : bool = false
q [(0,0);(3,4);(8,4);(5,0)] ;;
- : bool = false
q [(0,0);(0,0);(1,1);(0,0)] ;;
- : bool = false
q [(0,0);(0,0);(1,0);(0,1)] ;;
- : bool = false
```
CC BY-SA 3.0
edited Apr 17, 2011 at 14:36
answered Mar 15, 2011 at 19:01
Matías Giovannini
321 · 1 silver badge · 4 bronze badges
2
Mathematica 65 80 69 66
Checks that the number of distinct inter-point distances (not including distance from a point to itself) is 2 and the shorter of the two is not 0.
```
h = Length@# == 2 \[And] Min@# != 0 &[Union[EuclideanDistance @@@ Subsets[#, {2}]]] &;
```
Usage
```
h@{{0, 0}, {0, 1}, {1, 1}, {1, 0}} (* standard square *)
h@{{0, 0}, {2, 1}, {3, -1}, {1, -2}} (* non-axis aligned square *)
h@{{0, 0}, {1, 1}, {0, 1}, {1, 0}} (* a different order *)
h@{{0, 0}, {0, 2}, {3, 2}, {3, 0}} (* rectangle *)
h@{{0, 0}, {3, 4}, {8, 4}, {5, 0}} (* rhombus *)
h@{{0, 0}, {0, 0}, {1, 1}, {0, 0}} (* only 2 distinct points *)
h@{{0, 0}, {0, 1}, {1, 1}, {0, 1}} (* only 3 distinct points *)
```
True
True
True
False
False
False
False
N.B.: \[And] is a single character in Mathematica.
CC BY-SA 3.0
edited Jun 17, 2020 at 9:04
CommunityBot
1
answered May 13, 2013 at 12:56
DavidC
25.4k · 2 gold badges · 53 silver badges · 106 bronze badges
1
Are you telling me that Mathematica doesn't have a built-in IsSquare function?
– starikcetin
Commented
Mar 25, 2018 at 15:13
2
Python 2, 49 bytes
```
lambda l:all(1j*z+(1-1j)*sum(l)/4in l for z in l)
```
Try it online!
Takes a list of four complex numbers as input. Rotates each point 90 degrees around the average, and checks that each resulting point is in the original list.
Same length (though shorter in Python 3 using {l}).
```
lambda l:{1j*z+(1-1j)*sum(l)/4for z in l}==set(l)
```
Try it online!
CC BY-SA 4.0
answered Apr 20, 2019 at 5:13
xnor
149k · 26 gold badges · 284 silver badges · 670 bronze badges
2
Why not use Python 3 if that is shorter? Also, if it is allowed to return arbitrary truthy / falsy values in Python, ^ can be used instead of ==.
– Joel
Commented
Aug 26, 2019 at 23:49
@Joel Python 2 is mostly preference, and that this is a really old challenge from 2011 when Python 2 was pretty much assumed Python golfing. And the challenge says to return true or false, so I stuck with that. If this was posted today, it would probably specify outputting truthy/falsey or one of two distinct values, and it might even by OK to assume that by default.
– xnor
Commented
Aug 27, 2019 at 0:30
1
Haskell (212)
```
import Data.List;j=any f.permutations where f x=(all g(t x)&&s(map m(t x)));t x=zip3 x(drop 1$z x)(drop 2$z x);g(a,b,c)=l a c==sqrt 2*l a b;m(a,b,_)=l a b;s(x:y)=all(==x)y;l(m,n)(o,p)=sqrt$(o-m)^2+(n-p)^2;z=cycle
```
Naive first attempt. Checks the following two conditions for all permutations of the input list of points (where a given permutation represents, say, a clockwise ordering of the points):
all angles are 90 degrees
all sides are the same length
Deobfuscated code and tests
```
j' = any satisfyBothConditions . permutations
--f
where satisfyBothConditions xs = all angleIs90 (transform xs) &&
same (map findLength' (transform xs))
--t
transform xs = zip3 xs (drop 1 $ cycle xs) (drop 2 $ cycle xs)
--g
angleIs90 (a,b,c) = findLength a c == sqrt 2 * findLength a b
--m
findLength' (a,b,_) = findLength a b
--s
same (x:xs) = all (== x) xs
--l
findLength (x1,y1) (x2,y2) = sqrt $ (x2 - x1)^2 + (y2 - y1)^2
main = do print $ "These should be true"
print $ j [(0,0),(0,1),(1,1),(1,0)]
print $ j [(0,0),(2,1),(3,-1),(1,-2)]
print $ j [(0,0),(1,1),(0,1),(1,0)]
print $ "These should not"
print $ j [(0,0),(0,2),(3,2),(3,0)]
print $ j [(0,0),(3,4),(8,4),(5,0)]
print $ "also testing j' just in case"
print $ j' [(0,0),(0,1),(1,1),(1,0)]
print $ j' [(0,0),(2,1),(3,-1),(1,-2)]
print $ j' [(0,0),(1,1),(0,1),(1,0)]
print $ j' [(0,0),(0,2),(3,2),(3,0)]
print $ j' [(0,0),(3,4),(8,4),(5,0)]
```
CC BY-SA 2.5
answered Mar 11, 2011 at 5:03
Dan Burton
119 · 4 bronze badges
1
Smalltalk for 106 characters
```
s:=Set new.
p permutationsDo:[:e|s add:((e first - e second) dotProduct:(e first - e third))].
s size = 2
```
where p is a collection of points, e.g.
```
p := { 0@0. 2@1. 3@ -1. 1@ -2}. "twisted square"
```
I think the math is sound...
CC BY-SA 2.5
answered Mar 11, 2011 at 14:39
user952
1
Checking for 2 distinct dot products doesn't cut it. Points placed in the same position may produce false positives.
– aaaaaaaaaaaa
Commented
Mar 11, 2011 at 15:52
1
Mathematica, 123 characters (but you can do better):
```
Flatten[Table[x-y,{x,a},{y,a}],1]
Sort[DeleteDuplicates[Abs[Flatten[Table[c.d,{c,%},{d,%}]]]]]
%[[1]]==0&&%[[3]]/%[[2]]==2
```
Where 'a' is the input in Mathematica list form, eg: a={{0,0},{3,4},{8,4},{5,0}}
The key is to look at the dot products between all the vectors and note that they must have exactly three values: 0, x, and 2x for some value of x. The dot product checks both perpendicularity AND length in one swell foop.
I know there are Mathematica shortcuts that can make this shorter, but I don't know what they are.
CC BY-SA 2.5
answered Mar 11, 2011 at 14:54
user503
5
I think this one is wrong as well, but I can't figure what the code does.
– aaaaaaaaaaaa
Commented
Mar 11, 2011 at 15:55
It calculates all the vectors between the 4 points, takes all of the dot products (absolute value), and expects the result to consist exactly of 0, x, 2x for some value of x.
– user503
Commented
Mar 11, 2011 at 16:00
So 16 vectors -> 256 dot products, and you check that the high value is 2 times the low, but you don't how many there is of each value. Is that correct understood?
– aaaaaaaaaaaa
Commented
Mar 11, 2011 at 16:25
Yes, that correctly describes my algorithm. And I now think you're right: you could construct a scenario in which all 3 values occurred, but not in the right quantity. Rats. Should be fixable though?
– user503
Commented
Mar 11, 2011 at 17:10
@barrycarter You can save characters by using Union instead of Sort@DeleteDuplicates. I put your 3 lines together also: #[[1]] == 0 && #[[3]]/#[[2]] == 2 &[ Union@Abs@Flatten[Table[c.d, {c, #}, {d, #}]] &[ Flatten[Table[x - y, {x, a}, {y, a}], 1]]]
– DavidC
Commented
May 15, 2013 at 3:21
1
Scala (146 characters)
```
def s(l:List[List[Int]]){var r=Set(0.0);l map(a=>l map(b=>r+=(math.pow((b.head-a.head),2)+math.pow((b.last-a.last),2))));print(((r-0.0).size)==2)}
```
CC BY-SA 2.5
answered Mar 11, 2011 at 16:45
Gareth
11.4k · 1 gold badge · 37 silver badges · 85 bronze badges
1
JavaScript 144 characters
Mathematically equivalent to J B's answer. It generates the 6 lengths and asserts that the 2 greatest are equal and that the 4 smallest are equal. Input must be an array of arrays.
```
function F(a){d=[];g=0;for(b=4;--b;)for(c=b;c--;d[g++]=(e*e+f*f)/1e6)e=a[c][0]-a[b][0],f=a[c][1]-a[b][1];d.sort();return d[0]==d[3]&&d[4]==d[5]} //Compact function
testcases=[
[[0,0],[0,1],[1,1],[1,0]],
[[0,0],[2,1],[3,-1],[1,-2]],
[[0,0],[1,1],[0,1],[1,0]],
[[0,0],[0,2],[3,2],[3,0]],
[[0,0],[3,4],[8,4],[5,0]],
[[0,0],[0,0],[1,1],[0,0]],
[[0,0],[0,0],[1,0],[0,1]]
]
for(v=0;v<7;v++){
document.write(F(testcases[v])+"<br>")
}
function G(a){ //Readable version
d=[]
g=0
for(b=4;--b;){
for(c=b;c--;){
e=a[c][0]-a[b][0]
f=a[c][1]-a[b][1]
d[g++]=(e*e+f*f)/1e6 //The division tricks the sort algorithm to sort correctly by default method.
}
}
d.sort()
return (d[0]==d[3]&&d[4]==d[5])
}
```
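The same six-distance test is easy to restate in Python with itertools.combinations and squared integer distances, which sidesteps the /1e6 string-sort trick entirely; the extra 2x diagonal check discussed elsewhere in the thread is included (names are mine):

```python
from itertools import combinations

def six_dists_square(points):
    # the 6 pairwise squared distances, in ascending order
    d2 = sorted((ax - bx) ** 2 + (ay - by) ** 2
                for (ax, ay), (bx, by) in combinations(points, 2))
    # four equal sides, two equal diagonals, diagonal^2 = 2 * side^2
    return d2[0] > 0 and d2[0] == d2[3] and d2[4] == d2[5] and d2[5] == 2 * d2[0]

print(six_dists_square([(0, 0), (2, 1), (3, -1), (1, -2)]))  # True
print(six_dists_square([(0, 0), (3, 4), (8, 4), (5, 0)]))    # False: rhombus
```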
CC BY-SA 2.5
edited Jun 17, 2020 at 9:04
CommunityBot
1
answered Mar 11, 2011 at 13:25
aaaaaaaaaaaa
4,415 · 1 gold badge · 17 silver badges · 24 bronze badges
1
PHP, ~~161~~ 158 characters
```
function S($a){for($b=4;--$b;)for($c=$b;$c--;){$e=$a[$c][0]-$a[$b][0];$f=$a[$c][1]-$a[$b][1];$d[$g++]=$e*$e+$f*$f;}sort($d);return$d[0]==$d[3]&&$d[4]==$d[5];}
```
Proof (1x1):
This is based on eBusiness's JavaScript answer.
CC BY-SA 2.5
edited Apr 13, 2017 at 12:39
CommunityBot
1
answered Mar 10, 2011 at 22:50
Kevin Brown-Silva
6,528 · 5 gold badges · 43 silver badges · 59 bronze badges
3
It's a bit unclear from the problem statement the points would come in ordered. I'll go ask.
– J B
Commented
Mar 10, 2011 at 23:01
1
I don't think this will properly handle lots of cases. For example, it will incorrectly label rhombuses as squares.
– Keith Randall
Commented
Mar 10, 2011 at 23:31
Updated this to match one of the JavaScript answers, should handle all cases.
– Kevin Brown-Silva
Commented
Mar 11, 2011 at 21:09
1
JavaScript 1.8, 112 characters
Update: saved 2 characters by folding the array comprehensions together.
```
function i(s)(p=[],[(e=x-a,f=y-b,d=e*e+f*f,p[d]=~~p[d]+1)for each([a,b]in s)for each([x,y]in s)],/8,+4/.test(p))
```
Another reimplementation of J B's answer. Exploits JavaScript 1.7/1.8 features (expression closures, array comprehensions, destructuring assignment). Also abuses ~~ (double bitwise not operator) to coerce undefined to numeric, with array-to-string coercion and a regexp to check that the length counts are [4, 8, 4] (it assumes that exactly 4 points are passed). The abuse of the comma operator is an old obfuscated C trick.
Tests:
```
function assert(cond, x) { if (!cond) throw ["Assertion failure", x]; }
let text = "function i(s)(p=[],[(e=x-a,f=y-b,d=e*e+f*f,p[d]=~~p[d]+1)for each([a,b]in s)for each([x,y]in s)],/8,+4/.test(p))"
assert(text.length == 112);
assert(let (source = i.toSource()) (eval(text), source == i.toSource()));
// Example squares:
assert(i([[0,0],[0,1],[1,1],[1,0]])) // standard square
assert(i([[0,0],[2,1],[3,-1],[1,-2]])) // non-axis-aligned square
assert(i([[0,0],[1,1],[0,1],[1,0]])) // different order
// Example non-squares:
assert(!i([[0,0],[0,2],[3,2],[3,0]])) // rectangle
assert(!i([[0,0],[3,4],[8,4],[5,0]])) // rhombus
assert(!i([[0,0],[0,0],[1,1],[0,0]])) // only 2 distinct points
assert(!i([[0,0],[0,0],[1,0],[0,1]])) // only 3 distinct points
// Degenerate square:
assert(!i([[0,0],[0,0],[0,0],[0,0]])) // we reject this case
```
CC BY-SA 2.5
edited Mar 11, 2011 at 21:20
answered Mar 11, 2011 at 20:03
ecatmur
1,675 · 10 silver badges · 15 bronze badges
1
GoRuby - 66 characters
```
f=->a{z=12;a.pe(2).m{|k,l|(k-l).a}.so.go{|k|k}.a{|k,l|l.sz==z-=4}}
```
expanded:
```
f=->a{z=12;a.permutation(2).map{|k,l|(k-l).abs}.sort.group_by{|k|k}.all?{|k,l|l.size==(z-=4)}}
```
Same algorithm as J B's answer.
Test like:
```
p f
```
Outputs true for true and blank for false
CC BY-SA 2.5
edited Jun 17, 2020 at 9:04
CommunityBot
1
answered Mar 13, 2011 at 11:08
Nemo157
2,131 · 12 silver badges · 17 bronze badges
3
Never heard of GoRuby. Is there anything official written about it? stackoverflow.com/questions/63998/hidden-features-of-ruby/…
– Jonas Elfström
Commented
Mar 18, 2011 at 12:05
@Jonas: I haven't seen anything really official about it, the best blog post I have seen is this one. I wasn't actually able to get it built and working but an alternative is to just copy the golf-prelude into the same folder and run ruby -r ./golf-prelude.rb FILE_TO_RUN.rb and it will work exatcly the same.
– Nemo157
Commented
Mar 18, 2011 at 21:55
it is not necessary to sort before group_by. .sort.group_by {...} should be written as .group_by {...}
– user102008
Commented
Aug 25, 2011 at 21:43
1
Python 97 (without complex points)
```
def t(p):return len(set(p))-1==len(set([pow(pow(a-c,2)+pow(b-d,2),.5)for a,b in p for c,d in p]))
```
This will take lists of point tuples in [(x,y),(x,y),(x,y),(x,y)] in any order, and can handle duplicates, or the wrong number of points. It does NOT require complex points like the other python answers.
You can test it like this:
```
S1 = [(0,0),(1,0),(1,1),(0,1)] # standard square
S2 = [(0,0),(2,1),(3,-1),(1,-2)] # non-axis-aligned square
S3 = [(0,0),(1,1),(0,1),(1,0)] # different order
S4 = [(0,0),(2,2),(0,2),(2,0)] #
S5 = [(0,0),(2,2),(0,2),(2,0),(0,0)] #Redundant points
B1 = [(0,0),(0,2),(3,2),(3,0)] # rectangle
B2 = [(0,0),(3,4),(8,4),(5,0)] # rhombus
B3 = [(0,0),(0,0),(1,1),(0,0)] # only 2 distinct points
B4 = [(0,0),(0,0),(1,0),(0,1)] # only 3 distinct points
B5 = [(1,1),(2,2),(3,3),(4,4)] # Points on the same line
B6 = [(0,0),(2,2),(0,2)] # Not enough points
def tests(f):
assert(f(S1) == True)
assert(f(S2) == True)
assert(f(S3) == True)
assert(f(S4) == True)
assert(f(S5) == True)
assert(f(B1) == False)
assert(f(B2) == False)
assert(f(B3) == False)
assert(f(B4) == False)
assert(f(B5) == False)
assert(f(B6) == False)
def t(p):return len(set(p))-1==len(set([pow(pow(a-c,2)+pow(b-d,2),.5)for a,b in p for c,d in p]))
tests(t)
```
This will take a little explaining, but the overall idea is that there are only three distances between the points in a square (Side, Diagonal, Zero(point compared to itself)):
```
def t(p):return len(set(p))-1==len(set([pow(pow(a-c,2)+pow(b-d,2),.5)for a,b in p for c,d in p]))
```
for a list p of tuples (x,y)
Remove duplicates using set(p) and then test the length
Get every combination of points (a,b in p for c,d in p)
Get list of the distance from every point to every other point
Use set to check there are only three unique distances
-- Zero (point compared to itself)
-- Side length
-- Diagonal length
To save code characters I am:
using a 1 char function name
using a 1 line function definition
Instead of checking the number of unique points is 4, I check that it is -1 the different point lengths (saves ==3==)
use list and tuple unpacking to get a,b in p for c,d in p, instead of using a,a
uses pow(x,.5) instead of including math to get sqrt(x)
not putting spaces after the )
not putting a leading zero on the float
I fear someone can find a test case that breaks this. So please do and I'll correct. For instance, the fact that I just check for three distances, instead of doing an abs() and checking for side length and hypotenuse, seems like an error.
First time I've tried code golf. Be kind if I've broken any house rules.
edited Mar 15, 2011 at 4:28
answered Mar 15, 2011 at 4:20
Jagu
Clojure, 159 chars.
```
user=> (def squares
         [[[0 0] [1 0] [1 1] [0 1]]   ; standard square
          [[0 0] [2 1] [3 -1] [1 -2]] ; non-axis-aligned square
          [[0 0] [1 1] [0 1] [1 0]]]) ; different order
'user/squares
user=> (def non-squares
         [[[0 0] [0 2] [3 2] [3 0]]   ; rectangle
          [[0 0] [3 4] [8 4] [5 0]]]) ; rhombus
'user/non-squares
user=> (defn norm
         [x y]
         (reduce + (map (comp #(* % %) -) x y)))
'user/norm
user=> (defn square?
         [[a b c d]]
         (let [[x y z] (sort (map #(norm a %) [b c d]))]
           (and (= x y) (= z (* 2 x)))))
'user/square?
user=> (every? square? squares)
true
user=> (not-any? square? non-squares)
true
```
Edit: To also explain a little bit.
First define a norm which basically gives the distance between two given points.
Then calculate the distance of the first point to the other three points.
Sort the three distances. (This allows any order of the points.)
The two shortest distances must be equal to be a square.
The third (longest) distance must be equal to the square root of the sum of the squares of the short distances by the theorem of Pythagoras.
(Note: the square rooting is not needed and hence in the code saved above.)
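A minimal Python transcription of the same idea (function names are mine), using squared distances so no square root is needed. One caveat worth noting: checking distances from a single vertex passes all the listed cases but is not a complete square test in general, since the farthest point may lie anywhere on the circle of radius √2 times the side:

```python
def norm2(p, q):
    # squared Euclidean distance, as in the Clojure norm
    return sum((a - b) ** 2 for a, b in zip(p, q))

def square_from_first(pts):
    # Sort the squared distances from the first point to the other
    # three: the two shortest must be equal (the sides) and the
    # longest must be their sum (Pythagoras, on the diagonal).
    a, b, c, d = pts
    x, y, z = sorted(norm2(a, p) for p in (b, c, d))
    return x > 0 and x == y and z == x + y
```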
edited Mar 18, 2011 at 10:02
answered Mar 11, 2011 at 8:52
Meikel
Python, 66
Improving paperhorse's answer from 76 to 66:
```
def U(A):c=sum(A)/4;d=A[0]-c;return{d+c,c-d,d*1j+c,c-d*1j}==set(A)
```
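Spelled out (with points given as complex numbers, which is what this family of answers assumes; the function name is mine): translate so the centroid is at the origin, and then the four offsets of a square must be exactly d, −d, i·d and −i·d for the offset d of any one vertex.

```python
def is_square_complex(points):
    # Centroid/rotation idea behind the golfed answer, expanded.
    # Points are complex numbers.
    c = sum(points) / 4          # centroid
    d = points[0] - c            # offset of one vertex
    # A square is the orbit of one offset under rotation by i
    # (a quarter turn), translated back to the centroid.
    return d != 0 and {c + d, c - d, c + d * 1j, c - d * 1j} == set(points)
```

The `d != 0` guard rejects the degenerate case of four coincident points, which the set comparison alone would accept.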
edited Apr 13, 2017 at 12:38
answered May 14, 2013 at 12:51
Reinstate Monica
C (gcc), ~~159~~ 141 bytes
```
g;f(a,e)_Complex*a,e;{e=(*a+a[1]+a[2]+a[3])/4;for(g=0;a[g]-=e,g<4&cabs(a[g])==cabs(*a)&!fmod(carg(a[g++])-carg(*a),atan(1)););return g>4;}
```
-18 thanks to @G.Sliepen.
Try it online!
After subtracting the center point, checks that the absolute values of each complex number are equal, and that all the arguments are equal modulo π/2.
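The same check in Python, as I read the C (names are mine; tolerances are my addition, since phases are floating-point):

```python
import cmath
import math

def is_square_polar(points):
    # Points are complex numbers. After subtracting the centroid,
    # a square's four offsets share one modulus, and their
    # arguments agree modulo pi/2 (quarter turns).
    if len(set(points)) != 4:
        return False
    c = sum(points) / 4
    offs = [p - c for p in points]
    r = abs(offs[0])
    if r == 0:
        return False
    same_radius = all(math.isclose(abs(o), r) for o in offs)
    quarter_turns = all(
        math.isclose(
            math.remainder(cmath.phase(o) - cmath.phase(offs[0]),
                           math.pi / 2),
            0, abs_tol=1e-9)
        for o in offs)
    return same_radius and quarter_turns
```

A rectangle already passes the equal-modulus test (all vertices are equidistant from the center), so the angle condition is what actually rules it out.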
edited Aug 26, 2019 at 22:05
answered Apr 20, 2019 at 8:27
ceilingcat
1 comment
– G. Sliepen (Aug 25, 2019): 146+3 bytes. After subtracting the center point, checks that the absolute values of each complex number are equal, and that all the arguments are equal modulo π/2.
Wolfram Language (Mathematica), ~~32~~ 31 bytes
```
Tr[#^2]==Tr[#^3]==0&[#-Mean@#]&
```
Try it online!
Takes a list of points represented by complex numbers, calculates the second and third central moment, and checks that both are zero.
Un-golfed:
```
S[p_] := Total[(p - Mean[p])^2] == Total[(p - Mean[p])^3] == 0
```
or
```
S[p_] := CentralMoment[p, 2] == CentralMoment[p, 3] == 0
```
proof
This criterion works on the entire complex plane, not just on the Gaussian integers.
1. First, we note that the central moments do not change when the points are translated together. For a set of points
```
P = Table[c + x[i] + I*y[i], {i, 4}]
```
the central moments are all independent of c (that's why they are called central):
```
{FreeQ[FullSimplify[CentralMoment[P, 2]], c], FreeQ[FullSimplify[CentralMoment[P, 3]], c]}
( {True, True} )
```
2. Second, the central moments have a simple dependence on overall complex scaling (scaling and rotation) of the set of points:
```
P = Table[f*(x[i] + I*y[i]), {i, 4}];
FullSimplify[CentralMoment[P, 2]]
( f^2 (...) )
FullSimplify[CentralMoment[P, 3]]
( f^3 (...) )
```
This means that if a central moment is zero, then scaling and/or rotating the set of points will keep the central moment equal to zero.
3. Third, let's prove the criterion for a list of points where the first two points are fixed:
```
P = {0, 1, x1 + I*y1, x2 + I*y2};
```
Under what conditions are the real and imaginary parts of the second and third central moments zero?
```
C2 = CentralMoment[P, 2] // ReIm // ComplexExpand // FullSimplify;
C3 = CentralMoment[P, 3] // ReIm // ComplexExpand // FullSimplify;
Solve[Thread[Join[C2, C3] == 0], {x1, y1, x2, y2}, Reals] // FullSimplify
( {{x1 -> 0, y1 -> -1, x2 -> 1, y2 -> -1},
{x1 -> 0, y1 -> 1, x2 -> 1, y2 -> 1},
{x1 -> 1/2, y1 -> -1/2, x2 -> 1/2, y2 -> 1/2},
{x1 -> 1/2, y1 -> 1/2, x2 -> 1/2, y2 -> -1/2},
{x1 -> 1, y1 -> -1, x2 -> 0, y2 -> -1},
{x1 -> 1, y1 -> 1, x2 -> 0, y2 -> 1}} )
```
All of these six solutions represent squares. Therefore, the only way that a list of points of the form {0, 1, x1 + I*y1, x2 + I*y2} can have zero second and third central moments is when the four points form a square.
Due to the translation, rotation, and scaling properties demonstrated in points 1 and 2, this means that any time the second and third central moments are zero, we have a square in some translation/rotation/scaling state. ∎
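The criterion can also be spot-checked numerically. Here is a small Python sketch of it (the tolerance and the function names are my additions), with points as complex numbers as in the answer:

```python
def central_moment(pts, k):
    # k-th central moment of a list of complex points:
    # sum of (p - mean)^k over the points.
    m = sum(pts) / len(pts)
    return sum((p - m) ** k for p in pts)

def is_square_moments(pts):
    # A 4-point set is a square iff its 2nd and 3rd central
    # moments both vanish (the claim proved above).
    return (abs(central_moment(pts, 2)) < 1e-9
            and abs(central_moment(pts, 3)) < 1e-9)
```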
generalization
The k-th central moment of a regular n-gon is zero if k is not divisible by n. Enough of these conditions must be combined to make up a sufficient criterion for detecting n-gons. For the case n=4 it was enough to detect zeros in k=2 and k=3; for detecting, e.g., hexagons (n=6) it may be necessary to check k=2,3,4,5 for zeros. I haven't proved the following, but suspect that it will detect any regular n-gon:
```
isregularngon[p_List] :=
And @@ Table[PossibleZeroQ[CentralMoment[p, k]], {k, 2, Length[p] - 1}]
```
The code challenge is essentially this code specialized for length-4 lists.
edited Aug 30, 2019 at 16:50
answered Aug 27, 2019 at 7:09
Roman
5 comments
– Joel (Aug 28, 2019): The solution looks fairly interesting. Could you explain why it gives the correct answer?
– Roman (Aug 28, 2019): @Joel I've added a proof.
– Joel (Aug 28, 2019): Thanks a lot. It would be ideal to have a more intuitive mathematical explanation of this nice solution.
– Roman (Aug 28, 2019): @Joel I can give you the thread that led me to this solution. I started by noticing that squares (as lists of coordinates, not complex numbers) have a covariance matrix that is proportional to the unit matrix; however, this condition is not sufficient (false positives). The third central moment must be zero for any structure of point symmetry. So I switched to complex representation to place a condition on the second and third central moments, and to my surprise it turned out that the second central moment is zero for squares.
– Joel (Aug 28, 2019): Great. Thanks for showing the path to this solution.
J, ~~31~~ ~~29~~ ~~27~~ 26
```
3=[:#[:~.[:,([:+/*:@-)"1/~
```
An earlier version checked if the 8 smallest distances between the points are the same; the current one checks if there are exactly three kinds of distances between the points (zero, side length and diagonal length).
```
f 4 2 $ 0 0 2 1 3 _1 1 _2
1
f 4 2 $ 0 0 0 2 3 2 3 0
0
```
4 2 $ is a way of writing an array in J.
edited Mar 11, 2011 at 9:16
answered Mar 11, 2011 at 8:20
Eelvex
12 comments
– J B (Mar 11, 2011): This fails the rhombus test.
– Eelvex (Mar 11, 2011): @JB: I had a typo. I changed the method anyway now.
– J B (Mar 11, 2011): Eeew... you're taking the same method I was stealing. Except my version's shorter :p
– Eelvex (Mar 11, 2011): @JB: really? I didn't notice that. Who else checks (3 == #distances)?
– Eelvex (Mar 11, 2011): @JB: oic... some check for combinations of 2. :-/
Tags: code-golf, math
189590 | https://plato.stanford.edu/entries/set-theory/ | Set Theory (Stanford Encyclopedia of Philosophy)
Set Theory
First published Wed Oct 8, 2014; substantive revision Tue Jan 31, 2023
Set theory is the mathematical theory of well-determined collections, called sets, of objects that are called members, or elements, of the set. Pure set theory deals exclusively with sets, so the only sets under consideration are those whose members are also sets. The theory of the hereditarily-finite sets, namely those finite sets whose elements are also finite sets, the elements of which are also finite, and so on, is formally equivalent to arithmetic. So, the essence of set theory is the study of infinite sets, and therefore it can be defined as the mathematical theory of the actual—as opposed to potential—infinite.
The notion of set is so simple that it is usually introduced informally, and regarded as self-evident. In set theory, however, as is usual in mathematics, sets are given axiomatically, so their existence and basic properties are postulated by the appropriate formal axioms. The axioms of set theory imply the existence of a set-theoretic universe so rich that all mathematical objects can be construed as sets. Also, the formal language of pure set theory allows one to formalize all mathematical notions and arguments. Thus, set theory has become the standard foundation for mathematics, as every mathematical object can be viewed as a set, and every theorem of mathematics can be logically deduced in the Predicate Calculus from the axioms of set theory.
Both aspects of set theory, namely, as the mathematical science of the infinite, and as the foundation of mathematics, are of philosophical importance.
1. The origins
2. The axioms of set theory
2.1 The axioms of ZFC
3. The theory of transfinite ordinals and cardinals
3.1 Cardinals
4. The universe V of all sets
5. Set theory as the foundation of mathematics
5.1 Metamathematics
5.2 The incompleteness phenomenon
6. The set theory of the continuum
6.1 Descriptive Set Theory
6.2 Determinacy
6.3 The Continuum Hypothesis
7. Gödel’s constructible universe
8. Forcing
8.1 Other applications of forcing
9. The search for new axioms
10. Large cardinals
10.1 Inner models of large cardinals
10.2 Consequences of large cardinals
11. Forcing axioms
Bibliography
Academic Tools
Other Internet Resources
Related Entries
1. The origins
Set theory, as a separate mathematical discipline, begins in the work of Georg Cantor. One might say that set theory was born in late 1873, when he made the amazing discovery that the linear continuum, that is, the real line, is not countable, meaning that its points cannot be counted using the natural numbers. So, even though the set of natural numbers and the set of real numbers are both infinite, there are more real numbers than there are natural numbers, which opened the door to the investigation of the different sizes of infinity. See the entry on the early development of set theory for a discussion of the origin of set-theoretic ideas and their use by different mathematicians and philosophers before and around Cantor’s time.
According to Cantor, two sets A and B have the same size, or cardinality, if they are bijectable, i.e., the elements of A can be put in a one-to-one correspondence with the elements of B. Thus, the set N of natural numbers and the set R of real numbers have different cardinalities. In 1878 Cantor formulated the famous Continuum Hypothesis (CH), which asserts that every infinite set of real numbers is either countable, i.e., it has the same cardinality as N, or has the same cardinality as R. In other words, there are only two possible sizes of infinite sets of real numbers. The CH is the most famous problem of set theory. Cantor himself devoted much effort to it, and so did many other leading mathematicians of the first half of the twentieth century, such as Hilbert, who listed the CH as the first problem in his celebrated list of 10 (later expanded to 23 in the published version) unsolved mathematical problems presented in 1900 at the Second International Congress of Mathematicians, in Paris. The attempts to prove the CH led to major discoveries in set theory, such as the theory of constructible sets, and the forcing technique, which showed that the CH can neither be proved nor disproved from the usual axioms of set theory. To this day, the CH remains open.
Early on, some inconsistencies, or paradoxes, arose from a naive use of the notion of set; in particular, from the deceivingly natural assumption that every property determines a set, namely the set of objects that have the property. One example is Russell’s Paradox, also known to Zermelo:
consider the property of sets of not being members of themselves. If the property determines a set, call it A, then A is a member of itself if and only if A is not a member of itself.
Thus, some collections, like the collection of all sets, the collection of all ordinal numbers, or the collection of all cardinal numbers, are not sets. Such collections are called proper classes.
In order to avoid the paradoxes and put it on a firm footing, set theory had to be axiomatized. The first axiomatization was due to Zermelo (1908) and it came as a result of the need to spell out the basic set-theoretic principles underlying his proof of Cantor’s Well-Ordering Principle. Zermelo’s axiomatization avoids Russell’s Paradox by means of the Separation axiom, which is formulated as quantifying over properties of sets, and thus it is a second-order statement. Further work by Skolem and Fraenkel led to the formalization of the Separation axiom in terms of formulas of first-order, instead of the informal notion of property, as well as to the introduction of the axiom of Replacement, which is also formulated as an axiom schema for first-order formulas (see next section). The axiom of Replacement is needed for a proper development of the theory of transfinite ordinals and cardinals, using transfinite recursion (see Section 3). It is also needed to prove the existence of such simple sets as the set of hereditarily finite sets, i.e., those finite sets whose elements are finite, the elements of which are also finite, and so on; or to prove basic set-theoretic facts such as that every set is contained in a transitive set, i.e., a set that contains all elements of its elements (see Mathias 2001 for the weaknesses of Zermelo set theory). A further addition, by von Neumann, of the axiom of Foundation, led to the standard axiom system of set theory, known as the Zermelo-Fraenkel axioms plus the Axiom of Choice, or ZFC.
Other axiomatizations of set theory, such as those of von Neumann-Bernays-Gödel (NBG), or Morse-Kelley (MK), allow also for a formal treatment of proper classes.
2. The axioms of set theory
ZFC is an axiom system formulated in first-order logic with equality and with only one binary relation symbol ∈ for membership. Thus, we write A ∈ B to express that A is a member of the set B. See the
Supplement on Basic Set Theory
for further details. See also the
Supplement on Zermelo-Fraenkel Set Theory
for a formalized version of the axioms and further comments. We state below the axioms of ZFC informally.
2.1 The axioms of ZFC
Extensionality: If two sets A and B have the same elements, then they are equal.
Null Set: There exists a set, denoted by ∅ and called the empty set, which has no elements.
Pair: Given any sets A and B, there exists a set, denoted by {A,B}, which contains A and B as its only elements. In particular, there exists the set {A} which has A as its only element.
Power Set: For every set A there exists a set, denoted by P(A) and called the power set of A, whose elements are all the subsets of A.
Union: For every set A, there exists a set, denoted by ⋃A and called the union of A, whose elements are all the elements of the elements of A.
Infinity: There exists an infinite set. In particular, there exists a set Z that contains ∅ and such that if A ∈ Z, then ⋃{A,{A}} ∈ Z.
Separation: For every set A and every given property, there is a set containing exactly the elements of A that have that property. A property is given by a formula φ of the first-order language of set theory.
Thus, Separation is not a single axiom but an axiom schema, that is, an infinite list of axioms, one for each formula φ.
Replacement: For every given definable function with domain a set A, there is a set whose elements are all the values of the function.
Replacement is also an axiom schema, as definable functions are given by formulas.
Foundation: Every non-empty set A contains an ∈-minimal element, that is, an element such that no element of A belongs to it.
These are the axioms of Zermelo-Fraenkel set theory, or ZF. The axioms of Null Set and Pair follow from the other ZF axioms, so they may be omitted. Also, Replacement implies Separation.
Finally, there is the Axiom of Choice (AC):
Choice: For every set A of pairwise-disjoint non-empty sets, there exists a set that contains exactly one element from each set in A.
The AC was, for a long time, a controversial axiom. On the one hand, it is very useful and of wide use in mathematics. On the other hand, it has rather unintuitive consequences, such as the Banach-Tarski Paradox, which says that the unit ball can be partitioned into finitely-many pieces, which can then be rearranged to form two unit balls. The objections to the axiom arise from the fact that it asserts the existence of sets that cannot be explicitly defined. But Gödel’s 1938 proof of its consistency, relative to the consistency of ZF, dispelled any suspicions left about it.
The Axiom of Choice is equivalent, modulo ZF, to the Well-ordering Principle, which asserts that every set can be well-ordered, i.e., it can be linearly ordered so that every non-empty subset has a minimal element.
Although not formally necessary, besides the symbol ∈ one normally uses for convenience other auxiliary defined symbols. For example, A ⊆ B expresses that A is a subset of B, i.e., every member of A is a member of B. Other symbols are used to denote sets obtained by performing basic operations, such as A ∪ B, which denotes the union of A and B, i.e., the set whose elements are those of A and B; or A ∩ B, which denotes the intersection of A and B, i.e., the set whose elements are those common to A and B. The ordered pair (A,B) is defined as the set {{A},{A,B}}. Thus, two ordered pairs (A,B) and (C,D) are equal if and only if A = C and B = D. And the Cartesian product A × B is defined as the set of all ordered pairs (C,D) such that C ∈ A and D ∈ B. Given any formula φ(x, y_1, …, y_n), and sets A, B_1, …, B_n, by the axiom of Separation one can form the set of all those elements of A that satisfy the formula φ(x, B_1, …, B_n). This set is denoted by {a ∈ A : φ(a, B_1, …, B_n)}. In ZF one can easily prove that all these sets exist. See the Supplement on Basic Set Theory for further discussion.
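The characteristic property of the ordered pair can be checked mechanically. A small Python sketch (frozensets stand in for sets; the helper name `kpair` is mine):

```python
def kpair(a, b):
    # Kuratowski ordered pair (a, b) = {{a}, {a, b}}
    return frozenset({frozenset({a}), frozenset({a, b})})

# Characteristic property: (A,B) == (C,D) iff A == C and B == D,
# so order matters even though only unordered sets are used.
assert kpair(1, 2) == kpair(1, 2)
assert kpair(1, 2) != kpair(2, 1)
# Degenerate case: (a, a) collapses to {{a}}.
assert kpair(1, 1) == frozenset({frozenset({1})})
```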
3. The theory of transfinite ordinals and cardinals
In ZFC one can develop the Cantorian theory of transfinite (i.e., infinite) ordinal and cardinal numbers. Following the definition given by von Neumann in the early 1920s, the ordinal numbers, or ordinals, for short, are obtained by starting with the empty set and performing two operations: taking the immediate successor, and passing to the limit. Thus, the first ordinal number is ∅. Given an ordinal α, its immediate successor, denoted by α + 1, is the set α ∪ {α}. And given a non-empty set X of ordinals such that for every α ∈ X the successor α + 1 is also in X, one obtains the limit ordinal ⋃X. One shows that every ordinal is (strictly) well-ordered by ∈, i.e., it is linearly ordered by ∈ and there is no infinite ∈-descending sequence. Also, every well-ordered set is isomorphic to a unique ordinal, called its order-type.
Note that every ordinal is the set of its predecessors. However, the class ON of all ordinals is not a set. Otherwise, ON would be an ordinal greater than all the ordinals, which is impossible. The first infinite ordinal, which is the set of all finite ordinals, is denoted by the Greek letter omega (ω). In ZFC, one identifies the finite ordinals with the natural numbers. Thus, ∅ = 0, {∅} = 1, {∅,{∅}} = 2, etc., hence ω is just the set N of natural numbers.
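The finite stages of this construction can be played with directly. A sketch using Python frozensets (the function name is mine):

```python
def von_neumann(n):
    # n-th von Neumann natural: 0 = {}, successor of s = s U {s}
    s = frozenset()
    for _ in range(n):
        s = s | frozenset({s})
    return s

# Each ordinal is the set of its predecessors, so |n| == n:
assert len(von_neumann(0)) == 0
assert len(von_neumann(3)) == 3
# Order is membership, and also proper inclusion: 2 ∈ 3 and 2 ⊂ 3.
assert von_neumann(2) in von_neumann(3)
assert von_neumann(2) < von_neumann(3)
```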
One can extend the operations of addition and multiplication of natural numbers to all the ordinals. For example, the ordinal α + β is the order-type of the well-ordering obtained by concatenating a well-ordered set of order-type α and a well-ordered set of order-type β. The sequence of ordinals, well-ordered by ∈, starts as follows
0, 1, 2, …, n, …, ω, ω+1, ω+2, …, ω+ω, …, ω·n, …, ω·ω, …, ω^n, …, ω^ω, …
The ordinals satisfy the principle of transfinite induction: suppose that C is a class of ordinals such that whenever C contains all ordinals β smaller than some ordinal α, then α is also in C. Then the class C contains all ordinals. Using transfinite induction one can prove in ZFC (and one needs the axiom of Replacement) the important principle of transfinite recursion, which says that, given any definable class-function G (namely a definable operation that takes any set x to a set G(x)), one can define a class-function F with domain the class ON of ordinals, such that F(α) is the value of the function G applied to the function F restricted to α. One uses transfinite recursion, for example, in order to define properly the arithmetical operations of addition, product, and exponentiation on the ordinals.
Recall that an infinite set is countable if it is bijectable, i.e., it can be put into a one-to-one correspondence, with ω. All the ordinals displayed above are either finite or countable. But the set of all finite and countable ordinals is also an ordinal, called ω_1, and is not countable. Similarly, the set of all ordinals that are bijectable with some ordinal less than or equal to ω_1 is also an ordinal, called ω_2, and is not bijectable with ω_1, and so on.
3.1 Cardinals
A cardinal is an ordinal that is not bijectable with any smaller ordinal. Thus, every finite ordinal is a cardinal, and ω, ω_1, ω_2, etc. are also cardinals. The infinite cardinals are represented by the letter aleph (ℵ) of the Hebrew alphabet, and their sequence is indexed by the ordinals. It starts like this
ℵ_0, ℵ_1, ℵ_2, …, ℵ_ω, ℵ_{ω+1}, …, ℵ_{ω+ω}, …, ℵ_{ω^2}, …, ℵ_{ω^ω}, …, ℵ_{ω_1}, …, ℵ_{ω_2}, …
Thus, ω = ℵ_0, ω_1 = ℵ_1, ω_2 = ℵ_2, etc. For every cardinal there is a bigger one, and the limit of an increasing sequence of cardinals is also a cardinal. Thus, the class of all cardinals is not a set, but a proper class.
An infinite cardinal κ is called regular if it is not the union of less than κ smaller cardinals. Thus, ℵ_0 is regular, and so are all infinite successor cardinals, such as ℵ_1. Non-regular infinite cardinals are called singular. The first singular cardinal is ℵ_ω, as it is the union of countably-many smaller cardinals, namely ℵ_ω = ⋃{ℵ_n : n < ω}.
The cofinality of a cardinal κ, denoted by cf(κ), is the smallest cardinal λ such that κ is the union of λ-many smaller ordinals. Thus, cf(ℵ_ω) = ℵ_0.
By the AC (in the form of the Well-Ordering Principle), every set A can be well-ordered, hence it is bijectable with a unique cardinal, called the cardinality of A. Given two cardinals κ and λ, the sum κ + λ is defined as the cardinality of the set consisting of the union of any two disjoint sets, one of cardinality κ and one of cardinality λ. And the product κ·λ is defined as the cardinality of the Cartesian product κ × λ. The operations of sum and product of infinite cardinals are trivial, for if κ and λ are infinite cardinals, then κ + λ = κ·λ = maximum{κ, λ}.
In contrast, cardinal exponentiation is highly non-trivial, for even the value of the simplest non-trivial infinite exponential, namely 2^{ℵ_0}, is not known and cannot be determined in ZFC (see below). The cardinal κ^λ is defined as the cardinality of the Cartesian product of λ copies of κ; equivalently, as the cardinality of the set of all functions from λ into κ. König's theorem asserts that κ^{cf(κ)} > κ, which implies that the cofinality of the cardinal 2^{ℵ_0}, whatever that cardinal is, must be uncountable. But this is essentially all that ZFC can prove about the value of the exponential 2^{ℵ_0}.
In the case of exponentiation of singular cardinals, ZFC has a lot more to say. In 1989, Shelah proved the remarkable result that if ℵ_ω is a strong limit, that is, 2^{ℵ_n} < ℵ_ω for every n < ω, then 2^{ℵ_ω} < ℵ_{ω_4} (see Shelah (1994)). The technique developed by Shelah to prove this and similar theorems, in ZFC, is called pcf theory (for possible cofinalities), and has found many applications in other areas of mathematics.
4. The universe V of all sets
A posteriori, the ZF axioms other than Extensionality—which needs no justification because it just states a defining property of sets—may be justified by their use in building the cumulative hierarchy of sets. Namely, in ZF we define using transfinite recursion the class-function that assigns to each ordinal α the set V_α, given as follows:
V_0 = ∅
V_{α+1} = P(V_α)
V_α = ⋃{V_β : β < α}, whenever α is a limit ordinal.
The Power Set axiom is used to obtain V_{α+1} from V_α. Replacement and Union allow one to form V_α for α a limit ordinal. Indeed, consider the function that assigns to each β < α the set V_β. By Replacement, the collection of all the V_β, for β < α, is a set, hence the Union axiom applied to that set yields V_α. The axiom of Infinity is needed to prove the existence of ω and hence of the transfinite sequence of ordinals. Finally, the axiom of Foundation is equivalent, assuming the other axioms, to the statement that every set belongs to some V_α, for some ordinal α. Thus, ZF proves that the set-theoretic universe, denoted by V, is the union of all the V_α, α an ordinal.
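The finite stages of the hierarchy can be computed explicitly. A small sketch with Python frozensets (helper names mine), showing how fast the stages grow, since each stage is the power set of the previous one:

```python
from itertools import chain, combinations

def powerset(s):
    # All subsets of frozenset s, each as a frozenset.
    items = list(s)
    return frozenset(frozenset(c) for c in
                     chain.from_iterable(combinations(items, r)
                                         for r in range(len(items) + 1)))

def V(n):
    # Finite stage V_n: V_0 = empty set, V_{n+1} = P(V_n).
    s = frozenset()
    for _ in range(n):
        s = powerset(s)
    return s

# |V_{n+1}| = 2^|V_n|: sizes 0, 1, 2, 4, 16, 65536, ...
assert [len(V(n)) for n in range(5)] == [0, 1, 2, 4, 16]
```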
The proper class V, together with the ∈ relation, satisfies all the ZFC axioms, and is thus a model of ZFC. It is the intended model of ZFC, and one may think of ZFC as providing a description of V, a description however that is highly incomplete, as we shall see below.
One important property of V is the so-called Reflection Principle. Namely, for each formula φ(x_1, …, x_n), ZFC proves that there exists some V_α that reflects it, that is, for every a_1, …, a_n ∈ V_α,
φ(a_1, …, a_n) holds in V if and only if φ(a_1, …, a_n) holds in V_α.
Thus, V cannot be characterized by any sentence, as any sentence that is true in V must be also true in some initial segment V_α. In particular, ZFC is not finitely axiomatizable, for otherwise ZFC would prove that, for unboundedly many ordinals α, V_α is a model of ZFC, contradicting Gödel's second incompleteness theorem (see Section 5.2).
The Reflection Principle encapsulates the essence of ZF set theory, for as shown by Levy (1960), the axioms of Extensionality, Separation, and Foundation, together with the Reflection Principle, formulated as the axiom schema asserting that each formula is reflected by some set that contains all elements and all subsets of its elements (note that the V_α are like this), is equivalent to ZF.
5. Set theory as the foundation of mathematics
Every mathematical object may be viewed as a set. For example, the natural numbers are identified with the finite ordinals, so N = ω. The set of integers Z may be defined as the set of equivalence classes of pairs of natural numbers under the equivalence relation (n,m) ≡ (n′,m′) if and only if n + m′ = m + n′. By identifying every natural number n with the equivalence class of the pair (n,0), one may extend naturally the operations of sum and product of natural numbers to Z (see Enderton (1977) for details, and Levy (1979) for a different construction). Further, one may define the rationals Q as the set of equivalence classes of pairs (n,m) of integers, where m ≠ 0, under the equivalence relation (n,m) ≡ (n′,m′) if and only if n·m′ = m·n′. Again, the operations + and · on Z may be extended naturally to Q. Moreover, the ordering ≤_Q on the rationals is given by: r ≤_Q s if and only if there exists t ∈ Q such that s = r + t. The real numbers may be defined as Dedekind cuts of Q, namely, a real number is given by a pair (A,B) of non-empty disjoint sets such that A ∪ B = Q, A has no greatest element, and a <_Q b for every a ∈ A and b ∈ B. One can then extend again the operations + and · on Q, as well as the ordering ≤_Q, to the set of real numbers R.
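The integer construction above is easy to animate. A hedged Python sketch (helper names are mine), where the pair (n, m) of naturals represents the integer n − m:

```python
def same_int(p, q):
    # (n, m) and (n2, m2) represent the same integer
    # iff n + m2 == m + n2 (the relation from the text).
    (n, m), (n2, m2) = p, q
    return n + m2 == m + n2

def add_int(p, q):
    # Componentwise sum of pairs represents integer addition.
    return (p[0] + q[0], p[1] + q[1])

# (2, 5) and (0, 3) both represent -3:
assert same_int((2, 5), (0, 3))
# (-3) + 4 = 1: (2,5) + (4,0) = (6,5), which is equivalent to (1,0).
assert same_int(add_int((2, 5), (4, 0)), (1, 0))
```

The point of the quotient is exactly what `same_int` expresses: subtraction, which is partial on the naturals, becomes total once pairs are identified up to this relation.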
Let us emphasize that it is not claimed that, e.g., real numbers are Dedekind cuts of rationals, as they could also be defined using Cauchy sequences, or in other different ways. What is important, from a foundational point of view, is that the set-theoretic version of R, together with the usual algebraic operations, satisfies the categorical axioms that the real numbers satisfy, namely those of a complete ordered field. The metaphysical question of what the real numbers really are is irrelevant here.
Algebraic structures can also be viewed as sets, as any n-ary relation on the elements of a set A can be viewed as a set of ordered n-tuples (a_1, …, a_n) of elements of A. And any n-ary function f on A, with values in some set B, can be seen as the set of ordered (n+1)-tuples ((a_1, …, a_n), b) such that b is the value of f on (a_1, …, a_n). Thus, for example, a group is just a triple (A, +, 0), where A is a non-empty set, + is a binary function on A that is associative, 0 is an element of A such that a + 0 = 0 + a = a, for all a ∈ A, and for every a ∈ A there is an element of A, denoted by −a, such that a + (−a) = (−a) + a = 0. Also, a topological space is just a set X together with a topology τ on it, i.e., τ is a subset of P(X) containing X and ∅, and closed under arbitrary unions and finite intersections. Any mathematical object whatsoever can always be viewed as a set, or a proper class. The properties of the object can then be expressed in the language of set theory. Any mathematical statement can be formalized into the language of set theory, and any mathematical theorem can be derived, using the calculus of first-order logic, from the axioms of ZFC, or from some extension of ZFC. It is in this sense that set theory provides a foundation for mathematics.
The foundational role of set theory for mathematics, while significant, is by no means the only justification for its study. The ideas and techniques developed within set theory, such as infinite combinatorics, forcing, or the theory of large cardinals, have turned it into a deep and fascinating mathematical theory, worthy of study by itself, and with important applications to practically all areas of mathematics.
5.1 Metamathematics
The remarkable fact that virtually all of mathematics can be formalized within ZFC makes metamathematics possible, namely the mathematical study of mathematics itself. Thus, any question about the existence of some mathematical object, or the provability of a conjecture or hypothesis, can be given a mathematically precise formulation, and the provability or unprovability of any given mathematical statement becomes a sensible mathematical question. When faced with an open mathematical problem or conjecture, it makes sense to ask for its provability or unprovability in the ZFC formal system. Unfortunately, the answer may be neither, because ZFC, if consistent, is incomplete.
5.2 The incompleteness phenomenon
Gödel’s completeness theorem for first-order logic implies that ZFC is consistent—i.e., no contradiction can be derived from it—if and only if it has a model. A model of ZFC is a pair (M, E), where M is a non-empty set and E is a binary relation on M such that all the axioms of ZFC are true when interpreted in (M, E), i.e., when the variables that appear in the axioms range over elements of M, and ∈ is interpreted as E. Thus, if φ is a sentence of the language of set theory and one can find a model of ZFC in which φ holds, then its negation ¬φ cannot be proved in ZFC. Hence, if one can find a model of φ and also a model of ¬φ, then φ is neither provable nor disprovable in ZFC, in which case we say that φ is undecidable in, or independent of, ZFC.
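Truth in a structure (M, E) can be illustrated in miniature: quantifiers range over M and "x ∈ y" is read as (x, y) ∈ E. The toy structure below is of course nowhere near a model of all of ZFC; it merely shows, as an assumed example of ours, how one axiom (Extensionality) is evaluated in a finite (M, E):

```python
# A tiny "membership structure": 1 contains 0; 2 contains 0 and 1.
M = {0, 1, 2}
E = {(0, 1), (0, 2), (1, 2)}

def members(y):
    """The E-members of y within M."""
    return {x for x in M if (x, y) in E}

# Extensionality, interpreted in (M, E): elements with the same
# E-members are equal.
extensionality = all(x == y
                     for x in M for y in M
                     if members(x) == members(y))
assert extensionality
```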
In 1931, Gödel announced his striking incompleteness theorems, which assert that any reasonable formal system for mathematics is necessarily incomplete. In particular, if ZFC is consistent, then there are propositions in the language of set theory that are undecidable in ZFC. Moreover, Gödel’s second incompleteness theorem implies that the formal (arithmetical) statement CON(ZFC), which asserts that ZFC is consistent, while true, cannot be proved in ZFC. And neither can its negation. Thus, CON(ZFC) is undecidable in ZFC.
If ZFC is consistent, then it cannot prove the existence of a model of ZFC, for otherwise ZFC would prove its own consistency. Thus, a proof of consistency or of undecidability of a given sentence φ is always a relative consistency proof. That is, one assumes that ZFC is consistent, hence it has a model, and then one constructs another model of ZFC where the sentence φ is true. We shall see several examples in the next sections.
6. The set theory of the continuum
From Cantor and until about 1940, set theory developed mostly around the study of the continuum, that is, the real line R. The main topic was the study of the so-called regularity properties, as well as other structural properties, of simply-definable sets of real numbers, an area of mathematics that is known as Descriptive Set Theory.
6.1 Descriptive Set Theory
Descriptive Set Theory is the study of the properties and structure of definable sets of real numbers and, more generally, of definable subsets of R^n and other Polish spaces (i.e., topological spaces that are homeomorphic to a separable complete metric space), such as the Baire space 𝒩 of all functions f : N → N, the space of complex numbers, Hilbert space, and separable Banach spaces. The simplest sets of real numbers are the basic open sets (i.e., the open intervals with rational endpoints) and their complements. The sets that are obtained in a countable number of steps by starting from the basic open sets and applying the operations of taking the complement and forming a countable union of previously obtained sets are the Borel sets. All Borel sets are regular, that is, they enjoy all the classical regularity properties. One example of a regularity property is Lebesgue measurability: a set of reals is Lebesgue measurable if it differs from a Borel set by a null set, namely, a set that can be covered by sets of basic open intervals of arbitrarily-small total length. Thus, trivially, every Borel set is Lebesgue measurable, but sets more complicated than the Borel ones may not be. Other classical regularity properties are the Baire property (a set of reals has the Baire property if it differs from an open set by a meager set, namely, a countable union of sets that are not dense in any interval) and the perfect set property (a set of reals has the perfect set property if it is either countable or contains a perfect set, namely, a nonempty closed set with no isolated points). In ZFC one can prove that there exist non-regular sets of reals, but the AC is necessary for this (Solovay 1970).
The analytic sets, also called Σ^1_1, are the continuous images of Borel sets. And the co-analytic, or Π^1_1, sets are the complements of analytic sets.
Starting from the analytic (or the co-analytic) sets and applying the operations of projection (from the product space R × 𝒩 to R) and complementation, one obtains the projective sets. The projective sets form a hierarchy of increasing complexity. For example, if A ⊆ R × 𝒩 is co-analytic, then the projection {x ∈ R : ∃y ∈ 𝒩 ((x, y) ∈ A)} is a projective set in the next level of complexity above the co-analytic sets. Those sets are called Σ^1_2, and their complements are called Π^1_2.
The projective sets come up very naturally in mathematical practice, for it turns out that a set A of reals is projective if and only if it is definable in the structure
ℛ = (R, +, ·, Z).
That is, there is a first-order formula φ(x, y_1, …, y_n) in the language for the structure such that for some r_1, …, r_n ∈ R,
A = {x ∈ R : ℛ ⊨ φ(x, r_1, …, r_n)}.
ZFC proves that every analytic set, and therefore every co-analytic set, is Lebesgue measurable and has the Baire property. It also proves that every analytic set has the perfect set property. But the perfect set property for co-analytic sets implies that the first uncountable cardinal, ℵ_1, is a large cardinal in the constructible universe L (see Section 7), namely a so-called inaccessible cardinal (see Section 10), which implies that one cannot prove in ZFC that every co-analytic set has the perfect set property.
The theory of projective sets of complexity greater than co-analytic is completely undetermined by ZFC. For example, in L there is a Σ^1_2 set that is not Lebesgue measurable and does not have the Baire property, whereas if Martin’s axiom holds (see Section 11), every such set has those regularity properties. There is, however, an axiom, called the axiom of Projective Determinacy, or PD, that is consistent with ZFC, modulo the consistency of some large cardinals (in fact, it follows from the existence of some large cardinals), and implies that all projective sets are regular. Moreover, PD settles essentially all questions about the projective sets. See the entry on large cardinals and determinacy for further details.
6.2 Determinacy
A regularity property of sets that subsumes all other classical regularity properties is that of being determined. For simplicity, we shall work with the Baire space 𝒩. Recall that the elements of 𝒩 are functions f : N → N, that is, sequences of natural numbers of length ω. The space 𝒩 is topologically equivalent (i.e., homeomorphic) to the set of irrational points of R. So, since we are interested in the regularity properties of subsets of R, and since countable sets, such as the set of rationals, are negligible in terms of those properties, we may as well work with 𝒩 instead of R.
Given A ⊆ 𝒩, the game associated to A, denoted by G_A, has two players, I and II, who alternately play natural numbers n_i: I plays n_0, then II plays n_1, then I plays n_2, and so on. So, at stage 2k player I plays n_{2k}, and at stage 2k+1 player II plays n_{2k+1}. We may visualize a run of the game as follows:
I:   n_0       n_2       n_4    ⋯    n_{2k}        ⋯
II:      n_1       n_3        ⋯          n_{2k+1}  ⋯
After infinitely many moves, the two players produce an infinite sequence n_0, n_1, n_2, … of natural numbers. Player I wins the game if the sequence belongs to A. Otherwise, player II wins.
The game G_A is determined if there is a winning strategy for one of the players. A winning strategy for one of the players, say for player II, is a function σ from the set of finite sequences of natural numbers into N such that if she plays according to σ, i.e., she plays σ(n_0, …, n_{2k}) at stage 2k+1, she will always win the game, no matter what the other player does.
We say that a subset A of 𝒩 is determined if and only if the game G_A is determined.
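For games of *finite* length, determinacy is Zermelo's classical theorem and can be checked by backward induction over the game tree. The infinite games described above are not computable in general, so the following is only a finite-horizon illustration of our own: players I and II alternately choose digits from {0, 1} for four rounds, and I wins iff the resulting sequence lies in the payoff set A.

```python
import itertools

def I_has_winning_strategy(seq, A, length):
    """True iff, from position `seq`, player I can force the play into A."""
    if len(seq) == length:
        return seq in A
    outcomes = [I_has_winning_strategy(seq + (d,), A, length) for d in (0, 1)]
    if len(seq) % 2 == 0:   # player I to move: one good move suffices
        return any(outcomes)
    else:                   # player II to move: I must win against every move
        return all(outcomes)

# Payoff set: sequences of length 4 with at least two 1s.
A = {s for s in itertools.product((0, 1), repeat=4) if sum(s) >= 2}

# Player I wins G_A: playing 1 at both of her turns guarantees two 1s.
assert I_has_winning_strategy((), A, 4)
# In the complementary game II wins, so the game is determined either way.
complement = {s for s in itertools.product((0, 1), repeat=4)} - A
assert not I_has_winning_strategy((), complement, 4)
```

By contrast, for arbitrary subsets of the Baire space the corresponding tree is infinite and no such exhaustive induction exists, which is why determinacy beyond the Borel sets is an axiom rather than a theorem.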
One can prove in ZFC—and the use of the AC is necessary—that there are non-determined sets. Thus, the Axiom of Determinacy (AD), which asserts that all subsets of 𝒩 are determined, is incompatible with the AC. But Donald Martin proved, in ZFC, that every Borel set is determined. Further, he showed that if there exists a large cardinal called measurable (see Section 10), then even the analytic sets are determined. The axiom of Projective Determinacy (PD) asserts that every projective set is determined. It turns out that PD implies that all projective sets of reals are regular, and Woodin has shown that, in a certain sense, PD settles essentially all questions about the projective sets. Moreover, PD seems to be necessary for this. Another axiom, AD^{L(R)}, asserts that AD holds in L(R), which is the least transitive class that contains all the ordinals and all the real numbers and satisfies the ZF axioms (see Section 7). So, AD^{L(R)} implies that every set of reals that belongs to L(R) is regular. Also, since L(R) contains all projective sets, AD^{L(R)} implies PD.
6.3 The Continuum Hypothesis
The Continuum Hypothesis (CH), formulated by Cantor in 1878, asserts that every infinite set of real numbers has cardinality either ℵ_0 or the same cardinality as R. Thus, the CH is equivalent to 2^{ℵ_0} = ℵ_1.
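The reason the CH is a nontrivial dichotomy is Cantor's theorem that 2^{ℵ_0} > ℵ_0, proved by diagonalization. The core of the diagonal argument survives in a finite truncation, sketched here as our own illustration: given any list of binary sequences, flipping the diagonal yields a sequence missing from the list.

```python
def diagonal(listing, length):
    """A binary sequence of the given length that differs from the n-th
    listed sequence at position n, hence escapes every listed sequence."""
    return tuple(1 - listing[n][n] for n in range(length))

listing = [(0, 0, 0), (1, 1, 1), (0, 1, 0)]
d = diagonal(listing, 3)
assert d == (1, 0, 1)
assert all(d != s for s in listing)  # d is not on the list
```

Applied to an ω-indexed listing of elements of 2^ω, the same flip shows that no countable list exhausts the binary sequences, so the continuum is uncountable; the CH asks exactly where between ℵ_1 and beyond its cardinality falls.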
Cantor proved in 1883 that closed sets of real numbers have the perfect set property, from which it follows that every uncountable closed set of real numbers has the same cardinality as R. Thus, the CH holds for closed sets. More than thirty years later, Pavel Aleksandrov extended the result to all Borel sets, and then Mikhail Suslin to all analytic sets. Thus, all analytic sets satisfy the CH. However, the efforts to prove that co-analytic sets satisfy the CH would not succeed, as this is not provable in ZFC.
In 1938 Gödel proved the consistency of the CH with ZFC. Assuming that ZF is consistent, he built a model of ZFC, known as the constructible universe, in which the CH holds. Thus, the proof shows that if ZF is consistent, then so is ZF together with the AC and the CH. Hence, assuming ZF is consistent, the AC cannot be disproved in ZF and the CH cannot be disproved in ZFC.
See the entry on the continuum hypothesis for the current status of the problem.
7. Gödel’s constructible universe
Gödel’s constructible universe, denoted by L, is defined by transfinite recursion on the ordinals, similarly to V, but at successor steps, instead of taking the power set of V_α to obtain V_{α+1}, one only takes the set of those subsets of L_α that are definable in L_α, using elements of L_α as parameters. Thus, letting PDef(X) denote the set of all the subsets of X that are definable in the structure (X, ∈) by a formula of the language of set theory, using elements of X as parameters of the definition, we let
L_0 = ∅
L_{α+1} = PDef(L_α)
L_λ = ⋃{L_α : α < λ}, whenever λ is a limit ordinal.
Then L is the union of all the L_α, for α an ordinal.
Gödel showed that L satisfies all the ZFC axioms, and also the CH. In fact, it satisfies the Generalized Continuum Hypothesis (GCH), namely 2^{ℵ_α} = ℵ_{α+1} for every ordinal α.
The statement V = L, called the axiom of constructibility, asserts that every set belongs to L. It holds in L, hence it is consistent with ZFC, and implies both the AC and the GCH.
The proper class L, together with the ∈ relation restricted to L, is an inner model of ZFC, that is, a transitive (i.e., containing all elements of its elements) class that contains all ordinals and satisfies all the ZFC axioms. It is in fact the smallest inner model of ZFC, as any other inner model contains it.
More generally, given any set A, one can build the smallest transitive model of ZF that contains A and all the ordinals in a similar manner as L, but now starting with the transitive closure of {A}, i.e., the smallest transitive set that contains A, instead of ∅. The resulting model, L(A), need not, however, be a model of the AC. One very important such model is L(R), the smallest transitive model of ZF that contains all the ordinals and all the real numbers.
The theory of constructible sets owes much to the work of Ronald Jensen. He developed the so-called fine structure theory of L and isolated some combinatorial principles, such as diamond (♢) and square (□), which can be used to carry out complicated constructions of uncountable mathematical objects. Fine structure theory also plays an important role in the analysis of bigger L-like models, such as L(R) or the inner models for large cardinals (see Section 10.1).
8. Forcing
In 1963, twenty-five years after Gödel’s proof of the consistency of the CH and the AC, relative to the consistency of ZF, Paul Cohen (1966) proved the consistency of the negation of the CH, and also of the negation of the AC, relative to the consistency of ZF. Thus, if ZF is consistent, then the CH is undecidable in ZFC, and the AC is undecidable in ZF. To achieve this, Cohen devised a new and extremely powerful technique, called forcing, for expanding countable transitive models of ZF, or of ZFC.
Since the axiom V = L implies the AC and the CH, any model of the negation of the AC or the CH must violate V = L. So, let us illustrate the idea of forcing in the case of building a model for the negation of V = L. We start with a transitive model M of ZFC, which we may assume, without loss of generality, to be a model of V = L. To violate V = L we need to expand M by adding a new set r so that, in the expanded model, r will be non-constructible. Since all hereditarily-finite sets are constructible, we aim to add an infinite set of natural numbers. The first problem we face is that M may already contain all subsets of ω. Fortunately, by the Löwenheim-Skolem theorem for first-order logic, M has an elementary submodel which is isomorphic to a countable transitive model N. So, since we are only interested in the statements that hold in M, and not in M itself, we may as well work with N instead of M, and so we may assume that M itself is countable. Then, since P(ω) is uncountable, there are plenty of subsets of ω that do not belong to M. But, unfortunately, we cannot just pick any infinite subset r of ω that does not belong to M and add it to M. The reason is that r may encode a lot of information, so that when added to M, M is no longer a model of ZF, or it is still a model of V = L. To avoid this, one needs to pick r with great care. The idea is to pick r generic over M, meaning that r is built from its finite approximations in such a way that it does not have any property that is definable in M and can be avoided.
For example, by viewing r as an infinite sequence of natural numbers in increasing order, the property of r containing only finitely-many even numbers can be avoided, because given any finite approximation to r—i.e., any finite increasing sequence of natural numbers—one can always extend it by adding more even numbers, so that at the end of the construction r will contain infinitely-many even numbers; while the property of containing the number 7 cannot be avoided, because once a finite approximation to r contains the number 7, it stays there no matter how the construction of r proceeds. Since M is countable, such generic r exist. Then the expanded model M[r], which includes M and contains the new set r, is called a generic extension of M. Since we assumed M is a transitive model of V = L, the model M[r] is just L_α(r), where α is the supremum of the ordinals of M. Then one can show, using the forcing relation between finite approximations to r and formulas in the language of set theory expanded with so-called names for sets in the generic extension, that M[r] is a model of ZFC and r is not constructible in M[r], hence the axiom of constructibility V = L fails.
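The way a generic object emerges from its finite approximations can be sketched computationally. This is only a toy of ours in the spirit of the Rasiowa-Sikorski construction, not forcing itself: conditions are finite increasing sequences of naturals, and we meet, one by one, the dense requirements "the sequence is longer than k and contains an even number past position k", which is exactly the mechanism behind the even-numbers example above.

```python
def extend_to_meet(cond, k):
    """Extend the finite increasing sequence `cond` to meet requirement k:
    length greater than k, with an even entry added at the end."""
    cond = list(cond)
    while len(cond) <= k:                     # make the condition long enough
        cond.append(cond[-1] + 1 if cond else 0)
    last = cond[-1]
    # append the next even number above `last`, keeping the sequence increasing
    cond.append(last + 1 if (last + 1) % 2 == 0 else last + 2)
    return cond

cond = []
for k in range(10):          # meet the first 10 dense requirements in turn
    cond = extend_to_meet(cond, k)

assert len([n for n in cond if n % 2 == 0]) >= 10  # evens keep appearing
assert all(a < b for a, b in zip(cond, cond[1:]))  # still a legal condition
```

In actual forcing the requirements are all the dense sets definable in the countable model M, and the limit of the conditions is the generic real r; countability of M is what guarantees the list of requirements can be enumerated and met.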
In general, a forcing extension of a model M is obtained by adding to M a generic subset G of some partially ordered set P that belongs to M. In the above example, P would be the set of all finite increasing sequences of natural numbers, seen as finite approximations to the infinite sequence r, ordered by ⊆; and G would be the set of all finite initial segments of r.
In the case of the consistency proof of the negation of the CH, one starts from a model M of ZFC, as before, and adds ℵ_2 new subsets of ω, so that in the generic extension the CH fails. In this case one needs to use an appropriate partial ordering P so that the ℵ_2 of M is not collapsed, i.e., it is the same as the ℵ_2 of the generic extension, and thus the generic extension M[G] will satisfy the sentence that says that there are ℵ_2 real numbers.
8.1 Other applications of forcing
Besides the CH, many other mathematical conjectures and problems about the continuum, and other infinite mathematical objects, have been shown undecidable in ZFC using the forcing technique.
One important example is Suslin’s Hypothesis (SH). Cantor had shown that every linearly ordered set S without endpoints that is dense (i.e., between any two different elements of S there is another one), complete (i.e., every subset of S that is bounded above has a supremum), and with a countable dense subset is isomorphic to the real line. Suslin conjectured that this is still true if one relaxes the requirement of containing a countable dense subset to being ccc, i.e., every collection of pairwise-disjoint intervals is countable. In the early 1970s, Thomas Jech produced a consistent counterexample using forcing, and Ronald Jensen showed that a counterexample exists in L. About the same time, Robert Solovay and Stanley Tennenbaum (1971) developed and used for the first time the iterated forcing technique to produce a model where the SH holds, thus showing its independence from ZFC. In order to make sure that the SH holds in the generic extension, one needs to destroy all counterexamples, but by destroying one particular counterexample one may inadvertently create new ones, and so one needs to force again and again; in fact one needs to go on for at least ω_2-many steps. This is why a forcing iteration is needed.
Among other famous mathematical problems that have been shown undecidable in ZFC thanks to the forcing technique, especially using iterated forcing and sometimes combined with large cardinals, we may mention the Measure Problem and the Borel Conjecture in measure theory, Kaplansky’s Conjecture on Banach algebras, and Whitehead’s Problem in group theory.
9. The search for new axioms
As a result of 60 years of development of the forcing technique, and its applications to many open problems in mathematics, there are now literally hundreds of problems and questions, in practically all areas of mathematics, that have been shown independent of ZFC. These include almost all important questions about the structure of uncountable sets. One might say that the undecidability phenomenon is pervasive, to the point that the investigation of the uncountable has been rendered nearly impossible in ZFC alone (see however Shelah (1994) for remarkable exceptions).
This prompts the question about the truth-value of the statements that are undecided by ZFC. Should one be content with them being undecidable? Does it make sense at all to ask for their truth-value? There are several possible reactions to this. One is the skeptic’s position: the statements that are undecidable in ZFC have no definite answer; and they may even be inherently vague. Another, the common one among mathematicians, is Gödel’s position: the undecidability only shows that the ZFC system is too weak to answer those questions, and therefore one should search for new axioms that once added to ZFC would answer them. The search for new axioms has been known as Gödel’s Program. See Hauser (2006) for a thorough philosophical discussion of the Program, and also the entry on large cardinals and determinacy for philosophical considerations on the justification of new axioms for set theory.
A central theme of set theory is thus the search and classification of new axioms. These fall currently into two main types: the axioms of large cardinals and the forcing axioms.
10. Large cardinals
One cannot prove in ZFC that there exists a regular limit cardinal κ, for if κ is such a cardinal, then L_κ is a model of ZFC, and so ZFC would prove its own consistency, contradicting Gödel’s second incompleteness theorem. Thus, the existence of a regular limit cardinal must be postulated as a new axiom. Such a cardinal is called weakly inaccessible. If, in addition, κ is a strong limit, i.e., 2^λ < κ for every cardinal λ < κ, then κ is called strongly inaccessible. A cardinal κ is strongly inaccessible if and only if it is regular and V_κ is a model of ZFC. If the GCH holds, then every weakly inaccessible cardinal is strongly inaccessible.
Large cardinals are uncountable cardinals satisfying some properties that make them very large, and whose existence cannot be proved in ZFC. The first weakly inaccessible cardinal is just the smallest of all large cardinals. Beyond inaccessible cardinals there is a rich and complex variety of large cardinals, which form a linear hierarchy in terms of consistency strength, and in many cases also in terms of outright implication. See the entry on independence and large cardinals for more details.
To formulate the next stronger large-cardinal notion, let us say that a subset C of an infinite cardinal κ is closed if every limit of elements of C which is less than κ is also in C; and is unbounded if for every α < κ there exists β ∈ C greater than α. For example, the set of limit ordinals less than κ is closed and unbounded. Also, a subset S of κ is called stationary if it intersects every closed unbounded subset of κ. If κ is regular and uncountable, then the set of all ordinals less than κ of cofinality ω is an example of a stationary set. A regular cardinal κ is called Mahlo if the set of strongly inaccessible cardinals smaller than κ is stationary. Thus, the first Mahlo cardinal is much larger than the first strongly inaccessible cardinal, as there are κ-many strongly inaccessible cardinals smaller than κ.
Much stronger large cardinal notions arise from considering strong reflection properties. Recall that the Reflection Principle (Section 4), which is provable in ZFC, asserts that every true sentence (i.e., every sentence that holds in V) is true in some V_α. A strengthening of this principle to second-order sentences yields some large cardinals. For example, κ is strongly inaccessible if and only if every Σ^1_1 sentence (i.e., existential second-order sentence in the language of set theory, with one additional predicate symbol) true in any structure of the form (V_κ, ∈, A), where A ⊆ V_κ, is true in some (V_α, ∈, A ∩ V_α) with α < κ. The same type of reflection, but now for Π^1_1 sentences (i.e., universal second-order sentences), yields a much stronger large cardinal property of κ, called weak compactness. Every weakly compact cardinal κ is Mahlo, and the set of Mahlo cardinals smaller than κ is stationary. By allowing reflection for more complex second-order, or even higher-order, sentences one obtains large cardinal notions stronger than weak compactness.
The most famous large cardinals, called measurable, were discovered by Stanisław Ulam in 1930 as a result of his solution to the Measure Problem. A (two-valued) measure, or ultrafilter, on a cardinal κ is a subset U of P(κ) that has the following properties: (i) the intersection of any two elements of U is in U; (ii) if X ∈ U and Y is a subset of κ such that X ⊆ Y, then Y ∈ U; and (iii) for every X ⊆ κ, either X ∈ U or κ − X ∈ U, but not both. A measure U is called κ-complete if every intersection of fewer than κ elements of U is also in U. And a measure is called non-principal if there is no α < κ that belongs to all elements of U. A cardinal κ is called measurable if there exists a measure on κ that is κ-complete and non-principal.
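Clauses (i)-(iii) can be checked directly on a finite toy example of our own: the *principal* ultrafilter U = {X ⊆ S : a ∈ X} on a small set S. (The point of measurability is precisely that the ultrafilter be non-principal and κ-complete, which no finite or countable set admits; finite sets carry only principal ultrafilters.)

```python
from itertools import combinations

S = frozenset({0, 1, 2})

def powerset(s):
    """All subsets of s, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

a = 0
U = [X for X in powerset(S) if a in X]   # the principal ultrafilter at a

assert all((X & Y) in U for X in U for Y in U)                # (i)  closed under intersection
assert all(Y in U for X in U for Y in powerset(S) if X <= Y)  # (ii) upward closed
assert all((X in U) != ((S - X) in U) for X in powerset(S))   # (iii) X or its complement, never both
```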
Measurable cardinals can be characterized by elementary embeddings of the universe V into some transitive class M. That such an embedding j : V → M is elementary means that j preserves truth, i.e., for every formula φ(x_1, …, x_n) of the language of set theory, and every a_1, …, a_n, the sentence φ(a_1, …, a_n) holds in V if and only if φ(j(a_1), …, j(a_n)) holds in M. It turns out that a cardinal κ is measurable if and only if there exists an elementary embedding j : V → M, with M transitive, so that κ is the first ordinal moved by j, i.e., the first ordinal such that j(κ) ≠ κ. We say that κ is the critical point of j, and write crit(j) = κ. The embedding j is definable from a κ-complete non-principal measure on κ, using the so-called ultrapower construction. Conversely, if j : V → M is an elementary embedding, with M transitive and κ = crit(j), then the set U = {X ⊆ κ : κ ∈ j(X)} is a κ-complete non-principal ultrafilter on κ. A measure U obtained in this way from j is called normal.
Every measurable cardinal κ is weakly compact, and there are many weakly compact cardinals smaller than κ. In fact, below κ there are many cardinals that are totally indescribable, i.e., they reflect all sentences, of any complexity, and in any higher-order language.
If κ is measurable and j : V → M is given by the ultrapower construction, then V_κ ⊆ M, and every sequence of length less than or equal to κ of elements of M belongs to M. Thus, M is quite similar to V, but it cannot be V itself. Indeed, a famous theorem of Kenneth Kunen shows that there cannot be any elementary embedding j : V → V other than the trivial one, i.e., the identity. All known proofs of this result use the Axiom of Choice, and it is an important open question whether the axiom is necessary. Kunen’s Theorem opens the door to formulating large cardinal notions stronger than measurability by requiring that M be closer to V.
For example, κ is called strong if for every ordinal α there exists an elementary embedding j : V → M, for some M transitive, such that κ = crit(j) and V_α ⊆ M.
Another important, and much stronger, large cardinal notion is supercompactness. A cardinal κ is supercompact if for every α there exists an elementary embedding j : V → M, with M transitive and critical point κ, so that j(κ) > α and every sequence of elements of M of length α belongs to M.
Woodin cardinals fall between strong and supercompact. Every supercompact cardinal is Woodin, and if δ is Woodin, then V_δ is a model of ZFC in which there is a proper class of strong cardinals. Thus, while a Woodin cardinal δ need not itself be very strong—the first one is not even weakly compact—it implies the existence of many large cardinals in V_δ.
Beyond supercompact cardinals we find the extendible cardinals, the huge, the super huge, etc.
Kunen’s theorem about the non-existence of a non-trivial elementary embedding j : V → V actually shows that there cannot be an elementary embedding j : V_{λ+2} → V_{λ+2} different from the identity, for any λ.
The strongest large cardinal notions not known to be inconsistent, modulo ZFC, are the following:
There exists an elementary embedding j : V_{λ+1} → V_{λ+1} different from the identity.
There exists an elementary embedding j : L(V_{λ+1}) → L(V_{λ+1}) different from the identity.
Large cardinals form a linear hierarchy of increasing consistency strength. In fact they are the stepping stones of the interpretability hierarchy of mathematical theories. See the entry on independence and large cardinals for more details. Typically, given any sentence φ of the language of set theory, exactly one of the following three possibilities holds about the theory ZFC plus φ:
ZFC plus φ is inconsistent.
ZFC plus φ is equiconsistent with ZFC (i.e., ZFC is consistent if and only if so is ZFC plus φ).
ZFC plus φ is equiconsistent with ZFC plus the existence of some large cardinal.
Thus, large cardinals can be used to prove that a given sentence φ does not imply another sentence ψ, modulo ZFC, by showing that ZFC plus ψ implies the consistency of some large cardinal, whereas ZFC plus φ is consistent assuming the existence of a smaller large cardinal, or just assuming the consistency of ZFC. In other words, ψ has higher consistency strength than φ, modulo ZFC. Then, by Gödel’s second incompleteness theorem, ZFC plus φ cannot prove ψ, assuming ZFC plus φ is consistent.
As we already pointed out, one cannot prove in ZFC that large cardinals exist. But everything indicates that their existence not only cannot be disproved, but in fact the assumption of their existence is a very reasonable axiom of set theory. For one thing, there is a lot of evidence for their consistency, especially for those large cardinals for which it is possible to construct a canonical inner model.
10.1 Inner models of large cardinals
An inner model of ZFC is a transitive proper class that contains all the ordinals and satisfies all ZFC axioms. Thus, L is the smallest inner model, while V is the largest. Some large cardinals, such as inaccessible, Mahlo, or weakly compact, may exist in L. That is, if κ has one of these large cardinal properties, then it also has the property in L. But some large cardinals cannot exist in L. Indeed, Scott (1961) showed that if there exists a measurable cardinal κ, then V ≠ L. It is important to notice that κ does belong to L, since L contains all ordinals, but it is not measurable in L because a κ-complete non-principal measure on κ cannot exist there.
If κ is a measurable cardinal, then one can construct an L-like model in which κ is measurable by taking a κ-complete non-principal and normal measure U on κ, and proceeding as in the definition of L, but now using U as an additional predicate. The resulting model, called L[U], is an inner model of ZFC in which κ is measurable, and in fact κ is the only measurable cardinal. The model is canonical, in the sense that any other normal measure witnessing the measurability of κ would yield the same model, and it has many of the properties of L. For instance, it has a projective well-ordering of the reals, and it satisfies the GCH.
Building similar L-like models for stronger large cardinals, such as strong or Woodin, is much harder. Those models are of the form L[E], where E is a sequence of extenders, each extender being a coherent system of measures, that encode the relevant elementary embeddings.
The largest L-like inner models for large cardinals that have been obtained so far can contain Woodin cardinals that are limits of Woodin cardinals (Neeman 2002). However, building an L-like model for a supercompact cardinal is still a challenge. The supercompact barrier seems to be the crucial one, for Woodin has shown that for a natural kind of inner model for a supercompact cardinal κ, which he calls a weak extender model for the supercompactness of κ, all stronger large cardinals above κ that may exist in V, such as extendible, huge, etc., would also exist in the model.
10.2 Consequences of large cardinals
The existence of large cardinals has dramatic consequences, even for simply-definable small sets, like the projective sets of real numbers. For example, Solovay (1970) proved, assuming that there exists a measurable cardinal, that all Σ^1_2 sets of reals are Lebesgue measurable and have the Baire property, which cannot be proved in ZFC alone. And Shelah and Woodin (1990) showed that the existence of a proper class of Woodin cardinals implies that the theory of L(R), even with real numbers as parameters, cannot be changed by forcing, which implies that all sets of real numbers that belong to L(R) are regular. Further, under a weaker large-cardinal hypothesis, namely the existence of infinitely many Woodin cardinals, Martin and Steel (1989) proved that every projective set of real numbers is determined, i.e., the axiom of Projective Determinacy (PD) holds, hence all projective sets are regular. Moreover, Woodin showed that the existence of infinitely many Woodin cardinals, plus a measurable cardinal above all of them, implies that every set of reals in L(R) is determined, i.e., the axiom AD^{L(R)} holds, hence all sets of real numbers that belong to L(R), and therefore all projective sets, are regular. He also showed that Woodin cardinals provide the optimal large cardinal assumptions by proving that the following two statements:
1. There are infinitely many Woodin cardinals.
2. AD^{L(R)}.
are equiconsistent, i.e., ZFC plus 1 is consistent if and only if ZFC plus 2 is consistent. See the entry on large cardinals and determinacy for more details and related results.
Another area in which large cardinals play an important role is the exponentiation of singular cardinals. The so-called Singular Cardinal Hypothesis (SCH) completely determines the behavior of the exponentiation for singular cardinals, modulo the exponentiation for regular cardinals. The SCH follows from the GCH, and so it holds in L. A consequence of the SCH is that if 2^{ℵ_n} < ℵ_ω for all finite n, then 2^{ℵ_ω} = ℵ_{ω+1}. Thus, if the GCH holds for cardinals smaller than ℵ_ω, then it also holds at ℵ_ω. The SCH holds above the first supercompact cardinal (Solovay). But Magidor (1977) showed that, remarkably, assuming the existence of large cardinals it is possible to build a model of ZFC where the GCH first fails at ℵ_ω, hence the SCH fails. Large cardinals stronger than measurable are actually needed for this. In contrast, however, ZFC alone suffices to prove that if the SCH holds for all cardinals smaller than ℵ_{ω_1}, then it also holds for ℵ_{ω_1}. Moreover, if the SCH holds for all singular cardinals of countable cofinality, then it holds for all singular cardinals (Silver).
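For reference, the SCH itself has a compact formulation. This is the standard statement as given, e.g., in Jech (2003), not a quotation from this entry:

```latex
% Singular Cardinal Hypothesis: for every singular cardinal \kappa,
\[
  2^{\operatorname{cf}(\kappa)} < \kappa
  \;\Longrightarrow\;
  \kappa^{\operatorname{cf}(\kappa)} = \kappa^{+}.
\]
```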
11. Forcing axioms
Forcing axioms are axioms of set theory that assert that certain existential statements are absolute between the universe V of all sets and its (ideal) forcing extensions, i.e., some existential statements that hold in some forcing extensions of V are already true in V. The first forcing axiom was formulated by Donald Martin in the wake of the Solovay-Tennenbaum proof of the consistency of Suslin's Hypothesis, and is now known as Martin's Axiom (MA). Before we state it, let us say that a partial ordering is a non-empty set P together with a binary relation ≤ on P that is reflexive and transitive. Two elements, p and q, of P are called compatible if there exists r ∈ P such that r ≤ p and r ≤ q. An antichain of P is a subset of P whose elements are pairwise-incompatible. A partial ordering P is called ccc if every antichain of P is countable. A non-empty subset G of P is called a filter if (i) every two elements of G are compatible, and (ii) if p ∈ G and p ≤ q, then also q ∈ G. Finally, a subset D of P is called dense if for every p ∈ P there is q ∈ D such that q ≤ p.
MA asserts the following:
For every ccc partial ordering P and every set {D_α : α < κ}, where κ < 2^{ℵ_0}, of dense subsets of P, there exists a filter G ⊆ P that is generic for the set, i.e., G ∩ D_α ≠ ∅ for all α < κ.
Since MA follows easily from the CH, MA is only of interest if the CH fails. Martin and Solovay (1970) proved that MA plus the negation of the CH is consistent with ZFC, using iterated forcing with the ccc property. At first sight, MA may not look like an axiom, namely an obvious, or at least reasonable, assertion about sets, but rather like a technical statement about ccc partial orderings. It does look more natural, however, when expressed in topological terms, for it is simply a generalization of the well-known Baire Category Theorem, which asserts that in every compact Hausdorff topological space the intersection of countably-many dense open sets is non-empty. Indeed, MA is equivalent to:
In every compact Hausdorff ccc topological space, the intersection of fewer than 2^{ℵ_0}-many dense open sets is non-empty.
MA has many different equivalent formulations and has been used very successfully to settle a large number of open problems in other areas of mathematics. For example, MA plus the negation of the CH implies Suslin's Hypothesis and that every Σ^1_2 set of reals is Lebesgue measurable and has the Baire property. It also implies that 2^{ℵ_0} is a regular cardinal, but it does not decide what cardinal it is. See Fremlin (1984) for many more consequences of MA and other equivalent formulations. In spite of this, the status of MA as an axiom of set theory is still unclear. Perhaps the most natural formulation of MA, from a foundational point of view, is in terms of generic absoluteness. Namely, MA is equivalent to the following:
For every ccc partial ordering P, if an existential statement, containing subsets of some cardinal less than 2^{ℵ_0} as parameters, holds in an (ideal) generic extension of V obtained by forcing with P, then the statement is true, i.e., it holds in V. In other words, if a set having a property that depends only on bounded subsets of 2^{ℵ_0} exists in some (ideal) generic extension of V obtained by forcing with a ccc partial ordering, then a set with that property already exists in V.
The notion of ideal generic extension of V can be made precise in terms of so-called Boolean-valued models, which provide an alternative version of forcing.
Much stronger forcing axioms than MA (for ω_1) were introduced in the 1980s, such as J. Baumgartner's Proper Forcing Axiom (PFA), and the stronger Martin's Maximum (MM) of Foreman, Magidor, and Shelah (1988). Both the PFA and MM are consistent relative to the existence of a supercompact cardinal. The PFA asserts the same as MA, but for partial orderings that have a property weaker than the ccc, called properness, introduced by Shelah. And MM asserts the same for the wider class of partial orderings that, when forcing with them, do not destroy stationary subsets of ω_1.
Strong forcing axioms, such as the PFA and MM, imply that all projective sets of reals are determined (PD), and have many other strong consequences in infinite combinatorics. Notably, they imply that the cardinality of the continuum is ℵ_2.
A different forcing axiom is Woodin's Axiom (∗), which resulted from his analysis of P_max forcing extensions over models of AD. Even though a stronger form of MM known as MM^{++} and Axiom (∗) have similar and very rich consequences, both their motivation and their formulation are very different, to the point that they were seen as competing axioms. Their connection was far from being clear, until Asperó and Schindler (2021) showed that, in the presence of large cardinals, Axiom (∗) is equivalent to a bounded form of MM^{++}, and therefore to a natural generic absoluteness principle.
Bibliography
Asperó, D. and Schindler, R. D., 2021, "Martin's Maximum^{++} implies Woodin's Axiom (∗)", Annals of Mathematics, 193(3): 793–835.
Bagaria, J., 2008, “Set Theory”, in The Princeton Companion to Mathematics, edited by Timothy Gowers; June Barrow-Green and Imre Leader, associate editors. Princeton: Princeton University Press.
Cohen, P.J., 1966, Set Theory and the Continuum Hypothesis, New York: W. A. Benjamin, Inc.
Enderton, H.B., 1977, Elements of Set Theory, New York: Academic Press.
Ferreirós, J., 2007, Labyrinth of Thought: A History of Set Theory and its Role in Modern Mathematics, Second revised edition, Basel: Birkhäuser.
Foreman, M., Magidor, M., and Shelah, S., 1988, “Martin’s maximum, saturated ideals and non-regular ultrafilters”, Part I, Annals of Mathematics, 127: 1–47.
Fremlin, D.H., 1984, “Consequences of Martin’s Axiom”, Cambridge tracts in Mathematics #84. Cambridge: Cambridge University Press.
Gödel, K., 1931, "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I," Monatshefte für Mathematik und Physik, 38: 173–198. English translation in Gödel 1986, 144–195.
–––, 1938, “The consistency of the axiom of choice and of the generalized continuum hypothesis”, Proceedings of the National Academy of Sciences, U.S.A. 24: 556–557.
–––, 1986, Collected Works I. Publications 1929–1936, S. Feferman et al. (eds.), Oxford: Oxford University Press.
Hauser, K., 2006, “Gödel’s program revisited, Part I: The turn to phenomenology”, Bulletin of Symbolic Logic, 12(4): 529–590.
Jech, T., 2003, Set theory, 3d Edition, New York: Springer.
Jensen, R.B., 1972, “The fine structure of the constructible hierarchy”, Annals of Mathematical Logic, 4(3): 229–308.
Kanamori, A., 2003, The Higher Infinite, Second Edition. Springer Monographs in Mathematics, New York: Springer.
Kechris, A.S., 1995, Classical Descriptive Set Theory, Graduate Texts in Mathematics, New York: Springer Verlag.
Kunen, K., 1980, Set Theory, An Introduction to Independence Proofs, Amsterdam: North-Holland.
Levy, A., 1960, “Axiom schemata of strong infinity in axiomatic set theory”, Pacific Journal of Mathematics, 10: 223–238.
–––, 1979, Basic Set Theory, New York: Springer.
Magidor, M., 1977, “On the singular cardinals problem, II”, Annals of Mathematics, 106: 514–547.
Martin, D.A. and R. Solovay, 1970, “Internal Cohen Extensions”, Annals of Mathematical Logic, 2: 143–178.
Martin, D.A. and J.R. Steel, 1989, “A proof of projective determinacy”, Journal of the American Mathematical Society, 2(1): 71–125.
Mathias, A.R.D., 2001, “Slim models of Zermelo Set Theory”, Journal of Symbolic Logic, 66: 487–496.
Neeman, I., 2002, “Inner models in the region of a Woodin limit of Woodin cardinals”, Annals of Pure and Applied Logic, 116: 67–155.
Scott, D., 1961, “Measurable cardinals and constructible sets”, Bulletin de l’Académie Polonaise des Sciences. Série des Sciences Mathématiques, Astronomiques et Physiques, 9: 521–524.
Shelah, S., 1994, “Cardinal Arithmetic”, Oxford Logic Guides, 29, New York: The Clarendon Press, Oxford University Press.
–––, 1998, Proper and improper forcing, 2nd Edition, New York: Springer-Verlag.
Shelah, S. and W.H. Woodin, 1990, “Large cardinals imply that every reasonably definable set of reals is Lebesgue measurable”, Israel Journal of Mathematics, 70(3): 381–394.
Solovay, R., 1970, “A model of set theory in which every set of reals is Lebesgue measurable”, Annals of Mathematics, 92: 1–56.
Solovay, R. and S. Tennenbaum, 1971, “Iterated Cohen extensions and Souslin’s problem”, Annals of Mathematics (2), 94: 201–245.
Todorcevic, S., 1989, “Partition Problems in Topology”, Contemporary Mathematics, Volume 84. American Mathematical Society.
Ulam, S., 1930, ‘Zur Masstheorie in der allgemeinen Mengenlehre’, Fundamenta Mathematicae, 16: 140–150.
Woodin, W.H., 1999, The Axiom of Determinacy, Forcing Axioms, and the Nonstationary Ideal, De Gruyter Series in Logic and Its Applications 1, Berlin-New York: Walter de Gruyter.
–––, 2001, “The Continuum Hypothesis, Part I”, Notices of the AMS, 48(6): 567–576, and “The Continuum Hypothesis, Part II”, Notices of the AMS 48(7): 681–690.
Zeman, M., 2001, Inner Models and Large Cardinals, De Gruyter Series in Logic and Its Applications 5, Berlin-New York: Walter de Gruyter.
Zermelo, E., 1908, “Untersuchungen über die Grundlagen der Mengenlehre, I”, Mathematische Annalen 65: 261–281. Reprinted in Zermelo 2010: 189–228, with a facing-page English translation, and an Introduction by Ulrich Felgner (2010). English translation also in van Heijenoort 1967: 201–215.
Other Internet Resources
Jech, Thomas, “Set Theory”, The Stanford Encyclopedia of Philosophy (Fall 2014 Edition), Edward N. Zalta (ed.), URL = . [This was the previous entry on set theory in the Stanford Encyclopedia of Philosophy — see the version history.]
Related Entries
set theory: continuum hypothesis | set theory: early development | set theory: independence and large cardinals | set theory: large cardinals and determinacy
Copyright © 2023 by
Joan Bagaria <joan.bagaria@icrea.cat>
The Stanford Encyclopedia of Philosophy is copyright © 2023 by The Metaphysics Research Lab, Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054 |
189591 | https://www.pearson.com/channels/calculus/asset/6fb8e38d/deriving-trigonometric-identitiesc-differentiate-both-sides-of-the-identity-sin-?gclid=EAIaIQobChMI4sbG3b6c-wIVChpMCh1swwOTEAAYASAAEgKPRvD_BwE&sideBarCollapsed=true&gclsrc=aw.ds | Calculus
{Use of Tech} Hours of daylight The number of hours of daylight at any point on Earth fluctuates throughout the year. In the Northern Hemisphere, the shortest day is on the winter solstice and the longest day is on the summer solstice. At 40° north latitude, the length of a day is approximated by D(t) = 12−3 cos (2π(t+10) / 365), where D is measured in hours and 0≤t≤365 is measured in days, with t=0 corresponding to January 1.
b. Find the rate at which the daylight function changes.
The Chain Rule for second derivatives
b. Use the formula in part (a) to calculate $\frac{d^2}{dx^2}\left(\sin\left(3x^4+5x^2+2\right)\right)$.
Deriving trigonometric identities
a. Differentiate both sides of the identity cos 2t = cos² t − sin² t to prove that sin 2t = 2 sin t cos t.
Deriving trigonometric identities
b. Verify that you obtain the same identity for sin 2t as in part (a) if you differentiate the identity cos 2t = 2 cos² t − 1.
{Use of Tech} Cell population The population of a culture of cells after t days is approximated by the function P(t) = 1600 / (1 + 7e^(−0.02t)), for t≥0.
e. Graph the growth rate. When is it a maximum and what is the population at the time that the growth rate is a maximum?
15–48. Derivatives Find the derivative of the following functions.
y = ln (x³+1)^π
15–48. Derivatives Find the derivative of the following functions.
y = 5^(3t)
15–48. Derivatives Find the derivative of the following functions.
y = 10^(ln 2x)
189592 | https://www.cs.odu.edu/~zeil/cs390/s22/Public/turing-jflap/index.html
Turing Machines: Examples
CS390, Spring 2022
Last modified: Mar 25, 2022
Contents:
1 Automat
2 New Ways to Solve Old Problems
2.1 Contains 101
2.2 Ends with 101
2.3 $0^n1^n$
2.4 Recognizing the Language Accepted by a TM
2.5 Recognizing the Function Computed by a TM
2.6 TMs as Functions 2
2.7 TMs as Functions 3
3 Turing Machines as Language Acceptors
3.1 $0^n1^n2^n$
3.2 $\alpha c\alpha$ where $\alpha \in \{a,b\}^*$
4 Turing Machines as Functions
4.1 Unary Form Integer Increment
4.2 Unary Form Integer Addition
4.3 Binary Addition - Multitape
Practice designing and working with Turing machines.
1 Automat
Review the Turing machines section of the Automat help pages.
Construct the TM from examples 8.2/8.3. Use it to solve Exercise 8.2.1.
Construct your own Turing machine to solve Exercise 8.2.2a. (Note that this language is not a CFL.)
2 New Ways to Solve Old Problems
2.1 Contains 101
We have previously designed this FA to accept strings that contain 101.
Design a Turing machine for the same language
Reveal
(Editor)
Note that we don’t need transitions out of the accept state, because the TM halts as soon as we reach an accept state.
Try running this on
010
01010
011001
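If you'd like to experiment outside of Automat, the controller can be simulated with a small table-driven interpreter. This is a sketch in Python, not Automat's own file format; the transition table encodes the TM above, with state 3 as the accepting state:

```python
BLANK = " "

def run_tm(transitions, accept, tape, max_steps=10_000):
    """Run a single-tape TM. `transitions` maps (state, symbol) to
    (next_state, symbol_to_write, move) with move in (-1, +1)."""
    tape = list(tape) or [BLANK]
    state, head = 0, 0
    for _ in range(max_steps):
        if state in accept:
            return True              # a TM halts as soon as it accepts
        sym = tape[head] if 0 <= head < len(tape) else BLANK
        if (state, sym) not in transitions:
            return False             # no applicable move: reject
        state, write, move = transitions[(state, sym)]
        if head < 0:                 # grow the tape on demand
            tape.insert(0, BLANK)
            head = 0
        elif head >= len(tape):
            tape.append(BLANK)
        tape[head] = write
        head += move
    return False                     # step bound exceeded: treat as reject

# "Contains 101": states 0-2 record the progress made toward seeing
# "101"; state 3 is the (only) accepting state.
CONTAINS_101 = {
    (0, "0"): (0, "0", +1),  (0, "1"): (1, "1", +1),
    (1, "1"): (1, "1", +1),  (1, "0"): (2, "0", +1),
    (2, "0"): (0, "0", +1),  (2, "1"): (3, "1", +1),
}
```

Because this controller only ever moves right and never rewrites a symbol, the machine really is just the DFA in disguise.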
2.2 Ends with 101
Here is a FA for accepting strings that end with 101.
In this automaton, we can enter the accepting state many times (e.g., 101010) but only accept the string if we are in the accepting state AND have processed all of the input.
Design a TM to accept the same language.
Reveal
(Editor)
Notice the difference in how the accepting state is handled. In a FA, we halt if we reach the end of input and are in a final state. But TMs don’t have the idea of “end of input” – a TM can make any number of passes over its input. A TM halts when it enters a final state regardless of how many symbols it has processed.
In this case, therefore, when we spot a 101, we look to see if we are at the end of the string (input $\epsilon$) and, if so, halt.
Try running this on
101
1010
1010101
(Editor)
But there’s a simpler, arguably more “Turing-machine-style” way to do this. In this TM, state 0 simply moves the head to the end of the string, then states 1…4 move right-to-left to check if the end of the string is “101”.
Try running this on
101
1010
1010101
2.3 $0^n1^n$
Now let’s consider a context-free language.
Consider the language of all strings of the form $0^n1^n$, i.e., some number of zeros followed by an equal number of ones. This is a typical “counting” problem for a PDA, as shown to the right.
Design a TM for this language.
Reveal
(Editor)
The TM does not use a stack, but we achieve much the same by “matching” symbols. We will remove pairs of ’0’s and ’1’s until we have no ’0’s left, then make sure that all of the ’1’s are also gone.
Starting from state 0, we erase the leftmost ‘0’.
In state 1, we then skip over any remaining ’0’s, expecting to hit a ‘1’.
When we see a ‘1’, we replace it with an ‘x’.
In state 2 we then run back to the left end of the string and return to state 0 so we can repeat the whole process.
If, in state 0, we are looking at an ‘x’, we make one last pass over the string to make sure we have no ’1’s remaining.
Try running this on
0011
00011
00111
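The marking strategy is easy to check against ordinary code before (or after) wiring up states. The sketch below mirrors the states informally; it assumes, as the machine must, that state 1 also skips over 'x' markers left by earlier passes:

```python
def accepts_0n1n(s):
    """Erase the leftmost '0', cross off the first matching '1'
    (writing 'x'), and repeat until the '0's run out."""
    tape = list(s)
    while True:
        # state 0: move to the leftmost unerased symbol
        i = 0
        while i < len(tape) and tape[i] == " ":
            i += 1
        if i == len(tape) or tape[i] == "x":
            # final pass: nothing but blanks and 'x's may remain
            return all(c in " x" for c in tape)
        if tape[i] != "0":
            return False           # a '1' before all '0's are gone
        tape[i] = " "              # erase the leftmost '0'
        # state 1: skip remaining '0's (and already-matched 'x's)
        j = i + 1
        while j < len(tape) and tape[j] in "0x":
            j += 1
        if j == len(tape) or tape[j] != "1":
            return False           # no '1' left to match this '0'
        tape[j] = "x"              # mark the matched '1'
```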
The TMs in this section illustrate a number of “programing style” tips that are worth pointing out to people new to Turing machines:
The TM controller is a deterministic FA. So anything you can do with a DFA, you can translate directly to a TM, never changing anything on the tape. But sometimes TMs can do the same thing in a simpler fashion.
Unlike FAs, a TM can make multiple passes over the input data.
TM’s can make a “pass” over their data both from left-to-right and from right-to-left.
Many TM algorithms spend a lot of time bouncing back and forth from one end of the input string to the other.
Recognizing TMs
Use a combination of inspection and of running test cases on each of the following Turing machines to determine what it does.
2.4 Recognizing the Language Accepted by a TM
What language does this TM accept?
Reveal
All strings over {0, 1} in which the number of "01" pairs is odd.
This is actually a regular language, though a regular expression for it is somewhat involved. You might have noticed that the TM processes the input on the tape in a strict left-to-right progression. The only use of the tape is to mark off 01 pairs with an 'x' for an odd occurrence and a 'y' for even occurrences. And that is purely decorative, because the TM never goes back to look at any of the changed markings. In effect, all of the work is done in the TM's controller, and that controller is a DFA.
2.5 Recognizing the Function Computed by a TM
Often we think of TMs not so much as acceptors of languages as computers of functions. The input to the function is the initial content of the tape and the output is the final content of the tape when the TM reaches an accepting state.
What is the function computed by this TM when presented with inputs over the language {a,b,A,B} ?
Reveal
It computes the "lower-case equivalent" of a string over {a, b, A, B}.
Try running this on the input “aAbBa”.
2.6 TMs as Functions 2
What is the function computed by this TM?
Reveal
It shifts all of the characters on the right of its starting position one step to the left, overwriting whatever character the head was originally positioned over.
The interesting thing here is the kind of two-step shuffle carried out in the transitions $q_0 \rightarrow q_1 \rightarrow q_4$, which sees and erases an 'a', steps to the left and writes an 'a', then steps back to the right into the newly erased position. The transitions $q_0 \rightarrow q_2 \rightarrow q_4$ do the same, but for 'b' instead of 'a'.
Now, of course, on an infinite tape of empty cells, shifting your entire input one position to the left has no real effect. But we could use this as a building block or design pattern as part of a larger TM function.
For example, suppose that we had a string over $\{a,b\}^*$ and that we wanted to erase the first 'a'. How would you write a TM to do this?
Reveal
We could design a TM that scans until it reaches the first ‘a’, then apply that “shift-left” pattern to move the remainder of the string one step to the left, overwriting and, in effect, deleting the ‘a’.
Try this TM on the input “bbaabab” to see this in action.
There are two changes here compared to our “shift-left” TM:
The new starting state $q_5$, which scans right until we find an ‘a’, then moves into our “shift-left” TM pattern.
The cells to the left of the starting position for the shift-left are not likely to be empty. So, in the transitions $q_1 \rightarrow q_4$ and $q_2 \rightarrow q_4$, instead of matching the input $\epsilon$, I’ve used a shortcut provided by Automat: ‘~’ denoting “any character”.
You may notice a symmetry in our shift-left pattern between the way that ‘a’ and ‘b’ are handled. If we wanted to do a shift-left for a language over three symbols instead of two, we would add another branch similar to $q_0 \rightarrow q_1 \rightarrow q_4$ and $q_0 \rightarrow q_2 \rightarrow q_4$. For a language over four characters, we would add yet another branch. Languages with larger alphabets add more states and transitions to this pattern, making everything just a little bit messier and hard to read.
A shortcut discussed in your text and supported by Automat is to store one character in a "variable", allowing its retrieval later. For example, a transition with an input labelled "{a,b}w" means "accept input 'a' or 'b', storing whichever you actually see in the variable w". Any later transition that mentions "w" is then assumed to be referring to the variable.
How would you use that to simplify our “delete-the-1st-a” TM?
Reveal
Here is the simplified TM. Try running it on the input “bbaabab” to see this in action.
In the transition $q_0 \rightarrow q_1$, we save the input character into the variable w.
In the transition $q_1 \rightarrow q_w$, we write the saved character onto the tape.
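For reference, the net effect of the delete-the-first-'a' machine can be reproduced in a few lines of ordinary Python (a sketch of the bulk shift, not of the state-by-state shuffle):

```python
def delete_first(s, target="a"):
    """Scan to the first `target`, then shift everything after it one
    cell to the left, overwriting it -- the TM's shuffle done in bulk."""
    tape = list(s)
    try:
        i = tape.index(target)
    except ValueError:
        return s                   # no 'a' to delete
    while i + 1 < len(tape):
        tape[i] = tape[i + 1]      # copy the right neighbour leftwards
        i += 1
    tape.pop()                     # the last cell is now blank
    return "".join(tape)
```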
2.7 TMs as Functions 3
What is the function (over the alphabet ${a,b,c}$) computed by this TM?
Hints:
It works from the middle out.
The language is over ${a,b,c}$. But this also uses tape symbols ${A, B, C}$ to denote characters that have already been "processed".
At the end of processing, states 5 and 6 rewrite all of the upper-case characters back to their lower-case equivalent.
Reveal
This is a palindrome generator. Given an input string over {a,b,c}, it copies the reverse of that string onto its own end.
For example, to process abc, it
Moves to the right end of the string.
Changes the final ‘c’ to ‘C’: abC
Moves to the right until it hits an empty cell and writes a ‘C’ there: abCC.
Moves to the left until it hits a lower-case letter (‘b’) and changes that to ‘B’: aBCC
Moves to the right until it hits an empty cell and writes a ‘B’ there: aBCCB.
Moves to the left until it hits a lower-case letter ('a') and changes that to 'A': ABCCB
Moves to the right until it hits an empty cell and writes an 'A' there: ABCCBA.
Moves to the left looking for a lower-case letter, but hits the left end of the string instead.
Enters state 5, which scans down the string reducing each upper case character to its lower-case equivalent: abccba
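The middle-out copying procedure can be sketched in Python. This compresses each mark-and-copy round trip into one loop iteration (a hypothetical helper, not Automat notation):

```python
def palindromize(s):
    """Mimic the TM: repeatedly take the rightmost unprocessed
    (lower-case) character, mark it upper-case, and append a copy
    past the right end; finally rewrite everything to lower case."""
    tape = list(s)
    k = len(s) - 1                   # rightmost unprocessed position
    while k >= 0:
        c = tape[k]
        tape[k] = c.upper()          # mark as processed, e.g. 'c' -> 'C'
        tape.append(c.upper())       # write a copy at the right end
        k -= 1                       # move left to the next lower-case char
    # states 5 and 6: reduce every character back to lower case
    return "".join(tape).lower()
```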
These TMs illustrate some more common programming tricks common to TMs:
Shifting data to one side or the other, often to make room to insert a symbol into the middle.
Using special symbols, that may or may not be part of the language’s alphabet, as markers to help the TM navigate.
3 Turing Machines as Language Acceptors
Earlier we saw ways to use TMs to accept languages that we had seen with earlier, less powerful automata.
Next, we can consider problems that could not be solved using the automata we have had before.
3.1 $0^n1^n2^n$
Design a TM to recognize the language of strings of the form $0^n1^n2^n$.
(Although $0^n1^n$ is a CFL and can be recognized by a pushdown automaton, $0^n1^n2^n$ is not context-free and requires a more powerful approach.)
Reveal
This is a language that is not CF and cannot be accepted by a PDA. But we can recognize this language with a fairly minor change to our TM for $0^n1^n$. That TM worked like this:
Starting from state 0, we erase the leftmost ‘0’.
In state 1, we then skip over any remaining ’0’s, expecting to hit a ‘1’.
When we see a ‘1’, we replace it with an ‘x’.
In state 2 we then run back to the left end of the string and return to state 0 so we can repeat the whole process.
If, in state 0, we are looking at an ‘x’, we make one last pass over the string to make sure we have no ’1’s remaining.
This TM inserts one additional state, $q_5$, that does for inputs of ‘2’ what state $q_2$ does for inputs of ‘1’. In essence, we implement a new algorithm:
Starting from state 0, we erase the leftmost ‘0’.
In state 1, we then skip over any remaining ’0’s, expecting to hit a ‘1’.
When we see a ‘1’, we replace it with an ‘x’.
In state 2 we continue moving to the right, skipping over any 1’s we encounter.
When we encounter a ‘2’, we replace it with an ‘x’, then run to the left end of the string and return to state 0 so we can repeat the whole process.
If, in state 0, we are looking at an ‘x’, we make one last pass over the string to make sure we have no ’1’s or ’2’s remaining.
Try executing this on inputs:
001122
00122
0011222
012012
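As with $0^n1^n$, the algorithm is easy to prototype in ordinary code. This sketch assumes that, when scanning right, the machine skips '0's and 'x's while looking for a '1', and skips '1's and 'x's while looking for a '2':

```python
def accepts_0n1n2n(s):
    """One pass per '0': erase it, mark one '1' and one '2' with 'x';
    finally check that nothing but blanks and markers remains."""
    tape = list(s)
    while True:
        i = 0
        while i < len(tape) and tape[i] == " ":
            i += 1                       # state 0: leftmost live symbol
        if i == len(tape) or tape[i] == "x":
            return all(c in " x" for c in tape)   # final verification pass
        if tape[i] != "0":
            return False
        tape[i] = " "                    # erase the leftmost '0'
        j = i + 1
        while j < len(tape) and tape[j] in "0x":
            j += 1                       # state 1: look for a '1'
        if j == len(tape) or tape[j] != "1":
            return False
        tape[j] = "x"                    # mark the matched '1'
        k = j + 1
        while k < len(tape) and tape[k] in "1x":
            k += 1                       # state 2: look for a '2'
        if k == len(tape) or tape[k] != "2":
            return False
        tape[k] = "x"                    # mark the matched '2'
```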
3.2 $\alpha c\alpha$ where $\alpha \in \{a,b\}^*$
3.2.1 Basic Implementation
Consider the problem of designing a TM to compare two strings over ${a,b}$ to see if they are equal.
All input to the TM must be on the tape, so we could choose to separate the two strings by a blank, e.g.,
aba abb
or by a separator character
abacabb
I’m going to choose the latter.
Another way to view this machine is to say that it recognizes strings over {a,b,c} of the form
$$\{ \alpha c\alpha \mid \alpha \in \{a,b\}^* \},$$
which is definitely not a CFL.
Design a TM to recognize this language:
Reveal
First, we can try to figure out an algorithmic approach for doing this
(video)
We can actually simplify that algorithm slightly by merging the handling of the ‘a’ and ‘b’ inputs.
Look at the first character in the first string (the leftmost character).
If it is an ‘a’, erase it and move forward to the marker (‘c’).
If it is a ‘b’, erase it and move forward to the marker (‘c’).
If it is the marker character ‘c’, then there are no characters remaining in the 1st string.
Make a scan to be sure that there are no ’a’s or ’b’s remaining in the second string. If there are none, accept – the strings were identical.
Move past any “already processed” characters ‘x’.
Look at the character after the ‘c’ and any ’x’s.
If it is an ‘a’ and the character from step 1 was an ‘a’, replace this ‘a’ by ‘x’.
If it is a ‘b’ and the character from step 1 was a ‘b’, replace this ‘b’ by ‘x’.
If the current character does not match the character from step 1, the two inputs strings are different – fail.
Move back to the left of the string and repeat starting with step 1.
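Here is the same erase-and-match procedure in ordinary Python, with the variable w playing the role of the stored character (a sketch; the real machine does the scanning by walking the head back and forth):

```python
def accepts_wcw(s):
    """Accept strings of the form alpha + 'c' + alpha over {a, b}."""
    tape = list(s)
    if tape.count("c") != 1:
        return False
    while True:
        m = tape.index("c")
        # step 1: leftmost surviving character of the first string
        i = 0
        while i < m and tape[i] == " ":
            i += 1
        # next unmatched character of the second string
        j = m + 1
        while j < len(tape) and tape[j] == "x":
            j += 1
        if i == m:                        # first string exhausted
            return j == len(tape)         # second must be exhausted too
        w = tape[i]                       # remember the character in w
        tape[i] = " "                     # erase it
        if j == len(tape) or tape[j] != w:
            return False                  # mismatch: reject
        tape[j] = "x"                     # mark the matched character
```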
Then we can try to design a TM to carry out this approach.
Here is the TM.
Things of note:
In the transition from $q_0$ to $q_1$, we recognize an ‘a’ or a ‘b’ and save it in the variable w.
$q_1$ moves us forward to the inter-string marker ‘c’.
$q_2$ moves us past any already-matched characters in the second string.
In the transition from $q_2$ to $q_3$, we recognize a match in w to the character encountered in the transition from $q_0$ to $q_1$.
$q_3$ moves us back to the start of the first string so that we can repeat the process.
3.2.2 Using Multiple Tapes
We can get an even simpler (IMO) TM by using multiple tapes.
Solve the same problem using a multi-tape TM.
Reveal
This isn’t quite as simple as we might hope. Our convention is that all input needs to be supplied on the first tape, and so the input will still be presented in the form:
string1cstring2
The procedure in this case is:
1. Copy the first string onto the second tape. At the end of this step, the head of the first tape will be pointing at 'c', and the head of the second tape will be just past the end of the copied first string.
2. Leave the 2nd tape head where it is, and move the first tape head to the right until we hit the end of the second string.
3. Now start moving to the left, comparing corresponding characters in the two strings. If they match we keep going. If they don't match, we fail.
4. If we reach the beginning of both strings with all matching, we accept.
Here is the TM for this new procedure.
Try executing this on:
aabcaab
abbcaab
abbbcaab
abbcabbb
Whether this is simpler than the single-tape approach may be, I suppose, open to debate. After all, trying to envision two tapes is unfamiliar and adds a bit of complication of its own. However, what cannot be denied is that the multi-tape version is faster. Because of the need to repeatedly traverse back and forth across the entire input, the single-tape machine runs in $O(n^2)$ time (where $n$ is the length of the longer of the two strings). This multi-tape TM makes exactly two passes over the input, and so does the same thing in $O(n)$ time.
4 Turing Machines as Functions
As computer scientists, we are familiar with the fact that numbers and strings of symbols can be encoded in binary (base-2).
Arithmetic in Turing machines is often conducted in an even simpler form: unary encoding, where a single symbol is used (either ‘0’ or ‘1’) and the value of the number is indicated by the length of the string. For example, the decimal number 4 is 100 in binary, and 1111 in unary. The decimal number 6 is 110 in binary, but 111111 in unary.
Unary encoding is often employed for TMs simply because it makes many elementary arithmetic operations nearly trivial.
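As a quick sanity check on the two encodings, a few lines of Python (the helper names are ours, used only for illustration):

```python
# Unary vs binary, using the examples from the text above.
def to_unary(k):
    """Encode k as a string of k ones (the length carries the value)."""
    return "1" * k

def from_unary(s):
    """Decode: the value is just the length of the string."""
    return len(s)

assert format(4, "b") == "100" and to_unary(4) == "1111"
assert format(6, "b") == "110" and to_unary(6) == "111111"
```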
4.1 Unary Form Integer Increment
Suppose that a tape contains an integer $k$ in unary form (i.e., a string of 1’s, where the value of the integer is the length of the string). Construct a TM to replace its input by the value of the function $f(k) = k+1$.
Reveal
You may have expected this to be more difficult than it really is.
To add 1 to a number written in unary format, we just add another ‘1’ to either end.
Try it out on:
1
11
111
4.2 Unary Form Integer Addition
Suppose that a tape contains a pair of integers $m, k$ in unary form, separated by a single ‘x’. Construct a TM to replace its input by the value of the function $f(m,k) = m+k$.
Reveal
All we need to do is to remove the ‘x’ separating the two numbers. Then shift the remaining characters left to fill in the gap.
Run this on
111x11
11x1
x11
11x
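The tape transformation itself can be sketched in Python (a simulation of the shift, not the TM in the applet; `unary_add` is a hypothetical helper name):

```python
# Simulate the tape edit: find the 'x', then shift every later symbol
# one cell to the left and blank the final (now duplicated) cell.

def unary_add(tape):
    cells = list(tape)
    i = cells.index("x")
    while i + 1 < len(cells):
        cells[i] = cells[i + 1]   # copy the next symbol one cell left
        i += 1
    cells.pop()                   # blank the duplicated last cell
    return "".join(cells)

assert unary_add("111x11") == "11111"   # 3 + 2 = 5
```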
4.3 Binary Addition - Multitape
The convenience of unary does not mean that we can’t do binary arithmetic in TMs.
Suppose that we have two binary integers on a tape, separated by a symbol ‘c’. Design a TM to compute the sum of those two integers.
This will be easiest to do with a multi-tape TM. As in our previous multi-tape example, we will start by copying the first string/number from tape 1 to tape 2, then position the heads of both tapes on the right-most symbol of each string.
After that, we can enact a binary addition, working right to left, in much the way that you would do manually.
For example, given the input tape:
|   |   |   |   |   |   |
|---|---|---|---|---|---|
| 1 | 1 | c | 1 | 1 | 0 |
we will split the two inputs onto two tapes like this.
|   |   |   |   |   |
|---|---|---|---|---|
| □ | 1 | 1 | 0 | □ |
| □ | □ | 1 | 1 | □ |
We will compute the sum and put it onto the first tape:
|   |   |   |   |   |
|---|---|---|---|---|
| 1 | 0 | 0 | 1 | □ |
| (don’t care) |   |   |   |   |
Reveal
This is the most complicated TM we have attempted so far, so let’s take it in stages.
First, start by copying the 1st number onto the 2nd tape. Run this on 11c110 to confirm that it works.
Next, let’s shift the tape head on tape 1 to the end of the 2nd number, and then position both tape heads on the right-most digits.
Run this on 11c110, using single-stepping to watch the movement of the tape heads, to confirm that it works.
Now we are ready to start actually doing addition. If we see two zeros, we write a zero onto the answer in tape 1. If we see a one and a zero, we write a one onto the answer in tape 1. Either way, we then shift the tape heads left to the next higher pair of digits. Of course, that leaves the case of seeing a pair of ones. For that, we would write a ‘0’ and “carry the one”.
Run this on inputs
0c0
0c1
1c0
1c1
to see how the single digits are handled.
The idea of the “carry” reminds us that our prior rules (e.g., $1+0 \rightarrow 1$) only apply when there is no carry. So we will use state $q_2$ to add digits with no carry, and a different state, $q_3$, will enact the rules for addition when we have a carry.
Here we have added the “with carry” addition rules, including the case of adding $0+0$ with a carry, which gives us a sum of $1$ but returns us to the non-carry state $q_2$.
Run this on input 011c010.
Now it’s time to start thinking about how to end this.
If we are in state 2 (no carry), and looking at two empty cells, then we have finished adding the numbers and can stop.
If we are in state 3 (carry), and looking at two empty cells, then we would need to write the 1 from the carry onto the left of the answer.
If we are in either of those states and see only one empty cell, that means that one of our two numbers has more digits than the other. We would treat those empty cells as if they contained a zero.
Run this on:
011c010
11c110
1111c1
and convince yourself that we really have implemented binary addition.
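If you want to check the carry logic outside the applet, here is a Python sketch of the same right-to-left procedure (carry = 0 plays the role of $q_2$, carry = 1 of $q_3$; `binary_add` is a hypothetical helper, not the TM itself):

```python
# Sketch of the two-state carry logic.  Digits are read right to left;
# a blank cell on the shorter number is treated as 0, and a final
# carry writes an extra leading 1.

def binary_add(a, b):
    carry = 0                             # 0 -> state q2, 1 -> state q3
    i, j, out = len(a) - 1, len(b) - 1, []
    while i >= 0 or j >= 0 or carry:
        da = int(a[i]) if i >= 0 else 0   # blank read as 0
        db = int(b[j]) if j >= 0 else 0
        s = da + db + carry
        out.append(str(s % 2))            # digit written to tape 1
        carry = s // 2                    # enter/leave the carry state
        i, j = i - 1, j - 1
    return "".join(reversed(out))

assert binary_add("11", "110") == "1001"    # the worked example above
assert binary_add("1111", "1") == "10000"
```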
© 2016-2022, Old Dominion Univ.
189593 | https://chemistrytalk.org/balance-redox-reactions/
How to Balance Redox Reactions
Core Concepts
Redox reactions are incredibly interesting and important reactions in chemistry, particularly in electrochemistry! However, they are not of much use unless properly balanced. In this tutorial, you will learn how to balance redox reactions, and what it means for such a reaction to become balanced.
Topics Covered in Other Articles
What is Electrochemistry?
Redox reactions
Oxidation States
Electrochemical Cells
Vocabulary
Oxidation: a type of chemical reaction where one or more electrons are lost.
Oxidation State/Number: a number assigned to an atom describing its degree of oxidation, meaning how many electrons it has gained or lost.
Reduction: a type of chemical reaction where one or more electrons are gained.
Review: What is a redox reaction?
A redox reaction is a reaction where both reduction and oxidation take place, meaning that the electrons lost during the oxidation of one species are then gained during the reduction of another species. In this manner, the electrons are exchanged between species, and there is no net charge on the equation as a whole.
What does it mean for a redox reaction to be balanced?
A redox reaction is balanced if all of the atoms in the reaction are balanced, meaning that for each element there are the same number of atoms on the reactants’ side as on the products’ side. It also means that the electrons and charges are balanced: the number of electrons lost during oxidation is equal to the number of electrons gained during reduction. Balancing redox reactions allows the net reaction equation to be conveyed in a concise manner.
How to balance redox reactions: step by step
Write the reduction half reaction
Write oxidation half reaction
Balance the number of atoms within the half reactions
Determine the number of electrons transferred in each half reaction
Balance the number of electrons between half reactions, so the number lost in one is equal to the number gained in the other.
Combine the two half reactions into a net equation. Be sure to always double check at the end if the atoms are balanced on both sides!
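Step 5 is just a least-common-multiple calculation. As a small illustration (the function name is ours, not from any chemistry library):

```python
# Scale the half-reactions so electrons cancel, using the least
# common multiple of electrons gained and electrons lost.
from math import lcm   # Python 3.9+

def balance_electrons(e_gained, e_lost):
    """Return the multipliers for the reduction and oxidation
    half-reactions so that the electrons cancel (step 5)."""
    n = lcm(e_gained, e_lost)
    return n // e_gained, n // e_lost

# Example #2 below: the reduction gains 2 electrons and the oxidation
# loses 6, so the reduction half-reaction is multiplied by 3.
assert balance_electrons(2, 6) == (3, 1)
```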
Examples of how to balance redox reactions
Example #1
Let’s go through a simple example step by step: a common electrochemical cell composed of copper and zinc.
Step 1.
Write the reduction half-reaction. Copper gets reduced in this reaction, which makes the following half-reaction:
Cu2+ (aq) → Cu0 (s)
Step 2.
Write the oxidation half-reaction. Zinc gets oxidized in this reaction, which makes the following half-reaction:
Zn0 (s) → Zn2+ (aq)
Step 3.
Balance the atoms within the half-reactions. All balanced! There is one copper atom in both the products and reactants of the reduction half-reaction and only one zinc atom in both the products and reactants of the oxidation half-reaction.
Step 4.
Determine the number of electrons in each half-reaction. As the oxidation number in the reduction half-reaction decreases by two, two electrons are transferred: Cu2+ (aq) + 2e– → Cu0 (s). In the same manner, as the oxidation number in the oxidation half-reaction increases by two, two electrons are transferred: Zn0 (s) → Zn2+ (aq) + 2e–
Step 5.
Balance the number of electrons. There are two electrons lost in the oxidation half-reaction and two electrons gained in the reduction half-reaction, so all balanced!
Step 6.
Combine the half-reactions. The full reaction is thus: Cu2+ (aq) + Zn0 (s) + 2e– → Zn2+ (aq) + Cu0 (s) + 2e–, or just Cu2+ (aq) + Zn0 (s) → Zn2+ (aq) + Cu0 (s) since the electrons cancel out.
Example #2
Let’s look at a less straightforward example, like the reaction involved in our own making copper powder from aluminum foil experiment. In this experiment, we start with aluminum foil, Al, and copper sulfate, CuSO4. The single displacement reaction results in the formation of copper, Cu, and aluminum sulfate, Al2(SO4)3.
Step 0:
Identify oxidation states. This is an important step to take before even starting to balance the redox reaction, as this will help us identify which species undergo reduction, and which undergo oxidation.
Reactants: Al0, Cu2+, SO42-
Products: Al3+, Cu0, SO42-
Since the oxidation state of the SO4 ion does not change throughout the reaction and it appears as both a reactant and a product, it is a spectator ion. Hence, we’re going to leave it out until the end as it may make the half-reactions more confusing.
Step 1.
Write the reduction half-reaction. Reduction, as we know, is the gaining of electrons, meaning a decrease in the oxidation number. This indicates that the copper gets reduced: Cu2+ → Cu0
Step 2.
Write the oxidation half-reaction. Oxidation, as we know, is the loss of electrons, meaning an increase in the oxidation number. This indicates that the aluminum gets oxidized: Al0 → Al3+
Step 3.
Balance the atoms within the half-reactions:
Cu2+ → Cu0 stays as is since there is one mole of copper in both the reactants and the products.
However, we need to account for the fact that the product is actually Al2(SO4)3, meaning that each formula unit of product contains two aluminum atoms. Hence, we place a coefficient of 2 on both sides: 2Al0 → 2Al3+. At this point, it may be helpful to bring the spectator ion back in, for clarification. In that case, we see that 2Al → Al2(SO4)3.
Step 4.
Determine the number of electrons in each half reaction.
Cu2+ → Cu0 gains 2 electrons, so it becomes: Cu2+ + 2e– → Cu0
2Al0 → 2Al3+: each aluminum atom loses three electrons, multiplied by the coefficient of 2: 2Al0 → 2Al3+ + 6e–, or 2Al → Al2(SO4)3 + 6e–.
Step 5.
Balance the number of electrons. The reduction half-reaction only gains 2 electrons, while the oxidation half-reaction loses 6 electrons total. Thus, we multiply the reduction half-reaction by a coefficient of 3 in order to obtain 6 electrons in both half-reactions, evening it out: 3Cu2+ + 6e– → 3Cu0. Again, bringing the spectator ion back in for clarity, we see that 3CuSO4 + 6e– → 3Cu.
Step 6.
Combine the half-reactions. The full reaction thus becomes: 2Al + 3CuSO4 + 6e– → Al2(SO4)3 + 3Cu + 6e–, or just 2Al + 3CuSO4 → Al2(SO4)3 + 3Cu.
Balance redox reactions: acidic or basic conditions
When balancing a redox reaction under acidic or basic conditions, follow the same steps as outlined above, with one significant caveat.
In acidic conditions, the atoms often will not balance as written, so you should add H2O to balance any excess oxygen, and then H+ to balance any excess hydrogen. Both H2O and H+ are present under acidic conditions, so you may add as many as needed. Be sure to keep track of the charges on the reactant and product sides, as the net charge must be conserved (this is particularly relevant when balancing the half-reactions). If adding H+ creates an excess of charge, offset it by adding electrons where appropriate. In the end, the electrons must still cancel on both sides of the full redox equation, as described in step 5 above.
Balancing a redox reaction under basic conditions is very similar to the method described above, but with the use of OH– rather than H+.
Further Reading
Standard Reduction Potentials Made Easy
Lab Procedure: Titration
Percent Yield Calculation
Balancing Chemical Equations |
189594 | https://www.youtube.com/watch?v=rncPikAOBSI | Determination of pKa from pH| pKa1 and pKa2 of Oxalic acid by pH metric titration| pH meter
Spectrum Classes
Description
Posted: 4 Feb 2024
This video is a continuation of my previous video, in which I carried out a lab activity of pH-metric titration to determine the strength of oxalic acid on titration against NaOH; I have also carried out the potentiometric titration of oxalic acid vs NaOH. The link to that video is:
Determination of Oxalic acid strength by pH metric titration| oxalic acid diprotic acid| pH meter
In this video I have calculated the pKa1 and pKa2 values of oxalic acid.
oxalicacid
phmeter
pharmacy
Transcript:
hello everyone welcome back to Spectrum glasses this video is in continuation with my previous video in which we have done the potentiometric titration of oxalic acid versus NaOH and where we have measured the strength of the oxalic acid by potentiometric titration and here in this video we are just going to discuss how to calculate the pka1 value and pka2 value for this oxalic acid because it is a diprotic acid it has two dissociable prot protons so for that we are going to calculate the pk1 and pk2 values we have already discussed the lab activity and all and now here I'm just going to show you how to calculate the pka1 and pka2 values for this oxalic acid previously we have also discussed this is diprotic acid and how it is diprotic acid this is the formula of the oxalic acid and in this formula it is having two protons and these two protons it can dissociate so on dissociation of first proton we are having this type of a structure Co minus and Co plus H+ right so if you started with the neutral compound you will end up with the neutral species right so here one is negative and the other one is positive so ultimately we get the neutral species on the right hand side also since it dissociates only one proton therefore this is known as its first dissociation constant stent because this is in equilibrium half of the species is undissociated and some of the species is dissociated so these two species are undissociated and dissociated are in equilibrium and since one proton is first dissociated so now you may have a question you have considered this this as a first proton and this as a second proton that is all your choice you can consider either of the proton as first proton right since it dissociate it first H+ ion so here this is known as its first dissociation constant and the value reported for this first dissociation constant is 1.28 similarly in the second step it dissociates this species going to there and from here the second proton will be removed right and 
that is again in equilibrium some of the species is half dissociated and some of the dissociated right and this is step is termed as its second dissociation step and for this the pka2 value will be calculated and for this PKA two value reported in the literature is 4.27 now I'm just showing you how to calculate these values from the readings which we have recorded in for 20 ml of oxalic acid and here are the P volume and here is the pH value and from here I have calculated Delta pH Delta v v average and Delta pH upon Delta this is for derivative and here are the volume and pH readings are written fine so from here I have drawn the graph and the graph is like this so here in this case I have two inflection points first is this here you can see and the second is this which is very steep right for this we have drawn the derivative graph also just to easily locate the inflection point so here is the inflection point and at this moment the value is 19.95 and and here the Delta value Delta pH upon Delta V is 31.5 right so this is our second inflection point and first inflection point is somewhere here which is very difficult to identify so the other uh alternative way to identify the inflection point is here so you just going through this pH Delta pH upon Delta B values and from here you just see here you just see point 0.18 and it increases increases and here we are having 0.36 value and after that it starts decreasing so this is our second inflection point and at this moment we can record the volume similarly we can measure the second inflection point which which is seen prominently in the graph so the value is 19.9 at this moment fine so this is how we can measure the inflection points first and second so here I'm just showing you what does this inflection point mean so this is our first inflection point and this is our second inflection point and how to determine these inflection points actually we draw these tangential lines and we uh again draw the vertical line 
which passes through this steep graph right so it cover maximum part of this steep graph and here you can see the height of this graph you can take the point say this is our point a and this is our point B and take the difference of these points and divided it by two and on divided by two you will get somewhere here so the value will be like this right somewhere here similarly you draw the line somewhere here like this and take the value half of this and you will get you will be here somewhere here so this is our first inflection point and this is our second inflection point so here I'm just showing you uh here so the first inflection point is this and the second is this much right so these are our first and second equivalence point now the question comes what is our equivalence points in the previous case I have shown you that this Co on reaction with no it gives us Co minus and Co fine plus H+ so the moment when the O minus ion completely replace one of the proton that is called equivalence point so here what I get I get this 10.5 is my first equivalence point if I divided it by two then I will get the volume value at this much right now I'm going to add my no here in continuation and the moment when this o minus ion is exactly equal to this H+ ion that is our second equivalence point right so I hope you understand the first and second equivalence point now from these equivalence point how we are going to measure our pka1 and pka2 so to determine our pka1 and pka2 we need to First calculate okay the second point in which you may have confusion if this is 19 15.9 divided by 2 then how you are going to get 15.2 so this is actually on the difference between the first and the second equivalence point here this so take the value at this point this point divided it by two so where you will be you will be somewhere here so how we are going to calculate do the calculations so 19.9 - 10.5 I will get this much divided it by 2 so either I will just subtract it from 19.9 or 
I'll just add this into 10.5 lower limit or subtract from the upper limit so I simply add this and I will get this 14.7 so this is how we are just going to calculate the half of the equivalence point now the question comes why this half of the equivalence point so here I'm just showing you again what is the does this mean half of the equivalence point if at this point r c h is exactly equal to co minus Co at this point fine if I will consider this reaction somewhere here so what will be the situation at this point so at this point the situation will be like this C Co is in equilibrium with Co minus Co and at this point both are in half of concentration at this point this is completely converted to this one if I take half of this first equivalence point what will be the situation half of the H remains there and half of the salt is formed at half of the first equivalence point this is my hander an equation so at this moment if the concentration of salt is exactly equal to concentration of acid then these two values will be cancelled out this will be canceled out by this so we will get log base 10 on 1 and this can be written as log base 10 10 on0 will we get 0 into 1 and then zero zero this will go to this side and we will get this one so 0 into 1 one we will get this zero so if I will get this kind of situation then what will be there pH is equal to PKA so this if I am taking the first equivalence point so first equal this will be equal to my at this pH value I will get the pka1 and similarly I will take half of the second equivalence point half of this second equivalence point means at that moment I am just having this Co is in equilibrium Co minus and Co minus so half of O this H+ is undissociated and half of the salt is present over there and similarly if both are in same concentration half of then again we will cancel out these two species from the Henderson equation and we will get pH is equal to at this second equivalence point pH is equal to PKA A2 so this is 
the easiest way now here I'll just show you these are the values which I experimentally calculated and these are the reported values for pka1 and pka2 now I am showing you in the Excel sheet so here in the Excel sheet this is my pka2 15 at 15.25 and at this moment I am having the value of 3.72 right and in the case of first half of this first equivalence point will be where 5.25 and at this moment I am having the pH value at this 1.5 so this is my pka1 because this is half of the first dissociation and this is my second PKA here PKA is second so this is my second dissociation constant so from the previous case previous experiment I will get the values like this and from this this experiment I'll get the values like this and the first dissociation constant uh is very consistent in my case and uh this is literally away from the reported values but I hope you understand how to calculate the pka1 and pka2 values from the potentiometric titration if you guys find this video helpful please like share and subscribe thank you all thanks for watching |
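The half-equivalence calculation described in the video can be sketched in a few lines of Python. Only the equivalence-point volumes (10.5 mL and 19.9 mL) are taken from the video; the (volume, pH) readings below are hypothetical stand-ins for the recorded data table:

```python
# Sketch of the half-equivalence-point method: pH at V1/2 gives pKa1,
# and pH at V1 + (V2 - V1)/2 gives pKa2 (Henderson-Hasselbalch with
# [acid] = [salt], so the log term vanishes and pH = pKa).

def ph_at(volumes, phs, v):
    """Linearly interpolate the pH at titrant volume v (mL)."""
    pts = list(zip(volumes, phs))
    for (v0, p0), (v1, p1) in zip(pts, pts[1:]):
        if v0 <= v <= v1:
            return p0 + (p1 - p0) * (v - v0) / (v1 - v0)
    raise ValueError("volume outside measured range")

V1, V2 = 10.5, 19.9            # equivalence volumes read off the graph (mL)
half1 = V1 / 2                 # 5.25 mL: half of the 1st equivalence point
half2 = V1 + (V2 - V1) / 2     # 15.2 mL: halfway between the two points

vols = [5.0, 5.5, 15.0, 15.5]  # hypothetical readings near those volumes
phs  = [1.45, 1.55, 3.70, 3.75]
pka1 = ph_at(vols, phs, half1)  # pH at the first half-equivalence volume
pka2 = ph_at(vols, phs, half2)  # pH at the second half-equivalence volume
```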
189595 | https://ocw.mit.edu/courses/res-6-008-digital-signal-processing-spring-2011/1dcccd9943712942639d89090db35256_MITRES_6_008S11_lec07.pdf | 7 Z-TRANSFORM PROPERTIES 1. Lecture 7 -56 minutes (.) a.l Transform Properbies ) x(n)hn -X (7 z 2)1(n+ n( -73 (i) 3) 1(-n') 5n Y-(n Z Bo-xcar e 'P~oercp 7.1 SinwN Fourier Transform of a rectangular sequence.
sin 15 N=15 A It \I 1 d.
2. Comments In this lecture two primary ideas are discussed. The first is the determination of frequency response geometrically in the z-plane. This is particularly useful for identifying, at least approximately, the effect of pole and zero locations on the frequency response of a system. The second topic discussed in this lecture is the properties of the z-transform. As with the Fourier transform, properties of the z-transform are useful for evaluating the z-transform and inverse z-transform, as well as for developing insight into the relationship between sequences and their z-transforms. The more important properties are summarized in section 4.4 and table 4.2 (page 180) of the text. As stressed during the lecture, the style in which many of these properties are proven is similar, and it is important to develop a facility with this style rather than memorizing the various properties.
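The geometric method can be illustrated numerically (a sketch of ours, not part of the original lecture materials): the magnitude of the frequency response equals a gain constant times the product of the lengths of the vectors from e^jw to the zeros, divided by the product of the lengths of the vectors to the poles. Taking the system of Problem 7.2 below to be the standard first-order allpass H(z) = (z^-1 - a)/(1 - a z^-1), the constant magnitude falls out directly:

```python
import cmath

def mag_response(w, zeros, poles, gain=1.0):
    """|H(e^{jw})| computed geometrically: gain times the product of
    distances from e^{jw} to each zero, over the distances to each pole."""
    z = cmath.exp(1j * w)
    num = 1.0
    for q in zeros:
        num *= abs(z - q)
    den = 1.0
    for p in poles:
        den *= abs(z - p)
    return gain * num / den

# H(z) = (z^-1 - a)/(1 - a z^-1) = -a (z - 1/a)/(z - a):
# zero at z = 1/a, pole at z = a, overall gain magnitude |a|.
a = 0.5
for w in (0.0, 0.7, 1.9, 3.0):
    m = mag_response(w, zeros=[1 / a], poles=[a], gain=abs(a))
    assert abs(m - 1.0) < 1e-9   # constant magnitude -> allpass
```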
3. Reading Text: Sections 4.4 (page 172), 5.2 (page 206) and 5.3.
4. Problems Problem 7.1 In Figure P7.1-1 are shown three pole-zero patterns and three possible frequency response magnitude characteristics. By considering the behavior of the pole and zero vectors in the z-plane determine which frequency response characteristic could correspond to each of the pole-zero patterns.
[Figure P7.1-1: three pole-zero patterns (a), (b), (c) and three candidate magnitude responses (i), (ii), (iii).]
Problem 7.2 Consider a causal linear shift-invariant system with system function H(z) = (z^-1 - a)/(1 - a z^-1), where "a" is real. (a) If 0 < a < 1, plot the pole-zero diagram and shade the region of convergence.
(b) Show graphically in the z-plane that this system is an allpass system, i.e., that the magnitude of the frequency response is a constant.
Problem 7.3 With X(z) denoting the z-transform of x(n), show that: (i) X(1/z) is the z-transform of x(-n); (ii) -z dX(z)/dz is the z-transform of n x(n).
Problem 7.4 Consider a linear discrete-time shift-invariant system with input x(n) and output y(n) for which y(n - 1) - 10 y(n) + y(n + 1) = x(n). The system is stable. Determine the unit-sample response.
Problem 7.5 X(z) = log_e(1 - a z^-1), where |a| < 1 and the region of convergence is |z| > |a|.
(a) Determine the inverse z-transform by using the differentiation property.
(b) Determine the inverse z-transform by using the power series method.
Compare your result with that obtained in (a).
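One way to compare the two answers numerically (our sketch, not part of the original problem set): recover the series coefficients of X(z) = log_e(1 - a z^-1) by averaging X(z) z^n over a circle of radius r > |a|, a discrete version of the inverse z-transform contour integral. Both methods should give x(n) = -a^n/n for n >= 1 (and 0 otherwise), since log(1 - u) = -sum_{n>=1} u^n/n with u = a z^-1:

```python
import cmath
import math

def series_coeff(a, n, r=1.0, N=4096):
    """Coefficient of z^{-n} in X(z) = log(1 - a z^{-1}), recovered by
    averaging X(z) z^n over N points on the circle |z| = r (requires
    r > |a| so the circle lies in the region of convergence)."""
    total = 0j
    for k in range(N):
        z = r * cmath.exp(2j * math.pi * k / N)
        total += cmath.log(1 - a / z) * z ** n
    return (total / N).real

# Check against the closed form x(n) = -a^n / n for n >= 1.
a = 0.5
for n in range(1, 6):
    assert abs(series_coeff(a, n) - (-a ** n / n)) < 1e-9
```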
Problem 7.6 Suppose that we have a sequence x(n) from which we construct a new sequence x1(n) defined as
x1(n) = x(n/M) for n = 0, ±M, ±2M, ...
x1(n) = 0 otherwise
(a) Determine X1(z) in terms of X(z).
(b) If X(e^jw) is as sketched below, sketch X1(e^jw) for M = 2.
[Figure P7.6-1: sketch of X(e^jw).]
MIT OpenCourseWare
Resource: Digital Signal Processing
Prof. Alan V. Oppenheim
The following may not correspond to a particular course on MIT OpenCourseWare, but has been provided by the author as an individual learning resource. For information about citing these materials or our Terms of Use, visit:
189596 | https://stackoverflow.com/questions/32643811/c-product-of-three-consecutive-numbers | algorithm - C product of three consecutive numbers - Stack Overflow
3. Teams
4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams
Collectives™ on Stack Overflow
Find centralized, trusted content and collaborate around the technologies you use most.
Learn more about Collectives
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
Hang on, you can't upvote just yet.
You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it?
Instead, you can save this post to reference later.
Save this post for later Not now
Thanks for your vote!
You now have 5 free votes weekly.
Free votes
count toward the total vote score
does not give reputation to the author
Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation.
Got it!Go to help center to learn more
C product of three consecutive numbers
Asked Sep 18, 2015 · Modified Sep 18, 2015 · Viewed 4k times · Score: 1
I was trying to solve this tricky question, but for some reason my code is doing something wrong... I don't exactly know why, but I'll try to explain as much as I can.
Consecutive products: Write a program that reads a positive integer from standard input and verifies if it's equal to the product of three natural and consecutive numbers. For example, the number 120 is equal to 4x5x6, as for number 90 there aren't any three consecutive natural numbers whose product is 90. Your program should generate as output 'S' if there are 3 consecutive natural numbers whose product is the value read, or 'N' if none.
```
Input: 120    Expected output: "S"
Input: 60     Expected output: "S"
Input: 80     Expected output: "N"
Input: 120    Expected output: "S"
```
And this is my code:
```c
#include <stdio.h>

int main(){
    int int1,i,count=10,j,k,w=0;
    scanf("%i",&int1);
    for (i = 1; i <= count; ++i)
    {
        for (j = 1; j <= count+1; ++j)
        {
            for ( k = 1; k <= count+2; ++k)
            {
                if ((i==j+1 && i==k+2) && (i*j*k==int1)){
                    w=1;
                }
            }
        }
    }
    if (w==0)
    {
        printf("N");
    }
    else{
        printf("S");
    }
}
```
So basically what this does is: I have 3 loops that will generate candidate numbers in a k, j, i form... and it checks if we are getting what we want (the product of three natural and consecutive numbers). This is for an assignment.
Tags: c, algorithm, loops
edited Sep 18, 2015 at 7:32 by michael_wu · asked Sep 18, 2015 at 4:19 by Leonardo Dias

4 comments:
Kumar Vikramjeet (Sep 18, 2015 at 4:24): Your process doesn't seem effective; try finding prime factors and then check whether consecutive numbers can be found using different combinations of multiples.

Alex Lop. (Sep 18, 2015 at 4:26): What is the question?

Claudiu (Sep 18, 2015 at 4:26): What do you expect that your program doesn't output? It appears to work for the examples you gave.

Leonardo Dias (Sep 18, 2015 at 4:34): Yeah, I know, that's exactly why I was very confused...
3 Answers
Score: 4
I modified your code. Please let me know if the problem still exists. The change made is exactly as WDS said.
```c
#include <stdio.h>

int main(){
    int int1,i,count=10,j,k,w=0,comp;
    scanf("%i",&int1);
    for (i = 1; i <= count; ++i)
    {
        comp = i*(i+1)*(i+2);
        if(comp==int1)
        {
            w = 1;
        }
    }
    if (w==0)
    {
        printf("N");
    }
    else
    {
        printf("S");
    }
    return 0;
}
```
answered Sep 18, 2015 at 4:42 by Rahul Nori
1 comment:

gian1200 (Sep 18, 2015 at 5:15): This code won't work as is, but it is a good way to give hints without saying the final answer.
Score: 4
You don't need 3 loops. One trivial approach would be:
```c
#include <stdbool.h>

int test(int num)
{
    for (int i = 1; i < num; i++)
    {
        int product = i * (i + 1) * (i + 2);
        if (product == num)
            return true;
        else if (product > num)
            break;
    }
    return false;
}
```
edited Sep 18, 2015 at 5:02 · answered Sep 18, 2015 at 4:37 by Harvey Pham
6 comments (1 not shown):

Leonardo Dias (Sep 18, 2015 at 4:48): The way you declared the variables is only correct for C++, right?

Harvey Pham (Sep 18, 2015 at 4:55): It might work with C99 :). Kidding, I come from a C++ background, hence the variables declared as needed. But seriously, in C99, they are legal. en.wikipedia.org/wiki/C99

Klas Lindbäck (Sep 18, 2015 at 10:53): This code could produce inaccurate results if the multiplication overflows, which it will if num is greater than the greatest number that produces an S output and fits in the data type.

Harvey Pham (Sep 18, 2015 at 16:24): @KlasLindbäck I went over your case and couldn't find how this code can produce an inaccurate result. The test product==num still works, so there is no chance to recognize N as S. The loop terminates by the for condition (i < num) instead of (product > num). (Now, negative num is another story, but I assume that the caller catches that condition :)) Can you elaborate on how the code can break?

Klas Lindbäck (Sep 21, 2015 at 6:31): For high num the product will overflow. Given how most compilers deal with overflow, the product will become negative. Then as i grows bigger the product will turn positive again and might happen to become num. Try with num=139154728, which should not return true for i=2058, but it does (assuming 32-bit int). The real value of 2058*2059*2060 is 8729089320 and doesn't fit in a 32-bit integer.
Score: 2
You might want to add the algorithm tag on this question. That said, my approach would be to consider what the product of 3 consecutive numbers is. You could write it as x * (x+1) * (x+2). But there is a better way.

Write it as (x-1) * x * (x+1). Then multiply and simplify. The result is x^3 - x.

Now for any given number, start a single loop on x from x = 2 (because if x = 1 then x-1 = 0 and this will never be a solution), incrementing by 1 each loop. Check on each loop for a match with the input number. If it is a match, return true. If it is not a match and exceeds the input number, return false. If it is not a match and is less than the input number, loop again.
answered Sep 18, 2015 at 4:28 by WDS
1 comment:

michael_wu (Sep 18, 2015 at 7:34): To improve even further, one could solve the equation x^3 - x = num and see if the result is an integer. That would be an O(1) solution.
189597 | https://projecteuclid.org/journals/annals-of-probability/volume-42/issue-4/Asymptotic-distribution-of-complex-zeros-of-random-analytic-functions/10.1214/13-AOP847.pdf | The Annals of Probability 2014, Vol. 42, No. 4, 1374–1395 DOI: 10.1214/13-AOP847 © Institute of Mathematical Statistics, 2014 ASYMPTOTIC DISTRIBUTION OF COMPLEX ZEROS OF RANDOM ANALYTIC FUNCTIONS BY ZAKHAR KABLUCHKO AND DMITRY ZAPOROZHETS Ulm University and Steklov Institute of Mathematics Let ξ0,ξ1,... be independent identically distributed complex-valued ran-dom variables such that Elog(1 + |ξ0|) < ∞. We consider random analytic functions of the form Gn(z) = ∞ k=0 ξkfk,nzk, where fk,n are deterministic complex coefficients. Let μn be the random measure counting the complex zeros of Gn according to their multiplici-ties. Assuming essentially that −1 n logf[tn],n →u(t) as n →∞, where u(t) is some function, we show that the measure 1 nμn converges in probabil-ity to some deterministic measure μ which is characterized in terms of the Legendre–Fenchel transform of u. The limiting measure μ does not depend on the distribution of the ξk’s. This result is applied to several ensembles of random analytic functions including the ensembles corresponding to the three two-dimensional geometries of constant curvature. As another applica-tion, we prove a random polynomial analogue of the circular law for random matrices.
1. Introduction.
1.1. Statement of the problem.
Let $\xi_0, \xi_1, \ldots$ be nondegenerate independent identically distributed (i.i.d.) random variables with complex values. The simplest ensemble of random polynomials are the Kac polynomials defined as
$$K_n(z) = \sum_{k=0}^{n} \xi_k z^k.$$
The distribution of zeros of Kac polynomials has been much studied; see [1, 10, 14–16, 28, 31, 36]. It is known that under a very mild moment assumption, the complex zeros of $K_n$ cluster asymptotically near the unit circle $\mathbb{T} = \{|z| = 1\}$ and that the distribution of zeros is asymptotically uniform with regard to the argument.
Received May 2012; revised January 2013.
MSC2010 subject classifications. Primary 30C15; secondary 30B20, 26C10, 60G57, 60B10, 60B20.
Key words and phrases. Random analytic function, random polynomial, random power series, empirical distribution of zeros, circular law, logarithmic potential, equilibrium measure, Legendre–Fenchel transform.

To make this precise, we need to introduce some notation. Let $G$ be an analytic function in some domain $D \subset \mathbb{C}$. Assuming that $G$ does not vanish identically, we consider a measure $\mu_G$ counting the complex zeros of $G$ according to their multiplicities:
$$\mu_G = \sum_{z \in D :\, G(z) = 0} n_G(z)\,\delta(z).$$
Here, $n_G(z)$ is the multiplicity of the zero at $z$ and $\delta(z)$ is the unit point mass at $z$.
If $G$ vanishes identically, we put $\mu_G = 0$. Then, Ibragimov and Zaporozhets proved that the following two conditions are equivalent:
(1) With probability 1, the sequence of measures $\frac{1}{n}\mu_{K_n}$ converges as $n \to \infty$ weakly to the uniform probability distribution on $\mathbb{T}$.
(2) $\mathbb{E}\log(1 + |\xi_0|) < \infty$.
Along with the Kac polynomials, many other remarkable ensembles of random polynomials (or, more generally, random power series) appeared in the literature.
These ensembles are usually characterized by invariance properties with respect to certain groups of transformations and have the general form
$$G_n(z) = \sum_{k=0}^{\infty} \xi_k f_{k,n} z^k,$$
where $\xi_0, \xi_1, \ldots$ are i.i.d. complex-valued random variables and the $f_{k,n}$ are complex deterministic coefficients. The aim of the present work is to study the distribution of zeros of $G_n$ asymptotically as $n \to \infty$. We will show that under certain assumptions on the coefficients $f_{k,n}$, the random measure $\frac{1}{n}\mu_{G_n}$ converges, as $n \to \infty$, to some limiting deterministic measure $\mu$. The limiting measure $\mu$ does not depend on the distribution of the random variables $\xi_k$; see Figure 1.

FIG. 1. Zeros of the Weyl random polynomial $W_n(z) = \sum_{k=0}^{n} \xi_k \frac{z^k}{\sqrt{k!}}$ of degree $n = 2000$. The zeros were divided by $\sqrt{n}$. Left: complex normal coefficients. Right: coefficients are positive with $\mathbb{P}[\log \xi_k > t] = t^{-4}$ for $t > 1$. In both cases, the limiting distribution of zeros is uniform on the unit disk.

Results of this type are known in the context of random matrices; see, for example, . However, the literature on random polynomials and random analytic functions usually concentrates on the Gaussian case, since in this case explicit calculations are possible; see, for example, [2, 4, 6, 8, 10, 13, 28–30, 32, 33]. The only ensemble of random polynomials for which the independence of the limiting distribution of zeros on the distribution of the coefficients is well understood is the Kac ensemble; see [1, 15, 16, 36]. In the context of random polynomials, there were many results on the universal character of local correlations between close zeros [3, 19, 29, 30].
In this work, we focus on the global distribution of zeros.
The paper is organized as follows. In Sections 2.1–2.4, we state our results for a number of concrete ensembles of random analytic functions. These results are special cases of the general Theorem 2.8 whose statement, due to its technicality, is postponed to Section 2.5. Proofs are given in Sections 3 and 4.
1.2. Notation.
Let $D_r = \{z \in \mathbb{C} : |z| < r\}$ be the open disk with radius $r > 0$ centered at the origin. Let $\mathbb{D} = D_1$ be the unit disk. Put $D_\infty = \mathbb{C}$. Denote by $\lambda$ the Lebesgue measure on $\mathbb{C}$. A Borel measure $\mu$ on a locally compact metric space $X$ is called locally finite (l.f.) if $\mu(A) < \infty$ for every compact set $A \subset X$. A sequence $\mu_n$ of l.f. measures on $X$ converges vaguely to a l.f. measure $\mu$ if for every continuous, compactly supported function $\varphi : X \to \mathbb{R}$,
$$\lim_{n\to\infty} \int_X \varphi(z)\,\mu_n(dz) = \int_X \varphi(z)\,\mu(dz). \tag{1}$$
If $\mu_n$ and $\mu$ are probability measures, the vague convergence is equivalent to the more familiar weak convergence, for which (1) is required to hold for all continuous, bounded functions $\varphi$; see Lemma 4.20 in . Let $\mathbb{M}(X)$ be the space of all l.f. measures on $X$ endowed with the vague topology. Note that $\mathbb{M}(X)$ is a Polish space; see Theorem A2.3 in . A random measure on $X$ is a random element defined on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$ and taking values in $\mathbb{M}(X)$. The a.s. convergence and convergence in probability of random measures are defined as the convergence of the corresponding $\mathbb{M}(X)$-valued random elements. An equivalent definition: a sequence of random measures $\mu_n$ converges to a random measure $\mu$ in probability (resp., a.s.) if (1) holds in probability (resp., a.s.) for every continuous, compactly supported function $\varphi : X \to \mathbb{R}$.
2. Statement of results.
2.1. The three invariant ensembles.
Let ξ0,ξ1,... be i.i.d. random variables.
Unless stated otherwise, they take values in $\mathbb{C}$, are nondegenerate, and satisfy the condition $\mathbb{E}\log(1 + |\xi_0|) < \infty$. Fix a parameter $\alpha > 0$. We start by considering the following three ensembles of random analytic functions (see, e.g., [13, 33]):
$$F_n(z) = \begin{cases} \displaystyle\sum_{k=0}^{n} \xi_k \left(\frac{n(n-1)\cdots(n-k+1)}{k!}\right)^{\alpha} z^k & \text{(elliptic, } n \in \mathbb{N},\ z \in \mathbb{C}\text{)},\\[2mm] \displaystyle\sum_{k=0}^{\infty} \xi_k \left(\frac{n^k}{k!}\right)^{\alpha} z^k & \text{(flat, } n > 0,\ z \in \mathbb{C}\text{)},\\[2mm] \displaystyle\sum_{k=0}^{\infty} \xi_k \left(\frac{n(n+1)\cdots(n+k-1)}{k!}\right)^{\alpha} z^k & \text{(hyperbolic, } n > 0,\ z \in \mathbb{D}\text{)}. \end{cases}$$
Note that in the elliptic case $F_n$ is a random polynomial of degree $n$, in the flat case it is a random entire function, whereas in the hyperbolic case it is a random analytic function defined on the unit disk $\mathbb{D}$. The a.s. convergence of the series in the latter two cases follows from Lemma 4.4 below. In the particular case when $\alpha = 1/2$ and the $\xi_k$ are complex standard Gaussian with density $z \mapsto \pi^{-1}\exp\{-|z|^2\}$ on $\mathbb{C}$, the zero sets of these analytic functions possess remarkable invariance properties relating them to the three geometries of constant curvature; see [13, 33]. In this special case, the expected number of zeros of $F_n$ in a Borel set $B$ can be computed exactly [13, 33]:
$$\mathbb{E}\,\mu_{F_n}(B) = \begin{cases} \displaystyle\frac{n}{\pi}\int_B \bigl(1 + |z|^2\bigr)^{-2}\,\lambda(dz) & \text{(elliptic case, } B \subset \mathbb{C}\text{)},\\[2mm] \displaystyle\frac{n}{\pi}\,\lambda(B) & \text{(flat case, } B \subset \mathbb{C}\text{)},\\[2mm] \displaystyle\frac{n}{\pi}\int_B \bigl(1 - |z|^2\bigr)^{-2}\,\lambda(dz) & \text{(hyperbolic case, } B \subset \mathbb{D}\text{)}. \end{cases}$$
In the next theorem, we compute the asymptotic distribution of zeros of Fn for more general ξk’s.
THEOREM 2.1.
Let $\xi_0, \xi_1, \ldots$ be nondegenerate i.i.d. random variables such that $\mathbb{E}\log(1 + |\xi_0|) < \infty$. As $n \to \infty$, the sequence of random measures $\frac{1}{n}\mu_{F_n}$ converges in probability to the deterministic measure having a density $\rho_\alpha$ with respect to the Lebesgue measure, where
$$\rho_\alpha(z) = \begin{cases} \displaystyle\frac{1}{2\pi\alpha}\,\frac{|z|^{(1/\alpha)-2}}{\bigl(1 + |z|^{1/\alpha}\bigr)^2} & \text{(elliptic case, } z \in \mathbb{C}\text{)},\\[2mm] \displaystyle\frac{1}{2\pi\alpha}\,|z|^{(1/\alpha)-2} & \text{(flat case, } z \in \mathbb{C}\text{)},\\[2mm] \displaystyle\frac{1}{2\pi\alpha}\,\frac{|z|^{(1/\alpha)-2}}{\bigl(1 - |z|^{1/\alpha}\bigr)^2} & \text{(hyperbolic case, } z \in \mathbb{D}\text{)}. \end{cases}$$
2.2. Littlewood–Offord random polynomials.
Next, we consider an ensemble of random polynomials which was introduced by Littlewood and Offord [21, 22].
It is related to the flat model. First, we give some motivation. Let $\xi_0, \xi_1, \ldots$ be nondegenerate i.i.d. random variables. Given a sequence $w_0, w_1, \ldots \in \mathbb{C} \setminus \{0\}$, consider a random polynomial $W_n$ defined by
$$W_n(z) = \sum_{k=0}^{n} \xi_k w_k z^k. \tag{2}$$
For $w_k = 1$, we recover the Kac polynomials, for which the zeros concentrate near the unit circle. The next result shows that the structure of the zeros does not differ essentially from the Kac case if the sequence $w_k$ grows or decays not too fast.
THEOREM 2.2.
Let $\xi_0, \xi_1, \ldots$ be nondegenerate i.i.d. random variables such that $\mathbb{E}\log(1 + |\xi_0|) < \infty$. If $\lim_{k\to\infty} \frac{1}{k}\log|w_k| = w$ for some constant $w \in \mathbb{R}$, then the sequence of random measures $\frac{1}{n}\mu_{W_n}$ converges in probability to the uniform probability distribution on the circle of radius $e^{-w}$ centered at the origin.
We would like to construct examples where there is no concentration near a circle. Let us make the following assumption on the sequence $w_k$:
$$\log|w_k| = -\alpha(k\log k - k) - \beta k + o(k), \qquad k \to \infty, \tag{3}$$
where $\alpha > 0$ and $\beta \in \mathbb{R}$ are parameters. Particular cases are polynomials of the form
$$W_n^{(1)}(z) = \sum_{k=0}^{n} \frac{\xi_k}{(k!)^{\alpha}}\, z^k, \qquad W_n^{(2)}(z) = \sum_{k=0}^{n} \frac{\xi_k}{k^{\alpha k}}\, z^k, \qquad W_n^{(3)}(z) = \sum_{k=0}^{n} \frac{\xi_k}{\Gamma(\alpha k + 1)}\, z^k.$$
The family $W_n^{(1)}$ has been studied by Littlewood and Offord [21, 22] in one of the earliest works on random polynomials. They were interested in the number of real zeros. In the next theorem, we describe the limiting distribution of complex zeros of $W_n$. Let $\mu_n$ be the measure counting the points of the form $e^{-\beta} n^{-\alpha} z$, where $z$ is a zero of $W_n$. That is, for every Borel set $B \subset \mathbb{C}$,
$$\mu_n(B) = \mu_{W_n}\bigl(e^{\beta} n^{\alpha} B\bigr). \tag{4}$$

THEOREM 2.3.
Let $\xi_0, \xi_1, \ldots$ be nondegenerate i.i.d. random variables such that $\mathbb{E}\log(1 + |\xi_0|) < \infty$. Let $w_0, w_1, \ldots$ be a complex sequence satisfying (3). With probability 1, the sequence of random measures $\frac{1}{n}\mu_n$ converges to the deterministic probability measure having the density
$$z \mapsto \frac{1}{2\pi\alpha}\,|z|^{(1/\alpha)-2}\,\mathbf{1}_{z \in \mathbb{D}} \tag{5}$$
with respect to the Lebesgue measure on $\mathbb{C}$.

For the so-called Weyl random polynomials $W_n(z) = \sum_{k=0}^{n} \xi_k \frac{z^k}{\sqrt{k!}}$, having $\alpha = 1/2$ and $\beta = 0$, the limiting distribution is uniform on $\mathbb{D}$; see Figure 1. This result can be seen as an analogue of the famous circular law for the distribution of eigenvalues of non-Hermitian random matrices with i.i.d. entries [5, 35]. Forrester and Honner stated the circular law for Weyl polynomials and discussed differences and similarities between the matrix and the polynomial cases; see also .
Under a minor additional assumption on the coefficients $w_k$ we can prove that the logarithmic moment condition is not only sufficient, but also necessary for the a.s. convergence of the empirical distribution of zeros. It is easy to check that the additional assumption is satisfied for $W_n = W_n^{(i)}$ with $i = 1, 2, 3$.

THEOREM 2.4.
Let $\xi_0, \xi_1, \ldots$ be nondegenerate i.i.d. random variables. Let $w_0, w_1, \ldots$ be a complex sequence satisfying (3) and such that for some $C > 0$,
$$|w_{n-k}/w_n| < C e^{\beta k} n^{\alpha k} \quad \text{for all } n \in \mathbb{N},\ k \le n. \tag{6}$$
Let $\mu_n$ be as in (4). Then, the following are equivalent:
(1) With probability 1, the sequence of random measures $\frac{1}{n}\mu_n$ converges to the probability measure with density (5).
(2) $\mathbb{E}\log(1 + |\xi_0|) < \infty$.
It should be stressed that in all our results we assume that the random variables $\xi_k$ are nondegenerate (i.e., not a.s. constant). To see that this assumption is essential, consider the deterministic polynomials
$$s_n(z) = \sum_{k=0}^{n} \frac{z^k}{k!}. \tag{7}$$
A classical result of Szegő states that the zeros of $s_n(nz)$ cluster asymptotically (as $n \to \infty$) along the curve $\{|z e^{1-z}| = 1\} \cap \mathbb{D}$; see Figure 2 (left). This behavior is manifestly different from the distribution with density $1/(2\pi|z|)$ on $\mathbb{D}$ we have obtained in Theorem 2.3 for the same polynomial with randomized coefficients; see Figure 2 (right).

FIG. 2. Left: Zeros of the Szegő polynomial $s_n(z) = \sum_{k=0}^{n} \frac{z^k}{k!}$ of degree $n = 200$. Right: Zeros of the Littlewood–Offord random polynomial $W_n(z) = \sum_{k=0}^{n} \xi_k \frac{z^k}{k!}$ of degree $n = 2000$ with complex normal coefficients. In both cases, the zeros were divided by $n$.
2.3. Littlewood–Offord random entire function.
Next we discuss a random en-tire function which also was introduced by Littlewood and Offord [23, 24]. Their aim was to describe the properties of a “typical” entire function of a given or-der 1/α. Given a complex sequence w0,w1,... satisfying (3) consider a random entire function W(z) = ∞ k=0 ξkwkzk.
(8) Examples are given by W(1)(z) = ∞ k=0 ξk (k!)α zk, W(2)(z) = ∞ k=0 ξk kαk zk, W(3)(z) = ∞ k=0 ξk (αk + 1)zk.
The first function is essentially the flat model considered above, namely W(1)(nαz) = Fn(z). For α = 1, it is a randomized version of the Taylor series for the exponential. The last function is a randomized version of the Mittag–Leffler function. Our aim is to describe the density of zeros of W on the global scale. Let μn be the measure counting the points of the form e−βn−αz, where z is a zero COMPLEX ZEROS OF RANDOM ANALYTIC FUNCTIONS 1381 of W. That is, for every Borel set B ⊂C, μn(B) = μW eβnαB .
(9) We have the following strengthening of the flat case of Theorem 2.1.
THEOREM 2.5.
Let $\xi_0, \xi_1, \ldots$ be nondegenerate i.i.d. random variables such that $\mathbb{E}\log(1 + |\xi_0|) < \infty$. Let $w_0, w_1, \ldots$ be a complex sequence satisfying (3). With probability 1, the random measure $\frac{1}{n}\mu_n$ converges to the deterministic measure having the density
$$z \mapsto \frac{1}{2\pi\alpha}\,|z|^{(1/\alpha)-2} \tag{10}$$
with respect to the Lebesgue measure on $\mathbb{C}$.
As a corollary, we obtain a law of large numbers for the number of zeros of W.
COROLLARY 2.6.
Let $N(r) = \mu_W(D_r)$ be the number of zeros of $W$ in the disk $D_r$. Under the assumptions of Theorem 2.5,
$$N(r) = e^{-\beta/\alpha}\, r^{1/\alpha}\,\bigl(1 + o(1)\bigr) \quad \text{a.s. as } r \to \infty.$$
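A short sanity check of the constant $e^{-\beta/\alpha}$ (my own heuristic sketch, not the paper's proof): integrate the limiting density (10) over $D_r$ and undo the scaling (9).

```latex
% Radial integration of the density (10) over D_r:
\[
  \mu(D_r) = \int_0^r \frac{1}{2\pi\alpha}\, s^{(1/\alpha)-2}\cdot 2\pi s \, ds
           = \frac{1}{\alpha}\int_0^r s^{(1/\alpha)-1}\, ds
           = r^{1/\alpha}.
\]
% Theorem 2.5 then suggests \mu_W(e^{\beta} n^{\alpha} D_r) \approx n\, r^{1/\alpha}.
% Writing R = e^{\beta} n^{\alpha} r, i.e. r = e^{-\beta} n^{-\alpha} R:
\[
  N(R) = \mu_W(D_R) \approx n \bigl(e^{-\beta} n^{-\alpha} R\bigr)^{1/\alpha}
       = e^{-\beta/\alpha}\, R^{1/\alpha},
\]
% independent of n, matching the corollary.
```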
In the case $\alpha = 1/2$ the limiting measure in Theorem 2.5 has constant density $1/\pi$. The difference between the limiting densities in Theorems 2.3 and 2.5 is that in the latter case there is no restriction to the unit disk. It has been pointed out by the unknown referee that in the special case of Bernoulli-distributed $\xi_k$'s Theorem 2.5 can be deduced from the results of Littlewood and Offord [23, 24] using the Levin–Pfluger theory (, Chapter 3). Our proof is simpler than the proof of Littlewood and Offord [23, 24]. For a related work, see also [25, 26].

Let us again stress the importance of the nondegeneracy assumption. The exponential function $e^z$ has no complex zeros, whereas the zeros of its randomized version $\sum_{k=0}^{\infty} \xi_k \frac{z^k}{k!}$ have the global-scale density $1/(2\pi|z|)$ on $\mathbb{C}$. For the absolute values of the zeros, the limiting density is constant and equal to 1 on $(0, \infty)$.
2.4. Randomized theta function.
Given a parameter $\alpha \in (0,1) \cup (1,\infty)$, we consider a random analytic function
$$H_n(z) = \begin{cases} \displaystyle\sum_{k=0}^{\infty} \xi_k\, e^{n^{1-\alpha} k^{\alpha}} z^k & \text{(case } \alpha < 1,\ z \in \mathbb{D}\text{)},\\[2mm] \displaystyle\sum_{k=0}^{\infty} \xi_k\, e^{-n^{1-\alpha} k^{\alpha}} z^k & \text{(case } \alpha > 1,\ z \in \mathbb{C}\text{)}. \end{cases}$$

THEOREM 2.7.
Let $\xi_0, \xi_1, \ldots$ be nondegenerate i.i.d. random variables such that $\mathbb{E}\log(1 + |\xi_0|) < \infty$. As $n \to \infty$, the sequence of random measures $\frac{1}{n}\mu_{H_n}$ converges in probability to the deterministic measure having the density
$$z \mapsto \frac{1}{2\pi\alpha|1-\alpha|}\,\frac{1}{|z|^2}\left|\frac{\log|z|}{\alpha}\right|^{(2-\alpha)/(\alpha-1)}$$
with respect to the Lebesgue measure on $\mathbb{C}$. The density is restricted to $\mathbb{D}$ in the case $\alpha < 1$ and to $\mathbb{C} \setminus \mathbb{D}$ in the case $\alpha > 1$.

As the parameter $\alpha$ crosses the value 1, the zeros of $H_n$ jump from the unit disk $\mathbb{D}$ to its complement $\mathbb{C} \setminus \mathbb{D}$. Note that the case $\alpha = 1$ corresponds formally to Kac polynomials, for which the zeros are on the boundary of $\mathbb{D}$. The special case $\alpha = 2$ corresponds to the randomized theta function
$$H_n(z) = \sum_{k=0}^{\infty} \xi_k\, e^{-k^2/n} z^k. \tag{11}$$
The limiting distribution of zeros has the density $\frac{1}{4\pi|z|^2}$ on $\mathbb{C} \setminus \mathbb{D}$. One can also take the sum in (11) over $k \in \mathbb{Z}$, in which case the zeros fill the whole complex plane with the same density.
A similar model, namely the polynomials $Q_n(z) = \sum_{k=0}^{n} \xi_k\, e^{-k^{\alpha}} z^k$, where $\alpha > 1$, has been considered by Schehr and Majumdar . Assuming that the $\xi_k$ are real-valued, they showed that almost all zeros of $Q_n$ become real if $\alpha > 2$. In our model, the distribution of the arguments of the zeros remains uniform for every $\alpha$.
2.5. The general result.
We are going to state a theorem which contains all examples considered above as special cases. Let ξ0,ξ1,... be nondegenerate i.i.d.
complex-valued random variables such that Elog(1 + |ξ0|) < ∞. Consider a ran-dom Taylor series Gn(z) = ∞ k=0 ξkfk,nzk, (12) where fk,n ∈C are deterministic coefficients. Essentially, we will assume that for some function u(t) the coefficients fk,n satisfy |fk,n| = e−nu(k/n)+o(n), n →∞.
Here is a precise statement. We assume that there is a function f :[0,∞) →[0,∞) and a number T0 ∈(0,∞] such that (A1) f (t) > 0 for t < T0 and f (t) = 0 for t > T0.
(A2) f is continuous on [0,T0), and, in the case T0 < +∞, left continuous at T0.
(A3) limn→∞supk∈[0,An] ||fk,n|1/n −f ( k n)| = 0 for every A > 0.
COMPLEX ZEROS OF RANDOM ANALYTIC FUNCTIONS 1383 (A4) R0 := liminft→∞f (t)−1/t ∈(0,∞], liminfk→∞|fk,n|−1/k ≥R0 for ev-ery fixed n ∈N and additionally, liminfn,k/n→∞|fk,n|−1/k ≥R0.
It will be shown later that condition (A4) ensures that the series (12) defining $G_n$ converges with probability $1$ on the disk $\mathbb{D}_{R_0}$. Let $I:\mathbb{R}\to\mathbb{R}\cup\{+\infty\}$ be the Legendre–Fenchel transform of the function $u(t)=-\log f(t)$, where $\log 0=-\infty$. That is,
$$I(s)=\sup_{t\ge0}\big(st-u(t)\big)=\sup_{t\ge0}\big(st+\log f(t)\big). \tag{13}$$
Note that $I$ is a convex function, $I(s)$ is finite for $s<\log R_0$ and $I(s)=+\infty$ for $s>\log R_0$. Recall that $\mu_{G_n}$ is the measure assigning to each zero of $G_n$ a weight equal to its multiplicity.
THEOREM 2.8.
Under the above assumptions, the sequence of random mea-sures 1 nμGn converges in probability to some deterministic locally finite measure μ on the disk DR0. The measure μ is rotationally invariant and is characterized by μ(Dr) = I ′(logr), r ∈(0,R0).
(14) By convention, I ′ is the left derivative of I. Since I is convex, the left derivative exists everywhere on (−∞,logR0) and is a nondecreasing, left-continuous func-tion. Since the supremum in (13) is taken over t ≥0, we have lims→−∞I ′(s) = 0.
Hence, μ has no atom at zero. If I ′ is absolutely continuous on some interval (logr1,logr2), then the density of μ on the annulus r1 < |z| < r2 with respect to the Lebesgue measure on C is ρ(z) = I ′′(log|z|) 2π|z|2 .
(15) It is possible to give a characterization of the measure μ without referring to the Legendre–Fenchel transform. The radial part of μ is a measure ¯ μ on (0,∞) defined by ¯ μ((0,r)) = μ(Dr). Suppose first that u is convex on (0,T0) (which is the case in all our examples). Then, ¯ μ is the image of the Lebesgue measure on (0,∞) under the mapping t →eu′(t), where u′ is the left derivative of u. This follows from the fact that (u′)←= I ′ and (I ′)←= u′ by the Legendre–Fenchel duality, where ϕ←(t) = inf{s ∈R:ϕ(s) ≥t} is the generalized left-continuous in-verse of a nondecreasing function ϕ. In particular, the support of μ is contained in the annulus elimt↓0 u′(t) ≤|z| ≤elimt↑T0 u′(t) and is equal to this annulus if u′ has no jumps. In general, any jump of u′ (or, by duality, any constancy interval of I ′) corresponds to a missing annulus in the support of μ. Also, any jump of I ′ (or, by duality, any constancy interval of u′) corresponds to a circle with positive μ-measure. More precisely, if I ′ has a jump 1384 Z. KABLUCHKO AND D. ZAPOROZHETS at s (or, by duality, u′ takes the value s on an interval of positive length), then μ assigns a positive weight (equal to the size of the jump) to the circle of radius es centered at the origin. In the case when u is nonconvex, we can apply the same considerations after replacing u by its convex hull.
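The relation (14) between the transform $I$ and the radial distribution of zeros, as well as the pushforward description above, are easy to probe numerically. The sketch below (ours, not part of the paper) uses the flat weight $u(t)=t\log t-t$, for which the Legendre–Fenchel transform is $I(s)=e^s$; then (14) predicts $\mu(\mathbb{D}_r)=I'(\log r)=r$, and indeed $e^{u'(t)}=e^{\log t}=t$, so the radial part of $\mu$ is Lebesgue measure itself.

```python
# Grid check: for u(t) = t*log(t) - t the transform (13) gives I(s) = e^s,
# hence mu(D_r) = I'(log r) = r.
import numpy as np

t = np.linspace(1e-9, 12.0, 200_001)
u = t * np.log(t) - t

def I(s):
    # brute-force Legendre-Fenchel transform sup_t (s*t - u(t)) on a grid
    return np.max(s * t - u)

s_grid = np.linspace(-1.0, 2.0, 31)
I_err = max(abs(I(s) - np.exp(s)) for s in s_grid)

# mu(D_r) = I'(log r), approximated by a central difference at s = log r
r, h = 1.5, 1e-4
mu_Dr = (I(np.log(r) + h) - I(np.log(r) - h)) / (2 * h)
print(I_err, mu_Dr)    # I_err is tiny, mu_Dr is close to r = 1.5
```

The grid bounds are arbitrary but must contain the maximizer $t=e^{s/\alpha}$ for every tested $s$.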
One may ask what measures μ may appear as limits in Theorem 2.8. Clearly, μ has to be rotationally invariant, with no atom at 0. The next theorem shows that there are no further essential restrictions.
THEOREM 2.9.
Let $\mu$ be a rotationally invariant measure on $\mathbb{C}$ such that:

(1) $\mu(\mathbb{C}\setminus\mathbb{D}_{R_0})=0$, where $R_0:=\sup\{r>0:\mu(\mathbb{D}_r)<\infty\}\in(0,\infty]$.

(2) $\int_0^R\mu(\mathbb{D}_r)r^{-1}\,dr<\infty$ for some (hence, every) $R<R_0$.

Then, there is a random Taylor series $G_n$ of the form (12) with radius of convergence a.s. equal to $R_0$ such that $\frac1n\mu_{G_n}$ converges in probability to $\mu$ on the disk $\mathbb{D}_{R_0}$.
EXAMPLE 2.10.
Consider a random polynomial
$$G_n(z)=\sum_{k=0}^{n}\xi_k z^k+2^n\sum_{k=n+1}^{2n}\xi_k\Big(\frac z2\Big)^k+\Big(\frac92\Big)^n\sum_{k=2n+1}^{3n}\xi_k\Big(\frac z3\Big)^k. \tag{16}$$
We can apply Theorem 2.8 with
$$u(t)=\begin{cases}0,&t\in[0,1],\\(\log2)(t-1),&t\in[1,2],\\(\log3)t-\log\frac92,&t\in[2,3],\\+\infty,&t>3,\end{cases}\qquad I(s)=\begin{cases}0,&s\le0,\\s,&s\in[0,\log2],\\2s-\log2,&s\in[\log2,\log3],\\3s-\log6,&s\ge\log3.\end{cases}$$
The function $u'$ has three constancy intervals of length $1$ where it takes the values $0,\log2,\log3$. Dually, the function $I'$ has three jumps of size $1$ at $0,\log2,\log3$ and is locally constant outside these points. It follows that the limiting distribution of the zeros of $G_n$ is the sum of uniform probability distributions on three concentric circles with radii $1,2,3$.
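The piecewise formulas in Example 2.10 can be confirmed by brute force; the following check (ours, not part of the paper) evaluates the supremum in (13) on a grid and compares it with the stated piecewise-linear $I$.

```python
# Verify that the piecewise-linear I of Example 2.10 is the
# Legendre-Fenchel transform of the piecewise u.
import numpy as np

def u(t):
    if t <= 1: return 0.0
    if t <= 2: return np.log(2) * (t - 1)
    if t <= 3: return np.log(3) * t - np.log(4.5)
    return np.inf

def I_expected(s):
    if s <= 0:         return 0.0
    if s <= np.log(2): return s
    if s <= np.log(3): return 2 * s - np.log(2)
    return 3 * s - np.log(6)

t_grid = np.linspace(0.0, 3.0, 30_001)      # u = +inf beyond t = 3
u_grid = np.array([u(t) for t in t_grid])

max_err = max(
    abs(np.max(s * t_grid - u_grid) - I_expected(s))
    for s in np.linspace(-0.5, 2.0, 26)
)
print(max_err)   # essentially zero: the pieces are linear, so the grid sup is exact
```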
REMARK 2.11.
Suppose that $G_n$ satisfies the assumptions of Theorem 2.8. Then, so does the derivative $G_n'$ (and, moreover, $f$ is the same in both cases). Thus, the derivative of any fixed order of $G_n$ has the same limiting distribution of zeros as $G_n$. Similarly, for every complex sequence $c_n$ such that $\limsup_{n\to\infty}\frac1n\log|c_n|\le\log f(0)$, the function $G_n(z)-c_n$ satisfies the assumptions. Hence, the limiting distribution of the solutions of the equation $G_n(z)=c_n$ is the same as for the zeros of $G_n$.
3. Proofs: Special cases.

We are going to prove the results of Section 2. We will verify the assumptions of Section 2.5 and apply Theorem 2.8. Recall the notation $u(t)=-\log f(t)$.
PROOF OF THEOREM 2.2.
We can assume that $w=0$ since otherwise we can consider the polynomial $W_n(e^{-w}z)$. It follows from $\lim_{k\to\infty}\frac1k\log|w_k|=0$ that assumptions (A1)–(A4) of Section 2.5 are fulfilled with $T_0=1$, $R_0=+\infty$ and
$$f(t)=\begin{cases}1,&t\in[0,1],\\0,&t>1,\end{cases}\qquad u(t)=\begin{cases}0,&t\in[0,1],\\+\infty,&t>1.\end{cases}$$
The Legendre–Fenchel transform of $u$ is given by $I(s)=\max(0,s)$. It follows from (14) that $\mu$ is the uniform probability measure on $\mathbb{T}$. □
REMARK 3.1. Under a slightly more restrictive assumption $\mathbb{E}\log|\xi_0|<\infty$, Theorem 2.2 can be deduced from the result of Hughes and Nikeghbali (which is partially based on the Erdős–Turán inequality). This method, however, requires a subexponential growth of the coefficients and therefore fails in all other examples we consider here.
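The clustering of Kac zeros at the unit circle asserted by Theorem 2.2 is also easy to observe numerically; the degree, seed and annulus below are arbitrary choices in this illustration of ours.

```python
# Zeros of a Kac polynomial sum_{k=0}^n xi_k z^k cluster near |z| = 1.
import numpy as np

rng = np.random.default_rng(1)
n = 300
xi = rng.standard_normal(n + 1)
roots = np.roots(xi[::-1])          # highest-degree coefficient first

moduli = np.abs(roots)
frac_near = np.mean((moduli > 0.8) & (moduli < 1.25))
print(f"{frac_near:.2f} of the zeros satisfy 0.8 < |z| < 1.25")
```

In our runs the vast majority of the 300 roots fall in this thin annulus, and the fraction approaches 1 as the degree grows.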
PROOF OF THEOREM 2.1.
By the Stirling formula, $\log n!=n\log n-n+o(n)$ as $n\to\infty$. It follows that assumption (A3) holds with
$$u(t)=\begin{cases}\alpha\big(t\log t+(1-t)\log(1-t)\big),&0\le t\le1,&\text{elliptic case},\\ \alpha(t\log t-t),&t\ge0,&\text{flat case},\\ \alpha\big(t\log t-(1+t)\log(1+t)\big),&t\ge0,&\text{hyperbolic case}.\end{cases}$$
In the elliptic case, $u(t)=+\infty$ for $t>1$. The Legendre–Fenchel transform of $u$ is given by
$$I(s)=\begin{cases}\alpha\log\big(1+e^{s/\alpha}\big),&s\in\mathbb{R},&\text{elliptic case},\\ \alpha e^{s/\alpha},&s\in\mathbb{R},&\text{flat case},\\ -\alpha\log\big(1-e^{s/\alpha}\big),&s<0,&\text{hyperbolic case}.\end{cases}$$
In the hyperbolic case, $I(s)=+\infty$ for $s\ge0$. We have $R_0=1$ in the hyperbolic case and $R_0=+\infty$ in the remaining two cases. The proof is completed by applying Theorem 2.8. □
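Each of the three closed forms for $I$ can be double-checked by evaluating the supremum in (13) on a grid; this brute-force check is ours, not part of the paper (here with $\alpha=1$ and a single test point $s=-0.7$, chosen so that all three cases are finite).

```python
# Brute-force Legendre-Fenchel transforms for the elliptic, flat and
# hyperbolic weights, compared with the stated closed forms.
import numpy as np

alpha, s = 1.0, -0.7

def sup_transform(t, u):
    return np.max(s * t - u)

# elliptic: u finite on [0, 1]
t1 = np.linspace(1e-9, 1 - 1e-9, 400_001)
I_ell = sup_transform(t1, alpha * (t1 * np.log(t1) + (1 - t1) * np.log(1 - t1)))

# flat: u on [0, inf); the maximizer e^{s/alpha} < 10 is inside the grid
t2 = np.linspace(1e-9, 10.0, 400_001)
I_flat = sup_transform(t2, alpha * (t2 * np.log(t2) - t2))

# hyperbolic: u on [0, inf); for s < 0 the maximizer is e^{s/a}/(1 - e^{s/a})
I_hyp = sup_transform(t2, alpha * (t2 * np.log(t2) - (1 + t2) * np.log(1 + t2)))

print(I_ell, alpha * np.log(1 + np.exp(s / alpha)))
print(I_flat, alpha * np.exp(s / alpha))
print(I_hyp, -alpha * np.log(1 - np.exp(s / alpha)))
```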
PROOF OF THEOREM 2.3. We are going to apply Theorem 2.8 to the polynomial $G_n(z)=W_n(e^{\beta}n^{\alpha}z)$. We have $f_{k,n}=e^{\beta k+\alpha k\log n}w_k$ for $0\le k\le n$. Equation (3) implies that assumption (A3) is satisfied with
$$u(t)=\begin{cases}\alpha(t\log t-t),&t\in[0,1],\\+\infty,&t>1.\end{cases}$$
The Legendre–Fenchel transform of $u$ is given by
$$I(s)=\begin{cases}\alpha e^{s/\alpha},&s\le0,\\ \alpha+s,&s\ge0.\end{cases}$$
Applying Theorem 2.8, we obtain that $\frac1n\mu_{G_n}$ converges in probability to the required limit. A.s. convergence will be demonstrated in Section 4.6 below. □
PROOF OF THEOREM 2.5. We apply Theorem 2.8 to $G_n(z)=W(e^{\beta}n^{\alpha}z)$. We have $u(t)=\alpha(t\log t-t)$ for all $t\ge0$. Hence, $I(s)=\alpha e^{s/\alpha}$ for all $s\in\mathbb{R}$. We can apply Theorem 2.8 to prove convergence in probability. A.s. convergence will be demonstrated in Section 4.7 below. □
PROOF OF THEOREM 2.7. Put $\sigma=+1$ in the case $\alpha>1$ and $\sigma=-1$ in the case $\alpha<1$. We have $u(t)=\sigma t^{\alpha}$ for $t\ge0$. It follows that
$$I(s)=\begin{cases}\sigma(\alpha-1)\left(\dfrac{\sigma s}{\alpha}\right)^{\alpha/(\alpha-1)},&\sigma s\ge0,\\[2mm] 0,&s<0\text{ (case }\alpha>1\text{)},\\ +\infty,&s>0\text{ (case }\alpha<1\text{)}.\end{cases}$$
We can apply Theorem 2.8. □
4. Proofs: General results.

4.1. Method of proof of Theorem 2.8.

We use the notation and the assumptions of Section 2.5. We denote the probability space on which the random variables $\xi_0,\xi_1,\ldots$ are defined by $(\Omega,\mathcal F,\mathbb P)$. We will write $\mu_n=\mu_{G_n}$ for the measure counting the zeros of $G_n$. To stress the randomness of the object under consideration we will sometimes write $G_n(z;\omega)$ and $\mu_n(\omega)$ instead of $G_n(z)$ and $\mu_n$. Here, $\omega\in\Omega$. The starting point of the proof of Theorem 2.8 is the formula
$$\mu_n(\omega)=\frac{1}{2\pi}\Delta\log\big|G_n(z;\omega)\big| \tag{17}$$
for every fixed $\omega\in\Omega$ for which $G_n(z;\omega)$ does not vanish identically. Here, $\Delta$ denotes the Laplace operator in the complex $z$-plane. The Laplace operator should always be understood as an operator acting on $D'(\mathbb{D}_{R_0})$, the space of generalized functions on the disk $\mathbb{D}_{R_0}$; see, for example, Chapter II of . Equation (17) follows from the formula $\frac{1}{2\pi}\Delta\log|z-z_0|=\delta_{z_0}$ for every $z_0\in\mathbb{C}$; see Example 4.1.10 in . First, we will compute the limiting logarithmic potential in (17).
THEOREM 4.1.
Under the assumptions of Section 2.5, for every $z\in\mathbb{D}_{R_0}\setminus\{0\}$,
$$p_n(z):=\frac1n\log\big|G_n(z)\big|\ \xrightarrow[n\to\infty]{P}\ I\big(\log|z|\big). \tag{18}$$
We will prove Theorem 4.1 in Sections 4.2, 4.3, 4.4 below. Theorem 4.1 follows from equations (22) and (27) below. Moreover, it follows from (22) that $\limsup_{n\to\infty}p_n(z)\le I(\log|z|)$ a.s. Unfortunately, we were unable to prove that $\liminf_{n\to\infty}p_n(z)\ge I(\log|z|)$ a.s. Instead, we have the following slightly weaker statement.
PROPOSITION 4.2. Let $l_1,l_2,\ldots$ be an increasing sequence of natural numbers such that $l_k\ge k^3$ for all $k\in\mathbb{N}$. Under the assumptions of Section 2.5 we have, for every $z\in\mathbb{D}_{R_0}\setminus\{0\}$,
$$p_{l_k}(z)=\frac{1}{l_k}\log\big|G_{l_k}(z)\big|\ \xrightarrow[k\to\infty]{\text{a.s.}}\ I\big(\log|z|\big). \tag{19}$$
Proposition 4.2 follows from equations (22) and (27) by noting that $\sum_{k=1}^{\infty}k^{-3/2}<\infty$ and applying the Borel–Cantelli lemma. The next proposition allows us to pass from convergence of potentials to convergence of measures. We will prove it in Section 4.5. Recall that $\mu_n$ counts the zeros of $G_n$.
PROPOSITION 4.3.
Let $l_1,l_2,\ldots$ be any increasing sequence of natural numbers. Assume that for Lebesgue-a.e. $z\in\mathbb{D}_{R_0}$ equation (19) holds. Then,
$$\frac{1}{l_k}\mu_{l_k}\ \xrightarrow[k\to\infty]{\text{a.s.}}\ \frac{1}{2\pi}\Delta I\big(\log|z|\big). \tag{20}$$
With these results, we are in a position to prove Theorem 2.8. We need to show that $\frac1n\mu_n$ converges to $\mu$ in probability, as a sequence of $\mathcal M(\mathbb{D}_{R_0})$-valued random variables. A sequence of random variables with values in a metric space converges in probability to some limit if and only if every subsequence of these random variables contains a subsubsequence which converges a.s. to the same limit; see, for example, Lemma 3.2 in . Let a subsequence $\frac{1}{n_1}\mu_{n_1},\frac{1}{n_2}\mu_{n_2},\ldots$, where $n_1<n_2<\cdots$, be given. Write $l_k=n_{k^3}$, so that $\{l_k\}$ is a subsequence of $\{n_k\}$ and $l_k\ge k^3$. It follows from Propositions 4.2 and 4.3 that (20) holds. So, the random measure $\frac1n\mu_n$ converges in probability to $\frac{1}{2\pi}\Delta I(\log|z|)$. It remains to observe that the generalized function $\frac{1}{2\pi}\Delta I(\log|z|)$ is equal to the measure $\mu$ given in (14). This follows from the fact that the radial part of $\Delta$ in polar coordinates is given by $\frac1r\frac{d}{dr}\big(r\frac{d}{dr}\big)$. This gives the desired result.
4.2. The logarithmic moment condition.
The next well-known lemma states that i.i.d. random variables grow subexponentially with probability 1 if and only if their logarithmic moment is finite.
LEMMA 4.4.
Let $\xi_0,\xi_1,\ldots$ be i.i.d. random variables. Fix $\varepsilon>0$. Then,
$$S:=\sup_{k=0,1,\ldots}\frac{|\xi_k|}{e^{\varepsilon k}}<+\infty\ \text{a.s.}\quad\Longleftrightarrow\quad \mathbb{E}\log\big(1+|\xi_0|\big)<\infty. \tag{21}$$

PROOF. For every nonnegative random variable $X$ we have
$$\sum_{k=1}^{\infty}\mathbb{P}[X\ge k]\le \mathbb{E}X\le \sum_{k=0}^{\infty}\mathbb{P}[X\ge k].$$
With $X=\frac1\varepsilon\log(1+|\xi_0|)$ it follows that $\mathbb{E}\log(1+|\xi_0|)<\infty$ if and only if $\sum_{k=1}^{\infty}\mathbb{P}[|\xi_0|\ge e^{\varepsilon k}-1]<\infty$ for some (equivalently, every) $\varepsilon>0$. The proof is completed by applying the Borel–Cantelli lemma. □

Note in passing that Lemma 4.4 and condition (A4) imply that for every $n\in\mathbb{N}$ the series (12) converges with probability $1$ on $\mathbb{D}_{R_0}$.
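The sandwich in the proof is in fact exact on the left for integer-valued $X$; the following tiny check (ours) illustrates it for $X$ uniform on $\{0,\ldots,9\}$.

```python
# For integer-valued X >= 0: sum_{k>=1} P[X >= k] = E[X], and adding the
# k = 0 term P[X >= 0] = 1 gives the upper bound of the sandwich.
from fractions import Fraction

p = Fraction(1, 10)                                       # X uniform on {0,...,9}
EX = sum(k * p for k in range(10))
lower = sum(Fraction(10 - k, 10) for k in range(1, 10))   # sum of P[X >= k]
upper = lower + 1                                         # add P[X >= 0] = 1
print(lower, EX, upper)   # 9/2 9/2 11/2
```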
4.3. Upper bound in Theorem 4.1.
Fix an $\varepsilon>0$. All constants which we will introduce below depend only on $\varepsilon$. Let us agree that all inequalities will hold uniformly over $z\in\mathbb{D}_{e^{-2\varepsilon}R_0}\setminus\{0\}$ if $R_0<\infty$ and over $z\in\mathbb{D}_{1/\varepsilon}\setminus\{0\}$ if $R_0=\infty$. We will show that there exists an a.s. finite random variable $M=M(\varepsilon)$ such that for all sufficiently large $n$,
$$\big|G_n(z)\big|\le Me^{n(I(\log|z|)+3\varepsilon)}. \tag{22}$$
First, we estimate the tail of the Taylor series (12) defining $G_n$. By assumption (A4) there is $A>\max(0,-\log f(0)/\varepsilon)$ such that for all $n\ge A$ and all $k\ge An$,
$$|f_{k,n}|<\big(|z|e^{2\varepsilon}\big)^{-k}.$$
Lemma 4.4 implies that there exist a.s. finite random variables $S,M'$ such that for all $n\ge A$,
$$\Big|\sum_{k\ge An}\xi_k f_{k,n}z^k\Big|\le S\sum_{k\ge An}e^{\varepsilon k}|f_{k,n}||z|^k\le S\sum_{k\ge An}e^{-\varepsilon k}\le M'e^{-\varepsilon An}. \tag{23}$$
We now consider the initial part of the Taylor series (12) defining $G_n$. Take some $\delta>0$. By assumption (A3), there is $N$ such that for all $n>N$ and all $k\le An$,
$$|f_{k,n}|<\Big(f\Big(\frac kn\Big)+\delta\Big)^n. \tag{24}$$
It follows from (13) that for all $t\ge0$,
$$t\log|z|+\log f(t)\le I\big(\log|z|\big). \tag{25}$$
Using (24), (25) and Lemma 4.4 with $\varepsilon/A$ instead of $\varepsilon$, we obtain that there is an a.s. finite random variable $M''$ such that for all sufficiently large $n$,
$$\Big|\sum_{0\le k<An}\xi_k f_{k,n}z^k\Big|\le M''\sum_{0\le k<An}e^{\varepsilon k/A}\Big(f\Big(\frac kn\Big)+\delta\Big)^n|z|^k\le M''e^{\varepsilon n}\sum_{0\le k<An}\Big(e^{(k/n)\log|z|+\log f(k/n)}+\delta|z|^{k/n}\Big)^n\le M''e^{2\varepsilon n}\Big(e^{I(\log|z|)}+\delta\max\big(1,|z|^{A}\big)\Big)^n\le M''e^{n(I(\log|z|)+3\varepsilon)}, \tag{26}$$
where the last inequality holds if $\delta=\delta(\varepsilon)$ is sufficiently small. Combining (23) and (26) and noting that $-\varepsilon A<\log f(0)\le I(\log|z|)$ by (25), we obtain that (22) holds with $M=M'+M''$ for sufficiently large $n$. By enlarging $M$, if necessary, we can achieve that it holds for all $n\ge A$.
4.4. Lower bound in Theorem 4.1.
Fix $\varepsilon>0$ and $z\in\mathbb{D}_{R_0}\setminus\{0\}$. We are going to show that
$$\mathbb{P}\big[\big|G_n(z)\big|<e^{n(I(\log|z|)-4\varepsilon)}\big]=O\Big(\frac{1}{\sqrt n}\Big),\qquad n\to\infty. \tag{27}$$
We will use the Kolmogorov–Rogozin inequality in a multidimensional form which can be found in . Given a $d$-dimensional random vector $X$, define its concentration function by
$$Q(X;r)=\sup_{x\in\mathbb{R}^d}\mathbb{P}\big[X\in D_r(x)\big],\qquad r>0, \tag{28}$$
where $D_r(x)$ is a $d$-dimensional ball of radius $r$ centered at $x$. An easy consequence of (28) is that for all independent random vectors $X,Y$ and all $r,a>0$,
$$Q(X+Y;r)\le Q(X;r),\qquad Q(aX;r)=Q(X;r/a). \tag{29}$$
The next result follows from Corollary 1 on page 304 of .

THEOREM 4.5 (Kolmogorov–Rogozin inequality). There is a constant $C_d$ depending only on $d$ such that for all independent (not necessarily identically distributed) random $d$-dimensional vectors $X_1,\ldots,X_n$ and for all $r>0$, we have
$$Q(X_1+\cdots+X_n;r)\le C_d\cdot\Big(\sum_{k=1}^{n}\big(1-Q(X_k;r)\big)\Big)^{-1/2}.$$
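For a concrete feel for the $n^{-1/2}$ decay, consider $n$ independent Rademacher signs: for small $r$ the concentration function of their sum is the largest point mass, the central binomial probability, which behaves like $\sqrt{2/(\pi n)}$. This numerical aside is ours, not part of the paper.

```python
# Concentration function of a sum of n Rademacher signs at small r:
# the support is a lattice of spacing 2, so Q(S_n; r) with r < 1 equals
# the largest point mass, attained at the center of the binomial law.
import math

n = 2000
Q = math.comb(n, n // 2) / 2**n            # central binomial probability
asymptotic = math.sqrt(2 / (math.pi * n))  # the n^{-1/2} scale
print(Q, asymptotic)
```

The exact value sits just below the asymptotic, by a Stirling correction of order $1/n$.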
The idea of our proof of (27) is to use the Kolmogorov–Rogozin inequality to show that the probability of very strong cancellation among the terms of the series (12) defining $G_n$ is small. First, we have to single out those terms of $G_n$ in which $|f_{k,n}z^k|$ is large enough. By the definition of $I$, see (13), there is $t_0\in[0,T_0]$ such that $t_0\log|z|+\log f(t_0)>I(\log|z|)-\varepsilon$. Moreover, by assumption (A2), we can find a closed interval $J$ of length $|J|>0$ containing $t_0$ such that
$$f(t)|z|^t>e^{I(\log|z|)-2\varepsilon},\qquad t\in J.$$
Define a set $J_n=\{k\in\mathbb{N}_0:k/n\in J\}$. By assumption (A3) there is $N$ such that for all $n>N$ and all $k\in J_n$,
$$|f_{k,n}||z|^k>e^{n(I(\log|z|)-3\varepsilon)}.$$
Let $n>N$. For $k\in\mathbb{N}_0$ define
$$a_{k,n}=e^{-n(I(\log|z|)-3\varepsilon)}f_{k,n}z^k.$$
Note that $|a_{k,n}|>1$ for $k\in J_n$. Define
$$G_{n,1}=\sum_{k\in J_n}a_{k,n}\xi_k,\qquad G_{n,2}=\sum_{k\notin J_n}a_{k,n}\xi_k.$$
By considering real and imaginary parts, we can view the complex random variables $a_{k,n}\xi_k$ as two-dimensional random vectors. Using (29), we arrive at
$$\mathbb{P}\big[\big|G_n(z)\big|<e^{n(I(\log|z|)-4\varepsilon)}\big]\le Q\big(G_{n,1}+G_{n,2};e^{-\varepsilon n}\big)\le Q\big(G_{n,1};e^{-\varepsilon n}\big). \tag{30}$$
By Theorem 4.5, there is an absolute constant $C$ such that for all $r>0$,
$$Q(G_{n,1};r)\le C\cdot\Big(\sum_{k\in J_n}\big(1-Q(a_{k,n}\xi_k;r)\big)\Big)^{-1/2}\le C\cdot\Big(\sum_{k\in J_n}\big(1-Q(\xi_k;r)\big)\Big)^{-1/2}.$$
Here, the second inequality follows from the fact that $|a_{k,n}|>1$ for $k\in J_n$. Now, since the random variable $\xi_0$ is supposed to be nondegenerate, we can choose $r>0$ so small that $Q(\xi_0;r)<1$. Note that this is the only place in the proof of Theorem 2.8 where we use the randomness of the $\xi_k$'s in a nonobvious way. The rest of the proof is valid for any deterministic sequence $\xi_0,\xi_1,\ldots$ such that $|\xi_n|=O(e^{\delta n})$ for every $\delta>0$. If $n$ is sufficiently large, then $e^{-\varepsilon n}\le r$ and hence,
$$Q\big(G_{n,1};e^{-\varepsilon n}\big)\le Q(G_{n,1};r)\le C_1|J_n|^{-1/2}\le C_2n^{-1/2}. \tag{31}$$
In the last inequality, we have used that the number of elements of $J_n$ is larger than $(|J|/2)n$ for large $n$. Taking (30) and (31) together completes the proof of (27).
4.5. Proof of Proposition 4.3.
Define a set $A\subset\mathbb{D}_{R_0}\times\Omega$, measurable with respect to the product of the Borel $\sigma$-algebra on $\mathbb{D}_{R_0}$ and $\mathcal F$, by
$$A=\Big\{(z,\omega):\lim_{k\to\infty}p_{l_k}(z;\omega)=I\big(\log|z|\big)\Big\}.$$
We know from assumption (19) that for Lebesgue-a.e. $z\in\mathbb{D}_{R_0}$ it holds that $\int_{\Omega}\mathbb 1_{(z,\omega)\notin A}\,\mathbb P(d\omega)=0$. By Fubini's theorem, for $\mathbb P$-a.e. $\omega\in\Omega$, it holds that $\int_{\mathbb{D}_{R_0}}\mathbb 1_{(z,\omega)\notin A}\,\lambda(dz)=0$. Hence, there is a measurable set $E_1\subset\Omega$ with $\mathbb P[E_1]=0$ such that for every $\omega\notin E_1$,
$$\lim_{k\to\infty}p_{l_k}(z;\omega)=I\big(\log|z|\big)\qquad\text{for Lebesgue-a.e. }z\in\mathbb{D}_{R_0}. \tag{32}$$
Let $k(\omega)=\min\{k\in\mathbb{N}_0:\xi_k(\omega)\ne0\}$, $\omega\in\Omega$. Since the $\xi_k$'s are assumed to be nondegenerate, the set $E_0=\{\omega\in\Omega:k(\omega)=\infty\}$ satisfies $\mathbb P[E_0]=0$. By conditions (A3) and (A1), after ignoring finitely many values of $n$, we can assume that $f_{k,n}\ne0$ for $0\le k\le T_0n/2$. Define $n(\omega)=2k(\omega)/T_0$. For $\omega\notin E_0$ and $n>n(\omega)$ the function $G_n$ does not vanish identically. For every fixed $\omega\notin E_0$ and $n>n(\omega)$ the function $p_n(z;\omega)=\frac1n\log|G_n(z;\omega)|$ is subharmonic as a function of $z$; see Example 4.1.10 in . Also, it follows from (22) that there is a measurable set $E_2\subset\Omega$ with $\mathbb P[E_2]=0$ such that for every $\omega\notin E_2$, the family of functions $P_\omega=\{z\mapsto p_{l_k}(z;\omega):k\in\mathbb{N}\}$ is uniformly bounded above on every compact subset of $\mathbb{D}_{R_0}$. Let $E=E_0\cup E_1\cup E_2$, so that $\mathbb P[E]=0$. Fix $\omega\notin E$. By Theorem 4.1.9 of , the family $P_\omega$ is either precompact in $D'(\mathbb{D}_{R_0})$, the space of generalized functions on the disk $\mathbb{D}_{R_0}$, or contains a subsequence converging to $-\infty$ uniformly on compact subsets of $\mathbb{D}_{R_0}$. The latter possibility is excluded by (32). Thus, the family $P_\omega$ is precompact in $D'(\mathbb{D}_{R_0})$. Any subsequential limit of $P_\omega$ must coincide with the function $I(\log|z|)$ by (32) and Proposition 16.1.2 in . It follows that for every fixed $\omega\notin E$,
$$p_{l_k}(z;\omega)\ \xrightarrow[k\to\infty]{}\ I\big(\log|z|\big)\qquad\text{in }D'(\mathbb{D}_{R_0}). \tag{33}$$
Since the Laplace operator is continuous on $D'(\mathbb{D}_{R_0})$, we may apply it to both sides of (33). Recalling (17) we obtain that for every $\omega\notin E$,
$$\frac{1}{l_k}\mu_{l_k}(\omega)=\frac{1}{2\pi}\Delta p_{l_k}(z;\omega)\ \xrightarrow[k\to\infty]{}\ \frac{1}{2\pi}\Delta I\big(\log|z|\big)\qquad\text{in }D'(\mathbb{D}_{R_0}).$$
A sequence of locally finite measures converges in $D'(\mathbb{D}_{R_0})$ if and only if it converges vaguely. This completes the proof of (20).
4.6. Proof of the a.s. convergence in Theorem 2.3.
Recall that convergence in probability has already been established in Section 3. To prove the a.s. convergence we first extract a subsequence to which we can apply the Borel–Cantelli lemma. Given $n\in\mathbb{N}$ we can find a unique $j_n\in\mathbb{N}$ such that $j_n^3\le n<(j_n+1)^3$. Write $m_n=j_n^3$ and $G_n(z)=W_n(e^{\beta}m_n^{\alpha}z)$. Note that $\lim_{n\to\infty}m_n/n=1$. Thus, it suffices to show that $\frac1n\mu_{G_n}$ converges a.s. to the measure with density (5). As a first step, we will prove the a.s. convergence of the corresponding potentials. Fix $z\in\mathbb{D}\setminus\{0\}$. We will prove that
$$p_n(z)=\frac1n\log\big|G_n(z)\big|\ \xrightarrow[n\to\infty]{\text{a.s.}}\ \alpha|z|^{1/\alpha}. \tag{34}$$
Note that $G_n$ satisfies all assumptions of Section 2.5. It follows from Proposition 4.2 applied to the subsequence $l_j=j^3$ that
$$\frac{1}{m_n}\log\big|G_{m_n}(z)\big|\ \xrightarrow[n\to\infty]{\text{a.s.}}\ \alpha|z|^{1/\alpha}. \tag{35}$$
Let now $n\in\mathbb{N}$ be a sufficiently large number not of the form $j^3$. We have, by Lemma 4.4 and (3),
$$\big|G_n(z)-G_{m_n}(z)\big|=\Big|\sum_{k=m_n+1}^{n}\xi_k w_k e^{\beta k}m_n^{\alpha k}z^k\Big|\le Se^{2\varepsilon n}\sum_{k=m_n+1}^{n}e^{-\alpha(k\log k-k)}n^{\alpha k}|z|^k.$$
The function $x\mapsto-\alpha(x\log x-x)+\alpha x\log n$, defined for $x>0$, attains its maximum, which is equal to $\alpha n$, at $x=n$. Recall that $|z|<1$. Since $m_n>(1-\varepsilon)n$ and $\varepsilon nS<e^{\varepsilon n}$ if $n$ is sufficiently large, we have the estimate
$$\big|G_n(z)-G_{m_n}(z)\big|\le e^{3\varepsilon n}e^{\alpha n}|z|^{(1-\varepsilon)n}.$$
Since $\alpha+\log|z|<\alpha|z|^{1/\alpha}$, we have, if $\varepsilon>0$ is small enough,
$$\big|G_n(z)-G_{m_n}(z)\big|\le e^{(1-\varepsilon)n(\alpha|z|^{1/\alpha}-2\varepsilon)}\le e^{m_n(\alpha|z|^{1/\alpha}-2\varepsilon)}. \tag{36}$$
Bringing (35) and (36) together we obtain (34).
We are ready to complete the proof. It follows from (34) and Proposition 4.3 that the restriction of $\frac1n\mu_{G_n}$ to $\mathbb{D}$ converges a.s. to the measure $\mu$ with density (5), as a sequence of random elements with values in $\mathcal M(\mathbb{D})$. To prove that the a.s. convergence holds in the sense of $\mathcal M(\mathbb{C})$-valued elements, we need to show that $\lim_{n\to\infty}\frac1n\mu_{G_n}(\mathbb{C}\setminus\mathbb{D})=0$ a.s., or, equivalently, that $\liminf_{n\to\infty}\frac1n\mu_{G_n}(\mathbb{D})=1$ a.s. Let $f:\mathbb{C}\to[0,1]$ be a continuous function with support in $\mathbb{D}$. Then, since $\nu\mapsto\int f\,d\nu$ defines a continuous functional on $\mathcal M(\mathbb{D})$,
$$\liminf_{n\to\infty}\frac1n\mu_{G_n}(\mathbb{D})\ge\liminf_{n\to\infty}\frac1n\int_{\mathbb{C}}f\,d\mu_{G_n}=\int_{\mathbb{C}}f\,d\mu\qquad\text{a.s.}$$
The supremum of the right-hand side over all admissible $f$ is equal to $1$ since $\mu(\mathbb{D})=1$. This proves the claim.
4.7. Proof of the a.s. convergence in Theorem 2.5.
Let $m_n$ be defined in the same way as in the previous proof. Write $G_n(z)=W(e^{\beta}m_n^{\alpha}z)$. Note that $G_n$ satisfies the assumptions of Section 2.5 with $I(s)=\alpha e^{s/\alpha}$. By Proposition 4.2, for all $z\in\mathbb{C}\setminus\{0\}$,
$$p_n(z)=\frac1n\log\big|G_n(z)\big|\ \xrightarrow[n\to\infty]{\text{a.s.}}\ \alpha|z|^{1/\alpha}. \tag{37}$$
Then, it follows from Proposition 4.3 that $\frac1n\mu_{G_n}$ converges a.s. to the measure with density (10).
4.8. Proof of Theorem 2.4.
We prove only the implication (1) ⇒ (2) since the converse implication has been established in Theorem 2.3. Let $W_n(z)=\sum_{k=0}^{n}\xi_k w_k z^k$, where $w_k$ is a sequence satisfying (3) and (6). Assume that $\mathbb{E}\log(1+|\xi_0|)=\infty$. Fix $\varepsilon>0$. We will show that with probability $1$ there exist infinitely many $n$'s such that all zeros of $W_n(e^{\beta}n^{\alpha}z)$ are located in the disk $\mathbb{D}_{2\varepsilon}$. This implies that $\frac1n\mu_n$ does not converge a.s. to the measure with density (5). We use an idea of . By Lemma 4.4, $\limsup_{n\to\infty}|\xi_n|^{1/n}=+\infty$. Hence, with probability $1$ there exist infinitely many $n$'s such that
$$|\xi_n|^{1/n}>\max_{k=1,\ldots,n-1}|\xi_{n-k}|^{1/(n-k)},\qquad |\xi_n|^{1/n}>\max\Big(\frac{3C+1}{\varepsilon},\frac{1}{e^{\alpha}\varepsilon}\Big). \tag{38}$$
Let $n$ be such that (38) holds. By (6) and (38), we have for every $z\in\mathbb{C}$ and $k<n$,
$$\big|w_{n-k}\xi_{n-k}\big(e^{\beta}n^{\alpha}z\big)^{n-k}\big|\le C|w_n|e^{\beta k}n^{\alpha k}|\xi_n|^{(n-k)/n}\big|e^{\beta}n^{\alpha}z\big|^{n-k}=C\big|w_n\xi_n\big(e^{\beta}n^{\alpha}z\big)^n\big|\big(|\xi_n|^{1/n}|z|\big)^{-k}.$$
For every $z$ such that $|z|>\varepsilon$, we obtain
$$\sum_{k=1}^{n-1}\big|w_{n-k}\xi_{n-k}\big(e^{\beta}n^{\alpha}z\big)^{n-k}\big|\le C\big|w_n\xi_n\big(e^{\beta}n^{\alpha}z\big)^n\big|\cdot\sum_{k=1}^{n-1}\frac{1}{(3C+1)^k}<\frac13\big|w_n\xi_n\big(e^{\beta}n^{\alpha}z\big)^n\big|.$$
By (3) and (38), the right-hand side of this inequality goes to $+\infty$ as $n\to\infty$. In particular, for sufficiently large $n$, it is larger than $|\xi_0w_0|$. It follows that for $|z|>\varepsilon$, the term of degree $n$ in the polynomial $W_n(e^{\beta}n^{\alpha}z)$ is larger, in the sense of absolute value, than the sum of all other terms. Hence, the polynomial $W_n(e^{\beta}n^{\alpha}z)$ has no zeros outside the disk $\mathbb{D}_{2\varepsilon}$.
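The final step rests on the elementary observation that a polynomial whose top-degree term dominates the sum of all other terms on $|z|>\varepsilon$ cannot vanish there. A toy numerical confirmation of this mechanism (ours, with an arbitrary huge leading coefficient standing in for the random $\xi_n$ of the proof):

```python
# If the leading coefficient of M z^n + z^{n-1} + ... + 1 is huge, the top
# term dominates for |z| away from 0 and all roots are pulled into a small
# disk: any root satisfies M |z|^n <= n, i.e. |z| <= (n / M)^(1/n).
import numpy as np

M, n = 1e8, 12
coeffs = np.ones(n + 1)
coeffs[-1] = M                      # degree-n coefficient (ascending order)

roots = np.roots(coeffs[::-1])      # np.roots takes descending order
largest = np.max(np.abs(roots))
print(largest)                      # below (n / M)^(1/n), about 0.27 here
```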
4.9. Proof of Theorem 2.9.
Start with a measure $\mu$ satisfying the assumptions of Theorem 2.9. Define a function $I$ by
$$I(s)=\int_{-\infty}^{s}\mu\big(\mathbb{D}_{e^r}\big)\,dr,\qquad s<\log R_0.$$
The integral is finite by the second assumption of the theorem. Clearly, $I$ is nondecreasing, continuous and convex on $(-\infty,\log R_0)$. For $s>\log R_0$ let $I(s)=+\infty$. Define $I(\log R_0)$ by left continuity. Let now $u$ be defined as the Legendre–Fenchel transform of $I$:
$$u(t)=\sup_{s\in\mathbb{R}}\big(st-I(s)\big).$$
We claim that the random analytic function $G_n(z)=\sum_{k=0}^{\infty}\xi_k f_{k,n}z^k$ with $f_{k,n}=e^{-nu(k/n)}$ satisfies assumptions (A1)–(A4) of Theorem 2.8 with $f=e^{-u}$. By the Legendre–Fenchel duality, the function $u$ possesses the following properties. First, it is convex and lower semicontinuous. Second, it is finite on the interval $[0,T_0)$, where $T_0=\limsup_{s\to+\infty}I(s)/s$ satisfies $T_0\in(0,+\infty]$. This holds since $I$ is nondecreasing and $\lim_{s\to-\infty}I(s)=0$ by construction. Third, $u(t)=+\infty$ for $t>T_0$ and $t<0$. This verifies assumption (A1). Fourth, formula (13) holds and $\lim_{t\to+\infty}u(t)/t=\log R_0$. This, together with Lemma 4.4, shows that the radius of convergence of $G_n$ is $R_0$ a.s. and verifies assumption (A4). Finally, $u$ is continuous on $[0,T_0)$ (since it is convex and finite there), and, in the case $T_0<+\infty$, the function $u$ is left continuous at $T_0$ (this follows from the lower semicontinuity of $u$). This verifies assumption (A2). Assumption (A3) holds trivially with $f=e^{-u}$.
Acknowledgment.
The authors are grateful to the unknown referee who considerably simplified the original proof of Theorem 2.8. The argument in Section 4.5 follows the idea of the referee.
REFERENCES

ARNOLD, L. (1966). Über die Nullstellenverteilung zufälliger Polynome. Math. Z. 92 12–18. MR0200966
BHARUCHA-REID, A. T. and SAMBANDHAM, M. (1986). Random Polynomials. Academic Press, Orlando, FL. MR0856019
BLEHER, P. and DI, X. (2004). Correlations between zeros of non-Gaussian random polynomials. Int. Math. Res. Not. IMRN 2004 2443–2484. MR2078308
BLOOM, T. and SHIFFMAN, B. (2007). Zeros of random polynomials on C^m. Math. Res. Lett. 14 469–479. MR2318650
BORDENAVE, C. and CHAFAÏ, D. (2012). Around the circular law. Probab. Surv. 9 1–89. MR2908617
EDELMAN, A. and KOSTLAN, E. (1995). How many zeros of a random polynomial are real? Bull. Amer. Math. Soc. (N.S.) 32 1–37. MR1290398
ESSEEN, C. G. (1968). On the concentration function of a sum of independent random variables. Z. Wahrsch. verw. Gebiete 9 290–308. MR0231419
FARAHMAND, K. (1998). Topics in Random Polynomials. Pitman Research Notes in Mathematics Series 393. Longman, Harlow. MR1679392
FORRESTER, P. J. and HONNER, G. (1999). Exact statistical properties of the zeros of complex random polynomials. J. Phys. A 32 2961–2981. MR1690355
HAMMERSLEY, J. M. (1956). The zeros of a random polynomial. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1954–1955, Vol. II, 89–111. Univ. California Press, Berkeley and Los Angeles. MR0084888
HÖRMANDER, L. (1983). The Analysis of Linear Partial Differential Operators I. Grundlehren der Mathematischen Wissenschaften 256. Springer, Berlin.
HÖRMANDER, L. (1983). The Analysis of Linear Partial Differential Operators II. Grundlehren der Mathematischen Wissenschaften 257. Springer, Berlin.
HOUGH, J. B., KRISHNAPUR, M., PERES, Y. and VIRÁG, B. (2009). Zeros of Gaussian Analytic Functions and Determinantal Point Processes. University Lecture Series 51. Amer. Math. Soc., Providence, RI. MR2552864
HUGHES, C. P. and NIKEGHBALI, A. (2008). The zeros of random polynomials cluster uniformly near the unit circle. Compos. Math. 144 734–746. MR2422348
IBRAGIMOV, I. and ZEITOUNI, O. (1997). On roots of random polynomials. Trans. Amer. Math. Soc. 349 2427–2441. MR1390040
IBRAGIMOV, I. A. and ZAPOROZHETS, D. N. (2013). On distribution of zeros of random polynomials in complex plane. In Prokhorov and Contemporary Probability Theory (A. N. Shiryaev, S. R. S. Varadhan and E. L. Presman, eds.). Springer Proceedings in Mathematics and Statistics 33 303–324. Springer, Berlin.
KALLENBERG, O. (1997). Foundations of Modern Probability. Springer, New York. MR1464694
KRISHNAPUR, M. and VIRÁG, B. (2014). The Ginibre ensemble and Gaussian analytic functions. Int. Math. Res. Not. IMRN 2014 1441–1464.
LEDOAN, A., MERKLI, M. and STARR, S. (2012). A universality property of Gaussian analytic functions. J. Theoret. Probab. 25 496–504. MR2914439
LEVIN, B. J. (1964). Distribution of Zeros of Entire Functions. Translations of Mathematical Monographs 5. Amer. Math. Soc., Providence, RI. MR0156975
LITTLEWOOD, J. E. and OFFORD, A. C. (1938). On the number of real roots of a random algebraic equation. J. Lond. Math. Soc. (2) 13 288–295.
LITTLEWOOD, J. E. and OFFORD, A. C. (1939). On the number of real roots of a random algebraic equation. II. Math. Proc. Cambridge Philos. Soc. 35 133–148.
LITTLEWOOD, J. E. and OFFORD, A. C. (1945). On the distribution of the zeros and a-values of a random integral function. I. J. Lond. Math. Soc. (2) 20 130–136. MR0019123
LITTLEWOOD, J. E. and OFFORD, A. C. (1948). On the distribution of zeros and a-values of a random integral function. II. Ann. of Math. (2) 49 885–952. Errata: 50 990–991 (1949). MR0029981
OFFORD, A. C. (1965). The distribution of the values of an entire function whose coefficients are independent random variables. Proc. Lond. Math. Soc. (3) 14a 199–238. MR0177117
OFFORD, A. C. (1995). The distribution of the values of an entire function whose coefficients are independent random variables. II. Math. Proc. Cambridge Philos. Soc. 118 527–542. MR1342969
SCHEHR, G. and MAJUMDAR, S. N. (2009). Condensation of the roots of real random polynomials on the real axis. J. Stat. Phys. 135 587–598. MR2544105
SHEPP, L. A. and VANDERBEI, R. J. (1995). The complex zeros of random polynomials. Trans. Amer. Math. Soc. 347 4365–4384. MR1308023
SHIFFMAN, B. and ZELDITCH, S. (1999). Distribution of zeros of random and quantum chaotic sections of positive line bundles. Comm. Math. Phys. 200 661–683. MR1675133
SHIFFMAN, B. and ZELDITCH, S. (2003). Equilibrium distribution of zeros of random polynomials. Int. Math. Res. Not. IMRN 2003 25–49. MR1935565
SHMERLING, E. and HOCHBERG, K. J. (2002). Asymptotic behavior of roots of random polynomial equations. Proc. Amer. Math. Soc. 130 2761–2770 (electronic). MR1900883
SODIN, M. (2005). Zeroes of Gaussian analytic functions. In European Congress of Mathematics 445–458. Eur. Math. Soc., Zürich. MR2185759
SODIN, M. and TSIRELSON, B. (2004). Random complex zeroes. I. Asymptotic normality. Israel J. Math. 144 125–149. MR2121537
SZEGŐ, G. (1924). Über eine Eigenschaft der Exponentialreihe. Sitzungsber. Berl. Math. Ges. 23 50–64.
TAO, T. and VU, V. (2010). Random matrices: Universality of ESDs and the circular law. Ann. Probab. 38 2023–2065. With an appendix by Manjunath Krishnapur. MR2722794
ŠPARO, D. I. and ŠUR, M. G. (1962). On the distribution of roots of random polynomials. Vestnik Moskov. Univ. Ser. I Mat. Meh. 1962 40–43. MR0139199

INSTITUTE OF STOCHASTICS, ULM UNIVERSITY, HELMHOLTZSTR. 18, 89069 ULM, GERMANY. E-MAIL: zakhar.kabluchko@uni-ulm.de

ST. PETERSBURG BRANCH, STEKLOV INSTITUTE OF MATHEMATICS, FONTANKA STR. 27, 191011 ST. PETERSBURG, RUSSIA. E-MAIL: zap1979@gmail.com
189599 | https://www.digitalhistory.uh.edu/teachers/lesson_plans/pdfs/unit1_4.pdf | Page 14 Chapter 4 British Mercantilism and the Cost of Empire hree hundred years ago, nations wanted colonies in order to increase their power. According to the economic thinkers of those days, colonies would help the mother country become self-sufficient and wealthy. No great nation could exist without colonies. This was the idea behind mercantilism, a forerunner of the present day idea of imperialism. T England, Spain, France, and other nations competed with each other to own colonies in North America, South America, Asia, and Africa. Their competition often led to wars. The mercantilists reasoned that even wars were worth the price, because each colony would be a help to its conqueror. England needed raw materials that her colonies could supply. Lumber, wool, iron, cotton, tobacco, rice, and indigo were among the products needed in England. British manufacturers in the meantime needed markets for the goods they produced. The American colonies bought their cloth, furniture, knives, guns, and kitchen utensils from England. In addition, England’s survival as a nation depended on her navy, and the colonies were a constant source of both the timber for her ships and the men who could sail them. Since each nation's wealth in those days was measured in the amounts of gold and silver it possessed, England had yet a another reason for establishing and ruling a vast colonial empire: the colonists would supply their British masters with gold and silver simply by selling their raw materials and buying England’s manufactured products. The difference between what the colonists could pay through their sales of raw materials, and what they owed because of the purchase of manufactured goods, is called the balance of trade. Since the colonists bought more than they sold, their balance of trade was said to be unfavorable. 
The difference would have to be made up in such precious metals as gold and silver. By thus supplying Britain with gold and silver to make up for their unfavorable balance of trade, the American colonists were fulfilling the British mercantilists' fondest dreams.

England was not content to allow trade to develop in whatever manner her colonies found convenient or best for their own interests. Instead, England passed special laws to govern the flow of goods across the Atlantic. England placed restrictions on colonial exports, imports, and manufacturing. At the same time, she encouraged the production of certain naval products in the colonies, and permitted American as well as British ships to transport goods between mother England and colonial America. These laws, of course, irritated the colonists who were adversely affected by them. But whether the colonists were seriously hurt by these laws is an open question which the reader is invited to explore. The question is important because, in this economic relation between crown and colony, one may find the real causes of the American Revolution.

Enumerated Goods: Restrictions on Exports

When the first Englishmen settled in Jamestown, Virginia, and in Plymouth, Massachusetts, England did little to direct their trade. As the colonies grew more prosperous, however, England began to enforce her mercantile ideas. A series of laws known as the Navigation Acts was passed in the 1660s. They were designed to make the American colonies dependent on the manufactured products of England. The colonists, of course, were expected to buy more from England than they sold to her and to pay the difference in gold and silver. Therefore, the British forbade all non-English ships from trading with the colonies. Because ships made in the colonies were considered British, they too were permitted to carry this trade between the colonies and the mother country.
Thomas Ladenburg, copyright, 1974, 1998, 2001, 2007, t.ladenburg@verizon.net

In addition to these regulations, England also enumerated, or listed, special products that could be sold only to British merchants. Included in this list of enumerated goods were the products most generally considered essential to England's wealth and power: sugar, tobacco, cotton, indigo, and later rice, molasses, naval stores (tar, pitch, etc.), furs, and iron. English merchants were allowed to sell these goods to whomever they chose, as long as the goods were first taken to England or Scotland, where a tariff would be charged. Thus, if a Virginia planter wished to sell his tobacco, he could sell it only to an English merchant. The Englishman then had to take it to England, pay taxes on it there, and only then could he sell the tobacco in France or any other country.

Staples Act: Restrictions on Imports

In 1663 Parliament passed the Staples Act, which forbade the colonists from buying any products grown or manufactured in Africa, Europe, or Asia. Unlike the enumerated list of export restrictions, the Staples Act prohibited the importing of almost every article that was either not produced in England or not shipped there first. Thus, if a colonist wished to buy French silks, Dutch linens, or Indian tea, he had to buy these goods from an English importer. The Englishman in this example would have bought these goods from France, Holland, or India; neither he nor the colonial merchant was allowed to bring them directly to the colonies. Instead, all had to pay for the added expense and inconvenience of a far longer route to the colonies, which included the loading and unloading, storing, and taxing of all the non-English goods involved. The exceptions to the provisions of this bothersome Staples Act, like wine from the Madeira Islands, were few and relatively unimportant.
Restrictions on Manufacturing

According to mercantile theory, colonies were to supply their mother nation with raw materials and buy her manufactured goods. Therefore, colonies should not have been encouraged to develop their own industries. England, however, made few attempts to restrict colonial manufacturing. She merely prevented the colonists from shipping certain products from one colony to another. For example, colonists were not permitted to sell either woolen goods or beaver hats to other colonies. After 1750, a far more serious restriction was placed on the manufacture of such iron goods as rifles, axes, and pots.

Bounties

Not all aspects of mercantilism were bad for the colonies. Since England needed certain products to maintain her navy, she offered special payments to the producers of naval stores. Thus, bounties were paid for tar, pitch, resin, turpentine, hemp, lumber, and indigo. Between 1761 and 1776, these special bounties cost England £120,000.

The Effects of Mercantile Laws on the Colonies

England's mercantile laws certainly made life more difficult for the colonists. "A colonist cannot make a button, horseshoe, nor a hobnail," one outspoken Bostonian complained, "but some sooty iron-manufacturer or respectable button maker of Britain shall bawl…that he is most terribly cheated and robbed by the rascally Americans." Nevertheless, to examine just how seriously the mercantile laws hurt the colonies, we need to take a careful look at each of the three major groupings of colonies: New England, Middle, and Southern.

The New England Colonies Under Mercantilism

Because of the difficulty of earning a living from the rocky soil found in New England, the Puritans of Connecticut, Massachusetts, and the surrounding colonies lived by their wits. They learned to build ships that carried about one-third of all the trade between England and her colonies.
They founded a thriving fishing industry, and manufactured shoes, candles, coaches, and leather goods. These they sold in Europe with no interference from England. Their famous triangle trade with Africa and the West Indies was also carried out without British restrictions. Rum was manufactured from molasses in the Rhode Island distilleries and taken to Africa, where it was bartered for slaves. The slaves were then taken to the West Indies and exchanged for molasses, gold, and silver. These precious metals allowed the New England merchants to make up for their unfavorable trade balance with England. The molasses was made into rum and the brutal triangle trade was continued. Where their trade was hindered by British regulations, the resourceful New Englanders simply resorted to smuggling illegal cargoes under the noses of British officials. Because England more or less winked at this lawbreaking, Boston quickly became the Empire's largest single port outside of Great Britain itself.

The Middle Colonies Under Mercantilism

Mercantile restrictions hardly interfered with the economy of the Middle colonies, which included New York, New Jersey, Delaware, and Pennsylvania. Flour, cereals, and lumber, their main articles of trade with Europe, never appeared on the list of enumerated items. Only the Molasses Act of 1733, which placed a tax on that item, could have hurt the economy of the Middle colonies. But they, like their neighbors in the North, generally avoided paying this tax. However, a small but thriving iron industry in Pennsylvania was hurt by the Iron Act of 1750, which prohibited the export of iron ware.

The Southern Colonies Under Mercantilism

The Southern colonies were England's prize possessions. Unlike their brothers in New England, Southerners never developed industries that competed with Great Britain. Instead, they produced such staples as rice, tobacco, cotton, naval stores, and indigo for British consumption.
The planters in Maryland and Virginia in particular were hurt by the enumeration of tobacco. Of the 96,000 hogsheads of tobacco sent each year to England from these colonies, 82,000 were re-shipped to Europe. The yearly cost in extra duties and labor amounted to £185,000. Meanwhile, the South's dependence on British manufacturers put it at a severe trade disadvantage. To escape this dependency on British merchants, some planters like George Washington began planting wheat rather than tobacco. Another Virginian considered himself a "piece of property attached to certain banking houses in London." The planters blamed their constant debt to British money lenders on mercantile policies. Eighty years after the Revolution, however, they found themselves similarly in debt to Northern bankers, and this time blamed their condition on the government in Washington rather than the one in London.

Conclusion

A final evaluation of the effects of British mercantilism on her American colonies must take into consideration the benefits of living in the British Empire as well as the costs. The benefits included, first and foremost, the protection given colonial ships that sailed the world under the British flag and the protection of Britain's mighty army. Secondly, the benefits can be measured by the bounties paid for producing necessary products. Against these assurances, the colonists would need to weigh the added cost of all their imports resulting from the Staples Act, even though constant smuggling certainly reduced this price. Finally, the colonists would need to assess the restrictions on their exports, whether under the enumerated list or under the laws regulating the export of iron, beaver hats, and woolen goods. A careful analysis of all these factors could provide some tentative answers and also help in judging the role these economic issues played in causing the American Revolution.
Suggested Student Exercises:

1. Define or identify, and briefly show the importance to the chapter of, each of the following:
   a. mercantilism
   b. exports & imports
   c. favorable balance of trade
   d. balance of payments
   e. 4 kinds of mercantile laws
   f. Navigation Acts
   g. enumerated goods
   h. "British ships"
   i. 3 kinds of colonies
   j. triangular trade
   k. smuggling
   l. benefits of being British subjects

2. Using the following list, which gives the value of exports to and imports from England, complete the three exercises below:
   a. Give the balance of trade in 1710, 1740, and 1770.
   b. Give the value of exports in current dollars (assuming £1 = $200 for those years).
   c. Construct a bar graph, using the sample sheet on the last page of this chapter, showing trends in exports, imports, or trade balances between 1700 and 1774. The chart should be judged on accuracy, neatness, and information provided. (Note: a trend is a movement in a certain direction.)

To make a good bar graph:
1. Study the statistics to find a trend
2. Make a scale of measurement
3. Label the axes
4. Locate points
5. Complete the bars
6. Label and title the graph

Value of Exports to England and Imports from England

YEAR    EXPORTS      IMPORTS      BALANCE OF TRADE
1700    £395,000     £344,300     + £54,700
1710    £249,800     £293,700
1720    £468,200     £319,700
1730    £572,600     £536,900     + £35,700
1740    £718,400     £813,400
1750    £814,800     £1,313,100
1760    £761,100     £2,611,800   – £1,850,700
1770    £1,015,500   £1,925,600
1774    £1,373,846   £2,596

[Sample bar-graph sheet, page 18: space for a title, with a vertical axis labeled "Pounds" and a horizontal axis labeled "Years."]
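The arithmetic behind exercises 2a and 2b can be sketched in a few lines of Python. The export and import figures come from the table above, and the dollar conversion uses the exercise's assumed rate of £1 = $200; a negative balance means an unfavorable balance of trade.

```python
# Balance of trade = exports - imports (negative means unfavorable).
# Figures in pounds sterling, taken from the table above.
trade = {
    1710: (249_800, 293_700),
    1740: (718_400, 813_400),
    1770: (1_015_500, 1_925_600),
}

POUND_TO_DOLLAR = 200  # the exercise's assumed conversion rate

for year, (exports, imports) in sorted(trade.items()):
    balance = exports - imports
    label = "favorable" if balance > 0 else "unfavorable"
    print(f"{year}: balance £{balance:+,} ({label}), "
          f"exports ${exports * POUND_TO_DOLLAR:,}")
```

Running this shows, for example, that the 1710 balance is £249,800 − £293,700 = −£43,900, an unfavorable balance that grows sharply by 1770.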